The Deep Learning Revolution by Terrence Sejnowski

Author: Terrence Sejnowski
Language: eng
Format: epub
Tags: Machine learning; neural networks; artificial intelligence; human intelligence; neuroscience; reinforcement learning; learning; backpropagation; Boltzmann machines; AlphaGo; Google translate; Geoff Hinton; John Hopfield; big data; NIPS
Publisher: MIT Press
Published: 2018-09-19T16:00:00+00:00


Box 10.1

Temporal Difference Learning

In this model of the honeybee brain, actions (such as landing on a flower) are chosen to maximize the sum of all future discounted rewards:

$$R(t) = r_{t+1} + \gamma\, r_{t+2} + \gamma^2 r_{t+3} + \cdots,$$

where $r_{t+1}$ is the reward at time $t+1$ and $0 < \gamma < 1$ is a discount factor. The predicted future reward based on the current sensory inputs $s_t$ is computed by neuron P:

$$P_t(s) = w_Y s_Y + w_B s_B,$$

where the sensory inputs from yellow (Y) and blue (B) flowers are weighted by $w_Y$ and $w_B$. The reward prediction error $\delta_t$ at time $t$ is given by

$$\delta_t = r_t + \gamma\, P_t(s_t) - P_t(s_{t-1}),$$

where $r_t$ is the current reward. The change in each weight is given by:

$$\Delta w_t = \alpha\, \delta_t\, s_{t-1},$$

where $\alpha$ is the learning rate. If the current reward is greater than predicted, $\delta_t$ is positive and the weight on the sensory input that was present before the reward is increased; if the current reward is less than expected, $\delta_t$ is negative and the weight is decreased.
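To see the update rule in action, here is a minimal sketch in Python (not from the book); the flower encodings, reward value, discount factor, and learning rate are illustrative assumptions:

```python
# Minimal sketch of the temporal difference rule in Box 10.1.
# GAMMA, ALPHA, and the reward schedule below are illustrative
# assumptions, not values from the book.

GAMMA = 0.9   # discount factor, 0 < gamma < 1
ALPHA = 0.1   # learning rate

# One weight per sensory input: yellow (Y) and blue (B) flowers.
weights = {"Y": 0.0, "B": 0.0}

def predict(s):
    """Predicted future reward P(s) = w_Y * s_Y + w_B * s_B."""
    return sum(weights[c] * s[c] for c in weights)

def td_update(s_prev, s_curr, reward):
    """Compute delta_t = r_t + gamma * P(s_t) - P(s_{t-1}),
    then apply delta_w = alpha * delta_t * s_{t-1}."""
    delta = reward + GAMMA * predict(s_curr) - predict(s_prev)
    for c in weights:
        weights[c] += ALPHA * delta * s_prev[c]
    return delta

# Illustrative episode: the bee sees a yellow flower, lands,
# and receives a reward; the episode then ends (blank input).
yellow = {"Y": 1.0, "B": 0.0}
blank = {"Y": 0.0, "B": 0.0}

for trial in range(20):
    td_update(s_prev=yellow, s_curr=blank, reward=1.0)

print(weights)  # w_Y climbs toward the reward value; w_B stays at zero
```

Over repeated rewarded trials the prediction error shrinks as $w_Y$ approaches the reward value, so the weight on the input that preceded the reward stops changing once the prediction is accurate.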

Adapted from Montague, P. R., and Sejnowski, T. J., “The Predictive Brain: Temporal Coincidence and Temporal Order in Synaptic Learning Mechanisms,” figure 6A, Learning & Memory 1 (1994): 1–33.


