Fundamentals of Computational Neuroscience 2e, December 31, 2009 (PowerPoint presentation)



SLIDE 1

Fundamentals of Computational Neuroscience 2e

December 31, 2009

Chapter 9: Modular networks, motor control, and reinforcement learning

SLIDE 2

Mixture of experts

[Figure: mixture-of-experts architecture: the input feeds Expert 1 through Expert n and a gating network; an integration network combines the gated expert outputs (Σ and Π units) into the output.]

  • A. The absolute function f(x) = abs(x)
  • B. Mixture of experts for the absolute function
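The decomposition in panel B can be written out directly. The following hand-constructed sketch (the parameter values, the softmax gain beta, and the function name moe are my own illustrative choices, not from the text) uses two linear experts, e1(x) = x and e2(x) = -x, and a softmax gating network; their gated sum reproduces abs(x), a function no single linear unit can represent.

```python
# Hand-constructed mixture-of-experts sketch (illustrative values,
# not from the book): two linear experts combined by softmax gating
# reproduce f(x) = abs(x).
import numpy as np

w = np.array([1.0, -1.0])     # expert slopes: e_i(x) = w[i] * x
g = np.array([1.0, -1.0])     # gating weights
beta = 20.0                   # gating sharpness (softmax gain)

def moe(x):
    experts = w * x                   # expert outputs
    z = beta * g * x
    gate = np.exp(z - z.max())
    gate /= gate.sum()                # softmax gating network
    return gate @ experts             # integration network: gated sum

for x in (-0.7, -0.3, 0.0, 0.4, 0.9):
    print(f"x={x:+.1f}  moe(x)={moe(x):.4f}  abs(x)={abs(x):.4f}")
```

In the book's setting the experts and the gating network are trained jointly; the weights here are set by hand only to show how the gating splits the input space at x = 0, assigning one expert to each half.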

SLIDE 3

The ‘what-and-where’ task

[Figure: hidden-to-output connectivity maps (hidden node # versus output node #).]

  • A. Model retina with sample image
  • B. Without bias towards short connections
  • C. With bias towards short connections

Jacobs and Jordan (1992)

SLIDE 4

Coupled attractor networks

[Figure: two node groups (node group 1 and node group 2) with connections between the groups; the stored binary patterns form letters in the left-right universe.]

  • A. Coupled attractor networks
  • B. The left-right universe with letters
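The coupling idea can be simulated in a few lines. This sketch assumes standard Hebbian (Hopfield-style) weights; the module size, the number of patterns, and the value of the relative intermodular strength g are illustrative choices of mine, not taken from the text.

```python
# Two coupled attractor modules with Hebbian weights (illustrative
# sizes and coupling, not from the book): cueing module 1 with a
# degraded pattern pulls module 2 into the matching stored pattern.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                          # nodes per module
pats = rng.choice([-1, 1], size=(3, 2 * n))     # joint patterns over both modules

W = np.zeros((2 * n, 2 * n))
for p in pats:
    W += np.outer(p, p)                          # Hebbian learning
np.fill_diagonal(W, 0)
W /= 2 * n

g = 0.5                                          # relative intermodular strength
W[:n, n:] *= g                                   # scale connections between groups
W[n:, :n] *= g

# cue module 1 with a noisy version of pattern 0; module 2 starts random
noise = rng.choice([1, -1], size=n, p=[0.9, 0.1])
s = np.concatenate([pats[0, :n] * noise, rng.choice([-1, 1], size=n)])
for _ in range(20):
    s = np.sign(W @ s)                           # synchronous update
    s[s == 0] = 1

overlap1 = (s[:n] @ pats[0, :n]) / n
overlap2 = (s[n:] @ pats[0, n:]) / n
print(overlap1, overlap2)
```

Module 2 receives no cue of its own; it is driven into the correct half of the stored pattern purely through the scaled intermodular weights.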
SLIDE 5

Limit on modularity

[Figure: A. Load capacity α_c as a function of the relative intermodular strength g, for m = 1, 2, and 4 modules. B. Bounds on the intermodular strength as a function of the number of modules m.]

  • A. Load capacity
  • B. Bounds on intermodular strength

SLIDE 6

Sequence learning

[Figure: modules A and B with intramodular weights w_AA and w_BB and intermodular weights w_AB and w_BA, driven by an input pathway; the pattern overlaps in A and B are plotted over time in units of τ.]

  • A. Modular attractor model
  • B. Time evolution of overlaps

Lawrence, Trappenberg and Fine (2006); Sommer and Wennekers (2005)
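The role of asymmetric intermodular weights such as w_AB can be illustrated with a heteroassociative sketch (my own minimal construction, not the model from the papers cited above): module A's current pattern drives module B into the next pattern of a stored sequence.

```python
# Heteroassociative sketch of asymmetric intermodular weights
# (illustrative construction, not from the book): Hebbian weights
# that map each pattern to its successor step the network through
# a stored sequence.
import numpy as np

rng = np.random.default_rng(2)
n = 100
seq = rng.choice([-1, 1], size=(4, n))   # pattern sequence P0 -> P1 -> P2 -> P3

# Hebbian heteroassociation: pattern t in module A evokes pattern t+1 in B
W_AB = sum(np.outer(seq[t + 1], seq[t]) for t in range(3)) / n

state = seq[0]
for t in range(3):
    state = np.sign(W_AB @ state)        # one intermodular step
    overlap = (state @ seq[t + 1]) / n
    print(f"step {t + 1}: overlap with P{t + 1} = {overlap:.2f}")
```

Each application of W_AB advances the state by one pattern; in the modular attractor model such asymmetric pathways act more slowly than the symmetric intramodular weights, which is what produces the staggered overlap traces in panel B.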

SLIDE 7

Working memory

[Figure: interacting memory systems: PFC (prefrontal cortex), PMC (posterior and motor cortex), and HCMP (hippocampus and related structures).]

O'Reilly, Braver, and Cohen (1999)

SLIDE 8

Limit on working memory

[Figure: network activity (node number versus time) while holding A. one object, B. two objects, and C. four objects in working memory.]

SLIDE 9

Motor learning and control

[Figure: basic motor control loop: a motor command generator receives the desired state and sends a motor command to the controlled object; the sensory system reports the actual state through afferent and re-afferent signals; a disturbance acts on the controlled object.]

SLIDE 10

Forward model controller

[Figure: forward model controller: the basic loop of the previous slide, extended with a forward dynamic model and a forward output model that receive a copy of the motor command; the predicted sensory signal is compared (+/-) with the re-afferent signal.]
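Why a forward model helps can be seen in a toy example (my own construction; the plant, gain, and delay values are arbitrary): when sensory feedback arrives with a delay, a controller driven by the raw feedback oscillates, while one driven by the forward model's up-to-date state prediction converges.

```python
# Toy forward-model control demo (illustrative, not from the book):
# a 1-D integrator plant with delayed sensory feedback. Controlling
# on the delayed reading is unstable; controlling on the forward
# model's prediction is stable.
from collections import deque

target, k, delay, steps = 1.0, 0.5, 5, 60

def run(use_forward_model):
    x, x_hat = 0.0, 0.0
    sensed = deque([0.0] * delay, maxlen=delay)   # sensory delay line
    errs = []
    for _ in range(steps):
        estimate = x_hat if use_forward_model else sensed[0]
        u = k * (target - estimate)               # motor command
        x = x + u                                 # plant dynamics
        x_hat = x_hat + u                         # forward dynamic model
        sensed.append(x)                          # reading arrives 'delay' steps late
        errs.append(abs(target - x))
    return errs

fm = run(True)
fb = run(False)
print("forward model, final error:", fm[-1])
print("delayed feedback, max error over last 10 steps:", max(fb[-10:]))
```

With a perfect model and no disturbance the prediction tracks the true state exactly, so the forward-model controller behaves as if feedback were instantaneous; the raw-feedback controller keeps correcting against stale readings and overshoots.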

SLIDE 11

Inverse model controller

[Figure: inverse model controller: an inverse model maps the desired state directly onto the motor command; afferent and re-afferent feedback and the disturbance enter as in the basic loop.]

SLIDE 12

Cerebellum

[Figure: cerebellar circuitry: mossy fibres from the spinal cord, external cuneate nucleus, reticular nuclei, and pontine nuclei contact granule cells in the granular layer; their parallel fibres cross the molecular layer and synapse on Purkinje neurons, as do climbing fibres from the inferior olive; stellate, basket, and Golgi cells provide inhibition; output leaves through the intracerebellar and vestibular nuclei. Excitatory and inhibitory synapses are marked; layers shown: molecular, Purkinje, granular.]

SLIDE 13

Reinforcement learning

SLIDE 14

Basal Ganglia

  • A. Outline of basic BG anatomy: cerebral cortex, caudate nucleus, putamen, globus pallidus, subthalamic nucleus, thalamus, superior colliculus, and substantia nigra (pars reticulata and pars compacta).
  • C. Recordings of SNc neurons and simulations: predicted reward r-hat over episodes (patterns 2-4; conditions: stimulus A with reward, stimulus A with no reward, stimulus B).

SLIDE 15

Temporal difference learning

  • A. Linear predictor node: inputs r_j^in(t) are weighted to produce the prediction V(t).
  • B. Temporal delta rule: the weights are updated with the delayed inputs r_j^in(t-1) and the prediction error δ(t) = r(t) - V(t-1), implemented with slow and fast input pathways.
  • C. Temporal difference rule: the error also includes the discounted next prediction, δ(t) = r(t) + γV(t) - V(t-1).
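The temporal difference rule can be sketched for a simple episode in which a reward arrives only at the final step; the episode length, learning rate, and discount factor below are illustrative choices of mine.

```python
# TD(0) sketch (standard formulation, not code from the book):
# learn state values V for a 5-step episode with reward 1 at the
# end, using delta = r + gamma * V(next) - V(current).
gamma = 0.9
alpha = 0.1
T = 5
V = [0.0] * (T + 1)          # V[s]: predicted future reward from state s

for _ in range(2000):
    for s in range(T):
        r = 1.0 if s + 1 == T else 0.0          # reward on reaching the end
        delta = r + gamma * V[s + 1] - V[s]     # TD error
        V[s] += alpha * delta

# V[s] approaches gamma**(T-1-s): the discounted terminal reward
print([round(v, 3) for v in V])
```

The prediction of the terminal reward propagates backwards through the episode, one state per factor of γ, which is exactly how the TD error comes to anticipate reward-predicting stimuli.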

SLIDE 16

Actor-critic and Q-learning

  • B. Actor-critic model of BG: the frontal cerebral cortex projects to the basal ganglia, where the striosomal module acts as the critic and the matrix module as the actor; dopamine (DA) carries the primary reinforcement signal, and the loop closes through the thalamus (TH).
  • D. Q-learning model of BG: the cerebral cortex codes states and actions, the striatum computes the reward prediction, the pallidum performs action selection, and the loop closes through the thalamus and SNc; primary reinforcement enters via the SNc.
SLIDE 17

Actor-critic controller

[Figure: actor-critic controller: the motor command generator (actor) sends motor commands to the controlled object; the sensory system returns afferent and re-afferent signals; a critic evaluates the outcome and sends a reinforcement signal to the actor; a disturbance acts on the controlled object.]

SLIDE 18

Further Readings

  • Robert A. Jacobs, Michael I. Jordan, and Andrew G. Barto (1991), Task decomposition through competition in a modular connectionist architecture: the what and where vision tasks, Cognitive Science 15: 219–250.
  • Geoffrey Hinton (1999), Products of experts, in Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN '99), 1: 1–6.
  • Yaneer Bar-Yam (1997), Dynamics of Complex Systems, Addison-Wesley.
  • Edmund T. Rolls and Simon M. Stringer (1999), A model of the interaction between mood and memory, Network: Computation in Neural Systems 12: 89–109.
  • N. J. Nilsson (1965), Learning Machines: Foundations of Trainable Pattern-Classifying Systems, McGraw-Hill.
  • O. G. Selfridge (1958), Pandemonium: a paradigm for learning, in Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory, November 1958, 511–527, London: HMSO.
  • Marvin Minsky (1986), The Society of Mind, Simon & Schuster.
  • Akira Miyake and Priti Shah (eds.) (1999), Models of Working Memory, Cambridge University Press.
  • Daniel M. Wolpert, R. Chris Miall, and Mitsuo Kawato (1998), Internal models in the cerebellum, Trends in Cognitive Sciences 2: 338–347.
  • Edmund T. Rolls and Alessandro Treves (1998), Neural Networks and Brain Function, Oxford University Press.
  • James C. Houk, Joel L. Davis, and David G. Beiser (eds.) (1995), Models of Information Processing in the Basal Ganglia, MIT Press.
  • Richard S. Sutton and Andrew G. Barto (1998), Reinforcement Learning: An Introduction, MIT Press.
  • Peter Dayan and Laurence F. Abbott (2001), Theoretical Neuroscience, MIT Press.