SLIDE 1

Fundamentals of Computational Neuroscience 2e

January 1, 2010

Chapter 10: The cognitive brain

SLIDE 2

Hierarchical maps and attentive vision

[Figure A: the ventral visual pathway, from the retina through the thalamus (LGN) and occipital cortex (V1, V2, V4) to posterior and inferior temporal cortex.]

  • A. Ventral visual pathway
  • B. Layered cortical maps

[Figure B: layered cortical maps; receptive field size (deg) versus eccentricity (deg), with receptive fields growing from about 1.3 to 50 degrees across layers 1-4.]

SLIDE 3

Attention in visual search and object recognition

Visual search
  • Given: particular features (target object)
  • Function: scanning (the attentional window scans the entire scene)
  • Pathway: WHERE

Object recognition
  • Given: particular spatial location (target position)
  • Function: binding (the attentional window binds features for identification)
  • Pathway: WHAT

Gustavo Deco

SLIDE 4

Model

[Figure: model architecture. A "what" pathway runs from the LGN through V1 (Gabor jets at the attentionally preferred locus of the visual field) and V4 (feature extraction) to IT (object recognition); a "where" pathway runs to PP (spatial location). Each area has its own inhibitory pool; top-down biases are object specific (to IT) and location specific (to PP).]
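The V1 stage extracts local features with Gabor jets, i.e. vectors of oriented Gabor filter responses at one image location. Below is a minimal numpy sketch of such a jet; the filter size, wavelength, and number of orientations are illustrative assumptions, not the parameters of the Deco model.

```python
import numpy as np

def gabor_kernel(size=11, wavelength=4.0, theta=0.0, sigma=2.5):
    """Real part of a Gabor filter: a Gaussian-windowed grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_jet(image, location, size=11, n_orientations=4):
    """Responses of a bank of oriented Gabor filters at one location."""
    r, c = location
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    thetas = np.pi * np.arange(n_orientations) / n_orientations
    return np.array([np.sum(patch * gabor_kernel(size=size, theta=t))
                     for t in thetas])

# Example: a jet of 4 orientation responses at the centre of a random image
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(gabor_jet(img, (32, 32)))
```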

SLIDE 5

Example results

[Figure: posterior parietal (PP) activity as a function of time for displays with 1-3 items; panel A shows a target E among Xs, panel B a target E among Fs.]

  • A. `Parallel search'
  • B. `Serial search'
SLIDE 6

The interconnecting workspace hypothesis

[Figure: the global workspace interconnects the evaluative system (VALUE), long-term memory (PAST), the attentional system (FOCUSING), the perceptual system (PRESENT), and the motor system (FUTURE).]

Stanislas Dehaene, Michel Kerszberg, and Jean-Pierre Changeux, PNAS 1998

SLIDE 7

Stroop task modelling

[Figure: workspace model of the Stroop task. The stimulus is the word "grey" printed in black ink. WORD and COLOUR inputs feed specialized processors that drive the naming response ("black"); workspace neurons, under a vigilance signal and a reward (error) signal, apply attentional amplification to the colour processor and attentional suppression to the word processor.]
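In this architecture the workspace resolves Stroop conflict by boosting the task-relevant colour pathway while damping the prepotent word pathway. A toy sketch of that gain modulation follows; the gain values and response coding are invented for illustration and are not the Dehaene et al. implementation.

```python
import numpy as np

def stroop_response(word_in, colour_in, g_word=0.3, g_colour=1.5):
    """Combine pathway outputs under workspace gain control.

    g_word < 1 models attentional suppression of the word pathway;
    g_colour > 1 models attentional amplification of the colour pathway.
    """
    drive = g_word * word_in + g_colour * colour_in
    return np.argmax(drive)  # index of the winning naming response

# Incongruent trial: the word says "grey" (response 1) but the ink is
# black (response 0); with gain control the colour pathway wins.
word_in = np.array([0.0, 1.0])    # word pathway votes for "grey"
colour_in = np.array([1.0, 0.0])  # colour pathway votes for "black"
print(stroop_response(word_in, colour_in))  # -> 0 ("black")
```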

SLIDE 8

The anticipating brain

  • 1. The brain can develop a model of the world, which can be used to anticipate or predict the environment.
  • 2. The inverse of the model can be used to recognize causes by evoking internal concepts.
  • 3. Hierarchical representations are essential to capture the richness of the world.
  • 4. Internal concepts are learned through matching the brain's hypotheses with input from the world.
  • 5. An agent can learn actively by testing hypotheses through actions.
  • 6. The temporal domain is an important degree of freedom.
SLIDE 9

Outline

[Figure: the agent-environment loop. External states of the environment couple to the agent's internal states through PNS sensation and PNS action, mediated by CNS sensation and CNS action. The arrows carry conditional probabilities, such as $p(s'|s,a)$ for environmental dynamics, $p(c'|c,s)$ for internal state transitions, and $p(a|c)$ for action selection.]
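A minimal simulation of such a perception-action loop with discrete states is sketched below; all transition tables are invented for illustration, with only the factorization following the figure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_c, n_a = 3, 3, 2  # external states, internal states, actions

# Invented, normalized conditional probability tables for illustration
p_s = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # p(s'|s,a)
p_c = rng.dirichlet(np.ones(n_c), size=(n_c, n_s))  # p(c'|c,s)
p_a = rng.dirichlet(np.ones(n_a), size=n_c)         # p(a|c)

s, c = 0, 0
for t in range(5):
    a = rng.choice(n_a, p=p_a[c])     # action chosen from internal state
    s = rng.choice(n_s, p=p_s[s, a])  # environment transition
    c = rng.choice(n_c, p=p_c[c, s])  # internal state update via sensation
    print(f"t={t}  s={s}  c={c}  a={a}")
```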

SLIDE 10

Recurrent networks with hidden nodes

The Boltzmann machine:

[Figure: a Boltzmann machine with visible and hidden nodes.]

Energy:

$$H_{nm} = -\frac{1}{2} \sum_{ij} w_{ij} s_i^n s_j^m$$

Probabilistic update:

$$p(s_i^n = +1) = \frac{1}{1 + \exp(-\beta \sum_j w_{ij} s_j^n)}$$

Boltzmann-Gibbs distribution:

$$p(s^v; w) = \frac{1}{Z} \sum_{m \in h} \exp(-\beta H_{vm})$$
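A minimal numpy sketch of the probabilistic update, sweeping repeatedly over ±1 nodes with random symmetric weights; the network size and temperature are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
w = rng.normal(0, 0.5, (n, n))
w = (w + w.T) / 2          # symmetric weights
np.fill_diagonal(w, 0)     # no self-connections
beta = 1.0                 # inverse temperature

s = rng.choice([-1, 1], size=n)  # initial +/-1 node states

def gibbs_sweep(s, w, beta):
    """One sweep of the update p(s_i = +1) = 1/(1 + exp(-beta * h_i))."""
    for i in range(len(s)):
        h = w[i] @ s                          # net input from other nodes
        p_on = 1.0 / (1.0 + np.exp(-beta * h))
        s[i] = 1 if rng.random() < p_on else -1
    return s

for _ in range(100):       # let the network relax toward equilibrium
    s = gibbs_sweep(s, w, beta)
print(s)
```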
SLIDE 11

Training the Boltzmann machine

Kullback-Leibler divergence:

$$\mathrm{KL}(p(s^v), p(s^v; w)) = \sum_{s^v} p(s^v) \log \frac{p(s^v)}{p(s^v; w)} = \sum_{s^v} p(s^v) \log p(s^v) - \sum_{s^v} p(s^v) \log p(s^v; w)$$

Minimizing KL is equivalent to maximizing the average log-likelihood function

$$l(w) = \sum_{s^v} p(s^v) \log p(s^v; w) = \langle \log p(s^v; w) \rangle.$$

Gradient descent then yields Boltzmann learning:

$$\Delta w_{ij} = \eta \frac{\partial l}{\partial w_{ij}} = \frac{\eta \beta}{2} \left( \langle s_i s_j \rangle_{\text{clamped}} - \langle s_i s_j \rangle_{\text{free}} \right).$$
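A sketch of one Boltzmann learning step, assuming the clamped and free states have already been sampled, for example with the Gibbs sweep from the previous slide:

```python
import numpy as np

def boltzmann_update(w, s_clamped, s_free, eta=0.01, beta=1.0):
    """One Boltzmann learning step from sampled +/-1 states.

    s_clamped: array (T, n) of states with visible nodes clamped to data
    s_free:    array (T, n) of states from the free-running network
    """
    corr_clamped = s_clamped.T @ s_clamped / len(s_clamped)  # <s_i s_j>_clamped
    corr_free = s_free.T @ s_free / len(s_free)              # <s_i s_j>_free
    dw = eta * beta / 2 * (corr_clamped - corr_free)
    np.fill_diagonal(dw, 0)  # keep zero self-connections
    return w + dw

# Usage: collect Gibbs samples with and without the data clamped, then
# w = boltzmann_update(w, samples_clamped, samples_free)
```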
SLIDE 12

The restricted Boltzmann machine

[Figure: a restricted Boltzmann machine; connections run only between visible and hidden nodes.]

Contrastive Hebbian learning: alternating Gibbs sampling between the visible and hidden layers, $t = 1, 2, 3, \ldots, \infty$.
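In practice the alternating Gibbs chain is usually truncated after a single step (contrastive divergence, CD-1). A minimal sketch follows; biases are omitted for brevity, and the layer sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Draw binary states from unit probabilities."""
    return (rng.random(p.shape) < p).astype(float)

def cd1_update(w, v0, eta=0.1):
    """One CD-1 step: alternating Gibbs sampling truncated at t=1.

    w:  (n_visible, n_hidden) weight matrix; v0: (batch, n_visible) data.
    """
    ph0 = sigmoid(v0 @ w)       # t=0: hidden probabilities given data
    h0 = sample(ph0)
    pv1 = sigmoid(h0 @ w.T)     # t=1: reconstruct the visible layer
    v1 = sample(pv1)
    ph1 = sigmoid(v1 @ w)       # hidden probabilities given reconstruction
    # <v h>_data - <v h>_reconstruction
    dw = (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    return w + eta * dw

# Example: a tiny RBM trained on random binary data
v = (rng.random((20, 6)) < 0.5).astype(float)
w = rng.normal(0, 0.1, (6, 4))
for _ in range(100):
    w = cd1_update(w, v)
```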

SLIDE 13

Deep generative models

[Figure: a deep generative model. Image input enters a model retina and passes through stacked RBM layers; RBM/Helmholtz layers at the top receive concept input and support recognition readout and top-down stimulation.]

SLIDE 14

Adaptive Resonance Theory (ART)

[Figure: ART architecture. The attentional subsystem comprises input layer F1 and category layer F2, connected by bottom-up weights w^b and top-down weights w^t; gain control g modulates F1; the orienting subsystem compares the input against the top-down expectation using the vigilance parameter ρ and resets F2 on a mismatch.]
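A compact ART1-style sketch for binary patterns, showing the interplay of bottom-up category choice, the vigilance test against ρ, reset, and template refinement; the choice function and the example data are illustrative assumptions.

```python
import numpy as np

def art1(inputs, rho=0.7, max_categories=10):
    """Minimal ART1-style clustering of binary (0/1) patterns.

    rho is the vigilance parameter: higher rho yields finer categories.
    The templates play the role of the top-down weights w^t.
    """
    templates, labels = [], []
    for x in inputs:
        chosen = None
        if templates:
            # Bottom-up choice: rank categories by match strength
            scores = [(t & x).sum() / (0.5 + t.sum()) for t in templates]
            order = np.argsort(scores)[::-1]
        else:
            order = []
        for j in order:
            # Orienting subsystem: vigilance test against rho
            if (templates[j] & x).sum() / x.sum() >= rho:
                templates[j] = templates[j] & x  # resonance: refine template
                chosen = j
                break                            # a mismatch means reset
        if chosen is None and len(templates) < max_categories:
            templates.append(x.copy())           # recruit a new category
            chosen = len(templates) - 1
        labels.append(chosen)
    return labels, templates

patterns = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
print(art1(patterns, rho=0.6)[0])   # -> [0, 0, 1]
```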

SLIDE 15

Further Readings

  • Edmund T. Rolls and Gustavo Deco (2001), Computational Neuroscience of Vision, Oxford University Press.
  • Karl Friston (2005), A theory of cortical responses, Philosophical Transactions of the Royal Society B 360: 815–836.
  • Jeff Hawkins with Sandra Blakeslee (2004), On Intelligence, Henry Holt and Company.
  • Robert Rosen (1985), Anticipatory Systems: Philosophical, Mathematical and Methodological Foundations, Pergamon Press.
  • Geoffrey E. Hinton (2007), Learning multiple layers of representation, Trends in Cognitive Sciences 11: 428–434.
  • Stephen Grossberg (1976), Adaptive pattern classification and universal recoding: Feedback, expectation, olfaction, and illusions, Biological Cybernetics 23: 187–202.
  • Gail Carpenter and Stephen Grossberg (1987), A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics and Image Processing 37: 54–115.
  • Daniel S. Levine (2000), Introduction to Neural and Cognitive Modeling, Lawrence Erlbaum, 2nd edition.
  • James A. Freeman (1994), Simulating Neural Networks with Mathematica, Addison-Wesley.