Coding and computing with balanced spiking networks


SLIDE 1

Coding and computing with balanced spiking networks

Sophie Deneve, École Normale Supérieure, Paris

SLIDE 2

Poisson Variability in Cortex

[Figure: spike rasters for Trials 1-4; scatter plot of spike-count variance vs. mean spike count]
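Poisson variability means the spike-count variance grows with, and roughly equals, the mean (Fano factor near 1). A quick numerical illustration of just that statistic; all rates and window sizes below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean rates (Hz), a 100 ms counting window, many trials.
rates = np.linspace(5, 80, 20)
window, n_trials = 0.1, 1000

# Poisson spike counts: one row per rate, one column per trial.
counts = rng.poisson(rates[:, None] * window, size=(rates.size, n_trials))

mean = counts.mean(axis=1)
var = counts.var(axis=1)
print("Fano factors:", np.round(var / mean, 2))  # all close to 1
```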

SLIDE 3

Cortical spike trains are highly variable

From Churchland et al., Nature Neuroscience, 2010

SLIDE 4

Cortical spike trains are variable

From Churchland et al., Nature Neuroscience, 2010

SLIDE 5

Balance between excitation and inhibition

Excitatory (Poisson) and inhibitory (Poisson) inputs drive an integrate-and-fire neuron:

$\dot V = -V + I_{\text{exc}} - I_{\text{inh}}$

SLIDE 6

Balance between excitation and inhibition

Excitatory (Poisson) and inhibitory (Poisson) inputs drive an integrate-and-fire neuron: $\dot V = -V + I_{\text{exc}} - I_{\text{inh}}$

Output is more regular than input.

SLIDE 7

Balance between excitation and inhibition

Excitatory (Poisson) and inhibitory (Poisson) inputs drive an integrate-and-fire neuron: $\dot V = -V + I_{\text{exc}} - I_{\text{inh}}$

Output is more regular than input. How does Poisson-like variability survive?

SLIDE 8

Balanced excitation/inhibition

When excitation and inhibition balance, the net drive vanishes and the membrane potential performs a random walk up to threshold (Shadlen and Newsome, 1996):

$\dot V = -V + I_{\text{exc}} - I_{\text{inh}}$

SLIDE 9

Balanced excitation/inhibition

The membrane potential performs a random walk up to threshold (Shadlen and Newsome, 1996): $\dot V = -V + I_{\text{exc}} - I_{\text{inh}}$

Variability is conserved when mean excitation = mean inhibition.
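This can be checked with a toy simulation. The sketch below uses illustrative parameters in the spirit of the Shadlen and Newsome random-walk picture cited above (a perfect integrator with a reflecting barrier at rest), and compares an excitation-driven neuron with a balanced one:

```python
import numpy as np

def output_cv(exc_rate, inh_rate, seed=0):
    """CV of output inter-spike intervals for a random-walk model neuron
    (perfect integrator, reflecting barrier at rest). Parameters illustrative."""
    rng = np.random.default_rng(seed)
    dt, T, theta, w = 1e-3, 100.0, 1.0, 0.05   # threshold = 20 net input steps
    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        ne = rng.poisson(exc_rate * dt)         # Poisson excitatory input
        ni = rng.poisson(inh_rate * dt)         # Poisson inhibitory input
        v = max(0.0, v + w * (ne - ni))         # reflecting barrier at rest
        if v >= theta:                          # threshold: spike and reset
            spikes.append(i * dt)
            v = 0.0
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

# Excitation alone: steady drift to threshold, regular output (CV well below 1).
print("CV, excitation only:", round(output_cv(2000, 0), 2))
# Balanced means: threshold reached by fluctuations, irregular output (CV near 1).
print("CV, balanced E/I:   ", round(output_cv(2000, 2000), 2))
```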

SLIDE 10

E/I balance: stimulus-driven response

Wehr and Zador, 2003

SLIDE 11

E/I balance: spontaneous activity

Okun and Lampl, 2008

SLIDE 12

Two types of balanced E/I

Feed-forward inhibition (E → I → E) and recurrent inhibition (E ↔ I).

Not random: highly structured…

SLIDE 13

Balanced neural networks generate their own variability

Weak, sparse random connections between excitatory and inhibitory populations ($J_{EE}$, $J_{EI}$, $J_{IE}$, $J_{II}$), plus an external input $I_{\text{ext}}$. In the balanced state the mean recurrent inputs cancel the external drive:

$J_{EE} v_E - J_{EI} v_I + I_{\text{ext}} \approx 0, \qquad J_{IE} v_E - J_{II} v_I + I_{\text{ext}} \approx 0$

e.g., C. van Vreeswijk and H. Sompolinsky, Science (1996)

SLIDE 14

Brunel, 2001

SLIDE 15

Balanced neural networks generate their own variability

Same sparsely connected E/I network with external input $I_{\text{ext}}$: it settles into an asynchronous irregular regime with low firing rates.

e.g., C. van Vreeswijk and H. Sompolinsky, Science (1996)

SLIDE 16

Balanced neural networks generate their own variability

Asynchronous irregular regime, low firing rate. Perturbation test: shuffle one spike by 0.1 ms…

SLIDE 17

Balanced neural networks generate their own variability

Shuffle one spike by 0.1 ms: all later spikes are reshuffled.

SLIDE 18

Balanced neural networks generate their own variability

Chaotic attractor: shuffling one spike by 0.1 ms reshuffles all later spikes.

SLIDE 19

Population coding

  • Asynchronous, irregular spike trains.
  • Population coding.
  • E/I balance.

Decoding = summing from large populations. Code = mean firing rates. Spikes = random samples. "Requiem for a spike."

SLIDE 20

Continuous variable: population coding

Georgopoulos, 1982

SLIDE 21

Population codes

[Figure: tuning curves (activity vs. direction, deg) and the average pattern of activity (activity vs. preferred direction, deg) for a stimulus]

Mean activity: $s_i = f_i(x)$

SLIDE 22

Noisy population codes

Single-trial pattern of activity $s$ (activity vs. preferred direction): which $x$?

Poisson noise: $p(s_i \mid x) = \dfrac{e^{-f_i(x)}\, f_i(x)^{s_i}}{s_i!}$

Independent neurons: $p(s \mid x) = \prod_i p(s_i \mid x)$

SLIDE 23

Population decoding

Encoding: $s_j = g_j(y) + \text{noise}$

Decoding: $\hat y = \arg\min_{\hat y} \left\langle (\hat y - y)^2 \right\rangle$

SLIDE 24

Population vector

$\hat x = \sum_i r_i x_i$ (each neuron votes for its preferred direction $x_i$)

Optimal when tuning curves are cosine, the noise is Gaussian, and preferred directions are uniformly distributed over all orientations…

Decoding is easy… but usually suboptimal.
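A minimal sketch of population-vector decoding (cosine tuning, Poisson spike counts; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
pref = np.linspace(0, 2 * np.pi, n, endpoint=False)   # preferred directions
theta = np.deg2rad(75.0)                              # true direction

# Rectified cosine tuning plus baseline; Poisson counts as the noise.
rates = np.maximum(0.0, 20 * np.cos(theta - pref)) + 5
counts = rng.poisson(rates)

# Population vector: preferred-direction unit vectors weighted by counts.
pv = counts @ np.column_stack([np.cos(pref), np.sin(pref)])
theta_hat = np.arctan2(pv[1], pv[0])
print(f"true = {np.rad2deg(theta):.1f} deg, estimate = {np.rad2deg(theta_hat):.1f} deg")
```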

SLIDE 25

Optimal: maximum likelihood

Pattern of activity $s$ (activity vs. preferred direction): which $x$?

Likelihood: $p(s \mid x)$. Maximum likelihood estimate: $\hat x = \arg\max_x p(s \mid x)$

Decoding is always optimal… but usually hard.
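For Poisson noise the ML estimate can be found by scanning the log-likelihood over candidate stimuli. A minimal sketch, using the same illustrative tuning curves as the population-vector example above:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 64
pref = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = lambda x: np.maximum(0.0, 20 * np.cos(x - pref)) + 5   # tuning curves
theta = np.deg2rad(75.0)
counts = rng.poisson(f(theta))

# Poisson log-likelihood on a grid of candidate directions
# (the s_i! term does not depend on x, so it can be dropped).
grid = np.linspace(0, 2 * np.pi, 3600)
loglik = [counts @ np.log(f(x)) - f(x).sum() for x in grid]
x_ml = grid[np.argmax(loglik)]
print(f"ML estimate: {np.rad2deg(x_ml):.1f} deg")
```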

SLIDE 26

Optimal population decoding

Decoding = summing from large neural populations: $\hat x = \sum_j \Gamma_j r_j$

SLIDE 27

Efficient population coding

Decoding = summing from large neural populations: $\hat x = \sum_j \Gamma_j r_j$

Efficient coding: $r = \arg\min_r \left[ (x - \hat x)^2 + C(r) \right]$

In collaboration with Christian Machens, Wieland Brendel, Ralph Bourdoukan, and Pietro Vertechi.

SLIDE 28

Single neuron: decoding = post-synaptic integration

[Figure: input signal x(t) over time]

SLIDE 29

Single neuron: decoding = post-synaptic integration

Input signal $x(t)$; estimate $\hat x(t) = \Gamma\, r(t)$, where $r(t) = o(t) * \exp(-\lambda t)$ is the spike train $o(t)$ filtered by an exponential (post-synaptic) kernel.

SLIDE 30

Single neuron: decoding = post-synaptic integration

$\hat x(t) = \Gamma\, r(t)$. Where do we place the spikes?

SLIDE 31

Single neuron

Minimize the decoding error: $E = \sum_t \left( x(t) - \hat x(t) \right)^2$

SLIDE 32

Single neuron

Minimize $E = \sum_t \left( x(t) - \hat x(t) \right)^2$ greedily: spike whenever $E_{\text{spike}} < E_{\text{no spike}}$.

SLIDE 33

Single neuron

Greedy spike rule: spike whenever $\left( x(t) - \hat x(t) - \Gamma \right)^2 < \left( x(t) - \hat x(t) \right)^2$.

SLIDE 34

Single neuron

Expanding the greedy condition: spike whenever $\Gamma \left( x(t) - \hat x(t) \right) > \Gamma^2 / 2$.

SLIDE 35

Single neuron

Greedy spike rule: spike whenever $\Gamma \left( x(t) - \hat x(t) \right) > \Gamma^2 / 2$.

SLIDE 36

Single neuron

Greedy spike rule: spike whenever $V(t) \equiv \Gamma \left( x(t) - \hat x(t) \right) > \Gamma^2 / 2$.

The membrane potential $V$ is the projected decoding error, and $\Gamma^2/2$ is the threshold.

SLIDE 37

Single neuron

Greedily minimizing $E = (x - \hat x)^2$: the membrane potential $V = \Gamma (x - \hat x)$ tracks the decoding error, and the neuron spikes whenever the estimate $\hat x$ falls too far below the signal $x$.

SLIDE 38

Single neuron

Differentiating $V = \Gamma (x - \hat x)$ gives a leaky integrate-and-fire neuron:

$\dot V = -\lambda V + \Gamma \left( \dot x + \lambda x \right) - \Gamma^2 o(t)$

Leak, input, and reset, with threshold $\Gamma^2/2$.
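A minimal sketch of this single-neuron coder (the signal, $\lambda$, and $\Gamma$ are illustrative). It simulates the decoder and greedy rule directly, which is equivalent to the leaky integrate-and-fire form above: $\hat x$ leaks at rate $\lambda$ and jumps by $\Gamma$ at each spike, and the neuron spikes whenever $V = \Gamma(x - \hat x)$ exceeds $\Gamma^2/2$:

```python
import numpy as np

dt, T, lam, gamma = 1e-3, 2.0, 10.0, 0.1     # illustrative parameters
t = np.arange(0, T, dt)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t)  # illustrative input signal

xhat = np.zeros_like(x)
n_spikes = 0
for i in range(1, t.size):
    xhat[i] = xhat[i - 1] * (1 - lam * dt)   # decoder leaks between spikes
    V = gamma * (x[i] - xhat[i])             # membrane potential = proj. error
    if V > gamma**2 / 2:                     # greedy rule: a spike lowers E
        xhat[i] += gamma                     # each spike adds the kernel
        n_spikes += 1
rms = np.sqrt(np.mean((x - xhat) ** 2))
print(f"{n_spikes} spikes, rms error = {rms:.3f}")  # error stays ~ gamma/2
```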

SLIDE 39

Neural population

Signals $x = (x_1, x_2, \ldots, x_J)$; estimate $\hat x_i = \sum_j \Gamma_{ij} r_j$.

Minimize: $E = \sum_t \left[ \left\| x(t) - \hat x(t) \right\|^2 + \nu \left\| r(t) \right\|^2 \right]$ (decoding error + quadratic cost)

SLIDE 40

Neural population

Same objective, $E = \sum_t \left[ \|x(t) - \hat x(t)\|^2 + \nu \|r(t)\|^2 \right]$, minimized greedily: neuron $j$ spikes whenever $E_{\text{spike } j} < E_{\text{no spike } j}$.

SLIDE 41

Neural population

The greedy condition $E_{\text{spike } j} < E_{\text{no spike } j}$ becomes

$V_j \equiv \Gamma_j^T \left( x - \hat x \right) - \nu r_j \;>\; \frac{\|\Gamma_j\|^2 + \nu}{2} \equiv T_j$

SLIDE 42

Neural population

$V_j = \Gamma_j^T (x - \hat x) - \nu r_j > T_j$: the membrane potential is the projected decoding error (minus the cost term), and $T_j = (\|\Gamma_j\|^2 + \nu)/2$ is the threshold.

SLIDE 43

Neural population

Differentiating $V_j = \Gamma_j^T (x - \hat x)$ yields a recurrent network of leaky integrate-and-fire neurons:

$\dot V_j = -\lambda V_j + \sum_i \Gamma_{ij} \left( \dot x_i + \lambda x_i \right) - \sum_k \Big( \sum_i \Gamma_{ij} \Gamma_{ik} \Big)\, o_k(t)$

Feed-forward input $\Gamma^T (\dot x + \lambda x)$; the $-\Gamma^T \Gamma$ term implements the reset and the recurrent connections.

SLIDE 44

Neural population

Input signal $x(t)$ enters through feed-forward weights $\Gamma^T$; neurons interact through fast recurrent weights $-\Gamma^T \Gamma$; the readout is $\hat x = \Gamma r$.
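A minimal sketch of the population version (random decoding weights; all parameters illustrative). The feed-forward input is $\Gamma^T(\dot x + \lambda x)$, the recurrent weights $-\Gamma^T\Gamma$ also implement the reset, the thresholds are $\|\Gamma_j\|^2/2$, and the readout is $\hat x = \Gamma r$. For simplicity at most one neuron spikes per time step:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, lam, N = 1e-3, 2.0, 10.0, 40
t = np.arange(0, T, dt)

Gamma = 0.1 * rng.standard_normal((2, N))     # random decoding weights
W = Gamma.T @ Gamma                           # fast recurrent weights (used as -W)
thresh = 0.5 * np.sum(Gamma**2, axis=0)       # thresholds ||Gamma_j||^2 / 2

x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])   # 2-D signal
c = np.gradient(x, dt, axis=1) + lam * x      # effective input x' + lam x

V, r = np.zeros(N), np.zeros(N)
xhat = np.zeros_like(x)
for i in range(1, t.size):
    V += dt * (-lam * V + Gamma.T @ c[:, i])  # leak + feed-forward input
    r *= 1 - lam * dt                         # filtered spike trains decay
    k = int(np.argmax(V - thresh))
    if V[k] > thresh[k]:                      # greedy rule, one spike per step
        V -= W[:, k]                          # recurrent inhibition + self-reset
        r[k] += 1.0
    xhat[:, i] = Gamma @ r                    # readout
rms = float(np.sqrt(np.mean((x - xhat) ** 2)))
print("rms decoding error:", round(rms, 3))   # a small fraction of the amplitude
```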

SLIDE 45

Homogeneous network

All neurons share the same decoding weight ($\Gamma_j = 1$), so $\hat x = \sum_j r_j$ and every membrane potential tracks the same error, $V_j = x - \hat x$.

[Figure: signal $x$ and estimate $\hat x$; membrane potentials $V_1$, $V_2$, $V_3$; spike trains]

SLIDE 46

Neural variability = degeneracy

In the homogeneous network, $\hat x = \sum_j r_j$ does not depend on which neuron fires: any spike pattern is equivalent to another with the same spikes redistributed across neurons.

SLIDE 47

"Chaotic"

[Figure: signal $x$, estimate $\hat x$, and spike raster over time (ms); shifting one spike by 1 ms reshuffles all later spikes while $\hat x$ is unchanged]

SLIDE 48

Membrane potentials and spikes

[Figure: homogeneous network ($\hat x = \sum_j r_j$, $V_j = x - \hat x$); membrane potentials $V_1$, $V_2$, $V_3$ are correlated, spike trains are uncorrelated]

SLIDE 49

Yu and Ferster, 2009

SLIDE 50

Spikes are NOT random samples… but are fired exactly at the right moment.

[Figure: input signal $x(t)$ and estimate $\hat x$ decoded from the intact spike trains vs. from shuffled spike trains]

SLIDE 51

Why are balanced networks so precise?

Balanced network: the membrane potentials bound the error by the threshold, so $|x - \hat x| \propto 1/N$.

Random (Poisson) spikes at the same rates: errors add as in a random walk, so $|x - \hat x| \propto 1/\sqrt{N}$.
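The scaling claim can be checked with the homogeneous network: $N$ neurons with decoding weight $1/N$ each, tracking $x = 1$. The sketch below (illustrative parameters) compares the greedy spike rule with Poisson spiking at the same mean rate; the greedy error shrinks roughly as $1/N$, the Poisson error only as $1/\sqrt{N}$:

```python
import numpy as np

def errors(N, T=10.0, dt=1e-4, lam=10.0, seed=0):
    """RMS error tracking x = 1 with N homogeneous neurons (weight 1/N each):
    greedy spike rule vs. Poisson spikes at the matched mean rate."""
    rng = np.random.default_rng(seed)
    g = 1.0 / N
    steps = int(T / dt)
    xg = xp = 0.0                         # greedy and Poisson estimates
    eg = ep = 0.0
    for _ in range(steps):
        xg *= 1 - lam * dt
        xp *= 1 - lam * dt
        # Greedy: fire exactly enough spikes to bring the error below g/2.
        k = max(0, int(np.ceil((1.0 - xg - g / 2) / g)))
        xg += k * g
        # Poisson: same mean rate (lam per neuron), random spike counts.
        xp += g * rng.poisson(N * lam * dt)
        eg += (1.0 - xg) ** 2
        ep += (1.0 - xp) ** 2
    return np.sqrt(eg / steps), np.sqrt(ep / steps)

for N in (10, 40, 160):
    g_err, p_err = errors(N)
    print(f"N={N:4d}  greedy: {g_err:.4f}  poisson: {p_err:.4f}")
# Greedy error shrinks ~1/N; Poisson error shrinks only ~1/sqrt(N).
```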


SLIDE 53

Network maintains tight E/I balance

Since $V_j = \Gamma_j^T (x - \hat x)$, the feed-forward current (driven by $x$) and the recurrent current (driven by $-\hat x$) must cancel almost exactly.

[Figure: the two opposing currents over time]

SLIDE 54

Learning the connections

Find $\Gamma$ minimizing $\left\| x - \Gamma r \right\|^2 + C(r)$.

Recurrent connections should cancel the feed-forward currents; feed-forward connections should point in the direction of the input currents.

SLIDE 55

[Figure: over-compensation, under-compensation, and balanced regimes; decoding with untrained vs. trained feed-forward connections]

Feed-forward connections should point in the direction of the input current.

SLIDE 56

[Figure: readout $\hat x = Dr$; feed-forward input $Fx$; lateral input $Wr = -F\hat x = -FDr$]

Lateral connections should cancel the feed-forward inputs.

SLIDE 57

Spike-time dependent learning rules

Feed-forward connections: Hebbian STDP. Recurrent connections: anti-Hebbian STDP.

When neuron $k$ spikes, the recurrent weight from $k$ onto $j$ is updated in proportion to the post-synaptic membrane potential:

$\Delta \Omega_{jk} \propto -\left( 2 V_j + \Omega_{jk} \right)$

SLIDE 58

Learning the connections

$\Delta \Omega_{jk} \propto -\left( 2 V_j + \Omega_{jk} \right)$ when neuron $k$ spikes.

[Figure: Hebbian STDP for the feed-forward connections and anti-Hebbian STDP for the recurrent ones; potentiation vs. depression as a function of $t_{\text{pre}} - t_{\text{post}}$]
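A minimal sketch of this recurrent plasticity. The update form used here, Omega[:, k] -= eta * (2*V + Omega[:, k]) applied when neuron k spikes, is an assumption reconstructed from the slide, and all parameters are illustrative. Starting from self-resets only, the learned recurrent weights should drift toward the optimal $-\Gamma^T \Gamma$:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, lam, eta, N = 1e-3, 10.0, 0.01, 20
Gamma = 0.1 * rng.standard_normal((2, N))     # fixed feed-forward weights
thresh = 0.5 * np.sum(Gamma**2, axis=0)
Omega = -np.diag(2 * thresh)                  # start with self-resets only
V, x = np.zeros(N), np.zeros(2)

dev0 = np.abs(Omega + Gamma.T @ Gamma).max()
for i in range(300_000):
    if i % 1000 == 0:                         # new random signal every 100 ms
        x_new = rng.standard_normal(2)
        V += Gamma.T @ (x_new - x)            # derivative term of the input
        x = x_new
    V += dt * (-lam * V + Gamma.T @ (lam * x))
    k = int(np.argmax(V - thresh))
    if V[k] > thresh[k]:                      # neuron k spikes
        Omega[:, k] -= eta * (2 * V + Omega[:, k])   # anti-Hebbian update
        V += Omega[:, k]                      # recurrent kick (incl. reset)
dev1 = np.abs(Omega + Gamma.T @ Gamma).max()
# The deviation from the optimal recurrent weights should shrink.
print(f"|Omega + Gamma^T Gamma|: before {dev0:.4f}, after {dev1:.4f}")
```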


SLIDE 59

We can learn networks that respect Dale's law

[Figure: E and I populations with feed-forward weights $F$ and recurrent weights $W_{EE}$, $W_{EI}$, $W_{IE}$, $W_{II}$]

SLIDE 60

Before and after learning

[Figure: feed-forward weights, signal ($x_1$, $x_2$) vs. estimate, and membrane potentials and spikes, before and after learning]

SLIDE 61

Tuning curves

Quadratic optimization: with $\hat x = \sum_j \Gamma_j r_j$, the network's rates approach

$r^* = \arg\min_r \left[ \left\| x - \Gamma r \right\|^2 + C(r) \right]$

[Figure: firing rate (Hz) as a function of the amplitude of $x$ and of the direction of $x$ (0 to 360 deg)]

SLIDE 62

Tuning curves

Same quadratic optimization, $r^* = \arg\min_r \left[ \|x - \Gamma r\|^2 + C(r) \right]$: it predicts both amplitude tuning and direction tuning.
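The tuning curves can be computed directly by solving this quadratic program. A minimal sketch (random unit decoding vectors and a hypothetical quadratic cost weight nu; projected gradient descent is just one convenient solver for the non-negativity constraint):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 20
Gamma = rng.standard_normal((2, N))
Gamma /= np.linalg.norm(Gamma, axis=0)        # unit decoding vectors
nu = 0.05                                     # quadratic cost weight

def rates(x, steps=2000, lr=0.01):
    """Projected gradient descent on ||x - Gamma r||^2 + nu ||r||^2, r >= 0."""
    r = np.zeros(N)
    for _ in range(steps):
        grad = -2 * Gamma.T @ (x - Gamma @ r) + 2 * nu * r
        r = np.maximum(0.0, r - lr * grad)    # keep rates non-negative
    return r

# Direction tuning: sweep the direction of a unit-amplitude x.
for deg in (0, 90, 180, 270):
    th = np.deg2rad(deg)
    r = rates(np.array([np.cos(th), np.sin(th)]))
    print(f"direction {deg:3d} deg: {np.count_nonzero(r > 1e-3)} active neurons")
```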

SLIDE 63

Poking the network in various ways

Two neurons encoding one variable, $\hat x = \Gamma_1 r_1 + \Gamma_2 r_2$: in the rate space $(r_1, r_2)$, each stimulus value defines an iso-coding line ($x = 1$, $x = 2$, $x = 3$).

SLIDE 64

Poking the network in various ways

Change stimulus: activity jumps to a different iso-coding line.

SLIDE 65

Poking the network in various ways

Change stimulus: activity jumps to a different iso-coding line. Inactivate a neuron: activity moves along the same iso-coding line, leaving $\hat x$ unchanged.


SLIDE 67

Instant compensation for neural loss

[Figure: signal $x$, estimate $\hat x$, and spike raster over time (ms); a subset of neurons is inactivated (e.g., with halorhodopsin) and the readout is preserved]

Robust to perturbations, connection noise, background noise, synaptic failure…
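A sketch of this robustness, building on the population network above (inactivation is modeled simply by forbidding half the neurons from spiking halfway through; parameters illustrative). The surviving neurons' membrane potentials grow as soon as a readout error appears, so they take over and the decoding error barely changes:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T, lam, N = 1e-3, 2.0, 10.0, 60
t = np.arange(0, T, dt)
Gamma = 0.1 * rng.standard_normal((2, N))
W = Gamma.T @ Gamma
thresh = 0.5 * np.sum(Gamma**2, axis=0)
x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
c = np.gradient(x, dt, axis=1) + lam * x

V, r = np.zeros(N), np.zeros(N)
alive = np.ones(N, dtype=bool)
err = np.zeros(t.size)
half = t.size // 2
for i in range(1, t.size):
    if i == half:
        alive[: N // 2] = False                 # "halorhodopsin": silence half
    V += dt * (-lam * V + Gamma.T @ c[:, i])
    r *= 1 - lam * dt
    Vm = np.where(alive, V - thresh, -np.inf)   # dead neurons cannot spike
    k = int(np.argmax(Vm))
    if Vm[k] > 0:
        V -= W[:, k]
        r[k] += 1.0
    err[i] = np.linalg.norm(x[:, i] - Gamma @ r)
print(f"mean error before: {err[100:half].mean():.3f}, "
      f"after: {err[half + 100:].mean():.3f}")
```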

SLIDE 68

Example: eye-position integrator neurons

Activity vs. eye position; roughly 30 neurons on each side.

SLIDE 69

[Figure: activity vs. eye position for the two populations]


SLIDE 73

Direction tuning

[Figure: tuning to the direction of $x$]

SLIDE 74

Direction tuning

[Figure: direction tuning curves of the network]

SLIDE 75

Visual orientation tuning

Crook and Eysel, 1992

SLIDE 76

Population coding

Classical view: $s_j = g_j(y) + \text{noise}$, decoded as $\hat y = \arg\min_{\hat y} \left\langle (\hat y - y)^2 \right\rangle$.

Efficient view: $\hat x = \sum_j \Gamma_j r_j$, with $r^* = \arg\min_r \left[ \|x - \Gamma r\|^2 + C(r) \right]$.

SLIDE 77

Population coding with changing context

Classical view: $s_j = g_j(y, b) + \text{noise}$; a perturbation changes the output.

Efficient view: $r^* = \arg\min_r \left[ \|x - \Gamma r\|^2 + C(r) \right]$; the output is unchanged.

Perturbations: adaptation, neural death, lesions…

SLIDE 78

Conclusions

  • Efficient population coding implies:

1. Asynchronous irregular spike trains.
2. Balanced, correlated E/I.
3. Diverse, context-dependent tuning curves.
4. Instant changes in those tuning curves to compensate for perturbations.

  • This coding can be learnt using a simple spike-time-dependent plasticity rule.

  • Balanced spiking networks are orders of magnitude more precise (and robust) than equivalent rate models.

  • There is a constant decoder, and thus an invariant representation at the population level, but neural responses are themselves highly non-invariant.