Coding and computing with balanced spiking networks
Sophie Deneve Ecole Normale Supérieure, Paris
Poisson variability in cortex: cortical spike trains are highly variable. Across repeated trials (Trial 1–4), the variance of the spike count grows with its mean, as expected for a Poisson process.
From Churchland et al., Nature Neuroscience 2010
Integrate-and-fire neuron driven by Poisson excitatory (I_exc) and Poisson inhibitory (I_inh) inputs.
The output is more regular than the input. How does Poisson-like variability survive?
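The regularization effect on this slide can be checked with a quick simulation. This is a minimal sketch, not the talk's actual model: a leaky integrate-and-fire neuron receiving Poisson excitatory and inhibitory inputs, with illustrative (assumed) weights, time constant, and threshold.

```python
import numpy as np

def simulate_lif(rate_exc, rate_inh, w_exc=0.5, w_inh=0.5,
                 tau=20.0, v_th=10.0, dt=0.1, t_max=10000.0, seed=0):
    """LIF neuron with Poisson inputs; rates in spikes/ms, times in ms."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for i in range(int(t_max / dt)):
        n_e = rng.poisson(rate_exc * dt)   # excitatory input spikes this bin
        n_i = rng.poisson(rate_inh * dt)   # inhibitory input spikes this bin
        v += dt * (-v / tau) + w_exc * n_e - w_inh * n_i
        if v >= v_th:                      # threshold crossing: spike, reset
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals (Poisson ~ 1)."""
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()

# Net excitatory drift -> regular output (low CV).
cv_drift = cv_isi(simulate_lif(rate_exc=2.0, rate_inh=0.0))
# Balanced excitation and inhibition -> fluctuation-driven, irregular output.
cv_balanced = cv_isi(simulate_lif(rate_exc=8.0, rate_inh=8.0))
```

With a net excitatory drift, many input spikes are averaged on the way to threshold, so the output is much more regular than the input; only the balanced case keeps the output irregular.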
When excitatory and inhibitory inputs balance, the membrane potential follows a random walk, and output spikes are driven by fluctuations (Shadlen and Newsome, 1996).
Variability is conserved when mean excitation = mean inhibition (Wehr and Zador, 2003; Okun and Lampl, 2008).
Balance can arise from feed-forward inhibition (E → I → E) or from recurrent inhibition. The resulting excitation–inhibition correlations are not random: highly structured…
Balanced neural networks generate their own variability
Weak, sparse random connections J_EE, J_EI, J_IE, J_II couple the excitatory and inhibitory populations, driven by an external input I_ext (e.g. C. van Vreeswijk and H. Sompolinsky, Science, 1996; Brunel, 2001).
The network settles into an asynchronous irregular regime with low firing rates.
Chaotic attractor: shuffle a single spike by 0.1 ms and all later spikes are reshuffled.
"Requiem for a spike" — the classical view: the code is mean firing rates, spikes are random samples, and decoding = summing from large populations (Georgopoulos, 1982).
Tuning curves: trial-averaged activity as a function of stimulus direction (deg); the population pattern of activity s is plotted against each neuron's preferred direction.
Encoding model: s_j = g_j(y) + noise. Assuming independent Poisson noise,
p(s_i | x) = f_i(x)^{s_i} exp(−f_i(x)) / s_i!,   p(s | x) = ∏_i p(s_i | x)
Population vector decoding: x̂ ∝ Σ_i s_i x_i, summing each neuron's preferred direction x_i weighted by its activity. This is optimal only when tuning curves are cosine, noise is Gaussian, and preferred directions are uniformly distributed over all orientations…
Decoding is easy… but usually suboptimal.
Maximum likelihood decoding: from the pattern of activity s, compute the likelihood p(s | x) and take the maximum likelihood estimate
x̂ = argmax_x p(s | x)
Decoding is always optimal… but usually hard.
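The two decoders can be compared on a toy ring population. This is an illustrative sketch (tuning shapes, gains, and counts are assumptions, not the talk's data): bell-shaped tuning curves with independent Poisson spike counts, decoded by the population vector and by grid-search maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 32
pref = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)  # preferred dirs

def rates(theta, gain=10.0, base=1.0):
    """Bell-shaped (von Mises) tuning curves f_i(theta)."""
    return base + gain * np.exp(2.0 * (np.cos(theta - pref) - 1.0))

def decode_popvector(counts):
    """Population vector: angle of sum_i s_i * exp(i * pref_i)."""
    return np.angle(np.sum(counts * np.exp(1j * pref))) % (2 * np.pi)

def decode_ml(counts, grid=np.linspace(0, 2 * np.pi, 360, endpoint=False)):
    """Maximum likelihood under independent Poisson noise:
    argmax_theta sum_i [s_i log f_i(theta) - f_i(theta)]."""
    loglik = [np.sum(counts * np.log(rates(th)) - rates(th)) for th in grid]
    return grid[int(np.argmax(loglik))]

theta_true = 1.0
counts = rng.poisson(rates(theta_true))   # one trial of Poisson spike counts
est_pv = decode_popvector(counts)
est_ml = decode_ml(counts)
```

The population vector is a single weighted sum; the ML decoder must search over stimulus values, illustrating the "easy but suboptimal" versus "optimal but hard" trade-off.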
Decoding = summing from large neural populations: x̂ = Σ_j w_j r_j.
Efficient coding: minimize the decoding error plus a cost on spiking, ||x − x̂||² + μ ||r||².
In collaboration with Christian Machens, Wieland Brendel, Ralph Bourdoukan, and Pietro Vertechi.
Single neuron: decoding = post-synaptic integration of the output spike train, given an input signal x(t).
Where do we place the spikes?
Minimize the decoding error E = Σ_t (x(t) − x̂(t))².
Greedy spike rule: spike at time t if and only if this reduces the error, E(spike at t) < E(no spike at t). With a unit decoding weight each spike increments x̂ by 1, so the condition
(x(t) − x̂(t) − 1)² < (x(t) − x̂(t))²
reduces to
x(t) − x̂(t) > 1/2
Membrane potential, threshold, decoding error: define V = x − x̂. The greedy rule "spike if x − x̂ > 1/2" means the neuron fires whenever V crosses a threshold T = 1/2.
If the signal obeys dx/dt = −x + c(t) and the decoder is a leaky integrator of the output spikes, dx̂/dt = −x̂ + o(t), then
dV/dt = −V + c(t) − o(t)
Input, leak, and reset: the error dynamics are exactly those of a leaky integrate-and-fire neuron.
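The single-neuron greedy coder takes only a few lines to simulate. This is a sketch under the assumptions stated above (unit decoding weight, threshold 1/2, all time constants equal to 1 ms); the command signal c(t) is an arbitrary illustrative input.

```python
import numpy as np

dt, t_max = 0.1, 200.0                 # time step and duration (ms)
n = int(t_max / dt)
t = np.arange(n) * dt
c = 1.0 + 0.5 * np.sin(2 * np.pi * t / 50.0)   # illustrative command c(t)

x = np.zeros(n)       # target signal: dx/dt = -x + c(t)   (tau = 1 ms)
xhat = np.zeros(n)    # decoded estimate: dxhat/dt = -xhat + o(t)
spike_times = []
for i in range(1, n):
    x[i] = x[i - 1] + dt * (-x[i - 1] + c[i - 1])
    V = x[i] - xhat[i - 1]                    # membrane potential = error
    o = 1.0 / dt if V > 0.5 else 0.0          # greedy rule: spike if V > 1/2
    xhat[i] = xhat[i - 1] + dt * (-xhat[i - 1] + o)  # spike kicks xhat by 1
    if o:
        spike_times.append(t[i])

rmse = np.sqrt(np.mean((x - xhat) ** 2))
```

By construction the error V stays within about ±1/2 of zero: the estimate x̂ tracks x with a precision set by the spike size.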
Network version: a population encodes a multivariate signal x = (x_1, x_2, …, x_J), decoded as x̂ = Σ_j Γ_j r_j.
Minimize the decoding error plus a quadratic cost on the rates:
E = Σ_t ||x(t) − x̂(t)||² + μ Σ_t ||r(t)||²
Applying the greedy rule to each neuron j (spike iff E(spike at t) < E(no spike at t)) yields a membrane potential and a threshold:
V_j = Γ_j^T (x − x̂) − μ r_j,   T_j = (||Γ_j||² + μ) / 2
Decoding error → membrane potential; quadratic cost → spike-triggered adaptation; thresholds set by the decoding weights.
Differentiating V gives the network dynamics
dV/dt = −V + Γ^T c(t) − Ω o(t),   Ω = Γ^T Γ + μ I
Input, leak, reset + recurrent connections: the optimal encoder is a recurrent network of leaky integrate-and-fire neurons.
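The network version above can be sketched in a short simulation. All parameters (network size, weight scale, signal) are illustrative assumptions; the cost term μ r_j in the voltage is absorbed into the diagonal of Ω for simplicity, and at most one neuron spikes per time step.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, t_max, mu = 0.1, 300.0, 0.01
n_steps = int(t_max / dt)
N, K = 20, 2                                   # neurons, signal dimensions
Gamma = 0.5 * rng.standard_normal((K, N))      # decoding weights (columns)
Omega = Gamma.T @ Gamma + mu * np.eye(N)       # recurrent/reset connections
T = 0.5 * (np.sum(Gamma ** 2, axis=0) + mu)    # thresholds

t = np.arange(n_steps) * dt
x = np.stack([np.sin(2 * np.pi * t / 100.0),   # target signal x(t)
              np.cos(2 * np.pi * t / 100.0)])
c = np.gradient(x, dt, axis=1) + x             # command: c = dx/dt + x

V = np.zeros(N)                                # membrane potentials
r = np.zeros(N)                                # filtered spike trains
xhat = np.zeros((K, n_steps))
n_spikes = 0
for i in range(1, n_steps):
    V += dt * (-V + Gamma.T @ c[:, i])         # leak + feed-forward input
    j = int(np.argmax(V - T))                  # at most one spike per step
    if V[j] > T[j]:
        V -= Omega[:, j]                       # reset + recurrent inhibition
        r[j] += 1.0
        n_spikes += 1
    r += dt * (-r)                             # rates decay with tau = 1 ms
    xhat[:, i] = Gamma @ r

rmse = np.sqrt(np.mean((x - xhat) ** 2))
```

Each spike instantly inhibits all neurons with similar decoding weights through Ω, which is what keeps the population estimate, rather than any single neuron, accurate.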
The network tracks the input signal x(t): the readout x̂ = Σ_j Γ_j r_j follows it closely (shown over ~2.4 s).
Shift one spike by 1 ms and all later spikes rearrange, yet the decoded estimate x̂ is essentially unchanged: many different spike patterns are equivalent codes.
Membrane potentials are correlated while spike trains are uncorrelated (cf. Yu and Ferster, 2009).
Each spike looks random… but is fired exactly at the right moment to keep x̂ close to x(t).
Balanced network vs. random spikes: with the same number of spikes placed at random, the estimate x̂ degrades markedly; in the balanced network, precisely timed spikes and tightly balanced membrane potentials V keep x̂ locked to x.
Currents onto a single neuron (over ~2 s): the recurrent currents cancel the feed-forward currents, which point in the direction of the input.
Find the recurrent weights minimizing the error: too strong gives over-compensation, too weak gives under-compensation; the optimum is exact balance.
Learning the recurrent connections: starting from random weights, a local rule updates Ω toward the optimum W = −F D, so that the recurrent input W r = −F D r = −F x̂ cancels the feed-forward drive F x (with estimate x̂ = D r, feed-forward weights F, and decoding weights D).
Untrained feed-forward connections leave large uncancelled currents in V; after training, the network is tightly balanced and x̂ tracks x.
Expressed as a function of pre- and post-synaptic spike times (t_pre, t_post), the voltage-dependent rule (when neuron k spikes, Ω_kj is updated according to the postsynaptic voltage V_j) yields STDP-like windows of potentiation and depression, with both Hebbian and anti-Hebbian components.
The same architecture extends to separate excitatory and inhibitory populations, with weights W_EE, W_EI, W_IE, W_II and feed-forward weights F mapping the signal onto membrane potentials, spikes, and the estimate.
Tuning curves follow from quadratic optimization: [figure: each neuron's rate as a function of the direction of x, 0–360 deg].
Quadratic optimization: the firing rates solve
r* = argmin_{r ≥ 0} ||x − Γ r||² + C(r)
predicting both amplitude tuning and direction tuning.
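The quadratic program can be solved directly to read off predicted tuning curves. This sketch uses illustrative assumptions: unit decoding weights on a ring, a quadratic cost C(r) = μ ||r||², and projected gradient descent as the solver.

```python
import numpy as np

N, mu = 16, 0.05
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
Gamma = np.stack([np.cos(angles), np.sin(angles)])   # unit decoding weights

def optimal_rates(x, lr=0.05, n_iter=2000):
    """Projected gradient descent on ||x - Gamma r||^2 + mu ||r||^2, r >= 0."""
    r = np.zeros(N)
    for _ in range(n_iter):
        grad = -2.0 * Gamma.T @ (x - Gamma @ r) + 2.0 * mu * r
        r = np.maximum(0.0, r - lr * grad)           # project onto r >= 0
    return r

# Direction tuning of neuron 0: sweep the stimulus direction at fixed
# amplitude and read out its optimal rate.
directions = np.linspace(0, 2 * np.pi, 72, endpoint=False)
tuning = np.array([optimal_rates(np.array([np.cos(d), np.sin(d)]))[0]
                   for d in directions])
```

The resulting curve peaks at the neuron's decoding direction and is cut off at zero elsewhere: tuning emerges from the optimization, not from a hand-built tuning function.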
Poking the network in various ways: with x̂ = Γ_1 r_1 + Γ_2 r_2 + …, change the stimulus (x_1, x_2, x_3) or inactivate a neuron; the remaining rates readjust so that the estimate is preserved.
Instant compensation for neural loss
Inactivate part of the population (e.g. with halorhodopsin): x̂ continues to track x (shown over ~1200 ms).
Robust to perturbations, connection noise, background noise, synaptic failure…
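The compensation effect can be illustrated with the same quadratic program: solve for the optimal rates, silence the most active neuron, and re-solve. Everything here (ring weights, cost μ ||r||², the solver) is an illustrative assumption, not the talk's simulation.

```python
import numpy as np

N, mu = 16, 0.01
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
Gamma = np.stack([np.cos(angles), np.sin(angles)])   # decoding weights

def optimal_rates(x, dead=None, lr=0.05, n_iter=3000):
    """Projected gradient descent on ||x - Gamma r||^2 + mu ||r||^2, r >= 0,
    with an optional inactivated ("dead") neuron clamped to zero."""
    r = np.zeros(N)
    for _ in range(n_iter):
        grad = -2.0 * Gamma.T @ (x - Gamma @ r) + 2.0 * mu * r
        r = np.maximum(0.0, r - lr * grad)
        if dead is not None:
            r[dead] = 0.0                            # lesioned neuron is silent
    return r

x = np.array([1.0, 0.3])
r_full = optimal_rates(x)
dead = int(np.argmax(r_full))                        # kill the most active neuron
r_lesioned = optimal_rates(x, dead=dead)

err_full = np.linalg.norm(x - Gamma @ r_full)
err_lesioned = np.linalg.norm(x - Gamma @ r_lesioned)
```

The neighbours of the lesioned neuron increase their rates so that the decoding error barely changes, mirroring the instant compensation shown on the slide.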
Example: eye-position integrator neurons
Activity as a function of eye position, with ~30 neurons on each side.
Direction tuning (Crook and Eysel, 1992).
Classical view: ŷ = argmin_ŷ (y − ŷ)², with encoding s_j = g_j(y) + noise. A perturbation (adaptation, neural death, lesions…) changes the output.
Efficient coding view: r* = argmin_{r ≥ 0} ||x − Γ r||² + C(r), with encoding s_j = g_j(y, b) + noise. Tuning depends on the context b, and the output is unchanged under the same perturbations.
1. Asynchronous irregular spike trains.
2. Balanced, correlated E/I.
3. Diverse, context-dependent tuning curves.
4. Instant changes in those tuning curves to compensate for perturbations.
Spikes are placed by a greedy, error-minimizing rule.
Spiking networks can be far more precise than equivalent rate models.
The representation is invariant at the population level, but neural responses are themselves highly non-invariant.