
slide-1
SLIDE 1

From retina to statistical physics

Bruno Cessac

NeuroMathComp Team,INRIA Sophia Antipolis,France.

4ème Journée de la Physique Niçoise, 20-06-14



slide-3
SLIDE 3

Visual system


slide-7
SLIDE 7

Multi-Electrode Array

Figure: Multi-Electrode Array.


slide-9
SLIDE 9

Encoding a visual scene



slide-14
SLIDE 14

Encoding a visual scene

Do ganglion cells act as independent encoders? Or do their dynamical (spatio-temporal) correlations play a role in encoding a visual scene (population coding)?

slide-15
SLIDE 15

Let us measure (instantaneous pairwise) correlations

  • E. Schneidman, M.J. Berry, R. Segev, and W. Bialek, "Weak pairwise correlations imply strongly correlated network states in a neural population", Nature, 440(7087):1007–1012, 2006.


slide-18
SLIDE 18

Constructing a statistical model handling measured correlations

Assume stationarity. Measure the empirical correlations. Select the probability distribution that maximizes the entropy while reproducing these correlations.

slide-19
SLIDE 19

Spike events

Figure: Spike state.

Spike state: $\omega_k(n) \in \{0,1\}$.

slide-20
SLIDE 20

Spike events

Figure: Spike pattern.

Spike state: $\omega_k(n) \in \{0,1\}$. Spike pattern: $\omega(n) = \left(\omega_k(n)\right)_{k=1}^{N}$.


slide-22
SLIDE 22

Spike events

Figure: Spike block.

Spike state: $\omega_k(n) \in \{0,1\}$. Spike pattern: $\omega(n) = \left(\omega_k(n)\right)_{k=1}^{N}$. Spike block: $\omega_m^n = \{\omega(m)\,\omega(m+1)\dots\omega(n)\}$.


slide-24
SLIDE 24

Spike events

Figure: Raster plot/Spike train.

Spike state: $\omega_k(n) \in \{0,1\}$. Spike pattern: $\omega(n) = \left(\omega_k(n)\right)_{k=1}^{N}$. Spike block: $\omega_m^n = \{\omega(m)\,\omega(m+1)\dots\omega(n)\}$. Raster plot: $\omega \overset{\mathrm{def}}{=} \omega_0^T$.

slide-25
SLIDE 25

Constructing a statistical model handling measured correlations

Let $\pi_\omega^{(T)}$ be the empirical measure:

$\pi_\omega^{(T)}[f] = \frac{1}{T}\sum_{t=1}^{T} f \circ \sigma^t(\omega)$

e.g. $\pi_\omega^{(T)}[\omega_i] = \frac{1}{T}\sum_{t=1}^{T}\omega_i(t)$: firing rate; $\pi_\omega^{(T)}[\omega_i\omega_j] = \frac{1}{T}\sum_{t=1}^{T}\omega_i(t)\,\omega_j(t)$.

Find the (stationary) probability distribution $\mu$ that maximizes the statistical entropy under the constraints: $\pi_\omega^{(T)}[\omega_i] = \mu(\omega_i)$; $\pi_\omega^{(T)}[\omega_i\omega_j] = \mu(\omega_i\omega_j)$.
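As a concrete illustration (not part of the original slides), here is a minimal Python sketch of the empirical measure: it estimates the firing rates $\pi_\omega^{(T)}[\omega_i]$ and the instantaneous pairwise averages $\pi_\omega^{(T)}[\omega_i\omega_j]$ from a binary raster. The raster here is random surrogate data; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 10_000                                   # neurons, number of time bins
raster = (rng.random((N, T)) < 0.1).astype(int)    # surrogate binary raster ω

rates = raster.mean(axis=1)                        # π^(T)[ω_i], the firing rates
pairwise = (raster @ raster.T) / T                 # π^(T)[ω_i ω_j], instantaneous pairwise averages
corr = pairwise - np.outer(rates, rates)           # centred correlations C_ij
print(rates.round(3))
print(corr.round(4))
```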

slide-26
SLIDE 26

Constructing a statistical model handling measured correlations

There is a unique probability distribution which satisfies these conditions. This is the Gibbs distribution with potential:

$H(\omega(0)) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J_{ij}\,\omega_i(0)\,\omega_j(0)$ (Ising model)
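A hedged sketch (mine, not the talk's) of how the Ising parameters $(h_i, J_{ij})$ can be fitted for small $N$ by exact enumeration: gradient ascent on the maximum-entropy dual adjusts $h$ and $J$ until the Gibbs distribution reproduces the target averages. The toy targets below are invented; realistic sizes ($N \approx 40$ in Schneidman et al.) require Monte Carlo techniques instead.

```python
import itertools
import numpy as np

def ising_fit(target_rates, target_pairs, n_steps=2000, lr=0.5):
    """Fit h, J in H(w) = Σ_i h_i w_i + Σ_{i<j} J_ij w_i w_j so that the Gibbs
    distribution mu(w) ∝ exp(H(w)) reproduces the target first and second moments."""
    N = len(target_rates)
    states = np.array(list(itertools.product([0, 1], repeat=N)))   # all 2^N patterns
    h, J = np.zeros(N), np.zeros((N, N))
    iu = np.triu_indices(N, k=1)
    for _ in range(n_steps):
        H = states @ h + np.einsum('ti,ij,tj->t', states, np.triu(J, 1), states)
        p = np.exp(H - H.max()); p /= p.sum()                      # Gibbs distribution
        model_rates = p @ states
        model_pairs = np.einsum('t,ti,tj->ij', p, states, states)
        h += lr * (target_rates - model_rates)                     # moment-matching ascent
        J[iu] += lr * (target_pairs - model_pairs)[iu]
    return h, J

# Invented, realizable toy targets for N = 3:
h, J = ising_fit(np.array([0.20, 0.30, 0.25]),
                 np.array([[0, 0.08, 0.06], [0, 0, 0.09], [0, 0, 0]]))
print(h, J[np.triu_indices(3, k=1)])
```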

slide-29
SLIDE 29

End of the story?

The Ising potential

$H(\omega(0)) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J_{ij}\,\omega_i(0)\,\omega_j(0)$

does not consider time correlations between neurons. It is therefore bad at predicting spatio-temporal patterns!


slide-32
SLIDE 32

Which correlations?

Spike correlations seem to play a role in spike coding.

This statement, however, rests on several assumptions that could bias the statistics: stationarity; binning; stimulus dependence?

Modulo these remarks, maximum entropy seems to be a relevant setting to study the role of spatio-temporal spike correlations in retina coding.

slide-37
SLIDE 37
OK. So let us consider spatio-temporal constraints.

Easy! Er... in fact, not so easy.

$H(\omega_0^D) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \sum_{i,j=1}^{N} J^{(1)}_{ij}\,\omega_i(0)\,\omega_j(1) + \sum_{i,j,k=1}^{N} J^{(2)}_{ijk}\,\omega_i(0)\,\omega_j(1)\,\omega_k(2) + \;????$

slide-38
SLIDE 38

Two "small" problems.

Handling temporality and memory: the Ising model treats successive time steps as independent, whereas the probability of a spike pattern depends on the network history (transition probabilities).

slide-52
SLIDE 52

Two "small" problems.

Given a set of hypotheses on the transition probabilities, there exists a mathematical framework to solve the problem.

slide-53
SLIDE 53

Handling memory.

Markov chains; variable-length Markov chains; chains with complete connections; ...; Gibbs distributions.


slide-60
SLIDE 60

Mathematical setting

Probability distribution on (bi-infinite) rasters: $\mu[\omega_m^n]$, $\forall m < n \in \mathbb{Z}$.

Conditional probabilities with memory depth $D$: $P_n\!\left[\omega(n) \mid \omega_{n-D}^{n-1}\right]$.

Generating arbitrary depth-$D$ block probabilities:

$\mu\!\left[\omega_m^{m+D}\right] = P_{m+D}\!\left[\omega(m+D) \mid \omega_m^{m+D-1}\right]\,\mu\!\left[\omega_m^{m+D-1}\right]$

$\mu[\omega_m^n] = \prod_{l=m+D}^{n} P_l\!\left[\omega(l) \mid \omega_{l-D}^{l-1}\right]\,\mu\!\left[\omega_m^{m+D-1}\right], \quad \forall m < n \in \mathbb{Z}$ (Chapman–Kolmogorov relation)
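A minimal sketch of this Chapman–Kolmogorov product, assuming for simplicity a homogeneous (time-translation-invariant) transition kernel; the dictionary encoding and the random kernel below are illustrative assumptions, not the talk's construction.

```python
import itertools
import numpy as np

N, D = 2, 1                                    # neurons, memory depth
patterns = list(itertools.product([0, 1], repeat=N))

# Random (homogeneous) transition kernel P[w(n) | w_{n-D}^{n-1}] and an
# initial block distribution mu_init — purely illustrative numbers.
rng = np.random.default_rng(1)
P = {}
for hist in itertools.product(patterns, repeat=D):
    w = rng.random(len(patterns)); w /= w.sum()
    for pat, p in zip(patterns, w):
        P[(pat, hist)] = p
mu_init = {h: 1.0 / len(patterns) ** D for h in itertools.product(patterns, repeat=D)}

def block_probability(block, P, mu_init):
    """Chapman-Kolmogorov: mu[w_m^n] = prod_l P[w(l) | w_{l-D}^{l-1}] * mu[w_m^{m+D-1}]."""
    prob = mu_init[tuple(block[:D])]
    for l in range(D, len(block)):
        prob *= P[(block[l], tuple(block[l - D:l]))]   # slide the depth-D history window
    return prob

print(block_probability([(0, 1), (1, 1), (0, 0)], P, mu_init))
```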


slide-64
SLIDE 64

Mathematical setting

Setting $\phi_l\!\left(\omega_{l-D}^{l}\right) = \log P_l\!\left[\omega(l) \mid \omega_{l-D}^{l-1}\right]$, the Chapman–Kolmogorov relation becomes:

$\mu[\omega_m^n] = \exp\!\Big(\sum_{l=m+D}^{n} \phi_l\!\left(\omega_{l-D}^{l}\right)\Big)\,\mu\!\left[\omega_m^{m+D-1}\right]$

$\mu\!\left[\omega_m^n \mid \omega_m^{m+D-1}\right] = \exp \sum_{l=m+D}^{n} \phi_l\!\left(\omega_{l-D}^{l}\right)$


slide-67
SLIDE 67

Gibbs distribution

$\forall \Lambda \subset \mathbb{Z}^d, \quad \mu(\{S\} \mid \partial\Lambda) = \frac{1}{Z_{\Lambda,\partial\Lambda}}\, e^{-\beta H_{\Lambda,\partial\Lambda}(\{S\})}$

$f(\beta) = -\frac{1}{\beta} \lim_{\Lambda \uparrow \infty} \frac{1}{|\Lambda|} \log Z_{\Lambda,\partial\Lambda}$ (free energy density)


slide-69
SLIDE 69

Gibbs distribution

$\forall m, n: \quad \mu\!\left[\omega_m^n \mid \omega_m^{m+D-1}\right] = \exp \sum_{l=m+D}^{n} \phi_l\!\left(\omega_{l-D}^{l}\right)$ (normalized potential)
slide-70
SLIDE 70

Gibbs distribution

$\forall m < n: \quad A < \frac{\mu[\omega_m^n]}{\exp\!\left(\sum_{l=m+D}^{n} H\!\left(\omega_{l-D}^{l}\right) - (n-m)\,P(H)\right)} < B$ (non-normalized potential)

slide-71
SLIDE 71

Gibbs distribution

$P(H)$ is called the "topological pressure" and is formally equivalent to the free energy density. It does not require time-translation invariance (stationarity). In the stationary case (plus assumptions), a Gibbs state is also an equilibrium state:

$\sup_{\nu \in \mathcal{M}_{inv}} \left[\,h(\nu) + \nu(H)\,\right] = h(\mu) + \mu(H) = P(H).$
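In the stationary, finite-memory case, the topological pressure plays the role of a free energy and can be computed, for small $N$ and memory depth $D = 1$, as the logarithm of the leading eigenvalue of a transfer matrix (Perron root). A sketch with an invented range-2 potential; the potential and sizes are illustrative assumptions.

```python
import itertools
import numpy as np

# Transfer-matrix computation of P(H) for a stationary range-2 potential
# H(w(0), w(1)) over N binary neurons; the potential below is invented.
N = 2
patterns = list(itertools.product([0, 1], repeat=N))

def H(w0, w1):
    return -1.0 * sum(w1) + 0.5 * w0[0] * w1[1]   # toy rate + delayed-pair terms

L = np.array([[np.exp(H(w0, w1)) for w1 in patterns] for w0 in patterns])
pressure = float(np.log(np.linalg.eigvals(L).real.max()))   # P(H) = log λ_max (Perron root)
print(pressure)
```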

slide-72
SLIDE 72

Gibbs distribution

This formalism makes it possible to handle the spatio-temporal case

$H(\omega_0^D) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \sum_{i,j=1}^{N} J^{(1)}_{ij}\,\omega_i(0)\,\omega_j(1) + \sum_{i,j,k=1}^{N} J^{(2)}_{ijk}\,\omega_i(0)\,\omega_j(1)\,\omega_k(2) + \dots$

even numerically.

  • J.C. Vasquez, A. Palacios, O. Marre, M.J. Berry II, B. Cessac, J. Physiol. Paris, Vol. 106, Issues 3–4, 2012.
  • H. Nasser, O. Marre, B. Cessac, J. Stat. Mech. (2013) P03006.
  • H. Nasser, B. Cessac, Entropy (2014), 16(4), 2244–2277.
slide-75
SLIDE 75

Two "small" problems.

Exponential number of possible terms. Contrary to what usually happens in physics, we do not know what the right potential should be.

slide-76
SLIDE 76

Can we get a reasonable idea of what the spike statistics could be by studying a neural network model?

slide-78
SLIDE 78

An Integrate and Fire neural network model with chemical and electric synapses

R. Cofré, B. Cessac: "Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses", Chaos, Solitons & Fractals, 2013.


slide-85
SLIDE 85

An Integrate and Fire neural network model with chemical and electric synapses

Sub-threshold dynamics:

$C_k \frac{dV_k}{dt} = -g_{L,k}(V_k - E_L) - \sum_j g_{kj}(t,\omega)(V_k - E_j) - \sum_j \bar{g}_{kj}(V_k - V_j) + i_k^{(ext)}(t) + \sigma_B\,\xi_k(t)$
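A minimal Euler–Maruyama sketch of these sub-threshold dynamics with chemical and electric synapses. All parameter values, the exponential conductance decay, and the threshold/reset rule are illustrative assumptions for a toy network, not the paper's calibration or its α-kernels.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 3, 1e-3, 2.0                        # neurons, time step (s), duration (s)
C, gL, EL, theta, V_reset = 1.0, 0.5, -0.6, 1.0, 0.0
E = np.array([1.0, 1.0, -1.0])                 # reversal potentials E_j (2 excitatory, 1 inhibitory)
W = rng.uniform(0.0, 0.3, (N, N))              # chemical synaptic weights
Gbar = 0.05 * (np.ones((N, N)) - np.eye(N))    # electric synapses (gap junctions), ḡ_kj
tau_s, sigma_B = 0.01, 0.1

V = np.zeros(N)                                # membrane potentials V_k
g = np.zeros((N, N))                           # chemical conductances g_kj(t, ω)
spikes = []
for step in range(int(T / dt)):
    g *= np.exp(-dt / tau_s)                   # illustrative exponential conductance decay
    I_chem = -(g * (V[:, None] - E[None, :])).sum(axis=1)      # -Σ_j g_kj (V_k - E_j)
    I_elec = -(Gbar * (V[:, None] - V[None, :])).sum(axis=1)   # -Σ_j ḡ_kj (V_k - V_j)
    drift = (-gL * (V - EL) + I_chem + I_elec) / C             # i_ext omitted for brevity
    V += drift * dt + (sigma_B / C) * np.sqrt(dt) * rng.standard_normal(N)
    fired = V >= theta                         # integrate-and-fire threshold crossing
    if fired.any():
        spikes.append((step * dt, np.flatnonzero(fired)))
        g[:, fired] += W[:, fired]             # spikes of presynaptic j increment g_kj
        V[fired] = V_reset
print(len(spikes), "spiking time bins")
```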

slide-86
SLIDE 86

Sub-threshold regime

$C\frac{dV}{dt} + \left(G(t,\omega) - \bar{G}\right) V = I(t,\omega),$

$G_{kl}(t,\omega) = \Big(g_{L,k} + \sum_{j=1}^{N} g_{kj}(t,\omega)\Big)\delta_{kl} \overset{\mathrm{def}}{=} g_k(t,\omega)\,\delta_{kl},$

$I(t,\omega) = I^{(cs)}(t,\omega) + I^{(ext)}(t) + I^{(B)}(t), \qquad I_k^{(cs)}(t,\omega) = \sum_j W_{kj}\,\alpha_{kj}(t,\omega), \quad W_{kj} \overset{\mathrm{def}}{=} G_{kj} E_j.$

slide-87
SLIDE 87

Sub-threshold regime

$dV = \left(\Phi(t,\omega)V + f(t,\omega)\right)dt + \frac{\sigma_B}{C}\, I_N\, dW(t), \quad V(t_0) = v,$

$\Phi(t,\omega) = C^{-1}\left(\bar{G} - G(t,\omega)\right), \qquad f(t,\omega) = C^{-1} I^{(cs)}(t,\omega) + C^{-1} I^{(ext)}(t).$
slide-88
SLIDE 88

Homogeneous Cauchy problem

dV (t,ω)

dt

= Φ(t, ω)V (t, ω), V (t0) = v,

slide-89
SLIDE 89

Homogeneous Cauchy problem

dV (t,ω)

dt

= Φ(t, ω)V (t, ω), V (t0) = v, Theorem Φ(t, ω) square matrix with bounded elements. M0(t0, t, ω) = IN Mk(t0, t, ω) = IN + t

t0

Φ(s, ω)Mk−1(s, t)ds, t ≤ t1, converges uniformly in [t0, t1].

Brockett, R. W., ”Finite Dimensional Linear Systems”,John Wiley and Sons, 1970.

slide-90
SLIDE 90

Homogeneous Cauchy problem

$\frac{dV(t,\omega)}{dt} = \Phi(t,\omega)\,V(t,\omega), \quad V(t_0) = v.$

Theorem. For $\Phi(t,\omega)$ a square matrix with bounded elements, the sequence $M_0(t_0,t,\omega) = I_N$, $M_k(t_0,t,\omega) = I_N + \int_{t_0}^{t} \Phi(s,\omega)\,M_{k-1}(s,t)\,ds$, $t \le t_1$, converges uniformly in $[t_0, t_1]$.

Brockett, R. W., "Finite Dimensional Linear Systems", John Wiley and Sons, 1970.

Flow: $\Gamma(t_0,t,\omega) \overset{\mathrm{def}}{=} \lim_{k\to\infty} M_k(t_0,t,\omega)$
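A numerical sketch of this flow: rather than the iterates $M_k$ themselves, it approximates the time-ordered exponential they converge to by an ordered product of short-time propagators. The function names and the sanity-check example (a commuting $\Phi(t)$, where the exponential formula of the next slide holds exactly) are my own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def flow(Phi, t0, t, n_steps=2000):
    """Approximate Γ(t0, t) for dV/dt = Φ(t) V by an ordered product of
    short-time propagators; later times multiply on the left, so that
    V(t) = Γ(t0, t) v."""
    ts, dt = np.linspace(t0, t, n_steps, retstep=True)
    G = np.eye(Phi(t0).shape[0])
    for s in ts[:-1]:
        G = expm(Phi(s + dt / 2) * dt) @ G     # midpoint rule for each factor
    return G

# Sanity check in a commuting case, Φ(t) = -(1 + sin t) I, where
# Γ(t0, t) = exp(∫ Φ) holds exactly:
Phi = lambda t: -(1 + np.sin(t)) * np.eye(2)
integral = -2.0 + np.cos(1.0)                  # ∫_0^1 -(1 + sin s) ds
print(flow(Phi, 0.0, 1.0)[0, 0], np.exp(integral))
```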

slide-93
SLIDE 93

Homogeneous Cauchy problem

If $\Phi(t,\omega)$ and $\Phi(s,\omega)$ commute:

$\Gamma(t_0,t,\omega) = \sum_{k=0}^{\infty} \frac{1}{k!}\Big(\int_{t_0}^{t} \Phi(s,\omega)\,ds\Big)^k = e^{\int_{t_0}^{t} \Phi(s,\omega)\,ds}$

This holds only in two cases:

  • $\bar{G} = 0$: $\Gamma(t_0,t,\omega) = e^{-\frac{1}{C}\int_{t_0}^{t} G(s,\omega)\,ds}$ (B. Cessac, J. Math. Neuroscience, 2011);
  • $G(t,\omega) = \kappa(t,\omega)\, I_N$.

slide-94
SLIDE 94

Homogeneous Cauchy problem

In general:

$\Gamma(t_0,t,\omega) = I_N + \sum_{n=1}^{+\infty} \int_{t_0}^{t} \cdots \int_{t_0}^{s_{n-1}} \prod_{k=1}^{n} X_k \; ds_1 \cdots ds_n, \qquad X_k \in \{B,\ A(s_k,\omega)\},$

with $B = C^{-1}\bar{G}$ and $A(t,\omega) = -C^{-1}G(t,\omega)$.

slide-95
SLIDE 95

Exponentially bounded flow

Definition: An exponentially bounded flow is a two-parameter $(t_0, t)$ family $\{\Gamma(t_0,t,\omega)\}_{t_0 \le t}$ of flows such that, $\forall \omega \in \Omega$:

1. $\Gamma(t_0,t_0,\omega) = I_N$ and $\Gamma(t_0,t,\omega)\,\Gamma(t,s,\omega) = \Gamma(t_0,s,\omega)$ whenever $t_0 \le t \le s$;
2. for each $v \in \mathbb{R}^N$ and $\omega \in \Omega$, $(t_0,t) \mapsto \Gamma(t_0,t,\omega)v$ is continuous for $t_0 \le t$;
3. there are $M > 0$ and $m > 0$ such that $\|\Gamma(s,t,\omega)\| \le M e^{-m(t-s)}$, $s \le t$.


slide-97
SLIDE 97

Exponentially bounded flow

Proposition. Let $\sigma_1$ be the largest eigenvalue of $\bar{G}$. If $\sigma_1 < g_L$, then the flow $\Gamma$ in our model has the exponentially bounded flow property.

Remark. Typical electric-synapse conductances are of order 1 nanosiemens, while the leak conductance of retinal ganglion cells is of order 50 microsiemens. This condition is therefore compatible with the biophysical values of conductances in the retina.
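A quick numeric sketch of the proposition's sufficient condition $\sigma_1 < g_L$, using a hypothetical gap-junction matrix with the orders of magnitude quoted in the remark; the matrix itself is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10
Gbar = rng.uniform(0.0, 1e-9, (N, N))          # gap-junction conductances ~ 1 nS
Gbar = (Gbar + Gbar.T) / 2                     # symmetric, as for electric synapses
np.fill_diagonal(Gbar, 0.0)
g_L = 50e-6                                    # leak conductance ~ 50 µS (remark above)

sigma_1 = np.linalg.eigvalsh(Gbar).max()       # largest eigenvalue of the symmetric Ḡ
print(sigma_1, sigma_1 < g_L)                  # the condition of the proposition holds
```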

slide-98
SLIDE 98

Exponentially bounded flow

Theorem. If $\Gamma(t_0,t,\omega)$ is an exponentially bounded flow, there is a unique strong solution for $t \ge t_0$, given by:

$V(t_0,t,\omega) = \Gamma(t_0,t,\omega)\,v + \int_{t_0}^{t} \Gamma(s,t,\omega)\,f(s,\omega)\,ds + \frac{\sigma_B}{C}\int_{t_0}^{t} \Gamma(s,t,\omega)\,dW(s)$

  • R. Wooster, "Evolution systems of measures for non-autonomous stochastic differential equations with Levy noise", Communications on Stochastic Analysis, vol. 5, 353–370, 2011.


slide-103
SLIDE 103

Membrane potential decomposition

$V(t,\omega) = V^{(d)}(t,\omega) + V^{(noise)}(t,\omega), \qquad V^{(d)}(t,\omega) = V^{(cs)}(t,\omega) + V^{(ext)}(t,\omega),$

$V^{(cs)}(t,\omega) = \frac{1}{C}\int_{-\infty}^{t} \Gamma(s,t,\omega)\,\chi(s,\omega)\, I^{(cs)}(s,\omega)\,ds,$

$V^{(ext)}(t,\omega) = \frac{1}{C}\int_{-\infty}^{t} \Gamma(s,t,\omega)\,\chi(s,\omega)\, I^{(ext)}(s,\omega)\,ds,$

$V^{(noise)}(t,\omega) = \frac{\sigma_B}{C}\int_{\tau_k(t,\omega)}^{t} \Gamma(s,t,\omega)\,dW(s).$


slide-107
SLIDE 107

Transition probabilities

Problem: determine $P\!\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right]$.

Fix $\omega$, $n$ and $t < n$. Set $\theta_k(t,\omega) = \theta - V_k^{(d)}(t,\omega)$.

Neuron $k$ emits a spike at integer time $n$ ($\omega_k(n) = 1$) if: $\exists t \in [n-1,n]$, $V_k^{(noise)}(t,\omega) = \theta_k(t,\omega)$.

This is a "first passage" problem, in $N$ dimensions, with a time-dependent boundary $\theta_k(t,\omega)$ (general solution unknown).

slide-108
SLIDE 108

Conditional probability

Without electric synapses, the probability of $\omega(n)$ conditionally on $\omega_{-\infty}^{n-1}$ can be approximated by:

$P\!\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right] = \prod_{k=1}^{N} P\!\left[\omega_k(n) \mid \omega_{-\infty}^{n-1}\right],$

with

$P\!\left[\omega_k(n) \mid \omega_{-\infty}^{n-1}\right] = \omega_k(n)\,\pi\!\left(X_k(n-1,\omega)\right) + \left(1 - \omega_k(n)\right)\left(1 - \pi\!\left(X_k(n-1,\omega)\right)\right),$

where

$X_k(n-1,\omega) = \frac{\theta - V_k^{(d)}(n-1,\omega)}{\sigma_k(n-1,\omega)}, \qquad \pi(x) = \frac{1}{\sqrt{2\pi}}\int_{x}^{+\infty} e^{-\frac{u^2}{2}}\,du.$
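This factorized conditional probability can be evaluated directly, since $\pi(x)$ is the standard Gaussian tail. A minimal sketch; the deterministic parts `V_det` and noise scales `sigma` below are hypothetical numbers standing in for the history-dependent quantities $V_k^{(d)}$ and $\sigma_k$.

```python
import numpy as np
from scipy.special import erfc

def gauss_tail(x):
    """pi(x) = (1/√(2π)) ∫_x^∞ e^{-u²/2} du, the standard Gaussian tail."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def conditional_spike_prob(omega_n, V_det, sigma, theta=1.0):
    """P[w(n) | past] = Π_k [ w_k(n) pi(X_k) + (1 - w_k(n)) (1 - pi(X_k)) ],
    with X_k = (θ - V^(d)_k) / σ_k; V_det and sigma summarize the history."""
    X = (theta - V_det) / sigma
    pk = np.where(omega_n == 1, gauss_tail(X), 1.0 - gauss_tail(X))
    return pk.prod()

# Hypothetical deterministic parts and noise scales for 3 neurons:
print(conditional_spike_prob(np.array([1, 0, 1]),
                             V_det=np.array([0.8, 0.2, 1.1]),
                             sigma=np.array([0.3, 0.3, 0.3])))
```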


slide-113
SLIDE 113

Conditional probability

$\phi(\omega) = \log P\!\left[\omega(n) \mid \omega_{-\infty}^{n-1}\right]$ defines an (infinite-range) normalized potential, which determines a unique Gibbs distribution.

It depends explicitly on the network parameters and on the external stimulus. Its definition holds for a time-dependent stimulus (non-stationary case). It is similar to the so-called Generalized Linear Model used for retina analysis, although with a more complex structure. The general form (with electric synapses) is as yet unknown.

slide-115
SLIDE 115

Back to our second "small" problem

Is there a Maximum Entropy potential corresponding to $\phi$ (in the stationary case)?

slide-116
SLIDE 116

Back to our second "small" problem

One can make a Taylor expansion of $\phi(\omega)$.

slide-117
SLIDE 117

Back to our second "small" problem

Using $\omega_i(n)^k = \omega_i(n)$, $k \ge 1$, one ends up with a potential of the form:

$\phi(\omega) = \sum_{i=1}^{N} h_i\,\omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\,\omega_i(0)\,\omega_j(0) + \dots$
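Because $\omega_i(n)^k = \omega_i(n)$ for binary variables, any function of finitely many spike events has a unique expansion over spike monomials, and the coefficients (the $h_i$, $J_{ij}$, and higher-order terms) can be recovered by inclusion-exclusion. A sketch with an invented nonlinear $\phi$ of three binary spike events; the function names are mine.

```python
import itertools
import numpy as np

def monomial_coefficients(phi, n_vars):
    """Expand a function of binary variables over spike monomials:
    phi(w) = Σ_S a_S Π_{i∈S} w_i, with coefficients given by
    inclusion-exclusion: a_S = Σ_{T⊆S} (-1)^{|S|-|T|} phi(1_T)."""
    coeffs = {}
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n_vars), r) for r in range(n_vars + 1)):
        a = 0.0
        for r in range(len(S) + 1):
            for T in itertools.combinations(S, r):
                w = np.zeros(n_vars); w[list(T)] = 1
                a += (-1) ** (len(S) - len(T)) * phi(w)
        coeffs[S] = a
    return coeffs

# A hypothetical nonlinear potential of 3 binary events, e.g. w_1(0), w_2(0), w_1(1):
phi = lambda w: np.log(1.0 + 0.5 * w[0] + 0.2 * w[1] * w[2])
print(monomial_coefficients(phi, 3))   # h-like, J-like, and higher-order coefficients
```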

slide-118
SLIDE 118

Back to our second "small" problem

The expansion is infinite, although one can approximate the infinite-range potential $\phi$ by a finite-range (finite-memory) approximation, giving rise to a finite expansion.

slide-119
SLIDE 119

Back to our second "small" problem

The coefficients of the expansion are nonlinear functions of the network parameters and of the stimulus. They are therefore somewhat redundant.

slide-120
SLIDE 120

Back to our second "small" problem

Rodrigo Cofré, Bruno Cessac, "Exact computation of the maximum-entropy potential of spiking neural-network models", Phys. Rev. E 89, 052117.

Given a set of stationary transition probabilities $P\!\left[\omega(D) \mid \omega_0^{D-1}\right] > 0$, there is a unique (up to a constant) Maximum Entropy potential, written as a linear combination of spike-interaction terms with a minimal number of terms (normal form). This potential can be explicitly (and algorithmically) computed.

Hints: using a change of variables one can eliminate terms in the potential ("normal" form). The construction is based on the equivalence between Gibbs potentials (cohomology) and periodic-orbit expansions.

slide-121
SLIDE 121

Back to our second "small" problem

However, the number of terms still grows exponentially with the number of neurons and with the memory depth, and these terms are generically non-zero.

slide-126
SLIDE 126

Back to the retina

Neuromimetic models typically have $O(N^2)$ parameters, where $N$ is the number of neurons. The equivalent MaxEnt potential generically has a number of parameters growing exponentially with $N$, which are nonlinear and redundant functions of the network parameters (synaptic weights, stimulus). ⇒ Intractable determination of parameters; stimulus-dependent parameters; overfitting. BUT: real neural networks are not generic.

slide-127
SLIDE 127

Back to the retina

The MaxEnt approach might be useful if there is some hidden law of nature/symmetry which cancels most terms in the expansion.

slide-128
SLIDE 128

Acknowledgments

NeuroMathComp team: Rodrigo Cofré (PhD, September 2014), Dora Karvouniari (M2), Pierre Kornprobst (CR1 INRIA), Sélim Kraria (IR), Gaia Lombardi (M2 → Paris), Hassan Nasser (PhD → startup), Daniela Pamplona (postdoc), Geoffrey Portelli (postdoc), Vivien Robinet (postdoc → MCF Kourou), Horacio Rostro (PhD → docent, Mexico), Wahiba Taouali (postdoc → postdoc, INT Marseille), Juan-Carlos Vasquez (PhD → postdoc, Bogotá).

Princeton University: Michael J. Berry II.

ANR KEOPS: Maria-José Escobar (CN Valparaíso), Adrian Palacios (CN Valparaíso), Cesar Ravelo (CN Valparaíso), Thierry Viéville (INRIA Mnemosyne).

Renvision FP7 project: Luca Berdondini (IIT Genova), Matthias Hennig (Edinburgh), Alessandro Maccione (IIT Genova), Evelyne Sernagor (Newcastle).

Institut de la Vision: Olivier Marre, Serge Picaud.

slide-129
SLIDE 129

Can we hear the shape of a Maximum Entropy potential?

Two distinct potentials $H^{(1)}, H^{(2)}$ of range $R = D+1$ correspond to the same Gibbs distribution (are "equivalent") if and only if there exists a range-$D$ function $f$ such that (Chazottes–Keller, 2009):

$H^{(2)}\!\left(\omega_0^D\right) = H^{(1)}\!\left(\omega_0^D\right) - f\!\left(\omega_0^{D-1}\right) + f\!\left(\omega_1^{D}\right) + \Delta,$

where $\Delta = P(H^{(2)}) - P(H^{(1)})$.

slide-130
SLIDE 130

Can we hear the shape of a Maximum Entropy potential?

Summing over periodic orbits, we get rid of the function $f$:

$\sum_{n=1}^{R} \phi\!\left(\sigma^n \omega_{\ell_1}\right) = \sum_{n=1}^{R} H^{*}\!\left(\sigma^n \omega_{\ell_1}\right) - R\,P(H^{*}),$

and we eliminate equivalent constraints.

slide-131
SLIDE 131

Can we hear the shape of a Maximum Entropy potential?

Conclusion: given a set of transition probabilities $P\!\left[\omega(D) \mid \omega_0^{D-1}\right] > 0$, there is a unique, up to a constant, MaxEnt potential, written as a linear combination of constraints (averages of spike events) with a minimal number of terms. This potential can be explicitly (and algorithmically) computed.