
From retina to statistical physics
Bruno Cessac, NeuroMathComp Team, INRIA Sophia Antipolis, France.
4ème journée de la physique niçoise, 20-06-14.


  1.–4. Mathematical setting
    Probability distribution on (bi-infinite) rasters: $\mu[\omega_m^n]$, $\forall\, m < n \in \mathbb{Z}$.
    Conditional probabilities with memory depth $D$: $P_n\left[\,\omega(n) \mid \omega_{n-D}^{n-1}\,\right]$.
    Generating arbitrary depth-$D$ block probabilities:
    $\mu\left[\omega_m^{m+D}\right] = P_{m+D}\left[\,\omega(m+D) \mid \omega_m^{m+D-1}\,\right]\, \mu\left[\omega_m^{m+D-1}\right]$
    and, iterating (Chapman-Kolmogorov relation),
    $\mu[\omega_m^n] = \prod_{l=m+D}^{n} P_l\left[\,\omega(l) \mid \omega_{l-D}^{l-1}\,\right]\, \mu\left[\omega_m^{m+D-1}\right], \qquad \forall\, m < n \in \mathbb{Z}.$

  5.–8. Mathematical setting
    Setting $\phi_l\left(\omega_{l-D}^{l}\right) = \log P_l\left[\,\omega(l) \mid \omega_{l-D}^{l-1}\,\right]$, the relation above becomes
    $\mu[\omega_m^n] = \exp\left(\sum_{l=m+D}^{n} \phi_l\left(\omega_{l-D}^{l}\right)\right) \mu\left[\omega_m^{m+D-1}\right],$
    that is,
    $\mu\left[\,\omega_m^n \mid \omega_m^{m+D-1}\,\right] = \exp\left(\sum_{l=m+D}^{n} \phi_l\left(\omega_{l-D}^{l}\right)\right).$
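To make the block-probability construction above concrete, here is a minimal Python sketch (not part of the talk) for a single binary neuron with memory depth D = 1; the transition kernel and the initial block distribution are arbitrary assumed numbers. It builds block probabilities from the conditional probabilities and checks Chapman-Kolmogorov consistency by marginalising over the last symbol.

```python
# Minimal sketch (not from the talk): block probabilities from conditional
# probabilities with memory depth D = 1, for one binary neuron.
import itertools
import numpy as np

# Assumed transition kernel P[w(n) | w(n-1)] and initial block distribution.
P = np.array([[0.8, 0.2],    # from state 0
              [0.4, 0.6]])   # from state 1
mu0 = np.array([0.5, 0.5])   # mu[w(m)] for the initial depth-D block (D = 1)

def mu_block(block):
    """mu[w_m^n] = prod_{l=m+1}^{n} P[w(l) | w(l-1)] * mu0[w(m)]."""
    p = mu0[block[0]]
    for prev, cur in zip(block, block[1:]):
        p *= P[prev, cur]
    return p

# Chapman-Kolmogorov consistency: marginalising the last symbol of a longer
# block recovers the probability of the shorter one.
for block in itertools.product([0, 1], repeat=3):
    assert np.isclose(sum(mu_block(block + (w,)) for w in (0, 1)),
                      mu_block(block))
print("Chapman-Kolmogorov consistency holds on all depth-3 blocks.")
```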

  9. Gibbs distribution

  10.–11. Gibbs distribution
    $\forall\, \Lambda \subset \mathbb{Z}^d, \qquad \mu(\{S\} \mid \partial\Lambda) = \frac{1}{Z_{\Lambda,\partial\Lambda}}\, e^{-\beta H_{\Lambda,\partial\Lambda}(\{S\})}$
    $f(\beta) = -\frac{1}{\beta} \lim_{\Lambda \uparrow \infty} \frac{1}{|\Lambda|} \log Z_{\Lambda,\partial\Lambda}$  (free energy density)
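As a standard illustration of the finite-volume Gibbs formula and of the free energy density (not taken from the talk), the sketch below enumerates a small 1D Ising chain with fixed boundary spins and shows the finite-volume estimate of f(β) approaching the known infinite-volume value.

```python
# Minimal sketch (standard illustration, not from the talk): finite-volume
# Gibbs distribution mu({S} | dLambda) = exp(-beta H)/Z and free energy
# density for a 1D Ising chain with fixed boundary spins, by enumeration.
import itertools
import numpy as np

def hamiltonian(S, boundary, J=1.0, h=0.0):
    """H_{Lambda,dLambda}: nearest-neighbour bonds inside Lambda plus the
    bonds to the two fixed boundary spins."""
    s = np.concatenate(([boundary[0]], S, [boundary[1]]))
    return -J * np.sum(s[:-1] * s[1:]) - h * np.sum(S)

def partition_function(beta, n, boundary=(+1, +1)):
    return sum(np.exp(-beta * hamiltonian(np.array(S), boundary))
               for S in itertools.product([-1, +1], repeat=n))

beta = 1.0
for n in (4, 8, 12):
    Z = partition_function(beta, n)
    # Finite-volume free energy density f = -(1/beta)(1/|Lambda|) log Z.
    print(f"|Lambda| = {n:2d}   f ~ {-np.log(Z) / (beta * n):+.4f}")
# Infinite-volume value for the 1D Ising chain (h = 0): -log(2 cosh(beta J))/beta.
print(f"infinite volume: {-np.log(2.0 * np.cosh(beta)) / beta:+.4f}")

# Gibbs weight of a single configuration, mu({S} | dLambda) = exp(-beta H)/Z:
S_up = np.ones(12, dtype=int)
print("mu(all spins up | ++ boundary) =",
      np.exp(-beta * hamiltonian(S_up, (1, 1))) / Z)
```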

  12.–13. Gibbs distribution
    $\forall\, m, n, \qquad \mu\left[\,\omega_m^n \mid \omega_m^{m+D-1}\,\right] = \exp\left(\sum_{l=m+D}^{n} \phi_l\left(\omega_{l-D}^{l}\right)\right)$  (normalized potential)

  14. Gibbs distribution
    $\forall\, m < n, \qquad A < \frac{\mu[\omega_m^n]}{\exp\left(-(n-m)\,P(H) + \sum_{l=m+D}^{n} H\left(\omega_{l-D}^{l}\right)\right)} < B$  (non-normalized potential)

  15. Gibbs distribution
    $P(H)$ is called the "topological pressure" and is formally equivalent to the free energy density. It does not require time-translation invariance (stationarity). In the stationary case (plus assumptions), a Gibbs state is also an equilibrium state:
    $\sup_{\nu \in \mathcal{M}_{inv}} \left[\, h(\nu) + \nu(H) \,\right] = h(\mu) + \mu(H) = P(H).$
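In the stationary, memory-depth-1 case the topological pressure can be computed as the logarithm of the leading eigenvalue of the transfer matrix built from the potential. The sketch below (a standard construction, with an arbitrary assumed potential) also checks that a normalized potential φ = log P[ω(1) | ω(0)] has zero pressure.

```python
# Minimal sketch (standard construction, not from the talk): in the
# stationary, memory-depth-1 case the topological pressure P(H) is the log
# of the leading eigenvalue of the transfer matrix L[w, w'] = exp(H(w, w')).
import numpy as np

def pressure(H):
    L = np.exp(H)                              # transfer matrix of the potential
    return np.log(np.max(np.linalg.eigvals(L).real))

# Arbitrary assumed potential on blocks (w(0), w(1)) of one binary neuron.
H = np.array([[0.3, -0.2],
              [0.1, 0.5]])
print("P(H)   =", pressure(H))

# For a *normalized* potential phi = log P[w(1) | w(0)] the transfer matrix is
# stochastic, its leading eigenvalue is 1, and the pressure vanishes.
P_cond = np.array([[0.8, 0.2],
                   [0.4, 0.6]])
print("P(phi) =", pressure(np.log(P_cond)))    # ~ 0
```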

  16.–17. Gibbs distribution
    This formalism allows one to handle the spatio-temporal case,
    $H(\omega_0^D) = \sum_{i=1}^{N} h_i\, \omega_i(0) + \sum_{i,j=1}^{N} J^{(0)}_{ij}\, \omega_i(0)\, \omega_j(0) + \sum_{i,j=1}^{N} J^{(1)}_{ij}\, \omega_i(0)\, \omega_j(1) + \sum_{i,j,k=1}^{N} J^{(2)}_{ijk}\, \omega_i(0)\, \omega_j(1)\, \omega_k(2) + \ldots,$
    even numerically. But which further terms should be included?
    J.C. Vasquez, A. Palacios, O. Marre, M.J. Berry II, B. Cessac, J. Physiol. Paris, Vol. 106, Issues 3-4 (2012).
    H. Nasser, O. Marre, B. Cessac, J. Stat. Mech. (2013) P03006.
    H. Nasser, B. Cessac, Entropy 16(4), 2244-2277 (2014).
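The sketch below (not from the cited papers) simply evaluates such a spatio-temporal potential on a binary spike block ω_0^D, keeping the rate terms h_i, the instantaneous pairwise terms J^(0)_ij and the one-step-delayed terms J^(1)_ij; all coefficients are random placeholders.

```python
# Minimal sketch (not from the cited papers): evaluating the spatio-temporal
# potential on a binary spike block w_0^D, keeping the rate terms h_i, the
# instantaneous pairwise terms J0_ij and the one-step-delayed terms J1_ij.
# All coefficients are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 2
h = rng.normal(size=N)
J0 = rng.normal(size=(N, N)) / N
J1 = rng.normal(size=(N, N)) / N

def H(block):
    """block[i, t] in {0, 1}: spike of neuron i at time t = 0, ..., D."""
    w0, w1 = block[:, 0], block[:, 1]
    return (h @ w0
            + w0 @ J0 @ w0      # sum_ij J0_ij w_i(0) w_j(0)
            + w0 @ J1 @ w1)     # sum_ij J1_ij w_i(0) w_j(1)

block = rng.integers(0, 2, size=(N, D + 1))
print("H(block) =", H(block))
```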

  18.–19. Two small problems
    Exponential number of possible terms.
    Contrary to what usually happens in physics, we do not know what the right potential should be.

  20. Can we get a reasonable idea of what the spike statistics could be by studying a neural network model?

  21.–22. An Integrate-and-Fire neural network model with chemical and electric synapses
    R. Cofré, B. Cessac, "Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses", Chaos, Solitons and Fractals, 2013.

  23.–29. An Integrate-and-Fire neural network model with chemical and electric synapses
    Sub-threshold dynamics:
    $C_k \frac{dV_k}{dt} = -g_{L,k}\,(V_k - E_L) \;-\; \sum_{j} g_{kj}(t,\omega)\,(V_k - E_j) \;-\; \sum_{j} \bar g_{kj}\,(V_k - V_j) \;+\; i_k^{(ext)}(t) \;+\; \sigma_B\, \xi_k(t),$
    with a leak term, chemical synaptic conductances $g_{kj}(t,\omega)$, electric synapses (gap junctions) $\bar g_{kj}$, an external current and Brownian noise.
    [Figure: post-synaptic potential (PSP) profile $g$ as a function of time $t$, for spike times $t = 1, 1.2, 1.6, 3$.]
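A minimal Euler-Maruyama sketch of these sub-threshold dynamics is given below. It is not the authors' code: the parameters, the alpha-shaped chemical conductances and the fixed presynaptic spike times are all assumptions made for illustration, and the firing/reset mechanism of the full Integrate-and-Fire model is omitted.

```python
# Minimal sketch (assumed parameters, not the authors' code): Euler-Maruyama
# integration of the sub-threshold dynamics
#   C dV_k = [ -g_L (V_k - E_L) - sum_j g_kj(t)(V_k - E_j)
#              - sum_j gbar_kj (V_k - V_j) + i_ext ] dt + sigma_B dW_k.
# Chemical conductances g_kj(t) are alpha-shaped responses to fixed,
# pre-generated presynaptic spike times; every number is a placeholder.
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 4, 1e-4, 0.2                     # neurons, time step (s), duration (s)
C, gL, EL = 1e-9, 5e-8, -70e-3              # capacitance (F), leak (S), reversal (V)
E = np.where(rng.random(N) < 0.8, 0.0, -80e-3)   # synaptic reversal potentials (V)
Gmax = 1e-9 * rng.random((N, N))            # chemical peak conductances (S)
gbar = 1e-9 * rng.random((N, N))            # electric synapses (gap junctions, S)
np.fill_diagonal(gbar, 0.0)
tau, sigma_B, i_ext = 5e-3, 1e-12, 2e-10    # PSP time scale, noise, external current
spikes = [np.sort(rng.uniform(0, T, 20)) for _ in range(N)]   # presynaptic spikes

def alpha(t, t_sp):
    """Alpha kernel summed over the past spikes of one presynaptic neuron."""
    s = t - t_sp[t_sp <= t]
    return np.sum((s / tau) * np.exp(1.0 - s / tau))

V = np.full(N, EL)
for step in range(int(T / dt)):
    t = step * dt
    a = np.array([alpha(t, spikes[j]) for j in range(N)])
    g = Gmax * a                            # g_kj(t) = Gmax_kj * alpha_j(t)
    I_chem = -(g * (V[:, None] - E[None, :])).sum(axis=1)
    I_gap = -(gbar * (V[:, None] - V[None, :])).sum(axis=1)
    drift = (-gL * (V - EL) + I_chem + I_gap + i_ext) / C
    V = V + drift * dt + (sigma_B / C) * np.sqrt(dt) * rng.standard_normal(N)
print("V(T) in mV:", np.round(V * 1e3, 2))
```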

  30. Sub-threshold regime
    $C\, \frac{dV}{dt} + \left( G(t,\omega) - \bar G \right) V = I(t,\omega),$
    $G_{kl}(t,\omega) \overset{def}{=} \left( g_{L,k} + \sum_{j=1}^{N} g_{kj}(t,\omega) \right) \delta_{kl} = g_k(t,\omega)\, \delta_{kl},$
    $I(t,\omega) = I^{(cs)}(t,\omega) + I^{(ext)}(t) + I^{(B)}(t),$
    $I_k^{(cs)}(t,\omega) \overset{def}{=} \sum_{j} W_{kj}\, \alpha_{kj}(t,\omega), \qquad W_{kj} = G_{kj}\, E_j.$

  31. Sub-threshold regime
    $\begin{cases} dV = \left( \Phi(t,\omega)\, V + f(t,\omega) \right) dt + \frac{\sigma_B}{c}\, I_N\, dW(t), \\ V(t_0) = v, \end{cases}$
    $\Phi(t,\omega) = C^{-1} \left( \bar G - G(t,\omega) \right), \qquad f(t,\omega) = C^{-1} I^{(cs)}(t,\omega) + C^{-1} I^{(ext)}(t).$

  32.–34. Homogeneous Cauchy problem
    $\begin{cases} \frac{dV(t,\omega)}{dt} = \Phi(t,\omega)\, V(t,\omega), \\ V(t_0) = v. \end{cases}$
    Theorem. Let $\Phi(t,\omega)$ be a square matrix with bounded elements. The sequence
    $M_0(t_0,t,\omega) = I_N, \qquad M_k(t_0,t,\omega) = I_N + \int_{t_0}^{t} \Phi(s,\omega)\, M_{k-1}(t_0,s,\omega)\, ds, \quad t \le t_1,$
    converges uniformly on $[t_0,t_1]$.
    Brockett, R. W., "Finite Dimensional Linear Systems", John Wiley and Sons, 1970.
    Flow: $\Gamma(t_0,t,\omega) \overset{def}{=} \lim_{k \to \infty} M_k(t_0,t,\omega).$
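The iteration in the theorem can be checked numerically. The sketch below (not the authors' code) approximates the Peano-Baker iterates M_k on a time grid with a trapezoidal rule and, for a constant test matrix Φ where the commuting-case formula of the next slides applies, compares the limit with the matrix exponential (SciPy is assumed for expm).

```python
# Minimal sketch (not the authors' code): Peano-Baker iteration
#   M_0 = I_N,   M_k(t0, t) = I_N + int_{t0}^{t} Phi(s) M_{k-1}(t0, s) ds,
# approximated with a trapezoidal rule on a time grid. For a constant Phi
# (a trivially commuting case) the limit must equal expm((t1 - t0) * Phi).
import numpy as np
from scipy.linalg import expm   # SciPy assumed available

N, t0, t1, n_grid = 3, 0.0, 1.0, 400
ts = np.linspace(t0, t1, n_grid)
dt = ts[1] - ts[0]
rng = np.random.default_rng(2)
Phi = -np.eye(N) + 0.3 * rng.standard_normal((N, N))   # assumed test matrix

def peano_baker(Phi_const, iters=25):
    """Return M_iters(t0, t) on the grid, shape (n_grid, N, N)."""
    M = np.broadcast_to(np.eye(N), (n_grid, N, N)).copy()
    for _ in range(iters):
        integrand = np.einsum('ij,tjk->tik', Phi_const, M)   # Phi(s) M(t0, s)
        trapz = 0.5 * (integrand[1:] + integrand[:-1]) * dt
        integral = np.concatenate([np.zeros((1, N, N)), np.cumsum(trapz, axis=0)])
        M = np.eye(N) + integral
    return M

Gamma = peano_baker(Phi)[-1]                # approximation of Gamma(t0, t1)
print("max |Peano-Baker - expm|:", np.abs(Gamma - expm((t1 - t0) * Phi)).max())
```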

  35.–37. Homogeneous Cauchy problem
    If $\Phi(t,\omega)$ and $\Phi(s,\omega)$ commute,
    $\Gamma(t_0,t,\omega) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( \int_{t_0}^{t} \Phi(s,\omega)\, ds \right)^{k} = e^{\int_{t_0}^{t} \Phi(s,\omega)\, ds}.$
    This holds only in two cases:
    (i) $\bar G = 0$, in which case $\Gamma(t_0,t,\omega) = e^{-\frac{1}{c} \int_{t_0}^{t} G(s,\omega)\, ds}$ (B. Cessac, J. Math. Neuroscience, 2011);
    (ii) $G(t,\omega) = \kappa(t,\omega)\, I_N$.

  38. Homogeneous Cauchy problem
    In general:
    $\Gamma(t_0,t,\omega) = I_N + \sum_{n=1}^{+\infty} \int_{t_0}^{t} \int_{t_0}^{s_1} \cdots \int_{t_0}^{s_{n-1}} \prod_{k=1}^{n} X_k \; ds_1 \cdots ds_n,$
    $X_k = B + A(s_k,\omega), \qquad B = C^{-1} \bar G, \qquad A(t,\omega) = -C^{-1} G(t,\omega).$

  39. Exponentially bounded flow
    Definition. An exponentially bounded flow is a two-parameter $(t_0,t)$ family $\{\Gamma(t_0,t,\omega)\}_{t_0 \le t}$ of flows such that, $\forall\, \omega \in \Omega$:
    1. $\Gamma(t_0,t_0,\omega) = I_N$ and $\Gamma(t_0,t,\omega)\, \Gamma(t,s,\omega) = \Gamma(t_0,s,\omega)$ whenever $t_0 \le t \le s$;
    2. for each $v \in \mathbb{R}^N$ and $\omega \in \Omega$, $(t_0,t) \mapsto \Gamma(t_0,t,\omega)\, v$ is continuous for $t_0 \le t$;
    3. there are $M > 0$ and $m > 0$ such that $\| \Gamma(s,t,\omega) \| \le M\, e^{-m(t-s)}$, $s \le t$.

  40.–41. Exponentially bounded flow
    Proposition. Let $\sigma_1$ be the largest eigenvalue of $\bar G$. If $\sigma_1 < g_L$, then the flow $\Gamma$ in our model has the exponentially bounded flow property.
    Remark. Typical electric-synapse conductances are of order 1 nanosiemens, while the leak conductance of retinal ganglion cells is of order 50 microsiemens. Therefore this condition is compatible with the biophysical values of conductances in the retina.
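Checking the proposition's condition amounts to comparing the largest eigenvalue of Ḡ with the leak conductance. The sketch below uses a random symmetric gap-junction matrix with couplings of order 1 nS and the leak value quoted in the remark; these numbers are illustrative, not data.

```python
# Minimal sketch (illustrative numbers, not data): checking the condition
# sigma_1(Gbar) < g_L for a random symmetric gap-junction matrix with
# couplings of order 1 nS, against the leak value quoted in the remark.
import numpy as np

rng = np.random.default_rng(3)
N = 50
gbar = 1e-9 * rng.random((N, N))            # ~1 nS electric conductances
Gbar = 0.5 * (gbar + gbar.T)                # gap junctions are symmetric
np.fill_diagonal(Gbar, 0.0)
sigma_1 = np.linalg.eigvalsh(Gbar).max()    # largest eigenvalue of Gbar
g_L = 50e-6                                 # leak conductance from the remark (S)
print(f"sigma_1 = {sigma_1:.2e} S, g_L = {g_L:.2e} S, "
      f"condition holds: {sigma_1 < g_L}")
```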

  42. Exponentially bounded flow
    Theorem. If $\Gamma(t_0,t,\omega)$ is an exponentially bounded flow, there is a unique strong solution for $t \ge t_0$, given by
    $V(t_0,t,\omega) = \Gamma(t_0,t,\omega)\, v + \int_{t_0}^{t} \Gamma(s,t,\omega)\, f(s,\omega)\, ds + \frac{\sigma_B}{c} \int_{t_0}^{t} \Gamma(s,t,\omega)\, dW(s).$
    R. Wooster, "Evolution systems of measures for non-autonomous stochastic differential equations with Lévy noise", Communications on Stochastic Analysis, vol. 5, 353-370, 2011.
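As a sanity check of the variation-of-constants formula (in a scalar, deterministic setting with σ_B = 0, using assumed φ and f), the sketch below compares the formula against direct Euler integration of dV/dt = φ(t)V + f(t).

```python
# Minimal sketch (scalar, deterministic, sigma_B = 0, assumed phi and f):
# the variation-of-constants formula
#   V(t1) = Gamma(t0, t1) v + int_{t0}^{t1} Gamma(s, t1) f(s) ds,
# with Gamma(s, t) = exp(int_s^t phi(u) du), compared against direct Euler
# integration of dV/dt = phi(t) V + f(t).
import numpy as np

phi = lambda t: -1.0 + 0.5 * np.sin(2.0 * np.pi * t)   # assumed scalar "Phi"
f = lambda t: np.cos(3.0 * t)                          # assumed forcing term
t0, t1, v = 0.0, 2.0, 1.0
ts = np.linspace(t0, t1, 20001)
dt = ts[1] - ts[0]

# Direct Euler integration.
V = v
for t in ts[:-1]:
    V += dt * (phi(t) * V + f(t))

# Variation of constants, with a matching left-endpoint discretisation.
cum_phi = np.concatenate([[0.0], np.cumsum(phi(ts[:-1]) * dt)])   # int_{t0}^{ts[i]} phi
Gamma_to_t1 = np.exp(cum_phi[-1] - cum_phi)                       # Gamma(ts[i], t1)
V_voc = Gamma_to_t1[0] * v + np.sum(Gamma_to_t1[:-1] * f(ts[:-1]) * dt)
print(f"Euler: {V:.5f}   variation of constants: {V_voc:.5f}")
```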

  43.–44. Membrane potential decomposition
    $V(t,\omega) = V^{(d)}(t,\omega) + V^{(noise)}(t,\omega),$
    $V^{(d)}(t,\omega) = V^{(cs)}(t,\omega) + V^{(ext)}(t,\omega).$
