Theoretical neuroscience: From single neuron to network dynamics - PowerPoint PPT Presentation

SLIDE 1

Theoretical neuroscience: From single neuron to network dynamics

Nicolas Brunel

SLIDE 2

Outline

  • Single neuron stochastic dynamics
  • Network dynamics
  • Learning and memory
SLIDE 3

Single neurons in vivo seem highly stochastic

SLIDE 4

Single neuron stochastic dynamics: the LIF model

  • LIF neuron with deterministic + white noise inputs,

τm dV/dt = −V + µ(t) + σ(t)√τm η(t)

Spikes are emitted when V = Vt; the neuron is then reset to Vr.

  • P(V, t) is described by Fokker-Planck equation

τm ∂P(V, t)/∂t = (σ²(t)/2) ∂²P(V, t)/∂V² + ∂/∂V [(V − µ(t)) P(V, t)]
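As a concrete illustration, the stochastic LIF dynamics above can be integrated with a simple Euler-Maruyama scheme. This is a sketch, not from the talk: the function name, units (mV, ms), and parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_lif(mu=15.0, sigma=5.0, tau_m=20.0, Vt=20.0, Vr=10.0,
                 dt=0.02, T=5_000.0, seed=0):
    """Euler-Maruyama integration of tau_m dV/dt = -V + mu + sigma*sqrt(tau_m)*eta(t).

    Returns spike times (ms) of a single noisy LIF neuron with threshold Vt
    and reset Vr.  Units (mV, ms) and parameters are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    # the white-noise term sigma*sqrt(tau_m)*eta(t) contributes
    # sigma*sqrt(dt/tau_m)*N(0,1) to V over one time step
    noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(n_steps)
    V, spikes = Vr, []
    for i in range(n_steps):
        V += (dt / tau_m) * (mu - V) + noise[i]
        if V >= Vt:              # threshold crossing -> spike, then reset
            spikes.append(i * dt)
            V = Vr
    return np.array(spikes)

spikes = simulate_lif()
print(f"mean rate ~ {1000.0 * len(spikes) / 5_000.0:.1f} Hz")
```

With the mean input one noise standard deviation below threshold, firing is noise-driven and irregular, as in the in-vivo recordings mentioned above.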

SLIDE 5

Single neuron stochastic dynamics: the LIF model

  • P(V, t) is described by Fokker-Planck equation

τm ∂P(V, t)/∂t = (σ²(t)/2) ∂²P(V, t)/∂V² + ∂/∂V [(V − µ(t)) P(V, t)]

  • Boundary conditions:

– At threshold Vt: absorbing b.c.; the probability flux at Vt equals the firing probability ν(t):

P(Vt, t) = 0,   ∂P/∂V (Vt, t) = −2ν(t)τm / σ²(t)

– At reset potential Vr: what comes out at Vt must come back at Vr:

P(Vr⁻, t) = P(Vr⁺, t),   ∂P/∂V (Vr⁻, t) − ∂P/∂V (Vr⁺, t) = −2ν(t)τm / σ²(t)

SLIDE 6

LIF model: stationary inputs µ(t) = µ0, σ(t) = σ0

P0(V) = (2ν0τm/σ) exp(−(V − µ0)²/σ²) ∫_{(V−µ0)/σ}^{(Vt−µ0)/σ} exp(u²) Θ(u − (Vr−µ0)/σ) du

1/ν0 = τm√π ∫_{(Vr−µ0)/σ}^{(Vt−µ0)/σ} exp(u²)[1 + erf(u)] du

CV² = 2πν0²τm² ∫_{(Vr−µ0)/σ}^{(Vt−µ0)/σ} exp(x²) [∫_{−∞}^{x} exp(y²)(1 + erf(y))² dy] dx
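The stationary-rate formula can be evaluated numerically. The sketch below (function name and units are my own assumptions) uses scipy's `erfcx`, since exp(u²)(1 + erf(u)) = erfcx(−u), which avoids overflow when the integration limits are far below threshold.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def lif_stationary_rate(mu0, sigma0, tau_m=20.0, Vt=20.0, Vr=10.0):
    """Stationary LIF firing rate nu0 in spikes/ms, from the formula above.

    Uses exp(u^2) * (1 + erf(u)) == erfcx(-u) for numerical stability.
    Units (mV, ms) are illustrative assumptions.
    """
    integral, _ = quad(lambda u: erfcx(-u),
                       (Vr - mu0) / sigma0, (Vt - mu0) / sigma0)
    return 1.0 / (tau_m * np.sqrt(np.pi) * integral)

nu0 = lif_stationary_rate(mu0=15.0, sigma0=5.0)
print(f"nu0 ~ {1000.0 * nu0:.1f} Hz")   # rate in Hz when tau_m is in ms
```

For a mean input one standard deviation below threshold this gives a rate of roughly 10 Hz, consistent with noise-driven subthreshold firing.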

SLIDE 7

Time-dependent inputs

  • Given an arbitrary time-dependent input (µ(t), σ(t)), what is the instantaneous firing rate ν(t)?

SLIDE 8

Computing the linear firing rate response

  • Strategy:

– start with small time-dependent perturbations around the means,

µ(t) = µ0 + εµ1(t),   σ(t) = σ0 + εσ1(t)

– linearize the FP equation and obtain the linear response of P = P0 + εP1(t) and ν = ν0 + εν1(t) (solution of an inhomogeneous 2nd-order ODE):

ν1(t) = ∫^t [Rµ(t − t′)µ1(t′) + Rσ(t − t′)σ1(t′)] dt′

ν̃1(ω) = R̃µ(ω)µ̃1(ω) + R̃σ(ω)σ̃1(ω)

– Rµ and Rσ can be computed explicitly in terms of confluent hypergeometric functions.

– go to higher orders in ε...

SLIDE 9

LIF model: linear rate response Rµ(ω) (changes in µ)

  • High-frequency behavior: Rµ(ω) ∼ ν0/(σ0√(2iωτm))
  • Translates into a √t initial response to step currents.
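This fast response can be checked numerically. The hedged sketch below (all parameter values are illustrative assumptions) applies a step in µ to a large population of independent noisy LIF neurons and compares the population rate before and after the step.

```python
import numpy as np

# Population response of independent noisy LIF neurons to a step in mu.
rng = np.random.default_rng(1)
N, tau_m, Vt, Vr, sigma = 20_000, 20.0, 20.0, 10.0, 5.0
dt, T, t_step = 0.05, 200.0, 100.0          # ms; mu steps up at t = 100 ms
n_steps = int(T / dt)

V = Vr + (Vt - Vr) * rng.random(N)           # spread initial conditions
rate = np.zeros(n_steps)                     # instantaneous population rate (Hz)
for i in range(n_steps):
    mu = 15.0 if i * dt < t_step else 17.0   # step in the mean input
    V += (dt / tau_m) * (mu - V) + sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    fired = V >= Vt
    V[fired] = Vr
    rate[i] = 1000.0 * fired.mean() / dt     # spikes/neuron/s

pre = rate[int((t_step - 50) / dt):int(t_step / dt)].mean()
post = rate[int(t_step / dt):int((t_step + 50) / dt)].mean()
print(f"population rate: {pre:.1f} Hz before step, {post:.1f} Hz after")
```

Because the population rate is read out across many neurons, it tracks the input change within a few time steps rather than on the membrane time scale.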

SLIDE 10

More realistic models

  • Colored noise inputs:

τm dV/dt = −V + µ(t) + σ(t)W,   τs dW/dt = −W + √τm η(t)

  • More realistic spike generation:

τm dV/dt = −V + F(V) + µ(t) + σ(t)√τm η(t)

A spike is emitted when V → ∞; the neuron is then reset at Vr.

Model and high-ω behavior of Rµ(ω):
– colored-noise LIF: Rµ(ω) ∼ τs/τm
– EIF: F(V) = ∆T exp((V − VT)/∆T), Rµ(ω) ∼ 1/ω
– QIF: F(V) ∼ V², Rµ(ω) ∼ 1/ω²
– PIF: F(V) ∼ V^α, Rµ(ω) ∼ 1/ω^(α/(α−1))
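A sketch of the EIF variant can be simulated the same way; here the divergence V → ∞ is cut off at a finite V_spike, and all parameter values (∆T, VT, µ, σ) are illustrative assumptions.

```python
import numpy as np

# Noisy EIF neuron: tau_m dV/dt = -V + Delta_T*exp((V - VT)/Delta_T) + mu + noise.
rng = np.random.default_rng(2)
tau_m, VT, Delta_T, Vr, V_spike = 20.0, 20.0, 1.0, 10.0, 40.0   # mV, ms
mu, sigma, dt, T = 18.0, 4.0, 0.02, 5_000.0
n_steps = int(T / dt)
noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(n_steps)

V, spikes = Vr, []
for i in range(n_steps):
    F = Delta_T * np.exp((V - VT) / Delta_T)  # exponential spike-generating current
    V += (dt / tau_m) * (-V + F + mu) + noise[i]
    if V >= V_spike:                          # numerical stand-in for V -> infinity
        spikes.append(i * dt)
        V = Vr
print(f"EIF rate ~ {1000.0 * len(spikes) / T:.1f} Hz")
```

Once V exceeds VT the exponential term dominates and the upswing is intrinsic, which is what softens the high-frequency cut-off to 1/ω.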

SLIDE 11

Conclusions

  • In simple spiking neuron models, the response of the instantaneous firing rate can be much faster than the response of the membrane;

  • EIF model: fits pyramidal cell data well, and makes it possible to understand quantitatively the factors controlling the speed of the firing rate response;

  • The cut-off frequency of real neurons is very high (∼200 Hz or higher) ⇒ allows very fast population response to time-dependent inputs;

  • The EIF can be mapped to both LNP and Wilson-Cowan-type firing rate models, with a time constant that depends on intrinsic parameters of the cell, and on the instantaneous rate itself.

SLIDE 12

Local networks in cerebral cortex

  • Size ∼ cubic millimeter
  • Total number of cells ∼ 100,000
  • Types of cells:

– pyramidal cells: excitatory (80%)
– interneurons: inhibitory (20%)

  • Connection probability ∼ 10%
  • Synapses/cell: ∼ 10,000

(total ∼10⁹ synapses/mm³)

  • Each synapse has a small effect: depolarization/hyperpolarization ∼ 1-10% of threshold.
SLIDE 13

Randomly connected network of LIFs

  • N neurons. Each neuron receives K < N randomly chosen connections from other neurons.
  • Couplings between neurons J (J < 0: inhibition; JK is the total coupling strength).
  • Neurons = leaky integrate-and-fire:

τm dVi(t)/dt = −Vi + Ii

Threshold Vt, reset Vr

  • Total input of a neuron i at time t

Ii(t) = µext + J Σ_j cij Σ_k S(t − t_j^k) + σext √τm ηi(t)

where S(t) describes the time course of the PSCs, t_j^k is the time of the kth spike of neuron j, and the cij are chosen randomly such that Σ_j cij = K for all i.
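The network above can be simulated directly. The sketch below makes simplifying assumptions, stated here rather than taken from the slides: delta-pulse synapses of size J in place of a general S(t), and illustrative parameter values for an inhibition-dominated network with delay D.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, J, D = 500, 50, -0.3, 1.5            # J < 0 (inhibition), D = delay in ms
tau_m, Vt, Vr = 20.0, 20.0, 10.0
mu_ext, sigma_ext = 24.0, 1.0
dt, T = 0.1, 1000.0
n_steps, d_steps = int(T / dt), int(D / dt)

# 0/1 connectivity: each row i has exactly K presynaptic ones (sum_j c_ij = K)
C = np.zeros((N, N))
for i in range(N):
    C[i, rng.choice(N, size=K, replace=False)] = 1.0

V = Vr + (Vt - Vr) * rng.random(N)          # spread initial conditions
buffer = np.zeros((d_steps, N))             # circular buffer of recent spikes
total = 0
for i in range(n_steps):
    delayed = buffer[i % d_steps]           # spikes emitted D ms ago
    V += (dt / tau_m) * (mu_ext - V) + J * (C @ delayed) \
         + sigma_ext * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    fired = V >= Vt
    V[fired] = Vr
    buffer[i % d_steps] = fired
    total += int(fired.sum())
nu = total / N / (T / 1000.0)               # mean rate in Hz
print(f"mean network rate ~ {nu:.1f} Hz")
```

Suprathreshold external drive balanced by recurrent inhibition keeps the network at a low, irregular firing rate.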

SLIDE 14

Analytical description of irregular state

  • If neurons are firing approximately as Poisson processes, and the connection probability is small (K/N ≪ 1), then the recurrent inputs to a neuron can be approximated as

Ii(t) = µext + JKτν(t − D) + √(σext² + J²Kν(t − D)τ) √τ ηi(t)

where the ηi(t) are uncorrelated white noise.

  • We can again use the Fokker-Planck formalism,

τ ∂P/∂t = (σ²(t)/2) ∂²P/∂V² + ∂/∂V [(V − µ(t))P],

where

– µ(t) = average input (external − recurrent inhibitory):

µ(t) = µext + JKτν(t − D)

– σ(t) = ‘intrinsic’ noise due to recurrent interactions:

σ²(t) = σext² + J²Kν(t − D)τ

SLIDE 15

Asynchronous state, linear stability analysis

  • 1. Asynchronous state (constant instantaneous firing rate ν0):

1/ν0 = τm√π ∫_{(Vr−µ0)/σ0}^{(Vt−µ0)/σ0} exp(u²)[1 + erf(u)] du

µ0 = µext + KJν0τm,   σ0² = σext² + KJ²ν0τm

  • 2. Linear stability analysis:

P(V, t) = P0(V) + εP1(V, λ) exp(λt),   ν(t) = ν0 + εν1(λ) exp(λt), ... ⇒ obtain eigenvalues λ

  • 3. Instabilities of the asynchronous state occur when Re(λ) = 0;
  • 4. Weakly non-linear analysis: behavior beyond the bifurcation point;
  • 5. Finite size effects.
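Step 1 can be implemented as a damped fixed-point iteration on the self-consistency equations. This is a sketch with illustrative parameters (not values from the talk); exp(u²)(1 + erf(u)) is evaluated as scipy's erfcx(−u) for numerical stability.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

tau_m, Vt, Vr = 20.0, 20.0, 10.0           # ms, mV (illustrative units)
K, J = 50, -0.3                             # inhibitory recurrent coupling
mu_ext, sigma_ext2 = 24.0, 1.0

def siegert(mu0, sigma0):
    """Stationary LIF rate in spikes/ms; exp(u^2)(1+erf(u)) == erfcx(-u)."""
    I, _ = quad(lambda u: erfcx(-u), (Vr - mu0) / sigma0, (Vt - mu0) / sigma0)
    return 1.0 / (tau_m * np.sqrt(np.pi) * I)

nu = 0.01                                   # initial guess (spikes/ms)
for _ in range(100):
    mu0 = mu_ext + K * J * nu * tau_m       # mean input, self-consistent
    sigma0 = np.sqrt(sigma_ext2 + K * J**2 * nu * tau_m)
    nu = 0.5 * nu + 0.5 * siegert(mu0, sigma0)  # damped fixed-point update
print(f"self-consistent rate nu0 ~ {1000.0 * nu:.1f} Hz")
```

The damping factor of 0.5 is a design choice: the undamped map can oscillate because stronger firing recruits more inhibition, which lowers the rate again.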
SLIDE 16

Randomly connected E-I networks


SLIDE 17

Conclusions - network dynamics

  • Network dynamics can be studied analytically using Fokker-Planck formalism;
  • Inhibition-dominated networks settle in highly irregular states, which can be either asynchronous or synchronous;

  • Such irregular states reproduce some of the main experimentally observed features of spontaneous activity in cortex in vivo:
– highly irregular firing of single cells at low rates;
– broad distribution of firing rates (close to lognormal);
– weak correlations between cells;

  • Synchronous irregular oscillations are similar to the fast oscillations observed in cerebellum, hippocampus, and cerebral cortex;

  • LFP spectra from all these structures can be fitted quantitatively by the model;

  • Irregularity persists in randomly connected networks in the absence of noise;

  • Irregular dynamics can be truly chaotic (positive Lyapunov exponents) or ‘stably chaotic’ (negative Lyapunov exponents).

SLIDE 18

Synaptic plasticity, learning and memory

SLIDE 19

Synaptic plasticity and network dynamics: future challenges

  • So far, most studies of learning and memory in networks have focused on networks with fixed connectivity (typically Hebbian, assumed to be the result of learning).

  • With Hebbian connectivity matrices, networks become multistable, with one background state and a multiplicity of ‘selective’ attractors representing stored memories.

  • Challenges:

– Devise ‘learning rules’ (i.e. dynamical equations for synapses) consistent with known data;
– Insert such rules in networks, and study how inputs with prescribed statistics shape the network’s attractor landscape;
– Study the maximal storage capacity of the network, with different types of attractors;
– Which learning rules are able to reach maximal capacity?
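The multistability described above can be illustrated with the simplest Hebbian model, a binary Hopfield network (a sketch of my own, not from the talk; sizes and parameters are arbitrary): each stored pattern becomes a fixed-point attractor that the dynamics retrieves from a degraded cue.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 5                              # neurons, stored patterns (P << N)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N            # Hebbian connectivity matrix
np.fill_diagonal(W, 0.0)                   # no self-coupling

def recall(state, n_sweeps=10):
    """Asynchronous sign updates; converges to a nearby attractor."""
    s = state.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1   # corrupt 20% of the cue
overlap = recall(cue) @ patterns[0] / N
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

At this low memory load the corrupted cue falls well inside the basin of attraction, so the overlap with the stored pattern returns close to 1.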