SLIDE 1

Stochastic Nerve Axon Equations

Wilhelm Stannat
Institut für Mathematik, Fakultät II, TU Berlin
Bernstein Center for Computational Neuroscience Berlin

Linz, December 13, 2016

SLIDE 2

Stochastic processes in Neuroscience

◮ Modelling impact of noise in neural systems on all scales

  • microscopic - e.g., in ion channel dynamics
  • mesoscopic - on observables in single neurons, e.g. the membrane potential
  • macroscopic - e.g., in neural populations

◮ Analysis

  • mathematical framework for continuum limits (bridging scales)
  • multiscale analysis, w.r.t. coherent structures
  • model reduction, w.r.t. observables, mean field theories

◮ Numerical approximation

  • strong and weak approximation errors
  • robust estimation
SLIDE 3

Hodgkin-Huxley Equations (1952)

  • math. description for the generation of Action Potentials (AP)

τ ∂t v = λ² ∂xx v − gNa m³h (v − ENa) − gK n⁴ (v − EK) − gL (v − EL) + I

dp/dt = αp(v)(1 − p) − βp(v) p ,  p ∈ {m, n, h}

where

v membrane potential, v = v(t, x), t ≥ 0, x ∈ [0, L]

m, n, h gating variables, 0 ≤ m, n, h ≤ 1

τ resp. λ specific time resp. space constants

gNa, gK , gL conductances

ENa, EK , EL resting potentials

αp(v) = a¹p (v + Ap) / (1 − exp(−a²p (v + Ap))) ,  βp(v) = b¹p exp(−b²p (v + Bp))

(figure: typical shape of v)
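As a quick numerical illustration of these dynamics, here is a minimal forward-Euler sketch of the space-clamped (point-neuron) system, i.e. the ODE obtained by dropping the λ²∂xx term. The concrete rate functions and conductances below are the classical 1952 values and are an assumption of this sketch, not taken from the slide:

```python
import math

# Space-clamped Hodgkin-Huxley: classical 1952 parameters (an assumption of
# this sketch); v is measured relative to rest in mV, time in ms, tau = C = 1.
g_na, g_k, g_l = 120.0, 36.0, 0.3    # conductances gNa, gK, gL (mS/cm^2)
e_na, e_k, e_l = 115.0, -12.0, 10.6  # reversal potentials ENa, EK, EL (mV)

def alpha(p, v):
    # opening rates alpha_p(v), of the stated form a1_p (v+A_p)/(1-exp(-a2_p (v+A_p)))
    if p == "m":
        return 0.1 * (25.0 - v) / (math.exp((25.0 - v) / 10.0) - 1.0)
    if p == "n":
        return 0.01 * (10.0 - v) / (math.exp((10.0 - v) / 10.0) - 1.0)
    return 0.07 * math.exp(-v / 20.0)  # p == "h"

def beta(p, v):
    # closing rates beta_p(v), of the stated form b1_p exp(-b2_p (v+B_p))
    if p == "m":
        return 4.0 * math.exp(-v / 18.0)
    if p == "n":
        return 0.125 * math.exp(-v / 80.0)
    return 1.0 / (math.exp((30.0 - v) / 10.0) + 1.0)  # p == "h"

def simulate(I=10.0, T=50.0, dt=0.01):
    """Forward Euler for the point-neuron system with constant input I."""
    v, m, n, h = 0.0, 0.05, 0.32, 0.6  # start near the resting state
    vs = []
    for _ in range(int(T / dt)):
        ionic = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        m += dt * (alpha("m", v) * (1.0 - m) - beta("m", v) * m)
        n += dt * (alpha("n", v) * (1.0 - n) - beta("n", v) * n)
        h += dt * (alpha("h", v) * (1.0 - h) - beta("h", v) * h)
        v += dt * (I - ionic)
        vs.append(v)
    return vs, (m, n, h)

vs, gates = simulate()  # I = 10 is suprathreshold here: the trace spikes
```

With this suprathreshold input the trace exhibits the typical AP shape sketched above; the gating variables remain in [0, 1] throughout.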

SLIDE 4

Hodgkin-Huxley Equations (1952)

  • math. description for the generation of Action Potentials (AP)

τ ∂t v = λ² ∂xx v − gNa m³h (v − ENa) − gK n⁴ (v − EK) − gL (v − EL) + I

dp/dt = αp(v)(1 − p) − βp(v) p ,  p ∈ {m, n, h}

biophysiologically relevant feature: there exist I− < I+ such that the system is
  • excitable for I < I−
  • oscillatory for I ∈ (I−, I+)
  • inhibitory for I > I+

SLIDE 5

In reality more like this ...

[Höpfner, Math. Biosciences, 2007]

due to fluctuations between the open and closed states of the ion channels regulating v

SLIDE 6

Channel noise

Illustration: measurements of single Na-channel in the giant axon of squid (considered by Hodgkin-Huxley) [Vandenberg, Bezanilla, Biophys. J., 1991]

SLIDE 7

Channel noise impact on APs

◮ spontaneous spiking (due to random opening of sufficient numbers of Na-channels)
◮ time jitter - spike time distribution increases with time
◮ APs can split up or annihilate
◮ propagation failure

places limits on the axon diameter (around 0.1µm), hence also on the wiring density

e.g., [White, et al., Trends Neurosci. 2000, Faisal, et al., Current Biology 2005, Faisal, et al., PLOS 2007]

SLIDE 8

Adding noise to Hodgkin-Huxley equations

Adding channel noise yields a stochastic partial differential equation.

Current noise

τ ∂t v = λ² ∂xx v − gNa m³h (v − ENa) − gK n⁴ (v − EK) − gL (v − EL) + I + σ ∂t ξ(t, x)
dp/dt = αp(v)(1 − p) − βp(v) p ,  p ∈ {m, n, h}   (1)

(figure: sample solutions for σ = 0.2, σ = 0.35, σ = 0.6)

SLIDE 9

Adding noise to Hodgkin-Huxley equations

adding channel noise yields a stochastic PDE

τ ∂t v = λ² ∂xx v − gNa m³h (v − ENa) − gK n⁴ (v − EK) − gL (v − EL) + I + σ ∂t ξ(t, x)
dp/dt = αp(v)(1 − p) − βp(v) p ,  p ∈ {m, n, h}   (2)

(figure: two sample solution profiles, u vs. x)

features: subthreshold excitability (already well known for the point neuron); due to the spatial extension: spontaneous spiking, backpropagation, annihilation, propagation failure
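An SPDE of this type can be integrated with a straightforward Euler-Maruyama scheme. The sketch below uses a scalar bistable (Nagumo-type) nonlinearity as a stand-in for the full Hodgkin-Huxley right-hand side, so that front propagation and the effect of current noise are visible in a few lines; the nonlinearity, grid, and the √(dt/dx) noise scaling are illustrative assumptions:

```python
import numpy as np

def stochastic_cable(sigma, a=0.25, L=50.0, nx=101, T=120.0, dt=0.05, seed=0):
    """Euler-Maruyama for v_t = v_xx + v(1-v)(v-a) + sigma * xi(t,x) on [0, L]
    with Neumann boundaries; space-time white noise is discretized as
    independent N(0,1) increments scaled by sqrt(dt/dx) per grid cell."""
    rng = np.random.default_rng(seed)
    dx = L / (nx - 1)
    v = np.zeros(nx)
    v[: nx // 4] = 1.0  # excite the left quarter to launch a front
    for _ in range(int(T / dt)):
        lap = np.empty_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        lap[0] = 2.0 * (v[1] - v[0]) / dx**2      # reflecting left end
        lap[-1] = 2.0 * (v[-2] - v[-1]) / dx**2   # reflecting right end
        noise = sigma * np.sqrt(dt / dx) * rng.standard_normal(nx)
        v = v + dt * (lap + v * (1.0 - v) * (v - a)) + noise
    return v

v_det = stochastic_cable(sigma=0.0)    # deterministic front fills the cable
v_noisy = stochastic_cable(sigma=0.2)  # one noisy realization
```

Comparing realizations at different σ is exactly how the propagation-failure and spontaneous-activity phenomena on the following slides are produced.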

SLIDE 10

Illustration - subthreshold excitability

(already known from the point neuron case)

I = 6.0, σ = 0.0 I = 6.0, σ = 0.25

SLIDE 11

Illustration - spontaneous activation, backpropagation

I = 6.0, σ = 0.25 I = 2.0, σ = 0.36

SLIDE 12

Subunit noise

(classical) diffusion approximation for the Markov chain dynamics p(t) modelling the ion channel dynamics:

p(t) = p(0) + ∫₀ᵗ [αp(v(s))(1 − p(s)) − βp(v(s)) p(s)] ds + M_t^(Np)

with

E[(M_t^(Np))²] = (1/Np) ∫₀ᵗ E[αp(v(s))(1 − p(s)) + βp(v(s)) p(s)] ds

τ ∂t v = λ² ∂xx v − gL (v − EL) − gNa(X)(v − ENa) − gK(X)(v − EK) + ξv
∂t X = α(v)(1 − X) − β(v) X + σ(v, X) ξX

Remark: a representation in terms of a Wiener-noise-driven SDE causes trouble at the reflecting boundaries {0, 1}
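For a single gating variable with frozen (voltage-independent) rates, the diffusion approximation above is a one-line Euler-Maruyama scheme. The rates α = β = 1 and channel number N = 100 are illustrative assumptions; the final clipping step is a crude stand-in for the reflecting behaviour at {0, 1} mentioned in the remark:

```python
import math
import random

def gating_diffusion(alpha=1.0, beta=1.0, N=100, T=50.0, dt=0.01, seed=1):
    """Euler-Maruyama for the diffusion approximation of one gating variable,
    dp = (alpha (1-p) - beta p) dt + sqrt((alpha (1-p) + beta p) / N) dW,
    with values clipped back into [0, 1] after each step (a crude stand-in
    for the reflecting behaviour at the boundaries)."""
    rng = random.Random(seed)
    p, path = 0.0, []
    sqdt = math.sqrt(dt)
    for _ in range(int(T / dt)):
        drift = alpha * (1.0 - p) - beta * p
        diffusion = math.sqrt(max(alpha * (1.0 - p) + beta * p, 0.0) / N)
        p += drift * dt + diffusion * sqdt * rng.gauss(0.0, 1.0)
        p = min(1.0, max(0.0, p))  # clip to the physical range [0, 1]
        path.append(p)
    return path

path = gating_diffusion()
mean_tail = sum(path[2500:]) / len(path[2500:])  # time average after burn-in
```

For constant rates the path fluctuates around the equilibrium α/(α+β) with fluctuation size of order 1/√N, which is exactly the channel-noise scaling in the martingale bound above.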

SLIDE 13

Conductance noise

leads to a PDE with random coefficients:

τ ∂t v = λ² ∂xx v − gL (v − EL) − gNa(X, ξNa)(v − ENa) − gK(X, ξK)(v − EK) + ξv
∂t X = α(v)(1 − X) − β(v) X + σ(v, X) ξX

◮ comparison and validation of the different noise types is, except for case studies, largely open
◮ likewise their derivation from first principles

SLIDE 14

Illustration: Stochastic Hodgkin-Huxley equations

(figure: six realizations, u vs. x)

realizations of propagation failure (resp. spontaneous activity) in more realistic models, see Sauer, S., J Comput Neurosci 2016

SLIDE 15

Propagation failure - numerical studies

Tuckwell, et al. (2008, 2010, 2011) - numerical study of P(propagation failure) w.r.t. σ

(figure from [Tuckwell, Neural Computation, 2008])

SLIDE 16

Propagation failure - computational approach

detecting propagation failure via

Φ(v) := ∫₀ᴸ (v(x) − v∗) dx ,  v∗ = resting potential

failure occurs w.r.t. a given threshold θ if Φ(v(t)) < θ for some t ∈ [T0, T]; hence we are interested in computing

pσ := Pσ( min_{T0 ≤ t ≤ T} Φ(v(t)) < θ )

SLIDE 17

Spontaneous activity - computational approach

detecting spontaneous activity using

Φ(v) = ∫₀ᴸ (v(x) − v∗) dx ,  v∗ = resting potential

activity occurs w.r.t. a given threshold θ if Φ(v(t)) > θ for some t ∈ [T0, T]; this leads to the probability

sσ := Pσ( max_{T0 ≤ t ≤ T} Φ(v(t)) > θ )
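In practice both pσ and sσ are estimated by counting threshold crossings over Monte Carlo trajectories of Φ(v(t)). A small estimator sketch; the list-of-lists trajectory format and the window index i0 (corresponding to T0) are assumptions:

```python
def crossing_probability(paths, theta, mode, i0=0):
    """Fraction of trajectories (lists of Phi-values) whose running extremum
    on the window [i0:] crosses the threshold theta:
    mode='failure' estimates p_sigma = P(min Phi < theta),
    mode='activity' estimates s_sigma = P(max Phi > theta)."""
    hits = 0
    for path in paths:
        window = path[i0:]
        if mode == "failure" and min(window) < theta:
            hits += 1
        elif mode == "activity" and max(window) > theta:
            hits += 1
    return hits / len(paths)

# toy check with hand-made trajectories of Phi
paths = [[3, 2, 1], [3, 3, 3], [3, 4, 5], [3, 0, 4]]
p_fail = crossing_probability(paths, theta=1.5, mode="failure")
s_act = crossing_probability(paths, theta=4.5, mode="activity")
```

The same routine serves both detection problems; only the direction of the crossing changes.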

SLIDE 18

Numerical Illustrations

(figure, left: pσ vs. σ for θ = 0.5, 0.2, 0 and the reference pref; right: sσ vs. σ for θ = 0.2, 0.4, 0.5, 0.6)

typical plots for pσ vs. σ (resp. sσ vs. σ)

SLIDE 19

Model reduction w.r.t. Φ

Assuming the AP v̂ is locally exponentially attracting with rate κ∗, hence

d(v − v̂) ≈ −κ∗ (v − v̂) dt + σ dξ(t)

implies

dΦ(v(t)) ≈ κ∗ (c − Φ(v(t))) dt + σ̃ dβ(t)

where

◮ c = ∫₀ᴸ (v̂(t, x) − v∗) dx, independent of time
◮ σ̃² = σ² (1/t) Var( ∫₀ᴸ W(t, x) dx )
◮ (β(t)) a 1-dim Brownian motion

and reduces the problem to computing first-passage-time probabilities of the 1-dim OU process

dΦ̃ = κ∗ (c − Φ̃) dt + σ̃ dβ(t)
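The reduced problem is cheap to simulate directly. A Monte Carlo sketch for the first-passage probability of this OU process, using the κ∗ = 0.2, c = 8.6, θ = 5 values quoted later for the FHN fit; the time horizon, step size, path count, and noise levels are assumptions of this sketch:

```python
import math
import random

def ou_failure_prob(sigma, kappa=0.2, c=8.6, theta=5.0,
                    T=20.0, dt=0.02, n_paths=500, seed=2):
    """Monte Carlo estimate of P(min_{[0,T]} Phi(t) < theta) for the OU
    process d Phi = kappa (c - Phi) dt + sigma d beta, started at Phi(0) = c."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        phi = c
        for _ in range(int(T / dt)):
            phi += kappa * (c - phi) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
            if phi < theta:
                hits += 1
                break
    return hits / n_paths

p_small = ou_failure_prob(sigma=0.1)  # threshold ~23 stationary sds below c
p_large = ou_failure_prob(sigma=2.0)  # threshold ~1.1 stationary sds below c
```

The stationary standard deviation is σ̃/√(2κ∗), so the crossing probability jumps from essentially 0 to order one as σ̃ grows, reproducing the sigmoidal pσ vs. σ curves on the surrounding slides.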

SLIDE 20

Numerical Illustrations

(figure: sσ vs. σ and pσ vs. σ for the full SPDE together with the OU approximations s̃σ, p̃σ at various thresholds θ)

comparison of pσ (resp. sσ) for the full SPDE with the 1-dim OU approximation

SLIDE 21

A better fit in the case of FHN

parameters for

◮ FHN system taken from Tuckwell, op. cit.
◮ OU approximation: κ∗ = 0.2, c = 8.6
◮ θ = 5

SLIDE 22

Analysis and Numerical Approximation

joint with Martin Sauer (TU Berlin)

realization of (2) as a (density-controlled) stochastic evolution equation

τ ∂t v = λ² ∂xx v − gNa m³h (v − ENa) − gK n⁴ (v − EK) − gL (v − EL) + I
dp/dt = αp(v)(1 − p) − βp(v) p ,  p ∈ {m, n, h}

crucial properties

◮ the equation is neither Lipschitz nor one-sided Lipschitz jointly in (v, m, n, h)
◮ conditioned on the ion-channel concentrations X = (m, n, h), the first equation is linear in v (one-sided Lipschitz would be sufficient for the general theory)
◮ conditioned on v, the second equation is a forward Kolmogorov equation; in particular, p0(x) ∈ [0, 1] implies p(t, x) ∈ [0, 1]

SLIDE 23

Abstract setting

Mathematical modelling as a stochastic evolution equation on the function space H = L²(0, 1):

dv(t) = [∆v(t) + f(v(t), X(t))] dt + B dW(t) ,
dXi(t) = fi(v(t), Xi(t)) dt + Bi(v(t), Xi(t)) dWi(t) .

Assumption A (local Lipschitz continuity and conditional monotonicity)

|f(v, x)|, |∇f(v, x)| ≤ L (1 + |v|^(r−1)) (1 + ρ(x)) for some r ∈ [2, 4]
|fi(v, xi)|, |∇fi(v, xi)| ≤ L (1 + ρi(|v|)) (1 + |xi|) for ρi s.th. ρi(v) ≤ e^(α|v|)
∂v f(v, x) ≤ L (1 + ρ(x)) ,  ∂xi fi(v, xi) ≤ L

SLIDE 24

Abstract setting, ctd.

Mathematical modelling as a stochastic evolution equation on the function space H = L²(0, 1):

dv(t) = [∆v(t) + f(v(t), X(t))] dt + B dW(t) ,
dXi(t) = fi(v(t), Xi(t)) dt + Bi(v(t), Xi(t)) dWi(t) .

Assumption B (recurrence for the voltage & invariance of [0, 1] for the gating variables)

∂v f(v, x) ≤ −κK for all |v| > K, 0 ≤ x ≤ 1
fi(v, xi) ≥ 0 for all xi ≤ 0, v ∈ R
fi(v, xi) ≤ 0 for all xi ≥ 1, v ∈ R

SLIDE 25

Abstract setting, ctd.

dv(t) = [∆v(t) + f(v(t), X(t))] dt + B dW(t) ,
dXi(t) = fi(v(t), Xi(t)) dt + Bi(v(t), Xi(t)) dWi(t) .

Assumption C

B ∈ L₂(H, H¹), i.e. (Bv)(x) = ∫₀¹ b(x, y) v(y) dy with b ∈ H¹([0, 1]²)

Bi : H × H^d → L₂(H) with integral kernels

(Bi(v, x)u)(x) = 1_{0≤x≤1} ∫₀¹ bi(v(x), x(x), x, y) u(y) dy

bi(v, x) ∈ L²([0, 1]²) being Lipschitz w.r.t. v and x,

|bi(v1, x1, x, y) − bi(v2, x2, x, y)| ≤ L (|v1 − v2| + |x1 − x2|) ,

with linear growth

‖Bi(v, x)‖_{L₂(H,H¹)} ≤ ‖bi(v, x)‖_{H¹([0,1]²)} ≤ L (‖v‖_{H¹} + ‖x‖_{H¹d})

SLIDE 26

A priori solution sets

X := { X Ft-adapted : 0 ≤ X(t, x) ≤ 1 P-a.s., a.e., for all t ∈ [0, T] }

V := { v Ft-adapted : ‖v(t)‖L∞ ≤ Rt P-a.s. for all t ∈ [0, T], for some Rt with Gaussian moments E[ exp(α R_T² / 2) ] < ∞ for some α > 0 }

SLIDE 27

2-step fixed point iteration

given (vⁿ, Xⁿ) ∈ V × X:

Step 1: given Xⁿ, solve for vⁿ⁺¹ = v_{Xⁿ} ∈ V
Step 2: conversely, given vⁿ⁺¹, solve for Xⁿ⁺¹ = X_{vⁿ⁺¹} ∈ X

set n → n + 1

SLIDE 28

Step 1

dvⁿ⁺¹(t) = ∆vⁿ⁺¹(t) dt + f(vⁿ⁺¹(t), Xⁿ(t)) dt + B dW(t)

Decompose vⁿ⁺¹(t) = Z(t) + Y(t), where

dY(t) = ∆Y(t) dt + B dW(t)

is an Ornstein-Uhlenbeck process, and

∂t Z(t) = ∆Z(t) + f(Z(t) + Y(t), Xⁿ(t))

solves a random PDE.

SLIDE 29

Step 1, ctd.

verify vⁿ⁺¹ ∈ V using a cutoff via the normal contraction

φ(v) := min/max{0, v ± R ± R_t^Y}  with  R_t^Y := sup_{s ∈ [0, t]} ‖Y(s)‖L∞

SLIDE 30

Step 2

dXiⁿ⁺¹(t) = fi(vⁿ⁺¹(t), Xiⁿ⁺¹(t)) dt + Bi(vⁿ⁺¹(t), Xiⁿ⁺¹(t)) dWi(t)

verify Xⁿ⁺¹ ∈ X using cutoffs via the normal contractions

φ(xi) := max{0, xi}  and  φ(xi) := min{1, xi}

SLIDE 31

Main result: Existence & Uniqueness

Theorem 1 [Sauer, S, Math Comp 2016]

◮ p ≥ max{2(r − 1), 4}
◮ v0 ∈ Lᵖ(Ω, F0, P; H) with Gaussian moments (w.r.t. ‖·‖L∞)
◮ x0 ∈ Lᵖ(Ω, F0, P; H^d) with 0 ≤ x0 ≤ 1 P-a.s. for a.e. x

Then there exists a unique variational solution (v, X) with v ∈ V and X ∈ X.

SLIDE 32

Spatial approximation

◮ no previous results for this setting
◮ neither weak nor strong rates
◮ we restrict to a finite difference approximation, but a general finite element method (w.r.t. H¹) should work as well

(figure: equidistant grid 1/n, 2/n, 3/n, 4/n, … with nodal values vn_1, vn_2, vn_3, vn_4, …)

SLIDE 33

Spatial approximation, ctd.

Well-known and simple finite difference method

(figure: grid 1/n, 2/n, 3/n, 4/n, … with nodal values vn_1, …, vn_4 joined by linear interpolation)

(D+ v)_k = n (v_{k+1} − v_k) ,  (D− v)_k = n (v_k − v_{k−1}) ,  (∆n v)_k = n² (v_{k+1} − 2 v_k + v_{k−1})
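The three difference operators translate directly into code. A short sketch restricted to interior nodes, together with the standard check that ∆n is exact on quadratics:

```python
def d_plus(v, n):
    # (D+ v)_k = n (v_{k+1} - v_k)
    return [n * (v[k + 1] - v[k]) for k in range(len(v) - 1)]

def d_minus(v, n):
    # (D- v)_k = n (v_k - v_{k-1})
    return [n * (v[k] - v[k - 1]) for k in range(1, len(v))]

def laplace_n(v, n):
    # (Delta_n v)_k = n^2 (v_{k+1} - 2 v_k + v_{k-1}) at interior nodes
    return [n**2 * (v[k + 1] - 2.0 * v[k] + v[k - 1])
            for k in range(1, len(v) - 1)]

n = 10
v = [(k / n) ** 2 for k in range(n + 1)]  # v(x) = x^2 sampled on the grid k/n
lap = laplace_n(v, n)                      # second difference of a quadratic
```

For v(x) = x² the discrete Laplacian returns the exact second derivative 2 at every interior node, since the central second difference is exact on polynomials of degree ≤ 3.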

SLIDE 34

Spatial approximation, ctd.

Well-known and simple finite difference method

(figure: grid 1/n, 2/n, 3/n, 4/n, … with nodal values vn_1, …, vn_4 joined by linear interpolation)

ṽ ∈ H¹(0, 1) linear interpolation:

D ṽ(x) = Σ_k n (v_{k+1} − v_k) 1_{[k/n, (k+1)/n)}(x) ∈ H

∆ ṽ(x) = Σ_k n² (v_{k+1} − 2 v_k + v_{k−1}) δ_{k/n}(x) ∈ H¹(0, 1)∗

SLIDE 35

Spatial approximation, ctd.

Discretization of the noise part without additional assumptions:

β_k(t) := ⟨W(t), |I_k|^(−1/2) 1_{I_k}⟩_H

bⁿ_{k,l} := (|I_k| |I_l|)^(−1) ∫_{I_k} ∫_{I_l} b(x, y) dy dx

(BW(t))_k ≈ Σ_l bⁿ_{k,l} ⟨W(t), 1_{I_l}⟩_H = Σ_l bⁿ_{k,l} |I_l|^(1/2) β_l(t) =: (Bn Pn W(t))_k

SLIDE 36

Main result: Numerical Approximation

Key observation: X̃ⁿ ∈ X, ṽⁿ ∈ V with the same uniform bound Rt as for v, i.e. uniform bounds for the supremum in terms of the Ornstein-Uhlenbeck process Y.

Theorem 2 [Sauer, S, Math Comp 2016]
Define the error as Eⁿ(t) := (v(t) − ṽⁿ(t), X(t) − X̃ⁿ(t)); then there exist a constant C and a process Gt < ∞ P-a.s. such that

E[ sup_{t ∈ [0,T]} e^(−Gt) ‖Eⁿ(t)‖²_{H^(d+1)} ] ≤ 2 E[ ‖Eⁿ(0)‖²_{H^(d+1)} ] + C/n² .

SLIDE 37

Main result: Numerical Approximation, ctd.

Theorem 2 [Sauer, S, Math Comp 2016]
Define the error as Eⁿ(t) := (v(t) − ṽⁿ(t), X(t) − X̃ⁿ(t)); then there exist a constant C and a process Gt < ∞ P-a.s. such that

E[ sup_{t ∈ [0,T]} e^(−Gt) ‖Eⁿ(t)‖²_{H^(d+1)} ] ≤ 2 E[ ‖Eⁿ(0)‖²_{H^(d+1)} ] + C/n² .

Corollary
For all ε ∈ (0, 1) there exists a r.v. Cε, P-a.s. finite, such that

sup_{t ∈ [0,T]} ‖Eⁿ(t)‖²_{H^(d+1)} ≤ Cε / n^(1−ε) .

SLIDE 38

Strong convergence rates

NB: integrability of exp(Gt) yields strong convergence rates.

Assumption D
Let f and f = (fi)i satisfy

⟨ (f(v1, x1) − f(v2, x2), f(v1, x1) − f(v2, x2)) , (v1 − v2, x1 − x2) ⟩ ≤ L (|v1 − v2|² + |x1 − x2|²)

(e.g. FitzHugh-Nagumo)

Theorem 3 [Sauer, S, Math Comp 2016]
Under the additional Assumption D there exists a constant C such that

E[ sup_{t ∈ [0,T]} ‖Eⁿ(t)‖²_{H^(d+1)} ] ≤ 2 e^(LT) E[ ‖Eⁿ(0)‖²_{H^(d+1)} ] + C/n² .