SLIDE 1

Data Assimilation in a Two Neuron Network

Anna Miller

University of California San Diego

December 7, 2017

SLIDE 2

Data Assimilation

Objective

◮ Combine a theoretical model with experimental data to estimate physical properties of the system:

  dx(t)/dt = F(x(t), p) + η(t)

where η(t) is Gaussian noise.

◮ Deriving the path integral approach:

  P(X|Y) ∝ exp[−A(X|Y)]

where X is the estimated state history and Y is all data taken.

SLIDE 3

Annealing

A(X|Y) = Σ_{n=1}^{M} Σ_{l=1}^{L} (R_{m,l}/2) (y_l(t_n) − x_l(t_n))² + Σ_{n=0}^{M−1} Σ_{d=1}^{D} (R_{f,d}/2) (x_d(t_{n+1}) − f_d(x(t_n), p))²

where D is the number of state variables, L is the number of measured variables, and f_d advances the state of the system in time.

◮ We start with a small R_f value, solve for the most likely state history and parameters, then increase R_f by the formula: R_f = R_{f0} α^β
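The annealing loop above can be sketched in a few lines. This is a minimal sketch, not the talk's implementation: it swaps the neuron model for a toy one-dimensional system dx/dt = −p·x, fixes R_m, and uses scipy.optimize.minimize as a stand-in optimizer; all names and values here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

DT = 0.01  # integration time step (ms), hypothetical

def f(x, p):
    """Forward-Euler step of the toy model dx/dt = -p * x."""
    return x + DT * (-p * x)

def action(z, y, Rm, Rf):
    """A(X|Y): measurement term + model-error term (slide 3)."""
    x, p = z[:-1], z[-1]          # z packs the state path, then p
    meas = 0.5 * Rm * np.sum((y - x) ** 2)
    model = 0.5 * Rf * np.sum((x[1:] - f(x[:-1], p)) ** 2)
    return meas + model

# Twin experiment: generate a noisy trajectory with p_true = 3.0.
rng = np.random.default_rng(0)
M, p_true = 50, 3.0
x_true = np.empty(M)
x_true[0] = 1.0
for n in range(M - 1):
    x_true[n + 1] = f(x_true[n], p_true)
y = x_true + 0.001 * rng.standard_normal(M)

# Anneal: start with small Rf, re-minimize, then raise Rf = Rf0 * alpha**beta.
Rm, Rf0, alpha = 1.0, 1e-2, 1.5
z = np.concatenate([y, [1.0]])    # init path at the data, rough p guess
for beta in range(20):
    Rf = Rf0 * alpha ** beta
    z = minimize(action, z, args=(y, Rm, Rf), method="L-BFGS-B").x

print(round(z[-1], 2))            # estimated p, close to the true 3.0
```

Warm-starting each minimization from the previous β's solution is the point of the schedule: the model term is turned on gradually so the optimizer is not dropped into a rugged action surface at full model precision.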

SLIDE 4

Network

[Network diagram: image borrowed from Homework 4]

SLIDE 5

Model

C dV(t)/dt = g_Na m³(t) h(t) (E_Na − V(t)) + g_K n⁴(t) (E_K − V(t)) + g_L (E_L − V(t)) + I_inj(t) + r(t) g_glu (E_ex − V(t))

dx(t)/dt = (x_∞(V(t)) − x(t)) / τ_x

dr(t)/dt = α_r T_e[V(t)] (1 − r(t)) − β_r r(t)

x_∞(V(t)) = (1/2) [1 + tanh((V(t) − V_x0)/dV_x)]

τ_x = t_x0 + t_x1 [1 − tanh²((V(t) − V_x0)/dV_x)]

T_e[V(t)] = 1 / (1 + e^((7 − V(t))/5))

where x describes the behavior of all gating variables: m, n, and h.
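The single-neuron equations above can be sketched as a right-hand-side function. This is an illustrative sketch, not the talk's code: conductances and kinetic constants are taken from the "Actual" column of the results table and the Experiment Info slide, the tanh widths are assumed to be 1/dV_x in mV (e.g. 1/0.06667 ≈ 15 mV), the sign of the h width is assumed negative so that h inactivates with depolarization, and the constant presynaptic voltage V_pre is hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conductances divided by C (nS/pF) and reversal potentials (mV).
gNa, gK, gL, gglu = 120.0, 20.0, 0.3, 1.0
ENa, EK, EL, Eex = 55.0, -90.0, -54.0, -38.0
ar, br = 2.4, 0.56                       # synaptic rates (slide 7)

# Gating kinetics per variable: (V_x0 [mV], dV_x [mV], t_x0, t_x1 [ms]).
# dV_h < 0 is an assumption so h closes with depolarization.
kin = {"m": (-40.0, 15.0, 0.1, 0.4),
       "h": (-60.0, -15.0, 1.0, 7.0),
       "n": (-55.0, 30.0, 1.0, 5.0)}

def xinf(V, Vx, dV):
    return 0.5 * (1.0 + np.tanh((V - Vx) / dV))

def taux(V, Vx, dV, t0, t1):
    return t0 + t1 * (1.0 - np.tanh((V - Vx) / dV) ** 2)

def Te(V):
    return 1.0 / (1.0 + np.exp((7.0 - V) / 5.0))

def rhs(t, s, Iinj=0.0, Vpre=-65.0):
    """dV/dt, gating, and synaptic-gate equations from the Model slide."""
    V, m, h, n, r = s
    dV = (gNa * m**3 * h * (ENa - V) + gK * n**4 * (EK - V)
          + gL * (EL - V) + Iinj + r * gglu * (Eex - V))
    dgates = [(xinf(V, *kin[x][:2]) - g) / taux(V, *kin[x])
              for x, g in zip("mhn", (m, h, n))]
    dr = ar * Te(Vpre) * (1.0 - r) - br * r
    return [dV, *dgates, dr]

# With no injected current and a hyperpolarized presynaptic neuron,
# the cell should relax to rest and the synaptic gate r should stay shut.
sol = solve_ivp(rhs, (0.0, 50.0), [-65.0, 0.1, 0.6, 0.3, 0.0], max_step=0.1)
print(round(sol.y[0, -1], 1))            # resting membrane voltage (mV)
```

Note how Te[V_pre] gates the synapse: it is essentially zero below about 7 mV, so r only opens when the presynaptic neuron spikes.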

SLIDE 6

Experiment Data

SLIDE 7

Experiment Info

◮ Simulated current clamp

◮ Each individual neuron is NaKL with an additional synaptic current

◮ Noise added to both the current and voltage data by pulling from a Gaussian distribution

◮ E_Na = 55 mV, E_K = −90 mV, E_ex = −38.0 mV, α_r = 2.4 mM⁻¹ ms⁻¹, β_r = 0.56 ms⁻¹

◮ Annealing done on 25000 time steps, or 0.5 seconds of data

◮ Annealing settings: α = 1.25 and β range: 0–94

SLIDE 8

Lowest Action?

We want to find the expected value of some function G(X):

E[G(X)] = ∫ dX G(X) e^(−A(X|Y)) / ∫ dX e^(−A(X|Y))

This integral is difficult to do exactly, so we seek the solution X₀ where A(X|Y) attains its global minimum. This enables us to use Laplace's approximation. Suppose f(x) has a deep minimum at x = x₀ (so f′(x₀) = 0):

∫ dx e^(−f(x)) ≈ ∫ dx e^(−f(x₀) − f′(x₀)(x − x₀) − (1/2) f″(x₀)(x − x₀)²) = e^(−f(x₀)) ∫ dx e^(−(1/2) f″(x₀)(x − x₀)²) = e^(−f(x₀)) (2π / f″(x₀))^(1/2)

If f(x) has only one minimum which dominates, evaluating the integral with Laplace's approximation is doable. If there are multiple minima, close together in magnitude, we must account for each of those when evaluating the integral!
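A quick numerical sanity check of the approximation, on a hypothetical one-dimensional f(x) with a single deep minimum (this toy function is not from the talk):

```python
import numpy as np
from scipy.integrate import quad

# f(x) = 50*(x - 1)^2 + (x - 1)^4: one deep minimum at x0 = 1,
# with f''(x0) = 100. Laplace predicts
#   ∫ e^{-f(x)} dx ≈ e^{-f(x0)} * sqrt(2*pi / f''(x0)).
f = lambda x: 50.0 * (x - 1.0) ** 2 + (x - 1.0) ** 4

# Integrate on (0, 2); the integrand is negligible outside this window.
exact, _ = quad(lambda x: np.exp(-f(x)), 0.0, 2.0)
laplace = np.exp(-f(1.0)) * np.sqrt(2.0 * np.pi / 100.0)

rel_err = abs(exact - laplace) / exact
print(rel_err < 1e-2)   # the quartic correction is tiny here
```

The sharper the minimum relative to the non-quadratic terms, the smaller the error, which is why a well-separated global minimum of the action makes the expectation tractable.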

SLIDE 9

Action Plot: VA and VB Inputs

SLIDE 10

Estimation Plot: VA and VB Inputs

SLIDE 11

Results VA and VB Inputs

Parameter          Bounds          Estimated     Actual
g′_Na (nS/pF)      (1, 250)        123.992       120
g′_K (nS/pF)       (1, 100)        19.7614       20
g′_L (nS/pF)       (0.01, 3)       0.297296      0.3
E_L (mV)           (−100, −10)     −53.0483      −54
V_m0 (mV)          (−100, −10)     −39.8379      −40
dV_m (mV⁻¹)        (0.02, 1)       0.0663339     0.06667
t_m0 (ms)          (0.01, 3)       0.107146      0.1
t_m1 (ms)          (0.01, 3)       0.388524      0.4
V_h0 (mV)          (−100, −1)      −60.0295      −60
dV_h (mV⁻¹)        (−1, −0.02)     −0.0661491    −0.06667
t_h0 (ms)          (0.01, 3)       0.991706      1
t_h1 (ms)          (0.01, 10)      7.07247       7
V_n0 (mV)          (−100, −1)      −55.0505      −55
dV_n (mV⁻¹)        (0.02, 1)       0.0332586     0.03333
t_n0 (ms)          (0.01, 3)       0.968999      1
t_n1 (ms)          (0.01, 10)      5.00038       5
C_m (pF⁻¹)         (0.02, 1.0)     0.039752      0.04
α_e (mM⁻¹ ms⁻¹)    (1.0, 5.0)      2.42373       2.4
β_e (ms⁻¹)         (0, 2)          0.558840      0.56
g′_glu (nS/pF)     (0.5, 2)        0.994867      1.0

SLIDE 12

Estimation/Prediction Plot: VA and VB Inputs

SLIDE 13

Action Plot: VB Input

SLIDE 14

Estimation Plot: VB Input

SLIDE 15

Results VB Input

Parameter          Bounds          Estimated     Actual
g′_Na (nS/pF)      (1, 250)        151.711       120
g′_K (nS/pF)       (1, 100)        100           20
g′_L (nS/pF)       (0.01, 3)       0.491178      0.3
E_L (mV)           (−100, −10)     −10           −54
V_m0 (mV)          (−100, −10)     −30.2863      −40
dV_m (mV⁻¹)        (0.02, 1)       0.0307475     0.06667
t_m0 (ms)          (0.01, 3)       0.0406433     0.1
t_m1 (ms)          (0.01, 3)       0.270122      0.4
V_h0 (mV)          (−100, −1)      −12.9963      −60
dV_h (mV⁻¹)        (−1, −0.02)     −0.0760233    −0.06667
t_h0 (ms)          (0.01, 3)       3             1
t_h1 (ms)          (0.01, 10)      1.70837       7
V_n0 (mV)          (−100, −1)      −53.8779      −55
dV_n (mV⁻¹)        (0.02, 1)       0.02          0.03333
t_n0 (ms)          (0.01, 3)       3             1
t_n1 (ms)          (0.01, 10)      1.70837       5
C_m (pF⁻¹)         (0.02, 1.0)     0.02          0.04
α_e (mM⁻¹ ms⁻¹)    (1.0, 5.0)      2.26551       2.4
β_e (ms⁻¹)         (0, 2)          2             0.56
g′_glu (nS/pF)     (0.5, 2)        2             1

Note that many estimates are pinned at their bounds when only V_B is measured.

SLIDE 16

Estimation/Prediction Plot: VB Input

SLIDE 17

Conclusion

When provided with the voltages of both neurons, the data assimilation procedure is effective at estimating network parameters.

Future Experiments

◮ Add an excitatory connection from neuron B to neuron A.

◮ Increase the number of neurons in the network.

◮ Include inhibitory neurons in the network.

◮ Use a more complicated model for the neurons: sigmoid functions instead of hyperbolic tangents, add in a calcium current, etc.

SLIDE 18

Appendix: Path Integral Formulation

We don't know that our model is correct, so we use stochastic differential equations:

dx(t)/dt = F(x(t), p) + η(t)

where η(t) is Gaussian noise. Since we don't know the exact form of the noise, we characterize it by ⟨η_j(t)⟩ = 0 and ⟨η_i(t₂) η_j(t₁)⟩ = g_ij δ(t₂ − t₁), i.e. the noise is independent at distinct times.

We want to obtain an expression for the probability distribution:

P(x(t_M)|y_{1:M}) = ∫ Π_{n=1}^{M} dx_{n−1} P(y_n|x_n, y_{1:n−1}) P(x_n|x_{n−1}) P(x₀)

where x are the state variables and y are the measured values. Calling the −log of the probabilities on the RHS of the above equation the action, we have

P(x_{0:M}|y_{1:M}) = P(X|Y) ∝ exp[−A(X|Y)]

where A(X|Y) is the effective action of the system, X is the state history, and Y contains all measurements.

SLIDE 19

Path Integral Formulation cont.

Assumptions

◮ Markovian dynamics: the state x(t_{n+1}) = x(n + 1) depends only on the state of the system at the previous time t_n.

◮ All noise is Gaussian and independent of any other noise.

A(X|Y) = Σ_{n=1}^{M} Σ_{l=1}^{L} (R_{m,l}/2) (y_l(t_n) − x_l(t_n))² + Σ_{n=0}^{M−1} Σ_{d=1}^{D} (R_{f,d}/2) (x_d(t_{n+1}) − F_d(x(t_n), p))² − log P(x₀)

The first term corresponds to measurement error and the second term corresponds to model error.
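The model-error term is exactly where the Gaussian-noise assumption enters: taking the noise covariance diagonal with precisions R_f,d, each transition density is Gaussian in the deviation from the deterministic update, and its negative log is the quadratic penalty. A sketch of that step:

```latex
P\big(x(t_{n+1}) \mid x(t_n)\big) \propto
  \exp\!\Big[ -\sum_{d=1}^{D} \tfrac{R_{f,d}}{2}
  \big( x_d(t_{n+1}) - F_d(x(t_n), p) \big)^2 \Big]
\;\Longrightarrow\;
 -\log P\big(x(t_{n+1}) \mid x(t_n)\big) =
  \sum_{d=1}^{D} \tfrac{R_{f,d}}{2}
  \big( x_d(t_{n+1}) - F_d(x(t_n), p) \big)^2 + \mathrm{const.}
```

The measurement term arises the same way from the Gaussian observation noise, with precisions R_m,l on the L measured components.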