SLIDE 1

Efficient Bayesian spatiotemporal filters for dendritic data

Liam Paninski

Department of Statistics and Center for Theoretical Neuroscience Columbia University http://www.stat.columbia.edu/∼liam liam@stat.columbia.edu March 21, 2012

Support: NIH/NSF CRCNS, Sloan, NSF CAREER, McKnight Scholar, DARPA.

SLIDE 2

Basic goal: understanding dendrites

Ramon y Cajal, 1888.

SLIDE 3

The filtering problem

Spatiotemporal imaging data opens an exciting window on the computations performed by single neurons, but we have to deal with noise and intermittent observations.

SLIDE 4

Basic paradigm: compartmental models

  • write neuronal dynamics in terms of equivalent nonlinear, time-varying RC circuits
  • leads to a coupled system of stochastic differential equations
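As a concrete (and deliberately simplified) illustration of such a coupled SDE system, a passive linear compartmental chain can be simulated with Euler-Maruyama steps; all names and parameter values below are illustrative choices, not taken from the slides, and the linear dynamics here stand in for the nonlinear, time-varying RC circuits above:

```python
import numpy as np

def simulate_cable(n_comp=20, T=0.1, dt=1e-4, g_leak=1.0, g_axial=5.0,
                   sigma=0.1, rng=None):
    """Euler-Maruyama simulation of a passive compartmental chain:
        dV_i = (-g_leak*V_i + g_axial*(V_{i-1} - 2V_i + V_{i+1})) dt + noise.
    """
    rng = rng or np.random.default_rng()
    n = int(T / dt)
    V = np.zeros((n, n_comp))
    V[0, n_comp // 2] = 1.0          # initial voltage bump mid-cable
    # second-difference (discrete Laplacian) coupling on a chain
    L = -2 * np.eye(n_comp) + np.eye(n_comp, k=1) + np.eye(n_comp, k=-1)
    L[0, 0] = L[-1, -1] = -1.0       # sealed-end boundary conditions
    for t in range(1, n):
        drift = -g_leak * V[t - 1] + g_axial * (L @ V[t - 1])
        V[t] = V[t - 1] + dt * drift \
            + sigma * np.sqrt(dt) * rng.standard_normal(n_comp)
    return V
```

The initial bump leaks away and diffuses along the chain while the noise term keeps every compartment fluctuating, which is exactly the regime the filtering problem targets.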
SLIDE 5

Inference of spatiotemporal neuronal state given noisy observations

State-space approach: let q_t denote the state of the neuron at time t. We want p(q_t | Y_{1:t}) ∝ p(q_t, Y_{1:t}).

Markov assumption:

p(Q, Y) = p(Q) p(Y | Q) = p(q_1) ∏_{t=2}^{T} p(q_t | q_{t-1}) ∏_{t=1}^{T} p(y_t | q_t).

To compute p(q_t, Y_{1:t}), just recurse:

p(q_t, Y_{1:t}) = p(y_t | q_t) ∫ p(q_t | q_{t-1}) p(q_{t-1}, Y_{1:t-1}) dq_{t-1}.

Linear-Gaussian case: requires O(dim(q)^3 T) time; in principle, just matrix algebra (Kalman filter). Approximate solutions in the more general case via sequential Monte Carlo (Huys and Paninski, 2009).

Major challenge: dim(q) can be ≈ 10^4 or greater.
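The linear-Gaussian recursion is the textbook Kalman filter; a minimal sketch in generic notation (A, C, Q, R are assumed model matrices, not quantities from the slides) makes the O(dim(q)^3) per-step cost visible in the matrix inverse:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, mu0, V0):
    """Kalman filter for q_t = A q_{t-1} + noise, y_t = C q_t + noise.

    Returns filtered means E(q_t | Y_{1:t}) and covariances cov(q_t | Y_{1:t}).
    The O(dim(q)^3) cost per step comes from the dense inverse below.
    """
    d = mu0.shape[0]
    mus, Vs = [], []
    mu, V = mu0, V0
    for t, yt in enumerate(y):
        if t > 0:
            mu = A @ mu                    # predict mean
            V = A @ V @ A.T + Q            # predict covariance
        S = C @ V @ C.T + R                # innovation covariance
        K = V @ C.T @ np.linalg.inv(S)     # Kalman gain
        mu = mu + K @ (yt - C @ mu)        # measurement update, mean
        V = (np.eye(d) - K @ C) @ V        # measurement update, covariance
        mus.append(mu)
        Vs.append(V)
    return np.array(mus), np.array(Vs)
```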

SLIDE 6

Low-rank approximations

Key fact: current experimental methods provide just a few low-SNR observations per time step.

Basic idea: if the dynamics are approximately linear and time-invariant, we can approximate the Kalman covariance C_t = cov(q_t | Y_{1:t}) as a low-rank perturbation of the prior covariance: C_t ≈ C_0 + U_t D_t U_t^T, with C_0 = lim_{t→∞} cov(q_t). In many cases we can solve linear equations involving C_0 in O(dim(q)) time; here we use the fact that the dendrite is a tree, and fast methods are available to solve the cable equation on a tree. The necessary recursions (updating U_t, D_t, and the Kalman mean E(q_t | Y_{1:t})) involve only linear manipulations of C_0, and can be handled in O(dim(q)) time (Paninski, 2010).
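To see why one low-SNR observation only perturbs the covariance by rank one, here is a hedged sketch of the measurement downdate in factored form (the function name and interface are invented for illustration, D is stored as a diagonal vector, and the dense products here stand in for the O(dim(q)) tree solves against C_0 that the real method uses):

```python
import numpy as np

def rank1_downdate(C0, U, D, b, r):
    """Measurement update for a covariance stored as C = C0 + U diag(D) U^T.

    A single observation y = b^T q + noise (variance r) turns C into
    C - (C b)(C b)^T / (b^T C b + r), a rank-1 perturbation, so the
    factored form just grows U by one column.
    """
    Cb = C0 @ b + U @ (D * (U.T @ b))   # compute C b without forming C
    s = b @ Cb + r                      # innovation variance
    U_new = np.column_stack([U, Cb])
    D_new = np.append(D, -1.0 / s)      # new factor enters with weight -1/s
    return U_new, D_new
```

Repeated updates would grow the rank by one per observation, so the full recursion also truncates the factorization; that step is omitted in this sketch.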

SLIDE 7

Example: inferring voltage from subsampled observations

(Video: low-rank-speckle.mp4)

SLIDE 8

Applications

  • Optimal experimental design: which parts of the neuron should we image? (Huggins and Paninski, 2011)
  • Estimation of biophysical parameters (e.g., membrane channel densities, axial resistance, etc.): reduces to a simple nonnegative regression problem once V(x, t) is known (Huys et al., 2006)
  • Detecting the location and weights of synaptic inputs
SLIDE 9

Application: synaptic locations/weights

SLIDE 10

Application: synaptic locations/weights

SLIDE 11

Application: synaptic locations/weights

Including known terms: dV/dt = AV(t) + WU(t) + ε(t), with U_j(t) = known input terms. Example: U(t) are known presynaptic spike times, and we want to detect which compartments are connected (i.e., infer the weight matrix W). The loglikelihood is quadratic; the L1-penalized loglikelihood can be optimized efficiently with a homotopy approach. Total computation time is O(NTk): N = # compartments, T = # timesteps, k = # nonzero weights.
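A minimal stand-in for the homotopy solver is proximal-gradient (ISTA) lasso, which reaches the same sparse weight estimates, though without the O(NTk) path-following efficiency; the function below is an illustrative sketch, not the slides' implementation:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """L1-penalized least squares via ISTA (proximal gradient descent).

    Minimizes 0.5*||y - X w||^2 + lam*||w||_1; the nonzero entries of w
    flag which inputs (compartments) carry synaptic weight.
    """
    L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)           # gradient of the quadratic term
        z = w - g / L
        # soft-thresholding: the proximal operator of the L1 penalty
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w
```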

SLIDE 12

Application: synaptic locations/weights

SLIDE 13

Generalizations

Need to handle nonlinear, non-Gaussian problems. Kalman idea can be extended by treating Gaussian filter density as approximate, rather than exact. However, Gaussian approximations become highly inaccurate in some cases (e.g., multimodal posteriors). Particle filter: compute filter density recursively via Monte Carlo.

SLIDE 14

Particle filtering

Recall the basic recursion:

p(q_t, Y_{1:t}) = p(y_t | q_t) ∫ p(q_t | q_{t-1}) p(q_{t-1}, Y_{1:t-1}) dq_{t-1}.

Particle filter: use importance sampling to approximate the integral. Can be highly effective (Doucet et al., 2001). However, importance sampling is known to be very non-robust in many important cases: if we put the particles in the wrong part of the space, the variance of the importance weights becomes too large and the filter fails.
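A bootstrap particle filter implementing this recursion might look like the following sketch (the interfaces f, obs_loglik, and q0 are generic assumptions for illustration); the weighting step is exactly where the failure mode bites, since particles far from the likelihood's support get vanishing weight:

```python
import numpy as np

def bootstrap_pf(y, f, obs_loglik, q0, n_particles=1000, rng=None):
    """Bootstrap particle filter: propagate particles through the prior
    dynamics f, weight by the likelihood p(y_t|q_t), then resample.
    """
    rng = rng or np.random.default_rng()
    q = q0(rng, n_particles)                  # sample particles from p(q_1)
    means = []
    for yt in y:
        logw = obs_loglik(yt, q)              # log importance weights
        w = np.exp(logw - logw.max())         # stabilize before normalizing
        w /= w.sum()
        means.append(w @ q)                   # estimate of E(q_t | Y_{1:t})
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        q = f(rng, q[idx])                    # propagate through dynamics
    return np.array(means)
```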

SLIDE 15

Robust particle filtering via sequential MCMC

Rewrite the basic recursion:

p(q_t, Y_{1:t}) = ∫ p(y_t | q_t) p(q_t | q_{t-1}) p(q_{t-1}, Y_{1:t-1}) dq_{t-1}   (1)
               = ∫ p(q_t, q_{t-1}, Y_{1:t}) dq_{t-1}.   (2)

Basic idea: use MCMC to sample directly from p(q_t, q_{t-1} | Y_{1:t}), bypassing the non-robust particle filter step entirely.

SLIDE 16

Reparameterized Gibbs sampling

Basic Gibbs sampling on p(q_t, q_{t-1} | Y_{1:t}) often doesn't work: mixing is too slow, because the densities p(q_t | q_{t-1} = q^i_{t-1}, Y_{1:t}) can have minimal overlap for different i.

[Figure: original densities vs. standardized densities on the reparameterized q_t axis.]

But Gibbs sampling on p(q_t, q_{t-1} | Y_{1:t}) in a reparameterized ("standardized") space often mixes quite efficiently.
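The coordinate-wise sweeps underlying such a sampler can be sketched on a toy bivariate Gaussian standing in for p(q_{t-1}, q_t | Y_{1:t}); the standardization itself is omitted here, so this shows only the basic alternating conditional updates:

```python
import numpy as np

def gibbs_pair(mu, Sigma, n_samples=5000, rng=None):
    """Gibbs sampler for a bivariate Gaussian p(q_{t-1}, q_t | Y_{1:t}).

    Alternately samples each coordinate from its exact Gaussian
    conditional. In the standardized scheme, the same sweeps run in a
    whitened space where the conditionals overlap well, restoring
    fast mixing.
    """
    rng = rng or np.random.default_rng()
    s = Sigma
    x = mu.copy()
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        # sample p(q_{t-1} | q_t): conditional of coordinate 0 given 1
        m0 = mu[0] + s[0, 1] / s[1, 1] * (x[1] - mu[1])
        v0 = s[0, 0] - s[0, 1] ** 2 / s[1, 1]
        x[0] = m0 + np.sqrt(v0) * rng.standard_normal()
        # sample p(q_t | q_{t-1}): conditional of coordinate 1 given 0
        m1 = mu[1] + s[0, 1] / s[0, 0] * (x[0] - mu[0])
        v1 = s[1, 1] - s[0, 1] ** 2 / s[0, 0]
        x[1] = m1 + np.sqrt(v1) * rng.standard_normal()
        out[i] = x
    return out
```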

SLIDE 17

Application to Fitzhugh-Nagumo model

[Figure: estimates of v and w over t from the PA-APF, standardized Gibbs, and Gibbs (Weare) samplers.]

— Spike observed at t = 0.5.
— The standard particle filter fails; the standardized Gibbs approach works very well.
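For reference, the Fitzhugh-Nagumo dynamics can be simulated with a simple Euler(-Maruyama) scheme; the parameter values below are standard textbook choices, not taken from the slides:

```python
import numpy as np

def simulate_fhn(T=100.0, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08,
                 sigma=0.0, rng=None):
    """Euler-Maruyama simulation of the Fitzhugh-Nagumo model:
        dv/dt = v - v^3/3 - w + I,   dw/dt = eps*(v + a - b*w).
    With these parameters the fixed point is unstable and the system
    produces relaxation oscillations (spikes).
    """
    rng = rng or np.random.default_rng()
    n = int(T / dt)
    v = np.empty(n); w = np.empty(n)
    v[0], w[0] = -1.0, -0.5
    for t in range(1, n):
        dv = v[t - 1] - v[t - 1] ** 3 / 3 - w[t - 1] + I
        dw = eps * (v[t - 1] + a - b * w[t - 1])
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        v[t] = v[t - 1] + dt * dv + noise
        w[t] = w[t - 1] + dt * dw
    return v, w
```

Setting sigma > 0 gives the noisy latent trajectory that the filters above would have to recover from sparse, noisy observations.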

SLIDE 18

Work in progress

  • Combining the fast Kalman filter with the particle filter: Rao-Blackwellized Gaussian mixture filter
  • Exploiting local tree structure: factorized particle filter
  • Incorporating calcium measurements (Pnevmatikakis et al., 2011)

SLIDE 19

References

Djurisic, M., Antic, S., Chen, W. R., and Zecevic, D. (2004). Voltage imaging from dendrites of mitral cells: EPSP attenuation and spike trigger zones. J. Neurosci., 24(30):6703–6714.

Doucet, A., de Freitas, N., and Gordon, N., editors (2001). Sequential Monte Carlo Methods in Practice. Springer.

Huggins, J. and Paninski, L. (2011). Optimal experimental design for sampling voltage on dendritic trees. J. Comput. Neuro., in press.

Huys, Q., Ahrens, M., and Paninski, L. (2006). Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96:872–890.

Huys, Q. and Paninski, L. (2009). Model-based smoothing of, and parameter estimation from, noisy biophysical recordings. PLOS Computational Biology, 5:e1000379.

Knopfel, T., Diez-Garcia, J., and Akemann, W. (2006). Optical probing of neuronal circuit dynamics: genetically encoded versus classical fluorescent sensors. Trends in Neurosciences, 29:160–166.

Paninski, L. (2010). Fast Kalman filtering on quasilinear dendritic trees. Journal of Computational Neuroscience, 28:211–228.

Pnevmatikakis, E., Kelleher, K., Chen, R., Josic, K., Saggau, P., and Paninski, L. (2011). Fast nonnegative spatiotemporal calcium smoothing in dendritic trees. COSYNE.