Efficient Bayesian spatiotemporal filters for dendritic data
Liam Paninski, Department of Statistics and Center for Theoretical Neuroscience, Columbia University
http://www.stat.columbia.edu/~liam | liam@stat.columbia.edu | March 21, 2012


  1. Efficient Bayesian spatiotemporal filters for dendritic data. Liam Paninski, Department of Statistics and Center for Theoretical Neuroscience, Columbia University. http://www.stat.columbia.edu/~liam, liam@stat.columbia.edu. March 21, 2012. Support: NIH/NSF CRCNS, Sloan, NSF CAREER, McKnight Scholar, DARPA.

  2. Basic goal: understanding dendrites Ramon y Cajal, 1888.

  3. The filtering problem Spatiotemporal imaging data opens an exciting window on the computations performed by single neurons, but we have to deal with noise and intermittent observations.

  4. Basic paradigm: compartmental models
     • write neuronal dynamics in terms of equivalent nonlinear, time-varying RC circuits
     • leads to a coupled system of stochastic differential equations
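
As a concrete illustration, here is a minimal sketch (an assumption-laden toy: a passive, unbranched cable with made-up parameter values and no voltage-gated nonlinearities) of how such a compartmental model becomes a coupled system of stochastic differential equations, integrated by Euler-Maruyama:

```python
import numpy as np

# Toy passive cable split into N compartments (illustrative parameter values).
N = 50          # number of compartments
dt = 0.1        # time step (ms)
T = 1000        # number of time steps
g_leak = 0.1    # leak conductance (1/ms)
g_axial = 2.0   # axial coupling between neighboring compartments (1/ms)
sigma = 0.05    # dynamics noise scale

# Dynamics matrix A: leak on the diagonal, nearest-neighbor coupling off it
# (a discretized cable equation on an unbranched dendrite).
A = -g_leak * np.eye(N)
for i in range(N - 1):
    A[i, i] -= g_axial
    A[i + 1, i + 1] -= g_axial
    A[i, i + 1] += g_axial
    A[i + 1, i] += g_axial

# Euler-Maruyama integration of dV = (A V + I_ext) dt + sigma dW.
I_ext = np.zeros(N)
I_ext[N // 2] = 1.0   # constant injected current in the middle compartment
V = np.zeros((T, N))
for t in range(1, T):
    noise = sigma * np.sqrt(dt) * np.random.randn(N)
    V[t] = V[t - 1] + dt * (A @ V[t - 1] + I_ext) + noise
```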

  5. Inference of spatiotemporal neuronal state given noisy observations
     State-space approach: q_t = state of neuron at time t. We want p(q_t | Y_{1:t}) ∝ p(q_t, Y_{1:t}).
     Markov assumption:
         p(Q, Y) = p(Q) p(Y | Q) = p(q_1) ∏_{t=2}^{T} p(q_t | q_{t−1}) ∏_{t=1}^{T} p(y_t | q_t)
     To compute p(q_t, Y_{1:t}), just recurse:
         p(q_t, Y_{1:t}) = p(y_t | q_t) ∫ p(q_t | q_{t−1}) p(q_{t−1}, Y_{1:t−1}) dq_{t−1}.
     Linear-Gaussian case: requires O(dim(q)^3 T) time; in principle, just matrix algebra (Kalman filter). Approximate solutions in the more general case via sequential Monte Carlo (Huys and Paninski, 2009). Major challenge: dim(q) can be ≈ 10^4 or greater.
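
In the linear-Gaussian case this recursion is exactly the Kalman filter. A minimal dense sketch (generic toy matrices A, Q, B, R; not the fast tree-structured solver discussed on the next slide) makes the O(dim(q)^3) per-timestep cost visible in the matrix products and inverse:

```python
import numpy as np

def kalman_filter(y, A, Q, B, R, m0, C0):
    """Dense Kalman filter: returns the filtered means and covariances of
    p(q_t | Y_{1:t}) for the linear-Gaussian state-space model
        q_t = A q_{t-1} + dynamics noise (cov Q)
        y_t = B q_t     + observation noise (cov R).
    Each step costs O(dim(q)^3) due to the dense products and inverse."""
    T, dim = len(y), len(m0)
    means, covs = np.zeros((T, dim)), np.zeros((T, dim, dim))
    m, C = m0, C0
    for t in range(T):
        # Predict: p(q_t | Y_{1:t-1}).
        m_pred = A @ m
        C_pred = A @ C @ A.T + Q
        # Update with y_t: p(q_t | Y_{1:t}).
        S = B @ C_pred @ B.T + R                 # innovation covariance
        K = C_pred @ B.T @ np.linalg.inv(S)      # Kalman gain
        m = m_pred + K @ (y[t] - B @ m_pred)
        C = C_pred - K @ B @ C_pred
        means[t], covs[t] = m, C
    return means, covs
```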

  6. Low-rank approximations
     Key fact: current experimental methods provide just a few low-SNR observations per time step. Basic idea: if the dynamics are approximately linear and time-invariant, we can approximate the Kalman covariance C_t = cov(q_t | Y_{1:t}) as a low-rank perturbation of the prior covariance,
         C_t ≈ C_0 + U_t D_t U_t^T,  with C_0 = lim_{t→∞} cov(q_t).
     In many cases we can solve linear equations involving C_0 in O(dim(q)) time; in this case we use the fact that the dendrite is a tree, and fast methods are available to solve the cable equation on a tree. The necessary recursions (updating U_t, D_t, and the Kalman mean E(q_t | Y_{1:t})) involve linear manipulations of C_0, and can be handled in O(dim(q)) time (Paninski, 2010).
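
A sketch of the observation driving the low-rank approach (hypothetical function names; the full recursion in Paninski (2010) also propagates the low-rank factors through the dynamics while keeping everything at O(dim(q)) cost): with only k observed compartments at a timestep, the measurement update perturbs the prior covariance by a rank-k term, and computing it needs only k matrix-vector products with C_0.

```python
import numpy as np

def low_rank_measurement_update(m_prior, C0_matvec, B, R, y):
    """One measurement update when only k = B.shape[0] compartments are observed.

    m_prior:    prior mean, shape (N,)
    C0_matvec:  function v -> C0 @ v (e.g., a fast tree-structured cable solver)
    B:          (k, N) observation matrix, k << N
    R:          (k, k) observation-noise covariance
    y:          (k,) observed values

    The posterior covariance is the rank-k perturbation C0 - U D U^T with
    U = C0 B^T and D = S^{-1}; only U, D, and the mean are stored explicitly.
    """
    k, N = B.shape
    U = np.column_stack([C0_matvec(B[i]) for i in range(k)])   # C0 B^T, shape (N, k)
    S = B @ U + R                                              # (k, k) innovation covariance
    D = np.linalg.inv(S)
    m_post = m_prior + U @ (D @ (y - B @ m_prior))
    return m_post, U, D     # posterior covariance = C0 - U @ D @ U.T, never formed
```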

  7. Example: inferring voltage from subsampled observations (movie: low-rank-speckle.mp4)

  8. Applications
     • Optimal experimental design: which parts of the neuron should we image? (Huggins and Paninski, 2011)
     • Estimation of biophysical parameters (e.g., membrane channel densities, axial resistance, etc.): reduces to a simple nonnegative regression problem once V(x, t) is known (Huys et al., 2006)
     • Detecting the locations and weights of synaptic inputs

  9. Application: synaptic locations/weights

  10. Application: synaptic locations/weights

  11. Application: synaptic locations/weights
     Including known terms: dV/dt = A V(t) + W U(t) + ε(t), where the U_j(t) are known input terms. Example: U(t) are known presynaptic spike times, and we want to detect which compartments are connected (i.e., infer the weight matrix W). The loglikelihood is quadratic in W; the L1-penalized loglikelihood can be optimized efficiently with a homotopy approach. Total computation time is O(NTk): N = # compartments, T = # timesteps, k = # nonzero weights.
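
A hedged sketch of this estimation step on simulated data (toy sizes, a toy dynamics matrix, and illustrative noise levels; scikit-learn's LassoLars, a LARS/homotopy-type solver, stands in for the implementation used in the talk). Each row of W is recovered by L1-penalized regression of that compartment's residual derivative on the known inputs:

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)
N, J, T, dt = 20, 30, 5000, 0.1          # compartments, inputs, timesteps, step size
A = -0.5 * np.eye(N)                     # known passive dynamics (toy values)
W_true = np.zeros((N, J))                # sparse true synaptic weights
W_true[rng.integers(0, N, 10), rng.integers(0, J, 10)] = 1.0
U = (rng.random((T, J)) < 0.02).astype(float)    # known presynaptic spike trains

# Simulate dV/dt = A V + W U + noise (Euler).
V = np.zeros((T, N))
for t in range(1, T):
    V[t] = (V[t - 1] + dt * (A @ V[t - 1] + W_true @ U[t - 1])
            + 0.01 * np.sqrt(dt) * rng.standard_normal(N))

# The loglikelihood is quadratic in W, so each row can be estimated by
# L1-penalized regression of (dV/dt - A V) on the inputs U.
dVdt = (V[1:] - V[:-1]) / dt
resid = dVdt - V[:-1] @ A.T
W_hat = np.zeros((N, J))
for i in range(N):
    fit = LassoLars(alpha=1e-3).fit(U[:-1], resid[:, i])
    W_hat[i] = fit.coef_
```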

  12. Application: synaptic locations/weights

  13. Generalizations Need to handle nonlinear, non-Gaussian problems. Kalman idea can be extended by treating Gaussian filter density as approximate, rather than exact. However, Gaussian approximations become highly inaccurate in some cases (e.g., multimodal posteriors). Particle filter: compute filter density recursively via Monte Carlo.

  14. Particle filtering
     Recall the basic recursion:
         p(q_t, Y_{1:t}) = p(y_t | q_t) ∫ p(q_t | q_{t−1}) p(q_{t−1}, Y_{1:t−1}) dq_{t−1}.
     Particle filter: importance sampling to approximate the integral. Can be highly effective (Doucet et al., 2001). However, importance sampling is known to be very non-robust in many important cases: if we put the particles in the wrong part of the space, the variance of the importance weights becomes too large and the filter fails.
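
A minimal bootstrap (sampling-importance-resampling) particle filter for a generic scalar state-space model, to make the importance-sampling recursion and its failure mode concrete (the model callbacks are placeholders, not the dendritic model):

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, transition_sample, obs_loglik, init_sample):
    """Approximates E(q_t | Y_{1:t}) with weighted particles.

    transition_sample(q): draws q_t ~ p(q_t | q_{t-1} = q) for an array of particles
    obs_loglik(y_t, q):   evaluates log p(y_t | q_t = q) elementwise
    init_sample(n):       draws n particles from p(q_1)
    """
    T = len(y)
    particles = init_sample(n_particles)
    means = np.zeros(T)
    for t in range(T):
        if t > 0:
            particles = transition_sample(particles)   # propagate through the prior
        logw = obs_loglik(y[t], particles)              # importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)
        # Resample. If the particles sit in the wrong part of the space, the weights
        # collapse onto a few particles and the filter degenerates -- the
        # non-robustness noted above.
        idx = np.random.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return means

# Example usage on a toy AR(1) state with Gaussian observations:
# means = bootstrap_particle_filter(
#     y, 1000,
#     transition_sample=lambda q: 0.95 * q + 0.5 * np.random.randn(q.size),
#     obs_loglik=lambda yt, q: -0.5 * (yt - q) ** 2,
#     init_sample=lambda n: np.random.randn(n))
```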

  15. Robust particle filtering via sequential MCMC
     Rewrite the basic recursion:
         p(q_t, Y_{1:t}) = ∫ p(y_t | q_t) p(q_t | q_{t−1}) p(q_{t−1}, Y_{1:t−1}) dq_{t−1}    (1)
                         = ∫ p(q_t, q_{t−1}, Y_{1:t}) dq_{t−1}.                              (2)
     Basic idea: use MCMC to sample directly from p(q_t, q_{t−1} | Y_{1:t}), bypassing the nonrobust particle-filter step entirely.

  16. Reparameterized Gibbs sampling
     Basic Gibbs sampling on p(q_t, q_{t−1} | Y_{1:t}) often doesn't work: mixing is too slow, because the densities p(q_t | q_{t−1} = q^i_{t−1}, Y_{1:t}) can have minimal overlap for different i.
     [Figure: the original conditional densities versus the standardized (reparameterized) densities of q_t.]
     But Gibbs sampling on p(q_t, q_{t−1} | Y_{1:t}) in a reparameterized ("standardized") space often mixes quite efficiently.
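
A toy illustration of why standardization helps. The target here is just a strongly correlated bivariate Gaussian standing in for p(q_t, q_{t−1} | Y_{1:t}); in this Gaussian toy the whitened coordinates are exactly independent, so the reparameterized sweep mixes immediately, whereas in the real nonlinear problem the standardization only approximately decorrelates the variables but still speeds mixing dramatically:

```python
import numpy as np

rho = 0.999
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # stand-in for the covariance of (q_t, q_{t-1})
L = np.linalg.cholesky(Sigma)

def gibbs(n_iter, standardized=False):
    """Two-variable Gibbs sampler, optionally in standardized coordinates."""
    x = np.zeros(2)
    samples = np.zeros((n_iter, 2))
    for i in range(n_iter):
        if standardized:
            # Standardized space: the coordinates of z = L^{-1} x are independent
            # standard normals, so each conditional update is an exact marginal draw.
            z = np.linalg.solve(L, x)
            z[0] = np.random.randn()
            z[1] = np.random.randn()
            x = L @ z
        else:
            # Naive Gibbs: the conditionals are nearly degenerate when rho ~ 1,
            # so successive samples barely move (slow mixing).
            x[0] = rho * x[1] + np.sqrt(1 - rho ** 2) * np.random.randn()
            x[1] = rho * x[0] + np.sqrt(1 - rho ** 2) * np.random.randn()
        samples[i] = x
    return samples
```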

  17. Application to the FitzHugh-Nagumo model
     [Figure: filtered v and w traces versus t, comparing the PA-APF, standard Gibbs, and Gibbs (Weare) methods.]
     A spike is observed at t = 0.5. The standard particle filter fails; the standardized Gibbs approach works very well.
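
For context, a minimal stochastic FitzHugh-Nagumo simulation of the kind of model being filtered here (parameter values, noise levels, time units, and the pulse timing are illustrative assumptions, not taken from the slide; the observation model and the filters compared in the figure are not reproduced):

```python
import numpy as np

# Stochastic FitzHugh-Nagumo model:
#   dv = (v - v^3/3 - w + I(t)) dt + sigma_v dW_v
#   dw = eps * (v + a - b*w) dt    + sigma_w dW_w
a, b, eps = 0.7, 0.8, 0.08
sigma_v, sigma_w = 0.05, 0.01
dt, T_total = 0.05, 100.0
n = int(T_total / dt)
t = np.arange(n) * dt

I_ext = np.where((t > 40) & (t < 42), 1.0, 0.0)   # brief current pulse to evoke a spike
v = np.full(n, -1.2)                               # start near the resting state
w = np.full(n, -0.6)
for k in range(1, n):
    v[k] = (v[k - 1] + dt * (v[k - 1] - v[k - 1] ** 3 / 3 - w[k - 1] + I_ext[k - 1])
            + sigma_v * np.sqrt(dt) * np.random.randn())
    w[k] = (w[k - 1] + dt * eps * (v[k - 1] + a - b * w[k - 1])
            + sigma_w * np.sqrt(dt) * np.random.randn())
# In the filtering problem only noisy, intermittent observations of v would be
# available, and the full state (v, w) would be inferred from them.
```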

  18. Work in progress
     • Combining the fast Kalman filter with the particle filter: Rao-Blackwellized Gaussian mixture filter
     • Exploiting local tree structure: factorized particle filter
     • Incorporating calcium measurements (Pnevmatikakis et al., 2011)

  19. References
     Djurisic, M., Antic, S., Chen, W. R., and Zecevic, D. (2004). Voltage imaging from dendrites of mitral cells: EPSP attenuation and spike trigger zones. J. Neurosci., 24(30):6703–6714.
     Doucet, A., de Freitas, N., and Gordon, N., editors (2001). Sequential Monte Carlo Methods in Practice. Springer.
     Huggins, J. and Paninski, L. (2011). Optimal experimental design for sampling voltage on dendritic trees. J. Comput. Neurosci., in press.
     Huys, Q., Ahrens, M., and Paninski, L. (2006). Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96:872–890.
     Huys, Q. and Paninski, L. (2009). Model-based smoothing of, and parameter estimation from, noisy biophysical recordings. PLOS Computational Biology, 5:e1000379.
     Knopfel, T., Diez-Garcia, J., and Akemann, W. (2006). Optical probing of neuronal circuit dynamics: genetically encoded versus classical fluorescent sensors. Trends in Neurosciences, 29:160–166.
     Paninski, L. (2010). Fast Kalman filtering on quasilinear dendritic trees. Journal of Computational Neuroscience, 28:211–228.
     Pnevmatikakis, E., Kelleher, K., Chen, R., Josic, K., Saggau, P., and Paninski, L. (2011). Fast nonnegative spatiotemporal calcium smoothing in dendritic trees. COSYNE.
