

  1. 2007 IEEE International Symposium on Information Theory, Acropolis Congress and Exhibition Center, Nice, France. Signal Processing Algorithms to Decipher Brain Function. Emery N. Brown, Department of Brain and Cognitive Sciences and Division of Health Sciences and Technology, Massachusetts Institute of Technology; Department of Anesthesia and Critical Care, Massachusetts General Hospital, Harvard Medical School. June 29, 2007.

  2. ACKNOWLEDGMENTS
     Graphics and Animations: Loren M. Frank, David P. Nguyen, Michael C. Quirk, Riccardo Barbieri
     Experimentation: Loren M. Frank
     Modeling Collaborators: Riccardo Barbieri, Matthew A. Wilson (MIT), Loren M. Frank (UCSF), Uri Eden, Victor Solo (U. Mich), David P. Nguyen
     Supported by NIMH, NIDA, and NSF

  3. OUTLINE A. Background (Brain Signals) B. State-Space and Point Process Models C. Ensemble Neural Spike Train Decoding D. Neural Receptive Field Plasticity E. Conclusion

  4. OBJECTIVES A. Point process models as a framework for representing neural spike trains. B. State-space modeling as a framework for characterizing dynamic properties in neural systems.

  5. NEUROSCIENCE DATA: DYNAMIC AND MULTIVARIATE
     Experiments: neurophysiology, fMRI, electroencephalography, magnetoencephalography; cognitive and behavioral responses
     Data analysis
     Theory/abstract models
     Most neuroscience data analysis methods are static.

  6. Neurons (Kandel, Schwartz & Jessell)

  7. Each Neuron is Multiple Dynamical Systems: dendritic currents, dendritic dynamics, retrograde propagation of action potentials, gene regulation of neurotransmitter synthesis, anterograde propagation of action potentials, synaptic plasticity, electrotonic coupling.

  8. Neuroscience Experiments are Stimulus-Response. Example: the fly H1 neuron, with wind velocity as the stimulus and the recorded spike trains and estimated firing rate as the response (Rieke et al., 1997).

  9. Point Process Observation Models
     • Definition: a point process is a binary (0-1) process that occurs in continuous time or space.
     • Examples: neural spike trains, heart beats, earthquake sites and times, geyser eruptions.
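
A minimal sketch of the simplest such process, a homogeneous Poisson spike train simulated by drawing exponential inter-event intervals; the rate, duration, and function name are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson_spike_train(rate_hz, duration_s):
    """Simulate a homogeneous Poisson point process on [0, duration_s).

    Inter-event intervals of a Poisson process with rate lambda are
    i.i.d. Exponential(1/lambda)-scaled, so draw intervals until the
    end of the observation window is passed. (Illustrative values only.)
    """
    times = []
    t = rng.exponential(1.0 / rate_hz)
    while t < duration_s:
        times.append(t)
        t += rng.exponential(1.0 / rate_hz)
    return np.array(times)

spikes = simulate_poisson_spike_train(rate_hz=20.0, duration_s=1.0)
print(f"{spikes.size} spikes in 1 s at a nominal 20 Hz rate")
```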

  10. State-Space Modeling Paradigm: a state model, an observation model (point process), and a filter algorithm.

  11. Point Process Filter Algorithms: recursive Gaussian approximation; instantaneous steepest descent.

  12. A Spike Train

  13. A Spike Train in Discrete Time. Example indicator sequence over seven intervals: n_1 … n_7 = 0 0 1 0 0 0 1, where n_t is the spike indicator function in interval t (1 if a spike occurred in interval t, 0 otherwise).
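
A small sketch of this discretization step, binning spike times into a 0/1 indicator sequence; the helper name and bin width are mine, and the example reproduces the 0 0 1 0 0 0 1 sequence on the slide.

```python
import numpy as np

def spike_indicator(spike_times, t_start, t_end, dt):
    """Bin spike times (seconds) into a 0/1 indicator sequence n_t.

    Assumes at most one spike per bin, i.e. dt is small relative to the
    refractory period (hypothetical helper, not from the slides).
    """
    n_bins = int(round((t_end - t_start) / dt))
    edges = t_start + dt * np.arange(n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)

# Spikes at 2.1 ms and 6.4 ms with 1 ms bins -> [0 0 1 0 0 0 1]
print(spike_indicator([0.0021, 0.0064], 0.0, 0.007, 0.001))
```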

  14. Conditional Intensity Function: λ(t | H_t) = lim_{Δ→0} Pr[N(t+Δ) − N(t) = 1 | H_t] / Δ, where H_t is the history of the spiking process and covariates up to time t. The conditional intensity function generalizes the Poisson process rate function and characterizes a point process.
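
In discrete time this gives the approximation Pr(spike in bin k | H_k) ≈ λ_k Δ, which is enough to simulate a spike train from any intensity. A minimal sketch, using a made-up sinusoidal intensity (the numbers are not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                                        # 1 ms bins
t = np.arange(0.0, 2.0, dt)                       # 2 s of time
lam = 10.0 + 8.0 * np.sin(2 * np.pi * t)          # hypothetical intensity (spikes/s)

# Bernoulli approximation: each bin spikes with probability lambda(t)*dt
spike_prob = np.clip(lam * dt, 0.0, 1.0)
n = rng.binomial(1, spike_prob)                   # 0/1 spike indicator sequence

print("expected spikes:", lam.sum() * dt, " simulated spikes:", n.sum())
```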

  15. Rat Hippocampus Anatomy (Amaral & Witter, 1989)

  16. The Place Cell Phenomenon of the Rat Hippocampus (O’Keefe and Dostrovsky, 1971; Frank et al., Neuron, 2000)

  17. Multielectrode Recordings of Brain Activity. Richard A. Norman, Bioengineering Dept., University of Utah; Matt Wilson, Dept. of Brain and Cognitive Sciences, MIT.

  18. Dynamic Analyses of Information Encoding by Neural Ensembles (Brown et al., 1998; Barbieri et al., 2004; Brown and Barbieri, 2006; Ergun et al., 2007)

  19. ENCODING ANALYSIS: PLACE FIELDS

  20. ENCODING ANALYSIS
     • Model the relation between spiking and position as an inhomogeneous Poisson process:
       1. Model 1: Gaussian model for the rate function
       2. Model 2: Zernike polynomial model for the rate function
     • Represent the position as an AR(1) process to enforce continuity of information processing (behavioral constraint); a simulation sketch of this state model follows below.
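
A sketch of the behavioral constraint as a random-walk special case of an AR(1) model on position, x_k = x_{k-1} + w_k with Gaussian increments; the noise scale, arena coordinates, and function name are placeholders, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar1_path(n_steps, sigma_cm=0.5, start=(35.0, 35.0)):
    """Random-walk (AR(1) with unit coefficient) path in a 2-D arena.

    Each position is the previous position plus Gaussian noise, which
    enforces continuity of the path between camera frames.
    Units and noise scale are illustrative placeholders.
    """
    x = np.empty((n_steps, 2))
    x[0] = start
    for k in range(1, n_steps):
        x[k] = x[k - 1] + rng.normal(scale=sigma_cm, size=2)
    return x

path = simulate_ar1_path(n_steps=1800)   # e.g. one minute at 30 frames/s
print(path[:3])
```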

  21. Model 1: GAUSSIAN SPATIAL INTENSITY FUNCTION: λ(x | θ) = exp{α − ½ (x − μ)ᵀ W⁻¹ (x − μ)}, where exp(α) is the maximum field height, W the scale matrix, and μ the field center.
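
A direct transcription of this Gaussian spatial intensity into code; the symbols alpha, mu, and W follow the parameter labels on the slide, and the numerical values below are illustrative only.

```python
import numpy as np

def gaussian_intensity(x, alpha, mu, W):
    """Gaussian place-field intensity lambda(x) = exp(alpha - 0.5*(x-mu)' W^-1 (x-mu)).

    x can be a single 2-D position or an (N, 2) array of positions.
    exp(alpha) is the maximum field height, mu the center, W the scale matrix.
    """
    d = np.atleast_2d(x) - mu
    quad = np.einsum("ni,ij,nj->n", d, np.linalg.inv(W), d)
    return np.exp(alpha - 0.5 * quad)

# Illustrative parameters: ~20 spikes/s peak, center of a 70 cm arena, 10 cm scale
alpha, mu, W = np.log(20.0), np.array([35.0, 35.0]), np.diag([100.0, 100.0])
print(gaussian_intensity(np.array([35.0, 35.0]), alpha, mu, W))   # ~20 at the center
print(gaussian_intensity(np.array([45.0, 35.0]), alpha, mu, W))   # lower 10 cm away
```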

  22. Model 2: ZERNIKE SPATIAL INTENSITY FUNCTION (the log intensity is expanded in Zernike polynomials over the circular environment).

  23. ENCODING ANALYSIS (Brown et al., J. Neurosci., 1998; Barbieri et al., Neural Computation, 2004)
     • A rat was trained to forage for chocolate pellets in a 70 cm diameter environment for 23 (25) minutes.
     • Recordings were made from 32 (34) CA1 place cells.
     • Position data were recorded at 30 Hz (30 frames/s).
     • Estimate the model parameters for each cell and the path from the first 13 (15) minutes of the experiment using maximum likelihood.

  24. Likelihood of the Neural Spike Train. We estimate the parameters for the Gaussian and Zernike models by maximum likelihood.
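
For the discrete-time spike indicators above, the point-process log likelihood is commonly approximated as Σ_k [ n_k log(λ_k Δ) − λ_k Δ ]; a sketch of that objective, with the intensity per bin assumed to come from one of the spatial models evaluated along the path (helper name is hypothetical):

```python
import numpy as np

def poisson_log_likelihood(n, lam, dt):
    """Discrete-time point-process (Poisson) log likelihood.

    n   : 0/1 spike indicator per bin
    lam : conditional intensity per bin (spikes/s)
    dt  : bin width (s)
    Uses the approximation sum_k [ n_k*log(lam_k*dt) - lam_k*dt ].
    """
    lam = np.clip(lam, 1e-12, None)          # guard against log(0)
    return np.sum(n * np.log(lam * dt) - lam * dt)

# In an encoding fit one would maximize this over the place-field parameters,
# e.g. by passing the negative log likelihood to scipy.optimize.minimize.
```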

  25. DECODING ANALYSIS: ENSEMBLE REPRESENTATION OF POSITION

  26. POSITION RECONSTRUCTION BY RECURSIVE FILTERING (Brown et al., J. Neurosci., 1998; Barbieri et al., Neural Computation, 2004)
     • A rat was trained to forage for chocolate pellets in a 70 cm diameter environment for 23 minutes.
     • Recordings were made from 34 CA1 place cells.
     • Position data were recorded at 30 Hz (30 frames/s).
     • Decode the last 10 minutes of the experiment in real time at the camera frame rate (30 frames/s).

  27. STATE-SPACE MODEL AND NEURAL SPIKE TRAIN DECODING. State model: the position of the animal at time k. Observation model: the spiking of each cell given position, parameterized by the place-field parameters of neurons c = 1, …, C.

  28. RECURSIVE STATE ESTIMATION FOR POINT PROCESSES
     • Prediction: Chapman-Kolmogorov equation
     • Update: Bayes’ theorem
     • State model: first-order autoregressive model
     • Observation model: Poisson (Gaussian or Zernike intensity)
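
For reference, a standard way to write this recursion; the notation (x_k for the position state, ΔN_k for the spike counts observed in bin k) is mine rather than taken verbatim from the slide.

```latex
% One-step prediction (Chapman-Kolmogorov equation)
p(x_k \mid \Delta N_{1:k-1})
  = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid \Delta N_{1:k-1})\, dx_{k-1}

% Posterior update (Bayes' theorem with the point-process observation model)
p(x_k \mid \Delta N_{1:k})
  \propto p(\Delta N_k \mid x_k)\, p(x_k \mid \Delta N_{1:k-1})
```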

  29. Neural Spike Train Decoding Algorithm (Gaussian Approximation). At each time k the filter combines the position predicted from time k-1 with the probability of a spike from each cell c at time k and the spikes actually observed from each cell at time k, together with a covariance update algorithm.
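
A sketch of one prediction-update step in the spirit of this Gaussian-approximation point-process filter, assuming Gaussian place fields and a random-walk state model; the function and variable names are mine and the step is an illustration, not the authors' exact implementation.

```python
import numpy as np

def pp_filter_step(x_post, W_post, spikes_k, cells, dt, Q):
    """One step of a point-process filter with a Gaussian posterior approximation.

    x_post, W_post : posterior mean and covariance from time k-1
    spikes_k       : 0/1 spike indicators at time k, one per cell
    cells          : list of (alpha, mu, Winv) Gaussian place-field parameters
    dt             : time-bin width (s)
    Q              : state-noise covariance of the assumed random-walk state model
    """
    # Prediction (Chapman-Kolmogorov under a random walk: mean unchanged)
    x_pred = x_post
    W_pred = W_post + Q

    # Update (Bayes' theorem with the point-process likelihood)
    info = np.linalg.inv(W_pred)
    score = np.zeros_like(x_pred)
    for n_c, (alpha, mu, Winv) in zip(spikes_k, cells):
        d = x_pred - mu
        lam = np.exp(alpha - 0.5 * d @ Winv @ d)     # intensity at predicted position
        g = -Winv @ d                                # gradient of log intensity
        info += lam * dt * np.outer(g, g) + (n_c - lam * dt) * Winv
        score += g * (n_c - lam * dt)

    W_new = np.linalg.inv(info)
    x_new = x_pred + W_new @ score
    return x_new, W_new
```

Run once per camera frame with that frame's spike indicators; the posterior mean is the decoded position and the posterior covariance gives the confidence region.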

  30. Decoding Video

  31. Median error (cm): reverse correlation 29.6, Bayes’ (Gaussian) 7.7. R-squared: reverse correlation 34.0, Bayes’ (Gaussian) 89.0.

  32. Summary
     1. Median decoding errors (cm) for ~30 neurons: Gaussian 7.9; 7.7, Zernike 6.0; 5.5, reverse correlation 29.6.
     2. Coverage probabilities (expected 0.95): Gaussian 0.31; 0.40, Zernike 0.67; 0.75.
     3. Improvements are due to a more accurate spatial model, a faster learning rate, and a smaller updating interval.
     4. THE HIPPOCAMPUS MAINTAINS A DYNAMIC REPRESENTATION OF THE ANIMAL’S POSITION.

  33. Wessberg et al. (2000); Taylor et al. (2002); Serruya et al. (2002); Mussallam (2004); Hochberg et al. (2006); Srinivasan et al. (2006, 2007a, 2007b).

  34. Key Point Neural systems are dynamic. They are constantly changing how they represent information.

  35. An Analysis of Hippocampal Receptive Field Dynamics by Point Process Adaptive Filters. Emery N. Brown, David Nguyen, Loren Frank, Matt Wilson, Victor Solo. Proceedings of the National Academy of Sciences (2001).

  36. SPATIOTEMPORAL DYNAMICS OF CA1 PLACE CELLS (Abbott and Blum, 1996; Mehta et al., 1997, 2000, 2002). Over the first few passes:
     – fields increased in size
     – field center of mass moved backwards
     – fields skewed
     – analyses dependent on restricted firing of place cells

  37. GAUSSIAN MODEL OF A CONDITIONAL INTENSITY FUNCTION: λ(x | θ) = exp{α − (x − μ)² / (2σ²)}, where exp(α) is the maximum field height, σ the scale parameter, and μ the field center.

  38. STATE-SPACE MODEL AND NEURAL PLASTICITY. State model: a time-varying vector of model parameters. Observation model: spiking given the current parameters and position. The animal’s position at time k is an observable, time-varying covariate.

  39. ADAPTIVE POINT PROCESS FILTER ALGORITHM (Instantaneous Steepest Descent): new value = previous value + learning rate × innovation (error signal).
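
A sketch of one instantaneous-steepest-descent update of this form, assuming the 1-D Gaussian intensity of the previous slide; the learning rate, parameter values, and function names are illustrative, not the authors' settings.

```python
import numpy as np

def adaptive_filter_step(theta, grad_log_lam, lam, n_k, dt, eps):
    """new value = previous value + learning rate * grad(log lambda) * innovation,
    with innovation (error signal) n_k - lam*dt."""
    return theta + eps * grad_log_lam * (n_k - lam * dt)

def gaussian_1d(theta, x):
    """1-D Gaussian intensity lambda = exp(alpha - (x-mu)^2/(2*sigma^2)) and the
    gradient of log lambda w.r.t. theta = (alpha, mu, sigma)."""
    alpha, mu, sigma = theta
    lam = np.exp(alpha - (x - mu) ** 2 / (2 * sigma ** 2))
    grad = np.array([1.0,
                     (x - mu) / sigma ** 2,
                     (x - mu) ** 2 / sigma ** 3])
    return lam, grad

# One update given a spike (n_k = 1) observed at position x with 1/30 s bins:
theta = np.array([np.log(15.0), 50.0, 8.0])     # illustrative alpha, mu (cm), sigma (cm)
lam, grad = gaussian_1d(theta, x=52.0)
theta = adaptive_filter_step(theta, grad, lam, n_k=1, dt=1 / 30, eps=0.01)
print(theta)
```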

  40. VIDEO

  41. Brown et al., PNAS 2001

  42. Receptive Field Formation in the Hippocampus (Frank et al., 2002; Frank et al., 2004; Suzuki and Brown, 2005)

  43. Receptive Field Formation
     • Recordings from CA1, the deep layers of the entorhinal cortex (EC), and adjacent cortical regions in a familiar and a novel configuration of an alternation task.
     • Four animals were pre-trained on the familiar configuration.
     • 6 tetrodes targeted dorsal CA1; 18 tetrodes targeted EC/cortex.
     • Session structure: Sleep – Run – Sleep – Run – Sleep.

  44. Receptive Field Formation
     Day | Run 1            | Run 2
     1   | Familiar (1-3-7) | Novel (1-3-6)
     2   | Familiar (1-3-7) | Novel (1-3-6)
     3   | Familiar (1-3-7) | Novel (1-3-6)
     4   | Familiar (1-3-7) | Novel (1-4-7)
     5   | Familiar (1-3-7) | Novel (1-4-7)
     …

  45. Examples of CA1 Place Fields (Run 1: Familiar; Run 2: Novel)

  46. Spline Model of the Conditional Intensity Function, defined over space and time; the model parameters are the heights of the spline control points.

  47. POINT PROCESS ADAPTIVE FILTER ALGORITHM: new value = previous value + learning-rate matrix × innovation (error signal); the parameters are the spatial and temporal spline control points.

  48. Application: Rapid Hippocampal Place Field Changes (novel arm). Place fields can develop after little or no previous spiking. (Frank, Stanley and Brown, Journal of Neuroscience, 2004)

  49. Goodness-of-Fit Test (Time-Rescaling Theorem)
     • Rescale the spike train according to τ_k = ∫_{s_{k−1}}^{s_k} λ(t | H_t) dt, where s_k is the time of the k-th spike.
     • If λ(t | H_t) correctly describes the conditional intensity function underlying the spike train, the τ_k’s will be independent and exponentially distributed with rate parameter 1 (time-rescaling theorem).
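
A sketch of the rescaling step, assuming any vectorized callable stands in for the fitted conditional intensity; the grid-based integration and function names are my choices for illustration.

```python
import numpy as np

def rescale_spike_times(spike_times, lam_fn, n_grid=1000):
    """Time-rescaling: tau_k = integral of lambda(t | H_t) over (s_{k-1}, s_k].

    lam_fn : vectorized callable returning the fitted conditional intensity
             at the given times (stand-in for the fitted model).
    Under a correct model the tau_k are i.i.d. Exponential(1).
    """
    taus, prev = [], 0.0
    for s in np.sort(np.asarray(spike_times)):
        grid = np.linspace(prev, s, n_grid)
        vals = lam_fn(grid)
        # trapezoidal approximation of the integral over the inter-spike interval
        taus.append(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))
        prev = s
    return np.array(taus)
```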

  50. Goodness of Fit, CA1
     • Transform z_k = 1 − exp(−τ_k); under a correct model the z_k’s are uniformly distributed on (0, 1).
     • Kolmogorov-Smirnov test to compare the cumulative distribution of the transformed z_k’s with the quantiles of the uniform distribution.
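
A sketch of how the KS comparison can be set up from the rescaled intervals; the approximate 95% band width 1.36/sqrt(n) is the standard large-sample KS bound, and the helper name is mine.

```python
import numpy as np

def ks_plot_points(taus):
    """KS goodness-of-fit points from rescaled inter-spike intervals.

    z_k = 1 - exp(-tau_k) should be i.i.d. Uniform(0,1) under a correct model.
    Returns the sorted z's, the corresponding uniform quantiles, and the
    approximate 95% KS band half-width 1.36/sqrt(n).
    """
    z = np.sort(1.0 - np.exp(-np.asarray(taus)))
    n = z.size
    quantiles = (np.arange(1, n + 1) - 0.5) / n
    band = 1.36 / np.sqrt(n)
    return z, quantiles, band

# Plotting z against the quantiles gives the KS plot; the model is judged
# adequate when the curve stays within +/- band of the 45-degree line.
```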

  51. Goodness-of-Fit Results (within the 95% KS bounds)
     Without temporal component: CA1 0/191 (0%); deep EC 2/56 (4%)
     With temporal component: CA1 71/191 (37%); deep EC 25/56 (45%)
