SLIDE 1
ACKNOWLEDGMENTS: Graphics and Animations; Experimentation: Loren M.
2007 IEEE International Symposium on Information Theory
Acropolis Congress and Exhibition Center, Nice, France
Signal Processing Algorithms to Decipher Brain Function
Emery N. Brown
Department of Brain and Cognitive Sciences, Division of Health
SLIDE 2
SLIDE 3
OUTLINE
- A. Background (Brain Signals)
- B. State-Space and Point Process Models
- C. Ensemble Neural Spike Train Decoding
- D. Neural Receptive Field Plasticity
- E. Conclusion
SLIDE 4
OBJECTIVES
- A. Point process models as a framework for
representing neural spike trains.
- B. State-space modeling as a framework for
characterizing dynamic properties in neural systems.
SLIDE 5
NEUROSCIENCE DATA: DYNAMIC AND MULTIVARIATE
EXPERIMENTS: Neurophysiology, fMRI, Electroencephalography, Magnetoencephalography, Cognitive Responses, Behavioral Responses
THEORY/ABSTRACT MODELS; DATA ANALYSIS
Most Neuroscience Data Analysis Methods Are Static.
SLIDE 6
Neurons (Kandel, Schwartz & Jessell)
SLIDE 7
Each Neuron is Multiple Dynamical Systems:
- Dendritic Dynamics
- Gene-Regulation of Neurotransmitter Synthesis
- Synaptic Plasticity
- Electrotonic Coupling
- Retrograde Propagation of Action Potentials
- Anterograde Propagation of Action Potentials
- Dendritic Currents
SLIDE 8
Neuroscience Experiments are Stimulus-Response
Fly H1 Neuron: Wind Velocity → Spike Trains → Firing Rate
Rieke et al. (1997)
SLIDE 9
Point Process Observation Models
- Definition
– A point process is a binary (0-1) process that
occurs in continuous time or space.
- Examples
– Neural spike trains – Heart beats – Earthquake sites and times – Geyser eruptions
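As a concrete illustration (a hypothetical sketch, not from the talk), a point process can be realized in discrete time as a 0-1 sequence whose per-bin spike probability is approximately λΔ:

```python
import numpy as np

# Illustrative sketch: a point process realized as a 0-1 sequence in
# discrete time. With bin width dt and rate lam (spikes/s), the
# probability of a spike in each bin is approximately lam * dt.
rng = np.random.default_rng(0)

dt = 0.001          # 1 ms bins
lam = 20.0          # 20 spikes/s
T = 10.0            # 10 s of data
n_bins = int(T / dt)

spikes = (rng.random(n_bins) < lam * dt).astype(int)  # binary spike train

print(spikes.sum())     # empirical spike count, expected ~ lam * T = 200
```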
SLIDE 10
State-Space Modeling Paradigm Observation Model (Point Process) State Model Filter Algorithm
SLIDE 11
Point Process Filter Algorithms Recursive Gaussian Approximation Instantaneous Steepest Descent
SLIDE 12
A Spike Train
SLIDE 13
A Spike Train in Discrete Time
n_1, n_2, n_3, n_4, n_5, n_6, n_7, ...; n_t is the spike indicator function in interval t (n_t = 1 if a spike occurs in interval t, and 0 otherwise).
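The spike indicator sequence n_t can be computed from recorded spike times; a minimal sketch (function and variable names are illustrative):

```python
import numpy as np

# Sketch: convert a list of spike times into the discrete-time indicator
# sequence n_t. n_t = 1 if the neuron fired in bin t, and 0 otherwise.
def spike_indicator(spike_times, t_start, t_end, dt):
    n_bins = int(round((t_end - t_start) / dt))
    n = np.zeros(n_bins, dtype=int)
    idx = ((np.asarray(spike_times) - t_start) / dt).astype(int)
    idx = idx[(idx >= 0) & (idx < n_bins)]
    n[idx] = 1          # multiple spikes in one bin collapse to 1
    return n

n = spike_indicator([0.0012, 0.0053, 0.0055], t_start=0.0, t_end=0.01, dt=0.001)
print(n)   # spikes land in bins 1 and 5
```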
SLIDE 14
Conditional Intensity Function:

λ(t | H_t) = lim_{Δ→0} Pr(spike in (t, t+Δ] | H_t) / Δ

where H_t is the history of the spiking process and the covariates up to time t. The conditional intensity function generalizes the Poisson process rate function and characterizes a point process.
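A toy example of history dependence (an assumed relative-refractory model, purely illustrative) shows how λ(t | H_t) can depend on the time since the last spike:

```python
import numpy as np

# Illustrative sketch: a conditional intensity that depends on spiking
# history through a refractory term. lam(t | H_t) generalizes the
# Poisson rate: here the rate is suppressed just after each spike and
# recovers with time constant tau (all values are made up).
def conditional_intensity(base_rate, last_spike_age, tau=0.005):
    # relative refractory period: rate recovers exponentially
    return base_rate * (1.0 - np.exp(-last_spike_age / tau))

# shortly after a spike the intensity is low; long after, near base rate
print(conditional_intensity(50.0, 0.001))   # suppressed
print(round(conditional_intensity(50.0, 0.050), 2))   # close to 50
```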
SLIDE 15
Rat Hippocampus Anatomy (Amaral & Witter, 1989)
SLIDE 16
The Place Cell Phenomenon of the Rat Hippocampus O’Keefe and Dostrovsky (1971)
Frank et al., Neuron, 2000
SLIDE 17
SLIDE 18
Multielectrode Recordings of Brain Activity
Richard A. Normann, Bioengineering Dept., University of Utah; Matt Wilson, Dept. of Brain and Cognitive Sciences, MIT
SLIDE 19
Dynamic Analyses of Information Encoding by Neural Ensembles. (Brown et al. 1998; Barbieri et al. 2004; Brown and Barbieri, 2006; Ergun et al. 2007)
SLIDE 20
ENCODING ANALYSIS: PLACE FIELDS
SLIDE 21
ENCODING ANALYSIS
- Model the relation between spiking and position as
an inhomogeneous Poisson process.
- Model 1: Gaussian model for the rate function
- Model 2: Zernike polynomial model for the rate function
- Represent the position as an AR(1) process to enforce
continuity of information processing (behavioral constraint).
SLIDE 22
Model 1: GAUSSIAN SPATIAL INTENSITY FUNCTION

λ(x) = exp{α − ½ (x − μ)′ W⁻¹ (x − μ)}

where exp(α) is the maximum field height, W is the scale matrix, and μ is the center.
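A minimal sketch of this Gaussian intensity function (parameter values are made up for illustration):

```python
import numpy as np

# Sketch of Model 1: a Gaussian spatial intensity function
#   lam(x) = exp(alpha) * exp(-0.5 * (x - mu)' W^{-1} (x - mu))
# exp(alpha) is the maximum field height, mu the field center, and W the
# scale matrix (all names and values here are illustrative).
def gaussian_intensity(x, alpha, mu, W):
    d = np.asarray(x) - np.asarray(mu)
    return np.exp(alpha - 0.5 * d @ np.linalg.solve(W, d))

alpha = np.log(15.0)                 # peak rate 15 spikes/s
mu = np.array([35.0, 35.0])          # field center (cm)
W = np.diag([25.0, 25.0])            # 5 cm standard deviation per axis

print(gaussian_intensity([35.0, 35.0], alpha, mu, W))   # maximum at the center
print(gaussian_intensity([45.0, 35.0], alpha, mu, W))   # lower off center
```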
SLIDE 23
Model 2: ZERNIKE SPATIAL INTENSITY FUNCTION
SLIDE 24
ENCODING ANALYSIS (Brown et al. J. Neurosci., 1998)
(Barbieri et al. Neural Computation, 2004)
- A rat was trained to forage for chocolate pellets in
a 70 cm diameter environment for 23 (25) minutes.
- Recordings were made from 32 (34) CA1 place
cells.
- Position data were recorded at 30Hz (30 frames/
sec).
- Estimate the model parameters for each cell and
the path from the first 13 (15) minutes of the experiment using maximum likelihood.
SLIDE 25
Likelihood of the Neural Spike Train. We estimate the parameters for the Gaussian and Zernike models by maximum likelihood.
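The discrete-time point process log likelihood that maximum likelihood fitting maximizes can be sketched as follows (a standard form; the simulated data and names are illustrative):

```python
import numpy as np

# Sketch: the discrete-time point process log likelihood,
#   log L(theta) = sum_t [ n_t * log(lam_t * dt) - lam_t * dt ],
# evaluated on a binary spike train n and a model-predicted intensity
# sequence lam (illustrative names; not the paper's code).
def spike_train_loglik(n, lam, dt):
    lam = np.clip(lam, 1e-12, None)          # guard against log(0)
    return np.sum(n * np.log(lam * dt) - lam * dt)

rng = np.random.default_rng(1)
dt = 0.001
true_rate = 10.0
n = (rng.random(5000) < true_rate * dt).astype(int)

# the likelihood should prefer rates near the true one
ll_true = spike_train_loglik(n, np.full(5000, 10.0), dt)
ll_off  = spike_train_loglik(n, np.full(5000, 40.0), dt)
print(ll_true > ll_off)
```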
SLIDE 26
SLIDE 27
DECODING ANALYSIS: ENSEMBLE REPRESENTATION OF POSITION
SLIDE 28
POSITION RECONSTRUCTION BY RECURSIVE FILTERING
(Brown et al. J. Neurosci. 1998; Barbieri et al. Neural Computation 2004)
- A rat was trained to forage for chocolate pellets in a 70 cm
diameter environment for 23 minutes.
- Recordings were made from 34 CA1 place cells.
- Position data were recorded at 30Hz (30 frames/sec).
- Decode the last 10 minutes of the experiment in real-time at
the camera frame rate (30 frames/sec).
SLIDE 29
STATE-SPACE MODEL AND NEURAL SPIKE TRAIN DECODING Observation Model State Model
State: the position of the animal at time k. Observation model: the place field parameters of neurons c = 1, …, C.
SLIDE 30
RECURSIVE STATE ESTIMATION FOR POINT PROCESSES
Bayes’ Theorem
State Model First-Order Autoregressive Model Observation Model Poisson (Gaussian or Zernike)
Chapman-Kolmogorov Equation
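Reconstructed from the slide's labels (standard state-space notation; the symbols are assumptions, with x_k the state, n_k the spike indicator, H_k the spiking history, and N_{1:k} the observed spikes up to time k):

```latex
% One-step prediction (Chapman-Kolmogorov equation)
p(x_k \mid N_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid N_{1:k-1})\, dx_{k-1}

% Posterior (Bayes' theorem), with a point process observation model
p(x_k \mid N_{1:k}) \propto p(n_k \mid x_k, H_k)\, p(x_k \mid N_{1:k-1})
```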
SLIDE 31
Neural Spike Train Decoding Algorithm (Gaussian Approximation)
- A spike from cell c at time k
- Probability of a spike from cell c at time k
- Position at time k predicted from time k-1
Covariance Update Algorithm
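A hedged 1-D sketch of such a decoder: a standard Gaussian-approximation point process filter with a random-walk state model and simulated Gaussian place cells (all parameter values are illustrative, not the paper's):

```python
import numpy as np

# Illustrative point process filter for 1-D position decoding.
rng = np.random.default_rng(2)

dt = 0.001
mus = np.linspace(0.0, 100.0, 25)      # place field centers (cm)
sig2 = 36.0                            # field width (cm^2)
peak = 15.0                            # peak rate (spikes/s)
q = 0.1                                # state noise variance per bin

def rates(x):
    return peak * np.exp(-0.5 * (x - mus) ** 2 / sig2)

# simulate a slow random-walk path and the ensemble spike trains
T = 4000
path = np.clip(50.0 + np.cumsum(rng.normal(0.0, 0.3, T)), 0.0, 100.0)
spikes = rng.random((T, mus.size)) < rates(path[:, None]) * dt

x_post, W_post = 50.0, 10.0
err = []
for k in range(T):
    x_p, W_p = x_post, W_post + q              # one-step prediction
    lam = rates(x_p) * dt                      # per-bin spike probabilities
    dlog = -(x_p - mus) / sig2                 # d log(lam) / dx
    # posterior variance: add observed Fisher information from each cell
    W_post = 1.0 / (1.0 / W_p + np.sum(dlog ** 2 * lam + (spikes[k] - lam) / sig2))
    # posterior mean: move along the innovation (n_k - lam_k * dt)
    x_post = x_p + W_post * np.sum(dlog * (spikes[k] - lam))
    err.append(abs(x_post - path[k]))

print(np.median(err))    # typically a few cm
```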
SLIDE 32
Decoding Video
SLIDE 33
                 Reverse Correlation   Bayes' (Gaussian)
Med Error (cm)         29.6                  7.7
R-Squared              34.0                 89.0
SLIDE 34
Summary
- 1. Median decoding errors (cm) for ~30 neurons:
Gaussian: 7.9; 7.7. Zernike: 6.0; 5.5. Reverse correlation: 29.6.
- 2. Coverage probabilities (expected 0.95):
Gaussian: 0.31; 0.40. Zernike: 0.67; 0.75.
- 3. Improvements are due to a more accurate spatial model, a faster
learning rate, and a smaller updating interval.
- 4. THE HIPPOCAMPUS MAINTAINS A DYNAMIC REPRESENTATION
OF THE ANIMAL'S POSITION.
SLIDE 35
Wessberg et al. (2000); Taylor et al. (2002); Serruya et al. (2002); Musallam et al. (2004); Hochberg et al. (2006); Srinivasan et al. (2006, 2007a, 2007b).
SLIDE 36
Neural systems are dynamic. They are constantly changing how they represent information. Key Point
SLIDE 37
An Analysis of Hippocampal Receptive Field Dynamics by Point Process Adaptive Filters Emery N. Brown, David Nguyen, Loren Frank, Matt Wilson,Victor Solo Proceedings of the National Academy of Sciences (2001).
SLIDE 38
SPATIOTEMPORAL DYNAMICS OF CA1 PLACE CELLS
- Abbott and Blum (1996); Mehta et al. (1997, 2000, 2002)
- Over the first few passes:
– Fields increased in size
– Field center of mass moved backwards
– Fields skewed
- Analyses depended on the restricted firing of place cells
SLIDE 39
SLIDE 40
GAUSSIAN MODEL OF A CONDITIONAL INTENSITY FUNCTION

λ(x) = exp{α − (x − μ)² / (2σ²)}

where exp(α) is the maximum field height, σ is the scale parameter, and μ is the center.
SLIDE 41
STATE-SPACE MODEL AND NEURAL PLASTICITY
Observation Model; State Model
- A time-varying vector of model parameters
- The animal's position at time k, an observable time-varying covariate
SLIDE 42
ADAPTIVE POINT PROCESS FILTER ALGORITHM (Instantaneous Steepest Descent)
New Value = Previous Value + Learning Rate × Error Signal (Innovation)
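The steepest-descent update can be sketched for a single drifting place field center (an illustrative simulation, not the paper's implementation):

```python
import numpy as np

# Sketch of the instantaneous steepest-descent adaptive filter:
#   new value = previous value
#             + learning rate * (d log lam / d theta) * (n_k - lam_k * dt)
# Here it tracks the drifting center mu of a 1-D Gaussian place field
# while the animal's position x_k is observed (all values illustrative).
rng = np.random.default_rng(3)

dt = 0.001
peak, sig2 = 25.0, 25.0
eps = 2.0                      # learning rate for the center parameter

T = 30000
x = 50.0 + 10.0 * np.sin(np.linspace(0, 20, T))       # observed path
mu_true = np.linspace(40.0, 60.0, T)                  # slowly drifting center
lam_true = peak * np.exp(-0.5 * (x - mu_true) ** 2 / sig2)
n = (rng.random(T) < lam_true * dt).astype(int)

mu_hat = 40.0
for k in range(T):
    lam = peak * np.exp(-0.5 * (x[k] - mu_hat) ** 2 / sig2) * dt
    dlog_dmu = (x[k] - mu_hat) / sig2        # d log(lam) / d mu
    mu_hat = mu_hat + eps * dlog_dmu * (n[k] - lam)

print(round(mu_hat, 1))     # the estimate tracks the drift toward 60
```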
SLIDE 43
VIDEO
SLIDE 44
Brown et al., PNAS 2001
SLIDE 45
Receptive Field Formation in the Hippocampus
Frank et al. (2002); Frank et al. (2004); Suzuki and Brown (2005)
SLIDE 46
Receptive Field Formation
- Recordings from CA1, the deep layers of the entorhinal cortex
(EC), and adjacent cortical regions in a familiar and a novel configuration of an alternation task.
- Four animals pre-trained on familiar configuration
- 6 tetrodes targeted dorsal CA1
- 18 tetrodes targeted EC/Cortex
- Sleep – Run – Sleep – Run – Sleep
SLIDE 47
Receptive Field Formation
Day   Run 1              Run 2
1     Familiar (1-3-7)   Novel (1-3-6)
2     Familiar (1-3-7)   Novel (1-3-6)
3     Familiar (1-3-7)   Novel (1-3-6)
4     Familiar (1-3-7)   Novel (1-4-7)
5     Familiar (1-3-7)   Novel (1-4-7)
…
SLIDE 48
Examples of CA1 Place Fields
Run 1 Familiar Run 2 Novel
SLIDE 49
Spline Model of the Conditional Intensity Function (Space × Time)
The model parameters are the heights of the control points.
SLIDE 50
POINT PROCESS ADAPTIVE FILTER ALGORITHM
New Value = Previous Value + Learning Rate Matrix × Error Signal (Innovation)
The parameters are the spatial control points and the temporal control points.
SLIDE 51
SLIDE 52
Application: Rapid Hippocampal Place Field Changes (Novel Arm)
Place fields can develop after little or no previous spiking.
Frank, Stanley and Brown, Journal of Neuroscience (2004)
SLIDE 53
Goodness-of-Fit Test (Time-Rescaling Theorem)
- Rescale the spike train according to
τ_k = ∫ from s_{k-1} to s_k of λ(u | H_u) du,
where s_k is the time of the k-th spike.
– If λ(t | H_t) correctly describes the conditional intensity function underlying the spike train, the τ_k's will be independent, exponential random variables with rate parameter 1 (Time-Rescaling Theorem).
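The rescaling and the resulting uniformity check can be sketched for a constant-rate model (an illustrative simulation; for a history-dependent model the integral of λ(u | H_u) replaces the constant-rate product):

```python
import numpy as np

# Sketch of the time-rescaling goodness-of-fit check: integrate the
# (here, constant) conditional intensity between successive spikes.
# Under a correct model the rescaled intervals tau_k are i.i.d.
# exponential(1), so z_k = 1 - exp(-tau_k) are i.i.d. uniform(0, 1)
# and can be compared with uniform quantiles (a KS-style test).
rng = np.random.default_rng(4)

lam = 20.0                                  # true (and modeled) rate
isi = rng.exponential(1.0 / lam, 2000)      # Poisson spike train ISIs
tau = lam * isi                             # rescaled intervals
z = 1.0 - np.exp(-tau)                      # should be uniform(0, 1)

z_sorted = np.sort(z)
quantiles = (np.arange(1, z.size + 1) - 0.5) / z.size
ks = np.max(np.abs(z_sorted - quantiles))   # KS statistic
print(ks < 1.36 / np.sqrt(z.size))          # inside the 95% KS bounds (usually True for a correct model)
```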
SLIDE 54
Goodness of Fit - CA1
- Kolmogorov-Smirnov test to compare the cumulative
distribution of the model to that of the data.
Cumulative Distribution of the Transformed z_k's vs. Quantiles of the Uniform Distribution
SLIDE 55
Goodness of Fit Results
Within 95% bounds
Without Temporal Component:
CA1: 0 / 191 (0%)
Deep EC: 2 / 56 (4%)
With Temporal Component:
CA1: 71 / 191 (37%)
Deep EC: 25 / 56 (45%)
SLIDE 56
Summary
- Individual CA1 place fields can show very rapid
plasticity.
- Across the population, there is a critical minimum
length of experience required to form a stable representation (5-7 minutes or three days).
- Even after a stable representation is present, the
animal still behaves differently in the novel place, suggesting that downstream cortical regions still differentiate between the novel and familiar regions.
SLIDE 57
CONCLUSION
- Characterization of single neuron and ensembles of neurons
within the hippocampus during learning and memory formation.
- One form of communication in neural systems is with spikes
that are dynamic, high-dimensional point processes.
- State-space modeling is an important theoretical link between
statistical data analysis and deterministic dynamic modeling of neural systems.
- Neuroscience needs