SLIDE 1

2007 IEEE International Symposium on Information Theory Acropolis Congress and Exhibition Center Nice, France

Signal Processing Algorithms to Decipher Brain Function

Emery N. Brown Department of Brain and Cognitive Sciences Division of Health Sciences and Technology Massachusetts Institute of Technology Department of Anesthesia and Critical Care Massachusetts General Hospital Harvard Medical School June 29, 2007

SLIDE 2

ACKNOWLEDGMENTS

Experimentation: Loren M. Frank, David P. Nguyen, Michael C. Quirk

Graphics and Animations: Riccardo Barbieri, Loren M. Frank

Modeling: Riccardo Barbieri, Uri Eden, David P. Nguyen

Collaborators: Matthew A. Wilson (MIT), Loren M. Frank (UCSF), Victor Solo (U. Mich)

Supported by NIMH, NIDA and NSF

SLIDE 3

OUTLINE

  • A. Background (Brain Signals)
  • B. State-Space and Point Process Models
  • C. Ensemble Neural Spike Train Decoding
  • D. Neural Receptive Field Plasticity
  • E. Conclusion
SLIDE 4

OBJECTIVES

  • A. Point process models as a framework for representing neural spike trains.
  • B. State-space modeling as a framework for characterizing dynamic properties in neural systems.

SLIDE 5

NEUROSCIENCE DATA: DYNAMIC AND MULTIVARIATE

EXPERIMENTS: Neurophysiology, fMRI, Electroencephalography, Magnetoencephalography, Cognitive Responses, Behavioral Responses

THEORY/ABSTRACT MODELS

DATA ANALYSIS

Most Neuroscience Data Analysis Methods Are Static.

SLIDE 6

Neurons (Kandel, Schwartz & Jessell)

SLIDE 7

Each Neuron is Multiple Dynamical Systems

  • Dendritic Dynamics
  • Gene-Regulation of Neurotransmitter Synthesis
  • Synaptic Plasticity
  • Electrotonic Coupling
  • Retrograde Propagation of Action Potentials
  • Anterograde Propagation of Action Potentials
  • Dendritic Currents

SLIDE 8

Neuroscience Experiments are Stimulus-Response: Fly H1 Neuron (wind velocity stimulus; spike trains; firing rate). Rieke et al. (1997).

SLIDE 9

Point Process Observation Models

  • Definition
    – A point process is a binary (0-1) process that occurs in continuous time or space.
  • Examples
    – Neural spike trains
    – Heart beats
    – Earthquake sites and times
    – Geyser eruptions

SLIDE 10

State-Space Modeling Paradigm

  • Observation Model (Point Process)
  • State Model
  • Filter Algorithm

SLIDE 11

Point Process Filter Algorithms

  • Recursive Gaussian Approximation
  • Instantaneous Steepest Descent

SLIDE 12

A Spike Train

SLIDE 13

A Spike Train in Discrete Time

Partition time into small intervals; n_t is the spike indicator function (0 or 1) in interval t, giving the sequence n_1, n_2, ..., n_7 in this example.
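In code, the discretization above amounts to binning spike times into an indicator sequence. This is a minimal sketch; the spike times, bin width, and interval length are made-up illustrative values, not data from the talk.

```python
import numpy as np

# Hypothetical spike times (seconds) and a 1 ms bin width Delta,
# chosen small enough that each bin holds at most one spike.
spike_times = np.array([0.0012, 0.0053, 0.0078])
dt = 0.001                                  # bin width Delta (s)
T = 0.010                                   # total observation interval (s)
edges = np.linspace(0.0, T, int(round(T / dt)) + 1)

# n_t: spike indicator per bin (0 or 1 when Delta is small)
counts, _ = np.histogram(spike_times, bins=edges)
n = (counts > 0).astype(int)
```

With these values, `n` has a 1 in bins 1, 5, and 7 and 0 elsewhere.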

SLIDE 14

Conditional Intensity Function:

λ(t | H_t) = lim_{Δ→0} Pr(spike in (t, t+Δ] | H_t) / Δ

where H_t is the history of the spiking process and covariates up to time t. The conditional intensity function generalizes the Poisson process rate function and characterizes a point process.
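A point process with a given intensity can be simulated with the Bernoulli approximation Pr(spike in bin t) ≈ λ(t)Δ for small Δ. The sinusoidal rate below is a hypothetical example chosen for illustration, not one used in the talk.

```python
import numpy as np

# Simulate a spike train from an intensity function via the
# Bernoulli approximation: P(spike in bin t) ≈ λ(t)Δ.
rng = np.random.default_rng(0)
dt = 0.001                                   # bin width Δ (s)
t = np.arange(0.0, 10.0, dt)                 # 10 s of time bins
lam = 20.0 + 15.0 * np.sin(2 * np.pi * t)    # rate (spikes/s), always >= 5

# One Bernoulli draw per bin; λΔ is at most 0.035 here, so the
# approximation to the point process is reasonable.
spikes = (rng.random(t.size) < lam * dt).astype(int)
```

The mean rate is 20 spikes/s, so roughly 200 spikes are expected over the 10 s interval.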

SLIDE 15

Rat Hippocampus Anatomy (Amaral & Witter 1989)

SLIDE 16

The Place Cell Phenomenon of the Rat Hippocampus O’Keefe and Dostrovsky (1971)

Frank et al., Neuron, 2000

SLIDE 17

SLIDE 18

Multielectrode Recordings of Brain Activity

Richard A. Norman, Bioengineering Dept., University of Utah Matt Wilson, Dept of Brain and Cognitive Sciences, MIT

SLIDE 19

Dynamic Analyses of Information Encoding by Neural Ensembles. (Brown et al. 1998; Barbieri et al. 2004; Brown and Barbieri, 2006; Ergun et al. 2007)

SLIDE 20

ENCODING ANALYSIS: PLACE FIELDS

SLIDE 21

ENCODING ANALYSIS

  • Model the relation between spiking and position as an inhomogeneous Poisson process
    – Model 1: Gaussian model for the rate function
    – Model 2: Zernike polynomial model for the rate function
  • Represent the position as an AR(1) process to enforce continuity of information processing (behavioral constraint)

SLIDE 22

Model 1: GAUSSIAN SPATIAL INTENSITY FUNCTION

λ(x) = exp{ α − (x − μ)ᵀ W⁻¹ (x − μ) / 2 }

where exp(α) is the maximum field height, W is the scale matrix, and μ is the field center.
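A short sketch of evaluating a Gaussian spatial intensity of this form; the peak rate, center, and scale matrix values below are illustrative assumptions, not fitted values from the experiment.

```python
import numpy as np

# Gaussian spatial intensity λ(x) = exp{α − (x − μ)ᵀ W⁻¹ (x − μ)/2},
# with exp(α) the maximum field height, μ the center, W the scale matrix.
alpha = np.log(15.0)                  # peak rate of 15 spikes/s (assumed)
mu = np.array([35.0, 35.0])           # field center (cm), assumed
W = np.diag([64.0, 64.0])             # scale matrix (cm^2), assumed

def gaussian_intensity(x):
    """Firing intensity (spikes/s) at 2-D position x."""
    d = x - mu
    return float(np.exp(alpha - 0.5 * d @ np.linalg.solve(W, d)))
```

At the field center the intensity equals the peak rate exp(α); it falls off as exp(−1/2) per scale length away from the center.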

SLIDE 23

Model 2: ZERNIKE SPATIAL INTENSITY FUNCTION

SLIDE 24

ENCODING ANALYSIS (Brown et al. J. Neurosci., 1998; Barbieri et al. Neural Computation, 2004)

  • A rat was trained to forage for chocolate pellets in a 70 cm diameter environment for 23 (25) minutes.
  • Recordings were made from 32 (34) CA1 place cells.
  • Position data were recorded at 30 Hz (30 frames/sec).
  • Estimate the model parameters for each cell and the path from the first 13 (15) minutes of the experiment using maximum likelihood.

SLIDE 25

Likelihood of the Neural Spike Train

We estimate the parameters for the Gaussian and Zernike models by maximum likelihood.
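The quantity being maximized can be sketched with the standard discrete-time approximation to the point-process log-likelihood, log L ≈ Σ_t [ n_t log(λ_t Δ) − λ_t Δ ]. The spike pattern and rates below are made up to show that the likelihood favors intensities that are high where the spikes actually occur.

```python
import numpy as np

def log_likelihood(n, lam, dt):
    """Discrete-time point-process log-likelihood:
    sum_t [ n_t * log(lam_t * dt) - lam_t * dt ]."""
    n = np.asarray(n, dtype=float)
    lam = np.asarray(lam, dtype=float)
    return float(np.sum(n * np.log(lam * dt) - lam * dt))

n = np.array([0, 1, 0, 0, 1])          # hypothetical spike indicators
dt = 0.001
good = log_likelihood(n, [5.0, 30.0, 5.0, 5.0, 30.0], dt)  # high rate at spikes
bad = log_likelihood(n, [30.0, 5.0, 30.0, 30.0, 5.0], dt)  # low rate at spikes
```

The first intensity sequence scores higher because it places high rate exactly in the bins that contain spikes.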

SLIDE 26

SLIDE 27

DECODING ANALYSIS: ENSEMBLE REPRESENTATION OF POSITION

SLIDE 28

POSITION RECONSTRUCTION BY RECURSIVE FILTERING

(Brown et al. J. Neurosci. 1998; Barbieri et al. Neural Computation 2004)

  • A rat was trained to forage for chocolate pellets in a 70 cm diameter environment for 23 minutes.
  • Recordings were made from 34 CA1 place cells.
  • Position data were recorded at 30 Hz (30 frames/sec).
  • Decode the last 10 minutes of the experiment in real-time at the camera frame rate (30 frames/sec).

SLIDE 29

STATE-SPACE MODEL AND NEURAL SPIKE TRAIN DECODING

Observation Model: ensemble spiking of neurons c = 1, ..., C, each with its place-field parameter
State Model: the position of the animal at time k

SLIDE 30

RECURSIVE STATE ESTIMATION FOR POINT PROCESSES

Bayes' Theorem

State Model: first-order autoregressive model
Observation Model: Poisson (Gaussian or Zernike rate model)

Chapman-Kolmogorov Equation
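Written out, the two relations named on this slide take the standard recursive form; this is a sketch using x_k for the position state and N_{1:k} for the spiking observations up to time k.

```latex
% Posterior at step k via Bayes' theorem
p(x_k \mid N_{1:k}) \;\propto\;
  \underbrace{p(n_k \mid x_k, N_{1:k-1})}_{\text{observation model}}\;
  \underbrace{p(x_k \mid N_{1:k-1})}_{\text{one-step prediction}}

% One-step prediction via the Chapman--Kolmogorov equation
p(x_k \mid N_{1:k-1}) \;=\;
  \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid N_{1:k-1})\, dx_{k-1}
```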

SLIDE 31

Neural Spike Train Decoding Algorithm (Gaussian Approximation)

Posterior mode update:

x_{k|k} = x_{k|k-1} + V_{k|k} Σ_c [∂ log λ_c / ∂x] (n_k^c − λ_c(x_{k|k-1}) Δ)

where n_k^c indicates a spike from cell c at time k, λ_c(x_{k|k-1})Δ is the probability of a spike from cell c at time k, and x_{k|k-1} is the position at time k predicted from time k-1, together with a covariance update algorithm for V_{k|k}.
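A minimal one-dimensional sketch of a point-process filter with a Gaussian approximation to the posterior, in the spirit of this slide. It assumes Gaussian place fields tiling a linear track and a random-walk state model; all parameter values (field spacing, peak rate, state variance) are illustrative choices, not the experiment's.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                              # time bin (s)
n_steps = 20000                         # 20 s of data
sig_w2 = 5e-4                           # random-walk state variance per bin (cm^2)

centers = np.linspace(0.0, 100.0, 25)   # place-field centers on a 100 cm track
sig2 = 25.0                             # field width parameter (cm^2), assumed
peak = 15.0                             # peak rate (spikes/s), assumed

def lam(x):
    """Vector of intensities for all cells at position x."""
    return peak * np.exp(-(x - centers) ** 2 / (2 * sig2))

# Simulate a random-walk path and ensemble spiking (Bernoulli approximation)
x_true = np.empty(n_steps)
x_true[0] = 50.0
for k in range(1, n_steps):
    x_true[k] = np.clip(x_true[k - 1] + rng.normal(0.0, np.sqrt(sig_w2)), 0.0, 100.0)
spikes = rng.random((n_steps, centers.size)) < lam(x_true[:, None]) * dt

# Decode: Gaussian-approximation point-process filter
x_est, V = 50.0, 1.0
errs = np.empty(n_steps)
for k in range(n_steps):
    # One-step prediction under the random-walk state model
    x_p, V_p = x_est, V + sig_w2
    L = lam(x_p)
    dlog = (centers - x_p) / sig2       # d log(lambda_c)/dx at the prediction
    d2log = -1.0 / sig2                 # d^2 log(lambda_c)/dx^2, same for all cells
    n_k = spikes[k].astype(float)
    # Posterior variance and mode updates (Laplace-style Gaussian approximation)
    V = 1.0 / (1.0 / V_p + np.sum(dlog ** 2 * L * dt - (n_k - L * dt) * d2log))
    x_est = x_p + V * np.sum(dlog * (n_k - L * dt))
    errs[k] = abs(x_est - x_true[k])

median_error_cm = float(np.median(errs[1000:]))   # skip the initial transient
```

Each spike pulls the position estimate toward the spiking cell's field center, weighted by the posterior variance; with these settings the decoder tracks the simulated path to within a few centimeters.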

SLIDE 32

Decoding Video

SLIDE 33

                    Reverse Correlation   Bayes' (Gaussian)
Median Error (cm)          29.6                  7.7
R-squared                  34.0                 89.0

SLIDE 34

Summary

  • 1. Median decoding errors (cm) for ~30 neurons: Gaussian 7.9; 7.7. Zernike 6.0; 5.5. Reverse correlation 29.6.
  • 2. Coverage probabilities (expected 0.95): Gaussian 0.31; 0.40. Zernike 0.67; 0.75.
  • 3. Improvements are due to a more accurate spatial model, a faster learning rate, and a smaller updating interval.
  • 4. THE HIPPOCAMPUS MAINTAINS A DYNAMIC REPRESENTATION OF THE ANIMAL'S POSITION.

SLIDE 35

Wessberg et al. (2000); Taylor et al. (2002); Serruya et al. (2002); Mussallam (2004); Hochberg et al. (2006); Srinivasan et al. (2006, 2007a, 2007b).

SLIDE 36

Key Point: Neural systems are dynamic. They are constantly changing how they represent information.

SLIDE 37

An Analysis of Hippocampal Receptive Field Dynamics by Point Process Adaptive Filters. Emery N. Brown, David Nguyen, Loren Frank, Matt Wilson, Victor Solo. Proceedings of the National Academy of Sciences (2001).

SLIDE 38

SPATIOTEMPORAL DYNAMICS OF CA1 PLACE CELLS

  • Abbott and Blum (1996); Mehta et al. (1997, 2000, 2002)
  • Over the first few passes:
    – Fields increased in size
    – Field center of mass moved backwards
    – Fields skewed
    – Analyses dependent on restricted firing of place cells

SLIDE 39

SLIDE 40

GAUSSIAN MODEL OF A CONDITIONAL INTENSITY FUNCTION

λ(x | θ) = exp{ α − (x − μ)² / (2σ²) }

where exp(α) is the maximum field height, σ is the scale parameter, and μ is the center.

SLIDE 41

STATE-SPACE MODEL AND NEURAL PLASTICITY

Observation Model / State Model: θ_k, a time-varying vector of model parameters; x_k, the animal's position at time k, an observable time-varying covariate.

SLIDE 42

ADAPTIVE POINT PROCESS FILTER ALGORITHM (Instantaneous Steepest Descent)

θ_k = θ_{k-1} + ε [∂ log λ(θ_{k-1}) / ∂θ] (n_k − λ(θ_{k-1}) Δ)

new value = previous value + learning rate ε × innovation (error signal)
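The steepest-descent update can be sketched for a single parameter, here the center μ of a 1-D Gaussian place field whose true value drifts during the session. The field shape, animal path, and learning rate ε are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.001, 60000              # 60 s of data at 1 ms resolution
peak, sig2 = 20.0, 25.0                 # field peak rate and width (assumed)
eps = 2.0                               # learning rate epsilon (assumed)

# True field center drifts from 40 cm to 55 cm over the session
mu_true = np.linspace(40.0, 55.0, n_steps)
# Animal's path sweeps back and forth across the field (assumed)
x = 50.0 + 10.0 * np.sin(2 * np.pi * np.arange(n_steps) * dt / 5.0)

mu_hat = 40.0
for k in range(n_steps):
    # Simulate one bin of spiking from the true (drifting) field
    lam_true = peak * np.exp(-(x[k] - mu_true[k]) ** 2 / (2 * sig2))
    n_k = float(rng.random() < lam_true * dt)
    # Instantaneous steepest-descent update of the estimated center:
    # mu_hat += eps * [d log(lambda)/d mu] * (n_k - lambda * dt)
    lam_hat = peak * np.exp(-(x[k] - mu_hat) ** 2 / (2 * sig2))
    dlog = (x[k] - mu_hat) / sig2
    mu_hat += eps * dlog * (n_k - lam_hat * dt)
```

Each spike nudges the estimated center toward the position where the spike occurred; bins without spikes nudge it away from positions where the model over-predicts, so the estimate follows the drifting field.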

SLIDE 43

VIDEO

SLIDE 44

Brown et al., PNAS 2001

SLIDE 45

Receptive Field Formation in the Hippocampus

Frank et al. (2002); Frank et al. (2004); Suzuki and Brown (2005)

SLIDE 46

Receptive Field Formation

  • Recordings from CA1, the deep layers of the entorhinal cortex (EC), and adjacent cortical regions in a familiar and a novel configuration of an alternation task.
  • Four animals pre-trained on the familiar configuration
  • 6 tetrodes targeted dorsal CA1
  • 18 tetrodes targeted EC/cortex
  • Sleep – Run – Sleep – Run – Sleep
SLIDE 47

Receptive Field Formation

Day   Run 1              Run 2
1     Familiar (1-3-7)   Novel (1-3-6)
2     Familiar (1-3-7)   Novel (1-3-6)
3     Familiar (1-3-7)   Novel (1-3-6)
4     Familiar (1-3-7)   Novel (1-4-7)
5     Familiar (1-3-7)   Novel (1-4-7)
…

SLIDE 48

Examples of CA1 Place Fields

Run 1: Familiar. Run 2: Novel.

SLIDE 49

Spline Model of the Conditional Intensity Function (Space × Time)

θ gives the heights of the control points.

SLIDE 50

POINT PROCESS ADAPTIVE FILTER ALGORITHM

θ_k = θ_{k-1} + E [∂ log λ(θ_{k-1}) / ∂θ] (n_k − λ(θ_{k-1}) Δ)

new value = previous value + learning-rate matrix E × innovation (error signal), where θ contains the spatial control points and the temporal control points.

SLIDE 51

SLIDE 52

Application: Rapid Hippocampal Place Field Changes (Novel Arm)

Place fields can develop after little or no previous spiking.

Frank, Stanley and Brown, Journal of Neuroscience (2004)

SLIDE 53
Goodness-of-Fit Test (Time-Rescaling Theorem)

  • Rescale the spike train according to

    τ_k = ∫_{s_{k-1}}^{s_k} λ(u | H_u) du

    where s_k is the time of the k-th spike.
    – If λ(t | H_t) correctly describes the conditional intensity function underlying the spike train, the τ_k's will be independent and exponentially distributed with rate parameter 1 (Time-Rescaling Theorem).
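The rescaling can be sketched together with a Kolmogorov-Smirnov comparison. A constant-rate Poisson train is used so the rescaling integral reduces to rate × interspike interval; the rates and data below are synthetic, chosen only to show that a correct intensity passes the check while a misspecified one fails.

```python
import numpy as np

rng = np.random.default_rng(3)
rate = 10.0                                   # true constant intensity (spikes/s)
isis = rng.exponential(1.0 / rate, size=2000)  # simulated interspike intervals

# tau_k = integral of lambda over the k-th ISI; for a constant rate this
# is just rate * ISI. Under the correct model, tau_k ~ Exponential(1),
# so z_k = 1 - exp(-tau_k) ~ Uniform(0, 1).
tau = rate * isis
z = 1.0 - np.exp(-tau)

def ks_uniform(z):
    """One-sample KS statistic of z against the Uniform(0, 1) CDF."""
    z_sorted = np.sort(z)
    i = np.arange(1, z.size + 1)
    return max(np.max(i / z.size - z_sorted),
               np.max(z_sorted - (i - 1) / z.size))

ks_good = ks_uniform(z)                          # correct intensity
ks_bad = ks_uniform(1.0 - np.exp(-0.5 * tau))    # rate misspecified by half
bound_95 = 1.36 / np.sqrt(z.size)                # approximate 95% KS bound
```

The correct model's KS statistic sits near the 95% bound, while halving the rate pushes the rescaled times well outside it.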

SLIDE 54

Goodness of Fit - CA1

  • Kolmogorov-Smirnov test to compare cumulative distributions of model to data.

[KS plot: cumulative distribution of the transformed z_k's vs. quantiles of the uniform distribution]

SLIDE 55

Goodness of Fit Results

Within the 95% bounds:

                             CA1            Deep EC
Without Temporal Component   0/191 (0%)     2/56 (4%)
With Temporal Component      71/191 (37%)   25/56 (45%)

SLIDE 56

Summary

  • Individual CA1 place fields can show very rapid plasticity.
  • Across the population, there is a critical minimum length of experience required to form a stable representation (5-7 minutes or three days).
  • Even after a stable representation is present, the animal still behaves differently in the novel place, suggesting that downstream cortical regions still differentiate between the novel and familiar regions.

SLIDE 57

CONCLUSION

  • Characterization of single neurons and ensembles of neurons within the hippocampus during learning and memory formation.
  • One form of communication in neural systems is with spikes that are dynamic, high-dimensional point processes.
  • State-space modeling is an important theoretical link between statistical data analysis and deterministic dynamic modeling of neural systems.
  • Neuroscience needs:
    – Greater involvement of quantitative scientists
    – Greater push to develop dynamic methods appropriate for the broad range of data being collected in neuroscience experiments
    – MORE EMPIRICISM AND LESS THEORY