Topics in Brain Computer Interfaces
CS295-7, Spring 2005
Professor: Michael Black
TA: Frank Wood

Automated Spike Sorting
Frank Wood - fwood@cs.brown.edu

Today: particle filter homework
Particle filtering: start here with particles representing the posterior at the previous step, propagate them through the system model, and reweight them under the observation model:

  p(x_k \mid z_{1:k}) \propto p(z_k \mid x_k) \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid z_{1:k-1}) \, dx_{k-1}

  System model: p(x_k \mid x_{k-1})
  Observation model: p(z_k \mid x_k)
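The recursion above can be sketched as a bootstrap particle filter. This is an illustrative Python/NumPy sketch (the course materials use MATLAB); the toy random-walk system and Gaussian observation model are assumptions for the example, not part of the slides.

```python
import numpy as np

def particle_filter_step(particles, weights, z, propagate, likelihood, rng):
    """One bootstrap particle-filter step: resample, propagate, reweight."""
    n = len(particles)
    # Resample particles in proportion to their weights (posterior at k-1).
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Propagate each particle through the system model p(x_k | x_{k-1}).
    particles = propagate(particles, rng)
    # Reweight by the observation model p(z_k | x_k) and normalize.
    weights = likelihood(z, particles)
    weights /= weights.sum()
    return particles, weights

# Toy 1-D example: random-walk state, noisy direct observation (assumed).
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
propagate = lambda x, rng: x + rng.normal(0.0, 0.1, size=x.shape)
likelihood = lambda z, x: np.exp(-0.5 * (z - x) ** 2 / 0.25)
particles, weights = particle_filter_step(particles, weights, 0.5, propagate, likelihood, rng)
estimate = np.sum(weights * particles)
```

Resampling at every step is the simplest scheme; in practice one often resamples only when the effective sample size drops.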
R.E. Kalman, "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME, Journal of Basic Engineering, 1960.
  State model:       x_k = A x_{k-1} + w_{k-1},   w_{k-1} \sim N(0, Q)
  Observation model: z_k = H x_k + v_k,           v_k \sim N(0, R)
Predict:
  Prior estimate:     \hat{x}_k^- = A \hat{x}_{k-1}
  Error covariance:   P_k^- = A P_{k-1} A^T + Q
Update:
  Kalman gain:        K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
  Posterior estimate: \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
  Error covariance:   P_k = (I - K_k H) P_k^-
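The predict/update cycle above translates almost line for line into code. A minimal Python/NumPy sketch (Python and the 1-D constant-position toy model are assumptions for illustration):

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of the Kalman filter."""
    # Predict: prior estimate and its error covariance.
    x_prior = A @ x
    P_prior = A @ P @ A.T + Q
    # Update: Kalman gain, posterior estimate, posterior error covariance.
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Toy model: x_k = x_{k-1} + w,  z_k = x_k + v.
A = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x = np.array([0.0]); P = np.array([[1.0]])
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
```

After a few observations near 1.0 the estimate moves toward 1.0 and the error covariance shrinks below its initial value.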
We'll look at this today: the posterior estimate

  \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)

The Kalman gain K_k is much trickier; a link to a full derivation is on the web.
Excerpted and modified from aticourses.com
Remember from the previous slide:

  \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)

The posterior error covariance is

  P_k = E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T]

Trick alert! Substitute the update equation and the observation model z_k = H x_k + v_k into the error:

  x_k - \hat{x}_k = (I - K_k H)(x_k - \hat{x}_k^-) - K_k v_k

Since the prior estimation error is uncorrelated with the measurement noise,

  P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R K_k^T
Remember from the previous slide:

  P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R K_k^T

We can get the Kalman gain by minimizing the variance of the estimation error, i.e. the trace of P_k. Setting the derivative with respect to K_k to zero:

  \frac{\partial \, tr(P_k)}{\partial K_k} = -2 (H P_k^-)^T + 2 K_k (H P_k^- H^T + R) = 0

Solving for the Kalman gain:

  K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

Substituting back in gives the posterior error covariance:

  P_k = (I - K_k H) P_k^-
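A quick numerical sanity check of the derivation: for the optimal gain, the general ("Joseph") form of the posterior covariance collapses to (I - K_k H) P_k^-. This NumPy sketch (illustrative, with randomly generated model matrices that are not from the slides) confirms the two forms agree:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random symmetric positive definite prior covariance and noise covariance.
M = rng.normal(size=(3, 3)); P_prior = M @ M.T + 3 * np.eye(3)
H = rng.normal(size=(2, 3))
N = rng.normal(size=(2, 2)); R = N @ N.T + np.eye(2)
# Optimal Kalman gain.
K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
I = np.eye(3)
# General (Joseph) form of the posterior covariance, valid for any gain...
P_joseph = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T
# ...and the simplified form, valid only for the optimal gain.
P_simple = (I - K @ H) @ P_prior
gap = np.abs(P_joseph - P_simple).max()
```

The gap is zero up to floating-point error, exactly as the algebra predicts.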
“The central idea of [PCA] is to reduce the dimensionality of a data set consisting of a large number of interrelated variables, while retaining as much as possible of the variation present in the data set. This is achieved by transforming to a new set of variables, the principal components (PCs), which are uncorrelated, and which are ordered so that the first few retain most of the variation present in all of the original variables.”, I.T. Jolliffe
Uses:
– Compression
– Noise reduction
– Dimensionality reduction
[Figure: scatter plot of a rotated 2-D Gaussian cloud]
num_points = 1000;
angle = pi/4;
variances = [5 0; 0 .5]   % diagonal entries are standard deviations
rotation = [cos(angle) -sin(angle); ...
            sin(angle)  cos(angle)]
data = rotation*(variances*randn(2,num_points));
[pcadata,eigenvectors,eigenvalues] = pca(data,2);
recovered_rotation = eigenvectors
recovered_variances = sqrt(eigenvalues)
variances =
    5.0000         0
         0    0.5000
rotation =
    0.7071   -0.7071
    0.7071    0.7071
recovered_rotation =
recovered_variances =
    5.1584         0
         0    0.4934
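The same experiment can be reproduced without the course's pca helper by eigendecomposing the sample covariance directly. A Python/NumPy sketch (the language choice and random seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
num_points = 1000
angle = np.pi / 4
stddevs = np.diag([5.0, 0.5])          # axis-aligned standard deviations
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
# Rotated Gaussian cloud, one sample per column (as in the MATLAB code).
data = rotation @ (stddevs @ rng.normal(size=(2, num_points)))
# PCA via eigendecomposition of the sample covariance.
cov = np.cov(data)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]   # largest variance first
recovered_variances = np.sqrt(eigenvalues[order])
recovered_rotation = eigenvectors[:, order]
```

Note that the MATLAB `variances` matrix multiplies randn, so its entries are standard deviations; that is why sqrt(eigenvalues) recovers them.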
The sample covariance matrix of the centered data collects the variances and covariances of the p original variables:

  \Sigma = \frac{1}{n} \sum_{j=1}^{n} x_j x_j^T =
  \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \vdots & & & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{bmatrix}

The k-th principal component is the eigenvector of \Sigma with the k-th largest eigenvalue.
To find the first principal component \alpha_1, maximize the variance of the projection \alpha_1^T x,

  var(\alpha_1^T x) = \alpha_1^T \Sigma \alpha_1,

subject to the normalization constraint \alpha_1^T \alpha_1 = 1. Introducing a Lagrange multiplier \lambda gives the objective

  \alpha_1^T \Sigma \alpha_1 - \lambda (\alpha_1^T \alpha_1 - 1).

Differentiating with respect to \alpha_1 and setting the result to zero yields

  \Sigma \alpha_1 = \lambda \alpha_1,

so \alpha_1 must be an eigenvector of \Sigma with eigenvalue \lambda.
The variance of the projection is then

  var(\alpha_1^T x) = \alpha_1^T \Sigma \alpha_1 = \lambda \alpha_1^T \alpha_1 = \lambda,

so taking the eigenvector with the largest eigenvalue maximizes the retained variance; subsequent components follow in order of decreasing eigenvalue.
The likelihood of labeled data factors into a class prior and a class-conditional density:

  p(X, Y) = \prod_{i=1}^{n} p(y_i) \, p(x_i \mid y_i)

where p(y_i) is shorthand for “the prior probability of the class labeled y_i” and p(x_i \mid y_i) is the probability of x_i assuming that it came from that class.
The M-step updates for a Gaussian mixture, given the current parameter guess \Theta^g:

  \alpha_l^{new} = \frac{1}{N} \sum_{i=1}^{N} p(l \mid x_i, \Theta^g)

  \mu_l^{new} = \frac{\sum_{i=1}^{N} x_i \, p(l \mid x_i, \Theta^g)}{\sum_{i=1}^{N} p(l \mid x_i, \Theta^g)}

  \Sigma_l^{new} = \frac{\sum_{i=1}^{N} p(l \mid x_i, \Theta^g) (x_i - \mu_l^{new})(x_i - \mu_l^{new})^T}{\sum_{i=1}^{N} p(l \mid x_i, \Theta^g)}
From “A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models”, Jeff A. Bilmes
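The three updates above, together with the E-step responsibilities, form one EM iteration. A minimal 1-D Python/NumPy sketch (the two-component toy data and the initialization are assumptions for illustration):

```python
import numpy as np

def em_step(x, alphas, mus, sigmas):
    """One EM iteration for a 1-D Gaussian mixture (Bilmes-style updates)."""
    # E-step: responsibilities p(l | x_i, Theta^g).
    dens = np.stack([a / np.sqrt(2 * np.pi * s**2) *
                     np.exp(-0.5 * (x - m)**2 / s**2)
                     for a, m, s in zip(alphas, mus, sigmas)])
    resp = dens / dens.sum(axis=0)
    # M-step: the three updates from the slide.
    Nl = resp.sum(axis=1)
    alphas_new = Nl / len(x)
    mus_new = (resp * x).sum(axis=1) / Nl
    sigmas_new = np.sqrt((resp * (x - mus_new[:, None])**2).sum(axis=1) / Nl)
    return alphas_new, mus_new, sigmas_new

# Toy data: two well-separated clusters at -3 and +3.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
alphas, mus, sigmas = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([2.0, 2.0])
for _ in range(50):
    alphas, mus, sigmas = em_step(x, alphas, mus, sigmas)
```

On this toy data the means converge to roughly -3 and +3 and the mixing weights stay near 0.5 each.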
Prior work:
– Non-parametric template matching
– Various clusterings of principal components (Lewicki)
– EM on mixtures of multivariate t-distributions (Shoham et al.)
– Wavelet packets (Hulata et al.)

Issues:
– Manually determine waveform templates
– Manually determine number of clusters
– Manually identify noise
– Waveform variability
– Inter-spike interval
– Off-line vs. on-line
The BCI pipeline: a voluntary control signal is recorded, processed, and decoded into an output.

  Signal → Detection → Spike sorting → Rate estimation → Decoder (Kalman filter, linear filter, etc.)

Outputs include:
– Computer cursor and keyboard entry
– Robotic arm
– Stimulation of muscles, spinal cord, and brain
Spike sorting determines:
– which waveforms are “spikes”
– how many neurons there are
– which neuron each spike came from
It is not detection!
– Real data:

  Subject   Spikes   Units
  A          99160      28
  B          50796      32
  C         150917      27
  D          77194      18
  E         202351      35
[Figure: 2-second voltage traces for Subjects D and E]

[Figure: Subject D spike waveforms, 100-1000 µs. Noise: 19484, Unit A: 3474]

[Figure: Subject E spike waveforms, 100-1000 µs. Noise: 15074, Unit A: 4013, Unit B: 3539, Unit C: 332]
The automated spike sorting algorithm:

1. Assign every channel to have 1 unit and decode using the Kalman filter. Record the MSE of the reconstruction.
2. For every channel in the recording:
   a. Propose 0, 2, 3, ... units.
   b. Initialize the mixture model with n mixture components by spectral clustering à la Ng and Jordan (2001).
   c. Optimize model parameters (EM).
   d. Decode using all channels with the Kalman filter à la Wu et al. (2003).
   e. Best decoding MSE? If yes, keep the proposal; if no, revert.

For full details see the paper.
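The per-channel search can be sketched as a greedy loop. Here `decode_mse` is a hypothetical stand-in for the full pipeline (spectral-clustering initialization, EM, and Kalman-filter decoding), and the mock MSE function exists only so the sketch runs; neither is from the paper.

```python
def select_units_per_channel(channels, decode_mse, candidate_counts=(0, 1, 2, 3)):
    """Greedy per-channel model selection: for each channel, keep the unit
    count whose sorting yields the lowest decoding MSE, holding all other
    channels fixed.  decode_mse(counts) stands in for the full pipeline."""
    counts = {c: 1 for c in channels}            # start: one unit per channel
    best = decode_mse(counts)
    for c in channels:
        for n in candidate_counts:
            trial = dict(counts)
            trial[c] = n
            mse = decode_mse(trial)
            if mse < best:                       # keep the proposal only if
                best, counts = mse, trial        # reconstruction improves
    return counts, best

# Toy stand-in: pretend the best counts are known and the MSE grows with the
# squared distance from them (purely illustrative).
true_counts = {"ch1": 2, "ch2": 0, "ch3": 3}
mock_mse = lambda counts: sum((counts[c] - true_counts[c])**2 for c in counts)
counts, best = select_units_per_channel(list(true_counts), mock_mse)
```

With this mock objective the loop recovers the "true" unit counts exactly; the real algorithm replaces the mock with decoding MSE on held-out kinematics.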
[Figure: waveforms and their corresponding 2 largest PCA coefficients]
Results for 5 one-minute decoding segments:

  Sorting method   Neurons   Spikes   MSE (cm^2)
  Auto Weighted        114   625861   11.30 +/- 1.15
  Auto Max             114   625861   11.31 +/- 1.33
  None                  96   860261   12.78 +/- 1.89
  Random               288   860261   13.28 +/- 1.54
  Human                 92   547993   13.46 +/- 2.54

  Subject   Neurons   Spikes   MSE (cm^2)
  A             107   757674   11.45 +/- 1.39
  B              96   335656   16.16 +/- 2.38
  C              78   456221   13.37 +/- 1.52
  D              88   642422   12.37 +/- 1.22
Rank: Auto Sorted → No Sorting → Randomly Sorted → Human Sorted !
[Figure: actual monkey hand position vs. neural reconstruction]
Future work:
– Hints at using a different signal instead?
– Fully leverage the probabilistic interpretation for enhanced rate estimation.
– Different cost function.
– Extend to continuous signals.