SLIDE 1
Statistical modeling and analysis of neural data (NEU 560), Fall 2020
Jonathan Pillow, Princeton University
Lecture 6: PCA part 2
SLIDE 2
SLIDE 3
Quick recap of PCA
The data: X stacks the N data vectors (each d-dimensional) as rows:

    X = [ — x⃗₁ — ]
        [ — x⃗₂ — ]     (N × d)
        [    ⋮   ]
        [ — x⃗_N — ]

2nd moment matrix:  C = (1/N) XᵀX  (the scaling does not affect the eigenvectors)
SVD:  X = U S Vᵀ,  so  C = (1/N) V S² Vᵀ
first k PCs: the first k columns of V (top k right singular vectors)
fraction of sum of squares captured:  (s₁² + … + s_k²) / (s₁² + … + s_d²)
SLIDE 4
Quick recap of PCA (continued): the denominator in the fraction of sum of squares is the sum of squares of all the data, Σᵢ ‖x⃗ᵢ‖² = Σⱼ sⱼ².
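The recap boils down to a few lines of NumPy (a minimal sketch with made-up toy data; `X`, `C`, and `k` follow the slides' notation, everything else is my own naming):

```python
import numpy as np

# Toy data matrix X: N data vectors as rows, each d-dimensional
rng = np.random.default_rng(0)
N, d = 500, 4
X = rng.normal(size=(N, d)) @ rng.normal(size=(d, d))  # correlated columns

# 2nd moment matrix
C = X.T @ X / N

# SVD of the data: X = U S V^T; the columns of V are the PCs
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# First k PCs (top-k right singular vectors)
k = 2
B = Vt[:k].T            # d x k

# Fraction of the total sum of squares captured by the first k PCs
frac = np.sum(s[:k] ** 2) / np.sum(s ** 2)
print(frac)
```

Note that the second moment matrix and the SVD agree: C equals V (S²/N) Vᵀ, so eigenvectors of C and right singular vectors of X are the same thing.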
SLIDE 5
SLIDE 6
SLIDE 7
Discussion Questions

    X = [ — x⃗₁ — ]
        [ — x⃗₂ — ]
        [    ⋮   ]
        [ — x⃗_N — ]

1. Let … , where … . What is … ?
2. Let … be the SVD of X. What is the relationship between U, S, and P, Q, V?
SLIDE 8
Discussion Questions
answers on white-board (see end of slides)
SLIDE 9
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
PCA is equivalent to fitting an ellipse to your data

[Figure: data cloud in dimension 1 vs. dimension 2, with a fitted ellipse; arrows mark the 1st PC along the long axis and the 2nd PC along the short axis]

- PCs are major axes of ellipse(oid)
- singular values specify lengths of axes
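One way to see this claim numerically (a hedged sketch; the elongated Gaussian sample and all names are my own, not from the slides): draw a cloud whose long axis is known, and check that the 1st PC recovers that axis and that the singular-value ratio matches the axis-length ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000

# Elongated 2D cloud: std 3 along [1,1]/sqrt(2), std 0.5 along the orthogonal axis
long_axis = np.array([1.0, 1.0]) / np.sqrt(2)
short_axis = np.array([1.0, -1.0]) / np.sqrt(2)
X = (rng.normal(scale=3.0, size=(N, 1)) * long_axis
     + rng.normal(scale=0.5, size=(N, 1)) * short_axis)

U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
pc1 = Vt[0]

# 1st PC aligns (up to sign) with the ellipse's long axis
print(abs(pc1 @ long_axis))   # close to 1
# singular values give the relative axis lengths (about 3 : 0.5 here)
print(s[0] / s[1])            # roughly 6
```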
SLIDE 14
SLIDE 15
What is the dominant eigenvector of … ?

[Figure: data cloud in dim 1 vs. dim 2]
SLIDE 16
SLIDE 17
SLIDE 18
Centering the data

[Figure: data cloud in dim 1 vs. dim 2 with the mean subtracted; the 1st PC drawn through the centered cloud]

- after centering, the 2nd moment matrix is a covariance!
- In practice, we almost always do PCA on centered data!
- C = np.cov(X.T)  (note the transpose: np.cov treats rows as variables by default, but rows of X are data points)
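The "now it's a covariance" point can be checked directly (a minimal sketch with toy data of my own; since the data points sit in the rows of X here, `np.cov` gets the transpose):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))           # N = 100 points, d = 3

Xc = X - X.mean(axis=0)                 # center: subtract each column's mean
C_moment = Xc.T @ Xc / (len(X) - 1)     # 2nd moment of centered data (N-1 scaling)

# np.cov expects variables in rows by default, so pass X.T (or use rowvar=False)
C_np = np.cov(X.T)

print(np.allclose(C_moment, C_np))      # True
```

The `len(X) - 1` denominator matches `np.cov`'s default unbiased normalization; with `1/N` scaling the matrices would differ by a constant factor but share the same eigenvectors.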
SLIDE 19
Projecting onto the PCs
[Figure: data projected onto the first two PCs; axes are the PC-1 projection and the PC-2 projection]
- visualize low-dimensional projection that captures most variance
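Projecting onto the top PCs is a single matrix product (sketch; toy data and names are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)                    # center first

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                          # N x 2: PC-1 and PC-2 projections, ready to scatter-plot

# fraction of variance captured by the 2D projection
print(np.sum(s[:2] ** 2) / np.sum(s ** 2))
```

A useful sanity check: the projected coordinates are uncorrelated, with ZᵀZ = diag(s₁², s₂²).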
SLIDE 20
SLIDE 21
Full derivation of PCA: see notes

Two equivalent formulations:

1. Find the subspace that preserves maximal sum-of-squares:

       B̂_pca = arg max_B ‖XB‖²_F   such that BᵀB = I

2. Minimize the sum-of-squares of the orthogonal component:

       B̂_pca = arg min_B ‖X − XBBᵀ‖²_F   such that BᵀB = I

   (XBBᵀ is the reconstruction of X in the subspace spanned by B)
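The two objectives are complementary pieces of a fixed total, which a few lines verify numerically (a sketch with toy data of my own; the Pythagorean split ‖X‖² = ‖XBBᵀ‖² + ‖X − XBBᵀ‖² holds for any B with BᵀB = I, which is why maximizing one objective minimizes the other):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 6))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
B = Vt[:k].T                                    # PCA solution: top-k right singular vectors
assert np.allclose(B.T @ B, np.eye(k))          # orthonormality constraint B^T B = I

preserved = np.linalg.norm(X @ B) ** 2          # formulation 1 objective: ||XB||_F^2
residual = np.linalg.norm(X - X @ B @ B.T) ** 2 # formulation 2 objective: ||X - XBB^T||_F^2

# The two objectives sum to the total sum of squares ||X||_F^2
print(np.isclose(preserved + residual, np.linalg.norm(X) ** 2))   # True
```

At the PCA solution the preserved part equals s₁² + … + s_k², the best achievable for any k-dimensional subspace.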
SLIDE 22
SLIDE 23
SLIDE 24
SLIDE 25