Principal component analysis
Course of Machine Learning, Master Degree in Computer Science, University of Rome “Tor Vergata”, Giorgio Gambosi, a.a. 2018-2019
Curse of dimensionality

In general, many features: high-dimensional spaces.
Minimum distortion derivation

Approximate each data point x_k by a point on the line through the mean m with direction u_1: x_k \approx m + a_k u_1. The total squared distortion is

J(a_1, \ldots, a_n, u_1) = \sum_{k=1}^n ||m + a_k u_1 - x_k||^2
                         = \sum_{k=1}^n a_k^2 ||u_1||^2 - 2 \sum_{k=1}^n a_k u_1^T (x_k - m) + \sum_{k=1}^n ||x_k - m||^2

Minimizing with respect to each coefficient (\partial J / \partial a_k = 0) gives

a_k = u_1^T (x_k - m)

(the orthogonal projection of x_k onto the line). Substituting back:

J(u_1) = - \sum_{i=1}^n [u_1^T (x_i - m)]^2 + \sum_{i=1}^n ||x_i - m||^2
       = - \sum_{i=1}^n u_1^T (x_i - m)(x_i - m)^T u_1 + \sum_{i=1}^n ||x_i - m||^2
       = - n u_1^T S u_1 + \sum_{i=1}^n ||x_i - m||^2

since u_1^T (x_i - m) is the projection of x_i onto the line and

\sum_{i=1}^n u_1^T (x_i - m)(x_i - m)^T u_1 = n u_1^T S u_1

where S = (1/n) \sum_{i=1}^n (x_i - m)(x_i - m)^T is the sample covariance matrix. The second term does not depend on u_1, so minimizing J(u_1) amounts to maximizing u_1^T S u_1 (wrt u_1), with the constraint ||u_1|| = 1.

Introducing a Lagrange multiplier \lambda_1, we maximize

u_1^T S u_1 - \lambda_1 (u_1^T u_1 - 1)

Setting the gradient wrt u_1 to zero yields S u_1 = \lambda_1 u_1: u_1 must be an eigenvector of S. The variance of the projected data is then

u_1^T S u_1 = u_1^T \lambda_1 u_1 = \lambda_1 u_1^T u_1 = \lambda_1

which is maximized by choosing u_1 as the eigenvector of S with the largest eigenvalue \lambda_1: the first principal component.
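The derivation above can be checked numerically. A minimal numpy sketch (data and variable names are mine, not from the slides): the eigenvector of the sample covariance matrix with the largest eigenvalue is the first principal component, and the variance of the projections onto it equals that eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: n = 500 points in D = 2 dimensions with one dominant direction.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

m = X.mean(axis=0)                      # sample mean
S = (X - m).T @ (X - m) / len(X)        # sample covariance matrix S
eigvals, eigvecs = np.linalg.eigh(S)    # eigenvalues in ascending order
u1 = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
lambda1 = eigvals[-1]

# Variance of the projected data equals lambda_1, as in the derivation.
proj = (X - m) @ u1
assert np.isclose(proj.var(), lambda1)
```

`np.linalg.eigh` is used (rather than `eig`) because S is symmetric, which also guarantees real eigenvalues and orthonormal eigenvectors.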
Singular value decomposition: any n × m matrix A of rank r can be factorized as A = U Σ V^T, with dimensions (n × m) = (n × r)(r × r)(r × m).
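The factor shapes can be seen directly with numpy's thin SVD (example matrix sizes are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4
A = rng.normal(size=(n, m))

# Thin SVD: A = U @ diag(s) @ Vt, with r = min(n, m) here.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(U.shape, s.shape, Vt.shape)   # (6, 4) (4,) (4, 4)

# The factorization reconstructs A exactly.
assert np.allclose(A, U @ np.diag(s) @ Vt)
```

With `full_matrices=False`, numpy returns the (n × r), (r,), and (r × m) factors directly, matching the dimensions above.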
Consider A = W^T W: then a_{ij} = w_i^T w_j by definition, where w_k is the k-th column of W. Since w_i^T w_j = w_j^T w_i = a_{ji}, the matrix A is symmetric.
Let A be symmetric, with A v_1 = \lambda_1 v_1, A v_2 = \lambda_2 v_2 and \lambda_1 \neq \lambda_2. Then

\lambda_1 (v_1^T v_2) = (\lambda_1 v_1)^T v_2 = (A v_1)^T v_2 = v_1^T A^T v_2 = v_1^T A v_2 = v_1^T \lambda_2 v_2 = \lambda_2 (v_1^T v_2)

Hence (\lambda_1 - \lambda_2) v_1^T v_2 = 0, and since \lambda_1 \neq \lambda_2 it must be v_1^T v_2 = 0, that is, v_1, v_2 must be orthogonal.
Define u_i = (1/\sigma_i) W v_i, where \sigma_i = \sqrt{\lambda_i}. Then

u_i^T u_j = (1/(\sigma_i \sigma_j)) v_i^T W^T W v_j = (1/(\sigma_i \sigma_j)) v_i^T (\lambda_j v_j) = (\sigma_j / \sigma_i) v_i^T v_j

so u_i^T u_j \neq 0 iff v_i^T v_j \neq 0, that is iff i = j. Moreover,

u_i^T u_i = (1/\sigma_i^2) v_i^T (W^T W v_i) = (1/\lambda_i) v_i^T (\lambda_i v_i) = v_i^T v_i = 1

so the vectors u_1, \ldots, u_r are orthonormal.
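This construction of the left singular vectors from the eigenvectors of W^T W can be verified numerically. A small sketch (the example matrix is mine):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 3))            # a full-column-rank matrix (a.s.)

# Eigendecomposition of the symmetric matrix W^T W.
lam, V = np.linalg.eigh(W.T @ W)       # columns of V: eigenvectors v_i
sigma = np.sqrt(lam)                   # singular values sigma_i = sqrt(lambda_i)
U = W @ V / sigma                      # column i is u_i = (1/sigma_i) W v_i

# The u_i are orthonormal: U^T U is the identity.
assert np.allclose(U.T @ U, np.eye(3))
```

The division by `sigma` broadcasts over columns, scaling each W v_i by 1/\sigma_i as in the definition above.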
For a matrix W of size (V × D) with rank r, the SVD W = U Σ V^T has dimensions (V × D) = (V × r)(r × r)(r × D); equivalently, W^T = V Σ U^T has dimensions (D × V) = (D × r)(r × r)(r × V).
The rank-d truncated SVD W_d gives the best rank-d approximation of W (Eckart–Young):

\min_{A : \mathrm{rank}(A) = d} ||W - A||^2 = ||W - W_d||^2

where ||M||^2 = \sum_{i=1}^n \sum_{j=1}^m m_{ij}^2 is the squared Frobenius norm.
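A short numpy sketch of the truncated SVD (example sizes are mine): the squared Frobenius error of the rank-d approximation equals the sum of the squared discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 6))
d = 2

U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_d = U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]   # rank-d truncated SVD

# Squared Frobenius error = sum of the discarded sigma_i^2.
err = np.linalg.norm(W - W_d, 'fro') ** 2
assert np.isclose(err, np.sum(s[d:] ** 2))
```

Keeping only the d largest singular values (and the corresponding columns of U and rows of V^T) is exactly the minimizer in the Eckart–Young statement above.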
A rank-d factorization with dimensions (V × (D + 1)) = (V × d)(d × d)(d × (D + 1)).