Orthogonal tensor decomposition
Daniel Hsu
Columbia University
Largely based on 2012 arXiv report “Tensor decompositions for learning latent variable models”, with Anandkumar, Ge, Kakade, and Telgarsky.
The basic decomposition problem
◮ Many analogues to matrix SVD, but also many important
◮ Greedy algorithm for finding the decomposition can be
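The greedy scheme mentioned above can be illustrated numerically. Below is a minimal sketch of the tensor power method with deflation for an orthogonally decomposable symmetric tensor, in the spirit of the Anandkumar et al. report; all variable names (`tensor_power`, `Tres`, etc.) are illustrative, not from the slides.

```python
import numpy as np

# Build an orthogonally decomposable tensor T = sum_i lam_i * v_i (x) v_i (x) v_i
# with orthonormal v_1, ..., v_k and positive weights lam_i.
rng = np.random.default_rng(0)
d, k = 5, 3
V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal columns
lam = np.array([3.0, 2.0, 1.0])
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)

def tensor_power(T, iters=200):
    """Fixed-point iteration u <- T(I, u, u) / ||T(I, u, u)||."""
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum('abc,b,c->a', T, u, u)
        u /= np.linalg.norm(u)
    # Returns (eigenvalue estimate T(u, u, u), eigenvector estimate u).
    return np.einsum('abc,a,b,c->', T, u, u, u), u

# Greedy: extract one component at a time, then deflate it from the tensor.
estimates = []
Tres = T.copy()
for _ in range(k):
    lam_hat, u = tensor_power(Tres)
    estimates.append(lam_hat)
    Tres = Tres - lam_hat * np.einsum('a,b,c->abc', u, u, u)

print(sorted(estimates))  # approximately [1.0, 2.0, 3.0]
```

For an exactly orthogonally decomposable tensor with a generic random start, each power iteration converges to one of the components, and deflation exposes the remaining ones.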
◮ ICA/blind source separation [Cardoso, 1991; Goyal et al, 2014]
◮ Mixture models [Bhaskara et al, 2014; Anderson et al, 2014]
◮ Dictionary learning [Barak et al, 2014]
◮ . . .
◮ Exploit other structure (e.g., sparsity)
◮ E(v1 + v2, w) = E(v1, w) + E(v2, w)
◮ E(v, w1 + w2) = E(v, w1) + E(v, w2)
◮ c E(v, w) = E(c v, w) = E(v, c w) for c ∈ R.
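These bilinearity identities can be sanity-checked numerically. A minimal sketch, treating E as a generic bilinear form E(v, w) = vᵀ M w (the matrix `M`, the helper `bform`, and the test vectors are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
M = rng.standard_normal((d, d))  # matrix representing the bilinear form

def bform(v, w):
    """E(v, w) = v^T M w, linear in each argument separately."""
    return v @ M @ w

v1, v2, w1, w2 = rng.standard_normal((4, d))
c = 2.5
# Additivity in each argument:
assert np.isclose(bform(v1 + v2, w1), bform(v1, w1) + bform(v2, w1))
assert np.isclose(bform(v1, w1 + w2), bform(v1, w1) + bform(v1, w2))
# Homogeneity: c E(v, w) = E(c v, w) = E(v, c w):
assert np.isclose(c * bform(v1, w1), bform(c * v1, w1))
assert np.isclose(c * bform(v1, w1), bform(v1, c * w1))
```

The same identities hold argument-by-argument for higher-order multilinear forms, which is what the decomposition algorithms exploit.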