Hardness of Certification for Constrained PCA
Alex Wein
Courant Institute, NYU. Joint work with: Afonso Bandeira (NYU), Tim Kunisky (NYU)
Part I: Statistical-to-Computational Gaps and the Low-Degree Method
◮ n vertices
◮ Each of the (n choose 2) possible pairs is an edge independently with probability 1/2
◮ Planted clique on k vertices
◮ Goal: find the clique
◮ Statistically, can find planted clique of size (2 + ε) log n
◮ In poly-time, can only find clique of size Ω(√n)
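The statistical threshold comes from the fact that G(n, 1/2) itself has cliques of size about 2 log₂ n, so a planted clique only stands out once it is slightly larger. A brute-force check at a toy size (my own illustration; exponential time, so n is tiny and the 2 log₂ n benchmark is only asymptotic):

```python
import itertools
import random

def clique_number(adj):
    """Exact clique number by brute force (exponential time, tiny n only)."""
    n = len(adj)
    for s in range(n, 1, -1):                  # try sizes from large to small
        for S in itertools.combinations(range(n), s):
            if all(adj[u][v] for u, v in itertools.combinations(S, 2)):
                return s
    return 1

# Sample G(n, 1/2) with no planted structure.
rng = random.Random(0)
n = 16
adj = [[0] * n for _ in range(n)]
for u, v in itertools.combinations(range(n), 2):
    adj[u][v] = adj[v][u] = rng.randint(0, 1)

# At n = 16 the benchmark 2 log2(n) = 8 is not yet tight; the observed
# clique number is smaller, since the formula holds only as n -> infinity.
w = clique_number(adj)
```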
a ’11]
y ’10]
[Barak, Hopkins, Kelner, Kothari, Moitra, Potechin ’16; Hopkins, Steurer ’17; Hopkins, Kothari, Potechin, Raghavendra, Schramm, Steurer ’17; Hopkins ’18 (PhD thesis)]
◮ Planted clique, sparse PCA, stochastic block model, tensor PCA
◮ But low-degree calculation is much easier than proving SOS lower bounds
◮ Degree-n^δ polynomials ⇔ time-2^{n^δ} algorithms
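One concrete instance of the low-degree picture, as a toy illustration of my own (not from the talk): for planted clique, the total edge count is a degree-1 polynomial in the adjacency entries, and it distinguishes planted from null exactly when k is on the order of √n or larger, matching the poly-time threshold above.

```python
import numpy as np

def edge_count_zscore(n, k, trials=200, seed=0):
    """Average z-score of the edge count (a degree-1 polynomial test)
    on planted-clique instances, against the G(n,1/2) null.
    Planting adds ~ C(k,2)/2 edges, while the null std is ~ sqrt(C(n,2))/2,
    so the signal is visible iff k is on the order of sqrt(n) or more."""
    rng = np.random.default_rng(seed)
    m = n * (n - 1) // 2                     # number of vertex pairs
    mean, sd = m / 2.0, np.sqrt(m) / 2.0     # null edge-count mean and std
    zs = []
    for _ in range(trials):
        edges = rng.binomial(m, 0.5)                   # edges of G(n,1/2)
        # planting forces all C(k,2) clique pairs to be edges; about half
        # were already present, so the increment is Binomial(C(k,2), 1/2)
        edges += rng.binomial(k * (k - 1) // 2, 0.5)
        zs.append((edges - mean) / sd)
    return float(np.mean(zs))

n = 400
strong = edge_count_zscore(n, k=80)   # k = 4*sqrt(n): clearly detectable
weak = edge_count_zscore(n, k=9)      # k ~ log n: lost in the noise
```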
◮ A certificate proves an upper bound on φ(W) (whereas exhibiting a good x proves a lower bound)
◮ Formally: algorithm {fn} outputs fn(W) ∈ R such that fn(W) ≥ φ(W) for every W, and fn(W) is as small as possible with high probability
◮ Note: cannot just output fn(W) = 2P∗ + ε (valid with high probability, but not for every W)
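A small sanity check of this setup (my own sketch; brute force, so n is tiny): the top eigenvalue λ_max(W) is always a valid certificate for φ(W), since every hypercube vector is a unit vector.

```python
import itertools
import numpy as np

def phi(W):
    """phi(W) = max of x^T W x over x in {+-1/sqrt(n)}^n, by enumeration
    (2^n work; only to make explicit what a certifier must upper-bound)."""
    n = W.shape[0]
    scale = 1.0 / np.sqrt(n)
    best = -np.inf
    for signs in itertools.product((-scale, scale), repeat=n):
        x = np.array(signs)
        best = max(best, float(x @ W @ x))
    return best

rng = np.random.default_rng(0)
n = 12
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)       # GOE normalization: bulk edge near 2

val = phi(W)                          # the quantity to certify
lam = float(np.linalg.eigvalsh(W)[-1])
# lambda_max is a valid spectral certificate for every W:
# x^T W x <= lambda_max * ||x||^2 = lambda_max, since ||x|| = 1.
assert val <= lam + 1e-9
```

The hardness question is whether any poly-time certificate can beat the spectral value.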
[Baik, Ben Arous, Péché ’05]
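Assuming the [Baik, Ben Arous, Péché ’05] citation refers to the BBP-type eigenvalue transition for spiked random matrices, here is a quick numerical check (dimension, seed, and normalization are my own choices): a rank-one spike θvv⊤ added to GOE noise pulls the top eigenvalue out of the bulk to θ + 1/θ once θ > 1, and is invisible below that.

```python
import numpy as np

def top_eigenvalue(theta, n, rng):
    """Largest eigenvalue of theta*v v^T + GOE noise (bulk edge at 2)."""
    G = rng.normal(size=(n, n))
    H = (G + G.T) / np.sqrt(2 * n)
    v = np.ones(n) / np.sqrt(n)
    return float(np.linalg.eigvalsh(H + theta * np.outer(v, v))[-1])

rng = np.random.default_rng(0)
n = 1000
weak = top_eigenvalue(0.5, n, rng)    # theta <= 1: sticks to the bulk edge 2
strong = top_eigenvalue(2.0, n, rng)  # theta > 1: pops out near theta + 1/theta = 2.5
```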
◮ Recall φ(W) = max_{x ∈ {±1/√n}^n} x⊤Wx
◮ If hypercube vector x is a linear combination of the top δn eigenvectors of W …
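The truncated bullet appears to concern the overlap of hypercube vectors with the span of W's top eigenvectors. A toy computation of that overlap for small n (my own illustration; the choice δ = 1/4 is arbitrary):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 12
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)       # GOE normalization

# Hypercube vector maximizing x^T W x, found by enumeration (small n only).
scale = 1.0 / np.sqrt(n)
best_x, best_val = None, -np.inf
for signs in itertools.product((-scale, scale), repeat=n):
    x = np.array(signs)
    v = float(x @ W @ x)
    if v > best_val:
        best_x, best_val = x, v

# Fraction of best_x's (unit) norm captured by the top delta*n eigenvectors.
delta = 0.25
evals, evecs = np.linalg.eigh(W)              # eigenvalues in ascending order
top = evecs[:, -max(1, int(delta * n)):]      # top delta*n eigenvectors
mass = float(np.linalg.norm(top.T @ best_x) ** 2)
```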
◮ Search
◮ Certification
◮ Recovery (e.g. tensor decomposition)
◮ Sampling
◮ Counting solutions