Non-Interactive Simulation and Dimension Reduction for Polynomials

Pritish Kamath
joint work with Badih Ghazi and Prasad Raghavendra

CCC, UCSD, June 24, 2018

1 / 9
Motivation

[Figure: two parties observing shared sources of correlated randomness]

In Information Theory . . .
▶ Common Information [Gács-Körner ’73, Wyner ’75]
▶ Distributed Source Coding [Slepian-Wolf ’73]
▶ · · ·

In Computer Science . . .
▶ Information-Theoretic Crypto: Key Agreement, Secure Computation, . . .
▶ Communication Complexity [Bavarian-Gavinsky-Ito ’14], [Canonne-Guruswami-Meka-Sudan ’15]

Abstract Goal: Understand the power of different joint distributions!

3 / 9
Main Question: When can P simulate Q?

Setup: Alice observes X1, X2, X3, . . . and Bob observes Y1, Y2, Y3, . . . , where the pairs (Xi, Yi) are drawn i.i.d. from P(X, Y). Without communicating, Alice outputs a and Bob outputs b, with a, b ∈ [k], and they want (a, b) ∼ Q.

Example: When can BSSε simulate BSSδ?
BSSε (binary symmetric source): X, Y are uniform bits with Pr[X ≠ Y] = ε, i.e.
Pr[(0,0)] = Pr[(1,1)] = (1−ε)/2 and Pr[(0,1)] = Pr[(1,0)] = ε/2; here a, b ∈ {0, 1}.
Answer: YES if δ ≥ ε; NO if δ < ε.

Example: When can DISJ simulate BSSδ?
DISJ: (X, Y) is uniform over {(0,0), (0,1), (1,0)}, each with probability 1/3.
(Partial) Answer: YES if δ ≥ 3/8; NO if δ < 1/4; OPEN for δ ∈ [1/4, 3/8).

In general, when can P simulate Q?
Analytically? OPEN in most cases! Algorithmically decidable? Not obvious!

4 / 9
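The YES direction for BSSε simulating BSSδ (when δ ≥ ε) admits a simple construction, sketched below in Python as a hypothetical illustration (not code from the talk): Alice outputs X1, while Bob flips Y1 with probability p = (δ − ε)/(1 − 2ε), building the biased coin from his later samples Y2, Y3, . . . , which are unbiased bits independent of (X1, Y1). The disagreement probability is then ε(1 − p) + (1 − ε)p = δ.

```python
import random

def sample_bss(eps, n, rng):
    """n i.i.d. pairs from BSS_eps: X uniform, Y = X flipped with probability eps."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.randint(0, 1)
        y = x ^ (1 if rng.random() < eps else 0)
        xs.append(x)
        ys.append(y)
    return xs, ys

def coin_from_bits(bits, p):
    """Bernoulli(p) coin from unbiased bits: compare p to 0.b1b2... in binary."""
    u = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
    return 1 if u < p else 0

def simulate_bss_delta(eps, delta, rng):
    """One round: Alice outputs X1; Bob outputs Y1 xor a bias-p coin from Y2..Y31."""
    xs, ys = sample_bss(eps, 31, rng)
    p = (delta - eps) / (1 - 2 * eps)   # flip probability; requires delta >= eps
    a = xs[0]
    b = ys[0] ^ coin_from_bits(ys[1:], p)
    return a, b

rng = random.Random(0)
eps, delta, trials = 0.1, 0.3, 50_000
disagree = sum(a != b for a, b in (simulate_bss_delta(eps, delta, rng)
                                   for _ in range(trials)))
est = disagree / trials
print(f"empirical Pr[a != b] = {est:.3f}  (target delta = {delta})")
```

Since the pairs are i.i.d., Y2, Y3, . . . are exactly uniform and independent of (X1, Y1), so Bob's coin introduces no spurious correlation; the Monte-Carlo estimate concentrates around δ.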
Non-Interactive Simulation falls under the category of “Tensor Power” problems.

In Information Theory,
▶ Zero-error Shannon capacity [Open]
▶ Zero-error Witsenhausen rate [Open]

In Computer Science,
▶ (Classical) Amortized value of 2-prover 1-round games [Open]
▶ (Quantum) Entangled value of 2-prover 1-round games [Open]
▶ (Quantum) Local State Transformation [Open]
▶ Computing SDP integrality gaps for CSPs [Raghavendra-Steurer ’09]
▶ Amortized communication complexity = Information complexity [Braverman-Rao ’11], [Braverman-Schneider ’15]

5 / 9
Main Question: Can P simulate Q?

P simulates Q if there exist n and functions f, g with outputs in [k] such that
( f(X^n), g(Y^n) ) ∼ Q for (X^n, Y^n) ∼ P^⊗n.

Dimension Reduction: there exist f̃, g̃ on n0 coordinates such that
( f̃(X^{n0}), g̃(Y^{n0}) ) ≈ε ( f(X^n), g(Y^n) ).

If P can simulate Q . . . then P can ε-approximately simulate Q with only n0 samples.

Key point: n0 = n0(ε, P, k) is explicit & does not depend on n. ⟹ DECIDABLE

Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness . . . can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings, even with inputs!)

6 / 9
Max Agreement Distillation

(X, Y) ∼ G_ρ^⊗n, where G_ρ = N( (0, 0), [1 ρ; ρ 1] ).

sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ]  such that  E[f] = (1/k, . . . , 1/k) and E[g] = (1/k, . . . , 1/k).
(For convenience, f : X^n → R^k, where output i ∈ [k] corresponds to e_i.)

k = 2: Borell’s Theorem [Bor85] — “Halfspaces are most Noise Stable”: f(X^n) = sign(X1), g(Y^n) = sign(Y1) is optimal. Discrete analogue: “Majority is Stablest” [MOO’04, Mos’10]. Generalizes to non-uniform marginals, e.g. E[f] = (0.3, 0.7) and E[g] = (0.6, 0.4), via shifted halfspaces f(X^n) = sign(X1 − α), g(Y^n) = sign(Y1 − β).

k = 3: “Peace Sign Conjecture” / “Plurality is Stablest” [KKMO’04, IM’12]: the conjectured optimum partitions R² into three 120° sectors, using only f(X1, X2) and g(Y1, Y2).
▶ The “Peace Sign Conjecture” implies an optimal strategy exists with 2 samples.
▶ Generalize to non-uniform marginals? FALSE! [HMN’16]

How many samples n0(ε) are needed to get ε-close to the optimal agreement? Can we obtain an “explicit” bound? [De-Mossel-Neeman ’17]

Gaussian Case: (X, Y) ∼ G_ρ
▶ [De-Mossel-Neeman ’17, ’18]: n0 = Ackermann(?)
▶ [This Work!]: n0 = exp( k, 1/ε, 1/(1−ρ) )

General Case: (X, Y) ∼ P
▶ [Ghazi-K-Sudan ’16]: Reduces* to the Gaussian case, using the Regularity Lemma and the Invariance Principle; solves the k = 2 case, due to Borell’s theorem.
▶ [De-Mossel-Neeman ’18]: n0 = Ackermann(?)
▶ [This Work!]: n0 = exp( k, 1/ε, 1/(1−ρ), log(1/α) )

7 / 9
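For k = 2 with uniform marginals, the optimal agreement in Borell’s theorem has a closed form: for ρ-correlated Gaussians, Pr[sign(X1) = sign(Y1)] = 1 − arccos(ρ)/π (Sheppard’s formula). A quick Monte-Carlo check of this value, as an illustrative sketch (not code from the talk):

```python
import math
import random

def correlated_gaussians(rho, rng):
    """One sample of (X, Y) ~ N(0, [[1, rho], [rho, 1]])."""
    x = rng.gauss(0, 1)
    z = rng.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho * rho) * z
    return x, y

def empirical_agreement(rho, trials, rng):
    """Estimate Pr[sign(X) = sign(Y)] for the halfspace strategy f = g = sign."""
    agree = 0
    for _ in range(trials):
        x, y = correlated_gaussians(rho, rng)
        agree += (x >= 0) == (y >= 0)
    return agree / trials

rho = 0.5
exact = 1 - math.acos(rho) / math.pi   # Sheppard's formula: 2/3 for rho = 1/2
est = empirical_agreement(rho, 200_000, random.Random(0))
print(f"empirical = {est:.3f}, exact = {exact:.3f}")
```

This is the quantity the sup in Max Agreement Distillation achieves at k = 2; no strategy using more coordinates can beat the single-coordinate halfspace.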
Dimension Reduction for Polynomials

Johnson-Lindenstrauss: given vectors u_i, v_j ∈ R^n, sample M ∼ N(0, 1)^{n0×n} and map
u ↦ Mu/√n0,  v ↦ Mv/√n0. Then
E_M[ ⟨ũ_i, ṽ_j⟩ ] = ⟨u_i, v_j⟩  and  Var_M( ⟨ũ_i, ṽ_j⟩ ) < ε²,
so n0 = O(1/ε²) suffices for ⟨u_i, v_j⟩_{R^n} ≈ε ⟨ũ_i, ṽ_j⟩_{R^{n0}}.

Dimension Reduction (this work): given functions f_i, g_j : R^n → R, with correlations
⟨f_i, g_j⟩_{G_ρ^⊗n} := E_{X, Y ∼ G_ρ^⊗n} [ f_i(X) g_j(Y) ],
sample M ∼ N(0, 1)^{n×n0} and map f ↦ f̃, where f̃(a) = f( Ma/∥a∥2 ) for a ∈ R^{n0}. Then
E_M[ ⟨f̃_i, g̃_j⟩ ] ≈ε ⟨f_i, g_j⟩  and  Var_M( ⟨f̃_i, g̃_j⟩ ) < ε²,
and if the f_i and g_j are multilinear of degree d, then n0 = d^{O(d)}/ε² suffices for
⟨f_i, g_j⟩_{G_ρ^⊗n} ≈ε ⟨f̃_i, g̃_j⟩_{G_ρ^⊗n0}.

Comparison with [Kane-Rao ’18]:
▶ We don’t care about the seed length of M.
▶ Crucially, the map preserves other statistical properties: Ma/∥a∥2 ∼ N(0, 1)^⊗n.
Thanks to Sankeerth Rao & Mitali Bafna!

8 / 9
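Both halves of the analogy are easy to check numerically. The NumPy sketch below (illustrative, not code from the talk) verifies that the Johnson-Lindenstrauss map u ↦ Mu/√n0 preserves inner products up to ε once n0 = O(1/ε²), and that for a fixed vector a, the input Ma/∥a∥2 fed to the dimension-reduced function is again an exactly standard Gaussian vector — the statistical property highlighted in the comparison with [Kane-Rao ’18].

```python
import numpy as np

rng = np.random.default_rng(0)

# --- JL side: u -> Mu/sqrt(n0) approximately preserves inner products ---
n, eps = 200, 0.1
n0 = int(10 / eps**2)                       # n0 = O(1/eps^2) suffices
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

errs = []
for _ in range(200):                        # fresh projection M each trial
    M = rng.standard_normal((n0, n))
    errs.append((M @ u / np.sqrt(n0)) @ (M @ v / np.sqrt(n0)) - u @ v)
errs = np.array(errs)
print("E_M error ~", errs.mean(), "  std ~", errs.std())   # mean ~ 0, std < eps

# --- Function side: for fixed a in R^{n0}, Ma/||a||_2 ~ N(0,1)^n exactly ---
a = rng.standard_normal(n0)
draws = np.array([rng.standard_normal((n, n0)) @ a / np.linalg.norm(a)
                  for _ in range(300)])     # 300 draws of M, each a vector in R^n
print("coordinate mean ~", draws.mean(), "  variance ~", draws.var())
```

The function-side check is the whole point of composing with Ma/∥a∥2 rather than an arbitrary projection: it keeps the input distribution of f̃ identical to that of f, not merely close.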
Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness . . . can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings, even with inputs!)
Proof via dimension reduction for low-degree multilinear polynomials.

Open Questions
▶ Lower bounds on randomness reduction? Better upper bounds?
▶ Other applications of dimension reduction for polynomials?
▶ Derandomization of the dimension reduction lemma?
▶ (NP-)hardness of deciding Non-Interactive Simulation?
▶ Other Tensor Power problems?

9 / 9