Non-Interactive Simulation and Dimension Reduction for Polynomials



slide-1
SLIDE 1

Non-Interactive Simulation and Dimension Reduction for Polynomials

Pritish Kamath

joint work with Badih Ghazi and Prasad Raghavendra

CCC, UCSD, June 24, 2018

1 / 9

slide-2
SLIDE 2

Talk outline...

  • Motivation
  • Motivation
  • Motivation
  • “Dimension Reduction for Polynomials” lemma
  • Summary & Open Directions!

2 / 9

slide-13
SLIDE 13

Randomness Models in Distributed Tasks

[Figure: two distributed parties observe correlated samples X and Y drawn from a joint source P(X, Y).]

In Information Theory . . .
▶ Common Information [Gács-Körner ’73, Wyner ’75]
▶ Distributed Source Coding [Slepian-Wolf ’73]
▶ · · ·

In Computer Science . . .
▶ Information Theoretic Crypto! Key Agreement, Secure Computation, … ?
▶ Communication Complexity [Bavarian-Gavinsky-Ito ’14] [Canonne-Guruswami-Meka-Sudan ’15]

Abstract Goal: Understand the power of different joint distributions!

3 / 9

slide-17
SLIDE 17

Non-Interactive Simulation of Joint Distributions

[Figure: the parties observe i.i.d. samples from the source BSS_ε: each pair (X_i, Y_i) ∈ {0, 1}² has Pr[(0,0)] = Pr[(1,1)] = (1−ε)/2 and Pr[(0,1)] = Pr[(1,0)] = ε/2. From their sequences X_1 X_2 · · · and Y_1 Y_2 · · ·, the parties output single bits a and b respectively, aiming for (a, b) ∼ BSS_δ.]

Main Question: When can BSS_ε simulate BSS_δ?
Answer: YES if δ ≥ ε; NO if δ < ε.

How can a “constant-sized” problem be HARD?

4 / 9
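To make the YES direction concrete, here is a minimal Monte Carlo sketch in Python (illustrative only, not from the talk; the helper name sample_bss is hypothetical): outputting a single shared coordinate reproduces BSS_ε exactly, while XORing two independent coordinates yields BSS_δ with δ = 2ε(1−ε) ≥ ε, illustrating that non-interactive processing can only add noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bss(eps, n, size):
    """`size` independent draws of n coordinate pairs from BSS_eps:
    X uniform bits, Y = X XOR noise, each noise bit equal to 1 with probability eps."""
    x = rng.integers(0, 2, size=(size, n))
    noise = (rng.random((size, n)) < eps).astype(int)
    return x, x ^ noise

eps, trials = 0.1, 200_000
x, y = sample_bss(eps, n=2, size=trials)

# Strategy 1: each party outputs its first coordinate -> disagreement probability eps.
d1 = np.mean(x[:, 0] != y[:, 0])

# Strategy 2: each party outputs the XOR of its two coordinates
# -> disagreement probability 2*eps*(1-eps) >= eps (the noise only grows).
d2 = np.mean((x[:, 0] ^ x[:, 1]) != (y[:, 0] ^ y[:, 1]))

print(f"single coordinate: {d1:.3f} (expected {eps})")
print(f"XOR of two:        {d2:.3f} (expected {2 * eps * (1 - eps):.3f})")
```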

slide-19
SLIDE 19

Non-Interactive Simulation of Joint Distributions

[Figure: the parties observe i.i.d. samples from the source DISJ: each pair (X_i, Y_i) is uniform over {(0,0), (0,1), (1,0)}, each with probability 1/3, so X_i = Y_i = 1 never occurs. From their sequences, the parties output single bits a and b respectively, aiming for (a, b) ∼ BSS_δ.]

Main Question: When can DISJ simulate BSS_δ?
(Partial) Answer: YES if δ ≥ 3/8; OPEN for δ ∈ [1/4, 3/8); NO if δ < 1/4.

How can a “constant-sized” problem be HARD?

4 / 9

slide-22
SLIDE 22

Non-Interactive Simulation of Joint Distributions

[Figure: the parties observe i.i.d. samples X_1, X_2, X_3, . . . and Y_1, Y_2, Y_3, . . . from a joint source P(X, Y), and output symbols a, b ∈ [k] respectively, aiming for (a, b) ∼ Q.]

Main Question: When can P simulate Q?
▶ Analytically? OPEN in most cases!
▶ Algorithmically decidable? Not obvious!

How can a “constant-sized” problem be HARD?

4 / 9

slide-28
SLIDE 28

Tensor Power Problems

Non-interactive Simulation falls under the category of “Tensor Power” problems.

In Information Theory,
▶ Zero-error Shannon capacity [Open]
▶ Zero-error Witsenhausen rate [Open]

In Computer Science,
▶ (Classical) Amortized value of 2-prover 1-round games [Open]
▶ (Quantum) Entangled value of 2-prover 1-round games [Open]
▶ (Quantum) Local State Transformation [Open]
▶ Computing SDP integrality gaps for CSPs [Raghavendra-Steurer ’09]
▶ Amortized communication complexity = Information complexity [Braverman-Rao ’11], [Braverman-Schneider ’15]

5 / 9

slide-36
SLIDE 36

Decidability via “Dimension Reduction”

Main Question: Can P simulate Q? That is, are there n and functions f, g with range [k] such that (f(X^n), g(Y^n)) ∼ Q when (X^n, Y^n) ∼ P^{⊗n}?

Dimension Reduction: from any f, g on n coordinates, obtain f̃, g̃ on n_0 coordinates with (f̃(X^{n_0}), g̃(Y^{n_0})) ≈_ε (f(X^n), g(Y^n)).

If P can simulate Q . . . then P can ε-approximately simulate Q with only n_0 samples.
Key point: n_0 = n_0(ε, P, k) is explicit & does not depend on n. ⟹ DECIDABLE

Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness . . . can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings even with inputs!)

6 / 9

slide-37
SLIDE 37

Non-Interactive Agreement Distillation

Max Agreement Distillation: the parties observe (X^n, Y^n) ∼ P^{⊗n} and output f(X^n), g(Y^n) ∈ [k]. What is
    sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ]   subject to   E[f] = (1/k, . . . , 1/k) and E[g] = (1/k, . . . , 1/k)?
For convenience, f : X^n → R^k, where i ∈ [k] corresponds to the standard basis vector e_i.

Borell’s Theorem [Bor85]: “Halfspaces are most Noise Stable”; “Majority is Stablest” [MOO’04, Mos’10].
“Peace Sign Conjecture”: “Plurality is Stablest” [KKMO’04, IM’12].

7 / 9

slide-40
SLIDE 40

Non-Interactive Agreement from Correlated Gaussians

Max Agreement Distillation (correlated Gaussian source, k = 2): the parties observe (X^n, Y^n) ∼ G_ρ^{⊗n}, where each coordinate pair (X_i, Y_i) ∼ N(0, [[1, ρ], [ρ, 1]]), and output f(X^n), g(Y^n) ∈ [k]. What is
    sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ]   subject to   E[f] = (1/2, 1/2) and E[g] = (1/2, 1/2)?

Borell’s Theorem [Bor85] (“Halfspaces are most Noise Stable”; “Majority is Stablest” [MOO’04, Mos’10]): the optimum is attained by the halfspace strategies f(X^n) = sign(X_1) and g(Y^n) = sign(Y_1).
“Peace Sign Conjecture”: “Plurality is Stablest” [KKMO’04, IM’12].

7 / 9
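A quick numerical check of the k = 2 halfspace strategy (a minimal sketch, not from the talk): each party signs its own first coordinate, and the agreement probability matches the bivariate-Gaussian quadrant probability 1 − arccos(ρ)/π.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, trials = 0.8, 500_000

# One rho-correlated Gaussian coordinate per party suffices for the halfspace
# strategy: each party outputs the sign of its own coordinate.
cov = np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=trials)
agree = np.mean(np.sign(xy[:, 0]) == np.sign(xy[:, 1]))

# Quadrant probability of a rho-correlated Gaussian pair (Sheppard's formula):
# Pr[sign(X) = sign(Y)] = 1 - arccos(rho) / pi.
print(f"empirical agreement: {agree:.4f}")
print(f"closed form:         {1 - np.arccos(rho) / np.pi:.4f}")
```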

slide-41
SLIDE 41

Non-Interactive Agreement from Correlated Gaussians

Max Agreement Distillation (correlated Gaussian source, k = 2, non-uniform marginals): the parties observe (X^n, Y^n) ∼ G_ρ^{⊗n} and output f(X^n), g(Y^n) ∈ [k]. What is
    sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ]   subject to, e.g.,   E[f] = (0.3, 0.7) and E[g] = (0.6, 0.4)?

Borell’s Theorem [Bor85] generalizes to non-uniform marginals: shifted halfspaces f(X^n) = sign(X_1 − α) and g(Y^n) = sign(Y_1 − β) are optimal.
(“Halfspaces are most Noise Stable”; “Majority is Stablest” [MOO’04, Mos’10].)
“Peace Sign Conjecture”: “Plurality is Stablest” [KKMO’04, IM’12].

7 / 9

slide-44
SLIDE 44

Non-Interactive Agreement from Correlated Gaussians

Max Agreement Distillation (correlated Gaussian source, k = 3): the parties observe (X^n, Y^n) ∼ G_ρ^{⊗n} and output f(X^n), g(Y^n) ∈ [k]. What is
    sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ]   subject to   E[f] = (1/3, 1/3, 1/3) and E[g] = (1/3, 1/3, 1/3)?

Borell’s Theorem [Bor85]: “Halfspaces are most Noise Stable”; “Majority is Stablest” [MOO’04, Mos’10]; generalizes to non-uniform marginals!

“Peace Sign Conjecture”: “Plurality is Stablest” [KKMO’04, IM’12]. The conjectured optimal strategies f(X_1, X_2) and g(Y_1, Y_2) partition the plane of the first two coordinates into three 120° sectors (the “peace sign”).
Does it generalize to non-uniform marginals? FALSE! [HMN’16]

7 / 9

slide-45
SLIDE 45

Non-Interactive Agreement from Correlated Gaussians

(Same Max Agreement Distillation setup as the previous slide: Gaussian source G_ρ^{⊗n}, k = 3, uniform marginals; Borell’s Theorem [Bor85], “Majority is Stablest” [MOO’04, Mos’10], “Peace Sign Conjecture” / “Plurality is Stablest” [KKMO’04, IM’12], non-uniform generalization FALSE [HMN’16].)

■ The “Peace Sign Conjecture” implies an optimal strategy exists with 2 samples.
  • Q1. Does an optimal strategy even exist with some finite number of samples?
  • Q2. How many samples n_0(ε) are needed to get ε-close to the optimal agreement? Can we obtain an “explicit” bound? [De-Mossel-Neeman ’17]

7 / 9

slide-49
SLIDE 49

Non-Interactive Agreement from Correlated Gaussians

(Same Max Agreement Distillation setup: sup_{n, f, g} Pr[ f(X^n) = g(Y^n) ] with E[f] = E[g] = (1/k, . . . , 1/k); Borell’s Theorem [Bor85], “Majority is Stablest” [MOO’04, Mos’10], “Plurality is Stablest” [KKMO’04, IM’12], non-uniform generalization FALSE [HMN’16].)

Dimension Reduction: replace f, g on n coordinates by f̃, g̃ on n_0 coordinates with (f̃(X^{n_0}), g̃(Y^{n_0})) ≈_ε (f(X^n), g(Y^n)).

How many samples n_0(ε) are needed to get ε-close to the optimal agreement? Can we obtain an “explicit” bound?

Gaussian Case: (X, Y) ∼ G_ρ
▶ [De-Mossel-Neeman ’17, ’18]: n_0 = Ackermann(?)
▶ [This Work!]: n_0 = exp(k, 1/ε, 1/(1−ρ))

General Case: (X, Y) ∼ P
▶ [Ghazi-K-Sudan ’16]: Reduces* to the Gaussian case, using the Regularity Lemma and the Invariance Principle. Solved the k = 2 case, due to Borell’s theorem!

7 / 9

slide-50
SLIDE 50

Non-Interactive Agreement from Correlated Gaussians

(Same setup and Dimension Reduction statement as the previous slide.)

How many samples n_0(ε) are needed to get ε-close to the optimal agreement? Can we obtain an “explicit” bound?

Gaussian Case: (X, Y) ∼ G_ρ
▶ [De-Mossel-Neeman ’17, ’18]: n_0 = Ackermann(?)
▶ [This Work!]: n_0 = exp(k, 1/ε, 1/(1−ρ))

General Case: (X, Y) ∼ P
▶ [Ghazi-K-Sudan ’16]: Reduces* to the Gaussian case!
▶ [De-Mossel-Neeman ’18]: n_0 = Ackermann(?)
▶ [This Work!]: n_0 = exp(k, 1/ε, 1/(1−ρ), log(1/α))

7 / 9

slide-52
SLIDE 52

Main Technique!

Goal: given functions f_i : R^n → R and g_j : R^n → R (say {f_1, f_2, f_3} and {g_1, g_2, g_3}), Dimension Reduction should produce functions f̃_i : R^{n_0} → R and g̃_j : R^{n_0} → R (say {f̃_1, f̃_2, f̃_3} and {g̃_1, g̃_2, g̃_3}) such that for all i, j,

    ⟨ f_i, g_j ⟩_{G_ρ^{⊗n}} ≈_ε ⟨ f̃_i, g̃_j ⟩_{G_ρ^{⊗n_0}},   where   ⟨ f_i, g_j ⟩_{G_ρ^{⊗n}} := E_{(X, Y) ∼ G_ρ^{⊗n}} [ f_i(X) g_j(Y) ].

8 / 9

slide-60
SLIDE 60

Main Technique: Dimension Reduction for Polynomials!

Warm-up (Johnson-Lindenstrauss): given vectors u_i : [n] → R and v_j : [n] → R (say {u_1, u_2, u_3} and {v_1, v_2, v_3}), sample M ∼ N(0, 1)^{n_0 × n} and set ũ_i ← M u_i / √n_0 and ṽ_j ← M v_j / √n_0. Then
    E_M[ ⟨ũ_i, ṽ_j⟩ ] = ⟨u_i, v_j⟩   and   Var_M( ⟨ũ_i, ṽ_j⟩ ) < ε²   for n_0 = O(1/ε²),
so ⟨u_i, v_j⟩_{R^n} ≈_ε ⟨ũ_i, ṽ_j⟩_{R^{n_0}}.

Dimension Reduction for Polynomials: given functions f_i : R^n → R and g_j : R^n → R (say {f_1, f_2, f_3} and {g_1, g_2, g_3}), sample M ∼ N(0, 1)^{n × n_0} and set f̃_i(a) ← f_i(Ma / ∥a∥_2) and g̃_j(b) ← g_j(Mb / ∥b∥_2). Then
    E_M[ ⟨f̃_i, g̃_j⟩ ] ≈_ε ⟨f_i, g_j⟩   and   Var_M( ⟨f̃_i, g̃_j⟩ ) < ε²,
and if the f’s and g’s are multilinear of degree d, then n_0 = d^{O(d)} / ε² suffices, giving
    ⟨ f_i, g_j ⟩_{G_ρ^{⊗n}} ≈_ε ⟨ f̃_i, g̃_j ⟩_{G_ρ^{⊗n_0}}.

8 / 9
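To see the Johnson-Lindenstrauss half of the analogy numerically, here is a minimal NumPy sketch (illustrative, not from the paper; all parameter values are assumptions): a random Gaussian map divided by √n_0 preserves inner products in expectation, with fluctuations of order 1/√n_0.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n0, trials = 2_000, 400, 200      # n0 = O(1/eps^2) controls the error

# Two unit vectors in R^n with inner product 0.6 by construction.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
w = rng.standard_normal(n); w -= (w @ u) * u; w /= np.linalg.norm(w)
v = 0.6 * u + 0.8 * w

# JL experiment: M ~ N(0,1)^{n0 x n}, u~ = M u / sqrt(n0), v~ = M v / sqrt(n0).
estimates = np.empty(trials)
for t in range(trials):
    M = rng.standard_normal((n0, n))
    estimates[t] = (M @ u / np.sqrt(n0)) @ (M @ v / np.sqrt(n0))

print(f"true <u, v>             : {u @ v:.3f}")
print(f"mean of JL estimates    : {estimates.mean():.3f}")   # unbiased
print(f"std dev of JL estimates : {estimates.std():.3f}")    # about sqrt(1.36 / n0) ~ 0.06
```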

slide-61
SLIDE 61

Main Technique: Dimension Reduction for Polynomials!

(Same dimension reduction for polynomials as the previous slide: M ∼ N(0, 1)^{n × n_0}, f̃_i(a) ← f_i(Ma / ∥a∥_2), g̃_j(b) ← g_j(Mb / ∥b∥_2); E_M[ ⟨f̃_i, g̃_j⟩ ] ≈_ε ⟨f_i, g_j⟩, Var_M( ⟨f̃_i, g̃_j⟩ ) < ε², and n_0 = d^{O(d)} / ε² for multilinear polynomials of degree d, so that ⟨ f_i, g_j ⟩_{G_ρ^{⊗n}} ≈_ε ⟨ f̃_i, g̃_j ⟩_{G_ρ^{⊗n_0}}.)

Comparison with [Kane-Rao ’18]:
▶ Don’t care about the seed length of M.
▶ Crucially, preserves other statistical properties! Note that Ma / ∥a∥_2 ∼ N(0, 1)^{⊗n}.

Thanks to Sankeerth Rao & Mitali Bafna!

8 / 9
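And here is a minimal numerical sketch of the polynomial dimension reduction itself (illustrative only; the test pair f(x) = x_1 x_2, g(y) = y_1 y_2 and all parameter values are assumptions, not from the paper). For this degree-2 multilinear pair the original correlated-Gaussian inner product is exactly ρ², and a single random M already gives a reduced inner product close to it.

```python
import numpy as np

rng = np.random.default_rng(3)
n0, rho, samples = 400, 0.5, 50_000

def correlated_gaussians(dim, rho, size):
    """i.i.d. coordinate pairs (X_i, Y_i), each a rho-correlated standard normal pair."""
    x = rng.standard_normal((size, dim))
    z = rng.standard_normal((size, dim))
    return x, rho * x + np.sqrt(1.0 - rho**2) * z

# Degree-2 multilinear test pair on R^n (any n >= 2): f(x) = x_1 x_2, g(y) = y_1 y_2,
# so <f, g>_{G_rho^{x n}} = E[X_1 Y_1] * E[X_2 Y_2] = rho^2 exactly.
f = lambda x: x[:, 0] * x[:, 1]
g = lambda y: y[:, 0] * y[:, 1]

# Dimension reduction: f~(a) = f(M a / ||a||_2) for one random M ~ N(0,1)^{n x n0}.
# f and g only read the first two coordinates of their input, so only the
# first two rows of M are needed here.
M2 = rng.standard_normal((2, n0))
reduce_ = lambda a: (a @ M2.T) / np.linalg.norm(a, axis=1, keepdims=True)

# Monte Carlo estimate of <f~, g~>_{G_rho^{x n0}} over (A, B) ~ G_rho^{x n0}.
A, B = correlated_gaussians(n0, rho, samples)
reduced_ip = np.mean(f(reduce_(A)) * g(reduce_(B)))

print(f"original <f, g>   (exact)       : {rho**2:.3f}")
print(f"reduced  <f~, g~> (Monte Carlo) : {reduced_ip:.3f}")
```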

slide-65
SLIDE 65

Summary & Open Questions . . .

Main “take-away” Theorem: Any distributed task performed with unbounded amounts of correlated randomness . . . can also be approximately performed with an explicitly bounded number of samples! (extends to interactive settings even with inputs!) Proof via dimension reduction for low-degree multilinear polynomials.

Open Questions:
▶ Lower bounds on randomness reduction? Better upper bounds?
▶ Other applications of dimension reduction for polynomials?
▶ Derandomization of the dimension reduction lemma?
▶ (NP-)hardness of deciding Non-Interactive Simulation?
▶ Other Tensor Power problems?

Thanks! Questions?

9 / 9
