
Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas

Dean Doron¹ (UT Austin → Stanford), Pooya Hatami² (UT Austin → Ohio State), William M. Hoza³ (UT Austin)

BIRS Workshop 19w5088, July 8, 2019

¹ Supported by NSF Grant CCF-1705028. ² Supported by a Simons Investigator Award (#409864, David Zuckerman). ³ Supported by the NSF GRFP under Grant DGE-1610403 and by a Harrington Fellowship from UT Austin.

Randomness as a scarce resource

◮ Randomization is a popular algorithmic technique
◮ But randomness is costly
◮ An algorithm that uses fewer random bits is better

Pseudorandom generators (PRGs)

[Diagram: Gen stretches s seed bits to n output bits]

◮ Gen "fools" f : {0, 1}^n → {0, 1} if E[f(Gen(U))] = E[f(U)] ± ε
◮ Goal: Design a PRG that fools an interesting class of functions f
◮ Minimize the seed length s = s(n, ε)
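The definition is concrete enough to test by brute force on toy instances. Here is a minimal sketch; the generator `gen` below is a made-up toy, not a construction from the talk:

```python
from itertools import product

def fooling_error(gen, f, s, n):
    """Exact |E[f(Gen(U_s))] - E[f(U_n)]| by enumerating all seeds and inputs.

    Feasible only for tiny s and n; it illustrates the definition, nothing more.
    """
    e_prg = sum(f(gen(seed)) for seed in product((0, 1), repeat=s)) / 2**s
    e_uni = sum(f(x) for x in product((0, 1), repeat=n)) / 2**n
    return abs(e_prg - e_uni)

gen = lambda seed: seed + (seed[0] ^ seed[1],)  # toy: pad 2 bits with their parity
f = lambda x: x[0] & x[2]                       # a read-once conjunction
print(fooling_error(gen, f, s=2, n=3))          # 0.0 for this particular f
```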

Read-once formulas

[Figure: a depth-3 read-once formula — an ∧ gate over ∨ gates over ∧ gates, reading each of the literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13 at most once]

◮ This work: Fool depth-d read-once formulas for d = O(1)
◮ Read-once version of AC0

Prior work and main result

Seed length          Model fooled                        Reference
O(n^{0.001})         AC0                                 Ajtai, Wigderson '89
O(log^{2d+6} n)      AC0                                 Nisan '91
Õ(log^{d+4} n)       AC0                                 Trevisan, Xue '13
Õ(log^{d+1} n)       Read-once AC0                       Chen, Steinke, Vadhan '15
Õ(log^2 n)           Arbitrary-order width-O(1) ROBPs    Forbes, Kelley '18
Õ(log n)             Read-once AC0                       This work

◮ Main result: PRG for read-once AC0 with seed length log(n/ε) · O(d log log(n/ε))^{2d+2}

Motivation: L vs. BPL

◮ Big open problem: Prove L = BPL
◮ "Randomness is not necessary for space-efficient computation"
◮ Main approach: Design an optimal PRG for "ROBPs" (read-once branching programs)
◮ Bad news: Seed length O(log^2 n) has not been improved for decades [Nisan '92]
◮ Good news: Can achieve seed length O(log n) for increasingly powerful restricted models
◮ Read-once AC0 is one of the frontiers of this progress

Seed length O(log n)

[Figure: a hierarchy of read-once models, marking which have explicit PRGs with seed length O(log n): parities and conjunctions (NN93), 2-ROBPs and arbitrary-order 2-ROBPs (SZ95), read-once CNFs (DETT10), regular O(1)-ROBPs (BRRY14), read-once polynomials (LV17), arbitrary-order permutation O(1)-ROBPs (CHHL18), 3-ROBPs and arbitrary-order 3-ROBPs (MRT19), and read-once AC0 (this work); general O(1)-ROBPs, poly(n)-ROBPs, read-once formulas, and read-once AC0[⊕] remain open]

Starting point: Forbes-Kelley PRG

[The prior-work table again, now highlighting the Forbes-Kelley row: Õ(log^2 n) seed length for arbitrary-order width-O(1) ROBPs]

PRGs via pseudorandom restrictions [AW89]

◮ Start by sampling a pseudorandom restriction X ∈ {0, 1, ⋆}^n

[Figure: the example read-once formula before and after the restriction — the starred variables survive, and the variables assigned 0/1 are replaced by constants]

Restriction notation

◮ Define Res: {0, 1}^n × {0, 1}^n → {0, 1, ⋆}^n by

  Res(y, z)_i = ⋆ if y_i = 1,  and  Res(y, z)_i = z_i if y_i = 0

  y         = 0 1 1 0 0 1 0 0
  z         = 0 0 1 1 1 1 0 1
  Res(y, z) = 0 ⋆ ⋆ 1 1 ⋆ 0 1
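The Res map translates directly into code; a minimal sketch, with '*' standing in for ⋆:

```python
def res(y, z):
    """Res(y, z)_i = '*' (free) if y_i = 1, else the fixed bit z_i."""
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

# The example from the slide:
y = [0, 1, 1, 0, 0, 1, 0, 0]
z = [0, 0, 1, 1, 1, 1, 0, 1]
print(res(y, z))  # [0, '*', '*', 1, 1, '*', 0, 1]
```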

Forbes-Kelley pseudorandom restriction

◮ A distribution D over {0, 1}^n is ε-biased if it fools parities: for every S ≠ ∅,
  |E[⊕_{i∈S} D_i] − 1/2| ≤ ε
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length O(log n))
◮ Theorem [Forbes, Kelley '18]: For any O(1)-width ROBP f,
  E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
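As a runnable sketch of this step (not the construction itself): `sample_small_bias` below is a placeholder, faked with uniform bits; an explicit ε-biased sampler such as [NN93] would stretch only an O(log(n/ε))-bit seed.

```python
import random

def sample_small_bias(n):
    # Placeholder: uniform bits are trivially 0-biased but cost n random bits.
    # A real construction would stretch an O(log(n/ε))-bit seed.
    return [random.randint(0, 1) for _ in range(n)]

def res(y, z):
    # Res(y, z)_i = '*' if y_i = 1, else z_i.
    return ['*' if yi else zi for yi, zi in zip(y, z)]

def fk_restriction(n):
    """One pseudorandom restriction X = Res(D, D') with D, D' independent."""
    return res(sample_small_bias(n), sample_small_bias(n))

print(fk_restriction(8))  # e.g. [0, '*', 1, 1, '*', 0, '*', 1]
```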

Forbes-Kelley pseudorandom generator

◮ So [FK18] can assign values to half the inputs using O(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^{∘t} denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^{∘t} ∈ {0, 1}^n (no ⋆)
◮ Expectation is preserved at every step, so the total error is low:
  E_{X^{∘t}}[f(X^{∘t})] ≈ E_U[f(U)]
◮ Total cost: O(log^2 n) truly random bits
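Composition of restrictions is the only new ingredient here. A sketch, again with uniform bits standing in for the small-bias strings — each round fixes about half of the surviving coordinates, so t = O(log n) rounds clear all ⋆'s with high probability:

```python
import random

def res(y, z):
    return ['*' if yi else zi for yi, zi in zip(y, z)]

def compose(x1, x2):
    """x1 then x2: keep x1's fixed bits; x2 acts on the coordinates x1 left free."""
    return [a if a != '*' else b for a, b in zip(x1, x2)]

def x_power_t(n, t, sampler):
    """Compose t independent restrictions Res(sampler(n), sampler(n))."""
    x = ['*'] * n
    for _ in range(t):
        x = compose(x, res(sampler(n), sampler(n)))
    return x

uniform = lambda n: [random.randint(0, 1) for _ in range(n)]
print(x_power_t(16, t=8, sampler=uniform))  # usually no '*' remains
```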

Improved PRGs via simplification [GMRTV12]

◮ Step 1: Apply a pseudorandom restriction X ∈ {0, 1, ⋆}^n
◮ Design X to preserve expectation
◮ Design X so that X^{∘t} also simplifies the formula, for t ≪ log n
◮ Step 2: Fool the restricted formula, taking advantage of its simplicity

[Figure: the example formula shrinking step by step under the restriction — constant gates disappear, gates with one remaining child merge with their parent, and the depth-3 tree flattens to a single ∧ of literals]

Our pseudorandom restriction

◮ Assume by recursion: PRG for depth d with seed length Õ(log n)
◮ Let's sample X ∈ {0, 1, ⋆}^n for depth d + 1:
  1. Recursively sample G_d, G′_d ∈ {0, 1}^n
  2. Sample D, D′ ∈ {0, 1}^n with small bias
  3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
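The three steps translate into a short sampler. In this sketch, `prg_depth_d` and `small_bias` are placeholders for the recursively obtained depth-d PRG and an explicit small-bias sampler (uniform bits here only to make it runnable):

```python
import random

def res(y, z):
    return ['*' if yi else zi for yi, zi in zip(y, z)]

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def restriction_for_depth_d_plus_1(n, prg_depth_d, small_bias):
    """X = Res(G_d ⊕ D, G'_d ⊕ D'), following steps 1-3 above."""
    g, g_prime = prg_depth_d(n), prg_depth_d(n)   # step 1
    d, d_prime = small_bias(n), small_bias(n)     # step 2
    return res(xor(g, d), xor(g_prime, d_prime))  # step 3

uniform = lambda n: [random.randint(0, 1) for _ in range(n)]
print(restriction_for_depth_d_plus_1(8, uniform, uniform))
```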

Preserving expectation

◮ Claim: For any depth-(d + 1) read-once AC0 formula f,
  E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ Proof: Read-once AC0 can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′): XOR-ing an independent string into a small-bias string leaves it small-bias, so the hypothesis of [FK18] still holds

Simplification

◮ ∆(f) := the maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^{∘t}, ∆(f|_{X^{∘t}}) ≤ polylog n, where t = O((log log n)^2)
◮ (Actually we only prove this statement "up to sandwiching")

∆ → polylog n: Proof outline

◮ Chen, Steinke, Vadhan '15: Read-once AC0 simplifies under truly random restrictions
◮ Testing for simplification is another read-once AC0 problem
◮ So we can derandomize the [CSV15] analysis with X = Res(G_d ⊕ D, G′_d ⊕ D′)

Collapse under truly random restrictions

◮ Assume f is a biased read-once AC0 formula: E[f] ≤ ρ or E[f] ≥ 1 − ρ
◮ Let R = Res(U, U′) (a truly random restriction)
◮ Theorem [CSV '15]: Pr_{R^{∘s}}[f|_{R^{∘s}} is nonconstant] ≤ ρ + 1/n^{100}, where s = O(log log n)
◮ (Proof uses Fourier analysis)

NAND formulas

[Figure: the example formula rewritten with every ∧ and ∨ gate replaced by NAND gates, preserving the read-once structure over x1, …, x13]

Collapse under truly random restrictions (continued)

◮ Corollary: If E[f] ≥ 1 − ρ, then Pr_{R^{∘s}}[f|_{R^{∘s}} ≢ 1] ≤ 2ρ + 1/n^{100}
◮ Let F be a set of formulas on disjoint variable sets
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ Corollary: Pr_{R^{∘s}}[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^{100})^{|F|}, by independence over the disjoint variable sets

Derandomizing collapse

◮ Let F be a set of depth-(d − 1) formulas on disjoint variables
◮ Computational problem: Given y, z ∈ {0, 1}^n, decide whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
◮ Lemma: This can be decided in depth-d read-once AC0

Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1

◮ For a NAND gate with children a, b, c:
  NAND(a, b, c) ≡ 1 ⟺ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
  NAND(a′, b′, c′) ≡ 0 ⟺ (a′ ≡ 1) ∧ (b′ ≡ 1) ∧ (c′ ≡ 1)
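This recursion is easy to run directly. A small sketch, with a tuple encoding of NAND formulas that is ours for illustration:

```python
def is_const(formula, r, want):
    """Decide whether a restricted NAND formula is identically `want` (0 or 1).

    formula: either ('var', i, negated) reading x_i, or ('nand', [children]).
    r: restriction over {0, 1, '*'}. Follows the slide's recursion:
      NAND(...) ≡ 1  iff  some child ≡ 0
      NAND(...) ≡ 0  iff  every child ≡ 1
    """
    kind = formula[0]
    if kind == 'var':
        _, i, neg = formula
        if r[i] == '*':
            return False                  # a free variable is never constant
        return (r[i] ^ neg) == want       # leaf value under the restriction
    _, children = formula
    if want == 1:
        return any(is_const(c, r, 0) for c in children)
    return all(is_const(c, r, 1) for c in children)

# Toy check: f = NAND(x0, ¬x1); fixing x0 = 0 forces f ≡ 1.
f = ('nand', [('var', 0, False), ('var', 1, True)])
print(is_const(f, [0, '*'], want=1))    # True
print(is_const(f, ['*', '*'], want=1))  # False
```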

Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1 (continued)

◮ At the bottom, we get one additional layer:
  (Res(y, z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = b)
  (¬Res(y, z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = 1 − b)
◮ At the top: "∃f ∈ F" is one more ∨ gate (merge it with the top ∨ gates)

Collapse under pseudorandom restrictions

◮ Let F be a set of depth-(d − 1) formulas on disjoint variables
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
◮ G_d, G′_d fool depth d, so
  Pr_X[∀f ∈ F, f|_X ≢ 1] ≈ Pr_R[∀f ∈ F, f|_R ≢ 1]
◮ Hybrid argument:
  Pr_{X^{∘s}}[∀f ∈ F, f|_{X^{∘s}} ≢ 1] ≈ Pr_{R^{∘s}}[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^{100})^{|F|}

∆ → √∆ · polylog n

◮ So far: X^{∘s} causes any biased depth-(d − 1) formula to collapse
◮ What about unbiased depth-(d + 1) formulas?
◮ Assume that for every gate g in f, E[¬g] ≥ 1/poly(n)
◮ Lemma: With high probability over X^{∘s}, ∆(f|_{X^{∘s}}) ≤ √(∆(f)) · polylog n

Illustration: ∆ → √∆ · polylog n

[Figure: a depth-(d + 1) NAND formula; the lower subformulas are likely to collapse if biased, so each gate above them is likely to have few remaining children]

Proof that ∆ → √∆ · polylog n

(This analysis follows [GMRTV12, CSV15])

◮ Let g be a gate, g ≠ root
◮ Partition the children h of g into O(log n) buckets based on E[h]
◮ Consider one bucket B = {h : E[h] ≈ 1 − ρ}

Illustration: ∆ → √∆ · polylog n (continued)

[Figure: a gate g with two of its children h, h′ highlighted inside one bucket; each child is likely to collapse if biased, leaving g with few remaining children]

Proof that ∆ → √∆ · polylog n (continued)

◮ B = {h : E[h] ≈ 1 − ρ}
◮ How big can B be?
◮ 1/poly(n) ≤ E[¬g] ≤ ∏_{h∈B} E[h] ≈ (1 − ρ)^{|B|}
◮ So |B| ≤ O((1/ρ) log n)
◮ We also trivially have |B| ≤ ∆
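Spelling out the bucket bound (with E[¬g] ≥ n^{−c} for a constant c, as assumed above):

```latex
n^{-c} \;\le\; (1-\rho)^{|B|} \;\le\; e^{-\rho |B|}
\quad\Longrightarrow\quad
\rho\,|B| \;\le\; c \ln n
\quad\Longrightarrow\quad
|B| \;\le\; O\!\left(\frac{\log n}{\rho}\right).
```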

Proof that ∆ → √∆ · polylog n (continued)

◮ Let L = #{h ∈ B : h|_{X^{∘s}} ≢ 1} (the number of living children)
◮ Let M = Θ(√∆ · log n) and k = Θ((log n)/log ∆)
◮ Then:

  Pr[L ≥ M] = Pr[ (L choose k) ≥ (M choose k) ]                      (Pascal)
            ≤ (M choose k)^{−1} · E[(L choose k)]                     (Markov)
            ≤ (M choose k)^{−1} · (|B| choose k) · (O(ρ)^k + 1/n^{200})
            ≤ (|B|e/M)^k · (O(ρ)^k + 1/n^{200})                       (Stirling)
            ≤ (1/√∆)^k + (1/n^{200}) · (√∆)^k
            ≤ 2/n^{100}
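A one-line check that this choice of k balances the last two terms (taking k = 200 log n / log ∆, so that ∆^{k/2} = n^{100}):

```latex
\Bigl(\tfrac{1}{\sqrt{\Delta}}\Bigr)^{k} = \Delta^{-k/2} = n^{-100},
\qquad
\frac{(\sqrt{\Delta})^{k}}{n^{200}} = \frac{n^{100}}{n^{200}} = n^{-100},
```

so the two terms sum to 2/n^{100}, as claimed.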

Finishing the proof of the main lemma

◮ After s = O(log log n) restrictions, ∆ → √∆ · polylog n
◮ Each such step roughly halves log ∆ (see the recurrence check below), so O(log log n) steps suffice
◮ Therefore, after t = O((log log n)^2) restrictions, ∆ = polylog n
◮ Total cost so far: O(log n) truly random bits
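Why (log log n)^2 restrictions suffice, as a short recurrence check (writing ∆_j for the fan-in bound after j applications of the lemma, each costing s = O(log log n) restrictions):

```latex
\log \Delta_{j+1} \;\le\; \tfrac{1}{2}\log \Delta_j + O(\log\log n)
\quad\Longrightarrow\quad
\log \Delta_j \;\le\; 2^{-j}\log \Delta_0 + O(\log\log n),
```

so starting from ∆_0 ≤ n, after j = O(log log n) applications we have log ∆_j = O(log log n), i.e., ∆ = polylog n; the total number of restrictions is s · j = O((log log n)^2).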

Final step: MRT PRG

◮ Theorem (Meka, Reingold, Tal '19): There is an explicit PRG with seed length O(log(n/ε)) that fools functions of the form f = ⋀_{i=1}^m f_i, where f_1, …, f_m are on disjoint variables and each f_i can be computed by an ROBP of width O(1) and length polylog n
◮ (Proof uses the GMRTV approach, building on [GY14, CHRT18, Vio09])
◮ In our case, f = ⋀_{i=1}^m f_i = Σ_{S⊆[m]} ((−1)^{|S|}/2^m) · ∏_{i∈S} (−1)^{f_i}
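The expansion in the last bullet is easy to sanity-check numerically; in this sketch the f_i are just fixed bits standing in for the subformula outputs:

```python
import itertools, random

def and_via_fourier(bits):
    """Evaluate sum over S ⊆ [m] of (-1)^|S| / 2^m * prod_{i in S} (-1)^{bits[i]}."""
    m = len(bits)
    total = 0.0
    for s in itertools.product((0, 1), repeat=m):   # s is the indicator of S
        prod = 1
        for i in range(m):
            if s[i]:
                prod *= (-1) ** bits[i]
        total += ((-1) ** sum(s) / 2**m) * prod
    return total

for _ in range(5):
    f = [random.randint(0, 1) for _ in range(4)]
    assert round(and_via_fourier(f)) == min(f)      # min of 0/1 bits = AND
print("Fourier expansion of AND checked on random inputs")
```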

Directions for further research

[Figure: the model hierarchy from the "Seed length O(log n)" slide again; the open frontiers adjacent to this work include read-once AC0[⊕], read-once formulas, and general O(1)-width and poly(n)-width ROBPs]

Read-once AC0[⊕]

[Figure: the example read-once formula with some ∧/∨ gates replaced by ⊕ (parity) gates]

Fooling read-once AC0[⊕]

◮ Natural next step toward derandomizing BPL
◮ Best prior PRG: seed length O(log^2 n) [FK '18]
◮ Theorem: Our PRG fools read-once AC0[⊕] with seed length Õ(t + log(n/ε)), where t = # parity gates
◮ Open: Fool read-once AC0[⊕] with seed length O(log(n/ε))?
◮ Thanks! Questions?