

SLIDE 1

Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas

Dean Doron1

UT Austin → Stanford

Pooya Hatami2

UT Austin → Ohio State

William M. Hoza3

UT Austin

July 19, CCC 2019

1Supported by NSF Grant CCF-1705028 2Supported by a Simons Investigator Award (#409864, David Zuckerman) 3Supported by the NSF GRFP under Grant DGE-1610403 and by a Harrington Fellowship from UT Austin


SLIDE 3

Randomness as a scarce resource

◮ Randomization is a popular algorithmic technique
◮ But randomness is costly
◮ An algorithm that uses fewer random bits is better


SLIDE 7

Pseudorandom generators (PRGs)

[Diagram: s bits → Gen → n bits]

◮ Gen “fools” f : {0, 1}^n → {0, 1} if E[f(Gen(U))] = E[f(U)] ± ε
◮ Goal: Design a PRG that fools an interesting class of functions f
◮ Minimize the seed length s = s(n, ε)


SLIDE 10

Read-once formulas

[Example: a read-once formula tree of ∧ and ∨ gates over the literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13]

◮ This work: Fool depth-d read-once formulas for d = O(1)
◮ Read-once version of AC^0


SLIDE 17

Prior work and main result

Seed length      | Model fooled                     | Reference
O(n^0.001)       | AC^0                             | Ajtai, Wigderson ’89
O(log^(2d+6) n)  | AC^0                             | Nisan ’91
Õ(log^(d+4) n)   | AC^0                             | Trevisan, Xue ’13
Õ(log^(d+1) n)   | Read-once AC^0                   | Chen, Steinke, Vadhan ’15
Õ(log^2 n)       | Arbitrary-order width-O(1) ROBPs | Forbes, Kelley ’18
Õ(log n)         | Read-once AC^0                   | This work

◮ Main result: PRG for read-once AC^0 with seed length log(n/ε) · O(d log log(n/ε))^(2d+2)


SLIDE 23

Motivation: L vs. BPL

◮ Big open problem: Prove L = BPL
◮ “Randomness is not necessary for space-efficient computation”
◮ Main approach: Design an optimal PRG for “ROBPs” (read-once branching programs)
◮ Bad news: Seed length O(log^2 n) has not been improved for decades [Nisan ’92]
◮ Good news: Can achieve seed length O(log n) for increasingly powerful restricted models
◮ Read-once AC^0 is one of the frontiers of this progress

SLIDE 24

Seed length O(log n)

[Diagram: a landscape of models ordered by expressive power. Models with known seed-length-O(log n) PRGs: Parities and Conjunctions [NN93]; 2-ROBPs and arbitrary-order 2-ROBPs [SZ95]; Read-once CNFs [DETT10]; Regular O(1)-ROBPs [BRRY14]; Read-once polynomials [LV17]; Arbitrary-order permutation O(1)-ROBPs [CHHL18]; 3-ROBPs and arbitrary-order 3-ROBPs [MRT19]; Read-once AC^0 (this work). Still beyond the frontier: read-once AC^0[⊕], read-once formulas, O(1)-ROBPs, arbitrary-order O(1)-ROBPs, poly(n)-ROBPs, arbitrary-order poly(n)-ROBPs.]

SLIDE 25

Starting point: Forbes-Kelley PRG

[The prior-work table from Slide 17 again; the relevant row is Forbes, Kelley ’18: seed length Õ(log^2 n) for arbitrary-order width-O(1) ROBPs.]



SLIDE 29

PRGs via pseudorandom restrictions [AW89]

◮ Start by sampling a pseudorandom restriction X ∈ {0, 1, ⋆}^n

[Example: the formula from Slide 10 after the restriction — some literals are fixed to constants, while x7, ¬x1, x8, x3, x11, x4, x13 remain free]


SLIDE 31

Restriction notation

◮ Define Res : {0, 1}^n × {0, 1}^n → {0, 1, ⋆}^n by

Res(y, z)_i = ⋆ if y_i = 1,  and  Res(y, z)_i = z_i if y_i = 0

◮ Example:

y         = 0 1 1 0 0 1 0 0
z         = 0 0 1 1 1 1 0 1
Res(y, z) = 0 ⋆ ⋆ 1 1 ⋆ 0 1
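In code, Res is a one-liner; the example reproduces the slide’s table (with '*' standing in for ⋆):

```python
def res(y, z):
    """Combine a star-selection string y and a value string z into a
    restriction over {0, 1, '*'}: position i is left free ('*') when
    y[i] == 1, and fixed to z[i] when y[i] == 0."""
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

y = [0, 1, 1, 0, 0, 1, 0, 0]
z = [0, 0, 1, 1, 1, 1, 0, 1]
print(res(y, z))  # [0, '*', '*', 1, 1, '*', 0, 1]
```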


SLIDE 37

Forbes-Kelley pseudorandom restriction

◮ A distribution D over {0, 1}^n is ε-biased if it fools parities: for every nonempty S ⊆ [n], |E[⊕_{i∈S} D_i] − 1/2| ≤ ε
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length O(log n))
◮ Theorem [Forbes, Kelley ’18]: For any O(1)-width ROBP f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])


SLIDE 45

Forbes-Kelley pseudorandom generator

◮ So [FK18] can assign values to half the inputs using O(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^∘t denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^∘t ∈ {0, 1}^n (no ⋆)
◮ Expectation is preserved at every step, so the total error is low: E_{X^∘t}[f(X^∘t)] ≈ E_U[f(U)]
◮ Total cost: O(log^2 n) truly random bits
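The iterated-restriction scheme can be sketched as follows. The small-bias sampler is replaced by a truly random one — an assumption purely for illustration, since the point of [FK18] is precisely that small-bias strings suffice:

```python
import random

def res(y, z):
    # Position i is free ('*') when y[i] == 1, fixed to z[i] when y[i] == 0
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

def compose(outer, inner):
    # Positions already fixed by `outer` stay fixed; its surviving
    # stars are filled in (or kept as stars) by `inner`
    return [i if o == '*' else o for o, i in zip(outer, inner)]

def iterated_restriction(n, t, sample_bits):
    # X^(composed t times): compose t independent copies of X = Res(D, D').
    # `sample_bits` stands in for the small-bias sampler (here: truly random).
    x = ['*'] * n
    for _ in range(t):
        x = compose(x, res(sample_bits(n), sample_bits(n)))
    return x

random.seed(0)
truly_random = lambda n: [random.randrange(2) for _ in range(n)]
x = iterated_restriction(n=64, t=40, sample_bits=truly_random)
# Each position stays a star through one round with probability 1/2,
# so after t rounds the chance that any star survives is at most n / 2^t.
print(x.count('*'))  # 0
```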


SLIDE 55

Improved PRGs via simplification [GMRTV12]

◮ Step 1: Apply a pseudorandom restriction X ∈ {0, 1, ⋆}^n
◮ Design X to preserve expectation
◮ Design X so that X^∘t also simplifies the formula, for t ≪ log n

[Example: the formula from Slide 10 collapses to a single ∧ over the surviving literals x7, ¬x1, x8, x3, x11, x4]

◮ Step 2: Fool the restricted formula, taking advantage of its simplicity

SLIDE 60

Our pseudorandom restriction

◮ Assume by recursion: PRG for depth d with seed length Õ(log n)
◮ Let’s sample X ∈ {0, 1, ⋆}^n for depth d + 1:
  1. Recursively sample G_d, G′_d ∈ {0, 1}^n
  2. Sample D, D′ ∈ {0, 1}^n with small bias
  3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
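Steps 1–3 can be wired together as below. Both samplers are stand-ins (truly random bits, an assumption for illustration only); the construction’s content lies in what G_d and D actually are:

```python
import random

def res(y, z):
    # Position i is free ('*') when y[i] == 1, fixed to z[i] when y[i] == 0
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

def xor(a, b):
    return [ai ^ bi for ai, bi in zip(a, b)]

def sample_restriction(n, sample_depth_d_prg, sample_small_bias):
    # Steps 1-3 from the slide: G_d, G'_d from the recursive depth-d PRG,
    # D, D' with small bias, then X = Res(G_d xor D, G'_d xor D')
    g, g2 = sample_depth_d_prg(n), sample_depth_d_prg(n)
    d, d2 = sample_small_bias(n), sample_small_bias(n)
    return res(xor(g, d), xor(g2, d2))

random.seed(0)
rand_bits = lambda n: [random.randrange(2) for _ in range(n)]
print(sample_restriction(8, rand_bits, rand_bits))
```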


SLIDE 63

Preserving expectation

◮ Claim: For any depth-(d + 1) read-once AC^0 formula f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ Proof: Read-once AC^0 can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′), since XORing an independent string into a small-bias string keeps it small-bias


SLIDE 66

Simplification

◮ ∆(f) := the maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^∘t, ∆(f|_{X^∘t}) ≤ polylog n, where t = O((log log n)^2)
◮ (Actually we only prove this statement “up to sandwiching”)


SLIDE 71

Simplification under truly random restrictions

◮ Let f be a read-once AC^0 formula
◮ Let R = Res(U, U′) (a truly random restriction)
◮ Chen, Steinke, Vadhan ’15 ⇒ W.h.p. over R^∘t, ∆(f|_{R^∘t}) ≤ polylog n
◮ (In fact the simplification they show is more severe)
◮ (Again, these statements are true “up to sandwiching.” Proof uses Fourier analysis)


SLIDE 75

Derandomizing simplification

◮ Let f be a depth-(d − 1) read-once AC^0 formula
◮ Let b ∈ {0, 1}
◮ Computational problem: Given y, z ∈ {0, 1}^n, decide whether f|_{Res(y,z)} ≡ b
◮ Lemma: This can be decided in depth-d read-once AC^0


SLIDE 78

Deciding whether f|_{Res(y,z)} ≡ b

◮ For an ∧-gate b with children a and c: (b ≡ 0) ⇐⇒ (a ≡ 0) ∨ (c ≡ 0)
◮ For an ∨-gate b′ with children a′ and c′: (b′ ≡ 0) ⇐⇒ (a′ ≡ 0) ∧ (c′ ≡ 0)

SLIDE 79

Deciding whether f|_{Res(y,z)} ≡ b (continued)

◮ At the bottom, we get one additional layer:
(Res(y, z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = b)
(¬Res(y, z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = 1 − b)
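The gate rules from the last two slides translate into a direct recursion. The tuple encoding ('AND'/'OR'/'LIT') is a hypothetical representation chosen for this sketch; the lemma’s point is that this decision can even be carried out inside depth-d read-once AC^0, which the plain recursion below does not attempt to show:

```python
def is_const(f, y, z, b):
    """Decide whether the read-once formula f, restricted by Res(y, z),
    is identically b.  f is ('AND', kids), ('OR', kids), or ('LIT', i, neg).
    Read-once-ness (children on disjoint variables) makes the gate rules sound."""
    kind = f[0]
    if kind == 'LIT':
        _, i, neg = f
        if y[i] == 1:          # variable left free: not a constant
            return False
        return (z[i] ^ neg) == b   # literal's fixed value under the restriction
    kids = f[1]
    if (kind, b) in (('AND', 0), ('OR', 1)):
        # AND is identically 0 iff some child is; dually for OR and 1
        return any(is_const(k, y, z, b) for k in kids)
    # AND is identically 1 iff every child is; dually for OR and 0
    return all(is_const(k, y, z, b) for k in kids)

# f = (x0 or not-x1) and x2
f = ('AND', [('OR', [('LIT', 0, 0), ('LIT', 1, 1)]), ('LIT', 2, 0)])
print(is_const(f, y=[0, 0, 0], z=[0, 1, 0], b=0))  # True: both children fixed to 0
print(is_const(f, y=[1, 1, 1], z=[0, 0, 0], b=0))  # False: all variables left free
```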

slide-83
SLIDE 83

Collapse under pseudorandom restrictions

◮ Let f be a depth-(d − 1) read-once AC0 formula ◮ Let b ∈ {0, 1} ◮ X = Res(Gd ⊕ D, G ′

d ⊕ D′)

◮ Gd, G ′

d fool depth d, so

Pr

X [f |X ≡ b] ≈ Pr R [f |R ≡ b]

◮ Hybrid argument: Pr

X ◦t[f |X ◦t ≡ b] ≈ Pr R◦t[f |R◦t ≡ b]


SLIDE 85

Bridging the gap from d − 1 to d + 1

◮ So far: Depth-(d − 1) formulas collapse with about the right probability
◮ We were supposed to show that depth-(d + 1) formulas simplify w.r.t. ∆ w.h.p.


SLIDE 88

Idea of proof that ∆ → polylog n

[Diagram: a formula of total depth d + 1; the depth-(d − 1) subformulas collapsing to constants corresponds to the depth-d gates above them having few surviving children]


SLIDE 91

∆ = polylog n

◮ To recap, after t = O((log log n)^2) restrictions, ∆ = polylog n
◮ Total cost so far: O(log n) truly random bits


SLIDE 96

Final step: MRT PRG

◮ Theorem (Meka, Reingold, Tal ’19): There is an explicit PRG with seed length O(log(n/ε)) that fools functions of the form f = ∏_{i=1}^m f_i, where f_1, …, f_m are on disjoint variables and each f_i can be computed by an ROBP with width O(1) and length polylog n
◮ (Proof uses the GMRTV approach, building on [GY14, CHRT18, Vio09])
◮ In our case, f = ∧_{i=1}^m f_i = Σ_{S⊆[m]} ((−1)^{|S|}/2^m) ∏_{i∈S} (−1)^{f_i}, so it suffices to fool each product ∏_{i∈S} (−1)^{f_i}
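The closing identity — rewriting the AND as a signed sum of ±1-valued products — can be checked numerically; this is a brute-force sanity check of the formula, not part of the construction:

```python
from itertools import combinations, product

def and_via_fourier(bits):
    """Evaluate AND(f_1, ..., f_m) via the expansion
    AND = sum over S subset of [m] of (-1)^|S| / 2^m * prod_{i in S} (-1)^{f_i}."""
    m = len(bits)
    total = 0
    for r in range(m + 1):
        for s in combinations(range(m), r):
            term = 1
            for i in s:
                term *= (-1) ** bits[i]
            total += ((-1) ** r) * term
    return total / 2 ** m   # exact: division by a power of two

# The expansion agrees with AND on every input
for bits in product((0, 1), repeat=3):
    assert and_via_fourier(bits) == float(all(bits))
print("identity verified for m = 3")
```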

SLIDE 98

Directions for further research

[Diagram: the same landscape of models as in Slide 24]


SLIDE 100

Read-once AC^0[⊕]

[Example: a read-once formula with ⊕, ∨, and ∧ gates over the literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13]


SLIDE 105

Fooling read-once AC^0[⊕]

◮ Natural next step toward derandomizing BPL
◮ Best prior PRG: seed length O(log^2 n) [FK ’18]
◮ Theorem: Our PRG fools read-once AC^0[⊕] with seed length Õ(t + log(n/ε)), where t = # parity gates
◮ Open: Fool read-once AC^0[⊕] with seed length O(log(n/ε))?
◮ Thanks! Questions?