SLIDE 1 Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas
Dean Doron1
UT Austin → Stanford
Pooya Hatami2
UT Austin → Ohio State
William M. Hoza3
UT Austin
BIRS Workshop 19w5088 July 8, 2019
1Supported by NSF Grant CCF-1705028 2Supported by a Simons Investigator Award (#409864, David Zuckerman) 3Supported by the NSF GRFP under Grant DGE-1610403 and by a Harrington Fellowship from UT Austin
SLIDES 2-3
Randomness as a scarce resource
◮ Randomization is a popular algorithmic technique
◮ But randomness is costly
◮ An algorithm that uses fewer random bits is better
SLIDES 4-7
Pseudorandom generators (PRGs)
[Diagram: Gen stretches an s-bit seed to an n-bit output]
◮ Gen "fools" f : {0, 1}^n → {0, 1} if E[f(Gen(U))] = E[f(U)] ± ε
◮ Goal: Design a PRG that fools an interesting class of functions f
◮ Minimize the seed length s = s(n, ε)
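The fooling condition can be made concrete with a toy experiment (everything named here is illustrative, not from the talk): take a deliberately bad "generator" that pads its seed with zeros, and measure how far it is from fooling the AND of the first and last input bits.

```python
from itertools import product

def gen(seed, n):
    """Toy 'generator': pad the seed with zeros up to length n (illustrative only)."""
    return seed + (0,) * (n - len(seed))

def f(x):
    """Test function: AND of the first and last bits."""
    return x[0] & x[3]

n, s = 4, 2
# E[f(U)] over the uniform distribution on n bits
e_true = sum(f(x) for x in product((0, 1), repeat=n)) / 2**n
# E[f(Gen(U))] over a uniform s-bit seed
e_gen = sum(f(gen(seed, n)) for seed in product((0, 1), repeat=s)) / 2**s
# Gen fools f with error eps iff this quantity is at most eps.
# Here the padding forces the last bit to 0, so the error is 1/4: Gen fails badly.
error = abs(e_gen - e_true)
```

Because the toy generator always outputs 0 in the last coordinate, it is detected by any function that reads that coordinate; a real PRG must survive every test in the class.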
SLIDES 8-10
Read-once formulas
[Diagram: a depth-3 read-once formula: an ∧ gate over ∨ gates over ∧ gates, reading each of the literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13 exactly once]
◮ This work: Fool depth-d read-once formulas for d = O(1)
◮ Read-once version of AC0
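As a concrete data structure (a sketch, not code from the talk), a read-once formula can be stored as a tree whose leaves are literals and whose internal nodes are ∧/∨ gates; each variable index appears in at most one leaf, and evaluation is one recursive pass.

```python
# A node is ('and', children), ('or', children), or ('lit', var_index, negated).
def evaluate(node, x):
    kind = node[0]
    if kind == 'lit':
        _, i, neg = node
        return bool(x[i] ^ neg)          # literal value, possibly negated
    vals = [evaluate(c, x) for c in node[1]]
    return all(vals) if kind == 'and' else any(vals)

# Depth-2 read-once example over x0..x3: (x0 ∨ ¬x1) ∧ (x2 ∨ x3)
formula = ('and', [('or', [('lit', 0, 0), ('lit', 1, 1)]),
                   ('or', [('lit', 2, 0), ('lit', 3, 0)])])
```

The read-once property means each `var_index` occurs in one leaf only, which is what makes the subformulas below any gate depend on disjoint sets of variables.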
SLIDES 11-17
Prior work and main result

Seed length            Model fooled                        Reference
O(n^0.001)             AC0                                 Ajtai, Wigderson '89
O(log^(2d+6) n)        AC0                                 Nisan '91
…                      AC0                                 Trevisan, Xue '13
…                      Read-once AC0                       Chen, Steinke, Vadhan '15
…                      Arbitrary-order width-O(1) ROBPs    Forbes, Kelley '18
…                      Read-once AC0                       This work

◮ Main result: PRG for read-once AC0 with seed length log(n/ε) · O(d log log(n/ε))^(2d+2)
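To see why the new bound is near-optimal, one can compare it numerically against Nisan's log^(2d+6) n bound, suppressing all hidden constants (so this is only indicative): for fixed d, the new seed length is log(n/ε) times a polyloglog factor, versus a large power of log n.

```python
import math

def nisan_seed(n, d):
    """log^(2d+6) n, constants suppressed."""
    return math.log2(n) ** (2 * d + 6)

def this_work_seed(n, eps, d):
    """log(n/eps) * (d * log log(n/eps))^(2d+2), constants suppressed."""
    L = math.log2(n / eps)
    return L * (d * math.log2(L)) ** (2 * d + 2)

# Sample parameters: n = 2^60, eps = 1%, depth 3.
n, eps, d = 2**60, 0.01, 3
# For fixed d the new bound grows almost linearly in log n,
# while Nisan's bound is log^12 n here.
```

With these parameters the old bound is around 10^21 while the new one is under 10^12, a gap that only widens as n grows.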
SLIDES 18-23
Motivation: L vs. BPL
◮ Big open problem: Prove L = BPL
◮ "Randomness is not necessary for space-efficient computation"
◮ Main approach: Design an optimal PRG for "ROBPs" (read-once branching programs)
◮ Bad news: Seed length O(log^2 n) has not been improved for decades [Nisan '92]
◮ Good news: Can achieve seed length O(log n) for increasingly powerful restricted models
◮ Read-once AC0 is one of the frontiers of this progress
SLIDE 24
Seed length O(log n)
[Diagram: a hierarchy of models. Seed length O(log n) is achieved for: Parities and Conjunctions [NN93], Read-once CNFs [DETT10], 2-ROBPs and arbitrary-order 2-ROBPs [SZ95], 3-ROBPs and arbitrary-order 3-ROBPs [MRT19], Regular O(1)-ROBPs [BRRY14], Arbitrary-order permutation O(1)-ROBPs [CHHL18], Read-once polynomials [LV17], and Read-once AC0 (this work). Above the frontier: read-once formulas, read-once AC0[⊕], O(1)-ROBPs, arbitrary-order O(1)-ROBPs, poly(n)-ROBPs, arbitrary-order poly(n)-ROBPs]
SLIDES 25-26
Starting point: Forbes-Kelley PRG
[The prior-work table from slides 11-17 again, now highlighting the Forbes, Kelley '18 row: arbitrary-order width-O(1) ROBPs]
SLIDES 27-29
PRGs via pseudorandom restrictions [AW89]
◮ Start by sampling a pseudorandom restriction X ∈ {0, 1, ⋆}^n
[Diagram: the example formula before and after the restriction; restricted subtrees collapse to constants, leaving the literals x7, ¬x1, x8, x3, x11, x4, x13 alive]
SLIDES 30-31
Restriction notation
◮ Define Res: {0, 1}^n × {0, 1}^n → {0, 1, ⋆}^n by
    Res(y, z)_i = ⋆ if y_i = 1,  and  Res(y, z)_i = z_i if y_i = 0
Example:
    y         = 0 1 1 0 0 1 0 0
    z         = 0 0 1 1 1 1 0 1
    Res(y, z) = 0 ⋆ ⋆ 1 1 ⋆ 0 1
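The Res operation is a one-liner in code (a sketch; `'*'` stands in for ⋆), and it reproduces the example above.

```python
def res(y, z):
    """Res(y, z)_i = '*' if y_i = 1, else z_i, as in the slide's definition."""
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

# The example from the slide:
y = [0, 1, 1, 0, 0, 1, 0, 0]
z = [0, 0, 1, 1, 1, 1, 0, 1]
# res(y, z) yields 0 * * 1 1 * 0 1
```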
SLIDES 32-37
Forbes-Kelley pseudorandom restriction
◮ A distribution D over {0, 1}^n is ε-biased if it fools parities: for every S ≠ ∅, |Pr[⊕_{i∈S} D_i = 1] − 1/2| ≤ ε
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length O(log n))
◮ Theorem [Forbes, Kelley '18]: For any O(1)-width ROBP f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
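For small n, the bias of an explicitly given distribution can be measured by brute force over all nonempty parities (a sketch; the loop is exponential in n, so this is illustrative only).

```python
from itertools import product

def bias(support):
    """Max over nonempty S of |Pr[XOR_{i in S} D_i = 1] - 1/2|, where D is
    uniform over the given multiset of n-bit strings."""
    n = len(support[0])
    worst = 0.0
    for mask in range(1, 2**n):                       # every nonempty S
        S = [i for i in range(n) if mask >> i & 1]
        p = sum(1 for x in support if sum(x[i] for i in S) % 2 == 1) / len(support)
        worst = max(worst, abs(p - 0.5))
    return worst

# The uniform distribution over all n-bit strings is 0-biased:
full = list(product((0, 1), repeat=3))
# A single point mass is maximally biased (bias 1/2):
point = [(0, 0, 0)]
```

Explicit ε-biased distributions achieve support size poly(n/ε), which is what gives the O(log n) seed length quoted above.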
SLIDES 38-45
Forbes-Kelley pseudorandom generator
◮ So [FK18] can assign values to half the inputs using O(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^{∘t} denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^{∘t} ∈ {0, 1}^n (no ⋆)
◮ Expectation is preserved at every step, so the total error is low: E[f(X^{∘t})] ≈ E_U[f(U)]
◮ Total cost: O(log^2 n) truly random bits
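The composition X^{∘t} can be sketched as repeatedly filling in the surviving ⋆ positions. With uniformly random restrictions (used here in place of small-bias ones, for illustration), each coordinate survives t rounds with probability 2^(-t), so t = O(log n) rounds fix everything with high probability.

```python
import random

def compose_restrictions(n, t, rng):
    """Apply t independent uniform restrictions Res(y, z); return the final
    assignment, with '*' for coordinates still alive after t rounds."""
    x = ['*'] * n
    for _ in range(t):
        for i in range(n):
            if x[i] == '*' and rng.random() < 0.5:  # y_i = 0: fix this coordinate
                x[i] = rng.randrange(2)             # to a fresh bit z_i
    return x

rng = random.Random(0)
n = 64
t = 2 * n.bit_length()          # t = O(log n) rounds
x = compose_restrictions(n, t, rng)
# Each coordinate is still '*' with probability 2^-t <= 1/n^2, so with high
# probability no stars remain.
```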
SLIDES 46-55
Improved PRGs via simplification [GMRTV12]
◮ Step 1: Apply a pseudorandom restriction X ∈ {0, 1, ⋆}^n
◮ Design X to preserve expectation
◮ Design X so that X^{∘t} also simplifies the formula, for t ≪ log n
[Diagram sequence: under successive restrictions, subtrees of the example formula collapse to constants, single-child gates merge with their parents, and the formula shrinks]
◮ Step 2: Fool the restricted formula, taking advantage of its simplicity
SLIDES 56-60
Our pseudorandom restriction
◮ Assume by recursion: a PRG for depth d with seed length …
◮ Let's sample X ∈ {0, 1, ⋆}^n for depth d + 1:
  1. Recursively sample G_d, G′_d ∈ {0, 1}^n
  2. Sample D, D′ ∈ {0, 1}^n with small bias
  3. X = Res(G_d ⊕ D, G′_d ⊕ D′)
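The three sampling steps have a simple shape in code. This is only a sketch of that shape: both the recursive depth-d generator and the small-bias strings are stubbed with truly random bits (constructing them is the actual content of the talk), and all names are illustrative.

```python
import random

rng = random.Random(1)

def recursive_prg(n):
    """Placeholder for the depth-d PRG output G_d (stub: truly random bits)."""
    return [rng.randrange(2) for _ in range(n)]

def small_bias(n):
    """Placeholder for a small-bias string D (stub: truly random bits)."""
    return [rng.randrange(2) for _ in range(n)]

def sample_restriction(n):
    g1, g2 = recursive_prg(n), recursive_prg(n)   # step 1
    d1, d2 = small_bias(n), small_bias(n)         # step 2
    # step 3: X = Res(G_d xor D, G'_d xor D')
    return ['*' if a ^ b == 1 else c ^ d
            for a, b, c, d in zip(g1, d1, g2, d2)]

x = sample_restriction(16)
```

XORing the recursive PRG output with an independent small-bias string keeps the small-bias guarantee while injecting the structure needed for the simplification argument below.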
SLIDES 61-63
Preserving expectation
◮ Claim: For any depth-(d + 1) read-once AC0 formula f, E_{X,U}[f|_X(U)] ≈ E_U[f(U)]
◮ Proof: Read-once AC0 can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′)
SLIDES 64-66
Simplification
◮ ∆(f) := the maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^{∘t}, ∆(f|_{X^{∘t}}) ≤ polylog n, where t = O((log log n)^2)
◮ (Actually we only prove this statement "up to sandwiching")
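The quantity ∆(f) is a one-pass computation over the formula tree (using the same illustrative tree encoding as earlier: gates carry child lists, leaves are literals).

```python
def max_fanin_below_root(node, is_root=True):
    """Delta(f): the maximum fan-in over all gates other than the root."""
    if node[0] == 'lit':
        return 0
    best = 0 if is_root else len(node[1])       # root's own fan-in is excluded
    for child in node[1]:
        best = max(best, max_fanin_below_root(child, is_root=False))
    return best

# Root has fan-in 3, but Delta only counts the non-root gates (fan-ins 2 and 4):
f = ('and', [('or', [('lit', 0, 0), ('lit', 1, 0)]),
             ('or', [('lit', 2, 0), ('lit', 3, 0), ('lit', 4, 0), ('lit', 5, 1)]),
             ('lit', 6, 0)])
```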
SLIDES 67-69
∆ → polylog n: Proof outline
◮ Chen, Steinke, Vadhan '15: Read-once AC0 simplifies under truly random restrictions
◮ Testing for simplification is another read-once AC0 problem
◮ So we can derandomize the [CSV15] analysis using X = Res(G_d ⊕ D, G′_d ⊕ D′)
SLIDES 70-73
Collapse under truly random restrictions
◮ Assume f is a biased read-once AC0 formula: E[f] ≤ ρ or E[f] ≥ 1 − ρ
◮ Let R = Res(U, U′) (truly random restriction)
◮ Theorem [CSV '15]: Pr[f|_{R^{∘s}} is nonconstant] ≤ ρ + 1/n^100, where s = O(log log n)
◮ (Proof uses Fourier analysis)
SLIDES 74-75
NAND formulas
[Diagram: the example ∧/∨ formula rewritten as an equivalent formula of NAND gates over the same literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13]
SLIDES 76-79
Collapse under truly random restrictions (continued)
◮ Corollary: If E[f] ≥ 1 − ρ, then Pr[f|_{R^{∘s}} ≢ 1] ≤ 2ρ + 1/n^100
◮ Let F be a set of formulas on disjoint variable sets
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ Corollary: Since the formulas are on disjoint variables, the events are independent, so Pr[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^100)^|F|
SLIDES 80-82
Derandomizing collapse
◮ Let F be a set of depth-(d − 1) formulas on disjoint variables
◮ Computational problem: Given y, z ∈ {0, 1}^n, decide whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
◮ Lemma: This can be decided in depth-d read-once AC0
SLIDES 83-85
Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1
◮ NAND(a, b, c) ≡ 1 ⟺ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
◮ NAND(a′, b′, c′) ≡ 0 ⟺ (a′ ≡ 1) ∧ (b′ ≡ 1) ∧ (c′ ≡ 1)
SLIDES 86-87
Deciding whether ∃f ∈ F, f|_{Res(y,z)} ≡ 1 (continued)
◮ At the bottom, we get one additional layer:
    (Res(y, z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = b)
    (¬Res(y, z)_i ≡ b) ⟺ (y_i = 0 ∧ z_i = 1 − b)
◮ At the top: "∃f ∈ F" is one more ∨ gate (merge it with the top ∨ gates)
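For a read-once NAND formula the recurrences above translate directly into a recursive decision procedure: a NAND gate is ≡ 1 iff some child is ≡ 0, and ≡ 0 iff every child is ≡ 1. This is a sketch using the illustrative tree encoding from earlier; because the children are on disjoint variables, the recurrence is exact.

```python
def is_const(node, y, z, b):
    """Decide whether node|Res(y,z) is identically b (b in {0,1}), for a
    read-once NAND formula. Leaves: ('lit', i, neg); gates: ('nand', children)."""
    if node[0] == 'lit':
        _, i, neg = node
        if y[i] == 1:                    # coordinate left free: not constant
            return False
        return (z[i] ^ neg) == b         # fixed literal value equals b
    children = node[1]
    if b == 1:                           # NAND ≡ 1  <=>  some child ≡ 0
        return any(is_const(c, y, z, 0) for c in children)
    return all(is_const(c, y, z, 1) for c in children)  # NAND ≡ 0 <=> all ≡ 1

# NAND(x0, ¬x1): fix x1 := 1 (so ¬x1 ≡ 0) and leave x0 free.
f = ('nand', [('lit', 0, 0), ('lit', 1, 1)])
y = [1, 0]   # x0 free, x1 fixed
z = [0, 1]   # x1 := 1
# The fixed child ¬x1 is identically 0, so the whole NAND is identically 1.
```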
SLIDES 88-92
Collapse under pseudorandom restrictions
◮ Let F be a set of depth-(d − 1) formulas on disjoint variables
◮ Assume ∀f ∈ F, E[f] ≥ 1 − ρ
◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
◮ G_d, G′_d fool depth d, so Pr_X[∀f ∈ F, f|_X ≢ 1] ≈ Pr_R[∀f ∈ F, f|_R ≢ 1]
◮ Hybrid argument: Pr_{X^{∘s}}[∀f ∈ F, f|_{X^{∘s}} ≢ 1] ≈ Pr_{R^{∘s}}[∀f ∈ F, f|_{R^{∘s}} ≢ 1] ≤ (2ρ + 1/n^100)^|F|
SLIDES 93-96
∆ → √∆ · polylog n
◮ So far: X^{∘s} causes any biased depth-(d − 1) formula to collapse
◮ What about unbiased depth-(d + 1) formulas?
◮ Assume that for every gate g in f, E[¬g] ≥ 1/poly(n)
◮ Lemma: With high probability over X^{∘s}, ∆(f|_{X^{∘s}}) ≤ √∆ · polylog n
SLIDES 97-99
Illustration: ∆ → √∆ · polylog n
[Diagram: a NAND formula of total depth d + 1; the depth-(d − 1) subtrees are likely to collapse if biased, so each depth-d gate is likely to have few remaining children]
SLIDES 100-103
Proof that ∆ → √∆ · polylog n
◮ (This analysis follows [GMRTV12, CSV15])
◮ Let g be a gate, g ≠ root
◮ Partition the children h of g into O(log n) buckets based on E[h]
◮ Consider one bucket B = {h : E[h] ≈ 1 − ρ}
SLIDE 104
Illustration: ∆ → √∆ · polylog n (continued)
[Diagram: a gate g of total depth d + 1 with children h, h′; each child is likely to collapse if biased, so g is likely to have few remaining children]
SLIDES 105-111
Proof that ∆ → √∆ · polylog n (continued)
◮ B = {h : E[h] ≈ 1 − ρ}
◮ How big can B be?
◮ 1/poly(n) ≤ E[¬g] ≤ ∏_{h∈B} E[h] ≈ (1 − ρ)^|B|   (¬g is the AND of g's children, which are on disjoint variables)
◮ So |B| ≤ O((1/ρ) log n)
◮ We also trivially have |B| ≤ ∆
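The bucket-size bound can be sanity-checked numerically: if (1 − ρ)^|B| ≥ 1/n^c, then |B| ≤ c · ln(n)/(−ln(1 − ρ)) ≤ c · ln(n)/ρ, matching the O((1/ρ) log n) bound (a sketch with illustrative parameters).

```python
import math

def max_bucket_size(rho, n, c=1):
    """Largest B with (1 - rho)^B >= n^-c, i.e. B = floor(c*ln(n) / -ln(1-rho))."""
    return math.floor(c * math.log(n) / -math.log(1 - rho))

n, rho = 10**6, 0.1
B = max_bucket_size(rho, n)
# Since -ln(1-rho) >= rho, B is at most c*ln(n)/rho = O((1/rho) log n).
```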
SLIDES 112-121
Proof that ∆ → √∆ · polylog n (continued)
◮ Let L = #{h ∈ B : h|_{X^{∘s}} ≢ 1} (number of living children)
◮ Let M = Θ(√∆ log n), k = Θ((log n)/log ∆)
◮ Pr[L ≥ M] = Pr[(L choose k) ≥ (M choose k)]
      ≤ 1/(M choose k) · E[(L choose k)]
      ≤ 1/(M choose k) · (|B| choose k) · ((2ρ)^k + 1/n^200)
      ≤ (|B|e/M)^k · ((2ρ)^k + 1/n^200)
      ≤ (1/√∆)^k + (√∆)^k/n^200
      ≤ 2/n^100
SLIDES 122-125
Finishing the proof of the main lemma
◮ After s = O(log log n) restrictions, ∆ → √∆ · polylog n
◮ Therefore, after t = O((log log n)^2) restrictions, ∆ ≤ polylog n
◮ Total cost so far: O(log n) truly random bits
[Diagram: the restricted formula, now with fan-in polylog n]
SLIDES 126-130
Final step: MRT PRG
◮ Theorem (Meka, Reingold, Tal '19): There is an explicit PRG with seed length O(log(n/ε)) that fools functions of the form f = ∧_{i=1}^m f_i, where f_1, ..., f_m are on disjoint variables and each f_i can be computed by an ROBP with width O(1) and length polylog n
◮ (Proof uses the GMRTV approach, building on [GY14, CHRT18, Vio09])
◮ In our case, f = ∧_{i=1}^m f_i = Σ_{S⊆[m]} ((−1)^|S| / 2^m) · ∏_{i∈S} (−1)^{f_i}
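The expansion of an AND into signed products of ±1 characters follows from f_i = (1 − (−1)^{f_i})/2, and can be verified numerically for small m (a sketch; names are illustrative).

```python
from itertools import combinations, product

def and_expansion(bits):
    """Evaluate sum over S of (-1)^|S| / 2^m * prod_{i in S} (-1)^{bits[i]}."""
    m = len(bits)
    total = 0.0
    for r in range(m + 1):
        for S in combinations(range(m), r):
            term = (-1) ** r
            for i in S:
                term *= (-1) ** bits[i]
            total += term
    return total / 2**m

# The expansion agrees with the plain AND on every 3-bit input:
ok = all(abs(and_expansion(b) - (1 if all(b) else 0)) < 1e-9
         for b in product((0, 1), repeat=3))
```

Expanding f this way reduces fooling the AND to fooling each product of disjoint short ROBPs, which is exactly what the MRT generator handles.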
SLIDES 131-133
Directions for further research
[Diagram: the landscape of models from slide 24 again, highlighting the remaining frontiers: read-once formulas, read-once AC0[⊕], O(1)-ROBPs, and beyond]
SLIDE 134
Read-once AC0[⊕]
[Diagram: a read-once formula with ∧, ∨, and ⊕ gates over the literals x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13]
SLIDES 135-139
Fooling read-once AC0[⊕]
◮ Natural next step toward derandomizing BPL
◮ Best prior PRG: seed length O(log^2 n) [FK '18]
◮ Theorem: Our PRG fools read-once AC0[⊕] with seed length …, where t = # parity gates
◮ Open: Fool read-once AC0[⊕] with seed length O(log(n/ε))?
◮ Thanks! Questions?