SLIDE 1
Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas
Dean Doron (UT Austin → Stanford)
Pooya Hatami (UT Austin → Ohio State)
William M. Hoza (UT Austin)
July 19, 2019, CCC 2019
Funding: D. Doron supported by NSF Grant CCF-1705028. P. Hatami supported by a Simons Investigator Award (#409864, David Zuckerman). W. M. Hoza supported by the NSF GRFP under Grant DGE-1610403 and by a Harrington Fellowship from UT Austin.
SLIDE 3
Randomness as a scarce resource
◮ Randomization is a popular algorithmic technique
◮ But randomness is costly
◮ An algorithm that uses fewer random bits is better
SLIDE 7
Pseudorandom generators (PRGs)
[Figure: a generator Gen stretching s seed bits into n output bits]
◮ Gen "fools" f : {0,1}^n → {0,1} if E[f(Gen(U))] = E[f(U)] ± ε
◮ Goal: Design a PRG that fools an interesting class of functions f
◮ Minimize the seed length s = s(n, ε)
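As a concrete illustration of the fooling definition (a toy example, not from the talk), for tiny s and n one can compute the error |E[f(Gen(U_s))] − E[f(U_n)]| by exhaustive enumeration. The generator `gen` below is a hypothetical example chosen so that its first three output bits are pairwise independent:

```python
from itertools import product

def prg_error(gen, f, s, n):
    """Brute-force |E[f(Gen(U_s))] - E[f(U_n)]| for tiny s and n."""
    e_gen = sum(f(gen(seed)) for seed in product((0, 1), repeat=s)) / 2**s
    e_uni = sum(f(x) for x in product((0, 1), repeat=n)) / 2**n
    return abs(e_gen - e_uni)

# Toy generator: stretch a 2-bit seed to 4 bits.
# The first three output bits are pairwise independent.
def gen(seed):
    s0, s1 = seed
    return (s0, s1, s0 ^ s1, s0 & s1)

# A function of two of the pairwise-independent bits: fooled exactly.
print(prg_error(gen, lambda x: x[0] & x[2], s=2, n=4))  # 0.0
# Parity of the first three bits: NOT fooled (error 1/2).
print(prg_error(gen, lambda x: x[0] ^ x[1] ^ x[2], s=2, n=4))  # 0.5
```

The second call shows why "fools a class" matters: the same generator is perfect for one test function and maximally bad for another.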
SLIDE 10
Read-once formulas
[Figure: a depth-3 read-once formula with an ∧ root, a layer of ∨ gates, a layer of ∧ gates, and leaves x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13, each variable read once]
◮ This work: Fool depth-d read-once formulas for d = O(1)
◮ Read-once version of AC0
SLIDE 17
Prior work and main result

Seed length      | Model fooled                     | Reference
O(n^0.001)       | AC0                              | Ajtai, Wigderson '89
O(log^{2d+6} n)  | AC0                              | Nisan '91
                 | AC0                              | Trevisan, Xue '13
                 | Read-once AC0                    | Chen, Steinke, Vadhan '15
                 | Arbitrary-order width-O(1) ROBPs | Forbes, Kelley '18
                 | Read-once AC0                    | This work

◮ Main result: PRG for read-once AC0 with seed length log(n/ε) · O(d log log(n/ε))^{2d+2}
SLIDE 23
Motivation: L vs. BPL
◮ Big open problem: Prove L = BPL
◮ "Randomness is not necessary for space-efficient computation"
◮ Main approach: Design an optimal PRG for "ROBPs" (read-once branching programs)
◮ Bad news: Seed length O(log^2 n) has not been improved in decades [Nisan '92]
◮ Good news: Can achieve seed length O(log n) for increasingly powerful restricted models
◮ Read-once AC0 is one of the frontiers of this progress
SLIDE 24
Seed length O(log n)
[Figure: landscape of read-once models. Models shown with a reference: parities and conjunctions [NN93]; 2-ROBPs and arbitrary-order 2-ROBPs [SZ95]; 3-ROBPs and arbitrary-order 3-ROBPs [MRT19]; read-once CNFs [DETT10]; regular O(1)-ROBPs [BRRY14]; arbitrary-order permutation O(1)-ROBPs [CHHL18]; read-once polynomials [LV17]; read-once AC0 (this work). Models shown without a reference: O(1)-ROBPs, arbitrary-order O(1)-ROBPs, poly(n)-ROBPs, arbitrary-order poly(n)-ROBPs, read-once formulas, read-once AC0[⊕]]
SLIDE 26
Starting point: Forbes-Kelley PRG

Seed length      | Model fooled                     | Reference
O(n^0.001)       | AC0                              | Ajtai, Wigderson '89
O(log^{2d+6} n)  | AC0                              | Nisan '91
                 | AC0                              | Trevisan, Xue '13
                 | Read-once AC0                    | Chen, Steinke, Vadhan '15
                 | Arbitrary-order width-O(1) ROBPs | Forbes, Kelley '18
                 | Read-once AC0                    | This work
SLIDE 29
PRGs via pseudorandom restrictions [AW89]
◮ Start by sampling a pseudorandom restriction X ∈ {0, 1, ⋆}^n
[Figure: the example formula after the restriction; leaves assigned constants disappear, and the surviving leaves are x7, 1, ¬x1, x8, x3, 1, x11, x4, x13]
SLIDE 31
Restriction notation
◮ Define Res : {0,1}^n × {0,1}^n → {0,1,⋆}^n by
    Res(y, z)_i = ⋆ if y_i = 1, and Res(y, z)_i = z_i if y_i = 0
◮ Example:
    y         = 0 1 1 0 0 1 0 0
    z         = 0 0 1 1 1 1 0 1
    Res(y, z) = 0 ⋆ ⋆ 1 1 ⋆ 0 1
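The Res map is mechanical enough to state directly in code; a minimal sketch (using `'*'` for ⋆) that reproduces the example above:

```python
def res(y, z):
    """Res(y, z)_i is '*' (left free) where y_i = 1, else the fixed bit z_i."""
    return ['*' if yi == 1 else zi for yi, zi in zip(y, z)]

y = [0, 1, 1, 0, 0, 1, 0, 0]
z = [0, 0, 1, 1, 1, 1, 0, 1]
print(res(y, z))  # [0, '*', '*', 1, 1, '*', 0, 1]
```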
SLIDE 37
Forbes-Kelley pseudorandom restriction
◮ A distribution D over {0,1}^n is ε-biased if it fools parities: for every S ≠ ∅,
    | E[ (−1)^{⊕_{i∈S} D_i} ] | ≤ ε,   equivalently   | Pr[⊕_{i∈S} D_i = 1] − 1/2 | ≤ ε/2
◮ Let D, D′ be independent small-bias strings
◮ Let X = Res(D, D′) (seed length O(log n))
◮ Theorem [Forbes, Kelley '18]: For any O(1)-width ROBP f,
    E_{X,U}[ f|_X(U) ] ≈ E_U[ f(U) ]
◮ In words, X preserves the expectation of f
◮ (Proof involves clever Fourier analysis, building on [RSV13, HLV18, CHRT18])
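The ε-biased condition can be checked by brute force on toy distributions. This sketch is only an illustration of the definition, not one of the efficient small-bias constructions used in the actual PRG:

```python
from itertools import product, combinations

def max_bias(support):
    """Max over nonempty S of |E_D[(-1)^(XOR of D_i, i in S)]|,
    for D uniform over the given support (brute force; tiny n only)."""
    n = len(support[0])
    worst = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            e = sum((-1) ** sum(x[i] for i in S) for x in support) / len(support)
            worst = max(worst, abs(e))
    return worst

# Uniform over {0,1}^3 is 0-biased.
print(max_bias(list(product((0, 1), repeat=3))))  # 0.0
# Uniform over the even-parity strings fools every proper parity
# but fails maximally on S = {0, 1, 2}.
print(max_bias([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]))  # 1.0
```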
SLIDE 45
Forbes-Kelley pseudorandom generator
◮ So [FK18] can assign values to half the inputs using O(log n) truly random bits
◮ After restricting, f|_X is another ROBP
◮ So we can apply another pseudorandom restriction
◮ Let X^{◦t} denote the composition of t independent copies of X
◮ Let t = O(log n)
◮ With high probability, X^{◦t} ∈ {0,1}^n (no ⋆)
◮ Expectation preserved at every step, so total error is low:
    E_{X^{◦t}}[ f(X^{◦t}) ] ≈ E_U[ f(U) ]
◮ Total cost: O(log^2 n) truly random bits
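The mechanics of composing restrictions can be sketched as follows. This is only a model of the X^{◦t} bookkeeping: truly random bits stand in for the small-bias samples, and `sample_pair` is a hypothetical callback, not part of the construction in the talk:

```python
import random

def res(y, z):
    return ['*' if yi else zi for yi, zi in zip(y, z)]

def compose_restrictions(n, t, sample_pair):
    """Apply t restrictions in sequence: each round fixes the coordinates
    that are still free ('*') wherever the new restriction assigns a bit."""
    x = ['*'] * n
    for _ in range(t):
        y, z = sample_pair(n)
        r = res(y, z)
        for i, v in enumerate(x):
            if v == '*':
                x[i] = r[i]
    return x

# Stand-in for the small-bias samples: truly random bits.
uniform_pair = lambda n: ([random.randint(0, 1) for _ in range(n)],
                          [random.randint(0, 1) for _ in range(n)])

random.seed(0)
x = compose_restrictions(n=16, t=3, sample_pair=uniform_pair)
print(x)  # each coordinate is still '*' with probability 2^-3
```

Each round leaves a coordinate free with probability about 1/2, so after t = O(log n) rounds no ⋆ survives with high probability, matching the slide.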
SLIDE 55
Improved PRGs via simplification [GMRTV12]
◮ Step 1: Apply pseudorandom restriction X ∈ {0,1,⋆}^n
  ◮ Design X to preserve expectation
  ◮ Design X so that X^{◦t} also simplifies the formula, for t ≪ log n
[Figure: the example formula collapsed to a single ∧ gate over the surviving leaves x7, ¬x1, x8, x3, x11, x4]
◮ Step 2: Fool the restricted formula, taking advantage of its simplicity
SLIDE 60
Our pseudorandom restriction
◮ Assume by recursion: PRG for depth d with seed length ...
◮ Let's sample X ∈ {0,1,⋆}^n for depth d + 1:
  1. Recursively sample G_d, G′_d ∈ {0,1}^n
  2. Sample D, D′ ∈ {0,1}^n with small bias
  3. Let X = Res(G_d ⊕ D, G′_d ⊕ D′)
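The three steps above can be sketched directly. The samplers are stand-ins (truly random bits here); the real construction plugs in the recursive depth-d PRG and an ε-biased generator:

```python
import random

def res(y, z):
    return ['*' if yi else zi for yi, zi in zip(y, z)]

def xor(a, b):
    return [ai ^ bi for ai, bi in zip(a, b)]

def our_restriction(sample_Gd, sample_smallbias, n):
    """X = Res(G_d XOR D, G'_d XOR D'): XOR the recursive PRG outputs
    with independent small-bias strings, then form a restriction."""
    Gd, Gd2 = sample_Gd(n), sample_Gd(n)
    D, D2 = sample_smallbias(n), sample_smallbias(n)
    return res(xor(Gd, D), xor(Gd2, D2))

# Stand-ins for both components.
rand_bits = lambda n: [random.randint(0, 1) for _ in range(n)]
random.seed(1)
x = our_restriction(rand_bits, rand_bits, 8)
print(x)
```

A design point visible even in the sketch: XORing with D keeps the small-bias guarantee (so the Forbes-Kelley analysis applies), while the G_d component is what makes the restriction look pseudorandom to depth-d tests.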
SLIDE 63
Preserving expectation
◮ Claim: For any depth-(d + 1) read-once AC0 formula f,
    E_{X,U}[ f|_X(U) ] ≈ E_U[ f(U) ]
◮ Proof: Read-once AC0 can be simulated by constant-width ROBPs [CSV15]
◮ So we can simply apply the Forbes-Kelley result to X = Res(G_d ⊕ D, G′_d ⊕ D′)
SLIDE 66
Simplification
◮ ∆(f) := maximum fan-in of any gate other than the root
◮ Main Lemma: With high probability over X^{◦t}, ∆(f|_{X^{◦t}}) ≤ polylog n, where t = O((log log n)^2)
◮ (Actually we only prove this statement "up to sandwiching")
SLIDE 71
Simplification under truly random restrictions
◮ Let f be a read-once AC0 formula
◮ Let R = Res(U, U′) (truly random restriction)
◮ Chen, Steinke, Vadhan '15 ⇒ w.h.p. over R^{◦t}, ∆(f|_{R^{◦t}}) ≤ polylog n
◮ (In fact the simplification they show is more severe)
◮ (Again, these statements are true "up to sandwiching"; the proof uses Fourier analysis)
SLIDE 75
Derandomizing simplification
◮ Let f be a depth-(d − 1) read-once AC0 formula
◮ Let b ∈ {0, 1}
◮ Computational problem: Given y, z ∈ {0,1}^n, decide whether f|_{Res(y,z)} ≡ b
◮ Lemma: This can be decided in depth-d read-once AC0
SLIDE 78
Deciding whether f|_{Res(y,z)} ≡ b
◮ For an ∧ gate with children a, b, c (valid because the children are variable-disjoint):
    (a ∧ b ∧ c ≡ 0) ⇐⇒ (a ≡ 0) ∨ (b ≡ 0) ∨ (c ≡ 0)
◮ For an ∨ gate with children a′, b′, c′:
    (a′ ∨ b′ ∨ c′ ≡ 0) ⇐⇒ (a′ ≡ 0) ∧ (b′ ≡ 0) ∧ (c′ ≡ 0)
SLIDE 79
Deciding whether f|_{Res(y,z)} ≡ b (continued)
◮ At the bottom, we get one additional layer:
    (Res(y, z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = b)
    (¬Res(y, z)_i ≡ b) ⇐⇒ (y_i = 0 ∧ z_i = 1 − b)
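The recursive rules on these two slides translate directly into code. A sketch, using a hypothetical nested-tuple encoding of read-once formulas (`('and', [...])`, `('or', [...])`, `('var', i)`, `('not', i)`):

```python
def is_const(f, y, z, b):
    """Decide whether the read-once formula f, restricted by Res(y, z),
    is identically b. Mirrors the slides' rules: an AND gate is ≡ 0 iff
    some child is ≡ 0 (valid since children are variable-disjoint),
    and is ≡ 1 iff every child is ≡ 1; dually for OR gates."""
    op = f[0]
    if op == 'var':                      # leaf x_i
        return y[f[1]] == 0 and z[f[1]] == b
    if op == 'not':                      # leaf ¬x_i
        return y[f[1]] == 0 and z[f[1]] == 1 - b
    kids = f[1]
    if op == 'and':
        return (any(is_const(c, y, z, 0) for c in kids) if b == 0
                else all(is_const(c, y, z, 1) for c in kids))
    if op == 'or':
        return (all(is_const(c, y, z, 0) for c in kids) if b == 0
                else any(is_const(c, y, z, 1) for c in kids))

# (x0 AND x1): fix x0 = 0 (y0 = 0, z0 = 0), leave x1 free (y1 = 1).
f = ('and', [('var', 0), ('var', 1)])
print(is_const(f, y=[0, 1], z=[0, 0], b=0))  # True: collapses to 0
print(is_const(f, y=[1, 1], z=[0, 0], b=0))  # False: both inputs free
```

Note how the AND/OR cases swap between `any` and `all`: that swap is exactly why the decision procedure is itself a read-once formula of depth one more than f.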
SLIDE 83
Collapse under pseudorandom restrictions
◮ Let f be a depth-(d − 1) read-once AC0 formula
◮ Let b ∈ {0, 1}
◮ X = Res(G_d ⊕ D, G′_d ⊕ D′)
◮ G_d, G′_d fool depth d, so
    Pr_X[ f|_X ≡ b ] ≈ Pr_R[ f|_R ≡ b ]
◮ Hybrid argument:
    Pr_{X^{◦t}}[ f|_{X^{◦t}} ≡ b ] ≈ Pr_{R^{◦t}}[ f|_{R^{◦t}} ≡ b ]
SLIDE 85
Bridging the gap from d − 1 to d + 1
◮ So far: Depth-(d − 1) formulas collapse with about the right probability
◮ We were supposed to show that depth-(d + 1) formulas simplify w.r.t. ∆ w.h.p.
SLIDE 88
Idea of proof that ∆ → polylog n
[Figure: a formula of total depth d + 1 with an extra bottom layer of ∨ gates; those bottom-layer gates collapsing corresponds to the gates one level up having few children]
SLIDE 91
∆ = polylog n
◮ To recap, after t = O((log log n)^2) restrictions, ∆ = polylog n
◮ Total cost so far: O(log n) truly random bits
[Figure: the restricted formula, an ∧ root over low-fan-in subformulas]
SLIDE 96
Final step: MRT PRG
◮ Theorem (Meka, Reingold, Tal '19): There is an explicit PRG with seed length O(log(n/ε)) that fools functions of the form f = ⋀_{i=1}^m f_i, where f_1, ..., f_m are on disjoint variables and each f_i can be computed by an ROBP with width O(1) and length polylog n
◮ (Proof uses the GMRTV approach, building on [GY14, CHRT18, Vio09])
◮ In our case,
    f = ⋀_{i=1}^m f_i = ∑_{S ⊆ [m]} ((−1)^{|S|} / 2^m) ∏_{i∈S} (−1)^{f_i}
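The expansion on this slide is just the identity b = (1 − (−1)^b)/2 applied to each f_i and multiplied out. A quick numerical check of the identity (my own verification, not from the talk):

```python
from itertools import product, combinations

def and_via_signs(bits):
    """Evaluate AND(bits) via the expansion
    2^{-m} * sum over S subset of [m] of (-1)^{|S|} * prod_{i in S} (-1)^{b_i}."""
    m = len(bits)
    total = 0
    for k in range(m + 1):
        for S in combinations(range(m), k):
            term = (-1) ** k
            for i in S:
                term *= (-1) ** bits[i]
            total += term
    return total / 2 ** m

# The expansion agrees with the plain AND on every input.
for bits in product((0, 1), repeat=3):
    assert and_via_signs(bits) == (1 if all(bits) else 0)
print("identity verified for m = 3")
```

This is why the MRT theorem applies: each product ∏_{i∈S} (−1)^{f_i} is again a conjunction-like function of disjoint short ROBPs (up to signs), so fooling all 2^m of them fools f.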
SLIDE 99
Directions for further research
[Figure: the same landscape of read-once models as on slide 24, highlighting the models still lacking O(log n)-seed PRGs: O(1)-ROBPs, arbitrary-order O(1)-ROBPs, poly(n)-ROBPs, arbitrary-order poly(n)-ROBPs, read-once formulas, and read-once AC0[⊕]]
SLIDE 100
Read-once AC0[⊕]
[Figure: a read-once formula with ⊕ gates alongside ∧ and ∨, over leaves x7, ¬x2, ¬x1, ¬x5, x8, x3, x12, ¬x9, x11, x4, x10, ¬x6, x13]
SLIDE 105
Fooling read-once AC0[⊕]
◮ Natural next step toward derandomizing BPL
◮ Best prior PRG: seed length O(log^2 n) [FK '18]
◮ Theorem: Our PRG fools read-once AC0[⊕] with seed length ..., where t = # parity gates
◮ Open: Fool read-once AC0[⊕] with seed length O(log(n/ε))?
◮ Thanks! Questions?