Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace
William M. Hoza1 Chris Umans2 October 10, 2016 Dagstuhl Seminar 16411
1University of Texas at Austin 2California Institute of Technology
◮ Theorem (Aydınlıoğlu, van Melkebeek '12):
  ◮ Assume the following derandomization statement: promise-AM ⊆ i.o.-Σ2TIME(2^(n^ε))/n^ε
  ◮ Then there is a PRG that gives that same derandomization
◮ Theorem (Goldreich '11):
  ◮ Assume that ∀Π ∈ promise-BPP, ∀k ∈ N, ∃ a deterministic polytime algorithm A for Π s.t. any probabilistic n^k-time algorithm has only an n^(−k) chance of generating an instance on which A fails
  ◮ Then there is a PRG that gives that same derandomization
◮ Best PRG against logspace (Nisan '92): seed length O(log^2 n)
◮ Best derandomization (Saks, Zhou '98): BPL ⊆ DSPACE(log^(3/2) n)
◮ Theorem (informally stated):
  ◮ Assume that for every derandomization of logspace, there exists a PRG strong enough to (nearly) recover that derandomization
  ◮ Then BPL ⊆ DSPACE(log^(1+α) n)
◮ An equivalence of PRGs and derandomization would itself give a derandomization!
◮ Constructing a PRG from a derandomization is hard! A promising approach to derandomizing BPL!
◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators
◮ Given: oracle Gen : {0,1}^s → {0,1}^m0, a PRG for log n space
◮ Goal: simulate a (log n)-space m-coin algorithm, m ≫ m0
◮ Approach 1: ignore the oracle; use a PRG which outputs m bits
  ◮ E.g. INW '94 (extractors): seed length O(log n · log m)
◮ Approach 2: use Gen as a building block in a new PRG which outputs m bits
  ◮ E.g. using techniques of INW: seed length s + O(log n · log(m/m0))
◮ Approach 3: use Gen as a building block in an m-step "simulator"
◮ Nonuniform model of log n space: n-state automaton
◮ Q^m(q; y) = final state if Q starts in state q and reads y ∈ {0,1}^m
◮ Simulator for automata: algorithm Sim(Q, q, x) such that Sim(Q, q, U_s) ∼_ε Q^m(q; U_m)
◮ Gen(x) is a PRG for automata iff Sim(Q, q, x) = Q^m(q; Gen(x)) is a simulator for automata
◮ Crucial feature: Gen doesn't see the "source code" (Q, q)!
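These definitions are easy to make concrete. A toy sketch (the 3-state automaton and the "generator" below are hypothetical illustrations, not constructions from the talk): Q^m(q; y) just runs a transition function on a string, and Gen is an ε-PRG for the automaton exactly when the final-state distribution under Gen(U_s) is ε-close to the one under U_m.

```python
import itertools
from collections import Counter

def run(delta, q, y):
    """Q^m(q; y): run the automaton with transition function delta on y."""
    for bit in y:
        q = delta(q, bit)
    return q

# Toy 3-state automaton (hypothetical): tracks (number of 1s) mod 3.
delta = lambda q, b: (q + b) % 3

def state_dist(strings, q0=0):
    """Distribution over final states when y ranges uniformly over `strings`."""
    c = Counter(run(delta, q0, y) for y in strings)
    return {s: c[s] / len(strings) for s in range(3)}

m, s = 6, 4
truly_random = state_dist(list(itertools.product([0, 1], repeat=m)))

# Hypothetical "generator": stretch a 4-bit seed to 6 bits cyclically.
gen = lambda x: [x[i % s] for i in range(m)]
pseudo = state_dist([gen(x) for x in itertools.product([0, 1], repeat=s)])

# Statistical distance between the two final-state distributions;
# Gen is an ε-PRG for this automaton iff this is at most ε.
eps = sum(abs(truly_random[q] - pseudo[q]) for q in range(3)) / 2
print(round(eps, 5))
```

At toy sizes, enumerating all seeds and all m-bit strings makes the statistical distance exactly computable.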
◮ Theorem (implicit in Armoni '98, builds on SZ '98, some details suppressed):
  ◮ Given: oracle Gen : {0,1}^s → {0,1}^m0, a PRG for n-state automata
  ◮ Can construct an m-step simulator for n-state automata with seed length/space complexity O(s + (log n · log m)/log m0)
◮ Instantiation 1: m0 = 2^√(log n), s = O(log n · log m0) = O(log^(3/2) n) (INW)
  ◮ Pick m = n (max # of coins of a (log n)-space algorithm)
  ◮ Obtain a simulator with seed length/space complexity O(log^(3/2) n + log^(3/2) n) = O(log^(3/2) n)
◮ Instantiation 2: m0 = 2^(log^0.7 n), s = O(log^1.1 n) (no such construction known)
  ◮ Pick m = 2^(log^0.8 n)
  ◮ Obtain a simulator with seed length/space complexity O(log^1.1 n + log^1.1 n) = O(log^1.1 n)
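The parameter bookkeeping in the two instantiations can be checked mechanically. A small sketch, assuming the simulator's seed length/space is O(s + (log n · log m)/log m0) (consistent with both instantiations), tracking only exponents of log n:

```python
def sza_exponent(s_exp, m0_exp, m_exp):
    """Exponent e such that the SZA simulator uses O(log^e n) seed/space,
    given s = log^s_exp n, log m0 = log^m0_exp n, log m = log^m_exp n.
    The bound O(s + log n * log m / log m0) becomes
    O(log^s_exp n + log^(1 + m_exp - m0_exp) n)."""
    return max(s_exp, 1 + m_exp - m0_exp)

# Instantiation 1 (INW): log m0 = log^0.5 n, s = log^1.5 n, m = n.
print(sza_exponent(1.5, 0.5, 1.0))   # → 1.5, i.e. BPL ⊆ DSPACE(log^1.5 n)

# Instantiation 2 (hypothetical PRG): log m0 = log^0.7 n, s = log^1.1 n,
# m = 2^(log^0.8 n); gives exponent 1.1.
print(sza_exponent(1.1, 0.7, 0.8))
```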
[Figure: tradeoff between "more simulation steps" and "shorter seed". PRGs (Nisan, INW, NZ, Armoni) give BPL ⊆ L^2 and handle log^O(1) n random bits in L; the SZA simulator gives BPL ⊆ L^(3/2); the regime targeted here corresponds to BPL ⊆ L^1.1; the dream is BPL = L.]
◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators
◮ Goal: approximate Q^m
◮ Easier goal: use Gen to find an automaton Pow(Q0) ≈ Q0^m0
◮ First attempt: Pow(Q0)(q; y) = Q0^m0(q; Gen(y))
  ◮ But we want Pow(Q0) to only read O(log n) bits at a time
◮ Randomized algorithm: Pow(Q0, x)(q; y) = Q0^m0(q; Gen(Samp(x, y)))
  ◮ Can achieve |x| ≤ O(s), |y| ≤ O(log n)
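A minimal sketch of the "first attempt" (toy code; the automaton and generator are hypothetical, and the sampler Samp, which only shrinks how many bits Pow reads per step, is omitted): Pow(Q0) is the automaton whose single step on a seed y behaves like m0 steps of Q0 on Gen(y), so its one-step transition matrix should approximate the exact m0-step matrix whenever Gen fools Q0.

```python
import itertools

def step_matrix(delta, n, strings):
    """M[q][r] = Pr_y[automaton moves q -> r] for y uniform over `strings`."""
    M = [[0.0] * n for _ in range(n)]
    for y in strings:
        for q in range(n):
            r = q
            for bit in y:
                r = delta(r, bit)
            M[q][r] += 1 / len(strings)
    return M

n, m0, s = 3, 6, 4
delta = lambda q, b: (q + 1) % n if b else (2 * q) % n   # toy 3-state automaton

# Exact m0-step behavior: average over all 2^m0 truly random strings.
exact = step_matrix(delta, n, list(itertools.product([0, 1], repeat=m0)))

# Pow(Q0): one step reads an s-bit seed y and acts like Q0^m0(q; Gen(y)).
gen = lambda x: [x[i % s] for i in range(m0)]            # hypothetical PRG
powed = step_matrix(delta, n, [gen(x) for x in itertools.product([0, 1], repeat=s)])

# If Gen fools Q0, Pow(Q0) approximates Q0^m0 entrywise.
err = max(abs(exact[q][r] - powed[q][r]) for q in range(n) for r in range(n))
print(err)
```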
◮ Goal: approximate Q^m
◮ First attempt: for i = 1 to log_m0(m):
  ◮ Pick fresh randomness x_i
  ◮ Let Q_i = Pow(Q_{i−1}, x_i)
  ◮ Randomness complexity: O(s · (log m)/(log m0)). Too much!
◮ Second attempt: pick x once, reuse it in each iteration
  ◮ Q_i is stochastically dependent on x
  ◮ No guarantee that Pow will be accurate
◮ Solution: break dependencies by rounding
◮ Snap(Q, r): [diagram: round Q's transition probabilities, shifted by a random offset r, down to a coarse grid]
◮ Key feature: Q ≈ Q′ ⇒ w.h.p. over r, Snap(Q, r) = Snap(Q′, r)
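One standard way to realize such a rounding is randomized rounding to a coarse grid. A minimal sketch (illustrative only; the actual Snap in Saks-Zhou differs in details): subtract a random offset r from each probability and round down to a multiple of δ. If Q ≈ Q′ entrywise, the two snap to the same result unless some entry lands near a grid boundary, which is unlikely over the choice of r.

```python
from fractions import Fraction

def snap(probs, r, delta):
    """Round each probability down to a multiple of delta after
    subtracting the random offset r (clamping at 0)."""
    return [max(Fraction(0), ((p - r) // delta) * delta) for p in probs]

delta = Fraction(1, 100)
q  = [Fraction(314, 1000),  Fraction(686, 1000)]
q2 = [Fraction(3141, 10000), Fraction(6859, 10000)]  # entrywise within 1/1000 of q

# Enumerate a fine grid of offsets r in [0, delta): the two close rows
# snap identically except when an entry falls near a grid boundary.
agree = sum(snap(q, Fraction(k, 100000), delta) == snap(q2, Fraction(k, 100000), delta)
            for k in range(1000))
print(agree / 1000)   # close to 1
```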
◮ To approximate Q0^m: iterate Q_i = Snap(Pow(Q_{i−1}, x)), reusing the same x at every level [diagram]
◮ Correctness proof sketch:
  ◮ Define Q̃_i by wishful thinking: Q̃_0 = Q0, Q̃_i = Snap(Q̃_{i−1}^m0)
  ◮ W.h.p., for all i, Pow(Q̃_i, x) ≈ Q̃_i^m0 (union bound)
  ◮ W.h.p., Snap(Pow(Q̃_i, x)) = Snap(Q̃_i^m0) = Q̃_{i+1}
  ◮ W.h.p., by induction, dreams come true: Q_i = Q̃_i for all i
◮ Implement using recursion
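The shape of the iteration can be sketched in a few lines. In this toy version (illustrative only), Pow is replaced by exact matrix powering, so the pseudorandomness is gone; what remains visible is that "power to the m0, then Snap", applied log_m0(m) times, still tracks Q0^m, since each snap perturbs entries by at most δ.

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    R = A
    for _ in range(e - 1):
        R = mat_mul(R, A)
    return R

def snap(A, delta):
    # Round each transition probability down to a multiple of delta
    # (zero offset in this sketch); the matrix becomes substochastic.
    return [[(p // delta) * delta for p in row] for row in A]

# 2-state Markov chain Q0; approximate Q0^(m0^k) by k rounds of
# "power to the m0, then snap".
Q0 = [[Fraction(3, 4), Fraction(1, 4)], [Fraction(1, 3), Fraction(2, 3)]]
m0, k, delta = 2, 3, Fraction(1, 10**6)

Q = Q0
for _ in range(k):
    Q = snap(mat_pow(Q, m0), delta)

exact = mat_pow(Q0, m0 ** k)   # Q0^8, computed directly
err = max(abs(Q[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(float(err))   # tiny: the rounding error compounds only mildly
```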
◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators
◮ Two key features distinguish a PRG from a simulator:
  ◮ Input: a PRG gets no access to the "source code" (Q, q)
  ◮ Output: a PRG outputs a long string for the automaton to read, vs. a simulator outputting a state
◮ Claim: the first feature is the one that matters for us
◮ Targeted PRG: algorithm Gen(Q, q, x) such that Sim(Q, q, x) = Q^m(q; Gen(Q, q, x)) is a simulator
  ◮ Introduced by Goldreich '11 for BPP
  ◮ Ordinary PRG: the special case where Gen doesn't depend on (Q, q)
◮ Simulation advice generator: algorithm Gen(x) such that, for some deterministic logspace algorithm S, Sim(Q, q, x) = S(Q, q, Gen(x)) is a simulator
  ◮ Ordinary PRG: the special case where S(Q, q, y) = Q^|y|(q; y)
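The two "special case" remarks can be phrased as trivial wrappers (toy code; the automaton, generator, and helper names are hypothetical): an ordinary PRG yields a targeted PRG that simply ignores (Q, q), and a simulation advice generator whose S just runs the automaton on the advice string.

```python
def run(delta, q, y):
    """Q^|y|(q; y): run transition function delta from state q on string y."""
    for bit in y:
        q = delta(q, bit)
    return q

# An ordinary PRG is just a map from seeds to strings (toy: repeat the seed).
def ordinary_gen(x, m=8):
    return [x[i % len(x)] for i in range(m)]

# ...viewed as a targeted PRG, it ignores the source code (Q, q):
def targeted_gen(delta, q, x):
    return ordinary_gen(x)

# ...viewed as a simulation advice generator, the advice is the output
# string and S runs the automaton on it:
def S(delta, q, advice):
    return run(delta, q, advice)

delta = lambda q, b: (q + b) % 3   # toy 3-state automaton
x = [1, 0, 1, 1]

# All three views induce the same simulator output state:
sim_prg      = run(delta, 0, ordinary_gen(x))
sim_targeted = run(delta, 0, targeted_gen(delta, 0, x))
sim_advice   = S(delta, 0, ordinary_gen(x))
print(sim_prg == sim_targeted == sim_advice)   # → True
```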
[Diagram relating: ordinary pseudorandom generator, simulator, targeted pseudorandom generator, simulation advice generator; one transformation arrow is dashed.]
◮ Theorem: the dashed-arrow transformation exists if and only if promise-BPSPACE(log^(1+α) n) = promise-DSPACE(log^(1+α) n)
◮ SZA works on a simulation advice generator without modification
[Diagram: Assumption → Targeted PRG → (Method of …) → Simulation Advice Generator → (SZA) → Simulator]
◮ Theorem: the following are equivalent:
  1. promise-BPSPACE(log^(1+α) n) = promise-DSPACE(log^(1+α) n)
  2. If ∃ an efficient targeted PRG with parameters s ≤ O(log^(1+σ) n), log(1/ε) = log^(1+η) n, log m ≥ log^µ n, then ∃ an efficient simulation advice generator with parameters s′ ≤ O(log^(1+σ+γ) n), log(1/ε′) = log^(1+η−γ) n, log m′ ≥ log^(µ−γ) n, log a′ ≤ O(log^(1+η+γ) n)
◮ Here a′ = the number of advice bits
◮ "Efficient": space complexity ≤ O(seed length)
◮ This material is based upon work supported by:
  ◮ NSF GRFP Grant No. DGE-1610403
  ◮ NSF Grant No. CCF-1423544
◮ Thanks for your attention! Any questions?