

SLIDE 1

Targeted Pseudorandom Generators, Simulation Advice Generators, and Derandomizing Logspace

William M. Hoza^1, Chris Umans^2
October 10, 2016, Dagstuhl Seminar 16411

^1 University of Texas at Austin   ^2 California Institute of Technology

SLIDES 2-11

Derandomization ⇐?⇒ PRG

◮ Theorem (Aydınlıoğlu, van Melkebeek '12):
  ◮ Assume the following derandomization statement: promise-AM ⊆ ⋂_{ε>0} i.o.-Σ_2TIME(2^{n^ε})/n^ε
  ◮ Then there is a PRG that gives that same derandomization
◮ Theorem (Goldreich '11):
  ◮ Assume that ∀Π ∈ promise-BPP, ∀k ∈ ℕ, ∃ deterministic polytime algorithm A for Π s.t. any probabilistic n^k-time algorithm has only an n^{-k} chance of generating an instance on which A fails
  ◮ Then there is a PRG that gives that same derandomization

SLIDES 12-13

L vs. BPL

◮ Best PRG against logspace (Nisan '92): seed length O(log^2 n)
◮ Best derandomization (Saks, Zhou '98): BPL ⊆ DSPACE(log^{3/2} n)

SLIDES 14-17

Main result, simplified version

◮ Theorem (informally stated):
  ◮ Assume that for every derandomization of logspace, there exists a PRG strong enough to (nearly) recover that derandomization
  ◮ Then BPL ⊆ ⋂_{α>0} DSPACE(log^{1+α} n)
◮ Equivalence of PRGs and derandomization would itself give a derandomization!

SLIDES 18-20

How to interpret our result

◮ Pessimistic reading: constructing a PRG from a derandomization is hard!
◮ Optimistic reading: a promising approach to derandomizing BPL!

SLIDE 21

Outline

◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators

SLIDES 22-30

When your PRG doesn't output enough bits

◮ Given: oracle Gen : {0,1}^s → {0,1}^{m_0}, a PRG for log n space
◮ Goal: simulate a (log n)-space m-coin algorithm, m ≫ m_0
◮ Approach 1: Ignore the oracle; use a PRG which outputs m bits
  ◮ E.g. INW '94 (extractors): seed length O(log n log m)
◮ Approach 2: Use Gen as a building block in a new PRG which outputs m bits
  ◮ E.g. using techniques of INW: seed length s + O((log n) · log(m/m_0))
  ◮ For m ≫ m_0, might as well have started from scratch!
◮ Approach 3: Use Gen as a building block in an m-step "simulator"
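The seed-length comparison of Approaches 1 and 2 can be sanity-checked numerically. This bookkeeping is my own, not from the talk: every quantity is tracked as an exponent of log n (e.g. log m = (log n)^{e_m}), and each function returns the exponent of the resulting seed length.

```python
# My own bookkeeping, not from the talk: track each quantity as an
# exponent of log n, e.g. log m = (log n)**e_m, and compute the exponent
# of the resulting seed length.

def approach1_exponent(e_m):
    # Approach 1: seed length O(log n * log m) -> exponent 1 + e_m.
    return 1 + e_m

def approach2_exponent(e_s, e_m, e_m0):
    # Approach 2: seed length s + O((log n) * log(m/m0)). For m >> m0,
    # log(m/m0) is within a constant factor of log m, so the exponent is
    # max(e_s, 1 + e_m) -- the e_m0 term drops out entirely.
    return max(e_s, 1 + e_m)
```

With Nisan-style parameters (e_s = 1.5, e_m0 = 0.5) and m = n (e_m = 1), both approaches give exponent 2: starting from Gen bought nothing, which is the point of the last bullet.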

SLIDES 31-33

Randomness-efficient simulators for automata

◮ Nonuniform model of log n space: n-state automaton
◮ Q^m(q; y) = final state if Q starts in state q and reads y ∈ {0,1}^m
◮ Simulator for automata: algorithm Sim(Q, q, x) such that Sim(Q, q, U_s) ∼_ε Q^m(q; U_m)
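A minimal concrete rendering of this model (the encoding is my choice, not the talk's): an automaton is a transition function on (state, bit) pairs, and Q^m(q; y) is just m steps of table lookup.

```python
def run_automaton(delta, q, bits):
    """Q^m(q; y): final state after starting at state q and reading y."""
    for b in bits:
        q = delta[(q, b)]
    return q

# Example: a 2-state automaton tracking the parity of the bits read so far.
parity = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```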

SLIDES 34-35

PRGs for automata

◮ Gen(x) is a PRG for automata iff Sim(Q, q, x) = Q^m(q; Gen(x)) is a simulator for automata
◮ Crucial feature: Gen doesn't see the "source code" (Q, q)!
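The definitional point can be made concrete (the stand-in generator here is mine, not a real PRG): a PRG-induced simulator only composes Gen with the automaton, so Gen runs before (Q, q) is ever consulted.

```python
def run_automaton(delta, q, bits):
    for b in bits:
        q = delta[(q, b)]
    return q

def simulator_from_prg(gen):
    # Sim(Q, q, x) = Q^m(q; Gen(x)). Note gen only ever sees the seed,
    # never the automaton (Q, q).
    def sim(delta, q, seed):
        return run_automaton(delta, q, gen(seed))
    return sim

# Toy stand-in "generator" that just repeats its seed; a real PRG for
# automata would stretch the seed pseudorandomly.
double = lambda seed: seed * 2

parity = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
sim = simulator_from_prg(double)
```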

SLIDES 36-46

Saks-Zhou-Armoni transformation

◮ Theorem (implicit in Armoni '98, builds on SZ '98, some details suppressed):
  ◮ Given oracle Gen : {0,1}^s → {0,1}^{m_0}, a PRG for n-state automata
  ◮ Can construct an m-step simulator for n-state automata with seed length/space complexity O(s + (log n) · (log m)/(log m_0))
◮ Example 1: Saks-Zhou theorem
  ◮ m_0 = 2^{√(log n)}, s = O(log n · log m_0) = O(log^{3/2} n) (INW)
  ◮ Pick m = n (max # coins of a (log n)-space algorithm)
  ◮ Obtain simulator with seed length/space complexity O(log^{3/2} n + log^{3/2} n) = O(log^{3/2} n)
◮ Example 2: Some wishful thinking
  ◮ m_0 = 2^{log^{0.7} n}, s = O(log^{1.1} n) (no such construction known)
  ◮ Pick m = 2^{log^{0.8} n}
  ◮ Obtain simulator with seed length/space complexity O(log^{1.1} n + log^{1.1} n) = O(log^{1.1} n)
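The two examples can be checked with exponent-of-log-n bookkeeping (my own check, not from the slides): the SZA seed length O(s + (log n) · (log m)/(log m_0)) has exponent max(e_s, 1 + e_m − e_m0).

```python
# Exponent-of-log-n bookkeeping (my own check, not from the slides) for
# the SZA seed length O(s + (log n) * (log m) / (log m0)), where
# s = (log n)**e_s, log m = (log n)**e_m, log m0 = (log n)**e_m0.

def sza_exponent(e_s, e_m, e_m0):
    return max(e_s, 1 + e_m - e_m0)

# Example 1 (Saks-Zhou): e_s = 1.5, e_m = 1.0, e_m0 = 0.5 -> 1.5
# Example 2 (wishful):   e_s = 1.1, e_m = 0.8, e_m0 = 0.7 -> 1.1
```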

SLIDES 47-73

Proof of main result

[Figure, built up across these slides: a plot with axes "more simulation steps" vs. "shorter seed". Marked points: the "Dream" (BPL = L); the Nisan/INW/NZ/Armoni PRGs (BPL ⊆ L^2); "log^{O(1)} n random bits in L"; the SZA simulator (BPL ⊆ L^{3/2}). Repeatedly alternating the SZA transformation with the assumed derandomization-to-PRG transformation walks toward BPL ⊆ L^{1.1}.]

SLIDE 74

Outline

◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators

SLIDES 75-80

Randomness-efficient approximate powering

◮ Goal: approximate Q^m
◮ Easier goal: Use Gen to find an automaton Pow(Q_0) ≈ Q_0^{m_0}
◮ First attempt: Pow(Q_0)(q; y) = Q_0^{m_0}(q; Gen(y))
◮ But we want Pow(Q_0) to read only O(log n) bits at a time
◮ Randomized algorithm: Pow(Q_0, x)(q; y) = Q_0^{m_0}(q; Gen(Samp(x, y)))
◮ Can achieve |x| ≤ O(s), |y| ≤ O(log n)
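Schematically (all names are placeholders of mine, with trivial stand-ins for Gen and Samp), the randomized powering operator composes three pieces:

```python
def run_automaton(delta, q, bits):
    for b in bits:
        q = delta[(q, b)]
    return q

def pow_automaton(gen, samp, delta0, x):
    # Pow(Q0, x)(q; y) = Q0^{m0}(q; Gen(Samp(x, y))): the powered
    # automaton reads a short y, stretches (x, y) into a PRG seed via the
    # sampler, and runs the base automaton on Gen's output.
    def powered(q, y):
        return run_automaton(delta0, q, gen(samp(x, y)))
    return powered

# Toy stand-ins: the sampler concatenates, the "PRG" is the identity.
parity = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
powered = pow_automaton(lambda s: s, lambda x, y: x + y, parity, [1, 0])
```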

SLIDES 81-88

Repeated approximate powering

◮ Goal: approximate Q^m
◮ First attempt: For i = 1 to log_{m_0} m:
  ◮ Pick fresh randomness x_i
  ◮ Let Q_i = Pow(Q_{i-1}, x_i)
◮ Randomness complexity: O(s · (log m)/(log m_0)). Too much!
◮ Second attempt: Pick x once, reuse it in each iteration
  ◮ Q_i is stochastically dependent on x
  ◮ No guarantee that Pow will be accurate

SLIDES 89-97

Snap operation

◮ Solution: Break dependencies by rounding
◮ Snap(Q):
  1. Compute M = transition probability matrix of Q
  2. Randomly perturb, then round each entry of M
  3. Return the automaton with the resulting transition probability matrix
◮ Key feature: Q ≈ Q′ ⟹ w.h.p. over r, Snap(Q, r) = Snap(Q′, r)

[Figure: two nearby automata Q and Q′ snapping to the same rounded point.]
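A toy rendering of the rounding step (the precision and perturbation scheme are my simplifications): shift every entry by a shared random offset r, then round down to a multiple of 2^{-t}. Two entrywise-close matrices then collide for most choices of r.

```python
import math

def snap(matrix, r, t):
    # Perturb each transition probability by the shared offset r, then
    # round down to precision 2^-t (a simplified version of the rounding
    # step described above).
    scale = 2 ** t
    return [[math.floor((p + r) * scale) / scale for p in row]
            for row in matrix]

# Two nearby transition matrices snap to the same grid point here:
close1 = [[0.500, 0.500]]
close2 = [[0.501, 0.499]]
```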

SLIDES 98-106

SZA transformation

◮ To approximate Q_0^m:
  1. Pick x randomly
  2. For i = 1 to log_{m_0} m, set Q_i = Snap(Pow(Q_{i-1}, x))
◮ Correctness proof sketch:
  ◮ Define Q̃_i by wishful thinking: Q̃_0 = Q_0, Q̃_i = Snap(Q̃_{i-1}^{m_0})
  ◮ W.h.p., for all i, Pow(Q̃_i, x) ≈ Q̃_i^{m_0} (union bound)
  ◮ W.h.p., Snap(Pow(Q̃_i, x)) = Snap(Q̃_i^{m_0}) = Q̃_{i+1}
  ◮ W.h.p., by induction, dreams come true: Q_i = Q̃_i for all i
◮ Implement using recursion
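The two-step algorithm above, as schematic Python (the operator arguments are stand-ins I chose to exercise the control flow; the real Pow and Snap act on automata):

```python
def sza_loop(Q0, x, r, rounds, pow_op, snap_op):
    # Pick x (and the rounding randomness r) once, then alternate
    # powering and snapping for log_{m0} m rounds.
    Q = Q0
    for _ in range(rounds):
        Q = snap_op(pow_op(Q, x), r)
    return Q

# Scalar stand-ins just to exercise the loop: "powering" squares a
# number, "snapping" rounds it to 2 decimal places.
square = lambda Q, x: Q * Q
round2 = lambda Q, r: round(Q + r, 2)
```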

SLIDE 107

Outline

◮ Simplified statement of main result
◮ Proof sketch of main result
◮ Saks-Zhou theorem, revisited
◮ Proof sketch of Saks-Zhou-Armoni theorem
◮ Stronger version of main result
  ◮ Targeted PRGs
  ◮ Simulation advice generators

SLIDES 108-111

Stronger version of main result

◮ Two key features distinguish a PRG from a simulator:
  ◮ Input: no access to the "source code" (Q, q)
  ◮ Output: a long string for the automaton to read vs. a state
◮ Claim: The first feature is the one that matters for us

SLIDES 112-114

Targeted PRGs

◮ Targeted PRG: algorithm Gen(Q, q, x) such that Sim(Q, q, x) = Q^m(q; Gen(Q, q, x)) is a simulator
◮ Introduced by Goldreich '11 for BPP
◮ Ordinary PRG: the special case where Gen doesn't depend on (Q, q)

SLIDES 115-116

Simulation advice generators

◮ Simulation advice generator: algorithm Gen(x) such that for some deterministic logspace algorithm S, Sim(Q, q, x) = S(Q, q, Gen(x)) is a simulator
◮ Ordinary PRG: the special case where S(Q, q, y) = Q^{|y|}(q; y)
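The relationship between the two definitions, in schematic code (the encodings are mine): an advice generator's output is post-processed by a logspace algorithm S, and an ordinary PRG is the special case where S just runs the automaton on the advice.

```python
def run_automaton(delta, q, bits):
    for b in bits:
        q = delta[(q, b)]
    return q

def advice_simulator(gen, S):
    # Sim(Q, q, x) = S(Q, q, Gen(x)): gen never sees (Q, q), but the
    # reconstruction procedure S may use them freely.
    def sim(delta, q, seed):
        return S(delta, q, gen(seed))
    return sim

# Ordinary-PRG special case: S just feeds the advice to the automaton.
def S_ordinary(delta, q, y):
    return run_automaton(delta, q, y)

parity = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
sim = advice_simulator(lambda seed: seed, S_ordinary)
```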

SLIDE 117

Four kinds of derandomization

◮ Ordinary pseudorandom generator
◮ Simulator
◮ Targeted pseudorandom generator
◮ Simulation advice generator

SLIDE 118

Main result (informally)

[Diagram: the four kinds of derandomization (ordinary PRG, simulator, targeted PRG, simulation advice generator) connected by transformation arrows, one of them dashed.]

◮ Theorem: The dashed-arrow transformation exists if and only if ⋂_{α>0} promise-BPSPACE(log^{1+α} n) = ⋂_{α>0} promise-DSPACE(log^{1+α} n)

SLIDES 119-122

Proof sketch

◮ SZA works on a simulation advice generator without modification

[Diagram, built up across these slides: Simulation Advice Generator → (SZA) → Simulator → (Assumption) → Targeted PRG → (method of cond. prob.) → Simulation Advice Generator.]
SLIDES 123-129

Main result (in painful detail)

◮ Theorem: The following are equivalent:
  1. ⋂_{α>0} promise-BPSPACE(log^{1+α} n) = ⋂_{α>0} promise-DSPACE(log^{1+α} n)
  2. For all µ ∈ [0, 1], for all sufficiently small σ > η > 0, for all γ > 0:
    ◮ If ∃ efficient targeted PRG with parameters s ≤ O(log^{1+σ} n), log(1/ε) = log^{1+η} n, log m ≥ log^µ n
    ◮ Then ∃ efficient simulation advice generator with parameters s′ ≤ O(log^{1+σ+γ} n), log(1/ε′) = log^{1+η−γ} n, log m′ ≥ log^{µ−γ} n, log a′ ≤ O(log^{1+η+γ} n)
◮ a′ = number of advice bits
◮ "Efficient": space complexity ≤ O(seed length)

SLIDE 130

Conclusion

◮ This material is based upon work supported by:
  ◮ NSF GRFP Grant No. DGE-1610403
  ◮ NSF Grant No. CCF-1423544
◮ Thanks for your attention!
◮ Any questions?