Preserving Randomness for Adaptive Algorithms

William M. Hoza    Adam R. Klivans

May 25, 2017
Caltech Theory of Computing Seminar

1 / 20

Randomized estimation algorithms

◮ Algorithm Est(C) estimates some value µ(C) ∈ R^d:

      Pr[ ‖Est(C) − µ(C)‖∞ > ε ] ≤ δ

◮ Canonical example:
  ◮ C is a Boolean circuit
  ◮ µ(C) := Pr_x[C(x) = 1]  (d = 1)
  ◮ Est(C) evaluates C at several randomly chosen points

2 / 20
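The canonical example can be sketched as a plain Monte Carlo estimator; the names `est` and `C` below are illustrative, not from the talk:

```python
import random

def est(C, n, num_samples=1000, rng=random):
    """Estimate mu(C) = Pr_x[C(x) = 1] by evaluating the circuit C
    (given as a 0/1-valued function on n-bit inputs) at randomly chosen
    points. By a Chernoff bound, num_samples = O(log(1/delta)/eps^2)
    samples give |estimate - mu(C)| <= eps except with probability delta."""
    hits = 0
    for _ in range(num_samples):
        x = [rng.randrange(2) for _ in range(n)]  # uniform x in {0,1}^n
        hits += C(x)
    return hits / num_samples

# Example: C accepts iff the first input bit is 1, so mu(C) = 1/2.
mu_hat = est(lambda x: x[0], n=8, rng=random.Random(0))
```

Each such call consumes n fresh random bits per sample; the rest of the talk is about how little fresh randomness is needed across many calls.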

Executing Est many times

◮ Goal: Execute Est(C1), Est(C2), . . . , Est(Ck)
◮ Say Est uses n random bits
◮ Naïve implementation: nk random bits
◮ Can we do better?
◮ Theorem: Can use just n + O(k log(d + 1)) random bits!
  ◮ Slight increases in error and failure probability

3 / 20

Nonadaptive setting

      C1, . . . , Ck  −→  Est(C1), . . . , Est(Ck)

◮ Algorithm that uses just n random bits:
  1. Pick X ∈ {0, 1}^n uniformly at random, once
  2. Execute Est(C1, X), Est(C2, X), . . . , Est(Ck, X)
◮ Overall failure probability is still kδ (union bound)

4 / 20
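In code, the nonadaptive saving is just hoisting the random draw out of the loop. Here `est_with_randomness` is a hypothetical variant of Est that takes its coin flips explicitly:

```python
import random

def run_nonadaptive(circuits, n, est_with_randomness, rng=random):
    """Run Est on every circuit while spending only n random bits total:
    draw one shared X in {0,1}^n and feed it to every execution.
    A union bound keeps the overall failure probability at k*delta."""
    X = [rng.randrange(2) for _ in range(n)]  # drawn once, reused k times
    return [est_with_randomness(C, X) for C in circuits]

# Toy Est that "estimates" Pr_x[C(x) = 1] from the single sample X.
toy_est = lambda C, X: float(C(X))
outputs = run_nonadaptive([lambda x: 1, lambda x: 0], n=4,
                          est_with_randomness=toy_est)
```

The union bound works here precisely because C1, . . . , Ck are fixed before X is drawn; the next slide shows why this breaks down when they are chosen adaptively.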

Adaptive setting

      C1   Est(C1)   C2   Est(C2)   . . .   Ck   Est(Ck)

◮ Let X be the randomness used for Est(C1)
◮ C2 is stochastically dependent on X
◮ Failure probability of Est(C2, X) is ???

5 / 20

Concentrated functions

◮ f : {0, 1}^n → R^d
◮ Definition: f is (ε, δ)-concentrated at µ ∈ R^d if

      Pr_X[ ‖f(X) − µ‖∞ > ε ] ≤ δ.

◮ Example: f(X) := Est(C, X)

6 / 20

Randomness steward model

[Figure: an interaction between Owner and Steward: the owner sends f1, the steward replies Y1; then f2, Y2; . . . ; fk, Yk.]

◮ Each fi is (ε, δ)-concentrated at some µi
◮ Steward requirement: For any owner,

      Pr[ max_i ‖Yi − µi‖∞ > ε′ ] ≤ δ′

7 / 20

One-query stewards

◮ Definition: A one-query steward accesses each fi only by querying a single point fi(Xi)
◮ Querying fi corresponds to executing Est
◮ The owner does not see Xi

8 / 20

Warm-up steward

◮ Theorem: For any n, k, ε, δ, there exists a one-query steward for d = 1 with
  ◮ Error ε′ ≤ O(ε)  (vs. naïve ε)
  ◮ Failure probability δ′ ≤ 2^k · δ  (vs. naïve kδ)
  ◮ Randomness n  (vs. naïve nk)
◮ The steward: "Reuse randomness and round"
  ◮ Pick X ∈ {0, 1}^n uniformly at random, once
  ◮ For i = 1 to k: Return fi(X), rounded to the nearest multiple of 2ε

9 / 20

Analysis of warm-up steward

[Figure: the real line divided into cells of width 2ε; µ lies in one cell, and A(µ), B(µ) are the two nearby multiples of 2ε that the rounded answer can take.]

◮ Imagine if the steward always returns A(µ) or B(µ)...

[Figure: a binary tree of functions: f1 at the root; children f2^A, f2^B; grandchildren f3^AA, f3^AB, f3^BA, f3^BB; then f4^AAA, f4^AAB, . . . , f4^BBB.]

◮ Union bound: Pr[X good for every function in tree] ≥ 1 − 2^k · δ
◮ If so, inductively, every fi is in the tree!

10 / 20

Main result

◮ Theorem: For all n, k, d, ε, δ, γ, there is an efficient one-query steward with
  ◮ Error ε′ ≤ O(εd)
  ◮ Failure probability δ′ ≤ kδ + γ
  ◮ # random bits n + O(k log(d + 1) + log k log(1/γ))

11 / 20

Main steward

◮ Pick random seed X, compute (X1, . . . , Xk) = Gen(X)
◮ For i = 1 to k:
  ◮ Obtain Wi = fi(Xi)
  ◮ Shift and round Wi to determine output Yi
◮ Ingredient 1: Gen, a PRG for block decision trees
◮ Ingredient 2: A deterministic shifting and rounding algorithm

12 / 20
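The control flow above can be sketched schematically. Note the hedging: `gen` below stands in for Ingredient 1 but just stretches a seed with a hash, which is NOT the real PRG construction, and `shift_and_round` is left abstract since Ingredient 2 is described on the following slides:

```python
import hashlib

def gen(seed: bytes, k: int, n: int):
    """Placeholder for Ingredient 1: expand one seed into X_1, ..., X_k.
    (A hash-based stretch for illustration only -- the talk's Gen is a
    PRG for block decision trees.)"""
    return [hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:n]
            for i in range(k)]

def main_steward(seed, k, n, next_function, shift_and_round):
    Xs = gen(seed, k, n)
    answers = []
    for i in range(k):
        f = next_function(answers)          # owner adaptively chooses f_i
        W = f(Xs[i])                        # one query: W_i = f_i(X_i)
        answers.append(shift_and_round(W))  # Ingredient 2 determines Y_i
    return answers

# Toy run: each f_i reports the length of its query point; no rounding.
ans = main_steward(b"seed", k=2, n=16,
                   next_function=lambda hist: (lambda X: len(X)),
                   shift_and_round=lambda W: W)
```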

Shifting and rounding algorithm

[Figure: the query results Wi1, Wi2, Wi3, Wi4, Wi5 plotted on a line partitioned into cells of width (d + 1) · 2ε.]

13 / 20
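One plausible reading of this figure, offered as an assumption rather than the talk's exact rule: each shift ∆ defines a grid of cell width (d + 1) · 2ε offset by ∆ · 2ε, and R∆ rounds every coordinate to that grid.

```python
def round_shifted(W, delta, eps, d):
    """Hypothetical reading of the slide's figure (an assumption, not
    verbatim from the talk): R_delta rounds each coordinate of W to the
    grid of cell width (d+1)*2*eps shifted by delta*2*eps. Each coordinate
    of mu_i sits near a cell boundary for at most one of the d+1 shifts,
    so by pigeonhole some shift is 'safe' for all d coordinates at once."""
    cell = (d + 1) * 2 * eps
    off = delta * 2 * eps
    return [off + round((w - off) / cell) * cell for w in W]

# d = 2, eps = 0.1: cell width 0.6, unshifted grid {0, 0.6, 1.2, 1.8, ...}.
y = round_shifted([0.55, 1.9], delta=0, eps=0.1, d=2)
```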

Analysis of shifting and rounding algorithm

◮ For W ∈ R^d and ∆ ∈ [d + 1], define R∆(W) ∈ R^d by shifting W according to ∆, then rounding
◮ By construction, Yi = R∆(Wi) for some ∆
◮ Imagine if Yi = R∆(µi) for some ∆...

[Figure: a tree with root f1; children f2^{R1(µ1)}, f2^{R2(µ1)}, f2^{R3(µ1)}; and, below f2^{R2(µ1)}, a ⊥ node plus children f3^{R2(µ1),R1(µ2)}, f3^{R2(µ1),R2(µ2)}, f3^{R2(µ1),R3(µ2)}.]

14 / 20

Certification tree

[Figure: the tree of functions again: root f1; children f2^{R∆(µ1)} for each shift ∆; grandchildren f3^{R∆(µ1),R∆′(µ2)}; with a ⊥ node marking a bad query.]

◮ A sequence (X1, . . . , Xk) of query points determines:
  ◮ A transcript (f1, Y1, f2, Y2, . . . , fk, Yk)
  ◮ A path P through the tree
◮ If we pick X1, . . . , Xk independently and u.a.r.,

      Pr_{(X1,...,Xk)}[P has a ⊥ node] ≤ kδ

◮ (Certification) No ⊥ nodes in P ⇒ every Yi has error O(εd)

15 / 20

Block decision trees

◮ (k, n, q) block decision tree: Full q-ary tree of height k
◮ Each internal node vs has a function vs : {0, 1}^n → [q]
◮ The tree reads nk bits and outputs a leaf

[Figure: a ternary tree with root v, children va, vb, vc, and grandchildren vaa, . . . , vcc; the edges out of one node are labeled with the input blocks that select them, e.g. 00, 11 / 01 / 10.]

16 / 20
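Evaluating such a tree is straightforward; the dict-of-functions representation below is just one convenient encoding, not from the talk:

```python
def eval_block_decision_tree(node_fns, blocks, q):
    """Walk a (k, n, q) block decision tree: starting at the root (node
    id ()), each internal node applies its function to the next n-bit
    block to pick one of its q children; after k blocks we land on a
    leaf. node_fns maps a node id (tuple of child indices taken so far)
    to a function {0,1}^n -> {0, ..., q-1}."""
    node = ()
    for block in blocks:          # k blocks of n bits: nk bits in total
        node = node + (node_fns[node](block),)
    return node                   # the output leaf

# Height-2 ternary example: every node branches on (sum of bits) mod 3.
branch = lambda b: sum(b) % 3
fns = {(): branch, (0,): branch, (1,): branch, (2,): branch}
leaf = eval_block_decision_tree(fns, [(1, 0), (1, 1)], q=3)
```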

PRG for block decision trees

◮ Theorem: There is an efficient γ-PRG for block decision trees with seed length n + O(k log q + log k log(1/γ))
◮ Proof idea: Modify parameters of the INW generator
◮ This generator fools the certification tree
◮ No need to fool the steward/owner protocol!

17 / 20

Application: Randomness-efficient Goldreich-Levin

◮ Oracle access to x ∈ {0, 1}^{2^n}
◮ Theorem: Can find all Hadamard codewords that agree with x in a (1/2 + θ)-fraction of positions
  ◮ Runtime poly(n, 1/θ, log(1/δ))  (δ = failure prob)
  ◮ O(n + log n · log(1/δ)) random bits (independent of θ!)
◮ Previous best: O(n log(n/θ) log(1/(δθ))) random bits (Bshouty et al. '04)
◮ Proof ingredients:
  ◮ Standard Goldreich-Levin algorithm
  ◮ Our steward with d = poly(1/θ)
  ◮ Goldreich-Wigderson sampler

18 / 20

Landscape of stewards

◮ Steward model captures derandomization constructions in the literature

  ε′        | δ′                   | Randomness complexity                  | Reference
  ε         | kδ                   | nk                                     | Naïve
  O(ε)      | 2^k · δ              | n  (d = 1 only)                        | This work
  O(εd)     | kδ + γ               | n + O(k log(d + 1) + log k log(1/γ))   | This work
  O(εkd/γ)  | kδ + γ               | n + O(k log k + k log d + k log(1/γ))  | ≈ SZ '99
  O(ε)      | kδ + k/2^{n^{Ω(1)}}  | O(n^6 + kd)                            | ≈ IZ '89
  O(ε)      | kδ + γ               | n + O(kd + log k log(1/γ))             | This work
  Any       | Any δ′ ≤ 0.2         | ≥ n + Ω(k) − log(δ′/δ)  (lower bound)  | This work

19 / 20

Open questions

◮ Optimal randomness complexity when d is large?
◮ Simultaneously achieve error ε′ ≤ O(ε) and randomness complexity n + O(k log(d + 1))?

◮ Thanks! Questions?
◮ This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1610403.

20 / 20