

SLIDE 1

Improved Direct Product Theorems for Randomized Query Complexity

Andrew Drucker

Sept. 13, 2010

SLIDE 2

Big picture

Usually, computer users have not one goal, but many. When can multiple computations be combined to make them easier?

SLIDE 4

Separate inputs

Suppose each of the outputs we want to compute depends on a separate input.

SLIDE 5

Direct Product Theorems

Intuition: the different outputs are ‘unrelated’, so computing them together shouldn’t make the task easier. Direct Product Theorems (DPTs) are results that make this intuition rigorous (when it’s correct!). DPTs have been studied for many years, in many computational models. Our focus: randomized query algorithms, with cost = number of queries to the input.

SLIDE 11

Direct products

Given F : {0, 1}^n → Σ and k > 1, define

F^{⊗k}(x_1, …, x_k) = (F(x_1), …, F(x_k)),

a function of k separate n-bit inputs x_1, …, x_k. F^{⊗k} is the ‘k-fold direct product’ of F.
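The definition above is easy to make concrete. A minimal sketch (the helper name `direct_product` and the parity example are ours, not from the talk):

```python
def direct_product(F, k):
    """Return F^{⊗k}: maps k separate inputs to the tuple of k outputs."""
    def Fk(*inputs):
        assert len(inputs) == k
        return tuple(F(x) for x in inputs)
    return Fk

# Example: F = parity of a bit string (here, a tuple of bits).
parity = lambda x: sum(x) % 2
p3 = direct_product(parity, 3)
print(p3((1, 1), (0, 1), (1, 0)))  # -> (0, 1, 1): F applied componentwise
```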

SLIDE 13

Average-case complexity

For a function F, a query bound T > 0, and a distribution µ over inputs to F, define Suc_{T,µ}(F) as the maximum success probability of any T-query algorithm R in computing F(y) on input y ∼ µ (probability over the randomness in y and in R).
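For tiny n, Suc_{T,µ}(F) can be computed exactly: a randomized algorithm is a mixture of deterministic ones, so the maximum is attained by the best adaptive deterministic strategy, which a short (exponential-time) recursion finds. A sketch under that observation; the function names and encoding are ours:

```python
from itertools import product

def suc(F, mu, T, known=None):
    """Exact Suc_{T,mu}(F) for tiny n (brute-force recursion).

    mu: dict mapping n-bit tuples to probabilities.
    known: dict {index: bit} of query answers seen so far (internal)."""
    known = known or {}
    cond = {y: p for y, p in mu.items()
            if all(y[i] == b for i, b in known.items())}
    total = sum(cond.values())
    if total == 0:
        return 0.0
    # Option 1: stop now and output the most likely value of F(y).
    best = max(sum(p for y, p in cond.items() if F(y) == v)
               for v in {F(y) for y in cond}) / total
    if T == 0:
        return best
    n = len(next(iter(mu)))
    # Option 2: spend a query on an unseen index i, then play optimally.
    for i in range(n):
        if i in known:
            continue
        val = sum((sum(p for y, p in cond.items() if y[i] == b) / total)
                  * suc(F, mu, T - 1, {**known, i: b}) for b in (0, 1))
        best = max(best, val)
    return best

mu = {y: 0.25 for y in product((0, 1), repeat=2)}   # uniform on 2 bits
xor = lambda y: y[0] ^ y[1]
print(suc(xor, mu, 0), suc(xor, mu, 1), suc(xor, mu, 2))  # 0.5 0.5 1.0
```

As expected for XOR: one query reveals nothing about the parity, so only the full query budget helps.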

SLIDE 15

The form of a DPT

Let µ^{⊗k} denote k independent samples from µ. A Direct Product Theorem has the form:

∀F, Suc_{T,µ}(F) ≤ p ⇒ Suc_{T′,µ^{⊗k}}(F^{⊗k}) ≤ p′,

where T′ and p′ depend on T, p, and k. We hope to have p′ ≪ p and T′ ≫ T: “F is hard ⇒ F^{⊗k} is harder.”

SLIDE 18

An ‘ideal’ DPT?

The strongest DPT we could hope for would say:

∀F, Suc_{T,µ}(F) ≤ 1 − ε ⇒ Suc_{Tk,µ^{⊗k}}(F^{⊗k}) ≤ (1 − ε)^k.

(1 − ε)^k is the success probability we’d get by running the optimal T-query algorithm on each of the k inputs. This is true for restricted classes of algorithms [NRS94], [Sha03]. Shaltiel [Sha03] defined fair Tk-query algorithms for F^{⊗k} as ones which make exactly T queries to each of the k inputs, and proved an ‘ideal’ DPT for these algorithms.

SLIDE 22

An ‘ideal’ DPT?

But Shaltiel also showed the ideal DPT is false in general! The message: we can sometimes solve F^{⊗k} more effectively by adaptive reallocation of queries. The counterexamples of [Sha03] apply to most computational models.

SLIDE 25

Our new DPT

We modify Shaltiel’s techniques for fair algorithms, to show a new DPT for unrestricted query algorithms.

SLIDE 26

Our new DPT

Theorem

For any Boolean function F and α > 0,

Suc_{T,µ}(F) ≤ 1 − ε ⇒ Suc_{αεTk,µ^{⊗k}}(F^{⊗k}) ≤ (2^{αε}(1 − ε))^k.

Success probability drops exponentially in k, if (number of queries) ≈ εTk. For α ≤ 1 we have 2^{αε}(1 − ε) ≤ 1 − ε + αε. Varying α gives a tradeoff between the query bound and the success probability. Shaltiel’s examples tell us this is a nearly optimal tradeoff (for most parameter settings).
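Reading the bound numerically may help. A small sketch of the query/success tradeoff in the theorem; the parameter values here are ours, chosen only for illustration:

```python
def dpt_bound(eps, alpha, k):
    """The theorem's k-fold success bound (2^{alpha*eps} * (1 - eps))^k,
    for algorithms making at most alpha*eps*T*k queries."""
    base = 2 ** (alpha * eps) * (1 - eps)
    return base, base ** k

eps, k = 0.5, 10
for alpha in (0.25, 0.5, 1.0):
    base, bound = dpt_bound(eps, alpha, k)
    # For alpha <= 1 the per-instance factor obeys
    # 2^{alpha*eps}(1 - eps) <= 1 - eps + alpha*eps < 1,
    # so the k-fold bound still decays exponentially in k.
    assert base <= 1 - eps + alpha * eps
    print(f"alpha={alpha}: query budget {alpha*eps:.3f}*T*k, "
          f"success bound {bound:.5f}")
```

Larger α buys a larger query budget at the price of a weaker (larger) success bound.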

SLIDE 30

Proof sketch

First, some definitions about a single n-bit input y ∼ µ to F. For v ∈ {0, 1, ∗}^n, let µ[v] denote y ∼ µ conditioned on the event [y_i = v_i for each i such that v_i ∈ {0, 1}]. E.g., if µ is uniform on 3 bits, then µ[00∗] is uniform on {000, 001}. (We can assume µ has full support.) Let |v| = the number of 0/1 entries in v.
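The restriction µ[v] is just conditioning and renormalizing. A minimal sketch reproducing the slide’s example (the helper name `restrict` and string encoding are ours):

```python
def restrict(mu, v):
    """mu[v]: condition mu on y_i = v_i wherever v_i is 0/1, renormalize."""
    keep = {y: p for y, p in mu.items()
            if all(c == '*' or y[i] == c for i, c in enumerate(v))}
    z = sum(keep.values())
    return {y: p / z for y, p in keep.items()}

mu = {format(i, '03b'): 1 / 8 for i in range(8)}   # uniform on 3-bit strings
print(restrict(mu, '00*'))                         # uniform on {'000', '001'}
print(len([c for c in '00*' if c != '*']))         # |v| = # of 0/1 entries: 2
```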

SLIDE 35

The k-fold setting

Say the algorithm R receives inputs x_1, …, x_k ∼ µ^{⊗k} and makes M = ⌊αεTk⌋ queries. For j ∈ {1, …, k} and t ≥ 0, let the random string v^j_t ∈ {0, 1, ∗}^n describe the bits of the j-th input x_j seen after R has made t queries overall (to the entire collection).

Claim

Conditioned on v^1_t, …, v^k_t, the k inputs remain independent, with x_j ∼ µ[v^j_t].

The proof is a simple calculation.

SLIDE 39

k inputs, k ‘fortunes’

For each input x_j and each step t ≥ 0, define a random variable X(j, t) ∈ [0, 1]. Think of the algorithm R as a gambler gambling at k tables, and consider X(j, t) his fortune at the j-th table after t steps (i.e., queries).

SLIDE 40

k inputs, k ‘fortunes’

Recall: v^j_t ∈ {0, 1, ∗}^n describes the queries made to x_j so far. If |v^j_t| ≤ T, say that input j is under-budget (after t steps); otherwise j is over-budget.

If j is under-budget, define X(j, t) as the maximum success probability of computing F(x_j) correctly, over algorithms making ≤ T − |v^j_t| queries to input x_j, under the distribution x_j ∼ µ[v^j_t].

Meaning: X(j, t) = the best possible ‘winning prospects’ for computing F(x_j), if we stay under-budget. Observe: X(j, t) ≥ 1/2 in this case.

SLIDE 45

k inputs, k ‘fortunes’

If j is over-budget, set X(j, t) = 1/2. Note: going over-budget can’t increase our fortune!

SLIDE 47

Unfavorable gambles

Two important properties:

  • 1. For all j, X(j, 0) ≤ 1 − ε (follows from our initial assumption that Suc_{T,µ}(F) ≤ 1 − ε).
  • 2. If R makes its next query at table j, then E[X(j, t + 1) | v^1_t, …, v^k_t] ≤ X(j, t), and X(j′, t + 1) = X(j′, t) for all j′ ≠ j.

(This follows from the definition of X(j, t) and the fact that the inputs remain independent.) So choosing input j to query next is like making an unfavorable gamble at the j-th table!

SLIDE 51

Bounding expectations

It follows that

E[∏_j X(j, t + 1) | v^1_t, …, v^k_t] ≤ ∏_j X(j, t),

so

E[∏_j X(j, t)] ≤ ∏_j X(j, 0) ≤ (1 − ε)^k for all 0 ≤ t ≤ M.
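The point is that the product of fortunes is a supermartingale: a query changes only one factor, and that factor cannot increase in expectation. A minimal numeric check of a single step; the fortunes and the specific gamble below are made up for illustration:

```python
from math import prod

# Fortunes at k = 3 tables before a query (illustrative values).
X = [0.8, 0.7, 0.9]
before = prod(X)

# R queries table j = 1. The outcome splits into cases whose average
# fortune does not exceed X[1]: an unfavorable gamble (property 2).
outcomes = [(0.5, 0.9), (0.5, 0.4)]   # (probability, new fortune at table 1)
assert sum(p * x for p, x in outcomes) <= X[1]

# The other factors are untouched, so the expected product cannot grow.
after = sum(p * prod([X[0], x, X[2]]) for p, x in outcomes)
assert after <= before
print(before, after)
```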

SLIDE 53

Success probability

What do the final fortunes X(j, M) tell us? If input j is under-budget after M queries, then for any guess y ∈ {0, 1},

Pr[y = F(x_j) | v^1_M, …, v^k_M] ≤ X(j, M).

If j is over-budget, then (trivially) for any y,

Pr[y = F(x_j) | v^1_M, …, v^k_M] ≤ 1 = 2 · (1/2) = 2X(j, M).

Also, these k events are independent, after we condition on the guesses (y_1, …, y_k) produced by R.

SLIDE 57

Success probability

Thus

Pr[R computes F^{⊗k} | v^1_M, …, v^k_M] ≤ 2^{|B|} ∏_j X(j, M),

where B = {j : input j is over-budget after M steps}. Counting queries, we have |B| < M/T ≤ (αεTk)/T = αεk. Thus

Pr[R computes F^{⊗k}] ≤ 2^{αεk} E[∏_j X(j, M)] ≤ 2^{αεk}(1 − ε)^k. QED

SLIDE 61

Seeking generalizations

There are many other DPT variants we’d like to prove, but our previous technique was rather specific:

  • We used the fact X(j, t) ≥ 1/2, which followed since F was Boolean. The result weakens as the output alphabet grows.
  • Bounding E[∏_j X(j, M)] helped us upper-bound Pr[R correct on all inputs], but we’d also like to bound Pr[R correct on most inputs].

Next: an approach to address both of these issues.

SLIDE 65

Seeking generalizations

Consider a more general setting than ours, in which a gambler plays games at k tables. Assume:

  • 1. The gambler has an initial endowment of (1 − ε) at every table.
  • 2. He cannot transfer funds between tables, or go into debt at a table.
  • 3. All games ‘favor the house’ (in expectation).
  • 4. The gambler can choose which game to play next, and at which table.

slide-70
SLIDE 70

Seeking generalizations

Suppose the gambler wishes to reach a fortune of 1 at every table. Reasoning similar to before gives Pr[success] ≤ (1 − ε)k. = winning odds if gambler plays independent ‘all or nothing’ bets at each table!

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 23/28
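A quick sanity check that the bound is tight (an illustrative sketch; k, eps, and trials are assumed values): with one independent all-or-nothing bet per table, each succeeding with probability 1 − ε, the overall success probability is exactly (1 − ε)^k.

```python
import random

def all_or_nothing_all_tables(k, eps, rng):
    """One independent all-or-nothing bet per table; the gambler
    succeeds only if the fortune reaches 1 at every table."""
    return all(rng.random() < 1.0 - eps for _ in range(k))

rng = random.Random(0)
k, eps, trials = 3, 0.2, 50000
est = sum(all_or_nothing_all_tables(k, eps, rng) for _ in range(trials)) / trials
print(est, (1 - eps) ** k)  # Monte Carlo estimate vs exact 0.8**3 = 0.512
```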

slide-73
SLIDE 73

Seeking generalizations

Now suppose the gambler’s goal is just to reach a fortune of 1 at ‘many’ tables. Here ‘many’ is specified by some monotone collection C of subsets of {1, . . . , k}. That is, (A ∈ C ∧ B ⊇ A) ⇒ B ∈ C. It’s natural to ask: does the ‘all or nothing’ strategy remain optimal?

Lemma (‘Gambling lemma’—informal)

YES! Under assumptions 1–4 above, independent all-or-nothing bets are an optimal strategy. The proof is a simple induction.

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 24/28
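The monotone collection C and the resulting success probability can be made concrete (an illustrative sketch; the threshold collection and parameters below are my choice, not from the talk). Under independent all-or-nothing bets, each table wins with probability 1 − ε, so Pr[set of winning tables ∈ C] is a sum over the sets in C; the Gambling Lemma says no strategy does better.

```python
from itertools import combinations

def is_monotone(C, k):
    """Check (A in C and B is a superset of A) => B in C, over subsets of {0..k-1}."""
    for A in C:
        for r in range(k + 1):
            for B in combinations(range(k), r):
                if A <= frozenset(B) and frozenset(B) not in C:
                    return False
    return True

def success_prob(C, k, eps):
    """Pr[set of winning tables lies in C] for independent
    all-or-nothing bets, each winning with probability 1 - eps."""
    p = 1.0 - eps
    total = 0.0
    for r in range(k + 1):
        for S in combinations(range(k), r):
            if frozenset(S) in C:
                total += p ** r * (1.0 - p) ** (k - r)
    return total

# Example: 'many' = at least 3 of k = 4 tables (a threshold collection).
k, eps, t = 4, 0.25, 3
C = {frozenset(S) for r in range(t, k + 1) for S in combinations(range(k), r)}
print(is_monotone(C, k))        # True: supersets of winning sets also win
print(success_prob(C, k, eps))  # 4*(0.75**3)*0.25 + 0.75**4 = 0.73828125
```

Threshold collections like this one are exactly the setting of the threshold DPTs mentioned later in the talk.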

slide-78
SLIDE 78

Further results

With this Gambling Lemma, we can derive a variety of new direct product-type theorems for query complexity:

  • threshold DPTs;
  • an XOR lemma;
  • DPTs for worst-case error;

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 25/28

slide-80
SLIDE 80

Even more DPTs...

  • DPTs for search problems and errorless heuristics;
  • DPTs for decision tree size (greatly improving on earlier ones [IRW94]);
  • DPTs for interactive puzzles, in which the algorithm talks with dynamic entities rather than querying static strings.

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 26/28

slide-81
SLIDE 81

What’s next?

Our proofs crucially used the conditional independence of the k separate inputs queried by an algorithm. Richer computational models (including the quantum query model) lack a simple analogue of this property, and this holds us back. But perhaps the ideas in our work can be helpful beyond the randomized query model.

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 27/28

slide-84
SLIDE 84

Thanks!

Andrew Drucker, Improved Direct Product Theorems for Randomized Query Complexity 28/28