Strong Direct Sum for Randomized Query Complexity
Conference on Computational Complexity New Brunswick, New Jersey July 18, 2019
Eric Blais Joshua Brody University of Waterloo Swarthmore College
Does computing f(x) on k copies scale with k?

Direct Sum Theorem: computing k copies of f requires k times the resources.
Direct Product Theorem: the success probability of computing k copies of f with << k resources is 2^{-Ω(k)}.
Strong Direct Sum: computing k copies of f with error ε requires >> k times the resources.
Corollary: there is f such that R_ε(f^k) = Θ(k·log(k)·R_ε(f)).
Strong Direct Sum for average query complexity: for any f and any k, computing f^k satisfies R_ε(f^k) = Θ(k·R_{ε/k}(f)).
Separation Theorem: for all ε > 2^{-N^{1/3}}, there is a total function f : {0,1}^N → {0,1} such that R_ε(f) = Θ(R(f)·log(1/ε)).
aka Decision Tree Complexity

Decision Tree for f : {0,1}^n → {0,1}: each internal node queries a bit x_i; each leaf outputs a value or aborts.
[Figure: decision tree querying x1, x3, x8, with leaves labeled 1 and abort]

Randomized DT: distribution A on decision trees.
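As a concrete sketch (hypothetical code, not from the talk): a deterministic decision tree can be represented as a nested structure, its cost on an input is the number of bits queried, and a randomized decision tree is just a distribution over deterministic trees.

```python
import random

# Hypothetical tree echoing the slide's picture: it queries x1, then x3 or
# x8 (0-indexed below), and each leaf outputs a bit or aborts.
TREE = {"query": 0,
        0: {"query": 2, 0: {"leaf": 1}, 1: {"leaf": "abort"}},
        1: {"query": 7, 0: {"leaf": 0}, 1: {"leaf": 1}}}

def run_tree(tree, x):
    """Evaluate a decision tree on input x; cost = number of bits queried."""
    cost, node = 0, tree
    while "leaf" not in node:
        cost += 1
        node = node[x[node["query"]]]
    return node["leaf"], cost

def run_randomized(trees, x, rng=random):
    """A randomized decision tree: sample a deterministic tree, then run it."""
    return run_tree(rng.choice(trees), x)
```

On any 8-bit input, this particular tree queries exactly two bits, so its worst-case cost is 2.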
Distributional QC D^μ_{δ,ε}(f): min over deterministic trees T of E_{x~μ}[cost(T,x)] s.t. Pr[abort] ≤ δ and Pr[error] ≤ ε.
Randomized QC R_{δ,ε}(f): minimum cost of a randomized algorithm s.t. Pr[abort] ≤ δ and Pr[error] ≤ ε.
Average-case Randomized QC R_ε(f): minimum average cost (acost) of a randomized algorithm s.t. Pr[error] ≤ ε.

Minimax Lemma: max_μ D^μ_{2δ,2ε}(f) ≤ R_{δ,ε}(f) ≤ max_μ D^μ_{δ/2,ε/2}(f)
Error Reduction: R_{O(1/t),O(1/t)}(f) ≤ O(log(t)·R_{1/2,1/3}(f))
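The error-reduction step can be illustrated by majority-vote amplification (a generic sketch with loosely chosen constants, not the talk's exact procedure): repeating a 1/3-error algorithm O(log t) times and taking the majority answer drives the error down to O(1/t).

```python
import random
from collections import Counter
from math import ceil, log2

def amplify(algo, x, t, rng, c=8):
    """Majority vote over c*log2(t)+1 independent runs of a 1/3-error
    algorithm; a Chernoff bound drives the error down to O(1/t).
    The constant c is chosen loosely for illustration."""
    reps = c * ceil(log2(max(t, 2))) + 1          # odd, so no ties
    votes = Counter(algo(x, rng) for _ in range(reps))
    return votes.most_common(1)[0][0]

# Toy base algorithm: outputs the parity of x, but errs with prob 1/3.
def noisy_parity(x, rng):
    truth = sum(x) % 2
    return truth if rng.random() < 2 / 3 else 1 - truth
```

The total query cost grows only by the O(log t) repetition factor, matching the R_{O(1/t),O(1/t)}(f) ≤ O(log(t)·R_{1/2,1/3}(f)) bound above.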
Average QC vs Aborts: δ·R_{δ,ε}(f) ≤ R_ε(f) ≤ R_{δ,(1-δ)ε}(f)/(1-δ)

First inequality: let A be an ε-error algorithm with acost(A) = q.
Algorithm B(x) { emulate A(x); abort if > q/δ queries }
By Markov's inequality, B aborts with probability ≤ δ.

Second inequality: let B' be a ((1-δ)ε)-error, δ-abort algorithm with q queries.
Algorithm A'(x) { repeat: emulate B'(x) until no abort }
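The two directions can be sketched as generic wrappers (illustrative code; `run` is a hypothetical algorithm returning an (answer, cost) pair):

```python
import random

def truncate(run, avg_cost, delta):
    """First direction: emulate an algorithm whose *average* cost is
    avg_cost, but abort after avg_cost/delta queries.  By Markov's
    inequality the abort probability is at most delta, and the
    worst-case cost is avg_cost/delta."""
    budget = avg_cost / delta
    def wrapped(x, rng):
        answer, cost = run(x, rng)
        if cost > budget:
            return "abort", budget   # stop querying once over budget
        return answer, cost
    return wrapped

def retry(run_aborting):
    """Second direction: rerun a delta-abort algorithm until it answers.
    The expected cost grows by a factor of 1/(1-delta)."""
    def wrapped(x, rng):
        total = 0
        while True:
            answer, cost = run_aborting(x, rng)
            total += cost
            if answer != "abort":
                return answer, total
    return wrapped
```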
Information Complexity: [MWY13, MWY15]
Direct Product Theorem: [Drucker 12]
Separation Theorems: [GPW15, ABBLSS17]
Direct Sum Theorems:
Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k·D^μ_{1/5, 40ε/k}(f))

Separation Theorem: there is f : {0,1}^N → {0,1} such that for all ε > 2^{-N^{1/3}}, we have R_{δ,ε}(f) = Ω(R_{1/3}(f)·log(1/ε)).

Corollary: there is f such that R_{1/3}(f^k) = Ω(k·log(k)·R_{1/3}(f)).
Proof: R_{1/3}(f^k) ≥ R_{0,1/3}(f^k) = Ω(k·R_{1/5, 40/(3k)}(f)) = Ω(k·log(k)·R_{1/3}(f)).

Key technical result — query-resistant codes: a probabilistic encoding G : Σ → {0,1}^N such that N/3 bits of G(x) are needed to learn anything about x.
Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k·D^μ_{1/5, 40ε/k}(f))

Let A be an ε-error algorithm for f^k with q queries.
Goal: an (ε/k)-error algorithm B for f with O(q/k) queries.
Let y = (y1, …, yk). Embed(y, i, x) := y with its i-th coordinate replaced by x.

Algorithm B(x) {
  carefully select y, i
  emulate A(Embed(y, i, x))
  abort if problems found
}

Intuition: success on a typical coordinate must be ≥ 1 − 10ε/k; otherwise the overall success probability is < (1 − 10ε/k)^k < 1 − ε.
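The Embed operation itself is straightforward; a minimal sketch:

```python
def embed(y, i, x):
    """Embed(y, i, x): the tuple y with its i-th coordinate replaced by x.
    Algorithm B feeds Embed(y, i, x) to A so that A's behaviour on the
    i-th coordinate answers the single-copy problem on x."""
    z = list(y)
    z[i] = x
    return tuple(z)

# embed(("y1", "y2", "y3"), 1, "x") == ("y1", "x", "y3")
```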
1 − ε ≤ Pr_{Y~μ^k}[A(Y) = f^k(Y)] = ∏_{i=1}^{k} Pr_{Y~μ^k}[A(Y)_i = f^k(Y)_i | A(Y)_{<i} = f^k(Y)_{<i}]

Want: i such that
(1) conditional error is very low: Pr[A errs on i-th coordinate | correct on coordinates < i] ≤ 10ε/k
(2) expected # of queries on the i-th coordinate is not too high: E[queries on i-th coordinate] ≤ 3q/k
Fact: at least 2k/3 coordinates satisfy (1). Fact: at least 2k/3 coordinates satisfy (2).
⟹ there is i* satisfying both (1) and (2). Set Y* := Embed(Y, i*, x).
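The two Facts are Markov-style counting arguments: fewer than k/3 coordinates can exceed each threshold, so the two good sets each have size ≥ 2k/3 and must intersect. A numeric sketch of the pigeonhole step (hypothetical helper, with the slide's thresholds):

```python
def pick_i_star(cond_errors, exp_queries, eps, q):
    """Given per-coordinate conditional errors (summing to O(eps)) and
    expected query counts (summing to at most q), Markov says fewer than
    k/3 coordinates can violate each threshold, so the two good sets
    each have >= 2k/3 elements and must intersect; return one common i*."""
    k = len(cond_errors)
    ok_err = {i for i, e in enumerate(cond_errors) if e <= 10 * eps / k}
    ok_qry = {i for i, c in enumerate(exp_queries) if c <= 3 * q / k}
    common = ok_err & ok_qry
    assert common, "impossible when both sets have size >= 2k/3"
    return min(common)
```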
This i* satisfies (1) and (2).
Markov Inequality: there is a fixing y* of the other coordinates for which (slightly relaxed versions of) both bounds still hold.

Algorithm B(x) {
  z := Embed(y*, i*, x)
  emulate A(z)
  abort if q_{i*}(z) > 120q/k
  abort if A(z)_{<i*} ≠ f^k(z)_{<i*}
}

Abort probability: 1/10 + 4ε < 1/5. Error probability: 40ε/k.
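The final accounting is simple arithmetic; a sanity check of the slide's numbers (illustrative bookkeeping only):

```python
def b_guarantees(eps, k):
    """Bookkeeping for Algorithm B: the two abort rules fire with total
    probability at most 1/10 + 4*eps (below 1/5 whenever eps < 1/40),
    and conditioned on not aborting the answer on coordinate i* is
    wrong with probability at most 40*eps/k."""
    return 1 / 10 + 4 * eps, 40 * eps / k
```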
PtrFcn : Σ^{n×n} → {0,1}. Each cell z ∈ Σ has a bit and an array of pointers (⊥ = null).
PtrFcn(X) := 1 iff [condition shown in figure]
[Figure: n×n grid of cells; rows of cells 1,[⊥,…,⊥] above rows of cells 0,[⊥,…,⊥]]
[GPW15, ABBLSS17, BB19]
Definition: a δN-query resistant code of Σ is a set of distributions {G(x)}_{x∈Σ} on {0,1}^N such that any δN bits of a sample reveal nothing about x.
Theorem [Chor et al. 85]: for any Σ, there is an (N/3)-query resistant code with N = 12.5·log(|Σ|). Furthermore, the conditional distributions G(x)|_S are uniform.
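A far simpler construction than the Chor et al. code already illustrates query resistance for Σ = {0,1}: XOR secret sharing encodes a bit as n random bits with the right parity, and any n−1 of them are uniformly distributed, so reading fewer than n bits reveals nothing. A sketch that checks this exactly by enumeration:

```python
from itertools import product

def xor_encodings(bit, n):
    """All n-bit strings whose XOR (parity) equals `bit`.  A real encoder
    G(bit) would sample one uniformly at random; enumerating them lets
    us verify secrecy exactly.  This is a toy (n-1)-query resistant code
    of {0,1}, much weaker than the Chor et al. construction, which
    resists N/3 queries over any alphabet."""
    out = []
    for prefix in product((0, 1), repeat=n - 1):
        last = bit
        for s in prefix:
            last ^= s
        out.append(prefix + (last,))
    return out

def project(encodings, drop):
    """Multiset of codewords with coordinate `drop` hidden."""
    return sorted(e[:drop] + e[drop + 1:] for e in encodings)
```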
For f : Σ^n → {0,1}, define F : {0,1}^{nN} → {0,1} as F(y1, …, yn) := f(h(y1), …, h(yn)).

Theorem: R^{cell}_{δ,ε}(f) ≤ (3/N)·R_{δ,ε}(F)

Proof: let A be a (q, δ, ε)-algorithm for F.
Algorithm B(x1, …, xn) {
  emulate A(G(x1), …, G(xn))
  when A queries G(xi) for the k-th time:
    if k < N/3: sample the bit of G(xi) conditioned on previous queries
    if k = N/3: sample xi
    if k > N/3: sample the bit of G(xi) conditioned on the previous history
}
(a) XOR Lemma  (b) Strong Direct Sum for MAJ