Strong Direct Sum for Randomized Query Complexity


  1. Strong Direct Sum for Randomized Query Complexity
     Eric Blais (University of Waterloo) and Joshua Brody (Swarthmore College)
     Conference on Computational Complexity, New Brunswick, New Jersey, July 18, 2019

  2. Outline • Introduction • Strong Direct Sum • Query Resistance • Separation Theorem • Open Problems

  3-5. Direct Sum Theorems
     Does computing f on k copies scale with k?
     Direct Sum Theorem: computing k copies of f requires k times the resources.
     Direct Product Theorem: the success probability of computing k copies of f with << k resources is 2^{-Ω(k)}.
     Strong Direct Sum: computing k copies of f with error ε requires >> k times the resources.

  6. Our Main Results
     Strong direct sum for average query complexity: for any f and any k, R_ε(f^k) = Θ(k · R_{ε/k}(f)).
     Separation Theorem: for all ε > 2^{-N^{1/3}}, there is a total function f : {0,1}^N → {0,1} such that R_ε(f) = Θ(R(f) · log(1/ε)).
     Corollary: there is an f such that R_ε(f^k) = Θ(k log(k) · R_ε(f)).

  7-9. Query Complexity (aka Decision Tree Complexity)
     [Figure: a decision tree querying x_1, x_3, x_8, with leaves labeled 0, 1, and ABORT.]
     Decision tree for f : {0,1}^n → {0,1}:
     • internal nodes labeled with input bits x_i
     • leaves labeled with an output or ABORT
     • cost(T, x): depth of T on input x
     Randomized decision tree: a distribution A over decision trees
     • cost(A) = max_{T,x} cost(T, x)
     • acost(A) = max_x E_{T~A}[cost(T, x)]
     Distributional QC, D^μ_{δ,ε}(f): minimum of E_{x~μ}[cost(T, x)] over deterministic trees T such that Pr[abort] ≤ δ and Pr[error] ≤ ε.
     Randomized QC, R_{δ,ε}(f): minimum cost of a randomized algorithm such that Pr[abort] ≤ δ and Pr[error] ≤ ε.
     Average-case randomized QC, R_ε(f): minimum acost of a randomized algorithm such that Pr[error] ≤ ε.
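The cost definitions above can be made concrete with a toy evaluator (an illustration of mine, not code from the talk; the tuple encoding of trees is an assumption):

```python
# Toy evaluator for the decision-tree definitions above (an illustration of
# mine; the tuple encoding is an assumption, not the talk's notation).
# A tree is either a leaf (0, 1, or "ABORT") or a node (i, left, right):
# query bit x_i, take left if x_i == 0 and right if x_i == 1.

def evaluate(tree, x):
    """Return (output, cost), where cost is the depth reached on input x."""
    cost = 0
    while isinstance(tree, tuple):
        i, left, right = tree
        tree = right if x[i] else left
        cost += 1
    return tree, cost

# A tree computing f(x) = x_1 AND x_3 (0-indexed) on 4-bit inputs:
T = (1, 0, (3, 0, 1))
print(evaluate(T, [0, 1, 0, 1]))  # -> (1, 2): queried x_1, then x_3
print(evaluate(T, [0, 0, 1, 1]))  # -> (0, 1): x_1 = 0 already decides f
```

Here cost(T) is the maximum depth (2); for a randomized tree, i.e. a distribution over such trees, acost averages the cost over the drawn tree while cost takes the worst case.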

  10-12. Basic Results
     Minimax Lemma: max_μ D^μ_{2δ,2ε}(f) ≤ R_{δ,ε}(f) ≤ max_μ D^μ_{δ/2,ε/2}(f)
     Error Reduction: R_{O(1/t),O(1/t)}(f) ≤ O(log(t) · R_{1/2,1/3}(f))
     Average QC vs. Aborts: δ · R_{δ,ε}(f) ≤ R_ε(f) ≤ R_{δ,(1-δ)ε}(f) / (1-δ)
     First inequality: given an ε-error algorithm A with acost(A) = q, run
       Algorithm B(x) { emulate A(x); abort if more than q/δ queries are made }
     Second inequality: given a δ-abort, (1-δ)ε-error algorithm B' with q queries, run
       Algorithm A'(x) { repeat: emulate B'(x), until no abort }
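The two conversions can be sketched as wrappers (my own framing; the `(answer, queries)` interface is an assumed stand-in for a real query algorithm):

```python
# Sketch (mine) of the two conversions above. An "algorithm" here is a
# function x -> (answer_or_"ABORT", queries_used).

def truncate(A, q, delta):
    """From an eps-error A with expected cost q, build a (delta-abort,
    eps-error) algorithm that never exceeds q/delta queries. By Markov's
    inequality the abort probability is at most delta."""
    def B(x):
        answer, queries = A(x)
        if queries > q / delta:
            return "ABORT", q / delta   # stop emulating past the cutoff
        return answer, queries
    return B

def restart(B_prime):
    """From a delta-abort B', repeat until a non-abort run. Expected total
    queries grow by a factor 1/(1 - delta); the error guarantee is now
    conditioned on not aborting, hence the (1 - delta) error factor."""
    def A_prime(x):
        total = 0
        while True:
            answer, queries = B_prime(x)
            total += queries
            if answer != "ABORT":
                return answer, total
    return A_prime
```

`truncate` gives the first inequality (δ · R_{δ,ε}(f) ≤ R_ε(f)); `restart` gives the second.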

  13. Previous Work
     Information complexity [MWY13, MWY15]: a strong direct sum for information complexity with aborts and error; applications to streaming/sketching algorithms.
     Direct product theorems [Drucker 12]: direct product theorems for randomized query complexity.
     Separation theorems [GPW15, ABBLSS17]: query complexity separations based on pointer functions; polynomial separation of R_0(f) vs. R_ε(f).
     Direct sum theorems: [Jain Klauck Santha 10]: R_ε(f^k) ≥ δ^2 · k · R_{ε/(1-δ)+δ}(f); [Ben-David Kothari 18]: R_ε(f^k) ≥ k · R_ε(f).

  14-16. Our Results
     Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k · D^μ_{1/5, 40ε/k}(f))
     Separation Theorem: there is f : {0,1}^N → {0,1} such that for all ε > 2^{-N^{1/3}}, R_{δ,ε}(f) = Ω(R_{1/3}(f) · log(1/ε))
     Corollary: there is an f such that R_{1/3}(f^k) = Ω(k log(k) · R_{1/3}(f))
     Proof: R_{1/3}(f^k) ≥ R_{0,1/3}(f^k) = Ω(k · R_{1/5, 40/(3k)}(f)) = Ω(k log(k) · R_{1/3}(f)), where the last step applies the Separation Theorem with ε = 40/(3k), so log(1/ε) = Θ(log k).
     Key technical result — query-resistant codes: a probabilistic encoding G : Σ → {0,1}^N such that N/3 bits of G(x) must be read to learn anything about x.

  17. Outline • Introduction • Strong Direct Sum • Query Resistance • Separation Theorem • Open Problems

  18-20. Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k · D^μ_{1/5, 40ε/k}(f))
     Let A be an ε-error algorithm for f^k with q queries.
     Goal: an (ε/k)-error algorithm B for f with q/k queries.
     Let y = (y_1, …, y_k). Embed(y, i, x) := y with its i-th coordinate replaced by x.
     Algorithm B(x) { carefully select y, i; emulate A(Embed(y, i, x)); abort if problems are found }
     Intuition: A must succeed on a typical coordinate with probability ≥ 1 - 10ε/k; otherwise its overall success probability would be < (1 - 10ε/k)^k < 1 - ε.
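A quick numeric check of this intuition (my own sanity check, not part of the talk):

```python
# Numeric sanity check (mine) of the intuition above: if each of the k
# coordinates succeeded with probability < 1 - 10*eps/k, the overall success
# probability of A would be < (1 - 10*eps/k)^k < 1 - eps, a contradiction.
for eps in [0.01, 0.1, 0.3]:
    for k in [10, 100, 1000]:
        assert (1 - 10 * eps / k) ** k < 1 - eps, (eps, k)
print("intuition verified for all tested (eps, k)")
```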

  21-23. Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k · D^μ_{1/5, 40ε/k}(f))
     1 - ε ≤ Pr_{Y~μ^k}[A(Y) = f^k(Y)] = ∏_{i=1}^{k} Pr_{Y~μ^k}[A(Y)_i = f^k(Y)_i | A(Y)_{<i} = f^k(Y)_{<i}]
     Want: a coordinate i such that
     (1) the conditional error is very low: Pr[A errs on the i-th coordinate | correct on coordinates < i] ≤ 10ε/k, and
     (2) the expected number of queries on the i-th coordinate is not too high: E[queries on the i-th coordinate] ≤ 3q/k.
     Fact: at least 2k/3 coordinates satisfy (1). Fact: at least 2k/3 coordinates satisfy (2).
     ⟹ there is an i* satisfying both (1) and (2). Set Y* := Embed(Y, i*, x).
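Both facts are averaging (Markov-style) arguments: the per-coordinate query counts sum to at most q, so few coordinates can be expensive, and similarly few can carry high conditional error. An illustrative check of mine on synthetic per-coordinate data (the values are made up):

```python
import random

# Illustrative averaging argument (mine, on synthetic data): if k nonnegative
# values sum to q, at most k/3 of them exceed 3q/k (Markov's inequality).
# The same holds for the conditional errors in (1); since each "bad" set has
# size at most k/3, some coordinate i* avoids both.
random.seed(0)
k, q = 99, 1000.0

queries = [random.expovariate(1.0) for _ in range(k)]
scale = q / sum(queries)
queries = [v * scale for v in queries]        # per-coordinate queries, sum = q
bad2 = {i for i, v in enumerate(queries) if v > 3 * q / k}
assert len(bad2) <= k // 3                    # so >= 2k/3 satisfy (2)

errors = [random.expovariate(1.0) for _ in range(k)]
total = sum(errors)                           # stand-in total error budget
bad1 = {i for i, v in enumerate(errors) if v > 10 * total / k}
assert len(bad1) <= k // 10                   # so >= 9k/10 satisfy (1)

good = set(range(k)) - bad1 - bad2
assert good                                   # some i* satisfies (1) and (2)
print("one valid choice: i* =", min(good))
```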

  24-26. Strong Direct Sum Theorem: D^{μ^k}_{0,ε}(f^k) = Ω(k · D^μ_{1/5, 40ε/k}(f))
     This i* satisfies:
     1. E_{Y~μ^k}[ Pr_{x~μ}[A(Y*)_{<i*} ≠ f^k(Y*)_{<i*}] ] ≤ ε
     2. E_{Y~μ^k}[ Pr_{x~μ}[A(Y*)_{i*} ≠ f^k(Y*)_{i*} | A(Y*)_{<i*} = f^k(Y*)_{<i*}] ] ≤ 10ε/k
     3. E_{Y~μ^k}[ E_x[q_{i*}(Y*)] ] ≤ 3q/k
     Markov's inequality: there is a fixed y* (now Y* = Embed(y*, i*, x)) such that:
     1. Pr_{x~μ}[A(Y*)_{<i*} ≠ f^k(Y*)_{<i*}] ≤ 4ε
     2. Pr_{x~μ}[A(Y*)_{i*} ≠ f^k(Y*)_{i*} | A(Y*)_{<i*} = f^k(Y*)_{<i*}] ≤ 40ε/k
     3. E_x[q_{i*}(Y*)] ≤ 12q/k
     Algorithm B(x) {
       z := Embed(y*, i*, x)
       emulate A(z)
       abort if q_{i*}(z) > 120q/k
       abort if A(z)_{<i*} ≠ f^k(z)_{<i*}
     }
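The final reduction can be sketched in Python (schematic and mine; `A`, `f_k`, and the per-coordinate query counts are stand-ins for the emulation machinery):

```python
# Schematic sketch (mine) of the final reduction. A, f_k, and the per-
# coordinate query counts are stand-ins for the emulation machinery; y_star
# and i_star are the fixed values obtained via Markov's inequality above.

def embed(y, i, x):
    """y with its i-th coordinate replaced by x."""
    z = list(y)
    z[i] = x
    return tuple(z)

def B(x, A, f_k, y_star, i_star, q, k):
    z = embed(y_star, i_star, x)
    answers, queries_per_coord = A(z)             # emulate A on the embedded input
    if queries_per_coord[i_star] > 120 * q / k:   # too many queries on coordinate i*
        return "ABORT"
    if answers[:i_star] != f_k(z)[:i_star]:       # A erred on an earlier coordinate
        return "ABORT"
    return answers[i_star]
```

Conditioned on not aborting, B's answer errs with probability O(ε/k) and B makes O(q/k) queries in expectation, which is what the theorem needs.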
