
Communication Complexity, Lecture 23: Computing with Remote Inputs

Communication Complexity Setting
- Alice wants to compute f(x, y)


Lower-Bounding χ(f)
- Fooling set: if S is a fooling set, no two input pairs from S can lie on the same tile of a monochromatic tiling, so χ(f) ≥ |S| for every fooling set S
- Rank lower bound: χ(f) ≥ Rank(M_f)
- Discrepancy lower bound: χ(f) ≥ 1/Disc(M_f)
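As a concrete sketch of the fooling-set bound (my example, not part of the lecture): for the equality function EQ on n-bit strings, the diagonal S = {(x, x)} is a fooling set of size 2^n, because every pair in S evaluates to 1, while for x ≠ y one of the crossed pairs (x, y) or (y, x) evaluates to 0. A brute-force check for n = 2:

```python
from itertools import product

n = 2
inputs = ["".join(bits) for bits in product("01", repeat=n)]

def eq(x, y):
    return 1 if x == y else 0

# Fooling set for EQ: all diagonal pairs (x, x).
S = [(x, x) for x in inputs]

# Fooling-set property: every pair in S has value 1, but for any two
# distinct members (x, x) and (y, y), a crossed pair (x, y) or (y, x)
# has value 0.  Hence no monochromatic rectangle contains two members
# of S, and chi(EQ) >= |S| = 2**n.
assert all(eq(x, y) == 1 for (x, y) in S)
for (x, _), (y, _) in product(S, S):
    if x != y:
        assert eq(x, y) == 0 or eq(y, x) == 0

print(f"fooling set of size {len(S)} -> chi(EQ) >= {len(S)}")
```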

Rank(M)
- Rank of a matrix: the maximum number of linearly independent rows (equivalently, columns)
- Linear independence: operations are taken in a field
- A rank-r matrix, after row and column reductions: M = UDV, where D (m×n) is diagonal with r 1's and the rest 0's
- Rank(M) ≤ r iff M can be written as a sum of at most r rank-1 matrices:
  M = UDV = Σ_{i≤r} D_ii U_i V_i = Σ_{i≤r} B_i, where U_i (m×1) is the i-th column of U, V_i (1×n) is the i-th row of V, and Rank(B_i) = 1
- Conversely, if M = Σ_{i≤r} B_i = UDV, then Rank(M) ≤ min{Rank(U), Rank(D), Rank(V)} ≤ Rank(D) = r
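The sum-of-rank-1-matrices characterization can be sketched in code (my toy example, not from the slides; the matrices are arbitrary, and rank is computed by Gaussian elimination over the rationals):

```python
from fractions import Fraction

def outer(u, v):
    """Rank-1 matrix u v^T from a column u and a row v."""
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def rank(M):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(A[0])):
        pivot = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# M = B1 + B2 is a sum of two rank-1 matrices, so Rank(M) <= 2.
B1 = outer([1, 2, 3], [1, 0, 1])
B2 = outer([0, 1, 1], [2, 1, 0])
M = mat_add(B1, B2)
assert rank(B1) == 1 and rank(B2) == 1
assert rank(M) <= 2
print("Rank(M) =", rank(M))
```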

χ(f) ≥ Rank(M_f)
- If M = Σ_{i≤r} B_i with Rank(B_i) = 1, then Rank(M) ≤ r
- M_f = Σ_i Tile_i, summing over the 1-tiles of a monochromatic tiling: Tile_i is 1 on its rectangle and 0 elsewhere (0-tiles contribute nothing)
- Rank(Tile_i) = 1, and there are at most χ(f) such tiles
- Hence Rank(M_f) ≤ χ(f)
- CC(f) ≥ log(χ(f)) ≥ log(Rank(M_f))
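A worked instance of this argument (my example): for equality on 2-bit strings, M_EQ is the 4×4 identity matrix, whose rank is 4, so χ(EQ) ≥ 4 and CC(EQ) ≥ 2. The tile-sum identity can be checked directly:

```python
N = 4  # M_EQ for 2-bit inputs: the 4x4 identity matrix
M = [[1 if i == j else 0 for j in range(N)] for i in range(N)]

# The 1-entries of M_EQ can only be tiled by 1x1 tiles: any rectangle
# containing (i, i) and (j, j) with i != j also contains the 0-entry
# (i, j).  So M is the sum of N rank-1 "tile" matrices.
tiles = []
for i in range(N):
    tile = [[1 if (r, c) == (i, i) else 0 for c in range(N)] for r in range(N)]
    tiles.append(tile)

total = [[sum(t[r][c] for t in tiles) for c in range(N)] for r in range(N)]
assert total == M

# Rank(M) = N for the identity matrix, and Rank(M) <= #1-tiles <= chi(EQ),
# so chi(EQ) >= 4 and CC(EQ) >= log2(4) = 2 bits.
print("chi(EQ) >=", N)
```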

Discrepancy
- Discrepancy of a 0-1 matrix: the maximum "imbalance" over all rectangles
- imbalance(rect) = |#1's − #0's| within the rectangle
- Disc(M) = (1/mn) · max_rect imbalance(rect)
- χ(f) ≥ 1/Disc(M_f)
- Disc(M_f) ≥ (size of the largest monochromatic tile)/(mn)
- Hence χ(f) ≥ mn/(size of the largest monochromatic tile)
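For toy matrices these quantities can be brute-forced over all rectangles (my sketch, checking χ(f) ≥ 1/Disc(M_f) on 2-bit equality; note how weak the resulting bound is here):

```python
from itertools import product

N = 4  # M_EQ for 2-bit equality: the 4x4 identity matrix
M = [[1 if i == j else 0 for j in range(N)] for i in range(N)]

def disc(M):
    """Disc(M) = (1/mn) * max over all rectangles of |#1's - #0's|."""
    m, n = len(M), len(M[0])
    best = 0
    # A rectangle is any subset of rows times any subset of columns.
    for rows in product([0, 1], repeat=m):
        for cols in product([0, 1], repeat=n):
            ones = sum(M[i][j] for i in range(m) if rows[i]
                       for j in range(n) if cols[j])
            area = sum(rows) * sum(cols)
            best = max(best, abs(2 * ones - area))  # |#1's - #0's|
    return best / (m * n)

d = disc(M)
# For M_EQ: 1/Disc = 2, far below the true chi(EQ) >= 4, so the
# discrepancy bound can be much looser than the rank bound.
print("Disc =", d, "-> chi(EQ) >=", 1 / d)
```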

CC Lower-Bounds Summary
- CC(f) ≥ log(#transcripts)
- Tiling lower bound: #transcripts ≥ χ(f)
- Both are fairly tight: CC(f) = O(log²(χ(f)))
- To lower-bound χ(f): fooling sets, rank, 1/Disc
- χ(f) ≥ |largest fooling set| and χ(f) ≥ Rank(M_f); any fooling set has size at most (Rank(M_f)+1)², so the rank bound is at most (roughly) quadratically weaker than the fooling-set bound
- 1/Discrepancy lower bounds can be very loose
- Conjecture (the log-rank conjecture): the rank bound is fairly tight, i.e., CC(f) = O(polylog(Rank(M_f)))

Many Variants
- Randomized protocols: significant savings in expectation
- Non-deterministic protocols: Alice and Bob are non-deterministic; "communication" now includes the shared guess
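To illustrate the randomized savings (a standard textbook example, not from these slides): equality costs n bits deterministically, but a public-coin protocol in which Alice announces the inner product of x with a shared random string uses one bit per round; if x ≠ y, each round exposes the difference with probability 1/2, so k rounds err with probability at most 2^(-k). A sketch:

```python
import random

def inner(x, r):
    """Inner product mod 2 of two bit vectors."""
    return sum(xi * ri for xi, ri in zip(x, r)) % 2

def randomized_eq(x, y, k=20, rng=random):
    """Public-coin protocol for EQ: k rounds, 1 bit of communication
    per round.  If x != y, a round catches the difference with
    probability 1/2, so the error probability is at most 2**-k."""
    n = len(x)
    for _ in range(k):
        r = [rng.randrange(2) for _ in range(n)]  # shared random string
        if inner(x, r) != inner(y, r):            # Alice's bit vs Bob's bit
            return 0  # definitely unequal
    return 1  # equal (with high probability)

random.seed(0)
n = 64
x = [random.randrange(2) for _ in range(n)]
y = list(x)          # identical to x
z = x[:]; z[0] ^= 1  # differs from x in one position
assert randomized_eq(x, y) == 1
assert randomized_eq(x, z) == 0  # wrong only with probability <= 2**-20
print("randomized EQ uses O(k) bits instead of n =", n)
```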
