
Discrepancy Theory and Applications to Bin Packing Thomas Rothvoss - PowerPoint PPT Presentation



  1. Discrepancy Theory and Applications to Bin Packing (Thomas Rothvoss, joint work with Becca Hoberg)

  2.–7. Discrepancy theory
  ◮ Set system S = {S_1, ..., S_m}, S_i ⊆ [n]
  ◮ Coloring χ : [n] → {−1, +1}
  ◮ Discrepancy: disc(S) = min_{χ : [n] → {±1}} max_{S ∈ S} | Σ_{i ∈ S} χ(i) |
  Known results:
  ◮ n sets, n elements: disc(S) = O(√n) [Spencer '85]
  ◮ Every element in ≤ t sets: disc(S) < 2t [Beck & Fiala '81]
  Main method: iteratively find a partial coloring.
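As an illustration of the definition above (not part of the talk), the discrepancy of a small set system can be computed by brute force over all 2^n colorings:

```python
from itertools import product

def disc(sets, n):
    """Brute-force disc(S): minimize over all 2^n colorings chi the
    maximum imbalance |sum_{i in S} chi(i)| over the sets."""
    return min(
        max(abs(sum(chi[i] for i in S)) for S in sets)
        for chi in product((-1, 1), repeat=n)
    )

# The "odd triangle" {0,1},{1,2},{0,2}: every 2-coloring of 3 elements
# leaves some pair monochromatic, so the discrepancy is 2.
print(disc([{0, 1}, {1, 2}, {0, 2}], 3))  # → 2
```

This exhaustive search is only feasible for tiny n; the point of the results above is that good colorings exist (and can be found) without enumeration.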

  8.–10. Discrepancy algorithm
  Theorem (R., FOCS 2014). For a convex symmetric set K ⊆ R^n with Pr[gaussian ∈ K] ≥ e^{−Θ(n)}, one can find a y ∈ K ∩ [−1,1]^n with |{i : y_i = ±1}| ≥ Θ(n) in poly-time.
  Algorithm:
  (1) Take a random Gaussian x*.
  (2) Compute y* = argmin{ ‖x* − y‖_2 : y ∈ K ∩ [−1,1]^n }.
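The two steps can be sketched numerically. The theorem holds for an arbitrary convex symmetric K, and the slides do not say how the argmin in step (2) is computed; the sketch below therefore makes two illustrative assumptions: K is a single symmetric slab {y : |⟨a, y⟩| ≤ λ}, and the projection is computed with Dykstra's alternating-projection method rather than a general convex solver.

```python
import numpy as np

def project_slab(z, a, lam):
    """Euclidean projection onto the slab {y : |<a, y>| <= lam}."""
    t = a @ z
    excess = max(abs(t) - lam, 0.0)
    return z - np.sign(t) * excess * a / (a @ a)

def project_K_cap_box(x, a, lam, iters=2000):
    """Dykstra's algorithm for the nearest point to x in
    slab(a, lam) ∩ [-1,1]^n.  (Plain alternating projection finds
    *a* point of the intersection, but not the nearest one.)"""
    y = x.copy()
    p = np.zeros_like(x)  # Dykstra correction term for the box
    q = np.zeros_like(x)  # Dykstra correction term for the slab
    for _ in range(iters):
        u = np.clip(y + p, -1.0, 1.0)       # project onto [-1,1]^n
        p = y + p - u
        y_new = project_slab(u + q, a, lam)  # project onto the slab
        q = u + q - y_new
        y = y_new
    return y

rng = np.random.default_rng(0)
n = 50
x_star = rng.standard_normal(n)   # step (1): random Gaussian x*
a = np.ones(n)                    # hypothetical slab normal
lam = np.sqrt(n)                  # slab wide enough that Pr[gaussian in K] is large
y_star = project_K_cap_box(x_star, a, lam)
# Many coordinates of y* end up clipped to ±1 -- exactly the partial
# coloring the theorem extracts.
```

In the actual algorithm K encodes the discrepancy constraints and the projection is a convex program; the slab here only illustrates the geometry of projecting a Gaussian point onto K ∩ [−1,1]^n.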

  11.–17. Analysis
  ◮ W.h.p. ‖x* − y*‖_2 ≥ √n / 5.
  ◮ Fact: for any set Q, Pr[gaussian ∈ Q] ≥ e^{−o(n)} ⇒ E[dist(gaussian, Q)] ≤ o(√n).
  ◮ Key observation: ‖y* − x*‖_2 = min{ ‖y − x*‖_2 : y ∈ K and |y_i| ≤ 1 for all tight i } = dist(x*, K ∩ STRIP).
  ◮ Suppose only o(n) coordinates are tight. Then STRIP is a strip over o(n) coordinates, and Pr[gaussian ∈ K ∩ STRIP] ≥ e^{−Ω(n)}.
  ◮ Then E[dist(gaussian, K ∩ STRIP)] ≤ o(√n). Contradiction!

  18. Application to Bin Packing

  19.–27. Bin Packing
  Input: items with sizes s_1, ..., s_n ∈ [0, 1].
  Goal: pack the items into the minimum number of bins of size 1.
  ◮ NP-hard to distinguish OPT ≤ 2 from OPT ≥ 3 [Garey & Johnson '79]
  ◮ First Fit Decreasing [Johnson '73]: APX ≤ (11/9) · OPT + 4
  ◮ [de la Vega & Lueker '81]: APX ≤ (1 + ε) · OPT + O(1/ε²) in time O(n) · f(ε)
  ◮ [Karmarkar & Karp '82]: APX ≤ OPT + O(log² OPT) in poly-time
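Of the algorithms above, First Fit Decreasing is simple enough to state in a few lines (a sketch, not from the talk): sort the items by decreasing size and place each into the first open bin that still has room.

```python
def first_fit_decreasing(sizes):
    """First Fit Decreasing: place each item, largest first, into the
    first open bin that still has room; open a new bin if none does."""
    loads = []  # current total size in each open bin
    for s in sorted(sizes, reverse=True):
        for j, load in enumerate(loads):
            if load + s <= 1.0:
                loads[j] += s
                break
        else:
            loads.append(s)  # no open bin fits: open a new one
    return len(loads)

# total size 3.0, and FFD indeed packs these into 3 bins
print(first_fit_decreasing([0.75, 0.25, 0.5, 0.5, 0.75, 0.25]))  # → 3
```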

  28.–31. The Gilmore Gomory LP relaxation
  ◮ b_i = number of items of size s_i.
  ◮ Feasible patterns: P = { p ∈ Z^n_{≥0} : Σ_{i=1}^n s_i · p_i ≤ 1 }.
  ◮ Gilmore Gomory LP relaxation:
        min  Σ_{p ∈ P} x_p
        s.t. Σ_{p ∈ P} p_i · x_p ≥ b_i   for all i ∈ [n]
             x_p ≥ 0                      for all p ∈ P
    In matrix form: min { 1^T x : Ax ≥ b, x ≥ 0 }, where the columns of A are the patterns in P.
  ◮ One can find x with 1^T x ≤ OPT_f + δ in time poly(‖b‖_1, 1/δ).
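For a small instance, the pattern set P defined above can be enumerated directly (an illustrative sketch; the tolerance eps guards against floating-point round-off on patterns that fill a bin exactly):

```python
def feasible_patterns(sizes, eps=1e-9):
    """All nonzero integer vectors p >= 0 with sum_i sizes[i]*p[i] <= 1."""
    n = len(sizes)
    patterns = []

    def extend(i, p, slack):
        if i == n:
            if any(p):
                patterns.append(tuple(p))
            return
        k = 0
        while k * sizes[i] <= slack + eps:  # try every count of item i that fits
            p[i] = k
            extend(i + 1, p, slack - k * sizes[i])
            k += 1
        p[i] = 0

    extend(0, [0] * n, 1.0)
    return patterns

pats = feasible_patterns([0.44, 0.4, 0.3, 0.26])
print(len(pats))  # → 23
```

The number of patterns is exponential in general, which is why the LP has exponentially many variables and is only solved approximately, as in the last bullet above.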

  32.–35. The Gilmore Gomory LP - Example
  Input: one item each of sizes 0.44, 0.4, 0.3, 0.26.
  The LP is min 1^T x subject to Ax ≥ b, x ≥ 0, with b = (1, 1, 1, 1)^T and pattern matrix

      A = [ 2 0 0 0 1 1 1 0 0 0 1 0
            0 2 0 0 1 0 0 1 1 0 0 1
            0 0 3 0 0 1 0 1 0 1 1 1
            0 0 0 3 0 0 1 0 1 1 1 1 ]

  An optimal fractional solution puts x_p = 1/2 on each of three patterns, e.g. (1,1,0,0), (1,0,1,1) and (0,1,1,1), covering every item exactly once, for LP value 3/2.
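The fractional solution shown on this slide (three bins, each used to the extent 1/2) can be checked mechanically; a small sketch, where the choice of the three supporting patterns is an assumption:

```python
sizes = (0.44, 0.4, 0.3, 0.26)
# assumed support of the fractional solution: three feasible patterns
support = [(1, 1, 0, 0), (1, 0, 1, 1), (0, 1, 1, 1)]

# each pattern must fit into a unit bin (tolerance for float round-off)
for p in support:
    assert sum(s * k for s, k in zip(sizes, p)) <= 1.0 + 1e-9

# with x_p = 1/2 on each pattern, every item type is covered exactly once,
# so Ax = b = (1,1,1,1) and the LP objective is 3 * 1/2 = 3/2
coverage = [sum(p[i] for p in support) * 0.5 for i in range(4)]
print(coverage)  # → [1.0, 1.0, 1.0, 1.0]
```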

  36.–40. Karmarkar-Karp's Grouping
  ◮ Partition the input items into consecutive groups, each of total size Σ s_i ∈ [2, 3].
  ◮ Round every item in a group up to the group's largest size; the new input then has only a few distinct sizes (here with multiplicities 3×, 4×, 4×).
  ◮ This grouping increases OPT by only O(log n).
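The grouping step can be sketched greedily (the exact grouping rule in Karmarkar-Karp is more careful; this version, using the [2, 3] size window from the slide, is only illustrative):

```python
def kk_grouping(sizes):
    """Greedy sketch of Karmarkar-Karp grouping: walk through the items
    in decreasing size, close a group once its total size reaches 2
    (so each closed group has total size in [2, 3), since items are <= 1),
    then round every item in a group up to the group's largest size."""
    groups, cur, total = [], [], 0.0
    for s in sorted(sizes, reverse=True):
        cur.append(s)
        total += s
        if total >= 2:
            groups.append(cur)
            cur, total = [], 0.0
    if cur:
        groups.append(cur)  # leftover group of total size < 2
    # rounding: each group becomes copies of its largest member
    return [[g[0]] * len(g) for g in groups]

print(kk_grouping([0.5] * 8))  # → [[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]]
```

After rounding, only one size per group remains, so the rounded instance has few distinct item sizes, which is what makes the pattern LP tractable; the rounding costs only O(log n) extra bins.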
