The Entropy Rounding Method in Approximation Algorithms, Thomas Rothvoß - PowerPoint PPT Presentation

SLIDE 1

The Entropy Rounding Method in Approximation Algorithms

Thomas Rothvoß

Department of Mathematics, M.I.T.

Cargèse 2011

SLIDES 2-6

A general rounding problem

Problem:

◮ Given: A ∈ R^{n×m}, fractional solution x ∈ [0, 1]^m
◮ Find: y ∈ {0, 1}^m with Ax ≈ Ay

General strategies:

◮ Randomized rounding: set Pr[y_i = 1] = x_i (in)dependently
◮ Use properties of basic solutions: |supp(x)| ≤ #rows(A)

Try another way:

◮ "Entropy rounding method", based on discrepancy theory

Application:

◮ An (OPT + O(log² OPT))-algorithm for Bin Packing With Rejection

SLIDES 7-12

Discrepancy theory

◮ Set system S = {S_1, . . . , S_m}, S_i ⊆ [n]
◮ Coloring χ : [n] → {−1, +1}
◮ Discrepancy:

  disc(S) = min_{χ:[n]→{±1}} max_{S∈S} |χ(S)|,   where χ(S) = Σ_{i∈S} χ(i)

Known results:

◮ n sets, n elements: disc(S) = O(√n) [Spencer '85]
◮ Every element in ≤ t sets: disc(S) < 2t [Beck & Fiala '81]; conjecture: disc(S) ≤ O(√t)

More definitions:

◮ Partial coloring: χ : [n] → {0, −1, +1}
◮ Half coloring: χ : [n] → {0, −1, +1} with |supp(χ)| ≥ n/2
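Since disc(S) minimizes over all 2^n colorings, tiny instances can be checked exhaustively. A minimal brute-force sketch (the set systems below are made-up examples, not from the talk):

```python
from itertools import product

def discrepancy(sets, n):
    """Brute-force disc(S): minimize over all colorings chi in {-1,+1}^n
    the maximum |chi(S)| = |sum_{i in S} chi(i)| over the sets S."""
    best = None
    for chi in product((-1, 1), repeat=n):
        worst = max(abs(sum(chi[i] for i in S)) for S in sets)
        if best is None or worst < best:
            best = worst
    return best

# A "path" system over 4 elements: chi = (+1, -1, +1, -1) balances every set.
print(discrepancy([{0, 1}, {1, 2}, {2, 3}], 4))  # → 0
# A single odd-size set can never be balanced exactly.
print(discrepancy([{0, 1, 2}], 3))               # → 1
```

This is exponential in n and only meant to make the definition concrete.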

SLIDES 13-14

Matrix discrepancy

◮ For a matrix A:

  disc(A) = min_{χ∈{±1}^n} ‖Aχ‖_∞

◮ If A is the incidence matrix of a set system S (one row per set), then disc(S) = disc(A).

Theorem (Lovász, Spencer & Vesztergombi '86)

Finding good colorings ⇒ for every x ∈ [0, 1]^m one can find y ∈ {0, 1}^m with ‖Ax − Ay‖_∞ small.

SLIDES 15-21

Entropy rounding (simple version)

◮ Input: A ∈ R^{n×m}, x ∈ [0, 1]^m
◮ Assume: every column submatrix A′ ⊆ A admits a half-coloring χ with ‖A′χ‖_∞ ≤ ∆
◮ Output: y ∈ {0, 1}^m with ‖Ax − Ay‖_∞ ≤ O(log m) · ∆

(1) y := x
(2) FOR phase k = last bit TO 1 DO
(3)   Call y_i active if its kth bit is 1
(4)   Find half-coloring χ : active variables → {−1, +1, 0}
(5)   Update y′ := y + (1/2)^k · χ
(6)   REPEAT WHILE ∃ active variable

Example (one iteration of phase k = 2, numbers in binary):

  y  = (0.11, 0.10, 0.11, 0.01)    variables 1, 3, 4 are active
  χ  = (+1, −1, 0)                 on the active variables
  y′ = (1.00, 0.10, 0.10, 0.01)

Analysis:

◮ A phase has ≤ log m iterations, since each half-coloring fixes the kth bit of at least half of the active variables
◮ During phase k: ‖Ay′ − Ay‖_∞ = (1/2)^k · ‖Aχ‖_∞ ≤ (1/2)^k · ∆
◮ Triangle inequality:

  ‖Ax − Ay‖_∞ ≤ Σ_{k≥1} Σ_{t=1}^{log m} (1/2)^k · ∆ = O(log m) · ∆
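The loop (1)-(6) can be sketched in code. The half-coloring in step (4) is the hard part (Beck's entropy method, or Bansal's SDP); the stub below only guarantees the support condition |supp(χ)| ≥ (#active)/2, not the ‖A′χ‖_∞ ≤ ∆ bound, so this is an illustration of the bit-clearing mechanics only:

```python
import numpy as np

def half_coloring(v):
    """Stub for step (4): pair up the v active variables as (+1, -1); the
    leftover (if v is odd) gets +1. Satisfies |supp(chi)| >= v/2 but NOT
    the ||A'chi||_inf <= Delta guarantee of the real method."""
    chi = np.zeros(v)
    chi[0::2] = 1.0
    chi[1::2] = -1.0
    return chi

def entropy_round(x, bits):
    """Steps (1)-(6): round x in [0,1]^m, given to 'bits' binary digits,
    to y in {0,1}^m by clearing bits from the least significant upward."""
    y = np.round(np.asarray(x, dtype=float) * 2 ** bits) / 2 ** bits
    for k in range(bits, 0, -1):                     # phase k
        while True:
            active = np.where(np.round(y * 2 ** k).astype(int) % 2 == 1)[0]
            if active.size == 0:
                break
            chi = half_coloring(active.size)
            y[active] += chi * 2.0 ** (-k)           # clears bit k where chi != 0
    return y

y = entropy_round([0.75, 0.5, 0.75, 0.25], bits=2)
print(y.tolist())  # every entry ends up 0.0 or 1.0
```

Each phase terminates after ≤ log m iterations because every half-coloring clears the kth bit of at least half of the active variables, matching the analysis above.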

SLIDES 22-26

Entropy

Definition

For a random variable Z, the entropy is

  H(Z) = Σ_z Pr[Z = z] · log₂(1 / Pr[Z = z])

◮ Example: Pr[Z = a] = p and Pr[Z = b] = 1 − p gives the binary entropy H(Z) = p · log(1/p) + (1 − p) · log(1/(1 − p)).

Properties:

◮ Uniform distribution maximizes entropy: if Z attains k distinct values, then H(Z) ≤ log₂(k) (attained if Pr[Z = z] = 1/k for all z).
◮ One likely event: ∃z : Pr[Z = z] ≥ (1/2)^{H(Z)}.
◮ Subadditivity: H(f(Z, Z′)) ≤ H(Z) + H(Z′).
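A quick numeric check of the definition and the first two properties (the distributions are made-up):

```python
import math

def entropy(probs):
    """H(Z) = sum_z Pr[Z = z] * log2(1 / Pr[Z = z]); terms with p = 0 vanish."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Uniform distribution on k values maximizes entropy: H = log2(k).
k = 8
print(entropy([1.0 / k] * k))  # → 3.0, i.e. log2(8)

# "One likely event": some outcome has probability >= (1/2)^H(Z).
skewed = [0.5, 0.25, 0.125, 0.125]
H = entropy(skewed)            # 0.5*1 + 0.25*2 + 2*0.125*3 = 1.75
assert max(skewed) >= 0.5 ** H
```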

SLIDES 27-28

Chernoff-type bound

Lemma

Let X_1, . . . , X_n be independent random variables with Pr[X_i = ±1] = 1/2. Then

  Pr[ |Σ_{i=1}^n X_i| ≥ λ√n ] ≤ 2e^{−λ²/2}.

More generally, with Pr[X_i = ±α_i] = 1/2:

  Pr[ |Σ_{i=1}^n X_i| ≥ λ‖α‖₂ ] ≤ 2e^{−λ²/2}.

◮ Here ‖α‖₂ is the standard deviation: Var[Σ_i X_i] = Σ_i E[(X_i − E[X_i])²] = Σ_{i=1}^n α_i² = ‖α‖₂².
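For small n the tail probability can be computed exactly by enumerating all 2^n sign patterns and compared against the bound:

```python
from itertools import product
from math import exp, sqrt

def tail_prob(n, t):
    """Exact Pr[|X_1 + ... + X_n| >= t] for independent uniform +-1 signs,
    by enumerating all 2^n sign patterns (fine for small n)."""
    hits = sum(1 for s in product((-1, 1), repeat=n) if abs(sum(s)) >= t)
    return hits / 2 ** n

# The exact tail never exceeds the bound 2 * exp(-lam^2 / 2).
n = 12
for lam in (0.5, 1.0, 1.5, 2.0):
    assert tail_prob(n, lam * sqrt(n)) <= 2 * exp(-lam ** 2 / 2)

print(tail_prob(4, 4))  # only the all-(+1) and all-(-1) patterns: 2/16 = 0.125
```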

SLIDES 29-30

An isoperimetric inequality

Lemma (special case of an isoperimetric inequality, Kleitman '66)

For any X ⊆ {0, 1}^n of size |X| ≥ 2^{0.8n} and n ≥ 2, there are x, y ∈ X with ‖x − y‖₁ ≥ n/2.

◮ Proof with a weaker constant: the ball of radius n/10 around any x ∈ X contains only

  Σ_{0≤q<n/10} C(n, q) ≤ (en/(n/10))^{n/10} < 2^{0.8n}

  points, so some y ∈ X lies at distance ‖x − y‖₁ ≥ n/10.
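The lemma is small enough to verify exhaustively for n = 4, where 2^{0.8n} ≈ 9.19 forces |X| ≥ 10:

```python
from itertools import combinations
from math import ceil

# Exhaustive check for n = 4: every X in {0,1}^4 with |X| >= 2^(0.8n)
# (i.e. |X| >= 10) contains x, y at Hamming distance >= n/2 = 2.
n = 4
size = ceil(2 ** (0.8 * n))                  # 10
for X in combinations(range(2 ** n), size):  # vectors encoded as bitmasks
    assert any(bin(x ^ y).count("1") >= n // 2
               for x, y in combinations(X, 2))
print("verified for n =", n)
```

This checks all C(16, 10) = 8008 candidate sets; the general statement of course needs the counting argument above.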

SLIDES 31-44

Theorem [Beck's entropy method]

  H_{χ∈{±1}^m}( ⌈Aχ/(2∆)⌉ ) ≤ m/5  ⇒  ∃ half-coloring χ₀ : ‖Aχ₀‖_∞ ≤ ∆.

(Here χ has independent uniform ±1 entries, and ⌈Aχ/(2∆)⌉ records, for each row i, which cell of width 2∆ the value A_iχ falls into.)

Proof sketch:

◮ ∃ cell : Pr[Aχ ∈ cell] ≥ (1/2)^{m/5}.
◮ So at least 2^m · (1/2)^{m/5} = 2^{0.8m} colorings χ have Aχ ∈ cell.
◮ By the isoperimetric inequality, pick χ₁, χ₂ in that cell differing in at least half of their entries.
◮ Define χ₀(i) := (χ₁(i) − χ₂(i))/2 ∈ {0, ±1}; then |supp(χ₀)| ≥ m/2.
◮ ‖Aχ₀‖_∞ ≤ (1/2) · ‖Aχ₁ − Aχ₂‖_∞ ≤ ∆, since Aχ₁ and Aχ₂ lie in the same cell of width 2∆.
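The pigeonhole argument translates directly into a construction, brute force and exponential in m, so only for illustration; the matrix and ∆ below are made-up:

```python
import numpy as np
from itertools import product

def beck_half_coloring(A, delta):
    """Pigeonhole step behind Beck's entropy method: bucket A*chi into
    cells of width 2*delta, take two colorings chi1, chi2 in the most
    popular cell differing in >= m/2 coordinates, and return
    chi0 = (chi1 - chi2) / 2 in {0, -1, +1}^m."""
    m = A.shape[1]
    cells = {}
    for chi in product((-1, 1), repeat=m):
        key = tuple(np.floor(A @ np.array(chi) / (2 * delta)).astype(int))
        cells.setdefault(key, []).append(np.array(chi))
    popular = max(cells.values(), key=len)
    for c1 in popular:
        for c2 in popular:
            if np.sum(c1 != c2) >= m / 2:
                return (c1 - c2) // 2
    return None  # can happen if the entropy condition fails

A = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0]])
chi0 = beck_half_coloring(A, delta=1.0)
if chi0 is not None:
    assert np.abs(A @ chi0).max() <= 1.0       # ||A chi0||_inf <= delta
    assert (chi0 != 0).sum() >= A.shape[1] / 2  # half-coloring
```

On this 2×4 example the most popular cell contains four colorings, and the construction succeeds.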

SLIDE 45

A slight generalization

Theorem

For any auxiliary function f(χ) with ‖Aχ − f(χ)‖_∞ ≤ ∆:

  H_{χ∈{±1}^m}(f(χ)) ≤ m/5  ⇒  ∃ half-coloring χ₀ : ‖Aχ₀‖_∞ ≤ ∆.

SLIDE 46

A bound on the entropy

Lemma

For α ∈ R^m and ∆ > 0, with λ := ∆/‖α‖₂:

  H_{χ∈{±1}^m}( ⌈αᵀχ/(2∆)⌉ ) ≤ G(λ),   where

  G(λ) := 9e^{−λ²/5}          if λ ≥ 2,
  G(λ) := log₂(32 + 64/λ)     if λ < 2.

◮ Qualitatively, the bound on H(⌈αᵀχ/(2∆)⌉) behaves like e^{−Ω(λ²)} for λ ≥ 2 and like O(log(1/λ)) + O(1) for λ < 2.
SLIDES 47-50

Proof - Case λ ≥ 2

◮ Recall ∆ = λ · ‖α‖₂ with λ ≥ 2, and Z = ⌈αᵀχ/(2∆)⌉; so Z = 0 means αᵀχ ∈ (−∆, ∆], Z = 1 means αᵀχ ∈ (∆, 3∆], and so on.
◮ Pr[Z = 0] ≥ 1 − e^{−Ω(λ²)}, so the Z = 0 term contributes Pr[Z = 0] · log(1/Pr[Z = 0]) ≤ e^{−Ω(λ²)}.
◮ For i ≠ 0: Pr[Z = i] ≤ Pr[αᵀχ is ≥ iλ times the standard deviation] ≤ e^{−Ω(i²λ²)}, so Pr[Z = i] · log(1/Pr[Z = i]) ≤ e^{−Ω(i²λ²)} · log(e^{i²λ²}).
◮ Summing over i:

  H(Z) = Σ_i Pr[Z = i] · log(1/Pr[Z = i]) ≤ e^{−Ω(λ²)}
SLIDES 51-53

Proof - Case λ < 2

◮ Now the cells of width 2∆ are finer than the standard deviation ‖α‖₂; group them into blocks of length 2‖α‖₂ (boundaries at ±‖α‖₂, ±3‖α‖₂, . . .), so each block contains 2‖α‖₂/(2∆) = 1/λ cells.
◮ Subadditivity:

  H(Z) ≤ H(which block of length 2‖α‖₂) + H(cell index within the block) ≤ O(1) + O(log(1/λ))

SLIDES 54-56

Entropy rounding (extended version)

Algorithm:

◮ Input: A ∈ [−1, 1]^{n×m}, x ∈ [0, 1]^m
◮ Row weights w(i) (Σ_i w(i) = 1; w(i) ≥ 0)

(1) y := x
(2) FOR phase k = last bit TO 1 DO
(3)   Call y_i active if its kth bit is 1
(4)   Find half-coloring χ : active variables → {−1, +1, 0} with |A_iχ| ≤ ∆_i, where ∆_i satisfies G(∆_i / √#active var) ≤ w(i) · (#active var)/5
(5)   Update y′ := y + (1/2)^k · χ
(6)   REPEAT WHILE ∃ active variable

◮ Such a half-coloring exists in each step, since by subadditivity and the entropy lemma

  H( (⌈A_iχ/(2∆_i)⌉)_i ) ≤ Σ_{i=1}^n H(⌈A_iχ/(2∆_i)⌉) ≤ Σ_{i=1}^n G(∆_i / √#act. var) ≤ (#act. var)/5

◮ Here we used that each row α of A lies in [−1, 1]^{m′} and hence ‖α‖₂ ≤ √m′ = √#act. var.

SLIDES 57-65

Entropy rounding (extended version) (2)

◮ Consider row i while rounding bit k, and let v := #active variables. The entropy budget w(i) · v/5 translates into a bound ∆_i that behaves like

  ∆_i ∼ (1/2)^{v·w(i)} · √v         while v · w(i) is large,
  ∆_i ∼ √(ln(1/(v·w(i)))) · √v      once v · w(i) is small,

  with a maximum of Θ(√(1/w(i))) around v ≈ 1/w(i).
◮ Across the iterations it 1, it 2, . . . , it log m of a phase, v halves each time, so the ∆_i values rise geometrically up to the peak Θ(√(1/w(i))) and then fall off again.
◮ By convergence of this series: |A_ix − A_iy| ≤ O(√(1/w(i))).

SLIDES 66-69

Example: Discrepancy of set systems

◮ Given: set system S with n sets and n elements; let A be its incidence matrix.
◮ Pick x := (1/2, . . . , 1/2) and weight w(i) := 1/n for each row.
◮ The theorem yields y ∈ {0, 1}^n with ‖Ax − Ay‖_∞ = O(√(1/(1/n))) = O(√n).
◮ The coloring χ(i) := +1 if y_i = 1, −1 if y_i = 0 equals 2(y − x), so it has discrepancy ‖Aχ‖_∞ = 2‖Ay − Ax‖_∞ = O(√n).
◮ This recovers the "6 standard deviations suffice" theorem [Spencer '85].

SLIDES 70-73

Summarizing

Theorem

Input:

◮ matrix A ∈ [−1, 1]^{n×m} such that for every column submatrix A′ ⊆ A there is an auxiliary f with −∆ ≤ A′χ − f(χ) ≤ ∆ and H(f(χ)) ≤ #cols(A′)/10
◮ vector x ∈ [0, 1]^m
◮ row weights w(i) (Σ_i w(i) = 1)

There is a random variable y ∈ {0, 1}^m with

◮ Bounded difference: |A_ix − A_iy| ≤ O(log m) · ∆_i and |A_ix − A_iy| ≤ O(√(1/w(i)))
◮ Preserved expectation: E[y_i] = x_i
◮ Randomness: y = x + Σ_{k≥1} Σ_{t=1}^{log m} (1/2)^k · (random ±1) · χ^{(k,t)}
◮ y can be computed by an SDP in poly-time, using [Bansal '10]

SLIDES 74-78

Bin Packing With Rejection

Input:

◮ Items i ∈ {1, . . . , n} with size s_i ∈ [0, 1] and rejection penalty π_i ∈ [0, 1]
◮ Goal: pack or reject each item; minimize #bins + rejection cost.

Example on the slides: four items with penalties π_i = 0.9, 0.7, 0.4, 0.6 are placed one by one into unit-size bins 1 and 2, or rejected (paying π_i).

SLIDES 79-81

Known results

Bin Packing With Rejection:

◮ APTAS [Epstein '06]
◮ Faster APTAS [Bein, Correa & Han '08]
◮ AFPTAS with APX ≤ OPT + OPT/(log OPT)^{1−o(1)} [Epstein & Levin '10]

Bin Packing:

◮ APX ≤ OPT + O(log² OPT) [Karmarkar & Karp '82]

Theorem

There is a randomized approximation algorithm for Bin Packing With Rejection with APX ≤ OPT + O(log² OPT) (with high probability).

SLIDE 82

The column-based LP

Set Cover formulation:

◮ Bins: sets S ⊆ [n] with Σ_{i∈S} s_i ≤ 1, of cost c(S) = 1
◮ Rejections: sets S = {i}, of cost c(S) = π_i

LP:

  min Σ_{S∈S} c(S) · x_S   s.t.   Σ_{S∈S} 1_S · x_S ≥ 1,   x_S ≥ 0 ∀S ∈ S
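A sketch of the column generation for a tiny instance (sizes and penalties loosely follow the example on the next slides; the fractional solution shown is feasible but not claimed optimal):

```python
from itertools import combinations

# Columns of the Set Cover LP for a small Bin Packing With Rejection instance.
sizes = [0.44, 0.40, 0.30, 0.26]
penalty = [0.9, 0.7, 0.4, 0.6]
n = len(sizes)

columns, cost = [], []
for r in range(1, n + 1):                        # bin columns: S with sum <= 1, cost 1
    for S in combinations(range(n), r):
        if sum(sizes[i] for i in S) <= 1 + 1e-9:
            columns.append(set(S))
            cost.append(1.0)
for i in range(n):                               # rejection columns: {i}, cost pi_i
    columns.append({i})
    cost.append(penalty[i])

def lp_value(x):
    """Objective c^T x of a fractional solution; asserts the covering
    constraints sum_S 1_S x_S >= 1 hold for every item."""
    for i in range(n):
        assert sum(x[j] for j, S in enumerate(columns) if i in S) >= 1 - 1e-9
    return sum(c * xj for c, xj in zip(cost, x))

# Feasible fractional solution: weight 1/2 on four bins covering each item twice.
x = [0.0] * len(columns)
for S in ({0, 1}, {2, 3}, {0, 2}, {1, 3}):
    x[columns.index(S)] = 0.5
print(lp_value(x))  # → 2.0
```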

SLIDES 83-85

The column-based LP - Example

◮ Input: items of size s_i = 0.44, 0.4, 0.3, 0.26 with penalties π_i = 0.9, 0.7, 0.4, 0.6, and unit-size bins.
◮ The LP minimizes cᵀx, where the bin columns cost 1 and the rejection columns cost .9, .7, .4, .6, subject to the covering constraints Σ_S 1_S · x_S ≥ (1, 1, 1, 1)ᵀ and x ≥ 0.
◮ The slides' fractional solution puts weight 1/2 on three of the bin columns.

SLIDES 86-91

Massaging the LP

◮ Sort items s_1 ≥ . . . ≥ s_n.
◮ Call an item large if its size is at least ε := log(OPT_f)/OPT_f (otherwise small).
◮ Add up rows 1, . . . , i of the constraint matrix to obtain row i of the matrix A (for large i's): the entry in column S becomes |S ∩ {1, . . . , i}|, so "1 slot per item" turns into "i slots for the largest i items".
◮ Append the row vector (Σ_{i∈S: s_i<ε} s_i)_S (space for small items).
◮ Append the objective function c.
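The row-summing step is just a cumulative sum of the 0/1 incidence rows; a toy, made-up 4-item, 4-column instance:

```python
import numpy as np

# "1 slot per item" -> "i slots for the largest i items": cumulative sum
# over the incidence rows, items sorted by decreasing size.
incidence = np.array([[1, 0, 0, 1],   # item 1 (largest)
                      [0, 1, 0, 1],   # item 2
                      [0, 1, 1, 0],   # item 3
                      [0, 0, 1, 0]])  # item 4
A = np.cumsum(incidence, axis=0)
print(A.tolist())  # row i counts each column's slots for the i largest items

# A is column-monotone (entries non-decreasing down each column) -- exactly
# the shape the entropy bound for monotone matrices is applied to later.
assert np.all(np.diff(A, axis=0) >= 0)
```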
SLIDES 92-99

Entropy bound for monotone matrices

Theorem

Let A be a column-monotone matrix with maximum entry ≤ ∆ and sum of the last row σ. Then there is an auxiliary random variable f with ‖Aχ − f(χ)‖_∞ = O(∆) and H_χ(f(χ)) ≤ O(σ/∆).

Proof idea:

◮ Describe the random walk A_1χ, . . . , A_σχ (e.g. for χ = (+1, −1, −1, +1)) only O(∆)-approximately.
◮ For each dyadic interval D of length 2^k · ∆: let g_D(χ) := the distance the walk covers within D, rounded to a multiple of ∆/1.1^k.
◮ Standard deviation of g_D: √(max dependence · |D|) ≤ √(∆ · 2^k ∆) = 2^{k/2} · ∆.
◮ H(g_D) ≤ G( (∆/1.1^k) / (2^{k/2} ∆) ) ≤ G(2^{−k}) = O(log 2^k) = O(k).
◮ Total entropy of g: there are σ/(2^k ∆) intervals at scale k, so Σ_{k≥1} (σ/(2^k ∆)) · O(k) = O(σ/∆).

SLIDES 100-102

Entropy bound for monotone matrices (2)

◮ From the g_D values we know each A_iχ up to an error of Σ_{k≥1} ∆/1.1^k = O(∆)
  (formally, f_i(χ) := Σ_{D: ⊍D=[i]} g_D(χ), summing over the dyadic intervals that partition [i]).

SLIDES 103-105

Approximation algorithm for BPWR

◮ Apply the rounding theorem to the fractional solution x; the "space for small items" row and the objective-function row each receive weight 1/2.
◮ Pick ∆_i := Θ(1/s_i) for the item rows.
◮ Assume for a second that 1/(2k) ≤ s_i ≤ 1/k (a single size class). Then the entropy is under control:

  O(σ/∆) ≤ (1/20) Σ_{S active} |S| · (1/k) ≤ (1/10) Σ_{S active} Σ_{i∈S} s_i ≤ (#active var)/10,

  using s_i ≥ 1/(2k) in the middle step and Σ_{i∈S} s_i ≤ 1 in the last.

slide-106
SLIDE 106

Approximation algorithm for BPWR (2)

Obtain y ∈ {0, 1}m:

◮ |Aiy − Aix| ≤ O(log m) · 1 si ◮ |cT x − cT y| ≤ O(1) ◮ space for small items in x and y differs by O(1)

slide-123
SLIDE 123

Approximation algorithm for BPWR (2)

Obtain y ∈ {0, 1}m → repair to a feasible solution:

◮ Aiy ≥ Aix = i (y reserves ≥ i slots for the largest i items);
  buy O(log m) · (1/si) extra slots for each size class → O(log m · log(1/ε))
◮ |cᵀx − cᵀy| ≤ O(1)
◮ y reserves enough space for small items → O(1);
  assign small items integrally → O(ε) · OPTf = O(log OPTf)

Doing the math:

◮ may assume that xS ≤ 1 − ε (otherwise buy those sets a priori)
◮ can assume that m ≤ #large items + 2
◮ OPTf ≥ ε² · #large items ⇒ log m, log(1/ε) = O(log OPTf)
◮ APX − OPTf ≤ O(log² OPTf)
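The repair step relies on a simple matching fact: if for every i the rounded solution y reserves at least i slots usable by the i largest items (the condition Aiy ≥ Aix = i), then pairing the i-th largest item with the i-th largest slot always succeeds. A toy sketch of this greedy assignment, illustrative rather than from the talk:

```python
def assign_largest_first(item_sizes, slot_sizes):
    """Assign each item to a reserved slot of at least its size.

    If for every i the number of slots that can hold the i-th largest
    item is at least i (the prefix condition A_i y >= i on the slides),
    then matching the i-th largest item with the i-th largest slot works.
    Returns the assignment as (item, slot) pairs, or None if infeasible.
    """
    items = sorted(item_sizes, reverse=True)
    slots = sorted(slot_sizes, reverse=True)
    if len(slots) < len(items):
        return None
    pairs = []
    for it, sl in zip(items, slots):
        if sl < it:  # prefix condition violated at this index
            return None
        pairs.append((it, sl))
    return pairs

# y reserves 4 slots for 3 large items; the prefix condition holds
print(assign_largest_first([0.5, 0.4, 0.3], [0.6, 0.5, 0.35, 0.3]))
# [(0.5, 0.6), (0.4, 0.5), (0.3, 0.35)]
```

Sorting both sides descending is exactly the exchange argument: if any valid assignment exists under the prefix condition, the sorted one is valid too.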

slide-124
SLIDE 124

Open problem I

Method works pretty well for other Bin Packing variants. But:

Open question I

Are there other applications?

slide-125
SLIDE 125

Open problem II

Bin Packing:

min Σ_{S∈S} xS
s.t. Σ_{S∈S} 1S · xS ≥ 1
     xS ≥ 0 ∀S ∈ S

Modified Integer Roundup Conjecture

OPT ≤ ⌈OPTf⌉ + 1

◮ True if the # of different item sizes is ≤ 7 [Sebő, Shmonin ’09]
◮ Best known general bound: OPT ≤ OPTf + O(log² n)
◮ WARNING: no o(log² n) bound is possible by just “selecting patterns from an initial fractional solution and rounding up items” [Eisenbrand, Pálvölgyi, R. ’11]
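For intuition on the conjecture, OPT can be computed exactly on tiny instances by a subset DP and compared with the ⌈Σ si⌉ lower bound, which itself lower-bounds OPTf (computing OPTf exactly would require solving the pattern LP with an LP solver). A sketch under those assumptions, not from the talk:

```python
import math
from functools import lru_cache

def opt_bins(sizes, capacity=1.0):
    """Exact minimum number of unit bins, via subset DP (tiny instances)."""
    n = len(sizes)
    full = (1 << n) - 1
    # precompute which item subsets fit in a single bin
    fits = [sum(sizes[i] for i in range(n) if mask >> i & 1) <= capacity + 1e-9
            for mask in range(full + 1)]

    @lru_cache(maxsize=None)
    def solve(mask):
        if mask == 0:
            return 0
        best = n  # n bins always suffice
        sub = mask
        while sub:  # enumerate non-empty submasks of `mask`
            if fits[sub]:
                best = min(best, 1 + solve(mask ^ sub))
            sub = (sub - 1) & mask
        return best

    return solve(full)

sizes = [0.6, 0.6, 0.4, 0.4, 0.5]
opt = opt_bins(sizes)
lower = math.ceil(sum(sizes))  # lower <= ceil(OPT_f) <= OPT... conjecture: OPT <= ceil(OPT_f) + 1
print(opt, lower)              # 3 3
```

On this instance OPT meets the ⌈Σ si⌉ bound exactly, consistent with (though weaker than a proof of) the roundup behavior the conjecture describes.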
slide-126
SLIDE 126

Thanks for your attention