Directed Steiner Tree and the Lasserre Hierarchy, Thomas Rothvoß (PowerPoint presentation transcript)


SLIDE 1

Directed Steiner Tree and the Lasserre Hierarchy

Thomas Rothvoß

Department of Mathematics, M.I.T.


SLIDE 8

Directed Steiner Tree

Input:

◮ directed weighted graph G = (V, E, c)
◮ root r ∈ V, terminals X

Find: tree T connecting r and X, minimizing c(T)

[Figure: a layered example instance; the root r sits in layer 0, terminals in layer ℓ.]

◮ W.l.o.g. G is acyclic
◮ Modulo an O(log |X|) factor, we may assume ℓ = log |X| levels
  (∃ ℓ-level tree of cost ℓ · |X|^(1/ℓ) · OPT [Zelikovsky ’97])

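For concreteness, the problem just defined can be sketched as a tiny brute-force solver (exponential time, toy instances only; the helper names are illustrative, not from the talk):

```python
from itertools import combinations

def connects(edges, r, terminals):
    """Check that every terminal is reachable from r using the chosen edges."""
    reach = {r}
    frontier = [r]
    while frontier:
        u = frontier.pop()
        for (a, b) in edges:
            if a == u and b not in reach:
                reach.add(b)
                frontier.append(b)
    return terminals <= reach

def brute_force_dst(E, c, r, terminals):
    """Cheapest edge subset connecting r to all terminals (tiny instances only)."""
    best = None
    for k in range(len(E) + 1):
        for sub in combinations(E, k):
            if connects(sub, r, terminals):
                cost = sum(c[e] for e in sub)
                if best is None or cost < best:
                    best = cost
    return best

# Toy acyclic instance: r -> a -> t1, r -> b -> t2, plus a direct expensive edge r -> t1.
E = [("r", "a"), ("a", "t1"), ("r", "b"), ("b", "t2"), ("r", "t1")]
c = {("r", "a"): 1, ("a", "t1"): 1, ("r", "b"): 2, ("b", "t2"): 1, ("r", "t1"): 5}
print(brute_force_dst(E, c, "r", {"t1", "t2"}))  # optimal cost: 5
```

On a DAG, the cheapest edge subset connecting r to all terminals has the same cost as the optimal Steiner arborescence, which is why the subset enumeration suffices here.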

SLIDE 13

What’s known?

Generalizes:

◮ Set Cover
◮ (non-metric / multi-level) Facility Location
◮ Group Steiner Tree

Known results:

◮ Ω(log^{2−ε} n)-hard [Halperin, Krauthgamer ’03]
◮ |X|^ε-apx in polytime (for any ε > 0)
  → sophisticated greedy algo [Zelikovsky ’97]
◮ O(log³ |X|)-apx in n^{O(log |X|)} time
  → (more) sophisticated greedy algo [Charikar, Chekuri, Cheung, Goel, Guha and Li ’99]

What about LPs?

SLIDE 14

A flow based LP

Variables:

◮ y_e = “use edge e?”
◮ f_{s,e} = “r-s flow uses e?”

Constraints:

  min Σ_{e∈E} c_e y_e

  Σ_{e∈δ+(v)} f_{s,e} − Σ_{e∈δ−(v)} f_{s,e} = { 1 if v = r; −1 if v = s; 0 otherwise }   ∀s ∈ X, ∀v ∈ V
  f_{s,e} ≤ y_e                                                                          ∀s ∈ X, ∀e ∈ E
  y(δ−(v)) ≤ 1                                                                           ∀v ∈ V
  0 ≤ y_e ≤ 1                                                                            ∀e ∈ E
  0 ≤ f_{s,e} ≤ 1                                                                        ∀s ∈ X, ∀e ∈ E

[Figure: an example fractional solution assigning value 1/2 to the drawn edges.]


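The LP above can be made concrete as a plain feasibility checker on a toy instance (the function and variable names here are illustrative, not from the talk):

```python
def feasible(V, E, X, r, y, f, tol=1e-9):
    """Check the flow-conservation, coupling, capacity, and bound constraints."""
    for s in X:
        for v in V:
            out_f = sum(f[s, e] for e in E if e[0] == v)   # edges in δ+(v)
            in_f = sum(f[s, e] for e in E if e[1] == v)    # edges in δ−(v)
            rhs = 1 if v == r else (-1 if v == s else 0)
            if abs(out_f - in_f - rhs) > tol:              # flow conservation
                return False
        for e in E:
            if f[s, e] > y[e] + tol:                       # f_{s,e} <= y_e
                return False
    for v in V:
        if sum(y[e] for e in E if e[1] == v) > 1 + tol:    # y(δ−(v)) <= 1
            return False
    return all(-tol <= val <= 1 + tol for val in list(y.values()) + list(f.values()))

# Integral r-t1 path on a 2-edge graph: all constraints hold.
V = ["r", "a", "t1"]
E = [("r", "a"), ("a", "t1")]
X = ["t1"]
y = {("r", "a"): 1.0, ("a", "t1"): 1.0}
f = {("t1", ("r", "a")): 1.0, ("t1", ("a", "t1")): 1.0}
print(feasible(V, E, X, "r", y, f))  # True
```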

SLIDE 22

Integrality gap instance [Zosin, Khuller ’02]

[Figure: 5-layer instance with k terminals i; intermediate vertex layers c_{S′} and b_{S′} for sets S′ with |S′| = √k + 1, and a_S for sets S with |S| = √k; edges present when S ⊆ S′ resp. i ∈ S′; the layer edge costs are 0, √k, 0 and k.]

◮ Integrality gap is Ω(√k) already for 5 layers
  (though n = 2^{Θ̃(√k)}; no ω(log² n) gap instance known)

What about the Lasserre strengthening?


SLIDE 25

Round-t Lasserre relaxation

◮ Given: K = {x ∈ Rⁿ | Ax ≥ b}.
◮ Introduce variables y_I ≡ ∧_{i∈I}(x_i = 1) for I ⊆ {1, …, n} with |I| ≤ 2t + 2

Round-t Lasserre relaxation:

  (y_{I∪J})_{|I|,|J|≤t+1} ⪰ 0
  ( Σ_{i∈[n]} A_{ℓi} y_{I∪J∪{i}} − b_ℓ y_{I∪J} )_{|I|,|J|≤t} ⪰ 0    ∀ℓ ∈ [m]
  y_∅ = 1

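A small sketch of the y_I variables: for an integral point x ∈ {0,1}ⁿ, taking y_I = Π_{i∈I} x_i makes the truncated moment matrix (y_{I∪J}) the rank-1 matrix v vᵀ, hence positive semidefinite (illustrative code, not from the talk):

```python
from itertools import combinations

def subsets_up_to(n, size):
    """All I ⊆ {0, …, n−1} with |I| ≤ size, as frozensets."""
    return [frozenset(c) for k in range(size + 1) for c in combinations(range(n), k)]

def moment_matrix(x, t):
    """Moment matrix (y_{I∪J}) with |I|, |J| ≤ t+1 for integral x, y_I = Π_{i∈I} x_i."""
    idx = subsets_up_to(len(x), t + 1)
    y = lambda I: int(all(x[i] == 1 for i in I))
    return idx, [[y(I | J) for J in idx] for I in idx]

x = [1, 0, 1]
idx, M = moment_matrix(x, t=0)                          # index sets: ∅, {0}, {1}, {2}
v = [int(all(x[i] == 1 for i in I)) for I in idx]       # here v = [1, 1, 0, 1]
print(M)  # equals the rank-1 outer product v vᵀ, so M ⪰ 0
```

For fractional Lasserre solutions the matrix is no longer rank 1, but the PSD constraints above play exactly this role.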
slide-32
SLIDE 32

Properties of Lasserre hierarchy

Theorem
Let K = {x ∈ Rⁿ | Ax ≥ b}, y ∈ Las_t(K), |I|, |J| ≤ t.
(a) Local consistency: y ∈ conv{z ∈ Las_{t−|I|}(K) | z_{i} ∈ {0, 1} ∀i ∈ I}
(b) Decomposition [Karlin-Mathieu-Nguyen ’11]: let S ⊆ [n] and k := max{|I| : I ⊆ S, x ∈ K, x_i = 1 ∀i ∈ I} ≤ t. Then y ∈ conv{z ∈ Las_{t−k}(K) | z_{i} ∈ {0, 1} ∀i ∈ S}.
(c) Convergence: conv(K ∩ {0, 1}ⁿ) = Las_n^{proj}(K)
(d) Monotonicity: I ⊇ J ⇒ 0 ≤ y_I ≤ y_J ≤ 1
(e) y_I = 1 ⇔ y_{i} = 1 for all i ∈ I
(f) (∀i ∈ I : y_{i} ∈ {0, 1}) ⇒ y_I = Π_{i∈I} y_{i}
(g) y_I = 1 ⇒ y_{I∪J} = y_J

[Figure: y ∈ Las_t written as a convex combination of points z, z′ ∈ Las_{t−1} with z_{i} = 0 and z′_{i} = 1.]

◮ Example: for Knapsack take S := {large items}
◮ Decomposition is not true for the Sherali-Adams or Lovász-Schrijver hierarchies


slide-40
SLIDE 40

Our contribution

Theorem
The integrality gap of an O(ℓ)-round Lasserre solution for an ℓ-level Directed Steiner Tree instance is O(ℓ · log |X|).

◮ Recall: gap is Ω(√|X|) (for ℓ = 4) without strengthening.
◮ This gives an O(log³ |X|)-apx in n^{O(log |X|)} time (matching the greedy algo of [Charikar et al. ’99])
◮ Garg-Konjevod-Ravi rounding: Group Steiner Tree instance → tree embedding → LP-rounding on tree graph
◮ Here instead: Directed Steiner Tree instance → O(ℓ) rounds of Lasserre with y_I variables → rounding

slide-41
SLIDE 41

The rounding algorithm

◮ Let Y ∈ Las_{O(ℓ)}(LP)   (y_P is the value of the variable set {y_e | e ∈ P})

(1) T := {∅}
(2) FOR all P ∈ T and incident e ∈ E DO
(3)   Pr[add P ∪ {e} to T] = y_{P∪{e}} / y_P

[Figure: starting at r with Pr[add {e}] = y_{e}; in general Pr[add P ∪ {e}] = y_{P∪{e}} / y_P.]


slide-63
SLIDE 63

The rounding algorithm

◮ Let Y ∈ Las_{O(ℓ)}(LP); T = the set of sampled paths

Road map:

◮ Show Pr[e ∈ T] = y_{e}
◮ Show Pr[s connected] ≥ Ω(1 / #levels) for each terminal s

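The sampling loop can be simulated directly; each path P then survives with probability y_P, since the conditional ratios telescope. A toy sketch with hand-made y-values (names are illustrative, not from the talk):

```python
import random

def sample_paths(children, y, root="r"):
    """Grow root-paths: extend P by edge e with probability y[P∪{e}] / y[P]."""
    T, frontier = [()], [()]
    while frontier:
        P = frontier.pop()
        node = P[-1][1] if P else root          # head of the last edge on P
        for e in children.get(node, []):
            Q = P + (e,)
            if y.get(Q, 0) > 0 and random.random() < y[Q] / y[P]:
                T.append(Q)
                frontier.append(Q)
    return T

# Integral y-values make the sampling deterministic: every prefix is kept.
e1, e2 = ("r", "a"), ("a", "t")
children = {"r": [e1], "a": [e2]}
y = {(): 1.0, (e1,): 1.0, (e1, e2): 1.0}
print(sample_paths(children, y))  # contains (), (e1,), and (e1, e2)
```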
slide-64
SLIDE 64

Probability to sample a particular path

Lemma
For any root-path P = (e₁, e₂, …, e_j): Pr[P ∈ T] = y_P.

  Pr[P ∈ T] = y_{e₁} · (y_{e₁,e₂} / y_{e₁}) · (y_{e₁,e₂,e₃} / y_{e₁,e₂}) · … · (y_P / y_{P∖{e_j}}) = y_P

(the product telescopes)


slide-71
SLIDE 71

Upper bounding the expected cost

Lemma
Σ_{P ending in e} y_P ≤ y_{e}

◮ It suffices to consider the case y_{e} ∈ {0, 1} (costs 1 level).
◮ By induction, Σ_{P ending in e′} y_P ≤ y_{e′} for every e′ ∈ δ−(u), where u is the tail of e.
◮ Since y_{e} = 1 ⇒ y_{P∪{e}} = y_P, and the LP gives y(δ−(u)) ≤ 1:

  Σ_{P ending in e} y_P = Σ_{e′∈δ−(u)} Σ_{P ending in e′} y_P ≤ Σ_{e′∈δ−(u)} y_{e′} ≤ 1 = y_{e}

⇒ E[c(T)] = Σ_{e∈E} c_e · Pr[e ∈ T] ≤ Σ_{e∈E} c_e y_{e} ≤ OPT


slide-78
SLIDE 78

Each terminal connected once in expectation

Lemma
For each terminal s: Σ_{P ending in s} y_P = 1.

◮ No feasible fractional flow has |{e : f_{s,e} = 1}| > ℓ
◮ Decomposition: write the solution as a convex combination of solutions that are integral on f_{s,∗} (costs ℓ levels)
◮ Suffices to show the claim when f_{s,e} ∈ {0, 1} ∀e ∈ E
◮ Use the LP constraint “incoming capacity ≤ 1”: then y = f = 1 along a single r-s path

For fixed s, Z := #paths in T connecting s ⇒ E[Z] = 1


slide-83
SLIDE 83

Upper bounding the conditional expectation

Lemma
E[Z | Z ≥ 1] ≤ ℓ + 1.

◮ E[Z | Z ≥ 1] ≤ E[Z | P ∈ T] for some path P
◮ Suffices to prove E[#S : S ⊇ Q, S ends in s | Q ∈ T] ≤ 1 for each prefix Q of P
  (P has at most ℓ + 1 prefixes Q, which yields the bound):

  Σ_{S : S ⊇ Q, S ends in s} Pr[S ∈ T | Q ∈ T]  =  Σ_{S : S ⊇ Q, S ends in s} y_S / y_Q  ≤  1
          (conditional probabilities)                   (as in the previous lemma)


slide-86
SLIDE 86

Done. . .

◮ Recall: Z = #paths connecting a fixed terminal s

Lemma
Pr[Z ≥ 1] ≥ 1 / (ℓ + 1).

  1 = E[Z] = Pr[Z = 0] · E[Z | Z = 0] + Pr[Z ≥ 1] · E[Z | Z ≥ 1]
                         (= 0)                       (≤ ℓ + 1)

⇒ Pr[Z ≥ 1] ≥ 1 / (ℓ + 1)
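The averaging argument can be sanity-checked numerically on a made-up distribution for Z (only E[Z] = 1 and E[Z | Z ≥ 1] ≤ ℓ + 1 matter):

```python
# Toy distribution Pr[Z = z] with E[Z] = 1 (values are made up).
dist = {0: 0.6, 1: 0.1, 2: 0.15, 4: 0.15}
ell = 3                                                         # so E[Z | Z >= 1] <= ell + 1

EZ = sum(z * p for z, p in dist.items())                        # expectation, = 1
p_pos = sum(p for z, p in dist.items() if z >= 1)               # Pr[Z >= 1]
E_cond = sum(z * p for z, p in dist.items() if z >= 1) / p_pos  # E[Z | Z >= 1]

# 1 = E[Z] = Pr[Z >= 1] * E[Z | Z >= 1] and E[Z | Z >= 1] <= ell + 1
# together force Pr[Z >= 1] >= 1/(ell + 1).
print(p_pos, E_cond, 1 / (ell + 1))
```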

slide-88
SLIDE 88

Open problems

Open problem
Is there a convex relaxation for Directed Steiner Tree that

◮ has polylog(|X|) integrality gap, and
◮ can be solved in polytime?

Thanks for your attention