SLIDE 1

Decomposition for Network Design

Bernard Gendron∗ March 2-10, 2016

EPFL, Lausanne, Switzerland

∗ CIRRELT and Département d'informatique et de recherche opérationnelle, Université de Montréal, Canada

SLIDE 2

Outline of lesson 4: Classic examples of decomposition

◮ Branch-and-cut for the traveling salesman problem
◮ Branch-and-price for integer multicommodity flow
◮ Lagrangian relaxation for budget network design
◮ Benders decomposition for uncapacitated facility location

SLIDE 3

Asymmetric traveling salesman problem (ATSP)

◮ Directed network G = (N, A), with node set N = {1, 2, . . . , n} and arc set A
◮ Routing cost c_ij on each arc (i, j) (can be different from c_ji, the routing cost on arc (j, i))
◮ Problem description: find the minimum-cost Hamiltonian circuit, i.e., a circuit that visits each node in N exactly once (we assume such a circuit exists in G)
◮ Useful notation: for any S, T ⊆ N, let A(S, T) = {(i, j) ∈ A | i ∈ S, j ∈ T}

SLIDE 7

Problem formulation

◮ y_ij: 1, if arc (i, j) is chosen in the Hamiltonian circuit; 0, otherwise

Z = min Σ_{(i,j)∈A} c_ij y_ij

    Σ_{j∈N_i^+} y_ij = 1,   i ∈ N

    Σ_{j∈N_i^-} y_ji = 1,   i ∈ N

    Σ_{(i,j)∈A(S,N\S)} y_ij ≥ 1,   S ⊂ N, S ≠ ∅

    y_ij ∈ {0, 1},   (i, j) ∈ A
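As a sanity check of the formulation, a tiny brute-force solver that enumerates Hamiltonian circuits; the instance and the function name are illustrative sketches, not from the slides:

```python
from itertools import permutations

def atsp_brute_force(n, cost):
    """Enumerate Hamiltonian circuits on nodes 0..n-1 (node 0 fixed as start)
    and return the minimum-cost circuit. cost[(i, j)] is the routing cost of
    arc (i, j); arcs missing from cost are treated as absent from A."""
    best_val, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        val, feasible = 0.0, True
        for i, j in zip(tour, tour[1:]):
            if (i, j) not in cost:          # arc not in A
                feasible = False
                break
            val += cost[(i, j)]
        if feasible and val < best_val:
            best_val, best_tour = val, tour
    return best_val, best_tour

# Asymmetric 4-node instance: c_ij may differ from c_ji.
cost = {(i, j): 1 + ((i * 3 + j) % 4) for i in range(4) for j in range(4) if i != j}
val, tour = atsp_brute_force(4, cost)
```

Any y induced by the returned tour satisfies the two degree constraints and all SECs by construction.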

SLIDE 13

Alternative formulations

◮ If routing costs are symmetric (c_ij = c_ji for each (i, j)), how would you formulate the problem?
◮ Think of another way of expressing the subtour elimination constraints (SECs) (using the same variables)
◮ Does the resulting model have an equivalent LP relaxation?
◮ Both models have an exponential number of constraints: what to do?
◮ Introduce a set of commodities K = N \ {1} such that O(k) = 1, D(k) = k and d_k = 1 for each k ∈ K
◮ Define multicommodity flow variables x^k_ij: flow of commodity k on arc (i, j)
◮ Using these multicommodity flow variables, express the SECs with a polynomial number of constraints
◮ Does the resulting model have an equivalent LP relaxation?

SLIDE 14

Cutting-plane method

◮ Since there is an exponential number of SECs, we use a cutting-plane method to solve the LP relaxation
◮ Separation problem: given a solution y to the current LP relaxation, we have to find the most violated SEC or determine that no violated SEC exists
◮ Consider the following optimization problem:

    γ = min { Σ_{(i,j)∈A(S,N\S)} y_ij | S ⊂ N, S ≠ ∅ }

◮ If γ ≥ 1, there is no violated SEC; otherwise, if γ < 1, the optimal solution S corresponds to the most violated SEC
SLIDE 16

Solving the separation problem

◮ The equivalent optimization problem can be reformulated as:

    γ = min_{j=2,3,...,n} γ_j

    γ_j = min { Σ_{(i,j)∈A(S,N\S)} y_ij | 1 ∈ S ⊂ N, j ∈ N \ S }

◮ Computing γ_j corresponds to finding the minimum cut between 1 and j with capacities y_ij on each arc (i, j)
◮ How would you solve this problem?
◮ Any efficient maximum flow algorithm solves this problem in polynomial time
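The min-cut separation can be sketched with an Edmonds-Karp max-flow routine; a minimal illustration where the names and the 4-node instance are assumptions, and node 0 plays the role of node 1:

```python
from collections import deque

def min_cut(n, cap, s, t):
    """Edmonds-Karp max flow on nodes 0..n-1; cap maps (i, j) -> capacity.
    Returns (cut value, source side S of a minimum s-t cut)."""
    flow = {e: 0.0 for e in cap}
    adj = {i: set() for i in range(n)}
    for (i, j) in cap:
        adj[i].add(j)
        adj[j].add(i)                       # allow residual (reverse) arcs

    def res(i, j):
        return cap.get((i, j), 0.0) - flow.get((i, j), 0.0) + flow.get((j, i), 0.0)

    total = 0.0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:    # BFS for a shortest augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res(u, v) > 1e-9:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                 # no augmenting path: done
            return total, set(parent)       # reachable set = source side of cut
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res(i, j) for i, j in path)
        for (i, j) in path:                 # cancel reverse flow before pushing
            back = min(aug, flow.get((j, i), 0.0))
            if (j, i) in flow:
                flow[(j, i)] -= back
            if aug - back > 1e-12:
                flow[(i, j)] += aug - back
        total += aug

def separate_sec(n, y, eps=1e-6):
    """Most violated SEC for an LP solution y: gamma = min over j of the
    minimum cut between node 0 and j with capacities y_ij."""
    best = None
    for j in range(1, n):
        gamma_j, S = min_cut(n, y, 0, j)
        if gamma_j < 1.0 - eps and (best is None or gamma_j < best[0]):
            best = (gamma_j, S)
    return best                             # None if no violated SEC exists

# Two disjoint 2-cycles: the cut around {0, 1} has value 0 < 1.
y = {(0, 1): 1.0, (1, 0): 1.0, (2, 3): 1.0, (3, 2): 1.0}
violation = separate_sec(4, y)
```

A valid tour such as 0→1→2→3→0 yields γ = 1 for every j, so `separate_sec` returns `None`.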

SLIDE 19

Embedding cutting-plane into branch-and-bound

◮ Branch-and-cut: apply the cutting-plane method at each node
◮ Cut-and-branch: apply the cutting-plane method only at the root
◮ Is this standard cut-and-branch approach enough to identify an optimal solution?
◮ We also need to verify that there is no violated SEC at each node where an integer solution is found!
◮ How do we do that? Remember that y is now integer!
◮ A linear-time graph traversal algorithm to verify the connectivity of the subgraph induced by y does the job!
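The integer-solution check can be sketched as a single traversal of the support graph; names are illustrative, and the sketch relies on the degree constraints holding (every node has exactly one outgoing arc with y = 1):

```python
def find_subtour(n, y):
    """Given an integer solution y (dict {(i, j): 0/1} satisfying the degree
    constraints), follow successors from node 0. If the traversal does not
    reach all n nodes, the visited set S defines a violated SEC."""
    succ = {i: j for (i, j), v in y.items() if v == 1}
    visited, cur = {0}, 0
    while succ[cur] not in visited:
        cur = succ[cur]
        visited.add(cur)
    return None if len(visited) == n else visited

# Two subtours 0-1-0 and 2-3-2: the check reports S = {0, 1}.
y = {(0, 1): 1, (1, 0): 1, (2, 3): 1, (3, 2): 1}
S = find_subtour(4, y)
```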

SLIDE 20

Integer multicommodity flow problem

◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d_k between origin O(k) and destination D(k) for each k ∈ K
◮ Unit transportation cost c^k_ij on arc (i, j) for commodity k
◮ Capacity u_ij on each arc (i, j)
◮ Problem description: satisfy the demand of each commodity using only one path per commodity at minimum cost, while respecting the capacity constraints

SLIDE 25

Arc-based model

◮ x^k_ij: 1, if arc (i, j) is used in the chosen path for commodity k; 0, otherwise

Z = min Σ_{(i,j)∈A} Σ_{k∈K} c^k_ij x^k_ij

    Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^-} x^k_ji = { 1, i = O(k); −1, i = D(k); 0, otherwise },   i ∈ N, k ∈ K

    Σ_{k∈K} d_k x^k_ij ≤ u_ij,   (i, j) ∈ A   (α_ij)

    x^k_ij ∈ {0, 1},   (i, j) ∈ A, k ∈ K

SLIDE 27

Lagrangian relaxation of capacity constraints

◮ Derive the Lagrangian subproblem obtained after relaxing the capacity constraints using multipliers α ≥ 0

Z(LR(α)) = min Σ_{(i,j)∈A} Σ_{k∈K} (c^k_ij + α_ij d_k) x^k_ij − Σ_{(i,j)∈A} α_ij u_ij

    Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^-} x^k_ji = { 1, i = O(k); −1, i = D(k); 0, otherwise },   i ∈ N, k ∈ K

    x^k_ij ∈ {0, 1},   (i, j) ∈ A, k ∈ K

◮ How do you solve this subproblem? Does it have the integrality property?
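The relaxed subproblem splits into |K| shortest path problems under the modified costs c^k_ij + α_ij d_k, minus the constant term Σ α_ij u_ij. A minimal sketch with Dijkstra (valid since c, α ≥ 0 keep the modified costs nonnegative); the instance values are illustrative:

```python
import heapq

def shortest_path_cost(arcs, cost, s, t):
    """Dijkstra on an arc list; cost maps (i, j) to a nonnegative length."""
    adj = {}
    for (i, j) in arcs:
        adj.setdefault(i, []).append(j)
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            nd = d + cost[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def lagrangian_bound(arcs, c, u, commodities, alpha):
    """Z(LR(alpha)): one shortest path per commodity under Lagrangian costs
    c^k_ij + alpha_ij d_k, minus the constant term sum_ij alpha_ij u_ij."""
    z = -sum(alpha[a] * u[a] for a in arcs)
    for k, (o, d, dem) in commodities.items():
        lag = {a: c[k][a] + alpha[a] * dem for a in arcs}
        z += shortest_path_cost(arcs, lag, o, d)
    return z

# Two unit-demand commodities from node 0 to node 2 on a 3-node network.
arcs = [(0, 1), (1, 2), (0, 2)]
c = {k: {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 3.0} for k in (1, 2)}
u = {a: 1.0 for a in arcs}
commodities = {1: (0, 2, 1.0), 2: (0, 2, 1.0)}
z0 = lagrangian_bound(arcs, c, u, commodities, {a: 0.0 for a in arcs})
```

Since each shortest path problem has an integral LP relaxation, the subproblem has the integrality property, which answers the question on the slide.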

SLIDE 30

Dantzig-Wolfe reformulation

◮ Now, we know that the Lagrangian dual is equivalent to the LP relaxation
◮ We also know that the Lagrangian dual can be solved with the Dantzig-Wolfe reformulation (obtained from the primal interpretation of Lagrangian duality)
◮ Write down the Dantzig-Wolfe reformulation
◮ Using the fact that the Lagrangian subproblem decomposes into |K| shortest path problems, write down an equivalent disaggregated Dantzig-Wolfe reformulation
◮ Show that this disaggregated Dantzig-Wolfe reformulation is equivalent to the LP relaxation of the path-based model

SLIDE 34

Path-based model

◮ P_k: circuit-free paths between O(k) and D(k) for each k
◮ δ^kp_ij: 1, if arc (i, j) is on path p ∈ P_k; 0, otherwise

Z = min Σ_{k∈K} Σ_{p∈P_k} ( Σ_{(i,j)∈A} c^k_ij δ^kp_ij ) λ_kp

    Σ_{p∈P_k} λ_kp = 1,   k ∈ K   (θ_k)

    Σ_{k∈K} Σ_{p∈P_k} d_k δ^kp_ij λ_kp ≤ u_ij,   (i, j) ∈ A   (α_ij)

    λ_kp ∈ {0, 1},   k ∈ K, p ∈ P_k

SLIDE 38

Column generation method

◮ The LP relaxation of the path-based model has an exponential number of variables: what to do?
◮ Write down the expression of the reduced cost of λ_kp
◮ The reduced cost of variable λ_kp is:

    Σ_{(i,j)∈A} (c^k_ij + α_ij d_k) δ^kp_ij − θ_k

◮ What is the pricing problem?
◮ Solve the Lagrangian subproblem for each k to obtain an optimal solution that corresponds to a path p ∈ P_k
◮ If Σ_{(i,j)∈A} (c^k_ij + α_ij d_k) δ^kp_ij − θ_k < 0, the variable λ_kp corresponding to the optimal path p ∈ P_k is added
◮ If Σ_{(i,j)∈A} (c^k_ij + α_ij d_k) δ^kp_ij − θ_k ≥ 0 for each k, the column generation method has converged to the optimal LP relaxation
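One pricing round can be sketched as follows; brute-force path enumeration stands in for the shortest-path oracle on a tiny graph, and all names and instance values are illustrative:

```python
from itertools import permutations

def simple_paths(nodes, arcs, o, d):
    """Enumerate circuit-free o-d paths as arc lists (fine for tiny graphs)."""
    arcset = set(arcs)
    inner = [v for v in nodes if v not in (o, d)]
    for r in range(len(inner) + 1):
        for mid in permutations(inner, r):
            seq = (o,) + mid + (d,)
            path = list(zip(seq, seq[1:]))
            if all(a in arcset for a in path):
                yield path

def price_columns(nodes, arcs, c, commodities, alpha, theta, eps=1e-9):
    """One pricing round: for each commodity k, minimize the Lagrangian path
    cost sum of (c^k_ij + alpha_ij d_k) over paths in P_k, and keep the path
    whenever its reduced cost (path cost - theta_k) is negative."""
    new_cols = {}
    for k, (o, d, dem) in commodities.items():
        lag = {a: c[k][a] + alpha[a] * dem for a in arcs}
        val, best = min((sum(lag[a] for a in p), p)
                        for p in simple_paths(nodes, arcs, o, d))
        if val - theta[k] < -eps:
            new_cols[k] = best
    return new_cols                 # empty dict: pricing proves LP optimality

nodes, arcs = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
c = {1: {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 3.0}}
commodities = {1: (0, 2, 1.0)}
alpha = {a: 0.0 for a in arcs}
cols = price_columns(nodes, arcs, c, commodities, alpha, {1: 2.5})
```

With θ_1 = 2.5 the cheapest path (cost 2) has negative reduced cost and is added; with θ_1 = 2 no column prices out and the method has converged.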

SLIDE 40

Branch-and-price algorithm

◮ At each node, we perform the column generation method
◮ What happens in the pricing problem if we branch on λ_kp?
◮ When we branch, we have to make sure that we do not destroy the structure of the pricing problem!
◮ A better way to branch is as follows: choose k for which there are at least two paths p(q) such that λ_{kp(q)} > 0, q = 1, 2
◮ Follow the flow from the origin O(k) up to the first node l where paths p(1) and p(2) differ; let (l, j(q)) be the arc originating at l and belonging to path p(q), q = 1, 2
◮ Partition N_l^+ into N_l^+(1) and N_l^+(2) with j(q) ∈ N_l^+(q), q = 1, 2, N_l^+ = N_l^+(1) ∪ N_l^+(2) and N_l^+(1) ∩ N_l^+(2) = ∅; generate the two child nodes defined by the branching constraints:
    1) Σ_{j∈N_l^+(1)} x^k_lj = 0
    2) Σ_{j∈N_l^+(2)} x^k_lj = 0
◮ Show that this branching rule is valid and does not change the way we solve the pricing problem

SLIDE 41

Budget network design

◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d_k between origin O(k) and destination D(k) for each k ∈ K
◮ Transportation cost c^k_ij on arc (i, j) for commodity k
◮ Fixed charge f_ij incurred whenever arc (i, j) is used to transport some commodity units
◮ Total budget B > 0 on the fixed charges
◮ Problem description: satisfy the demand of each commodity at minimum cost, while respecting the fixed-charge budget

SLIDE 46

Problem formulation

◮ x^k_ij: fraction of the demand d_k carried on arc (i, j)
◮ y_ij: 1, if arc (i, j) is used; 0, otherwise

Z = min Σ_{(i,j)∈A} Σ_{k∈K} c^k_ij x^k_ij

    Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^-} x^k_ji = { 1, i = O(k); −1, i = D(k); 0, otherwise },   i ∈ N, k ∈ K

    x^k_ij ≥ 0,   (i, j) ∈ A, k ∈ K

    x^k_ij ≤ y_ij,   (i, j) ∈ A, k ∈ K   (β^k_ij)

    Σ_{(i,j)∈A} f_ij y_ij ≤ B

    y_ij ∈ {0, 1},   (i, j) ∈ A

SLIDE 48

Lagrangian relaxation of linking constraints

◮ Derive the Lagrangian subproblem obtained after relaxing the linking constraints using multipliers β ≥ 0

Z(LR(β)) = min Σ_{(i,j)∈A} Σ_{k∈K} (c^k_ij + β^k_ij) x^k_ij + Σ_{(i,j)∈A} ( Σ_{k∈K} −β^k_ij ) y_ij

    Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^-} x^k_ji = { 1, i = O(k); −1, i = D(k); 0, otherwise },   i ∈ N, k ∈ K

    x^k_ij ≥ 0,   (i, j) ∈ A, k ∈ K

    Σ_{(i,j)∈A} f_ij y_ij ≤ B

    y_ij ∈ {0, 1},   (i, j) ∈ A

◮ How do you solve this subproblem? Does it have the integrality property?

SLIDE 51

Partial Dantzig-Wolfe reformulation of the Lagrangian dual

◮ Q: extreme points of conv{ y ∈ {0, 1}^|A| | Σ_{(i,j)∈A} f_ij y_ij ≤ B }
◮ δ^q_ij: 1, if arc (i, j) is chosen in extreme point q; 0, otherwise

Z(LD) = min Σ_{(i,j)∈A} Σ_{k∈K} c^k_ij x^k_ij

    Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^-} x^k_ji = { 1, i = O(k); −1, i = D(k); 0, otherwise },   i ∈ N, k ∈ K   (π^k_i)

    x^k_ij ≥ 0,   (i, j) ∈ A, k ∈ K

    x^k_ij ≤ Σ_{q∈Q} δ^q_ij λ_q,   (i, j) ∈ A, k ∈ K   (β^k_ij)

    Σ_{q∈Q} λ_q = 1   (θ)

    λ_q ≥ 0,   q ∈ Q

SLIDE 55

Solving the Lagrangian dual

◮ Exponential number of λ variables: column generation!
◮ The number of x variables/linking constraints is polynomial, but can be extremely large: column-and-row generation!
◮ Reduced cost of each λ_q:

    Σ_{(i,j)∈A} δ^q_ij ( Σ_{k∈K} −β^k_ij ) − θ

◮ Problem: since we use column-and-row generation for the x variables/linking constraints, many β^k_ij values are unknown!
◮ Solution: use LP duality to derive the "missing" β^k_ij values

SLIDE 61

Dual of partial Dantzig-Wolfe reformulation

max Σ_{k∈K} (π^k_{D(k)} − π^k_{O(k)}) + θ

    π^k_j − π^k_i − β^k_ij ≤ c^k_ij,   (i, j) ∈ A, k ∈ K

    Σ_{(i,j)∈A} Σ_{k∈K} δ^q_ij β^k_ij + θ ≤ 0,   q ∈ Q

    β^k_ij ≥ 0,   (i, j) ∈ A, k ∈ K

◮ Any optimal solution satisfies θ = − Σ_{(i,j)∈A} Σ_{k∈K} δ^q_ij β^k_ij
◮ Therefore, any optimal solution also satisfies

    −β^k_ij = min{0, c^k_ij + π^k_i − π^k_j},   (i, j) ∈ A, k ∈ K

SLIDE 63

Pricing problem

◮ Finding the variable λ_q with the smallest reduced cost is thus equivalent to

    min_{q∈Q} { Σ_{(i,j)∈A} δ^q_ij Σ_{k∈K} min{0, c^k_ij + π^k_i − π^k_j} } − θ

◮ Therefore, it is obtained by solving the Lagrangian subproblem

    min Σ_{(i,j)∈A} ( Σ_{k∈K} min{0, c^k_ij + π^k_i − π^k_j} ) y_ij

        Σ_{(i,j)∈A} f_ij y_ij ≤ B

        y_ij ∈ {0, 1},   (i, j) ∈ A
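Since every arc weight Σ_k min{0, c^k_ij + π^k_i − π^k_j} is nonpositive, this subproblem is a 0-1 knapsack over the design variables: maximize the total profit −w_ij of the selected arcs under the budget. A dynamic-programming sketch assuming integer fixed charges; names and the instance are illustrative:

```python
def pricing_knapsack(arcs, w, f, B):
    """Solve min sum w_ij y_ij  s.t.  sum f_ij y_ij <= B, y binary, where
    every w_ij <= 0. Classic 0-1 knapsack DP over the (integer) budget."""
    # dp[b] = best (most negative) objective using budget at most b
    dp = [0.0] * (B + 1)
    choice = [[False] * len(arcs) for _ in range(B + 1)]
    for idx, a in enumerate(arcs):
        for b in range(B, f[a] - 1, -1):    # descending: each arc used once
            cand = dp[b - f[a]] + w[a]
            if cand < dp[b]:
                dp[b] = cand
                choice[b] = choice[b - f[a]][:]   # copy the arc selection
                choice[b][idx] = True
    y = {a: int(choice[B][idx]) for idx, a in enumerate(arcs)}
    return dp[B], y

# Budget 3 fits only one of the two arcs; the more profitable one is chosen.
arcs = [(0, 1), (1, 2)]
w = {(0, 1): -3.0, (1, 2): -2.0}
f = {(0, 1): 2, (1, 2): 2}
val, y = pricing_knapsack(arcs, w, f, 3)
```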

SLIDE 68

Column-and-row generation method

◮ K̄_ij: commodities corresponding to the x variables that are NOT in the current restricted master problem
◮ ȳ, ŷ: optimal values of the design variables from the current restricted master and from the Lagrangian subproblem, respectively (ȳ_ij = Σ_{q∈Q} δ^q_ij λ̄_q)
◮ Generation of x variables, for each (i, j):
    ◮ If ȳ_ij > 0, generate k ∈ K̄_ij such that c^k_ij + π^k_i − π^k_j < 0
    ◮ If ȳ_ij = 0 and ŷ_ij = 1, generate k ∈ K̄_ij such that c^k_ij + π^k_i − π^k_j < 0
    ◮ If ȳ_ij = 0 and ŷ_ij = 0, do not generate any new variables related to (i, j)
◮ After column generation, add the linking constraints as cuts
◮ Subgradient optimization to initialize the master problem
◮ With the n best solutions from subgradient optimization:
    ◮ Generate a subset of extreme points Q̄ ⊆ Q
    ◮ If δ^q_ij = 1 for at least one q ∈ Q̄ and (x^k_ij > 0 or β^k_ij > 0), generate x^k_ij and the corresponding linking constraint
◮ Do not forget artificial flow variables, one per commodity!

SLIDE 72

Branch-and-price-and-cut

◮ Apply column-and-row generation at every node
◮ How to branch?
◮ The design variables are related to the λ_q variables as follows:

    y_ij = Σ_{q∈Q} δ^q_ij λ_q,   (i, j) ∈ A

◮ If Σ_{q∈Q} δ^q_ij λ_q ∈ {0, 1}, (i, j) ∈ A, the node can be fathomed
◮ Otherwise, select an arc (i, j) such that Σ_{q∈Q} δ^q_ij λ_q is fractional and generate the two child nodes defined by:
    ◮ Σ_{q∈Q} δ^q_ij λ_q = 0
    ◮ Σ_{q∈Q} δ^q_ij λ_q = 1
◮ What happens in the Lagrangian subproblem?
◮ The constraint y_ij = 0 or y_ij = 1 is added without any problem!

SLIDE 73

Uncapacitated facility location problem (UFLP)

◮ K: set of customers
◮ J: set of locations for potential facilities
◮ f_j ≥ 0: fixed cost for opening a facility at location j
◮ c_jk ≥ 0: cost of satisfying the demand of customer k from a facility at location j
◮ Problem description: determine the locations of the facilities to satisfy the customers' demands at minimum cost

SLIDE 74

Problem formulation

min Σ_{j∈J} Σ_{k∈K} c_jk x_jk + Σ_{j∈J} f_j y_j

    Σ_{j∈J} x_jk = 1,   k ∈ K
    x_jk ≤ y_j,   j ∈ J, k ∈ K
    x_jk ≥ 0,   j ∈ J, k ∈ K
    y_j ∈ {0, 1},   j ∈ J

SLIDE 77

Benders subproblem

min Σ_{j∈J} Σ_{k∈K} c_jk x_jk

    Σ_{j∈J} x_jk = 1,   k ∈ K   (π_k)
    x_jk ≤ y_j,   j ∈ J, k ∈ K   (α_jk ≥ 0)
    x_jk ≥ 0,   j ∈ J, k ∈ K

◮ How to solve the Benders subproblem?
◮ It decomposes for each k: assign the cheapest open location
◮ Give a simple condition to ensure feasibility
◮ Feasibility requires that at least one location is open!
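The decomposed subproblem and its duals can be sketched directly; the names are illustrative and `c[k][j]` stands for c_jk:

```python
def benders_subproblem(open_locations, c):
    """Solve the UFLP Benders subproblem for a fixed design y: each customer k
    is assigned to its cheapest open location j*. Returns the duals
    pi_k = c_{j*k} and alpha_jk = (pi_k - c_jk)^+, plus the assignment."""
    assert open_locations, "feasibility needs at least one open location"
    pi, assign = {}, {}
    for k, costs in c.items():                    # c[k][j] = c_jk
        j_star = min(open_locations, key=lambda j: costs[j])
        assign[k], pi[k] = j_star, costs[j_star]
    alpha = {(j, k): max(0.0, pi[k] - c[k][j]) for k in c for j in c[k]}
    return pi, alpha, assign

# Two customers, two locations; only location 0 is open.
c = {"a": {0: 4.0, 1: 2.0}, "b": {0: 1.0, 1: 5.0}}
pi, alpha, assign = benders_subproblem({0}, c)
```

Note how α_jk is positive only for locations cheaper than the one currently serving k, matching the closed-form duals derived on the next slides.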

SLIDE 81

Dual of decomposed Benders subproblem

Z_k(y) = max { π_k − Σ_{j∈J} α_jk y_j | π_k − α_jk ≤ c_jk, α_jk ≥ 0, j ∈ J }

◮ Every optimal solution satisfies π_k = min_{j∈J} {c_jk + α_jk} and α_jk = max{0, π_k − c_jk} = (π_k − c_jk)^+
◮ The last equation implies there exists l ∈ J such that π_k = c_lk
◮ Make sure you understand why!
◮ The decomposed dual can therefore be reformulated as:

    Z_k(y) = max_{l∈J} { c_lk − Σ_{j∈J} (c_lk − c_jk)^+ y_j }

SLIDE 87

Benders reformulation

◮ Feasibility is ensured if at least one location is open: Σ_{j∈J} y_j ≥ 1 is the only needed Benders feasibility cut!
◮ What is the link with an extreme ray of the dual polyhedron?
◮ π_k = α_jk = 1, j ∈ J, defines an extreme ray for which π_k − Σ_{j∈J} α_jk y_j > 0 if all locations are closed!
◮ Using our simplified reformulation of the decomposed dual, the Benders reformulation is:

    min Σ_{j∈J} f_j y_j + Σ_{k∈K} z_k

        c_lk − Σ_{j∈J} (c_lk − c_jk)^+ y_j ≤ z_k,   k ∈ K, l ∈ J

        1 − Σ_{j∈J} y_j ≤ 0

        y_j ∈ {0, 1},   j ∈ J

SLIDE 89

Benders decomposition

◮ Initial Benders master problem: add Σ_{j∈J} y_j ≥ 1 plus one constraint for each k corresponding to the cheapest location
◮ At each iteration, we solve the Benders master problem to get y and the Benders subproblem for that y
◮ The solution to the Benders subproblem for each k is x_{j*k} = 1 for the cheapest open location j*, and x_jk = 0 for j ≠ j*
◮ The solution to the dual of the Benders subproblem for each k is π_k = c_{j*k} (because of strong duality!)
◮ A Benders optimality cut is added for each k such that

    c_{j*k} − Σ_{j∈J} (c_{j*k} − c_jk)^+ y_j > z_k
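The loop above can be sketched end-to-end on a tiny instance; brute-force enumeration of y stands in for a MILP solver of the master problem, and every name and instance value here is illustrative:

```python
from itertools import product

def benders_uflp(f, c, max_iter=50, eps=1e-9):
    """Benders decomposition sketch for a tiny UFLP instance. The master over
    (y, z) is solved by enumerating y; the subproblem decomposes per customer."""
    J, K = sorted(f), sorted(c)

    def cut(k, l):
        # optimality cut for customer k built from location l:
        # c_lk - sum_j (c_lk - c_jk)^+ y_j <= z_k
        return ({j: max(0.0, c[k][l] - c[k][j]) for j in J}, c[k][l])

    # initial master: feasibility cut sum_j y_j >= 1 plus, for each k,
    # the optimality cut built from the overall cheapest location
    cuts = [(k,) + cut(k, min(J, key=lambda j: c[k][j])) for k in K]
    for _ in range(max_iter):
        best = None                       # brute-force master over y
        for bits in product((0, 1), repeat=len(J)):
            if sum(bits) == 0:
                continue                  # violates sum_j y_j >= 1
            y = dict(zip(J, bits))
            # z_k = tightest cut value (the 0.0 floor is valid since c >= 0)
            z = {k: max([0.0] + [const - sum(coef[j] * y[j] for j in J)
                                 for (kk, coef, const) in cuts if kk == k])
                 for k in K}
            val = sum(f[j] * y[j] for j in J) + sum(z.values())
            if best is None or val < best[0]:
                best = (val, y, z)
        lb, y, z = best
        open_J = [j for j in J if y[j] == 1]
        added = False                     # solve subproblem, add violated cuts
        for k in K:
            j_star = min(open_J, key=lambda j: c[k][j])
            coef, const = cut(k, j_star)
            if const - sum(coef[j] * y[j] for j in J) > z[k] + eps:
                cuts.append((k, coef, const))
                added = True
        if not added:                     # master value matches the subproblem
            return lb, y
    raise RuntimeError("did not converge within max_iter")

f = {0: 3.0, 1: 1.0}
c = {"a": {0: 1.0, 1: 4.0}, "b": {0: 2.0, 1: 3.0}}
opt, y = benders_uflp(f, c)
```

On this instance one round of cuts suffices: the first master picks the cheap facility, the subproblem cuts it off, and the second master proves optimality of opening location 0.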