SLIDE 1 Decomposition for Network Design
Bernard Gendron∗ March 2-10, 2016
EPFL, Lausanne, Switzerland
∗ CIRRELT and Département d'informatique et de recherche opérationnelle, Université de Montréal, Canada
SLIDE 2
Outline of lesson 4: Classic examples of decomposition
◮ Branch-and-cut for the traveling salesman problem
◮ Branch-and-price for the integer multicommodity flow problem
◮ Lagrangian relaxation for budget network design
◮ Benders decomposition for uncapacitated facility location
SLIDE 3
Asymmetric traveling salesman problem (ATSP)
◮ Directed network G = (N, A), with node set N = {1, 2, . . . , n} and arc set A
◮ Routing cost cij on each arc (i, j) (possibly different from cji, the routing cost on arc (j, i))
◮ Problem description: find a minimum-cost Hamiltonian circuit, i.e., a circuit that visits each node in N exactly once (we assume such a circuit exists in G)
◮ Useful notation: for any S, T ⊆ N, let A(S, T) = {(i, j) ∈ A | i ∈ S, j ∈ T}
SLIDE 4 Problem formulation
◮ yij: 1, if arc (i, j) is chosen in the Hamiltonian circuit; 0, otherwise

Z = \min \sum_{(i,j) \in A} c_{ij} y_{ij}

\sum_{j \in N} y_{ij} = 1, \quad i \in N

\sum_{j \in N} y_{ji} = 1, \quad i \in N

\sum_{(i,j) \in A(S, N \setminus S)} y_{ij} \ge 1, \quad S \subset N,\ S \ne \emptyset

y_{ij} \in \{0, 1\}, \quad (i,j) \in A
SLIDE 8
Alternative formulations
◮ If routing costs are symmetric (cij = cji for each (i, j)), how would you formulate the problem?
◮ Think of another way of expressing the subtour elimination constraints (SECs) (using the same variables)
◮ Does the resulting model have an equivalent LP relaxation?
◮ Both models have an exponential number of constraints: what to do?
◮ Introduce a set of commodities K = N \ {1} such that O(k) = 1, D(k) = k and d^k = 1 for each k ∈ K
◮ Define multicommodity flow variables x^k_ij: flow of commodity k on arc (i, j)
◮ Using these multicommodity flow variables, express the SECs with a polynomial number of constraints
◮ Does the resulting model have an equivalent LP relaxation?
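Using the commodities just defined, one standard way to express the SECs with a polynomial number of constraints is the flow-based formulation sketched below (coupling the flow variables to the arc variables through linking constraints):

```latex
% One unit of commodity k must flow from node 1 to node k on arcs
% selected by y; this forces every node to be reachable from node 1,
% which excludes subtours.
\begin{aligned}
\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} &=
  \begin{cases} 1, & i = 1 \\ -1, & i = k \\ 0, & \text{otherwise} \end{cases}
  & & i \in N,\ k \in K \\
0 \le x^k_{ij} &\le y_{ij} & & (i,j) \in A,\ k \in K
\end{aligned}
```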
SLIDE 14 Cutting-plane method
◮ Since there is an exponential number of SECs, we use a cutting-plane method to solve the LP relaxation
◮ Separation problem: given a solution y to the current LP relaxation, find the most violated SEC or determine that no violated SEC exists
◮ Consider the following optimization problem:

\gamma = \min \Big\{ \sum_{(i,j) \in A(S, N \setminus S)} y_{ij} \;\Big|\; S \subset N,\ S \ne \emptyset \Big\}

◮ If γ ≥ 1, there is no violated SEC; otherwise, if γ < 1, the optimal solution S corresponds to the most violated SEC
SLIDE 15 Solving the separation problem
◮ The separation problem can be reformulated as:

\gamma = \min_{j = 2, 3, \ldots, n} \gamma_j

\gamma_j = \min \Big\{ \sum_{(u,v) \in A(S, N \setminus S)} y_{uv} \;\Big|\; 1 \in S \subset N,\ j \in N \setminus S \Big\}

◮ Computing γj corresponds to finding the minimum cut between 1 and j with capacity yij on each arc (i, j)
◮ How would you solve this problem?
◮ Any efficient maximum flow algorithm solves this problem in polynomial time
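As an illustration, the 1-j min-cut values can be computed with a simple Edmonds–Karp max-flow; this is a minimal sketch (assuming the fractional y values are stored in a dense capacity matrix, with node 1 at index 0), not an efficient implementation:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap is a dense capacity matrix (here: the fractional y values).
    By max-flow/min-flow duality, the returned value is the s-t min cut."""
    n = len(cap)
    resid = [row[:] for row in cap]
    flow = 0.0
    while True:
        # BFS for an augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and resid[u][v] > 1e-9:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path: flow equals the min-cut value
        # find the bottleneck capacity along the path, then augment
        b, v = float('inf'), t
        while v != s:
            b = min(b, resid[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            u = parent[v]
            resid[u][v] -= b
            resid[v][u] += b
            v = u
        flow += b

def most_violated_sec(y):
    """gamma = min over j != 1 of the 1-j min-cut value; gamma < 1
    identifies a violated subtour elimination constraint."""
    n = len(y)
    return min(max_flow(y, 0, j) for j in range(1, n))
```

If `most_violated_sec(y)` returns a value below 1, the node set S on the source side of the corresponding min cut yields the SEC to add.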
SLIDE 17
Embedding cutting-plane into branch-and-bound
◮ Branch-and-cut: apply the cutting-plane method at each node
◮ Cut-and-branch: apply the cutting-plane method only at the root
◮ Is this standard cut-and-branch approach enough to identify an optimal solution?
◮ We also need to verify that there is no violated SEC at each node where an integer solution is found!
◮ How do we do that? Remember that y is now integer!
◮ A linear-time graph traversal algorithm that verifies the connectivity of the subgraph induced by y does the job!
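A minimal sketch of that check (assuming nodes 0..n−1 and y given as its set of chosen arcs): since the degree constraints force out-degree 1 at every node, the chosen arcs form a union of disjoint circuits, so it suffices to walk successors from one node and count how many nodes are visited.

```python
def is_single_tour(n, arcs):
    """True iff the integer solution y forms one Hamiltonian circuit.
    With the degree constraints satisfied, the induced subgraph is a
    union of disjoint circuits; it is a single tour exactly when the
    circuit through node 0 covers all n nodes (linear time)."""
    succ = dict(arcs)          # node -> unique successor
    seen, node = set(), 0
    while node not in seen:    # walk the circuit containing node 0
        seen.add(node)
        node = succ[node]
    return len(seen) == n
```

If the check fails, the visited set `seen` directly provides a violated SEC to add before resolving the node.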
SLIDE 20
Integer multicommodity flow problem
◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d^k between origin O(k) and destination D(k) for each k ∈ K
◮ Unit transportation cost c^k_ij on arc (i, j) for each commodity k
◮ Capacity uij on each arc (i, j)
◮ Problem description: satisfy the demand of each commodity using a single path per commodity at minimum cost, while respecting the capacity constraints
SLIDE 21
Arc-based model
◮ x^k_ij: 1, if arc (i, j) is used in the chosen path for commodity k; 0, otherwise

Z = \min \sum_{k \in K} \sum_{(i,j) \in A} c^k_{ij} x^k_{ij}

\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} = \begin{cases} 1, & i = O(k) \\ -1, & i = D(k) \\ 0, & i \ne O(k), D(k) \end{cases} \quad i \in N,\ k \in K

\sum_{k \in K} d^k x^k_{ij} \le u_{ij}, \quad (i,j) \in A \quad (\alpha_{ij})

x^k_{ij} \in \{0, 1\}, \quad (i,j) \in A,\ k \in K
SLIDE 26
Lagrangian relaxation of capacity constraints
◮ Derive the Lagrangian subproblem obtained after relaxing the capacity constraints using multipliers α ≥ 0

Z(LR(\alpha)) = \min \sum_{k \in K} \sum_{(i,j) \in A} (c^k_{ij} + \alpha_{ij} d^k) x^k_{ij} - \sum_{(i,j) \in A} \alpha_{ij} u_{ij}

\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} = \begin{cases} 1, & i = O(k) \\ -1, & i = D(k) \\ 0, & i \ne O(k), D(k) \end{cases} \quad i \in N,\ k \in K

x^k_{ij} \in \{0, 1\}, \quad (i,j) \in A,\ k \in K

◮ How do you solve this subproblem? Does it have the integrality property?
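The subproblem separates by commodity into |K| shortest path problems under the Lagrangian arc costs c^k_ij + α_ij d^k; since these costs are nonnegative (c, α, d ≥ 0), Dijkstra applies. A minimal sketch for one commodity (illustrative data structures, assuming the destination is reachable):

```python
import heapq

def lagrangian_subproblem(nodes, arcs, cost, alpha, d, origin, dest):
    """Shortest origin-destination path for one commodity under the
    Lagrangian costs c_ij + alpha_ij * d (nonnegative, so Dijkstra
    applies). Returns the path cost and the path's node sequence;
    x^k is 1 on the path's arcs and 0 elsewhere."""
    adj = {i: [] for i in nodes}
    for (i, j) in arcs:
        adj[i].append((j, cost[i, j] + alpha[i, j] * d))
    dist = {i: float('inf') for i in nodes}
    dist[origin] = 0.0
    pred = {}
    pq = [(0.0, origin)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if du + w < dist[v]:
                dist[v] = du + w
                pred[v] = u
                heapq.heappush(pq, (du + w, v))
    # recover the optimal path from the predecessor labels
    path, v = [dest], dest
    while v != origin:
        v = pred[v]
        path.append(v)
    return dist[dest], path[::-1]
```

Because each shortest path problem has an integral LP relaxation, the subproblem has the integrality property.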
SLIDE 28
Dantzig-Wolfe reformulation
◮ We now know that the Lagrangian dual is equivalent to the LP relaxation
◮ We also know that the Lagrangian dual can be solved with the Dantzig-Wolfe reformulation (obtained from the primal interpretation of Lagrangian duality)
◮ Write down the Dantzig-Wolfe reformulation
◮ Using the fact that the Lagrangian subproblem decomposes into |K| shortest path problems, write down an equivalent disaggregated Dantzig-Wolfe reformulation
◮ Show that this disaggregated Dantzig-Wolfe reformulation is equivalent to the LP relaxation of the path-based model
SLIDE 31 Path-based model
◮ P^k: circuit-free paths between O(k) and D(k) for each k
◮ δ^{kp}_ij: 1, if arc (i, j) is on path p ∈ P^k; 0, otherwise

Z = \min \sum_{k \in K} \sum_{p \in P^k} \Big( \sum_{(i,j) \in A} c^k_{ij} \delta^{kp}_{ij} \Big) \lambda_{kp}

\sum_{p \in P^k} \lambda_{kp} = 1, \quad k \in K \quad (\theta_k)

\sum_{k \in K} \sum_{p \in P^k} d^k \delta^{kp}_{ij} \lambda_{kp} \le u_{ij}, \quad (i,j) \in A \quad (\alpha_{ij})

\lambda_{kp} \in \{0, 1\}, \quad k \in K,\ p \in P^k
SLIDE 35
Column generation method
◮ The LP relaxation of the path-based model has an exponential number of variables: what to do?
◮ Write down the expression of the reduced cost of λkp
◮ The reduced cost of variable λkp is:

\sum_{(i,j) \in A} (c^k_{ij} + \alpha_{ij} d^k) \delta^{kp}_{ij} - \theta_k

◮ What is the pricing problem?
◮ Solve the Lagrangian subproblem for each k to obtain an optimal solution that corresponds to a path p ∈ P^k
◮ If \sum_{(i,j) \in A} (c^k_{ij} + \alpha_{ij} d^k) \delta^{kp}_{ij} - \theta_k < 0, the variable λkp corresponding to the optimal path p ∈ P^k is added
◮ If \sum_{(i,j) \in A} (c^k_{ij} + \alpha_{ij} d^k) \delta^{kp}_{ij} - \theta_k \ge 0 for each k, the column generation method has converged to the optimal LP relaxation
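The pricing test above can be sketched as follows (hypothetical helper names; `best_paths` holds, for each commodity, the arc list of the optimal path returned by the Lagrangian subproblem):

```python
def reduced_cost(path_arcs, c, alpha, d_k, theta_k):
    """Reduced cost of the path variable lambda_kp: sum over the
    path's arcs of (c^k_ij + alpha_ij * d^k), minus theta_k."""
    return sum(c[a] + alpha[a] * d_k for a in path_arcs) - theta_k

def columns_to_add(best_paths, c, alpha, d, theta, eps=1e-9):
    """Keep the commodities whose best path prices out negatively;
    an empty result means column generation has converged."""
    return {k: p for k, p in best_paths.items()
            if reduced_cost(p, c, alpha, d[k], theta[k]) < -eps}
```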
SLIDE 39
Branch-and-price algorithm
◮ At each node, we perform the column generation method
◮ What happens in the pricing problem if we branch on λkp?
◮ When we branch, we have to make sure that we do not destroy the structure of the pricing problem!
◮ A better way to branch is as follows: choose k for which there are at least two paths p(q) such that λ_{kp(q)} > 0, q = 1, 2
◮ Follow the flow from the origin O(k) up to the first node l where paths p(1) and p(2) differ; let (l, j(q)) be the arc originating at l and belonging to path p(q), q = 1, 2
◮ Partition the outgoing arcs of l so that (l, j(q)) ∈ N^+_l(q), q = 1, 2, where N^+_l = N^+_l(1) ∪ N^+_l(2) and N^+_l(1) ∩ N^+_l(2) = ∅; generate the two child nodes defined by the branching constraints:
1) \sum_{j \in N^+_l(1)} x^k_{lj} = 0
2) \sum_{j \in N^+_l(2)} x^k_{lj} = 0
◮ Show that this branching rule is valid and does not change the way we solve the pricing problem
SLIDE 41
Budget network design
◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d^k between origin O(k) and destination D(k) for each k ∈ K
◮ Transportation cost c^k_ij on arc (i, j) for each commodity k
◮ Fixed charge fij incurred whenever arc (i, j) is used to transport some commodity units
◮ Total budget B > 0 on the fixed charges
◮ Problem description: satisfy the demand of each commodity at minimum cost, while respecting the fixed-charge budget
SLIDE 42
Problem formulation
◮ x^k_ij: fraction of the demand d^k carried on arc (i, j)
◮ yij: 1, if arc (i, j) is used; 0, otherwise

Z = \min \sum_{k \in K} \sum_{(i,j) \in A} c^k_{ij} x^k_{ij}

\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} = \begin{cases} 1, & i = O(k) \\ -1, & i = D(k) \\ 0, & i \ne O(k), D(k) \end{cases} \quad i \in N,\ k \in K

x^k_{ij} \ge 0, \quad (i,j) \in A,\ k \in K

x^k_{ij} \le y_{ij}, \quad (i,j) \in A,\ k \in K \quad (\beta^k_{ij})

\sum_{(i,j) \in A} f_{ij} y_{ij} \le B

y_{ij} \in \{0, 1\}, \quad (i,j) \in A
SLIDE 47
Lagrangian relaxation of linking constraints
◮ Derive the Lagrangian subproblem obtained after relaxing the linking constraints using multipliers β ≥ 0

Z(LR(\beta)) = \min \sum_{k \in K} \sum_{(i,j) \in A} (c^k_{ij} + \beta^k_{ij}) x^k_{ij} + \sum_{(i,j) \in A} \Big( -\sum_{k \in K} \beta^k_{ij} \Big) y_{ij}

\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} = \begin{cases} 1, & i = O(k) \\ -1, & i = D(k) \\ 0, & i \ne O(k), D(k) \end{cases} \quad i \in N,\ k \in K

x^k_{ij} \ge 0, \quad (i,j) \in A,\ k \in K

\sum_{(i,j) \in A} f_{ij} y_{ij} \le B, \quad y_{ij} \in \{0, 1\},\ (i,j) \in A

◮ How do you solve this subproblem? Does it have the integrality property?
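The subproblem splits: the x-part decomposes into shortest path problems as before, while the y-part minimizes \sum (-\sum_k β^k_ij) y_ij under the budget, i.e., a 0/1 knapsack that maximizes the total profit p_ij = \sum_k β^k_ij with weights f_ij and capacity B. A minimal dynamic-programming sketch (assuming integer fixed charges; helper names are illustrative):

```python
def design_subproblem(arcs, profit, f, B):
    """0/1 knapsack over the arcs: maximize total profit
    p_ij = sum_k beta^k_ij subject to sum_(i,j) f_ij * y_ij <= B.
    Returns the best profit and the set of arcs with y_ij = 1."""
    best = [0.0] * (B + 1)              # best[b]: max profit within budget b
    chosen = [frozenset() for _ in range(B + 1)]
    for a in arcs:
        w, p = f[a], profit[a]
        for b in range(B, w - 1, -1):   # reverse sweep: each arc used at most once
            if best[b - w] + p > best[b]:
                best[b] = best[b - w] + p
                chosen[b] = chosen[b - w] | {a}
    return best[B], chosen[B]
```

Because the knapsack polytope is not integral, this subproblem does not have the integrality property.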
SLIDE 49 Partial Dantzig-Wolfe reformulation of Lagrangian dual
◮ Q: extreme points of conv{y ∈ {0,1}^{|A|} | \sum_{(i,j) \in A} f_{ij} y_{ij} \le B}
◮ δ^q_ij: 1, if arc (i, j) is chosen in extreme point q; 0, otherwise

Z(LD) = \min \sum_{k \in K} \sum_{(i,j) \in A} c^k_{ij} x^k_{ij}

\sum_{j} x^k_{ij} - \sum_{j} x^k_{ji} = \begin{cases} 1, & i = O(k) \\ -1, & i = D(k) \\ 0, & i \ne O(k), D(k) \end{cases} \quad i \in N,\ k \in K \quad (\pi^k_i)

x^k_{ij} \ge 0, \quad (i,j) \in A,\ k \in K

x^k_{ij} \le \sum_{q \in Q} \delta^q_{ij} \lambda_q, \quad (i,j) \in A,\ k \in K \quad (\beta^k_{ij})

\sum_{q \in Q} \lambda_q = 1 \quad (\theta)

\lambda_q \ge 0, \quad q \in Q
SLIDE 52
Solving the Lagrangian dual
◮ Exponential number of λ variables: column generation!
◮ The number of x variables/linking constraints is polynomial, but can be extremely large: column-and-row generation!
◮ Reduced cost of each λq:

\sum_{(i,j) \in A} \Big( -\sum_{k \in K} \beta^k_{ij} \Big) \delta^q_{ij} - \theta

◮ Problem: since we use column-and-row generation for the x variables/linking constraints, many β^k_ij values are unknown!
◮ Solution: use LP duality to derive the "missing" β^k_ij values
SLIDE 56 Dual of partial Dantzig-Wolfe reformulation

\max \sum_{k \in K} \big( \pi^k_{D(k)} - \pi^k_{O(k)} \big) + \theta

\pi^k_j - \pi^k_i - \beta^k_{ij} \le c^k_{ij}, \quad (i,j) \in A,\ k \in K

\sum_{(i,j) \in A} \sum_{k \in K} \delta^q_{ij} \beta^k_{ij} + \theta \le 0, \quad q \in Q

\beta^k_{ij} \ge 0, \quad (i,j) \in A,\ k \in K

◮ Any optimal solution satisfies \theta = -\max_{q \in Q} \sum_{(i,j) \in A} \sum_{k \in K} \delta^q_{ij} \beta^k_{ij}
◮ Therefore, any optimal solution also satisfies

-\beta^k_{ij} = \min\{0,\ c^k_{ij} + \pi^k_i - \pi^k_j\}, \quad (i,j) \in A,\ k \in K
SLIDE 62 Pricing problem
◮ Finding the variable λq with the smallest reduced cost is thus equivalent to

\min_{q \in Q} \sum_{(i,j) \in A} \delta^q_{ij} \Big( \sum_{k \in K} \min\{0,\ c^k_{ij} + \pi^k_i - \pi^k_j\} \Big) - \theta

◮ Therefore, it is obtained by solving the Lagrangian subproblem

\min \sum_{(i,j) \in A} \Big( \sum_{k \in K} \min\{0,\ c^k_{ij} + \pi^k_i - \pi^k_j\} \Big) y_{ij}

\sum_{(i,j) \in A} f_{ij} y_{ij} \le B, \quad y_{ij} \in \{0, 1\},\ (i,j) \in A
SLIDE 64 Column-and-row generation method
◮ K̄ij: commodities corresponding to the x variables that are NOT in the current restricted master problem
◮ ȳ, ỹ: optimal values of the design variables from the current restricted master and the Lagrangian subproblem (ȳij = \sum_{q \in Q} \delta^q_{ij} \bar{\lambda}_q)
◮ Generation of x variables, for each (i, j):
◮ If ȳij > 0, generate k ∈ K̄ij such that c^k_ij + π^k_i − π^k_j < 0
◮ If ȳij = 0 and ỹij = 1, generate k ∈ K̄ij such that c^k_ij + π^k_i − π^k_j < 0
◮ If ȳij = 0 and ỹij = 0, do not generate any new variables related to (i, j)
◮ After column generation, add linking constraints as cuts
◮ Subgradient optimization to initialize the master problem
◮ With the n best solutions from subgradient optimization:
◮ Generate a subset of extreme points Q̄ ⊆ Q
◮ If δ^q_ij = 1 for at least one q ∈ Q̄ and (x^k_ij > 0 or β^k_ij > 0), generate x^k_ij and the corresponding linking constraint
◮ Do not forget artificial flow variables, one per commodity!
SLIDE 69
Branch-and-price-and-cut
◮ Apply column-and-row generation at every node
◮ How to branch?
◮ The design variables are related to the λq variables as follows:

y_{ij} = \sum_{q \in Q} \delta^q_{ij} \lambda_q, \quad (i,j) \in A

◮ If \sum_{q \in Q} \delta^q_{ij} \lambda_q \in \{0, 1\} for all (i, j) ∈ A, the node can be fathomed
◮ Otherwise, select an arc (i, j) such that \sum_{q \in Q} \delta^q_{ij} \lambda_q is fractional and generate the two child nodes defined by:
◮ \sum_{q \in Q} \delta^q_{ij} \lambda_q = 0
◮ \sum_{q \in Q} \delta^q_{ij} \lambda_q = 1
◮ What happens in the Lagrangian subproblem?
◮ The constraint yij = 0 or yij = 1 is added without any problem!
SLIDE 73
Uncapacitated facility location problem (UFLP)
◮ K: set of customers ◮ J: set of locations for potential facilities ◮ fj ≥ 0: fixed cost for opening facility at location j ◮ cjk ≥ 0: cost of satisfying the demand of customer k from
facility at location j
◮ Problem description: determine the locations of the facilities
to satisfy customers’ demands at minimum cost
SLIDE 74 Problem formulation

\min \sum_{j \in J} \sum_{k \in K} c_{jk} x_{jk} + \sum_{j \in J} f_j y_j

\sum_{j \in J} x_{jk} = 1, \quad k \in K

x_{jk} \le y_j, \quad j \in J,\ k \in K

x_{jk} \ge 0, \quad j \in J,\ k \in K

y_j \in \{0, 1\}, \quad j \in J
SLIDE 75 Benders subproblem

\min \sum_{j \in J} \sum_{k \in K} c_{jk} x_{jk}

\sum_{j \in J} x_{jk} = 1, \quad k \in K \quad (\pi_k)

x_{jk} \le y_j, \quad j \in J,\ k \in K \quad (\alpha_{jk} \ge 0)

x_{jk} \ge 0, \quad j \in J,\ k \in K

◮ How to solve the Benders subproblem?
◮ It decomposes by customer k: assign each customer to the cheapest open location
◮ Give a simple condition to ensure feasibility
◮ Feasibility requires that at least one location is open!
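A minimal sketch of this decomposed solve (c given as a |J| × |K| cost matrix, y as the list of open locations; the dual values follow in closed form as π_k = c_{j*k} and α_jk = (π_k − c_jk)^+):

```python
def benders_subproblem(c, open_locs):
    """For fixed y, assign each customer k to its cheapest open
    location j*, and recover the dual values pi_k = c[j*][k] and
    alpha_jk = max(0, pi_k - c[j][k])."""
    assert open_locs, "infeasible: at least one location must be open"
    n_loc, n_cust = len(c), len(c[0])
    x, pi, alpha = {}, {}, {}
    for k in range(n_cust):
        jstar = min(open_locs, key=lambda j: c[j][k])
        x[k] = jstar                   # x_{j*k} = 1, all other x_{jk} = 0
        pi[k] = c[jstar][k]
        for j in range(n_loc):
            alpha[j, k] = max(0.0, pi[k] - c[j][k])
    return x, pi, alpha
```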
SLIDE 78 Dual of the decomposed Benders subproblem

Z_k(y) = \max \Big\{ \pi_k - \sum_{j \in J} \alpha_{jk} y_j \;\Big|\; \pi_k - \alpha_{jk} \le c_{jk},\ \alpha_{jk} \ge 0,\ j \in J \Big\}

◮ Every optimal solution satisfies \pi_k = \min_{j \in J} \{ c_{jk} + \alpha_{jk} \} and \alpha_{jk} = \max\{0, \pi_k - c_{jk}\} = (\pi_k - c_{jk})^+
◮ The last equation implies there exists l ∈ J such that \pi_k = c_{lk}
◮ Make sure you understand why!
◮ The decomposed dual can therefore be reformulated as:

Z_k(y) = \max_{l \in J} \Big\{ c_{lk} - \sum_{j \in J} (c_{lk} - c_{jk})^+ y_j \Big\}
SLIDE 82 Benders reformulation
◮ Feasibility is ensured if at least one location is open: \sum_{j \in J} y_j \ge 1 is the only Benders feasibility cut needed!
◮ What is the link with the extreme rays of the dual polyhedron?
◮ \pi_k = \alpha_{jk} = 1, j ∈ J, defines an extreme ray for which \pi_k - \sum_{j \in J} \alpha_{jk} y_j > 0 if all locations are closed!
◮ Using our simplified reformulation of the decomposed dual, the Benders reformulation is:

\min \sum_{j \in J} f_j y_j + \sum_{k \in K} z_k

c_{lk} - \sum_{j \in J} (c_{lk} - c_{jk})^+ y_j \le z_k, \quad k \in K,\ l \in J

1 - \sum_{j \in J} y_j \le 0

y_j \in \{0, 1\}, \quad j \in J
SLIDE 88
Benders decomposition
◮ Initial Benders master problem: add \sum_{j \in J} y_j \ge 1 plus one constraint for each k corresponding to the cheapest location
◮ At each iteration, we solve the Benders master problem to get y, then the Benders subproblem for that y
◮ The solution to the Benders subproblem for each k is x_{j^*k} = 1 for the cheapest open location j^*, and x_{jk} = 0 for j \ne j^*
◮ The solution to the dual of the Benders subproblem for each k is \pi_k = c_{j^*k} (because of strong duality!)
◮ A Benders optimality cut is added for each k such that

c_{j^*k} - \sum_{j \in J} (c_{j^*k} - c_{jk})^+ y_j > z_k
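The resulting cut-separation step can be sketched as follows (a hypothetical helper, not the full algorithm: c as a |J| × |K| matrix, y the binary master solution, z the current master values of the z_k variables):

```python
def violated_optimality_cuts(c, y, z, eps=1e-9):
    """For each customer k, evaluate the optimality cut at l = j*
    (the cheapest open location) and report it if it cuts off the
    current master solution:
    c[j*][k] - sum_j (c[j*][k] - c[j][k])^+ y_j > z_k."""
    open_locs = [j for j, yj in enumerate(y) if yj > 0.5]
    cuts = []
    for k in range(len(c[0])):
        jstar = min(open_locs, key=lambda j: c[j][k])
        lhs = c[jstar][k] - sum(max(0.0, c[jstar][k] - c[j][k]) * y[j]
                                for j in range(len(c)))
        if lhs > z[k] + eps:
            cuts.append((k, jstar, lhs))
    return cuts
```

For an integer y the cut value reduces to c_{j*k}, so a cut is generated exactly for the customers whose master value z_k underestimates their true assignment cost.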