SLIDE 1 Decomposition for Network Design
Bernard Gendron∗ March 16-17, 2016
EPFL, Lausanne, Switzerland
∗ CIRRELT and Département d'informatique et de recherche opérationnelle, Université de Montréal, Canada
SLIDE 2
Outline of lesson 5: Multicommodity capacitated network design
◮ Fixed-charge problem
  ◮ Modelling alternatives
  ◮ Cutting-plane method
  ◮ Column-and-row generation
  ◮ Lagrangian relaxation
  ◮ Benders decomposition
◮ Capacity installation case
  ◮ General integer formulation
  ◮ Binary formulation
  ◮ Polyhedral results
  ◮ Column-and-row generation
SLIDE 3
Multicommodity capacitated fixed-charge network design
◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d^k between origin O(k) and destination D(k) for each k ∈ K
◮ Unit transportation cost c_ij on each arc (i, j)
◮ Capacity u_ij on each arc (i, j)
◮ Fixed charge f_ij incurred whenever arc (i, j) is used to transport some commodity units
SLIDE 4 Problem formulation (MCNDF)

Z = min Σ_{(i,j)∈A} Σ_{k∈K} c_ij x^k_ij + Σ_{(i,j)∈A} f_ij y_ij

Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^−} x^k_ji = { d^k, if i = O(k); −d^k, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K

Σ_{k∈K} x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A

x^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K

y_ij ∈ {0, 1},  (i, j) ∈ A

◮ How would you solve the LP relaxation? What do you think of the lower bound?
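Before discussing relaxations, it helps to make the formulation concrete. The sketch below (the toy instance and helper names are ours, not from the slides) checks a candidate (x, y) pair against the MCNDF constraints and evaluates its cost:

```python
# Minimal MCNDF feasibility/cost check on a toy instance (data is ours).

def mcndf_check(nodes, arcs, commodities, x, y, tol=1e-9):
    """arcs: {(i,j): (c_ij, u_ij, f_ij)}; commodities: {k: (O, D, d)};
    x: {(k, i, j): flow}; y: {(i,j): 0 or 1}. Returns (feasible, cost)."""
    # Flow conservation at every node, for every commodity.
    for k, (o, dest, dem) in commodities.items():
        for i in nodes:
            out_f = sum(x.get((k, i, j), 0.0) for (a, j) in arcs if a == i)
            in_f = sum(x.get((k, j, i), 0.0) for (j, a) in arcs if a == i)
            rhs = dem if i == o else (-dem if i == dest else 0.0)
            if abs(out_f - in_f - rhs) > tol:
                return False, None
    # Linking/capacity: total flow on (i, j) at most u_ij * y_ij.
    for (i, j), (c, u, f) in arcs.items():
        tot = sum(x.get((k, i, j), 0.0) for k in commodities)
        if tot > u * y.get((i, j), 0) + tol:
            return False, None
    cost = sum(c * x.get((k, i, j), 0.0)
               for (i, j), (c, u, f) in arcs.items() for k in commodities)
    cost += sum(f * y.get((i, j), 0) for (i, j), (c, u, f) in arcs.items())
    return True, cost

# Route 5 units of one commodity from node 1 to node 3 on the direct arc.
nodes = [1, 2, 3]
arcs = {(1, 2): (1.0, 10.0, 4.0), (2, 3): (1.0, 10.0, 4.0), (1, 3): (3.0, 10.0, 4.0)}
commodities = {"k1": (1, 3, 5.0)}
x = {("k1", 1, 3): 5.0}
y = {(1, 3): 1}
ok, cost = mcndf_check(nodes, arcs, commodities, x, y)  # cost = 3*5 + 4
```

In the LP relaxation, y_ij drops to the fractional value (Σ_k x^k_ij)/u_ij, which is why the weak LP bound can be poor.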
SLIDES 5-8 Commodity representation
◮ Each commodity is identified with one origin and one destination: disaggregated representation
◮ Give an estimate of the maximum number of commodities
◮ In this model, costs and capacities are independent of the commodities
◮ Use this observation to suggest a different way of representing the commodities
◮ All commodities with the same origin can be identified as a single commodity with multiple destinations: aggregated representation
◮ What is the maximum number of commodities in this representation?
◮ |N| instead of |N|^2, so many fewer variables!
◮ Which representation is better and why?
SLIDES 9-11 Basic inequalities
◮ Suggest simple valid inequalities to improve the LP relaxation
◮ Strong inequality (SI): x^k_ij ≤ d^k y_ij,  (i, j) ∈ A, k ∈ K
◮ S ⊂ N is a cutset if at least one commodity has its origin in S and its destination in S̄ = N \ S
◮ (S, S̄): set of arcs (i, j) that cross the cutset (i ∈ S and j ∈ S̄)
◮ d(S, S̄): demand of all commodities with origin in S and destination in S̄
◮ Suggest a simple valid inequality based on a cutset (S, S̄)
◮ Cutset inequality: Σ_{(i,j)∈(S,S̄)} u_ij y_ij ≥ d(S, S̄)
◮ Does this inequality improve the LP relaxation lower bound?
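A separation-style check for a single cutset can be sketched as follows (toy data and function names are ours): collect the crossing arcs, sum the crossing demand, and test whether a fractional design vector y violates the cutset inequality.

```python
# Evaluate the cutset inequality sum u_ij y_ij >= d(S, S-bar) for a given S.

def cutset_violation(S, arcs, commodities, y):
    """arcs: {(i,j): u_ij}; commodities: {k: (O, D, d)}; y: {(i,j): value}.
    Returns (crossing arcs, d(S, S-bar), violation; positive => violated)."""
    crossing = [(i, j) for (i, j) in arcs if i in S and j not in S]
    d_cut = sum(d for (o, dest, d) in commodities.values()
                if o in S and dest not in S)
    lhs = sum(arcs[a] * y.get(a, 0.0) for a in crossing)
    return crossing, d_cut, d_cut - lhs

arcs = {(1, 2): 8.0, (1, 3): 6.0, (3, 2): 6.0}
commodities = {"k1": (1, 2, 10.0)}
# A weak-LP-style fractional design vector.
y = {(1, 2): 0.5, (1, 3): 0.5, (3, 2): 0.5}
crossing, d_cut, viol = cutset_violation({1, 3}, arcs, commodities, y)
# crossing capacity 8*0.5 + 6*0.5 = 7 < d_cut = 10: the inequality is violated.
```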
SLIDES 12-14 Knapsack inequalities
◮ Cover C ⊆ (S, S̄): set of arcs such that Σ_{(i,j)∈(S,S̄)\C} u_ij < d(S, S̄)
◮ Suggest a simple valid inequality based on a cover
◮ Cover inequality (CI): Σ_{(i,j)∈C} y_ij ≥ 1
◮ l_S: minimum number of arcs in (S, S̄) needed to satisfy d(S, S̄)
◮ Give an algorithm to compute l_S and a valid inequality based on l_S
◮ Minimum cardinality inequality (MCI): Σ_{(i,j)∈(S,S̄)} y_ij ≥ l_S
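Computing l_S is a one-line greedy argument: take the cutset arcs in decreasing order of capacity until the demand is covered. A sketch (function name ours):

```python
def min_cardinality(capacities, demand):
    """l_S: least number of arcs of (S, S-bar) whose capacities cover demand.
    Greedy on capacities sorted in decreasing order is exact here, since any
    feasible set of m arcs is dominated by the m largest capacities."""
    total, count = 0.0, 0
    for u in sorted(capacities, reverse=True):
        if total >= demand:
            break
        total += u
        count += 1
    if total < demand:
        return None  # the demand cannot be met across this cutset
    return count
```

For capacities [5, 4, 3, 2] and demand 8, the two largest arcs suffice, so l_S = 2 and the MCI forces at least two design variables to one.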
SLIDE 15 Flow cover/pack inequalities
(C1, D1 ⊆ (S, S̄) and C2, D2 ⊆ (S̄, S) denote the cover/pack index sets)

◮ Flow cover inequality (FCI)

Σ_{(i,j)∈C1} (x^L_ij + (b^L_ij − µ)^+ (1 − y_ij)) ≤ Σ_{(j,i)∈C2} min{b^L_ji, µ} y_ji + Σ_{(j,i)∈D2} b^L_ji + d^L_(S,S̄) + Σ_{(j,i)∈(S̄,S)\(C2∪D2)} x^L_ji

◮ Flow pack inequality (FPI)

Σ_{(i,j)∈C1} x^L_ij + Σ_{(i,j)∈D1} (x^L_ij − min{b^L_ij, −µ} y_ij) ≤ − Σ_{(j,i)∈C2} (b^L_ji + µ)^+ (1 − y_ji) + Σ_{(j,i)∈(S̄,S)\C2} x^L_ji + Σ_{(i,j)∈D1} b^L_ij
SLIDE 16 Cutting-plane method
◮ Starting with the weak LP relaxation, iteratively add violated valid inequalities:
  ◮ To be more efficient: keep the problem size as small as possible
  ◮ To be more effective: improve the lower bound
◮ But the black-box solver already does that, so why not simply use it?
◮ True, but we can be more efficient and more effective by exploiting the structure of MCNDF
◮ Five classes of valid inequalities:
  ◮ Strong inequalities (SI)
  ◮ Cover inequalities (CI)
  ◮ Minimum cardinality inequalities (MCI)
  ◮ Flow cover inequalities (FCI)
  ◮ Flow pack inequalities (FPI)
SLIDE 17 Cutting-plane method: computational results
◮ The disaggregated commodity representation outperforms the aggregated one, even when all inequalities are generated
◮ Single-node cutset inequalities are almost as effective as multi-node cutset inequalities, but much faster to generate
◮ On instances with few commodities O(10) and many nodes O(100), FCI/FPI are the most effective (but costly)
◮ On instances with many commodities O(100) and few nodes O(10), SI are the most effective and fast to generate
◮ Cut-and-branch with CPLEX: our cutting-plane method is competitive with CPLEX's own cutting planes
  ◮ Slower on instances with few commodities O(10) and many nodes O(100)
  ◮ Faster on instances with many commodities O(100) and few nodes O(10)
◮ A branch-and-cut algorithm has been developed based on the same cutting-plane method
SLIDE 18 Column-and-row generation
◮ Extension of the cutting-plane method
◮ At each iteration, not all the flow variables are generated
◮ Flow variables are gradually added to the LP relaxation by pricing the variables:
  ◮ Solve the restricted LP relaxation
  ◮ Compute the reduced costs of non-generated flow variables
  ◮ Add (some of) the variables with negative reduced costs
◮ The restricted LP relaxation is solved by the cutting-plane method (only SI are added)
SLIDE 19 Restricted LP relaxation

min Σ_{(i,j)∈A+} Σ_{k∈K_ij} c_ij x^k_ij + Σ_{(i,j)∈A+} f_ij y_ij

Σ_{j∈N_i^+(k)} x^k_ij − Σ_{j∈N_i^−(k)} x^k_ji = { d^k, if i = O(k); −d^k, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K  (π^k_i)

Σ_{k∈K_ij} x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A+  (α_ij)

x^k_ij ≤ d^k y_ij,  (i, j) ∈ A+, k ∈ K̃_ij ⊆ K_ij  (β^k_ij)

y_ij ≤ 1,  (i, j) ∈ A+  (γ_ij)

x^k_ij ≥ 0,  (i, j) ∈ A+, k ∈ K_ij

y_ij ≥ 0,  (i, j) ∈ A+
SLIDES 20-24 LP dual

max Σ_{k∈K} d^k (π^k_{D(k)} − π^k_{O(k)}) − Σ_{(i,j)∈A+} γ_ij

π^k_j − π^k_i − α_ij − β^k_ij ≤ c_ij,  (i, j) ∈ A, k ∈ K  (x^k_ij)

u_ij α_ij + Σ_{k∈K} d^k β^k_ij − γ_ij ≤ f_ij,  (i, j) ∈ A  (y_ij)

α_ij ≥ 0,  (i, j) ∈ A

β^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K

γ_ij ≥ 0,  (i, j) ∈ A
SLIDES 25-30 Complementary slackness conditions

x^k_ij (c_ij + π^k_i − π^k_j + α_ij + β^k_ij) = 0,  (i, j) ∈ A, k ∈ K

y_ij (f_ij − u_ij α_ij − Σ_{k∈K} d^k β^k_ij + γ_ij) = 0,  (i, j) ∈ A

α_ij (u_ij y_ij − Σ_{k∈K} x^k_ij) = 0,  (i, j) ∈ A

β^k_ij (d^k y_ij − x^k_ij) = 0,  (i, j) ∈ A, k ∈ K

γ_ij (1 − y_ij) = 0,  (i, j) ∈ A
SLIDES 31-36 Reduced cost optimality conditions for (i, j), k
◮ k ∈ K_ij: conditions are automatically satisfied
◮ k ∉ K_ij: add flow variables that do not satisfy the conditions
◮ Define c^k_ij(π, α) ≡ c_ij + π^k_i − π^k_j + α_ij, k ∈ K, and f_ij(α) ≡ f_ij − u_ij α_ij
◮ Case 1: y_ij > 0
  ◮ β^k_ij (d^k y_ij − x^k_ij) = 0 and y_ij > 0 ⟹ β^k_ij = 0, k ∉ K_ij
  ◮ Reduced cost optimality condition: c^k_ij(π, α) ≥ 0
◮ Case 2: y_ij = 0
  ◮ γ_ij (1 − y_ij) = 0 and y_ij = 0 ⟹ γ_ij = 0 ⟹ f_ij(α) ≥ Σ_{k∈K} d^k β^k_ij
  ◮ But we have β^k_ij ≥ max{0, −c^k_ij(π, α)}
  ◮ Reduced cost optimality condition: c^k_ij(π, α) ≥ 0 AND f_ij(α) ≥ Σ_{k∈K} d^k max{0, −c^k_ij(π, α)}
SLIDE 37 Pricing problem
◮ Decomposes by arc (i, j)
◮ y_ij > 0: for any k ∉ K_ij such that c^k_ij(π, α) < 0, add flow variable x^k_ij
◮ y_ij = 0 and f_ij(α) < Σ_{k∈K} d^k max{0, −c^k_ij(π, α)}: for any k ∉ K_ij such that c^k_ij(π, α) < 0, add flow variable x^k_ij
◮ Make a connection between this pricing problem and a Lagrangian relaxation
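The arc-wise pricing rule can be sketched directly (dual values and parameter names below are illustrative, not from the slides), using the definitions c^k_ij(π, α) = c_ij + π^k_i − π^k_j + α_ij and f_ij(α) = f_ij − u_ij α_ij:

```python
# Pricing for one arc (i, j): decide which non-generated flow variables to add.

def price_arc(cij, fij, uij, alpha, pi_i, pi_j, d, K_ij, y_val):
    """pi_i, pi_j, d: {k: value}; K_ij: commodities already generated;
    y_val: current value of y_ij. Returns the commodities to add."""
    cbar = {k: cij + pi_i[k] - pi_j[k] + alpha for k in d}   # c^k_ij(pi, alpha)
    fbar = fij - uij * alpha                                  # f_ij(alpha)
    missing = [k for k in d if k not in K_ij]
    if y_val > 0:
        return {k for k in missing if cbar[k] < 0}
    # y_ij = 0: price only if f_ij(alpha) < sum_k d^k max{0, -c^k_ij(pi, alpha)}
    if fbar < sum(d[k] * max(0.0, -cbar[k]) for k in d):
        return {k for k in missing if cbar[k] < 0}
    return set()
```

With c_ij = 2, α_ij = 0 and π-values making commodity 'a' attractive (reduced cost −3) but 'b' not (+1), the rule adds only 'a' when the fixed-cost test fails.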
SLIDES 38-39 Column-and-row generation: computational results
◮ Column-and-row generation is embedded into branch-and-bound: branch-and-price-and-cut
◮ How is branching performed?
◮ Simply branch on the y_ij variables: they appear in both the restricted LP and the pricing problem
◮ B&P&C is faster and more effective than B&C (B&P&C without column generation)
◮ B&P&C is faster and more effective than CPLEX on instances with many commodities O(100) and few nodes O(10)
SLIDE 40 Lagrangian relaxation

Z = min Σ_{(i,j)∈A} Σ_{k∈K} c_ij x^k_ij + Σ_{(i,j)∈A} f_ij y_ij

Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^−} x^k_ji = { d^k, if i = O(k); −d^k, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K  (π^k_i)

Σ_{k∈K} x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A  (α_ij)

x^k_ij ≤ b^k_ij y_ij,  (i, j) ∈ A, k ∈ K  (β^k_ij)

x^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K

y_ij ∈ {0, 1},  (i, j) ∈ A

◮ Suggest several Lagrangian relaxations
SLIDES 41-42 Shortest path relaxation

Z(α, β) = min Σ_{(i,j)∈A} Σ_{k∈K} (c_ij + α_ij + β^k_ij) x^k_ij + Σ_{(i,j)∈A} (f_ij − u_ij α_ij − Σ_{k∈K} b^k_ij β^k_ij) y_ij

Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^−} x^k_ji = { d^k, if i = O(k); −d^k, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K

x^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K

y_ij ∈ {0, 1},  (i, j) ∈ A

◮ How would you solve this problem?
◮ Does it have the integrality property?
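The relaxation separates: the x-part is one shortest path per commodity under the modified costs c_ij + α_ij + β^k_ij (the Bellman-Ford sketch below assumes no negative cycle), while the y-part sets y_ij = 1 exactly when its modified fixed cost is negative. Toy data and names are ours:

```python
# Evaluate Z(alpha, beta) on a small instance by shortest paths + a sign test.

def bellman_ford(nodes, costs, src):
    """Shortest-path distances from src; costs: {(i,j): length}."""
    dist = {n: float("inf") for n in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):
        for (i, j), c in costs.items():
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    return dist

def shortest_path_relaxation(nodes, arcs, commodities, alpha, beta, b):
    """arcs: {(i,j): (c, u, f)}; commodities: {k: (O, D, d)};
    beta, b: {(k, i, j): value}. Returns the Lagrangian bound Z(alpha, beta)."""
    z = 0.0
    for k, (o, dest, dem) in commodities.items():
        costs = {(i, j): c + alpha[(i, j)] + beta.get((k, i, j), 0.0)
                 for (i, j), (c, u, f) in arcs.items()}
        z += dem * bellman_ford(nodes, costs, o)[dest]  # d^k units on a shortest path
    for (i, j), (c, u, f) in arcs.items():
        ftil = f - u * alpha[(i, j)] - sum(
            b.get((k, i, j), 0.0) * beta.get((k, i, j), 0.0) for k in commodities)
        z += min(0.0, ftil)  # open the arc iff its modified fixed cost is negative
    return z

nodes = [1, 2, 3]
arcs = {(1, 2): (1.0, 10.0, 5.0), (2, 3): (1.0, 10.0, 5.0), (1, 3): (4.0, 10.0, 2.0)}
commodities = {"k1": (1, 3, 2.0)}
alpha = {a: 0.0 for a in arcs}
z = shortest_path_relaxation(nodes, arcs, commodities, alpha, {}, {})
```

With all multipliers at zero, the bound is just the shortest-path routing cost (here 2 units at unit cost 2), which illustrates why this relaxation alone is weak.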
SLIDES 43-44 Knapsack relaxation

Z(π) = min Σ_{(i,j)∈A} Σ_{k∈K} (c_ij + π^k_i − π^k_j) x^k_ij + Σ_{(i,j)∈A} f_ij y_ij + Σ_{k∈K} d^k (π^k_{D(k)} − π^k_{O(k)})

Σ_{k∈K} x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A

x^k_ij ≤ b^k_ij y_ij,  (i, j) ∈ A, k ∈ K

x^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K

y_ij ∈ {0, 1},  (i, j) ∈ A

◮ How would you solve this problem?
◮ Does it have the integrality property?
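This relaxation decomposes by arc: with y_ij = 1, the best flow is a continuous knapsack that fills the capacity u_ij with the most negative reduced-cost commodities (each bounded by b^k_ij); y_ij = 0 gives value zero. A per-arc sketch (names and data ours):

```python
# Solve one arc of the knapsack relaxation.

def knapsack_arc(f, u, ctil, b):
    """ctil: {k: c_ij + pi^k_i - pi^k_j}; b: {k: b^k_ij}.
    Greedy by unit reduced cost is exact here, since every flow unit
    consumes exactly one unit of the shared capacity u."""
    val, cap = f, u
    for k in sorted(ctil, key=ctil.get):        # most negative first
        if ctil[k] >= 0 or cap <= 0:
            break
        take = min(b[k], cap)
        val += ctil[k] * take
        cap -= take
    return min(0.0, val)                        # keep y_ij = 0 if opening loses
```

For f = 3, u = 5 and reduced costs {−2, −1, +4} with bounds {3, 4, 5}, the arc opens and carries 3 units of the first commodity and 2 of the second, for value 3 − 6 − 2 = −5.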
SLIDES 45-46 Lagrangian relaxation: computational results
◮ What can you say about the Lagrangian dual lower bounds?
◮ Both Lagrangian relaxations provide the same lower bound as the strong LP relaxation
◮ To find (near-)optimal Lagrangian multipliers, two classes of methods have traditionally been used:
  ◮ Column generation (CG) methods
  ◮ Subgradient methods
◮ Our computational results show that:
  ◮ CG methods are much more robust
  ◮ CG methods converge faster
  ◮ Either method converges much faster than solving the strong LP relaxation with the simplex method (without cutting-plane or column-and-row generation)
SLIDES 47-48 Benders subproblem
◮ Fix the design variables to ȳ by solving a (MIP) master problem
◮ Write down the Benders subproblem: what is the structure of this problem?
◮ Solve the multicommodity flow subproblem restricted to ȳ:

Z_x(ȳ) = min Σ_{(i,j)∈A} Σ_{k∈K} c_ij x^k_ij

Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^−} x^k_ji = { d^k, if i = O(k); −d^k, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K

Σ_{k∈K} x^k_ij ≤ u_ij ȳ_ij,  (i, j) ∈ A

x^k_ij ≥ 0,  (i, j) ∈ A, k ∈ K
SLIDES 49-50 Dual of the Benders subproblem
◮ Write down the dual of the Benders subproblem
◮ The dual of the Benders subproblem has the following structure:

Z_x(ȳ) = max Σ_{k∈K} d^k π^k_{D(k)} − Σ_{(i,j)∈A} u_ij ȳ_ij α_ij

π^k_j − π^k_i − α_ij ≤ c_ij,  (i, j) ∈ A, k ∈ K

α_ij ≥ 0,  (i, j) ∈ A

◮ Note: we can eliminate one of the π multipliers for each commodity (π^k_{O(k)} = 0)
SLIDES 51-53 Benders cuts and master problem
◮ If the LP dual is bounded, then generate an optimality cut
◮ Write down the Benders optimality cut

Σ_{k∈K} d^k π^k_{D(k)} − Σ_{(i,j)∈A} u_ij y_ij α_ij ≤ z

◮ If the LP dual is unbounded, then generate a feasibility cut
◮ Write down the Benders feasibility cut

Σ_{k∈K} d^k π^k_{D(k)} − Σ_{(i,j)∈A} u_ij y_ij α_ij ≤ 0

◮ Add the (optimality or feasibility) cut to the master problem defined by the objective: min Σ_{(i,j)∈A} f_ij y_ij + z
◮ Find a new ȳ and perform another iteration
◮ This process converges to an optimal solution of MCNDF
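Building a cut from a dual solution is mechanical, which the following sketch shows (names ours). The same linear expression serves as an optimality cut (≤ z) for a bounded dual and as a feasibility cut (≤ 0) for an unbounded ray:

```python
# Assemble the coefficients of one Benders cut from dual values (pi, alpha).

def benders_cut(pi, alpha, arcs, commodities):
    """pi: {k: pi^k_{D(k)}} (with pi^k_{O(k)} = 0 eliminated);
    alpha: {(i,j): alpha_ij}; arcs: {(i,j): u_ij}; commodities: {k: d^k}.
    Returns (constant, {(i,j): coefficient of y_ij}) for
    constant + sum_ij coeff_ij * y_ij <= z  (or <= 0)."""
    const = sum(commodities[k] * pi[k] for k in commodities)
    coeff = {a: -arcs[a] * alpha.get(a, 0.0) for a in arcs}
    return const, coeff
```

For d = 2, π = 4 on one commodity and u = 5, α = 1 on one arc, the cut reads 8 − 5 y_ij ≤ z: whenever the master closes the arc, z must absorb the full dual routing value.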
SLIDE 54 Benders decomposition: implementation and results
◮ Several techniques are implemented to accelerate the convergence of the algorithm:
  ◮ Add cutset inequalities to the master problem
  ◮ Solve the (strong) LP relaxation by Benders decomposition (master problem ≡ LP)
  ◮ Use a (slope scaling) heuristic to generate several tentative ȳ
  ◮ When solving the MIP master problem by branch-and-bound, collect all feasible solutions to generate several tentative ȳ
◮ Even with these improvements (and many others), the method is not competitive with a simplex-based branch-and-cut approach
◮ But Benders feasibility cuts can be used in any branch-and-cut approach!
SLIDE 55 Capacity installation multicommodity network design
◮ Directed network G = (N, A), with node set N and arc set A
◮ Commodity set K: known demand d^k between origin O(k) and destination D(k) for each k ∈ K
◮ Unit transportation cost c_ij on each arc (i, j)
◮ Capacity u_ij on each arc (i, j)
◮ Cost f_ij for each capacity unit installed on arc (i, j)
SLIDE 56 General integer formulation (I)

min Σ_{(i,j)∈A} Σ_{k∈K} d^k c_ij x^k_ij + Σ_{(i,j)∈A} f_ij y_ij

Σ_{j∈N_i^+} x^k_ij − Σ_{j∈N_i^−} x^k_ji = { 1, if i = O(k); −1, if i = D(k); 0, if i ≠ O(k), D(k) },  i ∈ N, k ∈ K

Σ_{k∈K} d^k x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A

0 ≤ x^k_ij ≤ 1,  (i, j) ∈ A, k ∈ K

y_ij ≥ 0 and integer,  (i, j) ∈ A
SLIDE 57 Lagrangian relaxation of flow conservation

Z(π) = min Σ_{(i,j)∈A} Σ_{k∈K} (d^k c_ij + π^k_i − π^k_j) x^k_ij + Σ_{(i,j)∈A} f_ij y_ij + Σ_{k∈K} (π^k_{D(k)} − π^k_{O(k)})

Σ_{k∈K} d^k x^k_ij ≤ u_ij y_ij,  (i, j) ∈ A

0 ≤ x^k_ij ≤ 1,  (i, j) ∈ A, k ∈ K

y_ij ≥ 0 and integer,  (i, j) ∈ A

◮ Lagrangian subproblem decomposes by arc
◮ Easy (≈ 2 continuous knapsacks) but no integrality property
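The per-arc subproblem can also be solved by a simpler (slower) exact scheme than the slides' two-knapsack route: enumerate the integer capacity level y and solve a continuous knapsack for each level. A sketch, assuming f_ij ≥ 0 so that the enumeration can stop once the capacity covers all attractive demand (names and data ours):

```python
import math

# One arc of the Lagrangian subproblem of formulation (I), by enumeration.

def lagrangian_arc(f, u, ctil, d):
    """ctil: {k: d^k c_ij + pi^k_i - pi^k_j}; d: {k: d^k}.
    Returns (best value, best y). For fixed y, the inner problem is a
    continuous knapsack: fill u*y greedily by unit reduced cost ctil/d."""
    neg = sorted((k for k in ctil if ctil[k] < 0), key=lambda k: ctil[k] / d[k])
    # Beyond this level the capacity already covers all attractive demand.
    y_max = math.ceil(sum(d[k] for k in neg) / u) if neg else 0
    best_val, best_y = 0.0, 0
    for y in range(y_max + 1):
        cap, val = u * y, f * y
        for k in neg:
            take = min(1.0, cap / d[k]) if d[k] > 0 else 0.0  # fraction of k
            val += ctil[k] * take
            cap -= d[k] * take
            if cap <= 0:
                break
        if val < best_val:
            best_val, best_y = val, y
    return best_val, best_y
```

With f = 2, u = 4 and two commodities of demand 4 and reduced costs −6 and −3, one unit of capacity carries only the first commodity (value −4), while two units carry both (value −5), so y = 2 is optimal.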
SLIDE 58 Residual capacity inequalities
◮ For any P ⊆ K, define d_P = Σ_{k∈P} d^k
◮ Then, for any (i, j) ∈ A, define
  a^P_ij = d_P / u_ij,  q^P_ij = ⌈a^P_ij⌉,  r^P_ij = a^P_ij − ⌊a^P_ij⌋
◮ Residual capacity inequalities (with a^k_ij = d^k / u_ij):
  Σ_{k∈P} a^k_ij (1 − x^k_ij) ≥ r^P_ij (q^P_ij − y_ij),  (i, j) ∈ A, P ⊆ K
◮ They characterize the convex hull of solutions to the Lagrangian subproblem (Magnanti, Mirchandani, Vachani 1993)
◮ Separation can be performed in O(|A||K|) (Atamtürk, Rajan 2002)
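Evaluating one of these inequalities for a given arc and set P is straightforward; the sketch below (names ours) follows the definitions above verbatim:

```python
import math

def residual_capacity_violation(P, x, y, d, u):
    """Evaluate the residual capacity inequality for one arc (i, j) and P:
    sum_{k in P} a^k (1 - x^k) >= r^P (q^P - y), with a^k = d^k / u,
    a^P = d_P / u, q^P = ceil(a^P), r^P = a^P - floor(a^P).
    x: {k: x^k_ij}; d: {k: d^k}. Positive return value => violated."""
    aP = sum(d[k] for k in P) / u
    qP, rP = math.ceil(aP), aP - math.floor(aP)
    lhs = sum((d[k] / u) * (1.0 - x[k]) for k in P)
    return rP * (qP - y) - lhs
```

For u = 10 and demands 7 and 8 (so a^P = 1.5, q^P = 2, r^P = 0.5), the fractional point x = (1, 1), y = 1.5 satisfies the capacity constraint with equality but violates the inequality by 0.25; integer feasibility would require y = 2.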
SLIDE 59 Multiple choice model

y_ij ≤ ⌈ Σ_{k∈K} d^k / u_ij ⌉ ≡ T_ij,  S_ij = {1, ..., T_ij}

y^s_ij = { 1, if y_ij = s; 0, otherwise },  s ∈ S_ij

x^s_ij = { Σ_{k∈K} d^k x^k_ij, if y_ij = s; 0, otherwise },  s ∈ S_ij
SLIDE 60 Binary formulation (B)

y_ij = Σ_{s∈S_ij} s y^s_ij,  (i, j) ∈ A

Σ_{k∈K} d^k x^k_ij = Σ_{s∈S_ij} x^s_ij,  (i, j) ∈ A

(s − 1) u_ij y^s_ij ≤ x^s_ij ≤ s u_ij y^s_ij,  (i, j) ∈ A, s ∈ S_ij

Σ_{s∈S_ij} y^s_ij ≤ 1,  (i, j) ∈ A

y^s_ij ≥ 0,  (i, j) ∈ A, s ∈ S_ij

y^s_ij integer,  (i, j) ∈ A, s ∈ S_ij
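The multiple-choice mapping is easy to sanity-check numerically (toy values ours): encode an integer design y_ij = s as the binary vector y^s_ij and verify the linking constraints (s − 1) u y^s ≤ x^s ≤ s u y^s together with Σ_s y^s ≤ 1:

```python
# Map an integer capacity level to model B's binary variables and check it.

def to_binary(y_int, total_flow, u, T):
    """Returns ({s: y^s}, {s: x^s}) for S_ij = {1, ..., T}."""
    ys = {s: (1 if s == y_int else 0) for s in range(1, T + 1)}
    xs = {s: (total_flow if s == y_int else 0.0) for s in range(1, T + 1)}
    return ys, xs

def check_binary(ys, xs, u):
    """Multiple-choice and linking constraints of model B for one arc."""
    ok = sum(ys.values()) <= 1
    for s in ys:
        ok = ok and (s - 1) * u * ys[s] <= xs[s] <= s * u * ys[s]
    return ok

# Two capacity units (y_ij = 2, u = 10) carrying 17 flow units.
ys, xs = to_binary(2, 17.0, u=10.0, T=3)
```

The selected segment forces 10 < 17 ≤ 20, i.e. the flow pins down which y^s is active; this is exactly what the extended linking inequalities of B+ exploit.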
SLIDE 61 Variable disaggregation and extended formulation (B+)
◮ Extended auxiliary variables

x^{ks}_ij = { x^k_ij, if y_ij = s; 0, otherwise },  s ∈ S_ij

x^k_ij = Σ_{s∈S_ij} x^{ks}_ij,  (i, j) ∈ A, k ∈ K

x^s_ij = Σ_{k∈K} d^k x^{ks}_ij,  (i, j) ∈ A, s ∈ S_ij

◮ Extended linking inequalities

x^{ks}_ij ≤ y^s_ij,  (i, j) ∈ A, k ∈ K, s ∈ S_ij
SLIDE 62 Polyhedral results: notation
◮ Z(M): optimal value for model M
◮ F(M): feasible set for model M
◮ conv(F(M)): convex hull of F(M)
◮ LP(M): LP relaxation of model M
◮ LS(M): Lagrangian subproblem (relaxation of flow conservation constraints)
◮ LD(M): Lagrangian dual for LS(M)
SLIDES 63-66 Polyhedral results
◮ Result 1:
  ◮ I+ = I + residual capacity inequalities
  ◮ F(LP(LS(I+))) = conv(F(LS(I+))) (Magnanti, Mirchandani, Vachani 1993)
  ◮ Z(LP(I+)) = Z(LD(I)): primal interpretation of the Lagrangian dual!
◮ Result 2:
  ◮ Z(LS(I)) = Z(LS(B+)) for the same values of the Lagrange multipliers: apply the reformulation to the Lagrangian subproblem!
  ◮ Z(LD(I)) = Z(LD(B+)): same constraints relaxed in I and B+!
◮ Result 3:
  ◮ F(LP(LS(B+))) = conv(F(LS(B+))) (Croxton, Gendron, Magnanti 2007)
  ◮ Z(LD(B+)) = Z(LP(B+)): primal interpretation of the Lagrangian dual!
◮ Z(LP(I+)) = Z(LD(I)) = Z(LD(B+)) = Z(LP(B+)) (Frangioni, Gendron 2009)
SLIDE 67 Column-and-row generation for LP(B+)
◮ Only a small subset of the x^{ks}_ij and y^s_ij variables is generated
◮ Variables are gradually added to the LP relaxation by pricing them:
  ◮ Solve the restricted LP relaxation
  ◮ Compute the reduced costs of non-generated flow variables ≡ solve the Lagrangian subproblem
  ◮ Add variables with negative reduced costs
◮ The restricted LP relaxation is solved by the cutting-plane method: the constraints x^{ks}_ij ≤ y^s_ij are gradually added
SLIDES 68-69 Column-and-row generation: computational results
◮ Implementation: solve the LP relaxation, then freeze the formulation and run CPLEX heuristics for one hour
◮ Comparison of three model/method combinations:
  ◮ B+: binary model (pseudo-polynomial number of variables and constraints) / column-and-row generation (easy pricing)
  ◮ I+: general integer model (exponential number of constraints) / cutting-plane (easy separation)
  ◮ DW: Dantzig-Wolfe Lagrangian dual (exponential number of variables) / column generation (easy pricing)
◮ B+ is much faster than DW
◮ B+ is generally faster AND more effective (better upper bounds) than I+
◮ As |K| increases, the advantage of B+ over I+ increases
◮ Additional features (subgradient warmstart, stabilization) improve efficiency (time) AND effectiveness (upper bounds)