SLIDE 1

Robust combinatorial optimization with variable uncertainty

Michael Poss

Heudiasyc UMR CNRS 7253, Université de Technologie de Compiègne

17th Aussois Combinatorial Optimization Workshop

  • M. Poss (Heudiasyc)

Variable uncertainty Aussois 1 / 31

slide-2
SLIDE 2

Outline

1. Robust optimization
2. Variable budgeted uncertainty
3. Cost uncertainty

SLIDE 4

Combinatorial optimization under uncertainty

min  Σ_{i=1}^n c_i x_i
s.t. Σ_{j=1}^n a_{ij} x_j ≤ b_i,   i = 1, …, m
     x ∈ {0,1}^n

Suppose that the parameters (a, b, c) are uncertain:
  • They vary over time
  • They must be predicted from historical data
  • They cannot be measured with enough accuracy
  • ...
Let's do something clever (and useful)!

SLIDE 5

How much do we know?

Stochastic programming: a lot ⇔ Robust programming: a little

Robust pr.: Uncertain parameters are merely assumed to belong to an uncertainty set U ⇒ one wishes to optimize some worst-case objective over the uncertainty set.

Stochastic pr.: Uncertain parameters are precisely described by probability distributions ⇒ one wishes to optimize some expectation, variance, Value-at-Risk, ...

Intermediary models exist: distributionally robust optimization, ambiguous chance constraints.
SLIDE 7

When do we take decisions?

Now: All decisions must be taken before the uncertainty is known with precision ⇒ probability constraints, (static) robust optimization.

Delayed: Some decisions may be delayed until the uncertainty is revealed ⇒ multi-stage stochastic programming, adjustable robust optimization.

SLIDE 8

Robust combinatorial optimization

min  Σ_{i=1}^n c_i x_i
s.t. Σ_{j=1}^n a_{ij} x_j ≤ b_i,   i = 1, …, m,  ∀a_i ∈ U_i
     x ∈ {0,1}^n

The linear relaxation of this problem is tractable if U_i is defined by conic constraints:

U_i = { a_i ∈ R^n : u a_i − v ∈ K }.

In particular, polyhedrons and polytopes are nice (K = R^n_+).

SLIDE 9

Feasibility set

Σ_i a_i x_i ≤ b,  ∀a ∈ U
⇔
Σ_j c_j α_j ≤ b
Σ_j u_{ji} α_j ≥ x_i,  i = 1, …, n
α_j ≥ 0

The feasibility set of the constraint is a polyhedron (thus, convex)!

SLIDE 10

Feasibility set

A (very) popular polyhedral uncertainty set is (Bertsimas and Sim, 2004):

U^Γ := { a ∈ R^n : a_i = ā_i + δ_i â_i,  −1 ≤ δ_i ≤ 1,  Σ_i |δ_i| ≤ Γ }.

Main reasons for popularity:
  • Nice computational properties for MIP and combinatorial problems.
  • Intuitive interpretation.
  • Probabilistic interpretation.
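For binary x, the inner maximization over U^Γ can be evaluated greedily: put the Γ largest deviations â_i x_i at their upper bound, plus a fractional part when Γ is not integer. A minimal sketch in Python (function and variable names are mine, for illustration):

```python
def worst_case(a_bar, a_hat, x, Gamma):
    """Worst-case value of a^T x over the budgeted set U^Gamma,
    for binary x: nominal value plus the Gamma largest deviations."""
    nominal = sum(ab * xi for ab, xi in zip(a_bar, x))
    devs = sorted((ah * xi for ah, xi in zip(a_hat, x)), reverse=True)
    k = int(Gamma)
    extra = sum(devs[:k])
    if k < len(devs):
        extra += (Gamma - k) * devs[k]  # fractional part of the budget
    return nominal + extra
```

For instance, with ā = (10, 10, 10), â = (5, 4, 3), x = (1, 1, 1) and Γ = 2, the worst case is 30 + 5 + 4 = 39.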

SLIDE 11

Consider a vehicle routing problem with uncertain travel times t and time limit T. For simplicity, we suppose t ∈ U^Γ for Γ = 5. Consider two robust-feasible routes x¹ and x², with ‖x¹‖₁ = 3 and ‖x²‖₁ = 10.

Because x¹ and x² are robust-feasible:

Σ_{i : x¹_i = 1} t_i ≤ T,  ∀t ∈ U^Γ,   and   Σ_{i : x²_i = 1} t_i ≤ T,  ∀t ∈ U^Γ.

Since ‖x¹‖₁ = 3 < Γ = 5, all deviations along route x¹ can reach their extreme values simultaneously, so the first constraint becomes deterministic:

Σ_{i : x¹_i = 1} (t̄_i + t̂_i) ≤ T,   and   Σ_{i : x²_i = 1} t_i ≤ T,  ∀t ∈ U^Γ.

Hence, for any probability distribution for t:

P( Σ_{i : x¹_i = 1} t̃_i > T ) = 0.

If x² is not robust-feasible for Γ = 10, there exist probability distributions with:

P( Σ_{i : x²_i = 1} t̃_i > T ) > 0.
SLIDE 17

Outline

1. Robust optimization
2. Variable budgeted uncertainty
3. Cost uncertainty

SLIDE 18

Robust optimization and probabilistic constraint

Let ã_i be random variables and ε > 0. The chance constraint

P( Σ_i ã_i x_i > b ) ≤ ε        (1)

leads to very difficult optimization problems in general. In some situations, we know that (1) can be approximated by

Σ_i a_i x_i ≤ b,  ∀a ∈ U        (2)

for a properly chosen U. These approximations are conservative: any x feasible for (2) is feasible for (1). We must balance conservatism and protection cost ⇒ devise good protection sets U.
SLIDE 20

Robust optimization and probabilistic constraint

What about U^Γ? Let ã_i be random variables, independently and symmetrically distributed in [ā_i − â_i, ā_i + â_i]. Bertsimas and Sim (2004) prove that if a vector x satisfies the robust constraint

Σ_i a_i x_i ≤ b,  ∀a ∈ U^Γ,

then it also satisfies the probabilistic constraint

P( Σ_i ã_i x_i > b ) ≤ exp(−Γ²/(2n)).
SLIDE 22

Something is wrong ...

From

P( Σ_i ã_i x_i > b ) ≤ exp(−Γ²/(2n)),

we see that choosing Γ = (−2 ln(ε))^{1/2} n^{1/2} yields P( Σ_i ã_i x_i > b ) ≤ ε.

For many problems, ‖x‖₁ < n^{1/2} for optimal (or feasible) vectors x (network design, assignment, ...) ⇒ Γ > n^{1/2} already for ε = 0.5 ⇒ for these problems, asking for protection at level ε = 0.5 already forces a violation probability of 0 ⇒ overprotection!
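A quick numerical illustration of this overprotection (a sketch, not from the slides): the budget the bound requires already exceeds √n at ε = 0.5, so any solution with ‖x‖₁ ≤ √n is protected deterministically.

```python
import math

def gamma_for(eps, n):
    """Budget guaranteeing violation probability <= eps
    via the bound exp(-Gamma^2 / (2n))."""
    return math.sqrt(-2.0 * math.log(eps) * n)

# For n = 100 and eps = 0.5, the required budget is about 11.8,
# already larger than sqrt(n) = 10.
```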

SLIDE 24

Multifunctions

It is easy to see that the bound from Bertsimas and Sim can be adapted to

P( Σ_i ã_i x_i > b ) ≤ exp(−Γ²/(2‖x‖₁)).

⇒ Γ can be reduced when ‖x‖₁ is small. Let's use multifunctions!

Define α_ε(x) = (−2 ln(ε) ‖x‖₁)^{1/2}, and consider x* given. If

Σ_i a_i x*_i ≤ b,  ∀a ∈ U^{α_ε(x*)},

then

P( Σ_i ã_i x*_i > b ) ≤ exp(−α_ε(x*)²/(2‖x*‖₁)) = ε.
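The variable budget α_ε(x) is a one-liner; plugging it back into the adapted bound gives exactly ε. A minimal sketch (names are mine):

```python
import math

def alpha(eps, x):
    """Variable budget alpha_eps(x) = sqrt(-2 ln(eps) * ||x||_1), binary x."""
    card = sum(x)
    return math.sqrt(-2.0 * math.log(eps) * card)
```

Sanity check: exp(−α_ε(x)² / (2‖x‖₁)) = exp(ln ε) = ε by construction.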

SLIDE 27

New robust model

Let γ : {0,1}^n → R_+ be a non-negative function, and define

U^{γ(x)} := { a ∈ R^n : a_i = ā_i + δ_i â_i,  −1 ≤ δ_i ≤ 1,  Σ_i |δ_i| ≤ γ(x) }.

We have shown that the new model

Σ_i a_i x_i ≤ b,  ∀a ∈ U^{α_ε(x)},

should be considered instead of the classical model

Σ_i a_i x_i ≤ b,  ∀a ∈ U^Γ.

SLIDE 29

Better bound

The previous bound is bad. Bertsimas and Sim propose a better bound:

P( Σ_i ã_i x*_i > b ) ≤ B(n, Γ) = (1/2^n) [ (1 − μ) C(n, ⌊ν⌋) + Σ_{l=⌊ν⌋+1}^n C(n, l) ],

where ν = (Γ + n)/2 and μ = ν − ⌊ν⌋. We can make this bound dependent on x by considering B(‖x‖₁, Γ): β_ε(x) is the solution of the equation

B(‖x*‖₁, Γ) − ε = 0        (3)

in the variable Γ. We solve (3) numerically.

SLIDE 30
Tractability. Example: Knapsack problem

max  Σ_{i=1}^n c_i x_i
s.t. Σ_{i=1}^n a_i x_i ≤ b,  ∀a ∈ U^{γ(x)}
     x ∈ {0,1}^n,

which can be rewritten as

max  Σ_{i=1}^n c_i x_i
s.t. Σ_{i=1}^n ā_i x_i + max { Σ_{i=1}^n δ_i â_i x_i : 0 ≤ δ_i ≤ 1, Σ_i δ_i ≤ γ(x) } ≤ b
     x ∈ {0,1}^n.

SLIDE 31

Knapsack problem

Using the dualization approach:

max  Σ_{i=1}^n c_i x_i
s.t. Σ_{i=1}^n ā_i x_i + γ(x) z + Σ_{i=1}^n p_i ≤ b
     z + p_i ≥ â_i x_i,  i = 1, …, n
     z, p ≥ 0,  x ∈ {0,1}^n.

Non-convex reformulation (because of the product γ(x) z); x binary may help.
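As a sanity check on the dualization (an illustrative sketch, not the authors' code): for a fixed selection, the inner maximization and its LP dual coincide, and the dual objective is piecewise linear in z, so its minimum is attained at a breakpoint z ∈ {0} ∪ {â_i x_i}.

```python
def inner_max(devs, budget):
    """Primal: max sum(delta_i * devs[i]) s.t. 0 <= delta_i <= 1, sum(delta) <= budget."""
    d = sorted(devs, reverse=True)
    k = int(budget)
    val = sum(d[:k])
    if k < len(d):
        val += (budget - k) * d[k]
    return val

def dual_min(devs, budget):
    """Dual: min budget*z + sum(p_i) with p_i >= devs[i] - z, p, z >= 0.
    The optimum is at a breakpoint z in {0} union devs."""
    candidates = [0.0] + list(devs)
    return min(budget * z + sum(max(0.0, d - z) for d in devs)
               for z in candidates)
```

For instance, with deviations (5, 4, 3, 1) and budget 2, both sides give 9.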

SLIDE 32

Dualization

Theorem

Consider the robust constraint

a^T x ≤ b,  ∀a ∈ U^{γ(x)},  x ∈ {0,1}^n,        (4)

and suppose that γ(x) = γ_0 + Σ_i γ_i x_i is an affine function of x, non-negative for x ∈ {0,1}^n. Then, (4) is equivalent to

Σ_{i=1}^n ā_i x_i + γ_0 z + Σ_{i=1}^n γ_i w_i + Σ_{i=1}^n p_i ≤ b
z + p_i ≥ â_i x_i,  i = 1, …, n
w_i − z ≥ −max_j(â_j)(1 − x_i),  i = 1, …, n
p, w, z ≥ 0,  x ∈ {0,1}^n.

SLIDE 33

Non-affine functions γ

[Figure: β_{0.01} compared with the non-affine over-approximations γ¹ (left) and min(γ¹, γ²) (right), for n from 100 to 1000.]

SLIDE 34

Computational results

Objective

1. Is there a benefit in using U^β instead of U^Γ?
2. Computational "complexity" of solving the robust counterparts.

Models. We compare the following at ε = 0.01:

U^Γ: the classical robust model with budget uncertainty.
U^{γ1}: our new model with variable budget uncertainty; γ¹ over-approximates β.
U^{γ1γ2}: our new model with variable budget uncertainty; min(γ¹, γ²) over-approximates β.

SLIDE 36

The price of robustness at ε = 0.01

Instances from Bertsimas and Sim (2004)

[Figure: deterministic cost increase in % versus number of items n (100 to 1000), for U^{γ1} and U^{γ1γ2}.]

SLIDE 37

Computational complexity

model                      U^{γ1}   U^{γ1γ2}   U^{γ1γ2γ3}
time model / time U^Γ       1.7      3.4        6.1
gap model / gap U^Γ         0.87     0.98       1.1

Fixing M to max_j(â_j) affects the LP relaxation. If M = 1000, gap U^{γ1}/gap U^Γ → 3.9!

SLIDE 38

Outline

1. Robust optimization
2. Variable budgeted uncertainty
3. Cost uncertainty

SLIDE 39

Cost uncertainty

Suppose that only the cost coefficients are uncertain:

min  max_{c ∈ U}  Σ_{i=1}^n c_i x_i
s.t. Σ_{j=1}^n a_{ij} x_j ≤ b_i,  i = 1, …, m
     x ∈ {0,1}^n,

which can be rewritten as

CO^Γ ≡ min_{x ∈ X} max_{c ∈ U^Γ} c^T x.

The previous probabilistic approximation leads to a relation between CO^Γ and

min_{x ∈ X} VaR_ε(c^T x).

SLIDE 40

Value-at-risk

Definition: VaR_ε(c^T x) = inf{ t : P(c^T x ≤ t) ≥ 1 − ε }.

We see easily that CO^Γ provides an upper bound on the optimization of VaR. The upper bound is very bad for small-cardinality vectors. Model CO^γ overcomes this flaw:

CO^γ ≡ min_{x ∈ X} max_{c ∈ U^{γ(x)}} c^T x.
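On sampled costs, the definition above reads: the smallest t such that at least a (1 − ε) fraction of the samples lie at or below t. A small illustrative helper (not from the slides):

```python
import math

def var_eps(samples, eps):
    """Empirical VaR_eps: smallest t with P(cost <= t) >= 1 - eps."""
    s = sorted(samples)
    k = math.ceil((1 - eps) * len(s))  # samples required at or below t
    return s[k - 1]
```

For instance, for costs 1, …, 100 and ε = 0.05 this returns 95.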

SLIDE 43

Shortest path problem

[Figure: cost reduction in % versus value of ε (0.01 to 0.1), for shortest path instances NE1, AL1, MN1, IA1.]

SLIDE 44

Complexity of the resulting problems

Theorem

When γ is an affine function, CO^γ can be solved by solving n + 1 problems CO and taking the cheapest optimal solution.

Theorem

When γ is a non-decreasing function of ‖x‖₁, CO^γ can be solved by solving n cardinality-constrained problems CO^Γ and taking the cheapest optimal solution.

SLIDE 46

Dynamic Programming

We use the notation Γ′ = min(n, max_{k=0,…,n} γ(k)).

Theorem

Consider a combinatorial optimization problem that can be solved in O(τ) by dynamic programming. If γ(k) ∈ Z for each k = 0, …, n, then CO^γ can be solved in O(nΓ′τ). Otherwise, CO^γ can be solved in O(n²Γ′τ).

Theorem

Consider a combinatorial optimization problem that can be solved in O(τ) by dynamic programming. Then, CO^Γ can be solved in O(Γτ). If Γ ∼ n^{1/2}, we get O(n^{1/2}τ), improving over the O(nτ) from Bertsimas and Sim.

SLIDE 48

Concluding remarks

  • We introduce a new class of uncertainty models.
  • They correct the flaw of the Bertsimas and Sim model.
  • The tractability of the new model is often comparable (or equal) to that of the traditional model.
  • Remark: the model can be extended to non-combinatorial problems, but tractability becomes an issue.

SLIDE 49

Thanks for your attention.
