SLIDE 1

HEURISTIC OPTIMIZATION

Heuristic Algorithms for Multiobjective Combinatorial Optimization

Adapted from a tutorial by Luís Paquete

Introduction

Multiobjective Combinatorial Optimization Problems (MCOPs)

• Many real-life problems are multiobjective:
  • Logistics and transportation
  • Timetabling and scheduling
  • ... and many others

• But most MCOPs are NP-hard and intractable

How to design and analyze SLS algorithms for MCOPs?

SLIDE 2

Train roundtrip through capitals of German federal states

The fastest roundtrip:
• take only ICE trains

The cheapest roundtrip:
• take only local trains

[Figure: cost vs. time plot marking the fastest and the cheapest roundtrips]

SLIDE 3

[Figure: cost vs. time plots marking the fastest and the cheapest roundtrips]

SLIDE 4

[Figure: cost vs. time plots of roundtrip outcomes]

SLIDE 5

Multiobjective Combinatorial Optimization Problem

The set X of feasible solutions is finite and its elements have some combinatorial property (graph, tree, path, partition, etc.). The goal is to

    min_{x ∈ X} f(x) = (f_1(x), ..., f_Q(x))

• The objective function f maps x ∈ X to R^Q

• Optimality depends on the decision maker's preferences (or lack of them).

• Pareto optimality is based on the component-wise order:

    u ≼ v  ⟺  u ≠ v and u_i ≤ v_i, i = 1, ..., Q

• A solution x ∈ X is efficient iff there is no x' ∈ X s.t. f(x') ≼ f(x)
• The efficient set is the set of all efficient solutions
• The nondominated set is the image of the efficient set under f
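The component-wise order and the nondominated set can be sketched directly in code (a minimal illustration; `dominates` and `nondominated` are hypothetical helper names, objective vectors are plain tuples, and minimization is assumed):

```python
def dominates(u, v):
    """u dominates v under the component-wise order: u != v and u_i <= v_i for all i."""
    return u != v and all(ui <= vi for ui, vi in zip(u, v))

def nondominated(points):
    """Keep only the points that no other point dominates (the nondominated set)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); the other three are mutually nondominated.
print(nondominated([(1, 5), (2, 2), (5, 1), (3, 3)]))  # [(1, 5), (2, 2), (5, 1)]
```

Note that `dominates` is a partial order: two points such as (1, 5) and (5, 1) can be mutually nondominated, which is exactly why a single "best" solution need not exist.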

SLIDE 6

[Figure: cost vs. time plot]

MCOPs and Solution Methods

• Most MCOPs are NP-hard

Decision version of MCOP (MCOP-D) [Serafini 1986]: Given z = (z_1, ..., z_Q), does there exist a solution x ∈ X s.t. f(x) ≼ z or f(x) = z?

1. If the single-objective problem is NP-complete, then the corresponding MCOP-D is also NP-complete.
2. If the single-objective problem is solvable in polynomial time, the corresponding MCOP-D may still be NP-complete.

SLIDE 7

Solution Methods for MCOPs

• Enumeration Methods
  • Multiobjective Branch & Bound
  • Multiobjective Dynamic Programming

• Scalarized Methods
  • Solving several related single-objective problems
  • Weighted Sum, ε-constraint, etc.

• SLS Algorithms

Weighted Sum

    min_{x ∈ X} Σ_{i=1}^{Q} λ_i f_i(x)

• λ gives a search direction
• An optimal solution with λ > 0 is efficient.
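Over a small explicit set of outcome vectors, the weighted-sum scalarization reduces to an argmin (a minimal sketch; the candidate vectors and weight choices are illustrative, not from the tutorial):

```python
def weighted_sum_best(outcomes, weights):
    """Pick the outcome vector minimizing sum_i lambda_i * f_i(x)."""
    return min(outcomes, key=lambda f: sum(w * fi for w, fi in zip(weights, f)))

outcomes = [(1, 5), (2, 2), (5, 1)]
print(weighted_sum_best(outcomes, (1.0, 0.0)))  # (1, 5): best on the first objective
print(weighted_sum_best(outcomes, (0.5, 0.5)))  # (2, 2): a compromise direction
```

Different λ > 0 trace different supported efficient outcomes; unsupported efficient outcomes (those not on the convex hull of the nondominated set) are optimal for no weight vector.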

SLIDE 8


SLS Algorithms

SLS algorithm design challenges for MCOPs:

• How to attain more than one solution?
• How to attain high-quality solutions?
• How to evaluate performance?

Rule of thumb:

• Closeness to the nondominated set
• Well-distributed outcomes
• The more, the better

SLIDE 9

Scalarized Acceptance Criterion (SAC) Model

• Weighted Sum

    f_λ(x) = Σ_{i=1}^{Q} λ_i f_i(x)

• Weighted Chebycheff

    f_λ(x) = max_{i=1,...,Q} λ_i | f_i(x) − y_i |

  where y is a reference point.
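The weighted Chebycheff scalarization can be sketched the same way (illustrative data; the reference point y is taken here as the ideal point (0, 0)):

```python
def chebycheff_best(outcomes, weights, ref):
    """Pick the outcome minimizing max_i lambda_i * |f_i(x) - y_i|."""
    return min(outcomes,
               key=lambda f: max(w * abs(fi - yi) for w, fi, yi in zip(weights, f, ref)))

outcomes = [(1, 5), (2, 2), (5, 1)]
print(chebycheff_best(outcomes, (1.0, 1.0), (0, 0)))  # (2, 2)
```

Unlike the weighted sum, the weighted Chebycheff scalarization can also reach unsupported efficient outcomes for suitable weights and reference point.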

SAC Search Model

    input: weight vectors Λ
    for each λ ∈ Λ do
        x is a candidate solution
        x' = SolveSAC(x, λ)
        Add x' to Archive
        Filter Archive
    return Archive

[Figure: cost vs. time plot]
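The SAC search model above can be sketched as a loop over weight vectors with a nondominated filter on the archive (a minimal sketch: `solve_sac` stands in for any single-objective heuristic, and the toy instance below is hypothetical):

```python
def sac_search(solve_sac, weight_vectors):
    """One scalarized subproblem per weight vector; archive keeps nondominated outcomes."""
    def dominates(u, v):
        return u != v and all(ui <= vi for ui, vi in zip(u, v))
    archive = []
    for lam in weight_vectors:
        x = solve_sac(lam)                       # SolveSAC(x, lambda)
        if x not in archive:
            archive.append(x)                    # Add x' to Archive
        archive = [p for p in archive            # Filter Archive
                   if not any(dominates(q, p) for q in archive)]
    return archive

# Toy instance: "solving" a scalarization = scanning a fixed outcome set.
cands = [(1, 5), (2, 2), (5, 1), (4, 4)]
solve = lambda lam: min(cands, key=lambda f: lam[0] * f[0] + lam[1] * f[1])
print(sac_search(solve, [(1, 0), (0.5, 0.5), (0, 1)]))  # [(1, 5), (2, 2), (5, 1)]
```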

SLIDE 10

SAC Search Model – design choices:

• Search Strategy
• Number of Scalarizations
• Intensification Mechanism
• Neighborhood

SAC Search Model – EMO

    input: candidate solution set Xn
    repeat
        Xr = Reproduce/Mutate(Xn)
        R = Rank(Xr, Xn)
        Xs = Select(Xr, Xn, R)
        Xn = Replace(Xs)
    return Xn

[Figure: cost vs. time plot with outcomes labeled by nondomination rank]

SLIDE 11

SAC Search Model – EMO design choices:

• Component-wise order
• Closeness
• Performance indicators
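The Rank step of the EMO model is often realized as iterated nondominated sorting: the nondominated front gets rank 1, is removed, and the process repeats (a sketch of that general idea, not the tutorial's exact ranking; distinct points assumed):

```python
def nondominated_rank(points):
    """Assign rank 1 to the nondominated front, remove it, repeat (NSGA-style)."""
    def dominates(u, v):
        return u != v and all(ui <= vi for ui, vi in zip(u, v))
    rank, remaining, r = {}, list(points), 1
    while remaining:
        front = [p for p in remaining if not any(dominates(q, p) for q in remaining)]
        for p in front:
            rank[p] = r
        remaining = [p for p in remaining if p not in front]
        r += 1
    return rank

ranks = nondominated_rank([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
print(ranks[(2, 2)], ranks[(3, 3)], ranks[(4, 4)])  # 1 2 3
```

These ranks are exactly the numbers the slide's plot attaches to each outcome: lower rank means closer to the nondominated set of the current population.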

SLIDE 12

Multiobjective Local Search

    input: candidate solution x
    while x is not a local optimum do
        choose a neighbor x' of x such that f(x') ≼ f(x)
        x = x'
    return x

• What if f(x') and f(x) are mutually nondominated?
• How to obtain more than a single solution?

CWAC Search Model

    input: candidate solution x
    Add x to Archive
    repeat
        Choose x from Archive
        XN = Neighbors(x)
        Add XN to Archive
        Filter Archive
    until all x in Archive are visited
    return Archive

[Figure: cost vs. time plot]
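A minimal sketch of the CWAC model on a toy problem (solutions are integers here; `neighbors` and `evaluate` are hypothetical problem-specific callbacks, and objective vectors are minimized):

```python
def cwac_search(start, neighbors, evaluate):
    """Expand neighborhoods of unvisited archive members; keep nondominated solutions."""
    def dominates(u, v):
        return u != v and all(ui <= vi for ui, vi in zip(u, v))
    archive = {start: False}                      # solution -> visited?
    while not all(archive.values()):
        x = next(s for s, seen in archive.items() if not seen)
        archive[x] = True                         # Choose x from Archive
        for n in neighbors(x):                    # Add Neighbors(x) to Archive
            archive.setdefault(n, False)
        objs = {s: evaluate(s) for s in archive}  # Filter Archive
        archive = {s: seen for s, seen in archive.items()
                   if not any(dominates(objs[t], objs[s]) for t in archive)}
    return sorted(archive)

# Toy: solutions 0..4 on a line, all mutually nondominated under f(x) = (x, 4 - x),
# so the search enumerates the whole efficient set from any starting solution.
print(cwac_search(2,
                  lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 4],
                  lambda x: (x, 4 - x)))  # [0, 1, 2, 3, 4]
```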

SLIDE 13

CWAC Search Model with archive bounding [Angel et al. 2004]

    input: candidate solution x, ε
    Add x to Archive
    repeat
        Choose x from Archive
        XN = Neighbors(x)
        Add XN to Archive
        Filter Archive according to ε
    until all x in Archive are visited
    return Archive

Hybrid Search Model

    input: weight vectors Λ
    for each λ ∈ Λ do
        x is a candidate solution
        x' = SolveSAC(x, λ)
        X' = CW(x')
        Add X' to Archive
        Filter Archive
    return Archive

[Figure: cost vs. time plot]

SLIDE 14

Performance Assessment

Rules of thumb: an algorithm performs better if

• It is closer to the nondominated set
• It has better-distributed outcomes
• It has more solutions

Indicators of Performance

• Measure some property of the outcomes
• Most of the indicators have limitations [Knowles & Corne 2002, Zitzler et al. 2003]

[Figure: cost vs. time plot of many runs of Algorithms Blue and Red]

SLIDE 15

Another example

• Better relations [Hansen & Jaszkiewicz 1998, Zitzler et al. 2003]

[Figure: cost vs. time plot — Blue and Red are incomparable]

[Figure: cost vs. time plot — Blue is better than Red]

SLIDE 16

• Unary Indicator: Hypervolume [Zitzler and Thiele, 1998]

  H(B) = 45, H(W) = 25:  B is better than W ⟹ H(B) > H(W)
  H(B) = 37, H(W) = 36:  H(B) > H(W) ⟹ B is not worse than W
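In two objectives the hypervolume of an outcome set (under minimization, bounded by a reference point) is a sum of rectangle areas once the points are sorted by the first objective. A minimal sketch (the 45/25 and 37/36 values above come from the slide's own figure; the numbers below are illustrative):

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded by `ref` (minimization, 2 objectives)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points)):     # ascending in the first objective
        if y < prev_y:                   # dominated points add no area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))          # 6.0
print(hypervolume_2d([(1, 3), (2, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0: (2, 3) is dominated
```

The sketch assumes every point lies between the origin and `ref`; in higher dimensions exact hypervolume computation is substantially more involved.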

SLIDE 17

• Attainment Functions [V.G. da Fonseca et al. 2001]

  AF: the probability that an outcome set is better than or equal to z.
  EAF: in how many runs is an outcome set better than or equal to z?
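The empirical attainment function can be sketched as a per-point count over runs (a minimal illustration; `runs` is hypothetical data, each run being one outcome set, and "attains z" means some outcome is ≤ z component-wise):

```python
def eaf(runs, z):
    """Fraction of runs whose outcome set attains z (contains a point <= z component-wise)."""
    attains = lambda p: all(pi <= zi for pi, zi in zip(p, z))
    return sum(any(attains(p) for p in run) for run in runs) / len(runs)

runs = [[(1, 3), (3, 1)], [(2, 2)]]
print(eaf(runs, (2, 2)))  # 0.5: only the second run attains (2, 2)
print(eaf(runs, (3, 3)))  # 1.0: every run attains (3, 3)
```

Evaluating `eaf` over a grid of points z for two algorithms yields exactly the EAF surfaces whose differences the next slides visualize and test.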

• Attainment functions – visualization of differences

[Figure: side-by-side plots of EAF_Blue and EAF_Red with positive and negative differences]

SLIDE 18

• Attainment functions – statistical testing

  K–S test statistic:  max | EAF_Blue − EAF_Red |

[Figure: plots of positive and negative EAF differences]

References

• Textbooks: R.E. Steuer 1986, K. Miettinen 1999, M. Ehrgott 2005, V. T'kindt et al. 2002, K. Deb 2002.

• Reviews: M. Ehrgott and X. Gandibleux 2000, 2002, 2004, 2009, C.C. Coello 2000, D. Jones et al. 2002, J. Knowles and D. Corne 2004, L. Paquete and T. Stützle 2007.

• Complexity and Approximation: P. Hansen 1979, P. Serafini 1986, M. Ehrgott 2000, C.H. Papadimitriou and M. Yannakakis 2000, E. Angel et al. 2007.

• Performance Assessment: E. Zitzler et al. 2003, 2008, V.G. da Fonseca et al. 2001, 2010, M. López-Ibáñez et al. 2010.

• Web material: PISA (http://www.tik.ethz.ch/~sop/pisa), MOMH (http://home.gna.org/momh), ParadisEO (http://paradiseo.gforge.inria.fr)