SLIDE 1

Implementation exercises for the course Heuristic Optimization

Dr. Manuel López-Ibáñez
manuel.lopez-ibanez@ulb.ac.be

IRIDIA, CoDE, ULB

March 6, 2013

SLIDE 2

Implementation Exercise 1

The Traveling Salesman Problem with Time Windows

Exercise 1.1: Iterative improvement algorithms for the TSPTW
Exercise 1.2: Variable Neighborhood Descent for the TSPTW

SLIDE 3

The Traveling Salesman Problem with Time Windows

TSP + each “node” may only be visited within a time window.

Given an undirected complete graph G = (N, E), where:
  • N = {0, 1, . . . , n} are the customer nodes, 0 is the depot
  • a travel time c(eij) is given for every edge eij ∈ E
  • a time window [si, li] is given for each i ∈ N

if node i is visited sooner than si ⇒ wait
if node i is visited later than li ⇒ constraint violation

SLIDE 4

The Traveling Salesman Problem with Time Windows

Find a tour π = (π0 = 0, π1, . . . , πn, πn+1 = 0), where (π1, . . . , πn) is a permutation of the nodes in N \ {0} and πk is the customer at the k-th position of the tour, such that:

minimize:    f(π) = Σ_{k=0}^{n} c(eπk,πk+1)          (travel time)

subject to:  Ω(π) = Σ_{k=0}^{n+1} ω(πk) = 0          (constraint violations)

where:

ω(πk) = 1 if Aπk > lπk, 0 otherwise

Aπk+1 = max(Aπk, sπk) + c(eπk,πk+1)                  (arrival time)
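The definitions above translate directly into an evaluation routine. A minimal sketch in Python (for illustration only — the course's solution checker, not this sketch, is the reference implementation, and all names here are made up): `c` is a symmetric travel-time matrix and `windows[i] = (si, li)`.

```python
def evaluate(tour, c, windows):
    """Evaluate tour = (0, pi_1, ..., pi_n, 0): return (f, Omega)."""
    f = 0.0        # total travel time
    omega = 0      # number of constraint violations
    arrival = 0.0  # arrival time A at the current node (depot at time 0)
    for k in range(len(tour) - 1):
        i, j = tour[k], tour[k + 1]
        s_i, l_i = windows[i]
        if arrival > l_i:
            omega += 1                         # window at i already closed
        f += c[i][j]
        arrival = max(arrival, s_i) + c[i][j]  # wait until s_i if early
    if arrival > windows[tour[-1]][1]:         # window at the final depot visit
        omega += 1
    return f, omega
```

Waiting is implicit in the `max(arrival, s_i)` term, exactly as in the arrival-time recurrence.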

SLIDES 5-7

The Traveling Salesman Problem with Time Windows

[Figure: an example instance with seven nodes, time windows [0, 100], [40, 70], [70, 75], [30, 35], [40, 60], [30, 45], [30, 47], and edge travel times; the arrival times along the example tour are A = 30, 36, 47, 56, 70, 78, 85]

Travel time: f = 5 + 6 + 11 + 9 + 5 + 8 + 7 = 51
Constraint violations: Ω = 1

SLIDES 8-10

Exercise 1.1: Iterative Improvement for the TSPTW

Implement 6 iterative improvement algorithms for the TSPTW.

Pivoting rule:
  1. first-improvement
  2. best-improvement

Neighborhood:
  1. Transpose
  2. Insert
  3. Exchange

Initial solution:
  1. Random permutation (uninformed random picking)

2 pivoting rules × 3 neighborhoods × 1 initialization method = 6 combinations

SLIDE 11

Exercise 1.1: Iterative Improvement for the TSPTW

Implement 6 iterative improvement algorithms for the TSPTW.
Don’t implement 6 programs! Reuse code and use command-line parameters:

tsptw-ii --first --transpose --init-random
tsptw-ii --best --exchange --init-random
...
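One way to expose all variants through a single binary, as suggested above — sketched here with Python's argparse for illustration; the course asks for C, C++ or Java, where getopt or a hand-rolled parser plays the same role, and the flag names simply mirror the example command lines.

```python
import argparse

parser = argparse.ArgumentParser(prog="tsptw-ii")
pivot = parser.add_mutually_exclusive_group(required=True)
pivot.add_argument("--first", action="store_true", help="first-improvement")
pivot.add_argument("--best", action="store_true", help="best-improvement")
nbh = parser.add_mutually_exclusive_group(required=True)
nbh.add_argument("--transpose", action="store_true")
nbh.add_argument("--insert", action="store_true")
nbh.add_argument("--exchange", action="store_true")
parser.add_argument("--init-random", action="store_true")

# e.g. the first command line above:
args = parser.parse_args(["--first", "--transpose", "--init-random"])
```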

SLIDES 12-15

Exercise 1.1: Iterative Improvement for the TSPTW

Iterative Improvement

π := GenerateInitialSolution()
while π is not a local optimum do
    choose a neighbour π′ ∈ N(π) such that F(π′) < F(π)
    π := π′

Which neighbour to choose? The pivoting rule:

Best improvement: choose the best among all neighbours of π
  ✔ Better quality
  ✘ Requires evaluating all neighbours in each step

First improvement: evaluate neighbours in a fixed order and choose the first improving neighbour
  ✔ More efficient
  ✘ Order of evaluation may impact quality / performance
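The two pivoting rules can be sketched over an abstract neighborhood (Python for illustration; `neighbors` and `cost` are assumed helpers, not course code). Both return None when the solution is a local optimum, matching the loop condition above.

```python
def first_improvement(sol, neighbors, cost):
    """Scan neighbors in a fixed order; take the first improving one."""
    for cand in neighbors(sol):
        if cost(cand) < cost(sol):
            return cand
    return None  # local optimum

def best_improvement(sol, neighbors, cost):
    """Evaluate all neighbors; take the best, if it improves on sol."""
    best = min(neighbors(sol), key=cost, default=None)
    if best is not None and cost(best) < cost(sol):
        return best
    return None  # local optimum
```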

SLIDE 16

Exercise 1.1: Iterative Improvement for the TSPTW

Which neighborhood N(π)?
  • Transpose
  • Insertion
  • Exchange

SLIDE 17

Exercise 1.1: Iterative Improvement for the TSPTW

[Figure: example moves in the three neighbourhoods:
  transpose:  A B C D E F → A C B D E F
  exchange:   A B C D E F → A E C D B F
  insert:     A B C D E F → A C D B E F]

SLIDE 18

Exercise 1.1: Iterative Improvement for the TSPTW

[Figure: transpose neighbourhood: A B C D E F → A C B D E F]

π′ = transp(π, k) = Transpose πk and πk+1

Fast (delta) evaluation of travel time:

∆c = c(eπk−1,πk+1) + c(eπk+1,πk) + c(eπk,πk+2) − c(eπk−1,πk) − c(eπk,πk+1) − c(eπk+1,πk+2)
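The delta formula, as code (a Python sketch; for an undirected instance the two middle terms cancel, but the formula is kept term-for-term). Positions are 0-based, the tour includes the depot at both ends, and names are illustrative.

```python
def delta_transpose(tour, c, k):
    """Change in travel time if tour[k] and tour[k+1] are transposed."""
    a, b, d, e = tour[k - 1], tour[k], tour[k + 1], tour[k + 2]
    new_edges = c[a][d] + c[d][b] + c[b][e]
    old_edges = c[a][b] + c[b][d] + c[d][e]
    return new_edges - old_edges
```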

But what about the time-windows?

SLIDE 19

Exercise 1.1: Iterative Improvement for the TSPTW


π′ = transp(π, k) = Transpose πk and πk+1

Check the windows affected by the transpose move: lk, lk+1, lk+2.
Update waiting times for positions k, k + 1, k + 2, . . . , n.

Possible speed-up:

for i = k + 3 to n + 1 do
    if we had to wait at node i before and after the transpose move then
        nothing else changes; break the loop
    else
        update waiting times and constraint violations at node i
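The update scheme might look like the following sketch (Python, illustrative names). As its early-exit test it compares arrival times directly, which subsumes the "had to wait before and after" condition: if the arrival time at some position beyond k + 2 is unchanged, nothing downstream changes either.

```python
def recompute_arrivals(tour, c, windows, old_arrival, k):
    """Arrival times after transposing positions k and k+1 (tour is the
    new, already-swapped tour; old_arrival belongs to the old tour)."""
    arrival = list(old_arrival)        # positions < k are unaffected
    t = old_arrival[k - 1]
    for i in range(k, len(tour)):
        prev, cur = tour[i - 1], tour[i]
        t = max(t, windows[prev][0]) + c[prev][cur]
        if i >= k + 3 and t == old_arrival[i]:
            break                      # nothing else changes downstream
        arrival[i] = t
    return arrival
```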

SLIDE 20

Exercise 1.1: Iterative Improvement for the TSPTW

[Figure: insert neighbourhood: A B C D E F → A C D B E F]

π′ = insert(π, k, j) = Insert πk at position j

It can be implemented as a sequence of transpose moves. Example:

A B C D    π
B A C D    π′ = transp(π, 1)
B C A D    π′′ = transp(π′, 2)
B C D A    π′′′ = transp(π′′, 3) = insert(π, 1, 4)

π′ = insert(π, k, k − 1) = transp(π, k − 1)

Avoid evaluating the same neighbor two times!

(You may need to remember and jump back to a previous position)
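The insert move as a sequence of adjacent transposes, as in the example above (a Python sketch with 0-based positions; names are illustrative):

```python
def insert(tour, k, j):
    """Move the element at position k to position j via adjacent transposes."""
    t = list(tour)
    step = 1 if j > k else -1
    for i in range(k, j, step):
        t[i], t[i + step] = t[i + step], t[i]  # one transpose move
    return t
```

Each intermediate list is exactly one transpose away from the previous one, so a delta evaluation can be reused at every step.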

SLIDE 21

Exercise 1.1: Iterative Improvement for the TSPTW

[Figure: exchange neighbourhood: A B C D E F → A E C D B F]

π′ = exchange(π, k, j) = exchange nodes at πk and πj (k < j)

Do not worry about fast evaluation of travel time:
there is no trivial representation as transpose moves.

∀i < k: nothing changes
∀i ≥ k: everything changes

So order is important! Do not evaluate the same thing twice:
  • Transpose: n − 1 neighbors
  • Insertion: (n − 1)² neighbors
  • Exchange: n · (n − 1)/2 neighbors

SLIDES 22-23

Exercise 1.1: Iterative Improvement for the TSPTW

How to compare solutions during the search?
  1. Minimize the number of constraint violations (Ω)
  2. If the number of constraint violations is equal, compare the tour travel time (f)

How to compare solutions during analysis?
Assign a penalty value to constraint violations: f′(π) = f(π) + 10^4 · Ω(π)
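The two comparison schemes can be sketched as follows (Python for illustration; a solution is represented just by its pair (f, Ω), and all names are made up):

```python
def better(a, b):
    """Search-time comparison: fewer violations first, then lower travel time."""
    f_a, omega_a = a
    f_b, omega_b = b
    return (omega_a, f_a) < (omega_b, f_b)

def penalised(f, omega):
    """Analysis-time scalar cost f'(pi) = f(pi) + 10^4 * Omega(pi)."""
    return f + 1e4 * omega
```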

SLIDE 24

Exercise 1.1: Iterative Improvement for the TSPTW

Instances

10 TSPTW instances available from: http://iridia.ulb.ac.be/~stuetzle/Teaching/HO/

Solution checker

Given an instance and a solution, it reports the solution cost (travel time), constraint violations, etc.
The source code of the solution checker shows how to read an instance and how to correctly evaluate a solution.
Check your solutions to be sure you are evaluating them correctly!

SLIDES 25-26

Exercise 1.1: Iterative Improvement for the TSPTW

Experiments

Apply each algorithm k 100 times with different random seeds (seed r) to each instance i and compute:
  1. Number of constraint violations Ω_kri
  2. Penalised relative percentage deviation:
     pRPD_kri = 100 · (f′_kri − best_i) / best_i = 100 · ((f_kri + 10^4 · Ω_kri) − best_i) / best_i
  3. Computation time t_kri (CPU time, not wall-clock time)

Report for each algorithm k on each instance i:
  • Percentage of runs with Ω > 0 (% infeasible)
  • Mean penalised RPD (pRPD)
  • Mean computation time (T_cpu)
  • Boxplots of pRPD and T_cpu
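The penalised relative percentage deviation for one run, as a small sketch (Python; `best` is the best-known value for the instance, and the names are illustrative):

```python
def prpd(f, omega, best):
    """100 * (f' - best) / best with f' = f + 10^4 * Omega."""
    f_pen = f + 1e4 * omega
    return 100.0 * (f_pen - best) / best
```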

SLIDE 27

Exercise 1.1: Iterative Improvement for the TSPTW

              II-first-transp           II-best-transp       . . .
Instance    %infeas   pRPD   T_cpu    %infeas   pRPD   T_cpu
n80w20.1        2     30.01    0.0              0.10    2.7
n80w20.2        3     40.01    1.0       1     10.15    7.9
. . .

[Figure: boxplots of pRPD for ii-best-tr, ii-first-tr and ii-best-ex on instance n20w60.1; and the same for T_cpu]

SLIDES 28-33

Exercise 1.1: Iterative Improvement for the TSPTW

Is there a statistically significant difference between the solution quality generated by the different algorithms?

Statistical test: the Wilcoxon signed-rank test.

Background: statistical hypothesis tests (1)
  • Statistical hypothesis tests are used to assess the validity of statements about properties of, or relations between, sets of statistical data.
  • The statement to be tested (or its negation) is called the null hypothesis (H0) of the test.
  • Example: for the Wilcoxon signed-rank test, the null hypothesis is that “the median of the differences is zero”.
  • The significance level (α) is the maximum allowable probability of incorrectly rejecting the null hypothesis; a typical value is α = 0.05.

SLIDES 34-36

Exercise 1.1: Iterative Improvement for the TSPTW

Is there a statistically significant difference between the solution quality generated by the different algorithms?

Background: statistical hypothesis tests (2)
  • Applying a test to a given data set yields a p-value: the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis holds.
  • The null hypothesis is rejected iff this p-value is smaller than the previously chosen significance level.
  • Most common statistical hypothesis tests are already implemented in statistical software such as the R environment (http://www.r-project.org/).

SLIDES 37-41

Exercise 1.1: Iterative Improvement for the TSPTW

Is there a statistically significant difference between the solution quality generated by the different algorithms?

Example in R:

best.known <- read.table("best.txt")
a.f <- read.table("ii-best-ex-rand.dat")$V1
a.RPD <- 100 * (a.f - best.known) / best.known
b.f <- read.table("ii-best-ins-rand.dat")$V1
b.RPD <- 100 * (b.f - best.known) / best.known
wilcox.test(a.RPD, b.RPD, paired=T)$p.value
[1] 0.0019212

Compare best vs. first-improvement for each neighborhood, and exchange vs. insertion neighborhood for each pivoting rule.

SLIDE 42

Exercise 1.1: Iterative Improvement for the TSPTW

BONUS points: Implement an insertion heuristic for initialization.
Replace random initialization with an insertion heuristic.

✔ Think about what heuristics may generate good solutions
  (tip: visit soon windows that close earlier, or windows that open soon, or the nearest feasible neighbor, or . . . )
✔ Try to generate feasible solutions from the start
✔ Consider whether it helps to include some random decisions

Report the same as for random initialization.
Compare the insertion heuristic versus the random initialization.
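As one concrete reading of the tips above: a greedy construction that always visits next the unvisited customer whose window closes earliest, breaking ties by travel time. This is only a sketch of one possible heuristic, not the expected solution, and all names are illustrative.

```python
def earliest_deadline_tour(n, c, windows):
    """Construct a tour (0, ..., 0) over customers 1..n greedily."""
    unvisited = set(range(1, n + 1))
    tour, cur = [0], 0
    while unvisited:
        # next customer: earliest-closing window, then nearest
        nxt = min(unvisited, key=lambda j: (windows[j][1], c[cur][j]))
        tour.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    tour.append(0)
    return tour
```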

SLIDE 43

Exercise 1.2: VND algorithms for the TSPTW

Implement 4 VND algorithms for the TSPTW.

Type:
  1. standard VND
  2. piped VND

Pivoting rule: first-improvement

Neighborhood order:
  1. transpose → exchange → insert
  2. transpose → insert → exchange

Initial solution: Random permutation

SLIDES 44-45

Exercise 1.2: VND algorithms for the TSPTW

Variable Neighbourhood Descent (VND)

k neighborhoods N1, . . . , Nk

π := GenerateInitialSolution()
i := 1
repeat
    choose the first improving neighbor π′ ∈ Ni(π)
    if ∄π′ then
        i := i + 1
    else
        π := π′
        i := 1
until i > k

Piped VND

Simply chain different II algorithms one after the other:
use the final solution of one II as the initial solution of the next.
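The standard VND pseudocode, sketched in Python (illustrative; `first_improving(sol, nbh)` is an assumed helper returning an improving neighbor in neighborhood `nbh`, or None at a local optimum):

```python
def vnd(initial, neighborhoods, first_improving):
    sol, i = initial, 0
    while i < len(neighborhoods):
        nxt = first_improving(sol, neighborhoods[i])
        if nxt is None:
            i += 1           # local optimum in N_i: move to next neighborhood
        else:
            sol, i = nxt, 0  # improvement: restart from the first neighborhood
    return sol               # local optimum w.r.t. all k neighborhoods
```

Piped VND, by contrast, simply runs the II algorithms to completion one after another, feeding each one's final solution to the next.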

SLIDE 46

Exercise 1.2: VND algorithms for the TSPTW

Implement 4 VND algorithms for the TSPTW.

Instances: same as 1.1
Experiments: 100 runs of each algorithm per instance, with different random seeds
Report (tables, boxplots, statistical tests): same as 1.1

SLIDE 47

What to send to manuel.lopez-ibanez@ulb.ac.be?

1. Report (in PDF), which must contain:
  • Short summary of contents: what is implemented, how it was implemented (language, compiler, etc.), how to run it, any extras
  • Analysis of exercise 1.1: tables, boxplots, statistical results, conclusions (what do you think the analysis shows? why?)
  • Analysis of exercise 1.2: tables, boxplots, statistical results, conclusions
  • Overall conclusions taking into account both 1.1 and 1.2

2. Source code (C, C++ or Java), with:
  • A README file explaining how to compile it, how to run it, and the structure of the source code
  • Reasonable comments in the source code
  • Please do not use (Windows/Mac)-only libraries or extensions

Deadline: April 10, 2013, 23:59
