Automated Configuration of MIP Solvers
SLIDE 1

Automated Configuration of MIP solvers

Frank Hutter, Holger Hoos, and Kevin Leyton-Brown Department of Computer Science University of British Columbia Vancouver, Canada {hutter,hoos,kevinlb}@cs.ubc.ca CPAIOR 2010, June 16


SLIDE 3

Parameters in Algorithms

Most algorithms have parameters

◮ Decisions that are left open during algorithm design
  – numerical parameters (e.g., real-valued thresholds)
  – categorical parameters (e.g., which heuristic to use)
◮ Set to optimize empirical performance

Prominent parameters in MIP solvers

◮ Preprocessing
◮ Which type of cuts to apply
◮ MIP strategy parameters
◮ Details of the underlying linear (or quadratic) programming solver


SLIDE 8

Example: IBM ILOG CPLEX

◮ 76 parameters that affect the search trajectory

"Integer programming problems are more sensitive to specific parameter settings, so you may need to experiment with them." [Cplex 12.1 user manual, page 235]

◮ "Experiment with them"
  – Perform manual optimization in 76-dimensional space
  – Complex, unintuitive interactions between parameters
  – Humans are not good at that
◮ Cplex automated tuning tool (since version 11)
  – Saves valuable human time
  – Improves performance


SLIDE 12

Our work: automated algorithm configuration

◮ Given:
  – Runnable algorithm A, its parameters and their domains
  – Benchmark set of instances Π
  – Performance metric m
◮ Find:
  – Parameter setting ("configuration") of A optimizing m on Π
◮ First to handle this with many categorical parameters
  – E.g., 51 of the 76 Cplex parameters are categorical
  – 10^47 possible configurations

This paper: application study for MIP solvers

◮ Use an existing algorithm configuration tool (ParamILS)
◮ Use different MIP solvers (Cplex, Gurobi, lpsolve)
◮ Use six different MIP benchmark sets
◮ Optimize different objectives (runtime to optimality / MIP gap)
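The Given/Find problem statement above can be written down as a small interface. This is an illustrative Python sketch; the field names and the toy mean-cost metric are our choices, not part of the paper:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

@dataclass
class ConfigurationProblem:
    """The algorithm configuration problem stated on this slide: given a
    runnable algorithm A with parameter domains, a benchmark set Pi, and
    a performance metric m, find the configuration optimizing m on Pi."""
    run: Callable[[Dict[str, object], object], float]  # A(theta, instance) -> measured cost
    domains: Dict[str, Sequence[object]]               # parameter name -> allowed values
    instances: List[object]                            # benchmark set Pi
    metric: Callable[[List[float]], float]             # m, e.g. mean runtime

    def cost(self, theta):
        """Performance of configuration theta over the whole benchmark set."""
        return self.metric([self.run(theta, inst) for inst in self.instances])
```

A configurator then searches `domains` for the `theta` minimizing `cost(theta)`.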

SLIDE 13

Outline

  • 1. Related work
  • 2. Details about this study
  • 3. Results
  • 4. Conclusions



SLIDE 17

Parameter Optimization Tools and Applications

◮ Composer [Gratch & DeJong, '92; Gratch & Chien, '96]
  – Spacecraft communication scheduling
◮ Calibra [Diaz & Laguna, '06]
  – Optimized various metaheuristics
◮ F-Race [Birattari et al., '04–present]
  – Iterated Local Search and Ant Colony Optimization
◮ ParamILS [Hutter et al., '07–present]
  – SAT (tree & local search), time-tabling, protein folding, ...
◮ Stop [Baz, Hunsaker, Brooks & Gosavi, '07 (tech report); Baz, Hunsaker & Prokopyev, Comput Optim Appl, '09]
  – Optimized MIP solvers, including Cplex
  – We only found this work ≈ 1 month ago
  – Main problem: only optimized performance for single instances
  – Only used a small subset of 10 Cplex parameters

SLIDE 18

Outline

  • 1. Related work
  • 2. Details about this study
    – The automated configuration tool: ParamILS
    – The MIP solvers: Cplex, Gurobi & lpsolve
    – Experimental setup
  • 3. Results
  • 4. Conclusions



SLIDE 25

Simple manual approach for configuration

  Start with some parameter configuration
  repeat
    Modify a single parameter
    if results on benchmark set improve then
      keep new configuration
  until no more improvement possible (or "good enough")

Manually-executed local search

ParamILS [Hutter et al., AAAI'07 & '09]:
Iterated local search: biased random walk over local optima
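The manually-executed loop above maps directly onto a one-exchange local search. A minimal Python sketch, where the `evaluate` function (benchmark score, lower is better) and the parameter domains are placeholders:

```python
def manual_style_search(config, params, evaluate, max_iters=100):
    """One-exchange local search, as on the slide: change a single
    parameter, keep the change if the benchmark score improves, and
    stop once no single change helps (a local optimum)."""
    best_score = evaluate(config)
    for _ in range(max_iters):
        improved = False
        for name, domain in params.items():
            for value in domain:
                if value == config[name]:
                    continue
                candidate = dict(config, **{name: value})
                score = evaluate(candidate)
                if score < best_score:  # minimize, e.g. mean runtime
                    config, best_score = candidate, score
                    improved = True
        if not improved:  # no single-parameter change helps anymore
            break
    return config, best_score
```

ParamILS escapes the local optima this loop gets stuck in by perturbing the configuration and restarting the descent (iterated local search).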

slide-26
SLIDE 26

Instantiations of ParamILS Framework

How to evaluate each configuration?

◮ BasicILS(N): perform fixed number of N runs to evaluate a

configuration θ

– Variance reduction: use same N instances & seeds for each θ

11

slide-27
SLIDE 27

Instantiations of ParamILS Framework

How to evaluate each configuration?

◮ BasicILS(N): perform fixed number of N runs to evaluate a

configuration θ

– Variance reduction: use same N instances & seeds for each θ

◮ FocusedILS: choose N(θ) adaptively

– small N(θ) for poor configurations θ – large N(θ) only for good θ

11

SLIDE 28

Instantiations of ParamILS Framework

How to evaluate each configuration?

◮ BasicILS(N): perform a fixed number N of runs to evaluate a configuration θ
  – Variance reduction: use the same N instances & seeds for each θ
◮ FocusedILS: choose N(θ) adaptively
  – small N(θ) for poor configurations θ
  – large N(θ) only for good θ
  – typically outperforms BasicILS
  – used in this study
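The BasicILS(N) evaluation with shared instances and seeds can be sketched as follows. This is illustrative only; `run_algorithm` is a placeholder for one run of the target solver returning its measured cost:

```python
import random

def basic_ils_cost(theta, run_algorithm, instances, N, seed=0):
    """BasicILS(N)-style evaluation: every configuration theta is scored
    on the SAME N (instance, seed) pairs, so differences between
    configurations are not drowned out by sampling noise."""
    rng = random.Random(seed)  # fixed seed => identical sample for every theta
    pairs = [(inst, rng.randrange(2**31)) for inst in instances[:N]]
    return sum(run_algorithm(theta, inst, s) for inst, s in pairs) / N
```

FocusedILS keeps the same blocked comparison but grows N(θ) only for configurations that survive early comparisons.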

slide-29
SLIDE 29

Adaptive Choice of Cutoff Time

◮ Evaluation of poor configurations takes especially long

12

slide-30
SLIDE 30

Adaptive Choice of Cutoff Time

◮ Evaluation of poor configurations takes especially long ◮ Can terminate evaluations early

– Incumbent solution provides bound – Can stop evaluation once bound is reached

12

SLIDE 31

Adaptive Choice of Cutoff Time

◮ Evaluation of poor configurations takes especially long
◮ Can terminate evaluations early
  – Incumbent solution provides a bound
  – Can stop an evaluation once the bound is reached
◮ Results [Hutter et al., JAIR'09]
  – Provably never hurts
  – Sometimes substantial speedups
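The early-termination idea can be sketched like this — our simplification of the adaptive capping described in the JAIR'09 paper, where `run_with_cutoff` is a placeholder that runs the solver with a CPU-time cutoff and returns the time actually consumed:

```python
def capped_total_runtime(theta, run_with_cutoff, instances, bound):
    """Adaptive capping sketch: evaluate theta instance by instance, but
    stop as soon as the accumulated runtime reaches the incumbent's
    total (`bound`) -- theta can then no longer win the comparison,
    so finishing the remaining runs would be wasted effort."""
    total = 0.0
    for inst in instances:
        remaining = bound - total
        if remaining <= 0:
            return float("inf")  # provably no better than the incumbent
        total += run_with_cutoff(theta, inst, cutoff=remaining)
    return total
```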



SLIDE 39

MIP Solvers & their parameters

◮ Commercial solvers: Cplex 12.1 & Gurobi 2.0.1
◮ Open-source solver: lpsolve 5.5

  Algorithm  Parameter type  # params  # values          Total # configurations
  Cplex      Boolean         6         2                 1.90 · 10^47
             Categorical     45        3–7
             Integer         18        discretized: 5–7
             Continuous      7         discretized: 5–8
  Gurobi     Boolean         4         2                 3.84 · 10^14
             Categorical     16        3–5
             Integer         3         discretized: 5
             Continuous      2         discretized: 5
  lpsolve    Boolean         40        2                 1.22 · 10^15
             Categorical     7         3–8

Problems with some parameter configurations

◮ Segmentation faults & wrong results
◮ Detect such runs online, give them the worst possible score

Local search avoids problematic parameter configurations

◮ Concise bug reports helped to fix 2 bugs in Gurobi (!)
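The online crash handling can be sketched as a wrapper around each solver run. This is illustrative Python; `parse_runtime` and the "runtime is the last output token" format are made-up assumptions, not any solver's real interface:

```python
import subprocess

WORST = 1e10  # worst possible score, assigned to misbehaving runs

def parse_runtime(stdout_bytes):
    # Hypothetical output format: the run's last token is its runtime in seconds.
    return float(stdout_bytes.decode().split()[-1])

def scored_run(cmd, timeout_s):
    """Execute one solver run and detect segfaults, nonzero exits, and
    timeouts online; such runs get the worst possible score, so the
    configurator's local search steers away from those configurations."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return WORST
    if result.returncode != 0:  # crash (e.g. segfault) or error exit
        return WORST
    return parse_runtime(result.stdout)
```

Because `WORST` dominates any real runtime, a single crashing run is enough for the local search to move away from that configuration.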



SLIDE 42

Benchmark sets used

  Domain                        Type  # instances  Citation
  Comp. sustainability (SUST)   MILP  2 000        [Gomes et al., '08]
  Combinatorial auctions (WDP)  MILP  2 000        [Leyton-Brown et al., '00]
  Mixed integer knapsack (MIK)  MILP  120          [Atamtürk, '03]
  ... and 3 more

Split benchmarks 50:50 into training and test sets

◮ Optimized parameters on the training set
◮ Reported performance on the test set
◮ Necessary to check for over-tuning
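A 50:50 split like the one above takes only a few lines (illustrative; the fixed seed is arbitrary but keeps the split reproducible):

```python
import random

def train_test_split(instances, seed=1):
    """Shuffle and split the benchmark set 50:50: configure on the
    training half, report on the held-out test half, so over-tuning
    to the training instances becomes visible."""
    shuffled = list(instances)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]
```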


SLIDE 44

Setup of configuration experiments

Perform 10 independent runs of ParamILS

◮ Select the configuration θ̂* of the run with best training performance

Compare test performance of:

◮ ParamILS's configuration θ̂*
◮ Default algorithm settings
◮ Cplex tuning tool
  – Gurobi and lpsolve: no tuning tool available
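The selection protocol above — k independent configurator runs, keep the configuration with the best training performance — is a one-liner (illustrative Python; the names are ours):

```python
def select_incumbent(final_configs, training_cost):
    """From the final configurations of k independent ParamILS runs,
    select the one with the best (lowest) TRAINING cost; only this
    single configuration is then evaluated on the test set."""
    return min(final_configs, key=training_cost)
```

Selecting on training cost keeps the test set untouched, so the reported test performance remains an unbiased estimate.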


SLIDE 51

Minimization of Runtime to Optimal Solution

◮ "Optimal": relative optimality gap of 0.0001 (Cplex and Gurobi default)
◮ Ran ParamILS for 2 days on 10 machines
◮ Mean speedup (on test instances)
  – Cplex: 2x to 50x
  – lpsolve: 1x (no speedup) to 150x
  – Gurobi: 1.2x to 2.3x

[Scatter plots of default vs. ParamILS-configured runtime (CPU s) on training and test instances, with timeouts marked separately: Cplex on SUST instances (50x), lpsolve on WDP instances (150x), Gurobi on SUST instances (2.3x), Gurobi on MIK instances (1.2x)]

SLIDE 56

Comparison to Cplex tuning tool

◮ Cplex tuning tool
  – Evaluates predefined good configurations, returns the best one
  – Required runtime varies (from < 1 h to weeks)
◮ ParamILS: anytime algorithm
  – At each time step, keeps track of its incumbent

[Plots of performance (CPU s) vs. configuration budget (CPU s) for the default, the Cplex tuning tool, and ParamILS: Cplex on MIK instances and Cplex on SUST instances]


SLIDE 59

Minimization of Optimality Gap

◮ Objective: minimal optimality gap within 10 seconds of runtime
◮ Ran ParamILS for 5 hours on 10 machines
◮ Reduction factors of average optimality gap (on the test set)
  – Cplex: 1.3x to 8.6x
  – lpsolve: 1x (no reduction) to 46x
  – Gurobi: 1.1x to 2.2x



SLIDE 63

Conclusions

MIP solvers can be configured automatically

◮ Configuration tool ParamILS available online:
  – http://www.cs.ubc.ca/labs/beta/Projects/ParamILS/
  – off-the-shelf tool (knows nothing about MIP or MIP solvers!)
◮ Sometimes substantial improvements
◮ Saves valuable human time

Requirements

◮ Representative instance set
  – 100 instances are sometimes not enough
  – If you generate instances, please make more (e.g., 2 000)!
◮ CPU time (here: 10 × 2 days per domain)


SLIDE 66

Future Work

◮ Model-based techniques
  – Fit a model that predicts the performance of a given configuration on a given instance
  – Use that model to quantify
    + Importance of each parameter
    + Interaction of parameters
    + Interaction of parameters and instance characteristics
◮ Per-instance approaches for heterogeneous benchmarks
  – Given a new, unseen instance:
    + Compute instance characteristics (fast)
    + Use the parameter configuration predicted to be best for that instance
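The per-instance idea in the last bullet can be sketched as follows (illustrative only; a real system would use learned regression models and richer feature vectors):

```python
def per_instance_choice(instance_features, predicted_cost, configs):
    """Per-instance algorithm configuration: compute cheap features of
    the new instance, then run the configuration whose model predicts
    the lowest cost for those features."""
    return min(configs, key=lambda c: predicted_cost(c, instance_features))
```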

SLIDE 67

Thanks to:

◮ Providers of instance benchmark sets
  – Louis-Martin Rousseau
  – Bistra Dilkina
  – Berkeley Computational Optimization Lab
◮ Makers of commercial MIP solvers, for free full academic licenses
  – IBM (Cplex)
  – Gurobi
◮ lpsolve developers for their solver
◮ Compute clusters
  – Westgrid
  – CFI-funded arrow cluster
◮ Funding agencies
  – Postdoc fellowship from CBIE
  – MITACS
  – NSERC

SLIDE 68

Backup slides

SLIDE 69

Differences to STOP [Baz et al., '09]

Baz et al. optimized for single instances

"In practice, users would typically be tuning for a family of related instances rather than for an individual instance"

◮ Generalization to sets of instances is nontrivial
  – Cannot afford to run all instances for each configuration
  – FocusedILS adapts the # of runs per configuration

Further differences

◮ Baz et al. used an older Cplex version (9.0)
  – defaults improved in newer Cplex versions
◮ Baz et al. considered (only) 10 Cplex parameters
  – and also not all possible values for each parameter
  – selecting these in order to improve Stop's performance requires domain knowledge

SLIDE 73

Configuration of MIP Solvers: Optimality Gap

◮ Objective: minimal optimality gap within 10 seconds of runtime
◮ Ran ParamILS for 5 hours on 10 machines
◮ Reduction factors of average optimality gap (on test instances)
  – Cplex: 1.3x to 8.6x
  – lpsolve: 1x (no reduction) to 46x
  – Gurobi: 1.1x to 2.2x

[Scatter plots of default vs. auto-configured % MIP gap on training and test instances: Cplex on MIK instances (8.6x) and lpsolve on MIK instances (46x)]