

SLIDE 1

Learning a Reactive Restart Strategy to Improve Stochastic Search

Serdar Kadıoğlu, Meinolf Sellmann, Markus Wagner
Learning and Intelligent Optimization, LION 2017

Serdar Kadıoğlu | Manager of Research and Development, Oracle | Visiting Assistant Professor, Brown University

SLIDE 2

Houston, we have a problem…

How to fix a computer?
  • Reinstall the OS
  • Run a virus scan
  • Restart the computer

Restarts to the rescue!

SLIDES 3-5

Background: Restarted Search

  • Restarts have become an integral part of combinatorial search
  • Used in stochastic search algorithms and randomized heuristics
  • Complete methods: avoid heavy-tailed runtime distributions (Gomes et al. JAR'00)
  • Incomplete methods: a diversification technique

SLIDE 6

Agenda for Today
  1. Restart Strategies
  2. Reactive Restarts
  3. Numerical Results

SLIDE 7

Part I: Brief Background

SLIDES 8-9

Background: Restart Strategies

  • Designing an appropriate restart strategy is complex
  • Two common approaches:
    1. Restart with a certain probability p
    2. Employ a fixed schedule f of restarts

SLIDE 10

Background: Restart Strategies – Feasibility

  • Theoretical work on fixed-schedule restart strategies (Luby et al. '93)
  • Practical studies with SAT and CP solvers
  • Geometrically growing restart limits (Wu et al. CP'07)
  • (Audemard et al. CP'12) argue that fixed schedules are sub-optimal for SAT
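A fixed schedule of this kind can be generated directly; a minimal sketch of the universal Luby sequence (1, 1, 2, 1, 1, 2, 4, …), where the i-th restart is given luby(i) base units of effort:

```python
def luby(i):
    """Return the i-th term (1-indexed) of the Luby restart sequence."""
    # Find the smallest k with i <= 2^k - 1.
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if i == (1 << k) - 1:          # end of a block: emit 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the prefix
```

In practice the base unit is multiplied by a solver-specific constant (e.g., a fail limit).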

SLIDE 11

Background: Restart Strategies – Optimization

  • Classical optimization algorithms are often deterministic
  • As such, they do not really benefit from restarts
  • Modern optimization algorithms have randomized components
  • Memory constraints and parallel computation introduce new characteristics
  • (Ruiz et al. '16) use different mathematical programming formulations to provide different starting points for the solver

SLIDES 12-14

Restart Strategies – Limited Runtime Budget

  • Assume we are given a time budget t to run an algorithm
  • Two natural options:
    1. Single-run strategy: use all of the time t for a single run of the algorithm
    2. Multi-run strategy: make k runs, each with runtime t/k
  • (Fischetti et al. '14) generalize these strategies into Bet-And-Run for MIPs

SLIDES 15-18

Bet-And-Run: Sampling Phase + Long Run

  • Phase I: perform k runs for some (short) time limit t1 with t1 ≤ t/k
  • Phase II: in the remaining time t2 = t − k·t1, continue only the best run from the first phase until timeout
  • Single-run strategy: special case for k = 0
  • Multi-run strategy: special case for t1 = t/k
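The two phases can be sketched in a few lines. Here `new_run` and `resume` form a hypothetical solver interface (not from the paper): `new_run()` creates a fresh run and `resume(run, seconds)` continues it and reports the best objective so far; minimization is assumed.

```python
def bet_and_run(new_run, resume, k, t1, t):
    """Bet-and-run: sample k short runs of length t1, then bet the
    remaining budget t2 = t - k*t1 on the best one (minimization)."""
    runs = [new_run() for _ in range(k)]
    scores = [resume(r, t1) for r in runs]            # Phase I: k runs of length t1
    best = min(range(k), key=lambda i: scores[i])     # pick the most promising run
    return resume(runs[best], t - k * t1)             # Phase II: long run until timeout
```

Setting k = 0 recovers the single-run strategy, and t1 = t/k the multi-run strategy, matching the special cases above.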

SLIDES 19-22

Bet-And-Run: Sampling Phase + Long Run

  • (Fischetti et al. OR'14) introduced diversity in the starting conditions of MIP solvers; experimentally good results with k = 5
  • (Friedrich et al. AAAI'17) studied TSP and MVC; experimentally good results with Restarts^40_1% (40 short runs, each given 1% of the total budget)
  • (Lissovoi et al. GECCO'17) give theoretical results for a family of pseudo-Boolean optimization (PBO) functions: non-trivial k and t1 are necessary to find the global optimum efficiently
  • Issue: k and t1 need to be set appropriately
  • Issue: general inflexibility

SLIDE 23

This Paper: Reactive Restarts – a Hyper-Parameterized Restart Strategy

  • General methodology for any black-box optimization solver
  • Embeds the solver into an adaptive stochastic restart framework
  • Monitors key performance metrics: the evolution of the objective value of the solutions found

SLIDE 24

Part II: Reactive Restarts [Features – Scoring – Framework]

SLIDES 25-26

Reactive Restarts: Hyper-Parameterized Restart Strategy

  • Combine the two core ideas:
    1. Consider a batch of runs, with the option to continue some of them
    2. Automatically learn which run to continue, or whether to start a new run, based on the observed performance characteristics of past runs
  • Relevant approaches:
    – Reactive tabu search (Battiti and Tecchiolli, IJOC'94)
    – SATenstein (KhudaBukhsh et al. IJCAI'09)
    – Reactive dialectic search (Ansotegui et al. AAAI'17)

SLIDES 27-28

Reactive Restarts: Hyper-Parameterized Restart Strategy

  • Based on observed performance metrics, adaptively compute scores that give the likelihood of each decision:
    a) Continue the current run beyond its fail limit?
    b) Continue the best run so far?
    c) Discard the run and start a new one?
  • Use an automatic parameter tuner to learn how to adapt these probabilities
  • Using tuners for parameter training (Bezerra et al. '16, Stützle et al. '16, Ansotegui et al. '17)

SLIDES 29-31

Reactive Restarts: Features

  • How good did each run look initially?
  • What does the trajectory look like for making further progress?

These features feed the solver's three decisions: continue the current run beyond its fail limit, continue the best run so far, or discard the run and start a new one.

SLIDES 32-34

Features: 1) Initial performance

  • Applies to both the current and the best run
  • Record the best objective after the initial limit
  • What about the performance of starting a new run?
  • For a new run: base the estimate on past data
  • Track how well every new run did after the initial limit, and compute the running average

SLIDES 35-37

Features: 2) Performance ahead

  • Applies to the trajectory of both the current and the best run
  • Extrapolate the performance improvement achieved between the best solution found within the initial limit and the best performance achieved so far
  • The extrapolation point is the end of the remaining time
  • What about the performance of starting a new run?
  • We need an estimate of how well we might do (from now until the time limit) if all we did was start new runs
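One simple way to realize such an extrapolation; the slides do not give the exact formula, so the linear-trend assumption below is an illustration, not the paper's method:

```python
def extrapolate(obj_initial, t_initial, obj_now, t_now, t_limit):
    """Extrapolate a run's objective trajectory to the time limit (minimization).

    obj_initial: best objective at the end of the initial limit t_initial
    obj_now:     best objective at the current time t_now
    Assumes the improvement rate observed so far continues linearly."""
    if t_now <= t_initial:
        return obj_now                       # no trend observed yet
    rate = (obj_initial - obj_now) / (t_now - t_initial)  # improvement per second
    return obj_now - rate * (t_limit - t_now)
```

For example, a run that improved from 100 to 80 between seconds 10 and 20 is projected to reach 20 by second 50.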

SLIDES 38-40

Features: 2) Performance ahead

  • Assume we can afford n new runs before hitting the time limit
  • Estimate the trajectory for a new run as µ(z) − √(2 ln n) · σ(z) (Hartigan, Journal of Statistics '14)
  • Used when all we know is the mean µ(z) and the standard deviation σ(z)
  • This is a lower bound for the minimum of n repeated stochastic experiments
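The mean µ(z) and standard deviation σ(z) over the results of past new runs can be maintained incrementally. A sketch using Welford's streaming algorithm; the slides do not specify how these statistics are computed, so this is one standard choice:

```python
class RunningStats:
    """Streaming mean and (sample) standard deviation via Welford's algorithm."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self):
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0
```

Each completed new run feeds its best objective into `update`, and the estimate for n further runs is then formed from `mean` and `std()`.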

SLIDES 41-42

Features: One complication

  • When learning weights, we need to work with all kinds of instances
  • Objective values operate on vastly different scales
  • The values found for the current run, the best run, and the running average are therefore normalized to [0, 1]
  • 3 × 2 = 6 features
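A minimal min-max sketch of this normalization, assuming the [0, 1] scaling is applied per instance over the observed objective values (the slides do not spell out the exact scheme):

```python
def normalize(values):
    """Min-max normalize objective values to [0, 1] so that instances on
    vastly different scales become comparable for weight learning."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # degenerate case: all values equal
    return [(v - lo) / (hi - lo) for v in values]
```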

SLIDES 43-44

Features: Extensions based on time

  • Percentage of the overall time that has already elapsed
  • Percentage of the overall time afforded at the beginning, during which all we do is start a new run every time
  • Percentage of the total time left that a new run will be given
  • 6 + 3 = 9 features in total

SLIDES 45-48

From Features to Scores: Score computation

  • Feature vector f = (f1, …, f9)
  • For each decision d, the score is

    p_d(f) = 1 / (1 + exp(w_0^d + Σ_i w_i^d f_i))
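The score computation can be sketched directly from this formula; the weight values below are arbitrary placeholders for illustration, not learned parameters:

```python
import math

def decision_scores(f, weights):
    """Compute p_d(f) = 1 / (1 + exp(w0_d + sum_i w_i^d * f_i)) per decision d."""
    scores = {}
    for d, (w0, w) in weights.items():
        z = w0 + sum(wi * fi for wi, fi in zip(w, f))
        scores[d] = 1.0 / (1.0 + math.exp(z))
    return scores

# 3 decisions x (1 bias + 9 feature weights) = 30 parameters in total.
weights = {
    "continue_current": (0.1,  [0.2] * 9),
    "continue_best":    (-0.3, [0.1] * 9),
    "start_new":        (0.0,  [-0.2] * 9),
}
```

Each score lies in (0, 1) and drives the likelihood of the corresponding decision.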

SLIDE 49

Reactive Restarts: Hyper-parameterized framework
  1. Initialization phase
  2. Sampling phase
  3. Long run phase
SLIDES 50-53

Reactive Restarts Framework
  • Part I: Initialization Phase
  • Part II: Sampling Phase
  • Part III: Long Run Phase

SLIDES 54-55

Reactive Restarts: Hyper-parameterized framework

  • Given the weights w_i^d with d ∈ {1, 2, 3} and i ∈ {0, …, 9}
  • We can embed any black-box optimization solver
  • Black-box: we don't need to know anything about its inner workings
  • Assumptions:
    – Input: we can set a time limit to stop the solver, and add more time to continue a run
    – Output: the solver reports time and quality information when it finds its first/best solution

SLIDE 56

Part III: Numerical Results

SLIDE 57

Problems and Benchmarks: Travelling Salesperson Problem (TSP)

  • One of the most studied optimization problems
  • Given an edge-weighted graph, find a permutation of the nodes that minimizes the length of the tour visiting them in order and returning to the origin
  • Natural applications in sequencing, planning, and logistics
  • Solver: Chained Lin-Kernighan, LINKERN (Applegate et al. INFORMS'03)
  • Random components during the creation of the initial tour
  • Benchmarks: all 112 instances from TSPLIB, plus the challenge instances ch71009, mona-lisa100k, usa115475

SLIDE 58

Problems and Benchmarks: Minimum Vertex Cover (MVC)

  • Classical NP-hard optimization problem
  • Given an unweighted, undirected graph G, find a minimum subset S of vertices such that every edge in G has an endpoint in S
  • Related to max-clique, with applications in security, scheduling, and VLSI design
  • Solver: FASTVC (Cai, IJCAI'15), which works well on massive graphs
  • Random components in the candidate selection for the exchange step
  • Benchmarks: from 1,000 to 4 million vertices, and from 2,000 to 56 million edges

SLIDE 59

Training: Learning weights with GGA

  • Handed the parameterized framework to GGA with 2/3 of the instances
  • Trained for 70 generations
  • Population size: 100 individuals
  • Random replacement rate: 5%
  • Mutation rate: 5%

SLIDES 60-61

Testing: Approaches compared

  1. SINGLE: a single run with a random seed, using the entire time limit
  2. RESTARTS: fixed restart strategy with a preset limit, repeated
  3. LUBY: restarts based on the Luby sequence, with one unit set to 5 times the time needed to find an initial solution
  4. BET-AND-RUN: the previously described best strategy (Friedrich et al. AAAI'17)
  5. HYPER: hyper-parameterized bet-and-run (this paper)

Analyze: gap – best count – time to best

SLIDES 62-68

Results: Travelling Salesperson

[Figures: TSP result plots and tables]

SLIDES 69-70

Results: Minimum Vertex Cover

[Figures: MVC result plots and tables]

Learning a Reactive Restart Strategy to Improve Stochastic Search |

Results

LION 2017

Head-to-head comparisons

SLIDE 72

Concluding Remarks

  • A hyper-parameterized restart strategy
  • Comprehensive tests showing better solution quality
  • Validated on structurally different domains with structurally different features

SLIDE 73

Q&A