
CMU-Q 15-381 Lecture 8: Optimization I: Optimization for CSP - PowerPoint PPT Presentation



  1. CMU-Q 15-381 Lecture 8: Optimization I: Optimization for CSP, Local Search. Teacher: Gianni A. Di Caro

  2. LOCAL SEARCH FOR CSP
     § Real-life CSPs can be very large and hard to solve …
     § Methods so far: construct a solution by assigning one variable at a time; if an assignment fails because of a constraint violation, backtrack, and keep going until all variables have been assigned feasible values
     § At any point of the construction process we have one partial solution (a partial assignment of values to variables)
     § The states of the process are partial states of the problem

  3. LOCAL SEARCH FOR CSP
     Local search methods work with complete states (i.e., all variables assigned; the assignment can be infeasible)
     1. Start with some infeasible assignment (i.e., featuring one or more constraint violations)
     2. LS operators reassign variable values (one or more at each search step)
        A. Variable selection (e.g., randomly select any variable involved in a constraint violation)
        B. Value selection (e.g., the min-conflicts heuristic h: choose a value such that the new CSP assignment violates the fewest constraints)
     3. Iterate 2 (A-B) until a feasible solution is found, or only a few constraint violations survive, or …

  4. LOCAL SEARCH FOR CSP
     [Figure: neighbor states of a graph-coloring assignment, differing by one color change]

  5. EXAMPLE: N-QUEENS
     • States: 4 queens in 4 columns (4^4 = 256 states)
     • LS operator: move a queen within its column
     • Goal test: no attacks
     • Evaluation: h = number of attacks
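The evaluation function h can be computed directly; a minimal sketch, assuming the usual one-queen-per-column encoding where `state[c]` is the row of the queen in column `c`:

```python
def attacks(state):
    """h = number of attacking queen pairs; state[c] = row of queen in column c.

    Two queens attack each other if they share a row, or lie on a diagonal
    (row distance equal to column distance). Columns are distinct by encoding.
    """
    n = len(state)
    return sum(
        1
        for c1 in range(n)
        for c2 in range(c1 + 1, n)
        if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1
    )
```

For example, `[1, 3, 0, 2]` is a 4-queens solution (h = 0), while four queens on one diagonal give the maximal h for n = 4.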

  6. LOCAL SEARCH
     Local search algorithms at each step consider a single "current" state, and try to improve it by moving to one of its neighbors ➔ iterative improvement algorithms
     § Pros and cons
       o Not complete (not optimal), except with random restarts
       o Space complexity O(b)
       o Time complexity O(d), d can be ∞!
       o Can perform well also in large (infinite, continuous) spaces
       o Relatively easy to implement

  7. HILL-CLIMBING SEARCH
     Like climbing Everest in thick fog with amnesia
     § Move in the direction of strictly increasing value (up the hill)
     § Steepest ascent / steepest descent
     § Terminate when no neighbor has a higher value
     § Greedy (myopic) local search
     § We necessarily end up in a local optimum or on a plateau
     § Which optimum we reach depends on the starting point
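The steepest-ascent loop can be sketched in a few lines; `neighbors` and `value` are hypothetical callables supplied by the problem, not names from the slides:

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbor,
    terminating when no neighbor has a strictly higher value (a local
    optimum or a plateau)."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):   # no strictly uphill move left
            return state
        state = best
```

Because the move must be strictly improving, the loop is greedy and myopic: which optimum it returns depends entirely on the starting state.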

  8. HILL-CLIMBING SEARCH
     [Figure: 8-queens boards annotated with h values]
     A state with 17 conflicts, showing the number of conflicts obtained by moving a queen within its column, with the best moves marked in red
     A local optimum: a state that has only one conflict, but in which every move leads to a larger number of conflicts

  9. HILL-CLIMBING SEARCH
     § Hill-climbing can solve large instances of n-queens (n = 10^6) in a few seconds
     § 8-queens statistics:
       o State space of size ≈ 17 million
       o Starting from a random state, steepest-ascent hill climbing solves 14% of problem instances
       o It takes 4 steps on average when it succeeds, 3 when it gets stuck
       o When sideways moves are allowed, things change …
       o When multiple restarts are allowed, things change even more

  10. HILL-CLIMBING CAN GET STUCK!
      [Figure: objective function over the state space, showing the global maximum, a shoulder, plateaux, a "flat" local maximum, local optima, the current state, and its neighborhood]

  11. VARIANTS OF HILL-CLIMBING
      § Sideways moves: if there are no uphill moves, allow moving to a state with the same value as the current one (to escape shoulders)
      § With a limit of M = 100 consecutive sideways moves: 94% of 8-queens instances solved, with 21 steps on average on success and 64 steps on average on "failure"
      [Figure: same objective-function landscape as before, with sideways moves across a shoulder]

  12. VARIANTS OF HILL-CLIMBING
      § Sideways moves: if there are no uphill moves, allow moving to a state with the same value as the current one (escape shoulders)
      § Stochastic hill-climbing: the selection among the available uphill moves is done randomly (uniform, proportional, soft-max, ε-greedy, …), to be "less" greedy
      § First-choice hill-climbing: successors are generated randomly, one at a time, until one that is better than the current state is found (deals with large neighborhoods)
      § Random-restart hill-climbing: probabilistically complete (how do we select the next restart configuration?)
      In general, these variants apply to all local search algorithms
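The random-restart idea can be sketched as a generic wrapper around any local search; `solve`, `random_state`, and `is_goal` are hypothetical callables standing in for the underlying search, the restart-state generator, and the goal test:

```python
import random

def random_restart(solve, random_state, is_goal, max_restarts=100, seed=0):
    """Random-restart wrapper: rerun a local search from fresh random states.

    Probabilistically complete: each restart is an independent trial, so the
    probability of never reaching a goal vanishes as restarts accumulate.
    """
    rng = random.Random(seed)
    for _ in range(max_restarts):
        result = solve(random_state(rng))   # one full local-search run
        if is_goal(result):
            return result
    return None                             # budget exhausted
```

For 8-queens, wrapping steepest-ascent hill climbing (14% success per trial) this way makes the overall success probability approach 1 quickly.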

  13. HILL-CLIMBING CAN GET STUCK!
      Diagonal ridges: from each local maximum all the available actions point downhill, but there is an uphill path! Zig-zag motion, very long ascent time!
      Gradient ascent doesn't have this issue: all state-vector components are (potentially) changed when moving to a successor state, so climbing can follow the direction of the ridge

  14. LOCAL SEARCH + MIN-CONFLICTS HEURISTIC
      § The min-conflicts heuristic h chooses a value such that the new CSP assignment violates the fewest constraints
      § Given a random initial state, it can solve n-queens in almost constant time for very large n
      § The same appears to be true for any randomly generated CSP, except in a narrow range of the ratio of the number of constraints to the number of variables

  15. WALKSAT: LS FOR SAT
      § Binary literals (true / false)
      § Clause: disjunction of literals
      § Conjunctive Normal Form (CNF) for a logical formula: conjunction of clauses
      § 3-SAT: all clauses have 3 literals
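These definitions translate directly into code. A minimal sketch, assuming the common DIMACS-style encoding where a positive integer i is the literal "variable i is true" and -i is its negation:

```python
def satisfied(clause, model):
    """A clause (disjunction) is satisfied if any literal agrees with the model.

    Literal encoding (an assumption of this sketch): a positive int i means
    variable i is true; a negative int -i means variable i is false.
    """
    return any((lit > 0) == model[abs(lit)] for lit in clause)

def num_satisfied(cnf, model):
    """A CNF formula is a conjunction of clauses; count how many are satisfied."""
    return sum(satisfied(clause, model) for clause in cnf)
```

With this encoding, a 3-SAT instance is simply a list of 3-tuples of signed integers, and the model is a dict mapping variable indices to booleans.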

  16. WALKSAT: LS FOR SAT
      § Random 3-SAT
        o Sample uniformly from the space of all possible 3-clauses
        o n variables, C clauses
      § Which are the hard instances? Around C/n = 4.26

  17. WALKSAT: LS FOR SAT
      § The complexity peak is very stable …
        o across problem sizes
        o across solver types (systematic and stochastic)

  18. WALKSAT: LS FOR MAX-SAT
      § At each step, the randomly chosen clause is satisfied, but other clauses may become unsatisfied
      § The parameter p is called the "mixing probability" and is determined approximately by experiment for a given class of CNF formulas
      § For random, hard 3-SAT problems (those with a ratio of clauses to variables around 4.25), p = 0.5 works well
      § For 3-SAT formulas with more structure, as generated in many applications, slightly more greediness, i.e. p < 0.5, is often better
      § Empirically, restarting after O(n^4) flips, n = number of variables, works well
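The WalkSAT step described above can be sketched as follows, again assuming the signed-integer clause encoding; the flip cap, seeding, and tie-breaking are implementation choices of this sketch, not part of the slides:

```python
import random

def walksat(cnf, n_vars, p=0.5, max_flips=10_000, seed=0):
    """WalkSAT sketch: pick a random unsatisfied clause; with probability p
    flip a random variable in it (random walk), otherwise flip the variable
    whose flip leaves the fewest clauses unsatisfied (greedy step)."""
    rng = random.Random(seed)
    model = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda c: any((lit > 0) == model[abs(lit)] for lit in c)
    for _ in range(max_flips):
        unsat = [c for c in cnf if not sat(c)]
        if not unsat:
            return model                         # all clauses satisfied
        clause = rng.choice(unsat)               # this clause will become satisfied
        if rng.random() < p:
            var = abs(rng.choice(clause))        # noisy move: random variable
        else:
            def broken(v):                       # unsatisfied clauses after flipping v
                model[v] = not model[v]
                count = sum(not sat(c) for c in cnf)
                model[v] = not model[v]          # undo trial flip
                return count
            var = min((abs(lit) for lit in clause), key=broken)
        model[var] = not model[var]              # commit the flip
    return None                                  # restart would go here
```

Setting p closer to 0 makes the search greedier, matching the slide's advice for structured formulas; p = 0.5 matches the hard random 3-SAT regime.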

  19. GENERAL LOCAL SEARCH OPTIMIZER
      Function Search_by_Iterative_Solution_Modification()
        π = instance of optimization problem of class Π
        S = set of all feasible solutions of π
        N = neighborhood structure for Π, can be variable in ξ, t, m
        eval() = evaluation function for candidate solutions in N
        t = iteration, time
        ξ_t = search state at time t, current feasible solution
        m_t = memory structure of search states and values

        t ← 0
        m_0 ← ∅
        ξ_0 ← initial_feasible_solution(π, S)
        while ¬ terminate(ξ_t, π, N(ξ_t, π), t, …)
            (ξ′, m_t) ← step(N(ξ_t, π), m_t, eval())
            if accept(ξ′, ξ_t, t, m_t)
                ξ_{t+1} ← ξ′
            m_{t+1} ← update_solution_best_value(π, ξ_{t+1}, t)
            t ← t + 1
        if at_least_one_feasible_solution_has_been_generated(m_t, S)
            return best_solution_found(m)
        else
            return "No feasible solution found!"
      § We only need to be able to compute the evaluation function …
      § No derivatives or analytical properties are needed

