  1. Beyond Classical Search (Sections 4.1 and 4.2)

  2. Outline
     - Iterative improvement algorithms
     - Hill climbing
     - Simulated annealing
     - Local beam search
     - Genetic algorithms
     References: The slides were adapted to the 3rd edition of Russell and Norvig’s textbook using the slides for the 2nd edition.

  3. Iterative improvement algorithms
     - In many optimization problems the path is irrelevant; the goal state itself is the solution.
     - The state space is then a set of “complete” configurations, and the goal is one of the following:
       - find the optimal configuration, e.g., TSP
       - find a configuration that satisfies constraints, e.g., a timetable
     - In such cases we can use iterative improvement algorithms: keep a single “current” state and try to improve it.
     - Constant space, suitable for online as well as offline search.

  4. Example: travelling salesperson problem
     - Start with any complete tour, perform pairwise exchanges.
     - (Figure: tours over cities A–E illustrating a pairwise exchange; the tour ABEDCA becomes ABCDEA.)
     - 2-opt heuristic: remove two edges (usually crossing ones) and reconnect the fragments in a different way.
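     A minimal Python sketch of a 2-opt exchange; the list-based tour representation and the symmetric dist lookup table are assumptions for illustration, not part of the slides.

     def tour_length(tour, dist):
         """Total length of the closed tour; dist[(a, b)] is assumed symmetric."""
         return sum(dist[(tour[i], tour[(i + 1) % len(tour)])]
                    for i in range(len(tour)))

     def two_opt_swap(tour, i, k):
         """Remove the edge entering tour[i] and the edge leaving tour[k],
         then reconnect by reversing the segment tour[i..k]."""
         return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]

     def two_opt(tour, dist):
         """Keep applying improving 2-opt exchanges until none exists."""
         improved = True
         while improved:
             improved = False
             for i in range(1, len(tour) - 1):
                 for k in range(i + 1, len(tour)):
                     candidate = two_opt_swap(tour, i, k)
                     if tour_length(candidate, dist) < tour_length(tour, dist):
                         tour, improved = candidate, True
         return tour

     # The slide's example: reversing the E-D-C segment of ABEDCA yields ABCDEA.
     # two_opt_swap(['A', 'B', 'E', 'D', 'C'], 2, 4)  ->  ['A', 'B', 'C', 'D', 'E']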

  5. Example: n-queens problem
     - Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
     - Move a queen to reduce the number of conflicts.

  6. Hill-climbing (or gradient ascent/descent)

     function HILL-CLIMBING(problem) returns a state that is a local maximum
       current ← MAKE-NODE(problem.INITIAL-STATE)
       loop do
         neighbor ← a highest-valued successor of current
         if neighbor.VALUE ≤ current.VALUE then return current.STATE
         current ← neighbor
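     A minimal Python rendering of the same greedy loop, assuming the problem supplies successors(state) and value(state) callables (illustrative names, not from the textbook).

     def hill_climbing(initial_state, successors, value):
         """Greedy ascent: move to the best neighbor until no neighbor improves."""
         current = initial_state
         while True:
             neighbors = successors(current)
             if not neighbors:
                 return current
             best = max(neighbors, key=value)
             if value(best) <= value(current):
                 return current        # local maximum (or plateau)
             current = best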

  7. 8-queens with hill-climbing
     - The heuristic cost function h is the number of pairs of queens attacking each other (directly or indirectly).
     - The h for the state below is 17 (3+4+2+3+2+2+1).
     - (Figure: an 8 × 8 board in which each square shows the value of h obtained by moving that column's queen to that square; the best successors shown have h = 12.)
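     A sketch of this cost function in Python, assuming a column-based encoding in which state[c] is the row of the queen in column c (the encoding is an assumption, not from the slides).

     def attacking_pairs(state):
         """h: number of pairs of queens attacking each other, directly or indirectly.
         state[c] is the row of the queen in column c, so column conflicts cannot occur."""
         h = 0
         n = len(state)
         for c1 in range(n):
             for c2 in range(c1 + 1, n):
                 same_row = state[c1] == state[c2]
                 same_diagonal = abs(state[c1] - state[c2]) == c2 - c1
                 if same_row or same_diagonal:
                     h += 1
         return h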

  8. 8-queens with hill-climbing (cont’d)
     - To find a successor: move a single queen to another square in the same column.
     - The figure shows a local maximum: there is one attack, but every successor has a higher cost.
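     The corresponding successor function, under the same hypothetical column-based encoding as above.

     def queen_successors(state):
         """All states obtained by moving one queen to a different row in its own column."""
         n = len(state)
         result = []
         for col in range(n):
             for row in range(n):
                 if row != state[col]:
                     result.append(state[:col] + (row,) + state[col + 1:])
         return result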

  9. Local maxima in hill-climbing
     - “Like climbing Everest in thick fog with amnesia”
     - Problem: depending on the initial state, hill-climbing can get stuck on local maxima.
     - (Figure: a one-dimensional state-space landscape plotting the objective function over the state space, labelled with the global maximum, a shoulder, a local maximum, a “flat” local maximum, and the current state.)
     - In continuous spaces there are additional problems with choosing the step size, and convergence can be slow.

  10. Ridges in hill-climbing
     - Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.

  11. Variants of hill-climbing
     - Stochastic hill-climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move.
     - First-choice hill-climbing implements stochastic hill-climbing by generating successors randomly until one is generated that is better than the current state.
     - Random-restart hill-climbing conducts a series of hill-climbing searches from randomly generated initial states (see the sketch below).
     - Note that hill-climbing never moves downward.
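     Random-restart hill-climbing is a thin wrapper around the basic loop; this sketch reuses the hypothetical hill_climbing, successors, and value helpers from slide 6 and assumes a random_state() generator.

     def random_restart_hill_climbing(random_state, successors, value, restarts=25):
         """Run independent hill climbs from random states and keep the best result."""
         best = None
         for _ in range(restarts):
             result = hill_climbing(random_state(), successors, value)
             if best is None or value(result) > value(best):
                 best = result
         return best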

  12. Simulated annealing
     - Idea: escape local maxima by allowing some “bad” moves, but gradually decrease their size and frequency.
     - Devised by Metropolis et al., 1953, for modelling physical processes.
     - Widely used in VLSI layout, airline scheduling, etc.

  13. Simulated annealing algorithm

     function SIMULATED-ANNEALING(problem, schedule) returns a solution state
       inputs: problem, a problem
               schedule, a mapping from time to “temperature”
       local variables: T, a “temperature” controlling the probability of downward steps
       current ← MAKE-NODE(problem.INITIAL-STATE)
       for t ← 1 to ∞ do
         T ← schedule(t)
         if T = 0 then return current
         next ← a randomly selected successor of current
         ΔE ← VALUE[next] − VALUE[current]
         if ΔE > 0 then current ← next
         else current ← next only with probability e^(ΔE/T)
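     A Python sketch following the pseudocode above; the successors and value callables and the geometric cooling schedule are assumptions for illustration, not part of the slides.

     import math
     import random

     def simulated_annealing(initial_state, successors, value, schedule):
         """Accept every improving move; accept a worsening move with probability e^(dE/T)."""
         current = initial_state
         t = 1
         while True:
             T = schedule(t)
             if T == 0:
                 return current
             next_state = random.choice(successors(current))
             delta_e = value(next_state) - value(current)
             if delta_e > 0 or random.random() < math.exp(delta_e / T):
                 current = next_state
             t += 1

     # Example cooling schedule (an assumption, not from the slides):
     # geometric decay cut off to exactly zero after a fixed number of steps.
     def schedule(t, T0=1.0, alpha=0.995, limit=10_000):
         return T0 * (alpha ** t) if t < limit else 0.0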

  14. Properties of simulated annealing
     - At a fixed “temperature” T, the state occupation probability reaches the Boltzmann distribution p(x) = α e^(E(x)/kT).
     - If T is decreased slowly enough, the search always reaches the best state.
     - Is this necessarily an interesting guarantee?

  15. Local beam search
     - Idea: keep track of k states rather than just one.
     - Start with k randomly generated states.
     - At each step, generate all the successors of the k states. If one is a goal, halt; otherwise select the best k states.
     - Can be viewed as parallel, interacting random searches.
     - Stochastic beam search helps alleviate the problem of lack of diversity among the k states: rather than choosing the best k among the successors, choose randomly, with higher probability for better successors.
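     A possible Python sketch of the basic (non-stochastic) beam search described above; successors, value, and is_goal are assumed callables, and the step limit is an added safeguard.

     import heapq

     def local_beam_search(initial_states, successors, value, is_goal, steps=1000):
         """Expand all current states, halt on a goal, otherwise keep the k best successors."""
         k = len(initial_states)
         states = list(initial_states)
         for _ in range(steps):
             candidates = [s for state in states for s in successors(state)]
             if not candidates:
                 break
             for s in candidates:
                 if is_goal(s):
                     return s
             states = heapq.nlargest(k, candidates, key=value)
         return max(states, key=value)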

  16. Genetic algorithms
     - Idea: maintain a population of k states, similar to local beam search, but use genetically inspired methods to generate the successors.
     - Crossover means taking portions of the parents to generate a child.
     - Mutation means a random change in an individual.

  17. Genetic algorithm

     function GENETIC-ALGORITHM(population, FITNESS-FUNCTION) returns an individual
       inputs: population, a set of individuals
               FITNESS-FUNCTION, measures the fitness of an individual
       repeat
         new-population ← empty set
         for i = 1 to SIZE(population) do
           x ← RANDOM-SELECTION(population, FITNESS-FUNCTION)
           y ← RANDOM-SELECTION(population, FITNESS-FUNCTION)
           child ← REPRODUCE(x, y)
           if (small random probability) then child ← MUTATE(child)
           add child to new-population
         population ← new-population
       until some individual is fit enough, or enough time has elapsed
       return the best individual in population, according to FITNESS-FUNCTION
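     A Python sketch of this loop; roulette-wheel parent selection and a fixed number of generations (instead of the “fit enough” test) are simplifying assumptions, and reproduce and mutate are passed in as callables.

     import random

     def genetic_algorithm(population, fitness, reproduce, mutate,
                           generations=1000, mutation_rate=0.05):
         """Replace the whole population each generation; parents are drawn with
         probability proportional to fitness (assumes non-negative fitness values)."""
         for _ in range(generations):
             weights = [fitness(ind) for ind in population]
             new_population = []
             for _ in range(len(population)):
                 x, y = random.choices(population, weights=weights, k=2)
                 child = reproduce(x, y)
                 if random.random() < mutation_rate:
                     child = mutate(child)
                 new_population.append(child)
             population = new_population
         return max(population, key=fitness)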

  18. Reproduction algorithm

     function REPRODUCE(x, y) returns an individual
       inputs: x, y, parent individuals
       n ← LENGTH(x)
       c ← random number from 1 to n
       return APPEND(SUBSTRING(x, 1, c), SUBSTRING(y, c+1, n))
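     The same single-point crossover in Python (a sketch that works on strings or tuples).

     import random

     def reproduce(x, y):
         """Single-point crossover: a prefix of x is appended to the matching suffix of y."""
         n = len(x)
         c = random.randint(1, n)          # cut point, as in the pseudocode (1..n)
         return x[:c] + y[c:]

     # If the random cut point happened to be c = 3, then
     # reproduce("32752411", "24748552") would return "327" + "48552" = "32748552".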

  19. Crossover in 8-queens
     - The shaded columns of the parents are lost; the unshaded columns are retained.
     - (Figure: two parent boards combined by crossover to produce a child board.)

  20. Iterative algorithms for CSPs
     - Hill-climbing and simulated annealing typically work with “complete” states, i.e., all variables assigned.
     - To apply them to CSPs:
       - allow states with unsatisfied constraints
       - operators reassign variable values
     - Variable selection: randomly select any conflicted variable.
     - Value selection by the min-conflicts heuristic: choose the value that violates the fewest constraints, i.e., hill-climb with h(n) = total number of violated constraints (see the sketch below).
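     A sketch of the resulting min-conflicts hill-climbing loop in Python; the conflicts(var, val, assignment) counting function, the dict-based assignment, and the step limit are assumptions for illustration.

     import random

     def min_conflicts(assignment, variables, domains, conflicts, max_steps=10_000):
         """Hill-climb on complete assignments: pick a random conflicted variable and
         give it the value that violates the fewest constraints."""
         for _ in range(max_steps):
             conflicted = [v for v in variables
                           if conflicts(v, assignment[v], assignment) > 0]
             if not conflicted:
                 return assignment         # no constraint is violated: a solution
             var = random.choice(conflicted)
             assignment[var] = min(domains[var],
                                   key=lambda val: conflicts(var, val, assignment))
         return None                       # give up after max_steps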
