Informed Search (2)



1. Informed Search (2)
   Introduction to Artificial Intelligence
   Hadi Moradi

   Last time: search strategies
   - Informed: use heuristics to guide the search
     - Best-first:
     - Greedy search:
     - A* search:

2. Another Search Problem
   - Job scheduling:
     - m jobs, 1 machine
     - m jobs, n machines (job-shop scheduling)
   - Example: 5-job problem
   - N-job problem

   This time
   - Iterative improvement
     - Hill climbing
     - Simulated annealing

3. Iterative improvement
   - In many optimization problems,
     - the path is irrelevant;
     - the goal state itself is the solution.
   - In such cases, we can use iterative improvement algorithms:
     keep a single current state and try to improve it.

   Iterative improvement example: vacuum world
   - Simplified world: 2 locations, each may or may not contain dirt,
     each may or may not contain the vacuuming agent.
   - Goal of agent: clean up the dirt.
   - If the path does not matter, we do not need to keep track of it.

4. Iterative improvement example: n-queens

   Hill climbing (or gradient ascent/descent)
   - Iteratively maximize/minimize the "value" of the current state by
     replacing it with the successor state that has the highest value,
     for as long as possible (a sketch follows below).
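The steepest-ascent variant of this loop is easy to write down. Below is a minimal Python sketch applied to the n-queens example mentioned on the slide; the column-per-queen board encoding and the conflict-counting value function are illustrative choices of mine, not taken from the lecture:

```python
import random

def conflicts(board):
    """Number of attacking queen pairs; board[i] = row of the queen in column i."""
    n = len(board)
    return sum(1
               for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def hill_climb(board):
    """Steepest-ascent hill climbing: the value maximized is -conflicts,
    so fewer conflicts is better. Stops at a local optimum, which may
    not be a goal state."""
    while True:
        n = len(board)
        # All states reachable by moving one queen within its column.
        successors = [board[:i] + [r] + board[i + 1:]
                      for i in range(n) for r in range(n) if r != board[i]]
        best = min(successors, key=conflicts)
        if conflicts(best) >= conflicts(board):
            return board               # no better successor: local optimum
        board = best

random.seed(1)
start = [random.randrange(8) for _ in range(8)]
result = hill_climb(start)
print(result, "conflicts:", conflicts(result))
```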

5. Hill climbing
   - Note: minimizing a "value" function v(n) is equivalent to
     maximizing -v(n).

   Hill climbing
   - Problem: depending on the initial state, hill climbing may get
     stuck in local maxima. Does it matter if you start from the left
     or from the right?
   - Any suggestions?

6. Minimizing energy
   [Figure: energy landscape over states A, B, C, D, E; the basin of
   attraction for C is marked.]

   Local Minima Problem
   - Question: How do you avoid this local minimum?
   [Figure: barrier to local search - starting point, descent
   direction, local minimum, global minimum.]

7. Consequences of the Occasional Ascents
   - Desired effect: helps escape local optima.
   - Adverse effect: might pass the global optimum after reaching it.

   Boltzmann machines
   [Figure: energy landscape over states A, B, C, D, E; basin of
   attraction for C.]
   - The Boltzmann Machine of Hinton, Sejnowski, and Ackley (1984) uses
     simulated annealing to escape local minima.
   - Example: arranging sugar cubes in their box.

8. Simulated annealing: basic idea
   - From the current state, pick a random successor state:

   Simulated annealing: Sword

9. Simulated annealing in practice
   - set T
   - optimize for given T
   - lower T
   - repeat

   MDSA: Molecular Dynamics Simulated Annealing

   Simulated annealing in practice
   - set T
   - optimize for given T
   - lower T (see Geman & Geman, 1984)
   - repeat
   - Geman & Geman (1984): if T is lowered sufficiently slowly (with
     respect to the number of iterations used to optimize at a given T),
     simulated annealing is guaranteed to find the global minimum.
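The set-T / optimize / lower-T / repeat recipe maps directly onto a short program. This is a minimal sketch under assumed helper names (`E`, `random_successor`) and an assumed geometric cooling schedule; the lecture does not prescribe specific parameters:

```python
import math
import random

def simulated_annealing(state, E, random_successor,
                        t_start=1.0, t_min=1e-3, alpha=0.95, steps_per_t=100):
    """Schematic SA loop following the slide: set T, optimize for the
    given T, lower T, repeat. As in the lecture, the goal is to
    MAXIMIZE E; the schedule parameters are illustrative guesses."""
    t, best = t_start, state
    while t > t_min:                       # "repeat" until T is low enough
        for _ in range(steps_per_t):       # "optimize for given T"
            nxt = random_successor(state)
            delta = E(nxt) - E(state)      # delta < 0 is a "bad move"
            if delta >= 0 or random.random() < math.exp(delta / t):
                state = nxt                # sometimes accept bad moves
            if E(state) > E(best):
                best = state
        t *= alpha                         # "lower T" (geometric cooling)
    return best
```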

10. Simulated annealing in practice
    - set T
    - optimize for given T
    - lower T (see Geman & Geman, 1984)
    - repeat
    - Caveat:

    Simulated annealing algorithm
    - Idea: escape local extrema by allowing "bad moves," but gradually
      decrease their size and frequency.
    - Note: the goal here is to maximize E.

11. Note on simulated annealing: limit cases
    - Boltzmann distribution: accept a "bad move" with ΔE < 0 (the goal
      is to maximize E) with probability P(ΔE) = exp(ΔE/T).
    - If T is large: ΔE/T is negative and small in magnitude, so
      exp(ΔE/T) is close to 1: bad moves are accepted with high
      probability.
    - If T is near 0: ΔE/T is negative and large in magnitude, so
      exp(ΔE/T) is close to 0: bad moves are accepted with low
      probability.

    To accept or not to accept - SA?

    Change (ΔE)   exp(-ΔE/T), T = 0.95   exp(-ΔE/T), T = 0.1
    0.2           0.810157735            0.135335283
    0.4           0.656355555            0.018315639
    0.6           0.531751530            0.002478752
    0.8           0.430802615            0.000335463
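The table can be reproduced directly from the acceptance rule. A small check, assuming nothing beyond the formula itself:

```python
import math

# Bad-move acceptance probability P(dE) = exp(dE/T) with dE < 0
# (E is maximized). Written as exp(-change/T) for the positive
# "change" values listed on the slide; this reproduces the table.
for temp in (0.95, 0.1):
    for change in (0.2, 0.4, 0.6, 0.8):
        print(f"change={change:.1f}  T={temp:.2f}  "
              f"P={math.exp(-change / temp):.9f}")
```

As the output shows, at T = 0.95 even a change of 0.8 is accepted with probability about 0.43, while at T = 0.1 the same change is accepted with probability about 0.0003: high temperature tolerates bad moves, low temperature almost never does.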

12. Simulated Annealing Flowchart
    [Figure: flowchart; labels include "parent solution = new solution"
    and "Monte Carlo number".]

    Local Beam Search
    - Keep track of k states (see the sketch below).
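One plausible reading of "keep track of k states" is the following sketch, which pools the successors of all k states and keeps the k best at every step; `value` and `successors` are assumed problem-specific helpers, not part of the lecture:

```python
import heapq

def local_beam_search(start_states, value, successors, max_iters=1000):
    """Keep the k best states at every step, where k = len(start_states).
    `value(s)` scores a state; `successors(s)` yields its neighbors."""
    beam = list(start_states)
    k = len(beam)
    for _ in range(max_iters):
        # Pool the successors of every state in the beam...
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            break
        # ...then keep only the k highest-valued candidates.
        candidates = heapq.nlargest(k, pool, key=value)
        if max(map(value, candidates)) <= max(map(value, beam)):
            break                          # no improvement anywhere: stop
        beam = candidates
    return max(beam, key=value)
```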

13. Online Search in Continuous Space
    - For instance, f(x1, y1, x2, y2, x3, y3) to be minimized

    Solving TSP using SA
    - Visit all the cities (do not skip any city)
    - Do not visit a city twice
    - Shortest path
    - Convert it to: moving the elements of a fixed-size list
      - List = the path that should be taken (list of cities in order)
      (a sketch follows below)
    - http://www.hermetic.ch/misc/ts3/ts3demo.htm
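Here is a minimal sketch of the list encoding and a neighbor move for TSP; the two-city swap move and the coordinate-based length function are illustrative assumptions:

```python
import math
import random

def tour_length(tour, coords):
    """Length of the closed tour; coords maps each city to an (x, y) pair."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def random_successor(tour):
    """The slide's move: rearrange elements of a fixed-size list by
    swapping two cities. Every city stays in the list exactly once, so
    'visit all cities' and 'no city twice' hold by construction."""
    i, j = random.sample(range(len(tour)), 2)
    nxt = tour[:]
    nxt[i], nxt[j] = nxt[j], nxt[i]
    return nxt
```

This plugs into the annealing loop sketched earlier with E(tour) = -tour_length(tour, coords), since that loop maximizes E while we want the shortest tour.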

14. Summary
    - Best-first search:
    - Greedy search:

    Summary
    - A* search = best-first search with measure f(n) = g(n) + h(n):
      path cost so far plus estimated path cost to the goal.
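For reference, a compact sketch of that measure in code; the `successors(n)` interface yielding (neighbor, step_cost) pairs is an assumption:

```python
import heapq
import itertools

def a_star(start, goal, successors, h):
    """Best-first search ordered by f(n) = g(n) + h(n): path cost so far
    plus estimated cost to the goal."""
    tie = itertools.count()            # tie-breaker so states never compare
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + h(nbr), next(tie), g2, nbr, path + [nbr]))
    return None, float("inf")
```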

15. Summary
    - The time complexity of heuristic algorithms depends on the quality
      of the heuristic function.
    - Good heuristics can sometimes be constructed by examining the
      problem definition or by generalizing from experience with the
      problem class.

    Summary
    - Iterative improvement algorithms keep only a single state in
      memory.
    - They can get stuck in local extrema; simulated annealing provides
      a way to escape local extrema, and is complete and optimal given a
      slow enough cooling schedule.
