1. Informed Search. Philipp Koehn, 24 September 2015

2. Heuristic
● From Wikipedia: any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals

3. Outline
● Best-first search
● A* search
● Heuristic algorithms
  – hill-climbing
  – simulated annealing
  – genetic algorithms (briefly)
  – local search in continuous spaces (very briefly)

4. best-first search

5. Review: Tree Search

  function TREE-SEARCH(problem, fringe) returns a solution, or failure
      fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
      loop do
          if fringe is empty then return failure
          node ← REMOVE-FRONT(fringe)
          if GOAL-TEST[problem] applied to STATE(node) succeeds then return node
          fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

● The search space is in the form of a tree
● The strategy is defined by picking the order of node expansion
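As a concrete illustration, here is a minimal Python sketch of the TREE-SEARCH skeleton above. The problem interface (initial_node, is_goal, expand) and the pop parameter are assumptions made for this sketch, not part of the slides; pop decides which fringe node is expanded next and therefore defines the strategy.

```python
from collections import deque

def tree_search(problem, pop):
    """Generic TREE-SEARCH: the choice of `pop` (which fringe node to
    expand next) is what defines the search strategy."""
    fringe = deque([problem.initial_node()])
    while fringe:                              # empty fringe -> failure
        node = pop(fringe)                     # REMOVE-FRONT / strategy choice
        if problem.is_goal(node):              # goal test on the chosen node
            return node
        fringe.extend(problem.expand(node))    # INSERT-ALL(EXPAND(node, problem))
    return None

# breadth-first: tree_search(p, lambda f: f.popleft())
# depth-first:   tree_search(p, lambda f: f.pop())
```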

6. Best-First Search
● Idea: use an evaluation function for each node, an estimate of its “desirability”
  ⇒ expand the most desirable unexpanded node
● Implementation: the fringe is a queue sorted in decreasing order of desirability
● Special cases
  – greedy search
  – A* search
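One way to implement the sorted fringe is a priority queue keyed on the evaluation function. The sketch below assumes the same hypothetical problem interface as above and that a lower f value means a more desirable node.

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Best-first search: always expand the unexpanded node with the best
    (here: lowest) value of the evaluation function f.
    Greedy search uses f = h; A* uses f = g + h."""
    tie = itertools.count()            # tie-breaker so heapq never compares nodes
    start = problem.initial_node()
    fringe = [(f(start), next(tie), start)]
    while fringe:
        _, _, node = heapq.heappop(fringe)
        if problem.is_goal(node):
            return node
        for child in problem.expand(node):
            heapq.heappush(fringe, (f(child), next(tie), child))
    return None
```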

7. Romania (map of the Romanian cities and the roads connecting them)

8. Romania with Step Costs in km (the same map, annotated with road distances in km)

9. Greedy Search
● Evaluation function h(n) (heuristic) = estimate of the cost from n to the closest goal
● E.g., h_SLD(n) = straight-line distance from n to Bucharest
● Greedy search expands the node that appears to be closest to the goal
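The following sketch runs greedy search on a small excerpt of the Romania map; the road distances and straight-line distances follow the standard textbook figure (approximate), and the graph is deliberately incomplete.

```python
import heapq

# Excerpt of the Romania road map (km) and straight-line distances to Bucharest.
ROADS = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara":      {"Arad": 118},
    "Zerind":         {"Arad": 75},
    "Bucharest":      {},
}
H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

def greedy_search(start, goal):
    """Always expand the city that appears closest to the goal, i.e. lowest h(n)."""
    fringe = [(H_SLD[start], [start])]
    while fringe:
        _, path = heapq.heappop(fringe)
        city = path[-1]
        if city == goal:
            return path
        for neighbour in ROADS[city]:
            heapq.heappush(fringe, (H_SLD[neighbour], path + [neighbour]))
    return None

print(greedy_search("Arad", "Bucharest"))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] (450 km), not the optimal 418 km route
```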

10.-13. Greedy Search Example (sequence of figures stepping through the expansion of the search tree, always extending the node with the lowest h_SLD)

14. Properties of Greedy Search
● Complete? No, it can get stuck in loops, e.g., with Oradea as goal: Iasi → Neamt → Iasi → Neamt → ...
  It is complete in a finite space with repeated-state checking
● Time? O(b^m), but a good heuristic can give dramatic improvement
● Space? O(b^m), keeps all nodes in memory
● Optimal? No

15. a* search

16. A* Search
● Idea: avoid expanding paths that are already expensive
● Evaluation function f(n) = g(n) + h(n)
  – g(n) = cost so far to reach n
  – h(n) = estimated cost to goal from n
  – f(n) = estimated total cost of the path through n to the goal
● A* search uses an admissible heuristic
  – i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n
  – also require h(n) ≥ 0, so h(G) = 0 for any goal G
● E.g., h_SLD(n) never overestimates the actual road distance
● Theorem: A* search is optimal
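A corresponding sketch of A*, under the same assumptions as before (the graph given as a nested dict of step costs, lowest f = g + h expanded first); with an admissible heuristic, the first goal taken off the queue is an optimal solution.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph: expand the fringe node with the lowest
    f(n) = g(n) + h(n).  With an admissible h, the first goal popped is optimal."""
    fringe = [(h(start), 0, [start])]              # entries are (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return g, path
        for neighbour, step_cost in graph[node].items():
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(neighbour), g2, path + [neighbour]))
    return None

# Using the ROADS and H_SLD tables from the greedy sketch above:
# a_star(ROADS, H_SLD.get, "Arad", "Bucharest")
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```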

17.-27. A* Search Example (sequence of figures stepping through the expansion of the search tree, always extending the node with the lowest f = g + h)

28. Optimality of A* (Standard Proof)
● Suppose some suboptimal goal G2 has been generated and is in the queue
● Let n be an unexpanded node on a shortest path to an optimal goal G1
    f(G2) = g(G2)    since h(G2) = 0
          > g(G1)    since G2 is suboptimal
          ≥ f(n)     since h is admissible
  (last step: f(n) = g(n) + h(n) ≤ g(n) + h*(n) = g(G1), because n lies on an optimal path to G1)
● Since f(G2) > f(n), A* will never terminate at G2

29. Optimality of A* (More Useful)
● Lemma: A* expands nodes in order of increasing f value (assuming a consistent heuristic; see the proof of the lemma below)
● Gradually adds “f-contours” of nodes (cf. breadth-first search adds layers)
● Contour i contains all nodes with f = f_i, where f_i < f_{i+1}

30. Properties of A*
● Complete? Yes, unless there are infinitely many nodes with f ≤ f(G)
● Time? Exponential in [relative error in h × length of solution]
● Space? Keeps all nodes in memory
● Optimal? Yes, it cannot expand f_{i+1} until f_i is finished
  – A* expands all nodes with f(n) < C*
  – A* expands some nodes with f(n) = C*
  – A* expands no nodes with f(n) > C*

31. Proof of Lemma: Consistency
● A heuristic is consistent if h(n) ≤ c(n, a, n′) + h(n′) for every successor n′ of n reached by an action a
● If h is consistent, we have
    f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n)
● I.e., f(n) is nondecreasing along any path

32. Admissible Heuristics
● E.g., for the 8-puzzle (start state S shown in the figure)
  – h1(n) = number of misplaced tiles
  – h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
● h1(S) = ?
● h2(S) = ?

33. Admissible Heuristics
● E.g., for the 8-puzzle
  – h1(n) = number of misplaced tiles
  – h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
● h1(S) = ? 6
● h2(S) = ? 4+0+3+3+1+0+2+1 = 14
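Both heuristics are cheap to compute. The sketch below encodes 3x3 states as 9-tuples read row by row with 0 for the blank and assumes a goal with the blank in the last square; with the start state 7 2 4 / 5 _ 6 / 8 3 1 it reproduces the values 6 and 14 quoted above (assuming that is indeed the configuration shown in the figure).

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, encoded as 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal square (3x3 board)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)      # assumed goal: blank in the last square
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # 7 2 4 / 5 _ 6 / 8 3 1
print(h1(start, goal), h2(start, goal))  # -> 6 14
```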

34. Dominance
● If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search
● Typical search costs:
  – d = 14: IDS = 3,473,941 nodes; A*(h1) = 539 nodes; A*(h2) = 113 nodes
  – d = 24: IDS ≈ 54,000,000,000 nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
● Given any admissible heuristics ha, hb, h(n) = max(ha(n), hb(n)) is also admissible and dominates both ha and hb (see the sketch below)
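The max-combination is a one-liner; the sketch below assumes heuristics are given as functions of a state and simply takes their pointwise maximum.

```python
def combine_admissible(*heuristics):
    """Pointwise maximum of admissible heuristics: each input is a lower bound
    on the true cost, so the maximum is too, and it dominates every input."""
    return lambda state: max(h(state) for h in heuristics)

# hypothetical usage with the 8-puzzle sketch above:
# h = combine_admissible(lambda s: h1(s, goal), lambda s: h2(s, goal))
```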

35. Relaxed Problems
● Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem
● If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution
● If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution
● Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem

36. Relaxed Problems
● Well-known example: the travelling salesperson problem (TSP)
● Find the shortest tour visiting all cities exactly once
● The minimum spanning tree
  – can be computed in O(n^2)
  – is a lower bound on the shortest (open) tour (see the sketch below)
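A sketch of this lower bound using Prim's algorithm on a complete distance matrix, which runs in O(n^2); the 4-city distance matrix at the end is a made-up example.

```python
def mst_cost(dist):
    """Prim's algorithm on a complete distance matrix, O(n^2): the weight of
    the minimum spanning tree is a lower bound on the cost of the shortest
    (open) tour that visits every city."""
    n = len(dist)
    in_tree = [False] * n
    best = [float("inf")] * n          # cheapest edge connecting each city to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((c for c in range(n) if not in_tree[c]), key=lambda c: best[c])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
    return total

# Hypothetical 4-city example (symmetric distance matrix):
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(mst_cost(dist))   # 9.0 = 2 + 4 + 3, no greater than the cost of any tour
```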

37. Summary: A*
● Heuristic functions estimate costs of shortest paths
● Good heuristics can dramatically reduce search cost
● Greedy best-first search expands the node with the lowest h
  – incomplete and not always optimal
● A* search expands the node with the lowest g + h
  – complete and optimal
  – also optimally efficient (up to tie-breaking, for forward search)
● Admissible heuristics can be derived from the exact solution of relaxed problems

38. iterative improvement algorithms
