  1. Informed search algorithms (Chapter 4, Sections 1–2)

  2. Outline
     ♦ Best-first search
     ♦ A∗ search
     ♦ Heuristics

  3. Review: Tree search
     function Tree-Search(problem, fringe) returns a solution, or failure
        fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
        loop do
           if fringe is empty then return failure
           node ← Remove-Front(fringe)
           if Goal-Test[problem] applied to State(node) succeeds then return node
           fringe ← InsertAll(Expand(node, problem), fringe)
     A strategy is defined by picking the order of node expansion
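     A minimal Python sketch of this skeleton (not the slides' code; it assumes a problem object
     exposing initial_state, goal_test(state), and successors(state) yielding (next_state, step_cost)
     pairs — all of that naming is illustrative):

        from collections import deque

        class Node:
            # Search node: a state plus the parent pointer and path cost used on later slides.
            def __init__(self, state, parent=None, g=0):
                self.state, self.parent, self.g = state, parent, g

        def tree_search(problem, fringe):
            # Generic tree search: the order in which the fringe yields nodes defines the strategy.
            fringe.append(Node(problem.initial_state))        # Insert(Make-Node(Initial-State))
            while fringe:                                     # loop while the fringe is non-empty
                node = fringe.popleft()                       # Remove-Front(fringe)
                if problem.goal_test(node.state):             # Goal-Test applied to State(node)
                    return node
                fringe.extend(Node(s, node, node.g + c)       # InsertAll(Expand(node, problem))
                              for s, c in problem.successors(node.state))
            return None                                       # failure

     With a FIFO fringe, tree_search(problem, deque()) behaves as breadth-first search.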

  4. Best-first search
     Idea: use an evaluation function for each node – estimate of “desirability”
     ⇒ Expand most desirable unexpanded node
     Implementation: fringe is a queue sorted in decreasing order of desirability
     Special cases: greedy search, A∗ search
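     A sketch of best-first search with a priority queue keyed on an evaluation function f
     (an illustration under the same assumed problem/Node interface as the previous sketch;
     lowest f first, which is the same as decreasing desirability):

        import heapq, itertools

        def best_first_search(problem, f):
            # Always expand the most desirable (lowest-f) unexpanded node.
            tie = itertools.count()                  # tie-breaker so heapq never compares Nodes
            root = Node(problem.initial_state)
            frontier = [(f(root), next(tie), root)]
            while frontier:
                _, _, node = heapq.heappop(frontier)
                if problem.goal_test(node.state):
                    return node
                for s, c in problem.successors(node.state):
                    child = Node(s, node, node.g + c)
                    heapq.heappush(frontier, (f(child), next(tie), child))
            return None

     Greedy search and A∗ search then differ only in the f they pass in, as the following slides show.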

  5. Romania with step costs in km
     [Figure: road map of Romania annotated with step costs in km, alongside a table of
     straight-line distances to Bucharest, e.g. Arad 366, Sibiu 253, Timisoara 329, Zerind 374,
     Fagaras 176, Oradea 380, Rimnicu Vilcea 193, Pitesti 100, Craiova 160, Bucharest 0.]

  6. Greedy search
     Evaluation function h(n) (heuristic) = estimate of cost from n to the closest goal
     E.g., h_SLD(n) = straight-line distance from n to Bucharest
     Greedy search expands the node that appears to be closest to the goal
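     With the best-first sketch above, greedy search just plugs in h_SLD as f (a sketch; the
     dictionary lists only the straight-line distances quoted on these slides, and the problem
     object is assumed to follow the interface sketched earlier):

        # Straight-line distances to Bucharest quoted on these slides.
        h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
                 "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193,
                 "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

        def greedy_search(problem):
            # Greedy best-first search: rank nodes by h alone, ignoring the path cost so far.
            return best_first_search(problem, f=lambda node: h_sld[node.state])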

  7. Greedy search example
     Start: Arad (h = 366)

  8. Greedy search example
     Expand Arad: Sibiu (h = 253), Timisoara (h = 329), Zerind (h = 374)

  9. Greedy search example
     Expand Sibiu: Arad (h = 366), Fagaras (h = 176), Oradea (h = 380), Rimnicu Vilcea (h = 193)

  10. Greedy search example
     Expand Fagaras: Sibiu (h = 253), Bucharest (h = 0); Bucharest is the goal, so greedy search stops

  11. Properties of greedy search
     Complete??

  12. Properties of greedy search
     Complete?? No: can get stuck in loops, e.g., with Oradea as goal, Iasi → Neamt → Iasi → Neamt → ...
     Complete in finite space with repeated-state checking
     Time??

  13. Properties of greedy search
     Complete?? No: can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
     Complete in finite space with repeated-state checking
     Time?? O(b^m), where b is the branching factor and m the maximum depth of the search space,
            but a good heuristic can give dramatic improvement
     Space??

  14. Properties of greedy search
     Complete?? No: can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
     Complete in finite space with repeated-state checking
     Time?? O(b^m), but a good heuristic can give dramatic improvement
     Space?? O(b^m): keeps all nodes in memory
     Optimal??

  15. Properties of greedy search
     Complete?? No: can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...
     Complete in finite space with repeated-state checking
     Time?? O(b^m), but a good heuristic can give dramatic improvement
     Space?? O(b^m): keeps all nodes in memory
     Optimal?? No

  16. A∗ search
     Idea: avoid expanding paths that are already expensive
     Evaluation function f(n) = g(n) + h(n)
        g(n) = cost so far to reach n
        h(n) = estimated cost to goal from n
        f(n) = estimated total cost of path through n to goal
     A∗ search uses an admissible heuristic, i.e., h(n) ≤ h∗(n) where h∗(n) is the true cost from n.
     (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.)
     E.g., h_SLD(n) never overestimates the actual road distance
     Theorem: A∗ search is optimal
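     In the framework sketched earlier, A∗ is just best-first search with f(n) = g(n) + h(n)
     (again a sketch, not the slides' code; g is the path cost stored on each Node, and h is any
     heuristic function of a state):

        def a_star_search(problem, h):
            # A*: f(n) = g(n) + h(n) = cost so far plus estimated cost to the goal.
            return best_first_search(problem, f=lambda node: node.g + h(node.state))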

  17. A∗ search example
     Start: Arad (f = 366 = 0 + 366)

  18. A∗ search example
     Expand Arad: Sibiu (393 = 140 + 253), Timisoara (447 = 118 + 329), Zerind (449 = 75 + 374)

  19. A∗ search example
     Expand Sibiu: Arad (646 = 280 + 366), Fagaras (415 = 239 + 176), Oradea (671 = 291 + 380),
                   Rimnicu Vilcea (413 = 220 + 193)

  20. A∗ search example
     Expand Rimnicu Vilcea: Craiova (526 = 366 + 160), Pitesti (417 = 317 + 100), Sibiu (553 = 300 + 253)

  21. A∗ search example
     Expand Fagaras: Sibiu (591 = 338 + 253), Bucharest (450 = 450 + 0)

  22. A∗ search example
     Expand Pitesti: Bucharest (418 = 418 + 0), Craiova (615 = 455 + 160), Rimnicu Vilcea (607 = 414 + 193)
     Bucharest with f = 418 is now the cheapest node on the fringe, so A∗ returns the path
     Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest with cost 418
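     For concreteness, this example can be reproduced with the sketches from the earlier slides.
     The road costs below are the ones implied by the g-values on slides 18–22, and the small
     RouteProblem class is illustrative scaffolding, not from the slides:

        roads = {  # step costs in km, reconstructed from the g-values in this example
            "Arad":           [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
            "Sibiu":          [("Arad", 140), ("Fagaras", 99), ("Oradea", 151), ("Rimnicu Vilcea", 80)],
            "Rimnicu Vilcea": [("Craiova", 146), ("Pitesti", 97), ("Sibiu", 80)],
            "Fagaras":        [("Sibiu", 99), ("Bucharest", 211)],
            "Pitesti":        [("Bucharest", 101), ("Craiova", 138), ("Rimnicu Vilcea", 97)],
        }

        class RouteProblem:
            initial_state = "Arad"
            def goal_test(self, state):
                return state == "Bucharest"
            def successors(self, state):
                return roads.get(state, [])    # cities with no listed roads are dead ends here

        goal = a_star_search(RouteProblem(), h=lambda s: h_sld[s])
        # goal.g == 418; following parent pointers gives
        # Bucharest <- Pitesti <- Rimnicu Vilcea <- Sibiu <- Arad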

  23. Optimality of A∗ (standard proof)
     Suppose some suboptimal goal G2 has been generated and is in the queue.
     Let n be an unexpanded node on a shortest path to an optimal goal G1.
     [Figure: search tree with Start at the root, n on the path to the optimal goal G1, and G2 elsewhere in the queue]
     f(G2) = g(G2)   since h(G2) = 0
           > g(G1)   since G2 is suboptimal
           ≥ f(n)    since h is admissible (n lies on an optimal path to G1, so g(G1) ≥ g(n) + h(n) = f(n))
     Since f(G2) > f(n), A∗ will never select G2 for expansion

  24. Optimality of A∗ (more useful)
     Lemma: A∗ expands nodes in order of increasing f value
     Gradually adds “f-contours” of nodes (cf. breadth-first adds layers)
     Contour i has all nodes with f = f_i, where f_i < f_{i+1}
     [Figure: f-contours at 380, 400, and 420 drawn over the Romania map]

  25. Properties of A∗
     Complete??

  26. Properties of A∗
     Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
     Time??

  27. Properties of A∗
     Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
     Time?? Exponential in [relative error in h × length of solution]
     Space??

  28. Properties of A∗
     Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
     Time?? Exponential in [relative error in h × length of solution]
     Space?? Keeps all nodes in memory
     Optimal??

  29. Properties of A∗
     Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
     Time?? Exponential in [relative error in h × length of solution]
     Space?? Keeps all nodes in memory
     Optimal?? Yes: cannot expand f_{i+1} until f_i is finished
     Writing C∗ for the cost of the optimal solution:
        A∗ expands all nodes with f(n) < C∗
        A∗ expands some nodes with f(n) = C∗
        A∗ expands no nodes with f(n) > C∗

  30. Proof of lemma: Consistency
     A heuristic is consistent if h(n) ≤ c(n, a, n′) + h(n′)
     [Figure: node n, its successor n′ reached by action a at cost c(n, a, n′), and the goal G,
     with h(n) and h(n′) shown as estimates to G]
     If h is consistent, we have
        f(n′) = g(n′) + h(n′)
              = g(n) + c(n, a, n′) + h(n′)
              ≥ g(n) + h(n)
              = f(n)
     I.e., f(n) is nondecreasing along any path.
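     Consistency is easy to sanity-check in code (a sketch; it reuses the roads/h_sld data from
     the A∗ example, so it only checks the fragment of the map shown on these slides):

        def is_consistent(h, edges):
            # Check h(n) <= c(n, a, n') + h(n') for every listed edge (n, n', cost).
            return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

        edges = [(a, b, c) for a, nbrs in roads.items() for b, c in nbrs]
        print(is_consistent(h_sld, edges))   # -> True on this fragment of the Romania map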

  31. Admissible heuristics
     E.g., for the 8-puzzle:
        h1(n) = number of misplaced tiles
        h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)
     Start state: 7 2 4 / 5 _ 6 / 8 3 1        Goal state: 1 2 3 / 4 5 6 / 7 8 _
     h1(S) = ??
     h2(S) = ??

  32. Admissible heuristics
     E.g., for the 8-puzzle:
        h1(n) = number of misplaced tiles
        h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)
     Start state: 7 2 4 / 5 _ 6 / 8 3 1        Goal state: 1 2 3 / 4 5 6 / 7 8 _
     h1(S) = 6
     h2(S) = 4+0+3+3+1+0+2+1 = 14
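     Both heuristics are a few lines of Python (a sketch, not the slides' code; a state is a tuple
     of nine entries read row by row with 0 for the blank, and it reproduces h1(S) = 6 and
     h2(S) = 14 for the start state above):

        GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # goal state, 0 is the blank

        def h1(state):
            # Number of misplaced tiles (the blank does not count).
            return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

        def h2(state):
            # Total Manhattan distance of each tile from its goal square.
            pos = {tile: (i // 3, i % 3) for i, tile in enumerate(state)}
            goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(GOAL)}
            return sum(abs(pos[t][0] - goal_pos[t][0]) + abs(pos[t][1] - goal_pos[t][1])
                       for t in range(1, 9))

        start = (7, 2, 4, 5, 0, 6, 8, 3, 1)          # start state from the slide
        print(h1(start), h2(start))                  # -> 6 14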

  33. Dominance
     If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search
     Typical search costs:
        d = 14:  IDS = 3,473,941 nodes;          A∗(h1) = 539 nodes;      A∗(h2) = 113 nodes
        d = 24:  IDS ≈ 54,000,000,000 nodes;     A∗(h1) = 39,135 nodes;   A∗(h2) = 1,641 nodes
     Given any admissible heuristics ha, hb, h(n) = max(ha(n), hb(n)) is also admissible and dominates ha and hb
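     The last point is one line of code (a sketch; ha and hb stand for any admissible heuristics,
     such as the h1 and h2 defined above):

        def h_max(state, heuristics=(h1, h2)):
            # The max of admissible heuristics is admissible and dominates each component.
            return max(h(state) for h in heuristics)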

  34. Relaxed problems
     Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem
     If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution
     If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution
     Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem

  35. Relaxed problems contd.
     Well-known example: travelling salesperson problem (TSP)
     Find the shortest tour visiting all cities exactly once
     Minimum spanning tree can be computed in O(n^2) and is a lower bound on the shortest (open) tour
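     A sketch of the O(n^2) minimum-spanning-tree computation, via Prim's algorithm on a distance
     matrix (an illustration; dist is assumed to be a symmetric n × n list of city-to-city distances):

        def mst_cost(dist):
            # Prim's algorithm in O(n^2): grow the tree one cheapest vertex at a time.
            n = len(dist)
            in_tree = [False] * n
            best = [float("inf")] * n      # cheapest edge connecting each vertex to the tree so far
            best[0] = 0.0
            total = 0.0
            for _ in range(n):
                u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
                in_tree[u] = True
                total += best[u]
                for v in range(n):
                    if not in_tree[v] and dist[u][v] < best[v]:
                        best[v] = dist[u][v]
            return total                   # lower bound on the cost of any tour visiting all cities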
