

Informed Search
AI Class 5 (Ch. 3.5-3.7)
Dr. Cynthia Matuszek – CMSC 671

Based on slides by Dr. Marie desJardins. Some material also adapted from slides by Dr. Matuszek @ Villanova University, which are based on Hwee Tou Ng at Berkeley, which are based on Russell at Berkeley. Some diagrams are based on AIMA.

Bookkeeping
• Next lecture: Python for AI
• Eight decades of AI (okay, 4)

Today's Class
• Heuristic search
• Best-first search
  • Greedy search
  • Beam search
  • A, A*
• Memory-conserving variations of A*
• Heuristic functions

Weak vs. Strong Methods
• Weak methods: extremely general, not tailored to a specific situation
• Examples:
  • Means-ends analysis: compare the current situation and the goal, then look for ways to shrink the differences between the two
  • Space splitting: try to list possible solutions to a problem, then try to rule out classes of these possibilities
  • Subgoaling: split a large problem into several smaller ones that can be solved one at a time
• Called "weak" methods because they do not take advantage of more powerful domain-specific heuristics
• "An informed search strategy—one that uses problem-specific knowledge… can find solutions more efficiently than an uninformed strategy." – R&N pg. 92

Heuristic
Free On-line Dictionary of Computing*:
1. A rule of thumb, simplification, or educated guess
2. Reduces, limits, or guides search in particular domains
3. Does not guarantee feasible solutions; often used with no theoretical guarantee
WordNet (r) 1.6*:
1. Commonsense rule (or set of rules) intended to increase the probability of solving some problem
*Heavily edited for clarity

Heuristic Search
• Uninformed search is generic
  • Node selection depends only on the shape of the tree and the node expansion strategy
• Sometimes domain knowledge → better decisions
  • Knowledge about the specific problem

Heuristic Search
• Romania: Arad → Bucharest (for example)
• Eyeballing it → try certain cities first
  • They "look closer" to where we are going
• Can domain knowledge be captured in a heuristic?

Heuristics Examples
• 8-puzzle: # of tiles in the wrong place
• 8-puzzle (better): sum of distances from goal
  • Captures distance and number of nodes
• Romania: straight-line distance from start node S to Bucharest G
  • Captures "closer to Bucharest"

Heuristic Function
• All domain-specific knowledge is encoded in the heuristic function h
• h is some estimate of how desirable a move is
  • How "close" (we think) it gets us to our goal
• Usually:
  • h(n) ≥ 0: for all nodes n
  • h(n) = 0: n is a goal node
  • h(n) = ∞: n is a dead end (no goal can be reached from n)

Informed Methods Add Domain-Specific Information
• Goal: select the best path to continue searching
• Define h(n) to estimate the "goodness" of node n
  • h(n) = estimated cost (or distance) of minimal cost path from n to a goal state
• The heuristic function is:
  • An estimate of how close we are to a goal
  • Based on domain-specific information
  • Computable from the current state description

Example Search Space Revisited
[Figure: example search space with start state S, labeled arc costs, an h value at each node (the dead ends D and E have h = ∞), and goal state G with h = 0.]
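The two 8-puzzle heuristics above can be sketched in Python. This is a minimal illustration, not from the slides; the tuple-based board encoding is an assumption.

```python
# Two heuristic functions for the 8-puzzle. A state is a tuple of 9
# entries, read left-to-right, top-to-bottom; 0 marks the blank tile.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h_misplaced(state, goal=GOAL):
    """# of tiles in the wrong place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h_manhattan(state, goal=GOAL):
    """Sum of each tile's Manhattan distance from its goal position."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are 0 exactly at the goal; the Manhattan-distance version dominates the misplaced-tiles count, which is why the slides call it the better heuristic.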

Straight Lines to Bucharest (km): h_SLD(n)
[Figure: map of Romania with straight-line distances to Bucharest. R&N pg. 68, 93]

Admissible Heuristics
• Admissible heuristics never overestimate cost
  • They are optimistic – they think the goal is closer than it is
• h(n) ≤ h*(n)
  • where h*(n) is the true cost to reach the goal from n
• h_SLD(Lugoj) = 244
  • Can there be a shorter path?
• Using admissible heuristics guarantees that the first solution found will be optimal

Best-First Search
• A generic way of referring to informed methods
• Use an evaluation function f(n) for each node → estimate of "desirability"
• f(n) incorporates domain-specific information
• Different f(n) → different searches

Best-First Search (more)
• Order nodes on the list by increasing value of f(n)
• Expand the most desirable unexpanded node
• Implementation: order nodes in the frontier in decreasing order of desirability
• Special cases:
  • Greedy best-first search
  • A* search

Greedy Best-First Search
• Idea: always choose the "closest node" to the goal
  • Most likely to lead to a solution quickly
• So, evaluate nodes based only on the heuristic function: f(n) = h(n)
  • Sort nodes by increasing values of f
  • Select the node believed to be closest to a goal node (hence "greedy")
  • That is, select the node with the smallest f value
• Not complete (why?)
• Admissible? Why not?
• Example: [figure: a graph with nodes a, b, c, d, e, g, h, i and h values on each node]
  • Greedy search will find: a → b → c → d → e → g; cost = 5
  • Optimal solution: a → g → h → i; cost = 3
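Greedy best-first search as described above can be sketched in Python. This is a minimal sketch; the adjacency-dict graph encoding is an assumption, and the toy graph below mirrors the slides' example with the second branch renamed (g2, h2) to keep node names unique, with an assumed value for h(a).

```python
import heapq

def greedy_best_first(start, goals, neighbors, h):
    """Greedy best-first search: order the frontier by f(n) = h(n) alone.
    Fast when h is good, but not admissible (it may return a suboptimal
    path) and not complete in general."""
    frontier = [(h(start), start, [start], 0)]   # (h, node, path, path cost)
    visited = set()
    while frontier:
        _, node, path, cost = heapq.heappop(frontier)
        if node in goals:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for succ, step in neighbors(node):
            if succ not in visited:
                heapq.heappush(frontier, (h(succ), succ, path + [succ], cost + step))
    return None

# Toy graph modeled on the slides' example (unit step costs; h(a) assumed):
graph = {'a': [('b', 1), ('g2', 1)], 'b': [('c', 1)], 'c': [('d', 1)],
         'd': [('e', 1)], 'e': [('g', 1)], 'g2': [('h2', 1)], 'h2': [('i', 1)]}
hv = {'a': 3, 'b': 2, 'c': 1, 'd': 1, 'e': 1, 'g': 0, 'g2': 4, 'h2': 1, 'i': 0}
# Greedy follows the tempting h = 2 branch: a → b → c → d → e → g, cost 5,
# even though a → g2 → h2 → i costs only 3.
```

The search commits to b because h(b) = 2 looks better than h(g2) = 4, illustrating why greedy search is not admissible.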

Straight Lines to Bucharest (km): h_SLD(n)
[Figure: Romania map, repeated, with straight-line distances to Bucharest; start S and goal G marked. R&N pg. 68, 93]

Greedy Best-First Search: Ex. 1
• What can we say about the search space?
[Figure: greedy best-first expansion on the Romania map.]

Greedy Best-First Search: Ex. 2
[Figures: four successive expansion steps of greedy best-first search on the Romania map, with h_SLD(n) values.]

Beam Search
• Use an evaluation function f(n) = h(n), but the maximum size of the nodes list is k, a fixed constant
• Only keeps the k best nodes as candidates for expansion, and throws the rest away
• More space-efficient than greedy search, but may throw away a node that is on a solution path
• Not complete
• Not admissible

Algorithm A
• Use evaluation function f(n) = g(n) + h(n)
  • g(n) = minimal-cost path from S to state n
• Ranks nodes on the search frontier by estimated cost of solution
  • From the start node, through the given node, to the goal
• Not complete if h(n) can = ∞
• Not admissible

Example Search Space Revisited
[Figures: the example search space, now annotated with parent pointers and a g value at each node in addition to arc costs and h values; start state S has g = 0, goal state G has h = 0.]

Algorithm A (steps)
1. Put the start node S on the nodes list, called OPEN
2. If OPEN is empty, exit with failure
3. Select the node n in OPEN with minimal f(n) and place it on CLOSED
4. If n is a goal node, collect the path back to the start; terminate
5. Expand n, generating all its successors, and attach to them pointers back to n. For each successor n' of n:
   1. If n' is not already on OPEN or CLOSED:
      • put n' on OPEN
      • compute h(n'), g(n') = g(n) + c(n, n'), f(n') = g(n') + h(n')
   2. If n' is already on OPEN or CLOSED and g(n') is lower for the new version of n', then:
      • redirect pointers backward from n' along the path yielding the lower g(n')
      • put n' on OPEN
[Figure: a worked step on the example space: g(D) = 4, h(D) = 9; C is chosen next to expand.]
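The OPEN/CLOSED bookkeeping in the numbered steps above can be sketched in Python. This is a minimal sketch, not the slides' code; the dict-based graph encoding and the tiny example graph are assumptions.

```python
def algorithm_a(start, goal, arcs, h):
    """Algorithm A: repeatedly expand the OPEN node with minimal
    f(n) = g(n) + h(n). arcs[n] is a dict {successor: c(n, n')};
    h[n] is the heuristic estimate. When h is admissible
    (h(n) <= h*(n)) this is A* and the returned path is optimal."""
    g = {start: 0}            # best known cost from start (step 1)
    parent = {start: None}    # backward pointers for path recovery
    OPEN, CLOSED = {start}, set()
    while OPEN:                                   # step 2: fail if OPEN empties
        n = min(OPEN, key=lambda x: g[x] + h[x])  # step 3: minimal f(n)
        OPEN.remove(n); CLOSED.add(n)
        if n == goal:                             # step 4: collect path back
            path = []
            while n is not None:
                path.append(n); n = parent[n]
            return path[::-1], g[goal]
        for n2, cost in arcs.get(n, {}).items():  # step 5: expand n
            g2 = g[n] + cost
            if n2 not in OPEN and n2 not in CLOSED:
                g[n2], parent[n2] = g2, n         # step 5.1: new node
                OPEN.add(n2)
            elif g2 < g[n2]:                      # step 5.2: cheaper path found
                g[n2], parent[n2] = g2, n         # redirect backward pointer
                OPEN.add(n2); CLOSED.discard(n2)  # reopen n2
    return None

# A tiny example graph (assumed for illustration, not the slides' figure);
# the h values are admissible, so the result is optimal.
arcs = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'C': 5, 'G': 12},
        'B': {'C': 2}, 'C': {'G': 3}}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}
```

Note how step 5.2 matters here: B is first reached from S with g = 4, then re-reached through A with g = 3, so its pointer is redirected before it is expanded.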

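Beam search as described above differs from greedy search only in capping the frontier. A minimal sketch, assuming an acyclic toy graph (no visited set is kept for brevity):

```python
import heapq

def beam_search(start, goal, neighbors, h, k=2):
    """Beam search: f(n) = h(n) as in greedy search, but after each
    expansion round only the k best frontier nodes are kept and the
    rest are thrown away. Space-efficient, but not complete: a node
    on the only solution path may be among those discarded."""
    beam = [(h(start), start, [start])]
    while beam:
        next_level = []
        for _, node, path in beam:
            if node == goal:
                return path
            for succ in neighbors(node):
                next_level.append((h(succ), succ, path + [succ]))
        beam = heapq.nsmallest(k, next_level)  # keep the k best; discard the rest
    return None

# A toy graph (assumed for illustration): b looks closest (h = 1) but is
# a dead end; the only solution goes through c (h = 2).
graph = {'a': ['b', 'c'], 'c': ['g']}
hv = {'a': 3, 'b': 1, 'c': 2, 'g': 0}
```

With k = 1 the beam commits to b and the search fails; k = 2 keeps c alive and reaches the goal, illustrating why beam search is not complete.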
Some Observations on A
• Perfect heuristic: if h(n) = h*(n) for all n:
  • Only nodes on the optimal solution path will be expanded
  • No extra work will be performed
• Null heuristic: if h(n) = 0 for all n:
  • This is an admissible heuristic
  • A* acts like Uniform-Cost Search

Some Observations on A (more)
• Better heuristic: if h1(n) < h2(n) ≤ h*(n) for all non-goal nodes, h2 is a better heuristic than h1
  • We say that A2* is better informed than A1*
  • If A1* uses h1 and A2* uses h2, then every node expanded by A2* is also expanded by A1*
  • So A1* expands at least as many nodes as A2*
• The closer h is to h*, the fewer extra nodes will be expanded

A* Search
• Avoid expanding paths that are already expensive
• Combines costs-so-far with expected-costs
• A* is complete iff:
  • Branching factor is finite
  • Every operator has a fixed positive cost
• A* is admissible iff:
  • h(n) is admissible

Quick Terminology Check
• What is f(n)? An evaluation function that gives a cost estimate of a path through n
• What is h*(n)? A function that gives the true cost to reach the goal from n (the distance from n to G)
  • Why don't we just use that?
• What is h(n)? A heuristic function that encodes domain knowledge about the search space
• What is g(n)? The path cost of getting from S to n; it describes the "spent" costs of the current search

A* Search (more)
• Idea: evaluate nodes by combining g(n), the cost of reaching the node, with h(n), the cost of getting from the node to the goal
• Evaluation function f(n) = g(n) + h(n)
  • g(n) = cost so far to reach n
  • h(n) = estimated cost from n to goal
  • f(n) = estimated total cost of path through n to goal

A* Example 1
[Figure: the example search space with start S (g = 0), arc costs, h values, and goal G (h = 0).]
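The observation that a better-informed heuristic expands fewer nodes can be checked empirically. A sketch on an assumed toy domain (a 10×10 unit-cost grid; the function names and the tie-breaking choice are assumptions): A* with the null heuristic behaves like Uniform-Cost Search and expands almost every cell, while the Manhattan-distance heuristic reaches the same optimal cost after far fewer expansions.

```python
import heapq

def astar_expansions(start, goal, neighbors, h):
    """A* that also counts node expansions, to compare heuristics.
    Frontier entries are (f, -g, node): ties on f prefer larger g.
    Returns (cost_to_goal, number_of_expanded_nodes)."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    closed = set()
    expanded = 0
    while frontier:
        f, neg_g, node = heapq.heappop(frontier)
        g = -neg_g
        if node in closed:
            continue                      # stale duplicate entry
        if node == goal:
            return g, expanded
        closed.add(node)
        expanded += 1
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), -g2, succ))
    return None

# 10x10 grid, unit step costs, 4-connected moves (an assumed toy domain)
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 10 and 0 <= y + dy < 10:
            yield (x + dx, y + dy), 1

goal = (9, 9)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
null_h = lambda p: 0

cost1, n_null = astar_expansions((0, 0), goal, grid_neighbors, null_h)     # UCS
cost2, n_info = astar_expansions((0, 0), goal, grid_neighbors, manhattan)  # A*
# Both find the optimal cost (18); the informed run expands fewer nodes.
```

Since 0 ≤ Manhattan(n) ≤ h*(n) everywhere on the grid, both runs return the optimal cost; only the amount of extra work differs, matching the slide's claim.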

A* Example 1 (continued)
[Figures: successive expansion steps of A* on the example search space.]

Algorithm A*
• Algorithm A with the constraint that h(n) ≤ h*(n)
  • h*(n) = true cost of the minimal cost path from n to a goal
  • Therefore, h(n) is an underestimate of the distance to the goal
• h() is admissible when h(n) ≤ h*(n)
  • Guarantees optimality
• A* is complete whenever the branching factor is finite, and every operator has a fixed positive cost
• A* is admissible
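The Romania route-finding example can be run end to end with the straight-line-distance heuristic. A sketch: road and straight-line distances are taken from the AIMA map, but only the cities on the western routes are included here, a simplification.

```python
import heapq

# Road distances (km) between Romanian cities, from the AIMA map
# (western routes only; edges are bidirectional).
ROADS = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Zerind', 'Oradea'): 71, ('Oradea', 'Sibiu'): 151,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211,
    ('Rimnicu Vilcea', 'Pitesti'): 97, ('Rimnicu Vilcea', 'Craiova'): 146,
    ('Pitesti', 'Bucharest'): 101, ('Pitesti', 'Craiova'): 138,
    ('Timisoara', 'Lugoj'): 111, ('Lugoj', 'Mehadia'): 70,
    ('Mehadia', 'Drobeta'): 75, ('Drobeta', 'Craiova'): 120,
}
# Straight-line distance to Bucharest (km): the admissible heuristic h_SLD(n)
H_SLD = {'Arad': 366, 'Bucharest': 0, 'Craiova': 160, 'Drobeta': 242,
         'Fagaras': 176, 'Lugoj': 244, 'Mehadia': 241, 'Oradea': 380,
         'Pitesti': 100, 'Rimnicu Vilcea': 193, 'Sibiu': 253,
         'Timisoara': 329, 'Zerind': 374}

def neighbors(city):
    for (a, b), d in ROADS.items():
        if a == city: yield b, d
        if b == city: yield a, d

def astar(start, goal):
    """A*: f(n) = g(n) + h_SLD(n). h_SLD never overestimates road
    distance, so the first solution found is optimal."""
    frontier = [(H_SLD[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        for nxt, d in neighbors(city):
            g2 = g + d
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H_SLD[nxt], g2, nxt, path + [nxt]))
    return None
```

The shorter-looking Fagaras route (f = 450 at Bucharest) loses to the Rimnicu Vilcea–Pitesti route (f = 418), so A* returns the 418 km path, as in AIMA.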
