

CSE 473: Artificial Intelligence, Spring 2018
A* Search
Steve Tanimoto
With slides from: Dieter Fox, Dan Weld, Dan Klein, Stuart Russell, Andrew Moore, Luke Zettlemoyer

Today
- Heuristic search and the A* algorithm
- Heuristic design
- Graph search

Recap: Search
- Search problem:
  - States (configurations of the world)
  - Successor function: a function from states to lists of (state, action, cost) triples; drawn as a graph
  - Start state and goal test
- Search tree:
  - Nodes represent plans for reaching states
  - Plans have costs (the sum of action costs)
- Search algorithm:
  - Systematically builds a search tree
  - Chooses an ordering of the fringe (unexplored nodes)

Example: Pancake Problem
- Action: flip over the top n pancakes
- Cost: the number of pancakes flipped
- State space graph with costs as weights (a minimal encoding of the problem is sketched below)
[Figure: state space graph of pancake orderings, with flip costs as edge weights]
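A minimal sketch of the pancake problem as a search problem, assuming a state is a tuple of pancake sizes listed from top to bottom; the function names and the representation are illustrative, not from the slides.

    # Sketch: the pancake problem as a search problem.
    # Assumed representation: a state is a tuple of pancake sizes, top to bottom.

    def pancake_successors(state):
        """Return (next_state, action, cost) triples: flipping the top n pancakes costs n."""
        result = []
        for n in range(2, len(state) + 1):          # flipping a single pancake changes nothing
            flipped = tuple(reversed(state[:n])) + state[n:]
            result.append((flipped, "flip top %d" % n, n))
        return result

    def pancake_goal_test(state):
        """Goal: the stack is sorted with the smallest pancake on top."""
        return list(state) == sorted(state)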

General Tree Search
- Example: expanding the search tree for the pancake problem, with actions such as "flip top 3" and "flip all four"

Example: Heuristic Function
- Heuristic h(x): the largest pancake that is still out of place
- Path to reach the goal: flip four (cost 4), then flip three (cost 3); total cost: 7

What is a Heuristic?
- An estimate of how close a state is to a goal
- Designed for a particular search problem
- Examples: Manhattan distance: 10 + 5 = 15; Euclidean distance: 11.2 (both are sketched in code below)

Best First (Greedy) Search
- Strategy: expand the node that you think is closest to a goal state
- Heuristic: an estimate of the distance to the nearest goal for each state
- A common case: best-first takes you straight to the (wrong) goal
- Worst case: like a badly guided DFS
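The two example heuristics are just distance estimates between positions. A small sketch reproducing the slide's numbers (10 + 5 = 15 and roughly 11.2), with the grid coordinates assumed for illustration:

    import math

    def manhattan(p, q):
        """Manhattan distance between two (x, y) grid positions."""
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def euclidean(p, q):
        """Straight-line (Euclidean) distance between two (x, y) positions."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    print(manhattan((0, 0), (10, 5)))             # 15
    print(round(euclidean((0, 0), (10, 5)), 1))   # 11.2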

Greedy Search
- Expand the node that seems closest...
- What can go wrong?

A* Search: Combining UCS and Greedy
- Uniform-cost orders by path cost, or backward cost g(n)
- Greedy orders by goal proximity, or forward cost h(n)
- A* Search orders by the sum: f(n) = g(n) + h(n)
  (example due to Teg Grenager; a priority-queue sketch of A* follows below)

When should A* terminate?
- Should we stop when we enqueue a goal?
- No: only stop when we dequeue a goal
[Figure: a small graph with g and h annotations in which a goal is enqueued via a worse path before the better path has been found]

Is A* Optimal?
- What went wrong? The actual cost of the bad goal was less than the estimated cost of the good path
- We need estimates to be less than or equal to actual costs!
[Figure: graph with states S, A, G; h(S) = 7, h(A) = 6, h(G) = 0; the overestimate at A makes A* return the more expensive direct path to G instead of the cheaper path through A]

Admissible Heuristics
- A heuristic h is admissible (optimistic) if 0 <= h(n) <= h*(n), where h*(n) is the true cost to a nearest goal
- Coming up with admissible heuristics is most of what's involved in using A* in practice.
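A minimal priority-queue sketch of A* tree search, consistent with the slides: the fringe is ordered by f(n) = g(n) + h(n), and the search stops only when a goal is dequeued. The successor and goal-test interface matches the pancake sketch above; the details are illustrative, not the course's reference implementation.

    import heapq
    import itertools

    def astar_tree_search(start, successors, goal_test, h):
        """A* tree search: always expand the fringe node with the lowest f = g + h."""
        tie = itertools.count()                          # tie-breaker so states are never compared
        fringe = [(h(start), next(tie), start, [], 0)]   # entries: (f, tie, state, plan, g)
        while fringe:
            f, _, state, plan, g = heapq.heappop(fringe)
            if goal_test(state):                         # stop when a goal is dequeued, not enqueued
                return plan, g
            for nxt, action, cost in successors(state):
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(nxt), next(tie), nxt, plan + [action], g2))
        return None, float("inf")

With h returning 0 for every state this reduces to uniform cost search, matching the "UCS is a special case (h = 0)" point later in the deck.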

Optimality of A* Tree Search
- Assume:
  - A is an optimal goal node
  - B is a suboptimal goal node
  - h is admissible
- Claim: A will exit the fringe before B
- Proof:
  - Imagine B is on the fringe
  - Some ancestor n of A is on the fringe, too (maybe A itself!)
  - Claim: n will be expanded before B
    1. f(n) is less than or equal to f(A) (definition of f-cost; admissibility of h; h = 0 at a goal)
    2. f(A) is less than f(B) (B is suboptimal; h = 0 at a goal)
    3. So n expands before B
  - All ancestors of A expand before B, so A expands before B
  - Therefore A* tree search is optimal
  (the two inequalities are written out below)

UCS vs A* Contours
- Uniform cost expands equally in all directions
- A* expands mainly toward the goal, but hedges its bets to ensure optimality

Which Algorithm?
- Uniform cost search (UCS)
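The slide only names the justification for each step (definition of the f-cost, admissibility of h, h = 0 at a goal); written out with h*(n) denoting the true cost from n to a nearest goal, as in the admissibility slide, the two claims are:

    \begin{align*}
      f(n) &= g(n) + h(n)             && \text{definition of the } f\text{-cost} \\
           &\le g(n) + h^{*}(n)       && \text{admissibility: } h(n) \le h^{*}(n) \\
           &\le g(A) = f(A)           && n \text{ lies on an optimal path to the goal } A,\ h(A) = 0 \\[4pt]
      f(A) &= g(A) < g(B) = f(B)      && A \text{ is optimal, } B \text{ is suboptimal, } h = 0 \text{ at any goal}
    \end{align*}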

Which Algorithm?
- A*, Manhattan heuristic
- Best First / Greedy, Manhattan heuristic

Creating Admissible Heuristics
- Most of the work in solving hard search problems optimally is in coming up with admissible heuristics
- Often, admissible heuristics are solutions to relaxed problems, where new actions are available
- Inadmissible heuristics are often useful too

Creating Heuristics (8-puzzle)
- What are the states? How many states?
- What are the actions?
- What states can I reach from the start state?
- What should the costs be?

8 Puzzle I
- Heuristic: the number of tiles misplaced
- h(start) = 8
- Is it admissible?

  Average nodes expanded when the optimal path has length...
             ...4 steps   ...8 steps   ...12 steps
    UCS      112          6,300        3.6 x 10^6
    TILES    13           39           227

8 Puzzle II
- What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring the other tiles?
- Heuristic: total Manhattan distance
- h(start) = 3 + 1 + 2 + ... = 18
- Admissible?
- (both heuristics are sketched in code below)

  Average nodes expanded when the optimal path has length...
                 ...4 steps   ...8 steps   ...12 steps
    TILES        13           39           227
    MANHATTAN    12           25           73
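A sketch of the two 8-puzzle heuristics compared in the tables, assuming a state is a length-9 tuple read row by row with 0 for the blank; the goal layout below is an assumption for illustration.

    # Assumed goal layout, read row by row; 0 is the blank.
    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def misplaced_tiles(state):
        """Number of tiles out of place (the blank is not counted)."""
        return sum(1 for i, tile in enumerate(state)
                   if tile != 0 and tile != GOAL[i])

    def total_manhattan(state):
        """Sum over tiles of the grid distance from each tile to its goal position."""
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            j = GOAL.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total

Both come from relaxed problems, as the slide suggests: misplaced tiles relaxes the puzzle to "pick a tile up and place it", Manhattan distance relaxes it to "slide a tile through occupied squares".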

8 Puzzle III
- How about using the actual cost as a heuristic?
  - Would it be admissible?
  - Would we save on nodes expanded?
  - What's wrong with it?
- With A*: a trade-off between quality of estimate and work per node!

Trivial Heuristics, Dominance
- Dominance: h_a >= h_c if, for every node n, h_a(n) >= h_c(n)
- Heuristics form a semi-lattice:
  - Max of admissible heuristics is admissible
- Trivial heuristics:
  - Bottom of the lattice is the zero heuristic (what does this give us?)
  - Top of the lattice is the exact heuristic

A* Applications
- Pathing / routing problems
- Resource planning problems
- Robot motion planning
- Language analysis
- Machine translation
- Speech recognition
- ...

Tree Search: Extra Work!
- Failure to detect repeated states can cause exponentially more work. Why?

Graph Search
- In BFS, for example, we shouldn't bother expanding some nodes (which, and why?)
[Figure: a small state space graph and the corresponding search tree with repeated states]

Graph Search
- Idea: never expand a state twice
- How to implement:
  - Tree search + set of expanded states ("closed set")
  - Expand the search tree node-by-node, but before expanding a node, check that its state has never been expanded before
  - If not new, skip it; if new, add it to the closed set
- Hint: in Python, store the closed set as a set, not a list (see the sketch below)
- Can graph search wreck completeness? Why/why not?
- How about optimality?
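A sketch of the "tree search + closed set" idea from the slide, layered on the A* tree search sketch above; per the hint, the closed set is a Python set.

    import heapq
    import itertools

    def astar_graph_search(start, successors, goal_test, h):
        """A* with a closed set: never expand the same state twice."""
        tie = itertools.count()
        fringe = [(h(start), next(tie), start, [], 0)]
        closed = set()                               # states already expanded
        while fringe:
            f, _, state, plan, g = heapq.heappop(fringe)
            if goal_test(state):
                return plan, g
            if state in closed:                      # not new: skip it
                continue
            closed.add(state)                        # new: record it, then expand
            for nxt, action, cost in successors(state):
                g2 = g + cost
                heapq.heappush(fringe, (g2 + h(nxt), next(tie), nxt, plan + [action], g2))
        return None, float("inf")

As the next page's "A* Graph Search Gone Wrong" example shows, this is only guaranteed optimal when h is consistent, not merely admissible.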

A* Graph Search Gone Wrong
[Figure: a state space graph (S, A, B, C, G) and its search tree; with h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0, A* graph search closes C via the worse path through B (f = 3 + 1) before the better path through A (f = 2 + 1), and returns the goal with f = 6 + 0 instead of the optimal f = 5 + 0]

Consistency of Heuristics
- Main idea: estimated heuristic costs <= actual costs
- Admissibility: heuristic cost <= actual cost to the goal
  h(A) <= actual cost from A to G
- Consistency: heuristic "arc" cost <= actual cost for each arc
  h(A) - h(C) <= cost(A to C)
- Consequences of consistency:
  - The f value along a path never decreases:
    h(A) <= cost(A to C) + h(C), so f(A) = g(A) + h(A) <= g(A) + cost(A to C) + h(C) = f(C)
  - A* graph search is optimal
  (a small per-arc consistency check is sketched below)

Optimality
- Tree search:
  - A* is optimal if the heuristic is admissible (and non-negative)
  - UCS is a special case (h = 0)
- Graph search:
  - A* is optimal if the heuristic is consistent
  - UCS is optimal (h = 0 is consistent)
- Consistency implies admissibility
- In general, natural admissible heuristics tend to be consistent, especially if they come from relaxed problems

Optimality of A* Graph Search
- Sketch: consider what A* does with a consistent heuristic:
  - Nodes are popped with non-decreasing f-scores: for all n, n' with n' popped after n, f(n') >= f(n)
  - Proof by induction: (1) always pop the lowest f-score from the fringe, (2) all new nodes have larger (or equal) scores, (3) add them to the fringe, (4) repeat!
  - For every state s, nodes that reach s optimally are expanded before nodes that reach s sub-optimally
- Result: A* graph search is optimal
[Figure: expanding contours f <= 1, f <= 2, f <= 3 around the start]

Summary: A*
- A* uses both backward costs and (estimates of) forward costs
- A* is optimal with admissible / consistent heuristics
- Heuristic design is key: often use relaxed problems
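A small sketch of the per-arc consistency condition h(A) - h(C) <= cost(A to C), for a state space small enough to enumerate explicitly; the function name and interface are illustrative.

    def is_consistent(h, states, successors):
        """Check h(a) - h(c) <= cost(a, c) for every arc (a, c) in the graph."""
        return all(h(a) - h(c) <= cost
                   for a in states
                   for c, _action, cost in successors(a))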
