CSE 573: Introduction to Artificial Intelligence (Hanna Hajishirzi): Search (Uninformed, Informed Search)

  1. CSE 573: Introduction to Artificial Intelligence. Hanna Hajishirzi. Search (Uninformed, Informed Search). Slides adapted from Dan Klein, Pieter Abbeel (ai.berkeley.edu), Dan Weld, and Luke Zettlemoyer.

  2. To Do:
     o Check out PS1 on the webpage; start ASAP
     o Submission: Canvas
     o Do the readings for the search algorithms
     o Try this search visualization tool: http://qiao.github.io/PathFinding.js/visual/

  3. Recap: Search

  4. Recap: Search
     o Search problem:
       o States (configurations of the world)
       o Actions and costs
       o Successor function (world dynamics)
       o Start state and goal test
     o Search tree:
       o Nodes represent plans for reaching states
     o Search algorithm:
       o Systematically builds a search tree
       o Chooses an ordering of the fringe (unexplored nodes)
       o Optimal: finds least-cost plans

  5. Depth-First Search
     o Strategy: expand a deepest node first
     o Implementation: fringe is a LIFO stack
     [Figure: DFS expansion of the example search graph from S toward G]
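The slide's two bullets can be sketched directly in code. This is a minimal tree-search version, not the course's own implementation; the graph is an assumed adjacency-dict representation, and each fringe entry is a whole partial plan (path), as in the slides.

```python
# Depth-first tree search: the fringe is a LIFO stack of partial plans.
def dfs(graph, start, goal):
    fringe = [[start]]              # stack of paths; last in, first out
    while fringe:
        path = fringe.pop()         # LIFO: expand the deepest plan first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in path:    # path checking: avoid cycles on this plan
                fringe.append(path + [succ])
    return None                     # no solution
```

Because the stack pops the most recently pushed plan, the search commits to one branch all the way down before backtracking.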

  6. Breadth-First Search
     o Strategy: expand a shallowest node first
     o Implementation: fringe is a FIFO queue
     [Figure: BFS expansion of the example search graph, processed in depth tiers]
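Swapping the stack for a queue gives BFS; only the fringe discipline changes. Again a minimal sketch on an assumed adjacency-dict graph, not the course's code:

```python
from collections import deque

# Breadth-first tree search: the fringe is a FIFO queue of partial plans.
def bfs(graph, start, goal):
    fringe = deque([[start]])       # queue of paths; first in, first out
    while fringe:
        path = fringe.popleft()     # FIFO: expand the shallowest plan first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in path:
                fringe.append(path + [succ])
    return None
```

Since whole depth tiers are processed before deeper ones, BFS returns a solution with the fewest actions.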

  7. Search Algorithm Properties

     Algorithm                Complete   Optimal   Time       Space
     DFS (w/ path checking)   Y          N         O(b^m)     O(bm)
     BFS                      Y          Y*        O(b^d)     O(b^d)

     (b = branching factor, m = maximum depth, d = depth of the shallowest solution;
     * optimal when all action costs are equal)
     [Figure: search tree with 1 node at the root, then b nodes, b^2 nodes, ... b^d nodes at tier d, down to b^m nodes]

  8. Video of Demo Maze Water DFS/BFS (part 1)

  9. Video of Demo Maze Water DFS/BFS (part 2)

  10. DFS vs BFS o When will BFS outperform DFS? o When will DFS outperform BFS?

  11. Iterative Deepening
     o Idea: get DFS's space advantage with BFS's time / shallow-solution advantages
       o Run a DFS with depth limit 1. If no solution...
       o Run a DFS with depth limit 2. If no solution...
       o Run a DFS with depth limit 3, and so on.
     o Isn't that wastefully redundant?
       o Generally most work happens in the lowest level searched, so not so bad!
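The three "depth limit" bullets above translate into a short loop around a depth-limited DFS. A minimal sketch (my own helper names, assumed adjacency-dict graph):

```python
# Depth-limited DFS: ordinary recursive DFS that refuses to go below `limit`.
def depth_limited_dfs(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:                  # hit the depth limit: give up on this branch
        return None
    for succ in graph.get(node, []):
        if succ not in path:        # path checking to avoid cycles
            found = depth_limited_dfs(graph, succ, goal, limit - 1, path + [succ])
            if found is not None:
                return found
    return None

# Iterative deepening: rerun DFS with limits 1, 2, 3, ... until a solution appears.
def iterative_deepening(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None
```

The redundancy is mostly harmless: in a tree with branching factor b, the deepest level alone holds roughly a (b-1)/b fraction of all nodes, so re-searching shallow levels adds little.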

  12. Cost-Sensitive Search
     [Figure: example graph from START to GOAL, edges not yet weighted]

  13. Cost-Sensitive Search
     [Figure: the same START-to-GOAL graph with edge costs (1 to 15) as weights]
     o BFS finds the shortest path in terms of the number of actions. How?
     o It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

  14. Uniform Cost Search

  15. Uniform Cost Search
     o Strategy: expand a cheapest node first
     o Implementation: fringe is a priority queue (priority: cumulative cost)
     [Figure: UCS expansion of the example weighted graph, with cumulative-cost contours]
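Replacing the FIFO queue with a priority queue keyed on cumulative cost gives UCS. A minimal sketch (assumed representation: adjacency dict mapping each node to a list of (successor, step_cost) pairs):

```python
import heapq

# Uniform cost search: the fringe is a priority queue ordered by cumulative cost.
def ucs(graph, start, goal):
    fringe = [(0, start, [start])]          # (cumulative cost, node, path)
    best = {}                               # cheapest cost at which each node was expanded
    while fringe:
        cost, node, path = heapq.heappop(fringe)   # cheapest plan first
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:    # already expanded more cheaply
            continue
        best[node] = cost
        for succ, step in graph.get(node, []):
            heapq.heappush(fringe, (cost + step, succ, path + [succ]))
    return None
```

Because plans come off the fringe in order of cumulative cost, the first time the goal is popped its cost is the least-cost solution.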

  16. Uniform Cost Search (UCS) Properties
     o What nodes does UCS expand?
       o Processes all nodes with cost less than the cheapest solution!
       o If that solution costs C* and arcs cost at least ε, then the "effective depth" is roughly C*/ε
       o Takes time O(b^(C*/ε)) (exponential in effective depth)
     o How much space does the fringe take?
       o Has roughly the last tier, so O(b^(C*/ε))
     o Is it complete?
       o Assuming the best solution has a finite cost and the minimum arc cost is positive, yes! (If there is no solution, we still need the search depth to be finite.)
     o Is it optimal?
       o Yes! (Proof via A*)
     [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3, ... forming C*/ε "tiers"]

  17. Uniform Cost Issues
     o Remember: UCS explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, ...)
     o The good: UCS is complete and optimal!
     o The bad:
       o Explores options in every "direction"
       o No information about goal location
     o We'll fix that soon!
     [Figure: UCS contours expanding uniformly around Start toward Goal]

  18. Video of Demo Empty UCS

  19. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 1)

  20. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 2)

  21. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 3)

  22. Example: Pancake Problem Cost: Number of pancakes flipped

  23. Example: Pancake Problem

  24. Example: Pancake Problem
     [Figure: state space graph with flip costs (2, 3, 4) as edge weights]

  25. General Tree Search
     o Action: flip top two (cost: 2). Action: flip all four (cost: 4)
     o Path to reach goal: flip four, flip three. Total cost: 7

  26. The One Queue
     o All these search algorithms are the same except for fringe strategies
     o Conceptually, all fringes are priority queues (i.e., collections of nodes with attached priorities)
     o Practically, for DFS and BFS, you can avoid the log(n) overhead of an actual priority queue by using stacks and queues
     o You can even code one implementation that takes a variable queuing object

  27. Up next: Informed Search
     o Uninformed Search (so far): DFS, BFS, UCS
     o Informed Search (up next): Heuristics, Greedy Search, A* Search, Graph Search

  28. Search Heuristics
     o A heuristic is:
       o A function that estimates how close a state is to a goal
       o Designed for a particular search problem
     o Examples: Manhattan distance, Euclidean distance for pathing
     [Figure: Pacman grid with example heuristic values 10, 5, 11.2]
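The two pathing heuristics named on the slide are one-liners for grid positions given as (x, y) tuples (the tuple encoding is my assumption):

```python
import math

# Manhattan distance: moves restricted to the four grid directions.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Euclidean distance: straight-line distance, ignoring the grid entirely.
def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

Both underestimate the true maze distance (walls only make paths longer), which is exactly the admissibility property introduced later in the deck.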

  29. Example: Heuristic Function h(x)

  30. Example: Heuristic Function
     o Heuristic: the number of the largest pancake that is still out of place
     [Figure: h(x) values (0 to 4) labeling states in the pancake state space]
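The slide's pancake heuristic is easy to write down. A sketch, assuming a state is a tuple of pancake sizes (the encoding is mine; the slide only gives the heuristic in words):

```python
# h(x) = the number (size) of the largest pancake still out of place,
# where "in place" means it matches the sorted goal stack position-by-position.
def pancake_h(state):
    goal = tuple(sorted(state))
    out_of_place = [p for i, p in enumerate(state) if p != goal[i]]
    return max(out_of_place) if out_of_place else 0
```

For a solved stack the heuristic is 0; for a fully reversed stack it is the size of the largest pancake.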

  31. Greedy Search

  32. Greedy Search
     o Expand the node that seems closest...
     o Is it optimal?
       o No. The resulting path to Bucharest is not the shortest!

  33. Greedy Search
     o Strategy: expand a node that you think is closest to a goal state
       o Heuristic: estimate of distance to the nearest goal for each state
     o A common case:
       o Best-first takes you straight to the (wrong) goal
     o Worst case: like a badly-guided DFS
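Greedy best-first search is UCS with the priority changed from cumulative cost g(n) to the heuristic h(n) alone. A minimal sketch (assumed adjacency-dict graph of (successor, step_cost) pairs; the example graph in the test is my own illustration of the "straight to the wrong goal" failure mode):

```python
import heapq

# Greedy search: the fringe is ordered by h(node) only; path cost is ignored.
def greedy_search(graph, start, goal, h):
    fringe = [(h(start), [start])]
    while fringe:
        _, path = heapq.heappop(fringe)     # expand the node that *seems* closest
        node = path[-1]
        if node == goal:
            return path
        for succ, _step in graph.get(node, []):
            if succ not in path:
                heapq.heappush(fringe, (h(succ), path + [succ]))
    return None
```

Because step costs never enter the priority, greedy can happily return an expensive path when a cheap one exists, which is exactly the non-optimality the slide warns about.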

  34. Video of Demo Contours Greedy (Empty)

  35. Video of Demo Contours Greedy (Pacman Small Maze)

  36. A* Search

  37. A*: Summary

  38. A* Search
     [Figure: side-by-side contour demos of UCS, Greedy, and A*]

  39. Combining UCS and Greedy
     o Uniform-cost orders by path cost, or backward cost g(n)
     o Greedy orders by goal proximity, or forward cost h(n)
     o A* Search orders by the sum: f(n) = g(n) + h(n)
     [Figure: example search tree with g and h values at each node; example by Teg Grenager]

  40. When should A* terminate?
     o Should we stop when we enqueue a goal?

       Path      g   h   f = g + h
       S         0   3   3
       S->A      2   2   4
       S->B      2   1   3
       S->B->G   5   0   5
       S->A->G   4   0   4

     o No: only stop when we dequeue a goal
     [Figure: graph with edges S->A (2), A->G (2), S->B (2), B->G (3); h(S)=3, h(A)=2, h(B)=1, h(G)=0]
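The dequeue-not-enqueue rule is visible in a short A* sketch run on this slide's example graph (edge costs and h-values read off the slide; the code itself is my own):

```python
import heapq

# A* search: fringe ordered by f = g + h; the goal test happens on DEQUEUE.
def astar(graph, start, goal, h):
    fringe = [(h(start), 0, [start])]        # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:                     # only stop when a goal is popped
            return g, path
        for succ, step in graph.get(node, []):
            g2 = g + step
            heapq.heappush(fringe, (g2 + h(succ), g2, path + [succ]))
    return None

# The slide's example: S->B->G (f=5) is enqueued before S->A->G (f=4).
graph = {'S': [('A', 2), ('B', 2)], 'A': [('G', 2)], 'B': [('G', 3)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}.get
```

If A* stopped when it first *enqueued* a goal, it would return S->B->G at cost 5; by waiting until a goal is *dequeued*, the cheaper S->A->G (cost 4) overtakes it on the fringe.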

  41. Is A* Optimal?

       Path   g   h   f = g + h
       S      0   7   7
       S->A   1   6   7
       S->G   5   0   5

     o What went wrong?
     o Actual bad goal cost < estimated good goal cost
     o We need estimates to be less than actual costs!
     [Figure: graph with edges S->A (1), A->G (3), S->G (5); h(S)=7, h(A)=6, h(G)=0]

  42. Idea: Admissibility
     o Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe
     o Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs

  43. Admissible Heuristics
     o A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal
     [Figure: Pacman examples with heuristic values 0.0, 15, 11.5]
     o Coming up with admissible heuristics is most of what's involved in using A* in practice.
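On a small graph the condition 0 ≤ h(n) ≤ h*(n) can be checked mechanically: compute the true cost-to-goal h*(n) for every node, then compare. The helper below is my own (not from the slides); it runs Dijkstra's algorithm over the reversed edges so that distances radiate out from the goal.

```python
import heapq

# True cost-to-goal h*(n) for every node, via Dijkstra from the goal
# over reversed edges (graph: node -> list of (successor, cost) pairs).
def true_costs_to_goal(graph, goal):
    rev = {}
    for u, edges in graph.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                         # stale queue entry
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Admissible iff 0 <= h(n) <= h*(n) for every node that can reach the goal.
def is_admissible(graph, goal, h):
    hstar = true_costs_to_goal(graph, goal)
    return all(0 <= h(n) <= hstar[n] for n in hstar)
```

Run on slide 41's counterexample, it flags h(A) = 6 > h*(A) = 3 as inadmissible.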

  44. Optimality of A* Tree Search

  45. Optimality of A* Tree Search
     o Assume:
       o A is an optimal goal node
       o B is a suboptimal goal node
       o h is admissible
     o Claim: A will exit the fringe before B

  46. Optimality of A* Tree Search: Blocking
     o Proof:
       o Imagine B is on the fringe
       o Some ancestor n of A is on the fringe, too (maybe A itself!)
       o Claim: n will be expanded before B
         1. f(n) ≤ f(A), by the definition of f-cost, the admissibility of h, and h = 0 at a goal

  47. Optimality of A* Tree Search: Blocking (continued)
     o Proof (continued):
         1. f(n) ≤ f(A)
         2. f(A) < f(B), because B is suboptimal and h = 0 at a goal

  48. Optimality of A* Tree Search: Blocking (continued)
     o Proof (continued):
         1. f(n) ≤ f(A)
         2. f(A) < f(B)
         3. So n expands before B
     o All ancestors of A expand before B, so A expands before B
     o A* tree search is optimal
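Chaining steps 1 to 3, the inequality behind the proof can be written out in full (h* denotes the true cost-to-goal, so admissibility says h(n) ≤ h*(n)):

```latex
f(n) = g(n) + h(n)
     \le g(n) + h^*(n)   % admissibility of h
     \le g(A)            % n lies on an optimal path to A
     = f(A)              % h(A) = 0 at a goal
     < g(B) = f(B)       % B is suboptimal, h(B) = 0
```

Every fringe node on the optimal path therefore has strictly smaller f-cost than B, so B can never be dequeued first.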

  49. Properties of A*
     [Figure: expansion fans of Uniform-Cost vs A* over a search tree]

  50. UCS vs A* Contours
     o Uniform-cost expands equally in all "directions"
     o A* expands mainly toward the goal, but does hedge its bets to ensure optimality
     [Figure: UCS contours centered on Start; A* contours stretched from Start toward Goal]

  51. Video of Demo Contours (Empty) – A*

  52. Video of Demo Contours (Pacman Small Maze) – A*

  53. Comparison
     [Figure: side-by-side demos of Greedy, Uniform Cost, and A*]

  54. Which algorithm?

  55. Which algorithm?

  56. Video of Demo Empty Water Shallow/Deep – Guess Algorithm

  57. A*: Summary
     o A* uses both backward costs and (estimates of) forward costs
     o A* is optimal with admissible (optimistic) heuristics
     o Heuristic design is key: often use relaxed problems

  58. Creating Heuristics

  59. Creating Admissible Heuristics
     o Most of the work in solving hard search problems optimally is in coming up with admissible heuristics
     o Often, admissible heuristics are solutions to relaxed problems, where new actions are available
     o Inadmissible heuristics are often useful too
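The relaxed-problem recipe can be illustrated with the classic 8-puzzle (my example, not from the slides). If tiles could teleport, the relaxed solution cost is the number of misplaced tiles; if tiles could slide through each other, it is the sum of Manhattan distances. Both relaxations only remove constraints, so both heuristics are admissible. States here are assumed to be 9-tuples in row-major order with 0 for the blank.

```python
# Relaxation 1: tiles may teleport -> count misplaced tiles (blank excluded).
def misplaced(state, goal):
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

# Relaxation 2: tiles may overlap -> sum of per-tile Manhattan distances.
def manhattan_sum(state, goal):
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Manhattan distance dominates misplaced tiles (it is never smaller while staying admissible), so it typically expands far fewer nodes in A*.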
