

  1. Artificial Intelligence
     Larry Holder, School of EECS, Washington State University

  2. • Problem-solving agent
     • Formulating problems
     • Search
       ◦ Uninformed search
       ◦ Informed (heuristic) search
       ◦ Heuristics
       ◦ Admissibility

  3. • Type: Goal-based
     • Feature-based state representation
       ◦ Agent location, orientation, …
     • Assume solution is a fixed sequence of actions
     • Rationality: Achieve goal (minimize cost)
     • Search for sequence of actions achieving goal
     [Figure: three candidate solutions labeled A, B, and C. Which solution: A, B or C?]

  4. • Initial state: see the Wumpus world figure
     • Goal state
       ◦ Any state where the agent has the gold and is not in the cave
     • Solution? GF, TL, TR, Grab, Shoot, Climb, … (a sequence of Wumpus-world actions)

  5. function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
       persistent: seq, an action sequence, initially empty
                   state, some description of the current world state
                   goal, a goal, initially null
                   problem, a problem formulation
       state ← UPDATE-STATE(state, percept)
       if seq is empty then
         goal ← FORMULATE-GOAL(state)
         problem ← FORMULATE-PROBLEM(state, goal)
         seq ← SEARCH(problem)
         if seq = failure then return a null action
       action ← FIRST(seq)    // action = seq[0]
       seq ← REST(seq)        // seq = seq[1…]
       return action

  6. 1. Initial state
        ◦ State: relevant features of the problem
     2. Actions
        ◦ Action: state → state
        ◦ ACTIONS(s) returns the set of actions applicable to state s
     3. Transition model
        ◦ RESULT(s, a) returns the state after taking action a in state s
     4. Goal test
        ◦ True for any state satisfying the goal
     5. Path cost
        ◦ Sum of costs of individual actions along a path
        ◦ Path: sequence of states connected by actions
        ◦ Step cost c(s, a, s′): cost of taking action a in state s to reach state s′
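
A minimal Python sketch of this five-part formulation. The class and method names (Problem, actions, result, goal_test, step_cost) are illustrative assumptions, not from the slides; concrete problems such as the vacuum world below would subclass or mimic this interface.

    # Sketch of the 5-part problem formulation; names are illustrative.
    class Problem:
        def __init__(self, initial, goal=None):
            self.initial = initial              # 1. initial state
            self.goal = goal

        def actions(self, s):                   # 2. ACTIONS(s): actions applicable in s
            raise NotImplementedError

        def result(self, s, a):                 # 3. RESULT(s, a): transition model
            raise NotImplementedError

        def goal_test(self, s):                 # 4. goal test
            return s == self.goal

        def step_cost(self, s, a, s2):          # 5. step cost c(s, a, s')
            return 1

    def path_cost(problem, states, actions):
        # Path cost: sum of step costs along a path of states connected by actions.
        return sum(problem.step_cost(s, a, s2)
                   for s, a, s2 in zip(states, actions, states[1:]))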

  7. ◦ State space: set of all states reachable from the initial state by any sequence of actions
       – The state space forms a directed graph of nodes (states) and edges (actions)
     ◦ Solution: sequence of actions leading from the initial state to a goal state
     ◦ Optimal solution: solution with minimal path cost

  8. • State representation
       ◦ Location: A, B
       ◦ Cleanliness of rooms: Clean, Dirty
       ◦ Example state: (A, Clean, Clean)
       ◦ How many unique states?
     • Initial state: Any state
     • Actions: Left, Right, Suction
     • Transition model
       ◦ E.g., Result((A, Dirty, Clean), Suction) = ?
     • Goal test: State = (?, Clean, Clean)
     • Path cost
       ◦ Number of actions in solution (step cost = 1)
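
Both questions on this slide have short answers: there are 2 x 2 x 2 = 8 unique states, and Result((A,Dirty,Clean), Suction) = (A,Clean,Clean). A minimal Python sketch of the transition model and goal test, with function names chosen here for illustration:

    # Vacuum-world transition model using the slide's (location, A-status, B-status) encoding.
    def result(state, action):
        loc, a, b = state
        if action == 'Left':
            return ('A', a, b)
        if action == 'Right':
            return ('B', a, b)
        if action == 'Suction':                       # clean the current room
            return (loc, 'Clean', b) if loc == 'A' else (loc, a, 'Clean')
        return state

    def goal_test(state):
        return state[1:] == ('Clean', 'Clean')        # goal: (?, Clean, Clean)

    print(result(('A', 'Dirty', 'Clean'), 'Suction'))  # -> ('A', 'Clean', 'Clean')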

  9. [Figure: the vacuum-world state space, showing the states over rooms A and B and the actions connecting them]

  10. • State: Location of each tile (and blank)
        ◦ E.g., (B, 1, 2, 3, 4, 5, 6, 7, 8)
        ◦ How many states?
      • Initial state: Any state
      • Actions: Move blank tile Up, Down, Left or Right
      • Transition model
      • Goal test: State matches Goal State
      • Path cost: Number of steps in path (step cost = 1)
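
A Python sketch of the blank-tile moves. The encoding here is an assumption made for illustration: a flat 9-tuple over board positions 0-8 in row-major order, with 'B' marking the blank (one way to realize the slide's representation). In total there are 9! = 362,880 tile arrangements, only half of which are reachable from any given state.

    # 8-puzzle actions and transition model over a row-major 9-tuple with 'B' as the blank.
    MOVES = {'Up': -3, 'Down': +3, 'Left': -1, 'Right': +1}

    def actions(state):
        i = state.index('B')                 # position of the blank
        acts = []
        if i >= 3:      acts.append('Up')
        if i <= 5:      acts.append('Down')
        if i % 3 != 0:  acts.append('Left')
        if i % 3 != 2:  acts.append('Right')
        return acts

    def result(state, action):
        i = state.index('B')
        j = i + MOVES[action]                # square the blank moves into
        s = list(state)
        s[i], s[j] = s[j], s[i]              # swap the blank with the neighboring tile
        return tuple(s)

    start = ('B', 1, 2, 3, 4, 5, 6, 7, 8)
    print(actions(start))                    # ['Down', 'Right']
    print(result(start, 'Right'))            # (1, 'B', 2, 3, 4, 5, 6, 7, 8)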

  11. • Search tree
        ◦ Root node is the initial state
        ◦ Node branches for each applicable move from the node's state
        ◦ Frontier consists of the leaf nodes that can be expanded
        ◦ Repeated states (*)
        ◦ Goal state
      [Figure: 8-puzzle search tree expanding the initial state with moves U, D, L, R; a repeated state is marked with *]

  12. • Nice 8-puzzle search web app
        ◦ http://tristanpenman.com/demos/n-puzzle
        ◦ Caution: This app may not produce answers consistent with algorithms used in this class.
      [Figure: example initial and goal states for the 8-puzzle]

  13. • Route finding
      • Robot navigation
      • Factory assembly
      • Circuit layout
      • Chemical design
      • Mathematical proofs
      • Game playing
      • Most of AI can be cast as a search problem

  14. • Romania road map
      • Initial state: Arad
      • Goal state: Bucharest

  15. [Figure: search tree for the route to Bucharest, with a duplicate state marked]

  16. function TREE-SEARCH(problem) returns a solution, or failure
        initialize the frontier using the initial state of problem
        loop do
          if the frontier is empty then return failure
          choose a leaf node and remove it from the frontier
          if the node contains a goal state then return the corresponding solution
          expand the node, adding the resulting nodes to the frontier
      • Search strategy determines how nodes are chosen for expansion
      • Suffers from repeated state generation

  17. function GRAPH-SEARCH(problem) returns a solution, or failure
        initialize the frontier using the initial state of problem
        initialize the explored set to be empty
        loop do
          if the frontier is empty then return failure
          choose a leaf node and remove it from the frontier
          if the node contains a goal state then return the corresponding solution
          add the node to the explored set
          expand the node, adding the resulting nodes to the frontier
            only if not in the frontier or explored set
      • Keep track of the explored set to avoid repeated states
      • Changes from TREE-SEARCH: the explored set and the duplicate check during expansion
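
A compact Python sketch of GRAPH-SEARCH, assuming a problem object with the interface from slide 6 (initial, actions, result, goal_test) and hashable states; all names are illustrative. Since the strategy only changes which node the frontier yields next, a FIFO discipline gives breadth-first behavior and a LIFO discipline gives depth-first behavior.

    from collections import deque

    # Sketch of GRAPH-SEARCH with a pluggable frontier discipline.
    def graph_search(problem, fifo=True):
        frontier = deque([(problem.initial, [])])        # entries: (state, actions so far)
        explored = set()
        while frontier:
            state, path = frontier.popleft() if fifo else frontier.pop()
            if problem.goal_test(state):
                return path                              # solution: sequence of actions
            explored.add(state)
            for a in problem.actions(state):
                child = problem.result(state, a)
                if child not in explored and all(child != s for s, _ in frontier):
                    frontier.append((child, path + [a]))
        return None                                      # failure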

  18. • Completeness
        ◦ Is the search algorithm guaranteed to find a solution if one exists?
      • Optimality
        ◦ Does the search algorithm find the optimal solution?
      • Time and space complexity
        ◦ Branching factor b (maximum successors of a node)
        ◦ Depth d of the shallowest goal node
        ◦ Maximum path length m
        ◦ Complexity O(b^d) to O(b^m)

  19. • No preference over states based on "closeness" to goal
      • Strategies
        ◦ Breadth-first search
        ◦ Depth-first search
        ◦ Depth-limited search
        ◦ Iterative deepening search

  20. • Expand the shallowest nodes in the frontier
      • Frontier is a simple queue
        ◦ Dequeue nodes from the front, enqueue nodes to the back
        ◦ First-In, First-Out (FIFO)

  21. function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
        node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
        if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
        frontier ← a FIFO queue with node as the only element
        explored ← an empty set
        loop do
          if EMPTY(frontier) then return failure
          node ← DEQUEUE(frontier)    // choose the shallowest node in frontier
          add node.STATE to explored
          for each action in problem.ACTIONS(node.STATE) do
            child ← CHILD-NODE(problem, node, action)
            if child.STATE is not in explored or frontier then
              if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
              frontier ← ENQUEUE(child, frontier)
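
As a concrete check, here is a self-contained Python BFS over the vacuum world of slide 8, with the early goal test on generated children as in the pseudocode above; the state encoding and names follow the earlier sketches and are illustrative.

    from collections import deque

    def vac_result(state, action):                       # vacuum-world transition model
        loc, a, b = state
        if action == 'Left':   return ('A', a, b)
        if action == 'Right':  return ('B', a, b)
        return (loc, 'Clean', b) if loc == 'A' else (loc, a, 'Clean')   # Suction

    def bfs_vacuum(initial):
        if initial[1:] == ('Clean', 'Clean'):
            return []
        frontier = deque([(initial, [])])                # FIFO queue of (state, actions)
        explored = set()
        while frontier:
            state, path = frontier.popleft()             # dequeue the shallowest node
            explored.add(state)
            for action in ('Left', 'Right', 'Suction'):
                child = vac_result(state, action)
                if child not in explored and all(child != s for s, _ in frontier):
                    if child[1:] == ('Clean', 'Clean'):  # goal test on generation
                        return path + [action]
                    frontier.append((child, path + [action]))
        return None

    print(bfs_vacuum(('A', 'Dirty', 'Dirty')))           # -> ['Suction', 'Right', 'Suction']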

  22. • 8-puzzle demo
      [Figure: example initial and goal states for the 8-puzzle demo]

  23. • Complete?
      • Optimal?
      • Time complexity
        ◦ Number of nodes generated (worst case): b + b^2 + … + b^d = O(b^d)
      • Space complexity
        ◦ O(b^(d-1)) nodes in the explored set
        ◦ O(b^d) nodes in the frontier
        ◦ Total: O(b^d)

  24. • Exponential complexity O(b^d)
      • For b = 4, 1 KB/node, 1M nodes/sec:

      Depth   Nodes     Time           Memory
        2     16        0.02 ms        16 KB (10^3)
        4     256       0.26 ms        256 KB (10^3)
        8     65,536    0.07 sec       65 MB (10^6)
       16     4.3B      71.6 min       4.3 TB (10^12)
       20     10^12     12.7 days      1 petabyte (10^15)
       30     10^18     366 centuries  1 zettabyte (10^21)
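
The rows above follow directly from the stated assumptions (b = 4, 1 KB per node, one million nodes generated per second); a few lines of Python reproduce them:

    # Reproduce the table: b = 4, 1 KB per node, 10^6 nodes generated per second.
    b, node_bytes, nodes_per_sec = 4, 10**3, 10**6
    for d in (2, 4, 8, 16, 20, 30):
        nodes = b ** d                                   # O(b^d) nodes generated
        secs = nodes / nodes_per_sec
        mem = nodes * node_bytes                         # bytes
        print(f"depth {d:>2}: {nodes:.1e} nodes, {secs:.1e} s, {mem:.1e} bytes")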

  25. • Always expand the deepest node
      • Frontier is a simple stack
        ◦ Push nodes to the front, pop nodes from the front
        ◦ Last-In, First-Out (LIFO)
      • Otherwise, same code as BFS
      • Or, implement recursively

  26. DEMO

  27. • Tree-Search version
        ◦ Not complete (infinite loops)
        ◦ Not optimal
      • Graph-Search version
        ◦ Complete
        ◦ Not optimal
      • Time complexity (m = max depth): O(b^m)
      • Space complexity
        ◦ Tree-search: O(bm)
        ◦ Graph-search: O(b^m)

  28. function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
        return RECURSIVE-DLS(MAKE-NODE(problem.INITIAL-STATE), problem, limit)

      function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
        if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
        else if limit = 0 then return cutoff
        else
          cutoff_occurred ← false
          for each action in problem.ACTIONS(node.STATE) do
            child ← CHILD-NODE(problem, node, action)
            result ← RECURSIVE-DLS(child, problem, limit - 1)
            if result = cutoff then cutoff_occurred ← true
            else if result ≠ failure then return result
          if cutoff_occurred then return cutoff else return failure

  29. • Limit DFS depth to l
      • Still incomplete, if l < d
      • Non-optimal if l > d
      • Time complexity: O(b^l)
      • Space complexity: O(bl)

  30. • Run DEPTH-LIMITED-SEARCH iteratively with an increasing depth limit

      function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
        for depth = 0 to ∞ do
          result ← DEPTH-LIMITED-SEARCH(problem, depth)
          if result ≠ cutoff then return result
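
A Python rendering of the two procedures, again assuming the problem interface from slide 6; the cutoff sentinel mirrors the pseudocode and the names are illustrative.

    # Depth-limited and iterative-deepening search over a slide-6 style problem object.
    CUTOFF, FAILURE = 'cutoff', None

    def depth_limited_search(problem, limit):
        return recursive_dls(problem.initial, [], problem, limit)

    def recursive_dls(state, path, problem, limit):
        if problem.goal_test(state):
            return path                                  # solution: sequence of actions
        if limit == 0:
            return CUTOFF
        cutoff_occurred = False
        for a in problem.actions(state):
            result = recursive_dls(problem.result(state, a), path + [a], problem, limit - 1)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result is not FAILURE:
                return result
        return CUTOFF if cutoff_occurred else FAILURE

    def iterative_deepening_search(problem):
        depth = 0
        while True:                                      # for depth = 0 to infinity
            result = depth_limited_search(problem, depth)
            if result != CUTOFF:
                return result
            depth += 1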

  31. DEMO
