
CSC2542: State-Space Planning I



Sheila McIlraith
Department of Computer Science, University of Toronto
Fall 2010

Acknowledgements
• Some of the slides used in this course are modifications of Dana Nau's lecture slides for the textbook Automated Planning, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License: http://creativecommons.org/licenses/by-nc-sa/2.0/
• Other slides are modifications of slides developed by Malte Helmert, Bernhard Nebel, and Jussi Rintanen.
• I have also used some material prepared by P@trick Haslum and Rao Kambhampati.
• I would like to gratefully acknowledge the contributions of these researchers, and thank them for generously permitting me to use aspects of their presentation material.

Motivation
• Nearly all planning procedures are search procedures
• Different planning procedures have different search spaces. Two examples:
  • State-space planning
    • Each node represents a state of the world
    • A plan is a path through the space
  • Plan-space planning
    • Each node is a set of partially-instantiated operators, plus some constraints
    • Impose more and more constraints, until we get a plan

Outline
• State-space planning
  • Forward search
  • Backward search
  • Lifting
  • STRIPS
  • Block-stacking

Forward Search
[Figure: forward search in a container-loading domain; from the initial state, applicable actions such as take c3, take c2, and move r1 lead to successor states, eventually reaching the goal.]

Properties
• Forward-search is sound
  • For any plan returned by any of its nondeterministic traces, this plan is guaranteed to be a solution
• Forward-search is also complete
  • If a solution exists, then at least one of Forward-search's nondeterministic traces will return a solution

Branching Factor of Forward Search
• Forward search can have a very large branching factor
  • Can have many applicable actions that don't progress toward the goal
[Figure: an initial state with applicable actions a1, a2, a3, …, a50, only one of which leads toward the goal.]
• Why this is bad:
  • Deterministic implementations can waste time trying lots of irrelevant actions
  • Need a good heuristic function and/or pruning procedure (this will be a focus of later discussion)

Deterministic Implementations
• Some deterministic implementations of forward search:
  • Breadth-first search
  • Depth-first search
  • Best-first search (e.g., A*)
  • Greedy search
[Figure: a search tree rooted at s0, branching through states s1, …, s5 toward the goal state sg.]
• Breadth-first and best-first search are sound and complete
  • But they usually aren't practical, requiring too much memory
  • Memory requirement is exponential in the length of the solution
• In practice, more likely to use depth-first search or greedy search
  • Worst-case memory requirement is linear in the length of the solution
  • In general, sound but not complete
  • But classical planning has only finitely many states
  • Thus, can make depth-first search complete by doing loop-checking
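To make the forward-search procedure concrete, here is a minimal breadth-first sketch in Python. The Action tuple, the set-of-ground-literals state representation, and all names are illustrative assumptions, not anything prescribed on the slides.

```python
from collections import deque, namedtuple

# A ground STRIPS-style action: name, precondition literals, add list, delete list.
Action = namedtuple("Action", "name precond effects_pos effects_neg")

def applicable(state, action):
    """An action is applicable when its preconditions hold in the state."""
    return action.precond <= state

def apply_action(state, action):
    """gamma(s, a): remove the delete effects, add the positive effects."""
    return (state - action.effects_neg) | action.effects_pos

def forward_search(s0, goal, actions):
    """Breadth-first forward state-space search: sound and complete,
    but memory grows exponentially with the length of the solution."""
    frontier = deque([(frozenset(s0), [])])
    visited = {frozenset(s0)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # every goal literal holds
            return plan
        for a in actions:
            if applicable(state, a):
                s2 = frozenset(apply_action(state, a))
                if s2 not in visited:          # loop checking
                    visited.add(s2)
                    frontier.append((s2, plan + [a]))
    return None                                # no solution exists
```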

Backward Search
• For forward search, we started at the initial state and computed state transitions
  • new state = γ(s,a)
• For backward search, we start at the goal and compute inverse state transitions
  • new set of subgoals = γ⁻¹(g,a)
• To define γ⁻¹(g,a), must first define relevance:
  • An action a is relevant for a goal g if
    • a makes at least one of g's literals true: g ∩ effects(a) ≠ ∅
    • a does not make any of g's literals false: g⁺ ∩ effects⁻(a) = ∅ and g⁻ ∩ effects⁺(a) = ∅

Inverse State Transitions
• If a is relevant for g, then
  • γ⁻¹(g,a) = (g − effects(a)) ∪ precond(a)
• Otherwise γ⁻¹(g,a) is undefined
• Example: suppose that
  • g = {on(b1,b2), on(b2,b3)}
  • a = stack(b1,b2)
  • What is γ⁻¹(g,a)?
• (A small code sketch of regression appears at the end of this section.)

Efficiency of Backward Search
• Backward search can also have a very large branching factor
  • E.g., an operator o that is relevant for g may have many ground instances a1, a2, …, an such that each ai's input state might be unreachable from the initial state
[Figure: a goal involving on(b1,b2) is relevant for many ground instances over blocks b1, b3, …, b50, most of them unreachable from the initial state; backward search branches from g0 through subgoals g1, …, g5.]
• As before, deterministic implementations can waste lots of time trying all of them
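A matching sketch of relevance and the inverse transition γ⁻¹(g,a), reusing the hypothetical Action tuple from the forward-search sketch. For simplicity it treats g as a set of positive literals only; the slide's definition also handles negative goal literals.

```python
def relevant(goal, action):
    """a is relevant for g: it achieves at least one goal literal
    and deletes none of them (positive goal literals only)."""
    return bool(goal & action.effects_pos) and not (goal & action.effects_neg)

def regress(goal, action):
    """gamma^{-1}(g, a) = (g - effects(a)) union precond(a),
    defined only when a is relevant for g."""
    if not relevant(goal, action):
        return None
    return (goal - action.effects_pos) | action.precond

def backward_search(s0, goal, actions):
    """Depth-first backward search from the goal toward the initial state."""
    stack = [(frozenset(goal), [])]
    seen = {frozenset(goal)}
    while stack:
        g, plan = stack.pop()
        if g <= s0:                        # every subgoal already holds in s0
            return plan
        for a in actions:
            g2 = regress(g, a)
            if g2 is not None:
                g2 = frozenset(g2)
                if g2 not in seen:
                    seen.add(g2)
                    stack.append((g2, [a] + plan))
    return None
```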

Lifting
• Basic idea: delay grounding of operators until necessary, binding variables only as required to realize the goal or a subgoal (a small code sketch of this binding appears at the end of this section)
• Example: operator foo(x,y) with precond: p(x,y) and effects: q(x), and goal q(a1)
  • Fully ground backward search must consider foo(a1,a1), foo(a1,a2), foo(a1,a3), …, foo(a1,a50), with subgoals p(a1,a1), p(a1,a2), p(a1,a3), …, p(a1,a50)
  • Can reduce the branching factor of backward search if we partially instantiate the operators; this is called lifting
  • The lifted node is foo(a1,y), with subgoal p(a1,y)

Lifted Backward Search
• More complicated than Backward-search
  • Must keep track of what substitutions were performed
• But it has a much smaller branching factor

The Search Space Is Still Too Large
• Lifted-backward-search generates a smaller search space than Backward-search, but it still can be quite large
  • Suppose actions a, b, and c are independent, action d must precede all of them, and there's no path from s0 to d's input state
  • We'll try all possible orderings of a, b, and c before realizing there is no solution
[Figure: backward search from the goal tries the orderings d a b c, d b a c, d b c a, d c b a, and so on, none of which connects to s0.]
• Plan-space planning can help with this problem
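The branching-factor saving can be seen with a tiny, illustrative unifier. The tuple encoding of literals and the "?"-prefixed variables are assumptions made just for this sketch, not notation from the slides.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(lit1, lit2, theta):
    """Unify two literals (predicate plus argument tuple); return an
    extended substitution dict or None.  Very small, illustrative unifier."""
    if lit1[0] != lit2[0] or len(lit1) != len(lit2):
        return None
    theta = dict(theta)
    for t1, t2 in zip(lit1[1:], lit2[1:]):
        t1, t2 = theta.get(t1, t1), theta.get(t2, t2)
        if t1 == t2:
            continue
        if is_var(t1):
            theta[t1] = t2
        elif is_var(t2):
            theta[t2] = t1
        else:
            return None
    return theta

# The slide's example: operator foo(x, y) with precond p(x, y) and effect q(x).
effect  = ("q", "?x")
precond = ("p", "?x", "?y")
goal    = ("q", "a1")

theta = unify(goal, effect, {})                 # {'?x': 'a1'}
# Lifted subgoal p(a1, ?y): ?y stays unbound, so one lifted node replaces
# the 50 ground instances foo(a1, a1), ..., foo(a1, a50).
subgoal = tuple(theta.get(t, t) for t in precond)
print(subgoal)                                  # ('p', 'a1', '?y')
```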

Pruning the Search Space
• Pruning the search space can really help. Two techniques we will discuss:
  • Sound pruning using branch-and-bound heuristic search
  • Domain customization that prunes actions and states
• For now, just two examples:
  • STRIPS
  • Block stacking

STRIPS
• One of the first planning algorithms (Shakey the robot)
• π ← the empty plan
• Do a modified backward search from g
  • Each new subgoal is precond(a) (instead of γ⁻¹(s,a))
  • When you find an action that's executable in the current state, then go forward on the current search path as far as possible, executing actions and appending them to π
  • Repeat until all goals are satisfied
[Figure: a current search path from g through subgoals g1, …, g6; actions a6 and a4 become executable in s0, giving π = ⟨a6, a4⟩ and s = γ(γ(s0,a6),a4).]

Quick Review of Blocks World
• unstack(x,y)
  Pre: on(x,y), clear(x), handempty
  Eff: ~on(x,y), ~clear(x), ~handempty, holding(x), clear(y)
• stack(x,y)
  Pre: holding(x), clear(y)
  Eff: ~holding(x), ~clear(y), on(x,y), clear(x), handempty
• pickup(x)
  Pre: ontable(x), clear(x), handempty
  Eff: ~ontable(x), ~clear(x), ~handempty, holding(x)
• putdown(x)
  Pre: holding(x)
  Eff: ~holding(x), ontable(x), clear(x), handempty

Limitations of STRIPS
• Example 1: the Sussman Anomaly
[Figure: initial state has c on a, with a and b on the table; the goal is a on b on c.]
• On this problem, STRIPS cannot produce an irredundant solution
• Try it and see. Start with the goal {on(b,c), on(a,b)}
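As an illustration of the blocks-world operators above (and of how plain forward search handles the Sussman Anomaly), here is a hypothetical grounding of the four operators, reusing the Action tuple and forward_search from the earlier sketches; the encoding of literals as strings is an assumption made for this sketch.

```python
def blocks_world_actions(blocks):
    """Ground the four blocks-world operators from the slide for a set of blocks."""
    acts = []
    for x in blocks:
        acts.append(Action(f"pickup({x})",
                           frozenset({f"ontable({x})", f"clear({x})", "handempty"}),
                           frozenset({f"holding({x})"}),
                           frozenset({f"ontable({x})", f"clear({x})", "handempty"})))
        acts.append(Action(f"putdown({x})",
                           frozenset({f"holding({x})"}),
                           frozenset({f"ontable({x})", f"clear({x})", "handempty"}),
                           frozenset({f"holding({x})"})))
        for y in blocks:
            if x == y:
                continue
            acts.append(Action(f"unstack({x},{y})",
                               frozenset({f"on({x},{y})", f"clear({x})", "handempty"}),
                               frozenset({f"holding({x})", f"clear({y})"}),
                               frozenset({f"on({x},{y})", f"clear({x})", "handempty"})))
            acts.append(Action(f"stack({x},{y})",
                               frozenset({f"holding({x})", f"clear({y})"}),
                               frozenset({f"on({x},{y})", f"clear({x})", "handempty"}),
                               frozenset({f"holding({x})", f"clear({y})"})))
    return acts

# Sussman Anomaly: c is on a, a and b are on the table; goal is a on b on c.
s0   = {"on(c,a)", "ontable(a)", "ontable(b)", "clear(c)", "clear(b)", "handempty"}
goal = {"on(a,b)", "on(b,c)"}
plan = forward_search(s0, goal, blocks_world_actions({"a", "b", "c"}))
print([a.name for a in plan])
# e.g. ['unstack(c,a)', 'putdown(c)', 'pickup(b)', 'stack(b,c)', 'pickup(a)', 'stack(a,b)']
```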

Example 2: Register Assignment Problem
• State-variable formulation:
  Initial state: {value(r1)=3, value(r2)=5, value(r3)=0}
  Goal: {value(r1)=5, value(r2)=3}
  Operator: assign(r,v,r',v')
    precond: value(r)=v, value(r')=v'
    effects: value(r)=v'
• STRIPS cannot solve this problem at all

How to Handle Problems like These?
• Several ways:
  • Do something other than state-space search
    • E.g., Chapters 5–8
  • Use forward or backward state-space search, with domain-specific knowledge to prune the search space
    • Can solve both problems quite easily this way
    • Example: block stacking using forward search

Domain-Specific Knowledge
• A blocks-world planning problem P = (O, s0, g) is solvable if s0 and g satisfy some simple consistency conditions
  • g should not mention any blocks not mentioned in s0
  • A block cannot be on two other blocks at once
  • etc.
  • Can check these in time O(n log n)
• If P is solvable, can easily construct a solution of length O(2m), where m is the number of blocks
  • Move all blocks to the table, then build up stacks from the bottom
  • Can do this in time O(n)
• With additional domain-specific knowledge can do even better …

Additional Domain-Specific Knowledge
• A block x needs to be moved if any of the following is true:
  • s contains ontable(x) and g contains on(x,y)
  • s contains on(x,y) and g contains ontable(x)
  • s contains on(x,y) and g contains on(x,z) for some y ≠ z
  • s contains on(x,y) and y needs to be moved
[Figure: an example initial state and goal in which blocks a, d, c, and e each illustrate one of the four conditions.]
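The "needs to be moved" test translates almost directly into code. This sketch assumes a simplified state representation in which each block maps to whatever it sits on ('table' or another block); the dictionary encoding is an assumption made for this sketch.

```python
def needs_to_move(x, below_s, below_g):
    """True iff block x needs to be moved (the four conditions on the slide).
    below_s / below_g map each block to what it sits on: 'table' or a block;
    blocks absent from below_g are unconstrained by the goal."""
    if x in below_g and below_s[x] != below_g[x]:
        return True                                   # conditions 1-3: wrong support
    y = below_s[x]
    return y != "table" and needs_to_move(y, below_s, below_g)   # condition 4

# Sussman Anomaly: c on a, a and b on the table; goal is a on b on c.
below_s = {"c": "a", "a": "table", "b": "table"}
below_g = {"a": "b", "b": "c"}
print([x for x in below_s if needs_to_move(x, below_s, below_g)])   # ['c', 'a', 'b']
```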

Domain-Specific Algorithm
loop
  if there is a clear block x such that
       x needs to be moved and
       x can be moved to a place where it won't need to be moved
    then move x to that place
  else if there is a clear block x such that x needs to be moved
    then move x to the table
  else if the goal is satisfied
    then return the plan
  else return failure
repeat

Easily Solves the Sussman Anomaly
• The same loop, run on the Sussman Anomaly:
[Figure: initial state has c on a, with a and b on the table; the goal is a on b on c.]

Properties
• The block-stacking algorithm:
  • Sound, complete, guaranteed to terminate
  • Runs in time O(n³)
    • Can be modified to run in time O(n)
  • Often finds optimal (shortest) solutions
    • But sometimes only near-optimal
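A sketch of the block-stacking loop above, reusing needs_to_move and the dictionary state encoding from the previous sketch. Moving a block is collapsed into a single step here (the pickup/stack/putdown details are omitted), so this illustrates the control strategy rather than being a drop-in planner.

```python
def clear_blocks(below_s):
    """Blocks with nothing on top of them."""
    supports = set(below_s.values())
    return [x for x in below_s if x not in supports]

def block_stacking(below_s, below_g):
    """The domain-specific loop from the slide, with 'move x to p' as one step."""
    below_s = dict(below_s)
    plan = []
    while True:
        movable = [x for x in clear_blocks(below_s)
                   if needs_to_move(x, below_s, below_g)]
        # A clear block that can go to a place where it won't need to move again.
        safe = [x for x in movable
                if below_g.get(x) == "table"
                or (below_g.get(x) in clear_blocks(below_s)
                    and not needs_to_move(below_g[x], below_s, below_g))]
        if safe:
            x = safe[0]
            below_s[x] = below_g[x]
            plan.append(f"move {x} onto {below_g[x]}")
        elif movable:
            x = movable[0]
            below_s[x] = "table"
            plan.append(f"move {x} to the table")
        elif all(below_s[x] == p for x, p in below_g.items()):
            return plan
        else:
            return None   # failure (should not happen for consistent problems)

# Run on the Sussman Anomaly state from the previous sketch:
print(block_stacking({"c": "a", "a": "table", "b": "table"},
                     {"a": "b", "b": "c"}))
# ['move c to the table', 'move b onto c', 'move a onto b']
```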
