
Agent-Based Systems – Lecture 4: Practical Reasoning Agents (PowerPoint PPT presentation)



Agent-Based Systems
Lecture 4 – Practical Reasoning Agents
Michael Rovatsos
mrovatso@inf.ed.ac.uk

Where are we?

Last time . . .
• Specifying agents in a logical, deductive framework
• General framework, agent-oriented programming, MetateM
• Intelligent autonomous behaviour is not only determined by logic! (Although this does not mean it cannot be simulated with deductive reasoning methods)
• Need to look for a more practical view of agent reasoning

Today . . .
• Practical Reasoning Systems

Practical reasoning
• Practical reasoning is reasoning directed towards actions, i.e. deciding what to do
• Principles of practical reasoning applied to agents largely derive from the work of the philosopher Michael Bratman (1990):
  "Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes."
• This differs from theoretical reasoning, which is concerned with belief (e.g. reasoning about a mathematical problem)
• Important: computational aspects (e.g. an agent cannot go on deciding indefinitely, it has to act)
• Practical reasoning is the foundation of the Belief-Desire-Intention (BDI) model of agency

Practical reasoning (continued)
• Practical reasoning consists of two main activities:
  1. Deliberation: deciding what to do
  2. Means-ends reasoning: deciding how to do it
• Combining them appropriately is the foundation of deliberative agency
• Deliberation is concerned with determining what one wants to achieve (considering preferences, choosing goals, etc.)
• Deliberation generates intentions (the interface between deliberation and means-ends reasoning)
• Means-ends reasoning is used to determine how the goals are to be achieved (thinking about suitable actions, resources and how to "organise" activity)
• Means-ends reasoning generates plans, which are turned into actions

Intentions
• In ordinary speech, intentions refer to actions or to states of mind; here we consider the latter
• We focus on future-directed intentions, i.e. pro-attitudes that tend to lead to actions
• We make reasonable attempts to fulfil intentions once we form them
• Main properties of intentions:
  • Intentions drive means-ends reasoning: if I adopt an intention, I will attempt to achieve it, and this affects my action choice
  • Intentions persist: once adopted, they will not be dropped until achieved, deemed unachievable, or reconsidered
  • Intentions constrain future deliberation: options inconsistent with intentions will not be entertained
  • Intentions influence beliefs concerning future practical reasoning: rationality requires that I believe I can achieve my intentions

Intentions (continued)
• Bratman's model suggests the following properties:
  • Intentions pose problems for agents, who need to determine ways of achieving them
  • Intentions provide a "filter" for adopting other intentions, which must not conflict with them, but they may change if circumstances do
  • Agents track the success of their intentions, and are inclined to try again if their attempts fail
  • Agents believe their intentions are possible
  • Agents do not believe they will not bring about their intentions
  • Under certain circumstances, agents believe they will bring about their intentions
  • Agents need not intend all the expected side effects of their intentions

Intentions: the Cohen-Levesque theory
• The Cohen-Levesque theory of intentions is based on the notion of a persistent goal
• An agent has a persistent goal of ϕ iff:
  1. It has a goal that ϕ eventually becomes true, and believes that ϕ is not currently true
  2. Before it drops the goal ϕ, one of the following conditions must hold:
    • the agent believes ϕ has been satisfied
    • the agent believes ϕ will never be satisfied
• Definition of intention (consistent with Bratman's list):
  "An agent intends to do action α iff it has a persistent goal to have brought about a state wherein it believed it was about to do α, and then did α."

Desires
• Desires describe the states of affairs that are considered for achievement, i.e. the basic preferences of the agent
• Desires are much weaker than intentions; they are not directly related to activity:
  "My desire to play basketball this afternoon is merely a potential influence of my conduct this afternoon. It must vie with my other relevant desires [ . . . ] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions." (Bratman, 1990)
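As a compact rendering of the persistent-goal definition above, the two numbered conditions can be sketched in temporal-logic style. This is only a paraphrase of the bullet points, using assumed operators GOAL, BEL, ◇ (eventually), □ (always) and U (until); it is not Cohen and Levesque's original notation:

\[
\mathrm{P\text{-}GOAL}(\varphi) \;\equiv\;
\mathrm{GOAL}(\Diamond\varphi)\;\wedge\;\mathrm{BEL}(\neg\varphi)\;\wedge\;
\Big(\mathrm{GOAL}(\Diamond\varphi)\;\mathrm{U}\;\big(\mathrm{BEL}(\varphi)\,\vee\,\mathrm{BEL}(\Box\neg\varphi)\big)\Big)
\]

Read: the agent wants ϕ to become true, believes it is not yet true, and maintains the goal until it either believes ϕ has been satisfied or believes ϕ can never be satisfied.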

The BDI Architecture
• Deliberation process in the BDI model (data-flow diagram):
  sensor input → belief revision → beliefs → generate options → desires → filter → intentions → action → action output

The BDI Architecture: sub-components
Sub-components of the overall BDI control flow:
• Belief revision function: update beliefs with sensory input and previous beliefs
• Option generation: use beliefs and existing intentions to generate a set of alternatives/options (= desires)
• Filtering function: choose between competing alternatives and commit to their achievement
• Planning function: given current beliefs and intentions, generate a plan for action
• Action generation: iteratively execute the actions in the plan sequence

The BDI architecture – formal model
• Let B ⊆ Bel, D ⊆ Des, I ⊆ Int be sets describing the beliefs, desires and intentions of the agent
• Percepts Per and actions Ac as before; Plan is the set of all plans (for now, sequences of actions)
• We describe the model through a set of abstract functions:
  • Belief revision: brf : ℘(Bel) × Per → ℘(Bel)
  • Option generation: options : ℘(Bel) × ℘(Int) → ℘(Des)
  • Filter to select options: filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int)
  • Means-ends reasoning: plan : ℘(Bel) × ℘(Int) × ℘(Ac) → Plan

BDI control loop (first version)
Practical Reasoning Agent Control Loop:
  B ← B₀; I ← I₀;                /* initialisation */
  while true do
    get next percept ρ through see(. . .) function;
    B ← brf(B, ρ); D ← options(B, I); I ← filter(B, D, I);
    π ← plan(B, I, Ac);
    while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
      α ← head(π);
      execute(α);
      π ← tail(π);
    end-while
  end-while
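The abstract control loop maps directly onto code. Below is a minimal Python sketch of the same loop, under the assumption that the caller supplies implementations of the abstract functions as ordinary callables (filter is renamed filter_intentions to avoid shadowing the Python built-in, and the unbounded "while true" is capped for the sketch); none of these names are part of any existing library.

def bdi_loop(B0, I0, Ac, see, brf, options, filter_intentions, plan, execute,
             succeeded, impossible, max_cycles=100):
    """Run the practical-reasoning control loop for a bounded number of cycles."""
    B, I = B0, I0                          # initialise beliefs and intentions
    for _ in range(max_cycles):            # 'while true' bounded for the sketch
        rho = see()                        # get next percept
        B = brf(B, rho)                    # belief revision
        D = options(B, I)                  # generate options (desires)
        I = filter_intentions(B, D, I)     # commit to intentions
        pi = list(plan(B, I, Ac))          # means-ends reasoning: build a plan
        while pi and not succeeded(I, B) and not impossible(I, B):
            alpha = pi.pop(0)              # head of the plan
            execute(alpha)                 # act; the tail becomes the new plan
    return B, I

A concrete agent only has to supply domain-specific versions of these callables; the loop itself stays fixed, which is the point of the BDI decomposition.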

Means-ends reasoning
• So far, we have not described the plan function, i.e. how to achieve goals (ends) using the available means
• Classical AI planning uses the following representations as inputs:
  • A goal (intention, task) to be achieved (or maintained)
  • The current state of the environment (beliefs)
  • The actions available to the agent
• The output is a plan, i.e. "a recipe for action" to achieve the goal from the current state
• STRIPS: the most famous classical planning system
  • State and goal are described as logical formulae
  • Action schemata describe preconditions and effects of actions

Blocks world example
• Given: a set of cube-shaped blocks sitting on a table
• A robot arm can move around/stack blocks (one at a time)
• Goal: a configuration of stacks of blocks
• Formalisation in STRIPS:
  • State description through a set of literals, e.g.
    {Clear(A), On(A, B), OnTable(B), OnTable(C), Clear(C)}
  • Same for the goal description, e.g.
    {OnTable(A), OnTable(B), OnTable(C)}
  • Action schemata: precondition/add/delete list notation

Blocks world example: action schemata
• Some action schemata examples:

  Stack(x, y)
    pre {Clear(y), Holding(x)}
    del {Clear(y), Holding(x)}
    add {ArmEmpty, On(x, y)}

  UnStack(x, y)
    pre {On(x, y), Clear(x), ArmEmpty}
    del {On(x, y), ArmEmpty}
    add {Holding(x), Clear(y)}

  Pickup(x)
    pre {Clear(x), OnTable(x), ArmEmpty}
    del {OnTable(x), ArmEmpty}
    add {Holding(x)}

  PutDown(x)
    pre {Holding(x)}
    del {Holding(x)}
    add {ArmEmpty, OnTable(x)}

• A (linear) plan = a sequence of action schema instances
• Many algorithms exist; the simplest method is state-space search (see the sketch after this slide)

Formal model of planning
• Define a descriptor for an action α ∈ Ac as ⟨P_α, D_α, A_α⟩, i.e. sets of first-order logic formulae giving its precondition, delete- and add-list
• Although these may contain variables and logical connectives, we ignore this for now (assume ground atoms)
• A planning problem ⟨Δ, O, γ⟩ over Ac specifies:
  • Δ, the (beliefs about the) initial state (a list of atoms)
  • a set of operator descriptors O = {⟨P_α, D_α, A_α⟩ | α ∈ Ac}
  • an intention γ (a set of literals) to be achieved
• A plan is a sequence of actions π = (α_1, . . . , α_n) with α_i ∈ Ac
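To make the STRIPS formulation concrete, here is a minimal Python sketch that grounds the four schemata above over blocks A, B, C and runs a naive breadth-first state-space search, the simplest planning method mentioned on the slide. The pre/del/add lists follow the schemata as given; ArmEmpty is added to the slide's example initial state so the arm can act, and all function and variable names are illustrative assumptions, not a standard planning API.

from collections import deque
from itertools import permutations

BLOCKS = ["A", "B", "C"]

def ground_operators():
    """Instantiate the blocks-world schemata as name -> (pre, del, add) sets."""
    ops = {}
    for x in BLOCKS:
        ops[f"Pickup({x})"] = ({f"Clear({x})", f"OnTable({x})", "ArmEmpty"},
                               {f"OnTable({x})", "ArmEmpty"},
                               {f"Holding({x})"})
        ops[f"PutDown({x})"] = ({f"Holding({x})"},
                                {f"Holding({x})"},
                                {"ArmEmpty", f"OnTable({x})"})
    for x, y in permutations(BLOCKS, 2):
        ops[f"Stack({x},{y})"] = ({f"Clear({y})", f"Holding({x})"},
                                  {f"Clear({y})", f"Holding({x})"},
                                  {"ArmEmpty", f"On({x},{y})"})
        ops[f"UnStack({x},{y})"] = ({f"On({x},{y})", f"Clear({x})", "ArmEmpty"},
                                    {f"On({x},{y})", "ArmEmpty"},
                                    {f"Holding({x})", f"Clear({y})"})
    return ops

def plan(initial, goal, ops):
    """Breadth-first search over ground states; returns action names or None."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                      # all goal literals satisfied
            return actions
        for name, (pre, delete, add) in ops.items():
            if pre <= state:                   # preconditions hold in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

if __name__ == "__main__":
    initial = {"Clear(A)", "On(A,B)", "OnTable(B)", "OnTable(C)",
               "Clear(C)", "ArmEmpty"}
    goal = {"OnTable(A)", "OnTable(B)", "OnTable(C)"}
    print(plan(initial, goal, ground_operators()))   # e.g. UnStack(A,B), PutDown(A)

Uninformed search like this scales poorly, which is why practical planners add heuristics or partial-order techniques, but it illustrates how the precondition/delete/add representation turns means-ends reasoning into a search problem.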
