
LECTURE 3: DEDUCTIVE REASONING AGENTS
An Introduction to MultiAgent Systems
http://www.csc.liv.ac.uk/~mjw/pubs/imas


Agent Architectures
• An agent is a computer system capable of flexible autonomous action…
• Issues one needs to address in order to build agent-based systems…
• Three types of agent architecture:
  • symbolic/logical
  • reactive
  • hybrid

Agent Architectures
• We want to build agents that enjoy the properties of autonomy, reactiveness, pro-activeness, and social ability that we talked about earlier
• This is the area of agent architectures
• Maes defines an agent architecture as:
  ‘[A] particular methodology for building [agents]. It specifies how… the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions… and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.’

Agent Architectures
• Kaelbling considers an agent architecture to be:
  ‘[A] specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.’

Agent Architectures
• The classical approach to building agents is to view them as a particular type of knowledge-based system, and bring all the associated (discredited?!) methodologies of such systems to bear
• This paradigm is known as symbolic AI
• We define a deliberative agent or agent architecture to be one that:
  • contains an explicitly represented, symbolic model of the world
  • makes decisions (for example about what actions to perform) via symbolic reasoning

Symbolic Reasoning Agents
• Originally (1956–1985), pretty much all agents designed within AI were symbolic reasoning agents
• Its purest expression proposes that agents use explicit logical reasoning in order to decide what to do
• Problems with symbolic reasoning led to a reaction against this — the so-called reactive agents movement, 1985–present
• From 1990–present, a number of alternatives proposed: hybrid architectures, which attempt to combine the best of reasoning and reactive architectures

Symbolic Reasoning Agents
• If we aim to build an agent in this way, there are two key problems to be solved:
  1. The transduction problem: that of translating the real world into an accurate, adequate symbolic description, in time for that description to be useful… vision, speech understanding, learning
  2. The representation/reasoning problem: that of how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful… knowledge representation, automated reasoning, automatic planning

Symbolic Reasoning Agents
• Most researchers accept that neither problem is anywhere near solved
• The underlying problem lies with the complexity of symbol manipulation algorithms in general: many (most) search-based symbol manipulation algorithms of interest are highly intractable
• Because of these problems, some researchers have looked to alternative techniques for building agents; we look at these later

Deductive Reasoning Agents
• How can an agent decide what to do using theorem proving?
• Basic idea is to use logic to encode a theory stating the best action to perform in any given situation
• Let:
  • ρ be this theory (typically a set of rules)
  • ∆ be a logical database that describes the current state of the world
  • Ac be the set of actions the agent can perform
  • ∆ ⊢ρ φ mean that φ can be proved from ∆ using ρ

Deductive Reasoning Agents
• The action-selection loop (a Python sketch of this loop follows after these slides):

    /* try to find an action explicitly prescribed */
    for each a ∈ Ac do
        if ∆ ⊢ρ Do(a) then
            return a
        end-if
    end-for
    /* try to find an action not excluded */
    for each a ∈ Ac do
        if ∆ ⊬ρ ¬Do(a) then
            return a
        end-if
    end-for
    return null /* no action found */

Deductive Reasoning Agents
• An example: The Vacuum World
• Goal is for the robot to clear up all dirt

Deductive Reasoning Agents
• Use 3 domain predicates to solve the problem:
  • In(x, y): agent is at (x, y)
  • Dirt(x, y): there is dirt at (x, y)
  • Facing(d): the agent is facing direction d
• Possible actions: Ac = {turn, forward, suck}
  P.S. turn means “turn right”
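To make the action-selection loop concrete, here is a minimal Python sketch. It is an illustrative assumption, not part of the lecture: prove() is a naive forward-chaining stand-in for a real theorem prover over ground facts, the rule format and the Do()/Not() string encoding are invented for the example, and the toy vacuum-world rules at the bottom are likewise illustrative.

```python
# Toy sketch of "action selection as deduction".
# Assumptions (not from the slides): facts are strings such as "In(0,0)",
# rules are (conditions, conclusion) pairs, and prove() is a naive
# forward-chaining stand-in for a real theorem prover.

def prove(database, rules, goal):
    """Return True if `goal` can be derived from `database` using `rules`."""
    facts = set(database)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return goal in facts

def select_action(database, rules, actions):
    """The loop from the slide: prefer an explicitly prescribed action,
    otherwise fall back to any action that is not explicitly forbidden."""
    # try to find an action explicitly prescribed
    for a in actions:
        if prove(database, rules, f"Do({a})"):
            return a
    # try to find an action not excluded
    for a in actions:
        if not prove(database, rules, f"Not(Do({a}))"):
            return a
    return None  # no action found

# Tiny vacuum-world flavoured example (rules invented for illustration only):
rules = [
    (["In(0,0)", "Dirt(0,0)"], "Do(suck)"),   # prescriptive: suck if on dirt
    (["Holding(glass)"], "Not(Do(turn))"),    # exclusion-style rule (never fires here)
]
database = ["In(0,0)", "Dirt(0,0)", "Facing(north)"]
print(select_action(database, rules, ["turn", "forward", "suck"]))  # -> suck
```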

Deductive Reasoning Agents
• Rules ρ for determining what to do, for example: In(x, y) ∧ Dirt(x, y) → Do(suck)
• Using these rules (+ other obvious ones), starting at (0, 0) the robot will clear up dirt
• How to convert video camera input to Dirt(0, 1)?

Deductive Reasoning Agents
• Problems:
  • decision making assumes a static environment: calculative rationality
  • decision making using first-order logic is undecidable!
  • Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems (PS: co-NP-complete = bad news!)
  • …and so on!
• Typical solutions:
  • weaken the logic
  • use symbolic, non-logical representations
  • shift the emphasis of reasoning from run time to design time
• We will look at some examples of these approaches

More Problems…
• The “logical approach” that was presented implies adding and removing things from a database
• That’s not pure logic
• Early attempts at creating a “planning agent” tried to use true logical deduction to solve the problem

Planning Systems (in general)
• Planning systems find a sequence of actions that transforms an initial state into a goal state
• (diagram: arcs labelled a1, a17, a142, … leading from an initial state I to a goal state G)

Planning
• Planning involves issues of both Search and Knowledge Representation
• Sample planning systems:
  • Robot Planning (STRIPS)
  • Planning of biological experiments (MOLGEN)
  • Planning of speech acts
• For purposes of exposition, we use a simple domain – The Blocks World

The Blocks World
• The Blocks World (today) consists of equal sized blocks on a table
• A robot arm can manipulate the blocks using the actions (a STRIPS-style sketch follows after this list):
  • UNSTACK(a, b)
  • STACK(a, b)
  • PICKUP(a)
  • PUTDOWN(a)
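One conventional way to read the four arm actions is STRIPS-style, as (preconditions, delete list, add list) triples over a set of ground facts. The encoding below is an illustrative assumption along those lines, not the encoding used in the lecture; in particular the exact treatment of CLEAR while a block is held varies between textbooks.

```python
# Illustrative STRIPS-style encoding of the four Blocks World actions.
# Each action is a (preconditions, delete-list, add-list) triple of ground facts.
# This particular encoding is assumed for illustration, not taken from the lecture.

def unstack(a, b):
    return ({f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"},   # preconditions
            {f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"},   # delete
            {f"HOLDING({a})", f"CLEAR({b})"})              # add

def stack(a, b):
    return ({f"HOLDING({a})", f"CLEAR({b})"},
            {f"HOLDING({a})", f"CLEAR({b})"},
            {f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"})

def pickup(a):
    return ({f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"},
            {f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"},
            {f"HOLDING({a})"})

def putdown(a):
    return ({f"HOLDING({a})"},
            {f"HOLDING({a})"},
            {f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"})

def apply(state, action):
    """Apply a STRIPS action to a state (a set of facts), if its preconditions hold."""
    pre, delete, add = action
    if not pre.issubset(state):
        raise ValueError("preconditions not satisfied")
    return (state - delete) | add

# Initial state from the slides: A on B, B and C on the table, arm empty.
s0 = {"ON(A,B)", "ONTABLE(B)", "ONTABLE(C)", "CLEAR(A)", "CLEAR(C)", "ARMEMPTY"}
s1 = apply(s0, unstack("A", "B"))   # arm now holds A, B is clear
s2 = apply(s1, putdown("A"))        # A is back on the table
print("ONTABLE(A)" in s2)           # -> True
```

Applying unstack and then putdown here already produces the two-step plan that the following slides derive by theorem proving with Green's method.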

The Blocks World
• We also use predicates to describe the world. In general: ON(a,b), ONTABLE(a), CLEAR(a), HOLDING(a), ARMEMPTY
• For the state shown (block A on block B, block C alongside, B and C on the table):
  • ON(A,B)
  • ONTABLE(B)
  • ONTABLE(C)
  • CLEAR(A)
  • CLEAR(C)
  • ARMEMPTY

Logical Formulas to Describe Facts Always True of the World
• And of course we can write general logical truths relating the predicates:
  • [∃x HOLDING(x)] → ¬ARMEMPTY
  • ∀x [ONTABLE(x) → ¬∃y ON(x,y)]
  • ∀x [¬∃y ON(y,x) → CLEAR(x)]
• So… how do we use theorem-proving techniques to construct plans?

Green’s Method
• Add state variables to the predicates, and use a function DO that maps actions and states into new states:
  DO: A × S → S
• Example: DO(UNSTACK(x, y), S) is a new state

UNSTACK
• So to characterize the action UNSTACK we could write:
  [CLEAR(x, s) ∧ ON(x, y, s)] → [HOLDING(x, DO(UNSTACK(x,y), s)) ∧ CLEAR(y, DO(UNSTACK(x,y), s))]
• We can prove that if S0 is
  ON(A,B,S0) ∧ ONTABLE(B,S0) ∧ CLEAR(A,S0)
  then
  HOLDING(A, DO(UNSTACK(A,B), S0)) ∧ CLEAR(B, DO(UNSTACK(A,B), S0))
• Call the resulting state S1

More Proving
• The proof could proceed further; if we characterize PUTDOWN:
  HOLDING(x, s) → ONTABLE(x, DO(PUTDOWN(x), s))
• Then we could prove:
  ONTABLE(A, DO(PUTDOWN(A), DO(UNSTACK(A,B), S0)))
  Call this state S2
• The nested actions in this constructive proof give you the plan:
  1. UNSTACK(A,B); 2. PUTDOWN(A)

More Proving
• So if we have in our database:
  ON(A,B,S0) ∧ ONTABLE(B,S0) ∧ CLEAR(A,S0)
  and our goal is ∃s ONTABLE(A, s),
  we could use theorem proving to find the plan (a sketch of this idea follows below)
• But could I prove:
  ONTABLE(B, DO(PUTDOWN(A), DO(UNSTACK(A,B), S0))) ?
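To see how the plan falls out of the constructive proof, here is a small illustrative sketch, not taken from the lecture: predicates carry a state argument, DO(action, state) names the successor state, and the UNSTACK and PUTDOWN axioms are applied as simple forward inference steps. The string-based term representation and the function names (do, unstack_axiom, putdown_axiom) are assumptions for illustration only.

```python
# Sketch of Green's method: predicates take a state argument, DO(action, state)
# names the successor state, and the plan is read off the nested DO(...) term
# of a state in which the goal holds. Representation assumed for illustration.

def do(action, state):
    """Name of the state reached by performing `action` in `state`."""
    return f"DO({action},{state})"

def unstack_axiom(facts, x, y, s):
    """[CLEAR(x,s) ∧ ON(x,y,s)] → [HOLDING(x,DO(UNSTACK(x,y),s)) ∧ CLEAR(y,DO(UNSTACK(x,y),s))]"""
    if f"CLEAR({x},{s})" in facts and f"ON({x},{y},{s})" in facts:
        s1 = do(f"UNSTACK({x},{y})", s)
        facts |= {f"HOLDING({x},{s1})", f"CLEAR({y},{s1})"}
        return s1
    return None

def putdown_axiom(facts, x, s):
    """HOLDING(x,s) → ONTABLE(x,DO(PUTDOWN(x),s))"""
    if f"HOLDING({x},{s})" in facts:
        s1 = do(f"PUTDOWN({x})", s)
        facts.add(f"ONTABLE({x},{s1})")
        return s1
    return None

# Initial situation S0 from the slides.
facts = {"ON(A,B,S0)", "ONTABLE(B,S0)", "CLEAR(A,S0)"}

s1 = unstack_axiom(facts, "A", "B", "S0")   # derives HOLDING(A,S1), CLEAR(B,S1)
s2 = putdown_axiom(facts, "A", s1)          # derives ONTABLE(A,S2)

# The goal ∃s ONTABLE(A, s) is witnessed by s2; the nested term is the plan.
print(s2)                            # DO(PUTDOWN(A),DO(UNSTACK(A,B),S0))
print(f"ONTABLE(A,{s2})" in facts)   # True

# Nothing carries ONTABLE(B, ...) forward from S0 with only these two axioms:
print(f"ONTABLE(B,{s2})" in facts)   # False: that fact was never re-derived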
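The last print in the sketch bears on the closing question: with only the UNSTACK and PUTDOWN axioms given here, nothing licenses carrying ONTABLE(B, S0) forward into the new state, so ONTABLE(B, S2) is not derivable from this axiomatisation alone.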
