Introduction to Reasoning Agent Architectures Deductive Reasoning Agents Practical Reasoning Agents

CM30174 Introduction to Intelligent Agents Semester 1, 2010-11

Marina De Vos, Julian Padget

Reasoning Agents / 20101011 / version 0.5

October 25, 2010

De Vos/Padget (Bath/CS) CM30174/Reasoning Agents October 25, 2010 1 / 88

Authors/Credits for this lecture

Primary author: Marina De Vos. Material sourced from Michael Wooldridge’s book “An Introduction to Multiagent Systems”, Chapter 3 and 4. [Wooldridge, 2002].


Content

1. Introduction to Reasoning
2. Agent Architectures
3. Deductive Reasoning Agents
4. Practical Reasoning Agents


What is Reasoning?

More than thinking: taking a set of facts and deriving new ones in a fixed way
More specifically (and more usefully):

Reasoning to achieve a goal: planning
Problem solving: working out how to get from world state A to world state B


Examples

Samples:

I have 7 apples and then give 3 to Vladimir: how many apples do I have?
How do I get from San Jose to Puerto Viejo?
How do I pass this unit with the least effort?
Should I take this unit or not?
How do I achieve my dream of owning a house by the seaside?

Each of these cases has different facts or world conditions
Each of these cases requires a different inference mechanism


The Dream House

Take the house building problem:

Starting world state:

I have X amount of money
I have many facts about land, the city, planning permission, the housing market, etc.

How do I achieve my goal state:

Where I have a house (preferably one which is the BEST I could get with my money)

The possibilities in the real world are (nearly!) infinite!


Automated Reasoning

Objective: carry out such inference automatically, without the need for human intervention. This is very hard because:

The real world is complex (huge number of factors): inaccessible
Resources are bounded (finite time and finite memory)
Things change (while I am thinking or acting the world may change): dynamic
The world is uncertain (I cannot be sure that an action I take will have the expected outcome): non-deterministic
There are other actors (that might try to intentionally or unintentionally thwart my plans!): non-deterministic


Formulating the Problem

Decide on an abstraction / simplification:

A model of the world: an abstract vision of the concepts relevant to the problem
What to consider / ignore (uncertainty? other agents? ...)
Whether we are reasoning in one agent or across many (do I have to negotiate?)

Each of these affects the complexity of the reasoner


Monotonic vs. Non-Monotonic (I)

Monotonic

A logical inference relation is monotonic if and only if, for all sets of propositions S and T, and for all propositions A, if S entails A (i.e. S ⊢ A) then (S ∪ T) ⊢ A
First-order logic is monotonic
Classical deduction: suitable for reasoning in open-ended situations
Absence of x implies x is unknown
A proposition A is false with respect to a set of propositions S when S ⊢ ¬A
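As a minimal worked instance of this definition (the propositions p, q, r are illustrative):

```latex
S = \{\, p,\; p \rightarrow q \,\} \vdash q
\qquad\text{and, by monotonicity, for any } T:\quad
S \cup T \vdash q
```

Adding new premises (say T = {r}) can never invalidate the conclusion q; it can only license further conclusions.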


Monotonic vs. Non-Monotonic (II)

Non-monotonic

Absence of x implies x is false: the closed world assumption (Clark's completion)
Prolog is non-monotonic
Logics in which the set of implications determined by a given group of premises does not necessarily grow, and can shrink, when new well-formed formulae are added to the set of premises
Reasoning to conclusions on the basis of incomplete information. Given more information, we are prepared to retract previously drawn inferences.
Agents are in general non-monotonic systems.
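The closed world assumption can be sketched in a few lines of Python (the bird/penguin knowledge base is a hypothetical example, not from the slides): absence of a fact is treated as falsity, so a conclusion can be retracted when new information arrives.

```python
# Non-monotonic reasoning under the closed world assumption:
# anything not present in the knowledge base is taken to be false.

facts = {"bird(tweety)"}

def holds(query, kb):
    """Closed world assumption: a fact not in the KB is false."""
    return query in kb

def can_fly(kb):
    # Default rule: birds fly unless known to be a penguin.
    return holds("bird(tweety)", kb) and not holds("penguin(tweety)", kb)

print(can_fly(facts))           # no evidence tweety is a penguin: flies
facts.add("penguin(tweety)")    # new information arrives ...
print(can_fly(facts))           # ... and the earlier conclusion is retracted
```

Note how the conclusion set shrinks as the premise set grows, which is exactly what a monotonic logic forbids.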


Abductive vs. Deductive

Abductive

A form of inference that works back from observations to the best explanation. Example:

D is a collection of data (facts, observations, givens)
H explains D (or would, if true, explain D)
No other hypothesis explains D as well as H does
Therefore, H is probably correct

Good for diagnosis, plan recognition, natural language understanding, vision
The explanation is not necessarily true


Abductive vs. Deductive

Deductive

Predictive
Works forward from premises to conclusion
Inference rules drive the process
Uses the existence of facts to infer (via rules) the existence of new facts

The conclusion is proven with respect to the available facts


Forward vs. Backward Chaining

Forward

Effectively, an implementation of deduction
Rules are used to deduce new facts from existing facts
The process continues until no more rules apply

Backward

Works backwards from the goal to the current situation
To show that a (sub)goal holds, rules are used to reduce it to their preconditions (the left-hand side of the rule), which become new subgoals
The process moves backwards down the chain of reasoning until no more rules apply
Prolog style
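Forward chaining, as described above, can be sketched in a few lines of Python (the rule format, a pair of precondition set and conclusion, is an assumption made for the sketch):

```python
# Forward chaining: fire rules repeatedly, adding new facts,
# until no rule can add anything further.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for preconds, conclusion in rules:
            # A rule fires when all its preconditions are known facts
            # and its conclusion is new.
            if set(preconds) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"rain"}, "wet_ground"),
         ({"wet_ground"}, "slippery")]
print(forward_chain({"rain"}, rules))  # rain, wet_ground, slippery
```

Backward chaining would instead start from a goal such as "slippery" and recursively try to establish the preconditions of any rule concluding it.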


Essential in all of these

You have a description of the world and a specification of the goal
You have a (possibly vast) search space of things to do
You traverse the search space in some way


Concrete Approaches

Approaches

Case Based Reasoning
Model Based Reasoning
Qualitative Reasoning
Planning Systems
Constraint Satisfaction Reasoning
Rule Based Reasoning (RBS)
Ontology Inference, e.g. RACER/FaCT
Symbolic Reasoning
Logic Programming

Note: these are not necessarily disjoint. For example, there are constraint-logic-based planning systems.


Case Based Reasoning

“I remember solving a problem like this some time ago ...” Requires:

A case base of previous problem-solution pairs
An indexing scheme which classifies problems and cases

When a new problem arises:

Find the closest previous problem(s) and solution(s)
Try to adapt them to the new problem

The challenge is how to create the initial case base
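The retrieve-and-adapt step can be sketched as follows; the case base, its features, and the feature-overlap similarity measure are all hypothetical choices for illustration:

```python
# Case-based reasoning: store problem -> solution pairs, and answer a
# new problem by retrieving the most similar stored case.

case_base = [
    ({"engine_noise", "oil_light"}, "check oil pump"),
    ({"flat_battery"}, "recharge or replace battery"),
]

def retrieve(problem_features):
    """Return the solution of the most similar stored case,
    using crude feature overlap as the similarity measure."""
    def similarity(case):
        features, _solution = case
        return len(features & problem_features)
    _best_features, best_solution = max(case_base, key=similarity)
    return best_solution

# A new problem: reuse the closest previous solution.
print(retrieve({"oil_light", "stalling"}))  # -> "check oil pump"
```

A real CBR system would also adapt the retrieved solution to the new problem and store the new pair back into the case base.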


Model Based Reasoning

“I understand how this system and its components work based on their input parameters”

Component models (e.g. failure modes)
Differential equations, logical models, ...

Combined:

Brute force search algorithms

Often used for system diagnosis:

Why is my washing machine not working?
Why is this electric circuit failing?


Qualitative Reasoning

“Gravity works downwards; if I jump out of this plane I will probably fall”

Approximates physical world properties
Uses “naïve” (but often useful) deduction rules


Planning

“From my current world state I can apply a sequence of possible actions to get to the goal” Different types:

State based: we search the combinations of all actions (domain driven)
Hierarchical Task Network: we search the possible plans (knowledge based)

A lot of different search techniques, world models and reasoning approaches are used


Constraint Satisfaction I

“The world is a set of interdependent choices. If I make one, it may affect another”

Problem:

A set of variables V (each with a possible set of values vi1 ... vin)
A set of constraints linking variables, C(vi1, vi2, vi3), such as “if my trousers are green my shirt should not be blue”
What are the legal combinations of values for each variable? Or: which choices fit together, given the constraints?
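The trousers/shirt example above can be written out directly; this brute-force enumeration is only a sketch, since real solvers propagate constraints and prune the search space rather than enumerate it:

```python
# Constraint satisfaction by exhaustive enumeration: keep every
# assignment of values to variables that satisfies all constraints.
from itertools import product

variables = {"trousers": ["green", "black"], "shirt": ["blue", "white"]}
constraints = [
    # "if my trousers are green my shirt should not be blue"
    lambda a: not (a["trousers"] == "green" and a["shirt"] == "blue"),
]

def solutions():
    names = list(variables)
    for values in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(check(assignment) for check in constraints):
            yield assignment

for s in solutions():
    print(s)   # three of the four combinations survive
```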


Constraint Satisfaction II

Many search techniques

Propagating constraint effects, subdividing the constraint graph, etc.
Related problems: dynamically changing choices/options, uncertainty, ...
But algorithms are typically quite expensive (complexity)
Domain-specific SAT solvers are relatively efficient
Good heuristics


Rule Based Reasoning

“If the light is red, STOP; if it is raining, I must be wet; ...” Functions by:

Accumulating a set of rules relating preconditions to inferences or actions
Maintaining a fact base
Allowing the rules to fire iteratively when the facts fit the rule preconditions
Using heuristics to select one rule when several satisfy their preconditions

Reasoning happens by traversing the facts available


Expert Systems

Expert systems provide expert-quality advice, diagnoses and recommendations on real-world problems. They are designed to perform the function of a human expert. Examples:

Medical diagnosis: the program takes the place of a doctor; given a set of symptoms, the system suggests a diagnosis and treatment
Car fault diagnosis: given the car's symptoms, suggest what is wrong with it


Ontological Reasoning

Means two things:

Adapting the data models in each of the other schemes to use objects in agreed ontologies: that is, using any of the previous approaches, but with the facts represented via ontologies
Description logic reasoning over ontological knowledge (e.g. class membership inference), as in RACER etc.


Symbolic Reasoning

The world, or a portion of it, is represented as formulae in some logic
Reasoning is based on inference. Different types of logic are used:

description logic
temporal logic
BDI modal logic
epistemic logic
...

We will come back to this later in the unit


Logic Programming

“Given this set of rules with pre- and post-conditions, which information can I obtain from it?”

Based on model-theoretic principles
Prolog
Answer Set Programming
Provides possible views of solutions

We will come back to this at the end of the unit


Summary

All these systems are trying to solve a similar problem: traverse a search space of possibilities (or possible facts)
No one approach is better than another
What you want depends on the problem
How you formulate the problem has a big impact on tractability


Agent Architectures

Introduce the idea of an agent as a computer system capable of flexible autonomous action
Briefly discuss the issues one needs to address in order to build agent-based systems
Three types of agent architecture:

symbolic/logical
reactive
hybrid


Agent Architectures I

We want to build agents that enjoy the properties of autonomy, reactiveness, pro-activeness, and social ability: this is the area of agent architectures. Pattie Maes (1991) defines an agent architecture as:

‘[A] particular methodology for building [agents]. It specifies how ... the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions ... and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.’


Agent Architectures II

Leslie Kaelbling (1991) considers an agent architecture to be:

‘[A] specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.’


Types of Agents

1956–present: Symbolic Reasoning Agents. Agents make decisions about what to do via symbol manipulation. In its purest expression, this proposes that agents use explicit logical reasoning in order to decide what to do.
1985–present: Reactive Agents. Problems with symbolic reasoning led to a reaction against it: the reactive agents movement.
1990–present: Hybrid Agents. Hybrid architectures attempt to combine the best of reasoning and reactive architectures.


Symbolic Reasoning Agents

The classical approach to building agents is to view them as a particular type of knowledge-based system, and bring all the associated methodologies of such systems to bear. This paradigm is known as symbolic AI. We define a deliberative agent or agent architecture to be one that:

contains an explicitly represented, symbolic model of the world;
makes decisions (for example about what actions to perform) via symbolic reasoning.


Issues I

1. The transduction problem: that of translating the real world into an accurate, adequate symbolic description, in time for that description to be useful. ... vision, speech understanding, learning.

2. The representation/reasoning problem: that of how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful. ... knowledge representation, automated reasoning, automatic planning.


Issues II

Most researchers accept that neither problem is anywhere near solved. The underlying problem lies with the complexity of symbol manipulation algorithms in general: many (most) search-based symbol manipulation algorithms of interest are highly intractable. Because of these problems, some researchers have looked to alternative techniques for building agents; we look at these later.


Deductive Reasoning Agents

How can an agent decide what to do using theorem proving? The basic idea is to use logic to encode a theory stating the best action to perform in any given situation. Let:

ρ be this theory (typically a set of rules);
∆ be a logical database that describes the current state of the world;
Ac be the set of actions the agent can perform;
∆ ⊢ρ φ mean that φ can be proved from ∆ using ρ.


Algorithm

/* try to find an action explicitly prescribed */
for each a ∈ Ac do
    if ∆ ⊢ρ Do(a) then
        return a
    end-if
end-for
/* try to find an action not excluded */
for each a ∈ Ac do
    if ∆ ⊬ρ ¬Do(a) then
        return a
    end-if
end-for
return null /* no action found */
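A direct Python sketch of this action-selection regime: first any explicitly prescribed action, otherwise any action that is not excluded. The theorem prover is stubbed out as membership in a precomputed set of provable formulae, which is an assumption made purely so the sketch is runnable:

```python
# Deductive agent action selection. `provable` stands in for everything
# derivable from the database Delta under the rules rho.

def select_action(provable, actions):
    # First pass: an action explicitly prescribed, i.e. Do(a) is provable.
    for a in actions:
        if ("Do", a) in provable:
            return a
    # Second pass: an action not excluded, i.e. ¬Do(a) is NOT provable.
    for a in actions:
        if ("NotDo", a) not in provable:
            return a
    return None  # no action found

provable = {("NotDo", "suck"), ("Do", "forward")}
print(select_action(provable, ["turn", "forward", "suck"]))  # -> forward
```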


Example: The Vacuum World (I)

The goal is for the robot to clear up all dirt.

[Figure: a 3 × 3 grid of cells, (0,0) to (2,2).]


Example: The Vacuum World (II)

Use 3 domain predicates in this exercise:

In(x, y): agent is at (x, y)
Dirt(x, y): there is dirt at (x, y)
Facing(d): the agent is facing direction d

Possible actions: Ac = {turn, forward, suck}. NB: turn means “turn right”. Rules ρ for determining what to do:

In(0, 0) ∧ Facing(west) ∧ ¬Dirt(0, 0) → Do(forward)
In(0, 1) ∧ Facing(west) ∧ ¬Dirt(0, 1) → Do(forward)
In(0, 2) ∧ Facing(west) ∧ ¬Dirt(0, 2) → Do(turn)
In(0, 2) ∧ Facing(south) → Do(forward)

... and so on! Using these rules (plus other obvious ones), starting at (0,0) the robot will clear up the dirt.
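The quoted rules translate mechanically into code. This sketch encodes only the four rules shown; the suck rule and the fall-through default are assumptions standing in for the “other obvious” rules:

```python
# Vacuum-world decision rules as a lookup. A percept is the agent's
# position, the direction it faces, and whether there is dirt here.

def decide(pos, facing, dirt_here):
    if dirt_here:
        return "suck"                       # assumed "obvious" rule
    if pos == (0, 0) and facing == "west":
        return "forward"
    if pos == (0, 1) and facing == "west":
        return "forward"
    if pos == (0, 2) and facing == "west":
        return "turn"
    if pos == (0, 2) and facing == "south":
        return "forward"
    return "turn"                           # assumed default

print(decide((0, 2), "west", False))  # -> turn
```

The point of the example is also its weakness: every cell/direction pair needs its own rule, which is why the slide ends with “... and so on!”.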


Considerations

Problems:

how to convert video camera input to Dirt(0, 1)?
decision making assumes a static environment: calculative rationality
decision making using first-order logic is undecidable!

Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems. (NB: co-NP-complete = bad news!) Typical solutions:

weaken the logic;
use symbolic, non-logical representations;
shift the emphasis of reasoning from run time to design time;
accept the worst-case complexity but tailor the algorithms to avoid it as much as possible.


Agent Oriented Programming I

Yoav Shoham introduced “agent-oriented programming” in 1990: a “new programming paradigm, based on a societal view of computation”. The key idea: directly programming agents in terms of intentional notions like belief, commitment, and intention. The motivation behind the proposal is that we humans use the intentional stance as an abstraction mechanism for representing the properties of complex systems. In the same way that we use the intentional stance to describe humans, it might be useful to use it to describe the programming of machines.


Agent Oriented Programming II

Shoham suggested that a complete AOP system will have 3 components:

a logic for specifying agents and describing their mental states;
an interpreted programming language for programming agents;
an ‘agentification’ process, for converting ‘neutral applications’ (e.g. databases) into agents.

The first AOP language: AGENT0


Agent0 (I)

AGENT0 is implemented as an extension to LISP. Each agent in AGENT0 has 4 components:

a set of capabilities (things the agent can do);
a set of initial beliefs;
a set of initial commitments (things the agent will do); and
a set of commitment rules.

The key component, which determines how the agent acts, is the commitment rule set. Each commitment rule contains:

a message condition;
a mental condition; and
an action.


Agent0 (II)

On each ‘decision cycle’:

The message condition is matched against the messages the agent has received.
The mental condition is matched against the beliefs of the agent.
If the rule fires, the agent becomes committed to the action (the action gets added to the agent's commitment set).

Actions may be:

private: an internally executed computation, or
communicative: sending messages.

Messages are constrained to be one of three types:

“requests”, to commit to an action;
“unrequests”, to refrain from actions;
“informs”, which pass on information.
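One step of this decision cycle can be sketched in Python. This is a hypothetical rendering, not AGENT0's actual LISP syntax: a commitment rule is a triple of message condition, mental condition, and action, exactly as listed above.

```python
# One AGENT0-style decision-cycle step: a rule fires when some received
# message satisfies its message condition AND the agent's beliefs satisfy
# its mental condition; the agent then becomes committed to the action.

def step(messages, beliefs, commitments, rules):
    for msg_cond, mental_cond, action in rules:
        if any(msg_cond(m) for m in messages) and mental_cond(beliefs):
            commitments.append(action)   # commitment adopted
    return commitments

rules = [(
    lambda m: m == ("request", "xerox"),   # message condition
    lambda b: "friend(sender)" in b,       # mental condition
    "xerox",                               # action
)]
out = step([("request", "xerox")], {"friend(sender)"}, [], rules)
print(out)  # -> ['xerox']
```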


AGENT0 Reasoning Cycle

[Figure: AGENT0 reasoning cycle. Stages: initialise; update beliefs; update commitments; EXECUTE. Data: messages in, messages out, internal actions, beliefs, commitments, abilities.]

Commitment Rules

An example:

COMMIT(
  (agent, REQUEST, DO(time, action)),       ;;; msg condition
  (B, [now, Friend agent] AND
      CAN(self, action) AND
      NOT [time, CMT(self, anyaction)]),    ;;; mental condition
  self,
  DO(time, action)
)

This rule may be paraphrased as follows: if I receive a message from agent which requests me to do action at time, and I believe that:

agent is currently a friend;
I can do the action;
at time, I am not committed to doing any other action,

then commit to doing action at time.


Current State

AGENT0 provides support for multiple agents to cooperate and communicate, and provides basic provision for debugging. It is, however, a prototype, designed to illustrate some principles rather than to be a production language. A more refined implementation was developed by Thomas for her 1993 doctoral thesis. Her Planning Communicating Agents (PLACA) language was intended to address one severe drawback of AGENT0: the inability of agents to plan, and to communicate requests for action via high-level goals. Agents in PLACA are programmed in much the same way as in AGENT0, in terms of mental change rules.


Example

An example mental change rule:

(((self ?agent REQUEST (?t (xeroxed ?x)))
  (AND (CAN-ACHIEVE (?t xeroxed ?x))
       (NOT (BEL (*now* shelving)))
       (NOT (BEL (*now* (vip ?agent)))))
  ((ADOPT (INTEND (5pm (xeroxed ?x)))))
  ((?agent self INFORM (*now* (INTEND (5pm (xeroxed ?x))))))))

Paraphrased: if someone asks you to xerox something, and you can, and you don't believe that they're a VIP, or that you're supposed to be shelving books, then:

adopt the intention to xerox it by 5pm, and
inform them of your newly adopted intention.


What is Practical Reasoning?

Practical reasoning is reasoning directed towards actions: the process of figuring out what to do:

Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes. (Bratman)

Distinguish practical reasoning from theoretical reasoning. Theoretical reasoning is directed towards beliefs. Human practical reasoning consists of two activities:

deliberation: deciding what states of affairs we want to achieve; the outputs of deliberation are intentions;
means-ends reasoning: deciding how to achieve these states of affairs; the outputs of means-ends reasoning are plans.


Intentions in Practical Reasoning (I)

Intentions pose problems for agents, who need to determine ways of achieving them.

If I have an intention to φ, you would expect me to devote resources to deciding how to bring about φ

Intentions provide a “filter” for adopting other intentions, which must not conflict.

If I have an intention to φ, you would not expect me to adopt an intention ψ that was incompatible with φ.


Intentions in Practical Reasoning (II)

Agents track the success of their intentions, and are inclined to try again if their attempts fail.

If an agent’s first attempt to achieve φ fails, then all other things being equal, it will try an alternative plan to achieve φ.

Agents believe their intentions are possible.

That is, they believe there is at least some way that the intentions could be brought about.


Intentions in Practical Reasoning (III)

Agents do not believe they will not bring about their intentions.

It would not be rational of me to adopt an intention to φ if I believed I would fail with φ.

Under certain circumstances, agents believe they will bring about their intentions.

If I intend φ, then I believe that under “normal circumstances” I will succeed with φ


Intentions in Practical Reasoning (IV)

Agents need not intend all the expected side effects of their intentions.

If I believe φ ⇒ ψ and I intend that φ, I do not necessarily intend ψ also.

Intentions are not closed under implication. This last problem is known as the side effect or package deal problem. I may believe that going to the dentist involves pain, and I may also intend to go to the dentist, but this does not imply that I intend to suffer pain!


Intentions vs. Desires

Intentions are much stronger than mere desires:

My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [...] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions. (Bratman, 1990)

Agents hold Beliefs, Desires and Intentions (BDI)

brf: ρ(Bel) × Per → ρ(Bel)
options: ρ(Bel) × ρ(Int) → ρ(Des)
filter: ρ(Bel) × ρ(Des) × ρ(Int) → ρ(Int)


Means-Ends Reasoning - Planning I

Since the early 1970s, the AI planning community has been closely concerned with the design of artificial agents. Within the symbolic AI community, it has long been assumed that some form of AI planning system will be a central component of any artificial agent. Building largely on the early work of Fikes & Nilsson, many planning algorithms have been proposed, and the theory of planning has been well developed.

STRIPS: the first planner – uses first-order formulae and action schemata.

Planning is the design of a course of action that will achieve some desired goal.




Means-Ends Reasoning - Planning II

Basic idea is to give a planning system:

a (representation of) the goal/intention to achieve;
a (representation of) the actions it can perform; and
a (representation of) the environment;

and have it generate a plan to achieve the goal.

This is automatic programming.


Planners

[Diagram: a planner takes as input a goal/intention/task, the state of the environment, and the possible actions, and outputs a plan to achieve the goal.]

Question: How do we represent...

the goal to be achieved;
the state of the environment;
the actions available to the agent;
the plan itself.


Blocks World (I)

[Figure: three blocks – A stacked on top of B, with C beside them on the table.]

We will illustrate the technique with reference to the blocks world. It contains a robot arm, 3 blocks (A, B, C) of equal size, and a table top.




Blocks World (II)

To represent this environment, we need an ontology:

On(x,y) – object x is on top of object y
OnTable(x) – object x is on the table
Clear(x) – nothing is on top of object x
Holding(x) – the arm is holding object x

Here is a representation of the blocks world described above: Clear(A), On(A,B), OnTable(B), OnTable(C). We use the closed world assumption: anything not stated is assumed to be false.
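Under the closed world assumption, the state can be sketched as a plain set of ground atoms, with a query holding exactly when its atom is present. Atoms-as-strings is an illustrative encoding chosen for this sketch, not the STRIPS representation itself; ArmEmpty is an added assumption so that the action descriptors introduced later can fire.

```python
# Blocks-world state as a set of ground atoms (illustrative encoding).
# ArmEmpty is an assumption added for the later action examples.
state = {"Clear(A)", "On(A,B)", "OnTable(B)", "OnTable(C)", "ArmEmpty"}

def holds(atom, state):
    """Closed world assumption: anything not stated is false."""
    return atom in state

print(holds("On(A,B)", state))   # True
print(holds("On(B,C)", state))   # False: not stated, so assumed false
```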


Blocks World (III)

A goal is represented as a set of formulae. Here is a goal: OnTable(A), OnTable(B), OnTable(C).

Actions are represented using a technique that was developed in the STRIPS planner. Each action has:

a name: which may have arguments;
a precondition list: list of facts which must be true for the action to be executed;
a delete list: list of facts that are no longer true after the action is performed;
an add list: list of facts made true by executing the action.

Each of these may contain variables.


Blocks World (IV)

Example 1: The stack action occurs when the robot arm places the object x it is holding on top of object y.

Stack(x,y)
pre Clear(y) ∧ Holding(x)
del Clear(y) ∧ Holding(x)
add ArmEmpty ∧ On(x,y)

Example 2: The unstack action occurs when the robot arm picks an object x up from on top of another object y.

UnStack(x,y)
pre On(x,y) ∧ Clear(x) ∧ ArmEmpty
del On(x,y) ∧ ArmEmpty
add Holding(x) ∧ Clear(y)

Stack and UnStack are inverses of one another.




Blocks World (V)

Example 3: The pickup action occurs when the arm picks up an object x from the table.

Pickup(x)
pre Clear(x) ∧ OnTable(x) ∧ ArmEmpty
del OnTable(x) ∧ ArmEmpty
add Holding(x)


Blocks World (VI)

Example 4: The putdown action occurs when the arm places the object x onto the table.

PutDown(x)
pre Holding(x)
del Holding(x)
add OnTable(x) ∧ ArmEmpty

What is a plan? A sequence (list) of actions, with variables replaced by constants.
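The action descriptors above can be sketched as {pre, del, add} sets of ground atoms: an action fires only if its precondition list holds, and executing it removes the delete list and unions in the add list. This is an illustrative encoding (atoms as strings, ArmEmpty added to the initial state since the descriptors assume it), not the STRIPS implementation.

```python
# Sketch of two of the STRIPS descriptors above, instantiated for the
# plan that takes A off B and puts it on the table.

def unstack(x, y):
    # pre On(x,y) ∧ Clear(x) ∧ ArmEmpty; del On(x,y) ∧ ArmEmpty;
    # add Holding(x) ∧ Clear(y)
    return {"pre": {f"On({x},{y})", f"Clear({x})", "ArmEmpty"},
            "del": {f"On({x},{y})", "ArmEmpty"},
            "add": {f"Holding({x})", f"Clear({y})"}}

def putdown(x):
    # pre Holding(x); del Holding(x); add OnTable(x) ∧ ArmEmpty
    return {"pre": {f"Holding({x})"},
            "del": {f"Holding({x})"},
            "add": {f"OnTable({x})", "ArmEmpty"}}

def apply_action(state, action):
    # Fire only if the precondition holds; remove del list, add add list.
    assert action["pre"] <= state, "precondition not satisfied"
    return (state - action["del"]) | action["add"]

state = {"Clear(A)", "On(A,B)", "OnTable(B)", "OnTable(C)", "ArmEmpty"}

# The plan (UnStack(A,B); PutDown(A)) achieves the goal
# {OnTable(A), OnTable(B), OnTable(C)}:
for act in [unstack("A", "B"), putdown("A")]:
    state = apply_action(state, act)

print({"OnTable(A)", "OnTable(B)", "OnTable(C)"} <= state)  # True
```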


Planning Theory (I)

Ac = {α1, . . . , αn} : a fixed set of actions
⟨Pα, Dα, Aα⟩ : a descriptor for an action α ∈ Ac, where:

Pα is a set of formulae of first-order logic that characterise the precondition of action α;
Dα is a set of formulae of first-order logic that characterise those facts made false by the performance of α (the delete list);
Aα is a set of formulae of first-order logic that characterise those facts made true by the performance of α (the add list).

A planning problem is a triple ⟨∆, O, γ⟩, where ∆ represents the initial state of the environment, O is the set of action descriptors, and γ represents the goal.




Planning Theory (II)

π = (α1, . . . , αn) : a plan with respect to a planning problem ⟨∆, O, γ⟩ determines a sequence of n + 1 models:

∆0, ∆1, . . . , ∆n

where ∆0 = ∆ and ∆i = (∆i−1 \ Dαi) ∪ Aαi for 1 ≤ i ≤ n.

π is acceptable iff ∆i−1 ⊢ Pαi for all 1 ≤ i ≤ n.

π is correct iff π is acceptable, and ∆n ⊢ γ.
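These definitions can be checked mechanically if we simplify entailment ⊢ to subset inclusion over ground atoms – an assumption of this sketch; the slides use first-order entailment. Actions use the same hypothetical {pre, del, add} dictionary encoding as the earlier examples.

```python
# Sketch of plan acceptability and correctness, with "⊢" simplified to
# ground-atom subset inclusion (an assumption of this sketch).

def models_of(delta, plan):
    """Delta_0 = Delta; Delta_i = (Delta_{i-1} minus D_ai) union A_ai."""
    models = [set(delta)]
    for a in plan:
        models.append((models[-1] - a["del"]) | a["add"])
    return models

def acceptable(delta, plan):
    # Delta_{i-1} must entail the precondition of each action a_i.
    ms = models_of(delta, plan)
    return all(a["pre"] <= ms[i] for i, a in enumerate(plan))

def correct(delta, plan, gamma):
    # Correct = acceptable, and the final model entails the goal.
    return acceptable(delta, plan) and gamma <= models_of(delta, plan)[-1]

# Tiny hypothetical problem: one action making q true from p.
a1 = {"pre": {"p"}, "del": set(), "add": {"q"}}
print(correct({"p"}, [a1], {"p", "q"}))  # True
```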


Implementing Practical Reasoning Agents

A first pass at an implementation of a practical reasoning agent:

Agent Control Loop Version 1
1. while true
2.    observe the world;
3.    update internal world model;
4.    deliberate about what intention to achieve next;
5.    use means-ends reasoning to get a plan for the intention;
6.    execute the plan
7. end while

We will not be concerned with stages (2) or (3).


Timing Problem I

Problem: deliberation and means-ends reasoning processes are not instantaneous. They have a time cost.

Suppose that deliberation is optimal, in that if it selects some intention to achieve, then this is the best thing for the agent (it maximises expected utility).

So the agent selects an intention to achieve that would have been optimal at the time it observed the world. This is calculative rationality. But the world may change in the meantime.




Timing Problem II

Deliberation is only half of the problem: the agent still has to determine how to achieve the intention. So the agent will have overall optimal behaviour only in the following circumstances:

1 when deliberation and means-ends reasoning take a vanishingly small amount of time; or
2 when the world remains static while the agent is deliberating and performing means-ends reasoning; or
3 when the optimal intention remains optimal until the agent has found a way of achieving it.


Algorithm Second Attempt

Let’s make the algorithm more formal.

Agent Control Loop Version 2
1. B := B0;   /* initial beliefs */
2. while true do
3.    get next percept p;
4.    B := brf(B, p);
5.    I := deliberate(B);
6.    pi := plan(B, I);
7.    execute(pi)
8. end while


Deliberation

How does an agent deliberate?

begin by trying to understand what options are available to you;
then choose between them, and commit to some.

The chosen options are then intentions. The deliberate function can be decomposed into two distinct functional components:

Option generation: the agent generates a set of possible alternatives. We represent option generation via a function, options, which takes the agent’s current beliefs and current intentions, and from them determines a set of options (= desires).

Filtering: the agent chooses between competing alternatives, and commits to achieving some of them. In order to select between competing options, an agent uses a filter function.




Algorithm Third Attempt

Control Loop Version 3
1. B := B0;
2. I := I0;
3. while true do
4.    get next percept p;
5.    B := brf(B, p);
6.    D := options(B, I);
7.    I := filter(B, D, I);
8.    pi := plan(B, I);
9.    execute(pi)
10. end while


Commitment Strategies

Some time in the not-so-distant future, you are having trouble with your new household robot. You say “Willie, bring me a beer.” The robot replies “OK boss”. Twenty minutes later, you screech “Willie, why didn’t you bring me that beer?” It answers “Well, I intended to get you the beer, but I decided to do something else.” Miffed, you send the wise guy back to the manufacturer, complaining about a lack of commitment. After retrofitting, Willie is returned, marked “Model C: The Committed Assistant.” Again, you ask Willie to bring you a beer. Again, it accedes, replying “Sure thing”. Then you ask: “What kind of beer did you buy?” It answers: “Genessee”. You say “Never mind.” One minute later, Willie trundles over with a Genessee in its gripper. This time, you angrily return Willie for overcommitment. After still more tinkering, the manufacturer sends Willie back, promising no more problems with its commitments. So, being a somewhat trusting customer, you accept the rascal back into your household, but as a test, you ask it to bring you your last beer. [...] The robot gets the beer and starts towards you. As it approaches, it lifts its arm, wheels around, deliberately smashes the bottle, and trundles off. Back at the plant, when interrogated by customer service as to why it had abandoned its commitments, the robot replies that according to its specifications, it kept its commitments as long as required — commitments must be dropped when fulfilled or impossible to achieve. By smashing the bottle, the commitment became unachievable.

Forms of Commitment I

The following strategies are commonly discussed in the literature on rational agents:

Blind commitment: a blindly committed agent will continue to maintain an intention until it believes the intention has actually been achieved. Blind commitment is also sometimes referred to as fanatical commitment.

Single-minded commitment: a single-minded agent will continue to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve the intention.

Open-minded commitment: an open-minded agent will maintain an intention as long as it is still believed possible.




Forms of Commitment II

An agent has commitment both to ends (i.e., the state of affairs it wishes to bring about) and to means (i.e., the mechanism via which the agent wishes to achieve that state of affairs). Currently, our agent control loop is overcommitted, both to means and to ends. Modification: replan if ever a plan goes wrong.


Algorithm Fourth Attempt

Control Loop Version 4
1. B := B0;
2. I := I0;
3. while true do
4.    get next percept p;
5.    B := brf(B, p);
6.    D := options(B, I);
7.    I := filter(B, D, I);
8.    pi := plan(B, I);
9.    while not empty(pi) do
10.      a := hd(pi);
11.      execute(a);
12.      pi := tail(pi);
13.      get next percept p;
14.      B := brf(B, p);
15.      if not sound(pi, I, B) then
16.         pi := plan(B, I)
17.      end-if
18.   end-while
19. end-while


Still problems

Still overcommitted to intentions: the agent never stops to consider whether or not its intentions are appropriate. Modification: stop to determine whether intentions have succeeded or whether they are impossible: single-minded commitment.




Algorithm Fifth Attempt

Control Loop Version 5
1. B := B0;
2. I := I0;
3. while true do
4.    get next percept p;
5.    B := brf(B, p);
6.    D := options(B, I);
7.    I := filter(B, D, I);
8.    pi := plan(B, I);
9.    while not (empty(pi) or succeeded(I, B) or impossible(I, B)) do
10.      a := hd(pi);
11.      execute(a);
12.      pi := tail(pi);
13.      get next percept p;
14.      B := brf(B, p);
15.      if not sound(pi, I, B) then
16.         pi := plan(B, I)
17.      end-if
18.   end-while
19. end-while


Intention Reconsideration

Our agent gets to reconsider its intentions once every time around the outer control loop, i.e., when:

it has completely executed a plan to achieve its current intentions; or it believes it has achieved its current intentions; or it believes its current intentions are no longer possible.

This is limited in the way that it permits an agent to reconsider its intentions. Modification: Reconsider intentions after executing every action.


Algorithm Sixth Attempt

Control Loop Version 6
1. B := B0;
2. I := I0;
3. while true do
4.    get next percept p;
5.    B := brf(B, p);
6.    D := options(B, I);
7.    I := filter(B, D, I);
8.    pi := plan(B, I);
9.    while not (empty(pi) or succeeded(I, B) or impossible(I, B)) do
10.      a := hd(pi);
11.      execute(a);
12.      pi := tail(pi);
13.      get next percept p;
14.      B := brf(B, p);
15.      D := options(B, I);
16.      I := filter(B, D, I);
17.      if not sound(pi, I, B) then
18.         pi := plan(B, I)
19.      end-if
20.   end-while
21. end-while




But ...

But intention reconsideration is costly! A dilemma:

an agent that does not stop to reconsider its intentions sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them;

an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never actually achieving them.

Solution: incorporate an explicit metalevel control component, that decides whether or not to reconsider.


Algorithm Seventh Attempt

Control Loop Version 7 1. 2. B := B0 ; 3. I := I0 ; 4. while true do 5. get next percept p ; 6. B := b r f (B; p ) ; 7. D :=

  • ptions (B, I ) ;

8. I := f i l t e r (B,D, I ) ; 9. pi := plan (B, I ) ; 10. while not empty ( pi )

  • r succeeded ( I ,B)
  • r

impossible ( I ,B) do 11. a := hd ( pi ) ; 12. execute ( a ) ;

✡ ✝ ✆ ✞

13. pi := t a i l ( pi ) ; 14. get next percept p ; 15. B := b r f (B, p ) ; 16. i f reconsider ( I ,B) then 17. D :=

  • ptions (B, I ) ;

18. I := f i l t e r (B,D, I ) ; 19. end i f 20. i f not sound ( pi , I ,B) then 21. pi := plan (B, I ) 22. end i f 23. end while 24. end while

✡ ✝ ✆
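Control Loop Version 7 can be transcribed into runnable Python. Every component function (brf, options, filter, plan, succeeded, impossible, sound, reconsider, execute) is a caller-supplied stub – the names mirror the pseudocode, and the bounded max_cycles parameter is an assumption so the demonstration terminates; only the control structure follows the slides.

```python
# Runnable transcription of Control Loop Version 7. All component
# functions are caller-supplied stubs; "while true" is bounded by
# max_cycles so the demonstration terminates.

def control_loop(B, I, next_percept, brf, options, filter_, plan,
                 succeeded, impossible, sound, reconsider, execute,
                 max_cycles=1):
    for _ in range(max_cycles):                  # while true do
        p = next_percept()
        B = brf(B, p)
        D = options(B, I)
        I = filter_(B, D, I)
        pi = plan(B, I)
        while not (len(pi) == 0 or succeeded(I, B) or impossible(I, B)):
            a, pi = pi[0], pi[1:]                # a := hd(pi); pi := tail(pi)
            execute(a)
            p = next_percept()
            B = brf(B, p)
            if reconsider(I, B):                 # meta-level control
                D = options(B, I)
                I = filter_(B, D, I)
            if not sound(pi, I, B):
                pi = plan(B, I)
    return B, I

# Demonstration with trivial stubs: a two-step plan is executed once.
log = []
control_loop(
    B={"start"}, I=set(),
    next_percept=lambda: "p",
    brf=lambda B, p: B | {p},
    options=lambda B, I: {"goal"},
    filter_=lambda B, D, I: I | D,
    plan=lambda B, I: ["a1", "a2"],
    succeeded=lambda I, B: False,
    impossible=lambda I, B: False,
    sound=lambda pi, I, B: True,
    reconsider=lambda I, B: False,
    execute=log.append)
print(log)  # ['a1', 'a2']
```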


Summary

Different reasoning strategies
Properties of reasoning
Agent architectures
Deductive reasoning agents
Agent Oriented Programming
The Belief-Desire-Intention approach
Practical reasoning




Directed and Additional Reading

Wooldridge, Chapter 3
Wooldridge, Chapter 4
Wooldridge, M. (2002). An Introduction to Multiagent Systems. Wiley. ISBN: 0-471-49691-X.
