

SLIDE 1

Intelligent Agents
Christian Jacob
Table of Contents (numbers refer to slides)

1 Defining Agents ... 2
2 How Agents Should Act ... 3
    2.1 Mapping from Percept Sequences to Actions ... 5
    2.2 Autonomy ... 6
3 Designs of Intelligent Agents ... 7
    3.1 Architecture and Program ... 7
    3.2 Agent Programs ... 9
    3.3 Simple Lookup? ... 11
    3.4 Example — An Automated Taxi Driver ... 13
4 Types of Agents ... 15
    4.1 Simple Reflex Agents ... 16
    4.2 Agents that Keep Track of the World ... 19
    4.3 Goal-Based Agents ... 22
    4.4 Utility-Based Agents ... 24
5 Environments ... 26
    5.1 Properties of Environments ... 26
    5.2 Environment Programs ... 29

SLIDE 2

1 Defining Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.

[Figure: the agent receives percepts from the environment through its sensors and acts on the environment through its effectors.]

SLIDE 3

2 How Agents Should Act

Rational Agent

A rational agent performs the actions that cause it to be most successful, as judged by a performance measure that determines how and when the agent is evaluated. What is rational at any given time depends on four things:

  • The performance measure that defines the degree of success
  • The percept sequence: the complete history of everything the agent has perceived so far
  • The agent’s knowledge about the environment
  • The actions that the agent can perform

This leads to a definition of an ideal rational agent ...

SLIDE 4

Ideal Rational Agent

For each possible percept sequence, an ideal rational agent should do

  • whatever action is expected to maximize its performance measure,
  • on the basis of the evidence provided by the percept sequence and
  • whatever built-in knowledge the agent has.

Example: crossing a busy road requires looking first, i.e., acting to gather information. Doing actions in order to obtain useful information is an important part of rationality. The notion of an agent is meant to be a tool for analyzing systems.

SLIDE 5

2.1 Mapping from Percept Sequences to Actions

Any particular agent can be described by making a table of the action it takes in response to each possible percept sequence. Such a list is called a mapping from percept sequences to actions. However, this does not mean that we have to create an explicit table with an entry for every possible percept sequence (compare the square root example).

Percept x | Action z
1.0 | 1.000000
1.1 | 1.048808
1.2 | 1.095445
1.3 | 1.140175
1.4 | 1.183215
1.5 | 1.224744
1.6 | 1.264911
1.7 | 1.303840
1.8 | 1.341640
1.9 | 1.378404
... | ...

function SQRT( x )
    z := 1.0    /* initial guess */
    repeat until |z^2 - x| < 10^-15
        z := z - (z^2 - x) / (2 z)
    end
    return z
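The SQRT function above can be written out as runnable code; this is a minimal sketch in Python of the same Newton iteration, using the tolerance from the slide.

```python
def sqrt_newton(x: float) -> float:
    """Newton's method for the square root, mirroring the slide's SQRT pseudocode."""
    z = 1.0  # initial guess
    while abs(z * z - x) >= 1e-15:  # repeat until |z^2 - x| < 10^-15
        z = z - (z * z - x) / (2 * z)
    return z

# reproduce a few rows of the percept/action table
for x in (1.0, 1.1, 1.9):
    print(f"{x:.1f} -> {sqrt_newton(x):.6f}")
```

The point of the example carries over directly: the unbounded percept-to-action table is replaced by a short program that computes the mapping on demand.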

SLIDE 6

2.2 Autonomy

Assume an agent’s actions are based completely on built-in knowledge. Then it need not pay attention to its percepts. This agent clearly lacks autonomy. An agent’s behavior can be based on both its

  • own experience and
  • built-in knowledge

A system is autonomous to the extent that its behavior is determined by its own experience.

=> Agents should be provided with
  • initial knowledge (compare animal reflexes) and
  • the ability to learn.
SLIDE 7

3 Designs of Intelligent Agents

The task of AI is to design agent programs: functions that implement the agent mapping from percepts to actions. The computing device on which we assume this program runs is referred to as the architecture.

3.1 Architecture and Program

The architecture might include special-purpose hardware (camera, microphone, etc.). The software might provide a degree of insulation between the raw computer hardware and the agent program, enabling programming at a higher level.

agent = architecture + program

What matters is not the distinction between “real” and “artificial” environments, but the complexity of the relationship among the behavior of the agent, the percept sequence generated by the environment, and the goals that the agent is supposed to achieve.

SLIDE 8

Percepts and Actions for a Selection of Agent Types

Agent Type | Percepts | Actions | Goals | Environment
Medical diagnosis system | Symptoms, findings, patient’s answers | Questions, tests, treatments | Healthy patient, minimize cost | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color | Print a categorization of scene | Correct categorization | Images from orbiting satellite
Part-picking robot | Pixels of varying intensity | Pick up parts and sort into bins | Place parts in correct bins | Conveyor belt with parts
Refinery controller | Temperature, pressure readings | Open, close valves; adjust temperature | Maximize purity, yield, safety | Refinery
Interactive English tutor | Typed words | Print exercises, suggestions, corrections | Maximize student’s score on test | Set of students

SLIDE 9

3.2 Agent Programs

All agents and agent programs accept percepts from an environment and generate actions.

function SKELETON-AGENT( percept ) returns action
    static: memory, the agent’s memory of the world

    memory <- UPDATE-MEMORY( memory, percept )
    action <- CHOOSE-BEST-ACTION( memory )
    memory <- UPDATE-MEMORY( memory, action )
    return action
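As a sketch, the skeleton translates naturally into a Python class whose instance state plays the role of the pseudocode’s “static” memory; the echo policy used below is a hypothetical stand-in for CHOOSE-BEST-ACTION.

```python
from typing import Any, Callable

class SkeletonAgent:
    """Minimal sketch of SKELETON-AGENT: memory persists across calls."""

    def __init__(self, choose_best_action: Callable[[list], Any]):
        self.memory: list = []           # the agent's memory of the world
        self.choose = choose_best_action

    def __call__(self, percept: Any) -> Any:
        self.memory.append(percept)          # UPDATE-MEMORY(memory, percept)
        action = self.choose(self.memory)    # CHOOSE-BEST-ACTION(memory)
        self.memory.append(action)           # UPDATE-MEMORY(memory, action)
        return action

# hypothetical policy: echo the latest percept back as the action
agent = SkeletonAgent(lambda mem: mem[-1])
print(agent("ping"))  # -> ping
```

Note that the agent object receives only one percept per call; building up the full percept sequence in memory is the agent’s own job, as the next slide emphasizes.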

SLIDE 10

Remarks on agent programs:

Percept sequence

  • The agent program receives only a single percept as its input.
  • It is up to the agent to build up the percept sequence in memory.

Performance measure

  • The goal or performance measure is not part of the agent program.
  • The performance evaluation is applied externally.
SLIDE 11

3.3 Simple Lookup?

A lookup table is the simplest possible way of writing an agent program. The agent keeps its entire percept sequence in memory and uses it to index into a table that contains the appropriate action for every possible percept sequence.

function TABLE-DRIVEN-AGENT( percept ) returns action
    static: percepts, a sequence, initially empty
            table, a table indexed by percept sequences, initially fully specified

    append percept to the end of percepts
    action <- LOOKUP( percepts, table )
    return action
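A minimal Python sketch of the table-driven agent; the tiny table, with its vacuum-world-style percepts and actions, is purely illustrative.

```python
class TableDrivenAgent:
    """Sketch of TABLE-DRIVEN-AGENT: the table maps entire percept
    sequences (stored as tuples) to actions."""

    def __init__(self, table: dict):
        self.percepts: list = []  # percept sequence, initially empty
        self.table = table

    def __call__(self, percept):
        self.percepts.append(percept)                 # append to percepts
        return self.table.get(tuple(self.percepts))   # LOOKUP(percepts, table)

# hypothetical table for a two-step interaction
table = {
    ("clean",): "right",
    ("clean", "dirty"): "suck",
}
agent = TableDrivenAgent(table)
print(agent("clean"))  # -> right
print(agent("dirty"))  # -> suck
```

Even in this toy setting the table must enumerate every percept *sequence*, not every percept, which is exactly why the next slide declares the approach doomed.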

SLIDE 12

Why is this TABLE-DRIVEN-AGENT proposal doomed to failure?

  • Table size:

The table needed for something as simple as an agent that can only play chess would have about 35^100 entries.

  • Time to build:

It would take an enormous amount of time to build complete tables.

  • Lack of autonomy:

The agent has no autonomy at all, because the calculation of best actions is entirely built-in. If the environment changed in some unexpected way, the agent would be lost.

  • Lack or infeasibility of learning:

Even if the agent were given a learning mechanism, so that it could have a degree of autonomy, it would take forever to learn the right values for all the table entries.

SLIDE 13

3.4 Example — An Automated Taxi Driver

The full driving task is extremely open-ended: there is no limit to the novel combinations of circumstances that can arise. First, we have to think about the percepts, actions, goals, and environment for the taxi.

Agent Type | Percepts | Actions | Goals | Environment
Taxi driver | Cameras, speedometer, GPS, sonar, microphone | Steer, accelerate, brake; talk to passenger; communicate with other vehicles | Safe, fast, legal, comfortable trip; maximize profits | Roads, other traffic, pedestrians, customers

SLIDE 14

Performance measures for the taxi driver agent:

  • Getting to the correct destination
  • Minimizing fuel consumption and wear and tear
  • Minimizing the trip time and/or cost
  • Minimizing violations of traffic laws
  • Minimizing disturbances to other drivers
  • Maximizing safety and passenger comfort
  • Maximizing profits

Obviously, some of these goals conflict, so there will be trade-offs involved.

Driving environments:

  • local roads or highways
  • weather conditions
  • left or right lane driving
SLIDE 15

4 Types of Agents

We have to decide how to build a real program to implement the mapping from percepts to actions for the taxi driver agent. Different aspects of driving suggest different types of agent programs. We will consider four types of agent programs:

  • Simple reflex agents
  • Agents that keep track of the world
  • Goal-based agents
  • Utility-based agents
SLIDE 16

4.1 Simple Reflex Agents

Instead of constructing a lookup table for the percept-action mapping, we can summarize portions of the table by noting certain commonly occurring input/output associations. This can be accomplished by condition-action rules.

Example: if car-in-front-is-braking then initiate-braking

Humans (and animals in general) have many such connections,

  • some of which are learned responses (e.g., driving) and
  • some of which are innate reflexes.
SLIDE 17

Schematic diagram of a simple reflex agent

[Diagram: the agent’s Sensors determine “What the world is like now”; Condition-action rules then select “What action I should do now”, which the Effectors carry out in the Environment.]

SLIDE 18

A simple reflex agent

function SIMPLE-REFLEX-AGENT( percept ) returns action
    static: rules, a set of condition-action rules

    state <- INTERPRET-INPUT( percept )
    rule <- RULE-MATCH( state, rules )
    action <- RULE-ACTION[ rule ]
    return action

This works only if the correct decision can be made on the basis of the current percept.
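A minimal Python sketch of the simple reflex agent; the rule representation (predicate/action pairs) and the single braking rule are illustrative assumptions, not a fixed format.

```python
def simple_reflex_agent(percept: str, rules) -> str:
    """Sketch of SIMPLE-REFLEX-AGENT over condition-action rules."""
    state = percept.lower().strip()     # INTERPRET-INPUT(percept)
    for condition, action in rules:     # RULE-MATCH(state, rules)
        if condition(state):
            return action               # RULE-ACTION[rule]
    return "no-op"                      # no rule matched

# hypothetical rule set with the slide's example rule
rules = [
    (lambda s: s == "car-in-front-is-braking", "initiate-braking"),
]
print(simple_reflex_agent("car-in-front-is-braking", rules))  # -> initiate-braking
```

Because the function looks only at the current percept, it cannot express behaviors that depend on history, which motivates the stateful agent on the next slides.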

SLIDE 19

4.2 Agents that Keep Track of the World

To determine whether a vehicle in front is braking, the agent has to keep the previous frame from the camera, so that it can detect when the two red lights at the edge of the vehicle go on or off simultaneously. Hence, the driving agent will have to maintain some sort of internal state. Two kinds of knowledge have to be encoded in the agent program:

  • Information about how the world evolves independently of the agent.
  • Information about how the agent’s own actions will affect the world.
SLIDE 20

Schematic diagram of a reflex agent with internal state

[Diagram: as in the simple reflex agent, but the agent also maintains an internal State, updated using knowledge of “How the world evolves” and “What my actions do”, before the Condition-action rules choose an action.]

SLIDE 21

A reflex agent with internal state

function REFLEX-AGENT-WITH-STATE( percept ) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules

    state <- UPDATE-STATE( state, percept )
    rule <- RULE-MATCH( state, rules )
    action <- RULE-ACTION[ rule ]
    state <- UPDATE-STATE( state, action )
    return action

However, knowing about the current state of the environment is not always enough to decide what to do.
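The same schema as a Python sketch; the dictionary-based state and the single hypothetical rule stand in for UPDATE-STATE and a real rule set.

```python
class ReflexAgentWithState:
    """Sketch of REFLEX-AGENT-WITH-STATE: state persists between calls and
    is updated from both the percept and the chosen action."""

    def __init__(self, rules):
        self.state: dict = {}
        self.rules = rules

    def __call__(self, percept):
        self.state["last_percept"] = percept      # UPDATE-STATE(state, percept)
        action = "no-op"
        for condition, act in self.rules:         # RULE-MATCH(state, rules)
            if condition(self.state):
                action = act                      # RULE-ACTION[rule]
                break
        self.state["last_action"] = action        # UPDATE-STATE(state, action)
        return action

# hypothetical rule: brake when the lights ahead come on
agent = ReflexAgentWithState(
    [(lambda s: s["last_percept"] == "red-lights-on", "brake")]
)
print(agent("red-lights-on"))  # -> brake
```

A real braking detector would compare the current frame with the stored previous one; the `last_percept`/`last_action` entries here are the smallest state that illustrates the update pattern.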

SLIDE 22

4.3 Goal-Based Agents

Besides a description of the current state, the agent needs some sort of goal information, which describes situations that are desirable. The agent program can combine this with information about the results of possible actions in order to choose actions that achieve the goal. Achieving the goal may involve

  • a single action or
  • (long) sequences of actions.

The subfields of AI devoted to finding action sequences that achieve an agent’s goals are

  • searching and
  • planning.

Goal-based agents take the future into consideration and react more flexibly to changes in the environment.

SLIDE 23

Schematic diagram of an agent with explicit goals

[Diagram: in addition to State and world knowledge, the agent projects “What it will be like if I do action A” and compares the predicted outcome against its Goals to decide “What action I should do now”.]

SLIDE 24

4.4 Utility-Based Agents

Goals alone are not really enough to generate high-quality behavior. There might be different ways (action sequences) of achieving a specific goal. If one world state is preferred to another, then it has higher utility for the agent. Utility is therefore a function that maps a state onto a real number, which describes the associated degree of “happiness”. A complete specification of the utility function allows rational decisions in two kinds of cases:

  • When there are conflicting goals, only some of which can be achieved, the utility function specifies the appropriate trade-off.
  • When there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
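The second case can be sketched numerically: with hypothetical success probabilities and utilities for two competing goals, the agent picks the goal with the highest expected utility.

```python
# Hypothetical utilities and success probabilities for competing goals;
# expected utility = P(success) * U(state reached).
goals = {
    "fast_route":   {"p_success": 0.6, "utility": 10.0},
    "scenic_route": {"p_success": 0.9, "utility": 5.0},
}

def best_goal(goals: dict) -> str:
    """Pick the goal with the highest expected utility."""
    return max(goals, key=lambda g: goals[g]["p_success"] * goals[g]["utility"])

print(best_goal(goals))  # -> fast_route (0.6 * 10.0 = 6.0 beats 0.9 * 5.0 = 4.5)
```

The numbers are invented for illustration; the point is that utility lets the agent trade likelihood of success against the importance of each goal in a single comparison.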

SLIDE 25

Schematic diagram of a complete utility-based agent

[Diagram: the agent projects “What it will be like if I do action A” and evaluates “How happy I will be in such a state” with its Utility function before choosing “What action I should do now”.]

SLIDE 26

5 Environments

5.1 Properties of Environments

  • Accessible vs. inaccessible.

If an agent’s sensory apparatus gives it access to the complete state of the environment, the environment is accessible to that agent. An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action. In an accessible environment, an agent need not maintain any internal state to keep track of the world.

  • Deterministic vs. nondeterministic

In a deterministic environment, the next state of the environment is completely determined by the current state and the actions selected by the agents. An agent need not worry about uncertainty in an accessible, deterministic environment. If the environment is inaccessible, however, it may appear to the agent to be nondeterministic.

SLIDE 27

  • Episodic vs. nonepisodic

In an episodic environment, the agent’s experience is divided into “episodes”. Each episode consists of the agent perceiving and then acting, and subsequent episodes do not depend on what actions occur in previous episodes. In episodic environments, the agent does not have to think ahead.

  • Static vs. dynamic

A dynamic environment can change while an agent is deliberating. In static environments, an agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. An environment is called semidynamic if it does not change with the passage of time but the agent’s performance score does.

  • Discrete vs. continuous

If there are a limited number of distinct, clearly defined percepts and actions we say that the environment is discrete. Chess is discrete. Taxi driving is continuous.

SLIDE 28

Examples of Environments and their Characteristics

Environment | Accessible | Deterministic | Episodic | Static | Discrete
Chess with a clock | Yes | Yes | No | Semi | Yes
Chess without a clock | Yes | Yes | No | Yes | Yes
Poker | No | No | No | Yes | Yes
Backgammon | Yes | No | No | Yes | Yes
Taxi driving | No | No | No | No | No
Medical diagnosis system | No | No | No | No | No
Image-analysis system | Yes | Yes | Yes | Semi | No
Part-picking robot | No | No | Yes | No | No
Refinery controller | No | No | No | No | No
Interactive English tutor | No | No | No | No | Yes

SLIDE 29

5.2 Environment Programs

A generic environment program illustrates the basic relationship between agents and environments.

procedure RUN-ENVIRONMENT( state, UPDATE-FN, agents, termination )
    inputs: state, the initial state of the environment
            UPDATE-FN, function to modify the environment
            agents, a set of agents
            termination, a predicate to test when we are done

    repeat
        for each agent in agents do
            PERCEPT[agent] <- GET-PERCEPT( agent, state )
        end
        for each agent in agents do
            ACTION[agent] <- PROGRAM[agent]( PERCEPT[agent] )
        end
        state <- UPDATE-FN( actions, agents, state )
    until termination( state )
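A Python sketch of the same loop, under the assumption that each agent is a (program, get-percept) pair of functions; the toy counter environment at the end is purely illustrative.

```python
def run_environment(state, update_fn, agents, termination):
    """Sketch of RUN-ENVIRONMENT: gather percepts, run programs, update world."""
    while not termination(state):
        # for each agent: PERCEPT[agent] <- GET-PERCEPT(agent, state)
        percepts = [get_percept(state) for _, get_percept in agents]
        # for each agent: ACTION[agent] <- PROGRAM[agent](PERCEPT[agent])
        actions = [program(p) for (program, _), p in zip(agents, percepts)]
        state = update_fn(actions, agents, state)   # UPDATE-FN(actions, agents, state)
    return state

# toy environment: one agent always answers "inc", the world counts the incs
agent = (lambda percept: "inc", lambda s: s)  # (program, get_percept)
final = run_environment(
    0,                                              # initial state
    lambda acts, ags, s: s + acts.count("inc"),     # update function
    [agent],
    lambda s: s >= 3,                               # termination predicate
)
print(final)  # -> 3
```

Note that all percepts are gathered before any actions are applied, matching the two separate for-loops in the pseudocode; this keeps simultaneous agents from seeing each other’s actions within a step.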

SLIDE 30

Environment Simulator Keeping Track of Agent Performances

procedure RUN-EVAL-ENVIRONMENT( state, UPDATE-FN, agents, termination, PERFORMANCE-FN ) returns scores
    local variables: scores, a vector the same size as agents, all 0

    repeat
        for each agent in agents do
            PERCEPT[agent] <- GET-PERCEPT( agent, state )
        end
        for each agent in agents do
            ACTION[agent] <- PROGRAM[agent]( PERCEPT[agent] )
        end
        state <- UPDATE-FN( actions, agents, state )
        scores <- PERFORMANCE-FN( scores, agents, state )
    until termination( state )
    return scores