

SLIDE 1

Intelligent Agents

Chapter 2, Sections 1–4

Artificial Intelligence, spring 2013, Peter Ljunglöf; based on AIMA Slides © Stuart Russell and Peter Norvig, 2004

SLIDE 2

Outline

♦ Agents and environments
♦ Rationality
♦ PEAS (Performance measure, Environment, Actuators, Sensors)
♦ Environment types
♦ Agent types

SLIDE 3

Agents and environments

[Diagram: the agent receives percepts from the environment through its sensors, and acts on the environment through its actuators]

Agents include humans, robots, softbots, thermostats, etc.

The agent function maps from percept histories to actions:

f : P∗ → A

The agent program runs on the physical architecture to produce f.
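In code, f can be sketched as a lookup table over percept histories — a minimal illustration in Python (the table entries and the percept encoding are hypothetical, not from the slides):

```python
# A toy agent function f : P* -> A, represented as a lookup table
# keyed by the entire percept history so far.  Entries are hypothetical.
AGENT_TABLE = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}

def table_driven_agent(percept_history):
    """Map the full percept history to an action; NoOp if unlisted."""
    return AGENT_TABLE.get(tuple(percept_history), 'NoOp')
```

For example, `table_driven_agent([('A', 'Clean'), ('B', 'Dirty')])` yields `'Suck'`. Such a table grows with every possible history, which is exactly why agent programs, rather than explicit tables, are used to produce f.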

SLIDE 4

Vacuum-cleaner world

[Diagram: the vacuum world, two squares A and B, each possibly containing dirt]

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

SLIDE 5

A vacuum-cleaner agent

A simple agent function is: if the current square is dirty, then suck; otherwise, move to the other square.

. . . or as pseudo-code:

function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

How do we know if this is a good agent function? What is the best function? Is there one? Who decides this?

SLIDE 6

Rationality

Fixed performance measure evaluates the environment sequence
– one point per square cleaned up in time T?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?

A rational agent chooses any action that
– maximizes the expected value of the performance measure
– given the percept sequence to date

Rational ≠ omniscient – percepts may not supply all relevant information
Rational ≠ clairvoyant – action outcomes may not be as expected
Hence, rational ≠ successful

Rational ⇒ exploration, learning, autonomy
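The second candidate measure above (one point per clean square per time step) can be tested by simulation. A minimal sketch, assuming a two-square world, the simple reflex agent from the earlier slide, and a horizon of 10 steps (all assumptions for illustration):

```python
def reflex_vacuum_agent(location, status):
    """The simple reflex agent from the vacuum-cleaner slide."""
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

def score_run(world, location, steps=10):
    """Score a run: one point per clean square per time step (no move penalty)."""
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == 'Suck':
            world[location] = 'Clean'
        elif action == 'Right':
            location = 'B'
        else:  # 'Left'
            location = 'A'
        score += sum(1 for status in world.values() if status == 'Clean')
    return score

print(score_run({'A': 'Dirty', 'B': 'Dirty'}, 'A'))  # both squares start dirty
```

Adding the "minus one per move" term, or the > k penalty, only changes the scoring line; comparing agents under the different measures makes the rationality discussion concrete.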

SLIDE 7

PEAS

To design a rational agent, we must specify the task environment, which consists of the following four things:

Performance measure??
Environment??
Actuators??
Sensors??

Examples of agents:
– Automated taxi
– Internet shopping agent
– Board games

SLIDE 8

Automated taxi

The task environment for an automated taxi:

Performance measure?? Safety, destination, profits, legality, comfort, . . .
Environment?? Streets, traffic, pedestrians, weather, . . .
Actuators?? Steering, accelerator, brake, horn, speaker/display, . . .
Sensors?? Video, accelerometers, gauges, engine, keyboard, GPS, . . .
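A PEAS description is just structured data. A minimal sketch holding the taxi values above (the class name and field layout are my own, not a standard API):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=['safety', 'destination', 'profits', 'legality', 'comfort'],
    environment=['streets', 'traffic', 'pedestrians', 'weather'],
    actuators=['steering', 'accelerator', 'brake', 'horn', 'speaker/display'],
    sensors=['video', 'accelerometers', 'gauges', 'engine', 'keyboard', 'GPS'],
)
```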

SLIDE 9

Internet shopping agent

The task environment for an internet shopping agent:

Performance measure?? Price, quality, appropriateness, efficiency
Environment?? Current and future WWW sites, vendors, shippers
Actuators?? Display to user, follow URL, fill in form
Sensors?? HTML pages (text, graphics, scripts)

SLIDE 10

Question-answering system

The task environment for a question-answering system:

Performance measure?? User satisfaction? Known questions?
Environment?? Wikipedia, Wolfram Alpha, ontologies, encyclopedias, . . .
Actuators?? Spoken/written language
Sensors?? Written/spoken input

SLIDE 11

Question-answering system: Jeopardy!

The task environment for a question-answering system:

Performance measure?? $ $ $
Environment?? Wikipedia, Wolfram Alpha, ontologies, encyclopedias, . . .
Actuators?? Spoken/written language, answer button
Sensors?? Written/spoken input

Why did IBM choose Jeopardy! as its goal for a QA system? Because of the performance measure!

SLIDE 12

Environment types

                  Solitaire      Backgammon   Web shopping   Taxi
Observable??      Fully          Fully        Partly         Partly
Deterministic??   Deterministic  Stochastic   Partly         Stochastic
Episodic??        Sequential     Sequential   Sequential     Sequential
Static??          Static         Static       Semi           Dynamic
Discrete??        Discrete       Discrete     Discrete       Continuous
Single-agent??    Single         Multi        Single*        Multi

(* except auctions)

The environment type largely determines the agent design.
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
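The table can be encoded directly, which makes it easy to query; a sketch (the dictionary layout is my own):

```python
# Environment properties from the table above, keyed by task.
# 'single*' marks web shopping: single-agent except in auctions.
environments = {
    'solitaire':    dict(observable='fully', deterministic='deterministic',
                         episodic='sequential', static='static',
                         discrete='discrete', agents='single'),
    'backgammon':   dict(observable='fully', deterministic='stochastic',
                         episodic='sequential', static='static',
                         discrete='discrete', agents='multi'),
    'web shopping': dict(observable='partly', deterministic='partly',
                         episodic='sequential', static='semi',
                         discrete='discrete', agents='single*'),
    'taxi':         dict(observable='partly', deterministic='stochastic',
                         episodic='sequential', static='dynamic',
                         discrete='continuous', agents='multi'),
}

# Every listed environment is sequential:
print(all(e['episodic'] == 'sequential' for e in environments.values()))
```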

SLIDE 13

Agent types

Four basic types in order of increasing generality:
– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents

All these can be turned into learning agents.

SLIDE 14

Simple reflex agents

[Diagram: a simple reflex agent. Sensors tell the agent what the world is like now; condition–action rules determine what action to do now; actuators act on the environment]

SLIDE 15

Example

function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

def reflex_vacuum_agent(sensors):
    if sensors['status'] == 'Dirty':
        return 'Suck'
    elif sensors['location'] == 'A':
        return 'Right'
    elif sensors['location'] == 'B':
        return 'Left'

SLIDE 16

Reflex agents with state

[Diagram: a reflex agent with state. Internal state, combined with knowledge of how the world evolves and what the agent's actions do, is merged with sensor input to estimate what the world is like now; condition–action rules then determine what action to do now]

SLIDE 17

Example

function Reflex-Vacuum-Agent([location, status]) returns an action
  static: last A, last B, numbers, initially ∞
  if status = Dirty then . . .

# float('inf') stands in for the slide's "initially ∞" (Python 2's sys.maxint)
initial_state = {'last-A': float('inf'), 'last-B': float('inf')}

def reflex_vacuum_agent_state(sensors, state=initial_state):
    if sensors['status'] == 'Dirty':
        if sensors['location'] == 'A':
            state['last-A'] = 0
        else:
            state['last-B'] = 0
        return 'Suck'
    elif sensors['location'] == 'A' and state['last-B'] > 3:
        return 'Right'
    elif sensors['location'] == 'B' and state['last-A'] > 3:
        return 'Left'
    else:
        return 'NoOp'

SLIDE 18

Goal-based agents

[Diagram: a goal-based agent. Using state and knowledge of how the world evolves and what its actions do, the agent predicts what the world will be like if it does action A; its goals then determine what action to do now]
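In code, this amounts to: predict the next state with the world model, and prefer actions whose predicted state satisfies the goal. A minimal sketch in the vacuum world (the model and goal test are simplified assumptions, not from the slides):

```python
def model(state, action):
    """World model: predict the next (location, world) after an action."""
    location, world = state
    world = dict(world)  # copy, so prediction does not mutate reality
    if action == 'Suck':
        world[location] = 'Clean'
    elif action == 'Right':
        location = 'B'
    elif action == 'Left':
        location = 'A'
    return (location, world)

def goal_reached(state):
    """Goal: every square is clean."""
    return all(s == 'Clean' for s in state[1].values())

def goal_based_agent(state, actions=('Suck', 'Right', 'Left')):
    """Pick an action predicted to achieve the goal; otherwise move on."""
    for action in actions:
        if goal_reached(model(state, action)):
            return action
    return 'Right' if state[0] == 'A' else 'Left'  # no one-step win: explore
```

With state `('A', {'A': 'Dirty', 'B': 'Clean'})` the agent returns `'Suck'`. In general a goal-based agent searches over action sequences, not just single actions, which is the topic of later chapters.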

SLIDE 19

Utility-based agents

[Diagram: a utility-based agent. The agent predicts what the world will be like if it does action A, and its utility function estimates how happy it will be in such a state; this determines what action to do now]
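Replacing the binary goal test with a utility function lets the agent rank states that are merely better or worse. A sketch in the same vacuum world (the utility definition is my own illustration):

```python
def predict(state, action):
    """World model: predict the next (location, world) after an action."""
    location, world = state
    world = dict(world)
    if action == 'Suck':
        world[location] = 'Clean'
    elif action == 'Right':
        location = 'B'
    elif action == 'Left':
        location = 'A'
    return (location, world)

def utility(state):
    """How happy the agent is: one unit per clean square."""
    return sum(1 for s in state[1].values() if s == 'Clean')

def utility_based_agent(state, actions=('Suck', 'Right', 'Left')):
    """Choose the action whose predicted state has the highest utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))
```

Note that `max` breaks ties in favor of the first action listed; a full treatment maximizes *expected* utility over stochastic outcomes.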

SLIDE 20

Learning agents

[Diagram: a learning agent. A critic compares feedback from the sensors against a performance standard and informs the learning element, which changes the performance element's knowledge; learning goals drive a problem generator that suggests exploratory actions for the actuators]
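The feedback loop in the diagram can be made concrete in a few lines: the critic judges an outcome against the performance standard, and the learning element updates the condition–action rules used by the performance element. A toy sketch (the rule format and update policy are invented for illustration):

```python
rules = {}  # learned condition -> action mapping (performance element knowledge)

def performance_element(percept):
    """Select an action using the current rules."""
    return rules.get(percept, 'NoOp')

def critic(reward):
    """Judge an outcome against the performance standard."""
    return reward > 0

def learning_element(percept, action, reward):
    """Reinforce condition-action pairs the critic approves of."""
    if critic(reward):
        rules[percept] = action

# One learning episode: trying 'Suck' on a dirty square earned reward 1.
learning_element(('A', 'Dirty'), 'Suck', reward=1)
print(performance_element(('A', 'Dirty')))  # now 'Suck' instead of 'NoOp'
```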

SLIDE 21

Summary

Agents interact with environments through actuators and sensors.
The agent function describes what the agent does in all circumstances.
The performance measure evaluates the environment sequence.
A perfectly rational agent maximizes expected performance.
Agent programs implement (some) agent functions.
PEAS descriptions define task environments.

Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?

Several basic agent architectures exist:
reflex, reflex with state, goal-based, utility-based.
