


Intelligent Agents
Chapter 2

Reminders

Assignment 0 (Lisp refresher) due 1/28
Lisp/emacs/AIMA tutorial: 11-1 today and Monday, 271 Soda

Outline

♦ Agents and environments
♦ Rationality
♦ PEAS (Performance measure, Environment, Actuators, Sensors)
♦ Environment types
♦ Agent types

Agents and environments

[Diagram: the agent perceives the environment through sensors (percepts) and acts on it through actuators (actions).]

Agents include humans, robots, softbots, thermostats, etc.

The agent function maps from percept histories to actions:

    f : P* → A

The agent program runs on the physical architecture to produce f.

Vacuum-cleaner world

[Figure: two squares, A and B; each may be clean or dirty, with the agent in one of them.]

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent

    Percept sequence                Action
    [A, Clean]                      Right
    [A, Dirty]                      Suck
    [B, Clean]                      Left
    [B, Dirty]                      Suck
    [A, Clean], [A, Clean]          Right
    [A, Clean], [A, Dirty]          Suck
    ...                             ...

    function Reflex-Vacuum-Agent([location, status]) returns an action
        if status = Dirty then return Suck
        else if location = A then return Right
        else if location = B then return Left

What is the right function? Can it be implemented in a small agent program?
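The percept-sequence table and the reflex program above are two implementations of the same agent function. A minimal sketch of both, in Python rather than the course's Lisp; the table entries and function names are taken from the slide, everything else is illustrative:

```python
# Sketch: the vacuum agent function as (a) a lookup table over percept
# histories and (b) the equivalent reflex program on the current percept.

# (a) Table-driven: maps the full percept sequence to an action.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... the table grows without bound as histories lengthen
}

def table_driven_agent(percepts, table=TABLE):
    """Look up the entire percept history."""
    return table.get(tuple(percepts))

# (b) Reflex: the same behavior from the current percept alone.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

assert table_driven_agent([("A", "Dirty")]) == reflex_vacuum_agent(("A", "Dirty"))
```

The reflex program answers the slide's closing question: this particular agent function does fit in a small program, because the action depends only on the latest percept.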

Rationality

A fixed performance measure evaluates the environment sequence:
    – one point per square cleaned up in time T?
    – one point per clean square per time step, minus one per move?
    – penalize for > k dirty squares?

A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.

Rational ≠ omniscient – percepts may not supply all relevant information
Rational ≠ clairvoyant – action outcomes may not be as expected
Hence, rational ≠ successful

Rational ⇒ exploration, learning, autonomy

PEAS

To design a rational agent, we must specify the task environment.
Consider, e.g., the task of designing an automated taxi:

    Performance measure??  safety, destination, profits, legality, comfort, ...
    Environment??          US streets/freeways, traffic, pedestrians, weather, ...
    Actuators??            steering, accelerator, brake, horn, speaker/display, ...
    Sensors??              video, accelerometers, gauges, engine sensors, keyboard, GPS, ...

Internet shopping agent

    Performance measure??  price, quality, appropriateness, efficiency
    Environment??          current and future WWW sites, vendors, shippers
    Actuators??            display to user, follow URL, fill in form
    Sensors??              HTML pages (text, graphics, scripts)
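The definition of rationality above is just expected-value maximization over an outcome model. A toy sketch in Python (the course code is in Lisp); the probabilities and scores below are invented for illustration, loosely following a one-point-per-clean-square measure with a small movement cost:

```python
# Sketch: a rational agent picks the action with the highest expected
# performance. The outcome model below is invented for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, performance-score) pairs."""
    return sum(p * score for p, score in outcomes)

# Hypothetical model for the percept [A, Dirty]: Suck usually cleans
# the square (+1) but may fail; moving earns nothing and costs a little.
MODEL = {
    "Suck":  [(0.9, 1.0), (0.1, 0.0)],
    "Right": [(1.0, -0.1)],
    "NoOp":  [(1.0, 0.0)],
}

def rational_action(model):
    """Choose the action maximizing expected performance."""
    return max(model, key=lambda a: expected_value(model[a]))

print(rational_action(MODEL))  # Suck
```

Note the connection to the slide's distinctions: the agent maximizes the *expected* value, so it can be rational even when an action's actual outcome turns out badly (rational ≠ clairvoyant, hence rational ≠ successful).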

Environment types

                     Solitaire   Backgammon   Internet shopping       Taxi
    Observable??     Yes         Yes          No                      No
    Deterministic??  Yes         No           Partly                  No
    Episodic??       No          No           No                      No
    Static??         Yes         Semi         Semi                    No
    Discrete??       Yes         Yes          Yes                     No
    Single-agent??   Yes         No           Yes (except auctions)   No

The environment type largely determines the agent design.

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent.

Agent types

Four basic types in order of increasing generality:
    – simple reflex agents
    – reflex agents with state
    – goal-based agents
    – utility-based agents

All these can be turned into learning agents.

Simple reflex agents

[Diagram: sensors report "what the world is like now"; condition-action rules determine "what action I should do now"; actuators act on the environment.]

Example

    function Reflex-Vacuum-Agent([location, status]) returns an action
        if status = Dirty then return Suck
        else if location = A then return Right
        else if location = B then return Left

    (defun make-reflex-vacuum-agent-program ()
      #'(lambda (percept)
          (let ((location (first percept))
                (status (second percept)))
            (cond ((eq status 'dirty) 'Suck)
                  ((eq location 'A) 'Right)
                  ((eq location 'B) 'Left)))))

    (setq joe (make-agent :name 'joe
                          :body (make-agent-body)
                          :program (make-reflex-vacuum-agent-program)))

Reflex agents with state

[Diagram: internal state ("what the world is like now") is maintained from the percept using knowledge of how the world evolves and what my actions do; condition-action rules then select the action.]

Example

    function Reflex-Vacuum-Agent-With-State([location, status]) returns an action
        static: last-A, last-B, numbers, initially ∞
        if status = Dirty then . . .

    (defun make-reflex-vacuum-agent-with-state-program ()
      (let ((last-A infinity)
            (last-B infinity))
        #'(lambda (percept)
            (let ((location (first percept))
                  (status (second percept)))
              (incf last-A)
              (incf last-B)
              (cond ((eq status 'dirty)
                     (if (eq location 'A) (setq last-A 0) (setq last-B 0))
                     'Suck)
                    ((eq location 'A) (if (> last-B 3) 'Right 'NoOp))
                    ((eq location 'B) (if (> last-A 3) 'Left 'NoOp)))))))

Goal-based agents

[Diagram: the agent tracks what the world is like now, predicts "what it will be like if I do action A" using models of how the world evolves and what my actions do, and chooses an action that achieves its goals.]
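A goal-based agent chooses actions by predicting "what it will be like if I do action A" and testing the result against its goals. A minimal one-step sketch, in Python rather than the course's Lisp, with a toy vacuum-world transition model invented for illustration:

```python
# Sketch of a goal-based agent: one-step lookahead against a goal test.
# State is (location, {square: status}); the model is a toy assumption.

def predict(state, action):
    """Toy transition model: what the world will be like after action."""
    location, dirt = state
    dirt = dict(dirt)  # don't mutate the caller's state
    if action == "Suck":
        dirt[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return (location, dirt)

def goal_test(state):
    """Goal: every square is clean."""
    _, dirt = state
    return all(s == "Clean" for s in dirt.values())

def goal_based_agent(state, actions=("Suck", "Right", "Left", "NoOp")):
    # Prefer any action whose predicted state satisfies the goal...
    for a in actions:
        if goal_test(predict(state, a)):
            return a
    # ...otherwise do nothing (a real agent would search deeper).
    return "NoOp"

print(goal_based_agent(("A", {"A": "Dirty", "B": "Clean"})))  # Suck
```

The one-step lookahead is the simplest instance of the diagram's pattern; with goals further away, the same predict/goal-test pair drives multi-step search (the subject of later chapters).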

Utility-based agents

[Diagram: like a goal-based agent, but each predicted state is scored by a utility function ("how happy I will be in such a state"), and the agent chooses the action leading to the best state.]

Learning agents

[Diagram: a performance element selects actions; a critic compares behavior against a performance standard and sends feedback to a learning element, which makes changes to the performance element using its knowledge and learning goals; a problem generator suggests exploratory actions.]

Summary

Agents interact with environments through actuators and sensors
The agent function describes what the agent does in all circumstances
The performance measure evaluates the environment sequence
A perfectly rational agent maximizes expected performance
Agent programs implement (some) agent functions
PEAS descriptions define task environments
Environments are categorized along several dimensions:
    observable? deterministic? episodic? static? discrete? single-agent?
Several basic agent architectures exist:
    reflex, reflex with state, goal-based, utility-based
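To close, the utility-based architecture can be sketched in a few lines: predict each action's successor state and score it with a utility function ("how happy I will be in such a state"). As before, this is a Python illustration rather than the course's Lisp, and the model and utility are toy assumptions:

```python
# Sketch of a utility-based agent: act to maximize the utility of the
# predicted next state. Transition model and utility are invented.

def predict(state, action):
    """Toy transition model: state = (location, {square: status})."""
    location, dirt = state
    dirt = dict(dirt)
    if action == "Suck":
        dirt[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return (location, dirt)

def utility(state):
    """One point per clean square: a toy stand-in for 'how happy'."""
    _, dirt = state
    return sum(1 for s in dirt.values() if s == "Clean")

def utility_based_agent(state, actions=("Suck", "Right", "Left", "NoOp")):
    return max(actions, key=lambda a: utility(predict(state, a)))

print(utility_based_agent(("B", {"A": "Clean", "B": "Dirty"})))  # Suck
```

Unlike a goal test, the utility function ranks all states, so the agent still acts sensibly when no single action reaches the goal outright; this is what makes utility-based agents more general than goal-based ones.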
