Foundations of Artificial Intelligence
2. Rational Agents
Nature and Structure of Rational Agents and Their Environments
Wolfram Burgard, Bernhard Nebel, and Martin Riedmiller
Albert-Ludwigs-Universität Freiburg, May 2, 2011

Contents
1. What is an agent?
2. What is a rational agent?
3. The structure of rational agents
4. Different classes of agents
5. Types of environments

Agents
- Perceive the environment through sensors (→ percepts).
- Act upon the environment through actuators (→ actions).
- [Diagram: the agent receives percepts from the environment via its sensors and acts on the environment via its actuators.]
- Examples: humans and animals, robots and software agents (softbots), temperature control, ABS, ...

Rational Agents
- ... do the "right thing"!
- In order to evaluate their performance, we have to define a performance measure.
- Autonomous vacuum cleaner example:
  - m² cleaned per hour
  - level of cleanliness
  - energy usage
  - noise level
  - safety (behavior towards hamsters/small children)
- Optimal behavior is often unattainable:
  - not all relevant information is perceivable
  - the complexity of the problem is too high
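To make the notion of a performance measure concrete, the vacuum-cleaner criteria above can be folded into a single score. The following is a minimal Python sketch; the CleaningLog fields and the weights are illustrative assumptions, not values from the lecture.

    from dataclasses import dataclass

    @dataclass
    class CleaningLog:
        """One episode of vacuuming (hypothetical record for illustration)."""
        square_meters_per_hour: float
        cleanliness: float      # fraction of dirt removed, 0..1
        energy_kwh: float
        noise_db: float
        incidents: int          # unsafe behavior towards hamsters/small children

    def performance(log: CleaningLog) -> float:
        # Weighted sum: reward useful work and cleanliness, penalize energy, noise, and risks.
        # The weights are arbitrary illustrative choices.
        return (1.0 * log.square_meters_per_hour
                + 50.0 * log.cleanliness
                - 2.0 * log.energy_kwh
                - 0.5 * log.noise_db
                - 100.0 * log.incidents)

    print(performance(CleaningLog(20.0, 0.9, 0.3, 60.0, 0)))   # 20 + 45 - 0.6 - 30 - 0 = 34.4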

Rationality vs. Omniscience
- An omniscient agent knows the actual effects of its actions.
- In comparison, a rational agent behaves according to its percepts and knowledge and attempts to maximize the expected performance.
- Example: If I look both ways before crossing the street, and then as I cross I am hit by a meteorite, I can hardly be accused of lacking rationality.

The Ideal Rational Agent
- Rational behavior is dependent on:
  - performance measures (goals)
  - percept sequences
  - knowledge of the environment
  - possible actions
- Ideal rational agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
- Active perception is necessary to avoid trivialization.
- The ideal rational agent acts according to the function
  Percept Sequence × World Knowledge → Action

Examples of Rational Agents
Agent Type | Performance Measure | Environment | Actuators | Sensors
Medical diagnosis system | healthy patient, costs, lawsuits | patient, hospital, staff | display questions, tests, diagnoses, treatments, referrals | keyboard entry of symptoms, findings, patient's answers
Satellite image analysis system | correct image categorization | downlink from orbiting satellite | display categorization of scene | color pixel arrays
Part-picking robot | percentage of parts in correct bins | conveyor belt with parts, bins | jointed arm and hand | camera, joint angle sensors
Refinery controller | purity, yield, safety | refinery, operators | valves, pumps, heaters, displays | temperature, pressure, chemical sensors
Interactive English tutor | student's score on test | set of students, testing agency | display exercises, suggestions, corrections | keyboard entry

Structure of Rational Agents
- Realization of the ideal mapping through an agent program, executed on an architecture which also provides an interface to the environment (percepts, actions).
- → Agent = Architecture + Program
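The slogan "Agent = Architecture + Program" separates decision-making from the machinery that runs it. The sketch below, in Python, is a hedged illustration of that split: the agent program is a pure mapping from the percept sequence to an action, and the architecture is the loop that senses percepts and executes actions. The vacuum-world program, the environment function, and all names here are assumptions made for the example.

    from typing import Any, Callable, List

    Percept = Any
    Action = Any

    # Agent program: Percept Sequence (plus built-in knowledge) -> Action
    AgentProgram = Callable[[List[Percept]], Action]

    def run_agent(program: AgentProgram,
                  environment: Callable[[Action], Percept],
                  initial_percept: Percept,
                  steps: int = 5) -> List[Action]:
        """Architecture: provides the interface to the environment (percepts, actions)."""
        percepts = [initial_percept]
        actions: List[Action] = []
        for _ in range(steps):
            action = program(percepts)             # the program decides ...
            actions.append(action)
            percepts.append(environment(action))   # ... the architecture senses and acts
        return actions

    def vacuum_program(percepts: List[Percept]) -> Action:
        # Trivial program: clean the current square, otherwise move to the other one.
        location, dirty = percepts[-1]
        if dirty:
            return "Suck"
        return "Right" if location == "A" else "Left"

    def vacuum_env(action: Action) -> Percept:
        # Extremely simplified environment: square A is always dirty again, B never is.
        return ("A", True) if action == "Left" else ("B", False)

    print(run_agent(vacuum_program, vacuum_env, ("A", True)))   # ['Suck', 'Left', 'Suck', 'Left', 'Suck']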

The Simplest Design: Table-Driven Agents

    function TABLE-DRIVEN-AGENT(percept) returns an action
        persistent: percepts, a sequence, initially empty
                    table, a table of actions, indexed by percept sequences, initially fully specified
        append percept to the end of percepts
        action ← LOOKUP(percepts, table)
        return action

- Problems:
  - The table can become very large, and it usually takes a very long time for the designer to specify it (or to learn it).
  - ... practically impossible.

Simple Reflex Agent
- [Diagram: sensors report "what the world is like now"; condition-action rules decide "what action I should do now"; actuators carry it out.]
- Direct use of perceptions is often not possible due to the large space required to store them (e.g., video images). Input therefore is often interpreted before decisions are made.

Interpretative Reflex Agents
- Since the storage space required for perceptions is too large, perceptions are interpreted directly before a rule is selected (a Python sketch of this and of the table-driven design follows this slide group):

    function SIMPLE-REFLEX-AGENT(percept) returns an action
        persistent: rules, a set of condition–action rules
        state ← INTERPRET-INPUT(percept)
        rule ← RULE-MATCH(state, rules)
        action ← rule.ACTION
        return action

Structure of Model-based Reflex Agents
- In case the agent's history, in addition to the actual percept, is required to decide on the next action, it must be represented in a suitable form.
- [Diagram: an internal state is maintained using "how the world evolves" and "what my actions do"; condition-action rules then select "what action I should do now".]
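As a rough illustration of the two designs above, here is a short Python sketch of a table-driven agent (lookup over the entire percept sequence) and a simple reflex agent (condition-action rules applied to the interpreted current percept). The toy vacuum-world percepts, the rule format, and every helper name are assumptions for the example, not part of the lecture pseudocode.

    from typing import Any, Callable, Dict, List, Tuple

    Percept = Tuple[str, bool]   # (location, dirty) in a toy vacuum world
    Action = str

    class TableDrivenAgent:
        """Looks up the whole percept sequence in a fully specified table."""
        def __init__(self, table: Dict[Tuple[Percept, ...], Action]):
            self.table = table
            self.percepts: List[Percept] = []

        def __call__(self, percept: Percept) -> Action:
            self.percepts.append(percept)
            # The table needs one entry per possible percept sequence; its size is
            # what makes this design practically impossible for realistic problems.
            return self.table[tuple(self.percepts)]

    class SimpleReflexAgent:
        """Matches the interpreted current percept against condition-action rules."""
        def __init__(self, rules: List[Tuple[Callable[[Dict[str, Any]], bool], Action]]):
            self.rules = rules

        def __call__(self, percept: Percept) -> Action:
            state = self.interpret_input(percept)    # INTERPRET-INPUT
            for condition, action in self.rules:     # RULE-MATCH
                if condition(state):
                    return action
            raise ValueError("no rule matches")

        @staticmethod
        def interpret_input(percept: Percept) -> Dict[str, Any]:
            location, dirty = percept
            return {"location": location, "dirty": dirty}

    rules = [
        (lambda s: s["dirty"], "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: s["location"] == "B", "Left"),
    ]
    print(SimpleReflexAgent(rules)(("A", True)))   # -> Suck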

A Model-based Reflex Agent

    function MODEL-BASED-REFLEX-AGENT(percept) returns an action
        persistent: state, the agent's current conception of the world state
                    model, a description of how the next state depends on the current state and action
                    rules, a set of condition–action rules
                    action, the most recent action, initially none
        state ← UPDATE-STATE(state, action, percept, model)
        rule ← RULE-MATCH(state, rules)
        action ← rule.ACTION
        return action

(A Python sketch of this agent follows after this slide group.)

Model-based, Goal-based Agents
- Often, percepts alone are insufficient to decide what to do.
- This is because the correct action depends on the given explicit goals (e.g., go towards X).
- Model-based, goal-based agents use an explicit representation of goals and consider them for the choice of actions.
- [Diagram: the world model predicts "what it will be like if I do action A"; the goals then determine "what action I should do now".]

Model-based, Utility-based Agents
- Usually, there are several possible actions that can be taken in a given situation.
- In such cases, the utility of the next achieved state can come into consideration to arrive at a decision.
- A utility function maps a state (or a sequence of states) onto a real number.
- The agent can also use these numbers to weigh the importance of competing goals.
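The MODEL-BASED-REFLEX-AGENT pseudocode above can be rendered as a small Python class that keeps its state and last action between calls. This is a minimal sketch under simplifying assumptions: the vacuum-world state, the transition model folded into update_state, and the rule format are all invented for the example.

    from typing import Any, Callable, Dict, List, Optional, Tuple

    Action = str
    State = Dict[str, Any]
    Percept = Tuple[str, bool]   # (location, dirty)

    class ModelBasedReflexAgent:
        def __init__(self, rules: List[Tuple[Callable[[State], bool], Action]]):
            self.rules = rules
            self.state: State = {"dirty_squares": {"A", "B"}, "location": "A"}
            self.last_action: Optional[Action] = None

        def __call__(self, percept: Percept) -> Action:
            self.state = self.update_state(self.state, self.last_action, percept)
            for condition, action in self.rules:      # RULE-MATCH
                if condition(self.state):
                    self.last_action = action
                    return action
            raise ValueError("no rule matches")

        @staticmethod
        def update_state(state: State, action: Optional[Action], percept: Percept) -> State:
            """UPDATE-STATE: fold the new percept and the effect of the last action into the model."""
            location, dirty = percept
            dirty_squares = set(state["dirty_squares"])
            if dirty:
                dirty_squares.add(location)
            else:
                dirty_squares.discard(location)
            if action == "Suck":
                dirty_squares.discard(state["location"])   # what my actions do
            return {"dirty_squares": dirty_squares, "location": location}

    rules = [
        (lambda s: s["location"] in s["dirty_squares"], "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: True, "Left"),
    ]
    agent = ModelBasedReflexAgent(rules)
    print(agent(("A", True)), agent(("A", False)))   # -> Suck Right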

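Utility-based action selection, as described above, amounts to choosing the action whose predicted successor state has the highest utility (with uncertain outcomes, the highest expected utility). A minimal sketch follows, assuming a toy world model, a hand-picked set of candidate actions, and arbitrary utility weights; none of these come from the slides.

    from typing import Tuple

    State = Tuple[int, int]   # (cleanliness, remaining energy) in a toy world

    def predict(state: State, action: str) -> State:
        """Toy world model: what the state will be like if I do this action."""
        cleanliness, energy = state
        if action == "Suck":
            return (cleanliness + 1, energy - 2)
        if action == "Move":
            return (cleanliness, energy - 1)
        return (cleanliness, energy)   # "NoOp"

    def utility(state: State) -> float:
        # Maps a state onto a real number; the weights trade off competing goals.
        cleanliness, energy = state
        return 3.0 * cleanliness + 1.0 * energy

    def choose_action(state: State, actions=("Suck", "Move", "NoOp")) -> str:
        # Pick the action whose predicted successor state maximizes utility.
        return max(actions, key=lambda a: utility(predict(state, a)))

    print(choose_action((0, 10)))   # -> Suck (utility 11 vs. 9 and 10)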
Model-based, Utility-based Agents
- [Diagram: the world model predicts "what it will be like if I do action A"; the utility function rates "how happy I will be in such a state"; this determines "what action I should do now".]

Learning Agents
- Learning agents can become more competent over time.
- They can start with an initially empty knowledge base.
- They can operate in initially unknown environments.

Components of Learning Agents
- Learning element (responsible for making improvements)
- Performance element (has to select external actions)
- Critic (determines the performance of the agent)
- Problem generator (suggests actions that will lead to informative experiences)
- [Diagram: the critic compares sensor feedback against a performance standard and gives feedback to the learning element, which changes the performance element and sets learning goals for the problem generator; the performance element drives the actuators.]
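To show how the four components listed above interact, here is a schematic Python skeleton of a learning agent's decision step. The class layout, the epsilon-greedy exploration, and the simple running-average learning rule are all assumptions chosen for illustration; the slides only name the components.

    import random
    from typing import Any, Dict, List, Optional

    class LearningAgent:
        """Schematic learning agent: critic, learning element, performance element, problem generator."""

        def __init__(self, performance_standard: float, actions: List[Any]):
            self.performance_standard = performance_standard
            self.actions = actions
            self.knowledge: Dict[Any, float] = {}    # may start out empty
            self.last_action: Optional[Any] = None

        def critic(self, feedback: float) -> float:
            # Determines the performance of the agent relative to the fixed standard.
            return feedback - self.performance_standard

        def learning_element(self, error: float) -> None:
            # Responsible for making improvements: nudge the estimate of the last action.
            if self.last_action is not None:
                old = self.knowledge.get(self.last_action, 0.0)
                self.knowledge[self.last_action] = old + 0.1 * error

        def performance_element(self, percept: Any) -> Any:
            # Has to select external actions, using the knowledge learned so far.
            return max(self.actions, key=lambda a: self.knowledge.get(a, 0.0))

        def problem_generator(self) -> Any:
            # Suggests exploratory actions that will lead to informative experiences.
            return random.choice(self.actions)

        def step(self, percept: Any, feedback: float, explore: float = 0.1) -> Any:
            self.learning_element(self.critic(feedback))
            self.last_action = (self.problem_generator() if random.random() < explore
                                else self.performance_element(percept))
            return self.last_action

    agent = LearningAgent(performance_standard=0.0, actions=["Suck", "Right", "Left"])
    print(agent.step(percept=("A", True), feedback=0.0))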
