

  1. Intelligent Agents (Chapter 2)
  B. Ombuki-Berman, B. Ross, with material from Artificial Intelligence: A Modern Approach (butchered by Earl)

  2. Agents
  ● An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
  ● Percept: perceptual input (e.g. text, image, sound, ...)
  ● Action: output acting on the environment (e.g. spinning a motor, picking up an object, turning on a light)
  ● Agent function: maps a percept sequence to an action
  ● Agent program: a concrete implementation running in a system (sketched below)
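To make the function/program distinction concrete, here is a minimal Python sketch; all identifiers (Percept, Action, AgentProgram) are invented for illustration, not taken from the slides:

```python
# Minimal sketch of the agent abstraction; all names are invented.
from typing import Callable, List

Percept = str  # e.g. "dirty", an encoded image, a sound sample, ...
Action = str   # e.g. "suck", "spin-motor", "light-on"

# The agent *function* is an abstract mapping from a percept sequence
# to an action...
AgentFunction = Callable[[List[Percept]], Action]

# ...and the agent *program* is a concrete implementation of it,
# running on some system and fed one percept at a time.
class AgentProgram:
    def __init__(self, f: AgentFunction) -> None:
        self.f = f
        self.percepts: List[Percept] = []  # percept sequence to date

    def step(self, percept: Percept) -> Action:
        self.percepts.append(percept)
        return self.f(self.percepts)
```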

  3. Agents
  ● More generally, a rational agent, for each possible percept sequence, selects an action that maximizes some performance measure, given the evidence provided by the sequence and the agent's built-in knowledge
  ● i.e. it “does the right thing for the situation” (one reading is sketched below)
  ● Performance measure: criterion for success
  ● Generally measured based on states of (i.e. changes to) the environment
  ● Good vs bad? Better vs worse?
    – Clear criterion vs less well-defined
  ● Generally chosen by the designer, and task-specific
  ● Rationality: reasonably correct
  ● Not perfection!
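Read as pseudocode, “maximizes some performance measure, given the evidence” is just an argmax. A hedged sketch, where expected_performance is a hypothetical stand-in for whatever model the designer supplies:

```python
# Illustrative only: pick the action whose *expected* performance is
# highest, given the percept sequence so far and built-in knowledge.
def rational_choice(actions, percepts, expected_performance):
    # expected_performance(action, percepts) -> float is hypothetical;
    # it bundles the performance measure with the agent's prior knowledge.
    return max(actions, key=lambda a: expected_performance(a, percepts))
```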

  4. Rationality
  ● Rationality depends on:
    – The performance measure that defines the criterion for success
    – The agent's prior knowledge of the environment
    – The actions the agent can perform
    – The agent's percept sequence (to date)
  ● This takes us back to our definition of a rational agent

  5. So... what's a percept sequence?
  ● A percept is the agent's perceptual inputs at any given time. A percept sequence is the complete history of everything the agent has ever perceived
  ● Silly examples of the difference:
    – If your butlertron offers you a drink, and you decline it, it'd be nice if he didn't immediately offer you another drink
    – If your robotic vacuum automaton is on the prowl for dirt, it should probably remember that it's already checked one room, and move on to another
  ● Remember: an agent function might act on a potentially infinite percept sequence, so you may be required to impose some form of limitation or generalization (one crude option is sketched below)
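As a toy illustration of both points, here is a butlertron that consults a bounded window of its percept sequence before acting; everything here is invented for the example:

```python
# Invented example: act on the percept *sequence*, not just the latest
# percept, while capping how much history we consult so it stays finite.
def butlertron(percepts: list[str]) -> str:
    # Don't immediately re-offer a drink that was just declined; looking
    # at only the last few percepts is one crude form of limitation.
    if "declined-drink" in percepts[-3:]:
        return "wait"
    return "offer-drink"
```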

  6. Example Time!
  (Figure: two squares, A and B, side by side)

  Percept Sequence                         Action
  [A, Clean]                               Right
  [A, Dirty]                               Suck
  [B, Clean]                               Left
  [B, Dirty]                               Suck
  [A, Clean], [A, Clean]                   Right
  [A, Clean], [A, Dirty]                   Suck
  ...                                      ...
  [A, Clean], [A, Clean], [A, Clean]       Right
  [A, Clean], [A, Clean], [A, Dirty]       Suck
  ...                                      ...

  ● Note that it could be cleaning until the end of time, with a virtually limitless percept history. However, the same task can be accomplished just as well by not acknowledging past states in this case (see the reflex rule below)
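The table above is really an infinite lookup table, but as the slide notes, history can be ignored here: the whole thing collapses to a rule on the latest percept alone. A sketch (identifiers invented):

```python
# Equivalent simple reflex rule for the table above; [location, status]
# is the latest percept, and past percepts are never consulted.
def reflex_vacuum_agent(location: str, status: str) -> str:
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```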

  7. Is This a Rational Agent?
  ● Performance measure?
    – What are we actually examining the state of?
  ● Prior knowledge?
    – What do we know in advance?
  ● Actions?
    – What can we do?
  ● Percept sequences?
    – Evidence of perception?

  8. Is This a Rational Agent?
  ● Performance measure?
    – Perhaps we could count up the number of clean squares after some length of time?
      ● e.g. after 500 timesteps (see the sketch below)
    – Note that we're looking at the environment
  ● Prior knowledge?
  ● Actions?
  ● Percept sequences?
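One way to realize that measure, hedged as a sketch: the env.percept / env.execute / count_clean_squares API below is hypothetical, standing in for whatever simulator you actually have.

```python
# Hypothetical simulation loop: run the agent for 500 timesteps, then
# count clean squares, as suggested on the slide.
def measure_performance(env, agent, steps: int = 500) -> int:
    for _ in range(steps):
        action = agent.step(env.percept())  # percept in, action out
        env.execute(action)                 # environment state changes
    return env.count_clean_squares()        # we score the *environment*
```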

  9. Is This a Rational Agent?
  ● Performance measure?
  ● Prior knowledge?
    – Presumably we know how many rooms there are, and which are accessible from which
    – We can assume squares are still clean during operation after we've cleaned them
    – There's no reason to assume we have clairvoyant knowledge of all dirt in advance
    – We may or may not know our initial position within the map
  ● Actions?
  ● Percept sequences?

  10. Is This a Rational Agent?
  ● Performance measure?
  ● Prior knowledge?
  ● Actions?
    – Left, Right, and Suck
    – Moving left or right should not be able to result in leaving the map
  ● Percept sequences?

  11. Is This a Rational Agent?
  ● Performance measure?
  ● Prior knowledge?
  ● Actions?
  ● Perception?
    – Our table of percept sequences presupposes the ability to perceive dirt, and to observe location within the map
    – Is the latter practical? Why or why not?

  12. Percept Sequences
  ● You probably noticed some other issues with that style of system (other than the potential for percept sequences of infinite length)
  ● Comments? Ideas?

  13. Caveat to Rational Agents
  ● Note that this view of agents is just to make analyzing systems easier
    – If something else works: use it
  ● Similarly, the ability to incorporate percept sequences doesn't necessarily make rational agents an appropriate perspective for a problem
    – e.g. a calculator can be modeled as an agent: 2, +, 2, = could be presented as a percept sequence, but that doesn't make it a good idea!

  14. Final Thought: Rationality != Perfection
  ● A rational agent should maximize expected performance, based on the knowledge (percepts) acquired to date
  ● A vacuum-bot not checking for dirt before applying suction? Irrational
  ● A vacuum-bot not checking for gremlins? Rational
    – Even if the bot then fails because it's destroyed by gremlins

  15. Learning
  ● Ideally, we'd like an agent to be able to learn
  ● In this context, learning means retaining relevant information from perception
    – Simplest example: it'd be nice if a robot could 'discover' the existence of a new room in the map if it encountered one in the field
    – Already seen: the vacuum-bot doesn't know which rooms are clean, but perceives this (and then maintains knowledge of perpetual cleanliness post-suckitude)
  ● In a good system, we can start with prior knowledge, and then amend it as necessary (see the sketch below)
    – e.g. knowing the floorplan of Brock, but then deciding a hallway is off-limits if it has too many tables set up (/too crowded) to repeatedly navigate quickly
  ● In a very good system, we might even be able to learn trends and adapt accordingly
    – e.g. the main hallways at Brock tend to get dirty faster, so the cleaning staff already knows to give them extra attention
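A minimal sketch of “retain relevant information from perception, amend prior knowledge as necessary”; the class and its API are invented for illustration:

```python
# Invented sketch: start from prior knowledge of the map, then amend it
# with whatever perception turns up in the field.
class LearningVacuum:
    def __init__(self, known_rooms: set[str]) -> None:
        self.rooms = set(known_rooms)  # prior knowledge (the floorplan)
        self.clean: set[str] = set()   # rooms perceived/made clean

    def perceive(self, room: str, status: str) -> None:
        self.rooms.add(room)           # 'discover' rooms not in the map
        if status == "Clean":
            self.clean.add(room)       # assume cleanliness persists
```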

  16. Autonomy
  ● The less an agent relies on prior knowledge, the more autonomous it is
  ● This isn't just important because of flexibility...
    – It's possible (even easy) to predispose an agent/robot to certain behaviours through overinforming it in advance
    – Similarly, it's often more suitable for path-finding systems to know only the objective facts (locations, connections, etc.), and be free of additional biases from the designer
  ● (If you'd like a counterexample, ask me about Sudoku)
  ● For a good example, consider our hallway example from the previous slide

  17. Remember to eat your PEAS!
  ● Let's look at some of the important aspects of rational agents...
  ● Performance, Environment, Actuators, and Sensors define a task environment
  ● Let's go straight to an example!

  Agent type: Taxi driver
    Performance measure: Safe, fast, legal, comfortable trip; maximize profits (actually gets there? what else?)
    Environment: Roads, other traffic, pedestrians, customers
    Actuators: Steering, accelerator, brake, signal, horn, display
    Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

  18. PEAS examples (continued)

  Agent type: Medical diagnosis system
    Performance measure: Healthy patient, reduced costs
    Environment: Patient, hospital, staff?
    Actuators: Display (questions, tests, diagnoses, treatments, referrals)
    Sensors: Keyboard (entry of symptoms, findings, patient answers and history)

  Agent type: Part-picking robot
    Performance measure: %age of parts in correct bins
    Environment: Conveyor belt with parts; bins
    Actuators: Jointed arm, manipulator (hand)
    Sensors: Camera, encoders, possibly more?

  Agent type: Interactive math tutor
    Performance measure: Student's score (or change in score)
    Environment: Students, possibly agency
    Actuators: Display (exercises, corrections)
    Sensors: Keyboard

  Agent type: Web-scraper?   (?, ?, ?, ?)
  Agent type: FPS bot?       (?, ?, ?, ?)

  ● Note that some of these are entirely software-based agents (softbots)
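A PEAS description is just a four-field record. As a sketch, the taxi row above could be captured like this; the dataclass itself is illustrative, not part of the course material:

```python
# Illustrative record for a PEAS task-environment description.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: str
    environment: str
    actuators: str
    sensors: str

taxi = PEAS(
    performance="safe, fast, legal, comfortable trip; maximize profits",
    environment="roads, other traffic, pedestrians, customers",
    actuators="steering, accelerator, brake, signal, horn, display",
    sensors="cameras, sonar, speedometer, GPS, odometer, "
            "accelerometer, engine sensors, keyboard",
)
```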

  19. Environment Types
  ● Fully observable vs partially observable
    – Fully: everything seen; shows all relevant information
    – Partially: noise, inaccurate sensors, hidden info, ...
    – Basically, will the agent be considered omniscient?
  ● Deterministic vs stochastic
    – Deterministic: the next state depends solely on the current state and next action
    – Stochastic: probabilistic; other factors involved (complex?)
    – Basically, will the environment always behave predictably?
  ● Episodic vs sequential
    – Episodic: one self-contained, independent situation at a time (atomic)
    – Sequential: the current decision affects future ones (you need to 'think ahead')
      ● e.g. chess, automated vehicles, floor-painting, etc.
    – Typically, episodic's a lot simpler to deal with

  20. Environment Types
  ● Static vs dynamic
    – Static: environment is fixed during decision making
    – Dynamic: environment changes
    – Suppose your robolackey is getting lunch for everyone at the office from different restaurants. If you call up his cellphone (yes, your robolackey has a cellphone) and tell him to also pick up a sammich from Fearless Frank's, will he have to start over, or adapt?
  ● Discrete vs continuous
    – Discrete: finite # of states (measurements, values, ...)
    – Continuous: smooth, infinite scale
  ● Single agent vs multi-agent
    – Single: one agent involved
    – Multi: more than one (adversarial/competitive or cooperative)
  ● How would our vacuum world score on these dimensions? (one take is sketched below)
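Here is one way to classify the two-square vacuum world along all six dimensions, offered as a discussion sketch rather than a definitive answer:

```python
# One debatable classification of the two-square vacuum world.
vacuum_world = {
    "observability": "partial",        # the percept covers only the current square
    "dynamics":      "deterministic",  # next state = f(current state, action)
    "episodes":      "sequential",     # moving now affects what gets cleaned later
    "change":        "static",         # no new dirt appears mid-decision
    "state space":   "discrete",       # two squares, two statuses, three actions
    "agents":        "single",         # just the one vacuum-bot
}
```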

  21. Back to Agents...
  ● As a reminder, in AI, we want to design:
  ● An agent function...
    – A mapping of percepts to actions
  ● ...as implemented as an agent program...
    – i.e. the actual realization of that agent function
  ● ...which can be combined with an architecture...
    – A computing device or platform, which includes the requisite sensors and actuators
  ● ...to create an agent
    – i.e. architecture + program = agent (see the run loop sketched below)
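Tying it together, a hedged sketch of “architecture + program = agent”; the sense/actuate API is invented:

```python
# The architecture supplies sensors and actuators; the program (e.g. the
# AgentProgram sketched earlier) maps the percept sequence to actions.
def run_agent(architecture, program, steps: int) -> None:
    for _ in range(steps):
        percept = architecture.sense()  # sensors -> percept
        action = program.step(percept)  # agent program decides
        architecture.actuate(action)    # actuators change the world
```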
