ARTIFICIAL INTELLIGENCE
Russell & Norvig Chapter 2: Intelligent Agents, part 2
Agent Architecture
All agents have the same basic structure: accept percepts from the environment, generate actions.
Agent = Architecture + Program
Rationality depends on the agent's knowledge of the environment (the problem domain) and on the performance measure used to judge the success of the agent.
function Skeleton-Agent(percept) returns action
    static: memory, the agent's memory of the world

    memory ← Update-Memory(memory, percept)
    action ← Choose-Best-Action(memory)
    memory ← Update-Memory(memory, action)
    return action
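A minimal Python sketch of this skeleton. The memory is kept in a closure, and Update-Memory / Choose-Best-Action are trivial placeholders (a real agent would maintain a model of the world and a genuine decision procedure):

```python
# Sketch of the Skeleton-Agent; update_memory and choose_best_action
# are placeholder implementations for illustration only.

def update_memory(memory, item):
    # Record percepts and actions in order; a real agent would
    # maintain a richer model of the world here.
    return memory + [item]

def choose_best_action(memory):
    # Placeholder decision rule: act on the most recent percept.
    return ("act-on", memory[-1])

def make_skeleton_agent():
    memory = []                    # static: the agent's memory of the world
    def agent(percept):
        nonlocal memory
        memory = update_memory(memory, percept)
        action = choose_best_action(memory)
        memory = update_memory(memory, action)
        return action
    return agent

agent = make_skeleton_agent()
print(agent("light-on"))           # ('act-on', 'light-on')
```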
function Table-Driven-Agent(percept) returns action
    static: percepts, a sequence, initially empty
            table, a table of actions indexed by percept sequences, initially fully specified

    append percept to the end of percepts
    action ← LookUp(percepts, table)
    return action
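The same program in Python. The table maps complete percept sequences (as tuples) to actions and must be fully specified in advance, which is exactly what makes the table-driven design infeasible for any non-trivial environment; the two-entry vacuum-style table below is an illustrative assumption:

```python
# Sketch of the Table-Driven-Agent.

def make_table_driven_agent(table):
    percepts = []                      # static: percept sequence, initially empty
    def agent(percept):
        percepts.append(percept)       # append percept to the end of percepts
        return table[tuple(percepts)]  # action <- LookUp(percepts, table)
    return agent

# Hypothetical two-step table for illustration.
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move-right",
}
agent = make_table_driven_agent(table)
print(agent("dirty"))                  # suck
print(agent("clean"))                  # move-right
```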
Four basic agent types:
- Simple reflex agents are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.
- Agents with memory have internal state, which is used to keep track of past states of the world.
- Agents with goals have, in addition to state information, goal information which describes desirable situations. Agents of this kind take future events into consideration.
- Utility-based agents base their decisions on classic axiomatic utility theory in order to act rationally.
function Simple-Reflex-Agent(percept) returns action
    static: rules, a set of condition-action rules

    state ← Interpret-Input(percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action[rule]
    return action
Reduce the size of the lookup table by formulating commonly occurring patterns as condition-action rules:
    if car-in-front-brakes then initiate braking
The agent acts on the first rule whose condition matches the current situation. This works only if the current percept is sufficient for making the correct decision.
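A Python sketch of the simple reflex agent for the braking example. The rule set and the percept encoding (a dict with a `brake_lights_on` flag) are illustrative assumptions:

```python
# Sketch of the Simple-Reflex-Agent with condition-action rules.

def interpret_input(percept):
    # Abstract the raw percept into a state description.
    return {"car_in_front_brakes": percept.get("brake_lights_on", False)}

RULES = [
    # (condition, action) pairs; the agent fires the first match.
    (lambda s: s["car_in_front_brakes"], "initiate-braking"),
    (lambda s: True, "keep-driving"),          # default rule
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in RULES:            # first rule whose condition matches
        if condition(state):
            return action

print(simple_reflex_agent({"brake_lights_on": True}))   # initiate-braking
print(simple_reflex_agent({"brake_lights_on": False}))  # keep-driving
```

Note that the agent inspects only the current percept; it has no memory of past world states.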
function Reflex-Agent-With-State(percept) returns action
    static: rules, a set of condition-action rules
            state, a description of the current world

    state ← Update-State(state, percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action[rule]
    state ← Update-State(state, action)
    return action
Updating the internal state requires two kinds of encoded knowledge:
- how the world changes (independent of the agent's actions)
- how the agent's actions affect the world
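A Python sketch of the reflex agent with state. Here `update_state` encodes both kinds of knowledge in one place: how the world evolves on its own (dirt appears) and how the agent's own action changes it (sucking removes dirt). The vacuum-world details are assumptions for illustration:

```python
# Sketch of Reflex-Agent-With-State.

def update_state(state, event):
    state = dict(state)
    if event == "dirty":
        state["dirt_here"] = True              # world evolution: dirt observed
    elif event == "suck":
        state["dirt_here"] = False             # effect of the agent's own action
    return state

RULES = [
    (lambda s: s.get("dirt_here"), "suck"),
    (lambda s: True, "move"),                  # default rule
]

def make_stateful_agent():
    state = {}                                 # static: description of the current world
    def agent(percept):
        nonlocal state
        state = update_state(state, percept)
        action = next(a for cond, a in RULES if cond(state))
        state = update_state(state, action)    # fold the chosen action back in
        return action
    return agent

agent = make_stateful_agent()
print(agent("dirty"))   # suck
print(agent("clean"))   # move
```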
Goal-based agents: knowing the current state is not always enough when there are alternative decision paths (e.g., where should the car go at an intersection?). The agent also needs a goal to be achieved. Search and planning are devoted to finding action sequences that achieve the goal, e.g., reaching a new destination for the taxi-driving agent.
The goal-based agent reasons about the future: "what will happen if I do ..." (the fundamental difference). It is less efficient, but more flexible, than an agent that is purely reflex-based.
Example: a shopping agent. It can read signs and adapt to changes (e.g., specials at the ends of aisles), and must induce objects from percepts. What if the store is out of some item that should be there? Achieve the goal some other way, e.g., no milk cartons: get canned milk or powdered milk.
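The milk example can be sketched as a tiny goal-based program: the goal is "get milk", and when the preferred item is unavailable the agent achieves the goal some other way. The item names and substitution list are illustrative assumptions:

```python
# Goal-based sketch: achieve the goal via a substitute when the
# preferred item is out of stock.

SUBSTITUTES = {
    "milk-carton": ["canned-milk", "powdered-milk"],
}

def plan_purchase(goal_item, in_stock):
    # Prefer the goal item; otherwise try substitutes that still
    # satisfy the underlying goal.
    for item in [goal_item] + SUBSTITUTES.get(goal_item, []):
        if item in in_stock:
            return item
    return None                         # goal cannot be achieved

print(plan_purchase("milk-carton", {"canned-milk", "bread"}))   # canned-milk
```

A reflex agent with a fixed "buy milk-carton" rule would simply fail here; the goal-based agent succeeds because it reasons about what satisfies the goal.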
Utility-based agents: goals alone do not guarantee high-quality behavior. A utility function measures the agent's performance, allows tradeoffs when goals conflict, and can justify trying new actions to improve future performance (exploration).
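The difference from a goal-based agent can be shown in a few lines: instead of a binary goal test, the agent scores each candidate action's predicted outcome with a utility function and picks the maximum. The candidate actions and the utility weights below are illustrative assumptions:

```python
# Utility-based sketch: choose the action whose predicted outcome
# maximizes a real-valued utility function.

def utility(outcome):
    # Map an outcome description to a real number (assumed weights).
    return outcome["progress"] - 2.0 * outcome["risk"]

def choose_action(candidates):
    # candidates: action -> predicted outcome
    return max(candidates, key=lambda a: utility(candidates[a]))

candidates = {
    "fast-route": {"progress": 10.0, "risk": 4.0},   # utility 10 - 8 = 2
    "safe-route": {"progress": 7.0, "risk": 1.0},    # utility 7 - 2 = 5
}
print(choose_action(candidates))   # safe-route
```

Both routes achieve the goal; utility is what lets the agent trade progress against risk rationally.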
Operators (actions) transform one state into another state. Problem solving is then a search for a path (through the state-space graph) from the starting state to a goal state.
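This path-finding view can be sketched with breadth-first search over a state-space graph; the four-state graph here is an illustrative assumption:

```python
# Sketch of problem solving as search: breadth-first search for a
# path from a start state to a goal state in a state-space graph.

from collections import deque

def bfs_path(graph, start, goal):
    frontier = deque([[start]])            # paths to expand, FIFO
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                  # goal test
            return path
        for successor in graph.get(state, []):   # apply operators
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None                            # no path: goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_path(graph, "A", "D"))   # ['A', 'B', 'D']
```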
Goal-based agents search, combining percepts with their knowledge about the world and the problem domain, until a goal state has been reached.
- An agent perceives and acts; it consists of an architecture and is implemented by a program.
- An ideal (rational) agent takes the action that maximizes its expected performance, given the percept sequence received so far.
- An autonomous agent uses its own experience rather than built-in knowledge of the environment supplied by the designer.
- A reflex agent acts on the current percept; a reflex agent with internal state also updates its internal state.