Lecture 2: Representational Dimensions - PowerPoint PPT Presentation



SLIDE 1

Computer Science CPSC 322

Lecture 2: Representational Dimensions

SLIDE 2

ANNOUNCEMENT

  • You need to register your Clicker in Connect if you have never done so before
  • Otherwise your answers won’t be recorded
  • Assignment 0 due on Thursday
  • People on the wait list can find the assignment in Piazza (post @10)
  • You can send it to Vanessa via email by 4:30 on Th. if you want it to count, in case you get into the course

SLIDE 3

Teaching Team

Instructor

  • Cristina Conati (conati@cs.ubc.ca; office ICICS/CS 107)

Teaching Assistants

  • Borna Ghotbi (bghotbi@cs.ubc.ca)
  • Vanessa Putnam (vputnam@cs.ubc.ca)
  • Michael Przystupa (bot267@ugrad.cs.ubc.ca)
  • Wenyi Wang (wenyw@cs.ubc.ca)

OFFICE HOURS NOW AVAILABLE ON THE WEBSITE

SLIDE 4

Today’s Lecture

  • Recap from last lecture
  • Representation and Reasoning: Dimensions
  • An Overview of This Course
  • Further Representational Dimensions
  • Intro to search (time permitting)
SLIDE 5

Course Essentials

  • Course website: CHECK IT OFTEN!
    http://www.cs.ubc.ca/~conati/322/322-2017W1/course-page.html
  • Syllabus, lecture slides, other material
  • Textbook: Artificial Intelligence: Foundations of Computational Agents, by Poole and Mackworth (P&M)
  • Available electronically (free): http://artint.info/html/ArtInt.html
  • We will cover at least Chapters 1, 3, 4, 5, 6, 8, 9
  • Connect for assignments and marks
  • Piazza for discussion board
  • AIspace: online tools for learning Artificial Intelligence http://aispace.org/

SLIDE 6

What is Artificial Intelligence?

Clicker Question: We use the following definition

  • The study and design of

  • D. Systems that think like humans
  • A. Systems that think rationally
  • C. Systems that act rationally
  • B. Systems that act like humans
SLIDE 7

What is Artificial Intelligence?

Clicker Question: We use the following definition

  • The study and design of

  • D. Systems that think like humans
  • A. Systems that think rationally
  • C. Systems that act rationally
  • B. Systems that act like humans
SLIDE 8

AI as Study and Design of Intelligent Agents

  • Intelligent agents: artifacts that act rationally in their environment
  • Their actions are appropriate for their goals and circumstances
  • They are flexible to changing environments and goals
  • They learn from experience
  • They make appropriate choices given perceptual limitations and limited resources
  • This definition drops the constraint of cognitive plausibility
  • Same as building flying machines by understanding general principles of flying (aerodynamics) vs. by reproducing how birds fly
SLIDE 9

Intelligent Agents in the World

[Diagram: Representation & Reasoning abilities at the core, connected to Natural Language Understanding + Computer Vision + Speech Recognition + Physiological Sensing + Mining of Interaction Logs; Knowledge Representation + Machine Learning + Reasoning + Decision Theory; Robotics + Human Computer/Robot Interaction + Natural Language Generation]

SLIDE 10

Today’s Lecture

  • Recap from last lecture
  • Representation and Reasoning: Dimensions
  • An Overview of This Course
  • Further Representational Dimensions
  • Intro to search (time permitting)
SLIDE 11

Representation and Reasoning

To use these inputs an agent needs to represent them → knowledge. One of AI’s goals: specify how a system can

  • Acquire and represent knowledge about a domain (representation)
  • Use the knowledge to solve problems in that domain (reasoning)

Representation & Reasoning

SLIDE 12

problem ⟹ representation ⟹ computation ⟹ representation ⟹ solution

Representation and Reasoning (R&R) System

  • A representation language to describe
  • The environment
  • Problems (questions/tasks) to be solved
  • Computational reasoning procedures to compute a solution to a problem
  • E.g., an answer, sequence of actions
  • Choice of an appropriate R&R system depends on various dimensions, e.g. properties of the environment, the type of problems, the agent, the computational resources, etc.

SLIDE 13

Representational Dimensions

[Table skeleton: Environment (Deterministic / Stochastic) × Problem Type (Static / Sequential); each cell will include an R&R system covered in the course]

We’ll start by describing dimensions related to the problem and environment. Then we’ll include in each cell the various R&R systems covered in the course, and discuss some more dimensions.

SLIDE 14

Problem Types

  • Static: finding a solution does not involve reasoning into the future (time is ignored)
  • One-step solution
  • Sequential: finding a solution requires looking a number of steps into the future, e.g.,
  • Fixed horizon (fixed number of steps)
  • Indefinite horizon (finite, but unknown number of steps)
SLIDE 15

Problem Types

  • Constraint Satisfaction – Find a state that satisfies a set of constraints (static).
  • e.g., what is a feasible schedule for final exams?
  • Answering Query – Is a given proposition true/likely given what is known? (static).
  • e.g., does the patient suffer from viral hepatitis?
  • Planning – Find a sequence of actions to reach a goal state / maximize outcome (sequential).
  • e.g., navigate through an environment to reach a particular location

SLIDE 16

Representational Dimensions

[Table skeleton: Environment (Deterministic / Stochastic) × Problem Type (Static: Constraint Satisfaction, Query; Sequential: Planning)]

SLIDE 17

Deterministic vs. Stochastic (Uncertain) Environment

  • Sensing Uncertainty: the agent cannot fully observe the current state of the world when acting
  • Effect Uncertainty: the agent does not know for sure the immediate effects of its actions

Sensing Uncertainty? Teacher’s explanation; Soccer Player Kick

SLIDE 18

Deterministic vs. Stochastic (Uncertain) Environment

  • Sensing Uncertainty: the agent cannot fully observe the current state of the world
  • Effect Uncertainty: the agent does not know for sure the effects of its actions

Sensing Uncertainty? Teacher’s explanation: YES; Soccer Player Kick: NO
Effect Uncertainty? Teacher’s explanation; Soccer Player Kick

SLIDE 19

Deterministic vs. Stochastic (Uncertain) Environment

  • Sensing Uncertainty: the agent cannot fully observe the current state of the world
  • Effect Uncertainty: the agent does not know for sure the effects of its actions

Sensing Uncertainty? Teacher’s explanation: YES; Soccer Player Kick: NO
Effect Uncertainty? Teacher’s explanation: YES; Soccer Player Kick: YES

SLIDE 20

Clicker Question: Chess and Poker

  • A. Poker and Chess are both stochastic
  • B. Chess is stochastic and Poker is deterministic
  • C. Poker and Chess are both deterministic
  • D. Chess is deterministic and Poker is stochastic

An environment is stochastic if at least one of these is true

  • Sensing Uncertainty: the agent cannot fully observe the current state of the world
  • Effect Uncertainty: the agent does not know for sure the immediate, direct effects of its actions

SLIDE 21

Clicker Question: Chess and Poker

  • A. Poker and Chess are both stochastic
  • B. Chess is stochastic and Poker is deterministic
  • C. Poker and Chess are both deterministic
  • D. Chess is deterministic and Poker is stochastic

An environment is stochastic if at least one of these is true

  • Sensing Uncertainty: the agent cannot fully observe the current state of the world
  • Effect Uncertainty: the agent does not know for sure the immediate, direct effects of its actions

SLIDE 22

Deterministic vs. Stochastic Domains

  • Historically, AI has been divided into two camps: those who prefer representations based on logic and those who prefer probability.
  • In CPSC 322 we introduce both representational families, and 422 goes into more detail

Note: Some of the most exciting current research in AI is actually building bridges between these camps.
SLIDE 23

Representational Dimensions

[Table skeleton: Environment (Deterministic / Stochastic) × Problem Type (Static / Sequential); each cell will include an R&R system covered in the course]

We described dimensions related to the problem and environment. Now we include in each cell the various R&R systems covered in the course, and discuss some more dimensions.

SLIDE 24

Today’s Lecture

  • Recap from last lecture
  • Representation and Reasoning: Dimensions
  • An Overview of This Course
  • Further Representational Dimensions
  • Intro to search (time permitting)
SLIDE 25

Representational Dimensions

Environment × Problem Type; each cell lists Representation: Reasoning Technique

                             Deterministic               Stochastic
  Constraint Satisfaction    Variables + Constraints:
  (Static)                   Search, Arc Consistency
  Query (Static)             Logics: Search              Bayesian Nets: Variable Elimination
  Planning (Sequential)      STRIPS: Search              Decision Nets: Variable Elimination;
                                                         Markov Processes: Value Iteration

SLIDE 26

Other Representational Dimensions

We’ve already discussed:

  • Problem Types (Static vs. Sequential)
  • Deterministic versus stochastic domains

Some other important dimensions:

  • Representation scheme: explicit state or features or relations
  • Flat or hierarchical representation
  • Knowledge given versus knowledge learned from experience
  • Goals versus complex preferences
  • Single-agent vs. multi-agent
SLIDE 27

Today’s Lecture

  • Recap from last lecture
  • Representation and Reasoning: Dimensions
  • An Overview of This Course
  • Further Representational Dimensions
  • Intro to search (time permitting)
SLIDE 28

Explicit State vs. Features

How do we model the environment?

  • You can enumerate the states of the world, or
  • A state can be described in terms of features
  • Often a more natural description
  • 30 binary features (also called propositions) can represent

SLIDE 29

Explicit State vs. Features

How do we model the environment?

  • You can enumerate the states of the world, or
  • A state can be described in terms of features
  • Often a more natural description
  • 30 binary features (also called propositions) can represent 2^30 = 1,073,741,824 states
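As a quick check of the arithmetic above, a minimal sketch in plain Python (the function name is ours, for illustration):

```python
# Each binary feature doubles the number of distinguishable states,
# so n binary features can represent 2**n states.
def num_states(num_binary_features: int) -> int:
    return 2 ** num_binary_features

print(num_states(30))  # 1073741824, i.e. 2^30 states from 30 propositions
```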

SLIDE 30

Explicit State vs. Features

Mars Explorer Example

  Weather      {S, C}
  Temperature  [-40, 40]
  Longitude    [0, 359]
  Latitude     [0, 179]

One possible state? Number of possible states (mutually exclusive)?

SLIDE 31

Explicit State vs. Features

Mars Explorer Example

  Weather      {S, C}
  Temperature  [-40, 40]
  Longitude    [0, 359]
  Latitude     [0, 179]

One possible state: {S, -30, 320, 210}
Number of possible states (mutually exclusive): 2 x 81 x 360 x 180
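The product above can be verified with a short sketch; the domain sizes come from the example, while the dictionary layout is our own:

```python
# Number of joint states is the product of the feature-domain sizes.
# Domains from the Mars Explorer example: Weather {S, C}, integer
# Temperature in [-40, 40], Longitude in [0, 359], Latitude in [0, 179].
domain_sizes = {
    "Weather": 2,       # {S, C}
    "Temperature": 81,  # -40..40 inclusive
    "Longitude": 360,   # 0..359 inclusive
    "Latitude": 180,    # 0..179 inclusive
}

total_states = 1
for size in domain_sizes.values():
    total_states *= size

print(total_states)  # 2 * 81 * 360 * 180 = 10497600
```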

SLIDE 32

Explicit State vs. Features vs. Relations

  • States can be described in terms of objects and relationships
  • There is a proposition for each relationship on each tuple of objects
  • University Example:
  • Students (S) = {s1, s2, s3, …, s200}
  • Courses (C) = {c1, c2, c3, …, c10}
  • Registered (S, C)
  • Number of Relations:
  • Number of Propositions:
  • Number of States:

SLIDE 33

Explicit State vs. Features vs. Relations

  • States can be described in terms of objects and relationships
  • There is a proposition for each relationship on each tuple of objects
  • University Example:
  • Students (S) = {s1, s2, s3, …, s200}
  • Courses (C) = {c1, c2, c3, …, c10}
  • Registered (S, C)
  • Number of Relations: 1
  • Number of Propositions: 200*10 = 2000
  • Number of States: 2^2000
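The counts above can be reproduced with a small sketch (variable names are illustrative):

```python
# Relations generate propositions: the single relation Registered(S, C)
# yields one proposition per (student, course) pair, and each
# proposition can independently be true or false.
num_students = 200
num_courses = 10

num_propositions = num_students * num_courses   # 200 * 10 = 2000
num_relational_states = 2 ** num_propositions   # 2^2000 states

print(num_propositions)  # 2000
# The clicker question on the next slides works the same way: a binary
# relation over 9 individuals gives 9 * 9 = 81 propositions, hence
# 2**81 states.
print(9 * 9)  # 81
```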

SLIDE 34

Clicker Question

One binary relation (e.g., likes) and 9 individuals (e.g. people). How many states?

  • A. 81^2
  • B. 2^18
  • C. 2^81
  • D. 10^9

SLIDE 35

Clicker Question

One binary relation (e.g., likes) and 9 individuals (e.g. people). How many states?

  • A. 81^2
  • B. 2^18
  • C. 2^81
  • D. 10^9

SLIDE 36

Flat vs. hierarchical

  • Should we model the whole world on the same level of abstraction?
  • Single level of abstraction: flat
  • Multiple levels of abstraction: hierarchical
  • Example: Planning a trip from here to a resort in Cancun

Going to the airport, Take a cab, Call a cab, Lookup number, Dial number, Ride in the cab, Pay for the cab, Check in, …

  • This course: mainly flat representations
  • Hierarchical representations required for scaling up.

SLIDE 37

Knowledge given vs. knowledge learned from experience

  • The agent is provided with a model of the world once and for all, or
  • The agent can learn how the world works based on experience
  • in this case, the agent often still needs some prior knowledge
  • This course: mostly knowledge given
  • Learning: CPSC 340 and CPSC 422
SLIDE 38

Goals vs. (complex) preferences

  • An agent may have a goal that it wants to achieve, e.g.,
  • there is some state or set of states that the agent wants to be in
  • there is some proposition or set of propositions that the agent wants to make true
  • An agent may have preferences
  • a preference/utility function describes how happy the agent is in each state of the world
  • Agent’s task is to reach a state which makes it as happy as possible
  • Preferences can be complex
  • This course: goals and simple preferences

What beverage to order?

  • I am in a hurry so I need something quickly
  • I like Cappuccino better than Espresso, but it takes longer to make…

SLIDE 39

Single-agent vs. Multi-agent domains

  • Does the environment include other agents?
  • If there are other agents, it can be useful to explicitly model
  • their goals and beliefs,
  • how they react to our actions
  • Other agents can be: cooperative, competitive, or a bit of both
  • This course: only single agent scenario

SLIDE 40

Summary
Would like most general agents possible, but in this course we need to restrict ourselves to:

  • Flat representations (vs. hierarchical)
  • Knowledge given (vs. knowledge learned)
  • Goals and simple preferences (vs. complex preferences)
  • Single-agent scenarios (vs. multi-agent scenarios)

We will look at

  • Deterministic and stochastic domains
  • Static and Sequential problems

And see examples of representations using

  • Explicit state or features or relations


SLIDE 41

AI Application

  • At the beginning of next class, we will look at some AI applications that you have found for your assignment 0
  • You are asked to describe them in terms of the elements above and some more

SLIDE 42

  • What does the AI application do?
  • Goals
  • prior knowledge needed
  • past experiences that it does (or could) learn from
  • Observations needed
  • Actions performed
  • AI technologies used
  • Why is it intelligent?
  • Evaluation?
SLIDE 43

Today’s Lecture

  • Recap from last lecture
  • Representation and Reasoning: Dimensions
  • An Overview of This Course
  • Further Representational Dimensions
  • Intro to Search (time permitting)
SLIDE 44

Representational Dimensions

Environment × Problem Type; each cell lists Representation: Reasoning Technique

                             Deterministic               Stochastic
  Constraint Satisfaction    Variables + Constraints:
  (Static)                   Search, Arc Consistency
  Query (Static)             Logics: Search              Bayesian Nets: Variable Elimination
  Planning (Sequential)      STRIPS: Search              Decision Nets: Variable Elimination;
                                                         Markov Processes: Value Iteration

SLIDE 45

Representational Dimensions

Environment × Problem Type; each cell lists Representation: Reasoning Technique

                             Deterministic               Stochastic
  Constraint Satisfaction    Variables + Constraints:
  (Static)                   Search, Arc Consistency
  Query (Static)             Logics: Search              Bayesian Nets: Variable Elimination
  Planning (Sequential)      STRIPS: Search              Decision Nets: Variable Elimination;
                                                         Markov Processes: Value Iteration

First Part of the Course

SLIDE 46

Representational Dimensions

Environment × Problem Type; each cell lists Representation: Reasoning Technique

                             Deterministic               Stochastic
  Constraint Satisfaction    Variables + Constraints:
  (Static)                   Search, Arc Consistency
  Query (Static)             Logics: Search              Bayesian Nets: Variable Elimination
  Planning (Sequential)      STRIPS: Search              Decision Nets: Variable Elimination;
                                                         Markov Processes: Value Iteration

We’ll focus on Search
SLIDE 47

(Adversarial) Search: Checkers

Source: IBM Research

  • Early learning work in the 1950s by Arthur Samuel at IBM
  • Chinook program by Jonathan Schaeffer (UofA)
  • Search explores the space of possible moves and their consequences
  • 1994: world champion
  • 2007: declared unbeatable
SLIDE 48

(Adversarial) Search: Chess

In 1997, Garry Kasparov, the chess grandmaster and reigning world champion, played against Deep Blue, a program written by researchers at IBM

Source: IBM Research

SLIDE 49

(Adversarial) Search: Chess

  • Deep Blue won 3 games, lost 2, tied 1
  • 30 CPUs + 480 chess processors
  • Searched 126 million states per sec
  • Generated 30 billion positions per move, reaching depth 14 routinely

SLIDE 50

Search

  • Often we are not given an algorithm to solve a problem, but only a specification of what a solution is; we have to search for a solution.

– Enumerate a set of potential partial solutions
– Check to see if they are solutions or could lead to one

SLIDE 51

Simple Search Agent

Deterministic, goal-driven agent

  • Agent is in a start state
  • Agent is given a goal (subset of possible states)
  • Environment changes only when the agent acts
  • Agent perfectly knows:
  • actions that can be applied in any given state
  • the state it is going to end up in when an action is applied in a given state
  • The sequence of actions (and appropriate ordering) taking the agent from the start state to a goal state is the solution

SLIDE 52

Definition of a search problem

  • Initial state(s)
  • Set of actions (operators) available to the agent
  • An action function that, given a state and an action, returns a new state
  • Goal state(s)
  • Search space: set of states that will be searched for a path from initial state to goal, given the available actions
  • states are nodes and actions are links between them
  • Not necessarily given explicitly (state space might be infinite)
  • Path Cost (we ignore this for now)
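The components above can be captured in a minimal, illustrative Python skeleton. The class and method names (`SearchProblem`, `result`, `is_goal`) and the toy counting problem are our own, not from the slides; the enumerate-and-check loop mirrors the generate-and-test idea from the previous slide:

```python
# A minimal search-problem skeleton matching the definition above:
# initial state(s), actions, an action function, and goal state(s).
class SearchProblem:
    def __init__(self, initial_state, actions, result, is_goal):
        self.initial_state = initial_state
        self.actions = actions    # actions(state) -> iterable of actions
        self.result = result      # result(state, action) -> new state
        self.is_goal = is_goal    # is_goal(state) -> bool

# Tiny example: starting at 0, reach 3 using +1 or +2 steps.
problem = SearchProblem(
    initial_state=0,
    actions=lambda s: [1, 2],
    result=lambda s, a: s + a,
    is_goal=lambda s: s == 3,
)

# Depth-limited enumeration of action sequences (generate and check).
def solve(problem, depth=5, state=None, path=()):
    state = problem.initial_state if state is None else state
    if problem.is_goal(state):
        return list(path)
    if depth == 0:
        return None
    for a in problem.actions(state):
        found = solve(problem, depth - 1, problem.result(state, a), path + (a,))
        if found is not None:
            return found
    return None

print(solve(problem))  # [1, 1, 1] — depth-first tries the +1 action first
```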

SLIDE 53

Three examples

  1. The delivery robot planning the route it will take in a building to get from one room to another (Ch 1.6)
  2. Vacuum cleaner world
  3. Solving an 8-puzzle

SLIDE 54

Environment for Delivery Robot (Ch 1.6)

Simplified:

  • Consider only bold locations here
  • Limits in direction of movement (e.g., can only move in the direction a door opens, can’t go back to previous location, etc.)
  • Start: o103
  • Goal: r123
SLIDE 55

Search Space for the Delivery Robot

SLIDE 56

Learning Goals for today’s class

  • Define what a representation and reasoning system is
  • Differentiate between single/static and sequential problems, as well as between deterministic and stochastic ones

SLIDE 57

TODO for next class

  • Read Ch 3 (3.1-3.5.2, 3.7.3)
  • Assignment 0
  • Review the definitions in the next three slides. I won’t go over them next week
SLIDE 58

Graphs

  • A directed graph consists of a set N of nodes (vertices) and a set A of ordered pairs of nodes, called edges (arcs).
  • Node n2 is a neighbor of n1 if there is an arc from n1 to n2. That is, if 〈n1, n2〉 ∈ A.
  • A path is a sequence of nodes n0, n1, ..., nk such that 〈ni-1, ni〉 ∈ A.
  • A cycle is a non-empty path such that the start node is the same as the end node.
  • A directed acyclic graph (DAG) is a graph with no cycles.
  • Given a set of start nodes and goal nodes, a solution is a path from a start node to a goal node.

SLIDE 59

Graph specification for the Delivery Robot

N = {mail, ts, o103, b3, o109, ...}
A = {〈ts,mail〉, 〈o103,ts〉, 〈o103,b3〉, 〈o103,o109〉, ...}
One of several solution paths: 〈o103, o109, o119, o123, r123〉
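The fragment above is enough to sketch a search for a solution path. Only four arcs are listed on the slide, so the arcs completing the stated solution path (o109→o119, o119→o123, o123→r123) are assumed here to make the example runnable; breadth-first search stands in for a generic search procedure:

```python
from collections import deque

# Directed graph as adjacency lists. The first arcs are from the slide;
# the arcs o109->o119, o119->o123, o123->r123 are assumed from the
# stated solution path.
arcs = {
    "ts": ["mail"],
    "o103": ["ts", "b3", "o109"],
    "o109": ["o119"],
    "o119": ["o123"],
    "o123": ["r123"],
}

def bfs_path(arcs, start, goal):
    """Breadth-first search: returns a path with the fewest arcs."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in arcs.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists in this graph

print(bfs_path(arcs, "o103", "r123"))  # ['o103', 'o109', 'o119', 'o123', 'r123']
```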

SLIDE 60

Branching Factor

  • The forward branching factor of a node is the number of arcs going out of the node
  • The backward branching factor of a node is the number of arcs going into the node
  • If the forward branching factor of a node is b and the graph is a tree, how many nodes are n steps away from that node?
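The answer to the question above is b^n: each step multiplies the number of reachable nodes by b. A quick sketch (the function name is ours):

```python
# In a tree with constant forward branching factor b, each step away
# from a node multiplies the frontier size by b, so b**n nodes are
# exactly n steps away.
def nodes_at_depth(b: int, n: int) -> int:
    count = 1  # just the starting node at n = 0
    for _ in range(n):
        count *= b
    return count

print(nodes_at_depth(3, 2))  # 9, i.e. 3^2 nodes two steps away
```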