Probability and Time: Hidden Markov Models (HMMs)

Computer Science CPSC 322, Lecture 32 (Textbook Chpt 6.5.2)
June 20, 2017


SLIDE 1

Probability and Time: Hidden Markov Models (HMMs)
Computer Science CPSC 322, Lecture 32 (Textbook Chpt 6.5.2)
June 20, 2017

SLIDE 2

Lecture Overview

  • Recap
  • Markov Models
  • Markov Chain
  • Hidden Markov Models

SLIDE 3

Answering Queries under Uncertainty

Models: Probability Theory; Static Belief Network & Variable Elimination; Dynamic Bayesian Network; Markov Chains; Hidden Markov Models

Applications: Email spam filters; Diagnostic Systems (e.g., medicine); Natural Language Processing; Student Tracing in tutoring Systems; Monitoring (e.g., credit cards); BioInformatics; Robotics

SLIDE 4

Stationary Markov Chain (SMC)

A stationary Markov Chain: for all t > 0
  • P(St+1 | S0, …, St) = P(St+1 | St)   (Markov assumption)
  • P(St+1 | St) is the same for every t   (stationarity)

We only need to specify P(S0) and P(St+1 | St)
  • Simple Model, easy to specify
  • Often the natural model
  • The network can extend indefinitely

Variations of SMC are at the core of most Natural Language Processing (NLP) applications!
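A stationary Markov chain is fully specified by just P(S0) and a single transition model P(St+1 | St) shared by every time step. A minimal Python sketch of specifying and sampling one (the two-state weather chain is an invented illustration, not from the slides):

```python
import random

# A stationary Markov chain needs only two things:
# the initial distribution P(S0) and one transition model P(S_{t+1} | S_t)
# shared by every time step (stationarity).
states = ["sunny", "rainy"]          # hypothetical two-state example
p_s0 = {"sunny": 0.5, "rainy": 0.5}  # P(S0)
p_trans = {                          # P(S_{t+1} | S_t), same for all t
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def sample(dist):
    """Draw one value from a {value: probability} distribution."""
    r = random.random()
    cum = 0.0
    for value, p in dist.items():
        cum += p
        if r < cum:
            return value
    return value  # guard against floating-point round-off

def simulate(n_steps):
    """Sample a trajectory S0, S1, ..., S_{n_steps}."""
    s = sample(p_s0)
    traj = [s]
    for _ in range(n_steps):
        s = sample(p_trans[s])  # Markov: next state depends only on current
        traj.append(s)
    return traj

print(simulate(5))
```

Because the chain is stationary, the same `p_trans` table is reused at every step, which is why the network can extend indefinitely at no extra modelling cost.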

SLIDE 5

Lecture Overview

  • Recap
  • Markov Models
  • Markov Chain
  • Hidden Markov Models

SLIDE 6

How can we minimally extend Markov Chains?
(maintaining the Markov and stationary assumptions)

A useful situation to model is the one in which:
  • the reasoning system does not have access to the states
  • but can make observations that give some information about the current state

SLIDE 7

Hidden Markov Model

A Hidden Markov Model (HMM) starts with a Markov chain, and adds a noisy observation about the state at each time step:
  • P(S0) specifies initial conditions
  • P(St+1 | St) specifies the dynamics
  • P(Ot | St) specifies the sensor model
  • |domain(S)| = k
  • |domain(O)| = h

A. 2 × h    B. h × h    C. k × h    D. k × k

SLIDE 8

Hidden Markov Model

A Hidden Markov Model (HMM) starts with a Markov chain, and adds a noisy observation about the state at each time step:
  • P(S0) specifies initial conditions
  • P(St+1 | St) specifies the dynamics
  • P(Ot | St) specifies the sensor model
  • |domain(S)| = k
  • |domain(O)| = h
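With |domain(S)| = k and |domain(O)| = h, specifying an HMM takes k numbers for P(S0), a k × k table for the dynamics P(St+1 | St), and a k × h table for the sensor model P(Ot | St). A sketch with invented placeholder numbers:

```python
# HMM with k states and h observation values, stored as nested lists.
k, h = 3, 2  # hypothetical sizes

# P(S0): k entries
p_s0 = [1/3, 1/3, 1/3]

# P(S_{t+1} | S_t): k x k entries (row = current state, column = next state)
p_trans = [
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
]

# P(O_t | S_t): k x h entries (row = state, column = observation value)
p_obs = [
    [0.9, 0.1],
    [0.5, 0.5],
    [0.2, 0.8],
]

assert len(p_s0) == k
assert len(p_trans) == k and all(len(row) == k for row in p_trans)
assert len(p_obs) == k and all(len(row) == h for row in p_obs)
# every conditional distribution sums to 1
for row in p_trans + p_obs:
    assert abs(sum(row) - 1.0) < 1e-9
```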
SLIDE 9

Example: Localization for "Pushed around" Robot

  • Localization (where am I?) is a fundamental problem in robotics
  • Suppose a robot is in a circular corridor with 16 locations
  • There are four doors at positions: 2, 4, 7, 11
  • The Robot initially doesn't know where it is
  • The Robot is pushed around. After a push it can stay in the same location, move left or right.
  • The Robot has a Noisy sensor telling whether it is in front of a door

SLIDE 10

This scenario can be represented as…

  • Example Stochastic Dynamics: when pushed, it stays in the same location with p = 0.2, moves one step left or right with equal probability
  • P(Loct+1 | Loct), for Loct = 10

SLIDE 11

This scenario can be represented as…

  • Example Stochastic Dynamics: when pushed, it stays in the same location with p = 0.2, moves left or right with equal probability
  • P(Loct+1 | Loct)
  • P(Loc0)

SLIDE 12

This scenario can be represented as…

Example of Noisy sensor telling whether it is in front of a door:
  • If it is in front of a door, P(Ot = T) = 0.8
  • If not in front of a door, P(Ot = T) = 0.1

P(Ot | Loct)
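The corridor model on these slides can be written down directly: 16 locations, doors at 2, 4, 7 and 11, stay-probability 0.2 after a push, and the 0.8/0.1 door sensor. A sketch in Python (the 0.4/0.4 left/right split is my reading of "moves one step left or right with equal probability"):

```python
N = 16                    # circular corridor with 16 locations (0..15)
DOORS = {2, 4, 7, 11}     # door positions from the slides

# Dynamics P(Loc_{t+1} | Loc_t): stay with 0.2, move one step
# left or right with 0.4 each (the "equal probability" split).
def p_next(loc_next, loc):
    if loc_next == loc:
        return 0.2
    if loc_next == (loc - 1) % N or loc_next == (loc + 1) % N:
        return 0.4
    return 0.0

# Sensor model P(O_t = True | Loc_t): 0.8 in front of a door, 0.1 otherwise.
def p_door_obs(obs, loc):
    p_true = 0.8 if loc in DOORS else 0.1
    return p_true if obs else 1.0 - p_true

# sanity check: each row of the dynamics sums to 1
for loc in range(N):
    assert abs(sum(p_next(nxt, loc) for nxt in range(N)) - 1.0) < 1e-9
```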

SLIDE 13

Useful inference in HMMs

  • Localization: Robot starts at an unknown location and is pushed around t times. It wants to determine where it is.
  • In general: compute the posterior distribution over the current state given all evidence to date: P(St | O0 … Ot)
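The posterior P(St | O0 … Ot) is computed recursively by filtering: predict one step with the dynamics, then reweight by the likelihood of the new observation and normalize. A generic sketch with an invented two-state model:

```python
# Filtering in an HMM: maintain b_t(s) = P(S_t = s | O_0 ... O_t).
# The tiny two-state model at the bottom is an invented example.

def normalize(b):
    z = sum(b)
    return [x / z for x in b]

def filter_step(belief, p_trans, likelihood):
    """One forward update: predict with the dynamics, then
    weight by P(o | s) for the new observation and renormalize."""
    n = len(belief)
    # predict: P(S_{t+1} | O_0..O_t) = sum_s P(S_{t+1} | s) * b_t(s)
    predicted = [sum(p_trans[s][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    # update: multiply by the sensor likelihood of the new observation
    return normalize([likelihood[s2] * predicted[s2] for s2 in range(n)])

# hypothetical 2-state example
p_trans = [[0.7, 0.3], [0.3, 0.7]]
p_obs_true = [0.9, 0.2]            # P(O = True | S = s)
belief = [0.5, 0.5]                # uniform P(S0)
belief = filter_step(belief, p_trans, p_obs_true)  # observed O = True
print(belief)
```

Each update costs O(k²) for k states, so the belief can be maintained online as evidence arrives, without revisiting old observations.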

SLIDE 14

Example: Robot Localization

  • Suppose a robot wants to determine its location based on its actions and its sensor readings
  • Three actions: goRight, goLeft, Stay
  • This can be represented by an augmented HMM
SLIDE 15

Robot Localization Sensor and Dynamics Model

  • Sample Sensor Model (assume same as for pushed around)
  • Sample Stochastic Dynamics: P(Loct+1 | Actiont, Loct)

P(Loct+1 = L | Actiont = goRight, Loct = L) = 0.1
P(Loct+1 = L+1 | Actiont = goRight, Loct = L) = 0.8
P(Loct+1 = L+2 | Actiont = goRight, Loct = L) = 0.074
P(Loct+1 = L' | Actiont = goRight, Loct = L) = 0.002 for all other locations L'

  • All location arithmetic is modulo 16
  • The action goLeft works the same but to the left
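The goRight dynamics above can be tabulated directly; note the entries sum to one, since 0.1 + 0.8 + 0.074 + 13 × 0.002 = 1. A sketch with the mirrored goLeft model:

```python
N = 16  # locations 0..15, all location arithmetic modulo 16

def p_goright(loc_next, loc):
    """P(Loc_{t+1} = loc_next | Action_t = goRight, Loc_t = loc)."""
    if loc_next == loc:
        return 0.1              # slips and stays put
    if loc_next == (loc + 1) % N:
        return 0.8              # intended move
    if loc_next == (loc + 2) % N:
        return 0.074            # overshoots
    return 0.002                # any of the other 13 locations

def p_goleft(loc_next, loc):
    """goLeft works the same but to the left (mirror of goRight)."""
    if loc_next == loc:
        return 0.1
    if loc_next == (loc - 1) % N:
        return 0.8
    if loc_next == (loc - 2) % N:
        return 0.074
    return 0.002

# each conditional distribution sums to 1:
# 0.1 + 0.8 + 0.074 + 13 * 0.002 = 1.0
for loc in range(N):
    assert abs(sum(p_goright(n2, loc) for n2 in range(N)) - 1.0) < 1e-9
```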
SLIDE 16

Dynamics Model More Details

  • Sample Stochastic Dynamics: P(Loct+1 | Actiont, Loct)

P(Loct+1 = L | Actiont = goRight, Loct = L) = 0.1
P(Loct+1 = L+1 | Actiont = goRight, Loct = L) = 0.8
P(Loct+1 = L+2 | Actiont = goRight, Loct = L) = 0.074
P(Loct+1 = L' | Actiont = goRight, Loct = L) = 0.002 for all other locations L'
SLIDE 17

Robot Localization additional sensor

  • Additional Light Sensor: there is light coming through an opening at location 10: P(Lt | Loct)
  • Info from the two sensors is combined: "Sensor Fusion"
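Since both sensors depend only on the current location, fusing them just multiplies their likelihoods in the filtering update. A sketch: the door model is from the slides, while the light-sensor numbers (0.9 at location 10, 0.05 elsewhere) are invented placeholders, as the slide's P(Lt | Loct) table is not reproduced here:

```python
DOORS = {2, 4, 7, 11}
LIGHT_LOC = 10  # light comes through an opening at location 10

def door_lik(obs, loc):
    """P(Door_t = obs | Loc_t), the door sensor from the slides."""
    p = 0.8 if loc in DOORS else 0.1
    return p if obs else 1.0 - p

def light_lik(obs, loc):
    """P(Light_t = obs | Loc_t); the 0.9/0.05 numbers are invented."""
    p = 0.9 if loc == LIGHT_LOC else 0.05
    return p if obs else 1.0 - p

def fused_lik(door_obs, light_obs, loc):
    # The sensors are conditionally independent given Loc_t,
    # so "sensor fusion" is just a product of likelihoods.
    return door_lik(door_obs, loc) * light_lik(light_obs, loc)

# seeing light but no door points strongly to location 10
scores = [fused_lik(False, True, loc) for loc in range(16)]
assert max(range(16), key=lambda l: scores[l]) == LIGHT_LOC
```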

SLIDE 18

The Robot starts at an unknown location and must determine where it is

The model appears to be too ambiguous:
  • Sensors are too noisy
  • Dynamics are too stochastic to infer anything

But inference actually works pretty well. You can check it at:
http://www.cs.ubc.ca/spider/poole/demos/localization/localization.html

You can use standard Bnet inference. However you typically take advantage of the fact that time moves forward (not in 322)

SLIDE 19

Sample scenario to explore in demo

  • Keep making observations without moving. What happens?
  • Then keep moving without making observations. What happens?
  • Assume you are at a certain position; alternate moves and observations
  • …
SLIDE 20

HMMs have many other applications…

Natural Language Processing: e.g., Speech Recognition
  • States: phoneme \ word
  • Observations: acoustic signal \ phoneme

Bioinformatics: Gene Finding
  • States: coding / non-coding region
  • Observations: DNA Sequences

For these problems the critical inference is: find the most likely sequence of states given a sequence of observations
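That inference (the most likely state sequence given the observations) is computed by the Viterbi algorithm, a dynamic program over the same three HMM tables. A sketch with an invented coding/non-coding toy model:

```python
def viterbi(p_s0, p_trans, p_obs, observations):
    """Most likely state sequence given the observations.
    p_s0[s], p_trans[s][s2], p_obs[s][o] are the HMM tables."""
    n = len(p_s0)
    # best[s] = probability of the best path ending in state s
    best = [p_s0[s] * p_obs[s][observations[0]] for s in range(n)]
    back = []  # back-pointers for recovering the path
    for o in observations[1:]:
        prev, ptrs, best = best, [], []
        for s2 in range(n):
            s_best = max(range(n), key=lambda s: prev[s] * p_trans[s][s2])
            ptrs.append(s_best)
            best.append(prev[s_best] * p_trans[s_best][s2] * p_obs[s2][o])
        back.append(ptrs)
    # trace the best path backwards through the pointers
    path = [max(range(n), key=lambda s: best[s])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

# hypothetical gene-finding toy: states 0 = non-coding, 1 = coding
p_s0 = [0.7, 0.3]
p_trans = [[0.9, 0.1], [0.2, 0.8]]
p_obs = [[0.6, 0.4], [0.1, 0.9]]   # observation 1 ~ "GC-rich" symbol
print(viterbi(p_s0, p_trans, p_obs, [0, 1, 1, 0]))
```

Unlike filtering, which gives a distribution over the current state, Viterbi commits to one globally best path, which is what speech recognition and gene finding need. (Long sequences would use log-probabilities to avoid underflow; omitted here for brevity.)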

SLIDE 21

Markov Models

  • Markov Chains: Simplest Possible Dynamic Bnet
  • Hidden Markov Model: Add noisy Observations about the state at time t
  • Markov Decision Processes (MDPs): Add Actions and Values (Rewards)

SLIDE 22

Learning Goals for today's class

You can:
  • Specify the components of a Hidden Markov Model (HMM)
  • Justify and apply HMMs to Robot Localization

Clarification on second LG for last class. You can:
  • Justify and apply Markov Chains to compute the probability of a Natural Language sentence (NOT to estimate the conditional probs; slide 18)

SLIDE 23

Next week

Course map (Environment / Problem × Representation and Reasoning Technique):

Static, Deterministic:
  • Constraint Satisfaction (Vars + Constraints): Search, Arc Consistency, SLS
  • Query (Logics): Search
Static, Stochastic:
  • Query (Belief Nets): Var. Elimination; Markov Chains and HMMs
Sequential, Deterministic:
  • Planning (STRIPS): Search
Sequential, Stochastic:
  • Planning (Decision Nets): Var. Elimination; Markov Decision Processes: Value Iteration

SLIDE 24

Next Class

  • One-off decisions (TextBook 9.2)
  • Single Stage Decision networks (9.2.1)

Final
  • Thu, Jun 29 at 19:00, Final Exam (2.5 hours), Room: BUCH A101