Probability and Time: Hidden Markov Models (HMMs) - PowerPoint PPT Presentation


CPSC 322, Lecture 32 Slide 1

Probability and Time: Hidden Markov Models (HMMs)
Computer Science cpsc322, Lecture 32 (Textbook Chpt 6.5.2)
April 7, 2010

CPSC 322, Lecture 32 Slide 2

Lecture Overview

  • Recap
  • Markov Models
  • Markov Chain
  • Hidden Markov Models

CPSC 322, Lecture 18 Slide 3

Answering Queries under Uncertainty

  • Probability Theory
  • Static: Belief Network & Variable Elimination
  • Dynamic: Dynamic Bayesian Network, Markov Chains, Hidden Markov Models
  • Applications: Email spam filters, Diagnostic Systems (e.g., medicine), Natural Language Processing, Student Tracing in tutoring Systems, Monitoring (e.g., credit cards), BioInformatics, Robotics

Stationary Markov Chain (SMC)

A stationary Markov Chain: for all t > 0

  • P(St+1 | S0, …, St) = P(St+1 | St)   (Markov assumption)
  • P(St+1 | St) is the same for every t   (stationarity)

We only need to specify P(S0) and P(St+1 | St)

  • Simple Model, easy to specify
  • Often the natural model
  • The network can extend indefinitely
  • Variations of SMC are at the core of most Natural Language Processing (NLP) applications!
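To illustrate how little needs to be specified, here is a minimal sketch of a stationary Markov chain in Python. The two-state "weather" chain, its numbers, and all names are hypothetical, purely for illustration:

```python
import random

# A stationary Markov chain needs only P(S0) and one transition table
# P(S_{t+1} | S_t) that is reused at every step (stationarity).
STATES = ["rain", "sun"]
P0 = [0.5, 0.5]                        # P(S0)
P_TRANS = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.2, "sun": 0.8},
}

def sample_chain(n_steps, seed=0):
    """Sample S0..S_{n_steps}; the chain can extend indefinitely."""
    rng = random.Random(seed)
    state = rng.choices(STATES, weights=P0)[0]
    traj = [state]
    for _ in range(n_steps):
        nxt = P_TRANS[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        traj.append(state)
    return traj
```

Because the same transition table is reused at every step, the specification stays constant-sized no matter how long the chain runs.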

CPSC 322, Lecture 32 Slide 5

Lecture Overview

  • Recap
  • Markov Models
  • Markov Chain
  • Hidden Markov Models

How can we minimally extend Markov Chains?

  • Maintaining the Markov and stationary assumptions?

A useful situation to model is the one in which:

  • the reasoning system does not have access to the states
  • but can make observations that give some information about the current state

CPSC 322, Lecture 30 Slide 7

Hidden Markov Model

A Hidden Markov Model (HMM) starts with a Markov chain, and adds a noisy observation about the state at each time step:

  • P(S0) specifies initial conditions
  • P(St+1 | St) specifies the dynamics
  • P(Ot | St) specifies the sensor model
  • |domain(S)| = k
  • |domain(O)| = h
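The three tables above fully determine an HMM. A minimal sketch, with hypothetical sizes and numbers chosen only for illustration:

```python
# An HMM over k states and h observation values is specified by three
# tables (all numbers below are hypothetical, for illustration only).
k, h = 3, 2

# P(S0): initial conditions (length k).
p_init = [1.0 / k] * k

# P(S_{t+1} | S_t): dynamics; row s is a distribution over the next state.
p_dyn = [[0.8 if s2 == s else 0.1 for s2 in range(k)] for s in range(k)]

# P(O_t | S_t): sensor model; row s is a distribution over observations.
p_obs = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

# Sanity check: every row of every table is a probability distribution.
assert abs(sum(p_init) - 1.0) < 1e-9
assert all(abs(sum(row) - 1.0) < 1e-9 for row in p_dyn + p_obs)
```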
CPSC 322, Lecture 32 Slide 8

Example: Localization for "Pushed around" Robot

  • Localization (where am I?) is a fundamental problem in robotics
  • Suppose a robot is in a circular corridor with 16 locations
  • There are four doors at positions: 2, 4, 7, 11
  • The Robot initially doesn't know where it is
  • The Robot is pushed around. After a push it can stay in the same location, move left or right.
  • The Robot has a noisy sensor telling whether it is in front of a door

CPSC 322, Lecture 32 Slide 9

This scenario can be represented as…

  • Example Stochastic Dynamics: when pushed, it stays in the same location with p = 0.2, and moves left or right with equal probability

P(Loct+1 | Loct), P(Loc1)
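These dynamics can be written down directly as code. A sketch (function and variable names hypothetical); since stay has p = 0.2, the remaining 0.8 splits equally into 0.4 left and 0.4 right:

```python
N_LOC = 16  # circular corridor with 16 locations

# P(Loc_{t+1} | Loc_t) for the "pushed around" robot: stay with p = 0.2,
# move left or right with equal probability (0.4 each), wrapping around.
def push_dynamics(loc):
    return {loc: 0.2, (loc - 1) % N_LOC: 0.4, (loc + 1) % N_LOC: 0.4}

# Initial location distribution: uniform, since the robot initially
# doesn't know where it is.
p_loc0 = [1.0 / N_LOC] * N_LOC
```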

CPSC 322, Lecture 32 Slide 10

This scenario can be represented as…

Example of Noisy sensor telling whether it is in front of a door:

  • If it is in front of a door P(Ot = T) = .8
  • If not in front of a door P(Ot = T) = .1

P(Ot | Loct)
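The sensor model follows directly from these two numbers and the door positions given earlier. A small sketch (function name hypothetical):

```python
DOORS = {2, 4, 7, 11}  # door positions in the 16-location corridor

# Sensor model P(O_t | Loc_t): the noisy sensor reads True with
# probability 0.8 in front of a door and 0.1 elsewhere.
def p_door_obs(loc, reading):
    p_true = 0.8 if loc in DOORS else 0.1
    return p_true if reading else 1.0 - p_true
```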

Useful inference in HMMs

  • Localization: Robot starts at an unknown location and it is pushed around t times. It wants to determine where it is.
  • In general: compute the posterior distribution over the current state given all evidence to date, P(St | O0 … Ot)
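This posterior is computed step by step with the forward ("filtering") recursion: predict one step with the dynamics, weight by the sensor likelihood, and renormalize. A generic sketch under illustrative conventions (names and the dict-based transition representation are hypothetical): `trans(s)` returns {s2: P(s2 | s)} and `like(s, o)` returns P(o | s).

```python
def filter_step(belief, obs, trans, like):
    """One predict-then-update step of HMM filtering."""
    n = len(belief)
    # Predict: P(S_{t+1} | O_0..O_t) = sum_s P(S_{t+1} | s) * b_t(s)
    pred = [0.0] * n
    for s, p in enumerate(belief):
        for s2, q in trans(s).items():
            pred[s2] += p * q
    # Update: weight by P(O_{t+1} | S_{t+1}) and normalize.
    post = [pred[s] * like(s, obs) for s in range(n)]
    z = sum(post)
    return [p / z for p in post]
```

Folding `filter_step` over the observation sequence, starting from the initial belief (with the very first observation handled by the update step alone), yields P(St | O0 … Ot).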

CPSC 322, Lecture 32 Slide 12

Example: Robot Localization

  • Suppose a robot wants to determine its location based on its actions and its sensor readings
  • Three actions: goRight, goLeft, Stay
  • This can be represented by an augmented HMM
CPSC 322, Lecture 32 Slide 13

Robot Localization Sensor and Dynamics Model

  • Sample Sensor Model (assume same as for pushed around)
  • Sample Stochastic Dynamics: P(Loct+1 | Actiont, Loct)

P(Loct+1 = L | Actiont = goRight, Loct = L) = 0.1
P(Loct+1 = L+1 | Actiont = goRight, Loct = L) = 0.8
P(Loct+1 = L+2 | Actiont = goRight, Loct = L) = 0.074
P(Loct+1 = L' | Actiont = goRight, Loct = L) = 0.002 for all other locations L'

  • All location arithmetic is modulo 16
  • The action goLeft works the same but to the left
CPSC 322, Lecture 32 Slide 14

Dynamics Model: More Details

  • Sample Stochastic Dynamics: P(Loct+1 | Actiont, Loct)

P(Loct+1 = L | Actiont = goRight, Loct = L) = 0.1
P(Loct+1 = L+1 | Actiont = goRight, Loct = L) = 0.8
P(Loct+1 = L+2 | Actiont = goRight, Loct = L) = 0.074
P(Loct+1 = L' | Actiont = goRight, Loct = L) = 0.002 for all other locations L'
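The goRight table can be transcribed directly; note it sums to 0.1 + 0.8 + 0.074 + 13 × 0.002 = 1. A sketch (function names hypothetical), with goLeft obtained by mirroring as the slide states:

```python
N_LOC = 16  # all location arithmetic is modulo 16

# P(Loc_{t+1} | Action_t = goRight, Loc_t = L): stay 0.1, one step right
# 0.8, two steps right 0.074, each of the 13 other locations 0.002.
def go_right(loc):
    p = {l: 0.002 for l in range(N_LOC)}
    p[loc] = 0.1
    p[(loc + 1) % N_LOC] = 0.8
    p[(loc + 2) % N_LOC] = 0.074
    return p

# goLeft works the same but to the left: mirror goRight around loc.
def go_left(loc):
    return {(2 * loc - l) % N_LOC: q for l, q in go_right(loc).items()}
```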

CPSC 322, Lecture 30 Slide 15

Robot Localization: additional sensor

  • Additional Light Sensor: there is light coming through an opening at location 10
  • P(Lt | Loct)
  • Info from the two sensors is combined: "Sensor Fusion"
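When the two sensors are conditionally independent given the location, "fusion" is just multiplying their likelihoods: P(Ot, Lt | Loct) = P(Ot | Loct) · P(Lt | Loct). A sketch; the light sensor's reliabilities p_hit and p_false are hypothetical numbers (the slide does not give them), only the door probabilities and the light position come from the slides:

```python
DOORS = {2, 4, 7, 11}
LIGHT_AT = 10  # opening letting light through at location 10

def door_like(loc, saw_door):
    p = 0.8 if loc in DOORS else 0.1
    return p if saw_door else 1.0 - p

def light_like(loc, saw_light, p_hit=0.9, p_false=0.05):
    # p_hit / p_false are hypothetical reliabilities, not from the slide.
    p = p_hit if loc == LIGHT_AT else p_false
    return p if saw_light else 1.0 - p

# Sensor fusion: conditionally independent sensors multiply.
def fused_like(loc, saw_door, saw_light):
    return door_like(loc, saw_door) * light_like(loc, saw_light)
```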

CPSC 322, Lecture 30 Slide 16

The Robot starts at an unknown location and must determine where it is. The model appears to be too ambiguous:

  • Sensors are too noisy
  • Dynamics are too stochastic to infer anything

But inference actually works pretty well. Let's check:

http://www.cs.ubc.ca/spider/poole/demos/localization/localization.html

You can use standard Bnet inference. However you typically take advantage of the fact that time moves forward (not in 322).

CPSC 322, Lecture 32 Slide 17

Sample scenario to explore in demo

  • Keep making observations without moving. What happens?
  • Then keep moving without making observations. What happens?
  • Assume you are at a certain position; alternate moves and observations
  • …
CPSC 322, Lecture 32 Slide 18

HMMs have many other applications…

Natural Language Processing: e.g., Speech Recognition

  • States: phoneme \ word
  • Observations: acoustic signal \ phoneme

Bioinformatics: Gene Finding

  • States: coding / non-coding region
  • Observations: DNA Sequences

For these problems the critical inference is: find the most likely sequence of states given a sequence of observations
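This most-likely-sequence query is standardly answered with the Viterbi algorithm (not named on the slide). A compact dynamic-programming sketch, using the same three-table HMM representation as before (names hypothetical):

```python
def viterbi(p_init, p_dyn, p_obs, observations):
    """Most likely state sequence given the observations.

    p_init[s] = P(S0 = s); p_dyn[s][s2] = P(S_{t+1} = s2 | S_t = s);
    p_obs[s][o] = P(O_t = o | S_t = s).
    """
    n = len(p_init)
    # delta[s]: probability of the best state sequence ending in state s.
    delta = [p_init[s] * p_obs[s][observations[0]] for s in range(n)]
    backptrs = []
    for o in observations[1:]:
        back, new = [], []
        for s2 in range(n):
            best = max(range(n), key=lambda s: delta[s] * p_dyn[s][s2])
            new.append(delta[best] * p_dyn[best][s2] * p_obs[s2][o])
            back.append(best)
        delta = new
        backptrs.append(back)
    # Backtrack from the most probable final state.
    path = [max(range(n), key=lambda s: delta[s])]
    for back in reversed(backptrs):
        path.append(back[path[-1]])
    return list(reversed(path))
```

Unlike filtering, which tracks a distribution over the current state only, Viterbi keeps back-pointers so the single best full sequence can be recovered.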

CPSC 322, Lecture 32 Slide 19

Markov Models

  • Markov Chains: Simplest Possible Dynamic Bnet
  • Hidden Markov Model: Add noisy Observations about the state at time t
  • Markov Decision Processes (MDPs): Add Actions and Values (Rewards)

CPSC 322, Lecture 4 Slide 20

Learning Goals for today's class

You can:

  • Specify the components of a Hidden Markov Model (HMM)
  • Justify and apply HMMs to Robot Localization

Clarification on second LG for last class

You can:

  • Justify and apply Markov Chains to compute the probability of a Natural Language sentence (NOT to estimate the conditional probs; slide 18)

CPSC 322, Lecture 2 Slide 21

Next week

[Course-map diagram: Environment (Deterministic vs. Stochastic) and Problem (Static vs. Sequential) mapped to Representation / Reasoning Techniques: Query and Planning; Constraint Satisfaction (Vars + Constraints) with Search, Arc Consistency, SLS; Logics (STRIPS) with Search; Belief Nets with Var. Elimination; Decision Nets with Var. Elimination; Markov Decision Processes with Value Iteration; Markov Chains and HMMs]

CPSC 322, Lecture 29 Slide 22

Next Class

  • One-off decisions (TextBook 9.2)
  • Single Stage Decision networks (9.2.1)