1. Hidden Markov Models
CSE 473: Artificial Intelligence

 Markov chains are not so useful for most agents: eventually you don't know anything anymore.
 Need observations to update your beliefs.
 Hidden Markov models (HMMs):
   Underlying Markov chain over states X_1, X_2, ..., X_N
   You observe outputs (effects) E_1, ..., E_N, one at each time step
   As a Bayes' net: X_1 -> X_2 -> ... -> X_N, with each E_t a child of X_t

[Steve Tanimoto, University of Washington. Most slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Example HMM
 An HMM is defined by:
   Initial distribution: P(X_1)
   Transitions: P(X_t | X_{t-1})
   Emissions: P(E_t | X_t)
 It defines a joint probability distribution (see the code sketch after this slide):
   P(X_{1:N}, E_{1:N}) = P(X_1) P(E_1 | X_1) prod_{t=2..N} P(X_t | X_{t-1}) P(E_t | X_t)

Ghostbusters HMM
 P(X_1) = uniform: 1/9 for each cell of the 3x3 grid.
 P(X' | X): ghosts usually move clockwise, but sometimes move in a random direction or stay put. E.g., P(X' | X = <1,2>) puts 1/2 on the clockwise neighbor and 1/6 on each of three other adjacent cells.
 P(E | X) = same sensor model as before: red means close, green means far away. For distance 3: P(red | 3) = 0.05, P(orange | 3) = 0.15, P(yellow | 3) = 0.5, P(green | 3) = 0.3 (must specify for other distances).

HMM Computations
 Given the parameters and evidence e_{1:n}, inference problems include:
   Filtering: find P(X_t | e_{1:t}) for all t
   Smoothing: find P(X_t | e_{1:n}) for all t
   Most probable explanation: x*_{1:n} = argmax_{x_{1:n}} P(x_{1:n} | e_{1:n})
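As a concrete illustration of the definition above, here is a minimal Python sketch of an HMM as three probability tables and the joint distribution they define. The states, observations, and numbers are made up for illustration; only the structure (initial/transition/emission) comes from the slides.

```python
# A minimal sketch of "an HMM is defined by" the three tables on this slide:
# initial P(X1), transitions P(X'|X), emissions P(E|X). The states,
# observations, and numbers below are illustrative, not from the slides.

initial = {"a": 0.5, "b": 0.5}                       # P(X1)
transition = {"a": {"a": 0.2, "b": 0.8},             # P(X_t | X_{t-1})
              "b": {"a": 0.8, "b": 0.2}}
emission = {"a": {"red": 0.7, "green": 0.3},         # P(E_t | X_t)
            "b": {"red": 0.1, "green": 0.9}}

def joint_probability(states, observations):
    """P(x_1..x_N, e_1..e_N) = P(x1) P(e1|x1) * prod_t P(x_t|x_{t-1}) P(e_t|x_t)."""
    p = initial[states[0]] * emission[states[0]][observations[0]]
    for prev, cur, obs in zip(states, states[1:], observations[1:]):
        p *= transition[prev][cur] * emission[cur][obs]
    return p

print(joint_probability(["a", "b"], ["red", "green"]))  # 0.5 * 0.7 * 0.8 * 0.9 = 0.252
```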

2. Real HMM Examples
 Speech recognition HMMs:
   Observations are acoustic signals (continuous valued)
   States are specific positions in specific words (so, tens of thousands of states)
 Machine translation HMMs:
   Observations are words (tens of thousands)
   States are translation options
 Robot tracking:
   Observations are range readings (continuous)
   States are positions on a map (continuous)

Conditional Independence
 HMMs have two important independence properties:
   Markov hidden process: the future depends on the past only via the present
   Current observation is independent of all else given the current state
 Quiz: does this mean that observations are independent given no evidence?
   [No, they are correlated by the hidden state; the sketch below checks this numerically]
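The quiz answer can be checked by brute-force enumeration on a tiny two-step HMM. This sketch uses the rain/sun umbrella numbers that appear on a later slide; it shows P(e1, e2) differs from P(e1) P(e2), so the observations are marginally dependent even though each E_t is independent of everything else given X_t.

```python
# Brute-force check of the quiz: in a two-state HMM, E1 and E2 are NOT
# marginally independent (they are correlated through the hidden chain).
# The numbers are the rain/sun umbrella model from the later recap slide.
from itertools import product

init = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "none": 0.1},
        "sun": {"umbrella": 0.2, "none": 0.8}}

def p_joint(e1, e2):
    """P(e1, e2) = sum over hidden states x1, x2 of the full joint."""
    return sum(init[x1] * emit[x1][e1] * trans[x1][x2] * emit[x2][e2]
               for x1, x2 in product(init, init))

def p_marginal(e):
    return sum(p_joint(e, e2) for e2 in ["umbrella", "none"])

print(p_joint("umbrella", "umbrella"))                  # 0.3515
print(p_marginal("umbrella") * p_marginal("umbrella"))  # 0.3025 -- not equal
```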

3. Filtering / Monitoring
 Filtering, or monitoring, is the task of tracking the distribution B(X) (the belief state) over time.
 We start with B(X) in an initial setting, usually uniform.
 As time passes, or we get observations, we update B(X).
 The Kalman filter is one such method for real-valued state, invented in the 1960s as a method of trajectory estimation for the Apollo program.

Example: Robot Localization (example from Michael Pfeiffer)
 Sensor model: the robot can read in which directions there is a wall, never with more than 1 mistake (sketched in code below).
 Motion model: the robot may fail to execute an action, with small probability.
 t=0: the belief is uniform over the free cells. [Figures show probability on a grey scale from Prob 0 to 1.]
 t=1, t=2: lighter grey cells were possible given the reading, but are less likely because they require 1 sensor mistake.
 t=3, t=4, ...: the belief keeps concentrating as more readings arrive.
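A hedged sketch of what that sensor model might look like in code, assuming readings are 4-tuples of wall/no-wall booleans and a hypothetical error rate EPS. The slides only state that readings never contain more than one mistake; the exact probabilities here are illustrative.

```python
# Hypothetical wall-sensor likelihood for the localization example. A reading
# is (wall N, E, S, W). Readings with 0 mistakes are most likely, readings
# with exactly 1 mistake are less likely ("lighter grey" cells), and more
# than 1 mistake has probability 0 ("never more than 1 mistake").
EPS = 0.1  # assumed probability that exactly one direction is misread

def reading_likelihood(reading, true_walls):
    """P(reading | cell whose true wall pattern is true_walls)."""
    mistakes = sum(r != t for r, t in zip(reading, true_walls))
    if mistakes == 0:
        return 1.0 - EPS
    if mistakes == 1:
        return EPS / 4.0   # the single mistake hit one of four directions
    return 0.0             # two or more mistakes never happen

print(reading_likelihood((True, False, False, True), (True, False, False, True)))  # 0.9
print(reading_likelihood((True, False, False, True), (True, True, False, True)))   # 0.025
```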

4. Example: Robot Localization (continued)
 t=5: the belief has essentially converged on the robot's true position.

Inference Recap: Simple Cases
 Passage of one time step, no evidence: P(x_2) = sum_{x_1} P(x_1) P(x_2 | x_1)
 A single observation: P(x_1 | e_1) ∝ P(x_1) P(e_1 | x_1)

Passage of Time
 Assume we have the current belief P(X | evidence to date): B(X_t) = P(X_t | e_{1:t})
 Then, after one time step passes: P(X_{t+1} | e_{1:t}) = sum_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t})
 Or, compactly: B'(X_{t+1}) = sum_{x_t} P(X_{t+1} | x_t) B(x_t)
 Basic idea: beliefs get "pushed" through the transitions.
 With the "B" notation, we have to be careful about what time step t the belief is about, and what evidence it includes.
 Example: as time passes, uncertainty "accumulates" (compare T = 1, T = 2, T = 5 under a transition model where ghosts usually go clockwise).

Observation
 Assume we have the current belief P(X | previous evidence): B'(X_{t+1}) = P(X_{t+1} | e_{1:t})
 Then, incorporating evidence: P(X_{t+1} | e_{1:t+1}) ∝ P(e_{t+1} | X_{t+1}) P(X_{t+1} | e_{1:t})
 Or, compactly: B(X_{t+1}) ∝ P(e_{t+1} | X_{t+1}) B'(X_{t+1})
 Basic idea: beliefs are reweighted by the likelihood of the evidence.
 Unlike the passage of time, we have to renormalize.

Online Belief Updates
 Every time step, we start with the current P(X | evidence); we update for time, then update for evidence (both updates are sketched in code after this slide).
 The forward algorithm does both at once (and doesn't normalize).
 Problem: space is |X| and time is |X|^2 per time step.
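The two updates above, written as a minimal Python sketch over discrete belief dictionaries. The model tables are assumed in the same dict-of-dicts form as the earlier sketches.

```python
# "Elapse time" pushes the belief through the transition model; "observe"
# reweights by the evidence likelihood and renormalizes. These are exact
# updates over an explicit discrete state space.

def elapse_time(belief, trans):
    """B'(x') = sum_x P(x'|x) B(x) -- no renormalization needed."""
    new_belief = {}
    for x, p in belief.items():
        for x_next, p_trans in trans[x].items():
            new_belief[x_next] = new_belief.get(x_next, 0.0) + p * p_trans
    return new_belief

def observe(belief, emit, e):
    """B(x) proportional to P(e|x) B'(x) -- unlike time, we renormalize."""
    weighted = {x: p * emit[x][e] for x, p in belief.items()}
    total = sum(weighted.values())   # this total is an estimate of P(e)
    return {x: p / total for x, p in weighted.items()}
```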

5. Example: Observation
 As we get observations, beliefs get reweighted and uncertainty "decreases" (compare the belief before and after the observation).

The Forward Algorithm
 We want to know: P(x_t | e_{1:t})
 We can derive the following update (implemented in the sketch after this slide): P(x_t, e_{1:t}) = P(e_t | x_t) sum_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1}, e_{1:t-1})
 To get P(x_t | e_{1:t}), compute each entry P(x_t, e_{1:t}) and normalize.

Example: Run the Filter
 An HMM is defined by its initial distribution P(X_1), transitions P(X_t | X_{t-1}), and emissions P(E_t | X_t); run the elapse/observe updates on concrete examples (e.g., Pac-man).

Summary: Filtering
 Filtering is the inference process of finding a distribution over X_T given e_1 through e_T: P(X_T | e_{1:T})
 We first compute P(X_1 | e_1).
 For each t from 2 to T, given P(X_{t-1} | e_{1:t-1}):
   Elapse time: compute P(X_t | e_{1:t-1})
   Observe: compute P(X_t | e_{1:t-1}, e_t) = P(X_t | e_{1:t})
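A sketch of the forward algorithm as stated: one pass that interleaves the time and evidence updates without normalizing, so the running entries are P(x_t, e_{1:t}); a single normalization at the end recovers P(x_t | e_{1:t}). Model tables are assumed in the same dict-of-dicts form as the earlier sketches.

```python
def forward(init, trans, emit, evidence):
    # f[x] = P(x_1, e_1) = P(x_1) P(e_1 | x_1)
    f = {x: p * emit[x][evidence[0]] for x, p in init.items()}
    for e in evidence[1:]:
        # f[x_t] = P(e_t | x_t) * sum_{x_{t-1}} P(x_t | x_{t-1}) f[x_{t-1}]
        f = {x2: emit[x2][e] * sum(trans[x1][x2] * p for x1, p in f.items())
             for x2 in init}
    total = sum(f.values())   # total = P(e_{1:T}); normalize once at the end
    return {x: p / total for x, p in f.items()}
```

With the umbrella tables from the next slide, forward(init, trans, emit, ["umbrella", "umbrella"]) returns approximately {rain: 0.88, sun: 0.12}, matching the recap run shown there.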

6. Recap: Reasoning Over Time
 Stationary Markov models: e.g., a weather chain over {rain, sun} where each state persists with probability 0.7 and switches with probability 0.3.
 Hidden Markov models add an observation at each step, e.g. the umbrella model:

   X     E             P
   rain  umbrella      0.9
   rain  no umbrella   0.1
   sun   umbrella      0.2
   sun   no umbrella   0.8

Recap: Filtering
 Elapse time: compute P(X_t | e_{1:t-1})
 Observe: compute P(X_t | e_{1:t})
 Belief as <P(rain), P(sun)> (reproduced in the code sketch after this slide):
   <0.5, 0.5>    prior on X_1
   <0.82, 0.18>  observe (umbrella)
   <0.63, 0.37>  elapse time
   <0.88, 0.12>  observe (umbrella)

Particle Filtering
 Sometimes |X| is too big to use exact inference:
   |X| may be too big to even store B(X) (e.g., X is continuous)
   |X|^2 may be too big to do updates
 Solution: approximate inference
   Track samples of X, not all values; the samples are called particles
   Time per step is linear in the number of samples
   But the number needed may be large
   This is how robot localization works in practice

Representation: Particles
 Our representation of P(X) is now a list of N particles (samples); generally N << |X|.
 P(x) is approximated by the number of particles with value x, so many x will have P(x) = 0.
 More particles, more accuracy. Storing a map from X to counts would defeat the point.
 In memory: a list of particles, not states. For now, all particles have a weight of 1.
 Example particles: (3,3), (2,3), (3,3), (3,2), (3,3), (3,2), (2,1), (3,3), (3,3), (2,1).

Particle Filtering: Elapse Time
 Each particle is moved by sampling its next position from the transition model.
 This is like prior sampling: the samples' frequencies reflect the transition probabilities. Here, most samples move clockwise, but some move in another direction or stay in place.
 This captures the passage of time. If we have enough samples, the result is close to the exact values before and after (consistent).

Particle Filtering: Observe
 Slightly trickier: we don't sample the observation, we fix it.
 Don't do rejection sampling (why not? if the evidence is unlikely, nearly all samples would be rejected).
 This is similar to likelihood weighting, so we downweight our samples based on the evidence.
 Note that, as before, the weights don't sum to one, since most have been downweighted (in fact they sum to an approximation of P(e)).
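The umbrella run above can be reproduced exactly with the observe/elapse updates; the model numbers come straight from the table on this slide.

```python
# Reproducing the slide's filtering run: prior <0.5, 0.5>, observe umbrella
# -> <0.82, 0.18>, elapse time -> <0.63, 0.37>, observe umbrella ->
# <0.88, 0.12> (values rounded to two decimals).

init = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "no umbrella": 0.1},
        "sun": {"umbrella": 0.2, "no umbrella": 0.8}}

def observe(b, e):
    w = {x: p * emit[x][e] for x, p in b.items()}
    z = sum(w.values())
    return {x: p / z for x, p in w.items()}

def elapse(b):
    return {x2: sum(trans[x1][x2] * p for x1, p in b.items()) for x2 in init}

b = init                      # <0.5, 0.5>   prior on X_1
b = observe(b, "umbrella")    # <0.82, 0.18> observe
b = elapse(b)                 # <0.63, 0.37> elapse time
b = observe(b, "umbrella")    # <0.88, 0.12> observe
print({x: round(p, 2) for x, p in b.items()})
```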

7. Particle Filtering: Resample
 Rather than tracking weighted samples, we resample: N times, we choose from our weighted sample distribution (i.e., draw with replacement).
 This is equivalent to renormalizing the distribution.
 Now the update is complete for this time step; continue with the next one.
 Example old particles (with weights): (3,3) w=0.1; (2,1) w=0.9; (2,1) w=0.9; (3,1) w=0.4; (3,2) w=0.3; (2,2) w=0.4; (1,1) w=0.4; (3,1) w=0.4; (2,1) w=0.9; (3,2) w=0.3.
 Example new particles (all w=1): (2,1), (2,1), (2,1), (3,2), (2,2), (2,1), (1,1), (3,1), (2,1), (1,1).

Recap: Particle Filtering
 At each time step t, we have a set of N particles / samples.
 Initialization: sample from the prior, reweight, and resample.
 Three-step procedure to move to time t+1 (see the sketch after this slide):
   1. Sample transitions: for each particle x, sample the next state x' from P(X' | x).
   2. Reweight: for each particle, compute its weight given the actual observation e: w(x') = P(e | x').
   3. Resample: normalize the weights, and sample N new particles from the resulting distribution over states.

Particle Filtering Summary
 Represent the current belief P(X | evidence to date) as a set of N samples (actual assignments X = x).
 For each new observation e:
   1. Sample a transition, once for each current particle x.
   2. For each new sample x', compute the importance weight for the new evidence e.
   3. Finally, normalize the importance weights and resample N new particles.

Robot Localization
 In robot localization: we know the map, but not the robot's position.
 Observations may be vectors of range finder readings.
 The state space and readings are typically continuous (works basically like a very fine grid), so we cannot store B(X).
 Particle filtering is a main technique.

Which Algorithm?
 [Video: exact filter, uniform initial beliefs.]
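The three-step procedure as a minimal Python sketch. sample_transition and emission_prob are assumed model accessors standing in for P(X' | x) and P(e | x); the fallback when all weights are zero is an implementation choice, not from the slides.

```python
# One particle-filter step: sample transitions, reweight by the fixed
# observation, then resample N particles with replacement.
import random

def particle_filter_step(particles, e, sample_transition, emission_prob):
    # 1. Sample transitions: move each particle x to x' ~ P(X' | x).
    moved = [sample_transition(x) for x in particles]
    # 2. Reweight: w(x') = P(e | x') for the fixed observation e.
    weights = [emission_prob(e, x) for x in moved]
    if sum(weights) == 0:
        # Degenerate case: no particle is consistent with e. An assumed
        # fallback; a real system might reinitialize from the prior instead.
        return [random.choice(moved) for _ in moved]
    # 3. Resample: draw N new unweighted particles (equivalent to
    #    renormalizing the weighted distribution).
    return random.choices(moved, weights=weights, k=len(moved))
```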
