Hidden Markov Models


Hidden Markov Models
CMSC 473/673, UMBC

Recap from last time: Expectation Maximization (EM)
Two-step, iterative algorithm:
0. Assume some value for your parameters
1. E-step: count under uncertainty, assuming these parameters
2. M-step: re-estimate the parameters, assuming these uncertain counts


  1. Hidden Markov Model
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
Goal: maximize (log-)likelihood
In practice: we don't actually observe these z values; we just see the words w

  2. Hidden Markov Model
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
Goal: maximize (log-)likelihood
In practice: we don't actually observe these z values; we just see the words w
if we did observe z, estimating the probability parameters would be easy… but we don't! :(
if we knew the probability parameters, then we could estimate z and evaluate likelihood… but we don't! :(
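To make the factorization concrete, here is a minimal sketch that evaluates this joint probability for one short tagged sequence; the two-state transition and emission tables below are made-up illustrative numbers, not the lecture's example:

    # Minimal sketch: joint probability of one tagged sequence under an HMM.
    # Hypothetical numbers, for illustration only.
    trans = {("BOS", "N"): 0.7, ("BOS", "V"): 0.3,
             ("N", "N"): 0.4, ("N", "V"): 0.6,
             ("V", "N"): 0.8, ("V", "V"): 0.2}
    emit = {("N", "dog"): 0.5, ("N", "runs"): 0.5,
            ("V", "dog"): 0.1, ("V", "runs"): 0.9}

    def joint(tags, words):
        # p(z_1, w_1, ..., z_N, w_N) = prod_i p(z_i | z_{i-1}) * p(w_i | z_i)
        p = 1.0
        prev = "BOS"                              # z_0: the start state
        for z, w in zip(tags, words):
            p *= trans[(prev, z)] * emit[(z, w)]
            prev = z
        return p

    print(joint(["N", "V"], ["dog", "runs"]))     # 0.7*0.5 * 0.6*0.9 = 0.189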

  3. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
Each z_i can take the value of one of K latent states

  4. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1})
Each z_i can take the value of one of K latent states

  5. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1})
emission probabilities/parameters: p(w_i | z_i)
Each z_i can take the value of one of K latent states

  6. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1})
emission probabilities/parameters: p(w_i | z_i)
Each z_i can take the value of one of K latent states
Transition and emission distributions do not change

  7. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1})
emission probabilities/parameters: p(w_i | z_i)
Each z_i can take the value of one of K latent states
Transition and emission distributions do not change
Q: How many different probability values are there with K states and V vocab items?

  8. Hidden Markov Model Terminology
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1})
emission probabilities/parameters: p(w_i | z_i)
Each z_i can take the value of one of K latent states
Transition and emission distributions do not change
Q: How many different probability values are there with K states and V vocab items?
A: VK emission values and K^2 transition values
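As a worked check, using the 2-state, 4-word running example that appears later in the deck: with K = 2 and V = 4 there are 2 * 4 = 8 emission values and 2^2 = 4 transition values; transitions from the start state and to the end state are counted separately in the later tables.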

  9. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
[Graphical model: a chain of latent states z_1 → z_2 → z_3 → z_4 → …, each z_i emitting the observed word w_i]
represent the probabilities and independence assumptions in a graph

  10. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
[Graphical model: a chain of latent states z_1 → z_2 → z_3 → z_4 → …, each z_i emitting the observed word w_i]
Graphical Models (see 478/678)

  11. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
[Graphical model: a chain of latent states z_1 → z_2 → z_3 → z_4 → …, each z_i emitting the observed word w_i; emission arcs labeled p(w_1 | z_1), p(w_2 | z_2), p(w_3 | z_3), p(w_4 | z_4)]

  12. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
[Graphical model, as above; transition arcs labeled p(z_2 | z_1), p(z_3 | z_2), p(z_4 | z_3); emission arcs labeled p(w_1 | z_1), p(w_2 | z_2), p(w_3 | z_3), p(w_4 | z_4)]

  13. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
initial starting distribution ("BOS"): p(z_1 | z_0)
[Graphical model, as above, now also showing the starting transition p(z_1 | z_0) into z_1]

  14. Hidden Markov Model Representation
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
initial starting distribution ("BOS"): p(z_1 | z_0)
[Graphical model, as above]
Each z_i can take the value of one of K latent states
Transition and emission distributions do not change

  15. Example: 2-state Hidden Markov Model as a Lattice
[Lattice: top row of nodes z_1 = V, z_2 = V, z_3 = V, z_4 = V, …; bottom row z_1 = N, z_2 = N, z_3 = N, z_4 = N, …; observed words w_1, w_2, w_3, w_4 below]

  16. Example: 2-state Hidden Markov Model as a Lattice
[Lattice, as above, now with emission arcs: p(w_1 | V), p(w_2 | V), p(w_3 | V), p(w_4 | V) and p(w_1 | N), p(w_2 | N), p(w_3 | N), p(w_4 | N)]

  17. Example: 2-state Hidden Markov Model as a Lattice
[Lattice, as above, now also with same-state transition arcs: p(V | start), p(V | V), p(V | V), p(V | V) along the top row and p(N | start), p(N | N), p(N | N), p(N | N) along the bottom row]

  18. Example: 2-state Hidden Markov Model as a Lattice
[Lattice, as above, now also with cross-state transition arcs p(V | N) and p(N | V) between every pair of adjacent timesteps]

  19. Comparison of Joint Probabilities
p(w_1, w_2, …, w_N) = p(w_1) p(w_2) ⋯ p(w_N) = ∏_i p(w_i)
Unigram Language Model

  20. Comparison of Joint Probabilities
p(w_1, w_2, …, w_N) = p(w_1) p(w_2) ⋯ p(w_N) = ∏_i p(w_i)
Unigram Language Model
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1) p(w_1 | z_1) ⋯ p(z_N) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i)
Unigram Class-based Language Model ("K" coins)

  21. Comparison of Joint Probabilities
p(w_1, w_2, …, w_N) = p(w_1) p(w_2) ⋯ p(w_N) = ∏_i p(w_i)
Unigram Language Model
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1) p(w_1 | z_1) ⋯ p(z_N) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i)
Unigram Class-based Language Model ("K" coins)
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
Hidden Markov Model

  22. Estimating Parameters from Observed Data
Two fully observed (tagged) sequences, shown as highlighted paths through the lattice (end emission not shown):
  (N, w_1), (V, w_2), (V, w_3), (N, w_4)
  (N, w_1), (V, w_2), (N, w_3), (N, w_4)
Transition Counts: a table over previous state (start, N, V) by next state (N, V, end), empty on this slide
Emission Counts: a table over state (N, V) by word (w_1, w_2, w_3, w_4), empty on this slide

  23. Estimating Parameters from Observed Data
Observed sequences (as above): (N, w_1), (V, w_2), (V, w_3), (N, w_4) and (N, w_1), (V, w_2), (N, w_3), (N, w_4); end emission not shown

Transition Counts
        N    V    end
start   2    0    0
N       1    2    2
V       2    1    0

Emission Counts
      w_1  w_2  w_3  w_4
N      2    0    1    2
V      0    2    1    0

  24. Estimating Parameters from Observed Data
Observed sequences (as above): (N, w_1), (V, w_2), (V, w_3), (N, w_4) and (N, w_1), (V, w_2), (N, w_3), (N, w_4); end emission not shown

Transition MLE
        N     V     end
start   1     0     0
N      .2    .4    .4
V      2/3   1/3   0

Emission MLE
      w_1  w_2  w_3  w_4
N     .4    0   .2   .4
V      0   2/3  1/3   0

  25. Estimating Parameters from Observed Data
Observed sequences (as above): (N, w_1), (V, w_2), (V, w_3), (N, w_4) and (N, w_1), (V, w_2), (N, w_3), (N, w_4); end emission not shown

Transition MLE
        N     V     end
start   1     0     0
N      .2    .4    .4
V      2/3   1/3   0

Emission MLE
      w_1  w_2  w_3  w_4
N     .4    0   .2   .4
V      0   2/3  1/3   0

smooth these values if needed
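A minimal sketch of this maximum-likelihood estimation: count transitions and emissions from the two tagged sequences above, then normalize each row (no smoothing). The dictionary-based representation is an illustrative choice, not the course's reference code.

    from collections import Counter, defaultdict

    # The two fully observed (tagged) sequences from the slides above.
    sequences = [
        [("N", "w1"), ("V", "w2"), ("V", "w3"), ("N", "w4")],
        [("N", "w1"), ("V", "w2"), ("N", "w3"), ("N", "w4")],
    ]

    trans_counts = Counter()   # (previous state, next state) -> count
    emit_counts = Counter()    # (state, word) -> count
    for seq in sequences:
        prev = "start"
        for tag, word in seq:
            trans_counts[(prev, tag)] += 1
            emit_counts[(tag, word)] += 1
            prev = tag
        trans_counts[(prev, "end")] += 1

    def normalize(counts):
        # Divide each count by the total for its conditioning state (its row).
        totals = defaultdict(float)
        for (state, _), c in counts.items():
            totals[state] += c
        return {k: c / totals[k[0]] for k, c in counts.items()}

    trans_mle = normalize(trans_counts)   # e.g. p(V | N) = 2/5 = .4
    emit_mle = normalize(emit_counts)     # e.g. p(w1 | N) = 2/5 = .4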

  26. Outline
HMM Motivation (Part of Speech) and Brief Definition
What is Part of Speech?
HMM Detailed Definition
HMM Tasks

  27. Hidden Markov Model Tasks
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
Calculate the (log) likelihood of an observed sequence w_1, …, w_N
Calculate the most likely sequence of states (for an observed sequence)
Learn the emission and transition parameters

  28. Hidden Markov Model Tasks
p(z_1, w_1, z_2, w_2, …, z_N, w_N) = p(z_1 | z_0) p(w_1 | z_1) ⋯ p(z_N | z_{N-1}) p(w_N | z_N) = ∏_i p(w_i | z_i) p(z_i | z_{i-1})
transition probabilities/parameters: p(z_i | z_{i-1}); emission probabilities/parameters: p(w_i | z_i)
Calculate the (log) likelihood of an observed sequence w_1, …, w_N
Calculate the most likely sequence of states (for an observed sequence)
Learn the emission and transition parameters

  29. HMM Likelihood Task
Marginalize over all latent sequence joint likelihoods:
p(w_1, w_2, …, w_N) = Σ_{z_1, ⋯, z_N} p(z_1, w_1, z_2, w_2, …, z_N, w_N)
Q: In a K-state HMM for a length N observation sequence, how many summands (different latent sequences) are there?

  30. HMM Likelihood Task
Marginalize over all latent sequence joint likelihoods:
p(w_1, w_2, …, w_N) = Σ_{z_1, ⋯, z_N} p(z_1, w_1, z_2, w_2, …, z_N, w_N)
Q: In a K-state HMM for a length N observation sequence, how many summands (different latent sequences) are there?
A: K^N
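For the 2-state running example below with N = 4 observed words, that is 2^4 = 16 latent sequences to sum over, the "16 paths through the trellis" mentioned later.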

  31. HMM Likelihood Task
Marginalize over all latent sequence joint likelihoods:
p(w_1, w_2, …, w_N) = Σ_{z_1, ⋯, z_N} p(z_1, w_1, z_2, w_2, …, z_N, w_N)
Q: In a K-state HMM for a length N observation sequence, how many summands (different latent sequences) are there?
A: K^N
Goal: Find a way to compute this exponential sum efficiently (in polynomial time)

  32. HMM Likelihood Task
Like in language modeling, you need to model when to stop generating. This ending state is generally not included in "K."
Marginalize over all latent sequence joint likelihoods:
p(w_1, w_2, …, w_N) = Σ_{z_1, ⋯, z_N} p(z_1, w_1, z_2, w_2, …, z_N, w_N)
Q: In a K-state HMM for a length N observation sequence, how many summands (different latent sequences) are there?
A: K^N
Goal: Find a way to compute this exponential sum efficiently (in polynomial time)

  33. 2 (3)-State HMM Likelihood
[2-state lattice over N and V for w_1 … w_4, with transition arcs p(V | start), p(V | V), p(V | N), p(N | start), p(N | N), p(N | V) and emission arcs p(w_i | V), p(w_i | N), as above]
Q: What are the latent sequences here (EOS excluded)?

  34. 2 (3)-State HMM Likelihood
[2-state lattice, as above]
Q: What are the latent sequences here (EOS excluded)?
A:
(N, w_1), (N, w_2), (N, w_3), (N, w_4)    (N, w_1), (V, w_2), (N, w_3), (N, w_4)
(V, w_1), (N, w_2), (N, w_3), (N, w_4)    (N, w_1), (N, w_2), (N, w_3), (V, w_4)
(N, w_1), (V, w_2), (N, w_3), (V, w_4)    (V, w_1), (N, w_2), (N, w_3), (V, w_4)
(N, w_1), (N, w_2), (V, w_3), (N, w_4)    (N, w_1), (V, w_2), (V, w_3), (N, w_4)
(N, w_1), (N, w_2), (V, w_3), (V, w_4)    (N, w_1), (V, w_2), (V, w_3), (V, w_4)
… (six more)

  35. 2 (3)-State HMM Likelihood
[2-state lattice, as above]
Q: What are the latent sequences here (EOS excluded)?
A:
(N, w_1), (N, w_2), (N, w_3), (N, w_4)    (N, w_1), (V, w_2), (N, w_3), (N, w_4)
(V, w_1), (N, w_2), (N, w_3), (N, w_4)    (N, w_1), (N, w_2), (N, w_3), (V, w_4)
(N, w_1), (V, w_2), (N, w_3), (V, w_4)    (V, w_1), (N, w_2), (N, w_3), (V, w_4)
(N, w_1), (N, w_2), (V, w_3), (N, w_4)    (N, w_1), (V, w_2), (V, w_3), (N, w_4)
(N, w_1), (N, w_2), (V, w_3), (V, w_4)    (N, w_1), (V, w_2), (V, w_3), (V, w_4)
… (six more)

  36. 2 (3)-State HMM Likelihood
[2-state lattice, as above]

Transition probabilities
        N    V    end
start  .7   .2   .1
N      .15  .8   .05
V      .6   .35  .05

Emission probabilities
      w_1  w_2  w_3  w_4
N     .7   .2   .05  .05
V     .2   .6   .1   .1

  37. 2 (3)-State HMM Likelihood
[2-state lattice, as above; transition and emission tables as above]
Q: What's the probability of (N, w_1), (V, w_2), (V, w_3), (N, w_4)?

  38. 2 (3)-State HMM Likelihood
[2-state lattice, as above, highlighting the path N V V N; transition and emission tables as above]
Q: What's the probability of (N, w_1), (V, w_2), (V, w_3), (N, w_4)?
A: (.7*.7) * (.8*.6) * (.35*.1) * (.6*.05) ≈ 0.000247

  39. 2 (3)-State HMM Likelihood
[2-state lattice, as above, highlighting the path N V V N]
Q: What's the probability of (N, w_1), (V, w_2), (V, w_3), (N, w_4) with ending included (unique ending symbol "#")?
A: (.7*.7) * (.8*.6) * (.35*.1) * (.6*.05) * (.05 * 1) ≈ 0.00001235

Transition probabilities
        N    V    end
start  .7   .2   .1
N      .15  .8   .05
V      .6   .35  .05

Emission probabilities
      w_1  w_2  w_3  w_4   #
N     .7   .2   .05  .05   0
V     .2   .6   .1   .1    0
end    0    0    0    0    1
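As a sanity check, here is a small brute-force sketch (assuming the transition and emission tables above, with "#" as the end-of-sequence symbol) that enumerates all 2^4 latent sequences and sums their joint probabilities. This is exactly the exponential sum that the forward algorithm, later in the deck, computes without enumeration.

    from itertools import product

    trans = {"start": {"N": .7,  "V": .2,  "end": .1},
             "N":     {"N": .15, "V": .8,  "end": .05},
             "V":     {"N": .6,  "V": .35, "end": .05}}
    emit = {"N": {"w1": .7, "w2": .2, "w3": .05, "w4": .05},
            "V": {"w1": .2, "w2": .6, "w3": .1,  "w4": .1}}
    words = ["w1", "w2", "w3", "w4"]

    def joint(tags):
        # Joint probability of one tagged path, including the transition to "end".
        p, prev = 1.0, "start"
        for z, w in zip(tags, words):
            p *= trans[prev][z] * emit[z][w]
            prev = z
        return p * trans[prev]["end"]     # the "#" emission from end has probability 1

    total = sum(joint(tags) for tags in product("NV", repeat=len(words)))
    print(total)   # marginal likelihood p(w1, w2, w3, w4), summed over all 16 paths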

  40. 2 (3)-State HMM Likelihood
[2-state lattice, as above; transition and emission tables as above]
Q: What's the probability of (N, w_1), (V, w_2), (N, w_3), (N, w_4)?

  41. 2 (3)-State HMM Likelihood
[2-state lattice, as above, highlighting the path N V N N; transition and emission tables as above]
Q: What's the probability of (N, w_1), (V, w_2), (N, w_3), (N, w_4)?
A: (.7*.7) * (.8*.6) * (.6*.05) * (.15*.05) ≈ 0.0000529

  42. 2 (3)-State HMM Likelihood
[2-state lattice, as above, highlighting the path N V N N; transition and emission tables as above, with the "#" end column]
Q: What's the probability of (N, w_1), (V, w_2), (N, w_3), (N, w_4) with ending (unique symbol "#")?
A: (.7*.7) * (.8*.6) * (.6*.05) * (.15*.05) * (.05 * 1) ≈ 0.000002646

  43. 2 (3)-State HMM Likelihood
[Two lattices, one above the other, highlighting the two paths just computed: (N, w_1), (V, w_2), (V, w_3), (N, w_4) and (N, w_1), (V, w_2), (N, w_3), (N, w_4)]

  44. 2 (3)-State HMM Likelihood
[Two lattices highlighting the paths N V V N and N V N N, as above]
Up until here, all the computation was the same.
Let's reuse what computations we can.

  45. 2 (3)-State HMM Likelihood
[Two lattices highlighting the paths N V V N and N V N N, as above]
Solution: pass information "forward" in the graph, e.g., from timestep 2 to 3…

  46. 2 (3)-State HMM Likelihood
[Two lattices highlighting the paths N V V N and N V N N, as above]
Solution: pass information "forward" in the graph, e.g., from timestep 2 to 3…
Issue: these are only two of the 16 paths through the trellis

  47. 2 (3)-State HMM Likelihood
[Two lattices highlighting the paths N V V N and N V N N, as above]
Solution: pass information "forward" in the graph, e.g., from timestep 2 to 3…
Issue: these are only two of the 16 paths through the trellis
Solution: … marginalize (sum) out all information from previous timesteps (0 & 1)

  48. Reusing Computation
[Lattice over three states A, B, C at timesteps i-2, i-1, i, with forward values α(i-1, A), α(i-1, B), α(i-1, C) stored at timestep i-1]
let's first consider "any shared path ending with B (AB, BB, or CB) → B"
assume any necessary information has been properly computed and stored along these paths: α(i-1, A), α(i-1, B), α(i-1, C)

  49. Reusing Computation
[Lattice over states A, B, C, as above]
let's first consider "any shared path ending with B (AB, BB, or CB) → B"
marginalize across the previous hidden state values, α(i-1, A), α(i-1, B), α(i-1, C)

  50. Reusing Computation
[Lattice over states A, B, C, as above, now also showing α(i, B)]
let's first consider "any shared path ending with B (AB, BB, or CB) → B"
marginalize across the previous hidden state values:
α(i, B) = Σ_s α(i-1, s) * p(B | s) * p(obs at i | B)

  51. Reusing Computation
[Lattice over states A, B, C, as above, showing α(i, B)]
let's first consider "any shared path ending with B (AB, BB, or CB) → B"
marginalize across the previous hidden state values:
α(i, B) = Σ_s α(i-1, s) * p(B | s) * p(obs at i | B)
computing α at time i-1 will correctly incorporate paths through time i-2: we correctly obey the Markov property

  52. Forward Probability
[Lattice over states A, B, C, as above]
let's first consider "any shared path ending with B (AB, BB, or CB) → B"
marginalize across the previous hidden state values:
α(i, B) = Σ_{s'} α(i-1, s') * p(B | s') * p(obs at i | B)
α(i, B) is the total probability of all paths to that state B from the beginning
computing α at time i-1 will correctly incorporate paths through time i-2: we correctly obey the Markov property

  53. Forward Probability
α(i, s) is the total probability of all paths:
1. that start from the beginning
2. that end (currently) in s at step i
3. that emit the observation obs at i

  54. Forward Probability
α(i, s) is the total probability of all paths:
1. that start from the beginning
2. that end (currently) in s at step i
3. that emit the observation obs at i
Annotations on the recurrence: what's the total probability up until now? what are the immediate ways to get into state s? how likely is it to get into state s this way? (see the recurrence spelled out below)
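Reading those three annotations against the recurrence from the previous slides (the pairing of annotation to factor is my reading of the slide layout):
α(i, s) = Σ_{s'} α(i-1, s') * p(s | s') * p(obs at i | s)
where α(i-1, s') is the total probability up until now, p(s | s') covers the immediate ways to get into state s, and p(obs at i | s) is how likely it is to emit the observation once in s.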

  55. 2 (3)-State HMM Likelihood with Forward Probabilities
α[1, N] = (.7*.7)
α[2, V] = α[1, N] * (.8*.6) + α[1, V] * (.35*.6)
α[3, V] = α[2, V] * (.35*.1) + α[2, N] * (.8*.1)
α[3, N] = α[2, V] * (.6*.05) + α[2, N] * (.15*.05)

Transition probabilities
        N    V    end
start  .7   .2   .1
N      .15  .8   .05
V      .6   .35  .05

Emission probabilities
      w_1  w_2  w_3  w_4
N     .7   .2   .05  .05
V     .2   .6   .1   .1

  56. 2 (3)-State HMM Likelihood with Forward Probabilities
α[1, N] = (.7*.7)
α[1, V] = (.2*.2)
α[2, V] = α[1, N] * (.8*.6) + α[1, V] * (.35*.6)
α[3, V] = α[2, V] * (.35*.1) + α[2, N] * (.8*.1)
α[3, N] = α[2, V] * (.6*.05) + α[2, N] * (.15*.05)
[transition and emission tables as above]

  57. 2 (3)-State HMM Likelihood with Forward Probabilities
α[1, N] = (.7*.7) = .49
α[1, V] = (.2*.2) = .04
α[2, N] = α[1, N] * (.15*.2) + α[1, V] * (.6*.2) = .0195
α[2, V] = α[1, N] * (.8*.6) + α[1, V] * (.35*.6) = .2436
α[3, N] = α[2, V] * (.6*.05) + α[2, N] * (.15*.05)
α[3, V] = α[2, V] * (.35*.1) + α[2, N] * (.8*.1)
[transition and emission tables as above]

  58. 2 (3)-State HMM Likelihood with Forward Probabilities
α[1, N] = (.7*.7) = .49
α[1, V] = (.2*.2) = .04
α[2, N] = α[1, N] * (.15*.2) + α[1, V] * (.6*.2) = .0195
α[2, V] = α[1, N] * (.8*.6) + α[1, V] * (.35*.6) = .2436
α[3, N] = α[2, V] * (.6*.05) + α[2, N] * (.15*.05)
α[3, V] = α[2, V] * (.35*.1) + α[2, N] * (.8*.1)
Use dynamic programming to build the α left-to-right
[transition and emission tables as above]
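Filling in the remaining arithmetic from these recurrences (not printed on the slide): α[2, V] = .49*(.8*.6) + .04*(.35*.6) = .2352 + .0084 = .2436 and α[2, N] = .49*(.15*.2) + .04*(.6*.2) = .0147 + .0048 = .0195, which then give α[3, V] = .2436*(.35*.1) + .0195*(.8*.1) ≈ .0101 and α[3, N] = .2436*(.6*.05) + .0195*(.15*.05) ≈ .0075.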

  59. Forward Algorithm
α: a 2D table, (N+2) x K*
N+2: number of observations (+2 for the BOS & EOS symbols)
K*: number of states
Use dynamic programming to build the α left-to-right

  60. Forward Algorithm
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
}

  61. Forward Algorithm
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
  }
}

  62. Forward Algorithm
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
    p_obs = p_emission(obs_i | state)
  }
}

  63. Forward Algorithm
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
    p_obs = p_emission(obs_i | state)
    for (old = 0; old < K*; ++old) {
      p_move = p_transition(state | old)
      α[i][state] += α[i-1][old] * p_obs * p_move
    }
  }
}
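Each of the N+1 time steps loops over K* current states and, inside that, K* previous states, so filling the whole table takes O(N * K*^2) time and O(N * K*) space, rather than the O(K^N) cost of enumerating every latent sequence.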

  64. Forward Algorithm
we still need to learn these (EM if not observed), i.e., p_emission and p_transition
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
    p_obs = p_emission(obs_i | state)
    for (old = 0; old < K*; ++old) {
      p_move = p_transition(state | old)
      α[i][state] += α[i-1][old] * p_obs * p_move
    }
  }
}

  65. Forward Algorithm
Q: What do we return? (How do we return the likelihood of the sequence?)
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
    p_obs = p_emission(obs_i | state)
    for (old = 0; old < K*; ++old) {
      p_move = p_transition(state | old)
      α[i][state] += α[i-1][old] * p_obs * p_move
    }
  }
}

  66. Forward Algorithm
Q: What do we return? (How do we return the likelihood of the sequence?)
A: α[N+1][end]
α = double[N+2][K*]
α[0][*] = 0.0
α[0][START] = 1.0
for (i = 1; i ≤ N+1; ++i) {
  for (state = 0; state < K*; ++state) {
    p_obs = p_emission(obs_i | state)
    for (old = 0; old < K*; ++old) {
      p_move = p_transition(state | old)
      α[i][state] += α[i-1][old] * p_obs * p_move
    }
  }
}
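A runnable sketch of this forward pass in Python (a sketch under the slides' assumptions, not the course's reference code), using the transition and emission tables from the example above; "start" and "end" play the roles of the BOS and EOS states:

    def forward_likelihood(words, states, trans, emit):
        # Forward algorithm: sum over all latent paths in O(N * K^2) time.
        n = len(words)
        # alpha[i][s]: total probability of all paths that end in state s
        # after emitting the first i observations.
        alpha = [{s: 0.0 for s in states + ["start", "end"]} for _ in range(n + 2)]
        alpha[0]["start"] = 1.0
        for i in range(1, n + 1):
            for state in states:
                p_obs = emit[state][words[i - 1]]
                alpha[i][state] = sum(
                    alpha[i - 1][old] * trans.get(old, {}).get(state, 0.0) * p_obs
                    for old in alpha[i - 1])
        # Final step: transition into the end state (its "#" emission has probability 1).
        alpha[n + 1]["end"] = sum(alpha[n][old] * trans.get(old, {}).get("end", 0.0)
                                  for old in alpha[n])
        return alpha[n + 1]["end"], alpha

    trans = {"start": {"N": .7,  "V": .2,  "end": .1},
             "N":     {"N": .15, "V": .8,  "end": .05},
             "V":     {"N": .6,  "V": .35, "end": .05}}
    emit = {"N": {"w1": .7, "w2": .2, "w3": .05, "w4": .05},
            "V": {"w1": .2, "w2": .6, "w3": .1,  "w4": .1}}

    likelihood, alpha = forward_likelihood(["w1", "w2", "w3", "w4"], ["N", "V"], trans, emit)
    print(alpha[1]["N"], alpha[1]["V"])   # 0.49, 0.04
    print(alpha[2]["N"], alpha[2]["V"])   # 0.0195, 0.2436
    print(likelihood)                     # the marginal likelihood p(w1, ..., w4)

With these tables the α values match the hand computations on the earlier slides, and the returned likelihood equals the total produced by the brute-force enumeration sketch above (roughly 6.5e-5), while only ever filling an (N+2) x K* table.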

  67. Interactive HMM Example https://goo.gl/rbHEoc (Jason Eisner, 2002) Original: http://www.cs.jhu.edu/~jason/465/PowerPoint/lect24-hmm.xls
