  1. Hidden Markov models and dynamic programming
     Matthew Macauley
     Department of Mathematical Sciences, Clemson University
     http://www.math.clemson.edu/~macaule/
     Math 4500, Spring 2017

  2. The occasionally dishonest casino

     Three canonical questions. Given a sequence of rolls by the casino:

     WWWLWLWLWLWWLWWLLLWWWWLWWLWWLWLWLLLWLWWLLWWLWLWLLWWLLLWLWWWWLWLWWWWL

     one may ask:
     1. Evaluation: How likely is this sequence given our model?
     2. Decoding: When was the casino rolling the fair vs. the unfair die?
     3. Learning: Can we deduce the probability parameters if we didn't know them?
        (e.g., "how loaded are the dice?" and "how often does the casino switch?")

     [Model diagram: two hidden states, Fair and Unfair. Transitions: Fair → Fair 0.95,
     Fair → Unfair 0.05, Unfair → Unfair 0.9, Unfair → Fair 0.1. Emissions: Fair emits
     W with probability 2/3 and L with 1/3; Unfair emits W with 0.4 and L with 0.6.]
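The casino model above is easy to simulate, which makes the three questions concrete: we generate a W/L sequence from known hidden states, then pretend we only see the rolls. The sketch below uses the slide's parameters; the function and variable names are my own, not from the slides.

```python
import random

# Occasionally dishonest casino from the slides:
# Fair stays Fair with prob 0.95, Unfair stays Unfair with prob 0.9.
TRANS = {'F': {'F': 0.95, 'U': 0.05}, 'U': {'F': 0.1, 'U': 0.9}}
EMIT = {'F': {'W': 2/3, 'L': 1/3}, 'U': {'W': 0.4, 'L': 0.6}}

def roll_sequence(n, start='F', seed=0):
    """Simulate n rolls; return (hidden state string, observed W/L string)."""
    rng = random.Random(seed)
    state, hidden, observed = start, [], []
    for _ in range(n):
        hidden.append(state)
        observed.append(rng.choices(['W', 'L'],
                                    weights=[EMIT[state]['W'], EMIT[state]['L']])[0])
        state = rng.choices(['F', 'U'],
                            weights=[TRANS[state]['F'], TRANS[state]['U']])[0]
    return ''.join(hidden), ''.join(observed)

hidden, observed = roll_sequence(20)
print(observed)  # the casino's W/L sequence; the F/U states stay hidden
```

The decoding problem is then: given only `observed`, recover something close to `hidden`.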

  3. Problem #1: Evaluation

     For CpG identification, we need the posterior probabilities P(π_t = k | x), for each
     k ∈ Q and t = 1, 2, ..., ℓ. By Bayes' theorem,

        P(π_t = k | x) = P(x, π_t = k) / P(x).

     We can compute P(x, π_t = k) recursively:

        P(x, π_t = k) = P(x_1 x_2 ··· x_t, π_t = k) · P(x_{t+1} x_{t+2} ··· x_ℓ | x_1 x_2 ··· x_t, π_t = k)
                      = P(x_1 x_2 ··· x_t, π_t = k) · P(x_{t+1} x_{t+2} ··· x_ℓ | π_t = k)
                      = f_k(t) · b_k(t).

     The forward-backward algorithm. Given an emitted sequence x = x_1 x_2 x_3 ··· x_ℓ, we will use the
     - forward algorithm to compute f_k(t): the probability of observing x_1 x_2 ··· x_t and ending up in state k;
     - backward algorithm to compute b_k(t): the probability of observing x_{t+1} ··· x_ℓ given that we are in state k at time t.

     It is also straightforward to compute P(x) using either of these algorithms.

  4. The forward algorithm

     Example. Compute P(x), for x = LWW, using the model with begin state B and states
     F (fair), U (unfair): transitions a_BF = a_BU = 0.5, a_FF = 0.7, a_FU = 0.3,
     a_UF = 0.4, a_UU = 0.6; emissions e_F(W) = 2/3, e_F(L) = 1/3, e_U(W) = 0.4, e_U(L) = 0.6.

     Forward algorithm
     1. Initialize (t = 0): Set f_B(0) = 1, and f_j(0) = 0 for all j ∈ Q.
     2. Recursion: do for t = 1, 2, ..., ℓ:
        for each k ∈ Q, define f_k(t) := e_k(x_t) Σ_{j∈Q} f_j(t−1) a_jk.
     3. Termination: Set P(x) = Σ_{k∈Q} f_k(ℓ).

  5. The forward algorithm (continued)

     Example. Compute P(x), for x = LWW.

     t = 0: f_B(0) = 1, f_F(0) = 0, f_U(0) = 0.
     t = 1: f_F(1) = P(x_1 = L, π_1 = F) = f_B(0) · a_BF · e_F(L) = 1 · (1/2) · (1/3) = 1/6.
            f_U(1) = P(x_1 = L, π_1 = U) = f_B(0) · a_BU · e_U(L) = 1 · (1/2) · (6/10) = 0.3.

  6. The forward algorithm (continued)

     Example. Compute P(x), for x = LWW.

     t = 2: f_F(2) = P(x_1 x_2 = LW, π_2 = F) = f_F(1) · a_FF · e_F(W) + f_U(1) · a_UF · e_F(W)
                   = (1/6)(0.7)(2/3) + (0.3)(0.4)(2/3) ≈ 0.1578.
            f_U(2) = P(x_1 x_2 = LW, π_2 = U) = f_F(1) · a_FU · e_U(W) + f_U(1) · a_UU · e_U(W)
                   = (1/6)(0.3)(0.4) + (0.3)(0.6)(0.4) = 0.092.

     t = 3: f_F(3) = P(x_1 x_2 x_3 = LWW, π_3 = F) = f_F(2) · a_FF · e_F(W) + f_U(2) · a_UF · e_F(W)
                   = (0.1578)(0.7)(2/3) + (0.092)(0.4)(2/3) ≈ 0.0982.
            f_U(3) = P(x_1 x_2 x_3 = LWW, π_3 = U) = f_F(2) · a_FU · e_U(W) + f_U(2) · a_UU · e_U(W)
                   = (0.1578)(0.3)(0.4) + (0.092)(0.6)(0.4) ≈ 0.0410.

     Now, P(x) = P(x = LWW) = f_F(3) + f_U(3) ≈ 0.0982 + 0.0410 = 0.1392.
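The forward recursion above translates directly into a dynamic-programming table. A minimal sketch in Python, using the example model's parameters (the dictionary names are my own):

```python
# Transition probabilities a_jk and emissions e_k(x) from the slides' example
# (begin state B, fair state F, unfair state U).
A = {('B', 'F'): 0.5, ('B', 'U'): 0.5,
     ('F', 'F'): 0.7, ('F', 'U'): 0.3,
     ('U', 'F'): 0.4, ('U', 'U'): 0.6}
E = {'F': {'W': 2/3, 'L': 1/3}, 'U': {'W': 0.4, 'L': 0.6}}
STATES = ['F', 'U']

def forward(x):
    """Return f[t-1][k] = f_k(t) = P(x_1..x_t, pi_t = k), and P(x)."""
    # t = 1: absorb the begin state, f_k(1) = a_Bk * e_k(x_1).
    f = [{k: A[('B', k)] * E[k][x[0]] for k in STATES}]
    for t in range(1, len(x)):
        f.append({k: E[k][x[t]] * sum(f[t - 1][j] * A[(j, k)] for j in STATES)
                  for k in STATES})
    return f, sum(f[-1][k] for k in STATES)

f, px = forward("LWW")
print(f[0])          # f_F(1) = 1/6, f_U(1) = 0.3
print(round(px, 4))  # P(LWW) ≈ 0.1392, matching the slide
```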

  7. The backward algorithm

     Example. Compute P(x), for x = LWW.

     Backward algorithm
     1. Initialize (t = ℓ): Set b_k(ℓ) = 1 for all k ∈ Q.
     2. Recursion: do for t = ℓ−1, ..., 2, 1: for each j ∈ Q,
        b_j(t) := P(x_{t+1} x_{t+2} ··· x_ℓ | π_t = j)
                = Σ_{k∈Q} P(π_{t+1} = k | π_t = j) · e_k(x_{t+1}) · P(x_{t+2} ··· x_ℓ | π_{t+1} = k)
                = Σ_{k∈Q} a_jk e_k(x_{t+1}) b_k(t+1).
     3. Termination: Set P(x) = Σ_{k∈Q} a_Bk e_k(x_1) b_k(1).

  8. The backward algorithm (continued)

     Example. Compute P(x), for x = LWW.

     t = 3: b_F(3) = 1, b_U(3) = 1.
     t = 2: b_F(2) = a_FF e_F(W) b_F(3) + a_FU e_U(W) b_U(3) = (0.7)(2/3) + (0.3)(0.4) = 44/75 ≈ 0.5866.
            b_U(2) = a_UF e_F(W) b_F(3) + a_UU e_U(W) b_U(3) = (0.4)(2/3) + (0.6)(0.4) = 38/75 ≈ 0.5067.
     t = 1: b_F(1) = a_FF e_F(W) b_F(2) + a_FU e_U(W) b_U(2) = (0.7)(2/3)(44/75) + (0.3)(0.4)(38/75) ≈ 0.3346.
            b_U(1) = a_UF e_F(W) b_F(2) + a_UU e_U(W) b_U(2) = (0.4)(2/3)(44/75) + (0.6)(0.4)(38/75) ≈ 0.2780.
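The backward recursion is the mirror image of the forward one: it fills the table from t = ℓ down to t = 1. A minimal self-contained sketch (same example parameters as above; names are my own):

```python
A = {('B', 'F'): 0.5, ('B', 'U'): 0.5,
     ('F', 'F'): 0.7, ('F', 'U'): 0.3,
     ('U', 'F'): 0.4, ('U', 'U'): 0.6}
E = {'F': {'W': 2/3, 'L': 1/3}, 'U': {'W': 0.4, 'L': 0.6}}
STATES = ['F', 'U']

def backward(x):
    """Return b[t-1][k] = b_k(t) = P(x_{t+1}..x_l | pi_t = k), and P(x)."""
    l = len(x)
    b = [None] * l
    b[l - 1] = {k: 1.0 for k in STATES}          # initialize: b_k(l) = 1
    for t in range(l - 2, -1, -1):               # recursion: t = l-1, ..., 1
        b[t] = {j: sum(A[(j, k)] * E[k][x[t + 1]] * b[t + 1][k] for k in STATES)
                for j in STATES}
    # termination: absorb the begin state
    px = sum(A[('B', k)] * E[k][x[0]] * b[0][k] for k in STATES)
    return b, px

b, px = backward("LWW")
print({k: round(v, 4) for k, v in b[0].items()})  # b_F(1) ≈ 0.3346, b_U(1) ≈ 0.2780
print(round(px, 4))                               # P(LWW) ≈ 0.1392, same as forward
```

That the forward and backward algorithms agree on P(x) is a useful sanity check on any implementation.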

  9. The forward-backward algorithm

     Example. For x = x_1 x_2 x_3 = LWW, combine the forward and backward tables via
     P(π_t = k | x) = f_k(t) b_k(t) / P(x):

     P(π_1 = F | LWW) = f_F(1) b_F(1) / P(x) ≈ (1/6)(0.3346) / 0.1392 ≈ 0.4006
     P(π_1 = U | LWW) = f_U(1) b_U(1) / P(x) ≈ (0.3)(0.2780) / 0.1392 ≈ 0.5991
     P(π_2 = F | LWW) = f_F(2) b_F(2) / P(x) ≈ (0.1578)(0.5866) / 0.1392 ≈ 0.6650
     P(π_2 = U | LWW) = f_U(2) b_U(2) / P(x) ≈ (0.092)(0.5067) / 0.1392 ≈ 0.3349
     P(π_3 = F | LWW) = f_F(3) b_F(3) / P(x) ≈ (0.0982)(1) / 0.1392 ≈ 0.7055
     P(π_3 = U | LWW) = f_U(3) b_U(3) / P(x) ≈ (0.041)(1) / 0.1392 ≈ 0.2945
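Putting the two passes together gives the full posterior table in one function. This self-contained sketch (example parameters repeated; names my own) reproduces all six posteriors on the slide:

```python
A = {('B', 'F'): 0.5, ('B', 'U'): 0.5,
     ('F', 'F'): 0.7, ('F', 'U'): 0.3,
     ('U', 'F'): 0.4, ('U', 'U'): 0.6}
E = {'F': {'W': 2/3, 'L': 1/3}, 'U': {'W': 0.4, 'L': 0.6}}
STATES = ['F', 'U']

def forward_backward(x):
    """Return post[t-1][k] = P(pi_t = k | x) = f_k(t) b_k(t) / P(x)."""
    l = len(x)
    # forward pass
    f = [{k: A[('B', k)] * E[k][x[0]] for k in STATES}]
    for t in range(1, l):
        f.append({k: E[k][x[t]] * sum(f[t - 1][j] * A[(j, k)] for j in STATES)
                  for k in STATES})
    # backward pass
    b = [None] * l
    b[l - 1] = {k: 1.0 for k in STATES}
    for t in range(l - 2, -1, -1):
        b[t] = {j: sum(A[(j, k)] * E[k][x[t + 1]] * b[t + 1][k] for k in STATES)
                for j in STATES}
    px = sum(f[l - 1][k] for k in STATES)
    return [{k: f[t][k] * b[t][k] / px for k in STATES} for t in range(l)]

for t, dist in enumerate(forward_backward("LWW"), start=1):
    print(t, {k: round(p, 4) for k, p in dist.items()})
# t=1: F ≈ 0.4007, U ≈ 0.5993; t=2: F ≈ 0.6651, U ≈ 0.3349; t=3: F ≈ 0.7053, U ≈ 0.2947
```

Note that at each t the two posteriors sum to 1, another easy correctness check.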

  10. Decoding and the Viterbi algorithm

      Problem #2: Decoding. Given an observed sequence x = x_1 x_2 x_3 ··· x_ℓ, what is the most
      likely hidden path π = π_1 π_2 π_3 ··· π_ℓ to emit x? That is, compute

         π_max = arg max_π P(π | x) = arg max_π P(x, π).

      Assume that for each j ∈ Q, we've computed the path π_1 π_2 ··· π_{t−2} π_{t−1} of highest
      probability among those emitting x_1 x_2 ··· x_{t−1} and ending in state j. Denote the
      probability of this path by

         v_j(t−1) = max_{π_1 ··· π_{t−1}} P(π_{t−1} = j, x_1 ··· x_{t−1}).

      Then, for each k ∈ Q, among paths emitting x_1 x_2 ··· x_t:

         v_k(t) = max_{π_1 ··· π_{t−1}} P(π_t = k, x_1 ··· x_t)
                = max_{j∈Q} { v_j(t−1) a_jk e_k(x_t) }
                = e_k(x_t) max_{j∈Q} { v_j(t−1) a_jk }.

  11. Decoding and the Viterbi algorithm (continued)

      Viterbi algorithm
      1. Initialize (t = 0): Set v_B(0) = 1, and v_j(0) = 0 for all j ∈ Q.
      2. Recursion: do for t = 1, 2, ..., ℓ:
         for each k ∈ Q, define v_k(t) := e_k(x_t) max_{j∈Q} { v_j(t−1) a_jk }.
         Also, set the pointer ptr_k(t) = arg max_{j∈Q} { v_j(t−1) a_jk }.
      3. Termination: Set P(x, π*) = max_π P(x, π) = max_{j∈Q} { v_j(ℓ) },
         and π*_ℓ = arg max_{j∈Q} { v_j(ℓ) }.

      The maximum-probability path can be found by tracing back through the pointers.

  12. Decoding and the Viterbi algorithm (continued)

      Example. Given x = LWW, what is the most likely path π = π_1 π_2 π_3?

      t = 1: v_F(1) = max_{π_1} P(π_1 = F, x_1 = L) = v_B(0) · a_BF · e_F(L) = 1 · (1/2) · (1/3) = 1/6.
             v_U(1) = max_{π_1} P(π_1 = U, x_1 = L) = v_B(0) · a_BU · e_U(L) = 1 · (1/2) · (0.6) = 0.3.
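The Viterbi recursion is the forward algorithm with sums replaced by maxima, plus back-pointers for the traceback. A self-contained sketch (example parameters repeated; names my own) that finishes the LWW example:

```python
A = {('B', 'F'): 0.5, ('B', 'U'): 0.5,
     ('F', 'F'): 0.7, ('F', 'U'): 0.3,
     ('U', 'F'): 0.4, ('U', 'U'): 0.6}
E = {'F': {'W': 2/3, 'L': 1/3}, 'U': {'W': 0.4, 'L': 0.6}}
STATES = ['F', 'U']

def viterbi(x):
    """Return the most probable hidden path for x and its joint probability."""
    v = [{k: A[('B', k)] * E[k][x[0]] for k in STATES}]   # v_k(1)
    ptr = [{}]                                            # no pointer at t = 1
    for t in range(1, len(x)):
        layer, back = {}, {}
        for k in STATES:
            j_best = max(STATES, key=lambda j: v[t - 1][j] * A[(j, k)])
            back[k] = j_best                              # ptr_k(t)
            layer[k] = E[k][x[t]] * v[t - 1][j_best] * A[(j_best, k)]
        v.append(layer)
        ptr.append(back)
    # termination and traceback through the pointers
    last = max(STATES, key=lambda k: v[-1][k])
    path = [last]
    for t in range(len(x) - 1, 0, -1):
        path.append(ptr[t][path[-1]])
    return ''.join(reversed(path)), v[-1][last]

path, p = viterbi("LWW")
print(path, round(p, 4))  # UFF 0.0373
```

Here the most probable single path is UFF with P(x, π*) ≈ 0.0373, even though the posteriors on slide 9 favor F at every position individually; Viterbi decoding and posterior (position-wise) decoding need not agree.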
