

  1. CS 730/730W/830: Intro AI (Break, HMMs)
     handout: slides; final blog entries were due
     Wheeler Ruml (UNH), Lecture 27, CS 730 – 1 / 8

  2. Break
     ■ Wed May 2: HMMs, unsupervised learning, applications
     ■ Mon May 7: special guest Scott Kiesel on robot planning
     ■ Wed May 9, 9-noon: project presentations
     ■ Thur May 10, 8am: paper drafts (optional for some)
     ■ Fri May 11, 10:30: exam 3 (N133)
     ■ Tues May 15, 3pm: papers (one hardcopy + electronic PDF)
     ■ menu?

  3. Hidden Markov Models
     Outline:
     ■ Break
     ■ HMMs
       ■ Models
       ■ The Model
       ■ Viterbi Decoding
       ■ Random
       ■ EOLQs

  4. Probabilistic Models
     ■ MDPs
     ■ Naive Bayes
     ■ k-Means
     ■ Markov chain
     ■ Hidden Markov model

  5. A Hidden Markov Model
     P(x_t = j) = Σ_i P(x_{t-1} = i) · P(x_t = j | x_{t-1} = i)
     P(e_t = k) = Σ_i P(x_t = i) · P(e = k | x = i)
     More concisely:
     P(x_t) = Σ_{x_{t-1}} P(x_{t-1}) · P(x_t | x_{t-1})
     P(e_t) = Σ_{x_t} P(x_t) · P(e | x)
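The two update equations on this slide can be computed directly with dictionaries. A minimal sketch (the function names, the dict-of-dicts interface, and the rain/sun example values below are illustrative, not from the slides):

```python
def predict(prior, trans):
    """One-step state prediction:
    P(x_t = j) = sum_i P(x_{t-1} = i) * P(x_t = j | x_{t-1} = i).
    prior: dict state -> probability; trans[i][j] = P(x_t = j | x_{t-1} = i)."""
    dests = next(iter(trans.values()))  # destination states
    return {j: sum(prior[i] * trans[i][j] for i in prior) for j in dests}

def observe_dist(state_dist, sensor):
    """Observation distribution:
    P(e_t = k) = sum_i P(x_t = i) * P(e = k | x = i).
    sensor[i][k] = P(e = k | x = i)."""
    obs = {}
    for i, p in state_dist.items():
        for k, q in sensor[i].items():
            obs[k] = obs.get(k, 0.0) + p * q
    return obs
```

For example, with a uniform prior over two states and a symmetric transition model, `predict` returns the stationary-looking 50/50 distribution, and `observe_dist` then mixes the two sensor rows accordingly.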

  6. Viterbi Decoding
     given: transition model T(s, s′)
            sensing model S(s, o)
            observations o_1, ..., o_T
     find:  most probable s_1, ..., s_T

     initialize S × T matrix v with 0s
     v_{0,0} ← 1
     for each time t = 0 to T − 1
       for each state s
         for each new state s′
           score ← v_{s,t} · T(s, s′) · S(s′, o_t)
           if score > v_{s′,t+1}
             v_{s′,t+1} ← score
             best-parent(s′) ← s
     trace back from the s with maximum v_{s,T}
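The pseudocode above can be sketched in Python. This version stores the table and back-pointers per time step; the dict-based interface and the umbrella-style test values are illustrative assumptions, not from the slides:

```python
def viterbi(states, T, S, obs, start):
    """Most probable state sequence for an HMM (Viterbi decoding).
    T[s][s2] = transition prob, S[s][o] = observation prob,
    start[s] = initial distribution, obs = observation sequence."""
    # v[t][s]: probability of the best path ending in state s at time t
    v = [{s: start[s] * S[s][obs[0]] for s in states}]
    parent = [{}]  # parent[t][s]: predecessor of s on that best path
    for t in range(1, len(obs)):
        v.append({})
        parent.append({})
        for s2 in states:
            best = max(states, key=lambda s: v[t - 1][s] * T[s][s2])
            v[t][s2] = v[t - 1][best] * T[best][s2] * S[s2][obs[t]]
            parent[t][s2] = best
    # trace back from the state with maximum final probability
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(parent[t][path[-1]])
    return list(reversed(path))
```

For long sequences one would sum log-probabilities instead of multiplying, to avoid underflow; the structure of the algorithm is unchanged.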

  7. Random applications
     ■ unsupervised learning: dimensionality reduction

  8. EOLQs
     ■ What question didn’t you get to ask today?
     ■ What’s still confusing?
     ■ What would you like to hear more about?
     Please write down your most pressing question about AI and put it in the box on your way out. Thanks!
