CS 730/730W/830: Intro AI, Lecture 27: HMMs
SLIDE 1

CS 730/730W/830: Intro AI

■ Break
■ HMMs

Wheeler Ruml (UNH) Lecture 27, CS 730 – 1 / 8

■ handout: slides
■ final blog entries were due

SLIDE 2

Break


Wed May 2: HMMs, unsupervised learning, applications

Mon May 7: special guest Scott Kiesel on robot planning

Wed May 9, 9-noon: project presentations

Thur May 10, 8am: paper drafts (optional for some)

Fri May 11, 10:30: exam 3 (N133)

Tues May 15, 3pm: papers (one hardcopy + electronic PDF)

SLIDE 3

Hidden Markov Models

■ Break
■ HMMs
■ Models
■ The Model
■ Viterbi Decoding
■ Random
■ EOLQs


SLIDE 4

Probabilistic Models


■ MDPs
■ Naive Bayes
■ k-Means
■ Markov chain
■ Hidden Markov model

SLIDE 5

A Hidden Markov Model


P(x_t = j) = Σ_i P(x_{t−1} = i) P(x_t = j | x_{t−1} = i)

P(e_t = k) = Σ_i P(x_t = i) P(e_t = k | x_t = i)

More concisely:

P(x_t) = Σ_{x_{t−1}} P(x_{t−1}) P(x_t | x_{t−1})

P(e_t) = Σ_{x_t} P(x_t) P(e_t | x_t)
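The two sums above are just weighted averages over states, so they translate directly into plain Python. A minimal sketch, assuming a hypothetical two-state umbrella-style HMM (the state names and all probabilities below are made up for illustration, not from the slides):

```python
# Prior over the hidden state at time t-1 (hypothetical two-state example).
p_prev = {"rain": 0.5, "sun": 0.5}

# Transition model: T[i][j] = P(x_t = j | x_{t-1} = i).
T = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

# Sensor model: S[i][k] = P(e_t = k | x_t = i).
S = {
    "rain": {"umbrella": 0.9, "no umbrella": 0.1},
    "sun":  {"umbrella": 0.2, "no umbrella": 0.8},
}

# P(x_t = j) = sum_i P(x_{t-1} = i) * P(x_t = j | x_{t-1} = i)
p_x = {j: sum(p_prev[i] * T[i][j] for i in p_prev) for j in T}

# P(e_t = k) = sum_i P(x_t = i) * P(e_t = k | x_t = i)
p_e = {k: sum(p_x[i] * S[i][k] for i in p_x)
       for k in ("umbrella", "no umbrella")}
```

With the uniform prior above, the predicted state distribution stays uniform and P(umbrella) works out to 0.5 · 0.9 + 0.5 · 0.2 = 0.55.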

SLIDE 6

Viterbi Decoding


given: transition model T(s, s′), sensing model S(s, o),
       observations o_1, ..., o_T
find: most probable state sequence s_1, ..., s_T

initialize the S × (T + 1) matrix v with 0s
v_{0,0} ← 1
for each time t = 0 to T − 1
    for each state s
        for each new state s′
            score ← v_{s,t} · T(s, s′) · S(s′, o_{t+1})
            if score > v_{s′,t+1}
                v_{s′,t+1} ← score
                best-parent(s′) ← s
trace back from the state s with maximum v_{s,T}
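The pseudocode above can be sketched as a short runnable Python function. This version folds the start distribution into the first column instead of using a separate dummy start state, but the inner loop scores transitions exactly as on the slide. The two-state model and all probabilities are hypothetical, for illustration only:

```python
# Viterbi decoding on a small hypothetical HMM: find the most probable
# hidden-state sequence for a list of observations.
states = ["rain", "sun"]
start = {"rain": 0.5, "sun": 0.5}            # P(x_1)
T = {"rain": {"rain": 0.7, "sun": 0.3},      # transition model T(s, s')
     "sun":  {"rain": 0.3, "sun": 0.7}}
S = {"rain": {"umbrella": 0.9, "no": 0.1},   # sensing model S(s, o)
     "sun":  {"umbrella": 0.2, "no": 0.8}}

def viterbi(obs):
    # v[t][s] = probability of the best path ending in state s at time t
    v = [{s: start[s] * S[s][obs[0]] for s in states}]
    parent = [{}]
    for t in range(1, len(obs)):
        v.append({})
        parent.append({})
        for s2 in states:
            # best predecessor for s2, scored as in the slide's inner loop
            best = max(states, key=lambda s: v[t - 1][s] * T[s][s2])
            v[t][s2] = v[t - 1][best] * T[best][s2] * S[s2][obs[t]]
            parent[t][s2] = best
    # trace back from the state with the maximum final score
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(parent[t][path[-1]])
    return list(reversed(path))
```

For example, viterbi(["umbrella", "umbrella", "no"]) returns ["rain", "rain", "sun"] under this model: two umbrella sightings favor rain, and the final dry observation flips the last state.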

SLIDE 7

Random


■ applications
■ unsupervised learning: dimensionality reduction

SLIDE 8

EOLQs


What question didn’t you get to ask today?

What’s still confusing?

What would you like to hear more about?

Please write down your most pressing question about AI and put it in the box on your way out. Thanks!