SLIDE 57 Smoothing
Markov Models HMM
- MC to HMM
- Hidden MM
- HMM Examples
- W-U Example
- HMM tasks
- Filtering
- Online updates
- Forward algorithm
- Umbrella example
- Prediction
- Model evaluation
- Smoothing
- Umbrella smooth.
- Forward-backward
- Most likely seq.
- Viterbi
Summary
2017 Artificial Intelligence – 24 / 34
Compute the distribution over a past state given evidence up to the present: P(X_k | e_{1:t}) for some k < t.

Let's factorize the distribution as follows:

P(X_k | e_{1:t}) = P(X_k | e_{1:k}, e_{k+1:t})                  (split the evidence sequence)
                 = α P(e_{k+1:t} | X_k, e_{1:k}) P(X_k | e_{1:k})   (Bayes rule)
                 = α P(e_{k+1:t} | X_k) P(X_k | e_{1:k})            (Markov assumption)

The second factor is the familiar filtering distribution; the first is a backward message that summarizes the future evidence.
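The combination step above can be sketched numerically on the umbrella example (umbrella observed on days 1 and 2, k = 1, t = 2). The message values below are assumed precomputed: f is the filtered P(R_1 | u_1) from the forward algorithm and b is the backward message P(u_2 | R_1); index 0 is "rain", index 1 is "no rain".

```python
def normalize(v):
    # Apply the alpha normalization constant: scale so entries sum to 1.
    s = sum(v)
    return [x / s for x in v]

# Umbrella example, k = 1, t = 2, umbrella observed on both days.
f = [0.818, 0.182]   # filtering message P(R1 | u1), assumed precomputed
b = [0.690, 0.410]   # backward message P(u2 | R1), assumed precomputed

# P(X_k | e_{1:t}) = alpha * P(e_{k+1:t} | X_k) * P(X_k | e_{1:k})
# i.e. an elementwise product of the two messages, then normalization.
smoothed = normalize([fi * bi for fi, bi in zip(f, b)])
print(round(smoothed[0], 3))   # smoothed P(rain on day 1 | both umbrellas)
```

Note that smoothing raises the day-1 rain estimate above the filtered value of 0.818: the second umbrella sighting is additional evidence for rain on day 1.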
The backward message can be computed recursively:

P(e_{k+1:t} | X_k) = Σ_{x_{k+1}} P(e_{k+1:t} | X_k, x_{k+1}) P(x_{k+1} | X_k)            (condition on X_{k+1})
                   = Σ_{x_{k+1}} P(e_{k+1:t} | x_{k+1}) P(x_{k+1} | X_k)                 (Markov assumption)
                   = Σ_{x_{k+1}} P(e_{k+1}, e_{k+2:t} | x_{k+1}) P(x_{k+1} | X_k)        (split the evidence sequence)
                   = Σ_{x_{k+1}} P(e_{k+1} | x_{k+1}) P(e_{k+2:t} | x_{k+1}) P(x_{k+1} | X_k)   (using cond. independence)
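The recursion runs backward in time, starting from the empty-evidence message b_{t+1:t} = 1. A minimal sketch on the umbrella HMM (the function name `backward_messages` and the 0/1 evidence encoding are illustrative choices, not from the slides):

```python
# Umbrella HMM; states: 0 = rain, 1 = no rain.
T = [[0.7, 0.3], [0.3, 0.7]]        # T[i][j] = P(X_{k+1} = j | X_k = i)
sensor = [[0.9, 0.2], [0.1, 0.8]]   # sensor[e][j] = P(e | X = j); e: 0 = umbrella, 1 = none

def backward_messages(evidence):
    """Return [b_{1:t}, b_{2:t}, ..., b_{t+1:t}] for evidence e_1..e_t."""
    t = len(evidence)
    b = [1.0, 1.0]                  # b_{t+1:t} = 1 (no remaining evidence)
    messages = [b]
    for k in range(t - 1, -1, -1):  # fold in e_{k+1} = evidence[k]
        e = evidence[k]
        # b_{k+1:t}(i) = sum_j P(e_{k+1} | j) * b_{k+2:t}(j) * P(j | i)
        b = [sum(sensor[e][j] * b[j] * T[i][j] for j in range(2))
             for i in range(2)]
        messages.append(b)
    messages.reverse()
    return messages

msgs = backward_messages([0, 0])    # umbrella observed on days 1 and 2
print(msgs[1])                      # b_{2:2}(R1): [0.69, 0.41] for rain / no rain
```

Each step costs O(|X|^2), so computing all backward messages for a length-t sequence is O(t |X|^2), matching the forward pass.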