Dynamic Bayesian Networks and Hidden Markov Models


  1. Lecture 11: Dynamic Bayesian Networks and Hidden Markov Models; Decision Trees. Marco Chiarandini, Department of Mathematics & Computer Science, University of Southern Denmark. Slides by Stuart Russell and Peter Norvig.

  2. Course Overview
     ✔ Introduction
       ✔ Artificial Intelligence
       ✔ Intelligent Agents
     ✔ Search
       ✔ Uninformed Search
       ✔ Heuristic Search
       ✔ Adversarial Search
       ✔ Minimax search
       ✔ Alpha-beta pruning
     ✔ Knowledge representation and Reasoning
       ✔ Propositional logic
       ✔ First order logic
       ✔ Inference
     ✔ Uncertain knowledge and Reasoning
       ✔ Probability and Bayesian approach
       ✔ Bayesian Networks
       – Hidden Markov Chains
       – Kalman Filters
     Learning
       – Decision Trees
       – Maximum Likelihood
       – EM Algorithm
       – Learning Bayesian Networks
       – Neural Networks
       – Support vector machines

  3. Performance of approximation algorithms
     Absolute approximation: $|P(X \mid e) - \hat{P}(X \mid e)| \le \epsilon$
     Relative approximation: $\dfrac{|P(X \mid e) - \hat{P}(X \mid e)|}{P(X \mid e)} \le \epsilon$
     Relative $\Rightarrow$ absolute, since $0 \le P \le 1$ (but $P$ may be $O(2^{-n})$)
     Randomized algorithms may fail with probability at most $\delta$
     Polytime approximation: $\mathrm{poly}(n, \epsilon^{-1}, \log \delta^{-1})$
     Theorem (Dagum and Luby, 1993): both absolute and relative approximation, for either deterministic or randomized algorithms, are NP-hard for any $\epsilon, \delta < 0.5$.
     (Absolute approximation is polytime with no evidence, via Chernoff bounds.)
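To make the two error measures concrete, here is a minimal sketch (my own illustration, not part of the slides): a two-variable network X → E with invented CPT numbers, a rejection-sampling estimate of $P(X \mid e)$, and the resulting absolute and relative errors.

```python
import random

# Tiny network X -> E with invented numbers (illustrative only).
P_X = 0.3                                  # prior P(X = true)
P_E_GIVEN_X = {True: 0.9, False: 0.2}      # CPT: P(E = true | X)

def exact_posterior():
    # Bayes' rule on the two-variable model: P(X = true | E = true).
    num = P_X * P_E_GIVEN_X[True]
    return num / (num + (1 - P_X) * P_E_GIVEN_X[False])

def rejection_sample(n, rng=random.Random(0)):
    # Sample (X, E) from the prior; keep only samples where E = true.
    accepted = hits = 0
    for _ in range(n):
        x = rng.random() < P_X
        e = rng.random() < P_E_GIVEN_X[x]
        if e:
            accepted += 1
            hits += x
    return hits / accepted

p, p_hat = exact_posterior(), rejection_sample(100_000)
abs_err = abs(p - p_hat)
print(f"P = {p:.4f}, estimate = {p_hat:.4f}, "
      f"absolute error = {abs_err:.4f}, relative error = {abs_err / p:.4f}")
```

Because the posterior here is far from 0, the two error measures come out similar; when $P(X \mid e)$ is $O(2^{-n})$, relative approximation is far more demanding than absolute, which is the point of the implication above.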

  4. Summary
     Exact inference by variable elimination:
     – polytime on polytrees, NP-hard on general graphs
     – space = time, very sensitive to topology
     Approximate inference by Likelihood Weighting (LW) and Markov Chain Monte Carlo (MCMC):
     – PriorSampling and RejectionSampling become unusable as evidence grows
     – LW does poorly when there is lots of (late-in-the-order) evidence
     – LW and MCMC are generally insensitive to topology
     – convergence can be very slow with probabilities close to 0 or 1
     – they can handle arbitrary combinations of discrete and continuous variables
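To illustrate the Likelihood Weighting entry, a minimal sketch (an invented three-node chain A → B → C, not an example from the lecture): evidence variables are fixed rather than sampled, and each sample is weighted by the likelihood of the evidence, so no sample is ever rejected.

```python
import random

# Invented chain A -> B -> C; query P(A = true | C = true).
P_A = 0.4
P_B = {True: 0.7, False: 0.1}   # CPT: P(B = true | A)
P_C = {True: 0.8, False: 0.3}   # CPT: P(C = true | B)

def likelihood_weighting(n, rng=random.Random(0)):
    w_true = w_total = 0.0
    for _ in range(n):
        a = rng.random() < P_A     # non-evidence: sampled from its CPT
        b = rng.random() < P_B[a]  # non-evidence: sampled from its CPT
        w = P_C[b]                 # evidence C = true: contributes its likelihood
        w_total += w
        if a:
            w_true += w
    return w_true / w_total        # weighted estimate of P(A = true | C = true)

# Converges to 0.26 / 0.47 ~ 0.5532 for these numbers.
print(f"P(A = true | C = true) ~ {likelihood_weighting(100_000):.4f}")
```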

  5. Outline
     1. Exercise
     2. Uncertainty over Time
     3. Speech Recognition
     4. Learning

  6. Wumpus World
     [Figure: 4×4 grid; squares [1,1], [1,2], [2,1] explored (OK), with breezes (B) observed in [1,2] and [2,1]]
     $P_{ij}$ = true iff $[i, j]$ contains a pit
     $B_{ij}$ = true iff $[i, j]$ is breezy
     Include only $B_{1,1}, B_{1,2}, B_{2,1}$ in the probability model.

  7. Specifying the probability model
     The full joint distribution is $P(P_{1,1}, \dots, P_{4,4}, B_{1,1}, B_{1,2}, B_{2,1})$.
     Apply the product rule: $P(B_{1,1}, B_{1,2}, B_{2,1} \mid P_{1,1}, \dots, P_{4,4})\, P(P_{1,1}, \dots, P_{4,4})$
     (Do it this way to get $P(\mathit{Effect} \mid \mathit{Cause})$.)
     First term: 1 if the breezes are adjacent to the pits, 0 otherwise.
     Second term: pits are placed randomly with probability 0.2 per square:
     $P(P_{1,1}, \dots, P_{4,4}) = \prod_{i,j=1,1}^{4,4} P(P_{i,j}) = 0.2^n \times 0.8^{16-n}$ for $n$ pits.
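A quick sanity check of the second term (a sketch of mine; the function name is made up): the joint prior of a full pit configuration factors into independent 0.2/0.8 terms, so it depends only on the number of pits $n$.

```python
def pit_prior(config):
    """config: dict mapping (i, j) -> bool for all 16 squares."""
    n = sum(config.values())            # number of pits in the configuration
    return 0.2 ** n * 0.8 ** (16 - n)   # product of independent square priors

empty = {(i, j): False for i in range(1, 5) for j in range(1, 5)}
print(pit_prior(empty))                    # 0.8**16 ~ 0.0281
print(pit_prior({**empty, (3, 3): True}))  # 0.2 * 0.8**15 ~ 0.0070
```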

  8. Observations and query
     We know the following facts:
     $b = \lnot b_{1,1} \wedge b_{1,2} \wedge b_{2,1}$
     $\mathit{known} = \lnot p_{1,1} \wedge \lnot p_{1,2} \wedge \lnot p_{2,1}$
     The query is $P(P_{1,3} \mid \mathit{known}, b)$.
     Define $\mathit{Unknown}$ = the $P_{ij}$s other than $P_{1,3}$ and $\mathit{Known}$.
     For inference by enumeration, we have
     $P(P_{1,3} \mid \mathit{known}, b) = \alpha \sum_{\mathit{unknown}} P(P_{1,3}, \mathit{unknown}, \mathit{known}, b)$
     This grows exponentially with the number of squares! (See the sketch below.)
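This enumeration is small enough to run directly for the 4×4 cave. The following brute-force sketch (my code, not the lecture's, using the deterministic breeze model from the previous slide) sums the pit prior over all $2^{12}$ assignments to $\mathit{Unknown}$ for each value of $P_{1,3}$, keeps only assignments consistent with the observed breezes, and normalizes; on an $n$-square cave the same idea costs $O(2^n)$.

```python
from itertools import product

SQUARES = [(i, j) for i in range(1, 5) for j in range(1, 5)]
SQ = set(SQUARES)
KNOWN = {(1, 1): False, (1, 2): False, (2, 1): False}     # visited: no pits
BREEZE_OBS = {(1, 1): False, (1, 2): True, (2, 1): True}  # observed b
QUERY = (1, 3)
UNKNOWN = [s for s in SQUARES if s not in KNOWN and s != QUERY]  # 12 squares

def neighbors(sq):
    i, j = sq
    return [c for c in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)) if c in SQ]

def consistent(pits):
    # Deterministic breeze model: B_ij = true iff some neighbour contains a pit.
    return all(any(pits[nb] for nb in neighbors(sq)) == obs
               for sq, obs in BREEZE_OBS.items())

weights = {}
for q in (True, False):
    total = 0.0
    for values in product((True, False), repeat=len(UNKNOWN)):
        pits = {**KNOWN, QUERY: q, **dict(zip(UNKNOWN, values))}
        if consistent(pits):
            n = sum(pits.values())                # number of pits
            total += 0.2 ** n * 0.8 ** (16 - n)   # pit prior
    weights[q] = total

alpha = 1.0 / sum(weights.values())
print({q: round(alpha * w, 4) for q, w in weights.items()})
# prints {True: 0.3103, False: 0.6897}, i.e. P(P_{1,3} | known, b) ~ <0.31, 0.69>
```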

  9. Using conditional independence
     Basic insight: observations are conditionally independent of other hidden squares given neighbouring hidden squares.
     [Figure: the grid partitioned into KNOWN squares, the QUERY square [1,3], the FRINGE squares adjacent to the known region, and the remaining OTHER squares]
     Define $\mathit{Unknown} = \mathit{Fringe} \cup \mathit{Other}$.
     Then $P(b \mid P_{1,3}, \mathit{Known}, \mathit{Unknown}) = P(b \mid P_{1,3}, \mathit{Known}, \mathit{Fringe})$.
     Manipulate the query into a form where we can use this!

  10. Using conditional independence contd.
      $P(P_{1,3} \mid \mathit{known}, b)$
      $= \alpha \sum_{\mathit{unknown}} P(P_{1,3}, \mathit{unknown}, \mathit{known}, b)$
      $= \alpha \sum_{\mathit{unknown}} P(b \mid P_{1,3}, \mathit{known}, \mathit{unknown})\, P(P_{1,3}, \mathit{known}, \mathit{unknown})$
      $= \alpha \sum_{\mathit{fringe}} \sum_{\mathit{other}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}, \mathit{other})\, P(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other})$
      $= \alpha \sum_{\mathit{fringe}} \sum_{\mathit{other}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\, P(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other})$
      $= \alpha \sum_{\mathit{fringe}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}) \sum_{\mathit{other}} P(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other})$
      $= \alpha \sum_{\mathit{fringe}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}) \sum_{\mathit{other}} P(P_{1,3})\, P(\mathit{known})\, P(\mathit{fringe})\, P(\mathit{other})$
      $= \alpha\, P(\mathit{known})\, P(P_{1,3}) \sum_{\mathit{fringe}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\, P(\mathit{fringe}) \sum_{\mathit{other}} P(\mathit{other})$
      $= \alpha'\, P(P_{1,3}) \sum_{\mathit{fringe}} P(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\, P(\mathit{fringe})$
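The last line is what makes the query tractable: only the fringe squares [2,2] and [3,1] can influence the observed breezes, so four fringe configurations replace the $2^{12}$ assignments to $\mathit{Other}$, whose priors sum to 1 and fold into $\alpha'$. A sketch under the same assumptions as the brute-force version above:

```python
from itertools import product

# Fringe squares for query [1,3] with known = {[1,1], [1,2], [2,1]}.
FRINGE = [(2, 2), (3, 1)]

def b_consistent(p13, p22, p31):
    # B_{1,1} = false: its neighbours [1,2], [2,1] are known pit-free -> always holds.
    # B_{1,2} = true needs a pit in [2,2] or [1,3]; B_{2,1} = true in [2,2] or [3,1].
    return (p22 or p13) and (p22 or p31)

posterior = {}
for q in (True, False):
    prior_q = 0.2 if q else 0.8          # P(P_{1,3})
    fringe_sum = sum(                    # sum over the 4 fringe configurations
        (0.2 if p22 else 0.8) * (0.2 if p31 else 0.8)
        for p22, p31 in product((True, False), repeat=2)
        if b_consistent(q, p22, p31)
    )
    posterior[q] = prior_q * fringe_sum

alpha = 1.0 / sum(posterior.values())
print({q: round(alpha * w, 4) for q, w in posterior.items()})
# prints {True: 0.3103, False: 0.6897}, matching the brute-force enumeration
```

Both versions agree on $P(P_{1,3} \mid \mathit{known}, b) \approx \langle 0.31, 0.69 \rangle$: square [1,3] contains a pit with probability about 0.31, but this version inspects 8 terms instead of 8192.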
