
Reasoning with Probabilities
Paolo Turrini, Department of Computing, Imperial College London
Introduction to Artificial Intelligence (2nd Part)


1. Back to joint distributions: combining evidence
Start with the joint distribution:
P(Cavity | toothache ∧ catch) = α ⟨0.108, 0.016⟩ ≈ ⟨0.871, 0.129⟩
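The arithmetic behind this normalisation step is easy to check directly. A minimal sketch, using the two joint-table entries quoted on the slide (0.108 with a cavity, 0.016 without; the variable names are my own):

```python
# Normalise the two unnormalised joint-table entries for
# P(Cavity | toothache ∧ catch); alpha is 1 over their sum.
p_cavity, p_no_cavity = 0.108, 0.016
alpha = 1 / (p_cavity + p_no_cavity)
print(alpha * p_cavity, alpha * p_no_cavity)  # ≈ 0.871, 0.129
```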

2. Back to joint distributions: combining evidence
Starting with the joint distribution doesn't scale up to a large number of variables, and absolute independence is very rare. Can we use Bayes' rule?

3. Back to joint distributions: combining evidence
P(Cavity | toothache ∧ catch) = α P(toothache ∧ catch | Cavity) P(Cavity)
Still not good: with n evidence variables there are 2^n possible combinations for which we would need to know the conditional probabilities.

4. Back to joint distributions: combining evidence
We can't use absolute independence: Toothache and Catch are not independent. If the probe catches in the tooth, then the tooth likely has a cavity, which means that toothache is likely too.
But they are independent given the presence or the absence of a cavity! Toothache depends on the state of the nerves in the tooth, while catch depends on the dentist's skill, to which toothache is irrelevant.

5. Conditional independence

6. Conditional independence
(1) P(catch | toothache, cavity) = P(catch | cavity)
The same independence holds if I haven't got a cavity:
(2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity)
Catch is conditionally independent of Toothache given Cavity.

7. Conditional independence
P(Catch | Toothache, Cavity) = P(Catch | Cavity)
Equivalent statements:
P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)

8. Conditional independence contd.
Write out the full joint distribution using the chain rule:
P(Toothache, Catch, Cavity)
= P(Toothache | Catch, Cavity) P(Catch, Cavity)
= P(Toothache | Catch, Cavity) P(Catch | Cavity) P(Cavity)
= P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)
I.e., 2 + 2 + 1 = 5 independent numbers (equations 1 and 2 remove 2).
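The factored form can be checked numerically. A minimal sketch: the five CPT values below are the standard dentist-example numbers implied by the joint entries quoted earlier (0.108, 0.016), so treat them as an assumption rather than as given on the slides:

```python
# Rebuild a joint entry from the three factors of
# P(T, C, Cav) = P(T | Cav) P(C | Cav) P(Cav).
# The five numbers below are assumed, back-derived from the joint table.
p_cavity = 0.2
p_toothache_given = {True: 0.6, False: 0.1}   # P(toothache | Cavity)
p_catch_given = {True: 0.9, False: 0.2}       # P(catch | Cavity)

def joint(toothache, catch, cavity):
    p_cav = p_cavity if cavity else 1 - p_cavity
    p_t = p_toothache_given[cavity] if toothache else 1 - p_toothache_given[cavity]
    p_c = p_catch_given[cavity] if catch else 1 - p_catch_given[cavity]
    return p_t * p_c * p_cav

print(joint(True, True, True))    # 0.6 * 0.9 * 0.2 = 0.108
print(joint(True, True, False))   # 0.1 * 0.2 * 0.8 = 0.016
```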

9. Conditional independence contd.
In most cases, the use of conditional independence reduces the size of the representation of the joint distribution from exponential in n to linear in n.
Conditional independence is our most basic and robust form of knowledge about uncertain environments.

10. Bayes' Rule and conditional independence
P(Cavity | toothache ∧ catch)
= α P(toothache ∧ catch | Cavity) P(Cavity)
= α P(toothache | Cavity) P(catch | Cavity) P(Cavity)
This is an example of a naive Bayes model:
P(Cause, Effect_1, ..., Effect_n) = P(Cause) ∏_i P(Effect_i | Cause)
The total number of parameters is linear in n.
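A minimal sketch of the naive Bayes query above, reusing the assumed CPT values from the previous sketch; it recovers the ⟨0.871, 0.129⟩ posterior from slide 1:

```python
# Naive Bayes query: P(Cavity | toothache ∧ catch)
#   ∝ P(Cavity) * P(toothache | Cavity) * P(catch | Cavity).
# CPT values are the same assumed dentist-example numbers as before.
p_cause = {True: 0.2, False: 0.8}                      # P(Cavity)
cpts = {"toothache": {True: 0.6, False: 0.1},          # P(effect | Cavity)
        "catch": {True: 0.9, False: 0.2}}

def posterior(observed_effects):
    scores = {c: p_cause[c] for c in (True, False)}
    for effect in observed_effects:                    # multiply in each effect
        for c in scores:
            scores[c] *= cpts[effect][c]
    alpha = 1 / sum(scores.values())                   # normalise
    return {c: alpha * s for c, s in scores.items()}

print(posterior(["toothache", "catch"]))  # {True: ≈0.871, False: ≈0.129}
```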

11. The Wumpus World
[Figure slides: the 4×4 Wumpus world grid explored step by step; the images have not survived extraction.]

12. Wumpus World
P_{i,j} = true iff [i, j] contains a pit
B_{i,j} = true iff [i, j] is breezy

13. Specifying the probability model
Include only B_{1,1}, B_{1,2}, B_{2,1} in the probability model!
The full joint distribution is P(P_{1,1}, ..., P_{4,4}, B_{1,1}, B_{1,2}, B_{2,1}).
Apply the product rule: P(B_{1,1}, B_{1,2}, B_{2,1} | P_{1,1}, ..., P_{4,4}) P(P_{1,1}, ..., P_{4,4})
(Do it this way to get P(Effect | Cause).)
First term: 1 if pits are adjacent to breezes, 0 otherwise.
Second term: pits are placed randomly, with probability 0.2 per square:
P(P_{1,1}, ..., P_{4,4}) = ∏_{i,j = 1,1}^{4,4} P(P_{i,j}) = 0.2^n × 0.8^{16−n} for n pits.
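The pit prior is easy to compute for any concrete configuration. A minimal sketch (representing a configuration as a set of pit squares is my own choice):

```python
# Prior probability of a pit configuration: each of the 16 squares
# independently contains a pit with probability 0.2.
PIT_PROB = 0.2

def prior(pits):              # pits: set of (i, j) squares containing a pit
    n = len(pits)
    return PIT_PROB ** n * (1 - PIT_PROB) ** (16 - n)

print(prior({(1, 3), (3, 1)}))  # 0.2^2 * 0.8^14 ≈ 0.00176
```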

14. Observations and query
We know the following facts:
b = ¬b_{1,1} ∧ b_{1,2} ∧ b_{2,1}
explored = ¬p_{1,1} ∧ ¬p_{1,2} ∧ ¬p_{2,1}
The query is P(P_{1,3} | explored, b)
Define Unexplored = the P_{i,j}s other than P_{1,3} and Explored

15. Complexity
For inference by enumeration, we have
P(P_{1,3} | explored, b) = α Σ_{unexplored} P(P_{1,3}, unexplored, explored, b)
There are 12 unknown squares, so the summation contains 2^12 = 4096 terms.
In general, the summation grows exponentially with the number of squares!
And now?
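A minimal sketch of inference by enumeration for this exact query, under the slides' assumptions (pit probability 0.2 per square, the three explored squares pit-free, breeze observations as in b); all function and variable names are my own:

```python
from itertools import product

# Inference by enumeration for P(P_{1,3} | explored, b) in the 4x4 world.
SQUARES = {(i, j) for i in range(1, 5) for j in range(1, 5)}
EXPLORED = {(1, 1), (1, 2), (2, 1)}             # known pit-free
QUERY = (1, 3)
UNKNOWN = sorted(SQUARES - EXPLORED - {QUERY})  # the 12 unknown squares
BREEZE_OBS = {(1, 1): False, (1, 2): True, (2, 1): True}
PIT_PROB = 0.2

def neighbours(i, j):
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [s for s in cand if s in SQUARES]

def consistent(pits):
    # A breeze is observed at s iff some neighbour of s contains a pit.
    return all(any(n in pits for n in neighbours(*s)) == obs
               for s, obs in BREEZE_OBS.items())

def prior(pits):
    return PIT_PROB ** len(pits) * (1 - PIT_PROB) ** (16 - len(pits))

weights = {}
for p13 in (True, False):
    total = 0.0
    for bits in product((True, False), repeat=len(UNKNOWN)):  # 2^12 = 4096 terms
        pits = {s for s, b in zip(UNKNOWN, bits) if b}
        if p13:
            pits.add(QUERY)
        if consistent(pits):
            total += prior(pits)
    weights[p13] = total

alpha = 1 / (weights[True] + weights[False])
print(alpha * weights[True])   # ≈ 0.31: a pit in [1,3] is fairly likely
```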

16. Using conditional independence
Basic insight: observations are conditionally independent of other hidden squares, given neighbouring hidden squares.
Define Unexplored = Fringe ∪ Other
P(b | P_{1,3}, Explored, Unexplored) = P(b | P_{1,3}, Explored, Fringe)
Manipulate the query into a form where we can use this!
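To see the payoff, here is the same query computed by summing only over the fringe squares (2,2) and (3,1), the unexplored squares adjacent to the observed breezes: 4 terms instead of 4096. A minimal sketch under the same assumptions as the enumeration above:

```python
from itertools import product

PIT_PROB = 0.2

# b is consistent with a pit assignment iff:
#   breeze at [1,2]  <->  pit in (2,2) or (1,3)
#   breeze at [2,1]  <->  pit in (2,2) or (3,1)
# (no breeze at [1,1] already holds, since [1,2] and [2,1] are pit-free)
def b_consistent(p13, p22, p31):
    return (p22 or p13) and (p22 or p31)

weights = {True: 0.0, False: 0.0}
for p13 in (True, False):
    for p22, p31 in product((True, False), repeat=2):   # only 4 fringe terms
        if b_consistent(p13, p22, p31):
            w = 1.0
            for pit in (p13, p22, p31):
                w *= PIT_PROB if pit else 1 - PIT_PROB
            weights[p13] += w

alpha = 1 / (weights[True] + weights[False])
print(alpha * weights[True], alpha * weights[False])    # ≈ 0.31, 0.69
```

Both sketches agree on roughly ⟨0.31, 0.69⟩, but the fringe version's cost does not depend on how many "other" hidden squares the world contains.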
