
Dealing with Uncertainty
Paolo Turrini, Department of Computing, Imperial College London
Introduction to Artificial Intelligence (2nd Part)


Probability

- Such probabilities are not claims of a "probabilistic tendency" in the current situation (but they might be learned from past experience of similar situations).
- Probabilities of propositions change with new evidence: e.g., P(S25 | no reported accidents, 5 a.m.) = 0.8.
- This is analogous to logical entailment status KB ⊨ α, not truth.

If you snooze you lose?

Suppose I believe the following (reading Sn as "snooze for n more minutes"):

- P(S0 gets me there on time | ...) = 0.99
- P(S1 gets me there on time | ...) = 0.90
- P(S10 gets me there on time | ...) = 0.6
- P(S25 gets me there on time | ...) = 0.1

Which action should I choose? IT DEPENDS on my preferences, e.g., missing class vs. sleeping.
S0: ages in the Huxley building, therefore feeling miserable.

Chances and Utility

- Utility theory is used to represent and infer preferences.
- Decision theory = utility theory + probability theory.
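
To make that equation concrete, here is a minimal Python sketch that combines the snooze probabilities from the slide above with utility numbers I have invented purely for illustration; with different utilities the best action changes, which is exactly the point.

    # A toy decision: probabilities are from the slides, utilities are made up.
    p_on_time = {"S0": 0.99, "S1": 0.90, "S10": 0.6, "S25": 0.1}
    u_on_time = {"S0": 10, "S1": 30, "S10": 60, "S25": 80}  # more snooze, more sleep
    U_LATE = -100                                           # utility of missing class

    def expected_utility(action):
        p = p_on_time[action]
        return p * u_on_time[action] + (1 - p) * U_LATE

    best = max(p_on_time, key=expected_utility)
    print(best)  # "S1" with these utilities; change them and the choice changes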

Probability basics

- Begin with a set Ω, the sample space; e.g., the 6 possible rolls of a die.
- ω ∈ Ω is a sample point / possible world / atomic event.

- A probability space or probability model is a sample space Ω with an assignment P(ω) for every ω ∈ Ω such that:
  0 ≤ P(ω) ≤ 1
  Σ_ω P(ω) = 1
- e.g., P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6.
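
As a minimal sketch of this definition, the fair-die example can be coded directly (exact fractions avoid rounding issues):

    # A probability space: sample space Ω plus an assignment P(ω) satisfying both axioms.
    from fractions import Fraction

    omega = range(1, 7)                          # the 6 possible rolls of a die
    P = {w: Fraction(1, 6) for w in omega}       # P(ω) for every ω ∈ Ω

    assert all(0 <= p <= 1 for p in P.values())  # 0 ≤ P(ω) ≤ 1
    assert sum(P.values()) == 1                  # Σ_ω P(ω) = 1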

Events

- An event A is any subset of Ω.
- P(A) = Σ_{ω ∈ A} P(ω)
- e.g., P(die roll < 4) = P(1) + P(2) + P(3) = 1/6 + 1/6 + 1/6 = 1/2.
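
Continuing the die sketch above, an event's probability is just the sum over its sample points:

    # P(A) = Σ_{ω ∈ A} P(ω) for any subset A of Ω (omega and P as defined above).
    def prob(A):
        return sum(P[w] for w in A)

    print(prob({w for w in omega if w < 4}))   # P(die roll < 4) = 1/2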

Random variables

- A random variable is a function from sample points to some range, e.g., ℝ, [0, 1], {true, false}, ...
- e.g., Odd(1) = true.
- P induces a probability distribution for any random variable X: P(X = x_i) = Σ_{ω : X(ω) = x_i} P(ω)
- e.g., P(Odd = true) = P(1) + P(3) + P(5) = 1/6 + 1/6 + 1/6 = 1/2.
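
In the same sketch, a random variable is literally a function on the sample points, and the induced distribution is computed by summing P over each preimage:

    # P(X = x) = Σ_{ω : X(ω) = x} P(ω), illustrated with Odd on the die space.
    def odd(w):                      # Odd : Ω → {True, False}
        return w % 2 == 1

    def distribution(X):
        dist = {}
        for w in omega:
            dist[X(w)] = dist.get(X(w), 0) + P[w]
        return dist

    print(distribution(odd))         # {True: Fraction(1, 2), False: Fraction(1, 2)}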

Propositions

- A proposition can be seen as the event (set of sample points) where the proposition is true.
- Given Boolean random variables A and B:
  event a = set of sample points where A(ω) = true
  event ¬a = set of sample points where A(ω) = false
  event a ∧ b = set of points where A(ω) = true and B(ω) = true
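
On the die space these events are plain set operations. In the sketch below, A = "the roll is odd" and B = "the roll is at least 4" are example variables of my own choosing, not the slides':

    # Propositions as events: subsets of Ω where the proposition holds.
    a = {w for w in omega if w % 2 == 1}   # event a:  A(ω) = true
    not_a = set(omega) - a                 # event ¬a: A(ω) = false
    b = {w for w in omega if w >= 4}       # a second Boolean variable
    a_and_b = a & b                        # event a ∧ b
    a_or_b = a | b                         # event a ∨ b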

Events and Propositional Logic

- Proposition = disjunction of the atomic events in which it is true, e.g.,
  (a ∨ b) ≡ (¬a ∧ b) ∨ (a ∧ ¬b) ∨ (a ∧ b)
- Hence:
  P(a ∨ b) = P(¬a ∧ b) + P(a ∧ ¬b) + P(a ∧ b)
           = P(¬a ∧ b) + P(a ∧ ¬b) + P(a ∧ b) + P(a ∧ b) − P(a ∧ b)
           = P(a) + P(b) − P(a ∧ b)
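
The final identity can be checked numerically on the events a and b defined in the previous sketch:

    # Inclusion-exclusion: P(a ∨ b) = P(a) + P(b) − P(a ∧ b).
    assert prob(a | b) == prob(a) + prob(b) - prob(a & b)   # both sides equal 5/6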

Probabilities are logical

Theorem (De Finetti, 1931). An agent who bets according to "illogical" probabilities, i.e., ones violating the axioms above, can be tricked into a combination of bets that loses money regardless of the outcome.

Syntax for propositions

- Propositional: e.g., Cavity (do I have a cavity?); Cavity = true is a proposition, also written cavity.
- Discrete: e.g., Weather is one of ⟨sunny, rain, cloudy, snow⟩; Weather = rain is a proposition. Important: the values must be exhaustive and mutually exclusive.
- Continuous: e.g., Temp = 21.6; Temp < 22.0.

Probabilities

- Unconditional probabilities
- Conditional probabilities

Prior probability

- Prior or unconditional probabilities of propositions, e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72, correspond to belief prior to the arrival of any (new) evidence.
- A probability distribution gives values for all possible assignments: P(Weather) = ⟨0.72, 0.1, 0.08, 0.1⟩ (normalized, i.e., sums to 1).
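
As data, the same prior is just a normalized table (values copied from the slide):

    # P(Weather) = ⟨0.72, 0.1, 0.08, 0.1⟩ over ⟨sunny, rain, cloudy, snow⟩.
    weather_prior = {"sunny": 0.72, "rain": 0.1, "cloudy": 0.08, "snow": 0.1}
    assert abs(sum(weather_prior.values()) - 1.0) < 1e-9    # normalized, sums to 1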

Prior probability (cont.)

- Joint probability distribution: the probability of every sample point.
- P(Weather, Cavity) is a 4 × 2 matrix of values:

                     Weather = sunny   rain   cloudy   snow
      Cavity = true       0.144        0.02   0.016    0.02
      Cavity = false      0.576        0.08   0.064    0.08

- Every question about a domain can be answered by the joint distribution, because every event is a sum of sample points.
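
A sketch of that last claim: encode the joint table above as a dictionary, and any query becomes a sum of entries.

    # The joint P(Weather, Cavity) from the table; every event is a sum of points.
    joint = {
        ("sunny", True): 0.144, ("rain", True): 0.02,
        ("cloudy", True): 0.016, ("snow", True): 0.02,
        ("sunny", False): 0.576, ("rain", False): 0.08,
        ("cloudy", False): 0.064, ("snow", False): 0.08,
    }

    # e.g., the marginal P(Cavity = true), summing Weather out:
    p_cavity = sum(p for (weather, cavity), p in joint.items() if cavity)
    print(round(p_cavity, 3))   # 0.2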

Conditional probability

- Conditional or posterior probabilities, e.g., P(cavity | toothache) = 0.8, i.e., given that toothache is all I know.
- NOT "if toothache then 80% chance of cavity".
- (Notation for conditional distributions: P(Cavity | Toothache) = a 2-element vector of 2-element vectors.)
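
The slide states values rather than a definition; using the standard definition P(a | b) = P(a ∧ b) / P(b), which the slides do not spell out here, a conditional can be read off the joint table from the previous sketch:

    # P(cavity | Weather = sunny) = P(cavity ∧ sunny) / P(sunny), from `joint` above.
    p_sunny = joint[("sunny", True)] + joint[("sunny", False)]   # P(sunny) = 0.72
    print(round(joint[("sunny", True)] / p_sunny, 3))            # 0.2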

Conditional probability (cont.)

- If we know more, e.g., cavity is also given, then we have P(cavity | toothache, cavity) = 1.
- Note: the less specific belief remains valid after more evidence arrives, but it is not always useful.
- New evidence may be irrelevant, allowing simplification: e.g., P(cavity | toothache) = P(cavity | toothache, Cristiano Ronaldo scores) = 0.8.
- This kind of inference is crucial!
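
The Weather/Cavity joint above happens to illustrate exactly this kind of irrelevance: conditioning on any weather value leaves the belief in cavity unchanged.

    # Irrelevant evidence: P(cavity | Weather = w) = P(cavity) = 0.2 for every w.
    for w in ("sunny", "rain", "cloudy", "snow"):
        p_w = joint[(w, True)] + joint[(w, False)]
        print(w, round(joint[(w, True)] / p_w, 3))   # 0.2 each time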
