  1. Utrecht University, INFOB2KI 2019-2020, The Netherlands. ARTIFICIAL INTELLIGENCE - Uncertainty: probabilistic reasoning. Lecturer: Silja Renooij. These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html

  2. Outline
  • Reasoning under uncertainty
  • Probabilities
  • Bayes’ rule & Bayesian Networks
  • Bayesian skill rating

  3. Uncertainty
  Let action A_t = “leave for airport t minutes before flight”. Will A_t get me there on time?
  Problems:
  1. partial observability (road state, other drivers' plans, etc.)
  2. noisy sensors (traffic reports)
  3. uncertainty in action outcomes (flat tire, etc.)
  4. immense complexity of modeling and predicting traffic
  Hence a purely logical approach either
  1. risks falsehood: “A_125 will get me there on time”, or
  2. leads to conclusions that are too weak for decision making: “A_125 will get me there on time if there's no accident on the bridge and it doesn't rain and my tires remain intact, etc. etc.”
  (A_1440 might reasonably be said to get me there on time, but I'd have to stay overnight in the airport …)

  4. How do we deal with uncertainty?
  • Implicit:
    – Ignore what you are uncertain of, when you can
    – Build procedures that are robust to uncertainty
  • Explicit:
    – Build a model of the world that describes uncertainty about its state, dynamics, and observations
    – Reason about the effect of actions given the model

  5. Methods for handling uncertainty
  • Default or nonmonotonic logic:
    – e.g. assume my car does not have a flat tire
    – e.g. assume A_125 works unless contradicted by evidence
    Issues: What assumptions are reasonable? How to handle contradiction?
  • Rules with fudge factors:
    – e.g. A_125 |→_0.3 get there on time
    – e.g. Sprinkler |→_0.99 WetGrass; WetGrass |→_0.7 Rain
    Issues: problems with combination, e.g. does Sprinkler imply Rain?
  • Fuzzy logic:
    – e.g. The road is “busy”
    – e.g. At the airport 120 minutes before departure is “more than in time”
    – e.g. IF road(busy) AND A_125 THEN at_airport(just-in-time)
  • Probability: model the agent's degree of belief, given the available evidence
    – e.g. A_25 will get me there on time with probability 0.04

  6. Probability
  • A well-known and well-understood framework for uncertainty
  • Probabilistic assertions summarize effects of
    – laziness: failure to enumerate exceptions, qualifications, etc.
    – ignorance: lack of relevant facts, initial conditions, etc.
    – …
  • Clear semantics (mathematically correct)
  • Provides principled answers for:
    – Combining evidence
    – Predictive & diagnostic reasoning
    – Incorporation of new evidence
  • Intuitive (at some level) to human experts
  • Can be assessed from data

  7. Axioms of probability
  For any propositions A, B:
  – 0 ≤ P(A) ≤ 1
  – P(True) = 1 and P(False) = 0
  – P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
  Note:
    P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A)
    P(True) = P(A) + P(¬A) − P(False)
    1 = P(A) + P(¬A)
  So: P(A) = 1 − P(¬A)

  8. Frequency Interpretation
  • Draw a ball from an urn containing n balls of the same size, r red and s yellow.
  • The probability that the proposition A = “the ball is red” is true corresponds to the relative frequency with which we expect to draw a red ball
  • P(A) = ?
  • I.e. to the frequentist, probability lies objectively in the external world.
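To make the frequentist reading concrete, here is a small simulation sketch (not part of the original slides; the urn composition is a made-up example). The relative frequency of red draws approaches r/n as the number of draws grows.

```python
import random

# Hypothetical urn: r = 3 red and s = 7 yellow balls (made-up numbers).
r, s = 3, 7
urn = ["red"] * r + ["yellow"] * s

# Estimate P(A) for A = "the ball is red" by repeated draws with replacement.
trials = 100_000
red_draws = sum(1 for _ in range(trials) if random.choice(urn) == "red")

print(red_draws / trials)   # relative frequency, close to r / (r + s) = 0.3
```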

  9. Subjective Interpretation
  There are many situations in which there is no objective frequency interpretation:
  – On a windy day, just before paragliding from the top of El Capitan, you say “there is a probability of 0.05 that I am going to die”
  – You have worked hard on your AI class and you believe that the probability that you will pass is 0.9
  • Bayesian viewpoint:
    – probability is "degree-of-belief", or "degree-of-uncertainty"
    – To the Bayesian, probability lies subjectively in the mind, and can be different for people with different information
    – e.g., the probability that Wayne will get rich from selling his kidney.

  10. Bayesian probability updating
  1. You have a prior (or unconditional) assessment of the probability of an event
  2. You subsequently receive additional information or evidence
  3. Your posterior assessment is now your previous assessment, updated with this new info
  [Images from Moserware.com]

  11. Random variables
  • A proposition that takes the value True with probability p and False with probability 1−p is a random variable with distribution <p, 1−p>
  • If an urn contains balls having 3 possible colors – red, yellow, and blue – the color of a ball picked at random from the bag is a random variable with 3 possible values
  • The (probability) distribution of a random variable X with n values x_1, x_2, …, x_n is <p_1, p_2, …, p_n>, with P(X = x_i) = p_i and Σ_i p_i = 1
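A minimal way to represent such a distribution in code, as an illustrative sketch of my own (the colour probabilities are invented numbers):

```python
import math

# Distribution of the ball-colour variable from the urn example (illustrative numbers).
color_dist = {"red": 0.3, "yellow": 0.5, "blue": 0.2}

# The p_i must be non-negative and sum to 1.
assert all(p >= 0 for p in color_dist.values())
assert math.isclose(sum(color_dist.values()), 1.0)
```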

  12. Joint Distribution
  • Consider k random variables X_1, …, X_k
  • Joint distribution on these variables: a table where each entry gives the probability of one combination of values of X_1, …, X_k
  • Shorthand notation for propositions: cavity stands for Cavity = yes, ¬cavity for Cavity = no
  • Example: two-valued variables Cavity and Toothache
      P(C ∧ T)  | toothache | ¬toothache
      cavity    |   0.04    |   0.06
      ¬cavity   |   0.01    |   0.89
    (bottom row: P(¬cavity ∧ toothache) and P(¬cavity ∧ ¬toothache))
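One straightforward encoding of this 2×2 joint distribution, sketched here for illustration (the dictionary layout is my own choice, not the course's):

```python
# Joint distribution P(Cavity, Toothache) from the slide, keyed by (cavity, toothache).
joint = {
    (True,  True):  0.04,   # P(cavity ∧ toothache)
    (True,  False): 0.06,   # P(cavity ∧ ¬toothache)
    (False, True):  0.01,   # P(¬cavity ∧ toothache)
    (False, False): 0.89,   # P(¬cavity ∧ ¬toothache)
}

assert abs(sum(joint.values()) - 1.0) < 1e-9   # entries sum to 1
```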

  13. Joint Distribution Says It All
      P(C ∧ T)  | toothache | ¬toothache
      cavity    |   0.04    |   0.06
      ¬cavity   |   0.01    |   0.89
  • P(toothache) = P((toothache ∧ cavity) ∨ (toothache ∧ ¬cavity))
                 = P(toothache ∧ cavity) + P(toothache ∧ ¬cavity)   (marginalisation)
                 = 0.04 + 0.01 = 0.05
    ! use P(a ∨ b) = P(a) + P(b) − P(a ∧ b), or P(a) = P(a ∧ b) + P(a ∧ ¬b)
  • P(toothache ∨ cavity) = P((toothache ∧ cavity) ∨ (toothache ∧ ¬cavity) ∨ (¬toothache ∧ cavity))
                          = 0.04 + 0.01 + 0.06 = 0.11
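These computations can be reproduced mechanically from the joint table; the snippet below is an illustrative sketch (variable names are my own) that simply sums the relevant entries:

```python
# Joint distribution P(Cavity, Toothache), keyed by (cavity, toothache).
joint = {
    (True,  True):  0.04, (True,  False): 0.06,
    (False, True):  0.01, (False, False): 0.89,
}

# Marginalisation: P(toothache) = P(toothache ∧ cavity) + P(toothache ∧ ¬cavity)
p_toothache = sum(p for (cav, tooth), p in joint.items() if tooth)
print(p_toothache)                      # ≈ 0.05

# P(toothache ∨ cavity): sum every entry in which at least one proposition holds.
p_toothache_or_cavity = sum(p for (cav, tooth), p in joint.items() if cav or tooth)
print(p_toothache_or_cavity)            # ≈ 0.11
```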

  14. Conditional Probability
  • Definition: P(A | B) = P(A ∧ B) / P(B)   (assumes P(B) > 0!)
  • Read P(A | B) as: probability of A given B, i.e. the ‘context’ B is assumed to be known with certainty
  • Can also write this as P(A ∧ B) = P(A | B) P(B), which is called the product rule
  Note: P(A ∧ B) is often written as P(A, B)

  15. Example
      P(C ∧ T)  | toothache | ¬toothache
      cavity    |   0.04    |   0.06
      ¬cavity   |   0.01    |   0.89
  • P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache)
    – P(cavity ∧ toothache) = ?
    – P(toothache) = ?
    – P(cavity | toothache) = 0.04 / 0.05 = 0.8
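Applied in code to the same table, the definition gives the value on the slide (again an illustrative sketch of my own):

```python
# Joint distribution P(Cavity, Toothache), keyed by (cavity, toothache).
joint = {
    (True,  True):  0.04, (True,  False): 0.06,
    (False, True):  0.01, (False, False): 0.89,
}

p_cavity_and_toothache = joint[(True, True)]                          # 0.04
p_toothache = sum(p for (cav, tooth), p in joint.items() if tooth)    # ≈ 0.05

# Definition: P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache)
print(p_cavity_and_toothache / p_toothache)   # ≈ 0.8
```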

  16. Normalization
      P(C ∧ T)  | toothache | ¬toothache
      cavity    |   0.04    |   0.06
      ¬cavity   |   0.01    |   0.89
  The denominator can be viewed as a normalization constant α:
    P(cavity | toothache) = α P(cavity, toothache) = α · 0.04
    P(¬cavity | toothache) = α P(¬cavity, toothache) = α · 0.01
    1 = α · 0.04 + α · 0.01 = α · 0.05, so α = 20
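The same answer follows from normalizing the two unnormalised values; a small sketch:

```python
# Unnormalised values P(cavity, toothache) and P(¬cavity, toothache) from the joint table.
unnormalised = {True: 0.04, False: 0.01}

alpha = 1.0 / sum(unnormalised.values())            # α = 1 / 0.05 = 20
posterior = {cav: alpha * p for cav, p in unnormalised.items()}

print(alpha)       # ≈ 20
print(posterior)   # ≈ {True: 0.8, False: 0.2}, i.e. P(cavity | toothache), P(¬cavity | toothache)
```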

  17. Bayes’ Rule
  From the product rule: P(A ∧ B) = P(A | B) P(B) = P(B | A) P(A)
  • Bayes’ rule: P(B | A) = P(A | B) P(B) / P(A)
  Useful for assessing a diagnostic probability from a causal probability:
    – P(Cause | Effect) = P(Effect | Cause) P(Cause) / P(Effect)
    – E.g., let M be meningitis, S be stiff neck:
      P(m | s) = P(s | m) P(m) / P(s) = 0.8 × 0.0001 / 0.1 = 0.0008
    – Note: the posterior probability of meningitis is still very small!
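The meningitis computation, written out as a quick sketch using the numbers on the slide:

```python
# Bayes' rule: P(m | s) = P(s | m) P(m) / P(s)
p_s_given_m = 0.8      # P(stiff neck | meningitis)
p_m = 0.0001           # prior P(meningitis)
p_s = 0.1              # P(stiff neck)

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)     # 0.0008 — the posterior is still very small
```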

  18. Generalizations
  • Chain rule:
    P(A ∧ B ∧ C) = P(A ∧ B | C) P(C) = P(A | B ∧ C) P(B | C) P(C)
    P(A ∧ B ∧ C) = P(A ∧ B | C) P(C) = P(B | A ∧ C) P(A | C) P(C)
  • Bayes’ rule in a context C:
    P(B | A, C) = P(A | B, C) P(B | C) / P(A | C)
  • Marginalisation rule: P(X) = Σ_y P(X ∧ Y = y)
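As a numerical check on the chain rule and the marginalisation rule (a sketch with an invented joint distribution, not from the slides):

```python
# A small made-up joint P(A, B, C) over binary variables, keyed by (a, b, c).
joint = {
    (True, True, True): 0.10,   (True, True, False): 0.05,
    (True, False, True): 0.15,  (True, False, False): 0.10,
    (False, True, True): 0.05,  (False, True, False): 0.20,
    (False, False, True): 0.05, (False, False, False): 0.30,
}

def marginal(pred):
    """Marginalisation rule: sum the joint over all entries satisfying pred."""
    return sum(p for abc, p in joint.items() if pred(*abc))

# Chain rule: P(a ∧ b ∧ c) = P(a | b ∧ c) P(b | c) P(c)
p_abc = joint[(True, True, True)]
p_a_given_bc = p_abc / marginal(lambda a, b, c: b and c)
p_b_given_c = marginal(lambda a, b, c: b and c) / marginal(lambda a, b, c: c)
p_c = marginal(lambda a, b, c: c)

assert abs(p_a_given_bc * p_b_given_c * p_c - p_abc) < 1e-9
```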

  19. Representing Probability
  • Naïve representations of probability run into problems.
  • Example: patients in hospital are described by several attributes (variables):
    – Background: age, gender, history of diseases, …
    – Symptoms: fever, blood pressure, headache, …
    – Diseases: pneumonia, heart attack, …
  • A probability distribution needs to assign a number to each combination of values of these variables
    – 20 binary variables already require 2^20 ≈ 10^6 numbers
    – Real examples usually involve hundreds of attributes

  20. Practical Representation
  • Key idea: exploit regularities
  • Here we focus on exploiting (conditional) independence properties

  21. Independent Random Variables
  • Two variables X and Y are independent if
    – P(X = x | Y = y) = P(X = x) for all values x, y
    – That is, learning the value of Y does not change the prediction of X
  • If X and Y are independent then
    – P(X, Y) = P(X | Y) P(Y) = P(X) P(Y)
  • In general, if X_1, …, X_n are independent, then
    – P(X_1, …, X_n) = P(X_1) ... P(X_n)
    – Requires only O(n) parameters
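If independence holds, the full joint can be rebuilt from the marginals alone; a short sketch with made-up distributions (the variables and numbers are my own example):

```python
from itertools import product

# Marginals of two variables assumed independent (illustrative numbers).
p_x = {"sunny": 0.7, "rain": 0.3}
p_y = {"heads": 0.5, "tails": 0.5}

# Under independence, P(X = x, Y = y) = P(X = x) P(Y = y).
joint = {(x, y): p_x[x] * p_y[y] for x, y in product(p_x, p_y)}
print(joint[("sunny", "heads")])   # 0.35
```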

  22. Independence: example
  P(Toothache, Catch, Cavity, Weather) = P(Toothache, Catch, Cavity) P(Weather)
  • 32 entries reduced to 12 (4 for Weather and 8 for Toothache & Catch & Cavity)
  • Absolute independence: powerful but rare
  • Dentistry is a large field with hundreds of variables, none of which are independent. What to do?
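The entry counts can be verified directly (a trivial sketch; arities as stated on the slide, with Weather taking 4 values and the other three variables binary):

```python
# Full joint over Toothache, Catch, Cavity (binary) and Weather (4 values):
full_joint_entries = 2 * 2 * 2 * 4
# After factoring: P(Toothache, Catch, Cavity) and P(Weather) stored separately.
factored_entries = 2 * 2 * 2 + 4

print(full_joint_entries, factored_entries)   # 32 12
```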

  23. Conditional Independence
  • A more suitable notion is that of conditional independence
  • Two variables X and Y are conditionally independent given Z if
    – P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all values x, y, z
    – That is, learning the value of Y does not change the prediction of X once we know the value of Z
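A direct way to test this definition against a full joint table is to compare P(x | y, z) with P(x | z) for every value combination; the function below is an illustrative sketch (the function name and the example joint are my own, not from the course):

```python
from itertools import product

def conditionally_independent(joint, tol=1e-9):
    """Check whether X and Y are conditionally independent given Z.

    `joint` maps (x, y, z) tuples to probabilities; we test
    P(x | y, z) == P(x | z) for all combinations with positive P(y, z).
    """
    xs = {x for x, _, _ in joint}
    ys = {y for _, y, _ in joint}
    zs = {z for _, _, z in joint}
    for x, y, z in product(xs, ys, zs):
        p_yz = sum(p for (xi, yi, zi), p in joint.items() if yi == y and zi == z)
        p_z = sum(p for (xi, yi, zi), p in joint.items() if zi == z)
        if p_yz == 0 or p_z == 0:
            continue
        p_x_given_yz = joint.get((x, y, z), 0.0) / p_yz
        p_xz = sum(p for (xi, yi, zi), p in joint.items() if xi == x and zi == z)
        if abs(p_x_given_yz - p_xz / p_z) > tol:
            return False
    return True

# Example: a joint built so that X ⟂ Y | Z holds, via P(x, y, z) = P(x | z) P(y | z) P(z).
p_z = {0: 0.6, 1: 0.4}
p_x_given_z = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.7, 1: 0.3}}
p_y_given_z = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}
joint = {(x, y, z): p_x_given_z[z][x] * p_y_given_z[z][y] * p_z[z]
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}

print(conditionally_independent(joint))   # True
```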
