  1. 343H: Honors AI. Lecture 9: Bayes nets, part 1. 2/13/2014. Kristen Grauman, UT Austin. Slides courtesy of Dan Klein, UC Berkeley, unless otherwise noted.

  2. Outline
  - Last time: Probability
    - Random Variables
    - Joint and Marginal Distributions
    - Conditional Distribution
    - Product Rule, Chain Rule, Bayes' Rule
    - Inference
  - Today:
    - Independence
    - Intro to Bayesian Networks

  3. Quiz: Bayes' Rule
  - What is P(W | dry)?
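  A minimal sketch of the computation the quiz asks for. The slide's table of P(W) and P(dry | W) is not reproduced in this text, so the numbers below are hypothetical placeholders.

```python
# Bayes' rule: P(W | dry) = P(dry | W) P(W) / P(dry),
# where P(dry) = sum_w P(dry | w) P(w) is the normalizer.

prior = {"sun": 0.8, "rain": 0.2}        # P(W), hypothetical values
likelihood = {"sun": 0.9, "rain": 0.3}   # P(dry | W), hypothetical values

unnormalized = {w: likelihood[w] * prior[w] for w in prior}
p_dry = sum(unnormalized.values())
posterior = {w: p / p_dry for w, p in unnormalized.items()}

print(posterior)   # e.g. {'sun': 0.923..., 'rain': 0.076...}
```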

  4. Models and simplifications

  5. Probabilistic Models
  - Models describe how (a portion of) the world works
    - Models are always simplifications
    - May not account for every variable
    - May not account for all interactions between variables
    - "All models are wrong; but some are useful." – George E. P. Box
  - What do we do with probabilistic models?
    - We (or our agents) need to reason about unknown variables, given evidence
    - Example: explanation (diagnostic reasoning)
    - Example: prediction (causal reasoning)
    - Example: value of information

  6. Probabilistic Models
  - A probabilistic model is a joint distribution over a set of variables
  - Given a joint distribution, we can reason about unobserved variables given observations (evidence)
  - General form of a query: P(Q | e), where Q is the stuff you care about and e is the stuff you already know
  - This kind of posterior distribution is also called the belief function of an agent which uses this model
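  A small sketch of answering such a query from an explicit joint table: keep the entries consistent with the evidence, then normalize. The joint below reuses the temperature/weather numbers from the independence example two slides later; the query itself is just for illustration.

```python
joint = {                       # P(T, W), from the later independence example
    ("warm", "sun"): 0.4,
    ("warm", "rain"): 0.1,
    ("cold", "sun"): 0.2,
    ("cold", "rain"): 0.3,
}

def query_T_given_W(joint, w_evidence):
    """Return P(T | W = w_evidence): select consistent entries, then normalize."""
    selected = {t: p for (t, w), p in joint.items() if w == w_evidence}
    z = sum(selected.values())
    return {t: p / z for t, p in selected.items()}

print(query_T_given_W(joint, "sun"))   # {'warm': 0.666..., 'cold': 0.333...}
```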

  7. Independence
  - Two variables are independent if: P(x, y) = P(x) P(y) for all x, y
    - This says that their joint distribution factors into a product of two simpler distributions
  - Another form: P(x | y) = P(x) for all x, y
  - We write: X ⊥ Y
  - Independence is a simplifying modeling assumption
    - Empirical joint distributions: at best "close" to independent
    - What could we assume for {Weather, Traffic, Cavity, Toothache}?

  8. Example: Independence?
  P(T):            P(W):
    warm  0.5        sun   0.6
    cold  0.5        rain  0.4

  P(T, W) (observed joint):      P(T) P(W) (product of marginals):
    warm  sun   0.4                warm  sun   0.3
    warm  rain  0.1                warm  rain  0.2
    cold  sun   0.2                cold  sun   0.3
    cold  rain  0.3                cold  rain  0.2
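  A quick check of the question this slide poses, using the numbers above: recover the marginals from the observed joint and compare it entry by entry with the product P(T) P(W).

```python
from itertools import product

joint = {                       # observed P(T, W) from the slide
    ("warm", "sun"): 0.4,
    ("warm", "rain"): 0.1,
    ("cold", "sun"): 0.2,
    ("cold", "rain"): 0.3,
}

# Marginalize to get P(T) and P(W).
p_t, p_w = {}, {}
for (t, w), p in joint.items():
    p_t[t] = p_t.get(t, 0.0) + p
    p_w[w] = p_w.get(w, 0.0) + p

# T and W are independent iff P(t, w) == P(t) P(w) for every entry.
independent = all(
    abs(joint[(t, w)] - p_t[t] * p_w[w]) < 1e-9
    for t, w in product(p_t, p_w)
)
print(independent)   # False: e.g. P(warm, sun) = 0.4 but P(warm) P(sun) = 0.3
```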

  9. Example: Independence
  - N fair, independent coin flips X_1, ..., X_n, each with the same distribution:
      P(X_i):  h  0.5
               t  0.5

  10. Conditional Independence
  - P(Toothache, Cavity, Catch)
  - If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache:
    - P(+catch | +toothache, +cavity) = P(+catch | +cavity)
  - The same independence holds if I don't have a cavity:
    - P(+catch | +toothache, ¬cavity) = P(+catch | ¬cavity)
  - Catch is conditionally independent of Toothache given Cavity:
    - P(Catch | Toothache, Cavity) = P(Catch | Cavity)
  - Equivalent statements:
    - P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
    - P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)

  11. Conditional Independence
  - Unconditional (absolute) independence is very rare (why?)
  - Conditional independence is our most basic and robust form of knowledge about uncertain environments
  - X is conditionally independent of Y given Z iff: P(x, y | z) = P(x | z) P(y | z) for all x, y, z
  - Or, equivalently, iff: P(x | y, z) = P(x | z) for all x, y, z
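  A minimal numeric version of this definition; cond_independent is a hypothetical helper name, and it can be applied to any explicit joint table over three variables (for example, the Ghostbusters joint built a few slides below).

```python
def cond_independent(joint, eps=1e-9):
    """joint maps (x, y, z) -> P(x, y, z); test whether X is independent of Y given Z."""
    xs = {x for x, _, _ in joint}
    ys = {y for _, y, _ in joint}
    zs = {z for _, _, z in joint}
    for z in zs:
        p_z = sum(p for (_, _, zz), p in joint.items() if zz == z)
        for x in xs:
            p_xz = sum(p for (xx, _, zz), p in joint.items() if xx == x and zz == z)
            for y in ys:
                p_yz = sum(p for (_, yy, zz), p in joint.items() if yy == y and zz == z)
                # Definition: P(x, y | z) must equal P(x | z) * P(y | z)
                if abs(joint.get((x, y, z), 0.0) / p_z - (p_xz / p_z) * (p_yz / p_z)) > eps:
                    return False
    return True

# Tiny usage: a joint that factors as P(z) P(x | z) P(y | z) passes the test.
toy = {(x, y, z): 0.5 * 0.5 * 0.5 for x in "xX" for y in "yY" for z in "zZ"}
print(cond_independent(toy))   # True (here X, Y, Z are fully independent)
```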

  12. Conditional independence
  - What about this domain?
    - Traffic
    - Umbrella
    - Raining

  13. Conditional independence
  - What about this domain?
    - Fire
    - Smoke
    - Alarm

  14. Conditional independence and the Chain Rule
  - Trivial decomposition:
      P(Rain, Traffic, Umbrella) = P(Rain) P(Traffic | Rain) P(Umbrella | Rain, Traffic)
  - With the assumption of conditional independence (Umbrella ⊥ Traffic | Rain):
      P(Rain, Traffic, Umbrella) = P(Rain) P(Traffic | Rain) P(Umbrella | Rain)
  - Bayes' nets / graphical models help us express conditional independence assumptions

  15. Ghostbusters Chain Rule
  - Variables:
    - T: Top square is red
    - B: Bottom square is red
    - G: Ghost is in the top
  - Each sensor depends only on where the ghost is; that means the two sensors are conditionally independent, given the ghost position:
      P(T, B, G) = P(G) P(T | G) P(B | T, G) = P(G) P(T | G) P(B | G)
  - Givens:
      P(+g) = 0.5                             (P(G))
      P(+t | +g) = 0.8,  P(+t | ¬g) = 0.4     (P(T | G))
      P(+b | +g) = 0.4,  P(+b | ¬g) = 0.8     (P(B | G))
  - Resulting joint P(T, B, G):
       T    B    G    P(T, B, G)
      +t   +b   +g    0.16
      +t   +b   ¬g    0.16
      +t   ¬b   +g    0.24
      +t   ¬b   ¬g    0.04
      ¬t   +b   +g    0.04
      ¬t   +b   ¬g    0.24
      ¬t   ¬b   +g    0.06
      ¬t   ¬b   ¬g    0.06
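  A short sketch that rebuilds this joint from the slide's three local distributions via the factored chain rule, then spot-checks the conditional independence (the resulting table could equally be fed to the cond_independent sketch above).

```python
# Rebuild P(T, B, G) from P(g), P(t | g), P(b | g), then check that
# conditioning on B does not change the belief about T once G is known.

p_g = {"+g": 0.5, "¬g": 0.5}        # P(G)
p_t = {"+g": 0.8, "¬g": 0.4}        # P(+t | G)
p_b = {"+g": 0.4, "¬g": 0.8}        # P(+b | G)

joint = {}
for g in p_g:
    for t in ("+t", "¬t"):
        pt = p_t[g] if t == "+t" else 1 - p_t[g]
        for b in ("+b", "¬b"):
            pb = p_b[g] if b == "+b" else 1 - p_b[g]
            joint[(t, b, g)] = p_g[g] * pt * pb   # P(t, b, g) = P(g) P(t|g) P(b|g)

print(round(joint[("+t", "+b", "+g")], 3))        # 0.16, matching the table above

# P(+t | +b, +g) vs P(+t | +g): both should be 0.8
p_t_given_bg = joint[("+t", "+b", "+g")] / (joint[("+t", "+b", "+g")] + joint[("¬t", "+b", "+g")])
p_t_given_g = (joint[("+t", "+b", "+g")] + joint[("+t", "¬b", "+g")]) / 0.5   # P(+g) = 0.5
print(round(p_t_given_bg, 3), round(p_t_given_g, 3))   # 0.8 0.8
```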

  16. Bayes' Nets: Big Picture
  - Two problems with using full joint distribution tables as our probabilistic models:
    - Unless there are only a few variables, the joint is WAY too big to represent explicitly
    - Hard to learn (estimate) anything empirically about more than a few variables at a time
  - Bayes' nets: a technique for describing complex joint distributions (models) using simple, local distributions (conditional probabilities)
    - More properly called graphical models
    - We describe how variables locally interact
    - Local interactions chain together to give global, indirect interactions
    - For now, we'll be vague about how these interactions are specified

  17. Example Bayes' Net: Insurance

  18. Example Bayes' Net: Car

  19. Graphical Model Notation
  - Nodes: variables (with domains)
    - Can be assigned (observed) or unassigned (unobserved)
  - Arcs: interactions
    - Indicate "direct influence" between variables
    - Formally: encode conditional independence (more later)
    - For now: imagine that arrows mean direct causation (in general, they don't!)

  20. Example: Coin Flips
  - N independent coin flips: X_1, X_2, ..., X_n
  - No interactions between variables: absolute independence

  21. Example: Traffic
  - Variables:
    - R: It rains
    - T: There is traffic
  - Model 1: independence (no arc between R and T)
  - Model 2: rain causes traffic (arc R → T)
  - Why is an agent using model 2 better? (See the sketch below.)
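  A toy comparison of the two models. The slide gives no probability tables, so the numbers below are hypothetical, chosen only to show that model 2 can update its traffic belief when rain is observed while model 1 cannot.

```python
p_rain = 0.1                                   # P(+r), hypothetical

# Model 1: independence, P(R, T) = P(R) P(T)
p_traffic_model1 = 0.25                        # P(+t), hypothetical

# Model 2: rain causes traffic, P(R, T) = P(R) P(T | R)
p_traffic_given_rain = {"+r": 0.8, "-r": 0.2}  # P(+t | R), hypothetical

# Belief about traffic after observing rain:
print(p_traffic_model1)              # model 1: 0.25, unchanged by the evidence
print(p_traffic_given_rain["+r"])    # model 2: 0.8, rain raises the belief
```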

  22. Example: Traffic II
  - Let's build a causal graphical model
  - Variables:
    - T: Traffic
    - R: It rains
    - L: Low pressure
    - D: Roof drips
    - B: Ballgame
    - C: Cavity

  23. Example: Alarm Network
  - Variables:
    - B: Burglary
    - A: Alarm goes off
    - M: Mary calls
    - J: John calls
    - E: Earthquake!
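  The arrows of this network are only in the slide's figure; in the classic version of this example, Burglary and Earthquake are parents of Alarm, and Alarm is the parent of John calls and Mary calls. A minimal sketch of writing such a structure down is a map from each node to its parents.

```python
# Parent map for the classic alarm network structure (assumed here,
# since the figure is not reproduced in this text).
parents = {
    "B": [],            # Burglary
    "E": [],            # Earthquake
    "A": ["B", "E"],    # Alarm goes off
    "J": ["A"],         # John calls
    "M": ["A"],         # Mary calls
}

# The joint then factors into one local conditional per node:
# P(B, E, A, J, M) = P(B) P(E) P(A | B, E) P(J | A) P(M | A)
print(parents)
```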

  24. Example: Part-based object models [Fischler and Elschlager, 1973] Kristen Grauman

  25. Example: Part-based object models
  - One possible graphical model: fully connected constellation model (parts x_1 ... x_6, all pairwise connected)
    - e.g. Constellation Model
    - Parts fully connected
  - N image features, P parts in the model
  Slide credit: Rob Fergus

  26. Probabilistic constellation model (parts x_1 ... x_6)
  - P(image | object) = P(appearance, shape | object) = max_h p(appearance | h, object) p(shape | h, object) p(h | object)
  - h: assignment of features to parts
  - [Figure: candidate parts, part descriptors, part locations]
  Source: Lana Lazebnik

  27. Probabilistic constellation model
  - Same factorization as the previous slide; the figure illustrates the assignment h of image features to Part 1, Part 2, and Part 3
  Source: Lana Lazebnik

  28. Probabilistic constellation model
  - Same factorization again; further build-up of the Part 1 / Part 2 / Part 3 assignment figure
  Source: Lana Lazebnik

  29. Face model
  - Appearance: the 10 patches closest to the mean for each part
  - Recognition results
  Fergus et al. CVPR 2003

  30. Face model
  - Appearance: the 10 patches closest to the mean for each part
  - Recognition results on test images: size of circles indicates score of hypothesis
  Fergus et al. CVPR 2003. Kristen Grauman

  31. Motorbike model
  - Appearance: the 10 patches closest to the mean for each part
  - Recognition results
  Fergus et al. CVPR 2003. Kristen Grauman

  32. Spotted cat model
  - Appearance: the 10 patches closest to the mean for each part
  - Recognition results
  Fergus et al. CVPR 2003. Kristen Grauman

  33. Example: Part-based object models
  Two possible graphical models (parts x_1 ... x_6):
  - "Star" shape model
    - e.g. implicit shape model
    - Parts mutually independent
    - Recognition complexity: O(NP)
  - Fully connected constellation model
    - e.g. Constellation Model
    - Parts fully connected
    - Recognition complexity: O(N^P)
  - N image features, P parts in the model (see the worked count below)
  Slide credit: Rob Fergus
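  A quick worked count behind the two complexity claims, for illustrative values of N and P (these numbers are not from the slide).

```python
# With N candidate image features and P parts, the fully connected model
# scores all N**P joint assignments, while the star model scores N
# candidates per part independently, i.e. N * P.
N, P = 100, 6
print(N ** P)   # 1,000,000,000,000 hypotheses for the fully connected model
print(N * P)    # 600 for the star model
```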

  34. Star-shaped graphical model (parts x_1 ... x_6)
  - A discrete set of part appearances is used to index votes for object position
  - [Figure: part with displacement vectors; training image annotated with object localization info]
  B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision 2004

  35. Star-shaped graphical model (parts x_1 ... x_6)
  - A discrete set of part appearances is used to index votes for object position
  - [Figure: test image]
  B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision 2004

  36. Naïve Bayes model of parts (parts x_1 ... x_6)
  - p(c | w) ∝ p(c) p(w | c) = p(c) ∏_{n=1}^{N} p(w_n | c), and the decision is c* = argmax_c p(c | w)
    - c: object class decision
    - p(c): prior probability of the object classes
    - p(w | c): image likelihood given the class
    - w_1, ..., w_N: the N patches
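  A minimal sketch of this decision rule, treating an image as a bag of quantized patches ("visual words"). The class names, priors, and word likelihoods below are hypothetical, not values from the referenced papers.

```python
import math

priors = {"face": 0.5, "motorbike": 0.5}                 # p(c), hypothetical
word_likelihoods = {                                     # p(w_n | c), hypothetical
    "face":      {"eye": 0.5, "wheel": 0.1, "fur": 0.4},
    "motorbike": {"eye": 0.1, "wheel": 0.7, "fur": 0.2},
}

def classify(patches):
    """Return argmax_c of log p(c) + sum_n log p(w_n | c)."""
    scores = {
        c: math.log(priors[c]) + sum(math.log(word_likelihoods[c][w]) for w in patches)
        for c in priors
    }
    return max(scores, key=scores.get)

print(classify(["wheel", "wheel", "eye"]))   # 'motorbike'
```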
