
Probabilistic Graphical Models (David Sontag, New York University)



  1. Probabilistic Graphical Models. David Sontag, New York University. Lecture 1, January 26, 2012.

  2. Graphical models are one of the most exciting advances in machine learning (AI, signal processing, coding, control, ...) of the last few decades.

  3. How can we gain global insight based on local observations?

  4. Key idea: (1) represent the world as a collection of random variables $X_1, \ldots, X_n$ with joint distribution $p(X_1, \ldots, X_n)$; (2) learn the distribution from data; (3) perform “inference”, i.e., compute conditional distributions $p(X_i \mid X_1 = x_1, \ldots, X_m = x_m)$.

  5. Reasoning under uncertainty. As humans, we are continuously making predictions under uncertainty. Classical AI and ML research ignored this phenomenon. Many of the most recent advances in technology are possible because of this new, probabilistic approach.

  6. Applications: Deep question answering.

  7. Applications: Machine translation.

  8. Applications: Speech recognition.

  9. Applications: Stereo vision (input: two images; output: a disparity map).

  10. Key challenges: (1) Represent the world as a collection of random variables $X_1, \ldots, X_n$ with joint distribution $p(X_1, \ldots, X_n)$. How does one compactly describe this joint distribution? Directed graphical models (Bayesian networks) and undirected graphical models (Markov random fields, factor graphs). (2) Learn the distribution from data. Maximum likelihood estimation; other estimation methods? How much data do we need? How much computation does it take? (3) Perform “inference”, i.e., compute conditional distributions $p(X_i \mid X_1 = x_1, \ldots, X_m = x_m)$.

  11. Syllabus overview. We will study representation, inference & learning: first in the simplest case (only discrete variables, fully observed models, exact inference & learning), then generalize (continuous variables, partially observed data during learning, i.e., hidden variables, and approximate inference & learning). We will learn about algorithms, theory & applications.

  12. Logistics. Class webpage: http://cs.nyu.edu/~dsontag/courses/pgm12/ (sign up for the mailing list; draft slides are posted before each lecture). Book: Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman, MIT Press (2009). Office hours: Tuesday 5-6pm and by appointment, 715 Broadway, 12th floor, Room 1204. Grading: problem sets (70%) + final exam (30%); the grader is Chris Alberti (chris.alberti@gmail.com). There will be 6-7 assignments (every 2 weeks), covering both theory and programming. The first homework is out today, due Feb. 9 at 5pm. See the collaboration policy on the class webpage.

  13. Quick review of probability (reference: Chapter 2 and Appendix A). What are the possible outcomes? Coin toss: $\Omega = \{\text{heads}, \text{tails}\}$. Die: $\Omega = \{1, 2, 3, 4, 5, 6\}$. An event is a subset of outcomes $S \subseteq \Omega$; examples for the die: $\{1, 2, 3\}$, $\{2, 4, 6\}$, etc. We measure each event using a probability function.

  14. Probability function. Assign a non-negative weight $p(\omega)$ to each outcome such that $\sum_{\omega \in \Omega} p(\omega) = 1$. Coin toss: $p(\text{heads}) + p(\text{tails}) = 1$. Die: $p(1) + p(2) + p(3) + p(4) + p(5) + p(6) = 1$. Probability of an event $S \subseteq \Omega$: $p(S) = \sum_{\omega \in S} p(\omega)$. Example for the die: $p(\{2, 4, 6\}) = p(2) + p(4) + p(6)$. Claim: $p(S_1 \cup S_2) = p(S_1) + p(S_2) - p(S_1 \cap S_2)$.
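
  A minimal Python sketch (not part of the lecture) makes this concrete; the fair-die weights, the `prob` helper, and the example events below are illustrative assumptions.

```python
# Minimal sketch: a probability function for a fair die.
from fractions import Fraction

# Non-negative weight p(ω) for each outcome, summing to 1.
p = {omega: Fraction(1, 6) for omega in range(1, 7)}
assert sum(p.values()) == 1

def prob(S):
    """Probability of an event S ⊆ Ω: p(S) = Σ_{ω ∈ S} p(ω)."""
    return sum(p[omega] for omega in S)

S1, S2 = {1, 2, 3}, {2, 4, 6}
print(prob(S2))  # p({2, 4, 6}) = 1/2
# Claim: p(S1 ∪ S2) = p(S1) + p(S2) - p(S1 ∩ S2)
assert prob(S1 | S2) == prob(S1) + prob(S2) - prob(S1 & S2)
```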

  15. Independence of events. Two events $S_1, S_2$ are independent if $p(S_1 \cap S_2) = p(S_1)\, p(S_2)$.

  16. Conditional probability. Let $S_1, S_2$ be events with $p(S_2) > 0$. Define $p(S_1 \mid S_2) = \frac{p(S_1 \cap S_2)}{p(S_2)}$. Claim 1: $\sum_{\omega \in S} p(\omega \mid S) = 1$. Claim 2: if $S_1$ and $S_2$ are independent, then $p(S_1 \mid S_2) = p(S_1)$.
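
  Both claims can be checked numerically. The sketch below again assumes a fair die and hypothetical event choices (“even” and “roll ≤ 2”, which happen to be independent).

```python
# Sketch: conditional probability of events on a fair die.
from fractions import Fraction

p = {omega: Fraction(1, 6) for omega in range(1, 7)}
prob = lambda S: sum(p[w] for w in S)              # p(S) = Σ_{ω∈S} p(ω)
cond = lambda S1, S2: prob(S1 & S2) / prob(S2)     # p(S1 | S2), requires p(S2) > 0

even, low = {2, 4, 6}, {1, 2}

# Claim 1: Σ_{ω ∈ S} p(ω | S) = 1
assert sum(cond({w}, even) for w in even) == 1
# Claim 2: "even" and "roll ≤ 2" are independent, so conditioning changes nothing.
assert prob(even & low) == prob(even) * prob(low)
assert cond(even, low) == prob(even)
```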

  17. Two important rules. (1) Chain rule: let $S_1, \ldots, S_n$ be events with $p(S_i) > 0$. Then $p(S_1 \cap S_2 \cap \cdots \cap S_n) = p(S_1)\, p(S_2 \mid S_1) \cdots p(S_n \mid S_1, \ldots, S_{n-1})$. (2) Bayes’ rule: let $S_1, S_2$ be events with $p(S_1) > 0$ and $p(S_2) > 0$. Then $p(S_1 \mid S_2) = \frac{p(S_1 \cap S_2)}{p(S_2)} = \frac{p(S_2 \mid S_1)\, p(S_1)}{p(S_2)}$.
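
  A small numerical check of both rules, again on an assumed fair die with arbitrarily chosen events:

```python
# Sketch: verifying the chain rule and Bayes' rule exactly with Fractions.
from fractions import Fraction

p = {omega: Fraction(1, 6) for omega in range(1, 7)}
prob = lambda S: sum(p[w] for w in S)              # p(S)
cond = lambda S1, S2: prob(S1 & S2) / prob(S2)     # p(S1 | S2)

A, B, C = {1, 2, 3, 4}, {2, 4, 6}, {4, 5, 6}

# Chain rule: p(A ∩ B ∩ C) = p(A) p(B | A) p(C | A ∩ B)
assert prob(A & B & C) == prob(A) * cond(B, A) * cond(C, A & B)

# Bayes' rule: p(A | B) = p(B | A) p(A) / p(B)
assert cond(A, B) == cond(B, A) * prob(A) / prob(B)
```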

  18. Discrete random variables. Often each outcome corresponds to a setting of various attributes (e.g., “age”, “gender”, “hasPneumonia”, “hasDiabetes”). A random variable $X$ is a mapping $X : \Omega \to D$, where $D$ is some set (e.g., the integers); it induces a partition of all outcomes $\Omega$. For $x \in D$, we say $p(X = x) = p(\{\omega \in \Omega : X(\omega) = x\})$, “the probability that variable $X$ assumes state $x$”. Notation: $\mathrm{Val}(X)$ denotes the set $D$ of all values assumed by $X$ (we will interchangeably call these the “values” or “states” of variable $X$). $p(X)$ is a distribution: $\sum_{x \in \mathrm{Val}(X)} p(X = x) = 1$.
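
  To illustrate the definition, a short sketch (assuming a fair die as $\Omega$ and an “even roll” indicator as the random variable $X$, both chosen here for illustration) derives $p(X)$ from the outcome probabilities:

```python
# Sketch: a random variable is a mapping X : Ω → D, inducing a distribution p(X).
from fractions import Fraction

p = {omega: Fraction(1, 6) for omega in range(1, 7)}   # fair die outcomes

def X(omega):
    """Indicator of an even roll: maps Ω = {1,...,6} to D = {0, 1}."""
    return 1 if omega % 2 == 0 else 0

# p(X = x) = p({ω ∈ Ω : X(ω) = x})
p_X = {}
for omega, weight in p.items():
    p_X[X(omega)] = p_X.get(X(omega), Fraction(0)) + weight

print(p_X)                     # {0: Fraction(1, 2), 1: Fraction(1, 2)}
assert sum(p_X.values()) == 1  # p(X) is a distribution
```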

  19. Multivariate distributions. Instead of one random variable, we have a random vector $X(\omega) = [X_1(\omega), \ldots, X_n(\omega)]$. $X_i = x_i$ is an event, and the joint distribution $p(X_1 = x_1, \ldots, X_n = x_n)$ is simply defined as $p(X_1 = x_1 \cap \cdots \cap X_n = x_n)$. We will often write $p(x_1, \ldots, x_n)$ instead of $p(X_1 = x_1, \ldots, X_n = x_n)$. Conditioning, the chain rule, Bayes’ rule, etc. all still apply.

  20. Working with random variables. For example, the conditional distribution $p(X_1 \mid X_2 = x_2) = \frac{p(X_1, X_2 = x_2)}{p(X_2 = x_2)}$. This notation means $p(X_1 = x_1 \mid X_2 = x_2) = \frac{p(X_1 = x_1, X_2 = x_2)}{p(X_2 = x_2)}$ for all $x_1 \in \mathrm{Val}(X_1)$. Two random variables are independent, written $X_1 \perp X_2$, if $p(X_1 = x_1, X_2 = x_2) = p(X_1 = x_1)\, p(X_2 = x_2)$ for all values $x_1 \in \mathrm{Val}(X_1)$ and $x_2 \in \mathrm{Val}(X_2)$.
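
  One way to test independence from a joint table is to compare every entry against the product of the marginals; in the sketch below the marginals are made up and the joint factorizes by construction, so $X_1 \perp X_2$ holds.

```python
# Sketch: checking independence of two discrete random variables from their joint table.
p_x1 = {0: 0.3, 1: 0.7}   # hypothetical marginal of X1
p_x2 = {0: 0.4, 1: 0.6}   # hypothetical marginal of X2
joint = {(a, b): p_x1[a] * p_x2[b] for a in p_x1 for b in p_x2}

def independent(joint, p1, p2, tol=1e-12):
    """X1 ⊥ X2 iff p(x1, x2) = p(x1) p(x2) for every pair of values."""
    return all(abs(joint[(a, b)] - p1[a] * p2[b]) <= tol for a in p1 for b in p2)

print(independent(joint, p_x1, p_x2))  # True
```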

  21. Example. Consider three binary-valued random variables $X_1, X_2, X_3$ with $\mathrm{Val}(X_i) = \{0, 1\}$. Let the outcome space $\Omega$ be the cross-product of their states, $\Omega = \mathrm{Val}(X_1) \times \mathrm{Val}(X_2) \times \mathrm{Val}(X_3)$, where $X_i(\omega)$ is the value for $X_i$ in the assignment $\omega \in \Omega$. Specify $p(\omega)$ for each outcome $\omega \in \Omega$ by a big table:

      x1  x2  x3  |  p(x1, x2, x3)
       0   0   0  |  .11
       0   0   1  |  .02
      ...
       1   1   1  |  .05

  How many parameters do we need to specify? $2^3 - 1$.
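
  A sketch of storing such a table as a Python dict follows; only the .11, .02, and .05 entries appear on the slide, so the remaining values are made-up filler chosen to sum to 1.

```python
# Illustrative sketch: a joint distribution over three binary variables as a lookup table.
from itertools import product

joint = {
    (0, 0, 0): 0.11, (0, 0, 1): 0.02, (0, 1, 0): 0.15, (0, 1, 1): 0.12,
    (1, 0, 0): 0.20, (1, 0, 1): 0.08, (1, 1, 0): 0.27, (1, 1, 1): 0.05,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9           # a valid distribution
assert set(joint) == set(product((0, 1), repeat=3))    # all 2^3 assignments present

# 2^3 entries, but only 2^3 - 1 = 7 free parameters: the last is fixed by the sum-to-1 constraint.
print(len(joint) - 1)  # 7
```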

  22. Marginalization. Suppose $X$ and $Y$ are random variables with distribution $p(X, Y)$; $X$ is Intelligence with $\mathrm{Val}(X) = \{\text{“Very High” (vh)}, \text{“High” (h)}\}$, and $Y$ is Grade with $\mathrm{Val}(Y) = \{\text{“a”}, \text{“b”}\}$. The joint distribution is specified by:

               X = vh   X = h
      Y = a     0.7      0.15
      Y = b     0.1      0.05

  $p(Y = a) = \,?$ Answer: $0.85$. More generally, suppose we have a joint distribution $p(X_1, \ldots, X_n)$. Then $p(X_i = x_i) = \sum_{x_1} \cdots \sum_{x_{i-1}} \sum_{x_{i+1}} \cdots \sum_{x_n} p(x_1, \ldots, x_n)$.
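
  The 0.85 answer can be reproduced by summing out $X$; the dict encoding and helper name below are assumptions, the numbers are the slide’s.

```python
# Sketch: marginalization on the Intelligence/Grade table.
p_xy = {
    ("vh", "a"): 0.7, ("h", "a"): 0.15,
    ("vh", "b"): 0.1, ("h", "b"): 0.05,
}

def p_y(y):
    """Marginalize out X: p(Y = y) = Σ_x p(X = x, Y = y)."""
    return sum(v for (x, yy), v in p_xy.items() if yy == y)

print(round(p_y("a"), 2))  # 0.85 = 0.7 + 0.15
```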

  23. Conditioning. Suppose $X$ and $Y$ are random variables with distribution $p(X, Y)$, with $X$ (Intelligence) and $Y$ (Grade) as above and the same joint table. We can compute the conditional probability $p(Y = a \mid X = vh) = \frac{p(Y = a, X = vh)}{p(X = vh)} = \frac{p(Y = a, X = vh)}{p(Y = a, X = vh) + p(Y = b, X = vh)} = \frac{0.7}{0.7 + 0.1} = 0.875$.
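
  And the 0.875 conditional probability, computed from the same table (again using an assumed dict encoding):

```python
# Sketch: conditioning on the Intelligence/Grade table.
p_xy = {
    ("vh", "a"): 0.7, ("h", "a"): 0.15,
    ("vh", "b"): 0.1, ("h", "b"): 0.05,
}

def p_y_given_x(y, x):
    """p(Y = y | X = x) = p(Y = y, X = x) / Σ_y' p(Y = y', X = x)."""
    p_x = sum(v for (xx, _), v in p_xy.items() if xx == x)
    return p_xy[(x, y)] / p_x

print(round(p_y_given_x("a", "vh"), 3))  # 0.875 = 0.7 / (0.7 + 0.1)
```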

  24. Example: Medical diagnosis. Include a variable for each symptom (e.g., “fever”, “cough”, “fast breathing”, “shaking”, “nausea”, “vomiting”) and a variable for each disease (e.g., “pneumonia”, “flu”, “common cold”, “bronchitis”, “tuberculosis”). Diagnosis is performed by inference in the model: $p(\text{pneumonia} = 1 \mid \text{cough} = 1, \text{fever} = 1, \text{vomiting} = 0)$. One famous model, Quick Medical Reference (QMR-DT), has 600 diseases and 4000 findings.
