CSC 411 Lecture 14: Probabilistic Models II


  1. CSC 411 Lecture 14: Probabilistic Models II
     Roger Grosse, Amir-massoud Farahmand, and Juan Carrasquilla
     University of Toronto

  2. Overview
     - Bayesian parameter estimation
     - MAP estimation
     - Gaussian discriminant analysis

  3. Data Sparsity
     - Maximum likelihood has a pitfall: if you have too little data, it can overfit.
     - E.g., what if you flip the coin twice and get H both times?
       $\hat{\theta}_{\mathrm{ML}} = \frac{N_H}{N_H + N_T} = \frac{2}{2 + 0} = 1$
     - Because it never observed T, it assigns this outcome probability 0. This problem is known as data sparsity.
     - If you observe a single T in the test set, the log-likelihood is $-\infty$.
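
A minimal sketch of this failure mode in NumPy (the flip data and variable names here are made up for illustration):

```python
import numpy as np

# Two observed flips, both heads (H = 1, T = 0).
flips = np.array([1, 1])
N_H = flips.sum()
N_T = len(flips) - N_H

# Maximum likelihood estimate: fraction of heads.
theta_ml = N_H / (N_H + N_T)
print(theta_ml)  # 1.0

# Log-likelihood of a held-out tails under the ML estimate:
# log(1 - theta_ml) = log(0) = -inf, illustrating data sparsity.
with np.errstate(divide="ignore"):
    print(np.log(1.0 - theta_ml))  # -inf
```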

  4. Bayesian Parameter Estimation
     - In maximum likelihood, the observations are treated as random variables, but the parameters are not.
     - The Bayesian approach treats the parameters as random variables as well.
     - To define a Bayesian model, we need to specify two distributions:
       - The prior distribution $p(\theta)$, which encodes our beliefs about the parameters before we observe the data
       - The likelihood $p(\mathcal{D} \mid \theta)$, same as in maximum likelihood
     - When we update our beliefs based on the observations, we compute the posterior distribution using Bayes' Rule:
       $p(\theta \mid \mathcal{D}) = \frac{p(\theta)\, p(\mathcal{D} \mid \theta)}{\int p(\theta')\, p(\mathcal{D} \mid \theta')\, d\theta'}$
     - We rarely ever compute the denominator explicitly.

  5. Bayesian Parameter Estimation
     - Let's revisit the coin example. We already know the likelihood:
       $L(\theta) = p(\mathcal{D} \mid \theta) = \theta^{N_H} (1 - \theta)^{N_T}$
     - It remains to specify the prior $p(\theta)$.
       - We can choose an uninformative prior, which assumes as little as possible. A reasonable choice is the uniform prior.
       - But our experience tells us 0.5 is more likely than 0.99. One particularly useful prior that lets us specify this is the beta distribution:
         $p(\theta; a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} \theta^{a - 1} (1 - \theta)^{b - 1}$
     - The proportionality notation lets us ignore the normalization constant:
       $p(\theta; a, b) \propto \theta^{a - 1} (1 - \theta)^{b - 1}$

  6. Bayesian Parameter Estimation
     - Beta distribution for various values of a, b (plots).
     - Some observations:
       - The expectation $\mathbb{E}[\theta] = a / (a + b)$.
       - The distribution gets more peaked when a and b are large.
       - The uniform distribution is the special case where a = b = 1.
     - The main thing the beta distribution is used for is as a prior for the Bernoulli distribution.
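
As a quick check of these properties, here is a small sketch using scipy.stats.beta (the parameter values are arbitrary examples, not from the slides):

```python
from scipy.stats import beta

# E[theta] = a / (a + b) for a Beta(a, b) distribution.
a, b = 2.0, 2.0
print(beta.mean(a, b), a / (a + b))  # both 0.5

# a = b = 1 recovers the uniform distribution: the density is 1 everywhere on [0, 1].
print(beta.pdf([0.1, 0.5, 0.9], 1.0, 1.0))  # [1. 1. 1.]

# Larger a, b give a more peaked distribution (smaller variance around the same mean).
print(beta.var(2.0, 2.0), beta.var(20.0, 20.0))
```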

  7. Bayesian Parameter Estimation
     - Computing the posterior distribution:
       $p(\theta \mid \mathcal{D}) \propto p(\theta)\, p(\mathcal{D} \mid \theta) \propto \left[ \theta^{a - 1} (1 - \theta)^{b - 1} \right] \left[ \theta^{N_H} (1 - \theta)^{N_T} \right] = \theta^{a - 1 + N_H} (1 - \theta)^{b - 1 + N_T}$
     - This is just a beta distribution with parameters $N_H + a$ and $N_T + b$.
     - The posterior expectation of $\theta$ is:
       $\mathbb{E}[\theta \mid \mathcal{D}] = \frac{N_H + a}{N_H + N_T + a + b}$
     - The parameters a and b of the prior can be thought of as pseudo-counts.
     - The reason this works is that the prior and likelihood have the same functional form. This phenomenon is known as conjugacy, and it's very useful.
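
A short sketch of the conjugate update as pseudo-count bookkeeping, using SciPy's beta distribution (the prior a = b = 2 and counts N_H = 2, N_T = 0 follow the running coin example):

```python
from scipy.stats import beta

# Prior Beta(a, b); the data contribute N_H and N_T as extra pseudo-counts.
a, b = 2, 2          # prior pseudo-counts
N_H, N_T = 2, 0      # observed flips

# Conjugacy: the posterior is again a beta distribution.
posterior = beta(N_H + a, N_T + b)

# Posterior expectation E[theta | D] = (N_H + a) / (N_H + N_T + a + b).
print(posterior.mean())                      # 0.666...
print((N_H + a) / (N_H + N_T + a + b))       # same value in closed form
```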

  8. Bayesian Parameter Estimation
     - Bayesian inference for the coin flip example (plots for two settings):
       - Small data setting: $N_H = 2$, $N_T = 0$
       - Large data setting: $N_H = 55$, $N_T = 45$
     - When you have enough observations, the data overwhelm the prior.

  9. Bayesian Parameter Estimation
     - What do we actually do with the posterior?
     - The posterior predictive distribution is the distribution over future observables given the past observations. We compute this by marginalizing out the parameter(s):
       $p(\mathcal{D}' \mid \mathcal{D}) = \int p(\theta \mid \mathcal{D})\, p(\mathcal{D}' \mid \theta)\, d\theta$
     - For the coin flip example:
       $\theta_{\mathrm{pred}} = \Pr(x' = H \mid \mathcal{D}) = \int p(\theta \mid \mathcal{D}) \Pr(x' = H \mid \theta)\, d\theta = \int \mathrm{Beta}(\theta; N_H + a, N_T + b) \cdot \theta\, d\theta = \mathbb{E}_{\mathrm{Beta}(\theta; N_H + a, N_T + b)}[\theta] = \frac{N_H + a}{N_H + N_T + a + b}$
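
A sketch that approximates the marginalization integral numerically on a grid and compares it with the closed form above (grid size and prior values are arbitrary choices):

```python
import numpy as np
from scipy.stats import beta

a, b = 2, 2
N_H, N_T = 2, 0

# Marginalize theta out on a grid: approximate the integral of Beta(theta; N_H+a, N_T+b) * theta.
theta = np.linspace(0.0, 1.0, 100001)
dtheta = theta[1] - theta[0]
theta_pred_numeric = np.sum(beta.pdf(theta, N_H + a, N_T + b) * theta) * dtheta

# Closed form: the posterior mean (N_H + a) / (N_H + N_T + a + b).
theta_pred_closed = (N_H + a) / (N_H + N_T + a + b)
print(theta_pred_numeric, theta_pred_closed)  # both approximately 0.667
```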

  10. Bayesian Parameter Estimation
     - Bayesian estimation of the mean temperature in Toronto
     - Assume observations are i.i.d. Gaussian with known standard deviation $\sigma$ and unknown mean $\mu$
     - Broad Gaussian prior over $\mu$, centered at 0
     - We can compute the posterior and posterior predictive distributions analytically (full derivation in notes)
     - Why is the posterior predictive distribution more spread out than the posterior distribution?
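
For reference, the standard conjugate-Gaussian result that the slide defers to the notes; the symbols $\mu_0$, $s_0^2$ (prior mean and variance) and $\mu_N$, $s_N^2$ (posterior mean and variance) are introduced here and are not the slides' notation:

```latex
% Prior: mu ~ N(mu_0, s_0^2); likelihood: x^(i) ~ N(mu, sigma^2) for i = 1..N, sigma known.
% Posterior over the mean: precisions add, and the posterior mean is a precision-weighted average.
\frac{1}{s_N^2} = \frac{1}{s_0^2} + \frac{N}{\sigma^2},
\qquad
\mu_N = s_N^2 \left( \frac{\mu_0}{s_0^2} + \frac{1}{\sigma^2} \sum_{i=1}^N x^{(i)} \right),
\qquad
p(\mu \mid \mathcal{D}) = \mathcal{N}(\mu;\, \mu_N,\, s_N^2)

% Posterior predictive for a new observation x': the observation noise adds to the posterior uncertainty.
p(x' \mid \mathcal{D}) = \mathcal{N}(x';\, \mu_N,\, s_N^2 + \sigma^2)
```

The extra $\sigma^2$ in the predictive variance is what makes the posterior predictive distribution more spread out than the posterior.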

  11. Bayesian Parameter Estimation
     - Comparison of maximum likelihood and Bayesian parameter estimation:
       - The Bayesian approach deals better with data sparsity
       - Maximum likelihood is an optimization problem, while Bayesian parameter estimation is an integration problem
     - This means maximum likelihood is much easier in practice, since we can just do gradient descent
       - Automatic differentiation packages make it really easy to compute gradients
       - There aren't any comparable black-box tools for Bayesian parameter estimation (although Stan can do quite a lot)

  12. Maximum A-Posteriori Estimation
     - Maximum a-posteriori (MAP) estimation: find the most likely parameter settings under the posterior
     - This converts the Bayesian parameter estimation problem into a maximization problem:
       $\hat{\theta}_{\mathrm{MAP}} = \arg\max_\theta p(\theta \mid \mathcal{D}) = \arg\max_\theta p(\theta, \mathcal{D}) = \arg\max_\theta p(\theta)\, p(\mathcal{D} \mid \theta) = \arg\max_\theta \left[ \log p(\theta) + \log p(\mathcal{D} \mid \theta) \right]$

  13. Maximum A-Posteriori Estimation
     - Joint probability in the coin flip example:
       $\log p(\theta, \mathcal{D}) = \log p(\theta) + \log p(\mathcal{D} \mid \theta) = \mathrm{const} + (a - 1) \log \theta + (b - 1) \log(1 - \theta) + N_H \log \theta + N_T \log(1 - \theta) = \mathrm{const} + (N_H + a - 1) \log \theta + (N_T + b - 1) \log(1 - \theta)$
     - Maximize by finding a critical point:
       $0 = \frac{d}{d\theta} \log p(\theta, \mathcal{D}) = \frac{N_H + a - 1}{\theta} - \frac{N_T + b - 1}{1 - \theta}$
     - Solving for $\theta$,
       $\hat{\theta}_{\mathrm{MAP}} = \frac{N_H + a - 1}{N_H + N_T + a + b - 2}$
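
A quick numerical check of the critical point, maximizing the log joint directly with SciPy (the constant term is dropped since it does not affect the argmax; prior values a = b = 2 follow the running example):

```python
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 2, 2
N_H, N_T = 2, 0

# Negative log joint (up to a constant), matching the expression above.
def neg_log_joint(theta):
    return -((N_H + a - 1) * np.log(theta) + (N_T + b - 1) * np.log(1 - theta))

# Numerical maximizer vs. the closed-form MAP estimate.
result = minimize_scalar(neg_log_joint, bounds=(1e-6, 1 - 1e-6), method="bounded")
theta_map = (N_H + a - 1) / (N_H + N_T + a + b - 2)
print(result.x, theta_map)  # both approximately 0.75
```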

  14. Maximum A-Posteriori Estimation
     - Comparison of estimates in the coin flip example (prior pseudo-counts a = b = 2):

       Estimate      Formula                                   N_H = 2, N_T = 0   N_H = 55, N_T = 45
       θ_ML          N_H / (N_H + N_T)                         2/2 = 1            55/100 = 0.55
       θ_pred        (N_H + a) / (N_H + N_T + a + b)           4/6 ≈ 0.67         57/104 ≈ 0.548
       θ_MAP         (N_H + a − 1) / (N_H + N_T + a + b − 2)   3/4 = 0.75         56/102 ≈ 0.549

     - θ_MAP assigns nonzero probabilities as long as a, b > 1.
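
The table's entries can be reproduced from the three formulas; a minimal sketch (a = b = 2 matches the numbers shown in the table):

```python
def estimates(N_H, N_T, a=2, b=2):
    # Maximum likelihood, posterior predictive, and MAP estimates for the coin.
    theta_ml = N_H / (N_H + N_T)
    theta_pred = (N_H + a) / (N_H + N_T + a + b)
    theta_map = (N_H + a - 1) / (N_H + N_T + a + b - 2)
    return theta_ml, theta_pred, theta_map

print(estimates(2, 0))    # (1.0, 0.667, 0.75)
print(estimates(55, 45))  # (0.55, 0.548, 0.549)
```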

  15. Maximum A-Posteriori Estimation
     - Comparison of predictions in the Toronto temperatures example (plots after 1 observation and after 7 observations).

  16. Gaussian Discriminant Analysis

  17. Motivation
     - Generative models: model $p(x \mid t = k)$
     - Instead of trying to separate classes, try to model what each class "looks like".
     - Recall that $p(x \mid t = k)$ may be very complex:
       $p(x_1, \cdots, x_d, y) = p(x_1 \mid x_2, \cdots, x_d, y) \cdots p(x_{d-1} \mid x_d, y)\, p(x_d, y)$
     - Naive Bayes used a conditional independence assumption. What else could we do?
     - Choose a simple distribution. Today we will discuss fitting Gaussian distributions to our data.

  18. Bayes Classifier
     - Let's take a step back... the Bayes classifier:
       $h(x) = \arg\max_k p(t = k \mid x) = \arg\max_k \frac{p(x \mid t = k)\, p(t = k)}{p(x)} = \arg\max_k p(x \mid t = k)\, p(t = k)$
     - We have talked about discrete x; what if x is continuous?
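
A minimal sketch of this decision rule for continuous x, with made-up Gaussian class conditionals and class priors standing in for a real model:

```python
import numpy as np
from scipy.stats import norm

# Made-up class-conditional densities p(x | t = k) and priors p(t = k).
class_conditionals = [norm(loc=-1.0, scale=1.0), norm(loc=2.0, scale=0.5)]
priors = np.array([0.7, 0.3])

def bayes_classifier(x):
    # h(x) = argmax_k p(x | t = k) p(t = k); p(x) is the same for every k, so it can be dropped.
    scores = [dist.pdf(x) * prior for dist, prior in zip(class_conditionals, priors)]
    return int(np.argmax(scores))

print(bayes_classifier(0.0), bayes_classifier(1.8))  # 0, then 1
```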

  19. Classification: Diabetes Example
     - Observation per patient: white blood cell count and glucose value.
     - How can we model $p(x \mid t = k)$? Multivariate Gaussian.

  20. Gaussian Discriminant Analysis (Gaussian Bayes Classifier)
     - Gaussian Discriminant Analysis in its general form assumes that $p(x \mid t)$ is distributed according to a multivariate normal (Gaussian) distribution
     - Multivariate Gaussian distribution:
       $p(x \mid t = k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) \right)$
       where $|\Sigma_k|$ denotes the determinant of the matrix, and $d$ is the dimension of $x$
     - Each class k has an associated mean vector $\mu_k$ and covariance matrix $\Sigma_k$
     - $\Sigma_k$ has $O(d^2)$ parameters, which could be hard to estimate (more on that later).
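
A small sketch that evaluates the density exactly as written above and checks it against scipy.stats.multivariate_normal (the mean, covariance, and query point are arbitrary examples):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary class mean and covariance for one class k.
mu_k = np.array([1.0, -2.0])
Sigma_k = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
x = np.array([0.5, -1.0])
d = len(x)

# Density written out term by term as on the slide.
diff = x - mu_k
quad = diff @ np.linalg.inv(Sigma_k) @ diff
manual = np.exp(-0.5 * quad) / ((2 * np.pi) ** (d / 2) * np.linalg.det(Sigma_k) ** 0.5)

print(manual, multivariate_normal.pdf(x, mean=mu_k, cov=Sigma_k))  # same value
```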

  21. Multivariate Data
     - Multiple measurements (sensors)
     - d inputs/features/attributes
     - N instances/observations/examples
       $X = \begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_d^{(1)} \\ x_1^{(2)} & x_2^{(2)} & \cdots & x_d^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{(N)} & x_2^{(N)} & \cdots & x_d^{(N)} \end{pmatrix}$

  22. Multivariate Parameters
     - Mean: $\mathbb{E}[x] = [\mu_1, \cdots, \mu_d]^T$
     - Covariance:
       $\Sigma = \mathrm{Cov}(x) = \mathbb{E}[(x - \mu)(x - \mu)^T] = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1d} \\ \sigma_{12} & \sigma_2^2 & \cdots & \sigma_{2d} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{d1} & \sigma_{d2} & \cdots & \sigma_d^2 \end{pmatrix}$
     - For Gaussians, the mean and covariance are all you need to know to represent the distribution! (Not true in general.)
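
A sketch of estimating these parameters from a data matrix X with one instance per row (the data here are random and purely illustrative):

```python
import numpy as np

# N = 500 instances of d = 3 features; random data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

mu_hat = X.mean(axis=0)               # estimate of E[x], a d-vector
Sigma_hat = np.cov(X, rowvar=False)   # estimate of Cov(x), a d x d matrix

# The covariance has d(d+1)/2 free parameters, i.e. O(d^2): hard to estimate when d is large.
d = X.shape[1]
print(mu_hat.shape, Sigma_hat.shape, d * (d + 1) // 2)
```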
