

1. Learning Deep Generative Models
Inference & Representation, Lecture 12
Rahul G. Krishnan, Fall 2015

2. Outline
1 Introduction: Variational Bound; Summary
2 Variational Inference: Latent Dirichlet Allocation; Learning LDA; Stochastic Variational Inference
3 Deep Generative Models: Bayesian Networks & Deep Learning; Learning; Summary of DGMs
4 Summary

3. Outline (repeated at each section start; now entering Section 1: Introduction — Variational Bound)

4. Overview of Lecture
1 Review mathematical concepts: Jensen's Inequality and the Maximum Likelihood (ML) principle
2 Learning as optimization: maximizing the Evidence Lower Bound (ELBO)
3 Learning in LDA
4 Stochastic Variational Inference
5 Learning Deep Generative Models
6 Summary

5. Recap: Jensen's Inequality
For concave $f$, we have $f(\mathbb{E}[X]) \geq \mathbb{E}[f(X)]$. For the two-point distribution $P[X = a] = 1 - \lambda$, $P[X = b] = \lambda$, this reads
$$\underbrace{f((1-\lambda)a + \lambda b)}_{f(\mathbb{E}[X])} \;\geq\; \underbrace{(1-\lambda) f(a) + \lambda f(b)}_{\mathbb{E}[f(X)]}$$
Figure: Jensen's Inequality (the chord from $(a, f(a))$ to $(b, f(b))$ lies below the graph of a concave $f$)
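The inequality is easy to verify numerically. Here is a minimal sketch; the choice of $f = \log$ and the particular two-point distribution are illustrative, not from the slides:

```python
import numpy as np

# Verify f(E[X]) >= E[f(X)] for the concave f = log on a two-point
# distribution: X = a with prob. 1 - lam, X = b with prob. lam.
rng = np.random.default_rng(0)
a, b, lam = 1.0, 9.0, 0.3
x = rng.choice([a, b], size=100_000, p=[1 - lam, lam])

lhs = np.log(x.mean())       # f(E[X])  ~= log((1 - lam) * a + lam * b)
rhs = np.log(x).mean()       # E[f(X)]  ~= (1 - lam) * log(a) + lam * log(b)
print(lhs, rhs, lhs >= rhs)  # the inequality holds (up to Monte Carlo noise)
```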

6. Recap: Maximum Likelihood
We assume that for $\mathcal{D} = \{x_1, \ldots, x_N\}$, $x_i \sim p(x)$ i.i.d.
We hypothesize a model (with parameters $\theta$) for how the data is generated.
The Maximum Likelihood principle: $\max_\theta p(\mathcal{D}; \theta) = \prod_{i=1}^N p(x_i; \theta)$
We typically work with the log-probability instead: $\max_\theta \sum_{i=1}^N \log p(x_i; \theta)$
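For concreteness, a minimal sketch of the ML principle on a toy model of my choosing (a Bernoulli, where the maximizer of the log-likelihood is available in closed form):

```python
import numpy as np

# Maximum likelihood for an assumed Bernoulli model
# p(x; theta) = theta^x * (1 - theta)^(1 - x):
# max_theta sum_i log p(x_i; theta) is solved by the sample mean.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=1_000)   # D = {x_1, ..., x_N}, i.i.d.

theta_ml = data.mean()                    # closed-form ML estimate
loglik = np.sum(data * np.log(theta_ml)
                + (1 - data) * np.log(1 - theta_ml))
print(theta_ml, loglik)
```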

7. A Simple Bayesian Network
$z \rightarrow x$
Let's start with a very simple generative model for our data. We assume that the data is generated i.i.d. as: $z \sim p(z)$, then $x \sim p(x \mid z)$. Here $z$ is latent/hidden and $x$ is observed.
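A minimal sketch of ancestral sampling from this model, with assumed concrete distributions (a two-state categorical $z$ and a Gaussian $x \mid z$; the slide leaves $p(z)$ and $p(x \mid z)$ abstract):

```python
import numpy as np

# Ancestral sampling from z -> x: first z ~ p(z), then x ~ p(x | z).
rng = np.random.default_rng(0)

def sample(n):
    z = rng.choice(2, size=n, p=[0.4, 0.6])    # latent z ~ Categorical
    x = rng.normal(np.array([-2.0, 3.0])[z])   # observed x | z ~ N(mu_z, 1)
    return z, x                                # only x appears in D

z, x = sample(5)
print(x)
```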

8. Bounding the Marginal Likelihood
Log-likelihood of a single datapoint $x \in \mathcal{D}$ under the model: $\log p(x; \theta)$.
Important: assume there exists a $q(z; \phi)$ (the variational approximation). Then
$$\begin{aligned}
\log p(x) &= \log \sum_z p(x, z) && \text{(multiply and divide by } q(z)\text{)} \\
&= \log \sum_z q(z) \frac{p(x, z)}{q(z)} = \log \mathbb{E}_{z \sim q(z)}\!\left[\frac{p(x, z)}{q(z)}\right] \\
&\geq \sum_z q(z) \log \frac{p(x, z)}{q(z)} && \text{(by Jensen's inequality)} \\
&= \mathcal{L}(x; \theta, \phi) = \underbrace{\mathbb{E}_{q(z)}[\log p(x, z)]}_{\text{expectation of joint distribution}} + \underbrace{\mathrm{H}(q(z))}_{\text{entropy}}
\end{aligned}$$
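The bound can be estimated by Monte Carlo: draw $z$ from $q$ and average $\log p(x, z) - \log q(z)$. A sketch on an assumed conjugate toy model ($z \sim N(0,1)$, $x \mid z \sim N(z,1)$), where the exact posterior $N(x/2, 1/2)$ is known and should make the bound tightest:

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo estimate of L(x; theta, phi) = E_q[log p(x, z) - log q(z)]
# with q(z; phi) = N(mu, sigma^2).
rng = np.random.default_rng(0)

def elbo(x, mu, sigma, n_samples=100_000):
    z = rng.normal(mu, sigma, size=n_samples)          # z ~ q(z; phi)
    log_joint = norm.logpdf(z) + norm.logpdf(x, z, 1)  # log p(z) + log p(x|z)
    log_q = norm.logpdf(z, mu, sigma)
    return np.mean(log_joint - log_q)

x = 1.5
print(elbo(x, 0.0, 1.0))             # a loose bound
print(elbo(x, x / 2, np.sqrt(0.5)))  # q = exact posterior: tight bound
print(norm.logpdf(x, 0, np.sqrt(2))) # log p(x), for comparison
```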

9. Evidence Lower BOund (ELBO) / Variational Bound
When is the lower bound tight? Look at the gap between the function and its lower bound:
$$\begin{aligned}
\log p(x; \theta) - \mathcal{L}(x; \theta, \phi) &= \log p(x) - \sum_z q(z) \log \frac{p(x, z)}{q(z)} \\
&= \sum_z q(z) \log p(x) - \sum_z q(z) \log \frac{p(x, z)}{q(z)} \\
&= \sum_z q(z) \log \frac{q(z)\, p(x)}{p(x, z)} \\
&= \mathrm{KL}(q(z; \phi) \,\|\, p(z \mid x))
\end{aligned}$$
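This identity can be checked exactly on a small discrete model where $p(x)$, the posterior, and the KL are all enumerable; a sketch with assumed toy numbers:

```python
import numpy as np

# Verify log p(x) - L(x; theta, phi) = KL(q(z; phi) || p(z | x)) exactly,
# for binary z and an observed x = 1 in an assumed toy model.
p_z = np.array([0.3, 0.7])            # p(z)
p_x1_given_z = np.array([0.9, 0.2])   # p(x = 1 | z)
joint = p_z * p_x1_given_z            # p(x = 1, z)
log_px = np.log(joint.sum())          # log p(x = 1)
posterior = joint / joint.sum()       # p(z | x = 1)

q = np.array([0.5, 0.5])              # an arbitrary variational q(z; phi)
elbo = np.sum(q * (np.log(joint) - np.log(q)))
kl = np.sum(q * np.log(q / posterior))
print(np.isclose(log_px - elbo, kl))  # True: the gap is exactly the KL
```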

10. Evidence Lower BOund (ELBO) / Variational Bound
We assumed the existence of $q(z; \phi)$. What we just showed is that:
Key Point: the optimal $q(z; \phi)$ is the one that realizes $\mathrm{KL}(q(z; \phi) \,\|\, p(z \mid x)) = 0 \iff q(z; \phi) = p(z \mid x)$.

11. Evidence Lower BOund (ELBO) / Variational Bound
To estimate the likelihood of the entire dataset $\mathcal{D}$, we need $\sum_{i=1}^N \log p(x_i; \theta)$. Summing up over datapoints we get:
$$\max_\theta \sum_{i=1}^N \log p(x_i; \theta) \;\geq\; \max_{\theta, \phi_1, \ldots, \phi_N} \underbrace{\sum_{i=1}^N \mathcal{L}(x_i; \theta, \phi_i)}_{\text{ELBO}}$$
Note that we use a different $\phi_i$ for every data point (see the sketch below).
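A sketch of this dataset-level bound, reusing the assumed toy Gaussian model from the slide-8 example: $\theta$ is shared, while each $x_i$ gets its own $\phi_i = (\mu_i, \sigma_i)$.

```python
import numpy as np
from scipy.stats import norm

# Dataset ELBO: sum_i L(x_i; theta, phi_i), one phi_i = (mu_i, sigma_i)
# per datapoint. Toy model as before: z ~ N(0, 1), x | z ~ N(z, 1).
rng = np.random.default_rng(0)

def elbo(x, mu, sigma, n=50_000):
    z = rng.normal(mu, sigma, size=n)
    return np.mean(norm.logpdf(z) + norm.logpdf(x, z, 1)
                   - norm.logpdf(z, mu, sigma))

data = np.array([0.5, 1.5, -2.0])             # D = {x_1, ..., x_N}
phis = [(x / 2, np.sqrt(0.5)) for x in data]  # per-datapoint phi_i
total = sum(elbo(x, mu, s) for x, (mu, s) in zip(data, phis))
print(total)                                  # lower-bounds sum_i log p(x_i)
```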

12. Outline (repeated; now entering Section 1: Introduction — Summary)

13. Summary: Learning as Optimization
Variational learning turns learning into an optimization problem, namely:
$$\max_{\theta, \phi_1, \ldots, \phi_N} \sum_{i=1}^N \mathcal{L}(x_i; \theta, \phi_i)$$

14. Summary: Optimal $q$
The optimal $q(z; \phi)$ used in the bound corresponds to the intractable posterior distribution $p(z \mid x)$.

15. Summary: Approximating the Posterior
The better $q(z; \phi)$ approximates the posterior, the smaller $\mathrm{KL}(q(z; \phi) \,\|\, p(z \mid x))$ we can achieve, and the closer the ELBO gets to $\log p(x; \theta)$.

16. Outline (repeated; now entering Section 2: Variational Inference — Latent Dirichlet Allocation)

17. Latent Dirichlet Allocation (LDA)
Figure: Generative model for Latent Dirichlet Allocation (plate diagram with hyperparameters $\eta$ and $\alpha$; topics $\beta_k$, $k = 1, \ldots, K$; per-document topic proportions $\theta$; topic assignments $z$ and words $w$ inside a plate of $M$ words, nested in a plate of $N$ documents)

18. Generative Model
1 Sample global topics $\beta_k \sim \mathrm{Dir}(\eta)$ for $k = 1, \ldots, K$
2 For each document $d = 1, \ldots, N$: sample $\theta_d \sim \mathrm{Dir}(\alpha)$
3 For each word $m = 1, \ldots, M$: sample a topic $z_{dm} \sim \mathrm{Mult}(\theta_d)$, then a word $w_{dm} \sim \mathrm{Mult}(\beta_{z_{dm}})$
Notation: $\mathcal{S}$ denotes the simplex, $V$ is the vocabulary and $K$ is the number of topics; $\theta_d \in \mathcal{S}^K$ and $\beta_{z_{dm}} \in \mathcal{S}^V$. (A runnable simulation of this process is sketched below.)
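The generative process above is easy to simulate; a minimal sketch with small assumed sizes ($K$, $V$, $N$, $M$ and the hyperparameter values are illustrative, not from the slides):

```python
import numpy as np

# Simulate the LDA generative process: topics, then per-document
# topic proportions, topic assignments, and words.
rng = np.random.default_rng(0)
K, V, N, M = 3, 20, 5, 10        # topics, vocab size, docs, words per doc
eta, alpha = 0.5, 0.1            # symmetric Dirichlet hyperparameters

beta = rng.dirichlet(np.full(V, eta), size=K)      # beta_k ~ Dir(eta), in S^V
docs = []
for d in range(N):
    theta_d = rng.dirichlet(np.full(K, alpha))     # theta_d ~ Dir(alpha), in S^K
    z_d = rng.choice(K, size=M, p=theta_d)         # z_dm ~ Mult(theta_d)
    w_d = [rng.choice(V, p=beta[z]) for z in z_d]  # w_dm ~ Mult(beta_{z_dm})
    docs.append(w_d)
print(docs[0])
```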

19. Variational Distribution
$w$ is observed and $z, \beta, \theta$ are latent; we will perform inference over $z, \beta, \theta$.
As before, we assume there exists a distribution over our latent variables, and that it factorizes (the mean-field assumption). The variational distribution:
$$q(\theta, z, \beta; \Phi) = q(\theta; \gamma) \left( \prod_{n=1}^N q(z_n; \phi_n) \right) \left( \prod_{k=1}^K q(\beta_k; \lambda_k) \right)$$
Denote by $\Phi = \{\gamma, \phi, \lambda\}$ the parameters of the variational approximation.

20. Homework
Your next homework assignment involves implementing a mean-field algorithm for inference in LDA.
Assume the topic-word probabilities $\beta_{1:K}$ are observed and fixed; you won't have to infer these. Perform inference over $\theta$ and $z$.
The following slides are meant to give you intuition for how to derive the updates for inference. Read Blei et al. (2003) (particularly the appendix) for details on the derivation. A sketch of the resulting updates appears below.
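For orientation only (derive and verify the updates yourself from Blei et al. (2003)), here is a hedged sketch of the per-document coordinate-ascent updates in this setting, with $\beta$ fixed; the variable names, initialization, and convergence test are my own choices, not the assignment's:

```python
import numpy as np
from scipy.special import digamma

# Mean-field updates for one document w = (w_1, ..., w_M), beta fixed
# (Blei et al., 2003, with a symmetric scalar alpha):
#   phi_mk   propto  beta[k, w_m] * exp(digamma(gamma_k))
#   gamma_k  =       alpha + sum_m phi_mk
def mean_field_document(w, beta, alpha, n_iters=100, tol=1e-6):
    M, K = len(w), beta.shape[0]
    gamma = np.full(K, alpha + M / K)                 # init q(theta; gamma)
    for _ in range(n_iters):
        phi = beta[:, w].T * np.exp(digamma(gamma))   # (M, K), unnormalized
        phi /= phi.sum(axis=1, keepdims=True)         # each row on the simplex
        new_gamma = alpha + phi.sum(axis=0)
        if np.abs(new_gamma - gamma).max() < tol:     # converged
            return phi, new_gamma
        gamma = new_gamma
    return phi, gamma
```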

21. Outline (repeated; now entering Section 2: Variational Inference — Learning LDA)
