
Generative Adversarial Networks (GANs) - Ian Goodfellow, OpenAI



  1. Generative Adversarial Networks (GANs) Ian Goodfellow, OpenAI Research Scientist Presentation at Berkeley Artificial Intelligence Lab, 2016-08-31

  2. Generative Modeling • Density estimation • Sample generation [Figure: training examples vs. model samples] (Goodfellow 2016)

  3. Maximum Likelihood: θ* = argmax_θ E_{x∼p_data} log p_model(x | θ) (Goodfellow 2016)
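
To make the maximum-likelihood objective above concrete, here is a minimal sketch (all data and hyperparameters invented for the example) that fits a univariate Gaussian by gradient ascent on E_{x∼p_data} log p_model(x | θ):

```python
import numpy as np

# Toy data standing in for p_data (here drawn from N(2.0, 1.5^2)).
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.5, size=1000)

# Model: p_model(x | theta) = N(x; mu, exp(log_sigma)^2).
mu, log_sigma = 0.0, 0.0
lr = 0.1
for _ in range(500):
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    # Gradients of the average log-likelihood w.r.t. mu and log_sigma.
    d_mu = np.mean(z) / sigma
    d_log_sigma = np.mean(z**2) - 1.0
    mu += lr * d_mu
    log_sigma += lr * d_log_sigma

print(mu, np.exp(log_sigma))  # approaches (2.0, 1.5)
```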

  4. Taxonomy of Generative Models: maximum likelihood models split into explicit-density and implicit-density families. Explicit density divides into tractable density (fully visible belief nets, NADE, MADE, PixelRNN, change-of-variables models such as nonlinear ICA) and approximate density (variational: variational autoencoder; Markov chain: Boltzmann machine). Implicit density divides into Markov chain (GSN) and direct (GAN). (Goodfellow 2016)

  5. Fully Visible Belief Nets (Frey et al 1996) • Explicit formula based on the chain rule: p_model(x) = p_model(x_1) ∏_{i=2}^{n} p_model(x_i | x_1, ..., x_{i−1}) • Disadvantages: O(n) sample generation cost; currently do not learn a useful latent representation [Figure: PixelCNN elephants (van den Oord et al 2016)] (Goodfellow 2016)
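
To illustrate the O(n) sampling cost of the chain-rule factorization, a hypothetical sketch of ancestral sampling over binary variables; the conditional model here is an invented logistic function standing in for a learned p_model(x_i | x_1, ..., x_{i−1}):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
w = rng.normal(size=n)  # placeholder parameters of the conditional model

def cond_prob(prefix, i):
    # Hypothetical p(x_i = 1 | x_1..x_{i-1}): a logistic function of the
    # prefix sum, standing in for a learned autoregressive model.
    logit = w[i] + 0.1 * prefix.sum()
    return 1.0 / (1.0 + np.exp(-logit))

# Ancestral sampling: each x_i depends on all previous values, so drawing
# one sample requires n sequential model evaluations.
x = np.zeros(n)
for i in range(n):
    p = cond_prob(x[:i], i)
    x[i] = rng.random() < p
print(x)
```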

  6. Change of Variables: y = g(x) ⇒ p_x(x) = p_y(g(x)) |det(∂g(x)/∂x)|, e.g. nonlinear ICA (Hyvärinen 1999). Disadvantages: the transformation must be invertible, and the latent dimension must match the visible dimension. [Figure: 64x64 ImageNet samples, Real NVP (Dinh et al 2016)] (Goodfellow 2016)
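
A small numerical sketch of the change-of-variables formula above, assuming an invertible element-wise map g(x) = tanh(x) and a standard normal p_y; both choices are illustrative only:

```python
import numpy as np
from scipy.stats import norm

def g(x):
    return np.tanh(x)  # invertible, element-wise map

def log_det_jacobian(x):
    # d tanh(x)/dx = 1 - tanh(x)^2; the Jacobian is diagonal for an
    # element-wise map, so the log-determinant is a sum of logs.
    return np.sum(np.log(1.0 - np.tanh(x) ** 2))

x = np.array([0.3, -1.2])
# log p_x(x) = log p_y(g(x)) + log |det(dg/dx)|
log_px = norm.logpdf(g(x)).sum() + log_det_jacobian(x)
print(log_px)
```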

  7. Variational Autoencoder (Kingma and Welling 2013, Rezende et al 2014): log p(x) ≥ log p(x) − D_KL(q(z) ‖ p(z|x)) = E_{z∼q} log p(x, z) + H(q). Disadvantages: not asymptotically consistent unless q is perfect; samples tend to have lower quality. [Figure: CIFAR-10 samples (Kingma et al 2016)] (Goodfellow 2016)
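
A toy Monte Carlo estimate of the variational bound E_{z∼q} log p(x, z) + H(q) for a 1-D latent-variable model; the specific p(z), p(x|z), and q are invented so that the exact log p(x) is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: p(z) = N(0, 1), p(x | z) = N(z, 1), q(z) = N(mu_q, sigma_q^2)
x = 1.5
mu_q, sigma_q = 1.0, 0.8

def log_normal(v, mean, std):
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((v - mean) / std) ** 2

z = rng.normal(mu_q, sigma_q, size=10000)                     # z ~ q
log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)   # log p(x, z)
entropy_q = 0.5 * np.log(2 * np.pi * np.e * sigma_q**2)       # H(q), closed form
elbo = log_joint.mean() + entropy_q                           # E_{z~q} log p(x,z) + H(q)

# Exact log p(x): integrating z out gives x ~ N(0, 2).
print(elbo, log_normal(x, 0.0, np.sqrt(2.0)))                 # ELBO <= log p(x)
```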

  8. Boltzmann Machines: p(x) = (1/Z) exp(−E(x, z)), with Z = Σ_x Σ_z exp(−E(x, z)) • The partition function is intractable • May be estimated with Markov chain methods • Generating samples requires Markov chains too (Goodfellow 2016)
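
A toy illustration of why the partition function is intractable: Z sums over every joint configuration of x and z, which is only feasible by brute force for very small models (the bilinear energy and sizes below are invented for the example):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_x, n_z = 4, 3                       # tiny on purpose: 2**(n_x + n_z) terms
W = rng.normal(size=(n_x, n_z))       # hypothetical energy parameters

def energy(x, z):
    return -x @ W @ z                 # RBM-style bilinear energy E(x, z)

# Z = sum_x sum_z exp(-E(x, z)) -- exponential in the number of variables.
Z = sum(np.exp(-energy(np.array(x), np.array(z)))
        for x in itertools.product([0, 1], repeat=n_x)
        for z in itertools.product([0, 1], repeat=n_z))

x = np.array([1, 0, 1, 1])
z = np.array([0, 1, 1])
p = np.exp(-energy(x, z)) / Z         # p(x, z) = exp(-E(x, z)) / Z
print(Z, p)
```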

  9. GANs • Use a latent code • Asymptotically consistent (unlike variational methods) • No Markov chains needed • Often regarded as producing the best samples • No good way to quantify this (Goodfellow 2016)

  10. Generator Network: x = G(z; θ^(G)) • Must be differentiable (in theory, could use REINFORCE for discrete variables) • No invertibility requirement • Trainable for any size of z (some guarantees require z to have higher dimension than x) • Can make x conditionally Gaussian given z, but need not do so (Goodfellow 2016)
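
A minimal, hypothetical generator in PyTorch matching x = G(z; θ^(G)): a differentiable map from a latent code z to a sample x, with no invertibility requirement and a freely chosen latent dimension (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Differentiable map x = G(z; theta_G); layer sizes are illustrative."""
    def __init__(self, z_dim=100, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Tanh(),   # deterministic x given z
        )

    def forward(self, z):
        return self.net(z)

G = Generator()
z = torch.randn(8, 100)   # latent codes
x = G(z)                  # generated samples, shape (8, 784)
print(x.shape)
```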

  11. Training Procedure • Use SGD-like algorithm of choice (Adam) on two minibatches simultaneously: • A minibatch of training examples • A minibatch of generated samples • Optional: run k steps of one player for every step of the other player. (Goodfellow 2016)
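
A compact sketch of this training procedure in PyTorch, pairing a minibatch of data with a minibatch of generated samples at each step and allowing k discriminator steps per generator step; it uses the non-saturating generator loss described two slides below, and all architectures, data, and hyperparameters are placeholders rather than settings from any cited paper:

```python
import torch
import torch.nn as nn

z_dim, x_dim, batch = 100, 2, 64
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
k = 1  # optional: discriminator steps per generator step

def sample_data(n):
    # Placeholder "real" data: a 2-D Gaussian standing in for the dataset.
    return torch.randn(n, x_dim) * 0.5 + torch.tensor([2.0, 0.0])

for step in range(10000):
    # Discriminator update(s): one minibatch of data, one of generated samples.
    for _ in range(k):
        x_real = sample_data(batch)
        x_fake = G(torch.randn(batch, z_dim)).detach()
        loss_D = bce(D(x_real), torch.ones(batch, 1)) + \
                 bce(D(x_fake), torch.zeros(batch, 1))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: non-saturating loss, i.e. maximize log D(G(z)).
    x_fake = G(torch.randn(batch, z_dim))
    loss_G = bce(D(x_fake), torch.ones(batch, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```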

  12. Minimax Game: J^(D) = −(1/2) E_{x∼p_data} log D(x) − (1/2) E_z log(1 − D(G(z))), J^(G) = −J^(D) • Equilibrium is a saddle point of the discriminator loss • Resembles Jensen-Shannon divergence • Generator minimizes the log-probability of the discriminator being correct (Goodfellow 2016)

  13. Non-Saturating Game: J^(D) = −(1/2) E_{x∼p_data} log D(x) − (1/2) E_z log(1 − D(G(z))), J^(G) = −(1/2) E_z log D(G(z)) • Equilibrium no longer describable with a single loss • Generator maximizes the log-probability of the discriminator being mistaken • Heuristically motivated; the generator can still learn even when the discriminator successfully rejects all generator samples (Goodfellow 2016)
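
A small sketch contrasting the minimax and non-saturating generator losses from the two slides above, written as a function of the discriminator logit; it shows numerically that the minimax loss has a near-zero gradient when the discriminator confidently rejects a sample, while the non-saturating loss still provides a strong gradient:

```python
import torch

# Discriminator logits on three generated samples:
# confidently fake, unsure, confidently "real".
logit = torch.tensor([-6.0, 0.0, 3.0], requires_grad=True)
d = torch.sigmoid(logit)  # D(G(z))

# Minimax generator loss (the part of J_G = -J_D that depends on G):
# minimize (1/2) E_z log(1 - D(G(z)))
loss_minimax = 0.5 * torch.log(1 - d).sum()
g_minimax, = torch.autograd.grad(loss_minimax, logit, retain_graph=True)

# Non-saturating generator loss: minimize -(1/2) E_z log D(G(z))
loss_ns = -0.5 * torch.log(d).sum()
g_ns, = torch.autograd.grad(loss_ns, logit)

print(g_minimax)  # ~0 for the confidently rejected sample: gradient saturates
print(g_ns)       # large gradient for the confidently rejected sample
```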

  14. Maximum Likelihood Game: J^(D) = −(1/2) E_{x∼p_data} log D(x) − (1/2) E_z log(1 − D(G(z))), J^(G) = −(1/2) E_z exp(σ^{−1}(D(G(z)))). When the discriminator is optimal, the generator gradient matches that of maximum likelihood ("On Distinguishability Criteria for Estimating Generative Models", Goodfellow 2014, pg 5) (Goodfellow 2016)
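
Since σ^{−1}(D(G(z))) is simply the discriminator's pre-sigmoid logit, the maximum-likelihood generator cost can be sketched directly in terms of those logits (the numbers below are illustrative):

```python
import torch

def ml_generator_loss(fake_logits):
    # J_G = -(1/2) E_z exp(sigma^{-1}(D(G(z)))); with a sigmoid discriminator,
    # sigma^{-1}(D(.)) is the logit the network outputs before the sigmoid.
    return -0.5 * torch.exp(fake_logits).mean()

fake_logits = torch.tensor([-2.0, 0.5, 1.3])  # D's logits on generated samples
print(ml_generator_loss(fake_logits))
```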

  15. Maximum Likelihood Samples (Goodfellow 2016)

  16. Discriminator Strategy: the optimal D(x) for any p_data(x) and p_model(x) is always D(x) = p_data(x) / (p_data(x) + p_model(x)). A cooperative rather than adversarial view of GANs: the discriminator tries to estimate the ratio of the data and model distributions, and informs the generator of its estimate in order to guide its improvements. [Figure: discriminator output plotted over x against the data and model distributions, with x = G(z)] (Goodfellow 2016)
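
A numerical sketch of the statement above for two known 1-D densities (chosen arbitrarily): the optimal discriminator is the stated ratio, and D/(1 − D) recovers the density ratio p_data/p_model that informs the generator:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-4.0, 6.0, 7)
p_data = norm.pdf(x, loc=2.0, scale=1.0)     # illustrative data density
p_model = norm.pdf(x, loc=0.0, scale=1.5)    # illustrative model density

D_opt = p_data / (p_data + p_model)          # optimal discriminator output
ratio = D_opt / (1.0 - D_opt)                # recovers p_data / p_model

print(np.allclose(ratio, p_data / p_model))  # True
```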

  17. Comparison of Generator Losses (Goodfellow 2016)

  18. DCGAN Architecture Most “deconvs” are batch normalized (Radford et al 2015) (Goodfellow 2016)
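
A hypothetical DCGAN-style generator sketch in the spirit of Radford et al 2015: a stack of transposed convolutions ("deconvs"), most of them batch normalized, with the final layer left un-normalized; channel counts and the 32x32 output size are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # z (as a 1x1 feature map) -> 4x4
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # 16x16 -> 32x32; the final "deconv" is not batch normalized
            nn.ConvTranspose2d(128, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

G = DCGANGenerator()
print(G(torch.randn(4, 100)).shape)  # torch.Size([4, 3, 32, 32])
```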

  19. DCGANs for LSUN Bedrooms (Radford et al 2015) (Goodfellow 2016)

  20. Vector Space Arithmetic: man with glasses − man + woman = woman with glasses (Goodfellow 2016)
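
A sketch of the latent-space arithmetic above, assuming a trained generator G and (averaged) latent codes for each concept; every variable here is a placeholder for illustration:

```python
import torch
import torch.nn as nn

G = nn.Linear(100, 784)        # placeholder for a trained generator G(z)

# Placeholder averaged latent codes; in practice each would be the mean of
# several codes whose decodings show the named concept.
z_man_with_glasses = torch.randn(100)
z_man = torch.randn(100)
z_woman = torch.randn(100)

# "man with glasses" - "man" + "woman" -> ideally decodes to "woman with glasses"
z = z_man_with_glasses - z_man + z_woman
x = G(z.unsqueeze(0))
print(x.shape)
```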

  21. Mode Collapse • Fully optimizing the discriminator with the generator held constant is safe • Fully optimizing the generator with the discriminator held constant results in mapping all points to the argmax of the discriminator • Can partially fix this by adding nearest-neighbor features constructed from the current minibatch to the discriminator (“minibatch GAN”) (Salimans et al 2016) (Goodfellow 2016)
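
A simplified stand-in (not the exact formulation of Salimans et al 2016) for the minibatch-feature idea above: each example's features are augmented with a statistic of its distances to the other examples in the same minibatch, so a generator that maps all points to one output becomes easy for the discriminator to detect:

```python
import torch

def minibatch_features(h, k=3):
    """Append to each example's features the mean distance to its k nearest
    neighbours within the minibatch (a simplified minibatch feature)."""
    dist = torch.cdist(h, h)                       # pairwise distances (B, B)
    dist.fill_diagonal_(float("inf"))              # ignore self-distance
    knn, _ = dist.topk(k, dim=1, largest=False)    # k smallest distances
    return torch.cat([h, knn.mean(dim=1, keepdim=True)], dim=1)

h_real = torch.randn(64, 128)                      # diverse features
h_collapsed = torch.randn(1, 128).repeat(64, 1)    # mode-collapsed minibatch
print(minibatch_features(h_real)[:, -1].mean())       # noticeably > 0
print(minibatch_features(h_collapsed)[:, -1].mean())  # ~0: collapse is visible
```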

  22. Minibatch GAN on CIFAR [Figure: training data and model samples] (Salimans et al 2016) (Goodfellow 2016)

  23. Minibatch GAN on ImageNet (Salimans et al 2016) (Goodfellow 2016)

  24. Cherry-Picked Results (Goodfellow 2016)

  25. GANs Work Best When Output Entropy is Low [Figure: text-to-image synthesis results (Reed et al 2016) for captions such as "this small bird has a pink breast and crown, and black primaries and secondaries", "this magnificent fellow is almost all black with a red crest, and white cheek patch", "the flower has petals that are bright pinkish purple with white stigma", and "this white and yellow flower have thin white petals and a round yellow stamen"] (Goodfellow 2016)

  26. Optimization and Games • Optimization: find a minimum, θ* = argmin_θ J(θ) • Game: Player 1 controls θ^(1) and Player 2 controls θ^(2); Player 1 wants to minimize J^(1)(θ^(1), θ^(2)) and Player 2 wants to minimize J^(2)(θ^(1), θ^(2)). Depending on the J functions, they may compete or cooperate. (Goodfellow 2016)

  27. Games ⊇ Optimization. Example: θ^(1) = θ, θ^(2) = {}, J^(1)(θ^(1), θ^(2)) = J(θ^(1)), J^(2)(θ^(1), θ^(2)) = 0 (Goodfellow 2016)

  28. Nash Equilibrium • No player can reduce their cost by changing their own strategy: ∀θ^(1): J^(1)(θ^(1), θ^(2)*) ≥ J^(1)(θ^(1)*, θ^(2)*), and ∀θ^(2): J^(2)(θ^(1)*, θ^(2)) ≥ J^(2)(θ^(1)*, θ^(2)*) • In other words, each player's cost is minimal with respect to that player's own strategy • Finding Nash equilibria ⊆ optimization (but not clearly useful) (Goodfellow 2016)

  29. Well-Studied Cases • Finite minimax (zero-sum games) • Finite mixed-strategy games • Continuous, convex games • Differential games (lion chases gladiator) (Goodfellow 2016)

  30. Continuous Minimax Game: the two players share a single value function V(θ^(1), θ^(2)); player 1 seeks its maximum and player 2 seeks its minimum. The solution is a saddle point of V, and not just any saddle point: it must specifically be a maximum for player 1 and a minimum for player 2. (Goodfellow 2016)

  31. Local Differential Nash Equilibria (Ratliff et al 2013): for each player i, ∇_{θ^(i)} J^(i)(θ^(1), θ^(2)) = 0. Necessary: ∇²_{θ^(i)} J^(i)(θ^(1), θ^(2)) is positive semi-definite. Sufficient: ∇²_{θ^(i)} J^(i)(θ^(1), θ^(2)) is positive definite. (Goodfellow 2016)

  32. Sufficient Condition for Simultaneous Gradient Descent to Converge (Ratliff et al 2013): define ω = [∇_{θ^(1)} J^(1)(θ^(1), θ^(2)); ∇_{θ^(2)} J^(2)(θ^(1), θ^(2))]. The eigenvalues of ∇_θ ω must have positive real part, where ∇_θ ω = [[∇²_{θ^(1)} J^(1), ∇_{θ^(1)} ∇_{θ^(2)} J^(2)], [∇_{θ^(2)} ∇_{θ^(1)} J^(1), ∇²_{θ^(2)} J^(2)]] (I call this the "generalized Hessian"). (Goodfellow 2016)
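
A toy check of this condition on two invented scalar games: a convex cooperative game whose generalized Hessian has eigenvalues with positive real part, and the bilinear minimax game J^(1) = θ^(1)θ^(2), J^(2) = −θ^(1)θ^(2), whose eigenvalues are purely imaginary, which is why simultaneous gradient descent tends to circle such an equilibrium rather than converge:

```python
import numpy as np

# "Generalized Hessian" grad_theta(omega), with omega = (dJ1/da, dJ2/db),
# evaluated for two scalar toy games with parameters (a, b).

# Cooperative convex game: J1 = a^2 + 0.5*a*b,  J2 = b^2 + 0.5*a*b
H_coop = np.array([[2.0, 0.5],
                   [0.5, 2.0]])

# Bilinear minimax game: J1 = a*b,  J2 = -a*b  (so omega = (b, -a))
H_minimax = np.array([[0.0, 1.0],
                      [-1.0, 0.0]])

print(np.linalg.eigvals(H_coop))     # [2.5 1.5]: positive real parts, converges
print(np.linalg.eigvals(H_minimax))  # [0+1j 0-1j]: zero real parts, no guarantee
```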

  33. Interpretation • Each player’s Hessian should have large, positive eigenvalues, expressing a strong preference to keep doing their current strategy • The Jacobian of one player’s gradient with respect to the other player’s parameters should have smaller contributions to the eigenvalues, meaning each player has limited ability to change the other player’s behavior at convergence • Does not apply to GANs, so their convergence remains an open question (Goodfellow 2016)

  34. Equilibrium Finding Heuristics • Keep parameters near their running average • Periodically assign running average value to parameters • Constrain parameters to lie near running average • Add loss for deviation from running average (Goodfellow 2016)
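
A sketch of the first two heuristics (keep a running average of the parameters and periodically assign it back), implemented as an exponential moving average over a PyTorch module's parameters; the decay rate and copy interval are arbitrary placeholders:

```python
import copy
import torch
import torch.nn as nn

G = nn.Linear(100, 784)                 # stands in for the generator
G_avg = copy.deepcopy(G)                # running-average copy of the parameters
decay = 0.999

@torch.no_grad()
def update_average(model, avg_model, decay):
    # avg <- decay * avg + (1 - decay) * current
    for p, p_avg in zip(model.parameters(), avg_model.parameters()):
        p_avg.mul_(decay).add_(p, alpha=1 - decay)

for step in range(1, 10001):
    # ... one generator/discriminator training step would go here ...
    update_average(G, G_avg, decay)
    if step % 1000 == 0:                # periodically assign the average back
        G.load_state_dict(G_avg.state_dict())
```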

  35. Stabilized Training (Goodfellow 2016)

  36. Other Games in AI • Robust optimization / robust control • for security/safety, e.g. resisting adversarial examples • Domain-adversarial learning for domain adaptation • Adversarial privacy • Guided cost learning • Predictability minimization • … (Goodfellow 2016)

  37. Conclusion • GANs are generative models that use supervised learning to approximate an intractable cost function • GANs can simulate many cost functions, including the one used for maximum likelihood • Finding Nash equilibria in high-dimensional, continuous, non-convex games is an important open research problem (Goodfellow 2016)
