Unsupervised Learning

Unsupervised Learning. Niloy Mitra, Iasonas Kokkinos - PowerPoint PPT Presentation



  1. Deep Learning for Graphics: Unsupervised Learning. Niloy Mitra (UCL), Iasonas Kokkinos (UCL/Facebook), Paul Guerrero (UCL), Vladimir Kim (Adobe Research), Kostas Rematas (U Washington), Tobias Ritschel (UCL)

  2. Timetable (table of presenters Niloy, Iasonas, Paul, Vova, Kostas, and Tobias vs. topics Introduction, Theory, NN Basics, Supervised Applications, Data, Unsupervised Applications, Beyond 2D, and Outlook). EG Course “Deep Learning for Graphics”

  3. Unsupervised Learning • There is no direct ground truth for the quantity of interest • Autoencoders • Variational Autoencoders (VAEs) • Generative Adversarial Networks (GANs)

  4. Autoencoders • Goal: meaningful features that capture the main factors of variation in the dataset • These are good for classification, clustering, exploration, generation, … • We have no ground truth for them. (Diagram: input data → encoder → features) Slide Credit: Fei-Fei Li, Justin Johnson, Serena Yeung, CS 231n

  5. Autoencoders • Goal: meaningful features that capture the main factors of variation, and that can be used to reconstruct the input. (Diagram: input data → encoder → features (latent variables) → decoder → reconstruction, trained with an L2 loss between input and reconstruction)
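This encoder/decoder structure with an L2 reconstruction loss can be sketched in plain NumPy. Everything below (the linear encoder/decoder, the toy 8-D dataset with a 2-D bottleneck, the learning rate) is made up for illustration; the course's own notebook uses a real framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 points in 8-D that actually lie in a 2-D subspace.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))

W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder: input -> features
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder: features -> reconstruction

lr = 0.01
losses = []
for step in range(1000):
    z = X @ W_enc                    # latent features (the bottleneck)
    X_hat = z @ W_dec                # reconstruction of the input
    err = X_hat - X
    losses.append((err ** 2).mean())  # L2 reconstruction loss
    # Gradient steps (constant factors absorbed into the learning rate).
    g_dec = (z.T @ err) / len(X)
    g_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Since both maps are linear, the learned bottleneck spans (approximately) the same subspace PCA would find, which is the point made on the next slide.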

  6. Autoencoders • Linear transformations for the encoder and decoder give results close to PCA • Deeper networks give better reconstructions, since the basis can be non-linear. (Figure: original images vs. autoencoder and PCA reconstructions) Image Credit: Reducing the Dimensionality of Data with Neural Networks, Hinton and Salakhutdinov

  7. Example: Document Word Probabilities → 2D Code. (Figure: 2D codes produced by LSA (based on PCA) vs. by an autoencoder)

  8. Example: Semi-Supervised Classification • Many images, but few ground truth labels • Start unsupervised: train an autoencoder on the many unlabeled images (input data → encoder → features → decoder, L2 loss) • Then supervised fine-tuning: reuse the trained encoder and train a classification network on the labeled images (input data → encoder → features → classifier, softmax loss against the GT label)
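A toy sketch of this two-stage recipe. The shapes and the random "pretrained" matrix are stand-ins; a real pipeline would load the encoder weights learned by the autoencoder above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 (unsupervised): W_enc would come from training an autoencoder on
# many unlabeled images; a random matrix stands in for it here.
W_enc = rng.normal(scale=0.1, size=(8, 2))

# Stage 2 (supervised fine-tuning): reuse the encoder to initialize a
# classifier, then train it (and optionally the encoder) on the few
# labeled examples.
W_cls = rng.normal(scale=0.1, size=(2, 3))   # classifier head, 3 classes

def softmax(a):
    e = np.exp(a - a.max())      # subtract max for numerical stability
    return e / e.sum()

x = rng.normal(size=8)                  # one labeled input
probs = softmax(x @ W_enc @ W_cls)      # class probabilities from shared features
print(probs)
```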

  9. Code example: Autoencoder (autoencoder.ipynb)

  10. Generative Models • Assumption: the dataset contains samples from an unknown distribution p(x) • Goal: create a new sample from p(x) that is not in the dataset. (Figure: dataset images … → generated image?) Image Credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation, Karras et al.

  11. Generative Models • Assumption: the dataset contains samples from an unknown distribution p(x) • Goal: create a new sample from p(x) that is not in the dataset. (Figure: dataset images … → generated image)

  12. Generative Models • A generator with parameters θ maps latent samples z to generated samples • The latent distribution p(z) is known and easy to sample from

  13. Generative Models • How to measure the similarity of the generated distribution and the data distribution? 1) Likelihood of the data under the generated distribution → Variational Autoencoders (VAEs) 2) Adversarial game: a discriminator tries to distinguish real from generated samples, while the generator tries to make them hard to distinguish → Generative Adversarial Networks (GANs)

  14. Autoencoders as Generative Models? • A trained decoder transforms some features z into approximate samples from the data distribution. Decoder = Generator? • What happens if we pick a random z? • We do not know the distribution of random features that decode to likely samples. (Figure: feature space / latent space)

  15. Variational Autoencoders (VAEs) • Pick a parametric distribution p(z) for the features • The generator with parameters θ maps a feature sample z to an image distribution p_θ(x|z) • Train the generator to maximize the likelihood of the data under p_θ(x) = ∫ p_θ(x|z) p(z) dz

  16. Outputting a Distribution • Given a sample z, the generator with parameters θ outputs the parameters of a distribution over images, e.g. a Bernoulli distribution or a Normal distribution

  17. Variational Autoencoders (VAEs): Naïve Sampling (Monte-Carlo) • SGD approximates the expected values over samples • In each training iteration, sample z from p(z) … • … and x randomly from the dataset, and maximize the log-likelihood log p_θ(x|z)

  18. Variational Autoencoders (VAEs): Naïve Sampling (Monte-Carlo) • Loss function: −log p_θ(x|z) • In each training iteration, sample z from p(z) … • … and x randomly from the dataset • SGD approximates the expected values over samples

  19. Variational Autoencoders (VAEs): Naïve Sampling (Monte-Carlo) • Loss function: −log p_θ(x|z) • In each training iteration, sample z from p(z) … • … and x randomly from the dataset • SGD approximates the expected values over samples • Problem: few (z, x) pairs have a non-zero loss gradient, since a randomly drawn z rarely decodes to something close to the randomly drawn x
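The sampling problem shows up in a toy Monte-Carlo estimate of p(x) = E_{z~p(z)}[p(x|z)]. The generator here is a made-up tanh map with a Gaussian observation model; only the idea, not the model, is from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

def generator(z):
    # Hypothetical generator: maps a latent z to the mean of p(x|z).
    return np.tanh(z)

def log_p_x_given_z(x, z, sigma=0.1):
    # Log-density of a Gaussian observation model p(x|z) = N(generator(z), sigma^2 I).
    mu = generator(z)
    return -0.5 * np.sum(((x - mu) / sigma) ** 2) - len(x) * np.log(sigma * np.sqrt(2.0 * np.pi))

x = np.array([0.3, -0.7])                  # one "data point"
zs = rng.standard_normal((10_000, 2))      # z ~ p(z) = N(0, I)
log_liks = np.array([log_p_x_given_z(x, z) for z in zs])
p_x = np.exp(log_liks).mean()              # naive Monte-Carlo estimate of p(x)

# With narrow observation noise, only the few z whose decoding lands near x
# contribute noticeably; most samples add almost nothing to the estimate and
# would contribute almost no gradient during training.
frac_useful = (log_liks > log_liks.max() - 10.0).mean()
print(p_x, frac_useful)
```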

  20. Variational Autoencoders (VAEs): The Encoder • During training, another network, the encoder with parameters φ, can guess a good z for a given sample x • The dimension of z should be much smaller than that of x • This also gives us features z for each data point x

  21. Variational Autoencoders (VAEs): The Encoder • Can we still easily sample a new x? • Need to make sure the encoder's distribution q_φ(z|x) approximates the prior p(z) • Regularize with the KL-divergence KL(q_φ(z|x) ‖ p(z)) • The negative loss can be shown to be a lower bound for the likelihood, and is equivalent if q_φ(z|x) = p_θ(z|x)
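For the common case where q_φ(z|x) is a diagonal Gaussian and p(z) = N(0, I), this KL regularizer has a closed form. A small sketch (the formula is standard; the example inputs are made up):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions:
    #   0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# Zero exactly when q already matches the prior ...
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))
# ... and positive as soon as q drifts away from it.
print(kl_to_standard_normal(np.array([0.5, -1.0]), np.array([-0.2, 0.3])))
```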

  22. Reparameterization Trick • Example when q_φ(z|x) is Gaussian: z = μ_φ(x) + σ_φ(x) · ε, where ε ~ N(0, I) • The sample ε does not depend on the encoder parameters, so backpropagation can flow through μ and σ into the encoder
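A sketch of the trick for a diagonal-Gaussian q; the values of μ and log σ² below are stand-ins for what an encoder would predict for one input x:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for encoder outputs for one input x:
mu = np.array([0.5, -1.0])
log_var = np.array([-0.2, 0.3])
sigma = np.exp(0.5 * log_var)

# Reparameterization: sample eps ~ N(0, I), which does NOT depend on the
# encoder parameters, then shift and scale it deterministically:
#   z = mu + sigma * eps
# Gradients can flow through mu and sigma back into the encoder.
eps = rng.standard_normal((50_000, 2))
z = mu + sigma * eps

# The samples really are distributed as N(mu, sigma^2):
print(z.mean(axis=0), z.std(axis=0))
```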

  23. Generating Data • Samples from the trained generator on MNIST and on the Frey Faces dataset (z sampled from the prior). Image Credit: Auto-Encoding Variational Bayes, Kingma and Welling

  24. Demos • VAE on MNIST: http://dpkingma.com/sgvb_mnist_demo/demo.html • VAE on Faces: http://vdumoulin.github.io/morphing_faces/online_demo.html

  25. Code example: Variational Autoencoder (variational_autoencoder.ipynb)

  26. Generative Adversarial Networks • Player 1, the generator: scores if the discriminator can't distinguish its output from a real image from the dataset • Player 2, the discriminator: scores if it can distinguish between real and fake

  27. Naïve Sampling Revisited • Loss function: −log p_θ(x|z), with z sampled from p(z) and x drawn randomly from the dataset • Few (z, x) pairs have non-zero gradients • This is a problem of the maximum-likelihood objective • Use a different loss: train a discriminator network to measure similarity

  28. Why Adversarial? • If the discriminator approximated the data density p(x): the generator sample at the maximum of p(x) would have the lowest loss • The optimal generator would then have a single mode at that maximum, with small variance. Image Credit: How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?, Ferenc Huszár

  29. Why Adversarial? • For GANs, the discriminator instead approximates a density ratio between the data distribution and the generated distribution, which depends on the generator

  30. Why Adversarial? • VAEs: maximize the likelihood of data samples under the generated distribution • GANs: maximize the likelihood of generator samples under the data distribution, which the adversarial game approximates


  32. GAN Objective • d(x): the discriminator's probability that x is not fake • Fake/real classification loss (BCE) • Discriminator objective: maximize log d(x) + log(1 − d(g(z))) • Generator objective: minimize log(1 − d(g(z)))
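The two objectives written out as binary cross-entropy, with hypothetical discriminator outputs standing in for a real network; the generator loss below uses the non-saturating −log d(g(z)) form commonly used in practice rather than the minimax log(1 − d(g(z))):

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy for a predicted probability p of the label "real".
    return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))

# Hypothetical discriminator outputs d(x) = probability that x is not fake:
d_real = 0.9    # on a real image from the dataset
d_fake = 0.2    # on a generated image g(z)

# Discriminator objective: label real samples 1 and fake samples 0.
loss_d = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Generator objective: fool the discriminator into labeling fakes as real.
loss_g = bce(d_fake, 1.0)

print(loss_d, loss_g)
```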
