Generative Adversarial Networks (Ian Goodfellow, Research Scientist)

  1. Generative Adversarial Networks. Ian Goodfellow, Research Scientist. GPU Technology Conference, San Jose, California, 2016-04-05

  2. Generative Modeling • Have training examples: x ∼ p_train(x) • Want a model that can draw samples: x ∼ p_model(x) • Want p_model(x) = p_data(x) (Images from Toronto Face Database)

  3. Example Applications • Image manipulation • Text to speech • Machine translation

  4. Modeling Priorities: q* = argmin_q D_KL(p ‖ q) puts high probability where there should be high probability; q* = argmin_q D_KL(q ‖ p) puts low probability where there should be low probability. [Figure: probability density of p(x) and the resulting q*(x) under each objective.] (Deep Learning, Goodfellow, Bengio, and Courville 2016)
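
For reference, the two objectives on this slide can be written out in full; this is the standard definition of the KL divergence, not text taken from the slide itself:

```latex
% Forward KL (maximum-likelihood direction): q must cover everywhere p is large.
q^{*} = \arg\min_{q} D_{\mathrm{KL}}(p \,\|\, q)
      = \arg\min_{q} \int p(x) \log \frac{p(x)}{q(x)} \, dx

% Reverse KL: q is penalized for putting mass where p is small.
q^{*} = \arg\min_{q} D_{\mathrm{KL}}(q \,\|\, p)
      = \arg\min_{q} \int q(x) \log \frac{q(x)}{p(x)} \, dx
```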

  5. Generative Adversarial Networks: a differentiable function D tries to output 1 when x is sampled from the data and to output 0 when x is sampled from the model; the model sample is produced by a differentiable function G applied to input noise z. (“Generative Adversarial Networks”, Goodfellow et al 2014)
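
A minimal training-loop sketch of this setup, assuming PyTorch and small fully connected networks for G and D; the architectures, learning rates, and data dimensions are illustrative placeholders, not taken from the slides:

```python
# Minimal GAN training sketch (illustrative; not from the original slides).
import torch
import torch.nn as nn

noise_dim, data_dim = 64, 784  # e.g. flattened 28x28 MNIST digits

G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())          # generator: z -> x
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())              # discriminator: x -> [0, 1]

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_real):
    batch = x_real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Update D: push D(x_real) toward 1 and D(G(z)) toward 0.
    z = torch.randn(batch, noise_dim)
    x_fake = G(z).detach()
    loss_D = bce(D(x_real), ones) + bce(D(x_fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Update G: push D(G(z)) toward 1 (non-saturating generator loss).
    z = torch.randn(batch, noise_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```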

  6. Discriminator Strategy: the optimal D(x) for any p_data(x) and p_model(x) is always D(x) = p_data(x) / (p_data(x) + p_model(x)). [Figure: data distribution, model distribution, and the resulting discriminator output.]
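
A short justification of this formula, reconstructed from the standard GAN analysis rather than from the slide: for a fixed generator, the discriminator maximizes a pointwise objective whose maximizer is exactly that ratio.

```latex
% For a fixed generator, D maximizes
V(D) = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
     + \mathbb{E}_{x \sim p_{\text{model}}}\left[\log\left(1 - D(x)\right)\right]
% Pointwise this is a \log y + b \log(1 - y), maximized at y = a / (a + b), so
D^{*}(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_{\text{model}}(x)}
```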

  7. Learning Process. [Figure: data distribution, model distribution, and discriminator response, shown for a poorly fit model, after updating D, after updating G, and at the mixed strategy equilibrium.]

  8. Generator Transformation Videos: MNIST digit dataset, Toronto Face Dataset (TFD)

  9. Non-Convergence (Alec Radford)

  10. Laplacian Pyramid (Denton+Chintala et al 2015)

  11. LAPGAN Results • 40% of samples mistaken by humans for real photographs (Denton+Chintala et al 2015)

  12. DCGAN Results (Radford et al 2015)

  13. Arithmetic on Face Semantics: man wearing glasses - man + woman = woman wearing glasses (Radford et al 2015)
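
A sketch of how this kind of latent-space arithmetic is typically carried out; the generator, latent batches, and averaging over a few z vectors here are illustrative assumptions, not details from Radford et al:

```python
# Latent-space arithmetic sketch (illustrative; not from the slides).
import torch
import torch.nn as nn

noise_dim = 100
# Placeholder generator; in practice this is a trained DCGAN generator.
G = nn.Sequential(nn.Linear(noise_dim, 64 * 64 * 3), nn.Tanh())

# Placeholder latent batches; in practice these are z vectors whose generated
# faces were visually identified as "man with glasses", "man", "woman".
z_men_glasses = torch.randn(3, noise_dim)
z_men = torch.randn(3, noise_dim)
z_women = torch.randn(3, noise_dim)

# Averaging a few z vectors per concept gives a more reliable direction.
z_result = z_men_glasses.mean(0) - z_men.mean(0) + z_women.mean(0)
woman_with_glasses = G(z_result.unsqueeze(0))  # expected: woman with glasses
```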

  14. Mean Squared Error Ignores Small Details. [Figure: input vs. reconstruction.] (Chelsea Finn)

  15. GANs Learn a Cost Function: they capture predictable details regardless of scale. [Figure: ground truth vs. MSE vs. adversarial predictions.] (Lotter et al, 2015)
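
A sketch of the general idea of pairing a fixed pixel-wise cost with a learned adversarial cost for prediction; the weighting, networks, and shapes are placeholder assumptions, not the setup of Lotter et al:

```python
# Sketch: augment a fixed MSE cost with a learned adversarial cost.
# Illustrative only; the weight `lam` and the networks are placeholders.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(784, 784))       # predicts the next frame
D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())   # learned cost function
mse, bce = nn.MSELoss(), nn.BCELoss()
lam = 0.1  # placeholder trade-off between fixed and learned cost

def predictor_loss(frame_in, frame_target):
    pred = predictor(frame_in)
    real_label = torch.ones(pred.size(0), 1)
    # MSE alone blurs small details; the adversarial term penalizes
    # predictions the discriminator can tell apart from real frames.
    return mse(pred, frame_target) + lam * bce(D(pred), real_label)
```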

  16. Conclusion • Generative adversarial nets • Prioritize generating realistic samples over assigning high probability to all samples • Learn a cost function instead of using a fixed cost function • Learn that all predictable structures are important, even if they are small or faint
