

  1. GDPP: Learning Diverse Generations using Determinantal Point Process. Mohamed Elfeki, Camille Couprie, Morgane Rivière and Mohamed Elhoseiny * https://github.com/M-Elfeki/GDPP

  2. What’s wrong with Generative models?

  3. What’s wrong with Generative models? GAN Real Sample Fake Sample

  4. What’s wrong with Generative models? GAN GDPP-GAN Real Sample Fake Sample

  5. Determinantal Point Process (DPP): φ is the feature representation of a subset S sampled from the ground set Y

  6. Determinantal Point Process (DPP): φ is the feature representation of a subset S sampled from the ground set Y; L_S is the DPP kernel, which models the diversity of a mini-batch S
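The kernel on the slide above can be sketched as a Gram matrix of batch features. A minimal NumPy illustration (the function name and toy batch are mine, not from the deck):

```python
import numpy as np

def dpp_kernel(features):
    """Build the DPP kernel L_S from the feature representations phi
    of a mini-batch S: L_S = phi(S) phi(S)^T, an n x n Gram matrix."""
    phi = np.asarray(features, dtype=float)
    return phi @ phi.T

# Toy mini-batch of 3 samples with 2-D features.
batch = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
L_S = dpp_kernel(batch)
# Under a DPP, the determinant of the kernel restricted to a subset
# scores that subset: near-orthogonal (diverse) features span more
# volume, so diverse subsets are more probable.
```
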

  7. What is GDPP? Fake Data Real Data Generation Loss

  8. What is GDPP? Fake Data → kernel L_{S_B}, Real Data → kernel L_{D_B}; Generation Loss + Diversity Loss via eigenvalues/eigenvectors

  9. What is GDPP? Fake Data → kernel L_{S_B}, Real Data → kernel L_{D_B}; Generation Loss + Diversity Loss via eigenvalues/eigenvectors
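The diversity loss named on these slides compares the eigendecompositions of the fake and real kernels. A hedged NumPy sketch of one plausible reading (match eigenvalues; align eigenvectors weighted by min-max-normalized real eigenvalues — function name and normalization are my assumptions, see the paper/repo for the exact form):

```python
import numpy as np

def gdpp_diversity_loss(phi_fake, phi_real):
    """Hedged sketch of the diversity loss: compare the eigen-
    decompositions of the fake and real DPP kernels. phi_* are
    (n, d) feature matrices for the two mini-batches."""
    L_fake = phi_fake @ phi_fake.T   # kernel of the fake batch
    L_real = phi_real @ phi_real.T   # kernel of the real batch
    lam_f, v_f = np.linalg.eigh(L_fake)
    lam_r, v_r = np.linalg.eigh(L_real)
    # Min-max-normalize the real eigenvalues to weight the
    # eigenvector-alignment terms (assumed normalization).
    lam_r_hat = (lam_r - lam_r.min()) / (lam_r.max() - lam_r.min() + 1e-12)
    # Magnitude term: match the eigenvalues of the two kernels.
    magnitude = np.sum((lam_f - lam_r) ** 2)
    # Structure term: align corresponding eigenvectors, weighted by
    # the normalized real eigenvalues (|cosine| per eigenvector pair).
    cos = np.abs(np.sum(v_f * v_r, axis=0))
    structure = -np.sum(lam_r_hat * cos)
    return magnitude + structure
```

Because the loss is built only from eigenvalues/eigenvectors of the batch kernels, it adds no trainable parameters — which is the "cost-free" claim made later in the deck.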

  10. How GDPP? Fake Non-Diverse Batch vs. Real Diverse Batch

  11. How GDPP? Noise Z → Generator G → Batch B; Fake Non-Diverse Batch vs. Real Diverse Batch

  12. How GDPP? Noise Z → G → Batch B; φ(·) from the discriminator/encoder (D/E) extracts features of the fake OR real batch → Fake/Real

  13. How GDPP? Noise Z → G → Batch B; φ(·) from D/E → Fake/Real prediction and Diversity Loss
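The pipeline in slides 10-13 can be sketched end to end: noise Z goes through G to produce a fake batch B, a shared feature extractor φ (from the discriminator/encoder) embeds both batches, and the diversity loss is added to the generation loss. All weights, shapes, and the placeholder generation loss below are hypothetical stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def diversity_loss(phi_fake, phi_real):
    # Compare eigendecompositions of the two DPP kernels
    # (simplified; see the paper/repo for the exact weighting).
    lam_f, v_f = np.linalg.eigh(phi_fake @ phi_fake.T)
    lam_r, v_r = np.linalg.eigh(phi_real @ phi_real.T)
    lam_hat = (lam_r - lam_r.min()) / (lam_r.max() - lam_r.min() + 1e-12)
    return (np.sum((lam_f - lam_r) ** 2)
            - np.sum(lam_hat * np.abs(np.sum(v_f * v_r, axis=0))))

# Toy linear stand-ins for the boxes in the diagram:
W_g = 0.1 * rng.normal(size=(4, 8))   # "generator" G
W_d = 0.1 * rng.normal(size=(8, 5))   # feature extractor phi from D/E

z = rng.normal(size=(16, 4))          # noise batch Z
fake_batch = z @ W_g                  # G(Z) -> fake batch B
real_batch = rng.normal(size=(16, 8))

phi_fake = np.tanh(fake_batch @ W_d)  # phi(.) on the fake batch
phi_real = np.tanh(real_batch @ W_d)  # phi(.) on the real batch

# Generator objective: the usual generation loss plus the DPP
# diversity term (placeholder generation loss shown here).
gen_loss = np.mean(fake_batch ** 2)
total_loss = gen_loss + diversity_loss(phi_fake, phi_real)
```
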

  14. Does it work? (Synthetic) Real Sample Fake Sample GAN

  15. Does it work? (Synthetic) Real Sample vs. Fake Sample: ALI, Unrolled-GAN, VEEGAN, WGAN-GP, GAN, GDPP-GAN

  16. Does it work? (Real) GDPP-GAN GDPP-VAE

  17. What else? Data Efficient

  18. What else? Data Efficient Time Efficient

  19. What else? Data Efficient Time Efficient Fast Training Time

  20. What else? Stabilizes Adversarial Training Data Efficient Time Efficient Fast Training Time

  21. What else? Stabilizes Adversarial Training Robust to poor Initialization Data Efficient Time Efficient Fast Training Time

  22. Why GDPP? 1. No extra trainable parameters (cost-free)

  23. Why GDPP? 1. No extra trainable parameters (cost-free) 2. Unsupervised Setting (No labels)

  24. Why GDPP? 1. No extra trainable parameters (cost-free) 2. Unsupervised Setting (No labels) 3. Stabilizes Adversarial Training

  25. Why GDPP? 1. No extra trainable parameters (cost-free) 2. Unsupervised Setting (No labels) 3. Stabilizes Adversarial Training 4. Time and Data efficient

  26. Why GDPP? 1. No extra trainable parameters (cost-free) 2. Unsupervised Setting (No labels) 3. Stabilizes Adversarial Training 4. Time and Data efficient 5. Architecture & Model Invariant (GAN & VAE)

  27. Why GDPP? 1. No extra trainable parameters (cost-free) 2. Unsupervised Setting (No labels) 3. Stabilizes Adversarial Training 4. Time and Data efficient 5. Architecture & Model Invariant (GAN & VAE) Yet, Consistently outperforms state-of-the-art

  28. For many more, join us in 143
