GDPP: Learning Diverse Generations using Determinantal Point Process (PowerPoint presentation)



SLIDE 1

GDPP: Learning Diverse Generations using Determinantal Point Process

Mohamed Elfeki, Camille Couprie, Morgane Rivière and Mohamed Elhoseiny * https://github.com/M-Elfeki/GDPP

SLIDE 2

What’s wrong with Generative models?

SLIDE 3

What’s wrong with Generative models?

Real Sample Fake Sample

GAN

SLIDE 4

What’s wrong with Generative models?

Real Sample Fake Sample

GAN GDPP-GAN

SLIDE 5

Determinantal Point Process (DPP)

φ is the feature representation of a subset S sampled from the ground set Y

SLIDE 6

Determinantal Point Process (DPP)

φ is the feature representation of a subset S sampled from the ground set Y

L_S: the DPP kernel, which models the diversity of a mini-batch S
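Written out, the kernel above follows the standard L-ensemble formulation; the normalization below is the usual DPP definition, reconstructed here rather than copied from the deck:

```latex
L_S = \phi(S)\,\phi(S)^{\top}, \qquad
P_L(S) = \frac{\det(L_S)}{\det(L + I)}
```

Since det(L_S) equals the squared volume spanned by the feature vectors of S, a batch of near-identical (near-parallel) features has probability close to zero, while a diverse batch is favored.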

SLIDE 7

What is GDPP?

Real Data Fake Data Generation Loss

SLIDE 8

What is GDPP?

Real Data · Fake Data · Generation Loss · Diversity Loss (Eigenvalues/Eigenvectors)

L_SB, L_DB (DPP kernels of the fake batch S_B and the real batch D_B)

SLIDE 9

What is GDPP?

Real Data · Fake Data · Generation Loss · Diversity Loss (Eigenvalues/Eigenvectors)

L_SB, L_DB (DPP kernels of the fake batch S_B and the real batch D_B)

SLIDE 10

Real Diverse Batch Fake Non-Diverse Batch

How GDPP?

SLIDE 11

Z_B → G

Real Diverse Batch Fake Non-Diverse Batch

How GDPP?

SLIDE 12

Z_B → G → D/E → φ(·)

Fake/Real OR

Real Diverse Batch Fake Non-Diverse Batch

How GDPP?

SLIDE 13

Z_B → G → D/E → φ(·)

Fake/Real OR

Real Diverse Batch Fake Non-Diverse Batch

Diversity Loss

How GDPP?
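The eigenvalue/eigenvector matching sketched on these slides can be written as a short loss function. A minimal NumPy sketch (the function name and the min-max eigenvalue weighting are my assumptions based on the slides' description, not the authors' released code at https://github.com/M-Elfeki/GDPP):

```python
import numpy as np

def compute_gdpp_loss(phi_real, phi_fake):
    """GDPP diversity loss: match the eigenvalues and eigenvectors of the
    fake batch's DPP kernel to those of the real batch's kernel.

    phi_real, phi_fake: (batch, feat) feature matrices, e.g. taken from
    the discriminator/encoder's penultimate layer.
    """
    # DPP kernels of the real and fake mini-batches (Gram matrices)
    L_real = phi_real @ phi_real.T
    L_fake = phi_fake @ phi_fake.T

    # Eigendecomposition; eigh is appropriate for symmetric PSD kernels
    eigvals_r, eigvecs_r = np.linalg.eigh(L_real)
    eigvals_f, eigvecs_f = np.linalg.eigh(L_fake)

    # Magnitude term: match the eigenvalues of the two kernels
    eigval_loss = np.sum((eigvals_r - eigvals_f) ** 2)

    # Structure term: align eigenvectors, weighting each pair by the
    # min-max normalized real eigenvalue so dominant modes matter most
    w = (eigvals_r - eigvals_r.min()) / (eigvals_r.max() - eigvals_r.min() + 1e-8)
    cos_sim = np.abs(np.sum(eigvecs_r * eigvecs_f, axis=0))  # columns are unit vectors
    eigvec_loss = -np.sum(w * cos_sim)

    return eigval_loss + eigvec_loss
```

In training, this loss would simply be added to the generator's objective; no new trainable parameters are introduced, which is what the deck means by "cost-free."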

SLIDE 14

GAN

Does it work? (Synthetic)

Real Sample Fake Sample

SLIDE 15

GAN GDPP-GAN

Does it work? (Synthetic)

ALI · Unrolled-GAN · VEE-GAN · WP-GAN · Real Sample · Fake Sample

SLIDE 16

Does it work? (Real)

GDPP-GAN GDPP-VAE

SLIDE 17

What else?

Data Efficient

SLIDE 18

What else?

  • Data Efficient
  • Time Efficient

SLIDE 19

What else?

  • Data Efficient
  • Time Efficient
  • Fast Training Time

SLIDE 20

What else?

  • Data Efficient
  • Time Efficient
  • Fast Training Time
  • Stabilizes Adversarial Training

SLIDE 21

What else?

  • Data Efficient
  • Time Efficient
  • Fast Training Time
  • Stabilizes Adversarial Training
  • Robust to Poor Initialization

SLIDE 22

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
SLIDE 23

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
  • 2. Unsupervised Setting (No labels)
SLIDE 24

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
  • 2. Unsupervised Setting (No labels)
  • 3. Stabilizes Adversarial Training
SLIDE 25

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
  • 2. Unsupervised Setting (No labels)
  • 3. Stabilizes Adversarial Training
  • 4. Time and Data efficient
SLIDE 26

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
  • 2. Unsupervised Setting (No labels)
  • 3. Stabilizes Adversarial Training
  • 4. Time and Data efficient
  • 5. Architecture & Model Invariant (GAN & VAE)
SLIDE 27

Why GDPP?

  • 1. No extra trainable parameters (cost-free)
  • 2. Unsupervised Setting (No labels)
  • 3. Stabilizes Adversarial Training
  • 4. Time and Data efficient
  • 5. Architecture & Model Invariant (GAN & VAE)

Yet it consistently outperforms the state of the art.

SLIDE 28

For many more, join us in 143