
CMP784 DEEP LEARNING Lecture #11: Variational Autoencoders (Aykut Erdem) - PowerPoint PPT Presentation



  1. [Cover image: "latent" by Tom White] CMP784 DEEP LEARNING Lecture #11 – Variational Autoencoders. Aykut Erdem // Hacettepe University // Spring 2020

  2. [Image: Artificial faces synthesized by StyleGAN (Nvidia)] Previously on CMP784 • Supervised vs. Unsupervised Representation Learning • Sparse Coding • Autoencoders • Autoregressive Generative Models

  3. Lecture overview • Motivation for Variational Autoencoders (VAEs) • Mechanics of VAEs • Separability of VAEs • Training of VAEs • Evaluating representations • Vector Quantized Variational Autoencoders (VQ-VAEs) Disclaimer: Much of the material and slides for this lecture were borrowed from — Pavlos Protopapas, Mark Glickman and Chris Tanner's Harvard CS109B class — Andrej Risteski's CMU 10707 class — David McAllester's TTIC 31230 class

  4. Lecture overview • Motivation for Variational Autoencoders (VAEs) • Mechanics of VAEs • Separability of VAEs • Training of VAEs • Evaluating representations • Vector Quantized Variational Autoencoders (VQ-VAEs) Disclaimer: Much of the material and slides for this lecture were borrowed from — Pavlos Protopapas, Mark Glickman and Chris Tanner's Harvard CS109B class — Andrej Risteski's CMU 10707 class — David McAllester's TTIC 31230 class

  5. Recap: Autoencoders [Diagram: Input Image → Encoder (feed-forward, bottom-up path) → Feature Representation → Decoder (feed-back, generative, top-down path) → reconstruction] • Details of what goes inside the encoder and decoder matter! • Need constraints to avoid learning an identity.

  6. Parameter space of autoencoder • Let's examine the latent space of an AE. • Is there any separation of the different classes? If the AE learned the "essence" of the MNIST images, similar images should be close to each other. • Plot the latent space and examine the separation. • Here we plot the first 2 PCA components of the latent space. Image taken from A. Glassner, Deep Learning, Vol. 2: From Basics to Practice
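As a minimal sketch of the plot described on this slide: project the latent codes of an already trained encoder onto their first two PCA components and color the points by digit class. Here `encoder`, `x_test`, and `y_test` are assumed to exist (a trained AE encoder and the MNIST test set); they are not defined in the slides.

```python
# Illustrative sketch, not the course code: 2-component PCA of AE latent codes.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

latents = np.asarray(encoder(x_test))      # (num_images, latent_dim), assumed trained encoder

pca = PCA(n_components=2)
coords = pca.fit_transform(latents)        # (num_images, 2)

plt.figure(figsize=(6, 6))
sc = plt.scatter(coords[:, 0], coords[:, 1], c=y_test, cmap="tab10", s=4)
plt.colorbar(sc, label="digit class")
plt.xlabel("PCA component 1")
plt.ylabel("PCA component 2")
plt.title("AE latent space (first 2 PCA components)")
plt.show()
```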

  7. Traversing the latent space • We start at the tail of an arrow in latent space and move to its head in 7 steps. • For each value of z we use the already trained decoder to produce an image. Image taken from A. Glassner, Deep Learning, Vol. 2: From Basics to Practice
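A hypothetical sketch of this traversal: interpolate linearly between two latent codes in 7 steps and decode each intermediate z. The objects `decoder`, `z_start`, and `z_end` are assumed to come from a trained AE; the 28×28 reshape assumes MNIST images as in the previous slide.

```python
# Illustrative latent-space traversal between two assumed latent codes.
import numpy as np
import matplotlib.pyplot as plt

steps = 7
alphas = np.linspace(0.0, 1.0, steps)
fig, axes = plt.subplots(1, steps, figsize=(2 * steps, 2))
for ax, a in zip(axes, alphas):
    z = (1 - a) * z_start + a * z_end       # move from start to end of the arrow
    img = np.asarray(decoder(z[None, :]))   # decode a batch of one latent code
    ax.imshow(img.reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```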

  8. Problems with Autoencoders • Gaps in the latent space • Discrete latent space • Separability in the latent space

  9. Lecture overview • Motivation for Variational Autoencoders (VAEs) • Mechanics of VAEs • Separability of VAEs • Training of VAEs • Evaluating representations • Vector Quantized Variational Autoencoders (VQ-VAEs) Disclaimer: Much of the material and slides for this lecture were borrowed from — Pavlos Protopapas, Mark Glickman and Chris Tanner's Harvard CS109B class — Andrej Risteski's CMU 10707 class — David McAllester's TTIC 31230 class

  10. Generative models • Imagine we want to generate data from a distribution, x ∼ p(x) • e.g. x ∼ N(µ, σ)

  11. Generative models • But how do we generate such samples? z ∼ Unif(0, 1)

  12. Generative models • But how do we generate such samples? z ∼ Unif(0, 1), x = ln z

  13. Generative models • In other words, if we choose z ∼ Uniform, then there is a mapping x = g(z) such that x ∼ p(x), where in general g is some complicated function. • We already know that neural networks are great at learning complex functions. [Diagram: z ∼ p(z) → x = g(z) → x ∼ p(x)]
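A small sketch of the idea x = g(z): push uniform noise through a fixed transform to obtain samples from a target distribution. Here g is the inverse CDF of N(µ, σ) (my choice for illustration, matching the Gaussian example on slide 10; the values µ = 1.0, σ = 2.0 are arbitrary). In a VAE, a learned neural network plays the role of g.

```python
# Illustrative transform sampling: uniform noise z mapped to Gaussian samples x = g(z).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0

z = rng.uniform(0.0, 1.0, size=100_000)    # z ~ Unif(0, 1)
x = norm.ppf(z, loc=mu, scale=sigma)       # x = g(z), with g the inverse Gaussian CDF

print(x.mean(), x.std())                   # close to mu and sigma
```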

  14. Traditional Autoencoders • In traditional autoencoders, we can think of the encoder and decoder as function mappings: z = h(x) (encoder) and x̂ = f(z) (decoder). [Diagram: x → Encoder → z → Decoder → x̂]
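A minimal PyTorch sketch of these two mappings, z = h(x) and x̂ = f(z). The layer sizes are illustrative assumptions, not taken from the slides.

```python
# Illustrative traditional autoencoder: encoder h and decoder f as plain MLPs.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(               # h: x -> z
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(               # f: z -> x_hat
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.rand(8, 784)                              # dummy batch of flattened images
x_hat, z = Autoencoder()(x)
```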

  15. Variational Autoencoders • To go to variational autoencoders, we first need to add some stochasticity and think of the model probabilistically. [Diagram: x → Encoder → z → Decoder → x̂]

  16. Variational Autoencoders [Diagram] Sample z from the prior p(z), e.g. a standard Gaussian: z ∼ p(z). The decoder maps it to x̂ = f(z), so that x̂ ∼ p(x̂ | z).
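The generation path on this slide, as a short sketch: draw z from a standard Gaussian prior and push it through a trained decoder. Here `decoder` is assumed to come from an already trained VAE; the 20-dimensional latent matches the architecture shown on slide 20.

```python
# Illustrative generation from the prior with an assumed trained decoder.
import torch

latent_dim = 20                       # matches the 20-dim latent on slide 20
with torch.no_grad():
    z = torch.randn(16, latent_dim)   # z ~ N(0, I)
    x_hat = decoder(z)                # x_hat = f(z): one generated image per row
```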

  17. Variational Autoencoders • Traditional AE: the encoder maps the input directly to a latent value z. • Variational AE: consider one encoder output (µ) to be the mean of a normal and the other (σ) to be the std of a normal; the latent value z is then a randomly chosen value from that normal.

  18. Variational Autoencoders

  19. Variational Autoencoders

  20. Variational Autoencoders [Architecture diagram: encoder with 784 → 512 → 256 neurons (ReLU) producing 20 centers and 20 spreads; a 20-dim random variable z is sampled from these; decoder with 20 → 256 → 512 → 784 neurons (ReLU).]
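A PyTorch sketch of the architecture read off this slide (784-512-256 encoder, 20-dim centers and spreads, 256-512-784 decoder). This is an illustrative reconstruction, not the course code; parameterizing the spreads as a log-variance and sampling via z = µ + σ·ε (the reparameterization trick) are common implementation choices assumed here.

```python
# Illustrative VAE with the layer sizes sketched on slide 20.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=20):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.mu = nn.Linear(256, latent_dim)        # centers
        self.log_var = nn.Linear(256, latent_dim)   # spreads (as log-variance)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        eps = torch.randn_like(mu)                  # random variable
        z = mu + torch.exp(0.5 * log_var) * eps     # reparameterized sample
        return self.dec(z), mu, log_var
```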

  21. Lecture overview • Motivation for Variational Autoencoders (VAEs) • Mechanics of VAEs • Separability of VAEs • Training of VAEs • Evaluating representations • Vector Quantized Variational Autoencoders (VQ-VAEs) Disclaimer: Much of the material and slides for this lecture were borrowed from — Pavlos Protopapas, Mark Glickman and Chris Tanner's Harvard CS109B class — Andrej Risteski's CMU 10707 class — David McAllester's TTIC 31230 class

  22. Separability in Variational Autoencoders • Separability is not only between classes; we also want similar items within the same class to be near each other. • For example, there are different ways of writing "2"; we want similar styles to end up near each other. • Let's examine the VAE: something magical happens once we add stochasticity to the latent space.

  23. Separability in Variational Autoencoders [Diagram: ENCODER → Mean µ, SD σ → Latent Space → DECODER] Encode the first sample (a "2") and find µ₁, σ₁.

  24. Separability in Variational Autoencoders Sample z₁ ∼ N(µ₁, σ₁).

  25. Blending Latent Variables Decode to x̂₁.

  26. Separability in Variational Autoencoders Encode the second sample (a "3") and find µ₂, σ₂. Sample z₂ ∼ N(µ₂, σ₂).

  27. Separability in Variational Autoencoders Decode to x̂₂.

  28. Separability in Variational Autoencoders Train with the first sample (a "2") again and find µ₁, σ₁. However, z₁ ∼ N(µ₁, σ₁) will not be the same; it can happen to be close to the "3" in latent space.

  29. Separability in Variational Autoencoders Decode to x̂₁. Since the decoder only knows how to map from the latent space to image space, it will return a "3".

  30. Separability in Variational Autoencoders Train with the 1st sample again; the latent space starts to re-organize.

  31. Separability in Variational Autoencoders And again… the "3" is pushed away.

  32. Separability in Variational Autoencoders Many times…

  33. Separability in Variational Autoencoders Now let's test again.

  34. Separability in Variational Autoencoders Training on 3's again.

  35. Separability in Variational Autoencoders Many times…

  36. Lecture overview • Motivation for Variational Autoencoders (VAEs) • Mechanics of VAEs • Separability of VAEs • Training of VAEs • Evaluating representations • Vector Quantized Variational Autoencoders (VQ-VAEs) Disclaimer: Much of the material and slides for this lecture were borrowed from — Pavlos Protopapas, Mark Glickman and Chris Tanner's Harvard CS109B class — Andrej Risteski's CMU 10707 class — David McAllester's TTIC 31230 class

  37. Training [Diagram: x → Encoder → µ, σ → z → Decoder → x̂] Training means learning the encoder and decoder parameters. Define a loss function ℒ: • Use stochastic gradient descent (or Adam) to minimize ℒ. • The loss function has two parts: Reconstruction error: ℒ_R = (1/N) ∑ᵢ (xᵢ − x̂ᵢ)². • Similarity between the probability of z given x, p(z|x), and some predefined probability distribution p(z), which can be computed by the Kullback-Leibler divergence: KL(p(z|x) || p(z)).
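A sketch of this loss in code: mean squared reconstruction error plus the KL divergence between the encoder's Gaussian N(µ, σ) and a standard normal prior N(0, I), for which the KL term has the closed form used below. It assumes the model returns (x̂, µ, log σ²) as in the earlier VAE sketch; the `beta` weight is an assumption, set to 1 for the plain VAE objective.

```python
# Illustrative VAE loss: reconstruction error + KL(q(z|x) || N(0, I)).
import torch

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    recon = torch.mean(torch.sum((x - x_hat) ** 2, dim=1))            # reconstruction error
    kl = -0.5 * torch.mean(
        torch.sum(1 + log_var - mu ** 2 - torch.exp(log_var), dim=1)  # closed-form Gaussian KL
    )
    return recon + beta * kl

# Typical training step with Adam, as mentioned on the slide (model assumed):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# x_hat, mu, log_var = model(x); loss = vae_loss(x, x_hat, mu, log_var)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```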

  38. Bayesian AE [Diagram: x → Encoder → µ, σ → z → Decoder → x̂] • Bayes rule: p(z | x) ∝ p(x | z) p(z). • The parameters of the model are z; the posterior for our parameters z is p(z | x̂, x) ∝ p(x̂ | z, x) p(z). • Posterior predictive, the probability to see x̂ given x; this is INFERENCE: p(x̂ | x) = ∫ p(x̂ | z, x) p(z | x) dz, where p(x̂ | z, x) is the decoder (a neural network) and p(z | x) is the posterior.
