Latent Bernoulli Autoencoder (ICML 2020)


  1. Latent Bernoulli Autoencoder
     ICML 2020
     Jiri Fajtl 1, Vasileios Argyriou 1, Dorothy Monekosso 2 and Paolo Remagnino 1
     1 Kingston University, London, UK
     2 Leeds Beckett University, Leeds, UK
     August 15, 2020

  2. Motivation
     Questions:
     - Can we build a deterministic autoencoder that learns a discrete latent space with competitive performance?
     - How can we sample from this latent space?
     - How can we interpolate between given samples in this latent space?
     - Can we modify sample attributes in the latent space, and how?
     - What are the simplest possible solutions to the above?
     Why discrete representations?
     - Gating, hard attention, memory addressing
     - Compact representation for storage and compression
     - Encoding for energy models such as Hopfield memory [1] or HTM [2]
     - Interpretability

  3. Latent Bernoulli Autoencoder (LBAE)
     - We propose a simple, deterministic encoder-decoder model that learns a multivariate Bernoulli distribution in the latent space by binarizing continuous activations.
     - For an N-dimensional latent space, the information bottleneck of a typical autoencoder is replaced in the LBAE with tanh() followed by a binarization f_b() ∈ {−1, 1}^N, with a unit-gradient surrogate function f_s() used in the backward pass.
     [Figure: encoder g_𝜚(X) → z = tanh(h) → binarization b = f_b(z) → decoder f_𝜾(b) → X', trained with the MSE loss L = ||X − X'||²; the surrogate gradient ∂f_s(z)/∂z = 1 is used in the backward pass. Black: forward pass, yellow: backward pass.]

  4. Sampling From the Bernoulli Distribution
     - Without enforcing any prior on the latent space, the learned distribution is unknown.
     - We parametrize the distribution by its first two moments, learned from latents encoded on the training data.
     - Given the first two moments, the dimensions of the binary latent space are relaxed into vectors on a unit hypersphere.
     - A random Bernoulli vector with the distribution of the latent space is generated by randomly splitting the hypersphere and assigning logical ones to the latent dimensions represented by vectors in one hemisphere and zeros to the rest (encoded as {−1, 1}).
     [Figure: matrix of moments → H ((N+1)×(N+1)) → random hyperplane r ∼ 𝒩_(N+1)(0, I_(N+1)) → binary latent b → decoder → X'.]

  5. Interpolation in Latent Space
     - Given the latent representations of two images, generate latents producing an interpolation in the image space.
     - For the source and target latents, find hyperplanes on the hypersphere.
     - Divide the angle between the source and target hyperplane normals into T steps and produce a new hyperplane for each step.
     - Decode these hyperplanes into latents and then into images.
     [Figure: source and target images are encoded to latents, latents are mapped to hyperplanes, intermediate hyperplanes are mapped back to latents and decoded to images.]

  6. Changing Attributes
     - Statistically significant attributes of the training data can be identified in the latent space, e.g. images of faces with eyeglasses.
     - There is no need to train the LBAE in a conditional setting.
     - Collect latents of samples with the given attribute and find the most positively and negatively correlated latent bits.
     - The attribute is then modified by changing these bits in the latent vector, as sketched in the code below.
     [Figure: an input image is encoded to a binary latent, the "eyeglasses" attribute bits are set, and the modified latent is decoded.]
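A minimal sketch (in Python/NumPy, not the authors' code) of this attribute edit: collect latents of samples with and without the attribute, rank latent bits by their correlation with the attribute, and overwrite the top bits in a new latent. The array names, shapes and the top_k cutoff are assumptions.

```python
import numpy as np

# `latents` holds one {-1, 1} latent per row (shape (K, N), assumed);
# `has_attr` marks which of the K rows have the attribute.

def attribute_bits(latents, has_attr, top_k=32):
    """Indices and target values of the bits most correlated with the attribute."""
    a = np.where(has_attr, 1.0, -1.0)             # attribute indicator in {-1, 1}
    corr = (latents * a[:, None]).mean(axis=0)    # per-bit correlation, in [-1, 1]
    idx = np.argsort(-np.abs(corr))[:top_k]       # strongest |correlation| first
    return idx, np.sign(corr[idx])                # +1 bits to set, -1 bits to clear

def set_attribute(b, idx, values):
    """Overwrite the selected bits of a single latent vector b in {-1, 1}^N."""
    b_mod = b.copy()
    b_mod[idx] = values
    return b_mod

# Usage: idx, vals = attribute_bits(train_latents, glasses_mask)
#        b_glasses = set_attribute(b, idx, vals)   # decode b_glasses to get the image
```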

  7. Results
     - Reconstructions on the test datasets
     - Random samples
     - Interpolations on the test datasets
     - Adding the eyeglasses and goatee CelebA attributes on the test dataset
     - Quantitative results at the end of the presentation

  8. Deep Dive
     - Learning the Bernoulli latent space
     - Sampling correlated multivariate Bernoulli latents
     - Interpolation in the latent space
     - Changing sample attributes
     - Quantitative and qualitative results
     - Conclusion

  9. Learning the Bernoulli Latent Space
     - Binarization is problematic with gradient-based methods: it is not differentiable, so there is no backpropagation through it.
     - We leave the non-differentiable binarization function in the forward pass and bypass it during backpropagation, as proposed earlier by Hinton and Bengio.
     - However, convergence is slow or impossible without limiting the magnitude of the error gradient in the encoder.
     - Limiting the activation to [−1, 1] with tanh() alleviates this issue.

  10. Learning the Bernoulli Latent Space
      - For an N-dimensional latent space, we replace the information bottleneck of a typical autoencoder with tanh() followed by the binarization f_b(z_i) = 1 if z_i ≥ 0 and −1 otherwise, with a unit-gradient surrogate function f_s() for the backward pass (see the sketch after this slide).
      [Figure: encoder g_𝜚(X) → z = tanh(h) → binarization b = f_b(z) → decoder f_𝜾(b) → X', trained with the MSE loss L = ||X − X'||²; the surrogate gradient ∂f_s(z)/∂z = 1 is used in the backward pass.]
      - We found lower overfitting with the binarization compared to an identical AE with continuous latents of similar bit size.
      - The quantization noise helps with regularization.
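A minimal PyTorch sketch of the bottleneck described above: tanh() followed by sign binarization in the forward pass, with a unit-gradient (straight-through) surrogate in the backward pass. The module and variable names are assumptions; the encoder and decoder are left out.

```python
import torch

class Binarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, z):
        # Forward: b = f_b(z) = +1 if z >= 0 else -1.
        return torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))

    @staticmethod
    def backward(ctx, grad_output):
        # Backward: unit-gradient surrogate f_s, i.e. pass the gradient through unchanged.
        return grad_output

class LBAEBottleneck(torch.nn.Module):
    def forward(self, h):
        z = torch.tanh(h)         # limit activations to [-1, 1]
        b = Binarize.apply(z)     # binary latent in {-1, 1}^N
        return b

# Usage with any encoder/decoder pair:
# b = LBAEBottleneck()(encoder(x)); loss = ((decoder(b) - x) ** 2).mean()
```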

  11. Latent Space Representation
      - Without enforcing any prior on the latent space, the learned distribution is unknown.
      - How should we parametrize the latent distribution? GMM, KDE, autoregressive models, ...?
      - The marginal Bernoulli distribution of a single dimension is unimodal with expectation p = E[b], which limits the information it can carry; most of the information is carried by higher moments.
      - We parametrize the latent distribution by its first and second non-central moments, learned from latents encoded on the training dataset.
      - Our method is based on the random hyperplane rounding proposed by Goemans and Williamson for the MAX-CUT algorithm [3].

  12. Latent Space Representation
      - Relax the latent dimensions into unit vectors on a hypersphere.
      - Set the angles between the vectors to be proportional to the covariances of the corresponding latent dimensions.
      - Add a boundary vector (yellow) representing the expected value of the distribution.
      [Figure: unit vectors on the hypersphere, one per dimension of the binary latent b ∈ {−1, 1}^N, plus the boundary vector.]

  13. Latent Space Parametrization
      - Consider a matrix Y ∈ {−1, 1}^(N×K) of K N-dimensional latents encoded on the training dataset.
      - Parametrize the latent space distribution by its first two moments as
            M = [ E[YYᵀ]  E[Y] ; E[Y]ᵀ  1 ],   M ∈ [−1, 1]^((N+1)×(N+1))
      - Generate N+1 unit-length vectors on a sphere S^(N+1), organized as rows of a matrix V ∈ R^((N+1)×(N+1)), with ‖V_i‖ = 1 for all i ∈ [1, .., N+1].
      - Set the angles α_{i,j} between pairs of vectors (V_i, V_j) as:
        - α_{i,j} → 0 for high positive covariance
        - α_{i,j} → π for high negative covariance
        - α_{i,j} ≈ π/2 for independent dimensions
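A minimal NumPy sketch of building the moment matrix M from encoded training latents, following the definition above. The shape convention (Y with one latent per column) and the function name are assumptions.

```python
import numpy as np

def moment_matrix(Y):
    """Y: (N, K) matrix of K binary latents in {-1, 1}; returns M of shape (N+1, N+1)."""
    K = Y.shape[1]
    second = Y @ Y.T / K                    # E[Y Y^T], shape (N, N)
    first = Y.mean(axis=1, keepdims=True)   # E[Y], shape (N, 1)
    top = np.hstack([second, first])
    bottom = np.hstack([first.T, np.ones((1, 1))])
    return np.vstack([top, bottom])         # M in [-1, 1]^((N+1) x (N+1))
```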

  14. Latent Space Parametrization
      - Relate the covariances in M to the angle α_{i,j} and the scalar product ⟨V_i, V_j⟩:
            (1/2)(M_{i,j} + 1) = 1 − α_{i,j}/π = 1 − cos⁻¹(⟨V_i, V_j⟩)/π
      - Get V as a function of M:
            H_{i,j} = cos( (π/2)(1 − M_{i,j}) )
        where H is a Gram matrix, H_{i,j} = ⟨V_i, V_j⟩, H = VVᵀ, subject to H ⪰ 0, and V is a row-normalized lower-triangular matrix obtained by Cholesky decomposition, whose rows are the desired unit vectors on S^(N+1).
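A minimal NumPy sketch of the mapping from M to the unit vectors V: form the Gram matrix H, take a Cholesky factor, and row-normalize it. The small diagonal jitter used to keep H positive definite is an assumption, not part of the slide.

```python
import numpy as np

def hypersphere_vectors(M, jitter=1e-6):
    H = np.cos(np.pi / 2.0 * (1.0 - M))                 # Gram matrix H_ij = <V_i, V_j>
    H = H + jitter * np.eye(H.shape[0])                 # nudge towards positive definite (assumption)
    V = np.linalg.cholesky(H)                           # lower-triangular factor, H = V V^T
    V = V / np.linalg.norm(V, axis=1, keepdims=True)    # row-normalize to unit length
    return V                                            # rows are unit vectors on S^(N+1)
```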

  15. Sampling Correlated Multivariate Bernoulli Latents
      - Generate a random hyperplane through the center of S^(N+1) (green), with normal r ∼ 𝒩_(N+1)(0, I_(N+1)).
      - Set positive states (red) for the dimensions represented by vectors in the hemisphere shared with the boundary vector V_(N+1) (yellow), and negative states for the rest:
            b_i = 1 if f_b(⟨V_i, r⟩) = f_b(⟨V_(N+1), r⟩), and −1 otherwise, for all i ∈ [1, .., N]
      [Figure: a random hyperplane splitting the unit sphere; vectors on the boundary-vector side are assigned +1, the rest −1. See the sampling sketch below.]
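A minimal NumPy sketch of this hyperplane rounding, reusing hypersphere_vectors() from the previous sketch; the tie-breaking at exactly zero is an assumption.

```python
import numpy as np

def sample_latent(V, rng=None):
    """Draw one correlated Bernoulli latent in {-1, 1}^N from the (N+1, N+1) matrix V."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.standard_normal(V.shape[0])      # random hyperplane normal, r ~ N(0, I)
    side = np.sign(V @ r)                    # which hemisphere each vector falls in
    side[side == 0] = 1.0                    # break ties toward +1 (assumption)
    return np.where(side[:-1] == side[-1], 1.0, -1.0)  # compare with boundary vector V_(N+1)
```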

  16. Sampling Correlated Multivariate Bernoulli Latents
      - Why not sample from a multivariate normal distribution followed by rounding?
            Σ = E[YYᵀ] − E[Y]E[Y]ᵀ,   z ∼ 𝒩_N(0, I_N)
            b = f_b(Lz + E[Y]),   b ∈ {−1, 1}^N
        where Σ = LLᵀ is a lower-triangular Cholesky decomposition.
      [Figure: (a) sorted marginal probabilities p(z_i = 1) over the latent dimensions, (b) vectorized, sorted covariances C(i, j); ground truth (GT) vs. LBAE hyperplane sampling vs. direct binarization of normal samples. GT and LBAE sampling appear identical; the GT curve (blue) is mostly hidden behind the red.]
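For contrast, a minimal NumPy sketch of the Gaussian-plus-rounding baseline described by the equations above ("Direct bin." in the figure); the diagonal jitter before the Cholesky factorization is an assumption.

```python
import numpy as np

def sample_latent_gaussian_rounding(Y, rng=None):
    """Baseline: binarize a correlated Gaussian sample; Y is (N, K) with entries in {-1, 1}."""
    rng = np.random.default_rng() if rng is None else rng
    mean = Y.mean(axis=1)                                # E[Y]
    cov = Y @ Y.T / Y.shape[1] - np.outer(mean, mean)    # Sigma = E[YY^T] - E[Y]E[Y]^T
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(mean)))  # jitter for stability (assumption)
    z = rng.standard_normal(len(mean))
    return np.where(L @ z + mean >= 0, 1.0, -1.0)        # b = f_b(Lz + E[Y])
```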

  17. Interpolation in Bernoulli Latent Space
      - Encode the source and target images to latents s and t.
      - For each, find a hyperplane normal r_s and r_t that generates the original latent.
      - Get T equally spaced vectors r_i, i ∈ [1, ..., T], between r_s and r_t.
      - For each hyperplane with normal r_i, generate a latent and decode it to an image, as sketched below.
      [Figure: source and target images are encoded to latents, latents are mapped to hyperplanes, intermediate hyperplanes are mapped back to latents and decoded to images.]
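A minimal NumPy sketch of the interpolation, assuming the hyperplane normals r_s and r_t that reproduce the source and target latents are already available (the slide does not detail how they are found) and that the equally spaced vectors are obtained by spherical interpolation of the normals. Each intermediate normal is rounded to a latent exactly as in the sampling sketch above.

```python
import numpy as np

def interpolate_latents(V, r_s, r_t, T=8):
    r_s = r_s / np.linalg.norm(r_s)
    r_t = r_t / np.linalg.norm(r_t)
    omega = np.arccos(np.clip(r_s @ r_t, -1.0, 1.0))   # angle between the two normals
    latents = []
    for step in range(T):
        a = step / (T - 1)
        if omega < 1e-8:                               # normals (almost) coincide
            r = r_s
        else:
            # Spherical interpolation: sweep the normal from r_s to r_t in equal angle steps.
            r = (np.sin((1 - a) * omega) * r_s + np.sin(a * omega) * r_t) / np.sin(omega)
        side = np.sign(V @ r)
        side[side == 0] = 1.0
        latents.append(np.where(side[:-1] == side[-1], 1.0, -1.0))
    return latents  # decode each latent with the trained decoder to obtain the images
```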
