Multi-objective training of Generative Adversarial Networks with multiple discriminators


  1. Multi-objective training of Generative Adversarial Networks with multiple discriminators. Isabela Albuquerque*, João Monteiro*, Thang Doan, Breandan Considine, Tiago Falk, and Ioannis Mitliagkas. (*Equal contribution)

  2. The multiple discriminators GAN setting
◮ Recent literature proposed to tackle GAN training instability issues (mode collapse, vanishing gradients) with multiple discriminators (Ds):
  1. Generative multi-adversarial networks, Durugkar et al. (2016)
  2. Stabilizing GAN training with multiple random projections, Neyshabur et al. (2017)
  3. Online Adaptative Curriculum Learning for GANs, Doan et al. (2018)
  4. Domain Partitioning Network, Csaba et al. (2019)

  3. The multiple discriminators GAN setting (figure-only slide)

  4. Our work (figure-only slide)

  5. Our work
Cast generator training as minimization of a vector of adversarial losses:
$$\min \; \mathbf{L}_G(z) = [l_1(z), l_2(z), \ldots, l_K(z)]^T$$
◮ Each $l_k = -\mathbb{E}_{z \sim p_z} \log D_k(G(z))$ is the loss provided by the $k$-th discriminator

  6. Our work
$$\min \; \mathbf{L}_G(z) = [l_1(z), l_2(z), \ldots, l_K(z)]^T$$
◮ Multiple gradient descent (MGD) is a natural choice to solve this multi-objective problem
◮ But it might be too costly
◮ Alternative: maximize the hypervolume (HV) of a single solution (see the loss sketch below)
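As a concrete reading of the objective above, here is a minimal PyTorch sketch of the loss vector; the generator `G`, the list of discriminators `ds`, and the latent minibatch `z` are hypothetical placeholders, not the authors' released code:

```python
import torch

def generator_loss_vector(G, ds, z):
    """Minibatch estimate of [l_1, ..., l_K],
    where l_k = -E_{z ~ p_z} log D_k(G(z))."""
    fake = G(z)
    # Each D_k is assumed to output a probability in (0, 1).
    return torch.stack([-torch.log(D(fake)).mean() for D in ds])
```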

  7. Multiple gradient descent
◮ Seeks a Pareto-stationary solution
◮ Two steps:
  1. Find a common descent direction for all $l_k$: the minimum-norm element within the convex hull of all $\nabla l_k(x_t)$,
$$w_t^* = \operatorname{argmin}_w \|w\|^2, \quad w = \sum_{k=1}^{K} \alpha_k \nabla l_k(x_t), \quad \text{s.t.} \; \sum_{k=1}^{K} \alpha_k = 1, \; \alpha_k \geq 0 \; \forall k$$
  2. Update the parameters with $x_{t+1} = x_t - \lambda_t \frac{w_t^*}{\|w_t^*\|}$ (a solver sketch follows below)
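The minimum-norm subproblem is a quadratic program over the simplex; one standard solver is Frank-Wolfe, as used for multi-task learning by Sener & Koltun (2018). The sketch below assumes the per-loss gradients have already been flattened into the rows of a `[K, P]` tensor, and is illustrative rather than the authors' implementation:

```python
import torch

def mgd_direction(grads, iters=50):
    """Minimum-norm point in the convex hull of the rows of `grads` ([K, P]),
    via Frank-Wolfe on f(alpha) = 0.5 * ||sum_k alpha_k * grad_k||^2."""
    K = grads.shape[0]
    alpha = torch.full((K,), 1.0 / K)   # start at the centroid of the simplex
    gram = grads @ grads.t()            # [K, K] Gram matrix of the gradients
    for t in range(iters):
        k = torch.argmin(gram @ alpha)  # simplex vertex minimizing the linearized objective
        vertex = torch.zeros(K)
        vertex[k] = 1.0
        gamma = 2.0 / (t + 2.0)         # standard Frank-Wolfe step size
        alpha = (1.0 - gamma) * alpha + gamma * vertex
    return alpha @ grads, alpha         # common descent direction w*_t and its weights

# The parameter update on the slide is then: x = x - lr * w / w.norm()
```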

  8. Hypervolume maximization for training GANs
(figure: loss space with axes $l_1$ and $l_2$ for discriminators $D_1$ and $D_2$; the nadir point $\eta$ and the generator loss $\mathcal{L}_G$ mark the hypervolume being maximized)

  9. Hypervolume maximization for training GANs
(figure: same loss-space diagram as the previous slide)
The generator minimizes the negative log-hypervolume:
$$\mathcal{L}_G = -\sum_{k=1}^{K} \log(\eta - l_k)$$
with gradient
$$\frac{\partial \mathcal{L}_G}{\partial \theta} = \sum_{k=1}^{K} \frac{1}{\eta - l_k} \frac{\partial l_k}{\partial \theta}$$

  10. Hypervolume maximization for training GANs
Same objective and gradient as above, with the nadir point updated adaptively at each step:
$$\eta^t = \delta \max_k \{ l_k^t \}, \quad \delta > 1$$
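The objective and the adaptive nadir point translate directly into a few lines of PyTorch. A minimal sketch, assuming `losses` holds the tensor $[l_1, \ldots, l_K]$ from earlier and that $\eta$ is treated as a constant during backpropagation:

```python
import torch

def hv_generator_loss(losses, delta=1.1):
    """Hypervolume-maximization surrogate L_G = -sum_k log(eta - l_k),
    with adaptive nadir point eta^t = delta * max_k l_k^t (delta > 1)."""
    eta = delta * losses.max().detach()  # nadir point, held constant for the gradient
    return -torch.log(eta - losses).sum()
```

Differentiating this loss reproduces the gradient on the slide: each $\partial l_k / \partial \theta$ is weighted by $1/(\eta - l_k)$, so the discriminators against which the generator is currently doing worst contribute most to the update.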

  11. MGD vs. HV maximization vs. average loss minimization
◮ MGD seeks a Pareto-stationary solution
  ◮ $x_{t+1} \prec x_t$ (each iterate dominates the previous one)
◮ HV maximization seeks Pareto-optimal solutions
  ◮ $HV(x_{t+1}) > HV(x_t)$
  ◮ For the single-solution case, central regions of the Pareto front are preferred
◮ Average loss minimization does not enforce equally good individual losses
  ◮ Might be problematic in case there is a trade-off between discriminators (see the toy example below)
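A toy numeric illustration of the last point (hypothetical numbers, not from the paper): with an imbalanced loss vector, averaging weights all discriminators equally, while the HV weights $1/(\eta - l_k)$ concentrate on the worst loss:

```python
losses = [0.9, 0.1]            # generator doing badly against D_1 only
eta = 1.1 * max(losses)        # nadir point with delta = 1.1 -> eta = 0.99
avg_weights = [0.5, 0.5]       # average loss: uniform weight per l_k
hv_weights = [1.0 / (eta - l) for l in losses]
print(avg_weights)             # [0.5, 0.5]
print(hv_weights)              # [~11.11, ~1.12] -> the worst loss dominates
```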

  12. MNIST
◮ Same architecture, hyperparameters, and initialization for all methods
◮ 8 Ds, 100 epochs
◮ FID was computed with a LeNet trained on MNIST to 98% test accuracy
(plots: FID on MNIST over wall-clock time until best FID, in minutes, and best FID achieved during training, for AVG, GMAN, HV, and MGD)
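For reference, FID is the Fréchet distance between Gaussian fits to real and generated feature statistics. A standard NumPy/SciPy sketch of the formula, where the features would come from the MNIST-trained LeNet mentioned above (the authors' exact evaluation code may differ):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """FID = ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^(1/2))."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s_r = np.cov(feats_real, rowvar=False)
    s_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s_r @ s_f)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(s_r + s_f - 2.0 * covmean))
```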

  13. Upscaled CIFAR-10: computational cost
◮ Different GANs, each with both 1 and 24 Ds + HV
◮ Same architecture and initialization for all methods
◮ Comparison of minimum FID obtained during training, along with computational cost in terms of time and space

               # Disc.   FID-ResNet   FLOPS*   Memory
  DCGAN           1          4.22       8e10     1292
                 24          1.89       5e11     5671
  LSGAN           1          4.55       8e10     1303
                 24          1.91       5e11     5682
  HingeGAN        1          6.17       8e10     1303
                 24          2.25       5e11     5682

  *Floating point operations per second

◮ Additional cost → performance improvement

  14. Cats 256 × 256 (image-only slide: generated cat samples)

  15. Thank you! Questions? Come to our poster (#4)! Code: https://github.com/joaomonteirof/hGAN
