GAN Dissection: Visualizing and Understanding Generative Adversarial Networks


  1. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba. Presented by Ing. Jakub Žitný, Faculty of Information Technology, Czech Technical University in Prague. Supervisor: doc. Ing. Pavel Kordík, PhD. April 6, 2020

  2. Introduction. Dissertation topic: interpretability and explainability (e.g., Visualizing the Impact of Feature Attribution Baselines, doi:10.23915/distill.00022). Focus on generative models and medical-imaging applications: BraTS, KiTS, RA2, MURA, and DeepLesion (automated mining of large-scale lesion annotations and universal lesion detection with deep learning). Figure: DC-GAN sample of tumour segmentation (KiTS dataset).

  3. Motivation. How does a GAN represent our visual world internally? What causes artifacts in GAN results? How do architectural choices affect GAN learning?

  4. Motivation. Does a GAN contain internal variables that correspond to the objects humans perceive? If so, do those variables cause the actual generation, or do they merely correlate with it?

  5. Previous work. Network Dissection: Quantifying Interpretability of Deep Visual Representations (Bau, Zhou, et al., CVPR 2017). Unified Perceptual Parsing for Scene Understanding (Zhou et al., ECCV 2018). Generative Adversarial Nets (Goodfellow et al., NIPS 2014). Progressive Growing of GANs for Improved Quality, Stability, and Variation (Karras et al., ICLR 2018).

  6. Previous work: Generative Adversarial Networks. $\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$
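A minimal sketch of how this minimax objective is commonly trained in practice; this is an illustration, not code from the talk, and the function name gan_step is hypothetical. It assumes PyTorch models G and D, where D ends in a sigmoid so its output is a probability in (0, 1):

    # Illustrative sketch (not from the talk): one alternating step of the
    # GAN minimax game, assuming D(x) outputs a probability in (0, 1).
    import torch

    def gan_step(G, D, x, opt_d, opt_g, z_dim=128):
        z = torch.randn(x.size(0), z_dim, device=x.device)

        # Discriminator ascends E[log D(x)] + E[log(1 - D(G(z)))]
        d_loss = -(torch.log(D(x)).mean()
                   + torch.log(1 - D(G(z).detach())).mean())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator descends E[log(1 - D(G(z)))]
        g_loss = torch.log(1 - D(G(z))).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()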

  7. Previous work. (Figure-only slide.)

  8. Method: Overview. 1. The information is present, but how? 2. Characterizing units by dissection. 3. Measuring causal relationships using intervention.

  9. Method: Overview. A featuremap tensor $r$ is taken from an intermediate layer of $G$: $r = h(z)$. The image $x$ is generated from random $z$ through a composition of layers: $x = f(r) = f(h(z)) = G(z)$, so $x$ is a function of $r$.
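A hypothetical sketch of this decomposition, assuming the generator is a sequential PyTorch model that can be split at the probed layer (the helper split_generator is illustrative, not the authors' code):

    # Illustrative sketch: split a sequential generator G into h (up to the
    # probed layer) and f (the rest), so r = h(z) and x = f(r) = G(z).
    import torch
    import torch.nn as nn

    def split_generator(G: nn.Sequential, layer_idx: int):
        h = G[:layer_idx]   # z -> r
        f = G[layer_idx:]   # r -> x
        return h, f

    # Usage (assuming G = nn.Sequential(...) and a suitable z):
    # h, f = split_generator(G, 4)
    # r = h(z)
    # x = f(r)   # identical to G(z)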

  10. Method: Overview. Given the featuremap $r$ and a universe of concepts $c \in C$, can we factor $r$ at a set of units $U$ and locations $P$ as $r = (r_{U,P}, r_{\overline{U,P}})$, such that the generation of $c$ at locations $P$ depends on $r_{U,P}$ and not on $r_{\overline{U,P}}$?
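A small illustrative sketch of this factorization, assuming PyTorch tensors; the dimensions and indices are made up for the example:

    # Illustrative sketch: factor a featuremap r into r_{U,P} (chosen units U
    # at locations P) and its complement, via a boolean selector mask.
    import torch

    C, H, W = 512, 8, 8
    r = torch.randn(1, C, H, W)
    U = torch.zeros(C, dtype=torch.bool); U[[10, 42]] = True      # chosen units
    P = torch.zeros(H, W, dtype=torch.bool); P[2:6, 2:6] = True   # chosen locations

    mask = U.view(1, C, 1, 1) & P.view(1, 1, H, W)  # (1, C, H, W) selector
    r_UP = r * mask       # the r_{U,P} part
    r_rest = r * ~mask    # the complement; r == r_UP + r_rest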

  11. Method: Dissection. Figure: Which units correlate with an object class? (Diagram: a single unit $u$'s featuremap is upsampled and thresholded, $r_{u,P}^{\uparrow} > t$, and its agreement with the segmentation $s_c(x)$ of the generated image $x$ is measured as $\mathrm{IoU}_{u,c}$; the example concept is "tree".)

  12. Method: Dissection. Characterizing units by dissection: the intersection-over-union measure of spatial agreement between unit $u$'s thresholded featuremap and concept $c$'s segmentation is $\mathrm{IoU}_{u,c} \equiv \mathbb{E}_z |(r_{u,P}^{\uparrow} > t_{u,c}) \wedge s_c(x)| \,/\, \mathbb{E}_z |(r_{u,P}^{\uparrow} > t_{u,c}) \vee s_c(x)|$, where the threshold is chosen to maximize the information quality ratio $t_{u,c} = \arg\max_t I(r_{u,P}^{\uparrow} > t;\, s_c(x)) \,/\, H(r_{u,P}^{\uparrow} > t,\, s_c(x))$.
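A minimal sketch of the IoU computation for a single sample; the function iou_unit_concept is hypothetical, and the paper's expectation over $z$ would be taken by averaging the intersection and union counts over many samples:

    # Illustrative sketch: IoU agreement between one unit's thresholded,
    # upsampled featuremap and a binary concept segmentation mask.
    import torch
    import torch.nn.functional as F

    def iou_unit_concept(r_u, seg_c, t):
        """r_u: (H', W') featuremap of one unit; seg_c: (H, W) binary mask; t: threshold."""
        # Upsample the unit's featuremap to the segmentation resolution.
        r_up = F.interpolate(r_u[None, None], size=seg_c.shape,
                             mode='bilinear', align_corners=False)[0, 0]
        on = r_up > t
        inter = (on & seg_c.bool()).sum().float()
        union = (on | seg_c.bool()).sum().float()
        return (inter / union.clamp(min=1)).item()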

  13. Method: Causality intervention. After identifying units that closely match an object class, we want to know which of them are responsible for triggering the rendering of that object. Figure: Insert and remove units and observe causality. (Diagram: forcing $r_{U,P}$ on yields an inserted image $x_i$ with segmentation $s_c(x_i)$; forcing $r_{U,P}$ off yields an ablated image $x_a$ with segmentation $s_c(x_a)$; comparing the two gives the causal effect $U \to c$.)

  14. Method: Causality intervention. Causal relationships via intervention. Original image: $x = G(z) \equiv f(r) \equiv f(r_{U,P}, r_{\overline{U,P}})$. Units $U$ ablated at $P$: $x_a = f(0, r_{\overline{U,P}})$. Units $U$ inserted at $P$: $x_i = f(k, r_{\overline{U,P}})$.
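A minimal sketch of both interventions, assuming the h/f split from slide 9; intervene is a hypothetical helper, not the authors' code:

    # Illustrative sketch: ablate (force 0) or insert (force constant k) a set
    # of units U inside a spatial region P of the featuremap r, then re-render.
    import torch

    def intervene(f, r, units, region_mask, k=None):
        """f: layers after r; r: (1, C, H, W); units: channel indices;
        region_mask: (H, W) bool; k: None to ablate, or a constant to insert."""
        r_edit = r.clone()
        for u in units:
            r_edit[0, u][region_mask] = 0.0 if k is None else k
        return f(r_edit)  # x_a if k is None, x_i otherwise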

  15. Method: Causality intervention. Average causal effect of units $U$ on $c$: $\delta_{U \to c} \equiv \mathbb{E}_{z,P}[s_c(x_i)] - \mathbb{E}_{z,P}[s_c(x_a)]$. Relaxed to partial ablations/insertions with per-unit coefficients $\alpha$: $x_a = f((1 - \alpha) \odot r_{U,P},\, r_{\overline{U,P}})$ and $x_i = f(\alpha \odot k + (1 - \alpha) \odot r_{U,P},\, r_{\overline{U,P}})$. The coefficients are optimized by SGD with an L2 penalty: $\alpha^* = \arg\min_\alpha (-\delta_{\alpha \to c} + \lambda \lVert \alpha \rVert^2)$.
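A sketch of this relaxed optimization under stated assumptions: optimize_alpha is hypothetical, seg_c stands for a differentiable concept score (in the paper, a segmentation network), and hyperparameters are placeholders:

    # Illustrative sketch: learn per-unit causal coefficients alpha by SGD,
    # maximizing the causal effect delta with an L2 penalty on alpha.
    import torch

    def optimize_alpha(f, seg_c, r_batch, region_mask, k,
                       steps=100, lr=0.01, lam=0.005):
        """r_batch: (N, C, H, W) featuremaps; seg_c: differentiable score of
        concept c in an image; region_mask: (H, W) bool for locations P."""
        C = r_batch.size(1)
        alpha = torch.full((C,), 0.5, requires_grad=True)
        opt = torch.optim.SGD([alpha], lr=lr)
        m = region_mask.float()                    # broadcast over channels
        for _ in range(steps):
            a = alpha.clamp(0, 1).view(1, C, 1, 1) * m
            x_i = f(a * k + (1 - a) * r_batch)     # partial insertion
            x_a = f((1 - a) * r_batch)             # partial ablation
            delta = seg_c(x_i).mean() - seg_c(x_a).mean()
            loss = -delta + lam * alpha.pow(2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        return alpha.detach().clamp(0, 1)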

  16. Method: Causality intervention. (Figure-only slide.)

  17. Method: Causality intervention. (Figure-only slide.)

  18. Method: Causality intervention. (Figure-only slide.)

  19. Results. Practical implications: debugging, monitoring, and tracing GAN behaviour; controlling, i.e., tuning and composing GAN outputs. Observations: usually multiple units are responsible for generating an object; the first layer has no units that match semantic objects, while later layers are dominated by low-level materials, edges, and colors; the network learns the context of object locations (e.g., windows can appear on buildings, but not in the sky).

  20. Results: DEMO

  21. Questions? Thank you
