GAN Dissection: Visualizing and Understanding Generative Adversarial Networks


SLIDE 1

GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba

Ing. Jakub Žitný

Faculty of Information Technology, Czech Technical University in Prague

Supervisor: doc. Ing. Pavel Kordík, Ph.D.

April 6, 2020

SLIDE 2

Introduction

Dissertation topic: interpretability, explainability

Visualizing the Impact of Feature Attribution Baselines (Distill, doi:10.23915/distill.00022)

Focus on: generative models, medical imaging (applications)

Datasets: BraTS, KiTS, RA2, MURA

DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning

Figure: DC-GAN sample of tumour segmentation (KiTS dataset)

SLIDE 3

Motivation

  • How does a GAN represent our visual world internally?
  • What causes artifacts in GAN results?
  • How do architectural choices affect GAN learning?

SLIDE 4

Motivation

Does a GAN contain internal variables that correspond to the objects humans perceive? If so, do these variables cause the actual generation of those objects, or do they merely correlate with it?

SLIDE 5

Previous work

  • Network Dissection: Quantifying Interpretability of Deep Visual Representations (Bau, Zhou, et al., CVPR 2017)
  • Unified Perceptual Parsing for Scene Understanding (Zhou et al., ECCV 2018)
  • Generative Adversarial Nets (Goodfellow et al., NIPS 2014)
  • Progressive Growing of GANs for Improved Quality, Stability, and Variation (Karras et al., ICLR 2018)

SLIDE 6

Previous work

Generative Adversarial Networks

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$
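
The two networks are trained adversarially: D takes gradient steps to increase V(D, G) while G takes steps to decrease it. A minimal PyTorch sketch of one alternating update (the toy architectures, batch, and learning rates below are illustrative placeholders, not from the paper):

    import torch
    import torch.nn as nn

    # Toy stand-ins for the generator and discriminator.
    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
    D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

    x = torch.randn(32, 2)   # a batch of "real" samples (placeholder data)
    z = torch.randn(32, 64)  # latent codes z ~ p_z(z)

    # Discriminator ascends V: maximize log D(x) + log(1 - D(G(z))).
    d_loss = -(torch.log(D(x)) + torch.log(1 - D(G(z).detach()))).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator descends V: minimize log(1 - D(G(z))).
    g_loss = torch.log(1 - D(G(z))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()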

SLIDE 7

Previous work (image-only slide)

SLIDE 8

Method Overview

Method

  • 1. The information is present, but how?
  • 2. Characterizing units by dissection
  • 3. Measuring causal relationships using intervention

SLIDE 9

Method Overview

Take the tensor r produced at an intermediate layer of G: r = h(z). The image x is then generated from the random vector z through a composition of layers, x = f(r) = f(h(z)) = G(z), so x is a function of the internal representation r.
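
A minimal sketch of this split for a sequential generator (the split point `layer` is an illustrative choice; the paper examines several layers):

    import torch.nn as nn

    def split_generator(G: nn.Sequential, layer: int):
        """Split G into h (layers up to `layer`) and f (the remaining layers),
        so that f(h(z)) reproduces G(z)."""
        children = list(G.children())
        h = nn.Sequential(*children[:layer])
        f = nn.Sequential(*children[layer:])
        return h, f

    # r = h(z) is the internal featuremap; x = f(r) renders the image.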

SLIDE 10

Method Overview

Consider the featuremap r and a universe of concepts $c \in C$. Can we factor r at a set of locations P as $(r_{U,P}, r_{\bar{U},P})$, splitting the units into a set U and its complement $\bar{U}$, such that the generation of the concept depends on $r_{U,P}$ and not on $r_{\bar{U},P}$?

SLIDE 11

Method Dissection

Figure: Dissection pipeline. For a single unit u of the generator G, the featuremap is upsampled and thresholded ($r^{\uparrow}_{u,P} > t$); the generated image x is segmented by $s_c(x)$; and the spatial agreement is scored by $\mathrm{IoU}_{u,c}$ (e.g. tree vs. not tree). Which units correlate with an object class?

SLIDE 12

Method Dissection

Characterizing units by dissection

The intersection-over-union measure quantifies the spatial agreement between unit u's thresholded featuremap and concept c's segmentation:

$$\mathrm{IoU}_{u,c} \equiv \frac{\mathbb{E}_z \left| \left( r^{\uparrow}_{u,P} > t_{u,c} \right) \wedge s_c(x) \right|}{\mathbb{E}_z \left| \left( r^{\uparrow}_{u,P} > t_{u,c} \right) \vee s_c(x) \right|}$$

where the threshold $t_{u,c}$ maximizes the mutual information between the thresholded featuremap and the segmentation relative to their joint entropy:

$$t_{u,c} = \arg\max_t \frac{I\left( r^{\uparrow}_{u,P} > t ;\ s_c(x) \right)}{H\left( r^{\uparrow}_{u,P} > t ,\ s_c(x) \right)}$$
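
A minimal NumPy sketch of the per-sample IoU for one unit at a given threshold (the upsampled featuremap and binary segmentation mask are assumed to be computed elsewhere; in the paper the expectation runs over samples z and $t_{u,c}$ is tuned as above):

    import numpy as np

    def iou_unit_concept(feat_up: np.ndarray, seg: np.ndarray, t: float) -> float:
        """IoU between a unit's upsampled featuremap thresholded at t and a
        binary segmentation mask of concept c (arrays share the image shape)."""
        mask = feat_up > t
        union = np.logical_or(mask, seg).sum()
        if union == 0:
            return 0.0
        return np.logical_and(mask, seg).sum() / union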

SLIDE 13

Method Causality intervention

After identifying the units that match an object class closely, we want to know which of them are responsible for triggering the rendering of that object.

Figure: Intervention pipeline. The causal units U are forced off (yielding the ablated image $x_a$) or forced on (yielding the inserted image $x_i$) while the unforced units are left unchanged; both outputs are segmented, and comparing $s_c(x_i)$ with $s_c(x_a)$ measures the causal effect $\delta_{U \to c}$. Insert and remove units and observe causality.

SLIDE 14

Method Causality intervention

Causal relationships intervention

Original image: $x = G(z) \equiv f(r) \equiv f\left(r_{U,P}, r_{\bar{U},P}\right)$

Units U ablated at P: $x_a = f\left(0, r_{\bar{U},P}\right)$

Units U inserted at P: $x_i = f\left(k, r_{\bar{U},P}\right)$
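
A minimal sketch of both interventions, reusing the h/f split from Slide 9 (channel indices stand in for the units U, the spatial restriction to P is omitted for brevity, and k is a fixed insertion constant):

    import torch

    def intervene(h, f, z, units, k=None):
        """Render f(r) after forcing the given featuremap channels of r = h(z)
        to zero (ablation) or to the constant k (insertion)."""
        r = h(z).clone()
        r[:, units] = 0.0 if k is None else k
        return f(r)

    # x_a = intervene(h, f, z, units)         -> ablated image
    # x_i = intervene(h, f, z, units, k=5.0)  -> inserted image (k illustrative)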

SLIDE 15

Method Causality intervention

Average causal effect of units U on concept c:

$$\delta_{U \to c} \equiv \mathbb{E}_{z,P}[s_c(x_i)] - \mathbb{E}_{z,P}[s_c(x_a)]$$

Relaxed to partial ablations and insertions with a per-unit coefficient vector α:

$$x_a = f\left((1 - \alpha) \odot r_{U,P},\ r_{\bar{U},P}\right), \qquad x_i = f\left(\alpha \odot k + (1 - \alpha) \odot r_{U,P},\ r_{\bar{U},P}\right)$$

α is optimized with SGD under an L2 penalty:

$$\alpha^* = \arg\min_\alpha \left( -\delta_{\alpha \to c} + \lambda \|\alpha\|_2^2 \right)$$
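
A schematic of the α optimization, assuming a differentiable segmentation score s_c and the h/f split above (λ, the learning rate, and the step count are illustrative; this is a sketch, not the authors' code):

    import torch

    def optimize_alpha(h, f, s_c, z_batch, k, lam=0.005, steps=100):
        """Per-unit intervention coefficients alpha maximizing the causal effect
        delta_{alpha->c}, regularized by an L2 penalty."""
        r = h(z_batch).detach()                      # featuremap (N, C, H, W)
        alpha = torch.zeros(r.shape[1], requires_grad=True)
        opt = torch.optim.SGD([alpha], lr=0.01)
        for _ in range(steps):
            a = alpha.clamp(0, 1).view(1, -1, 1, 1)  # broadcast over N, H, W
            x_a = f((1 - a) * r)                     # partial ablation
            x_i = f(a * k + (1 - a) * r)             # partial insertion
            delta = s_c(x_i).mean() - s_c(x_a).mean()
            loss = -delta + lam * alpha.pow(2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        return alpha.detach()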

SLIDE 16

Method Causality intervention (image-only slide)

SLIDE 17

Method Causality intervention (image-only slide)

SLIDE 18

Method Causality intervention (image-only slide)

SLIDE 19

Method Results

Results

Practical implications

  • Debugging, monitoring, tracing
  • Controlling: tuning and composing GAN outputs

Observations

  • Usually multiple units are responsible for generating an object.
  • The first layers have no units that match semantic objects.
  • Later layers are dominated by low-level materials, edges, and colors.
  • The network learns the context of object locations (e.g. windows can appear on a building, but not in the sky).

SLIDE 20

Method Results

DEMO

SLIDE 21

Questions? Thank you