Interpretability of Machine Learning for Computer Vision, Xinshuo Weng - PowerPoint PPT Presentation



SLIDE 1

Interpretability of Machine Learning for Computer Vision

Xinshuo Weng*

*Most slides borrowed from CVPR 2018 tutorial and Stanford

SLIDE 2

Type of interpretability methods

SLIDE 3

Type of interpretability methods

SLIDE 4

Type of interpretability methods

SLIDE 5

Type of interpretability methods

SLIDE 6

Understanding Models at Different Granularity

  • What is a unit doing?
  • What are all the units doing?
  • How are units relevant to prediction? Understanding the explainable model.
SLIDE 7
  • Visualize the filter

What is a unit doing? - Visualize the individual unit

Krizhevsky et al, “ImageNet Classification with Deep Convolutional Neural Networks”, NIPS, 2012.
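For intuition, first-layer filters can be rendered directly as small RGB images and tiled into one grid, as in the classic AlexNet conv1 visualization. A minimal numpy sketch (the function name and shapes are illustrative, not the original code):

```python
import numpy as np

def tile_filters(weights, pad=1):
    """Arrange first-layer conv filters of shape (N, H, W, C) into one grid
    image; each filter is min-max normalized independently for display."""
    n, h, w, c = weights.shape
    cols = int(np.ceil(np.sqrt(n)))
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * (h + pad) - pad, cols * (w + pad) - pad, c))
    for i in range(n):
        f = weights[i]
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)  # per-filter [0, 1]
        r, col = divmod(i, cols)
        grid[r*(h+pad):r*(h+pad)+h, col*(w+pad):col*(w+pad)+w] = f
    return grid

# e.g. 96 AlexNet-style conv1 filters of size 11x11x3
grid = tile_filters(np.random.randn(96, 11, 11, 3))
```

The resulting array can be handed straight to an image viewer, since every pixel lies in [0, 1].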

SLIDE 8
  • Visualize the filter
  • Visualize the activation

What is a unit doing? - Visualize the individual unit

Yosinski et al, “Understanding Neural Networks Through Deep Visualization”, ICML, 2015.
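Tools like the Deep Visualization Toolbox render each unit's feature map as a grayscale image as the input changes; the rendering step is just per-channel normalization. A numpy sketch (names and shapes are illustrative):

```python
import numpy as np

def activation_images(fmap):
    """Turn one layer's feature maps (C, H, W) into displayable grayscale
    images, one per unit, by min-max normalizing each channel."""
    lo = fmap.min(axis=(1, 2), keepdims=True)
    hi = fmap.max(axis=(1, 2), keepdims=True)
    return (fmap - lo) / (hi - lo + 1e-8)

fmap = np.random.randn(256, 13, 13)   # e.g. conv5 activations for one input
imgs = activation_images(fmap)        # every channel now lies in [0, 1]
```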

SLIDE 9
  • Visualize the filter
  • Visualize the activation
  • Visualize the corresponding images
  • Top activated images (NN)

What is a unit doing? - Visualize the individual unit

Yosinski et al, “Understanding Neural Networks Through Deep Visualization”, ICML, 2015.
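Retrieving the top activated images for a unit is only an argsort over its per-image responses. A hypothetical sketch in numpy:

```python
import numpy as np

def top_activated(unit_scores, k=9):
    """Indices of the k dataset images that most strongly activate a unit.
    unit_scores: shape (num_images,), e.g. each image's max spatial
    response for one conv unit."""
    return np.argsort(unit_scores)[::-1][:k]

scores = np.array([0.1, 2.5, 0.7, 3.1, 1.2])
print(top_activated(scores, k=3))  # indices 3, 1, 4: the strongest images
```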

SLIDE 10
  • Visualize the filter
  • Visualize the activation
  • Visualize the corresponding images
  • Top activated images (NN)
  • Deconvolution

What is a unit doing? - Visualize the individual unit

Zeiler and Fergus, “Visualizing and Understanding Convolutional Networks”, ECCV, 2014.

SLIDE 11
  • Visualize the filter
  • Visualize the activation
  • Visualize the corresponding images
  • Top activated images (NN)
  • Deconvolution
  • Back-propagation

What is a unit doing? - Visualize the individual unit

SLIDE 12
  • Visualize the filter
  • Visualize the activation
  • Visualize the corresponding images
  • Top activated images (NN)
  • Deconvolution
  • Back-propagation

What is a unit doing? - Visualize the individual unit

Erhan et al, “Visualizing Higher-Layer Features of a Deep Network”, University of Montreal, 2009.
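Activation maximization in this style starts from a random input and follows the gradient of one unit's response. A toy numpy version where the "unit" is an analytically differentiable quadratic (the objective, weights, and step size are all illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # hypothetical unit weights
lam = 0.5                        # L2 penalty keeping the input bounded
x = rng.normal(size=16)          # random initial "image"

for _ in range(500):
    grad = w - 2 * lam * x       # gradient of (w.x - lam * ||x||^2)
    x += 0.05 * grad             # gradient ascent step

# x converges to the unit's preferred input, w / (2 * lam)
```

With a real network the same loop runs on the image pixels, with the gradient supplied by back-propagation.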

SLIDE 13
  • Visualize the filter
  • Visualize the activation
  • Visualize the corresponding images
  • Top activated images (NN)
  • Deconvolution
  • Back-propagation

What is a unit doing? - Visualize the individual unit

Springenberg et al, “Striving for Simplicity: the All Convolutional Net”, ICLR 2015.
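The "guided backpropagation" variant proposed here changes only the ReLU backward rule: a gradient passes where the forward input was positive and the incoming gradient is positive. A one-layer numpy illustration (a sketch, not library code):

```python
import numpy as np

def guided_relu_backward(x, grad_out):
    """Guided-backprop rule for one ReLU: keep a gradient entry only if
    the forward input x was positive AND the incoming gradient is positive."""
    return grad_out * (x > 0) * (grad_out > 0)

x = np.array([-1.0, 2.0, 3.0, -0.5])   # forward inputs to the ReLU
g = np.array([ 0.7, -0.2, 0.5,  0.9])  # gradients flowing back
print(guided_relu_backward(x, g))      # only the third entry survives
```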

SLIDE 14
  • Visualize the features
  • Dimensionality reduction: t-SNE

What are all the units doing?

[Figure: PCA vs. t-SNE embeddings of the same features]

Maaten and Hinton, “Visualizing Data using t-SNE”, JMLR, 2008.
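t-SNE itself is usually run through an off-the-shelf implementation (e.g. `sklearn.manifold.TSNE`); the linear PCA baseline it is contrasted with fits in a few lines of numpy (shapes are illustrative):

```python
import numpy as np

def pca_2d(features):
    """Project high-dimensional features to 2-D with PCA via SVD; the
    right singular vectors of the centered data are the principal axes."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

feats = np.random.randn(100, 4096)   # e.g. 100 fc7 feature vectors
xy = pca_2d(feats)                   # (100, 2) points ready to scatter-plot
```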

SLIDE 15
  • Visualize the features
  • Dimensionality reduction: t-SNE
  • Visualize the corresponding images
  • Top activated image (NN)

What are all the units doing?

Krizhevsky et al, “ImageNet Classification with Deep Convolutional Neural Networks”, NIPS, 2012.

SLIDE 16
  • Visualize the features
  • Dimensionality reduction: t-SNE
  • Visualize the corresponding images
  • Top activated image (NN)
  • Back-propagation

What are all the units doing?

Ulyanov et al, “Deep Image Prior”, CVPR 2018.

SLIDE 17
  • Visualize the features
  • Dimensionality reduction: t-SNE
  • Visualize the corresponding images
  • Top activated image (NN)
  • Back-propagation

What are all the units doing?

Ulyanov et al, “Deep Image Prior”, CVPR 2018.

SLIDE 18
  • Visualize the features
  • Dimensionality reduction: t-SNE
  • Visualize the corresponding images
  • Top activated image (NN)
  • Back-propagation
  • Image Synthesis

What are all the units doing?

Dosovitskiy and Brox, “Generating Images with Perceptual Similarity Metrics based on Deep Networks”, NIPS 2016.
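The idea of synthesizing an input that reproduces a given feature code can be miniaturized to a linear feature map, where plain gradient descent already inverts the features. A toy numpy sketch (the linear map W is a stand-in for a network; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(32, 64)) / 8.0   # stand-in differentiable feature map
x_true = rng.normal(size=64)
target = W @ x_true                   # the feature code to reproduce

x = np.zeros(64)                      # start from a blank "image"
for _ in range(2000):
    residual = W @ x - target
    x -= 0.1 * (W.T @ residual)       # gradient of 0.5 * ||W x - target||^2

# the synthesized x now reproduces the target features W x = target
```

The paper replaces this pixel-space objective with a learned generator and perceptual losses, which is what makes the synthesized images look natural.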

SLIDE 19
  • Visualize the features
  • Dimensionality reduction: t-SNE
  • Visualize the corresponding images
  • Top activated image (NN)
  • Back-propagation
  • Image Synthesis

What are all the units doing?

Dosovitskiy and Brox, “Generating Images with Perceptual Similarity Metrics based on Deep Networks”, NIPS 2016.

SLIDE 20
  • From qualitative to quantitative analysis: Network Dissection

What are all the units doing?

Bau et al, “Network Dissection: Quantifying Interpretability of Deep Visual Representations”, CVPR 2017.
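Network Dissection scores a unit by the IoU between its thresholded activation map and a concept's segmentation mask. A minimal numpy version (the fixed threshold here is illustrative; the paper picks a per-unit top-quantile threshold):

```python
import numpy as np

def dissection_iou(act_map, concept_mask, thresh):
    """Network-Dissection-style score: IoU between the unit's thresholded
    activation map and a binary concept segmentation."""
    act = act_map > thresh
    inter = np.logical_and(act, concept_mask).sum()
    union = np.logical_or(act, concept_mask).sum()
    return inter / union if union else 0.0

act = np.array([[0.1, 0.9], [0.8, 0.2]])           # upsampled unit response
mask = np.array([[False, True], [True, True]])     # e.g. "lamp" pixels
print(dissection_iou(act, mask, thresh=0.5))       # 2/3: unit matches "lamp"
```

A unit is then labeled with the concept whose mask gives it the highest IoU across the dataset.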

SLIDE 21
  • From qualitative to quantitative analysis: Network Dissection

What are all the units doing?

Bau et al, “Network Dissection: Quantifying Interpretability of Deep Visual Representations”, CVPR 2017.

SLIDE 22
  • Ablation study: occlusion effect

How are units relevant to prediction? Understanding the explainable model

Zhou et al, “Revisiting the Importance of Individual Units in CNNs via Ablation”, arXiv 2018.
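Occlusion-style ablation slides a patch over the input and records how much the class score drops. A toy numpy sketch with a stand-in scoring function (all names are illustrative):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Slide a gray patch over the image and record the score drop at each
    position; large drops mark regions the model relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat

# toy "classifier" that only looks at the top-left 2x2 region
score = lambda img: img[:2, :2].mean()
heat = occlusion_map(np.ones((4, 4)), score)
# heat peaks where the patch covers the top-left region the model uses
```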

SLIDE 23
  • Ablation study: occlusion effect

How are units relevant to prediction? Understanding the explainable model

Zhou et al, “Revisiting the Importance of Individual Units in CNNs via Ablation”, arXiv 2018.

SLIDE 24
  • Ablation study: occlusion effect

How are units relevant to prediction? Understanding the explainable model

Zhou et al, “Revisiting the Importance of Individual Units in CNNs via Ablation”, arXiv 2018.

SLIDE 25
  • Ablation study: occlusion effect

How are units relevant to prediction? Understanding the explainable model

Zhou et al, “Revisiting the Importance of Individual Units in CNNs via Ablation”, arXiv 2018.

SLIDE 26
  • Ablation study: occlusion effect
  • For sequential modeling: true for different model configuration
  • Range of context (memory) is limited – 200 tokens
  • Order matters in nearby context (not long-range context) – 50 tokens

How are units relevant to prediction? Understanding the explainable model

Khandelwal, et al, “Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context”, ACL 2018.
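The analysis compares model loss on original vs. perturbed contexts, e.g. shuffling word order only beyond a chosen distance from the point of prediction. A sketch of that perturbation in plain Python (the function name and defaults are illustrative):

```python
import random

def shuffle_distant_context(tokens, keep_last=50, seed=0):
    """Shuffle tokens farther than keep_last positions from the prediction
    point while preserving nearby word order; if the model's loss is
    unchanged, order in the remote context did not matter."""
    far, near = tokens[:-keep_last], tokens[-keep_last:]  # slices are copies
    random.Random(seed).shuffle(far)
    return far + near

ctx = [f"w{i}" for i in range(300)]
perturbed = shuffle_distant_context(ctx, keep_last=50)
# the most recent 50 tokens keep their order; the remote 250 are shuffled
```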

SLIDE 27
  • How do we improve the model based on interpretability?

Conclusion