SLIDE 1

Published @ ICCV 2017

SLIDE 2

Context

  • Many try to explain CNN predictions
  • Good overview: CVPR 2018 tutorial on Interpretable ML for CV
  • https://interpretablevision.github.io/
  • Studies show existing methods that use gradients are problematic
  • Today: a 'good' explanation method
SLIDE 3

What is an explanation?

  • A rule that predicts the response of f to certain inputs
  • Examples:
  • f(x) = +1 if x contains a cat
  • f(x) = f(x') if x and x' are related by a rotation (x' is a perturbed version of x)

  • Rules are tested on data
  • Quality of a rule: generalization to unseen data (see the sketch below)
  • Rules can be discovered and learned
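A minimal sketch of testing such a rule on held-out data. The classifier f and the oracle predicate contains_cat are hypothetical stand-ins, not from the paper:

    def rule_accuracy(f, contains_cat, dataset):
        # Fraction of unseen samples on which the rule
        # "f(x) = +1 if x contains a cat" holds.
        hits = sum((f(x) == +1) == contains_cat(x) for x in dataset)
        return hits / len(dataset)

A rule that scores high here generalizes; one that only fits the data it was discovered on does not.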
SLIDE 4

Explanations for CNNs: Saliency

  • What region of the image is important to get decision f(x)?
  • Idea: delete parts of x until posterior drops
  • Deletion = blurring
  • Task: find the smallest mask m whose perturbation makes f(x) drop significantly (see the sketch below)
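A minimal PyTorch sketch of this optimization, assuming the perturbation phi = m*x + (1-m)*blur(x), where m = 1 means 'keep the pixel'. The average-pool blur, lambda, and learning rate are illustrative defaults, not the paper's exact settings:

    import torch
    import torch.nn.functional as F

    def find_mask(model, x, target, steps=300, lam=0.05, lr=0.1):
        # x: image tensor (1, C, H, W); target: class index to explain.
        blurred = F.avg_pool2d(x, kernel_size=11, stride=1, padding=5)  # crude blur
        m = torch.ones_like(x[:, :1])   # single-channel mask: 1 = keep, 0 = delete
        m.requires_grad_(True)
        opt = torch.optim.Adam([m], lr=lr)
        for _ in range(steps):
            mask = m.clamp(0, 1)
            phi = mask * x + (1 - mask) * blurred            # perturbed image
            score = torch.softmax(model(phi), dim=1)[0, target]
            loss = score + lam * (1 - mask).mean()           # drop the score while deleting little
            opt.zero_grad()
            loss.backward()
            opt.step()
        return m.detach().clamp(0, 1)

The (1 - mask) penalty keeps the deleted region small, so the optimization converges to the smallest evidence whose removal changes the decision.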
SLIDE 5

Artefacts

  • Naively learning the mask introduces artefacts
  • Remember: an explanation should generalize! If the image x changes slightly, the explanation should still hold.

  • Solution 1: apply mask with random offsets during optimization
  • Solution 2: regularize the mask: smoother / more natural perturbations (both solutions sketched below)
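A sketch of both fixes in the same hypothetical PyTorch setting as above. The paper jitters the mask relative to the image and uses a total-variation term; torch.roll and the beta-TV below are simple stand-ins:

    import torch

    def jitter(x, max_offset=4):
        # Solution 1: random spatial offset each iteration, so the mask
        # cannot latch onto pixel-exact artefacts.
        dy, dx = torch.randint(-max_offset, max_offset + 1, (2,)).tolist()
        return torch.roll(x, shifts=(dy, dx), dims=(-2, -1))

    def tv_penalty(mask, beta=3.0):
        # Solution 2: total-variation regularizer; beta > 1 favours
        # smooth, blob-like masks over scattered pixels.
        dh = (mask[..., 1:, :] - mask[..., :-1, :]).abs() ** beta
        dw = (mask[..., :, 1:] - mask[..., :, :-1]).abs() ** beta
        return dh.mean() + dw.mean()

In the optimization loop above, one would perturb jitter(x) instead of x and add a weighted tv_penalty(mask) to the loss.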
SLIDE 6

Better interpretability

  • Mask highlights only essential evidence.
  • Other methods often find 'irrelevant' evidence.
SLIDE 7

Spurious correlation

  • The method can expose CNN errors caused by spurious correlations the network has learned
SLIDE 8

Better understanding

  • Use extra annotations of ImageNet + masks to improve understanding
  • Animal faces are more important than feet for CNNs
SLIDE 9

Adversarial images have strange masks

SLIDE 10

Detecting adversarial images

  • After blurring the image under the 'adversarial' mask, the CNN can recover the original prediction in 40% of the cases (see the sketch below)
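A sketch of that check, reusing the hypothetical find_mask setting above; only the 40% recovery figure comes from the slides:

    import torch
    import torch.nn.functional as F

    def recovers_original(model, x_adv, mask, original_label):
        # Blur the adversarial image under the learned mask (0 = blur)
        # and test whether the prediction flips back to the original class.
        blurred = F.avg_pool2d(x_adv, kernel_size=11, stride=1, padding=5)
        cleaned = mask * x_adv + (1 - mask) * blurred
        with torch.no_grad():
            pred = model(cleaned).argmax(dim=1).item()
        return pred == original_label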

SLIDE 11

Take home

  • Saliency != gradient
  • Proposed method can be used to diagnose and understand CNNs
  • Paper with extensive, proper evaluation
  • Proposed method can be slow (requires 300 iterations of Adam)