
Interpreting Interpretations: Organizing Attribution Methods by Criteria (PowerPoint presentation)

IEEE CVPR 2020 Workshop on Fair, Data Efficient and Trusted Computer Vision. Interpreting Interpretations: Organizing Attribution Methods by Criteria. Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson. Accountable System Lab.


  1. IEEE CVPR 2020 Workshop on Fair, Data Efficient and Trusted Computer Vision. Interpreting Interpretations: Organizing Attribution Methods by Criteria. Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson. Accountable System Lab (https://fairlyaccountable.org/), Carnegie Mellon University.

  2. Introduction. Input y = [y_0, y_1, ..., y_{N-1}]; attribution A = [A_0, A_1, ..., A_{N-1}]. An attribution map finds the most important features in the input towards the prediction. Families of explanations: concept-based explanations, instance-based explanations, and activation-based attribution maps. Attribution maps from different methods may not agree with each other. Running example (an image of a lady with a dog, predicted class "dog"): is the lady used by the model to predict the "dog" class, or is she not?
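As a concrete toy illustration of the attribution interface above (one score A_i per input feature y_i), the sketch below computes a gradient-times-input attribution for a one-layer logistic model. The model, its weights, and the gradient-times-input rule are illustrative assumptions of this sketch, not one of the methods compared in the talk.

```python
import numpy as np

# Toy stand-in for a classifier: a logistic model f(y) = sigmoid(w . y).
# The talk evaluates deep networks; this only illustrates producing
# an attribution vector A = [A_0, ..., A_{N-1}] from an input y.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(y, w):
    return sigmoid(w @ y)

def gradient_x_input(y, w):
    """Gradient-times-input attribution, using the analytic gradient."""
    p = model(y, w)
    grad = p * (1.0 - p) * w   # d sigmoid(w . y) / dy
    return grad * y            # A_i = y_i * df/dy_i

w = np.array([2.0, -1.0, 0.0, 0.5])
y = np.array([1.0, 1.0, 5.0, 1.0])
A = gradient_x_input(y, w)
# Feature 2 has zero weight, so it gets zero attribution despite its size.
```

The point of the toy weights: a large input value alone does not make a feature important; attribution reflects its effect on the prediction.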

  3. Background. Goal of this paper: decompose importance into its logical meaning, and evaluate different attribution methods with numerical analysis to answer which attribution method is better, and to what extent. Recall: an attribution map finds the most important features in the input towards the prediction of a deep neural network. Logical meaning: a sufficient condition is one which can independently make a statement true; a necessary condition is one without which a statement is false. Applied to importance: Necessity means that without these features the model will lose more confidence than without others; Sufficiency means that with these features alone the model will gain more confidence than with others.
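The two definitions can be made operational with a minimal probe: ablate a feature and measure the confidence lost (necessity), or present the feature alone over a baseline and measure the confidence gained (sufficiency). The sum-and-squash toy model and the zero baseline below are assumptions of this sketch, not the paper's networks or baselines.

```python
import numpy as np

# Toy stand-in model: squashed sum of the input features.
def model(y):
    return 1.0 / (1.0 + np.exp(-np.sum(y)))

def necessity(y, idx):
    """Confidence lost when feature idx is ablated (higher = more necessary)."""
    ablated = y.copy()
    ablated[idx] = 0.0     # assumed zero baseline for "removed"
    return model(y) - model(ablated)

def sufficiency(y, idx):
    """Confidence gained over an all-baseline input by adding feature idx alone."""
    base = np.zeros_like(y)
    alone = base.copy()
    alone[idx] = y[idx]
    return model(alone) - model(base)

y = np.array([3.0, 0.5, 0.1])
# In this monotone toy model the largest feature is both the most
# necessary and the most sufficient; in a deep network the two notions
# can disagree, which is the point of measuring them separately.
```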

  4. Methods: Quantifying Necessity & Sufficiency. Criterion One: Ordering. Attribution scores induce a rank order over features. Quantify the ordering: modify features from the top of the rank order to the bottom, tracking the model output g(y) against the share of pixels modified. Modify = ablate for Necessity Ordering; modify = add for Sufficiency Ordering. (Figure: curves of g(y) versus share of pixels for necessity and sufficiency orderings.) Similar metrics. Necessity Ordering: AOPC [Samek et al. 2017], AUC [Binder et al. 2016], MoRF [Ancona et al. 2017]. Sufficiency Ordering: LeRF [Ancona et al. 2017], Average % Drop [Chattopadhyay et al. 2017].
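The ablation procedure behind these ordering metrics can be sketched as a MoRF-style necessity curve: ablate features from highest to lowest attribution and record the model output after each step. The toy model, zero baseline, and trapezoid area are assumptions of this sketch, not any one metric's exact definition.

```python
import numpy as np

def model(y):
    # Toy stand-in classifier: squashed sum of features.
    return 1.0 / (1.0 + np.exp(-np.sum(y)))

def necessity_ordering_curve(y, attribution, baseline=0.0):
    """Ablate features from highest to lowest attribution; record g(y)."""
    order = np.argsort(-attribution)   # most important first
    cur = y.astype(float).copy()
    curve = [model(cur)]
    for idx in order:
        cur[idx] = baseline            # ablate the next-ranked feature
        curve.append(model(cur))
    return np.array(curve)

def area_under(curve):
    # Trapezoid area under the curve; lower is better for necessity ordering.
    return float(np.sum((curve[:-1] + curve[1:]) / 2.0))

y = np.array([2.0, 1.0, 0.2])
good = necessity_ordering_curve(y, attribution=y)    # ranks big features first
bad = necessity_ordering_curve(y, attribution=-y)    # deliberately reversed
```

A good necessity ordering removes the confidence quickly, so its curve has the smaller area; a sufficiency (LeRF-style) variant would instead add features from the top of the ranking.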

  5. Methods: Quantifying Necessity & Sufficiency. Criterion One: Ordering. However, attribution scores offer more than a rank order, and the actual values are overlooked by ordering metrics.

  6. Methods: Quantifying Necessity & Sufficiency. Criterion Two: Proportionality. Motivations: magnitude also captures information; two regions with equal sums of attribution (Sum(region 1) = Sum(region 2)) should be equally important. Interpret attribution scores linearly: given y_1 and y_2, if A_1 = 2 A_2, then y_1 is expected to be twice as important as y_2. Sensitivity-n [Ancona et al. 2017] chooses regions randomly, losing the rank-order information. Ours measure the property under a given ordering: TPN chooses from Top 1 ... N and measures Necessity; TPS chooses from Top N ... 1 and measures Sufficiency.

  7. Methods: Quantifying Necessity & Sufficiency. Criterion Two: Proportionality. TPN measures Necessity; TPS measures Sufficiency. The area between the curves measures disproportionality: the smaller the area between the curves, the better the Necessity/Sufficiency.
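A simplified reading of the TPN idea can be sketched as follows: ablate features from the top of the ranking and, at each step, compare the cumulative share of attribution removed against the share of confidence lost; the mean gap between the two curves stands in for the area between them. The linear toy model, the zero baseline, and the normalizations here are assumptions of this sketch, not the paper's exact protocol.

```python
import numpy as np

def model(y, w):
    # Linear toy model, so each feature's true contribution is exactly w_i * y_i.
    return float(w @ y)

def tpn_disproportionality(y, w, attribution, baseline=0.0):
    """Mean gap between cumulative attribution share and confidence-drop share."""
    order = np.argsort(-attribution)
    attr_share = np.cumsum(attribution[order]) / attribution.sum()
    cur = y.astype(float).copy()
    start = model(cur, w)
    drops = []
    for idx in order:
        cur[idx] = baseline
        drops.append(start - model(cur, w))
    drop_share = np.array(drops) / drops[-1]   # normalize total drop to 1
    return float(np.mean(np.abs(attr_share - drop_share)))

w = np.array([1.0, 2.0, 0.5, 1.0])
y = np.array([3.0, 1.0, 8.0, 0.1])
exact = w * y          # scores equal to the true contributions
squared = exact ** 2   # same ranking, distorted magnitudes
```

With the linear model, attribution equal to the true contributions is perfectly proportional (near-zero gap), while squaring the scores keeps the same ranking (so ordering metrics would not notice) but breaks proportionality.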

  8. Methods: Summary of Evaluation Metrics. The four metrics cross what it means by importance (rows) with what should be offered in the attribution map (columns):

                   Ordering   Proportionality
     Necessity     N-Ord      TPN
     Sufficiency   S-Ord      TPS

  9. Results. Evaluation performed with ImageNet and a pretrained VGG-16 model; lower scores indicate better performance. Recommended methods under the Ordering criterion: for Necessity, DeepLIFT; for Sufficiency, GradCAM and LRP. Recommended methods under the Proportionality criterion: for Necessity, SmoothGrad, Saliency Map, and Integrated Gradients; for Sufficiency, Guided Backpropagation, LRP, and DeepLIFT.

  10. Interpret Interpretations. How do we use necessity/sufficiency and ordering/proportionality to interpret different attribution maps? The conclusions of the experiments let us impart additional interpretation to these results. GradCAM highlights an area of sufficient features (good S-Ord): the model can make the correct prediction without the lady. However, among the sufficient features, higher scores do not strictly mean that those features are much more sufficient than features with lower scores (poor TPS). SmoothGrad highlights necessary features (poor S-Ord and TPS), and its attribution scores are proportional to actual necessity (good TPN). The point clouds around the lady are less intense than those on the dog; therefore the lady is less necessary than the dog in the prediction towards the "dog" class. This answers whether the lady is or is not used by the model to predict the "dog" class.

  11. IEEE CVPR 2020 Workshop on Fair, Data Efficient and Trusted Computer Vision. Thanks for watching our presentation. Contact us: zifan.wang@sv.cmu.edu. Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson. Accountable System Lab (https://fairlyaccountable.org/), Carnegie Mellon University.
