
Machine Learning Lecture 09: Explainable AI (I), by Nevin L. Zhang


  1. Machine Learning Lecture 09: Explainable AI (I). Nevin L. Zhang, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology. This set of notes is based on internet resources and the references listed at the end.

  2. Outline
     1. Introduction
     2. Pixel-Level Explanations: Pixel Sensitivity; Evaluation
     3. Feature-Level Explanations
     4. Concept-Level Explanations
     5. Instance-Level Explanations

  3. What is XAI? Figure from Gunning (2019).

  4. What is XAI? Figure from Gunning (2019), continued.

  5. What is XAI? Wikipedia: Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) technology such that the results of the solution can be understood by human experts and users. It contrasts with the concept of the "black box" in machine learning, where even the designers cannot explain why the AI arrived at a specific decision.

  6. The Need for XAI: Explanations foster trust and verifiability. Patients trust well-explained therapies. Doctors trust well-explained suggestions.

  7. The Need for XAI: Explanations foster trust and verifiability. Example from Kim et al. (2018).

  8. The Need for XAI: Explanations help to determine whether a prediction is based on the wrong reasons (the Clever Hans effect). Example from Samek (2019).

  9. The Need for XAI: Explanations help to determine whether a prediction is based on the wrong reasons (the Clever Hans effect). Example from Ribeiro et al. (2016).

  10. The Need for XAI: Explanations are required by regulations. The EU's General Data Protection Regulation (GDPR) confers a right to explanation: individuals may obtain meaningful explanations of the logic involved in automated decision making. Lundberg (2019): explaining the interest rate of a loan.

  11. The Interpretability and Accuracy Tradeoff. Figure from Lecue et al. (2020).

  12. Target Users of XAI. Figure from Mohseni et al. (2020).

  13. Models to Be Explained
     Image classifiers
     Tabular data classifiers
     Text classifiers
     Reinforcement learning models
     Clustering algorithms
     . . .
     An XAI method is typically applicable to multiple types of models. We will focus on two tasks: image classification and tabular data classification.

  14. Types of Explanations
     Local vs global explanations: Local XAI explains one particular prediction made by a model; global XAI explains the general behaviour of a model.
     Model-specific vs model-agnostic: Model-agnostic XAI treats the model as a black box; model-specific XAI depends on the type of the selected model.
     Ante-hoc vs post-hoc: Ante-hoc XAI learns models that are interpretable by themselves; post-hoc XAI interprets models that are not interpretable by themselves.

  15. Outline
     1. Introduction
     2. Pixel-Level Explanations: Pixel Sensitivity; Evaluation
     3. Feature-Level Explanations
     4. Concept-Level Explanations
     5. Instance-Level Explanations

  16. The Setup
     An image $x = (x_1, \ldots, x_D)^\top$ is fed to a DNN to produce a latent feature vector $h$. An affine transformation is applied to $h$ to get a logit vector $z = (z_1, \ldots, z_C)$, which defines a probability distribution over the classes via softmax.
     Question: How important is a pixel $x_i$ to the score $z_c(x)$?
     Sensitivity: How sensitive is the score $z_c(x)$ to changes in $x_i$?
     Attribution: How much does $x_i$ contribute to the score $z_c(x)$?
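A minimal NumPy sketch of this setup; all shapes and values here are illustrative placeholders, not from the lecture:

```python
import numpy as np

# Sketch of the setup: a latent vector h (from the DNN) is mapped to
# logits z by an affine transformation, and softmax turns z into a
# distribution over classes. W, b, h are made up for illustration.
rng = np.random.default_rng(0)
C, H = 10, 512                          # number of classes, latent dimension
W, b = rng.normal(size=(C, H)), rng.normal(size=C)
h = rng.normal(size=H)                  # latent feature vector from the DNN

z = W @ h + b                           # logit vector z = (z_1, ..., z_C)
p = np.exp(z - z.max())                 # subtract max for numerical stability
p /= p.sum()                            # softmax: probability distribution
```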

  17. The Case of a Linear Model
     Let $w_c = (w_{c1}, \ldots, w_{cD})^\top$ and suppose $h = x$. Then $z_c = w_c^\top x + b_c$.
     The sensitivity of $z_c(x)$ to $x_i$ is determined by $w_{ci} = \frac{\partial z_c}{\partial x_i}$.
     The contribution of $x_i$ to $z_c(x)$ is $x_i w_{ci} = x_i \frac{\partial z_c}{\partial x_i}$.
     In words, it is input $\times$ gradient.
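The linear case is simple enough to verify directly. A sketch with illustrative random values (names are hypothetical): the attributions, plus the bias, reconstruct the score exactly.

```python
import numpy as np

# Input-times-gradient for the linear case z_c = w_c^T x + b_c.
# The gradient of z_c w.r.t. x is exactly w_c, so pixel x_i's
# attribution is x_i * w_ci, and the attributions sum to z_c - b_c.
rng = np.random.default_rng(0)
D = 784
x = rng.normal(size=D)                  # the "image" as a flat vector
w_c, b_c = rng.normal(size=D), 0.1      # weights and bias for class c

z_c = w_c @ x + b_c
sensitivity = w_c                       # dz_c/dx_i = w_ci
attribution = x * w_c                   # input x gradient
assert np.isclose(attribution.sum() + b_c, z_c)
```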

  18. Saliency Map
     In the general case, we can still determine the sensitivity of $z_c$ to $x_i$ using $\frac{\partial z_c}{\partial x_i}$.
     A saliency map (Simonyan et al. 2013) is a way to visualize the gradients with respect to all the pixels. It highlights the pixels that would have the largest impact on the class score if perturbed.
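In practice the gradient is obtained by automatic differentiation. A sketch assuming PyTorch and torchvision; the pretrained ResNet-18 and the random input are stand-ins for the reader's own model and image:

```python
import torch
from torchvision import models

# Saliency map via autograd: backpropagate the class score (the logit,
# not the softmax output) to the input pixels.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)
z = model(x)                            # logits, shape (1, 1000)
c = z.argmax(dim=1).item()              # explain the predicted class
z[0, c].backward()                      # dz_c / dx

# One sensitivity value per pixel: max of |gradient| over color channels.
saliency = x.grad.abs().squeeze(0).max(dim=0).values   # shape (224, 224)
```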

  19. Computation of Saliency Map
     $z_c = \sum_j u_j W_{cj}$, $u_j = g(z_j)$, $z_j = \sum_i u_i W_{ji}$ (bias ignored for simplicity).
     Backpropagation during training revisited: Training example $(x, y)$.
     Forward propagation: Compute the activations, $z$, and the loss $L(z, y)$.
     Backward propagation (purpose: $\frac{\partial L}{\partial W_{ji}}$):
     For each output unit (each class) $c$: $\delta_c \leftarrow \frac{\partial L}{\partial z_c}$.
     For each unit $j$ on the second-last layer: $\delta_j \leftarrow \frac{\partial u_j}{\partial z_j} \sum_c W_{cj} \delta_c$.
     For each unit $i$ on the third-last layer: $\delta_i \leftarrow \frac{\partial u_i}{\partial z_i} \sum_j W_{ji} \delta_j$.
     $\frac{\partial L}{\partial W_{ji}} \leftarrow u_i \delta_j$, etc.

  20. Computation of Saliency Map
     Explaining the score $z_c(x)$ for input $x$:
     Forward propagation: Compute the activations and $z$.
     Backward propagation (purpose: $\frac{\partial z_c}{\partial x_i}$):
     For each unit $j$ on the second-last layer: $\frac{\partial z_c}{\partial u_j} \leftarrow W_{cj}$.
     For each unit $i$ on the third-last layer: $\frac{\partial z_c}{\partial u_i} \leftarrow \sum_j W_{ji} \frac{\partial u_j}{\partial z_j} \frac{\partial z_c}{\partial u_j}$.
     . . .
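The recursion above can be written out by hand. A minimal NumPy sketch for a hypothetical two-layer ReLU network with random weights (all names, shapes, and values are illustrative):

```python
import numpy as np

# Manual backward pass for dz_c/dx in a tiny ReLU network
# (biases ignored, as on the slide).
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))            # third-last layer -> second-last layer
W2 = rng.normal(size=(2, 3))            # second-last layer -> logits

# Forward: z1 = W1 x, u1 = ReLU(z1), z = W2 u1
z1 = W1 @ x
u1 = np.maximum(z1, 0.0)
z = W2 @ u1

# Backward for class c: dz_c/du_j = W_cj, gate by ReLU, push down to x.
c = 0
dz_du1 = W2[c]                          # second-last layer
dz_dz1 = dz_du1 * (z1 > 0)              # du_j/dz_j is 1 if z_j > 0, else 0
dz_dx = W1.T @ dz_dz1                   # dz_c/dx_i = sum_j W_ji * dz_c/dz_j
```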

  21. Computation of Saliency Map
     Notes on backprop: $\frac{\partial z_c}{\partial u_i} \leftarrow \sum_j W_{ji} \frac{\partial u_j}{\partial z_j} \frac{\partial z_c}{\partial u_j}$ if $u_j = g(z_j)$, $z_j = \sum_i u_i W_{ji}$.
     In the case of max pooling, $u_j = \max_i u_i$:
     $\frac{\partial z_c}{\partial u_i} \leftarrow \frac{\partial z_c}{\partial u_j}$ if $u_i = u_j$, and $\frac{\partial z_c}{\partial u_i} \leftarrow 0$ if $u_i \neq u_j$.
     So $\frac{\partial z_c}{\partial u_j}$ is backpropagated along only one of the connections: forward activations act like "switches" for backprop.
     If unit $j$ is a ReLU unit and $z_j < 0$, then $u_j = 0$ and $\frac{\partial u_j}{\partial z_j} = 0$, and hence $\frac{\partial z_c}{\partial u_j}$ is not backpropagated: forward activations act like gates for backprop.
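The ReLU gate is easy to see in two lines (assuming PyTorch; the numbers are arbitrary):

```python
import torch

# A unit with negative pre-activation blocks the gradient entirely.
z = torch.tensor([-2.0, 3.0], requires_grad=True)
u = torch.relu(z)
u.sum().backward()
print(z.grad)                           # tensor([0., 1.])
```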

  22. Guided Backpropagation
     Vanilla backprop: $\frac{\partial z_c}{\partial u_i} \leftarrow \sum_j W_{ji} \frac{\partial u_j}{\partial z_j} \frac{\partial z_c}{\partial u_j}$.
     If the gradient $\frac{\partial z_c}{\partial u_j} < 0$, then $u_j$ (which is $\geq 0$) contributes to $z_c$ negatively.
     If we want to find the pixels that contribute to $z_c$ positively, we can ignore negative gradients. This gives rise to Guided Backpropagation (Springenberg et al. 2014):
     $\frac{\partial z_c}{\partial u_i} \leftarrow \sum_j W_{ji} \frac{\partial u_j}{\partial z_j} \, \mathrm{ReLU}\!\left(\frac{\partial z_c}{\partial u_j}\right)$.
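One common way to implement this is to clamp negative gradients at every ReLU on the backward pass. A sketch assuming PyTorch and torchvision; the VGG-16 model and random input are illustrative stand-ins:

```python
import torch
import torch.nn as nn
from torchvision import models

# Guided Backprop: on the backward pass through every ReLU, zero out
# negative incoming gradients (on top of ReLU's own forward gate).
model = models.vgg16(weights="IMAGENET1K_V1").eval()

def guided_hook(module, grad_input, grad_output):
    return (torch.clamp(grad_input[0], min=0.0),)   # keep positive gradients only

for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False               # full backward hooks need out-of-place ReLU
        m.register_full_backward_hook(guided_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)
z = model(x)
z[0, z.argmax(dim=1).item()].backward()
guided_saliency = x.grad.squeeze(0)     # sharper than the vanilla saliency map
```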

  23. Guided Backpropagation
     In general, Guided Backprop produces sharper saliency maps than the vanilla gradient (Adebayo et al. 2018).
     It is a combination of the vanilla gradient and DeconvNet (Zeiler et al. 2014), which maps a neuron activation back to the input pixel space, showing what input pattern originally caused a given activation in the feature maps.

  24. Grad-CAM (Selvaraju et al. 2017)
     Unlike the previous methods, Grad-CAM (Gradient-weighted Class Activation Mapping) is class-discriminative: it localizes the class in the image.

  25. Grad-CAM (Selvaraju et al. 2017)
     Let $A^k = [a^k_{ij}]$ be a feature map in the last convolutional layer for an input $x$. The activations are local.

  26. Grad-CAM (Selvaraju et al. 2017)
     The following quantity measures how important the feature map $A^k$ is to the score $z_c(x)$:
     $\alpha^c_k = \frac{1}{Z} \sum_{i,j} \frac{\partial z_c}{\partial a^k_{ij}}$,
     where $Z$ is the number of pixels in $A^k$ (global average pooling).
     The Grad-CAM heatmap is computed as follows:
     $L^c_{\text{Grad-CAM}} = \mathrm{ReLU}\!\left(\sum_k \alpha^c_k A^k\right)$.
     ReLU is used because we are only interested in features that have a positive influence on $z_c$.
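A sketch of these two formulas, assuming PyTorch and torchvision; the choice of ResNet-18, its `layer4` block as "last convolutional layer", and the random input are all illustrative:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Grad-CAM: capture the last conv block's feature maps A^k and their
# gradients, global-average-pool the gradients to get alpha_k^c, and
# take ReLU of the weighted sum.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
store = {}

def fwd_hook(module, inputs, output):
    store["A"] = output                          # feature maps, (1, K, H, W)
    output.register_hook(lambda g: store.__setitem__("dA", g))

model.layer4.register_forward_hook(fwd_hook)     # last conv block in ResNet-18

x = torch.randn(1, 3, 224, 224)
z = model(x)
z[0, z.argmax(dim=1).item()].backward()

A, dA = store["A"], store["dA"]
alpha = dA.mean(dim=(2, 3), keepdim=True)        # alpha_k^c: GAP over pixels
cam = F.relu((alpha * A).sum(dim=1))             # ReLU(sum_k alpha_k^c A^k), (1, H, W)
```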

  27. Grad-CAM (Selvaraju et al. 2017)
     Grad-CAM essentially combines only those feature maps that contribute positively to class $c$, and hence is effective in localizing it.

  28. Guided Grad-CAM (Selvaraju et al. 2017)
     The Grad-CAM heatmap has smaller spatial dimensions than the input. It is upsampled to the input resolution and multiplied elementwise with the saliency map produced by Guided Backprop.
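Continuing the two sketches above, the combination might look like this; `cam` and `guided_saliency` are the variables from the Grad-CAM and Guided Backprop sketches:

```python
import torch.nn.functional as F

# Guided Grad-CAM: upsample the coarse Grad-CAM map to the input size
# and multiply it elementwise with the Guided Backprop map.
cam_up = F.interpolate(cam.unsqueeze(0), size=(224, 224),
                       mode="bilinear", align_corners=False)  # (1, 1, 224, 224)
guided_grad_cam = guided_saliency * cam_up.squeeze()          # broadcasts over RGB
```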

  29. Guided Grad-CAM (Selvaraju et al. 2017): example results.

  30. Guided Grad-CAM (Selvaraju et al. 2017): further example results.
