
Explaining Deep Learning Predictions and Integrating Domain Ontologies - Isaac Ahern - PowerPoint PPT Presentation



  1. Explaining Deep Learning Predictions and Integrating Domain Ontologies. Isaac Ahern. July 16, 2018. Outline: Project Background (problems, domains); "Explaining any Classifiers" (LIME, SP-LIME); Ontological Deep Learning (ORBM+, Explanation Generation); References.

  2. Outline: 1. Project Background (problems, domains); 2. "Explaining any Classifiers" (LIME, SP-LIME); 3. Ontological Deep Learning (ORBM+, Explanation Generation); 4. References.

  3. CBL project problem: human behavior prediction problems, and the issue of explaining deep learning predictions. Deep learning predictions shouldn't be treated as a "black box"; we want to explain classifiers. Avoid the fitting bias induced by learning "flat" models by using domain ontologies to structure models.

  4. CBL project domains: PeaceHealth (Electronic Health Records), Eli Lilly (Drug Information), Baidu (Social Media).

  5. PeaceHealth: a nonprofit health care network. Goal: predicting health outcomes and recurrences by incorporating explicit and implicit social and environmental factors, as well as self-motivation, into the DL model.

  6. Eli Lilly: a global pharmaceutical company. Goal: understanding healthcare outcome relationships between patients and products.

  7. Baidu: a search engine/internet company ("the Chinese Google"). Goal: incorporate social media user data for human behavior prediction.

  8. LIME. Goal: provide an "explanation" for any given classifier, i.e., provide some characteristic that illustrates a qualitative understanding of the relationship between an instance in the data and the corresponding model prediction.

  9. Model Accuracy vs Explanation. It may be desirable to choose a less accurate model for content recommendations based on the importance afforded to different features (e.g., predictions related to "clickbait" articles, which may hurt user retention). Metrics we can optimize: accuracy. Metrics we might actually care about: user engagement, retention. In this case, it is important to have a heuristic for explaining how a model is making predictions, along with the actual predictions themselves.

  10. LIME: an algorithm that can explain model predictions such that: explanations are locally faithful to the model; explanations are interpretable; explanations are model-agnostic; and the approach can be extended to a measure of a model's trustworthiness, i.e., extended to explain the model.

  11. LIME: Local Interpretable Model-Agnostic Explanations.

  12. Interpretable Representations vs Features. Text classification: the interpretable explanation is a binary vector indicating the presence/absence of a word; the feature is a word embedding (e.g., W2V Skip-gram). Image classification: the interpretable explanation is a binary vector indicating the presence/absence of super-pixels (contiguous patches of "similar" pixels); the feature is a representation of the image as a tensor via a ConvNet, with 3 color channels per pixel. Notation: x ∈ R^d is the feature representation of an instance, and x′ ∈ {0, 1}^{d′} is a corresponding interpretable representation. A small sketch contrasting the two representations for text follows.
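
As a concrete illustration of the text-classification case above, here is a minimal sketch contrasting the interpretable representation x′ (a binary presence/absence vector over a vocabulary) with a dense feature representation x; the toy vocabulary, the example sentence, and the stand-in embeddings are hypothetical and only for illustration.

```python
# Sketch: interpretable representation x' vs. feature representation x for text.
# Vocabulary, sentence, and embeddings below are toy stand-ins, not from the slides.
import numpy as np

vocabulary = ["great", "terrible", "plot", "acting", "boring"]  # assumed toy vocabulary
sentence = "great acting but boring plot"

# Interpretable representation x' in {0,1}^{d'}: word present (1) or absent (0).
x_interpretable = np.array([1 if w in sentence.split() else 0 for w in vocabulary])
print(x_interpretable)  # [1 0 1 1 1]

# Feature representation x in R^d: e.g. an average of word embeddings (here random
# stand-ins for W2V Skip-gram vectors); informative to a model, opaque to a human.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in vocabulary}  # stand-in for real embeddings
x_features = np.mean([embeddings[w] for w in sentence.split() if w in embeddings], axis=0)
print(x_features.shape)  # (50,)
```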

  13. Optimization Criteria. Ω(g) = complexity of the model g. f : R^d → R is a model, i.e., f(x) is the probability that x belongs to a certain class. L(f, g, π_x) = a measure of the error in the approximation of f by g in the region defined by π_x (locality-aware loss). LIME then balances the constraints of interpretability and local faithfulness by selecting ξ(x) = argmin_{g ∈ G} L(f, g, π_x) + Ω(g), where G is a class of potentially interpretable models, such as linear models, decision trees, or falling rule lists. A sketch of evaluating this objective follows.
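
To make the selection criterion ξ(x) concrete, here is a minimal sketch of evaluating the objective L(f, g, π_x) + Ω(g) for one candidate sparse linear explanation g; choosing Ω(g) to be the number of nonzero weights is an assumption made for illustration, not something the slide fixes.

```python
# Sketch: evaluating the LIME objective L(f, g, pi_x) + Omega(g) for a candidate
# sparse linear model g(z') = w_g . z'. Omega(g) is taken to be the number of
# nonzero weights (an assumed interpretability penalty).
import numpy as np

def lime_objective(f, w_g, Z_features, Z_interpretable, pi_x):
    """f: black-box model mapping rows of Z_features to class probabilities.
    w_g: weight vector of the candidate linear explanation g.
    Z_features / Z_interpretable: sampled points z and their binary versions z'.
    pi_x: locality weights pi_x(z), one per sampled point."""
    preds_f = f(Z_features)                           # f(z), black-box predictions
    preds_g = Z_interpretable @ w_g                   # g(z') = w_g . z'
    loss = np.sum(pi_x * (preds_f - preds_g) ** 2)    # locality-aware squared loss
    complexity = np.count_nonzero(w_g)                # Omega(g)
    return loss + complexity
```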

  14. Sampling. To perform the minimization defined by ξ(x), "sample uniformly from instances around x′", weighted according to π_x. This recovers points z ∈ R^d to which we apply the label f(z) (the model prediction), yielding the dataset Z = {(z, f(z))} of sampled points. We then optimize argmin_{g ∈ G} L(f, g, π_x) + Ω(g) over Z. Sparse Linear Explanations: model class G = {g : z′ ↦ w_g · z′}; locality distribution π_x(z) = exp(−D(x, z)² / σ²) (Gaussian/exponential kernel) for a domain-appropriate distance measure D (e.g., cosine, L2); loss L(f, g, π_x) = Σ_{(z, z′) ∈ Z} π_x(z) (f(z) − g(z′))². A sketch of this sampling-and-fit procedure follows.
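
The following is a minimal end-to-end sketch of the sampling-and-fit procedure above; the use of scikit-learn's Lasso as the weighted sparse linear fitter, the kernel width σ, and the cosine distance on z′ are assumptions for illustration (the slide only specifies a sparse linear model class and an exponential kernel over a domain-appropriate distance).

```python
# Sketch of LIME's sampling step for sparse linear explanations: perturb the
# interpretable representation of x, query the black box, weight each sample by
# the exponential kernel pi_x, and fit a weighted sparse linear model.
import numpy as np
from sklearn.linear_model import Lasso

def explain_instance(f, x_interpretable, reconstruct, n_samples=1000, sigma=0.25, alpha=0.01):
    """f: black-box model, maps a feature vector z in R^d to a class probability.
    reconstruct: maps a perturbed binary vector z' back to feature space (e.g. by
    masking absent words or super-pixels); domain-specific and assumed given."""
    rng = np.random.default_rng(0)
    d_prime = x_interpretable.shape[0]

    # Sample z' uniformly by switching interpretable components on/off.
    Z_prime = rng.integers(0, 2, size=(n_samples, d_prime))
    labels = np.array([f(reconstruct(z_prime)) for z_prime in Z_prime])  # f(z)

    # Locality weights pi_x(z) = exp(-D(x, z)^2 / sigma^2), cosine distance on z'.
    x_norm = x_interpretable / (np.linalg.norm(x_interpretable) + 1e-12)
    Z_norm = Z_prime / (np.linalg.norm(Z_prime, axis=1, keepdims=True) + 1e-12)
    D = 1.0 - Z_norm @ x_norm
    pi = np.exp(-(D ** 2) / sigma ** 2)

    # Weighted sparse linear fit: g(z') = w_g . z' under the locality-aware loss.
    g = Lasso(alpha=alpha)
    g.fit(Z_prime, labels, sample_weight=pi)
    return g.coef_  # w_g: importance of each interpretable component
```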

  15. Example: Intuition. [Slide figure; not reproduced in this transcript.]

  16. Example: ConvNet. [Slide figure; not reproduced in this transcript.]

  17. Explaining Models: SP-LIME. Extend LIME so as to give a global understanding of the model by explaining a set of individual instances. The problem is to select a set of instances that is simultaneously feasible to inspect and gives non-redundant explanations representing the model's global behavior. Given X instances, construct an |X| × d′ explanation matrix W = (|w_{g_i, j}|), where g_i = ξ(x_i) is the LIME-selected interpretable local sparse linear model approximation for instance x_i. A sketch of the pick step built on W follows.
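
Building on the explanation matrix W described above, here is a hedged sketch of the pick step; the global importance I_j = sqrt(Σ_i W_ij) and the greedy submodular coverage selection follow the SP-LIME procedure in the referenced LIME paper (Ribeiro et al., 2016), and the budget B of instances to inspect is assumed given.

```python
# Sketch of SP-LIME's pick step: from an |X| x d' matrix W of absolute local
# explanation weights, greedily pick B instances whose explanations cover the
# globally important interpretable features. Coverage objective per the LIME paper.
import numpy as np

def submodular_pick(W, budget):
    """W[i, j] = |w_{g_i, j}|, absolute weight of interpretable feature j in the
    local explanation g_i = xi(x_i). Returns indices of the picked instances."""
    n_instances, _ = W.shape
    importance = np.sqrt(W.sum(axis=0))          # global importance I_j of feature j

    def coverage(selected):
        covered = W[selected].sum(axis=0) > 0    # features touched by any picked explanation
        return float(importance[covered].sum())

    picked = []
    for _ in range(min(budget, n_instances)):
        remaining = [i for i in range(n_instances) if i not in picked]
        # Greedy step: add the instance with the largest marginal coverage gain.
        best = max(remaining, key=lambda i: coverage(picked + [i]))
        picked.append(best)
    return picked
```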
