Explainable AI (and Related Concepts): A Quick Tour


  1. EXPLAINABLE AI (AND RELATED CONCEPTS): A QUICK TOUR • AI Present and Future • Jacques Fleuriot

  2. STATE OF PLAY • Generally, machine learning models are black boxes • Not intuitive • Difficult for humans to understand; often even their designers do not fully understand the decision-making procedures • Yet they are being widely deployed in ways that affect our daily lives • Bad press when things go wrong • We’ll look at a few areas/case studies quickly, but this is an active, fast-growing research field and there is much, much more going on.

  3. SOME OF THE ISSUES • Fairness/Bias • e.g. Amazon’s ML tool for recruitment was found to be biased against women (Reuters, Oct 2018) • Even seemingly benign cases, e.g. Netflix serving movie artwork that many thought was based on racial profiling (Guardian, Oct 2018) • We know that “bad” training data can result in biases and unfairness • Challenge: how does one define “fairness” in a rigorous, concrete way (in order to model it)? (one concrete notion is sketched below) • Trust • One survey: 9% of respondents said they trusted AI with their finances, and only 4% trusted it for hiring processes (Can We Solve AI’s ‘Trust Problem’? MIT Sloan Management Review, November 2018) • Safety • Can one be sure that a self-driving car will not behave dangerously in situations never encountered before? • Ethics (see previous lectures)
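The challenge bullet above asks how “fairness” can be made concrete. Below is a minimal, hedged sketch of one such formalisation, demographic parity (the rate of positive decisions should be similar across groups); the function name and toy data are illustrative inventions, not something from the slides.

```python
# Minimal sketch of demographic parity, one concrete fairness notion:
# positive-decision rates should be similar across groups.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 model outputs; groups: group label per individual (illustrative helper)."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Toy data standing in for real model outputs
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups    = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap, rates = demographic_parity_gap(decisions, groups)
print(rates, gap)   # group "a" gets positives 75% of the time, group "b" 25%: gap 0.5
```

Demographic parity is only one of several competing definitions (equalised odds, calibration, etc.), and in general they cannot all be satisfied at once, which is part of why defining fairness rigorously remains a challenge.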

  4. EXPLAINABLE AI • Not a new topic • Rule-based systems are generally viewed as explainable (but they do not scale well, struggle to deal with raw data, etc.) • Example: MYCIN expert system (with ~500 rules) for diagnosing patients based on reported symptoms (1970s) • It could be asked to explain the reasoning leading to its diagnosis and recommendation (a toy sketch of this style of explanation follows below) • It operated at the same level of competence as specialists and better than GPs • Poor interface and relatively limited compute power at the time • Ethical and legal issues related to the use of computers in medicine were raised (even) at the time • Are there ways of marrying powerful black-box/ML approaches with (higher-level) symbolic reasoning?
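To make the claim that rule-based systems are explainable concrete, here is a toy, hedged illustration (not MYCIN itself): a few invented if-then rules with certainty factors, where the trace of fired rules doubles as the explanation, in the spirit of MYCIN’s “why” facility.

```python
# Toy rule-based diagnosis (invented rules, not MYCIN's): forward-chain over
# if-then rules and keep a trace, which serves as the explanation of the advice.
RULES = [
    # (name, conditions that must all hold, conclusion, certainty factor)
    ("R1", {"gram_negative", "rod_shaped"}, "organism is E. coli", 0.7),
    ("R2", {"fever", "organism is E. coli"}, "recommend antibiotic therapy", 0.6),
]

def infer(findings):
    facts, trace = set(findings), []
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion, cf in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: because {sorted(conditions)}, conclude '{conclusion}' (CF {cf})")
                changed = True
    return facts, trace

_, explanation = infer({"gram_negative", "rod_shaped", "fever"})
print("\n".join(explanation))   # the fired-rule trace is the 'why' of the recommendation
```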

  5. DARPA’S XAI: ONE POSSIBLE VISION • Running example: detecting ceasefire violations from video and social media, where the user asks “What should I report?” • Today: a learning process produces a learned function whose output is, e.g., “This incident is a violation (p = .93)”, leaving the user asking “Why do you say that?”, “What is the evidence?”, “Could it have been an accident?”, “I don’t know if this should be reported or not” • Tomorrow: a new learning process produces an explainable model plus an explanation interface, so the user can say “I understand why”, “I understand the evidence for this recommendation” (e.g. “This is a violation: these events occur before tweet reports”), “This is clearly one to investigate” • Source: DARPA XAI

  6. PERFORMANCE VS EXPLAINABILITY • [Chart: learning techniques plotted by learning performance against explainability: neural nets/deep learning, ensemble methods (e.g. random forests), graphical models (Bayesian belief nets, SRL, CRFs, HBNs, AOGs, MLNs), statistical models (e.g. SVMs) and decision trees; broadly, the higher-performing techniques are the less explainable ones] • Source: DARPA XAI

  7. PERFORMANCE VS EXPLAINABILITY • [Same chart as the previous slide, annotated with three XAI strategies] • Deep Explanation: modified deep learning techniques to learn explainable features • Interpretable Models: techniques to learn more structured, interpretable, causal models • Model Induction: techniques to infer an explainable model from any model treated as a black box • Source: DARPA XAI

  8. INTERPRETABLE MODELS • Example: Human-level concept learning through probabilistic program induction (Lake et al. 2015, Science) • “Model represents concepts as simple programs that best explain observed examples under a Bayesian criterion” • Bayesian Program Learning: learn visual concepts from just a single example and generalise in a human-like fashion, i.e. one-shot learning • Key ideas: compositionality, causality and learning to learn • Note: interpretable and explainable are not necessarily the same

  9. INTERPRETABLE MODEL Source: DARPA XAI

  10. DATASET • Omniglot dataset of 1623 handwritten characters from 50 writing systems • Both images and pen strokes were collected • Good for comparing humans and machine learning: the characters are cognitively natural and are used as benchmarks for ML algorithms

  11. GENERATIVE MODEL

  12. BAYESIAN PROGRAM LEARNING • Lake et al. showed that it is possible to perform one-shot learning at human-level accuracy • Most judges couldn’t distinguish between the machine-generated and human-generated characters in “visual Turing” tests • However, BPL still sees less structure in visual concepts than humans do • Also, what’s the relationship with explainability?

  13. MODEL INDUCTION • Example: Bayesian Rule Lists (Letham et al., Annals of Applied Statistics, 2015) • Aim: predictive models that are not only accurate but also interpretable to human experts • Models are decision lists, which consist of a series of if ... then ... statements (see the sketch below) • Approach: produce a posterior distribution over permutations of if ... then ... rules, starting from a large, pre-mined set of possible rules • Used to develop interpretable patient-level predictive models from massive observational medical data
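As a concrete illustration of what a learned decision list looks like, here is a minimal sketch of one after induction: an ordered sequence of if/then statements ending in a default rule. The conditions and probabilities below are invented for illustration and are not taken from Letham et al.’s model.

```python
# Sketch of a (hypothetical) learned rule list for patient-level risk prediction:
# rules are checked in order and the first one that matches determines the output.
def predict_risk(patient):
    if patient.get("hemiplegia") and patient.get("age", 0) > 60:
        return 0.58          # if hemiplegia and age > 60 then risk 58%
    elif patient.get("cerebrovascular_disorder"):
        return 0.47          # else if cerebrovascular disorder then risk 47%
    elif patient.get("transient_ischaemic_attack"):
        return 0.24          # else if transient ischaemic attack then risk 24%
    else:
        return 0.05          # else (default rule) risk 5%

print(predict_risk({"age": 72, "hemiplegia": True}))   # 0.58
```

The interpretability comes from the whole model being this short, ordered list of readable conditions; the Bayesian part of BRL is about placing a posterior over such lists.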

  14. BAYESIAN RULE LISTS • BRLs can discretise a high-dimensional, multivariate feature space into a series of interpretable decision statements, e.g. 4146 unique medications and conditions for the task above, plus age and gender • Experiments showed that BRLs have predictive accuracy on par with the top ML algorithms of the time (roughly 85-90% as effective) but with models that are much more interpretable • For technical details of the underlying generative model, MCMC sampling, etc., see the paper


  15. USING VISUALISATION • Example: Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention (Kim and Canny, ICCV 2017) • If deep neural perception and control networks are to be a key component of self-driving vehicles, these models need to be explainable • Visual explanations in the form of real-time highlighted regions of an image that causally influence the car’s steering control (i.e. the deep network output)

  16. USING VISUALISATION • The method highlights regions that causally influence deep neural perception and control networks for self-driving cars • The visual attention model is augmented with an additional layer of causal filtering • Does this correspond to where a driver would gaze, though? (a minimal attention-map sketch follows below)
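As a minimal sketch of the kind of spatial attention map being visualised, the snippet below (assumed shapes and names, not the authors’ code) turns a convolutional feature map into a normalised heat map that can be overlaid on the input frame.

```python
# Hedged sketch: softmax-normalised spatial attention over a conv feature map.
import numpy as np

def attention_heatmap(features, w):
    """features: (H, W, C) conv activations; w: (C,) learned projection (both assumed)."""
    scores = features @ w                          # (H, W) relevance score per cell
    scores = scores - scores.max()                 # numerical stability for the softmax
    alpha = np.exp(scores) / np.exp(scores).sum()  # weights sum to 1 over all H*W cells
    return alpha                                   # upsample and overlay on the frame to visualise

# Toy usage with random values standing in for a real network's activations
heatmap = attention_heatmap(np.random.rand(10, 20, 64), np.random.rand(64))
print(heatmap.shape, heatmap.sum())   # (10, 20) 1.0
```

The additional causal-filtering layer mentioned above, roughly, masks out attended regions and keeps only those whose removal actually changes the steering output.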

  17. USING VISUALISATION AND TEXT • A video-to-text language model produces textual rationales that justify the model’s decision • The explanation generator uses a spatio-temporal attention mechanism, which is encouraged to match the controller’s attention • Source: Show, Attend, Control, and Justify: Interpretable Learning for Self-Driving Cars. Kim et al., Interpretable ML Symposium (NIPS 2017)

  18. COMBINING DEEP LEARNING AND SYMBOLIC/LOGICAL REASONING (Just when you thought you were done with logic programming) • Example: DeepProbLog: Neural Probabilistic Logic Programming (Manhaeve et al., 2018) • Deep learning + ProbLog = DeepProbLog! • An approach that aims to “integrate low-level perception with high-level reasoning” • Incorporates DL into probabilistic LP such that • probabilistic/logical modelling and reasoning is possible • general-purpose NNs can be used • end-to-end training is possible • (a sketch of what a DeepProbLog program looks like follows below)
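For a feel of what such a program looks like, below is a paraphrase of the MNIST-addition example from the DeepProbLog paper, embedded as a Python string purely for illustration; the exact syntax and the training API of the authors’ library may differ from this sketch.

```python
# Sketch (paraphrased from Manhaeve et al.): a neural predicate plus ordinary logic.
MNIST_ADDITION = """
% neural predicate: a CNN (here called mnist_net) maps an image X to a digit Y
nn(mnist_net, [X], Y, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) :: digit(X, Y).

% ordinary ProbLog clause built on top of the neural predicate
addition(X, Y, Z) :- digit(X, N1), digit(Y, N2), Z is N1 + N2.
"""

# Training supervises only the sum, e.g. addition(img_3, img_5, 8); gradients flow
# through the differentiable ProbLog semantics back into the CNN's weights.
print(MNIST_ADDITION)
```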

  19. DEEP PROBLOG Source: Manhaeve et al., DeepProbLog: Neural Probabilistic Logic Programming

  20. DEEP PROBLOG Source: Manhaeve et al., DeepProbLog: Neural Probabilistic Logic Programming

  21. EXAMPLE Source: Manhaeve et al., DeepProbLog: Neural Probabilistic Logic Programming

  22. DEEP PROBLOG • Relies on the fact that there is a differentiable version of ProbLog that allows the parameters of the logic program to be updated using gradient descent • Seamless integration with NN training (which uses backprop) is thus possible • Allows for a combination of “symbolic and sub-symbolic reasoning and learning, program induction and probabilistic reasoning” • For technical details, consult the paper by Manhaeve et al.

  23. DEEP + SYMBOLIC • Recent exciting work (aside from the ProbLog-based one above): • Harnessing Deep Neural Networks with Logic Rules (Hu et al., 2016) • End-to-End Differentiable Proving (Rocktäschel and Riedel, 2017) • Logical Rule Induction and Theory Learning Using Neural Theorem Proving (Campero et al., 2018) • Planning Chemical Syntheses with Deep Neural Networks and Symbolic AI (Segler et al., 2018) • and many more…

  24. CONCLUSION • Current ML models tend to be opaque and there’s an urgent need for interpretability/explainability to ensure fairness, trust, safety, etc. • Rapidly moving research area that is still wide open because (among many other issues): • It’s unclear to many ML/DL researchers how their models actually reach their decisions (let alone how to come up with explanations or interpretations, or both) • What is an explanation anyway? • DARPA XAI is a step in the right direction (but there are other initiatives and views on the topic) • Logical/symbolic representations and inference provide high-level (abstract) means of reasoning, and these are usually explainable too • Combining probabilistic, symbolic and sub-symbolic reasoning and learning seems promising • Finally, as this course has tried to emphasise, AI ≠ ML (or DL)
