

  1. Stakeholders in Explainable AI Alun Preece, Dan Harborne (Cardiff), Dave Braines, Richard Tomsett (IBM UK), Supriyo Chakraborty (IBM US) https://arxiv.org/abs/1810.00184

  2. Explainability in GOFAI Explainability in AI is not a new problem. In the last ‘AI summer’ (the expert systems boom, a.k.a., ‘Good Old Fashioned AI’) it was acknowledged that explainability was needed for • system development • gaining end-user trust It was also realized that these require different forms of explanation, framed by developers’ vs end-users’ conceptual models. The problem wasn’t solved before funding dried up!

  3. Interpretability in AI now ‘Interpretability’ is now preferred over ‘explainability’. For our purposes, an explanation is a message intended to convey the cause and reason for a system output; an interpretation is the understanding gained by the recipient of the message. We can still observe different motivations: • understanding how modern ‘deep learning’ based systems actually work • gaining end-user trust Again, different forms of explanation are required, due to the differing conceptual models of the recipients…

  4. Transparency vs post hoc explanation A useful distinction (Lipton 2017) • Transparency reveals the internal workings of an AI system (e.g., deep neural network model; decision tree; rule base) • Post hoc methods construct ‘rationalizations’ of the internal workings ‘after the fact’ (e.g., visualization of salient features, explanation by similar training examples) Notes: • ‘Full technical transparency’ is generally considered infeasible • Explanations from humans are always post hoc

  5. “Interpretable to Whom?” framework WHI workshop at ICML 2018 https://arxiv.org/abs/1806.07552 Argues that a machine learning system’s interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.

  6. Four stakeholder communities Developers are chiefly concerned with building AI applications. Theorists are chiefly concerned with understanding and advancing AI. Ethicists are chiefly concerned with fairness, accountability, transparency and related societal aspects of AI. Users are chiefly concerned with using AI systems. The first three of these communities are well-represented in the AI interpretability literature. The fourth will ultimately determine how long summer lasts.

  7. Verification, validation & explanation Verification is about ‘building the system right’. Validation is about ‘building the right system’. [⚠ Overgeneralization alert] • Developer / theorist communities are more focused on verification • User / ethicist communities are more focused on validation It is hard to envisage verification without (some) transparency. Post hoc explanations are valuable for validation.

  8. Rumsfeldian perspective • Known knowns: things we train an AI system to know • Known unknowns: things we train an AI system to predict • Unknown knowns: things we know the AI system doesn’t know (i.e., things outside its bounds) • Unknown unknowns: the key area of concern for all communities! Verification tends to be used to define the space of knowns. Validation is essential to define the space of unknowns.

  9. Transparency-based explanations Methods that derive at least in part from internal states of an AI system, e.g., deep Taylor decomposition, Google feature viz. Caveats: • Concerns that these methods are non-axiomatic • Can be hard to interpret by user & ethicist communities • Where explanation does not clearly highlight meaningful features of the input, may make recipients less inclined to trust the system… Traffic congestion classifier: ‘congested’ image explanation by deep Taylor decomposition
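As a much simpler illustration of the transparency-based idea (deriving the explanation from the model’s internals), the sketch below computes a plain input-gradient saliency map rather than deep Taylor decomposition. It assumes a PyTorch image classifier `model` and a preprocessed input tensor of shape (1, C, H, W); the names `congestion_model` and `CONGESTED` in the usage comment are hypothetical.

```python
# Minimal gradient-saliency sketch (a stand-in for richer transparency-based
# methods such as deep Taylor decomposition). Assumes a PyTorch classifier.
import torch

def gradient_saliency(model, image, target_class):
    """Return an (H, W) map of |d class-score / d input pixel|."""
    model.eval()
    x = image.detach().clone().requires_grad_(True)  # track gradients w.r.t. the input
    score = model(x)[0, target_class]                # logit of the class being explained
    score.backward()                                 # backprop down to the input pixels
    saliency = x.grad.abs().max(dim=1)[0]            # max over colour channels -> (1, H, W)
    return saliency.squeeze(0)                       # heat map over the input image

# Hypothetical usage: explain a 'congested' prediction of a traffic classifier
# saliency = gradient_saliency(congestion_model, image, target_class=CONGESTED)
```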

  10. Post hoc explanations Important to differentiate post hoc explanations, e.g., LIME local approximations, from transparency-based explanations when presenting them to recipients. Explanation features favoured by subject-matter experts include • examples • counterfactuals • background knowledge • text narrative ‘Congested’: explanation by example via influence functions
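To make the ‘local approximation’ idea concrete, here is a minimal LIME-style surrogate sketch for a tabular black-box classifier: perturb the instance, query the model, weight the perturbed points by proximity, and fit a weighted linear model whose coefficients act as a local explanation. The function name and parameters (`predict_fn`, `scale`, etc.) are assumptions for illustration; this is not the LIME library’s own implementation.

```python
# LIME-style local surrogate sketch for a binary tabular classifier.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, instance, num_samples=1000, scale=0.5, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    d = instance.shape[0]
    # 1. Perturb the instance with Gaussian noise to probe its neighbourhood
    samples = instance + rng.normal(0.0, scale, size=(num_samples, d))
    # 2. Query the black-box model on the perturbed points (probability of class 1)
    targets = predict_fn(samples)[:, 1]
    # 3. Weight samples by proximity to the original instance (RBF kernel)
    dists = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # 4. Fit a weighted linear model; its coefficients are the local explanation
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, targets, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance around `instance`

# Hypothetical usage: coefs = local_surrogate(model.predict_proba, x_row)
```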

  11. Layered explanations Composite explanation object that can be unpacked per a recipient’s requirements. Layer 1 — Traceability: transparency-based bindings to internal states of the system showing the system ‘did the thing right’ [main stakeholders: developers and theorists] Layer 2 — Justification: post-hoc semantic relationships between input and output features showing the system ‘did the right thing’ [main stakeholders: developers and users] Layer 3 — Assurance: post-hoc representations with explicit reference to policy/ontology to give confidence that the system ‘does the right thing’ [main stakeholders: users and ethicists]
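One way the composite explanation object could be represented is sketched below as Python dataclasses; the layer fields and the `unpack_for` method are hypothetical names chosen for illustration, not a data structure defined in the paper, though the stakeholder-to-layer mapping follows the slide above.

```python
# Sketch of a layered, composite explanation object that different stakeholder
# communities can unpack to different depths.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class LayeredExplanation:
    # Layer 1 - traceability: bindings to internal states (e.g. saliency maps)
    trace: Dict[str, Any] = field(default_factory=dict)
    # Layer 2 - justification: semantic input/output relationships
    justification: List[str] = field(default_factory=list)
    # Layer 3 - assurance: policy/ontology references the output is checked against
    assurance: List[str] = field(default_factory=list)

    def unpack_for(self, stakeholder: str) -> Dict[str, Any]:
        """Expose only the layers most relevant to a stakeholder community."""
        views = {
            "developer": {"trace": self.trace, "justification": self.justification},
            "theorist":  {"trace": self.trace},
            "user":      {"justification": self.justification, "assurance": self.assurance},
            "ethicist":  {"assurance": self.assurance},
        }
        return views.get(stakeholder, {})
```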

  12. Example: wildlife monitoring Layer 1 (traceability): saliency map visualisation of input layer features for classification ‘gorilla’ Layer 2 (justification): ‘right for the right reasons’ semantic annotation of salient gorilla features Layer 3 (assurance): counterfactual examples showing that images of humans are not misclassified as ‘gorilla’

  13. Summary / conclusion Explanation in AI is an old problem: ‘debuggability’ shouldn’t be conflated with ‘trustability’. We can identify four communities with different stakes in explanation: developers, theorists, ethicists and users. Developers & theorists tend to focus on verification (knowns); users & ethicists are more interested in validation (unknowns). Can the needs of multiple stakeholders be addressed via ‘joined-up’, composite explanations? Despite a large and growing literature on explanation/interpretation in AI, the voice of users is under-represented. As in the 1980s, it will be the users that determine whether AI thrives!

  14. Thanks for listening! Any questions? This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
