Provisioning Robust & Interpretable AI/ML-based Service Bundles



SLIDE 1

Provisioning Robust & Interpretable AI/ML-based Service Bundles

Alun Preece, Dan Harborne (Cardiff), Ramya Raghavendra (IBM US), Richard Tomsett, Dave Braines (IBM UK)

SLIDE 2

Context: DAIS-ITA

Distributed Analytics & Information Sciences International Technology Alliance

SLIDE 3

SLIDE 4

AI/ML Robustness & Interpretability

  • Robustness of AI/ML methods to handle unknowns:
    • Known unknowns: uncertainties explicitly captured in the AI/ML methods
    • Unknown unknowns: uncertainties not captured by the AI/ML methods
  • Interpretability of AI/ML methods to improve decision making:
    • Decisions must be justifiable – hard when the AI/ML system is a black box
    • Users may not trust un-interpretable methods & revert to less accurate ones
SLIDE 5

How do we achieve dynamic AI/ML service provisioning while meeting requirements for robustness and interpretability?

SLIDE 6

Prior Work: Task-Bundle-Asset (TBA) Model

Pizzocaro et al. 2011

SLIDE 7

Robust & Interpretable TBA?

  • “Portfolio” approach to improving robustness: use many, diverse services, e.g. both reasoning & ML-based approaches
  • Coalition collectively has greater diversity of assets: good for robustness!
  • But: current optimal asset allocation methods disfavor bundles with redundant assets
  • Interpretability requirements must also be explicitly modeled
  • But: varying definitions/conceptions of interpretability, lack of formal ontologies
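The tension between the portfolio approach and allocators that penalise redundancy can be sketched as a toy scoring rule that instead pays a bonus for bundles spanning diverse asset types. Everything here (the coverage map, the prefix-based asset typing, the diversity weight) is an illustrative assumption, not the allocation method of Pizzocaro et al.:

```python
from itertools import combinations

def bundle_score(bundle, coverage, diversity_weight=0.5):
    """Score a candidate bundle: task coverage plus a diversity bonus.

    `coverage` maps each asset to the set of task requirements it meets.
    A conventional allocator would penalise redundant assets; the bonus
    here instead rewards bundles mixing distinct asset types.
    """
    met = set().union(*(coverage[a] for a in bundle))
    types = {a.split("-")[0] for a in bundle}   # e.g. "cctv-1" -> "cctv"
    return len(met) + diversity_weight * len(types)

def best_bundle(assets, coverage, requirements, max_size=3):
    """Return the highest-scoring bundle that meets every hard requirement."""
    best, best_score = None, float("-inf")
    for k in range(1, max_size + 1):
        for bundle in combinations(assets, k):
            met = set().union(*(coverage[a] for a in bundle))
            if not requirements <= met:
                continue                        # fails a hard task requirement
            score = bundle_score(bundle, coverage)
            if score > best_score:
                best, best_score = bundle, score
    return best
```

With, say, two near-identical cameras and one acoustic sensor, the diversity bonus ranks a camera-plus-acoustic pairing above adding a redundant second camera.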

SLIDE 8

Example task: congestion detection

SLIDE 9

SLIDE 10

Example LIME saliency map
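LIME produces a saliency map by perturbing interpretable components of the input (e.g. superpixels) and fitting a weighted linear surrogate to the black-box model's scores. A numpy-only toy sketch of that idea, where `image_regions` (a pre-computed segmentation) and `predict` (the black-box scorer) are hypothetical stand-ins, not the real `lime` package API:

```python
import numpy as np

def lime_weights(image_regions, predict, num_samples=500, kernel_width=0.25, seed=0):
    """Toy LIME-style attribution over pre-segmented image regions.

    Returns one weight per region: higher means more salient for the score.
    """
    rng = np.random.default_rng(seed)
    n = len(image_regions)
    masks = rng.integers(0, 2, size=(num_samples, n))   # random on/off regions
    X, y, w = [], [], []
    for m in masks:
        img = sum(r for r, on in zip(image_regions, m) if on)
        if not isinstance(img, np.ndarray):             # all regions switched off
            img = np.zeros_like(image_regions[0])
        X.append(m)
        y.append(predict(img))
        dist = 1.0 - m.mean()                           # proximity to full image
        w.append(np.exp(-(dist ** 2) / kernel_width ** 2))
    # weighted least squares with an intercept column
    Xb = np.hstack([np.array(X, float), np.ones((num_samples, 1))])
    sw = np.sqrt(np.array(w))[:, None]
    beta, *_ = np.linalg.lstsq(sw * Xb, sw.ravel() * np.array(y, float), rcond=None)
    return beta[:-1]                                    # drop the intercept
```

The surrogate's coefficients are then painted back onto the regions to give the saliency map shown on the slide.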

SLIDE 11

SLIDE 12

Example SSO output

SLIDE 13

Original TBA model

TASK:

  • type (e.g. detect)
  • target (e.g. congestion)
  • area of interest
  • temporal interval

ASSET:

  • type
  • capabilities
  • deployment constraints

BUNDLE:

  • set of assets meeting all task requirements

SLIDE 14

Extended TBA model

TASK:

  • type (e.g. detect)
  • target (e.g. congestion)
  • area of interest
  • temporal interval
  • accuracy requirements
  • interpretability requirements
  • robustness requirements

ASSET:

  • type
  • capabilities
  • deployment constraints
  • kinds of interpretability

BUNDLE:

  • set of assets meeting all task requirements
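The extended model can be captured as plain data structures plus a collective satisfaction check over a bundle. The field names, and encoding the robustness requirement as a minimum number of distinct asset types, are illustrative assumptions rather than the model's actual formalisation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    type: str                     # e.g. "detect"
    target: str                   # e.g. "congestion"
    area_of_interest: str
    temporal_interval: tuple
    accuracy_requirement: float = 0.0
    interpretability_requirements: set = field(default_factory=set)
    robustness_requirement: int = 1   # assumed: min number of distinct asset types

@dataclass
class Asset:
    type: str
    capabilities: set
    deployment_constraints: set = field(default_factory=set)
    interpretability_kinds: set = field(default_factory=set)  # e.g. {"saliency map"}

def bundle_satisfies(task, bundle):
    """A bundle meets the task if its assets collectively cover the
    interpretability requirements and span enough distinct types."""
    kinds = set().union(*(a.interpretability_kinds for a in bundle)) if bundle else set()
    types = {a.type for a in bundle}
    return (task.interpretability_requirements <= kinds
            and len(types) >= task.robustness_requirement)
```

Note the check is over the bundle as a whole: no single asset needs every kind of interpretability, mirroring the portfolio idea from earlier slides.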

SLIDE 15

Interpretability for Congestion Detection task

Super-types:

  • Transparent explanation
  • Post-hoc explanation

Sub-types:

  • Reasoning trace (transparent)
  • Saliency map (post hoc)
  • Explanation by example (post hoc)
  • SSO
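A minimal machine-readable sketch of this typology; the type names come from the slide, the dictionary structure is an assumption, and SSO is omitted because the slide leaves its super-type open:

```python
# Hypothetical encoding of the slide's interpretability typology:
# two super-types, each owning the sub-types an asset can declare.
INTERPRETABILITY_TYPOLOGY = {
    "transparent explanation": {"reasoning trace"},
    "post-hoc explanation": {"saliency map", "explanation by example"},
}

def super_type(sub_type):
    """Return the super-type an explanation sub-type falls under."""
    for sup, subs in INTERPRETABILITY_TYPOLOGY.items():
        if sub_type in subs:
            return sup
    raise KeyError(f"unknown explanation sub-type: {sub_type}")
```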
SLIDE 16

Future challenges

  • Extending/refining interpretability typology
  • Extending & refining the model for more generic applicability
SLIDE 17

“Interpretable to Whom?” framework

WHI workshop at ICML 2018 https://arxiv.org/abs/1806.07552

Argues that a machine learning system’s interpretability should be defined in relation to a specific agent & task: we should not ask if the system is interpretable, but to whom is it interpretable.

SLIDE 18

“Interpretable to Whom?” framework


Mission Commander · Analyst · Coalition Engineer

SLIDE 19

(Credit: U.S. Army/Sgt. Edwin Bridges)

Tool for comparing and showcasing interpretability techniques

Need: An open-source framework offering data generation across an evolving set of datasets, models and explanation techniques.

Capability: An explanation generation comparison tool, open-sourced at github.com/dais-ita/interpretability_framework

Takeaways:

  • For developers – an API framework allowing for development of a research data generation pipeline.
  • For subject matter experts – access to the currently available ML explanation techniques.
  • For DAIS research colleagues – opportunities to collaborate!

SLIDE 20

Framework features:

  • Easy Comparison of existing (and new) interpretability techniques.
  • Generating Data for experiments, both analytical and human-based testing.
  • Sharable Tool that, through open-sourcing, can help engage the wider machine learning community.

SLIDE 21

Datasets:

  • Gun Wielder Classification
  • Traffic Congestion
  • CIFAR-10
  • MNIST
  • ImageNet

Models – Neural Networks:

  • VGG16
  • VGG19
  • InceptionV3
  • Inception ResNet V2
  • MobileNet
  • Xception

Other Models:

  • Support Vector Machine

Explanation Techniques:

  • LIME
  • Shapley
  • Deep Taylor LRP
  • Influence Functions
Experimentation Framework - Datasets, Models and Explanations

SLIDE 22

Experimentation Framework - Datasets, Models and Explanations

SLIDE 23

Experimentation Framework – Use Cases

Use Case 1 – Multiple Explanation Techniques on the Same Input:

  • Build Intuition for different techniques.
  • Compare Utility of different techniques – do different users (with different roles/knowledge) prefer different techniques?

SLIDE 24

Experimentation Framework – Use Cases

Use Case 2 – Multiple Explanations from the Same Technique:

  • Build Intuition for how stable the technique is.
  • Generating Data for experiments.
  • Refine techniques.
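Use case 2's stability question can be quantified for a stochastic technique (such as LIME, whose sampling makes repeated runs differ) by re-running it on the same input and correlating the attribution maps. `explain` here is a hypothetical callable, not the framework's actual API:

```python
import numpy as np

def explanation_stability(explain, image, runs=10):
    """Run a stochastic explanation technique several times on one input
    and report the mean pairwise Pearson correlation of the resulting
    attribution maps (1.0 = perfectly stable).

    `explain(image)` is assumed to return an attribution array and may be
    stochastic (e.g. sampling-based, like LIME).
    """
    maps = np.array([np.ravel(explain(image)) for _ in range(runs)])
    corr = np.corrcoef(maps)                    # runs x runs correlation matrix
    off_diag = corr[~np.eye(runs, dtype=bool)]  # drop trivial self-correlations
    return float(off_diag.mean())
```

A deterministic technique scores 1.0; scores well below that flag explanations that shift from run to run, which matters before putting them in front of a mission commander or analyst.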
SLIDE 25

Experimentation Framework - Architecture

SLIDE 26

[Diagram: conceptual model linking task, topic, trained model, dataset, explanation, running service, service, location, environment and a task-matching service, each with status/error states, via relations such as “is supported by”, “can be supported by”, “relates to”, “is built on”, “uses”, “contains” and “runs in”.]

Extending the conceptual model (work in progress!)

SLIDE 27

[Diagram: extended conceptual model linking model, trained model, dataset, license, explanation, layer stack, label layer, output, resource usage estimate, argument and modality, via relations such as “is based on”, “uses label layer”, “is compatible with”, “additional argument”, “additional output” and “resource usage”, and distinguishing post-hoc from transparent explanations.]

Extending the conceptual model (work in progress!)

SLIDE 28

Summary / conclusion

Considered the problem of service bundle provision with additional constraints:

  • Provisioning suitably diverse services to promote robustness
  • Incorporating interpretability into the selection of service bundles
  • Provided an initial typology of interpretability requirements
  • … but this is work in progress!
SLIDE 29

Thanks for listening! Any questions?

This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.