Provisioning Robust & Interpretable AI/ML-based Service Bundles
Alun Preece, Dan Harborne (Cardiff), Ramya Raghavendra (IBM US), Richard Tomsett, Dave Braines (IBM UK)
Context: DAIS-ITA
Distributed Analytics & Information Sciences International Technology Alliance
AI/ML Robustness & Interpretability
- Robustness of AI/ML methods to handle unknowns:
  - Known unknowns: uncertainties explicitly captured in the AI/ML methods
  - Unknown unknowns: uncertainties not captured by the AI/ML methods
- Interpretability of AI/ML methods to improve decision making:
  - Decisions must be justifiable – hard when the AI/ML system is a black box
  - Users may not trust un-interpretable methods & may revert to less accurate ones
How do we achieve dynamic AI/ML service provisioning while meeting requirements for robustness and interpretability?
Prior Work: Task-Bundle-Asset (TBA) Model
Pizzocaro et al. 2011
Robust & Interpretable TBA?
- “Portfolio” approach to improving robustness: use many, diverse services, e.g. both reasoning & ML-based approaches
- Coalition collectively has greater diversity of assets: good for robustness!
- But: current optimal asset allocation methods disfavor bundles with redundant assets (see the scoring sketch below)
- Interpretability requirements must also be explicitly modeled
- But: varying definitions/conceptions of interpretability & lack of formal ontologies
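A minimal scoring sketch (all names hypothetical) of the portfolio idea: reward bundles that mix method families instead of penalising redundant coverage, as a standard least-cost allocation would.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    capabilities: frozenset   # e.g. frozenset({"detect:congestion"})
    family: str               # e.g. "ml" or "reasoning"
    cost: float

def bundle_score(bundle, required, diversity_weight=2.0):
    """Higher is better; bundles missing a requirement score -inf."""
    covered = set().union(*(a.capabilities for a in bundle))
    if not required <= covered:
        return float("-inf")
    diversity = len({a.family for a in bundle})   # distinct method families
    cost = sum(a.cost for a in bundle)
    return diversity_weight * diversity - cost

cnn = Asset("cnn-detector", frozenset({"detect:congestion"}), "ml", 1.0)
sso = Asset("traffic-reasoner", frozenset({"detect:congestion"}), "reasoning", 1.2)
print(bundle_score({cnn}, {"detect:congestion"}))       # 1.0: single asset
print(bundle_score({cnn, sso}, {"detect:congestion"}))  # 1.8: diverse bundle wins
```

Under a pure cost objective the second bundle would lose (its assets are redundant for the requirement); the diversity term is what makes the portfolio approach expressible.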
[Figures: example task (congestion detection), example LIME saliency map, example SSO output]
Original TBA model
TASK:
- type (e.g. detect)
- target (e.g. congestion)
- area of interest
- temporal interval
ASSET:
- type
- capabilities
- deployment constraints
BUNDLE:
- set of assets meeting all task requirements
Extended TBA model
TASK:
- type (e.g. detect)
- target (e.g. congestion)
- area of interest
- temporal interval
- accuracy requirements
- interpretability requirements
- robustness requirements
ASSET:
- type
- capabilities
- deployment constraints
- kinds of interpretability
BUNDLE:
- set of assets meeting all task requirements
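A minimal feasibility sketch (names assumed, not the paper's formalism) for the extended model: a bundle must now jointly cover the task's capability and interpretability requirements.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    type: str                              # e.g. "detect"
    target: str                            # e.g. "congestion"
    required_capabilities: frozenset
    required_interpretability: frozenset   # e.g. frozenset({"saliency map"})

@dataclass(frozen=True)
class Asset:
    name: str
    capabilities: frozenset
    interpretability_kinds: frozenset      # kinds of interpretability offered

def feasible(bundle, task):
    caps = set().union(*(a.capabilities for a in bundle))
    interp = set().union(*(a.interpretability_kinds for a in bundle))
    return (task.required_capabilities <= caps
            and task.required_interpretability <= interp)
```

Accuracy and robustness requirements would be checked analogously, e.g. as thresholds over asset metadata rather than set inclusion.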
Interpretability for Congestion Detection task
Super-types:
- Transparent explanation
- Post-hoc explanation
Sub-types:
- Reasoning trace (transparent)
- Saliency map (post hoc)
- Explanation by example (post hoc)
- SSO
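A minimal encoding sketch (class and member names assumed) of this typology, so tasks and assets can reference the same explanation vocabulary; SSO is omitted here because its super-type is not given on the slide.

```python
from enum import Enum

class SuperType(Enum):
    TRANSPARENT = "transparent explanation"
    POST_HOC = "post-hoc explanation"

class ExplanationKind(Enum):
    REASONING_TRACE = ("reasoning trace", SuperType.TRANSPARENT)
    SALIENCY_MAP = ("saliency map", SuperType.POST_HOC)
    EXPLANATION_BY_EXAMPLE = ("explanation by example", SuperType.POST_HOC)

    def __init__(self, label, super_type):
        self.label = label
        self.super_type = super_type

# e.g. find all kinds satisfying a "transparent explanation" requirement:
transparent = [k for k in ExplanationKind
               if k.super_type is SuperType.TRANSPARENT]
print([k.label for k in transparent])   # ['reasoning trace']
```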
Future challenges
- Extending/refining interpretability typology
- Extending & refining the model for more generic applicability
“Interpretable to Whom?” framework
WHI workshop at ICML 2018 https://arxiv.org/abs/1806.07552
Argues that a machine learning system’s interpretability should be defined in relation to a specific agent & task: we should not ask if the system is interpretable, but to whom is it interpretable.
[Figure: three example agents – Mission Commander, Analyst, Coalition Engineer. Image credit: U.S. Army/Sgt. Edwin Bridges]
Explanation generation comparison tool
Need: an open-source framework offering data generation across an evolving set of datasets, models and explanation techniques; a tool for comparing and showcasing interpretability techniques.
Open-sourced at: github.com/dais-ita/interpretability_framework
- For developers – an API framework allowing development of a research data-generation pipeline (a hypothetical usage sketch follows below).
- For subject matter experts – access to the currently available ML explanation techniques.
- For DAIS research colleagues – opportunities to collaborate!
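Purely illustrative sketch: the endpoint and field names below are assumptions, not the repository's documented API; it only shows the intended pipeline shape (pick a dataset, model and technique, then request an explanation for one input).

```python
import requests

BASE = "http://localhost:5000"           # assumed local deployment

payload = {
    "dataset": "Traffic Congestion",     # from the dataset list below
    "model": "VGG16",                    # from the model list below
    "explanation": "LIME",               # from the technique list below
    "image_id": 42,                      # hypothetical input identifier
}
resp = requests.post(f"{BASE}/explain", json=payload)   # assumed endpoint
resp.raise_for_status()
print(resp.json())   # e.g. a saliency map plus metadata
```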
Takeaways – framework features:
- Easy comparison of existing (and new) interpretability techniques.
- Data generation for experiments, both analytical and human-based testing.
- A sharable tool that, through open-sourcing, can help engage the wider machine learning community.
Datasets
- Gun Wielder Classification
- Traffic Congestion
- CIFAR-10
- MNIST
- ImageNet
Models
Neural networks:
- VGG16
- VGG19
- InceptionV3
- Inception ResNet V2
- MobileNet
- Xception
Other Models:
- Support Vector Machine
Explanation Techniques
- LIME
- Shapley
- Deep Taylor LRP
- Influence Functions
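As a concrete instance of one listed technique, a minimal LIME sketch using the public `lime` package against one of the listed models (VGG16); the input image here is a random stand-in, and the preprocessing details are assumptions.

```python
import numpy as np
from lime import lime_image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")

def classifier_fn(images):
    # LIME passes a batch of perturbed copies of the input image.
    return model.predict(preprocess_input(np.array(images)))

image = np.random.rand(224, 224, 3) * 255   # stand-in; use a real image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, num_samples=1000)
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
# `mask` marks the superpixels driving the top prediction – the
# saliency-map style of post-hoc explanation from the typology above.
```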
Experimentation Framework - Datasets, Models and Explanations
Experimentation Framework – Use Cases
Use Case 1 – Multiple Explanation Techniques on the Same Input:
- Build intuition for different techniques.
- Compare utility of different techniques – do different users (with different roles/knowledge) prefer different techniques? (see the comparison sketch below)
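A sketch of Use Case 1; the wrapper functions are hypothetical stand-ins for the framework's technique adapters, stubbed here so the comparison loop is runnable.

```python
import numpy as np

def explain_with_lime(model, image):
    return np.random.rand(*image.shape[:2])      # stub saliency map

def explain_with_shapley(model, image):
    return np.random.rand(*image.shape[:2])      # stub saliency map

techniques = {"LIME": explain_with_lime, "Shapley": explain_with_shapley}

model, image = None, np.zeros((224, 224, 3))     # stand-ins for real inputs
explanations = {name: fn(model, image) for name, fn in techniques.items()}
for name, saliency in explanations.items():
    print(name, saliency.shape)                  # render these for user studies
```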
Experimentation Framework – Use Cases
Use Case 2 – Multiple Explanations from the Same Technique:
- Build intuition for how stable the technique is (see the stability sketch below).
- Generate data for experiments.
- Refine techniques.
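A sketch of Use Case 2: run one stochastic technique several times on the same input and quantify stability, e.g. by pairwise overlap (IoU) of the resulting saliency masks. `explain` is a hypothetical stand-in for one framework technique.

```python
import itertools
import numpy as np

def explain(image, seed):
    rng = np.random.default_rng(seed)            # stub: stochastic saliency
    return rng.random(image.shape[:2]) > 0.8     # binary saliency mask

image = np.zeros((224, 224, 3))
masks = [explain(image, seed) for seed in range(5)]

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

scores = [iou(a, b) for a, b in itertools.combinations(masks, 2)]
print(f"mean pairwise IoU: {np.mean(scores):.3f}")  # near 1.0 = stable
```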
Experimentation Framework - Architecture
[Architecture diagram: entities include task, topic, trained model, dataset, explanation, running service, service, location, environment, task matching service, status and error; relations include “is supported by”, “can be supported by”, “relates to”, “is built on”, “uses”, “contains” and “runs in”.]
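One possible encoding of these entities as Python dataclasses; the specific relation endpoints are assumptions read off the diagram, not a definitive model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Dataset:
    name: str

@dataclass
class TrainedModel:
    name: str
    built_on: Dataset              # "is built on"

@dataclass
class RunningService:
    uses: TrainedModel             # "uses"
    environment: str               # "runs in"
    status: str                    # e.g. "running"
    error: Optional[str] = None    # error state, if any

@dataclass
class Task:
    topic: str                     # "relates to"
    supported_by: List[RunningService]   # "is supported by"
```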
Extending the conceptual model (work in progress!)
[Extended diagram: adds model, license, explanation, layer stack, label layer, output, resource usage estimate, argument and modality; relations include “is based on”, “uses label layer” and “is compatible with”, with additional arguments, resource usage and additional output modalities distinguishing post-hoc from transparent explanations.]
Summary / conclusion
Considered the problem of service bundle provision with additional constraints:
- Provisioning suitably diverse services to promote robustness
- Incorporating interpretability into the selection of service bundles
- Provided an initial typology of interpretability requirements
- … but this is work in progress!