

  1. Provisioning Robust & Interpretable AI/ML-based Service Bundles Alun Preece, Dan Harborne (Cardiff), Ramya Raghavendra (IBM US), Richard Tomsett, Dave Braines (IBM UK)

  2. Context: DAIS-ITA Distributed Analytics & Information Sciences International Technology Alliance

  3. AI/ML Robustness & Interpretability
     - Robustness of AI/ML methods to handle unknowns:
       - Known unknowns: uncertainties explicitly captured in the AI/ML methods
       - Unknown unknowns: uncertainties not captured by the AI/ML methods
     - Interpretability of AI/ML methods to improve decision making:
       - Decisions must be justifiable, which is hard when the AI/ML system is a black box
       - Users may not trust un-interpretable methods and may revert to less accurate ones

  4. How do we achieve dynamic AI/ML service provisioning while meeting requirements for robustness and interpretability?

  5. Prior Work: Task-Bundle-Asset (TBA) Model (Pizzocaro et al., 2011)

  6. Robust & Interpretable TBA?
     - “Portfolio” approach to improving robustness: use many, diverse services, e.g. both reasoning-based and ML-based approaches
     - A coalition collectively has a greater diversity of assets: good for robustness!
     - But: current optimal asset allocation methods disfavor bundles with redundant assets (a diversity-aware scoring sketch follows below)
     - Interpretability requirements must also be explicitly modeled
     - But: there are varying definitions/conceptions of interpretability, and a lack of formal ontologies
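To make the allocation point concrete, here is a minimal sketch of how a bundle score could reward, rather than penalize, methodological diversity. The `Asset`/`bundle_score` names, the weights, and the scoring formula are all illustrative assumptions; the deck does not specify an allocation algorithm.

```python
# Hypothetical sketch only: a bundle score with a diversity bonus.
# Asset, bundle_score, and all weights are illustrative, not from the deck.
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    name: str
    approach: str              # e.g. "ml" or "reasoning"
    capabilities: frozenset    # task requirements this asset can meet

def bundle_score(bundle, required, diversity_weight=0.5, asset_cost=0.1):
    """Coverage of task requirements, plus a bonus for mixing approaches.

    With diversity_weight=0 this behaves like a classical allocator that
    treats a second, capability-overlapping asset as pure cost; with the
    bonus, a redundant-but-diverse "portfolio" bundle can win.
    """
    covered = set().union(*(a.capabilities for a in bundle))
    coverage = len(covered & required) / len(required)
    diversity = len({a.approach for a in bundle}) - 1
    return coverage + diversity_weight * diversity - asset_cost * len(bundle)

required = {"detect:congestion"}
ml = Asset("cnn-classifier", "ml", frozenset(required))
rb = Asset("traffic-rules", "reasoning", frozenset(required))

print(bundle_score([ml], required))      # 0.9  (single ML asset)
print(bundle_score([ml, rb], required))  # 1.3  (diverse redundant bundle wins)
```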

  7. Example task: congestion detection

  8. Example LIME saliency map

  9. Example SSO output

  10. Original TBA model
      TASK: type (e.g. detect), target (e.g. congestion), area of interest, temporal interval
      ASSET: type, capabilities, deployment constraints
      BUNDLE: set of assets meeting all task requirements

  11. Extended TBA model
      TASK: type (e.g. detect), target (e.g. congestion), area of interest, temporal interval, accuracy requirements, interpretability requirements, robustness requirements
      ASSET: type, capabilities, deployment constraints, kinds of interpretability
      BUNDLE: set of assets meeting all task requirements
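The extended model is straightforward to render as plain data structures. The sketch below mirrors the fields on the slide; the class layout, field types, and the `satisfies` check are our illustrative assumptions.

```python
# Illustrative rendering of the extended TBA model; field names follow the
# slide, everything else (types, the satisfies check) is an assumption.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_type: str                 # e.g. "detect"
    target: str                    # e.g. "congestion"
    area_of_interest: str
    temporal_interval: tuple       # (start, end)
    accuracy_requirements: float = 0.0
    interpretability_requirements: set = field(default_factory=set)
    robustness_requirements: set = field(default_factory=set)

@dataclass
class Asset:
    asset_type: str
    capabilities: set
    deployment_constraints: set
    kinds_of_interpretability: set = field(default_factory=set)

@dataclass
class Bundle:
    assets: list                   # set of assets meeting all task requirements

def satisfies(bundle: Bundle, task: Task) -> bool:
    """True when the bundle's assets collectively meet the task's
    interpretability requirements (one of the new constraints)."""
    provided = set().union(*(a.kinds_of_interpretability for a in bundle.assets))
    return task.interpretability_requirements <= provided
```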

  12. Interpretability for the Congestion Detection task
      Super-types:
      - Transparent explanation
      - Post-hoc explanation
      Sub-types:
      - Reasoning trace (transparent)
      - Saliency map (post hoc)
      - Explanation by example (post hoc)
      - SSO
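One possible encoding of this typology, so that a task's interpretability requirement can be checked structurally. The hierarchy is taken from the slide; the code layout is an assumption, and the slide does not state SSO's super-type.

```python
# The slide's typology as a small class hierarchy (layout is our assumption).
class Explanation: pass
class TransparentExplanation(Explanation): pass
class PostHocExplanation(Explanation): pass

class ReasoningTrace(TransparentExplanation): pass
class SaliencyMap(PostHocExplanation): pass
class ExplanationByExample(PostHocExplanation): pass
class SSO(Explanation): pass   # super-type not stated on the slide

def meets_requirement(produced: type, required: type) -> bool:
    # A produced explanation kind satisfies any required super-type.
    return issubclass(produced, required)

assert meets_requirement(SaliencyMap, PostHocExplanation)
assert not meets_requirement(ReasoningTrace, PostHocExplanation)
```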

  13. Future challenges
      - Extending/refining the interpretability typology
      - Extending & refining the model for more generic applicability

  14. “Interpretable to Whom?” framework (WHI workshop at ICML 2018, https://arxiv.org/abs/1806.07552). Argues that a machine learning system’s interpretability should be defined in relation to a specific agent and task: we should not ask whether the system is interpretable, but to whom it is interpretable.

  15. “Interpretable to Whom?” framework (continued): the accompanying diagram illustrates the same point with example agents, such as a Coalition Engineer, a Mission Commander, and an Analyst.

  16. Explanation generation comparison tool (image credit: U.S. Army/Sgt. Edwin Bridges)
      Need: a tool for comparing and showcasing interpretability techniques.
      Capability: an open-source framework offering data generation across an evolving set of datasets, models and explanation techniques.
      Takeaways:
      - For developers: an API framework allowing for the development of a research data generation pipeline.
      - For subject matter experts: access to the currently available ML explanation techniques.
      - For DAIS research colleagues: opportunities to collaborate!
      Open-sourced at: github.com/dais-ita/interpretability_framework

  17. Framework features:
      - Easy comparison of existing (and new) interpretability techniques.
      - Data generation for experiments, for both analytical and human-based testing.
      - A sharable tool that, through open-sourcing, can help engage the wider machine learning community.

  18. Experimentation Framework - Datasets, Models and Explanations
      Datasets: Gun Wielder Classification, Traffic Congestion, CIFAR-10, MNIST, ImageNet
      Models: neural networks (VGG16, VGG19, InceptionV3, Inception ResNet V2, MobileNet, Xception); other models (Support Vector Machine)
      Explanation techniques: LIME, Shapley, Deep Taylor LRP, Influence Functions
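As a taste of one listed pairing, here is how LIME can be run against a Keras VGG16 on its own, outside the framework. This uses the public `lime` and `keras` APIs, not the framework's own interface; "traffic.jpg" is a placeholder path.

```python
# Standalone sketch: LIME saliency for a Keras VGG16 (public lime/keras APIs;
# "traffic.jpg" is a placeholder path, not a file from the framework).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image as keras_image
from lime import lime_image

model = VGG16(weights="imagenet")

def predict_fn(images):
    # LIME hands us a batch of perturbed images; return class probabilities.
    return model.predict(preprocess_input(np.array(images)))

img = keras_image.img_to_array(
    keras_image.load_img("traffic.jpg", target_size=(224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"), predict_fn,
    top_labels=3, hide_color=0, num_samples=1000)

# Boolean saliency mask for the top predicted class.
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
```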

  19. Experimentation Framework - Datasets, Models and Explanations

  20. Experimentation Framework – Use Cases
      Use Case 1 – Multiple Explanation Techniques on the Same Input:
      - Build intuition for different techniques.
      - Compare the utility of different techniques: do different users (with different roles/knowledge) prefer different techniques? (A generic comparison sketch follows.)
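A generic shape for Use Case 1, with the technique backends left abstract. The framework's own API is not shown on the slides, so the function signatures here are assumed conventions.

```python
# Use Case 1 sketch: apply several techniques to one input and collect the
# saliency outputs side by side. The fn(img, predict_fn) signature is an
# assumed convention, not the framework's documented API.
def compare_techniques(img, predict_fn, techniques):
    """techniques: mapping name -> fn(img, predict_fn) returning a saliency map."""
    return {name: fn(img, predict_fn) for name, fn in techniques.items()}

# e.g. masks = compare_techniques(img, predict_fn,
#                                 {"lime": lime_explain, "shap": shap_explain})
# then show the masks to users in different roles to compare utility.
```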

  21. Experimentation Framework – Use Cases
      Use Case 2 – Multiple Explanations from the Same Technique:
      - Build intuition for how stable the technique is.
      - Generate data for experiments.
      - Refine techniques. (A stability sketch follows.)
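One way Use Case 2's stability question could be quantified: rerun LIME with different seeds and score the pairwise overlap of the resulting masks. The Jaccard metric and helper names are our assumptions; only the `lime` calls are real API, and `img`/`predict_fn` come from the earlier VGG16 sketch.

```python
# Use Case 2 sketch: mask stability across repeated LIME runs, scored by
# Jaccard overlap (our chosen metric, not prescribed by the framework).
import numpy as np
from lime import lime_image

def explain_once(img, predict_fn, seed):
    explainer = lime_image.LimeImageExplainer(random_state=seed)
    exp = explainer.explain_instance(img, predict_fn,
                                     top_labels=1, num_samples=1000)
    _, mask = exp.get_image_and_mask(exp.top_labels[0], positive_only=True,
                                     num_features=5, hide_rest=False)
    return mask.astype(bool)

def jaccard(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def stability(img, predict_fn, runs=5):
    masks = [explain_once(img, predict_fn, seed) for seed in range(runs)]
    scores = [jaccard(masks[i], masks[j])
              for i in range(runs) for j in range(i + 1, runs)]
    return float(np.mean(scores))   # 1.0 means identical masks on every run
```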

  22. Experimentation Framework - Architecture

  23. Extending the conceptual model (work in progress!)
      [Concept-map figure relating tasks, task topics, running services, trained models, datasets, explanations, service status and location, and error states (environment error, task matching error), via relations such as "uses", "runs in", "is built on", "is supported by" and "can be supported by".]

  24. Extending the conceptual model (work in progress!)
      [Concept-map figure relating models, datasets and explanations via licenses, modalities, layer stacks, trained models, resource-usage estimates, outputs/labels and additional arguments, and distinguishing compatibility with post-hoc vs. transparent explanation.]

  25. Summary / conclusion
      Considered the problem of service bundle provision with additional constraints:
      - Provisioning suitably diverse services to promote robustness
      - Incorporating interpretability into the selection of service bundles
      - Provided an initial typology of interpretability requirements
      - ... but this is work in progress!

  26. Thanks for listening! Any questions? This research was sponsored by the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the UK Ministry of Defence or the UK Government. The U.S. and UK Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
