ILLUMINATING AI: UNDERSTANDING AI'S GOALS, REASONING & COMPROMISES - PowerPoint PPT Presentation
  1. ILLUMINATING AI: UNDERSTANDING AI'S GOALS, REASONING & COMPROMISES [image: AI says “7” (99%)] Tsvi Achler MD PhD

  2. If SIRI makes a mistake, the impact is limited. In most applications a mistake has more serious consequences: Banking | Medicine | Self-Driving Cars. AI adoption requires transparency: trust, regulation, and understanding of the AI’s compromises. But the problem is: AI is a black box.

  3. The Lack of Transparency Leaves You Asking: What Is the AI Really Recognizing? [figure: image + undetectable noise = misclassification (Google)] Before the EU legislates: users have a right to an explanation. DARPA commitment: the “Explainable AI” initiative.

  4. Solution Pathways for Explainability: (1) DARPA: trial & error to find what affects the network; this takes time, and the AI remains a black box. (2) Optimizing Mind: mirrors the brain’s ability to provide reasons, the ground truth of the AI.

  5. [image slide]

  6. The Brain Relies on Feedback During Recognition; AI Does Not. [figure: brain circuits with feedback vs. AI with no feedback]

  7. Feedback is Found Throughout the Brain: there is more feedback than feedforward; retrograde signaling (e.g., nitric oxide); tri-synaptic connections (e.g., Aroniadou-Anderjaska et al. 2000).

  8. AI Exclusively Uses Feedforward Connections W During Recognition. [diagram: inputs X → weight matrix W → outputs Y, shown as connectivity, notation, function, and computational caricatures] Recognition is the multiplication Y = WX. Do not be misled: even when an AI is called “recurrent,” it still uses W during recognition.
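
The slide’s recognition step Y = WX is a single matrix–vector multiplication with fixed, pre-learned weights. A minimal sketch in plain Python (the weight and input values below are made up for illustration, not a trained model):

```python
# Feedforward recognition: outputs are a fixed function of inputs, Y = W X.
# W is a (classes x features) weight matrix learned beforehand.

def feedforward_recognize(W, x):
    """One pass: multiply inputs by the learned weights.
    No feedback and no optimization happen at recognition time."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[0.9, 0.1, 0.0],   # illustrative weights for class 0
     [0.2, 0.8, 0.1]]   # illustrative weights for class 1
x = [1.0, 0.0, 1.0]     # input pattern

y = feedforward_recognize(W, x)
print(y)  # class 0 scores higher for this input
```

However the weights were learned (backprop, SVM training, etc.), recognition itself is just this one pass.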

  9. Why Is Feedback Needed? For Optimization. • What is optimization? • The difference between “feedforward” and feedforward-feedback methods • Why the lack of feedback during recognition matters

  10. Example: Solving a Jigsaw Puzzle by Optimization (OP). [diagram: try → evaluate → modify → repeat] Try a configuration, evaluate it, modify it, and repeat until an optimal fit is found.
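
The try–evaluate–modify loop on the slide is generic optimization. A toy sketch of that loop, here minimizing a made-up “misfit” score by random local moves (the objective stands in for the jigsaw fit; none of this is the author’s code):

```python
import random

def optimize(evaluate, config, steps=1000, seed=0):
    """Try a configuration, evaluate it, modify it, and repeat,
    keeping any modification that improves the fit."""
    rng = random.Random(seed)
    best, best_score = list(config), evaluate(config)
    for _ in range(steps):
        trial = list(best)
        i = rng.randrange(len(trial))        # modify one element
        trial[i] += rng.uniform(-0.5, 0.5)
        score = evaluate(trial)              # evaluate the new fit
        if score < best_score:               # keep only improvements
            best, best_score = trial, score
    return best, best_score

# Toy "misfit": squared distance from a target arrangement.
target = [1.0, -2.0, 3.0]
misfit = lambda c: sum((a - b) ** 2 for a, b in zip(c, target))

best, score = optimize(misfit, [0.0, 0.0, 0.0])
print(score)  # far below the starting misfit of 14.0
```

The point of the slide is only the loop itself: any process that repeatedly proposes, scores, and refines is doing optimization.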

  11. Recognition Algorithms: “Feedforward” Methods (Fodor & Pylyshyn 1988; Sun 2002). 1) The optimized weights W are a “black box”: weights are optimized (OP) during learning so that recognition occurs using a simple multiplication, Y = WX. [diagram: memory → learning (OP) → weights W → recognition/inference] Learned feedforward weights appear in: deep, convolutional, recurrent, LSTM, and reinforcement networks; support vector machines (SVM); perceptrons; “neural networks”; everything learned via backprop, etc.

  12. Recognition Algorithms: “Feedforward” Methods (Fodor & Pylyshyn 1988; Sun 2002). [same diagram as slide 11] Learning by optimization (OP) encodes uniqueness into the weights W; recognition is the multiplication Y = WX.

  13. Determining Uniqueness Is Essential for Efficient Recognition. [figure: training instance 1 vs. training instance 2, grids of O’s and X’s; in each instance the unique feature is the important one] Problem: uniqueness changes with context, and uniqueness cannot be learned for all possible contexts. Besides, the relevant context is available during recognition, not during learning.

  14. We suggest uniqueness is determined during recognition instead, when the context is available: a “clear box.” [diagram: memory → learning (OP) → weights as expectations → recognition (OP)] 1) Determine activations Y (not weights) while estimating uniqueness, by optimizing (OP) during recognition. 2) Optimize only the current test pattern (not all of the training data), reducing computational costs.
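
One hedged way to read “determining activation Y (not weights) by optimizing during recognition”: treat the stored patterns M as expectations and adjust Y until the expectations reconstruct the current input X. The sketch below uses gradient descent on a reconstruction error; the objective, step size, and pattern values are my assumptions, not the author’s implementation:

```python
# Sketch: recognition as optimization over activations Y (weights M fixed).
# Rows of M are stored class patterns ("expectations"); values are illustrative.

def recognize_by_optimization(M, x, steps=500, lr=0.05):
    """Gradient descent on the reconstruction error ||x - M^T y||^2,
    adjusting only the activations y for the current test pattern."""
    n_classes, n_feat = len(M), len(x)
    y = [1.0 / n_classes] * n_classes
    for _ in range(steps):
        # Current reconstruction of the input from the active expectations.
        recon = [sum(M[i][j] * y[i] for i in range(n_classes))
                 for j in range(n_feat)]
        err = [r - xj for r, xj in zip(recon, x)]
        # Gradient w.r.t. y_i is 2 * sum_j M[i][j] * err[j].
        for i in range(n_classes):
            g = 2.0 * sum(M[i][j] * err[j] for j in range(n_feat))
            y[i] = max(0.0, y[i] - lr * g)   # keep activations non-negative
    return y

M = [[1.0, 1.0, 0.0, 0.0],   # expectation for class A
     [0.0, 0.0, 1.0, 1.0]]   # expectation for class B
x = [1.0, 1.0, 0.0, 0.0]     # test pattern matching class A

y = recognize_by_optimization(M, x)
print(y)  # class A activation near 1, class B near 0
```

Note that only the current test pattern x enters the loop, matching point 2 of the slide: nothing is recomputed over the training set.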

  15. [image slide]

  16. Illuminated AI Switches When Optimization Occurs. “Feedforward” AI requires feedback for learning (for example, to back-propagate error), so it is not really feedforward!

      Model type      | During Learning (weight Δ)     | During Recognition (find Y)
      “Feedforward”   | Optimization to find weights W | Feedforward recognition
      Illuminated AI  | Simpler learning of M          | Optimization (switch dynamics) to find neuron activations

      Why would the brain only use feedback during learning?

  17. Recognition with Illuminated AI. [diagram: inputs X ↔ weight matrix M ↔ outputs Y, shown as neuron, connectivity, notation, and computational caricatures] The weights M are expectations, which allows explainability and update. For optimization, symmetrical inhibitory connections modulate the inputs using output activity.
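
The phrase “symmetrical inhibitory connections modulate inputs using output activity” can be sketched as a multiplicative update in which each input is divided by the network’s current expectation of it, with the same matrix M used in both directions. This is a generic regulatory-feedback form under my own choice of constants and patterns, not code from the presentation:

```python
# Sketch of regulatory-feedback recognition: the expectation matrix M is used
# feedforward (inputs -> outputs) and feedback (outputs inhibit the inputs
# they explain). Values of M and x are illustrative.

def regulatory_feedback(M, x, steps=100, eps=1e-9):
    n_out, n_in = len(M), len(x)
    norm = [sum(row) for row in M]          # each output's total expectation
    y = [1.0] * n_out
    for _ in range(steps):
        # Feedback: expected value of each input given current outputs.
        e = [sum(M[i][j] * y[i] for i in range(n_out)) + eps
             for j in range(n_in)]
        # Feedforward pass on the modulated (divided) inputs.
        y = [y[i] / norm[i] * sum(M[i][j] * x[j] / e[j] for j in range(n_in))
             for i in range(n_out)]
    return y

M = [[1.0, 1.0, 0.0],   # output A expects inputs 0 and 1
     [0.0, 1.0, 1.0]]   # output B expects inputs 1 and 2
x = [1.0, 1.0, 0.0]     # pattern matching A

y = regulatory_feedback(M, x)
print(y)  # A's activation dominates B's
```

Because the feedback divides shared input 1 between A and B, the output that also explains input 0 wins, which is the uniqueness-in-context behavior the earlier slides describe.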

  18. Same Results, But a Transparent Method. Feedforward: Y = WX, with the same pattern X from the environment (input), the “feedforward” weights W, and the resulting neuron activation Y (output). Illuminated networks: the SAME X and the SAME Y, but Y is found by optimization (OP) using the illuminated weights M, which are easier to explain, learn, and update.

  19. Example: You train your AI to learn digits. It gets good grades (performance: 95%). You are done … right?

  20. Why are Explainable Regulatory Feedback Networks (xRFN) beneficial? 1) Existing feedforward networks can be converted to xRFN to see what they are doing: black box → “clear box.” MNIST demonstration, SVM vs. xRFN (feedforward-equivalent; see inside!). Overall accuracy is 91.65% for both, and the per-digit error counts are identical:

      Digit:           1    2    3    4    5    6    7    8    9    0
      False positives: 45   67   106  79   86   67   75   152  109  49
      False negatives: 19   129  91   71   137  50   83   125  109  21

  21. Does the brain perform optimization during recognition? How long does it take to find the single odd pattern? [figure: a field of E’s containing one O, vs. a field of mixed E’s and O’s containing one F; Rosenholtz 2001] If the brain used feedforward Y = WX: fixed propagation, so a fixed amount of time. If the brain uses optimization: it should be faster with unique patterns (as in the jigsaw puzzle). The brain takes longer on the right-hand box, suggesting a signal-to-noise phenomenon (as seen in the jigsaw puzzle). This occurs in all modalities, including audition, vision, touch, and even olfaction with its poor spatial resolution (Rinberg et al. 2006), suggesting that optimization during recognition is ubiquitous (taking what is commonly considered a spatial-attention phenomenon and attributing it to recognition).

  22. Does It Scale To Large AI? Does Illuminated AI Consume More Resources Than Feedforward AI?

  23. Tests on Random Data Increasing in Size:

      Nodes  | Features | Matrix size
      10     | 100      | 1,000
      100    | 1,000    | 100,000
      500    | 1,000    | 500,000
      1,000  | 10,000   | 10,000,000
      2,000  | 10,000   | 20,000,000
      6,000  | 12,000   | 72,000,000
      8,000  | 15,000   | 120,000,000
      9,000  | 20,000   | 180,000,000

      Computational costs during learning: [plot: computing time (s) vs. matrix size, up to 120 million] SVM learning (W) and even the fastest feedforward learning (W) run out of memory at large sizes; illuminated learning (M) can learn > 100× faster, without balancing data.

  24. Computational costs during recognition (same node/feature/matrix sizes as slide 23): [plot: computational cost per test (s) vs. matrix size, in millions] Compared: SVM, feedforward AI (W), illuminated AI (M), and the best alternate AI without optimization (KNN). SVM runs out of memory at the largest sizes.

  25. Accelerated on GPUs (Torch/Lua)

  26. Also useful for simpler AI, such as logistic regression and random forests.

  27. The current standard for explainability is decision trees based on logistic regression (in FinTech, medicine, government, …). A 30% loss in accuracy occurs when explaining using decision trees; 0% loss with Adaptive Insight using Illuminated AI.

  28. Understanding Decisions at a Glance: histograms of the factors that hinder vs. help each case. [figure: three histograms, number of factors vs. factor contribution] Case #1, score 0.75: not approved. Case #2, score 0.81: borderline approved. Case #3, score 0.98: strongly approved. Per-case highlights: most hindering factor E, with value 66.2 (26.8 below expected); most helping factor G, with value -8.8 (5.8 above expected); most helping factor M, with score 87.0 (12.2 above expected).

  29. Comparison:

                                        | “Feedforward”            | Illuminated
      Structure (during recognition)    | Inputs X → W → Outputs Y | Inputs X ↔ M ↔ Outputs Y (feedforward-feedback)
      Optimization                      | During learning          | During recognition
      Easy to learn & update?           | No                       | Yes
      Explainable?                      | No                       | Yes

  30. Collective Benefits Enabling Wider AI Adoption. Regulators: understanding of the AI’s goals, compromises, and decision process. Company: reduced development costs and time; increased trust, adoption, and flexibility. Users: better understanding and trust of the AI’s decision process. Developers: less guessing, easier debugging and updating.

  31. Offerings 1) Illuminate Your AI • Convert & Explain Any Feedforward AI • Boost: Internal Development & Quality Assurance • Assist Your Regulators: FDA, AMA, DMV, FTC … 2) Train or Update Your AI Fast Without Rehearsal * • Faster Learning - 100x • Less Data Cleaning • User Personalization • Easier Update * for Certain Feedforward AI

  32. How Will You Use Illuminated AI? Tsvi Achler | 650.486.2303 | achler@OptimizingMind.com
