
Governance for Artificial Intelligence / Machine Learning, Akbar Siddiqui (PDF presentation transcript)



  1. Governance for Artificial Intelligence / Machine Learning (10/7/2019). Akbar Siddiqui, Technical Director, Civil Liberties, Privacy, and Transparency Office, National Security Agency.

  2. The NSA Mission. The National Security Agency is responsible for:
     • Signals Intelligence: providing our nation's policy makers and military commands with foreign intelligence to gain a decisive advantage.
     • U.S. Cybersecurity: protecting and defending sensitive information systems and networks critical to national security and infrastructure.

     What is AI/ML? [Diagram: a machine learning taxonomy. Supervised Learning covers classification (image classification, identity fraud detection, customer retention, diagnostics) and regression (advertising popularity prediction, weather forecasting, market forecasting, estimating life expectancy, population growth prediction). Unsupervised Learning covers clustering (recommender systems, targeted marketing, customer segmentation) and dimensionality reduction (big data visualization, meaningful compression, structure discovery, feature elicitation). Reinforcement Learning covers real-time decisions, game AI, robot navigation, skill acquisition, and learning tasks. Pipeline: raw data and labeled training data feed models; experimental parameters shape the output.]

  3. Governance in the Process. The intelligence lifecycle (Authority → Purpose → Collect → Process → Evaluate → Retain → Disseminate) is governed at every stage by Training, Guidance, Compliance Controls, and Technical Safeguards.

  4. Governance in the Process, continued. The machine learning pipeline maps onto the same lifecycle: Purpose → Training Data → Develop Model → Apply to Data → Use Outputs, with a Feedback loop back to the start.

     Machine Learning Process Safeguards, by stage:
     • Purpose: authorities; ethical use; explainable purpose and methods.
     • Training Data: collect or generate training data and test data; documentation (Datasheets for Datasets).
     • Develop Model: explainability; testing/validation; confidence levels; documentation (Model Cards).
     • Apply to Data: use limitation; user interpretation and validation.
     • Use Outputs: accountability; explainability and redress; human control; confidence level follows outputs.
     • Feedback: build the user into the workflow; check for adversarial techniques.

  5. Defined Purpose and Use. Purpose:
     • Governance bodies
     • Check for authorities
     • Check for ethical use (principles)
     • Explainable purpose and methods

     Training Data:
     • Collect or generate training data and test data
     • Data selection, feature engineering, labeling
     • Issue: collecting and maintaining "negative" examples
     • Documentation: Datasheets for Datasets. Identify and document features, purpose, limitations, known issues, and biases (explicit and implicit).
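The "Datasheets for Datasets" documentation step above can be sketched as a structured record. This is a minimal, hypothetical schema: the field names, dataset name, and values are illustrative assumptions, not a standard format.

```python
# Hypothetical "Datasheets for Datasets" record. All field names and the
# example dataset are illustrative, not an official schema.
from dataclasses import dataclass


@dataclass
class Datasheet:
    name: str
    purpose: str                 # why the dataset was created
    features: list               # what each record contains
    collection_method: str       # how the data was gathered
    limitations: list            # known gaps, e.g. few "negative" examples
    known_biases: list           # explicit and implicit biases identified


sheet = Datasheet(
    name="network-events-2019",
    purpose="Train a classifier to flag anomalous sessions",
    features=["timestamp", "src", "dst", "bytes", "label"],
    collection_method="Sampled from authorized collection, hand-labeled",
    limitations=["Few verified benign ('negative') examples"],
    known_biases=["Over-represents weekday traffic"],
)

# A datasheet should always record biases, even if the entry is "none found".
assert sheet.known_biases
```

Keeping this record versioned alongside the dataset makes the limitations and biases auditable at later lifecycle stages.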

  6. Model Development:
     • Explainability
     • Testing and validation: check for bias in weights/methodology; identify situations where the model performs poorly or unreliably, or is vulnerable to adversarial techniques
     • Confidence level
     • Documentation: Model Cards

     Stakeholders [diagram]: Chief Data Officer (CDO); business intelligence metrics (ROI); security; deployment; compliance (AVG); mission risk acceptance and mission review (mission element owners); labeled data sharing equities; Legal (Equities Review Board); Senior Operations Data Officer (SODO); Senior Operations Analytics Officer (SOAO); peer review; compliance for dissemination (LPOC); analytics; Civil Liberties, Privacy, and Transparency (CLPT).
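The Model Card and testing/validation items above can be sketched together: a card that records intended use and disaggregated evaluation results, plus a check that flags slices where the model underperforms. The card fields, model name, metric values, and the 0.10 tolerance are all illustrative assumptions.

```python
# Hypothetical Model Card sketch. Field names, metrics, and the tolerance
# are illustrative assumptions, not an official schema.
model_card = {
    "model": "session-anomaly-v3",
    "intended_use": "Decision support; outputs reviewed by an analyst",
    "out_of_scope": ["Fully automated blocking"],
    "metrics": {"overall_f1": 0.91},
    # Disaggregated evaluation helps surface bias in weights/methodology.
    "metrics_by_slice": {"weekday": 0.93, "weekend": 0.78},
    "known_failure_modes": ["Unreliable on encrypted traffic"],
}


def weak_slices(card, tolerance=0.10):
    """Return slices whose score falls well below the overall metric."""
    overall = card["metrics"]["overall_f1"]
    return [name for name, score in card["metrics_by_slice"].items()
            if overall - score > tolerance]


# Weekend traffic underperforms the overall F1 by more than the tolerance.
assert weak_slices(model_card) == ["weekend"]
```

Any slice returned by the check would feed the "performs poorly/unreliably" documentation item on the card.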

  7. Applying Models to Data:
     • Use limitation
     • User interpretation and validation

     Using Outputs:
     • Accountability
     • Explainability and redress
     • Human control
     • Confidence level follows outputs
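Two of the output safeguards above, "confidence level follows outputs" and "human control", can be sketched together: every disseminated result carries its confidence, and low-confidence results are routed to a human reviewer. The threshold value and record shape are illustrative assumptions.

```python
# Sketch of confidence-carrying outputs with a human-review gate.
# The 0.80 threshold and the record fields are illustrative assumptions.
REVIEW_THRESHOLD = 0.80


def route(prediction, confidence):
    """Attach confidence to an output and decide its disposition."""
    record = {"prediction": prediction, "confidence": confidence}
    if confidence < REVIEW_THRESHOLD:
        record["disposition"] = "human_review"      # human control retained
    else:
        record["disposition"] = "release_with_confidence"
    return record


assert route("anomalous", 0.95)["disposition"] == "release_with_confidence"
assert route("anomalous", 0.55)["disposition"] == "human_review"
```

Because the confidence travels inside the record, downstream consumers cannot separate the prediction from its reliability, which supports the accountability and redress items as well.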

  8. Feedback:
     • Build user corrections into the workflow
     • Drift
     • Biased weights
     • Check for adversarial techniques

     Q & A
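The "drift" item in the feedback stage can be sketched as a simple statistical alarm: compare a feature's training distribution to recent production inputs. A z-test on the mean, as below, is a crude but common first check; the sample data and the threshold of 3 are illustrative assumptions (production systems typically use PSI or KS tests).

```python
# Minimal mean-shift drift check for the feedback stage.
# Sample data and the z-threshold of 3.0 are illustrative assumptions.
import statistics


def mean_drift(train, recent, z_threshold=3.0):
    """True if the recent sample mean sits far from the training mean."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    std_err = sigma / (len(recent) ** 0.5)
    z = abs(statistics.mean(recent) - mu) / std_err
    return z > z_threshold


train = [10.0, 11.0, 9.5, 10.5, 10.0, 9.8, 10.2, 10.1]
stable = [10.1, 9.9, 10.0, 10.3]    # looks like the training data
shifted = [14.0, 15.2, 14.8, 15.5]  # distribution has moved

assert not mean_drift(train, stable)
assert mean_drift(train, shifted)
```

A drift alarm here would trigger the loop back to the Training Data stage: recollect, relabel, and retrain before the biased weights degrade outputs.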

  9. Mitigating Adversarial Machine Learning. Machine learning (ML) can provide scalable defensive and offensive measures for cybersecurity, ranging from semi-automated decision support to fully automated capabilities. However, ML models can be exploited in at least four ways. Adversaries can: (a) poison the training data used to train ML algorithms, degrading prediction quality or redirecting predictions altogether; (b) evade by manipulating runtime data so that ML models misclassify malicious behavior as benign; (c) infer whether particular records were part of the training data; and (d) reconstruct the ML model for further analysis and exploitation. When ML models of varying quality are integrated into an ensemble, an adversary can exploit weaknesses in individual models to coordinate a malicious effect on the overall system. [Figure: "original outcome" vs. "tricked outcome"; source: Popular Science, "Fooling The Machine: The Byzantine science of deceiving artificial intelligence," Dave Gershgorn, March 30, 2016.]
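The evasion attack described in (b) exploits decisions that flip under tiny input changes. One cheap mitigation-side check is to probe whether small perturbations near an input change the model's label. The toy linear "model," feature values, and perturbation size below are all illustrative assumptions, not a real detector.

```python
# Sketch of a fragility probe against evasion attacks. The linear scoring
# "model," its weights, and epsilon are illustrative assumptions.
def classify(x, weights=(2.0, -1.0), bias=-0.05):
    """Toy classifier: linear score with a hard threshold."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "malicious" if score > 0 else "benign"


def is_fragile(x, epsilon=0.1):
    """True if perturbing any one feature by <= epsilon flips the label."""
    base = classify(x)
    for delta in (-epsilon, epsilon):
        for i in range(len(x)):
            probe = list(x)
            probe[i] += delta
            if classify(probe) != base:
                return True
    return False


assert is_fragile((0.5, 1.0))       # sits near the decision boundary
assert not is_fragile((5.0, 1.0))   # far from the boundary, stable label
```

Flagging fragile inputs for the "user interpretation and validation" safeguard keeps a human in the loop exactly where an evasion attempt is most likely to succeed.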
