A Tour of Machine Learning Security

  1. A Tour of Machine Learning Security
     Florian Tramèr, CISPA, August 6th 2018

  2. The Deep Learning Revolution
     First they came for images…

  3. The Deep Learning Revolution
     And then everything else…

  4. The ML Revolution
     Including things that likely won’t work…

  5. What does this mean for privacy & security?
     [ML-pipeline diagram, adapted from (Goodfellow 2018): training data feeds outsourced learning, which produces a model used for outsourced inference on test data, yielding test outputs ("dog", "cat", "bird"). Threats and defenses are annotated at each stage: data poisoning vs. robust statistics; privacy & integrity of outsourced learning via crypto, trusted hardware, blockchain(???); data inference on the model vs. differential privacy; model theft; adversarial examples on test data; privacy & integrity of outsourced inference via crypto and trusted hardware.]

  6. This talk: security of deployed models
     [Same ML-pipeline threat diagram, now restricted to attacks on deployed models: model theft, data inference, adversarial examples, and the privacy & integrity of outsourced inference.]

  7. Stealing ML Models
     [ML-pipeline threat diagram again, with model theft highlighted.]

  8. Machine Learning as a Service
     [Diagram: Data → Training API → black-box Model f → Prediction API (input → classification), $$$ per query]
     • Goal 1: Rich Prediction APIs
       - Highly available
       - High-precision results
     • Goal 2: Model Confidentiality
       - Model/data monetization
       - Sensitive data

  9. Model Extraction
     Goal: adversarial client learns close approximation f’ of f using as few queries as possible.
     [Diagram: the attacker sends queries x to the black-box model f (trained on Data), observes f(x), and builds a local copy f’]
     Applications:
     1) Undermine pay-for-prediction pricing model
     2) “White-box” attacks:
        › Infer private training data
        › Model evasion (adversarial examples)
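
To make the setup concrete, here is a minimal sketch of a generic extraction loop (my own illustration, not code from the talk): the attacker queries the victim's prediction API on chosen inputs and fits a substitute model on the returned labels. The function `query_victim` is a hypothetical stand-in for the black-box prediction API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_model(query_victim, n_queries=1000, n_features=20, seed=0):
    """Fit a substitute model f' on (query, API answer) pairs."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_queries, n_features))    # attacker-chosen queries
    y = np.array([query_victim(x) for x in X])      # labels returned by the API
    substitute = LogisticRegression(max_iter=1000)  # the attacker's local copy f'
    substitute.fit(X, y)
    return substitute
```

With only hard labels this is just ordinary learning on attacker-chosen inputs; the next slides show how the richer outputs of prediction APIs make the attacker's job far easier.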

  10. Model Extraction
     Goal: adversarial client learns close approximation of f using as few queries as possible.
     Isn’t this “just Machine Learning”? No! Prediction APIs return fine-grained information that makes extracting much easier than learning.

  11. Learning vs Extraction
     • Function to learn
       - Learning f(x): noisy real-world phenomenon
       - Extracting f(x): “simple” deterministic function f(x)

  12. Learning vs Extraction
     • Function to learn
       - Learning f(x): noisy real-world phenomenon
       - Extracting f(x): “simple” deterministic function f(x)
     • Available labels
       - Learning: hard labels (e.g., “cat”, “dog”, …)
       - Extracting: depending on the API, hard labels, soft labels (class probabilities), or gradients (exposed for interpretability)

  13. Learning vs Extraction
     • Function to learn
       - Learning f(x): noisy real-world phenomenon
       - Extracting f(x): “simple” deterministic function f(x)
     • Available labels
       - Learning: hard labels (e.g., “cat”, “dog”, …)
       - Extracting: depending on the API, hard labels, soft labels (class probabilities), or gradients (Milli et al.)
     • Labeling function
       - Learning: humans, real-world data collection
       - Extracting: query f(x) on any input x ⇒ no need for labeled data, and queries can be adaptive

  14. Learning vs Extraction for specific models
     • Logistic regression
       - Learning: |Data| ≈ 10 · |Features|
       - Extracting: from hard labels only (Lowd & Meek); with confidences, a simple system of equations (T et al.) with |Data| = |Features| + constant

  15. Learning vs Extraction for specific models
     • Logistic regression
       - Learning: |Data| ≈ 10 · |Features|
       - Extracting: from hard labels only (Lowd & Meek); with confidences, a simple system of equations (T et al.) with |Data| = |Features| + constant
     • Decision trees
       - Learning: NP-hard in general; polytime for Boolean trees (Kushilevitz & Mansour)
       - Extracting: a “differential testing” algorithm recovers the full tree (T et al.)

  16. Learning vs Extraction for specific models
     • Logistic regression
       - Learning: |Data| ≈ 10 · |Features|
       - Extracting: from hard labels only (Lowd & Meek); with confidences, a simple system of equations (T et al.) with |Data| = |Features| + constant
     • Decision trees
       - Learning: NP-hard in general; polytime for Boolean trees (Kushilevitz & Mansour)
       - Extracting: a “differential testing” algorithm recovers the full tree (T et al.)
     • Neural networks
       - Learning: large models required; “the more data the better”
       - Extracting: distillation (Hinton et al.) builds a smaller copy of the model from confidence scores; extraction also works from hard labels (Papernot et al., T et al.); no quantitative analysis for large neural nets yet
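
For the logistic-regression row above, the “simple system of equations” is easy to spell out (a minimal numpy sketch under the assumption that the API returns exact class probabilities): each response p = σ(w·x + b) turns into one linear equation log(p/(1−p)) = w·x + b, so |Features| + 1 queries determine the model exactly.

```python
import numpy as np

def extract_logistic_regression(query_proba, n_features, seed=0):
    """Recover (w, b) of f(x) = sigmoid(w.x + b) from confidence outputs."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_features + 1, n_features))      # n_features + 1 queries
    logits = np.array([np.log(p / (1 - p))                 # invert the sigmoid
                       for p in (query_proba(x) for x in X)])
    A = np.hstack([X, np.ones((n_features + 1, 1))])       # unknowns are [w, b]
    wb = np.linalg.solve(A, logits)
    return wb[:-1], wb[-1]

# Hypothetical victim with secret parameters:
w_true, b_true = np.array([1.5, -2.0, 0.3]), 0.7
victim = lambda x: 1 / (1 + np.exp(-(w_true @ x + b_true)))
w_hat, b_hat = extract_logistic_regression(victim, n_features=3)
# w_hat ≈ w_true and b_hat ≈ b_true, up to floating-point error.
```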

  17. [ML-pipeline threat diagram again, transitioning to the privacy & integrity of outsourced inference (crypto, trusted hardware).]

  18. Trusted execution of ML: 3 motivating scenarios
     1. Cloud ML APIs
        • Data privacy
        • Integrity: model “downgrade”, disparate impact, other malicious tampering

  19. Trusted execution of ML: 3 motivating scenarios
     2. Federated learning
        • Data privacy
        • Integrity: poisoned model updates

  20. Trusted execution of ML: 3 motivating scenarios
     3. Trojaned hardware (the Verifiable ASICs model, Wahby et al.)
        • Integrity

  21. Solutions
     Cryptography / statistics:
     1. Outsourced ML: FHE, MPC, (ZK) proof systems
     2. Federated learning: robust statistics?
     3. Trojaned hardware: some root of trust is needed
     Trusted Execution Environments (TEEs):
     1. Outsourced ML: secure enclaves
     2. Federated learning: trusted sensors + secure enclaves
     3. Trojaned hardware: fully trusted (but possibly slow) hardware

  22. Trusted Execution: At what cost?
     • Trusted ASICs (Wahby et al.): ~10^8× worse than SOTA
     • Intel SGX, VGG16 inference: ~350 images/sec on the GPU vs. ~1 image/sec in SGX (paging kicks in at ~90MB)
       GPU: Nvidia TITAN XP; SGX: Intel Core i7-6700 Skylake, single core @ 3.40GHz
       https://medium.com/@danny_harnik/impressions-of-intel-sgx-performance-22442093595a

  23. “How do we efficiently leverage TEEs for secure machine learning computations?”
     Idea: outsource work to a collocated, faster but untrusted device and verify the results.
     [Diagram: the TEE sends x to the untrusted device and receives F(x) plus a proof]
     • Verifiable ASICs (Wahby et al., 2016)
       - Computations: arithmetic circuits (GKR protocol)
       - Gap between trusted and untrusted device: ~8 orders of magnitude
       - Privacy: no
     • Slalom
       - Computations: DNN inference
       - Gap between trusted and untrusted device: ~1-2 orders of magnitude
       - Privacy: “yes” (not in this talk)

  24. Bottlenecks in deep neural networks
     [Profile of VGG16 inference on 1 CPU core: matrix multiplication accounts for ~97% of the time; the non-linear operations are cheap.]

  25. Outsourcing matrix multiplication: Freivalds’ algorithm
     Input: X ∈ 𝔽^(n×n), W ∈ 𝔽^(n×n) (the DNN weights, fixed at inference time)
     Direct compute: Z = X · W, ≈ n^3 multiplications, or O(n^2.81) with Strassen
     Outsource + verify:
     • Sample r ← 𝔽^n uniformly at random
     • Check: Z · r = X · (W · r)
     • Complexity: ≈ 3n^2 multiplications
     • Soundness: errs with probability 1/|𝔽| (boost by repeating)
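
A minimal numpy sketch of the check (my own illustration, over floating-point numbers rather than a finite field, hence the numerical tolerance): the verifier accepts a claimed product Z only if it agrees with X·(W·r) for random vectors r.

```python
import numpy as np

def freivalds_check(X, W, Z, repeats=10, rng=None):
    """Probabilistically verify Z == X @ W with ~3n^2 work per repetition.

    With r drawn from {0, 1}^n, a wrong Z passes one round with probability
    at most 1/2, so `repeats` rounds bound the error by 2**-repeats.
    """
    rng = rng or np.random.default_rng()
    for _ in range(repeats):
        r = rng.integers(0, 2, size=W.shape[1]).astype(X.dtype)
        if not np.allclose(Z @ r, X @ (W @ r)):
            return False
    return True

# The TEE would accept the device's claimed Z only if the check passes:
X, W = np.random.randn(512, 512), np.random.randn(512, 512)
assert freivalds_check(X, W, X @ W)              # correct product is accepted
assert not freivalds_check(X, W, X @ W + 0.01)   # tampered product is rejected
```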

  26. Freivalds variants for arbitrary linear operators
     Linear operator: z = F(x) = x · A, where x is a vector of size |x|, z a vector of size |z|, and A a matrix of size |x| × |z|
     • Batched verification
       - Z = F([x_1 … x_B]) = [F(x_1) … F(x_B)] ⇒ complexity = B · cost(F)
       - Freivalds: check r · Z = F(r · [x_1 … x_B]) ⇒ complexity = B · (|x| + |z|) + cost(F)
     • With precomputation
       - Precompute A’ = A · r = (∇_x F)(r)
       - Freivalds: check z · r = x · A’ ⇒ complexity = |x| + |z| (two inner products!)
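
Both variants are easy to exercise in numpy (again an illustrative sketch with made-up dimensions): batching folds B claimed outputs into a single evaluation of F, and precomputing A’ = A·r reduces each later check to two inner products.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(784, 100))          # the linear operator F(x) = x @ A

# Batched verification: check B = 32 claimed outputs with one evaluation of F.
B = 32
X = rng.normal(size=(B, 784))            # batch of inputs
Z = X @ A                                # outputs claimed by the untrusted device
r = rng.normal(size=B)                   # random combining vector
assert np.allclose(r @ Z, (r @ X) @ A)   # B*(|x|+|z|) + cost(F) work

# Precomputation: A' = A @ r2 is computed once, before inference.
r2 = rng.normal(size=100)
A_prime = A @ r2
x = rng.normal(size=784)
z = x @ A                                # claimed output for one input
assert np.allclose(z @ r2, x @ A_prime)  # only |x| + |z| multiplications per check
```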

  27. Slalom Summary
     [Diagram: the TEE sends X_1 to the untrusted device, which returns Z_1 = X_1 · W_1]
     1. Freivalds check for (X_1, W_1, Z_1)
     2. X_2 = σ(Z_1) (an arbitrary non-linearity, applied inside the TEE)
     The TEE then sends X_2, receives Z_2 = X_2 · W_2, and repeats for the remaining layers.
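
Put together, the per-layer protocol looks roughly like this (a simplified sketch that keeps only the integrity part; the real Slalom also quantizes values and blinds inputs for privacy, and `gpu_matmul` is a hypothetical stand-in for the untrusted device):

```python
import numpy as np

def slalom_style_inference(x, weights, gpu_matmul):
    """Verified DNN inference: matmuls on the GPU, checks and ReLUs in the TEE."""
    rng = np.random.default_rng()
    for W in weights:
        Z = gpu_matmul(x, W)                       # fast but untrusted
        r = rng.integers(0, 2, size=W.shape[1]).astype(x.dtype)
        if not np.allclose(Z @ r, x @ (W @ r)):    # Freivalds check inside the TEE
            raise RuntimeError("untrusted device returned a wrong result")
        x = np.maximum(Z, 0.0)                     # cheap non-linearity in the TEE
    return x
```

With honest GPU results, the heavy matrix multiplications all run on the untrusted device, while the TEE only performs matrix-vector products and the element-wise non-linearities.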

  28. Design and Evaluation
     • TEE: Intel SGX on a “desktop” CPU (single thread)
     • Untrusted device: Nvidia Tesla GPU
     • Port of the Eigen linear algebra C++ library to SGX (used in e.g., TensorFlow)
     • No simulation mode!
     [Bar charts of enclave throughput (images/sec). VGG16: Compute 1.7, Verify 15.9, Verify with preprocessing 19.6. MobileNet: verification with preprocessing reaches 97.1 images/sec.]

  29. [ML-pipeline threat diagram again, transitioning to adversarial examples on test data.]

  30. ML models make surprising mistakes
     [The classic adversarial example: a panda image (“pretty sure this is a panda”) + 0.007 × a crafted perturbation = an image classified as a gibbon (“I’m certain this is a gibbon”).]
     (Szegedy et al. 2013, Goodfellow et al. 2015)
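
The perturbation in this example is the fast gradient sign method (FGSM) from Goodfellow et al. 2015. For a model with an analytic gradient, such as binary logistic regression, the whole attack fits in a few lines (a minimal numpy sketch, not the original ImageNet setup):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def fgsm_logistic(x, y, w, b, eps=0.007):
    """x_adv = x + eps * sign(d loss / d x) for f(x) = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w                                  # cross-entropy gradient w.r.t. x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)   # stay in the valid pixel range

# Toy usage with made-up weights and a three-"pixel" input in [0, 1]:
w, b = np.array([2.0, -3.0, 1.0]), 0.1
x, y = np.array([0.2, 0.8, 0.5]), 1
x_adv = fgsm_logistic(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # the model's confidence in y drops
```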

  31. Where are the defenses?
     • Adversarial training: prevent “all/most attacks” within a given norm ball
       (Szegedy et al. 2013, Goodfellow et al. 2015, Kurakin et al. 2016, T et al. 2017, Madry et al. 2017, Kannan et al. 2018)
     • Convex relaxations with provable guarantees
       (Raghunathan et al. 2018, Kolter & Wong 2018, Sinha et al. 2018)
     • A lot of broken defenses…
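
The first family, adversarial training, simply folds an attack into the training loop: each gradient step is taken on worst-case perturbed examples instead of clean ones. A minimal numpy sketch for logistic regression under an ℓ∞ ball (my own illustration, using one FGSM step as a crude inner maximization rather than the multi-step PGD of Madry et al.):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Adversarially train logistic regression against an l_inf-bounded adversary."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = 0.01 * rng.normal(size=d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign((p - y)[:, None] * w)  # inner maximization (FGSM step)
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * (X_adv.T @ (p_adv - y)) / n            # outer minimization on the
        b -= lr * np.mean(p_adv - y)                     # perturbed examples
    return w, b
```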

  32. Do we have a realistic threat model? (no…)
     Current approach:
     1. Fix a “toy” attack model (e.g., some ℓ∞ ball)
     2. Directly optimize over the robustness measure
     ⇒ Defenses do not generalize to other attack models
     ⇒ Defenses are meaningless for applied security
     What do we want?
     • Model is “always correct” (sure, why not?)
     • Model has blind spots that are “hard to find”
     • “Non-information-theoretic” notions of robustness?
     • The CAPTCHA threat model is interesting to think about
