  1. Safe and Robust Deep Learning. Mislav Balunović, Department of Computer Science, ETH Zurich.

  2. SafeAI @ ETH Zurich (safeai.ethz.ch). Joint work with Markus Püschel, Gagandeep Singh, Petar Tsankov, Martin Vechev, Timon Gehr, Matthew Mirman, Maximilian Baader, and Dana Drachsler. Systems and publications: ERAN, a generic neural network verifier (https://github.com/eth-sri/eran/), with S&P'18: AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation; NeurIPS'18: Fast and Effective Robustness Certification; POPL'19: An Abstract Domain for Certifying Neural Networks; and ICLR'19: Boosting Robustness Certification of Neural Networks. DiffAI, a system for training provably robust networks (https://github.com/eth-sri/diffai), with ICML'18: Differentiable Abstract Interpretation for Provably Robust Neural Networks. DL2, a system for training and querying networks with logical constraints (https://github.com/eth-sri/dl2), with ICML'19: DL2: Training and Querying Neural Networks with Logic.

  3. Deep Learning Systems: self-driving cars (https://waymo.com/tech/), translation (https://translate.google.com), and voice assistants (https://www.amazon.com/Amazon-Echo-And-Alexa-Devices).

  4. Attacks on Deep Learning. The self-driving car incorrectly decides to turn right on Input 2 and crashes into the guardrail (DeepXplore: Automated Whitebox Testing of Deep Learning Systems, SOSP'17).

  5. Attacks on Deep Learning. The self-driving car incorrectly decides to turn right on Input 2 and crashes into the guardrail (DeepXplore: Automated Whitebox Testing of Deep Learning Systems, SOSP'17). The ensemble model is fooled by the addition of an adversarial distracting sentence in blue (Adversarial Examples for Evaluating Reading Comprehension Systems, EMNLP'17).

  6. Attacks on Deep Learning. The self-driving car incorrectly decides to turn right on Input 2 and crashes into the guardrail (DeepXplore: Automated Whitebox Testing of Deep Learning Systems, SOSP'17). The ensemble model is fooled by the addition of an adversarial distracting sentence in blue (Adversarial Examples for Evaluating Reading Comprehension Systems, EMNLP'17). Adding small noise to the input audio makes the network transcribe any arbitrary phrase (Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, ICML'18).

  7. Attacks based on intensity changes in images. Original image I0, correctly classified (as the digit 8).

  8. Attacks based on intensity changes in images. Original image I0 (classified as 8); perturbed image I = I0 + 0.01, misclassified (as 0).

  9. Attacks based on intensity changes in images. L∞-norm: consider all images I in the ε-ball B(I0,∞)(ε) around I0, i.e., all I with ‖I − I0‖∞ ≤ ε. To verify absence of the attack, prove that the network classifies every image in this ball correctly.
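The ε-ball above has a simple concrete meaning: every pixel may move by at most ε. A minimal plain-Python sketch of membership in B(I0,∞)(ε) (the helper names are mine, not ERAN's):

```python
def linf_dist(i0, i):
    """L-infinity distance between two images given as flat pixel lists."""
    return max(abs(a - b) for a, b in zip(i0, i))

def in_linf_ball(i0, i, eps):
    """Is image i inside the eps-ball B(i0, eps) in the L-infinity norm?"""
    return linf_dist(i0, i) <= eps

i0 = [0.20, 0.50, 0.80, 0.10]          # toy 4-pixel "image"
i_small = [p + 0.01 for p in i0]       # every pixel shifted by 0.01
i_large = [p + 0.05 for p in i0]       # every pixel shifted by 0.05

assert in_linf_ball(i0, i_small, 0.02)      # inside the 0.02-ball
assert not in_linf_ball(i0, i_large, 0.02)  # outside it
```

Verification must prove correct classification for every image in the ball, not just sampled ones, which is why enumeration is hopeless and symbolic methods are needed.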

  10. Attacks based on geometric transformations. Original image I0, correctly classified (as the digit 7).

  11. Attacks based on geometric transformations. Original image I0 (classified as 7); rotated image I = rotate(I0, −35°), misclassified (as 3).

  12. Attacks based on geometric transformations. Consider all images I obtained by applying geometric transformations to B(I0,∞)(ε). To verify absence of the attack, prove that the network classifies all of these images correctly.
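Unlike intensity attacks, a geometric attack perturbs pixel coordinates rather than pixel values. As a minimal illustration (not ERAN's actual geometric analysis), a rotation moves each coordinate like this:

```python
import math

def rotate_point(x, y, deg):
    """Rotate (x, y) around the origin by deg degrees (counter-clockwise)."""
    t = math.radians(deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# rotate(I0, -35) moves every pixel coordinate by this map:
x, y = rotate_point(1.0, 0.0, -35.0)
# Verifying against rotation attacks means reasoning about ALL angles in an
# interval, e.g. every angle in [-30, 30], not just one sampled rotation.
```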

  13. Attacks based on intensity changes to sound. Original audio signal s0, transcribed as “Stop”.

  14. Attacks based on intensity changes to sound. Original signal s0 is transcribed as “Stop”; adding a perturbation at −110 dB yields a signal s transcribed as “Go”.

  15. Attacks based on intensity changes to sound. Consider all signals s in the ε-ball B(s0,∞)(ε) around s0. To verify absence of the attack, prove that the network transcribes every signal in this ball correctly.

  16. Neural Network Verification: Problem Statement. Given a neural network f, an input region R, and a safety property S, prove that for all I in R, f(I) satisfies S. Example networks and regions: an image classification network f with a region R based on changes to pixel intensity, or on geometric transformations such as rotation; a speech recognition network f with a region R based on noise added to the audio signal; an aircraft collision avoidance network f with a region R based on input sensor values. The input region R can contain an infinite number of inputs, so enumeration is infeasible.

  17. Experimental vs. Certified Robustness. Experimental robustness tries to find violating inputs; like testing, it gives no full guarantees (e.g., Goodfellow 2014, Carlini & Wagner 2016, Madry et al. 2017). Certified robustness proves the absence of violating inputs and gives actual verification guarantees (e.g., Reluplex [2017], Wong et al. 2018, AI2 [2018]). In this talk we focus on certified robustness.

  18. General Approaches to Network Verification. Complete verifiers are exact but suffer from scalability issues: SMT: Reluplex [CAV'17]; MILP: MIPVerify [ICLR'19]; splitting: Neurify [NeurIPS'18]; and others. Incomplete verifiers trade precision for scalability: Box/HBox [ICML'18], SDP [ICLR'18], Wong et al. [ICML'18], FastLin [ICML'18], CROWN [NeurIPS'18], and others. Key challenge: a scalable and precise automated verifier.

  19. Network Verification with ERAN. The ERAN verification framework (https://github.com/eth-sri/eran) takes an input region, a neural network, and a safety property, and answers Yes or No. Input regions: pixel intensity changes; geometric transformations (vector fields, rotations, etc.); audio processing; possible aircraft sensor values. Analyses: Box, DeepZ, DeepPoly, GPUPoly, RefineZono (MILP + DeepZ), and K-Poly (MILP + DeepPoly). Supported networks: fully connected, convolutional, residual, and LSTM, with ReLU, sigmoid, tanh, and maxpool. ERAN is extensible to other verification tasks, provides state-of-the-art complete and incomplete verification, and is sound w.r.t. floating-point arithmetic.

  20. Complete and Incomplete Verification with ERAN. Faster complete verification on the aircraft collision avoidance system (ACAS): Reluplex takes more than 32 hours, Neurify 921 seconds, ERAN 227 seconds. Scalable incomplete verification on a CIFAR10 ResNet-34: with ε = 0.03, 66% verified, 79 seconds.

  21. Geometric and Audio Verification with ERAN. Geometric verification (rotation between −30° and 30° on MNIST) with a CNN of 4,804 neurons: ε = 0.001, 86% verified, 10 sec. Audio verification with an LSTM of 64 hidden neurons: ε = −110 dB, 90% verified, 9 sec.

  22. Example: Analysis of a Toy Neural Network. Input layer: x1, x2 ∈ [−1, 1]. Hidden layers: x3 = x1 + x2 and x4 = x1 − x2, then x5 = max(0, x3) and x6 = max(0, x4); next x7 = x5 + x6 and x8 = x5 − x6, then x9 = max(0, x7) and x10 = max(0, x8). Output layer: x11 = x9 + x10 + 1 and x12 = x10. We want to prove that x11 > x12 for all values of x1, x2 in the input set.
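The network figure is garbled in this transcript; the weights below are my reading of it, matching the running example of the POPL'19 DeepPoly paper and the interval annotations on the later Box slides. A plain-Python forward pass through the toy network:

```python
def relu(v):
    return max(0.0, v)

def toy_net(x1, x2):
    # first affine layer (biases 0): x3 = x1 + x2, x4 = x1 - x2
    x3, x4 = x1 + x2, x1 - x2
    x5, x6 = relu(x3), relu(x4)
    # second affine layer (biases 0): x7 = x5 + x6, x8 = x5 - x6
    x7, x8 = x5 + x6, x5 - x6
    x9, x10 = relu(x7), relu(x8)
    # output layer: x11 = x9 + x10 + 1, x12 = x10
    return x9 + x10 + 1.0, x10

# The property holds on sampled points, but sampling alone proves nothing:
for x1 in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for x2 in (-1.0, -0.5, 0.0, 0.5, 1.0):
        x11, x12 = toy_net(x1, x2)
        assert x11 > x12
```

Note that x11 − x12 = x9 + 1 ≥ 1 always holds here, which is exactly what a verifier must establish for the whole input box, not just for samples.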

  23. (The toy network figure from the previous slide, shown again.)

  24. Each xk = max(0, xj) corresponds to (xj ≤ 0 and xk = 0) or (xj > 0 and xk = xj). A solver has to explore two paths per ReLU, resulting in an exponential number of paths. Complete verification with solvers often does not scale.

  25. Network Verification with ERAN: High-Level Idea. The attacker region over the inputs is expressed symbolically, e.g., x0 = 0, x1 = 0.975 + 0.025·ε1, x2 = 0.125, …, x784 = 0.938 + 0.062·ε784, with εj ∈ [0, 1] for all j. Certification propagates this region through the network into a shape covering all possible outputs (before softmax), e.g., x1 = 2.60 + 0.015·ε0 + 0.023·ε1 + 5.181·ε2 + ⋯, x2 = 4.63 − 0.005·ε0 − 0.006·ε1 + 0.023·ε2 + ⋯, …, x9 = 0.12 − 0.125·ε0 + 0.102·ε1 + 3.012·ε2 + ⋯, with εj ∈ [0, 1] for all j. The output shape is then checked against the output constraint φ.
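The symbolic shapes on this slide resemble zonotope-style affine forms (as used by DeepZ): each neuron is a center plus noise terms εj ∈ [0, 1]. A minimal sketch of such forms, with coefficients taken from the slide (the class and method names are mine):

```python
class AffineForm:
    """x = c + sum_j a[j] * eps_j, with each eps_j ranging over [0, 1]."""

    def __init__(self, c, coeffs):
        self.c = float(c)
        self.a = list(coeffs)

    def interval(self):
        # eps_j in [0, 1]: negative coefficients pull the lower bound down,
        # positive coefficients push the upper bound up
        lo = self.c + sum(a for a in self.a if a < 0)
        hi = self.c + sum(a for a in self.a if a > 0)
        return lo, hi

    def __add__(self, other):
        # affine operations are exact on affine forms (no precision loss)
        return AffineForm(self.c + other.c,
                          [a + b for a, b in zip(self.a, other.a)])

    def scale(self, k):
        return AffineForm(k * self.c, [k * a for a in self.a])

x1 = AffineForm(0.975, [0.025, 0.0])    # 0.975 + 0.025*eps_1 (from the slide)
x784 = AffineForm(0.938, [0.0, 0.062])  # 0.938 + 0.062*eps_784
diff = x1 + x784.scale(-1.0)            # x1 - x784, tracked exactly
lo, hi = diff.interval()                # concretizes to roughly [-0.025, 0.062]
```

Because the noise symbols are shared between neurons, affine forms keep relational information that plain intervals lose; only the nonlinear activations require approximation.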

  26. Box Approximation (scalable but imprecise). Propagating intervals through the toy network gives: x1, x2 ∈ [−1, 1]; x3, x4 ∈ [−2, 2]; x5, x6 ∈ [0, 2]; x7 ∈ [0, 4] and x8 ∈ [−2, 2]; x9 ∈ [0, 4] and x10 ∈ [0, 2]; x11 ∈ [1, 7] and x12 ∈ [0, 2].

  27. Box Approximation (scalable but imprecise). The analysis concludes x11 ∈ [1, 7] and x12 ∈ [0, 2]. Because these intervals overlap, the Box domain cannot prove x11 > x12: verification fails as the domain cannot capture relational information between neurons.
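The Box bounds for the toy network can be reproduced with a few lines of interval arithmetic. This is a sketch of the Box analysis on the toy network as I read its weights (matching the POPL'19 running example), not ERAN's implementation:

```python
def box_affine(lo, hi, W, b):
    """Soundly propagate intervals through x -> W x + b, row by row."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def box_relu(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

W = [[1.0, 1.0], [1.0, -1.0]]                 # both hidden affine layers
lo, hi = [-1.0, -1.0], [1.0, 1.0]             # x1, x2
lo, hi = box_affine(lo, hi, W, [0.0, 0.0])    # x3, x4 in [-2, 2]
lo, hi = box_relu(lo, hi)                     # x5, x6 in [0, 2]
lo, hi = box_affine(lo, hi, W, [0.0, 0.0])    # x7 in [0, 4], x8 in [-2, 2]
lo, hi = box_relu(lo, hi)                     # x9 in [0, 4], x10 in [0, 2]
lo, hi = box_affine(lo, hi, [[1.0, 1.0], [0.0, 1.0]], [1.0, 0.0])

assert (lo, hi) == ([1.0, 0.0], [7.0, 2.0])   # x11 in [1, 7], x12 in [0, 2]
assert not lo[0] > hi[1]   # intervals overlap: Box cannot prove x11 > x12
```

The final assertion shows the failure concretely: the worst-case lower bound of x11 (namely 1) does not exceed the worst-case upper bound of x12 (namely 2), even though x11 > x12 actually holds on every real input.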

  28. DeepPoly Approximation [POPL'19]. Shape: associate a lower polyhedral constraint a_j^≤ and an upper polyhedral constraint a_j^≥ with each x_j. Key points: it captures affine transformations precisely, uses custom approximations for ReLU, sigmoid, tanh, and maxpool activations, and is less precise but more scalable than general Polyhedra.
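The custom ReLU approximation can be made concrete. For y = ReLU(x) with x ∈ [l, u] and l < 0 < u, DeepPoly uses the upper line through (l, 0) and (u, u), and a lower line y ≥ λ·x with λ ∈ {0, 1} chosen to minimize the area of the relaxation. A sketch of the coefficient computation (my code, following the POPL'19 description):

```python
def deeppoly_relu(l, u):
    """Relaxation coefficients for y = ReLU(x), x in [l, u].

    Returns (lam, s, t) meaning: lam * x <= y <= s * x + t.
    """
    if l >= 0.0:                  # ReLU always active: y = x exactly
        return 1.0, 1.0, 0.0
    if u <= 0.0:                  # ReLU always inactive: y = 0 exactly
        return 0.0, 0.0, 0.0
    s = u / (u - l)               # upper line through (l, 0) and (u, u)
    t = -s * l
    lam = 1.0 if u > -l else 0.0  # lower bound minimizing the relaxation area
    return lam, s, t

# e.g. x8 in [-2, 2] from the Box analysis of the toy network:
lam, s, t = deeppoly_relu(-2.0, 2.0)   # lam = 0, s = 0.5, t = 1.0
```

Because both bounds are expressed in terms of the previous layer's variables, back-substituting them through the affine layers recovers the relational information that the Box domain throws away.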

  29. Example: Verification using DeepPoly. (The toy network is analyzed again, this time with the DeepPoly domain.)
