  1. ON THE ADVERSARIAL ROBUSTNESS OF UNCERTAINTY AWARE DEEP NEURAL NETWORKS APRIL 29TH, 2019 PREPARED BY: ALI HARAKEH

  2. QUESTION Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?

  3. ADVERSARIAL ROBUSTNESS

  4. HOW GOOD IS YOUR NEURAL NETWORK? • Neural networks are not robust to input perturbations. • Example: the Carlini and Wagner attack on MNIST. [Figure: adversarial examples for the MNIST digits 3, 1, 2, 0.]

  5. ADVERSARIAL PERTURBATIONS [Figure: an input X_0 and perturbed points X_a, X_b plotted against Decision Boundaries 1-3 separating the classes Ostrich, Vacuum, and Shoe; the minimum perturbation pushes X_0 across the nearest boundary.]

  6. UNCERTAINTY IN DNNS

  7. SOURCES OF UNCERTAINTY IN DNNS • Two sources of uncertainty exist in DNNs. • Epistemic (Model) Uncertainty: captures our ignorance about which model generated the data. • Aleatoric (Observation) Uncertainty: captures the inherent noise in the observations. [Figure: an original image alongside its epistemic and aleatoric uncertainty maps.]

  8. CAPTURING EPISTEMIC UNCERTAINTY • Marginalizing over neural network parameters. [Diagram: a ConvNet (Conv3-64 ×3, FC-10, Soft Max) with its weights treated as random variables.]
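A minimal sketch of this marginalization, assuming we have T networks whose weights approximate samples from the posterior p(w|D) (e.g. an ensemble) and whose forward pass ends in the Soft Max shown above; PyTorch is used here purely for illustration, not as the talk's actual code.

```python
import torch

def marginal_predict(models, x):
    # p(y|x, D) = E_{w ~ p(w|D)} [p(y|x, w)], approximated by a
    # Monte Carlo average over the sampled weight settings `models`.
    with torch.no_grad():
        probs = torch.stack([m(x) for m in models])  # T x batch x classes
    mean = probs.mean(dim=0)       # predictive distribution
    epistemic = probs.var(dim=0)   # disagreement across weight samples
    return mean, epistemic
```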

  9. CHANGE IN DECISION BOUNDARIES [Figure: the same points X_0, X_a, X_b, with Decision Boundaries 1-3 (Ostrich, Vacuum, Shoe) shifting as different model weights are sampled.]

  10. CAPTURING ALEATORIC UNCERTAINTY • Heteroscedastic variance estimation. [Diagram: the same ConvNet (Conv3-64 ×3, FC-10, Soft Max) predicting an observation-noise variance alongside its outputs.]

  11. CHANGE IN DATA POINT [Figure: the data point X_0 perturbed toward X_a and X_b by observation noise, with Decision Boundaries 1-3 (Ostrich, Vacuum, Shoe) held fixed.]

  12. METHODOLOGY

  13. NEURAL NETWORKS AND DATASETS • ConvNet on MNIST: Conv3-64, Conv3-64, Average Pool, FC-10, Soft Max. • ConvNet on CIFAR10: Conv3-64, Conv3-64, Conv3-64, Average Pool, Conv3-10, Soft Max.
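A plausible PyTorch reconstruction of the MNIST ConvNet above; the activation functions and pooling size are not given on the slide and are assumptions here.

```python
import torch.nn as nn

mnist_convnet = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),   # Conv3-64
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),  # Conv3-64
    nn.AvgPool2d(kernel_size=2),                             # Average Pool
    nn.Flatten(),
    nn.Linear(64 * 14 * 14, 10),                             # FC-10
    nn.Softmax(dim=-1),                                      # Soft Max
)
```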

  14. EPISTEMIC UNCERTAINTY: AN APPROXIMATION • The same ConvNets (MNIST and CIFAR10 variants) with Dropout layers inserted between the convolutional layers and before the classifier.
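A sketch of MC-Dropout inference with such a network, following Gal and Ghahramani's approximation: dropout stays active at test time and T stochastic passes are averaged. It assumes the model contains no BatchNorm layers, so train() only re-enables dropout.

```python
import torch

def mc_dropout_predict(model, x, T=50):
    model.train()  # keep nn.Dropout layers sampling at inference time
    with torch.no_grad():
        # Each pass drops a different set of units, which acts like
        # sampling a different set of network weights.
        probs = torch.stack([model(x) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)
```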

  15. ALEATORIC UNCERTAINTY ESTIMATION • The same ConvNets (MNIST and CIFAR10 variants) with a Sampler layer inserted between the final FC-10 / Conv3-10 layer and the Soft Max, drawing logit samples from a predicted distribution.
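One common reading of the Sampler layer, following Kendall and Gal's heteroscedastic classification model: the network predicts a mean and log-variance over the logits, and the softmax is averaged over Gaussian samples. The exact head layout is not recoverable from the slide, so this is a sketch under that assumption.

```python
import torch

def sampler_softmax(mu, log_var, T=50):
    # Treat the logits as Gaussian: logit_t = mu + sigma * eps_t.
    # Averaging the softmax over T draws marginalizes out the
    # observation noise (aleatoric uncertainty).
    std = (0.5 * log_var).exp()
    eps = torch.randn(T, *mu.shape)                # T x batch x classes
    probs = torch.softmax(mu + std * eps, dim=-1)
    return probs.mean(dim=0)
```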

  16. GENERATING ADVERSARIAL PERTURBATIONS • Use Cleverhans: https://github.com/tensorflow/cleverhans • Adversarial attacks: 1. Fast Gradient Sign Method (FGSM): Goodfellow et al. 2. Jacobian-Based Saliency Map Attack (JSMA): Papernot et al. 3. Carlini and Wagner attacks: Carlini et al. 4. Black-box attack: Papernot et al.
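For reference, a minimal PyTorch sketch of FGSM (the experiments themselves used the Cleverhans implementations); it assumes the model ends in a softmax, as in the ConvNets above, so the log is taken before the NLL loss.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    # x_adv = x + eps * sign( grad_x L(f(x), y) )  (Goodfellow et al.)
    x = x.clone().detach().requires_grad_(True)
    log_probs = model(x).clamp_min(1e-12).log()  # model outputs softmax
    F.nll_loss(log_probs, y).backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()        # keep valid pixel range
```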

  17. RESULTS

  18. RESULTS

  19. EPISTEMIC UNCERTAINTY ESTIMATION

  20. ALEATORIC UNCERTAINTY ESTIMATION

  21. BLACK BOX ATTACK

  22. MC-DROPOUT APPROXIMATION

  23. CONCLUSION

  24. QUESTION Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?

  25. ANSWER(S) • Adversarial perturbations cannot be distinguished as input noise through aleatoric uncertainty estimation. • Epistemic uncertainty estimation, as realized in Bayesian neural networks, might be robust to adversarial attacks. • Results are inconclusive, due to the lack of mathematical bounds on the approximations made by ensembles and MC-Dropout. • See: Sufficient Conditions for Robustness to Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Network. https://openreview.net/forum?id=B1eZRiC9YX
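As a purely hypothetical illustration of the epistemic route, an input could be flagged when the mutual information between its prediction and the weights (the BALD score) is high across MC-Dropout or ensemble passes; the threshold below is arbitrary and illustrative only.

```python
import torch

def flag_high_epistemic(probs, threshold=0.2):
    # probs: T x batch x classes softmax outputs from T weight samples.
    # BALD: I[y; w] = H[ E_w p(y|x,w) ] - E_w H[ p(y|x,w) ].
    p = probs.clamp_min(1e-12)
    mean = probs.mean(dim=0)
    h_mean = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    mean_h = -(p * p.log()).sum(dim=-1).mean(dim=0)
    return (h_mean - mean_h) > threshold  # True -> flag as suspicious
```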

  26. CONCLUSION • There is no easy way around robustness certification to guarantee the safety of deep neural networks. • Even then, the mode of action of each specific type of adversarial attack needs to be taken into consideration. • Research question: how can we certify against black-box attacks?
