APRIL 29TH, 2019 PREPARED BY: ALI HARAKEH
ON THE ADVERSARIAL ROBUSTNESS OF UNCERTAINTY AWARE DEEP NEURAL NETWORKS
QUESTION
4/29/2019 Ali Harakeh 2
Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?
ADVERSARIAL ROBUSTNESS
HOW GOOD IS YOUR NEURAL NETWORK?
- Neural networks are not robust to input perturbations.
- Example: Carlini and Wagner Attack on MNIST
ADVERSARIAL PERTURBATIONS
[Figure: decision boundaries 1–3, with a clean input X0 (classified "ostrich"/"vacuum") and adversarial points Xa ("shoe") and Xb reached by the minimum perturbation across a boundary]
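The minimum-perturbation picture above corresponds to a standard formulation (the notation below is assumed, not taken from the slides): the adversary seeks the smallest input change that crosses a decision boundary of the classifier f,

```latex
\delta^{\ast} = \arg\min_{\delta} \|\delta\|_{p}
\quad \text{s.t.} \quad f(x_0 + \delta) \neq f(x_0),
```

so points like Xa and Xb are of the form x_0 + δ, lying just past a boundary.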
UNCERTAINTY IN DNNS
SOURCES OF UNCERTAINTY IN DNNS
- Two sources of uncertainty exist in DNNs.
- Epistemic (model) uncertainty: captures the ignorance about which model generated our data.
- Aleatoric (observation) uncertainty: captures the inherent noise in the observations.
[Figure: an original image alongside its aleatoric and epistemic uncertainty maps]
CAPTURING EPISTEMIC UNCERTAINTY
- Marginalizing over neural network parameters:
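The marginalization can be written out as the Bayesian predictive distribution, approximated in practice by averaging over sampled networks (a standard formulation, assumed rather than copied from the slide):

```latex
p(y \mid x, \mathcal{D}) = \int p(y \mid x, \mathbf{w})\, p(\mathbf{w} \mid \mathcal{D})\, d\mathbf{w}
\;\approx\; \frac{1}{T} \sum_{t=1}^{T} p(y \mid x, \mathbf{w}_t),
\qquad \mathbf{w}_t \sim q(\mathbf{w}).
```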
[Diagram: Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax]
CHANGE IN DECISION BOUNDARIES
[Figure: decision boundaries 1–3 shifting across model samples, with points X0 ("ostrich"/"vacuum"), Xa ("shoe"), and Xb]
CAPTURING ALEATORIC UNCERTAINTY
- Heteroscedastic variance estimation:
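Heteroscedastic variance estimation for classification is commonly done by corrupting the logits with learned Gaussian noise and averaging the sampled softmax outputs, which matches the "Sampler" block in the architecture slides (the exact loss used in the presentation is assumed, not shown):

```latex
\hat{\mathbf{u}}_t = \mathbf{f}(x) + \sigma(x)\,\epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I),
\qquad
\mathcal{L} = -\log \frac{1}{T} \sum_{t=1}^{T} \mathrm{softmax}(\hat{\mathbf{u}}_t)_{c}.
```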
[Diagram: Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax]
CHANGE IN DATA POINT
[Figure: decision boundaries 1–3 with a perturbed data point, showing X0 ("ostrich"/"vacuum"), Xa ("shoe"), and Xb]
METHODOLOGY
NEURAL NETWORKS AND DATASETS
ConvNet on MNIST: Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax
ConvNet on CIFAR10: Conv3-64 → Conv3-64 → Average Pool → Conv3-10 → Average Pool → Softmax

EPISTEMIC UNCERTAINTY: AN APPROXIMATION
[Diagram: dropout variants of the two ConvNets, with a Dropout layer inserted after each convolutional layer]
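The MC-Dropout approximation can be sketched as follows — a minimal NumPy illustration with made-up weights, not the presentation's actual networks. Dropout is kept active at test time and T stochastic forward passes are averaged; the spread across passes serves as an epistemic uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed random weights (stand-ins for trained ones).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout LEFT ON at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; per-class std approximates epistemic uncertainty."""
    probs = np.stack([stochastic_forward(x) for _ in range(T)])
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
print(mean, std)
```

Each of the T passes samples a different thinned network, so the average plays the role of the Monte Carlo sum in the marginalization above.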
ALEATORIC UNCERTAINTY ESTIMATION
[Diagram: the MNIST and CIFAR10 ConvNets augmented with a second output head (FC-10 / Conv3-10) feeding a Sampler block before the Softmax]
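The Sampler block can be illustrated with a small NumPy sketch — made-up logits and a hypothetical per-logit log-standard-deviation head, illustrating the logit-sampling idea rather than the presentation's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def aleatoric_predict(logits, log_sigma, T=100):
    """Corrupt the logits with learned Gaussian noise and average the
    resulting softmax samples (logit-sampling aleatoric estimation)."""
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(T,) + logits.shape)   # (T, classes) noise draws
    samples = softmax(logits + sigma * eps)      # softmax over each sample
    return samples.mean(axis=0)

logits = np.array([2.0, 0.5, -1.0])       # hypothetical mean-logit head output
log_sigma = np.array([0.0, 0.0, 0.0])     # hypothetical per-logit log std-dev head
probs = aleatoric_predict(logits, log_sigma)
print(probs)
```

Larger predicted sigma flattens the averaged softmax, so inputs with high observation noise yield less confident predictions.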
GENERATING ADVERSARIAL PERTURBATIONS
- Use Cleverhans: https://github.com/tensorflow/cleverhans
- Adversarial attacks:
  1. Fast Gradient Sign Method (FGSM): Goodfellow et al.
  2. Jacobian-based Saliency Map Attack (JSMA): Papernot et al.
  3. Carlini and Wagner attacks: Carlini et al.
  4. Black-box attack: Papernot et al.
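The simplest of these, FGSM, can be sketched from first principles — a toy linear softmax classifier in NumPy with made-up weights, not the Cleverhans API: take one step of size eps in the direction of the sign of the input gradient of the cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 3))   # illustrative weights, not a trained model
b = np.zeros(3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(x, y_true, eps=0.1):
    """x_adv = x + eps * sign(d loss / d x) for cross-entropy loss."""
    p = softmax(x @ W + b)
    grad_logits = p.copy()
    grad_logits[y_true] -= 1.0        # d CE / d logits = p - onehot(y)
    grad_x = W @ grad_logits          # chain rule through the linear layer
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)
y = int(np.argmax(softmax(x @ W + b)))   # currently predicted class
x_adv = fgsm(x, y, eps=0.5)
print(softmax(x @ W + b)[y], softmax(x_adv @ W + b)[y])
```

For this convex toy model the loss is guaranteed to increase along the signed-gradient direction, so the probability of the original class always drops; for deep networks the same step only increases the loss to first order.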
RESULTS
EPISTEMIC UNCERTAINTY ESTIMATION
ALEATORIC UNCERTAINTY ESTIMATION
BLACK BOX ATTACK
MC-DROPOUT APPROXIMATION
CONCLUSION
QUESTION
Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?
ANSWER(S)
- Adversarial perturbations cannot be distinguished as input noise through aleatoric uncertainty estimation.
- Epistemic uncertainty estimation, realized via Bayesian neural networks, might be robust to adversarial attacks.
- Results are inconclusive, due to the lack of mathematical bounds on the approximation through ensembles and MC-Dropout.
- See: Sufficient Conditions for Robustness to Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Network. https://openreview.net/forum?id=B1eZRiC9YX
CONCLUSION
- There is no easy way out of using robustness certification to guarantee the safety of deep neural networks.
- Even then, the mode of action of each specific type of adversarial attack needs to be taken into consideration.
- Research Question: How to certify against black box attacks?