SLIDE 1

ON THE ADVERSARIAL ROBUSTNESS OF UNCERTAINTY AWARE DEEP NEURAL NETWORKS

APRIL 29TH, 2019 | PREPARED BY: ALI HARAKEH

SLIDE 2

QUESTION

Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?

SLIDE 3

ADVERSARIAL ROBUSTNESS

SLIDE 4

HOW GOOD IS YOUR NEURAL NETWORK?

  • Neural networks are not robust to input perturbations.
  • Example: Carlini and Wagner Attack on MNIST

[Figure: MNIST examples perturbed by the Carlini and Wagner attack]

SLIDE 5

ADVERSARIAL PERTURBATIONS

[Figure: decision boundaries 1-3 with points X0 (ostrich), Xa (shoe), Xb (vacuum), and the minimum perturbation needed to push X0 across a boundary]
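The figure's "minimum perturbation" has a standard formal reading. As a sketch (the notation is mine, not from the slides), the minimal adversarial perturbation for a classifier f at a clean input x_0 solves

\[
\delta^{*} = \arg\min_{\delta} \|\delta\|_{p} \quad \text{s.t.} \quad f(x_{0} + \delta) \neq f(x_{0}),
\]

so the adversarial example x_a = x_0 + \delta^{*} lies just across the nearest decision boundary.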

SLIDE 6

UNCERTAINTY IN DNNS

SLIDE 7

SOURCES OF UNCERTAINTY IN DNNS

  • Two sources of uncertainty exist in DNNs.
  • Epistemic (model) uncertainty: captures our ignorance about which model generated the data.
  • Aleatoric (observation) uncertainty: captures the inherent noise in the observations.

[Figure: an original image alongside its aleatoric and epistemic uncertainty maps]

SLIDE 8
CAPTURING EPISTEMIC UNCERTAINTY

  • Marginalizing over neural network parameters (see the sketch below):

[Figure: MNIST ConvNet, Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax]
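The bullet's marginalization is the Bayesian predictive distribution. The slide's equation did not survive extraction, so the following is the standard formulation rather than a verbatim copy:

\[
p(y \mid x, \mathcal{D}) = \int p(y \mid x, W)\, p(W \mid \mathcal{D})\, dW \;\approx\; \frac{1}{T} \sum_{t=1}^{T} p(y \mid x, W_{t}), \qquad W_{t} \sim q(W),
\]

where q(W) approximates the weight posterior, realized later in the deck by ensembles and MC-Dropout.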

SLIDE 9

CHANGE IN DECISION BOUNDARIES

[Figure: the decision boundaries of Slide 5 shifting across sampled models while the data points X0, Xa, Xb stay fixed]

SLIDE 10
CAPTURING ALEATORIC UNCERTAINTY

  • Heteroscedastic variance estimation (a standard formulation follows):

[Figure: MNIST ConvNet, Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax]
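The slide's equation was also lost in extraction. A standard heteroscedastic-classification formulation (Kendall and Gal, 2017), consistent with the "Sampler" head on Slide 15, corrupts the predicted logits with predicted Gaussian noise:

\[
\hat{u}_{t} = f^{W}(x) + \sigma^{W}(x)\, \epsilon_{t}, \quad \epsilon_{t} \sim \mathcal{N}(0, I), \qquad p(y \mid x) \approx \frac{1}{T} \sum_{t=1}^{T} \operatorname{softmax}(\hat{u}_{t}),
\]

where f^{W}(x) are the mean logits and \sigma^{W}(x) the learned observation noise.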

SLIDE 11

CHANGE IN DATA POINT

[Figure: the decision boundaries of Slide 5 held fixed while the data point is perturbed from X0 toward Xa and Xb]

SLIDE 12

METHODOLOGY

SLIDE 13

NEURAL NETWORKS AND DATASETS

ConvNet on MNIST: Conv3-64 → Conv3-64 → Conv3-64 → FC-10 → Softmax

ConvNet on CIFAR10: Conv3-64 → Conv3-64 → Average Pool → Conv3-10 → Average Pool → Softmax
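A minimal Keras sketch of the two networks, assuming "Conv3-64" means a 3x3 convolution with 64 filters and that the final CIFAR10 average pool is global; the layer ordering is reconstructed from the garbled slide, so treat it as an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def mnist_convnet():
    # Conv3-64 -> Conv3-64 -> Conv3-64 -> FC-10 -> Softmax
    return models.Sequential([
        layers.Conv2D(64, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])

def cifar10_convnet():
    # Conv3-64 -> Conv3-64 -> AvgPool -> Conv3-10 -> global AvgPool -> Softmax
    return models.Sequential([
        layers.Conv2D(64, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.Conv2D(64, 3, activation="relu"),
        layers.AveragePooling2D(),
        layers.Conv2D(10, 3),
        layers.GlobalAveragePooling2D(),
        layers.Softmax(),
    ])
```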

SLIDE 14

EPISTEMIC UNCERTAINTY: AN APPROXIMATION

[Figure: the MNIST and CIFAR10 ConvNets with a Dropout layer inserted after each convolution, enabling MC-Dropout]
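A minimal MC-Dropout inference sketch, assuming a Keras model built with the Dropout layers shown in the figure; passing training=True keeps dropout active at test time, so T stochastic passes approximate the marginalization from Slide 8:

```python
import numpy as np

def mc_dropout_predict(model, x, num_samples=50):
    """Average softmax outputs over stochastic forward passes.

    training=True keeps Dropout sampling at inference, so each pass
    runs a different thinned network (MC-Dropout, Gal & Ghahramani).
    """
    probs = np.stack([model(x, training=True).numpy()
                      for _ in range(num_samples)])
    mean = probs.mean(axis=0)  # approximate predictive distribution
    # predictive entropy as a simple epistemic-uncertainty score
    entropy = -np.sum(mean * np.log(mean + 1e-12), axis=-1)
    return mean, entropy
```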

SLIDE 15

ALEATORIC UNCERTAINTY ESTIMATION

[Figure: MNIST ConvNet with a second FC-10 head feeding a Sampler; CIFAR10 ConvNet with a second average-pooled Conv3-10 head feeding a Sampler, each head predicting logit variance]
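A sketch of what the "Sampler" box plausibly computes, following the heteroscedastic formulation from Slide 10; the function name and the log-variance parameterization are my assumptions, not from the slides:

```python
import tensorflow as tf

def sample_softmax(mean_logits, log_var, num_samples=20):
    """Corrupt logits with predicted Gaussian noise, then average softmax.

    mean_logits, log_var: outputs of the two 10-way heads (FC-10 on
    MNIST, pooled Conv3-10 on CIFAR10); shapes are (batch, 10).
    Assumes static shapes (eager mode).
    """
    std = tf.exp(0.5 * log_var)  # log-variance keeps the std positive
    eps = tf.random.normal((num_samples,) + tuple(mean_logits.shape))
    sampled = mean_logits[None] + std[None] * eps  # (T, batch, 10)
    return tf.reduce_mean(tf.nn.softmax(sampled, axis=-1), axis=0)
```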

SLIDE 16

GENERATING ADVERSARIAL PERTURBATIONS

  • Use Cleverhans: https://github.com/tensorflow/cleverhans
  • Adversarial attacks (a minimal FGSM sketch follows this list):

1. Fast Gradient Sign Method (FGSM): Goodfellow et al.
2. Jacobian-based Saliency Map Attack (JSMA): Papernot et al.
3. Carlini and Wagner (C&W) attacks.
4. Black-box attack: Papernot et al.
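A minimal hand-rolled FGSM sketch in TensorFlow, shown instead of the Cleverhans call so it stays self-contained; the eps value and the cross-entropy loss are illustrative assumptions:

```python
import tensorflow as tf

def fgsm_attack(model, x, y_true, eps=0.1):
    """One-step Fast Gradient Sign Method (Goodfellow et al.).

    Moves each input by eps in the sign of the loss gradient, the
    direction that most increases the classification loss.
    """
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            y_true, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + eps * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # stay in valid pixel range
```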

SLIDE 17

RESULTS

SLIDE 18

RESULTS

SLIDE 19

EPISTEMIC UNCERTAINTY ESTIMATION

SLIDE 20

ALEATORIC UNCERTAINTY ESTIMATION

SLIDE 21

BLACK BOX ATTACK

SLIDE 22

MC-DROPOUT APPROXIMATION

SLIDE 23

CONCLUSION

SLIDE 24

QUESTION

Can a neural network mitigate the effects of adversarial attacks by estimating the uncertainty in its predictions?

SLIDE 25

ANSWER(S)

  • Adversarial perturbations cannot be distinguished as input noise through aleatoric uncertainty estimation.
  • Epistemic uncertainty estimation, manifested as Bayesian neural networks, might be robust to adversarial attacks.
  • Results are inconclusive, due to the lack of mathematical bounds on the approximation through ensembles and MC-Dropout.
  • See: Sufficient Conditions for Robustness to Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks, https://openreview.net/forum?id=B1eZRiC9YX

SLIDE 26

CONCLUSION

  • There is no easy way out of using robustness certification to guarantee the safety of deep neural networks.
  • Even then, the mode of action of each specific type of adversarial attack needs to be taken into consideration.
  • Research question: how do we certify against black-box attacks?
