A geometric perspective on the robustness of deep networks - - PowerPoint PPT Presentation



SLIDE 1

A geometric perspective on the robustness of deep networks

Seyed-Mohsen Moosavi-Dezfooli
Amirkabir Artificial Intelligence Summer Summit, July 2019

SLIDE 2

Tehran Polytechnic Iran

SLIDE 3

EPFL Lausanne, Switzerland

SLIDE 4

Collaborators

Omar Fawzi (ENS-Lyon), Stefano Soatto (UCLA), Pascal Frossard (EPFL), Alhussein Fawzi (Google DeepMind), Jonathan Uesato (Google DeepMind), Can Kanbak (Bilkent), Apostolos Modas (EPFL)

SLIDE 5

(Image credits: Rachel Jones, Biomedical Computation Review; David Paul Morris, Bloomberg/Getty Images; Google Research.)

Are we ready?

SLIDE 6

(Figure: a school bus image x, plus an imperceptible perturbation r, is classified as an ostrich by the network k̂(·).)

▪ Intriguing properties of neural networks, Szegedy et al., ICLR 2014.

Adversarial perturbations

SLIDE 7

▪ Universal adversarial perturbations, Moosavi et al., CVPR 2017.

(Figure: adding a single universal perturbation changes predictions: balloon → joystick, flag pole → face powder, Labrador → Chihuahua.)

Universal (adversarial) perturbations

SLIDE 8

“Geometry is not true, it is advantageous.”

Henri Poincaré

SLIDE 9

Geometry of …

▪ Adversarial perturbations: how large is the “space” of adversarial examples?
▪ Universal perturbations: what causes the vulnerability of deep networks to universal perturbations?
▪ Adversarial training: which geometric features contribute to better robustness properties?

SLIDE 10

Geometry of adversarial perturbations

SLIDE 11

Geometric interpretation of adversarial perturbations

r* = argmin_r ‖r‖₂  s.t.  k̂(x + r) ≠ k̂(x)

(Figure: a datapoint x ∈ ℝ^d and its nearest point x + r* on the decision boundary.)
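For an affine binary classifier f(x) = wᵀx + b, this minimization has a closed form: the minimal ℓ₂ perturbation projects x onto the hyperplane f = 0. That closed form is the per-iteration building block of DeepFool-style attacks, which repeatedly linearize the network and take this step. A minimal numpy sketch (names are illustrative):

```python
import numpy as np

def minimal_perturbation(x, w, b):
    """Closed-form minimal l2 perturbation moving x onto the
    hyperplane w.x + b = 0 (the linearized decision boundary)."""
    f = w @ x + b
    return -f * w / np.dot(w, w)

# Toy 2-D affine classifier.
w = np.array([3.0, 4.0])          # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])          # f(x) = 9, so ||r*|| = 9/5 = 1.8
r = minimal_perturbation(x, w, b)
print(np.linalg.norm(r))          # 1.8: distance of x to the boundary
print(w @ (x + r) + b)            # ≈ 0: x + r lies on the boundary
```

For a deep network, DeepFool applies this step to the first-order Taylor expansion of f at the current iterate and repeats until the label flips.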

SLIDE 12

Adversarial examples are “blind spots”. Deep classifiers are “too linear”.

Two hypotheses

▪ Intriguing properties of neural networks, Szegedy et al., ICLR 2014.

▪ Explaining and harnessing adversarial examples, Goodfellow et al., ICLR 2015.

SLIDE 13

▪ Robustness of classifiers: from adversarial to random noise, Fawzi, Moosavi, Frossard, NIPS 2016.

Normal cross-sections of the decision boundary

(Figure: a normal cross-section U of the decision boundary B at a datapoint x, with tangent space T_xB, normal v, and perturbation r.)

SLIDE 14

▪ Robustness of classifiers: from adversarial to random noise, Fawzi, Moosavi, Frossard, NIPS 2016.

Curvature of decision boundary of deep nets

(Plot: normal cross-sections of the decision boundaries B₁ and B₂ around a datapoint x; the axis scales differ by roughly two orders of magnitude.)

Decision boundary of CNNs is almost flat along random directions.

SLIDE 15

Space of adversarial perturbations

Adversarial perturbations constrained to a random subspace S of dimension m:

r_S(x) = argmin_{r ∈ S} ‖r‖  s.t.  k̂(x + r) ≠ k̂(x)

For low-curvature classifiers, w.h.p., we have

‖r_S(x)‖ = Θ(√(d/m) · ‖r*(x)‖)

(Figure: the minimal perturbation r* and its subspace-constrained counterpart r*_S.)
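For an affine classifier the √(d/m) scaling can be checked directly: the minimal perturbation restricted to a subspace S with orthonormal basis V has norm |f(x)|/‖Vᵀw‖, versus |f(x)|/‖w‖ unconstrained, and for a random S the ratio concentrates around √(d/m). A small numpy sanity check (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10_000, 100
w = rng.normal(size=d)                     # normal of an affine classifier

# Orthonormal basis of a random m-dimensional subspace S.
V, _ = np.linalg.qr(rng.normal(size=(d, m)))

# Ratio of minimal-perturbation norms: (|f|/||V.T w||) / (|f|/||w||).
ratio = np.linalg.norm(w) / np.linalg.norm(V.T @ w)
print(ratio, np.sqrt(d / m))               # both close to 10
```

Since E‖Vᵀw‖² = (m/d)‖w‖² for a uniformly random subspace, the printed ratio matches √(d/m) up to sampling noise.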

SLIDE 16

Structured additive perturbations

(Figure: flowerpot image + structured perturbation = image classified as pineapple.)

▪ Robustness of classifiers: from adversarial to random noise, Fawzi, Moosavi, Frossard, NIPS 2016.

The “space” of adversarial examples is quite vast.

SLIDE 17

Summary

Geometry of adversarial examples:

▪ The decision boundary is “locally” almost flat.
▪ Datapoints lie close to the decision boundary.

Flatness can be used to construct a diverse set of perturbations and to design efficient attacks.

SLIDE 18

Geometry of universal perturbations

SLIDE 19

Universal adversarial perturbations (UAP)

▪ Universal adversarial perturbations, Moosavi et al., CVPR 2017.

(Figure: a single universal perturbation changes predictions: balloon → joystick, flag pole → face powder, Labrador → Chihuahua. Fooling rate: 85%.)

SLIDE 20

Diversity of perturbations

(Figure: universal perturbations computed for CaffeNet, GoogLeNet, ResNet-152, VGG-16, VGG-19, and VGG-F.)

SLIDE 21

Why do universal perturbations exist?

(Figure: two candidate geometries: a flat model and a curved model.)

SLIDE 22

Flat model

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

SLIDE 23

Flat model (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

(Plot: singular values, indices 1 to 50,000, of the matrix of decision-boundary normals vs. random vectors; the normals’ spectrum decays much faster.)

Normals to the decision boundary are “globally” correlated.
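The flavor of this singular-value experiment can be reproduced with synthetic vectors: columns that mostly share a low-dimensional subspace have a sharply decaying spectrum, while isotropic random vectors do not. A toy numpy sketch (the sizes and noise level are arbitrary stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 200, 1000, 5       # ambient dim, number of vectors, shared subspace dim

# Isotropic random vectors.
R = rng.normal(size=(d, n))

# "Normals" that mostly live in a shared k-dimensional subspace, plus small
# noise: a toy stand-in for globally correlated decision-boundary normals.
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
N = U @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(d, n))

s_rand = np.linalg.svd(R, compute_uv=False)
s_corr = np.linalg.svd(N, compute_uv=False)
print(s_rand[0] / s_rand[k])   # ~1: flat spectrum
print(s_corr[0] / s_corr[k])   # >> 1: energy concentrated in k directions
```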

SLIDE 24

Flat model (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The flat model only partially explains the universality.

Fooling rates: random perturbation 13%, flat-model perturbation (greedy algorithm) 38%, UAP 85%.

SLIDE 25

Curved model

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The principal curvatures of the decision boundary:

(Histogram of the principal curvatures, concentrated around 0.0.)
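The curvature profile is a spectrum of Hessian eigenvalues, which for a differentiable decision function can be estimated by finite differences. A generic sketch on a toy function with known curvature (not the paper's exact pipeline, which evaluates curvature of a trained network's decision boundary):

```python
import numpy as np

def hessian_fd(f, x, h=1e-3):
    """Estimate the Hessian of a scalar function f at x with
    central finite differences (exact for quadratics)."""
    d = x.size
    I = np.eye(d)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

# Toy decision function with known curvature: f(x) = x.T A x / 2.
A = np.diag([2.0, -1.0, 0.0])
f = lambda x: 0.5 * x @ A @ x
H = hessian_fd(f, np.zeros(3))
curvatures = np.sort(np.linalg.eigvalsh(H))
print(curvatures)               # ≈ [-1, 0, 2]
```

Plotting the sorted eigenvalues for many datapoints gives exactly the kind of curvature profile shown on this slide.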

SLIDE 26

Curved model (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The principal curvatures of the decision boundary:

(Figure: the curvature histogram, annotated with a datapoint x, its boundary normal n, and a direction v.)

SLIDE 27

Curved model (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The principal curvatures of the decision boundary:

(Figure: the curvature histogram, annotated with a datapoint x, its boundary normal n, and a direction v.)

SLIDE 28

Curved model (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The principal curvatures of the decision boundary:

(Figure: the curvature histogram, annotated with a datapoint x, its boundary normal n, and a direction v.)

SLIDE 29

Curved directions are shared

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

Normal sections of the decision boundary (for different datapoints) along a single direction:

(Figure: normal sections along a UAP direction vs. a random direction.)

SLIDE 30

Curved directions are shared (cont’d)

▪ Robustness of classifiers to universal perturbations, Moosavi et al., ICLR 2018.

The curved model better explains the existence of universal perturbations.

Fooling rates: random 13%, flat model 38%, curved model 67%, UAP 85%.

SLIDE 31

Summary

Universality of perturbations

Shared curved directions explain this vulnerability.

A possible solution

Regularizing the geometry to combat universal perturbations.

Why are deep nets curved?

▪ With friends like these, who needs adversaries?, Jetley et al., NeurIPS 2018.

SLIDE 32

Geometry of adversarial training

SLIDE 33

In a nutshell

(Diagram: image batch → adversarial perturbations (x → x + r) → training.)

Adversarial training ↔ curvature regularization

SLIDE 34

Adversarial training

(Diagram: image batch → adversarial perturbations (x → x + r) → training.)

One of the most effective methods to improve adversarial robustness…

▪ Obfuscated gradients give a false sense of security, Athalye et al., ICML 2018 (best paper).
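The adversarial training loop (an inner maximization that crafts perturbations, and an outer minimization on the perturbed batch) can be sketched on a linear model, where a single FGSM step solves the inner ℓ∞ maximization exactly. All names and hyperparameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two Gaussian blobs with labels y in {-1, +1}.
n, d, eps, lr, epochs = 500, 2, 0.3, 0.1, 200
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * np.array([2.0, 0.0]) + rng.normal(size=(n, d))

w = np.zeros(d)
for _ in range(epochs):
    # Inner max: worst-case l_inf perturbation of the linear model (FGSM step).
    X_adv = X + eps * np.sign(-y[:, None] * w)
    # Outer min: gradient step on the logistic loss at the perturbed points.
    sig = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))     # sigma(-y * w.x)
    w -= lr * (-(X_adv.T @ (y * sig)) / n)

clean_acc = np.mean(np.sign(X @ w) == y)
robust_acc = np.mean(np.sign((X + eps * np.sign(-y[:, None] * w)) @ w) == y)
print(clean_acc, robust_acc)
```

For deep networks the inner step is iterated (PGD) rather than taken once, but the alternating min-max structure is the same.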

SLIDE 35

Geometry of adversarial training

Curvature profiles of normally and adversarially trained networks:

(Plot: curvature profiles; the adversarially trained network shows much smaller curvature than the normally trained one.)

▪ Robustness via curvature regularisation, and vice versa, Moosavi et al., CVPR 2019.

SLIDE 36

Curvature Regularization (CURE)

▪ Robustness via curvature regularisation, and vice versa, Moosavi et al., CVPR 2019.

Accuracy (clean / PGD with ‖r*‖∞ = 8):

▪ Normal training: 94.9% / 0.0%
▪ CURE: 81.2% / 36.3%
▪ Adversarial training: 79.4% / 43.7%
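CURE's penalty can be sketched with finite differences: it penalizes how much the loss gradient changes along a direction z derived from the gradient sign, i.e. ‖∇ℓ(x + hz) − ∇ℓ(x)‖², a proxy for curvature along the most adversarial direction. A toy sketch with hand-written gradients (illustrative names and step size; the real method applies this to a network's training loss):

```python
import numpy as np

def curvature_penalty(grad_fn, x, h=1.0):
    """Finite-difference curvature penalty in the spirit of CURE:
    ||grad(x + h*z) - grad(x)||^2 along the gradient-sign direction z."""
    g = grad_fn(x)
    z = np.sign(g)
    z = h * z / (np.linalg.norm(z) + 1e-12)
    return np.sum((grad_fn(x + z) - g) ** 2)

# Toy losses: a curved one and a flat (linear) one.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad_curved = lambda x: A @ x                  # loss = x.T A x / 2
grad_flat = lambda x: np.array([1.0, -2.0])    # loss = [1, -2].x

x = np.array([1.0, 1.0])
print(curvature_penalty(grad_curved, x))   # > 0: gradient changes along z
print(curvature_penalty(grad_flat, x))     # 0: linear loss, zero curvature
```

Adding this penalty (weighted by a coefficient) to the training loss flattens the loss surface around datapoints, which is how CURE trades a little clean accuracy for robustness.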

SLIDE 37

AT vs CURE

▪ AT: implicit regularization; time-consuming; SOTA robustness.
▪ CURE: explicit regularization; 3x to 5x faster; on par with SOTA.

▪ Robustness via curvature regularisation, and vice versa, Moosavi et al., CVPR 2019.

SLIDE 38

Summary

Inherently more robust classifiers

Curvature regularization can significantly improve the robustness properties.

Counter-intuitive observation

Because its decision boundary is more linear (less curved), an adversarially trained net is “easier” to fool.

A better trade-off?

▪ Adversarial Robustness through Local Linearization, Qin et al., arXiv.

SLIDE 39

Future challenges

SLIDE 40

Disentangling different factors

▪ Architectures: batch-norm, dropout, depth, width, etc.
▪ Data: # of modes, convexity, distinguishability, etc.
▪ Training: batch size, solver, learning rate, etc.

SLIDE 41

Beyond additive perturbations

(Figure: a geometrically transformed image changes the prediction from bear to fox.)

▪ Geometric robustness of deep networks, Kanbak, Moosavi, Frossard, CVPR 2018.

▪ Spatially transformed adversarial examples, Xiao et al., ICLR 2018.

(Figure: a spatially transformed “0” is misclassified as “2”.)

SLIDE 42

“Interpretability” and robustness

(Figure: an original image and gradient visualizations under standard and adversarial training.)

▪ Robustness may be at odds with accuracy, Tsipras et al., NeurIPS 2018.

SLIDE 43

ETHZ Zürich, Switzerland

Google Zürich

SLIDE 44

Interested in my research?

moosavi.sm@gmail.com smoosavi.me