SLIDE 1

Fabrizio Falchi

ISTI, CNR, Pisa, Italy

www.fabriziofalchi.it

ATTACKING DEEP NEURAL NETWORKS

WITH ADVERSARIAL IMAGES

COST ACTION CA16101 - Dubrovnik, November 7th

SLIDE 2

fabrizio.falchi@cnr.it

SLIDE 3

SLIDE 4

SLIDE 5

WHAT’S THAT?

SLIDE 6

WHAT’S THAT?

SLIDE 7

WHAT’S THAT?

SLIDE 8

WHAT’S THAT?

SLIDE 9

ADVERSARIAL EXAMPLES

SLIDE 10

ILLUSIONS

Edward H. Adelson

SLIDE 11

ILLUSIONS

Edward H. Adelson

SLIDE 12

SLIDE 13

DUBROVNIK

SLIDE 14

DUBROVNIK – DEEP DREAM

SLIDE 15

KNOW YOUR ENEMY

SLIDE 16

ADVERSARY

Goal Knowledge Capability

SLIDE 17

ADVERSARY’S GOAL

SLIDE 18

GENUINE IMAGES

… Mushrooms Pineapple Toucan …

SLIDE 19

NON-TARGETED ATTACK

Goal: non-targeted – the adversarial image gets any wrong label.

[Figure: genuine image (Mushrooms) + perturbation = adversarial image classified as <whatever>]
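A non-targeted attack can be sketched with the fast gradient sign method (FGSM): take one step that increases the classification loss for the current label. A minimal NumPy sketch on a toy linear classifier (the weights are random placeholders, not a real network; a real attack differentiates through the actual model):

```python
import numpy as np

# Toy linear "network" with random placeholder weights (3 classes, 5 features).
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))

def grad_loss_wrt_input(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x, for label y.
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

x = rng.standard_normal(5)
y = int(np.argmax(W @ x))          # label of the genuine input

# Non-targeted FGSM: move *up* the loss gradient, bounded by epsilon.
eps = 0.25
x_adv = x + eps * np.sign(grad_loss_wrt_input(x, y))
# Each pixel/feature changes by at most eps, so the image looks unchanged.
```

The sign operation caps the per-feature perturbation at eps, which is why the adversarial image is visually indistinguishable from the genuine one.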

SLIDE 20

TARGETED ATTACK

Goal: targeted – the adversarial image gets a chosen label.

[Figure: genuine image (Mushrooms) + perturbation = adversarial image classified as Toucan]
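The targeted variant steps *down* the loss for a chosen target class instead of up the loss for the true one. Same toy setup as before (placeholder weights, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))  # toy stand-in for a trained network

def grad_loss_wrt_input(x, y):
    # Gradient of cross-entropy w.r.t. the input, for label y.
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

x = rng.standard_normal(5)
y = int(np.argmax(W @ x))          # genuine prediction
target = (y + 1) % 3               # any class other than the genuine one

# Targeted FGSM: *decrease* the loss of the chosen target class.
eps = 0.25
x_adv = x - eps * np.sign(grad_loss_wrt_input(x, target))
```

Iterating this step (with a small step size and a projection back into the eps-ball) gives the stronger iterative variants used in practice.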

SLIDE 21

Goal Knowledge Capability

SLIDE 22

Slide credit: Biggio

SLIDE 23

ATTACKING DEEP NEURAL NETWORKS

SLIDE 24

BLACK BOX ADVERSARIAL EXAMPLE ATTACKS

Practical Black-Box Attacks against Machine Learning Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami
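The black-box recipe of Papernot et al. can be caricatured in a few lines: query the target only for labels, fit a local substitute model, attack the substitute with a gradient-based method, and rely on transferability. Everything below (linear models, a least-squares fit) is a toy stand-in for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.standard_normal((3, 5))   # the victim; weights unknown to us

def oracle(x):
    # Black-box access: labels only, no gradients.
    return int(np.argmax(W_target @ x))

# 1) Label a batch of queries and fit a substitute model on the answers.
X = rng.standard_normal((200, 5))
Y = np.eye(3)[[oracle(x) for x in X]]
W_sub = np.linalg.lstsq(X, Y, rcond=None)[0].T   # least-squares substitute

# 2) Craft an FGSM example against the substitute and hope it transfers.
x = rng.standard_normal(5)
y = oracle(x)
z = W_sub @ x
p = np.exp(z - z.max()); p /= p.sum(); p[y] -= 1.0
x_adv = x + 0.5 * np.sign(W_sub.T @ p)
```

The key point is that the attacker never needs the victim's gradients: adversarial examples crafted on the substitute often transfer to the target.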

SLIDE 25

ATTACKING FACE RECOGNITION SYSTEMS

SLIDE 26

ADVERSARIAL FACES

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 27

ADVERSARIAL FACES

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 28

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 29

ADVERSARIAL FACES

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 30

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 31

Fast Geometrically-Perturbed Adversarial Faces Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, Nasser M. Nasrabadi

SLIDE 32

ATTACKING IN REAL WORLD

SLIDE 33

ADVERSARIAL IMAGE

Photo: labsix

SLIDE 34

ROTATE ADVERSARIAL IMAGE

Photo: labsix

SLIDE 35

SLIDE 36

SLIDE 37

Robust Physical-World Attacks on Deep Learning Models Eykholt, Evtimov, Fernandes, Bo Li, Rahmati, Xiao, Prakash, Kohno, Song

SLIDE 38

SLIDE 39

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

SLIDE 40

ATTACKING DNN IN REAL WORLD

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

SLIDE 41

ATTACKING DNN IN REAL WORLD

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

SLIDE 42

ATTACKING DNN IN REAL WORLD

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

SLIDE 43

ATTACKING FACE VERIFICATION SYSTEMS

SLIDE 44

FACE RECOGNITION

[Figure: probe face classified among identities ID1, ID2, ID3, ..., ID10, ..., IDn]

SLIDE 45

FACE VERIFICATION

SLIDE 46

FACE VERIFICATION

Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks Goswami, Ratha, Agarwal, Singh, Vatsa


SLIDE 47

FACE VERIFICATION

Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks Goswami, Ratha, Agarwal, Singh, Vatsa


SLIDE 48

ADVERSARY-AWARE MACHINE LEARNING

SLIDE 49

ADVERSARY-AWARE MACHINE LEARNING


Machine learning systems should be aware of the arms race with the adversary.

Security evaluation of pattern classifiers under attack Biggio, Fumera, Roli

SLIDE 50

SLIDE 51

ADVERSARIAL EXAMPLE DETECTION

SLIDE 52

GENUINE IMAGES

… Mushrooms Pineapple Toucan …

SLIDE 53

NON-TARGETED ATTACK

[Figure: genuine image (Mushrooms) + perturbation = adversarial image classified as <whatever>]

SLIDE 54

DEFENSE

Increase robustness: the model still classifies the adversarial image correctly.

[Figure: genuine image (Mushrooms) + perturbation = image still classified as Mushrooms]
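One common way to increase robustness is adversarial training: craft a perturbed copy of each training input and fit on both the clean and the adversarial example. A toy sketch with a linear softmax model (weights, step sizes, and the FGSM perturbation are all illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5)) * 0.1   # toy softmax classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grads(x, y):
    # Cross-entropy gradients w.r.t. the input and the weights.
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p, np.outer(p, x)

x, y = rng.standard_normal(5), 1
eps, lr = 0.25, 0.1

g_x, _ = grads(x, y)
x_adv = x + eps * np.sign(g_x)          # FGSM neighbour of the clean input

# One training step on clean + adversarial example.
_, gW_clean = grads(x, y)
_, gW_adv = grads(x_adv, y)
W = W - lr * (gW_clean + gW_adv)
```

In effect the model is trained to give the correct label on a worst-case neighbourhood of each input, not just the input itself.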

SLIDE 55

DETECTION

Attack detection: flag adversarial inputs rather than (mis)classify them.

[Figure: genuine image (Mushrooms) + perturbation = input flagged as an attack]

SLIDE 56

ADVERSARIAL EXAMPLES DETECTION

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

SLIDE 57

OUR APPROACH

SLIDE 58

DEEP LEARNING (FROM NATURE)

[Figure: nested fields – AI ⊃ Machine Learning ⊃ Representation Learning ⊃ Deep Learning]

SLIDE 59

DEEP LEARNING (FROM NATURE)

Representation-learning methods allow a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level into a representation at a higher, slightly more abstract level.
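The "composing simple but non-linear modules" idea in one toy loop (random placeholder weights; each module is an affine map followed by a ReLU):

```python
import numpy as np

rng = np.random.default_rng(0)

def module(x, w):
    # One simple non-linear module: linear transform + ReLU.
    return np.maximum(0.0, w @ x)

sizes = [8, 16, 16, 4]                 # raw input -> ... -> most abstract
x = rng.standard_normal(sizes[0])
representations = [x]
for n_in, n_out in zip(sizes, sizes[1:]):
    w = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
    representations.append(module(representations[-1], w))

# representations[0] is the raw data; each later entry is the
# representation at a higher, slightly more abstract level.
```

Training would adjust the weights `w`; the point here is only the structure: each level's representation is a non-linear function of the previous one.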

SLIDE 60

MULTIPLE LEVELS OF ABSTRACTION

SLIDE 61

MULTIPLE LEVELS OF ABSTRACTION

SLIDE 62

OUR APPROACH

A detection scheme for adversarial images based on the internal representation (aka deep features) of the neural network classifier.

  • Main intuition: look at the evolution of features, i.e. the path formed by their positions in the feature spaces, during the forward pass of the network.
  • Claim: the trajectories traced by authentic inputs and adversarial examples differ and can be used to discern them.

Adversarial examples detection in features distance spaces
F. Carrara, R. Becarelli, R. Caldelli, F. Falchi, G. Amato
ECCV WOCM Workshop 2018
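The trajectory idea can be sketched as follows: during the forward pass, record the internal representation at every layer and summarize it by its distance to a per-layer reference point, giving a path through the distance spaces; a detector is then trained to separate genuine from adversarial trajectories. The toy random network and reference points below are illustrative only, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = [rng.standard_normal((16, 16)) / 4 for _ in range(4)]
refs = [rng.standard_normal(16) for _ in range(4)]   # per-layer references

def trajectory(x):
    # Distance of the evolving internal representation to a reference
    # point at each layer: the input's path through the feature spaces.
    dists = []
    for w, r in zip(weights, refs):
        x = np.maximum(0.0, w @ x)          # forward pass, layer by layer
        dists.append(float(np.linalg.norm(x - r)))
    return np.array(dists)

x_genuine = rng.standard_normal(16)
x_adv = x_genuine + 0.3 * np.sign(rng.standard_normal(16))

t_genuine, t_adv = trajectory(x_genuine), trajectory(x_adv)
# A sequence classifier (an LSTM in the paper) would be trained to
# separate the two kinds of trajectories.
```

The detector never needs the adversary's algorithm: it only looks at how the input moves through the network's own feature spaces.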

SLIDE 63

MULTIPLE LEVELS OF ABSTRACTION

SLIDE 64

OUR APPROACH: RESULTS

SLIDE 65

EASY TO IDENTIFY ADVERSARIAL IMAGES


SLIDE 66

HARD TO IDENTIFY ADVERSARIAL IMAGES

SLIDE 67

OTHER DETECTION APPROACHES

  • Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods [2017]
    Nicholas Carlini, David Wagner
  • On Detecting Adversarial Perturbations [2017]
    Jan Hendrik Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
  • Trace and Detect Adversarial Attacks on CNNs Using Feature Response Maps [2018]
    Mohammadreza Amirian, Friedhelm Schwenker, Thilo Stadelmann
  • Adversarial Examples Detection in Features Distance Spaces [2018]
    F. Carrara, R. Becarelli, R. Caldelli, F. Falchi, G. Amato
SLIDE 68

RELATED TOPICS

SLIDE 69

DETECTING FACE MORPHING ATTACKS

Detection of Face Morphing Attacks by Deep Learning
C. Seibold, W. Samek, A. Hilsmann, P. Eisert
SLIDE 70

ADVERSARIAL EXAMPLES DETECTION

HiDDeN: Hiding Data With Deep Networks Jiren Zhu, Russell Kaplan, Justin Johnson, Li Fei-Fei

SLIDE 71

SLIDE 72

SLIDE 73

THANKS!

Questions are welcome

Fabrizio Falchi

fabrizio.falchi@cnr.it

SLIDE 74

CONCLUSIONS

  • Machine Learning, and Deep Learning in particular, can be attacked
  • by slightly modifying images, but also in the real world,
  • even if our neural network is a black box for the enemy.
  • Many approaches have been proposed to make DL more robust.
  • Adversarial example detection is in its early stages.
  • We need adversary-aware machine learning.