SLIDE 1

On the (In-)Security of Machine Learning

Nicholas Carlini

Google Brain

SLIDE 2
SLIDE 3
SLIDE 4
SLIDE 5
SLIDE 6
SLIDE 7

Written: Sept 24, 2014

SLIDE 8

Written: Sept 24, 2014
Today: Oct 16, 2018

SLIDE 9

Written: Sept 24, 2014
Today: Oct 16, 2018
... 4 years ago

SLIDE 10

So how are we doing?

SLIDE 11

95% it is a French Bulldog

SLIDE 12

83% it is an Old English Sheepdog

SLIDE 13

78% it is a Greater Swiss Mountain Dog

SLIDE 14

67% it is a Great Dane

SLIDE 15

99.99% it is Guacamole

SLIDE 16

96% it is a Golden Retriever

SLIDE 17

99.99% it is Guacamole

SLIDE 18

This phenomenon is known as an adversarial example

  • B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, and F. Roli. Evasion attacks against machine learning at test time. 2013.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR 2014.
  • I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. 2014.
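In symbols (a standard formalization of the term; the notation below is my addition, not quoted from the slides): given a classifier f that correctly labels an input x as class y, an adversarial example is a nearby input that the classifier mislabels:

    f(x) = y, \qquad \|\delta\| \le \epsilon, \qquad f(x + \delta) \ne y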
SLIDE 19
SLIDE 20
SLIDE 21
SLIDE 22
SLIDE 23

Why should we care about adversarial examples?

Make ML robust

SLIDE 24
SLIDE 25

Why should we care about adversarial examples?

Make ML robust
Make ML better

SLIDE 26

How do we generate adversarial examples?

SLIDE 27

DEFN: The loss of a neural network on an input x for a label y is a measure of how wrong the network is on x.

SLIDE 28

loss([dog photo], dog) is small
loss([dog photo], guacamole) is large
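As a concrete sketch of what "loss" means here, assuming the usual cross-entropy loss in PyTorch (the logits below are invented for illustration, not taken from the talk):

    import torch
    import torch.nn.functional as F

    # Hypothetical network outputs (logits) for the classes [dog, guacamole]
    logits = torch.tensor([[4.0, -2.0]])  # the network is confident: dog
    dog = torch.tensor([0])
    guacamole = torch.tensor([1])

    print(F.cross_entropy(logits, dog))        # small: the label agrees
    print(F.cross_entropy(logits, guacamole))  # large: the label is wrong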

SLIDE 29

MAXIMIZE the neural network loss on the given input
SUCH THAT the perturbation is less than a given threshold
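Written out as an optimization problem (the notation is mine, matching the slide's words):

    \max_{\delta} \; \mathrm{loss}(x + \delta,\, y)
    \quad \text{such that} \quad \|\delta\| \le \epsilon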

SLIDE 30

What do we need to know? Everything.
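With white-box access (architecture, weights, and therefore gradients), this maximization can be carried out directly by gradient ascent on the input. A minimal sketch in PyTorch of the one-step fast gradient sign method from Goodfellow et al. (cited on slide 18); model, epsilon, and the valid pixel range [0, 1] are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon):
        # Differentiate the loss with respect to the INPUT, not the weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded in L-infinity.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

Stronger attacks iterate this same step rather than taking it once, but even a single step is often enough to change the predicted label.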

SLIDE 31
SLIDE 32
SLIDE 33
SLIDE 34

WHY does this work?

SLIDE 35

[Figure: decision regions labeled "Truck" and "Dog"]

SLIDE 36

[Figure: decision regions labeled "Dog", "Truck", and "Airplane"]

SLIDE 37
SLIDE 38


SLIDE 39

Okay, lesson learned.

SLIDE 40

Okay, lesson learned. Don't classify dogs with neural networks.

SLIDE 41

99.99% it is a School Bus

SLIDE 42

Okay, lesson learned.

SLIDE 43

Okay, lesson learned. Don't classify images with neural networks.

SLIDE 44

And now for something completely different

SLIDE 45

Mozilla's DeepSpeech

SLIDE 46

Mozilla's DeepSpeech transcribes this as "most of them were staring quietly at the big table"

SLIDE 47

Mozilla's DeepSpeech transcribes this as "most of them were staring quietly at the big table"

SLIDE 48

What about this?

SLIDE 49

"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity"

SLIDE 50
SLIDE 51
SLIDE 52
SLIDE 53
SLIDE 54

Okay, lesson learned.

SLIDE 55

Okay, lesson learned. Don't classify images or audio with neural networks.

SLIDE 56
SLIDE 57

Okay, lesson learned.

SLIDE 58

Okay, lesson learned. Don't let adversaries perform gradient descent.

SLIDE 59
SLIDE 60
SLIDE 61
SLIDE 62
SLIDE 63
SLIDE 64
SLIDE 65
SLIDE 66
SLIDE 67

Okay, lesson learned.

SLIDE 68

Okay, lesson learned. Don't let adversaries have ANY access to my model

SLIDE 69
SLIDE 70

Okay, lesson learned.

SLIDE 71

Okay, lesson learned. Give up.

SLIDE 72
SLIDE 73
SLIDE 74
SLIDE 75

Yes, machine learning gives amazing results

SLIDE 76

Guacamole (99%)

However, there are also significant vulnerabilities

SLIDE 77

https://nicholas.carlini.com
nicholas@carlini.com

Questions?

SLIDE 78
SLIDE 79