

SLIDE 1

When foes are friends: adversarial examples as protective technologies

Carmela Troncoso (@carmelatroncoso), Security and Privacy Engineering Lab

SLIDE 2

The machine learning revolution

SLIDE 3

Machine Learning

Definition (Wikipedia)

Machine learning [...] gives "computers the ability to learn without being explicitly programmed" [and] [...] explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.

Data → Machine learning → Services

SLIDES 4–5

Machine Learning

Data → Machine learning → Services

Learning / Training: Data + Expected Output → Model
Inference: Model + New Data → Output
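
A minimal sketch of this two-phase pipeline, assuming scikit-learn and toy data (illustrative choices, not from the talk):

    # Minimal training/inference pipeline (illustrative; any ML library works).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Learning / Training: Data + Expected Output -> Model
    data = np.random.rand(100, 2)                      # toy feature vectors
    expected_output = (data[:, 0] > 0.5).astype(int)   # toy labels
    model = LogisticRegression().fit(data, expected_output)

    # Inference: Model + New Data -> Output
    new_data = np.random.rand(5, 2)
    output = model.predict(new_data)
    print(output)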

SLIDE 6

Adversarial machine learning

  • Adversarial examples
  • Poisoning

SLIDE 7

Adversarial machine learning

Adversarial examples

Crafted test samples that get misclassified by an ML algorithm

https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html

Inference: Model + New Data → Output
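
For illustration, a hedged sketch of crafting such a sample with a single fast-gradient-sign (FGSM-style) step against a toy linear classifier; the slide does not name a specific attack:

    # Sketch: test-time evasion of a linear classifier with one FGSM-style step.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.random((200, 2))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    clf = LogisticRegression().fit(X, y)

    x = np.array([0.6, 0.6])       # classified as class 1
    w = clf.coef_[0]               # for a linear model, the input gradient
    eps = 0.3                      # large enough to cross the toy boundary
    x_adv = x - eps * np.sign(w)   # step against the predicted class
    print(clf.predict([x]), clf.predict([x_adv]))   # expected: [1] [0]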

SLIDE 8

Adversarial machine learning

Poisoning

Crafted training samples that change the decision boundaries of an ML algorithm

Learning / Training: Data + Expected Output → Model
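
A hedged sketch of one simple poisoning strategy, injecting wrongly labeled training points to drag a toy classifier's boundary (the talk does not prescribe a method):

    # Sketch: label-flip poisoning moves the decision boundary.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.random((200, 1))
    y = (X[:, 0] > 0.5).astype(int)
    clean = LogisticRegression().fit(X, y)

    # Inject crafted class-0 points just past the clean boundary.
    X_poison = np.vstack([X, np.full((200, 1), 0.55)])
    y_poison = np.concatenate([y, np.zeros(200, dtype=int)])
    poisoned = LogisticRegression().fit(X_poison, y_poison)

    probe = np.array([[0.55]])
    print(clean.predict(probe), poisoned.predict(probe))  # expected: [1] [0]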

SLIDE 9

Adversarial examples: crafted test samples that get misclassified by an ML algorithm
Poisoning: crafted training samples that change the decision boundaries of an ML algorithm

Adversarial machine learning is here to stay


SLIDE 12

Three ways of using adversarial examples as defensive technologies

  • Adversarial examples as security metrics
  • Adversarial examples to defend from adversarial machine learning uses

For Privacy / For Social Justice / For Security


SLIDES 14–16

Defending against adversarial examples?

Adversarial training

training on simulated adversarial examples
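
A schematic sketch of that idea on a toy linear model: each round simulates adversarial examples and retrains on them with their true labels (assumes the FGSM-style step from the earlier sketch; real adversarial training perturbs each batch during training):

    # Sketch: adversarial training = augment training data with simulated
    # adversarial examples, then retrain.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.random((300, 2))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    clf = LogisticRegression().fit(X, y)

    eps = 0.1
    for _ in range(5):
        w = clf.coef_[0]
        # Push every point toward the other class (FGSM-style simulation).
        signs = np.where(y == 1, -1.0, 1.0)[:, None]
        X_adv = X + eps * signs * np.sign(w)
        # Retrain on clean + adversarial data, keeping the true labels.
        clf = LogisticRegression().fit(np.vstack([X, X_adv]),
                                       np.concatenate([y, y]))

    print(clf.score(X_adv, y))   # accuracy on the last batch of adversarial examples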

SLIDES 17–18

Does this solve security problems?

[Figure: images vs. random noise]

SLIDES 19–22

Does this solve security problems?

Malware, Twitter bots, Spam, ???, ???, ???

In security problems, examples belong to a DISCRETE and CONSTRAINED domain: FEASIBILITY and COST. How can we become the enemy?

SLIDES 23–29

Our approach: search as a graph

An example (Number of followers: x, Age of account: y) is fed to the ML classifier. Starting from the original example (Number of followers: few, Age of account: new), feasible transformations span a graph of candidate examples:

  • Number of followers: few, Age of account: 1 year
  • Number of followers: few, Age of account: 2 years
  • Number of followers: few, Age of account: 3 years
  • Number of followers: some, Age of account: 1 year
  • Number of followers: many, Age of account: 1 year
  • Number of tweets: few, Age of account: 2 years

Each transformation has a cost ($2, $4, $7, $2, $8, …), and the cost of a candidate is the cost of its path from the original example, e.g. cost = $2 + $2 = $4 or cost = $2 + $8 = $10.
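
A hedged sketch of this search: nodes are account states, edges are feasible transformations with (invented) dollar costs, and uniform-cost search (Dijkstra) returns a cheapest state that the classifier accepts:

    # Sketch: minimal-cost adversarial example via uniform-cost search.
    import heapq

    def transformations(state):
        """Yield (neighbor, cost) pairs for feasible account edits."""
        followers, age = state
        if age < 3:
            yield (followers, age + 1), 2      # age the account one more year
        if followers == "few":
            yield ("some", age), 4             # gain some followers
        if followers == "some":
            yield ("many", age), 7

    def min_cost_adversarial(start, classify):
        """Uniform-cost search for the cheapest evading example."""
        frontier, seen = [(0, start, [start])], set()
        while frontier:
            cost, state, path = heapq.heappop(frontier)
            if state in seen:
                continue
            seen.add(state)
            if classify(state) == "benign":    # evaded the detector
                return cost, path
            for nxt, c in transformations(state):
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
        return None

    # Toy detector: flags accounts that are new and have few followers.
    classify = lambda s: "bot" if (s[0] == "few" and s[1] < 2) else "benign"
    print(min_cost_adversarial(("few", 0), classify))
    # expected: (4, [('few', 0), ('few', 1), ('few', 2)])  i.e. $2 + $2 = $4

Because edge costs are non-negative, the first accepted state popped from the queue is guaranteed to be minimal-cost.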

SLIDES 30–31

The graph approach comes with more advantages

Enables the use of graph theory to EFFICIENTLY find adversarial examples (A*, beam search, hill climbing, etc.)

CAPTURES most attacks in the literature (a base for comparison)

Finds MINIMAL COST adversarial examples if:

  • the discrete domain is a subset of ℝ^m (for example, categorical one-hot encoded features: [0 1 0 0])
  • the cost of each single transformation is an Lp distance (for example, L∞([0 1 0 0], [1 0 0 0]) = 1)
  • pointwise robustness of the target classifier can be computed over ℝ^m
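
A tiny sanity check of the one-hot L∞ example above (NumPy, illustrative):

    # Two one-hot encodings differ in two positions; their L-infinity distance is 1.
    import numpy as np
    a = np.array([0, 1, 0, 0])
    b = np.array([1, 0, 0, 0])
    print(np.max(np.abs(a - b)))   # 1
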
SLIDES 32–34

A* search with a heuristic

Heuristic: pointwise robustness over ℝ^m
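
A hedged sketch of A* with such a heuristic on a toy linear detector (model, edits, and costs are invented): for a linear model, pointwise robustness, i.e. the distance to the decision boundary, has a closed form, and scaling it by a cheap-enough per-unit cost keeps the heuristic an underestimate:

    # Sketch: A* where h(x) lower-bounds the remaining cost via the distance
    # from x to the (linear) decision boundary.
    import heapq
    import numpy as np

    w, b = np.array([1.0, 1.0]), -1.0   # toy linear detector: flag if w.x + b > 0
    CHEAPEST = 2.0                      # scaled to stay an underestimate here

    def heuristic(x):
        margin = (w @ x + b) / np.linalg.norm(w)   # pointwise robustness over R^m
        return max(margin, 0.0) * CHEAPEST

    def neighbors(x):
        yield x - np.array([0.5, 0.0]), 2.0        # two feasible discrete edits
        yield x - np.array([0.0, 0.5]), 3.0

    def a_star(x0):
        frontier, seen = [(heuristic(x0), 0.0, tuple(x0))], set()
        while frontier:
            _, g, xt = heapq.heappop(frontier)
            if xt in seen:
                continue
            seen.add(xt)
            x = np.array(xt)
            if w @ x + b <= 0:                     # crossed the boundary: evaded
                return g, x
            for nx, c in neighbors(x):
                heapq.heappush(frontier, (g + c + heuristic(nx), g + c, tuple(nx)))

    print(a_star(np.array([1.0, 1.0])))            # expected: (4.0, array([0., 1.]))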

SLIDES 35–37

Minimal adversarial example found!

Confidence of the example

SLIDES 38–40

Is this really efficient?

ℓ = d → just flip the decision
ℓ > d → high-confidence example

SLIDE 41

Why is this relevant?

MINIMAL COST adversarial examples can become security metrics!

  • Cost can be associated with RISK
  • We cannot stop attacks, but can we ensure they are expensive?
  • In constrained (security) domains, continuous-domain approaches can be very conservative!

SLIDE 42

Applicability

Non-linear models:

  • approximate heuristics
  • transferability (minimal "just flip" examples transfer worse than high-confidence ones)

Faster search:

  • trade off guarantees against search speed (e.g., hill climbing)

Other cost functions

SLIDE 43

Three ways of using adversarial examples as defensive technologies

  • Adversarial examples as security metrics
  • Adversarial examples to defend from adversarial machine learning uses

For Privacy / For Social Justice / For Security

SLIDE 44

Adversarial examples are only adversarial when you are the algorithm!

SLIDES 45–47

Machine learning as a privacy adversary

The privacy-oriented ML literature considers: Data → Service, where the goal is to avoid that the service learns about the data. Users actively (maybe not willingly) provide their data, so solutions like differential privacy and encryption are suitable.

With inferences there is no active sharing, so we cannot count on those solutions!

SLIDES 48–50

Adversarial examples as privacy defenses

Data → Inferences

Goal: modify the data to avoid inferences. Social network data, browsing patterns, traffic patterns, location, …

Privacy solutions are also CONSTRAINED in FEASIBILITY and COST!

SLIDES 51–53

Protecting from traffic analysis

Encrypted traffic trace → ML → Apps, webs, words, …

Candidate transformations: add 1 packet in position 1, add 2 packets in position 1, add 3 packets in position 1, add 1 packet in position 2, add 2 packets in position 2, …, add 1 packet in position N, …

FEASIBLE perturbations:

  • Add packets
  • Delay packets

COST is essential: packets on the network cost money!
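
A hedged sketch of how such neighbor traces can be enumerated (the trace representation and costs are illustrative):

    # Sketch: expand an encrypted trace by adding dummy packets; the cost of a
    # transformation is the number of packets added (bandwidth costs money).
    def neighbors(trace, max_new=3):
        """trace: packet counts per position; yields (new_trace, cost)."""
        for pos in range(len(trace)):
            for k in range(1, max_new + 1):
                new = list(trace)
                new[pos] += k              # add k dummy packets at this position
                yield new, k

    trace = [3, 1, 4, 1]
    for candidate, cost in list(neighbors(trace))[:4]:
        print(candidate, "cost:", cost)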

SLIDE 54

Website fingerprinting

Machine learning is used to learn what websites' traffic patterns look like.

We consider the monitored / unmonitored problem, and use the CUMUL attack (a non-linear model with no pointwise-robustness metric).

SLIDE 55

Comparison with existing defenses

SLIDE 56

Adversarial examples for privacy

Provide privacy in domains where the ML model is the adversary. Privacy is also CONSTRAINED, so the graph approach can be used to:

  • EFFICIENTLY find FEASIBLE adversarial examples
  • find MINIMAL COST adversarial examples
  • provide a BASELINE to compare defenses' efficiency

SLIDE 57

Takeaways

  • Adversarial machine learning is hard to defend against

… but this opens great opportunities for security and privacy applications

Adversarial machine learning as protective technologies

  • New graph-based framework to approach the search for adversarial examples

… we can use graph theory to improve efficiency and provide guarantees

SLIDE 58

PETs: https://github.com/spring-epfl/
Security evaluation: https://arxiv.org/abs/1810.10939

https://spring.epfl.ch/en
http://carmelatroncoso.com/