When foes are friends: adversarial examples as protective technologies
Carmela Troncoso @carmelatroncoso
Security and Privacy Engineering Lab

The machine learning revolution

Machine Learning

Definition (Wikipedia)
Machine learning [...] gives "computers the ability to learn without being explicitly programmed" [and] [...] explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.
Machine Learning

Data → Machine learning → Services

Learning / Training: Data + Expected Output → Model
Model + New Data → Output
Adversarial machine learning

- Adversarial examples: crafted testing samples that get misclassified by an ML algorithm (Model + New Data → Output)
- Poisoning: crafted training samples that change the decision boundaries of an ML algorithm (Learning Data + Expected Output → Model)

https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html
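As a toy illustration of an adversarial (evasion) example, the sketch below perturbs an input against a hand-written linear classifier using the sign of the weights, in the spirit of gradient-sign attacks. The classifier, weights, and step size are all invented for illustration, not taken from the talk.

```python
import numpy as np

# Toy linear classifier (invented for illustration): class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, -0.3, 0.2])     # clean sample: score 1.3 -> class 1

# Gradient-sign-style perturbation: step every feature by epsilon
# against the direction that supports the current class.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)   # new score -1.5 -> class 0

print(predict(x), predict(x_adv))  # 1 0
```

A small per-feature change is enough to flip the decision, which is exactly what makes such examples "adversarial" for the model.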
Adversarial machine learning is here to stay
Three ways of using adversarial examples as defensive technologies

- Adversarial examples as security metrics
- Adversarial examples to defend from adversarial machine learning uses

For Privacy / For Social Justice / For Security
Defending against adversarial examples?

Adversarial training: training on simulated adversarial examples
Does this solve security problems?

- Images: perturbations look like random noise
- Malware, Twitter bots, spam: ???

In security problems, examples belong to a DISCRETE and CONSTRAINED domain: FEASIBILITY and COST matter. How can we become the enemy?
Our approach: search as a graph

An account is a vector of features (number of followers: x, age of account: y) fed to the ML classifier. Feasible transformations, e.g. "few followers, new account" → "few followers, 1 year old", or "few followers, 1 year" → "some followers, 1 year", are the edges of a graph over feature vectors; changing other features (e.g., number of tweets) yields further edges. Each edge carries a cost (e.g., $2, $4, $7, $8), and the cost of an adversarial example is the sum along its path: one path may cost $2 + $2 = $4 while another costs $2 + $8 = $10.
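The search above can be sketched as uniform-cost (Dijkstra) search over account states. The detector rule, the feasible moves, and the dollar costs below are hypothetical stand-ins for the slide's example, chosen so that "wait two years" costs $2 + $2 = $4 and beats the $2 + $8 = $10 path that buys followers.

```python
import heapq

# Hypothetical bot detector (stand-in for the target ML model):
# flags accounts with few followers AND an account younger than 2 years.
def is_flagged(state):
    followers, age = state
    return followers == "few" and age < 2

# Feasible transformations and their (illustrative) dollar costs.
def neighbours(state):
    followers, age = state
    if age < 3:
        yield (followers, age + 1), 2   # wait a year: $2
    if followers == "few":
        yield ("some", age), 8          # buy some followers: $8

def min_cost_evasion(start):
    """Uniform-cost (Dijkstra) search for the cheapest reachable
    state that the detector no longer flags."""
    frontier = [(0, start)]
    seen = set()
    while frontier:
        cost, state = heapq.heappop(frontier)
        if not is_flagged(state):
            return cost, state
        if state in seen:
            continue
        seen.add(state)
        for nxt, step in neighbours(state):
            heapq.heappush(frontier, (cost + step, nxt))
    return None

print(min_cost_evasion(("few", 0)))   # -> (4, ('few', 2)): wait two years
```

Because the search expands states in order of accumulated cost, the first unflagged state popped is guaranteed to be a minimal-cost adversarial example.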
The graph approach comes with more advantages

Enables the use of graph theory to:
- EFFICIENTLY find adversarial examples (A*, beam search, hill climbing, etc.)
- CAPTURE most attacks in the literature (a comparison baseline)
- Find MINIMAL COST adversarial examples if:
  - the discrete domain is a subset of Rm (for example, categorical one-hot encoded features: [0 1 0 0])
  - the cost of each single transformation is an Lp distance (for example, L∞([0 1 0 0], [1 0 0 0]) = 1)
  - we can compute pointwise robustness for the target classifier over Rm

A* search with a heuristic: pointwise robustness over Rm
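A minimal sketch of the A* idea, assuming a linear target model so that pointwise robustness in L∞ has the closed form |w·x + b| / ‖w‖₁ and can serve as an admissible heuristic. The model, feature moves, and unit costs are invented for illustration.

```python
import heapq
import numpy as np

# Linear target classifier over Rm (invented): flags x when w.x + b > 0.
w = np.array([2.0, -1.0])
b = -3.0

def flagged(x):
    return w @ np.asarray(x, float) + b > 0

def h(x):
    # Pointwise robustness of a linear model in the L-infinity norm:
    # the L_inf distance from x to the hyperplane w.x + b = 0 is
    # |w.x + b| / ||w||_1. Admissible when each unit move costs 1.
    margin = w @ np.asarray(x, float) + b
    return max(margin, 0.0) / np.abs(w).sum()

def neighbours(x):
    # Feasible moves: change one integer feature by +/- 1, cost 1 each.
    for i in range(len(x)):
        for d in (-1, 1):
            y = list(x)
            y[i] += d
            yield tuple(y), 1

def a_star(start):
    frontier = [(h(start), 0, start)]   # (f = g + h, g, state)
    seen = set()
    while frontier:
        f, g, x = heapq.heappop(frontier)
        if not flagged(x):
            return g, x
        if x in seen:
            continue
        seen.add(x)
        for y, c in neighbours(x):
            heapq.heappush(frontier, (g + c + h(y), g + c, y))

print(a_star((3, 1)))   # -> (1, (2, 1)): one unit move crosses the boundary
```

With an admissible heuristic the first non-flagged state popped is still minimal cost, but far fewer states are expanded than with plain uniform-cost search.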
Minimal adversarial example found!

Confidence of the example
Is this really efficient?

- l = d: just flip the decision
- l > d: high-confidence example
Why is this relevant?

MINIMAL COST adversarial examples can become security metrics!
- Cost can be associated with RISK: we cannot stop attacks, but can we ensure they are expensive?
- Constrained domains are pervasive in security, and continuous-domain approaches can be very conservative!
Applicability

Non-linear models:
- approximate heuristics
- transferability (basic attacks transfer worse than high-confidence ones)

Faster search:
- trade off guarantees vs. search speed (e.g., hill climbing)

Other cost functions
Three ways of using adversarial examples as defensive technologies

- Adversarial examples as security metrics
- Adversarial examples to defend from adversarial machine learning uses

For Privacy
Adversarial examples are only adversarial when you are the algorithm!
Machine learning as a privacy adversary

The privacy-oriented ML literature aims to avoid that the service learns about the data.
- When users actively (maybe not willingly) provide data, solutions like differential privacy and encryption are suitable.
- When there is no active sharing, we cannot count on such solutions.
Adversarial examples as privacy defenses

Data → Inferences

Goal: modify the data to avoid inferences (social network data, browsing patterns, traffic patterns, location, …)
Privacy solutions are also CONSTRAINED in FEASIBILITY and COST!
Protecting from traffic analysis

Encrypted traffic trace → ML → apps, websites, words, …

Candidate transformations: add 1 packet in position 1, add 2 packets in position 1, add 1 packet in position 2, add 3 packets in position 1, add 2 packets in position 2, …, add 1 packet in position N, …

FEASIBLE perturbations:
- Add packets
- Delay packets

COST is essential: packets on the network cost money!
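A rough sketch of that search for traffic traces, assuming a toy volume-threshold "fingerprinting" model (real attacks such as CUMUL use much richer features). The only feasible move is adding dummy packets, and the defense below greedily takes the cheapest move until the model is fooled, trading the optimality guarantee of A* for speed, as with the hill-climbing variant mentioned earlier.

```python
# Hypothetical fingerprinting model: flags traces whose total volume
# is below a threshold. A stand-in for a real website-fingerprinting
# classifier, used only to make the search loop concrete.
def looks_like_target_site(trace):
    return sum(trace) < 20

def neighbours(trace):
    # Feasible perturbation: add one dummy packet (size 1) at each
    # position, cost 1 per added packet (packets cost money!).
    for i in range(len(trace)):
        padded = list(trace)
        padded[i] += 1
        yield tuple(padded), 1

def greedy_defense(trace):
    """Greedily add dummy packets until the model is fooled."""
    trace, cost = tuple(trace), 0
    while looks_like_target_site(trace):
        trace, step = next(neighbours(trace))  # any cheapest neighbour
        cost += step
    return cost, trace

print(greedy_defense((5, 3, 4)))   # pads until total volume reaches 20
```

The returned cost (number of added packets) is exactly the bandwidth overhead of the defense, which is what makes minimal-cost search a natural fit for comparing defenses.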
Website fingerprinting

Machine learning is used to learn what websites' traffic patterns look like.
We consider the monitored / unmonitored problem, and use the CUMUL attack: a non-linear model with no robustness metric available.

Comparison with existing defenses
Adversarial examples for privacy

- Provide privacy in domains where the ML model is adversarial
- Privacy is also CONSTRAINED, so the graph approach can be used to:
  - EFFICIENTLY find FEASIBLE adversarial examples
  - find MINIMAL COST adversarial examples
  - provide a BASELINE to compare defenses' efficiency
Takeaways

- Adversarial machine learning is hard to defend against
  … but this opens great opportunities for security and privacy applications: adversarial machine learning as protective technologies
- New graph framework to approach the search for adversarial examples
  … we can use graph theory to improve efficiency and provide guarantees
https://github.com/spring-epfl/ https://arxiv.org/abs/1810.10939
PETs
Security evaluation
https://spring.epfl.ch/en http://carmelatroncoso.com/