Adversarial Robustness of Machine Learning Models for Graphs - Prof. Dr. Stephan Günnemann - PowerPoint PPT Presentation



slide-1
SLIDE 1
  • S. Günnemann

Adversarial Robustness of Machine Learning Models for Graphs


  • Prof. Dr. Stephan Günnemann

Department of Informatics, Technical University of Munich

28.10.2019

slide-2
SLIDE 2
Adversarial Robustness of Machine Learning Models for Graphs

Can you trust the predictions of graph-based ML models?

slide-3
SLIDE 3

Graphs are Everywhere

  • Computational social sciences
  • Meshes
  • Computational chemistry, proteomics, biology
  • Reasoning systems
  • Scene graphs


slide-4
SLIDE 4

Machine Learning for Graphs

§ Graph neural networks have become extremely popular
§ Example: GNNs for semi-supervised node classification

[Figure: partially labeled, attributed graph fed into a GNN; unlabeled nodes shown as "?"]

Message passing: $h_v^{(\ell)} = \sigma\Big( \sum_{u \in \mathcal{N}(v) \cup \{v\}} \hat{a}_{vu} \, h_u^{(\ell-1)} W^{(\ell)} \Big)$
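The message-passing layer can be sketched in a few lines of NumPy. This is a minimal illustrative version of a GCN layer with symmetric normalization and self-loops (Kipf & Welling style), not the exact model from the talk; the example graph and weights are made up.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN message-passing layer: H' = ReLU(A_hat @ H @ W).

    A: (n, n) adjacency matrix, H: (n, d) node features,
    W: (d, d_out) weights. A_hat is the symmetrically normalized
    adjacency with self-loops added.
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                      # add self-loops
    deg = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)        # ReLU nonlinearity

# Tiny example: 3-node path graph, 2 input features, 2 output dims
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
```

Stacking two such layers and applying a softmax on top yields the standard GCN node classifier discussed on the following slides.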

slide-5
SLIDE 5

Graphs & Robustness

Are machine learning models for graphs robust with respect to (adversarial) perturbations?

§ Reliable/safe use of ML models requires correctness even in the worst case
  – adversarial perturbations = worst-case corruptions
§ Adversaries are common in many application scenarios where graphs are used (e.g. recommender systems, social networks, knowledge graphs)

slide-6
SLIDE 6

Adversarial Attacks in the Image Domain

§ State-of-the-art (deep) learning methods are not robust against small deliberate perturbations

[Figure: training data → training → model; clean image classified correctly with 99% confidence]

slide-7
SLIDE 7

Adversarial Attacks in the Image Domain

§ State-of-the-art (deep) learning methods are not robust against small deliberate perturbations

[Figure: the same pipeline with a small perturbation added to the input; the perturbed image changes the model's prediction (92%)]

slide-8
SLIDE 8

The relational nature of the data might…

§ Cause cascading failures: perturbations in one part of the graph can propagate to the rest
§ Improve robustness: predictions are computed jointly rather than in isolation

[Figure: partially labeled graph processed via message passing]

slide-9
SLIDE 9

Remaining Roadmap

✓ 1. Introduction & Motivation
  • 2. Are ML models for graphs robust?
  • 3. Can we give guarantees, i.e. certificates?
  • 4. Conclusion

slide-10
SLIDE 10

Semi-Supervised Node Classification

[Figure: partially labeled, attributed graph processed by a message-passing model; unlabeled nodes shown as "?"]

Can we change the predictions by slightly perturbing the data?

slide-11
SLIDE 11

Unique Aspects of the Graph Domain

Target node $t \in \mathcal{V}$: node whose classification label we want to change
Attacker nodes $\mathcal{A} \subset \mathcal{V}$: nodes the attacker can modify

Direct attack ($\mathcal{A} = \{t\}$)
§ Modify the target's features
§ Add connections to the target
§ Remove connections from the target
Examples: change website content, buy likes/followers, unfollow untrusted users

Indirect attack ($t \notin \mathcal{A}$)
§ Modify the attackers' features
§ Add connections to the attackers
§ Remove connections from the attackers
Examples: hijack friends, create a link/spam farm

slide-12
SLIDE 12

Single Node Attack for a GCN

$$\min_{A', X'} \;\; \min_{c \neq c_{old}} \; \log Z^*_{v, c_{old}} - \log Z^*_{v, c}$$

where $Z^* = f_\theta(A', X') = \mathrm{softmax}\big( \hat{A}' \, \mathrm{ReLU}( \hat{A}' X' W^{(1)} ) \, W^{(2)} \big)$

$A' \in \{0,1\}^{N \times N}$: modified adjacency matrix
$X' \in \{0,1\}^{N \times D}$: modified node attributes
$v$: target node

§ Classification margin: > 0: no change in classification; < 0: change in classification
§ Core idea: linearization → efficient greedy approach

Zügner, Akbarnejad, Günnemann. Adversarial Attacks on Neural Networks for Graph Data. KDD 2018
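The greedy idea can be sketched as a loop that repeatedly commits the single perturbation that most decreases the target's classification margin. This is a schematic, unoptimized sketch (structure perturbations only, exhaustive candidate scoring); the `surrogate` model is a hypothetical stand-in for the paper's linearized GCN, not the actual KDD 2018 algorithm.

```python
import itertools
import numpy as np

def margin(logits, target, true_class):
    """Margin of the target node: score of the true class
    minus the best competing class (negative = misclassified)."""
    row = logits[target]
    others = np.delete(row, true_class)
    return row[true_class] - others.max()

def greedy_structure_attack(A, X, target, true_class, surrogate, budget):
    """Schematic greedy attack: per step, flip the one (symmetric) edge
    that minimizes the target's margin under `surrogate(A, X) -> logits`."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_score, best_edge = None, None
        for u, v in itertools.combinations(range(n), 2):
            A[u, v] = A[v, u] = 1 - A[u, v]          # tentatively flip
            score = margin(surrogate(A, X), target, true_class)
            A[u, v] = A[v, u] = 1 - A[u, v]          # undo
            if best_score is None or score < best_score:
                best_score, best_edge = score, (u, v)
        u, v = best_edge
        A[u, v] = A[v, u] = 1 - A[u, v]              # commit best flip
    return A
```

The linearization in the paper avoids the exhaustive rescoring above by making each flip's effect on the margin cheap to evaluate in closed form.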

slide-13
SLIDE 13

Results: Cora Data

[Figure: classification margins of target nodes (−1.0 to 1.0) under Clean, Random, Gradient, Ours-Direct, and Ours-Indirect attacks. Left: poisoning attack on GCN (% correct per method: 90.3 / 60.8 / 2.7 / 1.0 / 67.2). Right: poisoning attack on DeepWalk (% correct per method: 83.8 / 46.2 / 9.8 / 2.1 / 59.2). Margin > 0: correct classification; margin < 0: wrong classification.]

Graph learning models are not robust to adversarial perturbations.


slide-15
SLIDE 15

Results: Analysis

Given a target node $t$, what are the properties of the nodes an attack "connects to"/"disconnects from"?

[Figure: histogram of the fraction of nodes per property, comparing attack-chosen nodes with the overall graph]

slide-16
SLIDE 16

Results: Attacking Multiple Nodes Jointly

Aim: damage the overall performance on the test set
Core idea: meta-learning
  • Treat the graph as a hyperparameter to optimize
  • Backpropagate through the learning phase

[Figure: accuracy (%) on the Citeseer test set for GCN, CLN, and logistic regression, clean graph vs. poisoned graph]

Using a perturbed graph is worse than using attributes alone!

Zügner, Günnemann. Adversarial Attacks on Graph Neural Networks via Meta Learning. ICLR 2019
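The "graph as a hyperparameter" idea can be illustrated with a finite-difference stand-in for the meta-gradient: score each candidate edge flip by how much retraining on the flipped graph changes the test loss. This is an assumption-laden sketch, not the paper's approach of backpropagating through the training procedure, and `train_and_eval` is a hypothetical helper.

```python
import numpy as np

def meta_gradient_fd(A, X, y_train, y_test, train_and_eval):
    """Finite-difference stand-in for the meta-gradient.

    `train_and_eval(A, X, y_train, y_test)` is assumed to train a model
    on (A, X, y_train) and return the resulting test loss. Each entry
    scores[u, v] is the change in test loss caused by flipping edge
    (u, v); a large positive entry marks a flip that hurts the model most.
    The paper gets this signal in one backward pass instead of retraining.
    """
    base = train_and_eval(A, X, y_train, y_test)
    n = A.shape[0]
    scores = np.zeros((n, n))
    for u in range(n):
        for v in range(u + 1, n):
            A2 = A.copy()
            A2[u, v] = A2[v, u] = 1 - A2[u, v]       # flip candidate edge
            scores[u, v] = train_and_eval(A2, X, y_train, y_test) - base
    return scores
```

Greedily flipping the highest-scoring edges under a budget then poisons the graph before training ever happens.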

slide-17
SLIDE 17

Intermediate Summary

§ Graph neural networks are highly vulnerable to adversarial perturbations
  – Targeted as well as global attacks
  – Performance on the perturbed graph might even be lower than using attributes alone (no structure)
  – Attacks are successful even under restrictive attack scenarios, e.g. no access to the target node or limited knowledge about the graph
§ Non-robustness holds for graph embeddings (in $\mathbb{R}^d$) as well
  – see e.g. Bojchevski, Günnemann. ICML 2019

slide-18
SLIDE 18

Remaining Roadmap

✓ 1. Introduction & Motivation
✓ 2. Are ML models for graphs robust? No!
  • 3. Can we give guarantees, i.e. certificates?
  • 4. Conclusion

Robustness certificate: a mathematical guarantee that the predicted class of an instance does not change under any admissible perturbation

slide-19
SLIDE 19

Classification margin

[Figure: graph with a target node fed into a graph neural network, producing class predictions over Class 1 / Class 2 / Class 3]

Classification margin:
$$m = \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$$
> 0: correct classification; < 0: incorrect classification

Bojchevski, Günnemann. Certifiable Robustness to Graph Perturbations. NeurIPS 2019
Zügner, Günnemann. Certifiable Robustness and Robust Training for Graph Convolutional Networks. KDD 2019
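The classification margin $m = \min_{c \neq c^*} \log p(c^*) - \log p(c)$ is straightforward to compute from a model's log-probabilities; a minimal sketch (the example numbers are made up):

```python
import numpy as np

def classification_margin(log_probs, c_star):
    """m = min_{c != c*} [log p(c*) - log p(c)]
         = log p(c*) - max_{c != c*} log p(c).
    Positive iff the model predicts class c_star."""
    others = np.delete(log_probs, c_star)
    return log_probs[c_star] - others.max()

# Example: three-class prediction where class 0 dominates
m = classification_margin(np.log([0.7, 0.2, 0.1]), c_star=0)
```

The certification question on the next slides is then: can any admissible perturbation push this quantity below zero?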

slide-20
SLIDE 20

Classification margin

[Figure: as before, but with one edge perturbed; the margin becomes negative after the perturbation]

Classification margin:
$$m = \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$$
> 0: correct classification; < 0: incorrect classification

Worst-case margin:
$$m^* = \min_{\text{perturbations}} \;\; \min_{c \neq c^*} \; \log p(c^*) - \log p(c)$$

slide-21
SLIDE 21

Core Idea: Robustness Certification

[Figure: output space spanned by $\log p(c^*)$ and $\log p(c)$, showing the set of predictions reachable via perturbations relative to the decision boundary; positive margin = robust, negative margin = not robust]

§ Worst-case margin: exact value is intractable/unknown
§ Convex relaxation yields a tractable lower bound on the worst-case margin
§ Lower bound positive → robustness certificate
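For intuition, the worst-case margin can be bounded (in fact computed exactly) for a plain linear classifier when the attacker may flip at most $q$ binary features. This toy, with all names hypothetical, stands in for the convex relaxations the papers develop for full GNNs:

```python
import numpy as np

def margin_lower_bound(x, w_true, w_other, b_true, b_other, q):
    """Worst-case margin of a linear classifier f_c(x) = w_c @ x + b_c
    under at most q binary feature flips of x.

    For linear models the worst case is reached by flipping the q
    features that shrink the margin most, so the "bound" is exact;
    for GNNs a convex relaxation is needed to get a tractable bound.
    """
    delta = w_true - w_other                      # per-feature margin weight
    clean = delta @ x + (b_true - b_other)        # unperturbed margin
    # Flipping feature i: x_i = 1 -> margin loses delta_i,
    #                     x_i = 0 -> margin gains delta_i.
    effect = np.where(x == 1, -delta, delta)
    worst = np.sort(effect)[:q]                   # q most harmful flips
    return clean + worst[worst < 0].sum()         # apply only harmful ones
```

If the returned value is positive, no admissible perturbation can change this node's prediction, i.e. the node is certified.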

slide-22
SLIDE 22

Robustness Certification: Citeseer

[Figure: % of nodes certified vs. number of allowed perturbations]

< 25% of nodes robust, > 50% certifiably non-robust for 10 perturbations.

slide-23
SLIDE 23

Robustness Certification: Citeseer

[Figure: the same plot after robust training]

< 25% of nodes robust, > 50% certifiably non-robust for 10 perturbations. With robust training: 85% robust!

slide-24
SLIDE 24

Results: Robust Training

[Figure: % of robust nodes for $q = 12$ allowed perturbations on Citeseer, Cora-ML, and PubMed, comparing the robust hinge loss, the baseline loss, and cross entropy; > 4x improvement]

Baseline loss adapted from [Wong and Kolter 2018]
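The robust training objective can be sketched as a hinge penalty on certified worst-case margins: nodes whose certified margin falls below a threshold contribute to the loss, so training pushes the entire perturbation-reachable set across the decision boundary. A simplified scalar version, assuming the per-node margin lower bounds are already computed (not the exact loss from the KDD 2019 paper):

```python
import numpy as np

def robust_hinge_loss(worst_case_margins, gamma=1.0):
    """Hinge loss over per-node worst-case margin lower bounds:
    zero for nodes certified with margin >= gamma, linear below."""
    m = np.asarray(worst_case_margins, dtype=float)
    return np.maximum(0.0, gamma - m).mean()

# Example: one comfortably certified node, one borderline, one non-robust
loss = robust_hinge_loss([2.0, 0.5, -1.0], gamma=1.0)
```

In practice this term is combined with the standard cross-entropy loss, and the margin bounds are recomputed as the network's weights change.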

slide-25
SLIDE 25

Results: No Cost in Accuracy

[Figure: clean accuracy on Citeseer, Cora-ML, and PubMed for the robust hinge loss, the baseline loss, and cross entropy; robust training does not reduce accuracy]

Baseline loss adapted from [Wong and Kolter 2018]

slide-26
SLIDE 26

Remaining Roadmap

✓ 1. Introduction & Motivation
✓ 2. Are ML models for graphs robust? No!
✓ 3. Can we give guarantees, i.e. certificates? Yes!
  • 4. Conclusion

[Figure: % of certifiably robust vs. certifiably non-robust nodes over the number of allowed perturbations]

slide-27
SLIDE 27

Conclusion

§ Graph learning models are not robust
  – Supervised & unsupervised methods; attacks generalize to many models; only limited knowledge required
§ Crucial for a reliable use of these models:
  – Certificates & robustification principles
§ Many open questions
  – E.g. an exact understanding of what makes a perturbation harmful (underlying "patterns")
  – Core challenges in general: discreteness of the graph structure, $O(n^2)$ potential edges, dependencies/non-i.i.d. data, variety of models, heterogeneous data, …

[Figure: % of certifiably robust vs. certifiably non-robust nodes over the number of allowed perturbations]

Thank you!