

  1. Adversarial Attacks on Node Embeddings via Graph Poisoning
Aleksandar Bojchevski, Stephan Günnemann
Technical University of Munich, ICML 2019

  2. Node embeddings are used to
• Classify scientific papers
• Recommend items
• Classify proteins
• Detect fraud
• Predict disease-gene associations
• Spam filtering
• …
[Bar chart: number of papers per year, y-axis "num. papers", ticks 0 to 200]

  3. Background: Node embeddings
Every node v ∈ 𝒱 is mapped to a low-dimensional vector z_v ∈ ℝ^d such that the graph structure is captured. The embeddings feed downstream tasks: node classification, link prediction, and others.
Similar nodes are close to each other in the embedding space.
[Diagram: a graph mapped into ℝ², with similar nodes embedded close together]
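To ground the notation, here is a minimal sketch (all names hypothetical) of what an embedding table looks like in code: each node indexes a d-dimensional row, and "similar nodes are close" can be measured with, e.g., cosine similarity.

    import numpy as np

    # Hypothetical embedding matrix Z: one d-dimensional row per node.
    # In practice Z is learned; random values here just fix the shapes.
    num_nodes, d = 5, 2
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(num_nodes, d))

    def cosine_similarity(u, v):
        """Similar nodes should score close to 1."""
        return float(Z[u] @ Z[v] / (np.linalg.norm(Z[u]) * np.linalg.norm(Z[v])))

    print(cosine_similarity(0, 1))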

  4. Background: Random walk based embeddings
Let nodes = words and random walks = sentences, then train a language model, e.g. Word2Vec.
[Pipeline: graph → sample random walks → train embeddings]
Nodes that co-occur in the random walks have similar embeddings.
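A minimal DeepWalk-style sketch of this pipeline, assuming networkx and gensim (>= 4.x) are available; the walk counts and dimensions are illustrative, not the paper's settings.

    import random
    import networkx as nx
    from gensim.models import Word2Vec  # assumes gensim >= 4.x

    def sample_walks(G, num_walks=10, walk_length=40, seed=0):
        """Each random walk becomes a 'sentence' of node-ID tokens."""
        rng = random.Random(seed)
        walks = []
        for _ in range(num_walks):
            for start in G.nodes():
                walk = [start]
                while len(walk) < walk_length:
                    neighbors = list(G.neighbors(walk[-1]))
                    if not neighbors:
                        break
                    walk.append(rng.choice(neighbors))
                walks.append([str(v) for v in walk])
        return walks

    G = nx.karate_club_graph()
    # Skip-gram (sg=1) over the walks gives DeepWalk-style embeddings.
    model = Word2Vec(sample_walks(G), vector_size=32, window=5, min_count=0, sg=1)
    z = model.wv[str(0)]  # embedding of node 0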

  5. Are node embeddings robust to adversarial attacks?
In domains where graph embeddings are used (e.g. the Web), adversaries are common and false data is easy to inject.

  6. Adversarial attacks in the graph domain
Adversarial flips add and/or remove edges: clean graph + adversarial flips = poisoned graph.
[Diagram: a clean graph plus a small set of edge flips yields the poisoned graph]
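In adjacency-matrix terms, a flip just toggles one symmetric entry; a tiny helper (hypothetical, reused in the later sketches):

    import numpy as np

    def flip_edges(A, flips):
        """Toggle each candidate edge (i, j) in a symmetric 0/1
        adjacency matrix: add it if absent, remove it if present."""
        A = A.copy()
        for i, j in flips:
            A[i, j] = A[j, i] = 1 - A[i, j]
        return A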

  7. Poisoning: train after the attack
Clean setting: clean graph → train → clean embedding → eval: node classification ✓, link prediction ✓, other tasks ✓.
Poisoned setting: poisoned graph → train → poisoned embedding → eval: node classification ✗, link prediction ✗, other tasks ✗.

  8. Poisoning attack formally
G_pois = argmax over {G ∈ all graphs : |G_clean − G| ≤ budget} of ℒ(G, Z*(G)),
where G_pois is the graph after perturbing some edges, and
Z*(G) = argmin_Z ℒ(G, Z)
is the optimal embedding trained on the graph G being optimized.
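Read literally, the bi-level problem asks us to retrain the embedding for every allowed perturbation. The brute-force sketch below (placeholders train_embedding and loss stand in for the victim model; flip_edges is the helper above) only makes the objective concrete; it is intractable, which is exactly what motivates the next slides.

    from itertools import combinations

    def poisoning_attack_brute_force(A_clean, candidate_flips, budget,
                                     train_embedding, loss):
        """Enumerate all budget-sized flip sets, retrain Z*(G) for each
        (inner argmin), and keep the graph that maximizes the trained
        model's loss (outer argmax)."""
        best_A, best_loss = A_clean, float("-inf")
        for flips in combinations(candidate_flips, budget):
            A = flip_edges(A_clean, flips)
            Z = train_embedding(A)          # inner problem: Z*(G)
            value = loss(A, Z)
            if value > best_loss:
                best_A, best_loss = A, value
        return best_A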

  9. Poisoning attack for random walk models
G_pois = argmax over {G ∈ all graphs : |G_clean − G| ≤ budget} of ℒ(G, Z*(G)), with
Z*(G) = argmin_Z ℒ({r_1, r_2, …}, Z),   r_i = rnd_walk(G),
i.e. the optimal embedding is now trained on random walks sampled from the graph G being optimized.

  10. G_pois = argmax over {G ∈ all graphs : |G_clean − G| ≤ budget} of min_Z ℒ({r_1, r_2, …}, Z)
Challenges:
• Bi-level optimization problem.
• Combinatorial search space.
• Inner optimization includes non-differentiable sampling.

  11. Overview
[Schematic: (1a) DeepWalk as matrix factorization, (1b) optimal ℒ via the spectrum, (2) approximate the poisoned spectrum]
1. Reduce the bi-level problem to a single-level problem:
a) DeepWalk as matrix factorization.
b) Express the optimal ℒ via the graph spectrum.
2. Approximate the poisoned graph's spectrum.

  12. 1. Reduce the bi-level problem to a single level
a) DeepWalk corresponds to factorizing the PPMI matrix:
S = Σ_{r=1}^{T} P^r D^{-1}   (P: transition matrix, D: degree matrix)
M_ij = log max{c · S_ij, 1}   (c: a scaling constant)
Get the embeddings Z via SVD of M.
Rewrite S in terms of the generalized spectrum of A:
S = U (Σ_{r=1}^{T} Λ^r) U^T,   where A u = λ D u   (generalized eigenvalues/vectors)
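A dense-matrix sketch of this factorization view, assuming numpy/scipy; c stands in for the scaling constant in front of S (it depends on the graph volume and negative sampling, which the slide does not spell out).

    import numpy as np
    from scipy.linalg import eigh

    def deepwalk_matrix(A, T=5, c=1.0):
        """M = log max{c * S, 1} with S = sum_{r=1..T} P^r D^{-1},
        P = D^{-1} A. DeepWalk implicitly factorizes M."""
        D_inv = np.diag(1.0 / A.sum(axis=1))
        P = D_inv @ A
        S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1)) @ D_inv
        return np.log(np.maximum(c * S, 1.0))

    def embed_by_svd(M, K=16):
        """Rank-K truncated SVD of M yields the embedding Z."""
        U, sigma, _ = np.linalg.svd(M)
        return U[:, :K] * np.sqrt(sigma[:K])

    def generalized_spectrum(A):
        """Generalized eigenpairs A u = lambda D u used to rewrite S."""
        D = np.diag(A.sum(axis=1))
        lam, U = eigh(A, D)  # scipy solves the generalized problem
        return lam, U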

  13. 1. Reduce the bi-level problem to a single level
b) The optimal loss is now a simple function of the eigenvalues:
min_Z ℒ(G, Z) = f(λ_i, λ_{i+1}, …)
Training the embedding is replaced by computing eigenvalues:
G_pois = argmax_G min_Z ℒ(G, Z)  ⇒  G_pois = argmax_G f(λ_i, λ_{i+1}, …)
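A hedged sketch of why no training loop is needed: by the Eckart-Young theorem, the best rank-K factorization of M leaves exactly the discarded singular values as error, so the inner minimization collapses to a spectral quantity. (The paper then approximates these singular values via the generalized eigenvalues λ; this sketch takes the SVD of M directly for clarity.)

    import numpy as np

    def optimal_loss(M, K=16):
        """min_Z L(G, Z) for a rank-K factorization of M: the
        root-sum-of-squares of the singular values beyond the top K."""
        sigma = np.linalg.svd(M, compute_uv=False)
        return float(np.sqrt(np.sum(sigma[K:] ** 2)))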

  14. 2. Approximate the poisoned graph's spectrum
Compute the change using eigenvalue perturbation theory:
A_poisoned = A_clean + ΔA
λ_poisoned ≈ λ_clean + u_clean^T (ΔA − λ_clean ΔD) u_clean
which simplifies for a single edge flip (i, j) to
λ_p ≈ λ_c + ΔA_ij (2 u_ci · u_cj − λ_c (u_ci² + u_cj²))   # computed in O(1)
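The single-flip update costs O(1) per eigenpair, so every candidate flip can be scored cheaply. A vectorized sketch, with lam and U from generalized_spectrum above; Δw is +1 when the edge is added and -1 when it is removed:

    import numpy as np

    def eigenvalue_change(lam, U, i, j, A):
        """First-order estimate of all generalized eigenvalues after
        flipping edge (i, j):
        lam' ~= lam + dw * (2*U[i]*U[j] - lam*(U[i]**2 + U[j]**2))."""
        dw = 1 - 2 * A[i, j]  # +1 if the edge is added, -1 if removed
        return lam + dw * (2 * U[i] * U[j] - lam * (U[i] ** 2 + U[j] ** 2))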

  15. Overall algorithm
[Schematic: (1a) DeepWalk as matrix factorization, (1b) optimal ℒ via the spectrum, (2) approximate the poisoned spectrum]
1. Compute the generalized eigenvalues/vectors (Λ/U) of the clean graph.
2. For every candidate edge flip (i, j), compute the change in each eigenvalue λ.
3. Greedily pick the top candidates leading to the largest optimal loss.
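A greedy sketch of these three steps, reusing the helpers above; loss_from_spectrum is a simplified stand-in for the paper's spectral estimate of the optimal loss.

    import numpy as np

    def loss_from_spectrum(lam, K=16):
        """Stand-in proxy: root-sum-of-squares of the eigenvalues
        outside the top K by magnitude."""
        lam2 = np.sort(np.abs(lam))[::-1] ** 2
        return float(np.sqrt(lam2[K:].sum()))

    def select_adversarial_flips(A, candidates, budget, K=16):
        lam, U = generalized_spectrum(A)               # step 1
        scores = []
        for i, j in candidates:                        # step 2
            lam_pois = eigenvalue_change(lam, U, i, j, A)
            scores.append(loss_from_spectrum(lam_pois, K))
        order = np.argsort(scores)[::-1]               # step 3: largest loss first
        return [candidates[t] for t in order[:budget]]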

  16. General attack
Poisoning decreases the overall quality of the embeddings.
[Plot: embedding quality under increasing budget; legend: our attacks, gradient baseline, simple baselines, clean graph]

  17. Targeted attack
Goal: attack a specific node and/or a specific downstream task.
[Diagram: general attack vs. targeted attack, before and after]
Examples:
• Misclassify a single given target node t (see the sketch below).
• Increase/decrease the similarity of a set of node pairs 𝒯 ⊂ 𝒱 × 𝒱.
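For intuition only, here is a generic sketch of the first example (not the paper's method): score each flip incident to a target node t by how much it reduces the classifier's margin on t. train_embedding, classifier, and y_true are hypothetical placeholders for the victim pipeline; the paper avoids retraining by using its closed-form spectral surrogate.

    import numpy as np

    def targeted_scores(t, A, train_embedding, classifier, y_true):
        """Margin of node t after each single flip (t, j); a lower
        margin means the flip pushes t closer to misclassification."""
        scores = {}
        for j in range(A.shape[0]):
            if j == t:
                continue
            Z = train_embedding(flip_edges(A, [(t, j)]))
            proba = classifier(Z[t])  # class probabilities for node t
            margin = proba[y_true] - np.max(np.delete(proba, y_true))
            scores[(t, j)] = -margin  # higher score = more damaging flip
        return scores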

  18. Targeted attack
Most nodes can be misclassified with few adversarial edges.
[Plot: per-node results before the attack vs. after the attack]

  19. Transferability
Our selected adversarial edges transfer to other (un)supervised methods.

budget | DeepWalk SVD | DeepWalk Sampling | node2vec | Spectral Embed. | Label Prop. | Graph Conv.
250    | -7.59        | -5.73             | -6.45    | -3.58           | -4.99       | -2.21
500    | -9.68        | -11.47            | -10.24   | -4.57           | -6.27       | -8.61

The change in F1 score (in percentage points) compared to the clean graph. Lower is better.

  20. Analysis of adversarial edges
There is no simple heuristic that can find the adversarial edges.

  21. Summary
Poster: #61, Pacific Ballroom, today. Code: github.com/abojchevski/node_embedding_attack
[Schematic: (1a) DeepWalk as matrix factorization, (1b) optimal ℒ via the spectrum, (2) approximate the poisoned spectrum]
• Node embeddings are vulnerable to adversarial attacks.
• Find adversarial edges via matrix factorization and the graph spectrum.
• Relatively few perturbations degrade the embedding quality and the performance on downstream tasks.
