Adversarial Robustness for Code
Pavol Bielik, Martin Vechev
ICML 2020 (presentation transcript)


  1. ICML 2020. Adversarial Robustness for Code. Pavol Bielik, Martin Vechev (pavol.bielik@inf.ethz.ch, martin.vechev@inf.ethz.ch), Department of Computer Science.

  2. Adversarial Robustness. Vision: panda + adversarial noise = gibbon (Explaining and Harnessing Adversarial Examples, Goodfellow et al., ICLR'15). Audio: sound + noise = targeted mistranscription (Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, Carlini et al., ICML'18 workshop).

  3. Adversarial Robustness for Code. The same phenomenon extends from vision (Goodfellow et al., ICLR'15) and audio (Carlini et al., ICML'18 workshop) to code: code + refactoring = changed prediction.

  4. Deep Learning + Code. Prior works (2016-2019) apply deep learning to bug detection, loop invariants, bug repair, code classification, code search, type inference, neural decompilation, code captioning, code completion, variable naming, and program translation, reaching 90% accuracy.

  5. Adversarial Robustness for Code. These prior works report 90% accuracy, but their robustness is an open question.

  6. Adversarial Robustness for Code. Prior works: 90% accuracy, but only 4%-50% robustness. This work: 88% accuracy with 84% robustness.

  7. Adversarial Robustness Example (Type Inference). A model f(x) → y maps an input program x to program properties y. For the input

     v = parseInt(hex.substr(1), radix)

     the model predicts the type num for v, parseInt, and radix. Goal (adversarial robustness): the model is correct for all label-preserving program transformations, e.g.:
     - variable renaming: hex → color
     - constant replacement: hex.substr(1) → hex.substr(42)
     - semantic equivalence: radix → radix + 0
     - remove assignment: v = parseInt(...) → parseInt(...)
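
The robustness goal above can be sketched programmatically: a model is robust on an example if its prediction survives every label-preserving variant. The model, snippet, and variants below are illustrative placeholders, not the paper's actual system.

```python
# Hypothetical sketch of the robustness check for type inference.
# `predict_type` is a stand-in for a neural type-inference model.

def predict_type(snippet: str) -> str:
    # Placeholder model: a real system would run a learned model here.
    return "num" if "parseInt" in snippet else "unknown"

ORIGINAL = "v = parseInt(hex.substr(1), radix)"

# Label-preserving variants from the slide: each changes the surface
# form of the program but not the true type of `v`.
VARIANTS = [
    "v = parseInt(color.substr(1), radix)",    # variable renaming
    "v = parseInt(hex.substr(42), radix)",     # constant replacement
    "v = parseInt(hex.substr(1), radix + 0)",  # semantic equivalence
]

def is_robust(model, original, variants):
    """Robust on this example iff the prediction is unchanged by
    every label-preserving transformation."""
    base = model(original)
    return all(model(v) == base for v in variants)

print(is_robust(predict_type, ORIGINAL, VARIANTS))
```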

  8. Our Work: Three Key Techniques. (1) Abstain: allows the model not to make a prediction if it is uncertain, e.g. predicting abs (abstain) instead of a type for hex, substr, and radix in v = parseInt(hex.substr(1), radix).

  9. Our Work: Three Key Techniques. (2) Adversarial Training: train on label-preserving perturbations such as ε = hex → color, i.e. v = parseInt(color.substr(1), radix), reaching 54% robustness.

  10. Our Work: Three Key Techniques. (3) Representation Learning: a learned representation β(x + ε) abstracts the perturbed program, e.g. mapping parseInt(color.substr(1), radix) to parseInt(_.substr(1), _), so renamed identifiers become holes.

  11. Our Work: Three Key Techniques. Combining abstention, adversarial training, and representation learning reaches 84% robustness.

  12. Our Work: Three Key Techniques. (4) Refinement: the learned representation β refines the space of perturbations that adversarial training must cover.

  13. Learning to Abstain. Given an input x_i, the model predicts a class y_1, y_2, ... or abstains. Instead of requiring the model to be both robust and accurate everywhere, it only needs to be robust where it predicts. This leads to a simpler optimization problem, which matters because the underlying property-prediction problem is undecidable.

  14. Learning to Abstain. Main insight: combine robustness with learning to abstain. How to abstain? Deep Gamblers: Learning to Abstain with Portfolio Theory, Liu et al., NeurIPS'19.
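
One simple way to realize abstention, in the spirit of the Deep Gamblers loss cited above, is to give the model an extra abstain output whose probability softens the penalty on uncertain inputs. This is a hedged sketch: the payoff value and the probability vectors are made up for illustration, and the loss matches the published one only up to a constant.

```python
import math

def gamblers_loss(probs, label, payoff=2.5):
    """Deep-Gamblers-style loss: the model outputs k class probabilities
    plus one abstain probability (last entry). Betting on the true class
    pays off `payoff`; abstaining returns the stake. Equals the published
    -log(p_true + p_abstain / payoff) up to an additive constant."""
    p_true = probs[label]
    p_abstain = probs[-1]
    return -math.log(payoff * p_true + p_abstain)

# A confident correct prediction incurs low loss ...
confident = [0.9, 0.05, 0.05]   # [class0, class1, abstain]
# ... an uncertain model can route mass to abstain instead of guessing ...
uncertain = [0.1, 0.1, 0.8]
# ... while a confident wrong answer is penalized heavily.
wrong = [0.05, 0.9, 0.05]

for p in (confident, uncertain, wrong):
    print(gamblers_loss(p, label=0))
```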

  15. Our Work: Three Key Techniques (overview): (1) Abstain, (2) Adversarial Training, (3) Representation Learning, (4) Refinement; all components are learned jointly.

  16. Adversarial Training. Let y be the ground-truth label and loss measure the model performance. Standard training: min_θ loss(θ, x, y). Adversarial training: min_θ [max_{ε ∈ S(x)} loss(θ, x + ε, y)], where S(x) is the space of label-preserving program transformations. Two steps: (1) define the space S of program transformations, (2) solve the inner max loss efficiently.
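
For a discrete transformation space, the min-max objective can be approximated by sampling candidate transformations for the inner max and descending on the worst case found. Everything here (model, loss, update, transforms) is a placeholder sketch, not the paper's implementation.

```python
import random

def inner_max(model, loss, x, y, transforms, samples=8):
    """Approximate max over epsilon in S(x) by sampling candidate
    label-preserving transformations and keeping the highest-loss one."""
    worst_x, worst_loss = x, loss(model, x, y)
    for _ in range(samples):
        t = random.choice(transforms)
        x_adv = t(x)
        l = loss(model, x_adv, y)
        if l > worst_loss:
            worst_x, worst_loss = x_adv, l
    return worst_x

def adversarial_training_step(model, loss, update, batch, transforms):
    """One min-max step: solve the inner max per example, then take a
    gradient step (abstracted as `update`) on the worst-case input."""
    for x, y in batch:
        x_adv = inner_max(model, loss, x, y, transforms)
        update(model, x_adv, y)  # outer min: descend on adversarial loss
```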

  17. Label Preserving Program Transformations.
     - Word substitution (constants, binary operators, ...): applied as x + ε directly on tensors; very fast. Examples: 7 → 42, radix + offset → radix - offset.
     - Word renaming (variables, parameters, fields, method names, ...): tensors + ε + analysis; fast. Examples: def getID() {...} → def get_id() {...}, client.Name → client.name.
     - Sequence substitution (adding dead code, reordering statements, ...): tensors → code + ε + analysis → tensors; slow. Example: a = get_id(); b = 42 → b = 42; a = get_id().
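
The three transformation classes might be sketched as plain-text rewrites. Note that faithful word renaming and sequence substitution require program analysis (scoping, dependence), which this toy version deliberately omits.

```python
import re

def substitute_constant(code: str, old: str, new: str) -> str:
    """Word substitution: swap one literal for another (very fast;
    can operate directly on the tensor encoding in a real system)."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def rename_identifier(code: str, old: str, new: str) -> str:
    """Word renaming: rename a variable/field. A faithful version
    needs scope analysis; this whole-word regex is only a sketch."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def insert_dead_code(code: str, stmt: str = "unused_tmp = 0") -> str:
    """Sequence substitution: prepend a dead statement. Requires
    decoding tensors back to code and re-encoding, hence slow."""
    return stmt + "\n" + code

src = "v = parseInt(hex.substr(1), radix)"
print(substitute_constant(src, "1", "42"))
print(rename_identifier(src, "hex", "color"))
print(insert_dead_code(src))
```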

  18. Adversarial Training (recap). With the space S of label-preserving transformations defined (step 1), the remaining step is to solve the inner max in min_θ [max_{ε ∈ S(x)} loss(θ, x + ε, y)] efficiently (step 2).

  19. Solving the Inner max loss Efficiently. Gradient-based optimization, θ ← θ - ∇ loss(θ, x + ε, y) with ε ∈ S(x) (Adversarial Examples for Models of Code, Yefet et al., arXiv'20), has limitations: robustness stays the same or gets worse (54% standard vs 54% adversarial); S(x) is discrete and highly structured, so finding x + ε is a hard optimization problem; it makes disruptive changes on large programs; and it does not support structural transformations.

  20. Solving the Inner max loss Efficiently. Refine S: instead of min_θ [max_{ε ∈ S(x)} loss(θ, x + ε, y)], optimize over ε ∈ S(β(x)), where β is a learned representation, e.g. searching over parseInt(_.substr(1), _) rather than parseInt(color.substr(1), radix).

  21. Solving the Inner max loss Efficiently. Refining S(x) to S(β(x)) reduces the search space and leads to an easier optimization problem.

  22. Solving the Inner max loss Efficiently. The refinement S(β(x)) is orthogonal to gradient-based optimization and supports all transformation types.
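
A tiny illustration of why refining S(x) to S(β(x)) shrinks the search: if β abstracts identifiers to holes, all variable renamings of a program collapse to a single representation, so the inner max no longer needs to enumerate them. The hard-coded identifier list is purely illustrative; a real β is learned.

```python
def beta(code: str, identifiers=("hex", "color", "radix")) -> str:
    """Crude stand-in for a learned abstraction: replace known
    identifiers with the hole '_'."""
    for name in identifiers:
        code = code.replace(name, "_")
    return code

x         = "v = parseInt(hex.substr(1), radix)"
x_renamed = "v = parseInt(color.substr(1), radix)"

# Renaming perturbations vanish under beta: both map to the same
# abstract program, so the search over renamings collapses.
print(beta(x))
print(beta(x) == beta(x_renamed))
```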

  23. Our Work: Three Key Techniques (recap): (1) Abstain, (2) Adversarial Training, (3) Representation Learning, (4) Refinement; learned jointly.

  24. Representation Learning. (1) Programs as graphs: a program such as v = x + 7 is a graph G = ⟨V, E, ο⟩ with attributed nodes V and edges E (Learning to Represent Programs with Graphs, Allamanis et al., ICLR'18; Generative Code Modeling with Graphs, Brockschmidt et al., ICLR'19). (2) Define refinement: β: ⟨V, E, ο⟩ → ⟨V, E' ⊆ E, ο⟩, i.e. remove graph edges.

  25. Representation Learning. All decisions are made locally: since β only removes graph edges, ⟨V, E, ο⟩ → ⟨V, E' ⊆ E, ο⟩, each refinement decision concerns a single edge.

  26. Representation Learning. (3) Optimize β: arg min_β Σ_{(x, y)} |β(x)| subject to loss(θ, x, y) ≈ loss(θ, β(x), y), i.e. minimize the graph size over the training data while approximately preserving the loss.
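
The optimization of β can be caricatured as greedy edge removal: drop edges as long as a stand-in loss on the pruned graph stays close to the loss on the full graph. The graph, the loss function, and the tolerance below are all illustrative placeholders, not the paper's learned procedure.

```python
def refine(edges, loss, tol=0.05):
    """Greedily remove edges whose removal changes the loss by less
    than `tol`, approximating:
        argmin |beta(x)|  s.t.  loss(x) ~= loss(beta(x))."""
    base = loss(edges)
    kept = list(edges)
    for e in list(kept):
        trial = [k for k in kept if k != e]
        if abs(loss(trial) - base) < tol:
            kept = trial  # edge e was irrelevant to the prediction
    return kept

# Toy loss that only depends on the edge (0, 1): the refinement keeps
# exactly that edge and prunes the rest.
def toy_loss(edges):
    return 0.0 if (0, 1) in edges else 1.0

print(refine([(0, 1), (1, 2), (2, 3)], toy_loss))
```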

  27. Our Work: Three Key Techniques (summary): (1) Abstain, (2) Adversarial Training, (3) Representation Learning, (4) Refinement; learned jointly.
