  1. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen.

  2. What are Adversarial Examples? "Any input to a ML model that is intentionally designed by an attacker to fool the model into producing an incorrect output." This covers "small" perturbations, "large" perturbations, nonsensical inputs, etc. [Figure: three such inputs, each classified as 99% guacamole.]

  3. L_p-bounded Adversarial Examples. Given an input x, find an x′ that is misclassified such that ‖x′ − x‖_p ≤ ε. (+) Easy to formalize, and gives a concrete measure of progress: "my classifier has 97% accuracy for perturbations of L_2 norm bounded by ε = 2." (−) Incomplete: the L_p-bounded threat model only captures excessive sensitivity.
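
As a concrete reading of this definition (a minimal sketch, not the authors' code; `model` and `attack` are hypothetical placeholders), the robust-accuracy claim above amounts to a measurement like this:

```python
import numpy as np

def is_lp_adversarial(model, x, x_adv, y_true, eps, p=2):
    # x_adv is an L_p-bounded adversarial example for x if it lies inside
    # the eps-ball around x and the model misclassifies it.
    within_ball = np.linalg.norm((x_adv - x).ravel(), ord=p) <= eps
    return within_ball and model(x_adv) != y_true

def robust_accuracy(model, attack, inputs, labels, eps, p=2):
    # "97% accuracy for perturbations of L_2 norm bounded by eps = 2" is a
    # claim of this form: the fraction of test points still classified
    # correctly after the (assumed in-ball) attack perturbs them.
    correct = sum(
        model(attack(model, x, y, eps, p)) == y for x, y in zip(inputs, labels)
    )
    return correct / len(labels)
```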

  4. Goodhart’s Law “When a measure becomes a target, it ceases to be a good measure”

  5. New Vulnerability: Invariance Adversarial Examples. Small semantics-altering perturbations that don't change the classification. [Figure: two "3" digits contrasting excessive sensitivity with excessive invariance.]
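
Phrased as a check (a hedged sketch; `oracle` stands in for a human labeler and is an assumption of this illustration, not something the paper provides), an invariance adversarial example is the mirror image of the sensitivity case:

```python
def is_invariance_adversarial(model, oracle, x, x_adv):
    # Excessive sensitivity: the model's label changes while the true
    # (oracle) label does not. Excessive invariance is the mirror image:
    # the true label changes while the model's label stays the same.
    return oracle(x_adv) != oracle(x) and model(x_adv) == model(x)
```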

  6. Our Results. State-of-the-art robust models are too robust: their invariance to semantically meaningful features can be exploited (a model with 88% certified robust accuracy reaches only 12% agreement with human labels on our examples). There are inherent tradeoffs: solving both excessive sensitivity and excessive invariance implies a perfect classifier.

  7. A Fundamental Tradeoff. An image classified as hermit crab is perturbed within ‖x′ − x‖_2 ≤ 22 into one classified as guacamole. OK! I'll make my classifier robust to L_2 perturbations of size 22 (we don't yet know how to do this on ImageNet).

  8. A Fundamental Tradeoff. After that fix, two images within ‖x′ − x‖_2 ≤ 22 of each other are both classified as hermit crab, even when the perturbation has changed what the image actually shows. OK! I'll choose a better norm than L_2.

  9. A Fundamental Tradeoff Theorem (informal) Choosing a “good” norm is as hard as building a perfect classifier

  10. Are Current Classifiers Already too Robust?

  11. A Case-Study on MNIST. State-of-the-art certified robustness: 93% certified accuracy for ℓ∞ perturbations with ε = 0.3, and 88% for ε = 0.4. The model certifies that it labels both inputs the same.
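
That last point can be made explicit (a hedged illustration, assuming MNIST pixels scaled to [0, 1]): a certificate of ℓ∞ radius 0.4 at an input guarantees the model's label is constant on every image within that distance, including images whose true label differs.

```python
import numpy as np

def certificate_forces_same_label(x_certified, x_other, eps=0.4):
    # If the model's l_inf robustness certificate of radius eps holds at
    # x_certified, then any input within that radius is guaranteed to get
    # the same label -- even an invariance adversarial example whose true
    # label is different. With pixels in [0, 1], eps = 0.4 spans almost
    # half of each pixel's range.
    return float(np.max(np.abs(x_certified - x_other))) <= eps
```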

  12. Automatically Generating Invariance Attacks. Challenge: ensure the label is changed from a human perspective. Meta-procedure: alignment via data augmentation. [Figure: attack pipeline with labeled stages: input, input from another class, semantics-preserving transformation, diff, a few tricks, result.]
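
A hedged sketch of that meta-procedure (placeholders throughout: `candidates`, `transforms`, and the distance are assumptions for illustration, not the paper's exact tooling): align images from other classes to the target with semantics-preserving transformations and keep the closest match, so the human label changes while the pixel difference stays small.

```python
import numpy as np

def invariance_attack(x, y, candidates, transforms):
    # candidates: list of (image, label) pairs drawn from the training set.
    # transforms: semantics-preserving functions (small shifts, rotations, ...),
    # so an aligned image keeps its original human label.
    best, best_dist = None, np.inf
    for x_other, y_other in candidates:
        if y_other == y:  # we need the human label to change
            continue
        for t in transforms:
            aligned = t(x_other)
            d = np.linalg.norm((aligned - x).ravel(), ord=np.inf)
            if d < best_dist:
                best, best_dist = aligned, d
    # best has a different true label than x but lies close to x, so a model
    # that is invariant on a large ball around x will label it the same as x.
    return best, best_dist
```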

  13. Do our invariance examples change human labels? [Bar chart over four settings: no attack, ℓ∞ ≤ 0.3, ℓ∞ ≤ 0.4, and ℓ∞ ≤ 0.4 (manual); the bars show values of 88%, 37%, 21%, and 0%.] Open problem: better automated attacks.

  14. Which models agree most with humans? The most robust model provably gets all of the invariance examples wrong! [Figure: agreement with human labels plotted against increasingly robust models.]

  15. Why can models be accurate yet overly invariant? Or: why can an MNIST model achieve 88% test accuracy for ℓ∞ ≤ 0.4? Problem: the dataset is not diverse enough. Partial solution: data augmentation.
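
A rough, hedged sketch of what such augmentation could look like (the specific augmentations that help are not spelled out on this slide; the shifts and contrast jitter below are placeholders):

```python
import numpy as np

def augment(images, labels, rng=None):
    # Add jittered copies of each training image so the data distribution is
    # less sparse; a denser training set leaves less room for a model to be
    # invariant on huge regions of input space while staying accurate.
    rng = rng or np.random.default_rng(0)
    aug_x, aug_y = [], []
    for img, y in zip(images, labels):
        aug_x.append(img)
        aug_y.append(y)
        aug_x.append(np.roll(img, shift=int(rng.integers(-2, 3)), axis=0))  # small shift
        aug_x.append(np.clip(img * rng.uniform(0.7, 1.0), 0.0, 1.0))        # contrast jitter
        aug_y.extend([y, y])
    return np.stack(aug_x), np.array(aug_y)
```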

  16. Conclusion. Robustness isn't yet another metric to monotonically optimize! Max "real" robust accuracy on MNIST: ≈80% at ℓ∞ = 0.3, ≈10% at ℓ∞ = 0.4 ⇒ we've already over-optimized! Are we really making classifiers more robust, or just overly smooth?
