Exploring the Landscape of Spatial Robustness


  1. Exploring the Landscape of Spatial Robustness Logan Engstrom (with Brandon Tran*, Dimitris Tsipras*, Ludwig Schmidt, Aleksander Mądry) madry-lab.ml

  2. ML “Glitch”: Adversarial Examples

  3-5. ML “Glitch”: Adversarial Examples “pig” small, non-random noise “airliner”

  6. ML “Glitch”: Adversarial Examples “pig” small, non-random noise “airliner” What does small mean here?

  7. ML “Glitch”: Adversarial Examples “pig” small, non-random noise “airliner” What does small mean here? Traditionally: perturbations that have small l_p norm

  8. ML “Glitch”: Adversarial Examples “pig” small, non-random noise “airliner” What does small mean here? Traditionally: perturbations that have small l_p norm Do small l_p norms capture every sense of “small”?
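
To make slides 7-8 concrete: “small l_p norm” means bounding the pixel-wise perturbation delta = x_adv - x in an l_p norm, most often l_infinity. A minimal numpy sketch of that notion of “small” (the budget eps = 8/255 and the 32x32 image shape are illustrative choices, not values from the talk):

    import numpy as np

    eps = 8 / 255                     # a typical l_inf budget (illustrative)
    x = np.random.rand(32, 32, 3)     # stand-in for a natural image in [0, 1]

    # An eps-bounded pixel perturbation: every pixel moves by at most eps.
    delta = np.random.uniform(-eps, eps, size=x.shape)
    x_adv = np.clip(x + delta, 0.0, 1.0)

    print("l_inf norm of delta:", np.abs(x_adv - x).max())    # <= eps by construction
    print("l_2 norm of delta:  ", np.linalg.norm(x_adv - x))  # small relative to ||x||_2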

  9-10. Spatial Perturbations

  11. Spatial Perturbations rotation up to 30°

  12. Spatial Perturbations rotation up to 30°; x, y translations up to ~10%

  13. Spatial Perturbations rotation up to 30°; x, y translations up to ~10% These are not small l_p perturbations!

  14. Spatial Perturbations rotation up to 30°; x, y translations up to ~10% These are not small l_p perturbations! How robust are models to spatial perturbations?
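
By contrast, the rotations and translations on slides 11-14 look tiny to a human but move many pixels at once, so the resulting pixel-space difference is far outside any small l_p ball. A quick sketch with scipy (the 10-degree rotation and 3-pixel shift are arbitrary values inside the ranges on slide 12):

    import numpy as np
    from scipy.ndimage import rotate, shift

    x = np.random.rand(32, 32, 3)     # stand-in for a 32x32 RGB image in [0, 1]

    # A "small" spatial perturbation in the sense of the talk:
    # rotate by a few degrees, then translate by roughly 10% of the image size.
    x_rot = rotate(x, angle=10, axes=(1, 0), reshape=False, mode='nearest')
    x_spatial = shift(x_rot, shift=(3, 3, 0), mode='nearest')   # 3 px is about 10% of 32

    delta = x_spatial - x
    print("l_inf distance:", np.abs(delta).max())     # a large fraction of the pixel range
    print("l_2 distance:  ", np.linalg.norm(delta))   # far outside a typical eps-ball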

  15. Spatial Robustness

  16-17. Spatial Robustness Spoiler: models are not robust

  18. Spatial Robustness Spoiler: models are not robust. Can we train more spatially robust classifiers?

  19. Spatial Defenses

  20. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18]

  21. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations?
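
Written out (this formula is my rendering of the slide's words, not copied from the deck), robust optimization over spatial perturbations is the min-max problem below, with the transformation set S taken from the ranges on slides 11-12; the “key question” is exactly how to solve the inner max:

    \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
      \Big[ \max_{(\delta u,\,\delta v,\,\delta\phi)\,\in\, S}
            L\big(\theta;\; T_{\delta u,\delta v,\delta\phi}(x),\; y\big) \Big],
    \qquad
    S = \big\{ |\delta u|, |\delta v| \le 0.1\,W,\;\; |\delta\phi| \le 30^{\circ} \big\}

Here T_{\delta u,\delta v,\delta\phi} translates the image x by (\delta u, \delta v) and rotates it by \delta\phi, W is the image width, and L is the training loss.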

  22-25. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations? Attempt #1: first-order methods

  26. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations? Attempt #1: first-order methods Attempt #2: exhaustive search
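
A sketch of what “Attempt #1” could look like in PyTorch: run projected gradient ascent directly on the three transformation parameters (rotation angle and the two shifts), which is possible because a bilinear image warp is differentiable in those parameters. The helper and function names below are mine, the step sizes are illustrative, and the accompanying paper finds this kind of first-order attack noticeably weaker than exhaustive search, attributing it to a loss landscape that is highly non-concave in these three parameters.

    import math
    import torch
    import torch.nn.functional as F

    def spatial_transform(x, angle_deg, tx, ty):
        """Rotate each image in a batch x of shape (B, C, H, W) by angle_deg[i]
        degrees and translate it by (tx[i], ty[i]), given as fractions of the
        image size; all three parameters are tensors of shape (B,). Built from
        affine_grid/grid_sample, so the output is differentiable in all three."""
        angle = angle_deg * math.pi / 180.0
        cos, sin = torch.cos(angle), torch.sin(angle)
        # affine_grid works in normalized [-1, 1] coordinates, so shifting by a
        # fraction f of the width corresponds to 2 * f in grid coordinates.
        # (affine_grid parameterizes the inverse map; the sign convention does
        # not matter here because the search ranges are symmetric around 0.)
        row1 = torch.stack([cos, -sin, 2.0 * tx], dim=1)   # (B, 3)
        row2 = torch.stack([sin,  cos, 2.0 * ty], dim=1)   # (B, 3)
        theta = torch.stack([row1, row2], dim=1)           # (B, 2, 3)
        grid = F.affine_grid(theta, list(x.shape), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    def first_order_spatial_attack(model, x, y, steps=20, step_size=1.0):
        """Gradient ascent on (angle, tx, ty), projected back into the ranges
        from slide 12 (rotation up to 30 degrees, shifts up to ~10%)."""
        b = x.shape[0]
        angle = torch.zeros(b, dtype=x.dtype, device=x.device, requires_grad=True)
        tx = torch.zeros(b, dtype=x.dtype, device=x.device, requires_grad=True)
        ty = torch.zeros(b, dtype=x.dtype, device=x.device, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(spatial_transform(x, angle, tx, ty)), y)
            g_angle, g_tx, g_ty = torch.autograd.grad(loss, [angle, tx, ty])
            with torch.no_grad():
                angle += step_size * g_angle.sign()          # degrees per step
                tx += (step_size / 100.0) * g_tx.sign()      # fraction of width per step
                ty += (step_size / 100.0) * g_ty.sign()
                angle.clamp_(-30.0, 30.0)
                tx.clamp_(-0.1, 0.1)
                ty.clamp_(-0.1, 0.1)
        return spatial_transform(x, angle.detach(), tx.detach(), ty.detach())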

  27-28. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations? Attempt #1: first-order methods Attempt #2: exhaustive search Exhaustive search is feasible, and a strong adversary! (discretize translations and rotations, try every combination)
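
“Attempt #2”, sketched the same way (reusing spatial_transform from the previous sketch; grid_attack is my name, and the example grid in the comment is illustrative rather than the resolution used in the paper):

    import torch
    import torch.nn.functional as F

    def grid_attack(model, x, y, angles, shifts):
        """Exhaustive search: discretize rotations and translations, try every
        combination, and keep, per example, the transformed input with the
        highest loss. Reuses spatial_transform() from the previous sketch."""
        b = x.shape[0]
        worst_x = x.clone()
        worst_loss = torch.full((b,), -float('inf'), device=x.device)
        with torch.no_grad():
            for a in angles:
                for sx in shifts:
                    for sy in shifts:
                        ones = torch.ones(b, dtype=x.dtype, device=x.device)
                        x_t = spatial_transform(x, a * ones, sx * ones, sy * ones)
                        loss = F.cross_entropy(model(x_t), y, reduction='none')
                        better = loss > worst_loss
                        worst_loss = torch.where(better, loss, worst_loss)
                        worst_x[better] = x_t[better]
        return worst_x

    # Example grid: rotations every 5 degrees, translations every 2.5% of the image.
    # worst = grid_attack(model, x, y,
    #                     angles=range(-30, 31, 5),
    #                     shifts=[i / 40 for i in range(-4, 5)])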

  29. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations? Attempt #1: first-order methods Attempt #2: exhaustive search Exhaustive search is feasible, and a strong adversary! (discretize translations and rotations, try every combination) Train only on “worst” transformed input (highest loss)

  30. Spatial Defenses Lesson from l_p robustness: use robust optimization (= train on worst-case perturbed inputs) [Goodfellow et al. ’15][Madry et al. ’18] Key question: how to find worst-case translations, rotations? Attempt #1: first-order methods Attempt #2: exhaustive search Exhaustive search is feasible, and a strong adversary! (discretize translations and rotations, try every combination) (we approximate via 10 random samples to quicken training)
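
The “10 random samples” approximation from slide 30, as a sketch (worst_of_k is my name; it reuses spatial_transform from the first-order sketch, and the sampling ranges are the ones from slide 12):

    import torch
    import torch.nn.functional as F

    def worst_of_k(model, x, y, k=10):
        """Approximate the inner maximization by sampling k random rotation /
        translation combinations per example and keeping the highest-loss one."""
        b = x.shape[0]
        worst_x = x.clone()
        worst_loss = torch.full((b,), -float('inf'), device=x.device)
        with torch.no_grad():
            for _ in range(k):
                angle = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-30, 30)
                tx = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-0.1, 0.1)
                ty = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-0.1, 0.1)
                x_t = spatial_transform(x, angle, tx, ty)
                loss = F.cross_entropy(model(x_t), y, reduction='none')
                better = loss > worst_loss
                worst_loss = torch.where(better, loss, worst_loss)
                worst_x[better] = x_t[better]
        return worst_x

    # One robust-training step on the worst-of-10 inputs (model/optimizer assumed):
    # optimizer.zero_grad()
    # F.cross_entropy(model(worst_of_k(model, x, y)), y).backward()
    # optimizer.step()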

  31. Spatial Defenses With robust optimization:

  32. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial

  33. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial (compare to 93% standard accuracy)

  34. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial

  35. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial (compare to 76% standard accuracy)

  36. Spatial Defenses With robust optimization (+10 sample majority vote): CIFAR classifier accuracy: 3% adversarial to 71% adversarial (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial (compare to 76% standard accuracy)

  37. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial, 82% with the 10-sample majority vote (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial (compare to 76% standard accuracy)

  38. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial, 82% with the 10-sample majority vote (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial, 56% with the 10-sample majority vote (compare to 76% standard accuracy)

  39. Spatial Defenses With robust optimization: CIFAR classifier accuracy: 3% adversarial to 71% adversarial, 82% with the 10-sample majority vote (compare to 93% standard accuracy) ImageNet classifier accuracy: 31% adversarial to 53% adversarial, 56% with the 10-sample majority vote (compare to 76% standard accuracy) Still significant room for improvement!
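
The “10-sample majority vote” on slides 36-39 refers to a test-time defense: classify several randomly rotated/translated copies of the input and take the most common prediction. A sketch under the same assumptions as before (reuses spatial_transform from the first-order sketch; the slides do not say whether the vote is over hard labels or averaged scores, so this version votes over labels):

    import torch

    def majority_vote_predict(model, x, k=10):
        """Majority vote over k randomly transformed copies of each input."""
        b = x.shape[0]
        votes = []
        with torch.no_grad():
            for _ in range(k):
                angle = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-30, 30)
                tx = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-0.1, 0.1)
                ty = torch.empty(b, dtype=x.dtype, device=x.device).uniform_(-0.1, 0.1)
                votes.append(model(spatial_transform(x, angle, tx, ty)).argmax(dim=1))
        return torch.stack(votes, dim=0).mode(dim=0).values   # most common label per example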

  40. Conclusions

  41. Conclusions Robust models need more refined notions of similarity

  42. Conclusions Robust models need more refined notions of similarity We do not have true spatial robustness

  43. Conclusions Robust models need more refined notions of similarity We do not have true spatial robustness Intuitions from l_p robustness do not transfer

  44. Conclusions Robust models need more refined notions of similarity We do not have true spatial robustness Intuitions from l_p robustness do not transfer Come to our poster! Pacific Ballroom #142

