Benchmarking Adversarial Robustness on Image Classification


  1. Benchmarking Adversarial Robustness on Image Classification
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
Dept. of Comp. Sci. and Tech., BNRist Center, Institute for AI, THBI Lab, Tsinghua University, Beijing, 100084, China
Contact: dyp17@mails.tsinghua.edu.cn; fqa19@mails.tsinghua.edu.cn

  2. Adversarial Examples
An adversarial example is crafted by adding a small perturbation to a normal example; it is visually indistinguishable from the normal one, yet is misclassified by the target model. There is an “arms race” between attacks and defenses, making it hard to understand their effects.
[Figure from Dong et al. (2018): pairs of normal and adversarial images with model predictions — alps: 94.39% / dog: 99.99%, and puffer: 97.99% / crab: 100.00%.]
The arms race, attacks vs. defenses:
• Attacks: adaptive attacks [Athalye et al., 2018]; optimization-based attacks [Carlini and Wagner, 2017]; iterative attacks [Kurakin et al., 2016]; one-step attacks [Goodfellow et al., 2014]
• Defenses: randomization and denoising [Xie et al., 2018; Liao et al., 2018]; defensive distillation [Papernot et al., 2016]; adversarial training with FGSM [Kurakin et al., 2015]
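To make the attack side concrete, here is a minimal sketch of a one-step ℓ∞ attack in the spirit of FGSM [Goodfellow et al., 2014], written in plain PyTorch. The model, inputs, and epsilon are placeholders; this is not code from the paper or from the RealSafe platform.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step l_inf attack (FGSM): move the input by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
    # keep the adversarial image in the valid pixel range
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterative attacks [Kurakin et al., 2016] repeat this step with a smaller step size and project the result back into the ε-ball around the original image after each step.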

  3. Robustness Benchmark
• Threat models: we define complete threat models.
• Attacks: we adopt 15 attacks.
• Defenses: we adopt 16 defenses on CIFAR-10 and ImageNet.
• Evaluation metrics (a sketch of how such curves can be computed follows this list):
  • Accuracy (attack success rate) vs. perturbation budget curves
  • Accuracy (attack success rate) vs. attack strength curves
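A hedged sketch of how an accuracy vs. perturbation budget curve could be computed: run a fixed attack at a grid of budgets ε and record the model's accuracy on the resulting adversarial examples. The attack callable, data loader, and budget grid below are illustrative assumptions, not the benchmark's actual implementation; the fgsm() sketch above is one possible attack.

```python
import torch

def accuracy_vs_budget(model, attack, loader, budgets, device="cpu"):
    """For each perturbation budget eps, attack every batch and record
    the model's accuracy on the adversarial examples (one point per eps)."""
    model.eval().to(device)
    curve = []
    for eps in budgets:
        correct, total = 0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = attack(model, x, y, eps)   # e.g. the fgsm() sketch above
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        curve.append((eps, correct / total))
    return curve

# Illustrative usage for an l_inf threat model:
# curve = accuracy_vs_budget(model, fgsm, test_loader, [i / 255 for i in range(0, 17, 2)])
```

The attack strength curve is analogous: fix the budget ε and sweep an attack-strength parameter (e.g., the number of attack iterations) instead.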

  4. Evaluation Results on CIFAR-10 (ℓ∞ norm; untargeted attacks; white-box; accuracy curves)

  5. Platform: RealSafe
• We developed a new platform for adversarial machine learning research, called RealSafe, which focuses on benchmarking adversarial robustness on image classification correctly and efficiently.
• Available at https://github.com/thu-ml/realsafe (scan the QR code for this URL).
Feature highlights:
• Modular implementation, consisting of attacks, models, defenses, datasets, and evaluations (a generic sketch of how such modules can be wired together follows this list).
• Supports TensorFlow and PyTorch models through the same interface.
• Supports 11 attacks and many of the defenses benchmarked in this work.
• Provides ready-to-use pre-trained baseline models (8 on ImageNet and 8 on CIFAR-10).
• Provides efficient and easy-to-use tools for benchmarking models with the two robustness curves.
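RealSafe's actual interfaces are documented in the repository linked above. As a stand-in, the following hypothetical sketch shows how modular pieces (models, attacks, evaluation) could be composed into a benchmarking driver in plain PyTorch, reusing the fgsm and accuracy_vs_budget sketches from earlier; every name here is illustrative and is not RealSafe's API.

```python
import torchvision

def benchmark(models, attacks, loader, budgets, device="cpu"):
    """Cross-product of models x attacks -> accuracy-vs-budget curves,
    mirroring the attack/model/evaluation split described above."""
    results = {}
    for model_name, model in models.items():
        for attack_name, attack in attacks.items():
            results[(model_name, attack_name)] = accuracy_vs_budget(
                model, attack, loader, budgets, device
            )
    return results

# Illustrative usage with placeholder choices:
# models  = {"resnet50": torchvision.models.resnet50(weights="IMAGENET1K_V1")}
# attacks = {"fgsm": fgsm}
# curves  = benchmark(models, attacks, val_loader, budgets=[2/255, 4/255, 8/255])
```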

  6. Thanks
