

SLIDE 1

Scalable Differential Privacy with Certified Robustness in Adversarial Learning

NhatHai Phan1, My T. Thai2, Han Hu1, Ruoming Jin3, Tong Sun4, and Dejing Dou5

1 Ying Wu College of Computing, New Jersey Institute of Technology; 2 Department of Computer & Information Sciences & Engineering, University of Florida; 3 Computer Science Department, Kent State University; 4 Adobe Research Lab; 5 Computer and Information Science Department, University of Oregon

Email: phan@njit.edu

The 37th International Conference on Machine Learning (ICML’20), Jul 12th - 18th, 2020.

SLIDE 2

Outline

  • Motivation and Background
  • Differential Privacy (DP) in Adversarial Learning
  • Composition of Certified Robustness
  • Stochastic Batch Training (StoBatch)
  • Experimental Results and Conclusion

SLIDE 3

Motivation

  • DNNs are vulnerable to both privacy attacks and adversarial examples.
  • Existing efforts focus on either preserving DP or deriving certified robustness, but not both DP and robustness:
    – private models are unshielded against adversarial examples;
    – robust models (adversarial training) do not offer privacy protection for the training data.
  • Bounding the robustness of a model that protects data privacy and is robust against adversarial examples at scale is nontrivial:
    – adversarial examples introduce a previously unknown privacy risk;
    – there is an unrevealed interplay (trade-off) among DP preservation, adversarial learning, and robustness bounds.

SLIDE 4

Goals

  • Develop a novel mechanism (StoBatch) to: 1) preserve DP of the training data, 2) be provably and practically robust to adversarial examples, 3) retain high model utility, and 4) be scalable.

Methods

  • Privacy-preserving (Laplace) noise is injected into inputs and hidden layers to achieve DP in learning private model parameters.
  • The privacy noise π‘ž is projected on the scale of the robustness noise 𝑠.
    – a composition of certified robustness in both input and latent spaces
  • Leverage the recipe of distributed adversarial training to develop a stochastic batch training.
    – disjoint and fixed batches are distributed to local DP trainers

Results

  • Established a connection among DP preservation to protect the training data, adversarial learning, and certified robustness.
  • Derived a sequential composition of robustness in both input and latent spaces.
  • Addressed the trade-off among model utility, privacy loss, and robustness.
  • Rigorous experiments show that our mechanism significantly enhances the robustness and scalability of DP DNNs.

Deliverables

  • Algorithms and models: https://github.com/haiphanNJIT/StoBatch

SLIDE 5

Differential Privacy

  • Databases 𝐸 and 𝐸′ are neighbors if they differ in one individual’s contribution.
  • (πœ€, 𝛿)-Differential Privacy: for all neighbors 𝐸, 𝐸′, the distribution of 𝐴(𝐸) is (nearly) the same as the distribution of 𝐴(𝐸′); for every set of outcomes 𝑂:

    Pr[𝐴(𝐸) ∈ 𝑂] ≀ 𝑒^πœ€ Pr[𝐴(𝐸′) ∈ 𝑂] + 𝛿

    where πœ€ bounds the privacy loss.
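For concreteness, a minimal sketch of the Laplace mechanism, the standard way to satisfy pure πœ€-DP for a query with bounded sensitivity (this example is illustrative and not from the slides; the function name and values are ours):

```python
import numpy as np

def laplace_mechanism(true_answer, epsilon, sensitivity=1.0, rng=None):
    """Release a query answer with epsilon-DP by adding Laplace noise
    whose scale is sensitivity / epsilon."""
    rng = rng or np.random.default_rng(0)
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

# Neighboring databases differ in one record, so a counting query has sensitivity 1.
noisy_count = laplace_mechanism(true_answer=100, epsilon=0.5)
```

Smaller πœ€ means a larger noise scale and hence stronger privacy.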

SLIDE 6

DP Mechanisms

[Chaudhuri & Sarwate]

SLIDE 7

Robustness Condition [LΓ©cuyer et al., 2019]

βˆ€π›½ ∈ 𝑙_𝑝(𝜈): 𝑔_𝑙(𝑦 + 𝛽) > max_{𝑖: 𝑖 ≠ 𝑙} 𝑔_𝑖(𝑦 + 𝛽)

where 𝑙 = 𝑧(𝑦), indicating that a small perturbation 𝛽 of size at most 𝜈 in the input does not change the predicted label 𝑧(𝑦).

SLIDE 8

DP with Certified Robustness

[LΓ©cuyer et al., 2019]

  • Image level: noise is added directly to the input, 𝑦̃ = 𝑦 + 𝑁(0, 𝜏²)
  • Gaussian mechanism: 𝜏 β‰₯ √(2 ln(1.25/𝛿)) Β· Ξ”β‚‚ / πœ€
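The noisy prediction can then be certified by comparing the top two expected scores. A sketch of this randomized inference (illustrative only: `score_fn` is a hypothetical model score function, and the sample mean stands in for the expectation over noise draws):

```python
import numpy as np

def certified_predict(score_fn, y, tau, n_samples=200, rng=None):
    """Average scores over Gaussian noise draws N(0, tau^2), then report the
    top label and its margin over the runner-up; a sufficiently large margin
    certifies the robustness condition g_l(y) > max_{i != l} g_i(y)."""
    rng = rng or np.random.default_rng(0)
    scores = np.mean(
        [score_fn(y + rng.normal(0.0, tau, size=np.shape(y))) for _ in range(n_samples)],
        axis=0,
    )
    top2 = np.argsort(scores)[-2:]
    label, runner_up = int(top2[1]), int(top2[0])
    return label, float(scores[label] - scores[runner_up])
```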

SLIDE 9

Outline

  • Motivation and Background
  • Differential Privacy in Adversarial Learning
  • Composition of Certified Robustness
  • Stochastic Batch Training (StoBatch)
  • Experimental Results and Conclusion

SLIDE 10

Differential Privacy in Adversarial Learning [Overview]

  • easier to train, small sensitivity bounds, and reusability

SLIDE 11

DP Auto-Encoder

The auto-encoder’s reconstruction function ℛ_𝐸(πœƒβ‚) over the data 𝐸 is perturbed with Laplace noise, so that learning the parameters πœƒβ‚ preserves DP:

𝑦̄_𝑖 = 𝑦_𝑖 + (1/𝑛) Lap(Δ_ℛ / 𝜁₁),   ℎ̃_𝑖 = 𝜃₁ᡀ 𝑦̄_𝑖 + (2/𝑛) Lap(Δ_ℛ / 𝜁₁)
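Numerically, this perturbation step can be sketched as follows (a single linear layer stands in for the encoder; all names and shapes are illustrative):

```python
import numpy as np

def dp_autoencoder_noise(Y, theta1, delta_R, zeta1, rng=None):
    """Perturb the inputs and the hidden activations with Laplace noise of
    scale delta_R / zeta1, attenuated by the batch size n, before the
    reconstruction loss is computed on the noisy quantities."""
    rng = rng or np.random.default_rng(0)
    n = Y.shape[0]
    scale = delta_R / zeta1
    Y_bar = Y + rng.laplace(0.0, scale, size=Y.shape) / n
    H_tilde = Y_bar @ theta1 + 2.0 * rng.laplace(0.0, scale, size=(n, theta1.shape[1])) / n
    return Y_bar, H_tilde
```

The idea is that downstream training on the noisy quantities, rather than the raw data, inherits the DP guarantee.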

SLIDE 12

Adversarial Learning with DP

  • DP adversarial examples
  • DP objective function

(Crafting adversarial examples from the training data introduces privacy leakage, which must be accounted for in the DP budget.)

SLIDE 13

Algorithm

SLIDE 14

Outline

  • Motivation and Background
  • Differential Privacy in Adversarial Learning
  • Composition of Certified Robustness
  • Stochastic Batch Training (StoBatch)
  • Experimental Results and Conclusion

SLIDE 15

Composition of Certified Robustness

  • Project the privacy noise π‘ž on the scale of the robustness noise 𝑠.
  • What is the general robustness bound, given πœ† and πœ’?

The privacy noise is rewritten on the scale of the input and hidden-layer sensitivities:

πœ† = Δ_ℛ / (𝑛 𝜁₁ Ξ”β‚‚Λ£):  𝑦̄_𝑖 = 𝑦_𝑖 + Lap(πœ† Ξ”β‚‚Λ£)
πœ’ = Δ_ℛ / (𝑛 𝜁₁ Ξ”β‚‚Κ°):  ℎ̃_𝑖 = β„Ž_𝑖 + Lap(πœ’ Ξ”β‚‚Κ°)

Sequential Composition of Certified Robustness (Lemma 5, Theorem 5): the input and latent certifications compose, covering attack sizes 𝜈 within the budget given by πœ† + πœ’.

SLIDE 16

Verified Inference

  • StoBatch robustness condition:

βˆ€π›½ ∈ 𝑙_𝑝(𝜈), with 𝜈 within the budget given by πœ† + πœ’:  𝑔_𝑙(𝑦 + 𝛽) > max_{𝑖: 𝑖 ≠ 𝑙} 𝑔_𝑖(𝑦 + 𝛽)

where 𝑙 = 𝑧(𝑦), indicating that a small perturbation in the input does not change the predicted label 𝑧(𝑦).

SLIDE 17

Stochastic Batch Mechanism

  • All batches are under the same DP protection.
  • Training from multiple batches with more adversarial examples, without affecting the DP bound.
  • The optimization of one batch does not affect the DP protection at any other batch or at the dataset level 𝐸, across π‘ˆ training steps.
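The batching scheme can be sketched as follows (a schematic only; `train_step` is a placeholder for a local DP trainer, and the distribution across trainers is simulated sequentially):

```python
import random

def make_fixed_batches(dataset, batch_size, seed=0):
    """Split the dataset ONCE into disjoint, fixed batches; reusing the same
    disjoint batches keeps the per-batch DP guarantee independent of the
    number of training steps."""
    idx = list(range(len(dataset)))
    random.Random(seed).shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, len(idx), batch_size)]

def stobatch_loop(dataset, batch_size, steps, train_step, trainers=2, seed=0):
    """Each step, draw fixed batches at random and hand each to a local trainer."""
    batches = make_fixed_batches(dataset, batch_size, seed)
    rng = random.Random(seed + 1)
    for _ in range(steps):
        for batch in rng.sample(batches, k=min(trainers, len(batches))):
            train_step([dataset[i] for i in batch])
```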

SLIDE 18

Outline

  • Motivation and Background
  • Differential Privacy in Adversarial Learning
  • Composition of Certified Robustness
  • Stochastic Batch Training (StoBatch)
  • Experimental Results and Conclusion

SLIDE 19

Experimental Results

  • Interplay among model utility, privacy loss, and robustness bounds:
    – privacy budget
    – attack sizes
    – scalability
  • Models: CNNs on MNIST and CIFAR-10; ResNet-18 on Tiny ImageNet
  • Baseline approaches:
    – PixelDP [LΓ©cuyer et al., S&P’19]
    – DPSGD [Abadi et al., CCS’16]
    – AdLM [Phan et al., ICDM’17]
    – Secure-SGD [Phan et al., IJCAI’19] with AGM [Balle et al., ICML’18]

SLIDE 20

CIFAR-10

  • StoBatch: 45.25 Β± 1.6% (conventional), 42.59 Β± 1.58% (certified)
  • SecureSGD: 29.08 Β± 11.95% (conventional), 19.58 Β± 5.0% (certified)
  • p < 2.75e-20 (2-tail t-test)
SLIDE 21

Tiny ImageNet

  • StoBatch: 29.78 Β± 4.8% (conventional), 28.31 Β± 1.58% (certified)
  • SecureSGD: 8.99 Β± 5.95% (conventional), 8.72 Β± 5.5% (certified)
  • p < 1.55e-42 (2-tail t-test)

SLIDE 22

Conclusion

  • Established a connection among DP preservation to protect the training data, adversarial learning, and certified robustness.
  • Derived a sequential composition of robustness in both input and latent spaces.
  • Addressed the trade-off among model utility, privacy loss, and robustness.
  • Rigorous experiments show that our mechanism significantly enhances the robustness and scalability of DP DNNs.

SLIDE 23

Thank you! phan@njit.edu, we are hiring!