Scalable PATE / The Secret Sharer


  1. Scalable PATE & The Secret Sharer: work by the Brain Privacy and Security team and collaborators at UC Berkeley, presented by Ian Goodfellow

  2. PATE / PATE-G • Private / Papernot • Aggregation / Abadi • Teacher / Talwar • Ensembles / Erlingsson • Generative / Goodfellow

  3. Threat Model: types of adversaries and our threat model. Model querying (black-box adversary): Shokri et al. (2016), Membership Inference Attacks against ML Models; Fredrikson et al. (2015), Model Inversion Attacks. Model inspection (white-box adversary): Zhang et al. (2017), Understanding Deep Learning Requires Rethinking Generalization. In our work, the threat model assumes: the adversary can make a potentially unbounded number of queries, and the adversary has access to model internals.

  4. A definition of privacy: differential privacy. [Diagram: a randomized algorithm answers queries (Answer 1, Answer 2, ..., Answer n) on each of two datasets that differ in a single record; the two sets of answers should be nearly indistinguishable.]
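
For reference, the standard (ε, δ)-differential-privacy guarantee that the diagram illustrates, which the slide itself does not write out:

```latex
% A randomized mechanism M is (\epsilon, \delta)-differentially private if,
% for all pairs of datasets D, D' differing in a single record and all
% sets of outcomes S:
\Pr[M(D) \in S] \;\le\; e^{\epsilon} \, \Pr[M(D') \in S] \;+\; \delta
```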

  5. A tangent • Which other fields need their “differential privacy moment”? • Adversarial robustness needs a provable mechanism • Interpretability needs measurable / actionable definitions • Differential privacy is maybe the brightest spot in ML theory, especially in adversarial settings: real guarantees that hold in practice

  6. Different teachers learn from different subsets: Private Aggregation of Teacher Ensembles (PATE). [Diagram: the sensitive training data is split into n disjoint partitions; Teacher i is trained only on Partition i.] Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data [ICLR 2017 best paper], Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar

  7. Aggregation: count the teachers' votes for each class, then take the maximum.
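
A minimal sketch of this aggregation step. The deck later notes (slide 16) that the original PATE mechanism adds Laplace noise to the vote counts before taking the maximum; the noise scale below is illustrative, not the papers' setting:

```python
import numpy as np

def noisy_max_aggregation(teacher_labels, num_classes, laplace_scale=20.0, rng=None):
    """Count the teachers' votes per class, add Laplace noise, return the argmax.

    teacher_labels: 1-D array of per-teacher predicted labels for one query.
    laplace_scale: scale of the Laplace noise added to each count (illustrative).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=laplace_scale, size=num_classes)
    return int(np.argmax(counts))
```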

  8. Intuitive privacy analysis: if most teachers agree on the label, it does not depend on specific partitions, so the privacy cost is small. If two classes have close vote counts, the disagreement may reveal private information.

  9. Student training. [Diagram: the aggregated teacher answers queries on public data, and the student is trained on those labeled queries. The sensitive data, the partitions, and the teachers are not available to the adversary; the student and the public data are.]
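
A minimal sketch of the data flow on this slide, assuming a hypothetical noisy-aggregation callable (such as the one above) and a hypothetical training routine; only public inputs and aggregated labels ever reach the student:

```python
def train_student(aggregate_label, public_inputs, fit_student):
    """Label public data with the noisy aggregate teacher, then train the student.

    aggregate_label: callable mapping one public input to a (noisy) label.
    fit_student: callable that trains and returns a student model from
                 (inputs, labels); both callables are hypothetical placeholders.
    """
    labels = [aggregate_label(x) for x in public_inputs]
    return fit_student(public_inputs, labels)
```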

  10. Why train an additional “student” model? The aggregated teacher violates our threat model: (1) each prediction increases the total privacy loss, and privacy budgets create a tension between accuracy and the number of predictions; (2) inspection of internals may reveal private data, and privacy guarantees should hold in the face of white-box adversaries.

  11. Label-efficient learning • More queries to the teacher while training the student = more privacy lost • Use a semi-supervised GAN (Salimans et al., 2016) to achieve high accuracy with few labels

  12. Supervised discriminator for semi-supervised learning. [Diagram: a discriminator network whose hidden units feed an output over the real classes (“cat”, “dog”, ...) plus a “fake” class; learn to read with 100 labels rather than 60,000.] (Odena 2016; Salimans et al., 2016; figure: Goodfellow 2018)
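
A sketch of the K+1-class discriminator idea (Salimans et al., 2016): K real classes plus one extra “fake” output. The layer sizes and MNIST-style input dimension are illustrative, not the slide's architecture:

```python
import torch.nn as nn

class SemiSupervisedDiscriminator(nn.Module):
    """Discriminator that also acts as a classifier: K real classes + 1 fake class."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes + 1),  # last logit scores "fake"
        )

    def forward(self, x):
        # Logits over the K real classes plus the extra "fake" class; labeled
        # real data trains the first K outputs, generated data trains the last.
        return self.net(x)
```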

  13. Trade-off between student accuracy and privacy. [Figure: the trade-off curve.]

  14. Scalable PATE • Nicolas Papernot*, Shuang Song*, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson

  15. Limitations of the first PATE paper • Evaluated only on MNIST / SVHN • Very clean data • Only 10 classes (easier to get consensus) • Scalable PATE targets: • More classes • Unbalanced classes • Mislabeled training examples

  16. Improvements • Noisy votes use a Gaussian rather than a Laplace distribution • More likely to achieve consensus for a large number of classes • Selective teacher response

  17. Selective Teacher Response • Check for overwhelming consensus • Use high-variance noise • Check whether the noisy votes for the argmax exceed a threshold T • Consensus? Publish noisy votes with smaller variance • No consensus? Don’t publish anything; the student skips the query • Note: running the noisy consensus check still spends some of the privacy budget (a sketch follows below)
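
A minimal sketch of this selective response, in the spirit of the Confident-GNMax aggregator; the threshold and noise scales below are placeholders, not the paper's tuned values:

```python
import numpy as np

def selective_teacher_response(counts, threshold=300.0, sigma_check=150.0,
                               sigma_answer=40.0, rng=None):
    """Answer a student query only when the teachers show overwhelming consensus.

    counts: per-class vote counts for one query.
    Returns a label if the high-variance consensus check passes, else None.
    Note: the consensus check itself still consumes privacy budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    # 1) Consensus check with high-variance Gaussian noise.
    if counts.max() + rng.normal(0.0, sigma_check) < threshold:
        return None  # no consensus: publish nothing, the student skips this query
    # 2) Consensus: answer with a lower-variance noisy argmax.
    return int(np.argmax(counts + rng.normal(0.0, sigma_answer, size=counts.shape)))
```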

  18. Background: adversarial training. [Diagram: an image labeled as “bird” is given a small perturbation chosen to decrease the probability of the bird class, while the perturbed image still has the same true label (bird).]
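
One standard way to construct such a label-preserving perturbation, not necessarily the exact one on this slide, is the fast gradient sign method (Goodfellow et al., 2014):

```latex
% Perturb x in the direction that increases the loss J for the true label y,
% with a small step size \epsilon so the true label (bird) is unchanged.
\tilde{x} \;=\; x \;+\; \epsilon \cdot \operatorname{sign}\!\big(\nabla_x J(\theta, x, y)\big)
```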

  19. Virtual adversarial training. [Diagram: the model’s guess for an unlabeled example is “probably a bird, maybe a plane”; an adversarial perturbation intended to change that guess is applied, and the new guess should match the old guess (probably bird, maybe plane).] (Miyato et al., 2015)
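
Written out, the VAT objective the slide describes (paraphrasing Miyato et al., 2015; here \(\hat{\theta}\) is a fixed copy of the current parameters, so the “old guess” is not pushed toward the perturbed one):

```latex
r_{\mathrm{adv}} \;=\; \arg\max_{\|r\|_2 \le \epsilon}
  D_{\mathrm{KL}}\!\big(\,p(y \mid x; \hat{\theta}) \,\big\|\, p(y \mid x + r; \theta)\,\big),
\qquad
\mathcal{L}_{\mathrm{VAT}}(x) \;=\;
  D_{\mathrm{KL}}\!\big(\,p(y \mid x; \hat{\theta}) \,\big\|\, p(y \mid x + r_{\mathrm{adv}}; \theta)\,\big)
```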

  20. VAT performance (Oliver, Odena, Raffel, et al., 2018)

  21. Scalable PATE: improved results. Synergy between utility and privacy: (1) check privately for consensus; (2) run the noisy argmax only when the consensus is sufficient (LNMax = PATE, Confident-GNMax = Scalable PATE). Scalable Private Learning with PATE [ICLR 2018], Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson

  22. Scalable PATE: improved trade-off between student accuracy and privacy. [Figure: trade-off curve for Selective PATE.]

  23. The Secret Sharer • Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, Dawn Song

  24. Secret with format known to adversary • “My social security number is ___-__-____” (the blanks are the secret) • Measure memorization with exposure

  25. Definitions • Suppose the model assigns probability p to the actual secret • The rank of the secret is the number of candidate strings assigned probability ≥ p (its position when candidates are sorted by decreasing probability) • Minimum value is 1 • Exposure: the negative log probability of sampling a candidate string whose probability is at least p • Equivalently: exposure = log (# possible strings) - log rank (written out below)
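
Written out, with \(\mathcal{R}\) the set of possible secrets (the Secret Sharer paper uses base-2 logarithms):

```latex
\mathrm{exposure}(s) \;=\; \log_2 |\mathcal{R}| \;-\; \log_2 \mathrm{rank}(s)
\;=\; -\log_2 \frac{\mathrm{rank}(s)}{|\mathcal{R}|}
```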

  26. Practical Experiments • Can estimate exposure via sampling (see the sketch below) • Can approximately find the most likely secret value with optimization (beam search)
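
A minimal sketch of the sampling-based exposure estimate, under the assumption that candidate secrets are scored by the model's log-probability; the variable names and estimator details are illustrative, not the paper's exact procedure:

```python
import math
import numpy as np

def estimate_exposure(secret_logprob, sampled_logprobs, space_size):
    """Estimate exposure by comparing the secret against uniformly sampled candidates.

    secret_logprob: model log-probability of the true secret.
    sampled_logprobs: log-probabilities of uniformly sampled candidate secrets.
    space_size: number of possible secrets, e.g. 10**9 for a 9-digit number.
    """
    sampled_logprobs = np.asarray(sampled_logprobs)
    # Fraction of candidates the model ranks at least as likely as the secret.
    frac_at_least_as_likely = float(np.mean(sampled_logprobs >= secret_logprob))
    est_rank = max(1.0, frac_at_least_as_likely * space_size)  # rank >= 1
    return math.log2(space_size) - math.log2(est_rank)
```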

  27. Memorization during learning

  28. Observations • Exposure is high • Exposure rises early during learning • Exposure is not caused by overfitting: it peaks before overfitting occurs

  29. Comparisons • Across architectures: more accuracy -> more exposure • LSTM / GRU: high accuracy, high exposure • CNN: lower accuracy, lower exposure • Larger batch size -> more memorization • Larger model -> more memorization • Secret memorization happens even when the compressed model is smaller than the compressed dataset • Choice of optimizer: no significant difference

  30. Defenses • Regularization does not work: weight decay, dropout, weight quantization • Differential privacy works, as guaranteed • Even for very large epsilon, where the theoretical guarantee is weak, the exposure measured in practice decreases significantly

  31. Questions
