  1. Making Generalization Robust Katrina Ligett HUJI & Caltech joint with Rachel Cummings, Kobbi Nissim, Aaron Roth, and Steven Wu

  2. A model for science…

  3. A model for science…

4. Hypothesis Learning Alg
• domain: contains all possible examples
• hypothesis: X → {0,1}, labels examples
• learning algorithm samples labeled examples, returns a hypothesis

5. Hypothesis Learning Alg
The goal of science: find a hypothesis that has low true error on the distribution D:
err(h) = Pr_{x~D}[ h(x) ≠ h*(x) ]

  6. Why does science work?

  7. Why does science work?

8. Hypothesis Learning Alg
The goal of science: find a hypothesis that has low true error on the distribution D:
err(h) = Pr_{x~D}[ h(x) ≠ h*(x) ]
Idea: find a hypothesis that has low empirical error on the sample S, plus a guarantee that findings on the sample generalize to D

9. Hypothesis Learning Alg
Empirical error: err_E(h) = (1/n) ∑_{x ∈ S} 1[ h(x) ≠ h*(x) ]
Generalization: output h such that Pr[ |err_E(h) − err(h)| ≤ α ] ≥ 1 − β
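
To make the two quantities concrete, here is a minimal numerical sketch (my illustration, not from the slides): a fixed hypothesis with a slightly-off threshold, its empirical error on a sample S, and an estimate of its true error on the distribution D.

```python
import numpy as np

rng = np.random.default_rng(0)

def h_star(x):           # true labeling rule: threshold at 0.5
    return (x >= 0.5).astype(int)

def h(x):                # hypothesis under evaluation: slightly wrong threshold
    return (x >= 0.55).astype(int)

# Sample S of n points from the distribution D = Uniform[0, 1]
n = 200
S = rng.uniform(0, 1, size=n)

# Empirical error: err_E(h) = (1/n) * sum over x in S of 1[h(x) != h*(x)]
emp_err = np.mean(h(S) != h_star(S))

# True error err(h) = Pr_{x~D}[h(x) != h*(x)], estimated on a large fresh sample
fresh = rng.uniform(0, 1, size=1_000_000)
true_err = np.mean(h(fresh) != h_star(fresh))

print(f"empirical error on S: {emp_err:.3f}")
print(f"true error on D (approx.): {true_err:.3f}")   # about 0.05 here
```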

10. taken from Understanding Machine Learning, Shai Shalev-Shwartz and Shai Ben-David

  11. Problem solved!

12. Problem solved? Science doesn’t happen in a vacuum.

  13. One thing that can go wrong: post-processing

14. • Learning an SVM: the output encodes Support Vectors (actual sample points)
• This output could be post-processed to obtain a non-generalizing hypothesis: “10% of all data points are x_k”
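
A small sketch of this point (the use of scikit-learn here is my illustration, not the talk's): a fitted SVM exposes training points verbatim as support vectors, so a post-processor can read sample points straight out of the learner's output and build a predicate whose sample average does not generalize.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy training sample: 2-D points with a noisy linear boundary
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

clf = SVC(kernel="linear").fit(X, y)
sv = clf.support_vectors_          # actual training points, stored in the output

# Post-processing: the predicate q(x) = 1[x is one of the support vectors].
def q(x):
    return int(any(np.allclose(x, s) for s in sv))

# Its sample average is len(sv)/n (often well above 10% here), but under the
# continuous distribution D it is essentially 0 -- the predicate overfits S.
sample_avg = np.mean([q(x) for x in X])
fresh_avg = np.mean([q(x) for x in rng.normal(size=(1000, 2))])
print(f"sample average: {sample_avg:.2f}, fresh-data average: {fresh_avg:.2f}")
```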

15. “Oh, man. Our approach on this Kaggle competition really failed on the test data. Oh well, let’s try again.”
“Did you see that paper published by the Smith lab? Yeah, I bet they’d see an even bigger effect if they accounted for sunspots!”
“The journal requires open access to the data; let’s try it and see!”

16. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)

17. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)

18. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)
Adaptive composition can cause overfitting! Generalization guarantees don’t “add up”

19. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)
Adaptive composition can cause overfitting! Generalization guarantees don’t “add up”
• Pick parameters; fit model

20. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)
Adaptive composition can cause overfitting! Generalization guarantees don’t “add up”
• Pick parameters; fit model
• ML competitions

21. A second big problem: adaptive composition (q1 → a1, q2 → a2, …)
Adaptive composition can cause overfitting! Generalization guarantees don’t “add up”
• Pick parameters; fit model
• ML competitions
• Scientific fields that share one dataset
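
A minimal simulation of the phenomenon (my illustration; the dataset, dimensions, and two-round protocol are made up): a first analysis selects features that look correlated with the label on the sample, and a second analysis fits a model to those selected features. Each query on its own would concentrate around its true mean, but their adaptive composition overfits pure noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5000                     # n samples, d candidate features, all noise
X = rng.choice([-1, 1], size=(n, d))
y = rng.choice([-1, 1], size=n)       # labels independent of X: nothing to learn

# Round 1 of adaptive analysis: query the sample for features that
# look correlated with the label.
corr = X.T @ y / n
selected = np.argsort(-np.abs(corr))[:50]

# Round 2: build a classifier from the answers to round 1.
w = np.sign(corr[selected])
pred = np.sign(X[:, selected] @ w)

# In-sample accuracy looks impressive...
print("empirical accuracy:", np.mean(pred == y))

# ...but on fresh data from the same distribution it is chance (~0.5),
# because the two rounds composed adaptively and overfit the sample.
X_new = rng.choice([-1, 1], size=(n, d))
y_new = rng.choice([-1, 1], size=n)
print("true accuracy:", np.mean(np.sign(X_new[:, selected] @ w) == y_new))
```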

22. Some basic questions
• Is it possible to get good learning algorithms that are also robust to post-processing? To adaptive composition?
• How do we construct them? Do existing algorithms qualify? How much extra data do they need?
• Accuracy + generalization + post-processing-robustness = ?
• Accuracy + generalization + adaptive composition = ?
• What composes with what? How well (how quickly does generalization degrade)? Why?

23. Notice: generalization doesn’t require correct hypotheses, just that they perform the same on the sample as on the distribution. Generalization alone is easy (for instance, a constant hypothesis trivially generalizes). What’s interesting: generalization + accuracy.

24. Generalization + post-processing robustness
• Robust generalization: “no adversary can use the output to find a hypothesis that overfits”
• Information-theoretic (one could also consider a computational variant)
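
Slightly more formally (my paraphrase of the definition, so treat the exact quantifiers and parameters as approximate; err_S denotes empirical error on the sample S, err_D true error on D): an algorithm A is (α, β)-robustly generalizing if, for every distribution D and every post-processing adversary B,

\[
\Pr_{S \sim D^n,\ \text{coins of } \mathcal{A},\mathcal{B}}
\Bigl[\ \bigl|\mathrm{err}_S(h) - \mathrm{err}_D(h)\bigr| \le \alpha
\ \text{ for } h = \mathcal{B}(\mathcal{A}(S)) \ \Bigr] \ \ge\ 1 - \beta .
\]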

  25. Robust Generalization

26. Robust Generalization
• Robust to post-processing
• Somewhat robust to adaptive composition (more on this later)

  27. Do Robustly-Generalizing Algs Exist?

  28. Do Robustly-Generalizing Algs Exist? Yes!

  29. Do Robustly-Generalizing Algs Exist? Yes!

30. Do Robustly-Generalizing Algs Exist? Yes!
• This paper: Compression Schemes → Robust Generalization

31. Do Robustly-Generalizing Algs Exist? Yes!
• This paper: Compression Schemes → Robust Generalization
• [DFHPRR15a]: Bounded description length → Robust Generalization

32. Do Robustly-Generalizing Algs Exist? Yes!
• This paper: Compression Schemes → Robust Generalization
• [DFHPRR15a]: Bounded description length → Robust Generalization
• [BNSSSU16]: Differential privacy → Robust Generalization

33. Do Robustly-Generalizing Algs Exist? Yes!
• This paper: Compression Schemes → Robust Generalization
• [DFHPRR15a]: Bounded description length → Robust Generalization
• [BNSSSU16]: Differential privacy → Robust Generalization

34. Compression schemes (diagram; label: Hypothesis)

35. Compression schemes (diagram; label: Hypothesis)

36. Robust Generalization via compression (diagram: algorithm A)
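
For intuition, a minimal sketch of what a sample compression scheme looks like (the 1-D threshold example and all names here are mine, not the paper's construction): the learner's output is determined by a small subsequence of the labeled sample, from which the hypothesis is reconstructed.

```python
# Minimal sketch of a sample compression scheme for 1-D threshold functions
# h_t(x) = 1[x >= t] on a realizable sample.  The compressor keeps a single
# labeled example; the reconstructor rebuilds a consistent hypothesis from it.

def compress(sample):
    """Keep the smallest positively-labeled point (nothing if there is none)."""
    positives = [x for x, y in sample if y == 1]
    return [(min(positives), 1)] if positives else []

def reconstruct(compressed):
    """Rebuild a threshold hypothesis from the kept point(s)."""
    if not compressed:
        return lambda x: 0              # no positives seen: all-zero hypothesis
    t = compressed[0][0]
    return lambda x: int(x >= t)

# Usage: the hypothesis depends on the sample only through the compressed set.
sample = [(0.2, 0), (0.4, 0), (0.7, 1), (0.9, 1)]
h = reconstruct(compress(sample))
print([h(x) for x, _ in sample])        # consistent with the sample: [0, 0, 1, 1]
```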

37. What Can be Learned under RG? Theorem (informal; thanks to Shay Moran): the sample complexity of robustly generalizing learning is the same, up to log factors, as the sample complexity of PAC learning

38. Do Robustly-Generalizing Algs Exist? Yes!
• This paper: Compression Schemes → Robust Generalization
• [DFHPRR15a]: Bounded description length → Robust Generalization
• [BNSSSU16]: Differential privacy → Robust Generalization

  39. Differential Privacy [DMNS ‘06]

40. Differential Privacy [DMNS ‘06]
• Robust to post-processing [DMNS ‘06] and adaptive composition [DRV ‘10]
• Necessarily randomized output
• No mention of how the samples are drawn!
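
Recall the definition: a randomized algorithm M is (ε, δ)-differentially private if for all datasets S, S′ differing in one entry and all events T, Pr[M(S) ∈ T] ≤ e^ε · Pr[M(S′) ∈ T] + δ. As a concrete (ε, 0)-DP example, here is a sketch of the standard Laplace mechanism (my illustration; not necessarily the mechanism used in the talk):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """(epsilon, 0)-differentially private release of a numeric statistic:
    add Laplace noise with scale sensitivity / epsilon."""
    return value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(3)
data = rng.uniform(0, 1, size=1000)          # n values in [0, 1]

# Changing one entry moves the mean by at most 1/n, so sensitivity = 1/n.
noisy_mean = laplace_mechanism(data.mean(), sensitivity=1 / len(data),
                               epsilon=0.5, rng=rng)
print(f"private mean estimate: {noisy_mean:.3f}")
```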

  41. Does DP = RG?

  42. Does DP = RG?

  43. Does DP = RG?

44. Does DP = RG? There is no “quick fix” to make an RG learner satisfy DP

45. Notions of generalization
• Robust generalization: “no adversary can use output to find a hypothesis that overfits”
• Differential privacy [DMNS ‘06]: “similar samples should have the same output”
• Perfect generalization: “output reveals nothing about the sample”

  46. Perfect Generalization
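
Paraphrasing the definition from memory, so the exact parameters and quantifiers should be treated as approximate: an algorithm A is (β, ε, δ)-perfectly generalizing if for every distribution D there is a simulator Sim_D, which sees only D and not the sample, such that with probability at least 1 − β over S ~ D^n the output distribution of A(S) is (ε, δ)-indistinguishable from that of Sim_D:

\[
\Pr[\mathcal{A}(S) \in T] \ \le\ e^{\varepsilon}\,\Pr[\mathrm{Sim}_D \in T] + \delta
\quad \text{for every event } T
\ \text{(and symmetrically with } \mathcal{A}(S) \text{ and } \mathrm{Sim}_D \text{ swapped).}
\]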

47. PG as a privacy notion
• Differential privacy gives privacy to the individual: changing one entry in the database shouldn’t change the output too much
• Perfect generalization gives privacy to the data provider (e.g. school, hospital): changing the entire sample to something “typical” shouldn’t change the output too much

  48. Exponential Mechanism [MT07]
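
A sketch of the mechanism (my illustration of [MT07]; the scoring function and toy dataset are made up): given a dataset, a set of candidate outputs, and a score function with bounded sensitivity, sample a candidate with probability proportional to exp(ε · score / (2 · sensitivity)).

```python
import numpy as np

def exponential_mechanism(candidates, score, data, sensitivity, epsilon, rng):
    """Exponential mechanism [MT07]: sample a candidate with probability
    proportional to exp(epsilon * score / (2 * sensitivity))."""
    scores = np.array([score(data, c) for c in candidates])
    # Subtract the max score before exponentiating for numerical stability.
    weights = np.exp(epsilon * (scores - scores.max()) / (2 * sensitivity))
    probs = weights / weights.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy use: privately pick the most common value in a small dataset.
rng = np.random.default_rng(4)
data = [1, 1, 2, 2, 2, 3]
candidates = [1, 2, 3]
count_score = lambda d, c: d.count(c)   # sensitivity 1: one record changes a count by 1
print(exponential_mechanism(candidates, count_score, data,
                            sensitivity=1, epsilon=1.0, rng=rng))
```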

  49. DP implies PG with worse parameters

  50. PG implies DP…sort of

  51. PG implies DP…sort of

  52. PG implies DP…sort of Problems that are solvable under PG are also solvable under DP

53. Notions of generalization
• Robust generalization: “no adversary can use output to find a hypothesis that overfits”
• Differential privacy [DMNS ‘06]: “similar samples should have the same output”
• Perfect generalization: “output reveals nothing about the sample”

54. Some basic questions
• Is it possible to get good learning algorithms that are also robust to post-processing? To adaptive composition?
• How do we construct them? Do existing algorithms qualify? How much extra data do they need?
• Accuracy + generalization + post-processing-robustness = ?
• Accuracy + generalization + adaptive composition = ?
• What composes with what? How well (how quickly does generalization degrade)? Why?

  55. Making Generalization Robust Katrina Ligett katrina.ligett@mail.huji.ac.il HUJI & Caltech joint with Rachel Cummings, Kobbi Nissim, Aaron Roth, and Steven Wu
