

  1. Differential Privacy: Techniques Beyond Differential Privacy. Steven Wu, Assistant Professor, University of Minnesota.

  2. “Differential privacy? Isn’t it just adding noise?”

  3. How do we add smart noise to guarantee privacy without sacrificing utility in private data analysis? And how do we add smart noise to achieve stability, and thereby gain utility, in data analysis?!

  4. Technical Connections
  Picture: Differential Privacy at the center, connected to Algorithmic Mechanism Design, Adaptive Data Analysis, and Certified Robustness for Adversarial Examples.

  5. Outline
  • Simple Introduction to Differential Privacy
  • Mechanism Design
  • Adaptive Data Analysis
  • Certified Robustness


  7. Statistical Database
  • X: the set of all possible records (e.g. {0, 1}^d)
  • D ∈ X^n: a collection of n rows (“one row per person”)
  Picture: a sensitive database (e.g. medical records) is fed to a private algorithm, which releases output information.

  8. Privacy as a Stability Notion
  Picture: a database containing rows from individuals (Alice, Bob, …) is run through an algorithm whose output goes to a data analyst.
  Stability: the data analyst learns (approximately) the same information if any row is replaced by another person from the population.

  9. Differential Privacy [DN03, DMNS06]
  Picture: D = (D_1, D_2, D_3, …, D_n) and D′ = (D_1, D_2, D′_3, …, D_n) differ only in row 3.
  • D and D′ are neighbors if they differ in at most one row.
  • A private algorithm must have close output distributions on every pair of neighbors.
  Definition: A (randomized) algorithm A is ε-differentially private if for all neighbors D, D′ and every S ⊆ Range(A):
  Pr[A(D) ∈ S] ≤ e^ε · Pr[A(D′) ∈ S]

  10. Differential Privacy [DN03, DMNS06]
  Definition: A (randomized) algorithm A is (ε, δ)-differentially private if for all neighbors D, D′ and every S ⊆ Range(A):
  Pr[A(D) ∈ S] ≤ e^ε · Pr[A(D′) ∈ S] + δ
  One interpretation of the definition: if a bad event is very unlikely when I am not in the database (D), then it is still very unlikely when I am in the database (D′).
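The opening quip ("isn't it just adding noise?") has a classical concrete instance: the Laplace mechanism releases a numeric query's answer plus noise calibrated to the query's sensitivity. A minimal illustrative sketch in Python (not from the slides; the function and variable names are mine):

```python
import numpy as np

def laplace_mechanism(data, query, sensitivity, epsilon):
    """Release query(data) plus Laplace noise of scale sensitivity/epsilon;
    this satisfies epsilon-differential privacy (i.e., delta = 0)."""
    true_answer = query(data)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_answer + noise

# Example: a counting query has sensitivity 1, because changing one row
# moves the count by at most 1.
db = np.array([0, 1, 1, 0, 1])
private_count = laplace_mechanism(db, query=np.sum, sensitivity=1.0, epsilon=0.5)
```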

  11. Nice Properties of Differential Privacy
  • A quantitative privacy loss measure (ε) that bounds the cumulative privacy loss across different computations and databases
  • Resilience to arbitrary post-processing
  • The adversary's background knowledge is irrelevant
  • Compositional reasoning
  • Programmability: construct complicated private analyses from simple private building blocks
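To make the composition bullet concrete: under basic (sequential) composition, the ε parameters of successive private computations on the same database simply add. A toy budget tracker, assuming basic composition only (my own helper, not from the talk):

```python
# Basic sequential composition: running an eps1-DP algorithm and then an
# eps2-DP algorithm on the same database is (eps1 + eps2)-DP.
class PrivacyAccountant:
    def __init__(self, budget):
        self.budget = budget   # total epsilon we are willing to spend
        self.spent = 0.0

    def charge(self, epsilon):
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

acct = PrivacyAccountant(budget=1.0)
acct.charge(0.5)   # first private query
acct.charge(0.5)   # second private query; cumulative loss is now 1.0
```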

  12. Other Formulations
  • Rényi Differential Privacy [Mir17]
  • (Zero-)Concentrated Differential Privacy [DR16, BS16]
  • Truncated Concentrated Differential Privacy [BDRS18]

  13. Privacy as a Tool for Mechanism Design

  14. Warmup: Revenue Maximization
  Picture: four buyers, each with a private value for an apple; three value it at $1.00 and one at $4.01.
  • Set the price at $1.00: profit $4.00 (all four buy).
  • Set the price at $4.01: profit $4.01 (only one buys).
  • Best price: $4.01; 2nd best price: $1.00.
  • Set the price at $4.02: profit $0.
  • Set the price at $1.01: profit $1.01.
  Note the instability: changing a single buyer's value can move the optimal price from $1.00 to $4.01. The arithmetic can be checked with the sketch below.
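A few lines suffice to recompute the profits above, using the four valuations pictured on the slide:

```python
# Four buyers: three value an apple at $1.00, one at $4.01 (as on the slide).
values = [1.00, 1.00, 1.00, 4.01]

def profit(price):
    # Revenue = price times the number of buyers whose value meets it.
    return price * sum(v >= price for v in values)

for p in [1.00, 1.01, 4.01, 4.02]:
    print(f"price ${p:.2f} -> profit ${profit(p):.2f}")
# prints 4.00, 1.01, 4.01, 0.00 respectively
```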

  15. Incentivizing Truth-telling
  • A mechanism M : 𝒴^n → ℛ for some abstract range ℛ
  • 𝒴 = reported values; ℛ = {$1.00, $1.01, $1.02, $1.03, …}
  • Each agent i has a utility function u_i : ℛ → [−B, B]
  • For example, u_i(r) = 1[v_i ≥ r] · (v_i − r) if r is the selected price
  Definition. A mechanism M is α-approximately dominant strategy truthful if for any agent i with private value v_i, any reported value x_i from i, and any reported values x_{−i} from everyone else:
  𝔼_M[u_i(M(v_i, x_{−i}))] ≥ 𝔼_M[u_i(M(x_i, x_{−i}))] − α
  No matter what other people do, truthful reporting is (almost) the best strategy.

  16. Privacy ⇒ Truthfulness
  • A mechanism M : 𝒴^n → ℛ for some abstract range ℛ
  • Each agent i has a utility function u_i : ℛ → [−B, B]
  Theorem [MT07]. Any ϵ-differentially private mechanism M is ϵB-approximately dominant strategy truthful.
  Proof idea. Utilitarian view of the DP definition: for every utility function u_i,
  𝔼_M[u_i(M(x_i, x_{−i}))] ≥ exp(−ϵ) · 𝔼_M[u_i(M(x′_i, x_{−i}))]
  Since exp(−ϵ) ≥ 1 − ϵ and utilities are bounded by B, any misreport gains at most roughly ϵB.

  17. The Exponential Mechanism [MT07]
  • A mechanism M : 𝒴^n → ℛ for some abstract range ℛ
  • 𝒴 = reported values; ℛ = {$1.00, $1.01, $1.02, $1.03, …}
  • Paired with a quality score q : 𝒴^n × ℛ → ℝ
  • q(D, r) represents how good output r is for input data D (e.g., revenue)
  • Sensitivity Δq: for all neighboring D and D′ and all r ∈ ℛ, |q(D, r) − q(D′, r)| ≤ Δq

  18. The Exponential Mechanism [MT07]
  • Input: data set D, range ℛ, quality score q, privacy parameter ϵ
  • Select a random outcome r ∈ ℛ with probability ℙ[r] ∝ exp(ϵ · q(D, r) / (2Δq))
  Idea: make high-quality outputs exponentially more likely, at a rate that depends on the sensitivity Δq of the quality score and the privacy parameter ϵ.
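A minimal Python sketch of this sampling rule (illustrative only; the function name and the numerical-stability shift are mine, and a finite outcome list is assumed):

```python
import numpy as np

def exponential_mechanism(data, outcomes, quality, sensitivity, epsilon):
    """Sample an outcome r with probability proportional to
    exp(epsilon * quality(data, r) / (2 * sensitivity))."""
    scores = np.array([quality(data, r) for r in outcomes])
    # Shifting by the max score before exponentiating avoids overflow
    # and does not change the sampling distribution.
    weights = np.exp(epsilon * (scores - scores.max()) / (2 * sensitivity))
    probs = weights / weights.sum()
    return outcomes[np.random.choice(len(outcomes), p=probs)]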

  19. The Exponential Mechanism [MT07]
  • Input: data set D, range ℛ, quality score q, privacy parameter ϵ
  • Select a random outcome r ∈ ℛ with probability ℙ[r] ∝ exp(ϵ · q(D, r) / (2Δq))
  Theorem [MT07]. The exponential mechanism is ϵ-differentially private and O(ϵ)-approximately DS truthful, and with probability 1 − β the selected outcome r̂ satisfies
  q(D, r̂) ≥ OPT − 2Δq · log(|ℛ| / β) / ϵ
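As a usage example (my own, not from the slides), the warmup pricing problem fits this template: the quality score is revenue, and since changing one buyer's value changes the revenue at any price r by at most r, the largest price in the range bounds the sensitivity. This reuses the exponential_mechanism helper sketched above:

```python
values = [1.00, 1.00, 1.00, 4.01]
prices = [round(1.00 + 0.01 * k, 2) for k in range(401)]  # $1.00 .. $5.00

def revenue(vals, r):
    return r * sum(v >= r for v in vals)

# Changing one buyer's value shifts revenue at price r by at most r,
# so max(prices) bounds the sensitivity of the quality score.
chosen_price = exponential_mechanism(values, prices, revenue,
                                     sensitivity=max(prices), epsilon=1.0)
```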

  20. Limitations
  • Everything is only an approximate dominant strategy, not just truth-telling: sometimes it is easy to find a beneficial deviation.
  • [NST12, HK12] obtain exact truthfulness.
  • Many interesting problems cannot be solved under the standard constraint of differential privacy.
  • Next tool: Joint Differential Privacy.

  21. Allocation Problem
  • k types of goods, s copies of each
  • n buyers
  • Each buyer i has a private value v_i(j) = v_ij for each good j

  22. Mechanism Design Goal
  Design a mechanism M that computes a feasible allocation x_1, …, x_n and a set of item prices p_1, …, p_k such that:
  • The allocation maximizes social welfare SW = Σ_{i=1}^n v_i(x_i)
  • M is α-approximately dominant strategy truthful:
  𝔼_{M(V′)}[v_i(x_i) − p(x_i)] ≤ 𝔼_{M(V)}[v_i(x_i) − p(x_i)] + α
  for any V = (v_1, …, v_i, …, v_n) and V′ = (v_1, …, v′_i, …, v_n)

  23. Using Privacy as a Hammer?
  The allocation problem is impossible to solve under standard differential privacy:
  • The output of the algorithm is an assignment of items to the buyers.
  • Differential privacy requires the output to be insensitive to a change in any buyer's private valuation: the assignment would have to stay (almost) the same.
  • But to achieve high welfare, we have to give the buyers what they want.

  24. Structure of the Problem
  Picture: the n buyers' private values go into the algorithm, and the n buyers' assigned items come out.
  • Both the input and the output are partitioned amongst the n buyers.
  • The next best thing: protect each buyer's privacy from all the other buyers.

  25. Joint Differential Privacy (JDP) [KPRU14]
  Definition: Two inputs D, D′ are i-neighbors if they differ only in i's input. An algorithm A : X^n → R^n satisfies (ε, δ)-joint differential privacy if for all i, all i-neighbors D, D′, and every S ⊆ R^{n−1}:
  Pr[A(D)_{−i} ∈ S] ≤ e^ε · Pr[A(D′)_{−i} ∈ S] + δ
  Here A(D)_{−i} denotes the joint output to everyone except buyer i, so the algorithm's output to the others is insensitive to buyer i's data. Even if all the other buyers collude, they will not learn buyer i's private values!

  26. How to solve the allocation problem under joint differential privacy? [HHRRW14, HHRW16]
  Key idea: use prices computed under standard differential privacy as a coordination device among the buyers.

  27. Price Coordination under JDP
  The “billboard” dynamic iterates between a price (dual) player and the buyers (primal players):
  • Post the current prices (p^t_1, p^t_2, …, p^t_k) on a public billboard.
  • Each buyer best-responds to the prices separately, demanding a favorite item given the prices.
  • The aggregate demand gives gradient feedback; perturb the gradient for privacy.
  • Gradient descent update on the prices, yielding (p^{t+1}_1, …, p^{t+1}_k): raise prices on over-demanded goods, lower prices on under-demanded goods.
  Final solution (average allocation): each buyer uniformly randomly samples an item from their own sequence of best responses.
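A hedged sketch of this loop, assuming unit-demand buyers who always demand some item; the step size, noise scale, and round count are illustrative placeholders rather than the calibration from [HHRRW14, HHRW16]:

```python
import numpy as np

def private_price_coordination(valuations, supply, epsilon, rounds=100, eta=0.05):
    """Illustrative sketch of the billboard dynamic (not the papers' exact
    algorithm): post prices, let each buyer best-respond, then take a noisy
    gradient step on prices.  valuations: (n, k) array; supply: scalar s."""
    n, k = valuations.shape
    prices = np.zeros(k)
    history = []  # one row of best responses per round
    for _ in range(rounds):
        # Each buyer demands a most-preferred good at the current prices.
        demand_idx = (valuations - prices).argmax(axis=1)
        history.append(demand_idx)
        # Aggregate demand per good, perturbed for privacy (noise scale here
        # is a placeholder; the papers calibrate it via composition).
        demand = np.bincount(demand_idx, minlength=k).astype(float)
        noisy_demand = demand + np.random.laplace(scale=1.0 / epsilon, size=k)
        # Gradient step: raise prices on over-demanded goods,
        # lower them on under-demanded goods (keeping prices nonnegative).
        prices = np.maximum(prices + eta * (noisy_demand - supply), 0.0)
    # Average allocation: each buyer samples uniformly from their own
    # sequence of best responses.
    history = np.array(history)                       # shape (rounds, n)
    picks = history[np.random.randint(rounds, size=n), np.arange(n)]
    return prices, picks
```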

  28. Approximate Truthfulness
  Incentivizing truth-telling with privacy:
  • The final prices are computed under differential privacy, so they are insensitive to any single buyer's misreporting.
  • Each buyer receives their (approximately) most-preferred assignment given the final prices.
  • Hence truthful reporting is an approximate dominant strategy for every buyer.

  29. Extension to Combinatorial Auctions
  Allocating bundles of goods:
  • [HHRRW14] Gross substitutes valuations
  • [HHRW16] d-demand valuations (general valuations over bundles of size at most d)
  Compared to the VCG mechanism:
  • JDP gives item prices; VCG charges payments on bundles.
  • JDP is approximately envy-free; VCG is not envy-free.

  30. Joint Differential Privacy as a Hammer
  Meta-Theorem [KPRU14]: computing equilibria subject to joint differential privacy robustly incentivizes truth-telling.
  This solves large-market mechanism design problems:
  • [KMRW15] Many-to-one stable matching: the first approximately student-truthful mechanism for approximately school-optimal stable matchings without distributional assumptions.
  • [RR14, RRUW15] Coordinating traffic routing (with tolls).
  • [CKRW15] Equilibrium selection in anonymous games.
