Approximate Relational Reasoning for Probabilistic Programs


  1. Approximate Relational Reasoning for Probabilistic Programs. PhD Candidate: Federico Olmedo. Supervisor: Gilles Barthe. IMDEA Software Institute. PhD Examination – Technical University of Madrid, January 9, 2014.

  2–6. Selecting Locations for Rehabilitation Centers. Scenario: 2 new rehab centers to be opened; 4 feasible locations. Goal: select the locations that minimize average patient commute time. [Diagram builds, legend: homes hosting patients in treatment; feasible locations for rehab centers; rehab centers.] Optimum-solution approach: highest utility, but leakage of sensitive information.

  7–10. The Privacy–Utility Conflict. Differential Privacy (DP) [Dwork+, ICALP '06] mediates between utility and privacy. It contributes two things: (1) a privacy definition against which a selection algorithm can be judged, and (2) a privacy realization: basic mechanisms for numeric / discrete-valued computations, together with composition theorems.

  11. Differentially Private Location Selection [Gupta+, SODA '10]

    function kMedian(C, F_0)
      i ← 0;
      while i < T do
        (x, y) ←$ pick-swap(F_i × F_i);
        F_{i+1} ← (F_i \ {x}) ∪ {y};
        i ← i + 1
      end;
      j ←$ pick-solution([1, ..., T], F);
      return F_j
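The randomized selections pick-swap and pick-solution favour low-cost options; in [Gupta+, SODA '10] such selections are instances of the exponential mechanism of McSherry and Talwar. Below is a minimal Python sketch of that mechanism; the function names, the utility argument, and the default sensitivity are illustrative assumptions, not taken from the slides.

    import math
    import random

    def exponential_mechanism(candidates, utility, eps, sensitivity=1.0):
        # Sample a candidate c with probability proportional to
        # exp(eps * utility(c) / (2 * sensitivity)): higher-utility
        # candidates are exponentially more likely, yet changing one
        # database record shifts no output probability by more than
        # a factor of e^eps.
        scores = [utility(c) for c in candidates]
        top = max(scores)  # shift by the max for numerical stability
        weights = [math.exp(eps * (s - top) / (2.0 * sensitivity))
                   for s in scores]
        return random.choices(candidates, weights=weights, k=1)[0]

    # Hypothetical use for one local-search step: pick a swap with
    # utility = negated k-median cost after performing the swap.
    # swap = exponential_mechanism(all_swaps, lambda s: -cost_after(s), eps)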

  12. Verifying Differential Privacy. Dynamic verification: PINQ [McSherry '09], Airavat [Roy+ '10]. Static verification: Fuzz [Reed & Pierce '10], DFuzz [Gaboardi+ '13], [Chaudhuri+ '11]. Limitations of these techniques: they handle only programs that are combinations of basic mechanisms, support only standard differential privacy, and work over a fixed set of domains and/or operations.

  13. In this Dissertation. Our goal: verify differential privacy properties of probabilistic programs. We want our technique to circumvent the limitations of existing techniques, provide strong evidence of correctness, and be extensible to reason about other quantitative properties of probabilistic programs.

  14–15. Outline: 1. Motivation; 2. Verification of Differential Privacy; 3. Extensions of our Technique; 4. Summary and Conclusions.

  16–19. Differential Privacy – Definition. [Diagram builds: a data-mining process, instantiated with the location-selection example, mapping a database to an observable output.]

  20–21. Differential Privacy – Definition. A randomized mechanism K is ε-differentially private iff for all databases d_1 and d_2, and all events A,

    Δ(d_1, d_2) ≤ 1  ⟹  Pr[K(d_1) ∈ A] ≤ e^ε · Pr[K(d_2) ∈ A]

that is, on adjacent databases the ratio of the two output probabilities is bounded by e^ε.

  22. Differential Privacy – Definition. A randomized mechanism K is (ε, δ)-differentially private iff for all databases d_1 and d_2, and all events A,

    Δ(d_1, d_2) ≤ 1  ⟹  Pr[K(d_1) ∈ A] ≤ e^ε · Pr[K(d_2) ∈ A] + δ
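As a concrete instance (ours, not on the slides), randomized response over a single sensitive bit is the textbook ε-DP mechanism, with δ = 0:

    import math
    import random

    def randomized_response(bit, eps):
        # Report the true bit with probability e^eps / (1 + e^eps),
        # and the flipped bit otherwise.  For the two adjacent inputs
        # 0 and 1, the ratio of the probabilities of any output is
        # exactly e^eps, so the mechanism is eps-DP (delta = 0).
        p_truth = math.exp(eps) / (1.0 + math.exp(eps))
        return bit if random.random() < p_truth else 1 - bit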

  23. Differential Privacy – Fundamentals. Basic mechanism for numeric queries: releasing f(d) + ν, where ν is random noise calibrated to f (e.g. Laplace noise), is ε-DP. Composition theorem: running an ε-DP computation followed by an ε′-DP computation is (ε + ε′)-DP overall.
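A minimal Python sketch of both fundamentals, assuming a numeric query of known sensitivity; the counting query and all names are illustrative assumptions:

    import random

    def laplace_mechanism(value, sensitivity, eps):
        # Release value + Lap(sensitivity / eps): the standard
        # eps-DP Laplace mechanism for numeric queries.
        scale = sensitivity / eps
        # A Laplace(0, scale) sample is the difference of two
        # independent exponential samples with mean `scale`.
        noise = (random.expovariate(1.0 / scale)
                 - random.expovariate(1.0 / scale))
        return value + noise

    # Sequential composition: an eps-DP release followed by an
    # eps'-DP release about the same database is (eps + eps')-DP.
    count = 42                              # hypothetical counting query, sensitivity 1
    a = laplace_mechanism(count, 1.0, 0.5)  # consumes eps  = 0.5
    b = laplace_mechanism(count, 1.0, 0.3)  # consumes eps' = 0.3; total budget 0.8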

  24–27. Verifying Differential Privacy – Our Approach. Differential privacy is a quantitative 2-safety property:

    Δ(d_1, d_2) ≤ 1  ⟹  ∀A. Pr[K(d_1) ∈ A] ≤ e^ε · Pr[K(d_2) ∈ A] + δ

Here Δ(d_1, d_2) ≤ 1 is a relational pre-condition, and the probability bound is a quantitative relational post-condition. We propose a quantitative probabilistic relational Hoare logic with judgments {Ψ} c_1 ∼_{α,δ} c_2 {Φ} such that a program c is (ε, δ)-DP iff

    {≃} c ∼_{e^ε,δ} c {≡}

where the pre-condition ≃ states adjacency of the input databases and the post-condition ≡ states equality of the observable output.

  28–29. Relational Program Reasoning. Standard Hoare logic: a judgment ⊨ {Ψ} c {Φ} states that running c on any initial memory m satisfying Ψ yields a final memory m′ satisfying Φ. Relational Hoare logic: a judgment ⊨ {Ψ} c_1 ∼ c_2 {Φ} relates two programs: running c_1 and c_2 on any pair of initial memories m_1, m_2 related by Ψ yields final memories m′_1, m′_2 related by Φ.
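In symbols (our reconstruction of the two diagrams, for the deterministic case):

    \models \{\Psi\}\; c\; \{\Phi\} \iff \forall m.\ \Psi(m) \Rightarrow \Phi(\llbracket c \rrbracket\, m)

    \models \{\Psi\}\; c_1 \sim c_2\; \{\Phi\} \iff \forall m_1, m_2.\ m_1\,\Psi\,m_2 \Rightarrow (\llbracket c_1 \rrbracket\, m_1)\ \Phi\ (\llbracket c_2 \rrbracket\, m_2)

For probabilistic programs, ⟦c⟧ m is a distribution over memories, so the post-condition cannot be applied to it directly; the lifting operation introduced next plays exactly that role.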

  30–35. Characterizing Differential Privacy. Our goal: c is (ε, δ)-DP iff {≃} c ∼_{e^ε,δ} c {≡}. To achieve this we rely on a lifting operation and a distance measure (for α ≥ 1, δ ≥ 0):

    L^δ_α(·) : P(A × B) → P(D_A × D_B)
    Δ_α(·, ·) : D_A × D_A → R≥0

The judgment {Ψ} c_1 ∼_{α,δ} c_2 {Φ} is interpreted as

    m_1 Ψ m_2  ⟹  (⟦c_1⟧ m_1) L^δ_α(Φ) (⟦c_2⟧ m_2),

so the target judgment {≃} c ∼_{e^ε,δ} c {≡} unfolds to

    m_1 ≃ m_2  ⟹  (⟦c⟧ m_1) L^δ_{e^ε}(≡) (⟦c⟧ m_2).

On the other side, c is (ε, δ)-DP iff for all memories m_1 and m_2,

    m_1 ≃ m_2  ⟹  ∀A. Pr[c(m_1) ∈ A] ≤ e^ε · Pr[c(m_2) ∈ A] + δ,

or equivalently, in terms of the distance measure,

    m_1 ≃ m_2  ⟹  Δ_{e^ε}(⟦c⟧ m_1, ⟦c⟧ m_2) ≤ δ.
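The slides keep L^δ_α and Δ_α abstract; for reference, in the associated CertiPriv line of work [Barthe+, POPL '12] the α-distance is (up to presentation) the skew distance

    \Delta_\alpha(\mu_1, \mu_2) \;=\; \max_{A}\ \max\bigl(\mu_1(A) - \alpha\,\mu_2(A),\ \mu_2(A) - \alpha\,\mu_1(A),\ 0\bigr)

so Δ_{e^ε}(μ_1, μ_2) ≤ δ states the (ε, δ) bound in both directions, and the lifting μ_1 L^δ_α(Φ) μ_2 asks for witness distributions over pairs whose marginals match μ_1 and μ_2, whose support lies in Φ, and which are within α-distance δ of each other.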
