Adversarially Learned Representations for Information Obfuscation and Inference

  1. Adversarially Learned Representations for Information Obfuscation and Inference. Martin Bertran (1), Natalia Martinez (1), Afroditi Papadaki (2), Qiang Qiu (1), Miguel Rodrigues (2), Galen Reeves (1), Guillermo Sapiro (1). Affiliations: 1. Duke University, 2. University College London.

  2. Motivation: Why do users share their data? [Diagram: a user shares data (a facial image) with a service provider; the provider performs a utility task (subject verification) and returns a decision to the user.]

  3. Motivation: Why do users share their data? [Diagram: as above, but the shared facial image also exposes sensitive attributes such as emotion, gender, and race to the service provider.]

  4. Motivation: Can we do better?

  5. Motivation: Can we do better? [Diagram: the user shares a filtered image rather than the raw facial image; the service provider can still perform subject verification, while sensitive attributes such as gender are obfuscated.] Learn space-preserving representations that obfuscate sensitive information while preserving utility.

  6. Motivation. Example: preserve gender and obfuscate emotion. Male face: P(Male) = 0.98 (original) vs. 0.98 (filtered); P(Smile) = 0.78 (original) vs. 0.38 (filtered). Female face: P(Female) = 0.99 (original) vs. 0.99 (filtered); P(Serious) = 0.98 (original) vs. 0.31 (filtered).

  7. Motivation. Example: preserve subject identity and obfuscate gender. Male face: P(Male) = 0.99 (original) vs. 0.70 (filtered), subject verified in both. Female face: P(Female) = 0.99 (original) vs. 0.54 (filtered), subject verified in both.

  8. Sample of related work: • (2003) Chechik et al., Extracting relevant structures with side information. • (2016) Basciftci et al., On privacy-utility tradeoffs for constrained data release mechanisms. • (2018) Madras et al., Learning adversarially fair and transferable representations. • (2018) Sun et al., A hybrid model for identity obfuscation by face replacement.

  9. Problem formulation. [Diagram: a utility variable and a sensitive variable generate high-dimensional data, from which sanitized data are produced.]

  10. Problem formulation. Utility and sensitive variables (U, S) ∼ p(U, S); high-dimensional data; sanitized data.

  11. Problem formulation. Utility and sensitive variables (U, S) ∼ p(U, S); high-dimensional data X ∼ p(X | U, S); sanitized data.

  12. Problem formulation. Utility and sensitive variables (U, S) ∼ p(U, S); high-dimensional data X ∼ p(X | U, S); sanitized data Y ∼ p(Y | X), which is our objective.

  13. Problem formulation. Utility and sensitive variables (U, S) ∼ p(U, S); high-dimensional data X ∼ p(X | U, S); sanitized data Y ∼ p(Y | X), which is our objective. We want to learn Y ∼ p(Y | X) such that: p(S | Y) ≈ p(S) and p(U | Y) ≈ p(U | X).

  14. Problem formulation. We want to learn Y ∼ p(Y | X) such that: p(S | Y) ≈ p(S), i.e. min D_KL[p(S | Y) || p(S)], and p(U | Y) ≈ p(U | X).

  15. Problem formulation. We want to learn Y ∼ p(Y | X) such that: p(S | Y) ≈ p(S), i.e. min D_KL[p(S | Y) || p(S)], and p(U | Y) ≈ p(U | X), i.e. min D_KL[p(U | X) || p(U | Y)].

  16. Problem formulation. We want to learn Y ∼ p(Y | X) such that: min D_KL[p(S | Y) || p(S)] and min D_KL[p(U | X) || p(U | Y)].

  17. Problem formulation. We want to learn Y ∼ p(Y | X) such that: min D_KL[p(S | Y) || p(S)], whose expectation E_Y[·] equals I(S; Y), and min D_KL[p(U | X) || p(U | Y)].

  18. Problem formulation. We want to learn Y ∼ p(Y | X) such that: min D_KL[p(S | Y) || p(S)], whose expectation E_Y[·] equals I(S; Y), and min D_KL[p(U | X) || p(U | Y)], whose expectation E_{X,Y}[·] equals I(U; X | Y).

  20. Problem formulation. We want to learn Y ∼ p(Y | X) such that: min D_KL[p(S | Y) || p(S)] (E_Y[·] = I(S; Y)) and min D_KL[p(U | X) || p(U | Y)] (E_{X,Y}[·] = I(U; X | Y)). Objective: min over p(Y | X) of I(U; X | Y), subject to I(S; Y) ≤ k.

  21. Problem formulation. We want to learn Y ∼ p(Y | X) such that: min D_KL[p(S | Y) || p(S)] (E_Y[·] = I(S; Y)) and min D_KL[p(U | X) || p(U | Y)] (E_{X,Y}[·] = I(U; X | Y)). Objective: min over p(Y | X) of I(U; X | Y), subject to I(S; Y) ≤ k; minimizing I(U; X | Y) over p(Y | X) is equivalent to maximizing I(U; Y).
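
The expectations and the final equivalence on slides 17-21 can be written in display form as follows; this restates the slide content, assuming the Markov structure (U, S) → X → Y implied by Y ∼ p(Y | X).

```latex
\begin{align*}
  \mathbb{E}_{Y}\!\left[ D_{\mathrm{KL}}\!\left[ p(S \mid Y) \,\|\, p(S) \right] \right] &= I(S;Y), \\
  \mathbb{E}_{X,Y}\!\left[ D_{\mathrm{KL}}\!\left[ p(U \mid X) \,\|\, p(U \mid Y) \right] \right] &= I(U;X \mid Y), \\
  \min_{p(Y \mid X)} I(U;X \mid Y) \ \text{s.t.}\ I(S;Y) \le k
    \quad &\Longleftrightarrow \quad
  \max_{p(Y \mid X)} I(U;Y) \ \text{s.t.}\ I(S;Y) \le k.
\end{align*}
```

The last line uses I(U; X | Y) = I(U; X) − I(U; Y) under the Markov chain, with I(U; X) fixed by the data distribution.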

  22. Performance bounds. Given the objective: min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k.

  23. Performance bounds. Given the objective min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem?

  24. Performance bounds. Given the objective min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem? Lemma 1. Let U and S take values in finite alphabets and X ∼ p(X | U, S). Then min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k is at least min over p(Y | U, S) of I(U; X) − I(U; Y), s.t. I(S; Y) ≤ k and I(U; Y) ≤ I(U; X). With finite |Y| we can compute a sequence of upper bounds: the restricted |Y|-cardinality sequence (RCS).

  25. Performance bounds. Given the objective min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem? Lemma 2. Given (X, U, S) ∼ p(X, U, S), I(U; X | Y) ≥ −I(S; Y) + I(U; S) − I(U; S | X).

  26. Performance bounds. Given the objective min over p(Y | X) of I(U; X | Y) s.t. I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem? Lemma 2. Given (X, U, S) ∼ p(X, U, S), I(U; X | Y) ≥ −I(S; Y) + I(U; S) − I(U; S | X). Lemma 3. Given (X, U, S) ∼ p(X, U, S), for every k ≥ 0 there exists p(Y | X) such that I(S; Y) ≤ k and I(U; X | Y) = max(0, 1 − k / I(S; X)) · I(U; X).

  27. Performance bounds. Lemmas 1, 2 and 3 can be approximated using contingency tables. [Figure: trade-off between I(S; Y) (horizontal axis) and I(U; X | Y) (vertical axis), comparing the Lemma 1 bound (RCS), the Lemma 2 lower bound, and the Lemma 3 achievable upper bound, with reference values I(U; S), I(S; X), and I(U; X); sketch under the assumption that I(U; S | X) = 0.]
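
To make the contingency-table approximation on slide 27 concrete, the minimal sketch below estimates a mutual information term from an empirical joint-count table; the function and the toy counts are illustrative assumptions, not code or data from the paper.

```python
# Minimal sketch: estimating mutual information from a contingency table,
# the way terms such as I(U;S), I(S;X), and I(U;X) in Lemmas 1-3 can be
# approximated from data. The 2x2 counts below are purely illustrative.
import numpy as np

def mutual_information(counts: np.ndarray) -> float:
    """I(A;B) in nats from a 2-D table of joint counts for (A, B)."""
    p = counts / counts.sum()            # empirical joint p(a, b)
    pa = p.sum(axis=1, keepdims=True)    # marginal p(a)
    pb = p.sum(axis=0, keepdims=True)    # marginal p(b)
    mask = p > 0                         # skip zero cells to avoid log(0)
    return float((p[mask] * np.log(p[mask] / (pa * pb)[mask])).sum())

# Hypothetical contingency table of (U, S) counts.
counts_us = np.array([[40.0, 10.0],
                      [10.0, 40.0]])
print(f"I(U;S) estimate: {mutual_information(counts_us):.3f} nats")
```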

  28. Proposed framework.

  29. Proposed framework. Objective: min over p(Y | X) ∼ q_θ(X, Z) of I(U; X | Y), s.t. I(S; Y) ≤ k.

  30. Proposed framework. Objective: min over p(Y | X) ∼ q_θ(X, Z) of I(U; X | Y), s.t. I(S; Y) ≤ k. Optimization objective: min over p(Y | X) ∼ q_θ(X, Z) of [ I(U; X | Y) + λ max{I(S; Y) − k, 0}² ].
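
The hard constraint I(S; Y) ≤ k is relaxed into a quadratic penalty in the optimization objective above. A minimal sketch of that penalty, where `i_ux_given_y`, `i_sy_estimate`, `k`, and `lam` are hypothetical placeholder names for the estimated information terms and hyperparameters:

```python
# Sketch of the quadratic penalty relaxation on slide 30: the constraint
# I(S;Y) <= k becomes an additive term lam * max(I(S;Y) - k, 0)^2.
def penalized_objective(i_ux_given_y: float, i_sy_estimate: float,
                        k: float, lam: float) -> float:
    return i_ux_given_y + lam * max(i_sy_estimate - k, 0.0) ** 2
```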

  31. Implementation. Optimization objective: min over q_θ(X, Z) of [ I(U; X | Y) + λ max{I(S; Y) − k, 0}² ].

  32. Implementation. Optimization objective: min over q_θ(X, Z) of [ I(U; X | Y) + λ max{I(S; Y) − k, 0}² ]. Learning the stochastic mapping Y = q_θ(X, Z): approximate p(U | X) by p_φ̂(U | X) with φ̂ = argmin_φ E_{X,U}[ −log p_φ(U | X) ]; approximate p(U | Y) by p_ψ̂(U | Y) with ψ̂ = argmin_ψ E_{X,U,Z}[ −log p_ψ(U | q_θ̂(X, Z)) ]; approximate p(S | Y) by p_η̂(S | Y) with η̂ = argmin_η E_{X,S,Z}[ −log p_η(S | q_θ̂(X, Z)) ].

  33. Implementation. Optimization objective: min over q_θ(X, Z) of [ I(U; X | Y) + λ max{I(S; Y) − k, 0}² ]. Learning the stochastic mapping Y = q_θ(X, Z): approximate p(U | X) by p_φ̂(U | X) with φ̂ = argmin_φ E_{X,U}[ −log p_φ(U | X) ]; approximate p(U | Y) by p_ψ̂(U | Y) with ψ̂ = argmin_ψ E_{X,U,Z}[ −log p_ψ(U | q_θ̂(X, Z)) ]; approximate p(S | Y) by p_η̂(S | Y) with η̂ = argmin_η E_{X,S,Z}[ −log p_η(S | q_θ̂(X, Z)) ]. The mapping parameters are then updated as θ̂ = argmin_θ E_{X,Z}[ D_KL[ p_φ̂(U | X) || p_ψ̂(U | q_θ(X, Z)) ] ] + λ max( E_{X,Z}[ D_KL[ p_η̂(S | q_θ(X, Z)) || p(S) ] ] − k, 0 )².

  34. Implementation. Optimization objective: min over q_θ(X, Z) of [ I(U; X | Y) + λ max{I(S; Y) − k, 0}² ]. Learning the stochastic mapping Y = q_θ(X, Z): the classifiers p_φ(U | X), p_ψ(U | Y), and p_η(S | Y) are Xception networks, trained as φ̂ = argmin_φ E_{X,U}[ −log p_φ(U | X) ], ψ̂ = argmin_ψ E_{X,U,Z}[ −log p_ψ(U | q_θ̂(X, Z)) ], and η̂ = argmin_η E_{X,S,Z}[ −log p_η(S | q_θ̂(X, Z)) ]; the mapping q_θ is a U-Net with added noise, trained as θ̂ = argmin_θ E_{X,Z}[ D_KL[ p_φ̂(U | X) || p_ψ̂(U | q_θ(X, Z)) ] ] + λ max( E_{X,Z}[ D_KL[ p_η̂(S | q_θ(X, Z)) || p(S) ] ] − k, 0 )².
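
A minimal PyTorch sketch of the alternating training described on slides 32-34 follows. Small fully connected networks and random toy tensors stand in for the Xception classifiers, the U-Net filter, and the face images; all variable names, architectures, and hyperparameters here are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

d_x, d_y, n_u, n_s = 16, 16, 2, 2          # toy dimensions (assumptions)

# q_theta(X, Z): stochastic mapping producing the sanitized Y (a U-Net plus noise in the paper).
enc = torch.nn.Sequential(torch.nn.Linear(d_x + d_x, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, d_y))
# Auxiliary classifiers (Xception networks in the paper).
clf_u_x = torch.nn.Sequential(torch.nn.Linear(d_x, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, n_u))   # models p_phi(U | X)
clf_u_y = torch.nn.Sequential(torch.nn.Linear(d_y, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, n_u))   # models p_psi(U | Y)
clf_s_y = torch.nn.Sequential(torch.nn.Linear(d_y, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, n_s))   # models p_eta(S | Y)

opt_clf = torch.optim.Adam(list(clf_u_x.parameters()) + list(clf_u_y.parameters())
                           + list(clf_s_y.parameters()), lr=1e-3)
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
lam, k = 10.0, 0.3                          # penalty weight and leakage budget (assumptions)

for step in range(200):
    # Toy batch: random features and labels stand in for face images and attributes.
    x = torch.randn(128, d_x)
    u = torch.randint(0, n_u, (128,))
    s = torch.randint(0, n_s, (128,))
    z = torch.randn(128, d_x)               # noise input of the stochastic mapping
    y = enc(torch.cat([x, z], dim=1))

    # Step 1: fit the classifiers by maximum likelihood with the encoder frozen
    # (the argmin_phi / argmin_psi / argmin_eta updates on slide 32).
    loss_clf = (F.cross_entropy(clf_u_x(x), u)
                + F.cross_entropy(clf_u_y(y.detach()), u)
                + F.cross_entropy(clf_s_y(y.detach()), s))
    opt_clf.zero_grad(); loss_clf.backward(); opt_clf.step()

    # Step 2: update the encoder (the argmin_theta update on slide 33):
    # keep p(U | Y) close to p(U | X) and penalize leakage of S beyond the budget k.
    y = enc(torch.cat([x, z], dim=1))
    p_u_x = F.softmax(clf_u_x(x), dim=1).detach()
    log_p_u_y = F.log_softmax(clf_u_y(y), dim=1)
    utility_kl = F.kl_div(log_p_u_y, p_u_x, reduction="batchmean")  # D_KL[p(U|X) || p(U|Y)]

    p_s_y = F.softmax(clf_s_y(y), dim=1)
    prior_s = torch.full((1, n_s), 1.0 / n_s)   # assumes a uniform prior p(S)
    secrecy_kl = (p_s_y * (p_s_y.clamp_min(1e-8).log() - prior_s.log())).sum(dim=1).mean()

    loss_enc = utility_kl + lam * torch.clamp(secrecy_kl - k, min=0.0) ** 2
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
```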

  35. Experiments. Emotion obfuscation vs. gender detection. [Example original and filtered face images for k = ∞, 0.5, and 0.3.]

  36. Experiments. Emotion obfuscation vs. gender detection. [Further example original and filtered face images for k = ∞, 0.5, and 0.3.]

  37. Experiments. Gender obfuscation vs. subject verification. [Example original and filtered face images for k = ∞, 0.3, and 0.2.]

  38. Experiments. Gender obfuscation vs. subject verification. [Further example original and filtered face images for k = ∞, 0.3, and 0.2.]

  39. Experiments. Subject verification for consenting and nonconsenting users. [Example images for a consenting user and a nonconsenting user at k = ∞ and k = 0.5; the subject is verified in each case.]
