SLIDE 1

Adversarially Learned Representations for Information Obfuscation and Inference

Martin Bertran¹, Natalia Martinez¹, Afroditi Papadaki², Qiang Qiu¹, Miguel Rodrigues², Galen Reeves¹, Guillermo Sapiro¹

  • 1. Duke University
  • 2. University College London

SLIDE 2

Motivation

Why do users share their data? A user shares data (here, a facial image) with a service provider in exchange for utility, e.g., subject verification; sharing the data is the user's decision. The shared image, however, also exposes sensitive attributes such as emotion, gender, and race.

SLIDE 3

Motivation

Can we do better? Instead of the raw facial image, the user shares a filtered image: the goal is to learn space-preserving representations that obfuscate sensitive information (e.g., gender) while preserving utility (subject verification).

SLIDE 4

Motivation

Example: preserve gender & obfuscate emotion

[Figure: two subjects, original vs. filtered images.]
Original: P(Serious) = 0.98, P(Female) = 0.99 | P(Smile) = 0.78, P(Male) = 0.98
Filtered: P(Serious) = 0.31, P(Female) = 0.99 | P(Smile) = 0.38, P(Male) = 0.98

Gender predictions are unchanged by the filter, while emotion predictions are pushed toward chance.

SLIDE 5

Motivation

Example: preserve subject & obfuscate gender

[Figure: two subjects, original vs. filtered images.]
Original: P(Male) = 0.99, subject verified | P(Female) = 0.99, subject verified
Filtered: P(Female) = 0.54, subject verified | P(Male) = 0.70, subject verified

Subject verification still succeeds on the filtered images, while gender predictions move toward chance.

SLIDE 6

Sample of related work

  • (2003) Chechik et al. Extracting relevant structures with side information.
  • (2016) Basciftci et al. On privacy-utility tradeoffs for constrained data release mechanisms.
  • (2018) Madras et al. Learning adversarially fair and transferable representations.
  • (2018) Sun et al. A hybrid model for identity obfuscation by face replacement.

SLIDE 7

Problem formulation

Utility variable U and sensitive variable S: (U, S) ∼ p(U, S)
High-dimensional data: X ∼ p(X | U, S)
Sanitized data (our objective!): Y ∼ p(Y | X)

Want to learn Y ∼ p(Y | X) such that:

  • p(S | Y) ∼ p(S), i.e., min D_KL[p(S | Y) || p(S)]: the sanitized data reveals (almost) nothing about S.
  • p(U | Y) ∼ p(U | X), i.e., min D_KL[p(U | X) || p(U | Y)]: the sanitized data is (almost) as informative about U as the original data.
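
To make the sampling model concrete, here is a minimal toy simulation (our own illustration, not from the talk; the distributions, dimensions, and threshold are invented): independent binary U and S, a two-coordinate observation X, and a crude hand-made sanitizer that simply drops the S-carrying coordinate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# (U, S) ~ p(U, S): here, independent binary utility and sensitive variables.
U = rng.integers(0, 2, n)
S = rng.integers(0, 2, n)

# X ~ p(X | U, S): a noisy two-dimensional observation carrying both variables.
X = np.stack([U + 0.5 * rng.standard_normal(n),
              S + 0.5 * rng.standard_normal(n)], axis=1)

# Y ~ p(Y | X): a crude hand-made sanitizer that drops the S-carrying
# coordinate; a stand-in for the learned filter introduced later.
Y = X[:, 0]

# A simple threshold probe recovers U from Y but not S, which is the
# qualitative behavior we want: p(U | Y) ~ p(U | X) and p(S | Y) ~ p(S).
print(f"U from Y: {((Y > 0.5) == (U == 1)).mean():.2f}")  # ~0.84
print(f"S from Y: {((Y > 0.5) == (S == 1)).mean():.2f}")  # ~0.50 (chance)
```

When U and S are correlated, the second probe cannot reach chance without hurting the first; Lemma 2 below quantifies exactly that tension through the I(U; S) term.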

SLIDE 8

Problem formulation

Want to learn Y ∼ p(Y | X) such that:

  • min D_KL[p(S | Y) || p(S)]
  • min D_KL[p(U | X) || p(U | Y)]

Taking expectations turns both divergences into mutual informations (a worked check follows this slide):

E_Y[D_KL[p(S | Y) || p(S)]] = I(S; Y)
E_{X,Y}[D_KL[p(U | X) || p(U | Y)]] = I(U; X | Y)

Objective:

min_{p(Y|X)} I(U; X | Y)   s.t.   I(S; Y) ≤ k

which is equivalent to

max_{p(Y|X)} I(U; Y)   s.t.   I(S; Y) ≤ k
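
As a worked check of how the expectations recover mutual informations (a sketch, assuming the Markov chain (U, S) → X → Y, so that p(U | X, Y) = p(U | X)):

```latex
\begin{aligned}
\mathbb{E}_Y\big[D_{\mathrm{KL}}[p(S|Y)\,\|\,p(S)]\big]
  &= \sum_{y} p(y)\sum_{s} p(s|y)\log\frac{p(s|y)}{p(s)}
   = \sum_{s,y} p(s,y)\log\frac{p(s,y)}{p(s)\,p(y)} = I(S;Y),\\[4pt]
\mathbb{E}_{X,Y}\big[D_{\mathrm{KL}}[p(U|X)\,\|\,p(U|Y)]\big]
  &= \mathbb{E}\big[\log p(U|X)\big] - \mathbb{E}\big[\log p(U|Y)\big]
   = H(U|Y) - H(U|X)\\
  &= H(U|Y) - H(U|X,Y) = I(U;X|Y).
\end{aligned}
```

The Markov property enters twice in the second identity: p(u | x) = p(u | x, y) lets the inner average over p(u | x) be read as an expectation over U, and it also gives H(U | X) = H(U | X, Y).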

SLIDE 9

Performance bounds

Given the objective  min_{p(Y|X)} I(U; X | Y)  s.t.  I(S; Y) ≤ k:

What are the intrinsic limits on the trade-offs for this problem?

Lemma 1. Let (U, S) ∈ U × S have finite alphabets and X ∼ p(X | U, S). Using I(U; X | Y) = I(U; X) − I(U; Y) with I(U; Y) ≤ I(U; X) (see the derivation after this slide), the problem

min_{p(Y|X)} I(U; X | Y)   s.t.   I(S; Y) ≤ k

can be relaxed to

min_{p(Y|U,S)} I(U; X) − I(U; Y)   s.t.   I(S; Y) ≤ k

  • With finite |Y| we can compute a sequence of upper bounds: the restricted cardinality sequence (RCS).
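
The identity invoked in Lemma 1 is just the chain rule for mutual information (a sketch, assuming the Markov chain (U, S) → X → Y, so that I(U; Y | X) = 0):

```latex
\begin{aligned}
I(U;X|Y) &= I(U;X,Y) - I(U;Y) \\
         &= I(U;X) + I(U;Y|X) - I(U;Y) \\
         &= I(U;X) - I(U;Y).
\end{aligned}
```

Since I(U; X) is fixed by the data, minimizing I(U; X | Y) over p(Y | X) is the same as maximizing I(U; Y), which is the equivalence stated on SLIDE 8; I(U; Y) ≤ I(U; X) is the data-processing inequality.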

SLIDE 10

Performance bounds

Given the objective  min_{p(Y|X)} I(U; X | Y)  s.t.  I(S; Y) ≤ k: what are the intrinsic limits on the trade-offs for this problem?

Lemma 2. Given (X, U, S) ∼ p(X, U, S):

I(U; X | Y) ≥ −I(S; Y) + I(U; S) − I(U; S | X)

Lemma 3. Given (X, U, S) ∼ p(X, U, S) and k ≥ 0, there exists p(Y | X) with I(S; Y) ≤ k such that:

I(U; X | Y) = max(0, 1 − k / I(S; X)) · I(U; X)
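
One simple mechanism consistent with the trade-off in Lemma 3 (a sketch of an erasure-style construction; the paper's proof may differ) releases X untouched with probability α and a constant symbol ⊥ otherwise:

```latex
Y = \begin{cases} X & \text{with prob. } \alpha \\ \bot & \text{with prob. } 1-\alpha \end{cases}
\qquad\Longrightarrow\qquad
I(S;Y) = \alpha\, I(S;X), \quad I(U;X|Y) = (1-\alpha)\, I(U;X).
```

Choosing α = min{1, k / I(S; X)} meets the constraint I(S; Y) ≤ k and attains I(U; X | Y) = max(0, 1 − k / I(S; X)) · I(U; X).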

SLIDE 11

Performance bounds

Lemmas 1, 2, and 3 can be approximated using contingency tables (a code sketch follows this slide).

[Figure: trade-off curves of I(U; X | Y) against the leakage budget I(S; Y) ≤ k, comparing Lemma 1 (RCS), Lemma 2 (lower bound), and Lemma 3 (achievable upper bound). *Sketch under the assumption that I(U; S | X) = 0.]
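
A minimal sketch of evaluating such bounds from a contingency table (the functions and the example table are ours, purely illustrative; rows index U, columns index S, and we take X = (U, S) observed noiselessly so that I(U; S | X) = 0):

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information between the two axes of a joint table, in nats."""
    p_joint = np.asarray(p_joint, dtype=float)
    p_row = p_joint.sum(axis=1, keepdims=True)
    p_col = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] *
                  np.log(p_joint[mask] / (p_row @ p_col)[mask])).sum())

def entropy(p):
    p = np.asarray(p, dtype=float)
    return float(-(p[p > 0] * np.log(p[p > 0])).sum())

# Illustrative contingency table over (U, S).
p_us = np.array([[0.35, 0.15],
                 [0.10, 0.40]])

I_US = mutual_information(p_us)      # I(U; S)
I_SX = entropy(p_us.sum(axis=0))     # I(S; X) = H(S) when X = (U, S)
I_UX = entropy(p_us.sum(axis=1))     # I(U; X) = H(U) when X = (U, S)

for k in (0.0, 0.05, 0.10, 0.20):
    # Lemma 2 with I(U;S|X) = 0 and the constraint I(S;Y) <= k,
    # floored at 0 since mutual information is nonnegative.
    lower = max(0.0, I_US - k)
    # Lemma 3 (achievable upper bound); here I_SX > 0.
    upper = max(0.0, 1 - k / I_SX) * I_UX
    print(f"k = {k:.2f}: {lower:.3f} <= I(U;X|Y) <= {upper:.3f} nats")
```

Loosening the budget k shrinks the lower bound linearly and drags the achievable upper bound toward zero, which is the qualitative shape of the curves in the figure above.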

SLIDE 12

Proposed framework

Objective:

min I(U; X | Y)   s.t.   I(S; Y) ≤ k

The mapping p(Y | X) is parameterized as a stochastic function, p(Y | X) ∼ q_θ(X, Z), with Z a noise input. The constrained problem is relaxed into the optimization objective:

min_{q_θ(X,Z)} [I(U; X | Y) + λ max{I(S; Y) − k, 0}²]
slide-31
SLIDE 31

Implementation

13

Optimization objective:

[I(U; X|Y ) + λmax{I(S; Y ) − k, 0}2]

min

qθ(X, Z)

slide-32
SLIDE 32

Implementation

13

Optimization objective:

[I(U; X|Y ) + λmax{I(S; Y ) − k, 0}2]

min

qθ(X, Z)

Learning the stochastic mapping :

p(S|Y )

p(U|Y )

p(U|X)

Y = qθ(X, Z)

pη(S|Y )

pφ(U|X) pψ(U|Y )

∼ ∼ ∼

ˆ ψ = argminψEX,U,Z ⇥ − log(pψ(U | qˆ

θ(X, Z))

ˆ η = argminηEX,S,Z ⇥ − log(pη(S | qˆ

θ(X, Z))

⇤ ˆ φ = argminφEX,U ⇥ − log(pφ(U | X) ⇤

slide-33
SLIDE 33

Implementation

13

Optimization objective:

[I(U; X|Y ) + λmax{I(S; Y ) − k, 0}2]

min

qθ(X, Z)

Learning the stochastic mapping :

p(S|Y )

p(U|Y )

p(U|X)

Y = qθ(X, Z)

pη(S|Y )

pφ(U|X) pψ(U|Y )

∼ ∼ ∼

ˆ ψ = argminψEX,U,Z ⇥ − log(pψ(U | qˆ

θ(X, Z))

ˆ η = argminηEX,S,Z ⇥ − log(pη(S | qˆ

θ(X, Z))

⇤ ˆ φ = argminφEX,U ⇥ − log(pφ(U | X) ⇤

ˆ θ = argminθEX,Z ⇥ DKL[p ˆ

φ(U | X) || p ˆ ψ(U | qθ(X, Z))]]

+λ max(EX,Z ⇥ DKL[pˆ

η(S | qθ(X, Z)) || P(S)]] − k, 0)2

slide-34
SLIDE 34

Implementation

13

Optimization objective:

[I(U; X|Y ) + λmax{I(S; Y ) − k, 0}2]

min

qθ(X, Z)

Learning the stochastic mapping :

p(S|Y )

p(U|Y )

p(U|X)

Y = qθ(X, Z)

pη(S|Y )

pφ(U|X) pψ(U|Y )

∼ ∼ ∼

ˆ ψ = argminψEX,U,Z ⇥ − log(pψ(U | qˆ

θ(X, Z))

ˆ η = argminηEX,S,Z ⇥ − log(pη(S | qˆ

θ(X, Z))

⇤ ˆ φ = argminφEX,U ⇥ − log(pφ(U | X) ⇤

ˆ θ = argminθEX,Z ⇥ DKL[p ˆ

φ(U | X) || p ˆ ψ(U | qθ(X, Z))]]

+λ max(EX,Z ⇥ DKL[pˆ

η(S | qθ(X, Z)) || P(S)]] − k, 0)2

Xception Networks U-NET + noise
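
A minimal PyTorch sketch of the alternating scheme above (our own simplification: small MLPs in place of the Xception estimators and the U-Net filter, synthetic data, binary U and S; names, layer sizes, learning rates, and the batch sampler are all illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_x, d_y, d_z = 16, 16, 8     # data, representation, and noise dimensions
lam, k = 10.0, 0.1            # penalty weight and leakage budget (nats)

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

q_theta = mlp(d_x + d_z, d_y)   # stochastic filter Y = q_theta(X, Z)
p_eta   = mlp(d_y, 2)           # adversary estimate of p(S | Y)
p_psi   = mlp(d_y, 2)           # utility estimate of p(U | Y)
p_phi   = mlp(d_x, 2)           # reference estimate of p(U | X)

opt_theta  = torch.optim.Adam(q_theta.parameters(), lr=1e-3)
opt_others = torch.optim.Adam(
    [*p_eta.parameters(), *p_psi.parameters(), *p_phi.parameters()], lr=1e-3)

log_prior_S = torch.log(torch.tensor([0.5, 0.5]))  # p(S), assumed uniform

def sample_batch(n=256):
    # Synthetic stand-in for (X, U, S); the talk uses face images instead.
    U = torch.randint(0, 2, (n,))
    S = torch.randint(0, 2, (n,))
    X = torch.randn(n, d_x) + U[:, None].float() - S[:, None].float()
    return X, U, S

for step in range(2000):
    X, U, S = sample_batch()
    Z = torch.randn(X.shape[0], d_z)

    # (1) Fit eta, psi, phi by maximum likelihood with the filter frozen.
    Y = q_theta(torch.cat([X, Z], dim=1)).detach()
    loss_est = (F.cross_entropy(p_eta(Y), S)
                + F.cross_entropy(p_psi(Y), U)
                + F.cross_entropy(p_phi(X), U))
    opt_others.zero_grad(); loss_est.backward(); opt_others.step()

    # (2) Update the filter against the (now frozen) estimators.
    Y = q_theta(torch.cat([X, Z], dim=1))
    log_pu_x = F.log_softmax(p_phi(X), dim=1).detach()
    log_pu_y = F.log_softmax(p_psi(Y), dim=1)
    util = (log_pu_x.exp() * (log_pu_x - log_pu_y)).sum(1).mean()  # KL[p_phi || p_psi]

    log_ps_y = F.log_softmax(p_eta(Y), dim=1)
    leak = (log_ps_y.exp() * (log_ps_y - log_prior_S)).sum(1).mean()  # KL[p_eta || p(S)]

    # clamp implements the one-sided quadratic penalty lambda * max{. - k, 0}^2.
    loss_theta = util + lam * torch.clamp(leak - k, min=0.0) ** 2
    opt_theta.zero_grad(); loss_theta.backward(); opt_theta.step()
```

In the talk's setting the same alternation runs with Xception networks as the three estimators and a U-Net with injected noise Z as q_θ, trained on face images with attribute labels.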

SLIDE 14

Experiments

Emotion obfuscation vs. gender detection

[Figure: image samples for leakage budgets k = ∞, 0.5, 0.3.]

SLIDE 15

Experiments

Emotion obfuscation vs. gender detection (continued)

[Figure: image samples for leakage budgets k = ∞, 0.5, 0.3.]

SLIDE 16

Experiments

Gender obfuscation vs. subject verification

[Figure: image samples for leakage budgets k = ∞, 0.3, 0.2.]

SLIDE 17

Experiments

Gender obfuscation vs. subject verification (continued)

[Figure: image samples for leakage budgets k = ∞, 0.3, 0.2.]

SLIDE 18

Experiments

Subject within subject

[Figure: consenting vs. nonconsenting user at k = ∞ and k = 0.5; subject verification succeeds in each case shown.]

SLIDE 19

Concluding remarks

  • Learned representations that preserve utility and obfuscate sensitive information.
  • Derived easy-to-compute bounds.
  • Experimental results show the learned representations compare favorably against the derived bounds.

Limitations:

  • Expectation-based approach.
  • Reliance on the adversary as a proxy for information.
  • Transformations are space-preserving (which also means existing pipelines can be reused).

SLIDE 20

Thanks!

Please visit us at poster #81