Adversarially Learned Representations for Information Obfuscation and Inference
Martin Bertran1, Natalia Martinez1, Afroditi Papadaki2, Qiang Qiu1, Miguel Rodrigues2, Galen Reeves1, Guillermo Sapiro1
- 1. Duke University
- 2. University College London
[Figure: motivating example on two faces, a serious female (P(Serious) = 0.98, P(Female) = 0.99) and a smiling male (P(Smile) = 0.78, P(Male) = 0.98). After sanitization, gender scores are preserved (P(Female) = 0.99, P(Male) = 0.98) while emotion scores collapse (P(Serious) = 0.31, P(Smile) = 0.38).]
[Figure: subject verification succeeds on both the original images (P(Male) = 0.99, P(Female) = 0.99) and the sanitized images, where gender predictions move toward chance (P(Female) = 0.54, P(Male) = 0.70).]
Setup:
- Utility variable: U
- Sensitive variable: S
- High-dimensional data: X
- Sanitized data: Y

Our objective: learn a sanitization mapping from X to Y such that Y remains informative about the utility variable U while revealing little about the sensitive variable S.
Want to learn p(Y|X) such that Y stays predictive of U but not of S.

Objective:

    min_{p(Y|X)} I(U; X | Y)   s.t.   I(S; Y) ≤ k
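Because Y is generated from X alone, U - X - Y forms a Markov chain and I(U; X | Y) = I(U; X) − I(U; Y), so minimizing the objective is the same as maximizing the retained utility information I(U; Y). A small self-contained check of this identity, using a hypothetical joint p(U, X) and a hypothetical sanitizer p(Y|X) (both made up for illustration, not from the paper):

```python
# Check that, for a Markov chain U - X - Y, I(U; X | Y) = I(U; X) - I(U; Y).
from itertools import product
from math import log2

# Hypothetical joint p(U, X): U is a noisy binary label of X.
p_ux = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
# Candidate sanitizer p(Y | X), keyed (y, x): keep X w.p. 0.8, flip otherwise.
p_y_given_x = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.2, (1, 1): 0.8}

# Joint p(U, X, Y) under the Markov chain U - X - Y.
p = {(u, x, y): p_ux[u, x] * p_y_given_x[y, x]
     for u, x, y in product((0, 1), repeat=3)}

def H(keep):
    """Entropy (bits) of the marginal of p(U, X, Y) on the axes in `keep`."""
    m = {}
    for key, v in p.items():
        kk = tuple(key[i] for i in keep)
        m[kk] = m.get(kk, 0.0) + v
    return -sum(v * log2(v) for v in m.values() if v > 0)

I_UX = H([0]) + H([1]) - H([0, 1])                            # I(U; X)
I_UY = H([0]) + H([2]) - H([0, 2])                            # I(U; Y)
I_UX_given_Y = H([0, 2]) + H([1, 2]) - H([2]) - H([0, 1, 2])  # I(U; X | Y)

assert abs(I_UX_given_Y - (I_UX - I_UY)) < 1e-9
assert 0.0 <= I_UY <= I_UX  # data-processing: sanitizing cannot add utility
```

The same marginalization helper also gives I(S; Y) once S is added to the joint, which is how the constraint side of the objective would be evaluated on a table.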
Given the objective min_{p(Y|X)} I(U; X | Y): what are the intrinsic limits on the trade-offs for this problem?

Lemma 1 (finite alphabets). Let (U, S) ∈ U × S with finite alphabets and X ∼ p(X|U, S). Then

    min_{p(Y|X)} I(U; X | Y)   s.t.   I(S; Y) ≤ k

can be related to the relaxed problem

    min_{p(Y|U,S)} I(U; X) − I(U; Y)   s.t.   I(S; Y) ≤ k,

where I(U; Y) ≤ I(U; X), and the relaxed problem can be evaluated over alphabets given by a cardinality sequence (RCS).
Lemma 2 (lower bound). Given (X, U, S) ∼ p(X, U, S), for any p(Y|X):

    I(U; X | Y) ≥ −I(S; Y) + I(U; S) − I(U; S | X)

Lemma 3 (achievable upper bound). Given (X, U, S) ∼ p(X, U, S), there exists a p(Y|X) attaining this trade-off.
11
Lemmas 1, 2 and 3 can be approximated using contingency tables. I(U; X|Y )
))I(U; X)
I(U; S) I(U; S)
I(S; X)
Lemma 1 (RCS) Lemma 2 (lower bound) Lemma 3 (achievable upper bound) * Sketch under the assumption that
I(U; S|X) = 0
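Lemma 2's bound can be checked directly on a small contingency table. The joint below is a hypothetical example (correlated binary U and S, with X observing both exactly and Y a noisy read-out of U), not data from the paper:

```python
# Numeric check of Lemma 2: I(U; X | Y) >= -I(S; Y) + I(U; S) - I(U; S | X),
# with all quantities estimated from a small contingency table.
from itertools import product
from math import log2

# Hypothetical correlated binaries: p(U, S).
p_us = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# X observes (U, S) exactly: X = 2*U + S, so I(U; S | X) = 0.
# Sanitizer p(Y | X): Y reveals the U-bit of X, with 10% label flips.
def p_y_given_x(y, x):
    return 0.9 if y == x // 2 else 0.1

# Joint over (U, S, X, Y).
p = {}
for u, s, y in product((0, 1), repeat=3):
    x = 2 * u + s
    p[(u, s, x, y)] = p_us[u, s] * p_y_given_x(y, x)

def H(keep):
    """Entropy (bits) of the marginal of p(U, S, X, Y) on the axes in `keep`."""
    m = {}
    for key, v in p.items():
        kk = tuple(key[i] for i in keep)
        m[kk] = m.get(kk, 0.0) + v
    return -sum(v * log2(v) for v in m.values() if v > 0)

I_UX_given_Y = H([0, 3]) + H([2, 3]) - H([3]) - H([0, 2, 3])
I_SY = H([1]) + H([3]) - H([1, 3])
I_US = H([0]) + H([1]) - H([0, 1])
I_US_given_X = H([0, 2]) + H([1, 2]) - H([2]) - H([0, 1, 2])

lower_bound = -I_SY + I_US - I_US_given_X
assert I_UX_given_Y >= lower_bound - 1e-9  # Lemma 2 holds on this table
```

Here the bound is loose (this sanitizer leaks some S through its correlation with U); tightening it is exactly what the achievability construction of Lemma 3 is about.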
Optimization objective: realize the stochastic mapping as Y = q_θ(X, Z), with Z an independent noise source, so that p(Y|X) is induced by q_θ(X, Z).

Learning the stochastic mapping: alongside the sanitizer q_θ, train a raw-data utility network p_φ(U|X), a sanitized-data utility network p_ψ(U|Y), and an adversary p_η(S|Y):

    ψ̂ = argmin_ψ E_{X,U,Z}[ −log p_ψ(U | q̂_θ(X, Z)) ]
    η̂ = argmin_η E_{X,S,Z}[ −log p_η(S | q̂_θ(X, Z)) ]
    φ̂ = argmin_φ E_{X,U}[ −log p_φ(U | X) ]
    θ̂ = argmin_θ E_{X,Z}[ D_KL[ p_φ̂(U | X) || p_ψ̂(U | q_θ(X, Z)) ] ]
          + λ max( E_{X,Z}[ D_KL[ p_η̂(S | q_θ(X, Z)) || P(S) ] ] − k, 0 )²
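The sanitizer's loss combines a utility-matching KL term with a quadratic penalty that activates only when the adversary's posterior on S deviates from the prior P(S) by more than the budget k. A minimal numeric sketch of that loss on a single sample (the posteriors, λ, and k below are made up for illustration, not values from the paper):

```python
# Sketch of the sanitizer loss: utility KL + soft leakage constraint.
from math import log

def kl(p, q):
    """KL divergence (nats) between two discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sanitizer_loss(p_u_given_x, p_u_given_y, p_s_given_y, p_s, lam=10.0, k=0.3):
    """Per-sample loss for q_theta; in training these terms are batch averages."""
    utility = kl(p_u_given_x, p_u_given_y)          # match utility posteriors
    leakage = kl(p_s_given_y, p_s)                  # adversary's gain over prior
    return utility + lam * max(leakage - k, 0.0) ** 2

# Sanitized sample where the adversary's posterior on S is near the prior:
low_leak = sanitizer_loss([0.9, 0.1], [0.85, 0.15], [0.55, 0.45], [0.5, 0.5])
# Same utility match, but the adversary still recovers the secret:
high_leak = sanitizer_loss([0.9, 0.1], [0.85, 0.15], [0.99, 0.01], [0.5, 0.5])

assert 0.0 < low_leak < high_leak  # leakage beyond k is penalized quadratically
```

Because the penalty is zero whenever leakage stays under k, the sanitizer is free to keep any secret-correlated detail that fits within the budget, which is what produces the graded behavior across k values in the experiments.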
Architectures: Xception networks for the classifiers (p_φ, p_ψ, p_η); a U-NET with an added noise input for the sanitizer q_θ.
Results: emotion obfuscation vs. gender detection.

[Figure: sanitized face images for leakage budgets k = ∞, 0.5, 0.3.]
Results: gender obfuscation vs. subject verification.

[Figure: sanitized face images for leakage budgets k = ∞, 0.3, 0.2.]
Results: subject-within-subject obfuscation.

[Figure: consenting vs. nonconsenting user under subject verification, for k = ∞ and k = 0.5; "Subject verified" is reported for the consenting user's images.]