
Differentially Private Distributed Convex Optimization via Functional Perturbation

Erfan Nozari

Department of Mechanical and Aerospace Engineering, University of California, San Diego
http://carmenere.ucsd.edu/erfan
July 6, 2016
Joint work with Pavankumar Tallapragada and Jorge Cortés


Distributed Coordination

What if local information is sensitive?


Motivating Scenario: Optimal EV Charging

[Han et al., 2014]


Central aggregator solves

$$\min_{r_1, \dots, r_n} \; U\Big(\sum_{i=1}^n r_i\Big) \quad \text{subject to} \quad r_i \in C_i, \;\; i \in \{1, \dots, n\}$$

  • $U$ = energy cost function
  • $r_i = r_i(t)$ = charging rate
  • $C_i$ = local constraints
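To make the formulation concrete, here is a minimal sketch of the aggregator problem in cvxpy, assuming a quadratic energy cost $U$ and box-plus-total-energy local constraints $C_i$; every number below (fleet size, horizon, rate cap, energy demands) is illustrative, not from the talk.

```python
# Minimal sketch of the aggregator problem with cvxpy (all numbers illustrative).
import cvxpy as cp
import numpy as np

n, T = 5, 24                             # 5 vehicles, 24 hourly charging slots
E = np.random.uniform(8, 15, n)          # energy each vehicle must receive (toy C_i)
r = cp.Variable((n, T), nonneg=True)     # charging rate profiles r_i(t)

total = cp.sum(r, axis=0)                # aggregate load at each time slot
objective = cp.Minimize(cp.sum_squares(total))  # U = quadratic energy cost (assumed)

constraints = [r <= 3.3]                 # per-slot charging rate cap (toy C_i)
constraints += [cp.sum(r, axis=1) == E]  # each vehicle gets its required energy

cp.Problem(objective, constraints).solve()
print("optimal aggregate load:", total.value.round(2))
```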


Myth: Aggregation Preserves Privacy

  • Fact: NOT in the presence of side information
  • Toy example (see the sketch after this list): a database with entries $d_1 = 100$, $d_2 = 120$, ..., $d_n = 90$ is released only through its average, $110$. An adversary who already knows $d_2, \dots, d_n$ (side information) recovers $d_1 = n \cdot 110 - (d_2 + \dots + d_n) = 100$.

  • Real example: A. Narayanan and V. Shmatikov successfully de-anonymized the Netflix Prize dataset (2007). Side information: the IMDb database!
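A three-line version of the toy attack: releasing only the average leaves $d_1$ fully exposed to an adversary holding the other records (the database below uses the slide's $d_1 = 100$, $d_2 = 120$, $d_n = 90$, with $n = 3$ for concreteness).

```python
# Side-information attack on an "aggregated" release: only the average is
# published, yet knowledge of all but one record reveals the remaining one.
database = [100, 120, 90]          # d_1 = 100, d_2 = 120, d_n = 90 (n = 3 here)
n = len(database)
average = sum(database) / n        # the only released statistic

side_info = database[1:]           # adversary already knows d_2, ..., d_n
d1 = n * average - sum(side_info)  # exact reconstruction of the "hidden" record
print(d1)                          # -> 100.0
```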


Outline

1 DP Distributed Optimization
    Problem Formulation
    Impossibility Result
2 Functional Perturbation
    Perturbation Design
3 DP Distributed Optimization via Functional Perturbation
    Regularization
    Algorithm Design and Analysis


Problem Formulation

Optimization

Standard additive convex optimization problem:

$$\min_{x \in D} \; f(x) = \sum_{i=1}^n f_i(x) \quad \text{subject to} \quad G(x) \le 0, \;\; Ax = b$$

Absorbing the constraints into the feasible set $X$, this becomes

$$\min_{x \in X} \; f(x) = \sum_{i=1}^n f_i(x)$$

Assumptions:
  • $D$ is compact
  • the $f_i$ are strongly convex and $C^2$

  • A non-private solution [Nedic et al., 2010]:

$$x_i(k+1) = \mathrm{proj}_X\big(z_i(k) - \alpha_k \nabla f_i(z_i(k))\big), \qquad z_i(k) = \sum_{j=1}^n w_{ij}\, x_j(k)$$

with stepsizes satisfying $\sum_k \alpha_k = \infty$ and $\sum_k \alpha_k^2 < \infty$.
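A minimal numerical sketch of this iteration, under illustrative assumptions: scalar quadratics $f_i(x) = (x - c_i)^2/2$, $X = [0, 10]$, and uniform averaging weights on a complete graph.

```python
# Sketch of the non-private distributed projected gradient method
# x_i(k+1) = proj_X(z_i(k) - a_k * grad f_i(z_i(k))), z_i = sum_j w_ij x_j.
# Quadratic objectives f_i(x) = (x - c_i)^2 / 2 and a complete graph are assumed.
import numpy as np

n = 4
c = np.array([1.0, 3.0, 6.0, 8.0])        # private data: f_i(x) = (x - c_i)^2 / 2
W = np.full((n, n), 1.0 / n)              # doubly stochastic averaging weights
x = np.zeros(n)                           # initial states
proj = lambda v: np.clip(v, 0.0, 10.0)    # projection onto X = [0, 10]

for k in range(1, 2001):
    alpha = 1.0 / k                       # sum a_k = inf, sum a_k^2 < inf
    z = W @ x                             # consensus step
    x = proj(z - alpha * (z - c))         # grad f_i(z_i) = z_i - c_i

print(x.round(3), "vs optimizer", c.mean())   # all agents approach mean(c) = 4.5
```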


Problem Formulation

Privacy

  • "Information": $F = (f_i)_{i=1}^n \in \mathcal{F}^n$
  • Given $(\mathcal{V}, \|\cdot\|_{\mathcal{V}})$ with $\mathcal{V} \subseteq \mathcal{F}$:

Adjacency: $F, F' \in \mathcal{F}^n$ are $\mathcal{V}$-adjacent if there exists $i_0 \in \{1, \dots, n\}$ such that $f_i = f'_i$ for $i \ne i_0$ and $f_{i_0} - f'_{i_0} \in \mathcal{V}$.

  • For a random map $M : \mathcal{F}^n \times \Omega \to \mathcal{X}$ and $\epsilon \in \mathbb{R}^n_{>0}$:

Differential Privacy (DP): $M$ is $\epsilon$-DP if for all $\mathcal{V}$-adjacent $F, F' \in \mathcal{F}^n$ and all $\mathcal{O} \subseteq \mathcal{X}$,

$$\mathbb{P}\{M(F', \omega) \in \mathcal{O}\} \le e^{\epsilon_{i_0} \|f_{i_0} - f'_{i_0}\|_{\mathcal{V}}}\, \mathbb{P}\{M(F, \omega) \in \mathcal{O}\}$$


Case Study

Linear Classification with Logistic Loss Function

  • Training records: $\{(a_j, b_j)\}_{j=1}^N$ where $a_j \in [0,1]^2$ and $b_j \in \{-1, 1\}$
  • Goal: find the best separating hyperplane $x^T a = 0$
  • Logistic loss: $\ell(x; a, b) = \ln(1 + e^{-b\, a^T x})$

Convex optimization problem (with the records distributed among $n$ agents, agent $i$ holding $N_i$ of them):

$$x^* = \operatorname*{argmin}_{x \in X} \; \sum_{i=1}^n \sum_{j=1}^{N_i} \ell(x; a_{i,j}, b_{i,j}) + \frac{\lambda}{2} |x|^2$$
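For reference, a centralized solve of this case study on synthetic data, giving the ground-truth $x^*$ a distributed algorithm should approximate. This is a sketch: the projection onto $X$ is dropped, and the data, labels, and $\lambda$ are made up.

```python
# Regularized logistic-loss classification, solved centrally with scipy as a
# ground truth x* for the case study; training data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 200
a = rng.uniform(0, 1, (N, 2))                       # records a_j in [0,1]^2
b = np.where(a @ np.array([1.0, -1.0]) > 0, 1, -1)  # labels from a true hyperplane

lam = 0.1
def objective(x):
    margins = b * (a @ x)
    # ln(1 + exp(-m)) computed stably as logaddexp(0, -m)
    return np.logaddexp(0.0, -margins).sum() + 0.5 * lam * x @ x

x_star = minimize(objective, x0=np.zeros(2)).x
print("separating direction:", x_star.round(3))
```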


Message Perturbation vs. Objective Perturbation

A generic distributed optimization algorithm: each agent $i$ holds a private objective $f_i$, exchanges messages with its neighbors over the network, and updates its local state as $x_i^+ = h_i(x_i, x_{-i})$.

  • Message perturbation: noise is injected into the messages the agents exchange over the network; each agent still uses its true $f_i$ in the local state update.
  • Objective perturbation: each agent perturbs its private objective $f_i$ itself, then runs the (unmodified) algorithm on the perturbed function.

Impossibility Result

Generic message-perturbing algorithm:

$$x(k+1) = a_{\mathcal{I}}(x(k), \xi(k)), \qquad \xi(k) = x(k) + \eta(k)$$

Theorem. If
  • the $\eta \to x$ dynamics is 0-LAS (locally asymptotically stable at the origin),
  • $\eta_i(k) \sim \mathrm{Lap}(b_i(k))$ or $\eta_i(k) \sim \mathcal{N}(0, b_i(k))$, and
  • $b_i(k)$ is $O(\frac{1}{k^p})$ for some $p > 0$,
then the algorithm is not $\epsilon$-DP with respect to the information set $\mathcal{I}$ for any $\epsilon > 0$.


Impossibility Result: An Example

Algorithm proposed in [Huang et al., 2015]:

$$x_i(k+1) = \mathrm{proj}_X\big(z_i(k) - \alpha_k \nabla f_i(z_i(k))\big), \qquad z_i(k) = \sum_{j=1}^n w_{ij}\, \xi_j(k), \qquad \xi_j(k) = x_j(k) + \eta_j(k)$$

  • $\eta_j(k) \sim \mathrm{Lap}(\propto p^k)$ and $\alpha_k \propto q^k$ with $0 < q < p < 1$
  • The noise scale decays geometrically, so the total injected noise $\sum_k \eta_j(k)$ is a finite sum, and $b_j(k)$ is in particular $O(1/k^{p'})$ for every $p' > 0$: the theorem applies, and the algorithm cannot be $\epsilon$-DP for any $\epsilon > 0$ (see the sketch after this list).
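The sketch referenced above, with illustrative quadratic objectives: the geometrically decaying noise adds up to a finite random offset, so the broadcast messages $\xi_j(k)$ settle at a limit that depends on the private $c_i$ and leaks information about them.

```python
# Message-perturbing iteration in the style of [Huang et al., 2015]:
# Laplace noise with scale ~ p^k and stepsize alpha_k ~ q^k, 0 < q < p < 1,
# on illustrative quadratics f_i(x) = (x - c_i)^2 / 2 over X = [0, 10].
# Since sum_k p^k is finite, the total injected noise is a finite sum and the
# messages settle at a limit determined by the private c_i, which is the
# intuition behind the impossibility result.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 4, 0.9, 0.8
c = np.array([1.0, 3.0, 6.0, 8.0])          # private data
W = np.full((n, n), 1.0 / n)                # uniform averaging weights
x = np.zeros(n)

for k in range(1, 301):
    eta = rng.laplace(scale=p**k, size=n)   # noise scale decays geometrically
    xi = x + eta                            # perturbed messages, observable
    z = W @ xi
    x = np.clip(z - q**k * (z - c), 0, 10)  # grad f_i(z_i) = z_i - c_i

print(x.round(3))                           # limit reveals information about c
```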


  • Simulation results for a linear classification problem: [Figure: $|\tilde{x} - x^*|$ versus $\epsilon$, with $\epsilon$ ranging over $10^{13}$ to $10^{16}$, showing the empirical data, a linear fit of $\log|\tilde{x}^* - x^*|$ against $\log \epsilon$, and the theoretical upper bound on $|\mathbb{E}[\tilde{x}^*] - x^*|$.]


State of the Art

  • [Chaudhuri et al., 2011]
      • First proposed "objective perturbation" by adding linear random functions
      • Extended by [Kifer et al., 2012] to constrained and non-differentiable problems
      • Preserves DP of objective function parameters
  • [Zhang et al., 2012]
      • Proposed objective perturbation by adding a sample path of a Gaussian stochastic process
      • Preserves DP of objective function parameters
  • [Hall et al., 2013]
      • Proposed objective perturbation by adding quadratic random functions
      • Preserves DP of objective function parameters


Prelim: Hilbert Spaces

  • Hilbert space $\mathcal{H}$ = complete inner-product space
  • Orthonormal basis $\{e_k\}_{k \in I} \subset \mathcal{H}$
  • If $\mathcal{H}$ is separable: $h = \sum_{k=1}^{\infty} \langle h, e_k \rangle\, e_k$
  • For $D \subseteq \mathbb{R}^d$, $L_2(D)$ is a separable Hilbert space $\Rightarrow$ $\mathcal{F} = L_2(D)$


Functional Perturbation via Laplace Noise

  • $\Phi$ : coefficient sequence $\delta \mapsto$ function $h = \sum_{k=1}^{\infty} \delta_k e_k$
  • Adjacency space: $\mathcal{V}_q = \big\{\Phi(\delta) \;\big|\; \sum_{k=1}^{\infty} (k^q \delta_k)^2 < \infty\big\}$
  • Random map (functional perturbation): $M(f, \eta) = \Phi\big(\Phi^{-1}(f) + \eta\big) = f + \Phi(\eta)$

Theorem. For $\eta_k \sim \mathrm{Lap}(\frac{\gamma}{k^p})$, $q > 1$, and $p \in \big(\frac{1}{2}, q - \frac{1}{2}\big)$, $M$ guarantees $\epsilon$-DP with

$$\epsilon = \frac{1}{\gamma} \sqrt{\zeta\big(2(q-p)\big)}$$
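A computational sketch of the mechanism on $L_2([0,1])$ using the orthonormal cosine basis (one admissible choice of $\{e_k\}$; the talk does not fix a basis), truncated to $K$ coefficients, so the sample below approximates $f + \Phi(\eta)$.

```python
# Functional perturbation M(f, eta) = f + Phi(eta) on L2([0, 1]), sketched with
# the orthonormal cosine basis e_1 = 1, e_k(t) = sqrt(2) cos((k-1) pi t) and a
# K-term truncation. eta_k ~ Lap(gamma / k^p); with q > 1 and p in (1/2, q - 1/2)
# the mechanism is eps-DP with eps = sqrt(zeta(2 (q - p))) / gamma.
import numpy as np
from scipy.special import zeta

def basis(k, t):
    return np.ones_like(t) if k == 1 else np.sqrt(2) * np.cos((k - 1) * np.pi * t)

def perturb(f_vals, t, gamma, p, K, rng):
    f_hat = f_vals.copy()
    for k in range(1, K + 1):
        eta_k = rng.laplace(scale=gamma / k**p)  # Laplace noise on coefficient k
        f_hat += eta_k * basis(k, t)             # adds Phi(eta) pointwise
    return f_hat

t = np.linspace(0.0, 1.0, 1001)
f = (t - 0.3) ** 2                               # a private objective, illustrative
gamma, p, q = 1.0, 1.0, 2.0                      # p = 1 lies in (1/2, q - 1/2)
print("eps =", np.sqrt(zeta(2 * (q - p))) / gamma)
f_tilde = perturb(f, t, gamma, p, K=50, rng=np.random.default_rng(3))
```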


Resilience to Post-processing

Algorithm sketch:

  • 1. Each agent perturbs its own objective function (offline)
  • 2. Agents participate in an arbitrary distributed optimization algorithm with the perturbed functions (online)

  • $M : L_2(D)^n \times \Omega \to L_2(D)^n$
  • $F : L_2(D)^n \to \mathcal{X}$, where $(\mathcal{X}, \Sigma_{\mathcal{X}})$ is an arbitrary measurable space

Corollary (special case of [Le Ny & Pappas, 2014, Theorem 1]). If $M$ is $\epsilon$-DP, then $F \circ M : L_2(D)^n \times \Omega \to \mathcal{X}$ is $\epsilon$-DP.


Ensuring Regularity of Perturbed Functions

  • $\hat{f}_i = M(f_i, \eta_i)$ may be discontinuous/non-convex/...
  • $\mathcal{S} = \{\text{regular functions}\} \subset C^2(D) \subset L_2(D)$
  • Ensuring smoothness: $C^2(D)$ is dense in $L_2(D)$, so for every $\varepsilon_i > 0$ one can pick $\hat{f}^s_i \in C^2(D)$ such that $\|\hat{f}_i - \hat{f}^s_i\| < \varepsilon_i$
  • Ensuring regularity: $\tilde{f}_i = \mathrm{proj}_{\mathcal{S}}(\hat{f}^s_i)$

Proposition. $\mathcal{S}$ is convex and closed relative to $C^2(D)$.


Algorithm

  • 1. Each agent perturbs its function (offline): $\hat{f}_i = M(f_i, \eta_i) = f_i + \Phi(\eta_i)$, with $\eta_{i,k} \sim \mathrm{Lap}(b_{i,k})$ and $b_{i,k} = \frac{\gamma_i}{k^{p_i}}$
  • 2. Each agent selects $\hat{f}^s_i \in S_0$ such that $\|\hat{f}_i - \hat{f}^s_i\| < \varepsilon_i$ (offline)
  • 3. Each agent projects $\hat{f}^s_i$ onto $\mathcal{S}$: $\tilde{f}_i = \mathrm{proj}_{\mathcal{S}}(\hat{f}^s_i)$ (offline)
  • 4. Agents participate in any distributed optimization algorithm with $(\tilde{f}_i)_{i=1}^n$ (online)

A sketch of the offline steps follows the list.
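The sketch promised above: the offline steps in coefficient space, under heavy simplifications. Keeping only $K$ cosine coefficients makes the perturbed function smooth by construction (so steps 1 and 2 collapse), and `project_onto_S` is a hypothetical stub standing in for the projection onto $\mathcal{S}$; it is not the talk's construction.

```python
# Offline pipeline (steps 1-3) in coefficient space. Simplifications: a K-term
# basis truncation is already C^2, so the smoothing step 2 collapses into
# step 1, and project_onto_S is a hypothetical stub for step 3.
import numpy as np

def perturb_coeffs(coeffs, gamma, p, rng):
    # Step 1: hat f_i = f_i + Phi(eta_i), with eta_{i,k} ~ Lap(gamma_i / k^{p_i})
    k = np.arange(1, len(coeffs) + 1)
    return coeffs + rng.laplace(scale=gamma / k**p)

def project_onto_S(coeffs):
    # Step 3 (stub): a real implementation would enforce the regularity set S,
    # e.g. alpha*I <= Hessian <= beta*I and a bounded gradient.
    return coeffs

rng = np.random.default_rng(2)
f_coeffs = np.array([0.5, -0.2, 0.1, 0.05])   # agent i's private f_i (K = 4, toy)
f_tilde = project_onto_S(perturb_coeffs(f_coeffs, gamma=1.0, p=1.0, rng=rng))
print(f_tilde.round(3))
# Step 4: f_tilde feeds any distributed optimization algorithm, e.g. the
# projected gradient iteration sketched earlier.
```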


Accuracy Analysis

  • Set of "regular" functions: $\mathcal{S} = \{h \in C^2(D) \mid \alpha I_d \le \nabla^2 h(x) \le \beta I_d \text{ and } |\nabla h(x)| \le u\}$

Lemma ($\mathcal{K}$-Lipschitzness of argmin). For $f, g \in \mathcal{S}$,

$$\Big|\operatorname*{argmin}_{x \in X} f - \operatorname*{argmin}_{x \in X} g\Big| \le \kappa_{\alpha,\beta}(\|f - g\|)$$

  • Define $\tilde{x}^* = \operatorname*{argmin}_{x \in X} \sum_{i=1}^n \tilde{f}_i$ and $x^* = \operatorname*{argmin}_{x \in X} \sum_{i=1}^n f_i$.

Theorem (Accuracy).

$$\mathbb{E}\,|\tilde{x}^* - x^*| \le \sum_{i=1}^n \Big(\kappa_n\Big(\frac{\zeta(q_i)}{\epsilon_i}\Big) + \kappa_n(\varepsilon_i)\Big)$$


Simulation Results

Linear Classification with Logistic Loss Function

[Figure: $|\tilde{x}^* - x^*|$ versus $\epsilon$. Left panel: theoretical bound, empirical data, and a piecewise linear fit. Right panel: theoretical bound together with empirical curves for 2nd, 6th, and 14th order bases.]

Conclusions and Future Work


In this talk, we

  • Proposed a definition of DP for functions
  • Illustrated a fundamental limitation of message-perturbing strategies
  • Proposed the method of functional perturbation
  • Discussed how functional perturbation can be applied to distributed convex optimization

Future work includes
  • relaxation of the smoothness, convexity, and compactness assumptions
  • comparing the numerical efficiency of different bases for $L_2$
  • characterizing the expected sub-optimality gap of the algorithm and the optimal privacy-accuracy trade-off curve
  • further understanding the appropriate scales of privacy parameters for particular applications


Questions and Comments

Full results of this talk available in:

  • E. Nozari, P. Tallapragada, and J. Cortés, "Differentially Private Distributed Convex Optimization via Functional Perturbation," IEEE Transactions on Control of Network Systems, provisionally accepted. http://arxiv.org/abs/1512.00369


Formal Definition

in original context [Dwork et al., 2006]

Context:

  • $D \in \mathcal{D}$: a database of records
  • Adjacency: $D_1, D_2 \in \mathcal{D}$ are adjacent if they differ in at most one record
  • $(\Omega, \Sigma, \mathbb{P})$: probability space
  • $q : \mathcal{D} \to \mathcal{X}$: (honest) query function
  • $M : \mathcal{D} \times \Omega \to \mathcal{X}$: randomized/sanitized query function
  • $\epsilon > 0$: level of privacy

Definition. $M$ is $\epsilon$-DP if for all adjacent $D_1, D_2 \in \mathcal{D}$ and all $\mathcal{O} \subseteq \mathcal{X}$,

$$\mathbb{P}\{M(D_1) \in \mathcal{O}\} \le e^{\epsilon}\, \mathbb{P}\{M(D_2) \in \mathcal{O}\}$$

  • Adjacency is symmetric, so both $\mathbb{P}\{M(D_1) \in \mathcal{O}\} \le e^{\epsilon} \mathbb{P}\{M(D_2) \in \mathcal{O}\}$ and $\mathbb{P}\{M(D_2) \in \mathcal{O}\} \le e^{\epsilon} \mathbb{P}\{M(D_1) \in \mathcal{O}\}$ hold.


Formal Definition: Geometric Interpretation

in original context

Definition. $M$ is $\epsilon$-DP if for all adjacent $D_1, D_2 \in \mathcal{D}$ and all $\mathcal{O} \subseteq \mathcal{X}$, $\mathbb{P}\{M(D_1) \in \mathcal{O}\} \le e^{\epsilon} \mathbb{P}\{M(D_2) \in \mathcal{O}\}$

[Figure: the honest query $q$ maps adjacent databases $D_1, D_2 \in \mathcal{D}$ to points $q(D_1), q(D_2) \in \mathcal{X}$; the randomized map $M$ spreads $M(D_1, \omega)$ and $M(D_2, \omega)$ over $\mathcal{X}$ so that every event $\mathcal{O}$ has comparable probability under either database.]


Operational Meaning of DP

A binary decision example [Geng & Pramod, 2013]

  • Adversary's decision: TRUE if $M(D, \omega) \in \mathcal{O}$, FALSE if $M(D, \omega) \in \mathcal{O}^c$
  • Missed detection: $\mathrm{MD} = \{M(D_1, \omega) \in \mathcal{O}^c\}$
  • False alarm: $\mathrm{FA} = \{M(D_2, \omega) \in \mathcal{O}\}$
  • If $M$ is $\epsilon$-DP, then

$$\mathbb{P}\{M(D_1, \omega) \in \mathcal{O}\} \le e^{\epsilon} \mathbb{P}\{M(D_2, \omega) \in \mathcal{O}\} \quad \text{and} \quad \mathbb{P}\{M(D_2, \omega) \in \mathcal{O}^c\} \le e^{\epsilon} \mathbb{P}\{M(D_1, \omega) \in \mathcal{O}^c\},$$

which gives $1 - p_{\mathrm{MD}} \le e^{\epsilon} p_{\mathrm{FA}}$ and $1 - p_{\mathrm{FA}} \le e^{\epsilon} p_{\mathrm{MD}}$, and hence

$$p_{\mathrm{MD}},\, p_{\mathrm{FA}} \ge \frac{e^{\epsilon} - 1}{e^{2\epsilon} - 1} = \frac{1}{e^{\epsilon} + 1}$$
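A quick numeric check of the bound: even at $\epsilon = 1$, any adversary's missed-detection and false-alarm probabilities each exceed about 0.27.

```python
# Lower bound on the adversary's error probabilities under eps-DP:
# p_MD, p_FA >= (e^eps - 1) / (e^(2 eps) - 1) = 1 / (e^eps + 1).
import math
for eps in (0.1, 1.0, 5.0):
    print(eps, 1.0 / (math.exp(eps) + 1.0))   # 0.475..., 0.268..., 0.0066...
```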


Generalizing the Definition: Using Metrics

[Chatzikokolakis et al., 2013]

  • If $D_1, D_2$ differ in $N$ elements, then $\mathbb{P}\{M(D_1, \omega) \in \mathcal{O}\} \le e^{N\epsilon} \mathbb{P}\{M(D_2, \omega) \in \mathcal{O}\}$
  • $d : \mathcal{D} \times \mathcal{D} \to [0, \infty)$: a metric on $\mathcal{D}$

Definition (revisited). $M$ gives/preserves $\epsilon$-differential privacy if for all $D_1, D_2 \in \mathcal{D}$ and all $\mathcal{O} \subseteq \mathcal{X}$,

$$\mathbb{P}\{M(D_1, \omega) \in \mathcal{O}\} \le e^{\epsilon\, d(D_1, D_2)} \mathbb{P}\{M(D_2, \omega) \in \mathcal{O}\}$$
