slide-1
SLIDE 1

Jointly Private Convex Programming

“PrivDuDe” Justin Hsu1, Zhiyi Huang2, Aaron Roth1, Steven Zhiwei Wu1

1University of Pennsylvania 2University of Hong Kong

January 10, 2016

1

slide-2
SLIDE 2

One hot summer...not enough electricity!

2

slide-3
SLIDE 3

Solution: Turn off air-conditioning

Decide when customers get electricity

◮ Divide day into time slots
◮ Customers have values for slots
◮ Customers have hard minimum requirements for slots

Goal: maximize welfare

3

slide-4
SLIDE 4

Scheduling optimization problem

Constants (Inputs to the problem)

◮ Customer i’s value for electricity in time slot t: v_t^(i) ∈ [0, 1]

◮ Customer i’s minimum requirement in time slot t: d_t^(i) ∈ [0, 1]

◮ Total electricity supply in time slot t: s_t ∈ ℝ

4

slide-5
SLIDE 5

Scheduling optimization problem

Constants (Inputs to the problem)

◮ Customer i’s value for electricity in time slot t: v_t^(i) ∈ [0, 1]

◮ Customer i’s minimum requirement in time slot t: d_t^(i) ∈ [0, 1]

◮ Total electricity supply in time slot t: s_t ∈ ℝ

Variables (Outputs)

◮ Electricity level for user i, time t: x_t^(i)

4

slide-6
SLIDE 6

Scheduling optimization problem

Maximize welfare

max ∑_{i,t} v_t^(i) · x_t^(i)

5

slide-7
SLIDE 7

Scheduling optimization problem

Maximize welfare

max ∑_{i,t} v_t^(i) · x_t^(i)

...subject to constraints

◮ Don’t exceed power supply: ∑_i x_t^(i) ≤ s_t

5

slide-8
SLIDE 8

Scheduling optimization problem

Maximize welfare

max ∑_{i,t} v_t^(i) · x_t^(i)

...subject to constraints

◮ Don’t exceed power supply: ∑_i x_t^(i) ≤ s_t

◮ Meet minimum energy requirements: x_t^(i) ≥ d_t^(i)

5
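In the single-slot case this program decomposes into a fractional allocation that a greedy rule solves exactly: meet every minimum requirement, then hand the leftover supply to the highest-value customer. A minimal non-private sketch in Python; all numbers are invented for illustration and do not come from the talk.

```python
# Toy single-slot instance of the scheduling LP (all numbers invented).
# With one supply constraint and no per-customer cap, the welfare-optimal
# schedule is greedy: meet every minimum requirement d^(i), then hand the
# leftover supply to the customer with the highest value.

def schedule_slot(values, demands, supply):
    assert sum(demands) <= supply, "instance must be feasible"
    alloc = list(demands)                 # first, meet minimum requirements
    leftover = supply - sum(demands)      # 1.2 units in the example below
    best = max(range(len(values)), key=lambda i: values[i])
    alloc[best] += leftover               # rest goes to the top-value customer
    return alloc

values  = [0.9, 0.4, 0.7]   # v^(i) for this slot
demands = [0.2, 0.5, 0.1]   # d^(i)
alloc = schedule_slot(values, demands, supply=2.0)
welfare = sum(v * x for v, x in zip(values, alloc))
```

A general LP solver handles the multi-slot program the same way, one supply constraint per slot.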

slide-9
SLIDE 9

Privacy concerns

Private data

◮ Values v_t^(i) for time slots

◮ Customer requirements d_t^(i)

6

slide-10
SLIDE 10

Privacy concerns

Private data

◮ Values v_t^(i) for time slots

◮ Customer requirements d_t^(i)

Customers shouldn’t learn private data of others

6

slide-11
SLIDE 11

More generally...

Convex program

◮ Want to maximize: ∑_i f^(i)(x^(i)),   f^(i) concave

7

slide-12
SLIDE 12

More generally...

Convex program

◮ Want to maximize: ∑_i f^(i)(x^(i)),   f^(i) concave

◮ Coupling constraints: ∑_i g_j^(i)(x^(i)) ≤ h_j,   g_j^(i) convex

7

slide-13
SLIDE 13

More generally...

Convex program

◮ Want to maximize: ∑_i f^(i)(x^(i)),   f^(i) concave

◮ Coupling constraints: ∑_i g_j^(i)(x^(i)) ≤ h_j,   g_j^(i) convex

◮ Personal constraints: x^(i) ∈ S^(i),   S^(i) convex

7

slide-14
SLIDE 14

More generally...

Key feature: separable

◮ Partition variables: Agent i’s “part” of the solution is x^(i)

Agent i’s private data affects:

◮ Objective f^(i)
◮ Coupling constraints g_j^(i)
◮ Personal constraints S^(i)

Examples

◮ Matching LP
◮ d-demand fractional allocation
◮ Multidimensional fractional knapsack

8

slide-15
SLIDE 15

Our results, in one slide

Theorem

Let ε > 0 be a privacy parameter. For a separable convex program with k coupling constraints, there is an efficient algorithm for privately finding a solution with objective at least OPT − O(k/ε), and exceeding the coupling constraints by at most k/ε in total.

No polynomial dependence on number of variables

9

slide-16
SLIDE 16

The plan today

◮ Convex program solution ↔ equilibrium of a game
◮ Compute equilibrium via gradient descent
◮ Ensure privacy

10

slide-17
SLIDE 17

The convex program game

11

slide-18
SLIDE 18

The convex program two-player, zero-sum game

The players

◮ Primal player: plays candidate solutions x ∈ S^(1) × · · · × S^(n)
◮ Dual player: plays dual solutions λ

12

slide-19
SLIDE 19

The convex program two-player, zero-sum game

The players

◮ Primal player: plays candidate solutions x ∈ S^(1) × · · · × S^(n)
◮ Dual player: plays dual solutions λ

The payoff function

◮ Move constraints depending on multiple players (coupling constraints) into the objective as penalty terms:

L(x, λ) = ∑_i f^(i)(x^(i)) + ∑_j λ_j (h_j − ∑_i g_j^(i)(x^(i)))

◮ Primal player maximizes, dual player minimizes

12
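For the linear case the payoff can be written down directly. The sketch below (invented numbers, one variable per agent) treats λ_j as a price on coupling constraint j: slack in the constraint earns the maximizing primal player λ_j per unit, and violation costs it once λ_j grows.

```python
# Payoff of the zero-sum game in the linear case, with invented numbers.

def lagrangian(x, lam, f, g, h):
    # L(x, λ) = Σ_i f^(i)·x^(i) + Σ_j λ_j (h_j − Σ_i g_j^(i)·x^(i))
    obj = sum(fi * xi for fi, xi in zip(f, x))
    prices = sum(lam[j] * (h[j] - sum(g[j][i] * x[i] for i in range(len(x))))
                 for j in range(len(lam)))
    return obj + prices

f, x = [1.0, 2.0], [0.5, 0.5]   # one variable per agent (linear objective)
g, h = [[1.0, 1.0]], [1.5]      # single coupling constraint: x_1 + x_2 ≤ 1.5
lam = [2.0]
payoff = lagrangian(x, lam, f, g, h)   # 1.5 objective + 2 · 0.5 slack bonus
```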

slide-20
SLIDE 20

Idea: Solution ↔ equilibrium

Convex duality

◮ Optimal solution x∗ gets payoff OPT versus any λ
◮ Optimal dual λ∗ gets payoff at least −OPT versus any x

In game theoretic terms...

◮ The value of the game is OPT
◮ Optimal primal–dual solution (x∗, λ∗) is an equilibrium

13

slide-21
SLIDE 21

Idea: Solution ↔ equilibrium

Convex duality

◮ Optimal solution x∗ gets payoff OPT versus any λ
◮ Optimal dual λ∗ gets payoff at least −OPT versus any x

In game theoretic terms...

◮ The value of the game is OPT
◮ Optimal primal–dual solution (x∗, λ∗) is an equilibrium

Find an approximate equilibrium to find an approximately optimal solution

13


slide-23
SLIDE 23

Finding the equilibrium

14

slide-24
SLIDE 24

Known: techniques for finding equilibrium [FS96]

Simulated play

◮ First player chooses the action x_t with best payoff
◮ Second player uses a no-regret algorithm to select action λ_t
◮ Use payoff L(x_t, λ_t) to update the second player
◮ Repeat

15

slide-25
SLIDE 25

Known: techniques for finding equilibrium [FS96]

Simulated play

◮ First player chooses the action x_t with best payoff
◮ Second player uses a no-regret algorithm to select action λ_t
◮ Use payoff L(x_t, λ_t) to update the second player
◮ Repeat

Key features

◮ Average of (x_t, λ_t) converges to approximate equilibrium
◮ Limited access to payoff data, can be made private

15

slide-26
SLIDE 26

Gradient descent dynamics (linear case)

Idea: repeatedly go “downhill”

◮ Given primal point x_t, the gradient of L(x_t, ·) is

ℓ_j = h_j − ∑_i g_j^(i) · x_t^(i)

◮ Update:

λ_{t+1} = λ_t − η · ℓ

16
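These dynamics are easy to simulate without privacy. In the hypothetical one-constraint example below, the descent step raises the price λ exactly when the constraint is violated and lowers it when there is slack; clipping at λ ≥ 0 is an added assumption, usual for dual variables.

```python
# Non-private dual dynamics on a hypothetical one-constraint instance.
# ℓ_j = h_j − Σ_i g_j^(i)·x_t^(i) is the gradient of L(x_t, ·), so the
# descent step λ − η·ℓ raises the price when the constraint is violated.

def dual_step(lam, x, g, h, eta):
    ell = [h[j] - sum(g[j][i] * x[i] for i in range(len(x)))
           for j in range(len(lam))]
    return [max(0.0, lam[j] - eta * ell[j]) for j in range(len(lam))]

g, h = [[1.0, 1.0]], [1.0]
over  = dual_step([1.0], [0.8, 0.8], g, h, eta=0.5)  # Σ g·x = 1.6 > h
under = dual_step([1.0], [0.2, 0.2], g, h, eta=0.5)  # Σ g·x = 0.4 < h
```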

slide-27
SLIDE 27

Achieving privacy

17

slide-28
SLIDE 28

(Plain) Differential privacy [DMNS06]

18

slide-29
SLIDE 29

More formally

Definition (DMNS06)

Let M be a randomized mechanism from databases to range R, and let D, D′ be databases differing in one record. M is (ε, δ)-differentially private if for every S ⊆ R, Pr[M(D) ∈ S] ≤ eε · Pr[M(D′) ∈ S] + δ.

19

slide-30
SLIDE 30

More formally

Definition (DMNS06)

Let M be a randomized mechanism from databases to range R, and let D, D′ be databases differing in one record. M is (ε, δ)-differentially private if for every S ⊆ R, Pr[M(D) ∈ S] ≤ eε · Pr[M(D′) ∈ S] + δ.

For us: too strong!

19

slide-31
SLIDE 31

A relaxed notion of privacy [KPRU14]

Idea

◮ Give separate outputs to agents
◮ Group of agents can’t violate privacy of other agents

20

slide-32
SLIDE 32

A relaxed notion of privacy [KPRU14]

Idea

◮ Give separate outputs to agents
◮ Group of agents can’t violate privacy of other agents

Definition

An algorithm M : Cn → Ωn is (ε, δ)-joint differentially private if for every agent i, pair of i-neighbors D, D′ ∈ Cn, and subset of outputs S ⊆ Ωn−1, Pr[M(D)−i ∈ S] ≤ exp(ε) Pr[M(D′)−i ∈ S] + δ.

20

slide-33
SLIDE 33

Achieving joint differential privacy

“Billboard” mechanisms

◮ Compute signal S satisfying standard differential privacy
◮ Agent i’s output is a function of i’s private data and S

21

slide-34
SLIDE 34

Achieving joint differential privacy

“Billboard” mechanisms

◮ Compute signal S satisfying standard differential privacy
◮ Agent i’s output is a function of i’s private data and S

Lemma (Billboard lemma [HHRRW14])

Let S : D → S be (ε, δ)-differentially private. Let agent i have private data Di ∈ X, and let F : X × S → R. Then the mechanism M(D)i = F(Di, S(D)) is (ε, δ)-joint differentially private.

21
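A miniature instance of the pattern. The signal here (a Laplace-noised sum) and the per-agent rule F (proportionally scaling a demand to a capacity of 2.0) are hypothetical stand-ins, not from the talk.

```python
import random

# The billboard pattern in miniature: one DP signal is broadcast, and each
# agent's output combines only its own record with that public signal.

def billboard_mechanism(data, F, eps, rng):
    # Step 1: one ε-differentially private signal (Laplace mechanism;
    # sensitivity 1 if each record changes the sum by at most 1).
    noise = rng.expovariate(eps) - rng.expovariate(eps)   # Lap(1/ε) sample
    signal = sum(data) + noise
    # Step 2: agent i's output is F(own record, public signal); by the
    # billboard lemma the whole output vector is jointly DP.
    return [F(d, signal) for d in data]

rng = random.Random(1)
scale_to_capacity = lambda d, s: d * min(1.0, 2.0 / max(s, 1e-9))
outputs = billboard_mechanism([0.2, 0.5, 0.1], scale_to_capacity,
                              eps=1.0, rng=rng)
```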

slide-35
SLIDE 35

Our signal: noisy dual variables

Privacy for the dual player

◮ Recall the gradient is

ℓ_j = h_j − ∑_i g_j^(i) · x_t^(i)

◮ May depend on private data in a low-sensitivity way

22

slide-36
SLIDE 36

Our signal: noisy dual variables

Privacy for the dual player

◮ Recall the gradient is

ℓ_j = h_j − ∑_i g_j^(i) · x_t^(i)

◮ May depend on private data in a low-sensitivity way
◮ Use Laplace mechanism to add noise, giving a “noisy gradient”:

ℓ̂_j = h_j − ∑_i g_j^(i) · x_t^(i) + Lap(∆/ε)

◮ Noisy gradients satisfy standard differential privacy

22
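A quick sketch of the noisy gradient on a hypothetical one-constraint instance; Δ = 0.1 and ε = 1.0 are arbitrary illustrative choices. Lap(b) is sampled as the difference of two exponentials with mean b; `numpy.random.laplace` would serve equally well.

```python
import random

# Laplace-noised gradient: unbiased, so averages concentrate on the truth.

def lap(b, rng):
    # Difference of two Exponential(mean b) draws has the Laplace(b) law.
    return rng.expovariate(1 / b) - rng.expovariate(1 / b)

def noisy_gradient(x, g, h, delta, eps, rng):
    return [h[j] - sum(g[j][i] * x[i] for i in range(len(x)))
            + lap(delta / eps, rng)
            for j in range(len(h))]

rng = random.Random(0)
g, h = [[1.0, 1.0]], [1.0]
samples = [noisy_gradient([0.8, 0.8], g, h, delta=0.1, eps=1.0, rng=rng)[0]
           for _ in range(4000)]
mean = sum(samples) / len(samples)   # concentrates near the true value −0.6
```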

slide-37
SLIDE 37

Private action: best response to dual variables

(Joint) privacy for the primal player

◮ Best response problem:

max_{x∈S} L(x, λ_t) = max_{x∈S} [ ∑_i f^(i)·x^(i) + ∑_j λ_{j,t} (h_j − ∑_i g_j^(i)·x^(i)) ]

23
slide-38
SLIDE 38

Private action: best response to dual variables

(Joint) privacy for the primal player

◮ Best response problem:

max_{x∈S} L(x, λ_t) = max_{x∈S} [ ∑_i f^(i)·x^(i) + ∑_j λ_{j,t} (h_j − ∑_i g_j^(i)·x^(i)) ]

◮ Can optimize separately:

max_{x^(i)∈S^(i)} f^(i)·x^(i) − ∑_j λ_{j,t} (g_j^(i)·x^(i))

23

slide-39
SLIDE 39

Private action: best response to dual variables

(Joint) privacy for the primal player

◮ Best response problem:

max_{x∈S} L(x, λ_t) = max_{x∈S} [ ∑_i f^(i)·x^(i) + ∑_j λ_{j,t} (h_j − ∑_i g_j^(i)·x^(i)) ]

◮ Can optimize separately:

max_{x^(i)∈S^(i)} f^(i)·x^(i) − ∑_j λ_{j,t} (g_j^(i)·x^(i))

◮ Key point: optimization for x^(i) depends only on λ and functions of i’s private data (S^(i), f^(i), g^(i))

23
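A sketch of one agent's best response under an assumed box-shaped personal set S^(i) = [d^(i), 1] (the box and the numbers are illustrative; the slides leave S^(i) abstract). Since the objective is linear in x^(i), the maximizer is determined coordinate-wise by the sign of the price-adjusted value.

```python
# One agent's best response over a box personal set (assumed shape).
# Maximizing f^(i)·x − Σ_j λ_j (g_j^(i)·x) over a box: each coordinate
# goes to its upper bound iff its price-adjusted value is positive.

def best_response(f_i, g_i, lam, lo, hi):
    adjusted = [f_i[t] - sum(lam[j] * g_i[j][t] for j in range(len(lam)))
                for t in range(len(f_i))]
    return [hi[t] if adjusted[t] > 0 else lo[t] for t in range(len(f_i))]

f_i = [0.9, 0.3]                   # agent i's value per time slot
g_i = [[1.0, 1.0]]                 # unit contribution to one supply constraint
lo, hi = [0.2, 0.1], [1.0, 1.0]    # minimum requirements and caps
x_free   = best_response(f_i, g_i, [0.0], lo, hi)  # zero price: take everything
x_priced = best_response(f_i, g_i, [0.5], lo, hi)  # price 0.5 crowds out slot 2
```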

slide-40
SLIDE 40

The algorithm: PrivDuDe

◮ For iterations t = 1, . . . , T:

24

slide-41
SLIDE 41

The algorithm: PrivDuDe

◮ For iterations t = 1, . . . , T:
◮ For i = 1, . . . , n, compute best response:

x_t^(i) = argmax_{x ∈ S^(i)} f^(i)·x − ∑_j λ_{j,t} (g_j^(i)·x)

24

slide-42
SLIDE 42

The algorithm: PrivDuDe

◮ For iterations t = 1, . . . , T:
◮ For i = 1, . . . , n, compute best response:

x_t^(i) = argmax_{x ∈ S^(i)} f^(i)·x − ∑_j λ_{j,t} (g_j^(i)·x)

◮ For coupling constraints j = 1, . . . , k, compute noisy gradient:

ℓ̂_{j,t} = h_j − ∑_i g_j^(i)·x_t^(i) + Lap(∆/ε)

24

slide-43
SLIDE 43

The algorithm: PrivDuDe

◮ For iterations t = 1, . . . , T:
◮ For i = 1, . . . , n, compute best response:

x_t^(i) = argmax_{x ∈ S^(i)} f^(i)·x − ∑_j λ_{j,t} (g_j^(i)·x)

◮ For coupling constraints j = 1, . . . , k, compute noisy gradient:

ℓ̂_{j,t} = h_j − ∑_i g_j^(i)·x_t^(i) + Lap(∆/ε)

◮ Do gradient descent update:

λ_{t+1} = λ_t − η · ℓ̂_t

24

slide-44
SLIDE 44

The algorithm: PrivDuDe

◮ For iterations t = 1, . . . , T:
◮ For i = 1, . . . , n, compute best response:

x_t^(i) = argmax_{x ∈ S^(i)} f^(i)·x − ∑_j λ_{j,t} (g_j^(i)·x)

◮ For coupling constraints j = 1, . . . , k, compute noisy gradient:

ℓ̂_{j,t} = h_j − ∑_i g_j^(i)·x_t^(i) + Lap(∆/ε)

◮ Do gradient descent update:

λ_{t+1} = λ_t − η · ℓ̂_t

◮ Output: time averages (1/T) ∑_t x_t^(i) to agent i

24
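An end-to-end toy run of this loop: one time slot, one coupling constraint ∑_i x^(i) ≤ s, and assumed box personal sets [d^(i), 1]. Every number here (values, demands, η, T, Δ, ε) is an illustrative guess, not a setting from the paper.

```python
import random

# Toy run of the private dual-descent loop on one supply constraint.

def priv_dude(values, demands, supply, T=200, eta=0.05, delta=1.0, eps=5.0,
              seed=0):
    rng = random.Random(seed)
    lap = lambda b: rng.expovariate(1 / b) - rng.expovariate(1 / b)
    n = len(values)
    lam = 0.0
    avg = [0.0] * n
    for _ in range(T):
        # Best response: take a full unit when the value beats the price λ,
        # otherwise fall back to the minimum requirement (box set assumed).
        x = [1.0 if values[i] > lam else demands[i] for i in range(n)]
        # Noisy gradient of L(x, ·): slack of the supply constraint + noise.
        ell = supply - sum(x) + lap(delta / eps)
        lam = max(0.0, lam - eta * ell)      # price rises when over budget
        for i in range(n):
            avg[i] += x[i] / T               # output is the time average
    return avg, lam

alloc, price = priv_dude([0.9, 0.4, 0.7], [0.2, 0.5, 0.1], supply=2.0)
```

With these settings the time-averaged allocation should land close to the supply of 2.0, with the dual variable hovering near the value of the marginal customer.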

slide-45
SLIDE 45

Privacy guarantee

Theorem

PrivDuDe satisfies (ε, δ)-joint differential privacy. The mechanism that releases just the dual variables λ_t satisfies standard (ε, δ)-differential privacy.

25

slide-46
SLIDE 46

Accuracy guarantee

Theorem

PrivDuDe produces a solution x such that:

◮ it achieves objective at least OPT − α;
◮ it satisfies all personal constraints; and
◮ the total infeasibility over all coupling constraints is at most α,

where α = Õ(σk log(1/δ)/ε), and σ measures the sensitivity of the convex program.

26

slide-47
SLIDE 47

Wrapping up

27

slide-48
SLIDE 48

See paper for...

◮ Approximate truthfulness
◮ Exact feasibility

28

slide-49
SLIDE 49

Conclusion

Main ideas

◮ Equilibrium ↔ solution to convex program
◮ Joint differential privacy for separable convex programs

PrivDuDe

◮ Approximately solves separable convex programs
◮ Satisfies (joint) differential privacy
◮ Error/infeasibility linear in number of coupling constraints

29

slide-50
SLIDE 50

Open problems and future directions

Expanding the class of convex programs

◮ Can we handle something beyond separable convex programs?
◮ Terms depending on at most two agents?

Improving the accuracy

◮ Is linear dependence on number of constraints k necessary?
◮ What is the best dependence possible?

30

slide-51
SLIDE 51

Jointly Private Convex Programming

“PrivDuDe” Justin Hsu1, Zhiyi Huang2, Aaron Roth1, Steven Zhiwei Wu1

1University of Pennsylvania 2University of Hong Kong

January 10, 2016

31