slide-1
SLIDE 1

CSC2412: Private Multiplicative Weights

Sasho Nikolov

slide-2
SLIDE 2

Query Release

slide-3
SLIDE 3

Reminder: Query Release

Recall the query release problem:

  • Dataset X = (x1, . . . , xn) ∈ Xⁿ and a workload Q = {q1, . . . , qk} of k counting queries q : X → {0, 1}, where

    Q(X) = (q1(X), . . . , qk(X)) ∈ [0, 1]^k.

  • Compute, with (ε, δ)-DP, some Y ∈ ℝ^k so that

    max_{i=1,...,k} |Yi − qi(X)| ≤ α,

    with probability 1 − β.
slide-4
SLIDE 4

Motivating example

ℓ-wise marginal queries:

  • X = {0, 1}^d, i.e., d binary attributes
  • a query q_{S,a} for any S = {i1, . . . , iℓ} ⊆ [d] and a = (a_{i1}, . . . , a_{iℓ}):

    q_{S,a}(x) = 1 if x_{ij} = a_{ij} for all ij ∈ S, and 0 otherwise.

  • E.g., "smoker and female?", "smoker and over 30?", "smoker and heart disease?", etc.
  • Workload Q = all ℓ-wise marginal queries, so k = (d choose ℓ) · 2^ℓ.
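These definitions can be made concrete with a toy example (the dataset and helper names below are my own, not from the course):

```python
import itertools

def q_marginal(S, a, x):
    """l-wise marginal query q_{S,a}: 1 if x agrees with a on every coordinate in S."""
    return 1 if all(x[i] == a_i for i, a_i in zip(S, a)) else 0

def q_on_dataset(S, a, X):
    """Counting query value q_{S,a}(X) = (1/n) * sum_i q_{S,a}(x_i), a number in [0, 1]."""
    return sum(q_marginal(S, a, x) for x in X) / len(X)

# Toy dataset with d = 3 binary attributes (say: smoker, female, over 30).
X = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

# "smoker and female?": S = {0, 1}, a = (1, 1).
print(q_on_dataset((0, 1), (1, 1), X))  # 2 of 4 rows match -> 0.5

# Workload size for l = 2: (d choose l) * 2^l queries.
d, l = 3, 2
k = len(list(itertools.combinations(range(d), l))) * 2 ** l
print(k)  # 12
```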

slide-5
SLIDE 5

What do we know?

  • ε-DP: Using the Laplace noise mechanism, we can answer k counting queries with error ≤ α (with constant probability) when

    n ≳ k log(k) / (αε).

  • (ε, δ)-DP: Using the Gaussian noise mechanism:

    n ≳ √(k log(1/δ) log(k)) / (αε).
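The Laplace baseline above can be sketched as follows (function names are mine; the ε budget is split evenly across the k queries via basic composition):

```python
import random

def laplace_counting_answers(X, queries, eps, rng=random):
    """Answer k counting queries under eps-DP: each q_i(X) has sensitivity 1/n,
    so adding Lap(k/(eps*n)) noise to each answer is (eps/k)-DP per query,
    hence eps-DP in total by basic composition."""
    n, k = len(X), len(queries)
    scale = k / (eps * n)  # Laplace noise scale per query
    answers = []
    for q in queries:
        true_answer = sum(q(x) for x in X) / n
        # Lap(scale) sampled as the difference of two exponentials.
        noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
        answers.append(true_answer + noise)
    return answers

X = list(range(10))
queries = [lambda x: x % 2, lambda x: int(x >= 5)]
noisy = laplace_counting_answers(X, queries, eps=1.0)
```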

slide-6
SLIDE 6

Private Multiplicative Weights

We will see an algorithm that achieves:

  • under ε-DP, error α with probability 1 − β when

    n ≳ log(k) log|X| / (α³ ε).

  • under (ε, δ)-DP, error α with probability 1 − β when

    n ≳ log(k) √(log|X| log(1/δ)) / (α² ε).

(Assuming the failure probability β is constant.)

slide-7
SLIDE 7

Learning a distribution

slide-8
SLIDE 8

A probability view

We can think of X = {x1, . . . , xn} (allowed to be a multiset) as a probability distribution p over X:

    P_{x∼p}(x = y) = |{i : xi = y}| / n.

Then, for any counting query q : X → {0, 1},

    q(X) = (1/n) Σ_{i=1}^n q(xi) = E_{x∼p} q(x) =: q(p),

i.e., q(X) is the expectation of q under the empirical distribution of X.
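A minimal sketch of this correspondence (names are mine): the empirical distribution of a multiset, and a counting query evaluated as an expectation under it.

```python
from collections import Counter

def empirical_distribution(X):
    """Empirical distribution p of the (multiset) dataset X: p(y) = |{i : x_i = y}| / n."""
    n = len(X)
    return {y: c / n for y, c in Counter(X).items()}

def q_of_p(q, p):
    """q(p) = E_{x ~ p} q(x): the expectation of q under the distribution p."""
    return sum(q(x) * px for x, px in p.items())

X = ["a", "b", "a", "c"]
p = empirical_distribution(X)
q = lambda x: 1 if x == "a" else 0  # counting query "is x equal to a?"

# q(X) computed directly equals q(p) computed as an expectation.
print(sum(q(x) for x in X) / len(X), q_of_p(q, p))  # 0.5 0.5
```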

slide-9
SLIDE 9

Learning a distribution

Task: Learn an approximation p̂ of the empirical distribution p such that

    ∀ q ∈ Q : |q(p̂) − q(p)| ≤ α.

This is the query release problem over distributions on X and a workload Q of k queries: if we can do this, we can release the answers q(p̂) for all q ∈ Q.

Trick (again): we will assume that if q is asked, then 1 − q is also asked. This is enough to make sure that the one-sided guarantee q(p̂) − q(p) ≤ α for all q ∈ Q implies the two-sided bound, since (1 − q)(p̂) − (1 − q)(p) = −(q(p̂) − q(p)).
slide-10
SLIDE 10

Bounded mistake learner

Distribution learning algorithm U (an update algorithm; it does not know p):

  • takes a p̂ and a q on which p̂ makes a mistake: q(p̂) − q(p) > α
  • returns a new, improved distribution p̂′ = U(q, p̂)

Suppose that p̂_0 = uniform over X and p̂_t = U(q_t, p̂_{t−1}), where each q_t is a query on which p̂_{t−1} makes a mistake. U makes at most L mistakes if any such sequence p̂_0, p̂_1, . . . , p̂_ℓ must have ℓ ≤ L.

After making L mistakes (and L improvements), p̂_L must be accurate for all q.
slide-11
SLIDE 11

Multiplicative Weights Learner

Theorem. There exists a distribution learner U that makes at most

    L ≤ 4 ln|X| / α²

mistakes.

slide-12
SLIDE 12

The Learner

U(q, p̂):
    for all x ∈ X: p̃(x) = p̂(x) · e^(−η q(x))
    p̂′(x) = p̃(x) / Σ_{y∈X} p̃(y)
    return p̂′

This is the multiplicative weight update algorithm.

Reminder: U is called when q(p̂) − q(p) > α, i.e., p̂ gives too much weight to the x's with q(x) = 1. The update decreases the weight of every such x (η > 0 is a step-size parameter, to be set later), and then normalizes to get a probability distribution.
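The update can be implemented directly; this is a sketch (names mine) with distributions represented as dicts:

```python
import math

def mw_update(q, p_hat, eta):
    """One multiplicative weights step U(q, p_hat): multiply the weight of each
    x by exp(-eta * q(x)) -- decreasing weight exactly where q(x) = 1 -- and
    renormalize to get a probability distribution."""
    p_tilde = {x: w * math.exp(-eta * q(x)) for x, w in p_hat.items()}
    Z = sum(p_tilde.values())
    return {x: w / Z for x, w in p_tilde.items()}

# Toy example (mine): uniform p_hat over {0, 1, 2, 3}; q flags x < 2.
p_hat = {x: 0.25 for x in range(4)}
q = lambda x: 1 if x < 2 else 0
p_new = mw_update(q, p_hat, eta=0.5)
# Mass moves away from {0, 1} (where q = 1) toward {2, 3}.
```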

slide-13
SLIDE 13

Why it works

KL-divergence (with natural log): D(p‖p̂_t) = Σ_{x∈X} p(x) ln( p(x) / p̂_t(x) ).

  • 1. D(p‖p̂_0) ≤ ln|X|, because p̂_0 is uniform:
       D(p‖p̂_0) = Σ_x p(x) ln( p(x) · |X| ) = ln|X| − H(p) ≤ ln|X|.
  • 2. D(p‖p̂_t) ≥ 0 for all t.
  • 3. D(p‖p̂_t) − D(p‖p̂_{t−1}) ≤ −(η/2) ( q_t(p̂_{t−1}) − q_t(p) ) + η²/4.

Every q_t used in an update satisfies q_t(p̂_{t−1}) − q_t(p) > α, so setting η = α makes each mistake decrease the divergence by more than α²/2 − α²/4 = α²/4. Combining with 1. and 2., the number of mistakes is at most 4 ln|X| / α².
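The mistake bound L ≤ 4 ln|X|/α² can be sanity-checked with a small self-contained simulation (toy domain, workload, and target distribution are my own choices): repeatedly feed the learner a query on which it errs by more than α, and count the updates.

```python
import itertools
import math

def mw_update(q_set, p_hat, eta):
    # Decrease the weight of every x in q_set (where q(x) = 1), then renormalize.
    p_tilde = {x: w * math.exp(-eta * (x in q_set)) for x, w in p_hat.items()}
    Z = sum(p_tilde.values())
    return {x: w / Z for x, w in p_tilde.items()}

# Toy target: an "empirical distribution" p on a domain of size |X| = 4.
domain = range(4)
p = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}

# Workload: all subset-indicator counting queries (closed under complement).
workload = [frozenset(s) for r in range(5) for s in itertools.combinations(domain, r)]

alpha = 0.2
eta = alpha
p_hat = {x: 0.25 for x in domain}  # uniform start
err = lambda q, d: sum(d[x] for x in q) - sum(p[x] for x in q)

mistakes = 0
for _ in range(1000):  # safety cap; the theorem says we stop far earlier
    bad = [q for q in workload if err(q, p_hat) > alpha]
    if not bad:
        break
    p_hat = mw_update(bad[0], p_hat, eta)
    mistakes += 1

bound = 4 * math.log(4) / alpha ** 2  # 4 ln|X| / alpha^2
print(mistakes, mistakes <= bound)
```

Because the workload is closed under complement, "no one-sided mistake remains" means every query's error is at most α in absolute value.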

slide-14
SLIDE 14

Private Multiplicative Weights

slide-15
SLIDE 15

Idea for private algorithm

  • Start with t = 0, p̂_0 uniform.
  • Privately find the most wrongly answered query q ∈ Q (a query with large error).
  • If q(p̂_t) − q(p) < α, output p̂_t.
  • Else set p̂_{t+1} = U(q, p̂_t) and increase t.

Since U makes at most L mistakes, the loop terminates after at most L = 4 ln|X|/α² iterations.

slide-16
SLIDE 16

The algorithm in detail

p̂_0 = uniform over X
for t = 1 . . . L:
    sample q ∈ Q w/ prob ∝ exp( ε0 n ( q(p̂_{t−1}) − q(p) ) / 2 )
    Y_t = q(p) + Z_t,  Z_t ∼ Lap( 1/(ε0 n) )
    if q(p̂_{t−1}) − Y_t > 2α:
        p̂_t = U(q, p̂_{t−1})
    else:
        output p̂_{t−1} and halt

  • ε0 is a privacy parameter, to be set in the privacy analysis.
  • The sampling step is the exponential mechanism with score s(X, q) = q(p̂_{t−1}) − q(p), which has sensitivity 1/n; we want to select a query with approximately worst (one-sided) error.
  • Y_t is the Laplace mechanism with privacy parameter ε0.
  • If |Z_t| ≤ α: when the test passes, q(p̂_{t−1}) − q(p) ≥ q(p̂_{t−1}) − Y_t − |Z_t| > α, so q is a genuine mistake and the update is valid; when the test fails, q(p̂_{t−1}) − q(p) ≤ q(p̂_{t−1}) − Y_t + |Z_t| ≤ 3α.
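The loop above can be sketched in code. Helper names, toy parameters, and the subset-query representation below are my own; distributions are dicts over the domain.

```python
import itertools
import math
import random

def sample_laplace(scale, rng):
    # Lap(scale) as the difference of two exponential samples.
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def private_mw(p, workload, n, alpha, eps0, L, eta, rng):
    """Sketch of the private MW loop: exponential mechanism to pick a high-error
    query, Laplace noise on its true answer, and a multiplicative weights
    update whenever a mistake is detected."""
    q_val = lambda q, d: sum(d[x] for x in q)  # q(d) for a subset-indicator query q
    p_hat = {x: 1 / len(p) for x in p}         # p_hat_0 = uniform over X
    for t in range(1, L + 1):
        # Exponential mechanism: score s(X, q) = q(p_hat) - q(p), sensitivity 1/n.
        scores = [q_val(q, p_hat) - q_val(q, p) for q in workload]
        q = rng.choices(workload, weights=[math.exp(eps0 * n * s / 2) for s in scores])[0]
        # Laplace mechanism on the selected query's true answer.
        Y = q_val(q, p) + sample_laplace(1 / (eps0 * n), rng)
        if q_val(q, p_hat) - Y > 2 * alpha:
            # Mistake detected: multiplicative weights update, then renormalize.
            p_hat = {x: w * math.exp(-eta * (x in q)) for x, w in p_hat.items()}
            Z = sum(p_hat.values())
            p_hat = {x: w / Z for x, w in p_hat.items()}
        else:
            return p_hat  # no large error found: release p_hat
    return p_hat

# Toy run (parameters mine): domain {0,...,3}, all subset queries, alpha = 0.2.
p = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
workload = [frozenset(s) for r in range(5) for s in itertools.combinations(range(4), r)]
alpha = 0.2
L = int(4 * math.log(4) / alpha ** 2) + 1
p_hat = private_mw(p, workload, n=1000, alpha=alpha, eps0=0.05, L=L,
                   eta=alpha, rng=random.Random(0))
```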

slide-17
SLIDE 17

Privacy analysis

Approach: bound the privacy loss per iteration; use the composition theorem to bound the total privacy loss.

  • Privacy loss per iteration: the exponential mechanism is ε0-DP and the Laplace mechanism is ε0-DP, so each iteration is 2ε0-DP by composition.
  • Total of ≤ L iterations → total privacy loss ≤ 2Lε0, i.e., the algorithm is 2Lε0-DP.
  • Set ε0 = ε/(2L) = εα²/(8 ln|X|) to get ε-DP.

slide-18
SLIDE 18

Accuracy analysis

1) We want that, with probability ≥ 1 − β, |Z_t| ≤ α for every round t ≤ L (q_t being the query in round t). Since Z_t ∼ Lap(1/(ε0 n)),

    P(|Z_t| ≥ α) = e^{−ε0 n α},

so, by a union bound over the ≤ L adaptive queries to the Laplace mechanism, it is enough to have

    n ≥ ln(L/β)/(ε0 α) = 2L ln(L/β)/(ε α).

2) We also want that, with probability ≥ 1 − β, at every iteration the sampled query q is approximately the worst:

    q(p̂_{t−1}) − q(p) ≥ max_{q′∈Q} ( q′(p̂_{t−1}) − q′(p) ) − α.

By the exponential mechanism's utility guarantee (and a union bound over the L iterations and k queries), it is enough to have

    n ≥ 2 ln(kL/β)/(ε0 α) = 4L ln(kL/β)/(ε α).

Plugging in L = 4 ln|X|/α² gives n ≳ log(k) log|X|/(α³ ε) for constant β.
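The Laplace-tail step of the accuracy analysis can be checked numerically; the concrete parameter values below are my own toy choices, not from the course.

```python
import math

# Each Z_t ~ Lap(1/(eps0*n)), so P(|Z_t| >= alpha) = exp(-eps0 * n * alpha).
# A union bound over the L rounds needs exp(-eps0 * n * alpha) <= beta / L,
# i.e. n >= ln(L/beta) / (eps0 * alpha).
def n_for_laplace_tails(alpha, beta, eps0, L):
    return math.log(L / beta) / (eps0 * alpha)

# Toy numbers (mine): alpha = 0.1, beta = 0.05, |X| = 2**20, eps = 1.
alpha, beta = 0.1, 0.05
L = 4 * math.log(2 ** 20) / alpha ** 2   # mistake bound 4 ln|X| / alpha^2
eps0 = 1.0 / (2 * L)                     # privacy split eps0 = eps / (2L)
n = n_for_laplace_tails(alpha, beta, eps0, L)
print(round(n))
```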