Universal Multi-Party Poisoning Attacks: Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed (PowerPoint PPT Presentation)



SLIDE 1

Universal Multi-Party Poisoning Attacks

Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

SLIDE 2

Multi-Party Learning

[Diagram: data providers P_1, …, P_n, each sampling from its own distribution D_1, …, D_n, feed their data into a joint protocol that outputs the model.]

SLIDE 3

Multi-Party Learning (Round j)

[Diagram: the same setup at the start of round j; the protocol holds Model_{j−1}.]

SLIDE 4

Multi-Party Learning (Round j)

[Animation frame: same diagram as the previous slide.]

SLIDE 5

Multi-Party Learning (Round j)

[Diagram: in round j, one data provider sends its samples (u_k, v_k) to the protocol, which still holds Model_{j−1}.]

SLIDE 6

Multi-Party Learning (Round j)

[Diagram: using the samples (u_k, v_k), the protocol updates Model_{j−1} to Model_j.]

SLIDE 7

Multi-Party Learning (Round j)

[Diagram: the updated Model_j is sent back to all data providers.]
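The round structure shown in these diagrams can be sketched in code. This is purely illustrative: the theorem applies to any such protocol, and the running-mean "model" and round-robin provider order below are arbitrary stand-ins, not anything specified on the slides.

```python
import random

# Illustrative n-party round loop matching the diagrams: in round j one
# provider samples from its distribution, the protocol updates
# Model_{j-1} to Model_j, and Model_j is then shared with everyone.
# The "model" here is just a running mean of the samples seen so far.

def run_protocol(distributions, rounds, rng):
    model = 0.0                                # Model_0
    for j in range(1, rounds + 1):
        provider = distributions[(j - 1) % len(distributions)]
        sample = provider(rng)                 # provider's data for round j
        model = model + (sample - model) / j   # Model_{j-1} -> Model_j
        # Model_j would now be broadcast to all providers.
    return model

rng = random.Random(0)
dists = [lambda r: r.gauss(0.0, 1.0), lambda r: r.gauss(1.0, 1.0)]
print(run_protocol(dists, rounds=100, rng=rng))  # roughly the overall mean 0.5
```

The poisoning attacks below work at exactly this interface: a corrupted provider changes what `provider(rng)` returns, and nothing else.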

SLIDE 8

Poisoning in Multi-Party Learning

[Diagram: the same n-party setup, now with an adversary influencing some of the data providers.]

An adversary (partially) controls a number of the data providers.

SLIDE 9

(k, p)-Poisoning Attack Model

k (out of n) of the parties become corrupted. Each corrupted party P_i samples from a different distribution D'_i with statistical distance Δ(D_i, D'_i) ≤ p.

k = n → p-Tampering [ACMPS14] [MM17] [MM18]
p = 1 → Static Corruption in MPC (crypto)

SLIDE 10

What is the inherent power of (k, p)-poisoning adversaries against multi-party learning?

SLIDE 11

Main Theorem: Power of (k, p)-Poisoning

Let B be a bad property of the model h

  • E.g., B(h) = 1 if h misclassifies a specific test instance

For any n-party learning protocol there is a (k, p)-poisoning adversary that increases Pr[B] from

μ → μ^(1 − p·k/n)

SLIDE 12

Main Theorem: Power of (k, p)-Poisoning

Let B be a bad property of the model h

  • E.g., B(h) = 1 if h misclassifies a specific test instance

For any n-party learning protocol there is a (k, p)-poisoning adversary that increases Pr[B] from

μ → μ^(1 − p·k/n)

Pr[B] before attack | p   | k   | Pr[B] after attack
5%                  | 1/2 | n/2 | 11%
5%                  | 1/2 | n   | 22%
5%                  | 1   | n/2 | 22%
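The table's rows follow directly from the μ^(1 − p·k/n) bound; a quick numeric check (the helper function is hypothetical, written for this check only):

```python
# Numeric check of the mu**(1 - p*k/n) bound behind the table.
# k is given as a fraction of n (k = n/2 -> k_frac = 0.5).

def attacked_prob(mu, p, k_frac):
    """Lower bound on Pr[bad property] after a (k, p)-poisoning attack."""
    return mu ** (1 - p * k_frac)

for p, k_frac in [(0.5, 0.5), (0.5, 1.0), (1.0, 0.5)]:
    print(f"p={p}, k/n={k_frac}: 5% -> {attacked_prob(0.05, p, k_frac):.0%}")
# p=0.5, k/n=0.5: 5% -> 11%
# p=0.5, k/n=1.0: 5% -> 22%
# p=1.0, k/n=0.5: 5% -> 22%
```

Note that p and k enter the exponent only through the product p·k, which is why the last two rows coincide.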

SLIDE 13

Features of Attack

  • Universal: provably works against any learning protocol
    • In contrast with: [Bagdasaryan et al. 2018; Bhagoji et al. 2018]
  • Clean label: only uses correct labels
    • Similar to: [Mahloujifar et al. 2017; Shafahi et al. 2018]
  • Polynomial time
    • Similar to: [Mahloujifar and Mahmoody 2019]
SLIDE 14

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
SLIDE 15

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: Generalized p-Tampering
SLIDE 16

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: Generalized p-Tampering

Let f : U_1 × ⋯ × U_n → {0, 1}

SLIDE 17

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: Generalized p-Tampering

Let f : U_1 × ⋯ × U_n → {0, 1}. Input blocks u_1, u_2, …, u_n are sampled one by one in an online way:

SLIDE 18

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: Generalized p-Tampering

Let f : U_1 × ⋯ × U_n → {0, 1}. Input blocks u_1, u_2, …, u_n are sampled one by one in an online way:

u_i = { an honest sample from U_i's marginal,     with probability 1 − p
      { a value chosen by the tampering adversary, with probability p

SLIDE 19

Ideas Behind Attack

  • Main Idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: Generalized p-Tampering

Let f : U_1 × ⋯ × U_n → {0, 1}. Input blocks u_1, u_2, …, u_n are sampled one by one in an online way:

u_i = { an honest sample from U_i's marginal,     with probability 1 − p
      { a value chosen by the tampering adversary, with probability p

Our generalized p-tampering attack is based on ideas from coin-tossing attacks [BOL89, IH14].
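A toy simulation makes the biasing model concrete. Everything here is illustrative and simplified, not the paper's attack: the function f (majority of fair bits), the block sizes, and the greedy tampering rule are all chosen just to show a p-tampering adversary pushing Pr[f = 1] above its honest value of 1/2.

```python
import itertools
import random

# Toy p-tampering simulation. The "bad property" f is majority-of-ones
# over n fair bits, so honestly Pr[f = 1] = 1/2. With probability p the
# adversary controls the current block and greedily picks the value
# maximizing the exact conditional expectation E[f | prefix].

def f(bits):
    return int(sum(bits) > len(bits) // 2)

def cond_exp(prefix, n):
    """Exact E[f | prefix] when the remaining blocks are uniform."""
    rest = n - len(prefix)
    total = sum(f(prefix + list(tail))
                for tail in itertools.product([0, 1], repeat=rest))
    return total / 2 ** rest

def tampered_run(n, p, rng):
    bits = []
    for _ in range(n):
        if rng.random() < p:   # tampering opportunity
            bits.append(max((0, 1), key=lambda b: cond_exp(bits + [b], n)))
        else:                  # honest uniform block
            bits.append(rng.randint(0, 1))
    return f(bits)

rng = random.Random(0)
n, p, trials = 5, 0.5, 2000
biased = sum(tampered_run(n, p, rng) for _ in range(trials)) / trials
print(biased)  # well above the honest probability 1/2
```

The actual generalized p-tampering attack has to work without brute-force enumeration and over arbitrary block distributions; this sketch only conveys the online, prefix-conditioned structure of the biasing.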

SLIDE 20

Summary

We show poisoning attacks against multi-party learning protocols:

  • Universal: provably apply to any multi-party learning protocol
  • Clean label: only use samples with correct labels
  • Run in polynomial time
  • Increase the probability of any chosen bad property

Poster #160