Universal Multi-Party Poisoning Attacks
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed
Multi-Party Learning
[Figure: data providers P_1, ..., P_m, each holding a distribution D_1, ..., D_m, jointly run a protocol that outputs a model.]
Multi-Party Learning (Round j)
[Figure: in round j, each data provider P_i receives Model_{j-1}, draws fresh data from its distribution D_i, and sends it to the protocol; the protocol aggregates the contributions into Model_j, which is sent back to every party.]
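The round structure on these slides can be sketched in a few lines. This is a hypothetical toy loop, not the paper's protocol: `update`, `init_model`, and the coin-flip "distributions" are all placeholders chosen only to make the skeleton runnable.

```python
import random

def run_protocol(distributions, num_rounds, update, init_model):
    """Toy m-party learning loop: in round j, every party P_i draws a
    fresh example from its distribution D_i, and a central aggregator
    folds all m examples into the next model (Model_{j-1} -> Model_j).
    `update` and `init_model` are hypothetical placeholders."""
    model = init_model
    for _ in range(num_rounds):
        batch = [draw() for draw in distributions]  # one sample per party
        model = update(model, batch)
    return model

# Toy instantiation: parties report biased coin flips, the "model" is an
# exponentially weighted running mean of the batch averages.
dists = [lambda b=b: random.random() < b for b in (0.4, 0.5, 0.6)]
mean_update = lambda m, batch: 0.9 * m + 0.1 * (sum(batch) / len(batch))
final = run_protocol(dists, num_rounds=100, update=mean_update, init_model=0.0)
```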
Poisoning in Multi-Party Learning
An adversary (partially) controls a number of the data providers.
[Figure: same setting as before, with some providers P_i and their distributions D_i under adversarial control.]
(k, p)-Poisoning Attack Model
• k (out of m) of the parties become corrupted
• Each corrupted party P_i samples from a different distribution D̃_i, in which each sample is tampered with probability at most p (so D̃_i remains (1 − p)-close to D_i)
• Special cases:
  • k = m: p-tampering [ACMPS14] [MM17] [MM18]
  • p = 1: static corruption in MPC (crypto)
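A corrupted party's distribution D̃_i can be sketched as a simple mixture: with probability 1 − p an honest sample, with probability p an adversarial one. The draw functions below are illustrative placeholders, not part of the paper.

```python
import random

def corrupted_sample(honest_draw, adversarial_draw, p):
    """D~_i for a corrupted party: with probability p the adversary
    supplies the sample, otherwise an honest sample from D_i is sent.
    This keeps D~_i within statistical distance p of D_i."""
    if random.random() < p:
        return adversarial_draw()
    return honest_draw()

# Toy check: honest samples are fair coin flips, the adversary always
# sends 1, so the expected fraction of 1s is p*1 + (1-p)*0.5 = 0.75.
samples = [corrupted_sample(lambda: random.randint(0, 1), lambda: 1, p=0.5)
           for _ in range(10_000)]
```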
What is the inherent power of (k, p)-poisoning adversaries against multi-party learning?
Main Theorem: Power of (k, p)-Poisoning
Let B be a bad property of the model h.
• E.g., B(h) = 1 if h misclassifies a specific instance x.
For any m-party learning protocol there is a (k, p)-poisoning adversary that increases Pr[B] from μ to μ^(1 − pk/m).

Pr[B] before attack   p     k     Pr[B] after attack
5%                    1/2   m/2   11%
5%                    1/2   m     22%
5%                    1     m/2   22%
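The rows of the table follow directly from the theorem's bound μ^(1 − pk/m); a one-line helper reproduces them:

```python
def poisoned_bound(mu, p, k, m):
    """Pr[B] after a (k, p)-poisoning attack, per the main theorem:
    mu is raised to the power 1 - p*k/m (k and m enter only via k/m)."""
    return mu ** (1 - p * k / m)

print(round(poisoned_bound(0.05, 0.5, 1, 2), 2))  # p=1/2, k=m/2 -> 0.11
print(round(poisoned_bound(0.05, 0.5, 1, 1), 2))  # p=1/2, k=m   -> 0.22
print(round(poisoned_bound(0.05, 1.0, 1, 2), 2))  # p=1,   k=m/2 -> 0.22
```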
Features of the Attack
• Universal: provably works against any learning protocol
  • In contrast with: [Bagdasaryan et al. 2018; Bhagoji et al. 2018]
• Clean label: only uses correctly labeled samples
  • Similar to: [Mahloujifar et al. 2017; Shafahi et al. 2018]
• Polynomial time
  • Similar to: [Mahloujifar and Mahmoody 2019]
Ideas Behind the Attack
• Main idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
• New biasing model: generalized p-tampering
  • Let f : Ω_1 × ... × Ω_n → {0,1}
  • Input blocks v_1, v_2, ..., v_n are sampled one by one in an online way:
    v_i = an honest sample u_i with marginal probability 1 − p,
    or a value chosen by the tampering adversary with marginal probability p
• Our generalized p-tampering attack builds on ideas from coin-tossing attacks [BOL89, IH14]
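The biasing idea can be illustrated with a toy p-tampering simulation. This is a hypothetical greedy tamperer over uniform bits, not the paper's attack: whenever it controls a block, it brute-forces the conditional probability of f = 1 over all suffixes (feasible only for tiny n) and picks the better bit.

```python
import itertools
import random

def greedy_tamper(f, n, p, trials=4000):
    """Toy p-tampering attack on f: {0,1}^n -> {0,1}. Blocks arrive one
    by one; with probability p the adversary chooses the block, greedily
    maximizing the conditional probability of f = 1."""
    def cond_prob(prefix):
        rest = n - len(prefix)
        vals = [f(prefix + list(s))
                for s in itertools.product([0, 1], repeat=rest)]
        return sum(vals) / len(vals)

    hits = 0
    for _ in range(trials):
        x = []
        for _ in range(n):
            if random.random() < p:   # adversary tampers this block
                x.append(max([0, 1], key=lambda b: cond_prob(x + [b])))
            else:                     # honest uniform block
                x.append(random.randint(0, 1))
        hits += f(x)
    return hits / trials

# f = AND of 3 bits: honestly Pr[f=1] = 1/8; with p = 1/2 the greedy
# tamperer pushes the empirical rate well above the honest 1/8.
f_and = lambda x: int(all(x))
biased = greedy_tamper(f_and, 3, 0.5)
```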
Summary
We show poisoning attacks against multi-party learning protocols that:
• Are universal: provably apply to any multi-party learning protocol
• Are clean label: only use samples with correct labels
• Run in polynomial time
• Increase the probability of any chosen bad property

Poster #160