Universal Multi-Party Poisoning Attacks



  1. Universal Multi-Party Poisoning Attacks. Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

  2. Multi-Party Learning. Distributions D_1, …, D_n; data providers P_1, …, P_n; together the parties train a model.

  3-7. Multi-Party Learning (Round j) [animation build]. In round j, the active data provider P_i draws fresh samples from its distribution D_i, receives Model j-1, and replies with its update; the protocol then produces Model j, which is sent back to all data providers.
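The round structure on slides 3-7 can be sketched as a simple loop. This is a minimal, hypothetical protocol (names and the running-mean update are illustrative, not the paper's construction): in each round one provider contributes a fresh sample from its distribution and the shared model is updated.

```python
import random

def multi_party_learning(distributions, n_rounds, seed=0):
    """Hypothetical sketch of the round structure: in round j, one data
    provider P_i draws a fresh sample from its distribution D_i, receives
    Model j-1, and sends an update; Model j is then shared with everyone."""
    rng = random.Random(seed)
    model = 0.0  # Model 0: here just a running mean of all samples seen
    for j in range(1, n_rounds + 1):
        i = (j - 1) % len(distributions)  # provider P_i is active in round j
        sample = distributions[i](rng)    # fresh draw from D_i
        model += (sample - model) / j     # P_i's update turns Model j-1 into Model j
    return model

# Two honest providers whose distributions D_1, D_2 both have mean ~0.5
dists = [lambda r: r.random(), lambda r: r.random()]
final_model = multi_party_learning(dists, n_rounds=1000)
```

The point of the sketch is only that the final model is a function of the whole sequence of per-round samples, which is what the attack later exploits.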

  8. Poisoning in Multi-Party Learning. An adversary (partially) controls a number of the data providers P_1, …, P_n.

  9. The (k, p)-Poisoning Attack Model. k (out of n) of the parties become corrupted. Each corrupted party P_i samples from a different distribution D'_i with d(D'_i, D_i) ≤ p. Special cases: k = n gives p-tampering [ACMPS14] [MM17] [MM18]; p = 1 gives static corruption in MPC (crypto).
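One concrete way to realize a corrupted provider in this attack model is to sample from a mixture (1 - p)·D_i + p·Q, which automatically has statistical distance at most p from the honest D_i. The sketch below is illustrative (the distributions and names are assumptions, not the paper's attack):

```python
import random

def poisoned_sampler(honest, adversarial, p, rng):
    # Realizes a corrupted provider P_i by sampling from the mixture
    # (1 - p) * D_i + p * Q, where Q is a distribution of the adversary's
    # choosing.  Any such mixture satisfies d(D'_i, D_i) <= p.
    if rng.random() < p:
        return adversarial(rng)   # poisoned draw from Q
    return honest(rng)            # honest draw from D_i

rng = random.Random(1)
honest = lambda r: r.random()             # hypothetical honest D_i on [0, 1)
adversarial = lambda r: 1.0 + r.random()  # adversary's Q, supported on [1, 2)
draws = [poisoned_sampler(honest, adversarial, p=0.25, rng=rng)
         for _ in range(10000)]
poisoned_frac = sum(1 for x in draws if x >= 1.0) / len(draws)  # about p
```

Roughly a p-fraction of the corrupted party's samples end up adversarial, while the rest remain honest draws.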

  10. What is the inherent power of (k, p)-poisoning adversaries against multi-party learning?

  11-12. Main Theorem: The Power of (k, p)-Poisoning. Let B be a bad property of the model M, e.g. B(M) = 1 if M misclassifies a specific instance x. For any n-party learning protocol, there is a (k, p)-poisoning adversary that increases Pr[B] from μ to μ^(1 - kp/n). Numeric examples:

  Pr[B] before attack | p   | k   | Pr[B] after attack
  5%                  | 1/2 | n/2 | 11%
  5%                  | 1/2 | n   | 22%
  5%                  | 1   | n/2 | 22%
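The rows of the table follow directly from the theorem's bound μ^(1 - kp/n); the helper below just evaluates it (the function name is illustrative):

```python
def attacked_prob(mu, k_over_n, p):
    """Main theorem: a (k, p)-poisoning adversary can raise the probability
    of a bad property B from mu to mu ** (1 - k*p/n)."""
    return mu ** (1 - k_over_n * p)

# Reproduce the table rows (mu = 5% before the attack)
row1 = attacked_prob(0.05, k_over_n=0.5, p=0.5)  # k = n/2, p = 1/2 -> ~11%
row2 = attacked_prob(0.05, k_over_n=1.0, p=0.5)  # k = n,   p = 1/2 -> ~22%
row3 = attacked_prob(0.05, k_over_n=0.5, p=1.0)  # k = n/2, p = 1   -> ~22%
```

Note that corrupting half the parties fully (p = 1) and corrupting all parties halfway (p = 1/2) give the same exponent, since only the product kp/n matters.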

  13. Features of the Attack
  • Universal: provably works against any learning protocol
    • In contrast with: [Bagdasaryan et al. 2018; Bhagoji et al. 2018]
  • Clean label: only uses correct labels
    • Similar to: [M et al. 2017; Shafahi et al. 2018]
  • Polynomial time
    • Similar to: [M and Mahmoody 2019]

  14-19. Ideas Behind the Attack [animation build]
  • Main idea: treat the protocol as a random process and run a biasing attack
  • The bad property is a function over the random process
  • We want to bias that function, similar to attacks on coin tossing
  • New biasing model: generalized p-tampering. Let f : U_1 × … × U_n → {0,1}. Input blocks u_1, u_2, …, u_n are sampled one by one in an online way:
      u_i = a fresh sample from U_i, with marginal probability 1 - p
            chosen by the tampering adversary, with marginal probability p
  • Our generalized p-tampering attack builds on ideas from coin-tossing attacks [BOL89, IH14]
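The biasing idea can be sketched exactly on a toy instance. Below, the blocks are uniform bits (the generalized model allows arbitrary block distributions) and the adversary is greedy: whenever a block is tampered (probability p), it picks the value maximizing the conditional expectation of f. The recursion computes Pr[f = 1] exactly; names and the XOR target are illustrative, not the paper's construction.

```python
def tampered_value(f, prefix, n, p):
    """Exact Pr[f = 1] over n uniform bit blocks u_1..u_n under greedy
    p-tampering: each block is, independently with probability p, chosen
    by the adversary to maximize the conditional expectation of f, and
    otherwise sampled honestly (a uniform bit)."""
    if len(prefix) == n:
        return f(prefix)
    vals = [tampered_value(f, prefix + [b], n, p) for b in (0, 1)]
    return p * max(vals) + (1 - p) * 0.5 * sum(vals)

xor = lambda bits: sum(bits) % 2  # toy "bad property" to bias toward 1

before = tampered_value(xor, [], 3, p=0.0)  # no tampering: exactly 0.5
after = tampered_value(xor, [], 3, p=0.5)   # tampering lifts it to 0.75
```

For XOR only the last block's tampering matters, so Pr[f = 1] becomes p·1 + (1 - p)·1/2, i.e. 0.75 at p = 1/2, matching the recursion.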

  20. Summary. We show poisoning attacks against multi-party learning protocols that:
  • are universal: provably apply to any multi-party learning protocol
  • are clean label: only use samples with correct labels
  • run in polynomial time
  • increase the probability of any chosen bad property
  Poster #160
