Fair Resource Allocation in Federated Learning


  1. Fair Resource Allocation in Federated Learning. Tian Li (CMU), Maziar Sanjabi (Facebook AI), Ahmad Beirami (Facebook AI), Virginia Smith (CMU). tianli@cmu.edu

  2. Federated Learning: privacy-preserving training in heterogeneous, (potentially) massive networks.

  3. Challenges. Objective: $\min_w \; p_1 F_1(w) + p_2 F_2(w) + \cdots + p_N F_N(w)$, where $F_k$ is the local empirical loss of device $k$ and $p_k$ is its weight. This objective offers no accuracy guarantee for individual devices, and model performance can vary widely across the network. Can we devise an efficient federated optimization method to encourage a more fair (i.e., more uniform) distribution of the model performance across devices? A sketch of this objective follows below.
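For concreteness, a minimal NumPy sketch of this weighted objective; the losses and weights below are hypothetical stand-ins for the per-device empirical losses $F_k(w)$ and weights $p_k$:

    import numpy as np

    def federated_objective(device_losses, p):
        # Standard federated objective: f(w) = sum_k p_k * F_k(w),
        # where p_k is typically the device's fraction of the total data.
        return np.dot(p, device_losses)

    # Three devices: the weighted average looks fine even though
    # device 3 is served far worse -- hence "no accuracy guarantee".
    losses = np.array([0.1, 0.2, 2.0])
    p = np.array([0.45, 0.45, 0.10])
    print(federated_objective(losses, p))  # 0.335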

  4. Fair Resource Allocation Objective. q-FFL: $\min_w \; f_q(w) = \frac{1}{q+1}\left(p_1 F_1^{q+1}(w) + p_2 F_2^{q+1}(w) + \cdots + p_N F_N^{q+1}(w)\right)$. Inspired by α-fairness for fair resource allocation in wireless networks. A tunable framework: q = 0 recovers the previous objective, and q = ∞ yields minimax fairness. Theory: generalization guarantees, and increasing q results in more 'uniform' accuracy distributions (in terms of various uniformity measures, such as variance).
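A minimal sketch of the q-FFL objective under the same hypothetical losses and weights as above, showing how larger q shifts the objective's emphasis toward the worst-off device:

    import numpy as np

    def qffl_objective(device_losses, p, q):
        # q-FFL: f_q(w) = (1/(q+1)) * sum_k p_k * F_k(w)^(q+1).
        # q = 0 recovers the standard objective; larger q upweights
        # devices with higher loss, encouraging uniform performance.
        return np.dot(p, device_losses ** (q + 1)) / (q + 1)

    losses = np.array([0.1, 0.2, 2.0])
    p = np.array([0.45, 0.45, 0.10])
    for q in [0.0, 1.0, 5.0]:
        print(q, qffl_objective(losses, p, q))
    # The q = 5 objective is dominated by the worst device's loss.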

  5. [Figure: histograms of per-device test accuracy (x-axis: test accuracy, ticks at 0.2, 0.4, 0.6, 0.8; y-axis: number of devices) for the baseline objective vs. q-FFL; q-FFL yields a visibly more uniform accuracy distribution.]

  6. Efficient Solver. Challenges: (a) different fairness/accuracy tradeoffs require solving the objective for different values of q; (b) heterogeneous networks with expensive communication. High-level ideas: dynamically estimate the step sizes associated with different q's, and allow for low device participation and local updating. A sketch of the step-size idea follows below.
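A minimal sketch of the dynamic step-size idea, in the spirit of the paper's q-FedSGD update; the Lipschitz estimate L, the per-device losses, and the gradients are hypothetical inputs, and the losses are assumed positive:

    import numpy as np

    def qfedsgd_step(w, selected, losses, grads, q, L):
        # For each selected device k: scale its gradient by F_k^q, and
        # estimate a local curvature term h_k so that the effective step
        # size 1 / sum(h_k) adapts to q without manual re-tuning.
        deltas = [losses[k] ** q * grads[k] for k in selected]
        hs = [q * losses[k] ** (q - 1) * np.dot(grads[k], grads[k])
              + L * losses[k] ** q
              for k in selected]
        return w - sum(deltas) / sum(hs)

    # Hypothetical usage: two participating devices, a 3-dim model.
    w = np.zeros(3)
    losses = {0: 0.5, 1: 2.0}
    grads = {0: np.array([0.1, 0.0, 0.2]), 1: np.array([1.0, 0.5, 0.0])}
    w = qfedsgd_step(w, [0, 1], losses, grads, q=1.0, L=1.0)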

  7. Empirical Results. Benchmark: LEAF (leaf.cmu.edu). With a suitably chosen q > 0, q-FFL maintains similar average accuracy, decreases variance significantly, increases the accuracy of the worst 10% of devices, and only slightly decreases the accuracy of the best devices.

    Dataset      Objective  Average  Worst 10%  Best 10%  Variance
    Synthetic    q = 0      80.8     18.8       100.0     724
                 q = 1      79.0     31.1       100.0     472
    Vehicle      q = 0      87.3     43.0       95.7      291
                 q = 5      87.7     69.9       94.0      48
    Sent140      q = 0      65.1     15.9       100.0     697
                 q = 1      66.5     23.0       100.0     509
    Shakespeare  q = 0      51.1     39.7       72.9      82
                 q = .001   52.1     42.1       69.0      54
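As a reading aid, a small sketch of how the table's statistics could be computed from a vector of per-device test accuracies; this is an assumed reconstruction, not the paper's exact evaluation code:

    import numpy as np

    def fairness_stats(acc):
        # Summary statistics over per-device test accuracies (in %).
        acc = np.sort(np.asarray(acc, dtype=float))
        k = max(1, len(acc) // 10)  # size of each 10% tail
        return {"Average": acc.mean(),
                "Worst 10%": acc[:k].mean(),
                "Best 10%": acc[-k:].mean(),
                "Variance": acc.var()}

    print(fairness_stats(np.random.uniform(40, 100, size=200)))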
