Why is My Classifier Discriminatory?


  1. Why is My Classifier Discriminatory? Irene Y. Chen, Fredrik D. Johansson, David Sontag Massachusetts Institute of Technology (MIT) NeurIPS 2018, Poster #120 Thurs 12/6 10:45am – 12:45pm @ 210 & 230

  2. It is surprisingly easy to make a discriminatory algorithm.

  3. Source: Shutterstock

  4. [Chart: zero-one loss by race (Asian, Black, Hispanic, Other, White), ranging from roughly 0.16 to 0.22.]

  5.–7. In this paper: 1. We want to find the sources of unfairness to guide resource allocation. 2. We decompose unfairness into bias, variance, and noise. 3. We demonstrate methods to guide feature augmentation and training data collection to fix unfairness.

  8.–11. Classification fairness: many factors
      Model
      • Loss function constraints (Kamiran et al., 2010; Zafar et al., 2017)
      • Representation learning (Zemel et al., 2013)
      • Regularization (Kamishima et al., 2007; Bechavod and Ligett, 2017)
      • Tradeoffs (Chouldechova, 2017; Kleinberg et al., 2016; Corbett-Davies et al., 2017)
      Data
      • Data processing (Hajian and Domingo-Ferrer, 2013; Feldman et al., 2015)
      • Cohort selection
      • Sample size
      • Number of features
      • Group distribution
      We should examine fairness algorithms in the context of the data and the model.

  12.–19. Why might my classifier be unfair? [Figure: a learned model fit to sampled data points, compared with the true data function.] Error from variance can be solved by collecting more samples.
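As an illustration of this point, here is a minimal sketch on hypothetical synthetic data (not from the paper): refitting the same model on many resampled training sets shows that the spread of its predictions, the source of the variance term, shrinks as the sample size grows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def sample(n):
    # Hypothetical data-generating process: linear truth plus label noise.
    x = rng.uniform(-1, 1, size=(n, 1))
    y = x[:, 0] - 2.0 + rng.normal(0, 0.5, size=n)
    return x, y

x_grid = np.linspace(-1, 1, 50).reshape(-1, 1)
for n in (10, 100, 1000):
    preds = []
    for _ in range(200):  # many independent training sets of size n
        x, y = sample(n)
        preds.append(LinearRegression().fit(x, y).predict(x_grid))
    spread = np.array(preds).std(axis=0).mean()  # prediction spread across training sets
    print(f"n={n:5d}  mean prediction std = {spread:.3f}")  # shrinks as n grows
```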

  20.–26. Why might my classifier be unfair? [Figure: a learned model with its error on the orange-dot group and on the blue-dot group; true data functions z = 1.6y³ and z = y − 2.] Error from bias can be solved by changing the model class.
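To make the bias remedy concrete, here is a small sketch on synthetic data generated from the slide's cubic function (hypothetical code, not the authors'): a linear model is systematically wrong for this group, while a model class with cubic terms removes that error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
y = rng.uniform(-1, 1, size=(500, 1))                   # input, as on the slide
z = 1.6 * y[:, 0] ** 3 + rng.normal(0, 0.05, size=500)  # cubic true data function

linear = LinearRegression().fit(y, z)
cubic = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(y, z)

grid = np.linspace(-1, 1, 200).reshape(-1, 1)
truth = 1.6 * grid[:, 0] ** 3
for name, model in [("linear", linear), ("cubic", cubic)]:
    err = np.mean((model.predict(grid) - truth) ** 2)   # systematic error vs. the truth
    print(f"{name:6s} model class: MSE against the true function = {err:.4f}")
```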

  27.–31. Why might my classifier be unfair? [Figure: a learned model with its error on the orange-dot group and on the blue-dot group.] Error from noise can be solved by collecting more features.
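A minimal sketch of the noise remedy on hypothetical synthetic data: when the label depends on a feature the model never observes, the missing feature shows up as irreducible error that neither more samples nor a richer model class can remove; collecting that feature does.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)            # the feature we may or may not have collected
y = 2.0 * x1 + 3.0 * x2            # label depends on both features

for features, name in [(np.c_[x1], "x1 only"), (np.c_[x1, x2], "x1 and x2")]:
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    mse = np.mean((model.predict(X_te) - y_te) ** 2)
    print(f"{name:9s} -> test MSE {mse:.3f}")  # large with x1 only, near zero with both
```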

  32. How do we define fairness?

  33.–34. How do we define fairness? We define fairness in terms of a loss, such as the false positive rate or false negative rate. For example, the zero-one loss for data $D$ and prediction $\hat{Y}$, within protected group $A = a$:
      $\gamma_a(\hat{Y}, Y, D) := P_D\left[\hat{Y} \neq Y \mid A = a\right]$
      We can then formalize unfairness as the group difference
      $\Gamma(\hat{Y}) := |\gamma_a(\hat{Y}) - \gamma_b(\hat{Y})|$
      We rely on accurate Y labels and focus on algorithmic error.
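As a concrete check of this definition, here is a minimal sketch (hypothetical function and variable names, not the paper's code) that computes the per-group zero-one loss γ_g and the gap Γ from arrays of labels, predictions, and group membership.

```python
import numpy as np

def discrimination_gap(y_true, y_pred, group):
    """Per-group zero-one loss gamma_g and the gap Gamma between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gamma = {g: np.mean(y_pred[group == g] != y_true[group == g])
             for g in np.unique(group)}
    a, b = list(gamma)                  # assumes exactly two protected groups
    return gamma, abs(gamma[a] - gamma[b])

# Toy usage with made-up labels, predictions, and group memberships.
gamma, Gamma = discrimination_gap(
    y_true=[1, 0, 1, 1, 0, 0], y_pred=[1, 0, 0, 1, 1, 0], group=list("aaabbb"))
print(gamma, Gamma)
```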

  35.–36. Why might my classifier be unfair? Theorem 1: for the error over group $a$ of a given predictor $\hat{Y}$,
      $\bar{\gamma}_a(\hat{Y}) = \bar{B}_a(\hat{Y}) + \bar{V}_a(\hat{Y}) + \bar{N}_a$
      where $\bar{N}_a$ denotes the expectation of the noise $N_a$ over $X$ and the data $D$. Accordingly, the expected discrimination level $\bar{\Gamma} := |\bar{\gamma}_a - \bar{\gamma}_b|$ can be decomposed into differences in bias, differences in variance, and differences in noise:
      $\bar{\Gamma} = |(\bar{B}_a - \bar{B}_b) + (\bar{V}_a - \bar{V}_b) + (\bar{N}_a - \bar{N}_b)|$
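To make the decomposition tangible, the following is a hedged numerical sketch on synthetic data where the true functions and noise levels are known. For simplicity it uses squared loss and fits a separate model per group (the paper analyzes zero-one loss for a single classifier), so it illustrates the structure of Theorem 1 rather than reproducing it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical setup: group "a" has a cubic truth, few samples, low noise;
# group "b" has a linear truth, many samples, high noise.
true_f = {"a": lambda x: 1.6 * x ** 3, "b": lambda x: x - 2.0}
noise_sd = {"a": 0.1, "b": 0.5}
n_train = {"a": 30, "b": 300}

x_grid = np.linspace(-1, 1, 200)
terms = {}
for g in ("a", "b"):
    preds = []
    for _ in range(300):  # Monte Carlo over training sets D
        x = rng.uniform(-1, 1, size=(n_train[g], 1))
        y = true_f[g](x[:, 0]) + rng.normal(0, noise_sd[g], size=n_train[g])
        preds.append(LinearRegression().fit(x, y).predict(x_grid.reshape(-1, 1)))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - true_f[g](x_grid)) ** 2)  # squared bias
    var = np.mean(preds.var(axis=0))                                # variance over D
    noise = noise_sd[g] ** 2                                        # irreducible noise
    terms[g] = (bias2, var, noise)
    print(f"group {g}: bias^2={bias2:.3f} var={var:.3f} noise={noise:.3f} "
          f"expected loss={bias2 + var + noise:.3f}")

diffs = [terms["a"][i] - terms["b"][i] for i in range(3)]
print("bias/variance/noise differences between groups:", np.round(diffs, 3),
      "-> loss gap:", round(abs(sum(diffs)), 3))
```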

  37. Mortality prediction from MIMIC-III clinical notes. 1. We found statistically significant racial differences in zero-one loss. [Chart: zero-one loss by race (Asian, Black, Hispanic, Other, White), roughly 0.16–0.22.]

  38. Mortality prediction from MIMIC-III clinical notes. 1. We found statistically significant racial differences in zero-one loss. 2. By subsampling data, we fit inverse power laws to estimate the benefit of more data for reducing variance. [Chart: zero-one loss (about 0.19–0.27) versus training data size (0–15,000) by race.]
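The subsampling procedure can be sketched as follows (illustrative synthetic data and parameter choices, not the MIMIC-III pipeline): measure held-out error at several training-set sizes and fit an inverse power law err(n) ≈ a + b·n^(−c), whose asymptote a suggests how much error would remain even with much more data.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, n_features=20, flip_y=0.1, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000, random_state=0)

sizes = np.array([100, 200, 500, 1000, 2000, 5000, 10000])
errors = []
for n in sizes:
    runs = []
    for _ in range(5):  # average over random subsamples of size n
        idx = rng.choice(len(X_pool), size=n, replace=False)
        clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
        runs.append(np.mean(clf.predict(X_test) != y_test))  # zero-one loss
    errors.append(np.mean(runs))
errors = np.array(errors)

def inverse_power_law(n, a, b, c):
    return a + b * n ** (-c)          # error decays toward the asymptote a

(a, b, c), _ = curve_fit(inverse_power_law, sizes, errors, p0=[0.1, 1.0, 0.5], maxfev=10000)
print(f"asymptotic error a = {a:.3f}; predicted error at n=50000: "
      f"{inverse_power_law(50000, a, b, c):.3f}")
```

Fitting one such curve per group indicates how much of each group's error is variance that additional data could still remove.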

  39. Mortality prediction from MIMIC-III clinical notes. 1. We found statistically significant racial differences in zero-one loss. 2. By subsampling data, we fit inverse power laws to estimate the benefit of more data for reducing variance. 3. Using topic modeling, we identified subpopulations in which to gather more features to reduce noise. [Chart: error enrichment (0.00–0.35) by race for cancer-patient and cardiac-patient topics, with per-group sample counts.]
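A hedged sketch of that idea (toy documents and hypothetical variable names; the paper applies topic models to clinical notes): cluster documents into topics with LDA, then look for topics in which a group's error rate is enriched relative to its overall rate, flagging subpopulations that may need additional features.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Assumed inputs: `notes` (one string per patient), `errors` (1 if the classifier was
# wrong on that patient), and `group` (protected attribute). Toy stand-ins below.
notes = ["chest pain troponin elevated", "tumor metastasis chemo started",
         "valve stenosis echo", "lung mass biopsy oncology",
         "afib rate control", "carcinoma staging scan"]
errors = np.array([0, 1, 0, 1, 0, 1])
group = np.array(["a", "b", "a", "b", "a", "b"])

X = CountVectorizer().fit_transform(notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic = lda.fit_transform(X).argmax(axis=1)   # hard-assign each note to its top topic

overall = {g: errors[group == g].mean() for g in np.unique(group)}
for t in np.unique(topic):
    for g in np.unique(group):
        mask = (topic == t) & (group == g)
        if mask.any():
            enrichment = errors[mask].mean() - overall[g]  # error enrichment in this topic
            print(f"topic {t}, group {g}: error enrichment {enrichment:+.2f}")
```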

  40.–42. Where do we go from here? 1. For accurate and fair models deployed in real-world applications, both the data and the model should be considered. 2. Using easily implemented fairness checks, we hope others will check their algorithms for bias, variance, and noise, which will guide further efforts to reduce unfairness. Come to poster #120 in Room 210 & 230.
