Leveraging GANs for fairness evaluations


  1. Leveraging GANs for fairness evaluations. Emily Denton, Research Scientist, Google Brain. With Margaret Mitchell, Timnit Gebru, and Ben Hutchinson.

  2. Background: ML fairness seeks to address algorithmic unfairness, with a focus on machine learning systems. It is a very broad research area! I will focus on one specific component: detecting undesirable bias in computer vision systems.

  3. Bias in Computer Vision: unrepresentative training data can lead to disparities in accuracy for different demographics. [Joy Buolamwini. The Coded Gaze: Unmasking Algorithmic Bias]

  4. Bias in Computer Vision [Wilson et al. Predictive inequity in object detection. arXiv:1902.11097, 2019]

  5. Bias in Computer Vision: social biases embedded in the data distribution can be reproduced and/or amplified. [Zhao et al. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. EMNLP, 2017.] [Hendricks et al. Women also snowboard: Overcoming bias in captioning models. ECCV, 2018]

  6. Bias in Computer Vision: human reporting bias can affect annotations. [Misra et al. Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. CVPR, 2016]

  7. Bias in Computer Vision: human reporting bias can affect annotations, e.g. "green bananas". [Misra et al. Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. CVPR, 2016]

  8. Bias in Computer Vision: social biases can affect annotations and propagate through the ML system, e.g. "doctor" vs. "female doctor" or "nurse". [Misra et al. Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. CVPR, 2016]

  9. Bias in Computer Vision: social biases can affect annotations and propagate through the ML system. [Rhue. Racial Influence on Automated Perceptions of Emotions. 2019]

  10. How can GANs help? High-quality, photorealistic images. [Karras et al. Progressive growing of GANs for improved quality, stability, and variation. ICLR, 2018]

  11. How can GANs help? High-quality, photorealistic images and controllable image synthesis. [Karras et al. Progressive growing of GANs for improved quality, stability, and variation. ICLR, 2018]

  12. How can GANs help? Generative techniques provide tools for testing a classifier's sensitivity to different image features. They can answer questions of the form: How does the classifier's output change as some characteristic of the image is systematically varied? Is the classifier sensitive to a characteristic that should be irrelevant for the task?

  13. GANs can help uncover undesirable bias: a CNN takes an image x and outputs P(Smile | x).

  14. GANs can help uncover undesirable bias: manipulate the facial hair in x to obtain a modified image x'.

  15. GANs can help uncover undesirable bias: run both x and x' through the CNN and ask whether P(Smile | x') changed relative to P(Smile | x).
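
A minimal sketch of this probe, assuming pretrained models G (the generator), E (the encoder), and clf (the smiling classifier); the function name and the scale parameter are illustrative, not from the talk:

    # Counterfactual probe: compare the classifier's output on an image
    # and on its latent-space manipulation along an attribute direction d.
    import torch

    def probe_sensitivity(x, G, E, clf, d, scale=1.0):
        # x: batch of images (N, 3, 128, 128); d: latent direction (latent_dim,)
        with torch.no_grad():
            z = E(x)                    # infer latent codes for the originals
            x_prime = G(z + scale * d)  # e.g. add facial hair
            p_orig = clf(x)             # P(Smile | x)
            p_new = clf(x_prime)        # P(Smile | x')
        return p_new - p_orig           # positive: manipulation raised P(Smile)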

  16. We can observe the effect on a classifier of systematically manipulating factors of variation in an image, comparing P(Smile | x) before and after the manipulation.

  17. We can observe the effect on a classifier of systematically manipulating factors of variation in an image. All else being equal, the presence of facial hair should be irrelevant to the classifier.

  18. Experimental setup: a smiling classifier trained on CelebA (128x128 resolution images).

  19. Experimental setup: a smiling classifier trained on CelebA (128x128 resolution images), plus a standard progressive GAN trained to generate 128x128 CelebA images.

  20. Experimental setup: a smiling classifier trained on CelebA (128x128 resolution images), a standard progressive GAN trained to generate 128x128 CelebA images, and an encoder trained to infer the latent codes that generated the images.
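
One plausible way to train such an encoder (an assumption; the slides do not specify the objective): sample latent codes, generate images from them, and regress the encoder's output back onto the codes.

    # Hypothetical encoder training step: learn to recover the latent
    # code z that produced each generated image x = G(z).
    import torch

    def train_encoder_step(G, E, opt, batch_size=32, latent_dim=512):
        z = torch.randn(batch_size, latent_dim)  # sample latent codes
        with torch.no_grad():
            x = G(z)                             # images the GAN made from z
        loss = torch.mean((E(x) - z) ** 2)       # recover the originating codes
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()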

  21. Attribute vectors: directions in latent space that manipulate a particular factor of variation in the image. The attribute vector d_a points from the latent codes corresponding to images without attribute a toward the latent codes corresponding to images with attribute a.

  22. Attribute vectors: we infer attribute vectors using binary CelebA annotations, e.g. Eyeglasses = 1 vs. Eyeglasses = 0, Mustache = 1 vs. Mustache = 0, Blond_Hair = 1 vs. Blond_Hair = 0.

  23. CelebA attribute vectors: d_Mustache is derived from the encoded latent codes E(x) of images annotated Mustache = 1 and of images annotated Mustache = 0.
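
As the slide's diagram suggests, one natural construction (an assumption about the exact formula) is the difference between the mean latent code of each group:

    # Attribute vector as the difference of group means in latent space.
    import torch

    def attribute_vector(E, images, labels):
        # images: (N, 3, 128, 128); labels: (N,) binary CelebA annotations
        with torch.no_grad():
            z = E(images)                   # latent codes for all images
        z_pos = z[labels == 1].mean(dim=0)  # mean code with the attribute
        z_neg = z[labels == 0].mean(dim=0)  # mean code without it
        return z_pos - z_neg                # e.g. d_Mustache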

  24. A note on CelebA attribute vectors: many of the attributes are subjective or ill-defined, and the interpretation of category boundaries is contingent on the annotators. The resulting manipulations reflect how the particular attributes were operationalized and measured within the CelebA dataset.

  25.-28. Manipulating images with CelebA attribute vectors (four slides of example manipulations).

  29. Quantifying classifier sensitivity: the model f outputs the probability of a smile being present in the image. The sensitivity of the continuous-valued output of f to changes defined by the attribute vector d is the difference in the classifier's output that results from moving in direction d in latent space.
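
The equation itself appeared as an image on the slide; a plausible formalization, in the deck's own notation, is the expected change in output over latent codes z:

    S(d) = E_z[ f(G(z + d)) - f(G(z)) ]

i.e., average, over sampled (or encoded) latent codes, the difference between the classifier's score on the manipulated image and on the original.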

  30. Quantifying classifier sensitivity: given a threshold 0 ≤ c ≤ 1, binary classifications are obtained, and the sensitivity of the discrete classification decision to perturbations along an attribute vector d is measured as two flip rates: the frequency with which the classification flips from smiling to not smiling, and the frequency with which it flips from not smiling to smiling.
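
A sketch of those flip rates (function and variable names are illustrative); following the results slides, each rate is computed relative to the images' initial classification:

    # Flip rates of the thresholded decision after moving along d.
    import torch

    def flip_rates(p_orig, p_new, c=0.5):
        # p_orig, p_new: classifier outputs before/after the manipulation
        smile_before = p_orig >= c
        smile_after = p_new >= c
        # fraction of initially-smiling images that flip to not smiling
        pos_to_neg = (smile_before & ~smile_after).float().sum() / smile_before.float().sum()
        # fraction of initially-not-smiling images that flip to smiling
        neg_to_pos = (~smile_before & smile_after).float().sum() / (~smile_before).float().sum()
        return pos_to_neg.item(), neg_to_pos.item()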

  31.-33. Quantifying classifier sensitivity: flip-rate results across attribute vectors (charts on the original slides).

  34.-35. What have the attribute vectors encoded? ~12% of images initially classified as not smiling are classified as smiling after the Heavy_Makeup augmentation.

  36. What have the attribute vectors encoded? ~7% of images initially classified as smiling are classified as not smiling after the Young augmentation.

  37. BUT, we need to be careful that the attribute vector hasn't actually encoded something that should be relevant to smiling classification! ~40% of images initially classified as not smiling are classified as smiling after the High_Cheekbones augmentation, and the mouth expression has definitely changed.

  38. BUT, we need to be careful that the attribute vector hasn't actually encoded something that should be relevant to smiling classification! So far we've verified that the makeup-, facial hair-, and age-related attribute directions leave the basic mouth shape/smile unchanged. We are in the process of running more of these studies on the complete set of attributes.

  39. Social context is important: generative techniques can be used to detect unintended and undesirable bias in facial analysis. Equalizing error statistics across different groups (defined along cultural, demographic, or phenotypical lines) is important but not sufficient for building fair, equitable, just, or inclusive technology. This analysis should be part of a larger, socially contextualized project to critically assess broader ethical concerns relating to facial analysis technology.

  40. Future work:
  ● The GAN can be trained on a different dataset than the classifier
  ● Increased disentanglement of the latent space
  ● Extend beyond faces
  ● Other ways of leveraging synthetic data for evaluation (or training?) purposes
    ○ i.e., mine GANs for data, not people

  41. Related work:
  Counterfactual fairness:
  Kilbertus et al. Avoiding discrimination through causal reasoning. NIPS, 2017.
  Kusner et al. Counterfactual fairness. NIPS, 2017.
  Counterfactual fairness for text:
  Garg et al. Counterfactual Fairness in Text Classification through Robustness. AIES, 2019.
  Individual fairness:
  Dwork et al. Fairness Through Awareness. ITCS, 2012.
  Model interpretability:
  Kim et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). ICML, 2018.
  Chang et al. Explaining image classifiers by counterfactual generation. ICLR, 2019.
  Fong and Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. ICCV, 2017.
  Dabkowski and Gal. Real time image saliency for black box classifiers. NIPS, 2017.
  Simonyan et al. Deep inside convolutional networks: Visualising image classification models and saliency maps. 2013.

  42. Thanks! Denton et al. Detecting Bias with Generative Counterfactual Face Attribute Augmentation. CVPR Workshop on Fairness, Accountability, Transparency and Ethics in Computer Vision, 2019.
