Data Bias in Visual Recognition
  1. Data Bias in Visual Recognition. Mar. 2020, VALSE. Speaker: Weihong Deng, Beijing University of Posts and Telecommunications (BUPT)

  2. Visual recognition. Courtesy of Prof. Fei-Fei Li

  3. History of CNN: Kunihiko Fukushima, Yann LeCun, Geoff Hinton. K Fukushima, Biological Cybernetics, 1980; Y LeCun et al., Proceedings of the IEEE, 1998; A Krizhevsky, I Sutskever, GE Hinton, NIPS 2012

  4. Real-world recognition bias: Google Photos, Amazon Rekognition, Tesla Autopilot. Two sources: data bias and algorithm bias

  5. What causes the bias of visual recognition? Writing the posterior as p(y|x) ∝ p(x|y) p(y), bias can enter through each factor:
     • p(y) is biased: classes are imbalanced
     • Y is biased: class labels are noisy
     • p(x|y) is biased: training and test conditional distributions are different
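The prior-bias case can be illustrated numerically. Writing the posterior as p(y|x) ∝ p(x|y) p(y), a skewed class prior alone shifts predictions even when the class-conditional likelihoods are identical. A minimal sketch with hypothetical numbers:

```python
# Sketch (hypothetical numbers): a biased class prior p(y) shifts the
# posterior p(y|x) ∝ p(x|y) p(y) even when p(x|y) is fixed.
import numpy as np

likelihood = np.array([0.6, 0.6])       # p(x|y): identical for both classes
balanced_prior = np.array([0.5, 0.5])   # unbiased p(y)
skewed_prior = np.array([0.9, 0.1])     # imbalanced training set

def posterior(prior, lik):
    unnorm = lik * prior
    return unnorm / unnorm.sum()

print(posterior(balanced_prior, likelihood))  # [0.5 0.5]
print(posterior(skewed_prior, likelihood))    # [0.9 0.1]
```

With equal evidence from the input, the posterior simply reproduces the training prior, which is exactly the "classes are imbalanced" failure mode.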

  6. Racial bias: Racial Faces in-the-Wild (RFW). Mei Wang, Weihong Deng, et al., Racial Faces in-the-Wild: Reducing Racial Bias by Information Maximization Adaptation Network, ICCV 2019.

  7. Existence of racial bias on RFW (verification accuracy, %)
     Model          Caucasian  Indian  Asian  African
     SOTA algorithms:
     Center-loss        87.18   81.92  79.32    78.00
     SphereFace         90.80   87.02  82.95    82.28
     ArcFace            92.15   88.00  83.98    84.93
     VGGFace2           89.90   86.13  84.93    83.38
     Mean               90.01   85.77  82.80    82.15
     Commercial APIs:
     Face++             93.90   88.55  92.47    87.50
     Baidu              89.13   86.53  90.27    77.97
     Amazon             90.45   87.20  84.87    86.27
     Microsoft          87.60   82.83  79.67    75.83
     Mean               90.27   86.28  86.82    81.89
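One simple way to quantify the bias visible in this table is the spread of per-race accuracy for a single model. A sketch using the ArcFace row (the choice of gap and standard deviation as bias measures is ours, not the slide's):

```python
# Sketch: summarize racial bias as the spread of per-race accuracy.
# Numbers are the ArcFace row of the RFW table above.
import statistics

acc = {"Caucasian": 92.15, "Indian": 88.00, "Asian": 83.98, "African": 84.93}

gap = max(acc.values()) - min(acc.values())   # best-to-worst race gap
std = statistics.pstdev(acc.values())         # population std over the 4 races
print(f"max-min gap: {gap:.2f} points, std: {std:.2f}")
```

A perfectly fair model would have gap and std near zero; ArcFace shows a gap of more than 8 points between Caucasian and Asian subsets.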

  8. A major driver of bias in face recognition: racial distribution (%) of current training databases
     Database        Caucasian  Asian  Indian  African
     CASIA-WebFace        84.5    2.6     1.6     11.3
     VGGFace2             74.2    6.0     4.0     15.8
     MS-Celeb-1M          76.3    6.6     2.6     14.5
     Average              78.3    5.0     2.7     13.8

  9. Racial bias: a special imbalance learning problem
     • Tens of thousands of classes
     • Balance among groups of classes
     • Open-set recognition
     Mei Wang, Weihong Deng, Mitigating Bias in Face Recognition using Skewness-Aware Reinforcement Learning, CVPR 2020

  10. Reinforcement learning based race-balance network (RL-RBN). Mei Wang, Weihong Deng, Mitigating Bias in Face Recognition using Skewness-Aware Reinforcement Learning, CVPR 2020
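A hedged sketch of the general idea behind race-balanced margins: apply a larger additive angular margin to under-represented groups so their classes are pushed further apart. The margin values and group assignments below are illustrative only; in RL-RBN the margin policy is learned with reinforcement learning, not hand-set.

```python
# Illustrative sketch, NOT the paper's learned policy: an ArcFace-style
# logit cos(theta + m) where the margin m depends on the demographic group.
import math

group_margin = {"Caucasian": 0.30, "Asian": 0.45, "Indian": 0.45, "African": 0.45}

def margin_logit(cos_theta, group):
    """Return cos(theta + m) with a group-dependent additive margin m."""
    m = group_margin[group]
    theta = math.acos(cos_theta)
    return math.cos(theta + m)

print(margin_logit(0.8, "Caucasian"))  # smaller margin penalty
print(margin_logit(0.8, "African"))    # larger penalty -> harder positives
```

During training the larger margin makes positives from minority groups harder, which tends to tighten their intra-class distributions.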

  11. Ethnicity-aware training sets for RFW
      BUPT-Balancedface (1.3M images): Caucasian 25%, Asian 25%, Indian 25%, African 25%
      BUPT-Globalface (2M images): Caucasian 38%, Asian 31%, Indian 18%, African 13%
      Mei Wang, Weihong Deng, Mitigating Bias in Face Recognition using Skewness-Aware Reinforcement Learning, CVPR 2020

  12. Deficiency of current training datasets. We summarize some interesting findings and problems with these training sets: depth vs. breadth, long-tail distribution, data noise, and data bias. The long-tail property refers to the condition where only a limited number of object classes appear frequently, while most of the others appear relatively rarely. Mei Wang & Weihong Deng, Deep Face Recognition: A Survey, arXiv:1804.06655
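The long-tail property can be checked directly from label frequencies: sort classes by sample count and see what share of the data the head holds. A sketch with hypothetical labels (the 10% head cut-off is an arbitrary choice for illustration):

```python
# Sketch: measure the long-tail property of a label set. `labels` is a
# hypothetical per-image identity list; real face datasets look similar.
from collections import Counter

labels = (["id0"] * 500 + ["id1"] * 300
          + [f"id{i}" for i in range(2, 100) for _ in range(3)])

freq = sorted(Counter(labels).values(), reverse=True)
head = sum(freq[: len(freq) // 10])   # samples held by the top 10% of classes
share = head / sum(freq)
print(f"top-10% classes hold {share:.0%} of all samples")
```

Here 10% of the classes hold roughly three quarters of the samples, the signature of a long-tail distribution.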

  13. Unequal training for noisy long-tailed learning. The training set should contain a sufficient number of samples per class to model intra-class variability, and a sufficient number of classes to model inter-class variability. Yaoyao Zhong, Weihong Deng, Mei Wang, Jiani Hu, et al., Unequal-training for Deep Face Recognition with Long-tailed Noisy Data, CVPR 2019.

  14. Fair Loss for imbalanced training data: overview; class grouping according to sample size. Bingyu Liu, Weihong Deng, et al., Fair Loss: Margin-aware Reinforcement Learning for Deep Face Recognition, ICCV 2019.
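The class-grouping step can be sketched as a simple binning of classes by training-sample count; a margin policy is then chosen per group rather than per class. The bin edges and group names below are hypothetical, not the paper's:

```python
# Sketch of class grouping by sample size (bin edges are illustrative).
def group_classes(class_sizes, edges=(20, 100)):
    """Map class -> 'tail' / 'medium' / 'head' by training-sample count."""
    groups = {}
    for cls, n in class_sizes.items():
        if n < edges[0]:
            groups[cls] = "tail"
        elif n < edges[1]:
            groups[cls] = "medium"
        else:
            groups[cls] = "head"
    return groups

print(group_classes({"A": 5, "B": 50, "C": 500}))
# {'A': 'tail', 'B': 'medium', 'C': 'head'}
```

Grouping keeps the action space small: the learned policy only has to pick one margin per group instead of one per each of tens of thousands of classes.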

  15. What causes the bias of visual recognition? Writing the posterior as p(y|x) ∝ p(x|y) p(y), bias can enter through each factor:
      • p(y) is biased: classes are imbalanced
      • Y is biased: class labels are noisy
      • p(x|y) is biased: training and test conditional distributions are different

  16. Crowdsourcing: select a single basic expression

  17. Dataset construction (RAF-DB & RAF-ML)
      1. Collection: download images by keywords ('smile', 'crying', 'OMG', …), parsing image URLs from XML; 60,000 images downloaded.
      2. Annotation: crowd-sourcing with 315 volunteers online; each image labelled about 40 times, yielding 1.2M labels (single-label and multi-label).
      3. EM reliability estimation: filter out unreliable labels with an enhanced-reliability framework; learning from the retained labels yields RAF-DB (single-label) and RAF-ML (multi-label), about 30K images.

  18. Crowdsourcing: label reliability estimation algorithm. Shan Li, Weihong Deng, Blended Emotion in-the-Wild: Multi-label Facial Expression Recognition Using Crowdsourced Annotations and Deep Locality Feature Learning. IJCV 2019.
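A hedged sketch of reliability-weighted label aggregation, a simplified stand-in for the EM reliability estimation above: an annotator's reliability is their agreement rate with the current consensus, and votes are re-weighted until the consensus stabilizes. All names and the toy votes are ours.

```python
# Simplified EM-style aggregation (illustrative, not the paper's algorithm).
from collections import defaultdict

def aggregate(votes, iters=10):
    """votes: list of (annotator, image, label). Returns image -> label."""
    reliability = defaultdict(lambda: 1.0)   # start: trust everyone equally
    consensus = {}
    for _ in range(iters):
        # E-step: reliability-weighted majority vote per image
        tally = defaultdict(lambda: defaultdict(float))
        for ann, img, lab in votes:
            tally[img][lab] += reliability[ann]
        consensus = {img: max(labs, key=labs.get) for img, labs in tally.items()}
        # M-step: reliability = agreement rate with the consensus
        hits, total = defaultdict(float), defaultdict(float)
        for ann, img, lab in votes:
            total[ann] += 1
            hits[ann] += (consensus[img] == lab)
        reliability = {ann: hits[ann] / total[ann] for ann in total}
    return consensus

votes = [("a1", "x", "happy"), ("a2", "x", "happy"), ("a3", "x", "sad"),
         ("a1", "y", "sad"), ("a2", "y", "sad"), ("a3", "y", "happy")]
print(aggregate(votes))  # {'x': 'happy', 'y': 'sad'}
```

Annotator a3, who disagrees with the majority on both images, ends with zero weight, so the consensus is driven by the reliable annotators.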

  19. Crowdsourcing: select a single basic expression (label probabilities)
      Expression  Blended   Compound
      Surprise    0.343750  0.483871
      Fear        0.375000  0.064516
      Disgust     0         0
      Happiness   0         0
      Sadness     0         0.032258
      Anger       0.281250  0.419355

  20. Dataset construction (RAF-DB). Shan Li, Weihong Deng, Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition. IEEE TIP 2019.

  21. Dataset construction (RAF-ML). Shan Li, Weihong Deng, Blended Emotion in-the-Wild: Multi-label Facial Expression Recognition Using Crowdsourced Annotations and Deep Locality Feature Learning. IJCV 2019.

  22. Labels of real-world face datasets are noisy. Motivation: now that the face-recognition accuracy of deep models is already much higher than that of humans, the machine may be able to boost itself by automatic data cleansing. Mei Wang & Weihong Deng, Deep Face Recognition: A Survey, arXiv:1804.06655
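The self-cleansing idea can be sketched as follows: a model trained on the noisy data embeds every sample, and samples whose embeddings disagree with their own class center are flagged as probable label noise. The threshold and toy features below are ours, not a published recipe.

```python
# Sketch of center-similarity data cleansing (threshold is hypothetical).
import numpy as np

def flag_noise(features, labels, threshold=0.3):
    """features: (N, D) L2-normalized embeddings; labels: (N,) class ids.
    Returns a boolean mask marking samples that look mislabeled."""
    flags = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        center = features[idx].mean(axis=0)
        center /= np.linalg.norm(center)
        sims = features[idx] @ center        # cosine similarity to class center
        flags[idx] = sims < threshold
    return flags

feats = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(flag_noise(feats, np.array([0, 0, 0])))  # third sample is flagged
```

The third sample points away from the other two, so its similarity to the class center is strongly negative and it is flagged for removal or relabeling.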

  23. Same or different people? (Linda Dano, Donald Keck, Roger Cook, Liza Minnelli.) DCNN correct, students wrong; the image pairs are from the Similar-Looking LFW database. Weihong Deng, et al., Pattern Recognition, 2017.

  24. Human-machine comparison: ArcFace (CVPR 2019) deep CNN versus student volunteers on LFW, SLLFW, CALFW, and CPLFW. (Bar chart; verification accuracies range from 80.45% to 99.85%.) Summary: CNN ~ human on LFW, CNN >> human on SLLFW, CNN > human on CALFW, CNN ~ human on CPLFW.

  25. Methodology – overview: Global Graph Net (GGN) and Local Graph Net (LGN). Yaobing Zhang, Weihong Deng, et al., Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition, CVPR 2020.

  26. Methodology – Local Graph Net
      • Subgraph construction: select low-confidence nodes as "local centers"; take one-hop and two-hop neighbors to build the local subgraphs
      • Forward propagation of the LGN on the subgraphs
      • Multi-task learning: node classification refines the GGN predictions; graph classification recognizes garbage classes
      • LGN loss
      Yaobing Zhang, Weihong Deng, et al., Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition, CVPR 2020.
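The subgraph-construction step (one-hop plus two-hop neighbors of a local center) can be sketched on an adjacency list; the toy graph below is ours, and in the paper the graph would come from k-NN links between face embeddings:

```python
# Sketch: collect the one-hop and two-hop neighborhood of a "local center"
# node in an adjacency-list graph (toy example).
def two_hop_subgraph(adj, center):
    """adj: node -> set of neighbors. Returns the node set of the subgraph."""
    one_hop = set(adj.get(center, set()))
    two_hop = set()
    for n in one_hop:
        two_hop |= adj.get(n, set())
    return {center} | one_hop | two_hop

adj = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1, 4}, 4: {3, 5}, 5: {4}}
print(sorted(two_hop_subgraph(adj, 0)))  # [0, 1, 2, 3]
```

Nodes 4 and 5 are three or more hops from the center and stay outside the subgraph, which keeps each LGN forward pass local and cheap.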

  27. Experiments – MillionCelebs (2/3): MegaFace Challenge, IJB-B and IJB-C. Yaobing Zhang, Weihong Deng, et al., Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition, CVPR 2020.

  28. What causes the bias of visual recognition? Writing the posterior as p(y|x) ∝ p(x|y) p(y), bias can enter through each factor:
      • p(y) is biased: classes are imbalanced
      • Y is biased: class labels are noisy
      • p(x|y) is biased: training and test conditional distributions are different

  29. Ethnicity-aware training sets for RFW: BUPT-Transferface — Caucasian (labeled) 75%; Asian, Indian, and African (unlabeled) 8% each. Mei Wang, Weihong Deng, et al., Racial Faces in-the-Wild: Reducing Racial Bias by Information Maximization Adaptation Network, ICCV 2019.

  30. Deep information maximization adaptation network (IMAN): clustering generates pseudo-labels; learn a discriminative distribution at the cluster level for non-Caucasian races. Recognition accuracy (%) on RFW:
      Method         Caucasian  Indian  Asian  African
      Softmax            94.12   88.33  84.60    83.47
      DDC-S                  -   90.53  86.32    84.95
      DAN-S                  -   89.98  85.53    84.10
      IMAN-S (ours)          -   91.08  89.88    89.13
      Recognition accuracy on non-Caucasian races is boosted. Mei Wang, Weihong Deng, et al., Racial Faces in-the-Wild: Reducing Racial Bias by Information Maximization Adaptation Network, ICCV 2019.
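The pseudo-label generation step can be sketched as clustering the unlabeled target-domain embeddings and treating cluster ids as labels. The tiny hand-rolled k-means and synthetic two-blob embeddings below are illustrative stand-ins for the paper's clustering procedure:

```python
# Sketch: pseudo-labels from clustering unlabeled embeddings
# (synthetic data; tiny k-means with k=2, seeded with one point per blob).
import numpy as np

rng = np.random.default_rng(0)
# two well-separated synthetic "identity" clusters in an 8-d embedding space
emb = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(2, 0.1, (20, 8))])

centers = emb[[0, -1]].copy()                 # one seed from each blob
for _ in range(10):
    d = np.linalg.norm(emb[:, None] - centers[None], axis=2)
    pseudo = d.argmin(axis=1)                 # assign to nearest center
    centers = np.stack([emb[pseudo == k].mean(axis=0) for k in range(2)])

print(pseudo[:20], pseudo[20:])  # each blob gets one consistent pseudo-label
```

Once every unlabeled face carries a cluster id, those pseudo-labels can supervise a discriminative loss on the target domain exactly as real labels would.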

  31. A deeper look at facial expression dataset bias. Datasets play an important role in the progress of facial expression recognition algorithms, but they may suffer from obvious biases caused by different cultures and collection conditions. Hence, methods evaluated only with intra-database protocols can lack generalization to unseen samples at test time. Shan Li and Weihong Deng, A Deeper Look at Facial Expression Dataset Bias. IEEE TAC 2020.
