

  1. Towards Visual Recognition in the Wild: Long-Tailed Sources & Open Compound Targets. Boqing Gong

  2. CVPR 2009: 50 classes, 85 attributes

  3. Kernel Methods for Unsupervised Domain Adaptation: 10~100 classes, 2011-2015

  4. ILSVRC 2010-2017: ~1000 classes. Bottom image credit: http://www.thegreenmedium.com/blog/2019/5/24/why-robots-will-help-you-rather-than-try-to-take-over-the-world-a-brief-history-of-ai

  5. ICML 2014: Deep features!

  6. Object recognition in the wild: 5k~8k classes

  7. in the wild

  8. in the wild. Right image credit: https://natureneedsmore.org/the-elephant-in-the-room/

  9. CVPR 2019 (oral), improving neural architectures

  10. Long-tailed ImageNet (1000 classes), long-tailed Places-365, and long-tailed MS1M-ArcFace (74.5k ids). A memory bank to enhance tail classes. CVPR 2019 (oral), improving neural architectures.
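
The memory-bank idea on this slide can be made concrete with a small sketch. This is a heavily simplified illustration of the general mechanism (a class-centroid memory plus an attention-style read that mainly benefits data-poor tail classes), not the exact CVPR 2019 model; build_memory, enhance, and selector are hypothetical names of my own.

```python
import torch
import torch.nn.functional as F

def build_memory(features, labels, num_classes):
    """One memory slot per class: the centroid of that class's training features."""
    memory = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            memory[c] = features[mask].mean(dim=0)
    return memory

def enhance(direct_feat, memory, selector):
    """Augment each direct feature with a soft read from the memory bank.

    selector is any learnable module (e.g., a small MLP with a tanh output)
    that decides how much memory to inject per example.
    """
    attn = F.softmax(direct_feat @ memory.t(), dim=1)  # (batch, num_classes)
    memory_feat = attn @ memory                        # (batch, feat_dim)
    return direct_feat + selector(direct_feat) * memory_feat
```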

  11. An old AI problem; a new AI problem (meta-learning, transfer learning, zero-shot learning). Acknowledgement: Matthew Brown @Google.

  12. Existing work: class-wise weighting, over/under-sampling, etc. [CVPR’18] Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning; [CVPR’19] Class-Balanced Loss Based on Effective Number of Samples; [NeurIPS’19] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss; [ICLR’20] Decoupling Representation and Classifier for Long-Tailed Recognition.
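
One representative technique from this list is the class-balanced loss of the cited CVPR’19 paper, which weights each class by the inverse of its “effective number of samples”. The sketch below uses that published formula; the function and variable names are mine rather than the authors’ code, and the class counts are made up for illustration.

```python
import numpy as np
import torch

def class_balanced_weights(samples_per_class, beta=0.9999):
    """w_c proportional to 1 / E_c, where E_c = (1 - beta^n_c) / (1 - beta)."""
    effective_num = 1.0 - np.power(beta, samples_per_class)
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(samples_per_class)  # normalize
    return torch.tensor(weights, dtype=torch.float32)

# Hypothetical long-tailed class counts, plugged into a standard weighted loss.
samples_per_class = [5000, 2000, 500, 50, 10]
criterion = torch.nn.CrossEntropyLoss(weight=class_balanced_weights(samples_per_class))
```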

  13. [Figure: the long-tailed class distribution (y-axis: frequency; x-axis: classes)]

  14. Existing work … as domain adaptation (source → target): class-wise weighting, over/under-sampling, etc. [CVPR’18] Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning; [CVPR’19] Class-Balanced Loss Based on Effective Number of Samples; [NeurIPS’19] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss; [ICLR’20] Decoupling Representation and Classifier for Long-Tailed Recognition. Existing work assumes 𝝑 = 0.

  15. Head vs. tail … as domain adaptation. Many training images in a head class: 𝝑 = 0. Few-shot training images in a tail class: 𝝑 ≠ 0. Existing work assumes 𝝑 = 0.
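
One way to read the slide’s 𝝑: write the target risk as an importance-weighted source risk, which factors into a class-prior ratio and a class-conditional ratio; class-balanced methods keep only the first factor, which amounts to assuming the class-conditionals match (𝝑 = 0). The notation below is my own sketch of that reading, not the paper’s exact derivation.

```latex
\mathbb{E}_{(x,y)\sim p_t}\bigl[\ell(f(x),y)\bigr]
  = \mathbb{E}_{(x,y)\sim p_s}\!\left[
      \underbrace{\frac{p_t(y)}{p_s(y)}}_{\text{class-wise weighting}}
      \cdot
      \underbrace{\frac{p_t(x\mid y)}{p_s(x\mid y)}}_{\equiv\,1 \text{ only if } \vartheta = 0}
      \,\ell\bigl(f(x),y\bigr)
    \right]
```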

  16. CVPR 2020 (oral), long-tailed recognition ⩰ domain adaptation

  17. Our approach: estimating both weights by unifying [CVPR’19] & an improved meta-learning method. SOTA on six datasets: CIFAR-LT-10, CIFAR-LT-100, ImageNet-LT, Places-LT, iNaturalist 2017, iNaturalist 2018.

  18. Long-tailed visual recognition (LTVR): an emerging challenge as the datasets grow in scale; a timely topic. Datasets: iNaturalist, LVIS, ImageNet, COCO, etc. Tasks: almost all. … as domain adaptation: a new perspective on LTVR and a new powerhouse of methods, e.g., domain-invariant representation learning, curriculum domain adaptation, adversarial learning, classifier discrepancy, data augmentation & synthesis, etc. Difference: no access to target data.
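
Of the domain-adaptation techniques listed on this slide, adversarial learning of domain-invariant representations is among the most common. Below is a minimal, generic gradient-reversal sketch, not the method of any particular paper cited here; the module and function names are mine.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the feature extractor learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def domain_adversarial_loss(source_feats, target_feats, domain_head, lambd=1.0):
    """domain_head is any classifier mapping features to 2 domain logits."""
    feats = torch.cat([source_feats, target_feats], dim=0)
    domains = torch.cat([torch.zeros(len(source_feats)),
                         torch.ones(len(target_feats))]).long()
    logits = domain_head(GradReverse.apply(feats, lambd))
    return nn.functional.cross_entropy(logits, domains)
```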

  19. in the wild

  20-21. Open compound test cases (target)

  22. Open compound domain adaptation. Training: labeled source-domain data and unlabeled data of the compound target. Testing: in the compound target domain and in previously unseen domains. Liu, Ziwei, Zhongqi Miao, Xingang Pan, Xiaohang Zhan, Stella X. Yu, Dahua Lin, and Boqing Gong. "Compound domain adaptation in an open world." CVPR 2020 (oral).
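
To make the protocol concrete, here is a minimal configuration sketch of the setting described on this slide; the domain names are hypothetical placeholders, not the datasets used in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OpenCompoundDASetting:
    # Training: labels only in the source; the compound target is a mix of
    # unlabeled latent domains whose identities are not given.
    labeled_source: str = "source_domain"
    unlabeled_compound_target: List[str] = field(
        default_factory=lambda: ["latent_domain_1", "latent_domain_2", "latent_domain_3"])
    # Testing: examples from the compound target plus previously unseen
    # ("open") domains.
    unseen_open_domains: List[str] = field(
        default_factory=lambda: ["open_domain_A", "open_domain_B"])
```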

  23. Experiments

  24-27. Our approach: break the compound target domain into a series of bi-domain adaptation problems, ordered by "domain distances" between the source and the latent domains in the target (curriculum training). [Diagram: the source and latent domains 1, 2, and 3]

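A minimal sketch of the curriculum idea above, under my own simplifying assumptions: cluster the unlabeled compound target into latent domains in feature space, rank the clusters by a simple centroid distance to the source (standing in for the slide's "domain distances"), and adapt to the closest ones first. The function and variable names are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def rank_latent_domains(source_feats, target_feats, num_latent_domains=3):
    """Discover latent domains in the target and order them easy-to-hard."""
    kmeans = KMeans(n_clusters=num_latent_domains, n_init=10)
    assignments = kmeans.fit_predict(target_feats)           # (N_target,)
    source_center = source_feats.mean(axis=0)
    dists = np.linalg.norm(kmeans.cluster_centers_ - source_center, axis=1)
    order = np.argsort(dists)                                 # closest first
    latent_domains = [np.where(assignments == k)[0] for k in order]
    return latent_domains, dists[order]

# Usage: run a bi-domain adaptation step (e.g., adversarial alignment)
# against each latent domain in the returned order, easiest first.
```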

  28. Pushing the boundary of visual recognition. Long-tailed source domains, the elephant in the room as we scale up classes and study wild data: a memory bank to enhance tail classes (CVPR’19, oral); domain adaptation as a new powerhouse of techniques (CVPR’20, oral); improved meta-learning for long-tailed recognition (ongoing). Open compound target domains (CVPR’20, oral); learning from unlabeled, noisy data in the wild (ongoing).
