
Knowledge Transfer for Visual Recognition (The University of Tokyo)

  1. IIT-H and RIKEN-AIP Joint Workshop on Machine Learning and Applications, March 15, 2019. Knowledge Transfer for Visual Recognition. Tatsuya Harada, The University of Tokyo / RIKEN AIP (team leader, Medical Machine Intelligence).

  2. Deep Neural Networks for Visual Recognition. [Figure: example DNN applications, such as detected objects (cellphone, cup, book, laptop) and a generated caption "A yellow train on the tracks near a train station."] • Tasks in the visual recognition field: object class recognition, object detection, image caption generation, semantic and instance segmentation, image generation, style transfer. • DNNs have become an indispensable module for these tasks. • A large amount of labeled data is needed to train DNNs. • Reducing the annotation cost is therefore in high demand.

  3. Knowledge Transfer. A child who learns "Doggie!" from picture books can recognize a real dog; carrying knowledge across such different domains is what Domain Adaptation aims at. (Images by Chiemsee2016 and GraphicMama-team on Pixabay.)

  4. Domain Adaptation (DA). • Problems: a supervised learning model needs many labeled examples, and collecting them in every domain is costly. • Goal: transfer knowledge from the source domain (rich supervised data) to the target domain (little supervised data) and obtain a classifier that works well on the target domain. • Unsupervised Domain Adaptation (UDA): labeled examples are given only in the source domain; there are no labeled examples in the target domain. [Figure: source domain = labeled synthetic images, target domain = unlabeled real images.]

  5. Distribution Matching for Unsupervised Domain Adaptation. • Distribution-matching-based methods: match the distributions of source and target features. • Domain classifier (GAN) [Ganin et al., 2015]. • Maximum Mean Discrepancy [Long et al., 2015]. [Figure: a shared feature extractor maps labeled source and unlabeled target images to features; before adaptation the target samples fall on the wrong side of the source decision boundary, after adaptation the two distributions are aligned.]
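
A minimal sketch of the Maximum Mean Discrepancy criterion mentioned above [Long et al., 2015], written here in PyTorch; the function names, the RBF kernel choice, and the bandwidth sigma are illustrative assumptions rather than details taken from the slides.

    import torch

    def rbf_kernel(x, y, sigma=1.0):
        # Pairwise RBF kernel values between rows of x and rows of y.
        sq_dist = torch.cdist(x, y) ** 2
        return torch.exp(-sq_dist / (2 * sigma ** 2))

    def mmd2(source_feat, target_feat, sigma=1.0):
        # Empirical squared MMD between a source and a target batch of features.
        k_ss = rbf_kernel(source_feat, source_feat, sigma).mean()
        k_tt = rbf_kernel(target_feat, target_feat, sigma).mean()
        k_st = rbf_kernel(source_feat, target_feat, sigma).mean()
        return k_ss + k_tt - 2 * k_st

Adding such a term to the source classification loss pushes the feature extractor to bring the two feature distributions closer together.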

  6. Adversarial Domain Adaptation. • Training the feature generator in an adversarial way works well! The model consists of a category classifier, a domain classifier, and a feature extractor; the feature extractor is trained on labeled source and unlabeled target images so that the domain classifier cannot tell the two domains apart [Tzeng, Eric, et al. Adversarial Discriminative Domain Adaptation. CVPR, 2017]. • Problems: the whole distributions are matched, ignoring the category information available in the source domain. [Figure: architecture with feature extractor, category classifier, and domain classifier; the domain classifier aligns source and target distributions as a whole.]
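
A rough sketch of one adversarial update in the spirit of the methods cited above, assuming an alternating scheme with a feature extractor, a category classifier, and a two-output domain classifier; all module, optimizer, and variable names here are placeholders, not the authors' code.

    import torch
    import torch.nn.functional as F

    def adversarial_step(feat_ext, cat_clf, dom_clf, opt_f, opt_c, opt_d,
                         x_src, y_src, x_tgt):
        # (1) Update the domain classifier to separate source (0) from target (1).
        f_src, f_tgt = feat_ext(x_src), feat_ext(x_tgt)
        d_logits = torch.cat([dom_clf(f_src.detach()), dom_clf(f_tgt.detach())])
        d_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
        d_loss = F.cross_entropy(d_logits, d_labels)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # (2) Update the feature extractor and category classifier: classify the
        # source correctly while making target features look like source features.
        f_src, f_tgt = feat_ext(x_src), feat_ext(x_tgt)
        cls_loss = F.cross_entropy(cat_clf(f_src), y_src)
        fool_loss = F.cross_entropy(dom_clf(f_tgt), torch.zeros(len(x_tgt)).long())
        opt_f.zero_grad()
        opt_c.zero_grad()
        (cls_loss + fool_loss).backward()
        opt_f.step()
        opt_c.step()

Note that the alignment term never uses the source class labels, which is exactly the problem the following slides address.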

  7. Unsupervised Domain Adaptation using Classifier Discrepancy. Kuniaki Saito (1), Kohei Watanabe (1), Yoshitaka Ushiku (1), Tatsuya Harada (1, 2); 1: The University of Tokyo, 2: RIKEN. CVPR 2018, oral presentation. [Photos of the authors.]

  8. Proposed Approach. • Consider class-specific distributions. • Use the decision boundary to align the distributions. [Figure: previous work aligns the whole source and target distributions, whereas the proposed method aligns target features with the class-specific source regions (class A / class B) defined by the decision boundary.]

  9. Key Idea. • Maximize the discrepancy by learning two classifiers F1 and F2. • Minimize the discrepancy by learning the feature space. Here the discrepancy region consists of the target examples that receive different predictions from the two classifiers. [Figure: alternating between maximizing the discrepancy with the classifiers and minimizing it with the feature extractor pushes target features into regions where F1 and F2 agree.]
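
One common way to turn this idea into a loss, sketched here as an assumption consistent with the slides: measure the discrepancy as the mean absolute difference between the class-probability outputs of the two classifiers on the same target batch.

    import torch
    import torch.nn.functional as F

    def discrepancy(logits1, logits2):
        # Large when the two classifiers F1 and F2 disagree on the batch,
        # small when their predicted class distributions match.
        p1 = F.softmax(logits1, dim=1)
        p2 = F.softmax(logits2, dim=1)
        return (p1 - p2).abs().mean()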

  10. Network Architecture and Training Loss. The network consists of a feature generator G and two classifiers F1 and F2, with source classification losses L1 (predictions of F1) and L2 (predictions of F2) plus a discrepancy loss between the two classifiers' predictions on target inputs. Algorithm: 1. Fix the generator G and find classifiers F1, F2 that maximize the discrepancy on the target data. 2. Fix the classifiers F1, F2 and, for a number of steps, find a feature generator G that minimizes the discrepancy. [Figure: maximizing the discrepancy with the classifiers exposes target samples outside the source support; minimizing it with the generator pulls them back inside.]
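
A rough sketch of one training iteration following the algorithm above, in PyTorch. It reuses the discrepancy() helper from the previous sketch; the initial joint source-classification update, the exact loss combinations, and the number of generator updates (num_g_steps) are assumptions about how the loop is scheduled, not text from this slide.

    import torch
    import torch.nn.functional as F

    def mcd_step(G, F1, F2, opt_g, opt_f, x_src, y_src, x_tgt, num_g_steps=4):
        # Step A: train generator G and classifiers F1, F2 on the labeled source.
        f_src = G(x_src)
        loss_src = F.cross_entropy(F1(f_src), y_src) + F.cross_entropy(F2(f_src), y_src)
        opt_g.zero_grad()
        opt_f.zero_grad()
        loss_src.backward()
        opt_g.step()
        opt_f.step()

        # Step B: fix G; update F1, F2 to maximize the target discrepancy
        # while keeping the source classification loss low.
        f_src = G(x_src).detach()
        f_tgt = G(x_tgt).detach()
        loss_f = (F.cross_entropy(F1(f_src), y_src) + F.cross_entropy(F2(f_src), y_src)
                  - discrepancy(F1(f_tgt), F2(f_tgt)))
        opt_f.zero_grad()
        loss_f.backward()
        opt_f.step()

        # Step C: fix F1, F2; update G to minimize the target discrepancy.
        for _ in range(num_g_steps):
            f_tgt = G(x_tgt)
            loss_g = discrepancy(F1(f_tgt), F2(f_tgt))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()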

  11. Improving by Dropout: Adversarial Dropout Regularization. Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko. ICLR 2018. Instead of keeping two separate classifiers F1 and F2, the two classifiers are selected from a single classifier by sampling different dropout masks. [Figure: one classifier network; two dropout samples of it play the roles of F1 and F2.]
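
A minimal sketch of the dropout-sampling idea, under the assumption of a single classifier head kept in training mode so that dropout stays active; two forward passes then use two different dropout masks and play the role of the two classifiers. The architecture, names, and class count below are illustrative, and the discrepancy() helper from the earlier sketch is assumed.

    import torch
    import torch.nn as nn

    # Hypothetical classifier head with dropout (feature size and class count are placeholders).
    classifier = nn.Sequential(
        nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(1024, 12),
    )

    def sample_two_classifiers(classifier, features):
        # With dropout active, each forward pass samples a different sub-network,
        # so one set of weights yields two sampled classifiers F1 and F2.
        classifier.train()
        return classifier(features), classifier(features)

    # d = discrepancy(*sample_two_classifiers(classifier, target_features))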

  12. Object Classification. • Synthetic images to real images (12 classes). • Fine-tune a ResNet101 [He et al., CVPR 2016] pre-trained on ImageNet. • Source: … images, Target: … images. [Figure: example source (synthetic) and target (real) images.]
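
As a concrete illustration of this setup, a short sketch of loading an ImageNet-pre-trained ResNet-101 with torchvision and attaching fresh 12-class heads; splitting the model into a feature generator G and classifiers F1, F2 this way, and using single linear heads, are my own simplifications rather than the paper's exact configuration.

    import torch.nn as nn
    import torchvision

    # ResNet-101 pre-trained on ImageNet [He et al., CVPR 2016].
    backbone = torchvision.models.resnet101(pretrained=True)

    num_feat = backbone.fc.in_features   # 2048 for ResNet-101
    backbone.fc = nn.Identity()          # strip the original ImageNet head
    G = backbone                         # feature generator
    F1 = nn.Linear(num_feat, 12)         # two 12-class classifiers for the
    F2 = nn.Linear(num_feat, 12)         # synthetic-to-real benchmark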

  13. Semantic Segmentation. • Simulated images (GTA5) to real images (Cityscapes). • Fine-tune VGG and Dilated Residual Networks [Yu et al., 2017] pre-trained on ImageNet. • The discrepancy is calculated pixel-wise. • Evaluation by mean IoU (TP / (TP + FP + FN)). [Figure: per-class IoU bar chart on Cityscapes (road, sidewalk, building, wall, fence, pole, light, sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, bicycle) comparing source only with ours.]
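
A small sketch of the mean IoU metric used above, computed per class from the prediction and ground-truth label maps; the function name and the decision to skip classes absent from both maps are my own choices.

    import torch

    def mean_iou(pred, target, num_classes):
        # pred and target are integer class maps of the same shape.
        ious = []
        for c in range(num_classes):
            tp = ((pred == c) & (target == c)).sum().float()
            fp = ((pred == c) & (target != c)).sum().float()
            fn = ((pred != c) & (target == c)).sum().float()
            denom = tp + fp + fn
            if denom > 0:                 # skip classes absent from both maps
                ious.append(tp / denom)   # IoU = TP / (TP + FP + FN)
        return torch.stack(ious).mean()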

  14. Qualitative Results. [Figure: segmentation examples with columns RGB, ground truth, source only, adapted (ours).]

  15. Other Topics in Unsupervised Domain Adaptation. • Open-set Domain Adaptation (the target domain contains unknown classes that do not appear in the source): Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, Tatsuya Harada. Open Set Domain Adaptation by Backpropagation. ECCV, 2018. • Adaptive Object Detection (adapting an object detector from a source to a target domain): Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, Kate Saenko. Strong-Weak Distribution Alignment for Adaptive Object Detection. CVPR, 2019.
