

  1. Smooth Proxy-Anchor Loss for Noisy Metric Learning Carlos Roig, David Varas, Issey Masuda, Juan Carlos Riveiro, Elisenda Bou-Balust

  2. Metric Learning - Introduction. Notation: $x_i^j$ denotes embedding $i$ of class $j$, and $s(\cdot,\cdot)$ is a similarity function (e.g. cosine similarity). [Figure: embedding space]
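A minimal PyTorch sketch (not from the slides) of this notation: L2-normalized embeddings scored with cosine similarity, the similarity function named above.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_matrix(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity s(x_i, y_j) between two sets of embeddings."""
    x = F.normalize(x, dim=1)  # project each embedding onto the unit sphere
    y = F.normalize(y, dim=1)
    return x @ y.t()           # dot product of unit vectors = cosine similarity

# Example: 4 embeddings of dimension 128 scored against 3 class representatives.
embeddings = torch.randn(4, 128)
references = torch.randn(3, 128)
sim = cosine_similarity_matrix(embeddings, references)  # shape (4, 3), values in [-1, 1]
```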

  3. Metric Learning - Pair-based vs. proxy-based methods. [Figure: pair-based methods vs. proxy-based methods]

  4. Metric Learning - Applications: face verification, person re-identification, few-shot learning, content-based image retrieval, representation learning. All of these require clean data!

  5. Proxy-Anchor Loss (Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy Anchor Loss for Deep Metric Learning, CVPR 2020). Given a batch $X$:

  $$\ell(X) = \frac{1}{|P^+|} \sum_{p \in P^+} \log\Big(1 + \sum_{x \in X_p^+} e^{-\alpha\,(s(x,p)-\delta)}\Big) + \frac{1}{|P|} \sum_{p \in P} \log\Big(1 + \sum_{x \in X_p^-} e^{\alpha\,(s(x,p)+\delta)}\Big)$$

  where $P$ is the set of all proxies, $P^+$ the positive proxies of the samples in the batch, $X_p^+$ and $X_p^-$ the samples that are positive and negative for proxy $p$, $s(\cdot,\cdot)$ the cosine similarity, and $\alpha$ (scale) and $\delta$ (margin) the hyperparameters.
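A minimal PyTorch sketch of the loss above, assuming one proxy per class; batch construction and the authors' exact implementation details differ.

```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, alpha=32.0, delta=0.1):
    """embeddings: (B, D); labels: (B,) class ids; proxies: (C, D), one per class."""
    # s(x, p): cosine similarities between every sample and every proxy
    sim = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()  # (B, C)
    pos_mask = F.one_hot(labels, num_classes=proxies.size(0)).bool()        # X_p^+
    neg_mask = ~pos_mask                                                    # X_p^-

    with_pos = pos_mask.any(dim=0)  # P+: proxies with a positive sample in the batch

    # First term: pull each proxy's positive samples toward it
    pos_term = torch.log1p(
        (torch.exp(-alpha * (sim - delta)) * pos_mask).sum(dim=0)
    )[with_pos].sum() / with_pos.sum().clamp(min=1)

    # Second term: push negative samples away from every proxy
    neg_term = torch.log1p(
        (torch.exp(alpha * (sim + delta)) * neg_mask).sum(dim=0)
    ).sum() / proxies.size(0)

    return pos_term + neg_term
```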

  6. Smooth Proxy-Anchor Loss. The hard positive/negative split is replaced by a smoothing function over confidences,

  $$\mathrm{smooth}(c_x^p) = \frac{1}{1 + e^{-\beta\,(c_x^p - \mu)}}$$

  where $c_x^p$ is the confidence value for sample $x$ of belonging to proxy $p$, $\mu$ controls the position of the function, and $\beta$ controls its sharpness. The positive samples corresponding to a proxy are selected when their confidence is high enough; otherwise, they are treated as negatives.
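A hedged sketch of the smoothing idea: a sigmoid over the confidence of a sample belonging to a proxy. The names `m` (position) and `tau` (sharpness) and the example values are illustrative assumptions, not the paper's settings.

```python
import torch

def smoothing(confidence: torch.Tensor, m: float = 0.5, tau: float = 10.0) -> torch.Tensor:
    """Maps confidences in [0, 1] to soft weights in (0, 1).

    m controls the position of the sigmoid, tau its sharpness (assumed names).
    """
    return torch.sigmoid(tau * (confidence - m))

confidence = torch.tensor([0.10, 0.45, 0.55, 0.90])
weights = smoothing(confidence)  # low-confidence samples contribute almost nothing
```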

  7. Our Method

  8. Our Method
     ● Backbone: ResNet50.
     ● Pretrained on the ImageNet dataset.
     ● Without the classification layer.
     ● Frozen for all experiments.
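A sketch of this backbone configuration with torchvision; the slide shows no code, so details such as replacing the classifier with `nn.Identity` are one reasonable way to do it.

```python
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)  # ImageNet-pretrained weights
backbone.fc = nn.Identity()                  # drop the classification layer -> 2048-d features
for param in backbone.parameters():
    param.requires_grad = False              # frozen for all experiments
backbone.eval()
```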

  9. Our Method. The confidence module is trained with a Binary Cross-Entropy loss:

  $$\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N}\sum_{i=1}^{N} \big[\, y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \,\big]$$

  The confidence module generates the class confidences and the smoothing function balances each contribution. The dataset is a partition of the WebVision dataset (more details in the paper).
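A minimal sketch of training such a confidence module with binary cross-entropy on top of the frozen backbone features. The architecture, `num_classes`, and the one-hot noisy targets are illustrative assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn

num_classes = 100  # assumption: number of classes in the WebVision partition
confidence_module = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, num_classes),   # raw logits; sigmoid is applied inside the loss
)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(confidence_module.parameters(), lr=1e-4)

features = torch.randn(32, 2048)   # stand-in for frozen-backbone features
targets = torch.zeros(32, num_classes)
targets[torch.arange(32), torch.randint(0, num_classes, (32,))] = 1.0  # noisy labels

loss = criterion(confidence_module(features), targets)
loss.backward()
optimizer.step()
```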

  10. Our Method. [Figure: dataset and example images with their top-3 confidence scores; correct labels in green, incorrect labels in red; class names in bold]

  11.-14. Our Method - pipeline, built up one stage per slide (a sketch tying the stages together follows below):
     1) Noisy labels
     2) Relabelling
     3) Proxy selection
     4) Loss computation
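A hedged sketch of the four stages, reusing `backbone`, `confidence_module`, `smoothing`, and `proxy_anchor_loss` from the earlier sketches. The relabelling rule and the way the weights scale the loss are simplifications; the paper instead folds the smoothed confidences into the loss terms themselves.

```python
import torch

def training_step(images, proxies):
    features = backbone(images)                               # embeddings (backbone is frozen)
    # 1) The batch arrives with noisy labels; 2) relabelling uses the
    #    confidence module's predictions rather than trusting those labels.
    confidences = torch.sigmoid(confidence_module(features))  # (B, C) class confidences
    relabels = confidences.argmax(dim=1)                      # illustrative relabelling rule
    # 3) Proxy selection: smoothed confidence of each sample's chosen class.
    weights = smoothing(confidences.max(dim=1).values).detach()  # module trained separately (BCE)
    # 4) Loss computation: here the weights simply scale a plain proxy-anchor loss.
    return weights.mean() * proxy_anchor_loss(features, relabels, proxies)
```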

  15. Results. Table 3: Comparison of Recall@K for different methods against our proposed loss on the WebVision dataset partition.

  [1] Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No Fuss Distance Metric Learning Using Proxies, 2017.
  [2] Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy Anchor Loss for Deep Metric Learning, 2020.
  [3] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning, 2019.

  16. Conclusions
     ● Two-branch system for noisy metric learning:
        ○ Confidence module
        ○ Embedding
     ● We propose a Smooth Proxy-Anchor Loss that weights the contribution of noisy samples.
     ● Our method improves Recall@1 by 2.63 and 3.29 points over Multi-Similarity and Proxy-Anchor loss, respectively.

  17. Thanks! carlos@vilynx.com Get the paper!
