Smooth Proxy-Anchor Loss for Noisy Metric Learning, Carlos Roig et al. (PowerPoint PPT Presentation)



SLIDE 1

Smooth Proxy-Anchor Loss for Noisy Metric Learning

Carlos Roig, David Varas, Issey Masuda, Juan Carlos Riveiro, Elisenda Bou-Balust

SLIDE 2

Metric Learning - Introduction

x_i^j: embedding i of class j. s(·,·): similarity function (e.g. cosine similarity). [Diagram: embedding space]
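As a concrete instance of the similarity function s, here is a minimal cosine-similarity sketch (the function name is ours, for illustration):

```python
import numpy as np

def cosine_similarity(a, b):
    # s(a, b) = a·b / (||a|| ||b||): 1 for aligned embeddings, -1 for opposite.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Embeddings of the same class should score near 1; embeddings of different classes should score lower.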

SLIDE 3

Metric Learning - Pairs vs Proxy methods

Pair-based methods vs. proxy-based methods (diagram)

SLIDE 4

Metric Learning - Applications

  • Face Verification
  • Person Re-Identification
  • Few-Shot Learning
  • Content-Based Image Retrieval
  • Representation Learning

Require clean data!

SLIDE 5

Proxy Anchor Loss

Image batch

Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning, CVPR 2020.

Legend for the loss terms:
  • P⁺: positive proxies (proxies with at least one positive sample in the batch)
  • P: all proxies, positive and negative
  • X_p⁺: the samples that are positives of proxy p (X_p⁻: the rest)
  • s(x, p): cosine similarity between sample x and proxy p
  • α, δ: hyperparameters (scale and margin)
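Under that notation, the Proxy Anchor loss of Kim et al. (CVPR 2020) can be sketched in NumPy as follows; variable names and the vectorized layout are ours, not the paper's reference code:

```python
import numpy as np

def proxy_anchor_loss(S, pos_mask, alpha=32.0, delta=0.1):
    # S:        (batch, num_proxies) cosine-similarity matrix
    # pos_mask: boolean matrix, True where sample i is a positive of proxy j
    # alpha (scale) and delta (margin) are the loss hyperparameters.
    pos_proxies = pos_mask.any(axis=0)  # P+: proxies with >=1 positive in batch
    # Positive term: pulls each positive sample toward its proxy.
    pos_exp = np.where(pos_mask, np.exp(-alpha * (S - delta)), 0.0)
    pos_term = np.log1p(pos_exp.sum(axis=0))[pos_proxies].sum() / max(int(pos_proxies.sum()), 1)
    # Negative term: pushes samples away from all other proxies.
    neg_exp = np.where(~pos_mask, np.exp(alpha * (S + delta)), 0.0)
    neg_term = np.log1p(neg_exp.sum(axis=0)).sum() / S.shape[1]
    return float(pos_term + neg_term)
```

Well-separated embeddings (positives near their proxy, negatives far from it) drive both log1p terms toward zero.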

SLIDE 6

Smooth Proxy Anchor Loss

  • Smoothing function: applied to the confidence value of sample x belonging to proxy p.
  • One hyperparameter controls the sharpness of the function; another controls its position.
  • The positive samples corresponding to a proxy are selected if their smoothed confidence exceeds the threshold; otherwise, they are negative.
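The smoothing described above can be sketched as a sigmoid; the parameter names tau (sharpness) and m (position) are illustrative, not the paper's exact symbols:

```python
import numpy as np

def smooth_weight(conf, tau=10.0, m=0.5):
    # Sigmoid smoothing of a confidence value: tau controls how sharp the
    # transition is, m controls where it sits (the selection threshold).
    return float(1.0 / (1.0 + np.exp(-tau * (conf - m))))
```

Weights near 1 keep confident positives contributing fully, while weights near 0 suppress likely-noisy samples instead of hard-rejecting them.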
SLIDE 7

Our Method

SLIDE 8

Our Method

  • Backbone: ResNet50.
  • Pretrained on the ImageNet dataset.
  • Without the classification layer.
  • Frozen for all experiments.
SLIDE 9

Our Method

Trained with Binary Cross Entropy loss:

The dataset is a partition of the WebVision dataset*

The confidence module generates the class confidences and the smoothing function balances each contribution.
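A minimal binary cross-entropy sketch for training the confidence module, assuming per-class sigmoid outputs (the function and argument names are ours):

```python
import numpy as np

def bce_loss(probs, targets, eps=1e-7):
    # Binary cross-entropy over per-class confidence scores.
    # Clipping avoids log(0) for saturated predictions.
    probs = np.clip(probs, eps, 1.0 - eps)
    return float(-np.mean(targets * np.log(probs)
                          + (1.0 - targets) * np.log(1.0 - probs)))
```

Confident, correct predictions give a loss near zero; confidently wrong ones are penalized heavily.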

* More details in the paper

SLIDE 10

Our Method

[Figure: top-3 confidence scores. Each row shows an example image with its class name, a dataset image, and the top-3 scores with the class name in bold; correct labels in green, incorrect labels in red.]

SLIDE 11

Our Method

1) Noisy labels

SLIDE 12

Our Method

1) Noisy labels 2) Relabelling

SLIDE 13

Our Method

1) Noisy labels 2) Relabelling 3) Proxy selection

SLIDE 14

Our Method

1) Noisy labels 2) Relabelling 3) Proxy selection 4) Loss computation
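The four steps above can be sketched as follows; this is a hypothetical simplification, and the function, variable, and threshold names are ours, not the paper's:

```python
import numpy as np

def assign_proxies(confidences, conf_threshold=0.5):
    # 1) Noisy labels: inputs arrive with unreliable labels, so we rely on
    #    the confidence module's per-class scores instead.
    # 2) Relabelling: each sample takes its most confident class.
    relabels = confidences.argmax(axis=1)
    # 3) Proxy selection: a sample counts as a positive of its proxy only if
    #    its confidence clears the threshold; otherwise it acts as a negative.
    n, num_classes = confidences.shape
    pos_mask = np.zeros((n, num_classes), dtype=bool)
    rows = np.arange(n)
    pos_mask[rows, relabels] = confidences[rows, relabels] >= conf_threshold
    # 4) Loss computation would then weight each proxy term by the smoothed
    #    confidences behind pos_mask's assignments.
    return relabels, pos_mask
```

Low-confidence samples are thus never forced to act as positives for any proxy, which is what shields the loss from label noise.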

SLIDE 15

Results

[1] Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies, 2017.
[2] Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning, 2020.
[3] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. Multi-similarity loss with general pair weighting for deep metric learning, 2019.

Table 3. Comparison of Recall@K for different methods against our proposed loss on the WebVision dataset partition.

SLIDE 16

Conclusions

  • Two-branch system for noisy metric learning:
    ○ Confidence module
    ○ Embedding module
  • We propose a Smooth Proxy Anchor Loss that weights the contribution of noisy samples.
  • Our method improves Recall@1 by 2.63 and 3.29 points over MultiSimilarity and Proxy-Anchor loss, respectively.

SLIDE 17

Thanks!

carlos@vilynx.com

Get the paper!