Neural Information Processing Systems (NeurIPS) 2018
Recurrent Transformer Networks for Semantic Correspondence
Seungryong Kim1, Stephen Lin2, Sangryul Jeon1, Dongbo Min3, Kwanghoon Sohn1
Dec. 05, 2018
1) Yonsei University 2) Microsoft Research 3) Ewha Womans University
Semantic correspondence is challenging due to large intra-class appearance and attribute variations.
• Geometric inference based on spatial transformer networks (STNs)
• But, T_i is learned w/ …
Limitations of existing approaches:
• Using self- or meta-supervision
• Geometric inference using source/target images
• Globally-varying geometric inference only
• Only fixed, untransformed versions of the features
Feature extraction networks with parameters $W_F$ extract features such that $D_i = F(I; W_F)$.
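As a rough illustration (not the authors' code), the feature-extraction step $D = F(I; W_F)$ could be sketched in PyTorch as below; the truncated VGG-16 backbone and the per-pixel L2 normalization are assumptions standing in for the paper's trained networks.

```python
import torch
import torch.nn.functional as nnf
import torchvision

def extract_features(image):
    # image: (B, 3, H, W); a truncated VGG-16 (up to relu4_3) stands in for F(.; W_F)
    backbone = torchvision.models.vgg16(weights=None).features[:23]
    feats = backbone(image)                  # (B, 512, H/8, W/8) dense descriptors D
    return nnf.normalize(feats, p=2, dim=1)  # L2-normalize each per-pixel descriptor

feats = extract_features(torch.randn(1, 3, 224, 224))
```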
The matching score is the normalized inner product between the source feature and the transformed target feature:

$C(D_i^s, D^t(T_i)) = \langle D_i^s, D^t(T_i) \rangle \,/\, \| \langle D_i^s, D^t(T_i) \rangle \|_2$
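A minimal sketch of this score, assuming the normalization is an L2 norm taken over the matching candidates; the helper name and shapes are illustrative, not from the paper.

```python
import torch

def matching_scores(src_feat, tgt_feats):
    # src_feat:  (C,)   source descriptor D_i^s
    # tgt_feats: (N, C) target descriptors D^t(T_j) sampled at the N candidates
    corr = tgt_feats @ src_feat                   # inner products <D_i^s, D^t(T_j)>
    return corr / corr.norm(p=2).clamp(min=1e-8)  # L2 normalization over candidates

scores = matching_scores(torch.randn(64), torch.randn(25, 64))
```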
(Figure: recurrent transformation estimation between source and target images.)

The transformation field is inferred recurrently by the geometric matching networks with parameters $W_G$:

$T_i^k - T_i^{k-1} = F(C(D_i^s, D^t(T_i^{k-1})); W_G)$
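A sketch of this recurrence, assuming a residual update of a per-pixel affine field initialized at the identity; `geometric_net` ($W_G$) and `warp_and_correlate` are hypothetical stand-ins for the geometric matching networks and the correlation layer.

```python
import torch

def recurrent_matching(src_feats, tgt_feats, geometric_net,
                       warp_and_correlate, num_iters=4):
    # src_feats, tgt_feats: (B, C, H, W) dense descriptors D^s, D^t
    B, _, H, W = src_feats.shape
    # One 6-parameter affine transform per pixel; zeros encode the identity
    # as the initial estimate (an assumption of this sketch).
    T = torch.zeros(B, 6, H, W, device=src_feats.device)
    for _ in range(num_iters):
        corr = warp_and_correlate(src_feats, tgt_feats, T)  # C(D^s, D^t(T^{k-1}))
        T = T + geometric_net(corr)                         # T^k = T^{k-1} + F(C; W_G)
    return T
```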
(Figure: source and target images with matching candidates.)

The matching score $C(D_i^s, D^t(T_i))$ should be maximized while keeping the scores of other candidates low:

$L(D_i^s, D^t) = - \sum_{j \in N_i} c_j^* \log(p(D_i^s, D^t(T_j)))$

where the function $p(D_i^s, D^t(T_j))$ is a softmax probability,

$p(D_i^s, D^t(T_j)) = \exp(C(D_i^s, D^t(T_j))) \,/\, \sum_{l \in N_i} \exp(C(D_i^s, D^t(T_l)))$

and $c_j^*$ denotes a class label defined as 1 if $j = i$, 0 otherwise.
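Since $c^*$ is one-hot at $j = i$, the loss reduces to cross-entropy over the candidate scores; a minimal sketch, with illustrative shapes and names:

```python
import torch
import torch.nn.functional as F

def candidate_loss(scores, center_index):
    # scores: (B, N) matching scores C(D_i^s, D^t(T_j)) over the N candidates j in N_i
    # center_index: position of the candidate j = i, i.e. the class with c_j^* = 1
    log_p = F.log_softmax(scores, dim=1)   # log of the softmax probability p
    return -log_p[:, center_index].mean()  # cross-entropy with the one-hot label c^*

loss = candidate_loss(torch.randn(2, 25), center_index=12)
```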
(Qualitative results: source and target images, with correspondence results from SCNet [Han et al., ICCV'17], [Rocco et al., CVPR'18], and RTNs.)
Seungryong Kim, Ph.D.
Digital Image Media Lab., Yonsei University, Seoul, Korea
Tel: +82-2-2123-2879 E-mail: srkim89@yonsei.ac.kr Homepage: http://diml.yonsei.ac.kr/~srkim/