Joint Probabilistic Matching Using m-Best Solutions
S. Hamid Rezatofighi, Anton Milan, Zhen Zhang, Qinfeng Shi, Antony Dick, Ian Reid
One-to-One Graph Matching in Computer Vision
[Figure: example applications, e.g. action recognition and feature point matching]
Most existing works focus on finding the single optimal solution. However, the optimal solution does not necessarily yield the correct matching assignment. To improve the matching results, we propose approximating the marginal distribution using the m-best solutions.
Formulating it as a constrained binary program

[Figure: bipartite graph between N targets and O candidates (plus a "no match" option), with a binary variable y_j^k on each edge]

y_j^k ∈ {0,1},   Y = (y_1^0, y_1^1, …, y_j^k, …, y_N^O)ᵀ ∈ {0,1}^(N×(O+1))

Y* = argmin_{Y ∈ 𝒴} g(Y)
or
Y* = argmax_{Y ∈ 𝒴} q(Y)

where

𝒴 = { Y = (y_j^k)_{∀j,k} | y_j^k ∈ {0,1},  ∀k: Σ_j y_j^k ≤ 1,  ∀j: Σ_k y_j^k = 1 }

All constraints can be written compactly as BY ≤ C.
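For the special case of a linear cost g(Y) = DᵀY, the binary program above reduces to linear assignment and can be solved exactly with the Hungarian algorithm. A minimal sketch (the cost matrix D is a made-up example, and the square N = O case without the "no match" column is assumed for simplicity):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Unary assignment costs d_j^k: rows are targets j, columns are candidates k.
D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

# Solve Y* = argmin sum_jk d_j^k y_j^k subject to the one-to-one
# constraints (each row assigned exactly once, each column at most once).
rows, cols = linear_sum_assignment(D)

# Recover the binary assignment matrix Y*.
Y = np.zeros_like(D, dtype=int)
Y[rows, cols] = 1

print(Y)
print("cost:", D[rows, cols].sum())  # → cost: 5.0
```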
Examples of joint matching distribution q(Y) and cost g(Y) in different applications

Linear assignment (a unary cost d_j^k for each y_j^k):   g(Y) = DᵀY
Quadratic assignment / graph matching:                   g(Y) = YᵀRY
Some applications also impose higher-order constraints in addition to the one-to-one constraints.
In general, depending on g(Y), the globally optimal solution of

Y* = argmin_{Y ∈ 𝒴} g(Y)    /    Y* = argmax_{Y ∈ 𝒴} q(Y)

may or may not be tractable to compute. Moreover, even the optimal solution does not necessarily yield the correct matching assignment.
Motivation to use marginalization
Encoding the entire distribution helps untangle potential ambiguities; MAP considers only a single point of that distribution.
Marginalization improves match ranking thanks to its averaging/smoothing property.

Exact marginalization is NP-hard
It requires all feasible permutations to build the joint distribution.

Solution
Approximation using the m-best solutions.
Marginalization by considering a fraction of the matching space

Use the m highest joint probabilities q(Y) (equivalently, the m lowest costs g(Y)), where Y_l* denotes the l-th best solution of

Y* = argmin_{Y ∈ 𝒴} g(Y)    /    Y* = argmax_{Y ∈ 𝒴} q(Y)

The approximation error bound decreases exponentially as the number of solutions m increases [Rezatofighi et al., ICCV 2015].
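As a concrete sketch of the approximation, the marginal p(y_j^k = 1) can be estimated by summing the normalized probabilities of the m best solutions that contain that assignment. The snippet below enumerates all permutations purely for illustration (exact enumeration is the NP-hard part the m-best scheme avoids) and assumes q(Y) ∝ exp(−g(Y)); the cost matrix is made up:

```python
import itertools
import numpy as np

def m_best_marginals(D, m):
    """Approximate p(y_j^k = 1) using only the m best one-to-one
    assignments, weighting each solution Y_l* by q(Y_l*) ∝ exp(-g(Y_l*))."""
    N = D.shape[0]
    # Enumerate all feasible permutations (illustration only; intractable in general).
    sols = sorted(itertools.permutations(range(N)),
                  key=lambda p: sum(D[j, p[j]] for j in range(N)))[:m]
    marg = np.zeros_like(D)
    Z = 0.0
    for p in sols:
        w = np.exp(-sum(D[j, p[j]] for j in range(N)))  # unnormalized q(Y_l*)
        Z += w
        for j, k in enumerate(p):
            marg[j, k] += w
    return marg / Z  # each row sums to 1 over the candidates

D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
P = m_best_marginals(D, m=4)
print(np.round(P, 3))
```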
Naïve exclusion strategy

Re-solve the program m times, each time adding one linear constraint per previously found solution to exclude it:

Y_1* = argmin g(Y)   s.t.  BY ≤ C
Y_2* = argmin g(Y)   s.t.  BY ≤ C,  ⟨Y, Y_1*⟩ ≤ ‖Y_1*‖₁ − 1
Y_3* = argmin g(Y)   s.t.  BY ≤ C,  ⟨Y, Y_1*⟩ ≤ ‖Y_1*‖₁ − 1,  ⟨Y, Y_2*⟩ ≤ ‖Y_2*‖₁ − 1
⋮
Y_l* = argmin g(Y)   s.t.  BY ≤ C,  ⟨Y, Y_i*⟩ ≤ ‖Y_i*‖₁ − 1  for i = 1, …, l−1

All constraints can be collected compactly as B′Y ≤ C′.
A general approach, but impractical for large values of m.

Binary Tree Partitioning
Partition the space into a set of disjoint subspaces [Rezatofighi et al., ICCV 2015].
An efficient approach, but not a good strategy for weak solvers.
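The exclusion constraint ⟨Y, Y_i*⟩ ≤ ‖Y_i*‖₁ − 1 simply says the new solution must differ from Y_i* in at least one assignment. The sketch below uses a brute-force search over permutation matrices as a stand-in for a real binary-program solver (made-up cost matrix, small N so enumeration stays feasible):

```python
import itertools
import numpy as np

def naive_exclusion_m_best(D, m):
    """Find the m best assignments by re-solving the program, each time
    adding an exclusion constraint <Y, Y_i*> <= ||Y_i*||_1 - 1 for every
    previously found solution (brute force stands in for the solver)."""
    N = D.shape[0]
    feasible = [np.eye(N, dtype=int)[list(p)] for p in itertools.permutations(range(N))]
    found = []
    for _ in range(m):
        best = None
        for Y in feasible:
            # Exclusion constraints: Y must differ from every previous solution.
            if any((Y * Yi).sum() > Yi.sum() - 1 for Yi in found):
                continue
            cost = (D * Y).sum()
            if best is None or cost < best[0]:
                best = (cost, Y)
        found.append(best[1])
    return found

D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
sols = naive_exclusion_m_best(D, m=3)
print([(D * Y).sum() for Y in sols])  # → [5.0, 6.0, 6.0]
```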
Person Re-Identification

[Figure: each query image is matched one-to-one against the gallery images, with an additional "none of them" option]

Original assignment costs: the matrix D = (d_j^k) of query-to-gallery costs, solved as Y* = argmin_{Y ∈ 𝒴} DᵀY.

m-best marginalized costs: the matrix 𝔡 = (𝔡_j^k) derived from the m-best marginals replaces the original costs; the resulting ranking is improved.

Dataset (Size)      Method (m=100)   CMC scores (%)          Time (sec.)
RAiD (20×20)        FT               74.0   82.0    96.0
                    mbst-FT          85.0   99.0   100.0      1.6
iLIDS (59×59)       AvgF             51.9   60.7    72.4
                    mbst-AvgF        54.7   63.6    75.4      15.4
VIPeR (316×316)     AvgF             44.9   58.3    76.3
                    mbst-AvgF        50.5   63.0    78.0      201.9

Baselines: FT [Das et al., ECCV 2014], AvgF [Paisitkriangkrai et al., CVPR 2015]
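The re-ranking step above can be sketched end to end: marginalize a query-vs-gallery cost matrix over the m best joint assignments, then rank the gallery per query by the marginalized costs −log p instead of the original costs. The cost matrix is hypothetical and the permutation enumeration is for illustration only:

```python
import itertools
import numpy as np

# Hypothetical query-vs-gallery appearance costs d_j^k (3 queries, 3 gallery IDs).
D = np.array([[0.2, 0.3, 0.9],
              [0.1, 0.4, 0.8],
              [0.6, 0.5, 0.3]])

# Marginalize over the m best joint one-to-one assignments (brute force here).
m = 4
perms = sorted(itertools.permutations(range(3)),
               key=lambda p: sum(D[j, p[j]] for j in range(3)))[:m]
weights = [np.exp(-sum(D[j, p[j]] for j in range(3))) for p in perms]
P = np.zeros_like(D)
for w, p in zip(weights, perms):
    for j, k in enumerate(p):
        P[j, k] += w
P /= sum(weights)

# Replace the original costs with m-best marginalized costs.
D_marg = -np.log(P + 1e-12)

# Per-query gallery ranking under original vs. marginalized costs.
print("original ranking :", D.argsort(axis=1))
print("marginal ranking :", D_marg.argsort(axis=1))
```

Because the marginals pool evidence from several near-optimal joint assignments, competing queries that want the same gallery identity are disambiguated jointly rather than independently.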
Feature Matching

Y* = argmax_{Y ∈ 𝒴} YᵀLY,  matching on the PASCAL VOC dataset [Leordeanu et al., IJCV 2011]

Solvers: BP solver [Zhang et al., CVPR 2016], IPFP solver [Leordeanu et al., IJCV 2011]

[Figure: matching accuracy of the solvers' single best solution vs. the m-best marginalized results]
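For the quadratic objective YᵀLY, even the single best solution is hard to find in general; a tiny brute-force version makes the objective concrete. The random symmetric affinity matrix L is a stand-in for real pairwise feature affinities (the experiments above use dedicated BP and IPFP solvers instead):

```python
import itertools
import numpy as np

# Stand-in pairwise affinity matrix L over vectorized assignments y_j^k:
# entry L[a, b] scores how compatible assignments a and b are (N = O = 3,
# so Y is a 9-dimensional binary vector).
rng = np.random.default_rng(0)
N = 3
L = rng.random((N * N, N * N))
L = (L + L.T) / 2  # symmetric affinities

def score(p):
    # Vectorize the permutation p into y and evaluate the objective y^T L y.
    y = np.zeros(N * N)
    for j, k in enumerate(p):
        y[j * N + k] = 1.0
    return y @ L @ y

# Y* = argmax_{Y in Y} Y^T L Y via exhaustive search (illustration only).
best = max(itertools.permutations(range(N)), key=score)
print(best, score(best))
```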
Limitations
The one-to-one constraint is no longer guaranteed after marginalization.
Computing the m solutions adds computational overhead.

Conclusion
Graph matching using approximate marginals from the m-best solutions instead of the MAP estimate.
A generic approach applicable to similar problems.
Marginalization improves matching accuracy and ranking.

Take-home message
Do not rely on a single solution; explore more solutions.

Future work
Exploring further applications with arbitrary cost functions.
Email: hamid.rezatofighi@adelaide.edu.au
Code will be available