Joint Probabilistic Matching Using m-Best Solutions (PowerPoint presentation)

SLIDE 1

Joint Probabilistic Matching Using m-Best Solutions

  • S. Hamid Rezatofighi, Anton Milan, Zhen Zhang, Qinfeng Shi, Antony Dick, Ian Reid

SLIDE 2

Introduction

 One-to-One Graph Matching in Computer Vision

  • Action Recognition
  • Feature Point Matching
  • Multi-Target Tracking
  • Person Re-Identification


SLIDE 3

Introduction

 Most existing works focus on

  • Feature and/or metric learning [Zhao et al., CVPR 2014, Liu et al., ECCV 2010]
  • Developing better solvers [Cho et al., ECCV 2010, Zhou & De la Torre, CVPR 2013]

 The optimal solution does not necessarily yield the correct matching assignment.

 To improve the matching results, we propose

  • to consider more feasible solutions
  • a principled approach for combining the solutions

SLIDE 4

One-to-One Graph Matching

 Formulating it as a constrained binary program


SLIDE 5

One-to-One Graph Matching

 Formulating it as a constrained binary program

[Figure: bipartite assignment graph with variables $y_1^1, \dots, y_N^O$]

SLIDE 6

One-to-One Graph Matching

 Formulating it as a constrained binary program

[Figure: bipartite assignment graph with variables $y_1^1, \dots, y_N^O$]

$y_j^k \in \{0,1\}$

$Y = \left(y_1^0, y_1^1, \dots, y_j^k, \dots, y_N^O\right) \in \{0,1\}^{N \times (O+1)}$

SLIDE 7

One-to-One Graph Matching

 Formulating it as a constrained binary program

[Figure: bipartite assignment graph with variables $y_1^1, \dots, y_N^O$]

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; g(Y)$  or  $Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; q(Y)$

where

$\mathcal{Y} = \left\{ Y = \left(y_j^k\right)_{\forall j,k} \;\middle|\; y_j^k \in \{0,1\},\ \forall k: \textstyle\sum_j y_j^k \le 1,\ \forall j: \textstyle\sum_k y_j^k = 1 \right\}$
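To make the formulation concrete, here is a minimal sketch (with a hypothetical 3×3 cost matrix, not values from the slides) that solves the linear instance of this binary program over the one-to-one constraint set, using the Hungarian method from SciPy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3x3 assignment-cost matrix D (rows: items j, columns: targets k).
D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

# linear_sum_assignment returns the row/column indices of the optimal
# one-to-one matching, i.e. Y* = argmin_{Y in calY} sum_{j,k} D[j,k] * y_j^k.
rows, cols = linear_sum_assignment(D)
Y_star = np.zeros_like(D, dtype=int)
Y_star[rows, cols] = 1

print(Y_star)               # optimal binary assignment matrix
print(D[rows, cols].sum())  # optimal cost g(Y*)
```

The returned `Y_star` satisfies exactly the constraints in $\mathcal{Y}$ above: every row (item) is assigned once, every column (target) at most once.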

SLIDE 8

One-to-One Graph Matching

(Animation frame: repeats the constrained binary program and constraint set from Slide 7.)

SLIDE 9

One-to-One Graph Matching

(Animation frame: repeats the constrained binary program and constraint set from Slide 7.)

SLIDE 10

One-to-One Graph Matching

 Formulating it as a constrained binary program

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; g(Y)$  or  $Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; q(Y)$

where the one-to-one constraints defining $\mathcal{Y}$ can be written compactly as linear constraints $BY \le C$

SLIDE 11

One-to-One Graph Matching

(Animation frame: repeats the constrained binary program and constraint set from Slide 7.)

SLIDE 12

One-to-One Graph Matching

 Examples of the joint matching distribution $q(Y)$ and cost $g(Y)$ in different applications

  • Multi-target tracking [Zheng et al., CVPR 2008] and person re-identification [Das et al., ECCV 2014]: linear cost $g(Y) = D^{\top} Y$, or equivalently $q(Y) \propto \prod_{j,k} q(y_j^k)^{y_j^k}$
  • Feature point matching [Leordeanu et al., IJCV 2011]: quadratic cost $g(Y) = Y^{\top} R\, Y$
  • Stereo matching [Meltzer et al., ICCV 2005] and iterative closest point [Zhang, IJCV 1994]: higher-order constraints in addition to one-to-one constraints
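As a sketch of how the two cost families differ, the snippet below evaluates both on one feasible assignment; the matrices `D` and `R` are random stand-ins for illustration, not values from the slides:

```python
import numpy as np

N, O = 3, 3
rng = np.random.default_rng(0)

# A feasible assignment Y (here: item j matched to target j), flattened.
Y = np.eye(N, O)
y = Y.ravel()

# Linear cost g(Y) = D^T vec(Y): each pairing contributes independently.
D = rng.random(N * O)
g_linear = D @ y

# Quadratic cost g(Y) = vec(Y)^T R vec(Y): R scores pairs of pairings,
# e.g. geometric consistency between two feature correspondences.
R = rng.random((N * O, N * O))
g_quadratic = y @ R @ y

print(g_linear, g_quadratic)
```

The linear form decomposes over individual assignments, which is why it factorises into the product $q(Y) \propto \prod_{j,k} q(y_j^k)^{y_j^k}$; the quadratic form does not.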

SLIDE 13

Marginalization vs. MAP Estimates

 In general, the globally optimal solution may or may not be easy to obtain.

 Even the optimal solution does not necessarily yield the correct matching assignment.

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; g(Y)$  or  $Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; q(Y)$

SLIDE 14

Marginalization vs. MAP Estimates

 In general, the globally optimal solution may or may not be easy to obtain.

 Even the optimal solution does not necessarily yield the correct matching assignment, due to

  • Visual similarity
  • Other ambiguities in the matching space

SLIDE 15

Marginalization vs. MAP Estimates

(Animation frame: repeats Slide 14.)

SLIDE 16

Marginalization vs. MAP Estimates

(Animation frame: repeats Slide 13.)

SLIDE 17

Marginalization vs. MAP Estimates

(Animation frame: repeats Slide 13.)

SLIDE 18

Marginalization vs. MAP Estimates

Motivation to use marginalization

 Encoding the entire distribution untangles potential ambiguities

  • MAP only considers a single value of that distribution

 Marginalization improves the matching ranking due to its averaging / smoothing property

Exact marginalization is NP-hard

 It requires all feasible permutations to build the joint distribution

Solution

 Approximation using the m-best solutions
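A small numerical sketch of this approximation, on a toy cost matrix with an assumed Gibbs form $q(Y) \propto \exp(-g(Y))$: enumerating all $N! = 6$ feasible assignments gives the exact marginals, and re-normalising over only the m best approximates them:

```python
import numpy as np
from itertools import permutations

# Toy 3x3 assignment costs; q(Y) ∝ exp(-g(Y)) is an assumed form here.
D = np.array([[0.1, 1.0, 2.0],
              [1.2, 0.2, 1.5],
              [2.0, 1.1, 0.3]])
N = D.shape[0]

def marginals(sols):
    """p(y_j^k = 1) estimated by re-normalising q over the given solutions."""
    w = np.array([np.exp(-sum(D[j, p[j]] for j in range(N))) for p in sols])
    M = np.zeros_like(D)
    for wi, p in zip(w, sols):
        for j, k in enumerate(p):
            M[j, k] += wi
    return M / w.sum()

# All N! = 6 feasible one-to-one assignments, sorted from best to worst.
all_sols = sorted(permutations(range(N)),
                  key=lambda p: sum(D[j, p[j]] for j in range(N)))

exact = marginals(all_sols)  # m = N! recovers the exact marginals
for m in (1, 2, 4, 6):
    approx = marginals(all_sols[:m])
    print(m, np.abs(approx - exact).max())  # approximation error at m
```

At m = 1 the "marginals" are just the MAP indicator matrix; as m grows the approximation tightens, reaching the exact marginals at m = N!.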

SLIDE 19

Marginalization Using m-Best Solutions

Marginalization by considering a fraction of the matching space

 Using the m highest joint probabilities $q(Y)$ / the m lowest values of $g(Y)$

SLIDE 20

Marginalization Using m-Best Solutions

Marginalization by considering a fraction of the matching space

 Using the m highest joint probabilities $q(Y)$ / the m lowest values of $g(Y)$

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; g(Y)$  or  $Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; q(Y)$

$Y_1^*$ is the 1st optimal solution

SLIDE 21

Marginalization Using m-Best Solutions

(Animation frame: as Slide 20, now showing $Y_2^*$, the 2nd optimal solution.)

SLIDE 22

Marginalization Using m-Best Solutions

(Animation frame: as Slide 20, now showing $Y_3^*$, the 3rd optimal solution.)

SLIDE 23

Marginalization Using m-Best Solutions

(Animation frame: as Slide 20, now showing $Y_k^*$, the k-th optimal solution.)

SLIDE 24

Marginalization Using m-Best Solutions

(Animation frame: repeats Slide 23.)

SLIDE 25

Marginalization Using m-Best Solutions

Marginalization by considering a fraction of the matching space

 Using the m highest joint probabilities $q(Y)$ / the m lowest values of $g(Y)$

 The approximation error bound decreases exponentially with the number of solutions [Rezatofighi et al., ICCV 2015]

$Y_k^*$ is the k-th optimal solution

SLIDE 26

Computing the m-Best Solutions

Naïve exclusion strategy

$Y_1^* = \underset{BY \le C}{\arg\min}\; g(Y)$

SLIDE 27

Computing the m-Best Solutions

Naïve exclusion strategy

$Y_2^* = \arg\min\; g(Y) \quad \text{s.t.} \quad BY \le C,\ \ \langle Y, Y_1^* \rangle \le \mathbf{1}^{\top} Y_1^* - 1$

SLIDE 28

Computing the m-Best Solutions

Naïve exclusion strategy

$Y_3^* = \arg\min\; g(Y) \quad \text{s.t.} \quad BY \le C,\ \ \langle Y, Y_1^* \rangle \le \mathbf{1}^{\top} Y_1^* - 1,\ \ \langle Y, Y_2^* \rangle \le \mathbf{1}^{\top} Y_2^* - 1$

SLIDE 29

Computing the m-Best Solutions

Naïve exclusion strategy

$Y_l^* = \arg\min\; g(Y) \quad \text{s.t.} \quad BY \le C,\ \ \langle Y, Y_i^* \rangle \le \mathbf{1}^{\top} Y_i^* - 1 \ \ \text{for } i = 1, \dots, l-1$

SLIDE 30

Computing the m-Best Solutions

Naïve exclusion strategy

$Y_l^* = \arg\min\; g(Y) \quad \text{s.t.} \quad BY \le C,\ \ \acute{B}Y \le \acute{C}$ (the accumulated exclusion constraints)

 General approach
 Impractical for large values of m
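A minimal illustration of the exclusion strategy on toy 3×3 costs. No ILP solver is assumed here: each re-solve is a brute-force search over permutations, and the constraint $\langle Y, Y_i^* \rangle \le \mathbf{1}^{\top} Y_i^* - 1$ is enforced simply by discarding previously found solutions:

```python
import numpy as np
from itertools import permutations

# Hypothetical 3x3 assignment costs.
D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
N = D.shape[0]

def cost(p):
    return sum(D[j, p[j]] for j in range(N))

# Naive exclusion: re-solve m times, each time forbidding every solution
# found so far (the effect of the constraints <Y, Y_i*> <= 1^T Y_i* - 1).
m, excluded, m_best = 3, [], []
for _ in range(m):
    best = min((p for p in permutations(range(N)) if p not in excluded), key=cost)
    m_best.append((best, cost(best)))
    excluded.append(best)

for p, c in m_best:
    print(p, c)
```

Each iteration re-solves the whole problem with one more constraint, which is why the slide calls the approach impractical for large m.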

SLIDE 31

Computing the m-Best Solutions

Naïve exclusion strategy
  • General approach
  • Impractical for large values of m

Binary tree partitioning: partitioning the space into a set of disjoint subspaces [Rezatofighi et al., ICCV 2015]
  • Efficient approach
  • Not a good strategy for weak solvers
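The partitioning idea can be sketched compactly. This is a Murty-style illustration for the linear cost only, not the paper's exact algorithm; it assumes SciPy's Hungarian solver and uses a large constant to mark excluded pairings:

```python
import heapq
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # stands in for an excluded (infeasible) pairing

def solve(C):
    """Best assignment of C, or None if it must use an excluded pairing."""
    r, c = linear_sum_assignment(C)
    cost = C[r, c].sum()
    return (cost, list(zip(r, c))) if cost < BIG else None

def m_best(C, m):
    """Murty-style partitioning: each popped solution splits its subspace
    into disjoint children by fixing a prefix of its pairings and
    excluding the next one, so no solution is enumerated twice."""
    C = C.astype(float)
    cost0, sol0 = solve(C)
    heap, tie = [(cost0, 0, C, sol0)], 0  # tie counter avoids comparing arrays
    out = []
    while heap and len(out) < m:
        cost, _, sub, sol = heapq.heappop(heap)
        out.append((cost, sol))
        for i in range(len(sol)):
            child = sub.copy()
            for r, c in sol[:i]:          # fix the first i pairings
                child[r, :] = BIG
                child[r, c] = sub[r, c]
            r, c = sol[i]                 # exclude the (i+1)-th pairing
            child[r, c] = BIG
            s = solve(child)
            if s is not None:
                tie += 1
                heapq.heappush(heap, (s[0], tie, child, s[1]))
    return out

D = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
for cost, sol in m_best(D, 3):
    print(cost, sol)
```

Because every subproblem is solved to optimality before entering the heap, solutions pop in nondecreasing cost order; with a weak (non-exact) solver this ordering breaks, matching the caveat on the slide.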

SLIDE 32

Experimental Results

Person Re-Identification

[Figure: query images matched against gallery images (including a "none of them" option), with assignment costs $d_1^1, \dots, d_N^O$]

SLIDE 33

Experimental Results

Person Re-Identification

[Figure: original assignment-cost matrix $(d_j^k)_{N \times O}$ between query and gallery images]

SLIDE 34

Experimental Results

Person Re-Identification

[Figure: original assignment costs $(d_j^k)$ and m-best marginalized costs $(\mathfrak{d}_j^k)$ between query and gallery images]

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; D^{\top} Y$

SLIDE 35

Experimental Results

Person Re-Identification

 Ranking is improved

[Figure: original assignment costs $(d_j^k)$ vs. m-best marginalized costs $(\mathfrak{d}_j^k)$; $Y^* = \underset{Y \in \mathcal{Y}}{\arg\min}\; D^{\top} Y$]
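A toy sketch of this re-ranking step, on hypothetical 3×3 query-gallery costs: marginals are approximated from the m-best joint assignments, then turned into marginalized costs $\mathfrak{d}_j^k = -\log p(y_j^k = 1)$ (an assumed construction for illustration):

```python
import numpy as np
from itertools import permutations

# Hypothetical query-gallery assignment costs d_j^k (3 queries, 3 identities).
D = np.array([[0.2, 0.9, 1.1],
              [0.8, 0.3, 0.7],
              [1.0, 0.6, 0.4]])
N, m = D.shape[0], 4

# m-best joint one-to-one assignments, with weights ∝ exp(-g(Y)).
sols = sorted(permutations(range(N)),
              key=lambda p: sum(D[j, p[j]] for j in range(N)))[:m]
w = np.array([np.exp(-sum(D[j, p[j]] for j in range(N))) for p in sols])

# Marginalized costs: -log of the approximate marginal of each pairing.
marg = np.full_like(D, 1e-12)  # floor avoids log(0) for unseen pairings
for wi, p in zip(w, sols):
    for j, k in enumerate(p):
        marg[j, k] += wi
D_marg = -np.log(marg / w.sum())

# Each query now ranks the gallery by the jointly-informed costs.
ranking = np.argsort(D_marg, axis=1)
print(ranking)
```

Unlike the original per-query costs, `D_marg` incorporates the competition between queries for the same gallery identity, which is what improves the ranking.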

SLIDE 36

Experimental Results

Person Re-Identification

Baselines: FT [Das et al., ECCV 2014], AvgF [Paisitkriangkrai et al., CVPR 2015]

  Dataset (Size)    Method (m=100)   Matching rates (%)      Time (s)
  RAiD (20×20)      FT               74.0   82.0   96.0
                    mbst-FT          85.0   99.0  100.0       1.6
  iLIDS (59×59)     AvgF             51.9   60.7   72.4
                    mbst-AvgF        54.7   63.6   75.4       15.4
  VIPeR (316×316)   AvgF             44.9   58.3   76.3
                    mbst-AvgF        50.5   63.0   78.0       201.9

SLIDE 37

Experimental Results

Person Re-Identification

SLIDE 38

Experimental Results

Feature Matching

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; Y^{\top} L\, Y$

Matching on the PASCAL VOC dataset [Leordeanu et al., IJCV 2011]

SLIDE 39

Experimental Results: Feature Matching

(Animation frame: repeats Slide 38.)

SLIDE 40

Experimental Results

Feature Matching

$Y^* = \underset{Y \in \mathcal{Y}}{\arg\max}\; Y^{\top} L\, Y$ on the PASCAL VOC dataset [Leordeanu et al., IJCV 2011]

Solvers: BP solver [Zhang et al., CVPR 2016], IPFP solver [Leordeanu et al., IJCV 2011]

SLIDE 41 - SLIDE 43

Experimental Results: Feature Matching

(Animation frames: repeat the feature-matching setup from Slide 38.)

SLIDE 44

Discussion & Conclusion

Limitations

 The one-to-one constraint is no longer guaranteed after marginalization
 Computing the m solutions adds computational overhead

Conclusion

 Graph matching using approximate marginals from the m-best solutions instead of the MAP estimate
 A generic approach applicable to similar problems
 Marginalization improves matching accuracy and ranking

Take-home message

 Do not rely on a single solution; explore more solutions

Future work

 Exploring further applications with arbitrary cost functions

SLIDE 45

Thank you

Visit our poster

Email: hamid.rezatofighi@adelaide.edu.au

Code will be available