SLIDE 1

Learning Two-View Stereo Matching

Jianxiong Xiao, Jingni Chen, Dit-Yan Yeung, Long Quan

Department of Computer Science and Engineering, The Hong Kong University of Science and Technology

The 10th European Conference on Computer Vision

Jianxiong Xiao et al. (HKUST), Learning Two-View Stereo Matching, ECCV 2008

SLIDE 2

Outline

1. Introduction
2. Semi-supervised Matching Framework
   - Local Label Preference Cost
   - Regional Surface Shape Cost
   - Global Epipolar Geometry Cost
   - Symmetric Visibility Consistency Cost
3. Iterative MV Optimization
4. Learning the Symmetric Affinity Matrix
5. More Details
6. Experiments

SLIDE 3

Introduction

SLIDE 4

Introduction

Stereo Matching between Two Images

Input: two wide-baseline images taken of the same static scene, neither calibrated nor rectified. This general setting is needed for broader applications, such as robust motion and structure estimation.

SLIDE 5

Introduction

Related Work

- Small-baseline matching algorithms: cannot be extended easily when the epipolar lines are not parallel.
- Wide-baseline matching: depends heavily on the epipolar geometry, which has to be provided, often by off-line calibration.
- Sparse matching: the estimated fundamental matrix often fits subsets of the image rather than the whole image.
- Region-growing methods: greedy; poor results when pixel scales differ greatly, due to the discrete growing.
- Learning techniques: the information learned from other, unrelated images is very weak, so the quality of the result depends greatly on the training data.


SLIDE 10

Introduction

Our Semi-supervised Matching Approach

- Proposes a semi-supervised perspective on the matching problem, without training.
- Utilizes all information (local, regional, and global) in a single optimization procedure.
- More robust to noise: each label vector is affected not merely by one matched pair but by all pairs with weighted paths to it.
- Capable of handling real-valued labels, an inherent requirement of sub-pixel-accuracy matching.

SLIDE 11

Semi-supervised Matching Framework

SLIDE 12

Semi-supervised Matching Framework

Three Main Categories of Learning Methods

- Supervised learning: given examples labeled √ and ×, decide whether a new example is √ or ×.
- Unsupervised learning: given only unlabeled examples, find any interesting structure in them.
- Semi-supervised learning: given a few labeled examples (√ / ×) together with many unlabeled ones, label the rest.


SLIDE 16

Semi-supervised Matching Framework

Notations

For $p = 1$ or $2$, $q = 3 - p$:

- $x^p_{(s_p-1)c_p+t_p}$: the pixel at coordinate position $(s_p, t_p)$ in the $p$-th image space, with $s_p \in \{1,\dots,r_p\}$, $t_p \in \{1,\dots,c_p\}$, and index $i = (s_p - 1)\,c_p + t_p$.
- $X^p$: input image with $n_p = r_p \times c_p$ pixels, $X^p = \left[x^p_1, x^p_2, \dots, x^p_{n_p}\right]^T$.
- $x^q_j$: a matching point of $x^p_i$, located at coordinate position $(s_q, t_q)$ in the $q$-th continuous image space, $s_q, t_q \in \mathbb{R}$.
- Label vector $y^p_i = \left[v^p_i, h^p_i\right]^T = \left[s_1, t_1\right]^T - \left[s_2, t_2\right]^T \in \mathbb{R}^2$, representing the position offset from the point in the first image to the point in the second image.
- Label matrix $Y^p = \left[y^p_1, \dots, y^p_{n_p}\right]^T$ and visibility vector $O^p = \left[o^p_1, \dots, o^p_{n_p}\right]^T$.


SLIDE 21

Semi-supervised Matching Framework

Smoothness

IDEA: nearby pixels are more likely to have similar label vectors.

- Smoothness assumption ⟹ a graph $G = \{V, E\}$.
- Two images ⟹ two graphs $G^1 = \{V^1, E^1\}$ and $G^2 = \{V^2, E^2\}$.
- $N(x^p_i)$: the set of data points in the neighborhood of $x^p_i$.
- Affinity matrix $W^p$: $w^p_{ij}$ is non-zero iff $x^p_i$ and $x^p_j$ are neighbors in $E^p$.
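To make the graph construction concrete, here is a minimal sketch (ours, not the paper's code) of the sparsity pattern of such an affinity matrix for a 4-neighborhood pixel grid. The non-zero values are fixed to 1 only for illustration; in the paper the actual weights are learned (see the "Learning the Symmetric Affinity Matrix" slides).

```python
import numpy as np

def grid_affinity(rows, cols):
    """Binary affinity matrix of a 4-neighborhood pixel grid:
    W[i, j] != 0 iff pixels i and j are neighbors in the grid graph.
    Pixel (s, t) gets the 0-based flat index i = s * cols + t."""
    n = rows * cols
    W = np.zeros((n, n))
    for s in range(rows):
        for t in range(cols):
            i = s * cols + t
            for ds, dt in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                u, v = s + ds, t + dt
                if 0 <= u < rows and 0 <= v < cols:
                    W[i, u * cols + v] = 1.0
    return W
```

By construction the matrix is symmetric, and each row has as many non-zeros as the pixel has in-bounds neighbors.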


SLIDE 26

Semi-supervised Matching Framework

Semi-Supervised Setting

- Many existing matching techniques, such as SIFT, are already powerful enough to recover some sparse matched pairs accurately and robustly.
- Labeled data $\left(X^1_l, Y^1_l\right)$ and $\left(X^2_l, Y^2_l\right)$ ⟹ unlabeled data $\left(X^1_u, Y^1_u\right)$ and $\left(X^2_u, Y^2_u\right)$.
- Semi-supervised learning on the graph representation tries to estimate a label matrix $\hat{Y}^p$ that is consistent with:
  - the initial incomplete label matrix,
  - the geometry of the data manifold induced by the graph structure.


SLIDE 32

Semi-supervised Matching Framework

Consistency Cost with Initial Labeling

Given a configuration $\hat{Y}^p$, consistency with the initial labeling can be measured by

$$C^p_l\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p_l} o^p_i \left\| \hat{y}^p_i - y^p_i \right\|^2 .$$

SLIDE 33

Semi-supervised Matching Framework

Consistency Cost with Geometry

Consistency with the geometry of the data in the image space, which follows from the smooth-manifold assumption, motivates a penalty term of the form

$$C^p_s\left(\hat{Y}^p, O^p\right) = \frac{1}{2} \sum_{x^p_i, x^p_j \in X^p} w^p_{ij}\, \phi\!\left(o^p_i, o^p_j\right) \left\| \hat{y}^p_i - \hat{y}^p_j \right\|^2 ,$$

where $\phi\!\left(o^p_i, o^p_j\right) = \frac{1}{2}\left(\left(o^p_i\right)^2 + \left(o^p_j\right)^2\right)$ penalizes rapid changes in $\hat{Y}^p$ between points that are close, and only enforces smoothness within visible regions.
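The two consistency costs above are simple weighted sums of squared differences. A small NumPy sketch with toy variable names of our choosing (dense arrays for clarity; a real implementation would use a sparse $W$):

```python
import numpy as np

def label_cost(o, y_hat, y_init, labeled):
    """C_l: sum over labeled pixels of o_i * ||y_hat_i - y_i||^2."""
    d2 = ((y_hat - y_init) ** 2).sum(axis=1)
    return (o[labeled] * d2[labeled]).sum()

def smoothness_cost(W, o, y_hat):
    """C_s: (1/2) sum_ij w_ij * phi(o_i, o_j) * ||y_hat_i - y_hat_j||^2,
    with phi(o_i, o_j) = ((o_i)^2 + (o_j)^2) / 2."""
    phi = (o[:, None] ** 2 + o[None, :] ** 2) / 2
    d2 = ((y_hat[:, None, :] - y_hat[None, :, :]) ** 2).sum(axis=2)
    return 0.5 * (W * phi * d2).sum()
```

Note how setting all visibilities to zero zeroes out the smoothness cost, which is why the local cost (next slides) needs the occlusion penalty term.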

SLIDE 34

Semi-supervised Matching Framework: Local Label Preference Cost

SLIDE 35

Semi-supervised Matching Framework: Local Label Preference Cost

Local Label Preference Cost

The local cost is defined as

$$C^p_d\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p} \left( o^p_i\, \rho^p_i\!\left(\hat{y}^p_i\right) + \left(1 - o^p_i\right) \tau^p_i \right).$$

- Similarity cost function $\rho^p_i(y)$: the similarity cost between the pixel $x^p_i$ in one image and the corresponding point for the label vector $y$ in the other image space.
- Penalty term $\tau^p_i = \max_{x^p_j \in N(x^p_i)} \left\| x^p_i - x^p_j \right\|$: prevents every point from taking zero visibility simply to escape the cost.

SLIDE 36

Semi-supervised Matching Framework: Regional Surface Shape Cost

SLIDE 37

Semi-supervised Matching Framework: Regional Surface Shape Cost

Assumption

Shape Cues: the shapes of the 3D object surfaces in the scene are very important cues for matching.

Intuitive Approach: reconstruct the 3D surface based on two-view geometry. Unstable, especially when the baseline is not large enough.

Piecewise Planar Patch Assumption: since two data points with a high affinity relation are more likely to have similar label vectors, we assume that the label vector of a data point can be linearly approximated by the label vectors of its neighbors.


SLIDE 40

Semi-supervised Matching Framework: Regional Surface Shape Cost

Reconstruction Cost

The label of a data point can be linearly constructed from its neighbors:

$$y^p_i = \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, y^p_j .$$

The reconstruction cost can be defined as

$$C_r\left(Y^p\right) = \sum_{x^p_i \in X^p} \Big\| y^p_i - \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, y^p_j \Big\|^2 = \left\| \left(I - W^p\right) Y^p \right\|_F^2 \approx \operatorname{tr}\!\left( \left(Y^p\right)^T L^p\, Y^p \right) = \sum_{x^p_i, x^p_j \in X^p} a^p_{ij} \left\| y^p_i - y^p_j \right\|^2 ,$$

where

- $A^p = W^p + \left(W^p\right)^T - W^p \left(W^p\right)^T$ is the adjacency matrix,
- $D^p$ is a diagonal matrix containing the row sums of $A^p$, with $D^p \approx I$,
- $L^p = D^p - A^p$ is the un-normalized graph Laplacian matrix.
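The chain of equalities above can be sanity-checked numerically. A small sketch under a simplifying assumption: a symmetric toy weight matrix with unit row sums (a ring graph), for which $D^p = I$ holds exactly, so the trace form matches the Frobenius form:

```python
import numpy as np

n = 6
# Toy symmetric weight matrix with unit row sums: a ring graph where each
# node splits its weight evenly between its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

A = W + W.T - W @ W.T               # adjacency matrix from the slide
D = np.diag(A.sum(axis=1))          # row sums; here exactly the identity
L = D - A                           # un-normalized graph Laplacian

Y = np.random.default_rng(0).normal(size=(n, 2))   # toy label matrix
frob = np.linalg.norm((np.eye(n) - W) @ Y, "fro") ** 2
trace = np.trace(Y.T @ L @ Y)
```

With a symmetric $W$, $(I - W)^T (I - W) = I - 2W + W^2 = D - A$ whenever $D = I$, which is why the slide's approximation becomes exact in this toy setting.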

SLIDE 41

Semi-supervised Matching Framework: Regional Surface Shape Cost

LLE Cost

To align the two 2D manifolds (image spaces) to one 2D manifold (the visible surface), the labeled data (known matched pairs) are accounted for by constraining the mapped coordinates of matched points to coincide:

$$X^p_c = X^p_l \cup X^p_u \cup X^q_u, \qquad
\hat{Y}^p_c = \begin{bmatrix} \hat{Y}^p_l \\ \hat{Y}^p_u \\ \hat{Y}^q_u \end{bmatrix}, \qquad
O^p_c = \begin{bmatrix} O^p_l \\ O^p_u \\ O^q_u \end{bmatrix}, \qquad
A^p = \begin{bmatrix} A^p_{ll} & A^p_{lu} \\ A^p_{ul} & A^p_{uu} \end{bmatrix},$$

$$A^p_c = \begin{bmatrix} A^p_{ll} + A^q_{ll} & A^p_{lu} & A^q_{lu} \\ A^p_{ul} & A^p_{uu} & 0 \\ A^q_{ul} & 0 & A^q_{uu} \end{bmatrix},$$

$$C^p_r\left(\hat{Y}^1, \hat{Y}^2, O^1, O^2\right) = \sum_{x^p_i, x^p_j \in X^p_c} \left(a^p_c\right)_{ij}\, \phi\!\left( \left(o^p_c\right)_i, \left(o^p_c\right)_j \right) \left\| \left(\hat{y}^p_c\right)_i - \left(\hat{y}^p_c\right)_j \right\|^2 .$$

SLIDE 42

Semi-supervised Matching Framework: Global Epipolar Geometry Cost

SLIDE 43

Semi-supervised Matching Framework: Global Epipolar Geometry Cost

Global Epipolar Geometry Cost

For $x^p_i$ at position $(s_p, t_p)$, the epipolar line is

$$\left(a^p_i, b^p_i, c^p_i\right) = \left(s_p, t_p, 1\right) F^T_{pq} .$$

The squared Euclidean distance in the image space of the other image is

$$d^p_i(y) = \frac{\left(a^p_i s_q + b^p_i t_q + c^p_i\right)^2}{\left(a^p_i\right)^2 + \left(b^p_i\right)^2},$$

where $y = \left(s_1, t_1\right)^T - \left(s_2, t_2\right)^T$. The global cost is

$$C^p_g\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p} o^p_i\, d^p_i\!\left(\hat{y}^p_i\right).$$
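A minimal sketch of the distance $d^p_i$, using a toy rank-2 matrix `F_toy` chosen by us purely so the resulting epipolar line is easy to verify (it is not the fundamental matrix of any real camera pair):

```python
import numpy as np

def epipolar_sq_dist(F_pq, sp, tp, sq, tq):
    # Epipolar line of pixel (sp, tp), following the slide:
    # (a, b, c) = (sp, tp, 1) F_pq^T.
    a, b, c = np.array([sp, tp, 1.0]) @ F_pq.T
    # Squared Euclidean distance of the matched point (sq, tq) to that line.
    return (a * sq + b * tq + c) ** 2 / (a ** 2 + b ** 2)

# Toy rank-2 matrix (an illustrative assumption): it sends (sp, tp, 1) to the
# line s_q = sp in the other image, so the distance is simply |sq - sp|.
F_toy = np.array([[0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0],
                  [-1.0, 0.0, 0.0]])
```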

SLIDE 44

Semi-supervised Matching Framework: Symmetric Visibility Consistency Cost

SLIDE 45

Semi-supervised Matching Framework: Symmetric Visibility Consistency Cost

Symmetric Visibility Consistency Cost

If $x^p_i$ in one image has a label matching it to a point in the $q$-th image, then there must exist some point in the $q$-th image whose label matches it to $x^p_i$:

$$C^p_v\left(O^p, \hat{Y}^q\right) = \beta \sum_{x^p_i \in X^p} \left( o^p_i - \gamma^p_i\!\left(\hat{Y}^q\right) \right)^2 + \frac{1}{2} \sum_{x^p_i, x^p_j \in X^p} w^p_{ij} \left( o^p_i - o^p_j \right)^2 .$$

- The $\gamma$ function indicates whether or not there exist one or more data points that match a point near $x^p_i$ from the other view, according to $\hat{Y}^q$.
- The last term enforces the smoothness of the occlusion.
- $\beta$ controls the strength of the visibility constraint.

SLIDE 46

Semi-supervised Matching Framework: Symmetric Visibility Consistency Cost

Voting for γ

For each point $x^q_j$ at position $(s_q, t_q)$ in $X^q$ with label $y^q_j = \left(v^q_j, h^q_j\right)^T$:

- Place a 2D Gaussian $\psi(s, t)$ on the $p$-th image, centered at the matched position $c_j = \left(s_p, t_p\right)^T$ ⟹ a mixture of Gaussians $\sum_{x^q_j} \psi_{c_j}(s, t)$ in the voted image space.
- Truncate it: $\gamma^p(s, t) = \min\Big( 1, \sum_{x^q_j \in X^q} \psi_{c_j}(s, t) \Big).$
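The truncated Gaussian voting can be sketched on a discrete grid as follows (the grid size, σ, and centers below are toy assumptions of ours, not values from the talk):

```python
import numpy as np

def gamma_map(centers, shape, sigma=1.5):
    """Truncated Gaussian vote map: gamma(s, t) = min(1, sum_j psi_cj(s, t)).
    `centers` are the matched positions c_j voted into the p-th image."""
    rows, cols = shape
    s, t = np.mgrid[0:rows, 0:cols]
    votes = np.zeros(shape)
    for cs, ct in centers:
        # Unnormalized 2D Gaussian centered at c_j = (cs, ct).
        votes += np.exp(-((s - cs) ** 2 + (t - ct) ** 2) / (2 * sigma ** 2))
    return np.minimum(1.0, votes)
```

The truncation is what makes γ usable as a soft visibility indicator: several nearby votes saturate at 1 rather than stacking up.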

SLIDE 47

Iterative MV Optimization

SLIDE 48

Iterative MV Optimization

Optimization Process

The optimization process has two steps:

1. M-step: estimate the matching given the visibility.
2. V-step: estimate the visibility given the matching.

SLIDE 49

Iterative MV Optimization

M-step: Estimation of Matching Given Visibility

The visibility term $C_v$ imposes two constraints on $\hat{Y}$ given $O$:

1. Local constraint: each pixel $x^p_i$ in the $p$-th image should not match the invisible (occluded) points in the other image.
2. Global constraint: for each visible point in the $q$-th image, at least one data point in the $p$-th image should match it.

In the M-step, we approximate the visibility term by considering only the local constraint, which can be incorporated into the similarity function $\rho^p_i(y)$ in $C_d$.


SLIDE 51

Iterative MV Optimization

M-step: Estimation of Matching Given Visibility (Cont'd)

Let $\hat{Y} = \left[ \left(\hat{Y}^1\right)^T, \left(\hat{Y}^2\right)^T \right]^T$. Cost function:

$$C_M\left(\hat{Y}\right) = \sum_{p=1,2} \left( \lambda_l C^p_l + \lambda_s C^p_s + \lambda_d C^p_d + \lambda_r C^p_r + \lambda_g C^p_g \right) + \varepsilon \left\| \hat{Y} \right\|^2 ,$$

where $\varepsilon \| \hat{Y} \|^2$ is a small regularization term that prevents degenerate situations. For fixed $O^1$ and $O^2$, the cost is minimized by setting the derivative with respect to $\hat{Y}$ to zero, since the second derivative is a positive definite matrix.

SLIDE 52

Iterative MV Optimization

V-step: Estimation of Visibility Given Matching

Let $O = \left[ \left(O^1\right)^T, \left(O^2\right)^T \right]^T$. Cost function:

$$C_V(O) = \sum_{p=1,2} \left( \lambda_l C^p_l + \lambda_s C^p_s + \lambda_d C^p_d + \lambda_r C^p_r + \lambda_g C^p_g + \lambda_v C^p_v \right) + \varepsilon \left\| O \right\|^2 .$$

For fixed $\hat{Y}^1$ and $\hat{Y}^2$, the cost is minimized by setting the derivative with respect to $O$ to zero, since the second derivative is a positive definite matrix.

SLIDE 53

Iterative MV Optimization

System of Linear Equations

Solving

By the way $W^p$ and the cost functions are defined, the coefficient matrix is strictly diagonally dominant and positive definite. Hence, Gauss-Seidel and conjugate gradient iterations both converge to the solution of the linear system with a theoretical guarantee. A GPU is helpful to speed this up.
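The convergence claim can be illustrated with the generic textbook Gauss-Seidel iteration on a small strictly diagonally dominant system (this is a sketch, not the paper's GPU solver; the matrix and right-hand side are toy values):

```python
import numpy as np

def gauss_seidel(A, b, iters=200):
    """Gauss-Seidel iteration for A x = b; converges when A is strictly
    diagonally dominant (or symmetric positive definite), as the slide argues."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            # Use already-updated entries x[:i] and stale entries x[i+1:].
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

In practice one would stop on a residual tolerance rather than a fixed iteration count; the fixed count keeps the sketch short.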

SLIDE 54

Learning the Symmetric Affinity Matrix

SLIDE 55

Learning the Symmetric Affinity Matrix

Learning the Affinity Matrix

- Directly defining the matrix: there is no reliable approach for model selection if only very few labeled points are available.
- Learning the matrix: more reliable and stable.

SLIDE 56

Learning the Symmetric Affinity Matrix

Manifold Assumptions

- Smooth-manifold and linear-reconstructability assumptions for the manifold in image space.
- The label space and the image space share the same local linear reconstruction weights.
- The linear construction weight matrix $W^p$ is found by minimizing the energy function $E_{W^p} = \sum_{x^p_i \in X^p} E_{x^p_i}$, where

$$E_{x^p_i} = \Big\| x^p_i - \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, x^p_j \Big\|^2 = \sum_{x^p_j, x^p_k \in N(x^p_i)} w^p_{ij}\, G^i_{jk}\, w^p_{ik} ,$$

with $G^i_{jk} = \left( x^p_i - x^p_j \right)^T \left( x^p_i - x^p_k \right)$.

- To avoid the undesirable contribution of negative weights, we enforce $\sum_{x^p_j \in N(x^p_i)} w^p_{ij} = 1$ and $w^p_{ij} \ge 0$.
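A sketch of per-point reconstruction weights from the local Gram matrix $G^i$. As a simplifying assumption, it solves only the sum-to-one equality-constrained least-squares problem; the non-negativity and symmetry constraints that the talk adds require the full QP on the following slide:

```python
import numpy as np

def lle_weights(xi, neighbors, reg=1e-3):
    """Local linear reconstruction weights for one point: minimize
    ||xi - sum_j w_j x_j||^2 subject to sum_j w_j = 1 (sign-unconstrained
    sketch; the paper additionally enforces w_j >= 0 via a QP)."""
    Z = xi - neighbors                        # rows are x_i - x_j
    G = Z @ Z.T                               # local Gram matrix G^i_{jk}
    G = G + reg * np.trace(G) * np.eye(len(G))  # ridge for near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                        # enforce the sum-to-one constraint
```

The ridge term is needed because the Gram matrix is singular whenever the neighbors and the point are collinear, as in the test below.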

slide-57
SLIDE 57

Learning the Symmetric Affinity Matrix

Quadratic Programming Objective Function

min_{Wp} ∑_{x_i^p ∈ X^p} ∑_{x_j^p, x_k^p ∈ N(x_i^p)} w_ij^p G^i_jk w_ik^p + κ ∑_{ij} ( w_ij^p − w_ji^p )²   (1)

s.t. ∀ x_i^p ∈ X^p:  ∑_{x_j^p ∈ N(x_i^p)} w_ij^p = 1,  w_ij^p ≥ 0,

where ∑_{ij} ( w_ij^p − w_ji^p )² is a penalty term that encourages w_ij^p and w_ji^p to be similar.
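Problem (1) can be sketched as projected gradient descent with each row of Wp projected onto the probability simplex; this is an illustrative solver, not necessarily the one used in the paper, and the tiny Gram matrices G below are made up:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def solve_symmetric_qp(G, kappa=1.0, steps=500, lr=0.01):
    """Projected gradient descent on
    sum_i w_i^T G[i] w_i + kappa * sum_ij (W_ij - W_ji)^2,
    each row of W constrained to the probability simplex.
    Assumes each G[i] is symmetric, as a Gram matrix is."""
    n = G.shape[0]
    W = np.full((n, n), 1.0 / n)
    for _ in range(steps):
        grad = 2.0 * np.einsum('ijk,ik->ij', G, W) \
             + 4.0 * kappa * (W - W.T)   # gradient of the symmetry penalty
        W = np.array([project_simplex(row) for row in W - lr * grad])
    return W

# Tiny illustrative problem: identical identity Gram matrices.
G = np.stack([np.eye(3)] * 3)
W = solve_symmetric_qp(G)
```

With identical Gram matrices the minimizer is the uniform, exactly symmetric matrix, which the iteration preserves.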

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 34 / 45

slide-58
SLIDE 58

More Details

Outline

1

Introduction

2

Semi-supervised Matching Framework Local Label Preference Cost Regional Surface Shape Cost Global Epipolar Geometry Cost Symmetric Visibility Consistency Cost

3

Iterative MV Optimization

4

Learning the Symmetric Affinity Matrix

5

More Details

6

Experiments

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 35 / 45

slide-59
SLIDE 59

More Details

Label Initialization

1 For each image, identify the occlusion boundaries and depth ordering in the scene using [Hoiem et al. 2007].

2 Use SIFT key points and a nearest-neighbor algorithm to obtain an initial matching.

3 Enforce one-to-one cross-consistency constraints on the matching.

4 Discrete region growing [Kannala and Brandt 2007].

5 Interpolate the unmatched part by estimating local homography transformations.

6 Obtain the initial visibility matrix.

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 36 / 45
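Step 3's one-to-one cross consistency amounts to keeping only mutual nearest neighbors; a numpy sketch, with random descriptors standing in for SIFT:

```python
import numpy as np

def cross_consistent_matches(desc1, desc2):
    """Keep only matches that are mutual nearest neighbors:
    i -> j in one direction must agree with j -> i in the other,
    which makes the matching one-to-one."""
    # Pairwise squared distances between the two descriptor sets.
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    nn12 = d2.argmin(axis=1)   # best match in image 2 for each i
    nn21 = d2.argmin(axis=0)   # best match in image 1 for each j
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]

rng = np.random.default_rng(0)
desc1 = rng.normal(size=(5, 8))
# Image-2 descriptors: a permuted, slightly perturbed subset of image 1.
desc2 = desc1[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 8))
matches = cross_consistent_matches(desc1, desc2)
```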

slide-65
SLIDE 65

More Details

Computing the similarity cost function

Our algorithm works with some label data in a semi-supervised manner through the consistency cost Cl; the local cost Cd plays only an auxiliary role. Unlike traditional unsupervised matching, our framework does not rely heavily on the similarity function ρ_i^p(y).

For efficient computation, we sample ρ_i^p(y) = exp( −‖x_i^p − x_j^q‖² / (2σ²) ) at some integer combinations of h and v. We normalize the largest sampled value to 1, and then fit ρ_i^p(y) with a continuous and differentiable quadratic function, i.e.

ρ_i^p(y) = ( (v − v_o)² + (h − h_o)² ) / (2σ²),

where (v_o, h_o) and σ are the center and spread of the parabola for x_i^p.
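A numpy sketch of this sample-and-fit step with a made-up center and spread; since the quadratic is linear in its parameters, we fit −log ρ by linear least squares and read off (v_o, h_o) and σ:

```python
import numpy as np

# Synthetic samples of the similarity on an integer (v, h) grid,
# generated from a known center and spread (illustrative values).
v, h = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4), indexing='ij')
v, h = v.ravel(), h.ravel()
vo_true, ho_true, sigma_true = 0.7, -0.4, 1.5
rho = np.exp(-((v - vo_true) ** 2 + (h - ho_true) ** 2)
             / (2 * sigma_true ** 2))
rho /= rho.max()                      # normalize largest sample to 1

# ((v-vo)^2 + (h-ho)^2) / (2 sigma^2) expands to a form linear in
# [1/(2s^2), -vo/s^2, -ho/s^2, const], so fit -log(rho) by least squares.
A = np.column_stack([v**2 + h**2, v, h, np.ones_like(v, float)])
coef, *_ = np.linalg.lstsq(A, -np.log(rho), rcond=None)
inv2s2, bv, bh, _ = coef
sigma = np.sqrt(1.0 / (2 * inv2s2))   # spread of the parabola
vo = -bv / (2 * inv2s2)               # recovered center
ho = -bh / (2 * inv2s2)
```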

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 37 / 45

slide-66
SLIDE 66

More Details

The Complete Procedure

1 Compute the depth and occlusion boundary image and the feature vectors.

2 Compute sparse matching by SIFT and the confidence penalty τ, then interpolate the sparse matching results with depth information to obtain an initial solution.

3 Learn the affinity matrices W1 and W2.

4 While the cost change between two iterations ≥ threshold:
  1 Estimate the fundamental matrix F, and reject outliers to obtain a subset as label data,
  2 Compute the parameters for the similarity cost function ρ and the epipolar cost function d,
  3 Estimate matching given visibility,
  4 Compute the γ map,
  5 Estimate visibility given matching.
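Step 4.1 (estimate F and reject outliers) can be sketched with the normalized eight-point algorithm plus a Sampson-distance threshold; the synthetic cameras and the threshold value below are illustrative, not the paper's actual choices:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix
    from homogeneous correspondences x1 <-> x2 (3xN, last row 1)."""
    def normalize(x):
        m = x[:2].mean(axis=1)
        s = np.sqrt(2) / np.linalg.norm(x[:2] - m[:, None], axis=0).mean()
        T = np.array([[s, 0, -s * m[0]], [0, s, -s * m[1]], [0, 0, 1]])
        return T @ x, T
    x1n, T1 = normalize(x1)
    x2n, T2 = normalize(x2)
    ones = np.ones(x1.shape[1])
    # One row per correspondence: x2^T F x1 = 0, linear in vec(F).
    A = np.column_stack([x2n[0] * x1n[0], x2n[0] * x1n[1], x2n[0],
                         x2n[1] * x1n[0], x2n[1] * x1n[1], x2n[1],
                         x1n[0], x1n[1], ones])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    return T2.T @ F @ T1                      # undo the normalization

def sampson_error(F, x1, x2):
    """First-order geometric (Sampson) distance per correspondence."""
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = np.einsum('ij,ij->j', x2, Fx1) ** 2
    return num / (Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2)

# Synthetic noiseless two-view setup (illustrative cameras).
rng = np.random.default_rng(1)
X = np.vstack([rng.uniform(-1, 1, (3, 20)), np.ones(20)])
X[2] += 3.0                                   # push points in front
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, np.array([[0.5], [0.1], [0.0]])])
x1, x2 = P1 @ X, P2 @ X
x1, x2 = x1 / x1[2], x2 / x2[2]
F = eight_point(x1, x2)
inliers = sampson_error(F, x1, x2) < 1e-6     # threshold is illustrative
```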

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 38 / 45

slide-75
SLIDE 75

Experiments

Outline

1

Introduction

2

Semi-supervised Matching Framework Local Label Preference Cost Regional Surface Shape Cost Global Epipolar Geometry Cost Symmetric Visibility Consistency Cost

3

Iterative MV Optimization

4

Learning the Symmetric Affinity Matrix

5

More Details

6

Experiments

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 39 / 45

slide-76
SLIDE 76

Experiments

Experiments

Set the parameters to favor Cl and Cg more in the M-step, and Cv in the V-step; the parameters are tuned manually. The intensity value is set to the norm of the label vector ‖y‖. For visualization, we scale the intensity to the range 0 to 200, and only visible matchings are shown, i.e. those with o > 0.5.

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 40 / 45

slide-77
SLIDE 77

Experiments

More Results

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 41 / 45

slide-78
SLIDE 78

Experiments

3D Reconstruction from 3 Views

1 Match view pairs 1 ↔ 2 and 2 ↔ 3.

2 Chain the matches into feature tracks 1 ↔ 2 ↔ 3.

3 Projective reconstruction [Quan 1995].

4 Metric upgrade and bundle adjustment [Hartley and Zisserman 2004].

5 Feature tracks with large reprojection errors are considered outliers.

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 42 / 45
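Step 5's outlier test can be sketched as thresholding per-track reprojection error; the camera, points, and threshold below are illustrative:

```python
import numpy as np

def reprojection_errors(P, X, x):
    """Pixel distance between observed points x (2xN) and the
    projections of homogeneous 3D points X (4xN) under camera P (3x4)."""
    proj = P @ X
    proj = proj[:2] / proj[2]
    return np.linalg.norm(proj - x, axis=0)

# Illustrative camera and points; the last observation is an outlier.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([[0.0, 1.0, -1.0],
              [0.0, 0.5,  0.5],
              [4.0, 4.0,  4.0],
              [1.0, 1.0,  1.0]])
x = P @ X
x = x[:2] / x[2]
x[:, 2] += 5.0                      # corrupt one track's observation
err = reprojection_errors(P, X, x)
keep = err < 1.0                    # threshold is illustrative
```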

slide-83
SLIDE 83

Experiments

3D Reconstruction from 3 Views (Cont'd)

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 43 / 45

slide-84
SLIDE 84

Experiments

Application for Structure from Motion

Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 44 / 45

slide-85
SLIDE 85

Q & A


Jianxiong Xiao et al. (HKUST) Learning Two-View Stereo Matching ECCV 2008 45 / 45