

SLIDE 1

Camera parameter estimation for image based modeling

Jaechul Kim

Demo presentation ‐ Visual recognition and search, Mar 21, 2008

SLIDE 2

Purpose

  • Introduce a basic procedure of camera parameter estimation from multiple images and its application to image-based modeling

SLIDE 3

Overview of general procedure

  • Step 1: Point matches and epipolar geometry estimation (i.e. fundamental matrix computation)

SLIDE 4

Overview of general procedure

  • Step 2: Estimation of camera parameters

– Focal length, camera position, viewing direction, etc.

SLIDE 5

Overview of general procedure

  • Step 3: 3D reconstruction & texture mapping

SLIDE 6

Step 1

Feature Points Matching & Epipolar Geometry Estimation

  • General procedure

– Find point correspondences between images
– From point correspondences, compute fundamental matrices (F-matrices) between images

  • Outliers in point correspondences are rejected during F-matrix computation using RANSAC
  • Output: F-matrix (i.e. projective reconstruction)

SLIDE 7

Step 1

Feature Points Matching

  • Three methods are tested in this demo

– Harris corner detector & window correlation + RANSAC
– SIFT detector & SIFT descriptor + RANSAC
– Manual matching

SLIDE 8

Step 1: Epipolar geometry

  • Projective geometry between two views

[Figure: two-view epipolar geometry — 3D point X, camera centers C1 and C2, image points x1 and x2, epipoles e1 and e2, epipolar lines l1 and l2]

SLIDE 9

Step 1: Fundamental matrix

  • Encodes the epipolar geometry between two views
  • Rank-2 matrix (det(F) = 0) that can be computed from at least 7 point correspondences:

x2ᵀ F x1 = 0

  • Defines the epipolar line for a given point x1 or x2:

l2 = F x1,  l1 = Fᵀ x2
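As a quick illustration of these relations: for a hypothetical camera pair P1 = [I|0], P2 = [R|t], the fundamental matrix is F = [t]ₓR, and an epipolar line is just F applied to a homogeneous point. The values below are chosen purely for the example.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_line(F, x1):
    """Epipolar line l2 = F x1 in the second image (homogeneous 3-vector)."""
    return F @ np.append(x1, 1.0)
```

A corresponding point x2 then satisfies x2ᵀ l2 = 0, which is exactly the x2ᵀ F x1 = 0 constraint above.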

SLIDE 10

Step 1: RANSAC (RANdom SAmple Consensus)

  • Robust estimation technique in the presence of outliers
  • Algorithm outline

– Given putative correspondences, sample 7 or 8 correspondences and compute the fundamental matrix
– Using the computed fundamental matrix, count the number of inliers
– If the number of inliers is a maximum among iterations, store the fundamental matrix and inliers
– Repeat the sampling
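The outline above can be sketched as follows. This is a minimal illustration using the unnormalized 8-point algorithm and the Sampson distance as the inlier test; all function names and thresholds are chosen for the example (a production version would normalize coordinates first, as in Hartley's normalized 8-point algorithm).

```python
import numpy as np

def eight_point(x1, x2):
    """Linear 8-point estimate of F from >= 8 correspondences (Nx2 arrays)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)   # enforce the rank-2 constraint det(F) = 0
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

def sampson_dist(F, x1, x2):
    """Sampson approximation to the geometric error for each match."""
    x1h = np.column_stack([x1, np.ones(len(x1))])
    x2h = np.column_stack([x2, np.ones(len(x2))])
    Fx1 = (F @ x1h.T).T           # epipolar lines l2 = F x1
    Ftx2 = (F.T @ x2h.T).T        # epipolar lines l1 = F^T x2
    num = np.sum(x2h * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def ransac_fmatrix(x1, x2, iters=500, thresh=1e-9, seed=None):
    """Sample 8 correspondences, fit F, keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_F, best_inliers = None, np.zeros(len(x1), bool)
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        inliers = sampson_dist(F, x1, x2) < thresh
        if inliers.sum() > best_inliers.sum():
            best_F, best_inliers = F, inliers
    return best_F, best_inliers
```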

SLIDE 11

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Harris corner detector

– Parameters used

  • Harris threshold, Mc, is 500
  • Kappa is set to 0.04
  • Gaussian smoothing with sigma 1 is applied to the image before corner detection
  • Window size (u,v) is 1
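A minimal sketch of the Harris response with the parameters above (kappa = 0.04, Gaussian pre-smoothing with sigma 1); the exact windowing and thresholding in the demo's Matlab code may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, kappa=0.04):
    """Harris corner response Mc = det(M) - kappa * trace(M)^2,
    where M is the Gaussian-smoothed structure tensor."""
    img = gaussian_filter(img.astype(float), sigma)  # pre-smoothing, as on the slide
    Iy, Ix = np.gradient(img)
    Sxx = gaussian_filter(Ix * Ix, sigma)            # structure tensor entries
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - kappa * trace ** 2
```

Corners are then the local maxima of this response above the Mc threshold.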

SLIDE 12

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Window correlation

– For a detected corner point (x,y) in image 1, search for the corner point (x',y') in image 2 with the minimum SSD error
– Parameters used

  • Correlation window size is 15
  • Search area in image 2 is set to 300 by 300 (1/4 of the image size), centered at (x,y)
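The window-correlation search can be sketched as below; the window and search sizes are parameters (15 and 300 on the slide), and the border handling here is an assumption, not the demo's exact behavior.

```python
import numpy as np

def ssd_match(img1, img2, corners1, corners2, win=7, search=20):
    """For each corner (y,x) in img1, find the corner in img2 with minimum
    sum-of-squared-differences over a (2*win+1)^2 window, restricted to a
    search area of +/- `search` pixels around the same location."""
    matches = []
    for (y, x) in corners1:
        p1 = img1[y - win:y + win + 1, x - win:x + win + 1]
        best, best_ssd = None, np.inf
        for (y2, x2) in corners2:
            if abs(y2 - y) > search or abs(x2 - x) > search:
                continue                      # outside the search area
            p2 = img2[y2 - win:y2 + win + 1, x2 - win:x2 + win + 1]
            if p2.shape != p1.shape:
                continue                      # window falls off the image border
            ssd = np.sum((p1.astype(float) - p2.astype(float)) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (y2, x2), ssd
        if best is not None:
            matches.append(((y, x), best))
    return matches
```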

SLIDE 13

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Harris corner detection

Initially detected corner points

SLIDE 14

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Window correlation + RANSAC

Putative matches (626); inliers after RANSAC (23, 4%)

SLIDE 15

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Examples of false matches

SLIDE 16

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Examples of false matches

SLIDE 17

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • More examples (Harris + RANSAC)

Initially detected corner points

SLIDE 18

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • More examples (Harris + RANSAC)

Putative matches (386); inliers after RANSAC (141, 37%)

SLIDE 19

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • More examples (Harris + RANSAC): Good result

SLIDE 20

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • More examples (Harris + RANSAC): Good result

SLIDE 21

Step 1

Feature point detection & matching

Harris corner + Window correlation + RANSAC

  • Harris + RANSAC - Conclusion

– Weak when matching two images with a large viewpoint change
– Confusion in repetitive textures
– Some image pairs have incorrect F-matrices
– Harris corner detection seems better suited to video-based camera parameter tracking, where the image change between consecutive frames is small

SLIDE 22

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC

– Parameters used

  • Sigma: 0.5
  • Number of octaves: 6
  • Number of levels per octave: 3
  • SIFT descriptor: 128 dimensions

– Putative matches are found using nearest neighbor between the SIFT descriptors
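Nearest-neighbor putative matching between descriptor sets can be sketched as below. The brute-force pairwise distance is for illustration (the slide's 128-D SIFT descriptors would use the same logic, but real implementations typically use a k-d tree and a distance-ratio test).

```python
import numpy as np

def nn_matches(desc1, desc2):
    """Putative matches by nearest neighbor: for each descriptor row in
    desc1, the index of the closest (Euclidean) descriptor row in desc2."""
    # Pairwise squared Euclidean distances, shape (len(desc1), len(desc2))
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)
    return list(enumerate(nn))
```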

SLIDE 23

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC

Initially detected SIFT feature points

SLIDE 24

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC

Putative matches (258); inliers after RANSAC (133, 52%)

SLIDE 25

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC: Good result

SLIDE 26

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC: Good result

SLIDE 27

Step 1

Feature point detection & matching

SIFT + RANSAC

  • Failure examples (SIFT + RANSAC)

SLIDE 28

Step 1

Feature point detection & matching

SIFT + RANSAC

  • Failure examples (SIFT + RANSAC)

SLIDE 29

Step 1

Feature point detection & matching

SIFT + RANSAC

  • SIFT + RANSAC – Conclusion

– More robust to viewpoint variation than the Harris corner
– In some cases, automatic matching using SIFT provides a reliable F-matrix
– But it still produces false matches in repetitively textured areas

  • For bag-of-features, this may not be a critical problem
  • But for F-matrix computation, the accurate location of matches is very important

SLIDE 30

Step 1

Feature point detection & matching

Conclusion

  • Automatic feature matching for F-matrix computation

– In practice, neither Harris + RANSAC nor SIFT + RANSAC provides persistently reliable results over many images taken under a wide range of imaging conditions
– But SIFT + RANSAC is more powerful

  • If many images with similar appearances are given, SIFT + RANSAC can provide reliable F-matrix estimation
  • Or some progressive approach, like the one used in the reading assignment paper, could fix the problem

– The higher the inlier rate is, the more reliable the match result is.

SLIDE 31

Step 1

Feature point detection & matching

Conclusion

  • Manual assignment of correspondences

– In all of my trials, automatic matches fail to provide a convergent estimation of camera parameters
– Therefore, all experiments on camera parameter estimation are performed on datasets with manually assigned correspondences

SLIDE 32

Step 2: Camera parameter estimation

  • The implemented method

– EXIF-information-based parameter initialization + parameter optimization using bundle adjustment

SLIDE 33

Why do we need camera parameters?

Projective ambiguity

  • Projective geometry - hierarchy of transformations

[Figure: camera hierarchy — general imaging → orthographic camera → fronto-parallel viewing camera → fully calibrated camera, with increasing focal length, increasing distance, and camera calibration]

  • Without calibration, the reconstruction is defined only up to a projective transformation H: the pair {P H⁻¹, H X} produces the same images as {P, X}

(From "Multiple View Geometry in Computer Vision", 1st ed., p. 59)

SLIDE 34

Step 2: Camera parameter estimation

  • Camera model: pin-hole projection + CCD model

[Figure: world point (X, Y, Z) transformed into the camera frame by rotation R and translation t, then projected through the intrinsic matrix K]

Homogeneous coordinates (linear, 3×4 projection matrix):

x = K [R | t] X

Non-homogeneous coordinates:

Xc = R Xw + t,  x = K Xc
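The two formulas combine into a small projection routine; the K, R, t values used in the check below are illustrative, not from the demo.

```python
import numpy as np

def project(K, R, t, Xw):
    """Pin-hole projection x = K [R|t] X for world points Xw (Nx3).
    Returns Nx2 pixel coordinates."""
    Xc = Xw @ R.T + t            # Xc = R Xw + t  (camera coordinates)
    x = Xc @ K.T                 # x = K Xc       (homogeneous image coordinates)
    return x[:, :2] / x[:, 2:]   # perspective divide by the third coordinate
```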

SLIDE 35

Step 2: Camera parameter estimation

  • Intrinsic parameters: CCD

K = | f   s   x0 |
    | 0   αf  y0 |
    | 0   0   1  |

(focal length f, aspect ratio α, skew s, principal point (x0, y0))

  • Extrinsic parameters: coordinate transformation

– R, t

SLIDE 36

Step 2: Camera parameter estimation

  • EXIF information

– Metadata stored in the image file by digital cameras
– Contains focal length, f-number, white balance, model name, maker name, etc.

SLIDE 37

Step 2: Camera parameter estimation

  • How to initialize a camera using EXIF

– Get the focal length f (mm) from the EXIF information

  • e.g. 10 mm

– Estimate the CCD size from the model name in the EXIF information

  • e.g. 20 mm by 20 mm for a Canon EOS 300D

– Convert the focal length from mm to pixels

  • e.g. for an image size of 1000 by 1000, 1 pixel = 20/1000 mm, so f = 10 mm = 10 / (20/1000) = 500 pixels

– For more accurate computation, we can account for the number of effective pixels

  • e.g. if a 10M-pixel digital camera has 8M effective pixels, the CCD size should be scaled by 8/10.
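The mm-to-pixel conversion worked out above is just a ratio; a one-line sketch (the 10 mm / 20 mm / 1000 px numbers are the slide's example values):

```python
def focal_mm_to_pixels(f_mm, ccd_width_mm, image_width_px):
    """Convert an EXIF focal length in mm to pixels: one pixel spans
    ccd_width_mm / image_width_px millimetres on the sensor."""
    mm_per_pixel = ccd_width_mm / image_width_px
    return f_mm / mm_per_pixel
```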

SLIDE 38

Step 2: Camera parameter estimation

  • Parameter optimization using bundle adjustment

– Initialize the internal parameters using EXIF information
– Initialize the external parameters using the F-matrix and the initialized internal parameters

  • Given F and the internal parameters, the camera motion can be computed via a linear equation.

– Minimize the re-projection errors using non-linear least squares optimization

SLIDE 39

Step 2: Bundle adjustment

  • Iterative non-linear least squares technique to fit the model to the measurements

– The Levenberg-Marquardt algorithm is generally used.

  • Variables: 3D reconstructed points + camera projection matrices
  • Measurements: 2D point correspondences
  • Error measure: re-projection error of the 3D reconstructed points against the 2D observed points

min Σᵢ Σⱼ d(Pⱼ Xᵢ, xᵢⱼ)²
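A toy version of this minimization, freeing only the camera translation so the example stays small (full bundle adjustment also varies the rotations, intrinsics, and 3D points, and uses the sparse structure of the Jacobian); scipy's `least_squares` with `method="lm"` runs Levenberg-Marquardt.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, K, Xw, observed):
    """Stacked 2D residuals between projected and observed points.
    Only the translation t is free here; R is fixed to the identity
    in this sketch."""
    t = params
    Xc = Xw + t                   # camera coordinates (R = I)
    x = Xc @ K.T
    proj = x[:, :2] / x[:, 2:]    # perspective divide
    return (proj - observed).ravel()
```

Calling `least_squares(reprojection_residuals, t0, args=(K, Xw, observed), method="lm")` then iterates Levenberg-Marquardt steps until the re-projection error stops decreasing.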

SLIDE 40

Step 2: Bundle adjustment

  • Speed-up by exploiting sparseness

(Jᵀ J + λI) Δ = −Jᵀ ε

[Figure: Jacobian sparsity pattern for cameras P1–P3 and points X1–X3 — each observation xᵢⱼ depends only on camera Pⱼ and point Xᵢ]

SLIDE 41

Step 2: Camera parameter estimation

  • Result 1 – Chateau cattle images (7 images)

SLIDE 42

Step 2: Camera parameter estimation

  • Result 2 – Triumphal Arch images (6 images)

SLIDE 43

Step 2: Camera parameter estimation

  • Result 3 – Projector images (12 images)

SLIDE 44

Step 2: Camera parameter estimation

  • Sensitivity to initialization (projector dataset)

[Plot: convergence region — fx from −10% to +4%, fy from −6% to +8%]

SLIDE 45

Step 2: Camera parameter estimation

  • Sensitivity to initialization (Chateau cattle dataset)

[Plot: convergence region — fx from −10% to +6%, fy from −6% to +12%]

SLIDE 46

Step 2: Camera parameter estimation

  • Sensitivity to initialization (Triumphal Arch dataset)

[Plot: convergence region — fx from −8% to +8%, fy from −6% to +8%]

SLIDE 47

Step 2: Camera parameter estimation

Conclusion

  • EXIF-based approach

– Provides a practical way to initialize camera parameters
– Initialization is very important for bundle adjustment, i.e. non-linear optimization

  • The cost function in bundle adjustment is non-linear and non-convex.
  • When the initial parameters are distorted, it no longer converges to the solution.

SLIDE 48

Conclusion

  • SIFT outperforms the Harris corner under large viewpoint changes
  • But automatic matching doesn't provide consistently reliable results in practice
  • Camera parameter estimation is a non-linear, non-convex problem

– Good initialization is very important.
– EXIF information is a practical way to initialize camera parameters.

SLIDE 49

References

  • Harris corner detector and RANSAC - Matlab

– http://www.csse.uwa.edu.au/~pk/research/matlabfns/

  • Epipolar geometry computation - Matlab

– http://www.robots.ox.ac.uk/~vgg/hzbook/code/

  • SIFT – Matlab

– http://vision.ucla.edu/~vedaldi/code/sift/sift.html

  • Linear algebra (GSL) - C++

– http://www.gnu.org/software/gsl/

  • Bundle adjustment - C++

– http://www.ics.forth.gr/~lourakis/sba/

  • EXIF parser – C++

– http://www.codeproject.com/KB/graphics/cexif.aspx

  • Multiple View Geometry in Computer Vision, 1st ed., Richard Hartley and Andrew Zisserman
  • The Chateau cattle dataset is obtained from the tutorial images used in the ImageModeler S/W by REALVIZ.