C18 Computer Vision – Lecture 5: Imaging geometry, camera calibration


SLIDE 1

C18 Computer Vision

Victor Adrian Prisacariu

http://www.robots.ox.ac.uk/~victor

Lecture 5

Imaging geometry, camera calibration

SLIDE 2

InfiniDense

DEMO

SLIDE 3

Course Content

  • Projective geometry, camera calibration.
  • Salient feature detection.
  • Recovering 3D from two images I: epipolar geometry.
  • Recovering 3D from two images II: stereo correspondences, triangulation, neural nets.

Slides at http://www.robots.ox.ac.uk/~victor -> Teaching

Lots borrowed from David Murray + AV C18.

SLIDE 4

Useful Texts

  • Multiple View Geometry in Computer Vision – Richard Hartley, Andrew Zisserman
  • Computer Vision: A Modern Approach – David Forsyth, Jean Ponce. Prentice Hall; ISBN 0130851981
  • 3-Dimensional Computer Vision: A Geometric Viewpoint – Olivier Faugeras
SLIDE 5

Computer Vision: This time…

5. Imaging geometry, camera calibration:
   1. Introduction.
   2. The perspective camera as a geometric device.
   3. Perspective using homogeneous coordinates.
   4. Calibration: the elements of the perspective model.
6. Salient feature detection and description.
7. Recovering 3D from two images I: epipolar geometry.
8. Recovering 3D from two images II: stereo correspondences, triangulation, neural nets.

SLIDE 6

5.1 Introduction

The aim in geometric computer vision is to take a number of 2D images and obtain an understanding of the 3D environment: what is in it, and how it evolves over time.

What do we have here …?

… seems very easy …

SLIDE 7

It isn’t …

SLIDE 8

Organizing the tricks …

Although human and (3D) computer vision might be bags of tricks, it is useful to place the tricks within larger processing paradigms. For example:
  a) Data-driven, bottom-up processing.
  b) Model-driven, top-down, generative processing.
  c) Dynamic vision (mixes bottom-up with top-down feedback).
  d) Active vision (task oriented).
  e) Data-driven discriminative approach (machine learning).
These are neither all-embracing nor exclusive.

SLIDE 9

(a) Data-driven, bottom-up processing

  • Image processing produces a map of salient 2D features.
  • The features are input into a range of shape-from-X processes whose output is the 2.5D sketch.
  • Only in the last stage do we get a fully 3D object-centered description.

SLIDE 10

(b) Model-driven, and (c) Dynamic vision

  • Model-driven, top-down, generative processing:
    – A model of the scene is assumed known.
    – Supply a pose for the object relative to the camera, and use projection to predict where salient features should be found in the image.
    – Search for the features, and refine the pose by minimizing the observed deviation.
  • Dynamic vision: mixes bottom-up/top-down by introducing feedback.

SLIDE 11

(d) Active Vision

  • Introduces task-oriented sensing–perception–action loops:
    – Visual data needs only be “good enough” to drive the particular action.
  • No need to build and maintain an overarching representation of the surroundings.
  • Computational resources are focused where they are needed.

SLIDE 12

(e) Data-driven approach

  • The aim is to learn a description of the transformation between input and output using exemplars.
  • Geometry is not forgotten, but implicitly learned representations are favored.

SLIDE 13

5.2 The perspective camera as a geometric device

SLIDE 14

This is (a picture of) my cat

Cat nose at image point x = (295, 308).

SLIDE 15

My cat lives in a 3D world

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

The point 𝐘 in world space projects to the point 𝐲 in image space

SLIDE 16

Going from Y in 3D to y in 2D

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

?

film/sensor, cat. The output would be blurry if the film were just exposed to the cat.

SLIDE 17

Going from Y in 3D to y in 2D

film/sensor, cat, barrier. Blur reduced, looks good ☺

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

?

SLIDE 18

Pinhole Camera

  • All rays pass through the center of projection (a single point, the pinhole).
  • The image forms on the image plane.

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

?

SLIDE 19

Pinhole Camera

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

g – focal length; p – principal point; the optical axis passes through the camera origin and p.

The 3D point Y = (Y1, Y2, Y3)^T is imaged into y = (y1, y2)^T as:

    y1 = g·Y1/Y3,    y2 = g·Y2/Y3

SLIDE 20

Homogeneous coordinates

  • The projection y = gY/Y3 is non-linear.
  • It can be made linear using homogeneous coordinates – this involves representing the image and scene in a higher-dimensional space.
  • Limiting cases – e.g. vanishing points – are handled better.
  • Homogeneous coordinates allow transformations to be concatenated more easily.

SLIDE 21

3D Euclidean transforms: inhomogeneous coordinates

  • My cat moves through 3D space.
  • The movement of the tip of the nose can be described using a Euclidean transform:

    Y′(3×1) = S(3×3) Y(3×1) + u(3×1)

    (S – rotation, u – translation)

SLIDE 22

3D Euclidean transforms: inhomogeneous coordinates

  • Euclidean transform: Y′(3×1) = S(3×3) Y(3×1) + u(3×1)
  • Concatenation of successive transforms is a mess!

    Y1 = S1·Y + u1
    Y2 = S2·Y1 + u2
    Y2 = S2·(S1·Y + u1) + u2 = (S2·S1)·Y + (S2·u1 + u2).
SLIDE 23

3D Euclidean transforms: homogeneous coordinates

  • We replace the 3D point (Y1, Y2, Y3)^T with the 4-vector (Y1, Y2, Y3, 1)^T.
  • The Euclidean transform becomes:

    [Y′]       [Y]   [S    u] [Y]
    [ 1]  = F  [1] = [0^T  1] [1]

  • Transformations can now be concatenated by matrix multiplication:

    [Y1; 1] = F10 [Y0; 1],  [Y2; 1] = F21 [Y1; 1]  →  [Y2; 1] = F21 F10 [Y0; 1]
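As a concrete check, the concatenation above can be reproduced numerically (a minimal sketch: numpy assumed, the rotation and translations are illustrative values, and make_F is a hypothetical helper):

```python
import numpy as np

def make_F(S, u):
    """Build the 4x4 homogeneous Euclidean transform [S u; 0^T 1]."""
    F = np.eye(4)
    F[:3, :3] = S
    F[:3, 3] = u
    return F

# Two successive transforms: Y1 = S1 Y + u1, Y2 = S2 Y1 + u2.
S1 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
u1 = np.array([1., 0., 0.])
S2 = np.eye(3)
u2 = np.array([0., 2., 0.])

F10, F21 = make_F(S1, u1), make_F(S2, u2)

Y = np.array([1., 1., 1.])
Yh = np.append(Y, 1.0)                  # homogenize

# Concatenation is now a single matrix product ...
Y2_homog = (F21 @ F10) @ Yh
# ... and agrees with the messy inhomogeneous form S2(S1 Y + u1) + u2.
Y2_inhomog = S2 @ (S1 @ Y + u1) + u2
assert np.allclose(Y2_homog[:3], Y2_inhomog)
```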

SLIDE 24

Homogeneous coordinates – definition in P3

  • The point Y = (Y1, Y2, Y3)^T is represented in homogeneous coordinates by any 4-vector (Y1, Y2, Y3, Y4)^T such that the inhomogeneous coordinates are (Y1/Y4, Y2/Y4, Y3/Y4)^T.
  • So the following homogeneous vectors represent the same point, for any μ ≠ 0: (Y1, Y2, Y3, Y4)^T and μ(Y1, Y2, Y3, Y4)^T.
  • E.g. (2, 3, 5, 1)^T is the same as (−3, −4.5, −7.5, −1.5)^T, and both represent the same inhomogeneous point (2, 3, 5)^T.

SLIDE 25

Homogeneous coordinates – definition in P2

  • The image point y = (y1, y2)^T is represented in homogeneous coordinates by any 3-vector (y1, y2, y3)^T whose inhomogeneous coordinates are (y1/y3, y2/y3)^T.
  • E.g. (1, 2, 3)^T is the same as (3, 6, 9)^T, and both represent the same inhomogeneous point (1/3, 2/3)^T ≈ (0.33, 0.67)^T.

SLIDE 26

Homogeneous notation – rules for use

  1. Convert the inhomogeneous point to a homogeneous vector:
     (Y1, Y2, Y3)^T → (Y1, Y2, Y3, 1)^T
  2. Apply a 4 × 4 transform.
  3. Dehomogenize the resulting vector:
     (Y1, Y2, Y3, Y4)^T → (Y1/Y4, Y2/Y4, Y3/Y4)^T
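The three rules can be sketched as follows (numpy assumed; the transform and point values are illustrative, and the helper names are hypothetical):

```python
import numpy as np

def to_homogeneous(Y):
    """Step 1: append a 1 to the inhomogeneous point."""
    return np.append(Y, 1.0)

def from_homogeneous(Yh):
    """Step 3: divide through by the last coordinate."""
    return Yh[:-1] / Yh[-1]

# Step 2: apply a 4x4 transform (here a translation by (1, 2, 3)).
T = np.eye(4)
T[:3, 3] = [1., 2., 3.]

Y = np.array([2., 3., 5.])
Y_out = from_homogeneous(T @ to_homogeneous(Y))
assert np.allclose(Y_out, [3., 5., 8.])

# Homogeneous vectors are defined up to scale: mu * Yh is the same point.
assert np.allclose(from_homogeneous(-1.5 * to_homogeneous(Y)), Y)
```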

SLIDE 27

Projective transformations

  • A projective transformation is a linear transformation on

homogeneous 4-vectors represented by a non-singular 4x4 matr trix ix. 𝑌′1 𝑌′2 𝑌′3 𝑌′4 = 𝑞11 𝑞12 𝑞13 𝑞14 𝑞21 𝑞22 𝑞23 𝑞24 𝑞31 𝑞32 𝑞33 𝑞34 𝑞41 𝑞42 𝑞43 𝑞44 𝑌1 𝑌2 𝑌3 𝑌4

  • The effect on the homogenous points is that the original and

transformed points are linked through a projection center.

  • The 4x4 matrix is defined up to scale, and so has 15 degrees
  • f freedom.
SLIDE 28

More 3D-3D and 2D-2D Transforms

3D–3D:
  Projective (15 dof):  (Y′1, Y′2, Y′3, Y′4)^T = Q(4×4) (Y1, Y2, Y3, Y4)^T
  Affine (12 dof):      [Y′; 1] = [B(3×3)  u3; 0^T 1] [Y; 1]
  Similarity (7 dof):   [Y′; 1] = [sS(3×3) u3; 0^T 1] [Y; 1]
  Euclidean (6 dof):    [Y′; 1] = [S(3×3)  u3; 0^T 1] [Y; 1]

2D–2D:
  Projective, aka homography (8 dof): (y′1, y′2, y′3)^T = H(3×3) (y1, y2, y3)^T
  Affine (6 dof):       [y′; 1] = [B(2×2)  u2; 0^T 1] [y; 1]
  Similarity (4 dof):   [y′; 1] = [sS(2×2) u2; 0^T 1] [y; 1]
  Euclidean (3 dof):    [y′; 1] = [S(2×2)  u2; 0^T 1] [y; 1]

SLIDE 29

2D-2D Transform Examples

Euclidean (3 DoF):
    [cos θ   −sin θ   u1]
    [sin θ    cos θ   u2]
    [0        0       1 ]

Similarity (4 DoF):
    [s·cos θ  −s·sin θ  u1]
    [s·sin θ   s·cos θ  u2]
    [0         0        1 ]

Affine (6 DoF):
    [b11  b12  u1]
    [b21  b22  u2]
    [0    0    1 ]

Projective (8 DoF):
    [h11  h12  h13]
    [h21  h22  h23]
    [h31  h32  h33]

SLIDE 30

Perspective 3D-2D Transforms

  • Similar to a 3D-3D projective transform, but constrain the transformed point to the plane z = g:

    Y_image = (y1, y2, g, 1)^T

  • Because z = g is fixed, we can write:

      [y1]   [q11    q12    q13    q14  ] [Y1]
    μ [y2] = [q21    q22    q23    q24  ] [Y2]
      [g ]   [g·q31  g·q32  g·q33  g·q34] [Y3]
      [1 ]   [q31    q32    q33    q34  ] [1 ]

  • The 3rd row is redundant, so:

      [y1]   [q11  q12  q13  q14] [Y1]
    μ [y2] = [q21  q22  q23  q24] [Y2]  =  Q(3×4) (Y1, Y2, Y3, 1)^T
      [1 ]   [q31  q32  q33  q34] [Y3]
                                  [1 ]

    Q(3×4) is the projection matrix, and this is a perspective transform.

SLIDE 31

5.3 Perspective using homogeneous coordinates

Y = (Y1, Y2, Y3)^T,  y = (y1, y2)^T

    y1 = g·Y1/Y3,    y2 = g·Y2/Y3

In homogeneous coordinates:

      [y1]   [g  0  0  0] [Y1]
    μ [y2] = [0  g  0  0] [Y2]
      [1 ]   [0  0  1  0] [Y3]
                          [1 ]

    →  μ·y1 = g·Y1,  μ·y2 = g·Y2,  μ = Y3  →  y1 = g·Y1/Y3,  y2 = g·Y2/Y3
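A minimal numerical sketch of this homogeneous projection (numpy assumed; the focal length g and the world point are illustrative values):

```python
import numpy as np

g = 2.0  # focal length (assumed value for illustration)

# Vanilla perspective projection: mu*(y1, y2, 1)^T = P (Y1, Y2, Y3, 1)^T
P = np.array([[g, 0., 0., 0.],
              [0., g, 0., 0.],
              [0., 0., 1., 0.]])

Y = np.array([1., 2., 4., 1.])        # homogeneous world point, Y3 = 4
mu_y = P @ Y                          # = (g*Y1, g*Y2, Y3)
y = mu_y[:2] / mu_y[2]                # dehomogenize: divide by mu = Y3

# Matches the inhomogeneous form y_i = g * Y_i / Y3.
assert np.allclose(y, [g * 1. / 4., g * 2. / 4.])
```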

SLIDE 32

Perspective using homogeneous coordinates

      [y1]   [g  0  0  0] [Y1]
    μ [y2] = [0  g  0  0] [Y2]
      [1 ]   [0  0  1  0] [Y3]
                          [1 ]

    image point = projection matrix × world point

SLIDE 33

Perspective using homogeneous coordinates

  • It is useful to split the overall projection matrix into three parts:
    1. a part that depends on the internals of the camera (the intrinsic calibration);
    2. a vanilla projection matrix;
    3. a Euclidean transformation between the world and camera frames (the extrinsic calibration).
  • We first assume the scene and world are aligned with the camera coordinates, so that the extrinsic camera matrix is the identity, and get:

      [y1]   [g  0  0] [1  0  0  0]
    μ [y2] = [0  g  0] [0  1  0  0] I(4×4) [Y; 1]
      [1 ]   [0  0  1] [0  0  1  0]

    image point = camera’s intrinsic calibration × projection matrix (vanilla) × camera’s extrinsic calibration × world point

SLIDE 34

Perspective using homogeneous coordinates

  • Now let’s make things more general:
    – Insert a rotation S and translation u between the world and camera coordinates.
    – Insert some extra terms in the intrinsic calibration matrix.

      [y1]   [g  s·g  vp] [1  0  0  0] [s11  s12  s13  u1]
    μ [y2] = [0  δ·g  wp] [0  1  0  0] [s21  s22  s23  u2] [Y; 1]
      [1 ]   [0  0    1 ] [0  0  1  0] [s31  s32  s33  u3]
                                       [0    0    0    1 ]

    image point = camera’s intrinsic calibration × projection matrix (vanilla) × camera’s extrinsic calibration × world point

SLIDE 35

The camera pose (extrinsic parameters)

The camera’s extrinsic calibration is just the rotation S and translation u that take points from the world frame to the camera frame:

    [Y_C; 1] = [S  u; 0^T  1] [Y_W; 1]

SLIDE 36

Building 𝑺

  • S captures rotation and can be built from various types of rotation representations (Euler angles, quaternions, etc.).
  • Euler angles capture the rotation about each axis using 3 parameters, one per axis:

    Y′  = S_x Y_W,   S_x = [1  0  0;  0  cos θx  sin θx;  0  −sin θx  cos θx]
    Y″  = S_z Y′,    S_z = [cos θz  −sin θz  0;  sin θz  cos θz  0;  0  0  1]
    Y_C = S_y Y″,    S_y = [cos θy  0  ∓sin θy;  0  1  0;  ±sin θy  0  cos θy]

    S_CW = S_y S_z S_x.   Order matters!
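The “order matters” point can be checked directly (numpy assumed; the angles are illustrative values, and S_x/S_z are hypothetical helper names for the axis rotations above):

```python
import numpy as np

def S_x(t):
    """Rotation about the x axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1., 0., 0.], [0., c, s], [0., -s, c]])

def S_z(t):
    """Rotation about the z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

tx, tz = 0.3, 0.5
A = S_z(tz) @ S_x(tx)   # rotate about x first, then z
B = S_x(tx) @ S_z(tz)   # rotate about z first, then x

assert not np.allclose(A, B)             # order matters!
assert np.allclose(A @ A.T, np.eye(3))   # still a valid rotation
```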

SLIDE 37

Building 𝐮

SLIDE 38

Inverting the transform

    [S_CW  u_CW]⁻¹   [S_WC  u_WC]
    [0^T   1   ]   = [0^T   1   ]

For rotation:

    S_WC = S_CW⁻¹ = S_CW^T

For translation:

    u_WC = −S_WC u_CW
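A quick numerical check of the inversion formulas (numpy assumed; the example pose is an illustrative value):

```python
import numpy as np

# A sample camera-from-world rotation (about z) and translation.
S_cw = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
u_cw = np.array([1., 2., 3.])

F_cw = np.eye(4)
F_cw[:3, :3], F_cw[:3, 3] = S_cw, u_cw

# Closed-form inverse: S_wc = S_cw^T, u_wc = -S_wc @ u_cw.
S_wc = S_cw.T
u_wc = -S_wc @ u_cw
F_wc = np.eye(4)
F_wc[:3, :3], F_wc[:3, 3] = S_wc, u_wc

assert np.allclose(F_wc, np.linalg.inv(F_cw))
```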

SLIDE 39

The intrinsic calibration parameters

These describe hardware properties of real cameras:
  – The image plane might be skewed.
  – The central axis of the lens might not line up with the optical axis.
  – The light-gathering elements might not be square.
  – Lens distortion.

    L = [g  s·g  vp]
        [0  δ·g  wp]
        [0  0    1 ]

  • Different scaling in y1 and y2: δ is the aspect ratio.
  • Origin offset: (vp, wp) is the principal point.
  • s accounts for skew.
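A sketch of building L and applying it to an ideal image point (numpy assumed; all parameter values are illustrative, and intrinsic_matrix is a hypothetical helper):

```python
import numpy as np

def intrinsic_matrix(g, delta=1.0, s=0.0, vp=0.0, wp=0.0):
    """Intrinsic calibration L = [[g, s*g, vp], [0, delta*g, wp], [0, 0, 1]]."""
    return np.array([[g, s * g, vp],
                     [0., delta * g, wp],
                     [0., 0., 1.]])

# Map an ideal image point to a real pixel position.
L = intrinsic_matrix(g=500., delta=1.1, s=0.0, vp=320., wp=240.)
y_ideal = np.array([0.2, 0.1, 1.0])   # homogeneous ideal image point
y = L @ y_ideal
y = y[:2] / y[2]
assert np.allclose(y, [500. * 0.2 + 320., 1.1 * 500. * 0.1 + 240.])
```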

SLIDE 40

Summary of steps from Scene to Image

1. Move the scene point (Y_W, 1)^T into camera coordinates by the 4 × 4 extrinsic Euclidean transformation:

    [Y_C; 1] = [S  u; 0^T  1] [Y_W; 1]

2. Project into an ideal camera via a vanilla perspective transformation:

    μ [y′; 1] = [I(3×3) | 0] [Y_C; 1]

3. Map the ideal image into the real image using the intrinsic matrix:

    [y; 1] = L [y′; 1]
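The three steps above can be composed and cross-checked against the single overall matrix Q = L [S | u] (numpy assumed; all pose and intrinsic values are illustrative):

```python
import numpy as np

# Extrinsics (assumed example values): rotation about z plus translation.
S = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
u = np.array([0., 0., 5.])
E = np.hstack([S, u[:, None]])           # 3x4 [S | u]

# Intrinsics: focal length g, aspect delta, principal point (vp, wp).
g, delta, vp, wp = 400., 1.0, 320., 240.
L = np.array([[g, 0., vp], [0., delta * g, wp], [0., 0., 1.]])

Q = L @ E                                # overall 3x4 projection matrix

Yw = np.array([1., 2., 3., 1.])          # homogeneous scene point
mu_y = Q @ Yw
y = mu_y[:2] / mu_y[2]                   # pixel coordinates

# Cross-check against doing the three steps one at a time.
Yc = S @ Yw[:3] + u                           # step 1: world -> camera
y_ideal = Yc[:2] / Yc[2]                      # step 2: vanilla projection
y_steps = (L @ np.append(y_ideal, 1.))[:2]    # step 3: intrinsics
assert np.allclose(y, y_steps)
```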

SLIDE 41

5.4 Camera Calibration

  • The process that finds L, and accounts for the internal physical characteristics of the camera.
  • (Usually) done once per camera.
  • There are a variety of methods for self-calibration, auto-calibration or pre-calibration.
  • We will gloss over pre-calibration, using a specially made “known” visual scene.

SLIDE 42

What is camera calibration?

      [y1]   [g  s·g  vp] [1  0  0  0] [s11  s12  s13  u1]
    μ [y2] = [0  δ·g  wp] [0  1  0  0] [s21  s22  s23  u2] [Y; 1]
      [1 ]   [0  0    1 ] [0  0  1  0] [s31  s32  s33  u3]
                                       [0    0    0    1 ]

i.e.  μ y = L [I(3×3) | 0] [S  u; 0^T  1] [Y; 1],  so

    Q(3×4) = L [S | u]

Camera calibration: recover L.

SLIDE 43

Camera Calibration – Math Part

  1. Recover the overall projection matrix Q(3×4):
     – assume a target with at least 6 known scene points;
     – build and solve a system of (at least) 12 equations.
  2. Construct Q_LEFT = L·S from the leftmost 3×3 block of Q = L[S | u].
  3. Invert Q_LEFT, so Q_LEFT⁻¹ = S⁻¹L⁻¹.
  4. Decompose Q_LEFT⁻¹ using QR decomposition into S and L.
  5. Normalise L (as the scale of Q was unknown).
  6. Recover u = L⁻¹ (q14, q24, q34)^T.

For each known point j, the projection

    μj (y1j, y2j, 1)^T = Q (Y1j, Y2j, Y3j, 1)^T

gives

    μj  = q31·Y1j + q32·Y2j + q33·Y3j + q34
    y1j = (q11·Y1j + q12·Y2j + q13·Y3j + q14) / (q31·Y1j + q32·Y2j + q33·Y3j + q34)
    y2j = (q21·Y1j + q22·Y2j + q23·Y3j + q24) / (q31·Y1j + q32·Y2j + q33·Y3j + q34)

i.e. two linear equations per point:

    [Y1j  Y2j  Y3j  1   0    0    0    0   −Y1j·y1j  −Y2j·y1j  −Y3j·y1j  −y1j] q = 0
    [0    0    0    0   Y1j  Y2j  Y3j  1   −Y1j·y2j  −Y2j·y2j  −Y3j·y2j  −y2j]

where q contains the unknowns qij.
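A sketch of steps 1–5 on synthetic data (numpy assumed; the target points and ground-truth camera are illustrative values, and QR sign conventions mean L is recovered only up to per-column signs here):

```python
import numpy as np

# Ground-truth camera (illustrative values): Q = L [S | u].
g, vp, wp = 400., 320., 240.
L_true = np.array([[g, 0., vp], [0., g, wp], [0., 0., 1.]])
S_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
u_true = np.array([0.5, -0.2, 5.0])
Q_true = L_true @ np.hstack([S_true, u_true[:, None]])

# Known, non-coplanar scene points on the calibration target.
Ys = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
               [2, 1, 2], [1, 2, 4], [2, 0, 1], [0, 2, 2]], float)

# Step 1: each point gives two rows of the linear system A q = 0.
rows = []
for Y in Ys:
    Yh = np.append(Y, 1.)
    w = Q_true @ Yh
    y1, y2 = w[0] / w[2], w[1] / w[2]      # observed image point
    rows.append(np.concatenate([Yh, np.zeros(4), -y1 * Yh]))
    rows.append(np.concatenate([np.zeros(4), Yh, -y2 * Yh]))
A = np.array(rows)

# q is the null vector of A: the right singular vector belonging to
# the smallest singular value.
q = np.linalg.svd(A)[2][-1]
Q = q.reshape(3, 4)

# Steps 2-4: invert the leftmost 3x3 block and QR-decompose it into
# an orthogonal factor (S^-1) times an upper-triangular factor (L^-1).
S_inv, L_inv = np.linalg.qr(np.linalg.inv(Q[:, :3]))
L = np.linalg.inv(L_inv)

# Step 5: normalise L (the scale of Q was unknown; signs are only
# fixed up to QR's convention, hence the abs comparison).
L = L / L[2, 2]
assert np.allclose(abs(L), abs(L_true), atol=1e-5)
```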

SLIDE 44

Camera Calibration – Math Part

Step 1 recovers

    Q = [q11  q12  q13  q14]
        [q21  q22  q23  q24]
        [q31  q32  q33  q34]

and Q_LEFT is its leftmost 3×3 block.

SLIDE 45

Camera Calibration – Math Part

Steps 3–4:

    Q_LEFT⁻¹ = S⁻¹ L⁻¹

QR decomposition factors Q_LEFT⁻¹ into an orthogonal matrix (S⁻¹) times an upper-triangular matrix (L⁻¹).
SLIDE 46

Camera Calibration – Math Part

Step 6 recovers the translation:

    u = L⁻¹ (q14, q24, q34)^T

SLIDE 47

Camera Calibration – Example Algorithm

SLIDE 48

Can be done without point matches …

SLIDE 49

Radial Distortion

  • So far, we have figured out the transformations that turn our camera into a notional camera with the world and camera coordinates aligned and an “ideal” image plane.
  • One often has to correct for other optical distortions and aberrations. Radial distortion is the most common – see the Q sheet.
  • Correction for this distortion is applied before carrying out calibration.
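The slides defer the distortion model to the Q sheet; as an assumption, here is the common first-order radial model y_d = c + (y − c)(1 + k1·r²) (numpy assumed; radial_distort is a hypothetical helper and all values are illustrative):

```python
import numpy as np

def radial_distort(y, k1, center):
    """First-order radial distortion (a common model; the lecture defers
    details to the question sheet): y_d = c + (y - c) * (1 + k1 * r^2)."""
    d = np.asarray(y) - center
    r2 = np.dot(d, d)
    return center + d * (1. + k1 * r2)

center = np.array([0., 0.])
y = np.array([0.5, 0.5])
y_d = radial_distort(y, k1=-0.1, center=center)  # barrel distortion pulls inward
assert np.linalg.norm(y_d - center) < np.linalg.norm(y - center)
assert np.allclose(radial_distort(y, 0.0, center), y)  # k1 = 0: no distortion
```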
SLIDE 50

Practical Camera Calibration

Matlab Calibration Toolkit

http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html

SLIDE 51

Summary of Lecture 5

In this lecture we have:

  • Introduced the aims of geometric computer vision, and some paradigms.
  • Explored linear transformations and introduced homogeneous coordinates.
  • Defined perspective projection from a scene, and saw that it could be made linear using homogeneous coordinates.
  • Discussed how to pre-calibrate a camera using an image of six or more known scene points.