SLIDE 1

3D Camera Calibration

Nichola Abdo and André Borgeat March 5th 2010


SLIDE 2

Motivation

3D Imaging

  • Acquisition of 3D information is important for many computer vision and robotics applications.
  • Stereo vision cameras combine several images to obtain depth measurements. This is computationally demanding and prone to errors.
  • Laser scanners are expensive and require a mechanism for scanning a laser beam to cover the entire scene.


SLIDE 3

The Photonic Mixer Device (PMD)

PMD Cameras

[Figure: PMD camera. Source: Ringbeck and Hagebeuker]

  • PMD cameras operate by time of flight (TOF), providing both intensity and distance measurements.
  • Distance is related to the phase shift between the reference and received signals.
  • Measurements are obtained by all pixels simultaneously (no need for scanning), resulting in high frame rates.

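As an illustration of the TOF principle, a minimal Python sketch of the phase-to-distance conversion; the 20 MHz modulation frequency is an assumption (a common value for PMD cameras), not a figure from the slides:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(phase_shift, f_mod=20e6):
    """Convert the phase shift between the reference and received
    signal into a distance: d = c * dphi / (4 * pi * f_mod)."""
    return C * phase_shift / (4.0 * np.pi * f_mod)

# The unambiguous range is c / (2 * f_mod), about 7.5 m at 20 MHz.
print(distance_from_phase(np.pi / 2))  # ~1.87 m
```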

SLIDE 4

Sources of Erroneous Depth Measurements

Numerous sources affect the measurements:

  • Distance: The distance calculation assumes a perfect sinusoidal modulation, which in practice is not given, leading to a distance-dependent oscillating error.
  • Location of the pixel in the image: The individual sensors need time to propagate the signal, depending on their location in the sensor array.
  • Intensity: The brighter the image, the more it is shifted towards the camera.
  • Other sources: Temperature, shutter time, multiple reflections, ambient light, edges, over-/undersaturation.


SLIDE 5

Related Work

  • Lindner and Kolb: B-spline approximation for the distance-related error and the intensity-related error (2006, 2007).
  • Kahlmann et al.: accounted for the distance- and shutter-time-related errors using a look-up-table approach (2006).
  • Fuchs and May: modeled the distance-related error and the pixel-location-related error as polynomials (2007). Calibration also involved computing the transformation between the camera and the end effector.


SLIDE 6

Depth Calibration Setup

[Figure: PMD-[vision] O3 camera attached to the robotic arm]

  • PMD camera attached to the end effector of a robotic arm.
  • The arm is moved to different configurations, and images of a wall are taken from different view angles.
  • The pose of the robot is accurately given by the robot control.
  • A laser range finder provides the location of the wall in the world coordinate system.


SLIDE 7

3D Projection of the Depth Images

3D coordinate $x^i_v$ of a pixel $v = (r, c)$ of the $i$-th image:

$$x^i_v = {}^{w}T_t \, {}^{t}T_s \, A(v,\, D^i_v - E^i_v)$$

  • $D^i_v$: distance measurement
  • $E^i_v$: error in the distance measurement
  • $A$: projection (accounts for lens distortion etc.)
  • ${}^{t}T_s$: sensor-to-TCP transformation (sensor origin relative to the robot arm)
  • ${}^{w}T_t$: end-effector pose (location of the robot arm)

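A minimal sketch of this projection chain, assuming a simple pinhole model as a stand-in for $A$ (the real $A$ also handles lens distortion) and made-up intrinsics; the transforms are plain 4×4 homogeneous matrices:

```python
import numpy as np

def backproject(v, dist, fx=100.0, fy=100.0, cx=32.0, cy=25.0):
    """Hypothetical stand-in for A: scale the pixel's viewing ray so
    its length equals the (corrected) radial TOF distance."""
    r, c = v
    ray = np.array([(c - cx) / fx, (r - cy) / fy, 1.0])
    return dist * ray / np.linalg.norm(ray)

def project_pixel(v, D, E, T_w_t, T_t_s, A=backproject):
    """x^i_v = wTt * tTs * A(v, D^i_v - E^i_v), with T_w_t the
    end-effector pose and T_t_s the sensor-to-TCP transformation."""
    p = np.append(A(v, D - E), 1.0)   # sensor-frame point, homogeneous
    return (T_w_t @ T_t_s @ p)[:3]    # world-frame point

# Example with identity transforms:
# x = project_pixel((10, 20), 2.0, 0.05, np.eye(4), np.eye(4))
```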


SLIDE 13

Modeling the Depth Error

Baseline: Fuchs and May (2007)

  • Used as the basis for our work
  • Similar setup (camera attached to a robotic arm)
  • Explicit representation of the different error sources
  • Error model consisting of:
      • Distance-related error $D$, modeled as a polynomial: $D(D^i_v) = \sum_{k=0}^{m} d_k \, [D^i_v]^k$
      • Pixel-location-related error $P$, modeled as a function linear in row and column: $P_1(v) = p_0 + p_1 r + p_2 c$

⇒ Baseline error model: $E_0 = D(D^i_v) + P_1(v)$
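A direct transcription of the baseline model as a sketch; the coefficient values $d_k$ and $p_0, p_1, p_2$ would come from the calibration step:

```python
def D_error(dist, d_coeffs):
    """Distance-related error: polynomial sum_k d_k * dist**k."""
    return sum(dk * dist**k for k, dk in enumerate(d_coeffs))

def P1_error(v, p):
    """Pixel-location-related error, linear in row and column."""
    r, c = v
    return p[0] + p[1] * r + p[2] * c

def E0(v, dist, d_coeffs, p):
    """Baseline error model E_0 = D(D^i_v) + P_1(v)."""
    return D_error(dist, d_coeffs) + P1_error(v, p)
```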

SLIDE 14

Modeling the Depth Error

Extensions (I): Pixel-Location-Related Error

1. The linear function for the pixel-location-related error does not seem to fit the data.

[Plots: error (in cm) vs. row; error (in cm) vs. column]

⇒ Different error term: $P_2(v) = p_0 + p_1 (r - r_0)^2 + p_2 (c - c_0)^2$
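A sketch of the quadratic term; the slide does not say whether $(r_0, c_0)$ is fitted or fixed (e.g. to the image center), so it is left as a parameter here:

```python
def P2_error(v, p, r0, c0):
    """Quadratic pixel-location error:
    P_2(v) = p0 + p1 * (r - r0)**2 + p2 * (c - c0)**2."""
    r, c = v
    return p[0] + p[1] * (r - r0)**2 + p[2] * (c - c0)**2
```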

SLIDE 15

Modeling the Depth Error

Extensions (II): Intensity-Related Error

2. The intensity-related error is visible to the naked eye, and the data is available. Two candidates:

  • Plain intensities: $I_1(I^i_v) = i_0 + i_1 [I^i_v] + i_2 [I^i_v]^2$
  • Normalized intensities: $N^i_v = i_n \cdot I^i_v \cdot [D^i_v]^2$, with $I_2(I^i_v, D^i_v) = i_0 + i_1 [N^i_v] + i_2 [N^i_v]^2$

⇒ Two extended error models:

  • $E_A = D(D^i_v) + P_2(v) + I_1(I^i_v)$
  • $E_B = D(D^i_v) + P_2(v) + I_2(I^i_v, D^i_v)$
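Sketches of the two candidates, mirroring the formulas above; `D_error` and `P2_error` refer to the earlier sketches, and the coefficient names are illustrative:

```python
def I1_error(I, i_coeffs):
    """Plain-intensity error: quadratic in the raw intensity I."""
    i0, i1, i2 = i_coeffs
    return i0 + i1 * I + i2 * I**2

def I2_error(I, dist, i_coeffs, i_n=1.0):
    """Normalized-intensity error: N = i_n * I * dist**2, quadratic in N."""
    N = i_n * I * dist**2
    i0, i1, i2 = i_coeffs
    return i0 + i1 * N + i2 * N**2

# Extended models:
# E_A = D_error(dist, dc) + P2_error(v, p, r0, c0) + I1_error(I, ic)
# E_B = D_error(dist, dc) + P2_error(v, p, r0, c0) + I2_error(I, dist, ic)
```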

SLIDE 16

Calibration

  • Find a parameterization $a^\star$ of $E$ and ${}^{t}T_s$ that minimizes the error.
  • Assuming a fixed planar wall with known location $(n, d)$, the error of the distance measurement of a pixel is given by $f^i_v(a) = n^T x^i_v + d$ ($x^i_v$: 3D projection of the pixel).
  • Using a number of images $i$ taken from different locations, the calibration task can be formulated as $a^\star = \operatorname{argmin}_a \sum_i \sum_v f^i_v(a)^2$.
  • This can be solved using techniques for nonlinear least-squares estimation (e.g. Levenberg-Marquardt).
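A sketch of this optimization using SciPy's Levenberg-Marquardt solver; the `project` callable (pixel, raw depth, parameters → world point $x^i_v$) is a placeholder for the full projection with the error model $E(a)$ subtracted from the raw depth:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(a, pixels, depths, n, d_plane, project):
    """Stacked plane residuals f^i_v(a) = n^T x^i_v + d over all
    pixels of all images; `project` applies wTt * tTs * A(v, D - E(a))."""
    return np.array([n @ project(v, D, a) + d_plane
                     for v, D in zip(pixels, depths)])

# a0 = initial guess for the error-model and sensor-to-TCP parameters
# a_star = least_squares(residuals, a0, method="lm",
#                        args=(pixels, depths, n, d_plane, project)).x
```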

SLIDE 17

Experiments

Setup

  • 42 images from different locations:
      • 20 images of the plain white wall
      • 22 images of a checkerboard pattern
  • Training and testing done using 6-fold cross-validation

Results

  • All techniques significantly reduced the error.
  • Both of our techniques outperformed the baseline.
  • No significant difference between our two candidates.

Average error in millimeters:

           Uncorrected    E_0      E_A      E_B
  Mean        26.54      15.14    10.90    11.68
  SE          ±6.48      ±4.26    ±4.13    ±3.44
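For completeness, a minimal sketch of a 6-fold split over the 42 images; the actual fold assignment used in the experiments is not given on the slides:

```python
import numpy as np

def six_fold_splits(n_images=42, seed=0):
    """Yield (train, test) index arrays for 6-fold cross-validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_images), 6)
    for k, test in enumerate(folds):
        train = np.hstack(folds[:k] + folds[k + 1:])
        yield train, test

# for train, test in six_fold_splits():
#     calibrate on `train`, evaluate mean plane error on `test`
```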

SLIDE 18

Qualitative Results

Sensor-to-Tool-Center-Point Transformation

[Figures: uncorrected plane; plane after applying the sensor-to-TCP transformation]


SLIDE 19

Qualitative Results

Intensity Correction

[Figures: error without intensity correction; error with intensity correction]


SLIDE 20

Conclusions

  • Our extended model outperforms the baseline model.
  • Accounting for the intensity-related error clearly improves the accuracy of the distance measurements.
  • There are other approaches to the intensity-related error that we did not have time to compare against (e.g. Lindner and Kolb (2007)).
  • Room for improvement:
      • Unexplained high variance in the individual results and some strange measurements.
      • There are probably other important error sources not accounted for, but one has to be cautious when extending the model, as a larger number of parameters could lead to overfitting.


SLIDE 21

Questions
