SLIDE 1

Wide RGB-D for Scaled Layout Reconstruction

Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero. Universidad de Zaragoza, Spain

International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping, September 28, 2017, at IROS 2017, Vancouver

SLIDE 2

RGB-D cameras provide valuable information, but with a limited field of view (FOV)

SLIDE 3

Fisheye cameras are able to view the whole scene, but lack depth information

SLIDE 4

Our proposal: Use both

* Hybrid camera system
  * The depth camera provides 3D certainty and scale
  * The fisheye camera covers a 180º field of view

SLIDE 5

How? With layout reconstruction

* Presented at ECCV 2016:
  * A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, "Peripheral Expansion of Depth Information via Layout Estimation"

SLIDE 6
SLIDE 7
SLIDE 8
SLIDE 9
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
SLIDE 15
SLIDE 16
SLIDE 17

* Watch video at: https://youtu.be/nQYvhAhvv6U

SLIDE 18

Outline of the method

SLIDE 19

Outline of the method

SLIDE 20

Calibration problem

* Fisheye calibration has to be performed separately to model distortion properly

SLIDE 21

Calibration

* New method that combines:
  * RGB-to-depth calibration [1]
  * Omnidirectional camera models [2]

[1] C. Herrera et al., "Joint depth and color camera calibration with distortion correction", PAMI 2012
[2] D. Scaramuzza et al., "A toolbox for easily calibrating omnidirectional cameras", IROS 2006
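As a rough, illustrative sketch of what the combined calibration has to model (not the authors' implementation), the Python snippet below projects a 3D point from the depth camera into the fisheye image: the extrinsics (R, t) move the point into the fisheye frame, and a Scaramuzza-style polynomial maps the incidence angle to a radial distance on the sensor. All function names, parameters and values are assumed placeholders.

import numpy as np

def project_to_fisheye(X_depth, R, t, poly, center, affine=np.eye(2)):
    # Project a 3D point given in the depth-camera frame to fisheye pixel coordinates.
    # poly: polynomial coefficients of rho(theta), highest degree first (illustrative)
    # center: distortion center (u0, v0); affine: 2x2 stretch/skew matrix of the model
    X = R @ X_depth + t                         # depth frame -> fisheye frame
    r_xy = np.linalg.norm(X[:2])
    theta = np.arctan2(r_xy, X[2])              # angle from the optical axis
    rho = np.polyval(poly, theta)               # radial distance on the image plane
    direction = X[:2] / (r_xy + 1e-12)
    u, v = affine @ (rho * direction) + np.asarray(center)
    return u, v

# The joint calibration would then minimize, over (R, t) and both cameras' intrinsics,
# the reprojection error of calibration-pattern points observed by both cameras.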

SLIDE 22

Calibration

• A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, "A novel hybrid camera system with depth and fisheye cameras". International Conference on Pattern Recognition (2016)

SLIDE 23

Outline of the method

SLIDE 24

Line extraction

To avoid rectifying the image, we use a method that extracts lines directly from omnidirectional images with revolution symmetry.

• J. Bermudez-Cameo, G. Lopez-Nicolas, J.J. Guerrero, "Automatic Line Extraction in Uncalibrated Omnidirectional Cameras with Revolution Symmetry". International Journal of Computer Vision (2015)

SLIDE 25

Extraction of the VPs

Manhattan environments are assumed. We extract the 3 VPs in a two-stage optimization (stage 1 is sketched below):

1. With the normals of the 3D points
2. Final extraction with the lines (higher accuracy)
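A minimal sketch of the first stage, under the stated Manhattan assumption: cluster the 3D point normals from the depth data around three orthogonal directions and snap the result to a rotation with an SVD (orthogonal Procrustes). This is illustrative and not the paper's exact optimization.

import numpy as np

def manhattan_frame_from_normals(normals, iters=10):
    # normals: (N, 3) unit normals of the 3D points.
    # Returns a 3x3 rotation whose columns approximate the three Manhattan directions.
    R = np.eye(3)                                   # initial guess for the frame
    for _ in range(iters):
        dots = normals @ R                          # cosine with each current axis
        axis = np.abs(dots).argmax(axis=1)          # closest axis (sign-invariant)
        signs = np.sign(dots[np.arange(len(normals)), axis])
        M = np.zeros((3, 3))
        for k in range(3):
            sel = axis == k
            # each axis becomes the (sign-corrected) sum of its assigned normals
            M[:, k] = (signs[sel, None] * normals[sel]).sum(axis=0) if sel.any() else R[:, k]
        U, _, Vt = np.linalg.svd(M)                 # closest rotation to M
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R

The second stage would then refine these directions using the extracted lines.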

SLIDE 26

Line classification

* Three main directions
* Above/below horizon
* Long lines
* Associated to 3D plane intersections
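An illustrative sketch of how the criteria above could be applied per line, assuming each extracted line comes with an estimated 3D direction, its length, the image row of its midpoint, and a flag telling whether it is supported by a 3D plane intersection (all inputs and the threshold are assumptions):

import numpy as np

def classify_line(direction, midpoint_row, horizon_row, R_manhattan, length, has_3d_support):
    # direction: unit 3D direction of the line; R_manhattan: columns = Manhattan directions
    cosines = np.abs(R_manhattan.T @ direction)
    vp_index = int(cosines.argmax())            # one of the three main directions
    below_horizon = midpoint_row > horizon_row  # image rows grow downwards
    keep = length > 30 or has_3d_support        # favour long or 3D-supported lines
    return vp_index, below_horizon, keep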

SLIDE 27

Outline of the method

SLIDE 28

Line projection and scaling

Lines below the horizon are intersected with the floor plane to obtain their 3D coordinates.
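A minimal sketch of this projection step, assuming the camera pose is expressed in a floor-aligned frame with the floor at z = 0 and that every sampled line point can be back-projected to a viewing ray (names are illustrative):

import numpy as np

def intersect_ray_with_floor(ray_dir, cam_center):
    # Intersect the ray X = cam_center + s * ray_dir with the plane z = 0.
    if abs(ray_dir[2]) < 1e-9:
        return None                    # ray (almost) parallel to the floor
    s = -cam_center[2] / ray_dir[2]
    if s <= 0:
        return None                    # intersection behind the camera
    return cam_center + s * ray_dir

# Example: a camera 1.5 m above the floor, ray pointing forward and downward.
cam_center = np.array([0.0, 0.0, 1.5])
ray = np.array([0.0, 1.0, -0.5])
print(intersect_ray_with_floor(ray / np.linalg.norm(ray), cam_center))   # -> [0. 3. 0.]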

SLIDE 29

Line projection and scaling

The height of the ceiling is computed assuming floor/ceiling symmetry: projected onto the 2D floor plane, the floor and ceiling contours should overlap.
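An illustrative sketch of the symmetry idea: for a candidate ceiling height h, intersect the ceiling viewing rays with the plane z = h and measure how well the resulting (x, y) footprint overlaps the floor contour; keep the height with the best overlap. The distance measure and the search range are assumptions.

import numpy as np

def ceiling_footprint(ceiling_rays, cam_center, h):
    # Intersect upward-going viewing rays with the plane z = h, return (x, y) points.
    pts = []
    for d in ceiling_rays:
        if d[2] > 1e-9:
            s = (h - cam_center[2]) / d[2]
            pts.append((cam_center + s * d)[:2])
    return np.array(pts)

def best_ceiling_height(ceiling_rays, cam_center, floor_xy, heights=np.linspace(2.0, 4.0, 81)):
    # 1D search: the best height makes the ceiling footprint lie on the floor contour.
    def cost(h):
        pts = ceiling_footprint(ceiling_rays, cam_center, h)
        if len(pts) == 0:
            return np.inf
        d = np.linalg.norm(pts[:, None, :] - floor_xy[None, :, :], axis=2).min(axis=1)
        return d.mean()
    return min(heights, key=cost)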

SLIDE 30

Line projection and scaling (Example)

SLIDE 31

Corner extraction

We extract four types of corners, either on the floor or on the ceiling plane.

SLIDE 32

Corner extraction

Corners are scored to favour their selection during layout hypotheses generation (a scoring sketch follows this list) when:
* Lines are longer
* Lines are closer to the intersection point
* The corner is formed by more lines
* Lines are associated to 3D intersections
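An illustrative scoring function following the four criteria above; the weights and the exact combination are assumptions, not the paper's formula:

import numpy as np

def corner_score(line_lengths, gaps_to_intersection, n_lines, n_3d_supported,
                 w_len=1.0, w_gap=1.0, w_lines=0.5, w_3d=2.0):
    # line_lengths: lengths (px) of the lines forming the corner
    # gaps_to_intersection: distance (px) from each line endpoint to the intersection point
    # n_lines: number of lines forming the corner
    # n_3d_supported: how many of those lines are associated to 3D intersections
    length_term = w_len * np.log1p(np.sum(line_lengths))     # longer lines
    gap_term = -w_gap * np.mean(gaps_to_intersection)        # lines close to the corner
    count_term = w_lines * n_lines                           # formed by more lines
    support_term = w_3d * n_3d_supported                     # 3D-supported lines
    return length_term + gap_term + count_term + support_term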

SLIDE 33

Corner extraction (example)

SLIDE 34

Outline of the method

SLIDE 35

Hypotheses generation

1. Draw 2-5 corners, with selection probability increasing with their scores (a sampling sketch follows this list)
2. Sort them clockwise
3. Join corners with walls oriented in Manhattan directions
4. Possibly add undetected corners to keep alternately oriented Manhattan walls
5. Close the layout
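An illustrative sketch of steps 1-2: draw a few corners with probability proportional to their scores and sort them clockwise around their centroid. The exact sampling scheme is an assumption consistent with the description above.

import numpy as np

def sample_corner_subset(corner_xy, scores, rng=np.random.default_rng()):
    # corner_xy: (N, 2) corner positions on the floor plane; scores: (N,) positive scores.
    k = min(int(rng.integers(2, 6)), len(corner_xy))     # draw 2-5 corners
    p = np.asarray(scores, dtype=float)
    idx = rng.choice(len(corner_xy), size=k, replace=False, p=p / p.sum())
    chosen = corner_xy[idx]
    centroid = chosen.mean(axis=0)
    angles = np.arctan2(chosen[:, 1] - centroid[1], chosen[:, 0] - centroid[0])
    return chosen[np.argsort(-angles)]                   # clockwise order

The sampled corners would then be joined with Manhattan-oriented walls and the layout closed, as in steps 3-5.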

SLIDE 36

Hypotheses generation example

SLIDE 37

Invalid hypotheses

SLIDE 38

Hypotheses in 3D

SLIDE 39

Outline of the method

SLIDE 40

Layout evaluation methods

* Sum of Scores (SS)
* Sum of Edges (SE)
* Angle Coverage (AC)
* Orientation Map (OM), from [3]

[3] D.C. Lee et al., "Geometric reasoning for single image structure recovery", CVPR 2009

SLIDE 41

Experimental evaluation

* We created our own data, including:
  * RGB-D + fisheye camera system: 70 images
  * Google Tango
* We measure the quality of the layout extraction with Pixel Accuracy (in %), i.e. the percentage of pixels of the resulting labeled image that match the manually labeled ground truth (a sketch of the metric follows below)
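A minimal sketch of the Pixel Accuracy metric as defined above (the label-image representation is an assumption):

import numpy as np

def pixel_accuracy(pred_labels, gt_labels):
    # Both inputs are (H, W) integer label images; returns the accuracy in %.
    assert pred_labels.shape == gt_labels.shape
    return 100.0 * np.mean(pred_labels == gt_labels)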

SLIDE 42

Experimental results

* With few hypotheses we obtain good results → corner extraction and scoring work well
* The method with the depth information removed gets considerably worse results

SLIDE 43

Results: Tango + scaling

SLIDE 44

Results: Tango + scaling

SLIDE 45

Bonus: New calibration method

Extrinsic calibration of multiple RGB-D cameras from line observations.

• A. Perez-Yus, E. Fernandez-Moral, G. Lopez-Nicolas, J.J. Guerrero, P. Rives. IEEE Robotics and Automation Letters (2018)

SLIDE 46

New calibration method

SLIDE 47

New calibration method

SLIDE 48

Wide RGB-D for Scaled Layout Reconstruction

Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero. Universidad de Zaragoza, Spain

International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping, September 28, 2017, at IROS 2017, Vancouver