SLIDE 1 Wide RGB-D for Scaled Layout Reconstruction
Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero Universidad de Zaragoza, Spain
International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping September 28, 2017 at IROS 2017, Vancouver
SLIDE 2
RGB-D cameras provide valuable information, but with limited FOV
SLIDE 3
Fisheye cameras are able to view the whole scene, but lack depth information
SLIDE 4
* Hybrid camera system
* Depth camera provides 3D certainty and scale
* Fisheye camera covers a 180º field of view
Our proposal: Use both
SLIDE 5
* Presented at ECCV 2016:
* A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, “Peripheral Expansion of Depth Information via Layout Estimation”
How? With layout reconstruction
SLIDE 6
SLIDE 7
SLIDE 8
SLIDE 9
SLIDE 10
SLIDE 11
SLIDE 12
SLIDE 13
SLIDE 14
SLIDE 15
SLIDE 16
SLIDE 17
* Watch video at: https://youtu.be/nQYvhAhvv6U
SLIDE 18
Outline of the method
SLIDE 19
Outline of the method
SLIDE 20
* Fisheye calibration has to be performed separately to model distortion properly
Calibration problem
SLIDE 21
* New method that combines:
* RGB-to-depth calibration [1]
* Omnidirectional camera models [2]
Calibration
[1] C. Herrera et al., “Joint depth and color camera calibration with distortion correction”, PAMI 2012
[2] D. Scaramuzza et al., “A toolbox for easily calibrating omnidirectional cameras”, IROS 2006
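Once the extrinsics between the two cameras are known, a depth point can be projected into the fisheye image. A minimal sketch of that chain, using a simple equidistant fisheye model (r = f·θ) as a stand-in for the Scaramuzza polynomial model actually used in the calibration; all names and parameters here are illustrative:

```python
import numpy as np

def project_fisheye_equidistant(P_depth, R, t, f, cx, cy):
    """Project a 3D point from the depth-camera frame into the fisheye image.

    Simplified stand-in: the calibration above uses the Scaramuzza polynomial
    model, while here an equidistant model (r = f * theta) illustrates the
    chain. R, t are the hypothetical depth-to-fisheye extrinsics; f, cx, cy
    the fisheye intrinsics.
    """
    P = R @ P_depth + t                              # move into fisheye frame
    theta = np.arctan2(np.linalg.norm(P[:2]), P[2])  # angle from optical axis
    phi = np.arctan2(P[1], P[0])                     # azimuth around the axis
    r = f * theta                                    # equidistant radial mapping
    return np.array([cx + r * np.cos(phi), cy + r * np.sin(phi)])
```

A point on the optical axis maps to the principal point `(cx, cy)`, which gives a quick sanity check of the extrinsics.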
SLIDE 22 Calibration
- A. Perez-Yus, G. Lopez-Nicolas, J.J. Guerrero, “A novel hybrid camera system with depth and fisheye cameras”, International Conference on Pattern Recognition (2016)
SLIDE 23
Outline of the method
SLIDE 24
To avoid rectifying the image, we use a method that extracts lines directly from uncalibrated omnidirectional cameras with revolution symmetry
Lines extraction
- J. Bermudez-Cameo, G. Lopez-Nicolas, J.J. Guerrero, “Automatic Line Extraction in Uncalibrated
Omnidirectional Cameras with Revolution Symmetry”. International Journal of Computer Vision (2015)
SLIDE 25 Extraction of the VPs
Manhattan environments are assumed. We extract the 3 VPs in a two-stage optimization:
- 1. With the normals of the 3D points
- 2. Final extraction with lines (more accurate)
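The first stage can be sketched as fitting a Manhattan rotation to the 3D normals, alternating a signed axis assignment with an orthogonal Procrustes update; this is a generic sketch of the idea, not the paper's exact optimization, and the second stage (refinement with fisheye lines) is omitted:

```python
import numpy as np

def manhattan_frame_from_normals(normals, iters=10):
    """Stage 1: fit a Manhattan rotation R to unit surface normals.

    Alternates (a) assigning each normal to its closest signed axis of R and
    (b) re-solving R by orthogonal Procrustes via SVD. Illustrative sketch;
    the paper refines the result with fisheye lines in a second stage.
    """
    normals = np.asarray(normals, dtype=float)
    R = np.eye(3)
    for _ in range(iters):
        dots = normals @ R                               # n_i . r_k for each axis
        idx = np.abs(dots).argmax(axis=1)                # closest axis per normal
        signs = np.sign(dots[np.arange(len(normals)), idx])
        targets = np.eye(3)[idx] * signs[:, None]        # signed canonical axes
        M = normals.T @ targets                          # cross-covariance
        U, _, Vt = np.linalg.svd(M)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt                                   # proper rotation
    return R
```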
SLIDE 26
* Three main directions
* Above/below horizon
* Long lines
* Associated to 3D plane intersections
Line classification
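The direction test can be sketched as checking which VP the supporting line of a segment passes closest to. This is a hypothetical flat-image parametrization for illustration only; in the fisheye image lines are curves and the test operates on their projections:

```python
import numpy as np

def classify_line(p1, p2, vps, horizon_y, tol=0.02):
    """Assign an image segment to one of three VPs (or none), and flag
    whether it lies below the horizon.

    Sketch on a hypothetical pinhole image: the segment votes for the VP
    with the smallest point-to-line distance to its supporting line.
    """
    line = np.cross(np.append(p1, 1.0), np.append(p2, 1.0))  # homogeneous line
    line = line / np.linalg.norm(line[:2])                   # normalize -> distances
    dists = [abs(line @ np.append(vp, 1.0)) for vp in vps]
    k = int(np.argmin(dists))
    # accept only if the VP is close relative to the segment length
    label = k if dists[k] < tol * np.linalg.norm(np.asarray(p2) - p1) else None
    below = bool(p1[1] > horizon_y and p2[1] > horizon_y)    # image y grows downward
    return label, below
```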
SLIDE 27
Outline of the method
SLIDE 28
Lines below the horizon are intersected with the floor plane to obtain their 3D coordinates
Line projection and scaling
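The intersection itself is a simple ray-plane computation; the camera height above the floor is known metrically from the depth data, which is what fixes the scale. A minimal sketch, assuming a floor-aligned frame with z up and the camera above the origin:

```python
import numpy as np

def ray_to_floor(ray_dir, cam_height):
    """Intersect a viewing ray with the floor plane to get a scaled 3D point.

    Camera at (0, 0, cam_height), floor at z = 0. Rays from lines below the
    horizon point downward (negative z), so the intersection exists.
    cam_height comes from the depth camera, fixing the metric scale.
    """
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    if d[2] >= 0:
        return None                      # ray never reaches the floor
    t = cam_height / -d[2]               # distance along the ray to z = 0
    return np.array([0.0, 0.0, cam_height]) + t * d
```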
SLIDE 29
The height of the ceiling is computed assuming floor/ceiling symmetry: in the 2D plane, both contours should overlap.
Line projection and scaling
SLIDE 30
Line projection and scaling (Example)
SLIDE 31
We extract four types of corners, in either the floor or the ceiling plane
Corner extraction
SLIDE 32
Corners are scored to favour their selection during layout hypotheses generation when:
* Lines are longer
* Lines are closer to the intersection point
* The corner is formed by more lines
* Lines are associated to 3D intersections
Corner extraction
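One way to combine the four criteria is a multiplicative score; the weighting below is entirely hypothetical and only illustrates how each listed cue raises or lowers a candidate's score:

```python
def corner_score(line_lengths, gaps_to_corner, num_lines, has_3d):
    """Score a corner candidate; higher scores are sampled more often later.

    Hypothetical weighting following the listed criteria: longer lines,
    smaller gaps between line endpoints and the intersection, more
    supporting lines, and association with a 3D plane intersection all
    increase the score.
    """
    length_term = sum(line_lengths) / (1.0 + len(line_lengths))
    gap_term = 1.0 / (1.0 + sum(gaps_to_corner))     # closer lines -> larger term
    score = length_term * gap_term * num_lines
    return score * (2.0 if has_3d else 1.0)          # bonus for 3D support
```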
SLIDE 33
Corner extraction (example)
SLIDE 34
Outline of the method
SLIDE 35
- 1. Draw 2-5 corners, with probability of selection increasing with their scores
- 2. Sort them clockwise
- 3. Join corners with walls oriented in Manhattan directions
- 4. Optionally add undetected corners to keep alternately-oriented Manhattan walls
Hypotheses generation
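Steps 1-2 above can be sketched as score-weighted sampling followed by an angular sort; joining the corners with Manhattan walls (and inserting missing corners) then operates on the sorted set. Names and the uniform-square test data are illustrative:

```python
import numpy as np

def sample_hypothesis(corners_xy, scores, rng, k_min=2, k_max=5):
    """Draw one layout hypothesis: sample corners by score, sort clockwise.

    Sketch of steps 1-2 only. Corner scores become sampling probabilities,
    so well-supported corners appear in more hypotheses.
    """
    n = len(corners_xy)
    k = rng.integers(k_min, min(k_max, n) + 1)        # 2-5 corners
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()                                   # scores -> probabilities
    idx = rng.choice(n, size=k, replace=False, p=p)
    pts = np.asarray(corners_xy)[idx]
    center = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    return pts[np.argsort(-ang)]                      # clockwise about the centroid
```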
SLIDE 36
Hypotheses generation example
SLIDE 37
Invalid hypotheses
SLIDE 38
Hypotheses in 3D
SLIDE 39
Outline of the method
SLIDE 40
* Sum of Scores (SS)
* Sum of Edges (SE)
* Angle Coverage (AC)
* Orientation Map (OM) – from [3]
Layout evaluation methods
[3] D.C. Lee et al. “Geometric reasoning for single image structure recovery”, CVPR 2009
SLIDE 41
* We created our own data, including:
* RGB-D + fisheye camera system: 70 images
* Google Tango
* We measure the quality of the layout extraction with the Pixel Accuracy (in %): the fraction of pixels in the resulting labeled image that match the manually labeled ground truth
Experimental evaluation
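The metric itself is a one-liner over the predicted and ground-truth label images:

```python
import numpy as np

def pixel_accuracy(pred_labels, gt_labels):
    """Pixel Accuracy (%): fraction of pixels whose layout label (e.g. floor,
    ceiling, or a wall identity) matches the manually annotated ground truth."""
    return 100.0 * np.mean(np.asarray(pred_labels) == np.asarray(gt_labels))
```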
SLIDE 42 Experimental results
* With few hypotheses we already obtain good results: corner extraction and scoring work well
* Removing the depth information makes the results considerably worse
SLIDE 43
Results: Tango + scaling
SLIDE 44
Results: Tango + scaling
SLIDE 45 Bonus: New calibration method
Extrinsic calibration of multiple RGB-D cameras from line observations.
- A. Perez-Yus, E. Fernandez-Moral, G. Lopez-Nicolas, J.J. Guerrero, P. Rives, IEEE Robotics and Automation Letters 2018
SLIDE 46
New calibration method
SLIDE 47
New calibration method
SLIDE 48 Wide RGB-D for Scaled Layout Reconstruction
Alejandro Perez-Yus, Gonzalo Lopez-Nicolas, Jose J. Guerrero Universidad de Zaragoza, Spain
International Workshop on Lines, Planes and Manhattan Models for 3-D Mapping September 28, 2017 at IROS 2017, Vancouver