Single-View Depth Image Estimation
Fangchang Ma, PhD Candidate at MIT (Sertac Karaman Group)
- homepage: www.mit.edu/~fcma/
- code: github.com/fangchangma
Depth sensing is key to robotics advancement. 1979: multi-view vision and the Stanford
[Figure: REL error (0.00 to 0.25) vs. number of depth samples (10^0 to 10^4, log scale) for three input modalities: RGBd (RGB + sparse depth), sparse depth only, and RGB only.]
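REL, the metric on the plot's y-axis, is the mean absolute relative depth error, averaged over pixels with valid (positive) ground truth. A minimal sketch; the function name and flat-list interface are illustrative, not from the slides:

```python
# REL = (1/n) * sum(|pred_i - gt_i| / gt_i) over pixels with valid ground truth.
# Pixels with gt <= 0 (missing sensor readings) are skipped, as is standard
# for this metric.

def rel_error(pred, gt):
    """Mean absolute relative depth error over pixels with positive ground truth."""
    pairs = [(p, g) for p, g in zip(pred, gt) if g > 0]
    return sum(abs(p - g) / g for p, g in pairs) / len(pairs)

# Example: predictions that are 10% off everywhere give REL ~ 0.1.
print(rel_error([1.1, 2.2, 3.3], [1.0, 2.0, 3.0]))
```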
Supervised training requires ground-truth depth labels, which are hard to acquire in practice.
Experiment 2. Self-Supervised Training
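Self-supervised training sidesteps depth labels: the loss compares the current frame with a nearby frame warped into its view using the predicted depth, so photometric consistency supervises the depth. A toy 1-D sketch of that idea; the names, integer disparity, and pure horizontal shift are simplifying assumptions, not the talk's full 3-D warp:

```python
# Toy self-supervision signal: a 1-D "image" and an integer disparity
# (proportional to inverse depth) stand in for the full perspective warp.

def warp_1d(nearby, disparity):
    """Shift a 1-D image by an integer disparity, clamping at the borders."""
    n = len(nearby)
    return [nearby[min(max(i + disparity, 0), n - 1)] for i in range(n)]

def photometric_loss(frame, nearby, disparity):
    """Mean absolute intensity difference after warping (L1 photometric loss)."""
    warped = warp_1d(nearby, disparity)
    return sum(abs(a - b) for a, b in zip(frame, warped)) / len(frame)

frame  = [0, 1, 2, 3, 4]
nearby = [1, 2, 3, 4, 4]  # same scene viewed one pixel to the right

# The correct disparity (-1) reconstructs the frame almost exactly, so its
# photometric loss is lower than a wrong disparity's; minimizing this loss
# over predicted depth is the training signal.
print(photometric_loss(frame, nearby, -1), photometric_loss(frame, nearby, 0))
```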
[Figure: qualitative comparison. Columns: RGB input, ground truth, baseline (ResNet-50 with UpProj, 2.7 fps on TX2 GPU), and this work (178 fps on TX2 GPU).]
[Figure: reconstructed images vs. ground truth, from undersampled measurements.]
Input: sparse depth only. Output: dense depth.
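The sparse-depth input can be emulated by keeping only a handful of randomly chosen pixels from a dense depth map and zeroing the rest. A small sketch under that assumption; the flat-list representation and names are illustrative:

```python
# Keep m randomly chosen depth samples; zero marks "no measurement".
import random

def sample_sparse_depth(dense, m, seed=0):
    """Return a copy of `dense` with all but m randomly kept entries zeroed."""
    rng = random.Random(seed)
    keep = set(rng.sample(range(len(dense)), m))
    return [d if i in keep else 0.0 for i, d in enumerate(dense)]

dense = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
sparse = sample_sparse_depth(dense, m=2)
print(sum(1 for d in sparse if d > 0))  # prints 2: only m samples survive
```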
- Fangchang Ma, Luca Carlone, Ulas Ayaz, Sertac Karaman. "Sparse Sensing for Resource-Constrained Depth Reconstruction". IROS 2016.
- Fangchang Ma, Luca Carlone, Ulas Ayaz, Sertac Karaman. "Sparse Depth Sensing for Resource-Constrained Robots". The International Journal of Robotics Research (IJRR).