Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation (PowerPoint PPT Presentation)



SLIDE 1

Competitive Collaboration

Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Anurag Ranjan Perceiving Systems Max Planck Institute for Intelligent Systems

1

SLIDE 2

Varun Jampani Lukas Balles Deqing Sun Kihwan Kim Jonas Wulff Michael Black

2

SLIDE 3

Tübingen, Germany

3

SLIDE 4

Outline

  • Motion and Optical Flow
  • Deep Learning with Structure
  • Competitive Collaboration
  • Geometry
  • Unsupervised Learning of Everything

4

Supervised → Unsupervised

SLIDE 5

Motion and Optical Flow

5

SLIDE 6

Optical Flow

2D velocity for all pixels between two frames of a video sequence.

I(x, y, t − 1) = I(x + u, y + v, t)

6

SLIDE 7

Why do we need Optical Flow?

7

Optical Flow

SLAM, Action Recognition, Super-resolution, Video Compression, Slow Motion, Unsupervised Segmentation

Unsupervised Segmentation: Mahendran et al.; VFX: Black et al.; Motion Magnification: Liu et al.; Action Recognition: Simonyan et al.

Motion Magnification VFX

SLIDE 8

Optical Flow

2D velocity for all pixels between two frames of a video sequence.

I(x, y, t − 1) = I(x + u, y + v, t)

8

SLIDE 9

Estimating Optical Flow

I(x, y, t − 1) = I(x + u, y + v, t)

min_{u,v} ‖ I(x, y, t − 1) − I(x + u, y + v, t) ‖

min_{u,v} ρ(I_{t−1} − warp(I_t, u, v))

9

Photometric Loss
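The photometric loss above can be made concrete. A minimal numpy/scipy sketch, assuming grayscale frames and a Charbonnier penalty for the robust function ρ; the `warp` here is a simple bilinear backward-warp, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, u, v):
    """Backward-warp img by the flow (u, v): sample img at (x + u, y + v)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # map_coordinates takes (row, col) sample positions
    return map_coordinates(img, [ys + v, xs + u], order=1, mode='nearest')

def photometric_loss(img1, img2, u, v, eps=1e-3):
    """Robust (Charbonnier) penalty rho on the residual I_{t-1} - warp(I_t, u, v)."""
    residual = img1 - warp(img2, u, v)
    return np.mean(np.sqrt(residual ** 2 + eps ** 2))
```

A flow that correctly compensates the motion between the two frames drives the residual, and hence the loss, toward zero.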

SLIDE 10

10

min_{u,v} ρ(I_{t−1} − warp(I_t, u, v))

Photometric Loss

SLIDE 11

No prior on structure

11

SLIDE 12

Can we learn from data?

12

SLIDE 13

Optical Flow Estimation

∈ ℝ^(m×n)

Dosovitskiy et al. 2015

13

SLIDE 14

FlowNet

Dosovitskiy et al. 2015

14

SLIDE 15

Problem

FlowNet is too big: 33M parameters. It needs to learn both large and small motions, and it does not perform well.

15

SLIDE 16

Approach

Image statistics are scale invariant. Use an image pyramid. Train a small network for each pyramid level. Compute residual flow at each level. Network captures small displacements. Pyramid captures large displacements.

16

Burt and Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 1983.
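The pyramid-plus-residual idea above can be sketched as a coarse-to-fine loop. A hypothetical numpy sketch: `residual_net` stands in for the small per-level network, and the real method also warps the second frame by the current flow before feeding it to that network.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_flow(flow, factor=2):
    """Upsample a (2, H, W) flow field and rescale its magnitude."""
    return factor * zoom(flow, (1, factor, factor), order=1)

def coarse_to_fine(pyramid1, pyramid2, residual_net):
    """Coarse-to-fine inference: each pyramid level predicts only a residual
    flow on top of the upsampled estimate from the coarser level."""
    flow = np.zeros((2, *pyramid1[0].shape))  # zero flow at the coarsest level
    for level, (i1, i2) in enumerate(zip(pyramid1, pyramid2)):
        if level > 0:
            flow = upsample_flow(flow)
        # residual_net stands in for the small per-level conv net; the network
        # only has to capture the small displacement left after upsampling
        flow = flow + residual_net(i1, i2, flow)
    return flow
```

The pyramid absorbs large displacements (a large motion becomes a small one at a coarse scale), so each per-level network can stay tiny.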

SLIDE 17

SPyNet

Spatial Pyramid Network for Optical Flow Estimation

17

Ranjan et al. Optical Flow Estimation Using a Spatial Pyramid Network. CVPR 2017.

SLIDE 18

[Diagram: one pyramid-level network — five conv layers with 7×7 kernels (32, 64, 32, 16, 2 channels) mapping the frame pair I1, I2 to a flow v_l]

18

SLIDE 19

[Diagram: the level-l network G_l]

19

SLIDE 20

[Diagram: two-level SPyNet. The coarsest network G0 predicts flow v0 from downsampled frames; v0 is upsampled, used to warp the second frame at the next level, and G1 adds a residual flow v1.]

20

SLIDE 21

[Diagram: three-level SPyNet. The upsample, warp, residual pattern repeats across levels: G0 predicts v0 at the coarsest scale, G1 adds residual v1, and G2 adds residual v2 at full resolution.]

21

SLIDE 22

[Figure: learned spatial and temporal filters of SPyNet vs. FlowNet]

22

SLIDE 23

Frames | Ground Truth | FlowNetS | FlowNetC | SPyNet

23

SLIDE 24

[Plot: average EPE on Sintel (Clean + Final) vs. number of model parameters (millions, log scale) for SPyNet, FlowNetC, FlowNetS, and Voxel2Voxel*. *error metric not consistent with the benchmarks]

24

SLIDE 25

[Plot: average EPE on Sintel (Clean + Final) vs. number of model parameters (millions, log scale) for SPyNet [2017], FlowNetC [2015], FlowNetS [2015], Voxel2Voxel* [2016], FlowNet2 [2017], and PWC-Net [2018]. *error metric not consistent with the benchmarks]

25

SLIDE 26

Sintel Final (EPE by distance from motion boundaries, d, and average displacement, s):

Model | d0-10 | d10-60 | d60-140 | s0-10 | s10-40 | s40+
SPyNet+ft | 6.694 | 4.368 | 3.290 | 1.395 | 5.534 | 49.707
FlowNetS+ft | 7.252 | 4.610 | 2.993 | 1.873 | 5.826 | 43.236
FlowNetC+ft | 7.190 | 4.619 | 3.298 | 2.305 | 6.169 | 40.779

Sintel Clean:

Model | d0-10 | d10-60 | d60-140 | s0-10 | s10-40 | s40+
SPyNet+ft | 5.501 | 3.122 | 1.719 | 0.832 | 3.343 | 43.442
FlowNetS+ft | 5.992 | 3.561 | 2.193 | 1.424 | 3.815 | 40.098
FlowNetC+ft | 5.575 | 3.182 | 1.993 | 1.622 | 3.974 | 33.369

26

SLIDE 27

Problem

SPyNet [1]

28

[1] Ranjan et al. Optical Flow Estimation Using a Spatial Pyramid Network. CVPR 2017.

SLIDE 28

Why humans?

Scenes contain human actions.

Left Image: Delaitre et al. Recognizing human actions in still images, BMVC 2010 Right Image: Simonyan et al. Two-stream convolutional networks for action recognition in videos. NIPS 2014.

  • Useful for recognition problems.
  • Two-stream architectures use fast classical optical flow methods.
  • Deep networks have massive GPU memory requirements.

29

SLIDE 29

Problem

No dataset for human optical flow for training neural networks.

Flying Chairs [1]

MPI Sintel [2] KITTI [3]

[1] Dosovitskiy et al. FlowNet: Learning optical flow with convolutional networks. ICCV 2015. [2] Butler et al. A naturalistic open source movie for optical flow evaluation. ECCV 2012. [3] Geiger et al. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research 32(11), 2013: 1231-1237.

30

SLIDE 30

Idea

Create a new dataset for human optical flow. Use it to train an existing fast and compact optical flow method.

31

SLIDE 31

Human Flow Dataset

Human motion capture data [1] + realistic human body model [2] + environment [3] → simulate and extract motion vectors.

32

[1] Ionescu et al. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE PAMI 2014. [2] Loper et al. MoSh: Motion and Shape Capture from Sparse Markers. SIGGRAPH Asia 2014. [3] Yu et al. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365, 2015.

Augmentations: cloth texture, lighting, noise, motion blur, camera blur; rendered in Blender.

SLIDE 32

Human Flow Dataset

33

SLIDE 33

[Diagram: three-level SPyNet. G0 predicts v0 at the coarsest level; each finer level warps by the upsampled flow and adds a residual (v1, v2).]

SPyNet

Ranjan et al. Optical Flow Estimation Using a Spatial Pyramid Network. CVPR 2017.

35

SLIDE 34

Evaluation of Optical Flow Networks

[Plot: average EPE on the Human Flow dataset vs. inference time (seconds, log scale) for SPyNet, SPyNet+HF, PWC-Net, and PWC-Net+HF]

36

SLIDE 35

Evaluation of Optical Flow Networks

[Plot: average EPE on the Human Flow dataset vs. inference time (seconds, log scale) for SPyNet, SPyNet+HF, PWC-Net, PWC-Net+HF, FlowNet2, FlowNetS, Flow Fields, LDOF, PCA Flow, and EpicFlow]

37

SLIDE 36

38

Visuals

Video | Ground Truth Human Flow | SPyNet

SLIDE 37

39

Visuals

Video | Ground Truth Human Flow | SPyNet

SLIDE 38

40

Visuals

Video | Ground Truth Human Flow | SPyNet

SLIDE 39

41

Visuals

Video | Human Flow | SPyNet

SLIDE 40

42

Visuals

Video | Human Flow | SPyNet

SLIDE 41

Human Flow may not work on other parts of the scene.

43

SLIDE 42

Introduction to Scene Geometry

44

SLIDE 43

Motion of a Static Scene

For static scenes: Depth + Camera Motion = Optical Flow

45

SLIDE 44

Multi-view Geometry

𝑦1 = πΏπ‘Œ, 𝑦2 = 𝐿 𝑆 𝑒 π‘Œ, π‘Œ =

𝑒 𝑔 𝑦1

𝑒 βˆ₯ 𝐽1 𝑦1 βˆ’ 𝐽2 𝑦2 βˆ₯= 0 min

𝑆,𝑒,𝑒 𝜍(𝐽1 βˆ’ π‘₯arp 𝐽2, 𝑆, 𝑒, 𝑒 )

Pinhole Camera Matrix

46

Photometric Loss

𝐽1 𝐽2
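The static-scene relation above can be computed directly: back-project each pixel with its depth, apply the camera motion, re-project, and subtract. A numpy sketch with illustrative intrinsics; for a pure x-translation the induced flow is f·t_x/d.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Flow induced by camera motion (R, t) over a static scene:
    back-project with depth, x2 = K (R (d K^-1 x1) + t), flow = x2 - x1."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    X = depth.reshape(1, -1) * (np.linalg.inv(K) @ pix)  # 3D points, camera 1
    X2 = R @ X + t.reshape(3, 1)                         # points in camera 2
    p2 = K @ X2
    p2 = p2[:2] / p2[2:]                                 # perspective division
    return (p2 - pix[:2]).reshape(2, h, w)
```

This is why depth plus camera motion determines the flow of every static pixel: nothing else enters the equation.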

SLIDE 45

Static Scene and Moving Objects

47

SLIDE 46

How to decompose a scene?

48

SLIDE 47

Competitive Collaboration

49

SLIDE 48

[Diagram: a single network minimizing its loss]

50

SLIDE 49

[Diagram: two competitors, each minimizing its own loss]

51

SLIDE 50

Competition: a moderator network divides the training data between the two competitors.

52

SLIDE 51

Collaboration: the competitors form a consensus that trains the moderator.

53

SLIDE 52

Mixed Domain Learning

[Diagram: classifiers Alice and Bob competing through a moderator]

54

SLIDE 53

Competition Loss

E_comp = m · L(A(x), y) + (1 − m) · L(B(x), y),   m = M(x)

55

SLIDE 54

Collaboration Loss

πΉπ‘‘π‘π‘š = 𝐹𝑑𝑝𝑛 + α‰Šβˆ’ log(𝑁 𝑧 + πœ—) 𝑗𝑔 𝐹

𝐡 < 𝐹𝐢

βˆ’ log(1 βˆ’ 𝑁 𝑧 + πœ—) 𝑗𝑔𝐹

𝐡 β‰₯ 𝐹𝐢

𝐹

𝐡 = 𝐼(𝐡 (

), 5)

56
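On scalar per-sample losses, the two phases look like this. A toy numpy sketch, assuming the competitors' losses L_A and L_B are already computed; the function names are illustrative.

```python
import numpy as np

def competition_loss(loss_a, loss_b, m):
    """E_comp = m * L_A + (1 - m) * L_B: the mask m = M(x) assigns each
    sample to competitor A (m -> 1) or competitor B (m -> 0)."""
    return m * loss_a + (1 - m) * loss_b

def collaboration_loss(loss_a, loss_b, m, eps=1e-8):
    """Collaboration adds a moderator term: whichever competitor wins
    (has the lower loss) supervises the mask."""
    moderator = np.where(loss_a < loss_b,
                         -np.log(m + eps),       # A wins: push m toward 1
                         -np.log(1 - m + eps))   # B wins: push m toward 0
    return competition_loss(loss_a, loss_b, m) + moderator
```

The ε term only guards the logarithm; the key point is that the mask is rewarded for routing each sample to the competitor that models it best.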

SLIDE 55

[Diagram: the trained Alice, moderator, and Bob networks]

57

SLIDE 56

Accuracy

Model | Training | MNIST Error | SVHN Error | MNIST+SVHN Error
Alice | Basic | 1.34 | 11.88 | 8.96
Alice | CC | 1.41 | 11.55 | 8.74
Bob | CC | 1.24 | 11.75 | 8.84
Alice+Bob+Mod | CC | 1.24 | 11.55 | 8.70
Alice 3x | Basic | 1.33 | 10.86 | 8.22

58

SLIDE 57

Moderator Behavior

Dataset | Alice | Bob
MNIST | 0% | 100%
SVHN | 100% | 0%

59

SLIDE 58

Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

60

SLIDE 59

[Diagram: monocular depth prediction and camera motion estimation networks (Zhou et al. CVPR 2017)]

61

SLIDE 60

[Diagram: monocular depth prediction, camera motion estimation, and optical flow estimation networks (Zhou et al. CVPR 2017; Meister et al. AAAI 2018; Janai et al. ECCV 2018)]

62

SLIDE 61

[Diagram: monocular depth prediction, camera motion estimation, optical flow estimation, and motion segmentation networks, with losses E_S and E_F]

63

SLIDE 62

[Diagram: depth, camera motion, optical flow, and motion segmentation networks, trained with the losses below]

64

E_S = ρ(I, warp(I′, d, e)) · m   (photometric loss of the static-scene model, gated by the mask m)
E_F = ρ(I, warp(I′, u′)) · (1 − m)   (photometric loss of the flow model on the remaining pixels)
E_M = L(𝟙[‖v_S − v_F‖ < λ], m)   (consensus loss: the mask should mark pixels where the static flow v_S and the flow-net flow v_F agree)
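The consensus term can be sketched directly from the flow comparison: where the rigid (static-scene) flow and the flow network agree, the pixel is labeled static. A numpy sketch; variable names and the cross-entropy form for L are illustrative.

```python
import numpy as np

def mask_consensus_loss(flow_static, flow_full, m, lam=1.0, eps=1e-8):
    """Cross-entropy between the mask m and the consensus indicator
    1[||v_S - v_F|| < lambda]: where the static-scene model and the flow
    network agree, the pixel should be labeled static (m -> 1)."""
    agree = (np.linalg.norm(flow_static - flow_full, axis=0) < lam).astype(float)
    return -np.mean(agree * np.log(m + eps) + (1 - agree) * np.log(1 - m + eps))
```

The flows are (2, H, W) fields and m is an (H, W) soft mask in [0, 1].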

SLIDE 63

Competition: the depth and camera motion nets (loss E_S) compete with the optical flow net (loss E_F); the mask net acts as moderator.

66

SLIDE 64

Collaboration: the depth and camera motion nets and the optical flow net form a consensus that trains the mask net (moderator) through E_M.

67

SLIDE 65

Best among unsupervised methods on single-view depth prediction, camera motion estimation, and optical flow. The only network that performs unsupervised motion segmentation.

68

SLIDE 66

Results

69

SLIDE 67

Image Depth Segmentation Static Flow Segmented Dynamic Flow Full Flow

70

SLIDE 68

Image Depth Segmentation Static Flow Segmented Dynamic Flow Full Flow

71

SLIDE 69

Image Depth Segmentation Static Flow Segmented Dynamic Flow Full Flow

72

SLIDE 70

Image Depth Segmentation Static Flow Segmented Dynamic Flow Full Flow

73

SLIDE 71

Image Depth Segmentation Static Flow Segmented Dynamic Flow Full Flow

74

SLIDE 72

Depth Evaluation

Model | Dataset | AbsRel | SqRel | RMS | RMSlog
Eigen et al. 2014 | KITTI | 0.203 | 1.548 | 6.307 | 0.282
Zhou et al. 2017 | KITTI | 0.183 | 1.595 | 6.709 | 0.270
GeoNet 2018 | KITTI | 0.155 | 1.296 | 5.857 | 0.233
DF-Net 2018 | KITTI | 0.150 | 1.124 | 5.507 | 0.223
Ours | KITTI | 0.140 | 1.070 | 5.326 | 0.217
Zhou et al. 2017 | CS+KITTI | 0.198 | 1.836 | 6.565 | 0.275
GeoNet 2018 | CS+KITTI | 0.153 | 1.328 | 5.737 | 0.232
DF-Net 2018 | CS+KITTI | 0.146 | 1.182 | 5.215 | 0.213
Ours | CS+KITTI | 0.139 | 1.032 | 5.199 | 0.213
Godard et al. | CS+KITTI+S | 0.114 | 0.991 | 5.029 | 0.203

75

[1] Zhou et al. Unsupervised learning of depth and ego-motion from video. CVPR 2017. [2] Yin et al. GeoNet: Unsupervised learning of dense depth, optical flow and camera pose. CVPR 2018. [3] Zou et al. DF-Net: Unsupervised joint learning of depth and flow using cross-task consistency. ECCV 2018.
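The four columns are the standard monocular-depth error metrics. A numpy sketch (assuming valid positive ground-truth depths; real evaluations also apply depth caps and crops):

```python
import numpy as np

def depth_metrics(gt, pred):
    """AbsRel, SqRel, RMS and RMSlog between ground-truth and predicted depth."""
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rms = np.sqrt(np.mean((gt - pred) ** 2))
    rms_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rms, rms_log
```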

SLIDE 73

Depth Ablation

Model | Dataset | Net D | Net F | AbsRel | SqRel | RMS | RMSlog
Basic | KITTI | DispNet | – | 0.168 | 1.396 | 6.176 | 0.244
CC | KITTI | DispNet | FlowNetC | 0.148 | 1.149 | 5.464 | 0.226
CC | KITTI | DispResNet | FlowNetC | 0.144 | 1.284 | 5.716 | 0.226
CC | KITTI | DispResNet | PWC-Net | 0.140 | 1.070 | 5.326 | 0.217
CC | CS+KITTI | DispResNet | PWC-Net | 0.139 | 1.032 | 5.199 | 0.213

76

DispResNet > DispNet; PWC-Net > FlowNetC

SLIDE 74

Depth Visuals

77

SLIDE 75

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

78

SLIDE 76

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

79

SLIDE 77

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

80

SLIDE 78

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

81

SLIDE 79

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

82

SLIDE 80

Image Ground Truth CC (CS+K) Basic Basic+ssim CC (K)

83

SLIDE 81

Pose Evaluation

Model | Sequence 09 | Sequence 10
ORB-SLAM | 0.014 ± 0.008 | 0.012 ± 0.011
Zhou et al. 2017 | 0.016 ± 0.009 | 0.013 ± 0.009
GeoNet 2018 | 0.012 ± 0.007 | 0.012 ± 0.009
DF-Net 2018 | 0.017 ± 0.007 | 0.015 ± 0.009
Ours | 0.012 ± 0.007 | 0.012 ± 0.008

84


SLIDE 82

Flow Evaluation on KITTI

Model | EPE | Fl | Test Fl
UnFlow-CSS 2018 | 8.10 | 23.27% | –
Back2Future 2018 | 6.59 | 24.21% | 22.94%
GeoNet 2018 | 10.81 | – | –
DF-Net 2018 | 8.98 | 26.41% | 25.70%
Ours | 5.66 | 20.93% | 25.27%
PWC-Net 2018 | 10.35 | 33.67% | –
PWC-Net+ft 2018 | (2.16) | (9.80%) | 9.60%

85

[1] Janai et al. Unsupervised learning of multi-frame optical flow with occlusions. ECCV 2018. [2] Yin et al. GeoNet: Unsupervised learning of dense depth, optical flow and camera pose. CVPR 2018. [3] Zou et al. DF-Net: Unsupervised joint learning of depth and flow using cross-task consistency. ECCV 2018.

SLIDE 83

Flow Visuals

86

SLIDE 84

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

87

[1] Meister et al. UnFlow: Unsupervised learning of optical flow with a bidirectional census loss. AAAI 2018. [2] Yin et al. GeoNet: Unsupervised learning of dense depth, optical flow and camera pose. CVPR 2018. [3] Zou et al. DF-Net: Unsupervised joint learning of depth and flow using cross-task consistency. ECCV 2018.

SLIDE 85

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

88


SLIDE 86

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

89


SLIDE 87

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

90


SLIDE 88

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

91


SLIDE 89

Image 1 Ground Truth UnFlow-CSS GeoNet DF-Net Ours

92


SLIDE 90

93

What’s next? supervision

SLIDE 91

94

Future Goal

Image courtesy: https://ps.is.tuebingen.mpg.de/research_fields/inverse-graphics

SLIDE 92

github.com/anuragranj

Michael Black (MPI), Jonas Wulff (MIT), Timo Bolkart (MPI), Siyu Tang (MPI), Joel Janai (MPI), Deqing Sun (NVIDIA), Fatma Güney (Oxford), Varun Jampani (NVIDIA), Andreas Geiger (MPI), Clément Pinard (ENSTA), Soubhik Sanyal (MPI), Yiyi Liao (MPI), George Pavlakos (UPenn), Kihwan Kim (NVIDIA), Lukas Balles (MPI), Frederick Künstner (MPI), Dimitris Tzionas (MPI), David Hoffmann (MPI)

97