SPARSE VOLUMETRIC REPRESENTATION OF TIME-LAPSE POINT CLOUD
Innfarn Yoo, 05.08.2017


SLIDE 1

Innfarn Yoo, 05.08.2017

SPARSE VOLUMETRIC REPRESENTATION OF TIME-LAPSE POINT CLOUD

SLIDE 2

AGENDA

Introduction Previous Work Method Result Future Work

SLIDE 3

INTRODUCTION

External point cloud data captured by Kespry drones

  • Drone-captured photogrammetric point cloud
  • Captured every 2-3 days
  • 235 captures, 190 GB (avg. 810 MB)
  • Each capture is 300 MB ~ 1.9 GB
  • 10 ~ 50 million points
  • Resolution is 10 ~ 20 cm
  • Some noise

Time-lapse Point Cloud Dataset

SLIDE 4

SLIDE 5

INTRODUCTION

Internal point cloud data captured by laser scanning

  • Laser scan (LIDAR) point cloud
  • Captured every 2 weeks
  • 23 captures, 510 GB (avg. 22 GB)
  • Each capture is 13 ~ 45 GB
  • 0.9 ~ 1.9 billion points
  • Resolution down to ~1 mm
  • Accurate (some noise near glass)

Time-lapse Point Cloud Dataset

SLIDE 6

SLIDE 7

INTRODUCTION

  • Drone-captured point cloud
  • Dynamic loading and rendering of small but numerous point cloud captures
  • 10 ~ 30 million points per capture
  • We already presented these methods at GTC 2016 (not a topic of this presentation)
  • Laser scan point cloud data
  • Around 1.7 billion points per capture
  • 1.7 billion points × 16 bytes (float x, y, z and color r, g, b, a) ≈ 25.3 GiB
  • An NVIDIA Quadro P6000 has 24 GB of GDDR5X memory, which is not enough

Problems

SLIDE 8

INTRODUCTION

  • Visualize the laser scan dataset in real time
  • Compactly store the time-lapse laser scan dataset
  • Provide more spatial information
  • Primitive conversions
  • Convert to a machine-learning-friendly dataset
  • Fill gaps between points

Goals

Sparse Volumes

SLIDE 9

AGENDA

Introduction Previous Work Method Result Future Work

SLIDE 10

PREVIOUS WORK

  • Point Cloud VR
  • Time-Lapse VR Rendering
  • Octree-based dynamic loading and rendering (LOD)
  • Achieved 90 fps per eye
  • Showed the entire dataset in the VR Village

GTC 2016

Point cloud rendering for both eyes, Markus Schuetz

SLIDE 11

PREVIOUS WORK

  • Progressive Blue-Noise Point Cloud
  • Generating a progressive blue-noise point cloud
  • Buffer management using an OpenGL 4.5 extension
  • Dynamic loading and rendering of massive-scale point clouds

GTC 2016

SLIDE 12

PREVIOUS WORK

Drone Captured Time-lapse Point Cloud Visualization

SLIDE 13

AGENDA

Introduction Previous Work Method Result Future Work

SLIDE 14

TIME-LAPSE LASER SCAN POINT CLOUD

  • Time-lapse laser scan point cloud
  • Notoriously big data size
  • Captures the same space at different times
  • Some areas have higher density than others

Pros & cons

SLIDE 15

SLIDE 16

SPARSE VOLUMETRIC REPRESENTATION

  • Sparse Volumetric Representation
  • Data compression
  • Naturally represented by an octree structure
  • Voxels enable several algorithms
  • e.g., Surface Extraction, Feature Detection, Object Detection
  • Gives spatial relationships between voxels
  • Can access neighbor voxels

Advantages of Sparse Volume

SLIDE 17

CREATING SPARSE VOLUME

Offline Processes

Input laser scan files (E57, LAS, or LAZ) → Calculate Bounding Box → Generate Octree & Splat Points → Voxelate & Save Voxels → Merge & Compress Voxels

SLIDE 18

CREATING SPARSE VOLUME

  • Make the octree a power-of-2 cube
  • All leaves have the same depth → same volume per leaf node
  • Subdivide each leaf node into small voxels
  • e.g., 1 cm × 1 cm × 1 cm voxels
  • Calculate whether points fall inside a voxel
  • If a point hits a voxel, the voxel is activated (sparse)

Bounding Box → Octree → Voxels
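The activation step above can be sketched as follows; a minimal illustration, where the 1 cm voxel size comes from the slide but the point list, origin, and use of a Python set are illustrative:

```python
import math

def voxelize(points, origin, voxel_size=0.01):
    """Map each point to its integer voxel cell; a voxel is
    'activated' (stored sparsely) as soon as one point hits it."""
    active = set()
    ox, oy, oz = origin
    for x, y, z in points:
        # Cell index along each axis, relative to the bounding-box origin.
        ix = math.floor((x - ox) / voxel_size)
        iy = math.floor((y - oy) / voxel_size)
        iz = math.floor((z - oz) / voxel_size)
        active.add((ix, iy, iz))
    return active

# Two points inside the same 1 cm cell activate only one voxel.
voxels = voxelize(
    [(0.004, 0.002, 0.001), (0.006, 0.009, 0.003), (0.017, 0.002, 0.001)],
    origin=(0.0, 0.0, 0.0),
)
```

Storing only the set of activated cells is what makes the representation sparse: empty space costs nothing.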

SLIDE 19

CREATING SPARSE VOLUME

  • Activated voxels are represented by only a few index bits (x, y, z)
  • 202.42 m × 226.53 m × 74.67 m area
  • Voxelated at 1 cm × 1 cm × 1 cm
  • Only 43 bits are required to store one voxel index (x, y, and z)

Octree → Voxels → Compression
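The 43 bits follow from the extents: 20,242 × 22,653 × 7,467 one-centimeter cells need 15 + 15 + 13 bits. A sketch of how such an index could be packed into one integer key; the function names are illustrative, not the presented implementation:

```python
# Bit widths per axis: ceil(log2(20242)) = 15, ceil(log2(22653)) = 15,
# ceil(log2(7467)) = 13, for 43 bits total per voxel index.
X_BITS, Y_BITS, Z_BITS = 15, 15, 13

def pack_index(ix, iy, iz):
    """Pack a voxel's (x, y, z) cell indices into one 43-bit key."""
    assert ix < (1 << X_BITS) and iy < (1 << Y_BITS) and iz < (1 << Z_BITS)
    return (ix << (Y_BITS + Z_BITS)) | (iy << Z_BITS) | iz

def unpack_index(key):
    """Recover (ix, iy, iz) from a packed 43-bit key."""
    iz = key & ((1 << Z_BITS) - 1)
    iy = (key >> Z_BITS) & ((1 << Y_BITS) - 1)
    ix = key >> (Y_BITS + Z_BITS)
    return ix, iy, iz
```

Packed keys also sort in a fixed spatial order, which is convenient for the on-disk merge step later.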

SLIDE 20

OCTREE-BASED SPARSE VOLUMES

  • Save the time-lapse point cloud
  • If a voxel is activated, only its colors are saved
  • System memory is not enough (out-of-core design)
  • Process each laser scan
  • Save to disk
  • Merge and compress on disk

Merge & Compress Voxels
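The out-of-core merge could be sketched as a streaming k-way merge over per-scan voxel key files, assuming each scan's activated keys were saved in sorted order; a simplification of the described step (per-capture colors are omitted here):

```python
import heapq

def merge_voxel_streams(streams):
    """Merge sorted streams of packed voxel keys, one stream per
    on-disk scan file, collapsing duplicates across scans.  Only a
    few keys per stream are held in memory at a time, which is what
    makes the merge out-of-core friendly."""
    last = None
    for key in heapq.merge(*streams):
        if key != last:          # skip voxels already emitted
            yield key
            last = key

# Three scans activated overlapping voxels; the merge keeps each once.
merged = list(merge_voxel_streams([[1, 3, 5], [2, 3, 6], [5, 7]]))
```

In practice the streams would be generators reading the voxel files from disk, so the full scans never need to fit in system memory.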

SLIDE 21

SPARSE VOLUMETRIC REPRESENTATION

Voxelization

SLIDE 22

RENDERING

Progressive Rendering

  • More than 1 billion points or voxels is too much to render in real time
  • To keep 60 fps, 80 million points per frame is the maximum on an NVIDIA Quadro P6000
  • To see the entire voxel or point set, we use progressive rendering
  • Usually used for physically based rendering
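
At these numbers the convergence time of the progressive accumulation follows directly, assuming a static camera so the accumulation buffer is never cleared:

```python
total_points = 1.9e9        # largest laser scan capture (from the dataset slides)
budget_per_frame = 80e6     # max points per frame at 60 fps on a Quadro P6000

frames = total_points / budget_per_frame   # frames needed to draw every point once
seconds = frames / 60.0                    # wall-clock time until the image converges
```

So even the largest capture accumulates fully in roughly 24 frames, well under half a second, while each individual frame stays within the real-time budget.
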
SLIDE 23

RENDERING

i. Do not clear the depth and color framebuffers every frame
   i. Clear only when the camera moves or a rendering option changes
ii. Plan a budget of 80 million points per frame
   i. Calculate the view-frustum & octree node distance
   ii. Calculate a probability (visibility) per node based on distance
iii. Consecutively render additional points per frame per node
   i. When no points remain in a node, give the remaining point budget to a farther node
iv. Copy the framebuffer to the back buffer every frame

Progressive Rendering
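The per-frame budgeting with the spill-to-farther-nodes rule can be sketched as a greedy pass over visible nodes, nearest first; a simplification of the distance-based probability described above, with illustrative names and numbers:

```python
def allocate_budget(nodes, budget):
    """Distribute the per-frame point budget over visible octree nodes,
    nearest first.  A node that runs out of points consumes less than
    its share, so the unused budget spills to farther nodes.
    `nodes` maps name -> (distance, points_remaining)."""
    plan = {}
    for name, (dist, remaining) in sorted(nodes.items(), key=lambda kv: kv[1][0]):
        take = min(remaining, budget)   # never exceed what the node still has
        plan[name] = take
        budget -= take
        if budget == 0:                 # budget exhausted: farther nodes wait
            break
    return plan

# "near" only has 30 points left, so 70 of the 100-point budget spill to "mid".
plan = allocate_budget({"near": (1.0, 30), "mid": (5.0, 100), "far": (9.0, 50)},
                       budget=100)
```

The real scheme weights nodes by a visibility probability rather than strict nearest-first order, but the spill behavior is the same.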

SLIDE 24

SLIDE 25

SLIDE 26

RENDERING

  • Dynamic loading
  • Plan how many points we can load per second (depending on disk speed)
  • Calculate a probability based on node distance in space & time
  • The probability that a node’s points will need to be rendered in a future frame
  • Load points in a different thread
  • A sparse buffer lets us handle the points in a virtual linear address space
  • Actual GPU physical memory is committed only when needed
  • Load or unload blocks of the sparse buffer based on spatiotemporal location

Thread & Sparse Buffer
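The spatiotemporal loading priority could be sketched as below; the Gaussian falloff shape, the sigma constants, and the scheduling function are assumptions for illustration, not the presented method:

```python
import math

def load_priority(spatial_dist, temporal_dist, sigma_s=50.0, sigma_t=3.0):
    """Illustrative probability that a node's points will be needed soon,
    falling off with distance from the camera in space and from the
    current playback time.  The sigmas are made-up tuning constants."""
    return (math.exp(-(spatial_dist / sigma_s) ** 2)
            * math.exp(-(temporal_dist / sigma_t) ** 2))

def schedule_loads(nodes, points_per_second):
    """Pick which nodes to stream from disk this second, highest
    priority first, without exceeding the disk-speed point budget."""
    order = sorted(nodes, key=lambda n: -load_priority(n["d_space"], n["d_time"]))
    chosen, left = [], points_per_second
    for n in order:
        if n["num_points"] <= left:     # node fits in the remaining budget
            chosen.append(n["name"])
            left -= n["num_points"]
    return chosen

nodes = [
    {"name": "a", "d_space": 0.0,   "d_time": 0.0, "num_points": 60},
    {"name": "b", "d_space": 100.0, "d_time": 0.0, "num_points": 60},
    {"name": "c", "d_space": 10.0,  "d_time": 1.0, "num_points": 30},
]
chosen = schedule_loads(nodes, points_per_second=100)
```

The loader thread would then commit sparse-buffer blocks for the chosen nodes, and decommit blocks whose priority has dropped.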

SLIDE 27

AGENDA

Introduction Previous Work Method Result Future Work

SLIDE 28

RESULTS

[Chart: Number of Points & Voxels (billions, voxelated result) per Laser Scan Capture Date, 5/8/2016 to 3/8/2017; series: Num Points, Num Voxels]

SLIDE 29

AGGREGATE RESULTS

[Chart: Point to Voxel Conversion Size Comparison; Points vs. Voxels (1 cm) vs. Remove Duplicated Voxels, shown as Number of Objects (billions) and File Size (GB)]

SLIDE 30

RESULT

  • Sparse voxel representation alleviates the notoriously big data size problem
  • Preprocessing takes a long time
  • Several hours to process 400 GB of laser scan data
  • Progressive rendering allows viewing the entire dataset with real-time control

Overall

SLIDE 31

AGENDA

Introduction Previous Work Method Result & Demo Future Work

SLIDE 32

FUTURE WORK

  • NVIDIA’s GVDB
  • Similar to OpenVDB, but a CUDA-based VDB
  • Our dataset is larger than GVDB’s current limit
  • We have 3.5 billion voxels
  • Later we will cut a subset of the voxels and process it in GVDB

GVDB

Partially Converted to GVDB and Rendered using NVIDIA OptiX
Splatting 10 GB of points to GVDB

SLIDE 33

FUTURE WORK

  • Integration into NVIDIA’s new ProViz viewer and editor
  • The ProViz team is developing a new viewer and editor
  • This work will be integrated into it
  • Object Detection from Volumetric Point Cloud
  • Detect objects in 3D space using machine learning

ProViz tool & Machine Learning

SLIDE 34

NVIDIA’S NEW BUILDING

SLIDE 35