Efficient Collision Detection While Rendering Dynamic Point Clouds – PowerPoint PPT Presentation



SLIDE 1

Efficient Collision Detection While Rendering Dynamic Point Clouds

  • M. Radwan, S. Ohrhallinger and M. Wimmer

Vienna University of Technology, Austria

SLIDE 2–3

Motivation

  • Point clouds are queried using a bounding hierarchy
  • Construction: O(N log N), query time: O(log N)
  • Dynamic points without any time coherence: per-frame construction is too slow for N = 1,000,000

SLIDE 4–7

Related work

  • BVH [Klein et al. '04]
  • Voxels [Eisemann et al. '06]
  • LDI [Heidelberger et al. '04]
  • Dynamic [Pan et al. '13]

SLIDE 8–12

Proposed solution

  • 3D point cloud is really sampled on a 2D surface → flatten to depth images (in screen space): O(N)
  • Incidental benefits of our method: superior accuracy, robustness to sensor noise
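The O(N) flattening step can be sketched in a few lines. This is a hypothetical minimal illustration (not the authors' GPU pipeline), assuming an orthographic projection along z and a single nearest-depth layer; the function name and grid resolution are made up for the example:

```python
import numpy as np

def flatten_to_depth_image(points, res=64):
    """Scatter a point cloud into a screen-space depth image in O(N).

    Sketch only: orthographic projection along z; each pixel keeps the
    nearest (minimum) depth falling into it. A real TLDI keeps several
    layers per pixel, not just the front-most one.
    """
    pts = np.asarray(points, dtype=float)
    # Normalize x,y into [0, 1) over the cloud's bounding box.
    lo = pts[:, :2].min(axis=0)
    hi = pts[:, :2].max(axis=0)
    uv = (pts[:, :2] - lo) / np.maximum(hi - lo, 1e-12)
    ij = np.minimum((uv * res).astype(int), res - 1)
    depth = np.full((res, res), np.inf)
    # One unbuffered scatter pass over the points: O(N), no hierarchy.
    np.minimum.at(depth, (ij[:, 1], ij[:, 0]), pts[:, 2])
    return depth
```

A single linear pass replaces the O(N log N) hierarchy build, which is what makes per-frame reconstruction of fully dynamic clouds feasible.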

SLIDE 13–15

Bounding the points

  • R³: spherical cover
  • View-ray depth intervals?
  • Unequal boundary thickness?

SLIDE 16–17

Equalize boundary thickness

  • R³: spherical cover → R³: cylindrical cover
  • Equal boundary thickness

SLIDE 18–21

Blending view rays

  • R³: cylindrical cover
  • Cylinders blended → view rays

SLIDE 22

Discretize in screen space

  • R³: spherical cover, view-ray depth intervals, blended

SLIDE 23–25

Thickened LDI (layered depth images)

  • Depth intervals → depth image layers

SLIDE 26

Stacking cylinders into a depth layer

  • Detect layer connectivity with bit-array occupancy
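The occupancy idea above can be sketched as follows: along each view ray, the cylinders set the depth cells they cover in a bit array, and runs of consecutive set bits merge into one connected layer. This is an illustrative reconstruction of the idea, not the paper's implementation; the function name is hypothetical:

```python
def intervals_from_occupancy(bits):
    """Merge consecutive occupied depth cells into depth intervals.

    `bits` is a per-ray occupancy array (0/1 per depth cell): cylinders
    stacked along the view ray set the cells they cover, and each run
    of set bits becomes one connected layer (sketch of the idea only).
    """
    intervals, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i                      # a new connected run begins
        elif not b and start is not None:
            intervals.append((start, i - 1))  # run ended at cell i-1
            start = None
    if start is not None:                  # run reaches the last cell
        intervals.append((start, len(bits) - 1))
    return intervals
```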

SLIDE 27–29

Reuse of point-based rendering pipeline

[Chart: run time in ms (0–160) for splat rendering, collision detection, and full TLDI; Happy Buddha, 500k points]

  • Almost half of the run time is reused

SLIDE 30–31

Application: Collision detection

  • Intersection of view-ray intervals
  • Construct depth layers as needed
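The per-pixel test behind "intersection of view-ray intervals" reduces to interval overlap. A minimal sketch, assuming each object contributes a list of (near, far) depth intervals per pixel from its TLDI; the function names are made up for the example:

```python
def intervals_overlap(a, b):
    """True if two (near, far) depth intervals along a view ray overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def pixel_collides(layers_a, layers_b):
    """Collision test for one pixel: any interval of object A against
    any interval of object B. Sketch only; the per-object interval
    lists would come from two TLDIs rendered from the same viewpoint."""
    return any(intervals_overlap(a, b) for a in layers_a for b in layers_b)
```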

SLIDE 32–33

False positives

  • Squash in view direction – but how much?

SLIDE 34–35

Determining collision accuracy

  • Mesh = reference, point clouds tested against it

SLIDE 36–37

Squashing the bounding volume

  • ρ = 0.05 is good → squashed by a factor of 20

SLIDE 38

Result 1: Enhanced accuracy

  • [Zachmann 2002]: ~7% accuracy; ours: from 0.3%

SLIDE 39

Accuracy for different models

  • Within 0.3–3% of reference

SLIDE 40

Result 2: Real-time collision detection

  • Previously only feasible using probabilistic methods

SLIDE 41

Interactive for large models (5M points)

SLIDE 42

Early rejection test

  • Test the first layer against the last layer of the other point cloud
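The early-out above can be sketched per pixel: if one object's nearest layer lies entirely behind the other's farthest layer at every pixel, the objects cannot collide and all deeper layers can be skipped. An illustrative sketch under that assumption (function and parameter names are hypothetical), with first/last depth layers given as per-pixel images:

```python
import numpy as np

def early_reject(near_a, far_a, near_b, far_b):
    """Per-pixel early rejection: True if the depth ranges of the two
    objects are disjoint at every pixel, so no collision is possible.

    near_*/far_* are the first and last depth-layer images of each
    object (sketch only; occluded or empty pixels are ignored here).
    """
    disjoint = (near_a > far_b) | (near_b > far_a)
    return bool(np.all(disjoint))
```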

SLIDE 43

Incidental distance queries

For collision detection:

  • For each view ray (pixel), test if intervals overlap

In case of non-collision:

  • Keep the shortest distance between view-ray intervals → separation distance in view direction
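The incidental distance query falls out of the same interval test: where intervals do not overlap, the gap between them is the separation along the view direction. A minimal sketch (hypothetical function name, intervals as (near, far) pairs per pixel):

```python
def ray_separation(layers_a, layers_b):
    """Shortest gap between view-ray intervals of two objects along one
    ray; 0.0 on overlap (i.e. collision). Sketch of the distance query
    that the collision test yields for free on non-colliding pixels."""
    best = float("inf")
    for a0, a1 in layers_a:
        for b0, b1 in layers_b:
            if a0 <= b1 and b0 <= a1:
                return 0.0                     # overlap: zero separation
            # Disjoint intervals: exactly one of these terms is positive.
            best = min(best, max(a0 - b1, b0 - a1))
    return best
```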

SLIDE 44

Result 3: Robust to noise

  • Test with added Gaussian noise, σ = n · r_avg

SLIDE 45

Time complexity

  • O(LN), L = number of layers (depth complexity)
  • Very little output-sensitivity measured

Colliding m > 2 objects adds a factor of m²:

  • But TLDIs need only be constructed once
  • Small point clouds can also be combined to reduce m
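The m² factor comes from testing all m(m-1)/2 object pairs, while each TLDI is built only once and reused across every pair. A toy sketch of that structure (names and the per-pair test are placeholders, not the paper's API):

```python
from itertools import combinations

def collide_all(tldis, pair_test):
    """Pairwise collision among m objects: m(m-1)/2 calls to the
    per-pair test, but each TLDI in `tldis` is constructed once and
    reused for all pairs (placeholder sketch of the scheme)."""
    return {(i, j)
            for (i, a), (j, b) in combinations(enumerate(tldis), 2)
            if pair_test(a, b)}
```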
SLIDE 46

Conclusions

Novel structure bounds the surface of dynamic points:

  • Real-time, accurate, robust to noise
  • Potential other applications: everything that benefits from fast surface queries (GI, ray tracing)

Work in progress:

  • Compact TLDI data structure
  • Speed up construction + queries