

SLIDE 1

Real-Time Visualization 1

Real-Time Volume Rendering
Stefan Bruckner

SLIDE 2

Introduction

  • Visualization of a 3D field
    – No explicitly defined surfaces
    – 3D data is projected onto a 2D image
    – Interactivity is crucial
  • Most commonly based on sampled representation
    – Regular grids
    – Curvilinear grids
    – Point-based representations

SLIDE 3

Volume Rendering (1)

  • Image-order approach

For each pixel { calculate color of the pixel }

[figure: rays cast from the eye through the image plane into the volume]
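The image-order loop above can be sketched on the CPU (a minimal illustrative sketch, not the slides' code; the toy transfer function and the 0.99 termination threshold are assumptions):

```python
# Image-order volume rendering: for each pixel, march a ray through the
# volume and composite the classified samples front to back.

def classify(value):
    """Toy transfer function: opacity and grayscale color from a data value."""
    alpha = min(1.0, max(0.0, value))
    return alpha, value  # (opacity, color)

def cast_ray(samples):
    """Composite the samples along one ray, front to back."""
    color, opacity = 0.0, 0.0
    for v in samples:
        a, c = classify(v)
        color += (1.0 - opacity) * a * c
        opacity += (1.0 - opacity) * a
        if opacity >= 0.99:  # early ray termination
            break
    return color, opacity

def render(rays):
    """Image-order approach: for each pixel { calculate color of the pixel }."""
    return [cast_ray(r) for r in rays]
```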

SLIDE 4

Volume Rendering (2)

  • Object-order approach

For each slice { calculate contribution to the image }

[figure: volume slices projected toward the eye through the image plane]

SLIDE 5

Ray Integration

[figure: a viewing ray from the eye through the image plane traversing the volume]

SLIDE 6

Physical Model

  • Conventional volume rendering uses an emission-absorption model
  • Scattering effects are usually ignored due to their high computational complexity
  • For each pixel on the image plane, a ray integral has to be solved
  • Image-space approach for solving the ray integral: volume ray casting

SLIDE 7

Physical Model of Radiative Transfer

  • Energy decrease: absorption and out-scattering
  • Energy increase: emission and in-scattering

SLIDE 8

Analytical Model (1)

  • Initial intensity at 𝑡0 along the viewing ray
  • Without absorption, all the initial radiant energy would reach the point 𝑡

SLIDE 9

Analytical Model (2)

  • Absorption (extinction 𝜐, absorption 𝜆) between 𝑡0 and 𝑡 along the viewing ray attenuates the initial intensity

SLIDE 10

Analytical Model (3)

  • One point 𝑡 along the viewing ray emits additional radiant energy (active emission at that point), which is attenuated by the absorption between the emission point and the eye

SLIDE 11

Analytical Model (4)

  • Every point 𝑡 along the viewing ray emits additional radiant energy

SLIDE 12

Numerical Approximation (1)

  • Extinction: approximate the integral by a Riemann sum
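The step being approximated can be written out (standard emission-absorption notation; the symbols follow common convention and may differ from the slide's):

```latex
T(t_0, t) = \exp\!\left(-\int_{t_0}^{t} \tau(s)\,\mathrm{d}s\right)
\;\approx\; \exp\!\left(-\sum_{i=0}^{n-1} \tau(t_0 + i\,\Delta t)\,\Delta t\right)
\;=\; \prod_{i=0}^{n-1} e^{-\tau(t_0 + i\,\Delta t)\,\Delta t}
```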

SLIDE 13

Numerical Approximation (2)

SLIDE 14

Numerical Approximation (3)

  • Now we introduce opacity
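The usual definition at this step (standard notation, not the slide's symbols): each segment's opacity absorbs the transparency factor, so the extinction product becomes a product of transparencies:

```latex
\alpha_i = 1 - e^{-\tau(t_0 + i\,\Delta t)\,\Delta t},
\qquad
T \;\approx\; \prod_{i=0}^{n-1} (1 - \alpha_i)
```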

SLIDE 15

Numerical Approximation (4)

  • Now we introduce opacity

SLIDE 16

Numerical Approximation (5)

SLIDE 17

Numerical Approximation (6)

SLIDE 18

Numerical Approximation (7)

SLIDE 19

Numerical Approximation (8)

  • Can be computed recursively: the radiant energy observed at position 𝑗 is the radiant energy emitted at position 𝑗 plus the radiant energy observed at position 𝑗 − 1, attenuated by the absorption at position 𝑗

SLIDE 20

Numerical Approximation (9)

  • Back-to-front compositing vs. front-to-back compositing
  • Early ray termination: stop the calculation when 𝐵𝑗′ ≈ 1
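A sketch of the two compositing orders (assuming pre-multiplied colors, as on the summary slide); both orders must produce the same ray color, and front-to-back additionally permits early ray termination:

```python
# Back-to-front vs. front-to-back compositing of (opacity, premultiplied
# color) samples along one ray; the two orders are algebraically equivalent.

def back_to_front(samples):
    color = 0.0
    for alpha, c in reversed(samples):
        color = c + (1.0 - alpha) * color
    return color

def front_to_back(samples):
    color, trans = 0.0, 1.0  # accumulated color, accumulated transparency
    for alpha, c in samples:
        color += trans * c
        trans *= (1.0 - alpha)
        if trans < 0.01:  # early ray termination: accumulated opacity ~ 1
            break
    return color
```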

SLIDE 21

Back-to-Front Compositing

[figure: three compositing steps over samples with 𝛽 = 0.5, 𝛽 = 0.75, 𝛽 = 1.0, 𝛽 = 0.4]

SLIDE 22

Front-to-Back Compositing

[figure: first compositing step over samples with 𝛽 = 0.5, 𝛽 = 0.75, 𝛽 = 1.0, 𝛽 = 0.4]

SLIDE 23

Summary

  • Emission-absorption model
  • Numerical solutions (pre-multiplied alpha assumed)
    – Back-to-front iteration
    – Front-to-back iteration

SLIDE 24

Ray Casting

  • “Natural” volume rendering method
  • Image-order approach
    – Most common CPU approach
    – Well-supported by current GPUs
  • Standard optimizations
    – Early ray termination
    – Empty space skipping
  • Many possibilities
    – Adaptive sampling
    – Realistic lighting

SLIDE 25

CPU-Based Ray Casting

  • Most widely supported
  • Needs multiple cores for interactive performance
  • Easy to support large volume sizes (only dependent on CPU memory)
    – No CPU-GPU bus transfers
    – Interactive implementations usually use orthogonal projection
    – Exception: isosurface ray casting (e.g., virtual endoscopy); no DVR

[Grimm et al., 2005]

SLIDE 26

GPU-Based Ray Casting

  • Easy to implement, real-time
  • Very flexible (adaptive sampling, mixing of isosurface ray casting and DVR ray casting, ...)
  • Exploits loops in the fragment shader (Shader Model 3.0+, …)
    – Early ray termination trivial
    – Empty space skipping
    – Perspective projection
    – Complex lighting possible

SLIDE 27

Texture Slicing

  • Object-order approach
  • Composite textured (viewport-aligned) slices of a 3D texture
  • Standard approach on pre-Shader Model 3.0 GPUs
  • Not as flexible as ray casting
  • Optimizations possible, but complicated to implement (empty space skipping, early ray termination, ...)

SLIDE 28

Splatting

  • Most useful if data is very sparse: few splats
  • Example: neurovascular data

[images: slicing (left) vs. splatting (right)]

SLIDE 29

GPU Volume Rendering (1)

  • Today, standard ray casting can be implemented on the GPU
  • On previous hardware generations, only slicing was possible

[figure: eye, image plane, volume]

SLIDE 30

GPU Volume Rendering (2)

  • In ray casting, sampling is typically performed at equidistant points along each ray

SLIDE 31

GPU Volume Rendering (3)

  • Under perspective projection, these sample points are located on concentric spherical shells

SLIDE 32

GPU Volume Rendering (4)

  • This is equivalent to sampling all the points on one shell before proceeding to the next one

SLIDE 33

GPU Volume Rendering (5)

  • In slicing, the shells are typically approximated by view-aligned planes

SLIDE 34

Ray Casting vs. Slicing

  • Slicing
    – May have advantages in terms of memory access pattern
    – Sampling pattern can cause artifacts
  • Ray Casting
    – Easier to implement on current hardware
    – Early ray termination is trivial

SLIDE 35

Volume Rendering GPU Pipeline

[diagram: pipeline stages compared for slicing and ray casting: load, ray setup, sampling, filtering, classification, shading, integration, clipping, empty space skipping, ray marching]

Courtesy Klaus Engel

SLIDE 36

Volume Rendering Pipeline

data set → reconstruction → classification → shading → compositing → final image

SLIDE 37

Reconstruction (1)

  • Usually volume data sets are given as a grid of discrete samples
    – For rendering purposes, we want to treat them as continuous three-dimensional functions
  • We need to choose an appropriate reconstruction filter
    – Requirements: high-quality reconstruction, but small performance overhead

SLIDE 38

Reconstruction (2)

SLIDE 39

Trilinear Interpolation

  • Simple extension of linear interpolation to three dimensions
  • Advantage: current GPUs automatically do trilinear interpolation of 3D textures
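The extension can be sketched as repeated linear interpolation, one axis at a time (a minimal CPU sketch of what the GPU texture unit does automatically):

```python
# Trilinear interpolation: linear interpolation applied along x, then y,
# then z. `corners[i][j][k]` holds the sample at offset (i, j, k) of the
# enclosing grid cell.

def lerp(a, b, t):
    return a + t * (b - a)

def trilinear(corners, x, y, z):
    """Interpolate inside a unit cell at fractional position (x, y, z)."""
    c00 = lerp(corners[0][0][0], corners[1][0][0], x)
    c10 = lerp(corners[0][1][0], corners[1][1][0], x)
    c01 = lerp(corners[0][0][1], corners[1][0][1], x)
    c11 = lerp(corners[0][1][1], corners[1][1][1], x)
    c0 = lerp(c00, c10, y)
    c1 = lerp(c01, c11, y)
    return lerp(c0, c1, z)
```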

SLIDE 40

Reconstruction Filters (1)

  • If very high quality is needed, more complex reconstruction filters may be required
    – The Marschner-Lobb function is a common test signal to evaluate the quality of reconstruction filters [Marschner and Lobb 1994]
    – The signal has a high amount of its energy near its Nyquist frequency
    – This makes it a very demanding test for accurate reconstruction

SLIDE 41

Reconstruction Filters (2)

  • Marschner-Lobb test signal (analytically evaluated)

SLIDE 42

Reconstruction Filters (3)

  • Trilinear reconstruction of the Marschner-Lobb test signal

SLIDE 43

Reconstruction Filters (4)

  • Cubic reconstruction of the Marschner-Lobb test signal

SLIDE 44

Reconstruction Filters (5)

  • B-spline reconstruction of the Marschner-Lobb test signal

SLIDE 45

Reconstruction Filters (6)

  • Windowed-sinc reconstruction of the Marschner-Lobb test signal

SLIDE 46

Reconstruction Filters (7)

  • Marschner-Lobb test signal (analytically evaluated)

SLIDE 47

Classification (1)

  • During classification the user defines the appearance of the data
    – Which parts are transparent?
    – Which parts have which color?

SLIDE 48

Transfer Functions (1)

  • Real-time update of the transfer function is important

SLIDE 49

Transfer Functions (2)

SLIDE 50

Transfer Functions

  • Mapping of data attributes to optical properties
  • Simplest: color table with opacity over data value

SLIDE 51

Window/Level

  • Maps a data value to opacity and color: a linear ramp from black (0.0) to white (1.0), positioned by the level and sized by the window
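The window/level mapping can be sketched as (a minimal sketch; clamping the ramp to [0, 1] is assumed):

```python
# Window/level classification: map a data value to [0, 1] (black..white)
# through a linear ramp of width `window` centered at `level`.

def window_level(value, level, window):
    lo = level - window / 2.0       # ramp start (maps to 0.0, black)
    t = (value - lo) / window       # linear ramp across the window
    return min(1.0, max(0.0, t))    # clamp outside the window
```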

SLIDE 52

Classification Order (1)

  • Classification can occur before or after reconstruction
    – Significant impact on quality
  • Pre-interpolative classification
    – Classify all data values and then interpolate between RGBA tuples
  • Post-interpolative classification
    – Interpolate between scalar data values and then classify the result
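The difference between the two orders can be shown on a 1D example (the narrow-band transfer function is an assumption chosen to make the orders disagree):

```python
# Pre- vs. post-interpolative classification on a 1D example. With a
# non-linear transfer function the two orders give different results.

def lerp(a, b, t):
    return a + t * (b - a)

def tf(v):
    """Toy transfer function: opaque only in a narrow value band."""
    return 1.0 if 0.4 <= v <= 0.6 else 0.0

def pre_interpolative(v0, v1, t):
    # classify first, then interpolate between the classified (RGBA) values
    return lerp(tf(v0), tf(v1), t)

def post_interpolative(v0, v1, t):
    # interpolate the scalar data values first, then classify the result
    return tf(lerp(v0, v1, t))
```

Midway between two transparent samples, pre-interpolative classification stays transparent, while post-interpolative classification can still hit the opaque band.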

SLIDE 53

Classification Order (2)

[diagram: optical properties over data value; pre-interpolative: interpolation of the classified optical properties; post-interpolative: interpolation of the data values before classification]

SLIDE 54

Classification Order (3)

[figure: analytical solution vs. pre-interpolative and post-interpolative classification (with transfer-function supersampling) for continuous and discrete data; scalar value mapped to alpha value]

SLIDE 55

Classification Order (4)

  • Same transfer function, resolution, and sampling rate

[images: pre-interpolative vs. post-interpolative]

SLIDE 56

Classification Order (5)

  • Same transfer function, resolution, and sampling rate

[images: pre-interpolative vs. post-interpolative]

SLIDE 57

Pre-Integration (1)

[figure: a slab between front sample sf and back sample sb along a ray from the eye through the image plane]

SLIDE 58

Pre-Integration (2)

  • Pre-integrate all possible combinations (sf, sb) in the TF and store the integral into a table
  • Assume a constant sampling distance d between front slice and back slice
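Building such a table can be sketched as follows (an opacity-only sketch with a toy transfer function and a simple average over the slab; real pre-integration tables also integrate color and account for self-attenuation within the slab):

```python
# Pre-integration sketch: tabulate the averaged transfer-function opacity
# for every (front, back) sample pair, assuming a constant slab length d.

N = 64  # transfer-function (and table) resolution

def tf_alpha(s):
    return s  # toy transfer function: opacity equals the data value

def build_table(steps=32):
    table = [[0.0] * N for _ in range(N)]
    for f in range(N):
        for b in range(N):
            sf, sb = f / (N - 1), b / (N - 1)
            # average the classified opacity over the slab between sf and sb
            acc = sum(tf_alpha(sf + (sb - sf) * (k + 0.5) / steps)
                      for k in range(steps))
            table[f][b] = acc / steps
    return table
```

At render time, one lookup with (sf, sb) replaces the supersampling of the transfer function between the two samples.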

SLIDE 59

Pre-Integration (3)

[figure: analytical solution vs. post-interpolative TF vs. pre-integrated TF (with transfer-function supersampling) on classified continuous and discrete data; scalar value mapped to alpha value]

SLIDE 60

Pre-Integration (4)

[images: no pre-integration vs. pre-integration]

SLIDE 61

Pre-Integration (5)

  • Shading is more difficult to integrate, but possible
    – Avoid high-dimensional lookup tables by decomposition
    – [Lum et al. 2004], [Guetat et al. 2010]
  • Multi-dimensional transfer functions
    – 2D transfer functions possible: [Kraus 2008]
    – Higher dimensions difficult
  • Alternatives
    – Adaptive sampling
    – [Bergner et al. 2006]

SLIDE 62

Shading (1)

  • Make structures in volume data sets more realistic by applying an illumination model
    – Shade each sample in the volume like a surface
    – Any model used in real-time surface graphics is suitable
    – Common choice: Blinn-Phong illumination model

SLIDE 63

Shading (2)

  • Local illumination, similar to surface lighting
    – Lambertian reflection: light is reflected equally in all directions
    – Specular reflection: light is scattered around the direction of perfect reflection
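The local illumination described above can be sketched with the Blinn-Phong model named on the previous slide (a minimal sketch; the coefficient values ka, kd, ks and the shininess exponent are assumptions):

```python
# Blinn-Phong illumination for one volume sample, using the normalized
# gradient in place of a surface normal.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v)) or 1.0
    return [x / n for x in v]

def blinn_phong(normal, light, view, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    n, l, v = normalize(normal), normalize(light), normalize(view)
    h = normalize([l[i] + v[i] for i in range(3)])  # half-way vector
    diffuse = max(0.0, dot(n, l))                   # Lambertian term
    specular = max(0.0, dot(n, h)) ** shininess     # lobe around h
    return ka + kd * diffuse + ks * specular
```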

SLIDE 64

Shading (3)

[images: shaded vs. unshaded volume rendering]

SLIDE 65

Gradient Estimation (1)

  • The normalized gradient vector of the scalar field is used as a substitute for the surface normal
  • The gradient vector is the first-order derivative of the scalar field (partial derivatives in x-, y-, and z-direction)

slide-66
SLIDE 66

Gradient Estimation (2)

  • We can estimate the gradient vector using finite

differencing schemes, e.g. central differences:

  • Noisy data may require more complex estimation

schemes

66
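The central-differences scheme can be sketched as (unit grid spacing assumed):

```python
# Gradient estimation by central differences on a scalar field sampled at
# integer grid positions (`volume[x][y][z]`).

def gradient_central(volume, x, y, z):
    return (
        (volume[x + 1][y][z] - volume[x - 1][y][z]) / 2.0,  # d/dx
        (volume[x][y + 1][z] - volume[x][y - 1][z]) / 2.0,  # d/dy
        (volume[x][y][z + 1] - volume[x][y][z - 1]) / 2.0,  # d/dz
    )
```

On a linear field the estimate is exact; on noisy data, wider or smoothing kernels (e.g. Sobel) are more robust.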

SLIDE 67

Gradient Magnitude

  • The magnitude of the gradient vector can be used to measure the “surfaceness” of a point
    – Strong changes → high gradient magnitude
    – Homogeneity → low gradient magnitude
  • Applications
    – Use gradient magnitude to modulate the opacity of a sample
    – Interpolate between unshaded and shaded sample color using gradient magnitude as weight

SLIDE 68

Compositing

  • DVR (Direct Volume Rendering)
    – Physically based
    – Optical model for emission & absorption
    – May require a complex transfer function
    – Visual cues due to accumulation and shading
  • MIP (Maximum Intensity Projection)
    – Practically motivated
    – Project the maximum value along each viewing ray
    – A window/level setting suffices
    – Spatial ambiguities caused by order-independence

SLIDE 69

Maximum Intensity Projection

[plot: data value and running maximum value along the ray]

SLIDE 70

Direct Volume Rendering

[plot: data value, maximum value, accumulated opacity, and accumulated color along the ray]

SLIDE 71

Maximum Intensity Difference

  • Difference between the data value at the i-th sample along a viewing ray and the current maximum:

δi = fi - fmax,i if fi > fmax,i; δi = 0 otherwise

SLIDE 72

Maximum Intensity Difference

[plot: data value, maximum value, and maximum difference along the ray]

SLIDE 73

Modified Compositing

  • Accumulated opacity Ai and color Ci at the i-th sample along a viewing ray
  • Standard front-to-back compositing:

Ai = Ai-1 + (1 - Ai-1)αi
Ci = Ci-1 + (1 - Ai-1)αici

  • Modified compositing:

Ai = βiAi-1 + (1 - βiAi-1)αi
Ci = βiCi-1 + (1 - βiAi-1)αici

with αi the opacity of the sample, ci the color of the sample, and βi = 1 - δi

SLIDE 74

Direct Volume Rendering

[plot: data value, maximum value, accumulated opacity, and accumulated color along the ray]

SLIDE 75

Max. Int. Diff. Accumulation (MIDA)

[plot: data value, maximum value, accumulated opacity, and accumulated color along the ray]

SLIDE 76

Max. Int. Diff. Accumulation (MIDA)

[plot: data value, maximum value, accumulated opacity, and accumulated color along the ray]

SLIDE 77

Combining DVR, MIDA, and MIP (1)

  • MIDA features characteristics of DVR as well as MIP
  • Use it as an intermediate step for a smooth transition
  • Ability to make a DVR image more MIP-like and vice versa
  • User interface: just replace the “render mode” combo box with a slider

DVR ↔ MIDA ↔ MIP

SLIDE 78

Combining DVR, MIDA, and MIP (2)

  • Compositing as before, with αi the opacity and ci the color of the sample:

Ai = βiAi-1 + (1 - βiAi-1)αi
Ci = βiCi-1 + (1 - βiAi-1)αici

  • Transition parameter γ: βi = 1 - δi(1 + γ), so γ = -1 gives DVR (βi = 1) and γ = 0 gives MIDA (βi = 1 - δi)

SLIDE 79

Combining DVR, MIDA, and MIP (3)

  • Could let βi approach one only where the maximum changes as γ gets closer to MIP
    – Problem: non-graceful degradation if shading is used
    – Solution: perform the transition to MIP in image space by linearly interpolating between the MIDA and MIP result colors

MIDA (γ = 0) ↔ MIP (γ = 1)

SLIDE 80

Example: DVR, MIDA, MIP

SLIDE 81

Coping With Volume Size (1)

  • Multi-gigabyte volumes (e.g., 512 x 512 x 3396)
  • Bricking
    – Hierarchical (e.g., octree) vs. flat/fixed number of layers
    – Multiresolution
    – Out-of-core rendering

[Ljung et al., 2006]

SLIDE 82

Coping With Volume Size (2)

  • Cull bricks against the TF (needs min/max value per brick)
  • Render each brick separately or all in a single pass
  • GPU filtering: duplicate neighbor voxels

SLIDE 83

Segmented Data

  • Segmentation mask / object ID volume
  • Per object:
    – Enabling/disabling, transfer function, rendering mode, clipping planes, ...
  • Volume exploration (move objects, ...)

[Grimm et al., 2005] [Hadwiger et al., 2003]

SLIDE 84

Multi-Volume Rendering

  • Combine multiple modalities (CT, MR, fMR, PET, DSA, ...)
  • Per-volume transfer function
  • Combine with segmentation info
  • “Multi-volume rendering integral” not solved yet
  • Simple approaches:
    – Blend multiple volumes
    – Switch transfer function according to segmented object
    – Switch depending on a threshold (vessels from DSA, …)

[Beyer et al., 2007]

SLIDE 85

Opacity Peeling / Skull Peeling

  • Peel away layers according to accumulated opacity
    – Example: show the brain without segmentation
    – Skull peeling: CT and MR combined; peeling with CT, rendering the brain from MR

[Rezk-Salama et al., 2006] [Beyer et al., 2007]

[images: opacity layers 1-4]

SLIDE 86

Recent GPU Ray-Casting Approaches

  • Rectilinear grids
    – [Krüger and Westermann, 2003], [Röttger et al., 2003]
    – [Green, 2004] (in NVIDIA SDK)
    – [Stegmaier et al., 2005]
    – [Scharsach et al., 2006]
    – [Gobbetti et al., 2008]
  • Unstructured (tetrahedral) grids
    – [Weiler et al., 2002, 2003, 2004]
    – [Bernardon et al., 2004]
    – [Callahan et al., 2006]
    – [Muigg et al., 2007]

SLIDE 87

Basic Ray Setup / Termination

  • Two main approaches:
    – Procedural ray/box intersection [Röttger et al., 2003], [Green, 2004]
    – Rasterize the bounding box [Krüger and Westermann, 2003]
  • Some possibilities
    – Ray start position and exit check
    – Ray start position and exit position
    – Ray start position and direction vector

SLIDE 88

Single-Pass Ray Casting

  • Enabled by conditional loops in fragment shaders (Shader Model 3.0 and higher / NVIDIA CUDA)
  • Substitute multiple passes and early-z testing with a single loop and early loop exit
  • Volume rendering example in the NVIDIA CUDA SDK (procedural ray setup)
  • Alternative: image-based ray setup

SLIDE 89

Procedural Ray Setup/Termination

  • Everything handled in the fragment shader
  • Procedural ray / bounding box intersection
    – The ray is given by the camera position and the volume entry position
    – An exit criterion is needed
  • Pro: simple and self-contained
  • Con: full load on the fragment shader

SLIDE 90

CUDA Ray-Box Intersection Test

http://www.siggraph.org/education/materials/HyperGraph/raytrace/rtinter3.htm
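The intersection test behind the linked material is the slab method; a CPU sketch of it (illustrative, not the CUDA SDK's code):

```python
# Slab-method ray/box intersection: intersect the ray with the three pairs
# of axis-aligned planes and keep the overlapping parameter interval.

def intersect_box(origin, direction, box_min, box_max):
    """Return (hit, tnear, tfar) for the ray origin + t * direction."""
    tnear, tfar = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:  # parallel ray outside this slab
                return False, 0.0, 0.0
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tnear = max(tnear, min(t1, t2))  # farthest entry
        tfar = min(tfar, max(t1, t2))    # nearest exit
    hit = tnear <= tfar and tfar >= 0.0
    return hit, tnear, tfar
```

The returned [tnear, tfar] interval directly gives the ray segment to march through the volume.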

SLIDE 91

Rasterization-Based Ray Setup

  • Fragment = ray
  • Need ray start position and direction vector
  • Rasterize the bounding box
  • Identical for orthogonal and perspective projection!
SLIDE 92

Fragment Shader

  • Rasterize front faces of the volume bounding box
  • Texture coordinates are the volume position in [0,1]
  • Subtract the camera position
  • Repeatedly check for exit of the bounding box

SLIDE 93

CUDA Kernel

  • Image-based ray setup
    – Ray start image
    – Direction image
  • Ray-cast loop
    – Sample volume
    – Accumulate color and opacity
  • Terminate
  • Store output

SLIDE 94

Ray-Casting Optimizations (1)

  • Early ray termination
    – Isosurfaces: stop when the surface is hit
    – Direct volume rendering: stop when opacity >= threshold
  • Several possibilities
    – Older GPUs: multi-pass rendering with early-z test
    – Shader Model 3+: break out of the ray-casting loop
    – Current GPUs: early loop exit works well

SLIDE 95

Ray-Casting Optimizations (2)

  • Empty space skipping
    – Skip transparent samples
    – Depends on the transfer function
    – Start casting close to the first hit
  • Several possibilities
    – Per-sample check of opacity (expensive)
    – Traverse a hierarchy (e.g., octree) or regular grid
    – These are image-order: what about object-order?

SLIDE 96

Object-Order Empty Space Skipping (1)

  • Modify the initial rasterization step

[images: rasterize bounding box vs. rasterize “tight” bounding geometry]

SLIDE 97

Object-Order Empty Space Skipping (2)

  • Store min-max values of volume blocks
  • Cull blocks against the transfer function or iso value
  • Rasterize front and back faces of active blocks

SLIDE 98

Object-Order Empty Space Skipping (3)

  • Rasterize front and back faces of active min-max bricks
  • Start rays on brick front faces
  • Terminate when
    – Full opacity is reached, or
    – A back face is reached

SLIDE 99

Object-Order Empty Space Skipping (4)

  • Rasterize front and back faces of active min-max bricks
  • Start rays on brick front faces
  • Terminate when
    – Full opacity is reached, or
    – A back face is reached
  • Not all empty space is skipped

SLIDE 100

Memory Management

  • What happens if the data set is too large to fit into local GPU memory? Divide the data set into smaller chunks (bricks)
  • One plane of voxels must be duplicated for correct interpolation across brick boundaries; otherwise interpolation is incorrect

SLIDE 101

Simple Volume Bricking (1)

  • Two subdivision levels
  • Coarse bricks stored in a cache texture (green)
  • Finer blocks for object-order empty space skipping (blue)
  • Min-max values of blocks/bricks
  • Swap bricks in and out of the cache texture on demand
  • Out-of-core brick cache on disk

[example volume: 1536 x 576 x 352]

SLIDE 102

Simple Volume Bricking (2)

  • Duplicate neighbor voxels for full-speed filtering
  • Store n³ bricks as (n+1)³
    – ~10% overhead with 32³ bricks
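The quoted overhead is a one-line calculation:

```python
# Storage overhead of duplicating one voxel plane per brick:
# an n^3 brick is stored as (n+1)^3 texels.

def brick_overhead(n):
    return (n + 1) ** 3 / n ** 3 - 1.0
```

For n = 32 this gives 33³/32³ - 1 ≈ 9.7%, i.e. roughly the 10% quoted on the slide; smaller bricks pay proportionally more.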

SLIDE 103

Simple Volume Bricking (3)

  • Layout/index texture for address translation
  • Supports multi-resolution rendering
  • Maps virtual volume coordinates to the physical cache texture

SLIDE 104

Address Translation (1)

  • Conceptual approach: the final coordinate in the cache texture is computed from the virtual volume position, the brick origin in the virtual volume, a scale factor according to resolution, the brick position in the cache, and an additional offset to account for the duplicated border
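The conceptual mapping can be sketched in 1D (a simplified illustrative sketch; the duplicated-border offset and the resolution scale factor are omitted, and the brick-to-cache mapping would come from the layout texture):

```python
# 1D brick address translation: map a virtual volume coordinate to a
# coordinate in the physical cache texture.

BRICK = 32  # brick resolution (voxels)

def translate(virtual_pos, brick_pos_in_cache):
    """virtual_pos: voxel coordinate in the virtual volume;
    brick_pos_in_cache: origin of the containing brick in the cache."""
    brick_origin = (virtual_pos // BRICK) * BRICK  # brick origin in virtual volume
    offset = virtual_pos - brick_origin            # position inside the brick
    return brick_pos_in_cache + offset
```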

SLIDE 105

Address Translation (2)

  • Translation in the shader: input is the virtual volume position plus a single lookup in the layout texture; volume size, cache size, and a constant scaling factor are known; output is the coordinate in the cache texture
SLIDE 106

Address Translation (3)

  • Layout texture generation: per brick, store the index of the brick in the cache and the storage resolution of the brick, derived from the cache texture size in texels, the index of the brick in the volume, the original brick resolution, and the resolution scale

SLIDE 107

Thanks!

  • Markus Hadwiger
  • Christof Rezk-Salama
  • Klaus Engel
  • Henning Scharsach
  • Johanna Beyer