Scalable many-light methods
Jaroslav Křivánek
Charles University, Prague
Instant radiosity
Approximate indirect illumination in two steps:
1. Generate virtual point lights (VPLs)
2. Render with the VPLs

Instant radiosity with glossy surfaces
[Images: ground truth vs. 1,000 VPLs vs. 100,000 VPLs]
– Per-pixel basis
– Per-image basis
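The two instant-radiosity steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `trace_ray(origin, direction)` is a hypothetical scene query returning the first hit point (or None), and the fixed diffuse albedo is an assumption of the sketch.

```python
import math
import random

def _rand_dir(rng):
    # Uniformly distributed direction: normalized Gaussian vector.
    v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def generate_vpls(light_pos, light_power, n_paths, trace_ray,
                  albedo=0.5, seed=0):
    """Trace particles from the light; each surface hit deposits a VPL."""
    rng = random.Random(seed)
    vpls = [(light_pos, light_power)]        # the primary light itself
    for _ in range(n_paths):
        pos, power = light_pos, light_power / n_paths
        while True:
            hit = trace_ray(pos, _rand_dir(rng))
            if hit is None:                  # particle left the scene
                break
            power *= albedo                  # energy after a diffuse bounce
            vpls.append((hit, power))        # deposit a VPL at the hit point
            if rng.random() > albedo:        # Russian-roulette termination
                break
            pos = hit
    return vpls
```

Rendering then shades each visible point by summing the visibility-tested contributions of all VPLs, which is exactly the many-light sum the rest of the talk accelerates.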
Program of Computer Graphics, Cornell University
Environment map lighting & indirect illumination: 111 s. Textured area lights & indirect illumination: 98 s.
(640×480, anti-aliased, glossy materials)
– Thousands to millions of lights
– Sub-linear cost
[Plot: render time (s) vs. number of point lights (100–4,000) for Standard, Ward, and Lightcuts renderers; Tableau scene]
– Area lights
– HDR environment maps
– Sun & sky light
– Indirect illumination
– Enables tradeoffs between components
Area lights + Sun/sky + Indirect
Visible surface
Camera
– Approximate many lights by a single brighter light (the representative light)
– Binary tree of lights and clusters
[Diagram: binary light tree — leaves are individual lights, inner nodes are clusters]
– A set of nodes that partitions the lights into clusters
[Diagram: light tree over four lights #1–#4; leaves are individual lights, interior nodes are clusters, each storing a representative light]

Three example cuts: {#1, #2, #4}, {#1, #3, #4}, {#1, #4}
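A minimal light-tree construction might look like the sketch below. The median split along the longest bounding-box axis and the intensity-proportional choice of representative are assumptions consistent with the slides, not the exact published construction.

```python
import random

class Light:
    def __init__(self, pos, intensity):
        self.pos, self.intensity = pos, intensity

class LightCluster:
    def __init__(self, lights, left=None, right=None, rng=random):
        self.left, self.right = left, right
        self.total_intensity = sum(l.intensity for l in lights)
        # Representative light: picked with probability proportional
        # to intensity; it later stands in for the whole cluster.
        self.rep = rng.choices(lights, weights=[l.intensity for l in lights])[0]
        # Bounding box of the cluster, used later for error bounds.
        self.bbox_min = tuple(min(l.pos[i] for l in lights) for i in range(3))
        self.bbox_max = tuple(max(l.pos[i] for l in lights) for i in range(3))

def build_light_tree(lights):
    """Binary tree: leaves are individual lights, inner nodes are clusters."""
    if len(lights) == 1:
        return LightCluster(lights)
    # Split at the median along the longest bounding-box axis.
    lo = [min(l.pos[i] for l in lights) for i in range(3)]
    hi = [max(l.pos[i] for l in lights) for i in range(3)]
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    lights = sorted(lights, key=lambda l: l.pos[axis])
    mid = len(lights) // 2
    return LightCluster(lights,
                        build_light_tree(lights[:mid]),
                        build_light_tree(lights[mid:]))
```

Any set of tree nodes whose leaves partition the lights is then a valid cut.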
– Convert illumination to point lights
– Build light tree
– Choose a cut to approximate the illumination
– Apply captured light to the scene: convert to directional point lights using [Agarwal et al. 2003]
– Convert indirect to direct illumination using instant radiosity [Keller 97]
– More lights = more indirect detail
– Choose a cut to approximate the local illumination
– Cluster error is kept below a contrast visibility threshold: a fixed percentage of the total signal (2% in our results)
– Below this threshold, transitions between cuts are not visible; the threshold is used to select the cut
Exact sum over all individual lights (the prototype currently supports diffuse, Phong, and Ward materials):

result = Σ_{i ∈ lights} M_i · G_i · V_i · I_i

(M = material term, G = geometric term, V = visibility, I = intensity.)

Cluster approximation, where j is the representative light:

result ≈ M_j · G_j · V_j · Σ_{i ∈ cluster} I_i

Cluster error bound (ub = upper bound):

error < M_ub · G_ub · V_ub · Σ_{i ∈ cluster} I_i

– Visibility bound: V_ub = 1 (trivial)
– The intensity sum is known
– Material and geometric terms are bounded using the cluster bounding volume
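Cut selection can then be sketched as greedy refinement of the node with the largest error bound. The simple 1/r² geometric bound and the unit material/visibility bounds below are illustrative assumptions of this sketch, not the paper's full bounds.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    rep_pos: Tuple[float, float, float]   # representative light position
    intensity: float                      # total cluster intensity
    bbox_min: Tuple[float, float, float]
    bbox_max: Tuple[float, float, float]
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def _estimate(node, p):
    # M_j * G_j * V_j * sum(I_i) with M = V = 1 and G = 1/r^2 in this sketch.
    d2 = sum((a - b) ** 2 for a, b in zip(node.rep_pos, p))
    return node.intensity / max(d2, 1e-6)

def _upper_bound(node, p):
    # error < M_ub * G_ub * V_ub * sum(I_i): V_ub = M_ub = 1 here, and G_ub
    # uses the squared distance from p to the cluster bounding box.
    d2 = sum(max(lo - v, 0.0, v - hi) ** 2
             for lo, hi, v in zip(node.bbox_min, node.bbox_max, p))
    return node.intensity / max(d2, 1e-6)

def select_cut(root, p, rel_threshold=0.02):
    """Refine until every cluster's error bound is below 2% of the estimate."""
    cut, total = [root], _estimate(root, p)
    while True:
        inner = [n for n in cut if n.left is not None]
        if not inner:
            return cut, total             # cut is all leaves
        worst = max(inner, key=lambda n: _upper_bound(n, p))
        if _upper_bound(worst, p) <= rel_threshold * total:
            return cut, total             # every bound is small enough
        cut.remove(worst)                 # replace worst node by its children
        total -= _estimate(worst, p)
        for child in (worst.left, worst.right):
            cut.append(child)
            total += _estimate(child, p)
```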
Kitchen scene, 388K polygons, 4,608 lights (72 area sources): Lightcuts 128 s vs. reference 1,096 s, with error images (shown ×16)
Lightcuts 128s 4 608 Lights (Area lights only)
Lightcuts 290s 59 672 Lights (Area + Sun/sky + Indirect)
(only 54 to area lights)
[Images: Lightcuts result, reference, error ×16, and cut-size visualization]
– Thousands to millions of lights
– Sub-linear cost
[Plots: render time (s) vs. number of point lights for Standard, Ward, and Lightcuts renderers; Tableau and Kitchen scenes]
– Locally adaptive representation (the cut)
– Most important lights always sampled
– Large cuts in dark regions
– Need tight upper bounds for BRDFs
Multidimensional lightcuts handle several effects at once:
– Complex illumination
– Anti-aliasing
– Motion blur
– Participating media
– Depth of field

Each pixel is an integral over many dimensions of a sum over the lights:

Pixel = ∫_time ∫_volume ∫_aperture ∫_pixel area Σ_lights L(x, ω) …

– Requires many samples
[Plot: image time (s) vs. samples per pixel — supersampling vs. multidimensional. Relative cost: direct only 1×; +indirect 1.3×; +volume 1.8×; +motion blur 2.2×]
Two point sets:
– Light points (L)
– Gather points (G)

[Diagram: camera; gather points on the surfaces seen through the pixel; light points throughout the scene]
Pixel = Σ_{(j,i) ∈ G×L} S_j · M_ji · G_ji · V_ji · I_i

where S_j is the strength of gather point j.
– Up to billions of pairs per pixel
– Cartesian product of two trees (gather & light)
[Diagram: gather tree over G0–G2 and light tree over L0–L6; pairing every gather node with every light node yields the product graph]
– Minkowski sums – Reuse bounds from Lightcuts
– Rasterize into cube-maps
– Per image: create lights and light tree
– Per pixel: create gather points and gather tree; adaptively refine clusters in the product graph until all cluster errors < perceptual metric
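The per-pixel refinement loop can be sketched with a max-error priority queue over (gather, light) cluster pairs. The tuple-based trees, the pluggable `error_bound`, and the light-tree-first split rule are assumptions of this sketch.

```python
import heapq
import itertools

# A tree node is (payload, left, right); a leaf has left = right = None.
def leaf(v):
    return (v, None, None)

def node(v, left, right):
    return (v, left, right)

def refine_product_graph(gather_root, light_root, error_bound, threshold):
    """Refine (gather, light) pairs until every pair's error bound is
    below the threshold; returns the final cut of the product graph."""
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    heap = [(-error_bound(gather_root, light_root), next(tie),
             gather_root, light_root)]
    cut = []
    while heap:
        neg_err, _, g, l = heapq.heappop(heap)
        if -neg_err <= threshold or (g[1] is None and l[1] is None):
            cut.append((g, l))            # accurate enough, or both leaves
            continue
        # Split the pair in one of the two trees (light tree first here).
        if l[1] is not None:
            children = [(g, l[1]), (g, l[2])]
        else:
            children = [(g[1], l), (g[2], l)]
        for gg, ll in children:
            heapq.heappush(heap, (-error_bound(gg, ll), next(tie), gg, ll))
    return cut
```

With a trivial bound (product of node payloads), refinement stops as soon as every pair is either below threshold or a leaf pair.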
Refinement starts at the source node of the product graph (the root gather cluster paired with the root light cluster); each step refines the current pair in either the gather tree or the light tree.
– 2% of pixel value (Weber’s law)
– Some types of light paths are not included
– The prototype only supports diffuse, Phong, and Ward materials and isotropic media
7,047,430 pairs per pixel; time 590 s; average cut size 174 (0.002%)
[Plot: image time (s) vs. average gather points per pixel — multidimensional, original lightcuts, eye rays only]
Our result: 9.8 min. Metropolis: 148 min (15×), with visible noise and 5% brighter (caustics etc.). Zoomed insets shown.
5,518,900 pairs per pixel; time 705 s; average cut size 936 (0.017%)
180 gather points × 13,000 lights = 234,000 pairs per pixel; average cut size 447 (0.19%)

114,149,280 pairs per pixel; average cut size 821; time 1,740 s
Brute force: 10 min / 13 min / 20 min
Our result: 3.8 s / 13.5 s / 16.9 s
(2,000,000)
The image is a sum over lights: arrange the contribution of every light to every pixel in a matrix (rows = pixels, columns = lights); the final image is the sum of the columns.
Matrix axes: rows = pixels, columns = lights.
Point-to-point visibility: ray tracing.
Point-to-many-points visibility: shadow mapping.
One column of the matrix = one light's contribution to all surface samples, computed with a shadow map at the light position.
One row of the matrix = one surface sample lit by all lights, computed with a shadow map at the sample position.
Idea: compute only a small subset of columns and take their weighted sum.
Algorithm: compute rows → choose columns and weights → compute those columns → weighted sum.
The key question: how to choose the columns and weights?
Clustering: partition the columns and choose representative columns.
Reduced columns (the sampled rows of each column):
– Norms of the reduced columns represent the “energy” of each light
– Normalized reduced columns represent the “kind” of the light's contribution
Reduced columns are vectors in a high-dimensional space. Visualize each as a ball: radius = weight, position = information vector. Within a cluster, the representative is chosen with probability proportional to weight and then scaled.
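Choosing a cluster's representative with probability proportional to weight and then scaling keeps the estimate unbiased; a minimal sketch, with illustrative names:

```python
import random

def representative(columns, weights, rng=random):
    """Pick column i with probability w_i / sum(w), scale it by sum(w) / w_i,
    so the expected value equals the sum of the cluster's columns."""
    total_w = sum(weights)
    i = rng.choices(range(len(columns)), weights=weights)[0]
    scale = total_w / weights[i]
    return [scale * v for v in columns[i]]
```

Averaging many independent draws converges to the exact sum of the cluster's columns, which is why a single scaled representative per cluster suffices on average.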
The total cost of all clusters sums the per-cluster costs; a cluster's cost sums, over all pairs of columns in it, the product of the weights times the squared distance between the information vectors:

cost(C) = Σ_{i,j ∈ C} w_i · w_j · ‖x_i − x_j‖²

– Strong but similar columns can be clustered
– Weak columns can be clustered more easily
– Columns with various intensities can be clustered
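The cost metric above, written as a direct function (unordered pairs assumed):

```python
def cluster_cost(weights, vectors):
    """Sum over all unordered pairs (i, j) in one cluster of
    w_i * w_j * ||x_i - x_j||^2."""
    n = len(weights)
    return sum(
        weights[i] * weights[j]
        * sum((a - b) ** 2 for a, b in zip(vectors[i], vectors[j]))
        for i in range(n) for j in range(i + 1, n))

def total_cost(clusters):
    """Total cost: sum of the costs of all clusters (each given as
    a (weights, vectors) pair)."""
    return sum(cluster_cost(w, x) for w, x in clusters)
```

The formula makes the bullets concrete: weak columns (small w) add little cost wherever they land, and strong but similar columns (small pairwise distance) are also cheap to merge.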
– 100,000 points, 1,000 clusters
– Random sampling; divide & conquer
Pipeline:
1. Compute rows (GPU)
2. Assemble rows into the reduced matrix
3. Cluster reduced columns
4. Choose representatives
5. Compute columns (GPU)
6. Weighted sum
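On an explicit pixels × lights matrix, the whole pipeline can be sketched in NumPy as below. This is a toy stand-in: the one-shot nearest-center assignment replaces the full clustering step, rows and columns are matrix slices rather than shadow-mapped renders, and all names are illustrative.

```python
import numpy as np

def row_column_sampling(A, n_rows, n_clusters, rng):
    """Approximate the image (sum of A's columns) from a few rows
    plus one representative full column per cluster."""
    n_pix, n_lights = A.shape
    # 1-2. Compute random rows and assemble the reduced matrix.
    rows = rng.choice(n_pix, size=n_rows, replace=False)
    R = A[rows, :]
    # 3. Cluster reduced columns: weight = norm ("energy"),
    #    information vector = normalized column ("kind" of contribution).
    w = np.linalg.norm(R, axis=0)
    info = R / np.maximum(w, 1e-12)
    centers = rng.choice(n_lights, size=n_clusters, replace=False,
                         p=w / w.sum())
    d2 = ((info[:, :, None] - info[:, centers][:, None, :]) ** 2).sum(axis=0)
    assign = d2.argmin(axis=1)          # nearest center per column
    # 4-6. Pick a representative per cluster (probability ~ weight),
    #      compute that full column, and accumulate the weighted sum.
    image = np.zeros(n_pix)
    for c in range(n_clusters):
        members = np.flatnonzero(assign == c)
        if members.size == 0 or w[members].sum() == 0.0:
            continue
        j = rng.choice(members, p=w[members] / w[members].sum())
        image += (w[members].sum() / w[j]) * A[:, j]  # unbiased per cluster
    return image
```

With as many clusters as lights the estimate degenerates to the exact column sum; fewer clusters trade accuracy for far fewer full-column computations.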
Our result: 16.9 s (300 rows + 900 columns). Reference: 20 min (using all 100k lights). Difference image shown at 5×.
Our result: 13.5 s (432 rows + 864 columns). Reference: 13 min (using all 100k lights). Difference image shown at 5×.
Our result: 3.8 s (100 rows + 200 columns). Reference: 10 min (using all 100k lights). Difference image shown at 5×.