Philipp Slusallek
Computer Graphics
- Introduction to Ray Tracing -
Rendering Algorithms
Rendering
– Definition: Given a 3D scene description and a camera as input, generate a 2D image as a view from the camera of the 3D scene
– Ray Tracing
– Rasterization
– 3D geometry of objects in a scene – Geometric primitives – triangles, polygons, spheres, …
– Color, texture, absorption, reflection, refraction, subsurface scattering – Diffuse, glossy, mirror, glass, …
– Position and emission characteristics of light sources – Note: Light is also reflected off of surfaces!
– Assumption: air/empty space is totally transparent (no participating media or volumes)
– View point, viewing direction, field of view, resolution, …
– By simulating light transport
– Dynamic equilibrium
– Shoot photons from the light sources into scene – Scatter at surfaces and record when a detector is hit
– Particle or Light Tracing
– Start at the detector (camera) – Trace only paths that might transport light towards camera
– Ray Tracing
– Easy to understand and implement – Delivers “correct“ images by default
– Many optical global effects – Shadows, reflections, refractions, …
– Efficient real-time implementation in SW and HW
– Can work in parallel and distributed environments
– Logarithmic scalability with scene size: O(log n) vs. O(n)
– Output-sensitive and demand-driven approach
– Empedocles (492-432 BC), Renaissance (Dürer, 1525), … – Used in lens design, geometric optics, neutron transport, …
Perspective Machine, Albrecht Dürer
– Rays from viewpoint along viewing directions into 3D scene – (At least) one ray per picture element (pixel)
– Traversal of spatial index structures
– Ray-primitive intersection → hit point
– Compute light towards camera → pixel color
– Light transport is needed for the computation
– Computed through recursive tracing of rays – Can be hard to determine correctly
Ray tracing pipeline: Ray Generation → Ray Traversal → Intersection → Shading → Pixel Color
– Shading may spawn new rays that re-enter the pipeline at Ray Generation
– Interaction of light & material at intersections – Trace rays to light sources – Recursively trace new ray paths in reflection & refraction directions
[Figure: primary ray from the lens/pupil through an image-plane pixel into the scene; reflected ray, refracted ray, and shadow rays toward the light source]
[Figure: rays from the eye reflected and refracted among sphere, cylinder, and cube]
Trace(ray)
– Search the next intersection point (hit, material)
– Return Shade(ray, hit, material)
Shade(ray, hit, material)
– For each light source
– If not occluded: calculate reflected radiance at hit
– Add its radiance to the reflected radiance
– If mirroring material
– Recursively trace a ray in the mirror direction and add its contribution – Same for transmission
– Return reflected radiance
Shadow-ray visibility test
– Return false, if an intersection with distance < dist has been found
– Can be changed to handle transparent objects as well
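The Trace/Shade/shadow-ray structure described above can be sketched as follows. This is a minimal Python sketch, not the lecture's implementation: the sphere-only scene, the dictionary fields (`center`, `radius`, `color`, `mirror`), and all function names are illustrative assumptions.

```python
import math

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v):   return mul(v, 1.0 / math.sqrt(dot(v, v)))

def intersect_sphere(o, d, center, radius, eps=1e-6):
    """Smallest t > eps where o + t d hits the sphere, or None."""
    oc = sub(o, center)
    a = dot(d, d)
    b = 2.0 * dot(d, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
        if t > eps:
            return t
    return None

def find_hit(scene, o, d):
    """Search the next intersection point -> (t, object) or None."""
    best = None
    for obj in scene:
        t = intersect_sphere(o, d, obj["center"], obj["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, obj)
    return best

def occluded(scene, o, d, dist):
    """Shadow ray: does anything block the segment of length dist?"""
    hit = find_hit(scene, o, d)
    return hit is not None and hit[0] < dist

def shade(scene, lights, o, d, t, obj, depth):
    p = add(o, mul(d, t))
    n = norm(sub(p, obj["center"]))
    color = (0.0, 0.0, 0.0)
    for light in lights:                       # for each light source
        to_l = sub(light["pos"], p)
        dist = math.sqrt(dot(to_l, to_l))
        l = mul(to_l, 1.0 / dist)
        if not occluded(scene, p, l, dist):    # shadow ray
            color = add(color, mul(obj["color"], max(0.0, dot(n, l))))
    if obj.get("mirror") and depth > 0:        # recurse in mirror direction
        r = sub(d, mul(n, 2.0 * dot(d, n)))
        color = add(color, mul(trace(scene, lights, p, r, depth - 1), 0.5))
    return color

def trace(scene, lights, o, d, depth=3):
    hit = find_hit(scene, o, d)
    if hit is None:
        return (0.0, 0.0, 0.0)                 # background: black
    return shade(scene, lights, o, d, hit[0], hit[1], depth)
```

Note how the recursion depth is bounded explicitly; without it, two facing mirrors would recurse forever.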
– Diffuse object: isotropic reflection of illumination at the hit point
– Perfect reflection/refraction (mirror, glass)
– Non-Lambertian Reflectance
– Point/directional light sources – instead of area light sources
– A few color bands (typically RGB) – instead of the full spectrum
– Constant ambient term – instead of full indirect/global illumination
– Ambient: constant, non-directional background light – Diffuse: light reflected uniformly in all directions – Specular: perfect reflection, refraction
– Often using Phong/Blinn shading model (or a variation thereof) – But physically-based models are available as well
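The ambient/diffuse/specular split above can be written down directly. A minimal Python sketch of the classic Phong model; the coefficient defaults and the function name are illustrative assumptions, not taken from the lecture.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def phong(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Classic Phong: ambient + diffuse + specular.
    n: surface normal, l: direction to the light, v: direction to the
    viewer. Coefficients ka/kd/ks and shininess are illustrative."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(0.0, dot(n, l))              # light reflected uniformly
    # Mirror l about n to get the perfect reflection direction
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(0.0, dot(r, v)) ** shininess if diffuse > 0.0 else 0.0
    return ka + kd * diffuse + ks * specular   # ambient is constant
```

With light and viewer both along the normal, the result is ka + kd + ks; with the light behind the surface, only the constant ambient term remains.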
– Hidden surface removal
– Shadow computation: visibility between a surface point and a light source
– Exact simulation of some light paths
– Many reflections (exponential increase in number of rays) – Indirect illumination requires many rays to sample all incoming directions
– Solved with Path Tracing (→ later)
– By simulating light transport
– Dependable, physically-correct visualization
– Logarithmic scaling in scene size
[Figure: scenes with 12.5 million and ~1 billion triangles]
– Nvidia RTX (Turing): hardware ray tracing (up to 10 GRays/s)
– Only used as an off-line technique – Was computationally far too demanding (minutes to hours per frame) – Believed to not be suitable for a HW implementation
– Interactive ray tracing on supercomputers [Parker, U. Utah‘98] – Interactive ray tracing on PCs [Wald‘01] – Distributed Real-time ray tracing on PC clusters [Wald’01] – RPU: First full HW implementation [Siggraph 2005] – Commercial tools: Embree/OSPRey (Intel/CPU), OptiX (Nvidia/GPU) – Complete film industry has switched to ray tracing (Monte-Carlo)
– Symposium on Interactive Ray Tracing, now High-Performance Graphics (HPG)
– Research: PBRT (offline, physically-based, based on book, OSS), Mitsuba renderer (EPFL), Rodent (SB), … – Commercial: V-Ray (Chaos Group), Corona (Render Legion), VRED (Autodesk), MentalRay/iRay (MI), …
– Special type of query
– Volume computation – Sound wave tracing – Collision detection – …
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³: origin and direction
– The ray consists of all points r(t) with t ∈ ℝ₀⁺
[Figure: points on a ray at t = 1, 2, 3]
// For given image resolution {resx, resy}
// Loop over pixel raster coordinates [0, res-1]
for (prcx = 0; prcx < resx; prcx++)
  for (prcy = 0; prcy < resy; prcy++) {
    // Normalized device coordinates [0, 1]
    ndcx = (prcx + 0.5) / resx;
    ndcy = (prcy + 0.5) / resy;
    // Screen space coordinates [-1, 1]
    sscx = ndcx * 2 - 1;
    sscy = ndcy * 2 - 1;
    // Generate direction through pixel center
    d = f + sscx * x + sscy * y;
    d = d / |d|; // May normalize here
    // Trace ray and assign color to pixel
    color = trace_ray(o, d);
    write_pixel(prcx, prcy, color);
  }
[Figure: camera setup – up-vector u, focal vector f, image-plane spanning vectors x and y, and ray direction d through the image plane]
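The loop above translates directly into a runnable form. A Python sketch under the same assumptions: camera origin o, focal vector f to the image-plane center, and spanning vectors x, y as in the figure; the names are kept from the pseudocode.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def generate_rays(o, f, x, y, resx, resy):
    """Yield ((prcx, prcy), origin, unit direction) for every pixel center.
    o: camera position, f: focal vector to the image-plane center,
    x, y: image-plane spanning vectors (half extents)."""
    for prcx in range(resx):
        for prcy in range(resy):
            # Pixel raster -> normalized device coordinates [0, 1]
            ndcx = (prcx + 0.5) / resx
            ndcy = (prcy + 0.5) / resy
            # -> screen space coordinates [-1, 1]
            sscx = ndcx * 2.0 - 1.0
            sscy = ndcy * 2.0 - 1.0
            # Direction through the pixel center
            d = tuple(fc + sscx * xc + sscy * yc
                      for fc, xc, yc in zip(f, x, y))
            yield (prcx, prcy), o, normalize(d)
```

For a 1×1 image the single pixel center lands at screen coordinates (0, 0), so the ray direction is simply the normalized focal vector.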
– Sphere: c ∈ ℝ³, r ∈ ℝ: center and radius
– ∀ p ∈ ℝ³: p ∈ S ⇔ (p − c)·(p − c) − r² = 0
[Figure: sphere with center c and radius r; points p1, p2 with offset vectors p1 − c, p2 − c]
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Sphere: c ∈ ℝ³, r ∈ ℝ: ∀ p ∈ ℝ³: p ∈ S ⇔ (p − c)·(p − c) − r² = 0
– Algebraic approach: substitute the ray equation into
(p − c)·(p − c) − r² = 0 with p = o + t d
– Yields a quadratic equation in t:
t² (d·d) + 2t d·(o − c) + (o − c)·(o − c) − r² = 0
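Solving this quadratic with the standard formula gives the intersection routine. A Python sketch of the algebraic approach; the function name and the handling of a ray origin inside the sphere are illustrative choices.

```python
import math

def ray_sphere(o, d, c, r):
    """Smallest non-negative t with (o + t d - c)·(o + t d - c) = r²,
    or None if the ray misses the sphere."""
    oc = tuple(oi - ci for oi, ci in zip(o, c))
    A = sum(di * di for di in d)                       # d·d
    B = 2.0 * sum(di * oci for di, oci in zip(d, oc))  # 2 d·(o - c)
    C = sum(oci * oci for oci in oc) - r * r           # (o-c)·(o-c) - r²
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                                    # no real root: miss
    sq = math.sqrt(disc)
    t0, t1 = (-B - sq) / (2.0 * A), (-B + sq) / (2.0 * A)
    if t0 >= 0.0:
        return t0                                      # entry point
    if t1 >= 0.0:
        return t1                                      # origin inside sphere
    return None                                        # sphere behind ray
```

A ray from the origin along +z toward a unit sphere centered at (0, 0, 5) hits the front of the sphere at t = 4.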
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Sphere: c ∈ ℝ³, r ∈ ℝ: ∀ p ∈ ℝ³: p ∈ S ⇔ (p − c)·(p − c) − r² = 0
– Geometric approach
– Project c − o onto the ray direction → closest point b on the ray
– Such that ∠OBC = 90°
– Intersection iff the distance d between c and b satisfies d ≤ r
– Be aware of floating point issues if o is far from the sphere
[Figure: sphere center c, radius r, closest point b on the ray, distance d]
– Plane: n, a ∈ ℝ³: normal and point in P (Hesse normal form for the plane)
– ∀ p ∈ ℝ³: p ∈ P ⇔ (p − a)·n = 0
[Figure: plane with normal n and point a; points p1, p2 with offset vectors p1 − a, p2 − a]
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Plane: n, a ∈ ℝ³: normal and point in P
– Plane equation: p ∈ P ⇔ (p − a)·n = 0 ⇔ p·n − D = 0, with D = a·n
– Substitute ray parameterization: (o + t d)·n − D = 0
– Solve for t: t = (D − o·n) / (d·n)
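The one-line solve for t translates into a short routine. A Python sketch; the epsilon guard against rays parallel to the plane is an added practical detail, not from the slides.

```python
def ray_plane(o, d, n, a, eps=1e-9):
    """t with (o + t d)·n = a·n, or None if the ray is (near) parallel
    to the plane. o, d: ray; n, a: plane normal and point."""
    dot_dn = sum(di * ni for di, ni in zip(d, n))
    if abs(dot_dn) < eps:
        return None                                  # parallel: no unique t
    D = sum(ai * ni for ai, ni in zip(a, n))         # plane offset D = a·n
    dot_on = sum(oi * ni for oi, ni in zip(o, n))
    return (D - dot_on) / dot_dn                     # t = (D - o·n) / (d·n)
```

A negative return value means the plane lies behind the ray origin; callers typically reject t < 0.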
– Triangle: a, b, c ∈ ℝ³: vertices
– Affine combinations of a, b, c → points in the plane
– ∀ p ∈ ℝ³: p ∈ T ⇔ ∃ λ1,2,3 ∈ ℝ₀⁺, λ1 + λ2 + λ3 = 1 and p = λ1 a + λ2 b + λ3 c
– λ1,2,3: barycentric coordinates, λ1 + λ2 + λ3 = 1
– λ1 = S_pbc / S_abc, etc.
– S: signed area of a triangle, based on CW/CCW orientation
[Figure: triangle a, b, c with interior point p and barycentric coordinates λ1, λ2, λ3]
[Figure: barycentric coordinates over triangle a, b, c – vertices (1,0,0), (0,1,0), (0,0,1); edge midpoints (1/2,1/2,0), (1/2,0,1/2), (0,1/2,1/2); centroid (1/3,1/3,1/3); iso-lines λ3 = 1/2 and λ3 = 1/3]
– Signed areas of subtriangles
– Can be done in 2D after “projection” onto a major plane, chosen by the largest component of the normal vector
– Edges of neighboring triangles might not be identical due to floating point inaccuracies
– Need a better method!
[Figure: “projection” of a triangle with normal n onto one of the major planes x, y, z]
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Triangle: a, b, c ∈ ℝ³
[Figure: ray origin o and triangle a, b, c]
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Triangle: a, b, c ∈ ℝ³
– n_ab = (b − o) × (a − o)
– |n_ab| is the signed area of triangle OAB (×2)
– λ3*(t) = n_ab · t d
– λ1,2*(t) = n_bc,ca · t d
– Normalize: λ_i = λ_i*(t) / (λ1*(t) + λ2*(t) + λ3*(t)), i = 1, 2, 3
– t cancels out, so plain d can be used
– Compute p = λ1 a + λ2 b + λ3 c
[Figure: ray through triangle a, b, c with normal n_ab of triangle OAB]
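The whole procedure can be sketched in Python. The cyclic normals n_ab, n_bc, n_ca and the rejection of negative barycentric coordinates follow the slides; the function name and the degenerate-case guard are illustrative assumptions.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

def ray_triangle(o, d, a, b, c, eps=1e-12):
    """Barycentric coordinates (l1, l2, l3) and hit point p,
    or None if the ray misses the triangle."""
    ao, bo, co = sub(a, o), sub(b, o), sub(c, o)
    n_ab = cross(bo, ao)          # scaled signed area of triangle OAB
    n_bc = cross(co, bo)
    n_ca = cross(ao, co)
    l3 = dot(n_ab, d)             # t cancels in the normalization,
    l1 = dot(n_bc, d)             # so plain d is enough
    l2 = dot(n_ca, d)
    s = l1 + l2 + l3
    if abs(s) < eps:
        return None               # degenerate or parallel to the plane
    l1, l2, l3 = l1 / s, l2 / s, l3 / s
    if l1 < 0.0 or l2 < 0.0 or l3 < 0.0:
        return None               # hit point outside the triangle
    p = tuple(l1 * ai + l2 * bi + l3 * ci
              for ai, bi, ci in zip(a, b, c))
    return (l1, l2, l3), p
```

Because all three λ* are computed from one consistent set of edge normals, shared edges of neighboring triangles yield identical values, avoiding the cracks of the 2D-projection method.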
– Implicit surface: f(x, y, z) = v
– Substitute the ray parameterization and solve for t:
– x = x_o + t x_d
– y = y_o + t y_d
– z = z_o + t z_d
– Ray: r(t) = o + t d, t ∈ ℝ; o, d ∈ ℝ³
– Axis-aligned bounding box (AABB): p_min, p_max ∈ ℝ³
[Figure: bounded volume spanned by p_min and p_max]
– Intersect the ray with each pair of parallel slabs → t_i^near, t_i^far
– Ray enters the box in all dimensions before exiting in any
– max({t_i^near | i = x, y, z}) < min({t_i^far | i = x, y, z})
[Figure: slab intersections t_x^near, t_x^far, t_y^near, t_y^far along the ray]
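The slab criterion translates into a short test. A Python sketch; for brevity it assumes no zero direction components (real implementations handle those via IEEE infinities, which plain Python float division does not produce).

```python
def ray_aabb(o, d, pmin, pmax):
    """Slab test: (tnear, tfar) of the overlap interval if the ray hits
    the box, else None. Assumes d has no zero components."""
    tnear, tfar = float("-inf"), float("inf")
    for i in range(3):
        t0 = (pmin[i] - o[i]) / d[i]    # entry/exit of slab i
        t1 = (pmax[i] - o[i]) / d[i]
        if t0 > t1:
            t0, t1 = t1, t0             # order the slab interval
        tnear = max(tnear, t0)          # last entry over all dimensions
        tfar = min(tfar, t1)            # first exit over all dimensions
    if tnear < tfar and tfar > 0.0:     # enters all slabs before exiting any
        return tnear, tfar
    return None

```

The returned interval [tnear, tfar] is exactly the parameter range the ray spends inside the box, which traversal algorithms reuse to clip against child nodes.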
– Polygons: [Appel ’68] – Quadrics, CSG: [Goldstein & Nagel ’71] – Recursive Ray Tracing: [Whitted ’79] – Tori: [Roth ’82] – Bicubic patches: [Whitted ’80, Kajiya ’82] – Algebraic surfaces: [Hanrahan ’82] – Swept surfaces: [Kajiya ’83, van Wijk ’84] – Fractals: [Kajiya ’83] – Deformations: [Barr ’86] – NURBS: [Stürzlinger ’98] – Subdivision surfaces: [Kobbelt et al ’98]