Philipp Slusallek Pascal Grittmann
Computer Graphics
- Texturing -
Overview
– Last time
  – Shading
  – BRDFs
– Today
  – Texture definition
  – Image textures
  – Procedural textures
  – Texture mapping
– Next lecture
  – Aliasing & signal processing
– Either via (painted) image textures or via procedural functions
– Reflectance, normals, shadows, reflections, …
– Input: 1D/2D/3D texture coordinates
– Output: Scalar or vector value
– Reflectance
– Geometry and Normal (important for lighting)
– N′ = N + ΔN (perturbed normal, bump mapping)
– N′ = N(P + t N) (normal of the displaced surface, displacement mapping)
– Opacity
– Illumination
– Discrete set of sample values (given at texel centers!)
– Hit point does not exactly hit a texture sample
– Use reconstruction filter to find color for hit point
Texture Space
– Assuming cell-centered samples
– u = tu * resU; v = tv * resV;
– lu = min( floor(u), resU – 1 ); lv = min( floor(v), resV – 1 );
– return image[lu, lv];
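The nearest-neighbor lookup above can be sketched as follows (a minimal sketch in Python; representing `image` as a row-major list of rows and the function name are illustrative choices):

```python
def nearest_lookup(image, tu, tv):
    # Nearest-neighbor texture lookup, cell-centered samples.
    # tu, tv are texture coordinates in [0, 1].
    res_v = len(image)      # number of rows (v direction)
    res_u = len(image[0])   # texels per row (u direction)
    # Scale [0, 1] coordinates to texel indices.
    u = tu * res_u
    v = tv * res_v
    # Clamp so that tu == 1.0 or tv == 1.0 still hits a valid texel.
    lu = min(int(u), res_u - 1)
    lv = min(int(v), res_v - 1)
    return image[lv][lu]
```

The `min(..., res - 1)` clamp is exactly the clamp in the pseudocode above; without it the coordinate 1.0 would index one past the last texel.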
– Assuming node-centered samples
– u = tu * (resU – 1); v = tv * (resV – 1);
– lu = floor(u); lv = floor(v);
– fu = u – lu; fv = v – lv;
– return (1-fu) (1-fv) image[lu, lv] + (1-fu) ( fv) image[lu, lv+1] + ( fu) (1-fv) image[lu+1, lv] + ( fu) ( fv) image[lu+1, lv+1]
– Equivalently, as two nested linear interpolations:
– u0 = (1-fv) image[lu, lv] + ( fv) image[lu, lv+1]; u1 = (1-fv) image[lu+1, lv] + ( fv) image[lu+1, lv+1]; return (1-fu) u0 + ( fu) u1;
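The bilinear lookup can be sketched as follows (a minimal Python sketch assuming `tu`, `tv` in [0, 1] and node-centered samples; the clamping of the lower index is an illustrative way to keep the 2x2 neighborhood inside the image):

```python
import math

def bilinear_lookup(image, tu, tv):
    # Bilinear texture lookup, node-centered samples:
    # texel (i, j) sits at (i / (resU-1), j / (resV-1)).
    res_v = len(image)
    res_u = len(image[0])
    u = tu * (res_u - 1)
    v = tv * (res_v - 1)
    # Lower-left texel of the 2x2 neighborhood (clamped so lu+1 is valid).
    lu = min(int(math.floor(u)), res_u - 2)
    lv = min(int(math.floor(v)), res_v - 2)
    fu = u - lu
    fv = v - lv
    # Interpolate along v on both texel columns, then along u.
    u0 = (1 - fv) * image[lv][lu]     + fv * image[lv + 1][lu]
    u1 = (1 - fv) * image[lv][lu + 1] + fv * image[lv + 1][lu + 1]
    return (1 - fu) * u0 + fu * u1
```

The two-lerp form computes the same weighted sum as the four-term formula but with fewer multiplications.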
– Assuming node-centered samples – Essentially based on cubic splines (see later)
– Even smoother
– More complex & expensive (4x4 kernel) – Overshoot
– (u, v) in [0, 1] x [0, 1]
– (u, v) not in unit square?
– tu = u − ⌊u⌋
– tv = v − ⌊v⌋
– tu = u − ⌊u⌋; tv = v − ⌊v⌋
– lu = ⌊u⌋; lv = ⌊v⌋
– if (lu % 2 == 1) tu = 1 − tu;
– if (lv % 2 == 1) tv = 1 − tv;
if (u < 0) tu = 0; else if (u > 1) tu = 1; else tu = u;
if (v < 0) tv = 0; else if (v > 1) tv = 1; else tv = v;
if (u < 0 || u > 1 || v < 0 || v > 1)
    return backgroundColor;
else
    tu = u; tv = v;
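The four wrapping strategies above (repeat, mirror, clamp, border) can be collected into one sketch per axis (Python; mode names and the `None`-for-background convention are illustrative, not an API):

```python
import math

def wrap(u, mode):
    """Map an arbitrary texture coordinate into [0, 1] along one axis."""
    if mode == "repeat":
        # Tile: keep only the fractional part.
        return u - math.floor(u)
    if mode == "mirror":
        # Tile, but flip every odd cell so the texture mirrors.
        t = u - math.floor(u)
        return 1 - t if int(math.floor(u)) % 2 == 1 else t
    if mode == "clamp":
        # Extend the border texels outward.
        return min(max(u, 0.0), 1.0)
    if mode == "border":
        # Outside the unit interval: caller uses the background color.
        return u if 0.0 <= u <= 1.0 else None
    raise ValueError(mode)
```

Applying `wrap` independently to u and v reproduces the 2D behavior sketched in the slides.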
– With OpenGL texture modes
– Simple generation
– Simple acquisition
– Illumination “frozen” during acquisition – Limited resolution – Susceptible to aliasing – High memory requirements (often HUGE for films, 100s of GB) – Issues when mapping 2D image onto 3D object
– Sometimes hard to achieve specific effect – Possibly non-trivial programming
– Flexibility & parametric control – Unlimited resolution – Anti-aliasing possible – Low memory requirements – May be directly defined as 3D “image” mapped to 3D geometry – Low-cost visual complexity
– lu = ⌊u⌋; lv = ⌊v⌋
– parity = (lu + lv) % 2;
– if (parity == 1) return color1;
– else return color0;
– lu = ⌊u⌋; lv = ⌊v⌋; lw = ⌊w⌋
– parity = (lu + lv + lw) % 2;
– if (parity == 1) return color1;
– else return color0;
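Both checkerboard variants follow directly from the parity test (a minimal Python sketch; the two color values are illustrative placeholders):

```python
import math

def checker2d(u, v, color0=0, color1=1):
    # 2D checkerboard: parity of the summed integer cell indices.
    lu = int(math.floor(u))
    lv = int(math.floor(v))
    return color1 if (lu + lv) % 2 == 1 else color0

def checker3d(u, v, w, color0=0, color1=1):
    # 3D (solid) checkerboard: same idea with a third coordinate.
    parity = (int(math.floor(u)) + int(math.floor(v))
              + int(math.floor(w))) % 2
    return color1 if parity == 1 else color0
```

Note that Python's `%` on negative integers already yields a value in {0, 1}, so the pattern continues correctly for negative coordinates.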
– fu = u − ⌊u⌋; fv = v − ⌊v⌋
– bu = fu < mortarWidth; bv = fv < mortarWidth;
– if (bu || bv) return mortarColor;
– else return brickColor;
– parity = ⌊v⌋ % 2; u −= parity * 0.5;
– fu = u − ⌊u⌋; fv = v − ⌊v⌋
– bu = fu < mortarWidth; bv = fv < mortarWidth;
– if (bu || bv) return mortarColor;
– else return brickColor;
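The offset-row brick pattern can be sketched as follows (Python; the color values and the default mortar width are illustrative assumptions):

```python
import math

def brick(u, v, mortar_width=0.1, brick_color=1, mortar_color=0):
    # Brick texture with running bond: every other row is
    # offset by half a brick width.
    parity = int(math.floor(v)) % 2
    u -= parity * 0.5
    # Fractional position inside the current brick cell.
    fu = u - math.floor(u)
    fv = v - math.floor(v)
    # A thin band at the low edge of each cell is mortar.
    if fu < mortar_width or fv < mortar_width:
        return mortar_color
    return brick_color
```

Without the half-cell offset this degenerates to a plain grid of tiles, i.e. the simpler variant above.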
– Similarity between patches at different locations
– Similarity on different resolution scales
– But never completely identical
– Purely empirical approach – Looks convincing, but has nothing to do with material’s physics
– Used in many texture functions
– Statistical invariance under rotation – Statistical invariance under translation – Roughly fixed frequency of ~1 (about one feature per unit distance)
– Value noise
– Gradient noise
– Interpolation
– Hash function maps lattice vertices to values
– with a finite array of values p
– Gradient noise has lower regularity artifacts – More high frequencies in noise spectrum
– Stochastic vs. deterministic
Left: random values at each pixel; right: gradient noise
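Lattice-based value noise as described above can be sketched in 1D (Python; the hash constants and the smoothstep interpolant are illustrative choices, not the deck's implementation):

```python
import math

def hash01(i):
    # Deterministic integer hash -> pseudo-random value in [0, 1).
    # Stands in for the "finite array of values p" mentioned above.
    x = (i * 374761393 + 668265263) & 0xFFFFFFFF
    x = ((x ^ (x >> 13)) * 1274126177) & 0xFFFFFFFF
    return (x ^ (x >> 16)) / 2**32

def smoothstep(t):
    # C1-continuous interpolation weight for t in [0, 1].
    return t * t * (3 - 2 * t)

def value_noise(x):
    # 1D value noise: random values at integer lattice points,
    # smoothly interpolated in between.
    i = int(math.floor(x))
    f = x - i
    return hash01(i) + smoothstep(f) * (hash01(i + 1) - hash01(i))
```

Gradient noise differs by hashing lattice points to *gradients* instead of values and interpolating the resulting ramps, which pushes more energy into higher frequencies.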
– Single spike in frequency spectrum (single frequency, see later)
– Mix of different frequencies – Decreasing amplitude for high frequencies
– Turbulence(x) = Σ_{i=0..k} |a_i · noise(f_i · x)|
– Frequencies double per octave: f_i = 2^i
– Amplitudes decrease accordingly: a_i = 2^(−i)
– Summation truncated once the feature size of noise(f_k · x) < 2 pixel sizes (band limit, see later)
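The turbulence sum can be sketched as follows (a self-contained 1D Python sketch; the hash-based value noise stands in for whichever noise function is used, and its constants are illustrative):

```python
import math

def _hash(i):
    # Integer hash -> pseudo-random value in [-1, 1) (zero-mean noise).
    x = (i * 374761393 + 668265263) & 0xFFFFFFFF
    x = ((x ^ (x >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((x ^ (x >> 16)) / 2**31) - 1.0

def noise(x):
    # 1D value noise with smoothstep interpolation.
    i = int(math.floor(x))
    f = x - i
    t = f * f * (3 - 2 * f)
    return _hash(i) + t * (_hash(i + 1) - _hash(i))

def turbulence(x, k=4):
    # Turbulence(x) = sum_{i=0..k} |a_i * noise(f_i * x)|
    # with frequencies f_i = 2^i and amplitudes a_i = 2^-i.
    return sum(abs(noise(x * 2**i)) / 2**i for i in range(k + 1))
```

In practice `k` is chosen per pixel so the highest octave stays below the band limit mentioned above.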
– Smoothly alternating layers of different marble colors – fmarble(x,y,z) := marble_color(sin(x)) – marble_color : transfer function (see lower left)
– Simulated turbulence – fmarble(x,y,z) := marble_color(sin(x + turbulence(x, y, z)))
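The marble function can be sketched directly from the formula (Python; the `marble_color` transfer function here is an illustrative gray ramp, and `turb` stands in for a real `turbulence(x, y, z)` call):

```python
import math

def marble_color(t):
    # Hypothetical transfer function: maps sin output in [-1, 1]
    # to a gray level between dark vein (0.3) and light stone (1.0).
    return 0.3 + 0.7 * (t + 1) / 2

def marble(x, y, z, turb=0.0):
    # f_marble(x, y, z) = marble_color(sin(x + turbulence(x, y, z)));
    # pass the turbulence value in via `turb`.
    return marble_color(math.sin(x + turb))
```

With `turb = 0` the veins are perfectly straight layers; adding turbulence distorts them into the familiar marble look.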
– Wood – Erosion – Marble – Granite – …
RenderMan Companion
– Turbulated saw-tooth function
– White blobs – Turbulated transparency along edge
– Vary procedural texture function’s parameters over time
– Object surface parameterization – Projective transformation
– Find corresponding pre-image/footprint of each pixel in texture – Integrate over pre-image
– Sphere: spherical coordinates (φ, θ) = (2π u, π v) – Cylinder: cylindrical coordinates (φ, h) = (2 π u, H v) – Parametric surfaces (such as B-spline or Bezier surfaces → later)
– Polygons, implicit surfaces, teapots, …
– Has implicit parameterization (e.g. barycentric coordinates) – But we need more control: Placement of triangle in texture space
– In homogeneous coordinates (by embedding (u,v) as (u,v,1)) – Transformation coefficients determined by 3 pairs (u,v)→(x,y)
x = (a u + b v + c) / (g u + h v + i);  y = (d u + e v + f) / (g u + h v + i)

(x′, y′, w)ᵀ = [a b c; d e f; g h i] (u′, v′, q)ᵀ;  (x, y) = (x′/w, y′/w);  (u, v) = (u′/q, v′/q)
– Rasterization
– Ray tracing
– Explicitly given in matrix (colored for ∂u/∂x, ∂v/∂x, ∂q/∂x)
(x′, y′, w)ᵀ = [a b c; d e f; g h i] (u′, v′, q)ᵀ

(u′, v′, q)ᵀ = [ei−fh  ch−bi  bf−ce;  fg−di  ai−cg  cd−af;  dh−eg  bg−ah  ae−bd] (x′, y′, w)ᵀ
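Inverting the mapping with the adjugate (inverse up to the determinant, which cancels in homogeneous coordinates) can be sketched as follows (Python; the example matrix is illustrative):

```python
def adjugate3(m):
    # Adjugate of a 3x3 matrix: equals the inverse times det(m).
    # The scale factor is irrelevant for homogeneous coordinates.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return [
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]

def apply_homog(m, p):
    # Multiply 3x3 matrix by a homogeneous 2D point and dehomogenize.
    x, y, w = (sum(m[r][k] * p[k] for k in range(3)) for r in range(3))
    return (x / w, y / w)
```

A round trip `apply_homog(adjugate3(M), ...)` recovers the original (u, v), demonstrating why a ray tracer never needs the true inverse.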
– 3D world/object (x,y,z) coords → 3D (u,v,w) texture coordinates – Similar to carving object out of material block
– 3D Cartesian (x,y,z) coordinates → 2D (u,v) texture coordinates?
David Ebert
– Surface defined by parametric function
– Input
– Output
– Directly derived from surface parameterization – Invert parametric function
– (x, y, 0) = Polar2Cartesian(r, φ)
– p(u, v) = Polar2Cartesian(R v, 2 π u) // disc radius R
– (x, y, z) = Cylindrical2Cartesian(r, φ, z)
– p(u, v) = Cylindrical2Cartesian(r, 2 π u, H v) // cylinder height H
– (x, y, z) = Spherical2Cartesian(r, θ, φ)
– p(u, v) = Spherical2Cartesian(r, π v, 2 π u)
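The sphere case inverts to a closed-form lookup (Python sketch; the choice of z as the polar axis matches the (θ, φ) convention above but is an assumption about axis orientation):

```python
import math

def sphere_uv(x, y, z):
    # Invert p(u, v) = Spherical2Cartesian(r, pi*v, 2*pi*u):
    # Cartesian point on the sphere -> (u, v) texture coordinates.
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)        # polar angle in [0, pi]
    phi = math.atan2(y, x)          # azimuth in (-pi, pi]
    if phi < 0:
        phi += 2 * math.pi          # shift azimuth to [0, 2*pi)
    return phi / (2 * math.pi), theta / math.pi
```

The cylinder and disc invert the same way, dropping the radial coordinate and rescaling the remaining two.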
– Use barycentric coordinates directly
– p(u, v) = (1 − u − v) p0 + u p1 + v p2
– Associate a predefined texture coordinate to each triangle vertex
– Texture mapped onto manifold
– No intrinsic parameterization??
– Express Cartesian coordinates in a given coordinate system
– Drop one coordinate – Compute u and v from remaining 2 coordinates
– Map to different Cartesian coordinate system – (x’, y’, z’) = AffineTransformation(x, y, z)
– Drop z’, map u = x’, map v = y’ – E.g.: Issues when surface normal orthogonal to projection axis
– Map to cylindrical coordinates (possibly after translation/rotation) – (r, φ, z) = Cartesian2Cylindrical(x, y, z) – Drop r, map u = φ / 2 π, map v = z / H – Extension: add scaling factors: u = α φ / 2 π – E.g.: Similar topology gives reasonable mapping
– Map to spherical coordinates (possibly after translation/rotation) – (r, θ, φ) = Cartesian2Spherical(x, y, z) – Drop r, map u = φ / 2 π, map v = θ / π – Extension: add scaling factors to both u and v – E.g.: Issues in concave regions
– May introduce undesired texture distortions if the intermediate surface differs too much from the destination surface – Still often used in practice because of its simplicity
– Slide projector
– Used a lot in the film industry!
– View-dependent texturing (advanced topic)
– Re-project photo on its 3D environment
– Depends on surface normal and a predefined vector ω (e.g. the up direction)
– α = n · ω
– return α * flatColor + (1 − α) * slopeColor;
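The slope-dependent blend can be sketched as follows (Python; the default up vector and the scalar gray "colors" are illustrative assumptions):

```python
def slope_shade(normal, up=(0.0, 0.0, 1.0),
                flat_color=1.0, slope_color=0.0):
    # alpha = n . w: 1 where the surface faces `up` (flat ground),
    # 0 where the normal is perpendicular to it (steep slope).
    alpha = sum(n * u for n, u in zip(normal, up))
    alpha = max(0.0, min(1.0, alpha))  # clamp downward-facing normals
    return alpha * flat_color + (1 - alpha) * slope_color
```

This is the classic trick for putting grass on flat terrain and rock on cliffs.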
– Photo of a reflective sphere (gazing ball) – Photos with a fish-eye camera
– Remapping 2 images of reflective sphere – Photo with an environment camera
– If no intersection found, use ray direction to find background color – Cartesian coords of ray dir. → spherical coords → uv tex coords
– Remapping 2 images of reflective sphere – Photos with a perspective camera
– Find main axis (-x, +x, -y, +y, -z, +z) of ray direction – Use other 2 coordinates to access corresponding face texture
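Cube-map face selection from a ray direction can be sketched as follows (Python; the per-face (s, t) orientation here is a simplified assumption — real APIs such as OpenGL fix a specific orientation per face):

```python
def cube_face(d):
    # Pick the face from the dominant axis of direction d, then derive
    # face coordinates from the remaining two components.
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, major, (s, t) = ("+x" if x > 0 else "-x"), ax, (y, z)
    elif ay >= az:
        face, major, (s, t) = ("+y" if y > 0 else "-y"), ay, (x, z)
    else:
        face, major, (s, t) = ("+z" if z > 0 else "-z"), az, (x, y)
    # Project onto the face plane and remap from [-1, 1] to [0, 1].
    return face, ((s / major + 1) / 2, (t / major + 1) / 2)
```

The direction does not need to be normalized, since only ratios of components enter the result.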
– Single image – Poor utilization of the image area – Poor sampling near the edges – Artifacts if map and image do not have the same viewpoint
– Yields spherical parameterization – Subdivide in 2 images (front-facing and back-facing sides) – Less bias near the periphery – Arbitrarily reusable – Supported by OpenGL extensions
Terminator II motion picture
– Two maps: diffuse & specular – Diffuse: indexed by surface normal – Specular: indexed by reflected view vector
RenderMan Companion
– Pre-calculated illumination (local irradiance)
– Multiplication of irradiance with base texture
– Provides surface radiosity
– Animated light maps
Reflectance × irradiance = radiosity; radiosity can be represented in a mesh or in a texture
– Only the surface normals are changed
– Surface normals are known
– Surface is offset in normal direction according to the bump map intensity – New normal directions N′(u, v) are calculated based on the virtually displaced surface O′(u, v) – Original surface is rendered with the new normals N′(u, v)
Grey-valued texture used for bump height
– Displaced surface: O′(u, v) = O(u, v) + B(u, v) N(u, v)
– Normal is the cross product of the derivatives: N′(u, v) = O′_u × O′_v
– Where: O′_u = O_u + B_u N + B N_u and O′_v = O_v + B_v N + B N_v
– If B is small, the last term in each equation can be ignored, yielding:
  N′ = O_u × O_v + B_u (N × O_v) + B_v (O_u × N) + B_u B_v (N × N)
– The first term is the normal to the surface and the last is zero, giving:
  N′ = N + D with D = B_u (N × O_v) − B_v (N × O_u)
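The final perturbation formula N′ = N + D is cheap to evaluate per shading point (Python sketch; the flat test patch in the usage below is illustrative):

```python
def cross(a, b):
    # Standard 3D cross product on tuples.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def bump_normal(n, o_u, o_v, b_u, b_v):
    # N' = N + D with D = B_u (N x O_v) - B_v (N x O_u):
    # the small-bump approximation derived above.
    # n: surface normal; o_u, o_v: surface tangents dO/du, dO/dv;
    # b_u, b_v: partial derivatives of the bump map B.
    c1 = cross(n, o_v)
    c2 = cross(n, o_u)
    d = tuple(b_u * x1 - b_v * x2 for x1, x2 in zip(c1, c2))
    return tuple(nc + dc for nc, dc in zip(n, d))
```

On a flat z=0 patch (n = (0,0,1), tangents along x and y) the perturbed normal tilts against the bump gradient, as expected; the result should be renormalized before shading.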
– Combination of multiple texture effects
RenderMan Companion
– Often with opacity texture – Rotates, always facing viewer – Used for rendering distant objects – Best results if approximately radially or spherically symmetric
– Azimuthal orientation: different view-points – Complex distribution: trunk, branches, …