Computer Graphics Course 2006
Texture Mapping
Part of the slides are by: Raanan Fattal, David Akers, Durand and Cutler
Motivation
Definition of 2D texture mapping
Coordinate generation, interpolation
Different uses of 2D textures
Beyond 2D textures
Motivation: Add interesting and/or realistic detail to surfaces of objects.
Problem: Fine geometric detail is difficult to model and expensive to render. We don't want to represent all this detail with geometry.
Idea: Modify various shading parameters of the surface by mapping a function (such as a 2D image) onto the surface.
Given an image, think of it as a 2D function T from [0,1]² (texture coordinates) to the RGB color space:
T(u, v) → (r, g, b)
For each geometric primitive, define a mapping M that maps points on the surface to texture coordinates:
M(x, y, z) = (u, v)
To shade a pixel corresponding to a point (x, y, z) on the surface, use the color:
(r, g, b) = T(M(x, y, z))
[Figure: the texture image and the textured result]
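The (r, g, b) = T(M(x, y, z)) idea above can be sketched in a few lines of Python. The texture, the planar mapping, and the function names here are all illustrative, not from any real API:

```python
# A tiny 2x2 "image" stands in for the texture function T, and a
# hypothetical planar mapping M simply drops the z coordinate.
texture = [  # 2x2 RGB texels, row 0 corresponds to v = 0
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 0)],
]

def T(u, v):
    """Look up the nearest texel for (u, v) in [0,1]^2."""
    h = len(texture)
    w = len(texture[0])
    i = min(int(v * h), h - 1)   # row index from v
    j = min(int(u * w), w - 1)   # column index from u
    return texture[i][j]

def M(x, y, z):
    """A trivial planar mapping: project the surface point onto the xy-plane."""
    return (x, y)

def shade(x, y, z):
    """Color of the surface point (x, y, z): T(M(x, y, z))."""
    u, v = M(x, y, z)
    return T(u, v)
```

Real systems differ in the mapping M (discussed below) and in how T filters between texels, but the composition T∘M is the core of the technique.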
So, what do we have to do in order to make this idea work?
Coordinate generation/assignment
Interpolation
The basic idea is simple, but…
There are a large number of details, and this is still an active research area.
We will give an overview of some of the issues today.
Next time you will learn how to use/implement part of these ideas with OpenGL.
To render a textured triangle, we must start by assigning a texture coordinate to each vertex.
A texture coordinate is a 2D point [tx, ty] in texture space: the coordinate of the image that will get mapped to a particular vertex.
[Figure: a triangle (in any space) with vertices v0, v1, v2 mapped to points t0, t1, t2 in texture space, which spans (0,0) to (1,1)]
The actual texture mapping computations take place at the scan conversion and pixel rendering stages of the graphics pipeline.
During scan conversion, as we loop through the pixels of a triangle, we must interpolate the (tx, ty) texture coordinates in a similar way to how we interpolate the RGB color and z depth values.
As with all other interpolated values, we must precompute the slopes of each coordinate as they vary across the image pixels in x and y.
Once we have the interpolated texture coordinate, we look up that pixel in the texture map and use it to color the pixel.
[Figure: triangle ABC with point D on edge AC, point E on edge BC, and point F between D and E, using interpolation weights λ1, λ2, λ3]
D = λ1 A + (1 − λ1) C
E = λ2 B + (1 − λ2) C
F = λ3 D + (1 − λ3) E
Substituting:
F = λ3 (λ1 A + (1 − λ1) C) + (1 − λ3) (λ2 B + (1 − λ2) C)
which has the barycentric form
F = a A + b B + c C
Linear Interpolation vs. Bilinear Interpolation
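The repeated linear interpolation above can be sketched as follows. Function names (`lerp`, `interp_texcoord`, `barycentric_weights`) are illustrative:

```python
def lerp(p, q, t):
    """Linear interpolation t*p + (1-t)*q, componentwise on 2D points."""
    return (t * p[0] + (1 - t) * q[0], t * p[1] + (1 - t) * q[1])

def interp_texcoord(A, B, C, l1, l2, l3):
    """F = l3*D + (1-l3)*E, with D on edge AC and E on edge BC."""
    D = lerp(A, C, l1)      # D = l1*A + (1-l1)*C
    E = lerp(B, C, l2)      # E = l2*B + (1-l2)*C
    return lerp(D, E, l3)   # F = l3*D + (1-l3)*E

def barycentric_weights(l1, l2, l3):
    """Expand the nested lerps into the form F = a*A + b*B + c*C."""
    a = l3 * l1
    b = (1 - l3) * l2
    c = l3 * (1 - l1) + (1 - l3) * (1 - l2)
    return a, b, c
```

The two routes give the same point, which is why per-pixel texture coordinates can be computed either by edge-walking lerps or directly from barycentric weights.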
There are a lot of issues with both texture coordinate generation/assignment and interpolation.
Let's start with the former. Any ideas?
Tedious to specify texture coordinates
Acquiring textures is surprisingly difficult
Photographs have projective distortions
Variations in reflectance and illumination
Tiling problems
Pack triangles into a single image
planar, cylindrical, spherical, shrink-wrap, cube face
Certain objects have a natural parametrization
(e.g., Bezier patches)
Polygons (triangles): each vertex is assigned a
pair of texture coordinates (u,v). Inside, linear interpolation is used.
How do we handle a more complex object?
(Bier and Sloan 1986)
Step I: define a mapping between the
texture and some intermediate surface:
plane, cylinder, sphere, cube
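As a sketch of Step I for one intermediate surface, here is one common convention for generating (u, v) on a sphere from longitude and latitude; conventions vary, and the function name is illustrative:

```python
import math

def spherical_uv(x, y, z):
    """Map a point on (or projected onto) a sphere centered at the origin
    to texture coordinates: u in [0,1) around the equator from longitude,
    v in [0,1] pole to pole from colatitude."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)        # longitude in (-pi, pi]
    phi = math.acos(z / r)          # colatitude in [0, pi]
    u = (theta + math.pi) / (2 * math.pi)
    v = phi / math.pi
    return u, v
```

Planar and cylindrical mappings follow the same pattern with different coordinate formulas; the distortions each one introduces are what make the choice of intermediate surface matter.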
Step II: Project the intermediate surface onto the object.
Use the texture like a slide projector
No need to specify texture coordinates explicitly
A good model for shading variations due to illumination
A fair model for reflectance (can use pictures)
[Figure: texture vs. polygon sampling under magnification and minification]
Mip-mapping
MIP = Multum in Parvo (many things in a small place)
Idea: store the texture as a pyramid of progressively lower-resolution images, filtered down from the original.
[Figure: mip-map pyramid with the R, G, B channels packed into quadrants]
Which level of the mip-map to use?
Think of the mip-map as a 3D pyramid. Index into the mip-map with 3 coordinates: u, v, d (depth).
The size of the filter (i.e., d in the mip-map) depends on the pixel coverage area in the texture map.
In general, treat d as a continuous value. Blend between the nearest mip-map levels using linear interpolation.
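A minimal sketch of level selection and the between-level blend, assuming a per-level lookup function is available (`sample_level` is a hypothetical stand-in for it):

```python
import math

def mip_level(footprint_texels):
    """Continuous mip level d: log2 of the pixel footprint's edge length
    in texels. Level 0 is the full-resolution image."""
    return max(0.0, math.log2(footprint_texels))

def trilinear(sample_level, u, v, d):
    """Blend the two mip levels nearest to the continuous depth d."""
    lo = math.floor(d)
    hi = lo + 1
    t = d - lo                      # blend factor between the two levels
    a = sample_level(u, v, lo)
    b = sample_level(u, v, hi)
    return (1 - t) * a + t * b
```

With bilinear filtering inside each level plus this linear blend across levels, the overall scheme is the trilinear filtering mentioned below.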
BTW: do you recognize this head?
Better results can be achieved with trilinear filtering.
As we already saw in this course, hacks and ad-hoc solutions are part of CG, especially in real-time rendering.
Let's see a really nice and unintended use of mip-maps, from Kevin Bjorke's (NVIDIA) talk at GDC.
I want to create a long tunnel with a haze…
One way to achieve the same result is to compute the "Facing Ratio": (N·V)
C = lerp(mossColor, texColor, pow(dot(N,V), expon))
Is it a good idea to use this chart "as is" with mip-mapping?
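The facing-ratio blend from the shader fragment above can be sketched in Python; all names here are illustrative, and the colors are placeholder values:

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def lerp(a, b, t):
    """Componentwise blend (1-t)*a + t*b of two colors."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def facing_ratio_color(moss_color, tex_color, N, V, expon):
    """Surfaces facing the viewer (N.V near 1) show the texture color;
    grazing surfaces fade toward the moss/haze color. The exponent
    sharpens the falloff."""
    t = max(0.0, dot(N, V)) ** expon
    return lerp(moss_color, tex_color, t)
```

The exponent plays the same role as `expon` in the slide's `pow(dot(N,V), expon)`: higher values confine the texture color to surfaces nearly facing the viewer.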
The evil you see versus the evil you don't see
What's the difference between a real brick wall and a photograph of the wall texture-mapped onto a plane?
What happens if we change the lighting or the camera position?
Use textures to alter the surface normal
Does not change the actual shape of the surface
Just shaded as if it were a different shape
[Figures: sphere with diffuse texture; swirly bump map; sphere with diffuse texture & bump map]
Treat the texture as a single-valued height function
Compute the normal from the partial derivatives of the texture
[Figures: cylinder with diffuse texture map; bump map; cylinder with texture map & bump map]
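The height-function step above can be sketched as follows: finite-difference partial derivatives of h(u, v) give the slopes, and (−∂h/∂u, −∂h/∂v, 1), normalized, is the perturbed normal in tangent space. The function name and sample height function are illustrative:

```python
import math

def bump_normal(h, u, v, eps=1e-4):
    """Perturbed tangent-space normal from a height function h(u, v),
    using central finite differences for the partial derivatives."""
    dhdu = (h(u + eps, v) - h(u - eps, v)) / (2 * eps)
    dhdv = (h(u, v + eps) - h(u, v - eps)) / (2 * eps)
    n = (-dhdu, -dhdv, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)
```

A flat height field yields the unperturbed normal (0, 0, 1); any slope in the height field tilts the normal against it, which is what makes the shading suggest bumps that are not geometrically there.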
There are no bumps on the silhouette of a bump-mapped object
Bump maps don't allow self-occlusion
Use the texture map to actually move the surface point
The geometry must be displaced before visibility is determined
Image from: Geometry Caching for Ray-Tracing Displacement Maps by Matt Pharr and Pat Hanrahan.
note the detailed shadows cast by the stones
Ken Musgrave
We can simulate reflections by using the direction of the reflected ray to index a spherical texture map at "infinity".
This assumes that all reflected rays begin from the same point.
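A sketch of that lookup under the stated assumption: reflect the view direction about the surface normal, then index a latitude/longitude spherical map with the reflected direction. Both function names and the mapping convention are illustrative:

```python
import math

def reflect(V, N):
    """R = V - 2 (V.N) N, for a unit-length normal N."""
    d = sum(v * n for v, n in zip(V, N))
    return tuple(v - 2 * d * n for v, n in zip(V, N))

def latlong_uv(R):
    """Index a spherical environment map at 'infinity' by the direction
    R alone: u from longitude, v from colatitude."""
    x, y, z = R
    u = (math.atan2(y, x) + math.pi) / (2 * math.pi)
    v = math.acos(z / math.sqrt(x * x + y * y + z * z)) / math.pi
    return u, v
```

Because only the direction of R is used, the result is exact only when the environment is infinitely far away, which is the "all reflected rays begin from the same point" assumption above.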
Terminator 2. Do you remember a movie that used it before that? Hint: same director…
Texture Maps for Illumination (“Static Light Maps“)
One of the basic and prevalent techniques in (real-time) CG.
We will talk more about generalizations of these ideas in the GPU lesson.
(Peachey 1985, Perlin 1985)
Problem: mapping a 2D image/function onto a 3D object is a difficult problem:
Distortion
Discontinuities
Idea: use a texture function defined over a 3D domain - the 3D space containing the object.
Texture function can be digitized or procedural
Advantages:
compact representation (code vs. data)
unlimited resolution
unlimited extent
controllable via parameters
Disadvantages:
Can be difficult to program and debug
Can be difficult to predict and control
Typically slower to evaluate
Can be difficult to pre-filter
Download and play with it in your spare time.
Image by Henrik Wann Jensen Environment map by Paul Debevec