Week 9 - Wednesday
What did we talk about last time? Textures
- Volume textures
- Cube maps
- Texture caching and compression
- Procedural texturing
- Texture animation
- Material mapping
- Alpha mapping
Bump mapping refers to a wide range of techniques designed to increase small-scale detail
Most bump mapping is implemented per-pixel in the pixel shader
The 3D effects of bump mapping are greater than textures alone, but less than full geometry
Macro-geometry is made up of vertices and triangles
- Limbs and head of a body
Micro-geometry consists of characteristics shaded in the pixel shader, often with texture maps
- Smoothness (specular color and m parameter) based on the microscopic smoothness of a material
Meso-geometry is the stuff in between: too complex for macro-geometry but large enough to change over several pixels
- Wrinkles
- Folds
- Seams
Bump mapping techniques are primarily concerned with mesoscale effects
James Blinn proposed the offset vector bump map or offset map
- Stores b_u and b_v values at each texel, giving the amount that the normal should be changed at that point
Another method is a heightfield, a grayscale image that gives the varying heights of a surface
- Normal changes can be computed from the heightfield
The results are the same, but these kinds of deformations are usually stored in normal maps
- Normal maps give the full 3-component normal change
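As an illustration of computing normal changes from a heightfield, here is a minimal HLSL sketch using central differences; the sampler name, texel size, and bump scale are assumptions, not part of the original slides.

// Minimal sketch (assumed names): derive a tangent-space normal from a
// grayscale heightfield using central differences.
sampler HeightSampler;        // assumed heightfield sampler
float2 TexelSize;             // assumed: 1 / texture dimensions
float BumpScale = 1.0;        // assumed bump strength

float3 NormalFromHeightfield(float2 uv)
{
    // Sample the four neighboring heights
    float hL = tex2D(HeightSampler, uv - float2(TexelSize.x, 0)).r;
    float hR = tex2D(HeightSampler, uv + float2(TexelSize.x, 0)).r;
    float hD = tex2D(HeightSampler, uv - float2(0, TexelSize.y)).r;
    float hU = tex2D(HeightSampler, uv + float2(0, TexelSize.y)).r;

    // The slopes in u and v become the x and y components of the perturbed normal
    float3 n = float3((hL - hR) * BumpScale, (hD - hU) * BumpScale, 1.0);
    return normalize(n);
}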
Normal maps can be in world space (uncommon)
- Only usable if the object never moves
Or object space
- Requires the object only to undergo rigid body transforms
Or tangent space
- Relative to the surface; can assume positive z
Lighting and the surface have to be in the same space to do shading
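To make the "same space" requirement concrete, here is a minimal pixel-shader sketch that unpacks a tangent-space normal map and brings the normal into world space with a TBN basis; the sampler name and the assumption that the vertex shader passes world-space tangent and binormal vectors are not from the original slides.

// Minimal sketch (assumed names): tangent-space normal mapping.
sampler NormalMapSampler;

float3 WorldNormalFromMap(float2 uv, float3 normal, float3 tangent, float3 binormal)
{
    // Unpack from the [0,1] texture range to [-1,1]
    float3 n = tex2D(NormalMapSampler, uv).rgb * 2.0 - 1.0;

    // The TBN matrix takes the sampled normal from tangent space to world space
    float3x3 tbn = float3x3(normalize(tangent), normalize(binormal), normalize(normal));
    return normalize(mul(n, tbn));
}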
Filtering normal maps is tricky
Bump mapping doesn't change what can be seen, just the normal
High enough bumps should block each other
Parallax mapping approximates the part of the image you should see by moving from the height back to the view vector and taking the value at that point
The final point used is:
$p_{adj} = p + \dfrac{h \cdot v_{xy}}{v_z}$
At shallow viewing angles, the previous approximation can look bad
- A small change results in a big texture change
To improve the situation, the offset is limited (by not scaling by the z component)
It flattens the bumpiness at shallow angles, but it doesn't look crazy
New equation:
$p'_{adj} = p + h \cdot v_{xy}$
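A minimal pixel-shader sketch of both forms, assuming a heightfield sampler, a height scale, and a normalized tangent-space view vector (all assumed names, not part of the original code):

// Minimal sketch (assumed names): parallax offset of texture coordinates.
// viewTS is the normalized tangent-space view vector (pointing toward the eye).
sampler HeightSampler;
float HeightScale = 0.05;

float2 ParallaxOffset(float2 uv, float3 viewTS)
{
    float h = tex2D(HeightSampler, uv).r * HeightScale;

    // Basic parallax: p_adj = p + h * v_xy / v_z
    // float2 adjusted = uv + h * viewTS.xy / viewTS.z;

    // Parallax with offset limiting: p'_adj = p + h * v_xy
    float2 adjusted = uv + h * viewTS.xy;
    return adjusted;
}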
The weakness of parallax mapping is that it can't tell where it first intersects the heightfield
Samples are made along the view vector into the heightfield
Three different research groups proposed the idea at the same time, all with slightly different techniques for doing the sampling
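A minimal sketch of this sampling idea (often called steep parallax or relief mapping): step along the view ray until it drops below the heightfield, then use that texture coordinate. The names, sample count, and the convention that the height volume sits above the polygon are assumptions, not taken from the original slides.

// Minimal sketch (assumed names): march along the view ray into the heightfield.
// tex2Dlod is used because the loop's exit condition is data-dependent.
sampler HeightSampler;
float HeightScale = 0.05;

float2 ReliefOffset(float2 uv, float3 viewTS)
{
    const int numSteps = 16;                 // assumed sample count
    float stepSize = 1.0 / numSteps;
    float2 uvStep = (viewTS.xy / viewTS.z) * HeightScale * stepSize;

    // Start where the view ray crosses the top of the height volume
    float rayHeight = 1.0;
    float2 current = uv + (viewTS.xy / viewTS.z) * HeightScale;
    float surface = tex2Dlod(HeightSampler, float4(current, 0, 0)).r;

    // March down the ray until it first dips below the heightfield
    [loop]
    for (int i = 0; i < numSteps; i++)
    {
        if (rayHeight <= surface)
            break;
        rayHeight -= stepSize;
        current -= uvStep;
        surface = tex2Dlod(HeightSampler, float4(current, 0, 0)).r;
    }
    return current;
}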
There is still active research here
Polygon boundaries are still flat in most models
Yet another possibility is to change vertex position based on texture values
- Called displacement mapping
With the geometry shader, new vertices can be created on the fly
Occlusion, self-shadowing, and realistic outlines are possible and fast
Unfortunately, collision detection becomes more difficult
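A minimal vertex-shader sketch of displacement mapping, assuming vertex texture fetch is available; the sampler name and scale are assumptions, not part of the original code.

// Minimal sketch (assumed names): displace each vertex along its normal by a
// height sampled from a texture. Requires vertex texture fetch (vs_3_0 or later).
sampler DisplacementSampler;
float DisplacementScale = 0.5;

float4 DisplacePosition(float4 position, float3 normal, float2 uv)
{
    // tex2Dlod is used because vertex shaders have no automatic mip selection
    float height = tex2Dlod(DisplacementSampler, float4(uv, 0, 0)).r;
    position.xyz += normal * height * DisplacementScale;
    return position;
}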
Radiometry is the measurement of electromagnetic radiation (for us, specifically light)
Light is the flow of photons
- We'll generally think of photons as particles, rather than waves
Photon characteristics
- Frequency ν = c/λ (Hertz)
- Wavelength λ = c/ν (meters)
- Energy Q = hν (joules) [h is Planck's constant]
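As a quick worked example (the 550 nm wavelength is just an illustrative choice for green light, values rounded):

$\nu = c/\lambda = \dfrac{3 \times 10^{8}\ \mathrm{m/s}}{550 \times 10^{-9}\ \mathrm{m}} \approx 5.5 \times 10^{14}\ \mathrm{Hz}$

$Q = h\nu \approx (6.63 \times 10^{-34}\ \mathrm{J\,s})(5.5 \times 10^{14}\ \mathrm{Hz}) \approx 3.6 \times 10^{-19}\ \mathrm{J}$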
Radiometry just deals with physics
Photometry takes everything from radiometry and weights it by the sensitivity of the human eye
Photometry is just trying to account for the eye's differing sensitivity to different wavelengths
Colorimetry is the science of quantifying human color perception
The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human perceivable color space
RGB is a transform from these values into monochromatic red, green, and blue colors
- RGB can only express the colors inside the triangle its three primaries form on the chromaticity diagram (its gamut)
As you know, there are others (HSV, HSL, etc.)
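As one concrete example of such a transform (this particular matrix is the commonly published linear sRGB, D65 white point one; it does not come from these slides), CIE XYZ converts to linear RGB as:

$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} 3.2406 & -1.5372 & -0.4986 \\ -0.9689 & 1.8758 & 0.0415 \\ 0.0557 & -0.2040 & 1.0570 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$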
Real light behaves consistently (but in a complex way)
For rendering purposes, we often divide light into categories that are easy to model
- Directional lights (like the sun)
- Omni lights (located at a point, but evenly illuminate in all directions)
- Spotlights (located at a point and have intensity that varies with direction)
- Textured lights (give light projections variety in shape or color)
  ▪ Similar to gobos, if you know anything about stage lighting
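A minimal sketch of the textured-light ("gobo") idea, with all names assumed: project the shaded point into the light's clip space and use the result to sample a texture that shapes and colors the light.

// Minimal sketch (assumed names): a textured light. The world position is
// projected by the light's view-projection matrix, and the projected
// coordinates index a gobo texture that tints the light's contribution.
float4x4 LightViewProjection;
sampler GoboSampler;

float4 SampleGobo(float4 worldPosition)
{
    float4 lightClip = mul(worldPosition, LightViewProjection);
    // Perspective divide, then map from [-1,1] clip space to [0,1] texture space
    float2 uv = lightClip.xy / lightClip.w * float2(0.5, -0.5) + 0.5;
    return tex2D(GoboSampler, uv);
}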
With a programmable pipeline, you can express lighting models of limitless complexity
The old DirectX fixed function pipeline provided a few stock lighting models
- Ambient lights
- Omni lights
- Spotlights
- Directional lights
- All lights have diffuse, specular, and ambient color
Let's see how to implement these lighting models with shaders
Ambient lights are very simple to implement in shaders
We've already seen the code
The vertex shader must simply transform the vertex into clip space (world × view × projection)
The pixel shader colors each fragment a constant color
- We could modulate this by a texture if we were using one
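Such a texture-modulated version might look like this minimal sketch, assuming a color sampler and a texture coordinate passed through from the vertex shader (neither appears in the shader below):

// Minimal sketch (assumed names): ambient color modulated by a texture.
// Assumes the vertex shader also passes a TEXCOORD0 through to the pixel shader.
sampler ColorSampler;

float4 TexturedAmbientPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    return tex2D(ColorSampler, texCoord) * AmbientColor * AmbientIntensity;
}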
float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientColor = float4(1, 0, 0, 1);
float AmbientIntensity = 0.5;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
        PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
    }
}
Directional lights model lights from a very long distance with parallel rays, like the sun
They only have color (specular and diffuse) and direction
They are virtually free from a computational perspective
Directional lights are also the standard model for BasicEffect
- You don't have to use a shader to do them
Let's look at a diffuse shader first
We add values for the diffuse light intensity and direction
We add a WorldInverseTranspose to transform the normals
We also add normals to our input and color to our output
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float4 DiffuseLightDirection = float4(1, 2, 0, 0);
float4 DiffuseColor = float4(1, .5, 0, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};
Color depends on the surface normal dotted with the light vector
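In equation form, this is the standard Lambert term that the code below computes (with saturate doing the clamping):

$c_{diffuse} = c_{light}\, i_{diffuse}\, \max(0, \mathbf{n} \cdot \mathbf{l})$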
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = mul(input.Normal, WorldInverseTranspose);
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    return output;
}
No real differences here
The diffuse color and ambient colors are added together
The technique is exactly the same
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}
Adding a specular component to the diffuse shader requires incorporating the view vector
It will be included in the shader file and be set as a parameter in the C# code
The camera location is added to the declarations
As are specular colors and a shininess parameter
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;

static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;
The output adds a normal so that the reflection vector can be computed in the pixel shader
A world position lets us compute the view vector to the camera
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};
The same computations as the diffuse shader, but we store the normal and the transformed world position in the output
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.Normal = normal;
    return output;
}
Here we finally have a real computation because we need to use the pixel normal (which is averaged from vertices) in combination with the view vector
The technique is the same
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 light = normalize(DiffuseLightDirection);
    float3 normal = normalize(input.Normal);
    float3 reflect = normalize(2 * dot(light, normal) * normal - light);
    float3 view = normalize(Camera - (float3)input.WorldPosition);
    float dotProduct = dot(reflect, view);
    float4 specular = (8 + Shininess) / (8 * PI) * SpecularIntensity * SpecularColor *
        pow(saturate(dotProduct), Shininess);
    return saturate(input.Color + AmbientColor * AmbientIntensity + specular);
}
Point lights model omni lights at a specific position
- They generally attenuate (get dimmer) over a distance and have a maximum range
- DirectX has constant, linear, and quadratic attenuation terms
- You can choose attenuation levels through shaders (see the sketch after this list)
They are more computationally expensive than directional lights because a light vector has to be computed for every pixel
It is possible to implement point lights in a deferred shader, lighting only those pixels that actually get used
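A minimal sketch of one possible attenuation choice (a smooth falloff to zero at LightRadius; the helper name and falloff curve are assumptions, not part of the shader that follows):

// Minimal sketch (assumed helper): attenuate a point light so it fades
// smoothly to zero at LightRadius. Uses the LightPosition and LightRadius
// declarations shown below.
float Attenuation(float3 worldPosition)
{
    float dist = distance(LightPosition, worldPosition);
    float falloff = saturate(1.0 - dist / LightRadius);   // linear falloff to the radius
    return falloff * falloff;                              // squared for a softer edge
}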
We add light position and radius
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float3 LightPosition;
float LightRadius = 100;
float3 Camera;

static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;
We no longer need color in the output
We do need the vector to the camera from the location
We keep the world location at that fragment

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};
We compute the normal and the world position
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    output.Normal = normal;