SLIDE 1
Week 10 - Wednesday
What did we talk about last time?
- Fresnel reflection
- Snell's Law
- Microgeometry effects
- Implementing BRDFs
- Ambient lighting
- Image-based rendering
SLIDE 2
SLIDE 3
SLIDE 4
SLIDE 5
SLIDE 6
SLIDE 7
A more complicated tool for area lighting is environment mapping (EM)
The key assumption of EM is that only direction matters
- Light sources must be far away
- The object does not reflect itself
In EM, we make a 2D table of the incoming radiance based on direction
Because the table is 2D, we can store it in an image
SLIDE 8
The radiance reflected by a mirror is based on the reflected view vector r = 2(n·v)n - v
The reflectance equation is:
- L_o(v) = R_F(θ_i) L_i(r)
where R_F is the Fresnel reflectance and L_i is the incoming radiance from vector r
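The reflected view vector formula above can be sketched directly; a minimal Python version (note that HLSL's reflect() intrinsic uses the opposite convention, taking an incident vector pointing toward the surface):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect_view(n, v):
    """Reflected view vector r = 2(n.v)n - v.

    n: unit surface normal; v: unit vector from the surface toward the eye.
    """
    d = 2.0 * dot(n, v)
    return tuple(d * ni - vi for ni, vi in zip(n, v))

# Looking straight down the normal reflects straight back:
r = reflect_view((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # (0.0, 0.0, 1.0)
```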
SLIDE 9
Steps:
1. Generate or load a 2D image representing the environment
2. For each pixel that contains a reflective object, compute the normal at the corresponding location on the surface
3. Compute the reflected view vector from the view vector and the normal
4. Use the reflected view vector to compute an index into the environment map
5. Use the texel for incoming radiance
SLIDE 10
It doesn't work well with flat surfaces
- The direction doesn't vary much, mapping a lot of the surface to a narrow part of the environment map
- Normal mapping combined with EM helps a lot
The range of values in an environment map may be large (to cover many light intensities)
- As a consequence, the space requirements may be higher than for normal textures
SLIDE 11
Blinn and Newell used a longitude/latitude system with a projection like Mercator
- ϕ is longitude and goes from 0 to 2π
- ρ is latitude and goes from 0 to π
We can compute these from the normalized reflected view vector:
- ρ = arccos(-r_z)
- ϕ = atan2(r_y, r_x)
Problems
- There are too many texels near the poles
- The seam between the left and right halves cannot easily be interpolated across
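The two angle formulas can be sketched in a few lines of Python (wrapping ϕ into [0, 2π) since atan2 returns values in (-π, π]):

```python
import math

def latlong_coords(r):
    """Map a normalized reflected view vector to Blinn-Newell (phi, rho).

    rho = arccos(-r_z) lies in [0, pi]; phi = atan2(r_y, r_x),
    shifted into [0, 2*pi) to match the longitude range on the slide.
    """
    rho = math.acos(-r[2])
    phi = math.atan2(r[1], r[0]) % (2.0 * math.pi)
    return phi, rho
```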
SLIDE 12
Imagine the environment is viewed through a
perfectly reflective sphere
The resulting sphere map (also called a light
probe) is what you'd see if you photographed such a sphere (like a Christmas ornament)
The sphere map has a basis giving its own
coordinate system (h,u,f)
The image was generated by looking along
the f axis, with h to the right and u up (all normalized)
SLIDE 13
To use the sphere map, convert the surface normal n and the view vector v to the sphere space by multiplying by the following matrix:
  [ h_x  h_y  h_z ]^(-1)
  [ u_x  u_y  u_z ]
  [ f_x  f_y  f_z ]
Sphere mapping only shows the environment on the front of the sphere
- It is view dependent
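Because h, u, and f are orthonormal, inverting the basis matrix reduces to taking its transpose, so the transform is just three dot products; a minimal sketch:

```python
def to_sphere_space(vec, h, u, f):
    """Transform a world-space vector into sphere-map (h, u, f) space.

    h, u, f are assumed orthonormal, so the inverse of the basis matrix
    is its transpose and each output component is a dot product.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(vec, h), dot(vec, u), dot(vec, f))

# With the identity basis, the vector is unchanged:
v = to_sphere_space((1, 2, 3), (1, 0, 0), (0, 1, 0), (0, 0, 1))  # (1, 2, 3)
```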
SLIDE 14
Cubic environmental mapping is the most popular current method
- Fast
- Flexible
Take a camera, render a scene facing in all six directions
Generate six textures
For each point on the surface of the object you're rendering, map to the appropriate texel in the cube
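The hardware performs this mapping automatically: the component of the direction vector with the largest magnitude selects the face, and the other two components (divided by it) give the coordinates within that face. A Python sketch of the idea (face labels and s/t orientation here are illustrative assumptions, not a standard):

```python
def cube_face_lookup(r):
    """Pick the cube-map face and (s, t) in [0, 1] for a nonzero direction r.

    The dominant-magnitude axis selects the face; the remaining two
    components, divided by the dominant magnitude, give face coordinates.
    """
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        s, t = y / ax, z / ax
    elif ay >= az:
        face = '+y' if y > 0 else '-y'
        s, t = x / ay, z / ay
    else:
        face = '+z' if z > 0 else '-z'
        s, t = x / az, y / az
    # Remap face coordinates from [-1, 1] to [0, 1]
    return face, (s + 1) / 2, (t + 1) / 2
```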
SLIDE 15
Pros
- Fast, supported by hardware
- View independent
- Shader Model 4.0 can generate a cube map in a single pass with the
geometry shader
Cons
- Sampling uniformity is better than for sphere maps, but still not perfect (isocubes improve this)
- Still requires high dynamic range textures (lots of memory)
- Still only works for distant objects
SLIDE 16
We have talked about using environment mapping for mirror-like surfaces
The same idea can be applied to glossy (but not perfect) reflections
By blurring the environment map texture, the surface will appear rougher
For surfaces with varying roughness, we can simply access different mipmap levels on the cube map texture
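The varying-roughness trick amounts to mapping a roughness value to a mip level of the prefiltered cube map. A hedged sketch with a simple linear mapping (production engines calibrate this curve against their BRDF, so treat it as illustrative only):

```python
def roughness_to_mip(roughness, num_mips):
    """Pick a mip level of a prefiltered cube map from roughness in [0, 1].

    Linear mapping: roughness 0 samples the sharp base level,
    roughness 1 samples the blurriest (smallest) mip.
    """
    roughness = min(max(roughness, 0.0), 1.0)
    return roughness * (num_mips - 1)
```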
SLIDE 17
Environment mapping can be used for diffuse colors as well
Such maps are called irradiance environment maps
Because the viewing angle is not important for diffuse colors,
- only the surface normal is used to decide what part of the irradiance map is used
SLIDE 18
SLIDE 19
Rather than go through the rigmarole with vertex buffers, I'm
going to use a cube model
Likewise, I can use a special cube texture for skyboxes stored in a
TextureCube
This texture has 6 sub-textures for top, bottom, left, right, front
and back
cube = Content.Load<Model>("cube");
cubeTexture = Content.Load<TextureCube>("Sunset");
SLIDE 20
We need projections, a camera location, a texture, and a special
kind of sampler for cube textures
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Camera;
Texture SkyBoxTexture;

samplerCUBE SkyBoxSampler = sampler_state
{
    texture = <SkyBoxTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = Mirror;
    AddressV = Mirror;
};
SLIDE 21
Vertex shader input and output are simple
Only the position is needed for input, and only the position and texture coordinate are needed for output

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 TextureCoordinate : TEXCOORD0;
};
SLIDE 22
Other than projection, the vertex shader gives a direction as a 3D coordinate
The pixel shader uses the direction to look up the value in the texture cube

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    // Direction from the camera to the vertex, used as the cube lookup
    output.TextureCoordinate = worldPosition.xyz - Camera;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
}
SLIDE 23
Environment mapping can easily be done in shaders using the
same cube texture we used for skyboxes
We use the same lookup, but we have to compute the
reflection from the camera off the surface and out to the cube
SLIDE 24
To the skybox shader code, we add a world inverse transpose
for transforming model normal and a tint color
float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;
float4 TintColor = float4(1, 1, 1, 1);
Texture EnvironmentTexture;

samplerCUBE EnvironmentSampler = sampler_state
{
    texture = <EnvironmentTexture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = Mirror;
    AddressV = Mirror;
};
SLIDE 25
The only addition to the vertex shader is a normal, which
helps determine the direction to reflect in
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Reflection : TEXCOORD0;
};
SLIDE 26
The vertex shader adds in normal transformation
The pixel shader uses the reflect() intrinsic to find the reflection from the cube map
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    // Direction from the camera to the vertex
    float3 ViewDirection = worldPosition.xyz - Camera;
    float3 Normal = normalize(mul(normalize(input.Normal), WorldInverseTranspose).xyz);
    output.Reflection = reflect(normalize(ViewDirection), Normal);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return TintColor * texCUBE(EnvironmentSampler, normalize(input.Reflection));
}
SLIDE 27
The result would look better if the ship had more vertices
SLIDE 28
SLIDE 29
SLIDE 30
Work day for Project 2
SLIDE 31
Keep reading Chapter 8
Start reading Chapter 9
- We'll talk about global illumination on Monday