

SLIDE 1

Week 9 - Wednesday

SLIDE 2

• What did we talk about last time?
• Textures

  • Volume textures
  • Cube maps
  • Texture caching and compression
  • Procedural texturing
  • Texture animation
  • Material mapping
  • Alpha mapping
SLIDE 7

• Bump mapping refers to a wide range of techniques designed to increase small-scale detail
• Most bump mapping is implemented per-pixel in the pixel shader
• The 3D effects of bump mapping are greater than textures alone, but less than full geometry

SLIDE 8

• Macro-geometry is made up of vertices and triangles
  • Limbs and head of a body
• Micro-geometry consists of characteristics shaded in the pixel shader, often with texture maps
  • Smoothness (specular color and m parameter) based on the microscopic smoothness of a material
• Meso-geometry is the stuff in between: too complex for macro-geometry but large enough to change over several pixels
  • Wrinkles
  • Folds
  • Seams
• Bump mapping techniques are primarily concerned with mesoscale effects

SLIDE 9

• James Blinn proposed the offset vector bump map or offset map
  • Stores b_u and b_v values at each texel, giving the amount that the normal should be changed at that point
• Another method is a heightfield, a grayscale image that gives the varying heights of a surface
  • Normal changes can be computed from the heightfield
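
• One common way to write the perturbed normal from the stored offsets, with t and b as the tangent-frame axes (this formulation is an assumption, not from the slides):

n' = normalize(n + b_u · t + b_v · b)

• As a sketch of computing normal changes from a heightfield (HeightMap, TexelSize, and BumpScale are assumed names, not from the course shaders), central differences on neighboring texels give the slopes that tilt a tangent-space normal:

texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };
float2 TexelSize;   // 1.0 / texture dimensions (assumed parameter)
float BumpScale;    // apparent bump strength (assumed parameter)

float3 NormalFromHeightfield(float2 uv)
{
    // Heights to the left/right and below/above this texel
    float hL = tex2D(HeightSampler, uv - float2(TexelSize.x, 0)).r;
    float hR = tex2D(HeightSampler, uv + float2(TexelSize.x, 0)).r;
    float hD = tex2D(HeightSampler, uv - float2(0, TexelSize.y)).r;
    float hU = tex2D(HeightSampler, uv + float2(0, TexelSize.y)).r;

    // Slopes in u and v tilt the unperturbed normal (0, 0, 1)
    return normalize(float3(BumpScale * (hL - hR),
                            BumpScale * (hD - hU),
                            1.0));
}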
SLIDE 10

• The results are the same, but these kinds of deformations are usually stored in normal maps
  • Normal maps give the full 3-component normal change
• Normal maps can be in world space (uncommon)
  • Only usable if the object never moves
• Or object space
  • Requires the object only to undergo rigid body transforms
• Or tangent space
  • Relative to the surface, can assume positive z
• Lighting and the surface have to be in the same space to do shading
• Filtering normal maps is tricky
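
• As a sketch of the tangent-space case (NormalMap, NormalSampler, and ApplyNormalMap are illustrative names, not from the course shaders), the sampled normal is remapped from [0, 1] to [-1, 1] and rotated into world space so the light and surface share a space:

texture NormalMap;
sampler NormalSampler = sampler_state { Texture = <NormalMap>; };

float3 ApplyNormalMap(float2 uv, float3 normalW, float3 tangentW)
{
    // Rebuild the bitangent and form the tangent-to-world rotation
    float3 bitangentW = cross(normalW, tangentW);
    float3x3 TBN = float3x3(tangentW, bitangentW, normalW);

    // Normal maps store components in [0, 1]; remap to [-1, 1]
    float3 n = tex2D(NormalSampler, uv).xyz * 2.0 - 1.0;

    // Rotate the tangent-space normal into world space for lighting
    return normalize(mul(n, TBN));
}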

SLIDE 11

• Bump mapping doesn't change what can be seen, just the normal
• High enough bumps should block each other
• Parallax mapping approximates the part of the image you should see by moving from the height back to the view vector and taking the value at that point
• The final point used is:

p_adj = p + (h · v_xy) / v_z
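
• A minimal pixel-shader sketch of that adjustment (reusing the hypothetical HeightSampler from the earlier heightfield sketch; HeightScale and the tangent-space view vector viewTS are also assumptions):

float HeightScale = 0.05;   // assumed scale for stored heights

float2 ParallaxOffset(float2 uv, float3 viewTS)
{
    // Height at the original texel
    float h = tex2D(HeightSampler, uv).r * HeightScale;

    // p_adj = p + h * v_xy / v_z
    return uv + h * viewTS.xy / viewTS.z;
}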

SLIDE 12

• At shallow viewing angles, the previous approximation can look bad
  • A small change results in a big texture change
• To improve the situation, the offset is limited (by not scaling by the z component)
• It flattens the bumpiness at shallow angles, but it doesn't look crazy
• New equation:

p'_adj = p + h · v_xy
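
• In the sketch above, offset limiting just drops the divide by v_z (same assumed names):

float2 ParallaxOffsetLimited(float2 uv, float3 viewTS)
{
    float h = tex2D(HeightSampler, uv).r * HeightScale;

    // p'_adj = p + h * v_xy: the shift can never exceed the height
    return uv + h * viewTS.xy;
}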

SLIDE 13

• The weakness of parallax mapping is that it can't tell where it first intersects the heightfield
• Samples are made along the view vector into the heightfield
• Three different research groups proposed the idea at the same time, all with slightly different techniques for doing the sampling
• There is still active research here
• Polygon boundaries are still flat in most models

SLIDE 14

• Yet another possibility is to change vertex position based on texture values
  • Called displacement mapping
• With the geometry shader, new vertices can be created on the fly
• Occlusion, self-shadowing, and realistic outlines are possible and fast
• Unfortunately, collision detection becomes more difficult
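
• A minimal vertex-side sketch (assuming a shader model with vertex texture fetch, such as vs_3_0, and again borrowing the hypothetical HeightSampler plus an assumed DisplacementScale):

float DisplacementScale = 1.0;   // assumed world-space height scale

float4 DisplaceVertex(float4 position, float3 normal, float2 uv)
{
    // Vertex shaders must use tex2Dlod since no derivatives exist here
    float h = tex2Dlod(HeightSampler, float4(uv, 0, 0)).r;

    // Push the vertex along its normal by the sampled height
    position.xyz += normal * h * DisplacementScale;
    return position;
}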

SLIDE 16

• Radiometry is the measurement of electromagnetic radiation (for us, specifically light)
• Light is the flow of photons
  • We'll generally think of photons as particles, rather than waves
• Photon characteristics
  • Frequency ν = c/λ (hertz)
  • Wavelength λ = c/ν (meters)
  • Energy Q = hν (joules) [h is Planck's constant]
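
• For example (not from the slides, just plugging into the formulas above), a green photon with λ = 555 nm has:

ν = c/λ = (3.00 × 10^8 m/s) / (555 × 10^-9 m) ≈ 5.4 × 10^14 Hz
Q = hν ≈ (6.63 × 10^-34 J·s)(5.4 × 10^14 Hz) ≈ 3.6 × 10^-19 J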

SLIDE 17

• Radiometry just deals with physics
• Photometry takes everything from radiometry and weights it by the sensitivity of the human eye
• Photometry is just trying to account for the eye's differing sensitivity to different wavelengths
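
• That weighting is conventionally written with the luminous efficiency curve V(λ), which peaks at 555 nm (standard photometry, not from the slides):

Φ_v = 683 lm/W · ∫ V(λ) Φ_e(λ) dλ

where Φ_e(λ) is the spectral radiant flux from radiometry and Φ_v is the resulting luminous flux in lumens.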

SLIDE 18

• Colorimetry is the science of quantifying human color perception
• The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human perceivable color space
• RGB is a transform from these values into monochromatic red, green, and blue colors
  • RGB can only express colors in the triangle
• As you know, there are others (HSV, HSL, etc.)
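
• As a concrete instance (values from the sRGB standard with a D65 white point, not from the slides), the XYZ-to-linear-RGB transform is a 3x3 matrix:

R =  3.2406 X - 1.5372 Y - 0.4986 Z
G = -0.9689 X + 1.8758 Y + 0.0415 Z
B =  0.0557 X - 0.2040 Y + 1.0570 Z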

SLIDE 20

• Real light behaves consistently (but in a complex way)
• For rendering purposes, we often divide light into categories that are easy to model
  • Directional lights (like the sun)
  • Omni lights (located at a point, but evenly illuminate in all directions)
  • Spotlights (located at a point and have intensity that varies with direction)
  • Textured lights (give light projections variety in shape or color)
    ▪ Similar to gobos, if you know anything about stage lighting

SLIDE 21

• With a programmable pipeline, you can express lighting models of limitless complexity
• The old DirectX fixed function pipeline provided a few stock lighting models
  • Ambient lights
  • Omni lights
  • Spotlights
  • Directional lights
  • All lights have diffuse, specular, and ambient color
• Let's see how to implement these lighting models with shaders

SLIDE 22

• Ambient lights are very simple to implement in shaders
• We've already seen the code
• The vertex shader must simply transform the vertex into clip space (world x view x projection)
• The pixel shader colors each fragment a constant color
  • We could modulate this by a texture if we were using one
SLIDE 23

float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientColor = float4(1, 0, 0, 1);
float AmbientIntensity = 0.5;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

SLIDE 24

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

SLIDE 25

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
        PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
    }
}

SLIDE 26

• Directional lights model lights from a very long distance with parallel rays, like the sun
• They only have color (specular and diffuse) and a direction
• They are virtually free from a computational perspective
• Directional lights are also the standard model for BasicEffect
  • You don't have to use a shader to do them
• Let's look at a diffuse shader first

SLIDE 27

• We add values for the diffuse light intensity and direction
• We add a WorldInverseTranspose to transform the normals
• We also add normals to our input and color to our output

float4x4 World;
float4x4 View;
float4x4 Projection;
float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float4x4 WorldInverseTranspose;
float4 DiffuseLightDirection = float4(1, 2, 0, 0);
float4 DiffuseColor = float4(1, .5, 0, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};

SLIDE 28

• Color depends on the surface normal dotted with the light vector

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = mul(input.Normal, WorldInverseTranspose);
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);

    return output;
}

SLIDE 29

• No real differences here
• The diffuse color and ambient colors are added together
• The technique is exactly the same

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}

SLIDE 30

• Adding a specular component to the diffuse shader requires incorporating the view vector
• It will be included in the shader file and be set as a parameter in the C# code

SLIDE 31

• The camera location is added to the declarations
• As are specular colors and a shininess parameter

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;
static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float3 DiffuseLightDirection;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;
float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;

SLIDE 32

• The output adds a normal so that the half vector can be computed in the pixel shader
• A world position lets us compute the view vector to the camera

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};

SLIDE 33

• The same computations as the diffuse shader, but we store the normal and the transformed world position in the output

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.Normal = normal;

    return output;
}

SLIDE 34

• Here we finally have a real computation because we need to use the pixel normal (which is averaged from vertices) in combination with the view vector
• The technique is the same

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 light = normalize(DiffuseLightDirection);
    float3 normal = normalize(input.Normal);
    float3 reflect = normalize(2 * dot(light, normal) * normal - light);
    float3 view = normalize(Camera - (float3)input.WorldPosition);
    float dotProduct = dot(reflect, view);
    float4 specular = (8 + Shininess) / (8 * PI) * SpecularIntensity *
        SpecularColor * pow(saturate(dotProduct), Shininess);
    return saturate(input.Color + AmbientColor * AmbientIntensity + specular);
}
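
• The (8 + Shininess) / (8π) factor appears to be an energy-normalization term for the specular lobe (as in Real-Time Rendering's normalized Blinn-Phong), which keeps highlights from dimming overall as Shininess grows and the lobe narrows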

SLIDE 35

• Point lights model omni lights at a specific position
  • They generally attenuate (get dimmer) over a distance and have a maximum range
  • DirectX has a constant attenuation, a linear attenuation, and a quadratic attenuation
  • You can choose attenuation levels through shaders, as sketched after this list
• They are more computationally expensive than directional lights because a light vector has to be computed for every pixel
• It is possible to implement point lights in a deferred shader, lighting only those pixels that actually get used
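
• A minimal sketch of that classic three-term attenuation (the k constants are hypothetical tuning values; the shader on the following slides uses a simpler quadratic falloff against LightRadius instead):

float Attenuation(float3 lightPosition, float3 worldPosition)
{
    float d = length(lightPosition - worldPosition);
    float kc = 1.0;    // constant term (assumed value)
    float kl = 0.1;    // linear term (assumed value)
    float kq = 0.01;   // quadratic term (assumed value)

    // DirectX-style attenuation: 1 / (kc + kl*d + kq*d^2)
    return 1.0 / (kc + kl * d + kq * d * d);
}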

SLIDE 36

• We add light position and radius

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 LightPosition;
float LightRadius = 100;
float3 Camera;
static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;
float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;

SLIDE 37

• We no longer need color in the output
• We do need the vector to the camera from the location
• We keep the world location at that fragment

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};

SLIDE 38

• We compute the normal and the world position

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    output.Normal = normal;

    return output;
}

SLIDE 39

• Lots of junk in here

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 lightDirection = LightPosition - (float3)input.WorldPosition;
    float3 normal = normalize(input.Normal);

    // Quadratic falloff from full intensity at the light to zero at LightRadius
    float intensity = pow(1 - saturate(length(lightDirection) / LightRadius), 2);
    lightDirection = normalize(lightDirection);

    float3 view = normalize(Camera - (float3)input.WorldPosition);
    float diffuseColor = dot(normal, lightDirection) * intensity;
    float3 reflect = normalize(2 * diffuseColor * normal - lightDirection);
    float dotProduct = dot(reflect, view);
    float4 specular = (8 + Shininess) / (8 * PI) * SpecularIntensity *
        SpecularColor * pow(saturate(dotProduct), Shininess) * intensity;

    return saturate(diffuseColor + AmbientColor * AmbientIntensity + specular);
}

SLIDE 41

• BRDFs
• Implementing BRDFs
• Texture mapping in shaders

SLIDE 42

• Finish reading Chapter 7