1. Week 9 - Wednesday

2. What did we talk about last time?
- Textures
- Volume textures
- Cube maps
- Texture caching and compression
- Procedural texturing
- Texture animation
- Material mapping
- Alpha mapping

3.
- Bump mapping refers to a wide range of techniques designed to increase small-scale detail
- Most bump mapping is implemented per-pixel in the pixel shader
- The 3D effect of bump mapping is stronger than textures alone, but weaker than full geometry

4.
- Macro-geometry is made up of vertices and triangles
  - Limbs and the head of a body
- Micro-geometry consists of characteristics shaded in the pixel shader, often with texture maps
  - Smoothness (specular color and the m parameter) based on the microscopic smoothness of a material
- Meso-geometry is the stuff in between: too complex for macro-geometry but large enough to change over several pixels
  - Wrinkles
  - Folds
  - Seams
- Bump mapping techniques are primarily concerned with mesoscale effects

5.
- James Blinn proposed the offset vector bump map, or offset map
  - Stores b_u and b_v values at each texel, giving the amount that the normal should be changed at that point
- Another method is a heightfield, a grayscale image that gives the varying heights of a surface
  - Normal changes can be computed from the heightfield (see the sketch below)
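The slide does not show code; here is a minimal sketch of how normal changes might be derived from a heightfield in a pixel shader, using central differences. The names HeightMap, HeightScale, and TexelSize are invented for illustration and are not part of the course shaders.

texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };

float HeightScale = 0.05;   // bump strength
float2 TexelSize;           // 1 / (texture width, texture height)

float3 NormalFromHeightfield(float2 uv)
{
    // Sample the neighboring heights
    float hL = tex2D(HeightSampler, uv - float2(TexelSize.x, 0)).r;
    float hR = tex2D(HeightSampler, uv + float2(TexelSize.x, 0)).r;
    float hD = tex2D(HeightSampler, uv - float2(0, TexelSize.y)).r;
    float hU = tex2D(HeightSampler, uv + float2(0, TexelSize.y)).r;

    // These slopes play the role of Blinn's b_u and b_v offsets
    float bu = (hL - hR) * HeightScale;
    float bv = (hD - hU) * HeightScale;

    // Perturb the unperturbed tangent-space normal (0, 0, 1)
    return normalize(float3(bu, bv, 1.0));
}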

6.
- The results are the same, but these kinds of deformations are usually stored in normal maps
- Normal maps give the full 3-component normal change
- Normal maps can be in world space (uncommon)
  - Only usable if the object never moves
- Or object space
  - Requires the object only to undergo rigid body transforms
- Or tangent space
  - Relative to the surface; can assume positive z
- Lighting and the surface have to be in the same space to do shading
- Filtering normal maps is tricky
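A minimal sketch of applying a tangent-space normal map in a pixel shader, to make the "same space" point concrete. The NormalMap texture and the tangent/binormal inputs are assumptions for illustration; the course shaders shown later do not include them.

texture NormalMap;
sampler NormalMapSampler = sampler_state { Texture = <NormalMap>; };

float3 ApplyNormalMap(float2 uv, float3 normal, float3 tangent, float3 binormal)
{
    // Unpack the stored normal from the [0,1] texture range to [-1,1]
    float3 bump = tex2D(NormalMapSampler, uv).rgb * 2.0 - 1.0;

    // The rows of the TBN matrix take the tangent-space normal into world space,
    // so the lighting and the surface end up in the same space
    float3x3 tangentToWorld = float3x3(tangent, binormal, normal);
    return normalize(mul(bump, tangentToWorld));
}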

7.
- Bump mapping doesn't change what can be seen, just the normal
- High enough bumps should block each other
- Parallax mapping approximates the point you should actually see by offsetting the texture coordinate along the view vector by an amount proportional to the height at that point
- The final point used is: p_adj = p + (h · v_xy) / v_z
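A sketch of that offset as shader code, assuming an invented HeightMap/HeightScale pair and a normalized tangent-space view vector pointing toward the eye:

texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };
float HeightScale = 0.05;

float2 ParallaxTexCoord(float2 uv, float3 viewTS)   // viewTS: tangent-space view vector, toward the eye
{
    float h = tex2D(HeightSampler, uv).r * HeightScale;

    // p_adj = p + (h * v_xy) / v_z
    return uv + h * viewTS.xy / viewTS.z;
}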

8.
- At shallow viewing angles, the previous approximation can look bad
  - A small change results in a big texture change
- To improve the situation, the offset is limited (by not scaling by the z component)
- It flattens the bumpiness at shallow angles, but it doesn't look crazy
- New equation: p'_adj = p + h · v_xy
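The offset-limited version, with the same invented names as the previous sketch:

texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };
float HeightScale = 0.05;

float2 ParallaxTexCoordLimited(float2 uv, float3 viewTS)
{
    float h = tex2D(HeightSampler, uv).r * HeightScale;

    // p'_adj = p + h * v_xy  (no divide by v_z, so shallow angles
    // can't stretch the lookup arbitrarily far across the texture)
    return uv + h * viewTS.xy;
}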

9.
- The weakness of parallax mapping is that it can't tell where the view ray first intersects the heightfield
- Samples are made along the view vector into the heightfield (a sketch follows below)
- Three different research groups proposed the idea at the same time, all with slightly different techniques for doing the sampling
- There is still active research here
  - Polygon boundaries are still flat in most models
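A sketch in the spirit of these sampling approaches, not any one paper's exact method. The ray is treated as entering the top of the height volume at this pixel's uv and descending away from the eye; HeightMap, HeightScale, and the sample count are assumptions.

texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };
float HeightScale = 0.05;

float2 MarchHeightfield(float2 uv, float3 viewTS)   // viewTS: tangent-space view vector, toward the eye
{
    const int NUM_SAMPLES = 16;
    float stepSize = 1.0 / NUM_SAMPLES;

    // Texture-space offset for one step down through the height range
    float2 stepOffset = (viewTS.xy / viewTS.z) * HeightScale * stepSize;

    float rayHeight = 1.0;   // start at the top of the height range
    float2 offset = 0;
    // tex2Dlod avoids gradient problems inside the dynamic loop
    float surfaceHeight = tex2Dlod(HeightSampler, float4(uv, 0, 0)).r;

    // Walk down along the view ray until it dips below the stored surface
    for (int i = 0; i < NUM_SAMPLES && rayHeight > surfaceHeight; i++)
    {
        rayHeight -= stepSize;
        offset -= stepOffset;
        surfaceHeight = tex2Dlod(HeightSampler, float4(uv + offset, 0, 0)).r;
    }
    return uv + offset;
}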

10.
- Yet another possibility is to change vertex position based on texture values
  - Called displacement mapping
- With the geometry shader, new vertices can be created on the fly
- Occlusion, self-shadowing, and realistic outlines are possible and fast
- Unfortunately, collision detection becomes more difficult
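A minimal sketch of displacement mapping in a vertex shader: each vertex is pushed along its normal by a value sampled from a height texture. DisplacementMap and DisplacementScale are invented names, and the vertex texture fetch assumes a shader model that supports tex2Dlod in the vertex stage.

float4x4 World;
float4x4 View;
float4x4 Projection;
texture DisplacementMap;
sampler DisplacementSampler = sampler_state { Texture = <DisplacementMap>; };
float DisplacementScale = 0.2;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput DisplaceVertex(VertexShaderInput input)
{
    VertexShaderOutput output;

    // Vertex texture fetch: the mip level must be given explicitly in a vertex shader
    float height = tex2Dlod(DisplacementSampler, float4(input.TexCoord, 0, 0)).r;
    float4 displaced = input.Position + float4(input.Normal * height * DisplacementScale, 0);

    float4 worldPosition = mul(displaced, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord;
    return output;
}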

11.
- Radiometry is the measurement of electromagnetic radiation (for us, specifically light)
- Light is the flow of photons
  - We'll generally think of photons as particles, rather than waves
- Photon characteristics:
  - Frequency ν = c / λ (hertz)
  - Wavelength λ = c / ν (meters)
  - Energy Q = hν (joules) [h is Planck's constant]
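A quick worked example of these formulas (not from the slide): a green photon with λ = 500 nm.

\nu = \frac{c}{\lambda} = \frac{3 \times 10^{8}\,\text{m/s}}{5 \times 10^{-7}\,\text{m}} = 6 \times 10^{14}\,\text{Hz},
\qquad
Q = h\nu = (6.626 \times 10^{-34}\,\text{J s})(6 \times 10^{14}\,\text{Hz}) \approx 4 \times 10^{-19}\,\text{J}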

12.
- Radiometry just deals with physics
- Photometry takes everything from radiometry and weights it by the sensitivity of the human eye
- Photometry is just trying to account for the eye's differing sensitivity to different wavelengths

13.
- Colorimetry is the science of quantifying human color perception
- The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human-perceivable color space
- RGB is a transform from these values into monochromatic red, green, and blue colors
  - RGB can only express colors in the triangle
- As you know, there are other color systems (HSV, HSL, etc.)
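As an illustration of "RGB is a transform from these values", a linear XYZ-to-RGB conversion as it might look in shader code. The slide does not specify a matrix; the coefficients below are the commonly published sRGB (D65) values and are an assumption here.

float3 XYZToLinearRGB(float3 xyz)
{
    // Commonly published sRGB (D65) XYZ-to-linear-RGB matrix (illustrative)
    float3x3 m = float3x3(
         3.2406, -1.5372, -0.4986,
        -0.9689,  1.8758,  0.0415,
         0.0557, -0.2040,  1.0570);

    // Components outside [0,1] are colors RGB cannot express (outside the triangle)
    return mul(m, xyz);
}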

14.
- Real light behaves consistently (but in a complex way)
- For rendering purposes, we often divide light into categories that are easy to model
  - Directional lights (like the sun)
  - Omni lights (located at a point, but evenly illuminate in all directions)
  - Spotlights (located at a point and have intensity that varies with direction)
  - Textured lights (give light projections variety in shape or color)
    ▪ Similar to gobos, if you know anything about stage lighting
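To make the spotlight category concrete, a minimal sketch of intensity that varies with direction: the light fades between an inner and outer cone. All parameter names here (SpotLightPosition, SpotLightDirection, the cone cosines) are invented for illustration.

float3 SpotLightPosition;
float3 SpotLightDirection;      // normalized direction the spot is aimed
float SpotInnerCos = 0.95;      // full intensity inside this cone
float SpotOuterCos = 0.85;      // zero intensity outside this cone

float SpotAttenuation(float3 worldPosition)
{
    float3 toPoint = normalize(worldPosition - SpotLightPosition);
    float cosAngle = dot(toPoint, SpotLightDirection);

    // smoothstep fades from 0 at the outer cone to 1 at the inner cone
    return smoothstep(SpotOuterCos, SpotInnerCos, cosAngle);
}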

15.
- With a programmable pipeline, you can express lighting models of limitless complexity
- The old DirectX fixed-function pipeline provided a few stock lighting models:
  - Ambient lights
  - Omni lights
  - Spotlights
  - Directional lights
- All lights have diffuse, specular, and ambient color
- Let's see how to implement these lighting models with shaders

16.
- Ambient lights are very simple to implement in shaders
- We've already seen the code
- The vertex shader must simply transform the vertex into clip space (world x view x projection)
- The pixel shader colors each fragment a constant color
  - We could modulate this by a texture if we were using one

17.
float4x4 World;
float4x4 View;
float4x4 Projection;
float4 AmbientColor = float4(1, 0, 0, 1);
float AmbientIntensity = 0.5;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

18.
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

19.
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
        PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
    }
}

20.
- Directional lights model light from a very long distance with parallel rays, like the sun
- A directional light only has color (specular and diffuse) and direction
- They are virtually free from a computational perspective
- Directional lights are also the standard model for BasicEffect
  - You don't have to use a shader to do them
- Let's look at a diffuse shader first

21.
- We add values for the diffuse light intensity and direction
- We add a WorldInverseTranspose to transform the normals
- We also add normals to our input and color to our output

float4x4 World;
float4x4 View;
float4x4 Projection;
float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float4x4 WorldInverseTranspose;

float4 DiffuseLightDirection = float4(1, 2, 0, 0);
float4 DiffuseColor = float4(1, .5, 0, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};

22.
- Color depends on the surface normal dotted with the light vector

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = mul(input.Normal, WorldInverseTranspose);
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    return output;
}

23.
- No real differences here
- The diffuse color and ambient colors are added together
- The technique is exactly the same

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}

24.
- Adding a specular component to the diffuse shader requires incorporating the view vector
- It will be included in the shader file and be set as a parameter in the C# code

25.
- The camera location is added to the declarations
- As are specular colors and a shininess parameter

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;
static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;

26.
- The output adds a normal so that the half vector can be computed in the pixel shader
- A world position lets us compute the view vector to the camera

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};
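The deck stops here, before the matching pixel shader. A minimal sketch of what it might look like, computing the Blinn-Phong half vector from the declarations on the two previous slides; this code is not shown on the slides.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 normal = normalize(input.Normal);
    float3 light = normalize(DiffuseLightDirection);

    // View vector from the surface point to the camera, then the half vector
    float3 view = normalize(Camera - input.WorldPosition.xyz);
    float3 halfVector = normalize(light + view);

    // Blinn-Phong specular term using the Shininess exponent
    float specular = pow(saturate(dot(normal, halfVector)), Shininess);
    float4 specularTerm = SpecularIntensity * SpecularColor * specular;

    // input.Color carries the diffuse contribution from the vertex shader
    return saturate(input.Color + AmbientColor * AmbientIntensity + specularTerm);
}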
