08-11-19 Simulation Engines 2008, Markus Larsson 1

3D Graphics 2
Simulation Engines 2008
Chalmers University of Technology
Markus Larsson
markus.larsson@slxgames.com
Camera management
Three basic kinds of cameras in games
First-person
The camera is attached to the player and inherits the player's exact motion
Examples: Far Cry, Doom, Quake, etc.
Scripted
The camera moves along pre-defined paths
Examples: Alone in the Dark, Resident Evil
Third-person
The camera is located outside the body of the player and shows both the avatar and the environment
Examples: Super Mario 64, Gears of War
Third-person camera: Constraints
The camera should never be closer to a wall than the near plane
It should never go outside a level
It should translate and rotate smoothly to always try to stay at a specific point in relation to the player character
It should smooth out discontinuities in the character's movement
It should be tested for collision detection
It should be able to enter the character model when needed (e.g. in cramped spaces)
Third-person camera: Algorithm
Calculate the destination point for the camera from the character's position
The destination point is calculated by applying a displacement and rotation from the position of the player character. Different camera views can have different displacements, and it may be possible to switch between them.
Check the validity of the destination point (it could be on the wrong side of a wall)
Perform a ray intersect between character and destination point (there should be no intersection with the world geometry)
If the point is invalid, move the camera back towards the character so that it is positioned on the correct side of the wall
Calculate approximate translation and rotation motion and animate it over a series of frames (animation speed is a tweakable constant)
Check for collision using the bounding box of the camera during the motion
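The algorithm above can be sketched as one per-frame update. This is a minimal illustration, not engine API: `Vec3`, `updateCamera` and the `rayHitFraction` parameter (standing in for the result of the ray intersect in step 3) are assumed names.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, float s)   { return {v.x * s, v.y * s, v.z * s}; }

// One step of the third-person camera update.
// rayHitFraction: fraction in [0,1] along character->destination where the
// world geometry is first hit, or 1.0 if the ray is unobstructed.
Vec3 updateCamera(Vec3 cameraPos, Vec3 characterPos, Vec3 displacement,
                  float rayHitFraction, float smoothing)
{
    // 1. Destination point: character position plus a view-dependent offset.
    Vec3 dest = add(characterPos, displacement);

    // 2-3. If the ray from the character to the destination hits a wall,
    // pull the destination back so the camera stays on the correct side.
    if (rayHitFraction < 1.0f) {
        Vec3 toDest = sub(dest, characterPos);
        dest = add(characterPos, scale(toDest, rayHitFraction * 0.9f));
    }

    // 4. Animate: move a tweakable fraction of the way each frame, which
    // also smooths out discontinuities in the character's motion.
    return add(cameraPos, scale(sub(dest, cameraPos), smoothing));
}
```

With smoothing below 1.0 the camera eases toward the destination over several frames instead of snapping, which is the "tweakable constant" mentioned in step 4.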
Shaders and shader languages
1990s
Development of hardware for rendering of textured 3D primitives
Hardware T&L introduced
Dynamic lighting done using Gouraud shading
2000s
Per-pixel shading
Real-time procedural textures
Advanced texture mapping techniques
In pursuit of realism
Traditionally, research on realistic computer graphics has focused on global illumination methods such as ray tracing and radiosity
These do not work in real time
Pixar introduced the concept of shaders in RenderMan and showed that global illumination is not strictly necessary for realistic images
Instead, shader-based methods based on local reflection models can be used
What is a shader?
Three different interpretations (Watt & Policarpo, 2003):
A C-style module in the RenderMan API used for high-level control of rendering components (surface, volume and light shaders)
A combination of render states and texture maps for a multi-pass or multi-texture render of an object on fixed-pipeline GPUs
New hardware functionality for controlling the rendering of primitives on a per-pixel or per-vertex level on programmable-pipeline GPUs; these are called pixel shaders and vertex shaders, respectively
The last interpretation is the important one here
Shader types
Vertex shader
Called for every vertex in a 3D primitive
Allows for effects such as hardware skinning, perturbation of water surface, etc
Pixel (fragment) shader
Called once for every fragment in a 3D primitive (not pixel, because a fragment in a 3D primitive could correspond to one or several pixels depending on filtering settings, etc)
Can be used for procedural texture, normal maps, etc
Geometry shader
Can add vertices to and remove vertices from a mesh, and can generate geometry that would be too costly to produce on the CPU
Allows displacement mapping, etc
Unified Shader Model in DirectX 10 (Shader Model 4.0)
Shader languages
OpenGL Shading Language (GLSL)
Introduced as an extension to OpenGL 1.4 and part of the core specification since OpenGL 2.0
High-level language similar to C/C++
Cg (C for Graphics)
Nvidia's proprietary shader language
Extremely similar to HLSL, but works on both OpenGL and DirectX
Microsoft HLSL
Works on DirectX 9 and 10
Nvidia and Microsoft collaborated on its development
Reflective surfaces: Environment maps
Commonly used for reflections
Precomputed textures
Standard environment maps
A single texture representing the scene as if reflected from a steel ball
Cubic environment maps
A cube map consisting of six textures unfolded onto a cube
The most used format on modern hardware
Can also be used for advanced lighting
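As a sketch of how a cubic environment map is addressed: the dominant axis of the reflection vector selects one of the six face textures. `cubeMapFace` is a hypothetical helper that mimics the hardware lookup, not a graphics-API call.

```cpp
#include <cmath>
#include <string>

// Selects which of the six cube-map textures a reflection vector samples,
// based on its axis of largest magnitude.
std::string cubeMapFace(float x, float y, float z)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x > 0.0f ? "+X" : "-X";
    if (ay >= az)             return y > 0.0f ? "+Y" : "-Y";
    return z > 0.0f ? "+Z" : "-Z";
}
```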
Particle systems
One of the most useful tools available
Smoke, water, fire, etc.
Each individual particle has a small or zero geometrical extent, but together they form cloud-like objects
Each particle system can contain thousands or even tens of thousands of particles
Be extremely cautious about mixing multiple render states (state changes are costly)
Particle systems
Consists of one or several emitters and a number of particles
Each emitter is responsible for spawning new particles every time step according to some distribution
Each particle contains information about its current position, size, velocity, shape and lifetime
Each update:
Emitters generate new particles
New particles are assigned initial attributes
All new particles are injected into the particle system
Any particles that have exceeded their lifetime are extinguished (and usually recycled)
The current particles are updated according to their scripts
The current particles are rendered
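The update steps above can be sketched as a minimal CPU-side particle system. The `Particle` layout and the fixed spawn state are illustrative assumptions; a real emitter would draw initial attributes from a distribution.

```cpp
#include <vector>

struct Particle {
    float x, y, z;      // current position
    float vx, vy, vz;   // velocity
    float life;         // remaining lifetime in seconds
};

// One update of a minimal particle system: emission, injection,
// extinction of expired particles, then simulation of the survivors.
void updateParticles(std::vector<Particle>& particles, int emitCount, float dt)
{
    // Emitter spawns new particles (here with a fixed initial state).
    for (int i = 0; i < emitCount; ++i)
        particles.push_back({0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 2.0f});

    // Extinguish particles whose lifetime has expired (swap-and-pop
    // keeps the storage compact, i.e. recycles the slots).
    for (std::size_t i = 0; i < particles.size(); ) {
        if (particles[i].life <= 0.0f) {
            particles[i] = particles.back();
            particles.pop_back();
        } else {
            ++i;
        }
    }

    // Integrate positions and age the survivors.
    for (Particle& p : particles) {
        p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
        p.life -= dt;
    }
}
```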
Water
Ideally based on fluid simulations
Way too costly for real-time use
Mostly represented by a simple plane/quad
In more advanced scenarios, represented by a displaced grid
Reflections are either entirely faked or done using a planar mirror
The scene is inverted and the reflection is rendered to a texture, which is then rendered onto the water mesh using projective texturing
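A minimal sketch of the inversion step, assuming a horizontal water plane: each point is mirrored across the plane. The real renderer applies the equivalent reflection matrix to the whole scene before rendering to the texture.

```cpp
struct Vec3 { float x, y, z; };

// Mirrors a point across a horizontal water plane at height waterY --
// the "inverted scene" used for planar-mirror reflections.
Vec3 reflectAcrossWaterPlane(Vec3 p, float waterY)
{
    return {p.x, 2.0f * waterY - p.y, p.z};
}
```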
Explosions
There is no universal method
Usually several effects are combined until it “looks good”
Often, animated billboards of prerecorded explosions are combined with debris that is either actual geometry or particles
Nowadays, actual models moved by the physics engine are often used
Lightmaps
Precomputed lighting and shadows on static geometry in the scene
Advantages:
Allows for baking advanced lighting such as radiosity
Extremely fast
Disadvantages:
Only works on static geometry
Requires a lot of precomputation for good quality
May require a lot of memory
Stencil shadows
Used for dynamic shadows
Uses the stencil buffer
Advantages:
Creates crisp shadows without aliasing artifacts
Stable and potentially very fast algorithm
Disadvantages:
Difficult to get soft shadows (but possible)
Extruding the shadow volumes in hardware is cumbersome and requires modifications to the meshes
Requires multiple render passes
Patent problems with Carmack's reverse
Stencil shadows
Empty the stencil buffer
Draw the whole scene with ambient lighting
The z-buffer is filled, and the color buffer is filled with the color of surfaces in shadow
Turn off updates to the z-buffer and color buffer and draw the front-facing polygons of the shadow volumes
This increments the stencil buffer; all pixels in or behind shadow volumes receive a positive value
Repeat for the back-facing polygons, decrementing the stencil buffer; the values decrease where we leave the shadow volumes
Draw the diffuse and specular materials in the scene where the value of the stencil buffer is zero
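The counting that these passes perform can be illustrated on the CPU (depth-pass variant, hypothetical helper): front faces of shadow volumes lying in front of the pixel's surface increment the count, back faces decrement it, and the pixel is lit where the count returns to zero.

```cpp
// CPU illustration of the per-pixel stencil counting. Each depth array
// holds the depths of shadow-volume faces covering this pixel; a face
// only affects the stencil if it passes the depth test, i.e. lies in
// front of the surface.
int stencilValue(const float* frontFaceDepths, int numFront,
                 const float* backFaceDepths, int numBack,
                 float surfaceDepth)
{
    int stencil = 0;
    for (int i = 0; i < numFront; ++i)
        if (frontFaceDepths[i] < surfaceDepth) ++stencil;  // entering a volume
    for (int i = 0; i < numBack; ++i)
        if (backFaceDepths[i] < surfaceDepth) --stencil;   // leaving a volume
    return stencil;  // 0 => lit, > 0 => in shadow
}
```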
Shadow maps
Texture-based shadows
Renders the scene from the light's point of view
For each fragment, checks if it is the closest one to the light
Advantages:
Can be done almost entirely on the GPU
Works very similarly to drawing the scene regularly and allows reuse of frustum-check code, etc.
Good candidate for soft shadow algorithms
Disadvantages:
Precision problems and aliasing artifacts
Good for directional and spot lights, but omni lights may require up to six individual textures per light
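The per-fragment check reduces to a single depth comparison. The bias parameter is an assumption added here to illustrate how the precision problems mentioned above are usually fought.

```cpp
// Per-fragment shadow-map test sketched on the CPU: the fragment's depth
// as seen from the light is compared with the depth stored at the
// corresponding shadow-map texel.
bool inShadow(float shadowMapDepth, float fragmentDepthFromLight, float bias)
{
    // Shadowed if something closer to the light was recorded at this texel.
    return fragmentDepthFromLight - bias > shadowMapDepth;
}
```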
Soft shadows
There are lots of techniques, but usually shadow maps with multiple samples per texel are used
Often extremely expensive to compute
“Free” in lightmaps
Textures
Texture coordinates (U,V)
Unfold the model
Paint directly on the model with software such as BodyPaint, ZBrush, etc.
Texture space/tangent space can be interesting for advanced effects such as normal mapping
Displacement mapping
Commonly used technique in raytracers
Not possible on DirectX 9 hardware
Cannot move vertices that do not exist
Can be hacked for certain uses by doing texture lookups in the vertex shader
Possible for real on DirectX 10 hardware
Geometry shaders allow tessellation of objects without CPU intervention
Normal maps
High-poly surface detail is transferred to low-poly models by storing normals in textures
Normals are usually stored in tangent space
Can produce extremely good results with fairly low computational costs
Works well on most types of objects lit with dynamic lights
Often, tools like ZBrush are used to create the high-poly objects, but normal maps can also be created from heightmaps
Tangent space
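A sketch of how a tangent-space normal read from a normal map is brought into world space: it is expressed in the basis spanned by the surface's tangent, bitangent and normal (the TBN basis). This assumes an orthonormal basis; real pipelines often re-orthogonalize it per vertex.

```cpp
struct Vec3 { float x, y, z; };

// Transforms a tangent-space normal (as decoded from a normal map) into
// world space using the tangent, bitangent and normal of the surface.
Vec3 tangentToWorld(Vec3 n, Vec3 tangent, Vec3 bitangent, Vec3 normal)
{
    return {
        n.x * tangent.x + n.y * bitangent.x + n.z * normal.x,
        n.x * tangent.y + n.y * bitangent.y + n.z * normal.y,
        n.x * tangent.z + n.y * bitangent.z + n.z * normal.z,
    };
}
```

Note that a "flat" normal-map texel, (0, 0, 1) in tangent space, always maps onto the surface normal itself.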
Parallax mapping
Also known as offset mapping, as it offsets the texture coordinates
Very cheap and fairly convincing on flat surfaces
High-quality versions require an iterative process, which is more expensive but produces extremely good results (where applicable)
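The basic (non-iterative) offset step can be sketched as follows; the `UV` type, the tangent-space view direction and the `heightScale` constant are illustrative assumptions.

```cpp
struct UV { float u, v; };

// Basic parallax (offset) mapping: shift the texture coordinate along the
// tangent-space view direction, in proportion to the height sampled at the
// texel; heightScale is a tweakable constant.
UV parallaxOffset(UV uv, float viewX, float viewY, float viewZ,
                  float height, float heightScale)
{
    float offset = height * heightScale / viewZ;
    return {uv.u + viewX * offset, uv.v + viewY * offset};
}
```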
Cel shading
Non-photorealistic technique
Can be done either by drawing multiple passes or in a shader by examining the scalar product of the normal and view direction
Check the sample in the Ogre SDK
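A minimal sketch of the two halves of the effect, assuming the scalar products are already computed: the diffuse term is snapped to a few discrete bands for the flat cartoon look, and the silhouette is detected where the normal is nearly perpendicular to the view direction.

```cpp
#include <cmath>

// Quantize the diffuse term (normal . light) into a few discrete bands.
float celBand(float nDotL, int bands)
{
    if (nDotL < 0.0f) nDotL = 0.0f;
    return std::floor(nDotL * bands) / bands;
}

// Outline test: draw in black where normal . view is near zero.
bool isSilhouette(float nDotV, float threshold)
{
    return std::fabs(nDotV) < threshold;
}
```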
High Dynamic Range rendering
Often done by cheating
Can be done for real on Shader Model 3.0 hardware
Render to a target with higher precision than can be displayed
Then scale the values in a second pass to simulate different levels of exposure
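The second-pass scaling can be sketched as a simple exponential tone-map; the exact operator varies from game to game, so this is one illustrative choice, not the definitive one.

```cpp
#include <cmath>

// Compresses a high-precision HDR value into the displayable [0,1) range;
// exposure simulates the camera exposure level.
float toneMap(float hdrValue, float exposure)
{
    return 1.0f - std::exp(-hdrValue * exposure);
}
```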
PRT
Precomputed radiance transfer can be used to simulate soft shadows, interreflections and subsurface scattering
Uses spherical harmonics to encode and approximate the light transfer function
Decoded in a programmable shader
Only works on low-frequency effects with a reasonable number of parameters
Lights can move, but objects cannot move relative to each other
Check the DirectX SDK
Post processing
Generally done by first rendering the scene to a texture and then applying that texture to a full-screen quad which is drawn with a pixel shader applied to it
Multiple post-processing effects can be combined, either in sequence or by blending between them
Bloom
Works best when combined with HDR rendering
In essence: apply a Gaussian blur on areas with intense light
A cheap substitute for an actual blur is to downsample the image a few times and then combine the original image with the downsampled versions
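The cheap substitute can be sketched in one dimension: halve the image by averaging neighbouring pixels, then add a weighted copy of the low-resolution result back onto the original. The weighting and the nearest-neighbour upsampling are illustrative choices.

```cpp
#include <vector>

// Halve the image by averaging neighbouring pixel pairs (1D for brevity).
std::vector<float> downsample(const std::vector<float>& img)
{
    std::vector<float> out(img.size() / 2);
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = 0.5f * (img[2 * i] + img[2 * i + 1]);
    return out;
}

// Combine the original with the downsampled version to fake the bloom halo.
std::vector<float> addBloom(std::vector<float> img,
                            const std::vector<float>& blurred, float weight)
{
    for (std::size_t i = 0; i < img.size(); ++i)
        img[i] += weight * blurred[i / 2];  // upsample by pixel duplication
    return img;
}
```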
Depth of Field
DOF is extremely important in real-world photography and film
The modern approach (Thorsten Scheuermann, 2004):
Downsample and pre-blur the image
Use a variable-size filter kernel to approximate the circle of confusion
Blend between the original and pre-blurred image for better image quality
Take measures to prevent “leaking” of the sharp foreground into the blurry background
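The blend between the sharp and pre-blurred images is driven by a per-pixel factor that grows with distance from the focal plane, a stand-in for the circle of confusion. `focalDepth` and `focalRange` are tweakable assumptions.

```cpp
#include <cmath>

// Blend factor in [0,1]: 0 = fully sharp (in focus), 1 = fully blurred.
float blurFactor(float depth, float focalDepth, float focalRange)
{
    float f = std::fabs(depth - focalDepth) / focalRange;
    return f > 1.0f ? 1.0f : f;
}
```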
Motion blur
The lack of motion blur is largely why 25-30 fps appears to stutter in a game but works in film
The simplest form of motion blur is done by using the accumulation buffer and combining previous frames with the current one
Does not work well without very high framerates
When used on meshes directly, a vertex shader stretches some vertices along the direction of motion and fades their transparency
Can be done in post-processing by first drawing the scene to a velocity buffer, which determines how much to blur the scene in the post-processing pass
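The accumulation-buffer form reduces to one blend per pixel: the current frame is mixed with an exponentially decaying history of previous frames. The `persistence` parameter is an illustrative name for how long the trail lingers.

```cpp
// One pixel of accumulation-buffer motion blur: blend the current frame
// with the running accumulation of previous frames.
float motionBlurPixel(float current, float accumulated, float persistence)
{
    return persistence * accumulated + (1.0f - persistence) * current;
}
```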
Materials in Ogre
material Material/SOLID/TEX/sandstone.jpg
{
    technique
    {
        pass
        {
            diffuse 0.8 0.8 0.8
            specular 0.5 0.5 0.5 12.5

            texture_unit
            {
                texture sandstone.jpg
            }
        }
    }
}
Post processing in Ogre
There are plenty of pre-defined post-processing effects
The Compositor framework allows for easy access to the effects
Use compositor scripts to add effects or combine multiple effects
Check the Ogre SDK for demos
Shadows in Ogre
Extremely easy:
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_MODULATIVE); // or any other ShadowTechnique enum value below
Stencil shadows:
SHADOWTYPE_STENCIL_MODULATIVE
SHADOWTYPE_STENCIL_ADDITIVE
Shadow maps:
SHADOWTYPE_TEXTURE_MODULATIVE
SHADOWTYPE_TEXTURE_ADDITIVE
There are also more advanced methods
Check the example in the SDK
Next lecture
Monday, 24 November: Artificial Intelligence