

  1. Week 3 - Monday

  2. What did we talk about last time?
     - Graphics rendering pipeline
     - Rasterizer stage

  3. - The final screen data, containing the colors for each pixel, is stored in the color buffer
     - The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel
     - Deeply linked with merging is visibility: the final color of a pixel should be the one corresponding to a visible polygon (and not one behind it)

  4. - To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer
     - The Z-buffer keeps track of the z-value for each pixel on the screen
     - As a fragment is rendered, its color is put into the color buffer only if its z-value is closer than the current value in the Z-buffer (which is then updated)
     - This is called a depth test
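The depth test above can be sketched in plain C#. The arrays below are stand-ins for the hardware buffers, and the class and method names are made up for illustration; this is not XNA API code.

```csharp
// Minimal sketch of the depth test. Assumes zBuffer is initialized to the
// far-plane value (1.0f) and that smaller z means closer to the camera.
class DepthTestSketch
{
    float[] zBuffer;     // one depth value per pixel
    uint[] colorBuffer;  // one packed color per pixel

    void WriteFragment(int pixel, float fragmentZ, uint fragmentColor)
    {
        if (fragmentZ < zBuffer[pixel])
        {
            zBuffer[pixel] = fragmentZ;          // update the depth buffer
            colorBuffer[pixel] = fragmentColor;  // fragment is visible, keep its color
        }
        // Otherwise the fragment is behind what is already drawn and is discarded
    }
}
```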

  5. Pros:
     - Polygons can usually be rendered in any order
     - Universal hardware support is available
     Cons:
     - Partially transparent objects must be rendered in back-to-front order (the painter's algorithm)
     - Completely transparent fragments can mess up the Z-buffer unless they are checked for
     - Z-fighting can occur when two polygons have the same (or nearly the same) z-values
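The back-to-front ordering mentioned above amounts to sorting transparent objects by distance from the camera before drawing them. A hedged sketch, where the `transparentObjects` collection and its `Position` and `Draw` members are hypothetical:

```csharp
// Painter's-algorithm ordering: draw the farthest transparent objects first,
// so nearer ones blend over them. TransparentObject is a hypothetical type.
var sorted = transparentObjects
    .OrderByDescending(obj => Vector3.Distance(cameraPosition, obj.Position));

foreach (var obj in sorted)
{
    obj.Draw();
}
```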

  6. - A stencil buffer can be used to record a rendered polygon: it stores the part of the screen covered by the polygon and can be used for special effects
     - Frame buffer is a general term for the set of all buffers
     - Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing
     - A back buffer allows us to render off screen to avoid popping and tearing

  7. - This pipeline is focused on interactive graphics
     - Micropolygon pipelines are usually used for film production
     - Predictive rendering applications usually use ray tracing renderers
     - The old model was the fixed-function pipeline, which gave little control over the application of shading functions
     - The book focuses on programmable GPUs, which allow all kinds of tricks to be done in hardware

  8. - An effect says how things should be rendered on the screen
     - We can specify this in detail using shader programs
     - The BasicEffect class gives you the ability to do effects without creating a shader program: less flexibility, but quick and easy
     - The BasicEffect class has properties for:
       - World transform
       - View transform
       - Projection transform
       - Texture to be applied
       - Lighting
       - Fog

  9. - Vertices can be stored in many different formats, depending on the data you want to keep
     - Position is pretty standard
     - Normals are optional
     - Color is optional
     - We will commonly use the VertexPositionColor type to hold vertices with color

  10. - The GPU holds vertices in a buffer that can be indexed into
      - Because it is special-purpose hardware, it has to be accessed in special ways
      - It seems cumbersome, but we will often create an array of vertices, create an appropriately sized vertex buffer, and then store the vertices into the buffer

```csharp
VertexPositionColor[] vertices = new VertexPositionColor[3]
{
    new VertexPositionColor(new Vector3(0, 1, 0), Color.Red),
    new VertexPositionColor(new Vector3(+0.5f, 0, 0), Color.Green),
    new VertexPositionColor(new Vector3(-0.5f, 0, 0), Color.Blue)
};

vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor),
    3, BufferUsage.WriteOnly);
vertexBuffer.SetData<VertexPositionColor>(vertices);
```

  11. In order to draw a vertex buffer, you have to:
      - Set the basic effect to have the appropriate transformations
      - Set the vertex buffer on the device as the current one being drawn
      - Loop over the passes in the basic effect, applying them
      - Draw the appropriate kind of primitives

```csharp
effect.World = world;
effect.View = view;
effect.Projection = projection;
effect.VertexColorEnabled = true;

GraphicsDevice.SetVertexBuffer(vertexBuffer);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
}
```

  12. - Sometimes a mesh will repeat many vertices
      - Instead of repeating those vertices, we can give a list of indices into the vertex list

```csharp
short[] indices = new short[3] { 0, 1, 2 };

indexBuffer = new IndexBuffer(GraphicsDevice, typeof(short),
    3, BufferUsage.WriteOnly);
indexBuffer.SetData<short>(indices);
```

  13. - Once you have the index buffer, drawing with it is very similar to drawing without it
      - You simply have to set it on the device
      - Then call DrawIndexedPrimitives() instead of DrawPrimitives() on the device

```csharp
basicEffect.World = world;
basicEffect.View = view;
basicEffect.Projection = projection;

GraphicsDevice.SetVertexBuffer(vertexBuffer);
GraphicsDevice.Indices = indexBuffer;

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 1);
}
```

  14. - An icosahedron has 20 sides, but it only has 12 vertices
      - By using an index buffer, we can use only 12 vertices and 60 indices (20 triangles × 3 indices each)
      - Check out the XNA tutorial on RB Whitaker's site for the data: http://rbwhitaker.wikidot.com/index-and-vertex-buffers
      - There are some minor changes needed to make the code work

  15. - It is very common to define primitives in terms of lists and strips
      - A list gives all the vertex indices for each of the shapes drawn: 2n indices to draw n lines, 3n indices to draw n triangles
      - A strip gives only the needed information to draw a series of connected primitives: n + 1 indices to draw a connected series of n lines, n + 2 indices to draw a connected series of n triangles
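As a concrete check of the counts above, here are indices for the same quad (two triangles over vertices 0 through 3) expressed both ways; the index values are an illustrative sketch, not data from the slides:

```csharp
// Two triangles forming a quad over vertices 0..3:
short[] triangleList  = { 0, 1, 2,  2, 1, 3 };  // 3n = 6 indices for n = 2 triangles
short[] triangleStrip = { 0, 1, 2, 3 };         // n + 2 = 4 indices for n = 2 triangles
```

The strip saves more as the number of connected triangles grows, since each new triangle costs one index instead of three.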

  16. - GPU stands for graphics processing unit
      - The term was coined by NVIDIA in 1999 to differentiate the GeForce 256 from chips that did not have hardware vertex processing
      - Dedicated 3D hardware was just becoming the norm, and many enthusiasts used an add-on board (such as the Voodoo2) in addition to their normal 2D graphics card

  17. - Modern GPUs are generally responsible for the geometry and rasterization stages of the overall rendering pipeline
      - The functional stages inside those stages, in order: Vertex Shader → Geometry Shader → Clipping → Screen Mapping → Triangle Setup → Triangle Traversal → Pixel Shader → Merger
      - In the color-coded diagram: red stages are fully programmable, purple stages are configurable, blue stages are not programmable at all

  18. - GPU architecture
      - Vertex shading
      - Geometry shading
      - Pixel shading

  19. - Keep reading Chapter 3
      - Keep working on Assignment 1, due this Friday by 11:59
      - Keep working on Project 1, due Friday, September 27 by 11:59
      - CS Club: tonight from 4-6 p.m. in The Point 113!
