SLIDE 1

CS 548: COMPUTER GRAPHICS - INTRODUCTION TO COMPUTER GRAPHICS

SPRING 2015

  • DR. MICHAEL J. REALE
SLIDE 2

DEFINITION AND APPLICATIONS

SLIDE 3

WHAT IS COMPUTER GRAPHICS?

  • Computer graphics – generating and/or displaying imagery using computers
  • Also touches on problems related to 3D model processing, etc.
  • The focus of this course will be on the underlying mechanics and algorithms of computer graphics
  • As opposed to a computer art course, for instance, which focuses on using tools to make graphical content
SLIDE 4

SO, WHY DO WE NEED IT?

  • Practically EVERY field/discipline/application needs to use computer graphics in some way
  • Science
  • Art
  • Engineering
  • Business
  • Industry
  • Medicine
  • Government
  • Entertainment
  • Advertising
  • Education
  • Training
  • …and more!
SLIDE 5

APPLICATIONS: GRAPHS, CHARTS, AND DATA VISUALIZATION

  • Graphs and charts
  • One of the earliest applications → plotting data using a printer
  • Data visualization
  • Sciences
  • Show visual representation → see patterns in data
  • E.g., the flow of fluid (LOx) around a tube is described using streamtubes
  • Challenges: large data sets, best way to display data
  • Medicine
  • 2D images (CT, MRI scans) → 3D volume rendering
  • E.g., volume rendering and image display from the Visible Woman dataset

Images from VTK website: http://www.vtk.org/VTK/project/imagegallery.php

SLIDE 6

APPLICATIONS: CAD/CADD/CAM

  • Design and Test
  • CAD = Computer-Aided Design
  • CADD = Computer-Aided Drafting and Design
  • Usually rendered in wireframe
  • Manufacturing
  • CAM = Computer-Aided Manufacturing
  • Used for:
  • Designing and simulating vehicles, aircraft, mechanical devices, electronic circuits/devices, …
  • Architecture and rendered building designs
  • ASIDE: Often need special graphics cards to make absolutely SURE the rendered image is correct (e.g., NVIDIA Quadro vs. a garden-variety GeForce)

http://www.nvidia.com/object/siemens-plm-software-quadro-visualization.html

GAMES!!!

SLIDE 7

APPLICATIONS: COMPUTER ART / MOVIES

  • First film to use a scene that was completely computer generated:
  • Tron (1982)
  • Star Trek II: Wrath of Khan (1982)
  • …(depends on who you talk to)
  • First completely computer-generated full-length film:
  • Toy Story (1995)

http://upload.wikimedia.org/wikipedia/en/1/17/Tron_poster.jpg
http://blog.motorcycle.com/wp-content/uploads/2008/10/tron_movie_image_light_cycles__1_.jpg
https://bigcostas.files.wordpress.com/2007/12/genesis1.jpg
http://www.standbyformindcontrol.com/wp-content/uploads/2013/05/khan-poster.jpg
http://s7d2.scene7.com/is/image/Fathead/15-15991X_dis_toy_story_prod?layer=comp&fit=constrain&hei=350&wid=350&fmt=png-alpha&qlt=75,0&op_sharpen=1&resMode=bicub&op_usm=0.0,0.0,0,0&iccEmbed=0

SLIDE 8

APPLICATIONS: VIRTUAL-REALITY ENVIRONMENTS / TRAINING SIMULATIONS

  • Used for:
  • Training/Education
  • Military applications (e.g., flight simulators)
  • Entertainment

https://dbvc4uanumi2d.cloudfront.net/cdn/4.3.3/wp-content/themes/oculus/img/order/dk2-product.jpg

"Virtuix Omni Skyrim (cropped)" by Czar - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Virtuix_Omni_Skyrim_(cropped).jpg#mediaviewer/File:Virtuix_Omn i_Skyrim_(cropped).jpg

SLIDE 9

APPLICATIONS: GAMES!

Unreal 4 Engine / Crytek 3 Engine

SLIDE 10

APPLICATIONS: GAMES!

Unity Engine (Wasteland 2) / Source Engine (Portal 2)

SLIDE 11

APPLICATIONS: GAMES!

Wolfenstein: Then and Now

http://www.dosgamesarchive.com/download/wolfenstein-3d/ http://mashable.com/2014/05/23/wolfenstein-then-and-now/ http://www.gamespot.com/images/1300-2536458

SLIDE 12

CHARACTERISTICS OF COMPUTER GRAPHICS

  • Depending on your application, your focus and goals will be different:
  • Real-time vs. Non-real-time
  • Virtual Entities / Environments vs. Visualization / Representation
  • Developing Tools / Algorithms vs. Content Creation
SLIDE 13

CG CHARACTERISTICS: REAL-TIME VS. NON-REAL-TIME

  • Real-time rendering
  • 15 frames per second (AT BARE MINIMUM – still see skips but it will look more or less animated)
  • 24 fps = video looks smooth (no skips/jumps)
  • 24 – 60 fps is a more common requirement
  • Examples: first-person simulations, games, etc.
  • Non-real-time
  • Could take hours for one frame
  • Examples: CG in movies, complex physics simulations, data visualization, etc.
  • Often trade-off between speed and quality (image, accuracy, etc.)
SLIDE 14

CG CHARACTERISTICS: VIRTUAL ENTITIES / ENVIRONMENTS VS. VISUALIZATION / REPRESENTATION

  • Virtual Entities / Environments
  • Rendering a person, place, or thing
  • Often realistic rendering, but it doesn’t have to be
  • Examples: simulations (any kind), games, virtual avatars, movies
  • Visualization / Representation
  • Rendering data in some meaningful way
  • Examples: graphs/charts, data visualization, (to a lesser extent) graphical user interfaces
  • Both
  • Rendering some object / environment, but also highlighting important information
  • Examples: CAD/CAM

Remy from Pixar’s Ratatouille: http://disney.wikia.com/wiki/Remy

Face using Crytek engine: http://store.steampowered.com/app/220980/

Tiny and Big: Grandpa’s Leftovers game: http://www.mobygames.com/game/windows/tiny-and-big-grandpas-leftovers/screenshots/gameShotId,564196/

http://www.vtk.org/VTK/project/imagegallery.php

SLIDE 15

CG CHARACTERISTICS: TOOLS/ALGORITHMS VS. CONTENT CREATION

  • Developing Tools / Algorithms
  • It’s…well…developing tools and algorithms for graphical purposes
  • Using computer-graphics application programming interfaces (CG API)
  • Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
  • Interface between programming language and hardware
  • Also called “general programming packages” in Hearn-Baker book
  • Example: how do I write code that will render fur realistically?
  • Content Creation
  • Using pre-made software to create graphical objects
  • Called “special purpose software packages” in Hearn-Baker book
  • Example: how do I create a realistic-looking dog in a 3D modeler program?
SLIDE 16

THIS COURSE

  • In this course, we’ll mostly be focusing on developing tools / algorithms to render virtual entities / environments in real time

SLIDE 17

VIDEO DISPLAY DEVICES

SLIDE 18

INTRODUCTION

  • A lot of why computer graphics works the way it does is rooted in the hardware
  • Graphics cards, display devices, etc.
  • In this section, we’ll talk about video display devices (and some of the terminology associated with them)
  • CRT
  • Plasma
  • LCD/LED
  • 3D
  • For an excellent explanation of how…
  • LCD/LED monitors work: http://electronics.howstuffworks.com/lcd.htm
  • Plasma monitors work: http://electronics.howstuffworks.com/plasma-display.htm
SLIDE 19

CRT: CATHODE-RAY TUBE

  • Primary video display mechanism for a long time
  • Now mostly replaced with LCD monitors/TVs
  • Cathode = an electrode (conductor) where electrons leave the device
  • Cathode rays = beam of electrons
  • Basic idea:
  • Electron gun (cathode + control grid) shoots electrons in vacuum tube
  • Magnetic or electric coils focus and deflect beam of electrons so they hit each location on screen
  • Screen coated with phosphor → glows when hit by electrons
  • Phosphor will fade in a short time → so keep directing electron beam over same screen points
  • Called refresh CRT

By Theresa Knott (en:Image:Cathode ray Tube.PNG) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

http://i.imgur.com/ayHx5.jpg?1

SLIDE 20

DISPLAY DEFINITIONS

  • Refresh rate = frequency that picture is redrawn
  • Term still used for things other than CRTs
  • Usually expressed in Hertz (e.g., 60 Hz)
  • Persistence = how long phosphors emit light after being hit by electrons
  • Low persistence → need higher refresh rates
  • LCD monitors have an analogous concept → response time
SLIDE 21

MORE DISPLAY DEFINITIONS

  • Pixel = “picture element”; a single point on screen
  • Resolution = maximum number of points that can be displayed without overlap
  • For CRTs → a more analogue device, so the definition is a little more involved
  • Now, usually just means (number of pixels in width) x (number of pixels in height)
  • Aspect ratio = resolution width / height
  • (Although sometimes vice versa)
  • E.g., a 1920 x 1080 display → aspect ratio 1920/1080 = 16:9 ≈ 1.78
SLIDE 22

TYPES OF CRT MONITORS

  • There were two basic types of CRT monitors:
  • Vector displays
  • Raster-scan displays
SLIDE 23

CRT: VECTOR DISPLAYS

  • Also called random-scan, stroke-writing, or calligraphic displays
  • Electron beam actually draws points, lines, and curves directly
  • List of things to draw stored in display list (also called refresh display file, vector file, or display program)
  • Long list → just draw as quick as you can
  • Short list → delay refresh cycle so you don’t burn out screen!
  • Advantages: draws non-aliased lines
  • Disadvantages: not very flexible; cannot draw shaded polygons
  • Mostly abandoned in favor of raster-scan displays
SLIDE 24

CRT: RASTER-SCAN DISPLAYS

  • Most common type of CRT
  • Refresh buffer (or frame buffer) = contains picture of screen you want to draw
  • Electron gun sweeps across screen, one row at a time, from top to bottom
  • Each row = scan line
  • Advantages: flexible
  • Disadvantages: lines, edges, etc. can look jagged (i.e., aliased)

SLIDE 25

CRT: INTERLACING

  • Interlacing = first draw even-numbered scan lines, then do odd-numbered lines
  • Effectively doubles your perceived refresh rate
  • Also used to save bandwidth in TV transmission
SLIDE 26

CRT: COLOR

  • Two ways to do color with CRTs:
  • Beam-penetration
  • Have red and green layers of phosphors
  • Slow electrons → only red layer
  • Fast electrons → only green layer
  • Medium-speed electrons → both
  • Inexpensive, but limited in number of colors
  • Shadow-mask
  • Uses red-green-blue model for color (RGB)
  • Three electron guns and three phosphor dots (one for red, one for green, and one for blue)
  • Shadow mask makes sure the 3 guns hit the 3 dots
SLIDE 27

PLASMA DISPLAYS

  • Fill region between two glass plates with mixture of gases (usually includes neon)
  • Vertical conducting ribbons on one plate; horizontal conducting ribbons on the other
  • Firing voltages at intersecting pair of horizontal and vertical conductors → gas at intersection breaks down into glowing plasma of electrons and ions
  • For color → use three subpixels (red, green, and blue)
  • Advantages: very thin display; pixels very bright, so good at any angle
  • Disadvantages: expensive

http://electronics.howstuffworks.com/plasma-display2.htm

SLIDE 28

LCD DISPLAYS

  • LCD = Liquid Crystal Display
  • Liquid crystal = maintains a certain structure, but can move around like liquid
  • Structure is twisted, but applying electrical current straightens it out
  • Basic idea:
  • Two polarized light filters (one vertical, one horizontal)
  • Light passes through first filter → polarized light in vertical direction
  • “ON STATE” → no current → crystal twisted → causes light to be reoriented so it passes through horizontal filter
  • “OFF STATE” → current → crystal straightens out → light does NOT pass through

SLIDE 29

LCD DISPLAYS: WHERE DOES THE LIGHT COME FROM?

  • Mirror in back of display
  • Cheap LCD displays (e.g., calculator)
  • Just reflects ambient light in room (or prevents it from reflecting)
  • Fluorescent light in center of display
  • LED lights
  • Could be edge-lit or full array (i.e., LEDs covering entire back of screen)
  • Usually what people mean when they say an “LED monitor” = LCD display backlit by LEDs
SLIDE 30

LCD DISPLAYS: PASSIVE VS. ACTIVE

  • Passive-matrix LCDs → use grid that sends charge to pixels through transparent conductive materials
  • Simple
  • Slow response time
  • Imprecise voltage control
  • When activating one pixel, nearby pixels also turned on → makes image fuzzy
  • Active-matrix LCDs → use transistor at each pixel location (thin-film transistor technology)
  • Transistors control voltage at each pixel location → prevent leakage to other pixels
  • Control voltage → get 256 shades of gray
SLIDE 31

LCD DISPLAYS: COLOR

  • Color → have 3 subpixels (one red, one green, and one blue)

http://electronics.howstuffworks.com/lcd5.htm

SLIDE 32

3D DISPLAYS: INTRODUCTION

  • In real life, we see depth because we have two eyes (binocular vision)
  • One eye sees one angle, the other sees another
  • Brain meshes two images together to figure out how far away things are
  • By “3D” displays, we mean giving the illusion of depth by purposely giving each eye a different view
SLIDE 33

3D DISPLAYS: OLDER APPROACHES

  • Anaglyph 3D
  • Red and blue 3D glasses
  • Show different images (one more reddish, the other more blue-ish)
  • Color quality (not surprisingly) is not that great
  • View-Master toys
  • First introduced in 1939
  • Take photographs at two different angles

http://science.howstuffworks.com/3-d-glasses2.htm

http://www.ebay.com/gds/How-to-Make-a-View-Master-Reel-/10000000178723069/g.html

SLIDE 34

3D DISPLAYS: ACTIVE 3D

  • Active 3D
  • Special “shutter” glasses that sync up with monitor/TV
  • Showing only one image at a time (but REALLY fast)
  • Show left image on TV → glasses close right eye
  • Show right image on TV → glasses close left eye
  • Advantages:
  • If your monitor/TV has a high enough refresh rate, you’re good to go
  • See full screen resolution
  • Disadvantages:
  • If out of sync, see flickering
  • Image can look darker overall
  • Glasses can be cumbersome

http://www.nvidia.com/object/product-geforce-3d-vision2-wireless-glasses-kit-us.html

SLIDE 35

3D DISPLAYS: PASSIVE 3D

  • Passive 3D
  • Polarized light glasses
  • TV shows two images at once
  • Uses alternating lines of resolution
  • Similar to red-blue glasses, but color looks right
  • Advantages:
  • Glasses are lightweight
  • Image is brighter than active 3D
  • Disadvantages:
  • Need special TV
  • Only seeing HALF the vertical resolution!

Left: Passive 3D through glasses - Middle: Passive 3D without glasses - Right: Active 3D http://www.cnet.com/news/active-3d-vs-passive-3d-whats-better/

SLIDE 36

3D DISPLAYS: VR

  • Virtual reality displays (like the Oculus Rift) basically have two separate screens (one for each eye)
  • Advantages: no drop in resolution or brightness
  • Disadvantages: heavy headset

http://www.engadget.com/2013/09/30/vorpx-beta-launch/

SLIDE 37

THE GRAPHICS RENDERING PIPELINE

SLIDE 38

DEFINITIONS

  • Graphics Rendering Pipeline
  • Generates (or renders) a 2D image given a 3D scene
  • AKA the “pipeline”
  • A 3D scene contains:
  • A virtual camera – has a position and orientation (which way it’s pointing and which way is up), like in the image above
  • 3D Objects – stuff to render; have position, orientation, and scale
  • Light sources – where the light is coming from, what color the light is, what kind of light it is, etc.
  • Textures/Materials – determine how the surface of the objects should look

We’ve got this… …and we want this

SLIDE 39

PIPELINE STAGES

  • The Graphics Rendering Pipeline can be divided into 3 broad stages:
  • Application
  • Determined by (you guessed it) the application you’re running
  • Runs on CPU
  • Example operations: collision detection, animation, physics, etc.
  • Geometry
  • Computes what will be drawn, how it will be drawn, and where it will be drawn
  • Deals with transforms, projections, etc. (we’ll talk about these later)
  • Typically runs on GPU (graphics processing unit, or your graphics card)
  • Rasterizer
  • Renders final image and performs per-pixel computations (if desired)
  • Runs completely on GPU
  • Each of these stages can also be pipelines themselves
SLIDE 40

PIPELINE SPEEDS

  • Like any pipeline, only runs as fast as slowest stage
  • Slowest stage (bottleneck) → determines rendering speed
  • Rendering speed usually expressed in:
  • Frames per second (fps)
  • Hertz (1/seconds)
  • Rendering speed → called throughput in other pipeline contexts (see the timing sketch below)
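To make the fps/Hertz relationship concrete, here is a minimal C++ timing sketch (my own illustration, not from the slides; renderFrame() is a hypothetical stand-in for whatever the pipeline does each frame):

    #include <chrono>
    #include <cstdio>

    int main() {
        using clock = std::chrono::steady_clock;
        auto last = clock::now();
        for (int frame = 0; frame < 5; ++frame) {
            // renderFrame();  // hypothetical per-frame work
            auto now = clock::now();
            double seconds = std::chrono::duration<double>(now - last).count();
            last = now;
            if (seconds > 0.0)  // rendering speed = 1 / frame time
                std::printf("frame time = %f s -> %.1f fps (Hz)\n", seconds, 1.0 / seconds);
        }
        return 0;
    }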
SLIDE 41

APPLICATION STAGE

  • Programmer has complete control over this
  • Can be parallelized on CPU (if you have the cores for it)
  • Whatever else it does, it must send geometry to be rendered to the Geometry Stage
  • Geometry – rendering primitives, like points, lines, triangles, polygons, etc.
SLIDE 42

DEFINING 3D OBJECTS

  • A 3D object (or 3D model) is defined in terms of geometry or geometric primitives
  • Vertex = point
  • Most basic primitives:
  • Points (1 vertex)
  • Lines (2 vertices)
  • Triangles (3 vertices)
  • MOST of the time, an object/model is defined as a triangle mesh
SLIDE 43

DEFINING 3D OBJECTS

  • Why triangles?
  • Simple
  • Fits in single plane
  • Can define any polygon in terms of triangles
  • At minimum, a triangle mesh includes:
  • Vertices
  • Position in (x,y,z) → Cartesian coordinates
  • Face definitions
  • For each face, a list of vertex indices (i.e., which vertices go with which triangle)
  • Vertices can be reused
  • (Optional) Normals, texture coordinates, color/material information, etc. (see the mesh sketch below)
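As a rough C++ sketch of the minimum just described (the names here are my own, not from the slides): a vertex array plus per-face index triples, so shared vertices are stored only once:

    #include <array>
    #include <vector>

    // One vertex: a position in Cartesian (x, y, z) coordinates.
    struct Vertex {
        float x, y, z;
    };

    // A face: three indices into the vertex array.
    using Face = std::array<unsigned, 3>;

    struct TriangleMesh {
        std::vector<Vertex> vertices;  // vertex positions
        std::vector<Face>   faces;     // which vertices go with which triangle
    };

    // Example: a unit square in the z = 0 plane, built from two triangles
    // that reuse the shared corner vertices 0 and 2.
    TriangleMesh makeSquare() {
        TriangleMesh m;
        m.vertices = { {-1, 1, 0}, {1, 1, 0}, {1, -1, 0}, {-1, -1, 0} };
        m.faces    = { {0, 1, 2}, {0, 2, 3} };
        return m;
    }

The optional normals, texture coordinates, and colors would simply be extra arrays stored alongside the vertex positions.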
SLIDE 44

DEFINING 3D OBJECTS

  • In addition to points, lines, and triangles, other primitives exist such as:
  • Polygons
  • Again, however, you can define any polygon in terms of triangles
  • Circles, ellipses, and other curves
  • Often approximated with line segments (see the sketch after this list)
  • Splines
  • Also often approximated with lines (or triangles, if using splines to define a 3D surface)
  • Sphere
  • Center point + radius
  • Often approximated with triangles
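A small sketch of approximating a circle with line segments (my own helper, not from the slides; increasing n gives a smoother-looking circle):

    #include <cmath>
    #include <vector>

    struct Point2D { float x, y; };

    // Sample n + 1 points at equally spaced angles around the circle;
    // joining consecutive points with line segments approximates the curve.
    std::vector<Point2D> approximateCircle(float cx, float cy, float r, int n) {
        std::vector<Point2D> pts;
        pts.reserve(n + 1);
        const float twoPi = 6.2831853f;
        for (int i = 0; i <= n; ++i) {  // last point repeats the first, closing the loop
            float a = twoPi * i / n;
            pts.push_back({ cx + r * std::cos(a), cy + r * std::sin(a) });
        }
        return pts;
    }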
SLIDE 45

DEFINING 3D OBJECTS

  • Sometimes certain kinds of 3D shapes are referred to as “primitives” (but ultimately these are often approximated with a triangle mesh)

  • Cube
  • Sphere
  • Torus
  • Cylinder
  • Cone
  • Teapot
SLIDE 46

ASIDE: WAIT, TEAPOT?

  • The Utah teapot or Newell teapot
  • In 1975, Martin Newell at the University of Utah needed a 3D model, so he measured a teapot and modeled it by hand
  • Has become a very standard model for testing different graphical effects (and a bit of an inside joke)

Original drawing of the teapot: http://www.computerhistory.org/revolution/computer-graphics-music-and-art/15/206
http://community.thefoundry.co.uk/discussion/topic.aspx?f=8&t=33283

SLIDE 47

GEOMETRY STAGE

  • Performs majority of per-polygon and per-vertex operations
  • In days of yore (before graphics accelerators), this stage ran on the CPU
  • Has 5 sub-stages
  • Model and View Transform
  • Vertex Shading
  • Projection
  • Clipping
  • Screen Mapping
SLIDE 48

MODEL COORDINATES

  • Usually the vertices of a polygon mesh are relative to the model’s center point (origin)
  • Example: vertices of a 2D square
  • (-1,1)
  • (1,1)
  • (1,-1)
  • (-1,-1)
  • Called modeling or local coordinates
  • Before this model gets to the screen, it will be transformed into several different spaces or coordinate systems
  • When we start, the vertices are in model space (that is, relative to the model itself)
SLIDE 49

GEOMETRY STAGE: MODEL TRANSFORM

  • Let’s say I have a teapot in model coordinates
  • I can create an instance (copy) of that model in the 3D world
  • Each instance has its own model transform
  • Transforming model coordinates → to world coordinates
  • Coordinates are now in world space
  • Transform may include translation, rotation, scaling, etc. (see the sketch below)

Different model transforms
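A model transform sketch in C++ using the GLM math library (an assumption; the slides don’t prescribe a library). Translation, rotation, and scale compose into one matrix taking model coordinates to world coordinates:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build the model matrix for one instance of a mesh.
    // Applied right-to-left: scale first, then rotate, then translate.
    glm::mat4 makeModelTransform(const glm::vec3& position,
                                 float angleDegrees, const glm::vec3& axis,
                                 const glm::vec3& size) {
        glm::mat4 M(1.0f);  // identity
        M = glm::translate(M, position);
        M = glm::rotate(M, glm::radians(angleDegrees), axis);
        M = glm::scale(M, size);
        return M;
    }

    // Two instances of the same teapot mesh, each with its own transform:
    // glm::mat4 teapotA = makeModelTransform({ 2, 0, 0}, 45.0f, {0, 1, 0}, {1, 1, 1});
    // glm::mat4 teapotB = makeModelTransform({-2, 0, 0},  0.0f, {0, 1, 0}, {2, 2, 2});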

SLIDE 50

RIGHT-HAND RULE

  • Before we go further with coordinate spaces, etc., we need to talk about which way the x, y, and z axes go relative to each other
  • OpenGL (and other systems) use the right-hand rule
  • Point right hand toward X, with palm up towards Y → thumb points toward Z (see the sketch below)
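One quick way to see the handedness (a sketch, again assuming GLM): in a right-handed system, the cross product of the +x and +y axes gives +z:

    #include <glm/glm.hpp>
    #include <cstdio>

    int main() {
        glm::vec3 x(1, 0, 0), y(0, 1, 0);
        glm::vec3 z = glm::cross(x, y);  // right-hand rule: X x Y = +Z
        std::printf("Z = (%g, %g, %g)\n", z.x, z.y, z.z);  // prints Z = (0, 0, 1)
        return 0;
    }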
SLIDE 51

GEOMETRY STAGE: VIEW TRANSFORM

  • Only things visible to the virtual camera will be rendered
  • Camera has a position and orientation
  • The view transform will transform both the camera and all objects so that:
  • Camera starts at world origin (0,0,0)
  • Camera points in direction of negative z axis
  • Camera has up direction of positive y axis
  • Camera is set up such that the x-axis points to the right
  • NOTE: This is with the right-hand rule setup (OpenGL)
  • DirectX uses left-hand rule
  • Coordinates are now in camera space (or eye space) (see the sketch below)
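A view transform sketch, assuming GLM: glm::lookAt builds exactly this camera-at-origin, looking-down-negative-z, +y-up arrangement:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Multiplying world-space points by V moves them into camera (eye) space,
    // where the camera sits at the origin looking down the negative z axis.
    glm::mat4 makeViewTransform(const glm::vec3& cameraPos,
                                const glm::vec3& lookTarget,
                                const glm::vec3& up = glm::vec3(0, 1, 0)) {
        return glm::lookAt(cameraPos, lookTarget, up);
    }

    // Usage: a camera 5 units back on the +z axis, looking at the origin.
    // glm::mat4 V = makeViewTransform({0, 0, 5}, {0, 0, 0});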
SLIDE 52

OUR GEOMETRIC NARRATIVE THUS FAR…

  • Model coordinates → MODEL TRANSFORM → World coordinates → VIEW TRANSFORM → Camera coordinates
  • Or, put another way:
  • Model space → MODEL TRANSFORM → World space → VIEW TRANSFORM → Camera (Eye) space
SLIDE 53

GEOMETRY STAGE: VERTEX SHADING

  • Lights are defined in the 3D scene
  • 3D objects usually have one or more materials attached to them
  • So, a metal can model might have a metallic-looking material, for instance
  • Shading – determining the effect of a light (or lights) on a material
  • Vertex shading – shading calculations using vertex information
  • Vertex shading is programmable!
  • I.e., you have a great deal of control over what happens during this stage
  • We’ll talk about this in more detail later; for now, know that the geometry stage handles this part… (see the sketch below)
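As a tiny illustration of a shading calculation using vertex information (my own CPU-side sketch; in practice this would be written as a programmable vertex shader running on the GPU), simple Lambertian diffuse shading:

    #include <glm/glm.hpp>

    // Lambertian (diffuse) shading at one vertex: brightness falls off with
    // the angle between the vertex normal and the direction to the light.
    glm::vec3 shadeVertex(const glm::vec3& position, const glm::vec3& normal,
                          const glm::vec3& lightPos, const glm::vec3& lightColor,
                          const glm::vec3& materialColor) {
        glm::vec3 L = glm::normalize(lightPos - position);         // toward the light
        float diffuse = glm::max(glm::dot(glm::normalize(normal), L), 0.0f);
        return diffuse * lightColor * materialColor;               // per-vertex color
    }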
SLIDE 54

GEOMETRY STAGE: PROJECTION

  • View volume – area inside camera’s view that contains the objects we must render
  • For perspective projections, called view frustum
  • Ultimately, we will need to map 3D coordinates to 2D coordinates (i.e., points on the screen) → points must be projected from three dimensions to two dimensions
  • Projection – transforms view volume into a unit cube
  • Converts camera coordinates → normalized device coordinates
  • Simplifies clipping later
  • Still keep z coordinates for now
  • In OpenGL, unit cube = (-1,-1,-1) to (1,1,1)
SLIDE 55

GEOMETRY STAGE: PROJECTION

  • Two most commonly used projection methods:
  • Orthographic (or parallel)
  • View volume = rectangular box
  • Parallel lines remain parallel
  • Perspective
  • View volume = truncated pyramid with rectangular base → called view frustum
  • Things look smaller when farther away

Orthographic vs. Perspective
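Both projection types can be built as matrices; a sketch using GLM (an assumed library, not named in the slides). Each maps its view volume into OpenGL’s unit cube, (-1,-1,-1) to (1,1,1):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Perspective: view frustum (truncated pyramid) -> unit cube.
    glm::mat4 Ppersp = glm::perspective(glm::radians(60.0f),  // vertical field of view
                                        16.0f / 9.0f,         // aspect ratio (w/h)
                                        0.1f, 100.0f);        // near and far planes

    // Orthographic: rectangular box -> unit cube; parallel lines stay parallel.
    glm::mat4 Portho = glm::ortho(-10.0f, 10.0f,   // left, right
                                  -10.0f, 10.0f,   // bottom, top
                                    0.1f, 100.0f); // near, far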

SLIDE 56

GEOMETRY STAGE: CLIPPING

  • What we draw is determined by what’s in the view volume:
  • Completely inside → draw
  • Completely outside → don’t draw
  • Partially inside → clip against view volume and only draw part inside view volume
  • When clipping, have to add new vertices to primitive
  • Example: line is clipped against view volume, so a new vertex is added where the line intersects with the view volume

SLIDE 57

GEOMETRY STAGE: SCREEN MAPPING

  • We now have our (clipped) primitives in normalized device coordinates (which are still 3D)
  • Assuming we have a window with a minimum corner (x1, y1) and maximum corner (x2, y2)
  • Screen mapping
  • x and y of normalized device coordinates → x’ and y’ screen coordinates (also device coordinates)
  • z coordinates unchanged
  • (x’, y’, z) = window coordinates = screen coordinates + z
  • Window coordinates passed to rasterizer stage (see the mapping sketch below)
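A screen-mapping sketch (my own helper, following the slide’s definitions): x and y in [-1, 1] are remapped into the window, and z passes through untouched:

    struct WindowCoord { float x, y, z; };

    // Map normalized device coordinates to window coordinates for a
    // window with minimum corner (x1, y1) and maximum corner (x2, y2).
    WindowCoord screenMap(float ndcX, float ndcY, float ndcZ,
                          float x1, float y1, float x2, float y2) {
        WindowCoord w;
        w.x = x1 + (ndcX + 1.0f) * 0.5f * (x2 - x1);  // [-1,1] -> [x1,x2]
        w.y = y1 + (ndcY + 1.0f) * 0.5f * (y2 - y1);  // [-1,1] -> [y1,y2]
        w.z = ndcZ;                                   // kept for depth testing
        return w;
    }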
SLIDE 58

GEOMETRY STAGE: SCREEN MAPPING

  • Where is the starting point (origin) in screen coordinates?
  • OpenGL → lower-left corner (Cartesian)
  • DirectX → sometimes the upper-left corner
  • Pixel = picture element
  • Basically each discrete location on the screen
  • Where is the center of a pixel?
  • Given pixel (0,0):
  • OpenGL → (0.5, 0.5)
  • DirectX → (0.0, 0.0)

In OpenGL

SLIDE 59

RASTERIZER STAGE

  • Have transformed and projected vertices with associated shading data from geometry stage
  • Primary goal → rasterization (or scan conversion)
  • Computing and setting colors for pixels covered by the objects
  • Convert 2D vertices + z value + shading info → pixels on screen
  • Has four basic stages:
  • Triangle setup
  • Triangle traversal
  • Pixel shading
  • Merging
  • Runs completely on GPU
SLIDE 60

RASTERIZER STAGE: TRIANGLE SETUP AND TRIANGLE TRAVERSAL

  • Triangle setup
  • Performs calculations needed for next stage
  • Triangle Traversal (or Scan Conversion)
  • Finds which samples/pixels are inside each triangle
  • Generates a fragment for each part of a pixel covered by a triangle
  • Fragment properties → interpolated from triangle vertices (see the sketch below)
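A traversal sketch (illustrative only, assuming GLM for the vector types): edge functions decide whether a pixel center lies inside the triangle, and the resulting barycentric weights are what interpolate the vertex properties for the fragment:

    #include <glm/glm.hpp>

    // Edge function: positive when point p is to the left of edge a->b.
    float edge(const glm::vec2& a, const glm::vec2& b, const glm::vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Returns true (and the barycentric weights) if p is inside triangle
    // (v0, v1, v2); the weights interpolate vertex data for the fragment.
    bool covers(const glm::vec2& v0, const glm::vec2& v1, const glm::vec2& v2,
                const glm::vec2& p, glm::vec3& weights) {
        float area = edge(v0, v1, v2);          // twice the signed triangle area
        if (area == 0.0f) return false;         // degenerate triangle
        float w0 = edge(v1, v2, p) / area;
        float w1 = edge(v2, v0, p) / area;
        float w2 = edge(v0, v1, p) / area;
        if (w0 < 0 || w1 < 0 || w2 < 0) return false;
        weights = glm::vec3(w0, w1, w2);        // e.g., color = w0*c0 + w1*c1 + w2*c2
        return true;
    }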
SLIDE 61

RASTERIZER STAGE: PIXEL SHADING

  • Performs per-pixel shading computations using interpolated shading data from previous stage
  • Example: texture coordinates interpolated across triangles → get correct texture value for each pixel
  • Output: one or more colors for each pixel
  • Programmable!
SLIDE 62

FRAGMENT DEFINITION

  • Fragment = data necessary to shade/color a pixel due to a primitive covering or partially covering that pixel
  • Data can include color, depth, texture coordinates, normal, etc.
  • Values are interpolated from primitive’s vertices
  • Can have multiple fragments per pixel
  • Final pixel color will either be one of the fragments (i.e., z-buffer chooses nearest one) or a combination of fragments (e.g., alpha blending)

SLIDE 63

RASTERIZER STAGE: MERGING

  • Color buffer → stores color for each pixel
  • Merging stage → combine each fragment color with color currently stored in color buffer
  • Need to check if fragment is visible → e.g., with a Z-buffer
  • Check z value of incoming fragment
  • If closer to camera than previous value in Z-buffer → override color in color buffer and update z value (see the sketch below)
  • Advantages:
  • O(n) → n = number of primitives
  • Simple
  • Can draw OPAQUE objects in any order
  • Disadvantages:
  • Transparent objects more complicated
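A software sketch of the Z-buffer test just described (simplified: assumes smaller z means closer, and handles only opaque fragments):

    #include <cstdint>
    #include <limits>
    #include <vector>

    struct Fragment { int x, y; float z; std::uint32_t color; };

    struct Buffers {
        int width, height;
        std::vector<std::uint32_t> color;  // color buffer: one color per pixel
        std::vector<float>         depth;  // Z-buffer: nearest z seen so far

        Buffers(int w, int h)
            : width(w), height(h), color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::max()) {}

        // Merge one opaque fragment: keep it only if it is closer to the
        // camera than what the Z-buffer already holds for that pixel.
        void merge(const Fragment& f) {
            int i = f.y * width + f.x;
            if (f.z < depth[i]) {    // closer than the previous fragment
                depth[i] = f.z;      // update z value
                color[i] = f.color;  // override color in the color buffer
            }
        }
    };

Because the test is per-pixel and order-independent, opaque primitives can be merged in any order, which is where the O(n) behavior comes from.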
SLIDE 64

RASTERIZER STAGE: MERGING

  • Frame buffer
  • Means all buffers on system (but sometimes just refers to color + Z-buffer)
  • To prevent the user from seeing the buffer while it’s being updated → use double-buffering
  • Two buffers, one visible and one invisible
  • Draw on invisible buffer → swap buffers (see the sketch below)
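A double-buffering sketch assuming the GLFW windowing library (the slides don’t name one): all drawing goes to the invisible back buffer, and the swap makes it visible all at once:

    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return -1;
        GLFWwindow* window = glfwCreateWindow(640, 480, "Double buffering", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);

        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);  // draw on the invisible back buffer
            // ... draw calls would go here ...
            glfwSwapBuffers(window);       // swap: back buffer becomes visible
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }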
SLIDE 65

FIXED-FUNCTION VS. PROGRAMMABLE

  • Fixed-function pipeline stages
  • Elements are set up in hardware in a specific way
  • Usually can only turn things on or off or change options (defined by hardware and graphics API)
  • Programmable pipeline stages
  • Vertex shading and pixel (fragment) shading
  • Have direct control over what is done at a given stage
SLIDE 66

OTHER PIPELINES

  • The outline we described is NOT the only way to do a graphics pipeline
  • E.g., ray tracing renderers → pretty much do EVERYTHING in software, and then just set the pixel color with the graphics card

SLIDE 67

COMPUTER GRAPHICS API

SLIDE 68

DEFINITIONS

  • Computer-graphics application programming interfaces (CG API)
  • Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
  • Interface between programming language and hardware
  • Let’s briefly go over some CG APIs…
SLIDE 69

GKS AND PHIGS

  • GKS (Graphical Kernel System) – 1984
  • International effort to develop standard for computer graphics software
  • Adopted as first graphics software standard by ISO (International Standards Organization) and ANSI (American National Standards Institute)
  • Originally 2D → 3D extension developed later
  • PHIGS (Programmer’s Hierarchical Interactive Graphics Standard)
  • Extension of GKS
  • Developed in 1980s → standard by 1989
  • 3D standard
  • Increased capabilities for hierarchical modeling, color specifications, surface rendering, and picture manipulations
  • PHIGS+ → added more advanced 3D surface rendering
SLIDE 70

GL AND OPENGL

  • GL (Graphics Library)
  • Developed by Silicon Graphics, Inc. (SGI) for their graphics workstations
  • Became de facto graphics standard
  • Fast, real-time rendering
  • Proprietary system
  • OpenGL
  • Developed as hardware-independent version of GL in 1990s
  • Specification
  • Was maintained/updated by OpenGL Architecture Review Board; now maintained by the non-profit Khronos Group
  • Both are consortiums of representatives from many graphics companies and organizations
  • Designed for efficient 3D rendering, but also handles 2D (just set z = 0)
  • Stable; new features added as extensions

SGI O2 workstation: http://www.engadget.com/products/sgi/o2/

SLIDE 71

DIRECTX AND DIRECT3D

  • DirectX
  • Developed by Microsoft for Windows 95 in 1996
  • Originally called “Game SDK”
  • Actually a combination of different APIs: Direct3D, DirectSound, DirectInput, etc.
  • Less stable → adopts new features fairly quickly (for better or for worse)
  • Only works on Windows and Xbox
SLIDE 72

WHY ARE WE USING OPENGL?

  • In this course, we will be using OpenGL because:
  • It works with practically every platform/system (Windows, Unix/Linux, Mac, etc.)
  • It’s arguably easier to learn/understand
  • It is NOT because DirectX/Direct3D is a bad system
SLIDE 73

REFERENCES

  • Many of the images in these slides come from the book “Real-Time Rendering” by Akenine-Möller, Haines, and Hoffman (3rd edition), as well as the online supplemental material found on their website: http://www.realtimerendering.com/
  • Some are also from the book “Computer Graphics with OpenGL” by Hearn, Baker, and Carithers (4th edition)