SLIDE 1

CS488 Visible-Surface Determination

Luc RENAMBOT

SLIDE 2

Visible-Surface Determination

  • So far in the class we have dealt mostly with

simple wireframe drawings of the models

  • The main reason for this is so that we did

not have to deal with hidden surface removal

  • Now we want to deal with more sophisticated images, so we need to deal with which parts of the model obscure other parts of the model

SLIDE 3

Examples

The following sets of images show a wireframe version, a wireframe version with hidden line removal, and a solid polygonal representation of the same object

SLIDE 4

Examples

SLIDE 5

Drawing Order

If we do not have a way of determining which surfaces are visible, then which surfaces appear depends on the order in which they are drawn, with surfaces drawn later appearing in front of surfaces drawn previously

SLIDE 6

Principles

  • We do not want to draw surfaces that

are hidden. If we can quickly compute which surfaces are hidden, we can bypass them and draw only the surfaces that are visible

  • For example, if we have a solid 6-sided cube, at most 3 of the 6 sides are visible at any one time, so at least 3 of the sides do not even need to be drawn because they are the back sides

SLIDE 7

Principles

  • We also want to avoid having to draw the polygons

in a particular order. We would like to tell the graphics routines to draw all the polygons in whatever order we choose and let the graphics routines determine which polygons are in front of which other polygons

  • With the same cube as above we do not want to

have to compute for ourselves which order to draw the visible faces, and then tell the graphics routines to draw them in that order.

SLIDE 8

Principles

  • The idea is to speed up the drawing, and give

the programmer an easier time, by doing some computation before drawing

  • Unfortunately these computations can take a lot of time, so special-purpose hardware is often used to speed up the process

SLIDE 9

Techniques

  • Two types of approaches
  • Object space
  • Image space

SLIDE 10

Object Space

  • Object space algorithms do their work on the objects

themselves before they are converted to pixels in the frame buffer.

  • The resolution of the display device is irrelevant here as this

calculation is done at the mathematical level of the objects

  • For each object a in the scene
  • Determine which parts of object a are visible

(involves comparing the polygons in object a to other polygons in a and to polygons in every other object in the scene)

SLIDE 11

Image Space

  • Image space algorithms do their work as the objects are

being converted to pixels in the frame buffer

  • The resolution of the display device is important here as this

is done on a pixel by pixel basis

  • For each pixel in the frame buffer
  • Determine which polygon is closest to the viewer at that pixel

location

  • Determine the color of the pixel with the color of that polygon at

that location

SLIDE 12

Approaches

  • As in our discussion of vector vs raster

graphics earlier in the term

  • The mathematical (object space)

algorithms tended to be used with the vector hardware

  • Whereas the pixel based (image space)

algorithms tended to be used with the raster hardware

SLIDE 13

Homogeneous Coordinates

  • When we talked about 3D transformations

we reached a point near the end when we converted the 3D (or 4D with homogeneous coordinates) to 2D by ignoring the Z values

  • Now we will use those Z values to

determine which parts of which polygons (or lines) are in front of which parts of other polygons

SLIDE 14

Technique

  • There are different levels of checking that

can be done:

  • Object
  • Polygon
  • Part of a Polygon

SLIDE 15

Transparency

  • There are also times when we may not want

to cull out polygons that are behind other polygons

  • If the frontmost polygon is transparent then

we want to be able to 'see through' it to the polygons that are behind it as shown below

SLIDE 16

Transparent Objects


Which objects are transparent in the scene?

SLIDE 17

Coherence

  • We used the idea of coherence before in our line drawing algorithm
  • We want to exploit 'local similarity' to

reduce the amount of computation needed

  • This is how compression algorithms work

SLIDE 18

Coherence

  • Face - properties (such as color, lighting) vary smoothly across a face

(or polygon)

  • Depth - adjacent areas on a surface have similar depths
  • Frame - images at successive time intervals tend to be similar
  • Scan Line - adjacent scan lines tend to have similar spans of objects
  • Area - adjacent pixels tend to be covered by the same face
  • Object - if objects are separate from each other (i.e., they do not overlap) then we only need to compare polygons within the same object, and not one object to another

  • Edge - edges only disappear when they go behind another edge or face
  • Implied Edge - line of intersection of 2 faces can be determined by

the endpoints of the intersection

SLIDE 19

Extent

  • Rather than dealing with a complex object, it is often easier to deal with a simpler version of the object
  • In 2D: a bounding box
  • In 3D: a bounding volume

SLIDE 20

Bounding Box

  • We convert a complex object into a simpler outline, generally in the shape of a box

  • Every part of the object is

guaranteed to fall within the bounding box

SLIDE 21

Bounding Box

  • Checks can then be made on the bounding box to make quick decisions (e.g., does a ray pass through the box?)

  • For more detail, checks

would then be made on the object in the box.

  • There are many ways to

define the bounding box
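The ray-through-the-box check mentioned above is commonly done with a "slab test". The sketch below is a minimal, assumed implementation, not from the slides; the `AABB` and `Ray` types and all names are illustrative.

```c
#include <math.h>

typedef struct { double min[3], max[3]; } AABB;     /* illustrative types */
typedef struct { double origin[3], dir[3]; } Ray;

/* Slab test: intersect the ray with each pair of parallel box planes and
   keep the overlapping parameter interval [tmin, tmax]; the ray hits the
   box iff that interval is non-empty and not entirely behind the origin. */
static int ray_hits_aabb(const Ray *r, const AABB *b)
{
    double tmin = -INFINITY, tmax = INFINITY;
    for (int i = 0; i < 3; i++) {
        if (r->dir[i] != 0.0) {
            double t1 = (b->min[i] - r->origin[i]) / r->dir[i];
            double t2 = (b->max[i] - r->origin[i]) / r->dir[i];
            if (t1 > t2) { double t = t1; t1 = t2; t2 = t; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
        } else if (r->origin[i] < b->min[i] || r->origin[i] > b->max[i]) {
            return 0;   /* ray parallel to this slab and outside it */
        }
    }
    return tmax >= tmin && tmax >= 0.0;
}
```

If this quick test fails, the object inside the box cannot be hit and no further checks are needed.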

SLIDE 22

Bounding Box

  • The simplest way is to take the minimum

and maximum X, Y, and Z values to create a box

  • You can also have bounding boxes that

rotate with the object, bounding spheres, bounding cylinders, etc.
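The simplest construction above (minimum and maximum X, Y, and Z) can be sketched directly; the `AABB` type and function name are illustrative, not from the slides.

```c
typedef struct { double min[3], max[3]; } AABB;   /* illustrative type */

/* Take the min and max X, Y, and Z over all of the object's vertices. */
static AABB bounding_box(const double verts[][3], int n)
{
    AABB b;
    for (int i = 0; i < 3; i++)
        b.min[i] = b.max[i] = verts[0][i];
    for (int v = 1; v < n; v++)
        for (int i = 0; i < 3; i++) {
            if (verts[v][i] < b.min[i]) b.min[i] = verts[v][i];
            if (verts[v][i] > b.max[i]) b.max[i] = verts[v][i];
        }
    return b;
}
```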

SLIDE 23

Back-Face Culling

  • Back-face culling is an object space algorithm
  • It works on 'solid' objects which you are looking at from the outside
  • That is, the polygons of the surface of the object completely enclose the object

SLIDE 24

Normals

  • Every planar polygon has a surface normal, that is, a vector that is normal to the surface of the polygon
  • Actually, every planar polygon has two normals
  • Given that this polygon is part of a 'solid' object, we are interested in the normal that points OUT, rather than the normal that points in

SLIDE 25

Back Face

  • OpenGL specifies that all polygons be drawn such that the vertices are given in counterclockwise order as you look at the visible side of the polygon, in order to generate the 'correct' normal

  • Any polygon whose normal points away from the viewer is a 'back-facing' polygon and does not need to be further investigated
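The counterclockwise convention determines the outward normal: crossing the polygon's first two edges gives a vector pointing out of the front face. A minimal sketch, with an illustrative function name:

```c
/* Cross the polygon's first two edges; with vertices v0, v1, v2 given in
   counterclockwise order as seen from the front, the result points out of
   the front face. (Normalization is omitted; only the sign matters for
   back-face tests.) */
static void polygon_normal(const double v0[3], const double v1[3],
                           const double v2[3], double n[3])
{
    double e1[3], e2[3];
    for (int i = 0; i < 3; i++) {
        e1[i] = v1[i] - v0[i];
        e2[i] = v2[i] - v0[i];
    }
    n[0] = e1[1] * e2[2] - e1[2] * e2[1];
    n[1] = e1[2] * e2[0] - e1[0] * e2[2];
    n[2] = e1[0] * e2[1] - e1[1] * e2[0];
}
```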

[Figure: front facing vs. back facing polygons]

SLIDE 26

Computing

  • To find back-facing polygons, the dot product of each polygon's surface normal with a vector from the center of projection to any point on the polygon is taken

  • The dot product is then used to determine

what direction the polygon is facing:

  • greater than 0 : back facing
  • equal to 0 : polygon viewed on edge
  • less than 0 : front facing
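The sign test above can be sketched in C. The function and type names are illustrative; the sign convention follows the slide (outward normal dotted with a vector from the center of projection to a point on the polygon).

```c
typedef enum { FRONT_FACING, EDGE_ON, BACK_FACING } Facing;

/* Dot the polygon's outward surface normal with a vector from the center
   of projection (eye) to any point on the polygon, then test the sign. */
static Facing classify_face(const double eye[3], const double point[3],
                            const double normal[3])
{
    double d = 0.0;
    for (int i = 0; i < 3; i++)
        d += (point[i] - eye[i]) * normal[i];
    if (d > 0.0) return BACK_FACING;    /* normal points away from viewer */
    if (d < 0.0) return FRONT_FACING;   /* normal points toward viewer */
    return EDGE_ON;                     /* polygon viewed exactly on edge */
}
```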

SLIDE 27

Dot Product

  • a.b = |a| |b| cos(theta)
  • a.b = ax*bx + ay*by + az*bz
  • a.b = 0 : orthogonal vectors

SLIDE 28

Example

SLIDE 29

OpenGL

  • OpenGL back-face culling is turned on using:
  • glCullFace(GL_BACK);
  • glEnable(GL_CULL_FACE);

SLIDE 30

Remarks

  • Back-face culling can very quickly

remove unnecessary polygons

  • Unfortunately there are often times

when back-face culling can not be used

  • if you wish to make an open-topped box, the inside and the outside of the box both need to be visible, so either two sets of polygons must be generated, one set facing out and another facing in, or back-face culling must be turned off to draw that object

SLIDE 31

Depth Buffer

  • Early on we talked about the frame buffer which

holds the color for each pixel to be displayed

  • This buffer could contain a variable number of

bytes for each pixel depending on whether it was a grayscale, RGB, or color indexed frame buffer

  • All of the elements of the frame buffer are initially

set to be the background color

  • As lines and polygons are drawn the color is set to

be the color of the line or polygon at that point

SLIDE 32

Depth Buffer

  • We now introduce another buffer which is

the same size as the frame buffer but contains depth information instead of color information

SLIDE 33

Z-Buffering

  • Image-space algorithm
  • All of the elements of the z-buffer are initially set to be 'very far away'

  • Whenever a pixel color is to be changed, the depth of this new color is compared to the current depth in the z-buffer

  • If this color is 'closer' than the previous color the pixel is

given the new color

  • The z-buffer entry for that pixel is updated as well
  • Otherwise, the pixel retains the old color, the z-buffer

retains its old value

SLIDE 34

Algorithm

for each polygon
    for each pixel p in the polygon's projection {
        // z ranges from -1 to 0
        pz = polygon's normalized z-value at (x, y);
        if (pz > zBuffer[x, y]) // closer to the camera
        {
            zBuffer[x, y] = pz;
            framebuffer[x, y] = colour of pixel p;
        }
    }
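A runnable version of this loop can be sketched in C. This is a minimal, assumed setup: a "polygon" is reduced to a screen-space rectangle at constant depth, the buffers are tiny, and all names are illustrative. Following the slide's convention, z is normalized to [-1, 0] with larger values closer to the camera.

```c
#define W 4
#define H 4

static double zbuf[H][W];   /* depth per pixel */
static int    fbuf[H][W];   /* "color" index per pixel, 0 = background */

static void clear_buffers(void)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x] = -1.0;   /* 'very far away' */
            fbuf[y][x] = 0;
        }
}

/* Draw an axis-aligned rectangle at constant depth z: each pixel is
   updated only if the new depth is closer than what the z-buffer holds. */
static void draw_rect(int x0, int y0, int x1, int y1, double z, int color)
{
    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
            if (z > zbuf[y][x]) {       /* closer than the stored depth */
                zbuf[y][x] = z;
                fbuf[y][x] = color;
            }
}
```

Note that drawing the rectangles in any order produces the same final image, which is the point of the algorithm.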

SLIDE 35

Remarks

  • This is very nice since the order of drawing polygons

does not matter, the algorithm will always display the color of the closest point

  • The biggest problem with the z-buffer is its finite

precision

  • It is important to set the near and far clipping planes to

be as close together as possible to increase the resolution of the z-buffer within that range

  • Otherwise, even though one polygon may

mathematically be 'in front' of another that difference may disappear due to roundoff error

SLIDE 36

OpenGL

  • OpenGL z-buffer and frame buffer are

cleared using:

  • glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

  • OpenGL z-buffering is turned on using:
  • glEnable(GL_DEPTH_TEST);
  • Also
  • glDepthFunc(GL_LESS)
  • glDepthRange(0, 1)

SLIDE 37

Example

  • The depth-buffer is

especially useful when it is difficult to order the polygons in the scene based on their depth

SLIDE 38

Warnock's Algorithm

  • Warnock's algorithm is a recursive area-

subdivision algorithm

  • It looks at an area of the image
  • If it is easy to determine which polygons are

visible in the area, they are drawn

  • else the area is subdivided into smaller parts

and the algorithm recurses.

  • Eventually an area will be represented by a single

non-intersecting polygon

SLIDE 39

Iteration

  • At each iteration the area of interest is

subdivided into four equal areas

  • Each polygon is compared to each area and is

put into one of four bins

  • Surrounding polygons - completely

contain the area

  • Intersecting polygons - intersect the area
  • Contained polygons - completely

contained in the area

  • Disjoint polygons - completely outside the

area

SLIDE 40

Iteration

  • For a given area:

case 1. If all polygons are disjoint, then the background color fills the area
case 2. If there is a single contained polygon or intersecting polygon, then the background color is used to fill the area, and then the part of the polygon contained in the area is filled with the color of that polygon
case 3. If there is a single surrounding polygon and no intersecting or contained polygons, then the area is filled with the color of the surrounding polygon
case 4. If there is a surrounding polygon in front of any other surrounding, intersecting, or contained polygons, then the area is filled with the color of the front surrounding polygon

  • Otherwise break the area into 4 equal parts and recurse
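The recursion above can be sketched in C under heavily simplifying assumptions: "polygons" are axis-aligned screen rectangles at constant depth (smaller z = closer here), the four classification bins are collapsed into overlap and surround tests, the scene is assumed small, and every name (`Rect`, `warnock`, `frame`) is illustrative rather than a standard API.

```c
#define W 8
#define H 8
#define BG 0
#define MAX_LIVE 16   /* assumed small scene for this sketch */

typedef struct { int x0, y0, x1, y1; double z; int color; } Rect;

static int frame[H][W];

static int overlaps(const Rect *p, int x0, int y0, int x1, int y1)
{ return p->x0 <= x1 && p->x1 >= x0 && p->y0 <= y1 && p->y1 >= y0; }

static int surrounds(const Rect *p, int x0, int y0, int x1, int y1)
{ return p->x0 <= x0 && p->y0 <= y0 && p->x1 >= x1 && p->y1 >= y1; }

static void fill_area(int x0, int y0, int x1, int y1, int color)
{
    for (int y = y0; y <= y1; y++)
        for (int x = x0; x <= x1; x++)
            frame[y][x] = color;
}

static void warnock(const Rect *polys, int n, int x0, int y0, int x1, int y1)
{
    const Rect *live[MAX_LIVE];
    int m = 0;
    for (int i = 0; i < n; i++)
        if (overlaps(&polys[i], x0, y0, x1, y1))
            live[m++] = &polys[i];

    if (m == 0) {                        /* case 1: all polygons disjoint */
        fill_area(x0, y0, x1, y1, BG);
        return;
    }
    if (m == 1) {                        /* case 2: one polygon in the area */
        fill_area(x0, y0, x1, y1, BG);
        int cx0 = live[0]->x0 > x0 ? live[0]->x0 : x0;
        int cy0 = live[0]->y0 > y0 ? live[0]->y0 : y0;
        int cx1 = live[0]->x1 < x1 ? live[0]->x1 : x1;
        int cy1 = live[0]->y1 < y1 ? live[0]->y1 : y1;
        fill_area(cx0, cy0, cx1, cy1, live[0]->color);
        return;
    }
    /* cases 3/4: a surrounding polygon in front of everything else */
    const Rect *front = 0;
    for (int i = 0; i < m; i++)
        if (surrounds(live[i], x0, y0, x1, y1) &&
            (!front || live[i]->z < front->z))
            front = live[i];
    if (front) {
        int in_front = 1;
        for (int i = 0; i < m; i++)
            if (live[i] != front && live[i]->z < front->z)
                in_front = 0;
        if (in_front) {
            fill_area(x0, y0, x1, y1, front->color);
            return;
        }
    }
    if (x0 == x1 && y0 == y1) {          /* single pixel: closest wins */
        const Rect *best = live[0];
        for (int i = 1; i < m; i++)
            if (live[i]->z < best->z)
                best = live[i];
        frame[y0][x0] = best->color;
        return;
    }
    /* otherwise break the area into four parts and recurse */
    int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
    warnock(polys, n, x0, y0, mx, my);
    if (mx + 1 <= x1) warnock(polys, n, mx + 1, y0, x1, my);
    if (my + 1 <= y1) warnock(polys, n, x0, my + 1, mx, y1);
    if (mx + 1 <= x1 && my + 1 <= y1) warnock(polys, n, mx + 1, my + 1, x1, y1);
}
```

A real implementation clips arbitrary polygons against the area; the rectangle version only shows the shape of the recursion.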

SLIDE 41

Example


Book, pages 686-688

case 1. All polygons are disjoint
case 2. A single contained or intersecting polygon
case 3. A single surrounding polygon, no intersecting or contained polygons
case 4. A surrounding polygon in front of all other surrounding, intersecting, or contained polygons

SLIDE 42

Remarks

  • Bounding boxes can help
  • At worst, log base 2 of the max(screen width, screen

height) recursive steps will be needed

  • At that point the area being looked at is only a single pixel

which can't be divided further

  • At that point, the distance to each polygon intersecting, contained in, or surrounding the area is computed at the center of the polygon to determine the closest polygon and its color

  • Could be faster than the z-buffer, but only for a small number of polygons

SLIDE 43

Next Time

  • More

Visible-Surface Determination

  • Assignment 3: Monday 20th November
