

  1. 521493S Computer Graphics Exercise 3

  2. Question 3.1 Most graphics systems and APIs use the simple lighting and reflection models that we introduced for polygon rendering. Describe the ways in which each of these models is incorrect. For each defect, give an example of a scene in which you would notice the problem.

  3. Solution 3.1 (1/3) • Point sources produce very harsh lighting: images lit by them are characterized by abrupt transitions between light and dark. Image from Nvidia’s GPU ray tracing demo

  4. Solution 3.1 (2/3) • The ambient light in a real scene depends on both the lights in the scene and the reflectivity properties of the objects in it, something that cannot be computed correctly with OpenGL. Image from WinOSI gallery

  5. Solution 3.1 (3/3) • The Phong reflection term is not physically correct; the reflection term in the modified Phong model is even further from being physically correct.
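As a sketch of what the slides mean by the Phong reflection term, the intensity at a point can be written as I = ka·Ia + kd·Id·(n·l) + ks·Is·(r·v)^α. The helper below is a minimal illustration with hypothetical scalar light and material values (it is not how OpenGL structures the computation internally):

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(n, l, v, ka, kd, ks, shininess, ambient, light):
    """Scalar Phong intensity: I = ka*Ia + kd*Id*(n.l) + ks*Is*(r.v)^alpha."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diff = max(dot(n, l), 0.0)
    # Mirror-reflection direction of the light vector about the normal.
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    return ka * ambient + kd * light * diff + ks * light * spec
```

Note that the specular exponent α is a purely empirical knob, which is exactly why the slide calls the term physically incorrect: no energy conservation or surface microstructure is modeled.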

  6. Question 3.2 Can the Phong reflection model be used in both OpenGL and ray-tracing rendering methods? If yes, how would the calculations differ?

  7. Solution 3.2 (1/1) • The same reflection model can be used for modeling the object material. • In OpenGL, we simply have to assume that no light source is obscured by any object, because such global calculations are incompatible with the pipeline model. The pipeline model assumes that each polygon can be shaded independently of all other polygons as they flow through the pipeline. • Ray-tracing calculations can tell us which light sources are actually visible from each position in space, and only the visible light sources are included in the shading calculations.
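The visibility test the slide describes is the shadow ray: before shading with a light, the ray tracer checks whether anything lies between the point and that light. A toy version with sphere occluders (the sphere intersection and the scene values are assumptions made for this example):

```python
def ray_hits_sphere(origin, direction, center, radius, max_t):
    """Does origin + t*direction hit the sphere for some t in (eps, max_t)?"""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t = (-b - disc ** 0.5) / (2 * a)   # nearest intersection
    return 1e-6 < t < max_t

def visible_lights(point, lights, occluders):
    """Keep only the lights whose shadow ray from `point` hits no occluder."""
    result = []
    for light in lights:
        d = tuple(l - p for l, p in zip(light, point))  # ray reaches light at t = 1
        if not any(ray_hits_sphere(point, d, c, r, 1.0) for c, r in occluders):
            result.append(light)
    return result
```

Only the lights returned by `visible_lights` would enter the Phong sum; OpenGL's pipeline, shading each polygon independently, has no way to perform this per-point test.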

  8. Question 3.3 Compare the shadow-generation algorithm that uses projections with a global rendering method that solves the rendering equation. What types of shadows can be generated by one method but not the other?

  9. Solution 3.3 (1/2) • A global renderer that solves the rendering equation computes all shadows almost perfectly. Left: global illumination renderer with an object as light source; right: ray tracer with a point light source. Both images are from WinOSI gallery

  10. Solution 3.3 (2/2) • In a global renderer, as each point is shaded, a calculation is done to see which light sources shine on it, through any route the photons can take, and how strongly. The projection approach instead assumes that we can project each polygon onto all other polygons. • If the shadow of a given polygon projects onto multiple polygons, we cannot compute these shadow polygons easily. • In addition, we have not accounted for the different shades we might see if there were intersecting shadows from multiple light sources, reflection of light from other surfaces, refraction of light through transparent surfaces, etc. • Plainly, shadow projection is a very crude approximation of lighting. Even ray tracing doesn’t come close.
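In its simplest form, the projection approach drops each vertex along the ray from a point light onto a single receiving plane. A minimal sketch, assuming the ground plane y = 0 (the plane choice and coordinates are illustrative only):

```python
def project_shadow(vertex, light):
    """Project `vertex` onto the plane y = 0 along the ray from a point light.

    The shadow point is S = L + t*(P - L), with t chosen so that S_y = 0,
    i.e. t = L_y / (L_y - P_y).
    """
    lx, ly, lz = light
    px, py, pz = vertex
    t = ly / (ly - py)
    return (lx + t * (px - lx), 0.0, lz + t * (pz - lz))
```

The formula already shows the limitation the slide describes: it only works for one flat receiver, so a shadow falling across several polygons, or interacting with other shadows and reflections, is outside its reach.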

  11. Some things that are not possible with pure OpenGL or ray tracing: caustics and dispersion. Rendered by Julien Roger with 1,280,520,629 light rays. Both images are from WinOSI gallery

  12. Question 3.4 How is an image produced with an environment map different from a ray traced image of the same scene?

  13. Environment mapping and cube mapping. Cube-mapping diagram by TopherTG; sample image by Dave Pape

  14. Solution 3.4 (1/1) • The major problem is that the environment map is computed without the object in the scene. – All global lighting calculations of which it should be a part are incorrect. These errors are most noticeable if there are other reflective objects in the scene, which will now not show the reflection of the removed object. – Other errors are caused by the removed object no longer blocking light and by its shadows being missing. • Other visual errors can be due to distortions in the mapping of the environment onto a simple shape, such as a cube, and to errors in a two-step mapping. • In addition, a new environment map should be computed for each viewpoint.
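Both an environment-map lookup and a ray tracer start from the same reflected view direction r = d − 2(d·n)n; the difference is only what that direction is used for. As an illustration (the face-selection helper is a simplified assumption, ignoring the in-face texture coordinates a real cube map also computes):

```python
def reflect(d, n):
    """Mirror the view direction d about the unit normal n: r = d - 2(d.n)n."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return tuple(a - k * b for a, b in zip(d, n))

def cube_face(r):
    """Pick the cube-map face by the largest-magnitude component of r."""
    ax = [abs(c) for c in r]
    axis = ax.index(max(ax))
    sign = '+' if r[axis] >= 0 else '-'
    return sign + 'xyz'[axis]
```

The environment map simply indexes a precomputed image with `cube_face(r)`, whereas a ray tracer traces `r` into the live scene, which is why the map misses the object's own effect on the lighting.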

  15. Question 3.5 In what types of applications might you prefer a front-to-back rendering instead of a back-to-front rendering?

  16. Solution 3.5 (1/1) • Generally, back-to-front rendering is convenient because faces in front always paint over surfaces behind them. However, the final color is not determined until the frontmost object is processed. • Suppose you have many overlapping opaque objects. Then most of the rendering will have been wasted, since only the frontmost faces determine the image. • In applications such as ray tracers, front-to-back rendering can be far more efficient, as we can stop processing objects along a ray as soon as we encounter the first opaque object.
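The early-termination idea can be sketched as front-to-back alpha compositing with a stop condition (the fragment representation and scalar "color" are simplifications for illustration):

```python
def shade_front_to_back(fragments):
    """Composite fragments sorted near-to-far; stop at the first opaque one.

    Each fragment is (color, alpha). Returns (final color, fragments examined).
    """
    color, remaining = 0.0, 1.0      # accumulated color, remaining transparency
    examined = 0
    for c, a in fragments:
        examined += 1
        color += remaining * a * c
        remaining *= (1.0 - a)
        if remaining <= 0.0:         # fully opaque: later fragments cannot show
            break
    return color, examined
```

With many overlapping opaque fragments, only the first one is ever examined; a back-to-front painter would have shaded all of them and thrown most of that work away.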

  17. Question 3.6 In what circumstances can you see aliasing problems with texture mapped surfaces? What tools does OpenGL provide to counter these effects?

  18. Solution 3.6 (1/5) • When a texture is drawn on a surface by sampling texture positions at a frequency lower than the rate at which the texture actually changes, aliasing can occur. • For example, with a black-and-white striped texture, we might sample values that are all white even though black stripes lie between the sampling points. In general, this kind of undersampling causes flickering of small details in the texture.
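The striped-texture example can be reproduced directly. Here a 64-texel 1-D texture alternates black and white stripes of 4 texels each; sampling every 8th texel (no filtering) lands only on white stripes (the texture sizes are arbitrary choices for this sketch):

```python
def stripe(texel):
    """A black/white striped 1-D texture: stripes are 4 texels wide."""
    return 1.0 if (texel // 4) % 2 == 0 else 0.0

def point_sample(step):
    """Point-sample every `step`-th texel of a 64-texel texture, unfiltered."""
    return [stripe(t) for t in range(0, 64, step)]
```

Sampling with `step=8` returns an all-white result even though half the texture is black; as the surface moves, the sample positions shift and the texels flicker between the two extremes.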

  19. Solution 3.6 (2/5) • OpenGL allows the use of different detail levels for textures, called mipmapping. – At each level, w and h are scaled to half. Image by Mulad • When the texture is applied during rendering, the proper texture level is automatically selected to avoid aliasing. • Interpolation between the two closest levels of the texture is also possible (“trilinear filtering”).
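The halving chain and the level selection can be sketched as follows; a common rule (an assumption here, matching typical hardware behavior rather than a quote from the slides) picks the level as log2 of the texel footprint per pixel:

```python
import math

def mip_chain(size):
    """Resolutions of a square texture's mip levels: size, size/2, ..., 1."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

def select_lod(texels_per_pixel):
    """Level of detail: log2 of texels covered per pixel, clamped at level 0."""
    return max(0.0, math.log2(texels_per_pixel))
```

When one screen pixel covers 4 texels of the base level, level 2 (a quarter-resolution image) is the right match; trilinear filtering would blend the two levels bracketing a fractional result.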

  20. Solution 3.6 (3/5) • When a texture should be scaled down in one direction but not the other, mipmapping generally produces blurry results, as it assumes isotropic rather than anisotropic filtering requirements. Regular (isotropic) mipmapping vs. with anisotropic filtering. Image by Ener Hax

  21. Solution 3.6 (4/5) • Anisotropic filtering (“rip mapping”) creates filtered versions of the image using different ratios of scaling in each direction to reduce the blurring effect. • It requires much more texture memory. Regular (isotropic) mipmapping vs. with anisotropic filtering. Image by Mulad
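The memory cost is easy to quantify for power-of-two textures (the counting below is a back-of-the-envelope sketch, not a statement about any particular GPU's storage layout): a mip chain stores only the matched (w/2^i, h/2^i) levels, while a full rip-map set stores every (w/2^i, h/2^j) combination.

```python
def level_sizes(n):
    """Sizes of one dimension's levels for power-of-two n: n, n/2, ..., 1."""
    return [n >> i for i in range(n.bit_length())]

def mipmap_texels(w, h):
    """Total texels in an isotropic mip chain: both axes halve together."""
    return sum(a * b for a, b in zip(level_sizes(w), level_sizes(h)))

def ripmap_texels(w, h):
    """Total texels when every anisotropic (w/2^i, h/2^j) ratio is stored."""
    return sum(a * b for a in level_sizes(w) for b in level_sizes(h))
```

For a 256×256 texture the mip chain costs about 4/3 of the base image, while the rip-map set costs nearly 4×, which is the "much more texture memory" the slide refers to.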

  22. Solution 3.6 (5/5) • Anisotropic filtering may be available as an OpenGL extension under the name “GL_EXT_texture_filter_anisotropic”.
