Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics



SLIDE 1

Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics
Image-Based Effects: Part 2
Prof Emmanuel Agu
Computer Science Dept., Worcester Polytechnic Institute (WPI)

SLIDE 2

Image Processing

• Graphics is concerned with creating artificial scenes from geometry and shading descriptions
• Image processing:
  • Input is an image
  • Output is a modified version of the input image
• Image processing operations include altering images, removing noise, and superimposing images

SLIDE 3

Image Processing

• Example: Sobel filter

[Figure: original image and Sobel-filtered result]
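The Sobel filter estimates horizontal and vertical gradients with two 3×3 kernels and combines them into an edge magnitude. A minimal CPU sketch in Python (grayscale image as a list of lists; in practice this would run as a pixel shader):

```python
import math

# Sobel kernels for horizontal (Gx) and vertical (Gy) gradients
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(image):
    """Return the gradient-magnitude image (border pixels left at 0)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for j in range(3):
                for i in range(3):
                    p = image[y + j - 1][x + i - 1]
                    gx += GX[j][i] * p
                    gy += GY[j][i] * p
            out[y][x] = math.hypot(gx, gy)
    return out
```

A vertical intensity step produces a strong response exactly along the edge and zero response in flat regions.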

SLIDE 4

Image Processing

• Image processing the output of graphics rendering is called post-processing
• To post-process using the GPU, the rendered output is usually written to an offscreen buffer (e.g. color image, z-depth buffer, etc.)
• The image in the offscreen buffer is treated as a texture and mapped to a screen-filling quadrilateral
• A pixel shader is invoked on each element of the texture

SLIDE 5

Image Negative

• Another example: the image negative
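For an 8-bit image the negative simply inverts each value: out = 255 − in, applied per channel. A one-line sketch of this post-process:

```python
def negative(image):
    """Invert every 8-bit pixel value (applies per channel)."""
    return [[255 - p for p in row] for row in image]
```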

SLIDE 6

Image Distortion

SLIDE 7

Image Sharpening

SLIDE 8

Embossing

SLIDE 9

Toon Rendering

SLIDE 10

Toon Rendering for Non‐Photorealistic Effects

SLIDE 11

Blurring

• For some operations, a texture element may be combined with neighboring texture elements (blurring)

[Figure: the same scene with and without motion blur]
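One simple way to combine a texel with its neighbors is a 3×3 box blur; a sketch, assuming border pixels are handled by clamping coordinates (one common convention):

```python
def box_blur(image):
    """3x3 box blur; coordinates are clamped at the image border."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for j in (-1, 0, 1):
                for i in (-1, 0, 1):
                    yy = min(max(y + j, 0), h - 1)  # clamp row
                    xx = min(max(x + i, 0), w - 1)  # clamp column
                    total += image[yy][xx]
            out[y][x] = total / 9.0
    return out
```

A uniform image is unchanged, while an isolated bright pixel is spread over its neighborhood.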

SLIDE 12

Texture Animation using Image Processing

• Use the GPU to modify textures from frame to frame
• Animations such as fluid flow can be done this way
• Example: simulating rain, by Tatarchuk et al.

SLIDE 13

Heat Shimmer

SLIDE 14

Color Correction

• Color correction uses a function to convert the colors in an image to other colors
• Why color correct?
  • Mimic the appearance of a type of film
  • Portray a particular mood
  • Convert from one color space to another
• Example: conversion from RGB to CIE's XYZ color space:

\( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \)
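Per-pixel color correction is just a matrix–vector multiply; a sketch applying the slide's RGB-to-XYZ matrix to one linear-RGB color:

```python
# RGB -> CIE XYZ conversion matrix (linear RGB, values from the slide)
RGB_TO_XYZ = [
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
]

def rgb_to_xyz(r, g, b):
    """Convert one linear-RGB color (components in 0..1) to CIE XYZ."""
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in RGB_TO_XYZ)
```

Note that the middle row sums to 1, so white (1, 1, 1) maps to luminance Y = 1.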

SLIDE 15

Color Correction

SLIDE 16

Color Correction

SLIDE 17

High Dynamic Range

• The sun's brightness is about 60,000 lumens
• Dark areas of the earth have a brightness of 0 lumens
• Basically, the world around us has a range of 0 – 60,000 lumens (High Dynamic Range)
• However, a monitor has a range of colors between 0 – 255 (Low Dynamic Range)
• New file formats with wider ranges have been created for HDR images (e.g. the OpenEXR file format)

SLIDE 18

High Dynamic Range

• Some scenes contain very bright and very dark areas
• Using a uniform scaling factor to map actual intensity to displayed pixel intensity means:
  • Either some areas are underexposed, or
  • Some areas of the picture are overexposed

[Figure: underexposed and overexposed renderings of the same scene]

SLIDE 19

Tone Mapping

• Tone mapping: the process of scaling intensities in real-world images (e.g. HDR images) to fit the displayable range
• Trying to capture the feeling of the real scene is non-trivial
• Example: when coming out of a dark tunnel, lights should seem bright

SLIDE 20

Types of Tone Mapping Operators

• Global: use the same scaling factor for all pixels
• Local: use a different scaling factor for different parts of the image
• Time-dependent: the scaling factor changes over time
• Time-independent: the scaling factor does NOT change over time
• Real-time rendering usually does NOT implement local operators due to their complexity

SLIDE 21

Tone Mapping Operators

SLIDE 22

Simple (Global) Tone Mapping Methods

SLIDE 23

Tone Mapping

• If the range of input values is small, compute the average, then scale so that the average is in the displayable range
• A simple average may cause a few large values to dominate
• Reinhard suggested using the logarithm when summing pixel values:

\( \bar{L}_w = \exp\!\left( \frac{1}{N} \sum_{x,y} \log(\delta + L_w(x,y)) \right) \)

• \( \bar{L}_w \) is the log-average luminance; \( L_w(x,y) \) is the luminance at pixel (x, y); N is the number of pixels; the small constant \( \delta \) avoids taking the log of 0
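The log-average can be sketched directly from the formula; the value of `delta` here is an arbitrary small choice, used only to avoid log(0):

```python
import math

def log_average_luminance(lums, delta=1e-4):
    """Reinhard's log-average: exp(mean(log(delta + L_w)))."""
    n = len(lums)
    s = sum(math.log(delta + lw) for lw in lums)
    return math.exp(s / n)
```

For a constant image the result is just delta plus that constant, while a single huge outlier moves the log-average far less than it would move a plain mean.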

SLIDE 24

Tone Mapping

• Once the log-average luminance is computed, we can define the tone mapping operator:

\( L(x,y) = \frac{a}{\bar{L}_w}\, L_w(x,y) \)

• \( L(x,y) \) is the resulting luminance
• The parameter a is the key of the scene (a = 0.18 is normal)
  • High key minimizes contrasts and shadows, e.g. a = 0.72
  • Low key maximizes contrast between light and dark, e.g. a = 0.045
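The scaling step is then a single multiply per pixel; a sketch with the log-average passed in and a = 0.18 as the normal key:

```python
def tone_map(lums, log_avg, a=0.18):
    """Scale world luminances by the key a over the log-average luminance."""
    scale = a / log_avg
    return [scale * lw for lw in lums]
```

A pixel whose luminance equals the log-average is mapped exactly to the key value a.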

SLIDE 25

Tone Mapping: Effects of a

SLIDE 26

Lens Flare and Bloom

• Caused by the lens of the eye/camera when directed at a light
• Halo: refraction of light by the lens
• Ciliary corona: density fluctuations of the lens
• Bloom: scattering in the lens; glow around the light

[Figure: halo, bloom, and ciliary corona, top to bottom]

SLIDE 27

Lens Flare and Bloom

• Use a set of textures for glare effects
• Each texture is billboarded
• Alpha map controls how much to blend
• Can be given colors for the corona
• Overlap all of them!
• Animate to create sparkle

SLIDE 28

Depth of Field

• In photographs, a range of pixels is in focus
• Pixels outside this range are out of focus
• This effect is known as depth of field

SLIDE 29

Depth of Field using Accumulation Buffer

• Jitter the view position and add weighted samples to the accumulation buffer
• After multiple rendering passes, display the picture
• Downside: multiple rendering passes are expensive
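The accumulation-buffer idea can be sketched as an equal-weight average over jittered view positions. Here `render` is a hypothetical callback standing in for a full rendering pass at a given lens offset:

```python
def depth_of_field(render, offsets):
    """Average images rendered from jittered view positions.

    render(offset) -> 2D list of pixel values (hypothetical renderer);
    offsets -> list of (dx, dy) jitters across the lens aperture.
    """
    n = len(offsets)
    acc = None
    for off in offsets:
        img = render(off)
        if acc is None:
            acc = [[0.0] * len(img[0]) for _ in img]
        for y, row in enumerate(img):
            for x, p in enumerate(row):
                acc[y][x] += p / n  # equal weights here; could be non-uniform
    return acc
```

Objects at the focal distance project to the same pixel in every jittered pass and stay sharp; everything else is averaged into a blur.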

SLIDE 30

Depth of Field using Scattering

• Scatter the shading value of each location on a surface to neighboring pixels
• Sprites are used to represent circles of influence
• The pixel value is the averaged sum of all overlapping circles

SLIDE 31

Motion Blur

• Antialiasing is spatial blurring
• In cameras, motion blur is caused by exposing film to moving objects
• Motion blur: blurring of samples taken over time
• Makes fast-moving scenes appear less jerky
• 30 fps + motion blur looks better than 60 fps + no motion blur

SLIDE 32

Motion Blur

• The accumulation buffer can be used to create blur
• Basic idea: average a series of images over time
• Move the object to the set of positions it occupies during a frame, then blend the resulting images together
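Blending the object's positions over a frame amounts to averaging the rendered images; a sketch over a list of pre-rendered frames:

```python
def motion_blur(frames):
    """Blend frames rendered at successive sub-frame times (equal weights)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

A pixel the moving object covers in only some of the sub-frame renders ends up at an intermediate intensity, which is exactly the blur streak.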

SLIDE 33

Motion Blur

• Can blur a moving average of frames, e.g. blur 8 images
• When you render frame 9, subtract frame 1, etc.
• Velocity buffer: blur in screen space using the velocity of objects
SLIDE 34

Fog

• Fog was part of the OpenGL fixed-function pipeline
• Using shaders, fog is applied to the scene just before display
• Shaders can generate more elaborate fog
• Fog is an atmospheric effect:
  • A little better realism
  • Helps in determining distances

SLIDE 35

Fog example

• Often just a matter of:
  • Choosing the fog color
  • Choosing the fog model
  • Turning it on

SLIDE 36

Rendering Fog

• \( c_f \) is the color of the fog; \( c_s \) is the color of the surface
• Blended pixel color: \( c_p = f\,c_s + (1-f)\,c_f \), with \( f \in [0,1] \)
• How do we compute f? 3 ways: linear, exponential, exponential-squared
• Linear: \( f = \dfrac{z_{end} - z_p}{z_{end} - z_{start}} \)

SLIDE 37

Fog

• Exponential: \( f = e^{-d_f z_p} \)
• Squared exponential: \( f = e^{-(d_f z_p)^2} \)
• Exponential is derived from Beer's law
• Beer's law: the intensity of outgoing light diminishes exponentially with distance
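All three fog factors are cheap closed-form functions of the pixel depth; a sketch following the slide's notation, where z_p is the pixel depth, d_f the fog density, and the linear factor is clamped to [0, 1]:

```python
import math

def fog_linear(zp, z_start, z_end):
    """Linear fog: f = (z_end - zp) / (z_end - z_start), clamped to [0, 1]."""
    f = (z_end - zp) / (z_end - z_start)
    return min(max(f, 0.0), 1.0)

def fog_exp(zp, density):
    """Exponential fog (Beer's law): f = e^(-density * zp)."""
    return math.exp(-density * zp)

def fog_exp2(zp, density):
    """Squared-exponential fog: f = e^(-(density * zp)^2)."""
    return math.exp(-(density * zp) ** 2)
```

In every case f = 1 at the viewer (no fog, pure surface color) and f falls toward 0 with depth (pure fog color), matching the blend \( c_p = f c_s + (1-f) c_f \).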

SLIDE 38

Fog

• f values for different depths can be pre-computed and stored in a table on the GPU
• The distances used in the f calculations are planar
• Can also use the Euclidean (radial) distance from the viewer to create radial fog

SLIDE 39

Different Atmospheres

More generally, we can simulate better skies

SLIDE 40

Volume Rendering

• Volumetric data is represented as volumetric pixels (voxels)
• Rendering voxels (e.g. CT/MRI data)
• Methods:
  • Implicit surface techniques to convert voxel samples into polygonal surfaces (called isosurfaces)
  • Voxel data as a set of 2D image slices (Lacroute & Levoy)
  • Splatting: each voxel is represented by an alpha-blended circular object (splat) that drops off in opacity at the fringes
  • Volume slices as textured quads (OpenGL Volumizer API)

SLIDE 41

Volumetric Texturing

• Represent objects as a sequence of semi-transparent textures
• Good for rendering fuzzy or hairy objects

SLIDE 42

References

• Kutulakos, K., CSC 2530H: Visual Modeling, course slides
• UIUC CS 319, Advanced Computer Graphics, course slides
• David Luebke, CS 446, U. of Virginia, slides
• Chapter 2 of Real-Time Rendering
• Suman Nadella, CS 563 slides, Spring 2005