SLIDE 1

INFOGR – Computer Graphics

  • J. Bikker - April-July 2015 - Lecture 6: "Transformations"

Welcome!

SLIDE 2

Today’s Agenda:

  • Projection
  • Pipeline Recap
  • Rasterization

SLIDE 3

Perspective

INFOGR – Lecture 6 – "Transformations" Projection – Applying matrices, working our way backwards Goal: create 2D images of 3D scenes Standard approach: linear perspective (in contrast to e.g. fisheye views) Parallel projection: Perspective projection:

SLIDE 4

INFOGR – Lecture 6 – "Transformations" Parallel projection: Maps 3D points to 2D by moving them along a projection direction until they hit an image plane. Perspective projection: Maps 3D points to 2D by projecting them along lines that pass through a single viewpoint until they hit an image plane.

Perspective

SLIDE 5

Perspective

SLIDE 6

Perspective

SLIDE 7

Perspective

SLIDE 8

Perspective

SLIDE 9

Perspective

SLIDE 10

Perspective
SLIDE 11

INFOGR – Lecture 6 – "Transformations" Perspective projection World space (3D) Screen space (2D) We get our 3D objects perspective correct on the 2D screen by applying a sequence of matrix operations.

Perspective

SLIDE 12

INFOGR – Lecture 6 – "Transformations" Perspective projection The camera is defined by:

  • Its position E
  • The view direction 𝑊
  • The image plane (defined by its distance

𝑒 and the field of view) The view frustum is the volume visible from the camera. It is defined by:

  • A near and a far plane 𝑜 and 𝑔;
  • A left and a right plane 𝑚 and 𝑠;
  • A top and a bottom plane 𝑢 and 𝑐 (in 3D).

y x z FOV E 𝑊 𝑒 𝑜 𝑔 𝑠 𝑚 The world according to the camera: Camera space

Perspective

SLIDE 13

INFOGR – Lecture 6 – "Transformations" Perspective projection Camera space: looking down negative 𝑨. We can now map from (𝑦, 𝑧, 𝑨) to (𝑦𝑡, 𝑧𝑡)

(but this mapping is not trivial)

Projection (and later: clipping) becomes easier when we switch to an orthographic view volume. This time the mapping is: 𝑦, 𝑧, 𝑨 → 𝑦, 𝑧 → 𝑦𝑡, 𝑧𝑡 . Going from camera space to the orthographic view volume can be achieved using a matrix multiplication. x

  • z

y 𝑨 = 𝑜 𝑨 = 𝑔 x

  • z

y 𝑨 = 𝑜 𝑨 = 𝑔

Perspective

SLIDE 14

INFOGR – Lecture 6 – "Transformations" Perspective projection The final transform is the one that takes us from the orthographic view volume to the canonical view volume. Again, this is done using a matrix. 𝑦 = 1

  • z

𝑨 = −1 𝑨 = 1 𝑦 = −1

Perspective

SLIDE 15

INFOGR – Lecture 6 – "Transformations" Perspective projection World space  camera space 

  • rthographic view 

canonical view I × Mcamera × Mortho × Mcanonical These can be collapsed into a single 4 × 4 matrix.

Perspective

SLIDE 16

INFOGR – Lecture 6 – "Transformations" Perspective projection Canonical view  screen We need one last transform: From canonical view (-1..1) to 2D screen space (𝑂𝑦 × 𝑂𝑧). Screen space (2D)

Perspective

SLIDE 17

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP ONE: canonical view to screen space Vertices in the canonical view are

  • rthographically projected on an 𝑜𝑦 × 𝑜𝑧 image.

We need to map the square [-1,1]2 onto a rectangle 0, 𝑜𝑦 × [0, 𝑜𝑧]. Matrix: 𝑜𝑦 2 𝑜𝑦 2 𝑜𝑧 2 𝑜𝑧 2 1 This is assuming we already threw away 𝑨 to get an orthographic projection. We will however combine all matrices in the end, so we actually need a 4 × 4 matrix: 𝑁𝑤𝑞 = 𝑜𝑦 2 𝑜𝑧 2 𝑜𝑦 2 𝑜𝑧 2 1 1
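As a quick sanity check, the viewport matrix can be sketched in a few lines of Python (plain lists, no libraries; the function names are mine, not from the slides):

```python
def viewport_matrix(nx, ny):
    # Maps the canonical square [-1,1]^2 onto an nx-by-ny pixel rectangle:
    # scale x and y by nx/2 and ny/2, then shift the origin to the corner.
    return [[nx / 2, 0,      0, nx / 2],
            [0,      ny / 2, 0, ny / 2],
            [0,      0,      1, 0],
            [0,      0,      0, 1]]

def apply(m, v):
    # 4x4 matrix times 4-vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Corners of the canonical square land on the corners of the screen;
# the z coordinate passes through unchanged.
```

For a 640 × 480 screen, (−1, −1) maps to pixel (0, 0) and (1, 1) to (640, 480), exactly the [0, n_x] × [0, n_y] rectangle from the slide.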

Perspective

SLIDE 18

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP ONE: canonical view to screen space We now know the final transform for the vertices: 𝑦𝑡𝑑𝑠𝑓𝑓𝑜 𝑧𝑡𝑑𝑠𝑓𝑓𝑜 𝑨𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 1 = 𝑁𝑤𝑞 𝑦𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 𝑧𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 𝑨𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 1 Next step: getting from the orthographic view volume to the canonical view volume.

Perspective

SLIDE 19

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP TWO: orthographic view volume to canonical view volume The orthographic view volume is an axis aligned box 𝑚, 𝑠 × 𝑐, 𝑢 × [𝑜, 𝑔]. We want to scale this to a 2 × 2 × 2 box centered around the origin. Moving the center to the origin: 1 − 𝑚 + 𝑠 2 1 − 𝑐 + 𝑢 2 1 − 𝑜 + 𝑔 2 1 Scaling to [-1,1]: 2 𝑠 − 𝑚 2 𝑢 − 𝑐 2 𝑜 − 𝑔 1

× =

Combined: 2 𝑠 − 𝑚 − 𝑚 + 𝑠 𝑠 − 𝑚 2 𝑢 − 𝑐 − 𝑐 + 𝑢 𝑢 − 𝑐 2 𝑜 − 𝑔 − 𝑜 + 𝑔 𝑜 − 𝑔 1
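A numeric check of the combined matrix (a Python sketch under my own naming, not code from the slides): the box corners should land on corners of the canonical volume.

```python
def ortho_to_canonical(l, r, b, t, n, f):
    # Combined matrix: translate the box centre to the origin,
    # then scale each axis to the range [-1, 1].
    return [[2 / (r - l), 0, 0, -(l + r) / (r - l)],
            [0, 2 / (t - b), 0, -(b + t) / (t - b)],
            [0, 0, 2 / (n - f), -(n + f) / (n - f)],
            [0, 0, 0, 1]]

def apply(m, v):
    # 4x4 matrix times 4-vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# With l,r = -2,2  b,t = -1,1  n,f = -1,-10 (the camera looks down -z),
# the corner (l, b, n) maps to (-1, -1, 1) and (r, t, f) to (1, 1, -1).
```

Note that with this matrix the near plane z = n ends up at z = 1 and the far plane at z = −1; the exact sign convention depends on how n and f are chosen.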

Perspective

SLIDE 20

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP TWO: orthographic view volume to canonical view volume The final transforms for the vertices are thus: 𝑦𝑡𝑑𝑠𝑓𝑓𝑜 𝑧𝑡𝑑𝑠𝑓𝑓𝑜 𝑨𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 1 = 𝑁𝑤𝑞𝑁𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 𝑦𝑝𝑠𝑢ℎ𝑝 𝑧𝑝𝑠𝑢ℎ𝑝 𝑨𝑝𝑠𝑢ℎ𝑝 1 Next step: getting from camera space to the orthographic view volume.

Perspective

SLIDE 21

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP THREE: camera space to orthographic view volume x

  • z

y y x z Translate: 1 −𝐹𝑦 1 −𝐹𝑧 1 −𝐹𝑨 1 i.e., the inverse of the camera translation. Rotate: We will use the inverse

  • f the basis defined by

the camera orientation. E

Perspective

SLIDE 22

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP THREE: camera space to orthographic view volume Basis defined by the camera orientation: z-axis: −𝑊 (convention says we look down –z) x-axis: −𝑊 × 𝑣𝑞 y-axis: 𝑊 × 𝑦 Matrix: 𝑌𝑦 𝑍

𝑦

−𝑊

𝑦

𝑌𝑧 𝑍

𝑧

−𝑊

𝑧

𝑌𝑨 𝑍

𝑨

−𝑊

𝑨

1 𝑊 𝑣𝑞 𝑨 𝑧 𝑦 Inverse: 𝑌𝑦 𝑌𝑧 𝑌𝑨 𝑍

𝑦

𝑍

𝑧

𝑍

𝑨

−𝑊

𝑦

−𝑊

𝑧

−𝑊

𝑨

1 1 −𝐹𝑦 1 −𝐹𝑧 1 −𝐹𝑨 1

= 𝑁𝑑𝑏𝑛𝑓𝑠𝑏 ×

Perspective

SLIDE 23

INFOGR – Lecture 6 – "Transformations" Perspective projection STEP THREE: camera space to orthographic view volume The combined transform so far: 𝑦𝑡𝑑𝑠𝑓𝑓𝑜 𝑧𝑡𝑑𝑠𝑓𝑓𝑜 𝑨𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 1 = 𝑁𝑤𝑞 𝑁𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 𝑁𝑑𝑏𝑛𝑓𝑠𝑏 𝑦𝑥𝑝𝑠𝑚𝑒 𝑧𝑥𝑝𝑠𝑚𝑒 𝑨𝑥𝑝𝑠𝑚𝑒 1 One thing is still missing: perspective. x

  • z

y 𝑨 = 𝑜 𝑨 = 𝑔 x

  • z

y 𝑨 = 𝑜 𝑨 = 𝑔

Perspective

SLIDE 24

INFOGR – Lecture 6 – "Transformations" Perspective projection Q: What is perspective? A: The size of an object on the screen is proportional to 1/𝑨. More precisely: 𝑧𝑡 =

𝑒 𝑨 𝑧 (and 𝑦𝑡 = 𝑒 𝑨 𝑦 )

where 𝑒 is the distance of the view plane to the camera. Q: How do we capture scaling based on distance in a matrix? A: … Dividing by z can’t be done using linear nor affine transforms.
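The 1/z relation is just similar triangles; a two-line Python sketch (naming is mine; the slides look down −z, here z is taken as the positive distance to the camera):

```python
def project(d, x, y, z):
    # Similar triangles: a point at distance z is scaled by d/z,
    # so twice as far away means half as large on screen.
    return (d * x / z, d * y / z)

# Doubling the distance halves the projected size:
# project(1, 2, 4, 2) -> (1.0, 2.0);  project(1, 2, 4, 4) -> (0.5, 1.0)
```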

Perspective

SLIDE 25

INFOGR – Lecture 6 – "Transformations" Perspective projection Let’s have a look at homogeneous coordinates again. Recall: 𝑏1 𝑐1 𝑑1 𝑏2 𝑐2 𝑑2 𝑏3 𝑐3 𝑑3 𝑦 𝑧 𝑨 = 𝑏1𝑦 + 𝑐1𝑧 + 𝑑1𝑨 𝑏2𝑦 + 𝑐2𝑧 + 𝑑2𝑨 𝑏3𝑦 + 𝑐3𝑧 + 𝑑3𝑨 With homogeneous coordinates, we get: 𝑏1 𝑐1 𝑑1 𝑈

𝑦

𝑏2 𝑐2 𝑑2 𝑈

𝑧

𝑏3 𝑐3 𝑑3 𝑈

𝑨

1 𝑦 𝑧 𝑨 1 = 𝑏1𝑦 + 𝑐1𝑧 + 𝑑1𝑨 + 𝑈

𝑦

𝑏2𝑦 + 𝑐2𝑧 + 𝑑2𝑨 + 𝑈

𝑧

𝑏3𝑦 + 𝑐3𝑧 + 𝑑3𝑨 + 𝑈

𝑨

1 = (𝑏1𝑦 + 𝑐1𝑧 + 𝑑1𝑨 + 𝑈

𝑦)/1

(𝑏2𝑦 + 𝑐2𝑧 + 𝑑2𝑨 + 𝑈

𝑧)/1

(𝑏3𝑦 + 𝑐3𝑧 + 𝑑3𝑨 + 𝑈

𝑨)/1

1

Perspective

SLIDE 26

INFOGR – Lecture 6 – "Transformations" Perspective projection 𝑦 𝑧 𝑨 𝑥 = 𝑏1 𝑐1 𝑑1 𝑈

𝑦

𝑏2 𝑐2 𝑑2 𝑈

𝑧

𝑏3 𝑐3 𝑑3 𝑈

𝑨

𝑏4 𝑐4 𝑑4 𝑥 𝑦 𝑧 𝑨 1 = 𝑏1𝑦 + 𝑐1𝑧 + 𝑑1𝑨 + 𝑈

𝑦

𝑏2𝑦 + 𝑐2𝑧 + 𝑑2𝑨 + 𝑈

𝑧

𝑏3𝑦 + 𝑐3𝑧 + 𝑑3𝑨 + 𝑈

𝑨

𝑏4𝑦 + 𝑐4𝑧 + 𝑑4𝑨 + 𝑥 Recall that using homogeneous coordinates 𝑦, 𝑧, 𝑨, 1 represents 𝑦, 𝑧, 𝑨 . The homogeneous vector (𝑦, 𝑧, 𝑨, 𝑥) represents 𝑦

𝑥 , 𝑧 𝑥 , 𝑨 𝑥 .

The division by 𝑥 is called homogenization. Notice that this doesn’t change any part of our framework, where 𝑥 = 1.
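Homogenization itself is a one-liner; a Python sketch (names mine):

```python
def homogenize(v):
    # (x, y, z, w) represents (x/w, y/w, z/w); divide by w to get back
    # to an ordinary point. For w = 1 nothing changes.
    x, y, z, w = v
    return (x / w, y / w, z / w, 1.0)

# homogenize((2, 4, 6, 2)) -> (1.0, 2.0, 3.0, 1.0)
```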

Perspective

SLIDE 27

INFOGR – Lecture 6 – "Transformations" Perspective projection So, multiplying by this matrix 𝑦 𝑧 𝑨 1 × 𝑏1 𝑐1 𝑑1 𝑈𝑦 𝑏2 𝑐2 𝑑2 𝑈𝑧 𝑏3 𝑐3 𝑑3 𝑈𝑨 𝑏4 𝑐4 𝑑4 𝑥 and homogenization, creates this vector: 𝑏1𝑦 + 𝑐1𝑧 + 𝑑1𝑨 + 𝑈𝑦 / (𝑏4𝑦 + 𝑐4𝑧 + 𝑑4𝑨 + 𝑥) 𝑏2𝑦 + 𝑐2𝑧 + 𝑑2𝑨 + 𝑈𝑧 / (𝑏4𝑦 + 𝑐4𝑧 + 𝑑4𝑨 + 𝑥) 𝑏3𝑦 + 𝑐3𝑧 + 𝑑3𝑨 + 𝑈𝑨 / (𝑏4𝑦 + 𝑐4𝑧 + 𝑑4𝑨 + 𝑥) 1 How do we chose the coefficients of the matrix so that we get correct perspective correction? I.e., something like this: 𝑜𝑦/𝑨 𝑜𝑧/𝑨 𝑨 1

Perspective

SLIDE 28

INFOGR – Lecture 6 – "Transformations" Perspective projection The matrix we are looking for is: 𝑜 𝑜 𝑜 + 𝑔 −𝑔𝑜 1 Let’s verify. What happened to 𝑨?  𝑨′ = 𝑜 + 𝑔 − 𝑔𝑜

𝑨

𝑦 𝑧 𝑨 1 = 𝑜𝑦 𝑜𝑧 𝑜 + 𝑔 𝑨 − 𝑔𝑜 𝑨 homogenize 𝑜𝑦/𝑨 𝑜𝑧/𝑨 𝑜 + 𝑔 − 𝑔𝑜/𝑨 1

  • 𝑨 = 𝑜: 𝑨′ = 𝑜
  • 𝑨 = 𝑔: 𝑨′ = 𝑔
  • All other 𝑨 yield values between 𝑜 and 𝑔 (but: proportional to

1 𝑨).
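The verification above can be replayed numerically (a Python sketch with my own naming; n = 1, f = 10 chosen arbitrarily):

```python
def perspective_matrix(n, f):
    # Fourth row (0,0,1,0) copies z into w, so homogenization divides by z.
    return [[n, 0, 0, 0],
            [0, n, 0, 0],
            [0, 0, n + f, -f * n],
            [0, 0, 1, 0]]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def homogenize(v):
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3], 1.0]

n, f = 1, 10
P = perspective_matrix(n, f)
# On the near plane z' stays n; on the far plane z' stays f,
# while x and y are scaled by n/z.
near = homogenize(apply(P, [2, 3, n, 1]))   # (2, 3, 1, 1)
far = homogenize(apply(P, [2, 3, f, 1]))    # (0.2, 0.3, 10, 1)
```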

Perspective

SLIDE 29

INFOGR – Lecture 6 – "Transformations" Perspective projection Combining with the orthographic projection matrix gives us: 𝑁𝑝𝑠𝑢ℎ𝑝 × 𝑜 𝑜 𝑜 + 𝑔 −𝑔𝑜 1 = 2𝑜 𝑠 − 𝑚 𝑚 + 𝑠 𝑚 − 𝑠 2𝑜 𝑢 − 𝑐 𝑐 + 𝑢 𝑐 − 𝑢 𝑜 + 𝑔 𝑜 − 𝑔 2𝑔𝑜 𝑔 − 𝑜 1

Perspective

SLIDE 30

Perspective projection

To transform a single world vertex we thus apply:

    (x_screen, y_screen, z_canonical, 1)ᵀ = M_vp M_perspective M_camera (x_world, y_world, z_world, 1)ᵀ

  • 1. M_camera takes us from world space to camera space;
  • 2. M_perspective takes us from camera space to canonical;
  • 3. M_vp takes us from canonical to screen space.

    M_vp = | n_x/2   0     0   n_x/2 |
           |  0     n_y/2  0   n_y/2 |
           |  0      0     1     0   |
           |  0      0     0     1   |

    M_perspective = | 2n/(r−l)    0      (l+r)/(l−r)      0      |
                    |    0     2n/(t−b)  (b+t)/(b−t)      0      |
                    |    0        0      (n+f)/(n−f)  2f·n/(f−n) |
                    |    0        0          1            0      |

    M_camera = |  X_x   X_y   X_z  −E_x |
               |  Y_x   Y_y   Y_z  −E_y |
               | −V_x  −V_y  −V_z  −E_z |
               |   0     0     0     1  |
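The collapse of the chain into a single matrix can be sketched in Python (names mine; for brevity the camera sits at the origin looking down the view axis, so M_camera is the identity and is omitted, and the frustum is symmetric, l = −r and b = −t):

```python
def mat_mul(a, b):
    # 4x4 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def viewport(nx, ny):
    return [[nx / 2, 0, 0, nx / 2], [0, ny / 2, 0, ny / 2],
            [0, 0, 1, 0], [0, 0, 0, 1]]

def perspective(l, r, b, t, n, f):
    return [[2 * n / (r - l), 0, (l + r) / (l - r), 0],
            [0, 2 * n / (t - b), (b + t) / (b - t), 0],
            [0, 0, (n + f) / (n - f), 2 * f * n / (f - n)],
            [0, 0, 1, 0]]

# Collapse the chain into one matrix, then transform + homogenize a vertex.
M = mat_mul(viewport(640, 480), perspective(-1, 1, -0.75, 0.75, 1, 10))
v = apply(M, [0, 0, 1, 1])     # a point on the view axis, on the near plane
v = [c / v[3] for c in v]      # homogenize
# -> the centre of the 640x480 screen: (320, 240)
```

This is exactly the point of collapsing the matrices: one 4 × 4 product per frame, then a single matrix-vector multiply (plus homogenization) per vertex.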

Perspective

SLIDE 31

Today’s Agenda:

  • Projection
  • Pipeline Recap
  • Rasterization

SLIDE 32

INFOGR – Lecture 6 – "Transformations" Scenegraph world car wheel wheel wheel wheel turret plane plane car wheel wheel wheel wheel turret buggy wheel wheel wheel wheel dude dude dude camera 𝑈𝑑𝑏𝑛𝑓𝑠𝑏 𝑈𝑑𝑏𝑠1 𝑈𝑞𝑚𝑏𝑜𝑓1 𝑈𝑑𝑏𝑠2 𝑈𝑞𝑚𝑏𝑜𝑓2 𝑈𝑐𝑣𝑕𝑕𝑧

Pipeline Recap

Transform Project Rasterize Shade meshes vertices vertices fragment positions pixels

Animation, culling, tessellation, ... Postprocessing

SLIDE 33

INFOGR – Lecture 6 – "Transformations" Transformations World space to screen space: 𝑦𝑡𝑑𝑠𝑓𝑓𝑜 𝑧𝑡𝑑𝑠𝑓𝑓𝑜 𝑨𝑑𝑏𝑜𝑝𝑜𝑗𝑑𝑏𝑚 1 = 𝑁𝑤𝑞𝑁𝑞𝑓𝑠𝑡𝑞𝑓𝑑𝑢𝑗𝑤𝑓𝑁𝑑𝑏𝑛𝑓𝑠𝑏 𝑦𝑥𝑝𝑠𝑚𝑒 𝑧𝑥𝑝𝑠𝑚𝑒 𝑨𝑥𝑝𝑠𝑚𝑒 1 Object space to world space: 𝑦𝑥𝑝𝑠𝑚𝑒 𝑧𝑥𝑝𝑠𝑚𝑒 𝑨𝑥𝑝𝑠𝑚𝑒 1 = 𝑁𝑚𝑝𝑑𝑏𝑚𝑁𝑞𝑏𝑠𝑓𝑜𝑢 𝑦𝑚𝑝𝑑𝑏𝑚 𝑧𝑚𝑝𝑑𝑏𝑚 𝑨𝑚𝑝𝑑𝑏𝑚 1 In all cases, we construct a single 4 × 4 matrix, which we then apply to all vertices of a mesh.

Pipeline Recap

Transform Project Rasterize Shade meshes vertices vertices fragment positions pixels

Animation, culling, tessellation, ... Postprocessing

SLIDE 34

INFOGR – Lecture 6 – "Transformations" Transformations Rendering a scene graph is done using a recursive function: Here, matrix concatenation is part of the recursive flow.

Pipeline Recap

void SGNode::Render( mat4& M ) { mat4 M’ = Mlocal * M; mesh->Rasterize( M’ ); for( int i = 0; i < childCount; i++ ) child[i]->Render( M’ ); };

SLIDE 35

INFOGR – Lecture 6 – "Transformations" Transformations To transform meshes to world space, we call SGNode::Render with an identity matrix. To transform meshes to camera space, we call it with the inverse transform of the camera. Remember: the world revolves around the viewer; instead of turning the viewer, we turn the world in the opposite direction.

Pipeline Recap

void SGNode::Render( mat4& M ) { mat4 M’ = Mlocal * M; mesh->Rasterize( M’ ); for( int i = 0; i < childCount; i++ ) child[i]->Render( M’ ); };

SLIDE 36

INFOGR – Lecture 6 – "Transformations" After projection The output of the projection stage is a stream of vertices for which we know 2D screen positions. The vertex stream must be combined with connectivity data to form triangles. ‘Triangles’ on a raster consist of a collection of pixels, called fragments.

Pipeline Recap

Transform Project Rasterize Shade meshes vertices vertices fragment positions pixels connectivity data

SLIDE 37

Today’s Agenda:

  • Projection
  • Pipeline Recap
  • Rasterization

SLIDE 38

INFOGR – Lecture 6 – "Transformations" Connectivity data Two triangles forming a quad, using four vertices: Note:

  • Connectivity data has no relation to actual vertex

positions.

  • Triangles are typically defined in clockwise order

around the triangle normal. These two notes can be contradictory, but in practice, they rarely are.
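A minimal illustration in Python (indices are 0-based here, unlike the 1-based labels in the figure):

```python
# Four vertices, two triangles sharing the diagonal edge (0, 2):
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangles = [(0, 1, 2), (0, 2, 3)]

# The index list references 6 corners but only 4 distinct vertices,
# and says nothing about where those vertices actually are.
corners = [i for tri in triangles for i in tri]
```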

Rasterization

(Figure: four numbered vertices; the quad is stored as index triples (1, 2, 3) and (1, 3, 2).)

SLIDE 39

INFOGR – Lecture 6 – "Transformations" Connectivity data We can store triangles more efficiently using triangle strips. Here, the first three vertex indices specify the first triangle. After that, subsequent triangles use the previous two indices, plus one extra vertex. It is rarely possible to define a complete mesh using a single triangle strip. However, we can generally reduce a mesh to a small set of strips.

Rasterization

(Figure: a triangle strip growing from vertices 1, 2, 3.)
SLIDE 40

INFOGR – Lecture 6 – "Transformations" Connectivity data

Rasterization

On modern hardware, triangle strips are rarely used:

  • The memory reduction affects only

the connectivity data, which is small compared to vertex data;

  • Multiple strips for a single mesh

may incur significant overhead in the driver.

SLIDE 41

INFOGR – Lecture 6 – "Transformations" Triangle rasterization

Rasterization

SLIDE 42

INFOGR – Lecture 6 – "Transformations" Triangle rasterization Rasterizing a triangle, method 1: (from the book, 8.1.2)

  • 1. Determine the axis-aligned bounding box
  • f the triangle;
  • 2. For each pixel within this box, determine

whether it is inside the triangle. Drawback: at least 50% of the pixels will be rejected.
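Method 1 can be sketched with signed edge functions (a Python sketch; the half-plane test is my choice of inside test, the book also derives it via barycentric coordinates):

```python
def edge(a, b, p):
    # Signed area: positive if p lies to the left of the edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri):
    # 1. axis-aligned bounding box of the triangle
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    pixels = []
    # 2. test every pixel centre in the box against the three edges;
    #    the point is inside when all three signs agree
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            p = (x + 0.5, y + 0.5)
            w = [edge(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
            if all(v >= 0 for v in w) or all(v <= 0 for v in w):
                pixels.append((x, y))
    return pixels
```

For the right triangle (0,0), (4,0), (0,4) this visits 25 box pixels and keeps only 10, illustrating the "at least 50% rejected" drawback.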

Rasterization

SLIDE 43

INFOGR – Lecture 6 – "Transformations" Triangle rasterization Rasterizing a triangle, method 2: (see e.g. fatmap.txt, fatmap2.txt)

  • 1. Per scanline (within the bounding box),

determine the left and right side of the triangle;

  • 2. Per scanline, draw a horizontal line from

the left to the right. Drawback: not as easy to execute in parallel

  • n GPUs.

Rasterization

SLIDE 44

INFOGR – Lecture 6 – "Transformations" Triangle rasterization So far, we have seen how to fill a triangle, or more accurately: how to determine which pixels it overlaps. To shade the triangle, we need more information. Per pixel:

  • Color (e.g. from a texture);
  • Normal;
  • Interpolated per-vertex shading information.

Rasterization

SLIDE 45

INFOGR – Lecture 6 – "Transformations" Sanity check Let’s take a brief moment to meditate on the madness on the previous slide. Per pixel:

  • Normal

A triangle is defined by three vertices. All points on the triangle lie in the same plane. Therefore, the normal for each point on the triangle is the same.

Rasterization

SLIDE 46

INFOGR – Lecture 6 – "Transformations" Sanity check Normal interpolation can cause some bad behavior: Shadows are still cast by the not-so-smooth geometry.

Rasterization

SLIDE 47

INFOGR – Lecture 6 – "Transformations" Sanity check Normal interpolation can cause some bad behavior: Shadows are still cast by the not-so-smooth geometry.

Rasterization

SLIDE 48

INFOGR – Lecture 6 – "Transformations" Sanity check Shading interpolation: Normal interpolation is costly: a linearly interpolated normal needs normalization, which involves a square root. Solution: calculate shading per vertex, and interpolate.

Rasterization

SLIDE 49

INFOGR – Lecture 6 – "Transformations" Sanity check Shading: In nature, the color of a surface is the sum of all the light reflected by the surface towards the camera. Incoming light:

  • Direct light (arriving from light sources);
  • Indirect light (arriving via other surfaces).

Incoming light is partially absorbed, partially reflected. Light is generally not reflected uniformly in all directions.

Rasterization

SLIDE 50

INFOGR – Lecture 6 – "Transformations" Triangle rasterization Interpolating per-vertex values over a triangle: Barycentric coordinates. Any point on the triangle can be parameterized by two values: 𝑄(λ1, λ2) = 𝐵 + λ1 𝐶 − 𝐵 + λ2(𝐷 − 𝐵) where 0 ≤ λ1, λ2 ≤ 1, and λ1+λ2 ≤ 1. Or, reversed: λ1 = 𝑄 ∙ 𝐶 − 𝐵 − 𝑄 ∙ 𝐵 λ2 = 𝑄 ∙ 𝐷 − 𝐵 − 𝑄 ∙ 𝐵

Rasterization

(Figure: triangle with vertices A, B, C and an interior point P.)

SLIDE 51

INFOGR – Lecture 6 – "Transformations" Triangle rasterization 𝑄(λ1, λ2) = 𝐵 + λ1 𝐶 − 𝐵 + λ2(𝐷 − 𝐵) Given the vertex normals 𝑂

𝐵, 𝑂𝐶 and 𝑂𝐷, we can now calculate the

interpolated per-pixel normal 𝑂𝑄: 𝑂𝑄 = 𝑂

𝐵 + λ1 𝑂𝐶 − 𝑂 𝐵 + λ2(𝑂𝐷 − 𝑂 𝐵)

Remember that an interpolated normal is typically not normalized.

Rasterization

SLIDE 52

Today’s Agenda:

  • Projection
  • Pipeline Recap
  • Rasterization

SLIDE 53

INFOGR – Computer Graphics

  • J. Bikker - April-July 2015 - Lecture 6: "Transformations"

END of "Transformations"

next lecture: “Visibility”
