

SLIDE 1

System Architectures Computer Graphics Rendering Pipeline

Jonathan Thaler

Department of Computer Science

1 / 86

SLIDE 2

Introduction Remember Pipes & Filters...

2 / 86

SLIDE 3

Pipes & Filters

Definition Pipes and Filters is a pattern to implement component-based data transformation problems. A sequence of processing steps on a data stream is expressed using components called Filters, connected through channels called Pipes.
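A minimal sketch of the pattern, with made-up filters over a character stream (the names `pipeline`, `to_upper`, `drop_spaces` are illustrative, not from the slides):

```python
from functools import reduce

def pipeline(*filters):
    """Connect filters in order: the output stream of one feeds the next (the pipe)."""
    def run(stream):
        return reduce(lambda data, f: f(data), filters, stream)
    return run

# two example filters over a stream of characters
to_upper = lambda stream: (c.upper() for c in stream)
drop_spaces = lambda stream: (c for c in stream if c != " ")

process = pipeline(to_upper, drop_spaces)
print("".join(process("pipes and filters")))  # PIPESANDFILTERS
```

Each filter only knows its input and output stream, so filters can be reordered or replaced independently.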

Figure: The Pipes and Filters architectural style divides a larger processing task into a sequence of smaller, independent processing steps (Filters) that are connected by channels (Pipes).

3 / 86

SLIDE 4

Pipes & Filters

Definition The central concept in Pipes and Filters is the processing of a data stream. A data stream is understood as a sequence of uniform data entities (bytes, characters of a character set, a digitized audio signal, ...). Sender and receiver agree on the semantics of the sequence (image, table, audio signal, ...). As a consequence it is, for example, not clear when the stream has reached its end - i.e. there is no structural information marking the end of a data stream.

Figure: A pipes and filters example for processing an incoming order.

4 / 86

SLIDE 5

Other Known Usages

Computer Graphics The data structure is an incremental data stream, starting with vertices, to edges, to fragments, to pixels on screen, going through a number of transformations from 3D space to 2D on screen.

5 / 86

SLIDE 6

Introduction Computer Graphics Pipeline

6 / 86

SLIDE 7

Computer Graphics Pipeline

Figure: Real-time rendering in the game Fortnite.

7 / 86

SLIDE 8

Computer Graphics Pipeline

Figure: Real-time rendering in the game Doom Eternal.

8 / 86

SLIDE 9

Computer Graphics Pipeline

Figure: Real-time rendering in the game Destiny 2.

9 / 86

SLIDE 10

Computer Graphics Pipeline Physically-Based Rendering

10 / 86

SLIDE 11

Computer Graphics Pipeline

Figure: Photorealistic rendering of various materials and surfaces.

11 / 86

SLIDE 12

Computer Graphics Pipeline

Figure: Photorealistic rendering of light scattering with photon tracing.

12 / 86

SLIDE 13

Computer Graphics Pipeline

Figure: Photorealistic rendering of translucent material with subsurface scattering.

13 / 86

SLIDE 14

Computer Graphics Pipeline Towards photorealistic Real-Time Rendering

14 / 86

SLIDE 15

Computer Graphics Pipeline

Figure: Towards photorealistic real-time rendering with the CryEngine.

15 / 86

SLIDE 16

Computer Graphics Pipeline

Figure: Towards photorealistic real-time rendering with the Unreal Engine 4.

16 / 86

SLIDE 17

Computer Graphics Pipeline

Figure: Towards photorealistic real-time rendering with the Unreal Engine 4.

17 / 86

SLIDE 18

Computer Graphics Pipeline The Rendering Pipeline

18 / 86

SLIDE 19

Computer Graphics Pipeline

The central problem of 3D computer graphics is how to arrive from 3D model coordinates at 2D screen coordinates.

Figure: From 3D model to 2D screen space.

19 / 86

SLIDE 20

Computer Graphics Pipeline

Definition The rendering pipeline generates (renders) a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, and other elements such as material properties.

Figure: The pipeline stages execute in parallel, with each stage dependent upon the result of the previous stage.

20 / 86

SLIDE 21

Computer Graphics Pipeline

There are four pipeline stages (the 1st on CPU/GPU; the 2nd, 3rd and 4th on the GPU):

  • 1. The Application Stage is driven by the application and is therefore typically implemented in software running on general-purpose CPUs.
  • 2. The Geometry Processing Stage deals with transformations, projections, and all other types of geometry handling.
  • 3. The Rasterization Stage typically takes as input three vertices forming a triangle, finds all pixels that are considered inside that triangle, and forwards these to the next stage.
  • 4. The Pixel Processing Stage executes a program per pixel to determine its color and may perform depth testing to see whether it is visible or not.
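The four stages can be caricatured as one pipeline of plain functions; everything here (names, data shapes, the hard-coded triangle) is an illustrative assumption, not a real graphics API:

```python
# Illustrative sketch only: the four stages as plain functions over a stream
# of triangles; all names and data shapes here are assumptions, not a real API.
def application():                       # 1. CPU side: submit geometry to draw
    return [((0, 0), (4, 0), (0, 4))]    # one triangle, already in screen space

def geometry(tris):                      # 2. per-vertex work (identity here)
    return tris

def edge(a, b, p):                       # signed-area test used by the rasterizer
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tris):                     # 3. find pixels covered by each triangle
    frags = []
    for a, b, c in tris:
        for x in range(8):
            for y in range(8):
                p = (x, y)
                if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                    frags.append(p)
    return frags

def pixel_processing(frags):             # 4. one color per fragment
    return {f: (255, 255, 255) for f in frags}

image = pixel_processing(rasterize(geometry(application())))
print(len(image))  # 15 covered pixels for this triangle
```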

21 / 86

SLIDE 22

Computer Graphics Pipeline

Application Stage Implements the specific domain logic and performs general-purpose computing tasks. General-purpose computing tasks are traditionally: IO, collision detection, particles, global acceleration algorithms, animation, physics simulation, ... Some tasks such as physics simulations tend to be executed on GPUs, therefore the separation of tasks between CPU and GPU is blurred. Submits draw commands to the GPU for rendering. Is highly domain specific and will be discussed a bit more in the chapter on Game Engine Architecture.

22 / 86

SLIDE 23

Computer Graphics Pipeline Geometry Stage

The geometry stage is typically performed on a graphics processing unit (GPU) that contains many programmable cores as well as fixed-operation hardware. The geometry processing stage on the GPU is responsible for most of the per-triangle and per-vertex operations.

23 / 86

SLIDE 24

Geometry Stage

Geometry Stage Computes what is to be drawn, how it should be drawn, and where it should be drawn. It runs on the GPU and is responsible for most of the per-triangle and per-vertex operations.

24 / 86

SLIDE 25

Geometry Stage

Vertex Shading: View Transformation

25 / 86

SLIDE 26

Geometry Stage

Vertex Shading: Projection

26 / 86

SLIDE 27

Geometry Stage

Clipping

27 / 86

SLIDE 28

Geometry Stage

Screen Mapping

28 / 86

SLIDE 29

Computer Graphics Pipeline Rasterization Stage

Given the transformed and projected vertices with their associated shading data from geometry processing, the goal of the rasterization stage is to find all pixels that are inside a triangle being rendered.

29 / 86

SLIDE 30

Rasterization Stage

Rasterization Stage All the primitives that survive clipping in the geometry stage are rasterized: all pixels that are inside a primitive are found and sent further down the pipeline to pixel processing. Rasterization is the conversion from two-dimensional vertices in screen space - each with a z-value (depth value) and various shading information associated with each vertex - into pixels on the screen. Rasterization is a synchronization point between geometry and pixel processing: triangles are formed from vertices and sent down to pixel processing.

30 / 86

SLIDE 31

Rasterization Stage

  • Triangle Setup: differentials, edge equations, and other data for the triangle are computed. Fixed-function hardware is used for this task, which is therefore not fully programmable through shaders.
  • Triangle Traversal: finding which samples (antialiasing) or pixels are inside a triangle. A pixel inside a triangle is referred to as a fragment.

Each triangle fragment’s properties are generated using data interpolated among the three triangle vertices. These properties include the fragment’s depth, as well as any shading data from the geometry stage. It is also here that perspective-correct interpolation over the triangles is performed. All pixels or samples that are inside a primitive are then sent to the pixel processing stage.

31 / 86

SLIDE 32

Computer Graphics Pipeline Pixel Processing Stage

The goal is to compute the color of each pixel of each visible primitive.

32 / 86

SLIDE 33

Pixel Processing Stage

Pixel Processing Stage Triangles that have been associated with any textures (images) are rendered with these images applied to them as desired. Visibility is resolved via the z-buffer algorithm, along with optional discard and stencil tests. Each object is processed in turn, and the final image is then displayed on the screen.

33 / 86

SLIDE 34

Pixel Processing Stage

Pixel Shading

34 / 86

SLIDE 35

Pixel Processing Stage

Merging with z-Buffer

35 / 86

SLIDE 36

Computer Graphics Pipeline From 3D Model to 2D Screen Coordinates

A detailed and technical discussion of how to arrive from a 3D model at 2D coordinates.

36 / 86

SLIDE 37

From 3D Model to 2D Screen Coordinates

Definition The key tools for projecting three dimensions down to two are a viewing model, use of homogeneous coordinates, application of linear transformations by matrix multiplication, and setting up a viewport mapping.

37 / 86

SLIDE 38

From 3D Model to 2D Screen Coordinates

Definition The common transformation process for producing the desired view is analogous to taking a photograph with a camera.

  • 1. Viewing transformation: move the camera to the location you want to shoot from and point it in the desired direction.
  • 2. Modeling transformation: move the subject to be photographed into the desired location in the scene.
  • 3. Projection transformation: choose a camera lens or adjust the zoom.
  • 4. Apply the transformations: take the picture.
  • 5. Viewport transformation: stretch or shrink the resulting image to the desired picture size.

38 / 86

SLIDE 39

From 3D Model to 2D Screen Coordinates

Model-View Transform Steps 1 and 2 can be considered doing the same thing, but inverse (opposites) of each other. Normally they are combined together as the Model-View Transform. With the Model-View Transform we arrive at a single, unified space for assembling objects into a scene, which is also called Eye Space.

39 / 86

SLIDE 40

From 3D Model to 2D Screen Coordinates

Transformations Coordinate systems for transforming from 3D model coordinates into 2D screen coordinates.

40 / 86

SLIDE 41

From 3D Model to 2D Screen Coordinates Coordinate systems for transforming from 3D model coordinates into 2D screen coordinates.

Visualisation of the various transformation steps.

41 / 86

SLIDE 42

From 3D Model to 2D Screen Coordinates

Model Space A model is defined by a set of vertices. The x, y, z coordinates of these vertices are defined relative to the object’s center, which is at (0, 0, 0).

42 / 86

SLIDE 43

From 3D Model to 2D Screen Coordinates

World Space To put the model into an absolute space, the same for all models, relating them to each other, the model transformation with the model matrix is used.

43 / 86

SLIDE 44

From 3D Model to 2D Screen Coordinates

World Space The point of reference is the center of the world. The model is now in world space, with all vertices of the model transformed relative to the center of the world. We went from model coordinates to world coordinates using the model matrix.

44 / 86

SLIDE 45

From 3D Model to 2D Screen Coordinates

Camera Space The camera is at the origin of the world space (pointing down -z). In order to move the world, we introduce the viewing matrix. Moving the camera 3 units to the right (+x) is equivalent to moving the world 3 units to the left (-x).

45 / 86

SLIDE 46

From 3D Model to 2D Screen Coordinates

Camera Space To move from model space to camera space, combine the modeling and viewing transformation. This transforms the model from model coordinates to world coordinates and on to viewing (camera) coordinates, relative to the camera. We went from world coordinates to camera coordinates using the viewing matrix.

46 / 86

SLIDE 47

From 3D Model to 2D Screen Coordinates

Perspective Projection Apply the perspective projection with the projection matrix, which will cause objects farther away from the camera to appear smaller and objects closer to the camera to appear larger.

47 / 86

SLIDE 48

From 3D Model to 2D Screen Coordinates

Perspective Projection The perspective projection can be understood as a view frustum...

48 / 86

SLIDE 49

From 3D Model to 2D Screen Coordinates

Perspective Projection ... or as the unit cube (-1 to +1 on all axes). This will transform the coordinates into homogeneous coordinates.

49 / 86

SLIDE 50

From 3D Model to 2D Screen Coordinates

Normalised Device Coordinates Having the homogeneous coordinates, perform the perspective division to achieve the foreshortening, ending up in normalised device coordinates (NDC) in the range of [-1, +1] for all coordinates.

50 / 86

SLIDE 51

From 3D Model to 2D Screen Coordinates

Screen Coordinates Finally, perform the transform from NDC to the screen coordinates with the viewport transformation.

51 / 86

SLIDE 52

From 3D Model to 2D Screen Coordinates

The coordinates go from Model Space to Screen Space:
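The whole chain can be sketched in a few lines, assuming a symmetric view frustum, column vectors, and left-multiplication; all concrete values below (frustum size, near/far planes, viewport resolution) are made-up examples, not values from the slides:

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

model = translate(0.0, 0.0, -5.0)         # model space -> world space
view = np.eye(4)                          # camera sits at the world origin
near, far, w, h = 1.0, 10.0, 2.0, 2.0     # example frustum parameters
projection = np.array([                   # camera space -> clip space
    [near / (w / 2), 0, 0, 0],
    [0, near / (h / 2), 0, 0],
    [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
    [0, 0, -1, 0],
])

v_model = np.array([0.5, 0.5, 0.0, 1.0])  # a vertex in model space
clip = projection @ view @ model @ v_model
ndc = clip[:3] / clip[3]                  # perspective divide -> NDC in [-1, +1]

width, height = 800, 600                  # viewport transform -> screen pixels
sx = (ndc[0] + 1.0) / 2.0 * width
sy = (ndc[1] + 1.0) / 2.0 * height
print(ndc, (sx, sy))
```

For this vertex the result is NDC (0.1, 0.1, 0.78) and screen position (440, 330).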

52 / 86

SLIDE 53

From 3D Model to 2D Screen Coordinates

The coordinates go from Model Space to Screen Space:

53 / 86

SLIDE 54

Computer Graphics Pipeline Linear Transformations

The underlying theory of the coordinate manipulations for displaying 3D models.

54 / 86

SLIDE 55

Linear Transformations

Linear Transformations Linear Transformations are used to transform the vertices of the models from 3D model space to 2D screen space. This is done by expressing the transformations as matrices and using matrix multiplication to transform the coordinates of the models.

55 / 86

SLIDE 56

Linear Transformations

Linear Transformations The model coordinates are represented as 4-component vectors because we cannot represent translation with a 3x3 matrix multiplication.

56 / 86

SLIDE 57

Linear Transformations

Linear Transformations Instead of multiplying X matrices separately with N vertices, for efficiency reasons we can combine X matrix transformations into a single matrix by multiplying them together:

v' = Av
v'' = Bv' = B(Av) = (BA)v
v'' = Cv, where C = BA

Inefficient: X * N matrix multiplications. Efficient: X + N matrix multiplications.
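The same idea, checked numerically with two arbitrary 4x4 matrices standing in for pipeline transforms:

```python
import numpy as np

# Composing transforms once vs. re-applying them per vertex.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
v = rng.standard_normal(4)

per_vertex = B @ (A @ v)   # apply A, then B, to every single vertex
C = B @ A                  # combine once...
combined = C @ v           # ...then one multiply per vertex

assert np.allclose(per_vertex, combined)
```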

57 / 86

SLIDE 58

Linear Transformations

Linear Transformations

Matrix multiplication is not commutative: AB ≠ BA. Therefore multiplication with a vector does not commute: Av ≠ vA. However, matrix multiplication is associative: C(BA) = (CB)A = CBA. Therefore we can re-associate the accumulated matrix multiplications as: C(B(Av)) = (CBA)v

58 / 86

SLIDE 59

Computer Graphics Pipeline Homogeneous Coordinates

For allowing translation and perspective with linear transformations.

59 / 86

SLIDE 60

Homogeneous Coordinates

Homogeneous Coordinates Translating (moving/sliding over) three-dimensional coordinates cannot be done by multiplying with a 3x3 matrix. It requires an extra vector addition to move the point (0, 0, 0) somewhere else, as (0, 0, 0) will always be mapped to (0, 0, 0). This is called an affine transformation, which is not a linear transformation. Including that addition means we lose the ability to compose multiple transformations into a single one. By embedding vertices in a 4-coordinate space, affine transformations turn back into a simple linear transform, resulting in homogeneous coordinates. The advantage of using homogeneous coordinates is twofold:

  • 1. It also allows translation to be captured using only a linear transformation.
  • 2. It allows perspective to be applied.

60 / 86

SLIDE 61

Homogeneous Coordinates

Translation with Homogeneous Coordinates To move a vector (x, y, z) by 0.3 in the y direction, assuming a fourth vector coordinate of 1.0:

\begin{pmatrix} 1.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 & 0.3 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1.0 \end{pmatrix} \rightarrow \begin{pmatrix} x \\ y + 0.3 \\ z \\ 1.0 \end{pmatrix}
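The same translation, applied to a hypothetical input point:

```python
import numpy as np

# The 4x4 translation from the slide: only the y component changes.
T = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.3],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
v = np.array([2.0, 5.0, -1.0, 1.0])  # (x, y, z, w=1)
print(T @ v)  # [ 2.   5.3 -1.   1. ]
```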

61 / 86

SLIDE 62

Homogeneous Coordinates

Perspective with Homogeneous Coordinates At the same time, we acquire the extra component needed to do perspective! Consider that a homogeneous coordinate has one extra component and does not change the point it represents when all its components are scaled by the same amount. For example, all these coordinates represent the same point:

\begin{pmatrix} 2.0 \\ 3.0 \\ 5.0 \\ 1.0 \end{pmatrix} \quad \begin{pmatrix} 4.0 \\ 6.0 \\ 10.0 \\ 2.0 \end{pmatrix} \quad \begin{pmatrix} 0.2 \\ 0.3 \\ 0.5 \\ 0.1 \end{pmatrix}

62 / 86

SLIDE 63

Homogeneous Coordinates

Perspective with Homogeneous Coordinates Homogeneous coordinates act as directions instead of locations; scaling a direction leaves it pointing in the same direction. Standing at (0, 0), the homogeneous points (1, 2), (2, 4), and others along that line appear in the same place. When projected onto the 1D space, they all become the point 2:

63 / 86

SLIDE 64

Homogeneous Coordinates

Perspective with Homogeneous Coordinates To move to homogeneous coordinates, add a fourth w component of 1.0:

\begin{pmatrix} 3.0 \\ 4.0 \\ 5.0 \end{pmatrix} \rightarrow \begin{pmatrix} 3.0 \\ 4.0 \\ 5.0 \\ 1.0 \end{pmatrix}

To go back to cartesian coordinates, divide all components by the fourth component and drop the fourth component:

\begin{pmatrix} 4.0 \\ 6.0 \\ 10.0 \\ 2.0 \end{pmatrix} \xrightarrow{\text{divide by } w} \begin{pmatrix} 2.0 \\ 3.0 \\ 5.0 \\ 1.0 \end{pmatrix} \xrightarrow{\text{drop } w} \begin{pmatrix} 2.0 \\ 3.0 \\ 5.0 \end{pmatrix}
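The two conversions as tiny helper functions (the names are my own, not from the slides):

```python
import numpy as np

def to_homogeneous(p):
    """Append w = 1 to a cartesian point."""
    return np.append(p, 1.0)

def to_cartesian(h):
    """Divide by w, then drop w."""
    return (h / h[3])[:3]

print(to_homogeneous(np.array([3.0, 4.0, 5.0])))      # [3. 4. 5. 1.]
print(to_cartesian(np.array([4.0, 6.0, 10.0, 2.0])))  # [2. 3. 5.]
```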

64 / 86

SLIDE 65

Homogeneous Coordinates

Perspective with Homogeneous Coordinates Perspective transforms modify w components to values other than 1.0. Making w larger makes coordinates appear farther away. When displaying geometry, OpenGL transforms homogeneous coordinates back to three-dimensional cartesian coordinates by dividing their first three components by the last component. This makes objects farther away (now having a larger w) have smaller cartesian coordinates, and hence they are drawn at a smaller scale. A w of 0.0 implies (x, y) coordinates at infinity (the object got so close to the viewpoint that its perspective view became infinitely large). This can lead to undefined results.

65 / 86

SLIDE 66

From 3D Model to 2D Screen Coordinates

Transformations Coordinate systems for transforming from 3D model coordinates into 2D screen coordinates.

66 / 86

SLIDE 67

From 3D Model to 2D Screen Coordinates

Perspective Projection Now we apply the perspective projection with the projection matrix, which will cause objects farther away from the camera to appear smaller and objects closer to the camera to appear larger.

67 / 86

SLIDE 68

From 3D Model to 2D Screen Coordinates

Perspective Projection The perspective projection can be understood as a view frustum...

68 / 86

SLIDE 69

From 3D Model to 2D Screen Coordinates

Perspective Projection ... or as the unit cube (-1 to +1 on all axes). This will transform the coordinates into homogeneous coordinates.

69 / 86

SLIDE 70

From 3D Model to 2D Screen Coordinates

Normalised Device Coordinates Having the homogeneous coordinates, we need to perform the perspective division to achieve the foreshortening, ending up in normalised device coordinates (NDC) in the range of [-1, +1] for all coordinates.

70 / 86

SLIDE 71

Computer Graphics Pipeline Transformations

Basic transformations for coordinate manipulations.

71 / 86

SLIDE 72

Transformations

Translation

T = \begin{pmatrix} 1.0 & 0.0 & 0.0 & 2.5 \\ 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix}

72 / 86

SLIDE 73

Transformations

Translation

\begin{pmatrix} x + 2.5 \\ y \\ z \\ 1.0 \end{pmatrix} = \begin{pmatrix} 1.0 & 0.0 & 0.0 & 2.5 \\ 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1.0 \end{pmatrix}

T = \begin{pmatrix} 1 & & & x \\ & 1 & & y \\ & & 1 & z \\ & & & 1 \end{pmatrix} \quad T^{-1} = \begin{pmatrix} 1 & & & -x \\ & 1 & & -y \\ & & 1 & -z \\ & & & 1 \end{pmatrix}
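A small sketch of T and its inverse; `translate` is an assumed helper, not from the slides:

```python
import numpy as np

def translate(x, y, z):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    return m

T = translate(2.5, 0.0, 0.0)
T_inv = translate(-2.5, 0.0, 0.0)          # negate the offsets to invert
assert np.allclose(T @ T_inv, np.eye(4))   # T followed by its inverse is identity
print(T @ np.array([1.0, 2.0, 3.0, 1.0]))  # [3.5 2.  3.  1. ]
```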

73 / 86

SLIDE 74

Transformations

Scaling

S = \begin{pmatrix} 3.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 3.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 3.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix}

74 / 86

SLIDE 75

Transformations

Scaling

\begin{pmatrix} 3x \\ 3y \\ 3z \\ 1.0 \end{pmatrix} = \begin{pmatrix} 3.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 3.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 3.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1.0 \end{pmatrix}

S = \begin{pmatrix} x & & & \\ & y & & \\ & & z & \\ & & & 1 \end{pmatrix} \quad S^{-1} = \begin{pmatrix} \frac{1}{x} & & & \\ & \frac{1}{y} & & \\ & & \frac{1}{z} & \\ & & & 1 \end{pmatrix}

75 / 86

SLIDE 76

Transformations

Scaling If the object being scaled is not centered at (0, 0, 0), the previous scaling matrix will also move it further from or closer to (0, 0, 0) by the scaling amount.

v' = T^{-1}(S(Tv))
v' = (T^{-1}ST)v
M = T^{-1}ST
v' = Mv
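The sandwich M = T⁻¹ S T, sketched for a made-up object center at (2, 0, 0), reading T as the translation that moves the center to the origin:

```python
import numpy as np

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = (x, y, z)
    return m

def scale(s):
    return np.diag([s, s, s, 1.0])

center = np.array([2.0, 0.0, 0.0])
# move the center to the origin, scale, move back: M = T^-1 S T
M = translate(*center) @ scale(3.0) @ translate(*-center)

# the center itself must not move
assert np.allclose(M @ np.append(center, 1.0), np.append(center, 1.0))
# a point 1 unit from the center ends up 3 units from it
print(M @ np.array([3.0, 0.0, 0.0, 1.0]))  # [5. 0. 0. 1.]
```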

76 / 86

SLIDE 77

Transformations

Rotation Rotating an object 50 degrees in the xy plane, around the z axis: only the x- and y-coordinates of the object change and z stays constant.

R_z = \begin{pmatrix} \cos 50 & -\sin 50 & 0.0 & 0.0 \\ \sin 50 & \cos 50 & 0.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix}

77 / 86

SLIDE 78

Transformations

Rotation

\begin{pmatrix} \cos 50 \, x - \sin 50 \, y \\ \sin 50 \, x + \cos 50 \, y \\ z \\ 1.0 \end{pmatrix} = \begin{pmatrix} \cos 50 & -\sin 50 & 0.0 & 0.0 \\ \sin 50 & \cos 50 & 0.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1.0 \end{pmatrix}

R_x = \begin{pmatrix} 1.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & \cos\theta & -\sin\theta & 0.0 \\ 0.0 & \sin\theta & \cos\theta & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix} \quad R_y = \begin{pmatrix} \cos\theta & 0.0 & -\sin\theta & 0.0 \\ 0.0 & 1.0 & 0.0 & 0.0 \\ \sin\theta & 0.0 & \cos\theta & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{pmatrix}

78 / 86

SLIDE 79

Transformations

Rotation If the object being rotated is not centered at (0, 0, 0), the matrices above will also rotate the whole object around (0, 0, 0), changing its location.

v' = T^{-1}(R_z(Tv))
v' = (T^{-1}R_zT)v
M = T^{-1}R_zT
v' = Mv
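A sketch of R_z, using 90 degrees so the result is easy to check (the helper name `rotate_z` is my own):

```python
import numpy as np

def rotate_z(deg):
    """4x4 homogeneous rotation around the z axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([
        [c, -s, 0.0, 0.0],
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

# 90 degrees around z: the x axis maps onto the y axis, z is unchanged
print(np.round(rotate_z(90.0) @ np.array([1.0, 0.0, 2.0, 1.0]), 6))
# [0. 1. 2. 1.]
```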

79 / 86

SLIDE 80

Transformations

Figure: A transformation matrix describing the orientation and position of a model can be seen as a local coordinate system (OpenGL column-major format).

80 / 86

SLIDE 81

Transformations

Perspective Projection Intuition: for perspective projection we want to make objects with larger z values appear further away. The last matrix row replaces the w (fourth) coordinate with the z coordinate. This will make objects with a larger z (further away) appear smaller when the division by w occurs, creating a perspective effect.

\begin{pmatrix} 1.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ 1.0 \end{pmatrix} \rightarrow \begin{pmatrix} x \\ y \\ z \\ z \end{pmatrix}
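A quick check of this intuition, with two hypothetical points that differ only in z:

```python
import numpy as np

# Copy z into w; the perspective divide then shrinks x and y for distant points.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
for z in (2.0, 10.0):
    clip = P @ np.array([4.0, 4.0, z, 1.0])
    print(clip[:2] / clip[3])  # same point, farther away -> smaller x, y
# [2. 2.]
# [0.4 0.4]
```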

81 / 86

SLIDE 82

Transformations

Perspective Projection For perspective projection we want to make objects with larger z values appear further away.

\begin{pmatrix} \frac{z_{near}}{width/2} & 0.0 & 0.0 & 0.0 \\ 0.0 & \frac{z_{near}}{height/2} & 0.0 & 0.0 \\ 0.0 & 0.0 & -\frac{z_{far}+z_{near}}{z_{far}-z_{near}} & -\frac{2\,z_{far}\,z_{near}}{z_{far}-z_{near}} \\ 0.0 & 0.0 & -1.0 & 0.0 \end{pmatrix}
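As a sanity check of this matrix (assuming the standard OpenGL convention of the camera looking down -z), points on the near and far planes should map to NDC z = -1 and +1:

```python
import numpy as np

near, far, w, h = 1.0, 100.0, 2.0, 2.0  # made-up frustum values
P = np.array([
    [near / (w / 2), 0, 0, 0],
    [0, near / (h / 2), 0, 0],
    [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
    [0, 0, -1, 0],
])

def ndc_z(z_eye):
    """Project a point on the view axis and return its NDC depth."""
    clip = P @ np.array([0.0, 0.0, z_eye, 1.0])
    return clip[2] / clip[3]

print(round(ndc_z(-near), 6), round(ndc_z(-far), 6))  # -1.0 1.0
```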

82 / 86

SLIDE 83

Transformations

Perspective Divide It is not the perspective projection that actually creates the 3D effect, but the perspective divide.

v' = \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} \rightarrow \begin{pmatrix} x/w \\ y/w \\ z/w \\ 1 \end{pmatrix}

83 / 86

SLIDE 84

Transformations

Normalised Device Coordinates (NDC) The perspective divide will transform the coordinates into Normalised Device Coordinates (NDC), a transformation of the coordinates into the unit cube in the range of [-1,+1].

84 / 86

SLIDE 85

Transformations

Transforming Normals Normals are mostly used for lighting, which is completed in a pre-perspective space. For this reason the w component of a normal in model space is always 0.0, because it is a direction. The w component of a vertex in model space however is always 1.0, because it is a position in space.
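A quick check of the w = 0 convention, with a made-up translation:

```python
import numpy as np

# Directions (w = 0) are unaffected by translation; positions (w = 1) move.
T = np.eye(4)
T[:3, 3] = (5.0, 0.0, 0.0)                  # translate 5 units along x
normal = np.array([0.0, 1.0, 0.0, 0.0])     # a direction
vertex = np.array([0.0, 1.0, 0.0, 1.0])     # a position
print(T @ normal)  # [0. 1. 0. 0.]  (unchanged)
print(T @ vertex)  # [5. 1. 0. 1.]  (translated)
```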

85 / 86

SLIDE 86

Conclusion

Not covered:

  • Scan-Line & flood-fill for rendering polygons
  • Clipping algorithms
  • Shadowing
  • Per-Pixel Lighting
  • Texture Mapping
  • Backface Culling, Depth Sorting, Shading (Lighting) - all in Exercise 1.

86 / 86