  1. COMPUTER GRAPHICS COURSE Viewing and Projections Georgios Papaioannou - 2014

  2. VIEWING TRANSFORMATION

  3. The Virtual Camera • All graphics pipelines perceive the virtual world through a virtual observer (the “eye”, or virtual camera), which is itself positioned in the 3D environment

  4. Eye Coordinate System (1) • The virtual camera or “eye” also has its own coordinate system, the eye coordinate system (ECS), defined relative to the global (world) coordinate system (WCS)

  5. Eye Coordinate System (2) • Expressing the scene’s geometry in the ECS is a natural “egocentric” representation of the world: – It reflects how the user perceives their relationship with the environment – It is usually a more convenient space in which to perform certain rendering tasks, since it is related to the ordering of the geometry in the final image

  6. Eye Coordinate System (3) • Coordinates as “seen” from the camera reference frame

  7. Eye Coordinate System (4) • What does “egocentric” mean in the context of transformations? – Whatever transformation produced the camera system, its inverse transformation expresses the world w.r.t. the camera • Example: If I move the camera “left”, objects appear to move “right” in the camera frame (WCS camera motion vs. eye-space object motion)
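The “camera left, objects right” example above can be sketched in a few lines of Python (a minimal illustration; the helper name `translate` is mine, not from the slides):

```python
# Moving the camera "left" by 1 unit means eye-space coordinates are obtained
# by applying the INVERSE of the camera transform to world-space points.

def translate(p, t):
    """Translate point p by vector t (both 3-tuples)."""
    return tuple(pi + ti for pi, ti in zip(p, t))

camera_motion = (-1.0, 0.0, 0.0)     # camera moves "left" in the WCS
world_point = (0.0, 0.0, 0.0)        # a static object at the world origin

# Eye-space position = inverse camera translation applied to the point
eye_space_point = translate(world_point, tuple(-c for c in camera_motion))
print(eye_space_point)               # the object appears to move "right"
```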

  8. Moving to Eye Coordinates • Moving to the ECS is a change-of-coordinates transformation • The WCS → ECS transformation expresses the 3D environment in the camera coordinate system • We can define the ECS transformation in two ways: – A) Invert the transformations we applied to place the camera in a particular pose – B) Explicitly define the coordinate system by placing the camera at a specific location and setting up the camera vectors

  9. WCS  ECS: Version A (1) • Let us assume that we have an initial camera at the origin of the WCS • Then, we can move and rotate the “eye” to any pose (rigid transformations only: No sense in scaling a camera): 𝐍 𝑑 𝐩 𝑑 , 𝐯, 𝐰, 𝐱 = 𝐒 1 𝐒 2 𝐔 1 𝐒 𝟑 … . 𝐔 𝑜 𝐒 𝑛 𝐩, ො 𝐟 1 , ො 𝐟 2 , ො 𝐟 3 • The eye space coordinates of shapes, given their WCS coordinates can be simply obtained by: −1 𝐰 𝑋𝐷𝑇 𝐰 𝐹𝐷𝑇 = 𝐍 𝑑

  10. WCS  ECS: Version A (2) • This version of the WCS  ECS transformation computation is useful in cases where: – The camera system is dependent on (attached to) some moving geometry (e.g. a driver inside a car) – The camera motion is well-defined by a simple trajectory (e.g. an orbit around an object being inspected)

  11. WCS  ECS: Version B (“Look At”) (1) • Let us directly define a camera system by specifying where the camera is, where does it point to and what is its roll (or usually, its “up” or “right” vector) up roll camera position front look-at right

  12. WCS  ECS: Version B (“Look At”) (2) • The camera coordinate system offset is the eye (camera) position 𝐩 𝑑 • Given the look-at position (the camera target) 𝐪 𝑢𝑕𝑢 and 𝐩 𝑑 , we can determine the “front” direction: Ԧ 𝐞 𝑔𝑠𝑝𝑜𝑢 = 𝐪 𝑢𝑕𝑢 − 𝐩 𝑑 (normalized) 𝐩 𝑑 𝐪 𝑢𝑕𝑢

  13. WCS  ECS: Version B (“Look At”) (3) • The “up” or “right” vector need not be given precisely, as we can infer the coordinate system indirectly • Let us provide an “upright” up vector: Ԧ 𝐞 𝑣𝑞 = (0,1,0) • Provided that Ԧ 𝐞 𝑣𝑞 is not parallel to Ԧ 𝐞 𝑔𝑠𝑝𝑜𝑢 : 𝐱 = − Ԧ Ԧ Ԧ ෝ 𝐞 𝑔𝑠𝑝𝑜𝑢 / 𝐞 𝑔𝑠𝑝𝑜𝑢 𝐞 𝑣𝑞 𝐰 ො 𝐯 = Ԧ 𝐞 𝑔𝑠𝑝𝑜𝑢 × Ԧ 𝐞 𝑣𝑞 , ෝ 𝐯 = 𝐯/ 𝐯 𝐱 ෝ Ԧ 𝐞 𝑔𝑠𝑝𝑜𝑢 𝐰 = ෝ ො 𝐱 × ෝ 𝐯 ෝ 𝐯

  14. WCS  ECS: Version B (“Look At”) (4) • We can use the derived local camera coordinate system to define the change of coordinates transformation (see 3D Transformations): 𝑣 𝑦 𝑣 𝑧 𝑣 𝑨 0 𝑤 𝑦 𝑤 𝑧 𝑤 𝑨 0 𝐪 𝐹𝐷𝑇 = ∙ 𝐔 −𝐏 𝑑 ∙ 𝐪 𝑋𝐷𝑇 𝑥 𝑦 𝑥 𝑧 𝑥 𝑨 0 0 0 0 1

  15. WCS  ECS: Version B (“Look At”) (5) • This version of the WCS  ECS transformation computation is useful in cases where: – There is a free roaming camera – The camera follows (observes) a certain target in space – The position (and target) are explicitly defined

  16. PROJECTIONS

  17. Projection • Projection is the process of transforming 3D coordinates of shapes to points on the viewing plane • The viewing plane is the 2D flat surface that represents an embedding of an image into the 3D space – We can define viewing systems where the “projection” surface is not planar (e.g. fish-eye lenses etc.) • (Planar) projections are defined by a projection (viewing) plane and a center of projection (eye)

  18. Taxonomy • Two main categories: – Parallel projections: infinite distance between the CoP (center of projection) and the viewing plane – Perspective projections: finite distance between the CoP and the viewing plane

  19. Where Do We Perform the Projections? • Since in projections we “collapse” a 3D shape onto a 2D surface, we essentially want to lose one coordinate (say the depth z) • Therefore, it is convenient to perform the projection when shapes are expressed in the ECS

  20. Orthographic Projection (1) • The simplest projection: collapse the coordinates onto a plane parallel to xy at z = d (usually d = 0): x′ = x, y′ = y, z′ = d – i.e. q = (x, y, z) maps to q′ = (x′, y′, d) on the view plane z = d

  21. Orthographic Projection (2) • Very simple matrix representation: P_ortho = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 0 d ; 0 0 0 1 ] • Note that the rank of the matrix is less than its dimension: this is not a reversible transformation! – This is also intuitively justified, since we “lose” all information about depth
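Applying P_ortho in homogeneous coordinates looks like this (a sketch; helper names are mine, row-major layout assumed):

```python
# Orthographic projection matrix from the slide: x and y pass through,
# z is replaced by the view-plane depth d (here 0), w stays 1.

def ortho_matrix(d=0.0):
    return [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, d],
        [0.0, 0.0, 0.0, 1.0],
    ]

def project(M, q):
    """Apply a 4x4 projection to a 3D point and perform the homogeneous divide."""
    p = q + (1.0,)
    out = [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
    return tuple(c / out[3] for c in out[:3])   # divide by w (w = 1 here)

print(project(ortho_matrix(0.0), (2.0, 3.0, -7.0)))   # depth is discarded
```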

  22. The Pinhole Camera Model • It is an ideal camera (i.e. it cannot exist in practice) • It is the simplest modeling of a photographic camera: light reaches the image sensor through an infinitely small aperture • For simplicity, graphics use a “front” symmetrical projection plane

  23. The Perspective Projection • From similar triangles, we have: x′ = d · x / z, y′ = d · y / z, z′ = d – i.e. q = (x, y, z) projects to q′ = (x′, y′, d) on the view plane z = d, where d is the focal distance

  24. Matrix Form of Perspective Projection • The perspective projection is not a linear operation (division by z) • It cannot be completely represented by a linear operator such as a matrix multiplication; it requires a division by the w coordinate to rectify the homogeneous coordinates: P_per = [ d 0 0 0 ; 0 d 0 0 ; 0 0 d 0 ; 0 0 1 0 ] P_per · q_WCS = (x·d, y·d, z·d, z) → divide by w = z → (x·d/z, y·d/z, d, 1)
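The matrix-plus-divide pipeline above can be sketched directly (helper names are mine). The trick is the last row [0 0 1 0], which copies z into the homogeneous w coordinate so that the subsequent divide performs the nonlinear division by z:

```python
# Perspective projection matrix with focal distance d, followed by the
# perspective divide by the homogeneous w coordinate (which holds z).

def persp_matrix(d):
    return [
        [d, 0.0, 0.0, 0.0],
        [0.0, d, 0.0, 0.0],
        [0.0, 0.0, d, 0.0],
        [0.0, 0.0, 1.0, 0.0],   # w picks up z: this is what makes the divide work
    ]

def project(M, q):
    """Apply a 4x4 projection to a 3D point and perform the homogeneous divide."""
    p = q + (1.0,)
    out = [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
    return tuple(c / out[3] for c in out[:3])   # perspective divide by w = z

# d = 1, point at depth z = 4: x and y shrink by a factor of 4, z' = d
print(project(persp_matrix(1.0), (8.0, 4.0, 4.0)))
```

Doubling the depth of the input point halves the projected x and y, which is exactly the perspective foreshortening described on the next slide.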

  25. Properties of the Perspective Projection • Lines are projected to lines • Distances are not preserved • Angles between lines are not preserved, unless the lines are parallel to the view plane • Perspective foreshortening: the size of the projected shape is inversely proportional to its distance from the center of projection

  26. The Impact of Focal Distance d

  27. What Happens After Projection? (1) • Coordinates are transformed to a “post-projective” space

  28. What Happens After Projection? (2) • Remember also that “depth” is for now collapsed to the focal distance • How then are we going to use the projected coordinates to perform “depth” sorting in order to remove hidden surfaces? • Also, how do we define the extents of what we can see?

  29. Preserving the Depth • Regardless of what the projection is, we also retain the transformed z values • For numerical stability, representation accuracy and plausibility of the displayed image, we limit the z-range: n ≤ z ≤ f – n = near clipping value – f = far clipping value

  30. The View Frustum • The boundaries (line segments) of the image form planes in space • The intersection of the visible subspaces defines what we can see: the view frustum

  31. The Clipping Volume (1) • The viewing frustum forms a clipping volume • It defines which parts of the 3D world are discarded, i.e. do not contribute to the final rendering of the image • For many rendering architectures, this is a closed volume (capped by the far plane) • (Figure: orthographic and perspective clipping volumes, bounded by the near, far and side clipping planes)

  32. The Clipping Volume (2) • After projection, the contents of the clipping volume are warped to match a rectangular parallelepiped • This post-projective volume is usually considered normalized and its local coordinate system is called Canonical Screen Space (CSS) • The respective device coordinates are also called Normalized Device Coordinates (NDC)

  33. Orthographic Projection Revisited (1) • Let us now create an orthographic projection that transforms a specific clipping box volume (left, right, bottom, top, near, far) to CSS
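The transcript ends before the matrix is given, but the standard construction (a hedged sketch, not necessarily the slides’ exact form) scales and translates each axis of the box to [−1, 1]. Note that APIs such as OpenGL additionally negate the z axis; this sketch maps z in [n, f] directly:

```python
# Map the clipping box [l,r] x [b,t] x [n,f] to the canonical cube [-1,1]^3:
# per axis, x' = 2(x - l)/(r - l) - 1, written as a 4x4 scale + translation.

def ortho_box_to_css(l, r, b, t, n, f):
    """Row-major 4x4 mapping the clipping box to [-1, 1]^3 (no z flip assumed)."""
    return [
        [2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, 2.0 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(M, q):
    """Apply a 4x4 matrix to a 3D point (implicit homogeneous w = 1)."""
    p = q + (1.0,)
    return tuple(sum(M[i][j] * p[j] for j in range(4)) for i in range(3))

M = ortho_box_to_css(-2.0, 2.0, -1.0, 1.0, 0.0, 10.0)
print(transform(M, (2.0, 0.0, 5.0)))   # right edge, mid-height, mid-depth
```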
