3D Geometry for Computer Graphics, Lesson 1: Basics & PCA (PowerPoint PPT presentation)



SLIDE 1

3d Geometry for Computer Graphics

Lesson 1: Basics & PCA

SLIDE 2

3d geometry

SLIDE 3

3d geometry

SLIDE 4

3d geometry

SLIDE 5

Why??

We represent objects using mainly linear primitives:

  • points
  • lines, segments
  • planes, polygons

Need to know how to compute distances, transformations, projections…

SLIDE 6

How to approach geometric problems

  • We have two ways:

1. Employ our geometric intuition
2. Formalize everything and employ our algebra skills

  • Often we first do No. 1 and then solve with No. 2
  • For complex problems No. 1 is not always easy…

SLIDE 7

Motivation – Shape Matching

What is the best transformation that aligns the unicorn with the lion? There are tagged feature points in both sets that are matched by the user

SLIDE 8

Motivation – Shape Matching

The above is not a good alignment… Instead, regard the shapes as sets of points and try to “match” these sets using a linear transformation.

SLIDE 9

Motivation – Shape Matching

Regard the shapes as sets of points and try to “match” these sets using a linear transformation. To find the best rotation we need to know SVD…
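As an illustrative sketch (not part of the slides), the best rotation aligning two matched point sets can be recovered with the SVD (the Kabsch/Procrustes construction). The point set, the angle, and all names below are made up for the demo, which assumes a pure rotation with no translation:

```python
import numpy as np

# Hypothetical matched feature points (one per row), e.g. user-tagged
# correspondences between the two shapes.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T  # Q is P rotated by theta (no translation here)

# Kabsch/Procrustes: SVD of the cross-covariance gives the best rotation.
H = P.T @ Q
U, S, Vt = np.linalg.svd(H)
R = Vt.T @ U.T
# Guard against a reflection solution (det = -1).
if np.linalg.det(R) < 0:
    Vt[-1, :] *= -1
    R = Vt.T @ U.T

recovered = np.allclose(R, R_true)
```

In general the two point sets are centered at their centroids first; that step is skipped here because the toy example has no translation.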

SLIDE 10

Applications for Shape Matching

  • Mesh Comparison
  • Deformation Transfer
  • Morphing

SLIDE 11

Principal Component Analysis

Given a set of points, find the best line that approximates it.

SLIDE 12

Principal Component Analysis

Given a set of points, find the best line that approximates it. The best-fit axes are labeled x’ and y’.

SLIDE 13

Principal Component Analysis

When we learn PCA (Principal Component Analysis), we’ll know how to find the axes x’, y’ that minimize the sum of squared distances.
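A minimal numpy sketch of this idea (the sample line y = 0.5x, the noise level, and the names are arbitrary choices for illustration): center the points, then take the eigenvectors of the covariance matrix.

```python
import numpy as np

# Toy point cloud: noisy samples along the line y = 0.5 x.
rng = np.random.default_rng(0)
t = rng.uniform(-3, 3, 200)
pts = np.column_stack([t, 0.5 * t + 0.05 * rng.standard_normal(200)])

# PCA: center the points, then eigendecompose the covariance matrix.
centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / len(pts)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# The eigenvector of the largest eigenvalue is the direction x' of the
# best-fit line (it minimizes the sum of squared perpendicular distances).
direction = eigvecs[:, -1]
slope = direction[1] / direction[0]
```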

SLIDE 14

PCA and SVD

PCA and SVD are important tools not only in graphics

but also in statistics, computer vision and more.

To learn about them, we first need to get familiar with

eigenvalue decomposition.

So, today we’ll start with a reminder of linear algebra basics.

SVD: A = UΛVᵀ

Λ is diagonal and contains the singular values of A.
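A quick numerical check of this factorization with numpy (the matrix A is an arbitrary example; numpy calls the diagonal factor `s` rather than Λ):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reassemble A = U @ diag(singular values) @ V^T.
A_rebuilt = U @ np.diag(s) @ Vt
ok = np.allclose(A_rebuilt, A)

# Singular values come back sorted in non-increasing order, all >= 0.
sorted_ok = s[0] >= s[1] >= 0
```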

SLIDE 15

Vector space

Informal definition:

  • V ≠ ∅ (a non-empty set of vectors)
  • v, w ∈ V ⇒ v + w ∈ V (closed under addition)
  • v ∈ V, α is a scalar ⇒ αv ∈ V (closed under multiplication by a scalar)

The formal definition includes axioms about associativity and distributivity of the + and ⋅ operators.

0 ∈ V always!

SLIDE 16

Subspace - example

Let l be a 2D line through the origin O. Then L = {p – O | p ∈ l} is a linear subspace of R2.

SLIDE 17

Subspace - example

Let π be a plane through the origin O in 3D. Then V = {p – O | p ∈ π} is a linear subspace of R3.

SLIDE 18

Linear independence

The vectors {v1, v2, …, vk} are a linearly independent set if:

α1v1 + α2v2 + … + αkvk = 0 ⇔ αi = 0 ∀ i

It means that none of the vectors can be obtained as a linear combination of the others.
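One common numerical way to test independence, sketched with numpy (the vectors are arbitrary examples): stack the candidates as columns; they are independent exactly when the matrix has full column rank.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2          # deliberately a combination of v1 and v2

# Full column rank <=> linearly independent columns.
independent = np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
dependent = np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) < 3
```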
SLIDE 19

Linear independence - example

Parallel vectors are always dependent:

v = 2.4w ⇒ v + (−2.4)w = 0

Orthogonal (non-zero) vectors are always independent.

SLIDE 20

Basis of V

{v1, v2, …, vn} is a basis of V if:

  • {v1, v2, …, vn} are linearly independent
  • {v1, v2, …, vn} span the whole vector space V:
    V = {α1v1 + α2v2 + … + αnvn | αi is scalar}

Any vector in V is a unique linear combination of the basis vectors.

The number of basis vectors is called the dimension of V.

SLIDE 21

Basis - example

The standard basis of R3 – three unit orthogonal vectors x, y, z (sometimes called i, j, k or e1, e2, e3).

SLIDE 22

Matrix representation

Let {v1, v2, …, vn} be a basis of V. Every v ∈ V has a unique representation

v = α1v1 + α2v2 + … + αnvn

Denote v by the column vector of its coefficients:

v = (α1, α2, …, αn)ᵀ

The basis vectors themselves are therefore denoted:

v1 = (1, 0, …, 0)ᵀ,  v2 = (0, 1, 0, …, 0)ᵀ,  …,  vn = (0, …, 0, 1)ᵀ
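A small numpy sketch of this representation (the basis and vector are arbitrary illustrative choices): the coordinate column vector of v in a basis B comes from solving the linear system B·α = v.

```python
import numpy as np

# A non-standard basis of R^2, chosen arbitrarily.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

# The coordinates (alpha_1, alpha_2) of v in this basis solve B @ alpha = v.
v = np.array([3.0, 1.0])
alpha = np.linalg.solve(B, v)

# v = alpha_1 * b1 + alpha_2 * b2 reassembles the original vector.
reassembled = alpha[0] * b1 + alpha[1] * b2
```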

SLIDE 23

Linear operators

A : V → W is called a linear operator if:

  • A(v + w) = A(v) + A(w)
  • A(αv) = αA(v)

In particular, A(0) = 0.

Linear operators we know:

  • Scaling
  • Rotation, reflection

Translation is not linear – it moves the origin.

SLIDE 24

Linear operators - illustration

Rotation is a linear operator:


SLIDE 25

Linear operators - illustration

Rotation is a linear operator:


SLIDE 26

Linear operators - illustration

Rotation is a linear operator:


SLIDE 27

Linear operators - illustration

Rotation is a linear operator:

R(v+w) = R(v) + R(w)

SLIDE 28

Matrix representation of linear operators

Look at A(v1), …, A(vn) where {v1, v2, …, vn} is a basis. For all other vectors:

v = α1v1 + α2v2 + … + αnvn
A(v) = α1A(v1) + α2A(v2) + … + αnA(vn)

So, knowing what A does to the basis is enough. The matrix representing A has the images of the basis vectors as its columns:

M_A = ( A(v1) | A(v2) | … | A(vn) )

SLIDE 29

Matrix representation of linear operators

Multiplying M_A by the coordinate vector of a basis vector picks out the corresponding column:

( A(v1) | A(v2) | … | A(vn) ) (1, 0, …, 0)ᵀ = A(v1)

( A(v1) | A(v2) | … | A(vn) ) (0, 1, …, 0)ᵀ = A(v2)
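A sketch of this construction in numpy, using the standard basis of R2 and a 90° rotation as the example operator (both are illustrative choices, not from the slides):

```python
import numpy as np

# An example linear operator: rotation by 90 degrees in the plane.
def A(v):
    return np.array([-v[1], v[0]])

# The matrix of the operator has A(e_i) as its columns.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
M = np.column_stack([A(e1), A(e2)])

# Applying M to any vector agrees with applying the operator directly.
v = np.array([2.0, 5.0])
agree = np.allclose(M @ v, A(v))
```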

SLIDE 30

Matrix operations

Addition, subtraction, scalar multiplication – simple…

Multiplication of a matrix by a column vector:

Ab = ( <row1(A), b>, <row2(A), b>, …, <rowm(A), b> )ᵀ,  i.e.  (Ab)i = Σj aij bj

SLIDE 31

Matrix by vector multiplication

Sometimes a better way to look at it: Ab is a linear combination of A’s columns!

Ab = b1 a1 + b2 a2 + … + bn an

where a1, …, an are the columns of A.
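This column-combination view is easy to verify numerically (A and b below are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
b = np.array([10.0, 0.1])

# A @ b equals b1 * (first column of A) + b2 * (second column of A).
combo = b[0] * A[:, 0] + b[1] * A[:, 1]
same = np.allclose(A @ b, combo)
```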

SLIDE 32

Matrix operations

Transposition: the rows become the columns:

(Aᵀ)ij = Aji

(AB)ᵀ = BᵀAᵀ

SLIDE 33

Matrix operations

The inner product can be written in matrix form:

<v, w> = vᵀw = wᵀv

SLIDE 34

Matrix properties

Matrix A (n×n) is non-singular if ∃B such that AB = BA = I. B = A−1 is called the inverse of A.

A is non-singular ⇔ det A ≠ 0

If A is non-singular then the equation Ax = b has one unique solution for each b.

A is non-singular ⇔ the rows of A are linearly independent (and so are the columns).

SLIDE 35

Orthogonal matrices

Matrix A (n×n) is orthogonal if A−1 = Aᵀ. It follows that AAᵀ = AᵀA = I.

The rows of A are orthonormal vectors, and so are the columns!

Proof (for the columns vi): I = AᵀA means (AᵀA)ij = viᵀvj = δij

⇒ <vi, vi> = 1 ⇒ ||vi|| = 1, and <vi, vj> = 0 for i ≠ j.

The same argument applied to AAᵀ = I gives the rows.
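A quick numerical check with a 2D rotation, which is orthogonal (the angle is an arbitrary choice):

```python
import numpy as np

a = 0.3
A = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])

# For an orthogonal matrix the inverse is just the transpose.
inv_is_transpose = np.allclose(np.linalg.inv(A), A.T)

# Rows (and columns) are orthonormal: A @ A.T = A.T @ A = I.
rows_orthonormal = np.allclose(A @ A.T, np.eye(2))
```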

SLIDE 36

Orthogonal operators

If A is an orthogonal matrix, then A represents a linear operator that preserves the inner product (i.e., preserves lengths and angles):

<Av, Aw> = (Av)ᵀ(Aw) = vᵀAᵀAw = vᵀIw = vᵀw = <v, w>

Therefore, ||Av|| = ||v|| and ∠(Av, Aw) = ∠(v, w).

SLIDE 37

Orthogonal operators - example

Rotation by α around the z-axis in R3:

R = ( cos α  −sin α  0 ; sin α  cos α  0 ; 0  0  1 )

In fact, any orthogonal 3×3 matrix represents a rotation around some axis and/or a reflection:

  • det A = +1: rotation only
  • det A = −1: with reflection
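A numpy sketch of Rz(α) and the properties above (α and the test vectors are arbitrary illustrative choices):

```python
import numpy as np

# Rotation by alpha about the z-axis in R^3.
alpha = 1.1
c, s = np.cos(alpha), np.sin(alpha)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

# det = +1: a pure rotation, no reflection.
pure_rotation = np.isclose(np.linalg.det(R), 1.0)

# Orthogonal operators preserve lengths and inner products.
v = np.array([1.0, 2.0, 3.0])
w = np.array([-1.0, 0.0, 4.0])
length_kept = np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v))
angle_kept = np.isclose((R @ v) @ (R @ w), v @ w)
```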

SLIDE 38

Eigenvectors and eigenvalues

Let A be a square n×n matrix. v is an eigenvector of A if:

  • Av = λv (λ is a scalar)
  • v ≠ 0

The scalar λ is called the eigenvalue.

Av = λv ⇒ A(αv) = λ(αv), so αv is also an eigenvector.
Av = λv, Aw = λw ⇒ A(v + w) = λ(v + w)

Therefore, the eigenvectors of the same λ form a linear subspace.

SLIDE 39

Eigenvectors and eigenvalues

An eigenvector spans an axis (a subspace of dimension 1) that is invariant under A – it remains the same under the transformation.

Example – the axis of rotation is an eigenvector of the rotation transformation.

SLIDE 40

Finding eigenvalues

For which λ is there a non-zero solution to Ax = λx?

Ax = λx ⇔ Ax – λx = 0 ⇔ Ax – λIx = 0 ⇔ (A − λI)x = 0

So a non-trivial solution exists ⇔ det(A − λI) = 0.

ΔA(λ) = det(A − λI) is a polynomial of degree n. It is called the characteristic polynomial of A. The roots of ΔA are the eigenvalues of A. Therefore, over C there are always n eigenvalues (counted with multiplicity), some of them possibly complex. If n is odd, there is at least one real eigenvalue.
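A sketch with numpy (the 2×2 matrix is an arbitrary example; `np.poly` of a square matrix returns the coefficients of its characteristic polynomial):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(lambda*I - A): here lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
# Its roots are exactly the eigenvalues of A.
roots = np.roots(coeffs)
eigs = np.sort(np.linalg.eigvals(A))
match = np.allclose(np.sort(roots.real), eigs.real)
```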
SLIDE 41

Example of computing ΔA

For a concrete 3×3 matrix A, expanding det(A − λI) gives the characteristic polynomial

ΔA(λ) = (3 − λ)(λ² − 2λ + 5)

The quadratic factor cannot be factorized over R. Over C:

λ² − 2λ + 5 = (λ − (1 + 2i))(λ − (1 − 2i))

so the eigenvalues are λ = 3 and λ = 1 ± 2i.

SLIDE 42

Computing eigenvectors

Solve the equation (A – λI)x = 0. We’ll get a subspace of solutions – the eigenspace of λ.
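One way to sketch this numerically (the matrix and λ are illustrative choices): the null space of A − λI, which contains the eigenvectors, can be read off from its SVD as the right-singular vectors with zero singular value.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 3.0

# Solve (A - lam*I) x = 0: the null space gives the eigenvectors.
M = A - lam * np.eye(2)

# Right-singular vectors with (numerically) zero singular value span the
# null space of M.
U, s, Vt = np.linalg.svd(M)
null_mask = s < 1e-10
eigvec = Vt[null_mask][0]

is_eigvec = np.allclose(A @ eigvec, lam * eigvec)
```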

SLIDE 43

Spectra and diagonalization

The set of all the eigenvalues of A is called the spectrum of A.

A is diagonalizable if A has n independent eigenvectors. Then:

A v1 = λ1 v1
A v2 = λ2 v2
…
A vn = λn vn

Writing the eigenvectors as the columns of V and the eigenvalues on the diagonal of D:

A ( v1 | v2 | … | vn ) = ( v1 | v2 | … | vn ) diag(λ1, λ2, …, λn)

AV = VD

SLIDE 44

Spectra and diagonalization

Therefore, A = VDV−1, where D is diagonal.

A represents a scaling along the eigenvector axes!

A = ( v1 | v2 | … | vn ) diag(λ1, λ2, …, λn) ( v1 | v2 | … | vn )−1
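A numerical sketch of AV = VD and A = VDV−1 (the matrix is an arbitrary diagonalizable example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Diagonalize: columns of V are eigenvectors, D holds the eigenvalues.
eigvals, V = np.linalg.eig(A)
D = np.diag(eigvals)

# A V = V D, equivalently A = V D V^{-1}.
av_equals_vd = np.allclose(A @ V, V @ D)
reconstructed = np.allclose(V @ D @ np.linalg.inv(V), A)
```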

SLIDE 45

Spectra and diagonalization

In the figure: A = VDV−1, where V is a rotation/reflection taking one orthonormal basis to another orthonormal basis, and D is a diagonal scaling along the new axes.

SLIDE 46

Spectra and diagonalization


SLIDE 47

Spectra and diagonalization


SLIDE 48

Spectra and diagonalization


SLIDE 49

Spectra and diagonalization

The eigenbasis of A

SLIDE 50

Spectra and diagonalization

A is called normal if AAᵀ = AᵀA. An important example: symmetric matrices, A = Aᵀ.

It can be proved that normal n×n matrices have exactly n linearly independent eigenvectors (over C).

If A is symmetric, all eigenvalues of A are real, and it has n independent, real, orthonormal eigenvectors.
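A sketch with numpy’s symmetric eigensolver (the matrix is an arbitrary symmetric example):

```python
import numpy as np

# Symmetric example: all eigenvalues real, eigenvectors orthonormal.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized for symmetric (Hermitian) matrices.
eigvals, Q = np.linalg.eigh(A)

real_spectrum = np.all(np.isreal(eigvals))
# The eigenvector matrix Q is orthogonal: Q^T Q = I.
orthonormal = np.allclose(Q.T @ Q, np.eye(3))
# A = Q D Q^T (note: Q^T, not a general inverse).
diagonalizes = np.allclose(Q @ np.diag(eigvals) @ Q.T, A)
```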

SLIDE 51

Why SVD…

A diagonalizable matrix is essentially a scaling. Many matrices are not diagonalizable – they do other things along with scaling (such as rotation).

So, to understand how general matrices behave, eigenvalues alone are not enough.

SVD tells us how general linear transformations behave, and other things…

SLIDE 52

Next Week: Mysteries of the PCA & SVD