

SLIDE 1

The Singular Value Decomposition

COMPSCI 527 — Computer Vision


SLIDE 2

Outline

1. Math Corners and the SVD: Motivation
2. Orthogonal Matrices
3. Orthogonal Projection
4. The Singular Value Decomposition
5. Principal Component Analysis


SLIDE 3

Math Corners and the SVD: Motivation

  • A few math installments to get ready for later technical topics are sprinkled throughout the course
  • The Singular Value Decomposition (SVD) gives the most complete geometric picture of a linear mapping
  • The SVD yields orthonormal vector bases for the null space, the row space, the range, and the left null space of a matrix
  • The SVD leads to the pseudo-inverse, a way to give a linear system a unique and stable approximate solution
  • The SVD leads to principal component analysis (PCA), a technique to reduce the dimensionality of a set of vector data while retaining as much information as possible
  • Dimensionality reduction improves the ability of machine learning methods to generalize


SLIDE 4

Orthogonal Matrices

Why Orthonormal Bases are Useful

  • n-dimensional linear space S ⊆ ℝᵐ (so n ≤ m)
  • p = [p₁, …, pₘ]ᵀ ∈ S (coordinates in the standard basis)
  • v₁, …, vₙ: an orthonormal basis for S, that is, vᵢᵀvⱼ = δᵢⱼ (Kronecker delta)
  • Then there exists q = [q₁, …, qₙ]ᵀ such that p = q₁v₁ + … + qₙvₙ
  • Matrix form: p = Vq where V = [v₁, …, vₙ] ∈ ℝᵐˣⁿ
  • Multiplying p = Vq on the left by vᵢᵀ gives vᵢᵀp = qᵢ


SLIDE 5

Orthogonal Matrices

The Left Inverse of an Orthogonal Matrix

qᵢ = vᵢᵀp (finding the coefficients qᵢ is easy!)

  • Rewrite vᵢᵀvⱼ = δᵢⱼ in matrix form: with V = [v₁, …, vₙ] ∈ ℝᵐˣⁿ, VᵀV = Iₙ (the n × n identity)
  • LV = I with L = Vᵀ, so the transpose of V is a left inverse of V
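A quick numerical sketch of these facts in NumPy (my own illustration: V is an arbitrary orthonormal basis obtained by QR factorization, not a specific matrix from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3

# Build V in R^{m x n} with orthonormal columns: an orthonormal
# basis v_1, ..., v_n for an n-dimensional subspace S of R^m.
V, _ = np.linalg.qr(rng.standard_normal((m, n)))

# V^T V = I_n, so L = V^T is a left inverse of V.
assert np.allclose(V.T @ V, np.eye(n))

# Take a point p in S (a linear combination of the columns of V) ...
q_true = rng.standard_normal(n)
p = V @ q_true

# ... and recover its coefficients with q_i = v_i^T p, i.e. q = V^T p.
q = V.T @ p
assert np.allclose(q, q_true)
```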


SLIDE 6

Orthogonal Matrices

Left and Right Inverse of an Orthogonal Matrix

  • LV = I with L = Vᵀ
  • Can we have R such that VR = I? That would be a right inverse
  • For m > n the answer is no: rank(VR) ≤ rank(V) = n < m, so VR can never equal the m × m identity
  • What if m = n? Then VᵀV = I makes V invertible with V⁻¹ = Vᵀ, so VVᵀ = I as well and R = Vᵀ works (see the sketch below)
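A numerical sketch of both cases (illustrative only; the matrices are random orthonormal bases from QR factorizations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tall case m > n: V^T V = I, but V V^T is not the identity,
# so no right inverse exists (rank(V R) <= rank(V) = n < m).
V, _ = np.linalg.qr(rng.standard_normal((5, 3)))
print(np.allclose(V.T @ V, np.eye(3)))   # True: left inverse works
print(np.allclose(V @ V.T, np.eye(5)))   # False: V V^T != I_m

# Square case m = n: V is invertible and V^T is a two-sided inverse.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.allclose(Q @ Q.T, np.eye(4)))   # True
```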


SLIDE 7

Orthogonal Matrices

Orthogonal Transformations Preserve Norm (m ≥ n)

y = Vx : ℝⁿ → ℝᵐ

‖y‖² = ‖Vx‖² = xᵀVᵀVx = xᵀx = ‖x‖²
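The same computation checked numerically (a sketch with a random orthonormal V):

```python
import numpy as np

rng = np.random.default_rng(2)
V, _ = np.linalg.qr(rng.standard_normal((6, 4)))  # orthonormal columns
x = rng.standard_normal(4)

# ||Vx||^2 = x^T V^T V x = x^T x = ||x||^2
print(np.linalg.norm(V @ x), np.linalg.norm(x))   # equal up to round-off
```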


SLIDE 8

Orthogonal Projection

Projection Onto a Subspace (n ≤ m)

  • The projection of b ∈ ℝᵐ onto a subspace S ⊆ ℝᵐ is the point p ∈ S closest to b
  • Let the columns of V ∈ ℝᵐˣⁿ be an orthonormal basis for S (that is, V is an orthogonal matrix)
  • b − p ⊥ vᵢ for i = 1, …, n, that is, Vᵀ(b − p) = 0
  • The projection of b ∈ ℝᵐ onto S is p = VVᵀb (optional proofs in an Appendix)
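A sketch of the projection formula p = VVᵀb (here S is a random 2-dimensional subspace of ℝ⁵; the comparison point is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 2
V, _ = np.linalg.qr(rng.standard_normal((m, n)))  # orthonormal basis for S

b = rng.standard_normal(m)
p = V @ (V.T @ b)          # projection of b onto S

# The residual b - p is orthogonal to every basis vector of S ...
assert np.allclose(V.T @ (b - p), 0)

# ... and p is at least as close to b as any other point of S we try.
other = V @ rng.standard_normal(n)
assert np.linalg.norm(b - p) <= np.linalg.norm(b - other)
```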


SLIDE 9

The Singular Value Decomposition

Linear Mappings

b = Ax : ℝⁿ → ℝᵐ. Example (m = 3, n = 2):

A = (1/√2) ⎡ √3  √3 ⎤
           ⎢ −3   3 ⎥
           ⎣  1   1 ⎦

range(A) ↔ rowspace(A)


SLIDE 10

The Singular Value Decomposition

The Singular Value Decomposition: Geometry

b = Ax where A = (1/√2) ⎡ √3  √3 ⎤
                        ⎢ −3   3 ⎥
                        ⎣  1   1 ⎦


SLIDE 11

The Singular Value Decomposition

The Singular Value Decomposition: Algebra

Av₁ = σ₁u₁,  Av₂ = σ₂u₂,  with σ₁ ≥ σ₂ > 0

uᵢᵀuⱼ = δᵢⱼ (the uᵢ are orthonormal in ℝᵐ)
vᵢᵀvⱼ = δᵢⱼ (the vᵢ are orthonormal in ℝⁿ)
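These relations can be checked numerically for the example matrix, assuming the 3 × 2 reading of the entries recovered above (a sketch):

```python
import numpy as np

# The example matrix as reconstructed above: 1/sqrt(2) times the
# recovered entries, read as three rows of two.
A = np.array([[np.sqrt(3), np.sqrt(3)],
              [-3.0, 3.0],
              [1.0, 1.0]]) / np.sqrt(2)

U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(s)                    # [3. 2.]: sigma_1 = 3 >= sigma_2 = 2 > 0

# A v_i = sigma_i u_i for i = 1, 2
for i in range(2):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])

# The columns of U (and of V) are orthonormal.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(2))
```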


SLIDE 12

The Singular Value Decomposition

The Singular Value Decomposition: General

For any real m × n matrix A there exist orthogonal matrices

U = [u₁ ⋯ uₘ] ∈ ℝᵐˣᵐ  and  V = [v₁ ⋯ vₙ] ∈ ℝⁿˣⁿ

such that

UᵀAV = Σ = diag(σ₁, …, σₚ) ∈ ℝᵐˣⁿ

where p = min(m, n) and σ₁ ≥ … ≥ σₚ ≥ 0. Equivalently, A = UΣVᵀ.
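A sketch verifying the decomposition for a random matrix (note that NumPy's svd returns Vᵀ rather than V):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 4
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=True)   # U: m x m, Vt: n x n
p = min(m, n)

# Sigma is m x n with the singular values on its leading diagonal.
Sigma = np.zeros((m, n))
Sigma[:p, :p] = np.diag(s)

assert np.allclose(U.T @ A @ Vt.T, Sigma)         # U^T A V = Sigma
assert np.allclose(A, U @ Sigma @ Vt)             # A = U Sigma V^T
assert np.all(np.diff(s) <= 0) and np.all(s >= 0) # sigma_1 >= ... >= sigma_p >= 0
```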


SLIDE 13

The Singular Value Decomposition

Rank and the Four Subspaces

A = UΣVᵀ = [u₁ ⋯ uᵣ | uᵣ₊₁ ⋯ uₘ] diag(σ₁, …, σᵣ, 0, …, 0) [v₁ ⋯ vᵣ | vᵣ₊₁ ⋯ vₙ]ᵀ

where r = rank(A) is the number of nonzero singular values: the first r columns of U span the range of A and the last m − r its left null space; the first r columns of V span the row space and the last n − r the null space. [drawn for m > n]
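A sketch of how the four subspaces fall out of U and V for a rank-deficient matrix (the rank tolerance 1e-10·σ₁ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 6, 4, 2
# A rank-r matrix, built as a product of random m x r and r x n factors.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

U, s, Vt = np.linalg.svd(A, full_matrices=True)
rank = int(np.sum(s > 1e-10 * s[0]))   # number of nonzero singular values
assert rank == r

col_space = U[:, :rank]        # orthonormal basis for range(A)
left_null = U[:, rank:]        # orthonormal basis for the left null space
row_space = Vt[:rank].T        # orthonormal basis for the row space
null_space = Vt[rank:].T       # orthonormal basis for null(A)

assert np.allclose(A @ null_space, 0)    # A v = 0 on the null space
assert np.allclose(left_null.T @ A, 0)   # u^T A = 0 on the left null space
```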


SLIDE 14

Principal Component Analysis

  • We used the SVD to view a matrix as a map
  • We can also view a matrix as a data set
  • A is an m × n matrix whose n columns are data points in ℝᵐ


SLIDE 15

Principal Component Analysis

  • Let k ≤ m be some “smaller dimensionality”
  • How can we approximate the m-dimensional data in A with points in k dimensions? (data compression, dimensionality reduction)
  • The columns of A are a cloud of points around the mean µ(A) = (1/n) A 1ₙ, where 1ₙ ∈ ℝⁿ is the vector of all ones
  • Center the matrix: Ac = A − µ(A) 1ₙᵀ
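In NumPy, the mean and the centering look like this (a sketch on a synthetic data matrix):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 3, 100
# Synthetic cloud of n points in R^3, shifted away from the origin.
A = rng.standard_normal((m, n)) + np.array([[5.0], [1.0], [-2.0]])

ones = np.ones(n)
mu = (A @ ones) / n            # mu(A) = (1/n) A 1_n, the mean column
Ac = A - np.outer(mu, ones)    # A_c = A - mu(A) 1_n^T

assert np.allclose(Ac.mean(axis=1), 0)   # the centered cloud has zero mean
```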


SLIDE 16

Principal Component Analysis


SLIDE 17

Principal Component Analysis

  • How can we approximate the m-dimensional centered cloud Ac in k ≪ m dimensions?
  • Full SVD: Ac = UΣVᵀ = [u₁ ⋯ uₖ | uₖ₊₁ ⋯ uₘ] diag(σ₁, …, σₖ, σₖ₊₁, …, σₚ) [v₁ ⋯ vₖ | vₖ₊₁ ⋯ vₙ]ᵀ, with p = min(m, n)
  • Keep only the k largest singular values: Ac ≈ UₖΣₖVₖᵀ = [u₁ ⋯ uₖ] diag(σ₁, …, σₖ) [v₁ ⋯ vₖ]ᵀ


SLIDE 18

Principal Component Analysis

  • Ac ≈ UₖΣₖVₖᵀ = [u₁ ⋯ uₖ] diag(σ₁, …, σₖ) [v₁ ⋯ vₖ]ᵀ
  • Equivalently Ac ≈ UₖB, where B = [b₁, …, bₙ] = ΣₖVₖᵀ
  • B = UₖᵀAc
  • B = Uₖᵀ[A − µ(A)1ₙᵀ]  (PCA)
  • B is k × n and captures most of the variance in A
  • See notes for a statistical interpretation (optional)
  • Reconstruct approximate original data: A = Ac + µ(A)1ₙᵀ, so A ≈ UₖB + µ(A)1ₙᵀ
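Putting the pieces together, a minimal PCA sketch on synthetic data (the dimensions m, n, k are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, k = 10, 200, 3

# Synthetic data: points that lie near a 3-dimensional subspace of R^10.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((m, n))

ones = np.ones(n)
mu = A.mean(axis=1)
Ac = A - np.outer(mu, ones)            # center the cloud

U, s, Vt = np.linalg.svd(Ac, full_matrices=False)
Uk = U[:, :k]                          # first k left singular vectors

B = Uk.T @ Ac                          # k x n matrix of PCA coefficients
A_hat = Uk @ B + np.outer(mu, ones)    # approximate reconstruction

print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # small relative error
```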


SLIDE 19

Principal Component Analysis

PCA Example

Image compression: Each column viewed as a data point

[Figure panels: “m × n = 685 × 1024 original”; “Singular Values” (log-scale plot, roughly 10⁰ to 10⁵); “k = 10 dimensions”; “k = 50 dimensions”]
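A sketch of the same experiment on a synthetic stand-in image (the photograph from the slide is not available here; the sizes m = 685, n = 1024 match the slide, and the storage count is one illustrative way to measure compression):

```python
import numpy as np

# Synthetic stand-in for a grayscale image; each column is one data point.
m, n = 685, 1024
x, y = np.meshgrid(np.linspace(0, 4, n), np.linspace(0, 4, m))
img = np.sin(x * y) + 0.5 * np.cos(3 * x)

mu = img.mean(axis=1)
C = img - mu[:, None]                      # centered columns
U, s, Vt = np.linalg.svd(C, full_matrices=False)

for k in (10, 50):
    # Rank-k reconstruction: U_k (Sigma_k V_k^T) plus the mean.
    approx = U[:, :k] @ (np.diag(s[:k]) @ Vt[:k]) + mu[:, None]
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    stored = k * (m + n) + m               # numbers kept: U_k, B, and the mean
    print(f"k={k}: relative error {err:.3f}, "
          f"{stored} numbers vs {m * n} original")
```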


SLIDE 20

Principal Component Analysis

Encoding/Decoding New Points

  • Given the PCA parameters µ(A) and Uₖ for an m × n matrix A, the compressed points are B = Uₖᵀ[A − µ(A)1ₙᵀ], a k × n matrix with k ≪ m
  • The original points can be approximately reconstructed as A ≈ UₖB + µ(A)1ₙᵀ
  • Given a new point a ∈ ℝᵐ, it can be encoded as b = Uₖᵀ[a − µ(A)] without incorporating a into the PCA (b is a short, k-dimensional vector)
  • The original a can be approximately reconstructed as a ≈ Uₖb + µ(A)
  • The parameters µ(A) and Uₖ are a code, used for encoding and approximate decoding (compression/decompression)
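A sketch of the encoder/decoder pair (the function names pca_fit, encode, and decode are my own hypothetical labels, and the data is synthetic):

```python
import numpy as np

def pca_fit(A, k):
    """Return the PCA 'code' (mu, U_k) for the columns of A."""
    mu = A.mean(axis=1)
    U, _, _ = np.linalg.svd(A - mu[:, None], full_matrices=False)
    return mu, U[:, :k]

def encode(a, mu, Uk):
    """Compress a point a in R^m to b = U_k^T (a - mu) in R^k."""
    return Uk.T @ (a - mu)

def decode(b, mu, Uk):
    """Approximately reconstruct a from its code: a ~ U_k b + mu."""
    return Uk @ b + mu

rng = np.random.default_rng(8)
A = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 50))
mu, Uk = pca_fit(A, k=3)

a_new = A[:, 0] + 0.01 * rng.standard_normal(10)  # a new, unseen point
b = encode(a_new, mu, Uk)                          # short 3-vector
print(np.linalg.norm(a_new - decode(b, mu, Uk)))   # small reconstruction error
```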


SLIDE 21

Principal Component Analysis

PCA is Not the Final Answer

[Figure: a point cloud containing two classes of points, labeled + and −]
