

SLIDE 1

Singular Value Decomposition

Presented by Matthew Motoki

SLIDE 2

What is a singular value decomposition (SVD)?

A singular value decomposition is a factorization of the form $A = U \Sigma V^T$, where

  • $A$ is an $m \times n$ matrix,
  • $U$ is an $m \times m$ orthogonal matrix ($U U^T = U^T U = I$),
  • $\Sigma$ is a real $m \times n$ rectangular diagonal matrix with ordered diagonal entries $\sigma_1 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_{\min\{m,n\}} = 0$,
  • $V$ is an $n \times n$ orthogonal matrix.
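The definition above can be checked numerically. A minimal NumPy sketch (the 4×3 matrix `A` here is a made-up example):

```python
import numpy as np

# A hypothetical 4x3 matrix, used only for illustration.
A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [2., 0., 1.],
              [1., 1., 1.]])
m, n = A.shape

# full_matrices=True returns the m x m U and n x n V^T of the definition.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the rectangular diagonal Sigma from the singular values.
Sigma = np.zeros((m, n))
Sigma[:len(s), :len(s)] = np.diag(s)

# Check the defining properties.
assert np.allclose(U @ U.T, np.eye(m))   # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(n)) # V is orthogonal
assert np.allclose(U @ Sigma @ Vt, A)    # A = U Sigma V^T
assert np.all(s[:-1] >= s[1:])           # sigma_1 >= ... >= sigma_min{m,n}
```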

SLIDE 3

Rectangular Diagonal Matrices

For $m < n$, the diagonal block is padded with $n - m$ zero columns on the right:
$$
\Sigma = \begin{pmatrix} \operatorname{diag}(\sigma_1, \ldots, \sigma_m) & 0_{m \times (n-m)} \end{pmatrix}.
$$

For $m > n$, it is padded with $m - n$ zero rows below:
$$
\Sigma = \begin{pmatrix} \operatorname{diag}(\sigma_1, \ldots, \sigma_n) \\ 0_{(m-n) \times n} \end{pmatrix}.
$$

In both cases only $\sigma_1, \ldots, \sigma_r$ are nonzero.

SLIDE 4

Applications of SVDs

SVDs are used in a wide range of disciplines, including signal processing, statistics, linear algebra, and computer science. Respective examples include:

  • noise reduction,
  • solving a linear least squares problem,
  • determining the rank, range, and null space of $A$ (and $A^T$), and computing the pseudoinverse of a matrix,
  • data compression/matrix approximation.
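Two of these applications (rank determination and the pseudoinverse) can be sketched in a few lines of NumPy; the matrix `A` and right-hand side `b` below are invented for illustration:

```python
import numpy as np

# Hypothetical overdetermined system: least squares via the pseudoinverse.
A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2.])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A+ = V Sigma+ U^T, inverting only the singular values above a tolerance.
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

# The rank is the number of singular values above the tolerance.
rank = int((s > tol).sum())
assert rank == np.linalg.matrix_rank(A)
assert np.allclose(A_pinv, np.linalg.pinv(A))

x = A_pinv @ b                          # least-squares solution
assert np.allclose(A.T @ (A @ x - b), 0)  # residual is orthogonal to range(A)
```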

SLIDE 5

Computing an SVD

SVDs can be computed by performing eigenvalue decompositions of $AA^T$ and $A^TA$. Writing $\Sigma$ in block form,
$$
\Sigma = \begin{pmatrix} \Sigma_{r \times r} & 0_{r \times (n-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (n-r)} \end{pmatrix},
$$
we have
$$
AA^T = U \Sigma V^T (U \Sigma V^T)^T = U \Sigma \underbrace{V^T V}_{I} \Sigma^T U^T = U \begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (m-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (m-r)} \end{pmatrix} U^T.
$$
Similarly,
$$
A^T A = V \begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (n-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (n-r)} \end{pmatrix} V^T.
$$
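A quick numerical check of this relationship, assuming NumPy (the matrix is a made-up example): the eigenvalues of $AA^T$ are the squared singular values of $A$, padded with zeros.

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 3))  # hypothetical 4x3 matrix

# Eigenvalues of A A^T (symmetric), sorted descending to match sigma_1 >= ...
evals, evecs = np.linalg.eigh(A @ A.T)  # eigh returns ascending order
evals = evals[::-1]

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending

# sigma_i = sqrt(lambda_i); clip guards against tiny negative round-off.
assert np.allclose(np.sqrt(np.clip(evals[:len(s)], 0, None)), s)
```

The columns of the eigenvector matrix agree with those of $U$ only up to sign and the ordering of equal eigenvalues, which is why the check above compares eigenvalues rather than eigenvectors.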

SLIDE 6

Jacobi Eigenvalue Algorithm

The Jacobi eigenvalue algorithm is an iterative method for calculating the eigenvalues and eigenvectors of a real symmetric matrix $S$. The idea is to multiply on the left and right by Givens rotation matrices $G$, with the intent of killing off off-diagonal entries: $S' = G^T S G$, with
$$
G = G(i, j, \theta) = \begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & c & \cdots & s & \\
& & \vdots & \ddots & \vdots & \\
& & -s & \cdots & c & \\
& & & & & \ddots
\end{pmatrix},
$$
where $c = \cos\theta$ and $s = \sin\theta$ sit in rows and columns $i$ and $j$, and $G$ agrees with the identity elsewhere.
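Building such a rotation is a one-liner away from the identity matrix; a minimal NumPy sketch (the size, indices, and angle below are arbitrary, and `givens` is a hypothetical helper name):

```python
import numpy as np

def givens(n, i, j, theta):
    """Givens rotation G(i, j, theta): identity except in rows/columns i, j."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = s, -s
    return G

G = givens(5, 1, 3, 0.7)
assert np.allclose(G.T @ G, np.eye(5))    # rotations are orthogonal
assert np.isclose(np.linalg.det(G), 1.0)  # and proper (determinant 1)
```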

SLIDE 7

Jacobi Eigenvalue Algorithm cont.

The entries of $S'$ are:
$$
\begin{aligned}
S'_{ii} &= c^2 S_{ii} - 2sc\,S_{ij} + s^2 S_{jj},\\
S'_{jj} &= s^2 S_{ii} + 2sc\,S_{ij} + c^2 S_{jj},\\
S'_{ij} &= S'_{ji} = (c^2 - s^2) S_{ij} + sc\,(S_{ii} - S_{jj}),\\
S'_{ik} &= S'_{ki} = c\,S_{ik} - s\,S_{jk}, &k \neq i, j,\\
S'_{jk} &= S'_{kj} = s\,S_{ik} + c\,S_{jk}, &k \neq i, j,\\
S'_{kl} &= S_{kl}, &k, l \neq i, j.
\end{aligned}
$$
It can be shown that annihilating the $ij$-th and $ji$-th entries decreases the sum of squares of the off-diagonal entries of $S'$ (by $2S_{ij}^2$). Setting $S'_{ij} = 0$, we get
$$
\cos(2\theta)\,S_{ij} + \tfrac{1}{2}\sin(2\theta)\,(S_{ii} - S_{jj}) = 0,
\quad\text{or}\quad
\tan(2\theta) = \frac{2 S_{ij}}{S_{jj} - S_{ii}}.
$$
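One rotation step can be sketched as follows, assuming NumPy; the symmetric matrix `S` is invented for illustration, and `arctan2` is used so the angle is well defined even when $S_{ii} = S_{jj}$:

```python
import numpy as np

# One Jacobi rotation on a hypothetical symmetric matrix, zeroing S[i, j].
S = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 5.0]])
i, j = 0, 1

# tan(2 theta) = 2 S_ij / (S_jj - S_ii)
theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
c, s = np.cos(theta), np.sin(theta)

# Givens rotation with the sign convention of the update formulas above.
G = np.eye(3)
G[i, i] = G[j, j] = c
G[i, j], G[j, i] = s, -s

S_new = G.T @ S @ G
assert abs(S_new[i, j]) < 1e-12     # target entry annihilated
assert np.allclose(S_new, S_new.T)  # symmetry preserved
```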

SLIDE 8

Jacobi Eigenvalue Algorithm cont.

Iterating this process until a given tolerance is met, e.g. $|S_{ij}| < 10^{-10}$ for all $i \neq j$, one gets
$$
D = \cdots G_2^T G_1^T\, S\, G_1 G_2 \cdots.
$$
Comparing this to our $AA^T$ eigenvalue decomposition,
$$
\begin{pmatrix} (\Sigma_{r \times r})^2 & 0_{r \times (m-r)} \\ 0_{(m-r) \times r} & 0_{(m-r) \times (m-r)} \end{pmatrix} = U^T A A^T U,
$$
it follows that $U = G_1 G_2 \cdots$ (up to the ordering and signs of the columns). Similarly for $V$.
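The full iteration can be sketched as a short loop. This is a simplified classical Jacobi loop in NumPy, not the author's implementation; `jacobi_eig` is a hypothetical helper name, and the test matrix is invented:

```python
import numpy as np

def jacobi_eig(S, tol=1e-10, max_iter=100):
    """Jacobi eigenvalue sketch: returns (eigenvalues, V) with S = V D V^T."""
    S = np.array(S, dtype=float)
    n = S.shape[0]
    V = np.eye(n)  # accumulates G1 G2 ... (the eigenvector matrix)
    for _ in range(max_iter):
        # Classical Jacobi: annihilate the largest off-diagonal entry.
        off = np.abs(S - np.diag(np.diag(S)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:
            break
        # Rotation angle from tan(2 theta) = 2 S_ij / (S_jj - S_ii).
        theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = s, -s
        S = G.T @ S @ G
        V = V @ G
    return np.diag(S), V

# Hypothetical symmetric test matrix.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
evals, V = jacobi_eig(A)
assert np.allclose(np.sort(evals), np.linalg.eigvalsh(A))
assert np.allclose(V @ np.diag(evals) @ V.T, A)
```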

SLIDE 9

Image Compression

Low Rank Approximation Property. The $m \times n$ matrix $A$ can be written as a sum of rank-one matrices,
$$
A = \sum_{i=1}^{r} \sigma_i u_i v_i^T,
$$
where $r$ is the rank of $A$ (the number of nonzero diagonal entries of $\Sigma$), and $u_i$ and $v_i$ are the columns of $U$ and $V$, respectively. For $p < r$, the rank-$p$ approximation of $A$ is defined as
$$
A \approx A_p = \sum_{i=1}^{p} \sigma_i u_i v_i^T.
$$
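A sketch of the rank-$p$ approximation in NumPy; a small random matrix stands in for an image's pixel matrix. Since $A - A_p = \sum_{i>p} \sigma_i u_i v_i^T$, the spectral-norm error of the truncation is exactly the first dropped singular value $\sigma_{p+1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 6))  # stand-in for an image's pixel matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

p = 3
# A_p = sum_{i=1}^{p} sigma_i u_i v_i^T, written as a truncated product.
A_p = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]
# Equivalently: sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(p))

assert np.linalg.matrix_rank(A_p) == p
# Truncation error in the spectral norm equals sigma_{p+1}.
assert np.isclose(np.linalg.norm(A - A_p, 2), s[p])
```

For compression, storing $U[:, :p]$, $s[:p]$, and $Vt[:p, :]$ takes $p(m + n + 1)$ numbers instead of $mn$, which is the point of choosing small $p$.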
