Singular Value Decomposition (matrix factorization) Singular Value - - PowerPoint PPT Presentation



SLIDE 1

Singular Value Decomposition (matrix factorization)

SLIDE 2

Singular Value Decomposition

The SVD is a factorization of an m×n matrix A into A = U Σ V^T, where U is an m×m orthogonal matrix, V is an n×n orthogonal matrix, and Σ is an m×n diagonal matrix.

For a square matrix (m = n):

A = [ u_1 … u_n ] diag(σ_1, …, σ_n) [ v_1 … v_n ]^T

with the singular values ordered so that

σ_1 ≥ σ_2 ≥ σ_3 ≥ …

SLIDE 3

Reduced SVD

๐‘ฉ = ๐‘ฝ ๐šป ๐‘พ๐‘ผ = โ‹ฎ โ€ฆ โ‹ฎ โ€ฆ โ‹ฎ ๐’—" โ€ฆ ๐’—# โ€ฆ ๐’—$ โ‹ฎ โ€ฆ โ‹ฎ โ€ฆ โ‹ฎ ๐œ" โ‹ฑ ๐œ# โ‹ฎ โ€ฆ ๐ฐ"

%

โ€ฆ โ‹ฎ โ‹ฎ โ‹ฎ โ€ฆ ๐ฐ#

%

โ€ฆ

๐‘›ร—๐‘œ ๐‘›ร—๐‘› ๐‘œร—๐‘œ

๐‘ฉ = ๐‘ฝ๐‘บ ๐šป๐‘บ๐‘พ๐‘ผ

What happens when ๐‘ฉ is not a square matrix? 1) ๐’ > ๐’ We can instead re-write the above as: Where ๐‘ฝ๐‘บ is a ๐‘›ร—๐‘œ matrix and ๐šป๐‘บ is a ๐‘œร—๐‘œ matrix

SLIDE 4

Reduced SVD

๐‘ฉ = ๐‘ฝ ๐šป ๐‘พ๐‘ผ =

โ‹ฎ โ€ฆ โ‹ฎ ๐’—" โ€ฆ ๐’—# โ‹ฎ โ€ฆ โ‹ฎ

๐œ& โ‹ฑ ๐œ. โ‹ฑ

โ€ฆ ๐ฐ"

%

โ€ฆ โ‹ฎ โ‹ฎ โ‹ฎ โ€ฆ ๐ฐ$

%

โ€ฆ โ‹ฎ โ‹ฎ โ‹ฎ โ€ฆ ๐ฐ#

%

โ€ฆ

๐‘›ร—๐‘œ ๐‘›ร—๐‘› ๐‘œร—๐‘œ

๐‘ฉ = ๐‘ฝ ๐šป๐‘บ๐‘พ๐‘บ

๐‘ผ 2) ๐’ > ๐’ We can instead re-write the above as: where ๐‘พ๐‘บ is a ๐‘œร—๐‘› matrix and ๐šป๐‘บ is a ๐‘›ร—๐‘› matrix In general:

๐‘ฉ = ๐‘ฝ๐‘บ๐šป๐‘บ๐‘พ๐‘บ

๐‘ผ

๐‘ฝ๐‘บ is a ๐‘›ร—๐‘™ matrix ๐šป๐‘บ is a ๐‘™ ร—๐‘™ matrix ๐‘พ๐‘บ is a ๐‘œร—๐‘™ matrix ๐‘™ = min(๐‘›, ๐‘œ)

SLIDE 5

Let’s take a look at the product Σ^T Σ, where Σ has the singular values of A, an m×n matrix. The product Σ^T Σ is (n×m)(m×n) = n×n.

Case m > n:

Σ^T Σ = diag(σ_1², …, σ_n²)

Case n > m:

Σ^T Σ = diag(σ_1², …, σ_m², 0, …, 0)

SLIDE 6

Assume ๐‘ฉ with the singular value decomposition ๐‘ฉ = ๐‘ฝ ๐šป ๐‘พ๐‘ผ. Letโ€™s take a look at the eigenpairs corresponding to ๐‘ฉ๐‘ผ๐‘ฉ: ๐‘ฉ๐‘ผ๐‘ฉ = ๐‘ฝ ๐šป ๐‘พ๐‘ผ ๐‘ผ ๐‘ฝ ๐šป ๐‘พ๐‘ผ ๐‘พ๐‘ผ ๐‘ผ ๐šป

๐‘ผ๐‘ฝ๐‘ผ ๐‘ฝ ๐šป ๐‘พ๐‘ผ = ๐‘พ๐šป๐‘ผ๐‘ฝ๐‘ผ ๐‘ฝ ๐šป ๐‘พ๐‘ผ = ๐‘พ ๐šป๐‘ผ๐šป ๐‘พ๐‘ผ

Hence ๐‘ฉ๐‘ผ๐‘ฉ = ๐‘พ ๐šป๐Ÿ‘ ๐‘พ๐‘ผ Recall that columns of ๐‘พ are all linear independent (orthogonal matrix), then from diagonalization (๐‘ช = ๐’€๐‘ฌ๐’€1๐Ÿ), we get:

  • the columns of ๐‘พ are the eigenvectors of the matrix ๐‘ฉ๐‘ผ๐‘ฉ
  • The diagonal entries of ๐šป๐Ÿ‘ are the eigenvalues of ๐‘ฉ๐‘ผ๐‘ฉ

Letโ€™s call ๐œ‡ the eigenvalues of ๐‘ฉ๐‘ผ๐‘ฉ, then ๐œ3) = ๐œ‡3
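The relationship σ_i² = λ_i can be verified numerically (the random test matrix below is an illustrative assumption):

```python
import numpy as np

# Random test matrix, used only for illustration.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# Singular values of A (returned in descending order).
s = np.linalg.svd(A, compute_uv=False)

# Eigenvalues of the symmetric matrix A^T A, sorted descending.
lam = np.linalg.eigvalsh(A.T @ A)[::-1]

# sigma_i^2 = lambda_i
assert np.allclose(s**2, lam)
```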

SLIDE 7

In a similar way,

A A^T = (U Σ V^T)(U Σ V^T)^T = U Σ V^T V Σ^T U^T = U (Σ Σ^T) U^T

Hence A A^T = U Σ² U^T. Recall that the columns of U are all linearly independent (U is an orthogonal matrix), so from diagonalization (B = X D X^{-1}) we get:

  • the columns of U are the eigenvectors of the matrix A A^T
SLIDE 8

How can we compute an SVD of a matrix A?

  • 1. Evaluate the n eigenvectors v_i and eigenvalues λ_i of A^T A.
  • 2. Make a matrix V from the normalized vectors v_i. Its columns are called the “right singular vectors”: V = [ v_1 … v_n ].
  • 3. Make a diagonal matrix from the square roots of the eigenvalues: Σ = diag(σ_1, …, σ_n), where σ_i = √λ_i and σ_1 ≥ σ_2 ≥ σ_3 ≥ ….
  • 4. Find U: A = U Σ V^T ⟹ U Σ = A V ⟹ U = A V Σ^{-1}. The columns of U are called the “left singular vectors”.
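The four steps above can be sketched in NumPy. This is an illustrative implementation only, assuming A has full column rank and non-zero singular values; in practice one would call np.linalg.svd, which is more robust:

```python
import numpy as np

def svd_via_eig(A):
    # Steps 1-4 above; assumes A has full column rank with
    # non-zero singular values (no handling of zeros).
    lam, W = np.linalg.eigh(A.T @ A)          # 1. eigenpairs of A^T A
    order = np.argsort(lam)[::-1]             # sort so sigma_1 >= sigma_2 >= ...
    lam, V = lam[order], W[:, order]          # 2. right singular vectors
    sigma = np.sqrt(lam)                      # 3. sigma_i = sqrt(lambda_i)
    U = A @ V / sigma                         # 4. U = A V Sigma^{-1}
    return U, sigma, V

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
U, sigma, V = svd_via_eig(A)
assert np.allclose(U @ np.diag(sigma) @ V.T, A)
```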

SLIDE 9

True or False?

A has the singular value decomposition A = U Σ V^T.

  • The matrices U and V are not singular
  • The matrix Σ can have zero diagonal entries
  • ‖U‖₂ = 1
  • The SVD exists when the matrix A is singular
  • The algorithm to evaluate the SVD will fail when taking the square root of a negative eigenvalue
SLIDE 10

Singular values cannot be negative, since A^T A is a positive semi-definite matrix (for real matrices A).

  • A matrix B is positive definite if y^T B y > 0 for all y ≠ 0
  • A matrix B is positive semi-definite if y^T B y ≥ 0 for all y ≠ 0
  • What do we know about the matrix A^T A?

y^T (A^T A) y = (A y)^T (A y) = ‖A y‖₂² ≥ 0

  • Hence we know that A^T A is a positive semi-definite matrix
  • A positive semi-definite matrix has non-negative eigenvalues:

B y = λ y ⟹ y^T B y = y^T λ y = λ ‖y‖₂² ≥ 0 ⟹ λ ≥ 0

Singular values are always non-negative.

SLIDE 11

Cost of SVD

The cost of an SVD is proportional to m n² + n³, with a constant of proportionality ranging from 4 to 10 (or more) depending on the algorithm.

C_SVD = α (m n² + n³) = O(n³)    C_matmat = n³ = O(n³)    C_LU = 2n³/3 = O(n³)

SLIDE 12

SVD summary:

  • The SVD is a factorization of an m×n matrix A into A = U Σ V^T, where U is an m×m orthogonal matrix, V is an n×n orthogonal matrix, and Σ is an m×n diagonal matrix.
  • In reduced form: A = U_R Σ_R V_R^T, where U_R is an m×k matrix, Σ_R is a k×k matrix, V_R is an n×k matrix, and k = min(m, n).
  • The columns of V are the eigenvectors of the matrix A^T A, denoted the right singular vectors.
  • The columns of U are the eigenvectors of the matrix A A^T, denoted the left singular vectors.
  • The diagonal entries of Σ² are the eigenvalues of A^T A; σ_i = √λ_i are called the singular values.
  • The singular values are always non-negative (since A^T A is a positive semi-definite matrix, its eigenvalues are always λ ≥ 0).

SLIDE 13

Singular Value Decomposition (applications)

SLIDE 14

1) Determining the rank of a matrix

Suppose A is an m×n rectangular matrix where m > n:

A = U Σ V^T = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ⋯ + σ_n u_n v_n^T = ∑_{i=1}^{n} σ_i u_i v_i^T

A_1 = σ_1 u_1 v_1^T. What is rank(A_1)?

A) 1    B) n    C) depends on the matrix    D) NOTA

In general, rank(A_k) = k.

SLIDE 15

Rank of a matrix

For a general rectangular matrix A with dimensions m×n, the reduced SVD is:

A = ∑_{i=1}^{k} σ_i u_i v_i^T

A = U_R Σ_R V_R^T
(m×n) = (m×k)(k×k)(k×n),  k = min(m, n)

If σ_j ≠ 0 ∀j, then rank(A) = k (full-rank matrix). In general, rank(A) = r, where r is the number of non-zero singular values σ_j; if r < k the matrix is rank deficient.

SLIDE 16
Rank of a matrix

  • The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ.
  • Rounding errors may lead to small but non-zero singular values in a rank-deficient matrix; hence the rank of a matrix determined by the number of non-zero singular values is sometimes called the “effective rank”.
  • The right singular vectors (columns of V) corresponding to vanishing singular values span the null space of A.
  • The left singular vectors (columns of U) corresponding to the non-zero singular values of A span the range of A.

SLIDE 17

2) Pseudo-inverse

  • Problem: if A is rank deficient, Σ is not invertible
  • How to fix it: define the pseudo-inverse
  • Pseudo-inverse of a diagonal matrix:

(Σ⁺)_ii = 1/σ_i, if σ_i ≠ 0
(Σ⁺)_ii = 0, if σ_i = 0

  • Pseudo-inverse of a matrix A:

A⁺ = V Σ⁺ U^T
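The construction above can be sketched as follows (the tolerance tol is an assumed cutoff for deciding which singular values count as zero; NumPy's np.linalg.pinv does the same with an rcond parameter):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    # Sketch of A+ = V Sigma+ U^T; tol is an assumed cutoff deciding
    # which singular values count as zero.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_plus = np.zeros_like(s)
    nz = s > tol
    s_plus[nz] = 1.0 / s[nz]        # invert only the non-zero sigma_i
    return Vt.T @ np.diag(s_plus) @ U.T

# Rank-deficient 3x2 example (second column = 2x the first).
A = np.array([[1., 2.], [2., 4.], [0., 0.]])
assert np.allclose(pinv_via_svd(A), np.linalg.pinv(A))
```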

SLIDE 18

3) Matrix norms

The Euclidean norm of an orthogonal matrix is equal to 1:

‖U‖₂ = max_{‖y‖₂=1} ‖U y‖₂ = max_{‖y‖₂=1} √((U y)^T (U y)) = max_{‖y‖₂=1} √(y^T y) = max_{‖y‖₂=1} ‖y‖₂ = 1

The Euclidean norm of a matrix is given by the largest singular value:

‖A‖₂ = max_{‖y‖₂=1} ‖A y‖₂ = max_{‖y‖₂=1} ‖U Σ V^T y‖₂ = max_{‖y‖₂=1} ‖Σ V^T y‖₂
     = max_{‖V^T y‖₂=1} ‖Σ V^T y‖₂ = max_{‖z‖₂=1} ‖Σ z‖₂ = max(σ_i)

where we used the fact that ‖U‖₂ = 1, ‖V‖₂ = 1 and Σ is diagonal. Hence

‖A‖₂ = max(σ_i) = σ_max

where σ_max is the largest singular value.

SLIDE 19

4) Norm for the inverse of a matrix

The Euclidean norm of the inverse of a square matrix is given by:

(assume here A is full rank, so that A^{-1} exists)

‖A^{-1}‖₂ = max_{‖y‖₂=1} ‖(U Σ V^T)^{-1} y‖₂ = max_{‖y‖₂=1} ‖V Σ^{-1} U^T y‖₂

Since ‖U‖₂ = 1, ‖V‖₂ = 1 and Σ is diagonal,

‖A^{-1}‖₂ = 1/σ_min

where σ_min is the smallest singular value.

SLIDE 20

5) Norm of the pseudo-inverse matrix

The norm of the pseudo-inverse of an m×n matrix is:

‖A⁺‖₂ = 1/σ_r

where σ_r is the smallest non-zero singular value. This is valid for any matrix, regardless of the shape or rank.

Note that for a full-rank square matrix, ‖A⁺‖₂ is the same as ‖A^{-1}‖₂.

Zero matrix: if A is a zero matrix, then A⁺ is also the zero matrix, and ‖A⁺‖₂ = 0.

SLIDE 21

6) Condition number of a matrix

The condition number of a matrix is given by

cond₂(A) = ‖A‖₂ ‖A⁺‖₂

If the matrix is full rank, rank(A) = min(m, n):

cond₂(A) = σ_max / σ_min

where σ_max is the largest singular value and σ_min is the smallest singular value.

If the matrix is rank deficient, rank(A) < min(m, n): cond₂(A) = ∞.
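A quick check of cond₂(A) = σ_max/σ_min for a full-rank matrix (the 2×2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # arbitrary full-rank example
s = np.linalg.svd(A, compute_uv=False)

# cond_2(A) = sigma_max / sigma_min for a full-rank matrix.
assert np.isclose(np.linalg.cond(A, 2), s[0] / s[-1])
```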

SLIDE 22

7) Low-Rank Approximation

Another way to write the SVD (assuming for now ๐‘› > ๐‘œ for simplicity)

A = U Σ V^T = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ⋯ + σ_n u_n v_n^T

The SVD writes the matrix A as a sum of outer products (of left and right singular vectors).

SLIDE 23

๐œ" โ‰ฅ ๐œ) โ‰ฅ ๐œ* โ€ฆ โ‰ฅ 0

๐‘ฉH = ๐œ&๐’—&๐ฐ&

( + ๐œ)๐’—)๐ฐ) ( + โ‹ฏ + ๐œH๐’—H๐ฐH (

Note that ๐‘ ๐‘๐‘œ๐‘™ ๐‘ฉ = ๐‘œ and ๐‘ ๐‘๐‘œ๐‘™(๐‘ฉH) = ๐‘™ and the norm of the difference between the matrix and its approximation is The best rank-๐’ approximation for a ๐‘›ร—๐‘œ matrix ๐‘ฉ, (where ๐‘™ โ‰ค ๐‘›๐‘—๐‘œ(๐‘›, ๐‘œ)) is the one that minimizes the following problem: When using the induced 2-norm, the best rank-๐’ approximation is given by:

7) Low-Rank Approximation (cont.)

๐‘ฉ โˆ’ ๐‘ฉ&

* =

๐œ&+#๐’—&+#๐ฐ&+#

%

+ ๐œ&+*๐’—&+*๐ฐ&+*

%

+ โ‹ฏ + ๐œ$๐’—$๐ฐ$

% * = ๐œ&+#
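The rank-k truncation and the error formula ‖A − A_k‖₂ = σ_{k+1} can be sketched as follows (the matrix shape and the choice k = 2 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2  # illustrative choice of target rank
# Best rank-k approximation: keep the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

assert np.linalg.matrix_rank(A_k) == k
# ||A - A_k||_2 = sigma_{k+1} (index k with 0-based indexing)
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```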

SLIDE 24

Example: Image compression

๐Ÿ”๐Ÿ๐Ÿ ๐Ÿ”๐Ÿ๐Ÿ ๐Ÿ”๐Ÿ๐Ÿ ๐Ÿ๐Ÿ“๐Ÿ๐Ÿ– ๐Ÿ๐Ÿ“๐Ÿ๐Ÿ– ๐Ÿ๐Ÿ“๐Ÿ๐Ÿ– ๐Ÿ”๐Ÿ๐Ÿ ๐Ÿ๐Ÿ“๐Ÿ๐Ÿ–

SLIDE 25

Example: Image compression

๐Ÿ”๐Ÿ๐Ÿ ๐Ÿ๐Ÿ“๐Ÿ๐Ÿ– Image using rank-50 approximation

SLIDE 26

8) Using SVD to solve a square system of linear equations

If A is an n×n square matrix and we want to solve A x = b, we can use the SVD of A:

U Σ V^T x = b  ⟹  Σ V^T x = U^T b

Solve: Σ y = U^T b (diagonal matrix, easy to solve!)
Evaluate: x = V y

Cost of solve: O(n²). Cost of decomposition: O(n³). (Recall that SVD and LU have the same asymptotic cost; however, the number of operations, i.e. the constant factor in front of n³, is larger for the SVD than for LU.)