SLIDE 1

Linear algebra and differential equations (Math 54): Lecture 18

Vivek Shende, April 3, 2019

SLIDES 2-8

Hello and welcome to class!

Last time

We discussed the “least squares” problem, and then considered notions of distance and orthogonality in a general setting, as captured by inner products.

This time

We meet the spectral theorem and the singular value decomposition. This is the last class on linear algebra. Next week, we begin differential equations!

SLIDE 9

Review: Inner products

Definition

An inner product on a vector space V is a map ⟨·, ·⟩ : V × V → R which is distributive, commutative, and positive.

SLIDE 10

Review: Inner product properties

Distributivity (aka “bilinearity”): ⟨av + bv′, cw + dw′⟩ = ac⟨v, w⟩ + bc⟨v′, w⟩ + ad⟨v, w′⟩ + bd⟨v′, w′⟩

Commutativity: ⟨v, w⟩ = ⟨w, v⟩

Positivity: ⟨v, v⟩ ≥ 0, with equality only when v = 0

SLIDES 11-12

Review: Inner products on Rⁿ

Last time, we saw that distributivity implies that any inner product on Rⁿ is given by a matrix:

⟨v, w⟩ = vᵀAw,   where Ai,j = ⟨ei, ej⟩
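As a quick numerical illustration, here is a minimal NumPy sketch of an inner product on R² given by a symmetric matrix, computed as ⟨v, w⟩ = vᵀAw; the particular matrix A and vectors below are made-up examples.

```python
import numpy as np

# A symmetric matrix defining a candidate inner product <v, w> = v^T A w.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(v, w, A=A):
    """Return v^T A w."""
    return v @ A @ w

v = np.array([1.0, -1.0])
w = np.array([0.5, 2.0])

print(inner(v, w))   # v^T A w
print(inner(w, v))   # equals inner(v, w) because A is symmetric
print(inner(v, v))   # positive here, since this A has positive eigenvalues
```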

SLIDES 13-18

When does a matrix define an inner product?

For any matrix A, consider the formula ⟨v, w⟩ = vᵀAw. This always satisfies the distributivity axiom. It satisfies commutativity iff A is symmetric, i.e. Aᵀ = A. When does it satisfy positivity? (Last time: when A is 2x2, the form is positive iff the eigenvalues are positive.)

SLIDES 19-28

Eigenvectors of symmetric matrices

Eigenvectors with different eigenvalues are linearly independent. For a symmetric matrix, eigenvectors with different eigenvalues are in fact orthogonal. Indeed, since Mᵀ = M, for any vector v we have vᵀM = (Mv)ᵀ. If vi, vj are eigenvectors with eigenvalues λi, λj:

viᵀMvj = λj viᵀvj

viᵀMvj = (Mvi)ᵀvj = λi viᵀvj

So if λi ≠ λj, then viᵀvj = 0.
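A quick numerical check of this orthogonality claim, using a randomly generated symmetric matrix (an illustrative test setup, not from the slides); np.linalg.eigh returns an orthonormal set of eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B + B.T                            # a random symmetric matrix

eigvals, eigvecs = np.linalg.eigh(M)   # columns of eigvecs are eigenvectors

# Eigenvectors with different eigenvalues are (numerically) orthogonal;
# eigh in fact returns an orthonormal set, so eigvecs^T eigvecs is the identity.
print(np.round(eigvecs.T @ eigvecs, 10))
```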

SLIDES 29-34

The spectral theorem

Theorem

A symmetric matrix M has all real eigenvalues and an orthonormal basis of eigenvectors.

Proof. Given any real basis of eigenvectors, those of different eigenvalues are already orthogonal; and if some eigenspace has dimension > 1, we may use Gram-Schmidt to replace our basis with an orthonormal one. So it is enough to find a basis of real eigenvectors. Our first task is to produce a single such vector.
SLIDES 35-41

The spectral theorem

Certainly M has a complex eigenvector v with complex eigenvalue λ. Let † denote the operation of taking the transpose and complex conjugate. Then

λ v†v = v†Mv = (Mv)†v = λ̄ v†v

Since v†v is the sum of the squares of the lengths of the entries of v, it is positive, hence nonzero, hence λ = λ̄ and λ is a real number.

SLIDES 42-48

The spectral theorem

Now we know there is a real unit eigenvector v with real eigenvalue λ. Consider the orthogonal complement v⊥. This is preserved by M: if wᵀv = 0, then

(Mw)ᵀv = wᵀMv = λ wᵀv = 0

Pick an orthonormal basis w2, . . . , wn of v⊥. In the basis v, w2, w3, . . . , wn, the matrix M takes the shape

M =  ⎡ vᵀMv    0         · · ·   0        ⎤
     ⎢ 0       w2ᵀMw2    · · ·   w2ᵀMwn   ⎥
     ⎢ ⋮       ⋮          ⋱      ⋮        ⎥
     ⎣ 0       wnᵀMw2    · · ·   wnᵀMwn   ⎦

(the first row and column vanish off the corner, since Mv = λv and M preserves v⊥). The lower-right block is a smaller symmetric matrix; we are done by induction.

SLIDE 49

The spectral theorem

Theorem

A symmetric matrix M has all real eigenvalues and an orthonormal basis of eigenvectors.

SLIDES 50-55

The spectral theorem: reformulation

Recall that a square matrix O is said to be orthogonal if Oᵀ = O⁻¹; equivalently, if the columns are orthonormal; equivalently, if the rows are orthonormal.

Theorem

If M is symmetric, there is an orthogonal matrix O and a real diagonal matrix D with

M = ODO⁻¹ = ODOᵀ

Proof: the columns of O are orthonormal eigenvectors of M.
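A minimal NumPy sketch of this reformulation on a random symmetric matrix (illustrative data): np.linalg.eigh supplies the orthogonal O and the diagonal entries of D, and we verify M = ODOᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
M = B + B.T                                 # symmetric

d, O = np.linalg.eigh(M)                    # real eigenvalues d, orthogonal O
D = np.diag(d)

print(np.allclose(O.T, np.linalg.inv(O)))   # O^T = O^{-1}
print(np.allclose(M, O @ D @ O.T))          # M = O D O^T
```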

SLIDES 56-58

The spectral theorem: reformulation

Theorem

If M is symmetric, there is an orthonormal basis vi and real numbers λi such that

M = λ1 v1v1ᵀ + · · · + λn vnvnᵀ

Proof: the vi are the eigenvectors of M and the λi are their eigenvalues. Note that by orthonormality, (viviᵀ)² = viviᵀ, and (viviᵀ)(vjvjᵀ) = 0 when i ≠ j.
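A short sketch of this rank-one version on the same kind of random symmetric matrix (illustrative data), rebuilding M as Σ λi viviᵀ.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
M = B + B.T

lam, V = np.linalg.eigh(M)   # columns of V are the v_i, lam holds the lambda_i

# Rebuild M as sum_i lambda_i * v_i v_i^T.
M_rebuilt = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
print(np.allclose(M, M_rebuilt))
```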

SLIDES 59-66

What does positive mean?

Suppose M is a symmetric matrix. When is it true that

wᵀMw ≥ 0, with equality only when w = 0?

Claim: if and only if M has only positive eigenvalues.

Take an orthonormal basis vi of eigenvectors for M, with eigenvalues λi. Note viᵀMvi = λi viᵀvi = λi. So if any of the λi is ≤ 0, then certainly M is not positive. On the other hand, expand any w = Σ wivi. Then

wᵀMw = (Σ wivi)ᵀ M (Σ wivi) = Σ wi² λi

This is certainly positive if the λi are positive and w ≠ 0.
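A small numerical check of the claim, on an example matrix constructed to have positive eigenvalues (the construction B Bᵀ + I is an illustrative choice): all eigenvalues are positive and wᵀMw is positive for random nonzero w.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
M = B @ B.T + np.eye(4)        # symmetric with positive eigenvalues

print(np.linalg.eigvalsh(M))   # all positive

# w^T M w should then be positive for every nonzero w; spot-check a few.
for _ in range(5):
    w = rng.standard_normal(4)
    print(w @ M @ w > 0)
```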

SLIDES 67-74

Square-roots of positive matrices

Positive numbers have square-roots. So do positive symmetric matrices: if A = ODO⁻¹ with D having all non-negative entries, then we can write √D for the diagonal matrix whose entries are the square-roots of D's entries, and

√A := O √D O⁻¹

Note that √A is again symmetric and positive.
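A minimal sketch of this construction, assuming the eigendecomposition route described above (the example matrix is illustrative): build √A = O √D O⁻¹ and check that it squares back to A.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
A = B @ B.T                             # symmetric, non-negative eigenvalues

d, O = np.linalg.eigh(A)
d = np.clip(d, 0.0, None)               # guard against tiny negative round-off
sqrtA = O @ np.diag(np.sqrt(d)) @ O.T   # O sqrt(D) O^{-1}, with O^{-1} = O^T

print(np.allclose(sqrtA @ sqrtA, A))    # (sqrt A)^2 = A
print(np.allclose(sqrtA, sqrtA.T))      # sqrt A is symmetric
```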

SLIDES 75-80

Stretching and shrinking

How big or small can |Aw| / |w| be?

If A is symmetric, then we take a basis of orthonormal eigenvectors vi with eigenvalues λi and write:

w = Σ wivi

Aw = Σ wi λi vi

From this it is not hard to see

min(|λi|) ≤ |Aw| / |w| = √( Σ wi² λi² / Σ wi² ) ≤ max(|λi|)
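A quick numerical check of these bounds for a random symmetric matrix (illustrative data): the stretch factor |Aw|/|w| always lies between the smallest and largest |λi|.

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
A = B + B.T                             # symmetric

lam = np.linalg.eigvalsh(A)
lo, hi = np.abs(lam).min(), np.abs(lam).max()

for _ in range(5):
    w = rng.standard_normal(4)
    ratio = np.linalg.norm(A @ w) / np.linalg.norm(w)
    print(lo - 1e-12 <= ratio <= hi + 1e-12, ratio)
```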

SLIDES 81-89

Stretching and shrinking

What about matrices A in general? Maybe not even square?

|Av|² = (Av)ᵀ(Av) = vᵀAᵀAv

Note the matrix AᵀA is (square and) symmetric. It's also non-negative: vᵀAᵀAv = |Av|² ≥ 0. So it has a symmetric non-negative square-root B = √(AᵀA). So we have reduced the problem to the symmetric case, since

|Av|² = vᵀAᵀAv = vᵀB²v = vᵀBᵀBv = |Bv|²
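A small sketch of this reduction on a made-up non-square matrix: form B = √(AᵀA) via the eigendecomposition as above and check that |Av| = |Bv|.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))         # not square

d, O = np.linalg.eigh(A.T @ A)          # A^T A is symmetric and non-negative
B = O @ np.diag(np.sqrt(np.clip(d, 0.0, None))) @ O.T   # B = sqrt(A^T A)

v = rng.standard_normal(3)
print(np.isclose(np.linalg.norm(A @ v), np.linalg.norm(B @ v)))
```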

SLIDES 90-96

Singular values

Let vi be an orthonormal basis of eigenvectors for AᵀA, with eigenvalues λi. We order them so that λ1 ≥ λ2 ≥ · · · . As we have seen, all λi ≥ 0. We write σi = √λi. The σi are called the singular values of A.

Note:

(Avi) · (Avj) = vjᵀAᵀAvi = 0   for i ≠ j

(Avi) · (Avi) = viᵀAᵀAvi = σi² viᵀvi = σi²
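A quick cross-check (illustrative random matrix): NumPy's np.linalg.svd returns exactly these σi, and they agree with the square roots of the eigenvalues of AᵀA.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))

sigma_from_svd = np.linalg.svd(A, compute_uv=False)            # singular values of A
sigma_from_ata = np.sqrt(np.sort(np.linalg.eigvalsh(A.T @ A))[::-1])

print(np.allclose(sigma_from_svd, sigma_from_ata))
```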

SLIDES 97-103

Singular value decomposition

So the vi are an orthonormal basis whose images are also orthogonal. We rescale the nonzero images to the orthonormal vectors

ui := (1/σi) Avi   (for σi ≠ 0)

We extend the ui to an orthonormal basis. The matrices U, V whose columns are the basis vectors ui, vi are orthogonal: Uᵀ = U⁻¹ and Vᵀ = V⁻¹. The matrix Σ := UᵀAV is diagonal, with entries σi. Equivalently,

A = UΣVᵀ