

SLIDE 1

Gram-Schmidt algorithm

Aim lecture: We use the theory of last lecture to give an algorithm for finding orthonormal bases.

As usual in this lecture F = R or C & V is an F-space equipped with an inner product (·|·).

Gram-Schmidt Algorithm

Let \(\{v_1, \ldots, v_d\}\) be a basis for \(V\) & \(V_{\le r} = \operatorname{Span}(v_1, \ldots, v_r)\).

1. The following inductively defines an orthogonal basis \(\{w_1, \ldots, w_d\}\) for \(V\):
   \[ w_1 = v_1 \quad \& \quad w_{r+1} = v_{r+1} - \operatorname{proj}_{V_{\le r}} v_{r+1} = v_{r+1} - \sum_{i=1}^{r} \operatorname{proj}_{F w_i} v_{r+1}. \]
2. Furthermore, \(\operatorname{Span}(w_1, \ldots, w_r) = V_{\le r}\).

  • Proof. This is an easy induction. Assuming the statements are true for the inner product space \(V_{\le r}\), just note that \(w_{r+1}\) lies in \(V_{\le r}^{\perp}\), so it spans the orthogonal complement of \(V_{\le r}\) in \(V_{\le r+1}\).
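The inductive definition above translates almost line-for-line into code. The sketch below (NumPy, real vectors only; an illustration, not part of the original slides) subtracts \(\operatorname{proj}_{F w_i} v\) for each earlier \(w_i\):

```python
import numpy as np

def gram_schmidt(vs):
    """Orthogonalise linearly independent vectors v_1, ..., v_d.

    Implements w_1 = v_1 and
    w_{r+1} = v_{r+1} - sum_{i=1}^{r} proj_{F w_i} v_{r+1},
    where proj_{F w} v = ((w|v) / ||w||^2) w.
    """
    ws = []
    for v in vs:
        w = np.asarray(v, dtype=float).copy()
        for u in ws:
            w -= (u @ w) / (u @ u) * u  # subtract proj_{F u} v
        ws.append(w)
    return ws
```

By construction the output spans \(V_{\le r}\) after \(r\) steps, matching claim 2 of the algorithm.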

Daniel Chan (UNSW) Lecture 34: Gram-Schmidt orthogonalisation Semester 2 2013 1 / 8

SLIDE 2

Polynomial example

E.g. Find an orthonormal basis for \(V = \mathbb{R}[x]_{\le 1}\) equipped with the inner product \((f|g) = \int_0^1 f(t)g(t)\,dt\).
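This example can be checked exactly with rational arithmetic. The sketch below (a minimal illustration; the helper `ip` is hypothetical, encoding the moments \(\int_0^1 t^{i+j}\,dt = \tfrac{1}{i+j+1}\) on coefficient vectors) runs Gram-Schmidt on the monomial basis \(\{1, x\}\):

```python
from fractions import Fraction

def ip(f, g):
    """(f|g) = ∫₀¹ f(t)g(t) dt for polynomials given as coefficient lists."""
    return sum(fi * gj * Fraction(1, i + j + 1)
               for i, fi in enumerate(f) for j, gj in enumerate(g))

# Monomial basis of R[x]_{<=1}: v1 = 1, v2 = x
v1 = [Fraction(1), Fraction(0)]
v2 = [Fraction(0), Fraction(1)]

w1 = v1
c = ip(w1, v2) / ip(w1, w1)                 # projection coefficient (w1|v2)/||w1||^2 = 1/2
w2 = [v2[k] - c * w1[k] for k in range(2)]  # w2 = x - 1/2
```

This gives the orthogonal basis \(\{1,\; x - \tfrac12\}\); since \(\|x - \tfrac12\|^2 = \tfrac{1}{12}\), normalising yields the orthonormal basis \(\{1,\; \sqrt{12}\,(x - \tfrac12)\}\).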

SLIDE 3

Example in R4

E.g. Find an orthogonal basis for the image of the following matrix A = (1 1 2 2 1 2 1).

SLIDE 4

Example cont’d

SLIDE 5

Projection formula revisited

Let \(\{w_1, \ldots, w_d\} \subset V\) be an orthogonal set spanning \(W \le V\). Recall for any \(v \in V\) we have
\[ \operatorname{proj}_W v = \frac{(w_1|v)}{\|w_1\|^2}\, w_1 + \cdots + \frac{(w_d|v)}{\|w_d\|^2}\, w_d. \]

Prop

Consider the orthogonal co-ord system \(Q = (w_1 \ldots w_d) : F^d \to W\). Let
\[ P = \left( \frac{(w_1|\cdot)}{\|w_1\|^2}, \ldots, \frac{(w_d|\cdot)}{\|w_d\|^2} \right)^T : V \to F^d. \]

1. Then \(\operatorname{proj}_W = Q \circ P\).
2. If \(V = F^m\) so \(Q \in M_{md}(F)\), then we may re-write \(P = D^{-1}Q^*\) where \(D\) is the diagonal matrix \(D = (\|w_1\|^2) \oplus \cdots \oplus (\|w_d\|^2)\). Hence \(\operatorname{proj}_W = QD^{-1}Q^*\).

  • Proof. 1) is just a restatement of the projection formula of lecture 33 above. For 2), just note that the linear map \((w_i|\cdot)\) is given by left multiplication by \(w_i^*\), so calculating \(D^{-1}Q^*\) gives the result.
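Part 2 of the Prop is directly computable. A real-valued NumPy sketch (for complex entries, replace the transposes with conjugate transposes):

```python
import numpy as np

def projection_matrix(Q):
    """proj_W = Q D^{-1} Q^T for Q = (w1 ... wd) with orthogonal columns,
    where D is the diagonal matrix of squared column norms."""
    Q = np.asarray(Q, dtype=float)
    D_inv = np.diag(1.0 / np.sum(Q * Q, axis=0))  # D^{-1}: entries 1/||w_i||^2
    return Q @ D_inv @ Q.T
```

Any matrix produced this way is symmetric and idempotent, as a projection must be.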

SLIDE 6

Example of a projection matrix

E.g. Let \(W = \operatorname{Span}((2, 1, 2)^T, (1, 2, -2)^T)\). Find the matrix representing \(\operatorname{proj}_W : \mathbb{R}^3 \to \mathbb{R}^3\). Rem Note the formula in prop 2) above generalises the formula \(\operatorname{proj}_u = uu^T\) for \(u \in \mathbb{R}^n\) a unit vector.
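For this particular \(W\) the two spanning vectors are already orthogonal with \(\|w_1\|^2 = \|w_2\|^2 = 9\), so \(D = 9I\) and \(\operatorname{proj}_W = \tfrac{1}{9}(w_1 w_1^T + w_2 w_2^T)\). A quick numeric check of that computation (an illustration, not the slide's worked solution):

```python
import numpy as np

w1 = np.array([2.0, 1.0, 2.0])
w2 = np.array([1.0, 2.0, -2.0])
assert w1 @ w2 == 0  # the spanning vectors happen to be orthogonal already

Q = np.column_stack([w1, w2])
D_inv = np.diag([1 / (w1 @ w1), 1 / (w2 @ w2)])  # both squared norms equal 9
P = Q @ D_inv @ Q.T  # = (1/9) * [[5, 4, 2], [4, 5, -2], [2, -2, 8]]
```

The result is symmetric and idempotent, and fixes both \(w_1\) and \(w_2\).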

SLIDE 7

QR-factorisation

Let \(A = (v_1 \ldots v_n) \in M_{mn}(\mathbb{C})\) be an \(m \times n\) matrix with lin indep columns \(v_1, \ldots, v_n\). Let \(w_1, \ldots, w_n\) be an orthogonal basis of \(\operatorname{im} A\) such that \(\operatorname{Span}(v_1, \ldots, v_i) = \operatorname{Span}(w_1, \ldots, w_i)\) for \(i = 1, \ldots, n\), e.g. the one that Gram-Schmidt gives, or any basis derived from it by scaling the vectors.

Prop-Defn [QR-factorisation]

If \(Q = (w_1 \ldots w_n)\) & \(D = (\|w_1\|^2) \oplus \cdots \oplus (\|w_n\|^2)\), then \(A = QR\) where \(R = D^{-1}Q^*A\) is invertible and upper triangular. When \(Q\) has orthonormal columns, we call this a QR-factorisation of \(A\).

  • Proof. Our new projection formula shows \(\operatorname{proj}_{\operatorname{im} A} = QD^{-1}Q^*\), so
\[ A = (v_1 \ldots v_n) = (QD^{-1}Q^*v_1 \ldots QD^{-1}Q^*v_n) = QD^{-1}Q^*A, \]
& it suffices to show that \(R = D^{-1}Q^*A\) is upper triangular with all diagonal entries non-zero. If \((\beta_1, \ldots, \beta_n)^T\) is the \(i\)-th column of \(R\), then comparing the \(i\)-th columns of \(A\) & \(QR\) gives \(v_i = \beta_1 w_1 + \cdots + \beta_n w_n\). Our condition on spans ensures \(0 = \beta_{i+1} = \beta_{i+2} = \cdots = \beta_n\) whilst \(\beta_i \ne 0\).
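The Prop-Defn can be sketched as code: orthonormalise the columns of \(A\) (so \(D = I\)), then read off \(R = Q^T A\). A minimal real-valued NumPy version (an illustration; production code would use a numerically stabler variant such as Householder reflections):

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR-factorise A (linearly independent columns) via Gram-Schmidt.

    Each column has its projection onto the earlier orthonormal columns
    subtracted, then is normalised; since the columns are normalised,
    D = I and R = Q^T A.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.empty((m, n))
    for i in range(n):
        w = A[:, i] - Q[:, :i] @ (Q[:, :i].T @ A[:, i])  # remove proj onto w_1..w_{i-1}
        Q[:, i] = w / np.linalg.norm(w)
    R = Q.T @ A  # upper triangular because Span(v_1..v_i) = Span(w_1..w_i)
    return Q, R
```

The span condition from the slide is exactly what makes the sub-diagonal entries of \(R\) vanish.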

SLIDE 8

Example

Rem An easy way to remember the formula is to note \(Q^*Q = D\), so \(Q^*A = Q^*QR = DR\). E.g. QR-factorise the matrix A = (1 1 1 1 1).
