Linear Algebra
Lecture 11
Prof. Inder K. Rana
Room 112 B, Department of Mathematics, IIT-Bombay, Mumbai-400076 (India)
Email: ikr@math.iitb.ac.in
Recall
We showed that for the matrix A = [−5 −7; 2 4], the vectors
C1 = (−1, 1)ᵗ and C2 = (−7, 2)ᵗ
satisfy AC1 = 2C1 and AC2 = −3C2. We defined
P := [C1 C2] = [−1 −7; 1 2],
checked that P is invertible, and found that P−1AP = diag(2, −3).
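The recalled computation can be checked numerically; here is a quick sketch using NumPy (not part of the original lecture):

```python
import numpy as np

# The matrix from the lecture and P = [C1 C2] built from its eigenvectors.
A = np.array([[-5.0, -7.0],
              [ 2.0,  4.0]])
P = np.array([[-1.0, -7.0],
              [ 1.0,  2.0]])

D = np.linalg.inv(P) @ A @ P  # should equal diag(2, -3)
print(np.round(D))
```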
This prompted us to ask the following:

Question: Given a matrix A, when does there exist an invertible matrix P such that P−1AP is a diagonal matrix, and how do we find such a P?

Before answering this, we make a definition.

Definition. Let A be an n × n matrix with entries from 𝔽 = ℝ or ℂ. A is said to be diagonalizable if there exists an invertible matrix P with entries from 𝔽 such that P−1AP is a diagonal matrix.
The answer is given by the following:

Theorem. Let A be an n × n matrix. A is diagonalizable if and only if there exist scalars λ1, λ2, …, λn ∈ ℝ and vectors C1, C2, …, Cn ∈ ℝⁿ such that the following hold:
(i) ACi = λiCi for all 1 ≤ i ≤ n; thus A has n eigenvalues (counted with multiplicity).
(ii) The set {C1, …, Cn} is linearly independent, and hence is a basis of ℝⁿ.
Proof: Since A is diagonalizable, there exists an invertible matrix P such that P−1AP = D, where D is a diagonal matrix. Let C1, …, Cn be the columns of P, so that P = [C1 C2 ⋯ Cn]. Since P is invertible, none of the column vectors Ci is 0. In fact, P, being an invertible matrix, has rank n, and hence the set {C1, …, Cn} is linearly independent. Let D = diag(λ1, …, λn). Then P−1AP = D implies that AP = PD, i.e.,
A[C1 ⋯ Cn] = [C1 ⋯ Cn]D, i.e., [AC1 AC2 ⋯ ACn] = [λ1C1 λ2C2 ⋯ λnCn].
Thus ACi = λiCi for all 1 ≤ i ≤ n. This proves one direction.

Conversely, let X1, X2, …, Xn be elements of 𝔽ⁿ such that {X1, …, Xn} is a linearly independent set and, for some λ1, …, λn ∈ 𝔽, AXi = λiXi for 1 ≤ i ≤ n. Define the matrix P := [X1 X2 ⋯ Xn]. Then rank(P) = n, and hence P is invertible. Further,
AP = A[X1 ⋯ Xn] = [AX1 ⋯ AXn] = [λ1X1 ⋯ λnXn] = PD,
where D := diag(λ1, …, λn). Hence P−1AP = D, i.e., A is diagonalizable.
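The column-by-column identity used in the proof — AP = PD exactly when ACi = λiCi for every i — can be observed numerically. A sketch assuming NumPy, reusing the 2 × 2 matrix recalled at the start of the lecture:

```python
import numpy as np

A = np.array([[-5.0, -7.0],
              [ 2.0,  4.0]])
P = np.array([[-1.0, -7.0],   # columns are the eigenvectors C1, C2
              [ 1.0,  2.0]])
D = np.diag([2.0, -3.0])      # corresponding eigenvalues

# Column i of A @ P is A C_i, and column i of P @ D is lambda_i C_i,
# so A C_i = lambda_i C_i for all i is equivalent to A P = P D.
print(np.allclose(A @ P, P @ D))  # True
```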
Problem: Given an n × n matrix A, when and how do we find a linearly independent set of eigenvectors v1, v2, …, vn for A? Clearly, such a set of n eigenvectors will form a basis. We start with a theorem about linear independence of eigenvectors.
Theorem. Let λ1, λ2, …, λr be distinct eigenvalues of a square matrix A, and let v1, v2, …, vr be a corresponding choice of eigenvectors. Then {v1, v2, …, vr} is a linearly independent set.

Proof: Suppose not. Let ℓ ≤ r be the smallest integer such that {v1, …, vℓ} is linearly dependent. Then {v1, …, vℓ−1} is linearly independent, and hence there exist scalars α1, …, αℓ−1, not all zero, such that
vℓ = α1v1 + ⋯ + αℓ−1vℓ−1.   (1)
Then
λℓvℓ = Avℓ = A(α1v1 + ⋯ + αℓ−1vℓ−1) = α1Av1 + ⋯ + αℓ−1Avℓ−1 = α1λ1v1 + ⋯ + αℓ−1λℓ−1vℓ−1.   (2)
Also, from (1),
λℓvℓ = λℓα1v1 + ⋯ + λℓαℓ−1vℓ−1.   (3)
Thus, from (2) and (3), we have
0 = α1(λℓ − λ1)v1 + ⋯ + αℓ−1(λℓ − λℓ−1)vℓ−1.
The linear independence of {v1, …, vℓ−1} implies that αi(λℓ − λi) = 0 for all i. Since αi ≠ 0 for some i, we get λℓ − λi = 0 for that i, which is not possible as λ1, …, λr are all distinct. Hence {v1, …, vr} is linearly independent.
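The theorem can be illustrated numerically: for a matrix with distinct eigenvalues, the matrix whose columns are eigenvectors has full rank. A sketch assuming NumPy; the matrix below is my own illustrative choice, not from the lecture:

```python
import numpy as np

# An illustrative matrix with distinct eigenvalues 2, 3, 5 (upper triangular,
# so the eigenvalues can be read off the diagonal).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

print(np.sort(eigvals))                # eigenvalues 2, 3, 5
print(np.linalg.matrix_rank(eigvecs))  # rank 3: the eigenvectors are independent
```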
Theorem. If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

If the characteristic polynomial has a multiple root, then there can be a problem.

Example: Let A = [1 1; 0 1]. Let us find its eigenvalues and eigenvectors.

Solution: The characteristic polynomial is D(t) = (t − 1)². Hence there is only one eigenvalue, λ = 1, which is repeated. To find eigenvectors, we note that
A − I2 = [0 1; 0 0].
Its null space is N(A − I2) = ℝ(1, 0)ᵗ. Thus the space of eigenvectors is only 1-dimensional, and hence there does not exist a basis of ℝ² consisting of eigenvectors. Thus A is not diagonalizable.
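This failure is visible numerically as well (a sketch assuming NumPy): the eigenvector matrix that `np.linalg.eig` returns for A is singular, so it cannot serve as the P of the diagonalization theorem.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # both eigenvalues equal 1

# The two returned eigenvectors are (numerically) parallel, so the
# eigenvector matrix is singular: no invertible P of eigenvectors exists.
print(abs(np.linalg.det(eigvecs)) < 1e-6)  # True
```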
Henceforth, we denote N(A − λIn) by Eλ = Eλ(A) and call it the λ-eigenspace of A.

Definition (Algebraic multiplicity). Let λ be an eigenvalue of A. The multiplicity of λ as a root of the characteristic polynomial DA(t) is called the algebraic multiplicity of λ. We write this number as mλ = mλ(A).

Definition (Geometric multiplicity). Let λ be an eigenvalue of A. The dimension of the null space of A − λIn is called the geometric multiplicity of λ. We write this number as gλ = gλ(A).

Definition (Defect). Let λ be an eigenvalue of A. The difference mλ − gλ is called the defect of λ and is denoted δλ = δλ(A).
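For the 2 × 2 example above, these numbers can be computed directly. A sketch assuming NumPy, using the rank–nullity relation g = n − rank(A − λIn); the algebraic multiplicity is read off the characteristic polynomial (t − 1)²:

```python
import numpy as np

# The non-diagonalizable example: only eigenvalue is lambda = 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
m = 2  # algebraic multiplicity, read off from (t - 1)^2

n = A.shape[0]
g = n - np.linalg.matrix_rank(A - lam * np.eye(n))  # geometric multiplicity
defect = m - g

print(g, defect)  # 1 1
```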
Theorem. If the algebraic and geometric multiplicities agree for every eigenvalue λ of an n × n matrix A, then there exists a basis of ℝⁿ consisting of eigenvectors of A.

Proof: Let λ1, λ2, …, λk be the distinct eigenvalues of A, with multiplicities m1, m2, …, mk respectively. Let Bj = {vj1, vj2, …, vjmj} be a basis of Eλj, j = 1, 2, …, k. Then B = B1 ∪ ⋯ ∪ Bk is an A-eigenbasis of ℝⁿ. (Note that m1 + m2 + ⋯ + mk = deg DA(t) = n.)
Definition (Similarity). Let A and B be two n × n matrices. We say A is similar to B if there is an invertible matrix P such that B = P−1AP.

Theorem. If A and B are similar, then they have the same eigenvalues with the same multiplicities, both algebraic and geometric.

Theorem (Diagonalization). Let A be diagonalizable, i.e., let each eigenvalue of A be defect-free. Then A is similar to a diagonal matrix whose diagonal entries are the eigenvalues of A, each occurring as many times as its multiplicity.
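The first theorem can be observed numerically: conjugating by an invertible P leaves the eigenvalues unchanged. A sketch assuming NumPy; the random matrices are my own illustration, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # an arbitrary illustrative matrix
P = rng.standard_normal((4, 4))   # a random square matrix is almost surely invertible
B = np.linalg.inv(P) @ A @ P      # B is similar to A

# Sort the (possibly complex) eigenvalues so the two spectra can be compared.
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(eig_A, eig_B, atol=1e-8))
```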
Example: Let A = [1 2 −2; 2 1 −4; 1 −1 −2]. Find the eigenvalues and a basis of eigenvectors which diagonalizes A. Also write down a matrix X such that X−1AX is diagonal.

Solution: DA(λ) = −λ(λ² − 9) ⟹ λ = 0, ±3.

λ = −3: Row reduction of A + 3I = [4 2 −2; 2 4 −4; 1 −1 1] yields v1 = (0, 1, 1)ᵗ as an eigenvector.

λ = 0: Row operations on A produce v2 = (2, 0, 1)ᵗ as an eigenvector.

λ = 3: A − 3I gives rise to v3 = (1, 1, 0)ᵗ as an eigenvector.

This enables us to write X = [v1 v2 v3] = [0 2 1; 1 0 1; 1 1 0], with X−1 = (1/3)[−1 1 2; 1 −1 1; 1 2 −2]. Finally, X−1AX = diag(−3, 0, 3).
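The worked example can be double-checked numerically (a sketch assuming NumPy, with the matrices exactly as computed above):

```python
import numpy as np

A = np.array([[1.0,  2.0, -2.0],
              [2.0,  1.0, -4.0],
              [1.0, -1.0, -2.0]])
X = np.array([[0.0, 2.0, 1.0],   # columns are v1, v2, v3
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# X^{-1} A X should be diag(-3, 0, 3), one eigenvalue per eigenvector column.
print(np.round(np.linalg.inv(X) @ A @ X))
```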
Consider the matrix A = [3 0 0; −2 4 2; −2 1 5]. The characteristic polynomial of A is
det(A − λI) = (3 − λ)[(4 − λ)(5 − λ) − 2] = −(λ − 3)²(λ − 6).
Thus A has two distinct eigenvalues, λ = 3 and λ = 6: λ = 3 has algebraic multiplicity 2 and λ = 6 has algebraic multiplicity 1.
Let us find eigenvectors for the eigenvalue λ = 3. Note that
A − 3I = [0 0 0; −2 1 2; −2 1 2] ∼ [−2 1 2; 0 0 0; 0 0 0].
Thus A − 3I has rank 1, and hence
N(A − 3I) = E3(A) = {(x2/2 + x3, x2, x3)ᵗ : x2, x3 ∈ ℝ}.
Hence E3(A) has dimension 2, and a basis is obtained by selecting x2 = 1, x3 = 0 and x2 = 0, x3 = 1:
X1 = (1/2, 1, 0)ᵗ, X2 = (1, 0, 1)ᵗ.
These two solutions are linearly independent. Thus we have two linearly independent eigenvectors for the eigenvalue λ = 3.
Similarly, for λ = 6: since A − 6I = [−3 0 0; −2 −2 2; −2 1 −1] ∼ [−3 0 0; 0 1 −1; 0 0 0], it has rank 2 and hence its nullity is 1. Further, the eigenspace is given by E6(A) = {(0, α, α)ᵗ : α ∈ ℝ}. For example, for α = 1 we get X3 = (0, 1, 1)ᵗ, an eigenvector for the eigenvalue λ = 6.
Since the algebraic multiplicity of each eigenvalue equals its geometric multiplicity, A is diagonalizable. Let
P := [X1 X2 X3] = [1/2 1 0; 1 0 1; 0 1 1].
Since det(P) ≠ 0, P is invertible. Thus {X1, X2, X3} is a linearly independent set and P−1AP is diagonal, i.e., P−1AP = D = diag(3, 3, 6). Let us verify this:
AP = [3 0 0; −2 4 2; −2 1 5][1/2 1 0; 1 0 1; 0 1 1] = [3/2 3 0; 3 0 6; 0 3 6],
and
PD = [1/2 1 0; 1 0 1; 0 1 1] diag(3, 3, 6) = [3/2 3 0; 3 0 6; 0 3 6].
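The same verification in NumPy (a sketch, using the matrices computed above):

```python
import numpy as np

A = np.array([[ 3.0, 0.0, 0.0],
              [-2.0, 4.0, 2.0],
              [-2.0, 1.0, 5.0]])
P = np.array([[0.5, 1.0, 0.0],   # columns are X1, X2, X3
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
D = np.diag([3.0, 3.0, 6.0])

# AP = PD is exactly the verification carried out above.
print(np.allclose(A @ P, P @ D))  # True
```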
Theorem. Let A be an n × n real symmetric matrix. Then A has n real eigenvalues (with real eigenvectors).

Proof: We first observe that we can treat A as a matrix with complex entries. Then the equation det(A − zI) = 0 has n roots in ℂ, and each root is an eigenvalue of A. Let λ ∈ ℂ be any eigenvalue and u ∈ ℂⁿ an eigenvector, for A regarded as a matrix over ℂ. We show that λ is in fact real and is an eigenvalue of A with a real eigenvector.

Writing u as a column vector, we have Au = λu. Recalling that in ℂⁿ the inner product is given by ⟨u, v⟩ = uᵗv̄, and that for X, Y ∈ ℝⁿ, ⟨AX, Y⟩ = ⟨X, AᵗY⟩, we have
λ‖u‖² = λ⟨u, u⟩ = ⟨λu, u⟩ = ⟨Au, u⟩ = ⟨u, Aᵗu⟩ = ⟨u, Au⟩ = ⟨u, λu⟩ = λ̄⟨u, u⟩ = λ̄‖u‖²,
which implies that λ = λ̄, as u ≠ 0. Thus every eigenvalue of a symmetric matrix A is real.

Since λ is a real eigenvalue of A, the matrix A − λI is not invertible over ℝ, and hence there exists a vector u ∈ ℝⁿ, u ≠ 0, such that (A − λI)u = 0, i.e., A has a real eigenvector for λ.

Theorem. Let A be a real symmetric matrix and λ1, λ2, …, λk distinct eigenvalues of A. Let ui ∈ ℝⁿ be nonzero vectors such that Aui = λiui, 1 ≤ i ≤ k. Then {u1, u2, …, uk} is an orthogonal set.

Proof: For i ≠ j, 1 ≤ i, j ≤ k, since Aᵗ = A, we have
λi⟨ui, uj⟩ = ⟨λiui, uj⟩ = ⟨Aui, uj⟩ = ⟨ui, Aᵗuj⟩ = ⟨ui, Auj⟩ = ⟨ui, λjuj⟩ = λj⟨ui, uj⟩.
Since i ≠ j, we have λi ≠ λj, and hence ⟨ui, uj⟩ = 0.
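Both facts about real symmetric matrices can be observed numerically. A sketch assuming NumPy; the symmetric matrix below is my own illustration, not from the lecture:

```python
import numpy as np

# An illustrative real symmetric matrix.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, Q = np.linalg.eigh(S)  # eigh is NumPy's routine for symmetric matrices

print(eigvals.dtype)                    # float64: the eigenvalues are real
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: eigenvector columns are orthonormal
```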