

  1. Linear Algebra, Lecture 11. Prof. Inder K. Rana, Room 112 B, Department of Mathematics, IIT-Bombay, Mumbai-400076 (India). Email: ikr@math.iitb.ac.in

  2. Recall: We showed that the matrix A = [ −5 −7 ; 2 4 ] (rows separated by semicolons) has two eigenvalues, λ = 2 and λ = −3. Further, we found eigenvectors C_1 = (1, −1)ᵀ and C_2 = (−7, 2)ᵀ. We defined P := [ 1 −7 ; −1 2 ], whose columns are C_1 and C_2, checked that P is invertible, and found P⁻¹AP = [ 2 0 ; 0 −3 ].
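As a quick numerical check of this recap (not part of the original slides; it assumes NumPy is available), one can verify the eigenvector equations and the diagonalization directly:

```python
import numpy as np

# Matrix and eigenvectors from the recap slide.
A = np.array([[-5.0, -7.0],
              [2.0, 4.0]])
C1 = np.array([1.0, -1.0])    # eigenvector for lambda = 2
C2 = np.array([-7.0, 2.0])    # eigenvector for lambda = -3
P = np.column_stack([C1, C2])

# P is invertible (det = -5), and P^{-1} A P is diagonal.
D = np.linalg.inv(P) @ A @ P
```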

  3. Diagonalizable. This prompted us to ask the following. Question: Given a matrix A, when does there exist an invertible matrix P such that P⁻¹AP is a diagonal matrix, and how do we find such a P? Before answering, we need a definition. Definition: Let A be an n × n matrix with entries from F = ℝ or ℂ. A is said to be diagonalizable if there exists an invertible matrix P, with entries from F, such that P⁻¹AP is a diagonal matrix.

  4. Diagonalizable. The answer is given by the following. Theorem: Let A be an n × n matrix. A is diagonalizable if and only if there exist scalars λ_1, λ_2, …, λ_n ∈ ℝ and vectors C_1, C_2, …, C_n ∈ ℝⁿ such that the following hold: (i) A C_i = λ_i C_i for all 1 ≤ i ≤ n; thus A has n eigenvalues (counted with repetition). (ii) The set {C_1, …, C_n} is linearly independent, and hence is a basis of ℝⁿ.

  5. Proof. Since A is diagonalizable, there exists an invertible matrix P such that P⁻¹AP = D, where D is a diagonal matrix. Let C_1, …, C_n be the columns of P, so P = [C_1 C_2 ⋯ C_n]. Since P is invertible, none of the column vectors C_i is 0. In fact, P, being invertible, has rank n, and hence the set {C_1, …, C_n} is linearly independent. Write D = diag(λ_1, …, λ_n).

  6. Proof. Then P⁻¹AP = D implies that AP = PD, i.e., A[C_1 … C_n] = [C_1 … C_n]D, i.e., [A C_1  A C_2 … A C_n] = [λ_1 C_1  λ_2 C_2 … λ_n C_n]. Thus A C_i = λ_i C_i for all 1 ≤ i ≤ n. This proves one direction. Conversely, let X_1, X_2, …, X_n be elements of Fⁿ such that {X_1, …, X_n} is a linearly independent set and, for λ_1, …, λ_n ∈ F, A X_i = λ_i X_i, 1 ≤ i ≤ n. Define the matrix P := [X_1 X_2 ⋯ X_n]. Then rank(P) = n, and hence P is invertible.

  7. Proof contd. Further, AP = A[X_1 ⋯ X_n] = [A X_1 … A X_n] = [λ_1 X_1 ⋯ λ_n X_n] = PD, where D := diag(λ_1, …, λ_n). Hence P⁻¹AP = D, i.e., A is diagonalizable.
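The "conversely" direction of this proof is exactly what a numerical eigensolver exploits: stack independent eigenvectors as the columns of P, and P⁻¹AP comes out diagonal. A minimal sketch (NumPy assumed; the matrix is the one from the recap):

```python
import numpy as np

A = np.array([[-5.0, -7.0],
              [2.0, 4.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are corresponding eigenvectors.
eigvals, P = np.linalg.eig(A)

# The columns of P are linearly independent here, so P is invertible
# and conjugating A by P diagonalizes it, as in the proof.
D = np.linalg.inv(P) @ A @ P
```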

  8. Diagonalization. Problem: Given an n × n matrix A, when and how do we find a linearly independent set of eigenvectors v_1, v_2, …, v_n for A? Clearly, such a set of n eigenvectors will form a basis. We start with a theorem about linear independence of eigenvectors.

  9. Linear independence of eigenvectors. Theorem: Let λ_1, λ_2, …, λ_r be distinct eigenvalues of a square matrix A, and let v_1, v_2, …, v_r be a corresponding choice of eigenvectors. Then {v_1, v_2, …, v_r} is a linearly independent set. Proof: Suppose not, and let ℓ ≤ r be the smallest integer such that {v_1, …, v_ℓ} is linearly dependent. Then {v_1, …, v_{ℓ−1}} is linearly independent, and hence there exist scalars α_1, …, α_{ℓ−1}, not all zero (since v_ℓ ≠ 0), such that v_ℓ = α_1 v_1 + ⋯ + α_{ℓ−1} v_{ℓ−1}.  (1) Then λ_ℓ v_ℓ = A v_ℓ = A ( Σ_{i=1}^{ℓ−1} α_i v_i ) = Σ_{i=1}^{ℓ−1} α_i A v_i = Σ_{i=1}^{ℓ−1} α_i λ_i v_i.  (2)

  10. Linear independence of eigenvectors. Also, from (1), λ_ℓ v_ℓ = Σ_{i=1}^{ℓ−1} λ_ℓ α_i v_i.  (3) Thus, from (2) and (3) we have 0 = Σ_{i=1}^{ℓ−1} α_i (λ_ℓ − λ_i) v_i. The linear independence of {v_1, …, v_{ℓ−1}} implies that α_i (λ_ℓ − λ_i) = 0 for all i. Since α_i is nonzero for some i, we get λ_ℓ − λ_i = 0 for that i, which is not possible as λ_1, …, λ_r are all distinct. Hence {v_1, …, v_r} is linearly independent.
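A numerical illustration of the theorem (NumPy assumed; the matrix is the one from the recap, which has two distinct eigenvalues): the matrix whose columns are the returned eigenvectors has full rank, i.e., the eigenvectors are linearly independent.

```python
import numpy as np

A = np.array([[-5.0, -7.0],
              [2.0, 4.0]])   # distinct eigenvalues 2 and -3
eigvals, V = np.linalg.eig(A)

# Columns of V are eigenvectors for distinct eigenvalues,
# so by the theorem V has full rank.
rank = np.linalg.matrix_rank(V)
```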

  11. Diagonalization. Theorem: If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable. If the characteristic polynomial has a multiple root, there could be a problem. Example: Let A = [ 1 1 ; 0 1 ]. Let us find its eigenvalues and eigenvectors. Solution: The characteristic polynomial is D(t) = (t − 1)², hence there is only one eigenvalue, which is repeated. To find eigenvectors, we note that A − I_2 = [ 0 1 ; 0 0 ] is itself in reduced row echelon form. Its null space is N(A − I_2) = ℝ (1, 0)ᵀ, i.e., the span of the eigenvectors is only 1-dimensional, and hence there does not exist a basis of ℝ² consisting of eigenvectors. Thus A is not diagonalizable.
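The defective example can be checked numerically as well (NumPy assumed; the geometric multiplicity is computed via rank-nullity as dim N(A − I) = 2 − rank(A − I)):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Only one eigenvalue, 1, repeated twice.
eigvals, _ = np.linalg.eig(A)

# Geometric multiplicity of lambda = 1: dim N(A - I) = 2 - rank(A - I).
geo_mult = 2 - np.linalg.matrix_rank(A - np.eye(2))
```

Since the eigenspace is only 1-dimensional while the eigenvalue is repeated, no eigenbasis of ℝ² exists.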

  12. Algebraic/geometric multiplicity, defect. Henceforth, we denote N(A − λ I_n) by E_λ = E_λ(A) and call it the λ-eigenspace of A. Definition (Algebraic multiplicity): Let λ be an eigenvalue of A. The multiplicity of λ as a root of the characteristic polynomial D_A(t) is called the algebraic multiplicity of λ. We write this number as m_λ = m_λ(A). Definition (Geometric multiplicity): Let λ be an eigenvalue of A. The dimension of the null space of A − λ I_n is known as the geometric multiplicity of λ. We write this number as g_λ = g_λ(A). Definition (Defect): Let λ be an eigenvalue of A. The difference m_λ − g_λ is known as the defect of λ and is denoted δ_λ = δ_λ(A).

  13. Diagonalization. Theorem: If the algebraic and geometric multiplicities of an n × n matrix A agree for every eigenvalue λ of A, then there exists a basis of ℝⁿ consisting of eigenvectors of A. Proof: Let λ_1, λ_2, …, λ_k be the distinct eigenvalues of A with multiplicities m_1, m_2, …, m_k respectively. Let B_j = {v_{j1}, v_{j2}, …, v_{j m_j}} be a basis of E_{λ_j}, j = 1, 2, …, k. Then B = ∪_j B_j is an A-eigenbasis of ℝⁿ. (Note that m_1 + m_2 + ⋯ + m_k = deg D_A(t) = n.)
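The proof's recipe, collecting a basis of each eigenspace and taking the union, can be sketched numerically. The helper `null_space_basis` is my own SVD-based construction (not from the slides), and the 3 × 3 matrix is an illustrative one with eigenvalues 3 (twice) and 6; NumPy assumed.

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    # SVD-based null space: rows of Vt past the numerical rank span N(M).
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T   # columns form an orthonormal basis of N(M)

A = np.array([[3.0, 0.0, 0.0],
              [-2.0, 4.0, 2.0],
              [-2.0, 1.0, 5.0]])

# Union of the per-eigenvalue bases B_j, concatenated column-wise.
B = np.column_stack([null_space_basis(A - lam * np.eye(3))
                     for lam in (3.0, 6.0)])
```

Because the multiplicities agree here, B ends up with 3 independent columns, i.e., an eigenbasis of ℝ³.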

  14. Similarity of matrices and diagonalization. Definition (Similarity): Let A and B be two n × n matrices. We say A is similar to B if there is an invertible matrix P such that B = P⁻¹AP. Theorem: If A and B are similar, then both have the same eigenvalues with the same multiplicities, both algebraic and geometric. Theorem (Diagonalization): Let A be diagonalizable, i.e., each eigenvalue of A is defect-free. Then A is similar to a diagonal matrix whose diagonal entries are the eigenvalues of A, each occurring as many times as its multiplicity.
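A small check that similarity preserves the characteristic polynomial, and hence all eigenvalue data. The matrices here are my own illustrative choices; `np.poly` applied to a square matrix returns its characteristic-polynomial coefficients.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # any invertible P works
B = np.linalg.inv(P) @ A @ P

# Similar matrices have identical characteristic polynomials.
coeffs_A = np.poly(A)
coeffs_B = np.poly(B)
```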

  15. Example. Example: Let A = [ 1 2 −2 ; 2 1 −4 ; 1 −1 −2 ]. Find the eigenvalues and a basis of eigenvectors which diagonalizes A. Also write down a matrix X such that X⁻¹AX is diagonal. Solution: D_A(λ) = −λ(λ² − 9), so λ = 0, ±3. λ = −3: Row reduction of A + 3I = [ 4 2 −2 ; 2 4 −4 ; 1 −1 1 ] yields [ 1 0 0 ; 0 1 −1 ; 0 0 0 ], giving v_1 = (0, 1, 1)ᵀ as an eigenvector.

  16. Example contd. λ = 0: Row operations on A produce the matrix [ 1 2 −2 ; 0 1 0 ; 0 0 0 ]. Hence v_2 = (2, 0, 1)ᵀ is an eigenvector. λ = 3: A − 3I gives rise to [ 1 −1 1 ; 0 0 1 ; 0 0 0 ]. Hence v_3 = (1, 1, 0)ᵀ is an eigenvector. This enables us to write X = [v_1 v_2 v_3] = [ 0 2 1 ; 1 0 1 ; 1 1 0 ], with X⁻¹ = (1/3) [ −1 1 2 ; 1 −1 1 ; 1 2 −2 ]. Finally, X⁻¹AX = [ −3 0 0 ; 0 0 0 ; 0 0 3 ].
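With A from this example and the eigenvectors v_1 = (0, 1, 1)ᵀ, v_2 = (2, 0, 1)ᵀ, v_3 = (1, 1, 0)ᵀ (one can check directly that A v_3 = 3 v_3), the diagonalization can be verified numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0, -2.0],
              [2.0, 1.0, -4.0],
              [1.0, -1.0, -2.0]])
X = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])   # columns: v1, v2, v3

# X is invertible (det = 3), and conjugation diagonalizes A.
D = np.linalg.inv(X) @ A @ X
```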

  17. Example. Consider the matrix A = [ 3 0 0 ; −2 4 2 ; −2 1 5 ]. The characteristic polynomial of A is det(A − λI) = (3 − λ)[(4 − λ)(5 − λ) − 2] = (3 − λ)(λ − 3)(λ − 6) = −(λ − 3)²(λ − 6). Thus A has two eigenvalues, λ = 3 and λ = 6, where λ = 3 has algebraic multiplicity 2 and λ = 6 has algebraic multiplicity 1.

  18. Example. Let us find the eigenspace for the eigenvalue λ = 3. Note that A − 3I = [ 0 0 0 ; −2 1 2 ; −2 1 2 ] ∼ [ −2 1 2 ; 0 0 0 ; 0 0 0 ]. Thus A − 3I_n has rank 1, and hence N(A − 3I_n) = E_3(A) = { (x_2/2 + x_3, x_2, x_3)ᵀ : x_2, x_3 ∈ ℝ }.
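Numerically, the multiplicities of this example check out (NumPy assumed; each geometric multiplicity is computed as n − rank(A − λI) by rank-nullity):

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [-2.0, 4.0, 2.0],
              [-2.0, 1.0, 5.0]])

# Geometric multiplicities via the rank-nullity theorem.
g3 = 3 - np.linalg.matrix_rank(A - 3 * np.eye(3))
g6 = 3 - np.linalg.matrix_rank(A - 6 * np.eye(3))
```

Since g_3 = m_3 = 2 and g_6 = m_6 = 1, both eigenvalues are defect-free.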
