Eigenvalues and Eigenvectors: Few concepts to remember from linear algebra (PowerPoint PPT Presentation)



SLIDE 1

Eigenvalues and Eigenvectors

SLIDE 2

Few concepts to remember from linear algebra

Let ๐‘ฉ be an ๐‘œร—๐‘› matrix and the linear transformation ๐’› = ๐‘ฉ๐’š

  • Rank: maximum number of linearly independent columns or rows of ๐‘ฉ
  • Range ๐‘ฉ = ๐’› = ๐‘ฉ๐’š โˆ€๐’š}
  • Null ๐‘ฉ = ๐’š ๐‘ฉ๐’š = ๐Ÿ}

๐’š โˆˆ โ„›๐’ โ†’

๐‘ฉ ๐’› โˆˆ โ„›๐’

SLIDE 3

Eigenvalue problem

Let ๐‘ฉ be an ๐‘œร—๐‘œ matrix: ๐’š โ‰  ๐Ÿ is an eigenvector of ๐‘ฉ if there exists a scalar ๐œ‡ such that ๐‘ฉ ๐’š = ๐œ‡ ๐’š where ๐œ‡ is called an eigenvalue. If ๐’š is an eigenvector, then ฮฑ๐’š is also an eigenvector. Therefore, we will usually seek for normalized eigenvectors, so that ๐’š = 1 Note: When using Python, numpy.linalg.eig will normalize using p=2 norm.

SLIDE 4

How do we find eigenvalues?

Linear algebra approach:

A x = λ x  ⟹  (A − λI) x = 0

Therefore the matrix A − λI is singular ⟹ det(A − λI) = 0.

p(λ) = det(A − λI) is the characteristic polynomial of degree n. In most cases, there is no analytical formula for the eigenvalues of a matrix (Abel proved in 1824 that there can be no general formula for the roots of a polynomial of degree 5 or higher) ⟹ approximate the eigenvalues numerically!
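For a small matrix the characteristic-polynomial route can still be carried out numerically; a sketch using numpy (np.poly builds the coefficients of det(A − λI) from A; the matrix is the example used on the next slide):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 2.0]])   # example matrix from the next slide

# np.poly(A) returns the coefficients of the characteristic
# polynomial det(A - lam*I); here lam^2 - 4*lam, i.e. [1, -4, 0]
coeffs = np.poly(A)

# the eigenvalues are the roots of the characteristic polynomial
eigs = np.sort(np.roots(coeffs))
print(eigs)   # approximately [0, 4]
```

In practice this is only illustrative: forming and rooting the characteristic polynomial is numerically fragile, which is exactly why iterative methods are used instead.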

SLIDE 5

Example

A = [2 1; 4 2],   det [2−λ  1; 4  2−λ] = 0

Solving the characteristic polynomial gives λ1 = 4, λ2 = 0. To get the eigenvectors, we solve (A − λI) x = 0:

λ1 = 4:  [2−4  1; 4  2−4] [x1; x2] = 0  ⟹  x = (1, 2)
λ2 = 0:  [2−0  1; 4  2−0] [x1; x2] = 0  ⟹  x = (−1, 2)

Notes: the matrix A is singular (det(A) = 0) and rank(A) = 1; the matrix has two distinct real eigenvalues; the eigenvectors are linearly independent.

SLIDE 6

Diagonalizable Matrices

A ๐‘œร—๐‘œ matrix ๐‘ฉ with ๐‘œ linearly independent eigenvectors ๐’— is said to be diagonalizable. ๐‘ฉ ๐’—๐Ÿ = ๐œ‡. ๐’—๐Ÿ, ๐‘ฉ ๐’—๐Ÿ‘ = ๐œ‡/ ๐’—๐Ÿ‘, โ€ฆ ๐‘ฉ ๐’—๐’ = ๐œ‡; ๐’—๐’, In matrix form:

๐‘ฉ ๐’—๐Ÿ โ€ฆ ๐’—๐’ = ๐œ‡#๐’—๐Ÿ โ€ฆ ๐œ‡$๐’—๐’ = ๐’—๐Ÿ โ€ฆ ๐’—๐’ ๐œ‡# โ‹ฑ ๐œ‡$ This corresponds to a similarity transformation ๐‘ฉ๐‘ฝ = ๐‘ฝ๐‘ฌ โŸบ ๐‘ฉ = ๐‘ฝ๐‘ฌ๐‘ฝ%๐Ÿ

SLIDE 7

Example

A = [2 1; 4 2],   det [2−λ  1; 4  2−λ] = 0

Solving the characteristic polynomial gives λ1 = 4, λ2 = 0. To get the eigenvectors, we solve (A − λI) x = 0:

λ1 = 4:  [2−4  1; 4  2−4] [x1; x2] = 0  ⟹  x = (1, 2)
λ2 = 0:  [2−0  1; 4  2−0] [x1; x2] = 0  ⟹  x = (−1, 2)

After normalization (p = 2 norm), the eigenvectors are x = (0.447, 0.894) and x = (−0.447, 0.894), so

A = V D V⁻¹,   V = [0.447  −0.447; 0.894  0.894],   D = [4 0; 0 0]

Notes: the matrix A is singular (det(A) = 0) and rank(A) = 1. Since A has two linearly independent eigenvectors, the matrix V is full rank, and hence the matrix A is diagonalizable.

SLIDE 8

Example

The eigenvalues of the matrix A = [3 −18; 2 −9] are λ1 = λ2 = −3. Select the incorrect statement:

A) Matrix A is diagonalizable
B) The matrix A has only one eigenvalue, with multiplicity 2
C) Matrix A has only one linearly independent eigenvector
D) Matrix A is not singular

SLIDE 9

Letโ€™s look back at diagonalizationโ€ฆ

1) If a ๐‘œร—๐‘œ matrix ๐‘ฉ has ๐‘œ linearly independent eigenvectors ๐’š then ๐‘ฉ is diagonalizable, i.e., ๐‘ฉ = ๐‘ฝ๐‘ฌ๐‘ฝ<๐Ÿ where the columns of ๐‘ฝ are the linearly independent normalized eigenvectors ๐’š of ๐‘ฉ (which guarantees that ๐‘ฝ<๐Ÿ exists) and ๐‘ฌ is a diagonal matrix with the eigenvalues of ๐‘ฉ. 2) If a ๐‘œร—๐‘œ matrix ๐‘ฉ has less then ๐‘œ linearly independent eigenvectors, the matrix is called defective (and therefore not diagonalizable). 3) If a ๐‘œร—๐‘œ symmetric matrix ๐‘ฉ has ๐‘œ distinct eigenvalues then ๐‘ฉ is diagonalizable.

SLIDE 10

A ๐’ร—๐’ symmetric matrix ๐‘ฉ with ๐’ distinct eigenvalues is diagonalizable. Suppose ๐œ‡,๐’— and ๐œˆ, ๐’˜ are eigenpairs of ๐‘ฉ ๐œ‡ ๐’— = ๐‘ฉ๐’— ๐œˆ ๐’˜ = ๐‘ฉ๐’˜ ๐œ‡ ๐’— = ๐‘ฉ๐’— โ†’ ๐’˜ 1 ๐œ‡ ๐’— = ๐’˜ 1 ๐‘ฉ๐’— ๐œ‡ ๐’˜ 1 ๐’— = ๐‘ฉ๐‘ผ๐’˜ 1 ๐’— = ๐‘ฉ ๐’˜ 1 ๐’— = ๐œˆ ๐’˜ 1 ๐’— โ†’ ๐œˆ โˆ’ ๐œ‡ ๐’˜ 1 ๐’— = 0 If all ๐‘œ eigenvalues are distinct โ†’ ๐œˆ โˆ’ ๐œ‡ โ‰  0 Hence, ๐’˜ 1 ๐’— = 0, i.e., the eigenvectors are orthogonal (linearly independent), and consequently the matrix ๐‘ฉ is diagonalizable. Note that a diagonalizable matrix ๐‘ฉ does not guarantee ๐‘œ distinct eigenvalues.

SLIDE 11

Some things to remember about eigenvalues:

  • Eigenvalues can have zero value
  • Eigenvalues can be negative
  • Eigenvalues can be real or complex numbers
  • An n×n real matrix can have complex eigenvalues
  • The eigenvalues of an n×n matrix are not necessarily unique; in fact, we can define the multiplicity of an eigenvalue
  • If an n×n matrix has n linearly independent eigenvectors, then the matrix is diagonalizable

SLIDE 12

How can we get eigenvalues numerically?

Assume that ๐‘ฉ is diagonalizable (i.e., it has ๐‘œ linearly independent eigenvectors ๐’—). We can propose a vector ๐’š which is a linear combination of these eigenvectors: ๐’š = ๐›ฝ#๐’—# + ๐›ฝ'๐’—' + โ‹ฏ + ๐›ฝ$๐’—$ Then we evaluate ๐‘ฉ ๐’š: ๐‘ฉ ๐’š = ๐›ฝ#๐‘ฉ๐’—# + ๐›ฝ'๐‘ฉ๐’—' + โ‹ฏ + ๐›ฝ$๐‘ฉ๐’—$ And since ๐‘ฉ๐’—# = ๐œ‡#๐’—# we can also write: ๐‘ฉ ๐’š = ๐›ฝ#๐œ‡#๐’—# + ๐›ฝ'๐œ‡'๐’—' + โ‹ฏ + ๐›ฝ$๐œ‡$๐’—$ where ๐œ‡( is the eigenvalue corresponding to eigenvector ๐’—( and we assume |๐œ‡#| > |๐œ‡'| โ‰ฅ |๐œ‡)| โ‰ฅ โ‹ฏ โ‰ฅ |๐œ‡$|

SLIDE 13

Power Iteration

Our goal is to find an eigenvector v1 of A. We will use an iterative process, starting with an initial vector x_0 that we assume can be written as a linear combination of the eigenvectors of A:

x_0 = α1 v1 + α2 v2 + ⋯ + αn vn

Multiplying repeatedly by A gives:

x_1 = A x_0 = α1 λ1 v1 + α2 λ2 v2 + ⋯ + αn λn vn
x_2 = A x_1 = α1 λ1² v1 + α2 λ2² v2 + ⋯ + αn λn² vn
⋮
x_k = A x_{k−1} = α1 λ1ᵏ v1 + α2 λ2ᵏ v2 + ⋯ + αn λnᵏ vn

Or, rearranging:

x_k = λ1ᵏ ( α1 v1 + α2 (λ2/λ1)ᵏ v2 + ⋯ + αn (λn/λ1)ᵏ vn )

SLIDE 14

Power Iteration

๐’šB = ๐œ‡. B ๐›ฝ.๐’—. + ๐›ฝ/ ๐œ‡/ ๐œ‡.

B

๐’—/ + โ‹ฏ + ๐›ฝ; ๐œ‡; ๐œ‡.

B

๐’—; Assume that ๐›ฝ. โ‰  0, the term ๐›ฝ.๐’—. dominates the others when ๐‘™ is very large. Since |๐œ‡. > |๐œ‡/ , we have C!

C" B

โ‰ช 1 when ๐‘™ is large Hence, as ๐‘™ increases, ๐’šB converges to a multiple of the first eigenvector ๐’—., i.e., lim

Bโ†’D ๐’š# C" # = ๐›ฝ.๐’—. or ๐’šB โ†’ ๐›ฝ. ๐œ‡. B ๐’—.

SLIDE 15

How can we now get the eigenvalues?

If ๐’š is an eigenvector of ๐‘ฉ such that ๐‘ฉ ๐’š = ๐œ‡ ๐’š then how can we evaluate the corresponding eigenvalue ๐œ‡? ๐œ‡ = ๐’š๐‘ผ๐‘ฉ๐’š ๐’š๐‘ผ๐’š Rayleigh coefficient

SLIDE 16

Normalized Power Iteration

๐’š๐Ÿ = arbitrary nonzero vector ๐’š๐Ÿ = ๐’š๐Ÿ ๐’š๐Ÿ for ๐‘™ = 1,2, โ€ฆ ๐’›B = ๐‘ฉ ๐’šB<. ๐’šB =

๐’›# ๐’›# ๐’š& = ๐œ‡$ & ๐›ฝ$๐’—$ + ๐›ฝ% ๐œ‡% ๐œ‡$

&

๐’—% + โ‹ฏ + ๐›ฝ' ๐œ‡' ๐œ‡$

&

๐’—'

SLIDE 17

Normalized Power Iteration

Demo: "Power Iteration"

x_k = λ1ᵏ ( α1 v1 + α2 (λ2/λ1)ᵏ v2 + ⋯ + αn (λn/λ1)ᵏ vn )

What if the starting vector x_0 has no component in the dominant eigenvector v1 (α1 = 0)?

SLIDE 18

Normalized Power Iteration

Demo: "Power Iteration"

x_k = λ1ᵏ ( α1 v1 + α2 (λ2/λ1)ᵏ v2 + ⋯ + αn (λn/λ1)ᵏ vn )

What if the first two largest eigenvalues (in magnitude) are the same, |λ1| = |λ2|?

x_k = λ1ᵏ α1 v1 + λ1ᵏ (λ2/λ1)ᵏ α2 v2 + λ1ᵏ ( α3 (λ3/λ1)ᵏ v3 + ⋯ + αn (λn/λ1)ᵏ vn )

SLIDE 19

Potential pitfalls

1. The starting vector x_0 may have no component in the dominant eigenvector v1 (α1 = 0). This is usually unlikely to happen if x_0 is chosen randomly, and in practice it is not a problem because rounding will usually introduce such a component.
2. Risk of eventual overflow (or underflow): in practice, the approximate eigenvector is normalized at each iteration (Normalized Power Iteration).
3. The first two largest eigenvalues (in magnitude) may be the same: |λ1| = |λ2|. In this case, power iteration will give a vector that is a linear combination of the corresponding eigenvectors:
   • If the signs are the same, the method will converge to the correct magnitude of the eigenvalue. If the signs are different, the method will not converge.
   • This is a "real" problem that cannot be discounted in practice.
SLIDE 20

Error

๐’šB = ๐œ‡. B ๐›ฝ.๐’—. + ๐›ฝ/ ๐œ‡/ ๐œ‡.

B

๐’—/ + โ‹ฏ + ๐›ฝ; ๐œ‡; ๐œ‡.

B

๐’—;

๐น๐‘ ๐‘ ๐‘๐‘ 

We can see from the above that the rate of convergence depends on the ratio C!

C" , that is:

๐œ‡. <B ๐’šB โˆ’ ๐›ฝ.๐’—. = ๐‘ƒ ๐œ‡/ ๐œ‡.

B

SLIDE 21

Convergence and error

๐’šB = ๐’—. + ๐›ฝ/ ๐›ฝ. ๐œ‡/ ๐œ‡.

B

๐’—/ + โ‹ฏ

๐’‡&

Power method has linear convergence, which is quite slow.

๐’‡PQR ๐’‡P โ‰ˆ ?S ?R
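The linear convergence rate |λ2/λ1| can be observed numerically; a sketch using the symmetric matrix from the later clicker question (eigenvalues 4 and 2, so each iteration should shrink the error by about 1/2):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])                 # eigenvalues 4 and 2 -> ratio 0.5
v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # dominant eigenvector

x = np.array([1.0, 0.0])
errors = []
for _ in range(12):
    y = A @ x
    x = y / np.linalg.norm(y)
    errors.append(np.linalg.norm(x - v1))  # e_k = ||x_k - v1||

# consecutive error ratios approach |lambda2 / lambda1| = 0.5
ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
print(ratios[-1])                          # ~0.5
```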

SLIDE 22

Iclicker question

A) 0.1536   B) 0.192   C) 0.09   D) 0.027

SLIDE 23

Iclicker question

The matrix A = [3 1; 1 3] has eigenvalues (4, 2) and corresponding eigenvectors v1 = (1, 1) and v2 = (−1, 1). Suppose we want to use the normalized power iteration, starting from x_0 = (−0.5, 0). Select the correct statement:

A) Normalized power iteration will not converge
B) Normalized power iteration will converge to the eigenvector corresponding to the eigenvalue 2
C) Normalized power iteration will converge to the eigenvector corresponding to the eigenvalue 4

SLIDE 24

Iclicker question

Suppose ๐’š is an eigenvector of ๐‘ฉ such that ๐‘ฉ ๐’š = ๐œ‡ ๐’š What is an eigenvalue of ๐‘ฉ<.? A) ๐œ‡ B) โˆ’๐œ‡ C) 1/๐œ‡ D) โˆ’ .

C

E) Canโ€™t tell without knowing ๐œ‡

SLIDE 25

Inverse Power Method

Previously we learned that we can use the Power Method to obtain the largest eigenvalue and its corresponding eigenvector, using the update

x_{k+1} = A x_k

Suppose there is a single smallest eigenvalue of A. With the previous ordering

|λ1| > |λ2| ≥ |λ3| ≥ ⋯ > |λn|

the smallest eigenvalue is λn. When computing the eigenvalues of the inverse matrix A⁻¹, we get the ordering

|1/λn| > |1/λ(n−1)| ≥ ⋯ ≥ |1/λ1|

Hence we can use the Power Method update on the matrix A⁻¹ to compute its dominant eigenvalue 1/λn, i.e.,

x_{k+1} = A⁻¹ x_k

SLIDE 26

Iclicker question

Which code snippet is the best option to compute the smallest eigenvalue of the matrix A?

A) B) C) D) E) I have no idea!

(The code snippets for options A through D were shown as images on the slide.)

SLIDE 27

Inverse Power Method

Note that the update x_{k+1} = A⁻¹ x_k can instead be written as

A x_{k+1} = x_k

where x_k is known and we need to solve for x_{k+1} (we are just solving a linear system of equations!). Since the matrix A does not change from one iteration to the next, we can factorize the matrix once and then perform a series of forward and backward substitutions. Recall P A = L U; with A x = b this gives L U x = P b. Hence we can efficiently solve

L z = P x_k
U x_{k+1} = z
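A sketch of inverse power iteration in numpy. For clarity it calls np.linalg.solve every iteration; an efficient implementation would factorize A once (PA = LU, e.g. via scipy.linalg.lu_factor) and reuse the factors, as described above:

```python
import numpy as np

def inverse_power_iteration(A, x0, num_iters=50):
    """Approximate the eigenvector for the smallest-magnitude eigenvalue."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(num_iters):
        # solve A y = x, i.e. y = A^{-1} x; in practice, factorize A
        # once and do forward/backward substitution here
        y = np.linalg.solve(A, x)
        x = y / np.linalg.norm(y)
    return x

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])      # eigenvalues 4 and 2 (my own example)
x = inverse_power_iteration(A, np.array([1.0, 0.0]))

print((x @ A @ x) / (x @ x))    # Rayleigh quotient ~ 2.0, the smallest eigenvalue
```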

SLIDE 28

Cost of computing eigenvalues using inverse power iteration

SLIDE 29

Iclicker question

What is the approximate cost of computing the largest eigenvalue using the Power Method (k iterations on an n×n matrix)?

A) kn   B) n² + kn   C) kn²   D) n³ + kn²   E) NOTA

SLIDE 30

Iclicker question

Suppose ๐’š is an eigenvector of ๐‘ฉ such that ๐‘ฉ ๐’š = ๐œ‡๐Ÿ ๐’š and also ๐’š is an eigenvector of ๐‘ช such that ๐‘ช ๐’š = ๐œ‡๐Ÿ‘ ๐’š. What is an eigenvalue of What is an eigenvalue of (๐‘ฉ + ๐Ÿ

๐Ÿ‘ ๐‘ช)<.?

A)

C๐Ÿ /C๐ŸUC๐Ÿ‘

B)

C๐Ÿ‘ /C๐ŸUC๐Ÿ‘

C)

/ /C๐ŸUC๐Ÿ‘

D)

C๐Ÿ /C๐Ÿ‘UC๐Ÿ

E)

C๐Ÿ‘ /C๐Ÿ‘UC๐Ÿ

SLIDE 31

Iclicker question

Suppose ๐’š is an eigenvector of ๐‘ฉ such that ๐‘ฉ ๐’š = ๐œ‡ ๐’š , but ๐œ‡ is not the largest or smallest eigenvalue. We want to compute the eigenvalue ๐œ‡ that is close to a given number ๐œ. Which of the following modified matrices will give such eigenvalue? A) A) (๐‘ฉ โˆ’ ๐œ๐‘ฑ) B) (๐‘ฉ โˆ’ ๐œ๐‘ฑ) <. C) C) (1 โˆ’ ๐œ) ๐‘ฉ D) .

W ๐‘ฉ

E) I still have no clue how to answer to these iclicker questionsโ€ฆ

SLIDE 32

Eigenvalues of a Shifted Inverse Matrix

Suppose the eigenpair (x, λ) satisfies A x = λ x. We can describe the eigenvalues of the shifted inverse matrix (A − σI)⁻¹: since

A x = λ x  ⟹  (A − σI) x = (λ − σ) x  ⟹  (A − σI)⁻¹ x = (1/(λ − σ)) x

the eigenvalues of the shifted inverse matrix are λ̄ = 1/(λ − σ). Hence the eigenvalue problem is

(A − σI)⁻¹ x = (1/(λ − σ)) x

SLIDE 33

Eigenvalues of a Shifted Inverse Matrix

We use the update

(A − σI) x_{k+1} = x_k

to obtain the eigenpair (x, λ) satisfying A x = λ x such that λ is the eigenvalue closest to the number σ. We can factorize the matrix C = A − σI, such that P C = L U, and then efficiently solve

L z = P x_k
U x_{k+1} = z
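A sketch of the shifted inverse update in numpy. The shifted matrix would be factorized once in practice; here np.linalg.solve is reused for brevity. The 3×3 matrix and the shift σ = 2.9 are my own choices (the matrix has eigenvalues 3 − √3, 3, 3 + √3, so the iteration should lock onto 3):

```python
import numpy as np

def shifted_inverse_iteration(A, sigma, x0, num_iters=50):
    """Approximate the eigenpair of A whose eigenvalue is closest to sigma."""
    B = A - sigma * np.eye(A.shape[0])   # shifted matrix; factorize once in practice
    x = x0 / np.linalg.norm(x0)
    for _ in range(num_iters):
        y = np.linalg.solve(B, x)        # solve (A - sigma I) y = x
        x = y / np.linalg.norm(y)
    lam = (x @ A @ x) / (x @ x)          # Rayleigh quotient for the eigenvalue
    return lam, x

# example symmetric matrix with eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, x = shifted_inverse_iteration(A, 2.9, np.array([1.0, 1.0, 1.0]))

print(lam)   # ~3.0, the eigenvalue closest to the shift 2.9
```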

SLIDE 34

Convergence summary

Method: Power Method
  Update: x_{k+1} = A x_k
  Cost: k n²
  Convergence ‖e_{k+1}‖/‖e_k‖: |λ2 / λ1|

Method: Inverse Power Method
  Update: A x_{k+1} = x_k
  Cost: n³ + k n²
  Convergence ‖e_{k+1}‖/‖e_k‖: |λn / λ(n−1)|

Method: Shifted Inverse Power Method
  Update: (A − σI) x_{k+1} = x_k
  Cost: n³ + k n²
  Convergence ‖e_{k+1}‖/‖e_k‖: |λc − σ| / |λc2 − σ|

λ1: largest eigenvalue (in magnitude); λ2: second largest eigenvalue (in magnitude); λn: smallest eigenvalue (in magnitude); λ(n−1): second smallest eigenvalue (in magnitude); λc: closest eigenvalue to σ; λc2: second closest eigenvalue to σ.