Matrices with Application to Page Rank (PowerPoint presentation)

SLIDE 1

Linear Algebra Review Introduction Matrices Markov Matrices Pagerank

Matrices with Application to Page Rank

Anil Maheshwari

anil@scs.carleton.ca
School of Computer Science, Carleton University, Canada

SLIDE 2

Matrices

1. A rectangular array
2. Operations: Addition; Multiplication; Diagonalization; Transpose; Inverse; Determinant
3. Row Operations; Linear Equations; Gaussian Elimination
4. Types: Identity; Symmetric; Diagonal; Upper/Lower Triangular; Orthogonal; Orthonormal
5. Transformations: Eigenvalues and Eigenvectors
6. Rank; Column and Row Space; Null Space
7. Applications: Page Rank, Dimensionality Reduction, . . .

SLIDE 3

Matrix Vector Product

Matrix-vector product Ax = b:

    [ 2 1 ] [  4 ]   [ 6 ]
    [ 3 4 ] [ −2 ] = [ 4 ]
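As a quick sketch (not part of the slides), the same product can be checked in NumPy:

```python
import numpy as np

A = np.array([[2, 1],
              [3, 4]])
x = np.array([4, -2])

b = A @ x  # matrix-vector product
print(b)   # [6 4]
```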

SLIDE 4

Matrix Vector Product

Ax = b as a linear combination of the columns of A:

    [ 2 1 ] [  4 ]     [ 2 ]        [ 1 ]   [ 6 ]
    [ 3 4 ] [ −2 ] = 4 [ 3 ] + (−2) [ 4 ] = [ 4 ]
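The column view can be sketched in NumPy as well (an illustration, assuming the same A and x as above):

```python
import numpy as np

A = np.array([[2, 1], [3, 4]])
x = np.array([4, -2])

# Ax computed as x1 * (column 1 of A) + x2 * (column 2 of A)
b = x[0] * A[:, 0] + x[1] * A[:, 1]
print(b)  # [6 4]
```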

SLIDE 5

Matrix-Matrix Product

Matrix-matrix product A = BC:

    [ 2 0 ] [ 2 4 ]   [ 4  8 ]
    [ 3 1 ] [ 0 4 ] = [ 6 16 ]

SLIDE 6

Matrix-Matrix Product

A = BC as a sum of rank-1 matrices (column k of B times row k of C):

    [ 2 0 ] [ 2 4 ]   [ 2 ]           [ 0 ]           [ 4  8 ]   [ 0 0 ]   [ 4  8 ]
    [ 3 1 ] [ 0 4 ] = [ 3 ] [ 2 4 ] + [ 1 ] [ 0 4 ] = [ 6 12 ] + [ 0 4 ] = [ 6 16 ]
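A short NumPy sketch of the rank-1 (outer product) view of this product:

```python
import numpy as np

B = np.array([[2, 0], [3, 1]])
C = np.array([[2, 4], [0, 4]])

# B @ C as a sum of outer products: (column k of B) times (row k of C)
A = sum(np.outer(B[:, k], C[k, :]) for k in range(2))
print(A)            # [[ 4  8]
                    #  [ 6 16]]
```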

SLIDE 7

RREF

Let

    A = [  2  2  0 ]
        [  2  4  8 ]
        [ 10 16 24 ]

1st Pivot: Replace r2 by r2 − r1, and r3 by r3 − 5r1:

    [ 2 2  0 ]
    [ 0 2  8 ]
    [ 0 6 24 ]

2nd Pivot: Replace r3 by r3 − 3r2:

    [ 2 2 0 ]
    [ 0 2 8 ]
    [ 0 0 0 ]

SLIDE 8

RREF contd.

Divide the first row by 2 and the second row by 2:

    [ 1 1 0 ]
    [ 0 1 4 ]
    [ 0 0 0 ]

Replace r1 by r1 − r2:

    R = [ 1 0 −4 ]
        [ 0 1  4 ]
        [ 0 0  0 ]
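The same row operations can be replayed in NumPy (a sketch mirroring the slide's steps, not a general RREF routine):

```python
import numpy as np

A = np.array([[2, 2, 0],
              [2, 4, 8],
              [10, 16, 24]], dtype=float)

R = A.copy()
R[1] -= R[0]        # r2 <- r2 - r1      (1st pivot)
R[2] -= 5 * R[0]    # r3 <- r3 - 5 r1
R[2] -= 3 * R[1]    # r3 <- r3 - 3 r2    (2nd pivot)
R[0] /= 2           # scale the pivot rows to 1
R[1] /= 2
R[0] -= R[1]        # r1 <- r1 - r2
print(R)            # [[ 1.  0. -4.]
                    #  [ 0.  1.  4.]
                    #  [ 0.  0.  0.]]
```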

SLIDE 9

Rank

    A = [  2  2  0 ]   RREF    [ 1 0 −4 ]
        [  2  4  8 ]  ------>  [ 0 1  4 ] = R
        [ 10 16 24 ]           [ 0 0  0 ]

Rank = Number of non-zero pivots = 2

Basis vectors of the row space = rows of R corresponding to the non-zero pivots:

    v1 = (1, 0, −4) and v2 = (0, 1, 4)

Basis vectors of the column space = columns of A corresponding to the non-zero pivots of R:

    u1 = (2, 2, 10) and u2 = (2, 4, 16)

Furthermore,

    A = u1 v1^T + u2 v2^T = (2, 2, 10)^T [ 1 0 −4 ] + (2, 4, 16)^T [ 0 1 4 ]
SLIDE 10

Null Space

The null space of A = all vectors x such that Ax = 0. This includes the 0 vector.

Is there a vector x = (x1, x2, x3) ∈ R^3 such that

    Ax = x1 (2, 2, 10)^T + x2 (2, 4, 16)^T + x3 (0, 8, 24)^T = 0 ?

x = (1, −1, 1/4), or any of its scalar multiples, satisfies Ax = 0.

Dimension of the null space of A = number of columns of A − rank(A) = 3 − 2 = 1
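A minimal NumPy check of the null-space vector and the rank (an illustration using the slide's matrix):

```python
import numpy as np

A = np.array([[2, 2, 0],
              [2, 4, 8],
              [10, 16, 24]], dtype=float)

x = np.array([1, -1, 0.25])       # the null-space vector from the slide
print(A @ x)                      # [0. 0. 0.]
print(np.linalg.matrix_rank(A))   # 2
```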

SLIDE 11

Spaces for A

Let A be an m × n matrix with real entries, and let R be the RREF of A, with r ≤ min{m, n} non-zero pivots.

1. rank(A) = r
2. The column space is a subspace of R^m of dimension r; its basis vectors are the columns of A corresponding to the non-zero pivots in R.
3. The row space is a subspace of R^n of dimension r; its basis vectors are the rows of R corresponding to the non-zero pivots.
4. The null space of A consists of all vectors x ∈ R^n satisfying Ax = 0. These vectors form a subspace of dimension n − r.

SLIDE 12

Eigenvalues and Eigenvectors

Given an n × n matrix A, a non-zero vector v is an eigenvector of A if Av = λv for some scalar λ; λ is the eigenvalue corresponding to the eigenvector v. Example:

    A = [ 2 1 ]
        [ 3 4 ]

SLIDE 13

Example: Eigenvalues and Eigenvectors

Example

Let A = [ 2 1 ; 3 4 ]. Observe that

    [ 2 1 ] [ 1 ]     [ 1 ]           [ 2 1 ] [  1 ]     [  1 ]
    [ 3 4 ] [ 3 ] = 5 [ 3 ]    and    [ 3 4 ] [ −1 ] = 1 [ −1 ]

Thus, λ1 = 5 and λ2 = 1 are the eigenvalues of A. The corresponding eigenvectors are v1 = [1, 3] and v2 = [1, −1], as Av1 = λ1v1 and Av2 = λ2v2.
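The eigenvalues can be recovered numerically with NumPy (a sketch; note that `eig` returns normalized eigenvectors, and their order is not guaranteed):

```python
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])
vals, vecs = np.linalg.eig(A)   # each column of `vecs` is a unit eigenvector
print(np.sort(vals))            # [1. 5.]
```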

SLIDE 14

Eigenvalues of Ak

Let A vi = λi vi. Consider:

    A² vi = A(A vi) = A(λi vi) = λi (A vi) = λi (λi vi) = λi² vi

⇒ A² vi = λi² vi

Eigenvalues of A^k

For an integer k > 0, A^k has the same eigenvectors as A, but the eigenvalues are λi^k.

SLIDE 15

Matrices with distinct eigenvalues

Property

Let A be an n × n real matrix with n distinct eigenvalues. The corresponding eigenvectors are linearly independent.

SLIDE 16

Matrices with distinct eigenvalues

Let A be an n × n real matrix with n distinct eigenvalues. Let λ1, . . . , λn be the distinct eigenvalues and let x1, . . . , xn be the corresponding eigenvectors, respectively, with each xi = [xi1, xi2, . . . , xin].

Define an eigenvector matrix X whose columns are the eigenvectors:

    X = [ x11 x21 . . . xn1 ]
        [  .   .   . . . .  ]
        [ x1n x2n . . . xnn ]

Since the eigenvectors are linearly independent, we know that X^−1 exists.

SLIDE 17

Matrices with distinct eigenvalues (contd.)

Define a diagonal n × n matrix Λ:

    Λ = [ λ1           ]
        [    λ2        ]
        [       .      ]
        [         .    ]
        [           λn ]

Consider the matrix product AX:

    AX = A [ x1 . . . xn ] = [ λ1 x1 . . . λn xn ] = XΛ

SLIDE 18

Matrices with distinct eigenvalues (contd.)

Since X^−1 exists, we multiply both sides by X^−1 on the left and obtain

    X^−1 A X = X^−1 X Λ = Λ        (1)

and when we multiply by X^−1 on the right we obtain

    A X X^−1 = A = X Λ X^−1        (2)

SLIDE 19

Matrices with distinct eigenvalues (contd.)

Consider the diagonalization A = XΛX^−1, and consider A²:

    A² = (XΛX^−1)(XΛX^−1) = XΛ(X^−1X)ΛX^−1 = XΛ²X^−1

⇒ A² has the same set of eigenvectors as A, but its eigenvalues are squared. Similarly, A^k = XΛ^kX^−1: the eigenvectors of A^k are the same as those of A, and its eigenvalues are raised to the power k.
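A NumPy sketch of computing a matrix power through the diagonalization (illustration only; assumes the earlier 2 × 2 example):

```python
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])
vals, X = np.linalg.eig(A)              # A = X @ diag(vals) @ inv(X)
L = np.diag(vals)

# A^3 via X Λ^3 X^-1
A3 = X @ np.linalg.matrix_power(L, 3) @ np.linalg.inv(X)
print(np.allclose(A3, np.linalg.matrix_power(A, 3)))  # True
```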

SLIDE 20

Symmetric Matrices

Example

Consider the symmetric matrix S = [ 3 1 ; 1 3 ]. Its eigenvalues are λ1 = 4 and λ2 = 2, and the corresponding eigenvectors are q1 = (1/√2, 1/√2) and q2 = (1/√2, −1/√2), respectively. Note that the eigenvalues are real and the eigenvectors are orthonormal.

    S = [ 3 1 ] = [ 1/√2   1/√2 ] [ 4 0 ] [ 1/√2   1/√2 ]
        [ 1 3 ]   [ 1/√2  −1/√2 ] [ 0 2 ] [ 1/√2  −1/√2 ]

Eigenvalues of Symmetric Matrices

All the eigenvalues of a real symmetric matrix S are real. Moreover, all components of the eigenvectors of a real symmetric matrix S are real.

SLIDE 21

Symmetric Matrices (contd.)

Property

Any pair of eigenvectors of a real symmetric matrix S corresponding to two different eigenvalues are orthogonal.
SLIDE 22

Symmetric Matrices (contd.)


2020-11-03

Proof: Let q1 and q2 be two eigenvectors corresponding to λ1 ≠ λ2, respectively. Thus, Sq1 = λ1q1 and Sq2 = λ2q2. Since S is symmetric, q1^T S = λ1 q1^T. Multiply by q2 on the right and we obtain

    λ1 q1^T q2 = q1^T S q2 = q1^T λ2 q2 = λ2 q1^T q2.

Since λ1 ≠ λ2, this implies that q1^T q2 = 0, and thus the eigenvectors q1 and q2 are orthogonal.

SLIDE 23

Symmetric Matrices (contd.)

Symmetric matrices with distinct eigenvalues

Let S be an n × n symmetric matrix with n distinct eigenvalues and let q1, . . . , qn be the corresponding orthonormal eigenvectors. Let Q be the n × n matrix consisting of q1, . . . , qn as its columns. Then S = QΛQ^−1 = QΛQ^T. Furthermore,

    S = λ1 q1 q1^T + λ2 q2 q2^T + · · · + λn qn qn^T

For the example above:

    S = [ 3 1 ] = 4 [ 1/√2 ] [ 1/√2  1/√2 ] + 2 [  1/√2 ] [ 1/√2  −1/√2 ]
        [ 1 3 ]     [ 1/√2 ]                    [ −1/√2 ]

SLIDE 24

Summary for Symmetric Matrices

Theorem

All eigenvalues of a real symmetric n × n matrix S are real. Moreover, S can be expressed as S = QΛQ^T, where Q consists of an orthonormal basis of R^n formed by the n eigenvectors of S, and Λ is a diagonal matrix consisting of the n eigenvalues of S. Furthermore,

    S = λ1 q1 q1^T + . . . + λn qn qn^T.

Since the columns of Q form a basis of R^n, any vector x can be expressed as a linear combination x = α1 q1 + . . . + αn qn. Consider

    x · qi = (α1 q1 + . . . + αn qn) · qi = αi

SLIDE 25

Symmetric Matrices

Claim

If S = QΛQ^T, then

    S^−1 = (1/λ1) q1 q1^T + . . . + (1/λn) qn qn^T

Proof sketch: S = QΛQ^T = λ1 q1 q1^T + . . . + λn qn qn^T, and

    S S^−1 = (λ1 q1 q1^T + . . . + λn qn qn^T)((1/λ1) q1 q1^T + . . . + (1/λn) qn qn^T) = I

as q1, . . . , qn are orthonormal.

SLIDE 26

Positive Definite Matrices

Symmetric matrix S is positive definite if all its eigenvalues > 0. It is positive semi-definite if all the eigenvalues are ≥ 0.

An Alternate Characterization

Let S be an n × n real symmetric matrix. If x^T S x > 0 holds for all non-zero vectors x ∈ R^n, then all the eigenvalues of S are > 0.

SLIDE 27

Positive Definite Matrices


Let λi be an eigenvalue of S and let qi be its corresponding unit eigenvector, so that qi^T qi = 1. Since S is symmetric, we know that λi is real. Now we have

    λi = λi qi^T qi = qi^T λi qi = qi^T S qi.

But qi^T S qi > 0, hence λi > 0.
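A quick numerical check of positive definiteness (an illustration using the symmetric example S from the earlier slides):

```python
import numpy as np

S = np.array([[3.0, 1.0], [1.0, 3.0]])
print(np.linalg.eigvalsh(S))   # [2. 4.] : all eigenvalues > 0, so S is positive definite
np.linalg.cholesky(S)          # Cholesky factorization succeeds exactly when S is positive definite
```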

SLIDE 28

Eigenvalue Identities

Trace

Let λ1, . . . , λn be the eigenvalues of an n × n real matrix A. Then

    trace(A) = a11 + a22 + . . . + ann = λ1 + λ2 + . . . + λn

Determinant

    det(A) = λ1 λ2 · · · λn
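Both identities can be checked numerically (a sketch, reusing the earlier 2 × 2 example with eigenvalues 1 and 5):

```python
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])   # eigenvalues 1 and 5
vals = np.linalg.eigvals(A)

print(np.trace(A), vals.sum())           # both 6.0
print(np.linalg.det(A), vals.prod())     # both 5.0 (up to floating point)
```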

SLIDE 29

Markov Matrices

Figure: transition diagram on states P, Q, R; from P to Q and to R with probability 1/2 each, from Q to P with 1/3 and to R with 2/3, from R to P with 1/3 and to Q with 2/3.

Transition matrix (entry (i, j) = probability of moving from state j to state i):

         P    Q    R
    P    0   1/3  1/3
    Q   1/2   0   2/3
    R   1/2  2/3   0

SLIDE 30

Markov Chain

• Let X0, X1, . . . be a sequence of random variables that evolve over time: at time 0 we have X0, followed by X1 at time 1, and so on.
• Assume each Xi takes a value from the set {1, . . . , n}, which represents the set of states.
• This sequence is a Markov chain if the probability that Xm+1 equals a particular state αm+1 ∈ {1, . . . , n} depends only on the state of Xm, and is completely independent of the states of X0, . . . , Xm−1.
• Memoryless property:

      P[Xm+1 = αm+1 | Xm = αm, Xm−1 = αm−1, . . . , X0 = α0] = P[Xm+1 = αm+1 | Xm = αm],

  where α0, . . . , αm+1 ∈ {1, . . . , n}.

SLIDE 31

Memoryless Property

Figure: the transition diagram and transition matrix on states P, Q, R (same as on the Markov Matrices slide).

SLIDE 32

Markov Matrices

What is a Markov Matrix? A square matrix A is a Markovian matrix if

1. A[i, j] = the probability of transition from state j to state i.
2. The sum of the values within any column is 1 (= the probability of leaving a state to any of the possible states).
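A small sketch verifying the column-sum condition for the P, Q, R chain from the earlier slide:

```python
import numpy as np

# Entry (i, j) = probability of moving from state j to state i
A = np.array([[0,   1/3, 1/3],
              [1/2, 0,   2/3],
              [1/2, 2/3, 0  ]])
print(A.sum(axis=0))  # [1. 1. 1.] : every column sums to 1
```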

SLIDE 33

State Transitions

Start in an initial state and, in each successive step, make a transition from the current state to the next state according to the transition probabilities.

1. What is the probability of reaching state j after taking n steps, starting from state i?
2. Given an initial probability vector representing the probabilities of starting in the various states, what is the steady state? After traversing the chain for a large number of steps, what is the probability of landing in the various states?

SLIDE 34

Types of States

Recurrent State: A state i is recurrent if, starting from state i, with probability 1 we return to state i after making finitely many transitions.
Transient State: A state i is transient if it is not recurrent, i.e. there is a non-zero probability of never returning to state i.

Figure: Recurrent States = {1, 2, 3}; Transient States = {4, 5, 6}.

SLIDE 35

Irreducible Markov Chains

A Markov chain is irreducible if it is possible to go between any pair of states in a finite number of steps. Otherwise it is called reducible. Observation: If the graph is strongly connected then it is irreducible.


SLIDE 36

Aperiodic Markov Chains

Period of a state

The period of a state i is the greatest common divisor (GCD) of all possible numbers of steps it takes the chain to return to state i, starting from i. Note: if there is no way to return to i starting from i, then its period is undefined.

Aperiodic Markov Chain

A Markov chain is aperiodic if the period of each of its states is 1.

SLIDE 37

Eigenvalues of Markov Matrices

A =   1/3 1/3 1/2 2/3 1/2 2/3   Eigenvalues of A are the roots of det(A − λI) = 0 Eigenvalue Eigenvector λ1 = 1 v1 = (2/3, 1, 1) λ2 = −2/3 v2 = (0, −1, 1) λ3 = −1/3 v3 = (−2, 1, 1) Observe: Largest (principal) eigenvalue is 1 and the corresponding (principal) eigenvector is (2/3, 1, 1). Note that Avi = λivi, for i = 1, . . . , 3. Any vector can be converted to a unit vector:

v ||v||

Example: v1 = ( 2

3, 1, 1) → 3 √ 22( 2 3, 1, 1)
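Each eigenpair on this slide can be verified directly (a NumPy sketch):

```python
import numpy as np

A = np.array([[0,   1/3, 1/3],
              [1/2, 0,   2/3],
              [1/2, 2/3, 0  ]])

for lam, v in [(1, (2/3, 1, 1)), (-2/3, (0, -1, 1)), (-1/3, (-2, 1, 1))]:
    v = np.array(v)
    print(np.allclose(A @ v, lam * v))  # True, for all three pairs
```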

SLIDE 38

Principal Eigenvalue of Markov Matrices

Principal Eigenvalue

The largest eigenvalue of a Markovian matrix is 1. See Notes on Algorithm Design for the proof. Idea: let B = A^T and let 1 denote the all-ones vector. Then 1 is an eigenvector of B, as B1 = 1 · 1, so 1 is an eigenvalue of B and hence of A (A and A^T have the same eigenvalues). Using contradiction, show that B cannot have any eigenvalue > 1.

SLIDE 39

Eigenvalues of Powers of A

A =   1/3 1/3 1/2 2/3 1/2 2/3   Note that all the entries in A2 are > 0 and all the entries within a column still adds to 1. A2 =   1/3 2/9 2/9 1/3 11/17 1/6 1/3 1/6 11/17  

Ak is Markovian

If the entries within each column of A adds to 1, then entries within each column of Ak, for any integer k > 0, will add to 1.

SLIDE 40

Random Surfer Model

Initial: surfer with probability vector u0 = (1/3, 1/3, 1/3).

    u1 = A u0 = [  0   1/3  1/3 ] [ 1/3 ]   [ 4/18 ]
                [ 1/2   0   2/3 ] [ 1/3 ] = [ 7/18 ]
                [ 1/2  2/3   0  ] [ 1/3 ]   [ 7/18 ]

    u2 = A u1 = (7/27, 10/27, 10/27)

Likewise, we compute u3 = A u2 = (20/81, 61/162, 61/162), u4 = A u3 = (61/243, 91/243, 91/243), u5 = A u4 = (182/729, 547/1458, 547/1458), . . .

    u∞ = (0.25, 0.375, 0.375) = (2/8, 3/8, 3/8)
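The limit can be reached by simple power iteration (a sketch; 50 iterations are plenty here, since the second-largest eigenvalue has magnitude 2/3):

```python
import numpy as np

A = np.array([[0,   1/3, 1/3],
              [1/2, 0,   2/3],
              [1/2, 2/3, 0  ]])

u = np.array([1/3, 1/3, 1/3])
for _ in range(50):      # repeatedly apply A to the probability vector
    u = A @ u
print(u)                 # approaches [0.25  0.375 0.375]
```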

SLIDE 41

Linear Combination of Eigenvectors

    u0 = (1/3, 1/3, 1/3) = c1 (2/3, 1, 1) + c2 (0, −1, 1) + c3 (−2, 1, 1)

    u1 = A u0 = c1 A v1 + c2 A v2 + c3 A v3 = c1 λ1 v1 + c2 λ2 v2 + c3 λ3 v3    (as A vi = λi vi)

Thus,

    u1 = c1 λ1 (2/3, 1, 1) + c2 λ2 (0, −1, 1) + c3 λ3 (−2, 1, 1)

SLIDE 42

Linear Combination of Eigenvectors(contd.)

In general, for integer k > 0, uk = A^k u0 = c1 λ1^k v1 + c2 λ2^k v2 + c3 λ3^k v3, i.e.

    uk = c1 λ1^k (2/3, 1, 1) + c2 λ2^k (0, −1, 1) + c3 λ3^k (−2, 1, 1)

and that equals

    uk = c1 1^k (2/3, 1, 1) + c2 (−2/3)^k (0, −1, 1) + c3 (−1/3)^k (−2, 1, 1)

SLIDE 43

Linear Combination of Eigenvectors(contd.)

For large values of k, (−2/3)^k → 0 and (−1/3)^k → 0, and the above expression reduces to

    uk ≈ c1 (2/3, 1, 1) = (3/8) (2/3, 1, 1) = (2/8, 3/8, 3/8)

Note that the value c1 = 3/8 is obtained by solving the equation u0 = c1 v1 + c2 v2 + c3 v3 for u0 = (1/3, 1/3, 1/3).
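Solving for the coefficients c1, c2, c3 is a small linear system (a NumPy sketch):

```python
import numpy as np

# Eigenvectors v1, v2, v3 as the columns of V
V = np.column_stack([(2/3, 1, 1), (0, -1, 1), (-2, 1, 1)])
u0 = np.array([1/3, 1/3, 1/3])

c = np.linalg.solve(V, u0)
print(c)  # c[0] = 0.375 = 3/8
```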

SLIDE 44

Linear Combination of Eigenvectors(contd.)

Suppose u0 = (1/4, 1/4, 1/2). Then

    u1 = A u0 = (1/4, 11/24, 7/24)
    u2 = A u1 = (1/4, 23/72, 31/72)
    u3 = A u2 = (1/4, 89/216, 73/216)
    . . .
    u∞ = (2/8, 3/8, 3/8)

SLIDE 45

Convergence?

Entries in A^k

Assume that all the entries of a Markov matrix A, or of some finite power A^k of A for some integer k > 0, are strictly > 0. Then A corresponds to an irreducible aperiodic Markov chain M.

Irreducible: for any pair of states i and j, it is always possible to go from state i to state j in a finite number of steps with positive probability.
Period of a state i: the GCD of all possible numbers of steps it takes the chain to return to state i, starting from i.
Aperiodic: M is aperiodic if the period of each of its states is 1.
SLIDE 46

Perron-Frobenius Theorem

Assume A corresponds to an irreducible aperiodic Markov chain M. The Perron-Frobenius Theorem from linear algebra states that:

1. The largest eigenvalue 1 of A is unique.
2. All other eigenvalues of A have magnitude strictly smaller than 1.
3. All the coordinates of the eigenvector v1 corresponding to the eigenvalue 1 are > 0.
4. The steady state corresponds to the eigenvector v1.

SLIDE 47

Pagerank Algorithm

Problem: How to rank the web-pages? A ranking assigns a real number to each web-page: the higher the number, the more important the page. The ranking needs to be automated, as the web is extremely large. We will study the PageRank algorithm. Source: Page, Brin, Motwani, Winograd, "The PageRank citation ranking: Bringing order to the Web", published as a technical report in 1998.

SLIDE 48

Web as a Graph

Define a positively weighted directed graph G = (V, E):

• Each web-page is a vertex of G.
• If web-page u points (links) to web-page v, there is a directed edge from u to v.
• The weight of edge uv is 1/out-degree(u).

Assume V = {v1, . . . , vn}. The n × n adjacency matrix M of G is, for 1 ≤ i, j ≤ n:

    M(i, j) = 1/out-degree(vj)   if vjvi ∈ E
            = 0                  otherwise
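Building M from a link structure is straightforward (a sketch; the `out_links` dictionary below is a hypothetical 5-page web, not taken from the slides):

```python
import numpy as np

# Hypothetical link structure: out_links[j] = pages that page j links to (0-indexed)
out_links = {0: [2, 3], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2], 4: []}

n = len(out_links)
M = np.zeros((n, n))
for j, targets in out_links.items():
    for i in targets:
        M[i, j] = 1 / len(targets)   # weight 1/out-degree(v_j)

print(M.sum(axis=0))  # [1. 1. 1. 1. 0.] : columns of linking pages sum to 1; the sink column is 0
```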
SLIDE 49

An Example

Figure: a 5-vertex web graph on v1, . . . , v5, where v5 is a sink (no outgoing links).

    M = [  0   1/2  1/3   0   0 ]
        [  0    0    0   1/2  0 ]
        [ 1/2   0    0   1/2  0 ]
        [ 1/2  1/2  1/3   0   0 ]
        [  0    0   1/3   0   0 ]

SLIDE 50

Avoiding Sink Nodes

Idea: Make sink nodes point to all other nodes.

    M = [  0   1/2  1/3   0   0 ]       [  0   1/2  1/3   0   1/5 ]
        [  0    0    0   1/2  0 ]       [  0    0    0   1/2  1/5 ]
        [ 1/2   0    0   1/2  0 ]   →   [ 1/2   0    0   1/2  1/5 ]  = Q
        [ 1/2  1/2  1/3   0   0 ]       [ 1/2  1/2  1/3   0   1/5 ]
        [  0    0   1/3   0   0 ]       [  0    0   1/3   0   1/5 ]

SLIDE 51

Teleportation - Key Idea

Define K = αQ + ((1 − α)/n) E

Teleportation parameter: 0 < α < 1, e.g. α = 0.9. E is an n × n matrix of all 1s.

Observations on K:

1. Each entry of K is > 0.
2. The entries within each column sum to 1.
3. K satisfies the requirements of an irreducible aperiodic Markov chain.
4. Its largest eigenvalue is 1.
5. By the Perron-Frobenius Theorem, the steady state (the page ranks) corresponds to the principal eigenvector.
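Putting the pieces together, a minimal power-iteration sketch of PageRank on K = αQ + ((1 − α)/n) E (run here on the small 3-state chain from the Markov slides, purely as an illustration):

```python
import numpy as np

def pagerank(Q, alpha=0.9, iters=200):
    """Power iteration on K = alpha*Q + ((1 - alpha)/n) * E."""
    n = Q.shape[0]
    K = alpha * Q + (1 - alpha) / n * np.ones((n, n))
    u = np.full(n, 1.0 / n)         # start from the uniform distribution
    for _ in range(iters):
        u = K @ u                   # one step of the random surfer
    return u

# Illustration: the 3-state P, Q, R chain (already irreducible and aperiodic)
A = np.array([[0,   1/3, 1/3],
              [1/2, 0,   2/3],
              [1/2, 2/3, 0  ]])
r = pagerank(A)
print(r, r.sum())  # all ranks > 0, and they sum to 1
```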

SLIDE 52

Conclusions

Computational Issues: K = αQ + ((1 − α)/n) E, where Q is sparse and E has special structure.

Variants: teleportation can be made to favor specific pages.

The real story is not that simple!