

SLIDE 1

Calculating Lyapunov exponents for random products of positive matrices

Mark Pollicott

Warwick University

7 July, 2020

SLIDE 2

Greetings from Kenilworth

This is an 1830 painting of the ruin of Kenilworth Castle by J.M.W. Turner. Of course, this is 190 years out of date: the second picture is a modern photograph of the castle.

Mark Pollicott (Warwick University) Calculating Lyapunov exponents for random products of positive matrices 7 July, 2020 2 / 29

SLIDE 3

Overview

We want to discuss the (top) Lyapunov exponent for random products of matrices. We will discuss three ways (not) to estimate the Lyapunov exponent, and a 4th way to compute it, in the particular case that the matrices are positive.

Why positive matrices? Because the method works. The new ingredient is the improved estimate on the error in the approximation.

Why is it interesting? There are other, different approaches (e.g., Bandtlow-Slipantshuk, Bahsoun, Braviera-Duarte, Galatolo-Monge-Nisoli, ...) but I like this approach because it uses some interesting underlying mathematics, and "Tudo vale a pena quando a alma não é pequena" ("Everything is worthwhile when the soul is not small") - Fernando Pessoa


SLIDE 4

A single matrix: Basic linear algebra

Let A be a single k × k matrix (k ≥ 2) with entries in R. Some comforting concepts:
The eigenvalues λ1, · · · , λk of the matrix A;
The determinant det A = ∏_i λi ∈ R;
The spectral radius spr(A) = max_i |λi|.

"Computational mathematics is mainly based on two ideas: Taylor series, and linear algebra" - Nick Trefethen

We are going to use both.


SLIDE 5

Spectral radius formula

The spectral radius also comes from

Theorem (The spectral radius formula)

spr(A) = lim_{n→+∞} ‖A^n‖^{1/n}

where we can define the norm of a matrix A = (a_ij) by ‖A‖ = max_{i,j} |a_ij|. (The specific choice of norm isn't so important.)

I.M. Gelfand

Surprisingly, there doesn't seem to be any reference to this result before Gelfand's paper in 1941. [N.B., After the lecture Michael Benedicks pointed out that there was a similar result by Beurling a few years earlier than that of Gelfand.]
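Gelfand's formula is easy to check numerically. A minimal sketch with the max-entry norm from the slide (the test matrix is just a sample, not one from the talk):

```python
import numpy as np

# Numerical check of Gelfand's formula spr(A) = lim ||A^n||^(1/n),
# using the max-entry norm ||A|| = max_{i,j} |a_ij| from the slide.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
spr = max(abs(np.linalg.eigvals(A)))        # spectral radius = (3 + sqrt(5))/2

estimates = []
for n in [1, 5, 20, 80]:
    An = np.linalg.matrix_power(A, n)
    estimates.append(np.abs(An).max() ** (1.0 / n))   # ||A^n||^(1/n)
print(spr, estimates)   # the estimates creep up towards spr ≈ 2.618
```

The convergence is only O(1/n) in the exponent, which foreshadows the slow convergence of Plan A below.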


SLIDE 6

What about two matrices?

Let us begin with two k × k matrices A1, A2 (k ≥ 2) with entries in R. Consider weights 0 < p1, p2 < 1 (p1 + p2 = 1). We can consider:
the 2^n possible products A_{i1} A_{i2} · · · A_{in} (i1, · · · , in ∈ {1, 2});
their norms ‖A_{i1} A_{i2} · · · A_{in}‖; and
the weights p_{i1} p_{i2} · · · p_{in} (i1, · · · , in ∈ {1, 2}).
We can define the (top) Lyapunov exponent by

λ := lim_{n→+∞} (1/n) Σ_{i1,...,in} (p_{i1} · · · p_{in}) log ‖A_{i1} · · · A_{in}‖.

A. Lyapunov

Trivial case: if A = A1 = A2 then λ = log spr(A).


SLIDE 7

What about more matrices?

Everything generalizes in the obvious way to more matrices. Consider k × k matrices A1, . . . , Ad with entries in R (d ≥ 2, k ≥ 2). Consider weights 0 < p1, . . . , pd < 1 (p1 + · · · + pd = 1). We can consider:
the d^n possible products A_{i1} A_{i2} · · · A_{in} (i1, · · · , in ∈ {1, . . . , d});
their norms ‖A_{i1} A_{i2} · · · A_{in}‖; and
the weights p_{i1} p_{i2} · · · p_{in} (i1, · · · , in ∈ {1, . . . , d}).
We can define the (top) Lyapunov exponent by

λ := lim_{n→+∞} (1/n) Σ_{i1,...,in} (p_{i1} · · · p_{in}) log ‖A_{i1} · · · A_{in}‖.

However, henceforth I will usually restrict to the case of two 2 × 2 matrices (mainly to avoid the problem of mixing up k and d) and take p1 = p2 = 1/2 (mainly to cut down on notation).


SLIDE 8

How easy is it to compute λ?

Question: Given matrices A1, A2 and 0 < p1, p2 < 1, how easy is it to compute λ?

Sir John Kingman (1973): "Pride of place among the unsolved problems of subadditive ergodic theory must go to calculation of the value λ ... and indeed this usually seems to be a problem of some depth."

Yuval Peres (1992): "We turn now to the excruciating problem of the subject: Devise reasonably general and effective algorithms for explicit calculation (or at least approximation) of Lyapunov exponents."

At least we have the definition to work from...


SLIDE 9

Plan A: Compute λ using the definition

We can try this out with a simple example. Consider

A1 = ( 2 1 ; 1 1 ) and A2 = ( 3 1 ; 2 1 ) and p1 = p2 = 1/2.

Then working from the definition:

(1/1) Σ_i (1/2) log ‖A_i‖ = 1.77767 . . .
(1/2) Σ_{i,j} (1/4) log ‖A_i A_j‖ = 1.45723 . . .
(1/3) Σ_{i,j,k} (1/8) log ‖A_i A_j A_k‖ = 1.35236 . . .
(1/4) Σ_{i,j,k,l} (1/16) log ‖A_i A_j A_k A_l‖ = 1.30008 . . .
(1/5) Σ_{i,j,k,l,m} (1/32) log ‖A_i A_j A_k A_l A_m‖ = 1.26872 . . .

Unfortunately, this doesn't converge particularly quickly (typically O(1/n)).

"In general, things either work out or they don't, and if they don't, you figure out something else, a plan B. There's nothing wrong with plan B" - Dick Van Dyke
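The figures above can be reproduced by brute force. A sketch, taking ‖A‖ to be the sum of the entries of A (an assumption: this particular norm reproduces the slide's digits, and the choice of norm does not change the limit λ):

```python
import itertools
import numpy as np

# Plan A by brute force: (1/n) * sum over all 2^n words (i1,...,in) of
# (1/2^n) * log ||A_{i1} ... A_{in}||, with ||.|| the sum of the entries.
A = [np.array([[2.0, 1.0], [1.0, 1.0]]),
     np.array([[3.0, 1.0], [2.0, 1.0]])]

def plan_a(n):
    total = 0.0
    for word in itertools.product([0, 1], repeat=n):
        P = np.eye(2)
        for i in word:
            P = P @ A[i]
        total += np.log(P.sum()) / 2 ** n      # weight p_{i1}...p_{in} = 1/2^n
    return total / n

for n in range(1, 6):
    print(n, round(plan_a(n), 5))   # 1.77767, 1.45723, 1.35236, 1.30008, 1.26872
```

The cost grows like 2^n while the error shrinks only like 1/n, which is why Plan A is hopeless in practice.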


SLIDE 10

A pointwise approach to Lyapunov exponents

We first need to introduce the following notation. Let Σ = {1, 2}^N; then we can write x = (x_n)_{n=1}^∞ ∈ Σ. Let µ = (p1, p2)^N be the usual Bernoulli measure on Σ.

Theorem (Furstenberg-Kesten, Sir John Kingman)

For almost all (µ) x ∈ Σ,

(1/n) log ‖A_{x1} · · · A_{xn}‖ → λ, as n → +∞.

This is a (subadditive) ergodic theorem.

H. Furstenberg (2020 Abel prize winner), H. Kesten, Sir John Kingman


SLIDE 11

Plan B: Compute λ using random products

Let us try a little experiment. We can (again) let

A1 = ( 2 1 ; 1 1 ) and A2 = ( 3 1 ; 2 1 ) and p1 = p2 = 1/2.

We can compute (for example 15) values of (1/1000) log ‖A_{x1} · · · A_{x1000}‖ for products of "random" choices of 1000 matrices:

1.14649 . . . 1.14777 . . . 1.14924 . . . 1.15448 . . . 1.15181 . . .
1.14341 . . . 1.14569 . . . 1.15094 . . . 1.14975 . . . 1.14683 . . .
1.15213 . . . 1.14924 . . . 1.13802 . . . 1.15244 . . . 1.14983 . . .

This suggests the value of the Lyapunov exponent to a couple of decimal places, but it is not clear how to get a rigorous estimate, so let us turn to a third approach.
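The random-product experiment above can be sketched as follows (the seed and the renormalization at every step are implementation choices, not part of the slide; renormalizing avoids floating-point overflow for long products):

```python
import numpy as np

# Plan B: one "random product" estimate (1/n) log ||A_{x1}...A_{xn}||
# for a random word x of length n, with p1 = p2 = 1/2.
rng = np.random.default_rng(0)
A = [np.array([[2.0, 1.0], [1.0, 1.0]]),
     np.array([[3.0, 1.0], [2.0, 1.0]])]

def random_product_estimate(n=1000):
    P = np.eye(2)
    log_norm = 0.0
    for i in rng.integers(0, 2, size=n):
        P = A[i] @ P
        s = P.sum()              # sum-of-entries norm (all entries positive)
        log_norm += np.log(s)
        P /= s                   # renormalize so entries stay O(1)
    return log_norm / n

samples = [random_product_estimate() for _ in range(15)]
print(min(samples), max(samples))   # scatter around λ ≈ 1.14...
```

As on the slide, the samples agree to about two decimal places but give no rigorous error bar.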


SLIDE 12

Positive matrices

Henceforth we will restrict to positive matrices. We say that the matrices A1, A2 are positive if all the entries are strictly larger than zero. When k = 2 we can write

A1 = ( a(1)_11 a(1)_12 ; a(1)_21 a(1)_22 ) and A2 = ( a(2)_11 a(2)_12 ; a(2)_21 a(2)_22 )

and then we require a(1)_ij > 0 and a(2)_ij > 0 for 1 ≤ i, j ≤ 2.

Let Ai : R^2 → R^2 (i = 1, 2) be the usual linear action of the matrix Ai. We can consider the positive quadrant

R^2_+ = {(x, y) ∈ R^2 : x, y ≥ 0} ⊂ R^2.

Clearly positivity implies Ai(R^2_+) ⊂ R^2_+ for i = 1, 2.

[Figure: the quadrant R^2_+ and its image Ai(R^2_+) inside it.]


SLIDE 13

Restriction to a simplex

We can consider the "restriction" to the simplex ∆ = {(x, y) ∈ R^2_+ : x + y = 1}.

Let Ai : ∆ → ∆ (i = 1, 2) be the projective action defined by

(x, y) → Ai(x, y) / ‖Ai(x, y)‖_1.

Since A1, A2 are positive we have Ai(∆) ⊂ int(∆).

[Figure: the simplex ∆, a point (x, y) and its image Ai(x, y).]

We can just take the first coordinate of Ai(x, y) = (ξ, η) to get a map Ti : [0, 1] → [0, 1] defined by Ti : x → ξ.


SLIDE 14

The contractions of [0, 1]

Of course we can explicitly write expressions for Ti in terms of the entries of Ai (i = 1, 2). More precisely, we associate to the positive matrices

A1 = ( a(1)_11 a(1)_12 ; a(1)_21 a(1)_22 ) and A2 = ( a(2)_11 a(2)_12 ; a(2)_21 a(2)_22 )

the associated maps T1 : [0, 1] → [0, 1] and T2 : [0, 1] → [0, 1], which by a little calculation are Möbius maps given by

Ti(x) = [ (a(i)_11 − a(i)_12) x + a(i)_12 ] / [ (a(i)_11 + a(i)_21 − a(i)_12 − a(i)_22) x + a(i)_12 + a(i)_22 ]

(i = 1, 2). The positivity of the matrices ensures that T1 and T2 are both (eventually) contractions of the interval.
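The little calculation above is mechanical enough to script. A sketch with exact rational arithmetic, checked against the matrices of the examples later in the talk:

```python
from fractions import Fraction

# Given a positive 2x2 matrix A = ((a11, a12), (a21, a22)), return the
# coefficients (alpha, beta, gamma, delta) of the Möbius map on this slide,
# T(x) = (alpha*x + beta) / (gamma*x + delta).
def mobius(A):
    (a11, a12), (a21, a22) = A
    return (a11 - a12, a12, a11 + a21 - a12 - a22, a12 + a22)

def T(A, x):
    a, b, c, d = mobius(A)
    return (a * x + b) / (c * x + d)

A1 = ((Fraction(2), Fraction(1)), (Fraction(1), Fraction(1)))
A2 = ((Fraction(3), Fraction(1)), (Fraction(2), Fraction(1)))

# For A1 this gives T1(x) = (x+1)/(x+2); for A2, T2(x) = (2x+1)/(3x+2)
# (exactly the contractions used in Example 1 later in the talk).
print(mobius(A1), mobius(A2))
print(T(A1, Fraction(0)), T(A1, Fraction(1)))   # [0,1] maps strictly inside itself
```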


SLIDE 15

Plan C: Use transfer operators

Enter a deus ex machina: We can define a linear operator L : C^1([0, 1]) → C^1([0, 1]) on C^1 functions by

Lw(z) = p1 w(T1 z) + p2 w(T2 z)

where w ∈ C^1([0, 1]) and z ∈ [0, 1]. We can write the nth iterate (for n ≥ 2) as

L^n w(z) = Σ_{|i|=n} p_i w(T_i z)

where
1. i = (i1, · · · , in) ∈ {1, 2}^n and |i| = n;
2. T_i = T_{i1} ◦ · · · ◦ T_{in} : [0, 1] → [0, 1]; and
3. p_i = p_{i1} · · · p_{in}.

Theorem (Y. Peres)

One can associate functions f1, f2 ∈ C^1([0, 1]) such that p1(L^n f1) + p2(L^n f2) → λ, as n → +∞. In fact, there exists 0 < θ < 1 such that p1(L^n f1) + p2(L^n f2) = λ + O(θ^n).
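A numerical sketch of this Plan C for the matrices of the earlier experiment. The slide does not spell out f1, f2; here we take f_i(x) = log ‖A_i (x, 1−x)‖_1, as in Furstenberg's formula for positive matrices, which is an assumption consistent with the setup (for A1, A2 above this gives f1(x) = log(x+2) and f2(x) = log(3x+2)):

```python
import math

# Plan C, evaluated naively: iterate the transfer operator n times
# (2^n branches) and form p1 L^n f1 + p2 L^n f2 at a base point.
p = [0.5, 0.5]
T = [lambda x: (x + 1) / (x + 2),          # projective maps for A1, A2
     lambda x: (2 * x + 1) / (3 * x + 2)]
f = [lambda x: math.log(x + 2),            # log ||A1 (x, 1-x)||_1 (assumed f1)
     lambda x: math.log(3 * x + 2)]        # log ||A2 (x, 1-x)||_1 (assumed f2)

def Ln(w, z, n):
    """n-th iterate of the transfer operator: L^n w(z) = sum_i p_i (L^{n-1} w)(T_i z)."""
    if n == 0:
        return w(z)
    return p[0] * Ln(w, T[0](z), n - 1) + p[1] * Ln(w, T[1](z), n - 1)

est = p[0] * Ln(f[0], 0.5, 12) + p[1] * Ln(f[1], 0.5, 12)
print(est)   # ≈ 1.1433..., matching Example 1 later in the talk
```

Note the exponential cost: the 2^n branches in Ln are exactly the "exponential amount of work" complained about on the next slide.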


SLIDE 16

Unfortunately ...

Although p1 L^n f1 + p2 L^n f2 is exponentially close to λ, there is an exponential amount of work needed to compute these approximations. However, this approach is good for something:
The Lyapunov exponent λ depends analytically on the entries in the matrices (Ruelle);
The Lyapunov exponent λ depends analytically on the weights p1, p2 (Peres).
Of course, the positivity of the matrices can be replaced with "preserving a cone", etc., and even without positivity we still have the work of Le Page under different assumptions.


SLIDE 17

Complex functions

Of course, for a single matrix A the eigenvalues come from zeros of z → det(I − zA). Can we get the Lyapunov exponents from a complex function? To define a suitable complex function we recall some notation:
i = (i1, · · · , in) ∈ {1, 2}^n and |i| = n;
T_i = T_{i1} ◦ · · · ◦ T_{in} : [0, 1] → [0, 1];
x_i = T_i(x_i) is the unique fixed point for the map T_i; and
p_i = p_{i1} · · · p_{in}.
Then

d(z, s) := exp( − Σ_{n=1}^∞ (z^n/n) Σ_{|i|=n} p_i |(T_i)′(x_i)|^s / (1 − (T_i)′(x_i)) )

which converges for |z| and |s| sufficiently small.


SLIDE 18

Properties of d(z, s)

The definition looks a little complicated, but let us concentrate on the properties of the function:
d(z, s) is analytic;
For each s > 0 we have d(0, s) = 1; and
the smallest zero z(s) > 0 (i.e., d(z(s), s) = 0) is simple.
These follow (essentially) from results of Ruelle.

[Figure: the graph of z → d(z, s) and its smallest zero z(s).]


SLIDE 19

Plan D: d(z, s) and the Lyapunov exponent

We can write the Lyapunov exponent λ in terms of the zero z(s) of the function(s) z → d(z, s) (i.e., d(z(s), s) = 0).

Lemma (N. Jurga and I. Morris)

The Lyapunov exponent is given by λ = (1/2) z′(0).

(This improved on a clunkier formula by M.P.) By differentiating the identity d(z(s), s) = 0 we can now write λ in terms of d(z, s):

λ = −(1/2) ( ∂d(1, s)/∂s |_{s=0} ) ( ∂d(z, 0)/∂z |_{z=1} )^{−1},

since z(0) = 1.


SLIDE 20

Summary of Plan D

In case we have lost the plot, the basic idea is the following.
We want to estimate the Lyapunov exponent λ.
We write λ in terms of a function d(z, s).
d(z, s) was defined in terms of fixed points of compositions of contractions T1, T2 : [0, 1] → [0, 1].
The contractions T1, T2 : [0, 1] → [0, 1] are written in terms of the matrices A1 and A2.
And, of course, this generalizes to finitely many positive matrices in arbitrary dimensions. This looks a complicated piece of machinery, but it actually works reasonably effectively.

[Image: a Heath Robinson machine for making pancakes.]


SLIDE 21

Approximating the Lyapunov exponents

Of course z → d(z, s) depends on infinitely many computations, so taking pity on my computer we need to make an approximation. We can expand as a Taylor series

d(z, s) = 1 + Σ_{n=1}^∞ a_n(s) z^n

and approximate it by a polynomial

d^(N)(z, s) = 1 + Σ_{n=1}^N a_n(s) z^n,

for some N ≥ 1. We can then approximate

λ = −(1/2) ( ∂d(1, t)/∂t |_{t=0} ) ( ∂d(z, 0)/∂z |_{z=1} )^{−1}

by

λ_N = −(1/2) ( ∂d^(N)(1, t)/∂t |_{t=0} ) ( ∂d^(N)(z, 0)/∂z |_{z=1} )^{−1}.


SLIDE 22

Example 1

Let A1 = ( 2 1 ; 1 1 ) and A2 = ( 3 1 ; 2 1 ) and p1 = p2 = 1/2.

We can then consider the associated two contractions

T1(x) = (x + 1)/(x + 2) and T2(x) = (2x + 1)/(3x + 2).

Letting N = 9, say, we can use this approach to estimate

λ ≈ λ9 = 1.143311035041828694244 . . . .

Question: To how many decimal places is this accurate?
Answer: It is accurate to 20 decimal places.
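The whole of Plan D can be sketched in a few lines for this example. Two ingredients are not spelled out on the slides and are easy consequences of the Möbius formula (stated here as assumptions): the fixed point x_i of T_i corresponds to the Perron eigenvector of the product A_{i1} · · · A_{in}, and the derivative (T_i)′(x_i) equals the eigenvalue ratio λ2/λ1 of that product. The coefficients a_n(s) of d(z, s) = exp(−Σ_n c_n(s) z^n) then come from the standard exponential-of-a-power-series recursion:

```python
import itertools
import math
import numpy as np

A = [np.array([[2.0, 1.0], [1.0, 1.0]]),
     np.array([[3.0, 1.0], [2.0, 1.0]])]
N = 9

# c_n(0) and (d/ds) c_n(s) |_{s=0}, where
# c_n(s) = (1/n) sum_{|i|=n} p_i |T_i'(x_i)|^s / (1 - T_i'(x_i)).
c = [0.0] * (N + 1)
cs = [0.0] * (N + 1)
for n in range(1, N + 1):
    for word in itertools.product([0, 1], repeat=n):
        P = np.eye(2)
        for i in word:
            P = P @ A[i]
        ev = sorted(np.linalg.eigvals(P), key=abs)
        r = float(np.real(ev[0] / ev[1]))     # (T_i)'(x_i) = lambda_2/lambda_1
        w = (0.5 ** n) / (1.0 - r)            # p_i = (1/2)^n
        c[n] += w / n
        cs[n] += w * math.log(abs(r)) / n

# Taylor coefficients of d = exp(-C):  n a_n = -sum_k k c_k a_{n-k},
# and their s-derivatives b_n at s = 0 (differentiate the recursion).
a = [1.0] + [0.0] * N
b = [0.0] * (N + 1)
for n in range(1, N + 1):
    a[n] = -sum(k * c[k] * a[n - k] for k in range(1, n + 1)) / n
    b[n] = -sum(k * (cs[k] * a[n - k] + c[k] * b[n - k]) for k in range(1, n + 1)) / n

# lambda_N = -(1/2) * d^(N)_s(1, 0) / d^(N)_z(1, 0)
lam = -0.5 * sum(b[1:]) / sum(n * a[n] for n in range(1, N + 1))
print(lam)   # ≈ 1.1433110350418287, agreeing with the slide
```

Even truncated at N = 9 (so only 2 + 4 + · · · + 512 words), this already exhausts double precision, in line with the 20-decimal-place claim above.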


SLIDE 23

Example 2

Let A1 = ( 11/10 1 ; 1/10 1 ) and A2 = ( 1 1/10 ; 1 11/10 ) and p1 = p2 = 1/2.

We can then consider the associated maps

T1(x) = ((1/10)x + 1) / (−(4/5)x + 2) and T2(x) = ((9/10)x + 1/10) / ((4/5)x + 6/5).

Letting N = 9, say, we can use this approach to estimate λ ≈ λ9 = 0.4660 . . . .

Question: To how many decimal places is this accurate?
Answer: It is accurate to 2 decimal places.


SLIDE 24

Error bounds

The main approximation we made was replacing the function d(z, s) by d^(N)(z, s) and so we need to bound the difference |d(z, s) − d^(N)(z, s)|.

1. It is relatively easy to show there exists 0 < θ < 1 such that

|d(z, s) − d^(N)(z, s)| = O( Σ_{n=N+1}^∞ |a_n(s) z^n| ) = O(θ^{N²})

leading to |λ − λ_N| = O(θ^{N²}).

2. Recall that previously one had that |λ − λ_N| = O(θ^N) (by Plan C = Peres' result). So we seem to be doing better.

However, realistically N cannot be too large (unless you have a really big computer) so we would also like to get the best possible bounds for |a_n(s)| for n ≥ N + 1 ≈ 10.
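To see how much stronger the superexponential bound is, a purely illustrative bit of arithmetic (θ = 0.5 is a made-up value, not one coming from the talk):

```python
# Comparing the two error rates at N = 9: the exponential O(theta^N) of
# Plan C versus the superexponential O(theta^(N^2)) of Plan D.
theta, N = 0.5, 9
print(theta ** N)        # 0.001953125   (Plan C-type bound)
print(theta ** (N * N))  # about 4.1e-25 (Plan D-type bound)
```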


SLIDE 25

Error bounds on an(s)

Recall that to estimate λ we expanded d(z, s) = 1 + Σ_{n=1}^∞ a_n(s) z^n and we want to get the best bounds on |a_n| for n ≥ N + 1 ≈ 10. We can use the classical approach of Grothendieck, Ruelle, Fried, Jurga, Morris, etc. to get bounds on the a_n(s) (called Euler bounds). But often we can do better by using improved bounds on the a_n(s) (called Computed bounds) using:

◮ More operator theory (Composition Operators, Approximation Numbers, Weyl's Inequality); and then
◮ A bit more computation to bound a_n for 10 ≈ N + 1 ≤ n ≤ M ≈ 500.

This is a variant on using ideas in recent(-ish) work of Jenkinson-P. (for computing Hausdorff Dimension of certain sets) and Jenkinson-P.-Vytnova (for computing Lyapunov exponents - the other type: for expanding maps of the interval). This improvement is more marked when there is less hyperbolicity. This is best illustrated by examples.


SLIDE 26

Example 1

Consider A1 = ( 2 1 ; 1 1 ) and A2 = ( 3 1 ; 2 1 ) and p1 = p2 = 1/2.

Recalling d(z, s) = d^(9)(z, s) + Σ_{n=10}^∞ a_n(s) z^n we can compare the new and old bounds on a_n(s) (e.g. when s = 0):

n    Old bound on a_n(0)         New bound on a_n(0)
10   1.088971 . . . × 10^−21     7.0265930 . . . × 10^−23
11   1.956063 . . . × 10^−26     6.3590078 . . . × 10^−28
12   1.171186 . . . × 10^−31     1.8728592 . . . × 10^−33
13   2.337478 . . . × 10^−37     1.8001868 . . . × 10^−39
14   1.555061 . . . × 10^−43     5.6589790 . . . × 10^−46
15   3.448469 . . . × 10^−50     5.8273585 . . . × 10^−53

In fact we can show that λ = 1.143311035041828694244 ± 3 × 10^−21.

Conclusion: In this example there is no significant improvement (perhaps only an order of magnitude or so) using the new bounds rather than the old bounds.


SLIDE 27

Example 2

Let A1 = ( 11/10 1 ; 1/10 1 ) and A2 = ( 1 1/10 ; 1 11/10 ) and p1 = p2 = 1/2.

Recalling d(z, s) = d^(9)(z, s) + Σ_{n=10}^∞ a_n(s) z^n we can compare the old and new bounds on a_n(s) (e.g., when s = 0):

n    Old bound on a_n(0)   New bound on a_n(0)
10   10.0321212 . . .      0.0000161 . . .
11   2.8224811 . . .       9.2280733 . . . × 10^−7
12   0.6450222 . . .       4.1184631 . . . × 10^−8
13   0.1203062 . . .       1.4468302 . . . × 10^−9
14   0.0183832 . . .       4.0200884 . . . × 10^−11
15   0.0023083 . . .       8.8694309 . . . × 10^−13

In fact we can show that λ = 0.4660 ± 0.003.

Conclusion: In this example the old error bounds are still bigger than the estimate (for N ≈ 9) whereas the newer bounds lead to a meaningful result.


SLIDE 28

The end of the talk

Obrigado


SLIDE 29

The organizers

We should take the opportunity to thank the organizers:

Ao Cai (Universidade de Lisboa), Pedro Duarte (Universidade de Lisboa), José Pedro Gaivão (Universidade de Lisboa), Silvius Klein (Pontifícia Universidade Católica do Rio de Janeiro), João Lopes Dias (Universidade de Lisboa), Telmo Peixe (Universidade de Lisboa), Jaqueline Siqueira (Universidade Federal do Rio de Janeiro)
