Random matrices and Gaussian multiplicative chaos

Nick Simm

Mathematics Institute, University of Warwick

Joint work with Gaultier Lambert and Dmitry Ostrovsky.

Optimal Point Configurations and Orthogonal Polynomials April 2017, CIEM

Research supported by Leverhulme fellowship ECF-2014-309

The Circular Unitary Ensemble

◮ Let $U_N$ be an $N \times N$ random matrix chosen uniformly from the unitary group.

◮ The joint distribution of points is an example of a (1d) ‘Coulomb gas’:
$$P(\theta_1, \ldots, \theta_N) \propto \prod_{j<k} |e^{i\theta_j} - e^{i\theta_k}|^2$$

◮ The eigenvalues $e^{i\theta_1}, \ldots, e^{i\theta_N}$ form a determinantal point process with kernel
$$K_N(\theta, \phi) = \sum_{j=0}^{N-1} p_j(\theta)\,\overline{p_j(\phi)} = \frac{\sin(N(\theta-\phi)/2)}{\sin((\theta-\phi)/2)}, \qquad p_j(\theta) = e^{ij\theta}.$$

◮ Problem: limit theorems for $P_N(\theta) = \det(U_N - e^{i\theta})$ as $N \to \infty$.
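The Haar measure on the unitary group is easy to sample numerically, so the objects above can be experimented with directly. A minimal sketch (my illustration, not from the talk), using the standard Ginibre-QR recipe with the diagonal phase correction (Mezzadri ’07):

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n matrix Haar-uniformly from the unitary group.

    QR decomposition of a complex Ginibre matrix; the diagonal phase
    correction is needed to make the law exactly Haar.
    """
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix the phases of the columns

rng = np.random.default_rng(0)
U = haar_unitary(8, rng)
eig = np.linalg.eigvals(U)
print(np.max(np.abs(np.abs(eig) - 1.0)))  # eigenvalues lie on the unit circle
```

The eigenvalue angles of such samples exhibit the quadratic repulsion of the Coulomb gas density above.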

Characteristic polynomials

Characteristic polynomials of (large) random matrices:

◮ A good model of the Riemann zeta function $\zeta(s)$ high up on the critical line $s = 1/2 + it$ (Keating and Snaith ’00).

◮ An interesting example of a log-correlated Gaussian field. E.g. how to compute
$$M^*_N = \max_{\theta \in [0,2\pi]} \log|P_N(\theta)| \equiv \max_{\theta \in [0,2\pi]} \log|\det(U_N - e^{i\theta}I)|$$

◮ Using these ideas, it has been conjectured and partially proved that as $N \to \infty$,
$$M^*_N = \log(N) - \frac{3}{4}\log(\log(N)) + (G_1 + G_2)/2 + o(1)$$
where $G_1, G_2$ are independent standard Gumbel variables.
(Fyodorov and Keating ’12, Arguin, Belius, Bourgade ’15, Paquette and Zeitouni ’16, Chhaibi, Madaule and Najnudel ’16)
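The moment formulas behind the Keating–Snaith model can be probed by simulation. A Monte Carlo sketch (my illustration, not part of the talk) of the simplest CUE moment, $E|P_N(\theta)|^2 = N + 1$, with a Haar unitary sampled via the standard Ginibre-QR recipe:

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-uniform n x n unitary via QR of a Ginibre matrix, phases fixed."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(42)
N, theta, n_samples = 10, 0.7, 8000
vals = np.empty(n_samples)
for s in range(n_samples):
    U = haar_unitary(N, rng)
    vals[s] = np.abs(np.linalg.det(U - np.exp(1j * theta) * np.eye(N))) ** 2

print(vals.mean())  # theory: E|P_N(theta)|^2 = N + 1 = 11, for every theta
```

By rotation invariance of Haar measure the answer does not depend on $\theta$.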

The logarithm

Theorem (Hughes, Keating and O’Connell ’01)
Let $\{Z_j\}_{j=1}^{\infty}$ be i.i.d. standard complex Gaussian random variables. Then
$$V_N(\theta) := \log|P_N(\theta)| \;\xrightarrow{d}\; V(\theta) := \frac{1}{2}\sum_{k=1}^{\infty} \frac{e^{ik\theta}}{\sqrt{k}}\, Z_k + \text{c.c.}$$

Key properties of $V(\theta)$:

◮ $V$ is Gaussian and mean zero: $E(V(\theta)) = 0$.

◮ Logarithmic correlations:
$$E(V(\theta)V(\phi)) = \frac{1}{2}\,\mathrm{Re}\sum_{k=1}^{\infty} \frac{e^{ik(\theta-\phi)}}{k} = -\frac{1}{2}\log|e^{i\theta} - e^{i\phi}|$$

◮ What about $\theta = \phi$? Implies $\mathrm{Var}(V(\theta)) = \infty$.

Conclusion: the limit $V(\theta)$ is a distribution-valued object.
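The logarithmic covariance is just the Fourier series $\sum_{k\ge1}\cos(k\Delta)/k = -\log|2\sin(\Delta/2)|$, and $|e^{i\theta}-e^{i\phi}| = 2|\sin((\theta-\phi)/2)|$. A quick numerical sanity check of the identity (my illustration, not from the slides):

```python
import numpy as np

def cov_partial(delta, K):
    """Partial sum (1/2) * sum_{k=1}^{K} cos(k*delta)/k."""
    k = np.arange(1, K + 1)
    return 0.5 * np.sum(np.cos(k * delta) / k)

def cov_exact(delta):
    """Closed form -(1/2) log|e^{i a} - e^{i b}| = -(1/2) log|2 sin(delta/2)|."""
    return -0.5 * np.log(np.abs(2.0 * np.sin(delta / 2.0)))

for delta in (0.8, 2.0):
    print(delta, cov_partial(delta, 10**6), cov_exact(delta))
```

The partial sums converge (slowly, by oscillation) away from $\Delta = 0$; at $\Delta = 0$ the series is the divergent harmonic series, which is exactly the infinite variance on the slide.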

The exponential of the logarithm

Naively we might suppose that $|P_N(\theta)| = e^{V_N(\theta)}$ converges to $e^{V(\theta)}$? How to define the exponential of a distribution? Consider measures formally defined by
$$\mu^{(\gamma)}(D) = \int_D e^{\gamma V(\theta) - \frac{\gamma^2}{2}\mathrm{Var}(V(\theta))}\, d\theta$$

The measure $\mu^{(\gamma)}$ is defined by a renormalization procedure $V_\epsilon = V * \phi_\epsilon$. It was shown by Kahane ’85 that

◮ $\mu^{(\gamma)}_\epsilon$ converges as $\epsilon \to 0$ to a non-trivial limit if and only if $\gamma < 2$.

◮ This limit does not depend on (Kahane’s) cut-off procedures.

This limit defines the measure $\mu^{(\gamma)}$, which is called Gaussian multiplicative chaos (GMC).
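With a finite Fourier truncation of $V$ standing in for the cut-off $V_\epsilon$, the normalized exponential has mean exactly $1$ at every point, so the expected total mass of the approximating measure over $[0,2\pi)$ is exactly $2\pi$. A Monte Carlo sketch (my illustration; Fourier truncation as cut-off, a small $\gamma$ well inside the subcritical phase):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.5
K, M = 64, 128                                     # Fourier cut-off, grid size
theta = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
k = np.arange(1, K + 1)
B = np.exp(1j * np.outer(theta, k)) / np.sqrt(k)   # basis e^{ik theta}/sqrt(k)
var = 0.5 * np.sum(1.0 / k)                        # Var V_K(theta) = (1/2) sum_{k<=K} 1/k

n_samples = 4000
total = np.empty(n_samples)
for s in range(n_samples):
    Z = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    V = (B @ Z).real                               # truncated field V_K = Re sum e^{ik.}Z_k/sqrt(k)
    w = np.exp(gamma * V - gamma**2 * var / 2)     # normalized exponential, pointwise mean 1
    total[s] = w.sum() * (2 * np.pi / M)

print(total.mean())  # expected total mass is 2*pi by the normalization
```

As $K$ grows the mass concentrates on smaller and smaller sets, which is the multifractal behaviour described on the next slides.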

Properties of the measures $\mu^{(\gamma)}$

◮ Power law spectrum (multifractality): for $0 \le q\gamma^2 < 2$ we have $E(\mu^{(\gamma)}[0,r]^q) = C_q\, r^{\xi(q)}$ where $\xi(q) = (1 + \frac{\gamma^2}{2})q - \frac{\gamma^2}{2}q^2$.

◮ In two dimensions, $V$ is essentially the Gaussian free field, a fundamental object of mathematical physics.

◮ In that context, $e^{\gamma V}$ is used in Liouville quantum gravity to construct a uniform random metric on the sphere (see work and recent surveys of e.g. Berestycki, Duplantier, Rhodes, Sheffield, Vargas, ...).

◮ The distribution of $\mu^{(\gamma)}$ near $\gamma = \gamma_c$ is believed to be closely related to the statistics of $\max_{|z|=1}|P_N(z)|$.

The $L^2$-phase

The range $0 \le \gamma < \sqrt{2}$ is called the $L^2$-phase. This is because
$$E(\mu^{(\gamma)}(D)^2) = \int_{D \times D} |e^{i\theta} - e^{i\phi}|^{-\gamma^2/2}\, d\theta\, d\phi < \infty$$
if and only if $0 \le \gamma < \sqrt{2}$.

Theorem (Webb ’15)
Consider
$$\mu^{(\gamma)}_N(D) = \frac{\int_D |P_N(\theta)|^\gamma\, d\theta}{E\int_D |P_N(\theta)|^\gamma\, d\theta}$$
Then for any $\gamma < \sqrt{2}$ we have $\mu^{(\gamma)}_N \xrightarrow{d} \mu^{(\gamma)}$ as $N \to \infty$, where $\mu^{(\gamma)}$ is the same measure constructed from Kahane’s theory.
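The "if and only if" is just integrability of a power-law singularity: near $\theta = \phi$ the integrand behaves like $|\theta-\phi|^{-\gamma^2/2}$, which is integrable in one dimension exactly when $\gamma^2/2 < 1$. A quadrature sketch of the two regimes (my check, not from the talk), via the behaviour of the cutoff integral $I(\delta) = \int_\delta^\pi |2\sin(x/2)|^{-\gamma^2/2}\,dx$:

```python
import numpy as np

def cutoff_integral(gamma, delta, npts=40001):
    """Integrate |2 sin(x/2)|^{-gamma^2/2} over [delta, pi], trapezoid on a log grid."""
    c = gamma**2 / 2
    x = np.geomspace(delta, np.pi, npts)
    y = np.abs(2 * np.sin(x / 2)) ** (-c)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

# gamma < sqrt(2): the integral stays bounded as the cutoff shrinks
print(cutoff_integral(1.0, 1e-4), cutoff_integral(1.0, 1e-8))
# gamma > sqrt(2): it blows up like delta^{1 - gamma^2/2}
print(cutoff_integral(1.8, 1e-2), cutoff_integral(1.8, 1e-4))
```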

Counting statistics in the CUE

Instead of $V_N(\theta)$, we consider counting statistics
$$X_N(\theta) = \sum_{j=1}^{N} \chi_{J(\theta)}(N^\alpha \theta_j), \qquad J(\theta) = [\theta - 1, \theta + 1]$$
In reality, we consider a slightly smoother version:
$$X_{N,\epsilon_N}(\theta) = \sum_{j=1}^{N} (\chi_{J(\theta)} * \phi_\epsilon)(N^\alpha \theta_j), \qquad \phi_\epsilon(\theta) = \frac{1}{\epsilon}\,\phi\!\left(\frac{\theta}{\epsilon}\right)$$
where $f * g$ stands for the convolution of the functions $f$ and $g$. We will study the field $X_{N,\epsilon}$ with mollifying scale $\epsilon \to 0$ depending on $N$.
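The smoothing $\chi_J * \phi_\epsilon$ replaces the sharp window by a smooth one that is $\approx 1$ inside $J$ and $\approx 0$ outside, up to boundary layers of width $\epsilon$. A small sketch (my illustration; a standard Gaussian density as the mollifier $\phi$, for which the convolution has a closed form):

```python
import numpy as np
from math import erf

def mollified_indicator(x, eps):
    """(chi_{[-1,1]} * phi_eps)(x) with phi the standard Gaussian density.

    For a Gaussian mollifier the convolution has the closed form
    Phi((x+1)/eps) - Phi((x-1)/eps), where Phi is the standard normal CDF.
    """
    Phi = lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0)))
    return Phi((x + 1) / eps) - Phi((x - 1) / eps)

eps = 0.05
print(mollified_indicator(0.0, eps))  # ~1 deep inside J
print(mollified_indicator(3.0, eps))  # ~0 far outside J
print(mollified_indicator(1.0, eps))  # exactly 1/2 at the edge
```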
A realization of the process

A plot of the process $\tilde{X}_N(u) := X_N(u) - E(X_N(u))$, $u \in [-\pi, \pi)$, with $N = 3000$ and $\alpha = 0$; a second panel zooms in around the origin, $u \in (-0.2, 0.2)$. [Figures not reproduced in this transcript.]

The main result: all $0 \le \gamma < 2$

From the smoothed counting statistic, we construct a measure
$$\mu^{(\gamma)}_{N,\epsilon_N}(D) = \int_D e^{\gamma \tilde{X}_{N,\epsilon_N}(\theta) - \frac{\gamma^2}{2}\mathrm{Var}(\tilde{X}_{N,\epsilon_N}(\theta))}\, d\theta$$
where $\tilde{X}_{N,\epsilon_N}(\theta) = X_{N,\epsilon_N}(\theta) - E(X_{N,\epsilon_N}(\theta))$.

Theorem (Lambert, Ostrovsky, S. ’16)
Suppose $0 < \alpha < 1$ and $\epsilon_N \to 0$ such that $\epsilon_N^{-1} N^{\alpha-1} \to 0$. Then for every $\gamma < 2$ we have $\mu^{(\gamma)}_{N,\epsilon_N} \xrightarrow{d} \mu^{(\gamma)}$ as $N \to \infty$, where $\mu^{(\gamma)}$ is the same GMC constructed via Kahane’s theory.

Thus we go beyond the $L^2$ bound $\gamma < \sqrt{2}$ to establish convergence in the full subcritical phase $\gamma < 2$.

Ideas in the proof

Try to reduce to the case $\lim_{\epsilon \to 0} \lim_{N \to \infty}$. For fixed $\epsilon > 0$ there is:

Theorem (Soshnikov ’00)
Let $f$ be a smooth function with rapid decay. Then
$$\sum_{j=1}^{N} f(\theta_j N^\alpha) - E\left(\sum_{j=1}^{N} f(\theta_j N^\alpha)\right) \xrightarrow{d} N(0, \sigma^2(f))$$
where
$$\sigma^2(f) = \|f\|^2_{H^{1/2}} = \int_{-\infty}^{\infty} |k|\,|\hat{f}(k)|^2\, dk = \frac{1}{2\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f'(x)\, f'(y) \log\frac{1}{|x-y|}\, dx\, dy$$

However, if $\epsilon \equiv \epsilon_N$ then $\mathrm{Var}(X_{N,\epsilon_N}(u)) \sim \frac{1}{2}\log(\epsilon_N^{-1})$ diverges...
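The spectral form of $\sigma^2(f)$ is easy to sanity-check: with the Fourier convention $\hat{f}(k) = \frac{1}{2\pi}\int f(x)e^{-ikx}dx$ (the one consistent with the $1/2\pi^2$ constant above, under my reading) and the Gaussian $f(x) = e^{-x^2/2}$, one gets $\hat{f}(k) = e^{-k^2/2}/\sqrt{2\pi}$ and hence $\sigma^2(f) = \frac{1}{2\pi}\int |k| e^{-k^2}dk = \frac{1}{2\pi}$. A quadrature check (my illustration):

```python
import numpy as np

# f decays fast, so a window of [-10, 10] suffices for both x and k.
x = np.linspace(-10, 10, 2001)   # dx = 0.01
k = np.linspace(-10, 10, 1001)   # dk = 0.02
f = np.exp(-x**2 / 2)

# f_hat(k) = (1/2pi) * integral of f(x) e^{-ikx} dx, trapezoidal quadrature
dx = x[1] - x[0]
E = np.exp(-1j * np.outer(k, x))
f_hat = (E @ f) * dx / (2 * np.pi)

dk = k[1] - k[0]
sigma2 = np.sum(np.abs(k) * np.abs(f_hat)**2) * dk
print(sigma2, 1 / (2 * np.pi))  # closed form for the Gaussian: 1/(2*pi)
```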

Strong Gaussian approximation

Goal: show that $X_{N,\epsilon_N}$ is closely approximated by the fixed-$\epsilon$, $N \to \infty$ limiting Gaussian field.

Lemma
Suppose $0 < \alpha < 1$ and $\epsilon_N \to 0$ such that $\epsilon_N^{-1} N^{\alpha-1} \to 0$. Then for any finite sequence of points $u_1, \ldots, u_q$, we have
$$E\left(\exp\left(\sum_{j=1}^{q} \alpha_j \tilde{X}_{N,\epsilon_N}(u_j)\right)\right) = \exp\left(\frac{1}{2}\,\Big\|\sum_{j=1}^{q} \alpha_j\, (\chi_{J(u_j)} * \phi_{\epsilon_N})\Big\|^2_{H^{1/2}}\right)(1 + o(1))$$
as $N \to \infty$. The error term is uniform in $u_1, \ldots, u_q$ varying in compact subsets of $\mathbb{R}$.

Case $q = 2$ quite easily gives the $L^2$-phase: writing $\mu_{N,\epsilon_N} = \mu_{N,\epsilon} + (\mu_{N,\epsilon_N} - \mu_{N,\epsilon})$, the uniformity above allows a precise computation of the second moment of the difference term, provided $\gamma < \sqrt{2}$.
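The $H^{1/2}$ norm appearing in the lemma itself diverges logarithmically as the mollifying scale shrinks, which is the variance growth noted earlier (constants depend on Fourier and field normalizations). A sketch (my illustration; Gaussian damping factor as the mollifier's transform) computing $\int |k|\,|\hat{h}_\epsilon(k)|^2 dk$ for the mollified indicator of $[-1,1]$, whose transform is $\frac{\sin k}{\pi k}$ times the mollifier factor, and watching the logarithmic growth:

```python
import numpy as np

def h12_norm_sq(eps, dk=0.01):
    """Numerically integrate 2 * int_0^inf k * (sin(k)/(pi k))^2 e^{-(eps k)^2} dk."""
    kmax = 20.0 / eps                       # far past the Gaussian cutoff at k ~ 1/eps
    k = np.arange(1, int(kmax / dk)) * dk
    integrand = k * (np.sin(k) / (np.pi * k))**2 * np.exp(-(eps * k)**2)
    return 2.0 * np.sum(integrand) * dk

for eps in (0.1, 0.01, 0.001):
    print(eps, h12_norm_sq(eps))  # grows by ~log(10)/pi^2 per decade of eps
```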

Getting the estimate

The CLT is usually proved by the method of moments, which makes it difficult to obtain good estimates on the Laplace transform. Instead, we exploit:

Theorem (Borodin–Okounkov formula (2000))
Let $F$ be a $2\pi$-periodic function. Then the following identity holds:
$$T_N[F] := \det\{\hat{F}_{k-j}\}_{k,j=0}^{N-1} = \exp\left(N (\log F)_0 + \sum_{k=1}^{\infty} k\, |(\log F)_k|^2\right) \times \det(I - R_N H(c) H(c)^{\dagger} R_N)$$
where $R_N$ are the projections onto $\{N+1, N+2, \ldots\}$, $H(c) = \{\hat{c}_{j+k-1}\}_{j,k=1}^{\infty}$ and
$$c(e^{i\theta}) = \exp\left(i\,\mathrm{Im}\sum_{k=1}^{\infty} (\log F)_k\, e^{ik\theta}\right).$$

We prove that the last determinant is close to 1, uniformly as $N \to \infty$.
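The exponential prefactor is the strong Szegő limit and the Fredholm determinant is the finite-$N$ correction. A classical test case (my illustration, not from the talk): for $F(e^{i\theta}) = e^{2\cos\theta}$ we have $(\log F)_{\pm 1} = 1$ and all other coefficients vanish, so the prediction is $T_N[F] \to e$, with very fast convergence because the correction determinant tends to 1 rapidly:

```python
import numpy as np

# Fourier coefficients F_j of F(e^{it}) = exp(2 cos t), computed by FFT.
M = 512
t = 2 * np.pi * np.arange(M) / M
F = np.exp(2 * np.cos(t))
Fhat = np.fft.fft(F) / M  # Fhat[j] ~ (1/2pi) int F e^{-ijt} dt, periodic in j mod M

def toeplitz_det(n):
    """Determinant of the n x n Toeplitz matrix {Fhat_{k-j}}."""
    idx = np.subtract.outer(np.arange(n), np.arange(n)) % M  # negative j via FFT periodicity
    return np.linalg.det(Fhat[idx].real)

print(toeplitz_det(30), np.e)  # strong Szego limit: exp(sum_k k |(log F)_k|^2) = e
```

This symbol is the classical Gessel example connecting Toeplitz determinants to longest increasing subsequences, which is why the convergence to $e$ is essentially exact already at moderate $N$.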

The $L^1$-phase: $\sqrt{2} \le \gamma < 2$

The idea is to study $\gamma$-thick points (see e.g. Berestycki ’15):
$$P_{c,\tau} := \{u \in [-c, c] : \tilde{X}_{N,\epsilon_N}(u) > \tau \log(\epsilon_N^{-1})\}$$
Note that $P_{c,\tau}$ consists of the points where the field has unusually large fluctuations, of the order of its maximum. We show that for any $\tau > \gamma$, the mass $\mu^{(\gamma)}_{N,\epsilon_N}(P_{c,\tau})$ converges to zero in $L^1$. On the complement $P^c_{c,\tau}$ it can be shown that the $L^2$ techniques now work for any $\gamma < 2$. We are guided by Berestycki’s calculation, adapted to our ‘almost Gaussian’ setting.
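The mechanism is already visible for a single Gaussian: by the Cameron–Martin shift, the normalized exponential tilts the mean of $X \sim N(0, \sigma^2)$ to $\gamma\sigma^2$, so the expected mass above a level $\tau\sigma^2$ is the Gaussian tail $P(X > (\tau-\gamma)\sigma^2)$, which vanishes as $\sigma^2 \to \infty$ precisely when $\tau > \gamma$. A toy computation (my illustration, with one Gaussian of variance $\sigma^2$ standing in for the field at one point; not the actual proof):

```python
import numpy as np
from math import erfc, sqrt

def tilted_mass_above(gamma, tau, sigma2, rng, n=200_000):
    """Monte Carlo E[e^{gamma X - gamma^2 sigma2/2} 1{X > tau*sigma2}], X ~ N(0, sigma2)."""
    x = rng.standard_normal(n) * np.sqrt(sigma2)
    w = np.exp(gamma * x - gamma**2 * sigma2 / 2)
    return np.mean(w * (x > tau * sigma2))

def tilted_mass_exact(gamma, tau, sigma2):
    """Closed form: the tilt shifts the mean to gamma*sigma2, leaving the tail
    P(X > (tau - gamma)*sigma2) = (1/2) erfc((tau - gamma) sqrt(sigma2/2))."""
    return 0.5 * erfc((tau - gamma) * sqrt(sigma2 / 2))

rng = np.random.default_rng(2)
gamma, tau = 1.0, 1.5
for sigma2 in (1.0, 4.0, 9.0):
    print(sigma2, tilted_mass_exact(gamma, tau, sigma2),
          tilted_mass_above(gamma, tau, sigma2, rng))
```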

Conclusions

Summary:

◮ I showed how counting statistics in the CUE are related to log-correlated Gaussian fields.

◮ The corresponding exponentials were constructed in terms of Gaussian multiplicative chaos.

◮ We also proved that the same results hold when the CUE is replaced with the sine process.

What about other Coulomb gas type systems or more general point processes? How does the characteristic polynomial behave?

Thank you.