SLIDE 1 Random matrices and Gaussian multiplicative chaos
Nick Simm
Mathematics Institute, University of Warwick
Joint work with Gaultier Lambert and Dmitry Ostrovsky.
Optimal Point Configurations and Orthogonal Polynomials, April 2017, CIEM
Research supported by Leverhulme fellowship ECF-2014-309
SLIDE 7 The Circular Unitary Ensemble
◮ Let U_N be an N × N random matrix chosen uniformly from the unitary group.
◮ The joint distribution of points is an example of a (1d) ‘Coulomb gas’:
P(θ_1, . . . , θ_N) ∝ ∏_{j<k} |e^{iθ_j} − e^{iθ_k}|²
◮ The eigenvalues e^{iθ_1}, . . . , e^{iθ_N} form a determinantal point process with kernel
K_N(θ, φ) = ∑_{j=0}^{N−1} p_j(θ) \overline{p_j(φ)} = sin(N(θ − φ)/2) / sin((θ − φ)/2), where p_j(θ) = e^{ijθ}.
◮ Problem: limit theorems for P_N(θ) = det(U_N − e^{iθ}) as N → ∞.
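As a numerical aside (not part of the talk), the CUE is easy to sample: scipy's `unitary_group` draws Haar-distributed unitary matrices, and the eigenvalues can be checked to lie on the unit circle. A minimal sketch:

```python
import numpy as np
from scipy.stats import unitary_group

# Draw U_N uniformly (Haar measure) from the unitary group and inspect its spectrum.
N = 200
U = unitary_group.rvs(dim=N, random_state=1)
eig = np.linalg.eigvals(U)                 # the eigenvalues e^{i theta_j}

# All eigenvalues lie on the unit circle, up to numerical error.
assert np.allclose(np.abs(eig), 1.0, atol=1e-8)

# The characteristic polynomial P_N(theta) = det(U_N - e^{i theta} I)
# evaluated at one point of the circle:
theta = 0.7
P = np.prod(eig - np.exp(1j * theta))
print(abs(P))
```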
SLIDE 12 Characteristic polynomials
Characteristic polynomials of (large) random matrices:
◮ A good model of the Riemann zeta function ζ(s) high up on the critical line s = 1/2 + it (Keating and Snaith ’00).
◮ An interesting example of a log-correlated Gaussian field. E.g. how to compute
M*_N = max_{θ∈[0,2π]} log |P_N(θ)| ≡ max_{θ∈[0,2π]} log |det(U_N − e^{iθ}I)|
◮ Using these ideas, it has been conjectured and partially proved that as N → ∞,
M*_N = log(N) − (3/4) log(log(N)) + (G_1 + G_2)/2 + o(1),
where G_1, G_2 are independent standard Gumbel variables.
(Fyodorov and Keating ’12; Arguin, Belius, Bourgade ’15; Paquette and Zeitouni ’16; Chhaibi, Madaule and Najnudel ’16)
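The leading order log(N) is already visible in small simulations. A hypothetical sketch (one Haar sample, a grid approximation of the maximum; at moderate N only the first term is discernible):

```python
import numpy as np
from scipy.stats import unitary_group

# One sample of M*_N = max over theta of log|P_N(theta)|, approximated on a grid.
N = 200
U = unitary_group.rvs(dim=N, random_state=7)
phases = np.exp(1j * np.angle(np.linalg.eigvals(U)))   # e^{i theta_j}

thetas = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
# log|P_N(theta)| = sum_j log|e^{i theta_j} - e^{i theta}|
logP = np.log(np.abs(phases[None, :] - np.exp(1j * thetas)[:, None])).sum(axis=1)
M = logP.max()

# Leading order of the conjecture: M ~ log(N) - (3/4) log(log(N)) + O(1);
# at N = 200 this predicts roughly 5.30 - 1.25 = 4.05, up to O(1) fluctuations.
print(M)
```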
SLIDE 16 The logarithm
Theorem (Hughes, Keating and O’Connell ’01)
Let {Z_k}_{k=1}^∞ be i.i.d. standard complex Gaussian random variables. Then
V_N(θ) := log |P_N(θ)| →_d V(θ) := (1/2) ∑_{k=1}^∞ (e^{ikθ}/√k) Z_k + c.c.
Key properties of V(θ):
◮ V is Gaussian with mean zero: E(V(θ)) = 0.
◮ Logarithmic correlations:
E(V(θ)V(φ)) = (1/2) Re ∑_{k=1}^∞ e^{ik(θ−φ)}/k = −(1/2) log |e^{iθ} − e^{iφ}|
◮ What about θ = φ? The same computation implies Var(V(θ)) = ∞.
Conclusion: the limit V(θ) is a distribution-valued object.
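The covariance identity above is a classical Fourier series fact, and it can be checked numerically (a small sketch, not from the talk):

```python
import numpy as np

# Check that (1/2) Re sum_{k>=1} e^{ik(theta-phi)}/k = -(1/2) log|e^{i theta} - e^{i phi}|,
# i.e. the log-correlation of the limiting field V.
theta, phi = 1.3, 0.3
x = theta - phi
k = np.arange(1, 10**6)
lhs = 0.5 * np.real(np.exp(1j * k * x) / k).sum()
rhs = -0.5 * np.log(abs(np.exp(1j * theta) - np.exp(1j * phi)))
print(lhs, rhs)   # the two values agree closely
```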
SLIDE 22 The exponential of the logarithm
Naively we might suppose that |P_N(θ)| = e^{V_N(θ)} converges to e^{V(θ)}? How to define the exponential of a distribution? Consider measures formally defined by
µ^{(γ)}(D) = ∫_D e^{γV(θ) − (γ²/2) Var(V(θ))} dθ
The measure µ^{(γ)} is defined by a renormalization procedure V_ǫ = V ∗ φ_ǫ. It was shown by Kahane ’85 that:
◮ µ^{(γ)}_ǫ converges as ǫ → 0 to a non-trivial limit if and only if γ < 2.
◮ This limit does not depend on (Kahane’s) cut-off procedures.
This limit defines the measure µ^{(γ)}, which is called Gaussian multiplicative chaos (GMC).
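A hypothetical simulation sketch of this renormalization (truncating the Fourier series of V at K modes plays the role of a cut-off, ǫ ~ 1/K; the renormalized mass has mean |D|):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, K, n_grid, n_samples = 0.5, 64, 512, 400
theta = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
k = np.arange(1, K + 1)
var = 0.5 * np.sum(1.0 / k)                           # Var V_K(theta), theta-independent
basis = np.exp(1j * np.outer(theta, k)) / np.sqrt(k)  # e^{ik theta}/sqrt(k)

masses = []
for _ in range(n_samples):
    Z = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    V = np.real(basis @ Z)                            # V_K(theta) = Re sum_k e^{ik theta} Z_k / sqrt(k)
    density = np.exp(gamma * V - 0.5 * gamma**2 * var)
    masses.append(density.mean() * 2 * np.pi)         # total mass mu_K^{(gamma)}([0, 2 pi))

print(np.mean(masses))   # should be close to E(mass) = 2 pi = 6.283...
```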
SLIDE 27 Properties of measures µ^{(γ)}
◮ Power law spectrum (multifractality): for 0 ≤ qγ² < 2 we have
E(µ^{(γ)}[0, r]^q) = C_q r^{ξ(q)}, where ξ(q) = (1 + γ²/2) q − (γ²/2) q².
◮ In two dimensions, V is essentially the Gaussian free field, a fundamental object of mathematical physics.
◮ In that context, e^{γV} is used in Liouville quantum gravity to construct a uniform random metric on the sphere (see work and recent surveys of e.g. Berestycki, Duplantier, Rhodes, Sheffield, Vargas, . . . ).
◮ The distribution of µ^{(γ)} near γ = γ_c = 2 is believed to be closely related to statistics of max_{|z|=1} |P_N(z)|.
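Two sanity checks on the exponent ξ(q), worked directly from the formula on the slide:

```python
# The multifractal exponent from the slide: xi(q) = (1 + gamma^2/2) q - (gamma^2/2) q^2.
def xi(q, gamma):
    return (1 + gamma**2 / 2) * q - (gamma**2 / 2) * q**2

g = 1.0
# xi(1) = 1 for every gamma: E mu^{(gamma)}[0, r] is linear in r, consistent
# with the normalization E mu^{(gamma)}(D) = |D|.
print(xi(1, g))                      # 1.0
# Strict concavity in q (the 'multifractal' part): xi(2) - 2 xi(1) = -gamma^2.
print(xi(2, g) - 2 * xi(1, g))       # -1.0
```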
SLIDE 30 The L2-phase
The range 0 ≤ γ < √2 is called the L2-phase. This is because
E(µ^{(γ)}(D)²) = ∫_D ∫_D |e^{iθ} − e^{iφ}|^{−γ²/2} dθ dφ < ∞ if and only if 0 ≤ γ < √2.
Theorem (Webb ’15)
Consider µ^{(γ)}_N(D) = ∫_D (|P_N(θ)|^γ / E(|P_N(θ)|^γ)) dθ. Then for any γ < √2 we have
µ^{(γ)}_N →_d µ^{(γ)}, N → ∞,
where µ^{(γ)} is the same measure constructed from Kahane’s theory.
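The finiteness condition comes from the local behaviour |e^{iθ} − e^{iφ}| ≈ |θ − φ|, so the double integral reduces near the diagonal to ∫ x^{−γ²/2} dx. A small sketch of this model integral (my notation, not from the talk):

```python
import numpy as np

# Near theta = phi the integrand behaves like |theta - phi|^{-p} with p = gamma^2/2,
# so the integral is finite iff p < 1, i.e. gamma < sqrt(2). Illustrate with the
# model integral I(delta) = int_delta^pi x^{-p} dx, evaluated in closed form.
def I(delta, gamma):
    p = gamma**2 / 2
    return (np.pi**(1 - p) - delta**(1 - p)) / (1 - p)

for gamma in (1.0, 1.8):
    vals = [I(10.0**(-m), gamma) for m in (2, 4, 6)]
    print(gamma, vals)   # converges for gamma = 1.0, blows up for gamma = 1.8
```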
SLIDE 34 Counting statistics in the CUE
Instead of V_N(θ), we consider counting statistics
X_N(θ) = ∑_{j=1}^N χ_{J(θ)}(N^α θ_j), J(θ) = [θ − 1, θ + 1]
In reality, we consider a slightly smoother version:
X_{N,ǫ}(θ) = ∑_{j=1}^N (χ_{J(θ)} ∗ φ_ǫ)(N^α θ_j), φ_ǫ(θ) = (1/ǫ) φ(θ/ǫ),
where f ∗ g stands for the convolution of the functions f and g.
We will study the field X_{N,ǫ} with mollifying scale ǫ = ǫ_N → 0 depending on N.
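A hypothetical implementation sketch: if the mollifier φ is taken to be a standard Gaussian (the talk leaves φ general, so this is an assumption), the convolution χ_J ∗ φ_ǫ has a closed form in terms of the normal CDF, and the smoothed count can be compared with the hard count:

```python
import numpy as np
from scipy.stats import unitary_group
from scipy.special import ndtr   # standard normal CDF

N, alpha = 50, 0.5
U = unitary_group.rvs(dim=N, random_state=3)
angles = np.angle(np.linalg.eigvals(U))        # eigenangles theta_j in (-pi, pi]

def X_smooth(theta, eps):
    # (chi_J * phi_eps)(x) for J = [theta-1, theta+1] and Gaussian phi:
    # equals Phi((b - x)/eps) - Phi((a - x)/eps).
    x = N**alpha * angles
    a, b = theta - 1.0, theta + 1.0
    return np.sum(ndtr((b - x) / eps) - ndtr((a - x) / eps))

def X_hard(theta):
    x = N**alpha * angles
    return np.sum((x >= theta - 1.0) & (x <= theta + 1.0))

# The smoothed count approaches the hard count as eps -> 0.
print(X_hard(0.0), X_smooth(0.0, 0.2), X_smooth(0.0, 1e-4))
```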
SLIDE 35
A realization of the process
[Figure] A plot of the process X̃_N(u) := X_N(u) − E(X_N(u)), with N = 3000, α = 0.
SLIDE 36
A realization of the process
[Figure] A plot of the process X̃_N(u) := X_N(u) − E(X_N(u)), u ∈ [−π, π), with N = 3000, α = 0 (zoomed in around the origin, u ∈ (−0.2, 0.2)).
SLIDE 40 The main result: all 0 ≤ γ < 2
From the smoothed counting statistic, we construct a measure
µ^{(γ)}_{N,ǫ_N}(D) = ∫_D e^{γ X̃_{N,ǫ_N}(θ) − (γ²/2) Var(X̃_{N,ǫ_N}(θ))} dθ,
where X̃_{N,ǫ_N}(θ) = X_{N,ǫ_N}(θ) − E(X_{N,ǫ_N}(θ)).
Theorem (Lambert, Ostrovsky, S. 2016)
Suppose 0 < α < 1 and ǫ_N → 0 such that ǫ_N^{−1} N^{α−1} → 0. Then for every γ < 2 we have
µ^{(γ)}_{N,ǫ_N} →_d µ^{(γ)}, N → ∞,
where µ^{(γ)} is the same GMC constructed via Kahane’s theory. Thus we go beyond the L2 bound γ < √2 to establish convergence in the full phase γ < 2.
SLIDE 45 Ideas in the proof
Try to reduce to the iterated limit lim_{ǫ→0} lim_{N→∞}. For fixed ǫ > 0 there is:
Theorem (Soshnikov ’00)
Let f be a smooth function with rapid decay. Then
∑_{j=1}^N f(θ_j N^α) − E(∑_{j=1}^N f(θ_j N^α)) →_d N(0, σ²(f)),
where
σ²(f) = ‖f‖²_{H^{1/2}} = ∫_{−∞}^{∞} |k| |f̂(k)|² dk = (1/2π²) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f′(x) f′(y) log (1/|x − y|) dx dy.
However, if ǫ ≡ ǫ_N then Var(X_{N,ǫ_N}(u)) ∼ (1/2) log(ǫ_N^{−1}) diverges...
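A numerical sketch of the variance formula (my convention: f̂(k) = (1/2π) ∫ f(x) e^{−ikx} dx, which is the convention consistent with the 1/(2π²) constant in the double-integral form; for f(x) = e^{−x²} the exact value is σ²(f) = 1/(2π)):

```python
import numpy as np

# sigma^2(f) = int |k| |fhat(k)|^2 dk for f(x) = exp(-x^2),
# with fhat(k) = (1/2 pi) int f(x) e^{-ikx} dx = exp(-k^2/4)/(2 sqrt(pi)).
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
kk = np.linspace(-15.0, 15.0, 1501)
dk = kk[1] - kk[0]
f = np.exp(-x**2)
# Discretized Fourier integral (the integrand vanishes at the endpoints,
# so a plain Riemann sum is accurate here).
fhat = (f[None, :] * np.exp(-1j * np.outer(kk, x))).sum(axis=1) * dx / (2 * np.pi)
sigma2 = (np.abs(kk) * np.abs(fhat)**2).sum() * dk
print(sigma2, 1 / (2 * np.pi))   # the two values agree closely
```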
SLIDE 49 Strong Gaussian approximation
Goal: Show that X_{N,ǫ_N} is closely approximated by the fixed-ǫ, N → ∞ limiting Gaussian field.
Lemma
Suppose 0 < α < 1 and ǫ_N → 0 such that ǫ_N^{−1} N^{α−1} → 0. Then for any finite sequence of points u_1, . . . , u_q, we have
E exp(∑_{j=1}^q α_j X̃_{N,ǫ_N}(u_j)) = exp((1/2) ‖∑_{j=1}^q α_j (χ_{J(u_j)} ∗ φ_{ǫ_N})‖²_{H^{1/2}}) (1 + o(1))
as N → ∞. The error term is uniform in u_1, . . . , u_q varying in compact subsets of R.
The case q = 2 quite easily gives the L2-phase: writing µ_{N,ǫ_N} = µ_{N,ǫ} + (µ_{N,ǫ_N} − µ_{N,ǫ}), the uniformity above allows a precise computation of the second moment of the difference term, provided γ < √2.
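To spell out how q = 2 controls second moments (a schematic in my notation, with ρ_{ǫ_N}(u, v) the H^{1/2} cross term and pretending the field is exactly Gaussian):

```latex
\mathbb{E}\big[\mu^{(\gamma)}_{N,\epsilon_N}(D)^2\big]
 = \int_D\!\!\int_D \mathbb{E}\,
   e^{\gamma\widetilde X(u)+\gamma\widetilde X(v)
      -\frac{\gamma^2}{2}\left(\operatorname{Var}\widetilde X(u)
      +\operatorname{Var}\widetilde X(v)\right)} \,du\,dv
 = (1+o(1))\int_D\!\!\int_D e^{\gamma^2 \rho_{\epsilon_N}(u,v)}\,du\,dv,
\qquad
\rho_{\epsilon_N}(u,v) \lesssim \tfrac12\log\tfrac{1}{|u-v|} + O(1),
```

so the double integral stays bounded as ǫ_N → 0 exactly when γ²/2 < 1, i.e. γ < √2, matching the L2 condition on slide 30.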
SLIDE 53 Getting the estimate
The CLT is usually proved by the method of moments; it is difficult to obtain good estimates on the Laplace transform that way. Instead, we exploit:
Theorem (Borodin–Okounkov formula (2000))
Let F be a 2π-periodic function. Then the following identity holds:
T_N[F] := det{F̂_{k−j}}_{k,j=0,...,N−1} = exp(N (log F)_0 + ∑_{k=1}^∞ k |(log F)_k|²) det(I − R_N H(b) H(c̃) R_N),
where R_N are projections on {N + 1, N + 2, . . .}, H(c) = {ĉ_{j+k−1}}_{j,k=1}^∞, and
c(e^{iθ}) = exp(∑_{k=1}^∞ ((log F)_k e^{ikθ} − (log F)_{−k} e^{−ikθ})), b = 1/c.
We prove that the last determinant is close to 1, uniformly as N → ∞.
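As a numerical sanity check on the leading (strong Szegő) factor — not the talk's proof — take the symbol F(e^{iθ}) = e^{2t cos θ}, whose Fourier coefficients are modified Bessel functions F̂_k = I_k(2t). Here (log F)_0 = 0 and (log F)_{±1} = t, so the exponential factor is e^{t²}, and the correction determinant is superexponentially close to 1:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv

t, N = 0.5, 20
# Fourier coefficients of F(e^{i theta}) = exp(2 t cos(theta)) are F_k = I_k(2t);
# note I_{-k} = I_k, so the Toeplitz matrix is symmetric.
col = iv(np.arange(N), 2 * t)
TN = np.linalg.det(toeplitz(col))   # det of the N x N Toeplitz matrix {F_{k-j}}

# Borodin-Okounkov / strong Szego prediction: exp(N*0 + 1*t^2) = e^{t^2},
# with a correction determinant that is negligible at this size.
print(TN, np.exp(t**2))
```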
SLIDE 57
The L1-phase √2 ≤ γ < 2
The idea is to study γ-thick points (see e.g. Berestycki ’15):
P_{c,τ} := {u ∈ [−c, c] : X̃_{N,ǫ_N}(u) > τ log(ǫ_N^{−1})}
Note that P_{c,τ} are the points where the field fluctuates at the order of its maximum. We show that for any τ > γ, the mass µ^{(γ)}_{N,ǫ_N}(P_{c,τ}) converges to zero in L1. On the complement P^c_{c,τ} it can be shown that the L2 techniques now work for any γ < 2. We are guided by Berestycki’s calculation, adapted to our ‘almost Gaussian’ setting.
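The thick-point estimate follows from a Gaussian change of measure (Cameron–Martin / Girsanov), as in Berestycki's argument; schematically, pretending X̃ is exactly Gaussian with σ² = Var X̃_{N,ǫ_N}(u) ~ (1/2) log ǫ_N^{−1}:

```latex
\mathbb{E}\,\mu^{(\gamma)}_{N,\epsilon_N}(\mathcal{P}_{c,\tau})
 = \int_{-c}^{c}\mathbb{E}\Big[e^{\gamma \widetilde X(u)-\frac{\gamma^2}{2}\sigma^2}\,
     \mathbf{1}\{\widetilde X(u)>\tau\log\epsilon_N^{-1}\}\Big]\,du
 = \int_{-c}^{c}\mathbb{P}\big(\widetilde X(u)+\gamma\sigma^2>\tau\log\epsilon_N^{-1}\big)\,du
 \lesssim 2c\,\epsilon_N^{(\tau-\gamma/2)^2},
```

since γσ² ~ (γ/2) log ǫ_N^{−1} and X̃(u) is centred with variance ~ (1/2) log ǫ_N^{−1}; in particular the expected thick-point mass vanishes for every τ > γ.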
SLIDE 63
Conclusions
Summary:
◮ I showed how counting statistics in the CUE are related to log-correlated Gaussian fields.
◮ The corresponding exponentials were constructed in terms of Gaussian multiplicative chaos.
◮ We also proved that the same results hold when the CUE is replaced with the sine process.
What about other Coulomb gas type systems or more general point processes? How does the characteristic polynomial behave?
Thank you.