Isotropic local laws for random matrices – Antti Knowles, University of Geneva – PowerPoint PPT Presentation



SLIDE 1

Isotropic local laws for random matrices

Antti Knowles University of Geneva With Y. He and R. Rosenthal

SLIDE 2

Random matrices

Let H ∈ ℂ^{N×N} be a large Hermitian random matrix, normalized so that ‖H‖ ≍ 1. Some motivations:

Quantum mechanics: Hamilton operator of a disordered quantum system

(heavy nuclei, itinerant electrons in metals, quantum dots, ...).

Multivariate statistics: sample covariance matrix.

Goal: Analysis of eigenvalues λ1 ≥ λ2 ≥ · · · ≥ λN and eigenvectors u1, u2, . . . , uN ∈ S^{N−1} of H.
SLIDE 3

The key questions

(1) Global eigenvalue distribution. Asymptotic behaviour of the empirical distribution

\[ \frac{1}{N} \sum_{i=1}^{N} \delta_{\lambda_i} . \]

(2) Local eigenvalue distribution. Asymptotic behaviour of individual eigenvalues. Examples: distribution of the gaps λi − λi+1 or of the largest eigenvalue λ1.

(3) Distribution of eigenvectors. Localization / delocalization of the eigenvectors. Distribution of the components ⟨v, ui⟩.
SLIDE 4

Global and local laws

The Green function G(z) := (H − zI)^{−1} is the right tool to address the questions (1) – (3). Writing z = E + iη ∈ ℂ⁺, we have

\[ \operatorname{Im} \frac{1}{N} \operatorname{Tr} G(z) = \frac{1}{N} \sum_{i=1}^{N} \frac{\eta}{(\lambda_i - E)^2 + \eta^2} . \]

Observation: η = Im z is the spectral resolution.

Global law: control of G(z) for η ≍ 1. Local law: control of G(z) for η ≫ 1/N.

To answer the questions (2) and (3), one needs a local law.
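This identity is just the spectral decomposition of G; here is a quick numerical sanity check (my illustration, not part of the slides; matrix size and spectral parameter are arbitrary):

```python
import numpy as np

# Spectral decomposition of G(z) = (H - zI)^{-1} for Hermitian H gives
#   Im (1/N) Tr G(z) = (1/N) sum_i eta / ((lambda_i - E)^2 + eta^2),
# i.e. the empirical eigenvalue measure smoothed on scale eta.
rng = np.random.default_rng(3)
N = 100
X = rng.standard_normal((N, N))
H = (X + X.T) / np.sqrt(2 * N)         # a real symmetric Wigner matrix
E, eta = 0.2, 0.05
z = E + 1j * eta

G = np.linalg.inv(H - z * np.eye(N))
lam = np.linalg.eigvalsh(H)

lhs = (np.trace(G) / N).imag
rhs = np.mean(eta / ((lam - E) ** 2 + eta ** 2))
print(abs(lhs - rhs))                   # agree up to rounding error
```

The two sides agree to machine precision: the identity is exact, not asymptotic; only its usefulness depends on the scale η.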

SLIDE 5

Deterministic equivalent of Green function

One usually needs control of G(z) as a matrix, and not just of (1/N) Tr G(z).

Goal: There is a deterministic matrix M(z), the deterministic equivalent of G(z), such that G(z) − M(z) is small for η ≫ 1/N with high probability.

What does “G − M small” mean? Canonical notions of smallness (operator topologies): control of

(i) |⟨v, (G − M)w⟩|, (ii) ‖(G − M)v‖, (iii) ‖G − M‖

for all deterministic v, w ∈ S^{N−1}. In fact, already for H = GUE it is easy to see that (ii) and (iii) blow up. Control of (i) is the strongest one can hope for (isotropic control of G).
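A numerical illustration of the blow-up (my sketch; the matrix size, seed, and test vector are arbitrary choices): for a GUE-like matrix at small η, the operator norm of G − M is of order 1/η, while the isotropic quantity stays bounded.

```python
import numpy as np

# For small eta, ||G - M|| ~ 1/eta (an eigenvalue of H lies within ~1/N
# of E, making one resolvent eigenvalue huge), while |<v,(G-M)v>| for a
# fixed deterministic v remains small: only (i) can hold down to eta ~ 1/N.
rng = np.random.default_rng(4)
N = 1000
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / np.sqrt(4 * N)   # GUE-like, E|H_ij|^2 = 1/N
eta = 10 / N                             # just above the local scale 1/N
z = 0.0 + 1j * eta

G = np.linalg.inv(H - z * np.eye(N))
s = np.sqrt(z * z - 4)
m = (-z + s) / 2
if m.imag < 0:                           # semicircle branch with Im m > 0
    m = (-z - s) / 2
D = G - m * np.eye(N)

v = np.ones(N) / np.sqrt(N)              # fixed deterministic unit vector
op_norm = np.linalg.norm(D, 2)           # largest singular value
iso = abs(v.conj() @ D @ v)
print(op_norm, iso)                      # op_norm ~ 1/eta, iso = O(1)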

SLIDE 6

Example: Wigner matrices

The entries (Hij : 1 ≤ i ≤ j ≤ N) are independent and satisfy E Hij = 0, E|Hij|² = 1/N. The deterministic equivalent is M(z) = m(z)I, where

\[ m(z) := \int_{-2}^{2} \frac{\sqrt{4 - x^2}}{2\pi} \, \frac{1}{x - z} \, dx \]

is the Stieltjes transform of the semicircle law.
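A numerical check of this statement (my illustration; the size and spectral parameter are arbitrary). The closed form m(z) = (−z + √(z² − 4))/2, with the branch satisfying Im m(z) > 0, is the standard evaluation of the integral above.

```python
import numpy as np

# Compare (1/N) Tr G(z) for a complex Hermitian Wigner matrix with the
# semicircle Stieltjes transform m(z) = (-z + sqrt(z^2 - 4)) / 2,
# taking the branch with Im m(z) > 0.
rng = np.random.default_rng(0)
N = 1000
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / np.sqrt(4 * N)   # Hermitian, E H_ij = 0, E|H_ij|^2 = 1/N
z = 0.5 + 0.1j

G = np.linalg.inv(H - z * np.eye(N))
trace_G = np.trace(G) / N

s = np.sqrt(z * z - 4)
m = (-z + s) / 2
if m.imag < 0:                           # select the branch with Im m > 0
    m = (-z - s) / 2

print(abs(trace_G - m))                  # O(1/(N eta)) fluctuation
```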

SLIDE 7

Some history

Local law η ≫ 1/N for Wigner matrices:

(a) Tr(G − M) [Erdős, Schlein, Yau; 2009]

(b) (G − M)ij [Erdős, Yau, Yin; 2010]

(c) ⟨v, (G − M)w⟩ [Knowles, Yin; 2011]

More general models (sparse random matrices, covariance matrices, deformed matrices, ...): [Ajanki, Erdős, Knowles, Krüger, Lee, Schnelli, Yau, Yin, . . . ]

Two key steps in all proofs:

Deterministic step: stability of self-consistent equation.

Identify M as the solution of a self-consistent equation Π(M) = 0. Prove that Π(Q) ≈ 0 ⟹ Q ≈ M.

Stochastic step: derivation of the self-consistent equation.

Prove that Π(G) ≈ 0 with high probability.

SLIDE 8

Derivation of the self-consistent equation: folklore

Use Schur’s complement formula to write

\[ G_{ii} = \frac{1}{-z - H_{ii} - \sum_{k,l \neq i} H_{ik} G^{(i)}_{kl} H_{li}} , \]

and large deviation estimates to show, with high probability, Hii ≈ 0 and

\[ \sum_{k,l \neq i} H_{ik} G^{(i)}_{kl} H_{li} \;\approx\; \sum_{k \neq i} H_{ik} G^{(i)}_{kk} H_{ki} \;\approx\; \frac{1}{N} \sum_{k \neq i} G^{(i)}_{kk} \;\approx\; \frac{1}{N} \sum_{k} G_{kk} . \]

Average over i. Works very well for Wigner matrices and some generalizations thereof.

Problems: (i) Matrix entries have to be independent. (ii) Expectation of H has to be diagonal. (iii) Does not give control of ⟨v, (G − M)w⟩. This requires an additional, difficult, step.
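In the Wigner case, averaging G_ii ≈ 1/(−z − (1/N)∑_k G_kk) over i yields the scalar self-consistent equation m = 1/(−z − m). A sketch of solving it numerically (my illustration; the iteration count and z are arbitrary):

```python
import numpy as np

# Solve the Wigner self-consistent equation m = 1/(-z - m) by fixed-point
# iteration and compare with the closed-form semicircle Stieltjes
# transform. For Im z > 0 the iteration converges to the stable root.
z = 0.5 + 0.1j
m = -1 / z                     # initial guess from m(z) ~ -1/z at large |z|
for _ in range(500):
    m = 1 / (-z - m)

s = np.sqrt(z * z - 4)
m_exact = (-z + s) / 2
if m_exact.imag < 0:           # branch with Im m > 0
    m_exact = (-z - s) / 2

print(abs(m - m_exact))        # essentially machine precision
```

This is exactly the "deterministic step" in miniature: the equation has two roots, and stability singles out the one with positive imaginary part.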

SLIDE 9

Alternative approach [He, K, Rosenthal; 2016]

New way to derive self-consistent equations, overcoming all of the above problems:

(i) Admits a very general relationship between matrix entries and the independent random variables. (Can also handle models with short-range correlations, [Erdős, Krüger, Schröder; 2017].)

(ii) Completely insensitive to the expectation of H.

(iii) Yields control of ⟨v, (G − M)w⟩ from the outset.

Key idea: instead of working on entire rows and columns (Schur’s formula), work on individual entries (resolvent/cumulant expansion).

SLIDE 10

Resolvent / cumulant expansion

Resolvent expansion in individual entries:

\[ (H^{(ij)})_{kl} := \mathbf{1}_{\{i,j\} \neq \{k,l\}} H_{kl} , \qquad G^{(ij)}(z) := (H^{(ij)} - zI)^{-1} . \]

Starting point: the trivial identity I + zG = HG. Then write

\[ \mathbb{E}(HG)_{ii} = \mathbb{E} \sum_{j} H_{ij} G_{ji} = \mathbb{E} \sum_{j} H_{ij} \Big( G^{(ij)}_{ji} - G^{(ij)}_{jj} H_{ji} G^{(ij)}_{ii} - G^{(ij)}_{ji} H_{ij} G^{(ij)}_{ji} + \cdots \Big) \]
\[ = -\mathbb{E} \sum_{j} \frac{1}{N} G^{(ij)}_{jj} G^{(ij)}_{ii} + \cdots = -\mathbb{E} \sum_{j} \frac{1}{N} G_{jj} G_{ii} + \cdots . \]

Note: the resolvent expansion is used twice: G → G^{(ij)} → G.

SLIDE 11

The resulting algebra is beautifully summarized by the cumulant expansion

\[ \mathbb{E}[h \, f(h)] = \sum_{k=0}^{\ell} \frac{1}{k!} \, \mathcal{C}_{k+1}(h) \, \mathbb{E}[f^{(k)}(h)] + R_{\ell} , \qquad \mathcal{C}_k(h) := \partial_t^k \big|_{t=0} \log \mathbb{E}[e^{th}] . \]

[Khorunzhy, Khoruzhenko, Pastur; 1996]

It performs essentially the same role as the resolvent expansion, but more tidily. In applications, h = Hij and f(h) is a polynomial of resolvents. For example, the previous resolvent calculation is replaced by

\[ \mathbb{E}(HG)_{ii} = \sum_{j} \mathbb{E}[H_{ij} G_{ji}] = \sum_{j} \sum_{k=0}^{\ell} \frac{1}{k!} \, \mathcal{C}_{k+1}(H_{ij}) \, \mathbb{E}\big[ \partial_{H_{ij}}^k G_{ji} \big] + \cdots \]
\[ = \sum_{j} \frac{1}{N} \, \mathbb{E}\big[ \partial_{H_{ij}} G_{ji} \big] + \cdots = \sum_{j} \frac{1}{N} \, \mathbb{E}[ -G_{jj} G_{ii} - G_{ji} G_{ij} ] + \cdots . \]

The second term is small by Cauchy–Schwarz and the Ward identity

\[ \sum_{j} |G_{ij}|^2 = \frac{1}{\eta} \operatorname{Im} G_{ii} . \]
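The Ward identity is an exact algebraic fact about resolvents of Hermitian matrices; here is a direct numerical check (my illustration, arbitrary matrix and spectral parameter):

```python
import numpy as np

# Ward identity: sum_j |G_ij|^2 = (1/eta) Im G_ii, which follows from
# (G G^*)_ii = ((H-z)^{-1}(H-zbar)^{-1})_ii for Hermitian H, z = E + i eta.
rng = np.random.default_rng(1)
N = 50
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / 2                 # any Hermitian matrix works
eta = 0.3
z = 0.7 + 1j * eta

G = np.linalg.inv(H - z * np.eye(N))
i = 0
lhs = np.sum(np.abs(G[i, :]) ** 2)
rhs = G[i, i].imag / eta
print(abs(lhs - rhs))                     # ~ 0 up to rounding
```

The identity trades a sum of N off-diagonal terms for a single diagonal entry, which is what makes the Cauchy–Schwarz step effective.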

SLIDE 12

Sketch of results

Illustrative model: a general mean-field model with independent entries. The entries (Hij : 1 ≤ i ≤ j ≤ N) are independent and satisfy Var(Hij) = O(1/N).

Split H = W + A where A := EH, and define the map

\[ \Pi_z(M) := I + zM + \mathcal{S}(M) M - AM , \qquad \mathcal{S}(M) := \mathbb{E}[W M W] . \]

Then for z ∈ ℂ⁺ the equation Πz(·) = 0 has a unique solution M(z) with positive imaginary part – the deterministic equivalent of G for this model. We prove that for all η ≫ 1/N

\[ |\langle v, \Pi_z(G) w \rangle| \;\prec\; \sqrt{\frac{\operatorname{Im} M}{N\eta}} + \frac{1}{N\eta} . \]

(Optimal in the bulk and at the edges.) This deals with the stochastic step – the derivation of the self-consistent equation. The proof of the local law is concluded by the deterministic step – stability analysis of the self-consistent equation [Lee, Schnelli; 2013], [Ajanki, Erdős, Krüger; 2016].
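A sketch of computing the deterministic equivalent for this model (my illustration; the variance profile, the expectation A, and all parameter choices are made up): rewrite Π_z(M) = 0 as M = (A − zI − S(M))^{−1} and iterate. For entrywise-independent complex entries with E[W_ij²] = 0, S(M) acts diagonally, S(M)_ii = ∑_j Var(W_ij) M_jj, which is what the helper below assumes.

```python
import numpy as np

# Solve the self-consistent equation Pi_z(M) = 0, i.e.
#   M = (A - z I - S(M))^{-1},
# by fixed-point iteration, then verify that the residual
#   Pi_z(M) = I + z M + S(M) M - A M  is small.
rng = np.random.default_rng(2)
N = 60
A = np.diag(rng.uniform(-0.5, 0.5, N))        # deterministic expectation
Svar = rng.uniform(0.5, 1.5, (N, N)) / N      # variance profile of W
Svar = (Svar + Svar.T) / 2                    # Var(W_ij) = Var(W_ji)

def S(M):
    # independent complex entries with E[W_ij^2] = 0: S(M) is diagonal,
    # S(M)_ii = sum_j Var(W_ij) M_jj
    return np.diag(Svar @ np.diag(M))

z = 0.3 + 2.0j                                 # large eta: iteration contracts
M = np.diag(np.full(N, -1 / z))                # initial guess M ~ -I/z
for _ in range(300):
    M = np.linalg.inv(A - z * np.eye(N) - S(M))

residual = np.eye(N) + z * M + S(M) @ M - A @ M
print(np.abs(residual).max())                  # ~ 0 at the fixed point
```

For small η the plain iteration may fail to contract; the stability analysis cited above is precisely what controls that regime.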

SLIDE 13

How to start the proof

Let Pvw := ⟨v, Πz(G)w⟩, where

\[ \Pi_z(G) = I + zG + \mathcal{S}(G) G - AG = WG + \mathcal{S}(G) G . \]

By Markov’s inequality, it suffices to estimate

\[ \mathbb{E}|P_{vw}|^{2p} = \mathbb{E}\big[ (WG)_{vw} \, P_{vw}^{p-1} \, \overline{P_{vw}}^{\,p} \big] + \mathbb{E}\big[ (\mathcal{S}(G)G)_{vw} \, P_{vw}^{p-1} \, \overline{P_{vw}}^{\,p} \big] . \]

Apply the cumulant expansion to the first term by writing

\[ (WG)_{vw} = \sum_{i,j} v_i W_{ij} G_{jw} . \]

The leading term from k = 1,

\[ -\mathbb{E}\Big[ \sum_{i,j} v_i \, \mathcal{C}_2(W_{ij}) \, G_{jj} G_{iw} \; P_{vw}^{p-1} \, \overline{P_{vw}}^{\,p} \Big] , \]

cancels the term

\[ \mathbb{E}\big[ (\mathcal{S}(G)G)_{vw} \, P_{vw}^{p-1} \, \overline{P_{vw}}^{\,p} \big] . \]

Everything else has to be estimated – the main work!
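The Markov step, spelled out (a standard reduction, not on the slide; the control parameter $\Phi$ and constant $C_p$ are my notation): for any $t > 0$,

```latex
\[
  \mathbb{P}\big( |P_{vw}| > t \big)
  \;\le\; \frac{\mathbb{E}|P_{vw}|^{2p}}{t^{2p}} ,
\]
```

so if $\mathbb{E}|P_{vw}|^{2p} \le (C_p \Phi)^{2p}$ for every fixed $p$, then choosing $t = N^{\varepsilon}\Phi$ and $p$ large shows $|P_{vw}| \le N^{\varepsilon}\Phi$ with probability at least $1 - N^{-D}$ for any fixed $D$, i.e. $|P_{vw}| \prec \Phi$. This is why controlling arbitrarily high moments of $P_{vw}$ suffices.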