  1. Isotropic local laws for random matrices. Antti Knowles, University of Geneva. With Y. He and R. Rosenthal.

  2. Random matrices. Let $H \in \mathbb{C}^{N \times N}$ be a large Hermitian random matrix, normalized so that $\|H\| \asymp 1$. Some motivations: quantum mechanics (Hamilton operator of a disordered quantum system: heavy nuclei, itinerant electrons in metals, quantum dots, ...); multivariate statistics (sample covariance matrix). Goal: analysis of the eigenvalues $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$ and eigenvectors $u_1, u_2, \dots, u_N \in S^{N-1}$ of $H$.

  3. The key questions. (1) Global eigenvalue distribution: asymptotic behaviour of the empirical distribution $\frac{1}{N} \sum_{i=1}^{N} \delta_{\lambda_i}$. [Figure: the eigenvalues $\lambda_N \leq \dots \leq \lambda_{i+1} \leq \lambda_i \leq \dots \leq \lambda_1$ on the real line.] (2) Local eigenvalue distribution: asymptotic behaviour of individual eigenvalues; examples: the distribution of the gaps $\lambda_i - \lambda_{i+1}$ or of the largest eigenvalue $\lambda_1$. (3) Distribution of eigenvectors: localization / delocalization of the eigenvectors; distribution of the components $\langle v, u_i \rangle$.

  4. Global and local laws. The Green function $G(z) := (H - zI)^{-1}$ is the right tool to address questions (1)–(3). Writing $z = E + i\eta \in \mathbb{C}^+$, we have
$$\operatorname{Im} \frac{1}{N} \operatorname{Tr} G(z) = \frac{1}{N} \sum_{i=1}^{N} \frac{\eta}{(\lambda_i - E)^2 + \eta^2}.$$
[Figure: the eigenvalues $\lambda_i$ near the energy $E$, resolved on the scale $\eta$.] Observation: $\eta = \operatorname{Im} z$ is the spectral resolution. Global law: control of $G(z)$ for $\eta \asymp 1$. Local law: control of $G(z)$ for $\eta \gg \frac{1}{N}$. To answer questions (2) and (3), one needs a local law.
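A minimal numerical sketch of this observation (not from the talk; the GOE model, the size $N = 2000$, and the bulk energy $E = 0.5$ are illustrative choices): as $\eta$ shrinks from the global scale $\eta \asymp 1$ towards $\frac{1}{N}$, $\operatorname{Im} \frac{1}{N} \operatorname{Tr} G(E + i\eta)$ approaches $\pi$ times the limiting spectral density at $E$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000

# GOE-like Wigner matrix, normalized so that the spectrum is ~ [-2, 2].
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)
eigs = np.linalg.eigvalsh(H)

def im_trace_green(E, eta):
    # Im (1/N) Tr G(E + i*eta) = (1/N) sum_i eta / ((lambda_i - E)^2 + eta^2)
    return np.mean(eta / ((eigs - E) ** 2 + eta ** 2))

E = 0.5
rho = np.sqrt(4 - E ** 2) / (2 * np.pi)  # semicircle density at E
for eta in [1.0, 0.1, 10 / N]:  # global scale down to just above 1/N
    print(f"eta = {eta:8.4f}: Im m_N = {im_trace_green(E, eta):.4f}, "
          f"pi * rho(E) = {np.pi * rho:.4f}")
```

At $\eta \asymp 1$ the trace only sees a smoothed version of the spectrum; just above the scale $\frac{1}{N}$ it resolves the local eigenvalue density.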

  5. Deterministic equivalent of the Green function. One usually needs control of $G(z)$ as a matrix, and not just of $\frac{1}{N} \operatorname{Tr} G(z)$. Goal: find a deterministic matrix $M(z)$, the deterministic equivalent of $G(z)$, such that $G(z) - M(z)$ is small for $\eta \gg \frac{1}{N}$ with high probability. What does "$G - M$ small" mean? Canonical notions of smallness (operator topologies): control of (i) $|\langle v, (G - M) w \rangle|$, (ii) $|(G - M) v|$, (iii) $\|G - M\|$ for all deterministic $v, w \in S^{N-1}$. In fact, already for $H = \mathrm{GUE}$ it is easy to see that (ii) and (iii) blow up. Control of (i) is the strongest one can hope for (isotropic control of $G$).
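A one-line reason, added here for completeness, why (ii) and (iii) blow up: in the bulk the eigenvalue spacing is $\asymp \frac{1}{N}$, so for $\eta \gg \frac{1}{N}$ some eigenvalue satisfies $|\lambda_i - E| \lesssim \eta$, and testing $G - M$ against its eigenvector $u_i$ gives
$$\|G - M\| \;\geq\; |(G - M)\, u_i| \;\geq\; \frac{1}{\sqrt{(\lambda_i - E)^2 + \eta^2}} - \|M\| \;\asymp\; \frac{1}{\eta},$$
which diverges for $\eta \ll 1$ since $\|M\| \asymp 1$, while the isotropic quantity (i) can stay small.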

  6. Example: Wigner matrices. The entries $(H_{ij} : 1 \leq i \leq j \leq N)$ are independent and satisfy $\mathbb{E} H_{ij} = 0$, $\mathbb{E} |H_{ij}|^2 = \frac{1}{N}$. The deterministic equivalent is $M(z) = m(z) I$, where
$$m(z) := \frac{1}{2\pi} \int_{-2}^{2} \frac{\sqrt{4 - x^2}}{x - z} \, dx$$
is the Stieltjes transform of the semicircle law.
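For reference (a standard fact, not on the slide), this integral can be evaluated in closed form: $m$ solves the quadratic self-consistent equation of the semicircle law,
$$m(z)^2 + z\, m(z) + 1 = 0, \qquad m(z) = \frac{-z + \sqrt{z^2 - 4}}{2},$$
with the branch of the square root chosen so that $m(z) \to 0$ as $|z| \to \infty$, equivalently $\operatorname{Im} m(z) > 0$ for $z \in \mathbb{C}^+$.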

  7. Some history. Local law $\eta \gg \frac{1}{N}$ for Wigner matrices: (a) $\operatorname{Tr}(G - M)$ [Erdős, Schlein, Yau; 2009]; (b) $(G - M)_{ij}$ [Erdős, Yau, Yin; 2010]; (c) $\langle v, (G - M) w \rangle$ [Knowles, Yin; 2011]. More general models (sparse random matrices, covariance matrices, deformed matrices, ...): [Ajanki, Erdős, Knowles, Krüger, Lee, Schnelli, Yau, Yin, ...]. Two key steps in all proofs. Deterministic step: stability of the self-consistent equation; identify $M$ as the solution of a self-consistent equation $\Pi(M) = 0$, and prove that $\Pi(Q) \approx 0 \implies Q \approx M$. Stochastic step: derivation of the self-consistent equation; prove that $\Pi(G) \approx 0$ with high probability.

  8. Derivation of the self-consistent equation: folklore. Use Schur's complement formula to write
$$G_{ii} = \frac{1}{-z - H_{ii} - \sum_{k,l \neq i} H_{ik} G^{(i)}_{kl} H_{li}},$$
and large deviation estimates to show, with high probability,
$$H_{ii} \approx 0, \qquad \sum_{k,l \neq i} H_{ik} G^{(i)}_{kl} H_{li} \approx \frac{1}{N} \sum_{k \neq i} G^{(i)}_{kk} \approx \frac{1}{N} \sum_{k} G_{kk}.$$
Average over $i$ (the resulting equation is spelled out below). Works very well for Wigner matrices and some generalizations thereof. Problems: (i) the matrix entries have to be independent; (ii) the expectation of $H$ has to be diagonal; (iii) it does not give control of $\langle v, (G - M) w \rangle$; this requires an additional, difficult, step.
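The averaging step referenced above, spelled out: inserting the large deviation estimates into Schur's formula and writing $m_N(z) := \frac{1}{N} \operatorname{Tr} G(z)$,
$$G_{ii} \;\approx\; \frac{1}{-z - m_N(z)} \quad \text{for each } i, \qquad \text{so averaging over } i \text{ gives} \qquad m_N \;\approx\; \frac{1}{-z - m_N},$$
i.e. $m_N^2 + z\, m_N + 1 \approx 0$, which is the self-consistent equation solved by the semicircle Stieltjes transform $m(z)$ from slide 6.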

  9. Alternative approach [He, K, Rosenthal; 2016]. A new way to derive self-consistent equations, overcoming all of the above problems: (i) admits a very general relationship between the matrix entries and the independent random variables (can also handle models with short-range correlations [Erdős, Krüger, Schröder; 2017]); (ii) completely insensitive to the expectation of $H$; (iii) yields control of $\langle v, (G - M) w \rangle$ from the outset. Key idea: instead of working on entire rows and columns (Schur's formula), work on individual entries (resolvent / cumulant expansion).

  10. Resolvent / cumulant expansion. Resolvent expansion in individual entries:
$$(H^{(ij)})_{kl} := \mathbf{1}_{\{i,j\} \neq \{k,l\}} H_{kl}, \qquad G^{(ij)}(z) := (H^{(ij)} - zI)^{-1}.$$
Starting point: the trivial identity $I + zG = HG$. Then write
$$\mathbb{E}(HG)_{ii} = \mathbb{E} \sum_j H_{ij} G_{ji} = \mathbb{E} \sum_j H_{ij} \Bigl( G^{(ij)}_{ji} - G^{(ij)}_{jj} H_{ji} G^{(ij)}_{ii} - G^{(ij)}_{ji} H_{ij} G^{(ij)}_{ji} + \cdots \Bigr) = -\frac{1}{N} \mathbb{E} \sum_j G^{(ij)}_{jj} G^{(ij)}_{ii} + \cdots = -\frac{1}{N} \mathbb{E} \sum_j G_{jj} G_{ii} + \cdots.$$
Here the term $H_{ij} G^{(ij)}_{ji}$ vanishes in expectation because $G^{(ij)}$ is independent of $H_{ij}$, and $\mathbb{E} |H_{ij}|^2 = \frac{1}{N}$ produces the factor $\frac{1}{N}$. Note: the resolvent expansion is used twice: $G \mapsto G^{(ij)} \mapsto G$.
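The mechanism behind this expansion is the standard resolvent identity (stated here for clarity; for $i \neq j$, $H - H^{(ij)}$ contains only the entries $H_{ij}$ and $H_{ji}$):
$$G = G^{(ij)} - G^{(ij)} \bigl( H - H^{(ij)} \bigr) G, \qquad H - H^{(ij)} = H_{ij} E^{(ij)} + H_{ji} E^{(ji)},$$
where $E^{(ij)}$ denotes the matrix with a single $1$ in entry $(i,j)$. Iterating this identity in the entries $H_{ij}$, which are of size $O(N^{-1/2})$, generates the series above; applying it once more to return from $G^{(ij)}$ to $G$ is the step $G \mapsto G^{(ij)} \mapsto G$.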

  11. The resulting algebra is beautifully summarized by the cumulant expansion
$$\mathbb{E}[h \cdot f(h)] = \sum_{k=0}^{\ell} \frac{1}{k!} \, C_{k+1}(h) \, \mathbb{E}[f^{(k)}(h)] + R_\ell, \qquad C_k(h) := \partial_t^k \big|_{t=0} \log \mathbb{E}[e^{th}]$$
[Khorunzhy, Khoruzhenko, Pastur; 1996]. It performs essentially the same as the resolvent expansion, but more tidily. In applications, $h = H_{ij}$ and $f(h)$ is a polynomial of resolvents. For example, the previous resolvent calculation is replaced by
$$\mathbb{E}(HG)_{ii} = \sum_j \mathbb{E}[H_{ij} G_{ji}] = \sum_j \sum_{k=0}^{\ell} \frac{1}{k!} \, C_{k+1}(H_{ij}) \, \mathbb{E}\biggl[ \Bigl( \frac{\partial}{\partial H_{ij}} \Bigr)^{k} G_{ji} \biggr] + \cdots = \frac{1}{N} \sum_j \mathbb{E}\biggl[ \frac{\partial G_{ji}}{\partial H_{ij}} \biggr] + \cdots = \frac{1}{N} \sum_j \mathbb{E}[- G_{jj} G_{ii} - G_{ji} G_{ij}] + \cdots.$$
The second term is small by Cauchy–Schwarz and the Ward identity $\sum_j |G_{ij}|^2 = \frac{1}{\eta} \operatorname{Im} G_{ii}$.
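A minimal numerical sketch of the expansion in its simplest case (my addition; the test function $f$ and the variance are arbitrary choices): for a centred Gaussian $h$ all cumulants beyond the second vanish, so the expansion terminates after $k = 1$ and reduces to Stein's lemma $\mathbb{E}[h f(h)] = C_2(h)\, \mathbb{E}[f'(h)]$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
h = sigma * rng.standard_normal(1_000_000)  # centred Gaussian samples

f = lambda x: x ** 3 + np.sin(x)            # smooth test function
f_prime = lambda x: 3 * x ** 2 + np.cos(x)  # its derivative

lhs = np.mean(h * f(h))                     # E[h f(h)]
rhs = sigma ** 2 * np.mean(f_prime(h))      # C_2(h) E[f'(h)]
print(f"E[h f(h)] = {lhs:.5f},  C_2(h) E[f'(h)] = {rhs:.5f}")
```

For non-Gaussian $h$ the higher cumulants $C_{k+1}(h)$, $k \geq 2$, contribute the corrections collected in the $+ \cdots$ terms above.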

  12. Sketch of results. Illustrative model: a general mean-field model with independent entries. The entries $(H_{ij} : 1 \leq i \leq j \leq N)$ are independent and satisfy $\operatorname{Var}(H_{ij}) = O(\frac{1}{N})$. Split $H = W + A$, where $A := \mathbb{E} H$, and define the map
$$\Pi_z(M) := I + zM + S(M) M - AM, \qquad S(M) := \mathbb{E}[W M W].$$
Then for $z \in \mathbb{C}^+$ the equation $\Pi_z(\cdot) = 0$ has a unique solution $M(z)$ with positive imaginary part, the deterministic equivalent of $G$ for this model. We prove that for all $\eta \gg \frac{1}{N}$
$$|\langle v, \Pi(G) w \rangle| \;\prec\; \sqrt{\frac{\|\operatorname{Im} M\|}{N\eta}} + \frac{1}{N\eta}.$$
(Optimal in the bulk and at the edges.) This deals with the stochastic step, the derivation of the self-consistent equation. The proof of the local law is concluded by the deterministic step, a stability analysis of the self-consistent equation [Lee, Schnelli; 2013], [Ajanki, Erdős, Krüger; 2016].
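A sanity check, added here for the complex Hermitian Wigner case, that this framework reproduces slide 6: there $A = 0$ and $S(M) = \frac{1}{N} (\operatorname{Tr} M)\, I$, so the ansatz $M = m I$ gives
$$\Pi_z(m I) = I + z\, m\, I + m^2 I = \bigl( 1 + z\, m + m^2 \bigr) I,$$
and $\Pi_z = 0$ is exactly the semicircle equation $m^2 + z m + 1 = 0$, whose solution with positive imaginary part is the Stieltjes transform $m(z)$.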

  13. How to start the proof. Let $P_{vw} := \langle v, \Pi(G) w \rangle$, where
$$\Pi(G) = I + zG + S(G) G - AG = WG + S(G) G.$$
By Markov's inequality, it suffices to estimate
$$\mathbb{E} |P_{vw}|^{2p} = \mathbb{E}\bigl[ (WG)_{vw} \, P_{vw}^{p-1} \, \bar{P}_{vw}^{\,p} \bigr] + \mathbb{E}\bigl[ (S(G) G)_{vw} \, P_{vw}^{p-1} \, \bar{P}_{vw}^{\,p} \bigr].$$
Apply the cumulant expansion to the first term by writing
$$(WG)_{vw} = \sum_{i,j} \bar{v}_i W_{ij} G_{jw}.$$
The leading term from $k = 1$,
$$-\,\mathbb{E}\Bigl[ \sum_{i,j} \bar{v}_i \, C_2(W_{ij}) \, G_{jj} G_{iw} \, P_{vw}^{p-1} \, \bar{P}_{vw}^{\,p} \Bigr],$$
cancels the term $\mathbb{E}\bigl[ (S(G) G)_{vw} P_{vw}^{p-1} \bar{P}_{vw}^{\,p} \bigr]$. Everything else has to be estimated: the main work!
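The computation behind the $k = 1$ term, spelled out with the derivative convention of slide 11:
$$\frac{\partial G_{jw}}{\partial W_{ij}} = -\, G_{jj} \, G_{iw} \;-\; G_{ji} \, G_{jw},$$
so the $k = 1$ term of the cumulant expansion applied to $\sum_{i,j} \bar{v}_i W_{ij} G_{jw}$ carries the weight $C_2(W_{ij})$; its first summand produces the displayed leading term, while the second is lower order thanks to the Ward identity.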
