Transfer matrix approach to 1d random band matrices


  1. Transfer matrix approach to 1d random band matrices
Tatyana Shcherbina* (Princeton University), based on the joint papers with M. Shcherbina
"Integrability and Randomness in Mathematical Physics and Geometry", CIRM, April 8-12, 2019
*Supported in part by NSF grant DMS-1700009
T. Shcherbina (PU) 09/04/2019 1 / 27

  2. Local statistics, localization and delocalization
One of the key physical parameters of such models is the localization length, which describes the typical length scale of the eigenvectors of random matrices. The system is called delocalized if the localization length ℓ is comparable with the matrix size, and localized otherwise.
Localized eigenvectors: lack of transport (insulators), and Poisson local spectral statistics (typically strong disorder).
Delocalized eigenvectors: diffusion (electric conductors), and GUE/GOE local statistics (typically weak disorder).
Questions about the order of the localization length are closely related to the universality conjecture for the bulk local regime of random matrix theory.

  3. From the RMT point of view, the main objects of the local regime are the k-point correlation functions R_k (k = 1, 2, ...), which can be defined by the equalities
$$ \mathbf{E}\Big\{ \sum_{j_1 \ne \dots \ne j_k} \varphi_k\big(\lambda_{j_1}^{(N)},\dots,\lambda_{j_k}^{(N)}\big) \Big\} = \int_{\mathbb{R}^k} \varphi_k(\lambda_1,\dots,\lambda_k)\, R_k(\lambda_1,\dots,\lambda_k)\, d\lambda_1 \dots d\lambda_k, $$
where φ_k : R^k → C is bounded, continuous and symmetric in its arguments.
Universality conjecture in the bulk of the spectrum (Hermitian case, delocalized eigenvectors) (Wigner–Dyson):
$$ (N\rho(E))^{-k}\, R_k\big(\{E + \xi_j / N\rho(E)\}\big) \;\xrightarrow[N\to\infty]{}\; \det\Big\{ \frac{\sin \pi(\xi_i - \xi_j)}{\pi(\xi_i - \xi_j)} \Big\}_{i,j=1}^{k}. $$
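The Wigner–Dyson sine-kernel determinant above is easy to evaluate numerically. A minimal sketch (the function name and test points are illustrative, not from the talk):

```python
import numpy as np

def sine_kernel_det(xs):
    """det{ sin(pi(x_i - x_j)) / (pi(x_i - x_j)) }_{i,j}: the conjectured
    limit of the rescaled k-point correlation function in the bulk."""
    xs = np.asarray(xs, dtype=float)
    # np.sinc(x) is sin(pi x)/(pi x), exactly the kernel entries
    k = np.sinc(np.subtract.outer(xs, xs))
    return np.linalg.det(k)

print(sine_kernel_det([0.0, 0.5]))  # two points half a mean spacing apart, ≈ 0.595
```

The determinant vanishes as two of the ξ's merge, which is the level repulsion characteristic of GUE statistics.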

  4. Wigner matrices, β-ensembles with β = 1, 2, sample covariance matrices, etc.: delocalization, GUE/GOE local spectral statistics.
Anderson model (random Schrödinger operators): H_RS = −Δ + V, where Δ is the discrete Laplacian in the lattice box Λ = [1, n]^d ∩ Z^d, and V is a random potential (i.e. a diagonal matrix with i.i.d. entries). In d = 1: a narrow band matrix with i.i.d. diagonal
$$ H_{RS} = \begin{pmatrix} V_1 & 1 & 0 & 0 & \dots & 0 \\ 1 & V_2 & 1 & 0 & \dots & 0 \\ 0 & 1 & V_3 & 1 & \dots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & 0 & 1 & V_{n-1} & 1 \\ 0 & \dots & 0 & 0 & 1 & V_n \end{pmatrix} $$
Localization, Poisson local spectral statistics (Fröhlich, Spencer, Aizenman, Molchanov, ...).
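In d = 1 the operator above is just a random tridiagonal matrix, so it is simple to generate. A sketch, assuming uniform disorder on [−1, 1] (the potential's distribution is not specified on the slide):

```python
import numpy as np

def anderson_1d(n, disorder=1.0, seed=0):
    """H_RS = -Laplacian + V on [1, n]: hopping 1 on the two off-diagonals,
    i.i.d. potential V on the diagonal (the constant diagonal part of the
    discrete Laplacian is absorbed into V here)."""
    rng = np.random.default_rng(seed)
    v = disorder * rng.uniform(-1.0, 1.0, size=n)
    off = np.ones(n - 1)
    return np.diag(v) + np.diag(off, 1) + np.diag(off, -1)

h = anderson_1d(8)
```

For strong disorder the eigenvectors of `h` are exponentially localized; a common numerical diagnostic is the inverse participation ratio of each eigenvector.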

  5. Random band matrices
Can be defined in any dimension, but we will speak about d = 1. Entries are independent (up to the symmetry) but not identically distributed:
$$ H = \{H_{jk}\}_{j,k=1}^{N}, \qquad H = H^*, \qquad \mathbf{E}\{H_{jk}\} = 0. $$
The variance is given by some function J (even, with compact support or rapid decay):
$$ \mathbf{E}\{|H_{jk}|^2\} = W^{-1} J\big(|j-k|/W\big). $$
Main parameter: the band width W ∈ [1; N].
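A sampling sketch for this variance profile, taking J to be the indicator of [0, 1] (a valid even, compactly supported choice; the Gaussian entry distribution is my assumption):

```python
import numpy as np

def random_band_matrix(n, w, seed=0):
    """Hermitian H with independent centered entries and
    E|H_jk|^2 = W^{-1} J(|j-k|/W), here with J = indicator of [0, 1]."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (g + g.conj().T) / 2  # GUE-type entries with E|h_jk|^2 = 1 off the diagonal
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.where(dist <= w, h, 0.0) / np.sqrt(w)

h = random_band_matrix(50, 5)
```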

  6. 1d case
[Schematic of a banded matrix: the nonzero entries are confined to a band of width ∼ W around the main diagonal.]
W = O(1) [∼ random Schrödinger] ←→ W = N [Wigner matrices]

  7. We consider the following two models:
Random band matrices: specific covariance
$$ J_{ij} = \big(-W^2\Delta + 1\big)^{-1}_{ij} \approx C_1 W^{-1} \exp\{-C_2 |i-j|/W\}. $$
Block band matrices: only 3 block diagonals are non-zero,
$$ H = \begin{pmatrix} A_1 & B_1 & 0 & 0 & \dots & 0 \\ B_1^* & A_2 & B_2 & 0 & \dots & 0 \\ 0 & B_2^* & A_3 & B_3 & \dots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \dots & & B_{n-2}^* & A_{n-1} & B_{n-1} \\ 0 & \dots & & 0 & B_{n-1}^* & A_n \end{pmatrix} $$
A_j are independent W × W GUE matrices with entry variance (1 − 2α)/W, B_j are independent W × W Ginibre matrices with entry variance α/W, where α < 1/4.
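The block band model can be assembled directly from its W × W blocks. A sketch (the block variances follow the slide; the sampling details are mine):

```python
import numpy as np

def block_band_matrix(n, w, alpha=0.1, seed=0):
    """Block tridiagonal H: diagonal blocks A_j ~ GUE with entry variance
    (1 - 2*alpha)/W, off-diagonal blocks B_j ~ complex Ginibre with entry
    variance alpha/W, and B_j^* below the diagonal."""
    assert alpha < 0.25
    rng = np.random.default_rng(seed)

    def gue(var):
        g = rng.normal(size=(w, w)) + 1j * rng.normal(size=(w, w))
        return np.sqrt(var) * (g + g.conj().T) / 2   # off-diagonal entry variance var

    def ginibre(var):
        g = rng.normal(size=(w, w)) + 1j * rng.normal(size=(w, w))
        return np.sqrt(var / 2) * g                  # entry variance var

    H = np.zeros((n * w, n * w), dtype=complex)
    for j in range(n):
        s = slice(j * w, (j + 1) * w)
        H[s, s] = gue((1 - 2 * alpha) / w)
        if j + 1 < n:
            t = slice((j + 1) * w, (j + 2) * w)
            B = ginibre(alpha / w)
            H[s, t] = B
            H[t, s] = B.conj().T
    return H

H = block_band_matrix(4, 3)
```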

  8. Anderson transition in random band matrices
Varying W, we can see the transition. Conjecture (in the bulk of the spectrum): ℓ ∼ W².
d = 1: W ≫ √N — delocalization, GUE statistics; W ≪ √N — localization, Poisson statistics.
Partial results (d = 1):
Schenker (2009): ℓ ≤ W^8 — localization techniques; improved to W^7;
Erdős, Yau, Yin (2011): ℓ ≥ W — RM methods;
Erdős, Knowles (2011): ℓ ≫ W^{7/6} (in a weak sense);
Erdős, Knowles, Yau, Yin (2012): ℓ ≫ W^{5/4} (in a weak sense, not uniform in N);
Bourgade, Erdős, Yau, Yin (2016): gap universality for W ∼ N;
Bourgade, Yau, Yin (2018): W ≫ N^{3/4} (quantum unique ergodicity).

  9. Another method, which makes it possible to work with random operators with non-trivial spatial structure, is the supersymmetry technique (SUSY), which is based on the representation of the determinant as an integral over Grassmann (anticommuting) variables. The method yields an integral representation of the main spectral characteristics (such as the density of states, the second correlation function, or the average of an element of the resolvent) as averages of certain observables in some SUSY statistical mechanics model (the so-called dual representation in terms of SUSY). This is basically an algebraic step and can usually be done by standard algebraic manipulations. The real mathematical challenge is a rigorous analysis of the obtained integral representation.

  10. "Generalised" correlation functions
$$ R_1(z_1, z_1') := \mathbf{E}\Big\{ \frac{\det(H - z_1')}{\det(H - z_1)} \Big\}, \qquad R_2(z_1, z_1'; z_2, z_2') := \mathbf{E}\Big\{ \frac{\det(H - z_1')\,\det(H - z_2')}{\det(H - z_1)\,\det(H - z_2)} \Big\}. $$
We study these functions for z_{1,2} = E + ξ_{1,2}/ρ(E)N, z'_{1,2} = E + ξ'_{1,2}/ρ(E)N, E ∈ (−2, 2).
Link with the spectral correlation functions:
$$ \mathbf{E}\big\{\mathrm{Tr}\,(H - z_1)^{-1}\, \mathrm{Tr}\,(H - z_2)^{-1}\big\} = \frac{d^2}{dz_1'\, dz_2'}\, R_2(z_1, z_1'; z_2, z_2')\Big|_{z_1' = z_1,\ z_2' = z_2}. $$
Correlation function of the characteristic polynomials:
$$ R_0(\lambda_1, \lambda_2) = \mathbf{E}\big\{ \det(H - \lambda_1)\,\det(H - \lambda_2) \big\}, \qquad \lambda_{1,2} = E \pm \xi/\rho(E)N. $$

  11. Integral representation for characteristic polynomials
$$ R_0(\lambda_1, \lambda_2) = C_N \int_{\mathcal{H}_2^N} \exp\Big\{ -\frac{1}{2} \sum_{j,k} \big(J^{-1}\big)_{jk}\, \mathrm{Tr}\, X_j X_k \Big\} \prod_j \det\big( X_j - i\Lambda/2 \big)\, dX, $$
where {X_j} are Hermitian 2 × 2 matrices, Λ = diag{λ₁, λ₂}, and ξ̂ = diag{ξ, −ξ}.
For the density of states or the second correlation function, the X_j will be super-matrices
$$ X_j^{(1)} = \begin{pmatrix} a_j & \rho_j \\ \tau_j & b_j \end{pmatrix}, \qquad X_j^{(2)} = \begin{pmatrix} A_j & \bar\rho_j \\ \bar\tau_j & B_j \end{pmatrix}, $$
with real variables a_j, b_j and Grassmann variables ρ_j, τ_j, or Hermitian A_j, hyperbolic B_j and Grassmann 2 × 2 matrices ρ̄_j, τ̄_j.

  12. The formulas can be obtained in any dimension and for any J, although the specific J = (−W²Δ + 1)^{-1} gives a nearest-neighbour model. In particular, it becomes accessible to the transfer matrix approach.
For the specific covariance (−W²Δ + 1)^{-1}:
$$ R_0(\lambda_1, \lambda_2) = C_N \int_{\mathcal{H}_2^N} \exp\Big\{ -\frac{W^2}{2} \sum_{j=2}^{N} \mathrm{Tr}\,(X_j - X_{j-1})^2 \Big\} \times \prod_{j=1}^{N} \exp\Big\{ -\frac{1}{2}\,\mathrm{Tr}\,\Big( X_j + \frac{iE \cdot I}{2} \Big)^2 + \frac{i}{2N\rho(\lambda_0)}\, \mathrm{Tr}\, \hat\xi X_j \Big\} \prod_{j=1}^{N} \det\big( X_j - iE \cdot I/2 \big)\, dX. $$

  13. The idea of the transfer operator approach is very simple and natural. Let K(X, Y) be the matrix kernel of a compact integral operator in ⊕_{i=1}^{p} L₂[X, dμ(X)]. Then
$$ \int g(X_1)\, K(X_1, X_2) \cdots K(X_{n-1}, X_n)\, f(X_n) \prod d\mu(X_i) = \big(K^{n-1} f, \bar g\big) = \sum_{j=0}^{\infty} \lambda_j^{n-1}(K)\, c_j, \qquad c_j = (f, \psi_j)\,(g, \tilde\psi_j). $$
Here {λ_j(K)}_{j=0}^∞ are the eigenvalues of K (|λ₀| ≥ |λ₁| ≥ ...), ψ_j are the corresponding eigenvectors, and ψ̃_j are the eigenvectors of K*. Hence, to study the correlation function, it suffices to study the integral operator with kernel K(X, Y).
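The identity above can be checked numerically for a toy scalar kernel by discretizing the operator on a grid (the Gaussian kernel here is only an illustration, not the K_ξ of the actual problem; since this toy kernel is symmetric, ψ̃_j = ψ_j):

```python
import numpy as np

m, n = 200, 5
x = np.linspace(-3.0, 3.0, m)
dx = x[1] - x[0]

# Discretized operator: K(x, y) = exp(-(x - y)^2), quadrature weight folded in
K = np.exp(-np.subtract.outer(x, x) ** 2) * dx
f = np.exp(-x ** 2)
g = np.ones(m)

# Left side: the chain integral  int g(x_1) K(x_1,x_2)...K(x_{n-1},x_n) f(x_n)
chain = g @ np.linalg.matrix_power(K, n - 1) @ f * dx

# Right side: sum_j lambda_j^{n-1} (f, psi_j)(g, psi_j) over the full spectrum
lam, psi = np.linalg.eigh(K)   # symmetric toy kernel: left = right eigenvectors
spectral = sum(lam[j] ** (n - 1) * (psi[:, j] @ f) * (g @ psi[:, j])
               for j in range(m)) * dx
```

Truncating the sum to the few largest |λ_j| already reproduces `chain` to high accuracy, which is exactly why the transfer-operator reduction is useful for large n.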

  14. For characteristic polynomials with J = (−W²Δ + 1)^{-1}:
$$ K_\xi(X, Y) = \frac{W^4}{2\pi^2}\, F_\xi(X)\, \exp\Big\{ -\frac{W^2}{2}\, \mathrm{Tr}\,(X - Y)^2 \Big\}\, F_\xi(Y), $$
where F_ξ(X) is the operator of multiplication by
$$ F_\xi(X) = F(X) \cdot \exp\Big\{ -\frac{i}{2n\rho(E)}\, \mathrm{Tr}\, X \hat\xi \Big\}, $$
with
$$ F(X) = \exp\Big\{ -\frac{1}{4}\, \mathrm{Tr}\,\big( X + i\Lambda_0/2 \big)^2 + \frac{1}{2}\, \mathrm{Tr}\, \log\big( X - i\Lambda_0/2 \big) - C_+ \Big\} $$
and some specific C₊.
Saddle points: X_j = πρ(E) · U_j^* L U_j with L = diag{1, −1}, and X_j = ±πρ(E) · I₂.
