
Random matrices and Gaussian multiplicative chaos. Nick Simm, Mathematics Institute, University of Warwick. Joint work with Gaultier Lambert and Dmitry Ostrovsky. Optimal Point Configurations and Orthogonal Polynomials, April 2017, CIEM.


  1. Random matrices and Gaussian multiplicative chaos. Nick Simm, Mathematics Institute, University of Warwick. Joint work with Gaultier Lambert and Dmitry Ostrovsky. Optimal Point Configurations and Orthogonal Polynomials, April 2017, CIEM. Research supported by Leverhulme fellowship ECF-2014-309.

  2. The Circular Unitary Ensemble

  3-6. The Circular Unitary Ensemble
  ◮ Let $U_N$ be an $N \times N$ random matrix chosen uniformly from the unitary group.
  ◮ The joint distribution of points is an example of a (1d) 'Coulomb gas':
  $$P(\theta_1, \dots, \theta_N) \propto \prod_{j < k} |e^{i\theta_j} - e^{i\theta_k}|^2$$
  ◮ The eigenvalues $e^{i\theta_1}, \dots, e^{i\theta_N}$ form a determinantal point process with kernel
  $$K_N(\theta, \phi) = \sum_{j=0}^{N-1} p_j(\theta) \overline{p_j(\phi)} = \frac{\sin(N(\theta - \phi)/2)}{\sin((\theta - \phi)/2)}, \qquad p_j(\theta) = e^{ij\theta}.$$
  ◮ Problem: limit theorems for $P_N(\theta) = \det(U_N - e^{i\theta})$ as $N \to \infty$.
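The Haar sampling above can be illustrated numerically. A minimal sketch (assuming NumPy; the helper name `cue_eigenangles` is ours) using the standard QR recipe for Haar-distributed unitaries, in which the phases of the diagonal of R must be divided out to get the correct measure:

```python
import numpy as np

def cue_eigenangles(n, rng):
    """Sample the eigenangles of a Haar-random n x n unitary matrix.

    QR trick: take Q from the QR decomposition of a complex Ginibre
    matrix and multiply each column by the phase of the matching
    diagonal entry of R, so the result is Haar distributed.
    """
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix the phases
    return np.angle(np.linalg.eigvals(q))  # angles in (-pi, pi]

rng = np.random.default_rng(1)
theta = cue_eigenangles(50, rng)
```

A histogram of many such samples is flat (the eigenvalue density on the circle is uniform), while the pair repulsion $|e^{i\theta_j} - e^{i\theta_k}|^2$ shows up as a deficit of small gaps.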

  7. Characteristic polynomials

  8-11. Characteristic polynomials
  Characteristic polynomials of (large) random matrices:
  ◮ A good model of the Riemann zeta function $\zeta(s)$ high up on the critical line $s = 1/2 + it$ (Keating and Snaith '00).
  ◮ An interesting example of a log-correlated Gaussian field, e.g. how to compute
  $$M_N^* = \max_{\theta \in [0, 2\pi]} \log|\det(U_N - e^{i\theta} I)| \equiv \max_{\theta \in [0, 2\pi]} \log|P_N(\theta)|$$
  ◮ Using these ideas, it has been conjectured and partially proved that as $N \to \infty$,
  $$M_N^* = \log(N) - \tfrac{3}{4} \log(\log(N)) + (G_1 + G_2)/2 + o(1)$$
  where $G_1, G_2$ are independent standard Gumbel variables. (Fyodorov and Keating '12; Arguin, Belius and Bourgade '15; Paquette and Zeitouni '16; Chhaibi, Madaule and Najnudel '16)
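The leading $\log N$ order of the maximum is easy to see in simulation. A rough sketch (NumPy assumed; the grid resolution, matrix size, and sample count are arbitrary choices of ours): evaluate $\log|P_N(\theta)| = \sum_j \log|e^{i\theta_j} - e^{i\theta}|$ on a grid of angles and take the maximum.

```python
import numpy as np

def cue_eigenangles(n, rng):
    """Eigenangles of a Haar-random unitary via the QR trick."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))
    return np.angle(np.linalg.eigvals(q))

def max_log_abs_charpoly(n, rng, grid_per_eig=4):
    """Max over a theta-grid of log|P_N(theta)| = sum_j log|e^{i theta_j} - e^{i theta}|."""
    angles = cue_eigenangles(n, rng)
    thetas = np.linspace(0, 2 * np.pi, grid_per_eig * n, endpoint=False)
    diffs = np.exp(1j * angles)[:, None] - np.exp(1j * thetas)[None, :]
    return np.sum(np.log(np.abs(diffs)), axis=0).max()

rng = np.random.default_rng(2)
n = 200
samples = [max_log_abs_charpoly(n, rng) for _ in range(20)]
avg = np.mean(samples)
# The conjecture predicts avg within O(1) of log(n) - (3/4) log(log(n)).
```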

  12. The logarithm

  13-15. The logarithm
  Theorem (Hughes, Keating and O'Connell '01). Let $\{Z_j\}_{j=1}^{\infty}$ be i.i.d. standard complex Gaussian random variables. Then
  $$V_N(\theta) := \log|P_N(\theta)| \xrightarrow{d} V(\theta) := \frac{1}{2} \sum_{k=1}^{\infty} \frac{Z_k e^{ik\theta}}{\sqrt{k}} + \mathrm{c.c.}$$
  Key properties of $V(\theta)$:
  ◮ $V$ is Gaussian with mean zero: $E(V(\theta)) = 0$.
  ◮ Logarithmic correlations:
  $$E(V(\theta) V(\phi)) = \frac{1}{2} \, \mathrm{Re} \sum_{k=1}^{\infty} \frac{e^{ik(\theta - \phi)}}{k} = -\frac{1}{2} \log|e^{i\theta} - e^{i\phi}|$$
  ◮ What about $\theta = \phi$? The series diverges, so $\mathrm{Var}(V(\theta)) = \infty$.
  Conclusion: the limit $V(\theta)$ is a distribution-valued object.
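The covariance identity above can be verified numerically against the truncated series (NumPy assumed; the truncation level $10^6$ and the separation $d = 1$ are arbitrary choices of ours):

```python
import numpy as np

def cov_partial(d, K):
    """Truncated covariance (1/2) * sum_{k=1}^{K} cos(k d) / k."""
    k = np.arange(1, K + 1)
    return 0.5 * np.sum(np.cos(k * d) / k)

def cov_closed(d):
    """Closed form -(1/2) * log|e^{i d} - 1| from the slide."""
    return -0.5 * np.log(np.abs(np.exp(1j * d) - 1.0))

d = 1.0  # separation theta - phi, bounded away from zero
approx = cov_partial(d, 10**6)
exact = cov_closed(d)

# At d = 0 the series becomes the harmonic series sum 1/(2k),
# which diverges: this is exactly why Var(V(theta)) = infinity.
```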

  16. The exponential of the logarithm

  17-21. The exponential of the logarithm
  Naively we might suppose that $|P_N(\theta)| = e^{V_N(\theta)}$ converges to $e^{V(\theta)}$? How to define the exponential of a distribution? Consider measures formally defined by
  $$\mu^{(\gamma)}(D) = \int_D e^{\gamma V(\theta) - \frac{\gamma^2}{2} \mathrm{Var}(V(\theta))} \, d\theta$$
  The measure $\mu^{(\gamma)}$ is defined by a renormalization procedure $V_\epsilon = V * \phi_\epsilon$. It was shown by Kahane '85 that
  ◮ $\mu^{(\gamma)}_\epsilon$ converges as $\epsilon \to 0$ to a non-trivial limit if and only if $\gamma < 2$.
  ◮ This limit does not depend on (Kahane's) cut-off procedure.
  This limit defines the measure $\mu^{(\gamma)}$, which is called Gaussian multiplicative chaos (GMC).
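The role of the renormalization can be illustrated with the truncated Fourier field standing in for Kahane's mollification: subtracting $\frac{\gamma^2}{2}\mathrm{Var}$ in the exponent makes the density mean one at every point, uniformly in the cut-off. A Monte Carlo sketch (NumPy assumed; the cutoff $K$ and subcritical $\gamma$ are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 200       # Fourier cutoff, playing the role of the mollifier scale
gamma = 0.5   # well inside the subcritical phase

k = np.arange(1, K + 1)
var_V = 0.5 * np.sum(1.0 / k)  # Var V_K(theta), the same for every theta

# Sample the truncated field at theta = 0: V_K(0) = sum_k Re(Z_k)/sqrt(k),
# where Re(Z_k) ~ N(0, 1/2) for a standard complex Gaussian Z_k.
nsamp = 50_000
z_re = rng.standard_normal((nsamp, K)) / np.sqrt(2)
V = (z_re / np.sqrt(k)).sum(axis=1)

# The renormalized exponential is a mean-one lognormal by construction.
density = np.exp(gamma * V - 0.5 * gamma**2 * var_V)
mean = density.mean()
```

Without the $-\frac{\gamma^2}{2}\mathrm{Var}$ correction, the raw exponential $e^{\gamma V_K}$ would blow up like $e^{\frac{\gamma^2}{4}\log K}$ as the cutoff is removed.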

  22. Properties of measures $\mu^{(\gamma)}$

  23-26. Properties of measures $\mu^{(\gamma)}$
  ◮ Power law spectrum (multifractality): for $0 \le q\gamma^2 < 2$ we have
  $$E(\mu^{(\gamma)}[0, r]^q) = C_q \, r^{\xi(q)}, \qquad \xi(q) = \Big(1 + \frac{\gamma^2}{2}\Big) q - \frac{\gamma^2}{2} q^2.$$
  ◮ In two dimensions, $V$ is essentially the Gaussian free field, a fundamental object of mathematical physics.
  ◮ In that context, $e^{\gamma V}$ is used in Liouville quantum gravity to construct a uniform random metric on the sphere (see work and recent surveys of e.g. Berestycki, Duplantier, Rhodes, Sheffield, Vargas, ...).
  ◮ The distribution of $\mu^{(\gamma)}$ near $\gamma = \gamma_c$ is believed to be closely related to the statistics of $\max_{|z|=1} |P_N(z)|$.
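The exponent $\xi(q)$ carries two built-in consistency checks: $\xi(0) = 0$ and $\xi(1) = 1$ (so $E\,\mu^{(\gamma)}[0, r] = C_1 r$, a measure with uniform mean), and $\xi$ is strictly concave in $q$ for $\gamma > 0$, which is the hallmark of multifractality. A short sketch:

```python
def xi(q, gamma):
    """Multifractal exponent from the slide: (1 + g^2/2) q - (g^2/2) q^2."""
    return (1 + gamma**2 / 2) * q - (gamma**2 / 2) * q**2

# xi(0) = 0 and xi(1) = 1 for every gamma; for gamma = 0 the measure
# is Lebesgue and xi(q) = q is linear (no multifractality).
checks = [(xi(0, g), xi(1, g)) for g in (0.3, 0.7, 1.0)]
```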

  27. The $L^2$-phase

  28-29. The $L^2$-phase
  The range $0 \le \gamma < \sqrt{2}$ is called the $L^2$-phase. This is because
  $$E(\mu^{(\gamma)}(D)^2) = \int_{D \times D} |e^{i\theta} - e^{i\phi}|^{-\gamma^2/2} \, d\theta \, d\phi < \infty$$
  if and only if $0 \le \gamma < \sqrt{2}$.
  Theorem (Webb '15). Consider
  $$\mu^{(\gamma)}_N(D) = \frac{\int_D |P_N(\theta)|^\gamma \, d\theta}{E \int_D |P_N(\theta)|^\gamma \, d\theta}$$
  Then for any $\gamma < \sqrt{2}$ we have $\mu^{(\gamma)}_N \xrightarrow{d} \mu^{(\gamma)}$ as $N \to \infty$, where $\mu^{(\gamma)}$ is the same measure constructed from Kahane's theory.
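For $D$ the full circle, the singular average can be evaluated in closed form: expanding $|1 - e^{i\theta}|^{2a}$ in Fourier series gives $\frac{1}{2\pi}\int_0^{2\pi} |1 - e^{i\theta}|^{-\gamma^2/2}\, d\theta = \Gamma(1 - \gamma^2/2)\,/\,\Gamma(1 - \gamma^2/4)^2$, finite exactly when $\gamma < \sqrt{2}$, where the Gamma function in the numerator blows up. A stdlib-only numerical check at $\gamma = 1$ (the midpoint rule and grid size are our choices):

```python
import math

gamma = 1.0
a = gamma**2 / 2  # singularity exponent; integrable iff a < 1, i.e. gamma < sqrt(2)

# Midpoint rule for (1/2pi) * int_0^{2pi} |e^{i theta} - 1|^{-a} d theta,
# using |e^{i theta} - 1| = 2 sin(theta/2) on (0, 2pi).
M = 200_000
total = 0.0
for j in range(M):
    t = (j + 0.5) * 2 * math.pi / M
    total += (2 * math.sin(t / 2)) ** (-a)
numeric = total / M

# Closed form from the binomial-series computation above.
closed = math.gamma(1 - a) / math.gamma(1 - a / 2) ** 2
```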

  30. Counting statistics in the CUE

  31. Counting statistics in the CUE
  Instead of $V_N(\theta)$, we consider counting statistics
  $$X_N(\theta) = \sum_{j=1}^{N} \chi_{J(\theta)}(N^{\alpha} \theta_j), \qquad J(\theta) = [\theta - 1, \theta + 1]$$
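The deck is cut off here, and the scaling exponent $\alpha$ is not defined in this excerpt, so as a hedged illustration we count eigenangles in a fixed arc (NumPy assumed; the arc and sample sizes are arbitrary choices of ours). Rotation invariance of the CUE forces $E\,X_N = N\,|J|/(2\pi)$, while the determinantal repulsion keeps the variance far below the Poissonian value:

```python
import numpy as np

def cue_eigenangles(n, rng):
    """Eigenangles of a Haar-random unitary via the QR trick."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))
    return np.angle(np.linalg.eigvals(q))

def count_in_arc(n, lo, hi, rng):
    """The counting statistic X_N for the arc [lo, hi]."""
    t = cue_eigenangles(n, rng)
    return np.sum((t >= lo) & (t <= hi))

rng = np.random.default_rng(4)
n, lo, hi = 50, -0.5, 0.5
counts = [count_in_arc(n, lo, hi, rng) for _ in range(200)]
avg = np.mean(counts)
# Rotation invariance gives E X_N = n * (hi - lo) / (2 * pi).
```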
