
The uses of Dyson-Schwinger equations

Alice Guionnet

September 18, 2019

Abstract. These lecture notes are intended for a course at SISSA in September 2019: they contain more material than what we will cover, but hopefully we will be able to


measure minimizes a strictly convex energy. Dyson-Schwinger equations can then be regarded as the equations satisfied by the critical points of this energy. The Dyson-Schwinger equations will be our key to get precise information on the convergence to equilibrium, such as the large dimension expansion of the free energy or fluctuations. These types of questions were also attacked in the Riemann-Hilbert problems community, based on a fine study of the asymptotics of orthogonal polynomials [50, 31, 40, 10, 22, 32]. It seems to me however that such an approach is more rigid, as it requires more technical steps and assumptions and cannot be applied in as great a generality as loop equations. Yet, when it can be used, it eventually provides more detailed information. Moreover, in certain cases, such as the case of potentials with Fisher-Hartwig singularities, Riemann-Hilbert techniques could be used but not yet loop equations [60, 34].

To study the asymptotic properties of our models we need to go one step further than the formal approach developed in the physics literature. In other words, we need to show that correlators indeed expand in the dimension, up to some error which is quantified in the large N limit and shown to go to zero. To do so, one needs in general a priori concentration bounds in order to expand the equations around their limits. For β-models, such a priori concentration of measure estimates can be derived thanks to a result of Boutet de Monvel, Pastur and Shcherbina [21] or Maida and Maurel-Segala [68]. It is roughly based on the fact that the logarithm of the density of such models is very close to a distance of the empirical measure to its equilibrium measure, hence implying a priori estimates on this distance. In more general situations where densities are unknown, for instance when one considers the distributions of the traces of polynomials in several matrices, one can rely on abstract concentration of measure estimates, for instance when the density is strictly log-concave or the underlying space has a positive Ricci curvature (e.g. SU(N)) [55]. Dyson-Schwinger equations are then crucial to obtain optimal concentration bounds and asymptotics.

This strategy was introduced by Johansson [61] to derive central limit theorems for β-ensembles with convex potentials. It was further developed by Shcherbina and collaborators [1, 79] and myself, together with Borot [11], to study global fluctuations for β-ensembles when the potential is off-critical, in the sense that the equilibrium measure has a connected support and its density vanishes like a square root at its boundary. These assumptions allow to linearize the Dyson-Schwinger equations around their limit and to solve these linearizations by inverting the so-called master operator. The case where the support of the density has finitely many connected components but the potential is off-critical was addressed in [77, 12]. It displays the additional tunneling effect where eigenvalues may jump from one connected component of the support to another, inducing discrete fluctuations. However, it can also be solved asymptotically after a detailed analysis of the case where the number of particles in each connected component is fixed, in which case Dyson-Schwinger equations can be asymptotically solved. These articles assumed that the potentials are real analytic in order to use Dyson-Schwinger equations for the Stieltjes functions.
We will see that these techniques generalize to sufficiently smooth potentials by using more general Dyson-Schwinger equations.

Global fluctuations, together with estimates of the Wasserstein distance, were obtained in [65] for off-critical, one-cut smooth potentials. One can obtain by such considerations much more precise estimates, such as the expansion of the partition function up to any order for general off-critical cases with fixed filling fractions, see [12]. Such an expansion can also be derived by using Riemann-Hilbert techniques, see [41] in a perturbative setting and [27] in two-cut cases with polynomial potential.

But β-models on the real line serve also as toy models for many other models. Borot, Kozlowski and myself considered more general potentials depending on the empirical measure in [15]. We also studied the case of more complicated interactions (in particular sinh interactions) in [16]: the main problems are then due to the non-linearity of the interaction, which induces multi-scale phenomena. The case of critical potentials was tackled recently in [36]. Also, Dyson-Schwinger equations (often called Ward identities) are instrumental to study Coulomb gas systems in higher dimension. One however has to deal with the fact that Ward identities are not nice functions of the empirical measure anymore, so that an additional term, the anisotropic term, has to be controlled. This could very nicely be done by Leblé and Serfaty [66] by using local large deviations estimates. Recently we also generalized this approach to study discrete β-ensembles and random tilings [13] by analyzing the so-called Nekrasov's equations in the spirit of Dyson-Schwinger equations.

The same approach can be developed to study multi-matrix questions. Originally, I developed this approach to study fluctuations and the large dimension expansion of the free energy with E. Maurel Segala [52, 53] in the context of several random matrices. In this case we restrict ourselves to perturbations of the quadratic potential to ensure convergence and stability of our equations. We could extend this study to the case of unitary or orthogonal matrices following the Haar measure (or perturbations of this case) in [28, 54]. This strategy was then applied in a closely related setting by Chatterjee [24], see also [30].

Dyson-Schwinger equations are also central to derive more local results such as rigidity and universality, showing that the eigenvalues are very close to their deterministic locus and that their local fluctuations do not depend much on the model. For instance, in the case of Wigner matrices with non-Gaussian entries, a key tool to prove rigidity is to show that the Stieltjes transform approximately satisfies the same quadratic equation as in the Gaussian case, up to the optimal scale [43, 42, 4]. Recently, it was also realized that closely connected ideas could lead to universality of local fluctuations, on one hand by using the local relaxation flow [43, 67, 18], by using the Lindeberg strategy [80, 81], or by constructing approximate transport maps [79, 5, 49]. Such ideas could be generalized to discrete Beta-ensembles [59], where universality could be derived thanks to optimal rigidity (based on the study of Nekrasov's equations) and comparisons to the continuous setting.

1.2 A toy model

Let us give some heuristics for the type of analysis we will do in these lectures thanks to a toy model. We will consider the distribution of N real-valued variables $\lambda_1,\dots,\lambda_N$ and denote by
$$\hat\mu_N = \frac{1}{N}\sum_{i=1}^N\delta_{\lambda_i}$$
their empirical measure: for a test function f, $\hat\mu_N(f) = \frac1N\sum_i f(\lambda_i)$. The correlators are moments of the type
$$M(f_1,\dots,f_p) = E\Big[\prod_{i=1}^p\hat\mu_N(f_i)\Big]$$
where the $f_i$ are test functions, which can be chosen to be polynomials, Stieltjes functionals or some more general set of smooth test functions. Dyson-Schwinger equations are usually retrieved from some underlying invariance or symmetries of the model. Let us consider the continuous case where the law of the $\lambda_i$'s is absolutely continuous with respect to $\prod d\lambda_i$ and given by
$$dP^V_N(\lambda_1,\dots,\lambda_N) = \frac{1}{Z^V_N}\exp\Big\{-\sum_{i_1=1}^N\sum_{i_2=1}^N V(\lambda_{i_1},\lambda_{i_2})\Big\}\prod_{i} d\lambda_i$$
where V is some symmetric smooth function. Then a way to get equations for the correlators is simply by integration by parts (which is a consequence of the invariance of Lebesgue measure under translation). Let $f_0,f_1,\dots,f_\ell$ be continuously differentiable functions. Then
$$E\Big[\hat\mu_N(f_0')\prod_{i=1}^\ell\hat\mu_N(f_i)\Big] = E\Big[\frac1N\sum_k\partial_{\lambda_k}f_0(\lambda_k)\prod_{i=1}^\ell\hat\mu_N(f_i)\Big]$$
$$= -\frac1N\,E\Big[\sum_k f_0(\lambda_k)\Big(\frac{dP^V_N}{d\lambda}\Big)^{-1}\partial_{\lambda_k}\Big(\frac{dP^V_N}{d\lambda}\prod_{i=1}^\ell\hat\mu_N(f_i)\Big)\Big]$$
$$= 2N\,E\Big[\iint f_0(x_1)\,\partial_{x_1}V(x_1,x_2)\,d\hat\mu_N(x_1)\,d\hat\mu_N(x_2)\prod_{i=1}^\ell\hat\mu_N(f_i)\Big] - \frac1N\sum_{j=1}^\ell E\Big[\hat\mu_N(f_0f_j')\prod_{i\neq j}\hat\mu_N(f_i)\Big]$$
where we noticed that since V is symmetric, $\partial_xV(x,x) = 2\,\partial_xV(x,y)|_{y=x}$. The case $\ell=0$ refers to the case $f_1=\dots=f_\ell=1$. We will call the above equations Dyson-Schwinger equations. One would like to analyze the asymptotics of the correlators. The idea is that if we can prove that $\hat\mu_N$ converges, then we can linearize the above equations around this limit, and hopefully solve them

asymptotically, by showing that only a few terms are relevant on some scale, solving these simplified equations, and then considering the equations at the next order of correction. Typically in the case above, we see that if $\hat\mu_N$ converges towards $\mu_*$ almost surely (or in $L^p$), then by the previous equation (with $\ell=0$) we must have
$$\iint f_0(x_1)\,\partial_{x_1}V(x_1,x_2)\,d\mu_*(x_1)\,d\mu_*(x_2) = 0. \qquad (1)$$
We can then linearize the equations around $\mu_*$ and we find that if we set $\Delta_N = \hat\mu_N - \mu_*$, we can rewrite the above equation with $\ell=0$ as
$$E[\Delta_N(\Xi f_0)] = \frac1N E[\hat\mu_N(f_0')] - 2\,E\Big[\iint f_0(x_1)\,\partial_{x_1}V(x_1,x_2)\,d\Delta_N(x_1)\,d\Delta_N(x_2)\Big] \qquad (2)$$
where $\Xi$ is the master operator given by
$$\Xi f_0(x) = 2 f_0(x)\int\partial_{x_1}V(x,x_1)\,d\mu_*(x_1) + 2\int f_0(x_1)\,\partial_{x_1}V(x_1,x)\,d\mu_*(x_1).$$
Let us show heuristically how such an equation can be solved asymptotically. Let us assume that we have some a priori estimates which tell us that $\Delta_N$ is of order $\delta_N$ almost surely (or in all $L^k$'s), that is, for sufficiently smooth functions g, $\Delta_N(g) = (\hat\mu_N-\mu_*)(g)$ is with high probability (i.e. with probability greater than $1-N^{-D}$ for all D and N large enough) at most of order $\delta_N C_g$ for some finite constant $C_g$. Then, the right-hand side of (2) should be smaller than $\max\{\delta_N^2, N^{-1}\}$ for sufficiently smooth test functions. Hence, if we can invert the master operator $\Xi$, we see that the expectation of $\Delta_N$ is of order at most $\max\{\delta_N^2, N^{-1}\}$. We would like to bootstrap this estimate to show that $\delta_N$ is at most of order $N^{-1}$. This requires estimating higher moments of $\Delta_N$. A similar derivation from the Dyson-Schwinger equations when $\ell=1$ shows that if $\bar\Delta_N(f) = \Delta_N(f) - E[\Delta_N(f)]$,
$$E[\Delta_N(\Xi f_0)\,\bar\Delta_N(f_1)] = -2\,E\Big[\iint f_0(x_1)\,\partial_{x_1}V(x_1,x_2)\,d\Delta_N(x_1)\,d\Delta_N(x_2)\,\bar\Delta_N(f_1)\Big] + \frac1N E[\Delta_N(f_0')\,\bar\Delta_N(f_1)] + \frac1{N^2}E[\hat\mu_N(f_0f_1')]. \qquad (3)$$
Again, if $\Xi$ is invertible, this allows to bound the covariance $E[\Delta_N(f_0)\bar\Delta_N(f_1)]$ by $\max\{\delta_N^3, \delta_N^2/N, N^{-2}\}$, which is a priori better than $\delta_N^2$ unless $\delta_N$ is of order $1/N$. Since $\Delta_N(f) - \bar\Delta_N(f)$ is at most of order $\delta_N^2$ by (2), we deduce that $E[\Delta_N(f_0)\Delta_N(f_1)]$ is also at most of order $\delta_N^3$. We can plug this estimate back into the previous bound and show recursively (by considering higher moments) that $\delta_N$ can be taken to be of order $1/N$ up to small corrections. We then deduce that
$$C(f_0,f_1) = \lim_{N\to\infty} N^2\,E[(\Delta_N - E[\Delta_N])(f_0)\,(\Delta_N - E[\Delta_N])(f_1)] = \mu_*\big((\Xi^{-1}f_0)\,f_1'\big)$$
and
$$m(f_0) = \lim_{N\to\infty} N\,E[\Delta_N(f_0)] = \mu_*\big((\Xi^{-1}f_0)'\big).$$
We can consider higher order equations (with $\ell\ge1$) to deduce higher orders of corrections, and the convergence of higher moments.
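As a concrete illustration of the $\ell=0$ Dyson-Schwinger identity above, here is a minimal numerical sanity check in Python. It assumes the quadratic interaction $V(x,y) = (x^2+y^2)/(4N)$, for which $dP^V_N$ reduces to N i.i.d. standard Gaussian variables and the identity boils down to $E[\hat\mu_N(f_0')] = E[\hat\mu_N(x f_0(x))]$; the test function $f_0(x)=x^3$ and the sample sizes are arbitrary choices for the experiment, not part of the notes.

```python
import numpy as np

# Sanity check of the l=0 Dyson-Schwinger equation for the toy model with
# V(x, y) = (x^2 + y^2) / (4N): then dP^V_N is just N i.i.d. N(0,1) variables
# and the identity reads  E[mu_N(f_0')] = E[mu_N(x * f_0(x))].
rng = np.random.default_rng(0)
N, n_samples = 50, 200_000

f0 = lambda x: x**3            # test function (arbitrary choice)
f0_prime = lambda x: 3 * x**2

lam = rng.standard_normal((n_samples, N))   # samples of (lambda_1, ..., lambda_N)
lhs = np.mean(f0_prime(lam))                # E[ mu_N(f_0') ]
rhs = np.mean(lam * f0(lam))                # E[ mu_N(x f_0(x)) ]
print(f"E[mu_N(f0')]  = {lhs:.4f}")         # both should be close to 3
print(f"E[mu_N(x f0)] = {rhs:.4f}")
```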

1.3 Rough plan of the lecture notes

We will apply these ideas in several cases where V has a logarithmic singularity, in which case the self-interaction term in the potential has to be treated with more care. More precisely we will examine the following models.

1. The law of the GUE. We consider the case where the $\lambda_i$ are the eigenvalues of the GUE and we take polynomial test functions. In this case the operator $\Xi$ is triangular and easy to invert. Convergence towards $\mu_*$ and a priori estimates on $\Delta_N$ can also be proven from the Dyson-Schwinger equations.

2. The Beta ensembles. We take smooth test functions. Convergence of $\hat\mu_N$ is proven by a large deviation principle, and quantitative estimates on $\delta_N$ are obtained by concentration of measure. The operator $\Xi$ is invertible if $\mu_*$ has a single cut, with a smooth density which vanishes like a square root at the boundary of the support. We then obtain a full expansion of the correlators. In the case where the equilibrium measure has p connected components in its support, we can still follow the previous strategy if we fix the number of eigenvalues in a small neighborhood of each connected piece (the so-called filling fractions). Summing over all possible choices of filling fractions allows to estimate the partition functions as well as to prove a form of central limit theorem depending on the fluctuations of the filling fractions.

3. Discrete Beta ensembles. These distributions include the law of random tilings, and the $\lambda_i$'s are now discrete. Integration by parts does not give nice equations a priori, but Nekrasov found a way to write new equations by showing that some observables are analytic. These equations can in turn be analyzed in a spirit very similar to continuous Beta-ensembles.

4. Several matrix models. In this case, large deviations results are not yet known, although candidates for the rate function were proposed by Voiculescu [90] and a large deviation upper bound was derived [9]. However, we can still write the Dyson-Schwinger equations and prove that limits exist provided we are in a perturbative setting (with respect to independent GUE matrices). Again, in perturbative settings we can derive the expansion of the correlators by showing that the master operator is invertible.

We will also discuss ideas related to our approach based on Dyson-Schwinger equations to study more local questions, in particular universality of local fluctuations. The first is based on the construction of approximate transport maps as introduced in [5]. The point is that the construction of these transport maps goes through solving a Poisson equation Lf = g, where L is the generator of the Langevin dynamics associated with our invariant measure. It is symmetric with respect to this invariant measure and therefore closely related with integration by parts. In fact, solving this Poisson equation is, in the large N limit, closely related with inverting the master operator $\Xi$ above, and in general follows the

strategy we developed to analyze Dyson-Schwinger equations. Another strategy to show universality of local fluctuations is by analyzing the Dyson-Schwinger equations for less smooth test functions, that is, to prove local laws. We will not develop this approach here. These ideas were developed in [59] for discrete beta-ensembles, based on a strategy initiated in [19]. The argument is to show that optimal bounds on Stieltjes functionals can be derived from the Dyson-Schwinger equation away from the support of the equilibrium measure, but at some distance. It is easy to get it at distance of order $1/\sqrt N$, by straightforward concentration inequalities. To get estimates up to distance of order $1/N$, the idea is to localize the measure. Rigidity follows from this approach, as well as universality eventually.

2 The example of the GUE

In this section, we show how to derive topological expansions from Dyson-Schwinger equations for the simplest model: the GUE. The Gaussian Unitary Ensemble is the sequence of N × N Hermitian matrices $X_N$, $N\ge0$, such that $(X_N(ij))_{i\le j}$ are independent centered Gaussian variables with variance 1/N, which are complex outside of the diagonal (with independent real and imaginary parts). Then, we shall discuss the following expansion, true for all integers k:
$$E\Big[\frac1N\mathrm{Tr}(X_N^k)\Big] = \sum_{g\ge0}\frac{1}{N^{2g}}M_g(k).$$
This expansion is called a topological expansion because $M_g(k)$ is the number of maps of genus g which can be built by matching the edges of a vertex with k labelled half-edges. We recall here that a map is a connected graph properly embedded into a surface (i.e. so that edges do not cross). Its genus is the smallest genus of a surface in which this can be done. This identity is well known [91] and was the basis of several breakthroughs in enumerative geometry [58, 62]. It can be proven by expanding the trace into products of Gaussian entries and using Wick calculus to compute these moments. In this section, we show how to derive it by using Dyson-Schwinger equations.

2.1 Combinatorics versus analysis

In order to calculate the magnetic moment of an electron, Feynman used diagrams and Schwinger used Green's functions. Dyson unified these two approaches thanks to Dyson-Schwinger equations. On one hand they can be thought of as equations for the generating functions of the graphs that are enumerated, on the other they can be seen as equations expressing the invariance of the underlying measure. A baby version of this idea is the combinatorial versus the analytical characterization of the Gaussian law $\mathcal N(0,1)$. Let X be a random variable with law $\mathcal N(0,1)$. On one hand it is the unique law with moments given by the number of matchings:

$$E[X^n] = \#\{\text{pair partitions of } n \text{ points}\} =: P_n. \qquad (4)$$
On the other hand, it is also defined uniquely by the integration by parts formula
$$E[Xf(X)] = E[f'(X)] \qquad (5)$$
for all smooth functions f growing at infinity at most polynomially. If one applies the latter to $f(x) = x^n$ one gets
$$m_{n+1} := E[X^{n+1}] = E[nX^{n-1}] = n\,m_{n-1}.$$
This last equality is the induction relation for the number $P_{n+1}$ of pair partitions of n+1 points, obtained by thinking of the n ways to pair the first point. Since $P_0 = m_0 = 1$ and $P_1 = m_1 = 0$, we conclude that $P_n = m_n$ for all n. Hence, the integration by parts formula and the combinatorial interpretation of moments are equivalent.

2.2 GUE: combinatorics versus analysis

When instead of considering a Gaussian variable we consider a matrix with Gaussian entries, namely the GUE, it turns out that moments are as well described both by integration by parts equations and by combinatorics. In fact, moments of GUE matrices can be seen as generating functions for the enumeration of interesting graphs, namely maps, which are sorted by their genus. We shall describe the full expansion, the so-called topological expansion, at the end of this section and consider more general colored cases in section 3. In this section, we discuss the large dimension expansion of moments of the GUE up to order $1/N^2$ as well as central limit theorems for these moments, and characterize these asymptotics both in terms of equations similar to the previous integration by parts, and by the enumeration of combinatorial objects.

Let us be more precise. A matrix $X = (X_{ij})_{1\le i,j\le N}$ from the GUE is the random N × N Hermitian matrix so that for $k < j$, $X_{kj} = X^R_{kj} + iX^I_{kj}$, with $X^R_{kj}, X^I_{kj}$ two independent real centered Gaussian variables with variance $1/2N$ (denoted $\mathcal N(0,\frac1{2N})$ later), and for $k\in\{1,\dots,N\}$, $X_{kk}\sim\mathcal N(0,\frac1N)$. Then, we shall prove that
$$E\Big[\frac1N\mathrm{Tr}(X^k)\Big] = M_0(k) + \frac1{N^2}M_1(k) + o\Big(\frac1{N^2}\Big) \qquad (7)$$
where

• $M_0(k) = C_{k/2}$ denotes the Catalan number: it vanishes if k is odd and is the number of non-crossing pair partitions of k (ordered) points, that is, pair partitions such that any two blocks (a, b) and (c, d) satisfy a < b < c < d or a < c < d < b. $C_k$ can also be seen to be the number of rooted trees embedded into the plane with k edges, that is, trees with a

distinguished edge and equipped with an exploration path of the vertices $v_1\to v_2\to\dots\to v_{2k}$ of length 2k so that $(v_1,v_2)$ is the root and each edge is visited twice (once in each direction). $C_k$ can also be seen as the number of planar maps built over one vertex with valence k: namely, take a vertex with valence k, draw it on the plane as a point with k half-edges. Choose a root, that is, one of these half-edges. Then the set of half-edges is in bijection with k ordered points (as we drew them on the plane, which is oriented). A matching of the half-edges is equivalent to a pairing of these points. Hence, we have a bijection between the graphs built over one vertex of valence k by matching the end-points of the half-edges and the pair partitions of k ordered points. The pairing is non-crossing iff the matching gives a planar graph, that is, a graph that is properly embedded into the plane (recall that an embedding of a graph in a surface is proper iff the edges of the graph do not cross on the surface). Hence, $M_0(k)$ can also be interpreted as the number of planar graphs built over a rooted vertex with valence k. Recall that the genus g of a graph (that is, the minimal genus of a surface in which it can be properly embedded) is given by Euler's formula:
$$2 - 2g = \#\mathrm{Vertices} + \#\mathrm{Faces} - \#\mathrm{Edges},$$
where the faces are defined as the pieces of the surface in which the graph is embedded which are separated by the edges of the graph. If the surface has minimal genus, these faces are homeomorphic to discs.

• $M_1(k)$ is the number of graphs of genus one built over a rooted vertex with valence k. Equivalently, it is the number of rooted trees with k/2 edges and exactly one cycle.

Moreover, we shall prove that for any $k_1,\dots,k_p$, $(\mathrm{Tr}(X^{k_j}) - E[\mathrm{Tr}(X^{k_j})])_{1\le j\le p}$ converges in moments towards a centered Gaussian vector with covariance
$$M_0(k,\ell) = \lim_{N\to\infty}E\big[(\mathrm{Tr}(X^k)-E[\mathrm{Tr}(X^k)])(\mathrm{Tr}(X^\ell)-E[\mathrm{Tr}(X^\ell)])\big].$$
$M_0(k,\ell)$ is the number of connected planar rooted graphs built over a vertex with valence k and one with valence ℓ. Here, both vertices have labelled half-edges, and two graphs are counted as equal only if they correspond to matching half-edges with the same labels (and this despite symmetries). Equivalently, $M_0(k,\ell)$ is the number of rooted trees with $(k+\ell)/2$ edges and an exploration path with $k+\ell$ steps such that k consecutive steps are colored and at least one edge is explored both by a colored and a non-colored step of the exploration path. Recall here that convergence in moments means that all mixed moments converge to the same mixed moments of the Gaussian vector with covariance M. We shall use that the moments of a centered Gaussian vector are given by Wick's formula:
$$m(k_1,\dots,k_p) = E\Big[\prod_{i=1}^pX_{k_i}\Big] = \sum_\pi\prod_{\text{blocks }(a,b)\text{ of }\pi}M(k_a,k_b)$$

which is in fact equivalent to the induction formula we will rely on:
$$m(k_1,\dots,k_p) = \sum_{i=2}^pM(k_1,k_i)\,m(k_2,\dots,k_{i-1},k_{i+1},\dots,k_p).$$
Convergence in moments towards a Gaussian vector implies of course the standard weak convergence, as convergence in moments implies that the second moments of $Z_N := (\mathrm{Tr}(X^{k_j}) - E[\mathrm{Tr}(X^{k_j})])_{1\le j\le p}$ are uniformly bounded, hence the law of $Z_N$ is tight. Moreover, any limit point has the same moments as the Gaussian vector. Since these moments do not grow too fast, there is a unique such limit point, and hence the law of $Z_N$ converges towards the law of the Gaussian vector with covariance M. We will discuss at the end of this section how to generalize the central limit theorem to differentiable test functions, that is, show that $Z_N(f) = \mathrm{Tr}f(X) - E[\mathrm{Tr}f(X)]$ converges towards a centered Gaussian variable for any bounded differentiable function. This requires more subtle uniform estimates on the covariance of $Z_N(f)$, for which we will use Poincaré's inequality.

The asymptotic expansion (7) as well as the central limit theorem can be derived using combinatorial arguments and Wick calculus to compute Gaussian moments. This can also be obtained from the Dyson-Schwinger (DS) equation, which we do below.

2.2.1 Dyson-Schwinger Equations

Let
$$Y_k := \mathrm{Tr}X^k - E[\mathrm{Tr}X^k].$$
We wish to compute for all integer numbers $k_1,\dots,k_p$ the correlators
$$E\Big[\mathrm{Tr}X^{k_1}\prod_{i=2}^pY_{k_i}\Big].$$
By integration by parts, one gets the following Dyson-Schwinger equations.

Lemma 2.1. For any integer numbers $k_1,\dots,k_p$, we have
$$E\Big[\mathrm{Tr}X^{k_1}\prod_{i=2}^pY_{k_i}\Big] = \frac1N\,E\Big[\sum_{\ell=0}^{k_1-2}\mathrm{Tr}X^\ell\,\mathrm{Tr}X^{k_1-2-\ell}\prod_{i=2}^pY_{k_i}\Big] + \sum_{i=2}^p\frac{k_i}{N}\,E\Big[\mathrm{Tr}X^{k_1+k_i-2}\prod_{j=2,j\neq i}^pY_{k_j}\Big] \qquad (8)$$

Proof. Indeed, we have

$$E\Big[\mathrm{Tr}X^{k_1}\prod_{i=2}^pY_{k_i}\Big] = \sum_{i,j=1}^N E\Big[X_{ij}(X^{k_1-1})_{ji}\prod_{i=2}^pY_{k_i}\Big] = \frac1N\sum_{i,j=1}^N E\Big[\partial_{X_{ji}}\Big((X^{k_1-1})_{ji}\prod_{i=2}^pY_{k_i}\Big)\Big]$$
where we noticed that since the entries are independent complex Gaussian variables, for any smooth test function f,
$$E[X_{ij}f(X_{k\ell},k\le\ell)] = \frac1N\,E[\partial_{X_{ji}}f(X_{k\ell},k\le\ell)]. \qquad (9)$$
But, for any $i,j,k,\ell\in\{1,\dots,N\}$ and $r\in\mathbb N$,
$$\partial_{X_{ji}}(X^r)_{k\ell} = \sum_{s=0}^{r-1}(X^s)_{kj}(X^{r-s-1})_{i\ell}$$
where $(X^0)_{ij} = 1_{i=j}$. As a consequence,
$$\partial_{X_{ji}}(Y_r) = r\,(X^{r-1})_{ij}.$$
The Dyson-Schwinger equations follow readily. ⋄

Exercise 2.2. Show that

1. If X is a GUE matrix, (9) holds. Deduce (2.1).

2. Take X to be a GOE matrix, that is, a symmetric matrix with real independent Gaussian entries $\mathcal N(0,\frac1N)$ above the diagonal, and $\mathcal N(0,\frac2N)$ on the diagonal. Show that
$$E[X_{ij}f(X_{k\ell},k\le\ell)] = \frac1N\,E[\partial_{X_{ji}}f(X_{k\ell},k\le\ell)] + \frac1N\,E[\partial_{X_{ij}}f(X_{k\ell},k\le\ell)].$$
Deduce that a formula analogous to (2.1) holds provided we add the term $\frac1N\,E\big[k_1\,\mathrm{Tr}X^{k_1}\prod_{i=2}^pY_{k_i}\big]$.

2.2.2 Dyson-Schwinger equation implies genus expansion

We will show that the DS equation (2.1) can be used to show that:
$$E\Big[\frac1N\mathrm{Tr}X^k\Big] = M_0(k) + \frac1{N^2}M_1(k) + o\Big(\frac1{N^2}\Big).$$
Next orders can be derived similarly. Let:
$$m^N_k := E\Big[\frac1N\mathrm{Tr}X^k\Big]$$

By the DS equation (with no Y terms), we have that:
$$m^N_k = E\Big[\sum_{\ell=0}^{k-2}\frac1N\mathrm{Tr}X^\ell\,\frac1N\mathrm{Tr}X^{k-\ell-2}\Big]. \qquad (10)$$
We now assume that we have the self-averaging property that for all $\ell\in\mathbb N$:
$$E\Big[\Big(\frac1N\mathrm{Tr}X^\ell - E\Big[\frac1N\mathrm{Tr}X^\ell\Big]\Big)^2\Big] = o(1)\quad\text{as }N\to\infty$$
as well as the boundedness property
$$\sup_N m^N_\ell < \infty.$$
We will show both properties are true in Lemma 2.3. If this is true, then the above expansion (10) gives us:
$$m^N_k = \sum_{\ell=0}^{k-2}m^N_\ell\,m^N_{k-\ell-2} + o(1).$$
As $\{m^N_\ell, \ell\le k\}$ are uniformly bounded, they are tight and so any limit point $\{m_\ell, \ell\le k\}$ satisfies
$$m_k = \sum_{\ell=0}^{k-2}m_\ell\,m_{k-\ell-2},\qquad m_0 = 1,\ m_1 = 0.$$
This equation clearly has a unique solution. On the other hand, let $M_0(k)$ be the number of maps of genus 0 with one vertex with valence k. These satisfy the Catalan recurrence:
$$M_0(k) = \sum_{\ell=0}^{k-2}M_0(\ell)\,M_0(k-\ell-2).$$
This recurrence is shown by a Catalan-like recursion argument, which goes by considering the matching of the first half-edge with the ℓ-th half-edge, dividing each map of genus 0 into two sub-maps (both still of genus 0) of size ℓ and k − ℓ − 2, for $\ell\in\{0,\dots,k-2\}$. Since m and $M_0$ both satisfy the same recurrence (and $M_0(0) = m^N_0 = 1$, $M_0(1) = m^N_1 = 0$), we deduce that $m = M_0$, and therefore we have proved by induction (assuming the self-averaging works) that:
$$m^N_k = M_0(k) + o(1)\quad\text{as }N\to\infty.$$
It remains to prove the self-averaging and boundedness properties.
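Before turning to these a priori bounds, here is a small Python sketch (an illustration, not part of the original notes) making the recursion concrete: it computes $M_0(k)$ from the Catalan recurrence above and compares it with a Monte Carlo estimate of $m^N_k = E[\frac1N\mathrm{Tr}X^k]$ for GUE matrices sampled with the normalization used here (entries of variance 1/N); the matrix size and sample count are arbitrary.

```python
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def M0(k):
    """Planar maps on one vertex of valence k, via the Catalan recurrence."""
    if k == 0:
        return 1
    if k == 1:
        return 0
    return sum(M0(l) * M0(k - l - 2) for l in range(k - 1))

def sample_gue(N, rng):
    """One GUE matrix with entries of variance 1/N (Hermitian)."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

rng = np.random.default_rng(1)
N, n_samples, k_max = 200, 50, 8
moments = np.zeros(k_max + 1)
for _ in range(n_samples):
    eig = np.linalg.eigvalsh(sample_gue(N, rng))
    moments += np.array([np.mean(eig**k) for k in range(k_max + 1)])
moments /= n_samples

for k in range(k_max + 1):
    print(f"k={k}:  m^N_k ~ {moments[k]:.3f}   M0(k) = {M0(k)}")
```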

Lemma 2.3. There exist finite constants $D_k$ and $E_k$, $k\in\mathbb N$, independent of N, so that for every integer p and all integers $k_1,\dots,k_p$:

a) $c^N(k_1,\dots,k_p) := E\Big[\prod_{i=1}^pY_{k_i}\Big]$ satisfies $|c^N(k_1,\dots,k_p)|\le D_{\sum k_i}$, and

b) $m^N_{k_1} := E\Big[\frac1N\mathrm{Tr}X^{k_1}\Big]$ satisfies $|m^N_{k_1}|\le E_{k_1}$.

Proof. The proof is by induction on $k = \sum k_i$. It is clearly true for $k = 0, 1$, where $E_0 = 1$, $E_1 = 0$ and $D_k = 0$. Suppose the induction hypothesis holds for $k-1$. To see that b) holds, by the DS equation we first observe that:
$$E\Big[\frac1N\mathrm{Tr}X^k\Big] = \sum_{\ell=0}^{k-2}E\Big[\frac1N\mathrm{Tr}X^\ell\,\frac1N\mathrm{Tr}X^{k-\ell-2}\Big] = \sum_{\ell=0}^{k-2}\Big(m^N_\ell\,m^N_{k-\ell-2} + \frac1{N^2}c^N(\ell,k-\ell-2)\Big).$$
Hence, by the induction hypothesis we deduce that
$$\Big|E\Big[\frac1N\mathrm{Tr}X^k\Big]\Big| \le \sum_{\ell=0}^{k-2}\big(E_\ell\,E_{k-2-\ell} + D_{k-2}\big) =: E_k.$$
To see that a) holds, we use the DS equation as follows:
$$E\Big[Y_{k_1}\prod_{j=2}^pY_{k_j}\Big] = E\Big[\mathrm{Tr}X^{k_1}\prod_{j=2}^pY_{k_j}\Big] - E[\mathrm{Tr}X^{k_1}]\,E\Big[\prod_{j=2}^pY_{k_j}\Big]$$
$$= \frac1N\,E\Big[\sum_{\ell=0}^{k_1-2}\mathrm{Tr}X^\ell\,\mathrm{Tr}X^{k_1-\ell-2}\prod_{j=2}^pY_{k_j}\Big] + \sum_{i=2}^p\frac{k_i}{N}\,E\Big[\mathrm{Tr}X^{k_1+k_i-2}\prod_{j=2,j\neq i}^pY_{k_j}\Big] - \frac1N\,E\Big[\sum_{\ell=0}^{k_1-2}\mathrm{Tr}X^\ell\,\mathrm{Tr}X^{k_1-\ell-2}\Big]\,E\Big[\prod_{j=2}^pY_{k_j}\Big].$$
We next subtract the last term from the first and observe that
$$\mathrm{Tr}X^\ell\,\mathrm{Tr}X^{k_1-\ell-2} - E[\mathrm{Tr}X^\ell\,\mathrm{Tr}X^{k_1-\ell-2}] = N\,Y_\ell\,m^N_{k_1-2-\ell} + N\,Y_{k_1-2-\ell}\,m^N_\ell + Y_\ell\,Y_{k_1-2-\ell} - c^N(\ell,k_1-2-\ell)$$

to deduce
$$E\Big[Y_{k_1}\prod_{j=2}^pY_{k_j}\Big] = 2\sum_{\ell=0}^{k_1-2}m^N_\ell\,c^N(k_1-2-\ell,k_2,\dots,k_p) + \sum_{i=2}^pk_i\,m^N_{k_1+k_i-2}\,c^N(k_2,\dots,k_{i-1},k_{i+1},\dots,k_p)$$
$$\quad - \frac1N\sum_{\ell=0}^{k_1-2}\big[c^N(\ell,k_1-2-\ell)\,c^N(k_2,\dots,k_p) - c^N(\ell,k_1-2-\ell,k_2,\dots,k_p)\big] + \frac1N\sum_{i=2}^pk_i\,c^N(k_1+k_i-2,k_2,\dots,k_{i-1},k_{i+1},\dots,k_p) \qquad (11)$$
which is bounded uniformly by our induction hypothesis. ⋄

As a consequence, we deduce

Corollary 2.4. For all $k\in\mathbb N$, $\frac1N\mathrm{Tr}(X^k)$ converges almost surely towards $M_0(k)$.

Proof. Indeed, by the Borel-Cantelli lemma it is enough to notice that it follows from the summability of
$$P\big(|\mathrm{Tr}(X^k) - E[\mathrm{Tr}(X^k)]| \ge N\varepsilon\big) \le \frac{c^N(k,k)}{\varepsilon^2N^2} \le \frac{D_{2k}}{\varepsilon^2N^2}. \qquad\diamond$$

2.3 Central limit theorem

The above self-averaging properties prove that $m^N_k = M_0(k) + o(1)$. To get the next order correction we analyze the limiting covariance $c^N(k,\ell)$. We will show that

Lemma 2.5. For all $k,\ell\in\mathbb N$, $c^N(k,\ell)$ converges as N goes to infinity towards the unique solution $M_0(k,\ell)$ of the equation
$$M_0(k,\ell) = 2\sum_{p=0}^{k-2}M_0(p)\,M_0(k-2-p,\ell) + \ell\,M_0(k+\ell-2)$$
such that $M_0(k,\ell) = 0$ if $k+\ell\le1$.

As a consequence we will show that

Corollary 2.6. $N^2(m^N_k - M_0(k)) = m^1_k + o(1)$, where the numbers $(m^1_k)_{k\ge0}$ are defined recursively by:
$$m^1_k = 2\sum_{\ell=0}^{k-2}m^1_\ell\,M_0(k-\ell-2) + \sum_{\ell=0}^{k-2}M_0(\ell,k-\ell-2).$$
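As an aside, the two recursions of Lemma 2.5 and Corollary 2.6 are easy to evaluate numerically. The following Python sketch (mine, not from the notes) computes $M_0(k,\ell)$ and $m^1_k$; the extra base case $M_0(0,\ell)=M_0(\ell,0)=0$ is an assumption consistent with the fact that the covariance with the constant $\mathrm{Tr}X^0=N$ vanishes. The first non-trivial values it prints, $m^1_4=1$ and $m^1_6=10$, match the known genus-one corrections to $E[\frac1N\mathrm{Tr}X^4]$ and $E[\frac1N\mathrm{Tr}X^6]$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def M0(k):
    """Catalan recurrence: planar maps on one vertex of valence k."""
    if k <= 1:
        return 1 if k == 0 else 0
    return sum(M0(l) * M0(k - l - 2) for l in range(k - 1))

@lru_cache(maxsize=None)
def M0_2(k, l):
    """Limiting covariance M0(k, l) from Lemma 2.5.
    Base cases: zero if k + l <= 1, and (assumption) zero if k or l is 0."""
    if k + l <= 1 or k == 0 or l == 0:
        return 0
    return 2 * sum(M0(p) * M0_2(k - 2 - p, l) for p in range(k - 1)) + l * M0(k + l - 2)

@lru_cache(maxsize=None)
def m1(k):
    """Genus-one correction m^1_k from Corollary 2.6."""
    if k <= 1:
        return 0
    return (2 * sum(m1(l) * M0(k - l - 2) for l in range(k - 1))
            + sum(M0_2(l, k - l - 2) for l in range(k - 1)))

print([M0_2(2, 2), M0_2(3, 3), M0_2(4, 2)])   # -> [2, 12, 8]
print([m1(k) for k in range(2, 9)])           # -> [0, 0, 1, 0, 10, 0, 70]
```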

Proof. (Of Lemma 2.5) Observe that $c^N(k,\ell)$ converges for $K = k+\ell\le1$ (as it vanishes uniformly). Assume we have proven convergence towards $M_0(k,\ell)$ up to K. Take $k_1+k_2 = K+1$ and use (11) with p = 1 to deduce that $c^N(k_1,k_2)$ satisfies
$$c^N(k_1,k_2) = 2\sum_{\ell=0}^{k_1-2}m^N_\ell\,c^N(k_1-\ell-2,k_2) + k_2\,m^N_{k_1+k_2-2} + \frac1N\sum_{\ell=0}^{k_1-2}c^N(\ell,k_1-\ell-2,k_2).$$
Lemma 2.3 implies that the last term is at most of order 1/N, and hence we deduce by our induction hypothesis that $c^N(k_1,k_2)$ converges towards $M_0(k_1,k_2)$, which is given by the induction relation
$$M_0(k_1,k_2) = 2\sum_{\ell=0}^{k_1-2}M_0(\ell)\,M_0(k_1-2-\ell,k_2) + k_2\,M_0(k_1+k_2-2).$$
Moreover, clearly $M_0(k_1,k_2) = 0$ if $k_1+k_2\le1$. There is a unique solution to this equation. ⋄

Exercise 2.7. Show by induction that
$$M_0(k,\ell) = \#\{\text{planar maps with one vertex of degree } k \text{ and one vertex of degree } \ell\}.$$

Proof. (of Corollary 2.6) Again we prove the result by induction over k. It is fine for $k = 0, 1$, where $m^1_k = 0$. By (11) with p = 0 we have:
$$N^2(m^N_k - M_0(k)) = \sum_{\ell=0}^{k-2}\Big[2M_0(\ell)\,N^2\big(m^N_{k-\ell-2} - M_0(k-2-\ell)\big) + N^2\big(m^N_\ell - M_0(\ell)\big)\big(m^N_{k-\ell-2} - M_0(k-2-\ell)\big) + c^N(\ell,k-\ell-2)\Big]$$
from which the result follows by taking the large N limit on the right hand side. ⋄

Exercise 2.8. Show that $m^1_k$ is the number of maps of genus 1 built on a vertex of valence k. (The proof goes again by showing that $m^1_k$ satisfies the same type of recurrence relations, by considering the matching of the root: either it cuts the map of genus 1 into a map of genus 1 and a map of genus 0, or there remains a (connected) planar map.)

Theorem 2.9. For any polynomial function $P = \sum\lambda_kx^k$, $Z_N(P) = \mathrm{Tr}P - E[\mathrm{Tr}P]$ converges in moments towards a centered Gaussian variable $Z(P)$ with covariance given by
$$E[Z(P)\bar Z(P)] = \sum_{k,k'}\lambda_k\bar\lambda_{k'}\,M_0(k,k').$$

Proof. It is enough to prove the convergence of the moments of the $Y_k$'s. Let
$$c^N(k_1,\dots,k_p) = E[Y_{k_1}\cdots Y_{k_p}].$$
Then we claim that, as $N\to\infty$, $c^N(k_1,\dots,k_p)$ converges to $G(k_1,\dots,k_p)$ given by:
$$G(k_1,\dots,k_p) = \sum_{i=2}^pM_0(k_1,k_i)\,G(k_2,\dots,\hat k_i,\dots,k_p) \qquad (12)$$
where $\hat{\ }$ is the absentee hat. This type of moment convergence is equivalent to a Wick formula and is enough to prove (by the moment method) that $Y_{k_1},\dots,Y_{k_p}$ are jointly Gaussian. Again, we will prove this by induction by using the DS equations. Now assume that (12) holds for any $k_1,\dots,k_p$ such that $\sum_{i=1}^pk_i\le k$ (induction hypothesis). We use (11). Notice, by the a priori bound on correlators of Lemma 2.3(a), that the terms with a 1/N are negligible in the right hand side and $m^N_k$ is close to $M_0(k)$, yielding
$$E\Big[Y_{k_1}\prod_{j=2}^pY_{k_j}\Big] = 2\sum_{\ell=0}^{k_1-2}M_0(\ell)\,c^N(k_1-2-\ell,k_2,\dots,k_p) + \sum_{i=2}^pk_i\,M_0(k_1+k_i-2)\,c^N(k_2,\dots,k_{i-1},k_{i+1},\dots,k_p) + O\Big(\frac1N\Big).$$
By using the induction hypothesis, this gives rise to:
$$E\Big[\prod_{i=1}^pY_{k_i}\Big] = 2\sum_\ell M_0(\ell)\,G(k_1-\ell-2,k_2,\dots,k_p) + \sum_ik_i\,M_0(k_1+k_i-2)\,G(k_2,\dots,\hat k_i,\dots,k_p) + o(1).$$
It follows that
$$G(k_1,\dots,k_p) = 2\sum_\ell M_0(\ell)\,G(k_1-\ell-2,k_2,\dots,k_p) + \sum_ik_i\,M_0(k_1+k_i-2)\,G(k_2,\dots,\hat k_i,\dots,k_p).$$
But using the induction hypothesis, we get
$$G(k_1,\dots,k_p) = \sum_{i=2}^p\Big(2\sum_\ell M_0(\ell)\,M_0(k_1-\ell-2,k_i) + k_i\,M_0(k_1+k_i-2)\Big)G(k_2,\dots,\hat k_i,\dots,k_p)$$
which yields the claim since
$$M_0(k_1,k_i) = 2\sum_\ell M_0(\ell)\,M_0(k_1-\ell-2,k_i) + k_i\,M_0(k_1+k_i-2). \qquad\diamond$$
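The central limit theorem can be probed numerically. Below is a small Monte Carlo sketch (an illustration under the notes' normalization, not part of the original text) estimating $\mathrm{Var}(\mathrm{Tr}X^2)$ and $\mathrm{Var}(\mathrm{Tr}X^3)$ for GUE matrices and comparing them with the limiting covariances $M_0(2,2)=2$ and $M_0(3,3)=12$ obtained from the recursion of Lemma 2.5; matrix size and sample count are arbitrary.

```python
import numpy as np

def sample_gue(N, rng):
    """GUE matrix with entries of variance 1/N."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

rng = np.random.default_rng(2)
N, n_samples = 100, 4000
tr2, tr3 = [], []
for _ in range(n_samples):
    X = sample_gue(N, rng)
    X2 = X @ X
    tr2.append(np.trace(X2).real)
    tr3.append(np.trace(X2 @ X).real)

print(f"Var(Tr X^2) ~ {np.var(tr2):.2f}   (limit M0(2,2) = 2)")
print(f"Var(Tr X^3) ~ {np.var(tr3):.2f}   (limit M0(3,3) = 12)")
```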

2.4 Generalization

One can generalize the previous results to smooth test functions rather than polynomials. We have

Lemma 2.10. Let σ be the semi-circle law given by
$$d\sigma(x) = \frac1{2\pi}\sqrt{4-x^2}\,dx.$$

1. For any bounded continuous function f with polynomial growth at infinity,
$$\lim_{N\to\infty}\frac1N\sum_{i=1}^Nf(\lambda_i) = \int f(x)\,d\sigma(x)\quad a.s.$$

2. For any $C^2$ function f with polynomial growth at infinity, $Z(f) = \sum f(\lambda_i) - E\big(\sum f(\lambda_i)\big)$ converges in law towards a centered Gaussian variable.

Our proof will only show convergence: the covariance is well known and can be found for instance in [74, (3.2.2)].

Exercise 2.11. Show that for all $n\in\mathbb N$, $\int x^n\,d\sigma(x) = M_0(n)$.

Proof. The convergence of $\frac1N\sum_{i=1}^Nf(\lambda_i)$ follows since polynomials are dense in the set of continuous functions on compact sets by Weierstrass' theorem. Indeed, our bounds on moments imply that we can restrict ourselves to a neighborhood of [−2, 2]:
$$\frac1N\sum_{i=1}^N\lambda_i^{2p}1_{|\lambda_i|\ge M} \le \frac1{M^{2k}}\,\frac1N\sum_i\lambda_i^{2k+2p}$$
has moments asymptotically bounded by $\sigma(x^{2k+2p})/M^{2k}\le2^{2p}(2/M)^{2k}$. This allows to approximate moments by truncated moments and then use Weierstrass' theorem.

To derive the central limit theorem, one can use concentration of measure inequalities such as the Poincaré inequality. Indeed, the Poincaré inequality for Gaussian variables reads: for any $C^1$ real-valued function F on $\mathbb C^{N(N-1)/2}\times\mathbb R^N$,
$$E\Big[\big(F(X_{k\ell},k\le l) - E[F(X_{k\ell},k\le l)]\big)^2\Big] \le \frac2N\,E\Big[\sum_{i,j}\big|\partial_{X_{ij}}F(X_{k\ell},k\le l)\big|^2\Big].$$
Taking $F = \mathrm{Tr}f(X)$ we find that $\partial_{X_{ij}}F(X_{k\ell},k\le l) = f'(X)_{ji}$. Indeed, we proved this point for polynomial functions f, so that we deduce
$$E\big[(\mathrm{Tr}(f(X)) - E[\mathrm{Tr}(f(X))])^2\big] \le \frac2N\,E\big[\mathrm{Tr}(f'(X)^2)\big].$$
Hence, if we take a $C^1$ function f whose derivative is approximated by a polynomial $P_\varepsilon$ on [−M, M] (with M > 2) up to an error ε > 0, and whose derivative grows at most like $x^{2K}$ for $|x|\ge M$, we find

$$E\big[(\mathrm{Tr}f(X) - E[\mathrm{Tr}f(X)] - (\mathrm{Tr}P_\varepsilon(X) - E[\mathrm{Tr}P_\varepsilon(X)]))^2\big] \le 4\,E\Big[\varepsilon^2 + \frac1N\sum_i\big(P_\varepsilon^2(\lambda_i) + \lambda_i^{2K}\big)1_{|\lambda_i|\ge M}\Big]$$
where the right hand side goes to zero as N goes to infinity and then ε goes to zero. This shows the convergence of the covariance of Z(f). We then proceed similarly to show that the approximation is good in any $L^p$, hence deriving the convergence in moments. ⋄

2.5 GUE topological expansion

The "topological expansion" reads
$$E\Big[\frac1N\mathrm{Tr}\big(X^k\big)\Big] = \sum_{g\ge0}\frac1{N^{2g}}M_g(k)$$
where $M_g(k)$ is the number of rooted maps of genus g built over a vertex of degree k. Here, a "map" is a connected graph properly embedded in a surface and a "root" is a distinguished oriented edge. A map is assigned a genus, given by the smallest genus of a surface in which it can be properly embedded. This complete expansion (note that the above series is in fact finite) can be derived as well either by Wick calculus or by Dyson-Schwinger equations: we leave it as an exercise to the reader. We will see later that cumulants of traces of moments of the GUE are related with the enumeration of maps with several vertices.

3 Several matrix-ensembles

Topological expansions have been used a lot in physics to relate enumeration problems with random matrices. Considering several matrix models allows to deal with much more complicated combinatorial questions, that is, colored maps. In this section we show how the previous arguments based on Dyson-Schwinger equations allow to study these models in perturbative situations. In fact, large deviations questions are still open in the several matrices case, and convergence of the trace of several matrices has only been proved in general perturbative situations [52] or for very specific models such as the Ising model corresponding to a simple AB interaction [44, 71, 69, 56, 57].

3.0.1 Non-commutative laws

We let $\mathbb C\langle X_1,\dots,X_m\rangle$ denote the set of polynomials in m non-commutative indeterminates with complex coefficients. We equip it with the involution * so that for any $i_1,\dots,i_k\in\{1,\dots,m\}$ and for any complex number z, we have
$$(zX_{i_1}\cdots X_{i_k})^* = \bar z\,X_{i_k}\cdots X_{i_1}.$$

For N × N Hermitian matrices $(A_1,\dots,A_m)$, let us define the linear form $\hat\mu_{A_1,\dots,A_m}$ from $\mathbb C\langle X_1,\dots,X_m\rangle$ into $\mathbb C$ by
$$\hat\mu_{A_1,\dots,A_m}(P) = \frac1N\mathrm{Tr}\big(P(A_1,\dots,A_m)\big)$$
where Tr is the standard trace $\mathrm{Tr}(A) = \sum_{i=1}^NA_{ii}$. If $A_1,\dots,A_m$ are random, we denote by
$$\bar\mu_{A_1,\dots,A_m}(P) := E[\hat\mu_{A_1,\dots,A_m}(P)].$$
$\hat\mu_{A_1,\dots,A_m}$ and $\bar\mu_{A_1,\dots,A_m}$ will be seen as elements of the algebraic dual $\mathbb C\langle X_1,\dots,X_m\rangle^*$ of $\mathbb C\langle X_1,\dots,X_m\rangle$. $\mathbb C\langle X_1,\dots,X_m\rangle^*$ is equipped with its weak topology.

Definition 3.1. A sequence $(\mu_n)_{n\in\mathbb N}$ in $\mathbb C\langle X_1,\dots,X_m\rangle^*$ converges weakly towards $\mu\in\mathbb C\langle X_1,\dots,X_m\rangle^*$ iff for any $P\in\mathbb C\langle X_1,\dots,X_m\rangle$,
$$\lim_{n\to\infty}\mu_n(P) = \mu(P).$$

Lemma 3.2. Let C be a finite constant and n an integer. Set
$$K_n(C) = \{\mu\in\mathbb C\langle X_1,\dots,X_m\rangle^*;\ |\mu(X_{\ell_1}\cdots X_{\ell_r})|\le C^r\ \forall\,\ell_i\in\{1,\dots,m\},\ r\in\mathbb N,\ r\le n\}.$$
Then, any sequence $(\mu_n)_{n\in\mathbb N}$ such that $\mu_n\in K_{m_n}(C)$ is sequentially compact if $m_n$ goes to infinity with n, i.e. it has a subsequence $(\mu_{\phi(n)})_{n\in\mathbb N}$ which converges weakly. We denote in short $K(C)$ or $K_\infty(C)$ the set of such sequences.

Proof. Since $\mu_n(X_{\ell_1}\cdots X_{\ell_r})\in\mathbb C$ is uniformly bounded, it has converging subsequences. By a diagonalisation procedure, since the set of monomials is countable, we can ensure that for a subsequence $(\phi(n), n\in\mathbb N)$, the terms $\mu_{\phi(n)}(X_{\ell_1}\cdots X_{\ell_r})$, $\ell_i\in\{1,\dots,m\}$, $r\in\mathbb N$, converge simultaneously. The limit defines an element of $\mathbb C\langle X_1,\dots,X_m\rangle^*$ by linearity. ⋄

The following is a triviality, which we however recall since we will use it several times.

Corollary 3.3. Let C be a finite non-negative constant and $m_n$ a sequence going to infinity at infinity. Let $(\mu_n)_{n\in\mathbb N}$ be a sequence such that $\mu_n\in K_{m_n}(C)$ which has a unique limit point. Then $(\mu_n)_{n\in\mathbb N}$ converges towards this limit point.

Proof. Otherwise we could choose a subsequence which stays at positive distance from this limit point, but extracting again a converging subsequence gives a contradiction. Note as well that any limit point will belong automatically to $\mathbb C\langle X_1,\dots,X_m\rangle^*$. ⋄

We shall call in these notes non-commutative laws the elements of $\mathbb C\langle X_1,\dots,X_m\rangle^*$ which satisfy
$$\mu(PP^*)\ge0,\qquad \mu(PQ) = \mu(QP),\qquad \mu(1) = 1$$
for all polynomial functions P, Q. This is a very weak point of view which however is sufficient for our purpose. The name 'law' is at least justified when m = 1, in which case $\hat\mu_N$ is the empirical measure of the eigenvalues of the matrix A,

and hence a probability measure on $\mathbb R$, whereas the non-commutativity is clear when $m\ge2$. There are much deeper reasons for this name when considering $C^*$-algebras and positivity, and we refer the reader to [89] or [3]. The laws $\hat\mu_{A_1,\dots,A_m}$ and $\bar\mu_{A_1,\dots,A_m}$ are obviously non-commutative laws. Since these conditions are closed for the weak topology, we see that any limit point of $\hat\mu_N$, $\bar\mu_N$ will satisfy these properties as well. A linear functional on $\mathbb C\langle X_1,\dots,X_m\rangle$ which satisfies such conditions is called a tracial state. This leads to the notion of $C^*$-algebras and representations of the laws as moments of non-commutative operators on $C^*$-algebras. We however do not want to detail this point in these notes.

3.1 Non-commutative derivatives

First, for $1\le i\le m$, let us define the non-commutative derivatives $\partial_i$ with respect to the variable $X_i$. They are linear maps from $\mathbb C\langle X_1,\dots,X_m\rangle$ to $\mathbb C\langle X_1,\dots,X_m\rangle^{\otimes2}$ given by the Leibniz rule
$$\partial_iPQ = \partial_iP\times(1\otimes Q) + (P\otimes1)\times\partial_iQ\qquad\text{and}\qquad\partial_iX_j = 1_{i=j}\,1\otimes1.$$
Here, × is the multiplication on $\mathbb C\langle X_1,\dots,X_m\rangle^{\otimes2}$: $P\otimes Q\times R\otimes S = PR\otimes QS$. So, for a monomial P, the following holds:
$$\partial_iP = \sum_{P=RX_iS}R\otimes S$$
where the sum runs over all possible monomials R, S so that P decomposes into $RX_iS$. We can iterate the non-commutative derivatives; for instance $\partial_i^2:\mathbb C\langle X_1,\dots,X_m\rangle\to\mathbb C\langle X_1,\dots,X_m\rangle\otimes\mathbb C\langle X_1,\dots,X_m\rangle\otimes\mathbb C\langle X_1,\dots,X_m\rangle$ is given for a monomial function P by
$$\partial_i^2P = 2\sum_{P=RX_iSX_iQ}R\otimes S\otimes Q.$$
We denote by $\#:\mathbb C\langle X_1,\dots,X_m\rangle^{\otimes2}\times\mathbb C\langle X_1,\dots,X_m\rangle\to\mathbb C\langle X_1,\dots,X_m\rangle$ the map $P\otimes Q\,\#\,R = PRQ$ and generalize this notation to $P\otimes Q\otimes R\,\#\,(S,V) = PSQVR$. So $\partial_iP\,\#\,R$ corresponds to the derivative of P with respect to $X_i$ in the direction R, and similarly $2^{-1}[\partial_i^2P\,\#\,(R,S) + \partial_i^2P\,\#\,(S,R)]$ is the second derivative of P with respect to $X_i$ in the directions R, S.

We also define the so-called cyclic derivative $D_i$. If m is the map $m(A\otimes B) = BA$, we define $D_i = m\circ\partial_i$. For a monomial P, $D_iP$ can be expressed as
$$D_iP = \sum_{P=RX_iS}SR.$$

3.2 Non-commutative Dyson-Schwinger equations

Let $X^N_1,\dots,X^N_m$ be m independent GUE matrices and set $\hat\mu_N = \hat\mu_{X^N_1,\dots,X^N_m}$ to be their non-commutative law. Let $P_0,\dots,P_r$ be polynomials in the m non-

commutative variables. Then for all $i\in\{1,\dots,m\}$
$$E\Big[\hat\mu_N(X_iP_0)\prod_{j=1}^r\hat\mu_N(P_j)\Big] = E\Big[\hat\mu_N\otimes\hat\mu_N(\partial_iP_0)\prod_{j=1}^r\hat\mu_N(P_j)\Big] + \frac1{N^2}\sum_{j'=1}^rE\Big[\hat\mu_N(P_0D_iP_{j'})\prod_{j\neq j'}\hat\mu_N(P_j)\Big] \qquad (13)$$
The proof is a direct application of integration by parts and is left to the reader. The main point is that our definitions yield
$$\partial_{X_k(ij)}\mathrm{Tr}(P(X)) = (D_kP)_{ji},\qquad \partial_{X_k(ij)}(P(X))_{i'j'} = (\partial_kP)_{i'i,jj'}.$$

3.3 Independent GUE matrices

3.3.1 Voiculescu's theorem

The aim of this section is to prove that if $X^{N,\ell}$, $1\le\ell\le m$, are independent GUE matrices

Theorem 3.4. [Voiculescu [88]] For any monomial q in the unknowns $X_1,\dots,X_m$,
$$\lim_{N\to\infty}E\Big[\frac1N\mathrm{Tr}\big(q(X^N_1,X^N_2,\dots,X^N_m)\big)\Big] = \sigma^m(q)$$
where $\sigma^m(q)$ is the number $\mathcal M_0(q)$ of planar maps built on a star of type q.

Remark 3.5. $\sigma^m$, once extended by linearity to all polynomials, is called the law of m free semi-circular variables because it is the unique non-commutative law such that the moments of a single variable are given by the Catalan numbers and satisfying
$$\sigma^m\big((X_{m_1}^{\ell_1}-\sigma(x^{\ell_1}))\cdots(X_{m_p}^{\ell_p}-\sigma(x^{\ell_p}))\big) = 0$$
for any choice of $\ell_j$, $1\le j\le p$, such that $m_j\neq m_{j+1}$.

Proof. By the non-commutative Dyson-Schwinger equation with $P_j = 1$ for $j\ge1$, we have for all i
$$E[\hat\mu_N(X_iX_{\ell_1}\cdots X_{\ell_k})] = \sum_{j:\ell_j=i}E[\hat\mu_N(X_{\ell_1}\cdots X_{\ell_{j-1}})\,\hat\mu_N(X_{\ell_{j+1}}\cdots X_{\ell_k})].$$
Let us assume that for all $k\le K$ there exists $C_K$ finite such that for any $\ell_1,\dots,\ell_k\in\{1,\dots,m\}$,
$$|E[\hat\mu_N(X_{\ell_1}\cdots X_{\ell_k})]| \le C_K \qquad (14)$$
$$E\big[\big|\hat\mu_N(X_{\ell_1}X_{\ell_2}\cdots X_{\ell_k}) - E[\hat\mu_N(X_{\ell_1}X_{\ell_2}\cdots X_{\ell_k})]\big|^2\big] \le C_K/N^2. \qquad (15)$$
Then we deduce that the family $E[\hat\mu_N(X_{\ell_1}X_{\ell_2}\cdots X_{\ell_k})]$ is tight and its limit points $\tau(X_{\ell_1}\cdots X_{\ell_k})$ satisfy
$$\tau(X_{\ell_1}\cdots X_{\ell_k}) = \sum_{j:\ell_j=\ell_1}\tau(X_{\ell_2}\cdots X_{\ell_{j-1}})\,\tau(X_{\ell_{j+1}}\cdots X_{\ell_k})$$

and $\tau(1) = 1$, $\tau(X_\ell) = 0$. There is a unique solution to this equation. It is given by $\{\mathcal M_0(X_{\ell_1}\cdots X_{\ell_k},1),\ \ell_i\in\{1,\dots,m\}\}$, since the latter satisfies the same equation. Indeed, it is easily seen that the number of planar maps on a trivial star 1 can be taken to be equal to one, and there is none with a star with only one half-edge. Moreover, the number $\mathcal M_0(X_{\ell_1}\cdots X_{\ell_k},1)$ of planar maps built on $X_{\ell_1}\cdots X_{\ell_k}$ can be decomposed according to the matching of the half-edge of the root. Because the maps are planar, such a matching cuts the planar map into two independent planar maps. Hence
$$\mathcal M_0(X_{\ell_1}\cdots X_{\ell_k},1) = \sum_{j:\ell_j=\ell_1}\mathcal M_0(X_{\ell_2}\cdots X_{\ell_{j-1}},1)\,\mathcal M_0(X_{\ell_{j+1}}\cdots X_{\ell_k},1).$$
The proof of (14) is a direct consequence of the non-commutative Hölder inequality and the bound obtained in the first chapter for one matrix. We leave (15) to the reader: it can be proved by induction over K using the Dyson-Schwinger equation exactly as in the one matrix case, see Lemma 2.3. ⋄

3.3.2 Central limit theorem

Theorem 3.6. Let $P_1,\dots,P_r$ be polynomials in $X_1,\dots,X_m$ and set $Y(P) = N(\hat\mu_N(P) - \sigma^m(P))$. Then $(Y(P_1),\dots,Y(P_r))$ converges towards a centered Gaussian vector with covariance
$$C(P_1,P_2) = \sum_{i=1}^m\sigma^m\big(D_i\Xi^{-1}P_1\,D_iP_2\big),$$
with $\Xi P = \sum_i\big[\partial_iP\,\#\,X_i - (\sigma^m\otimes I + I\otimes\sigma^m)(\partial_iD_iP)\big]$.

Notice above that $\Xi$ is invertible on the space of polynomials with null constant term. Indeed, for any monomial q, the first part of $\Xi$ is the degree operator
$$\sum_i\partial_iq\,\#\,X_i = \deg(q)\,q$$
whereas the second part reduces the degree, so that the sum is invertible.

Proof. The proof is the same as for one matrix and proceeds by induction based on (13). We first observe that $m_N(P) = E[\hat\mu_N(P)] - \sigma(P)$ is of order $1/N^2$, by induction over the degree of P thanks to (14) and (15). We then show the convergence of the covariance thanks to the Dyson-Schwinger equation (13) with $r = 1$ and $P_1 = P - E[P]$, and $P_0 = D_iP$, which yields after summation over i:
$$N^2\,E\big[\hat\mu_N(\Xi P_0)\big(\hat\mu_N(P) - E[\hat\mu_N(P)]\big)\big] = \sum_{i=1}^mE\big[\hat\mu_N(D_iP_0\,D_iP)\big] + N^2R_N(P)$$
where
$$R_N(P) = \sum_iE\big[(\hat\mu_N-\sigma^m)^{\otimes2}(\partial_i\circ D_iP_0)\,\hat\mu_N(P)\big].$$

Since P is centered, this is of order at most $1/N^3$ by (14) and (15). Hence, letting N go to infinity and inverting $\Xi$ shows the convergence of the covariance towards C. Finally, to prove the central limit theorem, we deduce from (13) that, if $Y(P) = N(\hat\mu_N(P) - \sigma^m(P))$, we have
$$G_N(P,P_1,\dots,P_r) = E\Big[N(\hat\mu_N-\sigma^m)(P)\prod_{j=1}^rY(P_j)\Big]$$
$$= \sum_{i=1}^mN\,E\Big[(\hat\mu_N-\sigma^m)\otimes(\hat\mu_N-\sigma^m)(\partial_iD_i\Xi^{-1}P)\prod_{j=1}^rY(P_j)\Big] + \sum_{j=1}^r\sum_{i=1}^mE\Big[\hat\mu_N(D_i\Xi^{-1}P\,D_iP_j)\prod_{\ell\neq j}Y(P_\ell)\Big].$$
By induction over the total degree of the $P_i$'s, and using the previous estimate, we can show that the first term goes to zero. Hence, we deduce by induction that $G_N(P,P_1,\dots,P_r)$ converges towards $G(P,P_1,\dots,P_r)$, solution of
$$G(P,P_1,\dots,P_r) = \sum_{j=1}^r\sum_{i=1}^m\sigma^m\big(D_i\Xi^{-1}P\,D_iP_j\big)\,G(P_1,\dots,P_{j-1},P_{j+1},\dots,P_r),$$
which is Wick's formula for Gaussian moments. ⋄

3.4 Several interacting matrices models

In this section, we shall be interested in laws of interacting matrices of the form
$$d\mu^N_V(X_1,\dots,X_m) := \frac1{Z^N_V}\,e^{-N\mathrm{Tr}(V(X_1,\dots,X_m))}\,d\mu_N(X_1)\cdots d\mu_N(X_m)$$
where $Z^N_V$ is the normalizing constant
$$Z^N_V = \int e^{-N\mathrm{Tr}(V(X_1,\dots,X_m))}\,d\mu_N(X_1)\cdots d\mu_N(X_m)$$
and V is a polynomial in m non-commutative unknowns. In the sequel, we fix n non-commutative monomials $q_i$, $q_i(X_1,\dots,X_m) = X_{j^i_1}\cdots X_{j^i_{r_i}}$ for some $j^i_l\in\{1,\dots,m\}$, $r_i\ge1$, and consider the potential given by
$$V_{\mathbf t}(X_1,\dots,X_m) = \sum_{i=1}^nt_i\,q_i(X_1,\dots,X_m)$$
where $\mathbf t = (t_1,\dots,t_n)$ are n complex numbers such that $V_{\mathbf t}$ is self-adjoint. Moreover, $d\mu_N(X)$ denotes the standard law of the GUE, that is
$$d\mu_N(X) = Z_N^{-1}\,1_{X\in\mathcal H_N^{(2)}}\,e^{-\frac N2\mathrm{Tr}(X^2)}\prod_{1\le i\le j\le N}d\Re(X_{ij})\prod_{1\le i<j\le N}d\Im(X_{ij}).$$
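The cyclic derivative $D_iV_{\mathbf t}$ will drive the Dyson-Schwinger equations for these interacting models, and both $\partial_i$ and $D_i$ act on monomials by simple word surgery, so they are easy to implement. Below is a small Python sketch (an illustration, not from the notes) representing a monomial as a tuple of variable indices and a polynomial as a dict mapping monomials to coefficients; it computes $\partial_i$ and $D_i$ and prints $D_1V$ for the example potential $V = t\,X_1X_2X_1X_2$.

```python
from collections import defaultdict

# A monomial in C<X_1,...,X_m> is a tuple of variable indices, e.g. (1, 2, 1, 2)
# stands for X_1 X_2 X_1 X_2. A polynomial is a dict {monomial: coefficient}.

def nc_derivative(mono, i):
    """partial_i of a monomial: list of (R, S) over decompositions mono = R X_i S."""
    return [(mono[:pos], mono[pos + 1:]) for pos, v in enumerate(mono) if v == i]

def cyclic_derivative(poly, i):
    """D_i = m o partial_i, with m(A (x) B) = BA, extended by linearity."""
    out = defaultdict(complex)
    for mono, coeff in poly.items():
        for R, S in nc_derivative(mono, i):
            out[S + R] += coeff          # m(R (x) S) = S R
    return dict(out)

# Example: V = t * X_1 X_2 X_1 X_2 with t = 0.1
V = {(1, 2, 1, 2): 0.1}
print(cyclic_derivative(V, 1))   # expected: {(2, 1, 2): 0.2}, i.e. 2t X_2 X_1 X_2
```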

This part is motivated by a work of 't Hooft [82] and large developments which occurred thereafter in theoretical physics. 't Hooft in fact noticed that if $V = V_{\mathbf t} = \sum_{i=1}^nt_iq_i$ with fixed monomials $q_i$ in m non-commutative unknowns, and if we see $Z^N_{V_{\mathbf t}} = Z^N_{\mathbf t}$ as a function of $\mathbf t = (t_1,\dots,t_n)$,
$$\ln Z^N_{\mathbf t} := \sum_{g\ge0}N^{2-2g}F_g(\mathbf t) \qquad (16)$$
where
$$F_g(\mathbf t) := \sum_{k_1,\dots,k_n\in\mathbb N}\prod_{i=1}^n\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_g\big((q_i,k_i)_{1\le i\le n}\big)$$
is a generating function of the number $\mathcal M_g((q_i,k_i)_{1\le i\le n})$ of maps with genus g built over $k_i$ stars of type $q_i$, $1\le i\le n$. A map is a connected oriented graph which is embedded into a surface. Its genus g is by definition the smallest genus of a surface in which it can be embedded in such a way that edges do not cross and the faces of the graph (which are defined by following the boundary of the graph) are homeomorphic to a disc. Intuitively, the genus of a surface is the maximum number of simple closed curves that can be drawn on it without disconnecting it. The genus of a map is related with the number of vertices, edges and faces of the map. The faces of the map are the pieces of the surface in which it is embedded which are enclosed by the edges of the graph. Then, the Euler characteristic 2 − 2g is given by the number of faces plus the number of vertices minus the number of edges. The vertices of the maps we shall consider have the structure of a star; a star of type q, for some monomial $q = X_{\ell_1}\cdots X_{\ell_k}$, is a vertex with valence deg(q) and oriented colored half-edges, with one marked half-edge of color $\ell_1$, the second of color $\ell_2$, etc., until the last one of color $\ell_k$. $\mathcal M_g((q_i,k_i)_{1\le i\le n})$ is then the number of maps with $k_i$ stars of type $q_i$, $1\le i\le n$.

The equality (16) obtained by 't Hooft [82] was only formal, i.e. it means that all the derivatives on both sides of the equality coincide at $\mathbf t = 0$. This result can then be deduced from Wick's formula, which gives the expression of arbitrary moments of Gaussian variables. Adding to V a term $t\,q$ for some monomial q and identifying the first order derivative with respect to t at t = 0, we derive from (16)
$$\int\hat\mu_N(q)\,d\mu^N_{V_{\mathbf t}} = \sum_{g\ge0}N^{-2g}\sum_{k_1,\dots,k_n\in\mathbb N}\prod_{i=1}^n\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_g\big((q_i,k_i)_{1\le i\le n},(q,1)\big). \qquad (17)$$
Even though the expansions (16) and (17) were first introduced by 't Hooft to compute the matrix integrals, the natural reverse question of computing the numbers $\mathcal M_g((q_i,k_i)_{1\le i\le n})$ by studying the associated integrals over matrices encountered a large success in theoretical physics. In the course of doing so, one would like for instance to compute the limit $\lim_{N\to\infty}N^{-2}\ln Z^N_{\mathbf t}$ and claim that the limit has to be equal to $F_0(\mathbf t)$. There is here the claim that one can interchange derivatives and limit, a claim that we shall study in this chapter.

We shall indeed prove that the formal limit can be strengthened into a large N expansion. This requires that the integrals are finite, which could fail for instance with a potential such as $V(X) = X^3$. We could include such potentials at the cost of adding a cutoff $1_{\|X_i\|\le M}$ for some sufficiently large (but fixed) M. This however introduces boundary terms that we prefer to avoid hereafter. Instead, we shall assume that
$$(X_k(ij))_{i,j,k}\mapsto\mathrm{Tr}\Big(V_{\mathbf t}(X_1,\dots,X_m) + \frac14\sum_kX_k^2\Big) \qquad (18)$$
is convex for all N. We denote by U the set of parameters $\mathbf t = (t_1,\dots,t_n)$ so that (18) holds. Note that this is true when $\mathbf t = 0$. This implies that the Hessian of $-\ln\frac{d\mu^N_{V_{\mathbf t}}}{dx}$ is uniformly bounded below by $\frac N4\,I$, that is, $\frac{d\mu^N_{V_{\mathbf t}}}{dx}$ is uniformly log-concave. This property will provide useful a priori bounds. We also denote by $B_\epsilon$ the set of parameters $\mathbf t = (t_1,\dots,t_n)$ so that $\|\mathbf t\|_\infty = \max|t_i|$ is bounded above by ε. In the sequel, we denote $\|\mathbf t\|_1 = \sum|t_i|$. Then, we shall prove that for $\mathbf t\in U\cap B_\epsilon$, ε small enough,
$$\bar\mu^N_{V_{\mathbf t}}[P] = \mu^N_{V_{\mathbf t}}[\hat\mu_N(P)] = \sigma^0_{V_{\mathbf t}}(P) + \frac1{N^2}\sigma^1_{V_{\mathbf t}}(P) + o(N^{-2})$$
where
$$\sigma^g_{V_{\mathbf t}}(q) = \sum_{k_1,\dots,k_n\in\mathbb N}\prod_{i=1}^n\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_g\big((q_i,k_i)_{1\le i\le n},(q,1)\big)$$
for monomial functions q, for g = 0 or 1. This part summarizes results from [52] and [53]. The full expansion (i.e. higher order corrections) was obtained by E. Maurel Segala (see [70]).

3.4.1 First order expansion for the free energy

We prove here (see Theorem 3.16) that
$$\lim_{N\to\infty}\frac1{N^2}\ln\int e^{N\sum_{i=1}^nt_i\mathrm{Tr}(q_i(X_1,\dots,X_m))}\,d\mu_N(X_1)\cdots d\mu_N(X_m) = \sum_{k_1,\dots,k_n\in\mathbb N}\prod_{i=1}^n\frac{(t_i)^{k_i}}{k_i!}\,\mathcal M_0\big((q_i,k_i),1\le i\le n\big)$$
provided $V_{\mathbf t}$ satisfies (18) and the parameters $t_i$ are sufficiently small. To prove this result we first show that, under the same assumptions, $\bar\mu^N_{\mathbf t}(q) = \mu^N_{\sum t_iq_i}(N^{-1}\mathrm{Tr}(q))$ converges as N goes to infinity towards a limit which is as well related with map enumeration (see Theorem 3.12). The central tool in our asymptotic analysis will again be the Dyson-Schwinger equations. They are a simple emanation of the integration by parts formula (or, somewhat equivalently, of the symmetry of the Laplacian in $L^2(dx)$). These equations will be shown to pass to the large N limit and are then given as some asymptotic differential equation for the limit points of $\bar\mu^N_{\mathbf t} = \mu^N_{V_{\mathbf t}}[\hat\mu_N]$. These equations will in turn uniquely determine these limit points in some small range of the parameters. We will then show that the limit points have to be given as some generating function of maps.

3.4.2 Finite dimensional Dyson-Schwinger equations

We can generalize the Dyson-Schwinger equations that we proved in Section 3.2 for independent GUE matrices to the interacting case as follows.

Property 3.7. For all $P\in\mathbb C\langle X_1,\dots,X_m\rangle$, all $i\in\{1,\dots,m\}$,
$$\mu^N_{V_{\mathbf t}}\big[\hat\mu_N\otimes\hat\mu_N(\partial_iP)\big] = \mu^N_{V_{\mathbf t}}\big[\hat\mu_N\big((X_i + D_iV_{\mathbf t})P\big)\big].$$

Proof. Using repeatedly Stein's lemma, which says that for any differentiable function f on $\mathbb R$,
$$\int f(x)\,x\,e^{-\frac{x^2}2}\,dx = \int f'(x)\,e^{-\frac{x^2}2}\,dx,$$
we find, since
$$A_l(rs)\,e^{-\frac{|\Re(A_l(rs))|^2}2-\frac{|\Im(A_l(rs))|^2}2} = -\big(\partial_{\Re(A_l(rs))} + i\,\partial_{\Im(A_l(rs))}\big)\,e^{-\frac{|\Re(A_l(rs))|^2}2-\frac{|\Im(A_l(rs))|^2}2} = -\partial_{\bar A_l(rs)}\,e^{-\frac{|\Re(A_l(rs))|^2}2-\frac{|\Im(A_l(rs))|^2}2}$$
with $\partial_{\bar A_l(rs)}A_k(ij) = \partial_{A_l(sr)}A_k(ij) = 1_{k=l}1_{rs=ji}$, that
$$\int\frac1N\mathrm{Tr}(A_kP)\,d\mu^N_{V_{\mathbf t}}(A) = \frac1{N^2}\sum_{i,j=1}^N\int e^{N\mathrm{Tr}(V_{\mathbf t})}\,\partial_{A_k(ji)}\big(P\,e^{-N\mathrm{Tr}(V_{\mathbf t})}\big)_{ji}\,d\mu^N_{V_{\mathbf t}}(A)$$
$$= \int\frac1{N^2}\sum_{i,j=1}^N\Big(\sum_{P=QX_kR}Q_{jj}R_{ii} - N\sum_{l=1}^nt_l\sum_{q_l=QX_kR}\sum_{h=1}^NP_{ji}Q_{hj}R_{ih}\Big)\,d\mu^N_{V_{\mathbf t}}(A)$$
$$= \int\Big(\frac1{N^2}(\mathrm{Tr}\otimes\mathrm{Tr})(\partial_kP) - \frac1N\mathrm{Tr}(D_kV_{\mathbf t}\,P)\Big)\,d\mu^N_{V_{\mathbf t}}(A)$$
where $A = (A_1,\dots,A_m)$. This yields
$$\int\Big(\hat\mu_N\big((X_k + D_kV_{\mathbf t})P\big) - \hat\mu_N\otimes\hat\mu_N(\partial_kP)\Big)\,d\mu^N_{V_{\mathbf t}}(A) = 0. \qquad (19)\quad\diamond$$

3.4.3 A priori estimates

$\mu^N_{V_{\mathbf t}}$ is a probability measure with uniformly log-concave density. This provides very useful a priori inequalities such as concentration inequalities and Brascamp-Lieb inequalities. We recall below the main consequences we shall use and refer to [3] and my course in Saint-Flour for details.
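At $\mathbf t = 0$ the interacting measure reduces to independent GUE matrices, so Property 3.7 can be sanity-checked by plain Monte Carlo. The sketch below (an illustration, not from the notes) takes $P = X_1X_2^2$ and $i = 1$, for which $\partial_1P = 1\otimes X_2^2$, and compares $E[\hat\mu_N(1)\hat\mu_N(X_2^2)]$ with $E[\hat\mu_N(X_1^2X_2^2)]$; both should agree up to sampling error (and tend to 1 as $N\to\infty$).

```python
import numpy as np

def sample_gue(N, rng):
    """GUE matrix with entries of variance 1/N."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

# Check of Property 3.7 at t = 0 (independent GUE matrices) with P = X_1 X_2^2, i = 1:
# partial_1 P = 1 (x) X_2^2, so the identity reads
#   E[ mu_N(1) mu_N(X_2^2) ] = E[ mu_N(X_1^2 X_2^2) ].
rng = np.random.default_rng(3)
N, n_samples = 60, 400
lhs, rhs = 0.0, 0.0
for _ in range(n_samples):
    X1, X2 = sample_gue(N, rng), sample_gue(N, rng)
    lhs += np.trace(X2 @ X2).real / N
    rhs += np.trace(X1 @ X1 @ X2 @ X2).real / N
print(f"E[mu_N(1) mu_N(X2^2)]  ~ {lhs / n_samples:.4f}")
print(f"E[mu_N(X1^2 X2^2)]     ~ {rhs / n_samples:.4f}")
```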

We assume that $V_{\mathbf t} = \sum t_iq_i$ satisfies (18), that is $\mathbf t = (t_1,\dots,t_n)\in U$. Brascamp-Lieb inequalities allow to compare expectations of convex functions with those under the Gaussian law, for which we have a priori bounds on the norm of matrices. From this we deduce, see [53] for details, that

Lemma 3.8. For ε small enough, there exists $M_0$ finite so that for all $\mathbf t\in U\cap B_\epsilon$, $V_{\mathbf t} = \sum t_iq_i$, there exists a positive constant c such that for all i and $s\ge0$
$$\mu^N_{V_{\mathbf t}}\big(\|X_i\|\ge s + M_0 - 1\big) \le e^{-cNs}.$$
As a consequence, for δ > 0, for all $r\le N/2$ and all $\ell_i$, $i\le r$,
$$E\big[|\hat\mu_N(X_{\ell_1}\cdots X_{\ell_r})|\big] \le (M_0+\delta)^r. \qquad (20)$$

Concentration inequalities are deduced from log-Sobolev and Herbst's argument [53, section 2.3]:

Lemma 3.9. There exist ε > 0 and c > 0 so that for $\mathbf t\in U\cap B_\epsilon$ and for any polynomial P,
$$\mu^N_{V_{\mathbf t}}\Big(\big\{|\hat\mu_N(P) - E[\hat\mu_N(P)]|\ge\|P\|^L_{M_0}\,\delta\big\}\cap\{\|X_i\|\le M_0+1\}\Big) \le e^{-cN^2\delta^2}$$
where $\|P\|^L_A = \sup_{\|X_i\|\le A}\big(\sum_{k=1}^m\|D_kP(X)\,D_kP^*(X)\|_\infty\big)^{1/2}$, the supremum being taken over m-tuples of N × N self-adjoint matrices $X = (X_1,\dots,X_m)$ and all N. Note that if $P = \sum\alpha_qq$, then $\|P\|^L_A \le \big(\sum|\alpha_q|^2\deg(q)^2A^{2\deg(q)}\big)^{1/2}$.

3.4.4 Tightness and limiting Dyson-Schwinger equations

We say that $\tau\in\mathbb C\langle X_1,\dots,X_m\rangle^*$ satisfies the Dyson-Schwinger equation with potential V, denoted in short SD[V], if and only if for all $i\in\{1,\dots,m\}$ and $P\in\mathbb C\langle X_1,\dots,X_m\rangle$,
$$\tau(I) = 1,\qquad \tau\otimes\tau(\partial_iP) = \tau\big((D_iV + X_i)P\big). \qquad\mathrm{SD}[V]$$
We shall now prove that

Property 3.10. There exists ε > 0 so that for all $\mathbf t\in U\cap B_\epsilon$, $(\bar\mu^N_{\mathbf t}, N\in\mathbb N)$ is tight. Any limit point τ satisfies SD[$V_{\mathbf t}$] and belongs to $K(M_0)$, with $M_0$ as in Lemma 3.8 and $K(M)$ defined in Lemma 3.2.

Proof. By Lemma 3.8 we know that $\bar\mu^N_{\mathbf t} = \mu^N_{V_{\mathbf t}}[\hat\mu_N]$ belongs to the compact set $K(M_0)$ (the restriction on moments with degree going to infinity with N being irrelevant), hence this sequence is tight. Any limit point τ belongs as well to $K(M_0)$. Moreover, the DS equation (19), together with the concentration property of Lemma 3.9, implies that
$$\tau\big((X_k + D_kV)P\big) = \tau\otimes\tau(\partial_kP). \qquad (21)\quad\diamond$$

3.4.5 Uniqueness of the solutions to Dyson-Schwinger equations for small parameters

The main result of this paragraph is

Theorem 3.11. For all $R\ge1$, there exists $\epsilon>0$ so that for $\|t\|_\infty=\max_{1\le i\le n}|t_i|<\epsilon$, there exists at most one solution $\tau_t\in\mathcal K(R)$ to SD[$V_t$].

Remark: Note that if $V=0$, our equation becomes
$$\tau(X_iP)=\tau\otimes\tau(\partial_iP).$$
Because, if $P$ is a monomial, $\tau\otimes\tau(\partial_iP)=\sum_{P=P_1X_iP_2}\tau(P_1)\tau(P_2)$ with $P_1$ and $P_2$ of degree smaller than that of $P$, we see that the equation SD[0] allows to define $\tau(P)$ uniquely for all $P$ by induction. The solution can be seen to be exactly $\tau(P)=\sigma^m(P)$, where $\sigma^m$ is the law of $m$ free semicircular variables found in Theorem 3.4. When $V_t$ is not zero, such an argument does not hold a priori since the right-hand side also depends on $\tau(D_iq_jP)$, with $D_iq_jP$ of degree strictly larger than that of $X_iP$. However, our compactness assumption $\mathcal K(R)$ gives uniqueness because it forces the solution to be in a small neighborhood of the law $\tau_0=\sigma^m$ of $m$ free semicircular variables, so that a perturbation analysis applies. We shall see in Theorem 3.13 that this solution is actually the one related to the enumeration of maps.

Proof. Let us assume that we have two solutions $\tau$ and $\tau'$ in $\mathcal K(R)$. Then, by the equation SD[$V_t$], for any monomial function $P$ of degree $l-1$ and any $i\in\{1,\cdots,m\}$,
$$(\tau-\tau')(X_iP)=\big((\tau-\tau')\otimes\tau\big)(\partial_iP)+\big(\tau'\otimes(\tau-\tau')\big)(\partial_iP)-(\tau-\tau')(D_iV_tP).$$
We define, for $l\in\mathbb N$,
$$\Delta_l(\tau,\tau')=\sup_{\text{monomial }P\text{ of degree}\le l}|\tau(P)-\tau'(P)|.$$
Using SD[$V_t$] and noticing that, if $P$ is of degree $l-1$,
$$\partial_iP=\sum_{k=0}^{l-2}p^1_k\otimes p^2_{l-2-k}$$
where $p^i_k$, $i=1,2$, are monomials of degree $k$ or the null monomial, and that $D_iV_t$ is a finite sum of monomials of degree smaller than $D-1$, we deduce
$$\Delta_l(\tau,\tau')=\max_{1\le i\le m}\;\max_{P\text{ monomial of degree}\le l-1}\{|\tau(X_iP)-\tau'(X_iP)|\}\le2\sum_{k=0}^{l-2}\Delta_k(\tau,\tau')R^{l-2-k}+C\|t\|_\infty\sum_{p=0}^{D-1}\Delta_{l+p-1}(\tau,\tau')$$

with a finite constant $C$ (which depends on $n$ only). For $\gamma>0$, we set
$$d_\gamma(\tau,\tau')=\sum_{l\ge0}\gamma^l\,\Delta_l(\tau,\tau').$$
Note that in $\mathcal K(R)$ this sum is finite for $\gamma<R^{-1}$. Summing the two sides of the above inequality times $\gamma^l$, we arrive at
$$d_\gamma(\tau,\tau')\le2\gamma^2(1-\gamma R)^{-1}d_\gamma(\tau,\tau')+C\|t\|_\infty\sum_{p=0}^{D-1}\gamma^{-p+1}d_\gamma(\tau,\tau').$$
We finally conclude that if $(R,\|t\|_\infty)$ are small enough so that we can choose $\gamma\in(0,R^{-1})$ with
$$2\gamma^2(1-\gamma R)^{-1}+C\|t\|_\infty\sum_{p=0}^{D-1}\gamma^{-p+1}<1,$$
then $d_\gamma(\tau,\tau')=0$, so that $\tau=\tau'$ and we have at most one solution. Taking $\gamma=(2R)^{-1}$ shows that this is possible provided
$$\frac{1}{R^2}+C\|t\|_\infty\sum_{p=0}^{D-1}(2R)^{p-1}<1,$$
so that when $\|t\|_\infty$ goes to zero we see that we can take $R$ at most of order $\|t\|_\infty^{-1/(D-2)}$. ⋄

3.4.6 Convergence of the empirical distribution

We can finally state the main result of this section.

Theorem 3.12. There exist $\epsilon>0$ and $M_0\in\mathbb R_+$ (given in Lemma 3.8) so that for all $t\in U\cap B_\epsilon$, $\hat\mu^N$ (resp. $\bar\mu^N_t$) converges almost surely (resp. everywhere) towards the unique solution $\tau$ of SD[$V_t$] such that $|\tau(X_{\ell_1}\cdots X_{\ell_r})|\le M_0^r$ for all choices of $\ell_1,\cdots,\ell_r$.

Proof. By Property 3.10, the limit points of $\bar\mu^N_t$ belong to $\mathcal K(M_0)$ and satisfy SD[$V_t$]. Since $M_0$ does not depend on $t$, we can apply Theorem 3.11 to see that if $t$ is small enough there is only one such limit point. Thus, by Corollary 3.3, we can conclude that $(\bar\mu^N_t,N\in\mathbb N)$ converges towards this limit point. From the concentration inequalities we have that
$$\mu^N_V\Big(\big|(\hat\mu^N-\bar\mu^N_t)(P)\big|^2\Big)\le B_C(P,M)\,N^{-2}+C^2dN^2e^{-\alpha MN/2},$$
ensuring by the Borel-Cantelli lemma that
$$\lim_{N\to\infty}(\hat\mu^N-\bar\mu^N_t)(P)=0\quad\text{a.s.},$$
resulting in the almost sure convergence of $\hat\mu^N$. ⋄
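The SD[0] recursion described in the remark after Theorem 3.11 is completely explicit and easy to run. The sketch below (an illustration of mine, with hypothetical helper names) computes the limiting moments by that induction for words in one or two letters; single-letter moments come out as the Catalan numbers (the semicircle moments), and mixed words give the moments of free semicircular variables, e.g. $\tau(X_1X_2X_1X_2)=0$ while $\tau(X_1^2X_2^2)=1$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tau(word: str) -> int:
    """Moments of m free semicirculars defined inductively by SD[0]:
    tau(X_i w) = sum over decompositions w = w1 X_i w2 of tau(w1) tau(w2)."""
    if word == "":
        return 1
    first, rest = word[0], word[1:]
    total = 0
    for pos, letter in enumerate(rest):
        if letter == first:
            total += tau(rest[:pos]) * tau(rest[pos + 1:])
    return total

# one letter: semicircle moments 1, 0, 1, 0, 2, 0, 5 (Catalan numbers at even orders)
print([tau("a" * k) for k in range(7)])
# two free semicircular variables: tau(abab) = 0 while tau(aabb) = 1
print(tau("abab"), tau("aabb"))
```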

3.4.7 Combinatorial interpretation of the limit

In this part, we are going to identify the unique solution $\tau_t$ of Theorem 3.11 as a generating function for planar maps. Namely, for $k=(k_1,\cdots,k_n)\in\mathbb N^n$ and $P$ a monomial in $\mathbb{C}\langle X_1,\cdots,X_m\rangle$, we let
$$\mathcal M_k(P)=\mathrm{card}\big\{\text{planar maps with }k_i\text{ labelled stars of type }q_i\text{ for }1\le i\le n\text{ and one of type }P\big\}=\mathcal M_0\big((P,1),(q_i,k_i)_{1\le i\le n}\big).$$
This definition extends to $P\in\mathbb{C}\langle X_1,\cdots,X_m\rangle$ by linearity. Then, we shall prove that

Theorem 3.13. 1. The family $\{\mathcal M_k(P),\,k\in\mathbb N^n,\,P\in\mathbb{C}\langle X_1,\cdots,X_m\rangle\}$ satisfies the induction relation: for all $i\in\{1,\cdots,m\}$, all $P\in\mathbb{C}\langle X_1,\cdots,X_m\rangle$ and all $k\in\mathbb N^n$,
$$\mathcal M_k(X_iP)=\sum_{\substack{0\le p_j\le k_j\\1\le j\le n}}\prod_{1\le j\le n}C^{p_j}_{k_j}\,\mathcal M_p\otimes\mathcal M_{k-p}(\partial_iP)+\sum_{j=1}^nk_j\,\mathcal M_{k-1_j}\big([D_iq_j]P\big)\qquad(22)$$
where $1_j(i)=1_{i=j}$ and $\mathcal M_k(1)=1_{k=0}$. Moreover, (22) defines the family $\{\mathcal M_k(P),\,k\in\mathbb N^n,\,P\in\mathbb{C}\langle X_1,\cdots,X_m\rangle\}$ uniquely.

2. There exist finite constants $A,B$ so that for all $k\in\mathbb N^n$ and all monomials $P\in\mathbb{C}\langle X_1,\cdots,X_m\rangle$,
$$|\mathcal M_k(P)|\le k!\,A^{\sum_{i=1}^nk_i}\,B^{\deg(P)}\,\prod_{i=1}^nC_{k_i}\,C_{\deg(P)}\qquad(23)$$
with $k!:=\prod_{i=1}^nk_i!$ and $C_p$ the Catalan numbers.

3. For $t$ in $B_{(4A)^{-1}}$,
$$\mathcal M_t(P)=\sum_{k\in\mathbb N^n}\prod_{i=1}^n\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_k(P)$$
is absolutely convergent. For $t$ small enough, $\mathcal M_t$ is the unique solution of SD[$V_t$] which belongs to $\mathcal K(4B)$.

By Theorem 3.11 and Theorem 3.12, we therefore readily obtain

Corollary 3.14. For all $c>0$, there exists $\eta>0$ so that for $t\in U_c\cap B_\eta$, $\hat\mu^N$ converges almost surely and in expectation towards
$$\tau_t(P)=\mathcal M_t(P)=\sum_{k\in\mathbb N^n}\prod_{i=1}^n\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_k(P).$$
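Relation (22) is straightforward to implement. The sketch below (my own illustration, with hypothetical helper names) specializes it to one matrix and a single quartic interaction $q=x^4$, so that $Dq=4x^3$, and computes $\mathcal M_k(x^j)$ by a memoized recursion; the values $\mathcal M_0(x^{2r})$ should come out as Catalan numbers, consistently with the bound (23).

```python
from functools import lru_cache
from math import comb

# One-matrix model with a single interaction star q(x) = x^4, whose cyclic
# derivative is D q = 4 x^3.  maps(k, j) stands for M_k(x^j): planar maps with
# k labelled 4-stars and one star of type x^j.

@lru_cache(maxsize=None)
def maps(k: int, j: int) -> int:
    if j == 0:
        return 1 if k == 0 else 0          # M_k(1) = 1_{k=0}
    # relation (22) applied to X P with P = x^{j-1}:
    #   M_k(x^j) = sum_p C(k,p) (M_p ⊗ M_{k-p})(∂ x^{j-1}) + 4 k M_{k-1}(x^{j+2})
    total = 0
    for p in range(k + 1):
        for a in range(j - 1):             # ∂ x^{j-1} = sum_a x^a ⊗ x^{j-2-a}
            total += comb(k, p) * maps(p, a) * maps(k - p, j - 2 - a)
    if k >= 1:
        total += 4 * k * maps(k - 1, j + 2)
    return total

def catalan(r: int) -> int:
    return comb(2 * r, r) // (r + 1)

# with no interaction star, planar-map counts reduce to Catalan numbers
assert all(maps(0, 2 * r) == catalan(r) for r in range(8))
assert all(maps(0, 2 * r + 1) == 0 for r in range(8))

for k in range(4):
    print(k, [maps(k, j) for j in range(7)])
```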

  30. µ N , for all P, Q in C � X 1 , · · · , X m � , Let us remark that by definition of ˆ µ N ( PP ∗ ) ≥ 0 µ N ( PQ ) = ˆ µ N ( QP ) . ˆ ˆ These conditions are closed for the weak topology and hence we find that Corollary 3.15. There exists η > 0 ( η ≥ (4 A ) − 1 ) so that for t ∈ B η , M t is a linear form on C � X 1 , · · · , X m � such that for all P, Q M t ( PP ∗ ) ≥ 0 M t ( PQ ) = M t ( QP ) M t (1) = 1 . Remark. This means that M t is a tracial state. The traciality property can easily be derived by symmetry properties of the maps. However, the positivity property M t ( PP ∗ ) ≥ 0 is far from obvious but an easy consequence of the matrix models approximation. This property will be seen to be useful to actually solve the combinatorial problem (i.e. find an explicit formula for M t ). Proof of Theorem 3.13. 1. Proof of the induction relation (22) . • We first check them for k = 0 = (0 , · · · , 0) . By convention, there is one planar map with a single vertex, so M 0 (1) = 1 . We now check that � M 0 ( X i P ) = M 0 ⊗ M 0 ( ∂ i P ) = M 0 ( p 1 ) M 0 ( p 2 ) P = p 1 X i p 2 But this is clear since for any planar map with only one star of type X i P , the half-edge corresponding to X i has to be glued with another half-edge of P , hence if X i is glued with the half-edge X i coming from the decomposition P = p 1 X i p 2 , the map is split into two (independent) planar maps with stars p 1 and p 2 respectively (note here that p 1 and p 2 inherites the structure of stars since they inherite the orientation from P as well as a marked half-edge corresponding to the first neighbour of the glued X i .) • We now proceed by induction over the k and the degree of P ; we assume that (22) is true for � k i ≤ M and all monomials, and for � k i = M + 1 when deg ( P ) ≤ L . Note that M k (1) = 0 for | k | ≥ 1 since we can not glue a vertex with no half-edges with any star. Hence, this induction can be started with L = 0 . Now, consider R = X i P with P of degree less than L and the set of planar maps with a star of type X i Q and k j stars of type q j , 1 ≤ j ≤ n , with | k | = � k i = M + 1 . Then, ⋄ either the half-edge corresponding to X i is glued with an half- edge of P , say to the half-edge corresponding to the decomposition P = p 1 X i p 2 ; we see that this cuts the map M into two disjoint planar maps M 1 (containing the star p 1 ) and M 2 (resp. p 2 ), the stars of type q i being distributed either in one or the other of these 33

  31. two planar maps; there will be r i ≤ k i stars of type q i in M 1 , the rest in M 2 . Since all stars all labelled, there will be � C r i k i ways to assign these stars in M 1 and M 2 . Hence, the total number of planar maps with a star of type X i P and k i stars of type q i , such that the marked half-edge of X i P is glued with an half-edge of P is � � � n C r i (24) k i M r ( p 1 ) M k − r ( p 2 ) P = p 1 X i p 2 0 ≤ ri ≤ ki i =1 1 ≤ i ≤ n ⋄ Or the half-edge corresponding to X i is glued with an half-edge of another star, say q j ; let’s say with the edge coming from the decom- position of q j into q j = q 1 X i q 2 . Then, we can see that once we are giving this gluing of the two edges, we can replace X i P and q j by q 2 q 1 P . We have k j ways to choose the star of type q j and the total number of such maps is � k j M k − 1 j ( q 2 q 1 P ) q j = q 1 X i q 2 Note here that M k is tracial. Summing over j , we obtain by linearity of M k � n (25) k j M k − 1 j ([ D i q j ] P ) j =1 (24) and (25) give (22). Moreover, it is clear that (22) defines uniquely M k ( P ) by induction. 2. Proof of (23) . To prove the second point, we proceed also by induction over k and the degree of P . First, for k = 0 , M 0 ( P ) is the number of colored maps with one star of type P which is smaller than the number of planar maps with one star of type x deg P since colors only add constraints. Hence, we have, with C k the Catalan numbers, M k ( P ) ≤ C ] ≤ C deg ( P ) [ deg ( P ) 2 showing that the induction relation is fine with A = B = 1 at this step. Hence, let us assume that (23) is true for � k i ≤ M and all polynomials, and � k i = M +1 for polynomials of degree less than L . Since M k (1) = 0 for � k i ≥ 1 we can start this induction. Moreover, using (22), we get 34

  32. that, if we denote k ! = � n i =1 k i ! , � � M k ( X i P ) M p ( P 1 ) M k − p ( P 2 ) = k ! p ! ( k − p )! 0 ≤ pi ≤ ki P = P 1 X i P 2 1 ≤ j ≤ n � M k − 1 j (( D i q j P ) + ( k − 1 j )! 1 ≤ j ≤ n kj � =0 Hence, taking P of degree less or equal to L and using our induction hypothesis, we find that with D the maximum of the degrees of q j � � n � � � � � M k ( X i P ) � k i B degP − 1 � � ≤ A C p i C k i − p i C degP 1 C degP 2 � � k ! i =1 0 ≤ pj ≤ kj P = P 1 X i P 2 1 ≤ j ≤ n � � k j − 1 � C k j B degP + degq l − 1 C degP + degq l − 1 + D A 1 ≤ l ≤ n j � � � � k i B degP +1 � 1 ≤ j ≤ n B degq j − 2 4 degq j − 2 4 n ≤ A C k i C degP +1 B 2 + D A i It is now sufficient to choose A and B such that � 1 ≤ j ≤ n B degq j − 2 4 degq j − 2 4 n B 2 + D ≤ 1 A (for instance B = 2 n +1 and A = 4 nDB D − 2 4 D − 2 if D is the maximal degree of the q j ) to verify the induction hypothesis works for polynomials of all degrees (all L ’s). 3. Properties of M t . From the previous considerations, we can of course define M t and the serie is absolutely convergent for | t | ≤ (4 A ) − 1 since C k ≤ 4 k . Hence M t ( P ) depends analytically on t ∈ B (4 A ) − 1 . Moreover, for all monomial P , n n � � � (4 t i A ) k i (4 B ) degP ≤ (1 − 4 At i ) − 1 (4 B ) degP . |M t ( P ) | ≤ k ∈ N n i =1 i =1 so that for small t , M t belongs to K(4B) . 4. M t satisfies SD[ V t ] . This is derived by summing (22) written for all k and multiplied by the factor � ( t i ) k i /k i ! . From this point and the previous one (note that B is independent from t ), we deduce from Theorem 3.11 that for sufficiently small t , M t is the unique solution of SD[ V t ] which belongs to K(4B) . ⋄ 35

3.4.8 Convergence of the free energy

Theorem 3.16. There exists $\epsilon>0$ so that for $t\in U\cap B_\epsilon$
$$\lim_{N\to\infty}\frac{1}{N^2}\ln\frac{Z^N_{V_t}}{Z^N_0}=\sum_{k\in\mathbb N^n\setminus(0,\ldots,0)}\prod_{1\le i\le n}\frac{(-t_i)^{k_i}}{k_i!}\,\mathcal M_k.$$
Moreover, the right-hand side is absolutely convergent. Above, $\mathcal M_k$ denotes the number of planar maps built over $k_i$ stars of type $q_i$, $1\le i\le n$.

Proof. Note that if $V$ satisfies (18), then for any $\alpha\in[0,1]$, $\alpha V$ also satisfies (18). Set
$$F_N(\alpha)=\frac{1}{N^2}\ln Z^N_{V_{\alpha t}}.$$
Then
$$\frac{1}{N^2}\ln\frac{Z^N_{V_t}}{Z^N_0}=F_N(1)-F_N(0).$$
Moreover
$$\partial_\alpha F_N(\alpha)=-\mu^N_{V_{\alpha t}}\big(\hat\mu^N(V_t)\big).\qquad(26)$$
By Theorem 3.12, we know that for all $\alpha\in[0,1]$,
$$\lim_{N\to\infty}\mu^N_{V_{\alpha t}}\big(\hat\mu^N(V_t)\big)=\tau_{\alpha t}(V_t),$$
whereas by (20) we know that $\mu^N_{V_{\alpha t}}\big(\hat\mu^N(q_i)\big)$ stays uniformly bounded. Therefore, a simple use of the dominated convergence theorem shows that
$$\lim_{N\to\infty}\frac{1}{N^2}\ln\frac{Z^N_{V_t}}{Z^N_0}=-\int_0^1\tau_{\alpha t}(V_t)\,d\alpha=-\sum_{i=1}^nt_i\int_0^1\tau_{\alpha t}(q_i)\,d\alpha.\qquad(27)$$
Now, observe by Corollary 3.14 that, with $1_i=(0,\ldots,1,\ldots,0)$ the $1$ in $i$-th position,
$$\tau_t(q_i)=\sum_{k\in\mathbb N^n}\prod_{1\le j\le n}\frac{(-t_j)^{k_j}}{k_j!}\,\mathcal M_{k+1_i}=-\partial_{t_i}\sum_{k\in\mathbb N^n\setminus\{0,\cdots,0\}}\prod_{1\le j\le n}\frac{(-t_j)^{k_j}}{k_j!}\,\mathcal M_k,$$
so that (27) results in
$$\lim_{N\to\infty}\frac{1}{N^2}\ln\frac{Z^N_{V_t}}{Z^N_0}=\int_0^1\partial_\alpha\Big[\sum_{k\in\mathbb N^n\setminus\{0,\cdots,0\}}\prod_{1\le j\le n}\frac{(-\alpha t_j)^{k_j}}{k_j!}\,\mathcal M_k\Big]d\alpha=\sum_{k\in\mathbb N^n\setminus\{0,\cdots,0\}}\prod_{1\le j\le n}\frac{(-t_j)^{k_j}}{k_j!}\,\mathcal M_k.$$
⋄
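The interpolation trick used in this proof (and again for Theorem 3.23 below) is just thermodynamic integration: $\ln(Z_1/Z_0)=\int_0^1\partial_\alpha\ln Z_\alpha\,d\alpha$, with $\partial_\alpha\ln Z_\alpha$ an expectation of the perturbation. Here is a toy numerical check of that identity in one dimension (a quartic perturbation of a Gaussian), not a simulation of the matrix model itself; the grid and the parameter value are arbitrary choices of mine.

```python
import numpy as np

t = 0.3                                    # strength of the quartic perturbation
x = np.linspace(-8, 8, 20001)              # the integrand is negligible outside [-8, 8]
alphas = np.linspace(0.0, 1.0, 201)

def log_z(alpha):
    # Z_alpha = integral of exp(-x^2/2 - alpha * t * x^4)
    return np.log(np.trapz(np.exp(-0.5 * x**2 - alpha * t * x**4), x))

def mean_v(alpha):
    # E_alpha[t x^4] under the interpolated density
    w = np.exp(-0.5 * x**2 - alpha * t * x**4)
    return np.trapz(t * x**4 * w, x) / np.trapz(w, x)

direct = log_z(1.0) - log_z(0.0)
# thermodynamic integration: d/d alpha log Z_alpha = -E_alpha[t x^4]
thermo = np.trapz([-mean_v(a) for a in alphas], alphas)
print(direct, thermo)                      # should agree to discretization accuracy
```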

  34. 3.5 Second order expansion for the free energy We here prove that � ˆ � 1 � n i =1 t i N Tr( q i ( X 1 , ··· ,X m )) dµ N ( X 1 ) · · · dµ N ( X m ) N 2 ln e � 1 � � n ( t i ) k i 1 k i ! M g (( q i , k i ) , 1 ≤ i ≤ n ) + o ( 1 = N 2 ) N 2 g − 2 g =0 k 1 , ··· ,k n ∈ N i =1 for some parameters t i small enough and such that � t i q i satisfies (18). As for µ N the first order, we shall prove first a similar result for ¯ t . We will first refine µ N the arguments of the proof of Theorem 3.11 to estimate ¯ t − τ t . This will already prove that (¯ µ N t − τ t )( P ) is at most of order N − 2 . To get the limit of µ N − τ t which N 2 (¯ µ N t − τ t )( P ) , we will first obtain a central limit theorem for ˆ is of independent interest. The key argument in our approach, besides further uses of integration by parts-like arguments, will be the inversion of the master operator. This can not be done in the space of polynomial functions, but in the µ N and space of some convergent series. We shall now estimate differences of ˆ its limit. So, we set µ N − τ t ) ˆ δ N = N (ˆ t ˆ δ N ¯ δ N dµ N ˆ µ N = V = N (¯ t − τ t ) t µ N − ¯ ˜ t ) = ˆ t − ¯ δ N µ N δ N δ N = N (ˆ t . t Rough estimates on the size of the correction ˜ δ N 3.5.1 In this section we improve on the perturbation analysis performed in section 3.4.5 in order to get the order of ¯ δ N µ N t ( P ) = N (¯ t ( P ) − τ t )( P ) for all monomial P . Proposition 3.17. There exists ǫ > 0 so that for t ∈ U ∩ B ǫ , for all integer number N , and all monomial functions P of degree less than N , t ( P ) |≤ C deg ( P ) | ¯ δ N . N Proof. The starting point is the finite dimensional Dyson-Schwinger equation of Property 3.7 � � µ N ⊗ ˆ µ N µ N [( X i + D i V ) P ]) = µ N µ N ( ∂ i P ) (28) V (ˆ ˆ V 37

  35. Therefore, since τ satisfies the Dyson-Schwinger equation SD[V] , we get that for all polynomial P , ¯ t ( X i P ) = − ¯ t ( D i V t P ) + ¯ t ( ∂ i P ) + τ t ⊗ ¯ δ N δ N δ N µ N δ N (29) t ⊗ ¯ t ( ∂ i P ) + r ( N, P ) with � � r ( N, P ) := N − 1 µ N δ N ˜ t ⊗ ˜ δ N t ( ∂ i P ) . V t By Lemma 3.9, if P is a monomial of degree d , r ( N, P ) is at most of order d 3 M d − 1 /N . We set 0 | ¯ D N δ N d = max t ( P ) | . P monomial of degree ≤ d Observe that by (20), for ǫ > 0 and any monomial of degree d less than N/ 2 , µ N t ( P ) |≤ ( M 0 + ǫ ) d , | τ t ( P ) |≤ M d | ¯ 0 . Thus, by (29), writing D i V = � t j D i q j , we get that for d < N/ 2 � n d − 1 � l + 1 D N | t j | D N ( M 0 + ǫ ) d − l − 1 D N N d 3 M d d +1 ≤ max d + deg ( D i q j ) + 2 0 1 ≤ i ≤ m j =1 l =0 We next define for κ ≤ 1 N/ 2 � D N ( κ, ǫ ) = κ k D N k . k =1 We obtain, if D is the maximal degree of V , D N ( κ ) [ n � t � ∞ + 2(1 − ( M 0 + ǫ ) κ ) − 1 κ 2 ] D N ( κ ) ≤ N/ 2+ D N/ 2 � � κ k 1 κ k − D D N N k 3 ( M 0 + ǫ ) k (30) + n � t � ∞ k + k = N/ 2+1 k =1 where we choose κ small enough so that η = ( M 0 + ǫ ) κ < 1 . In this case the sum of the last two terms is of order 1 /N . Since D N k is bounded by 2 N ( M 0 + ǫ ) k , � N/ 2+ D k is of order Nκ − D η N/ 2 is going to zero. Then, for κ small, k = N/ 2+1 κ k − D D N we deduce D N ( κ ) ≤ C ( κ, ǫ ) N − 1 and so for all monomial P of degree d ≤ N/ 2 , | ¯ δ N t ( P ) |≤ C ( κ, ǫ ) κ − d N − 1 . ⋄ To get the precise evaluation of N ¯ δ N t ( P ) , and of the full expansion of the free energy, we use loop equations, and therefore introduce the corresponding master operator and show how to invert it. 38
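Proposition 3.17 says that the correction $\bar\mu^N_t(P)-\tau_t(P)$ is of order $N^{-2}$ (and Proposition 3.22 below identifies its limit after multiplication by $N^2$). A quick, admittedly noisy, Monte Carlo probe of this in the case $t=0$, $m=1$, $P=X^4$, where $\tau_0(X^4)=2$: the rescaled quantity $N^2(\mathbb E[\hat\mu^N(X^4)]-2)$ should stay bounded as $N$ grows. This is only a sketch of mine, with the GUE normalization assumed as before; increase `samples` for sharper estimates.

```python
import numpy as np

def gue(n, rng):
    # GUE normalized so that E[X_ij X_kl] = delta_il delta_jk / n
    a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (a + a.conj().T) / np.sqrt(2 * n)

def rescaled_correction(n, samples, seed=2):
    """Monte Carlo estimate of N^2 * (E[hat mu^N(X^4)] - 2) at t = 0."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(samples):
        x = gue(n, rng)
        x2 = x @ x
        acc += (np.abs(x2) ** 2).sum() / n      # (1/N) Tr X^4, since X is Hermitian
    return n ** 2 * (acc / samples - 2.0)

for n in (8, 16, 32):
    # the bias we are after is of size 1/n^2, hence sample sizes growing with n
    print(n, rescaled_correction(n, samples=2000 * n))
```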

  36. 3.5.2 Higher order loop equations. To get the central limit theorem we derive the higher order Dyson-Schwinger equations. To this end introduce the Master operator. It is the linear map on polynomials given by � m � d Ξ P = ∂ i P # X i + ∂ i P # D i V t − (1 ⊗ τ t + τ t ⊗ 1) ∂ i .D i P . i =1 i =1 Recall here that if P is a monomial � m i =1 ∂ i P # X i = deg( P ) P . Using the traciality of ˆ δ N and again integration by parts we find that t Lemma 3.18. For all monomials p 0 , . . . p k we have � � � k ˆ ˆ µ N δ N δ N t [Ξ p 0 ] t ( p i ) V t i =1   � k � m � ˆ µ N  ˆ µ N ( D i p 0 D i p j ) δ N  = ˆ t ( p ℓ ) V t j =1 i =1 ℓ � = j � � m k � � + 1 ˆ t ⊗ ˆ ˆ µ N δ N δ N δ N t [ ∂ i ◦ D i p 0 ] t ( p i ) V t N i =1 i =1 3.5.3 Inverting the master operator Note that when t = 0 , Ξ is invertible on the space of self-adjoint polynomials with no constant terms, which we denote C � X 1 , · · · , X m � 0 . The idea is therefore to invert Ξ for t small. If P is a polynomial and q a non-constant monomial we will denote ℓ q ( P ) the coefficient of q in the decomposition of P in monomials. We can then define a norm � . � A on C � X 1 , · · · , X m � 0 for A > 1 by � | ℓ q ( P ) | A deg q . � P � A = deg q � =0 In the formula above, the sum is taken on all non-constant monomials. We also define the operator norm given, for T from C � X 1 , · · · , X m � 0 to C � X 1 , · · · , X m � 0 , by ||| T ||| A = � T ( P ) � A . sup � P � A =1 A be the completion of C � X 1 , · · · , X m � 0 for � . � A . Finally, let C � X 1 , · · · , X m � 0 We say that T is continuous on C � X 1 , · · · , X m � 0 A if ||| T ||| A is finite. We shall prove that Ξ is continuous on C � X 1 , · · · , X m � 0 A with continuous inverse when t is small. We define a linear map Σ on C � X 1 , · · · , X m � such that for all monomials q of degree greater or equal to 1 q Σ( q ) = deg q . 39

  37. Moreover, Σ( q ) = 0 if deg q = 0 . We let Π be the projection from C � X 1 , · · · , X m � sa onto C � X 1 , · · · , X m � 0 (i.e Π( P ) = P − P (0 , · · · , 0) ). We now define some oper- ators on C � X 1 , · · · , X m � 0 i.e. from C � X 1 , · · · , X m � 0 into C � X 1 , · · · , X m � 0 , we set � m � � Ξ 1 : P − → Π ∂ k Σ P♯D k V k =1 � m � � Ξ 2 : P − → Π ( τ t ⊗ I + I ⊗ τ t )( ∂ k D k Σ P ) . k =1 We denote Ξ 0 = I − Ξ 2 ⇒ Π ◦ Ξ ◦ Σ = Ξ 0 + Ξ 1 , where I is the identity on C � X 1 , · · · , X m � 0 . Note that the images of the op- erators Ξ i ’s and Π ◦ Ξ ◦ Σ are indeed included in C � X 1 , · · · , X m � sa since V is assumed self-adjoint. Lemma 3.19. With the previous notations, 1. For t ∈ U , the operator Ξ 0 is invertible on C � X 1 , · · · , X m � 0 . 2. There exists A 0 > 0 such that for all A > A 0 , the operators Ξ 2 , Ξ 0 and Ξ − 1 0 are continuous on C � X 1 , · · · , X m � 0 A and their norm ||| . ||| A are uniformly bounded for t in B η ∩ U . 3. For all ǫ, A > 0 , there exists η ǫ > 0 such for � t � ∞ < η ǫ , Ξ 1 is continuous on C � X 1 , · · · , X m � 0 A and ||| Ξ 1 ||| A ≤ ǫ . 4. For all A > A 0 , there exists η > 0 such that for t ∈ B η ∩ U , Π ◦ Ξ ◦ Σ is continuous, invertible with a continuous inverse on C � X 1 , · · · , X m � 0 A . Besides the norms of Π ◦ Ξ and (Π ◦ Ξ) − 1 are uniformly bounded for t in B η . 5. For any A > M 0 , there is a finite constant C such that � P � L M 0 ≤ C � P � A . The norm � . � L M 0 was defined in Lemma 3.9. Proof. 1. Recall that Ξ 0 = I − Ξ 2 , whereas since Ξ 2 reduces the degree of a poly- nomial by at least 2 , � (Ξ 2 ) n ( P ) P → n ≥ 0 is well defined on C � X 1 , · · · , X m � 0 as the sum is finite for any polynomial P . This clearly gives an inverse for Ξ 0 . 40

  38. 2. First remark that a linear operator T has a norm less than C with respect to � . � A if and only if for all non-constant monomial q , � T ( q ) � A ≤ CA deg q . Recall that τ t is uniformly bounded (see Lemma 3.10) and let C 0 < + ∞ be such that | τ t ( q ) |≤ C deg q for all monomial q . Take a monomial q = 0 X i 1 · · · X i p , and assume that A > 2 C 0 , �� � � � A ≤ p − 1 � Π ( I ⊗ τ t ) ∂ k D k Σ q � r 1 τ t ( r 2 ) � A k k,q = q 1 Xkq 2 , q 2 q 1= r 1 Xkr 2 p − 1 p − 2 � � � ≤ 1 A deg r 1 C deg r 2 A l C p − l − 2 p − 1 ≤ 0 0 p n =0 k,q = q 1 Xkq 2 , l =0 q 2 q 1= r 1 Xkr 2 � C 0 � p − 2 − l p − 2 � A p − 2 ≤ 2 A − 2 � q � A ≤ A l =0 where in the second line, we observed that once deg( q 1 ) is fixed, q 2 q 1 is uniquely determined and then r 1 , r 2 are uniquely determined by the choice of l the degree of r 1 . Thus, the factor 1 p is compensated by the number of possible decomposition of q i.e. the choice of the degree of q 1 . If A > 2 , P → Π ( � k ( I ⊗ τ t ) ∂ k D k Σ P ) is continuous of norm strictly less 2 . And a similar calculus for Π ( � than 1 k ( τ t ⊗ I ) ∂ k D k Σ) shows that Ξ 2 is continuous of norm strictly less than 1 . It follows immediately that Ξ 0 is continuous. Recall now that � Ξ − 1 Ξ n = 2 . 0 n ≥ 0 As Ξ 2 is of norm strictly less than 1 , Ξ − 1 is continuous. 0 3. Let q = X i 1 · · · X i p be a monomial and let D be the degree of V � � 1 � q 1 D k V q 2 � A ≤ 1 � t � ∞ DnA p − 1+ D − 1 � Ξ 1 ( q ) � A ≤ p p k,q = q 1 X k q 2 k,q = q 1 X k q 2 � t � ∞ DnA D − 2 � q � A . = It is now sufficient to take η ǫ < ( nDA D − 2 ) − 1 ǫ . 4. We choose η < ( nDA D − 2 ) − 1 ||| Ξ − 1 0 ||| − 1 so that when | t |≤ η , A ||| Ξ 1 ||| A ||| Ξ − 1 0 ||| A < 1 . By continuity, we can extend Ξ 0 , Ξ 1 , Ξ 2 , Π ◦ Ξ ◦ Σ and Ξ − 1 on the space 0 C � X 1 , · · · , X m � 0 A . The operator � ( − Ξ − 1 0 Ξ 1 ) n Ξ − 1 P → 0 n ≥ 0 41

  39. is well defined and continuous. And this is clearly an inverse of Π ◦ Ξ ◦ Σ = Ξ 0 + Ξ 1 = Ξ 0 ( I + Ξ − 1 0 Ξ 1 ) . Finally, we notice that Σ − 1 is bounded from C � X 1 , · · · , X m � 0 A to C � X 1 , · · · , X m � 0 A ′ for A 0 < A ′ < A , and hence up to take A slightly larger ΠΞ = (ΠΞΣ) ◦ Σ − 1 is continuous on C � X 1 , · · · , X m � 0 A as well as its inverse. 5. The last point is trivial. ⋄ 3.5.4 Central limit theorem Theorem 3.20. Take t ∈ U ∩ B η for η small enough and A > M 0 ∧ A 0 . Then A , (ˆ δ N ( P 1 ) , . . . , ˆ For all P 1 , . . . , P k in C � X 1 , · · · , X m � 0 δ N ( P k )) converges in law to a centered Gaussian vector with covariance m � σ (2) ( P, Q ) := τ ( D i Ξ − 1 PD i Q ) . i =1 Proof. It is enough to prove the result for monomials P i (which satisfy P i (0) = 0 ). We know by the previous part that for A large enough there exists Q 1 ∈ C � X 1 , · · · , X m � 0 A so that P 1 = Π ◦ Ξ ◦ Σ Q 1 . But the space C � X 1 , · · · , X m � 0 A by construction. Thus, there exists a sequence Q p is dense in C � X 1 , · · · , X m � 0 1 in C � X 1 , · · · , X m � 0 such that p →∞ � Q 1 − Q p lim 1 � A = 0 . Let us define R p = Ξ ◦ Σ Q 1 − Ξ ◦ Σ Q p 1 in C � X 1 , · · · , X m � 0 A . By the previous section, it goes to zero for � . � A ′ for A ′ ∈ ( A 0 , A ) , but also for � . � L M 0 for A > M 0 . But, by Lemma 3.9 and 3.8 we find that since ˆ δ t N has mass bounded by N , for any polynomial P and δ > 0 and r integer number smaller than N/ 2 � N ( R ) | r � � � | ˆ | ˆ µ N δ t µ N δ t N ( R ) | r 1 ∩ i {� X i �≤ M 0 } ≤ V t V t � N ( R ) | 2 r � 1 / 2 | ˆ V t ( ∪ i {� X i � ≥ M 0 } ) 1 / 2 + µ N δ t µ N V t ˆ rx r − 1 e − cx 2 dx + ( N (2 M 0 + 2)) r e − cN . ( � R � L M 0 ) r ≤ We deduce by taking R = R p that for all r ∈ N µ N V t ( | ˆ δ t N ( R p ) | r ) = 0 . p →∞ lim sup lim N →∞ 42

  40. Therefore Lemma 3.18 implies that there exists o ( p ) going to zero when p goes to infinity such that � � � � � k � k ˆ ˆ ˆ ˆ µ N δ N δ N µ N δ N δ N t [ P ] t ( q i ) = t [Ξ Q p ] t ( q i ) + o ( p ) V t V t i =1 i =1   � k � m � ˆ µ N  ˆ µ N ( D i Q p D i q j ) δ N  = ˆ t ( q ℓ ) V t j =1 i =1 ℓ � = j � � � m � k 1 ˆ t ⊗ ˆ ˆ µ N δ N δ N δ N + t [ ∂ i ◦ D i Q p ] t ( q i ) + o ( p ) V t N i =1 i =1   � k � m � ˆ τ t ( D i Q p D i q j ) µ N δ N  + o ( p ) ≃ t ( q ℓ ) V t j =1 i =1 ℓ � = j   � k � m � ˆ τ t ( D i (Ξ − 1 P ) D i q j ) µ N δ N  + o ( p ) ≃ t ( q ℓ ) V t j =1 i =1 ℓ � = j where in the last line we used that � D i Q n − D i Q � A 0 goes to zero and that τ t is continuous for this norm. The result follows then by induction over k since again we recognize Wick formula. Exercise 3.21. Show that for P, Q two monomials, � � ( − t i ) ℓ i σ (2) ( P, Q ) = M 0 ( P, Q, ( q i , ℓ i )) ℓ i ! is the generating function for the enumeration of planar maps with two stars of type P, Q and ℓ i of type q i , 1 ≤ i ≤ n . 3.5.5 Second order correction to the free energy We now deduce from the Central Limit Theorem the precise asymptotics of N ¯ δ N ( P ) and then compute the second order correction to the free energy. Let φ 0 and φ be the linear forms on C � X 1 , · · · , X m � which are given, if P is a monomial, by m � � σ (2) ( P 3 P 1 , P 2 ) . φ ( P ) = i =1 P = P 1 X i P 2 X i P 3 Note that φ vanishes if the degree of P is less than 2 . Proposition 3.22. Take t ∈ U small enough. Then, for any polynomial P , N →∞ N ¯ δ N ( P ) = φ (Ξ − 1 Π P ) . lim 43

  41. Proof. Again, we base our proof on the finite dimensional Dyson-Schwinger equation (28) which, after centering, reads for i ∈ { 1 , · · · , m } , � � � � µ N − τ t )[( X i + D i V ) P − ( I ⊗ τ t + τ t ⊗ I ) ∂ i P δ N ⊗ ˆ N 2 µ N = µ N ˆ δ N ( ∂ i P )] (ˆ V V (31) Taking P → D i Π P and summing over i ∈ { 1 , · · · , m } , we thus have � � � m � � µ N − τ t )(Ξ P ) δ N ⊗ ˆ ˆ N 2 µ N = µ N δ N ( (32) (ˆ ∂ i ◦ D i P ) V V i =1 By Theorem 3.20 we see that � � � m δ N ⊗ ˆ N →∞ µ N ˆ δ N ( lim ∂ i ◦ D i Π P ) = φ ( P ) V i =1 which gives the asymptotics of N ¯ δ N (Ξ P ) for all P in the imgae of Ξ . To generalize the result to arbitrary P , we proceed as in the proof of the full central limit theorem. We take a sequence of polynomials Q n wich goes to Q = Ξ − 1 P when n go to ∞ for the norm � . � A . We denote R n = P − Ξ Q n = Ξ( Q − Q n ) . Note that as P and Q n are polynomials then R n is also a polynomial. N ¯ δ N ( P ) = N ¯ δ N (Ξ Q n ) + N ¯ δ N ( R n ) According to Proposition 3.17, for any monomial P of degree less than N 1 − ǫ , | N ¯ δ N ( P ) | ≤ C deg( P ) . So if we take the limit in N , for any monomial P , | N ¯ δ N ( P ) | ≤ C deg( P ) lim sup N and if P is a polynomial, Lemma 3.9 yields for C < A | N ¯ δ N ( P ) | ≤ � P � L lim sup C ≤ � P � A . N We now fix n and take the large N limit, | N ¯ | N ¯ δ N ( P − Ξ Q n ) | = lim sup δ N ( R n ) | ≤ � R n � A . lim sup N N If we take the limit in n the right term vanishes and we are left with : N N ¯ N N ¯ δ N ( P ) = lim δ N ( Q n ) = lim lim n lim n φ ( Q n ) . It is now sufficient to show that φ is continuous for the norm � . � A . But P → � m i =1 ∂ i ◦ D i P is continuous from C � X 1 , · · · , X m � 0 A to C � X 1 , · · · , X m � 0 A − 1 and σ 2 is continuous for � . � A − 1 provided A is large enough. This proves that φ is continuous and then can be extended on C � X 1 , · · · , X m � 0 A . Thus N N ¯ δ N ( P ) = lim lim n φ ( Q n ) = φ ( Q ) . ⋄ 44
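Before turning to the second-order correction of the free energy, here is a quick numerical illustration of the central limit theorem (Theorem 3.20) in the simplest case $t=0$, $m=1$, $P=X^2$: the recentred linear statistic $\hat\delta^N(X^2)=\mathrm{Tr}\,X^2-N\tau(X^2)$ should be approximately Gaussian, with variance close to the two-star planar count of Exercise 3.21 (here 2, the two planar gluings of two 2-stars). Again, this is only a sketch of mine under the GUE normalization used earlier.

```python
import numpy as np

def gue(n, rng):
    a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (a + a.conj().T) / np.sqrt(2 * n)

def clt_sample(n=100, samples=5000, seed=3):
    """Fluctuations Tr X^2 - N, i.e. hat delta^N(X^2), for a GUE matrix."""
    rng = np.random.default_rng(seed)
    vals = np.empty(samples)
    for s in range(samples):
        x = gue(n, rng)
        vals[s] = (np.abs(x) ** 2).sum() - n     # Tr X^2 - N tau(X^2)
    return vals

v = clt_sample()
print("mean (should be close to 0):", v.mean())
print("variance (should be close to 2, the two-star planar count):", v.var())
# crude normality check: standardized quantiles vs the Gaussian ones (about -1.28, 0, 1.28)
print(np.quantile(v, [0.1, 0.5, 0.9]) / np.sqrt(v.var()))
```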

Theorem 3.23. Take $t\in U$ small enough. Then
$$\ln\frac{Z^N_{V_t}}{Z^N_0}=N^2F_t+F^1_t+o(1)$$
with
$$F_t=\int_0^1\tau_{\alpha t}(V_t)\,d\alpha\qquad\text{and}\qquad F^1_t=\int_0^1\varphi_{\alpha t}\big(\Xi^{-1}_{\alpha t}V_t\big)\,d\alpha.$$

Proof. As in the proof of Theorem 3.16, we note that $\alpha V_t=V_{\alpha t}$ is $c$-convex for all $\alpha\in[0,1]$. We use (26) to see that
$$\partial_\alpha\ln Z^N_{V_{\alpha t}}=N^2\,\mu^N_{V_{\alpha t}}\big(\hat\mu^N(V_t)\big),$$
so that we can write
$$\ln\frac{Z^N_{V_t}}{Z^N_0}=\int_0^1N^2\,\mu^N_{V_{\alpha t}}\big(\hat\mu^N(V_t)\big)\,d\alpha=N^2F_t+\int_0^1\big[N\bar\delta^N_{\alpha t}(V_t)\big]\,d\alpha\qquad(33)$$
with
$$F_t=\int_0^1\tau_{\alpha t}(V_t)\,d\alpha.$$
Proposition 3.22 and (33) finish the proof of the theorem since, by Proposition 3.17, all the $N\bar\delta^N(q_i)$ can be bounded independently of $N$ and of $t\in B_\eta\cap U$, so that the dominated convergence theorem applies. ⋄

Exercise 3.24. Show that $F^1_t$ is a generating function for maps of genus one.

4 Beta-ensembles

Closely related to random matrices are the so-called Beta-ensembles. Their distribution is the probability measure on $\mathbb R^N$ given by
$$d\mathbb P^{\beta,V}_N(\lambda_1,\ldots,\lambda_N)=\frac{1}{Z^{\beta,V}_N}\,\Delta(\lambda)^\beta\,e^{-N\beta\sum V(\lambda_i)}\,\prod_{i=1}^Nd\lambda_i$$
where $\Delta(\lambda)=\prod_{i<j}|\lambda_i-\lambda_j|$.

Remark 4.1. In the case $V(x)=x^2/4$ and $\beta=2$, $\mathbb P^{2,x^2/4}_N$ is exactly the law of the eigenvalues of a matrix taken from the GUE, as we were considering in the previous chapter (the case $\beta=1$ corresponds to the GOE and $\beta=4$ to the GSE). This is left as a (complicated) exercise, see e.g. [3].
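For readers who want to experiment, the following sketch samples from $\mathbb P^{\beta,V}_N$ in the quadratic case $V(x)=x^2/4$ with a plain random-walk Metropolis chain on $\mathbb R^N$ (an illustration of mine, not an efficient sampler), and compares the empirical measure with the semicircle density on $[-2,2]$, its expected limit in this case (Theorem 4.3 below, consistently with Remark 4.1). Step size, chain length and initialization are arbitrary choices.

```python
import numpy as np

def log_density(lam, beta):
    """Unnormalized log density of P^{beta,V}_N with V(x) = x^2 / 4."""
    n = lam.size
    d = np.abs(lam[:, None] - lam[None, :])
    np.fill_diagonal(d, 1.0)                  # the diagonal is excluded from the product
    return 0.5 * beta * np.log(d).sum() - n * beta * np.sum(lam ** 2) / 4.0

def metropolis(n=60, beta=2.0, steps=60000, step_size=0.05, seed=0):
    """Very plain random-walk Metropolis; pools configurations from the second half."""
    rng = np.random.default_rng(seed)
    lam = 2.0 * rng.uniform(-1.0, 1.0, n)     # arbitrary initialization in [-2, 2]
    cur = log_density(lam, beta)
    kept = []
    for s in range(steps):
        prop = lam + step_size * rng.standard_normal(n)
        new = log_density(prop, beta)
        if np.log(rng.uniform()) < new - cur:
            lam, cur = prop, new
        if s > steps // 2 and s % 200 == 0:
            kept.append(lam.copy())
    return np.concatenate(kept)

samples = metropolis()
hist, edges = np.histogram(samples, bins=12, range=(-2.2, 2.2), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(np.clip(4.0 - centers ** 2, 0.0, None)) / (2 * np.pi)
for c, h, s in zip(centers, hist, semicircle):
    print(f"{c:+.2f}  empirical {h:.3f}   semicircle {s:.3f}")
```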

β-ensembles also represent strongly interacting particle systems. It turns out that both global and local statistics can be analyzed in some detail. In these lectures, we will discuss global asymptotics in the spirit of the previous chapter. This section is strongly inspired by [?]. However, in that paper only Stieltjes functions were considered, so that closed equations for correlators were only obtained under the assumption that $V$ is analytic. In this section we consider more general correlators, allowing sufficiently smooth (but not analytic) potentials. We did not try to optimize the smoothness assumption.

4.1 Law of large numbers and large deviation principles

Notice that we can rewrite the density of β-ensembles as
$$d\mathbb P^{\beta,V}_N=\frac{1}{Z^{\beta,V}_N}\exp\Big\{\frac{\beta}{2}\sum_{i\ne j}\ln|\lambda_i-\lambda_j|-\beta N\sum_iV(\lambda_i)\Big\}\,d\lambda\;\;\text{"}=\text{"}\;\;\frac{1}{Z^{\beta,V}_N}\exp\big\{-\beta N^2\,\mathcal E(\hat\mu^N)\big\}$$
where $\hat\mu^N$ is the empirical measure (of total mass 1), and for any probability measure $\mu$ on the real line we denote by $\mathcal E$ the energy
$$\mathcal E(\mu)=\int\!\!\int\Big[\frac12V(x)+\frac12V(y)-\frac12\ln|x-y|\Big]\,d\mu(x)\,d\mu(y)$$
(the "=" is in quotes because we have discarded the diagonal "self-interaction" terms, on which $\ln|x-y|$ is not defined for Dirac masses).

Assumption 4.2. Assume that $\liminf_{|x|\to\infty}\frac{V(x)}{\ln(|x|)}>1$ (i.e. $V(x)$ goes to infinity fast enough to dominate the logarithmic term at infinity) and that $V$ is continuous.

Theorem 4.3. If Assumption 4.2 holds, the empirical measure converges almost surely for the weak topology,
$$\hat\mu^N\Rightarrow\mu^{eq}_V\quad\text{a.s.},$$
where $\mu^{eq}_V$ is the equilibrium measure for $V$, namely the minimizer of $\mathcal E(\mu)$.

One can derive this convergence from a related large deviation principle [8] that we now state.

Theorem 4.4. If Assumption 4.2 holds, the law of $\hat\mu^N$ under $\mathbb P^{\beta,V}_N$ satisfies a large deviation principle with speed $N^2$ and good rate function
$$I(\mu)=\beta\,\mathcal E(\mu)-\beta\inf_{\nu\in\mathcal P(\mathbb R)}\mathcal E(\nu).$$
In other words, $I$ has compact level sets and for any closed set $F$ of $\mathcal P(\mathbb R)$,
$$\limsup_{N\to\infty}\frac{1}{N^2}\ln\mathbb P^{\beta,V}_N(\hat\mu^N\in F)\le-\inf_FI,$$

  44. whereas for any open set O of P ( R ) , 1 N 2 ln P β,V lim inf (ˆ µ N ∈ O ) ≥ − inf O I N N →∞ To deduce the convergence of the empirical measure, we first prove the ex- istence and uniqueness of the minimizers of E . Lemma 4.5. Suppose Assumption 4.2 holds, then : • There exists a unique minimizer µ eq V to E . It is characterized by the fact that there exists a finite constant C V such that the effective potential ˆ ln | x − y | d µ eq V eff ( x ) := V ( x ) − V ( y ) − C V vanishes on the support of µ eq V and is non negative everywhere. • For any probability measure µ , we have the decomposition � � ˆ ∞ 2 � � ds ˆ ˆ E ( µ ) = E ( µ eq � e isx d ( µ − µ eq � V eff ( x ) dµ ( x ) . (34) V ) + V )( x ) + � � s 0 Proof. We notice that with f ( x, y ) = 1 2 V ( x ) + 1 2 V ( y ) − 1 2 ln | x − y | , ˆ ˆ E ( µ ) = f ( x, y ) dµ ( x ) dµ ( y ) = sup f ( x, y ) ∧ Mdµ ( x ) dµ ( y ) M ≥ 0 by monotone convergence theorem. Observe also that the growth assumption we made on V insures that there exists γ > 0 and C > −∞ such that (35) f ( x, y ) ≥ γ (ln( | x | + 1) + ln( | y | + 1)) + C , so that f ∧ M is a bounded continuous function. Hence, E is the supremum of the bounded continuous functions E M ( µ ) := ´ ´ f ( x, y ) ∧ Mdµ ( x ) dµ ( y ) , defined on the set P ( R ) of probability measures on R , equipped with the weak topology. Hence E is lower semi-continuous. Moreover, the lower bound (35) on f yields � ˆ � ln( | x | + 1) dµ ( x ) ≤ M − C L M := { µ ∈ P ( R ) : E ( µ ) ≤ M } ⊂ =: K M 2 γ (36) where K M is compact. Hence, since L M is closed by lower semi-continuity of E we conclude that L M is compact for any real number M . This implies that E achieves its minimal value. Let µ eq V be a minimizer. Writing that E ( µ eq V + ǫν ) ≥ E ( µ eq V ) for any measure ν with zero mass so that µ eq V + ǫν is positive for ǫ small enough gives the announced characterization in terms of the effective potential V eff . For the second point, take µ with E ( µ ) < ∞ and write ˆ ln | . − y | d µ eq V = V eff + V ( y ) + C V 47

  45. so that V ) − 1 ˆ ˆ ˆ E ( µ ) = E ( µ eq ln | x − y | d ( µ − µ eq V )( x ) d ( µ − µ eq V )( y ) + V eff ( x ) dµ ( x ) . 2 On the other hand, we have the following equality for all x, y ∈ R � � ˆ ∞ 1 2 t − e − | x − y | 2 e − 1 ln | x − y | = dt . 2 t 2 t 0 One can then argue [7] that for all probability measure µ with E ( µ ) < ∞ (in particular with no atoms), we can apply Fubini’s theorem and the fact that µ − µ eq V is massless, to show that ˆ ˆ ln | x − y | d ( µ − µ eq V )( x ) d ( µ − µ eq Σ( µ ) := V )( y ) ˆ ∞ 1 ˆ ˆ e − | x − y | 2 d ( µ − µ eq V )( x ) d ( µ − µ eq = − V )( y ) dt 2 t 2 t 0 2 tλ 2 � � ˆ ∞ 2 � � 1 ˆ ˆ e − 1 � e iλx d ( µ − µ eq � − √ = V )( x ) dλdt � � 2 2 πt 0 � � 2 dy ˆ ∞ � � ˆ � e iyx d ( µ − µ eq � = − V )( x ) � � y 0 This term is concave non-positive in the measure µ as it is quadratic in µ , and in fact non degenerate as it vanishes only when all Fourier transforms of µ equal those of µ eq V , implying that µ = µ eq V . Therefore E is as well strictly convex as it differs from this function only by a linear term. Its minimizer is thus unique. ⋄ Remark 4.6. Note that the characterization of µ eq V implies that it is compactly supported as V eff goes to infinity at infinity. Remark 4.7. It can be shown that the equilibrium measure has a bounded density with respect to Lebesgue measure if V is C 2 . Indeed, if f is C 1 from R → R and ε small enough so that ϕ ε ( x ) = x + εf ( x ) is a bijection, we know that I ( ϕ ε # µ eq V ) ≥ I ( µ eq V ) , where we denoted by ϕ # µ the pushforward of µ by ϕ given, for any test function g , by : ˆ ˆ g ( y ) dϕ # µ ( y ) = g ( ϕ ( x )) dµ ( x ) . As a consequence, we deduce by arguing that the term linear in ε must vanish that ˆ ˆ f ( x ) − f ( y ) 1 ˆ dµ eq V ( x ) dµ eq V ′ ( x ) f ( x ) dµ eq V ( y ) = V ( x ) . x − y 2 48

  46. By linearity, we may now take f to be complex valued and given by f ( x ) = ( z − x ) − 1 dµ eq ( z − x ) − 1 . We deduce that the Stieltjes transform S eq ( z ) = ´ V ( x ) satisfies ˆ V ′ ( x ) 1 2 S eq ( z ) 2 = z − x dµ eq V ( x ) = S eq ( z ) V ′ ( ℜ ( z )) + f ( z ) with ˆ V ′ ( x ) − V ′ ( ℜ ( z )) dµ eq f ( z ) = V ( x ) . z − x f is bounded on compacts if V is C 2 . Moreover, we deduce that � V ′ ( ℜ ( z )) 2 + 2 f ( z ) . S ( z ) = V ′ ( ℜ ( z )) − But we can now let z going to the real axis and we deduce from Theorem ?? � V ′ ( x ) 2 − 4 f ( x ) . that µ eq V has bounded density Note also that it follows, since V ′ ( x ) 2 − 4 f ( x ) is smooth that when the density V vanishes at a it vanishes like | x − a | q/ 2 for some integer number q ≥ 1 . of µ eq Because the proof of the large deviation principle will be roughly the same in the discrete case, we detail it here. Proof of Theorem 4.4 We first consider the non-normalized measure     d Q β,V � � 1 N = exp 2 β ln | λ i − λ j | − βN V ( λ i ) d λ   i � = j and prove that it satisfies a weak large deviation principle, that is that for any probability measure µ , 1 N 2 ln Q β,V − β E ( µ ) = lim sup lim sup N ( d (ˆ µ N , µ ) < δ ) δ → 0 N →∞ 1 N 2 ln Q β,V = lim inf δ → 0 lim inf N ( d (ˆ µ N , µ ) < δ ) N →∞ where d is a distance compatible with the weak topology, such as the Vasershtein distance. To prove the upper bound observe that for any M > 0 µ N ( y ) � ˆ e − βN 2 ´ µ N ( x ) d ˆ Q β,V x � = y f ( x,y ) ∧ Md ˆ e − βV ( λ i ) dλ i N ( d (ˆ µ N , µ ) < δ ) ≤ d (ˆ µ N ,µ ) <δ µ N ( y ) � ˆ e − βN 2 ´ µ N ( x ) d ˆ e βNM f ( x,y ) ∧ Md ˆ e − βV ( λ i ) dλ i = µ N ,µ ) <δ d (ˆ where in the first line we used that the λ i are almost surely distinct. Now, using that for any finite M , E M is continuous, we get ˆ e βNM e − βN 2 E M ( µ )+ N 2 o ( δ ) ( Q β,V e − βV ( λ ) dλ ) N N ( d (ˆ µ N , µ ) < δ ) ≤ 49

  47. Taking first the limit N going to infinity, then δ going to zero and finally M going to infinity yields 1 N 2 ln Q β,V µ N , µ ) < δ ) ≤ − β E ( µ ) . lim sup lim sup N ( d (ˆ δ → 0 N →∞ To get the lower bound, we may choose µ with no atoms as otherwise E ( µ ) = + ∞ . We can also assume µ compactly supported, as we can approximate it by µ M ( dx ) = 1 | x |≤ M dµ/µ ([ − M, M ]) and it is not hard to see that E ( µ M ) goes to E ( µ ) as M goes to infinity. Let x i be the i th classical location of the particles given by µ (( −∞ , x i ]) = i/N . x i < x i +1 and we have for N large enough and p > 0 , if u i = λ i − x i , Ω = ∩ i {| u i | ≤ N − p , u i ≤ u i +1 } ⊂ { d (ˆ µ N , µ ) < δ } so that we get the lower bound N � � ˆ Q β,V | x i − x j + u i − u j | β N ( d (ˆ µ N , µ ) < δ ) ≥ exp ( − NβV ( x i + u i )) du i i>j i =1 Ω Observe that by our ordering of x and u , we have | x i − x j + u i − u j | ≥ max {| x i − x j | , | u i − u j |} and therefore � � | x i − x j | β � | x i +1 − x i | β/ 2 � | x i − x j + u i − u j | β ≥ | u i +1 − u i | β/ 2 i>j i>j +1 i i where for i > j + 1 ˆ x i ˆ x j +1 ln | x i − x j | ≥ ln | x − y | dµ ( x ) dµ ( y ) x i − 1 x j whereas ˆ x i ˆ x i 1 x>y ln | x − y | dµ eq V ( x ) dµ eq ln | x i − x i − 1 | ≥ 2 V ( y ) . x i − 1 x i − 1 We deduce that � � ln | x i +1 − x i | ≥ N 2 ln | x i − x j | + 1 ˆ ˆ ln | x − y | dµ eq V ( x ) dµ eq V ( y ) . 2 2 i>j +1 i Moreover, V is continuous and µ compactly supported, so that � N � N 1 V ( x i + u i ) = 1 (37) V ( x i ) + o (1) . N N i =1 i =1 Hence, we conclude that � | u i +1 − u i | β/ 2 � ˆ Q β,V exp {− βN 2 E ( µ ) } N ( d (ˆ µ N , µ ) < δ ) ≥ du i Ω i exp {− βN 2 E ( µ ) + o ( N 2 ) } (38) ≥ 50

  48. which gives the lower bound. To conclude, it is enough to prove exponential tightness. But with K M as in (36) we have by (35) µ N ( x ) − CN 2 � ˆ Q β,V dλ i ≤ e N 2 ( C ′ − 2 γM ) N ( K c e − 2 γN ( N − 1) ´ ln( | x | +1) d ˆ M ) ≤ K c M with some finite constant C ′ independent of M . Hence, exponential tightness follows : 1 N 2 ln Q β,V N ( K c lim sup lim sup M ) = −∞ M →∞ N →∞ from which we deduce a full large deviation principle for Q β,V and taking F = O N be the whole set of probability measures, we get in particular that 1 N 2 ln Z β,V lim = − β inf E . N N →∞ ⋄ We also have large deviations from the support : the probability that some eigenvalue is away from the support of the equilibrium measure decays exponen- tially fast if V eff is positive there. This was proven for the quadratic potential in [6], then in [3] but with the implicit assumption that there is convergence of the support of the eigenvalues towards the support of the limiting equilibrium measure. In [11, 15], it was proved that large deviations estimate for the sup- port hold in great generality. Hence, if the effective potential is positive outside of the support S of the equilibrium measure, there is no eigenvalue at positive distance of the support with exponentially large probability. It was shown in [48] that if the effective potential is not strictly positive outside of the support of the limiting measure, eigenvalues may deviate towards the points where it vanishes. For completeness, we summarize the proof of this large deviation principle below. Theorem 4.8. Let S be the support of µ eq V . Assume Assumption 4.2 and that V is C 2 . Then, for any closed set F in S c 1 N ln P β,V lim sup ( ∃ i ∈ { 1 , N } : λ i ∈ F ) ≤ − inf F V eff , N N →∞ whereas for any open set O ⊂ S c 1 N ln P β,V lim inf ( ∃ i ∈ { 1 , N } : λ i ∈ O ) ≥ − inf O V eff . N N →∞ ln | x − y | dµ eq Proof. Observe first that V eff is continuous as V is and x → ´ V ( y ) is continuous by Remark 4.7. Hence, as V eff goes to infinity at infinity, it is a good rate function. We shall use the representation Υ β,V λ i ∈ F ] ≤ N Υ β,V � N ( F ) N ( F ) ≤ P β,V (39) ∃ i N Υ β,V Υ β,V N ( R ) N ( R ) 51

  49. where, for any measurable set X : � �� � ˆ β, NV − Nβ V ( ξ )+( N − 1) β ´ d ˆ µ N − 1 ( λ ) ln | ξ − λ | Υ β,V N − 1 (40) N ( X ) = P dξ e . N − 1 X N ln Υ β,V 1 We shall hereafter estimate N ( X ) . We first prove a lower bound for Υ β,V N ( X ) with X open. For any x ∈ X we can find ε > 0 such that ( x − ε, x + ε ) ⊂ X . Let δ ε ( V ) = sup {| V ( x ) − V ( y ) | , | x − y | ≤ ε } . Using twice Jensen inequality, we lower bound Υ β,V N ( X ) by � �� � ˆ x + ε β, NV − Nβ V ( ξ )+( N − 1) β ´ d ˆ µ N − 1 ( η ) ln | ξ − η | N − 1 ≥ P dξe N − 1 x − ε � �� � ˆ x + ε � � ´ β, NV ( N − 1) β d ˆ µ N − 1 ( λ ) ln | ξ − λ | e − Nβ V ( x )+ δ ε ( V ) ≥ P N − 1 dξe N − 1 x − ε � � ´ �� � � β, NV N − 1 ( N − 1) β P d ˆ µ N − 1 ( λ ) H x,ε ( λ ) 2 ε e − Nβ V ( x )+ δ ε ( V ) N − 1 ≥ e � � ´ �� � � β, NV N − 1 ( N − 1) β P d ˆ µ N − 1 ( λ ) φ x,K ( λ ) H x,ε ( λ ) 2 ε e − Nβ V ( x )+ δ ε ( V ) N − 1 (41) ≥ e where we have set : ˆ x + ε dξ (42) H x,ε ( λ ) = 2 ε ln | ξ − λ | x − ε and φ x,K is a continuous function which vanishes outside of a large compact K including the support of µ eq V , is equal to one on a ball around x with radius 1+ ε (note that H is non-negative outside [ x − (1 + ε ) , x + 1 + ε ] resulting with the lower bound (41)) and on the support of µ eq V , and takes values in [0 , 1] . For any fixed ε > 0 , φ x,K H x,ε is bounded continuous, so we have by Theorem 4.4 (note that it applies as well when the potential depends on N as soon as it converges uniformly on compacts) that : � � � � dµ eq ( N − 1) β ´ V ( λ ) φ x,K ( λ ) H x,ε ( λ )+ NR ( ε,N ) N ( X ) ≥ 2 ε e − Nβ Υ β,V V ( x )+ δ ε ( V ) (43) e 2 with lim N →∞ R ( ε, N ) = 0 for all ε > 0 . Letting N → ∞ , we deduce since dµ eq dµ eq ´ ´ V ( λ ) H x,ε ( λ ) that : V ( λ ) φ x,K ( λ ) H x,ε ( λ ) = � � 1 ˆ N ln Υ β,V dµ eq N ( X ) ≥ − β δ ε ( V ) − β V ( x ) − (44) lim inf V ( λ ) H x,ε ( λ ) N →∞ dµ eq Exchanging the integration over ξ and x , observing that ξ → ´ V ( λ ) ln | ξ − λ | is continuous by Remark 4.7 and then letting ε → 0 , we conclude that for all x ∈ X , 1 N ln Υ β,V (45) lim inf N ( X ) ≥ − β V eff ( x ) . N →∞ 52

  50. We finally optimize over x ∈ X to get the desired lower bound. To prove the upper bound, we note that for any M > 0 , � | ξ − λ | ,M − 1 ��� � ˆ � β, NV − Nβ V ( ξ )+( N − 1) β ´ d ˆ µ N − 1 ( λ ) ln max Υ β,V N ( X ) ≤ P N − 1 dξ e . N − 1 X Observe that there exists C 0 and c > 0 and d finite such that for | ξ | larger than C 0 : � | ξ − λ | , M − 1 � ˆ W µ ( ξ ) = V ( ξ ) − ≥ c ln | ξ | + d dµ ( λ ) ln max by the confinement Hypothesis (4.2), and this for all probability measures µ on R . As a consequence, if X ⊂ [ − C, C ] c for some C large enough, we deduce that : ˆ 2 ( c ln | ξ | + d ) ≤ e − N β dξe − ( N − 1) β Υ β,V 4 c ln C (46) N ( X ) ≤ X where the last bound holds for N large enough. Combining (45), (46) and (39) shows that � 1 N ln P β,V ∃ i | λ i | ≥ C ] = −∞ . lim sup lim sup N C →∞ N →∞ Hence, we may restrict ourselves to X bounded. Moreover, the same bound β, NV extends to P N − 1 so that we can restrict the expectation over ˆ µ N − 1 to prob- N − 1 ability measures supported on [ − C, C ] up to an arbitrary small error e − Ne ( C ) , provided C is large enough and where e ( C ) goes to infinity with C . Recall � | ξ − λ | , M − 1 � also that V ( ξ ) − 2 is uniformly bounded from ´ d ˆ µ N − 1 ( λ ) ln max � | ξ − λ | , M − 1 � below by a constant D . As λ → ln max is bounded continuous on compacts, we can use the large deviation principles of Theorem 4.4 to deduce that for any ε > 0 , any C ≥ C 0 , e N 2 ˜ R ( ε,N,C ) + e − N ( e ( C ) − β Υ β,V 2 D ) (47) ≤ N ( X ) � � � | ξ − λ | ,M − 1 � dµ eq ˆ − NβV ( ξ )+( N − 1) β ´ V ( λ ) ln max + NMε + dξ e X with lim sup N →∞ ˜ R ( ε, N, C ) equals to 1 β, NV µ N − 1 , µ eq N − 1 ( { ˆ µ N − 1 ([ − C, C ]) = 1 } ∩ { d (ˆ V ) > ε } ) lim sup N 2 ln P < 0 . N − 1 N →∞ � | ξ − λ | , M − 1 � dµ eq Moreover, ξ → V ( ξ ) − ´ is bounded continuous so V ( λ ) ln max that a standard Laplace method yields, 1 N ln Υ β,V lim sup N ( X ) N →∞ � � � � | ξ − λ | , M − 1 ��� � ˆ , − ( e ( C ) − β dµ eq ≤ max − inf β V ( ξ ) − V ( λ ) ln max 2 D ) . ξ ∈ X 53

  51. We finally choose C large enough so that the first term is larger than the sec- � dµ eq ond, and conclude by monotone convergence theorem that ´ | ξ − V ( λ ) ln max λ | , M − 1 � dµ eq decreases as M goes to infinity towards ´ V ( λ ) ln | ξ − λ | . This com- pletes the proof of the large deviation. ⋄ Hereafter we shall assume that Assumption 4.9. V eff is positive outside S . Remark 4.10. As a consequence of Theorem 4.8, we see that up to exponen- tially small probabilities, we can modify the potential at a distance ǫ of the support. Later on, we will assume we did so in order that V ′ eff does not vanish outside S . In these notes we will also use that particles stay smaller than M for some M large enough with exponentially large probability. Theorem 4.11. Assume Assumption 4.2 holds. Then, there exists M finite so that 1 N ln P β,V lim sup ( ∃ i ∈ { 1 , . . . , N } : | λ i | ≥ M ) < 0 . N N →∞ Here, we do not need to assume that the effective potential is positive ev- erywhere, we only use it is large at infinity. The above shows that latter on, we can always change test functions outside of a large compact [ − M, M ] and hence that L 2 norms are comparable to L ∞ norms. 4.2 Concentration of measure We next define a distance on the set of probability measures on R which is well suited for our problem. Definition 4.12. For µ, µ ′ probability measures on R , we set � ˆ ∞ � 1 � � 2 dy � � 2 ˆ � � D ( µ, µ ′ ) = e iyx d ( µ − µ ′ )( x ) . � � y 0 It is easy to check that D defines a distance on P ( R ) (taking eventually the value + ∞ , for instance on measure with Dirac masses). Moreover, we have the following property Property 4.13. Let f ∈ L 1 ( dx ) such that ˆ f belongs to L 1 ( dt ) , and set � f � 1 / 2 = � ´ � 1 / 2 t | ˆ f t | 2 dt . • Assume also f continuous. Then for any probability measures µ, µ ′ � � � � ˆ � f ( x ) d ( µ − µ ′ )( x ) � � ≤ 2 � f � 1 / 2 D ( µ, µ ′ ) . � 54

  52. • Assume moreover f, f ′ ∈ L 2 . Then � f � 1 / 2 ≤ 2( � f � L 2 + � f ′ � L 2 ) . (48) Proof. For the first point we just use inverse Fourier transform and Fubini to write that ˆ ˆ f ( x ) d ( µ − µ ′ )( x ) | f t � ˆ | = | µ − µ ′ t dt | ˆ ∞ t 1 / 2 | ˆ f t | t − 1 / 2 | � µ − µ ′ t | dt ≤ 2 D ( µ, µ ′ ) � f � 1 / 2 ≤ 2 0 where we finally used Cauchy-Schwarz inequality. For the second point, we observe that ˆ ∞ f t | 2 dt ≤ 1 f t | 2 dt ) = π ˆ ˆ � f � 2 t | ˆ | ˆ f t | 2 dt + | t ˆ 2 ( � f � 2 L 2 + � f ′ � L 2 ) 1 / 2 = 2( 0 from which the result follows. ⋄ � N µ N = 1 We are going to show that ˆ i =1 δ λ i satisfies concentration inequal- N µ N and µ eq ities for the D -distance. However, the distance between ˆ V is infinite as ˆ µ N has atoms. Hence, we are going to regularize ˆ µ N so that it has finite energy, following an idea of Maurel-Segala and Maida [68]. First define ˜ λ by λ 1 = λ 1 and ˜ ˜ λ i = ˜ λ i − 1 + max { σ N , λ i − λ i − 1 } where σ N will be chosen to be Remark that ˜ λ i − ˜ λ i − 1 ≥ σ N whereas | λ i − ˜ like N − p . λ i | ≤ Nσ N . Define � 1 � � δ ˜ where U i are independent and equi-distributed random µ N = E U ˜ N λ i + U i variables uniformly distributed on [0 , N − q ] (i.e. we smooth the measure by putting little rectangles instead of Dirac masses and make sure that the eigen- values are at least distance N − p apart). For further use, observe that we have λ i + U i − λ i | ≤ N 1 − p + N − q . In the sequel we will take q = p + 1 so uniformly | ˜ that the first error term dominates. Then we claim that Lemma 4.14. Assume V is C 1 . For 3 < p + 1 ≤ q there exists C p,q finite and c > 0 such that V ) ≥ t ) ≤ e C p,q N ln N − βN 2 t 2 + e − cN P β,V µ N , µ eq ( D (˜ N Remark 4.15. Using that the logarithm is a Coulomb interaction, Serfaty et al could improve the above bounds to get the exact exponent in the term in N ln N , as well as the term in N . This allows to prove central limit theorems under weaker conditions. Our approach seems however more robust and extends to more general interactions [15]. Corollary 4.16. Assume V is C 1 . For all q > 2 there exists C finite and c, c 0 > 0 such that � � � � � � 1 ˆ � µ N − µ eq � ϕ d (ˆ ≤ e − cN N − q +2 � ϕ � L + c 0 N − 1 / 2 √ � ≥ 1 P sup V ) � ln N � ϕ � 1 ϕ 2 55

  53. Moreover � � ˆ ϕ ( x ) − ϕ ( y ) � � � ≤ C 1 µ N − µ eq µ N − µ eq � � (49) d (ˆ V )( x ) d (ˆ V )( y ) N ln N � ϕ � C 2 � x − y | ϕ ( x ) − ϕ ( y ) | with probability greater than 1 − e − cN . Here � ϕ � L = sup x � = y and | x − y | � ϕ � C k = � ℓ ≤ k � ϕ ( ℓ ) � ∞ . Note that we can modify ϕ outside a large set [ − M.M ] up to modify the constant c . Proof. We take q = p + 1 . The triangle inequality yields : � � � � ˆ ˆ ˆ µ N − µ eq � µ N − µ eq � ϕ d (ˆ ϕ d (ˆ ϕ d (˜ | V ) | = µ N − ˜ µ N ) + V ) � � � � � � N � 1 ˆ � � � E U [ ϕ ( λ i ) − ϕ (˜ µ N − µ eq ≤ � + | V )( λ ) dλ | � λ i + U )] � ϕ ( λ ) ˆ (˜ � N i =1 � ϕ � L N − q +2 + 2 � ϕ � 1 µ N , µ eq ≤ 2 D (˜ V ) λ i | is bounded by N − p +1 and U by N − q and used where we noticed that | λ i − ˜ Cauchy-Schwartz inequality. We finally use (48) to see that on {| λ i | ≤ M } we have by the previous lemma that for all ϕ � � � � ˆ � µ N − µ eq � � ≤ N − p +1 � ϕ � L + t � ϕ � 1 ϕ d (ˆ V ) � 2 � with probability greater than 1 − e C p,q N ln N − β 2 N 2 t 2 . We next choose t = c 0 ln N/N 0 = 4 | C p,q | /β so that this probability is greater than 1 − e − c 2 with c 2 0 / 2 N ln N . Theorem 4.11 completes the proof of the first point since it shows that the probability that one eigenvalue is greater than M decays exponentially fast. We next consider ˆ φ ( x ) − φ ( y ) µ N − µ eq µ N − µ eq L N ( φ ) := d (ˆ V )( x ) d (ˆ V )( y ) x − y on { max | λ i | ≤ M } . Hence we can replace φ by φχ M where χ M is a smooth function, equal to one on [ − M, M ] and vanishing outside [ − M − 1 , M +1] . Hence assume that φ is compactly supported. If we denote by ˜ L N ( φ ) the quantity µ N we have that defined as L N ( φ ) but with ˜ µ N instead of ˆ � � � � � ≤ 2 � φ (2) � ∞ N − q +2 . � ˜ L N ( φ ) − L N ( φ ) We can now replace φ by its Fourier representation to find that ˆ 1 ˆ ˆ ˆ ˜ dtit ˆ µ N − µ eq µ N − µ eq e iαtx d (˜ e i (1 − α ) tx d (˜ L N ( φ ) = φ ( t ) dα V )( x ) V )( x ) . 0 56

  54. We can then use Cauchy-Schwartz inequality to deduce that ˆ 1 ˆ ˆ | ˜ dt | t ˆ e iαtx d (˜ µ N − µ eq V )( x ) | 2 L N ( φ ) | ≤ φ ( t ) | dα | 0 ˆ 1 tdα ˆ ˆ dt | t ˆ µ N − µ eq e iαtx d (˜ V )( x ) | 2 = φ ( t ) | tα | 0 ˆ dt | t ˆ µ N , µ eq V ) 2 ≤ φ ( t ) | D (˜ µ N , µ eq V ) 2 � φ � C 2 ≤ (50) CD (˜ where we noticed that � ˆ � 1 / 2 � ˆ � 1 / 2 ˆ dt | t ˆ dt | t ˆ φ ( t ) | 2 (1 + t 2 ) dt (1 + t 2 ) − 1 φ ( t ) | ≤ C ( � φ (2) � L 2 + � φ ′ � L 2 ) ≤ C � φ � C 2 ≤ as we compactified φ . The conclusion follows from Theorem 4.11. ⋄ We next prove Lemma 4.14. We first show that : � � Z β,V − N 2 β E ( µ eq ≥ exp V ) + CN ln N N The proof is exactly as in the proof of the large deviation lower bound of The- orem 4.4 except we take µ = µ eq V and V is C 1 , so that N N � � 1 V ( x i + u i ) = 1 V ( x i ) + O ( 1 N ) . N N i =1 i =1 This allows to improve the lower bound (38) into Z β,V Q β,V µ N , µ eq ≥ N ( d (ˆ V ) < δ ) N exp {− βN 2 E ( µ eq (51) ≥ V ) + CN ln N } Now consider the unnormalized density of Q β,V = Z β,V P β,V on the set N N N where | λ i | ≤ M for all i � � � � dQ β,V N ( λ ) | λ i − λ j | β exp = − Nβ V ( λ i ) dλ i<j � � � � � � β � � � ˜ λ i − ˜ V (˜ ≤ λ j � exp − Nβ λ i ) i<j because the ˜ λ only increased the differences. Observe that for | λ i | ≤ M , | V ′ ( x ) | ( N 1 − p + N − q ) . | V ( λ i ) − V (˜ λ i + U i ) | ≤ sup | x |≤ M +1 57

  55. Moreover for each j > i � � � � � � � � � ˜ λ i − ˜ � ˜ λ i + u i − ˜ � + O ( N − q + p ) . ln λ j � = E ln λ j − u j Hence, we deduce that on | λ i | ≤ M for all i , there exists a finite constant C such that d P β,V � V )) + CN ln N + CN 2 − q + p + CN 3 − p � − N 2 β ( E (˜ µ N ) − E ( µ eq N ≤ exp d λ As we chose q = p + 1 , p > 2 , the error is at most of order N ln N . We now use the fact that ˆ V ) 2 + µ N ) − E ( µ eq µ N , µ eq µ N − µ eq E (˜ V ) = D (˜ ( V eff )( x ) d (˜ V )( x ) where the last term is non-negative, and Theorem 4.11, to conclude V ≥ t } ∩ { max | λ i | ≤ M } ) ≤ e CN ln N − βN 2 t 2 � ˆ � N P β,V µ N , µ eq e − NβV eff ( x ) dx ( { D (˜ N where the last integral is bounded by a constant as V eff is non-negative and goes to infinity at infinity faster than logarithmically. We finally remove the cutoff by M thanks to Theorem 4.11. 4.3 The Dyson-Schwinger equations 4.3.1 Goal and strategy We want to show that for sufficiently smooth functions f that • � 1 � K � � N g c g ( f ) + o ( 1 1 = µ eq E f ( λ i ) V ( f ) + N K ) N g =1 • � f ( λ i ) − E [ � f ( λ i )] converges to a centered Gaussian. We will provide two approaches, one which deals with general functions and a second one, closer to what we will do for discrete β ensembles, where we will restrict ourselves to Stieltjes transform f ( x ) = ( z − x ) − 1 for z ∈ C \ R , which in fact gives these results for all analytic function f by Cauchy formula. The present approach allows to consider sufficiently smooth functions but we will not try to get the optimal smoothness. We will as well restrict ourselves to K = 2 , but the strategy is similar to get higher order expansion. The strategy is similar to the case of the GUE : 58

• We derive a set of equations, the Dyson-Schwinger equations, for our observables (the correlation functions, that is, moments of the empirical measure or moments of the Stieltjes transform): it is an infinite system of equations, a priori not closed. However, it will turn out that asymptotically it can be closed.

• We linearize the equations around the limit. This takes the form of a linearized operator, acting on our observables, being equal to observables of smaller magnitude. Inverting this linear operator is then the key to improving the concentration bounds, starting from the already known concentration bounds of Corollary 4.16.

• Using optimal bounds on our observables and the inversion of the master operator, we recursively obtain their large $N$ expansion.

• As a consequence, we derive the central limit theorem.

4.3.2 Dyson-Schwinger Equation

Hereafter we set $M_N=N(\hat\mu_N-\mu_V)$. We let $\Xi$ be defined on the set of $C^1_b(\mathbb R)$ functions by
$$\Xi f(x)=V'(x)f(x)-\int\frac{f(x)-f(y)}{x-y}\,d\mu_V(y).$$
$\Xi$ will be called the master operator. The Dyson-Schwinger equations are given in the following lemma.

Lemma 4.17. Let $f_i:\mathbb R\to\mathbb R$ be $C^1_b$ functions, $0\le i\le K$. Then,
$$\mathbb E\Big[M_N(\Xi f_0)\prod_{i=1}^KN\hat\mu_N(f_i)\Big]=\Big(\frac1\beta-\frac12\Big)\mathbb E\Big[\hat\mu_N(f_0')\prod_{i=1}^KN\hat\mu_N(f_i)\Big]+\frac1\beta\sum_{\ell=1}^K\mathbb E\Big[\hat\mu_N(f_0f_\ell')\prod_{i\ne\ell}N\hat\mu_N(f_i)\Big]+\frac1{2N}\mathbb E\Big[\int\frac{f_0(x)-f_0(y)}{x-y}\,dM_N(x)\,dM_N(y)\prod_{i=1}^KN\hat\mu_N(f_i)\Big].$$

Proof. This lemma is a direct consequence of integration by parts, which implies that for all $j$,
$$\frac1\beta\,\mathbb E\Big[f_0'(\lambda_j)\prod_{i=1}^KN\hat\mu_N(f_i)\Big]=\mathbb E\Big[f_0(\lambda_j)\Big(NV'(\lambda_j)-\sum_{k\ne j}\frac1{\lambda_j-\lambda_k}\Big)\prod_{i=1}^KN\hat\mu_N(f_i)\Big]-\frac1\beta\sum_{\ell=1}^K\mathbb E\Big[f_0(\lambda_j)f_\ell'(\lambda_j)\prod_{i\ne\ell}N\hat\mu_N(f_i)\Big].$$

  57. Summing over j ∈ { 1 , . . . , N } and dividing by N yields �� � � K ˆ ˆ f 0 ( x ) − f 0 ( y ) � µ N ( V ′ f 0 ) − 1 βN E ˆ d ˆ µ N ( x ) d ˆ µ N ( y ) N ˆ µ N ( f i ) 2 x − y i =1 K K � � � (1 − β µ N ( f ′ µ N ( f 0 f ′ = 2 ) E [ˆ 0 ) N ˆ µ N ( f i )] + E [ˆ ℓ ) N ˆ µ N ( f i )] i =1 ℓ =1 i � = ℓ where we used that ( x − y ) − 1 ( f ( x ) − f ( y )) goes to f ′ ( x ) when y goes to x . We first take f ℓ = 1 for ℓ ∈ { 1 , . . . , K } and f 0 with compact support and deduce that as ˆ µ N goes to µ V almost surely as N goes to infinity, we have ˆ f 0 ( x ) − f 0 ( y ) µ V ( f 0 V ′ ) − 1 (52) dµ V ( x ) dµ V ( y ) = 0 . 2 x − y This implies that µ V has compact support and hence the formula is valid for all f 0 . We then linearize around µ V to get the announced lemma. ⋄ The central point is therefore to invert the master operator Ξ . We follow a lemma from [5]. For a function h : R → R , we recall that � h � C j ( R ) := � j r =0 � h ( r ) � L ∞ ( R ) , where h ( r ) denotes the r -th derivative of h . Lemma 4.18. Given V : R → R , assume that µ eq V has support given by [ a, b ] and that � dµ V dx ( x ) = S ( x ) ( x − a )( b − x ) with S ( x ) ≥ ¯ c > 0 a.e. on [ a, b ] . Let g : R → R be a C k function and assume that V is of class C p . Then there exists a unique constant c g such that the equation Ξ f ( x ) = g ( x ) + c g has a solution of class C ( k − 2) ∧ ( p − 3) . More precisely, for j ≤ ( k − 2) ∧ ( p − 3) there is a finite constant C j such that (53) � f � C j ( R ) ≤ C j � g � C j +2 ( R ) , where, for a function h , � h � C j ( R ) := � j r =0 � h ( r ) � L ∞ ( R ) . This solution will be denoted by Ξ − 1 g . It is C k if g is C k +2 and p ≥ k + 1 . It decreases at infinity like | V ′ ( x ) x | − 1 . Remark 4.19. The inverse of the operator Ξ can be computed, see [5]. For x ∈ [ a, b ] we have that Ξ − 1 g ( x ) equals � ˆ b � � � � 1 ( y − a )( b − y ) g ( y ) − g ( x ) x − a + b dy − π ( g ( x ) + c g ) + c 2 , β ( x − a )( b − x ) S ( x ) y − x 2 a 60

The central point is therefore to invert the master operator $\Xi$. We follow a lemma from [5]. For a function $h:\mathbb{R}\to\mathbb{R}$ we recall that $\|h\|_{C^j(\mathbb{R})} := \sum_{r=0}^j \|h^{(r)}\|_{L^\infty(\mathbb{R})}$, where $h^{(r)}$ denotes the $r$-th derivative of $h$.

Lemma 4.18. Given $V:\mathbb{R}\to\mathbb{R}$, assume that $\mu^{eq}_V$ has support given by $[a,b]$ and that
$$\frac{d\mu_V}{dx}(x) = S(x)\sqrt{(x-a)(b-x)}$$
with $S(x)\ge\bar c>0$ a.e. on $[a,b]$. Let $g:\mathbb{R}\to\mathbb{R}$ be a $C^k$ function and assume that $V$ is of class $C^p$. Then there exists a unique constant $c_g$ such that the equation
$$\Xi f(x) = g(x)+c_g$$
has a solution of class $C^{(k-2)\wedge(p-3)}$. More precisely, for $j\le (k-2)\wedge(p-3)$ there is a finite constant $C_j$ such that
$$\|f\|_{C^j(\mathbb{R})} \le C_j\, \|g\|_{C^{j+2}(\mathbb{R})}\,. \qquad (53)$$
This solution will be denoted by $\Xi^{-1}g$. It is $C^k$ if $g$ is $C^{k+2}$ and $p\ge k+1$. It decreases at infinity like $|V'(x)x|^{-1}$.

Remark 4.19. The inverse of the operator $\Xi$ can be computed, see [5]. For $x\in[a,b]$, $\Xi^{-1}g(x)$ equals
$$\frac{1}{\beta\sqrt{(x-a)(b-x)}\,S(x)}\Big(\int_a^b \sqrt{(y-a)(b-y)}\,\frac{g(y)-g(x)}{y-x}\,dy - \pi\Big(x-\frac{a+b}{2}\Big)\big(g(x)+c_g\big) + c_2\Big),$$
where $c_g$ and $c_2$ are chosen so that $\Xi^{-1}g$ converges to finite constants at $a$ and $b$. We find that for $x\in S$,
$$\Xi^{-1}g(x) = \frac{1}{\beta S(x)}\,{\rm PV}\!\int_a^b \frac{g(y)}{(y-x)\sqrt{(y-a)(b-y)}}\,dy = \frac{1}{\beta S(x)}\int_a^b \frac{g(y)-g(x)}{(y-x)\sqrt{(y-a)(b-y)}}\,dy\,,$$
and outside of $S$, $f=\Xi^{-1}g$ is obtained by solving $\Xi f = g+c_g$ pointwise (see Remark 4.10),
$$f(x) = \Big(V'(x) - \int\frac{d\mu^{eq}_V(y)}{x-y}\Big)^{-1}\Big(g(x)+c_g - \int\frac{f(y)}{x-y}\,d\mu^{eq}_V(y)\Big).$$

Remark 4.20. Observe that by Remark 4.7, the density of $\mu^{eq}_V$ has to vanish at the boundary like $|x-a|^{q/2}$ for some $q\in\mathbb{N}$. Hence the only case where we can invert this operator is when $q=1$. Moreover, by the same remark, $S(x)\sqrt{(x-a)(b-x)}$ can be expressed in terms of $V'(x)$ and ${\rm PV}\int (x-y)^{-1}\,d\mu^{eq}_V(y)$, so that $S$ extends to the whole real line. Assuming that $S$ is positive on $[a,b]$, we see that it is positive in an open neighborhood of $[a,b]$ since it is smooth. We can assume without loss of generality that it is smooth everywhere, by the large deviation principle for the support. We will therefore assume hereafter that

Assumption 4.21. $V:\mathbb{R}\to\mathbb{R}$ is of class $C^p$ and $\mu^{eq}_V$ has support given by $[a,b]$, with
$$\frac{d\mu_V}{dx}(x) = S(x)\sqrt{(x-a)(b-x)}$$
and $S(x)\ge \bar c>0$ a.e. on $[a,b]$. Moreover, we assume that $(|V'(x)x|+1)^{-1}$ is integrable.

The first condition is necessary to invert $\Xi$ on all test functions (in critical cases, $\Xi$ may not be surjective). The second implies that $\Xi^{-1}f$ decays fast enough at infinity so that it belongs to $L^1$ (for $f$ smooth enough), so that we can use the Fourier inversion theorem.
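Before using $\Xi^{-1}$ in general, it may help to see the operator act in a case where the inverse can be guessed by hand. The following sketch is only an illustration, not part of the notes: for $V(x)=x^2/2$, with the semicircle equilibrium measure on $[-\sqrt2,\sqrt2]$, a direct computation gives $\Xi(x)=x^2-1$ and $\Xi(x^2)=x^3-x$, so that in the notation of Lemma 4.18 one has $\Xi^{-1}(x^2)=x$ with $c_g=-1$. The code applies $\Xi$ by quadrature and confirms these identities.

```python
import numpy as np

a, b = -np.sqrt(2.0), np.sqrt(2.0)
t, w = np.polynomial.legendre.leggauss(400)
y = 0.5 * (b - a) * t + 0.5 * (b + a)
mu = 0.5 * (b - a) * w * np.sqrt(np.maximum(2.0 - y ** 2, 0.0)) / np.pi
mu /= mu.sum()

def Xi(f, fprime, x):
    """Xi f(x) = V'(x) f(x) - int (f(x)-f(y))/(x-y) dmu_V(y), with V'(x) = x."""
    d = x - y
    ok = np.abs(d) > 1e-12
    quot = np.where(ok, (f(x) - f(y)) / np.where(ok, d, 1.0), fprime(x))
    return x * f(x) - np.sum(mu * quot)

for x0 in np.linspace(-1.2, 1.2, 5):
    print(Xi(lambda u: u, lambda u: 1.0, x0) - (x0 ** 2 - 1.0),           # ~ 0
          Xi(lambda u: u ** 2, lambda u: 2.0 * u, x0) - (x0 ** 3 - x0))   # ~ 0
```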

We then deduce from Lemma 4.17 the following.

Corollary 4.22. Assume Assumption 4.21 holds with $p\ge4$. Take $f_0$ $C^k$, $k\ge3$, and $f_i$ $C^1$. Let $g=\Xi^{-1}f_0$ be the $C^{k-2}$ function such that there exists a constant $c_g$ with $\Xi g = f_0 + c_g$. Then
$$\mathbb{E}\Big[\prod_{i=0}^K M_N(f_i)\Big] = \Big(\frac1\beta-\frac12\Big)\mathbb{E}\Big[\hat\mu_N\big((\Xi^{-1}f_0)'\big)\prod_{i=1}^K M_N(f_i)\Big] + \frac1\beta\sum_{\ell=1}^K \mathbb{E}\Big[\hat\mu_N\big(\Xi^{-1}f_0\, f_\ell'\big)\prod_{i\neq\ell} M_N(f_i)\Big] + \frac1{2N}\,\mathbb{E}\Big[\iint\frac{\Xi^{-1}f_0(x)-\Xi^{-1}f_0(y)}{x-y}\,dM_N(x)\,dM_N(y)\prod_{i=1}^K M_N(f_i)\Big].$$

4.3.3 Improving concentration inequalities

We are now ready to improve the concentration estimates we obtained in the previous section. We could do that by using the Dyson-Schwinger equations (this is what we will do in the discrete case), but in the continuous case there is a quicker way to proceed, by an infinitesimal change of variables.

Lemma 4.23. Take $g\in C^4$ and assume $p\ge4$. Then there exist universal finite constants $C_V$ and $c>0$ such that for all $M>0$,
$$\mathbb{P}^{\beta,V}_N\Big(\Big|\int g(x)\,d(\hat\mu_N-\mu^{eq}_V)(x)\Big| \ge \big(C_V\|g\|_{C^4}+M\big)\frac{\ln N}{N}\Big) \le e^{-cN} + N^{-M}.$$

Proof. Take $f$ compactly supported on a compact set $K$. Making the change of variables $\lambda_i = \lambda_i' + \frac1N f(\lambda_i')$, we see that $Z^{\beta,V}_N$ equals
$$\int \prod_{i<j}\Big|\lambda_i-\lambda_j+\frac1N\big(f(\lambda_i)-f(\lambda_j)\big)\Big|^\beta\, \prod_i e^{-N\beta V(\lambda_i+\frac1N f(\lambda_i))}\Big(1+\frac1N f'(\lambda_i)\Big)\,d\lambda_i\,. \qquad (54)$$
Observe that by Taylor's expansion there are $\theta_{ij}\in[0,1]$ such that
$$\prod_{i<j}\Big|\lambda_i-\lambda_j+\frac1N\big(f(\lambda_i)-f(\lambda_j)\big)\Big| = \prod_{i<j}|\lambda_i-\lambda_j|\;\exp\Big\{\sum_{i<j}\frac1N\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j} - \sum_{i<j}\frac{\theta_{ij}}{N^2}\Big(\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j}\Big)^2\Big\},$$
where the last term is bounded by $\|f'\|_\infty^2$. Similarly there exist $\theta_i\in[0,1]$ such that
$$V\Big(\lambda_i+\frac1N f(\lambda_i)\Big) = V(\lambda_i) + \frac1N f(\lambda_i)V'(\lambda_i) + \frac1{N^2}\, f(\lambda_i)^2\, V''\Big(\lambda_i+\frac{\theta_i}{N}f(\lambda_i)\Big),$$
where the last term is bounded for $N$ large enough by $C_K(V)\|f\|_\infty^2$ with $C_K=\sup_{d(x,K)\le1}|V''(x)|$. We deduce by expanding the right-hand side of (54) that
$$\int \exp\Big\{\frac\beta N\sum_{i<j}\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j} - \beta\sum_i V'(\lambda_i)f(\lambda_i)\Big\}\, d\mathbb{P}^{\beta,V}_N \le e^{\beta C_K\|f\|_\infty^2+\beta\|f'\|_\infty^2+\|f'\|_\infty}\,.$$

Using Chebychev's inequality we deduce that if $f$ is $C^1$ and compactly supported,
$$\mathbb{P}^{\beta,V}_N\Big(\Big|\frac1{2N}\sum_{i,j}\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j} - \sum_i V'(\lambda_i)f(\lambda_i)\Big| \ge M\ln N\Big) \le N^{-M}\, e^{C(f)} \qquad (55)$$
with $C(f) = C_K\|f\|_\infty^2+(\beta+1)(1+\|f'\|_\infty)^2$. But
$$\frac1{2N}\sum_{i,j}\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j} - \sum_i V'(\lambda_i)f(\lambda_i) = -N(\hat\mu_N-\mu)(\Xi f) + \frac N2\iint\frac{f(x)-f(y)}{x-y}\,d(\hat\mu_N-\mu^{eq}_V)(x)\,d(\hat\mu_N-\mu^{eq}_V)(y),$$
where, if $f$ is $C^2$, the last term is bounded by $C\|f\|_{C^2}\ln N$ with probability greater than $1-e^{-cN}$ by Corollary 4.16. Hence, we deduce from (55) that
$$\mathbb{P}^{\beta,V}_N\Big(\big|(\hat\mu_N-\mu)(\Xi f)\big| \ge M\frac{\ln N}{N}\Big) \le N^{C\|f\|_{C^2}-M}\,e^{C\|f\|_{C^1}^2} + e^{-cN},$$
and inverting $f$ by putting $g=\Xi f$ concludes the proof for $f$ with compact support. Again, using Theorem 4.11 allows to extend the result to $f$ with full support. ⋄

Exercise 4.24. Concentration estimates could as well be improved by using the Dyson-Schwinger equations. However, the Dyson-Schwinger equations force us to lose regularity at each step, since they require to invert the master operator; hence they require stronger regularity assumptions. Prove that if Assumption 4.21 holds with $p\ge12$ and $f$ is $C^k$ with $k\ge11$, then for $\ell=1,2$ there exists $C_\ell$ such that
$$\Big|\mathbb{E}\big[\big(N(\hat\mu_N-\mu^{eq}_V)(f)\big)^\ell\big]\Big| \le C_\ell\,\|f\|_{C^{3+4\ell}}\,\|f\|_{C^1}^{\mathbf 1_{\ell=2}}\,(\ln N)^{\frac{\ell+1}2}.$$
Hint: use the DS equations, concentration, invert the master operator and bootstrap if you do not get the best estimates at once.

Theorem 4.25. Suppose that Assumption 4.21 holds with $p\ge10$. Let $f$ be $C^k$ with $k\ge9$. Then
$$m_V(f) = \lim_{N\to\infty}\mathbb{E}\big[N(\hat\mu_N-\mu^{eq}_V)(f)\big] = \Big(\frac1\beta-\frac12\Big)\mu^{eq}_V\big[(\Xi^{-1}f)'\big].$$
Let $f_0, f_1$ be $C^k$ with $k\ge9$ and $p\ge12$. Then
$$C_V(f_0,f_1) = \lim_{N\to\infty}\mathbb{E}\big[M_N(f_0)M_N(f_1)\big] = m_V(f_0)\,m_V(f_1) + \frac1\beta\,\mu^{eq}_V\big(f_1'\,\Xi^{-1}f_0\big).$$

Remark 4.26. Notice that, as $C_V$ is symmetric, we can deduce that for any $f_0,f_1$ in $C^k$ with $k\ge9$,
$$\mu^{eq}_V\big(f_1'\,\Xi^{-1}f_0\big) = \mu^{eq}_V\big(f_0'\,\Xi^{-1}f_1\big).$$
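Theorem 4.25 can be illustrated by simulation in the quadratic case. The sketch below is only an illustration and relies on two assumptions not made in the notes: that the Dumitriu-Edelman tridiagonal model samples the $\beta$-Hermite ensemble, and that after rescaling by $\sqrt{\beta N}$ its points follow $\mathbb{P}^{\beta,V}_N$ with $V(x)=x^2/2$, whose equilibrium measure is the semicircle law on $[-\sqrt2,\sqrt2]$. For $f(x)=x^2$ one has $\Xi^{-1}f=x$ (as in the sketch above), so Theorem 4.25 predicts $m_V(f)=\frac1\beta-\frac12$ and limiting variance $\frac1\beta\mu^{eq}_V(f'\,\Xi^{-1}f)=\frac1\beta$; the empirical mean and variance of the centered linear statistic stay close to these values as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def rescaled_beta_hermite(n, beta):
    """Dumitriu-Edelman tridiagonal model; points are rescaled by sqrt(beta*n) so that
    (under the stated assumption) they follow P^{beta,V}_N with V(x) = x^2/2."""
    diag = rng.normal(0.0, 1.0, size=n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / np.sqrt(2.0)
    lam = np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
    return lam / np.sqrt(beta * n)

beta = 2.0
for n in (50, 100, 200):
    # centered linear statistic for f(x) = x^2; note int x^2 dmu_sc = 1/2
    stats = np.array([np.sum(rescaled_beta_hermite(n, beta) ** 2) - 0.5 * n
                      for _ in range(1000)])
    print(n, stats.mean(), stats.var())
# Theorem 4.25 predicts mean -> 1/beta - 1/2 = 0 and variance -> 1/beta = 0.5 here.
```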

  61. Proof. To prove the first convergence observe that ( 1 β − 1 µ N ((Ξ − 1 f 0 ) ′ )] (56) E [ M N ( f 0 )] = 2) E [ˆ ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) + 1 2 N E [ dM N ( x ) dM N ( y )] . x − y The first term converges to the desired limit as soon as (Ξ − 1 f 0 ) ′ is continuous. For the second term we can use the previous Lemma and the basic concentration estimate 4.16 to show that it is neglectable. The arguments are very similar to those used in the proof of Corollary 4.16 but we detail them for the last time. First, not that if χ M is the indicator function that all eigenvalues are bounded by M , we have by Theorem 4.11 that ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) dM N ( x ) dM N ( y )] | ≤ � Ξ − 1 f 0 � C 1 N 2 e − cN . | E [(1 − χ M ) x − y We therefore concentrate on the other term, up to modify Ξ − 1 f 0 outside [ − M, M ] so that it decays to zero as fast as wished and is as smooth as the original func- tion (it is enough to multiply it by a smooth cutoff function). In particular we may assume it belongs to L 2 and write its decomposition in terms of Fourier � transform. With some abuse of notations, we still denote (Ξ − 1 f 0 ) t the Fourier transform of this eventually modified function. Then, we have ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) | E [ χ M dM N ( x ) dM N ( y )] | x − y ˆ 1 ˆ | t � E [ χ M | M N ( e iαt. ) | 2 ] dαdt ≤ (Ξ − 1 f 0 ) t | 0 To bound the right hand side under the weakest possible hypothesis over f 0 , observe that by Corollary 4.16 applied on only one of the M N we have √ E [ χ M | M N ( e iαt. ) | 2 ] ≤ C N ln N | t | E [ | M N ( e iαt. ) | ] + N 2 e − cN (57) where again we used that even though e iαt. has infinite 1 / 2 norm, we can modify this function outside [ − M, M ] into a function with 1 / 2 norm of order | t | . We next use Lemma 4.23 to estimate the first term in (57) (with � e iαt. � C 4 of order | αt | 4 + 1 ) and deduce that : ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) | E [ χ M dM N ( x ) dM N ( y )] | x − y C (ln N ) 3 / 2 √ ˆ | t � (Ξ − 1 f 0 ) t || t | 5 dt ≤ N C (ln N ) 3 / 2 √ N � Ξ − 1 f 0 � C 7 ≤ C (ln N ) 3 / 2 √ ≤ N � f 0 � C 9 64

  62. Hence, we deduce that ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) dM N ( x ) dM N ( y )] | ≤ C (ln N ) 3 / 2 √ 1 − 1 � f 0 � C 9 N | E [ N x − y goes to zero if f 0 is C 9 . This proves the first claim. Similarly, for the covariance, we use Corollary 4.22 with p = 1 to find that for f 0 , f 1 C k , µ N − µ eq C N ( f 0 , f 1 ) = E [( N (ˆ V ))( f 0 ) M N ( f 1 )] ( 1 β − 1 V ((Ξ − 1 f 0 ) ′ ) E [ M N ( f 1 )] + 1 2) µ eq β µ eq V (Ξ − 1 f 0 f ′ = 1 ) β − 1 +( 1 µ N − µ eq V )((Ξ − 1 f 0 ) ′ ) M N ( f 1 )] 2) E [(ˆ + 1 µ N − µ eq V )(Ξ − 1 f 0 f ′ (58) β E [(ˆ 1 )] ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) + 1 2 N E [ dM N ( x ) dM N ( y ) M N ( f 1 )] x − y The first line converges towards the desired limit. The second goes to zero as soon as (Ξ − 1 f 0 ) ′ is C 1 and f 1 is C 4 , as well as the third line. Finally, we can bound the last term by using twice Lemma 4.23, Cauchy-Schwartz and the basic concentration estimate once ˆ Ξ − 1 f 0 ( x ) − Ξ − 1 f 0 ( y ) | E [ dM N ( x ) dM N ( y ) M N ( f 1 )] | x − y C (ln N ) 5 / 2 √ ˆ dt | � (Ξ − 1 f 0 ) t |� f 1 � C 4 | t | 6 ≤ N C (ln N ) 5 / 2 √ N � Ξ − 1 f 0 � C 7 � f 1 | 1 / 2 ≤ C 4 which once plugged into (58) yields the result. ⋄ 4.3.4 Central limit theorem Theorem 4.27. Suppose that Assumption 4.21 holds with p ≥ 10 . Let f be C k with k ≥ 9 . Then M N ( f ) := � N i =1 f ( λ i ) − Nµ eq V ( f ) converges in law under P N β,V towards a Gaussian variable with mean m V ( f ) and covariance σ ( f ) = µ eq V ( f ′ Ξ − 1 f ) . Observe that we have weaker assumptions on f than in Lemma 4.25. This is because when we use the Dyson-Schwinger equations, we have to invert the operator Ξ several times, hence requiring more and more smoothness of the test function f . Using the change of variable formula instead allows to invert it only once, hence lowering our requirements on the test function. Proof. We can take f compactly supported by Theorem 4.11. We come back to the proof of Lemma 4.23 but go one step further in Taylor expansion to see 65

  63. that the function � f ( λ i ) − f ( λ j ) � 2 � � � Λ N ( f ) := β f ( λ i ) − f ( λ j ) − β V ′ ( λ i ) f ′ ( λ i ) − β 2 N 2 N λ i − λ j λ i − λ j i<j i<j N � � − β V ′′ ( λ i )( f ( λ i )) 2 + 1 f ′ ( λ i ) 2 N N i =1 satisfies � � � � ˆ � ≤ C 1 � e Λ N ( f ) dP β,V � N � f � 3 � ln C 1 N where the constant C may depend on the support of f . For any δ > 0 , with probability greater than 1 − e − C ( δ ) N 2 for some C ( δ ) > 0 , the empirical measure µ N is at Vasershtein distance smaller than δ from µ eq V . On this set, for f C 1 ˆ � f ( λ i ) − f ( λ j ) � 2 � � 1 + 1 V ′′ ( λ i )( f ( λ i )) 2 = C ( f ) + O ( δ ) 2 N 2 λ i − λ j N i,j where ˆ � f ( x ) − f ( y ) � 2 C ( f ) = 1 ˆ dµ eq V ( x ) dµ eq V ′′ ( x ) f ( x ) 2 dµ eq V ( y ) + V ( x ) 2 x − y whereas � N 1 ˆ f ′ ( x ) dµ eq f ′ ( λ i ) = M ( f ) + o (1) , if M ( f ) = V ( x ) . N i =1 As Λ N ( f ) is at most of order N , we deduce by letting N and then δ going to zero that N � � f ( λ i ) − f ( λ j ) Z N ( f ) := β V ′ ( λ i ) f ′ ( λ i ) − β 2 N λ i − λ j i,j =1 satisfies for any f C 1 ˆ = e ( β 2 − 1) M ( f )+ β 2 C ( f ) . e Z N ( f ) dP β,V lim N N →∞ In the line above we took into account that we added a diagonal term to Z N ( f ) which contributed to the mean. We can now replace f by tf for real numbers f and conclude that Z N ( f ) converges in law towards a Gaussian variable with mean ( β 2 − 1) M ( f ) and covariance βC ( f ) . On the other hand we can rewrite Z N ( f ) as Z N ( f ) = βM N (Ξ f ) + ε N ( f ) 66

  64. where ˆ f ( x ) − f ( y ) ε N ( f ) = βN µ N − µ eq µ N − µ eq d (ˆ V )( x ) d (ˆ V )( y ) 2 x − y Now, we can use Lemma 4.23 to bound the probability that ε N ( f ) is greater than some small δ . We again use the Fourier transform to write : ˆ 1 ε N ( f ) = βN ˆ � � ( it ˆ µ N − µ eq µ N − µ eq f t ) (ˆ V ) (1 − α ) t (ˆ V ) αt dt . 2 0 We can bound the L 1 norm of ε N ( f ) by Cauchy-Schwartz inequality by ˆ 1 E [ | ε N ( f ) | ] ≤ βN ˆ µ N − µ eq � µ N − µ eq � | t ˆ V ) (1 − α ) t | 2 ] 1 / 2 E [ | V ) (1 − α ) t | 2 ] 1 / 2 dtdα . f t | E [ | (ˆ (ˆ 2 0 Finally, Lemma 4.23 implies that V ) (1 − α ) t | 2 ] 1 / 2 ≤ C | t | 4 ln N � µ N − µ eq + N − C E [ | (ˆ N from which we deduce that there exists a finite constant C f t | dt ln N 2 ˆ | t | 5 | ˆ E [ | ε N ( f ) | ] ≤ C . N Thus, the convergence in law of Z N ( f ) implies the convergence in law of M N (Ξ f ) towards a Gaussian variable with covariance C ( f ) and mean ( 1 2 − β ) M ( f ) . 1 If f is C 9 , we can invert Ξ and conclude that M N ( f ) converges towards a Gaussian variable with mean m ( f ) = ( 1 2 − 1 β ) M (Ξ − 1 ( f )) and co- variance C (Ξ − 1 ( f )) . To identify the covariance, it is enough to show that C ( f ) = µ eq V ((Ξ f ) ′ f ) . But on the support of µ eq V ˆ f ( x ) − f ( y ) (Ξ f ) ′ ( x ) = V ′′ f ( x ) + PV dµ eq V ( y ) ( x − y ) 2 from which the result follows. ⋄ 4.4 Expansion of the partition function 1. For f C 17 and V C 20 , Theorem 4.28. V ( f ) + 1 N m V ( f ) + 1 N 2 K V ( f ) + o ( 1 µ N ( f )] = µ eq E P N β,V [ˆ N 2 ) , with m V ( f ) as in Theorem 4.25 and ˆ 1 K V ( f ) = ( 1 β − 1 2) m V (((Ξ − 1 f 0 ) ′ )+1 ˆ ˆ � dαC V ( e itα. , e it (1 − α ) . ) . Ξ − 1 f ( t ) itdt 2 0 67

  65. 2. Assume V C 20 , then ln Z N β,V = C 0 β N ln N + C 1 β ln N + N 2 F 0 ( V ) + NF 1 ( V ) + F 2 ( V ) + o (1) β = β β = 3+ β/ 2+2 /β with C 0 2 , C 1 and 12 −E ( µ eq F 0 ( V ) = V ) ln dµ eq − ( β ˆ dx dµ eq V (59) F 1 ( V ) = 2 − 1) V + f 1 ˆ 1 F 2 ( V ) = − β K V α ( V − V 0 ) dα + f 2 0 where f 1 , f 2 only depends on b − a , the width of the support of µ eq V . Proof. The first order estimate comes from Theorem 4.25. To get the next term, we notice that if Ξ − 1 f belongs to L 1 we can use the Fourier transform of Ξ − 1 f (which goes to infinity to zero faster than ( | t | + 1) − 3 as Ξ − 1 f is C 6 ) so that ˆ Ξ − 1 f ( x ) − Ξ − 1 f ( y ) E [ dM N ( x ) dM N ( y )] x − y ˆ 1 ˆ dtit � dα E [ � M N ( e itα. ) � M N ( e it (1 − α ) . )] = Ξ − 1 f ( t ) 0 ˆ 1 ˆ dtit � dαC V ( e itα. , e it (1 − α ) . ) Ξ − 1 f ( t ) ≃ 0 We can therefore use (56) to conclude that ( 1 β − 1 2) m V (((Ξ − 1 f ) ′ ) N ( E [ M N ( f )] − m ( f )) ≃ ˆ 1 +1 ˆ ˆ � dαC V ( e itα. , e it (1 − α ) . ) Ξ − 1 f ( t ) dtit 2 0 which proves the first claim. We used that f is C 12 so that (Ξ − 1 f ) ′ is C 9 and Theorem 4.25 for the convergence of the first term. For the second we notice that the covariance is uniformly bounded by C ( | t | 12 + 1) , so we can apply monotone Ξ − 1 f ( t ) || t | 13 is finite, so f C 16+ . dt | � convergence theorem when ´ To prove the second point, the idea is to proceed by interpolation from a case where the partition function can be explicitly computed, that is where V is quadratic. We interpolate V with a potential V 0 ( x ) = c ( x − d ) 2 / 4 so that the limiting equilibrium measure µ c,d , which is a semi-circle law shifted by d and enlarged by a factor √ c , has support [ a, b ] (so d = ( a + b ) / 2 and c = ( b − a ) 2 / 16 ). The advantage of keeping the same support is that the potential V α = αV +(1 − α ) V 0 has equilibrium measure µ α = αµ eq V +(1 − α ) µ c,d 68

  66. since it satisfies the characterization of Lemma 4.5. We then write ˆ 1 Z N β,V ∂ α ln Z N ln = β,V α dα Z N 0 β,V 0 ˆ 1 − βN 2 µ N ( V − V 0 )] dα = E P N β,Vα [ˆ 0 It is not hard to see that if µ eq V satisfy hypotheses 4.2, so does µ α and that the previous expansion can be shown to be uniform in α . Hence, we obtain the expansion from the first point if V is C 20 with ˆ 1 − β µ V α ( V − V 0 ) dα + f 0 F 0 ( V ) = 0 ˆ 1 F 1 ( V ) = − β m V α ( V − V 0 ) dα + f 1 0 ˆ 1 F 2 ( V ) = − β K V α ( V − V 0 ) dα + f 2 0 where f 0 , f 1 , f 2 are the coefficients in the expansion of Selberg integrals given in [72] : βN 3+ β/ 2+2 /β e N 2 f 0 + Nf 1 + f 0 + o (1) Z N 2 N V 0 ,β = N 12 with f 0 , f 1 , f 2 only depending on b − a : � � b − a �� − 3 f 0 = ( β/ 2) 4 + ln 4 � b − a � (1 − β/ 2) ln − 1 / 2 − β/ 4 + ( β/ 2) ln( β/ 2) + ln(2 π ) − ln Γ(1 + β/ 2) f 1 = 4 χ ′ (0; 2 /β, 1) + ln(2 π ) f 2 = 2 The first formula of Theorem 4.28 is clear from the large deviation principle and the last is just what we proved in the first point. Let us show that the first order correction is given in terms of the relative entropy as stated in (59). Indeed, by 69

  67. integration by part and Remark 4.26 we have ( 1 β − 1 2) − 1 m V ( f ) µ eq V [(Ξ − 1 f ) ′ ] = Ξ − 1 f ( dµ eq ˆ dx ) ′ dx V − = Ξ − 1 f (ln dµ eq ˆ dx ) ′ dµ eq V = − V f ′ Ξ − 1 (ln dµ eq ˆ dx ) dµ eq V = − V To complete our proof, we will first prove that if g is C 10 , s → 0 s − 1 ( µ eq V − sf − µ eq V )( g ) = µ eq V (Ξ − 1 gf ′ ) . (60) lim which implies the key estimate f ′ Ξ − 1 (ln dµ eq V = ∂ t µ V + tf (ln dµ eq ( 1 β − 1 ˆ dx ) dµ eq 2) − 1 m V ( f ) = − V V (61) dx ) | t =0 . To prove (60), we first show that m V ( f ) = ´ f ( x ) dµ V ( x ) is continuous in V in the sense that � (62) D ( µ V , µ W ) ≤ � V − W � ∞ . Indeed, by Lemma 4.5 applied to µ = µ W and since V eff d ( µ W − µ V ) ≥ 0 , we ´ have D ( µ W , µ V ) 2 ≤ E ( µ W ) − E ( µ V ) Wdµ + 1 V dµ + 1 ˆ ˆ ≤ inf { 2Σ( µ ) } − inf { 2Σ( µ ) } ≤ � W − V � ∞ V )( g ) goes to zero like √ s for g Lipschitz and f As a consequence ( µ eq V − sf − µ eq bounded. We can in fact get a more accurate estimate by using the limiting Dyson-Schwinger equation (52) to µ eq V − sf and µ eq V and take their difference to get : ( µ eq µ eq V )(Ξ V g ) = sµ eq βN f ( gf ′ ) (63) − s V − sf V − ˆ g ( x ) − g ( y ) 1 d ( µ V − sf − µ eq V )( x ) d ( µ V − sf − µ eq + V )( y ) . 2 x − y The last term is at most of order s if g is C 2 by (62) (see a similar argument in (50)), and so is the first. Hence we deduce from (63) that ( µ eq V − sf − µ eq V )( g ) is of order s if g ∈ C 4 and f is C 5 . Plugging back this estimate into the last term in (63) together with (62), we get (60) for g ∈ C 8 and f ∈ C 9 . 70

From (61), we deduce that
$$F_1(V)-f_1 = -\beta\int_0^1 m_{V_\alpha}(V-V_0)\,d\alpha = \Big(\frac\beta2-1\Big)\int_0^1 (\partial_\alpha\mu_{V_\alpha})\Big(\ln\frac{d\mu^{eq}_{V_\alpha}}{dx}\Big)\,d\alpha = \Big(\frac\beta2-1\Big)\int_0^1 \partial_\alpha\Big[\mu_{V_\alpha}\Big(\ln\frac{d\mu^{eq}_{V_\alpha}}{dx}\Big)\Big]\,d\alpha,$$
which yields the result. In the last equality we could add the term $\mu_{V_\alpha}(\partial_\alpha\ln\frac{d\mu^{eq}_{V_\alpha}}{dx})$ for free, since it vanishes as $\mu^{eq}_{V_\alpha}(1)=1$. ⋄

5 Discrete Beta-ensembles

We will consider discrete ensembles which are given by a parameter $\theta$ and a weight function $w$:
$$\mathbb{P}^{\theta,w}_N(\vec\ell) = \frac1{Z^{\theta,w}_N}\prod_{i<j} I_\theta(\ell_j-\ell_i)\,\prod_i w(\ell_i,N),$$
where for $x\ge0$ we have set
$$I_\theta(x) = \frac{\Gamma(x+1)\,\Gamma(x+\theta)}{\Gamma(x)\,\Gamma(x+1-\theta)},$$
where $\Gamma$ is the usual Gamma function, $\Gamma(n+1)=n\Gamma(n)$. The coordinates $\ell_1,\dots,\ell_N$ are discrete and belong to the set $\mathbb{W}^\theta$ such that $\ell_{i+1}-\ell_i \in \{\theta,\theta+1,\dots\}$ and $\ell_i\in(a(N),b(N))$, with $w(a(N),N)=w(b(N),N)=0$ and $\ell_1-a(N)\in\mathbb{N}$, $b(N)-\ell_N\in\mathbb{N}$.

Example 5.1. When $\theta=1$ this probability measure arises in the setting of lozenge tilings of the hexagon. More specifically, if one looks at a "slice" of the hexagon with sides of size $A,B,C$, then the number of lozenges of a particular orientation is exactly $N$ and the locations of these lozenges are distributed according to $\mathbb{P}^{1,w}_N$. Along the vertical line at distance $t$ from the vertical side of size $A$ (see Figure 1), the distribution of horizontal lozenges corresponds to a weight of the form
$$w(\ell,N) = (A+B+C+1-t-\ell)_{t-B}\,(\ell)_{t-C},$$
where $(a)_n$ is the Pochhammer symbol, $(a)_n = a(a+1)\cdots(a+n-1)$.
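As a quick illustration (not part of the notes), the interaction weight $I_\theta$ can be evaluated numerically; its logarithmic growth rate approaches $2\theta$, consistent with the comparison $\beta\leftrightarrow 2\theta$ with continuous $\beta$-ensembles made just below.

```python
import numpy as np
from scipy.special import gammaln

def log_I_theta(x, theta):
    # log I_theta(x) = log G(x+1) + log G(x+theta) - log G(x) - log G(x+1-theta)
    return gammaln(x + 1) + gammaln(x + theta) - gammaln(x) - gammaln(x + 1 - theta)

for theta in (0.5, 1.0, 2.5):
    for x in (10.0, 100.0, 1000.0):
        print(theta, x, log_I_theta(x, theta) / np.log(x))   # ratio tends to 2*theta
```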

Figure 1: Lozenge tilings of a hexagon (sides of size $A$, $B$, $C$; the slice is taken at distance $t$).

More generally, as $x\to+\infty$ the interaction term scales like
$$I_\theta(x) \approx |x|^{2\theta},$$
so the model with parameter $\theta$ should be compared to the $\beta$-ensemble model with $\beta\leftrightarrow 2\theta$. Note however that when $\theta\neq1$, the particle configurations do not live on $\mathbb{Z}^N$. These discrete $\beta$-ensembles were studied in [13]. Large deviation estimates can be generalized to the discrete setting, but Dyson-Schwinger equations are not easy to establish. Indeed, discrete integration by parts does not give closed equations for our observables this time. A nice generalization was proposed by Nekrasov that allows an analysis similar to the one we developed for continuous $\beta$-models. It amounts to showing that some functions of the observables are analytic, thanks to the fact that their possible poles cancel due to discrete integration by parts. We present this approach below.

5.1 Large deviations, law of large numbers

Let $\hat\mu_N$ be the empirical measure
$$\hat\mu_N = \frac1N\sum_{i=1}^N \delta_{\ell_i/N}.$$

Assumption 5.2. Assume that $a(N)=\hat aN+O(\ln N)$, $b(N)=\hat bN+O(\ln N)$ for some finite $\hat a,\hat b$, and that the weight $w(x,N)$ is given for $x\in(a(N),b(N))$ by
$$w(x,N) = \exp\Big(-N\,V_N\Big(\frac xN\Big)\Big),$$
where $V_N(u) = 2\theta V_0(u) + \frac1N e_N(Nu)$, and $V_0$ is continuous on $[\hat a,\hat b]$ and twice continuously differentiable in $(\hat a,\hat b)$.

It satisfies
$$|V_0''(u)| \le C\Big(1+\frac1{|u-\hat a|}+\frac1{|\hat b-u|}\Big),$$
and $e_N$ is uniformly bounded on $[a(N)+1,b(N)-1]/N$ by $C\ln N$ for some finite constant $C$ independent of $N$. For the sake of simplicity, we define $V_0$ to be constant outside of $[\hat a,\hat b]$ and continuous at the boundary.

Example 5.3. In the setting of lozenge tilings of the hexagon of Example 5.1, we assume that for large $N$
$$A=\hat AN+O(1),\quad B=\hat BN+O(1),\quad C=\hat CN+O(1),\quad t=\hat tN+O(1),$$
with $\hat t>\max\{\hat B,\hat C\}$. Then $a(N)=0$ and $b(N)=A+B+C+1-t$ obey $\hat a=0$, $\hat b=\hat A+\hat B+\hat C-\hat t$. Moreover, the potential satisfies our hypotheses with
$$V_0(u) = u\ln u + (\hat A+\hat B+\hat C-\hat t-u)\ln(\hat A+\hat B+\hat C-\hat t-u) - (\hat A+\hat C-u)\ln(\hat A+\hat C-u) - (\hat t-\hat C+u)\ln(\hat t-\hat C+u).$$
Notice that $V_N$ is infinite at the boundary since $w$ vanishes there. However, particles stay at distance at least $1/N$ from the boundary, and therefore, up to an error of order $1/N$, we can approximate $V_N$ by $V_0$.

Theorem 5.4. If Assumption 5.2 holds, the empirical measure converges almost surely,
$$\hat\mu_N \to \mu_{V_0},$$
where $\mu_{V_0}$ is the equilibrium measure for $V_0$. It is the unique minimizer of the energy
$$E(\mu) = \iint\Big[\frac12 V_0(x)+\frac12 V_0(y)-\frac12\ln|x-y|\Big]\,d\mu(x)\,d\mu(y)$$
subject to the constraint that $\mu$ is a probability measure on $[\hat a,\hat b]$ with density with respect to Lebesgue measure bounded by $\theta^{-1}$.

Remark 5.5. We have already seen that $E$ is a strictly convex good rate function on the set of probability measures on $[\hat a,\hat b]$, see (34). To see that it achieves its minimal value at a unique minimizer, it is therefore enough to show that we are minimizing this function on a closed convex set. But the set of probability measures on $[\hat a,\hat b]$ with density bounded by $1/\theta$ is clearly convex. It can be seen to be closed, as it is characterized as the countable intersection of the closed sets of probability measures $\mu$ on $[\hat a,\hat b]$ such that
$$\Big|\int f(x)\,d\mu(x)\Big| \le \frac{\|f\|_1}{\theta}$$
for bounded continuous functions $f$ on $[\hat a,\hat b]$, where $\|f\|_1=\int|f(x)|\,dx<\infty$.

  71. a, ˆ The case where ˆ b are infinite can also be considered [13]. This result can be deduced from a large deviation principle similar to the continuous case [35] : µ N under P θ,w Theorem 5.6. If Assumption 5.2 holds, the law of ˆ satisfies a N large deviation principle in the scale N 2 with good rate function I which is infi- a, ˆ nite outside of the set P θ of probability measures on [ˆ b ] absolutely continuous with respect to the Lebesgue measure and with density bounded by 1 /θ , and given on P θ by I ( µ ) = 2 θ ( E ( µ ) − inf P θ E ) . Proof. The proofs are very similar to the continuous case, we only sketch the differences. In this discrete framework, because the particles have spacings bounded below by θ , we have, for all x < y , θ # { i : ℓ i ∈ N [ x, y ] } ≤ ( y − x ) N + θ so that µ N ([ x, y ]) ≤ | y − x | + 1 ˆ N . θ µ N can only deviate towards probability measures in P θ . The In particular, ˆ proof of the large deviation upper bound is then exactly the same as in the continuous case. For the lower bound, the proof is similar and boils down to concentrate the particles very close to the quantiles of the measure towards which the empirical measure deviates : one just need to find such a configuration in W θ . We refer the reader to [35]. In particular in the limit we will have : d µ eq ≤ 1 V d x θ The variational problem defining µ eq V in this case takes this bound into ac- count. Noticing that E ( µ eq V + tν ) ≥ E ( µ eq V ) for all ν with zero mass, non-negative outside the support of µ eq V and non-positive in the region where dµ eq V = θ − 1 dx , the characterization of the equilibrium measure is that ∃ C V s.t. if we define : ˆ ln( | x − y | ) d µ eq V eff ( x ) = V 0 ( x ) − V ( y ) − C V and V eff satisfies :  on 0 < d µ eq d x < 1  V eff ( x ) = 0 V  θ on d µ V eff ( x ) ≥ 0 d x = 0   on d µ d x = 1 V eff ( x ) ≤ 0 θ The analysis of the large deviation principle and concentration are the same as in the continuous β ensemble case otherwise. ⋄ 74
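The constrained variational problem of Theorem 5.4, together with the characterization of the equilibrium measure through the effective potential in the proof above (zero on the liquid region, non-negative on the voids, non-positive where the density saturates $1/\theta$), can be explored numerically. The sketch below is only an illustration with an arbitrary smooth potential, not the hexagon example: it discretizes $[\hat a,\hat b]$, minimizes a discretized version of $E$ by projected gradient descent under the constraints $0\le$ density $\le 1/\theta$ and total mass one, and reports which fraction of the grid ends up frozen (at the cap) or void.

```python
import numpy as np

# Discretized constrained equilibrium problem (illustrative parameters):
# minimize  sum V0(x_i) p_i h - 0.5 * sum_{i != j} log|x_i - x_j| p_i p_j h^2
# over 0 <= p_i <= 1/theta and sum p_i h = 1  (p = density on a grid of [a_hat, b_hat]).
a_hat, b_hat, theta, n = 0.0, 3.0, 2.0, 400
x = np.linspace(a_hat, b_hat, n)
h = x[1] - x[0]
V0 = 4.0 * (x - 1.5) ** 2                   # arbitrary smooth confining potential
cap = 1.0 / theta
logdist = np.log(np.abs(x[:, None] - x[None, :]) + np.eye(n))   # zero on the diagonal

def project(q):
    """Project onto {0 <= p <= cap, sum p h = 1}: shift by tau, clip, bisect on tau."""
    lo, hi = q.min() - 1.0 / h, q.max()
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(q - tau, 0.0, cap).sum() * h > 1.0:
            lo = tau
        else:
            hi = tau
    return np.clip(q - 0.5 * (lo + hi), 0.0, cap)

p = project(np.full(n, 1.0 / (b_hat - a_hat)))
for _ in range(3000):
    grad = V0 * h - h * h * (logdist @ p)   # gradient of the discretized energy
    p = project(p - 0.5 * grad)

print("total mass:", p.sum() * h)
print("fraction of grid frozen at 1/theta:", np.mean(p > cap - 1e-6))
print("fraction of grid in the void:", np.mean(p < 1e-6))
```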

  72. 5.2 Concentration of measure As in the continuous case we consider the pseudo- distance D (4.12) and the µ N with regularization of the empirical measure ˜ µ N given by the convolution of ˆ a uniform variable on [0 , θ N ] (to keep measures with density bounded by 1 /θ ). We then have as in the continuous case Lemma 5.7. Assume V 0 is C 1 . There exists C finite such that for all t ≥ 0 µ N , µ V 0 ) ≥ t ) ≤ e CN ln N − N 2 t 2 P θ,ω ( D (˜ N As a consequence, for any N ∈ N , any ε > 0 � � � � � � ˆ 1 � ≥ t 1 µ N − µ V 0 )( x ) ≤ e CN ln N − N 2 t 2 P θ,ω � � sup z − xd (ˆ ε 2 + � N ε 2 N z : ℑ z ≥ ε Proof. We set Q θ,ω N ( ℓ ) = N − θN 2 Z θ,ω N P θ,ω N ( ℓ ) and set for a configuration ℓ , µ N ) , E ( ℓ ) := E (ˆ N � � E ( ℓ ) = 1 V 0 ( ℓ i N ) − 2 θ ln | ℓ i N − ℓ j N | . N N 2 i =1 i<j • We first show that Q θ,ω N ( ℓ ) = e − N 2 2 θ E ( ℓ )+ O ( N ln N ) . Indeed, Stirling for- √ mula shows that ln Γ( x ) = x ln x − x − ln 2 πx + O ( 1 x ) , which implies that � � Γ( ℓ j − ℓ i + 1)Γ( ℓ j − ℓ i + θ ) 1 O ( � ℓj − ℓi ) | ℓ j − ℓ i | 2 θ e Γ( ℓ j − ℓ i )Γ( ℓ j − ℓ i + 1 − θ ) = i<j i<j i<j with � 1 ( ℓ j − ℓ i ) = O ( N ln N ) as ℓ j − ℓ i ≥ θ ( j − i ) . Similarly, by our i<j assumption on V N , for all configuration ℓ so that ℓ 1 � = a ( N ) and ℓ N � = a ( N ) , we have : N N � � 1 V N ( ℓ i N ) = 1 V 0 ( ℓ i N ) + O (ln N N ) . N N i =1 i =1 Hence we deduce that for any configuration with positive probability : N ( ℓ ) = e − N 2 2 θ E ( ℓ )+ O ( N ln N ) Q θ,ω (64) • We have the lower bound N − θN 2 Z θ,ω ≥ e − N 2 2 θ E ( µ V 0 )+ CN ln N . To prove N this bound we simply have to choose a configuration matching this lower bound. We let ( q i ) 1 ≤ i ≤ N be the quantiles of µ V 0 so that a, q i ]) = i − 1 / 2 µ V 0 ([ˆ . N 75

  73. Then we set Q i = a ( N ) + θ ( i − 1) + ⌊ Nq i − a ( N ) − ( i − 1) θ ⌋ Because the density of µ V 0 is bounded by 1 /θ , q i +1 − q i ≥ θ and there- fore Q i +1 − Q i ≥ θ . Moreover, Q 1 − a ( N ) is an integer. Hence, Q is a configuration. We have by the previous point that N − θN 2 Z θ,ω ≥ e − N 2 2 θ E ( Q )+ O ( N ln N ) (65) N We finally can compare E ( Q ) to E ( µ V 0 ) . Indeed, by definition Q i ∈ [ Nq i , Nq i + 1] and Q i − Q j ≥ θ ( i − j ) , so that � � ln | Q i − Q j ln | Q i − Q j | ≥ | + O ( N ln N ) N N i<j i +[ 2 θ ] <j � ln | q j − q i − 1 ≥ N | + O ( N ln N ) i +[ 2 θ ] <j � ln | q j − q i | + O ( N ln N ) = i +[ 2 θ ] <j ˆ q j ˆ q i +1 � N 2 ≥ ln | x − y | dµ V 0 ( x ) dµ V 0 ( y ) + O ( N ln N ) q j − 1 q i i +[ 2 θ ] <j ˆ N 2 ≥ ln | x − y | dµ V 0 ( x ) dµ V 0 ( y ) + O ( N ln N ) x<y where we used that the logarithm is monotone and the density of µ V 0 uniformly bounded by 1 /θ . Moreover � 1 � ˆ q i +1 ˆ q i +1 � � N V 0 ( Q i ( | Q i | N ) − V 0 ( x ) dµ V 0 ( x ) | ≤ C N − q i | + | q i +1 − q i | ) dµ V 0 q i q i i i is bounded by C ′ /N . We conclude that E ( Q ) ≤ E ( µ V 0 ) + O (ln N N ) so that we deduce the announced bound from (65). • We then show that Q θ,ω N ( ℓ ) = e − N 2 2 θ E (˜ µ N )+ O ( N ln N ) . We start from (64) µ N by ˜ and need to show we can replace the empirical measure of ˆ µ N and then add the diagonal term i = j up to an error of order N ln N . Indeed, if u, v are two independent uniform variables on [0 , θ ] , independent of ℓ , � � N + u − v ln | ℓ i N − ℓ j E [ln | ℓ i N − ℓ j N | − | ] N i,j i � = j 76

  74. � � E [ln | u − v 1 = − | ] + O ( ) = O ( N ln N ) N ℓ j − ℓ i i i<j whereas N � 1 ( V 0 ( ℓ i N ) − E [ V 0 ( ℓ i N + u N )]) = O (ln N N ) N i =1 • P θ,ω N ( ℓ ) ≤ e − N 2 2 θD 2 (˜ µ N ,µ V 0 )+ O ( N ln N ) . We can now write ˆ µ N − µ V 0 )( x ) + D 2 (˜ E (˜ µ N ) = E ( µ V 0 ) + V eff ( x ) d (˜ µ N , µ V 0 ) D 2 is indeed positive as ˜ µ N and µ V 0 have the same mass. V eff ( x ) vanishes on the liquid regions of µ V 0 , is non-negative on the voids where ˜ µ N − µ V 0 is non-negative, and non positive on the frozen regions where ˜ µ N − µ V 0 is non-negative since ˜ µ N has density bounded by 1 /θ . Hence we conclude that ˆ V eff ( x ) d (˜ µ N − µ V 0 )( x ) ≥ 0 . a, ˆ [ˆ b ] On the other hand the effective potential is bounded and so our assumption on a ( N ) − N ˆ a implies ˆ N 2 b ] c V eff ( x ) d (˜ µ N − µ V 0 )( x ) = O ( N ln N ) . a, ˆ [ˆ Hence, we can conclude by the previous two points. ⋄ 5.3 Nekrasov’s equations The analysis of the central limit theorem is a bit different than for the continuous β ensemble case. Introduce : N � 1 1 G N ( z ) = z − ℓ i N i =1 N 1 ˆ z − x d µ eq G ( z ) = V ( x ) . We want to study the fluctuations of { N ( G N ( z ) − G ( z )) } . To this end, we would like an analogue of Dyson-Schwinger equations in this discrete setting. The candidate given by discrete integration by parts is not suited to asymptotic analysis as it yields densities which depend on � (1 + ( ℓ i − ℓ j ) − 1 ) which is not µ N . In this case the analysis goes by the Nekrasov’s equations a function of ˆ which Nekrasov calls “non-perturbative” Dyson-Schwinger equations. Assume that we can write : 77

Assumption 5.8.
$$\frac{w(x,N)}{w(x-1,N)} = \frac{\phi^+_N(x)}{\phi^-_N(x)},$$
where $\phi^\pm_N$ are analytic functions in some subset $M$ of the complex plane which includes $[a(N),b(N)]$ and is independent of $N$.

Example 5.9. In the example of random lozenge tilings of Example 5.1, we can take
$$\phi^+_N(z) = \frac1{N^2}(t-C+z)(A+B+C-t-z),\qquad \phi^-_N(z) = \frac1{N^2}\,z\,(A+C-z).$$

With these defined, Nekrasov's equation is the following statement.

Theorem 5.10. If Assumption 5.8 holds, then
$$R_N(\xi) = \phi^-_N(\xi)\,\mathbb{E}_{\mathbb{P}^{\theta,w}_N}\Big[\prod_{i=1}^N\Big(1-\frac\theta{\xi-\ell_i}\Big)\Big] + \phi^+_N(\xi)\,\mathbb{E}_{\mathbb{P}^{\theta,w}_N}\Big[\prod_{i=1}^N\Big(1+\frac\theta{\xi-\ell_i-1}\Big)\Big]$$
is analytic in $M$.

Proof. In fact this can be checked by looking at the poles of the right-hand side and showing that their residues vanish. Noting that there is a residue only when $\xi=\ell_i$ or $\xi=\ell_i+1$, we find that the residue at $\xi=m$ is
$$-\theta\,\phi^-_N(m)\sum_i\sum_{\ell:\,\ell_i=m}\mathbb{P}^{\theta,w}_N(\ell_1,\dots,\ell_{i-1},m,\ell_{i+1},\dots,\ell_N)\prod_{j\neq i}\Big(1-\frac\theta{m-\ell_j}\Big) \;+\; \theta\,\phi^+_N(m)\sum_i\sum_{\ell:\,\ell_i=m-1}\mathbb{P}^{\theta,w}_N(\ell_1,\dots,\ell_{i-1},m-1,\ell_{i+1},\dots,\ell_N)\prod_{j\neq i}\Big(1+\frac\theta{m-\ell_j-1}\Big).$$
If $m=a(N)+1$, the second term vanishes since the configuration space is such that $\ell_i>a(N)$ for all $i$, whereas $\phi^-_N(a(N)+1)=0$; hence both terms vanish. The same holds at $b(N)$, and therefore we now consider $m\in(a(N)+1,b(N))$. Similarly, a configuration where $\ell_i=m$ implies that $\ell_{i-1}\le m-\theta$, whereas $\ell_i=m-1$ implies $\ell_{i-1}\le m-1-\theta$. However, the first term vanishes when $\ell_{i-1}=m-\theta$. Hence, in both sums we may consider only configurations where $\ell_{i-1}\le m-1-\theta$. The same holds for $\ell_{i+1}\ge m+\theta$. Then notice that if $\ell$ is a configuration such that shifting $\ell_i$ by one still gives a configuration, our specific choice of weight $w$ and of interaction through the function $\Gamma$ implies that
$$\phi^-_N(m)\,\mathbb{P}^{\theta,w}_N(\ell_1,\dots,m,\ell_{i+1},\dots,\ell_N)\prod_{j\neq i}\Big(1-\frac\theta{m-\ell_j}\Big) = \phi^+_N(m)\,\mathbb{P}^{\theta,w}_N(\ell_1,\dots,m-1,\ell_{i+1},\dots,\ell_N)\prod_{j\neq i}\Big(1+\frac\theta{m-\ell_j-1}\Big).$$
On the other hand, a configuration such that shifting the $i$-th particle by one does not give an admissible configuration has residue zero. Hence the residue at every such $m$ vanishes. ⋄
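Theorem 5.10 can be checked by brute force on a small example. The sketch below is only an illustration with choices not made in the notes: $\theta=1$, $N=3$ particles and the binomial weight $w(x,N)=\binom Mx$ on $\{0,\dots,M\}$ (so $a(N)=-1$, $b(N)=M+1$), for which one may take $\phi^+_N(x)=M+1-x$ and $\phi^-_N(x)=x$; these satisfy $w(x,N)/w(x-1,N)=\phi^+_N(x)/\phi^-_N(x)$ and vanish at the two boundary points as required in the proof. The code enumerates all configurations and evaluates the two residue contributions at every integer $m$; they cancel to machine precision.

```python
import numpy as np
from itertools import combinations
from math import comb

theta, Npart, M = 1, 3, 8
phi_minus = lambda x: float(x)
phi_plus = lambda x: float(M + 1 - x)

# admissible configurations for theta = 1: strictly increasing integers in {0,...,M}
configs = [np.array(c, dtype=float) for c in combinations(range(M + 1), Npart)]

def weight(ell):
    inter = np.prod([(lb - la) ** 2 for la, lb in combinations(ell, 2)])   # I_1(x) = x^2
    return inter * np.prod([comb(M, int(v)) for v in ell])

probs = np.array([weight(c) for c in configs])
probs /= probs.sum()

def residue(m):
    """Residue of R_N at xi = m, computed from the two sums in the proof of Theorem 5.10."""
    res = 0.0
    for p, ell in zip(probs, configs):
        for i, li in enumerate(ell):
            others = np.delete(ell, i)
            if li == m:        # pole of the first product
                res -= theta * phi_minus(m) * p * np.prod(1.0 - theta / (m - others))
            if li == m - 1:    # pole of the second product
                res += theta * phi_plus(m) * p * np.prod(1.0 + theta / (m - others - 1))
    return res

print(max(abs(residue(m)) for m in range(0, M + 2)))   # ~ 1e-15: no poles, R_N is analytic
```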

Nekrasov's equation a priori still contains the analytic function $R_N$ as an unknown. However, we shall see that it can be asymptotically determined, based on the sole fact that it is analytic, provided the equilibrium measure is off-critical.

Assumption 5.11. Uniformly in $M$,
$$\phi^\pm_N(z) =: \phi^\pm\Big(\frac zN\Big) + \frac1N\,\phi^\pm_1\Big(\frac zN\Big) + O\Big(\frac1{N^2}\Big).$$

Observe here that $\phi^\pm_1$ may depend on $N$ and be oscillatory, in the sense that it may depend on the boundary point. For instance, in the case of binomial weights, $\phi^+_N(x)=(M+1-x)$, $\phi^-_N(x)=x$, we see that if $M/N$ goes to $m$, then $\phi^-(x)=x$ and $\phi^-_1(x)=0$, but $\phi^+(x)=m-x$ and $\phi^+_1(x)=M+1-Nm$, where the latter may oscillate, even if it is bounded. We will however hide this lack of convergence in the notations. The main point is to assume that the functions in the expansion are bounded uniformly in $N$ and $z\in M$.

Example 5.12. With the example of lozenge tilings, we have
$$\phi^+(z) = (\hat t-\hat C+z)(\hat A+\hat B+\hat C-\hat t-z),\qquad \phi^-(z) = z(\hat A+\hat C-z),$$
whereas, if $\Delta D = D-N\hat D$,
$$\phi^+_1(x) = (\hat t-\hat C+x)(\Delta A+\Delta B+\Delta C-\Delta t) + (\Delta t-\Delta C)(\hat A+\hat B+\hat C-\hat t-x),\qquad \phi^-_1(x) = x(\Delta A+\Delta C).$$

To analyze the asymptotics of $G_N$, we expand Nekrasov's equation around the equilibrium limit. We set $\xi=Nz$ for $z\in\mathbb{C}\setminus\mathbb{R}$. Since we know by Lemma 5.7 that $\Delta G_N(z)=G_N(z)-G(z)$ is small (away from $[a,b]$), we can expand the Nekrasov equation of Theorem 5.10 to get
$$R_N(\xi) = R_\mu(z) - \theta Q_\mu(z)\,\mathbb{E}[\Delta G_N(z)] + \frac1N E_\mu(z) + \Gamma_\mu(z), \qquad (66)$$
where we have set
$$R_\mu(z) := \phi^-(z)e^{-\theta G(z)} + \phi^+(z)e^{\theta G(z)},\qquad Q_\mu(z) := \phi^-(z)e^{-\theta G(z)} - \phi^+(z)e^{\theta G(z)},$$
$$E_\mu(z) := \phi^-(z)e^{-\theta G(z)}\,\frac{\theta^2}2\,\partial_zG(z) + \phi^+(z)e^{\theta G(z)}\Big(\frac{\theta^2}2-\theta\Big)\partial_zG(z) + \phi^-_1(z)e^{-\theta G(z)} + \phi^+_1(z)e^{\theta G(z)}.$$

$\Gamma_\mu$ is the remainder term in (66); it is bounded on $\{\Im z\ge\varepsilon\}\cap M$ by
$$|\Gamma_\mu(z)| \le C(\varepsilon)\Big(\mathbb{E}\big[|\Delta G_N(z)|^2\big] + \frac1N\big|\partial_z\mathbb{E}[\Delta G_N(z)]\big|\Big) + o\Big(\frac1N\Big).$$
The a priori concentration inequalities of Lemma 5.7 show that $\Gamma_\mu(z)=O(\ln N/N)$. We deduce, by taking the large $N$ limit, that $R_\mu$ is analytic in $M$, and we set $\tilde R_\mu = R_N-R_\mu$. Let us assume for a moment that we have the following stronger control on $\Gamma_\mu$.

Lemma 5.13. For any $\varepsilon>0$,
$$\mathbb{E}\big[|\Delta G_N(z)|^2\big] + \frac1N\big|\partial_z\mathbb{E}[\Delta G_N(z)]\big| = o\Big(\frac1N\Big)$$
uniformly on $M\cap\{|\Im z|\ge\varepsilon\}$.

Let us deduce the asymptotics of $N\,\mathbb{E}[\Delta G_N(z)]$. To do that, let us assume we are in an off-critical situation, in the following sense.

Assumption 5.14.
$$\theta Q_\mu(z) = \sqrt{(z-a)(b-z)}\,H(z) =: \sigma(z)H(z),$$
where $H$ does not vanish in $M$.

Remark 5.15. Observe that if $\rho$ is the density of the equilibrium measure,
$$e^{2i\pi\theta\rho(E)} = \frac{R_\mu(E)+Q_\mu(E-i0)}{R_\mu(E)+Q_\mu(E+i0)}.$$
Our assumption therefore implies that $\rho(E)$ equals $0$ or $1/\theta$ outside $[a,b]$ and approaches these values as a square root. There is a unique liquid region, where the density takes values in $(0,1/\theta)$; it is exactly $[a,b]$.

We now proceed with techniques similar to those used in the $\beta$-ensemble case, taking advantage of equation (66) as we used the Dyson-Schwinger equation before.

Lemma 5.16. If Assumption 5.14 holds, then for any $z\in M\setminus\mathbb{R}$,
$$\mathbb{E}[N\Delta G_N(z)] = m(z) + o(1) \qquad (67)$$
with $m(z)=K^{-1}E_\mu(z)$, where
$$K^{-1}f(z) = \frac1{2i\pi\,\sigma(z)}\oint_{[a,b]}\frac1{\xi-z}\,\frac{f(\xi)}{H(\xi)}\,d\xi.$$

Remark 5.17. If we compare with the continuous setting, $K$ is the operator of multiplication by $\theta Q_\mu(z)$, whereas in the continuous case it was multiplication by $G(z)-V'(z)$ (up to the factor $\beta$). Choosing $\phi^+(z)=e^{-V'(z)/2}$, $\phi^-(z)=e^{+V'(z)/2}$, we see that $Q_\mu(z)$ is, up to a constant, $\sinh(\theta G_\mu(z)-V'(z)/2)$. Hence, the discrete and continuous master operators can be compared up to taking a hyperbolic sine.

  78. Proof. To get the next order correction we look at (66) : θQ µ ( z ) E [∆ G N ( z )] = 1 N E µ ( z ) − ˜ R µ ( z ) + Γ µ ( z ) We can then rewrite as a contour integral for z ∈ M : 1 ˛ 1 ξ − z [ σ ( ξ ) E [∆ G N ( ξ )]] d ξ σ ( z ) E [∆ G N ( z )] = 2 iπ z � 1 � 1 1 1 ˛ N E µ ( ξ ) − ˜ d ξ = R µ ( ξ ) + Γ µ ( ξ ) 2 iπ ξ − z H ( ξ ) [ a,b ] � 1 � � 1 � 1 ˛ 1 1 d ξ + o = N E µ ( ξ ) 2 iπ ξ − z H ( ξ ) N [ a,b ] where we used that σ ∆ G N goes to zero like 1 /z to deduce that there is no residue at infinity so that we can move the contour to a neighborhood of [ a, b ] , that ˜ R µ /H is analytic in a neighborhood of [ a, b ] to remove its contour integral, and assumed Lemma 5.13 holds to bound the reminder term, as the integral is bounded independently of N . ⋄ Remark 5.18. The previous proof shows, without Lemma 5.13, that E [∆ G N ( z )] is at most of order ln N/N sin Γ µ is at most of this order by basic concentration estimates. We finally prove Lemma 5.13. To do so, it is enough to bound E [ | ∆ G N ( z ) | 2 ] by o (1 /N ) uniformly on M ∩ {|ℑ z | ≥ ǫ/ 2 } by analyticity. Note that Lemma 5.7 already implies that this is of order ln N/N . To improve this bound, we get an equation for the covariance. To get such an equation we replace the weight w ( x, N ) by � � t w t ( x, N ) = w ( x, N ) 1 + z ′ − x/N for t very small. This changes the functions φ ± N by � � z ′ − x/N + 1 N ( x ) ( z ′ − x/N + t ) φ + ,t N ( x ) = φ + , N � � z ′ − x/N + t + 1 N ( x ) ( z ′ − x/N ) φ − ,t N ( x ) = φ − . N We can apply the Nekrasov’s equations to this new measure for t small enough (so that the new weights w t does not vanish for z ′ ∈ M ) to deduce that � N �� � N �� � � � � θ θ N ( ξ ) = φ − ,t + φ + ,t R t N ( ξ ) E P θ,wt 1 − N ( ξ ) E P θ,wt 1 + ξ − ℓ i ξ − ℓ i − 1 N N i =1 i =1 (68) 81

  79. is analytic. We start expanding with respect to N by writing N ( Nx ) / ( z ′ − x )( t + z ′ − x ) = ( φ ± ( x ) + 1 ( x ) + o ( 1 φ ± ,t N φ ± ,t N )) 1 with 1 ( x ) + φ + ( x ) φ − ( x ) φ + ,t ( x ) = φ + z ′ − x, φ − ,t ( x ) = φ − 1 ( x ) + t + z ′ − x . 1 1 We set N ( Nx ) − R µ ( x )) / ( z ′ − x )( t + z ′ − x ) ˜ R t N ( x ) = ( R t which is analytic up to a correction which is o (1 /N ) and analytic away from z ′ in a neighborhood of which it has two simple poles. We divide both sides of Nekrasov equation by ( z ′ − x )( t + z ′ − x ) , and take ξ = Nz and again using Lemma 5.7, we deduce that N ( z ) + 1 θQ µ ( z ) E θ,w t P N [∆ G N ( z )] = ˜ R t N E t µ ( z ) + Γ t (69) µ ( z ) where µ ( z ) = E µ ( z ) + φ + ( z ) φ − ( z ) z ′ − z e θG ( z ) + E t t + z ′ − z e − θG ( z ) and Γ t µ ( z ) is a reminder term. It is the sum of the reminder term coming from (66) and the error term coming from the expansion of φ ± ,t . The latter has single poles at z ′ and z ′ + t and is bounded by 1 /N 2 . We can invert the multiplication by Q µ as before to conclude (taking a contour which does not include z ′ so that ˜ R t N stays analytic inside) that [∆ G N ( z )] = K − 1 [ 1 µ ]( z ) + o ( 1 N E t µ + Γ t E P θ,wt N ) , N where we noticed that the residues of ε N are of order one. We finally differentiate with respect to t and take t = 0 (note therefore that we need no estimates under the tilted measure P θ,w t , but only those take at N t = 0 where we have an honest probability measure). Noticing that the operator K does not depend on t , we obtain, with ¯ ∆ G N ( z ′ ) = G N ( z ′ ) − E [ G N ( z ′ )] : � � = − K − 1 [ φ − ( . ) ∆ G N ( z ) ¯ N 2 E P θ,w ∆ G N ( z ′ ) ( z ′ − . ) 2 e − θG ( . ) ]( z )+ NK − 1 [ ∂ t Γ t µ | t =0 ]( z ) N (70) It is not difficult to see by a careful expansion in Nekrasov’s equation (68) that � N E [ | ∆ G N ( z ) | 2 | ¯ | ∂ t Γ t ∆ G N ( z ′ ) | ] µ ( z ) | t =0 | ≤ (71) C ( ε ) � ∆ G N ( z ′ )] | + 1 + | ∂ z E [∆ G N ( z ) ¯ N E [ | ¯ ∆ G N ( z ′ ) | ] 82

  80. √ By Lemma 5.7, it is at most of order (ln N ) 3 / N so that we proved ( z ′ − . ) 2 e − θG ( . ) ]( z ) + O ((ln N ) 3 √ � � = − K − 1 [ φ − ( . ) ∆ G N ( z ) ¯ N 2 E P θ,w ∆ G N ( z ′ ) N ) N (72) This shows by taking z ′ = ¯ z that for ℑ z ≥ ε E [ | N ∆ G N ( z ) | 2 ] ≤ (ln N ) 3 √ (73) N . We note here that ∆ G N ( z ) and ¯ ∆ G N ( z ) only differ by ln N/N by Remark 5.18. This completes the proof of Lemma 5.13. We derive the central limit theorem in the same spirit. Theorem 5.19. If Assumption 5.14 holds, for any z 1 , . . . , z k ∈ M\ R , ( N ∆ G N ( z 1 ) − m ( z 1 ) , . . . , N ∆ G N ( z k ) − m ( z k )) converges in distribution towards a centered Gaussian vector with covariance C ( z, z ′ ) = − K − 1 [ φ − ( . ) ( z ′ − . ) 2 e − θG ( . ) ]( z ) Remark 5.20. It was shown in [13] that the above covariance is the same than for random matrices and is given by � � zw − 1 1 2 ( a + b )( z + w ) + ab � � C ( z, w ) = 1 − . ( z − w ) 2 ( z − a )( z − b ) ( w − a )( w − b ) It only depends on the end points and therefore is the same than for continuous β ensembles with equilibrium measure with same end points. However notice that the mean given in (76) is different. Proof. We first prove the convergence of the covariance by improving the esti- mates on the reminder term in (70) by a bootstrap procedure. It is enough to improve the estimate on ∂ t Γ µ according to (70). But already, our new bound on the covariance (73) and Lemma 5.7 allow to bound the right hand side of (71) by (ln N ) 4 /N . This allows to improve the estimate on the covariance as in the previous proof and we get : E [ | N ∆ G N ( z ) | 2 ] ≤ C ( ǫ )(ln N ) 4 . (74) In turn, we can again improve the estimate on | ∂ t Γ µ ( z ) | since we now can bound the right hand side of (71) by (ln N ) 5 N − 1 / 2 , which implies the desired convergence of E [∆ G N ( z )∆ G N ( z ′ )] towards C ( z, z ′ ) . To derive the central limit theorem it is enough to show that the cumulants of degree higher than two vanish. To do so we replace the weight w ( x, N ) by � � p � t i w t ( x, N ) = w ( x, N ) 1 + . z i − x/N i =1 83

  81. The cumulants are then given by N∂ t 1 ∂ t 2 · · · ∂ t p E P θ,wt [∆ G N ( z )] | t 1 = t 2 = ··· = t p =0 . N Indeed, recall that the cumulant of N ¯ ∆ G N ( z 1 ) , . . . N ¯ ∆ G N ( z p ) is given by p � ∂ t 1 · · · ∂ t p ln E P θ,wt [exp { N t i G N ( z i ) } ] | t 1 = t 2 = ··· = t p =0 N i =1 which is also given by [ N ¯ ∂ t 2 · · · ∂ t p ln E P θ,wt ∆ G N ( z 1 )] | t 1 = t 2 = ··· = t p =0 . N [ ¯ Noticing that E P θ,wt ∆ G N ( z ) − ∆ G N ( z )] is independent of t , we conclude that N it is enough to show that N∂ t 1 ∂ t 2 · · · ∂ t p E P θ,wt [∆ G N ( z )] | t 1 = t 2 = ··· = t p =0 N goes to zero for p ≥ 2 . In fact, we can perform an analysis similar to the previous one. This changes the functions φ ± N by p p � � φ + ,t N ( x ) = φ + ( z i − x/N + t i ) , φ − ,t N ( x ) = φ − N ( z ) N ( z ) ( z i − x/N ) . i =1 i =1 We can apply the Nekrasov’s equations to this new measure for t i small enough (so that the new weights do not vanish) to deduce that � N �� � N �� � � � � θ θ R t N ( ξ ) = φ − ,t + φ + ,t (75) N ( ξ ) E P θ,wt 1 − N ( ξ ) E P θ,wt 1 + ξ − ℓ i ξ − ℓ i − 1 N N i =1 i =1 is analytic. Expanding in N we deduce that [∆ G N ( z )] = K − 1 [ 1 N E t µ ( z ) + Γ t E P θ,wt µ ( z )] N where p p � � φ + ( x ) φ − ( x ) z i − xe θG ( z ) + E t t i + z i − xe − θG ( z ) µ ( z ) = E µ ( z ) + i =1 i =1 and � � � � ( | ∆ G N ( z ) | 2 + 1 | N ¯ | ∂ t 1 · · · ∂ t p Γ t µ | t i =0 ( z ) | ≤ C ( ǫ ) E N 2 ) ∆ G N ( z i ) || � � + 1 N ¯ N | ∂ z E [∆ G N ( z ) ∆ G N ( z i )] | . 84

  82. The contour in the definition of K − 1 includes z and [ a, b ] but not the z i ’s. Taking the derivative with respect to t 1 , . . . , t p at zero we see that for p ≥ 1 [ N ∆ G N ( z )] = K − 1 [ ∂ t 1 ∂ t 2 · · · ∂ t p N Γ t ∂ t 1 ∂ t 2 · · · ∂ t p E P θ,wt µ ( z )] N where we used that the operator K is independent of t . We finally need to show that the right hand side goes to zero. It will, provided we show that for all p ∈ N , all z 1 , . . . , z p ∈ M\ [ A, B ] there exists C depending only on min d ( z i , [ A, B ]) and p such that � � � � p � � � � ≤ C (ln N ) 3 p . � E [ � N ∆ G N ( z i )] � i =1 This provides also bounds on E [ | ∆ G N ( z ) | p ] when p is even. Indeed ∂ t 2 · · · ∂ t p N Γ t µ ( z ) can be bounded by a combination of such moments. We can prove this by induc- tion over p . By our previous bound on the covariance, we have already proved this result for p = 2 by (74). Let us assume we obtained this bound for all ℓ ≤ p for some p ≥ 2 . To get bounds on moments of correlators of order p + 1 , µ | t =0 is at most of order (ln N ) 3 p +2 if p is let us notice that | ∂ t 1 ∂ t 2 · · · ∂ t p N Γ t even by the induction hypothesis and Lemma 5.7(by bounding uniformly the Stieltjes functions depending on z ). This is enough to conclude. If p is odd, we can only get bounds on moments of modulus of the Stieltjes transform of order p − 1 . We do that and bound also the Stieltjes transform depending on the argu- ment z 1 by using Lemma 5.7. We then get a bound of order (ln N ) 3 p +3 √ N for | ∂ t 1 ∂ t 2 · · · ∂ t p N Γ t µ | t =0 . This provides a similar bound for the correlators of order p +1 , which is now even. Using Hölder inequality back on the previous estimate and Lemma 5.7 on at most one term, we finally bound | ∂ t 1 ∂ t 2 · · · ∂ t p N Γ t µ | t =0 by (ln N ) 3( p +1) which concludes the argument. ⋄ 5.4 Second order expansion of linear statistics In this section we show how to expand the expectation of linear statistics one step further. To this end we need to assume that φ ± N expands to the next order. Assumption 5.21. Uniformly in M , N ( z ) =: φ ± ( z ) + 1 1 ( z ) + 1 2 ( z ) + O ( 1 φ ± N φ ± N 2 φ ± N 3 ) Lemma 5.22. Suppose Assumption 5.21 holds. Then, N →∞ E [ N 2 ∆ G N ( z ) − Nm ( z )] − r ( z ) = 0 (76) lim 85

  83. with r ( z ) = K − 1 F µ ( z ) where � θ 2 � 2 ∂ z m ( z ) − θ 3 z G ( z ) + θ 2 2 [ C ( z, z ) + ( θ φ − ( z ) e − θG ( z ) 3 ∂ 2 2 ∂ z G ( z ) + m ( z )) 2 ] F µ ( z ) = � θ 2 � + φ − 1 ( z ) e − θG ( z ) + φ − 2 ( z ) e − θG ( z ) 2 ∂ z G ( z ) − θm ( z ) � ( θ 2 2 − θ ) ∂ z m ( z ) + ( θ 3 3 + θ − θ 2 + φ + ( z ) e θG ( z ) 2 ) ∂ 2 z G ( z ) � + θ 2 2 [( m ( z ) − 2 − θ ∂ z G ( z )) 2 + C ( z, z )] 2 1 ( z ) e θG ( z ) [ θm ( z ) + ( θ 2 + φ + 2 − θ ) ∂ z G ( z )] + φ − 2 ( z ) e θG ( z ) Proof. The proof is as before to show that � 1 � θQ µ ( z ) E [∆ G N ( z )] = 1 N E µ ( z ) + 1 N 2 F µ ( z ) + ˜ R N µ ( z ) + o N 2 by using Nekrasov’s equation of Theorem 5.10, expanding the exponentials and using Lemmas 5.19 and 5.16. We then apply K − 1 on both sides to conclude. ⋄ 5.5 Expansion of the partition function To expand the partition function in the spirit of what we did in the continuous case, we need to compare our partition function to one we know. In the con- tinuous case, Selberg integrals were computed by Selberg. In the discrete case it turns out we can compute the partition function of binomial Jack measure [13] which corresponds to the choice of weight depending on two positive real parameters α, β > 0 given by : Γ( M + θ ( N − 1) + 3 2 ) w J ( ℓ ) = ( αβθ ) ℓ (77) Γ( ℓ + 1)Γ( M + θ ( N − 1) + 1 − ℓ ) Then, the partition function can be computed explicitely and we find (see the work in progress with Borot and Gorin) : Theorem 5.23. With summation going over ( ℓ 1 , . . . , ℓ N ) satisfying ℓ 1 ∈ Z ≥ 0 and ℓ i +1 − ℓ i ∈ { θ, θ + 1 , θ + 2 , . . . } , i ∈ { 1 , . . . , N − 1 } , we have � � � N 1 Γ( ℓ i +1 − ℓ i + 1)Γ( ℓ i +1 − ℓ i + θ ) Z J = w J ( ℓ i ) N N 2 θ Γ( ℓ i +1 − ℓ i )Γ( ℓ i +1 − ℓ i + 1 − θ ) 1 ≤ i<j ≤ N i =1 N � Γ( θ ( N + 1 − i ))Γ( M + θ ( N − 1) + 3 2 ) (1 + αβθ ) MN ( αβθ N 2 ) θ N ( N − 1) = . 2 Γ( θ )Γ( M + 1 + θ ( i − 1)) i =1 86

  84. On the other hand, the equilibrium measure µ J for this model can be com- puted and we find that if M N → ( m − θ ) and q = αβθ , there exists α, β ∈ (0 , m ) so that µ J has density equal to 0 or 1 /θ outside ( α, β ) , and in the liquid region ( α, β ) the density is given by : � � x (1 − q ) + q m − qθ − θ µ J ( x ) = 1 πθ arccot � , (( x (1 − q ) + q m − qθ − θ )) 2 + 4 xq ( m − x ) where arccot is the reciprocal of the cotangent function. Therefore, depending on the choices of the parameters, the behavior of µ J ( x ) as x varies from 0 to m is given by the following four scenarios (it is easy to see that all four do happen) • Near zero µ J ( x ) = 0 , then 0 < µ J ( x ) < θ − 1 , then µ J ( x ) = θ − 1 near m ; • Near zero µ J ( x ) = θ − 1 , then 0 < µ J ( x ) < θ − 1 , then µ J ( x ) = θ − 1 near m ; • Near zero µ J ( x ) = 0 , then 0 < µ J ( x ) < θ − 1 , then µ J ( x ) = 0 near m ; • Near zero µ J ( x ) = θ − 1 , then 0 < µ J ( x ) < θ − 1 , then µ J ( x ) = 0 near m . We want to interpolate our model with weight w with a Jack binomial model with weight w J . To this end we would like to consider a model with the same liquid/frozen/void regions so that the model with weight w t w 1 − t , t ∈ [0 , 1] , J corresponds to an equilibrium measure with the same liquid/frozen/void regions and an equilibrium measure given by the interpolation between both equilibrium measure. However, doing that we may have problems to satisfy the conditions of Nekrasov’s equations if w/w J may vanish or blow up. It is possible to circumvent this point by proving that the boundary points are frozen with overwhelming probability, hence allowing more freedom with the boundary point. In these lecture notes, we will not go to this technicality. Theorem 5.24. Assume there exists M, q so that ln( w/w J ) is approximated, a, ˆ uniformly on [ˆ b ] by ln w ( Nx ) = − N ( V − V J )( x ) + ∆ 1 V ( x ) + 1 N ∆ 2 V ( x ) + o ( 1 N ) . w j where V − V J and ∆ 1 V are analytic in M , whereas ∆ 2 V is bounded continuous a, ˆ b ] . Assume moreover that φ ± on [ˆ N satisfies Assumption 5.14. Then, we have ln Z θ,w N = − N 2 F 0 ( θ, V ) + NF 1 ( θ, w ) + F 0 ( θ, w ) + o (1) Z J N with F 0 ( θ, V ) = − 2 θ E ( µ ) + 2 θ E ( µ J ) ˆ 1 ˆ 1 1 1 ˆ ˆ F 1 ( θ, V ) = ( V J − V )( z ) m t ( z ) dt + ∆ 1 V ( z ) G t ( z ) dt 2 πi 2 πi 0 C 0 C ˆ 1 1 ˆ F 2 ( θ, V ) = (( V J − V )( z ) r t ( z ) + ∆ 1 V ( z ) m t ( z ) + ∆ 2 V ( z ) G t ( z )) dzdt 2 πi 0 C 87

  85. Proof. We consider P θ,w t the discrete β model with weight w t w 1 − t . We have N J ˆ 1 ln Z θ,w � ln w P θ,w t N = ( ( ℓ i )) dt Z J N w J 0 N i ˆ 1 µ N � � P θ,w t N 2 ( V J − V ) + N ∆ 1 V + ∆ 2 V = (ˆ ) dt + o (1) . N 0 Denote µ t the equilibrium measure for w t w 1 − t . Clearly J ˆ 1 ˆ 1 µ N (∆ 2 V )) dt = P θ,w t lim (ˆ µ t (∆ 2 V ) dt . N N →∞ 0 0 For the first two terms we use the analyticity of the potentials and Cauchy formula to express everything in terms of Stieltjes functions ˆ 1 µ N � � P θ,w t N 2 ( V J − V ) + N ∆ 1 V (ˆ ) dt N 0 ˆ 1 � � 1 ˆ N 2 ( V J − V ) + N ∆ 1 V ( z ) P θ,w t = ( G N ( z )) dzdt . N 2 πi 0 C We then use Lemma 5.22 since all our assumptions are verified. This provides an expansion : ln Z θ,w = − N 2 F 0 ( θ, V ) + NF 1 ( θ, w ) + F 0 ( θ, w ) + o (1) . N Z J N Again by taking the large N limit we can identify F 0 ( θ, V ) = −E ( µ V ) . For F 1 we find ˆ 1 ˆ 1 1 ˆ 1 ˆ ( V J − V )( z ) m t ( z ) dt + F 1 ( θ, w ) = ∆ 1 V ( z ) G t ( z ) dt 2 πi 2 πi 0 C 0 C and ˆ 1 1 ˆ (( V J − V )( z ) r t ( z ) + ∆ 1 V ( z ) m t ( z ) + ∆ 2 V ( z ) G t ( z )) dzdt F 2 ( θ, w ) = 2 πi 0 C ⋄ 6 Continuous Beta-models : the several cut case In this section we consider again the continuous β -ensembles, but in the case where the equilibrium measure has a disconnected support. The strategy has to be modified since in this case the master operator Ξ is not invertible. In fact, the central limit theorem is not true as if we consider a smooth function f which equals one on one connected piece of the support but vanishes other- wise, and if we expect that the eigenvalues stay in the vicinity of the support of 88

the equilibrium measure, then the linear statistic $\sum f(\lambda_i)$ should be an integer and therefore cannot fluctuate like a Gaussian variable. It turns out however that the previous strategy works as soon as we fix the filling fractions, that is the number of eigenvalues in a neighborhood of each connected piece of the support. The idea will therefore be to obtain central limit theorems conditionally on the filling fractions. We will as well expand the partition functions for such fixed filling fractions. The latter expansion will allow to estimate the distribution of the filling fractions and to derive their limiting distribution, giving a complete picture of the fluctuations. These ideas were developed in [12, 15]; [12] also includes the case of hard edges. After this work, a very special case (two connected components and a polynomial potential) could be treated in [27] by using Riemann-Hilbert techniques. I will here follow the strategy of [12], but will use general test functions instead of Stieltjes functionals as in Section 4. So, as in Section 4, we consider the probability measure
$$d\mathbb{P}^{\beta,V}_N(\lambda_1,\dots,\lambda_N) = \frac1{Z^{\beta,V}_N}\,\Delta(\lambda)^\beta\, e^{-N\beta\sum V(\lambda_i)}\prod_{i=1}^N d\lambda_i.$$
By Theorems 4.4 and 4.3, if $V$ satisfies Assumption 4.2, we know that the empirical measure of the $\lambda$'s converges towards the equilibrium measure $\mu^{eq}_V$. We shall hereafter assume that $\mu^{eq}_V=\mu_V$ has a disconnected support but an off-critical density, in the sense of the following assumption.

Assumption 6.1. $V:\mathbb{R}\to\mathbb{R}$ is of class $C^p$ and $\mu^{eq}_V$ has support given by $S=\cup_{i=1}^K[a_i,b_i]$ with $b_i<a_{i+1}<b_{i+1}<a_{i+2}$, and
$$\frac{d\mu_V}{dx}(x) = H(x)\sqrt{\Big|\prod_{i=1}^K(x-a_i)(b_i-x)\Big|},$$
where $H$ is a continuous function such that $H(x)\ge\bar c>0$ a.e. on $S$.

We discuss this assumption in Lemma 6.5. Let us notice that the fact that the support of $\mu_V$ has a finite number of connected components is guaranteed when $V$ is analytic. Also, the fact that the density vanishes as a square root at the boundary of the support is generic, cf [64]. Remember, see Lemma 4.5, that $\mu_V$ is described by the fact that the effective potential $V_{\rm eff}$ is non-negative outside of the support of $\mu_V$. We will also assume hereafter that Assumption 4.2 holds and that $V_{\rm eff}$ is strictly positive outside $S$. By Theorem 4.8, we therefore know that the eigenvalues will remain in $S_\varepsilon=\cup_{i=1}^p S^i_\varepsilon$, $S^i_\varepsilon:=[a_i-\varepsilon,b_i+\varepsilon]$, with probability greater than $1-e^{-C(\varepsilon)N}$ for some $C(\varepsilon)>0$, for all $\varepsilon>0$. We take $\varepsilon$ small enough so that $S_\varepsilon$ is still the union of $p$ disjoint connected components $S^i$, $1\le i\le p$. Moreover, we will assume that $V$ is $C^1$ so that the conclusions of Theorem 4.14 and Corollary 4.16 still hold. In particular:

Corollary 6.2. Assume $V$ is $C^1$. There exist $c>0$ and $C$ finite such that
$$\mathbb{P}^{\beta,V}_N\Big(\max_{1\le i\le p}\big|\#\{j:\lambda_j\in[a_i-\varepsilon,b_i+\varepsilon]\} - N\mu_V([a_i,b_i])\big| \ge C\sqrt{N\ln N}\Big) \le e^{-cN}.$$
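The phenomenon described above, and the concentration of the filling fractions stated in Corollary 6.2, can be observed in a crude simulation. The sketch below is only an illustration with choices not made in the notes: it runs a Metropolis sampler for $\mathbb{P}^{\beta,V}_N$ with the double-well potential $V(x)=(x^2-1)^2$ (assumed deep enough to produce a two-cut equilibrium measure) and records $N_+=\#\{j:\lambda_j>0\}$ along the run. By symmetry $\mu_V$ gives mass $1/2$ to the right cut, so $N_+$ stays within a few units of $N/2$; being integer valued, it exhibits the discrete fluctuations that rule out a naive central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, N, step = 2.0, 60, 0.05
V = lambda x: (x ** 2 - 1.0) ** 2            # illustrative double-well potential

lam = np.concatenate([rng.normal(-1.0, 0.05, N // 2), rng.normal(1.0, 0.05, N - N // 2)])
counts = []
for sweep in range(4000):
    for j in range(N):                        # one Metropolis move per coordinate
        prop = lam[j] + step * rng.normal()
        others = np.delete(lam, j)
        log_new = beta * np.sum(np.log(np.abs(prop - others))) - N * beta * V(prop)
        log_old = beta * np.sum(np.log(np.abs(lam[j] - others))) - N * beta * V(lam[j])
        if np.log(rng.random()) < log_new - log_old:
            lam[j] = prop
    if sweep >= 1000:                         # discard burn-in sweeps
        counts.append(int(np.sum(lam > 0.0)))

counts = np.array(counts)
print("N/2 =", N // 2, " mean N_+ =", counts.mean(), " std =", counts.std())
print("values taken:", np.unique(counts, return_counts=True))
```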

  87. We can therefore restrict our study to the probability measure given, if we denote by N i = # { j : λ j ∈ [ a i − ε, b i + ε ] } , ˆ n i = N i /N and ˆ n K ) , n = (ˆ n 1 , . . . , ˆ by N � ∆( λ ) β e − Nβ � V ( λ i ) 1 S ε d P β,V d λ i N,n ( λ 1 , . . . , λ N ) = 1 max i | N i − Nµ ([ a i ,b i ]) |≤ C √ N ln N Z β,V N,ε i =1 since exponentially small corrections do not affect our polynomial expansions. As ε > 0 is kept fixed we forget it in the notations and denote � N n ( λ 1 , . . . , λ N ) = 1 N i =¯ n i N 1 S ε ∆( λ ) β e − Nβ � V ( λ i ) d P β,V d λ i N, ˆ Z β,V N, ˆ n i =1 the probability measure obtained by conditioning the filling fractions to be equal to ˆ n = ( n 1 , . . . , n p ) . Clearly, we have � N ! Z β,V N 1 ! · · · N p ! Z β,V (78) = N N, ˆ n √ | N i − Nµ ([ a i ,b i ]) |≤ C N ln N Z β,V � N ! N, ˆ n P β,V P β,V (79) = N Z β,V N, ˆ n N 1 ! · · · N K ! √ N | N i − Nµ ([ a i ,b i ]) |≤ C N ln N N ! where the combinatorial term N 1 ! ··· N K ! comes from the ordering of the eigenval- ues to be distributed among the cuts. Hence, we will retrieve large N expansions of the partition functions and linear statistics of the full model from those of the fixed filling fraction models. 6.1 The fixed filling fractions model To derive central limit theorems and expansion of the partition function for fixed filling fractions we first need to check that we have the same type of results that before we fix the filling fractions. We leave the following Theorem as an exercise, its proof is similar to the proof of Theorem 4.4. Recall the notation : ˆ ˆ [1 2 V ( x ) + 1 2 V ( y ) − 1 2 ln | x − y | ] d µ ( x ) d µ ( y ) . E ( µ ) = Theorem 6.3. Fix n i ∈ (0 , 1) so that � n i = 1 . Under the above assumptions • Assume that (ˆ n i ) 1 ≤ i ≤ K converges towards ( n i ) 1 ≤ i ≤ K . The law of the vec- � N 1 + ··· + N i j = N 1 + ··· + N i − 1 +1 δ λ j under P β,V tor of p empirical measures ˆ µ N 1 = i N i N, ˆ n satisfies a large deviation principle on the space of p tuples of probability measures on S i = [ a i − ε, b i + ε ] , 1 ≤ i ≤ p , in the scale N 2 with good rate function I n = J n − inf J n where � K J n ( µ 1 , . . . , µ p ) = β E ( n i µ i ) . i =1 90

• $J^n$ achieves its minimal value uniquely at $(\mu^n_i)_{1\le i\le p}$. Besides, there exist $p$ constants $C^n_i$ such that
$$\tilde V^n_{\rm eff}(x)=V(x)-\int\ln|x-y|\,d\Big(\sum n_i\mu^n_i\Big)(y)-C^n_i\qquad(80)$$
is greater or equal to $0$ on $S^i$ and equal to $0$ on the support of $\mu^n_i$.

• The conclusions of Lemma 4.14 and Corollary 4.16 hold in the fixed filling fraction case, in the sense that for $\hat n_i=N_i/N$, $\sum N_i=N$, we can smooth $\hat\mu_N=\sum\hat n_i\hat\mu^i_N$ into $\tilde\mu_N$ (by pulling apart eigenvalues and taking the convolution with a small uniform variable), so that there exist $c>0$, $C_{p,q}<\infty$ such that for $t>0$
$$P^{\beta,V}_{N,\hat n}\Big(D\big(\tilde\mu_N,\sum\hat n_i\mu^{\hat n}_i\big)\ge t\Big)\le e^{C_{p,q}N\ln N-\beta N^2t^2}+e^{-cN}.$$

Note above that the filling fractions $N_i/N$ may vary when $N$ grows: the first two statements hold if we take the limit, and the last with $\hat n_i=N_i/N$ exactly equal to the filling fractions (the measures $\mu^n_i$ are defined for any given $n_i$ such that $\sum n_i=1$). The last result does not hold if $\hat n$ is replaced by its limit $n$, unless $\hat n$ is close enough to $n$.

To get the expansion for the fixed filling fraction model, it is essential to check that these models are off-critical if the $\hat n_i$ are close to $\mu(S^i)$:

Lemma 6.4. Assume $V$ is analytic. Fix $\varepsilon>0$. There exists $\delta>0$ so that if $\max_i|n_i-\mu_V(S^i)|\le\delta$, the $(\mu^n_i)_{1\le i\le p}$ are off-critical in the sense that there exist $a^n_i<b^n_i$ in $S^i_\varepsilon$ and $H^n_i$ uniformly bounded below by a positive constant on $S^i_\varepsilon$ such that
$$\frac{d\mu^n_i}{dx}(x)=H^n_i(x)\sqrt{(x-a^n_i)(b^n_i-x)}.$$

Proof. We first observe that $n\mapsto\int f\,d\mu^n_i$ is smooth for all smooth functions $f$. Indeed, take two filling fractions $n,m$ and denote in short $\mu^n=\sum n_i\mu^n_i$. Recall that $\mu^n$ minimizes $E$ on the set of probability measures with filling fractions $n$. We decompose $E$ as
$$E(\nu)=E(\mu^n)+\beta\int\tilde V^n_{\rm eff}(x)\,d(\nu-\mu^n)(x)+\frac\beta2D^2(\nu,\mu^n)+\beta\sum_hC^n_h\big(\nu([\hat a_h,\hat b_h])-n_h\big),\qquad(81)$$
where $\tilde V^n_{\rm eff}$ is the effective potential (80) for the measure $\mu^n$. Note here that we used that $\nu-\mu^n$ has zero mass to write
$$\iint\ln|x-y|\,d(\nu-\mu^n)(x)\,d(\nu-\mu^n)(y)=-D^2(\nu,\mu^n).$$
We then take $\nu$ a measure with filling fractions $m$; since $\mu^m$ minimizes $E$ among such measures,
$$E(\mu^m)\le E(\nu).\qquad(82)$$

We choose $\nu$ to have the same support as $\mu^n$, so that $\int\tilde V^n_{\rm eff}(x)\,d(\nu-\mu^n)(x)=0$, and notice that $\int\tilde V^n_{\rm eff}(x)\,d(\mu^m-\mu^n)(x)\ge0$. Hence, we deduce from (81) and (82) that
$$D^2(\mu^m,\mu^n)\le D^2(\nu,\mu^n).$$
Finally, we choose $\nu=\mu^n+\sum_i(m_i-n_i)\frac{1}{|B_i|}1_{B_i}\,dx$ with $B_i$ an interval in the support of $\mu^n_i$ where its density is bounded below by some fixed value. For $\max|m_i-n_i|$ small enough this is a probability measure. Then, it is easy to check that
$$D^2(\mu^m,\mu^n)\le D^2(\nu,\mu^n)\le C\|m-n\|_\infty^2,$$
from which the conclusion follows from (48).

Next, we use the Dyson-Schwinger equation with the test function $f(x)=(z-x)^{-1}$ to deduce that $G^n_i(z)=\int(z-x)^{-1}d\mu^n_i(x)$ satisfies the equation
$$G^n_i(z)\Big(\sum_jn_jG^n_j(z)\Big)=\int\frac{V'(x)}{z-x}\,d\mu^n_i(x)=V'(z)G^n_i(z)-f^n_i(z),$$
where $f^n_i(z)=\int\big(V'(z)-V'(y)\big)(z-y)^{-1}d\mu^n_i(y)$. Hence we deduce that
$$G^n_i(z)=\frac{1}{2n_i}\Bigg(V'(z)-\sum_{j\neq i}n_jG^n_j(z)-\sqrt{\Big(V'(z)-\sum_{j\neq i}n_jG^n_j(z)\Big)^2-4n_if^n_i(z)}\Bigg).$$
The imaginary part of $G^n_i$ gives the density of $\mu^n_i$ in the limit where $z$ goes to the real axis. Since the first term in the above right hand side is obviously real, the density is given by the square root term, and therefore we want to show that
$$F(z,n)=\Big(V'(z)-\sum_{j\neq i}n_jG^n_j(z)\Big)^2-4n_if^n_i(z)$$
vanishes only at two points $a^n_i,b^n_i$ for $z\in S^i$. The previous point shows that $F$ is Lipschitz in the filling fraction $n$ as $V$ is $C^3$ (since then $f^n_i$ is the integral of a $C^1$ function under $\mu^n_i$), whereas Assumption 6.1 implies that at $n^*_i=\mu_V(S^i)$, $F$ vanishes at only two points and has non-vanishing derivative at these points. This implies that the points where $F(z,n)$ vanishes in $S^i$ are at distance of order at most $\max_i|n_i-n^*_i|$ from $a_i,b_i$. However, to guarantee that there are exactly two such points, we use the analyticity of $V$, which guarantees that $F(\cdot,n)$ is analytic for all $n$, so that we can apply Rouché's theorem. As $F(z,n^*)$ does not vanish on the boundary of some compact neighborhood $K$ of $a_i$, for $n$ close enough to $n^*$ we have $|F(z,n)-F(z,n^*)|\le|F(z,n^*)|$ for $z\in\partial K$. This guarantees by Rouché's theorem, since $F(\cdot,n)$ is analytic in a neighborhood of $S^i$ as $V$ is, that $F(\cdot,n)$ and $F(\cdot,n^*)$ have the same number of zeroes inside $K$. ⋄

To apply the method of Section 4, we can again use the Dyson-Schwinger equations, and in fact Lemma 4.17 still holds true: let $f_i:\mathbb R\to\mathbb R$ be $C^1_b$

functions, $0\le i\le p$. Then, taking the expectation under $P^{\beta,V}_{N,\hat n}$, we deduce
$$E\Big[M_N(\Xi f_0)\prod_{i=1}^pN\hat\mu_N(f_i)\Big]=\Big(\frac1\beta-\frac12\Big)E\Big[\hat\mu_N(f'_0)\prod_{i=1}^pN\hat\mu_N(f_i)\Big]+\frac1\beta\sum_{\ell=1}^pE\Big[\hat\mu_N(f_0f'_\ell)\prod_{i\neq\ell}N\hat\mu_N(f_i)\Big]$$
$$\qquad+\frac12E\Big[\iint\frac{f_0(x)-f_0(y)}{x-y}\,dM_N(x)\,dM_N(y)\prod_{i=1}^pN\hat\mu_N(f_i)\Big]+O(e^{-cN}),$$
where the last term comes from the boundary terms, which are exponentially small by the large deviation estimates of Theorem 4.8. We still denoted $M_N(f)=\sum f(\lambda_i)-N\sum\hat n_i\mu^{\hat n}_i(f)$, but this time the mass in each $S^i$ is fixed, so this quantity is unchanged if we change $f$ by adding a piecewise constant function on the $S^i$'s. We therefore have this time to find, for any sufficiently smooth function $g$, a function $f$ such that there are constants $C_j$ so that
$$\Xi^{\hat n}f(x)=V'(x)f(x)-\sum_{i=1}^p\hat n_i\int\frac{f(x)-f(y)}{x-y}\,d\mu^{\hat n}_i(y)=g(x)+C_j,\qquad x\in S^j.$$
By the characterization of $\mu^{\hat n}$, if $S^{\hat n}_j=[a^{\hat n}_j,b^{\hat n}_j]$ denotes the support of $\mu^{\hat n}_j$ inside $S^j_\varepsilon$, this question is equivalent to finding $f$ so that on every $[a^{\hat n}_j,b^{\hat n}_j]$,
$$PV\int\frac{f(y)}{x-y}\,H^{\hat n}_j(y)\sqrt{(y-a^{\hat n}_j)(b^{\hat n}_j-y)}\,dy=g(x)+C_j.$$
This question was solved in [73] under the condition that $g,f$ are Hölder with some positive exponent. Once one gets existence of these functions, the properties of the inverse are the same as before, since inverting the operator on one $S^i$ corresponds to the same inversion. For later use, we prove a slightly stronger statement.

Lemma 6.5. Let $\theta\in[0,1]$ and fix $n_i\in(0,1)$ with $\sum n_i=1$. Let $S^n_i$ denote the support of $\mu^n_i$. We set, for $i\in\{1,\ldots,K\}$ and all $x\in S^n_i$,
$$\Xi^n_\theta f(x):=V'(x)f(x)-n_i\int\frac{f(x)-f(y)}{x-y}\,d\mu^n_i(y)-\theta\sum_{j\neq i}n_j\int\frac{f(x)-f(y)}{x-y}\,d\mu^n_j(y).$$
Then for all $g\in C^k$, $k>2$, there exist constants $C_j$, $1\le j\le p$, so that the equation
$$\Xi^n_\theta f(x)=g(x)+C_j,\qquad x\in S^n_j,$$
has a unique solution which is Hölder for some exponent $\alpha>0$. We denote by $(\Xi^n_\theta)^{-1}g$ this solution. There exist finite constants $D_j$ such that
$$\|(\Xi^n_\theta)^{-1}g\|_{C^{j-2}}\le D_j\|g\|_{C^j}.$$
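Before turning to the proof, here is a small numerical illustration, not taken from the notes, of the single-cut inversion it relies on: on the model cut $[-1,1]$ with weight $H\equiv1$, the equation $PV\int f(y)\sqrt{1-y^2}\,(x-y)^{-1}dy=g(x)+C$ is solved explicitly in a Chebyshev basis through the classical identity $PV\int U_{n-1}(y)\sqrt{1-y^2}\,(x-y)^{-1}dy=\pi T_n(x)$, and the constant $C$ is exactly the freedom of adding a constant discussed above. The test function and truncation degree are arbitrary choices.

    # Numerical sketch (not from the notes): invert the airfoil-type equation
    #   PV \int_{-1}^{1} phi(y) sqrt(1-y^2) / (x-y) dy = g(x) + C
    # on a single cut via Chebyshev polynomials, then check the answer.
    import numpy as np

    g = lambda x: np.exp(x)          # arbitrary smooth test function
    deg = 30
    c = np.polynomial.chebyshev.chebinterpolate(g, deg)   # g ~ sum_k c[k] T_k

    def phi(y):
        # phi = (1/pi) sum_{n>=1} c[n] U_{n-1}(y), using U_{n-1}(cos t) = sin(n t)/sin(t)
        t = np.arccos(np.clip(y, -1.0, 1.0))
        s = np.sin(t)
        return sum(c[n] * np.sin(n * t) / s for n in range(1, deg + 1)) / np.pi

    # Gauss-Chebyshev (second kind) nodes and weights for the weight sqrt(1-y^2)
    M = 64
    k = np.arange(1, M + 1)
    tk = k * np.pi / (M + 1)
    nodes, weights = np.cos(tk), np.pi / (M + 1) * np.sin(tk) ** 2

    # PV integral = regular part + phi(x0) * PV\int sqrt(1-y^2)/(x0-y) dy,
    # and the last principal value equals pi*x0 for |x0| < 1.
    for x0 in (-0.61, 0.12, 0.83):
        regular = np.sum(weights * (phi(nodes) - phi(x0)) / (x0 - nodes))
        pv = regular + phi(x0) * np.pi * x0
        print(f"x0={x0:+.2f}   PV integral = {pv:+.8f}   g(x0) + C = {g(x0) - c[0]:+.8f}")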

Proof. Let us first recall the result from [73, section 90] which solves the case $\theta=1$. Let $S^n_k=[a^n_k,b^n_k]$. Because of the characterization of the equilibrium measure, inverting $\Xi_1$ is equivalent to seeking $f$ Hölder such that there are $K$ constants $(C_k)_{1\le k\le K}$ such that on $S_k$
$$K_1f(t)=PV\int_{\cup S_k}\frac{f(x)}{t-x}\,dx=g(t)+C_k$$
for all $k\in\{1,\ldots,K\}$. Then, by [73, section 90], if $g$ is Hölder, there exists a unique solution and it is given by
$$f(x)=K_1^{-1}g(x):=\frac{\sigma(x)}{\pi}\sum_kPV\int_{S_k}\frac{g(y)+C_k}{\sigma(y)(y-x)}\,dy,$$
where $\sigma(x)=\sqrt{\prod_i(x-a^n_i)(x-b^n_i)}$. The proof shows uniqueness and then exhibits a solution. To prove uniqueness, we must show that $K_1f=C_k$ has only the trivial solution, namely zero. To do so, one remarks that
$$\Phi(z)=\int_{\cup S_k}\frac{f(x)}{x-z}\,dx$$
is such that $\Psi(z)=(\Phi(z)-C_k/2)\sqrt{(z-a^n_k)(z-b^n_k)}$ is holomorphic in a neighborhood of $S^n_k$ and vanishes at $a^n_k,b^n_k$. Indeed, $K_1f=C_k$ is equivalent to $\Phi_+(x)+\Phi_-(x)=C_k$, which implies that $\Psi_+(x)=\Psi_-(x)$ on the cuts. Hence
$$\Phi(z)-C_k/2=[(z-a^n_k)(z-b^n_k)]^{1/2}\,\Omega(z)\qquad(83)$$
with $\Omega$ holomorphic in a neighborhood of $S^n_k$, and so $\Phi'(z)\sigma(z)$ is holomorphic everywhere. Hence, since $\Phi'$ goes to zero at infinity like $1/z^2$, $P(z)=\Phi'(z)\sigma(z)$ is a polynomial of degree at most $K-2$. We claim that this is in contradiction with the fact that the periods of $\Phi$ then vanish, see [37, Section II.1] for details. Let us roughly sketch the idea. Because $\Phi=u+iv$ is analytic outside the cuts, if $\Lambda=\cup\Lambda_k$ is a set of contours surrounding the cuts and $\Lambda^c$ is the part of the complex plane outside $\Lambda$, we have by Stokes' theorem
$$J=\iint_{\Lambda^c}\big[(\partial_xu)^2+(\partial_yu)^2\big]\,dxdy=\int_\Lambda u\,d\bar v.$$
Letting $\Lambda$ go to $S$ we find
$$\int_\Lambda u\,d\bar v=\int_Su_+\,dv_+-\int_Su_-\,dv_-.$$
But by the condition $\Phi_++\Phi_-=C_k$ we see that $u_++u_-=\Re(C_k)$ and $d(v_++v_-)=0$, and hence
$$J=\sum_k\Re(C_k)\int_{S_k}dv_+=\sum_k\Re(C_k)\big(v_+(b^n_k)-v_+(a^n_k)\big).$$

On the other hand, $\Phi(z)=\int_{-\infty}^zP(\xi)/\sigma(\xi)\,d\xi$ for any path avoiding the cuts, and hence $\Phi$ converges towards finite values on the cuts. But since $\Phi'(\xi)=P(\xi)/\sigma(\xi)$ is analytic outside the cuts and goes to zero like $1/z^2$ at infinity,
$$0=\oint_{\Lambda_k}\Phi'(\xi)\,d\xi=2\int_{S^n_k}\Phi'(x)\,dx=2\big(\Phi(b^n_k)-\Phi(a^n_k)\big).$$
Thus $v(b^n_k)-v(a^n_k)=0$ and we conclude that $J=0$. Therefore $\Phi$ vanishes, and so does $f$.

Next, we consider the general case $\theta\in(0,1)$. We show that $\Xi^n_\theta$ is injective on the space of Hölder functions. Again, it is sufficient to consider the homogeneous equation
$$K_\theta f(x)=(1-\theta)K_0f(x)+\theta K_1f(x)=C_k\quad\text{on }S_k\text{ for all }k.\qquad(84)$$
Here $K_0f(x)=\int_{S_k}\frac{f(y)}{y-x}\,dy$ on $S_k$ for all $k$. If $K_\theta$ is injective, so is $\Xi^n_\theta$, by dividing the function $f$ on $S_k$ by $\sigma_k(x)S_k(x)=d\mu^n_k/dx$. Recall that the Tricomi airfoil equation shows that $K_0$ is invertible, see Lemma 4.18, and we have just seen that $K_1$ is injective. To see that $K_\theta$ is still injective for $\theta\in[0,1]$, we notice that we can invert $K_1$ to deduce that we seek a Hölder function $f$ and a piecewise constant function $C$ so that
$$f(x)=-\frac{1-\theta}{\theta}\,K_1^{-1}\big(K_0f-C\big)(x).$$
Let us consider this equation for $x\in S_k$ and put $f=K_0^{-1}g$. By the formulas for $K_1^{-1}$ and $K_0^{-1}$, we deduce that we seek constants $d,D$ and a function $g$ so that on $S_k$:
$$\frac{1}{\sigma_k(x)}\,PV\int_{S_k}\frac{g(y)+d_k}{x-y}\,\sigma_k(y)\,dy=-\frac{1-\theta}{\theta}\,\frac{1}{\sigma(x)}\sum_\ell PV\int_{S_\ell}\frac{g(y)+D_\ell}{y-x}\,\sigma(y)\,dy.$$
Here, we used a formula for $K_0^{-1}$ where $\sigma_k$ was replaced by $\sigma_k^{-1}$: this alternative formula is due to the Parseval formula [85, (2) p.174], see (16) and (18) in [85]. Note here that both sides vanish at the end points of $S_k$ by the choice of the constants. As a consequence,
$$\frac{1}{\sigma_k(x)}\,PV\int_{S_k}\frac{g(y)+d_k}{x-y}\,\sigma_k(y)\,dy+\frac{1-\theta}{\theta}\,\frac{1}{\sigma(x)}\sum_\ell PV\int_{S_\ell}\frac{g(y)+D_\ell}{y-x}\,\sigma(y)\,dy$$
is analytic in a neighborhood of $S_k$. We next integrate over a contour $\mathcal C_k$ around

$S_k$ to deduce that
$$\int_{S_k}\frac{g(y)+d_k}{x-y}\,\sigma_k(y)\,dy=\frac{1}{2i\pi}\oint_{\mathcal C_k}\frac{dz}{z-x}\int_{S_k}\frac{g(y)+d_k}{z-y}\,\sigma_k(y)\,dy$$
$$\qquad=-\frac{1-\theta}{\theta}\,\frac{1}{2i\pi}\oint_{\mathcal C_k}\frac{\sigma_k(z)\,dz}{(z-x)\,\sigma(z)}\sum_\ell\int_{S_\ell}\frac{g(y)+D_\ell}{y-z}\,\sigma(y)\,dy$$
$$\qquad=-\frac{1-\theta}{\theta}\int_{S_k}\frac{\sigma_k(y)}{\sigma(y)}\,\frac{g(y)+D_k}{x-y}\,\sigma(y)\,dy=-\frac{1-\theta}{\theta}\int_{S_k}\frac{g(y)+D_k}{x-y}\,\sigma_k(y)\,dy,$$
where we used that $\sigma_k/\sigma$ is analytic in a neighborhood of $S_k$, as are the terms coming from the other cuts. Hence we seek $g$ satisfying
$$\frac1\theta\int_{S_k}\frac{g(y)+d_k}{x-y}\,\sigma_k(y)\,dy=0$$
for some constant $d_k$. The Tricomi airfoil equation shows that this equation has a unique solution, which occurs when $g+d_k$ is a multiple of $1/(\sigma_k)^2$. By our smoothness assumption on $g$, we deduce that $g+d_k$ must vanish. This implies that $f=K_0^{-1}g$ vanishes by Tricomi. Hence, we conclude that $K_\theta$, and therefore $\Xi^n_\theta$, is injective on the space of Hölder continuous functions.

To show that $\Xi^n_\theta$ is surjective, it is enough to show that it is surjective when composed with the inverse of the single cut operators $\Xi^n=(\Xi^n_1,\ldots,\Xi^n_p)$, that is, that
$$L_\theta f(x):=n_if(x)+\theta Rf(x),\qquad Rf(x)=\sum_{j\neq i}n_j\int\frac{(\Xi^n_j)^{-1}f(y)}{x-y}\,d\mu^n_j(y),\qquad x\in S^i=[a^n_i,b^n_i],$$
is surjective. But $R$ is a kernel operator, and in fact it is Hilbert-Schmidt in $L^2(\sigma^{-\epsilon}dx)$ for any $\epsilon>0$ (here $\sigma(x)=\sqrt{\prod(x-a^n_i)(b^n_i-x)}$). Indeed, for $x\in S^i$, $R$ is a sum of terms of the form
$$\int\frac{(\Xi^n_j)^{-1}f(y)}{x-y}\,d\mu^n_j(y)=\int\frac{1}{x-y}\,\frac{1}{S_j(y)}\,PV\int_{a^n_j}^{b^n_j}\frac{f(t)}{(y-t)\,\sigma_j(t)}\,dt\,d\mu^n_j(y)$$
by Remark 4.19. Even though we have a principal value inside the (smooth) integral, we can apply Fubini and notice that
$$\int\frac{1}{S_j(y)}\,PV\frac{1}{(x-y)(y-t)}\,d\mu^n_j(y)=\frac{1}{x-t}\,PV\int\Big(\frac{1}{x-y}+\frac{1}{y-t}\Big)\sigma_j(y)\,dy=1-\frac{\sigma_j(x)}{x-t},$$
where we used that $t$ belongs to $S_j$ but $x$ does not, to compute the Hilbert transform of $\sigma_j$ at $t$ and $x$. Hence, the above term yields
$$\int\frac{(\Xi^n_j)^{-1}f(y)}{x-y}\,d\mu^n_j(y)=\int_{S_j}\frac{f(t)}{\sigma_j(t)}\Big(1-\frac{\sigma_j(x)}{x-t}\Big)\,dt,\qquad x\in S^i,$$

from which it follows that $R$ is a Hilbert-Schmidt operator in $L^2(\sigma^{-\epsilon}dx)$. Hence, $R$ is a compact operator in $L^2(\sigma^{-\epsilon}dx)$. But $L_\theta$ is injective in this space. Indeed, for $f\in L^2(\sigma^{-\epsilon}dx)$, $L_\theta f=0$ implies that $f=-\theta n_i^{-1}Rf$ is analytic. Writing back $h=(\Xi^n)^{-1}f$, we deduce that $\Xi^n_\theta h=0$ with $h$ Hölder, hence $h$ must vanish by the previous considerations. Hence $L_\theta$ is injective. Therefore, by the Fredholm alternative, $L_\theta$ is surjective, and hence $L_\theta$ is a bijection on $L^2(\sigma^{-\epsilon}dx)$. But note that the above identity shows that $R$ maps $L^2(\sigma^{-\epsilon})$ onto analytic functions; therefore $L_\theta^{-1}$ maps Hölder functions with exponent $\alpha$ onto Hölder functions with exponent $\alpha$. We thus conclude that $\Xi^n_\theta=L_\theta\circ\Xi^n$ is invertible on the space of Hölder functions. We also see that the inverse has the announced property since, for $x\in[a^n_j,b^n_j]$,
$$(\Xi^n_\theta)^{-1}g(x)=(\Xi^n)^{-1}[g-h](x),\qquad h(x)=\theta\sum_{j\neq i}n_j\int\frac{(\Xi^n_\theta)^{-1}g(y)}{x-y}\,d\mu^n_j(y),$$
where $h$ is $C^\infty$. The announced bound follows readily from the bound on one cut, since on $S_k$
$$(\Xi^n_\theta)^{-1}f(x)=(\Xi^n_0)^{-1}\Big(f-\theta\sum_{\ell\neq k}n_\ell\int\frac{(\Xi^n_\theta)^{-1}f(y)}{\cdot-y}\,d\mu^n_\ell(y)\Big)(x)$$
is such that
$$\|(\Xi^n_\theta)^{-1}f\|_{C^s}\le c_s\Big\|f-\theta\sum_{\ell\neq k}n_\ell\int\frac{(\Xi^n_\theta)^{-1}f(y)}{\cdot-y}\,d\mu^n_\ell(y)\Big\|_{C^{s+2}}\le\tilde c_s\big(\|f\|_{C^{s+2}}+\|(\Xi^n_\theta)^{-1}f\|_\infty\big).\qquad\diamond$$

As $\Xi^n_1$ is invertible with bounded inverse, we can apply exactly the same strategy as in the one cut case to prove the central limit theorem.

Theorem 6.6. Assume $V$ is analytic and the previous hypotheses hold true. Then there exists $\epsilon>0$ so that for $\max|n_i-\mu([a_i,b_i])|\le\epsilon$ and any $f\in C^k$ with $k\ge11$, the random variable $M_N(f):=\sum_{i=1}^Nf(\lambda_i)-N\mu^n(f)$ converges in law under $P^{\beta,V}_{N,n}$ towards a Gaussian variable with mean $m^n_V(f)$ and covariance $C^n_V(f,f)$, which are defined as in Theorem 4.27 but with $\mu^n$ instead of $\mu$ and $\Xi^n$ instead of $\Xi$.

We can also obtain the expansion of the partition function.

Theorem 6.7. Assume $V$ is analytic and the previous hypotheses hold true. Then there exists $\epsilon>0$ so that for $\max|\hat n_i-\mu([a_i,b_i])|\le\epsilon$, $\hat n_i=N_i/N$, we have
$$\ln\Big(\frac{N!}{(\hat n_1N)!\cdots(\hat n_KN)!}\,Z^{\beta,V}_{N,\hat n}\Big)=C^\beta_0\,N\ln N+C^\beta_1\,\ln N+N^2F^{\hat n}_0(V)+NF^{\hat n}_1(V)+F^{\hat n}_2(V)+o(1)\qquad(85)$$

and for $n_i>0$, $\sum n_i=1$,
$$C^\beta_0=\frac\beta2,\qquad C^\beta_1=-\frac{K-1}{2}+\frac{3+\beta/2+2/\beta}{12},$$
$$F^n_0(V)=-E(\mu^n_V),\qquad F^n_1(V)=\Big(\frac\beta2-1\Big)\int\ln\Big(\frac{d\mu^n_V}{dx}\Big)\,d\mu^n_V-\frac\beta2\sum n_i\ln n_i+f_1,$$
where $f_1$ depends only on the boundary points of the support, and $F^n_2(V)$ is a continuous function of $n$. The error term above is uniform in $n$ in a neighborhood of $n^*$.

Proof. The proof is again by interpolation. We first remove the interaction between the cuts by introducing, for $\theta\in[0,1]$,
$$dP^{\beta,\theta,V}_{N,\hat n}(\lambda_1,\ldots,\lambda_N)=\frac{1}{Z^{\beta,\theta,V}_{N,\hat n}}\,e^{N^2\frac\beta2\theta\sum_{h\neq h'}\int\ln|x-y|\,d(\hat\mu^h_N-\mu^{\hat n}_h)(x)\,d(\hat\mu^{h'}_N-\mu^{\hat n}_{h'})(y)}\prod_hdP^{\beta,V^{\hat n}_{\rm eff}}_{N,\hat n_h},$$
where $P^{\beta,V^{\hat n}_{\rm eff}}_{N,\hat n_h}$ is the $\beta$-ensemble on $S^h$ with potential given by the effective potential. We still have a similar large deviation principle for the $\hat\mu^h_N$ under $P^{\beta,\theta,V}_{N,\hat n}$, and the minimizer of the rate function is always $\mu^{\hat n}_h$. Hence we are always in an off-critical situation. Moreover, we can write the Dyson-Schwinger equations for this model: it is easy to see that the master operator is the operator $\Xi^n_\theta$ of Lemma 6.5, which we have proved to be invertible. Therefore, we deduce that the covariance and the mean of linear statistics are in a small neighborhood of $C^{\theta,\hat n}_V$ and $m^{\theta,\hat n}_V$. It is not hard to see that this convergence is uniform in $\theta$. Hence, we can proceed and compute
$$\ln\frac{Z^{\beta,1,V}_{N,\hat n}}{Z^{\beta,0,V}_{N,\hat n}}=\beta\int_0^1\sum_{h<h'}N^2\,P^{\beta,\theta,V}_{N,\hat n}\Big[\int\ln|x-y|\,d(\hat\mu^h_N-\mu^{\hat n}_h)(x)\,d(\hat\mu^{h'}_N-\mu^{\hat n}_{h'})(y)\Big]\,d\theta.$$
Indeed, using the Fourier transform of the logarithm, we have
$$N^2\,P^{\beta,\theta,V}_{N,\hat n}\Big[\int\ln|x-y|\,d(\hat\mu^h_N-\mu^{\hat n}_h)(x)\,d(\hat\mu^{h'}_N-\mu^{\hat n}_{h'})(y)\Big]$$
$$\qquad=\int\frac1t\,P^{\beta,\theta,V}_{N,\hat n}\Big[\Big(N\int e^{itx}\,d(\hat\mu^h_N-\mu^{\hat n}_h)(x)\Big)\Big(N\int e^{-ity}\,d(\hat\mu^{h'}_N-\mu^{\hat n}_{h'})(y)\Big)\Big]\,dt,$$
where the expectation in the right hand side is close to $C^{\theta,\hat n}_V(e^{it\cdot},e^{-it\cdot})+|m^{\theta,\hat n}_V(e^{it\cdot})|^2$. Hence, decoupling the cuts in this way only contributes a term of order one to the partition function. It is not hard to see that this term is a continuous function of the filling fractions (as the inverse of $\Xi^{\hat n}_\theta$ is uniformly continuous in $n$). Finally we can use the expansion of the one cut case of Theorem 4.28 to expand $Z^{\beta,0,V}_{N,\hat n}$ and conclude. ⋄
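The interpolation step of the proof, namely writing $\ln(Z^{\beta,1,V}_{N,\hat n}/Z^{\beta,0,V}_{N,\hat n})$ as the integral over $\theta$ of the expected coupling that is switched on, is a general identity (often called thermodynamic integration). The following toy check is not from the notes: it verifies the identity on two coupled Gaussian coordinates playing the role of the two cuts, where everything is explicit; the coupling strength and the sample sizes are arbitrary choices.

    # Toy check (not from the notes) of ln(Z_1/Z_0) = \int_0^1 E_theta[W] d theta,
    # for p_theta(x, y) proportional to exp(-x^2/2 - y^2/2 + theta*c*x*y), W = c*x*y.
    import numpy as np

    rng = np.random.default_rng(1)
    c = 0.5                                        # illustrative coupling strength

    def mean_W(theta, n_samples=200_000):
        # exact sampling: the precision matrix is [[1, -theta*c], [-theta*c, 1]]
        cov = np.linalg.inv(np.array([[1.0, -theta * c], [-theta * c, 1.0]]))
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_samples).T
        return np.mean(c * x * y)                  # E_theta[W]

    thetas = np.linspace(0.0, 1.0, 21)
    estimates = np.array([mean_W(t) for t in thetas])
    ln_ratio_mc = np.sum((estimates[1:] + estimates[:-1]) / 2 * np.diff(thetas))  # trapezoid rule
    ln_ratio_exact = -0.5 * np.log(1 - c**2)       # closed form for this Gaussian toy
    print("Monte Carlo estimate:", ln_ratio_mc)
    print("exact value         :", ln_ratio_exact)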

6.2 Central limit theorem for the full model

To tackle the model with random filling fractions, we need to estimate the ratio of the partition functions according to (78). Recall that $n^*_i=\mu([a_i,b_i])$. We can now extend the definition of the partition function to non-rational values of the filling fractions by using Theorem 6.7. Then we have:

Theorem 6.8. Under the previous hypotheses, for $\max|n_i-n^*_i|\le\epsilon$, there exist a positive definite quadratic form $Q$ and a vector $v$ such that
$$D(n):=\frac{(Nn^*_1)!\cdots(Nn^*_K)!\;Z^{\beta,V}_{N,n}}{(Nn_1)!\cdots(Nn_K)!\;Z^{\beta,V}_{N,n^*}}=\exp\Big\{-\frac12Q\big(N(n-n^*)\big)(1+O(\epsilon))+\langle N(n-n^*),v\rangle+o(1)\Big\},$$
where $Z^{\beta,V}_{N,n^*}/\big((Nn^*_1)!\cdots(Nn^*_K)!\big)$ is defined thanks to the expansion (85) whenever $n^*N$ takes non-integer values (note that the right hand side makes sense for any filling fraction $n$). $O(\epsilon)$ is bounded by $C\epsilon$ uniformly in $N$. We have
$$Q=-D^2F^n_0(V)|_{n=n^*}\qquad\text{and}\qquad v_i=\partial_{n_i}F^n_1(V)|_{n=n^*}.$$

As a consequence, since the probability that the filling fractions $\hat n$ are equal to $n$ is proportional to $D(n)$, we deduce that the distribution of $N(\hat n-n^*)-Q^{-1}v$ is equivalent to a centered discrete Gaussian variable with values in $-Nn^*-Q^{-1}v+\mathbb Z^K$ and covariance $Q^{-1}$. Note here that $Nn^*$ is not an integer in general, so that $N(\hat n-n^*)-Q^{-1}v$ does not live in a fixed lattice: this is why the distribution of $N(\hat n-n^*)-Q^{-1}v$ does not converge in general.

As a corollary of the previous theorem, we immediately have:

Corollary 6.9. Let $f$ be $C^{11}$. Then
$$E_{P^{\beta,V}_N}\big[e^{\sum f(\lambda_i)-N\mu(f)}\big]=\exp\Big\{\frac12C^{n^*}_V(f,f)+m^{n^*}_V(f)\Big\}\,\frac{\sum_n\exp\big\{-\frac12Q(N(n-n^*))+\langle N(n-n^*),v+\partial_n\mu^n|_{n=n^*}(f)\rangle\big\}}{\sum_n\exp\big\{-\frac12Q(N(n-n^*))+\langle N(n-n^*),v\rangle\big\}}\,(1+o(1)).$$

We notice that we have a usual central limit theorem as soon as $\partial_n\mu^n|_{n=n^*}(f)$ vanishes (in which case the second factor equals one), but otherwise the discrete Gaussian variations of the filling fractions enter the game. This term comes from the difference $N\mu(f)-N\mu^n(f)$. As is easy to see, the last thing we need to show to prove these results is the following.

Lemma 6.10. Assume $V$ is analytic and off-critical. Then
• $n\mapsto\mu^n(f)$ is $C^1$, and $C^n_V(f,f)$, $m^n_V(f)$ are continuous in $n$,
• $n\mapsto F^n_i(V)$ is $C^{2-i}$ in a neighborhood of $n^*$,
• $Q=-D^2F^n_0(V)|_{n=n^*}$ is positive definite.
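To make the discrete Gaussian phenomenon of Theorem 6.8 concrete, the following toy computation (not from the notes) takes $K=2$ cuts, so that $N_1-Nn^*_1$ ranges over the lattice $\mathbb Z$ shifted by the fractional part of $Nn^*_1$, with weights proportional to $\exp\{-\frac q2x^2+vx\}$; the numbers $q$, $v$, $n^*_1$ are arbitrary. Its mean depends on $N$ only through this fractional part, which is why the law oscillates with $N$ instead of converging.

    # Toy illustration (not from the notes) of the discrete Gaussian of Theorem 6.8
    # for K = 2 cuts: weights exp(-q*x^2/2 + v*x) on the shifted lattice Z - frac(N*n1).
    import numpy as np

    q, v, n1 = 2.0, 0.3, 0.314        # illustrative values of Q, v and n*_1

    def discrete_gaussian_mean(N, truncation=50):
        shift = N * n1 - np.floor(N * n1)                    # fractional part of N*n1
        x = np.arange(-truncation, truncation + 1) - shift   # possible values of N_1 - N*n1
        w = np.exp(-0.5 * q * x**2 + v * x)
        return np.sum(w * x) / np.sum(w)

    for N in (100, 101, 102, 103, 1000, 1001):
        print(N, round(discrete_gaussian_mean(N), 4))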

Let us remark that Lemma 6.10 indeed implies Corollary 6.9 and Theorem 6.8, since by Theorem 6.7 we have, for $|n_i-n^*_i|\le\epsilon$,
$$\ln\frac{(Nn^*_1)!\cdots(Nn^*_K)!\;Z^{\beta,V}_{N,n}}{(Nn_1)!\cdots(Nn_K)!\;Z^{\beta,V}_{N,n^*}}=N^2\big\{F^n_0(V)-F^{n^*}_0(V)\big\}+N\big(F^n_1(V)-F^{n^*}_1(V)\big)+\big(F^n_2(V)-F^{n^*}_2(V)\big)+o(1)$$
$$\qquad=-\frac12Q\big(N(n-n^*),N(n-n^*)\big)(1+O(\epsilon))+\partial_nF^n_1(V)|_{n=n^*}\cdot\big(N(n-n^*)\big)+o(1),$$
where we noticed that $\partial_nF^n_0(V)$ vanishes at $n^*$, since $n^*$ maximizes $F^n_0$ (as it minimizes the energy), and that $Q$ is positive definite. Hence we obtain the announced estimate on the partition function. Concerning Corollary 6.9, we have by (78), conditioning on the filling fractions,
$$E_{P^{\beta,V}_N}\big[e^{\sum f(\lambda_i)-N\mu_V(f)}\big]=E\Big[E_{P^{\beta,V}_{N,\hat n}}\big[e^{\sum f(\lambda_i)-N\mu^{\hat n}(f)}\big]\,e^{N(\mu^{\hat n}(f)-\mu^{n^*}(f))}\Big]$$
$$\qquad\simeq E\Big[E_{P^{\beta,V}_{N,\hat n}}\big[e^{\sum f(\lambda_i)-N\mu^{\hat n}(f)}\big]\,e^{N\langle\hat n-n^*,\partial_n\mu^n(f)|_{n=n^*}\rangle}\Big](1+o(1)),$$
where the outer expectation is over the law of the filling fractions $\hat n$. So we only need to prove Lemma 6.10.

Proof. We first show that $n\mapsto\mu^n$ is twice continuously differentiable. We have already seen in the proof of Lemma 6.4 that $n\mapsto\mu^n$ is Lipschitz for the distance $D$ for $n$ in a neighborhood of $n^*$. This implies that $\nu_\epsilon=\epsilon^{-1}(\mu^{n+\epsilon\kappa}-\mu^n)$ is tight (for the distance $D$ and hence for the weak topology). Let us consider a limit point $\nu$ and its Stieltjes transform $G_\nu(z)=\int(z-x)^{-1}d\nu(x)$. Along this subsequence, the proof of Lemma 6.4 also shows that $\epsilon^{-1}(a^{n+\epsilon\kappa}_i-a^n_i)$ has a limit (and similarly for $b^n_i$, as well as for $H^n_i$). Hence, we see that $\nu$ is absolutely continuous with respect to Lebesgue measure, with density blowing up at most like an inverse square root at the boundary. By (80) in Theorem 6.3, we deduce that
$$G_\nu(E+i0)+G_\nu(E-i0)=0$$
for all $E$ inside the support of $\mu^n$. This implies that $\sqrt{\prod(z-a^n_i)(b^n_i-z)}\,G_\nu(z)$ has no discontinuity across the cuts, hence is analytic. Finally, $G_\nu$ goes to zero at infinity like $1/z^2$, so that $\sqrt{\prod(z-a^n_i)(b^n_i-z)}\,G_\nu(z)$ is a polynomial of degree at most $p-2$. Its coefficients are uniquely determined by the $p-1$ equations fixing the filling fractions, since for a contour $\mathcal C^n_i$ around $[a^n_i,b^n_i]$,
$$\frac{1}{2i\pi}\oint_{\mathcal C^n_i}G_{\mu^n}(z)\,dz=n_i\ \Longrightarrow\ \frac{1}{2i\pi}\oint_{\mathcal C^n_i}G_\nu(z)\,dz=\kappa_i.$$
There is a unique solution to these equations. As it is linear in $\kappa$, it is given by
$$G_\nu(z)=\sum\kappa_i\,\omega^n_i(z).\qquad(86)$$
