  1. Method of cumulants and mod-Gaussian convergence of the graphon models. Pierre-Loïc Méliot (joint work with Valentin Féray and Ashkan Nikeghbali). May 11th, 2017, University Paris-Sud.

  2. When looking at a sum S_n = \sum_{i=1}^n A_i of centered i.i.d. random variables, the fluctuations are universally predicted by the central limit theorem:
       S_n / \sqrt{n \, \mathrm{Var}(A_1)} \rightharpoonup \mathcal{N}(0, 1).
     This is not the whole story:
     ▶ large deviations (Cramér, 1938): \log(\mathbb{P}[S_n \geq nx]) \simeq -n \, I(x).
     ▶ speed of convergence (Berry, 1941; Esseen, 1945):
       \sup_{s \in \mathbb{R}} \left| \mathbb{P}\!\left[ \frac{S_n}{\sqrt{n \, \mathrm{Var}(A_1)}} \leq s \right] - \int_{-\infty}^{s} \frac{e^{-t^2/2}}{\sqrt{2\pi}} \, dt \right| \leq \frac{3 \, \mathbb{E}[|A_1|^3]}{(\mathrm{Var}(A_1))^{3/2} \sqrt{n}}.
     ▶ local limit theorem (Gnedenko, 1948; Stone, 1965): if A_1 is non-lattice distributed and \mathrm{Var}(A_1) = 1, then
       \sqrt{n} \, \mathbb{P}[S_n \in (\sqrt{n}\,x, \sqrt{n}\,x + h)] \simeq \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, h.
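The Berry–Esseen bound of slide 2 can be checked numerically. A minimal sketch, assuming (as a hypothetical choice of summands) A_i = U_i - 1/2 with U_i uniform on [0, 1], so that Var(A_1) = 1/12 and E[|A_1|^3] = 1/32:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical summands A_i = U_i - 1/2, U_i uniform on [0, 1]:
# Var(A_1) = 1/12, E[|A_1|^3] = 1/32.
n, trials = 400, 20000
S = (rng.random((trials, n)) - 0.5).sum(axis=1)
Y = S / sqrt(n / 12)

# Empirical Kolmogorov distance to N(0, 1), evaluated on a grid of points s.
grid = np.linspace(-3, 3, 121)
ecdf = (Y[:, None] <= grid).mean(axis=0)
ncdf = np.array([0.5 * (1 + erf(s / sqrt(2))) for s in grid])
d_emp = np.abs(ecdf - ncdf).max()

# Berry-Esseen bound: 3 E[|A_1|^3] / (Var(A_1)^{3/2} sqrt(n)).
bound = 3 * (1 / 32) / ((1 / 12) ** 1.5 * sqrt(n))
print(d_emp, bound)
```

As expected, the empirical Kolmogorov distance sits well below the theoretical bound of order n^{-1/2}.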

  3. Many other sequences of random variables are asymptotically normal: functionals of Markov chains, martingales, etc. Idea: there is a renormalisation theory of random variables that allows one to go beyond the central limit theorem, and to prove at the same time the CLT and the other limiting results.
     Definition (Mod-Gaussian convergence). A sequence of real random variables (X_n)_{n \in \mathbb{N}} is mod-Gaussian with parameters t_n \to +\infty and limit \psi(z) if, locally uniformly on a domain D \subset \mathbb{C},
       \mathbb{E}[e^{z X_n}] \, e^{-t_n z^2 / 2} = \psi_n(z) \to \psi(z)
     with \psi continuous on D and \psi(0) = 1.
     For a sum S_n of i.i.d. variables, one looks at X_n = S_n / n^{1/3}; then t_n = n^{1/3} \, \mathrm{Var}(A_1) and \psi(z) = \exp(\mathbb{E}[(A_1)^3] \, z^3 / 6).
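This i.i.d. example can be verified numerically. A small sketch, assuming the hypothetical distribution P(A = 2) = 1/3, P(A = -1) = 2/3, for which E[A] = 0, Var(A) = 2 and E[A^3] = 2:

```python
import numpy as np

# Hypothetical summand distribution: P(A=2)=1/3, P(A=-1)=2/3,
# so E[A]=0, Var(A)=2, E[A^3]=2.
def psi_n(z, n):
    # psi_n(z) = E[e^{z X_n}] e^{-t_n z^2/2} with X_n = S_n / n^{1/3}.
    h = z / n ** (1 / 3)
    log_phi = np.log(np.exp(2 * h) / 3 + 2 * np.exp(-h) / 3)  # log E[e^{hA}]
    t_n = n ** (1 / 3) * 2                                    # n^{1/3} Var(A)
    return np.exp(n * log_phi - t_n * z ** 2 / 2)

z = 0.5
limit = np.exp(2 * z ** 3 / 6)    # exp(E[A^3] z^3 / 6)
for n in (10 ** 3, 10 ** 6, 10 ** 9):
    print(n, psi_n(z, n), limit)
```

The printed values approach the limit at rate O(n^{-1/3}), driven by the fourth cumulant of A.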

  4. Objectives:
     1. Explain the consequences of mod-Gaussian convergence.
     2. Describe general conditions which ensure the mod-Gaussian convergence.
     3. Prove the mod-Gaussian convergence of a large class of models of random graphs.
     Later: Markov chains, random graphs, random permutations, etc.
     Example: let X_n = \mathrm{Re}(\log \det(I_n - M_n)), with M_n \sim \mathrm{Haar}(U(n)). One has the mod-Gaussian convergence
       \mathbb{E}[e^{z X_n}] \, e^{-(\log n) \, z^2 / 4} \to \frac{G(1 + z/2)^2}{G(1 + z)}, \quad G = \text{Barnes' function}.
     Remark: one can replace the exponent z^2/2 of the Gaussian distribution by the exponent \eta(z) of any infinitely divisible distribution.

  5. Mod-Gaussian convergence and bounds on cumulants

  6. Method of cumulants. If X is a random variable with convergent Laplace transform, its cumulants are:
       \kappa^{(r)}(X) = \frac{d^r}{dz^r} \Big( \log \mathbb{E}[e^{zX}] \Big) \Big|_{z=0}.
     So, \log \mathbb{E}[e^{zX}] = \sum_{r=1}^{\infty} \frac{\kappa^{(r)}(X)}{r!} \, z^r. The first cumulants are
       \kappa^{(1)}(X) = \mathbb{E}[X];
       \kappa^{(2)}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \mathrm{Var}(X);
       \kappa^{(3)}(X) = \mathbb{E}[X^3] - 3 \, \mathbb{E}[X^2] \, \mathbb{E}[X] + 2 \, (\mathbb{E}[X])^3.
     The Gaussian distribution \mathcal{N}(m, \sigma^2) is characterized by \kappa^{(1)}(X) = m, \kappa^{(2)}(X) = \sigma^2, \kappa^{(r \geq 3)}(X) = 0.
     Idea: characterize similarly the mod-Gaussian convergence of a sequence (X_n)_{n \in \mathbb{N}}.
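The moment-to-cumulant formulas above can be sanity-checked on a Poisson variable, all of whose cumulants equal its mean lambda (a standard fact; the raw moments used below are those of Poisson(lambda)):

```python
# Raw moments of a Poisson(lam) variable (standard formulas).
lam = 2.5
m1 = lam
m2 = lam + lam ** 2
m3 = lam ** 3 + 3 * lam ** 2 + lam

# First three cumulants via the formulas on the slide.
k1 = m1
k2 = m2 - m1 ** 2
k3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
print(k1, k2, k3)   # -> 2.5 2.5 2.5
```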

  7. Definition (Method of cumulants). A sequence of random variables (S_n)_{n \in \mathbb{N}} satisfies the hypotheses of the method of cumulants with parameters (D_n, N_n, A) if:
     (MC1) One has N_n \to +\infty and D_n / N_n \to 0.
     (MC2) The first cumulants satisfy
       \kappa^{(1)}(S_n) = 0; \quad \kappa^{(2)}(S_n) = (\sigma_n)^2 \, N_n D_n; \quad \kappa^{(3)}(S_n) = L_n \, N_n (D_n)^2
     with \lim_{n \to \infty} (\sigma_n)^2 = \sigma^2 > 0 and \lim_{n \to \infty} L_n = L.
     (MC3) All the cumulants satisfy |\kappa^{(r)}(S_n)| \leq N_n \, (2 D_n)^{r-1} \, r^{r-2} \, A^r.

  8. Mod-Gaussian convergence and its consequences. If (S_n)_{n \in \mathbb{N}} satisfies the hypotheses MC1-MC3, then X_n = S_n / ((N_n)^{1/3} (D_n)^{2/3}) is mod-Gaussian convergent, with t_n = (\sigma_n)^2 \, (N_n / D_n)^{1/3} and \psi(z) = \exp(L z^3 / 6). Consequences:
     1. Central limit theorem: if Y_n = S_n / \sqrt{\mathrm{Var}(S_n)}, then Y_n \rightharpoonup \mathcal{N}(0, 1).
     2. Speed of convergence:
       d_{\mathrm{Kol}}(Y_n, \mathcal{N}(0, 1)) \leq \left( \frac{3A}{\sigma_n} \right)^3 \sqrt{\frac{D_n}{N_n}}.
     This inequality relies on the general estimate
       d_{\mathrm{Kol}}(\mu, \nu) \leq \frac{1}{\pi} \int_{-T}^{T} \left| \frac{\hat{\mu}(\xi) - \hat{\nu}(\xi)}{\xi} \right| d\xi + \frac{24}{\pi T} \left\| \frac{d\nu}{dx} \right\|_{\infty}.

  9. 3. Normality zone and moderate deviations: if y \ll (N_n / D_n)^{1/6}, then
       \mathbb{P}[Y_n \geq y] = \mathbb{P}[\mathcal{N}(0, 1) \geq y] \, (1 + o(1)).
     If 1 \ll y \ll (N_n / D_n)^{1/4}, then
       \mathbb{P}[Y_n \geq y] = \frac{e^{-y^2/2}}{y \sqrt{2\pi}} \, \exp\!\left( \frac{L y^3}{6 \sigma^3} \sqrt{\frac{D_n}{N_n}} \right) (1 + o(1)).
     This estimate relies on the Berry-Esseen inequality and an argument of change of measure.
     4. Local limit theorem: for any exponent \varepsilon \in (0, 1/2),
       \lim_{n \to \infty} \left( \frac{N_n}{D_n} \right)^{\varepsilon} \mathbb{P}\!\left[ Y_n - y \in \left( \frac{N_n}{D_n} \right)^{-\varepsilon} (a, b) \right] = \frac{e^{-y^2/2}}{\sqrt{2\pi}} \, (b - a).
     Thus, Y_n is normal between the two scales (N_n / D_n)^{-1/2} and (N_n / D_n)^{1/6}.

  10. Joint cumulants and dependency graphs

  11. Dependency graphs. Let S = \sum_{v \in V} A_v be a sum of random variables, and G = (V, E) a dependency graph for (A_v)_{v \in V}: if V_1 and V_2 are two disjoint subsets of V without edge e = \{v_1, v_2\} between v_1 \in V_1 and v_2 \in V_2, then (A_v)_{v \in V_1} and (A_v)_{v \in V_2} are independent.
     [Figure: a dependency graph on the vertices 1, ..., 7.] Example: (A_1, A_2, ..., A_5) \perp (A_6, A_7), but one has also (A_1, A_2, A_3) \perp A_5.
     Parameters of the graph: D = \max_{v \in V} (\deg v + 1), N = \mathrm{card}(V), A = \max_{v \in V} \| A_v \|_{\infty}.
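A minimal sketch of these parameters for a hypothetical 1-dependent family: with A_i = U_i * U_{i+1} for i.i.d. U_1, ..., U_{n+1} in [0, 1], A_i and A_j share a U-variable iff |i - j| <= 1, so the path graph is a dependency graph:

```python
# Hypothetical example: A_i = U_i * U_{i+1}, so the edges {i, i+1}
# form a dependency graph for (A_1, ..., A_n).
n = 10
V = list(range(1, n + 1))
E = [(i, i + 1) for i in range(1, n)]

deg = {v: sum(v in e for e in E) for v in V}
D = max(deg.values()) + 1      # max degree + 1
N = len(V)                     # number of summands
A = 1.0                        # |A_i| = |U_i U_{i+1}| <= 1

print(N, D, A)   # -> 10 3 1.0
```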

  12. Theorem (Bound on cumulants; Féray-M.-Nikeghbali, 2013). If S is a sum of random variables with a dependency graph of parameters (D, N, A), then for any r \geq 1,
       |\kappa^{(r)}(S)| \leq N \, (2D)^{r-1} \, r^{r-2} \, A^r.
     Corollary: if S_n = \sum_{i=1}^{N_n} A_{i,n} with the A_{i,n}'s bounded by A and a sparse dependency graph of maximal degree D_n \ll N_n, then MC3 is satisfied.
     The proof of the bound relies on the notion of joint cumulant:
       \kappa(A_1, A_2, ..., A_r) = \frac{\partial^r}{\partial z_1 \partial z_2 \cdots \partial z_r} \Big( \log \mathbb{E}[e^{z_1 A_1 + z_2 A_2 + \cdots + z_r A_r}] \Big) \Big|_{z_1 = \cdots = z_r = 0}
       = \sum_{\pi_1 \sqcup \pi_2 \sqcup \cdots \sqcup \pi_{\ell(\pi)} = [\![1, r]\!]} (-1)^{\ell(\pi) - 1} \, (\ell(\pi) - 1)! \, \prod_{i=1}^{\ell(\pi)} \mathbb{E}\Big[ \prod_{j \in \pi_i} A_j \Big].
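The set-partition formula for joint cumulants can be implemented directly. A sketch that estimates joint cumulants from samples and checks two expected identities, kappa(X, X) = Var(X) and kappa(X, Z) = 0 for independent X, Z (sample vectors are my own illustrative choice):

```python
import numpy as np
from math import factorial

def set_partitions(items):
    # All set partitions of a list, built recursively.
    if len(items) <= 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def joint_cumulant(samples):
    # Empirical kappa(A_1, ..., A_r) via the set-partition formula:
    # sum over partitions of (-1)^{l-1} (l-1)! * product of block moments.
    r = len(samples)
    total = 0.0
    for part in set_partitions(list(range(r))):
        l = len(part)
        weight = (-1) ** (l - 1) * factorial(l - 1)
        for block in part:
            weight *= np.prod([samples[j] for j in block], axis=0).mean()
        total += weight
    return total

rng = np.random.default_rng(1)
X, Z = rng.normal(size=(2, 100000))
print(joint_cumulant([X, X]))   # ~ Var(X) = 1
print(joint_cumulant([X, Z]))   # ~ 0 (independent families)
```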

  13. Properties of joint cumulants.
     1. For any random variable X, \kappa^{(r)}(X) = \kappa(X, X, ..., X) (r occurrences).
     2. The joint cumulants are multilinear and invariant by permutation.
     3. If \{A_1, A_2, ..., A_r\} can be split in two independent families, then \kappa(A_1, ..., A_r) = 0.
     Consider a sum S = \sum_{v \in V} A_v with a dependency graph G of parameters (D, N, A). Then
       \kappa^{(r)}(S) = \sum_{v_1, v_2, ..., v_r} \kappa(A_{v_1}, A_{v_2}, ..., A_{v_r})
     and the sum can be restricted to families \{v_1, v_2, ..., v_r\} such that the induced multigraph H = G[v_1, v_2, ..., v_r] is connected. Actually,
       |\kappa(A_{v_1}, A_{v_2}, ..., A_{v_r})| \leq A^r \, 2^{r-1} \, \mathrm{ST}_H,
     where \mathrm{ST}_H is the number of spanning trees of H.
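The quantity ST_H is computable by Kirchhoff's matrix-tree theorem: the number of spanning trees is any cofactor of the graph Laplacian. A small sketch (the test graphs C_4 and K_4 are illustrative choices):

```python
import numpy as np

def spanning_tree_count(n, edges):
    # Kirchhoff's matrix-tree theorem: ST_H equals any cofactor of the
    # Laplacian L = D - A of the (multi)graph H; parallel edges accumulate.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

# The cycle C_4 has 4 spanning trees; K_4 has 4^{4-2} = 16 (Cayley's formula).
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(spanning_tree_count(4, c4), spanning_tree_count(4, k4))  # -> 4 16
```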

  14. Sketch of proof of the bound.
     1. In the expansion of \kappa(A_1, ..., A_r), many set partitions yield the same moment M_\pi = \prod_{i=1}^{\ell(\pi)} \mathbb{E}[\prod_{j \in \pi_i} A_j], so
       \kappa(A_1, ..., A_r) = \sum_{\pi'} \Big( \sum_{\pi \to_H \pi'} \mu(\pi) \Big) M_{\pi'};
       |\kappa(A_1, ..., A_r)| \leq A^r \sum_{\pi'} \Big| \sum_{\pi \to_H \pi'} \mu(\pi) \Big|.
     2. The functional F_{H/\pi'} = \sum_{\pi \to_H \pi'} \mu(\pi) depends only on the contraction H/\pi' of H along \pi', and one can show that, up to a sign,
       |F_{H/\pi'}| = T_{H/\pi'}(1, 0) \leq T_{H/\pi'}(1, 1) = \mathrm{ST}_{H/\pi'},
     where T is the bivariate Tutte polynomial.

  15. 3. A pair (\pi', T \in \mathrm{ST}_{H/\pi'}) can be associated to a bicolored spanning tree of H, hence
       \sum_{\pi'} \mathrm{ST}_{H/\pi'} \leq 2^{r-1} \, \mathrm{ST}_H.
     The bound on the cumulant of the sum S follows by noticing that:
     ▶ given a vertex v_1 and a Cayley tree T, the number of lists (v_2, ..., v_r) such that T is contained in H = G[v_1, ..., v_r] is smaller than D^{r-1};
     ▶ the number of pairs (v_1 \in V, T Cayley tree) is N \, r^{r-2}.
     The proof leads to the notion of weighted dependency graph.

  16. Weighted dependency graphs. Definition (Weighted dependency graph; Féray, 2016). A sum S = \sum_{v \in V} A_v admits a weighted dependency graph G = (V, E) of parameters (\mathrm{wt}: E \to \mathbb{R}_+, A) if, for any family \{v_1, v_2, ..., v_r\},
       |\kappa(A_{v_1}, A_{v_2}, ..., A_{v_r})| \leq A^r \sum_{T \in \mathrm{ST}_{G[v_1, ..., v_r]}} \prod_{(v_i, v_j) \text{ edge of } T} \mathrm{wt}(v_i, v_j).
     The same proof gives:
       |\kappa^{(r)}(S)| \leq N \, (2D)^{r-1} \, r^{r-2} \, A^r
     with N = \mathrm{card}(V) and D = 1 + \max_{v \in V} \big( \sum_{w \sim v} \mathrm{wt}(v, w) \big).

  17. Sums of weakly dependent random variables. Let S_n = \sum_{i=1}^{N_n} A_{i,n} be a sum of random variables, with |A_{i,n}| \leq A a.s. and a dependency graph of maximal degree D_n. We suppose that
       \frac{\mathrm{Var}(S_n)}{N_n D_n} \to \sigma^2 > 0; \quad \frac{\kappa^{(3)}(S_n)}{N_n (D_n)^2} \to L.
     Then, S_n - \mathbb{E}[S_n] satisfies the hypotheses of the method of cumulants, and all its consequences. Moreover, one has the concentration inequality:
       \mathbb{P}[|S_n - \mathbb{E}[S_n]| \geq \varepsilon] \leq 2 \exp\left( - \frac{\varepsilon^2}{9 \big( \sum_{i=1}^{N_n} \mathbb{E}[|A_i|] \big) D_n A} \right) \leq 2 \exp\left( - \frac{\varepsilon^2}{9 \, N_n D_n A^2} \right).
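The weaker form of the concentration inequality can be probed by simulation. A sketch on a hypothetical 1-dependent sum A_i = U_i U_{i+1} - 1/4 (so |A_i| <= 1, and the dependency graph has maximal degree 2, i.e. D_n = 3):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-dependent model: A_i = U_i * U_{i+1} - 1/4, |A_i| <= 1,
# dependency graph of maximal degree 2, hence D_n = 3 and A = 1.
N_n, D_n, A, trials = 200, 3, 1.0, 50000
U = rng.random((trials, N_n + 1))
S = (U[:, :-1] * U[:, 1:] - 0.25).sum(axis=1)

eps = 25.0
p_emp = (np.abs(S - S.mean()) >= eps).mean()
bound = 2 * np.exp(-eps ** 2 / (9 * N_n * D_n * A ** 2))
print(p_emp, bound)
```

For this choice of eps (about six standard deviations of S) the empirical tail probability is essentially zero, well below the bound; the constant 9 makes the inequality loose at moderate N_n.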
