

1. Freeness and Graph Sums
Jamie Mingo (Queen's University)
based on joint work with Roland Speicher and Mihai Popa
Análise funcional e sistemas dinâmicos
Universidade Federal de Santa Catarina
February 23, 2015

2. GUE random matrices
◮ $(\Omega, P)$ is a probability space
◮ $X_N : \Omega \to M_N(\mathbb{C})$ is a random matrix
◮ $X_N = X_N^* = \frac{1}{\sqrt{N}}(x_{ij})_{ij}$ is an $N \times N$ self-adjoint random matrix with $x_{ij}$ independent complex Gaussians with $E(x_{ij}) = 0$ and $E(|x_{ij}|^2) = 1$ (modulo self-adjointness)
◮ $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$ are the eigenvalues of $X_N$; $\mu_N = \frac{1}{N}(\delta_{\lambda_1} + \cdots + \delta_{\lambda_N})$ is the spectral measure of $X_N$, and $\int t^k \, d\mu_N(t) = \mathrm{tr}(X_N^k)$
◮ $X_N$ is the $N \times N$ GUE with limiting eigenvalue distribution given by Wigner's semi-circle law
(figure: eigenvalue histogram with the semi-circle density on $[-2, 2]$)
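The convergence above is easy to check numerically. A minimal sketch (not from the talk), assuming NumPy is available; the helper name `gue` is ours. The normalized trace moments $\mathrm{tr}(X^{2k})$ should be close to the Catalan numbers, the even moments of the semi-circle law.

```python
# Sample one GUE matrix and compare tr(X^2) and tr(X^4) with the
# semi-circle moments (Catalan numbers 1 and 2).
import numpy as np

def gue(N, rng):
    """N x N GUE matrix, normalized so that tr(X^2) -> 1 as N -> infinity."""
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    X = (A + A.conj().T) / np.sqrt(2)   # self-adjoint, E|x_ij|^2 = 1
    return X / np.sqrt(N)

rng = np.random.default_rng(0)
X = gue(1000, rng)
tr = lambda M: np.trace(M).real / M.shape[0]
m2 = tr(X @ X)            # semicircle moment: 1 (Catalan C_1)
m4 = tr(X @ X @ X @ X)    # semicircle moment: 2 (Catalan C_2)
```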

3. Wigner and Universality
◮ in the physics literature universality refers to the fact that the limiting eigenvalue distribution is semi-circular even if we don't assume the entries are Gaussian
(figure: eigenvalue histogram with the fitted semi-circle density)

4. random variables and their distributions
◮ $(\mathcal{A}, \varphi)$ is a unital algebra with a state
◮ $\mathbb{C}\langle x_1, \ldots, x_s \rangle$ is the unital algebra generated by the non-commuting variables $x_1, \ldots, x_s$
◮ the distribution of $a_1, \ldots, a_s \in (\mathcal{A}, \varphi)$ is the state $\mu : \mathbb{C}\langle x_1, \ldots, x_s \rangle \to \mathbb{C}$ given by $\mu(p) = \varphi(p(a_1, \ldots, a_s))$
◮ convergence in distribution of $\{a_1^{(N)}, \ldots, a_s^{(N)}\} \subset (\mathcal{A}_N, \varphi_N)$ to $\{a_1, \ldots, a_s\} \subset (\mathcal{A}, \varphi)$ means pointwise convergence of distributions: $\mu_N(p) \to \mu(p)$ for $p \in \mathbb{C}\langle x_1, \ldots, x_s \rangle$
◮ let $f(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}$ be the density of the Gauss law
◮ then $\log(\hat{f}(is)) = \frac{s^2}{2} = \sum_n k_n \frac{s^n}{n!}$ with $k_2 = 1$ and $k_n = 0$ for $n \neq 2$, so the Gauss law is characterized by having all cumulants except $k_1$ and $k_2$ equal to $0$

5. Moments and Cumulants
◮ $a_1, \ldots, a_s \in (\mathcal{A}, \varphi)$ random variables
◮ a partition $\pi = \{V_1, \ldots, V_k\}$ of $[n] = \{1, 2, 3, \ldots, n\}$ is a decomposition of $[n]$ into a disjoint union of subsets: $V_i \cap V_j = \emptyset$ for $i \neq j$ and $[n] = V_1 \cup \cdots \cup V_k$
◮ $\mathcal{P}(n)$ is the set of all partitions of $[n]$
◮ given a family of maps $\{k_1, k_2, k_3, \ldots\}$ with $k_n : \mathcal{A}^{\otimes n} \to \mathbb{C}$ we define
$$k_\pi(a_1, \ldots, a_n) = \prod_{\substack{V \in \pi \\ V = \{i_1, \ldots, i_j\}}} k_j(a_{i_1}, \ldots, a_{i_j})$$
◮ in general moments are defined by the moment-cumulant formula
$$\varphi(a_1 \cdots a_n) = \sum_{\pi \in \mathcal{P}(n)} k_\pi(a_1, \ldots, a_n)$$
◮ e.g. $k_1(a_1) = \varphi(a_1)$ and $\varphi(a_1 a_2) = k_2(a_1, a_2) + k_1(a_1) k_1(a_2)$
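The moment-cumulant formula above can be run directly for the standard Gauss law, where $k_2 = 1$ and all other cumulants vanish: each partition then contributes $1$ exactly when it is a pair partition, so the even moments come out as $1, 3, 15, \ldots$ A small self-contained sketch (our own illustration, not code from the talk):

```python
# Evaluate phi(a^n) = sum over partitions pi in P(n) of prod over blocks
# of k_{|V|}, for the standard Gauss law (k_2 = 1, all other cumulants 0).

def set_partitions(elements):
    """All set partitions of a list, as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        # put `first` into an existing block, or into a new singleton block
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def gauss_cumulant(block):
    return 1.0 if len(block) == 2 else 0.0

def moment(n):
    total = 0.0
    for pi in set_partitions(list(range(1, n + 1))):
        term = 1.0
        for block in pi:
            term *= gauss_cumulant(block)
        total += term
    return total

gauss_moments = [moment(n) for n in range(1, 7)]  # 0, 1, 0, 3, 0, 15
```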

6. cumulants and independence
◮ for $a \in \mathcal{A}$, the $n$-th cumulant of $a$ is $k_n(a) = k_n(a, \ldots, a)$
◮ if $a_1$ and $a_2$ are (classically) independent then $k_n(a_1 + a_2) = k_n(a_1) + k_n(a_2)$ for all $n$
◮ if $k_n(a_{i_1}, \ldots, a_{i_n}) = 0$ unless $i_1 = \cdots = i_n$ we say mixed cumulants vanish
◮ if mixed cumulants vanish then $a_1$ and $a_2$ are independent

free cumulants and free independence (R. Speicher)
◮ partition of $\{1, 2, 3, 4\}$ with a crossing: $\{1, 3\}\{2, 4\}$
◮ non-crossing partition: $\{1, 4\}\{2, 3\}$
◮ $NC(n) = \{\text{non-crossing partitions of } [n]\}$
◮ $\varphi(a_1 \cdots a_n) = \sum_{\pi \in NC(n)} \kappa_\pi(a_1, \ldots, a_n)$ defines the free cumulants: the same rules apply as for classical independence
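Restricting the same sum to non-crossing partitions gives the free theory: with $\kappa_2 = 1$ and all other free cumulants $0$ (a standard semicircular element), the even moments count non-crossing pair partitions, i.e. the Catalan numbers. A sketch under the same conventions as above:

```python
# Free moment-cumulant formula over NC(n), for kappa_2 = 1 and all other
# free cumulants 0: the moments at n = 2, 4, 6 are the Catalan numbers.
from itertools import combinations

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def is_crossing(pi):
    """True if blocks V, W have a < b < c < d with a, c in V and b, d in W."""
    for V, W in combinations(pi, 2):
        for a, c in combinations(sorted(V), 2):
            for b, d in combinations(sorted(W), 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

def semicircle_moment(n):
    total = 0
    for pi in set_partitions(list(range(1, n + 1))):
        if not is_crossing(pi) and all(len(block) == 2 for block in pi):
            total += 1   # kappa_pi = 1 for non-crossing pair partitions
    return total

catalan_check = [semicircle_moment(n) for n in (2, 4, 6)]  # [1, 2, 5]
```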

7. freeness and asymptotic freeness
◮ if $a$ and $b$ are free with respect to $\varphi$ then $\varphi(abab) = \varphi(a^2)\varphi(b)^2 + \varphi(a)^2\varphi(b^2) - \varphi(a)^2\varphi(b)^2$
◮ in general if $a_1, \ldots, a_s$ are free then every mixed moment $\varphi(a_{i_1} \cdots a_{i_n})$ can be written as a polynomial in the moments of the individual variables $\{\varphi(a_i^k)\}_{i,k}$
◮ $\{a_1^{(N)}, \ldots, a_s^{(N)}\} \subset (\mathcal{A}_N, \varphi_N)$ are asymptotically free if $\mu_N \to \mu$ and $x_1, \ldots, x_s$ are free with respect to $\mu$
◮ in practice this means: $a_1^{(N)}, \ldots, a_s^{(N)} \in (\mathcal{A}_N, \varphi_N)$ are asymptotically free if whenever we have $b_i^{(N)} \in \mathrm{alg}(1, a_{j_i}^{(N)})$ such that $\varphi_N(b_i^{(N)}) = 0$ and $j_1 \neq j_2 \neq \cdots \neq j_m$, we have $\varphi_N(b_1^{(N)} \cdots b_m^{(N)}) \to 0$
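The $\varphi(abab)$ formula can be illustrated with independent GUE matrices, which are asymptotically free (this is our own numerical sketch, assuming NumPy). Taking $a = X^2$, $b = Y^2$ gives $\varphi(a) \to 1$, $\varphi(a^2) \to 2$, so the formula predicts $\varphi(abab) \to 2 \cdot 1 + 1 \cdot 2 - 1 \cdot 1 = 3$:

```python
# Check phi(abab) = phi(a^2)phi(b)^2 + phi(a)^2 phi(b^2) - phi(a)^2 phi(b)^2
# for a = X^2, b = Y^2 with X, Y independent GUE; the prediction is 3.
import numpy as np

def gue(N, rng):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2)
    return H / np.sqrt(N)

rng = np.random.default_rng(1)
N = 800
X, Y = gue(N, rng), gue(N, rng)
a, b = X @ X, Y @ Y
tr = lambda M: np.trace(M).real / N
abab = tr(a @ b @ a @ b)   # should be close to 3 for large N
```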

8. simple distributions: Wigner and Marchenko-Pastur
◮ let $f(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}$ be the density of the Gauss law
◮ then $\log(\hat{f}(is)) = \frac{s^2}{2} = \sum_n k_n \frac{s^n}{n!}$ with $k_2 = 1$ and $k_n = 0$ for $n \neq 2$, so the Gauss law is characterized by having all cumulants except $k_1$ and $k_2$ equal to $0$
◮ for $\mu$ a probability measure on $\mathbb{R}$ and $z \in \mathbb{C}^+$, $G(z) = \int (z - t)^{-1} \, d\mu(t)$ is the Cauchy transform of $\mu$ and $R(z) = G^{\langle -1 \rangle}(z) - \frac{1}{z} = \kappa_1 + \kappa_2 z + \kappa_3 z^2 + \cdots$ is the $R$-transform of $\mu$
◮ if $d\mu(t) = \frac{1}{2\pi}\sqrt{4 - t^2} \, dt$ is the semi-circle law we have $\kappa_n = 0$ except for $\kappa_2 = 1$
◮ if $1 < c$ and $a = (1 - \sqrt{c})^2$ and $b = (1 + \sqrt{c})^2$ we let $d\mu = \frac{\sqrt{(b - t)(t - a)}}{2\pi t} \, dt$; $\mu$ is the Marchenko-Pastur distribution: $\kappa_n = c$ for all $n$
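Since all free cumulants of the Marchenko-Pastur law equal $c$, its moments are $m_n = \sum_{\pi \in NC(n)} c^{\#\pi}$. A short sketch (our own check) that recovers the closed forms $m_1 = c$, $m_2 = c + c^2$, $m_3 = c + 3c^2 + c^3$:

```python
# Marchenko-Pastur moments from the free cumulants kappa_n = c:
# m_n = sum over non-crossing partitions pi of c^{number of blocks}.
from itertools import combinations

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def is_crossing(pi):
    for V, W in combinations(pi, 2):
        for a, c in combinations(sorted(V), 2):
            for b, d in combinations(sorted(W), 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

def mp_moment(n, c):
    return sum(c ** len(pi)
               for pi in set_partitions(list(range(1, n + 1)))
               if not is_crossing(pi))

c = 2.0
m1, m2, m3 = (mp_moment(n, c) for n in (1, 2, 3))
# closed forms: m1 = c, m2 = c + c^2, m3 = c + 3c^2 + c^3
```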

9. random matrices and asymptotic freeness
◮ $X_N = X_N^* = \frac{1}{\sqrt{N}}(x_{ij})_{ij}$ an $N \times N$ self-adjoint random matrix with $x_{ij}$ independent complex Gaussians with $E(x_{ij}) = 0$ and $E(|x_{ij}|^2) = 1$ (modulo self-adjointness)
◮ Voiculescu's big theorem: for large $N$ the mixed moments of independent GUE matrices $X_N$ and $Y_N$ are close to those of freely independent semi-circular operators (thus they are asymptotically free)
(figure: eigenvalue histograms of $X_{1000} + X_{1000}^2$ and $X_{1000} + (X_{1000}^T)^2$)
◮ (with M. Popa) transposing a matrix can free it from itself
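The "transposing can free a matrix from itself" phenomenon has a simple moment test (our sketch, assuming NumPy): if $X$ and $X^T$ are asymptotically free standard semicirculars, then $X + X^T$ is semicircular with variance $2$, so $\mathrm{tr}((X + X^T)^4) \to 2 \cdot 2^2 = 8$; if $X^T$ behaved like $X$ itself, $\mathrm{tr}((2X)^4) \to 32$ instead.

```python
# Fourth moment of X + X^T for a GUE matrix X: close to 8, the fourth
# moment of a semicircular of variance 2, as freeness of X and X^T predicts.
import numpy as np

def gue(N, rng):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2)
    return H / np.sqrt(N)

rng = np.random.default_rng(2)
N = 800
X = gue(N, rng)
S = X + X.T                      # note: the transpose X.T, not the adjoint
tr = lambda M: np.trace(M).real / N
m2 = tr(S @ S)                   # ~ 2, the sum of the two variances
m4 = tr(S @ S @ S @ S)           # ~ 8 if X and X^T are asymptotically free
```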

10. Wishart Random Matrices
◮ suppose $G_1, \ldots, G_{d_1}$ are $d_2 \times p$ random matrices where $G_i = (g^{(i)}_{jk})_{jk}$ and the $g^{(i)}_{jk}$ are complex Gaussian random variables with mean $0$ and (complex) variance $1$, i.e. $E(|g^{(i)}_{jk}|^2) = 1$; moreover suppose that the random variables $\{g^{(i)}_{jk}\}_{i,j,k}$ are independent
◮ then
$$W = \frac{1}{d_1 d_2} \begin{pmatrix} G_1 \\ \vdots \\ G_{d_1} \end{pmatrix} \begin{pmatrix} G_1^* & \cdots & G_{d_1}^* \end{pmatrix} = \frac{1}{d_1 d_2} \big( G_i G_j^* \big)_{ij}$$
is a $d_1 d_2 \times d_1 d_2$ Wishart matrix; we write $W = d_1^{-1}(W(i, j))_{ij}$ as a $d_1 \times d_1$ block matrix with each entry the $d_2 \times d_2$ matrix $W(i, j) = d_2^{-1} G_i G_j^*$
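The block description can be verified structurally (our own sketch, assuming NumPy): stacking the $G_i$ and forming $W = \frac{1}{d_1 d_2} G G^*$ gives exactly the block matrix with $(i, j)$ block $\frac{1}{d_1 d_2} G_i G_j^*$.

```python
# Build W two ways, as a product of stacked matrices and blockwise, and
# check they agree.
import numpy as np

rng = np.random.default_rng(3)
d1, d2, p = 3, 4, 5
Gs = [(rng.standard_normal((d2, p)) + 1j * rng.standard_normal((d2, p)))
      / np.sqrt(2) for _ in range(d1)]

G = np.vstack(Gs)                           # (d1*d2) x p
W = G @ G.conj().T / (d1 * d2)              # d1*d2 x d1*d2 Wishart matrix

# blockwise: the (i, j) block is G_i G_j^* / (d1*d2)
W_blocks = np.block([[Gi @ Gj.conj().T / (d1 * d2) for Gj in Gs] for Gi in Gs])
structural_ok = bool(np.allclose(W, W_blocks))
```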

11. Partial Transposes on $M_{d_1}(\mathbb{C}) \otimes M_{d_2}(\mathbb{C})$
◮ $G_i$ is a $d_2 \times p$ matrix
◮ $W(i, j) = \frac{1}{d_2} G_i G_j^*$, a $d_2 \times d_2$ matrix
◮ $W = \frac{1}{d_1}(W(i, j))_{ij}$ is a $d_1 \times d_1$ block matrix with entries $W(i, j)$
◮ $W^T = \frac{1}{d_1}(W(j, i)^T)_{ij}$ is the "full" transpose
◮ ${}^\Gamma W = \frac{1}{d_1}(W(j, i))_{ij}$ is the "left" partial transpose
◮ $W^\Gamma = \frac{1}{d_1}(W(i, j)^T)_{ij}$ is the "right" partial transpose
◮ we assume that $\frac{p}{d_1 d_2} \to c$, $0 < c < \infty$
◮ the eigenvalue distributions of $W$ and $W^T$ converge to Marchenko-Pastur with parameter $c$, and the eigenvalues of $W^\Gamma$ and ${}^\Gamma W$ converge to a shifted semi-circular with mean $c$ and variance $c$ (Aubrun, 2012)
◮ $W$ and $W^T$ are asymptotically free (M. and Popa, 2014)
◮ (main theorem) the matrices $\{W, {}^\Gamma W, W^\Gamma, W^T\}$ form an asymptotically free family
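The right partial transpose is an index reshuffle, and Aubrun's result shows up already in the third moment (our sketch, assuming NumPy): $W$ and $W^\Gamma$ share $\mathrm{tr}(W) \approx c$ and $\mathrm{tr}(W^2) \approx c + c^2$, but $\mathrm{tr}(W^3) \approx c + 3c^2 + c^3$ (Marchenko-Pastur) while $\mathrm{tr}((W^\Gamma)^3) \approx 3c^2 + c^3$ (semi-circular with mean $c$, variance $c$).

```python
# Right partial transpose via a reshape, with a third-moment comparison
# at c = 1: tr(W^3) ~ 5 versus tr((W^Gamma)^3) ~ 4.
import numpy as np

def partial_transpose_right(W, d1, d2):
    """Transpose each d2 x d2 block of the d1 x d1 block matrix W."""
    T = W.reshape(d1, d2, d1, d2)        # indices (i, a, j, b)
    T = T.transpose(0, 3, 2, 1)          # (i, b, j, a): each block transposed
    return T.reshape(d1 * d2, d1 * d2)

rng = np.random.default_rng(4)
d1 = d2 = 30
c = 1.0
p = int(c * d1 * d2)
G = (rng.standard_normal((d1 * d2, p)) + 1j * rng.standard_normal((d1 * d2, p))) / np.sqrt(2)
W = G @ G.conj().T / (d1 * d2)
WG = partial_transpose_right(W, d1, d2)

tr = lambda M: np.trace(M).real / (d1 * d2)
m3_W = tr(W @ W @ W)        # ~ c + 3c^2 + c^3 = 5 at c = 1
m3_WG = tr(WG @ WG @ WG)    # ~ 3c^2 + c^3 = 4 at c = 1
```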

12. graphs and graph sums (with Roland Speicher)
◮ a graph means a finite oriented graph, possibly with loops and multiple edges
◮ a graph sum means: attach a matrix to each edge and sum over the vertices
◮ examples: a 3-cycle through $i, j, k$ carrying $T_1, T_2, T_3$ gives $\sum_{i,j,k} t^{(1)}_{ij} t^{(2)}_{jk} t^{(3)}_{ki}$; a single edge from $i$ to $j$ gives $\sum_{i,j} t_{ij}$; a loop at $i$ gives $\sum_i t_{ii}$

13. graph sums and their growth
◮ given a graph $G = (V, E)$ and an assignment $e \mapsto T_e \in M_N(\mathbb{C})$ we have a graph sum
$$S_G(T) = \sum_{i : V \to [N]} \prod_{e \in E} t^{(e)}_{i_{t(e)} i_{s(e)}}$$
◮ problem: find the "best" $r(G) \in \mathbb{R}^+$ such that for all $T$ we have
$$|S_G(T)| \leq N^{r(G)} \prod_{e \in E} \|T_e\|$$
◮ for example: $|S_G(T_1, T_2, T_3)| \leq N^{3/2} \|T_1\| \|T_2\| \|T_3\|$ when $G$ is the graph on the vertices $i, j, k, l$ with edges carrying $T_1, T_2, T_3$ (figure)
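The defining sum can be evaluated by brute force for small $N$ (our sketch, assuming NumPy; note the definition puts the target index in the row position, so a 3-cycle $i \to j \to k \to i$ carrying $T_1, T_2, T_3$ gives $\mathrm{Tr}(T_3 T_2 T_1)$):

```python
# Direct evaluation of S_G(T) over all vertex assignments i : V -> [N],
# checked against the trace formula on a 3-cycle.
from itertools import product
import numpy as np

def graph_sum(vertices, edges, N):
    """edges: list of (source, target, matrix); each edge e contributes
    the entry T_e[i(target), i(source)]."""
    total = 0.0
    for assignment in product(range(N), repeat=len(vertices)):
        idx = dict(zip(vertices, assignment))
        term = 1.0
        for s, t, T in edges:
            term *= T[idx[t], idx[s]]
        total += term
    return total

N = 4
rng = np.random.default_rng(5)
T1, T2, T3 = (rng.standard_normal((N, N)) for _ in range(3))

# 3-cycle i -> j -> k -> i carrying T1, T2, T3
S = graph_sum(["i", "j", "k"],
              [("i", "j", T1), ("j", "k", T2), ("k", "i", T3)], N)
trace_value = np.trace(T3 @ T2 @ T1)
```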

14. finding the growth (J.F.A. 2012)
(figure: a twelve-edge example graph with vertex identifications $i_2 = i_3$, $i_1 = i_5 = i_6$, $i_7 = i_8$, for which $r = 3$)
◮ an edge is cutting if its removal disconnects the graph
◮ a graph is two-edge connected if it has no cutting edge
◮ a two-edge connected component is a two-edge connected subgraph which is maximal
◮ we make a quotient graph whose vertices are the two-edge connected components of the old graph and whose edges are the cutting edges of the old graph
◮ $r(G)$ is $\frac{1}{2}$ the number of leaves of the quotient graph (always a union of trees)
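The recipe above can be coded directly (our own sketch; it follows the slide's formula, treating the graph as undirected for connectivity): find the cutting (bridge) edges, contract each two-edge-connected component to a quotient vertex, and take half the number of leaves.

```python
# Growth exponent r(G): half the number of leaves of the forest obtained by
# contracting two-edge-connected components along the cutting edges.
from collections import defaultdict

def bridges(n, edges):
    """Cutting edges of an undirected multigraph on vertices 0..n-1."""
    adj = defaultdict(list)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    disc, low, out, timer = {}, {}, [], [0]

    def dfs(u, parent_eid):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v, eid in adj[u]:
            if eid == parent_eid:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:       # no back edge over (u, v): cutting
                    out.append(eid)
    for u in range(n):
        if u not in disc:
            dfs(u, None)
    return set(out)

def growth_exponent(n, edges):
    cut = bridges(n, edges)
    comp = list(range(n))                  # union-find over vertices
    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]; x = comp[x]
        return x
    for eid, (u, v) in enumerate(edges):
        if eid not in cut:                 # contract non-cutting edges
            comp[find(u)] = find(v)
    deg = defaultdict(int)                 # quotient-forest degrees
    for eid in cut:
        u, v = edges[eid]
        deg[find(u)] += 1
        deg[find(v)] += 1
    leaves = sum(1 for d in deg.values() if d == 1)
    return leaves / 2

# two triangles joined by one cutting edge: the quotient is a single edge
# with two leaves, so r(G) = 1
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
r = growth_exponent(6, edges)
```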

15. Conclusion: traces and graph sums
◮ $X = {}^\Gamma W$ is the partially transposed Wishart matrix, but now we no longer assume the entries are Gaussian
◮ we let $A_1, A_2, \ldots, A_n$ be $d_1 d_2 \times d_1 d_2$ constant matrices
◮ compute $E(\mathrm{Tr}(X A_1 X A_2 \cdots X A_n))$; when $A_i = I$ we get the $n$-th moment of the eigenvalue distribution
◮ integrating out the $X$'s leaves a sum of graph sums, one for each partition $\pi \in \mathcal{P}(n)$
(figure: the graph $G_\pi$ for $\pi = (1, -3)(-1, 3)(2, -2)(4, -4)$, with edges labelled by the $X$'s and the $A_i$'s)
◮ thm: the only $\pi$'s for which $r(G_\pi)$ is large enough ($n/2 + 1$ in this case) are the non-crossing partitions with blocks of size $1$ or $2$ (corresponding to the free cumulants $\kappa_1$ and $\kappa_2$)
◮ thm: the method extends to showing that $\{W, {}^\Gamma W, W^\Gamma, W^T\}$ is an asymptotically free family
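The combinatorics behind the first theorem can be checked in small cases (our own sketch): summing $c^{\#\pi}$ over the surviving partitions, the non-crossing ones with blocks of size $1$ or $2$ (free cumulants $\kappa_1 = \kappa_2 = c$, the rest $0$), reproduces the moments of the shifted semi-circular law with mean $c$ and variance $c$.

```python
# Moments from NC partitions with blocks of size <= 2 versus the moments
# E (s + c)^n of a semicircular s with mean 0 and variance c.
from itertools import combinations
from math import comb

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def is_crossing(pi):
    for V, W in combinations(pi, 2):
        for a, c in combinations(sorted(V), 2):
            for b, d in combinations(sorted(W), 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

def nc12_moment(n, c):
    return sum(c ** len(pi)
               for pi in set_partitions(list(range(1, n + 1)))
               if not is_crossing(pi) and all(len(B) <= 2 for B in pi))

def shifted_semicircle_moment(n, c):
    """E (s + c)^n for s semicircular with mean 0 and variance c."""
    total = 0.0
    for k in range(0, n + 1, 2):                  # odd moments of s vanish
        catalan = comb(k, k // 2) // (k // 2 + 1)
        total += comb(n, k) * c ** (n - k) * catalan * c ** (k // 2)
    return total

c = 2.0
moments_match = all(abs(nc12_moment(n, c) - shifted_semicircle_moment(n, c)) < 1e-9
                    for n in range(1, 7))
```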
