The discriminant is an invariant
  1. The discriminant is an invariant

\[
P(x) = x^n - e_1 x^{n-1} + e_2 x^{n-2} + \cdots + (-1)^{n-1} e_{n-1} x + (-1)^n e_n
\]

\[
\Delta(P) = \pm
\begin{vmatrix}
1 & e_1 & e_2 & \cdots & e_{n-1} & e_n & 0 & \cdots & 0 \\
0 & 1 & e_1 & e_2 & \cdots & e_{n-1} & e_n & \cdots & 0 \\
\vdots & & \ddots & & & & & \ddots & \vdots \\
0 & \cdots & 0 & 1 & e_1 & e_2 & \cdots & e_{n-1} & e_n \\
n & (n-1)e_1 & (n-2)e_2 & \cdots & e_{n-1} & 0 & \cdots & \cdots & 0 \\
0 & n & (n-1)e_1 & \cdots & \cdots & e_{n-1} & 0 & \cdots & 0 \\
\vdots & & \ddots & & & & & \ddots & \vdots \\
0 & \cdots & 0 & n & (n-1)e_1 & (n-2)e_2 & \cdots & \cdots & e_{n-1}
\end{vmatrix}
\]

Δ is the resultant of P and P′. Δ(P) ≠ 0 iff P has only simple roots.
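As a quick sanity check of this definition, one can compare the resultant of P and P′ with the discriminant for a generic cubic. The sketch below uses SymPy (`resultant`, `discriminant`); the choice of tool and of degree 3 are illustration choices, not part of the slides.

```python
import sympy as sp

x, e1, e2, e3 = sp.symbols('x e1 e2 e3')

# Generic monic cubic written with the elementary symmetric
# functions e1, e2, e3 of its roots, as on the slide.
P = x**3 - e1*x**2 + e2*x - e3

# Delta is, up to sign, the resultant of P and P'.
res = sp.resultant(P, sp.diff(P, x), x)
disc = sp.discriminant(P, x)
assert sp.expand(res - disc) == 0 or sp.expand(res + disc) == 0

# Delta(P) != 0 iff P has only simple roots:
assert sp.discriminant((x - 1)*(x - 2)*(x - 3), x) != 0   # simple roots
assert sp.discriminant((x - 1)**2 * (x - 2), x) == 0      # double root
```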

  2. Discriminant as a symmetric function

\[
\Delta(x_1, \dots, x_n) = \pm \prod_{i \neq j} (x_i - x_j) = \pm \prod_{i<j} (x_i - x_j)^2 .
\]

In the Schur basis:

\[
\begin{aligned}
\Delta(x_1, x_2) &= -s_2 + 3\, s_{1,1} \\
\Delta(x_1, x_2, x_3) &= -s_{4,2} + 3\, s_{4,1,1} + 3\, s_{3,3} - 6\, s_{3,2,1} + 15\, s_{2,2,2}
\end{aligned}
\]

Δ(x_1, …, x_4): 16 terms; Δ(x_1, …, x_5): 59 terms; Δ(x_1, …, x_6): 247 terms; Δ(x_1, …, x_7): 1111 terms.
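The small Schur expansions can be verified mechanically. The sketch below builds Schur polynomials from the bialternant formula s_λ = a_{λ+δ}/a_δ (a standard identity, used here as an implementation device, not taken from the slide) and checks the two- and three-variable expansions against ±∏(x_i − x_j)².

```python
import sympy as sp
from itertools import combinations

def schur(lam, xs):
    """Schur polynomial s_lambda via the bialternant (Jacobi) formula."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: xs[i] ** (lam[j] + n - 1 - j)).det()
    den = sp.Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j)).det()
    return sp.cancel(num / den)

x1, x2, x3 = sp.symbols('x1 x2 x3')

# n = 2 : -s_2 + 3 s_{1,1} equals -(x1 - x2)^2.
lhs2 = -schur([2], [x1, x2]) + 3 * schur([1, 1], [x1, x2])
assert sp.expand(lhs2 + (x1 - x2) ** 2) == 0

# n = 3 : -s_{4,2} + 3 s_{4,1,1} + 3 s_{3,3} - 6 s_{3,2,1} + 15 s_{2,2,2}
# equals -prod_{i<j} (x_i - x_j)^2.
xs = [x1, x2, x3]
lhs3 = (-schur([4, 2], xs) + 3 * schur([4, 1, 1], xs) + 3 * schur([3, 3], xs)
        - 6 * schur([3, 2, 1], xs) + 15 * schur([2, 2, 2], xs))
prod3 = sp.prod([(a - b) ** 2 for a, b in combinations(xs, 2)])
assert sp.expand(lhs3 + prod3) == 0
```

Both expansions come out proportional to the squared difference product with an overall minus sign, matching the ± convention on the slide.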

  3. Discriminant as the square of the Vandermonde determinant

\[
\Delta(x_1, \dots, x_n) = \det
\begin{pmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & & & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{pmatrix}^{2}
\]
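This identity is easy to confirm for any fixed n; the sketch below does n = 4 with SymPy (an arbitrary small choice).

```python
import sympy as sp
from itertools import combinations

xs = sp.symbols('x1:5')  # x1, x2, x3, x4
n = len(xs)

# Vandermonde matrix: row i is (1, x_i, x_i^2, ..., x_i^{n-1}).
V = sp.Matrix(n, n, lambda i, j: xs[i] ** j)

delta = sp.prod([(a - b) ** 2 for a, b in combinations(xs, 2)])
assert sp.expand(V.det() ** 2 - delta) == 0
```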

  4. Invariant Theory and its applications

• Involved in the classification of entanglement. From a proposal of Klyachko: use (geometric) invariant theory to classify quantum systems of particles (qubit systems). 8 papers with J.-Y. Thibon and several co-authors.

• Used for computing hyperdeterminants

\[
\mathrm{Det}\,(M_{i_1,\dots,i_{2k}})_{1 \le i_1,\dots,i_{2k} \le N}
= \frac{1}{N!} \sum_{\sigma_1,\dots,\sigma_{2k} \in S_N}
\epsilon(\sigma_1) \cdots \epsilon(\sigma_{2k}) \prod_{i=1}^{N} M_{\sigma_1(i) \dots \sigma_{2k}(i)}
\]

of Hankel type (i.e. M_{i_1,\dots,i_{2k}} = f(i_1 + \cdots + i_{2k})).
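The combinatorial hyperdeterminant formula can be coded directly as a sum over tuples of permutations. Below is a minimal sketch (a naive enumeration, exponential in N and k, for illustration only); for 2k = 2 the formula must collapse to the ordinary determinant, which the assertion checks.

```python
import sympy as sp
from itertools import permutations, product

def perm_sign(p):
    """Signature of a permutation given in one-line notation."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def hyperdet(M, N, two_k):
    """1/N! * sum over (sigma_1, ..., sigma_{2k}) of
    eps(sigma_1)...eps(sigma_{2k}) * prod_i M[sigma_1(i), ..., sigma_{2k}(i)]."""
    total = sp.Integer(0)
    for sigmas in product(list(permutations(range(N))), repeat=two_k):
        sign = 1
        for s in sigmas:
            sign *= perm_sign(s)
        term = sp.Integer(1)
        for i in range(N):
            term *= M[tuple(s[i] for s in sigmas)]
        total += sign * term
    return total / sp.factorial(N)

# Sanity check: for 2k = 2 the formula is the ordinary determinant.
m = sp.Matrix(2, 2, list(sp.symbols('m:2:2')))
M2 = {(i, j): m[i, j] for i in range(2) for j in range(2)}
assert sp.expand(hyperdet(M2, 2, 2) - m.det()) == 0
```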

  5. Random matrices

The joint probability density of the eigenvalues λ_1, λ_2, …, λ_N of the GOE / GUE / GSE (Gaussian Orthogonal / Unitary / Symplectic ensembles) is

\[
P_{N,\beta}(\lambda_1, \dots, \lambda_N)
= C_{N,\beta} \prod_{i<j} |\lambda_i - \lambda_j|^{\beta}
\exp\!\Big( -\sum_{i=1}^{N} \frac{\lambda_i^2}{2} \Big) \prod_i d\lambda_i ,
\]

where β = 1 (O), 2 (U), 4 (S).

Selberg integral:

\[
S(N; \alpha, \beta; \gamma)
:= \int_0^1 \!\cdots\! \int_0^1 \prod_{i<j} |\lambda_i - \lambda_j|^{2\gamma}
\prod_{j=1}^{N} \lambda_j^{\alpha-1} (1 - \lambda_j)^{\beta-1} \, d\lambda_j
= \prod_{j=0}^{N-1}
\frac{\Gamma(1 + \gamma + j\gamma)\, \Gamma(\alpha + j\gamma)\, \Gamma(\beta + j\gamma)}
     {\Gamma(1 + \gamma)\, \Gamma(\alpha + \beta + (N + j - 1)\gamma)} .
\]

Selberg's proof: for γ ∈ ℕ, extended to γ ∈ ℂ using analytic tools (Carlson's theorem).
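The product side of the Selberg integral is straightforward to evaluate numerically. A small sanity check with `math.gamma`: for N = 2 and α = β = γ = 1 the left-hand side is ∫₀¹∫₀¹ (λ₁ − λ₂)² dλ₁ dλ₂ = 1/6 (by direct integration), and the product formula agrees.

```python
from math import gamma

def selberg_rhs(N, a, b, g):
    """Gamma-product side of the Selberg integral S(N; alpha, beta; gamma)."""
    val = 1.0
    for j in range(N):
        val *= (gamma(1 + g + j * g) * gamma(a + j * g) * gamma(b + j * g)
                / (gamma(1 + g) * gamma(a + b + (N + j - 1) * g)))
    return val

# N = 2, alpha = beta = gamma = 1:
# int_0^1 int_0^1 (x - y)^2 dx dy = 2/3 - 1/2 = 1/6.
assert abs(selberg_rhs(2, 1, 1, 1) - 1 / 6) < 1e-12
```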

  6. Random matrices

For the same joint eigenvalue density P_{N,β}(λ_1, …, λ_N), one would like to evaluate Selberg-like integrals:

\[
\int \cdots \int f(\lambda_1, \dots, \lambda_N)
\prod_{i<j} |\lambda_i - \lambda_j|^{2\beta}
\, d\mu(\lambda_1) \cdots d\mu(\lambda_N) \; = \; ??
\]

  7. A first question

Physicists want to write the Laughlin wave function in terms of Slater wave functions (wave functions of a multi-fermionic system, due to John C. Slater, 1929). Combinatorially: expand the powers of the discriminant on Schur functions. Computation: usually by numerical methods. No combinatorial interpretation.

◮ P. Di Francesco, M. Gaudin, C. Itzykson, F. Lesage, Laughlin's wave functions, Coulomb gases and expansions of the discriminant, Int. J. Mod. Phys. (1994)
◮ T. Scharf, J.-Y. Thibon, B.G. Wybourne, Powers of the Vandermonde determinant and the quantum Hall effect, J. Phys. (1994)
◮ R.C. King, F. Toumazet, B.G. Wybourne, The square of the Vandermonde determinant and its q-generalization, J. Phys. A (2004)

  8. Second question

What about the other values of the filling factor ν?

\[
\begin{array}{ccc}
\nu = \dfrac{1}{2m} & \nu = \dfrac{1}{2m+1} & \nu = \dfrac{p}{2pm+2} \\[1ex]
\text{Laughlin} & \text{Moore-Read} & \text{Read-Rezayi} \\[1ex]
\displaystyle \prod_{i<j} (z_i - z_j)^{2m} &
\displaystyle \mathrm{Pf}\!\left( \frac{1}{z_i - z_j} \right) \prod_{i<j} (z_i - z_j)^{2m+1} &
\displaystyle S\!\left[ \prod_{k=1}^{p} \; \prod_{(k-1)\frac{N}{p} < i < j \le k\frac{N}{p}} (z_i - z_j)^2 \right] \prod_{i<j} (z_i - z_j)^{2m}
\end{array}
\]

Bernevig-Haldane: general expression for ν = k/r in terms of Jack polynomials:

\[
J^{-\frac{k+1}{r-1}}_{((p-1)r)^k \, ((p-2)r)^k \cdots (r)^k \, 0^k}(z_1, \dots, z_{pk}).
\]

  9. Why Jack polynomials? Not the solutions of the true eigenvector problem, but adiabatically equivalent. Conditions that the wave function must fulfill:

◮ Eigenfunction of a Laplace-Beltrami type operator with dominance properties (α = −(k+1)/(r−1)):

\[
H^{(\alpha)}_{LB} = \sum_{i=1}^{N} \left( z_i \frac{\partial}{\partial z_i} \right)^2
+ \frac{1}{\alpha} \sum_{i<j} \frac{z_i + z_j}{z_i - z_j}
\left( z_i \frac{\partial}{\partial z_i} - z_j \frac{\partial}{\partial z_j} \right)
\]

(Jack polynomials).
◮ In the kernel of L_+ = ∑_i ∂/∂z_i : invariant by translation; highest weight; singular.
◮ Eigenfunction of L_0 = ∑_i z_i ∂/∂z_i : homogeneous.
◮ In the kernel of L_− = ∑_i z_i² ∂/∂z_i : lowest weight.
◮ Clustering conditions.
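The Laplace-Beltrami type operator above can be applied symbolically. The sketch below checks it on the classical small example P₍₂₎(z₁, z₂; α) = m₍₂₎ + 2/(α+1) m₍₁,₁₎ (this explicit Jack polynomial and its eigenvalue 4 + 2/α are standard facts brought in for illustration, not taken from the slide).

```python
import sympy as sp

z1, z2, alpha = sp.symbols('z1 z2 alpha')

def H_LB(f, zs, a):
    """sum_i (z_i d/dz_i)^2 f
    + (1/a) sum_{i<j} (z_i+z_j)/(z_i-z_j) (z_i df/dz_i - z_j df/dz_j)."""
    out = sp.Integer(0)
    for z in zs:
        out += z * sp.diff(z * sp.diff(f, z), z)          # (z d/dz)^2
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            zi, zj = zs[i], zs[j]
            out += ((zi + zj) / (zi - zj)
                    * (zi * sp.diff(f, zi) - zj * sp.diff(f, zj))) / a
    return sp.cancel(out)

# Jack polynomial P_(2)(z1, z2; alpha) = m_(2) + 2/(alpha+1) m_(1,1):
J = z1**2 + z2**2 + 2 * z1 * z2 / (alpha + 1)

# Eigenfunction with eigenvalue 4 + 2/alpha:
assert sp.simplify(H_LB(J, [z1, z2], alpha) - (4 + 2 / alpha) * J) == 0
```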

  10. Why Jack polynomials? Clustering conditions at ν = k/r: k particles cluster. Setting z_1 = · · · = z_k, the wave function must vanish as

\[
\prod_{i=k+1}^{N} (z_1 - z_i)^r .
\]

Related to Feigin et al. (math.QA/0112127): wheel conditions.

  11. What are Jack polynomials? Hecke algebra. Action on multivariate polynomials:

\[
T_i = t + (s_i - 1)\, \frac{t x_{i+1} - x_i}{x_{i+1} - x_i} .
\]

In particular 1 · T_i = t and x_{i+1} · T_i = x_i. Together with multiplication by the variables x_i, we use the affine operator τ defined by

\[
\tau f(x_1, \dots, x_N) = f(x_N q, x_1, \dots, x_{N-1}) .
\]

We have:
◮ T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1} (braid relation)
◮ T_i T_j = T_j T_i for |i − j| > 1
◮ (T_i − t)(T_i + 1) = 0
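The operator T_i and its relations are easy to test symbolically. The sketch below implements the formula with SymPy and checks the quadratic and braid relations, plus the two evaluations 1 · T_i = t and x_{i+1} · T_i = x_i, which follow directly from the formula.

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
xs = [x1, x2, x3]

def T(f, i):
    """T_i: f -> t f + (s_i f - f) (t x_{i+1} - x_i)/(x_{i+1} - x_i)."""
    a, b = xs[i - 1], xs[i]
    swapped = f.subs({a: b, b: a}, simultaneous=True)   # s_i f
    return sp.cancel(t * f + (swapped - f) * (t * b - a) / (b - a))

f = x1**2 * x2 + 3 * x3

# Quadratic relation (T_i - t)(T_i + 1) = 0, i.e. T_i^2 = (t - 1) T_i + t:
assert sp.simplify(T(T(f, 1), 1) - (t - 1) * T(f, 1) - t * f) == 0
# Braid relation T_1 T_2 T_1 = T_2 T_1 T_2:
assert sp.simplify(T(T(T(f, 1), 2), 1) - T(T(T(f, 2), 1), 2)) == 0
# Special evaluations:
assert sp.simplify(T(sp.Integer(1), 1) - t) == 0
assert sp.simplify(T(x2, 1) - x1) == 0
```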

  12. What are Jack polynomials? Cherednik operators and Macdonald polynomials.

Cherednik operators:
\[
\xi_i = t^{1-i}\, T_{i-1} \cdots T_1 \, \tau \, T_{N-1}^{-1} \cdots T_i^{-1} .
\]
Knop-Cherednik operators:
\[
\Xi_i = t^{1-i}\, T_{i-1} \cdots T_1 \, (\tau - 1) \, T_{N-1}^{-1} \cdots T_i^{-1} .
\]
Non-symmetric Macdonald polynomials: E_v = (∗) x_1^{v[1]} ⋯ x_N^{v[N]} + ⋯, simultaneous eigenfunctions of the ξ_i.
Non-symmetric shifted Macdonald polynomials: M_v = (∗) x_1^{v[1]} ⋯ x_N^{v[N]} + ⋯, simultaneous eigenfunctions of the Ξ_i.
Remark:
\[
M_v = E_v + \sum_{|u| < |v|} \alpha_u E_u .
\]

  13. What are Jack polynomials? Spectral vectors and vanishing properties.

v = [0, 1, 2, 2, 0, 1]. Standardized: std v = [1, 3, 5, 4, 0, 2]. Spectral vector: Spec_v[i] = ⟨v⟩[i] with

\[
\langle v \rangle = [\, t,\ q t^3,\ q^2 t^5,\ q^2 t^4,\ 1,\ q t^2 \,] .
\]

Shifted Macdonald polynomials can alternatively be defined by vanishing properties:

\[
M_v(\langle u \rangle[1], \dots, \langle u \rangle[N]) = 0
\quad \text{for } |u| \le |v| \text{ and } u \neq v .
\]
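The standardization and spectral vector can be reproduced with two short functions. The rules below (rank entries by value, breaking ties from right to left, and set ⟨v⟩[i] = q^{v[i]} t^{std v[i]}) are inferred from the slide's single example, so they should be read as an assumption checked against that example.

```python
import sympy as sp

q, t = sp.symbols('q t')

def std(v):
    """Standardization: rank of each entry by value,
    ties broken right to left (inferred convention)."""
    return [sum(1 for w in v if w < x) + sum(1 for w in v[i + 1:] if w == x)
            for i, x in enumerate(v)]

def spec(v):
    """Spectral vector <v>[i] = q^{v[i]} t^{std(v)[i]} (inferred rule)."""
    return [q ** vi * t ** si for vi, si in zip(v, std(v))]

v = [0, 1, 2, 2, 0, 1]
assert std(v) == [1, 3, 5, 4, 0, 2]
assert spec(v) == [t, q*t**3, q**2*t**5, q**2*t**4, 1, q*t**2]
```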


  15. What are Jack polynomials? Yang-Baxter graph.

Two kinds of steps on vectors v:
◮ (i, i+1), allowed when v_i < v_{i+1} (swap the two entries);
◮ Φ : v · Φ = [v_2, …, v_N, v_1 + 1].

For N = 3 the graph begins:

[000] --Φ--> [001] --(23)--> [010] --(12)--> [100] --Φ--> [002] --(23)--> [020] --Φ--> [201]
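The graph itself is easy to generate. The sketch below enumerates the vectors reachable from [0, 0, 0] under the two kinds of steps and confirms that every node displayed on the slide appears.

```python
def phi(v):
    """Affine step: v . Phi = [v2, ..., vN, v1 + 1]."""
    return tuple(v[1:]) + (v[0] + 1,)

def steps(v):
    """All vectors reachable from v in one step of the Yang-Baxter graph."""
    out = [phi(v)]
    for i in range(len(v) - 1):
        if v[i] < v[i + 1]:       # (i, i+1) allowed only when v_i < v_{i+1}
            w = list(v)
            w[i], w[i + 1] = w[i + 1], w[i]
            out.append(tuple(w))
    return out

# Breadth-first search from [0,0,0]; six steps suffice to reach [2,0,1].
frontier, seen = [(0, 0, 0)], {(0, 0, 0)}
for _ in range(6):
    frontier = [w for v in frontier for w in steps(v)]
    seen.update(frontier)

for node in [(0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 0, 2), (0, 2, 0), (2, 0, 1)]:
    assert node in seen
```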

  16. What are Jack polynomials? Yang-Baxter graph.

The non-symmetric Macdonald polynomials are built along the graph:

\[
E_{v \cdot (i, i+1)} = E_v \left( T_i + \frac{1 - t}{1 - \langle v \rangle[i+1] / \langle v \rangle[i]} \right),
\qquad
E_{v \cdot \Phi} = E_v \, \tau \, x_N .
\]

On the N = 3 graph, each step (i, i+1) applies T_i + ⋆ and each step Φ applies τ x_N:

E_000 --τ x_N--> E_001 --T_2 + ⋆--> E_010 --T_1 + ⋆--> E_100 --τ x_N--> E_002 --T_2 + ⋆--> E_020 --τ x_N--> E_201



  19. What are Jack polynomials? Yang-Baxter graph.

The same graph computes the shifted Macdonald polynomials, with the affine step τ x_N replaced by τ(x_N − 1):

\[
M_{v \cdot (i, i+1)} = M_v \left( T_i + \frac{1 - t}{1 - \langle v \rangle[i+1] / \langle v \rangle[i]} \right),
\qquad
M_{v \cdot \Phi} = M_v \, \tau \, (x_N - 1) .
\]

M_000 --τ(x_N − 1)--> M_001 --T_2 + ⋆--> M_010 --T_1 + ⋆--> M_100 --τ(x_N − 1)--> M_002 --T_2 + ⋆--> M_020 --τ(x_N − 1)--> M_201


  21. What are Jack polynomials? Symmetrization.

Symmetrizing operator:

\[
S_N = \sum_{\sigma \in S_N} T_\sigma ,
\]

where T_σ = T_{i_1} ⋯ T_{i_k} if σ = s_{i_1} ⋯ s_{i_k} is a shortest decomposition of σ into elementary transpositions.

Symmetric Macdonald polynomials: eigenfunctions of the symmetric polynomials in the variables ξ_i; J_λ = (∗) E_λ · S_N.
Symmetric shifted Macdonald polynomials: eigenfunctions of the symmetric polynomials in the variables Ξ_i; MS_λ = (∗) M_λ · S_N.
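The symmetrizer can be tested on a small example: summing T_σ over S₃ must produce a polynomial symmetric in all the variables. The sketch below reads a reduced word for each permutation off a bubble sort (an implementation choice; any reduced word gives the same T_σ thanks to the braid relations).

```python
import sympy as sp
from itertools import permutations

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
xs = [x1, x2, x3]

def T(f, i):
    """T_i: f -> t f + (s_i f - f) (t x_{i+1} - x_i)/(x_{i+1} - x_i)."""
    a, b = xs[i - 1], xs[i]
    swapped = f.subs({a: b, b: a}, simultaneous=True)
    return sp.cancel(t * f + (swapped - f) * (t * b - a) / (b - a))

def reduced_word(sigma):
    """A reduced word for sigma (one-line notation), read off by
    bubble-sorting sigma to the identity: one letter per inversion."""
    word, s = [], list(sigma)
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            if s[i] > s[i + 1]:
                s[i], s[i + 1] = s[i + 1], s[i]
                word.append(i + 1)
                changed = True
    return word

def T_word(f, word):
    for i in word:
        f = T(f, i)
    return f

# S_3 f = sum over sigma of f . T_sigma, applied to a non-symmetric monomial:
f = x1**2 * x2
Sf = sp.simplify(sum(T_word(f, reduced_word(p)) for p in permutations(range(3))))

# The result is symmetric in x1, x2, x3:
assert sp.simplify(Sf - Sf.subs({x1: x2, x2: x1}, simultaneous=True)) == 0
assert sp.simplify(Sf - Sf.subs({x2: x3, x3: x2}, simultaneous=True)) == 0
```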
