

  1. CLT for Kostlan Shub Smale polynomial systems Joint work with D. Armentano; J-M Azaïs & J. León Federico Dalmao Random Waves in Oxford 21 June 2018

  2. Plan of the Talk
  In this talk we are concerned with the asymptotic distribution of the geometric measure of the set of real roots of a polynomial system.
  ◮ Introduction
  ◮ The model and the results
  ◮ Dimension 1
  ◮ The general case
  ◮ Chaotic expansion
  ◮ Bounding the tail

  3. Introduction
  Roots of random polynomials have been intensively studied. However, roots of random polynomial systems have received less attention. Most of the literature is concerned with the Kostlan Shub Smale model (and related ones) and with its mean behaviour:
  ◮ Kostlan, Shub and Smale computed the expectation of the number of real roots in the square case.
  ◮ Kostlan and Prior computed the expectation of the volume of the zero set in the rectangular case.
  ◮ Bürgisser computed the expectation of the intrinsic volumes (including number or volume, Euler characteristic, etc.).

  4. Introduction
  In the case of the variance:
  ◮ Wschebor gave asymptotics in the case where the system's size m tends to ∞ (r = m).
  ◮ Letendre (r < m), and Letendre & Puchol and us (r = m), gave the asymptotics in the case where the common degree d tends to ∞.
  There are also some results in the case of random waves:
  ◮ D-Nourdin-Peccati-Rossi (arithmetic).
  ◮ Nourdin-Peccati-Rossi (planar).

  5. The model
  For r, m ∈ N with r ≤ m, consider homogeneous polynomials
  P_ℓ(x) = Σ_{|j| = d} a_j^{(ℓ)} x^j,   ℓ = 1, ..., r,
  where we use the notations
  ◮ j = (j_0, ..., j_m) ∈ N^{m+1};
  ◮ x = (x_0, ..., x_m) ∈ R^{m+1};
  ◮ |j| = Σ_{k=0}^m j_k and x^j = Π_{k=0}^m x_k^{j_k};
  ◮ a_j^{(ℓ)} ∈ R.

  6. The model
  We assume that the random variables a_j^{(ℓ)} are independent centered Gaussian with variances
  Var(a_j^{(ℓ)}) = (d choose j) = d! / Π_{k=0}^m j_k!.
  The key point is that this choice entails a simple and nice covariance for P_ℓ, namely:
  r_d(s, t) := E(P_ℓ(s) P_ℓ(t)) = ⟨s, t⟩^d.
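This covariance identity is easy to check numerically. The following is a minimal sketch (not from the talk): it samples the coefficients with the multinomial variances above and compares the empirical covariance E(P(s)P(t)) with ⟨s, t⟩^d for small m and d.

```python
import itertools
from math import factorial

import numpy as np

rng = np.random.default_rng(0)
m, d = 2, 4  # small illustration: polynomials on R^{m+1}, degree d

# multi-indices j with |j| = d and variances d!/(j_0! ... j_m!)
J = [j for j in itertools.product(range(d + 1), repeat=m + 1) if sum(j) == d]
var = np.array([factorial(d) / np.prod([float(factorial(k)) for k in j]) for j in J])

def monomials(x):
    """The vector (x^j)_{|j| = d}."""
    return np.array([np.prod(np.asarray(x, float) ** np.array(j)) for j in J])

s = np.array([1.0, 0.0, 0.0])
t = np.array([0.6, 0.8, 0.0])                  # unit vectors with <s, t> = 0.6

a = rng.normal(scale=np.sqrt(var), size=(200_000, len(J)))   # i.i.d. coefficient draws
emp = np.mean((a @ monomials(s)) * (a @ monomials(t)))       # empirical E[P(s) P(t)]
print(emp, np.dot(s, t) ** d)                  # both close to 0.6^4 = 0.1296
```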

  7. The model
  Why this model? It has several good properties:
  ◮ (algebraic) This distribution "is natural" since it is induced by the Frobenius inner product for matrices.
  ◮ (geometric) The distribution of the polynomials is invariant under the action of the orthogonal group in R^{m+1}.
  ◮ (probabilistic) There is a lot of independence: between the polynomial and its derivatives, and among most of its derivatives.
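The geometric property can already be seen at the level of the covariance: ⟨Us, Ut⟩^d = ⟨s, t⟩^d for any orthogonal U, and a centered Gaussian field is determined by its covariance. A quick numerical sketch (illustration only, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 3, 7

# a random orthogonal matrix on R^{m+1} via QR factorization
U, _ = np.linalg.qr(rng.normal(size=(m + 1, m + 1)))
s = rng.normal(size=m + 1); s /= np.linalg.norm(s)
t = rng.normal(size=m + 1); t /= np.linalg.norm(t)

r_d = lambda s, t: np.dot(s, t) ** d     # covariance of the KSS ensemble
print(r_d(U @ s, U @ t), r_d(s, t))      # equal: the law is invariant under O(m+1)
```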

  8. The results
  We are interested in the real roots of the system P = (P_1, ..., P_r). We denote by V_d the number of real roots of P in the case r = m, or the geometric volume of the zero set of P (on the sphere) in the case r < m. As we saw before, there exists 0 < V_∞ < ∞ such that
  lim_{d→∞} Var(V_d) / d^{r − m/2} = V_∞.

  9. Main Result
  Theorem (AADL). As d → ∞ we have that
  (V_d − E(V_d)) / d^{(1/2)(r − m/2)}
  converges in distribution towards a centered normal random variable with variance V_∞.

  10. Preliminary: Dimension one
  The one dimensional case
  P(x) = Σ_{j=0}^d a_j x^j,   a_j independent ∼ N(0, (d choose j)),
  is simpler. Indeed, counting the number of real roots of P is equivalent to counting the number of roots on the sphere S^1 of its homogeneous version P_0, and this is equivalent to counting the number of real roots of
  X_d(t) = Σ_{j=0}^d a_j cos^j(t) sin^{d−j}(t)
  on the interval [0, 2π].
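As a sanity check, one can simulate X_d and count its sign changes on a fine grid. For the Kostlan ensemble the expected number of zeros of X_d on [0, 2π] is 2√d: each real root of P yields an antipodal pair of zeros, and E(number of real roots of P) = √d. A hedged Monte Carlo sketch (illustration only, not from the talk):

```python
from math import comb

import numpy as np

rng = np.random.default_rng(2)
d, n_grid, n_rep = 100, 4000, 500

t = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
j = np.arange(d + 1)
M = np.cos(t)[:, None] ** j * np.sin(t)[:, None] ** (d - j)  # basis cos^j sin^(d-j)
sd = np.sqrt([float(comb(d, k)) for k in j])                 # Kostlan std deviations

A = rng.normal(size=(n_rep, d + 1)) * sd     # i.i.d. coefficient vectors a_j
X = A @ M.T                                  # n_rep sample paths on the grid
# count zeros as sign changes along the (periodic) grid
zeros = np.sum(X * np.roll(X, -1, axis=1) < 0, axis=1)
print(zeros.mean(), 2 * np.sqrt(d))          # empirical mean vs 2*sqrt(d) = 20
```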

  11. Preliminary: Dimension one
  The process X_d is centered, Gaussian and stationary, with covariance function given by
  r_d(t) = cos^d(t).
  In particular, after the scaling x := √d · t, there exists a limit process (in the weak sense) on [0, ∞); namely a centered, stationary, Gaussian process with covariance given by
  r_∞(x) = exp(−x²/2).
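The convergence cos^d(x/√d) → exp(−x²/2) behind this limit is elementary (take logarithms) and easy to observe numerically; a small sketch:

```python
import numpy as np

x = np.linspace(0.0, 3.0, 13)
errs = []
for d in (10, 100, 10_000):
    r_d = np.cos(x / np.sqrt(d)) ** d        # covariance after the scaling x = sqrt(d) t
    errs.append(np.max(np.abs(r_d - np.exp(-x ** 2 / 2))))
    print(d, errs[-1])                       # the gap shrinks like 1/d
```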

  12. Preliminary: Dimension one
  The variance is dealt with using the Kac-Rice formula. Wiener chaos techniques give the convergence of the finite dimensional projections of the number of real roots of X_d. The tail of the expansion can be bounded by dividing the interval [0, 2π√d] into small isometric pieces and by approximating the number of real roots of X_d on the pieces by that of the limit process X.

  13. The general case
  In the general case, working on the sphere S^m complicates things. Nevertheless, Kac-Rice formulae and the Hermite expansion still work. Namely:
  Kac-Rice formula:
  E(V_d²) = E(V_d) + ∫_{(S^m)²} E_{s,t}(det^⊥(P'_d(s)) det^⊥(P'_d(t))) p_{s,t}(0, 0) ds dt,
  being
  ◮ det^⊥(M) = √(det(M′M)),
  ◮ E_{s,t}(·) = E(· | P_d(s) = P_d(t) = 0),
  ◮ p_{s,t} the joint density function of P_d(s) and P_d(t).

  14. The general case: Hermite (chaotic) Expansion
  Let
  ◮ γ = (α, β) ∈ N^m × N^{m×m}; |γ| = Σ_i α_i + Σ_{ij} β_{ij};
  ◮ H̄_γ = Π_i H_{α_i} · Π_{ij} H_{β_{ij}};
  ◮ c_γ the product of Hermite coefficients of δ_0 and det^⊥.
  Then, it holds that
  V̄_d := (V_d − E(V_d)) / d^{(1/2)(r − m/2)} = Σ_{q=1}^∞ I_{q,d},
  I_{q,d} = Σ_{|γ| = 2q} c_γ ∫_{S^m} H̄_γ(t) dt.
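The building block of such expansions is the Hermite basis: the probabilists' polynomials He_q are orthogonal for the standard Gaussian measure, and any square-integrable functional expands as Σ_q c_q He_q with c_q = E[f(X) He_q(X)]/q!. A toy illustration with f(x) = x² (not the functionals of the talk), using Gauss quadrature:

```python
from math import factorial, pi, sqrt

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(40)          # Gauss quadrature nodes/weights for weight exp(-x^2/2)
w = w / sqrt(2 * pi)           # normalized: E[g(X)] ~ sum(w * g(x)) for X ~ N(0, 1)

def hermite_coeff(f, q):
    """c_q = E[f(X) He_q(X)] / q! (probabilists' Hermite polynomials)."""
    He_q = hermeval(x, [0] * q + [1])
    return np.sum(w * f(x) * He_q) / factorial(q)

coeffs = [hermite_coeff(lambda u: u ** 2, q) for q in range(5)]
print(np.round(coeffs, 6))     # x^2 = He_0(x) + He_2(x): c_0 = c_2 = 1, the rest vanish
```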

  15. The general case
  The asymptotic variance of V_d is obtained via the Kac-Rice formula (J-M's talk). In order to prove the convergence towards a normal random variable of the normalized volume of the zero set of P_d, we take advantage of the expansion and of the particular structure of chaotic random variables (Giovanni's talk). We study separately the partial sums and the tail:
  Σ_{q ≤ Q} I_{q,d}   and   Σ_{q > Q} I_{q,d}.

  16. The general case: partial sums
  The asymptotic normality of the finite partial sums of the expansion is proved using the so-called contractions (the Fourth Moment Theorem by Peccati et al.). The moral is that, in the case of a sequence of random variables living in a fixed chaos (i.e. generated by Hermite polynomials of fixed degree), convergence to the normal distribution is equivalent to the convergence of the fourth moment. The contractions provide another, technically equivalent, means to prove normality (let me skip the details).
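The fourth moment phenomenon can be seen on the simplest second-chaos example: the normalized sums Q_n = n^{−1/2} Σ_i He_2(Z_i) have variance 2, and their fourth moment approaches the Gaussian value 3·2² = 12 as n grows. A Monte Carlo sketch (illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)

def chaos2(n_terms, n_samples):
    """Samples of (1/sqrt(n)) sum_i He_2(Z_i), an element of the 2nd Wiener chaos."""
    Z = rng.normal(size=(n_samples, n_terms))
    return np.sum(Z ** 2 - 1, axis=1) / np.sqrt(n_terms)   # He_2(z) = z^2 - 1

ratios = []
for n in (1, 10, 100):
    Q = chaos2(n, 100_000)
    ratios.append(np.mean(Q ** 4) / np.mean(Q ** 2) ** 2)  # kurtosis ratio; Gaussian value 3
    print(n, ratios[-1])
```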

  17. The general case: partial sums
  More precisely, it suffices to prove the following. For k = 0, 1, 2, let r_d^{(k)} indicate the k-th derivative of r_d : [−1, 1] → R. If
  lim_{d→+∞} d^{m/3} ∫_0^{π/2} sin^{m−1}(θ) |r_d^{(k)}(cos(θ))| dθ = 0,
  then I_{q,d} converges in distribution towards a centered normal random variable. Within the sum of a finite number of chaoses, joint convergence follows from marginal convergence; so the partial sums converge towards a normal rv.
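As a concrete instance of this condition (with the exponent taken from the display above), for k = 0 and m = 2 the integral has the closed form ∫_0^{π/2} sin(θ) cos^d(θ) dθ = 1/(d+1), so d^{m/3} times the integral decays like d^{−1/3}. A quick numerical check of this instance only:

```python
import numpy as np

m = 2                                    # illustration: k = 0, r_d(cos θ) = cos^d θ
theta = np.linspace(0.0, np.pi / 2, 20_001)
vals = []
for d in (10, 100, 1000, 10_000):
    f = np.sin(theta) ** (m - 1) * np.cos(theta) ** d
    I = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))    # trapezoid rule
    vals.append(d ** (m / 3) * I)
    print(d, vals[-1], d ** (m / 3) / (d + 1))             # matches the closed form
```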

  18. Bounding the tail
  The main issue is the uniform (w.r.t. d) bound of the variance of the tail of the expansion. Actually, the variance of each I_{q,d} has the form
  ∫_{(S^m)²} H_q(⟨s, t⟩) ds dt,
  since it depends on E(H̄_γ(s) H̄_γ(t)) and p_{s,t}(0, 0); in turn, H_q depends on ⟨s, t⟩^d and on its derivatives.

  19. Bounding the tail
  In order to get the global bound, we divide the sphere into the diagonal {(s, t) ∈ S^m × S^m : s = t} and the off-diagonal regions. We first note that outside a √d-tube around the diagonal the variance of the tail of the expansion can be bounded directly using Arcones' inequality.
  Let X ∼ N(0, Id) on R^N and h : R^N → R such that E[h²(X)] < ∞. The Hermite rank, rank(h), of h is defined as
  inf{τ : ∃ k ∈ N^N, |k| = τ, E[(h(X) − E h(X)) H_k(X)] ≠ 0}.
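In dimension N = 1 the Hermite rank is simply the index of the first nonvanishing Hermite coefficient of the centered function. A small sketch computing it by Gaussian quadrature (toy functions, not the functionals of the talk):

```python
from math import pi, sqrt

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(60)
w = w / sqrt(2 * pi)                       # E[g(X)] ~ sum(w * g(x)), X ~ N(0, 1)

def hermite_rank(h, max_deg=10, tol=1e-9):
    """Smallest tau >= 1 with E[(h(X) - E h(X)) He_tau(X)] != 0 (case N = 1)."""
    centered = h(x) - np.sum(w * h(x))
    for tau in range(1, max_deg + 1):
        if abs(np.sum(w * centered * hermeval(x, [0] * tau + [1]))) > tol:
            return tau
    return None

print(hermite_rank(lambda u: u ** 3),      # rank 1: E[X^3 He_1(X)] = 3
      hermite_rank(lambda u: u ** 2),      # rank 2: x^2 - 1 = He_2
      hermite_rank(np.abs))                # rank 2: |x| is even, odd coefficients vanish
```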

  20. Bounding the tail
  Let W = (W_1, ..., W_N) and Q = (Q_1, ..., Q_N) be two standard Gaussian random vectors on R^N. We define r(j, k) = E[W_j Q_k] and
  ψ := max( max_{1≤j≤N} Σ_{k=1}^N |r(j, k)|, max_{1≤k≤N} Σ_{j=1}^N |r(j, k)| ).
  Then,
  |Cov(h(W), h(Q))| ≤ E[h²(W)] ψ^{rank(h)}.
  Using this we bound the tail of the variance by that of a geometric series.
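In the simplest case N = 1 with h = He_2 (rank 2), the inequality is in fact sharp: Cov(He_2(W), He_2(Q)) = 2ρ² while the bound E[h²(W)] ψ^{rank(h)} equals 2ρ² as well. A Monte Carlo sketch of this instance (illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n = 0.6, 1_000_000
W = rng.normal(size=n)
Q = rho * W + np.sqrt(1 - rho ** 2) * rng.normal(size=n)   # Corr(W, Q) = rho

h = lambda u: u ** 2 - 1            # h = He_2: Hermite rank 2, E[h^2(W)] = 2
cov = np.mean(h(W) * h(Q))          # exact value 2 * rho^2 = 0.72
bound = 2 * abs(rho) ** 2           # Arcones' bound: E[h^2(W)] * psi^rank, psi = |rho|
print(cov, bound)
```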

  21. Bounding the tail: the diagonal
  There does not exist a global limit process. In fact, denoting by dist the geodesic distance on S^m, we have
  r_d(s, t) = ⟨s, t⟩^d = cos^d(dist(s, t)).
  Using the scaling (u, v) := √d (s, t), the limit "covariance" is
  exp(−‖u − v‖²/2).
  Unfortunately, this function does not define a covariance on the sphere S^m.

  22. Bounding the tail: the diagonal
  Let C(s, γ) denote a cap (geodesic ball)
  C(s, γ) = {t ∈ S^m : dist(s, t) < γ}.
  After projecting on the tangent space at s, the function
  exp(−dist²(s, t)/2)
  does define a limit process: the local limit process. This fact allows us to bound uniformly the variance of the tail of the expansion of the volume of the zero set on any cap C(s, γ) with small γ. The next step is to transform this local bound into a global one.

  23. Bounding the tail
  Note that, by the invariance of the distribution of the polynomials P_ℓ, the law of the volume of the zero set restricted to any cap C(s, γ) does not depend on s. We use a convenient partition of the sphere such that, up to a negligible set, the sets in the partition, once projected on the tangent space, are asymptotically isometric. The idea is to use hyper-spherical coordinates and to adjust their intervals of variation taking into account the Jacobian of the change of coordinates.
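The hyper-spherical parametrization carries the Jacobian Π_k sin^k(θ_k); integrating it recovers, for instance, the total area of S^m, which is exactly the kind of bookkeeping needed to calibrate the intervals of variation. A quick sketch using only standard formulas (not from the talk):

```python
from math import gamma, pi

import numpy as np

def sphere_area(m, n=20_001):
    """Area of S^m from hyper-spherical coordinates: 2*pi times the sin^k Jacobian factors."""
    theta = np.linspace(0.0, pi, n)
    area = 2 * pi
    for k in range(1, m):
        f = np.sin(theta) ** k
        area *= np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # trapezoid rule
    return area

for dim in (2, 3):
    exact = 2 * pi ** ((dim + 1) / 2) / gamma((dim + 1) / 2)  # 4*pi for S^2, 2*pi^2 for S^3
    print(dim, sphere_area(dim), exact)
```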
