Scaling limits of non-increasing Markov chains and applications to random trees and coalescents - PowerPoint PPT Presentation



  1. Scaling limits of non-increasing Markov chains and applications to random trees and coalescents. Bénédicte HAAS, Université Paris-Dauphine, based on joint works with Grégory MIERMONT (Orsay). SSP, March 2012.

  2. Outline: 1. Introduction. 2. Scaling limits of non-increasing Markov chains. 3. Applications: scaling limits of Markov branching trees; number of collisions in Λ-coalescents; random walks with barriers.

  3. Scaling limits: a basic example. I.i.d. sequence of centered random variables X_i ∈ {−1, 1}: −1, 1, 1, 1, 1, −1, −1, −1, 1, −1, −1, 1, 1, 1, 1, ... Centered random walk: S_n = X_1 + ... + X_n. How does S_n behave when n is large? (1) What is the growth rate? (2) What is the limit after rescaling? Central limit theorem: S_n/√n → N(0, 1) in law. Functional version, Donsker's theorem (1951): (S_[nt]/√n, t ∈ [0, 1]) → (B(t), t ∈ [0, 1]) in law, where B is a standard Brownian motion.
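A minimal simulation sketch of both statements, assuming only NumPy (the sample sizes are arbitrary choices of this sketch): the rescaled endpoint S_n/√n should be approximately N(0, 1), and a single rescaled path is a discrete approximation of a Brownian path.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_runs = 1_000, 5_000

# i.i.d. +/-1 steps: centered with variance 1, so S_n / sqrt(n) is approximately N(0, 1).
steps = rng.choice([-1, 1], size=(n_runs, n))
endpoints = steps.sum(axis=1) / np.sqrt(n)
print("mean (should be close to 0):", endpoints.mean())
print("variance (should be close to 1):", endpoints.var())

# Donsker: the rescaled path (S_[nt] / sqrt(n), t in [0, 1]) of a single run
# approximates a standard Brownian motion on [0, 1].
path = np.cumsum(steps[0]) / np.sqrt(n)
print("one rescaled path at t = 1/4, 1/2, 1:", path[n // 4 - 1], path[n // 2 - 1], path[-1])
```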

  4. Another example: Galton-Watson trees. Galton-Watson processes were introduced in 1873 to study the extinction of family names. [Figure: a tree with generations 1, 2, 3 above the ancestor = root.] η: offspring distribution (probability on Z_+ = {0, 1, 2, ...}) with η(1) < 1 and mean m. Extinction probability: = 1 in the subcritical (m < 1) and critical (m = 1) cases; ∈ [0, 1) in the supercritical case (m > 1).
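A rough Monte Carlo check of the extinction trichotomy, under the assumption of a Poisson(m) offspring law (the slide does not fix one); the run counts, the generation horizon and the population cap are arbitrary choices of this sketch. In the critical case extinction is certain but slow, so the estimate sits slightly below 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def extinction_probability(m, n_runs=2_000, max_generations=300, cap=10_000):
    """Monte Carlo estimate of P(extinction) for a Galton-Watson process with
    Poisson(m) offspring, started from a single ancestor."""
    extinct = 0
    for _ in range(n_runs):
        z = 1
        for _ in range(max_generations):
            if z == 0:
                extinct += 1
                break
            if z > cap:            # essentially certain to survive forever
                break
            # each of the z individuals reproduces independently
            z = int(rng.poisson(m, size=z).sum())
    return extinct / n_runs

for m in (0.8, 1.0, 1.5):          # subcritical, critical, supercritical
    print("m =", m, " estimated extinction probability:", extinction_probability(m))
```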

  5. Large Galton-Watson trees. T_n^GW: critical GW tree conditioned to have n nodes; offspring distribution η with finite variance σ² < ∞. • H_n(U): height (= generation) of a node chosen uniformly at random: H_n(U)/√n → R/σ in law, where R has a Rayleigh distribution, P(R > x) = exp(−x²/2) (Meir & Moon 78). • H_n: height of the tree: H_n/√n → 2W/σ in law, where W is the maximum of a Brownian excursion of length 1 (Kolchin 86). With length 1 on each edge, what does the tree look like when n → ∞?
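The sketch below probes the first limit numerically. It samples critical GW trees conditioned on their size by rejection on the total offspring count followed by the cycle lemma; the Geometric(1/2) offspring law (mean 1, variance σ² = 2) and the sizes are assumptions made for the illustration, not choices from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def conditioned_gw_heights(n):
    """Node heights of a critical GW tree with Geometric(1/2) offspring
    (mean 1, variance 2), conditioned to have exactly n nodes."""
    while True:
        c = rng.geometric(0.5, size=n) - 1        # offspring counts on {0, 1, 2, ...}
        if c.sum() == n - 1:                       # total progeny of a tree with n nodes
            break
    # Cycle lemma: rotate so the Lukasiewicz walk stays >= 0 before time n.
    walk = np.cumsum(c - 1)
    k = int(np.argmin(walk)) + 1                   # first time the minimum is reached
    c = np.concatenate([c[k:], c[:k]])             # offspring counts in depth-first order
    # Recover node depths from the depth-first offspring sequence.
    heights, stack = [], []                        # stack: remaining children of ancestors
    for ci in c:
        heights.append(len(stack))
        if stack:
            stack[-1] -= 1                         # this node uses one slot of its parent
        if ci > 0:
            stack.append(int(ci))
        else:
            while stack and stack[-1] == 0:
                stack.pop()
    return np.array(heights)

n, sigma = 400, np.sqrt(2.0)
samples = []
for _ in range(400):
    h = conditioned_gw_heights(n)
    samples.append(rng.choice(h))                  # height of a uniform node
scaled = np.array(samples) * sigma / np.sqrt(n)
# For a Rayleigh limit R: P(R > 1) = exp(-1/2) ~ 0.61 and E[R] = sqrt(pi/2) ~ 1.25.
print("P(sigma * H_n(U)/sqrt(n) > 1) ~", (scaled > 1).mean())
print("mean of sigma * H_n(U)/sqrt(n) ~", scaled.mean())
```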

  6. Universal limit: the Brownian tree T_Br. Aldous 93: T_n^GW/√n → (2/σ) T_Br in law (picture by G. Miermont). T_Br is a compact (random) real tree, i.e. a compact metric space with the tree property: for all x, y ∈ T_Br there is a unique path from x to y. Almost surely it is a binary tree, self-similar, with Hausdorff dimension 2.

  7. Topology on the set of compact rooted real trees. Gromov-Hausdorff distance: let (T, ρ) and (T′, ρ′) be two compact rooted real trees; d_GH((T, ρ), (T′, ρ′)) := inf [ d_H(φ_1(T), φ_2(T′)) ∨ d_Z(φ_1(ρ), φ_2(ρ′)) ], the infimum being over all isometric embeddings φ_1 : T ↪ Z and φ_2 : T′ ↪ Z into a same metric space (Z, d_Z). (T, ρ) and (T′, ρ′) are equivalent if there is an isometry φ with T′ = φ(T) and ρ′ = φ(ρ); d_GH is a distance on the set of equivalence classes. Aldous 93: T_n^GW/√n → (2/σ) T_Br in law for d_GH, jointly with the convergence of the uniform probability on the nodes of T_n^GW towards a probability measure on the leaves of T_Br.

  8. Large Galton-Watson trees: when the variance is infinite. Assume η(k) = P(having k children) ~ C k^(−β) as k → ∞, with 1 < β < 2. Then (Duquesne 03): T_n^GW/n^(1−1/β) → C^(−1/β) T_β in law, for the GH topology. • "Smaller" trees. • The limiting tree T_β belongs to the family of stable Lévy trees (introduced by Duquesne, Le Gall, Le Jan): each branching vertex branches into an infinite, countable number of subtrees; it is a self-similar tree with Hausdorff dimension β/(β − 1).

  9. Non-increasing Markov chains. (X(k), k ≥ 0): Z_+-valued, non-increasing Markov chain; (X_n(k), k ≥ 0): the chain starting from X_n(0) = n. [Figure: a trajectory of X_n, started at n, absorbed at time A_n.] Absorption time: A_n = inf{i : X_n(i) = X_n(j) for all j ≥ i}, assumed finite. How do X_n(·)/n and A_n behave when n → ∞?
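To make the setting concrete, here is a sketch of one illustrative non-increasing chain (the transition rule is an assumption made for the illustration, not a chain from the talk): from m the chain usually steps down by 1, and with probability 1/m it drops to a uniform point below m; 0 is its absorbing state.

```python
import numpy as np

rng = np.random.default_rng(4)

def step(m):
    """One transition of the illustrative non-increasing chain on Z_+."""
    if m == 0:
        return 0                               # absorbed
    if rng.random() < 1.0 / m:
        return int(rng.integers(0, m))         # rare macroscopic jump
    return m - 1                               # usual small step

def absorption_time(n):
    """A_n: number of steps after which the chain started at n stops moving."""
    m, k = n, 0
    while m > 0:
        m, k = step(m), k + 1
    return k

for n in (10, 100, 1_000, 10_000):
    print("n =", n, " A_n =", absorption_time(n))
```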

  10. Scaling limit. Assumption: macroscopic jumps are rare. Starting from n, the probability that the first jump is ≥ nε behaves like c_ε/n^γ as n → ∞, for some γ > 0 (c_ε increases as ε decreases). More precisely, there exists a finite measure μ on [0, 1] such that, for ε ∈ (0, 1], P(n − X_n(1) ≥ nε) ~ (1/n^γ) ∫_[0, 1−ε] μ(dx)/(1 − x) and E[(n − X_n(1))/n] ~ μ([0, 1])/n^γ. Theorem (H.-Miermont 11): then there exists a time-continuous Markov process X_∞ such that (X_n([n^γ t])/n, t ≥ 0) → (X_∞(t), t ≥ 0) in law, for the Skorokhod topology on D([0, ∞), [0, ∞)).
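For the illustrative chain used above one can probe the assumption numerically: with γ = 1, the quantity n^γ · P(n − X_n(1) ≥ nε) stabilizes as n grows (for this toy rule the limit is c_ε = 1 − ε). The chain and the sample sizes are assumptions of this sketch, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_jump_fractions(n, trials):
    """Simulate (n - X_n(1)) / n for independent first steps of the illustrative
    chain: from n, a uniform drop with probability 1/n, otherwise a step to n - 1."""
    macro = rng.random(trials) < 1.0 / n
    landing = np.where(macro, rng.integers(0, n, size=trials), n - 1)
    return (n - landing) / n

gamma = 1.0
for n in (100, 1_000, 10_000):
    fracs = first_jump_fractions(n, trials=200 * n)
    for eps in (0.25, 0.5):
        # should stabilize near c_eps = 1 - eps for this toy rule as n grows
        print("n =", n, " eps =", eps, " n^gamma * P(first jump >= n*eps) ~",
              round(n**gamma * (fracs >= eps).mean(), 3))
```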

  11. The limit process X_∞ is: • self-similar: starting from X_∞(0) = x, the process (c X_∞(c^(−γ) t), t ≥ 0) is distributed as X_∞ starting from X_∞(0) = cx, for all c > 0; • starting from X_∞(0) = 1, X_∞ can be written X_∞ = exp(−ξ_ρ), where ξ is a subordinator with E[exp(−λ ξ_t)] = exp(−t φ(λ)), φ(λ) = μ({1}) λ + ∫_(0,1) (1 − x^λ)/(1 − x) μ(dx) + μ({0}) for λ ≥ 0, and ρ is an acceleration of time: ρ(t) = inf{u ≥ 0 : ∫_0^u exp(−γ ξ_r) dr ≥ t}.
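A sketch of the construction X_∞ = exp(−ξ_ρ). The subordinator here (a drift plus compound Poisson jumps with Exp(1) sizes) is an assumption chosen only to make the time change concrete; the subordinator attached to a given μ would instead have the Laplace exponent φ displayed above.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative subordinator: drift a plus compound Poisson jumps at rate lam, Exp(1) sizes.
a, lam, gamma = 0.5, 1.0, 1.0
du, horizon = 1e-3, 40.0
n_steps = int(horizon / du)

# Grid approximation of xi (at most one jump per small time step).
increments = a * du + (rng.random(n_steps) < lam * du) * rng.exponential(1.0, size=n_steps)
xi = np.cumsum(increments)

# Time change rho(t) = inf{u >= 0 : int_0^u exp(-gamma * xi_r) dr >= t}.
integral = np.cumsum(np.exp(-gamma * xi) * du)

def x_infinity(t):
    """X_inf(t) = exp(-xi_{rho(t)}), started from 1 (up to the grid discretization);
    equal to 0 after absorption."""
    i = int(np.searchsorted(integral, t))
    return 0.0 if i >= len(xi) else float(np.exp(-xi[i]))

for t in (0.0, 0.2, 0.5, 1.0, 2.0):
    print("t =", t, " X_inf(t) ~", round(x_infinity(t), 4))
print("absorption time of this trajectory ~", round(float(integral[-1]), 4))
```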

  12. Absorption time. Consequently, X_∞ starting from X_∞(0) = 1 is absorbed at 0 at time ∫_0^∞ exp(−γ ξ_r) dr (almost surely finite). Jointly with the previous convergence, we have A_n/n^γ → ∫_0^∞ exp(−γ ξ_r) dr in law. This is not an immediate consequence of the convergence of the whole process! Remark: these results extend to regular variation assumptions.
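For the illustrative chain used in the earlier sketches (γ = 1), a small Monte Carlo experiment suggests that the law of A_n/n^γ settles down as n grows, which is what the displayed convergence asserts; the chain and the sample sizes remain assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def absorption_time(n):
    """Absorption time of the illustrative chain: -1 steps, plus a uniform drop
    below the current state with probability 1 / (current state)."""
    m, k = n, 0
    while m > 0:
        m = int(rng.integers(0, m)) if rng.random() < 1.0 / m else m - 1
        k += 1
    return k

gamma = 1.0
for n in (100, 1_000, 10_000):
    a = np.array([absorption_time(n) for _ in range(300)]) / n**gamma
    print("n =", n, " mean of A_n/n^gamma ~", round(a.mean(), 3),
          " 90% quantile ~", round(float(np.quantile(a, 0.9)), 3))
```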

  13. Application to Markov branching trees. (T_n, n ≥ 1): T_n a random rooted tree with n nodes. Markov branching property: conditional on "the root of T_n branches into p subtrees with n_1 ≥ ... ≥ n_p nodes", these subtrees are independent, with respective distributions those of T_(n_1), ..., T_(n_p). [Figure: n = 17, with subtrees of 9, 4 and 3 nodes distributed as T_9, T_4 and T_3.] Similar definition for sequences of trees indexed by the number of leaves. Example: Galton-Watson trees conditioned to have n nodes (respectively n leaves).
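The branching property translates directly into a recursive sampler: draw the sizes of the root's subtrees from q_n, then recurse independently into each subtree. The splitting rule below is a toy assumption for the sketch (split off a single leaf most of the time, split in half with probability 1/n), not a family from the talk.

```python
import numpy as np

rng = np.random.default_rng(8)

def split(n):
    """Illustrative splitting rule q_n for n >= 2 leaves (an assumption):
    (ceil(n/2), floor(n/2)) with probability 1/n, otherwise (n - 1, 1)."""
    if rng.random() < 1.0 / n:
        return (n - n // 2, n // 2)
    return (n - 1, 1)

def sample_tree(n):
    """Markov branching tree with n leaves, as nested tuples: by the Markov
    branching property, the subtrees above the root are independent and
    distributed as the members of the family with the drawn sizes."""
    if n == 1:
        return "leaf"
    return tuple(sample_tree(m) for m in split(n))

def height(tree):
    return 0 if tree == "leaf" else 1 + max(height(child) for child in tree)

t = sample_tree(200)
print("height of a sampled tree with 200 leaves:", height(t))
```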

  14.-19. Markov branching trees (T_n, n ≥ 1): a Markov branching family indexed by the number of leaves. What does T_n look like when n is large? First step: the height of a leaf chosen uniformly at random. X_n(k): size of the subtree above generation k containing the marked leaf. [Successive builds of the same picture, on a tree with 9 leaves: X_n(0) = 9, X_n(1) = 5, X_n(2) = 3, X_n(3) = 2, X_n(4) = 1.] It is a Markov chain! Its absorption time at 1 is the height of the marked leaf. A sketch generating this chain directly is given below.
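Since a uniformly marked leaf falls into a given subtree with probability proportional to that subtree's number of leaves, the chain X_n(k) can be generated on its own, without building the tree. The splitting rule is the same toy assumption as in the previous sketch.

```python
import numpy as np

rng = np.random.default_rng(9)

def split(n):
    """Same illustrative splitting rule as before (an assumption of the sketch)."""
    if rng.random() < 1.0 / n:
        return (n - n // 2, n // 2)
    return (n - 1, 1)

def marked_leaf_chain(n):
    """X_n(k): size of the subtree above generation k containing a uniformly
    marked leaf. The marked leaf lies in a sub-block of size m with probability
    m / (current size), so the block is chosen in a size-biased way.
    The absorption time at 1 is the height of the marked leaf."""
    sizes = [n]
    while sizes[-1] > 1:
        blocks = split(sizes[-1])
        probs = np.array(blocks) / sizes[-1]       # size-biased choice of block
        sizes.append(int(rng.choice(blocks, p=probs)))
    return sizes

chain = marked_leaf_chain(50)
print("X_n(k), k = 0, 1, ...:", chain)
print("height of the marked leaf:", len(chain) - 1)
```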

  20. Scaling limits of Markov branching trees. Let q_n(n_1, ..., n_p) := P(the root of T_n branches into p subtrees with n_1 ≥ ... ≥ n_p leaves) and S↓ := {s_1 ≥ s_2 ≥ ... ≥ 0 : Σ_i s_i = 1}. Assumption: for all bounded continuous f : S↓ → R, n^γ Σ_((n_1, ..., n_p) partition of n) q_n(n_1, ..., n_p) (1 − n_1/n) f(n_1/n, ..., n_p/n, 0, ...) → ∫_S↓ (1 − s_1) f(s) ν(ds) as n → ∞, with γ > 0 and ν a non-trivial σ-finite measure on S↓ such that ∫_S↓ (1 − s_1) ν(ds) < ∞. Informally: with probability ~ 1, the root splits into one subtree of size ~ n plus subtrees of size o(n); with probability ~ ν(ds)/n^γ, it splits into subtrees of sizes ~ s_1 n, s_2 n, s_3 n, ...
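When the splitting rule is given in closed form, the left-hand side above can be evaluated exactly, which gives a quick numerical way to identify the exponent γ. The rule below is the same toy assumption as in the previous sketches; only γ = 1 makes the rescaled sum stabilize.

```python
def rescaled_split_sum(n, gamma, f):
    """n^gamma * sum over splits (n_1, n_2) of q_n(n_1, n_2) * (1 - n_1/n) * f(n_1/n, n_2/n),
    for the illustrative rule: (n - n//2, n//2) with probability 1/n, (n - 1, 1) otherwise."""
    total = 0.0
    for prob, (n1, n2) in ((1 - 1 / n, (n - 1, 1)), (1 / n, (n - n // 2, n // 2))):
        total += prob * (1 - n1 / n) * f(n1 / n, n2 / n)
    return n**gamma * total

f = lambda s1, s2: 1.0          # a bounded continuous test function
for gamma in (0.5, 1.0, 1.5):
    values = [round(rescaled_split_sum(n, gamma, f), 3) for n in (10**2, 10**3, 10**4, 10**5)]
    print("gamma =", gamma, ":", values)
# Only gamma = 1 produces values that settle down as n grows.
```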
