The scaling limit of critical random graphs


  1. Statistical Mechanics Seminar, Warwick, 18th February 2010 The scaling limit of critical random graphs Christina Goldschmidt Joint work with Louigi Addario-Berry (McGill University) and Nicolas Broutin (INRIA Rocquencourt)

  2. Part I : Trees

  3. A warm-up: uniform random trees Take a uniform random tree T_m on vertices labelled by [m] = {1, 2, ..., m}. [Figure: an example labelled tree on 7 vertices.] What happens as m grows?
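A uniform random tree on [m] can be sampled by decoding a uniformly random Prüfer sequence (by Cayley's formula there are m^(m−2) labelled trees on [m], in bijection with sequences in [m]^(m−2)). A minimal sketch, not from the talk; the function name is illustrative:

```python
import random

def uniform_random_tree(m, rng=random.Random(1)):
    """Sample a uniform random labelled tree on {1, ..., m} by decoding
    a uniformly random Pruefer sequence.  Fixed seed for reproducibility."""
    if m <= 2:
        return [(1, 2)] if m == 2 else []
    seq = [rng.randrange(1, m + 1) for _ in range(m - 2)]
    # Each vertex has degree 1 + (number of appearances in the sequence).
    degree = {v: 1 for v in range(1, m + 1)}
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        # Attach the smallest current leaf to the next sequence entry.
        leaf = min(u for u in degree if degree[u] == 1)
        edges.append((leaf, v))
        del degree[leaf]
        degree[v] -= 1
    # Two vertices of degree 1 remain; join them.
    edges.append(tuple(sorted(degree)))
    return edges
```

Decoding a sequence of length m − 2 always yields m − 1 edges covering all of [m], so the output is a spanning tree.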

  5. Useful link to branching processes A uniform random tree on m vertices has the same distribution as ◮ the family tree of a Galton-Watson branching process ◮ with Poisson(1) offspring distribution ◮ conditioned to have precisely m vertices ◮ and with a uniformly-chosen labelling. (The following theory also works for any Galton-Watson branching process having offspring mean 1 and finite offspring variance.)
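The conditioning can be realised naively by rejection: grow unconditioned Poisson(1) Galton-Watson trees in depth-first order and discard those whose total progeny is not m. A hedged sketch (names are illustrative, not from the talk):

```python
import math
import random

def poisson1(rng):
    """A Poisson(1) sample via Knuth's multiplication method."""
    threshold = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def conditioned_gw_counts(m, rng=random.Random(0)):
    """Child counts, in depth-first order, of a Poisson(1) Galton-Watson
    tree conditioned to have exactly m vertices (rejection sampling)."""
    while True:
        counts, unexplored = [], 1
        while unexplored > 0 and len(counts) < m:
            k = poisson1(rng)
            counts.append(k)
            unexplored += k - 1   # k children added, one vertex explored
        if unexplored == 0 and len(counts) == m:
            return counts
```

A tree on m vertices has m − 1 children in total, so an accepted sample satisfies sum(counts) = m − 1. Rejection is cheap for small m; the acceptance probability decays only polynomially (like m^(−3/2)) as m grows.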

  11. Two ways of encoding a tree It will be convenient to encode our trees in terms of discrete functions, which are easier to manipulate. We will do this in two different ways: ◮ the height function ◮ the depth-first walk.

  13. Height function We will think of the lowest-labelled vertex as the root. Consider the vertices in depth-first order and sequentially record the distance from the root.

  15. Height function [Figure: the height function k ↦ H(k) of the example 7-vertex tree, built up vertex by vertex as the depth-first exploration proceeds.]
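In code, the height function is a depth-first traversal that records distances from the root. A small illustrative sketch (the tree below is a made-up example, not the one on the slides):

```python
def height_function(children, root):
    """H(k) = distance from the root of the (k+1)-st vertex in
    depth-first order, for a tree given as a child-list dict."""
    H, stack = [], [(root, 0)]
    while stack:
        v, h = stack.pop()
        H.append(h)
        # Push children in reverse so they are visited left-to-right.
        for c in reversed(children.get(v, [])):
            stack.append((c, h + 1))
    return H

# Made-up example: root 1 with subtrees {2: [3, 4]} and {5: [6, 7]}.
tree = {1: [2, 5], 2: [3, 4], 5: [6, 7]}
```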

  22. Depth-first walk We again consider the vertices in depth-first order, but now at each step we add an increment consisting of the number of children minus 1. The walk starts from 0. [Figure: the depth-first walk k ↦ X(k) of the example 7-vertex tree, built up step by step.]
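The same traversal yields the depth-first walk; for a tree on m vertices the walk first hits −1 exactly at step m. A sketch in the same spirit as above (the example tree is made up):

```python
def depth_first_walk(children, root):
    """X(0) = 0 and X(k+1) = X(k) + (#children of the (k+1)-st vertex
    in depth-first order) - 1, for a tree given as a child-list dict."""
    X, stack = [0], [root]
    while stack:
        v = stack.pop()
        kids = children.get(v, [])
        X.append(X[-1] + len(kids) - 1)
        # Push children in reverse so they are visited left-to-right.
        for c in reversed(kids):
            stack.append(c)
    return X

# Made-up example: root 1 with subtrees {2: [3, 4]} and {5: [6, 7]}.
tree = {1: [2, 5], 2: [3, 4], 5: [6, 7]}
```

Here the walk is 0, 1, 2, 1, 0, 1, 0, −1: it reaches −1 at step 7, when the last of the m = 7 vertices has been explored.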

  30. Comparing encodings It is fairly straightforward to see that the height function encodes the topology of the tree (although not its labels). It is less easy to see that the depth-first walk also encodes the topology. In fact, H(i) = #{ 0 ≤ j ≤ i − 1 : X(j) = min_{j ≤ k ≤ i} X(k) }. The advantage of the depth-first walk is that we can more easily understand its distribution.
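The identity H(i) = #{0 ≤ j ≤ i − 1 : X(j) = min_{j ≤ k ≤ i} X(k)} can be checked directly in code: each j counted corresponds to an ancestor of the i-th vertex. A naive O(m²) sketch (the example walk is from a made-up 7-vertex tree):

```python
def height_from_walk(X):
    """Recover H(i) = #{0 <= j <= i-1 : X(j) = min_{j<=k<=i} X(k)}
    from a depth-first walk X = (X(0), ..., X(m)).  Naive O(m^2)."""
    m = len(X) - 1
    return [sum(1 for j in range(i) if X[j] == min(X[j:i + 1]))
            for i in range(m)]

# Depth-first walk of a made-up 7-vertex tree whose height
# function is [0, 1, 2, 2, 1, 2, 2].
X = [0, 1, 2, 1, 0, 1, 0, -1]
```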

  33. Distribution of the depth-first walk Suppose that we had a Poisson-Galton-Watson(1) branching process without any condition on the total progeny. Then at each step of the depth-first walk we would add an independent increment whose distribution is Poisson(1) − 1, until the first time T that the walk hits −1 (which signals the end of the component). In other words, we have a random walk with step-sizes having mean 0 and finite variance. The only complication is that we have to condition it on T = m.

  35. Taking limits By Donsker's theorem, the unconditioned walk run for n steps and with space rescaled by 1/√n converges to a Brownian motion run for time 1. It turns out to be also true that the random walk conditioned on T = m, with space rescaled by 1/√m, converges in distribution to a limit, called a Brownian excursion. Intuitively, this is a Brownian motion started from 0, conditioned to leave 0 immediately and to stay positive until it returns to 0 at time 1. (Of course, some work is necessary to make good sense of this, since the conditioning is singular!)

  39. Brownian excursion [Figure: a sample path of a Brownian excursion.]

  40. Taking limits Formally, we have (m^{−1/2} X_m(⌊mt⌋), 0 ≤ t < 1) →^d (e(t), 0 ≤ t < 1) as m → ∞. It is also possible to prove that (m^{−1/2} H_m(⌊mt⌋), 0 ≤ t < 1) →^d (2e(t), 0 ≤ t < 1) as m → ∞. This suggests that there is some sort of limiting object for the tree itself, which should somehow be encoded by the Brownian excursion.
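Concretely, the rescaling in the display speeds time up by a factor m and shrinks space by √m, turning a discrete path into a function on [0, 1). A tiny illustrative helper (not from the talk):

```python
def rescaled(W, ts):
    """Evaluate t -> m^(-1/2) * W(floor(m*t)) for a discrete path
    W = (W(0), ..., W(m)) at the given times ts in [0, 1)."""
    m = len(W) - 1
    return [W[int(m * t)] / m ** 0.5 for t in ts]

# Depth-first walk of a made-up 7-vertex tree.
X = [0, 1, 2, 1, 0, 1, 0, -1]
```

For large m this rescaled function is what converges in distribution to the Brownian excursion.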

  43. Scaling limit for the tree Consider the tree as a metric space, with the natural metric being given by the graph distance. Rescale the edge-lengths by 1/√m: [Figure: a large labelled tree in which every edge is given length 1/√m.] We need a notion of convergence for metric spaces.

  46. Measuring the distance between metric spaces The Hausdorff distance between two compact subsets K and K′ of a metric space (M, δ) is d_H(K, K′) = inf{ ε > 0 : K ⊆ F_ε(K′), K′ ⊆ F_ε(K) }, where F_ε(K) := { x ∈ M : δ(x, K) ≤ ε } is the ε-fattening of K.
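For finite point sets the infimum is attained and the Hausdorff distance reduces to a max-min computation: the larger of the two one-sided distances. A short sketch, assuming a user-supplied metric delta:

```python
def hausdorff(K, Kp, delta):
    """Hausdorff distance between finite subsets K and Kp of a metric
    space with metric delta: the larger of the two max-min distances."""
    d1 = max(min(delta(x, y) for y in Kp) for x in K)
    d2 = max(min(delta(x, y) for x in K) for y in Kp)
    return max(d1, d2)

# Example on the real line: every point of {0, 1} is within 1 of
# {0, 3}, but the point 3 is at distance 2 from {0, 1}, so d_H = 2.
```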
