SLIDE 1

AofA’15, Strobl, Austria, 8-12 June 2015

A line-breaking construction of the stable trees

Christina Goldschmidt (Oxford). Joint work with Bénédicte Haas (Paris-Dauphine).

SLIDE 2

Uniform random trees

Let Tn denote the set of unordered trees on n vertices labelled by [n] := {1, 2, . . . , n}, and write Tn for a tree chosen uniformly at random from this set.

[Figure: a uniform labelled tree on 7 vertices]

What happens as n grows?

SLIDE 4

An algorithm due to Aldous

In order to study Tn, it’s useful to have a way of building it.

  • 1. Start from the vertex labelled 1.
  • 2. For 2 ≤ i ≤ n, connect vertex i to vertex Vi, where
       Vi = i − 1 with probability 1 − (i − 2)/(n − 1),
       and Vi is uniform on {1, 2, . . . , i − 2} otherwise.
  • 3. Take a uniform random permutation of the labels.
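The three steps above can be sketched in code (a minimal simulation; the function name and the use of Python's random module are my own choices):

```python
import random

def aldous_tree(n, seed=0):
    """Build a uniform labelled tree on n vertices via Aldous' algorithm.

    Returns a map child -> parent, i.e. the edges of the tree, after the
    final relabelling step.
    """
    rng = random.Random(seed)
    parent = {}
    for i in range(2, n + 1):
        # Connect i to i - 1 with probability 1 - (i - 2)/(n - 1),
        # otherwise to a uniform vertex in {1, ..., i - 2}.
        if rng.random() < 1 - (i - 2) / (n - 1):
            parent[i] = i - 1
        else:
            parent[i] = rng.randint(1, i - 2)
    # Step 3: apply a uniform random permutation to the labels.
    perm = list(range(1, n + 1))
    rng.shuffle(perm)
    relabel = dict(zip(range(1, n + 1), perm))
    return {relabel[i]: relabel[p] for i, p in parent.items()}

edges = aldous_tree(10)
print(len(edges))  # a tree on 10 vertices has 9 edges
```

Note that before the permutation every vertex attaches to a lower-numbered vertex, so the result is automatically connected and acyclic.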
SLIDE 5

Aldous’ algorithm

Consider n = 10.

SLIDE 6

V2 = 1 with probability 1.

SLIDE 7

V3 = 1 with probability 1/9; V3 = 2 with probability 8/9.

SLIDE 8

V4 = j with probability 1/9 for 1 ≤ j ≤ 2; V4 = 3 with probability 7/9.

SLIDE 9

V5 = j with probability 1/9 for 1 ≤ j ≤ 3; V5 = 4 with probability 6/9.

SLIDE 10

V6 = j with probability 1/9 for 1 ≤ j ≤ 4; V6 = 5 with probability 5/9.

SLIDE 11

V7 = j with probability 1/9 for 1 ≤ j ≤ 5; V7 = 6 with probability 4/9.

SLIDE 12

V8 = j with probability 1/9 for 1 ≤ j ≤ 6; V8 = 7 with probability 3/9.

SLIDE 13

V9 = j with probability 1/9 for 1 ≤ j ≤ 7; V9 = 8 with probability 2/9.

SLIDE 14

V10 = j with probability 1/9 for 1 ≤ j ≤ 8; V10 = 9 with probability 1/9.

SLIDE 15

Permute the labels.

[Figures (slides 5–15): the tree on 10 vertices grows one vertex at a time, then the labels are permuted]

SLIDE 16

Typical distances

Consider the tree before we permute. Let Ln = inf{i ≥ 2 : Vi+1 ≠ i}. We can use Ln to give us an idea of typical distances in the tree. In our example, L10 = 4:

[Figure: the tree on 10 vertices; the first stick is the path 1–2–3–4]

SLIDE 17

Typical distances

Recall: for 2 ≤ i ≤ n, vertex i is connected to Vi, where Vi = i − 1 with probability 1 − (i − 2)/(n − 1), and Vi is uniform on {1, 2, . . . , i − 2} otherwise. Ln = inf{i ≥ 2 : Vi+1 ≠ i}.

Proposition

As n → ∞, P(n^{-1/2} Ln > x) → exp(−x²/2).
SLIDE 18

Proof

P(n^{-1/2} Ln > x) = P(Ln ≥ ⌊xn^{1/2}⌋ + 1)
= P(2 → 1, 3 → 2, . . . , ⌊xn^{1/2}⌋ + 1 → ⌊xn^{1/2}⌋)
= 1 · (1 − 1/(n − 1)) · (1 − 2/(n − 1)) · · · (1 − (⌊xn^{1/2}⌋ − 1)/(n − 1)).

So

− log P(n^{-1/2} Ln > x) = − Σ_{i=1}^{⌊xn^{1/2}⌋−1} log(1 − i/(n − 1)) ≈ Σ_{i=1}^{⌊xn^{1/2}⌋−1} i/n = ⌊xn^{1/2}⌋(⌊xn^{1/2}⌋ − 1)/(2n) ∼ x²/2.
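The limit in the proposition can be checked numerically, since the product above is exactly computable for finite n (the script and parameter choices are my own):

```python
import math

def tail_prob(n, x):
    """P(n^{-1/2} L_n > x) = prod_{i=1}^{floor(x sqrt(n)) - 1} (1 - i/(n-1))."""
    m = math.floor(x * math.sqrt(n))
    p = 1.0
    for i in range(1, m):
        p *= 1 - i / (n - 1)
    return p

n, x = 10**6, 1.0
# The exact tail probability is already very close to the Gaussian-type limit:
print(tail_prob(n, x), math.exp(-x**2 / 2))
```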

SLIDE 21

Typical distances

Once we have built this first stick of consecutive labels, we pick a uniform starting point along that stick and attach a new stick with a random length, and so on. Imagine now that edges in the tree have length 1. The proposition suggests that rescaling edge-lengths by n^{-1/2} will give some sort of limit for the whole tree. The limiting version of the algorithm is as follows.

SLIDE 22

Line-breaking construction

Let E1, E2, . . . be independent Exponential(1/2) r.v.’s and set Ck = (E1 + · · · + Ek)^{1/2}. (Equivalently, let C1, C2, . . . be the points of an inhomogeneous Poisson process on R+ of intensity t dt.)

[Figure: the half-line marked at the points C1 < C2 < · · · < C6 < · · ·]

(Note that P(C1 > x) = P(E1 > x²) = exp(−x²/2).)

◮ Consider the line-segments [0, C1), [C1, C2), . . ..
◮ Start from [0, C1) and proceed inductively.
◮ For i ≥ 2, attach [Ci−1, Ci) at a random point chosen uniformly over the existing tree.
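In simulation the construction looks like this (a sketch; the representation of the tree as a list of segments with attachment points is my own):

```python
import random

def brownian_line_breaking(k, seed=0):
    """Sample C_1 < C_2 < ... < C_k with C_j = sqrt(E_1 + ... + E_j),
    E_i ~ Exponential(rate 1/2), and attach each segment [C_{j-1}, C_j)
    at a uniform point of the tree built so far."""
    rng = random.Random(seed)
    total = 0.0
    cuts = []
    for _ in range(k):
        total += rng.expovariate(0.5)  # E_i with P(E_i > t) = exp(-t/2)
        cuts.append(total ** 0.5)
    # segments[j] = (length, attachment point measured along the current tree)
    segments = [(cuts[0], None)]  # the first segment is the root stick
    for j in range(1, k):
        length = cuts[j] - cuts[j - 1]
        attach = rng.uniform(0, cuts[j - 1])  # uniform over total length so far
        segments.append((length, attach))
    return cuts, segments

cuts, segments = brownian_line_breaking(6)
print(all(a < b for a, b in zip(cuts, cuts[1:])))  # True: cut points increase
```

The total length of the tree after k steps is exactly Ck, since every segment is used in full.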

SLIDE 25

Line-breaking construction

[Figures (slides 25–30): the segments [0, C1), [C1, C2), . . . are glued on one by one, forming a growing tree]

SLIDE 31

The Brownian continuum random tree

[Picture by Igor Kortchemski]

SLIDE 32

The scaling limit of the uniform random tree

Theorem (Aldous (1991); Le Gall (2005))

As n → ∞,
(1/√n) Tn →d c T2,
where T2 is Aldous’ Brownian continuum random tree and c is a positive constant. (The convergence is in the sense of the Gromov–Hausdorff distance.)

SLIDE 33

Trees as metric spaces

The vertices of Tn come equipped with a natural metric: the graph distance.

[Figure: a labelled tree on 7 vertices]

We write (1/√n) Tn for the metric space given by the vertices of Tn with the graph distance divided by √n.

SLIDE 34

Measuring the distance between metric spaces

Suppose that (X, d) and (X′, d′) are compact metric spaces. A correspondence R is a subset of X × X′ such that for every x ∈ X, there exists x′ ∈ X′ with (x, x′) ∈ R and vice versa.

SLIDE 37

Measuring the distance between metric spaces

The distortion of R is dis(R) = sup{|d(x, y) − d′(x′, y′)| : (x, x′), (y, y′) ∈ R}.
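For finite metric spaces the distortion is a direct computation (a toy example; the specific spaces and correspondence are my own):

```python
from itertools import product

def distortion(d, d2, R):
    """dis(R) = sup |d(x,y) - d2(x',y')| over (x, x'), (y, y') in R."""
    return max(abs(d[x][y] - d2[xp][yp]) for (x, xp), (y, yp) in product(R, R))

# X = the path 0-1-2 with graph distance; X' = two points at distance 1.
d = {0: {0: 0, 1: 1, 2: 2}, 1: {0: 1, 1: 0, 2: 1}, 2: {0: 2, 1: 1, 2: 0}}
d2 = {"a": {"a": 0, "b": 1}, "b": {"a": 1, "b": 0}}
R = [(0, "a"), (1, "a"), (2, "b")]
print(distortion(d, d2, R))  # 1, so the Gromov-Hausdorff distance is at most 1/2
```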

SLIDE 38

Measuring the distance between metric spaces

(X, d) and (X′, d′) are at Gromov–Hausdorff distance less than ε > 0 if there exists a correspondence R between X and X′ such that dis(R) < 2ε. Write dGH((X, d), (X′, d′)) < ε.

SLIDE 39

The Brownian CRT

Why “Brownian” continuum random tree? Because T2 can be obtained by a glueing operation performed on the standard Brownian excursion, (e(t), 0 ≤ t ≤ 1).

[Figure: a realization of the standard Brownian excursion on [0, 1]]

SLIDE 40

The Brownian CRT

[Figures (slides 40–48): the Brownian excursion is progressively glued up into the Brownian CRT]

[Pictures by Igor Kortchemski]

SLIDE 49

Critical Galton–Watson trees

Consider a Galton–Watson branching process with offspring distribution (pk)k≥0. Suppose that the offspring distribution is critical, i.e. Σk kpk = 1, and condition the tree to have total progeny n. Let T^GW_n be the family tree associated with this process (thought of as a rooted plane tree with n vertices).

SLIDE 50

Combinatorial trees

By taking different offspring distributions, we can obtain various natural combinatorial models:

◮ Poisson(1) corresponds to the uniform random tree (once we forget the planar order and give the tree a uniform labelling).
◮ Geometric(1/2) gives a uniform plane tree.
◮ p0 = 1/2, p2 = 1/2 gives a uniform (complete) binary tree (as long as n is odd).

SLIDE 51

The finite-variance case

Theorem (Aldous (1993); Le Gall (2005))

Suppose σ² := Σk (k − 1)² pk < ∞. Then, as n → ∞,
(1/√n) T^GW_n →d cσ T2,
where T2 is Aldous’ Brownian continuum random tree and cσ is a positive constant. (The convergence is in the sense of the Gromov–Hausdorff distance.)

SLIDE 52

Infinite variance

What if the offspring distribution does not have finite variance? It is natural to consider offspring distributions such that pk ∼ k^{−(1+α)} for α ∈ (1, 2) (or, more generally, distributions in the domain of attraction of a stable law of index α).

SLIDE 53

The infinite-variance case

Theorem (Duquesne & Le Gall (2002); Duquesne (2003))

Suppose that (pk)k≥0 lies in the domain of attraction of a stable law of index α ∈ (1, 2). Then, as n → ∞,
(1/n^{1−1/α}) T^GW_n →d cα Tα,
where Tα is the stable tree of parameter α and cα is a positive constant. (The convergence is in the sense of the Gromov–Hausdorff distance.)

SLIDE 54

The stable trees

[Pictures by Igor Kortchemski]

SLIDE 55

The stable trees

The stable trees also possess a functional encoding (although the excursions concerned are rather more involved to describe).

[Figures: two excursions and the stable trees they encode]

[Pictures by Igor Kortchemski]

An important difference between the stable trees for α ∈ (1, 2) and the Brownian CRT is that the Brownian CRT is binary, whereas the stable trees have only branchpoints of infinite degree.

SLIDE 57

A uniform measure

The principal theme of the rest of this talk is how to give a (relatively) simple description of the stable trees (and how to use it to get at their distributional properties). For α ∈ (1, 2], the stable tree Tα is naturally endowed with a “uniform” probability measure µα, which is the limit of the discrete uniform measure on T^GW_n. It turns out that µα is supported by the set of leaves of Tα. Aldous’ theory of continuum random trees tells us that we can characterize the laws of such trees via sampling.

SLIDE 60

Reduced trees

Let X1, X2, . . . be leaves sampled independently from Tα according to µα, and let Tα,n be the subtree spanned by the root ρ and X1, . . . , Xn:

[Figure: the subtree of Tα spanned by ρ and the leaves X1, . . . , X5]

SLIDE 62

Characterising the law of a stable tree

Tα,n can be thought of in two parts: its tree-shape (a rooted unordered tree with n labelled leaves) and its edge-lengths. The laws of (Tα,n, n ≥ 1) (the random finite-dimensional distributions) are sufficient to fully specify the law of Tα. Moreover, Tα is the closure of ∪_{n≥1} Tα,n.

SLIDE 65

Reminder: Aldous’ line-breaking construction of the Brownian CRT

Let C1, C2, . . . be the points of an inhomogeneous Poisson process on R+ of intensity t dt.

[Figure: the half-line marked at the points C1 < C2 < · · ·]

SLIDE 66

Line-breaking construction

[Figures (slides 66–71): the trees ˜T1, . . . , ˜T6, built by attaching the segments [Ci−1, Ci) one by one]

SLIDE 72

Line-breaking construction

It turns out that the line-breaking construction precisely gives the random finite-dimensional distributions for the Brownian CRT, i.e.
( ˜Tn, n ≥ 1) =d ((1/√2) T2,n, n ≥ 1).

Question: does there exist a similar line-breaking construction for the stable trees with α ∈ (1, 2)?

SLIDE 74

Marchal’s algorithm

Marchal (2008) discovered a recursive construction of the tree-shapes. Build ( ˜Tn, n ≥ 1) as follows:

◮ Start from a single edge, rooted at one end-point and with the other end-point labelled 1.
◮ At all subsequent steps, assign edges weight α − 1 and vertices of degree d ≥ 3 weight d − 1 − α.
◮ At step n, pick an edge or a vertex with probability proportional to its weight.
◮ If we pick an edge, subdivide it into two edges and attach the leaf labelled n to the middle vertex we just created.
◮ If we pick a vertex, attach the leaf labelled n to it.
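The recursion above can be sketched directly (a minimal implementation; the vertex naming scheme and adjacency-map representation are my own):

```python
import random

def marchal(n, alpha, seed=0):
    """Grow the tree-shapes of Marchal's algorithm up to n labelled leaves.

    Vertices: 'rho' (root), integers 1..n (leaves), 'v1', 'v2', ... (internal).
    Returns the adjacency map of the final tree.
    """
    rng = random.Random(seed)
    adj = {"rho": {1}, 1: {"rho"}}
    edges = [("rho", 1)]
    internal = 0
    for leaf in range(2, n + 1):
        # Weights: alpha - 1 per edge, d - 1 - alpha per vertex of degree d >= 3.
        items = [(("edge", e), alpha - 1) for e in edges]
        items += [(("vertex", v), len(adj[v]) - 1 - alpha)
                  for v in adj if len(adj[v]) >= 3]
        r = rng.uniform(0, sum(w for _, w in items))
        for (kind, obj), w in items:
            r -= w
            if r <= 0:
                break
        if kind == "edge":
            # Subdivide the chosen edge and hang the new leaf off the midpoint.
            u, w2 = obj
            internal += 1
            mid = f"v{internal}"
            adj[u].remove(w2); adj[w2].remove(u)
            adj[u].add(mid); adj[w2].add(mid)
            adj[mid] = {u, w2, leaf}
            adj[leaf] = {mid}
            edges.remove(obj)
            edges += [(u, mid), (mid, w2), (mid, leaf)]
        else:
            # Attach the new leaf directly to the chosen branchpoint.
            adj[obj].add(leaf)
            adj[leaf] = {obj}
            edges.append((obj, leaf))
    return adj

adj = marchal(10, 1.5)
print(sum(len(nbrs) for nbrs in adj.values()) // 2 == len(adj) - 1)  # True: a tree
```

For α < 2 the vertex weights d − 1 − α are positive as soon as d ≥ 3, so branchpoints can keep accumulating degree; at α = 2 they vanish and only edge-subdivisions occur.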

SLIDE 80

Marchal’s algorithm

[Figures (slides 80–86): the algorithm run step by step, with weights α − 1 on edges and d − 1 − α on branchpoints of degree d]

SLIDE 87

Marchal’s algorithm

Then ( ˜Tn, n ≥ 1) =d (Tα,n, n ≥ 1). (The α = 2 case is Rémy’s algorithm (1985) for building a uniform binary rooted tree with n labelled leaves.) Moreover,
n^{−(1−1/α)} ˜Tn →a.s. c′α Tα
as n → ∞ [Curien–Haas (2013)]. Our new line-breaking construction gives a nested sequence of continuous trees which converge a.s. to Tα without any need for rescaling.

SLIDE 90

The generalized Mittag-Leffler distribution

For β ∈ (0, 1), let σβ be a stable random variable with Laplace transform E[exp(−λσβ)] = exp(−λ^β), λ ≥ 0. Say that a non-negative random variable M has the generalized Mittag-Leffler distribution with parameters β ∈ (0, 1) and θ > −β, and write M ∼ ML(β, θ), if
E[f(M)] = Cβ,θ E[σβ^{−θ} f(σβ^{−β})]
for all suitable test-functions f. The law of M is characterized by its moments:
E[M^k] = Γ(θ)Γ(θ/β + k) / (Γ(θ/β)Γ(θ + kβ)) for any k ≥ 1.
If β = 1/2 and n ≥ 1, ML(1/2, n − 1/2) is the law of 2√G, where G ∼ Gamma(n, 1).
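For β = 1/2 the moment formula can be checked against the moments of 2√G with G ∼ Gamma(n, 1) directly (a numerical sanity check; the helper names are mine):

```python
from math import gamma

def ml_moment(beta, theta, k):
    """E[M^k] for M ~ ML(beta, theta), from the moment formula above."""
    return (gamma(theta) * gamma(theta / beta + k)
            / (gamma(theta / beta) * gamma(theta + k * beta)))

def two_sqrt_gamma_moment(n, k):
    """E[(2 sqrt(G))^k] for G ~ Gamma(n, 1): 2^k Gamma(n + k/2) / Gamma(n)."""
    return 2 ** k * gamma(n + k / 2) / gamma(n)

for n in (1, 2, 5):
    for k in (1, 2, 3):
        assert abs(ml_moment(0.5, n - 0.5, k)
                   - two_sqrt_gamma_moment(n, k)) < 1e-6
print("ML(1/2, n - 1/2) moments match those of 2*sqrt(Gamma(n, 1))")
```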
SLIDE 94

A generalized Pólya urn scheme

ML(β, θ) arises as an almost sure limit in the context of a generalized Pólya urn scheme. Start with weight 0 on black and weight θ/β on red. Pick a colour with probability proportional to its weight in the urn.

◮ If black is picked, add 1/β to the black weight.
◮ If red is picked, add 1 − 1/β to the black weight and 1 to the red weight.

Let Rn be the weight of red at step n. Then [Janson (2006)],
n^{−β} Rn →a.s. W ∼ ML(β, θ).

SLIDE 98

Urns in Marchal’s algorithm

Idea: there are many such urns embedded in Marchal’s algorithm! Consider the distance Dn between the root and the leaf labelled 1. The associated weight is (α − 1)Dn. Let Wn be the remaining weight in the rest of the tree. D1 = 1 and W1 = 0. At each subsequent step,

◮ with probability proportional to (α − 1)Dn, we pick one of the Dn edges between the root and 1 to split; then Dn+1 = Dn + 1, the associated weight increases by α − 1, and Wn+1 = Wn + (2 − α) + (α − 1) = Wn + 1;
◮ with probability proportional to Wn, we add the new edge elsewhere; this yields Wn+1 = Wn + α.

(We always add weight α to the whole tree.)
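These dynamics are easy to simulate, and the fact that every step adds total weight α gives an exact check on the bookkeeping (a sketch; the function name is mine):

```python
import random

def root_distance_urn(alpha, steps, seed=0):
    """Track (D_n, W_n) for the root-to-leaf-1 urn inside Marchal's algorithm."""
    rng = random.Random(seed)
    D, W = 1, 0.0
    for _ in range(steps):
        red = (alpha - 1) * D  # weight carried by the root-to-1 path
        if rng.uniform(0, red + W) < red:
            D += 1       # split an edge on the path: path weight gains alpha - 1
            W += 1.0     # rest of tree gains (2 - alpha) + (alpha - 1) = 1
        else:
            W += alpha   # new edge attached elsewhere
    return D, W

alpha, steps = 1.5, 1000
D, W = root_distance_urn(alpha, steps)
# Each step adds total weight alpha, starting from alpha - 1 at n = 1:
print(abs((alpha - 1) * D + W - ((alpha - 1) + alpha * steps)) < 1e-9)  # True
```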

SLIDE 103

Urns in Marchal’s algorithm

Then (Dn, n ≥ 1) behaves exactly as the red weight in the generalized Pólya urn with β = θ = 1 − 1/α. It follows that
n^{−(1−1/α)} Dn →d ML(1 − 1/α, 1 − 1/α) as n → ∞.
This suggests that the first stick in any line-breaking construction should have length distributed as ML(1 − 1/α, 1 − 1/α).

SLIDE 105

A Markov chain

We define an increasing R+-valued process which will play a role similar to that of the inhomogeneous Poisson process in the Brownian case. Let (Mn, n ≥ 1) be a Markov chain such that

◮ Mn ∼ ML(1 − 1/α, n − 1/α) for n ≥ 1;
◮ the backward transition from Mn+1 to Mn is given by Mn = βn Mn+1, where βn is independent of Mn+1 and βn ∼ Beta(((n + 1)α − 2)/(α − 1), 1/(α − 1)).

SLIDE 107

A Markov chain

Lemma

If α = 2, (Mn, n ≥ 1) are the ordered points of an inhomogeneous Poisson process on R+ with intensity (t/2) dt.

Sketch proof.

It suffices to show that (Mn²/4, n ≥ 1) are the ordered points of a Poisson process of rate 1. But Mn ∼ ML(1/2, n − 1/2), the law of 2√G with G ∼ Gamma(n, 1), and so Mn²/4 ∼ Gamma(n, 1). The relationship between successive points encoded in Mn = βn Mn+1, where βn ∼ Beta(2n, 1), gives exactly the right dependence structure.
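The dependence-structure claim rests on the beta–gamma algebra: if G ∼ Gamma(n + 1, 1) and B′ ∼ Beta(n, 1) are independent then B′G ∼ Gamma(n, 1), and B′ is the square of a Beta(2n, 1) variable. A seeded Monte Carlo sanity check of the first moment (the parameters and sample size are my own choices):

```python
import random

def sample_mean_B2G(n, trials, seed=0):
    """Mean of B^2 * G with B ~ Beta(2n, 1) and G ~ Gamma(n + 1, 1) independent."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        B = rng.random() ** (1 / (2 * n))  # Beta(2n, 1) by inverting F(x) = x^{2n}
        G = sum(rng.expovariate(1.0) for _ in range(n + 1))  # Gamma(n + 1, 1)
        total += B * B * G
    return total / trials

n = 3
# B^2 ~ Beta(n, 1), and Beta(n, 1) * Gamma(n + 1, 1) ~ Gamma(n, 1), which has mean n.
print(abs(sample_mean_B2G(n, 200_000) - n) < 0.1)  # True
```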

SLIDE 110

Line-breaking construction of the stable tree (I)

◮ Start with M1 and set L1 = M1. Let ˜T1 be the tree consisting of a line-segment of length L1.
◮ For n ≥ 1, given ˜Tn (which has total length Ln):
  1. Let Bn+1 ∼ Beta(1, (2 − α)/(α − 1)) be independent of everything we have already constructed. We will glue a new branch of length (Mn+1 − Mn) · Bn+1 onto ˜Tn, at a point to be specified; let Ln+1 = Ln + (Mn+1 − Mn) · Bn+1 be the new total length.
  2. In order to find where to glue the new branch, we first select either the set of edges of ˜Tn, with probability Ln/Mn, or the set of branchpoints of ˜Tn, with probability 1 − Ln/Mn.
  3. If we select the edges in 2, glue the new branch at a uniform point along ˜Tn.
  4. If we select the branchpoints in 2, pick a branchpoint at random in such a way that a branchpoint of degree d ≥ 3 is chosen with probability proportional to d − 1 − α. Then glue the new branch to the selected branchpoint.

SLIDE 117

[Figures (slides 117–128): the construction illustrated step by step]

Line-breaking construction of the stable tree (II)

slide-129
SLIDE 129

Line-breaking construction of the stable tree (II)

◮ Start with M1 and set L1 = M1. Let ˜

T1 be the tree consisting

  • f a line-segment of length L1.
slide-130
SLIDE 130

Line-breaking construction of the stable tree (II)

◮ Start with M1 and set L1 = M1. Let ˜

T1 be the tree consisting

  • f a line-segment of length L1.

◮ For n ≥ 1, given ˜

Tn (which has total length Ln):

slide-131
SLIDE 131

Line-breaking construction of the stable tree (II)

◮ Start with M1 and set L1 = M1. Let ˜

T1 be the tree consisting

  • f a line-segment of length L1.

◮ For n ≥ 1, given ˜

Tn (which has total length Ln):

  • 1. Let Bn+1 ∼ Beta(1, 2−α

α−1) be independent of everything we

have already constructed. We will glue a new branch of length (Mn+1 − Mn) · Bn+1 onto ˜ Tn, at a point to be specified; let Ln+1 = Ln + (Mn+1 − Mn) · Bn+1 be the new total length.

slide-132
SLIDE 132

Line-breaking construction of the stable tree (II)

◮ Start with M1 and set L1 = M1. Let ˜

T1 be the tree consisting

  • f a line-segment of length L1.

◮ For n ≥ 1, given ˜

Tn (which has total length Ln):

  • 1. Let Bn+1 ∼ Beta(1, 2−α

α−1) be independent of everything we

have already constructed. We will glue a new branch of length (Mn+1 − Mn) · Bn+1 onto ˜ Tn, at a point to be specified; let Ln+1 = Ln + (Mn+1 − Mn) · Bn+1 be the new total length.

  • 2. In order to find where to glue the new branch, we first select

either the set of edges of ˜ Tn, with probability Ln/Mn, or the internal vertex v with probability W (n)

v

/Mn.

slide-133
SLIDE 133

Line-breaking construction of the stable tree (II)

◮ Start with M1 and set L1 = M1. Let ˜

T1 be the tree consisting

  • f a line-segment of length L1.

◮ For n ≥ 1, given ˜

Tn (which has total length Ln):

  • 1. Let Bn+1 ∼ Beta(1, 2−α

α−1) be independent of everything we

have already constructed. We will glue a new branch of length (Mn+1 − Mn) · Bn+1 onto ˜ Tn, at a point to be specified; let Ln+1 = Ln + (Mn+1 − Mn) · Bn+1 be the new total length.

  • 2. In order to find where to glue the new branch, we first select

either the set of edges of ˜ Tn, with probability Ln/Mn, or the internal vertex v with probability W (n)

v

/Mn.

  • 3. If we select the edges in 2, glue the new branch at a uniform

point along ˜ Tn and assign the new internal vertex weight W (n+1)

v

= (Mn+1 − Mn) · (1 − Bn+1).

slide-134
SLIDE 134

Line-breaking construction of the stable tree (II)

◮ Steps 1–3 as on the previous slides, and additionally:

  • 4. If we select the internal vertex v in step 2, glue the new branch to it and let W^(n+1)_v = W^(n)_v + (Mn+1 − Mn) · (1 − Bn+1).
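The gluing steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes 1 < α < 2, and it takes the increasing sequence (Mn) as a given input, since sampling the Mittag-Leffler-distributed Mn is beyond the scope of this sketch.

```python
import random

def grow_stable_tree(alpha, M, seed=0):
    """Sketch of version (II): glue weighted branches onto a growing tree.

    M is a given increasing sequence M_1 <= M_2 <= ...; sampling the true
    ML(1 - 1/alpha, n - 1/alpha) marginals is not shown here.  Returns the
    total length L_n and the internal-vertex weights after len(M) steps.
    """
    rng = random.Random(seed)
    b_param = (2 - alpha) / (alpha - 1)  # second Beta parameter, (2-α)/(α-1)
    L = M[0]                 # L_1 = M_1: start from one line segment
    weights = {}             # internal vertex -> weight W_v
    next_vertex = 0
    for n in range(1, len(M)):
        B = rng.betavariate(1, b_param)
        branch = (M[n] - M[n - 1]) * B          # new branch length
        leftover = (M[n] - M[n - 1]) * (1 - B)  # weight to deposit
        total_weight = sum(weights.values())
        # Note L + total_weight = M_n, so a uniform point on [0, L + ΣW]
        # selects the edges w.p. L_n/M_n and vertex v w.p. W_v/M_n.
        u = rng.uniform(0, L + total_weight)
        if u < L:
            # glue at a uniform point along the tree: creates a new
            # internal vertex carrying the leftover weight (step 3)
            weights[next_vertex] = leftover
            next_vertex += 1
        else:
            # glue at an existing internal vertex, chosen with probability
            # proportional to its weight, and top up its weight (step 4)
            u -= L
            for v, w in weights.items():
                if u < w:
                    weights[v] = w + leftover
                    break
                u -= w
        L += branch
    return L, weights
```

By construction the invariant Ln + Σv W^(n)_v = Mn holds after every step, which is a quick way to sanity-check the bookkeeping.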

slide-135
SLIDE 135

Line-breaking constructions

Theorem (Haas & G.)

Let (T̃n, n ≥ 1) be the sequence of trees produced by either version of the construction. Then

(T̃n, n ≥ 1) d= (Tα,n, n ≥ 1) and, therefore, Tα d= ⋃_{n≥1} T̃n.

slide-136
SLIDE 136

Remarks

In the case α = 2, we have Beta(1, (2−α)/(α−1)) = Beta(1, 0). We interpret this as Bn = 1 almost surely for all n ≥ 1. Then we recover (a scaled version of) Aldous’ Poisson line-breaking construction of the Brownian CRT.

slide-137
SLIDE 137

Remarks

The tree-shapes (Tn, n ≥ 1) of (T̃n, n ≥ 1) perform Marchal’s algorithm.

slide-138
SLIDE 138

Consequences: distributional results for (Tα,n, n ≥ 1)

Edge-lengths: Let t be a discrete rooted tree with n ≥ 2 leaves and k edges. Then, conditionally on Tα,n = t, the sequence of edge-lengths of Tα,n has the same distribution as Mn · βk · (D1, D2, . . . , Dk), where these random variables are independent and

Mn ∼ ML(1 − 1/α, n − 1/α),
βk ∼ Beta(k, (nα − 1)/(α − 1)),
(D1, D2, . . . , Dk) ∼ Dir(1, 1, . . . , 1).∗
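This description is directly simulable with standard Beta and Gamma samplers; only Mn needs an external Mittag-Leffler sampler. A minimal sketch (the draw m_n of Mn is taken as an input, since simulating ML(1 − 1/α, n − 1/α) is not covered on these slides):

```python
import random

def edge_lengths(m_n, k, n, alpha, rng=random):
    """Sample the edge-lengths of T_{alpha,n} conditioned on a shape
    with k edges, given a draw m_n of M_n ~ ML(1 - 1/alpha, n - 1/alpha)."""
    beta_k = rng.betavariate(k, (n * alpha - 1) / (alpha - 1))
    # Dir(1, ..., 1) via normalised Gamma(1, 1) variables
    gammas = [rng.gammavariate(1, 1) for _ in range(k)]
    total = sum(gammas)
    dirichlet = [g / total for g in gammas]
    return [m_n * beta_k * d for d in dirichlet]
```

The lengths sum to m_n · βk, so conditioning on the shape only changes how the total length Mn · βk is split, not its law.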
slide-139
SLIDE 139

Consequences: distributional results for (Tα,n, n ≥ 1)

∗Dirichlet distribution: Dir(a1, . . . , an) has density

Γ(a1 + · · · + an) / (∏_{i=1}^n Γ(ai)) · x1^{a1−1} · · · xn^{an−1}

with respect to Lebesgue measure on {(x1, . . . , xn) ∈ [0, 1]^n : ∑_{i=1}^n xi = 1}.
slide-140
SLIDE 140

Consequences: distributional results for (Tα,n, n ≥ 1)

Total length of the conditioned tree: Conditionally on Tα,n having k edges, the total length of Tα,n has the same distribution as Mn · βk, where these random variables are independent and Mn ∼ ML(1 − 1/α, n − 1/α) and βk ∼ Beta(k, (nα − 1)/(α − 1)).

slide-141
SLIDE 141

Consequences: distributional results for (Tα,n, n ≥ 1)

Total length of the unconditioned tree: The total length of the tree Tα,n has the same distribution as

Mn · ( ∏_{j=1}^{n−1} βj + ∑_{i=1}^{n−1} Bi (1 − βi) ∏_{j=i+1}^{n−1} βj ),

where the random variables on the right-hand side are mutually independent and such that

Mn ∼ ML(1 − 1/α, n − 1/α),
βi ∼ Beta( ((i + 1)α − 2)/(α − 1), 1/(α − 1) ), i ≥ 1,
B1, B2, . . . , Bn ∼ Beta(1, (2 − α)/(α − 1)).
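The product-sum expression for the unconditioned total length can be evaluated directly once a draw of Mn is available. A minimal sketch (again treating a sample m_n of the Mittag-Leffler variable as a supplied input):

```python
import random

def total_length(m_n, n, alpha, rng=random):
    """Evaluate the telescoping expression for the total length of
    T_{alpha,n}, given a draw m_n of M_n ~ ML(1 - 1/alpha, n - 1/alpha)."""
    # beta_1, ..., beta_{n-1} and B_1, ..., B_{n-1}
    betas = [rng.betavariate(((i + 1) * alpha - 2) / (alpha - 1),
                             1 / (alpha - 1))
             for i in range(1, n)]
    bs = [rng.betavariate(1, (2 - alpha) / (alpha - 1)) for _ in range(n - 1)]

    def prod(xs):
        p = 1.0
        for x in xs:
            p *= x
        return p

    total = prod(betas)                       # prod_{j=1}^{n-1} beta_j
    for i in range(n - 1):                    # term for i+1 in the display
        total += bs[i] * (1 - betas[i]) * prod(betas[i + 1:])
    return m_n * total
```

Since each Bi ≤ 1, the bracketed factor telescopes to at most 1, so the total length never exceeds the draw of Mn.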
slide-142
SLIDE 142

Open problem

Does there exist a discrete version of our line-breaking construction (à la Aldous’ construction of the uniform random tree)?

slide-143
SLIDE 143

A line-breaking construction of the stable trees, joint with Bénédicte Haas, Electronic Journal of Probability 20 (2015), paper no. 16, pp. 1–24.

slide-144
SLIDE 144

Beta-Gamma algebra

The proof relies heavily on the following distributional facts.

◮ If B ∼ Beta(a, b) and G ∼ Gamma(a + b, 1) are independent, then G · (B, 1 − B) d= (G1, G2), where G1 ∼ Gamma(a, 1) and G2 ∼ Gamma(b, 1) are independent.

slide-145
SLIDE 145

Beta-Gamma algebra

◮ Looked at the other way around, (G1/(G1 + G2), G2/(G1 + G2)) d= (B, 1 − B), and this pair is independent of G1 + G2 ∼ Gamma(a + b, 1).

slide-146
SLIDE 146

Beta-Gamma algebra

◮ Let D = (D1, D2, . . . , Dn) ∼ Dir(a1, a2, . . . , an) and P(I = i | D) = Di. Then, conditionally on the event {I = i}, we have (D1, . . . , Di, . . . , Dn) ∼ Dir(a1, . . . , ai + 1, . . . , an).
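The first Beta-Gamma fact is easy to check by simulation: if G·B ∼ Gamma(a, 1) then the samples should have mean and variance both equal to a. A small Monte Carlo sketch (the values a = 2, b = 3 are illustrative choices, not from the slides):

```python
import random

# If B ~ Beta(a, b) and G ~ Gamma(a + b, 1) are independent,
# then G * B ~ Gamma(a, 1), which has mean a and variance a.
rng = random.Random(1)
a, b, trials = 2.0, 3.0, 50_000
samples = [rng.gammavariate(a + b, 1) * rng.betavariate(a, b)
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, var)  # both should be close to a = 2
```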

slide-147
SLIDE 147

An idea of the proof (of version (II))

The key point is that, conditionally on the shapes T̃1, T̃2, . . . , T̃n (with T̃n having k edges and ℓ internal vertices, of degrees d1, . . . , dℓ), the edge-lengths and vertex weights are such that

(L^(n)_1, . . . , L^(n)_k, W^(n)_1, . . . , W^(n)_ℓ) d= ML(1 − 1/α, n − 1/α) × Dir(1, . . . , 1, (d1 − 1 − α)/(α − 1), . . . , (dℓ − 1 − α)/(α − 1)),

where the two terms on the right-hand side are independent.
slide-148
SLIDE 148

An idea of the proof (of version (II))

This distributional identity can be proved inductively.

slide-149
SLIDE 149

An idea of the proof (of version (II))

Recall that we add our new branch either at a node or at a uniformly chosen point along the edges. So we pick an edge or a vertex with probability proportional to its weight.

slide-150
SLIDE 150

An idea of the proof (of version (II))

Picking an edge or vertex with probability proportional to its weight amounts to taking a size-biased pick from amongst the co-ordinates of the Dirichlet vector, and has the effect of adding 1 to the corresponding parameter.

slide-151
SLIDE 151

An idea of the proof (of version (II))

If we pick a co-ordinate which corresponded to an edge, it now has parameter 2. Splitting that co-ordinate with an independent uniform gives back two co-ordinates with parameter 1.

slide-152
SLIDE 152

An idea of the proof (of version (II))

Whether we picked an edge or a vertex, we now want to add one co-ordinate with parameter 1 (representing the new edge) and either a co-ordinate with parameter (2−α)/(α−1) (for a new vertex) or an additional weight to the existing vertex whose parameter we already biased:

(d − 1 − α)/(α − 1) + 1 + (2 − α)/(α − 1) = ((d + 1) − 1 − α)/(α − 1).
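The parameter bookkeeping above is elementary algebra; a quick numerical check over a few illustrative values of d and α (any 1 < α < 2 and vertex degree d ≥ 3 would do):

```python
# Size-biasing a degree-d vertex adds 1 to its Dirichlet parameter; depositing
# the leftover weight adds (2 - alpha)/(alpha - 1).  Together this yields the
# parameter of a degree-(d + 1) vertex.
for alpha in (1.1, 1.5, 1.9):
    for d in (3, 4, 7):
        lhs = (d - 1 - alpha) / (alpha - 1) + 1 + (2 - alpha) / (alpha - 1)
        rhs = ((d + 1) - 1 - alpha) / (alpha - 1)
        assert abs(lhs - rhs) < 1e-12
print("parameter bookkeeping checks out")
```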

slide-153
SLIDE 153

An idea of the proof (of version (II))

This is the role of (Bn, 1 − Bn), where Bn ∼ Beta(1, (2−α)/(α−1)).

slide-154
SLIDE 154

An idea of the proof (of version (II))

Recall that Mn = Mn+1 · βn.

slide-155
SLIDE 155

An idea of the proof (of version (II))

The βn factor in Mn = Mn+1 · βn is precisely what is needed to rescale the Dirichlet vector in order to accommodate the extra co-ordinates we added.

slide-156
SLIDE 156

A line-breaking construction of the stable trees, joint with Bénédicte Haas, Electronic Journal of Probability 20 (2015), paper no. 16, pp. 1–24.