Building Random Trees from Blocks (Mohan Gopaladesikan) - PowerPoint PPT Presentation


slide-1
SLIDE 1

Building Random Trees from Blocks

Mohan Gopaladesikan

Department of Statistics, Purdue University

11th June 2013

Joint work with

  • Dr. Hosam Mahmoud

Department of Statistics, The George Washington University

  • Dr. Mark Daniel Ward

Department of Statistics, Purdue University

slide-2
SLIDES 2–8

The Probability Model

We have a finite collection of unlabeled, rooted, nonplanar building blocks (trees) C = {T1, . . . , Tk} that occur with respective probabilities p1, . . . , pk, where Σ_{j=1}^{k} pj = 1.

We use these as building blocks of an unlabeled, rooted, nonplanar tree called the "blocks tree". Let T_{n−1} be the blocks tree after inserting n − 1 blocks.

At step n, with probability pi we choose the block Ti from C, and we adjoin it to the tree T_{n−1} by choosing a "parent" node at random from T_{n−1} (all nodes of T_{n−1} being equally likely parents). We then attach the root of block Ti to the chosen parent.

For mathematical simplicity, we study the case where all members of C have the same size t.

A special case: if C consists of a single block that is just one node, the blocks tree is isomorphic to the well-studied standard recursive tree.
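The growth rule above is easy to simulate. The sketch below is our own illustration, not part of the talk: a block is encoded as a list of within-block parent indices, and the two concrete block structures are our reconstruction of the size-4 example used later in the deck.

```python
import random

def grow_blocks_tree(blocks, probs, n, rng):
    """Grow a blocks tree by n block insertions.

    Each block is a list of within-block parent indices, with None marking
    the block's root; all blocks must have the same size t.  Returns a list
    `parent` with parent[v] the parent of node v (None for the global root).
    """
    parent = []
    for step in range(n):
        base = len(parent)                       # index of the new block's root
        block = rng.choices(blocks, weights=probs, k=1)[0]   # pick T_i w.p. p_i
        # a uniformly random parent among all existing nodes (none at step 0)
        attach = rng.randrange(base) if step > 0 else None
        for u in block:
            parent.append(base + u if u is not None else attach)
    return parent

# Hypothetical encodings of the size-4 example blocks (consistent with the
# leaf counts 2 and 3 quoted later in the talk):
path_block = [None, 0, 1, 1]    # root -> child -> two grandchildren (2 leaves)
star_block = [None, 0, 0, 0]    # root with three children (3 leaves)

rng = random.Random(1)
par = grow_blocks_tree([path_block, star_block], [1/3, 2/3], n=5, rng=rng)
assert len(par) == 5 * 4        # after n insertions the tree has n*t nodes
assert par.count(None) == 1     # exactly one global root
```

Each inserted block contributes exactly t nodes, so after n insertions the tree has nt nodes, matching the urn-size bookkeeping used in the proofs below.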

slide-9
SLIDES 9–12

Example

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

Figure: A tree built from building blocks.

Step 1: The first block occurs with probability 1/3;
Step 2: The second block occurs with probability 2/3, and the parent is chosen with probability 1/4;
Step 3: The third block occurs with probability 2/3, and the parent is chosen with probability 1/8.
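As an arithmetic aside (ours, not on the slides), the probability of generating this particular three-block tree in this exact order is the product of the per-step factors:

```python
from fractions import Fraction

# Step probabilities read off the example: block choice times parent choice.
step1 = Fraction(1, 3)                       # first block, probability 1/3
step2 = Fraction(2, 3) * Fraction(1, 4)      # second block 2/3, parent 1/4 of 4 nodes
step3 = Fraction(2, 3) * Fraction(1, 8)      # third block 2/3, parent 1/8 of 8 nodes
probability = step1 * step2 * step3
assert probability == Fraction(1, 216)
```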

slide-13
SLIDES 13–18

Motivation

  • The growth of large-scale software applications, which grow by the addition of smaller software modules.
  • The growth of wireless internet networks.
  • Hierarchical Bayesian models.
  • Forensic science: some special kinds of probabilistic expert systems and Wigmorian evidence charts.
  • Biology/chemistry: the growth of complex bacteria and organic molecules.
  • Human resources: the growth of large organizations.

slide-19
SLIDES 19–24

Scope of Work

We study the following asymptotic properties of the blocks tree:

  • Number of leaves
  • Three types of distances:
      – Depth of a node
      – Total path length of the blocks tree
      – Height of the blocks tree

slide-25
SLIDES 25–26

Leaves

Definition

A leaf is a node that has no children.

Theorem

Let Ln be the number of leaves in a random tree built from the building blocks T1, . . . , Tk, which are selected at each step with probabilities p1, . . . , pk. Let E[Λ_C] be the average number of leaves in a block of the given collection, and Var[Λ_C] be the variance. Then,

    (Ln − (t E[Λ_C] / (t + 1)) n) / √n  →_D  N(0, σ_C²),

where

    σ_C² := ( Var[Λ_C] / (t + 2) + E[Λ_C](t + 1 − E[Λ_C]) / ((1 + t)²(2 + t)) ) t.

slide-27
SLIDES 27–30

Sketch of Proof.

We color the leaves white (W) and the internal nodes blue (B). This enables us to use the powerful theory of Pólya urns.

Suppose Ti has ℓi leaves (and consequently t − ℓi internal nodes). Let Λ_C be the number of leaves in a randomly chosen block, i.e., Λ_C has probability mass

    P(Λ_C = ℓ) = Σ_j pj,

where the sum is taken over all j such that block Tj has ℓ leaves.

slide-31
SLIDES 31–34

Sketch of Proof.

For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

    P(Λ_C = 2) = 1/3, and P(Λ_C = 3) = 2/3.

The replacement matrix for this urn, with rows and columns indexed by (W, B):

    A = [ Λ_C − 1    t − Λ_C + 1 ]   (row W)
        [ Λ_C        t − Λ_C     ]   (row B)

Every row sum of this matrix is the constant t. Balanced urn!
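A quick way to see the balanced urn in action is to simulate it. This is an illustrative sketch of ours using the example collection (Λ_C = 2 with probability 1/3 and Λ_C = 3 with probability 2/3, t = 4); the limit it approaches, t E[Λ_C]/(t + 1) = 32/15 ≈ 2.133, is the one identified on the following slides.

```python
import random

rng = random.Random(42)
t, n_steps = 4, 20000

def sample_lambda():
    # Lambda_C for the example: 2 with probability 1/3, 3 with probability 2/3
    return 2 if rng.random() < 1/3 else 3

lam = sample_lambda()                      # the first block starts the urn
white, blue = lam, t - lam                 # white = leaves, blue = internal nodes
for _ in range(n_steps - 1):
    lam = sample_lambda()
    if rng.random() < white / (white + blue):   # a white ball is drawn
        white += lam - 1                        # row W of A: Lambda_C - 1 white,
        blue += t - lam + 1                     # t - Lambda_C + 1 blue
    else:                                       # a blue ball is drawn
        white += lam                            # row B of A: Lambda_C white,
        blue += t - lam                         # t - Lambda_C blue

assert white + blue == t * n_steps         # balanced: exactly t balls per step
ratio = white / n_steps                    # should approach t*E[Lambda_C]/(t+1)
```

The balance property (each step adds exactly t balls regardless of which ball is drawn) is what makes the eigenvalue machinery on the next slide applicable.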

slide-35
SLIDES 35–37

Sketch of Proof.

For balanced urns it is shown by [Athreya 1968] that

    Wn / n  →_{a.s.}  λ1 v1,

where λ1 is the principal (largest real) eigenvalue of E[A], the average of the replacement matrix, and (v1, v2) is the corresponding left eigenvector of E[A].

Calculating these for our blocks tree problem gives us λ1 = t and λ2 = −1, and hence

    Wn / n  →_{a.s.}  (t / (t + 1)) E[Λ_C].
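For the example collection these eigenvalue claims can be checked numerically (our own sketch; the 2×2 eigenvalues come from the characteristic polynomial, and we assume E[Λ_C] = 2·(1/3) + 3·(2/3) = 8/3):

```python
import math

# Mean replacement matrix E[A] for the example: t = 4, E[Lambda_C] = 8/3.
t = 4
EL = 8 / 3
a, b = EL - 1, t - EL + 1      # row W of E[A]
c, d = EL, t - EL              # row B of E[A]

# Eigenvalues of a 2x2 matrix via trace and determinant.
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert abs(lam1 - t) < 1e-9 and abs(lam2 + 1) < 1e-9   # lambda_1 = t, lambda_2 = -1

# Left eigenvector (v1, v2) for lambda_1, normalized so v1 + v2 = 1:
# v A = lambda_1 v  gives  v1*a + (1 - v1)*c = lambda_1*v1.
v1 = c / (lam1 - a + c)
assert abs(lam1 * v1 - t * EL / (t + 1)) < 1e-9        # lambda_1 v1 = t E[Lambda_C]/(t+1)
```

With t = 4 and E[Λ_C] = 8/3 this gives λ1 v1 = 32/15, matching the almost-sure limit of Wn/n stated above.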

slide-38
SLIDE 38

Sketch of Proof.

Further, we also notice that ℜλ2 < (1/2) λ1.

[Smythe 1996] showed that

    (Wn − λ1 v1 n) / √n  →_D  N(0, σ²)

for some variance σ², and states that σ² is generally hard to compute; we, however, will obtain the exact variance of Wn.

slide-39
SLIDES 39–45

Sketch of Proof.

Let I_n^(W) be the indicator of the event that a white ball is picked at the nth draw. Then

    Wn = W_{n−1} − I_n^(W) + Λ_C,

    Wn² = W_{n−1}² + I_n^(W) + Λ_C² + 2 Λ_C W_{n−1} − 2 Λ_C I_n^(W) − 2 W_{n−1} I_n^(W).

Solving these recurrences, we get

    E[Wn] ∼ (t E[Λ_C] / (t + 1)) n + O(n^{−1/t}),

    Var[Wn] ∼ ( Var[Λ_C] / (t + 2) + E[Λ_C](t + 1 − E[Λ_C]) / ((1 + t)²(2 + t)) ) t n + O(n^{1−ε}) =: σ_C² n.

Hence the theorem follows!
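The mean recurrence can be iterated exactly. Taking expectations in Wn = W_{n−1} − I_n^(W) + Λ_C, and using that a white ball is drawn with probability W_{n−1}/(t(n − 1)), gives E[Wn] = (1 − 1/(t(n − 1))) E[W_{n−1}] + E[Λ_C]. A numerical sketch of ours, with the example values t = 4 and E[Λ_C] = 8/3:

```python
# Iterate the exact mean recurrence for the number of white balls (leaves):
#   E[W_n] = (1 - 1/(t(n-1))) * E[W_{n-1}] + E[Lambda_C],  E[W_1] = E[Lambda_C].
t = 4
EL = 8 / 3                     # E[Lambda_C] for the example collection
ew = EL                        # E[W_1]: the first block alone
N = 200000
for n in range(2, N + 1):
    ew = (1 - 1 / (t * (n - 1))) * ew + EL

limit = t * EL / (t + 1)       # = 32/15, the leading coefficient of E[W_n]
assert abs(ew / N - limit) < 1e-3
```

The transient term of this recurrence decays like n^{−1/t}, which is exactly the error term quoted for E[Wn] above.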

slide-46
SLIDES 46–47

Depth

Definition

The depth of a node is the length of the shortest path from the node to the root.

Theorem

Let Dn be the depth of the root of the nth inserted block in a random tree built from the building blocks T1, . . . , Tk, which are selected at each step with probabilities p1, . . . , pk. Let E[Δ_C] be the average depth of a node in the given collection, and Var[Δ_C] be the variance of that depth. Then,

    (Dn − (E[Δ_C] + 1) ln n) / √(ln n)  →_D  N(0, Var[Δ_C] + (E[Δ_C] + 1)²).

slide-48
SLIDES 48–54

Sketch of Proof.

At step n, the newcomer can join any of the n − 1 existing blocks. The root of the nth block inherits the depth of one of the existing blocks, adjusted by the depth of the node it chooses as parent within that block, plus an extra 1.

We call the block to which the parent belongs the "parent block"; it can be any member i of the collection, with probability pi.

Let δn be the random depth at which the nth parent node appears within its own block. The δ1, δ2, . . . are equidistributed; their common distribution is that of Δ_C (the depth of a parent node), which is completely determined by the structure of the trees in the collection.

slide-55
SLIDES 55–60

For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

    Δ_C = 0 with probability 3/12;
          1 with probability 7/12;
          2 with probability 2/12.

For each i = 1, . . . , n − 1, we have Dn = Di + δn + 1 with probability 1/(n − 1). Hence the moment generating function of Dn conditioned on F_{n−1}, the sigma field generated by the first n − 1 insertions, is

    E[e^{Dn u} | F_{n−1}] = E[ Σ_{i=1}^{n−1} e^{(Di + δn + 1) u} (1/(n − 1)) | F_{n−1} ]
                          = (1/(n − 1)) E[e^{(δn + 1) u}] Σ_{i=1}^{n−1} e^{Di u}.
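The block structures themselves are not visible in this transcript, but a 4-node path-like block and a 4-node star reproduce every distribution quoted in the deck (Λ_C, Δ_C, and the χ_C values used later). A consistency check, with the block encodings being our reconstruction:

```python
from fractions import Fraction
from collections import Counter

# Inferred example blocks, each given by within-block parent indices
# (None = block root).  These reproduce the quoted Lambda_C, Delta_C, chi_C.
blocks = {
    "block1": ([None, 0, 1, 1], Fraction(1, 3)),  # root -> child -> 2 grandchildren
    "block2": ([None, 0, 0, 0], Fraction(2, 3)),  # star: root with 3 children
}

def depths(parents):
    d = []
    for u in parents:
        d.append(0 if u is None else d[u] + 1)
    return d

delta = Counter()   # Delta_C: depth of a uniform node of a random block
lam = Counter()     # Lambda_C: number of leaves of a random block
chi = Counter()     # chi_C: total path length of a random block
for parents, p in blocks.values():
    t = len(parents)
    d = depths(parents)
    for depth in d:
        delta[depth] += p / t
    leaves = t - len(set(u for u in parents if u is not None))
    lam[leaves] += p
    chi[sum(d)] += p

assert lam == {2: Fraction(1, 3), 3: Fraction(2, 3)}
assert chi == {5: Fraction(1, 3), 3: Fraction(2, 3)}
assert delta == {0: Fraction(3, 12), 1: Fraction(7, 12), 2: Fraction(2, 12)}
```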

slide-61
SLIDES 61–65

Sketch of Proof.

Taking expectation again,

    φ_{Dn}(u) = (e^u ψ_C(u) / (n − 1)) Σ_{i=1}^{n−1} φ_{Di}(u),

valid for n ≥ 2, with the initial condition φ_{D1}(u) = 1 (here ψ_C(u) = E[e^{Δ_C u}] is the moment generating function of Δ_C).

Solving this full-history recurrence, and after appropriate centering and scaling, we get

    E[ exp( (Dn − (E[Δ_C] + 1) ln n) / √(ln n) · u ) ] → e^{(1/2)(Var[Δ_C] + (E[Δ_C] + 1)²) u²}.

Therefore we have the theorem:

    (Dn − (E[Δ_C] + 1) ln n) / √(ln n)  →_D  N(0, Var[Δ_C] + (E[Δ_C] + 1)²).

Remark. The expressions for the mean and variance are valid even if t = t(n) grows with n. However, in the asymptotic derivation of the central limit theorem we have to keep t relatively small compared to n.
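The centering (E[Δ_C] + 1) ln n can be checked at the level of means: taking expectations in the distributional recurrence gives E[Dn] = (1/(n − 1)) Σ_{i<n} E[Di] + E[Δ_C] + 1 with E[D1] = 0, whose exact solution is (E[Δ_C] + 1) H_{n−1}. A numerical sketch of ours, using the example's E[Δ_C] = 11/12:

```python
# Iterate the exact mean recurrence for the depth D_n of the n-th block root:
#   E[D_n] = (1/(n-1)) * sum_{i<n} E[D_i] + (E[Delta_C] + 1),  E[D_1] = 0.
c = 11 / 12 + 1                 # E[Delta_C] + 1 for the example collection
means = [0.0]                   # E[D_1]
total = 0.0                     # running sum of E[D_1], ..., E[D_{n-1}]
for n in range(2, 5001):
    total += means[-1]
    means.append(total / (n - 1) + c)

# The recurrence solves exactly to E[D_n] = (E[Delta_C] + 1) * H_{n-1},
# which grows like (E[Delta_C] + 1) * ln n, the centering in the theorem.
H = sum(1 / j for j in range(1, 5000))     # harmonic number H_{4999}
assert abs(means[-1] - c * H) < 1e-6
```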

slide-66
SLIDES 66–67

Total path length

Definition

The total path length of a tree is the sum of the depths of all the nodes.

Theorem

Let Xn be the total path length of a tree built from the blocks of a collection C, and let Hn denote the nth harmonic number. Then there is an absolutely integrable random variable X such that

    Xn/n − (t + E[χ_C]) Hn + t

converges to X, both in L² and almost surely.

slide-68
SLIDES 68–70

Sketch of Proof.

Let xi be the total path length of a block Ti. Let χ_C be a discrete random variable that assumes the value xi with probability Σ_j pj, where the sum is taken over every block Tj with total path length xi.

For example:

Figure: A collection of building blocks of size 4, with probabilities 1/3 and 2/3, respectively.

    P(χ_C = 5) = 1/3, and P(χ_C = 3) = 2/3.

slide-71
SLIDES 71–75

Sketch of Proof.

Let Xn be the total path length of the entire tree Tn built from the first n block insertions. If the nth block is adjoined to a node v ∈ T_{n−1} at depth D̃(v) in the tree T_{n−1}, then each node of the last inserted block appears at distance D̃(v) + 1 plus its own depth within the last block.

So we have the following stochastic recurrence for n ≥ 2:

    E[Xn | F_{n−1}] = X_{n−1} + t ( (1 / (t(n − 1))) Σ_{v∈T_{n−1}} (D̃(v) + 1) ) + E[χ_C | F_{n−1}]
                    = X_{n−1} + X_{n−1}/(n − 1) + t + E[χ_C].

Taking expectations again,

    E[Xn] = (n / (n − 1)) E[X_{n−1}] + t + E[χ_C].

Solving, we get

    E[Xn] = (t + E[χ_C]) n Hn − nt ∼ (t + E[χ_C]) n ln n.
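The closed form for E[Xn] can be verified exactly against the recurrence with rational arithmetic (our own sketch, using the example's t = 4 and E[χ_C] = 5·(1/3) + 3·(2/3) = 11/3):

```python
from fractions import Fraction

# Check the closed form E[X_n] = (t + E[chi_C]) n H_n - n t against the
# recurrence E[X_n] = (n/(n-1)) E[X_{n-1}] + t + E[chi_C], exactly.
t = Fraction(4)
Echi = Fraction(5) * Fraction(1, 3) + Fraction(3) * Fraction(2, 3)   # = 11/3

ex = Echi                       # E[X_1]: the path length of the first block alone
H = Fraction(1)                 # harmonic number H_1
for n in range(2, 101):
    ex = Fraction(n, n - 1) * ex + t + Echi
    H += Fraction(1, n)
    assert ex == (t + Echi) * n * H - n * t     # closed form holds at every n
```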

slide-76
SLIDES 76–79

Sketch of Proof.

Let Ďn be the depth of the node chosen as parent for the root of the nth block. Then we have the stochastic recurrence

    Xn = X_{n−1} + t(Ďn + 1) + χ_C.

Squaring and taking double expectation, we get

    E[Xn²] = ((n + 1)/(n − 1)) E[X_{n−1}²] + (2n/(n − 1)) (E[χ_C] + t) E[X_{n−1}]
             + t² E[Ďn²] + t² + 2t E[χ_C] + E[χ_C²].

We develop a separate recurrence for E[Ďn²], and solving for E[Xn²] we get

    Var[Xn] ∼ ( t² (E[Δ_C²] + 2(E[Δ_C])² + 4 E[Δ_C] + 4) + E[χ_C²] + 4t E[χ_C]
                − (E[χ_C] + t)² (2 + π²/6) ) n².

slide-80
SLIDES 80–85

Sketch of Proof.

We know the conditional relation

    E[Xn | F_{n−1}] = X_{n−1} + X_{n−1}/(n − 1) + t + E[χ_C].

Hence we see that

    Xn* = Xn/n − (t + E[χ_C]) Hn + t

is a martingale. From the variance expression we see that E[(Xn*)²] = c + o(1), for a constant c. Hence sup_{n≥1} E[(Xn*)²] < ∞, and the theorem follows from Doob's martingale convergence theorem.

Remark. The expressions for the mean and variance are valid even when t = t(n) is no longer fixed but grows with n.

slide-86
SLIDES 86–87

Height

Definition

The height of the tree is the maximum depth among all existing nodes:

    Hn = max_{v∈Tn} D(v).

Theorem

Let Hn be the height of a random tree built from the building blocks T1, . . . , Tk, which are selected at each step with probabilities p1, . . . , pk. We then have

    Hn / ln n  →_{a.s.}  e (E[Δ_C] + 1).

slide-88
SLIDE 88

Sketch of Proof.

Recall a standard recursive trees:

22

slide-89
SLIDE 89

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps.

22

slide-90
SLIDE 90

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely).

22

slide-91
SLIDE 91

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e.

22

slide-92
SLIDE 92

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e. Bursting argument:

22

slide-93
SLIDE 93

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e. Bursting argument: The blocks tree can be obtained from a recursive tree by “bursting” its nodes.

22

slide-94
SLIDE 94

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e. Bursting argument: The blocks tree can be obtained from a recursive tree by “bursting” its nodes. Each node in the recursive tree is replaced with (bursts into) a building block, with block Ti being chosen with probability pi

22

slide-95
SLIDE 95

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e. Bursting argument: The blocks tree can be obtained from a recursive tree by “bursting” its nodes. Each node in the recursive tree is replaced with (bursts into) a building block, with block Ti being chosen with probability pi Then each child of that node in the recursive tree independently chooses a parent in the parent block at random, with all nodes of that parent being equally likely (each may be taken as parent with probability 1/t).

22

slide-96
SLIDE 96

Sketch of Proof.

Recall a standard recursive trees: grows out of a root node in steps. At each step, a new node is added by choosing a parent node from the existing tree at random (all nodes are equally likely). If ˆ Hn be the height of the recursive tree, [Pittel 1994] showed that

ˆ Hn ln n a.s.

− → e. Bursting argument: The blocks tree can be obtained from a recursive tree by “bursting” its nodes. Each node in the recursive tree is replaced with (bursts into) a building block, with block Ti being chosen with probability pi Then each child of that node in the recursive tree independently chooses a parent in the parent block at random, with all nodes of that parent being equally likely (each may be taken as parent with probability 1/t). When a child of a node bursts into a block, it is its root that gets joined to a randomly chosen parent in the parent block.

22
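The bursting construction can also be simulated directly: draw a block, then attach its root one level below a uniformly chosen node of the current tree. The block shapes, probabilities, and helper below are illustrative assumptions of my own (two 3-node blocks), not the deck's examples.

```python
import random

def blocks_tree_height(n_blocks, block_profiles, probs, rng):
    """Build a blocks tree and return its height.

    Each block is described by the depths of its nodes measured from the
    block's own root (the root itself has depth 0).  At every step a block
    is drawn with its probability p_i and its root is joined, one level
    down, to a node chosen uniformly from the whole current tree.
    """
    depths = list(rng.choices(block_profiles, weights=probs)[0])  # first block sits at the root
    for _ in range(n_blocks - 1):
        block = rng.choices(block_profiles, weights=probs)[0]
        parent = depths[rng.randrange(len(depths))]   # uniform parent node
        depths.extend(parent + 1 + d for d in block)  # block root at depth parent + 1
    return max(depths)

rng = random.Random(7)
path3 = [0, 1, 2]   # node depths in a 3-node path block
star3 = [0, 1, 1]   # node depths in a 3-node star block (root plus two children)
print(blocks_tree_height(2000, [path3, star3], [0.5, 0.5], rng))
```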

slide-97
SLIDE 97

Sketch of Proof.

We now connect Hn and Ĥn.

23

slide-98
SLIDE 98

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn).

23

slide-99
SLIDE 99

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree.

23

slide-100
SLIDE 100

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree. δ̂1 is distributed like ∆C.

23

slide-101
SLIDE 101

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree. δ̂1 is distributed like ∆C. Similarly, when v3 bursts into a block, its root appears a further 1 + δ̂2 levels down.

23

slide-102
SLIDE 102

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree. δ̂1 is distributed like ∆C. Similarly, when v3 bursts into a block, its root appears a further 1 + δ̂2 levels down. Again δ̂2 is distributed like ∆C, and so forth along that path.

23

slide-103
SLIDE 103

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree. δ̂1 is distributed like ∆C. Similarly, when v3 bursts into a block, its root appears a further 1 + δ̂2 levels down. Again δ̂2 is distributed like ∆C, and so forth along that path. Hence

23

slide-104
SLIDE 104

Sketch of Proof.

We now connect Hn and Ĥn. Suppose v1, …, v_Ĥn is a path in the recursive tree leading from the root to a node at the highest level (of depth Ĥn). v1 is necessarily the root, and when v2 bursts into a block, the root of that block appears in the blocks tree at distance 1 + δ̂1 from the root of that tree. δ̂1 is distributed like ∆C. Similarly, when v3 bursts into a block, its root appears a further 1 + δ̂2 levels down. Again δ̂2 is distributed like ∆C, and so forth along that path. Hence

Hn ≥ (1 + δ̂1) + (1 + δ̂2) + ··· + (1 + δ̂_Ĥn) = Ĥn + ∑_{i=1}^{Ĥn} δ̂i,

where the δ̂i, for i = 1, …, Ĥn, are all independent.

23

slide-105
SLIDE 105

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

24
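Here ∆C is the depth, within its block, of a uniformly chosen node of a block drawn from C. A Monte Carlo sketch of this law-of-large-numbers step, under an illustrative two-block collection of my own (not the deck's example):

```python
import random

def sample_delta(block_profiles, probs, rng):
    """One draw of Delta_C: pick a block with its probability p_i, then
    return the depth of a node chosen uniformly inside that block."""
    block = rng.choices(block_profiles, weights=probs)[0]
    return block[rng.randrange(len(block))]

rng = random.Random(1)
path3 = [0, 1, 2]   # node depths in a 3-node path block
star3 = [0, 1, 1]   # node depths in a 3-node star block
m = 100_000
avg = sum(sample_delta([path3, star3], [0.5, 0.5], rng) for _ in range(m)) / m
# for these toy blocks, E[Delta_C] = 0.5*(0+1+2)/3 + 0.5*(0+1+1)/3 = 5/6
print(avg)
```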

slide-106
SLIDE 106

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

Hence

Hn / ln n ≥ Ĥn / ln n + ((1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i) × (Ĥn / ln n) → e + e E[∆C]   a.s.

24

slide-107
SLIDE 107

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

Hence

Hn / ln n ≥ Ĥn / ln n + ((1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i) × (Ĥn / ln n) → e + e E[∆C]   a.s.

This gives us an a.s. lower bound.

24

slide-108
SLIDE 108

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

Hence

Hn / ln n ≥ Ĥn / ln n + ((1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i) × (Ĥn / ln n) → e + e E[∆C]   a.s.

This gives us an a.s. lower bound. We label the nodes of the bursting recursive tree according to their time order of appearance: the root is labeled 1, the second node is labeled 2, and so on.

24

slide-109
SLIDE 109

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

Hence

Hn / ln n ≥ Ĥn / ln n + ((1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i) × (Ĥn / ln n) → e + e E[∆C]   a.s.

This gives us an a.s. lower bound. We label the nodes of the bursting recursive tree according to their time order of appearance: the root is labeled 1, the second node is labeled 2, and so on. Suppose node i in the recursive tree is at depth D̂i, and that the jth node on the path from the root to node i bursts into a tree in which the next node down the same path is adjoined to a node at depth δ̂_j^(i).

24

slide-110
SLIDE 110

Sketch of Proof.

By the strong law of large numbers,

(1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i → E[∆C]   a.s.

Hence

Hn / ln n ≥ Ĥn / ln n + ((1/Ĥn) ∑_{i=1}^{Ĥn} δ̂i) × (Ĥn / ln n) → e + e E[∆C]   a.s.

This gives us an a.s. lower bound. We label the nodes of the bursting recursive tree according to their time order of appearance: the root is labeled 1, the second node is labeled 2, and so on. Suppose node i in the recursive tree is at depth D̂i, and that the jth node on the path from the root to node i bursts into a tree in which the next node down the same path is adjoined to a node at depth δ̂_j^(i).

Then, the height of the blocks tree is bounded above:

Hn ≤ max_{1≤i≤n} [ (1 + δ̂_1^(i)) + (1 + δ̂_2^(i)) + ··· + (1 + δ̂_{D̂i}^(i)) ].

24

slide-111
SLIDE 111

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]

25
slide-112
SLIDE 112

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]

25
slide-113
SLIDE 113

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

25

slide-114
SLIDE 114

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length.

25

slide-115
SLIDE 115

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length. By the strong law of large numbers, we have

25

slide-116
SLIDE 116

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length. By the strong law of large numbers, we have

Hn / ln n ≤ Ĥn / ln n + max_{1≤i≤n} [ δ̂_1^(i) + ··· + δ̂_{Ĥn}^(i) ] / ln n

25
slide-117
SLIDE 117

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length. By the strong law of large numbers, we have

Hn / ln n ≤ Ĥn / ln n + max_{1≤i≤n} [ δ̂_1^(i) + ··· + δ̂_{Ĥn}^(i) ] / ln n
          ≤ e + e E[∆C] + o(1)   a.s.

25

slide-118
SLIDE 118

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length. By the strong law of large numbers, we have

Hn / ln n ≤ Ĥn / ln n + max_{1≤i≤n} [ δ̂_1^(i) + ··· + δ̂_{Ĥn}^(i) ] / ln n
          ≤ e + e E[∆C] + o(1)   a.s.,

so lim sup Hn / ln n ≤ e (1 + E[∆C])   a.s.

25

slide-119
SLIDE 119

Sketch of Proof.

Hn ≤ max_{1≤i≤n} [ D̂i + δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ max_{1≤i≤n} D̂i + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{D̂i}^(i) ]
   ≤ Ĥn + max_{1≤i≤n} [ δ̂_1^(i) + δ̂_2^(i) + ··· + δ̂_{Ĥn}^(i) ],

where δ̂_{D̂i+1}^(i), …, δ̂_{Ĥn}^(i) are additional independent random variables padded at the end to make all the sums the same length. By the strong law of large numbers, we have

Hn / ln n ≤ Ĥn / ln n + max_{1≤i≤n} [ δ̂_1^(i) + ··· + δ̂_{Ĥn}^(i) ] / ln n
          ≤ e + e E[∆C] + o(1)   a.s.,

so lim sup Hn / ln n ≤ e (1 + E[∆C])   a.s.

Combining the bounds, the result follows!

25

slide-120
SLIDE 120

Future Work and Challenges

The distribution of the number of nodes of outdegree j > 0.

26

slide-121
SLIDE 121

Future Work and Challenges

The distribution of the number of nodes of outdegree j > 0. The joint distribution, or at least the covariances, of the numbers of nodes of outdegree i and j.

26

slide-122
SLIDE 122

Future Work and Challenges

The distribution of the number of nodes of outdegree j > 0. The joint distribution, or at least the covariances, of the numbers of nodes of outdegree i and j.

We cannot handle a collection of blocks of different sizes, which would be more general.

26

slide-123
SLIDE 123

Future Work and Challenges

The distribution of the number of nodes of outdegree j > 0. The joint distribution, or at least the covariances, of the numbers of nodes of outdegree i and j.

We cannot handle a collection of blocks of different sizes, which would be more general. The underlying structure here is a recursive tree. We could try to build other types of trees from blocks.

26

slide-124
SLIDE 124

Thank You! Questions are welcome!

27