Using graphs and Laplacian eigenvalues to evaluate block designs (R. A. Bailey) - PowerPoint PPT Presentation


SLIDE 1

Using graphs and Laplacian eigenvalues to evaluate block designs

R. A. Bailey
University of St Andrews / QMUL (emerita)

Ongoing joint work with Peter J. Cameron

Modern Trends in Algebraic Graph Theory, Villanova, June 2014

1/52

SLIDE 5

Abstract

Consider an experiment to compare v treatments in b blocks of size k. Statisticians use various criteria to decide which design is best. It turns out that most of these criteria are defined by the Laplacian eigenvalues of one of the two graphs defined by the block design:

  • the Levi graph, which has v + b vertices,
  • or the concurrence graph, which has v vertices.

The algebraic approach shows that sometimes all the criteria prefer highly symmetric designs but sometimes they favour very different ones.

2/52

SLIDE 8

An experiment on detergents

In a consumer experiment, twelve housewives volunteer to test new detergents. There are 16 new detergents to compare, but it is not realistic to ask any one volunteer to compare this many detergents. Each housewife tests one detergent per washload for each of four washloads, and assesses the cleanliness of each washload.

The experimental units are the 48 washloads. The housewives form 12 blocks of size 4. The treatments are the 16 new detergents.

3/52

SLIDE 12

Experiments in blocks

I have v treatments that I want to compare. I have b blocks, with k plots in each block.

  blocks        b    k    treatments     v
  housewives    12   4    detergents     16

How should I choose a block design for these values of b, v and k? What makes a block design good?

4/52

SLIDE 13

Two designs with v = 5, b = 7, k = 3: which is better?

Conventions: columns are blocks;
  • order of treatments within each block is irrelevant;
  • order of blocks is irrelevant.

  binary:            non-binary:
  1 1 1 1 2 2 2      1 1 1 1 2 2 2
  2 3 3 4 3 3 4      1 3 3 4 3 3 4
  3 4 5 5 4 5 5      2 4 5 5 4 5 5

A design is binary if no treatment occurs more than once in any block.

5/52

SLIDE 14

Two designs with v = 15, b = 7, k = 3: which is better?

  replications differ by ≤ 1:    queen-bee design:
  1 1 2 3  4  5  6               1 1 1 1  1  1  1
  2 4 5 6 10 11 12               2 4 6 8 10 12 14
  3 7 8 9 13 14 15               3 5 7 9 11 13 15

The replication of a treatment is its number of occurrences. A design is a queen-bee design if there is a treatment that occurs in every block.

6/52

SLIDE 15

Two designs with v = 7, b = 7, k = 3: which is better?

  balanced (2-design):    non-balanced:
  1 2 3 4 5 6 7           1 2 3 4 5 6 7
  2 3 4 5 6 7 1           2 3 4 5 6 7 1
  4 5 6 7 1 2 3           3 4 5 6 7 1 2

A binary design is balanced if every pair of distinct treatments occurs together in the same number of blocks.

7/52

SLIDE 20

Experimental units and incidence matrix

There are bk experimental units. If ω is an experimental unit, put

  f(ω) = treatment on ω,
  g(ω) = block containing ω.

For i = 1, . . . , v put ri = |{ω : f(ω) = i}| = replication of treatment i.

For i = 1, . . . , v and j = 1, . . . , b, let nij = |{ω : f(ω) = i and g(ω) = j}| = number of experimental units in block j which have treatment i.

The v × b incidence matrix N has entries nij.

8/52
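The definitions above are easy to sketch in code: a design is a list of blocks, each a tuple of treatment labels, and N counts multiplicities. The function name is mine; the example is the talk's Example 1 (v = 4, b = k = 3), whose third block is non-binary.

```python
import numpy as np

def incidence_matrix(blocks, v):
    """v x b incidence matrix N with n_ij = multiplicity of treatment i in block j."""
    N = np.zeros((v, len(blocks)), dtype=int)
    for j, block in enumerate(blocks):
        for treatment in block:
            N[treatment - 1, j] += 1  # treatments labelled 1..v
    return N

# Example 1 of the talk: blocks are the columns 1 2 1 / 3 3 2 / 4 4 2
blocks = [(1, 3, 4), (2, 3, 4), (1, 2, 2)]
N = incidence_matrix(blocks, v=4)
r = N.sum(axis=1)   # replications r_i (row sums)
print(N)
print(r)            # each column of N sums to k = 3
```

Row sums give the replications ri and column sums give the block sizes, exactly as on the slide.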

SLIDE 26

Levi graph

The Levi graph G̃ of a block design ∆ has
  • one vertex for each treatment,
  • one vertex for each block,
  • one edge for each experimental unit, with edge ω joining vertex f(ω) (the treatment on ω) to vertex g(ω) (the block containing ω).

It is a bipartite graph, with nij edges between treatment-vertex i and block-vertex j.

9/52
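Since N records the edge multiplicities, the Levi graph's adjacency matrix is just N placed in an off-diagonal block. A minimal sketch (helper name mine), using the talk's Example 2 design:

```python
import numpy as np

def levi_adjacency(N):
    """(v+b) x (v+b) adjacency matrix of the bipartite Levi graph:
    treatment vertices first, then block vertices."""
    v, b = N.shape
    A = np.zeros((v + b, v + b), dtype=int)
    A[:v, v:] = N      # treatment-to-block edges, multiplicity n_ij
    A[v:, :v] = N.T
    return A

# Example 2 of the talk: v = 8, b = 4, k = 3
N = np.zeros((8, 4), dtype=int)
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1
A = levi_adjacency(N)
print(A.sum() // 2)  # number of edges = bk = 12, one per experimental unit
```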

SLIDE 31

Example 1: v = 4, b = k = 3

  1 2 1
  3 3 2
  4 4 2

[Figure: the Levi graph of this design, with treatment vertices 1, 2, 3, 4 and three block vertices]

10/52

SLIDE 33

Example 2: v = 8, b = 4, k = 3

  1 2 3 4
  2 3 4 1
  5 6 7 8

[Figure: the Levi graph of this design, with treatment vertices 1–8 and four block vertices]

11/52

SLIDE 40

Concurrence graph

The concurrence graph G of a block design ∆ has
  • one vertex for each treatment,
  • one edge for each unordered pair α, ω with α ≠ ω, g(α) = g(ω) (in the same block) and f(α) ≠ f(ω): this edge joins vertices f(α) and f(ω).

There are no loops. If i ≠ j then the number of edges between vertices i and j is

  λij = ∑s=1..b nis njs;

this is called the concurrence of i and j, and is the (i, j)-entry of Λ = NN⊤.

12/52
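The concurrence matrix Λ = NN⊤ is one line of code; its off-diagonal entries are the edge multiplicities λij of G (the diagonal is discarded, since G has no loops). A sketch on the Example 2 design, with variable names of my choosing:

```python
import numpy as np

# Example 2: v = 8, b = 4, k = 3
N = np.zeros((8, 4), dtype=int)
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1

Lam = N @ N.T                              # concurrence matrix Lambda = N N^T
lam_offdiag = Lam - np.diag(np.diag(Lam))  # edge multiplicities lambda_ij of G
print(lam_offdiag.sum() // 2)  # edges of G: each block gives k(k-1)/2 = 3 pairs
```

For a binary design the diagonal of Λ records the replications ri, as one can check here.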

SLIDE 45

Example 1: v = 4, b = k = 3

  1 2 1
  3 3 2
  4 4 2

[Figure: the Levi graph and the concurrence graph of this design, side by side]

  Levi graph              concurrence graph
  can recover design      may have more symmetry
  more vertices
  more edges if k = 2     more edges if k ≥ 4

13/52

SLIDE 47

Example 2: v = 8, b = 4, k = 3

  1 2 3 4
  2 3 4 1
  5 6 7 8

[Figure: the Levi graph (left) and the concurrence graph (right) of this design]

14/52

SLIDE 48

Example 3: v = 15, b = 7, k = 3

  1 1 2 3  4  5  6      1 1 1 1  1  1  1
  2 4 5 6 10 11 12      2 4 6 8 10 12 14
  3 7 8 9 13 14 15      3 5 7 9 11 13 15

[Figure: the concurrence graphs of the two designs]

15/52
SLIDE 55

Laplacian matrices

The Laplacian matrix L of the concurrence graph G is a v × v matrix with (i, j)-entry as follows:
  • if i ≠ j then Lij = −(number of edges between i and j) = −λij;
  • Lii = valency of i = ∑j≠i λij.

The Laplacian matrix L̃ of the Levi graph G̃ is a (v + b) × (v + b) matrix with (i, j)-entry as follows:
  • L̃ii = valency of i = k if i is a block, or the replication ri of i if i is a treatment;
  • if i ≠ j then L̃ij = −(number of edges between i and j) =
      0 if i and j are both treatments,
      0 if i and j are both blocks,
      −nij if i is a treatment and j is a block, or vice versa.

16/52
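Both Laplacians can be assembled directly from N. A sketch under the slide's definitions (function name mine; np.block just stacks the four blocks of L̃):

```python
import numpy as np

def laplacians(N):
    """Laplacian L of the concurrence graph and Lt of the Levi graph,
    both built from the v x b incidence matrix N."""
    Lam = N @ N.T
    lam = Lam - np.diag(np.diag(Lam))       # off-diagonal concurrences lambda_ij
    L = np.diag(lam.sum(axis=1)) - lam      # concurrence-graph Laplacian
    r, k = N.sum(axis=1), N.sum(axis=0)     # replications and block sizes
    Lt = np.block([[np.diag(r), -N],
                   [-N.T, np.diag(k)]])     # Levi-graph Laplacian
    return L, Lt

# Example 2: v = 8, b = 4, k = 3
N = np.zeros((8, 4))
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1
L, Lt = laplacians(N)
print(L.sum(), Lt.sum())  # all row-sums are zero
```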

SLIDE 59

Connectivity

All row-sums of L and of L̃ are zero, so both matrices have 0 as an eigenvalue on the appropriate all-1 vector.

Theorem
The following are equivalent.
  1. 0 is a simple eigenvalue of L;
  2. G is a connected graph;
  3. G̃ is a connected graph;
  4. 0 is a simple eigenvalue of L̃;
  5. the design ∆ is connected, in the sense that all differences between treatments can be estimated.

From now on, assume connectivity. Call the remaining eigenvalues non-trivial. They are all non-negative.

17/52
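Point 1 of the theorem gives a numerical connectivity test: count the (near-)zero eigenvalues of L. A sketch comparing the connected Example 2 design with a deliberately disconnected design of my own making:

```python
import numpy as np

def concurrence_laplacian(blocks, v):
    N = np.zeros((v, len(blocks)))
    for j, block in enumerate(blocks):
        for t in block:
            N[t - 1, j] += 1
    Lam = N @ N.T
    lam = Lam - np.diag(np.diag(Lam))
    return np.diag(lam.sum(axis=1)) - lam

def is_connected(L, tol=1e-9):
    # the design (and graph) is connected iff 0 is a simple eigenvalue of L
    return np.sum(np.linalg.eigvalsh(L) < tol) == 1

L1 = concurrence_laplacian([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)], v=8)
L2 = concurrence_laplacian([(1, 2, 3), (1, 2, 3), (4, 5, 6), (4, 5, 6)], v=6)
print(is_connected(L1), is_connected(L2))  # the second design splits in two
```

The second design never lets treatments 1–3 meet treatments 4–6, so 0 is a double eigenvalue and the test fails, matching point 5 of the theorem.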

SLIDE 61

Generalized inverse

Under the assumption of connectivity, the Moore–Penrose generalized inverse L− of L is given by

  L− = (L + (1/v) Jv)−1 − (1/v) Jv,

where Jv is the v × v all-1 matrix. (The matrix (1/v) Jv is the orthogonal projector onto the null space of L.)

The Moore–Penrose generalized inverse L̃− of L̃ is defined similarly.

18/52
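The slide's formula is easy to check against a general-purpose pseudoinverse. A sketch on the Example 1 design (helper name mine):

```python
import numpy as np

def gen_inverse(L):
    """Moore-Penrose inverse of a connected-graph Laplacian via the slide's formula."""
    v = L.shape[0]
    J = np.ones((v, v)) / v          # the projector (1/v) J_v
    return np.linalg.inv(L + J) - J

# concurrence Laplacian of Example 1 (v = 4, b = k = 3)
N = np.zeros((4, 3))
for j, block in enumerate([(1, 3, 4), (2, 3, 4), (1, 2, 2)]):
    for t in block:
        N[t - 1, j] += 1
Lam = N @ N.T
lam = Lam - np.diag(np.diag(Lam))
L = np.diag(lam.sum(axis=1)) - lam

Lminus = gen_inverse(L)
print(np.allclose(Lminus, np.linalg.pinv(L)))  # agrees with numpy's pinv
```

Adding (1/v)Jv shifts only the trivial eigenvalue from 0 to 1, which is why the shifted matrix is invertible and why subtracting the projector afterwards recovers L−.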

SLIDE 66

Estimation

We measure the response Yω on each experimental unit ω. If experimental unit ω has treatment i and is in block m (f(ω) = i and g(ω) = m), then we assume that

  Yω = τi + βm + random noise.

We will do an experiment, collect data yω on each experimental unit ω, then want to estimate certain functions of the treatment parameters using functions of the data. We want to estimate contrasts ∑i xi τi with ∑i xi = 0. In particular, we want to estimate all the simple differences τi − τj.

19/52

SLIDE 70

Variance: why does it matter?

We want to estimate all the simple differences τi − τj. Put

  Vij = variance of the best linear unbiased estimator for τi − τj.

The length of the 95% confidence interval for τi − τj is proportional to √Vij. (If we always present results using a 95% confidence interval, then our interval will contain the true value in 19 cases out of 20.)

The smaller the value of Vij, the smaller is the confidence interval, the closer is the estimate to the true value (on average), and the more likely are we to detect correctly which of τi and τj is bigger. We can make better decisions about new drugs, about new varieties of wheat, about new engineering materials . . . if we make all the Vij small.

20/52

SLIDE 72

How do we calculate variance?

Theorem
Assume that all the noise is independent, with variance σ². If ∑i xi = 0, then the variance of the best linear unbiased estimator of ∑i xi τi is equal to (x⊤ L− x) kσ². In particular, the variance of the best linear unbiased estimator of the simple difference τi − τj is

  Vij = (L−ii + L−jj − 2 L−ij) kσ².

(This follows from the assumption Yω = τi + βm + random noise by using the standard theory of linear models.)

21/52
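The theorem's pairwise-variance formula vectorises neatly: all Vij at once from the diagonal and off-diagonal of L−. A sketch on the Example 2 design (function name mine; σ² set to 1):

```python
import numpy as np

def pairwise_variances(L, k, sigma2=1.0):
    """Matrix of V_ij = (L-_ii + L-_jj - 2 L-_ij) k sigma^2."""
    Lm = np.linalg.pinv(L)
    d = np.diag(Lm)
    return (d[:, None] + d[None, :] - 2 * Lm) * k * sigma2

# concurrence Laplacian of Example 2: v = 8, b = 4, k = 3
N = np.zeros((8, 4))
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1
Lam = N @ N.T
lam = Lam - np.diag(np.diag(Lam))
L = np.diag(lam.sum(axis=1)) - lam

V = pairwise_variances(L, k=3)
print(V[0, 1])  # V_12 in units of sigma^2
```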

SLIDE 75

. . . Or we can use the Levi graph

Theorem
The variance of the best linear unbiased estimator of the simple difference τi − τj is

  Vij = (L̃−ii + L̃−jj − 2 L̃−ij) σ².

(Or βi − βj, appropriately labelled.)

(This follows from the assumption Yω = τi − β̃m + random noise by using the standard theory of linear models.)

22/52

SLIDE 81

How do we calculate these generalized inverses?

We need L− or L̃−.
  • Add a suitable multiple of J, use GAP to find the inverse with exact rational coefficients, then subtract that multiple of J.
  • If the matrix is highly patterned, guess the eigenspaces, then invert each non-zero eigenvalue.
  • Direct use of the graph: coming up.

Not all of these methods are suitable for generic designs with a variable number of treatments.

23/52

SLIDE 82

Electrical networks

We can consider the concurrence graph G as an electrical network with a 1-ohm resistance in each edge. Connect a 1-volt battery between vertices i and j. Current flows in the network, according to these rules.

  1. Ohm's Law: in every edge, voltage drop = current × resistance = current.
  2. Kirchhoff's Voltage Law: the total voltage drop from one vertex to any other vertex is the same no matter which path we take from one to the other.
  3. Kirchhoff's Current Law: at every vertex which is not connected to the battery, the total current coming in is equal to the total current going out.

Find the total current I from i to j, then use Ohm's Law to define the effective resistance Rij between i and j as 1/I.

24/52
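Kirchhoff's rules reduce to one linear solve: inject a unit current at i, extract it at j, and solve L φ = ei − ej for the node potentials φ; the potential difference φi − φj is then the effective resistance. A sketch under those assumptions (function name mine), checked on a 3-vertex path where two 1-ohm resistors in series give 2 ohms:

```python
import numpy as np

def effective_resistance(L, i, j):
    """R_ij from Kirchhoff's laws: solve L phi = e_i - e_j, read off phi_i - phi_j."""
    c = np.zeros(L.shape[0])
    c[i], c[j] = 1.0, -1.0          # unit current in at i, out at j
    phi = np.linalg.pinv(L) @ c     # node potentials, up to an additive constant
    return phi[i] - phi[j]

# path graph 1 - 2 - 3 with 1-ohm edges
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
print(round(effective_resistance(L, 0, 2), 6))  # 2.0, resistors in series
```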

SLIDE 85

Electrical networks: variance

Theorem
The effective resistance Rij between vertices i and j in G is

  Rij = L−ii + L−jj − 2 L−ij.

So Vij = Rij × kσ². Effective resistances are easy to calculate without matrix inversion if the graph is sparse.

25/52

SLIDE 90

Example 2 calculation: v = 8, b = 4, k = 3

[Figure: the concurrence graph with a battery across one pair of vertices; edge currents and bracketed node potentials are marked, running from potential 0 up to 23]

  V = 23, I = 24, so R = 23/24.

26/52
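The hand calculation can be checked against the resistance formula of the earlier theorem. The transcript does not say which two vertices the battery spans, so this sketch simply searches all pairs of the Example 2 concurrence graph for those at effective resistance 23/24:

```python
import numpy as np

# concurrence graph of Example 2: blocks {1,2,5}, {2,3,6}, {3,4,7}, {4,1,8}
N = np.zeros((8, 4))
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1
Lam = N @ N.T
lam = Lam - np.diag(np.diag(Lam))
L = np.diag(lam.sum(axis=1)) - lam
Lm = np.linalg.pinv(L)

R = {(i, j): Lm[i, i] + Lm[j, j] - 2 * Lm[i, j]
     for i in range(8) for j in range(i + 1, 8)}
found = [(i + 1, j + 1) for (i, j), r in R.items() if abs(r - 23 / 24) < 1e-9]
print(found)  # the pairs at effective resistance 23/24
```

The search turns up the pairs consisting of a treatment on the 4-cycle and a "pendant" treatment attached to a non-incident cycle edge, so the slide's value 23/24 is confirmed.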

SLIDE 91

. . . Or we can use the Levi graph

If i and j are treatment vertices in the Levi graph G̃, and R̃ij is the effective resistance between them in G̃, then Vij = R̃ij × σ².

27/52

SLIDE 97

Example 2 yet again: v = 8, b = 4, k = 3

  1 2 3 4
  2 3 4 1
  5 6 7 8

[Figure: the same calculation done in the Levi graph, with edge currents and bracketed node potentials from 0 up to 23]

  V = 23, I = 8, so R̃ = 23/8.

(Note that R̃σ² = (23/8)σ² = 3 × (23/24)σ² = Rkσ², agreeing with the concurrence-graph calculation.)

28/52
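The same consistency check in code: by the two variance theorems, effective resistances between treatment vertices of the Levi graph should be exactly k times those in the concurrence graph, and 23/8 should appear among them. The pair realising 23/8 is not named in the transcript, so the sketch verifies the ratio for every pair instead:

```python
import numpy as np

v, b, k = 8, 4, 3
N = np.zeros((v, b))
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1

# Laplacians of the concurrence graph and the Levi graph
Lam = N @ N.T
lam = Lam - np.diag(np.diag(Lam))
L = np.diag(lam.sum(axis=1)) - lam
Lt = np.block([[np.diag(N.sum(axis=1)), -N], [-N.T, k * np.eye(b)]])
Lm, Ltm = np.linalg.pinv(L), np.linalg.pinv(Lt)

def res(M, i, j):
    return M[i, i] + M[j, j] - 2 * M[i, j]

ratios = [res(Ltm, i, j) / res(Lm, i, j) for i in range(v) for j in range(i + 1, v)]
print(np.allclose(ratios, k))  # R~_ij = k R_ij for every treatment pair
```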

SLIDE 100

Optimality: Average pairwise variance

The variance of the best linear unbiased estimator of the simple difference τi − τj is

  Vij = (L−ii + L−jj − 2 L−ij) kσ² = Rij kσ².

We want all of the Vij to be small. Put V̄ = average value of the Vij. Then

  V̄ = 2kσ² Tr(L−) / (v − 1) = 2kσ² × 1 / (harmonic mean of θ1, . . . , θv−1),

where θ1, . . . , θv−1 are the non-trivial eigenvalues of L.

29/52
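The identity on this slide is easy to confirm numerically, since the rows of L− sum to zero. A sketch on the Example 2 design (σ² set to 1 for the check):

```python
import numpy as np

# concurrence Laplacian of the Example 2 design
N = np.zeros((8, 4))
for j, block in enumerate([(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]):
    for t in block:
        N[t - 1, j] += 1
Lam = N @ N.T
lam = Lam - np.diag(np.diag(Lam))
L = np.diag(lam.sum(axis=1)) - lam
v, k, sigma2 = 8, 3, 1.0

Lm = np.linalg.pinv(L)
d = np.diag(Lm)
V = (d[:, None] + d[None, :] - 2 * Lm) * k * sigma2
Vbar = V[np.triu_indices(v, 1)].mean()    # average pairwise variance

theta = np.linalg.eigvalsh(L)[1:]         # non-trivial eigenvalues
harmonic = (v - 1) / np.sum(1 / theta)
print(np.isclose(Vbar, 2 * k * sigma2 / harmonic))
```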

SLIDE 103

A-Optimality

A block design is called A-optimal if it minimizes the average of the variances Vij
(equivalently: it maximizes the harmonic mean of the non-trivial eigenvalues of the Laplacian matrix L)
over all block designs with block size k and the given v and b.

30/52

SLIDE 106

Optimality: Confidence region

When v > 2, the generalization of a confidence interval is the confidence ellipsoid around the point (τ̂1, . . . , τ̂v) in the hyperplane in Rv with ∑i τi = 0. The volume of this confidence ellipsoid is proportional to

  √( ∏i=1..v−1 1/θi ) = (geometric mean of θ1, . . . , θv−1)^(−(v−1)/2) = 1 / √(v × number of spanning trees of G).

31/52
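The last equality is the Matrix-Tree Theorem: the product of the non-trivial Laplacian eigenvalues equals v times the number of spanning trees, which is also any cofactor of L. A sketch checking this on the complete graph K4, which has 4² = 16 spanning trees by Cayley's formula:

```python
import numpy as np

# Laplacian of the complete graph K4: degree 3 on the diagonal, -1 off it
v = 4
L = v * np.eye(v) - np.ones((v, v))

theta = np.linalg.eigvalsh(L)[1:]          # non-trivial eigenvalues (all equal to 4)
trees = round(np.linalg.det(L[1:, 1:]))    # cofactor of L = number of spanning trees
print(trees, round(np.prod(theta) / v))    # both give 16
```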

slide-111
SLIDE 111

D-Optimality

A block design is called D-optimal if it minimizes the volume of the confidence ellipsoid for (τ̂1, . . . , τ̂v);
—equivalently, it maximizes the geometric mean of the non-trivial eigenvalues of the Laplacian matrix L;
—equivalently, it maximizes the number of spanning trees for the concurrence graph G;
—equivalently, it maximizes the number of spanning trees for the Levi graph G̃;
over all block designs with block size k and the given v and b.

32/52

slide-116
SLIDE 116

Optimality: Worst case

If x is a contrast in R^v then the variance of the estimator of x⊤τ is (x⊤L−x)kσ². If we multiply every entry in x by a constant c then this variance is multiplied by c²; and so is x⊤x. The worst case is for contrasts x giving the maximum value of (x⊤L−x)/(x⊤x). These are precisely the eigenvectors corresponding to θ1, where θ1 is the smallest non-trivial eigenvalue of L.

33/52

slide-118
SLIDE 118

E-Optimality

A block design is called E-optimal if it maximizes the smallest non-trivial eigenvalue of the Laplacian matrix L;

over all block designs with block size k and the given v and b.

34/52
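The A-, D- and E-criteria can all be read off the spectrum of L. The following pure-Python sketch is not part of the talk (the Jacobi routine and every name in it are ours); it is applied to our reading of the talk's Example 2 (v = 8, b = 4, k = 3, blocks {1,2,5}, {2,3,6}, {3,4,7}, {4,1,8}, written 0-indexed):

```python
import math

def laplacian(v, blocks):
    """Laplacian of the concurrence graph: off-diagonal entries are
    minus the concurrences; diagonal entries make each row sum zero."""
    L = [[0.0] * v for _ in range(v)]
    for blk in blocks:
        for a in blk:
            for b in blk:
                if a != b:
                    L[a][b] -= 1.0
                    L[a][a] += 1.0
    return L

def jacobi_eigenvalues(A, tol=1e-12, max_rot=10000):
    """Eigenvalues of a symmetric matrix by classical Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rot):
        # zero out the largest off-diagonal entry
        val, p, q = max((abs(A[i][j]), i, j)
                        for i in range(n) for j in range(i + 1, n))
        if val < tol:
            break
        t = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(t), math.sin(t)
        for k in range(n):  # rotate rows p and q
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * Apk - s * Aqk, s * Apk + c * Aqk
        for k in range(n):  # rotate columns p and q
            Akp, Akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * Akp - s * Akq, s * Akp + c * Akq
    return sorted(A[i][i] for i in range(n))

blocks = [(0, 1, 4), (1, 2, 5), (2, 3, 6), (3, 0, 7)]  # Example 2, 0-indexed
eigs = jacobi_eigenvalues(laplacian(8, blocks))
theta = eigs[1:]                                      # drop the trivial zero
A_crit = len(theta) / sum(1.0 / t for t in theta)     # harmonic mean  (A)
D_crit = math.prod(theta) ** (1.0 / len(theta))       # geometric mean (D)
E_crit = theta[0]                                     # smallest       (E)
```

By the Kirchhoff identity quoted later in the talk, the product of the θi equals v × (number of spanning trees of G), which for this design is 8 × 216 = 1728; the computed spectrum reproduces this to rounding error.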

slide-119
SLIDE 119

BIBDs are optimal

Theorem (Kshirsagar, 1958; Kiefer, 1975)

If there is a balanced incomplete-block design (BIBD) (2-design) for v treatments in b blocks of size k, then it is A-, D- and E-optimal. Moreover, no non-BIBD is A-, D- or E-optimal.

35/52

slide-122
SLIDE 122

D-optimality: spanning trees

A spanning tree for the graph is a collection of edges of the graph which form a tree (connected graph with no cycles) and which include every vertex. Cheng (1981), after Gaffke (1978), after Kirchhoff (1847): product of non-trivial eigenvalues of L = v × number of spanning trees. So a design is D-optimal if and only if its concurrence graph G has the maximal number of spanning trees. This is easy to calculate by hand when the graph is sparse.

36/52
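Kirchhoff's identity makes this count computable: the number of spanning trees is any cofactor of the Laplacian (the Matrix–Tree theorem). A minimal exact-arithmetic sketch, not part of the talk (the names are ours):

```python
from fractions import Fraction

def spanning_trees(n, edges):
    """Count spanning trees of a multigraph on vertices 0..n-1 via the
    Matrix-Tree theorem: the determinant of the reduced Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # drop row and column 0, then take the determinant
    M = [row[1:] for row in L[1:]]
    m = n - 1
    det = Fraction(1)
    for col in range(m):
        pivot = next((r for r in range(col, m) if M[r][col] != 0), None)
        if pivot is None:
            return 0          # disconnected graph: no spanning trees
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        inv = 1 / M[col][col]
        for r in range(col + 1, m):
            f = M[r][col] * inv
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
    return int(det)

# sanity checks: a 4-cycle has 4 spanning trees, K4 has 4^2 = 16
assert spanning_trees(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 4
assert spanning_trees(4, [(a, b) for a in range(4) for b in range(a + 1, 4)]) == 16
```

Using Fraction keeps the determinant exact, so the count is an integer with no floating-point doubt even for concurrence graphs with multiple edges.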

slide-125
SLIDE 125

What about the Levi graph?

Theorem (Gaffke, 1982)

Let G and G̃ be the concurrence graph and Levi graph for a connected incomplete-block design for v treatments in b blocks of size k. Then the number of spanning trees for G̃ is equal to k^(b−v+1) times the number of spanning trees for G. So a block design is D-optimal if and only if its Levi graph maximizes the number of spanning trees. If v ≥ b + 2 it is easier to count spanning trees in the Levi graph than in the concurrence graph.

37/52

slide-129
SLIDE 129

Example 2 one last time: v = 8, b = 4, k = 3

Blocks: {1, 2, 5}, {2, 3, 6}, {3, 4, 7}, {4, 1, 8}.

[Drawings of the Levi graph and of the concurrence graph.]

Levi graph: 8 spanning trees. Concurrence graph: 216 spanning trees.

38/52
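Both counts on this slide are small enough to verify by exhaustive search over (n−1)-edge subsets, which also confirms Gaffke's k^(b−v+1) relation. This sketch is not part of the talk; the function and variable names are ours:

```python
from itertools import combinations

def spanning_trees_bruteforce(n, edges):
    """Count spanning trees of a small graph on vertices 0..n-1 by
    testing every (n-1)-edge subset: such a subset is a spanning tree
    exactly when it connects all n vertices."""
    def is_spanning_tree(sub):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = n
        for a, b in sub:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                comps -= 1
        return comps == 1
    return sum(map(is_spanning_tree, combinations(edges, n - 1)))

# Example 2, written 0-indexed
blocks = [(0, 1, 4), (1, 2, 5), (2, 3, 6), (3, 0, 7)]

# concurrence graph G: one edge per concurrent pair within a block
conc = [(a, b) for blk in blocks for i, a in enumerate(blk) for b in blk[i + 1:]]
# Levi graph: treatment vertices 0..7, block vertices 8..11
levi = [(8 + j, t) for j, blk in enumerate(blocks) for t in blk]

t_G = spanning_trees_bruteforce(8, conc)      # 216, as on the slide
t_levi = spanning_trees_bruteforce(12, levi)  # 8, as on the slide
# Gaffke: trees(Levi) = k^(b-v+1) * trees(G), i.e. trees(G) = 3^3 * trees(Levi)
assert t_G == t_levi * 3 ** (8 - 4 - 1)
```

The Levi graph here has 12 vertices and 12 edges, so it is unicyclic and its spanning-tree count is just the length of its one cycle, which is why 8 appears.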

slide-132
SLIDE 132

E-optimality: the edge-cutset lemma

A design is E-optimal if it maximizes the smallest non-trivial eigenvalue θ1 of the Laplacian L of the concurrence graph G.

Lemma

Let G have an edge-cutset of size c (a set of c edges whose removal disconnects the graph) whose removal separates the graph into components of sizes m and n. Then θ1 ≤ c(1/m + 1/n).

If c is small but m and n are both large, then θ1 is small.

39/52
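The bound itself is simple arithmetic; transcribed with exact fractions (the function name is ours, not from the talk):

```python
from fractions import Fraction

def edge_cutset_bound(c, m, n):
    """Upper bound c*(1/m + 1/n) on the smallest non-trivial Laplacian
    eigenvalue, given an edge-cutset of size c splitting the graph
    into components of sizes m and n."""
    return c * (Fraction(1, m) + Fraction(1, n))

# a single bridge between two halves of a 10-vertex graph forces theta_1 <= 2/5
assert edge_cutset_bound(1, 5, 5) == Fraction(2, 5)
```

For fixed c the bound behaves like c(1/m + 1/n) → 0 as both sides grow, which is the point of the remark on this slide.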

slide-135
SLIDE 135

E-optimality: the vertex-cutset lemma

A design is E-optimal if it maximizes the smallest non-trivial eigenvalue θ1 of the Laplacian L of the concurrence graph G.

Lemma

Let G have a vertex-cutset of size c (a set of c vertices whose removal disconnects the graph) whose removal separates the graph into components of sizes m and n, with m′ and n′ edges between them and the vertices in the cutset. Then θ1 ≤ (m′n² + n′m²)/(mn(n + m)), which is at most c if no multiple edges are involved. If m′ ≪ m and n′ ≪ n then θ1 is small.

40/52

slide-138
SLIDE 138

Minimal connectivity

If the block design is connected then bk ≥ b + v − 1. If the block design is connected and b(k − 1) = v − 1 then the Levi graph is a tree and the concurrence graph is a b-tree of k-cliques.

[Drawing: the concurrence graph of a minimally connected design, a tree of k-cliques.]

41/52
slide-141
SLIDE 141

Optimality of minimally connected designs

The Levi graph is a tree, so all connected designs are equally good under the D-criterion. The Levi graph is a tree, so effective resistance = graph distance, so the only A-optimal designs are the queen-bee designs. The concurrence graph is a b-tree of k-cliques, so the Cutset Lemmas show that the only E-optimal designs are the queen-bee designs.

42/52

slide-145
SLIDE 145

Can we use the Levi graph to find E-optimal designs?

For binary designs with equal replication, θ1(L) is a monotonic increasing function of θ1(L̃). But queen-bee designs are E-optimal under minimal connectivity, and some non-binary designs are E-optimal. For general block designs, we do not know if we can use the Levi graph to investigate E-optimality.

43/52

slide-149
SLIDE 149

Large blocks; many unreplicated treatments

Suppose that r̄ = (∑i ri)/v < 2. New conventions: blocks are rows, and block size = k + n.

[Diagram: each of the b blocks has k plots for the v core treatments and n plots for bn treatments of single replication, forming the whole design ∆.]

The whole design ∆ has v + bn treatments in b blocks of size k + n; the subdesign Γ has v core treatments in b blocks of size k; call the remaining treatments orphans.

44/52

slide-150
SLIDE 150

Levi graph: 10 + 5n treatments in 5 blocks of 4 + n plots

Blocks (rows): {1, 2, 3, 4, A1, . . . , An}, {1, 5, 6, 7, B1, . . . , Bn}, {2, 5, 8, 9, C1, . . . , Cn}, {3, 6, 8, 10, D1, . . . , Dn}, {4, 7, 9, 10, E1, . . . , En}.

[Drawing of the Levi graph: the core treatments and the block vertices, with each orphan A1, . . . , En pendant on its block vertex.]

45/52

slide-155
SLIDE 155

Pairwise resistance

[Drawing of the Levi graph from the previous slide.]

Resistance(A1, A2) = 2
Resistance(A1, B1) = 2 + Resistance(block A, block B) in Γ
Resistance(A1, 8) = 1 + Resistance(block A, 8) in Γ
Resistance(1, 8) = Resistance(1, 8) in Γ

46/52
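Pairwise resistances like these can be computed mechanically: ground one of the two vertices, solve the reduced Laplacian system for a unit current injection at the other, and read off its potential. A pure-Python sketch with exact arithmetic, not part of the talk (the names are ours):

```python
from fractions import Fraction

def effective_resistance(n, edges, u, w):
    """Effective resistance between u and w in a multigraph on vertices
    0..n-1 (unit resistance per edge): ground w, solve the reduced
    Laplacian system L_w x = e_u, and return x[u]."""
    idx = [v for v in range(n) if v != w]
    pos = {v: i for i, v in enumerate(idx)}
    m = n - 1
    M = [[Fraction(0)] * m for _ in range(m)]
    for a, b in edges:
        if a != w and b != w:
            M[pos[a]][pos[b]] -= 1
            M[pos[b]][pos[a]] -= 1
        for v in (a, b):
            if v != w:
                M[pos[v]][pos[v]] += 1
    rhs = [Fraction(0)] * m
    rhs[pos[u]] = Fraction(1)
    # Gauss-Jordan elimination (the reduced Laplacian is invertible
    # whenever the graph is connected)
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        inv = 1 / M[col][col]
        M[col] = [x * inv for x in M[col]]
        rhs[col] *= inv
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
                rhs[r] -= f * rhs[col]
    return rhs[pos[u]]

# on a tree, resistance equals graph distance (cf. Resistance(A1, A2) = 2
# through the shared block vertex)
assert effective_resistance(3, [(0, 1), (1, 2)], 0, 2) == 2
# on a 4-cycle, adjacent vertices: 1 ohm in parallel with 3 ohms
assert effective_resistance(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 1) == Fraction(3, 4)
```

Exact fractions matter here because the A-criterion sums many such resistances and floating-point drift would obscure ties between designs.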

slide-157
SLIDE 157

Sum of the pairwise variances

Theorem (cf Herzberg and Jarrett, 2007)

The sum of the variances of treatment differences in ∆ = constant + V1 + nV3 + n²V2, where
V1 = the sum of the variances of treatment differences in Γ,
V2 = the sum of the variances of block differences in Γ,
V3 = the sum of the variances of sums of one treatment and one block in Γ.

(If Γ is equi-replicate then V2 and V3 are both increasing functions of V1.)

Consequence

For a given choice of k, make Γ as efficient as possible.

47/52

slide-158
SLIDE 158

A less obvious consequence

Consequence

If n or b is large, and we want an A-optimal design, it may be best to make Γ a complete block design for k′ controls, even if there is no interest in comparisons between new treatments and controls, or between controls.

48/52

slide-162
SLIDE 162

Spanning trees

A spanning tree for the Levi graph is a collection of edges which provides a unique path between every pair of vertices.

[Drawing: a spanning tree of the Levi graph; each orphan is joined to its block vertex by its only edge.]

The orphans make no difference to the number of spanning trees for the Levi graph.

49/52

slide-164
SLIDE 164

D-optimality under very low replication

Consequence

The whole design ∆ is D-optimal for v + bn treatments in b blocks of size k + n if and only if the core design Γ is D-optimal for v treatments in b blocks of size k.

Consequence

Even when n or b is large, D-optimal designs do not include uninteresting controls.

50/52

slide-167
SLIDE 167

Conjectures

Conjecture (Underpinned by theoretical work by C.-S. Cheng)

If the A-optimal design is very different from the D-optimal design, then the E-optimal design is (almost) the same as the A-optimal design.

Conjecture (Underpinned by theoretical work by C.-S. Cheng)

If the connectivity is more than minimal, then all D-optimal designs have (almost) equal replication.

Conjecture (Underpinned by theoretical work by J. R. Johnson and M. Walters)

If r̄ > 3.5 then designs optimal under one criterion are (almost) optimal under the other criteria.

51/52

slide-168
SLIDE 168

Main References

◮ R. A. Bailey and Peter J. Cameron: Combinatorics of optimal designs. In Surveys in Combinatorics 2009 (eds. S. Huczynska, J. D. Mitchell and C. M. Roney-Dougal), London Mathematical Society Lecture Note Series, 365, Cambridge University Press, Cambridge, 2009, pp. 19–73.

◮ R. A. Bailey and Peter J. Cameron: Using graphs to find the best block designs. In Topics in Structural Graph Theory (eds. L. W. Beineke and R. J. Wilson), Cambridge University Press, Cambridge, 2013, pp. 282–317.

52/52