Using graphs and Laplacian eigenvalues to evaluate block designs


R. A. Bailey, University of St Andrews / QMUL (emerita)
Ongoing joint work with Peter J. Cameron
Modern Trends in Algebraic Graph Theory, Villanova, June 2014


Concurrence graph

The concurrence graph G of a block design ∆ has
- one vertex for each treatment,
- one edge for each unordered pair α, ω with α ≠ ω, g(α) = g(ω) (α and ω in the same block) and f(α) ≠ f(ω): this edge joins vertices f(α) and f(ω).

There are no loops. (Here f(α) is the treatment on experimental unit α, g(α) is the block containing α, and n_is is the number of experimental units in block s that receive treatment i.)

If i ≠ j, then the number of edges between vertices i and j is

  λ_ij = ∑_{s=1}^{b} n_is n_js;

this is called the concurrence of i and j, and it is the (i, j)-entry of Λ = NN⊤, where N = (n_is) is the v × b incidence matrix.
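A concrete illustration (my own sketch, not part of the slides): once the incidence matrix N is set up, the concurrence matrix is one line of NumPy. The block data here is Example 2 from later in the talk.

```python
import numpy as np

# Blocks of Example 2 (v = 8, b = 4, k = 3), reading the columns of the
# design array as blocks: {1,2,5}, {2,3,6}, {3,4,7}, {4,1,8}.
blocks = [(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]
v, b = 8, len(blocks)

# Incidence matrix: n_is = number of units in block s with treatment i.
N = np.zeros((v, b), dtype=int)
for s, block in enumerate(blocks):
    for i in block:
        N[i - 1, s] += 1

# Concurrence matrix: Lambda = N N^T, so lambda_ij = sum_s n_is n_js.
Lam = N @ N.T
print(Lam)  # off-diagonal entry (i, j): number of edges between i and j
```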

Example 1: v = 4, b = k = 3

Blocks (as columns):

  1 2 1
  3 3 2
  4 4 2

[Figure: the Levi graph and the concurrence graph of this design.]

Comparing the two graphs:
- Levi graph: can recover the design; more vertices; more edges if k = 2.
- Concurrence graph: may have more symmetry; more edges if k ≥ 4.

Example 2: v = 8, b = 4, k = 3

Blocks (as columns):

  1 2 3 4
  2 3 4 1
  5 6 7 8

[Figure: the Levi graph and the concurrence graph of this design.]

Example 3: v = 15, b = 7, k = 3

Two designs with these parameters, blocks as columns:

  1 1 2 3  4  5  6        1 1 1 1  1  1  1
  2 4 5 6 10 11 12        2 4 6 8 10 12 14
  3 7 8 9 13 14 15        3 5 7 9 11 13 15

[Figure: the concurrence graphs of the two designs.]

Laplacian matrices

The Laplacian matrix L of the concurrence graph G is a v × v matrix with (i, j)-entry as follows:
- if i ≠ j then L_ij = −(number of edges between i and j) = −λ_ij;
- L_ii = valency of i = ∑_{j≠i} λ_ij.

The Laplacian matrix L̃ of the Levi graph G̃ is a (v + b) × (v + b) matrix with (i, j)-entry as follows:
- L̃_ii = valency of i = k if i is a block, or the replication r_i of i if i is a treatment;
- if i ≠ j then L̃_ij = −(number of edges between i and j), which is
    0 if i and j are both treatments,
    0 if i and j are both blocks,
    −n_ij if i is a treatment and j is a block, or vice versa.
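Continuing the sketch above, both Laplacians can be assembled directly from N; the helper name `laplacians` is mine, not the talk's.

```python
import numpy as np

def laplacians(N):
    """Laplacian of the concurrence graph (v x v) and of the Levi graph
    ((v+b) x (v+b)), for a design with v x b incidence matrix N."""
    v, b = N.shape
    Lam = N @ N.T
    A = Lam - np.diag(np.diag(Lam))  # adjacency of G: off-diagonal lambda_ij
    L = np.diag(A.sum(axis=1)) - A   # diagonal entries = valencies

    r = N.sum(axis=1)    # replication r_i = valency of treatment vertex i
    ksz = N.sum(axis=0)  # block sizes = valencies of block vertices (all k)
    # Levi graph: treatment vertices first, then block vertices;
    # treatment i and block s are joined by n_is edges.
    L_levi = np.block([[np.diag(r), -N], [-N.T, np.diag(ksz)]])
    return L, L_levi
```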

Connectivity

All row-sums of L and of L̃ are zero, so both matrices have 0 as an eigenvalue on the appropriate all-1 vector.

Theorem. The following are equivalent:
1. 0 is a simple eigenvalue of L;
2. G is a connected graph;
3. G̃ is a connected graph;
4. 0 is a simple eigenvalue of L̃;
5. the design ∆ is connected, in the sense that all differences between treatments can be estimated.

From now on, assume connectivity. Call the remaining eigenvalues non-trivial. They are all non-negative.
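Numerically, condition 1 of the theorem gives a simple connectivity test; a minimal sketch (mine), with the usual floating-point tolerance caveat:

```python
import numpy as np

def is_connected(L, tol=1e-9):
    """True iff 0 is a simple eigenvalue of the Laplacian L."""
    eigs = np.linalg.eigvalsh(L)  # L is symmetric: real eigenvalues, ascending
    return np.count_nonzero(np.abs(eigs) < tol) == 1

# If this returns True, eigs[1:] are the non-trivial eigenvalues
# theta_1 <= ... <= theta_{v-1}, all positive.
```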

Generalized inverse

Under the assumption of connectivity, the Moore–Penrose generalized inverse L⁻ of L is defined by

  L⁻ = (L + (1/v) J_v)⁻¹ − (1/v) J_v,

where J_v is the v × v all-1 matrix. (The matrix (1/v) J_v is the orthogonal projector onto the null space of L.)

The Moore–Penrose generalized inverse L̃⁻ of L̃ is defined similarly.
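The displayed formula translates directly into code. A sketch (mine); for a connected Laplacian it agrees with NumPy's general-purpose pseudoinverse `np.linalg.pinv`:

```python
import numpy as np

def gen_inverse(L):
    """Moore-Penrose generalized inverse via (L + J/v)^{-1} - J/v."""
    v = L.shape[0]
    J = np.ones((v, v)) / v  # (1/v) J_v: projector onto the null space of L
    return np.linalg.inv(L + J) - J

# Check, for any connected design's Laplacian L:
# np.allclose(gen_inverse(L), np.linalg.pinv(L))  -> True
```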

Estimation

We measure the response Y_ω on each experimental unit ω. If experimental unit ω has treatment i and is in block m (that is, f(ω) = i and g(ω) = m), then we assume that

  Y_ω = τ_i + β_m + random noise.

We will do an experiment, collect data y_ω on each experimental unit ω, and then estimate certain functions of the treatment parameters using functions of the data.

We want to estimate contrasts ∑_i x_i τ_i with ∑_i x_i = 0. In particular, we want to estimate all the simple differences τ_i − τ_j.
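To make the model concrete, here is a small simulation sketch (entirely mine, with made-up effect sizes): build one model row per experimental unit of Example 2, fit by least squares, and read off a simple difference. The model matrix is rank-deficient, but contrasts are estimable, so every least-squares solution gives them the same value.

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = [(1, 2, 5), (2, 3, 6), (3, 4, 7), (4, 1, 8)]  # Example 2
v, b = 8, len(blocks)
tau = rng.normal(size=v)   # hypothetical treatment effects
beta = rng.normal(size=b)  # hypothetical block effects

# One row per experimental unit omega: Y = tau_f(omega) + beta_g(omega) + noise.
rows, y = [], []
for m, block in enumerate(blocks):
    for i in block:
        x = np.zeros(v + b)
        x[i - 1] = 1.0  # treatment indicator, f(omega) = i
        x[v + m] = 1.0  # block indicator, g(omega) = m
        rows.append(x)
        y.append(tau[i - 1] + beta[m] + rng.normal(scale=0.1))

coef = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)[0]
print(coef[0] - coef[1], tau[0] - tau[1])  # estimated vs true tau_1 - tau_2
```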

Variance: why does it matter?

We want to estimate all the simple differences τ_i − τ_j. Put V_ij = variance of the best linear unbiased estimator for τ_i − τ_j.

The length of the 95% confidence interval for τ_i − τ_j is proportional to √V_ij. (If we always present results using a 95% confidence interval, then our interval will contain the true value in 19 cases out of 20.)

The smaller the value of V_ij, the smaller the confidence interval, the closer the estimate is to the true value (on average), and the more likely we are to detect correctly which of τ_i and τ_j is bigger.

We can make better decisions about new drugs, about new varieties of wheat, about new engineering materials, ... if we make all the V_ij small.

How do we calculate variance?

Theorem. Assume that all the noise is independent, with variance σ². If ∑_i x_i = 0, then the variance of the best linear unbiased estimator of ∑_i x_i τ_i is equal to (x⊤ L⁻ x) kσ². In particular, the variance of the best linear unbiased estimator of the simple difference τ_i − τ_j is

  V_ij = (L⁻_ii + L⁻_jj − 2 L⁻_ij) kσ².

(This follows from the assumption Y_ω = τ_i + β_m + random noise, by standard theory of linear models.)
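In code (a sketch of mine), the theorem gives all v(v−1)/2 pairwise variances at once:

```python
import numpy as np

def pairwise_variances(L, k):
    """Matrix of V_ij = (L-_ii + L-_jj - 2 L-_ij) k, in units of sigma^2."""
    Lm = np.linalg.pinv(L)  # or gen_inverse(L) from the sketch above
    d = np.diag(Lm)
    return (d[:, None] + d[None, :] - 2 * Lm) * k
```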

...Or we can use the Levi graph

Theorem. The variance of the best linear unbiased estimator of the simple difference τ_i − τ_j is

  V_ij = (L̃⁻_ii + L̃⁻_jj − 2 L̃⁻_ij) σ².

(Or of β_i − β_j, appropriately labelled.)

(This also follows from the assumption Y_ω = τ_i + β_m + random noise, by standard theory of linear models.)

How do we calculate these generalized inverses?

We need L⁻ or L̃⁻.
- Add a suitable multiple of J, use GAP to find the inverse with exact rational coefficients, then subtract that multiple of J.
- If the matrix is highly patterned, guess the eigenspaces, then invert each non-zero eigenvalue.
- Direct use of the graph: coming up.

Not all of these methods are suitable for generic designs with a variable number of treatments.

Electrical networks

We can consider the concurrence graph G as an electrical network with a 1-ohm resistance in each edge. Connect a 1-volt battery between vertices i and j. Current flows in the network, according to these rules:
1. Ohm's Law: in every edge, voltage drop = current × resistance = current.
2. Kirchhoff's Voltage Law: the total voltage drop from one vertex to any other vertex is the same no matter which path we take from one to the other.
3. Kirchhoff's Current Law: at every vertex which is not connected to the battery, the total current coming in is equal to the total current going out.

Find the total current I from i to j, then use Ohm's Law to define the effective resistance R_ij between i and j as 1/I.
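The three rules amount to a linear system in the vertex potentials φ: injecting one unit of current at i and extracting it at j means Lφ = e_i − e_j, and then R_ij = φ_i − φ_j. A sketch (mine; the least-squares solve handles the singular L):

```python
import numpy as np

def effective_resistance(L, i, j):
    """Effective resistance between vertices i and j (0-indexed)."""
    current = np.zeros(L.shape[0])
    current[i], current[j] = 1.0, -1.0  # unit current in at i, out at j
    phi, *_ = np.linalg.lstsq(L, current, rcond=None)  # vertex potentials
    return phi[i] - phi[j]              # R_ij, by Ohm's Law
```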

Electrical networks: variance

Theorem. The effective resistance R_ij between vertices i and j in G is

  R_ij = L⁻_ii + L⁻_jj − 2 L⁻_ij.

So V_ij = R_ij × kσ².

Effective resistances are easy to calculate without matrix inversion if the graph is sparse.

Example 2 calculation: v = 8, b = 4, k = 3

[Figure: the concurrence graph of Example 2, with a battery connected between vertices 4 and 6. The edge currents and the vertex potentials (in square brackets) are found step by step from Kirchhoff's laws; the potentials run from 0 at vertex 4 to 23 at vertex 6, and the total current is I = 24.]

So R = 23/24, and V = (23/24) kσ².
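A numerical check of this worked example, chaining the earlier sketches (`laplacians` and `effective_resistance`, with N the incidence matrix of Example 2 built above):

```python
L, L_levi = laplacians(N)
print(effective_resistance(L, 3, 5))  # vertices 4 and 6: 0.95833... = 23/24
# Hence V = (23/24) * k * sigma^2 = (23/8) * sigma^2, since k = 3.
```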

...Or we can use the Levi graph

If i and j are treatment vertices in the Levi graph G̃, and R̃_ij is the effective resistance between them in G̃, then V_ij = R̃_ij × σ².

Example 2 yet again: v = 8, b = 4, k = 3

Blocks (as columns):

  1 2 3 4
  2 3 4 1
  5 6 7 8

[Figure: the Levi graph of Example 2 alongside the concurrence graph, with a battery connected between the same two treatment vertices as before. Tracing the currents through the block vertices gives potentials running from 0 to 23, with total current I = 8.]

So R̃ = 23/8, and V = (23/8) σ²: the same value as from the concurrence graph, since (23/24) × 3σ² = (23/8) σ².
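The matching check on the Levi graph, where the treatment vertices occupy the first v = 8 coordinates in the `laplacians` sketch:

```python
print(effective_resistance(L_levi, 3, 5))  # treatments 4 and 6: 2.875 = 23/8
# V = (23/8) * sigma^2, the same value as via the concurrence graph.
```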

Optimality: average pairwise variance

The variance of the best linear unbiased estimator of the simple difference τ_i − τ_j is

  V_ij = (L⁻_ii + L⁻_jj − 2 L⁻_ij) kσ² = R_ij kσ².

We want all of the V_ij to be small. Put V̄ = average value of the V_ij. Then

  V̄ = 2kσ² Tr(L⁻) / (v − 1) = 2kσ² × 1 / (harmonic mean of θ_1, ..., θ_{v−1}),

where θ_1, ..., θ_{v−1} are the non-trivial eigenvalues of L.
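A final sketch (mine) computing V̄ for Example 2 both ways, from the pairwise variances and from the harmonic mean of the non-trivial eigenvalues; both print the same number (in units of σ²):

```python
import numpy as np

k, v = 3, L.shape[0]
V = pairwise_variances(L, k)               # from the earlier sketch
print(V[np.triu_indices(v, 1)].mean())     # average of the v(v-1)/2 values

theta = np.linalg.eigvalsh(L)[1:]          # non-trivial eigenvalues of L
print(2 * k * np.sum(1 / theta) / (v - 1)) # 2k Tr(L^-)/(v-1): same number
```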
