
Simple orthogonal block structures, nesting and marginality

R. A. Bailey, University of St Andrews / QMUL (emerita)
John Nelder Workshop in Methodological Statistics, Imperial College London, 28 March 2015


1. Combining two factors or pre-factors

If A and B are two factors then their infimum A ∧ B is the factor whose levels are all combinations of levels of A and B that occur:

(A ∧ B)(ω) = (A(ω), B(ω)).

Other notations: A.B or A:B.
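As a concrete illustration (not from the slides), here is a minimal Python sketch of the infimum; the function name `infimum` and the data layout (a factor is a list assigning a level to each unit) are choices made for this example:

```python
def infimum(A, B):
    """Infimum A ∧ B of two factors, given as equal-length lists
    assigning a level to each experimental unit: the level of
    A ∧ B on unit ω is the pair (A(ω), B(ω))."""
    return [(a, b) for a, b in zip(A, B)]

# Example: 6 units, A with 2 levels, B with 3 levels, all combinations occurring.
A = [1, 1, 1, 2, 2, 2]
B = [1, 2, 3, 1, 2, 3]
AB = infimum(A, B)
print(sorted(set(AB)))   # the 6 levels of A ∧ B that occur
```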

2. Crossing and nesting

Crossing: 3 R crossed with 8 C, formula (3 R) ∗ (8 C).
Experimental units: {1, 2, 3} × {1, 2, 3, 4, 5, 6, 7, 8}.
Factors: U with one level; R with 3 levels (1st coordinate); C with 8 levels (2nd coordinate); R ∧ C with 24 levels.

Nesting: 4 P nested in 5 B, formula (5 B)/(4 P).
Experimental units: {1, 2, 3, 4, 5} × {1, 2, 3, 4}.
Factors: U with one level; B with 5 levels (1st coordinate); B ∧ P with 20 levels. (P on its own is not a factor here: its levels are only meaningful within a level of B.)

3. From crossing and nesting to simple orthogonal block structures

The key ingredient of John Nelder's 1965 paper on 'Block structure and the null analysis of variance' was to realise that crossing and nesting could be iterated (maybe with some steps of each sort). He developed an almost-complete theory, notation and algorithms based on this. He called the resulting sets of experimental units with their factor lists simple orthogonal block structures.

4. Factors and refinement

If B and Q are factors on the same set, write Q ≺ B to indicate that Q is finer than B. Write Q ≼ B to mean that either Q ≺ B or Q = B.

Refinement is another partial order, because
- F ≼ F for all factors F;
- if F ≼ G and G ≼ F then F = G;
- if F ≼ G and G ≼ H then F ≼ H.

(For simplicity here, I am ignoring the possibility of aliasing.)

So we can show factors on a Hasse diagram too!
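The refinement check is easy to automate. A minimal sketch (my own; the name `is_finer_or_equal` is not from the talk): F ≼ G exactly when the level of G is constant on each class of F.

```python
def is_finer_or_equal(F, G):
    """Return True if F ≼ G: every class of F lies inside a class of G,
    i.e. the level of G is constant on each level of F."""
    level_map = {}
    for f, g in zip(F, G):
        if level_map.setdefault(f, g) != g:
            return False
    return True

B = [1, 1, 1, 1, 2, 2, 2, 2]
Q = [1, 2, 3, 4, 5, 6, 7, 8]  # Q splits every class of B
print(is_finer_or_equal(Q, B))  # True: Q ≼ B
print(is_finer_or_equal(B, Q))  # False
```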

5. Crossing

(3 R) ∗ (8 C): R and C are incomparable in the pre-factor poset.

[Hasse diagram for factors: U (1) at the top; R (3) and C (8) below it, incomparable; R ∧ C (24) at the bottom.]

6. Nesting

(5 B)/(4 P): P is below B in the pre-factor poset.

[Hasse diagram for factors is a chain: U (1) at the top, B (5) below it, B ∧ P (20) at the bottom.]

7. Iteration: (20 Athletes) ∗ ((2 Sessions)/(4 Runs))

[Hasse diagram for factors: U (1) at the top; A (20) and S (2) below it; A ∧ S (40) below both A and S, and S ∧ R (8) below S; A ∧ S ∧ R (160) at the bottom.]
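A small sketch (mine, not from the talk) that builds this structure with itertools and counts the levels of each factor; the factor names follow the diagram above:

```python
from itertools import product

# Units of (20 Athletes) * ((2 Sessions)/(4 Runs)): triples (a, s, r).
units = list(product(range(20), range(2), range(4)))

# Each factor maps a unit to its level; R alone is NOT a factor,
# because runs are nested in sessions.
factors = {
    "U":     lambda u: 0,
    "A":     lambda u: u[0],
    "S":     lambda u: u[1],
    "A∧S":   lambda u: (u[0], u[1]),
    "S∧R":   lambda u: (u[1], u[2]),
    "A∧S∧R": lambda u: u,
}
for name, f in factors.items():
    print(name, len({f(u) for u in units}))
# U 1, A 20, S 2, A∧S 40, S∧R 8, A∧S∧R 160
```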

8. Start with the first poset

Terry Speed and I found that you can start with the nesting poset and use it to directly construct the set Ω of experimental units and its factors. Given pre-factors P_1, ..., P_m with n_1, ..., n_m levels, and a nesting relation ⊏, put

Ω = Ω_1 × Ω_2 × ⋯ × Ω_m, where Ω_i = {1, 2, ..., n_i}.

If A is any subset of {1, 2, ..., m} satisfying

  if i ∈ A and P_i ⊏ P_j then j ∈ A,

then include the factor ⋀_{i∈A} P_i.
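A hedged sketch of this construction: the nesting relation is represented as a set of pairs (i, j) meaning P_i ⊏ P_j, the closed subsets A are enumerated by brute force, and each yields the factor ⋀_{i∈A} P_i. Names and representation are my own.

```python
from itertools import product, combinations

def poset_block_structure(sizes, nests):
    """sizes[i] = n_i; nests = set of pairs (i, j) with P_i ⊏ P_j.
    For each closed subset A, return the factor ⋀_{i∈A} P_i as a
    map from units (tuples) to levels (sub-tuples)."""
    m = len(sizes)
    units = list(product(*[range(n) for n in sizes]))
    factors = {}
    for r in range(m + 1):
        for A in combinations(range(m), r):
            # A must be closed: i ∈ A and P_i ⊏ P_j forces j ∈ A.
            if all(j in A for i in A for (i2, j) in nests if i2 == i):
                factors[A] = {u: tuple(u[i] for i in A) for u in units}
    return factors

# Nesting poset of (5 B)/(4 P): P (index 1) is below B (index 0).
fs = poset_block_structure([5, 4], {(1, 0)})
for A, f in fs.items():
    print(A, len(set(f.values())))
# (): 1 = U;  (0,): 5 = B;  (0, 1): 20 = B∧P.  Note (1,) alone is excluded.
```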

9. Poset block structures

These poset block structures have all John Nelder's properties, even when the first poset cannot be made by iterated crossing and nesting.

Example: 4 Weeks crossed with 2 Labs; 6 Technicians nested in Labs; 10 Samples nested in both Weeks and Labs (a poset that cannot be made by iterated crossing and nesting).

[Hasse diagram for factors: U (1) at the top; W (4) and L (2); W ∧ L (8) and L ∧ T (12); W ∧ L ∧ T (48) and W ∧ L ∧ S (80); W ∧ L ∧ S ∧ T (480) at the bottom.]

10. Too successful

John Nelder's theory of simple orthogonal block structures, and the ensuing algorithms developed with Graham Wilkinson, have been enormously successful, but perhaps too much so.

B: 1 1 1 1  2 2 2 2  3 3 3 3  4 4 4 4  5 5 5 5
P: 1 2 3 4  1 2 3 4  1 2 3 4  1 2 3 4  1 2 3 4
Q: 1 2 3 4  5 6 7 8  9 10 11 12  13 14 15 16  17 18 19 20

As factors, B ∧ P = Q = B ∧ Q, but does your software think so?
Some software cannot detect that Q ≺ B, because B is not in the name of Q.
Some software thinks that B ∧ Q has 100 levels, and tries to make 100 × 100 matrices to deal with this.
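The point is easy to check numerically. A sketch (my own) contrasting the naive level count (product of the numbers of levels) with the number of combinations that actually occur:

```python
B = [b for b in range(1, 6) for _ in range(4)]   # 1 1 1 1 2 2 2 2 ...
P = [p for _ in range(5) for p in range(1, 5)]   # 1 2 3 4 1 2 3 4 ...
Q = list(range(1, 21))                           # 1 2 ... 20

naive    = len(set(B)) * len(set(Q))             # 5 * 20 = 100 "levels"
observed = len(set(zip(B, Q)))                   # 20 levels actually occur
print(naive, observed)                           # 100 20

# B ∧ P and Q induce the same partition of the 20 units:
print(len(set(zip(B, P))) == len(set(Q)) == 20)  # True
```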

11. Other orthogonal block structures

There are still other collections of mutually orthogonal factors which obey most of the theory but do not come from pre-factors. For example, the Rows (R), Columns (C) and Letters (L) of a 7 × 7 Latin square give the following.

[Hasse diagram: U (1) at the top; R (7), C (7) and L (7) below it; at the bottom, R ∧ C = R ∧ L = C ∧ L (49).]

12. Combining two factors: II

If A and B are factors then their infimum A ∧ B satisfies:
- A ∧ B is finer than A, and A ∧ B is finer than B;
- if any other factor is finer than A and finer than B, then it is finer than A ∧ B.

The supremum A ∨ B of factors A and B is defined to satisfy:
- A is finer than A ∨ B, and B is finer than A ∨ B;
- if there is any other factor C with A finer than C and B finer than C, then A ∨ B is finer than C.

Each level of the factor A ∨ B combines levels of A and also combines levels of B, and has replication as small as possible subject to this.

I claim that the supremum is even more important than the infimum in designed experiments and data analysis.

13. Factorial treatments plus control

              Chemical
  Dose    Z   N   S   K   M
   0      ✓
   1          ✓   ✓   ✓   ✓
   2          ✓   ✓   ✓   ✓

Dose ∨ Chemical = Fumigant, which is the two-level factor distinguishing zero treatment from the rest.

If you do not fit Fumigant, its effect will be included in whichever of Dose and Chemical you fit first.
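The supremum can be computed as the connected components of the bipartite graph joining each unit's A-level to its B-level. A minimal union-find sketch of that idea (my own implementation, not from the talk), run on the dose–chemical table above:

```python
def supremum(A, B):
    """Classes of A ∨ B = connected components of the bipartite graph
    linking each unit's A-level to its B-level."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in zip(A, B):
        parent[find(("A", a))] = find(("B", b))
    return [find(("A", a)) for a in A]      # level of A ∨ B on each unit

# The 9 treatments: dose 0 only with chemical Z; doses 1, 2 with N, S, K, M.
Dose = [0, 1, 1, 1, 1, 2, 2, 2, 2]
Chem = ["Z", "N", "S", "K", "M", "N", "S", "K", "M"]
print(len(set(supremum(Dose, Chem))))   # 2 -- Fumigant: control vs the rest
```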

14. Hasse diagram including supremum

[Hasse diagram: U (1) at the top; F (2) below it; D (3) and C (5) below F; D ∧ C (9) at the bottom.]

With F included, all the usual nice results apply.

Heiko Großmann's software includes suprema (as well as checking which factors are finer than which others). Does yours?

15. Linear model for two factors

Given two treatment factors A and B, the linear model for the response Y_ω on unit ω is often written as follows. If A(ω) = i and B(ω) = j then

Y_ω = µ + α_i + β_j + γ_ij + ε_ω,

where the ε_ω are random variables with zero means and a covariance matrix whose eigenspaces we know.

Some authors: "Too many parameters! Let's impose constraints."
(a) ∑_i α_i = 0, and so on;
(b) ∑_i r_i α_i = 0, where r_i = |{ω : A(ω) = i}|, and so on.

16. Linear model with constraints: bad consequences

Y_ω = µ + α_i + β_j + γ_ij + ε_ω
(a) ∑_i α_i = 0, and so on;
(b) ∑_i r_i α_i = 0, where r_i = |{ω : A(ω) = i}|, and so on.

- It is too easy to give all parameters the same status, and then the conclusions "β_j = 0 for all j" and "γ_ij = 0 for all i and j" are comparable.
- If some parameters are, after testing, deemed to be zero, the estimated values of the others may not give the vector of fitted values. For example, if both main effects and the interaction are deemed to be zero, then µ̂ under constraint (a) is not the fitted overall mean if replications are unequal.

Popular software allows both of these.
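A quick numerical illustration (my own, reduced to a single factor for brevity) of the second point: under constraint (a), µ̂ in the full model is the unweighted average of the cell means, which differs from the grand mean when the cells have unequal replication.

```python
import numpy as np

# Two cells with unequal replication: 4 units vs 1 unit.
cells = {1: [10.0, 10.0, 10.0, 10.0], 2: [0.0]}
y = np.concatenate([np.array(v) for v in cells.values()])

cell_means = np.array([np.mean(v) for v in cells.values()])
mu_hat_a = cell_means.mean()   # constraint (a): unweighted mean of cell means
grand    = y.mean()            # fitted overall mean if all effects are zero

print(mu_hat_a, grand)         # 5.0 vs 8.0 -- they disagree
```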

17. Say goodbye to linear models with constraints

[The model Y_ω = µ + α_i + β_j + γ_ij + ε_ω, together with constraints (a) ∑_i α_i = 0 and (b) ∑_i r_i α_i = 0, shown struck through.]

18. JAN's approach to such linear models

Y_ω = µ + α_i + β_j + γ_ij + ε_ω

John Nelder had a rant about the constraints on parameters in his 1977 paper 'A reformulation of linear models', and in various later papers too. Essentially he said:
- if γ_ij = 0 for all i and j, then the model simplifies to Y_ω = µ + α_i + β_j + ε_ω, so that the expectation of Y lies in a subspace of dimension at most n + m − 1, where n and m are the numbers of levels of A and B;
- if β_j = 0 for all j, then the model does not simplify at all.

(I read this in one of his papers, but could not find it again when preparing these slides.)
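The second bullet can be checked with ranks. A small numpy sketch (mine): with unconstrained parameters, the column space of [1, X_A, X_{A∧B}] equals that of the full [1, X_A, X_B, X_{A∧B}], so setting all β_j to zero changes nothing, whereas dropping the γ_ij does shrink the space.

```python
import numpy as np

n, m = 3, 4
A, B = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
A, B = A.ravel(), B.ravel()              # one unit per (i, j) combination

one = np.ones((n * m, 1))
XA  = np.eye(n)[A]                       # indicators of A's levels
XB  = np.eye(m)[B]                       # indicators of B's levels
XAB = np.eye(n * m)[A * m + B]           # indicators of A ∧ B's levels

full     = np.hstack([one, XA, XB, XAB])
no_beta  = np.hstack([one, XA, XAB])
no_gamma = np.hstack([one, XA, XB])

print(np.linalg.matrix_rank(full), np.linalg.matrix_rank(no_beta))  # 12 12
print(np.linalg.matrix_rank(no_gamma))                               # 6 = n+m-1
```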

19. RAB's approach to such linear models

Y_ω = µ + α_i + β_j + γ_ij + ε_ω

This equation is a short-hand for saying that there are FIVE subspaces which we might suppose to contain the vector E(Y). Let us parametrize these subspaces separately, and consider the relationships between them.

This is the approach which I always use in teaching and in consulting, and in my 2008 book.

20. Expectation subspaces

E(Y) ∈ V_A ⟺ there are constants α_i such that E(Y_ω) = α_i whenever A(ω) = i. dim(V_A) = number of levels of A.

E(Y) ∈ V_B ⟺ there are constants β_j such that E(Y_ω) = β_j whenever B(ω) = j.

E(Y) ∈ V_U ⟺ there is a constant µ such that E(Y_ω) = µ for all ω.

E(Y) ∈ V_A + V_B ⟺ there are constants θ_i and φ_j such that E(Y_ω) = θ_i + φ_j if A(ω) = i and B(ω) = j.

E(Y) ∈ V_{A∧B} ⟺ there are constants γ_ij such that E(Y_ω) = γ_ij if A(ω) = i and B(ω) = j.

21. Dimensions

For general factors A and B:

dim(V_A + V_B) = dim(V_A) + dim(V_B) − dim(V_A ∩ V_B).

If all combinations of levels of A and B occur, then V_A ∩ V_B = V_U, which has dimension 1, so dim(V_A + V_B) = dim(V_A) + dim(V_B) − 1.
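A sketch (my own) checking the formula by computing ranks of indicator matrices. With the dose–chemical pattern from earlier, not all combinations occur, and dim(V_D ∩ V_C) comes out as 2 rather than 1; this matches the standard fact that V_A ∩ V_B = V_{A∨B}, and Dose ∨ Chemical (Fumigant) has two levels.

```python
import numpy as np

def indicator(levels):
    """Columns are indicators of the factor's classes; their span is V_F."""
    uniq = sorted(set(levels))
    return np.array([[lv == u for u in uniq] for lv in levels], dtype=float)

Dose = [0, 1, 1, 1, 1, 2, 2, 2, 2]
Chem = ["Z", "N", "S", "K", "M", "N", "S", "K", "M"]

VD, VC = indicator(Dose), indicator(Chem)
d_sum = np.linalg.matrix_rank(np.hstack([VD, VC]))   # dim(V_D + V_C)
d_int = VD.shape[1] + VC.shape[1] - d_sum            # dim(V_D ∩ V_C)
print(d_sum, d_int)   # 6 and 2: the intersection is V_{D∨C}, not V_U
```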

22. Another partial order; another Hasse diagram

The relation "is contained in" gives a partial order on the subspaces of a vector space. So we can use a Hasse diagram to show the subspaces being considered to model the expectation of Y. Now it is helpful to show the dimension of each subspace on the diagram.

23. Hasse diagram for model subspaces

[Hasse diagram: V_{A∧B} (dimension nm; the full model) at the top; V_A + V_B (n + m − 1; the additive model) below it; then V_A (n) and V_B (m; "only factor B makes any difference"); V_U (1; the null model) at the bottom.]

For complicated families of models, non-mathematicians may find the Hasse diagram easier to understand than the equations.

24. Diagram from a paper in Global Change Biology

[Hasse diagram of a lattice of models, with dimensions, from Composition × Temp. (45) at the top down to Constant (1) at the bottom, via intermediate models such as Composition + Richness + Temp. + Type × Temp. (29), Type + Temp. (6) and Temp. (3); the edges are labelled a–h.]

25. Main effects and interaction

[The same Hasse diagram of model subspaces: V_{A∧B} (nm); V_A + V_B (n + m − 1); V_A (n) and V_B (m); V_U (1).]

The vector of fitted values in V_U has the grand mean in every coordinate.
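A one-line check (mine) that the least-squares fit in V_U is the grand mean:

```python
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])
ones = np.ones((4, 1))
fit, *_ = np.linalg.lstsq(ones, y, rcond=None)
print(fit[0], y.mean())   # both 6.0: the fitted vector in V_U is the grand mean
```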
