Undesirable Optimality Results in Multiple Testing?
Charles Lewis, Dorothy T. Thayer


  1. Undesirable Optimality Results in Multiple Testing? Charles Lewis, Dorothy T. Thayer

  2. Intuitions about multiple testing: - Multiple tests should be more conservative than individual tests. - Controlling the per comparison error rate is not enough. Need control of a familywise error rate or, better, the FDR.

  3. Multiple testing for multilevel models: applying Bayesian ideas in a sampling theory context. Examples: Shaffer (1999), Gelman & Tuerlinckx (2000), Lewis & Thayer (2004), and Sarkar & Zhou (2008).

  4. One-way random effects ANOVA setup: $\bar X_i \mid \mu_i \sim N(\mu_i, \sigma^2/n)$ independently for $i = 1, \dots, m$, with $\mu_i \sim N(\mu, \tau^2)$. (Treat $\sigma^2$ and $\tau^2$ as known.)
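Under this normal-normal formulation (reconstructed here, since the slide's formulas are not reproduced in the transcript), the posterior of each group mean is available in closed form. A minimal Python sketch, with illustrative function and variable names:

```python
import math

def posterior_mu(xbar_i, mu, tau2, sigma2, n):
    """Posterior of a group mean mu_i in the one-way random effects model
    xbar_i | mu_i ~ N(mu_i, sigma2/n),  mu_i ~ N(mu, tau2),
    with mu, sigma2, and tau2 treated as known."""
    w = tau2 / (tau2 + sigma2 / n)       # shrinkage weight on the data
    post_mean = w * xbar_i + (1 - w) * mu
    post_var = w * (sigma2 / n)          # = 1 / (1/tau2 + n/sigma2)
    return post_mean, post_var
```

Because the groups are independent, the posterior of a pairwise difference follows by differencing posterior means and summing posterior variances.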

  5. Consider all pairwise comparisons $\delta_{ij} = \mu_i - \mu_j$ for $1 \le i < j \le m$.

  6. Decision theory framework (based on early work of Lehmann). For each comparison $\delta_{ij}$, take one of three actions: $d_+$: declare $\delta_{ij}$ to be positive; $d_-$: declare $\delta_{ij}$ to be negative; $d_0$: unable to determine the sign of $\delta_{ij}$.

  7. Two components for loss functions: $L_1 = 1$ if the declared sign and the sign of $\delta_{ij}$ disagree and 0 otherwise, used to indicate wrong sign declarations; $L_0 = 1$ if action $d_0$ is taken and 0 otherwise, used to indicate signs not determined.

  8. Per comparison loss function for declaring the sign of $\delta_{ij}$: $L = L_1 + b\,L_0$, where $b$ is the loss for leaving a sign undetermined. Bayesian decision theory identifies the optimal decision rule, the one whose posterior expected loss is minimized.

  9. Finding the posterior expected loss (some helpful notation). If $P(\delta_{ij} > 0 \mid \text{data}) \ge 1/2$, define $p_{ij} = P(\delta_{ij} > 0 \mid \text{data})$ and take the candidate sign to be positive; otherwise define $p_{ij} = P(\delta_{ij} < 0 \mid \text{data})$ and take the candidate sign to be negative. It then follows that $p_{ij}$ is the posterior probability of the more likely sign of $\delta_{ij}$.

  10. If the more likely sign is declared, the posterior expected loss is $1 - p_{ij}$. Therefore, the Bayes rule declares the sign of $\delta_{ij}$, namely the more likely sign, iff $1 - p_{ij} \le b$; otherwise it takes $d_0$.
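This per comparison Bayes rule can be sketched directly, assuming a normal posterior for the comparison as in the setup above; the names here are illustrative:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bayes_sign_decision(post_mean, post_var, b):
    """Per comparison Bayes rule: declare the more likely sign of delta
    iff its posterior probability of being wrong, 1 - p, is at most the
    no-decision loss b; otherwise make no determination ('0')."""
    p_pos = 1.0 - phi(-post_mean / math.sqrt(post_var))  # P(delta > 0 | data)
    p = max(p_pos, 1.0 - p_pos)  # posterior probability of the more likely sign
    if 1.0 - p <= b:
        return '+' if p_pos >= 0.5 else '-'
    return '0'
```

For example, a posterior mean three posterior standard deviations above zero yields a declaration at b = 0.05, while a posterior mean near zero yields no determination.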

  11. Since the posterior expected loss for the Bayes rule is always less than or equal to $b$ (the loss incurred by taking $d_0$), it follows that the Bayes risk is also less than or equal to $b$.

  12. Consequently, the expected value of $L_1$ under the Bayes rule is at most $b$. This expectation is the (random effects) probability of incorrectly declaring the sign of $\delta_{ij}$ using the Bayes rule: the per comparison wrong sign rate.
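This control claim can be checked by simulation. Below is a Monte Carlo sketch for a single pairwise difference under an assumed normal-normal model, averaging over both levels of the model; all parameter values and names are illustrative:

```python
import math
import random

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def wrong_sign_rate(b, tau2=1.0, sigma2=1.0, n=5, reps=20000, seed=1):
    """Monte Carlo estimate of the random effects wrong-sign rate of the
    per comparison Bayes rule for one pairwise difference delta = mu1 - mu2."""
    rng = random.Random(seed)
    se2 = 2.0 * sigma2 / n               # sampling variance of the difference
    wrong = 0
    for _ in range(reps):
        mu1 = rng.gauss(0.0, math.sqrt(tau2))   # draw the random effects
        mu2 = rng.gauss(0.0, math.sqrt(tau2))
        delta = mu1 - mu2
        d_hat = rng.gauss(delta, math.sqrt(se2))  # draw the data
        # posterior of delta given d_hat: delta ~ N(0, 2*tau2) a priori
        w = 2.0 * tau2 / (2.0 * tau2 + se2)
        post_mean, post_var = w * d_hat, w * se2
        p_pos = 1.0 - phi(-post_mean / math.sqrt(post_var))
        p = max(p_pos, 1.0 - p_pos)
        if 1.0 - p <= b:                 # the Bayes rule declares a sign
            declared_pos = p_pos >= 0.5
            if (declared_pos and delta < 0) or (not declared_pos and delta > 0):
                wrong += 1
    return wrong / reps
```

The estimated rate stays below b, as the argument on this slide predicts.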

  13. Explicit expression for the per comparison wrong sign rate.

  14. For the usual (fixed effects) per comparison test, the probability of declaring the wrong sign is at most half the two-sided level, so choosing that level as $2b$ controls it at $b$. Define a fixed effects decision rule: declare the sign of $\delta_{ij}$ to match the sign of its estimate iff the corresponding two-sided test rejects; otherwise take $d_0$.
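A three-decision version of the usual z-test, in the spirit of Jones & Tukey (2000) cited below, might look like this sketch (names illustrative, known-variance case):

```python
from statistics import NormalDist

def fixed_effects_decision(diff_hat, se, alpha):
    """Three-decision z-test: declare the sign of the estimated difference
    iff the two-sided level-alpha test rejects; otherwise no determination.
    The probability of declaring the wrong sign is then at most alpha/2."""
    zcrit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z = diff_hat / se
    if z >= zcrit:
        return '+'
    if z <= -zcrit:
        return '-'
    return '0'
```

Unlike the Bayes rule, this decision depends only on the sampling distribution of the estimate given the means, which is what makes the conditional wrong sign rate easy to bound.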

  15. Since the fixed effects rule is based on the sampling distribution of the estimate given the means, its wrong sign rate is at most $b$ conditionally on the $\mu_i$, and so its unconditional (random effects) wrong sign rate is also at most $b$.

  16. Conclusion: the Bayesian random effects rule and the fixed effects rule both control the random effects per comparison wrong sign rate at the same level, but the Bayesian rule is more conservative than the fixed effects rule.

  17. Extend the definition of the per comparison loss function to the full set of pairwise comparisons by averaging the per comparison losses over the set.

  18. Interpretation: this new loss function equals the proportion of comparisons whose signs are incorrectly declared using a given action vector, plus $b$ (the no-decision loss) times the proportion of comparisons whose signs are not determined using that action vector.

  19. Family of optimal action vectors: order the posterior sign probabilities so that $p_{(1)} \ge p_{(2)} \ge \dots \ge p_{(M)}$. Define the action vector $a_k$ for $k = 1, \dots, M$ as the one that declares the (more likely) signs of the $k$ comparisons with the largest posterior sign probabilities and takes $d_0$ for the rest. Take $a_0$ to be the vector that takes $d_0$ for every comparison.

  20. The Bayesian decision rule for this loss function takes $a_K$, where $K$ is the largest value of $k$ such that $1 - p_{(k)} \le b$, or $a_0$ if there is no such $k$.
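Because the loss for the set averages the per comparison losses, the optimal action vector simply applies the per comparison threshold to each ordered probability. A sketch, assuming each comparison's posterior probability of its more likely sign has already been computed (names illustrative):

```python
def bayes_rule_for_set(p, b):
    """Bayes rule for the averaged loss: given each comparison's posterior
    probability p[i] of its more likely sign, declare exactly those signs
    with 1 - p[i] <= b, i.e. the K comparisons with the largest p values.
    Returns the set of declared comparison indices."""
    order = sorted(range(len(p)), key=lambda i: -p[i])  # descending in p
    K = 0
    for rank, i in enumerate(order, start=1):
        if 1.0 - p[i] <= b:
            K = rank
        else:
            break
    return set(order[:K])
```

For p = [0.99, 0.8, 0.97] and b = 0.05, only the first and third comparisons clear the per comparison threshold.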

  21. Posterior expected loss for $a_K$: $\frac{1}{M}\left[\sum_{k=1}^{K}(1 - p_{(k)}) + b\,(M - K)\right]$ if $K \ge 1$, and $b$ if $K = 0$.

  22. Since $1 - p_{(k)} \le b$ for each $k \le K$, the posterior expected loss for the Bayesian decision function must be less than or equal to $b$, and the Bayes risk must also be less than or equal to $b$.

  23. Consequently, the expected proportion of incorrectly declared signs is at most $b$. This expectation is the (random effects) per comparison wrong sign rate for the set of comparisons using the Bayes rule.

  24. Rewriting the bound on the posterior expected loss for the case $K \ge 1$, we have $\sum_{k=1}^{K}(1 - p_{(k)}) \le b\,K$.

  25. Consequently, we may write $\frac{1}{K}\sum_{k=1}^{K}(1 - p_{(k)}) \le b$: the posterior expected proportion of wrongly declared signs among the $K$ declared signs is at most $b$.

  26. Since this inequality gives an upper bound on the posterior expectation, a corresponding upper bound holds for the unconditional expectation: the expected proportion of wrong sign declarations among all sign declarations is at most $b$.

  27. This quantity (evaluated for any decision rule) is referred to by Sarkar and Zhou (2008) as the Bayesian directional false discovery rate, or BDFDR. The result that the per comparison Bayes rule controls the BDFDR was given by Lewis and Thayer (2004). Having a per comparison rule control a version of the FDR is counterintuitive!

  28. Sarkar and Zhou (2008) propose another decision rule that also controls the BDFDR and maximizes the posterior per comparison power rate.

  29. Specifically, the Sarkar-Zhou rule takes $a_{K'}$, where $K'$ is the largest value of $k$ such that $\frac{1}{k}\sum_{j=1}^{k}(1 - p_{(j)}) \le b$, or $a_0$ if there is no such $k$. Thus the rule controls the BDFDR at $b$.

  30. Sarkar and Zhou (2008) also proved that, among (non-randomized) rules that control the BDFDR, this rule maximizes the posterior per comparison power rate.

  31. Too much power? Not only does the Sarkar-Zhou rule have more power than the Bayes rule, it may also have more power than the fixed effects rule. In other words, it will sometimes declare a sign for a comparison even when the fixed effects test does not reject. This is counterintuitive!

  32. To summarize, in a multilevel model such as random effects ANOVA, Bayesian ideas have sampling interpretations. In particular, we may define a Bayesian (or random effects) version of the FDR: the average (over both levels of the model) proportion of declared signs for a set of comparisons that are incorrectly declared.

  33. 1. A Bayesian per comparison decision rule turns out to provide control of this FDR, even though it was only designed to minimize an expected per comparison loss function. 2. A rule designed to control this FDR may have more power than a conventional per comparison rule.

  34. References Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57, 289-300. Gelman, A., & Tuerlinckx, F. (2000). Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics, 15, 373-390. Jones, L. V., & Tukey, J. W. (2000). A sensible formulation of the significance test. Psychological Methods, 5, 411-414. Lehmann, E. L. (1950). Some principles of the theory of testing hypotheses. The Annals of Mathematical Statistics, 21, 1-26.

  35. Lehmann, E. L. (1957a). A theory of some multiple decision problems. I. The Annals of Mathematical Statistics, 28, 1-25. Lehmann, E. L. (1957b). A theory of some multiple decision problems. II. The Annals of Mathematical Statistics, 28, 547-572. Lewis, C., & Thayer, D. T. (2004). A loss function related to the FDR for random effects multiple comparisons. Journal of Statistical Planning and Inference, 125, 49-58. Sarkar, S. K., & Zhou, T. (2008). Controlling directional Bayesian false discovery rate in random effects model. Journal of Statistical Planning and Inference, 138, 682-693.

  36. Shaffer, J. P. (1999). A semi-Bayesian study of Duncan’s Bayesian multiple comparison procedure. Journal of Statistical Planning and Inference, 82, 197-213. Shaffer, J. P. (2002). Multiplicity, directional (Type III) errors and the null hypothesis. Psychological Methods, 7, 356-369. Williams, V. S. L., Jones, L. V., & Tukey, J. W. (1999). Controlling error in multiple comparisons, with examples from state-to-state differences in educational achievement. Journal of Educational and Behavioral Statistics, 24, 42-69.
