Statistical Power in ANOVA

Statistical Power in ANOVA - Rick Balkin - PowerPoint PPT Presentation

Statistical Power in ANOVA
Rick Balkin, Ph.D., LPC
Department of Counseling, Texas A&M University-Commerce


  1. Statistical Power in ANOVA
Rick Balkin, Ph.D., LPC
Department of Counseling, Texas A&M University-Commerce
Rick_balkin@tamu-commerce.edu
Balkin, R. S. (2008).

  2. Power
- As mentioned earlier, power is the likelihood of finding statistically significant differences given that statistically significant differences actually do exist.
- Put another way, power is the likelihood of rejecting the null hypothesis when it actually should be rejected.

  3. Power
- Power, therefore, is directly related to type II error: the more power in a study, the smaller the chance of failing to detect a difference when a real difference actually exists.
- Statistically, power is expressed as 1 - β, and type II error is therefore expressed as β.

  4. Power
The power of a study is dependent upon several factors:
- Sample size
- Effect size
- Alpha level

  5. Power and sample size
- As discussed in the lecture on effect size, a large sample size increases the likelihood of finding statistically significant differences. Thus, larger sample sizes increase statistical power.
- Often, statistical tests show significance not because the results are meaningful, but simply because the sample size is so large that the test picks up on very minor deviations/differences.
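The sample-size effect described above can be seen directly by simulation. This is an illustrative sketch, not part of the original slides: the group means, standard deviation, and sample sizes below are hypothetical values chosen only to show that the rejection rate (empirical power) climbs as n grows.

```python
# Monte Carlo estimate of one-way ANOVA power at two sample sizes.
# All design values (means, sd, n) are hypothetical illustrations.
import numpy as np
from scipy.stats import f_oneway

def simulated_power(n_per_group, means, sd=1.0, alpha=0.05, n_sims=2000, seed=0):
    """Fraction of simulated experiments in which the ANOVA rejects H0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        # Draw one sample per group from normal populations with true differences.
        groups = [rng.normal(m, sd, n_per_group) for m in means]
        _, p = f_oneway(*groups)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

means = [0.0, 0.3, 0.6]            # hypothetical true group means (real differences exist)
small = simulated_power(10, means)  # modest power at n = 10 per group
large = simulated_power(80, means)  # much higher power at n = 80 per group
```

With the same true differences, the larger design rejects the null far more often, which is exactly the sense in which sample size buys power.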

  6. Power and alpha level
- The alpha level also has an impact. When alpha is set at the .10 level of significance, as opposed to .05, the critical value is lowered and the likelihood of finding a statistically significant difference increases (F_obs is more likely to be larger than F_crit).
- As the likelihood of making a type I error is increased, the likelihood of making a type II error is decreased. Therefore, there is an inverse relationship between type I and type II error.
- While procedures exist to decrease the chance of making a type I error, researchers run the risk of increasing the chance of making a type II error, especially when smaller sample sizes are involved.
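The inverse relationship on this slide can be checked exactly using the noncentral F distribution, which gives one-way ANOVA power in closed form. This sketch is not from the slides; the effect size (Cohen's f = .25), group count, and sample size are hypothetical inputs chosen for illustration.

```python
# Exact one-way ANOVA power from the noncentral F distribution,
# compared at alpha = .05 versus alpha = .10 (illustrative inputs).
from scipy.stats import f, ncf

def anova_power(effect_size_f, n_per_group, k_groups, alpha):
    """Power of a balanced one-way ANOVA for a given Cohen's f."""
    n_total = n_per_group * k_groups
    df1, df2 = k_groups - 1, n_total - k_groups
    nc = effect_size_f**2 * n_total        # noncentrality parameter under H1
    f_crit = f.ppf(1 - alpha, df1, df2)    # critical value under H0
    return ncf.sf(f_crit, df1, df2, nc)    # P(F_obs > F_crit | H1 true)

# Same design throughout; only alpha changes.
p05 = anova_power(0.25, n_per_group=20, k_groups=3, alpha=0.05)
p10 = anova_power(0.25, n_per_group=20, k_groups=3, alpha=0.10)
```

Loosening alpha from .05 to .10 lowers F_crit, so `p10` exceeds `p05`: accepting more type I risk buys less type II risk, as the slide states.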

  7. Power and effect size
- Additionally, effect size is pertinent. The greater the magnitudes of the differences between groups, the fewer participants are needed to identify statistical significance.
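The same closed-form calculation shows the effect-size side of the trade-off. Again an illustration rather than slide content: holding the design fixed at hypothetical values (3 groups, 15 per group, alpha = .05), a larger Cohen's f yields far more power.

```python
# Power at a fixed sample size for a small versus a large effect
# (noncentral F distribution; all design values are illustrative).
from scipy.stats import f, ncf

def power_at(effect_size_f, n_per_group=15, k=3, alpha=0.05):
    n_total = n_per_group * k
    df1, df2 = k - 1, n_total - k
    f_crit = f.ppf(1 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, effect_size_f**2 * n_total)

small_effect = power_at(0.10)   # Cohen's benchmark for a "small" f
large_effect = power_at(0.40)   # Cohen's benchmark for a "large" f
```

Equivalently, the larger the true effect, the fewer participants are needed to reach any given power target.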

  8. Power and error
- Finally, power is influenced by error; the less error measured in a study, the more power.
- While issues like the magnitude of the treatment effect or the error variance are minimally influenced by the researcher, the establishment of an alpha level and the sample size are easily controlled.
- The easiest method of increasing power in a study is to increase sample size.

  9. Power and research
- Research methods may be fraught with emphasis on statistical significance.
- An unfortunate trend is to discount meaningful findings because no statistical difference or relationship exists. Perhaps not enough emphasis is placed on practical significance.
- Thompson (1999) identified the over-emphasis on tests for statistical significance and emphasized the need to report practical significance along with statistical significance.

  10. Power and research
- Knowing where not to look for answers can be just as important as knowing where to look for answers.
- However, moderate and large effect sizes may be found when statistically significant differences do not exist, and this is usually due to a lack of statistical power.
- When sample size is increased sufficiently, statistical significance will generally become evident. Thus, having sufficient power in a design can be very important to the manner in which results are reported and ultimately published.

  11. Power and research
- For the social sciences, power is usually deemed sufficient at .80: an 80% chance of finding statistically significant differences when they actually do exist, and a 20% chance of a type II error. Most general-purpose statistical packages do not compute power directly.

  12. Evaluating power
Three types of power analyses:
- A priori
- Post hoc
- Sensitivity

  13. A priori
The purpose is to identify the appropriate sample size to conduct the analysis before data is even collected. The researcher must be able to:
1. Estimate the effect size that would define statistical significance
2. Identify the number of groups in the study
3. Set a minimum level of power (usually .80)
4. Identify an alpha level for the study
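The four inputs on this slide are exactly what an a priori calculation consumes. As a sketch (not from the slides, and with hypothetical choices: a medium effect of Cohen's f = .25, 3 groups, power target .80, alpha = .05), the smallest adequate per-group sample size can be found by stepping n upward until the exact power crosses the target.

```python
# A priori sample-size search for a balanced one-way ANOVA,
# using exact power from the noncentral F distribution.
# Inputs (effect size, k, power target, alpha) are hypothetical examples.
from scipy.stats import f, ncf

def anova_power(effect_size_f, n_per_group, k, alpha):
    n_total = n_per_group * k
    df1, df2 = k - 1, n_total - k
    f_crit = f.ppf(1 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, effect_size_f**2 * n_total)

n = 2  # smallest design with positive error df
while anova_power(0.25, n, k=3, alpha=0.05) < 0.80:
    n += 1
# n now holds the minimum participants per group for these inputs.
```

Dedicated tools such as G*Power or statsmodels' `FTestAnovaPower.solve_power` perform the same computation; the loop above just makes the logic explicit.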
