Chapter 13: Introduction to Analysis of Variance



  1. Chapter 13: Introduction to Analysis of Variance

  2. Introduction • Analysis of variance (ANOVA) is a hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations). • As with all inferential procedures, ANOVA uses sample data as the basis for drawing general conclusions about populations. • The major advantage of ANOVA is that it can be used to compare two or more treatments. • Thus, ANOVA provides researchers with much greater flexibility in designing experiments and interpreting results.
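The calculation the chapter builds toward can be sketched in a few lines of code. Below is a minimal, hand-rolled one-way independent-measures ANOVA F-ratio; the three score lists are hypothetical illustration data, not taken from the chapter (a library such as SciPy offers `scipy.stats.f_oneway` for the same computation).

```python
# A minimal sketch of a one-way independent-measures ANOVA F-ratio.
# The score lists below are hypothetical example data, not from the chapter.
from statistics import mean

def f_ratio(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                          # number of treatment conditions
    n_total = sum(len(g) for g in groups)
    grand_mean = mean(x for g in groups for x in g)

    # Between-treatments SS: deviations of group means from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-treatments SS: deviations of scores from their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

    df_between = k - 1
    df_within = n_total - k
    ms_between = ss_between / df_between     # variance between treatments
    ms_within = ss_within / df_within        # variance within treatments
    return ms_between / ms_within, df_between, df_within

F, df1, df2 = f_ratio([3, 5, 4], [6, 8, 7], [9, 11, 10])
print(F, df1, df2)   # F = 27.0 with df = 2, 6 for these example scores
```

A large F like this one signals that the sample means differ by far more than chance alone would produce.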

  3. Fig. 13-2, p. 394

  4. Terminology in Analysis of Variance • When a researcher manipulates a variable to create the treatment conditions in an experiment, the variable is called an independent variable. • On the other hand, when a researcher uses a non-manipulated variable to designate groups, the variable is called a quasi-independent variable. • In the context of analysis of variance, an independent variable or a quasi-independent variable is called a factor. • The individual conditions or values that make up a factor are called the levels of the factor.

  5. Terminology in Analysis of Variance cont. • Although ANOVA can be used in a wide variety of research situations, this chapter introduces ANOVA in its simplest form. • Specifically, we consider only single-factor designs. – That is, we examine studies that have only one independent variable (or only one quasi-independent variable). – Second, we consider only independent-measures designs; that is, studies that use a separate sample for each treatment condition.

  6. Statistical Hypotheses for ANOVA • The null hypothesis states that the treatments have no effect, so all population means are equal; for three treatment conditions it would be stated as follows: H₀: μ₁ = μ₂ = μ₃. • The research hypothesis states that the treatments do have an effect; it would be stated as follows: H₁: at least one population mean is different from another.

  7. The Test Statistic • The test statistic for ANOVA is very similar to the t statistics used in earlier chapters. • For ANOVA, the test statistic is called an F-ratio and has the following structure: F = (variance between sample means) / (variance expected with no treatment effect). • As you can see from the formula above, the variance in the numerator of the F-ratio provides a single number that describes how big the differences are among all of the sample means.

  8. The Test Statistic cont. • In much the same way, the variance in the denominator of the F-ratio and the standard error in the denominator of the t statistic both measure the mean differences that would be expected if there is no treatment effect. • Remember that two samples are not expected to be identical even if there is no treatment effect whatsoever. • In the independent-measures t statistic, we computed an estimated standard error to measure how much difference is reasonable to expect between two sample means. • In ANOVA, we will compute a variance to measure how big the mean differences should be if there is no treatment effect.

  9. The Test Statistic cont. • Finally, you should realize that the t statistic and the F-ratio provide the same basic information. • In each case, the numerator of the ratio measures the actual difference obtained from the sample data, and the denominator measures the difference that would be expected if there were no treatment effect. • With either the F-ratio or the t statistic, a large value provides evidence that the sample mean difference is more than would be expected by chance alone (Box 13.1).
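The claim that the t statistic and the F-ratio provide the same basic information can be verified directly: for two independent samples, the ANOVA F-ratio equals the square of the independent-measures t statistic. A small sketch with made-up scores (the data are hypothetical, not from the chapter):

```python
# For two independent groups, F = t^2: both statistics compare the
# obtained mean difference against the difference expected by chance.
from statistics import mean
from math import sqrt

g1, g2 = [4, 6, 5, 7], [8, 10, 9, 11]   # hypothetical example scores

def ss(g):
    m = mean(g)
    return sum((x - m) ** 2 for x in g)

# Independent-measures t statistic with pooled variance
sp2 = (ss(g1) + ss(g2)) / (len(g1) + len(g2) - 2)
t = (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / len(g1) + 1 / len(g2)))

# One-way ANOVA F-ratio for the same two groups
grand = mean(g1 + g2)
ss_between = len(g1) * (mean(g1) - grand) ** 2 + len(g2) * (mean(g2) - grand) ** 2
ms_between = ss_between / 1     # df_between = k - 1 = 1
ms_within = sp2                 # with two groups, MS_within equals the pooled variance
F = ms_between / ms_within

print(round(F, 6), round(t ** 2, 6))   # identical: 19.2 and 19.2
```

This is why the two tests always reach the same conclusion in the two-group case.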

  10. The Logic of Analysis of Variance • Between-Treatments Variance – Remember that calculating variance is simply a method for measuring how big the differences are for a set of numbers. – When you see the term variance, you can automatically translate it into the term differences. – Thus, the between-treatments variance simply measures how much difference exists between the treatment conditions. – In addition to measuring the differences between treatments, the overall goal of ANOVA is to interpret the differences between treatments.

  11. The Logic of Analysis of Variance cont. – Specifically, the purpose for the analysis is to distinguish between two alternative explanations: • the differences are the result of sampling error, or • the differences have been caused by the treatment effects. – Thus, there are always two possible explanations for the difference (or variance) that exists between treatments: • 1. Systematic differences caused by the treatments • 2. Random, unsystematic differences – Two primary sources are usually identified for these unpredictable differences.

  12. The Logic of Analysis of Variance cont. » Individual differences » Experimental error – Thus, when we compute the between-treatments variance, we are measuring differences that could be caused by a systematic treatment effect or could simply be random and unsystematic mean differences caused by sampling error. – To demonstrate that there really is a treatment effect, we must establish that the differences between treatments are bigger than would be expected by sampling error alone.

  13. The Logic of Analysis of Variance cont. – To accomplish this goal, we will determine how big the differences are when there is no systematic treatment effect; that is, we will measure how much difference (or variance) can be explained by random and unsystematic factors. – To measure these differences, we compute the variance within treatments. • Within-Treatments Variance – Inside each treatment condition, we have a set of individuals who all receive exactly the same treatment; that is, the researcher does not do anything that would cause these individuals to have different scores.

  14. The Logic of Analysis of Variance cont. – Thus, the within-treatments variance provides a measure of how much difference is reasonable to expect from random and unsystematic factors. – In particular, the within-treatments variance measures the naturally occurring differences that exist when there is no treatment effect; that is, how big the differences are when H₀ is true. – Figure 13.4 shows the overall ANOVA and identifies the sources of variability that are measured by each of the two basic components.

  15. The F-Ratio: The Test Statistic for ANOVA • Once we have analyzed the total variability into two basic components (between treatments and within treatments), we simply compare them. • The comparison is made by computing a statistic called an F-ratio. • For the independent-measures ANOVA, the F-ratio has the following structure: F = (variance between treatments) / (variance within treatments). • When we express each component of variability in terms of its sources (see Figure 13.4), the structure of the F-ratio is: F = (treatment effect + random, unsystematic differences) / (random, unsystematic differences).
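The "analysis" in analysis of variance refers to this partitioning of the total variability into the two components: for any data set, SS total = SS between + SS within. A quick check with hypothetical scores (not from the chapter):

```python
# Verify the ANOVA partition: SS_total = SS_between + SS_within.
from statistics import mean

# Hypothetical scores for three treatment conditions.
groups = [[2, 4, 3], [5, 7, 6], [8, 10, 9]]
scores = [x for g in groups for x in g]
grand = mean(scores)

ss_total = sum((x - grand) ** 2 for x in scores)
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

print(ss_total, ss_between + ss_within)   # 60 and 60: the components add up
```

Because the two components account for all of the variability, comparing them in a ratio asks how much of the total is attributable to the treatments.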

  16. The Logic of Analysis of Variance cont. • The value obtained for the F-ratio helps determine whether any treatment effects exist. • Consider the following two possibilities: – 1. When there are no systematic treatment effects, the differences between treatments (numerator) are entirely caused by random, unsystematic factors. • In this case, the numerator and the denominator of the F-ratio are both measuring random differences and should be roughly the same size. • With the numerator and denominator roughly equal, the F-ratio should have a value around 1.00.

  17. The Logic of Analysis of Variance cont. • In terms of the formula, when the treatment effect is zero, we obtain F = (0 + random, unsystematic differences) / (random, unsystematic differences), which is approximately 1.00. • Thus, an F-ratio near 1.00 indicates that the differences between treatments (numerator) are random and unsystematic, just like the differences in the denominator. • With an F-ratio near 1.00, we conclude that there is no evidence to suggest that the treatment has any effect.
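The prediction that F should be near 1.00 when H₀ is true can be checked by simulation. In the sketch below, three "treatment" samples are all drawn from the same normal population (the population mean of 50, SD of 10, and n = 10 per group are arbitrary choices for illustration), so any differences among the sample means are pure sampling error:

```python
# Simulate many experiments where H0 is true: with no treatment effect,
# the average F-ratio should be close to 1.00.
import random
from statistics import mean

random.seed(1)   # fixed seed so the simulation is reproducible

def f_ratio(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

fs = []
for _ in range(500):
    # All three samples come from the SAME population: no treatment effect.
    groups = [[random.gauss(50, 10) for _ in range(10)] for _ in range(3)]
    fs.append(f_ratio(groups))

print(round(mean(fs), 2))   # close to 1
```

Individual F-ratios still vary from experiment to experiment, which is why a critical value from the F distribution, rather than 1.00 itself, is used as the decision boundary.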

  18. The Logic of Analysis of Variance cont. – 2. When the treatment does have an effect, causing systematic differences between samples, then the combination of systematic and random differences in the numerator should be larger than the random differences alone in the denominator. • In this case, the numerator of the F-ratio should be noticeably larger than the denominator, and we should obtain an F-ratio noticeably larger than 1.00. • Thus, a large F-ratio is evidence for the existence of systematic treatment effects; that is, there are significant differences between treatments.
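Conversely, adding a systematic treatment effect inflates the numerator. In this sketch one condition's population mean is shifted upward by 15 points (again an arbitrary choice for illustration), and the average F-ratio across simulated experiments lands well above 1.00:

```python
# Simulate experiments where the treatment DOES have an effect:
# one population mean is shifted, so F should be much larger than 1.00.
import random
from statistics import mean

random.seed(2)   # fixed seed so the simulation is reproducible

def f_ratio(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

fs = []
for _ in range(300):
    groups = [
        [random.gauss(50, 10) for _ in range(10)],   # control-level condition
        [random.gauss(50, 10) for _ in range(10)],   # control-level condition
        [random.gauss(65, 10) for _ in range(10)],   # treatment shifts the mean up 15
    ]
    fs.append(f_ratio(groups))

print(round(mean(fs), 2))   # far above 1.00
```

The systematic 15-point shift adds to the numerator of every simulated F-ratio but leaves the denominator unchanged, which is exactly the logic the slide describes.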

