
Course Business l Sign up for final project presentations l Dec 5th: Doug, Jenny, Kelly, Kole, Rob, Zac l Dec 12th: Ciara, Griff, Kori, Lauren, Lin, Rebecca l Shorter lecture on effect size & power l sleep.csv on CourseWeb l Scott to be out


  1. • Is bacon really this bad for you?? October 26, 2015

  2. • Is bacon really this bad for you?? • True that we have as much evidence that bacon causes cancer as smoking causes cancer! • Same level of statistical reliability

  3. • Is bacon really this bad for you?? • True that we have as much evidence that bacon causes cancer as smoking causes cancer! • Same level of statistical reliability • But, effect size is much smaller for bacon

  4. Effect Size • Our model results tell us both: • Parameter estimate tells us about effect size • t statistic and p-value tell us about statistical reliability

  5. Effect Size: Parameter Estimate • Simplest measure: Parameter estimates • Effect of 1-unit change in predictor on outcome variable • “Each hour of exercise the day before resulted in another 0.72 hours of sleep” • “Each minute of exercise increases life expectancy by about 7 minutes.” (Moore et al., 2012, PLOS ONE) • “People with a college diploma earn around $24,000 more per year.” (Bureau of Labor Statistics, 2018) • Concrete! Good for “real-world” outcomes (a code sketch follows below)
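As a quick illustration, a minimal sketch of reading an unstandardized effect off a fitted model. It assumes the course's sleep.csv has HoursSleep and HoursExercise columns (both appear later in the deck); the single-predictor lm() is for illustration only, not the mixed model used later.

```r
# Minimal sketch: the unstandardized effect size is just the fitted slope.
# Assumes sleep.csv (from CourseWeb) with HoursSleep and HoursExercise columns.
sleep <- read.csv("sleep.csv")

m <- lm(HoursSleep ~ HoursExercise, data = sleep)
coef(m)["HoursExercise"]
# e.g., ~0.72 would mean: each hour of exercise -> 0.72 more hours of sleep
```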

  6. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  7. Effect Size: Standardization • Which is the bigger effect? • 1 hour of exercise = 0.72 hours of sleep • 1 mg of caffeine = -0.004 hours of sleep • Problem: These are measured in different units • Hours of exercise vs. mg of caffeine

  8. Effect Size: Standardization • Which is the bigger effect? • 1 hour of exercise = 0.72 hours of sleep • 1 mg of caffeine = -0.004 hours of sleep • Problem: These are measured in different units • Hours of exercise vs. mg of caffeine • Convert to z-scores: # of standard deviations from the mean • This scale applies to anything! • Standardized scores

  9. Effect Size: Standardization • scale() puts things in terms of z-scores • New z-scored version of HoursExercise: sleep$HoursExercise.z <- scale(sleep$HoursExercise)[,1] • # of standard deviations above/below mean hours of exercise

  10. Effect Size: Standardization • scale() puts things in terms of z-scores • New z-scored version of HoursExercise: sleep$HoursExercise.z <- scale(sleep$HoursExercise)[,1] • # of standard deviations above/below mean hours of exercise • Then use these in a new model • Try z-scoring MgCaffeine, too • Then, run a model with the z-scored variables. Which has the largest effect? (A sketch of this exercise follows below.)
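A minimal sketch of that exercise. The random-intercept structure and the Subject grouping variable are assumptions; adjust to whatever model you have been fitting to sleep.csv.

```r
# Z-score both predictors, then refit the model; slopes become per-SD effects.
library(lme4)

sleep <- read.csv("sleep.csv")
sleep$HoursExercise.z <- scale(sleep$HoursExercise)[,1]
sleep$MgCaffeine.z    <- scale(sleep$MgCaffeine)[,1]

# Random-intercept structure and the "Subject" column name are assumptions here
SleepModel.z <- lmer(HoursSleep ~ HoursExercise.z + MgCaffeine.z +
                       (1 | Subject), data = sleep)
fixef(SleepModel.z)  # compare the two slopes on a common (SD) scale
```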

  11. Effect Size: Standardization • Old results: [model output] • New results: [model output] • No change in statistical reliability • Effect size is now estimated differently

  12. Effect Size: Standardization • New results: • 1 SD increase in exercise = 0.75 hours of sleep • 1 SD increase in caffeine = -0.26 hours of sleep • Exercise effect is bigger

  13. Effect Size: Standardization • Standardized effects make our effect sizes dependent on the variability in our data • Effect of 1 std. dev. of cigarette smoking on life expectancy depends on what that std. dev. is • Varies a lot from country to country! • Might get different standardized effects even if the unstandardized effect is the same

  14. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  15. Effect Size: Interpretation • Generic heuristic for standardized effect sizes • “Small” ≈ .25 • “Medium” ≈ .50 • “Large” ≈ .80 • But, take these with several grains of salt • Cohen (1988) just made them up • Not defined in the context of any particular domain

  16. Effect Size: Interpretation • Consider in context of other effect sizes in this domain: • Our effect: .20, other effects: .30 and .40 (ours is relatively small) • vs. our effect: .20, other effects: .10 and .15 (ours is relatively large) • For interventions: Consider cost, difficulty of implementation, etc. • Aspirin’s effect in reducing heart attacks: d ≈ .06, but cheap!

  17. Effect Size: Interpretation • For theoretically guided research, compare to predictions of competing theories • The lag effect in memory: [Diagram: study list WITCH, RACCOON, VIKING, RACCOON (1 sec each, 5 sec apart) → after 1 day, POOR recall of RACCOON; study list RACCOON, WITCH, VIKING, RACCOON (1 sec each, 5 sec apart) → after 1 day, GOOD recall of RACCOON] • Is this about intervening items or time?

  18. Effect Size: Interpretation • Is the lag effect about intervening items or time? [Diagram: Condition A: RACCOON, WITCH, VIKING, RACCOON at 1 sec each, test after 1 day; Condition B: RACCOON, WITCH, RACCOON at 10 sec each, test after 1 day] • Intervening-items hypothesis predicts A > B • Time hypothesis predicts B > A • Goal here is to use the direction of the effect to adjudicate between competing hypotheses • Not whether the lag effect is “small” or “large”

  19. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  20. Overall Variance Explained • How well can we explain this DV? • Test: Do predicted values match up well with the actual outcomes? • R²: cor(fitted(SleepModel), sleep$HoursSleep)^2 [Scatterplot: PREDICTED hours of sleep (x-axis) vs. ACTUAL hours of sleep (y-axis)] • But, this includes what’s predicted on the basis of subjects (and other random effects) • Compare to the R² of a model with just the random effects & no fixed effects (see the sketch below)
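A minimal sketch of that comparison, assuming the sleep.csv variable names used above, no missing data, and a SleepModel with a by-subject random intercept (the exact model structure is an assumption):

```r
library(lme4)

sleep <- read.csv("sleep.csv")

# Full model: fixed effects plus random intercepts by subject (structure assumed)
SleepModel <- lmer(HoursSleep ~ HoursExercise + MgCaffeine + (1 | Subject),
                   data = sleep)
R2.full <- cor(fitted(SleepModel), sleep$HoursSleep)^2

# Baseline: random effects only, no fixed effects
NullModel <- lmer(HoursSleep ~ 1 + (1 | Subject), data = sleep)
R2.null <- cor(fitted(NullModel), sleep$HoursSleep)^2

# Rough share of variance attributable to the fixed effects
R2.full - R2.null
```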

  21. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  22. Recap of Null Hypothesis Significance Testing • Does “brain training” affect general cognition? • H0: There is no effect of brain training on cognition • HA: There is an effect of brain training on cognition

  23. Recap of Null Hypothesis Significance Testing • Does “brain training” affect general cognition? • H0: There is no effect of brain training on cognition • γ1 = 0 in the population • HA: There is an effect of brain training on cognition • γ1 ≠ 0 in the population

  25. Recap of Null Hypothesis Significance Testing • Is a z-score of 3.3 good evidence against H0? • In a world where brain training has no effect on cognition (H0), the most probable z-score would have been 0

  26. Recap of Null Hypothesis Significance Testing • Is a z-score of 3.3 good evidence against H0? • In a world where brain training has no effect on cognition (H0), the most probable z-score would have been 0 [Figure: normal curve centered at z = 0]

  27. Recap of Null Hypothesis Significance Testing • But even under H0, we wouldn’t always expect to get exactly a z-score of 0 in our sample • Observed effect will sometimes be higher or lower just by chance (but these values have lower probability) – sampling error [Figure: normal curve with sample z-scores marked at z = 0, z = 1, z = -1.5]

  28. Recap of Null Hypothesis Significance Testing • In a world where H0 is true, the distribution of z-scores should look like this • The normal distribution of z-scores has mean 0 and std. dev. 1 (the standard normal) • How plausible is it that the z-score for our sample came from this distribution? [Figure: standard normal curve, x-axis from -3 to 3, with z = 0, z = 1, and z = -1.5 marked]

  29. Recap of Null Hypothesis Significance Testing • p-value: Probability of obtaining a result this extreme under the null hypothesis of no effect • We reject H0 when the observed t or z has < .05 probability of arising under H0 • But, still possible to get this z when H0 is true [Figure: total probability of a z-score in the shaded tail regions under H0 = .05]
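As a concrete check on the running example, the two-tailed p-value for the observed z of 3.3 can be computed directly from the standard normal:

```r
# Two-tailed p-value for z = 3.3 under the standard normal distribution
2 * pnorm(-abs(3.3))   # ~0.00097, well below .05

# The conventional two-tailed cutoff at alpha = .05:
qnorm(1 - .05/2)       # ~1.96; reject H0 when |z| exceeds this
```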

  31. Recap of Null Hypothesis Significance Testing • p-value: Probability of obtaining a result this extreme under the null hypothesis of no effect • We reject H0 when the observed t or z has < .05 probability of arising under H0 • But, still possible to get this z when H0 is true • In that case, we’d incorrectly conclude that brain training works when it actually doesn’t • False positive or Type I error [Figure: total probability of a z-score in the shaded tail regions under H0 = .05]

  32. Recap of Null Hypothesis Significance Testing • What is our rate of Type I error? • Even in a world where H0 is true, 5% of z values fall in the rejection region • Thus, a 5% probability: α = rate of Type I error = .05 [Figure: total probability of a z-score in the rejection region under H0 = .05]

  33. Recap of Null Hypothesis Significance Testing • So, in a world where H0 is true, two outcomes are possible (actual state of the world × what we did): • H0 is true & we retain H0: GOOD! (probability 1-α) • H0 is true & we reject H0: OOPS! Type I error (probability α)

  34. Recap of Null Hypothesis Significance Testing • What about a world where HA is true?

  35. Recap of Null Hypothesis Significance Testing • Another mistake we could make: There really is an effect, but we retained H0 • False negative / Type II error • Traditionally, not considered as “bad” as Type I • Probability: β • Updated table: • H0 is true & we retain H0: GOOD! (probability 1-α) • H0 is true & we reject H0: OOPS! Type I error (probability α) • HA is true & we retain H0: OOPS! Type II error (probability β)

  37. Recap of Null Hypothesis Significance Testing • POWER (1-β): Probability of correct rejection of H0: detecting the effect when it really exists • If our hypothesis (HA) is right, what probability is there of obtaining significant evidence for it? • Full table: • H0 is true & we retain H0: GOOD! (probability 1-α) • H0 is true & we reject H0: OOPS! Type I error (probability α) • HA is true & we retain H0: OOPS! Type II error (probability β) • HA is true & we reject H0: GOOD! (probability 1-β)

  38. Recap of Null Hypothesis Significance Testing • POWER (1-β): Probability of correct rejection of H0: detecting the effect when it really exists • Can we find the thing we’re looking for?

  39. Recap of Null Hypothesis Significance Testing • POWER (1-β): Probability of correct rejection of H0: detecting the effect when it really exists • Can we find the thing we’re looking for? • If our hypothesis is true, what is the probability we’ll get p < .05? • We compare retrieval practice to re-reading with power = .75 • If retrieval practice is actually beneficial, there is a 75% chance we’ll get a significant result • We compare bilinguals to monolinguals on a test of non-verbal cognition with power = .35 • If there is a difference between monolinguals & bilinguals, there is a 35% chance we’ll get p < .05

  40. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  41. Why Do We Care About Power? 1. Efficient use of resources • A major determinant of power is sample size (larger = more power) • Power analyses tell us if our planned sample size (n) is: • Large enough to be able to find what we’re looking for • Not so large that we’re collecting more data than necessary

  42. Why Do We Care About Power? 1. Efficient use of resources • A major determinant of power is sample size (larger = more power) • Power analyses tell us if our planned sample size (n) is: • Large enough to be able to find what we’re looking for • Not so large that we’re collecting more data than necessary • This is about good use of our resources • Societal resources: Money, participant hours • Your resources: Time!!

  44. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. • In the limit, ensures a significant result • Random sampling means that the p-value is likely to differ in each sample [Plot: the p-value happens to be higher in this slightly larger sample]

  45. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. • In the limit, ensures a significant result • Random sampling means that the p-value is likely to differ in each sample [Plot: now, the p-value happens to be lower]

  46. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. • In the limit, ensures a significant result • Random sampling means that the p-value is likely to differ in each sample • At some point, p < .05 by chance [Plot: SIGNIFICANT!! PUBLISH NOW!!]

  47. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. • In the limit, ensures a significant result • Random sampling means that the p-value is likely to differ in each sample • At some point, p < .05 by chance • Bias to get positive results if we stop if and only if p < .05 [Plot: but not significant in this even larger sample]

  48. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. [Flowchart: Collect data → Significant result? If NO: “Maybe we just didn’t have enough data yet” (loop back and collect more). If YES: “It’s ‘statistically significant,’ so that means it’s real. Publish it!”]

  49. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) • Rate of false positive results increases if we keep collecting data whenever our effect is non-sig. • We can avoid this if we use a power analysis to decide our sample size in advance
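To see the problem concretely, here is a minimal simulation sketch of the “collect until significant” strategy when H0 is actually true. The starting n, step size, and cap are arbitrary choices for illustration:

```r
# Optional stopping inflates Type I error even when H0 is true.
# Start with n = 20 per group; if p >= .05, add 10 more per group, up to 100.
set.seed(1)
false.positive <- replicate(2000, {
  a <- rnorm(20); b <- rnorm(20)   # H0 is true: no group difference
  repeat {
    p <- t.test(a, b)$p.value
    if (p < .05 || length(a) >= 100) break
    a <- c(a, rnorm(10)); b <- c(b, rnorm(10))
  }
  p < .05
})
mean(false.positive)  # noticeably above the nominal .05
```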

  50. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) • Even if an effect exists in the population, we’d expect some non-significant results • Power is almost never 100% • In fact, many common designs in psychology have low power (Etz & Vandekerckhove, 2016; Maxwell et al., 2015) • Small to moderate sample sizes • Small effect sizes

  52. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) • Even if an effect exists in the population, we’d expect some non-significant results • Power is almost never 100% • In fact, many common designs in psychology have low power (Etz & Vandekerckhove, 2016; Maxwell et al., 2015) • Small effect sizes • Small to moderate sample sizes • Failures to replicate might be a sign of low power, rather than a non-existent effect

  53. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • We “fail to reject H0” rather than “accept H0”

  54. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • We “fail to reject H0” rather than “accept H0” • “Absence of evidence is not evidence of absence.” [Cartoon: “I looked around Schenley Park for 15 minutes and didn’t see any giraffes. Therefore, giraffes don’t exist.”]

  55. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • We “fail to reject H0” rather than “accept H0” • “Absence of evidence is not evidence of absence.” • “We didn’t find enough evidence to conclude there is a significant effect” DOES NOT MEAN “no significant effect exists”

  56. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • We “fail to reject H0” rather than “accept H0” • “Absence of evidence is not evidence of absence.” • Major criticism of null hypothesis significance testing!

  57. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • But, with high power, a null result is more informative

  58. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results • A non-significant result, by itself, doesn’t prove an effect doesn’t exist • But, with high power, a null result is more informative • e.g., null effect of working memory training on intelligence with 20% power • Maybe brain training works & we just couldn’t detect the effect • But: null effect of WM training on intelligence with 90% power • Unlikely that we just missed the effect!

  60. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results 5. Granting agencies now want to see it • Don’t want to fund a study with low probability of showing anything • e.g., Our theory predicts greater activity in Broca’s area in condition A than condition B. But our experiment has only a 16% probability of detecting the difference. Not good!

  61. Why Do We Care About Power? 1. Efficient use of resources 2. Avoid p-hacking (Simmons et al., 2011) 3. Understand non-replication (Open Science Collaboration, 2015) 4. Understand null results 5. Granting agencies now want to see it [Screenshots: NIH and IES application guidelines requesting power analyses]

  62. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  63. Estimating Effect Size • One reason we haven’t always calculated power is that it requires an estimate of the effect size • But, several ways to estimate effect size: 1. Prior literature • What is the effect size in other studies in this domain or with a similar manipulation?

  64. Estimating Effect Size • One reason we haven’t always calculated power is that it requires an estimate of the effect size • But, several ways to estimate effect size: 1. Prior literature 2. Pilot study • Run a version of the study with a smaller n • Don’t worry about whether the effect is significant; just use the data to estimate effect size

  65. Estimating Effect Size • One reason we haven’t always calculated power is that it requires an estimate of the effect size • But, several ways to estimate effect size: 1. Prior literature 2. Pilot study 3. Smallest Effect Size Of Interest (SESOI) • Decide the smallest effect size we’d care about • e.g., we want our educational intervention to have an effect size of at least .05 GPA • Calculate power based on that effect size • True that if the actual effect is smaller than .05 GPA, our power would be lower, but the idea is we no longer care about the intervention if its effect is that small

  66. Week 12: Effect Size & Power l Missing Data Solutions l Casewise Deletion l Listwise Deletion l Unconditional Imputation l Conditional Imputation l Multiple Imputation l Effect Size l Unstandardized l Standardized l Interpreting Effect Size l Variance Explained l Power l Recap of Null Hypothesis Significance Testing l Why Should We Care? l Estimating Effect Size l Doing Your Own Power Analysis l Influences on Power

  67. Your Own Power Analysis • Rationale behind power analyses: • Can we detect the kind & size of effect we’re interested in? • What sample size would we need? • In practice: • We can’t control effect size; it’s a property of nature • α is usually fixed (e.g., at .05) by convention • But, we can control our sample size n ! • So: • Determine desired power (often .80) • Estimate the effect size(s) • Calculate the necessary sample size n

  68. Determining Power • Power for ANOVAs can easily be found from tables • Simpler designs have (at most) one random effect • More complicated for mixed-effects models (an analytic example for a simple design follows below; simulation handles the mixed-model case)
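For simple designs, analytic power calculations are also available in R. A minimal sketch using the pwr package (not mentioned on the slide; shown as one common option):

```r
# install.packages("pwr")  # if not already installed
library(pwr)

# How many participants per group does an independent-samples t-test need
# to detect a "medium" effect (d = .50) with 80% power at alpha = .05?
pwr.t.test(d = 0.5, power = 0.80, sig.level = 0.05, type = "two.sample")
# Leaving n unspecified tells pwr to solve for it (about 64 per group)
```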

  69. Monte Carlo Methods • Remember the definition of power? • The probability of observing a significant effect in our sample if the effect truly exists in the population • What if we knew for a fact that the effect existed in a particular population? • Then, a measure of power is how often we get a significant result in a sample (of our intended n) • Observe a significant effect in 10 samples out of 20 = 50% of the time = power of .50 • Observe a significant effect in 300 samples out of 1000 = 30% of the time = power of .30 • Observe a significant effect in 800 samples out of 1000 = 80% of the time = power of .80

  70. Monte Carlo Methods • Remember the definition of power? • The probability of observing a significant effect in our sample if the effect truly exists in the population • What if we knew for a fact that the effect existed in a particular population? • Then, a measure of power is how often we get a significant result in a sample (of our intended n ) Great, but where am I ever going to find data where I know exactly what the population parameters are?

  71. Monte Carlo Methods • Remember the definition of power? • The probability of observing a significant effect in our sample if the effect truly exists in the population • What if we knew for a fact that the effect existed in a particular population? • Then, a measure of power is how often we get a significant result in a sample (of our intended n ) • Solution: We create (“simulate”) the data.

  72. Data Simulation • Set some plausible population parameters (effect size, subject variance, item variance, etc.) [Box: Set population parameters: Mean = 723 ms, Group difference = 100 ms, Subject var = 30] • Since we are creating the data… • We can choose the population parameters • We know exactly what they are

  73. Data Simulation • Create (“simulate”) a random sample drawn from this population [Boxes: Set population parameters (Mean = 723 ms, Group difference = 100 ms, Subject var = 30) → Create a random sample from these data (N subjects = 20, N items = 40)] • Like most samples, the sample statistics will not exactly match the population parameters • It’s randomly generated • But, the difference is we know what the population is like & that there IS an effect

  74. Data Simulation • Now, fit our planned mixed-effects model to this sample of simulated data to get one result [Boxes: Set population parameters → Create a random sample from these data → Run our planned model and see if we get a significant result] • Might get a significant result • Correctly detected the effect in the population • Might get a non-significant result • Type II error – missed an effect that really exists in the population • (A code sketch of this simulate-and-refit loop, repeated to estimate power, follows below)
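A minimal sketch of the full Monte Carlo loop, using the slides' parameters (mean = 723 ms, group difference = 100 ms, subject SD = 30, 20 subjects, 40 items). The item SD and residual SD are assumptions, since the slides don't specify them:

```r
library(lme4)
library(lmerTest)  # adds p-values to lmer() summaries

simulate_once <- function(n.subjects = 20, n.items = 40) {
  # Population parameters from the slides; we KNOW the effect exists
  grand.mean <- 723; group.diff <- 100; subject.sd <- 30
  item.sd <- 20; residual.sd <- 150   # assumed values, not on the slides

  group <- rep(c(-0.5, 0.5), each = n.subjects / 2)  # between-subjects factor
  subj.int <- rnorm(n.subjects, 0, subject.sd)       # by-subject intercepts
  item.int <- rnorm(n.items, 0, item.sd)             # by-item intercepts

  d <- expand.grid(Subject = 1:n.subjects, Item = 1:n.items)
  d$Group <- group[d$Subject]
  d$RT <- grand.mean + group.diff * d$Group +
    subj.int[d$Subject] + item.int[d$Item] + rnorm(nrow(d), 0, residual.sd)

  # Fit the planned model; TRUE if the group effect comes out significant
  m <- lmer(RT ~ Group + (1 | Subject) + (1 | Item), data = d)
  coef(summary(m))["Group", "Pr(>|t|)"] < .05
}

# Power = proportion of simulated samples yielding a significant result
set.seed(1)
mean(replicate(200, simulate_once()))
```

Packages such as simr automate this same simulate-and-refit loop for lmer models; the hand-rolled version above just makes each step of the slides' diagram explicit.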
