Hypotheses Testing, p-values, Type I and Type II Errors (Statistics) – PowerPoint PPT Presentation



SLIDE 1

Hypotheses testing, p-values, Type I and Type II Errors

“Statistics are no substitute for judgment.”

Henry Clay (US Senator)

SLIDE 2

Formal hypotheses testing

[Figure: two samples, A and B, drawn from a population; their mean heights are compared. Is the difference between A and B due to random chance?]

H0: ȳ_A = ȳ_B
H1: ȳ_A ≠ ȳ_B

If the actual p < α, reject the null hypothesis (H0) and accept the alternative hypothesis (H1)
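This decision rule can be sketched in code. Below is a minimal two-sample test in Python using only the standard library; the standard normal stands in for the t distribution (a simplifying assumption, reasonable for larger samples), and the height data are made up for illustration:

```python
import math
from statistics import NormalDist, mean, stdev

def two_sample_z_test(a, b):
    """Test H0: mean(a) == mean(b) with a Welch-style z statistic.

    Uses the standard normal in place of the t distribution,
    which is an approximation that improves with sample size.
    """
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    # Two-sided p-value: probability under H0 of a difference
    # at least this large in either direction.
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

alpha = 0.05
heights_a = [170, 172, 168, 171, 169, 173, 170, 172]  # made-up sample A
heights_b = [175, 177, 174, 176, 178, 175, 176, 177]  # made-up sample B
z, p = two_sample_z_test(heights_a, heights_b)
print(f"z = {z:.2f}, p = {p:.4f}")
print("reject H0" if p < alpha else "fail to reject H0")
```

The samples barely overlap, so the test rejects H0: the difference is very unlikely to be due to random chance alone.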

SLIDE 3

T-value
(standard errors)

P-value
(percentiles, probabilities)

(t-value × SE_ȳ) + ȳ → original units
(value − ȳ) / SE_ȳ → t-value
pt(t-value, df) → p-value   (pt: t-distribution CDF in R)
qt(p, df) → t-value   (qt: t-distribution quantile in R)

How to convert between scales

[Figure: the same example values marked on three aligned scales – original units (around ȳ), t-values (1, 2, 3 standard errors), and the corresponding p-values (e.g. 0.001, 0.50, 0.999).]

Test p-value < α-level → Significant
Test p-value > α-level → Not Significant
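The scale conversions can be sketched in Python. R's pt/qt give the t-distribution CDF and quantile; here `statistics.NormalDist` stands in for the t distribution (an assumption that is close only for large df), and the mean and standard error are made up:

```python
from statistics import NormalDist

y_bar, se = 170.0, 2.0   # made-up sample mean and standard error
nd = NormalDist()        # stand-in for the t distribution (large-df assumption)

# Original units -> t-value (standard-error units): (value - ybar) / SE
value = 174.0
t = (value - y_bar) / se

# t-value -> original units: (t * SE) + ybar
back = (t * se) + y_bar

# t-value -> p-value (analogous to R's pt(t, df))
p = nd.cdf(t)

# p-value -> t-value (analogous to R's qt(p, df))
t_again = nd.inv_cdf(p)

print(t, back, round(p, 3), round(t_again, 2))  # 2.0 174.0 0.977 2.0
```

Each conversion is the inverse of its partner, which is what lets the same observation be read on any of the three scales.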

SLIDE 4

“Is this difference due to random chance?”

[Figure: two samples, A and B, drawn from a population; their mean heights are compared.]

P-value – the probability that the observed value, or a larger one, is due to random chance

Theory: we can never really prove whether the two samples are truly different or the same – we can only ask whether what we observe (or a greater difference) could arise from random chance

How to interpret p-values:
P-value = 0.05 – “Yes, 1 out of 20 times.”
P-value = 0.01 – “Yes, 1 out of 100 times.”

The lower the probability that a difference is due to random chance, the more likely it is the result of a real effect (what we test for)

In other words: “Is random chance a plausible explanation?”
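One direct way to ask “is random chance a plausible explanation?” is a permutation test: repeatedly shuffle the group labels and count how often chance alone produces a mean difference at least as large as the observed one. A minimal stdlib sketch (made-up data, fixed seed for reproducibility):

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_perm=10_000, seed=42):
    """P-value: fraction of random label shufflings whose absolute
    mean difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

group_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]  # made-up measurements
group_b = [5.0, 5.3, 4.9, 5.1, 5.2, 4.8]
p = permutation_p_value(group_a, group_b)
print(f"p = {p:.4f}")  # small p: random chance is not a plausible explanation
```

Because the two made-up groups do not overlap, almost no shuffling reproduces the observed gap, so the permutation p-value comes out very small.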

SLIDE 5

                                     Null hypothesis is true                               Alternative hypothesis is true
Fail to reject the null hypothesis   Correct Decision                                      Incorrect Decision – False Negative (Type II Error)
Reject the null hypothesis           Incorrect Decision – False Positive (Type I Error)    Correct Decision

Type I Error – rejecting the null hypothesis (H0) when it is actually true

Type II Error – failing to reject the null hypothesis (H0) when it is not true
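The table above can be condensed into a tiny helper that classifies a decision against the (unknowable) truth; the function name is illustrative only:

```python
def decision_outcome(null_is_true: bool, reject_null: bool) -> str:
    """Classify a test decision against the truth of the null hypothesis."""
    if reject_null and null_is_true:
        return "Type I Error (false positive)"   # rejected a true null
    if not reject_null and not null_is_true:
        return "Type II Error (false negative)"  # kept a false null
    return "Correct Decision"

print(decision_outcome(null_is_true=True, reject_null=True))    # Type I Error (false positive)
print(decision_outcome(null_is_true=False, reject_null=False))  # Type II Error (false negative)
print(decision_outcome(null_is_true=True, reject_null=False))   # Correct Decision
```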

Remember: rejecting or accepting a null hypothesis based on a p-value (and therefore the chance you will make an error) depends on the arbitrary α-level you choose

  • Decreasing the α-level will decrease the probability of making a Type I Error, but this increases the probability of making a Type II Error

The -level you choose is completely up to you (typically it is set at 0.05), however, it should be chosen with consideration of the consequences of making a Type I or a Type II Error. Based on your study, would you rather err on the side of false positives or false negatives?

SLIDE 6

Example: Will current forests adequately protect genetic resources under climate change?

Birch Mountain Wildlands

H0: Range of the current climate for the BMW protected area = range of the BMW protected area under climate change

Ha: Range of the current climate for the BMW protected area ≠ range of the BMW protected area under climate change

If we reject H0: climate ranges are different, therefore genetic resources are not adequately protected and new protected areas need to be created

Consequences if I make:

  • Type I Error: Climates are actually the same and genetic resources are indeed adequately protected in the BMW protected area – we created new parks when we didn't need to
  • Type II Error: Climates are different and genetic resources are vulnerable – we didn't create new protected areas and we should have

From an ecological standpoint it is better to make a Type I Error, but from an economic standpoint it is better to make a Type II Error. Which standpoint should I take?

SLIDE 7

Power is your ability to reject the null hypothesis when it is false (i.e. your ability to detect an effect when there is one). There are many ways to increase power:

  • 1. Increase your sample size (sample more of the population)
  • 2. Increase your alpha value (e.g. from 0.01 to 0.05) – watch for Type I Error!
  • 3. Use a one-tailed test (you know the direction of the expected effect)
  • 4. Use a paired test (control and treatment are same sample)

Given you are testing whether what you observed (or a greater difference) is due to random chance, more data gives you a better understanding of what is truly happening within the population; therefore, increasing sample size will decrease the probability of making a Type II Error
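The effect of sample size on power can also be checked by simulation: when a real effect is present, larger samples reject H0 far more often. A sketch with a made-up effect size, a normal-approximation z-test, and a fixed seed:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def rejects(sample, mu0, alpha=0.05):
    """True if a two-sided z-test rejects H0: mean == mu0."""
    se = stdev(sample) / math.sqrt(len(sample))
    z = (mean(sample) - mu0) / se
    return 2 * (1 - NormalDist().cdf(abs(z))) < alpha

def power(n, effect=0.4, trials=1000, seed=1):
    """Fraction of simulated experiments that correctly reject
    H0: mean == 0 when the true mean is `effect` (made-up value)."""
    rng = random.Random(seed)
    hits = sum(
        rejects([rng.gauss(effect, 1) for _ in range(n)], mu0=0.0)
        for _ in range(trials)
    )
    return hits / trials

print(f"power at n=10:  {power(10):.2f}")
print(f"power at n=100: {power(100):.2f}")  # larger sample -> higher power
```

With n = 10 the test misses the effect most of the time (frequent Type II Errors); with n = 100 it detects it almost always, which is point 1 in the list above.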

Statistical Power