
BS2247 Introduction to Econometrics, Lecture 4: The Simple Regression Model



  1. BS2247 Introduction to Econometrics. Lecture 4: The Simple Regression Model. OLS unbiasedness, OLS variances, units of measurement, nonlinearities. Dr. Kai Sun, Aston Business School.

  2. Unbiasedness of OLS (SLR stands for "Simple Linear Regression")
     - Assumption SLR.1: the population model is linear in parameters, i.e., $y = \beta_0 + \beta_1 x + u$
     - Assumption SLR.2: we use a random sample of size $n$, $\{(x_i, y_i) : i = 1, 2, \ldots, n\}$, so $y_i = \beta_0 + \beta_1 x_i + u_i$
     - Assumption SLR.3: there is sample variation in $x$, i.e., $\sum_i (x_i - \bar{x})^2 > 0$ (recall the formula for $\hat{\beta}_1$)
     - Assumption SLR.4: zero conditional mean, i.e., $E(u_i \mid x_i) = 0$
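To make these assumptions concrete, here is a minimal simulation sketch in Python (the parameter values and distributions are illustrative assumptions, not from the lecture) that draws a random sample satisfying SLR.1-SLR.4:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population parameters (assumed for this sketch).
beta0, beta1, n = 1.0, 2.0, 500

x = rng.uniform(0, 10, size=n)   # SLR.3: x varies in the sample
u = rng.normal(0, 1, size=n)     # u drawn independently of x, so E(u|x) = 0 (SLR.4)
y = beta0 + beta1 * x + u        # SLR.1 and SLR.2: linear model, random sample
```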


  3. Unbiasedness of $\hat{\beta}_1$. In order to think about unbiasedness, we need to rewrite our estimator $\hat{\beta}_1$ in terms of the population parameter $\beta_1$:
     $$\hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} = \frac{\sum_i (x_i - \bar{x}) y_i}{\sum_i (x_i - \bar{x})^2}$$
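As a quick numerical check of the two equivalent expressions for $\hat{\beta}_1$, a sketch continuing the simulated sample above:

```python
xbar, ybar = x.mean(), y.mean()

# Two algebraically equivalent formulas for the OLS slope.
b1_a = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
b1_b = np.sum((x - xbar) * y) / np.sum((x - xbar) ** 2)

assert np.isclose(b1_a, b1_b)  # equal up to floating-point error
```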

  4. Unbiasedness of $\hat{\beta}_1$ (continued). Plug in $y_i = \beta_0 + \beta_1 x_i + u_i$ and define $SST_x = \sum_i (x_i - \bar{x})^2$:
     $$\hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i)}{SST_x} = 0 + \beta_1 + \frac{\sum_i (x_i - \bar{x}) u_i}{SST_x},$$
     using the facts that $\sum_i (x_i - \bar{x}) = 0$ and $\sum_i x_i (x_i - \bar{x}) = \sum_i (x_i - \bar{x})^2 = SST_x$.
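This decomposition, and the two identities it relies on, can be verified on the simulated data from the sketch above:

```python
SSTx = np.sum((x - xbar) ** 2)

# beta1_hat = beta1 + sum_i (x_i - xbar) * u_i / SSTx
b1_decomp = beta1 + np.sum((x - xbar) * u) / SSTx
assert np.isclose(b1_a, b1_decomp)

# The two identities used in the derivation:
assert np.isclose(np.sum(x - xbar), 0.0, atol=1e-8)
assert np.isclose(np.sum(x * (x - xbar)), SSTx)
```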

  5. Unbiasedness of $\hat{\beta}_1$ (continued)
     - Take the expectation (conditional on $x_i$) of both sides:
       $$E(\hat{\beta}_1 \mid x_i) = \beta_1 + \frac{1}{SST_x} \sum_i E(u_i \mid x_i)(x_i - \bar{x}) = \beta_1$$
       (using Assumption SLR.4, that $E(u_i \mid x_i) = 0$)
     - Finally, $E(\hat{\beta}_1) = E(E(\hat{\beta}_1 \mid x_i)) = E(\beta_1) = \beta_1$ (the first equality holds by the law of iterated expectations).
     - The fact that $E(\hat{\beta}_1) = \beta_1$ means that $\hat{\beta}_1$ is unbiased for $\beta_1$.
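A Monte Carlo sketch of this result (again with illustrative parameters): hold $x$ fixed, as the conditioning suggests, redraw the errors many times, and the average of $\hat{\beta}_1$ across replications should be close to $\beta_1$:

```python
def ols_slope(x, y):
    """OLS slope of a simple regression of y on x."""
    xd = x - x.mean()
    return np.sum(xd * (y - y.mean())) / np.sum(xd ** 2)

reps = 5000
slopes = np.empty(reps)
for r in range(reps):
    u_r = rng.normal(0, 1, size=n)   # fresh errors each replication
    y_r = beta0 + beta1 * x + u_r    # x held fixed across replications
    slopes[r] = ols_slope(x, y_r)

print(slopes.mean())  # close to beta1 = 2.0, illustrating E(beta1_hat) = beta1
```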


  6. Unbiasedness of $\hat{\beta}_0$. Recall that
     $$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} = (\beta_0 + \beta_1 \bar{x} + \bar{u}) - \hat{\beta}_1 \bar{x} = \beta_0 + (\beta_1 - \hat{\beta}_1) \bar{x} + \bar{u}$$
     - Take the expectation of both sides:
       $$E(\hat{\beta}_0) = \beta_0 + \bar{x} E(\beta_1 - \hat{\beta}_1) + E(\bar{u}) = \beta_0,$$
       using the facts that $E(\hat{\beta}_1) = \beta_1$ and $E(\bar{u}) = E(\frac{1}{n} \sum_i u_i) = \frac{1}{n} \sum_i E(u_i) = 0$.
     - The fact that $E(\hat{\beta}_0) = \beta_0$ means that $\hat{\beta}_0$ is unbiased for $\beta_0$.
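The identity $\hat{\beta}_0 = \beta_0 + (\beta_1 - \hat{\beta}_1)\bar{x} + \bar{u}$ can also be checked numerically on the simulated sample from the earlier sketch:

```python
b1_hat = ols_slope(x, y)
b0_hat = y.mean() - b1_hat * x.mean()   # beta0_hat = ybar - beta1_hat * xbar

# The decomposition used in the unbiasedness argument:
b0_decomp = beta0 + (beta1 - b1_hat) * x.mean() + u.mean()
assert np.isclose(b0_hat, b0_decomp)
```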


  7. Variance of OLS
     - Unbiasedness means that the sampling distribution of our estimator is centered around the true parameter.
     - We also want to think about how spread out this distribution is.
     - It is much easier to think about this variance under an additional assumption. Assumption SLR.5: $Var(u \mid x) = \sigma^2$ (homoskedasticity: the conditional variance of $u$ is a constant).


  8. Variance of OLS (continued). $Var(u \mid x) = E(u^2 \mid x) - [E(u \mid x)]^2$. Since $E(u \mid x) = 0$, $Var(u \mid x) = E(u^2 \mid x) = \sigma^2$.
     - By the law of iterated expectations, $E(u^2) = E(E(u^2 \mid x)) = E(\sigma^2) = \sigma^2$. Thus $Var(u) = E(u^2) = \sigma^2$: the unconditional variance of $u$ is a constant, too. $\sigma^2$ is also called the error variance.
     - If we take the conditional variance of both sides of $y = \beta_0 + \beta_1 x + u$, then $Var(y \mid x) = Var(\beta_0 + \beta_1 x \mid x) + Var(u \mid x) = Var(u \mid x) = \sigma^2$. Here $Var(\beta_0 + \beta_1 x \mid x) = 0$ because, when we condition on $x$, we can view $\beta_0 + \beta_1 x$ as a constant, whose variance is 0.
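A small simulation sketch of $Var(y \mid x) = \sigma^2$ (values illustrative): fix two values of $x$, draw many $y$'s at each, and the two sample variances should both be near $\sigma^2$, regardless of which $x$ we condition on:

```python
sigma2 = 1.0
draws = 100_000

for x_fixed in (2.0, 8.0):
    # Conditional distribution of y at a fixed x under homoskedasticity.
    y_cond = beta0 + beta1 * x_fixed + rng.normal(0, np.sqrt(sigma2), size=draws)
    print(x_fixed, y_cond.var())  # both close to sigma2: spread does not depend on x
```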


  9. Homoskedastic case. [Figure: the conditional density $f(y \mid x)$ has the same spread at $x_1$ and $x_2$, centered on the regression line $E(y \mid x) = \beta_0 + \beta_1 x$.]

  10. Heteroskedastic case. [Figure: the spread of $f(y \mid x)$ around $E(y \mid x) = \beta_0 + \beta_1 x$ changes with $x$, shown at $x_1$, $x_2$, $x_3$.]
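For contrast, a sketch of a heteroskedastic design (an illustrative assumption, not from the lecture): let the error standard deviation grow with $x$, so $Var(u \mid x)$ is no longer constant and SLR.5 fails, even though SLR.1-SLR.4, and hence unbiasedness, still hold:

```python
x_h = rng.uniform(0, 10, size=n)
u_h = rng.normal(0, 1, size=n) * x_h   # sd(u|x) = x, so Var(u|x) = x**2, not constant
y_h = beta0 + beta1 * x_h + u_h        # E(u|x) = 0 still holds, so OLS stays unbiased

# The spread of y around the regression line widens as x grows,
# matching the fanning-out pattern in the heteroskedastic figure.
```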
