Unit 6: Simple Linear Regression Lecture: Introduction to SLR

Statistics 101, Thomas Leininger, June 17, 2013. Recap: Chi-square test of independence (ball throwing; expected counts in two-way tables).


  1-2. Modeling numerical variables
  So far we have worked with:
  1 numerical variable (Z, T)
  1 categorical variable (χ²)
  1 numerical and 1 categorical variable (2-sample Z/T, ANOVA)
  2 categorical variables (χ² test for independence)
  Next up: relationships between two numerical variables, as well as modeling numerical response variables using a numerical or categorical explanatory variable. Wed-Friday: modeling numerical variables using many explanatory variables at once.

  3-9. Poverty vs. HS graduate rate
  The scatterplot below shows the relationship between the HS graduate rate in all 50 US states and DC and the % of residents who live below the poverty line (income below $23,050 for a family of 4 in 2012).
  [Scatterplot: % HS grad (x-axis, roughly 80-92) vs. % in poverty (y-axis, roughly 6-18)]
  Response variable? % in poverty.
  Explanatory variable? % HS grad.
  Relationship? Linear, negative, moderately strong.

  10. Outline
  1. Recap: Chi-square test of independence (ball throwing; expected counts in two-way tables)
  2. Modeling numerical variables
  3. Correlation
  4. Fitting a line by least squares regression (residuals; best line; the least squares line; prediction & extrapolation; conditions for the least squares line; R²; categorical explanatory variables)

  11-13. Correlation: Quantifying the relationship
  Correlation describes the strength of the linear association between two variables.
  It takes values between −1 (perfect negative) and +1 (perfect positive).
  A value of 0 indicates no linear association.
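To make the definition concrete, here is a minimal sketch that computes a sample correlation with NumPy. The data values are made up to mimic the deck's poverty vs. HS-grad pattern; the real state-level data is not reproduced here.

```python
import numpy as np

# Hypothetical (x, y) pairs mimicking % HS grad vs. % in poverty;
# these are illustrative values, not the lecture's dataset.
hs_grad = np.array([80.1, 83.4, 85.9, 87.2, 89.5, 91.3])
poverty = np.array([17.2, 14.0, 12.1, 10.8, 9.4, 8.1])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is the sample correlation R, always between -1 and +1.
R = np.corrcoef(hs_grad, poverty)[0, 1]
print(round(R, 2))  # strongly negative, close to -1
```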

  14-15. Correlation: Guessing the correlation
  Question: Which of the following is the best guess for the correlation between % in poverty and % HS grad?
  (a) 0.6  (b) −0.75  (c) −0.1  (d) 0.02  (e) −1.5
  [Scatterplot: % HS grad vs. % in poverty]
  Answer: (b); the summary statistics given later in the deck confirm R = −0.75. Note that (e) is impossible, since correlation is bounded by −1 and +1.

  16-17. Correlation: Guessing the correlation
  Question: Which of the following is the best guess for the correlation between % in poverty and % female householder (no husband present)?
  (a) 0.1  (b) −0.6  (c) −0.4  (d) 0.9  (e) 0.5
  [Scatterplot: % female householder, no husband present (x-axis, roughly 8-18) vs. % in poverty]

  18-19. Correlation: Assessing the correlation
  Question: Which of the following has the strongest correlation, i.e., a correlation coefficient closest to +1 or −1?
  [Four scatterplots, labeled (a) through (d)]
  Answer: (b); correlation measures linear association.
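The answer's caveat (correlation measures linear association only) is easy to verify numerically. This sketch, with made-up data, shows a perfect but nonlinear relationship whose correlation is nonetheless essentially zero:

```python
import numpy as np

# y is completely determined by x, but the relationship is quadratic,
# not linear, so the correlation coefficient is ~0.
x = np.linspace(-3, 3, 101)
y = x ** 2

print(np.corrcoef(x, y)[0, 1])  # ~0.0: no *linear* association
```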

  20. Section: Fitting a line by least squares regression

  21. Subsection: Residuals

  22. Residuals
  Residuals are the leftovers from the model fit: Data = Fit + Residual
  [Scatterplot: % HS grad vs. % in poverty, with the fitted line]

  23-25. Residuals (cont.)
  A residual is the difference between the observed and predicted y:
  e_i = y_i − ŷ_i
  [Scatterplot with the fitted line; DC lies 5.44 above the line, RI lies 4.16 below it]
  % living in poverty in DC is 5.44% more than predicted.
  % living in poverty in RI is 4.16% less than predicted.

  26. Subsection: Best line

  27-33. A measure for the best line
  We want a line that has small residuals:
  Option 1: minimize the sum of magnitudes (absolute values) of residuals: |e1| + |e2| + ... + |en|
  Option 2: minimize the sum of squared residuals (least squares): e1² + e2² + ... + en²
  Why least squares?
  1. Most commonly used.
  2. Easier to compute by hand and using software.
  3. In many applications, a residual twice as large as another is more than twice as bad.
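As a rough illustration of the least squares criterion (not part of the lecture), this sketch scores candidate lines on made-up toy data by their sum of squared residuals, keeps the best, and compares it against NumPy's built-in least squares fit:

```python
import numpy as np

# Made-up toy data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

def sse(b0, b1):
    """Sum of squared residuals for the candidate line y_hat = b0 + b1*x."""
    e = y - (b0 + b1 * x)
    return np.sum(e ** 2)

# Crude grid search over candidate intercepts and slopes.
candidates = [(b0, b1) for b0 in np.linspace(-2, 2, 81)
                       for b1 in np.linspace(0, 4, 81)]
best = min(candidates, key=lambda p: sse(*p))

print(best)                 # near the exact least squares (b0, b1)
print(np.polyfit(x, y, 1))  # exact fit, returned as [slope, intercept]
```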

  34. The least squares line
  ŷ = β0 + β1 x
  where ŷ is the predicted value of the response, x is the explanatory variable, β1 is the slope, and β0 is the intercept.
  Notation:
  Intercept: parameter β0, point estimate b0
  Slope: parameter β1, point estimate b1

  35. Subsection: The least squares line

  36. Given...
  Summary statistics for % HS grad (x) and % in poverty (y):
  mean: x̄ = 86.01, ȳ = 11.35
  sd: sx = 3.73, sy = 3.1
  correlation: R = −0.75
  [Scatterplot: % HS grad vs. % in poverty]

  37-39. Slope
  The slope of the regression can be calculated as b1 = (sy / sx) × R.
  In context: b1 = (3.1 / 3.73) × (−0.75) = −0.62
  Interpretation: for each percentage point increase in the HS graduate rate, we would expect the % living in poverty to decrease on average by 0.62 percentage points.

  40-42. Intercept
  The intercept is where the regression line intersects the y-axis. The calculation of the intercept uses the fact that a regression line always passes through (x̄, ȳ):
  b0 = ȳ − b1 x̄
  In context: b0 = 11.35 − (−0.62) × 86.01 = 64.68
  [Scatterplot with the axes extended to x = 0 to show the intercept]
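Both formulas can be checked directly from the summary statistics on slide 36. A quick sketch (note the slides round b1 to −0.62 before computing b0, which is how they arrive at 64.68):

```python
# Summary statistics from slide 36 (x = % HS grad, y = % in poverty).
x_bar, s_x = 86.01, 3.73
y_bar, s_y = 11.35, 3.1
R = -0.75

b1 = (s_y / s_x) * R     # slope: -0.6233..., shown as -0.62 on the slides
b0 = y_bar - b1 * x_bar  # intercept, via the point (x_bar, y_bar)

# Reproducing the slides' rounded arithmetic:
print(round(b1, 2), y_bar - (-0.62) * x_bar)  # -0.62 and ~64.68
```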

  43-44. Interpret b0
  Question: How do we interpret the intercept (b0 = 64.68)?
  [Scatterplot with the intercept marked at x = 0]
  States with no HS graduates are expected on average to have 64.68% of residents living below the poverty line.

  45. Recap: Interpretation of slope and intercept
  Intercept: when x = 0, y is expected to equal the value of the intercept.
  Slope: for each unit increase in x, y is expected to increase/decrease on average by the value of the slope.

  46. Regression line
  predicted % in poverty = 64.68 − 0.62 × % HS grad
  [Scatterplot: % HS grad vs. % in poverty, with the fitted line]

  47. Subsection: Prediction & extrapolation

  48. Prediction
  Using the linear model to predict the value of the response variable for a given value of the explanatory variable is called prediction: simply plug the value of x into the linear model equation. There will be some uncertainty associated with the predicted value; we'll talk about this next time.
  [Scatterplot: % HS grad vs. % in poverty, with the fitted line]
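A sketch of prediction with the fitted line; the 82% input is a hypothetical value chosen inside the observed range of roughly 80-92% HS grad:

```python
def predict_poverty(hs_grad):
    """Predicted % in poverty from the fitted line on slide 46."""
    return 64.68 - 0.62 * hs_grad

print(predict_poverty(82))  # ~13.84: predicted % in poverty at 82% HS grad
```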

  49. Extrapolation
  Applying a model estimate to values outside the range of the original data is called extrapolation. Sometimes the intercept might be an extrapolation.
  [Scatterplot with the axes extended to x = 0, where the intercept lies far outside the observed data]

  50-53. Examples of extrapolation
  1. http://www.colbertnation.com/the-colbert-report-videos/269929
  2. Sprinting: [figure shown on the slide]

  54. Subsection: Conditions for the least squares line

  55-57. Conditions for the least squares line
  1. Linearity
  2. Nearly normal residuals
  3. Constant variability

  58-60. Conditions: (1) Linearity
  The relationship between the explanatory and the response variable should be linear.
  Methods for fitting a model to non-linear relationships exist, but are beyond the scope of this class.
  Check using a scatterplot of the data, or a residuals plot.

  61-62. Anatomy of a residuals plot
  [Scatterplot of % HS grad vs. % in poverty with the fitted line, above a residuals plot on a −5 to 5 scale]
  RI: % HS grad = 81, % in poverty = 10.3
  predicted % in poverty = 64.68 − 0.62 × 81 = 14.46
  e = % in poverty − predicted % in poverty = 10.3 − 14.46 = −4.16
  DC: % HS grad = 86, % in poverty = 16.8
  predicted % in poverty = 64.68 − 0.62 × 86 = 11.36
  e = % in poverty − predicted % in poverty = 16.8 − 11.36 = 5.44
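The same arithmetic as a short sketch, reusing the RI and DC numbers from this slide:

```python
def predict_poverty(hs_grad):
    """Fitted line from the slides: 64.68 - 0.62 * (% HS grad)."""
    return 64.68 - 0.62 * hs_grad

# RI and DC values as given on the slide.
for state, x, y in [("RI", 81, 10.3), ("DC", 86, 16.8)]:
    e = y - predict_poverty(x)  # residual: observed minus predicted
    print(f"{state}: predicted {predict_poverty(x):.2f}, residual {e:+.2f}")
# RI: predicted 14.46, residual -4.16
# DC: predicted 11.36, residual +5.44
```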

  63-65. Conditions: (2) Nearly normal residuals
  The residuals should be nearly normal.
  This condition may not be satisfied when there are unusual observations that don't follow the trend of the rest of the data.
  Check using a histogram or normal probability plot of residuals.
  [Histogram of the residuals alongside a normal Q-Q plot]
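A sketch of the two diagnostic plots, assuming matplotlib and SciPy are available; the residuals are simulated, since the lecture's residual values aren't reproduced here:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated residuals standing in for the model's 51 (50 states + DC).
rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=2.1, size=51)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.hist(residuals, bins=10)  # histogram: look for rough symmetry
ax1.set_xlabel("residuals")
stats.probplot(residuals, dist="norm", plot=ax2)  # normal Q-Q plot
plt.show()
```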

  66-69. Conditions: (3) Constant variability
  The variability of points around the least squares line should be roughly constant.
  This implies that the variability of residuals around the 0 line should be roughly constant as well.
  Also called homoscedasticity.
  Check using a residuals plot.
  [Scatterplot with the fitted line, above the corresponding residuals plot]
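A sketch of a residuals plot for checking constant variability, again with simulated data scattered around the slides' fitted line:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated states: constant spread around the fitted line.
rng = np.random.default_rng(1)
hs_grad = rng.uniform(80, 92, size=51)
poverty = 64.68 - 0.62 * hs_grad + rng.normal(0.0, 2.1, size=51)

residuals = poverty - (64.68 - 0.62 * hs_grad)
plt.scatter(hs_grad, residuals)
plt.axhline(0.0, linestyle="--")  # residuals should hover around 0
plt.xlabel("% HS grad")
plt.ylabel("residual")
plt.show()
```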

  70-71. Checking conditions
  Question: What condition is this linear model obviously violating? [Residuals plot shown on the slide]
  (a) Constant variability  (b) Linear relationship  (c) Non-normal residuals  (d) No extreme outliers

  72-73. Checking conditions
  Question: What condition is this linear model obviously violating? [A second residuals plot shown on the slide]
  (a) Constant variability  (b) Linear relationship  (c) Non-normal residuals  (d) No extreme outliers

  74. Subsection: R²

  75-79. R²
  The strength of the fit of a linear model is most commonly evaluated using R².
  R² is calculated as the square of the correlation coefficient.
  It tells us what percent of the variability in the response variable is explained by the model.
  The remainder of the variability is explained by variables not included in the model.
  For the model we've been working with, R² = (−0.75)² ≈ 0.56.
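The calculation, using the correlation R = −0.75 from slide 36:

```python
R = -0.75      # correlation between % HS grad and % in poverty
print(R ** 2)  # 0.5625: ~56% of the variability in % in poverty
               # is explained by the model
```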
