Lecture #11: Logistic Regression - Part II


  1. Lecture #11: Logistic Regression - Part II. Data Science 1: CS 109A, STAT 121A, AC 209A, E-109A. Pavlos Protopapas, Kevin Rader, Margo Levine, Rahul Dave

  2. Lecture Outline
     - Logistic Regression: a Brief Review
     - Classification Boundaries
     - Regularization in Logistic Regression
     - Multinomial Logistic Regression
     - Bayes' Theorem and Misclassification Rates
     - ROC Curves

  3. Logistic Regression: a Brief Review

  4. Multiple Logistic Regression
     Earlier we saw the general form of simple logistic regression, meaning when there is just one predictor used in the model. What was the model statement (in terms of linear predictors)?

         \log\left(\frac{P(Y=1)}{1 - P(Y=1)}\right) = \beta_0 + \beta_1 X

     Multiple logistic regression is a generalization to multiple predictors. More specifically, we can define a multiple logistic regression model to predict P(Y=1) as such:

         \log\left(\frac{P(Y=1)}{1 - P(Y=1)}\right) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \ldots + \beta_p X_p

     where there are p predictors: X = (X_1, X_2, \ldots, X_p).

     Note: statisticians are often lazy and use the notation log to mean ln (the text does this). We will write \log_{10} if this is what we mean.
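To make the model concrete, here is a minimal sketch of fitting a multiple logistic regression on synthetic data; the data, coefficient values, and use of scikit-learn are illustrative assumptions, not part of the lecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: 200 observations, p = 3 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([0.5, -1.0, 2.0])
p = 1 / (1 + np.exp(-(0.3 + X @ true_beta)))   # P(Y=1) under the model
y = (rng.random(200) < p).astype(int)

# A very large C (inverse regularization strength) approximates the plain
# maximum-likelihood fit described on this slide.
model = LogisticRegression(C=1e6)
model.fit(X, y)
print("beta_0:", model.intercept_)        # estimated intercept
print("beta_1..beta_p:", model.coef_)     # one coefficient per predictor
```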

  5. Interpreting Multiple Logistic Regression: an Example
     Let's get back to the NFL data. We are attempting to predict whether a play results in a TD based on location (yard line) and whether the play was a pass. The simultaneous effect of these two predictors can be brought into one model. Recall from earlier we had the following estimated models:

         \log\left(\frac{\hat{P}(Y=1)}{1 - \hat{P}(Y=1)}\right) = -7.425 + 0.0626 \cdot X_{\text{yard}}

         \log\left(\frac{\hat{P}(Y=1)}{1 - \hat{P}(Y=1)}\right) = -4.061 + 1.106 \cdot X_{\text{pass}}

     The results for the multiple logistic regression model are on the next slide.
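To go from an estimated log-odds to a probability, invert the logit. A minimal sketch using the yard-line-only coefficients above; the yard value of 80 is a hypothetical example:

```python
import numpy as np

def prob_from_logodds(logodds):
    """Invert the logit: P(Y=1) = 1 / (1 + exp(-log-odds))."""
    return 1 / (1 + np.exp(-logodds))

yard = 80  # hypothetical field position, for illustration only
logodds = -7.425 + 0.0626 * yard   # yard-line-only model from this slide
print(prob_from_logodds(logodds))  # estimated P(touchdown) at that yard line
```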

  6. Interpreting Multiple Logistic Regression: an Example [regression output for the two-predictor model; figure not reproduced in this transcript]

  7. Some questions
     1. Write down the complete model. Break this down into the model to predict log-odds of a touchdown based on the yard line for passes and the same model for non-passes. How is this different from the previous model (without interaction)?
     2. Estimate the odds ratio of a TD comparing passes to non-passes (see the sketch after this list).
     3. Is there any evidence of multicollinearity in this model?
     4. Is there any confounding in this problem?
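For question 2, a coefficient on a 0/1 predictor exponentiates to an odds ratio. A sketch using the simple pass-only model's coefficient from slide 5, since the multiple-model estimates live only in the unreproduced output figure:

```python
import math

# beta_pass = 1.106 is taken from the simple pass-only model; substitute the
# multiple-model coefficient from the output figure for the actual answer.
odds_ratio = math.exp(1.106)
print(odds_ratio)  # ~3.02: passes have roughly 3x the odds of a TD
```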

  8. Interactions in Multiple Logistic Regression
     Just like in linear regression, interaction terms can be considered in logistic regression. An interaction term is incorporated into the model the same way, and the interpretation is very similar (on the log-odds scale of the response, of course). Write down the model for the NFL data for the 2 predictors plus the interaction term.
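A minimal sketch of fitting such an interaction model with a statsmodels formula; the toy data and the column names `td`, `yard`, and `pass_play` are illustrative assumptions, not the actual NFL dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for the NFL data.
rng = np.random.default_rng(1)
n = 500
nfl = pd.DataFrame({"yard": rng.integers(1, 100, n),
                    "pass_play": rng.integers(0, 2, n)})
logodds = (-7 + 0.06 * nfl["yard"] + 1.1 * nfl["pass_play"]
           + 0.01 * nfl["yard"] * nfl["pass_play"])
nfl["td"] = (rng.random(n) < 1 / (1 + np.exp(-logodds))).astype(int)

# `yard * pass_play` expands to yard + pass_play + yard:pass_play, i.e.
# log-odds(td) = b0 + b1*yard + b2*pass_play + b3*(yard x pass_play).
model = smf.logit("td ~ yard * pass_play", data=nfl).fit(disp=False)
print(model.params)
```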

  9. Interpreting Multiple Logistic Regression with Interaction: an Example [regression output for the interaction model; figure not reproduced in this transcript]

  10. Some questions
     1. Write down the complete model. Break this down into the model to predict log-odds of a touchdown based on the yard line for passes and the same model for non-passes. How is this different from the previous model (without interaction)?
     2. Use this model to estimate the probability of a touchdown for a pass at the 20 yard line. Do the same for a run at the 20 yard line (see the sketch after this list).
     3. Use this model to estimate the probability of a touchdown for a pass at the 99 yard line. Do the same for a run at the 99 yard line.
     4. Is this a stronger model than the previous one? How would we check?
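For questions 2 and 3, the computation is always the same: plug the yard line and pass indicator into the fitted log-odds and invert the logit. A sketch with placeholder coefficients, since the actual estimates are in the unreproduced output figure:

```python
import numpy as np

def td_prob(yard, is_pass, b0, b1, b2, b3):
    """P(TD) from the interaction model: logit = b0 + b1*yard + b2*pass + b3*yard*pass."""
    logodds = b0 + b1 * yard + b2 * is_pass + b3 * yard * is_pass
    return 1 / (1 + np.exp(-logodds))

b = (-7.4, 0.06, 1.1, 0.01)  # placeholder coefficients, NOT the fitted values
print(td_prob(20, 1, *b), td_prob(20, 0, *b))  # pass vs. run at the 20
print(td_prob(99, 1, *b), td_prob(99, 0, *b))  # pass vs. run at the 99
```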

  11. Classification Boundaries

  12. Classification
     Recall that we could attempt to classify each observation purely based on whether the estimated P(Y=1) from the model was greater than 0.5. When dealing with 'well-separated' data, logistic regression can work well in performing classification. We saw a 2-D plot last time with two predictors, X_1 and X_2, depicting the classes as different colors. A similar one is shown on the next slide.
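A one-line sketch of this thresholding rule on toy fitted probabilities:

```python
import numpy as np

p_hat = np.array([0.12, 0.55, 0.49, 0.91])  # toy fitted probabilities
y_hat = (p_hat > 0.5).astype(int)           # classify as 1 iff P(Y=1) > 0.5
print(y_hat)                                # -> [0 1 0 1]
```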

  13. 2D Classification in Logistic Regression: an Example [2-D scatter plot of the two classes against X_1 and X_2; figure not reproduced in this transcript]

  14. 2D Classification in Logistic Regression: an Example
     Would a logistic regression model perform well in classifying the observations in this example? What would be a good logistic regression model to classify these points? Based on these predictors, two separate logistic regression models were considered, based on polynomials of different order in X_1 and X_2 and their interactions. The 'circles' represent the boundaries for classification. How can the classification boundary be calculated for a logistic regression?
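Since the fitted probability equals the 0.5 threshold exactly when the linear predictor (the log-odds) is zero, the classification boundary is the zero set of the fitted polynomial. A minimal sketch on toy circular data, with scikit-learn's polynomial-features pipeline as an assumed stand-in for the models in the plot:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy data: class 1 lives inside a circle in the (X1, X2) plane.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0]**2 + X[:, 1]**2 < 1).astype(int)

# Degree-2 polynomial features (X1, X2, X1^2, X1*X2, X2^2) let the
# linear-in-coefficients log-odds trace out a circular boundary: the
# boundary is exactly the set where the fitted log-odds equal zero.
clf = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```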

  15. 2D Classification in Logistic Regression: an Example
     In the previous plot, which classification boundary performs better? How can you tell? How would you make this determination in an actual data example? We could determine the misclassification rates in a left-out validation or test set(s).
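A minimal sketch of that check on the same kind of toy data: hold out a test set, fit on the rest, and report 1 minus accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
y = (X[:, 0]**2 + X[:, 1]**2 < 1).astype(int)

# Hold out 30% of the data; fit on the rest and score on the held-out part.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())
clf.fit(X_tr, y_tr)
print(1 - clf.score(X_te, y_te))  # score() is accuracy -> misclassification rate
```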

  16. Regularization in Logistic Regression

  17. Regularization in Linear Regression
     Based on the likelihood framework, a loss function can be determined from the likelihood function. We saw in linear regression that maximizing the log-likelihood is equivalent to minimizing the sum of squared errors:

         \arg\min_{\beta} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - (\beta_0 + \beta_1 x_{1i} + \ldots + \beta_p x_{pi}) \right)^2

     A regularization approach adds a penalty factor to this equation, which for ridge regression becomes:

         \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - \left( \beta_0 + \sum_{j=1}^{p} \beta_j x_{ji} \right) \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2

     This penalty shrinks the estimates towards zero, and has the analogue of using a Normal prior in the Bayesian paradigm.
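A sketch of the ridge objective in practice; scikit-learn's Ridge exposes the penalty weight λ as `alpha`, and the toy data are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Toy data: only the first two of ten predictors actually matter.
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 10))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha plays the role of lambda
# The penalty shrinks coefficients toward zero relative to plain least squares:
print(np.abs(ols.coef_).sum(), np.abs(ridge.coef_).sum())
```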
