
CS 445 Introduction to Machine Learning Logistic Regression - PowerPoint PPT Presentation



  1. CS 445 Introduction to Machine Learning: Logistic Regression. Instructor: Dr. Kevin Molloy

  2. Review: Linear regression. Finding the weights to assign to a polynomial so that the resulting line minimizes the "loss".
     h(x) = w0 + w1*x1 + ... + wd*xd, i.e. h(x) = wᵀx
     This hypothesis function h(x) makes a real-valued prediction (regression).
     Linear regression loss: L(w) = (1/N) Σ over (xᵢ, yᵢ) ∈ D of (yᵢ − wᵀxᵢ)²
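The hypothesis and squared-error loss above can be sketched as follows (a minimal illustration, assuming NumPy; the names `hypothesis`, `mse_loss`, and the toy data are mine, not from the slides):

```python
import numpy as np

def hypothesis(w, X):
    """h(x) = w^T x for each row of X; the bias w0 is folded in
    by giving X a leading column of ones."""
    return X @ w

def mse_loss(w, X, y):
    """L(w) = (1/N) * sum over (x_i, y_i) of (y_i - w^T x_i)^2"""
    residuals = y - hypothesis(w, X)
    return float(np.mean(residuals ** 2))

# Toy data generated from y = 2*x with zero bias.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # column 0 is the bias term
y = np.array([2.0, 4.0, 6.0])
print(mse_loss(np.array([0.0, 2.0]), X, y))  # perfect weights -> loss 0.0
print(mse_loss(np.array([0.0, 0.0]), X, y))  # all-zero weights -> much larger loss
```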

  3. Approach for Linear Regression
     Loss: L(w) = (1/N) Σ over (xᵢ, yᵢ) ∈ D of (yᵢ − wᵀxᵢ)²
     Optimize (find the min) of the loss function using the derivatives:
     ∂L/∂wⱼ = −(2/N) Σ over i = 1..N of (yᵢ − wᵀxᵢ) xᵢⱼ
     ∂L/∂w0 = −(2/N) Σ over i = 1..N of (yᵢ − wᵀxᵢ)

  4. Linear Regression Algorithm
     1. Make predictions using the current w and compute the loss.
     2. Compute the derivatives and update the w's.
     3. When the change in loss is small, STOP; otherwise, go back to step 1.
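The three steps above can be sketched as a plain gradient descent loop (an illustrative implementation, assuming NumPy; the learning rate, tolerance, and function name are my choices, not from the slides):

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.01, tol=1e-10, max_iters=100_000):
    """Gradient descent on the squared loss L(w) = (1/N) sum (y_i - w^T x_i)^2."""
    w = np.zeros(X.shape[1])
    prev_loss = np.inf
    for _ in range(max_iters):
        residuals = y - X @ w                       # step 1: predict with current w
        loss = np.mean(residuals ** 2)              # ...and compute the loss
        grad = -(2.0 / len(y)) * (X.T @ residuals)  # step 2: derivatives...
        w -= lr * grad                              # ...and update the w's
        if abs(prev_loss - loss) < tol:             # step 3: stop when the loss
            break                                   # barely changes
        prev_loss = loss
    return w

# Fit y = 2*x (bias column of ones prepended to X).
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
w = fit_linear_regression(X, y)
print(w)  # close to [0, 2]
```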

  5. Logistic Regression: world's WORST algorithm name. Transform linear regression into a classification algorithm:
     h(x) >= 0.5, predict y = 1 (X class)
     h(x) < 0.5, predict y = 0 (O class)
     [Slide figure: scatter plot of X-class and O-class points separated by a line.]

  6. Map Function to Values Between 0 and 1
     Sigmoid(z) = 1 / (1 + e^(−z))
     so the logistic hypothesis is h(x) = 1 / (1 + e^(−wᵀx)).
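A one-line sketch of the sigmoid squashing function (assuming NumPy), checking its behavior at the decision boundary and in the tails:

```python
import numpy as np

def sigmoid(z):
    """Map any real z into (0, 1): sigmoid(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5, exactly the decision boundary
print(sigmoid(10.0))   # close to 1 for large positive z
print(sigmoid(-10.0))  # close to 0 for large negative z
```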

  7. Different Loss Function
     Logistic hypothesis: h(x) = 1 / (1 + e^(−wᵀx))
     Linear regression loss: L(w) = (1/N) Σ over (xᵢ, yᵢ) ∈ D of (yᵢ − wᵀxᵢ)²
     Plugging the sigmoid into this squared loss makes it non-convex, so logistic regression needs a different loss function.

  8. Cost Function for Logistic Regression
     Loss(h(x), y) = −log h(x)        if y = 1
     Loss(h(x), y) = −log(1 − h(x))   if y = 0

  9. Cost Function for Logistic Regression
     Loss(h(x), y) = −log h(x) if y = 1; −log(1 − h(x)) if y = 0
     When y = 1:
       if h(x) = 1, then Cost = 0 (since −log(1) = 0)
       if h(x) = 0, then the loss (or penalty) will be very large.

  10. Cost Function for Logistic Regression
     Loss(h(x), y) = −log h(x) if y = 1; −log(1 − h(x)) if y = 0
     When y = 0:
       if h(x) = 0, then Cost = 0 (since −log(1 − h(x)) = 0)
       if h(x) = 1, then the loss (or penalty) will be very large.

  11. Logistic Regression Loss
     Loss(h(x), y) = −log h(x) if y = 1; −log(1 − h(x)) if y = 0
     Interpreting h(x) as a probability: P(y = 1 | x) = h(x) and P(y = 0 | x) = 1 − h(x).
     Over the whole data set, the likelihood is Π over i = 1..N of h(xᵢ)^yᵢ · (1 − h(xᵢ))^(1 − yᵢ),
     so the combined per-example loss −y log h(x) − (1 − y) log(1 − h(x)) is minimized exactly when this likelihood is maximized.
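Putting the pieces together, the average cross-entropy loss over a data set can be sketched as (an illustration, assuming NumPy; the `eps` clipping constant is my addition to keep log away from 0, not from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y, eps=1e-12):
    """Average cross-entropy:
    -(1/N) * sum_i [y_i * log h(x_i) + (1 - y_i) * log(1 - h(x_i))].
    eps clips predictions away from 0 and 1 so log never sees 0."""
    h = np.clip(sigmoid(X @ w), eps, 1 - eps)
    return float(-np.mean(y * np.log(h) + (1 - y) * np.log(1 - h)))

# Confident, correct predictions give a near-zero loss...
X = np.array([[10.0], [-10.0]])
y = np.array([1.0, 0.0])
w = np.array([1.0])
print(logistic_loss(w, X, y))   # near 0

# ...while confident, wrong predictions are penalized heavily.
print(logistic_loss(-w, X, y))  # large
```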
