CS 445 Introduction to Machine Learning: Logistic Regression - PowerPoint PPT Presentation


  1. CS 445 Introduction to Machine Learning: Logistic Regression. Instructor: Dr. Kevin Molloy

  2. Review: Linear regression. Finding the weights to assign to a polynomial so that the resulting line minimizes the "loss". h(x) = w_0 + w_1 x_1 + ... + w_d x_d, or in vector form, h(x) = w^T x. This hypothesis function h(x) makes a real-valued prediction (regression). Linear regression loss: L(w) = 1/(2N) * sum over (x_i, y_i) in D of (y_i - w^T x_i)^2

  3. Approach for Linear Regression. Linear regression loss: L(w) = 1/(2N) * sum over (x_i, y_i) in D of (y_i - w^T x_i)^2. Optimize (find the min of) the loss function using the derivatives: dL/dw_j = -(1/N) * sum_{i=1..N} (y_i - w^T x_i) * x_{i,j} for each feature weight w_j, and dL/dw_0 = -(1/N) * sum_{i=1..N} (y_i - w^T x_i) for the intercept.

  4. Linear Regression Algorithm: 1. Make predictions using the current w and compute the loss. 2. Compute the derivatives and update the w's. 3. When the change in loss is small, STOP; otherwise, go back to step 1.
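The three-step loop above can be sketched as follows. This is an illustrative implementation, not the instructor's code; the function name and hyperparameter defaults are my own choices.

```python
import numpy as np

def linear_regression_gd(X, y, lr=0.1, tol=1e-10, max_iters=50000):
    """Fit w minimizing L(w) = 1/(2N) * sum_i (y_i - w^T x_i)^2
    by gradient descent. X is (N, d) with a column of ones for the
    intercept; y is (N,)."""
    N, d = X.shape
    w = np.zeros(d)
    prev_loss = np.inf
    for _ in range(max_iters):
        preds = X @ w                       # step 1: predictions h(x) = w^T x
        residuals = y - preds
        loss = (residuals ** 2).mean() / 2  # squared-error loss
        grad = -(X.T @ residuals) / N       # step 2: dL/dw
        w -= lr * grad                      # update the w's
        if abs(prev_loss - loss) < tol:     # step 3: stop when loss change is small
            break
        prev_loss = loss
    return w
```

For example, fitting points drawn from y = 3 + 2x recovers weights close to (3, 2).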

  5. Logistic Regression: world's WORST algorithm name. Transform linear regression into a classification algorithm: if h(x) >= 0.5, predict y = 1 (the X class); if h(x) < 0.5, predict y = 0 (the O class). [Figure: scatter plot of X-class and O-class points separated by a decision boundary]

  6. Map Function to Values Between 0 and 1: sigmoid(z) = 1 / (1 + e^(-z))
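A minimal sketch of the sigmoid from this slide. It squashes any real-valued score z = w^T x into (0, 1), so the 0.5 threshold from the previous slide corresponds to z = 0.

```python
import math

def sigmoid(z):
    """sigmoid(z) = 1 / (1 + e^(-z)): maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(z):
    """Classify using the 0.5 threshold: y = 1 iff sigmoid(z) >= 0.5,
    which happens exactly when z >= 0."""
    return 1 if sigmoid(z) >= 0.5 else 0
```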

  7. Different Loss Function. With the sigmoid hypothesis h(x) = 1 / (1 + e^(-w^T x)), the squared-error loss used for linear regression, L(w) = 1/(2N) * sum over (x_i, y_i) in D of (y_i - h(x_i))^2, is no longer convex in w, so logistic regression needs a different loss function.

  8. Cost Function for Logistic Regression. Loss(h(x), y) = -log(h(x)) if y = 1; -log(1 - h(x)) if y = 0

  9. Cost Function for Logistic Regression. Loss(h(x), y) = -log(h(x)) if y = 1; -log(1 - h(x)) if y = 0. When y = 1: if h(x) = 1, then Cost = 0 (since -log(1) = 0); if h(x) approaches 0, then the loss (or penalty) will be very large.

  10. Cost Function for Logistic Regression. Loss(h(x), y) = -log(h(x)) if y = 1; -log(1 - h(x)) if y = 0. When y = 0: if h(x) = 0, then Cost = 0 (since -log(1 - h(x)) = -log(1) = 0); if h(x) approaches 1, then the loss (or penalty) will be very large.
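A small worked example of the per-example log loss from the two slides above. The probability values plugged in are my own illustrative numbers, not from the slides.

```python
import math

def log_loss(h, y):
    """Per-example cost: -log(h(x)) when y = 1, -log(1 - h(x)) when y = 0."""
    return -math.log(h) if y == 1 else -math.log(1.0 - h)

# Confident and correct -> tiny penalty:
#   log_loss(0.99, 1) = -log(0.99) ~ 0.01
# Confident and wrong -> huge penalty:
#   log_loss(0.01, 1) = -log(0.01) ~ 4.6
```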

  11. Logistic Regression Loss. Loss(h(x), y) = -log(h(x)) if y = 1; -log(1 - h(x)) if y = 0. Since P(y = 1 | x) = h(x) and P(y = 0 | x) = 1 - h(x), a single example's probability can be written compactly as P(y | x) = h(x)^y * (1 - h(x))^(1 - y), and the likelihood of the whole data set is the product over i = 1..N of h(x_i)^{y_i} * (1 - h(x_i))^{1 - y_i}.
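The connection on this slide can be checked numerically: minimizing the summed log loss is the same as maximizing the likelihood product, because -log of the product is exactly the sum of the per-example losses. A sketch in my own notation:

```python
import math

def likelihood(h_vals, y_vals):
    """Product over i of h_i^{y_i} * (1 - h_i)^{1 - y_i}."""
    prod = 1.0
    for h, y in zip(h_vals, y_vals):
        prod *= (h ** y) * ((1.0 - h) ** (1 - y))
    return prod

def total_log_loss(h_vals, y_vals):
    """Sum over i of -y_i*log(h_i) - (1 - y_i)*log(1 - h_i)."""
    return sum(-y * math.log(h) - (1 - y) * math.log(1.0 - h)
               for h, y in zip(h_vals, y_vals))
```

For any predictions h and labels y, total_log_loss(h, y) equals -log(likelihood(h, y)).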
