
Binary Classification with Linear Models, CMSC 422, Marine Carpuat (marine@cs.umd.edu) - PowerPoint PPT Presentation



  1. Binary Classification with Linear Models CMSC 422 Marine Carpuat (marine@cs.umd.edu) Figures credit: Piyush Rai

  2. Topics • Linear Models – Loss functions – Regularization • Gradient Descent • Calculus refresher – Convexity – Gradients [CIML Chapter 6]

  3. Binary classification via hyperplanes • A classifier is a hyperplane (w, b) • At test time, we check which side of the hyperplane an example falls on: $\hat{y} = \mathrm{sign}(w^\top x + b)$ • This is a linear classifier – because the prediction is based on a linear combination of the feature values x
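
A minimal sketch of this decision rule in Python (NumPy); the weight vector, bias, and test points below are made up for illustration:

import numpy as np

def predict(w, b, x):
    # Linear classifier: report which side of the hyperplane w.x + b = 0
    # the example x falls on (+1 for the positive side, -1 otherwise).
    return 1 if np.dot(w, x) + b > 0 else -1

# Hypothetical 2-feature classifier and test points
w = np.array([2.0, -1.0])
b = 0.5
print(predict(w, b, np.array([1.0, 1.0])))   # -> 1  (score = 1.5)
print(predict(w, b, np.array([-1.0, 2.0])))  # -> -1 (score = -3.5)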

  4. Learning a Linear Classifier as an Optimization Problem • Objective function = loss function + regularizer • Loss function: measures how well the classifier fits the training data • Regularizer: prefers solutions that generalize well • Indicator function: 1 if (.) is true, 0 otherwise • The loss function above is called the 0-1 loss
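
The objective itself appears on the slide only as a figure; in the standard CIML form it is roughly the regularized 0-1 loss (a sketch, with λ a regularization hyperparameter):

$$\min_{w,b} \;\; \sum_{n} \mathbf{1}\big[\, y_n (w^\top x_n + b) \le 0 \,\big] \; + \; \lambda\, R(w, b)$$

where the indicator term counts training mistakes and R is the regularizer.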

  5. Learning a Linear Classifier as an Optimization Problem • Problem: The 0-1 loss above is NP-hard to optimize • Solution: Different loss function approximations and regularizers lead to specific algorithms (e.g., perceptron, support vector machines, logistic regression, etc.)

  6. The 0-1 Loss • Small changes in w,b can lead to big changes in the loss value • 0-1 loss is non-smooth, non-convex

  7. Calculus refresher: Smooth functions, convex functions

  8. Approximating the 0-1 loss with surrogate loss functions • Examples (with b = 0) – Hinge loss – Log loss – Exponential loss • All are convex upper bounds on the 0-1 loss
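
The slide plots these losses rather than listing formulas; their standard definitions, written in terms of the score $s = w^\top x$ (with b = 0) and label y, are:

$$\ell_{\text{hinge}}(y, s) = \max(0,\, 1 - y s), \qquad \ell_{\text{log}}(y, s) = \log\big(1 + e^{-y s}\big), \qquad \ell_{\text{exp}}(y, s) = e^{-y s}$$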

  9. Approximating the 0-1 loss with surrogate loss functions • Examples (with b = 0) – Hinge loss – Log loss – Exponential loss • Q: Which of these loss functions is not smooth?

  10. Approximating the 0-1 loss with surrogate loss functions • Examples (with b = 0) – Hinge loss – Log loss – Exponential loss • Q: Which of these loss functions is most sensitive to outliers?

  11. Casting Linear Classification as an Optimization Problem • Objective function = loss function + regularizer • Loss function: measures how well the classifier fits the training data • Regularizer: prefers solutions that generalize well • Indicator function: 1 if (.) is true, 0 otherwise • The loss function above is called the 0-1 loss

  12. The regularizer term • Goal: find simple solutions (inductive bias) • Ideally, we want most entries of w to be zero, so the prediction depends only on a small number of features • Formally, we want to minimize the number of nonzero entries of w • That’s NP-hard, so we use approximations instead – e.g., we encourage the weights w_d to be small

  13. Norm-based Regularizers • ℓ_p norms can be used as regularizers • (Figure: contour plots of the ℓ_p ball for p = 2, p = 1, and p < 1)

  14. Norm-based Regularizers • ℓ_p norms can be used as regularizers • Smaller p favors sparse vectors w – i.e., most entries of w are close or equal to 0 • ℓ_2 norm: convex, smooth, easy to optimize • ℓ_1 norm: encourages sparse w, convex, but not smooth at the axis points • p < 1: the norm becomes non-convex and hard to optimize
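
For reference, the ℓ_p norm behind these regularizers is

$$\|w\|_p = \Big( \sum_{d=1}^{D} |w_d|^p \Big)^{1/p}$$

with the ℓ_2 regularizer typically used in its squared form $\|w\|_2^2$.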

  15. Casting Linear Classification as an Optimization Problem • Objective function = loss function + regularizer • Loss function: measures how well the classifier fits the training data • Regularizer: prefers solutions that generalize well • Indicator function: 1 if (.) is true, 0 otherwise • The loss function above is called the 0-1 loss

  16. What is the perceptron optimizing? • Loss function is a variant of the hinge loss
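
The formula appears on the slide only as a figure; the loss usually attributed to the perceptron is the hinge loss without the margin of 1:

$$\ell_{\text{perceptron}}(y, w^\top x + b) = \max\big(0,\, -y\,(w^\top x + b)\big)$$

so correctly classified examples incur zero loss, and mistakes are penalized in proportion to how far they fall on the wrong side of the hyperplane.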

  17. Recap: Linear Models • General framework for binary classification • Casts learning as an optimization problem • The optimization objective combines 2 terms – loss function: measures how well the classifier fits the training data – regularizer: measures how simple the classifier is • Does not assume the data is linearly separable • Lets us separate the model definition from the training algorithm

  18. Calculus refresher: Gradients

  19. Gradient descent • A general solution for our optimization problem • Idea: take iterative steps to update the parameters in the direction opposite to the gradient (the direction of steepest descent)

  20. Gradient descent algorithm
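
The algorithm is shown on the slide as a figure; below is a minimal Python sketch of batch gradient descent, applied (as an illustrative assumption) to the log loss with a squared ℓ_2 regularizer:

import numpy as np

def gradient_descent(X, y, lam=0.1, eta=0.1, num_iters=100):
    # Batch gradient descent for a linear classifier, minimizing
    #   sum_n log(1 + exp(-y_n (w.x_n + b))) + (lam / 2) * ||w||^2
    # (log loss + squared l2 regularizer, an illustrative choice).
    # X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(num_iters):
        margins = y * (X @ w + b)               # y_n (w.x_n + b)
        coef = -y / (1.0 + np.exp(margins))     # per-example d(loss)/d(score)
        grad_w = X.T @ coef + lam * w
        grad_b = coef.sum()
        w -= eta * grad_w                       # step opposite to the gradient
        b -= eta * grad_b
    return w, b

# Toy usage with made-up data
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = gradient_descent(X, y)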

  21. Recap: Linear Models • General framework for binary classification • Casts learning as an optimization problem • The optimization objective combines 2 terms – loss function: measures how well the classifier fits the training data – regularizer: measures how simple the classifier is • Does not assume the data is linearly separable • Lets us separate the model definition from the training algorithm (Gradient Descent)
