Machine Learning (CSE 446): Learning as Minimizing Loss; Least Squares
Sham M Kakade
© 2018 University of Washington cse446-staff@cs.washington.edu
Review

Alternate View of PCA: Minimizing Reconstruction Error
◮ Idea: replace the zero-one loss with a (sharp) upper bound ℓ, and minimize ℓ instead.
◮ Is the loss differentiable? Is it sensitive to changes in w?
◮ Is it convex?
◮ For binary classification, the squared loss is an upper bound on the zero-one loss.
◮ It makes sense more generally, e.g., if we want to predict real-valued y.
◮ We have a convex optimization problem.
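A small sketch of these claims (the data and variable names X, y, w here are illustrative, not from the slides): for labels y ∈ {−1, +1}, the squared loss (y − ŷ)² is at least 1 whenever sign(ŷ) ≠ y, so it upper bounds the zero-one loss, and the least-squares minimizer has the closed form w = (XᵀX)⁻¹Xᵀy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data with labels y in {-1, +1}
# (this setup is an illustration, not the lecture's example).
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

# Least squares: minimize sum_i (y_i - w . x_i)^2 via the normal equations.
w = np.linalg.solve(X.T @ X, X.T @ y)

y_hat = X @ w
zero_one = (np.sign(y_hat) != y).astype(float)  # zero-one loss per example
squared = (y - y_hat) ** 2                      # squared loss per example

# For y in {-1, +1}, the squared loss upper bounds the zero-one loss:
# if the sign is wrong, y * y_hat <= 0, so (y - y_hat)^2 = (1 - y*y_hat)^2 >= 1.
assert np.all(squared >= zero_one - 1e-12)
```

The squared loss is differentiable and convex in w, which is exactly what the zero-one loss lacks.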
◮ Suppose we are in “high dimensions”: more dimensions than data points.
◮ Inductive bias: we need a way to control the complexity of the model.
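One standard way to impose such an inductive bias (shown here only as an illustrative sketch; the slides may develop a different method) is ridge regression: add a penalty λ‖w‖² to the squared loss, which makes the normal equations invertible even when there are more dimensions than data points.

```python
import numpy as np

rng = np.random.default_rng(1)

# More dimensions than data points: X.T @ X is singular (rank <= n < d),
# so plain least squares has no unique solution.
n, d = 20, 50
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Ridge regression (illustrative complexity control):
# minimize ||y - X w||^2 + lam * ||w||^2,
# with closed form w = (X^T X + lam * I)^{-1} X^T y.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Larger λ shrinks w toward zero (a simpler model); λ → 0 recovers unregularized least squares, which is ill-posed in this d > n regime.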