
Introduction to Machine Learning: Risk Minimization (Barnabás Póczos)



  1. Introduction to Machine Learning: Risk Minimization. Barnabás Póczos

  2. What have we seen so far? Several classification & regression algorithms seem to work fine on training datasets: • Linear regression • Gaussian Processes • Naïve Bayes classifier • Support Vector Machines. How good are these algorithms on unknown test sets? How many training samples do we need to achieve small error? What is the smallest possible error we can achieve? => Learning theory. 2

  3. Outline • Risk and loss – Loss functions – Risk – Empirical risk vs True risk – Empirical Risk minimization • Underfitting and Overfitting • Classification • Regression 3

  4. Supervised Learning Setup. Generative model of the data: the training and test pairs (X, Y) are drawn i.i.d. from a joint distribution P(X, Y). Regression: the label Y is real-valued. Classification: the label Y takes values in a finite set of classes (e.g. {0, 1}). 4

  5. Loss. A loss function L(f(x), y) measures how good our prediction f(x) is on a particular (x, y) pair. 5

  6. Loss Examples. Classification (0-1) loss: L(f(x), y) = 1 if f(x) ≠ y and 0 otherwise. Regression example: predict house prices, so y is a price. L2 loss for regression: L(f(x), y) = (f(x) − y)^2. L1 loss for regression: L(f(x), y) = |f(x) − y|. 6
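
  The loss formulas above can be written as small functions. A minimal sketch in Python (the function names are mine, not from the slides):

      import numpy as np

      def zero_one_loss(y_pred, y):
          # Classification (0-1) loss: 1 if the prediction is wrong, 0 otherwise.
          return np.asarray(y_pred != y, dtype=float)

      def l2_loss(y_pred, y):
          # Squared (L2) loss for regression, e.g. predicting house prices.
          return (np.asarray(y_pred, dtype=float) - np.asarray(y, dtype=float)) ** 2

      def l1_loss(y_pred, y):
          # Absolute (L1) loss for regression.
          return np.abs(np.asarray(y_pred, dtype=float) - np.asarray(y, dtype=float))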

  7. Squared loss (L2 loss). Picture from Alex. 7

  8. L1 loss. Picture from Alex. 8

  9. ε-insensitive loss. Picture from Alex. 9

  10. Huber’s robust loss. Picture from Alex. 10
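
  The two robust losses above, sketched in Python (the tube width eps and the threshold delta are placeholders; the values used in the slides’ figures are not shown):

      import numpy as np

      def eps_insensitive_loss(y_pred, y, eps=0.1):
          # epsilon-insensitive loss: zero inside a tube of half-width eps,
          # linear outside it (used in support vector regression).
          return np.maximum(np.abs(y_pred - y) - eps, 0.0)

      def huber_loss(y_pred, y, delta=1.0):
          # Huber's robust loss: quadratic for small residuals, linear for large ones.
          r = np.abs(y_pred - y)
          return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))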

  11. Risk. The risk of a classification/regression function f is R(f) = E[L(f(X), Y)], the expected loss under the data-generating distribution P(X, Y). Why do we care about this? 11

  12. Why do we care about risk? The risk R(f) = E[L(f(X), Y)] of a classification/regression function f is its expected loss. Our true goal is to minimize the loss on the test points! Usually we don’t know the test points and their labels in advance…, but by the law of large numbers (LLN) the average loss over many test points converges to the risk. That is why our goal is to minimize the risk. 12
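
  In symbols (one standard way to write what this slide describes, using the notation of the later slides):

      R(f) = \mathbb{E}_{(X,Y) \sim P}\big[ L(f(X), Y) \big],
      \qquad
      \frac{1}{m} \sum_{i=1}^{m} L\big(f(X_i^{\mathrm{test}}), Y_i^{\mathrm{test}}\big)
      \;\longrightarrow\; R(f) \quad \text{as } m \to \infty \ \text{(LLN)}.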

  13. Risk Examples. Risk: the expected loss. Classification (0-1) loss: L(f(x), y) = 1 if f(x) ≠ y, else 0. Risk of the classification loss: R(f) = P(f(X) ≠ Y). L2 loss for regression: L(f(x), y) = (f(x) − y)^2. Risk of the L2 loss: R(f) = E[(f(X) − Y)^2]. 13

  14. Bayes Risk. Definition: the Bayes risk R* is the smallest achievable expected loss, R* = inf_f R(f), where we consider all possible functions f here. We don’t know P, but we have i.i.d. training data sampled from P! Goal of learning: the learning algorithm constructs this function f_D from the training data D. 14

  15. Convergence in Probability

  16. Convergence in Probability. Notation: Z_n →_P Z. Definition: Z_n converges to Z in probability if for every ε > 0, P(|Z_n − Z| > ε) → 0 as n → ∞. This indeed measures how far the values of Z_n(ω) and Z(ω) are from each other. 16

  17. Consistency of learning methods. The risk R(f_D) is a random variable, since f_D depends on the random training data. Definition: a learning method is universally consistent if R(f_D) converges in probability to the Bayes risk as the sample size grows, for every distribution P(X, Y). Stone’s theorem (1977): many classification and regression algorithms are universally consistent for certain loss functions under certain conditions: kNN, Parzen kernel regression, SVM, … Yayyy!!! ☺ Wait! This doesn’t tell us anything about the rates… 18
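
  As an illustration (a toy example of my own, not from the slides): on data whose Bayes risk is 0.1, the test error of kNN with k growing with n should approach 0.1 as n grows. A sketch using numpy and scikit-learn:

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)

      def sample(n):
          # Toy distribution with Bayes risk 0.1:
          # P(Y = 1 | X > 0.5) = 0.9 and P(Y = 1 | X <= 0.5) = 0.1.
          x = rng.uniform(0, 1, size=(n, 1))
          p1 = np.where(x[:, 0] > 0.5, 0.9, 0.1)
          y = (rng.uniform(size=n) < p1).astype(int)
          return x, y

      x_te, y_te = sample(20000)
      for n in [100, 1000, 10000]:
          x_tr, y_tr = sample(n)
          k = max(1, int(np.sqrt(n)))          # let k grow with n (one standard choice)
          clf = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)
          err = np.mean(clf.predict(x_te) != y_te)
          print(n, k, round(float(err), 3))    # test error should approach the Bayes risk 0.1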

  18. No Free Lunch! Devroye (1982): for every consistent learning method and for every fixed convergence rate a_n, there exists a distribution P(X, Y) such that the convergence rate of this learning method on P(X, Y)-distributed data is slower than a_n. What can we do now? 19

  19. Empirical Risk and True Risk 20

  20. Empirical Risk. Shorthand: R(f) is the true risk of f (deterministic, since f is fixed): R(f) = E[L(f(X), Y)]. Bayes risk: R* = inf_f R(f). Let us use the empirical counterpart, the empirical risk: R_n(f) = (1/n) Σ_{i=1}^n L(f(X_i), Y_i), the average loss on the n training points. 21
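
  The empirical risk is just the average loss on the training set. A minimal sketch in Python (function names are mine):

      import numpy as np

      def empirical_risk(predict, loss, x_train, y_train):
          # R_n(f) = (1/n) * sum_i L(f(x_i), y_i): the average loss on the training set.
          y_pred = np.asarray([predict(x) for x in x_train])
          return float(np.mean(loss(y_pred, np.asarray(y_train))))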

  21. Empirical Risk Minimization (ERM): pick the function that minimizes the empirical risk. Law of Large Numbers: for each fixed f, the empirical risk R_n(f) converges to the true risk R(f), so minimizing the empirical risk looks like a reasonable proxy for minimizing the risk and approaching the Bayes risk. 22

  22. Overfitting in Classification with ERM Generative model: Bayes classifier: Bayes risk: Picture from David Pal 23

  23. Overfitting in Classification with ERM n-order thresholded polynomials Empirical risk: Bayes risk: Picture from David Pal 24

  24. Overfitting in Regression with ERM. Is the following predictor a good one? What is its empirical risk (its performance on the training data)? Zero! What about its true risk? Greater than zero: it will predict very poorly on a new random test point. Large generalization error! 25
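
  The slide’s predictor is not reproduced here, but a standard example with exactly these properties is a predictor that memorizes the training set (my own sketch):

      import numpy as np

      def make_memorizer(x_train, y_train):
          # Returns the stored label when x exactly matches a training point,
          # and 0.0 otherwise.
          table = {float(x): float(y) for x, y in zip(x_train, y_train)}
          return lambda x: table.get(float(x), 0.0)

      x_tr = np.array([0.1, 0.4, 0.7])
      y_tr = np.array([1.0, 2.0, 0.5])
      f = make_memorizer(x_tr, y_tr)
      print([f(x) for x in x_tr])   # reproduces the training labels: empirical risk is zero
      print(f(0.55))                # but on a new point it just outputs 0.0: large true risk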

  25. Overfitting in Regression. If we allow very complicated predictors, we could overfit the training data. Example: regression with polynomials of order k − 1 (k parameters); the four panels of the figure show fits for k = 1 (constant), k = 2 (linear), k = 3 (quadratic), and k = 7 (6th-order polynomial). 26
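
  A sketch of this kind of experiment (the data-generating function and noise level are my own choices, since the slide’s are not recoverable): fit polynomials of increasing order to a handful of noisy points and compare training and test error.

      import numpy as np

      rng = np.random.default_rng(1)
      f_true = lambda x: np.sin(2 * np.pi * x)       # assumed target function
      x_tr = rng.uniform(0, 1, 8)
      y_tr = f_true(x_tr) + 0.2 * rng.normal(size=8)
      x_te = rng.uniform(0, 1, 1000)
      y_te = f_true(x_te) + 0.2 * rng.normal(size=1000)

      for k in [1, 2, 3, 7]:                         # k parameters = polynomial of order k - 1
          coef = np.polyfit(x_tr, y_tr, deg=k - 1)
          train_err = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
          test_err = np.mean((np.polyval(coef, x_te) - y_te) ** 2)
          # training error shrinks as k grows; test error eventually blows up
          print(k, round(train_err, 3), round(test_err, 3))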

  26. Solutions to Overfitting 27

  27. Solutions to Overfitting: Structural Risk Minimization. Notation: R_n(f) for the empirical risk, R(f) for the (true) risk. 1st issue: restricting the search to a fixed function class introduces model error (approximation error). Solution: Structural Risk Minimization (SRM). 28

  28. Approximation error, Estimation error, PAC framework. The risk of the learned classifier f decomposes into the Bayes risk, the approximation error, and the estimation error. The Probably Approximately Correct (PAC) learning framework is used to bound the estimation error. 29
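
  In symbols, the standard decomposition behind this slide (with F the function class searched by the learning algorithm and f_D the function it returns):

      R(f_D) - R^*
      \;=\;
      \underbrace{\Big( R(f_D) - \inf_{f \in \mathcal{F}} R(f) \Big)}_{\text{estimation error}}
      \;+\;
      \underbrace{\Big( \inf_{f \in \mathcal{F}} R(f) - R^* \Big)}_{\text{approximation error}}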

  29. Big Picture. Ultimate goal: a risk close to the Bayes risk, which requires keeping both the estimation error and the approximation error small (the Bayes risk itself cannot be reduced). 30

  30. Solution to Overfitting. 2nd issue: the empirical risk with the 0-1 loss is hard to minimize directly. Solution: approximate the 0-1 loss with a better-behaved surrogate, e.g. the hinge loss or the quadratic loss (next slide). 31

  31. Approximation with the Hinge loss and quadratic loss Picture is taken from R. Herbrich
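
  A sketch of the losses being compared, with labels y in {-1, +1} and a real-valued score f(x) (this sign convention is my assumption; the slide’s formulas are images):

      import numpy as np

      def zero_one(score, y):
          # 0-1 loss: count an error whenever y * score <= 0.
          return np.asarray(y * score <= 0, dtype=float)

      def hinge(score, y):
          # Hinge loss max(0, 1 - y * score): a convex upper bound on the 0-1 loss.
          return np.maximum(0.0, 1.0 - y * score)

      def quadratic(score, y):
          # Quadratic surrogate (1 - y * score)^2.
          return (1.0 - y * score) ** 2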

  32. Effect of Model Complexity. If we allow very complicated predictors, we could overfit the training data. For a fixed number of training data points, the prediction error on the training data keeps decreasing as the model complexity grows, while the error on new data eventually increases: the empirical risk is no longer a good indicator of the true risk. 33

  33. Underfitting Bayes risk = 0.1 34

  34. Underfitting Best linear classifier: The empirical risk of the best linear classifier: 35

  35. Underfitting. Best quadratic classifier: its risk is the same as the Bayes risk => good fit! 36

  36. Classification using the 0-1 loss 37

  37. The Bayes Classifier Lemma I: Lemma II: 38

  38. Proofs Lemma I: Trivial from definition Lemma II: Surprisingly long calculation 39

  39. The Bayes Classifier. This is what the learning algorithm produces. We will need these definitions, please copy them! 40

  40. The Bayes Classifier This is what the learning algorithm produces Theorem I: Bound on the Estimation error The true risk of what the learning algorithm produces 41

  41. Proof of Theorem 1 Theorem I: Bound on the Estimation error The true risk of what the learning algorithm produces Proof:

  42. The Bayes Classifier This is what the learning algorithm produces Theorem II: 43

  43. Proofs. Theorem I: not-so-long calculations. Theorem II: trivial. Corollary / main message: it’s enough to derive upper bounds for the worst-case deviation sup_{f in F} |R_n(f) − R(f)| between empirical and true risk. 44
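
  A standard form of the bound this argument gives (the usual ERM estimation-error bound; the exact constants on the slides may differ): if \hat{f} minimizes the empirical risk over the class F, then

      R(\hat{f}) - \inf_{f \in \mathcal{F}} R(f)
      \;\le\;
      2 \sup_{f \in \mathcal{F}} \big| R_n(f) - R(f) \big|.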

  44. Illustration of the Risks 45

  45. It’s enough to derive upper bounds for sup_{f in F} |R_n(f) − R(f)|. It is a random variable that we need to bound, and we will bound it with tail bounds! 46

  46. Hoeffding’s inequality (1963) Special case 47
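
  For reference, the special case relevant here (i.i.d. random variables bounded in [0, 1]):

      P\left( \Big| \frac{1}{n} \sum_{i=1}^{n} Z_i - \mathbb{E}[Z_1] \Big| > \varepsilon \right)
      \;\le\; 2 e^{-2 n \varepsilon^2},
      \qquad Z_1, \dots, Z_n \ \text{i.i.d.}, \ Z_i \in [0, 1].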

  47. Binomial distributions. Our goal is to bound |R_n(f) − R(f)| for a fixed classifier f. With the 0-1 loss, each L(f(X_i), Y_i) is Bernoulli(p) with p = R(f), so the empirical risk is an average of i.i.d. [0, 1]-valued variables. Therefore, from Hoeffding we have a tail bound on |R_n(f) − R(f)|. Yuppie!!! 48

  48. Inversion. From Hoeffding we have a tail bound; setting its right-hand side equal to δ and solving for ε, we therefore get a high-probability bound on the deviation. 49
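
  Concretely (a standard calculation): for a fixed classifier f, with probability at least 1 − δ,

      \big| R_n(f) - R(f) \big| \;\le\; \sqrt{\frac{\ln(2/\delta)}{2n}}.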

  49. Union Bound. Our goal is to bound the worst-case deviation over a finite class F of N classifiers; we already know the bound for a single fixed classifier. Theorem: [tail bound on the ‘deviation’ in the worst case]. Worst case here does not mean the worst classifier in terms of classification accuracy! It means the classifier f whose empirical risk is furthest from its true risk. Proof: apply the union bound over the N classifiers. 50
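
  In symbols, with F = {f_1, …, f_N}:

      P\left( \max_{f \in \mathcal{F}} \big| R_n(f) - R(f) \big| > \varepsilon \right)
      \;\le\; \sum_{f \in \mathcal{F}} P\big( | R_n(f) - R(f) | > \varepsilon \big)
      \;\le\; 2 N e^{-2 n \varepsilon^2}.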

  50. Inversion of Union Bound We already know: Therefore, 51
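
  Inverting as before (setting the right-hand side to δ): with probability at least 1 − δ, simultaneously for all f in F,

      \big| R_n(f) - R(f) \big| \;\le\; \sqrt{\frac{\ln(2N/\delta)}{2n}}.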

  51. Inversion of Union Bound • The larger the N, the looser the bound • This result is distribution free: true for all P(X,Y) distributions • It is useless if N is big, or infinite… (e.g. all possible hyperplanes) We will see later how to fix that. (Hint: McDiarmid, VC dimension…) 52

  52. The Expected Error. Our goal is to bound the expected worst-case deviation E[ sup_{f in F} |R_n(f) − R(f)| ]. We already know a tail bound (concentration inequality) for it. Theorem: [expected ‘deviation’ in the worst case]. Again, worst case does not mean the worst classifier in terms of classification accuracy; it means the classifier whose empirical risk is furthest from its true risk. Proof: we already know a tail bound. (From that we actually get a slightly weaker inequality… oh well.) 53
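
  One standard form of such a bound, obtained by integrating the tail bound above (the exact constants on the slide may differ):

      \mathbb{E}\left[ \max_{f \in \mathcal{F}} \big| R_n(f) - R(f) \big| \right]
      \;\le\; \sqrt{\frac{\ln(2N)}{2n}} \;+\; \frac{1}{\sqrt{8 n \ln(2N)}}.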

  53. Thanks for your attention ☺ 54
