Linear Regression + Optimization for ML, Matt Gormley, Lecture 8 (10-601 Introduction to Machine Learning, Carnegie Mellon University)

  1. 10-601 Introduction to Machine Learning, Machine Learning Department, School of Computer Science, Carnegie Mellon University. Linear Regression + Optimization for ML. Matt Gormley, Lecture 8, Feb. 07, 2020.

  2. Q&A Q: How can I get more one-on-one interaction with the course staff? A: Attend office hours as soon after the homework release as possible! 3

  3. Reminders • Homework 3: KNN, Perceptron, Lin.Reg. – Out: Wed, Feb. 05 (+ 1 day) – Due: Wed, Feb. 12 at 11:59pm • Today’s In-Class Poll – http://p8.mlcourse.org

  4. LINEAR REGRESSION

  5. Regression Problems (Chalkboard) – Definition of Regression – Linear functions – Residuals – Notation trick: fold in the intercept

  6. The Big Picture: OPTIMIZATION FOR ML

  7. Optimization for ML. Not quite the same setting as other fields… – The function we are optimizing might not be the true goal (e.g. likelihood vs. generalization error) – Precision might not matter (e.g. data is noisy, so optimal up to 1e-16 might not help) – Stopping early can help generalization error (i.e. “early stopping” is a technique for regularization – discussed more next time)

  8. min vs. argmin. Consider y = f(x) = x^2 + 1, with v* = min_x f(x) and x* = argmin_x f(x). 1. Q: What is v*? A: v* = 1, the minimum value of the function. 2. Q: What is x*? A: x* = 0, the argument that yields the minimum value.
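A minimal Python sketch of this distinction (the grid of candidate x values is an illustrative assumption, not part of the slide):

```python
import numpy as np

# f(x) = x^2 + 1 evaluated on a fine grid of candidate x values
def f(x):
    return x**2 + 1

xs = np.linspace(-3, 3, 601)
values = f(xs)

v_star = values.min()          # min:    the smallest value f attains, here 1.0
x_star = xs[values.argmin()]   # argmin: the x that achieves it, here 0.0
print(v_star, x_star)
```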

  9. Linear Regression as Function Approximation

  11. Contour Plots. Example: J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2, plotted over axes θ1 and θ2. 1. Each level curve is labeled with a value. 2. The value label indicates the value of the function for all points lying on that level curve. 3. Just like a topographical map, but for a function.
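A short matplotlib sketch of this kind of contour plot (the [0, 1] grid range on each axis and the number of levels are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# J(theta1, theta2) = (10(theta1 - 0.5))^2 + (6(theta2 - 0.4))^2
def J(t1, t2):
    return (10 * (t1 - 0.5))**2 + (6 * (t2 - 0.4))**2

t1, t2 = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
curves = plt.contour(t1, t2, J(t1, t2), levels=10)
plt.clabel(curves)             # label each level curve with its value
plt.xlabel("theta_1")
plt.ylabel("theta_2")
plt.show()
```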

  12. Optimization by Random Guessing. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. Optimization Method #0: Random Guessing. 1. Pick a random θ. 2. Evaluate J(θ). 3. Repeat steps 1 and 2 many times. 4. Return the θ that gives the smallest J(θ). (Figure: contour plot of J over axes θ1 and θ2.)
     t | θ1  | θ2  | J(θ1, θ2)
     1 | 0.2 | 0.2 | 10.4
     2 | 0.3 | 0.7 | 7.2
     3 | 0.6 | 0.4 | 1.0
     4 | 0.9 | 0.7 | 19.2
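A minimal sketch of Optimization Method #0 in Python (the [0, 1] sampling range for each parameter and the random seed are assumptions):

```python
import numpy as np

def J(theta):
    t1, t2 = theta
    return (10 * (t1 - 0.5))**2 + (6 * (t2 - 0.4))**2

rng = np.random.default_rng(seed=0)
best_theta, best_J = None, float("inf")
for t in range(4):                         # the slide's table shows four guesses
    theta = rng.uniform(0.0, 1.0, size=2)  # 1. pick a random theta
    value = J(theta)                       # 2. evaluate J(theta)
    if value < best_J:                     # 4. keep the theta with smallest J
        best_theta, best_J = theta, value
print(best_theta, best_J)
```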

  13. Optimization by Random Guessing. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. Optimization Method #0: Random Guessing. 1. Pick a random θ. 2. Evaluate J(θ). 3. Repeat steps 1 and 2 many times. 4. Return the θ that gives the smallest J(θ). For Linear Regression: • the objective function is the Mean Squared Error (MSE), MSE = J(w, b) = J(θ1, θ2) • contour plot: each line is labeled with its MSE – lower means a better fit • the minimum corresponds to the parameters (w, b) = (θ1, θ2) that best fit some training dataset.
     t | θ1  | θ2  | J(θ1, θ2)
     1 | 0.2 | 0.2 | 10.4
     2 | 0.3 | 0.7 | 7.2
     3 | 0.6 | 0.4 | 1.0
     4 | 0.9 | 0.7 | 19.2
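The MSE itself is only named on the slide; a minimal sketch for a one-dimensional linear regression follows, where the 1/N scaling and the toy dataset are assumptions:

```python
import numpy as np

def mse(w, b, x, y):
    """Mean squared error of the line h(x) = w*x + b on a training set."""
    residuals = y - (w * x + b)
    return np.mean(residuals**2)

# Tiny illustrative dataset (not the tourists data from the lecture).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9])
print(mse(1.0, 0.0, x, y))   # lower MSE means a better fit
```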

  14. Linear Regression by Rand. Guessing. Optimization Method #0: Random Guessing. 1. Pick a random θ. 2. Evaluate J(θ). 3. Repeat steps 1 and 2 many times. 4. Return the θ that gives the smallest J(θ). For Linear Regression: • the target function h*(x) is unknown • we only have access to h*(x) through training examples (x(i), y(i)) • we want an h(x; θ(t)) that best approximates h*(x) • we enable generalization with an inductive bias that restricts the hypothesis class to linear functions. (Figure: the guessed lines h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), plotted as # tourists (thousands) vs. time.)

  15. Linear Regression by Rand. Guessing. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. Optimization Method #0: Random Guessing. 1. Pick a random θ. 2. Evaluate J(θ). 3. Repeat steps 1 and 2 many times. 4. Return the θ that gives the smallest J(θ). (Figures: contour plot of J over axes θ1 and θ2; the guessed lines h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), # tourists (thousands) vs. time.)
     t | θ1  | θ2  | J(θ1, θ2)
     1 | 0.2 | 0.2 | 10.4
     2 | 0.3 | 0.7 | 7.2
     3 | 0.6 | 0.4 | 1.0
     4 | 0.9 | 0.7 | 19.2

  16. OPTIMIZATION METHOD #1: GRADIENT DESCENT

  17. Optimization for ML (Chalkboard) – Unconstrained optimization – Derivatives – Gradient

  18. Topographical Maps

  19. Topographical Maps

  20. Gradients

  21. Gradients. These are the gradients that Gradient Ascent would follow.

  22. (Negative) Gradients. These are the negative gradients that Gradient Descent would follow.

  23. (Negative) Gradient Paths. Shown are the paths that Gradient Descent would follow if it were making infinitesimally small steps.

  24. Pros and cons of gradient descent • Simple and often quite effective on ML tasks • Often very scalable • Only applies to smooth (differentiable) functions • Might find a local minimum, rather than a global one. (Slide courtesy of William Cohen)

  25. Gradient Descent (Chalkboard) – Gradient Descent Algorithm – Details: starting point, stopping criterion, line search

  26. Gradient Descent. Algorithm 1 (Gradient Descent): procedure GD(D, θ(0)): θ ← θ(0); while not converged do: θ ← θ − λ ∇θ J(θ); return θ. In order to apply GD to Linear Regression, all we need is the gradient of the objective function (i.e. the vector of partial derivatives): ∇θ J(θ) = [dJ(θ)/dθ1, dJ(θ)/dθ2, …, dJ(θ)/dθM]^T.
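A minimal Python sketch of this loop; the fixed step size, the iteration cap, and the gradient-norm stopping rule are assumptions standing in for "while not converged":

```python
import numpy as np

def gradient_descent(grad_J, theta0, lam=0.1, tol=1e-6, max_iter=10_000):
    """Repeat theta <- theta - lam * grad_J(theta) until approximately converged."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad_J(theta)
        if np.linalg.norm(g) <= tol:   # stop once the gradient is tiny
            break
        theta = theta - lam * g        # step opposite the gradient
    return theta
```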

  27. Gradient Descent. There are many possible ways to detect convergence. For example, we could check whether the L2 norm of the gradient is below some small tolerance: ||∇θ J(θ)||2 ≤ ε. Alternatively, we could check that the reduction in the objective function from one iteration to the next is small.
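The two convergence checks described above, sketched as a helper (the tolerance values are illustrative assumptions):

```python
import numpy as np

def converged(grad, J_prev, J_curr, grad_tol=1e-6, obj_tol=1e-8):
    # Check 1: the L2 norm of the gradient is below a small tolerance.
    small_gradient = np.linalg.norm(grad) <= grad_tol
    # Check 2: the objective barely decreased from one iteration to the next.
    small_improvement = (J_prev - J_curr) <= obj_tol
    return small_gradient or small_improvement
```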

  28. GRADIENT DESCENT FOR LINEAR REGRESSION

  29. Linear Regression as Function Approximation

  30. Linear Regression by Gradient Desc. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. Optimization Method #1: Gradient Descent. 1. Pick a random θ. 2. Repeat: a. Evaluate the gradient ∇J(θ). b. Step opposite the gradient. 3. Return the θ that gives the smallest J(θ). (Figure: contour plot of J over axes θ1 and θ2.)
     t | θ1   | θ2   | J(θ1, θ2)
     1 | 0.01 | 0.02 | 25.2
     2 | 0.30 | 0.12 | 8.7
     3 | 0.51 | 0.30 | 1.5
     4 | 0.59 | 0.43 | 0.2
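A sketch of gradient descent on this particular J, with the gradient worked out by hand; the step size and iteration count are assumptions, so the iterates will not match the table exactly:

```python
import numpy as np

def J(theta):
    t1, t2 = theta
    return (10 * (t1 - 0.5))**2 + (6 * (t2 - 0.4))**2

def grad_J(theta):
    t1, t2 = theta
    # Partial derivatives of J with respect to theta_1 and theta_2.
    return np.array([200 * (t1 - 0.5), 72 * (t2 - 0.4)])

theta = np.array([0.01, 0.02])             # starting point, as in row t = 1
for _ in range(100):
    theta = theta - 0.005 * grad_J(theta)  # step opposite the gradient
print(theta, J(theta))                     # approaches (0.5, 0.4), where J = 0
```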

  31. Linear Regression by Gradient Desc. Optimization Method #1: Gradient Descent. 1. Pick a random θ. 2. Repeat: a. Evaluate the gradient ∇J(θ). b. Step opposite the gradient. 3. Return the θ that gives the smallest J(θ). (Figure: the successive fits h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), # tourists (thousands) vs. time.)
     t | θ1   | θ2   | J(θ1, θ2)
     1 | 0.01 | 0.02 | 25.2
     2 | 0.30 | 0.12 | 8.7
     3 | 0.51 | 0.30 | 1.5
     4 | 0.59 | 0.43 | 0.2

  32. Linear Regression by Gradient Desc. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. Optimization Method #1: Gradient Descent. 1. Pick a random θ. 2. Repeat: a. Evaluate the gradient ∇J(θ). b. Step opposite the gradient. 3. Return the θ that gives the smallest J(θ). (Figures: the gradient descent path on the contour plot over axes θ1 and θ2; the successive fits h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), # tourists (thousands) vs. time.)
     t | θ1   | θ2   | J(θ1, θ2)
     1 | 0.01 | 0.02 | 25.2
     2 | 0.30 | 0.12 | 8.7
     3 | 0.51 | 0.30 | 1.5
     4 | 0.59 | 0.43 | 0.2

  33. Linear Regression by Gradient Desc. (Figures: mean squared error J(θ1, θ2) vs. iteration t; the successive fits h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), # tourists (thousands) vs. time.)
     t | θ1   | θ2   | J(θ1, θ2)
     1 | 0.01 | 0.02 | 25.2
     2 | 0.30 | 0.12 | 8.7
     3 | 0.51 | 0.30 | 1.5
     4 | 0.59 | 0.43 | 0.2

  34. Linear Regression by Gradient Desc. J(θ) = J(θ1, θ2) = (10(θ1 − 0.5))^2 + (6(θ2 − 0.4))^2. (Figures: mean squared error J(θ1, θ2) vs. iteration t; the contour plot over axes θ1 and θ2; the successive fits h(x; θ(1)) through h(x; θ(4)) and the unknown target y = h*(x), # tourists (thousands) vs. time.)
     t | θ1   | θ2   | J(θ1, θ2)
     1 | 0.01 | 0.02 | 25.2
     2 | 0.30 | 0.12 | 8.7
     3 | 0.51 | 0.30 | 1.5
     4 | 0.59 | 0.43 | 0.2

  35. Optimization for Linear Regression (Chalkboard) – Computing the gradient for Linear Regression – Gradient Descent for Linear Regression

  36. Gradient Calculation for Linear Regression [used by Gradient Descent]
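The derivation on this slide is not captured in the transcript. For reference, assuming the objective is written as mean squared error over N training examples with the intercept folded into θ (the 1/N scaling is an assumption), the gradient used by gradient descent is:

```latex
J(\boldsymbol{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \left( \boldsymbol{\theta}^T \mathbf{x}^{(i)} - y^{(i)} \right)^2
\quad\Longrightarrow\quad
\nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) = \frac{2}{N} \sum_{i=1}^{N} \left( \boldsymbol{\theta}^T \mathbf{x}^{(i)} - y^{(i)} \right) \mathbf{x}^{(i)}
```

If the course writes J with an extra 1/2 factor, only the constant in front changes; the shape of the gradient is the same.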

  37. GD for Linear Regression. Gradient Descent for Linear Regression repeatedly takes steps opposite the gradient of the objective function.

  38. CONVEXITY

  39. Convexity

  40. Convexity. Convex Function: each local minimum is a global minimum. Nonconvex Function: a nonconvex function is not convex; each local minimum is not necessarily a global minimum.

  41. Convexity. Each local minimum of a convex function is also a global minimum. A strictly convex function has a unique global minimum.
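For reference, the standard definition behind these statements (not spelled out on the slide): f is convex if

```latex
f(\lambda \mathbf{x} + (1 - \lambda)\mathbf{y}) \;\le\; \lambda f(\mathbf{x}) + (1 - \lambda) f(\mathbf{y})
\qquad \text{for all } \mathbf{x}, \mathbf{y} \text{ and all } \lambda \in [0, 1],
```

and strictly convex if the inequality is strict whenever x ≠ y and λ ∈ (0, 1).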

  42. CONVEXITY AND LINEAR REGRESSION

  43. Convexity and Linear Regression. The Mean Squared Error function, which we minimize for learning the parameters of Linear Regression, is convex! …but in the general case it is not strictly convex.
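A short justification of both claims, under the same MSE form assumed earlier: writing J(θ) = (1/N)‖Xθ − y‖², the Hessian is

```latex
\nabla^2_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) \;=\; \frac{2}{N} \mathbf{X}^T \mathbf{X} \;\succeq\; 0,
```

which is always positive semidefinite, so J is convex; it is positive definite (making J strictly convex) only when the design matrix X has full column rank.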

  44. Regression Loss Functions. In-Class Exercise: Which of the following could be used as loss functions for training a linear regression model? Select all that apply.

  45. Solving Linear Regression. Question: … Answer: …

  46. OPTIMIZATION METHOD #2: CLOSED FORM SOLUTION
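Ahead of that discussion, a hedged sketch of the standard closed-form (normal equations) solution θ = (XᵀX)⁻¹Xᵀy for the MSE objective; the least-squares solver call and the toy dataset are assumptions rather than the lecture's own code:

```python
import numpy as np

# Toy dataset with the intercept folded in as a column of ones.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9])
X = np.column_stack([x, np.ones_like(x)])

# Closed form: theta = (X^T X)^{-1} X^T y, computed via a least-squares solver,
# which is numerically safer than forming the inverse explicitly.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)   # [w, b] minimizing the mean squared error
```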

  47. Calculus and Optimization. In-Class Exercise. Answer Here: Plot three functions:
