10-601 Machine Learning: Regression


  1. 10-601 Machine Learning Regression

  2. Outline • Regression vs Classification • Linear regression – another discriminative learning method – As optimization → Gradient descent – As matrix inversion (Ordinary Least Squares) • Overfitting and bias-variance • Bias-variance decomposition for classification

  3. What is regression?

  4. Where we are (diagram) • Inputs → Density Estimator → Probability (√) • Inputs → Classifier → Predict category (√) • Inputs → Regressor → Predict real number (today)

  5. Regression examples

  6. Prediction of menu prices (Chahuneau, Gimpel, … and Smith, EMNLP 2012)

  7. A decision tree: classification (figure: a tree whose leaves predict Play or Don’t Play)

  8. A regression tree (figure: a tree whose leaves predict the average playing time) • Play ≈ 37 (leaf values: 30m, 45m) • Play ≈ 5 (leaf values: 0m, 0m, 15m) • Play ≈ 0 (leaf values: 0m, 0m) • Play ≈ 32 (leaf values: 20m, 30m, 45m)

  9. Theme for the week: learning as optimization

  10. Types of learners • Two types of learners: 1. Generative: make assumptions about how to generate the data (given the class), e.g., naïve Bayes 2. Discriminative: directly estimate a decision rule/boundary, e.g., logistic regression • Today: another discriminative learner, but for regression tasks

  11. Least Mean Squares Regression • Toy problem #2 for LMS as optimization

  12. Linear regression • Given an input x we would like to compute an output y • For example: predict height from age; predict Google’s stock price from Yahoo’s price; predict distance from a wall from sensor readings (figure: data plotted as y vs. x)

  13. Linear regression • Given an input x we would like to compute an output y • In linear regression we assume that y and x are related by the equation y = wx + ε, where w is a parameter and ε represents measurement or other noise (figure: observed values plotted around the line we are trying to predict)

  14. Linear regression y = wx + ε • Our goal is to estimate w from training data of ⟨x_i, y_i⟩ pairs • Optimization goal: minimize the squared error (least squares): argmin_w Σ_i (y_i − w x_i)² • Why least squares? – minimizes the squared distance between measurements and the predicted line (see HW) – has a nice probabilistic interpretation – the math is pretty
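
The “nice probabilistic interpretation” mentioned on this slide is not spelled out in this excerpt; a standard sketch (my addition): assume Gaussian noise, i.e. $y_i = w x_i + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. The log-likelihood of the training data is then

$$\log p(y_1, \dots, y_n \mid x, w) = -\frac{1}{2\sigma^2} \sum_i (y_i - w x_i)^2 + \text{const},$$

so maximizing the likelihood over $w$ is exactly the same as minimizing the least-squares objective $\sum_i (y_i - w x_i)^2$.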

  15. Solving linear regression • To optimize, we just take the derivative w.r.t. w: ∂/∂w Σ_i (y_i − w x_i)² = −2 Σ_i x_i (y_i − w x_i), where w x_i is the prediction • Compare to logistic regression…

  16. Solving linear regression • To optimize in closed form, we take the derivative w.r.t. w and set it to 0: ∂/∂w Σ_i (y_i − w x_i)² = −2 Σ_i x_i (y_i − w x_i) = 0 ⇒ Σ_i x_i y_i = w Σ_i x_i² ⇒ w = Σ_i x_i y_i / Σ_i x_i² • This equals covar(X,Y)/var(X) if mean(X) = mean(Y) = 0
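
A minimal sketch of this closed-form estimate in NumPy (my code, not from the slides; the function name is my own):

```python
import numpy as np

def fit_univariate_no_intercept(x, y):
    """Least-squares slope for y ≈ w*x (line through the origin): w = Σ x_i y_i / Σ x_i²."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(x * y) / np.sum(x * x)

# toy usage: exact recovery when y is exactly 2x
print(fit_univariate_no_intercept([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # -> 2.0
```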

  17. Regression example • Generated: w=2 • Recovered: w=2.03 • Noise: std=1

  18. Regression example • Generated: w=2 • Recovered: w=2.05 • Noise: std=2

  19. Regression example • Generated: w=2 • Recovered: w=2.08 • Noise: std=4
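
A hedged sketch of the kind of experiment behind slides 17–19 (the sample size, input distribution, and random seed are my assumptions; the slides only state the generating w and the noise std, so the recovered values will not match exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, n = 2.0, 100  # assumed settings; the slides only state w = 2 and the noise std

for noise_std in (1.0, 2.0, 4.0):
    x = rng.uniform(0.0, 10.0, size=n)
    y = w_true * x + rng.normal(0.0, noise_std, size=n)
    w_hat = np.sum(x * y) / np.sum(x * x)  # closed-form estimate from slide 16
    print(f"noise std={noise_std}: recovered w={w_hat:.2f}")
# Larger noise -> the recovered w tends to drift further from 2, as on slides 17-19.
```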

  20. Bias term • So far we assumed that the line passes through the origin • What if the line does not? • No problem, simply change the model to y = w_0 + w_1 x + ε • Can use least squares to determine w_0, w_1: w_1 = Σ_i x_i (y_i − w_0) / Σ_i x_i², w_0 = (1/n) Σ_i (y_i − w_1 x_i)

  21. Bias term (a simpler solution is coming soon…) • So far we assumed that the line passes through the origin • What if the line does not? • No problem, simply extend the model to y = w_0 + w_1 x + ε • Can use least squares to determine w_0, w_1, with the same formulas as above
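
For the model with a bias term, a minimal sketch (my code; it uses the equivalent centered form w_1 = cov(x, y)/var(x), w_0 = ȳ − w_1·x̄, which gives the same optimum as the two coupled equations on slide 20):

```python
import numpy as np

def fit_univariate_with_intercept(x, y):
    """Least squares for y ≈ w0 + w1*x, via centering: w1 = cov(x,y)/var(x), w0 = ȳ - w1*x̄."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = x.mean(), y.mean()
    w1 = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    w0 = y_mean - w1 * x_mean
    return w0, w1

# toy usage: data generated by y = 1 + 2x
print(fit_univariate_with_intercept([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))  # -> (1.0, 2.0)
```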

  22. Multivariate regression • What if we have several inputs? – e.g., stock prices for Yahoo, Microsoft and eBay for the Google prediction task • This becomes a multivariate regression problem • Again, it’s easy to model: y = w_0 + w_1 x_1 + … + w_k x_k + ε, where y is Google’s stock price and x_1, x_2, … are Yahoo’s, Microsoft’s, … stock prices

  23. Multivariate regression • What if we have several inputs? – Stock prices for Yahoo, Microsoft and eBay for the Google prediction task • This becomes a multivariate regression problem • Again, it’s easy to model: y = w_0 + w_1 x_1 + … + w_k x_k + ε

  24. Not all functions can be approximated by a line/hyperplane… e.g., y = 10 + 3x_1² − 2x_2² + ε • In some cases we would like to use polynomial or other terms based on the input data; are these still linear regression problems? • Yes. As long as the equation is linear in the coefficients, it is still a linear regression problem!

  25. Non-linear basis functions • So far we only used the observed values x_1, x_2, … • However, linear regression can be applied in the same way to functions of these values – e.g., to add a term w·x_1x_2, add a new variable z = x_1x_2 so each example becomes x_1, x_2, …, z • As long as these functions can be computed directly from the observed values, the parameters are still linear in the data and the problem remains a multivariate linear regression problem, e.g. y = w_0 + w_1 x_1² + … + w_k x_k² + ε

  26. Non-linear basis functions • How can we use this to add an intercept term? Add a new “variable” z = 1 and weight w_0

  27. Non-linear basis functions • What type of functions can we use? A few common examples: – Polynomial: φ_j(x) = x^j for j = 0 … n – Gaussian: φ_j(x) = (x − μ_j)² / (2σ_j²) – Sigmoid: φ_j(x) = 1 / (1 + exp(−s_j x)) – Logs: φ_j(x) = log(x + 1) • Any function of the input values can be used; the solution for the parameters of the regression remains the same.

  28. General linear regression problem • Using our new notation for the basis functions, linear regression can be written as y = Σ_{j=0}^{n} w_j φ_j(x) • Here φ_j(x) can be either x_j for multivariate regression or one of the non-linear basis functions we defined • … and φ_0(x) = 1 for the intercept term
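
A minimal sketch of predicting with this general form, using a few of the basis functions from slide 27 (function and variable names are my own):

```python
import numpy as np

# Basis functions from slide 27; φ_0 supplies the intercept term from slide 26.
basis_fns = [
    lambda x: 1.0,               # φ_0(x) = 1   (intercept)
    lambda x: x,                 # φ_1(x) = x   (polynomial, j = 1)
    lambda x: x ** 2,            # φ_2(x) = x²  (polynomial, j = 2)
    lambda x: np.log(x + 1.0),   # φ_3(x) = log(x + 1)
]

def predict(w, x):
    """General linear regression prediction: y = Σ_j w_j φ_j(x)."""
    return sum(w_j * phi(x) for w_j, phi in zip(w, basis_fns))

# toy usage: w = (10, 0, 3, 0) gives y = 10 + 3x²
print(predict([10.0, 0.0, 3.0, 0.0], 2.0))  # -> 22.0
```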

  29. Learning/Optimizing Multivariate Least Squares Approach 1: Gradient Descent

  30. Gradient descent

  31. Gradient Descent for Linear Regression • Predict with: ŷ_i = Σ_{j=0}^{k} w_j φ_j(x_i) • Goal: minimize the loss function J_X,y(w) = Σ_i (y_i − ŷ_i)² = Σ_i ( y_i − Σ_j w_j φ_j(x_i) )², where the outer sum runs over the n examples and the inner sum over the k+1 basis functions

  32. Gradient Descent for Linear Regression • Predict with: ŷ_i = Σ_{j=0}^{k} w_j φ_j(x_i) • Goal: minimize the loss function J_X,y(w) = Σ_i (y_i − ŷ_i)² • Taking the derivative: ∂J(w)/∂w_j = ∂/∂w_j Σ_i (y_i − ŷ_i)² = 2 Σ_i (y_i − ŷ_i) · ∂(y_i − ŷ_i)/∂w_j = −2 Σ_i (y_i − ŷ_i) · ∂ŷ_i/∂w_j = −2 Σ_i (y_i − ŷ_i) φ_j(x_i)

  33. Gradient Descent for Linear Regression • Learning algorithm: – Initialize weights w = 0 – For t = 1, … until convergence: • Predict for each example x_i using w: ŷ_i = Σ_{j=0}^{k} w_j φ_j(x_i) • Compute the gradient of the loss: ∂J(w)/∂w_j = −2 Σ_i (y_i − ŷ_i) φ_j(x_i); this is a vector g • Update: w = w − λ g, where λ is the learning rate
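
A minimal sketch of this learning algorithm in NumPy (my code; the learning rate and the fixed iteration count stand in for the unspecified convergence test, and Phi is the feature matrix Φ[i, j] = φ_j(x_i) introduced a few slides later):

```python
import numpy as np

def gd_linear_regression(Phi, y, lam=0.01, n_iters=1000):
    """Batch gradient descent for J(w) = Σ_i (y_i - ŷ_i)², with ŷ_i = Σ_j w_j φ_j(x_i)."""
    w = np.zeros(Phi.shape[1])             # initialize weights w = 0
    for _ in range(n_iters):
        y_hat = Phi @ w                    # predict: ŷ_i = Σ_j w_j φ_j(x_i)
        grad = -2.0 * Phi.T @ (y - y_hat)  # gradient: ∂J/∂w_j = -2 Σ_i (y_i - ŷ_i) φ_j(x_i)
        w = w - lam * grad                 # update: w ← w - λ g
    return w

# toy usage: intercept column plus one feature, data generated by y = 1 + 2x
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(gd_linear_regression(Phi, y))  # ≈ [1.0, 2.0]
```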

  34. Gradient Descent for Linear Regression • We can use any of the tricks we used for logistic regression: – stochastic gradient descent (if the data is too big to put in memory) – regularization – …

  35. Linear regression is a convex optimization problem, so again gradient descent will reach a global optimum. Proof: differentiate again to get the second derivative.
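
A sketch of that proof (standard argument, not worked out in this excerpt; it uses the Φ notation introduced on slide 38): differentiating the gradient from slide 32 once more gives

$$\frac{\partial^2 J(\mathbf{w})}{\partial w_j \,\partial w_l} = 2 \sum_i \phi_j(x_i)\,\phi_l(x_i), \qquad \text{i.e.} \qquad \nabla^2 J(\mathbf{w}) = 2\,\Phi^T \Phi,$$

and for any vector $v$ we have $v^T (2\Phi^T\Phi)\, v = 2\,\lVert \Phi v \rVert^2 \ge 0$, so the Hessian is positive semi-definite and $J$ is convex; gradient descent with a suitable learning rate therefore converges to a global optimum.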

  36. Multivariate Least Squares Approach 2: Matrix Inversion

  37. OLS (Ordinary Least Squares Solution) • Predict with: ŷ_i = Σ_{j=0}^{k} w_j φ_j(x_i) • Goal: minimize the loss function J_X,y(w) = Σ_i (y_i − ŷ_i)² = Σ_i ( y_i − Σ_j w_j φ_j(x_i) )² • Gradient: ∂J(w)/∂w_j = −2 Σ_i (y_i − ŷ_i) φ_j(x_i)

  38. • Predict with: ŷ_i = Σ_{j=0}^{k} w_j φ_j(x_i) • Goal: minimize J_X,y(w) = Σ_i (y_i − ŷ_i)² • Gradient: ∂J(w)/∂w_j = −2 Σ_i (y_i − ŷ_i) φ_j(x_i) • Notation: Φ is the n × (k+1) matrix whose rows correspond to the n examples and whose columns correspond to the k+1 basis functions, with entries Φ[i, j] = φ_j(x_i), i.e. row i is (φ_0(x_i), φ_1(x_i), …, φ_k(x_i))

  39. • Same prediction, goal, and gradient as above • Notation: Φ is the n × (k+1) matrix with Φ[i, j] = φ_j(x_i); w = (w_0, …, w_k)ᵀ is the weight vector; y = (y_1, …, y_n)ᵀ is the vector of observed outputs
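
This excerpt stops at the notation; the closed-form result this sets up is the standard OLS solution w = (ΦᵀΦ)⁻¹Φᵀy (a well-known fact, not shown on the slides above). A minimal sketch in NumPy, solving the normal equations rather than forming an explicit inverse (function and variable names are mine):

```python
import numpy as np

def ols_fit(Phi, y):
    """Ordinary least squares: solve (ΦᵀΦ) w = Φᵀ y for w."""
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# toy usage: intercept column plus one feature, data generated by y = 1 + 2x
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(ols_fit(Phi, y))  # -> approximately [1.0, 2.0]
```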
