61A Extra Lecture 13: Announcements, Prediction, Regression

  1. 61A Extra Lecture 13

  2. Announcements

  3. Prediction

  4. Regression
     Given a set of (x, y) pairs, find a function f(x) that returns good y values.
     pairs = [(1656, 215.0), (896, 105.0), (1329, 172.0), ...]
     Data from home sales records in Ames, Iowa: x is square feet, y is price (thousands).
     Measuring error: |y - f(x)| and (y - f(x))² are both typical.
     Over the whole set of (x, y) pairs, we can compute the mean of the squared error.
     Squared error has the wrong units, so it's common to take the square root.
     The result is the "root mean squared error" of a predictor f on a set of (x, y) pairs.
     (Demo)
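
To make the error measure concrete, here is a minimal Python sketch of a root-mean-squared-error calculation over (x, y) pairs like the ones above. The predict function is a made-up placeholder, not the model from the demo.

    from math import sqrt

    # (square feet, price in thousands) pairs, as in the slide
    pairs = [(1656, 215.0), (896, 105.0), (1329, 172.0)]

    def rmse(f, pairs):
        """Root mean squared error of predictor f over (x, y) pairs."""
        squared_errors = [(y - f(x)) ** 2 for x, y in pairs]
        return sqrt(sum(squared_errors) / len(squared_errors))

    # Hypothetical predictor: roughly $130 per square foot, expressed in thousands.
    predict = lambda x: 0.13 * x
    print(rmse(predict, pairs))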

  5. Purpose of Newton's Method
     Quickly finds accurate approximations to zeroes of differentiable functions!
     A "zero" of a function f is an input x such that f(x) = 0.
     [Plot: f(x) = x² - 2, whose positive zero is x = 1.414213562373095]
     Application: Find the minimum of a function by finding the zero of its derivative.
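
As a rough sketch (not the course demo itself), the Newton update x := x - f(x)/f'(x) fits in a few lines of Python; the starting guess and iteration count below are arbitrary choices.

    def newton_zero(f, df, x, iterations=20):
        """Approximate a zero of f by repeatedly following the tangent line,
        where df is the derivative of f."""
        for _ in range(iterations):
            x = x - f(x) / df(x)
        return x

    f = lambda x: x * x - 2    # f(x) = x² - 2
    df = lambda x: 2 * x       # f'(x) = 2x
    print(newton_zero(f, df, 1.0))   # close to √2 ≈ 1.414213562373095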

  6. Approximate Differentiation
     Differentiation can be performed symbolically or numerically.
     Symbolically: f(x) = x² - 16, so f'(x) = 2x and f'(2) = 4.
     Numerically, using the definition f'(x) = lim (a → 0) (f(x + a) - f(x)) / a:
     f'(x) ≈ (f(x + a) - f(x)) / a   (if a is small)
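
A finite-difference version of the approximation on this slide might look like the following; the step size a = 1e-6 is simply a commonly used small value, not one prescribed by the lecture.

    def approx_derivative(f, x, a=1e-6):
        """Numerically approximate f'(x) as (f(x + a) - f(x)) / a for small a."""
        return (f(x + a) - f(x)) / a

    f = lambda x: x * x - 16    # f(x) = x² - 16, so f'(x) = 2x
    print(approx_derivative(f, 2))   # close to the exact value f'(2) = 4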

  7. Critical Points
     Maxima, minima, and inflection points of a differentiable function occur when the derivative is 0.
     The global minimum of convex functions that are (mostly) twice-differentiable can be computed numerically using techniques that are similar to Newton's method.
     (Demo)
     Image: http://upload.wikimedia.org/wikipedia/commons/f/fd/Stationary_vs_inflection_pts.svg
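
One way to picture the connection to Newton's method is to drive a numerically estimated derivative toward zero. This is a toy sketch under that assumption, not the demo's implementation, and the convex function below is invented for illustration.

    def minimize(f, x, a=1e-5, iterations=100):
        """Approximately minimize a convex, twice-differentiable f by applying
        Newton-style updates to a finite-difference estimate of its derivative."""
        df = lambda x: (f(x + a) - f(x)) / a      # approximate first derivative
        d2f = lambda x: (df(x + a) - df(x)) / a   # approximate second derivative
        for _ in range(iterations):
            x = x - df(x) / d2f(x)
        return x

    f = lambda x: (x - 3) ** 2 + 1   # convex, with its minimum at x = 3
    print(minimize(f, 0.0))          # approximately 3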

  8. Multiple Linear Regression
     Given a set of (xs, y) pairs, find a linear function f(xs) that returns good y values.
     A linear function has the form w · xs + b for vectors w and xs and scalar b.
     (Demo)
     Note: Root mean squared error can be optimized through linear algebra alone, but numerical optimization works for a much larger class of related error measures.
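
For reference, a linear function of the form w · xs + b can be written directly in Python. The weights below are invented to show the shape of the computation, not fitted values from the demo.

    def linear_predictor(w, b):
        """Return f(xs) = w · xs + b for a weight vector w and a scalar intercept b."""
        def f(xs):
            return sum(wi * xi for wi, xi in zip(w, xs)) + b
        return f

    # Hypothetical weights for two features, e.g. square feet and number of rooms.
    f = linear_predictor([0.1, 5.0], 20.0)
    print(f([1656, 7]))   # a predicted price in thousands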
