Linear Least Squares I
Steve Marschner, Cornell CS 322


  1. Linear Least Squares I. Steve Marschner, Cornell CS 322.

  2. Outline.
     • Linear fitting
     • Examples of linear fitting problems
     • Solving linear least squares problems
     • Difficulties in least squares fitting
     • Summary

  3. Linear systems. We have been looking at systems of the form $y_i = f_i(x_1, \dots, x_n)$ for $i = 1, \dots, n$, or $y = f(x)$ where $f : \mathbb{R}^n \to \mathbb{R}^n$, which, when $f$ is linear, read $Ax = b$ where $A \in \mathbb{R}^{n \times n}$.

  4. Square linear systems. The equation $Ax = b$ with $A$ an $n \times n$ matrix is a square linear system. Generally we expect this system to have exactly one solution. (If $A$ is singular, there might be no solution or many solutions.) [Figure: block diagram of $Ax = b$ with a square $A$.]
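
A minimal NumPy sketch of solving such a square system (the numbers are illustrative, not from the slides):

```python
import numpy as np

# A small nonsingular 2x2 system Ax = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)  # exact solution of the square system
print(x)                   # [0.8 1.4]
```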

  5. Non-square systems. If $A$ is $n \times m$ and $n \neq m$, the system is called (surprise!) non-square or rectangular, and is generally either overdetermined or underdetermined. (If $A$ is singular, you can't necessarily tell whether a system is over- or underdetermined from its shape.) [Figure: a tall $A$ labeled overdetermined next to a wide $A$ labeled underdetermined.]

  6. Overdetermined systems. Today we're interested in the overdetermined case: $m > n$, more equations than unknowns, so that $Ax \approx b$. Generally such an equation will have no exact solution, and we are in the business of finding a compromise. [Figure: a tall $A$ times $x$ approximating $b$.]

  7. Linear regression. Experiment to find the thermal expansion coefficient with a metal bar and a torch:
     • Measure the temperature of the bar, record as $T_1$.
     • Measure the length of the bar, record as $L_1$.
     • Crank up the heat, wait a bit.
     • Measure temperature $T_2$ and length $L_2$.
     • Repeat for many trials.
     The data is $n$ pairs $(T_i, L_i)$. The hypothesis is that $L(T) = L_0(1 + \alpha T)$, where $L_0$ is the bar's nominal length, and we want to estimate $\alpha$.

  8. Linear regression. To put this in the standard form, we have a set of given data points $(x_i, y_i)$ and we believe that $y = mx + b$. (Here $x$ is $T$, $y$ is $L$, $b$ is $L_0$, and $m$ is $L_0 \alpha$.) We believe that if there were no experimental uncertainty the model would fit the data exactly, but since there is noise the best we can do is minimize the error. The problem is
     $\min_{m,b} \sum_i (m x_i + b - y_i)^2$
     To make this look like our standard problem we use the HW2 trick:
     $mx + b = \begin{bmatrix} x & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix}$

  9. Linear regression. Stacking the data points into a matrix results in
     $\min_{m,b} \left\| \begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} - \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} \right\|^2$
     which is a linear least squares problem in the standard form.
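
A minimal sketch of this fit in NumPy, using synthetic temperature/length data (the true slope and intercept below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(20.0, 400.0, 30)                    # synthetic temperatures
L = 2.0 + 4e-5 * T + rng.normal(0, 1e-3, T.shape)   # noisy lengths: b = 2.0, m = 4e-5 (assumed)

# HW2 trick: each row is [T_i, 1], so m*T + b = [T 1] [m; b].
A = np.column_stack([T, np.ones_like(T)])
(m, b), *_ = np.linalg.lstsq(A, L, rcond=None)
print(m, b)   # estimates close to 4e-5 and 2.0
```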

  10. Polynomial regression. Suppose the model we expect to fit our data $(x_i, y_i)$ is a cubic polynomial rather than a straight line: $p(x) = ax^3 + bx^2 + cx + d$. We want to find $a$, $b$, $c$, and $d$ to best match the data:
      $\min_{a,b,c,d} \sum_i (a x_i^3 + b x_i^2 + c x_i + d - y_i)^2$
      Thinking of the coefficients as variables and the variables as coefficients, we can write this as
      $\min_{a,b,c,d} \left\| \begin{bmatrix} x_1^3 & x_1^2 & x_1 & 1 \\ \vdots & & & \vdots \\ x_n^3 & x_n^2 & x_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} - \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} \right\|^2$
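
A sketch of the cubic fit; np.vander builds exactly this matrix of decreasing powers (the data and true coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)
y = 0.5*x**3 - 1.0*x**2 + 2.0*x + 0.25 + rng.normal(0, 0.05, x.shape)

# Columns are [x^3, x^2, x, 1], matching the order of (a, b, c, d).
V = np.vander(x, 4)
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
print(coeffs)   # approximately [0.5, -1.0, 2.0, 0.25]
```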

  11. Fitting with basis functions. This same approach works for any set of functions you want to add together to approximate some data:
      $y_i \approx \sum_j a_j b_j(x_i)$
      This works for any choice of the $b_j$, such as monomials (which we just saw), sines and cosines, etc.
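
The same pattern in code: one column per basis function, evaluated at every data point. The sine/cosine basis here is one arbitrary choice for illustration:

```python
import numpy as np

def design_matrix(x, basis):
    """One column per basis function b_j, evaluated at each x_i."""
    return np.column_stack([bj(x) for bj in basis])

basis = [np.ones_like, np.sin, np.cos]   # b_0(x) = 1, b_1(x) = sin x, b_2(x) = cos x
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2*np.pi, 40)
y = 1.0 + 0.5*np.sin(x) - 0.3*np.cos(x) + rng.normal(0, 0.02, x.shape)

a, *_ = np.linalg.lstsq(design_matrix(x, basis), y, rcond=None)
print(a)   # approximately [1.0, 0.5, -0.3]
```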

  12. Economic prediction. So far we have looked at a single independent variable, with complexity arising from the type of model. Some problems have many independent variables. Moler's problem 5.11 has an example of an economic application. We would like to be able to predict total employment from a set of other economic measures:
      • $x_1$: GNP implicit price deflator
      • $x_2$: Gross National Product
      • $x_3$: Unemployment
      • $x_4$: Size of armed forces
      • $x_5$: Population
      • $x_6$: Year

  13. Economic prediction. We'd like to approximate $y$, the total employment, as a linear combination of the others:
      $y \approx \beta_0 + \sum_j \beta_j x_j$
      We have historical data available for many years, so we can set up a system with a row for each year, each of which reads
      $y = \begin{bmatrix} 1 & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_6 \end{bmatrix}$
      With more than 7 years of data, this is an overdetermined system that can be solved by least squares. Then $y$ can be predicted in future years for which only the $x$s are available.
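
A sketch of this setup with an intercept column of ones. The data below are random stand-ins, not the historical data from Moler's problem:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_vars = 16, 6
X = rng.normal(size=(n_years, n_vars))        # stand-ins for x_1..x_6, one row per year
beta_true = np.arange(7, dtype=float)         # assumed coefficients beta_0..beta_6
A = np.column_stack([np.ones(n_years), X])    # each row reads [1, x_1, ..., x_6]
y = A @ beta_true + rng.normal(0, 0.01, n_years)

beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)   # close to [0, 1, 2, 3, 4, 5, 6]
```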

  14. Least squares fitting. The basic approach is to look for an $x$ that makes $Ax$ close to $b$:
      $x^* = \arg\min_x \mathrm{distance}(Ax, b)$
      How to measure distance? Usually by the magnitude of the difference:
      $x^* = \arg\min_x \mathrm{size}(Ax - b)$
      How we measure "size" determines what kind of answer we get.

  15. Least squares fitting. The default way to measure size is with a vector norm, such as the familiar Euclidean distance (2-norm):
      $x^* = \arg\min_x \|Ax - b\|$
      which expands out to
      $x^* = \arg\min_x \sqrt{\sum_i (a_i \cdot x - b_i)^2}$
      where $a_i$ is the $i$th row of $A$. Since the square root is monotone, the minimizer is unchanged if we drop it, and our problem is to minimize the sum of squares:
      $x^* = \arg\min_x \sum_i (a_i \cdot x - b_i)^2$
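
A small numeric check (arbitrary values) that the norm and the sum of squares pick out the same minimizer, since the square root is monotone:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 6.0])
xs = np.linspace(-2.0, 4.0, 6001)       # candidate values of x, step 0.001
res = a[None, :] * xs[:, None] - b      # residual a*x - b for each candidate
norm = np.linalg.norm(res, axis=1)      # ||ax - b||
ssq = (res**2).sum(axis=1)              # sum of squares
print(xs[norm.argmin()], xs[ssq.argmin()])   # both ~1.2: same minimizer
```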

  16. Why least squares? Why are we using this sum-of-squares metric for error?
      • Because it is the right norm for the problem? Maybe, with some strong assumptions...
      • Because it corresponds to a familiar notion of distance? Getting closer...
      • Because it results in a problem that's really easy to solve? Bingo!
      Don't let its elegance seduce you into thinking that a least squares solution is the Right Answer for every fitting problem.

  17. Solving a 2 × 1 least squares system. Let's look at an example for $n = 1$, $m = 2$:
      $\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} x \approx \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, or $ax \approx b$
      In this case we are taking a scalar multiple of a single vector $a$ and trying to come close to a point $b$. [Figure: the vector $a$ and the point $b$ in the plane.]

  18. Solving a 2 × 1 least squares system. What is the closest point on this line to $b$? It is the orthogonal projection of $b$ onto the line. [Figure: the projection $ax^*$ of $b$ onto the line through $a$, with residual $r$.] If $ax^*$ is the closest point to $b$, then the residual $r = ax^* - b$ must be orthogonal to $a$:
      $a \cdot r = 0; \quad a \cdot (ax^* - b) = 0; \quad (a \cdot a)\, x^* = a \cdot b$

  19. Solving a 2 × 1 least squares system. So the 2 × 1 case boils down to
      $(a \cdot a)\, x^* = a \cdot b$
      Some interpretations of this:
      • The residual is orthogonal to $a$.
      • The vectors $ax^*$ and $b$ have the same component in the $a$ direction.
      • (If $\|a\| = 1$) $x^*$ is the component of $b$ in the $a$ direction.
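
A numeric sketch of this solution with arbitrary values: $x^* = (a \cdot b)/(a \cdot a)$, and the residual comes out orthogonal to $a$:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 6.0])

x_star = np.dot(a, b) / np.dot(a, a)   # solves (a.a) x* = a.b
r = a * x_star - b                     # residual
print(x_star)        # 30/25 = 1.2
print(np.dot(a, r))  # ~0: residual orthogonal to a
```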

  20. Solving a 3 × 2 least squares system. Now we can graduate to the 3 × 2 case:
      $Ax \approx b$, or $\begin{bmatrix} a_1 & a_2 \end{bmatrix} x \approx b$, or $a_1 x_1 + a_2 x_2 \approx b$
      Geometrically, this is finding the point on the plane spanned by $a_1$ and $a_2$ that is closest to $b$. [Figure: $b$ above the plane spanned by $a_1$ and $a_2$, its projection $Ax$, and the residual $r$.] Now the residual is orthogonal to the plane, which is to say it is orthogonal to both columns of $A$.
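
A sketch of the 3 × 2 case with arbitrary values, checking that the residual is orthogonal to both columns of $A$ (that is, $A^T r \approx 0$):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 0.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = A @ x - b
print(A.T @ r)   # ~[0, 0]: residual orthogonal to each column of A
```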
