

  1. Least Squares and Data Fitting

  2. Data fitting How do we best fit a set of data points?

  2. Linear Least Squares 1) Fitting with a line. Given $m$ data points $\{(t_1, y_1), \dots, (t_m, y_m)\}$, we want to find the function $y = x_0 + x_1 t$ that best fits the data (or better, we want to find the coefficients $x_0$, $x_1$). Thinking geometrically, we can ask: "what is the line that most nearly passes through all the points?"

  4. Given $m$ data points $\{(t_1, y_1), \dots, (t_m, y_m)\}$, we want to find $x_0$ and $x_1$ such that $y_i = x_0 + x_1 t_i \;\; \forall i \in \{1, \dots, m\}$, or in matrix form $A x = b$:

$$\begin{bmatrix} 1 & t_1 \\ \vdots & \vdots \\ 1 & t_m \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}$$

with sizes $m \times n$, $n \times 1$, and $m \times 1$. Note that this system of linear equations has more equations than unknowns — an OVERDETERMINED SYSTEM. We want to find the appropriate linear combination of the columns of $A$ that makes up the vector $b$. If a solution exists that satisfies $A x = b$, then $b \in \operatorname{range}(A)$.
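
As a concrete illustration (not from the slides), here is a minimal NumPy sketch that assembles this overdetermined system for a handful of invented points:

```python
import numpy as np

# Hypothetical data points (t_i, y_i); any small data set works here.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Design matrix A with rows [1, t_i]: one row per data point.
A = np.column_stack([np.ones_like(t), t])
b = y

print(A.shape, b.shape)   # (5, 2) and (5,): 5 equations, 2 unknowns
```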

  5. Linear Least Squares • In most cases, $b \notin \operatorname{range}(A)$ and $A x = b$ does not have an exact solution! • Therefore, an overdetermined system is better expressed as $A x \cong b$.

  6. Linear Least Squares • Least Squares: find the solution $x$ that minimizes the residual $r = b - A x$. • Let's define the function $\varphi$ as the square of the 2-norm of the residual: $\varphi(x) = \| b - A x \|_2^2$.

  7. Linear Least Squares • Least Squares: find the solution $x$ that minimizes the residual $r = b - A x$. • With $\varphi(x) = \| b - A x \|_2^2$, the least squares problem becomes $\min_x \varphi(x)$. • Suppose $\varphi : \mathbb{R}^n \to \mathbb{R}$ is a smooth function; then $\varphi(x)$ reaches a (local) maximum or minimum at a point $x^* \in \mathbb{R}^n$ only if $\nabla \varphi(x^*) = 0$.

  8. How to find the minimizer? • To minimize the 2-norm of the residual vector, $\min_x \varphi(x) = \| b - A x \|_2^2$, expand $\varphi(x) = (b - A x)^T (b - A x)$, so that $\nabla \varphi(x) = 2 (A^T A x - A^T b)$. First order necessary condition: $\nabla \varphi(x) = 0 \rightarrow A^T A x = A^T b$ — the Normal Equations, a square linear system of equations. Second order sufficient condition: the Hessian $H_\varphi(x) = 2 A^T A$ is a positive semi-definite matrix (positive definite when $A$ has full column rank) $\rightarrow$ the solution is a minimum.
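
None of the following code is from the slides; it is a minimal NumPy sketch with invented data that forms and solves the normal equations, then sanity-checks the result against the library least squares solver:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t])

# Normal equations: A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

# Reference solution from NumPy's least squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, x_ref)                      # the two should agree to rounding

# The gradient 2(A^T A x - A^T b) should vanish at the minimizer.
print(2 * (A.T @ A @ x - A.T @ b))
```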

  9. Linear Least Squares (another approach) • Find $z = A x$ which is closest to the vector $b$. • What is the vector $z = A x \in \operatorname{range}(A)$ that is closest to the vector $b$ in the Euclidean norm? When the residual $r = b - z = b - A x$ is orthogonal to all columns of $A$, then $z$ is closest to $b$: $A^T r = A^T (b - A x) = 0$, which again gives $A^T A x = A^T b$.

  10. Summary: • $A$ is an $m \times n$ matrix, where $m > n$. • $m$ is the number of data points; $n$ is the number of parameters of the "best fit" function. • The Linear Least Squares problem $A x \cong b$ always has a solution. • The Linear Least Squares solution $x$ minimizes the square of the 2-norm of the residual: $\min_x \| b - A x \|_2^2$. • One method to solve the minimization problem is to solve the system of Normal Equations $A^T A x = A^T b$. • Let's see some examples and discuss the limitations of this method.

  11. Example: Solve $A^T A x = A^T b$. [Slide figure: data points with the fitted line; the data set was not recovered. The code sketch after slide 8 walks through the same computation.]

  12. Data fitting — not always a line fit! • The fit does not need to be a line! For example, here we are fitting the data using a quadratic curve. It is still Linear Least Squares: the problem is linear in its coefficients!

  13. Another example: we want to find the coefficients of the quadratic function that best fits the data points: $y = x_0 + x_1 t + x_2 t^2$. We would not want our "fit" curve to pass through the data points exactly, as we are looking to model the general trend and not capture the noise.

  14. Data fitting: for the quadratic fit through the points $(t_i, y_i)$, the system is

$$\begin{bmatrix} 1 & t_1 & t_1^2 \\ \vdots & \vdots & \vdots \\ 1 & t_m & t_m^2 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}$$

Solve: $A^T A x = A^T b$.
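
The same recipe in code, again a hedged sketch with invented data: only the design matrix changes; the solve is identical because the problem is still linear in $x_0, x_1, x_2$:

```python
import numpy as np

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
b = np.array([ 4.2,  1.4, 0.3, 0.9, 4.1, 9.2])   # roughly quadratic, noisy

# Columns [1, t, t^2]: linear in the coefficients even though
# the model is quadratic in t.
A = np.column_stack([np.ones_like(t), t, t**2])

x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)   # [x0, x1, x2] of the fitted curve y = x0 + x1 t + x2 t^2
```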

  15. Which function is not suitable for linear least squares? A) $y = a + b x + c x^2 + d x^3$ B) $y = x a + b x + c x^2 + d x^3$ C) $y = a \sin x + b / \cos x$ D) $y = a \sin x + x / \cos(b x)$ E) $y = a e^{-2x} + b e^{2x}$

  16. Computational Cost of solving $A^T A x = A^T b$: • Compute $A^T A$: $O(m n^2)$ • Factorize $A^T A$: LU $\rightarrow O(\tfrac{2}{3} n^3)$, Cholesky $\rightarrow O(\tfrac{1}{3} n^3)$ • Solve: $O(n^2)$ • Since $m > n$, the overall cost is $O(m n^2)$.
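
Since $A^T A$ is symmetric positive definite when $A$ has full column rank, Cholesky is the natural factorization here. A sketch (assuming SciPy is available; data invented):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t])

# O(m n^2) to form A^T A, O(n^3 / 3) for the Cholesky factorization,
# O(n^2) for the two triangular solves.
c, low = cho_factor(A.T @ A)
x = cho_solve((c, low), A.T @ b)
print(x)
```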

  17. Short questions: Given the data in the table below, which of the plots shows the line of best fit in terms of least squares? A) B) C) D) [Slide figure: data table and four candidate plots; not recovered.]

  18. Short questions: Given the data in the table below, and the least squares model $y = c_1 + c_2 \sin(\pi t) + c_3 \sin(\pi t / 2) + c_4 \sin(\pi t / 4)$, written in matrix form as $A x \cong b$, determine the entry $A_{23}$ of the matrix $A$. Note that indices start with 1. A) $-1.0$ B) $1.0$ C) $-0.7$ D) $0.7$ E) $0.0$ [Slide table: the $t$ values needed to evaluate $A_{23} = \sin(\pi t_2 / 2)$ were not recovered.]

  19. Solving Linear Least Squares with SVD

  20. What we have learned so far… $A$ is an $m \times n$ matrix where $m > n$ (more points to fit than coefficients to be determined). Normal Equations: $A^T A x = A^T b$. • The solution of $A x \cong b$ is unique if and only if $\operatorname{rank}(A) = n$ ($A$ has full column rank). • $\operatorname{rank}(A) = n$ $\rightarrow$ the columns of $A$ are linearly independent $\rightarrow$ $n$ non-zero singular values $\rightarrow$ $A^T A$ has only positive eigenvalues $\rightarrow$ $A^T A$ is a symmetric positive definite matrix $\rightarrow$ $A^T A$ is invertible, and $x = (A^T A)^{-1} A^T b$. • If $\operatorname{rank}(A) < n$, then $A$ is rank-deficient, and the solution of the linear least squares problem is not unique.
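
A quick way to check full column rank in code (a sketch, not from the slides) is to look at the singular values, mirroring the chain of implications above:

```python
import numpy as np

A = np.column_stack([np.ones(5), np.arange(5.0)])   # invented 5x2 example

sigma = np.linalg.svd(A, compute_uv=False)
print(sigma)                       # all n values nonzero -> full column rank
print(np.linalg.matrix_rank(A))    # 2, i.e. rank(A) = n

# Eigenvalues of A^T A are the squared singular values: all positive here.
print(np.linalg.eigvalsh(A.T @ A))
```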

  21. Condition number for the Normal Equations. Finding the least squares solution of $A x \cong b$ (where $A$ is a full rank matrix) using the Normal Equations $A^T A x = A^T b$ has some advantages, since we are solving a square system of linear equations with a symmetric matrix (and hence it is possible to use decompositions such as the Cholesky factorization). However, the normal equations tend to worsen the conditioning of the system: $\operatorname{cond}(A^T A) = (\operatorname{cond}(A))^2$. How can we solve the least squares problem without squaring the condition number of the matrix?
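
The squaring of the condition number is easy to observe numerically; a sketch (invented sizes) with a moderately ill-conditioned polynomial design matrix:

```python
import numpy as np

t = np.linspace(0, 1, 20)
A = np.vander(t, 8)          # 20x8 Vandermonde (polynomial) design matrix

kA  = np.linalg.cond(A)
kAA = np.linalg.cond(A.T @ A)
print(kA, kAA, kA**2)        # kAA is approximately kA squared
```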

  22. SVD to solve linear least squares problems. $A$ is an $m \times n$ rectangular matrix with $m > n$, and hence the SVD decomposition is given by:

$$A = \begin{bmatrix} \vdots & & \vdots \\ u_1 & \cdots & u_m \\ \vdots & & \vdots \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ & 0 & \end{bmatrix} \begin{bmatrix} \cdots & v_1^T & \cdots \\ & \vdots & \\ \cdots & v_n^T & \cdots \end{bmatrix}$$

We want to find the least squares solution of $A x \cong b$, where $A = U \Sigma V^T$, or better expressed in reduced form: $A = U_R \Sigma_R V^T$.

  23. Recall the Reduced SVD ($m > n$): $A = U_R \Sigma_R V^T$, with sizes $(m \times n) = (m \times n)(n \times n)(n \times n)$.
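
In NumPy the reduced SVD corresponds to `full_matrices=False`; a small sketch (invented matrix) confirming the shapes above:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(10, 3))   # m=10, n=3

UR, s, Vt = np.linalg.svd(A, full_matrices=False)
print(UR.shape, s.shape, Vt.shape)   # (10, 3), (3,), (3, 3)

# Reassemble: A = U_R Sigma_R V^T
print(np.allclose(A, UR @ np.diag(s) @ Vt))
```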

  24. SVD to solve linear least squares problems. In reduced form,

$$A = U_R \Sigma_R V^T = \begin{bmatrix} \vdots & & \vdots \\ u_1 & \cdots & u_n \\ \vdots & & \vdots \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix} \begin{bmatrix} \cdots & v_1^T & \cdots \\ & \vdots & \\ \cdots & v_n^T & \cdots \end{bmatrix}$$

We want to find the least squares solution of $A x \cong b$. Substituting $A = U_R \Sigma_R V^T$ into the Normal Equations $A^T A x = A^T b$:

$$(U_R \Sigma_R V^T)^T (U_R \Sigma_R V^T)\, x = (U_R \Sigma_R V^T)^T b$$
$$V \Sigma_R U_R^T U_R \Sigma_R V^T x = V \Sigma_R U_R^T b$$
$$V \Sigma_R \Sigma_R V^T x = V \Sigma_R U_R^T b \qquad (\text{since } U_R^T U_R = I)$$
$$\Sigma_R V^T x = U_R^T b$$

When can we take the inverse of the singular value matrix $\Sigma_R$?

  25. " ๐‘พ # ๐’š = ๐šป ๐‘บ ๐‘ฝ $ # ๐’„ ๐šป ๐‘บ 1) Full rank matrix ( ๐œ $ โ‰  0 โˆ€๐‘— ): Unique solution: rank ๐‘ฉ = ๐‘œ ! ๐’„ "' ๐‘ฝ $ ๐’š = ๐‘พ ๐šป ๐‘บ # ๐’„ ๐‘พ # ๐’š = %& ๐‘ฝ $ ๐šป ๐‘บ ๐‘›ร—1 ๐‘œร—1 ๐‘œร—๐‘œ ๐‘œร—๐‘œ ๐‘œร—๐‘› 2) Rank deficient matrix ( rank ๐‘ฉ = ๐‘  < ๐‘œ ) " ๐‘พ # ๐’š = ๐šป ๐‘บ ๐‘ฝ $ # ๐’„ ๐šป ๐‘บ Solution is not unique!! " Find solution ๐’š such that min ๐’š ๐œš ๐’š = ๐’„ โˆ’ ๐‘ฉ ๐’š " min ๐’š ๐Ÿ‘ and also ๐’š

  26. 2) Rank deficient matrix (continued). We want to find the solution $x$ that satisfies $\Sigma_R V^T x = U_R^T b$ and also satisfies $\min \| x \|_2$. Change of variables: set $V^T x = z$ and then solve $\Sigma_R z = U_R^T b$ for the variable $z$:

$$z_i = \frac{u_i^T b}{\sigma_i}, \qquad i = 1, 2, \dots, r$$

What do we do when $i > r$, where $\sigma_i = 0$? Which choice of $z_i$ will minimize $\| x \|_2 = \| V z \|_2 = \| z \|_2$ (since $V$ is orthogonal)? Set $z_i = 0$ for $i = r + 1, \dots, n$. Evaluate

$$x = V z = \sum_{i=1}^{n} z_i v_i = \sum_{i=1}^{r} \frac{u_i^T b}{\sigma_i}\, v_i$$
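
A sketch of this minimum-norm recipe (not from the slides), on a matrix made rank-deficient on purpose by duplicating a column. Singular values below a tolerance are treated as zero, and the result is checked against `np.linalg.lstsq`, which also returns the minimum-norm solution:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t, t])   # duplicated column: rank 2 < n = 3

UR, s, Vt = np.linalg.svd(A, full_matrices=False)

tol = max(A.shape) * np.finfo(float).eps * s[0]
r = np.sum(s > tol)                            # numerical rank

# z_i = u_i^T b / sigma_i for i <= r, z_i = 0 otherwise; then x = V z.
z = np.zeros(A.shape[1])
z[:r] = (UR.T @ b)[:r] / s[:r]
x = Vt.T @ z

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x, x_ref)                                # both are the minimum-norm solution
```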
