Part II: Interpolation and Approximation Theory (Numerical Methods, January 29, 2016)


  1. Part II: Interpolation and Approximation Theory. Contents:
     1. Review of Lagrange interpolation polynomials
     2. Newton interpolation
     3. Optimal interpolation points; Chebyshev polynomials
     4. Cubic spline interpolation
     5. Error analysis

  2. Interpolation is the process of finding a function $f(x)$ whose graph passes through a given set of data points $(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$. In other words, we know the values of the function at $n+1$ points, $f(x_0) = y_0,\ f(x_1) = y_1,\ \dots,\ f(x_n) = y_n$, and we need to find an analytic expression for $f(x)$, which would then specify the values of the function at other points, not listed among the $x_i$'s. In interpolation, we need to estimate $f(x)$ for arbitrary $x$ that lies between the smallest and the largest $x_i$. (If $x$ is outside the range of the $x_i$'s, this is called extrapolation.)

  3. Polynomial interpolation. The most common functions used for interpolation are polynomials. Given a set of $n+1$ data points $(x_i, y_i)$, we want to find a polynomial curve that passes through all the points. A polynomial $P$ for which $P(x_i) = y_i$ when $0 \le i \le n$ is said to interpolate the given set of data points. The points $x_i$ are called nodes or interpolating points. For the simple case of $n = 1$, we have two points, $(x_0, y_0)$ and $(x_1, y_1)$. We can always find a linear polynomial, $P_1(x) = a_0 + a_1 x$, passing through the given points and show that it has the form
     $$P_1(x) = \frac{x - x_1}{x_0 - x_1}\, y_0 + \frac{x - x_0}{x_1 - x_0}\, y_1 = L_{1,0}(x)\, y_0 + L_{1,1}(x)\, y_1.$$
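     As a quick illustration of the two-point formula, here is a minimal Python sketch; the helper name linear_interp and the sample data are placeholders chosen for this illustration, not taken from the slides.

```python
def linear_interp(x, x0, y0, x1, y1):
    """Evaluate P_1(x) = L_{1,0}(x) y0 + L_{1,1}(x) y1 through (x0, y0) and (x1, y1)."""
    L0 = (x - x1) / (x0 - x1)  # Lagrange basis L_{1,0}(x)
    L1 = (x - x0) / (x1 - x0)  # Lagrange basis L_{1,1}(x)
    return L0 * y0 + L1 * y1

# The line through (1, 2) and (3, 8), evaluated halfway: P_1(2) = 5.
print(linear_interp(2.0, 1.0, 2.0, 3.0, 8.0))
```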

  4. The Lagrange form of the interpolating polynomial. If $n+1$ data points $(x_0, y_0), \dots, (x_n, y_n)$ are available, then we can find a polynomial of degree $n$, $P_n(x)$, which interpolates the data, that is, $P_n(x_i) = y_i$ for $0 \le i \le n$. The Lagrange form of the interpolating polynomial is
     $$P_n(x) = \sum_{i=0}^{n} L_{n,i}(x)\, y_i,$$
     where
     $$L_{n,i}(x) = \prod_{\substack{k=0 \\ k \ne i}}^{n} \frac{x - x_k}{x_i - x_k} = \frac{(x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)}.$$
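     A direct translation of the Lagrange form into code looks as follows; this is an illustrative sketch (the name lagrange_eval is arbitrary), not an optimized routine.

```python
def lagrange_eval(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial P_n(x) through the
    points (xs[i], ys[i]); the nodes xs are assumed to be distinct."""
    total = 0.0
    for i in range(len(xs)):
        # Basis polynomial L_{n,i}(x) = prod_{k != i} (x - x_k) / (x_i - x_k)
        L_i = 1.0
        for k in range(len(xs)):
            if k != i:
                L_i *= (x - xs[k]) / (xs[i] - xs[k])
        total += L_i * ys[i]
    return total

# Sample data lying on y = x^2, so the interpolant reproduces it: P_2(1) = 1.
print(lagrange_eval(1.0, [-1.0, 0.0, 2.0], [1.0, 0.0, 4.0]))
```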

  5. It has the property that
     $$L_{n,i}(x_j) = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$
     Example: Consider the function $f(x) = \ln(x)$.
     1. Construct the Lagrange form of the interpolating polynomial for $f(x)$ which passes through the points $(1, \ln 1)$, $(2, \ln 2)$ and $(3, \ln 3)$.
     2. Use the polynomial in part 1 to estimate $\ln(1.5)$ and $\ln(2.4)$. What is the error in each approximation?
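     The estimates in part 2 can be checked numerically; the sketch below uses scipy's ready-made lagrange routine (assuming scipy is available) to build $P_2$ and print the approximations and errors.

```python
import numpy as np
from scipy.interpolate import lagrange  # returns a numpy.poly1d object

xs = np.array([1.0, 2.0, 3.0])
ys = np.log(xs)            # the points (1, ln 1), (2, ln 2), (3, ln 3)
P2 = lagrange(xs, ys)      # degree-2 interpolating polynomial

for x in (1.5, 2.4):
    approx = P2(x)
    exact = np.log(x)
    print(f"P2({x}) = {approx:.6f}, ln({x}) = {exact:.6f}, error = {abs(exact - approx):.6f}")
```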

  6. Graphical representation. Figure: The polynomial $P_2(x)$ which interpolates the points $(1, \ln 1)$, $(2, \ln 2)$ and $(3, \ln 3)$ (solid curve). The graph of $\ln(x)$ is shown as the dotted curve.

  7. Existence and uniqueness of the interpolating polynomial. If $(x_0, y_0), \dots, (x_n, y_n)$ are $n+1$ distinct data points, then there exists a unique polynomial $P_n$ of degree at most $n$ such that $P_n$ interpolates the points, that is, $P_n(x_i) = y_i$ for all $0 \le i \le n$.
     Exercise 1: Consider the following data set: $(-1, 5), (0, 1), (1, 1), (2, 11)$. Show that the polynomials $P(x) = x^3 + 2x^2 - 3x + 1$ and $Q(x) = \frac{1}{8}x^4 + \frac{3}{4}x^3 + \frac{15}{8}x^2 - \frac{11}{4}x + 1$ both interpolate the data. Why does this not contradict the uniqueness property of the interpolating polynomial?
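     The first claim in Exercise 1 can be verified by direct evaluation, for example with numpy as sketched below (the uniqueness question itself is left open on purpose).

```python
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0])
ys = np.array([5.0, 1.0, 1.0, 11.0])

# Coefficients in numpy's highest-degree-first convention.
P = [1.0, 2.0, -3.0, 1.0]            # x^3 + 2x^2 - 3x + 1
Q = [1/8, 3/4, 15/8, -11/4, 1.0]     # (1/8)x^4 + (3/4)x^3 + (15/8)x^2 - (11/4)x + 1

print(np.polyval(P, xs))   # both evaluations should reproduce ys
print(np.polyval(Q, xs))
```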

  8. The Newton interpolation polynomial. Consider a set of $n+1$ data points, $(x_0, y_0), \dots, (x_n, y_n)$, and assume they are given by a function $f$, so that $y_0 = f(x_0), \dots, y_n = f(x_n)$. We introduce the following quantities, called divided differences:
     $$f[x_k] = f(x_k), \qquad f[x_k, x_{k+1}] = \frac{f[x_{k+1}] - f[x_k]}{x_{k+1} - x_k},$$
     $$f[x_k, x_{k+1}, x_{k+2}] = \frac{f[x_{k+1}, x_{k+2}] - f[x_k, x_{k+1}]}{x_{k+2} - x_k},$$
     $$f[x_k, x_{k+1}, x_{k+2}, x_{k+3}] = \frac{f[x_{k+1}, x_{k+2}, x_{k+3}] - f[x_k, x_{k+1}, x_{k+2}]}{x_{k+3} - x_k},$$
     and so on.
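     In practice the divided differences are arranged in a triangular table and filled in column by column; below is a minimal sketch (the helper name divided_differences is arbitrary) returning the coefficients needed on the next slide.

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    table = [list(ys)]                        # zeroth column: f[x_k] = f(x_k)
    for j in range(1, n):
        prev = table[-1]
        col = [(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
               for k in range(n - j)]         # j-th divided differences f[x_k, ..., x_{k+j}]
        table.append(col)
    return [col[0] for col in table]          # top entry of each column

# For the sample data (0, 1), (1, 2), (2, 5) this prints [1.0, 1.0, 1.0].
print(divided_differences([0.0, 1.0, 2.0], [1.0, 2.0, 5.0]))
```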

  9. Newton's divided differences formula. The Newton form of the interpolation polynomial is given by the divided difference formula:
     $$P_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + f[x_0, x_1, x_2, x_3](x - x_0)(x - x_1)(x - x_2) + \cdots + f[x_0, x_1, \dots, x_n](x - x_0)(x - x_1) \cdots (x - x_{n-1}).$$
     Exercise 2: Use both the Lagrange and Newton methods to find an interpolating polynomial of degree 2 for the following data: $(0, 1), (2, 2), (3, 4)$. Check that both methods yield the same polynomial!
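     As a worked illustration of the formula, with sample data chosen here only for simplicity: for the points $(0, 1), (1, 2), (2, 5)$ the divided differences are $f[x_0] = 1$, $f[x_0, x_1] = 1$, $f[x_1, x_2] = 3$ and $f[x_0, x_1, x_2] = 1$, so
     $$P_2(x) = 1 + 1 \cdot (x - 0) + 1 \cdot (x - 0)(x - 1) = x^2 + 1,$$
     which indeed passes through all three points.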

  10. Derivation of the Newton formula. Look for an interpolating polynomial in the "nested" form
     $$P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0) \cdots (x - x_{n-1}) = \sum_{k=0}^{n} a_k \left( \prod_{i=0}^{k-1} (x - x_i) \right).$$
     Then the interpolating conditions yield
     $$f(x_0) = P_n(x_0) = a_0,$$
     $$f(x_1) = P_n(x_1) = a_0 + a_1(x_1 - x_0),$$
     $$f(x_2) = P_n(x_2) = a_0 + a_1(x_2 - x_0) + a_2(x_2 - x_0)(x_2 - x_1),$$
     and so on.
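     The nested form is also how the Newton polynomial is evaluated efficiently: working from the innermost bracket outward costs only $n$ multiplications, much like Horner's rule. A minimal sketch, reusing the sample coefficients from the worked example above:

```python
def newton_eval(x, nodes, coeffs):
    """Evaluate a0 + a1 (x - x0) + ... + an (x - x0)...(x - x_{n-1})
    by nesting: a0 + (x - x0)(a1 + (x - x1)(a2 + ...))."""
    result = coeffs[-1]
    for a, x_i in zip(reversed(coeffs[:-1]), reversed(nodes[:-1])):
        result = a + (x - x_i) * result
    return result

# Nodes [0, 1, 2] with coefficients [1, 1, 1] give P(x) = 1 + x + x(x - 1) = x^2 + 1,
# so P(3) = 10.
print(newton_eval(3.0, [0.0, 1.0, 2.0], [1.0, 1.0, 1.0]))
```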

  11. Exercise 3: Show that
     $$a_0 = f(x_0) \equiv f[x_0], \qquad a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0} \equiv f[x_0, x_1], \qquad a_2 = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0} \equiv f[x_0, x_1, x_2].$$
     Exercise 4: Suppose that, in addition to the three points considered in Exercise 2, a new point becomes available, so now we must construct an interpolating polynomial for the data $(0, 1), (1, 3), (2, 2), (3, 4)$. Construct the new polynomial using both the Lagrange and Newton methods. Which one is easier?

  12. Lagrange interpolation error. Let $f$ be a continuous function on an interval $[a, b]$ which has $n+1$ continuous derivatives. If we take $n+1$ distinct points on the graph of the function, $(x_i, y_i)$ for $0 \le i \le n$, so that $y_i = f(x_i)$, then the interpolating polynomial $P_n(x)$ satisfies
     $$f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)(x - x_1) \cdots (x - x_n),$$
     where $\xi$ is a point between $a$ and $b$.
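     For the $\ln(x)$ example from earlier, $f'''(x) = 2/x^3$, so $|f'''| \le 2$ on $[1, 3]$ and the theorem gives the bound $|f(x) - P_2(x)| \le \frac{2}{3!}\,|(x - 1)(x - 2)(x - 3)|$. A quick numerical check (again assuming scipy is available):

```python
import numpy as np
from scipy.interpolate import lagrange

xs = np.array([1.0, 2.0, 3.0])
P2 = lagrange(xs, np.log(xs))

x = 1.5
actual = abs(np.log(x) - P2(x))
bound = (2.0 / 6.0) * abs((x - 1) * (x - 2) * (x - 3))  # |f'''| <= 2 on [1, 3]
print(actual, bound)   # the actual error should sit below the bound
```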

  13. Newton form of the interpolation error. If the polynomial $P_n$ interpolates the data $(x_0, f(x_0)), \dots, (x_n, f(x_n))$, then the interpolation error can also be written as
     $$f(x) - P_n(x) = f[x_0, x_1, \dots, x_n, x](x - x_0)(x - x_1) \cdots (x - x_n).$$
     Note that the errors given by the Lagrange and Newton forms above are equal.
     Exercise 5: Construct the Lagrange and Newton forms of the interpolating polynomial $P_3(x)$ for the function $f(x) = \sqrt[3]{x}$ which passes through the points $(0, 0)$, $(1, 1)$, $(8, 2)$ and $(27, 3)$. Calculate the interpolation error at $x = 5$ and compare with the theoretical error bound.

  14. Optimal points for interpolation. Assume that we need to approximate a continuous function $f(x)$ on an interval $[a, b]$ using an interpolation polynomial of degree $n$, and we have the freedom to choose the interpolation nodes $x_0, x_1, \dots, x_n$. The optimal points will be chosen so that the total interpolation error is as small as possible, which means that the worst-case error,
     $$\max_{a \le x \le b} |f(x) - P_n(x)|,$$
     is minimized. In what follows we show that the optimal points for interpolation are given by the zeros of special polynomials called Chebyshev polynomials.

  15. Chebyshev polynomials. For any integer $n \ge 0$ define the function
     $$T_n(x) = \cos(n \cos^{-1}(x)), \qquad -1 \le x \le 1.$$
     We need to show that $T_n(x)$ is a polynomial of degree $n$. We calculate the functions $T_n(x)$ recursively. Let $\theta = \cos^{-1}(x)$, so $\cos(\theta) = x$. Then $T_n(x) = \cos(n\theta)$. It is easy to see that:
     $n = 0 \implies T_0(x) = \cos(0) = 1$
     $n = 1 \implies T_1(x) = \cos(\theta) = x$
     $n = 2 \implies T_2(x) = \cos(2\theta) = 2\cos^2(\theta) - 1 = 2x^2 - 1$
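     A quick numerical sanity check, added here as an illustration, that the trigonometric definition matches the low-order polynomials derived above:

```python
import numpy as np

def T(n, x):
    """Chebyshev polynomial T_n via the trigonometric definition, valid on [-1, 1]."""
    return np.cos(n * np.arccos(x))

x = 0.3
print(T(1, x), x)              # both equal 0.3
print(T(2, x), 2 * x**2 - 1)   # both equal -0.82
```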

  16. Recurrence relations for Chebyshev polynomials. Using trigonometric formulas we can prove that
     $$T_{n+m}(x) + T_{n-m}(x) = 2\, T_n(x)\, T_m(x)$$
     for all $n \ge m \ge 0$ and all $x \in [-1, 1]$. Hence, for $m = 1$ we get
     $$T_{n+1}(x) + T_{n-1}(x) = 2x\, T_n(x),$$
     which is then used to calculate the Chebyshev polynomials of higher order.
     Example: Calculate $T_3(x)$, $T_4(x)$ and $T_5(x)$.
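     Working the example out with the $m = 1$ recurrence, starting from $T_1(x) = x$ and $T_2(x) = 2x^2 - 1$:
     $$T_3(x) = 2x(2x^2 - 1) - x = 4x^3 - 3x,$$
     $$T_4(x) = 2x(4x^3 - 3x) - (2x^2 - 1) = 8x^4 - 8x^2 + 1,$$
     $$T_5(x) = 2x(8x^4 - 8x^2 + 1) - (4x^3 - 3x) = 16x^5 - 20x^3 + 5x.$$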

  17. More properties of Chebyshev polynomials. Note that $|T_n(x)| \le 1$ for all $n \ge 0$ and all $x$ in $[-1, 1]$, and that, for $n \ge 1$,
     $$T_n(x) = 2^{n-1} x^n + \text{lower degree terms}.$$
     If we define the modified Chebyshev polynomial
     $$\widetilde{T}_n(x) = \frac{T_n(x)}{2^{n-1}},$$
     then we have
     $$\widetilde{T}_n(x) = x^n + \text{lower degree terms} \qquad \text{and} \qquad |\widetilde{T}_n(x)| \le \frac{1}{2^{n-1}}$$
     for all $n \ge 1$ and all $x$ in $[-1, 1]$.
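     For instance, $\widetilde{T}_3(x) = \frac{4x^3 - 3x}{4} = x^3 - \frac{3}{4}x$, which is monic and satisfies $|\widetilde{T}_3(x)| \le \frac{1}{4}$ on $[-1, 1]$.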

  18. Zeros of Chebyshev polynomials. We have $T_n(x) = \cos(n\theta)$ with $\theta = \cos^{-1}(x)$, so
     $$T_n(x) = 0 \implies n\theta = \pm\frac{\pi}{2}, \pm\frac{3\pi}{2}, \pm\frac{5\pi}{2}, \dots$$
     which implies
     $$\theta = \pm\frac{(2k+1)\pi}{2n}, \qquad k = 0, 1, 2, \dots$$
     and hence the zeros of $T_n(x)$ are given by
     $$x_k = \cos\left(\frac{(2k+1)\pi}{2n}\right), \qquad k = 0, 1, 2, \dots, n-1.$$
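     These nodes are straightforward to compute; below is a minimal sketch (the function name chebyshev_zeros is arbitrary) that also verifies $T_n$ vanishes there.

```python
import numpy as np

def chebyshev_zeros(n):
    """Zeros of T_n on [-1, 1]: x_k = cos((2k + 1) pi / (2n)), k = 0, ..., n - 1."""
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n))

roots = chebyshev_zeros(4)
print(roots)
print(np.cos(4 * np.arccos(roots)))   # T_4 at the roots: every entry is ~0
```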
