SLIDE 1

Part II: Interpolation and Approximation theory

Contents:

1

Review of Lagrange interpolation polynomials

2

Newton interpolation

3

Optimal interpolation points; Chebyshev polynomials

4

Cubic spline interpolation

5

Error analysis

Numerical Methods January 29, 2016 1 / 47

SLIDE 2

Interpolation is the process of finding a function $f(x)$ whose graph passes through a given set of data points $(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$. In other words, we know the values of the function at $n + 1$ points, $f(x_0) = y_0, f(x_1) = y_1, \dots, f(x_n) = y_n$, and we need to find an analytic expression for $f(x)$, which then specifies the values of the function at other points not listed among the $x_i$'s. In interpolation, we estimate $f(x)$ for an arbitrary $x$ that lies between the smallest and the largest $x_i$. (If $x$ is outside the range of the $x_i$'s, this is called extrapolation.)

SLIDE 3

Polynomial interpolation

The most common functions used for interpolation are polynomials. Given a set of $n + 1$ data points $(x_i, y_i)$, we want to find a polynomial curve that passes through all the points. A polynomial $P$ for which $P(x_i) = y_i$ when $0 \le i \le n$ is said to interpolate the given set of data points. The points $x_i$ are called nodes or interpolating points. For the simple case of $n = 1$, we have two points, $(x_0, y_0)$ and $(x_1, y_1)$. We can always find a linear polynomial $P_1(x) = a_0 + a_1 x$ passing through the given points and show that it has the form

$$P_1(x) = \frac{x - x_1}{x_0 - x_1}\,y_0 + \frac{x - x_0}{x_1 - x_0}\,y_1 = L_{1,0}(x)\,y_0 + L_{1,1}(x)\,y_1.$$

SLIDE 4

The Lagrange form of the interpolating polynomial

If $n + 1$ data points $(x_0, y_0), \dots, (x_n, y_n)$ are available, then we can find a polynomial of degree $n$, $P_n(x)$, which interpolates the data, that is, $P_n(x_i) = y_i$ for $0 \le i \le n$. The Lagrange form of the interpolating polynomial is

$$P_n(x) = \sum_{i=0}^{n} L_{n,i}(x)\, y_i, \qquad \text{where} \qquad L_{n,i}(x) = \prod_{\substack{k=0 \\ k \ne i}}^{n} \frac{x - x_k}{x_i - x_k} = \frac{(x - x_0)\cdots(x - x_{i-1})(x - x_{i+1})\cdots(x - x_n)}{(x_i - x_0)\cdots(x_i - x_{i-1})(x_i - x_{i+1})\cdots(x_i - x_n)}$$
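The formula above translates directly into code. A minimal Python sketch (the function name `lagrange_eval` is ours), applied to interpolating $\ln(x)$ at the nodes 1, 2, 3 as in the example on the next slide:

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial P_n at x.

    xs, ys: the n+1 nodes x_i and values y_i; x: evaluation point.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # basis polynomial L_{n,i}(x) = prod over k != i of (x - x_k)/(x_i - x_k)
        L = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                L *= (x - xk) / (xi - xk)
        total += L * yi
    return total

# interpolate ln(x) through (1, ln 1), (2, ln 2), (3, ln 3)
xs = [1.0, 2.0, 3.0]
ys = [math.log(v) for v in xs]
approx = lagrange_eval(xs, ys, 1.5)
print(approx, abs(approx - math.log(1.5)))
```

At the nodes themselves the polynomial reproduces the data exactly; between nodes the error is governed by the error formula derived later in these slides.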

SLIDE 5

It has the property that

$$L_{n,i}(x_j) = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$$

Example: Consider the function $f(x) = \ln(x)$.

1. Construct the Lagrange form of the interpolating polynomial for $f(x)$ which passes through the points $(1, \ln(1))$, $(2, \ln(2))$ and $(3, \ln(3))$.

2. Use the polynomial in part 1 to estimate $\ln(1.5)$ and $\ln(2.4)$. What is the error in each approximation?

SLIDE 6

Graphical representation

Figure: The polynomial P2(x) which interpolates the points (1, ln(1)), (2, ln(2)) and (3, ln(3)) (solid curve). The graph of ln(x) is shown as the dotted curve.

SLIDE 7

Existence and uniqueness of the interpolating polynomial

If $(x_0, y_0), \dots, (x_n, y_n)$ are $n + 1$ data points with distinct $x_i$, then there exists a unique polynomial $P_n$ of degree at most $n$ such that $P_n$ interpolates the points, that is, $P_n(x_i) = y_i$ for all $0 \le i \le n$. Exercise 1: Consider the following data set: $(-1, 5)$, $(0, 1)$, $(1, 1)$, $(2, 11)$. Show that the polynomials

$$P(x) = x^3 + 2x^2 - 3x + 1 \qquad \text{and} \qquad Q(x) = \frac{1}{8}x^4 + \frac{3}{4}x^3 + \frac{15}{8}x^2 - \frac{11}{4}x + 1$$

both interpolate the data. Why does this not contradict the uniqueness property of the interpolating polynomial?

SLIDE 8

The Newton interpolation polynomial

Consider a set of $n + 1$ data points, $(x_0, y_0), \dots, (x_n, y_n)$, and assume they are given by a function $f$, so that $y_0 = f(x_0), \dots, y_n = f(x_n)$. We introduce the following quantities, called divided differences:

$$f[x_k] = f(x_k), \qquad f[x_k, x_{k+1}] = \frac{f[x_{k+1}] - f[x_k]}{x_{k+1} - x_k}$$

$$f[x_k, x_{k+1}, x_{k+2}] = \frac{f[x_{k+1}, x_{k+2}] - f[x_k, x_{k+1}]}{x_{k+2} - x_k}$$

$$f[x_k, x_{k+1}, x_{k+2}, x_{k+3}] = \frac{f[x_{k+1}, x_{k+2}, x_{k+3}] - f[x_k, x_{k+1}, x_{k+2}]}{x_{k+3} - x_k}, \qquad \dots$$

SLIDE 9

Newton’s divided differences formula

The Newton form of the interpolation polynomial is given by the divided difference formula:

$$P_n(x) = f[x_0] + f[x_0, x_1]\,(x - x_0) + f[x_0, x_1, x_2]\,(x - x_0)(x - x_1) + f[x_0, x_1, x_2, x_3]\,(x - x_0)(x - x_1)(x - x_2) + \cdots + f[x_0, x_1, \dots, x_n]\,(x - x_0)(x - x_1)\cdots(x - x_{n-1})$$

Exercise 2: Use both the Lagrange and Newton methods to find an interpolating polynomial of degree 2 for the following data: $(0, 1)$, $(2, 2)$, $(3, 4)$. Check that both methods yield the same polynomial!
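The divided-difference table and the Newton form can be sketched in Python (the names `newton_coeffs` and `newton_eval` are ours); the usage below is the data of Exercise 2:

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        # after pass j, c[i] holds f[x_{i-j}, ..., x_i]
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    result = c[-1]
    for i in range(len(c) - 2, -1, -1):
        result = result * (x - xs[i]) + c[i]
    return result

# Exercise 2 data: (0, 1), (2, 2), (3, 4)
xs = [0.0, 2.0, 3.0]
ys = [1.0, 2.0, 4.0]
c = newton_coeffs(xs, ys)
print(c)  # f[x0], f[x0,x1], f[x0,x1,x2]
```

Adding a new data point only appends one coefficient to `c`, which is exactly the advantage over the Lagrange form explored in Exercise 4.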

SLIDE 10

Derivation of the Newton formula

Look for an interpolating polynomial in the "nested" form

$$P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0)\cdots(x - x_{n-1}) = \sum_{k=0}^{n} a_k \prod_{i=0}^{k-1} (x - x_i)$$

Then the interpolating conditions yield

$$f(x_0) = P_n(x_0) = a_0$$
$$f(x_1) = P_n(x_1) = a_0 + a_1(x_1 - x_0)$$
$$f(x_2) = P_n(x_2) = a_0 + a_1(x_2 - x_0) + a_2(x_2 - x_0)(x_2 - x_1)$$
$$\vdots$$

SLIDE 11

Exercise 3: Show that

$$a_0 = f(x_0) \equiv f[x_0], \qquad a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0} \equiv f[x_0, x_1], \qquad a_2 = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0} \equiv f[x_0, x_1, x_2]$$

Exercise 4: Suppose that, in addition to the three points considered in Exercise 2, a new point becomes available, so now we must construct an interpolating polynomial for the data $(0, 1)$, $(1, 3)$, $(2, 2)$, $(3, 4)$. Construct the new polynomial using both the Lagrange and Newton methods. Which one is easier?

SLIDE 12

Lagrange Interpolation error

Let $f$ be a continuous function on an interval $[a, b]$ which has $n + 1$ continuous derivatives. If we take $n + 1$ distinct points on the graph of the function, $(x_i, y_i)$ for $0 \le i \le n$ with $y_i = f(x_i)$, then the interpolating polynomial $P_n(x)$ satisfies

$$f(x) - P_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x - x_0)(x - x_1)\cdots(x - x_n)$$

where $\xi$ is a point between $a$ and $b$.

SLIDE 13

Newton form of the interpolation error

If the polynomial $P_n$ interpolates the data $(x_0, f(x_0)), \dots, (x_n, f(x_n))$, then the interpolation error can also be written as

$$f(x) - P_n(x) = f[x_0, x_1, \dots, x_n, x]\,(x - x_0)(x - x_1)\cdots(x - x_n)$$

Note that the errors given by the Lagrange and Newton forms above are equal. Exercise 5: Construct the Lagrange and Newton forms of the interpolating polynomial $P_3(x)$ for the function $f(x) = \sqrt[3]{x}$ which passes through the points $(0, 0)$, $(1, 1)$, $(8, 2)$ and $(27, 3)$. Calculate the interpolation error at $x = 5$ and compare with the theoretical error bound.

SLIDE 14

Optimal points for interpolation

Assume that we need to approximate a continuous function $f(x)$ on an interval $[a, b]$ using an interpolation polynomial of degree $n$, and we have the freedom to choose the interpolation nodes $x_0, x_1, \dots, x_n$. The optimal points will be chosen so that the total interpolation error is as small as possible, which means that the worst-case error,

$$\max_{a \le x \le b} |f(x) - P_n(x)|,$$

is minimized. In what follows we show that the optimal points for interpolation are given by the zeros of special polynomials called Chebyshev polynomials.

SLIDE 15

Chebyshev polynomials

For any integer $n \ge 0$ define the function

$$T_n(x) = \cos(n \cos^{-1}(x)), \qquad -1 \le x \le 1$$

We need to show that $T_n(x)$ is a polynomial of degree $n$. We calculate the functions $T_n(x)$ recursively. Let $\theta = \cos^{-1}(x)$, so $\cos(\theta) = x$ and $T_n(x) = \cos(n\theta)$. It is easy to see that:

$$n = 0 \implies T_0(x) = \cos(0) = 1$$
$$n = 1 \implies T_1(x) = \cos(\theta) = x$$
$$n = 2 \implies T_2(x) = \cos(2\theta) = 2\cos^2(\theta) - 1 = 2x^2 - 1$$

SLIDE 16

Recurrence relations for Chebyshev polynomials

Using trigonometric formulas we can prove that

$$T_{n+m}(x) + T_{n-m}(x) = 2\,T_n(x)\,T_m(x)$$

for all $n \ge m \ge 0$ and all $x \in [-1, 1]$. Hence, for $m = 1$ we get

$$T_{n+1}(x) + T_{n-1}(x) = 2x\,T_n(x)$$

which is then used to calculate the Chebyshev polynomials of higher order. Example: Calculate $T_3(x)$, $T_4(x)$ and $T_5(x)$.
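The recurrence $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$ can be applied mechanically to coefficient lists; a Python sketch (coefficients stored lowest degree first; the function name is ours):

```python
def chebyshev_coeffs(n):
    """Coefficients of T_n(x), lowest degree first,
    generated via T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t_cur = [1], [0, 1]   # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # multiply T_cur by 2x (shift the list up one degree and double),
        # then subtract T_prev
        t_next = [0] + [2 * c for c in t_cur]
        for i, c in enumerate(t_prev):
            t_next[i] -= c
        t_prev, t_cur = t_cur, t_next
    return t_cur

print(chebyshev_coeffs(3))  # T_3 = 4x^3 - 3x
```

Running this for $n = 3, 4, 5$ gives $T_3 = 4x^3 - 3x$, $T_4 = 8x^4 - 8x^2 + 1$, $T_5 = 16x^5 - 20x^3 + 5x$, answering the example.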

SLIDE 17

More properties of Chebyshev polynomials

Note that $|T_n(x)| \le 1$ for all $x \in [-1, 1]$, and $T_n(x) = 2^{n-1} x^n + \text{lower degree terms}$ for all $n \ge 1$. If we define the modified Chebyshev polynomial

$$\widetilde{T}_n(x) = \frac{T_n(x)}{2^{n-1}}$$

then we have

$$|\widetilde{T}_n(x)| \le \frac{1}{2^{n-1}} \qquad \text{and} \qquad \widetilde{T}_n(x) = x^n + \text{lower degree terms}$$

for all $n \ge 1$ and all $x \in [-1, 1]$.

SLIDE 18

Zeros of Chebyshev polynomials

We have $T_n(x) = \cos(n\theta)$ with $\theta = \cos^{-1}(x)$, so

$$T_n(x) = 0 \implies n\theta = \pm\frac{\pi}{2}, \pm\frac{3\pi}{2}, \pm\frac{5\pi}{2}, \dots$$

which implies

$$\theta = \pm\frac{(2k + 1)\pi}{2n}, \qquad k = 0, 1, 2, \dots$$

and hence the zeros of $T_n(x)$ are given by

$$x_k = \cos\left(\frac{(2k + 1)\pi}{2n}\right), \qquad k = 0, 1, 2, \dots, n - 1.$$
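The zero formula is easy to check numerically, using the definition $T_n(x) = \cos(n \cos^{-1} x)$ directly (the helper name `chebyshev_zeros` is ours):

```python
import math

def chebyshev_zeros(n):
    """Zeros of T_n on [-1, 1]: x_k = cos((2k+1)π / (2n)), k = 0..n-1."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

# sanity check: T_4(x_k) = cos(4 · arccos(x_k)) should vanish at every zero
for x in chebyshev_zeros(4):
    print(round(math.cos(4 * math.acos(x)), 12))
```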

SLIDE 19

The minimum size property

Let $n \ge 1$ be an integer and consider all possible monic polynomials (that is, polynomials whose highest-degree term has coefficient equal to 1) of degree $n$. Then the degree-$n$ monic polynomial with the smallest maximum absolute value on $[-1, 1]$ is the modified Chebyshev polynomial $\widetilde{T}_n(x)$, and its maximum value is $1/2^{n-1}$. In other words,

$$\frac{1}{2^{n-1}} = \max_{-1 \le x \le 1} |\widetilde{T}_n(x)| \le \max_{-1 \le x \le 1} |P_n(x)|$$

for any monic polynomial $P_n(x)$ of degree $n$.

SLIDE 20

Optimal interpolating points

Let $f(x)$ be a continuous function on $[-1, 1]$. We are looking for an approximation given by an interpolating polynomial of degree $n$, $P_n(x)$. Let $x_0, x_1, x_2, \dots, x_n$ be the interpolating nodes. Recall the formula for the interpolation error:

$$f(x) - P_n(x) = \frac{(x - x_0)(x - x_1)\cdots(x - x_n)}{(n+1)!}\, f^{(n+1)}(\xi)$$

where $\xi$ is in $[-1, 1]$. We need to find the interpolating points so that we minimize

$$E[P_n] = \max_{-1 \le x \le 1} |f(x) - P_n(x)|$$

SLIDE 21

This is equivalent to minimizing

$$\max_{-1 \le x \le 1} |(x - x_0)(x - x_1)\cdots(x - x_n)|$$

But we know that the smallest possible maximum absolute value of a monic polynomial of degree $n + 1$ on $[-1, 1]$ is attained by the modified Chebyshev polynomial $\widetilde{T}_{n+1}(x)$, hence

$$(x - x_0)(x - x_1)(x - x_2)\cdots(x - x_n) = \widetilde{T}_{n+1}(x)$$

and so the $n + 1$ interpolating points $x_0, x_1, x_2, \dots, x_n$ are the zeros of $T_{n+1}(x)$, that is,

$$x_k = \cos\left(\frac{(2k + 1)\pi}{2(n + 1)}\right), \qquad k = 0, 1, 2, \dots, n.$$

SLIDE 22

Example

Let $f(x) = e^x$ on $[-1, 1]$. Find the interpolating polynomial of degree 3 which approximates $f(x)$ such that the maximum error is minimized. Then find an upper bound for the interpolation error on $[-1, 1]$. The interpolation nodes $x_0, x_1, x_2, x_3$ have to be chosen as the zeros of $T_4(x)$, that is,

$$\cos\left(\frac{\pi}{8}\right), \quad \cos\left(\frac{3\pi}{8}\right), \quad \cos\left(\frac{5\pi}{8}\right), \quad \cos\left(\frac{7\pi}{8}\right)$$

We use Newton's divided difference formula for the interpolating polynomial

$$P_3(x) = f(x_0) + (x - x_0)\,f[x_0, x_1] + (x - x_0)(x - x_1)\,f[x_0, x_1, x_2] + (x - x_0)(x - x_1)(x - x_2)\,f[x_0, x_1, x_2, x_3]$$

and calculate

SLIDE 23

$$f(x_0) = 2.5190, \quad f(x_1) = 1.4662, \quad f(x_2) = 0.6820, \quad f(x_3) = 0.3970$$

$$f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = 1.9454, \quad f[x_1, x_2] = 1.0246, \quad f[x_2, x_3] = 0.5267$$

$$f[x_0, x_1, x_2] = 0.7047, \quad f[x_1, x_2, x_3] = 0.3811, \quad f[x_0, x_1, x_2, x_3] = 0.1752$$

so the interpolating polynomial is

$$P_3(x) = 0.9946 + 0.9989\,x + 0.5429\,x^2 + 0.1752\,x^3$$

and

$$|\text{Error}| = \frac{\left|(x - x_0)(x - x_1)(x - x_2)(x - x_3)\, f^{(4)}(\xi)\right|}{4!} \le \frac{1}{2^3} \cdot \frac{e}{4!} \approx 0.014$$

for all $x \in [-1, 1]$.
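The whole example can be checked numerically. A Python sketch (variable and function names ours) that rebuilds $P_3$ from the Chebyshev nodes via divided differences and compares the worst grid error against the theoretical bound $e/(2^3 \cdot 4!)$:

```python
import math

# Chebyshev nodes (zeros of T_4) and divided differences for f(x) = e^x
nodes = [math.cos((2 * k + 1) * math.pi / 8) for k in range(4)]
c = [math.exp(x) for x in nodes]
for j in range(1, 4):
    for i in range(3, j - 1, -1):
        c[i] = (c[i] - c[i - 1]) / (nodes[i] - nodes[i - j])

def p3(x):
    # Newton form evaluated by nested multiplication
    r = c[3]
    for i in (2, 1, 0):
        r = r * (x - nodes[i]) + c[i]
    return r

# worst-case error over a fine grid stays below the bound e/(2^3 · 4!)
worst = max(abs(math.exp(x) - p3(x)) for x in [i / 500 - 1 for i in range(1001)])
print(worst, math.e / (8 * 24))
```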

SLIDE 24

Conclusions

1. The degree-$n$ monic polynomial with the smallest maximum absolute value on $[-1, 1]$ is the modified Chebyshev polynomial $\widetilde{T}_n(x)$, and its maximum value is $1/2^{n-1}$.

2. Hence, the Chebyshev polynomials can be used to minimize the approximation error by providing optimal interpolation points.

3. If we want to interpolate a function $f(x)$ by a polynomial of degree $n$ on the interval $[-1, 1]$, the interpolation points which give the smallest possible error are given by the zeros of the Chebyshev polynomial of degree $n + 1$, $T_{n+1}(x)$.

SLIDE 25

Change of interval

The linear transformation

$$x = \frac{b - a}{2}\,t + \frac{a + b}{2}$$

translates and scales the interval $-1 \le t \le 1$ onto the interval $a \le x \le b$. The optimal interpolation points on the interval $[a, b]$ are then given by the "transformed" zeros of the Chebyshev polynomials,

$$x_k = \frac{b - a}{2} \cos\left(\frac{(2k + 1)\pi}{2n}\right) + \frac{a + b}{2}, \qquad k = 0, 1, 2, \dots, n - 1,$$

and the following inequality holds on $[a, b]$:

$$|(x - x_0)(x - x_1)\cdots(x - x_{n-1})| \le \frac{\left(\frac{b - a}{2}\right)^n}{2^{n-1}}$$
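A small Python sketch of the transformed nodes (the function name is ours):

```python
import math

def chebyshev_nodes(a, b, n):
    """n optimal interpolation nodes on [a, b]: the mapped zeros of T_n."""
    return [(b - a) / 2 * math.cos((2 * k + 1) * math.pi / (2 * n)) + (a + b) / 2
            for k in range(n)]

print(chebyshev_nodes(0.0, 2.0, 3))
```

The nodes are symmetric about the interval midpoint and cluster toward the endpoints, which is what tames the product $|(x - x_0)\cdots(x - x_{n-1})|$.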

SLIDE 26

Review of Taylor series and polynomials

Suppose $f(x)$ is a function on $[a, b]$ which is $(n+1)$-times differentiable. If $a \le x_0 \le b$, then $f(x)$ can be approximated by its $n$-th Taylor polynomial

$$f(x) \approx P_n(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$

with approximation error (also called the remainder term)

$$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}$$

where $\xi$ is a number between $x_0$ and $x$. Hence $f(x) = P_n(x) + R_n(x)$.

SLIDE 27

Examples

The infinite series obtained from $P_n(x)$ as $n \to \infty$ is called the Taylor series of $f(x)$ about $x_0$. In the case $x_0 = 0$, the Taylor polynomials/series are often called MacLaurin polynomials/series. The Taylor/MacLaurin series for some standard functions are as follows:

$$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + \cdots$$

$$\sin(x) = x - \frac{x^3}{3!} + \cdots + (-1)^n \frac{x^{2n+1}}{(2n+1)!} + \cdots$$

$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + (-1)^n \frac{x^{2n}}{(2n)!} + \cdots$$
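A Python sketch of the $e^x$ series truncated at degree $n$ (the function name is ours), showing how quickly the MacLaurin polynomial approaches $e$ at $x = 1$:

```python
import math

def taylor_exp(x, n):
    """n-th MacLaurin polynomial of e^x: sum of x^k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

print(taylor_exp(1.0, 4), math.e)
```

For $n = 4$ the remainder bound of the previous slide gives $|R_4(1)| \le e/5! \approx 0.023$, and the actual error is well inside it.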

SLIDE 28

Exercises

1. Find the third Taylor polynomial $P_3(x)$ for the function $f(x) = \sqrt{x + 1}$ about $x_0 = 0$. Approximate $\sqrt{0.75}$ using this polynomial. Find an error bound for this approximation and compare it with the actual error.

2. Find the second Taylor polynomial $P_2(x)$ for the function $f(x) = e^x \cos(x)$ about $x_0 = 0$. Approximate $f(0.5)$ using this polynomial. Find an error bound for this approximation and compare it with the actual error.

3. Let $f(x) = \cos(x)$. Write down the second and third MacLaurin polynomials for $f(x)$ and the corresponding remainder terms.

4. Find the third Taylor polynomial $P_3(x)$ for $f(x) = (x - 1)\ln(x)$ about $x_0 = 1$. Approximate $f(0.5)$ using $P_3(0.5)$. Find an upper bound for the approximation error using the Taylor remainder term and compare it to the actual error.

SLIDE 29

Applications of Chebyshev polynomials: Economization of power series

Given a function $f(x)$, we know how to approximate it by an interpolating polynomial $P_n(x)$, using either equally spaced interpolation points or the zeros of an appropriate Chebyshev polynomial. (Recall that the latter approach has the advantage of minimizing the approximation error.) The question now is how to approximate a given function $f(x)$ by a polynomial of the lowest possible degree which satisfies a certain error bound. Since every sufficiently smooth function can be approximated by a Taylor (or MacLaurin) polynomial, we need to learn how to approximate a polynomial by a polynomial of smaller degree.

SLIDE 30

Approximating a given polynomial by a smaller degree polynomial

Consider approximating a polynomial of degree $n$,

$$P_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0,$$

on $[-1, 1]$ with a polynomial of degree at most $n - 1$. We need to choose $P_{n-1}(x)$ which minimizes

$$\max_{x \in [-1, 1]} |P_n(x) - P_{n-1}(x)|$$

Note that $(P_n(x) - P_{n-1}(x))/a_n$ is a monic polynomial of degree $n$. If this polynomial is to have the smallest possible maximum absolute value, then it needs to be equal to the modified Chebyshev polynomial of degree $n$, $\widetilde{T}_n(x)$; its maximum absolute value is then $1/2^{n-1}$.

SLIDE 31

Hence we have

$$\frac{1}{a_n}\left(P_n(x) - P_{n-1}(x)\right) = \widetilde{T}_n(x)$$

so $P_{n-1}(x) = P_n(x) - a_n \widetilde{T}_n(x)$, and this choice gives

$$\max_{x \in [-1, 1]} |P_n(x) - P_{n-1}(x)| = \frac{|a_n|}{2^{n-1}}$$

In conclusion, the approximating polynomial $P_{n-1}$ which minimizes the error has to be chosen as $P_{n-1}(x) = P_n(x) - a_n \widetilde{T}_n(x)$.
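The economization step is a single subtraction on coefficient lists; a Python sketch (coefficients lowest degree first; the function names are ours). The usage reproduces the $e^x$ example worked later in these slides:

```python
def chebyshev_monic(n):
    """Coefficients (lowest first) of the monic polynomial T~_n = T_n / 2^(n-1)."""
    t_prev, t_cur = [1.0], [0.0, 1.0]   # T_0, T_1
    for _ in range(n - 1):
        # recurrence T_{n+1} = 2x T_n - T_{n-1} on coefficient lists
        t_next = [0.0] + [2 * c for c in t_cur]
        for i, c in enumerate(t_prev):
            t_next[i] -= c
        t_prev, t_cur = t_cur, t_next
    return [c / 2 ** (n - 1) for c in t_cur]

def economize(p):
    """Drop the top degree of polynomial p: return p - a_n * T~_n."""
    n = len(p) - 1
    tn = chebyshev_monic(n)
    return [a - p[n] * t for a, t in zip(p[:-1], tn[:-1])]

# economize P4(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 (MacLaurin for e^x)
p4 = [1.0, 1.0, 1 / 2, 1 / 6, 1 / 24]
print(economize(p4))  # should match 191/192 + x + (13/24) x^2 + (1/6) x^3
```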

SLIDE 32

General economization procedure

Problem: Suppose we need to approximate $f(x)$ by a polynomial of smallest possible degree such that the approximation error satisfies $|f(x) - P(x)| < \varepsilon$.

Step 1: Find a polynomial $P_n(x)$ such that $|f(x) - P_n(x)| < \varepsilon$ (for example, a Taylor polynomial).

Step 2: Now try to find a polynomial of degree $n - 1$, $P_{n-1}$, which satisfies the same error requirement. Since

$$|f(x) - P_{n-1}(x)| \le |f(x) - P_n(x)| + |P_n(x) - P_{n-1}(x)|$$

the second term on the right should be as small as possible; therefore, using the procedure detailed above, we choose $P_{n-1}(x) = P_n(x) - a_n \widetilde{T}_n(x)$.

SLIDE 33

Step 3: Check the error $|f(x) - P_{n-1}(x)|$. If it is still less than $\varepsilon$, repeat Step 2 and try to find a polynomial of degree $n - 2$ using the same procedure. If it now exceeds $\varepsilon$, then the smallest-degree polynomial which satisfies the required error bound is $P_n(x)$.

Example: Starting with the fourth-order MacLaurin polynomial, find the polynomial of least degree which best approximates the function $f(x) = e^x$ on $[-1, 1]$ while keeping the error less than 0.05.

SLIDE 34

Solution:

The function $f(x) = e^x$ can be written as $f(x) = P_4(x) + R_4(x)$ where

$$P_4(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24}, \qquad R_4(x) = \frac{x^5}{120} f^{(5)}(\xi)$$

We see that $|R_4(x)| \le e/120 \approx 0.023 < 0.05$ for all $-1 \le x \le 1$. The polynomial of degree 3 which approximates $P_4(x)$ with the smallest possible error is

$$P_3(x) = P_4(x) - a_4 \widetilde{T}_4(x) = \left(1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24}\right) - \frac{1}{24}\left(x^4 - x^2 + \frac{1}{8}\right) = \frac{191}{192} + x + \frac{13}{24} x^2 + \frac{1}{6} x^3$$

SLIDE 35

and the error is

$$|P_3(x) - P_4(x)| = |a_4 \widetilde{T}_4(x)| \le \frac{1}{24} \cdot \frac{1}{2^3} \le 0.0053$$

and hence $|f(x) - P_3(x)| \le 0.023 + 0.0053 = 0.0283 < 0.05$. We now try to find a polynomial of degree 2:

$$P_2(x) = P_3(x) - \frac{1}{6}\widetilde{T}_3(x) = \frac{191}{192} + \frac{9}{8} x + \frac{13}{24} x^2$$

However,

$$|P_3(x) - P_2(x)| \le \frac{1}{6} \cdot \frac{1}{2^2} \approx 0.042$$

which, when added to the previously accumulated error of 0.0283, exceeds the total tolerance of 0.05. Hence, the smallest possible polynomial with the required property is $P_3(x)$.

SLIDE 36

Exercises

1. Find the sixth-order MacLaurin polynomial for $\sin(x)$ and use Chebyshev polynomials to obtain a lesser-degree polynomial approximation while keeping the error less than 0.01 on $[-1, 1]$.

2. Same problem for $f(x) = x e^x$ on $[-1, 1]$.

SLIDE 37

Cubic Spline Interpolation

A different approach to approximating a function on an interval is to split the interval into a number of subintervals and construct a different approximating polynomial on each subinterval. This is called piecewise polynomial approximation. The simplest approximating polynomials are linear functions, so that the initial function is approximated by straight line segments on each subinterval. However, this method has the disadvantage that the resulting approximating function is not differentiable (smooth). The most commonly used piecewise polynomial approximation uses cubic polynomials: the cubic is the lowest degree allowing the interpolation and smoothness conditions to be satisfied.

SLIDE 38

Cubic splines

Definition: Given a function $f(x)$ on an interval $[a, b]$ and a set of nodes $a = x_0 < x_1 < x_2 < \cdots < x_n = b$, a cubic spline interpolant $S$ for $f(x)$ is a function which satisfies the following conditions:

1. $S(x) = S_j(x)$ on each subinterval $[x_j, x_{j+1}]$, where $S_j$ is the cubic polynomial $S_j(x) = a_j + b_j(x - x_j) + c_j(x - x_j)^2 + d_j(x - x_j)^3$.

2. $S_j(x_j) = f(x_j)$ and $S_j(x_{j+1}) = f(x_{j+1})$ for each $j = 0, 1, \dots, n - 1$.

3. $S_{j+1}(x_{j+1}) = S_j(x_{j+1})$ for each $j = 0, 1, \dots, n - 2$.

4. $S'_{j+1}(x_{j+1}) = S'_j(x_{j+1})$ for each $j = 0, 1, \dots, n - 2$.

5. $S''_{j+1}(x_{j+1}) = S''_j(x_{j+1})$ for each $j = 0, 1, \dots, n - 2$.

SLIDE 39

Notes

Condition (2) above means that $S$ interpolates the function $f(x)$ at each of the nodes $x_0, x_1, \dots, x_n$. Conditions (3)–(5) state that $S$ and its derivatives are continuous on $[a, b]$.

Note that the function $S$ consists of $n$ cubic polynomials, each having 4 unknown coefficients, so there are a total of $4n$ unknowns to be determined. The interpolation condition provides $n + 1$ equations. The 3 continuity conditions provide an additional $3(n - 1)$ equations. To completely determine the interpolant, two more conditions are required. These are applied at the boundary points $a$ and $b$:

6. One of the following boundary conditions is satisfied at $x = a, b$:
   1. $S''(x_0) = S''(x_n) = 0$ (natural or free boundary)
   2. $S'(x_0) = f'(x_0)$ and $S'(x_n) = f'(x_n)$ (clamped boundary)

SLIDE 40

Example

Construct a natural cubic spline passing through the points $(1, 2)$, $(2, 3)$ and $(3, 5)$. We need to construct two cubic polynomials, one for $[1, 2]$:

$$S_0(x) = a_0 + b_0(x - 1) + c_0(x - 1)^2 + d_0(x - 1)^3$$

and one for the interval $[2, 3]$:

$$S_1(x) = a_1 + b_1(x - 2) + c_1(x - 2)^2 + d_1(x - 2)^3$$

We need 8 equations to determine the 8 unknown constants. The first 4 are interpolation conditions:

$$2 = f(1) = S_0(1) = a_0, \qquad 3 = f(2) = S_0(2) = a_0 + b_0 + c_0 + d_0$$
$$3 = f(2) = S_1(2) = a_1, \qquad 5 = f(3) = S_1(3) = a_1 + b_1 + c_1 + d_1$$

SLIDE 41

Next we have 2 derivative conditions:

$$S'_0(2) = S'_1(2) \implies b_0 + 2c_0 + 3d_0 = b_1$$
$$S''_0(2) = S''_1(2) \implies 2c_0 + 6d_0 = 2c_1$$

The final two come from the natural boundary conditions:

$$S''_0(1) = 0 \implies 2c_0 = 0 \qquad \text{and} \qquad S''_1(3) = 0 \implies 2c_1 + 6d_1 = 0$$

Solving the system for the 8 unknowns gives

$$S(x) = \begin{cases} 2 + \frac{3}{4}(x - 1) + \frac{1}{4}(x - 1)^3, & x \in [1, 2] \\[4pt] 3 + \frac{3}{2}(x - 2) + \frac{3}{4}(x - 2)^2 - \frac{1}{4}(x - 2)^3, & x \in [2, 3] \end{cases}$$
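The eight conditions can be checked directly against this solution; a Python sketch with the exact derivatives written out (function names ours):

```python
# the two pieces of the natural spline from the worked example
def s0(x):   return 2 + 0.75 * (x - 1) + 0.25 * (x - 1) ** 3
def s1(x):   return 3 + 1.5 * (x - 2) + 0.75 * (x - 2) ** 2 - 0.25 * (x - 2) ** 3
def s0p(x):  return 0.75 + 0.75 * (x - 1) ** 2                  # S0'
def s1p(x):  return 1.5 + 1.5 * (x - 2) - 0.75 * (x - 2) ** 2   # S1'
def s0pp(x): return 1.5 * (x - 1)                               # S0''
def s1pp(x): return 1.5 - 1.5 * (x - 2)                         # S1''

# interpolation, smoothness at the knot x = 2, natural boundary conditions
print(s0(1), s0(2), s1(2), s1(3))      # 2.0 3.0 3.0 5.0
print(s0p(2) == s1p(2), s0pp(2) == s1pp(2), s0pp(1) == 0, s1pp(3) == 0)
```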

SLIDE 42

Graphical representation

Figure: The cubic spline S which interpolates the points (1, 2), (2, 3) and (3, 5).

SLIDE 43

Example 2

Now construct a clamped cubic spline passing through the points $(1, 2)$, $(2, 3)$ and $(3, 5)$, which has $S'(1) = 2$ and $S'(3) = 1$. Let the cubic polynomials for $[1, 2]$ and $[2, 3]$ be given by

$$S_0(x) = a_0 + b_0(x - 1) + c_0(x - 1)^2 + d_0(x - 1)^3$$
$$S_1(x) = a_1 + b_1(x - 2) + c_1(x - 2)^2 + d_1(x - 2)^3$$

The first 6 conditions are the same as in the previous example. However, the two boundary conditions are now

$$S'_0(1) = 2 \implies b_0 = 2 \qquad \text{and} \qquad S'_1(3) = 1 \implies b_1 + 2c_1 + 3d_1 = 1.$$

The spline is now obtained as

$$S(x) = \begin{cases} 2 + 2(x - 1) - \frac{5}{2}(x - 1)^2 + \frac{3}{2}(x - 1)^3, & x \in [1, 2] \\[4pt] 3 + \frac{3}{2}(x - 2) + 2(x - 2)^2 - \frac{3}{2}(x - 2)^3, & x \in [2, 3] \end{cases}$$

SLIDE 44

Uniqueness of cubic spline interpolant

If $f$ is defined at $a = x_0 < x_1 < \cdots < x_n = b$, then there is a unique spline interpolant for $f$ on these nodes which satisfies the natural boundary conditions $S''(a) = 0$ and $S''(b) = 0$.

If $f$ is defined at $a = x_0 < x_1 < \cdots < x_n = b$ and differentiable at $a$ and $b$, then there is a unique spline interpolant for $f$ on these nodes which satisfies the clamped boundary conditions $S'(a) = f'(a)$ and $S'(b) = f'(b)$.

SLIDE 45

Spline interpolation error

The following error bound formula holds for a clamped cubic spline. Let $f$ be a function on $[a, b]$ which is 4 times differentiable, with $\max_{a \le x \le b} |f^{(4)}(x)| = M$. If $S$ is the unique clamped cubic spline interpolant for $f(x)$ with respect to the nodes $a = x_0 < x_1 < \cdots < x_n = b$, then we have

$$|f(x) - S(x)| \le \frac{5M}{384} \max_{0 \le j \le n-1} (x_{j+1} - x_j)^4$$

for all $x \in [a, b]$. A similar formula holds for natural cubic splines but is more difficult to express.

SLIDE 46

Example

Consider the function $f(x) = e^x$ on $[0, 3]$ and the nodes $x_0 = 0$, $x_1 = 1$, $x_2 = 2$ and $x_3 = 3$.

1. Find a natural spline interpolant for $f$ on these nodes.

2. Find a clamped spline interpolant for $f$ on these nodes.

3. Calculate the error of each approximation at $x = 2.5$ and compare with the error bound obtained for the clamped spline.

SLIDE 47

The natural cubic spline is given by

$$S(x) = \begin{cases} 1 + 1.466\,x + 0.252\,x^3, & x \in [0, 1] \\ 2.718 + 2.223\,(x - 1) + 0.757\,(x - 1)^2 + 1.691\,(x - 1)^3, & x \in [1, 2] \\ 7.389 + 8.810\,(x - 2) + 5.830\,(x - 2)^2 - 1.943\,(x - 2)^3, & x \in [2, 3] \end{cases}$$

Figure: The cubic spline S (green) which interpolates the function ex (black).
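These coefficients can be reproduced with the standard tridiagonal algorithm for natural cubic splines; a Python sketch (the function name is ours), printing the first piece for comparison with the formula above:

```python
import math

def natural_cubic_spline(xs, ys):
    """Coefficients (a_j, b_j, c_j, d_j) of each piece of the natural spline,
    with S_j(x) = a_j + b_j t + c_j t^2 + d_j t^3 and t = x - x_j."""
    n = len(xs) - 1
    h = [xs[j + 1] - xs[j] for j in range(n)]
    # right-hand side of the tridiagonal system for the c_j
    alpha = [0.0] * (n + 1)
    for j in range(1, n):
        alpha[j] = 3 / h[j] * (ys[j + 1] - ys[j]) - 3 / h[j - 1] * (ys[j] - ys[j - 1])
    # forward sweep (natural boundary: c_0 = c_n = 0)
    l, mu, z = [1.0] + [0.0] * n, [0.0] * (n + 1), [0.0] * (n + 1)
    for j in range(1, n):
        l[j] = 2 * (xs[j + 1] - xs[j - 1]) - h[j - 1] * mu[j - 1]
        mu[j] = h[j] / l[j]
        z[j] = (alpha[j] - h[j - 1] * z[j - 1]) / l[j]
    # back substitution for c_j, then b_j and d_j
    c = [0.0] * (n + 1)
    b, d = [0.0] * n, [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return [(ys[j], b[j], c[j], d[j]) for j in range(n)]

xs = [0.0, 1.0, 2.0, 3.0]
coeffs = natural_cubic_spline(xs, [math.exp(x) for x in xs])
print([round(v, 3) for v in coeffs[0]])  # first piece: a0, b0, c0, d0
```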
