
SLIDE 1

Near best rational approximation and spectral methods

Joris Van Deun

University of Antwerp

Dept. Math. & Computer Science

20 May 2008

1 / 44

SLIDE 2

Part I Near best interpolation

2 / 44

SLIDE 3

Introduction

A very old and very classical problem...

Given a real, continuous function f(x) on [−1, 1], find a good polynomial approximation

Possible solutions

◮ Best (minimax) polynomial approximation according to the norm ‖f‖ := ‖f‖∞ = max_{−1≤x≤1} |f(x)|
◮ Polynomial least squares approximation
◮ Interpolating polynomial

3 / 44

SLIDE 4

Linear minimax approximation

Problem

Given linearly independent functions {ϕk}, find

  min_{ak} ‖f(x) − Σ_{k=0}^{n} ak ϕk(x)‖

Solution

The ak are such that f − Σ ak ϕk equi-oscillates, i.e. it has n + 2 extremal points of equal magnitude and alternating sign

Example: minimax polynomial approximation

Take ϕk(x) = x^k for k = 0, 1, . . . , n

4 / 44

SLIDE 5

Interpolating polynomial

Take n + 1 points x0, x1, . . . , xn and construct the polynomial pn(x) such that f(xi) = pn(xi), i = 0, 1, . . . , n

Choice of interpolation points? It is well known that

  f(x) − pn(x) = f^(n+1)(ξ)/(n + 1)! · (x − x0) · · · (x − xn)

where ξ depends on x, on x0, x1, . . . , xn and on f

Try to choose x0, . . . , xn such that f − pn equi-oscillates . . .

5 / 44

SLIDE 6

Equi-oscillating polynomial on [−1, 1]

Find points x0, . . . , xn such that (x − x0) · · · (x − xn) equi-oscillates on [−1, 1]

◮ Chebyshev polynomial

  Tn+1(x) = cos((n + 1) arccos x)

◮ Zeros are given by

  xk = cos(π(2k + 1)/(2(n + 1))) for k = 0, . . . , n

◮ Interpolation in the xk is near best

[Figure: Chebyshev zeros on [−1, 1]]
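Interpolation in the Chebyshev zeros is easy to try numerically. A minimal pure-Python sketch (the Lagrange-form evaluation and the test function exp are my own illustrative choices, not from the talk):

```python
import math

def cheb_zeros(n):
    """Zeros of T_{n+1}: x_k = cos(pi*(2k+1)/(2(n+1))), k = 0..n."""
    return [math.cos(math.pi * (2 * k + 1) / (2 * (n + 1))) for k in range(n + 1)]

def lagrange_eval(nodes, fvals, x):
    """Evaluate the interpolating polynomial in Lagrange form at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(nodes, fvals)):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += fi * li
    return total

n = 10
nodes = cheb_zeros(n)
fvals = [math.exp(x) for x in nodes]
# max error of the degree-10 interpolant of exp on a fine grid of [-1, 1]
err = max(abs(math.exp(x) - lagrange_eval(nodes, fvals, x))
          for x in [i / 500 - 1 for i in range(1001)])
print(err)  # tiny: exp is entire, so convergence is very fast
```

The straightforward O(n²) Lagrange form is fine at this scale; for large n one would switch to the barycentric formula.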

6 / 44

SLIDE 7

Alternative interpretation

From

  f(x) − pn(x) = f^(n+1)(ξ)/(n + 1)! · (x − x0) · · · (x − xn)

it follows that

  ‖f − pn‖ ≤ max_{−1≤t≤1} |f^(n+1)(t)|/(n + 1)! · ‖(x − x0) · · · (x − xn)‖

Minimising ‖(x − x0) · · · (x − xn)‖ over x0, . . . , xn leads to the Chebyshev zeros

The unique monic polynomial of degree n + 1 which deviates least from zero in the infinity norm is a scaled Chebyshev polynomial

7 / 44

SLIDE 8

How good is near best?

Let f be a continuous function on [−1, 1], pn its polynomial interpolant in the Chebyshev zeros, and pn* its best approximation on [−1, 1] according to the infinity norm. Then

  ‖f − pn‖ ≤ (2 + (2/π) log n) ‖f − pn*‖

◮ If n < 10^5 we lose at most 1 digit
◮ If n < 10^66 we lose at most 2 digits

If f is analytic in an ellipse with foci ±1 and semimajor/semiminor axis lengths a ≥ 1 and b ≥ 0, then

  ‖f − pn‖ = O((a + b)^−n),  n → ∞
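The digit-loss claims follow from evaluating the bound with the natural logarithm; a quick check (helper name is mine):

```python
import math

def loss_factor(n):
    # upper-bound factor (2 + (2/pi) * log n) relating interpolation in the
    # Chebyshev zeros to the best (minimax) approximation error
    return 2 + (2 / math.pi) * math.log(n)

print(loss_factor(1e5))   # about 9.3  -> below 10, so at most 1 digit lost
print(loss_factor(1e66))  # about 98.7 -> below 100, so at most 2 digits lost
```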

8 / 44

SLIDE 9

Rational generalisation

What if f has singularities close to [−1, 1]?

Example

Take

  f(x) = 1/(ε² + x²),  0 < ε ≪ 1

with poles at ±iε. Then

  ‖f − pn‖ = O((1 + ε)^−n)

Polynomial interpolation converges too slowly!
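The slow geometric rate is easy to observe. A sketch in the same setup (I take ε = 0.1 rather than ≪ 1 so the run stays cheap; the node and error helpers are my own):

```python
import math

def cheb_zeros(n):
    # zeros of T_{n+1}
    return [math.cos(math.pi * (2 * k + 1) / (2 * (n + 1))) for k in range(n + 1)]

def interp_error(f, n, grid):
    """Max error over grid of the Lagrange interpolant in the Chebyshev zeros."""
    nodes = cheb_zeros(n)
    fvals = [f(x) for x in nodes]
    def p(x):
        total = 0.0
        for i, xi in enumerate(nodes):
            li = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += fvals[i] * li
        return total
    return max(abs(f(x) - p(x)) for x in grid)

eps = 0.1
f = lambda x: 1.0 / (eps * eps + x * x)   # poles at +/- i*eps
grid = [i / 250 - 1 for i in range(501)]
e20, e40 = interp_error(f, 20, grid), interp_error(f, 40, grid)
print(e20, e40)  # doubling n only shrinks the error by roughly (1 + eps)^(-20)
```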

9 / 44

SLIDE 10

Near best fixed pole rational interpolation

Let poles α1, . . . , αm be given (real or in complex conjugate pairs) and put

  πm(x) = (x − α1) · · · (x − αm)

Then

  f(x) − pn(x)/πm(x) = [πm(ξ)f(ξ)]^(n+1)/(n + 1)! · (x − x0) · · · (x − xn)/πm(x)

when f(xi) = pn(xi)/πm(xi), i = 0, 1, . . . , n

10 / 44

SLIDE 11

Linear minimax approximation

Problem

Given linearly independent functions {ϕk}, find

  min_{ak} ‖f(x) − Σ_{k=0}^{n} ak ϕk(x)‖

Solution

The ak are such that f − Σ ak ϕk equi-oscillates, i.e. it has n + 2 extremal points of equal magnitude and alternating sign

Example: minimax rational approximation

Take ϕk(x) = x^k/πm(x) for k = 0, 1, . . . , n

11 / 44

SLIDE 12

Near best fixed pole rational interpolation

Problem statement

Given πm, find x0, . . . , xn with n + 1 ≥ m such that ‖qn+1/πm‖ is minimal, where qn+1(x) = (x − x0) · · · (x − xn) (equivalently: such that qn+1/πm equi-oscillates)

History

◮ Special case studied by Markoff, 1884
◮ General case solved by Bernstein, 1937
◮ Discussed in Appendix A of Achieser’s “Theory of Approximation”, 1956
◮ Only a theoretical solution; no properties, computational aspects, . . .

12 / 44

SLIDE 13

Joukowski transformation

  x = J(z) = (1/2)(z + 1/z),   z = J^−1(x) = x − √(x² − 1)

13 / 44

SLIDE 14

Near best fixed pole rational interpolation

Solution

◮ Let {α1, . . . , αm} denote the zeros of πm
◮ Put βk = J^−1(αk) for k = 1, . . . , m
◮ Define Bm by

  Bm(z) = ((z − β1)/(1 − β̄1 z)) · · · ((z − βm)/(1 − β̄m z))

Then

  T̂n(x) = (1/2)(z^(n−m) Bm(z) + 1/(z^(n−m) Bm(z)))

is a rational function in x of the form qn(x)/πm(x).

The interpolation points x0, . . . , xn are the zeros of T̂n+1(x).
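The construction translates almost directly into code. A sketch (helper names are mine; I assume the Blaschke denominators carry conjugated β's and that the J^−1 branch is chosen so that |z| ≤ 1, details the extraction lost). With m = 0 it must reduce to Tn(x) = cos(n arccos x):

```python
import cmath
import math

def J_inv(x):
    """Inverse Joukowski map z = x - sqrt(x^2 - 1), branch with |z| <= 1."""
    z = x - cmath.sqrt(complex(x) ** 2 - 1)
    return z if abs(z) <= 1 else 1 / z   # pick the root inside the unit disk

def blaschke(z, betas):
    """Blaschke product B_m(z) = prod (z - b_k) / (1 - conj(b_k) z)."""
    prod = complex(1)
    for b in betas:
        prod *= (z - b) / (1 - b.conjugate() * z)
    return prod

def T_hat(n, x, alphas):
    """Rational Chebyshev function: (w + 1/w)/2 with w = z^(n-m) B_m(z)."""
    betas = [J_inv(a) for a in alphas]
    z = J_inv(x)
    w = z ** (n - len(alphas)) * blaschke(z, betas)
    return ((w + 1 / w) / 2).real

# m = 0: reduces to the ordinary Chebyshev polynomial
x = 0.3
print(abs(T_hat(5, x, []) - math.cos(5 * math.acos(x))))  # essentially zero
```

For x in [−1, 1], z lies on the unit circle, so |w| = 1 and T̂n is real with |T̂n| ≤ 1, i.e. it equi-oscillates in modulus.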

14 / 44

SLIDE 15

Equi-oscillating rational function on [−1, 1]

Example

πm(x) = ∏_{k=1}^{m/2} (x² + k²ω²) where ω = 0.1

[Figure: T̂n(x) equi-oscillating on [−1, 1]]

[Figure: poles & zeros]

Note

Poles attract zeros (see later: electrostatic interpretation)

15 / 44

SLIDE 16

Why bother?

Can we not just do rational interpolation in the (polynomial) Chebyshev points (zeros of Chebyshev polynomial Tn)?

◮ If α1, . . . , αm correspond to poles of f close to the interval, then ‖πm f − pn‖ will be small (enlarging the ellipse of analyticity)
◮ However, dividing by πm can destroy this advantage, and ‖f − pn/πm‖ may not be small
◮ If poles gather near the interior of the interval, Chebyshev zeros are useless
◮ Application: differential equations with interior layers

16 / 44

SLIDE 17

Example

Let

  f(x) = (πx/ω) / sinh(πx/ω)

This function has simple poles at ±ikω for k = 1, 2, . . .

◮ Interpolate by pn−1 in the zeros of Tn
◮ Interpolate by pn−1/πn−2
  ◮ in the zeros of Tn
  ◮ in the zeros of T̂n

Plot the interpolation errors ‖f − pn−1‖ and ‖f − pn−1/πn−2‖ for the case ω = 0.01

17 / 44

SLIDE 18

Graph of f(x)


18 / 44

SLIDE 19

Interpolation error as function of n

[Figure: interpolation error versus n (10 to 100), error axis from 10^−14 to 10^−2]

19 / 44


SLIDE 22

Part II Properties

20 / 44

SLIDE 23

Properties of Tn and T̂n

Definition

  Tn(x) = (1/2)(z^n + 1/z^n)

  T̂n(x) = (1/2)(z^(n−m) Bm(z) + 1/(z^(n−m) Bm(z)))

Orthogonality property

  ∫_{−1}^{1} Tj(x) Tk(x) dx/√(1 − x²) = 0,  j ≠ k

  ∫_{−1}^{1} T̂j(x) T̂k(x) dx/√(1 − x²) = 0,  j ≠ k, j, k ≥ m

21 / 44

SLIDE 24

Properties of Tn and T̂n

Three term recurrence

It is well known that

  Tn+1(x) = 2x Tn(x) − Tn−1(x) for n = 1, 2, . . .

Writing

  T̂n(x) = qn(x)/πn(x),  where πn(x) = (x − α1) · · · (x − αn) for n ≤ m and πn(x) = πm(x) for n > m,

we can extend the definition of T̂n to n < m using the theory of orthogonal rational functions
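The three-term recurrence is the standard way to evaluate Tn without trigonometry; a minimal sketch (function name is mine):

```python
import math

def cheb_T(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = 1.0, x          # T_0 and T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):      # step up to T_n
        t_prev, t = t, 2 * x * t - t_prev
    return t

# agrees with the closed form cos(n arccos x) on [-1, 1]
x = 0.42
print(abs(cheb_T(7, x) - math.cos(7 * math.acos(x))))  # essentially zero
```

The rational recurrence on the next slide generalises exactly this loop, with x-dependent coefficients.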

22 / 44

SLIDE 25

They satisfy the recurrence relation

  T̂n(x) = (An x/(1 − x/αn) + Bn (1 − x/αn−1)/(1 − x/αn)) T̂n−1(x) + Cn ((1 − x/ᾱn−2)/(1 − x/αn)) T̂n−2(x)

for n = 1, 2, . . . with T̂0 = 1 and T̂−1 = 0

The recurrence coefficients An, Bn and Cn are known explicitly

23 / 44

SLIDE 26

Explicit formulas for the recurrence coefficients

  An = 2(1 − βn βn−1)(1 − |βn−1|²) / ((1 + βn−1²)(1 + βn²))

  Bn = −[(1 − |βn−1|²)(βn + β̄n−2) + (βn−1 + β̄n−1)(1 − βn β̄n−2)] / ((1 + βn²)(1 − βn−1 β̄n−2))

  Cn = −(1 − βn β̄n−1)(1 + β̄n−2²) / ((1 − βn−1 β̄n−2)(1 + βn²))

24 / 44

SLIDE 27

Interpolation points as eigenvalues

From the three term recurrence it follows immediately that the zeros of Tn(x) are the eigenvalues of the n × n tridiagonal matrix

  [ 0    1             ]
  [ 1/2  0    1/2      ]
  [      ...  ...  ... ]
  [           1/2  0   ]

Explicitly:

  xk = cos(π(2k + 1)/(2n)),  k = 0, 1, . . . , n − 1

25 / 44

SLIDE 28

Interpolation points as eigenvalues

The zeros of T̂n(x) are also the generalised eigenvalues of the matrix pencil (Jn, Jn Dn − Sn + In), where Jn is tridiagonal with diagonal entries −Bk/Ak, superdiagonal entries 1/Ak and subdiagonal entries −Ck/Ak,

  Dn = diag(1/α1, . . . , 1/αn−1)

and Sn collects the entries 2i ℑ(αk) Ck+2/(|αk|² Ak+2) for k = 1, . . . , n − 2

26 / 44

SLIDE 29

Electrostatic interpretation of the zeros

Chebyshev polynomials

◮ Put n positive unit charges on (−1, 1), free to move
◮ Fix positive charges of magnitude 1/4 at −1 and 1
◮ The equilibrium position of the unit charges corresponds to the zeros of Tn

[Figure: equilibrium positions on [−1, 1]]

27 / 44

SLIDE 30

Chebyshev rational functions

Denote by α̃k the m eigenvalues of the matrix

  diag(α1, . . . , αm) + (1/(n − m)) w wᵀ

where w = [√w1, · · · , √wm]ᵀ and wk = (1 − βk²)/(2βk)

These α̃k are ghost poles. If m is fixed and n → ∞, they converge to the real poles

28 / 44

SLIDE 31

Chebyshev rational functions

◮ Put n positive unit charges on (−1, 1), free to move
◮ Fix positive charges of magnitude 1/4 at −1 and 1
◮ Fix negative charges of magnitude 1/2 at each αk and α̃k
◮ The equilibrium position of the unit charges corresponds to the zeros of T̂n

[Figure: equilibrium positions with poles]

29 / 44

SLIDE 32

Part III Spectral collocation methods

30 / 44

SLIDE 33

Approximating the derivative

Example

Uniform grid x0, . . . , xn with xj+1 − xj = h and function values f(xj) = fj

[Figure: function values on a uniform grid]

Finite difference approximation

  f′(xj) ≈ (fj+1 − fj−1)/(2h)

31 / 44


SLIDE 35

Differentiation matrix

Writing down this approximation for each j gives

  [f′0, . . . , f′n]ᵀ ≈ h⁻¹ D̃ [f0, . . . , fn]ᵀ

where D̃ is tridiagonal with −1/2 on the subdiagonal and 1/2 on the superdiagonal

Differentiation becomes sparse matrix-vector multiplication f′ ≈ Df

Differential equation f′(x) + f(x) = g(x) becomes linear system (D + I)f = g
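The differentiation matrix can be sketched directly (a dense list-of-lists for clarity; the one-sided first-order boundary rows are my own simple choice, since the slide's matrix layout was garbled in extraction):

```python
import math

def diff_matrix(n, h):
    """(n+1) x (n+1) differentiation matrix: centered differences in the
    interior, one-sided differences at the two endpoints."""
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n):
        D[j][j - 1] = -1 / (2 * h)
        D[j][j + 1] = 1 / (2 * h)
    D[0][0], D[0][1] = -1 / h, 1 / h
    D[n][n - 1], D[n][n] = -1 / h, 1 / h
    return D

def matvec(D, f):
    return [sum(d * v for d, v in zip(row, f)) for row in D]

n, a, b = 200, -1.0, 1.0
h = (b - a) / n
xs = [a + j * h for j in range(n + 1)]
df = matvec(diff_matrix(n, h), [math.sin(x) for x in xs])
# interior rows are second-order accurate: error O(h^2)
err = max(abs(df[j] - math.cos(xs[j])) for j in range(1, n))
print(err)
```

Solving f′ + f = g then amounts to one solve with D + I; a spectral method keeps the same structure but swaps in a dense (Chebyshev) differentiation matrix.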

32 / 44

SLIDE 36

Spectral collocation

◮ Use a global interpolant (polynomial or rational function) instead of a local one
◮ Dense differentiation matrices instead of sparse
◮ O(e^−cn) convergence instead of O(n^−2) or O(n^−4)

[Figure: function values on the Chebyshev grid]

33 / 44

SLIDE 37

Boundary value problems

Boundary conditions

If boundary conditions are given at 1 and −1, then we need those points as interpolation points

◮ either use the zeros of Tn or T̂n, and include −1 and 1
◮ or use the extrema (which already include −1 and 1)

Polynomial case

Extrema of Tn are given by the zeros of Un−1 together with the points −1 and 1, where Un−1 is a Chebyshev polynomial of the second kind

Rational case

Extrema of T̂n are given by the zeros of Ûn−1 together with the points −1 and 1, where Ûn−1 is a Chebyshev rational function of the second kind

34 / 44

SLIDE 38

Solution with boundary/interior layer

If the solution f(x) changes abruptly (almost discontinuously) in a small region of [−1, 1], then

◮ polynomial interpolation converges too slowly ◮ rational interpolation is appropriate

How do we choose the poles? Obtain a rough approximation of f(x) using

◮ boundary layer analysis, or
◮ polynomial interpolation, or
◮ . . .

and extract poles via some kind of Padé approximation

35 / 44

SLIDE 39

Interior layer problem

Solve the boundary value problem

  ε d²f/dx² + x df/dx + x f = 0,  −1 < x < 1

with boundary values f(−1) = e and f(1) = 2/e, where 0 < ε ≪ 1

Asymptotic estimate for ε → 0 gives

  f(x) ≈ ((1/2) erf(x/√(2ε)) + 3/2) e^−x

Padé approximation of the erf function provides the poles
36 / 44

SLIDE 40

Solution for ε = 0.0002

Spectral method with n = 50 and m = 10

Using

◮ Polynomial interpolant in the zeros of Tn
◮ Rational interpolant in the zeros of Tn
◮ Rational interpolant in the zeros of T̂n

[Figure: computed solutions and errors (10^−9 to 10^0)]

37 / 44

SLIDE 41

Using the extrema

[Figure: computed solutions and errors using the extrema]

38 / 44

SLIDE 42

Chebyshev-Padé instead of asymptotic

[Figure: computed solution and errors]

39 / 44

SLIDE 43

Chebyshev-Padé instead of asymptotic

[Figure: computed solution and errors]

39 / 44

SLIDE 44

Boundary layer problem

Solve the boundary value problem

  4ε d²f/dx² − 2((x + 1)/2 − a/2) df/dx − ((x + 1)/2) f = 0,  −1 < x < 1

with boundary values f(−1) = −3 and f(1) = 1, where 0 < ε ≪ 1

Example: ε = 0.01, a = 0.4

[Figure: solution]

40 / 44

SLIDE 45

ε = 0.0001, zeros, asymptotic, n = 80, m = 20

[Figure: computed solution]

41 / 44

SLIDE 46

Same with extrema instead of zeros

[Figure: computed solution]

42 / 44

SLIDE 47

Same with Padé instead of asymptotic

[Figure: computed solution]

43 / 44

SLIDE 48

THE END

44 / 44