
Scientific Computation and Differential Equations

John Butcher
The University of Auckland, New Zealand
Annual Fundamental Sciences Seminar, June 2006
Institut Kajian Sains Fundamental Ibnu Sina

Scientific Computation and Differential Equations – p. 1/36

Overview

This year marks the fiftieth anniversary of the first computer in the Southern Hemisphere – the SILLIAC computer at the University of Sydney. I had the privilege of being present when this computer did its first calculation, and my own first program ran on this computer soon afterwards. This computer was built for one reason only – to solve scientific problems. Scientific problems have always been the driving force in the development of faster and faster computers. Within scientific computation, the approximate solution of differential equations has always been an area of special challenge.

Differential equations usually cannot be solved analytically, so numerical methods are necessary. Two main types of numerical methods exist: linear multistep methods and Runge–Kutta methods. These traditional methods are special cases within the larger class of "general linear methods". Today we will look briefly at the history of numerical methods for differential equations. We will then look at some particular questions concerning the theory of general linear methods, and at some aspects of their practical implementation.

Contents

  • A short history of numerical differential equations
  • Linear multistep methods
  • Runge–Kutta methods
  • General linear methods
  • Examples of general linear methods
  • Order of GLMs
  • Methods with the IRK stability property
  • Implementation questions for IRKS methods

A short history of numerical ODEs

We will make use of three standard types of initial value problems:

    y′(x) = f(x, y(x)),   y(x0) = y0 ∈ R,     (1)
    y′(x) = f(x, y(x)),   y(x0) = y0 ∈ R^N,   (2)
    y′(x) = f(y(x)),      y(x0) = y0 ∈ R^N.   (3)

Problem (1) is used in traditional descriptions of numerical methods, but in applications we need to use either (2) or (3). These are actually equivalent, and we will often use (3) instead of (2) because of its simplicity.
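The equivalence of (2) and (3) can be made concrete: appending x to the state vector, with x′ = 1, turns any non-autonomous problem into an autonomous one. A minimal sketch (the helper name `autonomize` is ours, not from the slides):

```python
import numpy as np

def autonomize(f):
    """Convert a non-autonomous right-hand side f(x, y) -- problem (2) --
    into an autonomous one g(z) -- problem (3) -- by appending x to the
    state vector: z = (y_1, ..., y_N, x), with the extra equation x' = 1."""
    def g(z):
        y, x = z[:-1], z[-1]
        return np.append(f(x, y), 1.0)  # last component integrates x' = 1
    return g

# Example: y'(x) = x, written non-autonomously, then autonomized
f = lambda x, y: np.array([x])
g = autonomize(f)
print(g(np.array([0.0, 3.0])))  # derivative of (y, x) at x = 3: [3. 1.]
```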

The Euler method

Euler proposed a simple numerical scheme in approximately 1770; this can be used for a system of first order equations. The idea is to treat the solution as though it had a constant derivative in each time step.

[Figure: the solution through (x0, y0) is approximated by line segments over the steps x0, x1, x2, x3, x4:]

    y1 = y0 + h f(y0)
    y2 = y1 + h f(y1)
    y3 = y2 + h f(y2)
    y4 = y3 + h f(y3)
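The recursion above can be sketched in a few lines; this is a minimal illustration of the scheme on the slide, not code from the talk:

```python
import numpy as np

def euler(f, y0, h, steps):
    """Euler's method for y' = f(y): y_{k+1} = y_k + h f(y_k)."""
    y = np.asarray(y0, dtype=float)
    ys = [y]
    for _ in range(steps):
        y = y + h * f(y)   # treat the derivative as constant across the step
        ys.append(y)
    return np.array(ys)

# y' = y, y(0) = 1: exact solution is e^x
ys = euler(lambda y: y, [1.0], 0.1, 10)
print(ys[-1])   # (1.1)^10 ≈ 2.5937, compared with e ≈ 2.7183
```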

More modern methods attempt to improve on the Euler method by

  1. Using more past history – Linear multistep methods
  2. Doing more complicated calculations in each step – Runge–Kutta methods
  3. Doing both of these – General linear methods

Some important dates

  1883   Adams & Bashforth        Linear multistep methods
  1895   Runge                    Runge–Kutta methods
  1901   Kutta
  1925   Nyström                  Special methods for second order
  1926   Moulton                  Adams–Moulton method
  1952   Curtiss & Hirschfelder   Stiff problems

Linear multistep methods

We will write the differential equation in autonomous form

    y′(x) = f(y(x)),   y(x0) = y0,

and the aim, for the moment, will be to calculate approximations to y(xi), where xi = x0 + hi, i = 1, 2, 3, . . . , and h is the "stepsize". Linear multistep methods base the approximation to y(xn) on a linear combination of approximations to y(xn−i) and approximations to y′(xn−i), i = 1, 2, . . . , k.

Write yi as the approximation to y(xi) and fi as the approximation to y′(xi) = f(y(xi)). A linear multistep method can be written as

    yn = Σ_{i=1}^{k} αi yn−i + h Σ_{i=0}^{k} βi fn−i.

This is a 1-stage, 2k-value method.

  • 1 stage? One evaluation of f per step.
  • 2k values? This many quantities are passed between steps.
  • β0 = 0: the method is explicit. β0 ≠ 0: implicit.
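As a concrete instance of this formula, here is a sketch of the two-step Adams–Bashforth method (k = 2, α1 = 1, β0 = 0, β1 = 3/2, β2 = −1/2); the slides do not give this particular method, it is added here as an illustration:

```python
import math
import numpy as np

def ab2(f, y0, y1, h, steps):
    """Two-step Adams-Bashforth: y_n = y_{n-1} + h(3/2 f_{n-1} - 1/2 f_{n-2}).
    Explicit (beta_0 = 0): one new f-evaluation per step, and the values
    carried between steps are past y's and past f's."""
    ys = [np.asarray(y0, dtype=float), np.asarray(y1, dtype=float)]
    fs = [f(ys[0]), f(ys[1])]
    for _ in range(steps):
        y = ys[-1] + h * (1.5 * fs[-1] - 0.5 * fs[-2])
        ys.append(y)
        fs.append(f(y))   # the single stage of the step
    return np.array(ys)

# y' = y with exact starting values y(0) = 1, y(0.1) = e^0.1
ys = ab2(lambda y: y, [1.0], [math.exp(0.1)], 0.1, 9)
print(ys[-1])   # approximation to y(1) = e
```

Note that a multistep method needs k starting values; here the second one is supplied exactly.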

Runge–Kutta methods

A Runge–Kutta method computes yn in terms of a single input yn−1 and s stages Y1, Y2, . . . , Ys, where

    Yi = yn−1 + h Σ_{j=1}^{s} aij f(Yj),   i = 1, 2, . . . , s,

    yn = yn−1 + h Σ_{i=1}^{s} bi f(Yi).

This is an s-stage, 1-value method. It is natural to ask if there are useful methods which are multistage (as for Runge–Kutta methods) and multivalue (as for linear multistep methods).
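The stage form above can be sketched for the classical 4-stage method; this example is ours (the classical RK4 tableau), not a method defined later in the talk:

```python
import numpy as np

def rk4_step(f, y, h):
    """One step of the classical 4-stage Runge-Kutta method, written in the
    stage form Y_i = y_{n-1} + h sum_j a_ij f(Y_j), y_n = y_{n-1} + h sum_i b_i f(Y_i)."""
    A = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    F = []
    for i in range(4):
        # explicit method: stage i only uses earlier stage derivatives
        Yi = y + h * sum(A[i, j] * F[j] for j in range(i))
        F.append(f(Yi))
    return y + h * sum(bi * Fi for bi, Fi in zip(b, F))

y = np.array([1.0])
for _ in range(10):          # y' = y, y(0) = 1, h = 0.1
    y = rk4_step(lambda y: y, y, 0.1)
print(y)                      # very close to e ≈ 2.718281828
```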

In other words, we ask if there is any value in completing this diagram:

[Diagram: Euler at the base, generalised in one direction to Runge–Kutta methods and in another to Linear Multistep methods, with General Linear Methods completing the square above both.]

General linear methods

We will consider methods characterised by an (s + r) × (s + r) partitioned matrix of the form

    [ A  U ]
    [ B  V ]

where A is s × s, U is s × r, B is r × s and V is r × r. The r values input to step n will be denoted by y_i^[n−1], i = 1, 2, . . . , r, with corresponding output values y_i^[n], and the stage values by Yi, i = 1, 2, . . . , s. The stage derivatives will be denoted by Fi = f(Yi).

The formulae for computing the stages (and, simultaneously, the stage derivatives) are

    Yi = h Σ_{j=1}^{s} aij Fj + Σ_{j=1}^{r} uij y_j^[n−1],   Fi = f(Yi),   i = 1, 2, . . . , s.

To compute the output values, use the formula

    y_i^[n] = h Σ_{j=1}^{s} bij Fj + Σ_{j=1}^{r} vij y_j^[n−1],   i = 1, 2, . . . , r.

For convenience, write

    y^[n−1] = [ y_1^[n−1], y_2^[n−1], . . . , y_r^[n−1] ]^T,   y^[n] = [ y_1^[n], y_2^[n], . . . , y_r^[n] ]^T,

    Y = [ Y1, Y2, . . . , Ys ]^T,   F = [ F1, F2, . . . , Fs ]^T,

so that we can write the calculations in a step more simply as

    [ Y      ]     [ A  U ] [ hF       ]
    [ y^[n]  ]  =  [ B  V ] [ y^[n−1]  ].
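The partitioned-matrix formulation translates directly into code. A minimal sketch for explicit methods (A strictly lower triangular) on a scalar problem, with Euler recovered as the trivial GLM (A, U, B, V) = (0, 1, 1, 1); the function name and restriction to scalars are ours:

```python
import numpy as np

def glm_step(f, y_in, h, A, U, B, V):
    """One step of an explicit general linear method on a scalar problem:
    Y = h A F + U y^[n-1],  y^[n] = h B F + V y^[n-1],
    where y_in is the r-vector of input quantities."""
    s = A.shape[0]
    Y = np.zeros(s)
    F = np.zeros(s)
    for i in range(s):
        Y[i] = h * A[i, :i] @ F[:i] + U[i] @ y_in   # only earlier F's are used
        F[i] = f(Y[i])
    return h * B @ F + V @ y_in

# Euler's method as the trivial GLM: s = r = 1
A = np.array([[0.0]]); U = np.array([[1.0]])
B = np.array([[1.0]]); V = np.array([[1.0]])
y = np.array([1.0])
for _ in range(10):                 # y' = y, y(0) = 1, h = 0.1
    y = glm_step(lambda y: y, y, 0.1, A, U, B, V)
print(y)   # identical to 10 Euler steps: (1.1)^10 ≈ 2.5937
```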

Examples of general linear methods

We will look at five examples:

  • A Runge–Kutta method
  • A "re-use" method
  • An Almost Runge–Kutta method
  • An Adams–Bashforth/Adams–Moulton method
  • A modified linear multistep method

A Runge–Kutta method

One of the famous families of fourth order methods of Kutta, written as a general linear method, is

    [ A  U ]     [      0            0       0    0  |  1 ]
    [      ]     [      θ            0       0    0  |  1 ]
    [ B  V ]  =  [ 1/2 − 1/(8θ)   1/(8θ)     0    0  |  1 ]
                 [ 1/(2θ) − 1    −1/(2θ)     2    0  |  1 ]
                 [-----------------------------------+----]
                 [     1/6           0      2/3  1/6 |  1 ]

In a step from xn−1 to xn = xn−1 + h, the stages give approximations at xn−1, xn−1 + θh, xn−1 + ½h and xn−1 + h. We will look at the special case θ = −½.

In the special case θ = −½,

    [ A  U ]     [   0     0     0    0  |  1 ]
    [      ]     [ −1/2    0     0    0  |  1 ]
    [ B  V ]  =  [  3/4  −1/4    0    0  |  1 ]
                 [  −2     1     2    0  |  1 ]
                 [------------------------+----]
                 [  1/6    0    2/3  1/6 |  1 ]

Because the derivative at xn−1 + θh = xn−1 − ½h = xn−2 + ½h was evaluated in the previous step, we can try re-using this value. This will save one function evaluation.

A 're-use' method

This gives the re-use method

    [ A  U ]     [   0    0    0  |  1    0   ]
    [      ]     [  3/4   0    0  |  1  −1/4  ]
    [ B  V ]  =  [  −2    2    0  |  1    1   ]
                 [----------------+-----------]
                 [  1/6  2/3  1/6 |  1    0   ]
                 [   0    1    0  |  0    0   ]

Why should this method not be preferred to a standard Runge–Kutta method? There are at least two reasons:

  • Stepsize change is complicated and difficult
  • The stability region is smaller
slide-83
SLIDE 83

To overcome these difficulties, we can do several things:

Scientific Computation and Differential Equations – p. 20/36

slide-84
SLIDE 84

To overcome these difficulties, we can do several things: Restore the missing stage,

Scientific Computation and Differential Equations – p. 20/36

slide-85
SLIDE 85

To overcome these difficulties, we can do several things: Restore the missing stage, Move the first derivative calculation to the end of the previous step,

Scientific Computation and Differential Equations – p. 20/36

slide-86
SLIDE 86

To overcome these difficulties, we can do several things: Restore the missing stage, Move the first derivative calculation to the end of the previous step, Use a linear combination of the derivatives computed in the previous step (instead of just one of these),

Scientific Computation and Differential Equations – p. 20/36

slide-87
SLIDE 87

To overcome these difficulties, we can do several things: Restore the missing stage, Move the first derivative calculation to the end of the previous step, Use a linear combination of the derivatives computed in the previous step (instead of just one of these), Re-organize the data passed between steps.

Scientific Computation and Differential Equations – p. 20/36

slide-88
SLIDE 88

To overcome these difficulties, we can do several things: Restore the missing stage, Move the first derivative calculation to the end of the previous step, Use a linear combination of the derivatives computed in the previous step (instead of just one of these), Re-organize the data passed between steps. We then get methods like the following:

Scientific Computation and Differential Equations – p. 20/36

An ARK method

[The slide displays the partitioned (A, U; B, V) tableau of a four-stage, three-value Almost Runge–Kutta method; the coefficients are not reliably recoverable from this transcript.]

The three quantities passed between steps approximate the solution and its scaled derivatives,

    y_1^[n] ≈ y(xn),   y_2^[n] ≈ h y′(xn),   y_3^[n] ≈ h² y′′(xn),

with Y1 ≈ Y3 ≈ Y4 ≈ y(xn) and Y2 ≈ y(xn−1 + ½h).

The good things about this "Almost Runge–Kutta method" are:

  • It has the same stability region as a genuine Runge–Kutta method.
  • Unlike standard Runge–Kutta methods, the stage order is 2. This means that the stage values are computed to the same accuracy as an order 2 Runge–Kutta method.
  • Although it is a multi-value method, both starting the method and changing stepsize are essentially cost-free operations.

slide-101
SLIDE 101

An Adams–Bashforth/Adams–Moulton method

It is usual practice to combine Adams–Bashforth and Adams–Moulton methods as a predictor–corrector pair. For example, the 'PECE' method of order 3 computes a predictor y∗_n and a corrector y_n by the formulae

  y∗_n = y_{n−1} + h [ 23/12 f(y_{n−1}) − 4/3 f(y_{n−2}) + 5/12 f(y_{n−3}) ],

  y_n = y_{n−1} + h [ 5/12 f(y∗_n) + 2/3 f(y_{n−1}) − 1/12 f(y_{n−2}) ].

It might be asked: Is it possible to obtain improved order by using values of y_{n−2}, y_{n−3} in the formulae? The answer is that not much can be gained, because we are limited by the famous 'Dahlquist barrier'.

Scientific Computation and Differential Equations – p. 23/36
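As a concrete illustration, the PECE pair above can be coded in a few lines. This is a sketch for a scalar autonomous problem; the helper names `pece_step` and `global_error` are ours, not from the talk.

```python
import math

def pece_step(f, y_hist, h):
    """One PECE step of the order-3 Adams-Bashforth / Adams-Moulton pair.

    y_hist = [y_{n-3}, y_{n-2}, y_{n-1}]; returns y_n."""
    y3, y2, y1 = y_hist
    # Predict with the 3-step Adams-Bashforth formula:
    y_star = y1 + h * (23/12 * f(y1) - 4/3 * f(y2) + 5/12 * f(y3))
    # Evaluate f at the predictor, then correct with Adams-Moulton:
    return y1 + h * (5/12 * f(y_star) + 2/3 * f(y1) - 1/12 * f(y2))

# Test problem y' = -y, y(0) = 1, integrated to x = 1 with exact
# starting values; the global error should behave like h^3.
f = lambda y: -y

def global_error(h):
    steps = round(1.0 / h)
    ys = [math.exp(-k * h) for k in range(3)]   # exact starting values
    for _ in range(3, steps + 1):
        ys.append(pece_step(f, ys[-3:], h))
    return abs(ys[-1] - math.exp(-1.0))

e1, e2 = global_error(0.1), global_error(0.05)  # ratio should be near 2^3 = 8
```

Halving the stepsize reduces the global error by roughly a factor of 8, as expected for an order 3 method.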

slide-106
SLIDE 106

A modified linear multistep method

But what if we allow off-step points? We can get order 5 if we allow for two predictors, the first giving an approximation to y(x_n − ½h). This new method, with predictors at the off-step point and also at the end of the step, is

  y∗_{n−1/2} = y_{n−2} + h [ 9/8 f(y_{n−1}) + 3/8 f(y_{n−2}) ],

  y∗_n = 28/5 y_{n−1} − 23/5 y_{n−2} + h [ 32/15 f(y∗_{n−1/2}) − 4 f(y_{n−1}) − 26/15 f(y_{n−2}) ],

  y_n = 32/31 y_{n−1} − 1/31 y_{n−2} + h [ 64/93 f(y∗_{n−1/2}) + 5/31 f(y∗_n) + 4/31 f(y_{n−1}) − 1/93 f(y_{n−2}) ].

Scientific Computation and Differential Equations – p. 24/36
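A sketch of the two-predictor scheme above, again for a scalar autonomous problem (the function names are illustrative, not from the talk). Starting a step from exact input values, the single-step error should shrink roughly 2^6 = 64-fold when h is halved, consistent with order 5:

```python
import math

def step(f, y_nm2, y_nm1, h):
    """One step of the two-predictor, off-step-point method (order 5)."""
    # Predictor at the off-step point x_{n-1/2}:
    y_half = y_nm2 + h * (9/8 * f(y_nm1) + 3/8 * f(y_nm2))
    # Predictor at the end of the step:
    y_star = (28/5 * y_nm1 - 23/5 * y_nm2
              + h * (32/15 * f(y_half) - 4 * f(y_nm1) - 26/15 * f(y_nm2)))
    # Corrector:
    return (32/31 * y_nm1 - 1/31 * y_nm2
            + h * (64/93 * f(y_half) + 5/31 * f(y_star)
                   + 4/31 * f(y_nm1) - 1/93 * f(y_nm2)))

# Single-step error on y' = y with exact input data y_{n-2} = 1,
# y_{n-1} = e^h, compared with the exact value e^{2h}:
f = lambda y: y

def one_step_error(h):
    return abs(step(f, 1.0, math.exp(h), h) - math.exp(2 * h))

e1, e2 = one_step_error(0.1), one_step_error(0.05)  # ratio near 2^6 = 64
```

The observed error ratio close to 64 is the h^6 behaviour of the local error that an order 5 method requires.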

slide-111
SLIDE 111

Order of general linear methods

Classical methods are all built on a plan where we know in advance what we are trying to approximate. For an abstract general linear method, the interpretation of the input and output quantities is quite general. We want to understand order in a similarly general way. The key ideas are:

  • Use a general starting method to represent the input to a step.
  • Require the output to be similarly related to the starting method applied one time-step later.

Scientific Computation and Differential Equations – p. 25/36

slide-116
SLIDE 116

The input to a step is an approximation to some vector of quantities related to the exact solution at x_{n−1}. When the step has been completed, the vectors comprising the output are approximations to the same quantities, but now related to x_n. If the input is exactly what it is supposed to approximate, then the "local truncation error" is defined as the error in the output after a single step. If this can be estimated in terms of h^{p+1}, then the method has order p. We will refer to the calculation which produces y^{[n−1]} from y(x_{n−1}) as a "starting method".

Scientific Computation and Differential Equations – p. 26/36
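A minimal numerical illustration of this definition, using the Euler method (p = 1) as the simplest example; the helper name is ours:

```python
import math

# Local truncation error of one Euler step on y' = y, starting from the
# exact value y(0) = 1: the error after a single step should behave
# like h^(p+1) with p = 1.
def euler_local_error(h):
    y_out = 1.0 + h * 1.0            # Euler: y_n = y_{n-1} + h f(y_{n-1})
    return abs(y_out - math.exp(h))  # error after a single step

e1, e2 = euler_local_error(0.1), euler_local_error(0.05)
# Halving h divides the error by about 2^(p+1) = 4, so the order is p = 1.
```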

slide-122
SLIDE 122

Let S denote the "starting method", that is, a mapping from R^N to R^{rN}, and let F : R^{rN} → R^N denote a corresponding finishing method, such that F ∘ S = id. The order of accuracy of a multivalue method is defined in terms of a commuting diagram:

  [diagram: starting from y(x_{n−1}), applying S and then the method M agrees, to within O(h^{p+1}), with applying the exact flow E over one step and then S (h = stepsize)]

Scientific Computation and Differential Equations – p. 27/36

slide-127
SLIDE 127

By duplicating this diagram over many steps, global error estimates can be found.

  [diagram: the one-step squares chained together, with repeated applications of the exact flow E above and of the method M below, connected by S at each step; the accumulated error is O(h^p), and applying the finishing method F at the end still leaves a global error of O(h^p)]

Scientific Computation and Differential Equations – p. 28/36

slide-131
SLIDE 131

Methods with the IRK stability property

An important attribute of a numerical method is its "stability matrix" M(z), defined by

  M(z) = V + zB(I − zA)^{−1}U.

This represents the behaviour of the method in the case of linear problems. That is, for the problem y′(x) = qy(x), we have y^{[n]} = M(z) y^{[n−1]}, where z = hq. In the special case of a Runge–Kutta method, M(z) is a scalar, R(z).

Scientific Computation and Differential Equations – p. 29/36
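The formula for M(z) can be evaluated directly from the four coefficient matrices. As a sketch, classical RK4 written as a general linear method with r = 1 (so U is a column of ones and V = [1]) gives a 1×1 stability "matrix" that reproduces the familiar RK4 stability polynomial:

```python
import numpy as np

# Stability matrix M(z) = V + z B (I - z A)^(-1) U of a general
# linear method given its four coefficient matrices.
def stability_matrix(A, U, B, V, z):
    s = A.shape[0]
    return V + z * B @ np.linalg.solve(np.eye(s) - z * A, U)

# Classical RK4 viewed as a general linear method with r = 1:
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
U = np.ones((4, 1))
B = np.array([[1/6, 1/3, 1/3, 1/6]])
V = np.array([[1.0]])

z = 0.3
R = stability_matrix(A, U, B, V, z)[0, 0]
# For RK4 this scalar equals R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24.
```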

slide-135
SLIDE 135

To solve "stiff" problems, we want to use A-stable methods or, even better, L-stable methods. In the case of Runge–Kutta methods, these mean:

  • For an A-stable method, |R(z)| ≤ 1 whenever Re z ≤ 0.
  • An L-stable method is A-stable and, in addition, R(∞) = 0.

Scientific Computation and Differential Equations – p. 30/36
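These definitions are easy to spot-check numerically. A sketch using the implicit Euler method, whose stability function R(z) = 1/(1 − z) is a standard textbook example (not one of the methods in the talk):

```python
# Spot-check of A- and L-stability for implicit Euler, R(z) = 1/(1 - z).
def R(z):
    return 1 / (1 - z)

# |R(z)| <= 1 sampled on a grid in the closed left half-plane Re z <= 0
# (a necessary condition for A-stability, not a proof):
grid = [complex(x, y) for x in (0.0, -0.1, -1.0, -10.0, -100.0)
                      for y in (-10.0, -1.0, 0.0, 1.0, 10.0)]
a_stable = all(abs(R(z)) <= 1 + 1e-12 for z in grid)

# L-stability additionally requires R(z) -> 0 as z -> infinity:
l_stable = a_stable and abs(R(-1e12)) < 1e-9
```

Both checks pass for implicit Euler, which is indeed L-stable; an A-stable but not L-stable method such as the trapezoidal rule, with R(z) = (1 + z/2)/(1 − z/2), would fail the second check since R(∞) = −1.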

slide-137
SLIDE 137

A general linear method is said to have "Runge–Kutta stability" if its stability matrix M(z) has a characteristic polynomial of the form

  det(wI − M(z)) = w^{r−1}(w − R(z)).

This means that the method has exactly the same stability region as a Runge–Kutta method whose stability function is R(z).

Scientific Computation and Differential Equations – p. 31/36

slide-141
SLIDE 141

Do methods with RK stability exist? Yes; it is even possible to construct them with rational operations by imposing a condition known as "Inherent RK stability". Methods exist for both stiff and non-stiff problems at arbitrary orders, and the only question is how to select the best methods from the large families that are available. We will give just two examples.

Scientific Computation and Differential Equations – p. 32/36

slide-142
SLIDE 142

The following third order method is explicit and suitable for the solution of non-stiff problems:

  [partitioned coefficient tableau (A U; B V): a matrix of rational entries whose layout did not survive extraction]

Scientific Computation and Differential Equations – p. 33/36

slide-143
SLIDE 143

The following fourth order method is implicit, L-stable, and suitable for the solution of stiff problems:

  [partitioned coefficient tableau (A U; B V): a matrix of rational entries whose layout did not survive extraction]

Scientific Computation and Differential Equations – p. 34/36

slide-148
SLIDE 148

Implementation questions for IRKS methods

Many implementation questions are similar to those for traditional methods, but there are some new challenges. We want variable order and stepsize, and it is even a realistic aim to change between stiff and non-stiff methods automatically. Because of the variable order and stepsize aims, we wish to be able to do the following:

  • Estimate the local truncation error of the current step
  • Estimate the local truncation error of an alternative method of higher order
  • Change the stepsize with little cost and with little impact on stability

Scientific Computation and Differential Equations – p. 35/36
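One generic way to estimate the local truncation error of the current step is step doubling (Richardson extrapolation). The sketch below uses the Euler method as a stand-in, since the estimators actually used for IRKS methods are not given on these slides:

```python
import math

# Step-doubling (Richardson) estimate of the local truncation error,
# illustrated with Euler (order p = 1) on y' = -y.
def euler(f, y, h):
    return y + h * f(y)

def local_error_estimate(f, y, h, p=1):
    y_big = euler(f, y, h)                        # one step of size h
    y_small = euler(f, euler(f, y, h / 2), h / 2) # two steps of size h/2
    # The difference is proportional to the local error of the big step:
    return abs(y_big - y_small) / (1 - 2**(-p))

f = lambda y: -y
est = local_error_estimate(f, 1.0, 0.1)
true = abs(euler(f, 1.0, 0.1) - math.exp(-0.1))  # actual one-step error
```

The estimate agrees with the true local error to leading order, which is what a stepsize controller needs.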

slide-152
SLIDE 152

We believe we have solutions to all these problems and that we can construct methods of quite high orders which will work well and competitively.

I would like to name, with gratitude and appreciation, my principal collaborators in this project:

  • Zdzisław Jackiewicz, Arizona State University, Phoenix AZ
  • Helmut Podhaisky, Martin Luther Universität, Halle
  • Will Wright, La Trobe University, Melbourne

I also express my thanks to other colleagues who are closely associated with this project, especially: Robert Chan, Allison Heard, Shirley Huang, Nicolette Rattenbury, Gustaf Söderlind, Angela Tsai.

Scientific Computation and Differential Equations – p. 36/36