
Iterative Techniques in Matrix Algebra: Relaxation Techniques for Solving Linear Systems

Numerical Analysis (9th Edition), R. L. Burden & J. D. Faires

Beamer Presentation Slides prepared by John Carroll, Dublin City University

© 2011 Brooks/Cole, Cengage Learning


Outline

1. Residual Vectors & the Gauss-Seidel Method
2. Relaxation Methods (including SOR)
3. Choosing the Optimal Value of ω
4. The SOR Algorithm


Residual Vectors & the Gauss-Seidel Method

Motivation

We have seen that the rate of convergence of an iterative technique depends on the spectral radius of the matrix associated with the method. One way to select a procedure to accelerate convergence is to choose a method whose associated matrix has minimal spectral radius. We start by introducing a new means of measuring how much an approximation to the solution of a linear system differs from the true solution. The method makes use of the vector described in the following definition.


Definition

Suppose $\tilde{\mathbf{x}} \in \mathbb{R}^n$ is an approximation to the solution of the linear system defined by $A\mathbf{x} = \mathbf{b}$. The residual vector for $\tilde{\mathbf{x}}$ with respect to this system is $\mathbf{r} = \mathbf{b} - A\tilde{\mathbf{x}}$.

Comments

A residual vector is associated with each calculation of an approximate component of the solution vector. The true objective is to generate a sequence of approximations that causes the residual vectors to converge rapidly to zero.


Looking at the Gauss-Seidel Method

Suppose we let
$$\mathbf{r}_i^{(k)} = (r_{1i}^{(k)}, r_{2i}^{(k)}, \ldots, r_{ni}^{(k)})^t$$
denote the residual vector for the Gauss-Seidel method corresponding to the approximate solution vector $\mathbf{x}_i^{(k)}$ defined by
$$\mathbf{x}_i^{(k)} = (x_1^{(k)}, x_2^{(k)}, \ldots, x_{i-1}^{(k)}, x_i^{(k-1)}, \ldots, x_n^{(k-1)})^t$$
The m-th component of $\mathbf{r}_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}$$

Equivalently, we can write $r_{mi}^{(k)}$ in the form
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i+1}^{n} a_{mj} x_j^{(k-1)} - a_{mi} x_i^{(k-1)}$$
for each m = 1, 2, …, n.


In particular, the ith component of $\mathbf{r}_i^{(k)}$ is
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}$$
so
$$a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \qquad (E)$$


Recall, however, that in the Gauss-Seidel method, $x_i^{(k)}$ is chosen to be
$$x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
so (E) can be rewritten as
$$a_{ii} x_i^{(k-1)} + r_{ii}^{(k)} = a_{ii} x_i^{(k)}$$

Consequently, the Gauss-Seidel method can be characterized as choosing $x_i^{(k)}$ to satisfy
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}$$
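To make this characterization concrete, here is a minimal numerical sketch (Python with NumPy; not part of the original slides, and gauss_seidel_residual is an illustrative name) of a Gauss-Seidel sweep written exactly as this residual update:

```python
import numpy as np

def gauss_seidel_residual(A, b, x0, iters=10):
    """Gauss-Seidel written as the residual update x_i <- x_i + r_ii / a_ii."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for i in range(len(b)):
            # With x_1..x_{i-1} already updated and x_i..x_n still old,
            # b[i] - A[i, :] @ x is exactly the component r_ii^(k).
            r_ii = b[i] - A[i, :] @ x
            x[i] += r_ii / A[i, i]
    return x
```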


A 2nd Connection with Residual Vectors

We can derive another connection between the residual vectors and the Gauss-Seidel technique. Consider the residual vector $\mathbf{r}_{i+1}^{(k)}$ associated with the vector
$$\mathbf{x}_{i+1}^{(k)} = (x_1^{(k)}, \ldots, x_i^{(k)}, x_{i+1}^{(k-1)}, \ldots, x_n^{(k-1)})^t$$
We have seen that the m-th component of $\mathbf{r}_i^{(k)}$ is
$$r_{mi}^{(k)} = b_m - \sum_{j=1}^{i-1} a_{mj} x_j^{(k)} - \sum_{j=i}^{n} a_{mj} x_j^{(k-1)}$$

slide-27
SLIDE 27

Residual Vectors SOR Method Optimal ω SOR Algorithm

Residual Vectors & the Gauss-Seidel Method

r (k)

mi = bm − i−1

  • j=1

amjx(k)

j

n

  • j=i

amjx(k−1)

j

A 2nd Connection with Residual Vectors (Cont’d)

Therefore, the ith component of r(k)

i+1 is

r (k)

i,i+1

= bi −

i

  • j=1

aijx(k)

j

n

  • j=i+1

aijx(k−1)

j

= bi −

i−1

  • j=1

aijx(k)

j

n

  • j=i+1

aijx(k−1)

j

− aiix(k)

i

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 12 / 36


By the manner in which $x_i^{(k)}$ is defined,
$$x_i^{(k)} = \frac{1}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$
we see that $r_{i,i+1}^{(k)} = 0$. In a sense, then, the Gauss-Seidel technique is characterized by choosing each $\mathbf{x}_{i+1}^{(k)}$ in such a way that the ith component of $\mathbf{r}_{i+1}^{(k)}$ is zero.
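This is easy to confirm numerically. A quick sketch (using the 3 × 3 system from the worked example later in these slides), showing that the ith residual component vanishes immediately after the ith Gauss-Seidel update:

```python
A = np.array([[4., 3., 0.],
              [3., 4., -1.],
              [0., -1., 4.]])
b = np.array([24., 30., -24.])
x = np.ones(3)

i = 0
x[i] += (b[i] - A[i, :] @ x) / A[i, i]  # one Gauss-Seidel update of x_1
print(b[i] - A[i, :] @ x)               # 0.0 up to rounding: ith component of r_{i+1}^(k)
```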



From Gauss-Seidel to Relaxation Methods

Reducing the Norm of the Residual Vector

Choosing $\mathbf{x}_{i+1}^{(k)}$ so that one coordinate of the residual vector is zero, however, is not necessarily the most efficient way to reduce the norm of the vector $\mathbf{r}_{i+1}^{(k)}$. If we modify the Gauss-Seidel procedure, as given by
$$x_i^{(k)} = x_i^{(k-1)} + \frac{r_{ii}^{(k)}}{a_{ii}}$$
to
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
then for certain choices of positive ω we can reduce the norm of the residual vector and obtain significantly faster convergence.


Introducing the SOR Method

Methods involving the update
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
are called relaxation methods. For choices of ω with 0 < ω < 1, the procedures are called under-relaxation methods. We will be interested in choices of ω with ω > 1, which are called over-relaxation methods. They are used to accelerate the convergence of systems that are convergent by the Gauss-Seidel technique. The methods are abbreviated SOR, for Successive Over-Relaxation, and are particularly useful for solving the linear systems that occur in the numerical solution of certain partial differential equations.
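A minimal sketch of this relaxed sweep (extending the gauss_seidel_residual loop above; sor_sweeps and the default iteration count are illustrative):

```python
def sor_sweeps(A, b, x0, omega, iters=10):
    """Relaxation sweeps: x_i <- x_i + omega * r_ii / a_ii.
    omega = 1 recovers Gauss-Seidel; 1 < omega < 2 is over-relaxation."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for i in range(len(b)):
            r_ii = b[i] - A[i, :] @ x
            x[i] += omega * r_ii / A[i, i]
    return x
```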


The SOR Method

A More Computationally-Efficient Formulation

Note that by using the i-th component of $\mathbf{r}_i^{(k)}$ in the form
$$r_{ii}^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} - a_{ii} x_i^{(k-1)}$$
we can reformulate the SOR equation
$$x_i^{(k)} = x_i^{(k-1)} + \omega \frac{r_{ii}^{(k)}}{a_{ii}}$$
for calculation purposes as
$$x_i^{(k)} = (1 - \omega) x_i^{(k-1)} + \frac{\omega}{a_{ii}} \left[ b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} \right]$$


To determine the matrix form of the SOR method, we rewrite this as
$$a_{ii} x_i^{(k)} + \omega \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} = (1 - \omega) a_{ii} x_i^{(k-1)} - \omega \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + \omega b_i$$


In vector form, we therefore have
$$(D - \omega L)\mathbf{x}^{(k)} = [(1 - \omega)D + \omega U]\mathbf{x}^{(k-1)} + \omega \mathbf{b}$$
from which we obtain the SOR method:
$$\mathbf{x}^{(k)} = (D - \omega L)^{-1}[(1 - \omega)D + \omega U]\mathbf{x}^{(k-1)} + \omega (D - \omega L)^{-1}\mathbf{b}$$

slide-50
SLIDE 50

Residual Vectors SOR Method Optimal ω SOR Algorithm

The SOR Method

x(k) = (D − ωL)−1[(1 − ω)D + ωU]x(k−1) + ω(D − ωL)−1b Letting Tω = (D − ωL)−1[(1 − ω)D + ωU] and cω = ω(D − ωL)−1b gives the SOR technique the form x(k) = Tωx(k−1) + cω

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 20 / 36
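For reference, a sketch that assembles $T_\omega$ and $\mathbf{c}_\omega$ with NumPy (sor_matrices is an illustrative name; the splitting A = D − L − U, with L and U the negated strict triangles, matches the convention behind the formulas above):

```python
def sor_matrices(A, b, omega):
    """Assemble T_omega and c_omega under the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)   # strictly lower part of A, negated
    U = -np.triu(A, k=1)    # strictly upper part of A, negated
    M = D - omega * L
    T = np.linalg.solve(M, (1 - omega) * D + omega * U)
    c = omega * np.linalg.solve(M, b)
    return T, c
```

Iterating x = T @ x + c then produces the same iterates as the componentwise sor_sweeps loop above.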


Example

The linear system Ax = b given by
$$\begin{aligned} 4x_1 + 3x_2 &= 24 \\ 3x_1 + 4x_2 - x_3 &= 30 \\ -x_2 + 4x_3 &= -24 \end{aligned}$$
has the solution $(3, 4, -5)^t$. Compare the iterations from the Gauss-Seidel method and the SOR method with ω = 1.25, using $\mathbf{x}^{(0)} = (1, 1, 1)^t$ for both methods.


Solution (1/3)

For each k = 1, 2, …, the equations for the Gauss-Seidel method are
$$\begin{aligned}
x_1^{(k)} &= -0.75 x_2^{(k-1)} + 6 \\
x_2^{(k)} &= -0.75 x_1^{(k)} + 0.25 x_3^{(k-1)} + 7.5 \\
x_3^{(k)} &= 0.25 x_2^{(k)} - 6
\end{aligned}$$
and the equations for the SOR method with ω = 1.25 are
$$\begin{aligned}
x_1^{(k)} &= -0.25 x_1^{(k-1)} - 0.9375 x_2^{(k-1)} + 7.5 \\
x_2^{(k)} &= -0.9375 x_1^{(k)} - 0.25 x_2^{(k-1)} + 0.3125 x_3^{(k-1)} + 9.375 \\
x_3^{(k)} &= 0.3125 x_2^{(k)} - 0.25 x_3^{(k-1)} - 7.5
\end{aligned}$$

slide-57
SLIDE 57

Residual Vectors SOR Method Optimal ω SOR Algorithm

The SOR Method: Solution (2/3)

Gauss-Seidel Iterations

k 1 2 3 · · · 7 x(k)

1

1 5.250000 3.1406250 3.0878906 3.0134110 x(k)

2

1 3.812500 3.8828125 3.9267578 3.9888241 x(k)

3

1 −5.046875 −5.0292969 −5.0183105 −5.0027940

SOR Iterations (ω = 1.25)

k 1 2 3 · · · 7 x(k)

1

1 6.312500 2.6223145 3.1333027 3.0000498 x(k)

2

1 3.5195313 3.9585266 4.0102646 4.0002586 x(k)

3

1 −6.6501465 −4.6004238 −5.0966863 −5.0003486

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 23 / 36
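These tables are easy to reproduce with the sor_sweeps sketch above (ω = 1 gives Gauss-Seidel); for example:

```python
A = np.array([[4., 3., 0.],
              [3., 4., -1.],
              [0., -1., 4.]])
b = np.array([24., 30., -24.])
x0 = np.array([1., 1., 1.])

print(sor_sweeps(A, b, x0, omega=1.0,  iters=7))  # Gauss-Seidel column k = 7
print(sor_sweeps(A, b, x0, omega=1.25, iters=7))  # SOR column k = 7
```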

slide-60
SLIDE 60

Residual Vectors SOR Method Optimal ω SOR Algorithm

The SOR Method

Solution (3/3)

For the iterates to be accurate to 7 decimal places, the Gauss-Seidel method requires 34 iterations, as opposed to 14 iterations for the SOR method with ω = 1.25.

Numerical Analysis (Chapter 7) Relaxation Techniques R L Burden & J D Faires 24 / 36



Choosing the Optimal Value of ω

An obvious question to ask is how the appropriate value of ω is chosen when the SOR method is used. Although no complete answer to this question is known for the general n × n linear system, the following results can be used in certain important situations.


Theorem (Kahan)

If $a_{ii} \neq 0$ for each i = 1, 2, …, n, then $\rho(T_\omega) \geq |\omega - 1|$. This implies that the SOR method can converge only if 0 < ω < 2.

The proof of this theorem is considered in Exercise 9 of Chapter 7 of Burden, R. L. & Faires, J. D., Numerical Analysis, 9th Ed., Cengage, 2011.

Theorem (Ostrowski-Reich)

If A is a positive definite matrix and 0 < ω < 2, then the SOR method converges for any choice of initial approximate vector $\mathbf{x}^{(0)}$.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.


Theorem

If A is positive definite and tridiagonal, then $\rho(T_g) = [\rho(T_j)]^2 < 1$, where $T_j$ and $T_g$ denote the Jacobi and Gauss-Seidel iteration matrices, and the optimal choice of ω for the SOR method is
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}}$$
With this choice of ω, we have $\rho(T_\omega) = \omega - 1$.

The proof of this theorem can be found in Ortega, J. M., Numerical Analysis: A Second Course, Academic Press, New York, 1972.
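Under the hypotheses of this theorem, the optimal ω can be computed directly from the spectral radius of the Jacobi iteration matrix. A sketch (optimal_omega is an illustrative name; the splitting matches sor_matrices above):

```python
def optimal_omega(A):
    """Optimal SOR parameter for a positive definite, tridiagonal A."""
    D = np.diag(np.diag(A))
    T_j = np.linalg.solve(D, D - A)           # Jacobi matrix D^{-1}(L + U)
    rho_j = max(abs(np.linalg.eigvals(T_j)))  # spectral radius rho(T_j)
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_j ** 2))
```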

Example

Find the optimal choice of ω for the SOR method for the matrix
$$A = \begin{bmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{bmatrix}$$


Solution (1/3)

This matrix is clearly tridiagonal, so we can apply the result in the SOR theorem if we can also show that it is positive definite. Because the matrix is symmetric, it is positive definite if and only if all of its leading principal submatrices have positive determinants. This is easily seen to be the case because
$$\det(A) = 24, \qquad \det \begin{bmatrix} 4 & 3 \\ 3 & 4 \end{bmatrix} = 7, \qquad \det([4]) = 4$$


Solution (2/3)

We compute
$$T_j = D^{-1}(L + U) = \begin{bmatrix} \tfrac{1}{4} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{4} \end{bmatrix} \begin{bmatrix} 0 & -3 & 0 \\ -3 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -0.75 & 0 \\ -0.75 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{bmatrix}$$
so that
$$T_j - \lambda I = \begin{bmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{bmatrix}$$


Solution (3/3)

Therefore
$$\det(T_j - \lambda I) = \begin{vmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{vmatrix} = -\lambda(\lambda^2 - 0.625)$$
Thus $\rho(T_j) = \sqrt{0.625}$ and
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_j)]^2}} = \frac{2}{1 + \sqrt{1 - 0.625}} \approx 1.24$$
This explains the rapid convergence obtained in the last example when using ω = 1.25.
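As a numerical cross-check, the optimal_omega sketch above reproduces this value for the example matrix:

```python
A = np.array([[4., 3., 0.],
              [3., 4., -1.],
              [0., -1., 4.]])
print(optimal_omega(A))  # ~1.2404, i.e. 2 / (1 + sqrt(1 - 0.625))
```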



The SOR Algorithm (1/2)

To solve Ax = b given the parameter ω and an initial approximation $\mathbf{x}^{(0)}$:

INPUT: the number of equations and unknowns n; the entries $a_{ij}$, 1 ≤ i, j ≤ n, of the matrix A; the entries $b_i$, 1 ≤ i ≤ n, of b; the entries $XO_i$, 1 ≤ i ≤ n, of $XO = \mathbf{x}^{(0)}$; the parameter ω; tolerance TOL; maximum number of iterations N.

OUTPUT: the approximate solution $x_1, \ldots, x_n$ or a message that the number of iterations was exceeded.


The SOR Algorithm (2/2)

Step 1  Set k = 1.
Step 2  While k ≤ N, do Steps 3–6:
Step 3    For i = 1, …, n, set
$$x_i = (1 - \omega)\,XO_i + \frac{1}{a_{ii}} \left[ \omega \left( -\sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij}\,XO_j + b_i \right) \right]$$
Step 4    If ||x − XO|| < TOL, then OUTPUT $(x_1, \ldots, x_n)$; STOP. (The procedure was successful.)
Step 5    Set k = k + 1.
Step 6    For i = 1, …, n, set $XO_i = x_i$.
Step 7  OUTPUT ('Maximum number of iterations exceeded'); STOP. (The procedure was unsuccessful.)
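A direct transcription of Steps 1–7 into Python (a sketch: sor_solve and its return convention are illustrative, and the ∞-norm is one concrete choice for ||x − XO||):

```python
def sor_solve(A, b, x0, omega, tol=1e-7, max_iter=100):
    """SOR following Steps 1-7 above; returns (x, number of iterations used)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    XO = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for k in range(1, max_iter + 1):                       # Steps 1-2
        x = XO.copy()
        for i in range(n):                                 # Step 3
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ XO[i + 1:]
            x[i] = (1 - omega) * XO[i] + (omega / A[i, i]) * (b[i] - s)
        if np.linalg.norm(x - XO, ord=np.inf) < tol:       # Step 4
            return x, k
        XO = x                                             # Steps 5-6
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 7
```

On the earlier 3 × 3 example, sor_solve(A, b, [1., 1., 1.], omega=1.25) stops in roughly the 14 iterations quoted above, while omega=1.0 takes roughly 34.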


Questions?