
MA-207 Differential Equations II

Ronnie Sebastian

Department of Mathematics Indian Institute of Technology Bombay Powai, Mumbai - 76


Last week

The important things we did last week were:

1. How to compute the radius of convergence of a power series.
2. A power series defines a nice function in its interval of convergence.
3. Suppose we are given an ODE y'' + p(x)y' + q(x)y = 0, where p(x) and q(x) are analytic (given by power series) in an interval I around x0; then the solution y is also analytic on I.
4. We can compute the two independent solutions of such an ODE by plugging a power series into the ODE and obtaining a recursive relation for the coefficients.


Legendre equation

The following ODE is known as the Legendre equation:

    (1 - x^2)y'' - 2xy' + p(p+1)y = 0

Here p denotes a fixed real number. By the existence theorem, a power series solution in x exists on the interval (-1, 1). Put

    y(x) = \sum_{n=0}^{\infty} a_n x^n

in the Legendre equation. Equating the coefficient of x^n in the resulting equation to zero, we get the recursive relation

    (n+2)(n+1)\,a_{n+2} - n(n+1)\,a_n + p(p+1)\,a_n = 0,  n ≥ 0
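The recursion can be checked numerically. The following Python sketch (ours, not part of the original slides) iterates it with exact rational arithmetic; for the even integer p = 4, with a0 = 1 and a1 = 0, the factor (n - p) kills the coefficients from a6 onward, so the series truncates to a polynomial.

```python
from fractions import Fraction

def legendre_series_coeffs(p, a0, a1, count):
    """Coefficients a_0, ..., a_{count-1} of a power-series solution of
    (1 - x^2)y'' - 2xy' + p(p+1)y = 0, via
    a_{n+2} = (n - p)(n + p + 1) / ((n + 2)(n + 1)) * a_n."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(count - 2):
        a.append(Fraction((n - p) * (n + p + 1), (n + 2) * (n + 1)) * a[n])
    return a

# p = 4, a0 = 1, a1 = 0: a2 = -10, a4 = 35/3, and a5 onward all vanish.
coeffs = legendre_series_coeffs(4, 1, 0, 10)
print(coeffs)
```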


Legendre equation: Two independent solutions

    \implies a_{n+2} = \frac{(n-p)(p+n+1)}{(n+2)(n+1)}\, a_n

Let us set a0 = 1 and a1 = 0 in the recursion formula to find a first solution. The solution is given by (note that it is an even function)

    y_1(x) := a_0\left[1 - \frac{p(p+1)}{2!}x^2 + \frac{p(p+1)(p-2)(p+3)}{4!}x^4 + \cdots\right]

Let us find a second solution by setting a0 = 0 and a1 = 1 in the recursion formula. The second solution is given by (note that it is an odd function)

    y_2(x) := a_1\left[x - \frac{(p-1)(p+2)}{3!}x^3 + \frac{(p-1)(p+2)(p-3)(p+4)}{5!}x^5 + \cdots\right]

Legendre polynomials

Thus, the two independent solutions are

    y_1(x) := a_0\left[1 - \frac{p(p+1)}{2!}x^2 + \frac{p(p+1)(p-2)(p+3)}{4!}x^4 + \cdots\right]

    y_2(x) := a_1\left[x - \frac{(p-1)(p+2)}{3!}x^3 + \frac{(p-1)(p+2)(p-3)(p+4)}{5!}x^5 + \cdots\right]

Remark.
- If p ∈ {0, 2, 4, ...} ∪ {-1, -3, -5, ...} then y1(x) is a polynomial function. It is an even function.
- If p ∈ {1, 3, 5, ...} ∪ {-2, -4, -6, ...} then y2(x) is a polynomial function. It is an odd function.

Thus, if p is an integer then exactly one solution is a polynomial and the other is an infinite power series.


Legendre polynomials

The general solution y(x) = a0 y1(x) + a1 y2(x) is called a Legendre function. If p = m is an integer, then precisely one of y1 or y2 is a polynomial, and it is called the m-th Legendre polynomial Pm(x). For m ≥ 0, note that Pm(x) is a polynomial of degree m. It is an even function if m is even and an odd function if m is odd. Let us write down a few Legendre polynomials:

    P_0(x) = 1
    P_1(x) = x
    P_2(x) = (1 - 3x^2)\left(-\tfrac{1}{2}\right) = \tfrac{1}{2}(3x^2 - 1)
    P_3(x) = \left(x - \tfrac{5}{3}x^3\right)\left(-\tfrac{3}{2}\right) = \tfrac{1}{2}(5x^3 - 3x)
    P_4(x) = \left(1 - 10x^2 + \tfrac{35}{3}x^4\right)\tfrac{3}{8} = \tfrac{1}{8}(35x^4 - 30x^2 + 3)
    P_5(x) = \left(x - \tfrac{14}{3}x^3 + \tfrac{21}{5}x^5\right)\tfrac{15}{8} = \tfrac{1}{8}(63x^5 - 70x^3 + 15x)
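Each entry in this table can be verified to satisfy the Legendre equation with p = m. The Python sketch below (ours, not from the slides) forms (1 - x^2)y'' - 2xy' + m(m+1)y as an exact coefficient list and confirms every coefficient vanishes.

```python
from fractions import Fraction as F

def deriv(c):
    """Derivative of a polynomial given as a coefficient list (c[i] multiplies x^i)."""
    return [i * c[i] for i in range(1, len(c))] or [F(0)]

def satisfies_legendre(m, c):
    """Check (1 - x^2)y'' - 2xy' + m(m+1)y == 0 coefficientwise."""
    c = [F(v) for v in c]
    c1, c2 = deriv(c), deriv(deriv(c))
    res = [F(0)] * (len(c) + 2)
    for i, v in enumerate(c2):      # (1 - x^2) y''
        res[i] += v
        res[i + 2] -= v
    for i, v in enumerate(c1):      # -2x y'
        res[i + 1] -= 2 * v
    for i, v in enumerate(c):       # m(m+1) y
        res[i] += m * (m + 1) * v
    return all(v == 0 for v in res)

# Coefficient lists of P_0, ..., P_5 from the table above.
legendre_table = {
    0: [1],
    1: [0, 1],
    2: [F(-1, 2), 0, F(3, 2)],
    3: [0, F(-3, 2), 0, F(5, 2)],
    4: [F(3, 8), 0, F(-30, 8), 0, F(35, 8)],
    5: [0, F(15, 8), 0, F(-70, 8), 0, F(63, 8)],
}
assert all(satisfies_legendre(m, c) for m, c in legendre_table.items())
```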

Legendre polynomials

The graphs of the Pm's in the interval (-1, 1) are given below.

[Figure: graphs of the Legendre polynomials P_0, ..., P_5 on (-1, 1); not reproduced in this transcript.]


What is so interesting about the collection of Legendre polynomials?

To answer this question we need some linear algebra.



Vector spaces

We will recall the notion of an inner product space from linear algebra. First recall the notion of a vector space V over R. A vector space is a set equipped with two operations:
- addition: v + w, for v, w ∈ V
- scalar multiplication: cv, for c ∈ R, v ∈ V

A vector space V has a dimension, which may not be finite.


Inner product spaces

Let V be a vector space over R (not necessarily finite-dimensional). A bilinear form on V is a map ⟨·, ·⟩ : V × V → R which is linear in both coordinates, that is,

    ⟨au + v, w⟩ = a⟨u, w⟩ + ⟨v, w⟩
    ⟨u, av + w⟩ = a⟨u, v⟩ + ⟨u, w⟩

for a ∈ R and u, v, w ∈ V. An inner product on V is a bilinear form on V which is
- symmetric: ⟨v, w⟩ = ⟨w, v⟩
- positive definite: ⟨v, v⟩ ≥ 0 for all v, and ⟨v, v⟩ = 0 iff v = 0

A vector space with an inner product is called an inner product space.


Orthogonality

In an inner product space V, two vectors u and v are orthogonal if ⟨u, v⟩ = 0. More generally, a set of vectors forms an orthogonal system if they are mutually orthogonal. An orthogonal basis is an orthogonal system which is also a basis.

Example. Consider the vector space R^n with coordinate-wise addition and scalar multiplication. The rule

    ⟨(a_1, ..., a_n), (b_1, ..., b_n)⟩ := \sum_{i=1}^{n} a_i b_i

defines an inner product on R^n. The standard basis {e_1, ..., e_n} is an orthogonal basis of R^n.


The previous example can be formulated more abstractly as follows.

Example. Let V be a finite-dimensional vector space with ordered basis B = {e_1, ..., e_n}. For u = \sum_{i=1}^{n} a_i e_i and v = \sum_{i=1}^{n} b_i e_i define

    ⟨u, v⟩ := \sum_{i=1}^{n} a_i b_i

This defines an inner product on V. With this definition, {e_1, ..., e_n} is an orthogonal basis of V.


Lemma. Suppose V is a finite-dimensional inner product space, and e_1, ..., e_n is an orthogonal basis. Then for any v ∈ V,

    v = \sum_{i=1}^{n} \frac{⟨v, e_i⟩}{⟨e_i, e_i⟩}\, e_i

Proof. Write v = \sum_{i=1}^{n} a_i e_i. We want to find the coefficients a_j. Take the inner product of v with e_j:

    ⟨v, e_j⟩ = ⟨\sum_{i=1}^{n} a_i e_i,\ e_j⟩ = \sum_{i=1}^{n} a_i ⟨e_i, e_j⟩ = a_j ⟨e_j, e_j⟩

Thus,

    a_j = \frac{⟨v, e_j⟩}{⟨e_j, e_j⟩}
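The lemma can be illustrated directly in R^3 with the standard inner product (a small sketch of ours, not from the slides): the coefficients ⟨v, e_j⟩ / ⟨e_j, e_j⟩ reconstruct v exactly.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# An orthogonal (not orthonormal) basis of R^3:
e = [(1, 1, 0), (1, -1, 0), (0, 0, 2)]
v = (3, 5, 7)

# a_j = <v, e_j> / <e_j, e_j>
coeffs = [dot(v, ej) / dot(ej, ej) for ej in e]
reconstructed = tuple(sum(a * ej[k] for a, ej in zip(coeffs, e)) for k in range(3))
print(coeffs, reconstructed)
```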


Lemma. In a finite-dimensional inner product space, there always exists an orthogonal basis.

Start with any basis and modify it to an orthogonal basis by Gram-Schmidt orthogonalization. This result is not necessarily true in infinite-dimensional inner product spaces. For infinite-dimensional vector spaces, we can only talk of a maximal orthogonal set. A subset {e_1, e_2, ...} is called a maximal orthogonal set for V if
- ⟨e_i, e_j⟩ = δ_{ij}
- ⟨v, e_i⟩ = 0 for all i iff v = 0
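Gram-Schmidt can be carried out exactly on the monomials 1, x, x^2, x^3 with the inner product ⟨f, g⟩ = ∫ fg dx over [-1, 1] that appears later in the lecture. This sketch (our illustration, not from the slides) recovers scalar multiples of the Legendre polynomials.

```python
from fractions import Fraction as F

N = 4  # polynomials of degree < 4, stored as length-4 coefficient lists

def ip(f, g):
    """<f, g> = integral of f*g over [-1, 1]; x^k integrates to 2/(k+1) for even k, 0 for odd k."""
    prod = [F(0)] * (2 * N - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    return sum(F(2, k + 1) * c for k, c in enumerate(prod) if k % 2 == 0)

monomials = [[F(1) if i == k else F(0) for i in range(N)] for k in range(N)]

ortho = []
for v in monomials:
    w = list(v)
    for u in ortho:                 # subtract the projection onto each earlier vector
        coef = ip(w, u) / ip(u, u)
        w = [a - coef * b for a, b in zip(w, u)]
    ortho.append(w)

# ortho is 1, x, x^2 - 1/3, x^3 - (3/5)x: scalar multiples of P_0, P_1, P_2, P_3.
```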


Length of a vector

For a vector v in an inner product space, define

    ‖v‖ := ⟨v, v⟩^{1/2}

This is called the norm or length of the vector v. It satisfies the following three properties:
- ‖0‖ = 0 and ‖v‖ > 0 if v ≠ 0
- ‖v + w‖ ≤ ‖v‖ + ‖w‖
- ‖av‖ = |a| ‖v‖

for all v, w ∈ V and a ∈ R.


Pythagoras theorem

Theorem. For orthogonal vectors v and w in any inner product space V,

    ‖v + w‖² = ‖v‖² + ‖w‖²

Proof.

    ‖v + w‖² = ⟨v + w, v + w⟩
             = ⟨v, v⟩ + ⟨v, w⟩ + ⟨w, v⟩ + ⟨w, w⟩
             = ⟨v, v⟩ + ⟨w, w⟩
             = ‖v‖² + ‖w‖²

More generally, for any orthogonal system {v_1, ..., v_n},

    ‖v_1 + ⋯ + v_n‖² = ‖v_1‖² + ⋯ + ‖v_n‖²
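A quick numerical instance in R^3 (ours, not from the slides): for orthogonal v and w the squared norms add.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v, w = (1, 2, 2), (2, -1, 0)        # orthogonal: <v, w> = 2 - 2 + 0 = 0
s = tuple(a + b for a, b in zip(v, w))

assert dot(v, w) == 0
assert dot(s, s) == dot(v, v) + dot(w, w)   # 14 = 9 + 5
```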


The vector space of polynomials

The set of all polynomials in the variable x is a vector space, denoted by P(x). The set {1, x, x², ...} is an infinite basis of the vector space P(x). P(x) carries an inner product defined by

    ⟨f, g⟩ := \int_{-1}^{1} f(x)g(x)\,dx

We are integrating over the finite interval [-1, 1], which ensures that the integral is finite. The norm of a polynomial is by definition

    ‖f‖ := \left(\int_{-1}^{1} f(x)f(x)\,dx\right)^{1/2}


Derivative transfer

Note that

    \frac{d}{dx}(fg) = g\frac{df}{dx} + f\frac{dg}{dx}

Integrating both sides, we get

    \int_{-1}^{1} \frac{d}{dx}(fg)\,dx = \int_{-1}^{1} g\frac{df}{dx}\,dx + \int_{-1}^{1} f\frac{dg}{dx}\,dx

    \implies f(1)g(1) - f(-1)g(-1) = \int_{-1}^{1} g\frac{df}{dx}\,dx + \int_{-1}^{1} f\frac{dg}{dx}\,dx

Thus, if f(1)g(1) - f(-1)g(-1) = 0, then we get

    \int_{-1}^{1} g\frac{df}{dx}\,dx = -\int_{-1}^{1} f\frac{dg}{dx}\,dx

This will be referred to as derivative transfer.
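Derivative transfer is just integration by parts with a vanishing boundary term. Here is an exact check (an illustration of ours, not from the slides) for f = x^3 - x, which vanishes at ±1, and an arbitrary g.

```python
from fractions import Fraction as F

def integrate(c):
    """Integral over [-1, 1] of the polynomial with coefficient list c."""
    return sum(F(2, k + 1) * F(v) for k, v in enumerate(c) if k % 2 == 0)

def multiply(f, g):
    out = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += F(a) * F(b)
    return out

def deriv(c):
    return [i * F(c[i]) for i in range(1, len(c))] or [F(0)]

f = [0, -1, 0, 1]    # f = x^3 - x, so f(1) = f(-1) = 0
g = [2, 0, 5]        # g = 5x^2 + 2

lhs = integrate(multiply(g, deriv(f)))
rhs = -integrate(multiply(f, deriv(g)))
assert lhs == rhs    # integral of g f' equals minus integral of f g'
```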


Orthogonality of Legendre polynomials

Since Pm(x) is a polynomial of degree m, it follows that {P_0(x), P_1(x), P_2(x), ...} is a basis of the vector space of polynomials P(x).

Theorem. We have

    ⟨P_m, P_n⟩ = \int_{-1}^{1} P_m(x)P_n(x)\,dx =
    \begin{cases} 0 & \text{if } m \neq n \\ \dfrac{2}{2n+1} & \text{if } m = n \end{cases}

i.e. the Legendre polynomials form an orthogonal basis for the vector space P(x), and

    ‖P_n(x)‖² = \frac{2}{2n+1}
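Before proving the theorem, we can spot-check it on P_0, ..., P_5 with exact arithmetic (a Python sketch of ours, using the table of polynomials from earlier).

```python
from fractions import Fraction as F

def ip(f, g):
    """<f, g> = integral over [-1, 1] of f*g, for coefficient lists f, g."""
    prod = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    return sum(F(2, k + 1) * c for k, c in enumerate(prod) if k % 2 == 0)

P = {0: [F(1)], 1: [F(0), F(1)],
     2: [F(-1, 2), F(0), F(3, 2)],
     3: [F(0), F(-3, 2), F(0), F(5, 2)],
     4: [F(3, 8), F(0), F(-30, 8), F(0), F(35, 8)],
     5: [F(0), F(15, 8), F(0), F(-70, 8), F(0), F(63, 8)]}

for m in P:
    for n in P:
        expected = F(2, 2 * n + 1) if m == n else F(0)
        assert ip(P[m], P[n]) == expected
```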


Orthogonality of Legendre polynomials

The Legendre equation may be written as

    ((1 - x^2)y')' + p(p+1)y = 0

In particular, Pm(x) satisfies

    ((1 - x^2)P_m'(x))' + m(m+1)P_m(x) = 0    (∗)

Proof of orthogonality. Multiply (∗) by P_n and integrate to get

    \int_{-1}^{1} ((1 - x^2)P_m')'\,P_n\,dx + m(m+1)\int_{-1}^{1} P_m P_n\,dx = 0

By derivative transfer (with f = (1 - x^2)P_m' and g = P_n), we get

    -\int_{-1}^{1} (1 - x^2)P_m' P_n'\,dx + m(m+1)\int_{-1}^{1} P_m P_n\,dx = 0


Proof (continued). Interchanging the roles of m and n, we get

    -\int_{-1}^{1} (1 - x^2)P_m' P_n'\,dx + n(n+1)\int_{-1}^{1} P_m P_n\,dx = 0

Subtracting the two identities, we obtain

    [m(m+1) - n(n+1)]\int_{-1}^{1} P_m P_n\,dx = 0

If m ≠ n, we get

    \int_{-1}^{1} P_m P_n\,dx = 0

Thus, P_m and P_n are orthogonal.


Rodrigues formula

It only remains to show that ‖P_n(x)‖² = 2/(2n+1). We need some intermediate steps before we can show this. Denote by D the differential operator d/dx. Let us first note that for 0 ≤ i < n,

    \left(D^i (x^2 - 1)^n\right)(1) = 0

This is clear once we observe

    D^i (x^2 - 1)^n = D^i\left[(x-1)^n (2 + x - 1)^n\right] = D^i\left[2^n (x-1)^n + (*)(x-1)^{n+1} + \cdots\right]

since every term on the right retains a factor of (x - 1) after fewer than n differentiations. By the same reasoning we get, for 0 ≤ i < n,

    \left(D^i (x^2 - 1)^n\right)(-1) = 0

Consider the polynomial of degree n given by

    y(x) = D^n (x^2 - 1)^n


Rodrigues formula

For k < n consider the integral

    \int_{-1}^{1} P_k(x)\,y(x)\,dx = \int_{-1}^{1} P_k(x)\,D\left(D^{n-1}(x^2-1)^n\right)\,dx

Applying derivative transfer with f = D^{n-1}(x^2 - 1)^n and g = P_k(x), we get

    \int_{-1}^{1} P_k(x)\,y(x)\,dx = -\int_{-1}^{1} DP_k(x)\,D^{n-1}(x^2-1)^n\,dx
    = \int_{-1}^{1} D^2 P_k(x)\,D^{n-2}(x^2-1)^n\,dx
    = \cdots
    = (-1)^n \int_{-1}^{1} D^n P_k(x)\,(x^2-1)^n\,dx = 0

We have repeatedly applied derivative transfer with f = D^{n-i}(x^2 - 1)^n and g = D^{i-1}P_k(x); the boundary terms vanish by the previous observations. Since P_k(x) is a polynomial of degree k < n, we get that D^n P_k(x) = 0.


Rodrigues formula

This forces that $y(x) = cP_n(x)$ for some nonzero constant $c$, as we know that the $P_k(x)$ are orthogonal to each other. Now

$$D^n(x^2-1)^n = D^n\big[(x-1)^n(2 + x - 1)^n\big] = D^n\big[2^n(x-1)^n + (*)(x-1)^{n+1} + \cdots\big]$$

From the above it is clear that

$$y(1) = n!\,2^n$$

Thus, we can normalize our Legendre polynomials so that $P_n(1) = 1$. That is, take

$$P_n(x) = \frac{1}{2^n n!}\,\frac{d^n}{dx^n}(x^2-1)^n$$

This is called the Rodrigues formula.

24 / 38
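The normalization can be checked in a few lines; a sketch assuming SymPy is available (`legendre_rodrigues` is a name introduced here for illustration):

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    """Rodrigues formula: P_n(x) = 1/(2^n n!) * D^n (x^2 - 1)^n."""
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(6):
    Pn = legendre_rodrigues(n)
    # the chosen normalization gives P_n(1) = 1 ...
    assert Pn.subs(x, 1) == 1
    # ... and agrees with SymPy's built-in Legendre polynomials
    assert sp.simplify(Pn - sp.legendre(n, x)) == 0
```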

slide-122
SLIDE 122

Computing Pn(x)

Proof.

$$\int_{-1}^{1} P_n(x)P_n(x)\,dx = \frac{1}{2^{2n}(n!)^2}\int_{-1}^{1} \frac{d^n}{dx^n}(x^2-1)^n\,\frac{d^n}{dx^n}(x^2-1)^n\,dx$$

$$= \frac{(-1)^n}{2^{2n}(n!)^2}\int_{-1}^{1} (x^2-1)^n\,\frac{d^{2n}}{dx^{2n}}(x^2-1)^n\,dx \qquad \text{by derivative transfer}$$

$$= \frac{(2n)!}{2^{2n}(n!)^2}\int_{-1}^{1} (1-x^2)^n\,dx$$

Set $I_n = \int_{-1}^{1}(1-x^2)^n\,dx$. Integrating by parts,

$$I_n = \int_{-1}^{1} (1-x^2)^n\,\frac{dx}{dx}\,dx = 2n\int_{-1}^{1}(1-x^2)^{n-1}x^2\,dx = -2nI_n + 2nI_{n-1}$$

25 / 38

slide-126
SLIDE 126

Computing Pn(x)

Proof. We get the recursive relation $(2n+1)I_n = 2nI_{n-1}$. We conclude that

$$I_n = \frac{2n}{2n+1}\cdot\frac{2(n-1)}{2n-1}\cdots\frac{2}{3}\,I_0$$

We conclude that

$$\int_{-1}^{1} P_n(x)^2\,dx = \frac{(2n)!}{2^{2n}(n!)^2}\cdot\frac{2n}{2n+1}\cdot\frac{2(n-1)}{2n-1}\cdots\frac{2}{3}\,I_0 = \frac{I_0}{2n+1} = \frac{2}{2n+1}$$

This completes the proof of the theorem.

26 / 38
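A symbolic spot-check of the norm identity $\int_{-1}^{1}P_n(x)^2\,dx = \frac{2}{2n+1}$ (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
# the squared L^2 norm of P_n on [-1, 1] is 2/(2n + 1)
for n in range(6):
    norm2 = sp.integrate(sp.legendre(n, x)**2, (x, -1, 1))
    assert norm2 == sp.Rational(2, 2*n + 1)
```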

slide-133
SLIDE 133

Expansion of polynomial in terms of Pn’s

Since each $P_n(x)$ is a polynomial of degree $n$, we see that $\{P_0(x), P_1(x), P_2(x), \ldots\}$ forms a basis for the vector space of polynomials $\mathcal{P}(x)$. If $f(x)$ is a polynomial of degree $n$, then we can express

$$f(x) = \sum_{k=0}^{n} a_kP_k(x), \qquad a_k \in \mathbb{R}$$

To find $a_k$, we can use the orthogonality of the $P_n$'s:

$$\int_{-1}^{1} f(x)P_k(x)\,dx = \int_{-1}^{1}\Big(\sum_{i=0}^{n} a_iP_i(x)\Big)P_k(x)\,dx = \sum_{i=0}^{n}\int_{-1}^{1} a_iP_i(x)P_k(x)\,dx = a_k\int_{-1}^{1} P_k(x)P_k(x)\,dx$$

$$\implies a_k = \frac{2k+1}{2}\int_{-1}^{1} f(x)P_k(x)\,dx$$

27 / 38
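The coefficient formula can be exercised on a concrete polynomial; a sketch assuming SymPy is available, with $f(x) = x^3$ as a hypothetical sample:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3  # hypothetical sample polynomial to expand

# a_k = (2k + 1)/2 * integral of f * P_k over [-1, 1]
a = [sp.Rational(2*k + 1, 2) * sp.integrate(f * sp.legendre(k, x), (x, -1, 1))
     for k in range(4)]

# x^3 = (3/5) P_1(x) + (2/5) P_3(x)
assert a == [0, sp.Rational(3, 5), 0, sp.Rational(2, 5)]

# the finite series reassembles f exactly
assert sp.expand(sum(ak * sp.legendre(k, x) for k, ak in enumerate(a)) - f) == 0
```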

slide-138
SLIDE 138

Square-integrable functions

A function $f(x)$ on $[-1,1]$ is square-integrable if

$$\int_{-1}^{1} f(x)f(x)\,dx < \infty$$

For instance, polynomials, continuous functions, and piecewise continuous functions are square-integrable. The set of all square-integrable functions on $[-1,1]$ is a vector space and is denoted by $L^2([-1,1])$. For square-integrable functions $f$ and $g$, we define their inner product by

$$\langle f, g\rangle := \int_{-1}^{1} f(x)g(x)\,dx$$

28 / 38

slide-143
SLIDE 143

Fourier-Legendre series

The Legendre polynomials no longer form a basis for the vector space $L^2([-1,1])$ of square-integrable functions. But they form a maximal orthogonal set in $L^2([-1,1])$. This means that there is no non-zero square-integrable function which is orthogonal to all Legendre polynomials (a nontrivial fact). We can expand any square-integrable function $f(x)$ on $[-1,1]$ in a series of Legendre polynomials

$$\sum_{n=0}^{\infty} c_nP_n(x), \qquad c_n = \frac{\langle f, P_n\rangle}{\langle P_n, P_n\rangle} = \frac{2n+1}{2}\int_{-1}^{1} f(x)P_n(x)\,dx$$

This is called the Fourier-Legendre series (or simply the Legendre series) of $f(x)$.

29 / 38

slide-147
SLIDE 147

Theorem (Convergence in norm)
The Fourier-Legendre series of $f(x) \in L^2([-1,1])$, given by

$$\sum_{n=0}^{\infty} c_nP_n(x), \qquad c_n = \frac{2n+1}{2}\int_{-1}^{1} f(x)P_n(x)\,dx$$

converges in $L^2$ norm to $f(x)$; that is,

$$\Big\| f(x) - \sum_{n=0}^{m} c_nP_n(x)\Big\| \to 0 \quad \text{as } m \to \infty$$

Pointwise convergence of the Fourier-Legendre series to $f(x)$ is more delicate. There are two issues here: Does the Fourier-Legendre series converge at $x$? If yes, then does it converge to $f(x)$?

30 / 38
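Convergence in norm can be seen numerically; a sketch assuming NumPy is available, with the illustrative choice $f(x) = |x|$ (continuous but not smooth) and a 60-point Gauss-Legendre rule standing in for the exact integrals, so the coefficients below are quadrature approximations:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre quadrature rule on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(60)
fvals = np.abs(nodes)  # f(x) = |x| sampled at the nodes

def c(n):
    # c_n = (2n + 1)/2 * integral of f * P_n  (approximated by quadrature)
    return (2*n + 1) / 2 * np.sum(weights * fvals * L.Legendre.basis(n)(nodes))

def l2_error(m):
    # L^2 norm of f minus the partial sum through degree m
    partial = sum(c(n) * L.Legendre.basis(n)(nodes) for n in range(m + 1))
    return np.sqrt(np.sum(weights * (fvals - partial)**2))

# known leading coefficients of |x|: c_0 = 1/2, c_2 = 5/8
assert abs(c(0) - 0.5) < 1e-2 and abs(c(2) - 0.625) < 1e-2
# the L^2 error shrinks as the partial sum grows
assert l2_error(6) < l2_error(0) / 5
```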

slide-154
SLIDE 154

A useful result in this direction is the Legendre expansion theorem:

Theorem
If both $f(x)$ and $f'(x)$ have at most a finite number of jump discontinuities in the interval $[-1,1]$, then the Legendre series converges to

$$\begin{cases} \tfrac{1}{2}\big(f(x-) + f(x+)\big) & \text{for } -1 < x < 1 \\ f(-1+) & \text{for } x = -1 \\ f(1-) & \text{for } x = 1 \end{cases}$$

In particular, the series converges to $f(x)$ at every point of continuity $x$.

31 / 38

slide-158
SLIDE 158

Example
Consider the function

$$f(x) = \begin{cases} 1 & \text{if } 0 < x < 1 \\ -1 & \text{if } -1 < x < 0 \end{cases}$$

The Legendre series of $f(x)$ is

$$\sum_{n=0}^{\infty} c_nP_n(x), \qquad c_n = \frac{2n+1}{2}\int_{-1}^{1} f(x)P_n(x)\,dx$$

Since $P_{2n}(x)$ is an even function and $f$ is an odd function, we get

$$c_{2n} = 0 \qquad n \ge 0$$

Recall $P_1(x) = x$, so

$$c_1 = \frac{3}{2}\int_{-1}^{1} f(x)\,x\,dx = \frac{3}{2}$$

32 / 38

slide-162
SLIDE 162

Example (continued ...)
$P_3(x) = \frac{1}{2}(5x^3 - 3x)$, so

$$c_3 = \frac{7}{2}\int_{-1}^{1} f(x)\,\tfrac{1}{2}(5x^3 - 3x)\,dx = \frac{7}{2}\Big[\tfrac{5}{4}x^4 - \tfrac{3}{2}x^2\Big]_0^1 = -\frac{7}{8}$$

(the integrand is even, so the integral equals $\int_0^1 (5x^3 - 3x)\,dx$). Check that the Legendre series of $f$ is

$$\frac{3}{2}P_1(x) - \frac{7}{8}P_3(x) + \frac{11}{16}P_5(x) - \cdots$$

By the Legendre expansion theorem, this series converges to $f(x)$ for $x \ne 0$ and to $0$ for $x = 0$.

33 / 38
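The first few coefficients of this example can be verified exactly; a sketch assuming SymPy is available (`c_odd` is a helper name introduced here, valid only for odd $n$, since the even coefficients vanish by parity):

```python
import sympy as sp

x = sp.symbols('x')

def c_odd(n):
    # f is the sign function on (-1, 1); for odd n, f * P_n is even and
    # equals P_n on (0, 1), so the integral over [-1, 1] is 2 * int_0^1 P_n
    return sp.Rational(2*n + 1, 2) * 2 * sp.integrate(sp.legendre(n, x), (x, 0, 1))

assert c_odd(1) == sp.Rational(3, 2)
assert c_odd(3) == sp.Rational(-7, 8)
assert c_odd(5) == sp.Rational(11, 16)
```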

slide-167
SLIDE 167

Least Square Approximation

Theorem
Suppose we want to approximate $f \in L^2([-1,1])$ in the sense of least squares by polynomials $p(x)$ of degree $\le n$; i.e. we want to minimize

$$I = \int_{-1}^{1} [f(x) - p(x)]^2\,dx$$

Then the minimizing polynomial is precisely the sum of the first $n+1$ terms of the Legendre series of $f(x)$, i.e.

$$c_0P_0(x) + \cdots + c_nP_n(x), \qquad c_k = \frac{2k+1}{2}\int_{-1}^{1} f(x)P_k(x)\,dx$$

Proof. Write the degree $\le n$ polynomial as $p(x) = \sum_{k=0}^{n} b_kP_k(x)$; then

34 / 38

slide-173
SLIDE 173

proof continued ...

$$I = \int_{-1}^{1}\Big[f(x) - \sum_{k=0}^{n} b_kP_k(x)\Big]^2 dx$$

$$= \int_{-1}^{1} f(x)^2\,dx + \sum_{k=0}^{n}\frac{2}{2k+1}b_k^2 - 2\sum_{k=0}^{n} b_k\int_{-1}^{1} f(x)P_k(x)\,dx$$

$$= \int_{-1}^{1} f(x)^2\,dx + \sum_{k=0}^{n}\frac{2}{2k+1}b_k^2 - 2\sum_{k=0}^{n} b_k\,\frac{2c_k}{2k+1}$$

$$= \int_{-1}^{1} f(x)^2\,dx + \sum_{k=0}^{n}\frac{2}{2k+1}(b_k - c_k)^2 - \sum_{k=0}^{n}\frac{2}{2k+1}c_k^2$$

Clearly, $I$ is minimum when $b_k = c_k$ for $k = 0, \ldots, n$.

Caution. If $f$ has a power series expansion on $[-1,1]$, then the best "least square polynomial approximation" to $f(x)$ is not, in general, given by the partial sums of the power series.

35 / 38
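The caution above can be illustrated concretely; a sketch assuming SymPy is available, with the hypothetical choice $f(x) = e^x$ and degree $n = 2$, comparing the truncated Legendre series against the Taylor polynomial of the same degree:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)  # sample function, analytic on [-1, 1]
n = 2

# truncated Legendre series of degree n
c = [sp.Rational(2*k + 1, 2) * sp.integrate(f * sp.legendre(k, x), (x, -1, 1))
     for k in range(n + 1)]
p_legendre = sum(ck * sp.legendre(k, x) for k, ck in enumerate(c))

# Taylor polynomial of the same degree about x = 0
p_taylor = 1 + x + x**2 / 2

def I(p):
    # the least-squares error functional from the theorem
    return sp.integrate((f - p)**2, (x, -1, 1))

# the truncated Legendre series has strictly smaller least-squares error
assert sp.N(I(p_legendre)) < sp.N(I(p_taylor))
```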

slide-177
SLIDE 177

Some remarks

This brings to an end the discussion of second order linear ODEs which we can solve by power series. Before we go on to more complicated ODEs, let us review what we have done so far.

1. Given an ODE of the type

$$F_0(x)y'' + F_1(x)y' + F_2(x)y = 0 \qquad (*)$$

first convert it to the standard form

$$y'' + \frac{F_1(x)}{F_0(x)}y' + \frac{F_2(x)}{F_0(x)}y = 0 \qquad (**)$$

Let $p(x) := \dfrac{F_1(x)}{F_0(x)}$ and $q(x) := \dfrac{F_2(x)}{F_0(x)}$.

36 / 38

slide-180
SLIDE 180

Some remarks

2. Now find the set

$$U := \{x_0 \in \mathbb{R} \mid p(x), q(x) \text{ are analytic at } x_0\}$$

3. By the existence theorem, for every $x_0 \in U$ there exist two independent solutions to the above ODE, call them $y_1(x)$ and $y_2(x)$, such that both are analytic in an interval $I$ around $x_0$.

4. To find the solutions in a neighborhood of $x_0$, substitute $y(x) = \sum_{n\ge 0} a_n(x - x_0)^n$ into the ODE $(*)$ or $(**)$ and get recursive relations involving the $a_n$. Note that when you do this, the coefficient functions ($p(x)$, $q(x)$, $F_0(x)$, ...) have to be written as power series in $x - x_0$. The recursive relation you get will be the same, irrespective of whether you choose equation $(*)$ or $(**)$.

37 / 38

slide-182
SLIDE 182

Some remarks

5. Thus, depending on the situation, you may want to choose $(*)$ or $(**)$.

For example, for the Legendre equation in the open interval $(-1, 1)$ around $x_0 = 0$, equation $(*)$ looks like

$$(1 - x^2)y'' - 2xy' + p(p+1)y = 0$$

while $(**)$ looks like

$$y'' - 2\Big(\sum_{n\ge 0} x^{2n+1}\Big)y' + p(p+1)\Big(\sum_{n\ge 0} x^{2n}\Big)y = 0$$

In this case it is clear that we should choose $(*)$, as it will be easier to work with. This is what we did in class.

38 / 38
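Step 4 carried out for the Legendre equation in form $(*)$ gives the recursion $a_{n+2} = \dfrac{n(n+1) - p(p+1)}{(n+2)(n+1)}\,a_n$. A sketch assuming SymPy is available (the parameter value $p = 3$ and the seed $a_0 = 0$, $a_1 = 1$ are illustrative choices), checking that the resulting series terminates in a multiple of $P_3$:

```python
import sympy as sp

x = sp.symbols('x')
p = 3  # integer parameter: one series solution truncates to a polynomial

# recursion from plugging y = sum a_n x^n into (1 - x^2)y'' - 2xy' + p(p+1)y = 0:
# a_{n+2} = (n(n+1) - p(p+1)) / ((n+2)(n+1)) * a_n
a = [sp.Integer(0), sp.Integer(1)]  # a0 = 0, a1 = 1: the odd solution
for n in range(8):
    a.append(sp.Rational(n*(n + 1) - p*(p + 1), (n + 2)*(n + 1)) * a[n])

y = sum(an * x**k for k, an in enumerate(a))
# the series terminates: y = x - (5/3) x^3, a constant multiple of P_3
assert sp.simplify(y - (x - sp.Rational(5, 3)*x**3)) == 0
assert sp.simplify(y / sp.legendre(3, x)) == sp.Rational(-2, 3)

# and y indeed solves the Legendre equation in form (*)
ode = (1 - x**2)*sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + p*(p + 1)*y
assert sp.expand(ode) == 0
```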