Vector spaces

We will recall the notion of inner product space from linear algebra. First recall the notion of a vector space V over ℝ. A vector space is a set equipped with two operations:

addition: v + w for v, w ∈ V
scalar multiplication: cv for c ∈ ℝ, v ∈ V

A vector space V has a dimension, which may not be finite.
Inner product spaces

Let V be a vector space over ℝ (not necessarily finite-dimensional).

A bilinear form on V is a map ⟨ , ⟩ : V × V → ℝ which is linear in both coordinates, that is,

⟨au + v, w⟩ = a⟨u, w⟩ + ⟨v, w⟩
⟨u, av + w⟩ = a⟨u, v⟩ + ⟨u, w⟩

for a ∈ ℝ and u, v, w ∈ V.

An inner product on V is a bilinear form on V which is

symmetric: ⟨v, w⟩ = ⟨w, v⟩
positive definite: ⟨v, v⟩ ≥ 0 for all v, and ⟨v, v⟩ = 0 iff v = 0

A vector space with an inner product is called an inner product space.
Orthogonality

In an inner product space V, two vectors u and v are orthogonal if ⟨u, v⟩ = 0. More generally, a set of vectors forms an orthogonal system if they are mutually orthogonal. An orthogonal basis is an orthogonal system which is also a basis.

Example. Consider the vector space ℝⁿ with coordinate-wise addition and scalar multiplication. The rule

⟨(a_1, …, a_n), (b_1, …, b_n)⟩ := ∑_{i=1}^{n} a_i b_i

defines an inner product on ℝⁿ. The standard basis {e_1, …, e_n} is an orthogonal basis of ℝⁿ.
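The axioms can be spot-checked numerically for this inner product; a minimal sketch using NumPy, with arbitrarily chosen vectors and scalar:

```python
import numpy as np

# Standard inner product on R^n: <a, b> = sum of a_i b_i
def inner(a, b):
    return float(np.dot(a, b))

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 4.0])
w = np.array([2.0, -2.0, 1.0])
a = 3.0

# linearity in the first coordinate
assert np.isclose(inner(a * u + v, w), a * inner(u, w) + inner(v, w))
# symmetry
assert np.isclose(inner(v, w), inner(w, v))
# positive definiteness: <v, v> >= 0, with equality only at v = 0
assert inner(v, v) > 0 and inner(np.zeros(3), np.zeros(3)) == 0

# the standard basis vectors are mutually orthogonal
e = np.eye(3)
assert all(inner(e[i], e[j]) == 0 for i in range(3) for j in range(3) if i != j)
```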
The previous example can be formulated more abstractly as follows.

Example. Let V be a finite-dimensional vector space with ordered basis B = {e_1, …, e_n}. For u = ∑_{i=1}^{n} a_i e_i and v = ∑_{i=1}^{n} b_i e_i define

⟨u, v⟩ := ∑_{i=1}^{n} a_i b_i

This defines an inner product on V. With this definition, {e_1, …, e_n} is an orthogonal basis of V.
Lemma. Suppose V is a finite-dimensional inner product space, and e_1, …, e_n is an orthogonal basis. Then for any v ∈ V,

v = ∑_{i=1}^{n} (⟨v, e_i⟩ / ⟨e_i, e_i⟩) e_i

Proof. Write v = ∑_{i=1}^{n} a_i e_i. We want to find the coefficients a_j. Take the inner product of v with e_j:

⟨v, e_j⟩ = ⟨∑_{i=1}^{n} a_i e_i, e_j⟩ = ∑_{i=1}^{n} a_i ⟨e_i, e_j⟩ = a_j ⟨e_j, e_j⟩

Thus a_j = ⟨v, e_j⟩ / ⟨e_j, e_j⟩.
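The coefficient formula from the lemma is easy to test numerically; a sketch with NumPy, using an arbitrarily chosen orthogonal (but not orthonormal) basis of ℝ²:

```python
import numpy as np

def inner(a, b):
    return float(np.dot(a, b))

# an orthogonal (not orthonormal) basis of R^2
e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -1.0])
assert inner(e1, e2) == 0.0

v = np.array([3.0, 5.0])

# coefficients a_j = <v, e_j> / <e_j, e_j>
a1 = inner(v, e1) / inner(e1, e1)
a2 = inner(v, e2) / inner(e2, e2)

# the expansion recovers v
assert np.allclose(a1 * e1 + a2 * e2, v)
```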
Lemma. In a finite-dimensional inner product space, there always exists an orthogonal basis.

Start with any basis and modify it to an orthogonal basis by Gram–Schmidt orthogonalization.

This result is not necessarily true in infinite-dimensional inner product spaces. For infinite-dimensional vector spaces, we can only talk of a maximal orthogonal set. A subset {e_1, e_2, …} is called a maximal orthogonal set for V if

⟨e_i, e_j⟩ = δ_ij
⟨v, e_i⟩ = 0 for all i iff v = 0.
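The Gram–Schmidt step mentioned above can be sketched in NumPy; a minimal implementation for the standard inner product on ℝⁿ (no normalization, so the output is orthogonal rather than orthonormal), applied to an arbitrarily chosen basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors.

    From each vector, subtract its projections onto the previously
    produced vectors, leaving an orthogonal system spanning the
    same subspace.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (np.dot(w, u) / np.dot(u, u)) * u
        basis.append(w)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# the output vectors are mutually orthogonal
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(np.dot(es[i], es[j])) < 1e-12
```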
Length of a vector

For a vector v in an inner product space, define

‖v‖ := ⟨v, v⟩^{1/2}

This is called the norm or length of the vector v. It satisfies the following three properties:

‖0‖ = 0 and ‖v‖ > 0 if v ≠ 0
‖v + w‖ ≤ ‖v‖ + ‖w‖
‖av‖ = |a| ‖v‖

for all v, w ∈ V and a ∈ ℝ.
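The three norm properties can be spot-checked numerically; a sketch for the Euclidean norm on ℝ³, with arbitrarily chosen vectors:

```python
import numpy as np

def norm(v):
    # ||v|| = <v, v>^(1/2) for the standard inner product
    return float(np.sqrt(np.dot(v, v)))

v = np.array([3.0, 4.0, 0.0])
w = np.array([1.0, -2.0, 2.0])

# positivity
assert norm(np.zeros(3)) == 0.0 and norm(v) > 0
# triangle inequality
assert norm(v + w) <= norm(v) + norm(w)
# homogeneity: ||a v|| = |a| ||v||
assert np.isclose(norm(-2.5 * v), 2.5 * norm(v))
```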
Pythagoras theorem

Theorem. For orthogonal vectors v and w in any inner product space V,

‖v + w‖² = ‖v‖² + ‖w‖²

Proof.

‖v + w‖² = ⟨v + w, v + w⟩
         = ⟨v, v⟩ + ⟨v, w⟩ + ⟨w, v⟩ + ⟨w, w⟩
         = ⟨v, v⟩ + ⟨w, w⟩
         = ‖v‖² + ‖w‖²

More generally, for any orthogonal system {v_1, …, v_n},

‖v_1 + ⋯ + v_n‖² = ‖v_1‖² + ⋯ + ‖v_n‖²
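A numerical illustration of the theorem, with an arbitrarily chosen orthogonal pair in ℝ³ (sketch):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 1.0, -2.0])
assert np.dot(v, w) == 0.0  # v and w are orthogonal

# squared norm ||x||^2 = <x, x>
sq = lambda x: float(np.dot(x, x))

# Pythagoras: ||v + w||^2 = ||v||^2 + ||w||^2
assert np.isclose(sq(v + w), sq(v) + sq(w))
```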
The vector space of polynomials

The set of all polynomials in the variable x is a vector space, denoted by P(x). The set {1, x, x², …} is an infinite basis of the vector space P(x).

P(x) carries an inner product defined by

⟨f, g⟩ := ∫_{-1}^{1} f(x) g(x) dx

We are integrating over the finite interval [−1, 1], which ensures that the integral is finite. The norm of a polynomial is by definition ⟨f, f⟩^{1/2}:

‖f‖ := ( ∫_{-1}^{1} f(x) f(x) dx )^{1/2}
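This inner product can be evaluated exactly for polynomials, since an antiderivative is again a polynomial; a sketch using numpy.polynomial:

```python
from numpy.polynomial import Polynomial

def inner(f, g):
    """<f, g> = integral of f(x) g(x) over [-1, 1]."""
    h = (f * g).integ()          # an antiderivative of f*g
    return h(1.0) - h(-1.0)

f = Polynomial([0.0, 1.0])       # f(x) = x
g = Polynomial([0.0, 0.0, 1.0])  # g(x) = x^2

# x and x^2 are orthogonal: the integrand x^3 is odd on [-1, 1]
assert abs(inner(f, g)) < 1e-12
# ||x||^2 = integral of x^2 over [-1, 1] = 2/3
assert abs(inner(f, f) - 2.0 / 3.0) < 1e-12
```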
Derivative transfer

Note that

d/dx (fg) = g (df/dx) + f (dg/dx)

Integrating both sides, we get

∫_{-1}^{1} d/dx (fg) dx = ∫_{-1}^{1} g (df/dx) dx + ∫_{-1}^{1} f (dg/dx) dx

⟹ f(1)g(1) − f(−1)g(−1) = ∫_{-1}^{1} g (df/dx) dx + ∫_{-1}^{1} f (dg/dx) dx

Thus if f(1)g(1) − f(−1)g(−1) = 0, then we get

∫_{-1}^{1} g (df/dx) dx = − ∫_{-1}^{1} f (dg/dx) dx

This will be referred to as derivative transfer.
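Derivative transfer can be verified exactly on polynomials; a sketch with numpy.polynomial, where f(x) = 1 − x² is chosen precisely so that the boundary term f(1)g(1) − f(−1)g(−1) vanishes, and g is arbitrary:

```python
from numpy.polynomial import Polynomial

def integral(p):
    """Integrate the polynomial p over [-1, 1]."""
    q = p.integ()
    return q(1.0) - q(-1.0)

f = Polynomial([1.0, 0.0, -1.0])  # f(x) = 1 - x^2, so f(1) = f(-1) = 0
g = Polynomial([1.0, 2.0, 3.0])   # g(x) = 1 + 2x + 3x^2 (arbitrary)

# boundary term is zero here, so integral of g f' = - integral of f g'
lhs = integral(g * f.deriv())
rhs = -integral(f * g.deriv())
assert abs(lhs - rhs) < 1e-12
```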
Orthogonality of Legendre polynomials

Since P_m(x) is a polynomial of degree m, it follows that {P_0(x), P_1(x), P_2(x), …} is a basis of the vector space of polynomials P(x).

Theorem. We have

⟨P_m, P_n⟩ = ∫_{-1}^{1} P_m(x) P_n(x) dx = 0 if m ≠ n, and = 2/(2n+1) if m = n,

i.e. the Legendre polynomials form an orthogonal basis for the vector space P(x), and

‖P_n(x)‖² = 2/(2n+1)
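The theorem can be checked exactly for small degrees using numpy.polynomial.legendre, converting each Legendre basis element to an ordinary polynomial and integrating; a sketch:

```python
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre

def inner(m, n):
    """<P_m, P_n> over [-1, 1], computed exactly for polynomials."""
    Pm = Legendre.basis(m).convert(kind=Polynomial)
    Pn = Legendre.basis(n).convert(kind=Polynomial)
    q = (Pm * Pn).integ()  # an antiderivative of P_m P_n
    return q(1.0) - q(-1.0)

# <P_m, P_n> = 0 for m != n, and 2/(2n+1) for m = n
for m in range(5):
    for n in range(5):
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(inner(m, n) - expected) < 1e-12
```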
Orthogonality of Legendre polynomials

The Legendre equation may be written as

((1 − x²) y′)′ + p(p+1) y = 0

In particular, P_m(x) satisfies

((1 − x²) P_m′(x))′ + m(m+1) P_m(x) = 0        (∗)

Proof of orthogonality. Multiply (∗) by P_n and integrate to get

∫_{-1}^{1} ((1 − x²) P_m′)′ P_n + m(m+1) ∫_{-1}^{1} P_m P_n = 0

By derivative transfer (with f = (1 − x²) P_m′ and g = P_n), we get

− ∫_{-1}^{1} (1 − x²) P_m′ P_n′ + m(m+1) ∫_{-1}^{1} P_m P_n = 0
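The identity (∗) on which the proof rests can itself be verified for small m; a sketch with numpy.polynomial, checking that ((1 − x²) P_m′)′ + m(m+1) P_m is the zero polynomial:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import Legendre

one_minus_x2 = Polynomial([1.0, 0.0, -1.0])  # 1 - x^2

for m in range(6):
    P = Legendre.basis(m).convert(kind=Polynomial)
    # ((1 - x^2) P_m')' + m(m+1) P_m should vanish identically
    lhs = (one_minus_x2 * P.deriv()).deriv() + m * (m + 1) * P
    assert np.allclose(lhs.coef, 0.0)
```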
continued ...

Interchanging the roles of m and n, we get

− ∫_{-1}^{1} (1 − x²) P_m′ P_n′ + n(n+1) ∫_{-1}^{1} P_m P_n = 0

Subtracting the two identities, we obtain

[m(m+1) − n(n+1)] ∫_{-1}^{1} P_m P_n = 0

Since m(m+1) ≠ n(n+1) when m ≠ n, this forces ∫_{-1}^{1} P_m P_n = 0.