Linear algebra and analysis recalls



  1. Linear algebra and analysis recalls. Lectures for the PhD course on Numerical Optimization. Enrico Bertolazzi, DIMS, Università di Trento, November 21 – December 14, 2011. Linear algebra and analysis recalls, 1 / 30.

  2. Outline
     1. Linear algebra
     2. Analysis
     3. The Separation Theorem and Farkas' Lemma

  3. Linear algebra (outline slide introducing part 1: Linear algebra)

  4. Linear algebra. We always work with finite-dimensional Euclidean vector spaces ℝ^n; the natural number n denotes the dimension of the space. Elements v ∈ ℝ^n will be referred to as vectors, and we think of them as composed of n real numbers stacked on top of each other, i.e.,

     $$ v = (v_1, v_2, \dots, v_n)^T = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}, $$

     the v_k being real numbers, and T denoting the transpose operator.

  5. Linear algebra: Basic operations. Basic operations defined for two vectors a, b ∈ ℝ^n,

     $$ a = (a_1, a_2, \dots, a_n)^T, \qquad b = (b_1, b_2, \dots, b_n)^T, $$

     and an arbitrary scalar α ∈ ℝ are:
     1. addition: a + b = (a_1 + b_1, ..., a_n + b_n)^T ∈ ℝ^n;
     2. multiplication by a scalar: α a = (α a_1, ..., α a_n)^T ∈ ℝ^n;
     3. scalar product between two vectors: (a, b) = a^T b = Σ_{k=1}^n a_k b_k ∈ ℝ;
     4. a linear subspace L ⊂ ℝ^n is a set with the two properties: for every a, b ∈ L it holds that a + b ∈ L, and for every α ∈ ℝ, a ∈ L it holds that α a ∈ L;
     5. an affine subspace A ⊂ ℝ^n is any set that can be represented as v + L := { v + x | x ∈ L } for some vector v ∈ ℝ^n and some linear subspace L ⊂ ℝ^n.
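As a quick numerical sketch of the three basic operations (the vectors and scalar below are illustrative values, not from the slides):

```python
import numpy as np

# two vectors in R^3 and a scalar
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])
alpha = 2.0

# 1) addition, componentwise
s = a + b
# 2) multiplication by a scalar
scaled = alpha * a
# 3) scalar product (a, b) = a^T b = sum_k a_k b_k
dot = a @ b  # -> 3.5
```

The `@` operator on two 1-D NumPy arrays computes exactly the scalar product a^T b.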

  6. Linear algebra: Norm. We associate a norm, or length, of a vector v ∈ ℝ^n with the scalar product as

     $$ \|v\| = \sqrt{(v, v)}. $$

     The Cauchy–Bunyakovsky–Schwarz inequality says that for a, b ∈ ℝ^n,

     $$ |(a, b)| \le \|a\| \, \|b\|. $$

     We define the angle θ between two vectors via

     $$ \cos\theta = \frac{(a, b)}{\|a\| \, \|b\|}. $$

     We say that a is orthogonal to b if and only if (a, b) = 0. The only vector orthogonal to itself is 0 = (0, ..., 0)^T; moreover, this is the only vector with zero norm.
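These definitions can be checked numerically; a small sketch with two illustrative vectors in ℝ^2 that happen to be orthogonal:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, -3.0])

norm_a = np.sqrt(a @ a)       # ||a|| = sqrt((a, a)) = 5.0
norm_b = np.linalg.norm(b)    # the same norm computed by NumPy

# Cauchy–Bunyakovsky–Schwarz: |(a, b)| <= ||a|| ||b||
assert abs(a @ b) <= norm_a * norm_b

# angle between the vectors: cos(theta) = (a, b) / (||a|| ||b||)
cos_theta = (a @ b) / (norm_a * norm_b)

# (a, b) = 3*4 + 4*(-3) = 0, so a and b are orthogonal
print(cos_theta)  # -> 0.0
```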

  7. Linear algebra: Linear and affine dependence. The scalar product is symmetric and bilinear, i.e., for every a, b, c and α, β it holds that

     $$ (a, b) = (b, a), \qquad (\alpha a + \beta b, c) = \alpha (a, c) + \beta (b, c). $$

     A collection of vectors (v_1, ..., v_k) is said to be linearly independent if and only if

     $$ \sum_{i=1}^k \alpha_i v_i = 0 \;\Rightarrow\; \alpha_1 = \dots = \alpha_k = 0. $$

     Similarly, a collection of vectors (v_1, ..., v_k) is said to be affinely independent if and only if the collection (v_2 − v_1, v_3 − v_1, ..., v_k − v_1) is linearly independent.
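Both notions can be tested numerically via the rank of a matrix whose columns are the vectors; a sketch with illustrative vectors (one of which is a sum of the other two):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])   # v3 = v1 + v2, a linear dependence

# (v1, v2, v3) are linearly independent iff rank of the column matrix is 3
V = np.column_stack([v1, v2, v3])
lin_indep = np.linalg.matrix_rank(V) == 3     # False here

# affine independence of (v1, v2, v3): test (v2 - v1, v3 - v1)
D = np.column_stack([v2 - v1, v3 - v1])
aff_indep = np.linalg.matrix_rank(D) == 2     # True here
```

So the three vectors are linearly dependent yet affinely independent, showing the two notions do not coincide.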

  8. Linear algebra: Basis. The largest number of linearly independent vectors in ℝ^n is n; any n linearly independent vectors from ℝ^n are referred to as a basis. The basis (v_1, ..., v_n) is said to be orthogonal if (v_i, v_j) = 0 for all i ≠ j. If, in addition, ‖v_i‖ = 1 for i = 1, ..., n, the basis is called orthonormal. Given the basis (v_1, ..., v_n), every vector v can be written in a unique way as v = Σ_{i=1}^n α_i v_i, and the n-tuple (α_1, ..., α_n) will be referred to as the coordinates of v in this basis. If the basis (v_1, ..., v_n) is orthonormal, the coordinates α_i are computed as α_i = (v, v_i). The space ℝ^n will typically be equipped with the standard basis (e_1, ..., e_n), where e_i = (0, ..., 0, 1, 0, ..., 0)^T has the 1 in position i. For every vector v = (v_1, ..., v_n)^T we have (v, e_i) = v_i, which allows us to identify vectors and their coordinates.
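The coordinate formula α_i = (v, v_i) for an orthonormal basis is easy to verify; a sketch using the standard basis of ℝ^2 rotated by 45 degrees (an illustrative choice):

```python
import numpy as np

# an orthonormal basis of R^2
q1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
q2 = np.array([-1.0, 1.0]) / np.sqrt(2.0)

v = np.array([2.0, 0.0])

# coordinates in an orthonormal basis: alpha_i = (v, q_i)
alpha1 = v @ q1
alpha2 = v @ q2

# reconstruct v from its coordinates: v = alpha1 q1 + alpha2 q2
v_rec = alpha1 * q1 + alpha2 * q2
print(v_rec)  # -> [2. 0.]
```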

  9. Linear algebra: Matrices. All linear functions from ℝ^n to ℝ^k may be described using the linear space of real matrices ℝ^{k×n} (i.e., with k rows and n columns). Given a matrix A ∈ ℝ^{k×n}, it will often be convenient to view it as a row of its columns, which are thus vectors in ℝ^k: if A has elements A_{ij}, we write A = (a_1, ..., a_n), where a_i = (A_{1i}, ..., A_{ki})^T ∈ ℝ^k. The addition of two matrices and scalar-matrix multiplication are defined in a straightforward way. For v = (v_1, ..., v_n)^T ∈ ℝ^n we define

     $$ A v = \sum_{i=1}^n v_i a_i \in \mathbb{R}^k. $$
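The definition of A v as a linear combination of the columns of A agrees with the usual matrix-vector product; a sketch with an illustrative 2 × 3 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])      # A in R^{2x3}, columns a_1, a_2, a_3
v = np.array([2.0, -1.0, 1.0])

# A v as the linear combination sum_i v_i a_i of the columns of A
combo = sum(v[i] * A[:, i] for i in range(3))

print(A @ v, combo)  # both equal [0. 2.]
```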

  10. Linear algebra: Matrix norm and transpose. We also define a norm of the matrix A by

     $$ \|A\| = \max_{v \in \mathbb{R}^n,\ \|v\| = 1} \|A v\|. $$

     For a given matrix A ∈ ℝ^{k×n} we define the matrix transpose A^T ∈ ℝ^{n×k} with elements (A^T)_{ij} = A_{ji}. A more elegant definition: A^T is the unique matrix satisfying the equality

     $$ (A v, u) = (v, A^T u) $$

     for all v ∈ ℝ^n and u ∈ ℝ^k. From this definition it should be clear that ‖A‖ = ‖A^T‖ and that (A^T)^T = A.
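Both properties can be checked numerically. This operator norm equals the largest singular value of A, which NumPy computes as `np.linalg.norm(A, 2)`; the matrix below is random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))

# the operator norm max_{||v||=1} ||A v|| is the largest singular value
op_norm = np.linalg.norm(A, 2)

# ||A|| = ||A^T||
assert np.isclose(op_norm, np.linalg.norm(A.T, 2))

# (A v, u) = (v, A^T u) for arbitrary v, u
v = rng.standard_normal(4)
u = rng.standard_normal(3)
assert np.isclose((A @ v) @ u, v @ (A.T @ u))
```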

  11. Linear algebra: Matrix product. Given two matrices A ∈ ℝ^{k×n} and B ∈ ℝ^{n×m}, we define the matrix product C = AB ∈ ℝ^{k×m} elementwise by

     $$ C_{ij} = \sum_{\ell=1}^n A_{i\ell} B_{\ell j}, \qquad i = 1, \dots, k, \quad j = 1, \dots, m. $$

     In other words, C = AB iff for all v ∈ ℝ^m, C v = A (B v). The matrix product is associative, i.e., A(BC) = (AB)C, but not commutative, i.e., AB ≠ BA in general, for matrices of compatible sizes.
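A sketch checking the elementwise definition against NumPy's `@`, plus associativity and the failure of commutativity, with small illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 2.0]])

# elementwise definition C_ij = sum_l A_il B_lj agrees with '@'
AB = np.array([[sum(A[i, l] * B[l, j] for l in range(2))
                for j in range(2)] for i in range(2)])
assert np.allclose(AB, A @ B)

# associative ...
assert np.allclose(A @ (B @ C), (A @ B) @ C)

# ... but not commutative in general
print(np.allclose(A @ B, B @ A))  # -> False
```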

  12. Linear algebra: Matrix norm and product. It is easy (and instructive) to check that ‖AB‖ ≤ ‖A‖ ‖B‖ and that (AB)^T = B^T A^T. Vectors v ∈ ℝ^n can be (and sometimes will be) viewed as matrices v ∈ ℝ^{n×1}. Check that this embedding is norm-preserving, i.e., the norm of v viewed as a vector equals the norm of v viewed as a matrix with one column. The triangle inequality is valid for vectors and matrices:

     $$ \|a + b\| \le \|a\| + \|b\|, \qquad \|A + B\| \le \|A\| + \|B\|, $$
     $$ \|a - b\| \ge \|a\| - \|b\|, \qquad \|A - B\| \ge \|A\| - \|B\|. $$

  13. Linear algebra: Matrix inverse. For a square matrix A ∈ ℝ^{n×n} we can discuss the existence of the unique matrix A^{-1}, called the inverse of A, verifying A^{-1} A v = v for all v ∈ ℝ^n. If the inverse of a given matrix exists, we call the matrix nonsingular. The inverse matrix exists iff:
     - the columns of A are linearly independent;
     - the columns of A^T are linearly independent;
     - the system A x = v has a unique solution for every v ∈ ℝ^n;
     - the system A x = 0 has x = 0 as its unique solution.
     From this definition it follows that A is nonsingular iff A^T is nonsingular, and, furthermore, (A^{-1})^T = (A^T)^{-1}, which will therefore be denoted simply A^{-T}. At last, if A and B are two nonsingular matrices of the same size, then AB is nonsingular and (AB)^{-1} = B^{-1} A^{-1}.
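The identities (A^{-1})^T = (A^T)^{-1} and (AB)^{-1} = B^{-1} A^{-1} can be verified numerically; a sketch with two illustrative nonsingular 2 × 2 matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # nonsingular: columns independent
B = np.array([[1.0, 1.0], [0.0, 1.0]])

Ainv = np.linalg.inv(A)

# A^{-1} A v = v for every v (checked on one vector)
v = np.array([3.0, -2.0])
assert np.allclose(Ainv @ (A @ v), v)

# (A^{-1})^T = (A^T)^{-1}, written A^{-T}
assert np.allclose(Ainv.T, np.linalg.inv(A.T))

# (A B)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv)
```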

  14. Linear algebra: Eigenvalues and eigenvectors (1/2). If for some nonzero vector v ∈ ℝ^n and some scalar α ∈ ℝ it holds that A v = α v, we call α an eigenvalue of A and v an eigenvector corresponding to the eigenvalue α. Eigenvectors corresponding to a given eigenvalue form a linear subspace of ℝ^n; two nonzero eigenvectors corresponding to two distinct eigenvalues are linearly independent. In general, every matrix A ∈ ℝ^{n×n} has n eigenvalues (counted with multiplicity), possibly complex, which are furthermore roots of the characteristic equation

     $$ \det(A - \lambda I) = 0, $$

     where I ∈ ℝ^{n×n} is the identity matrix, characterized by the fact that I v = v for all v ∈ ℝ^n.

  15. Linear algebra: Eigenvalues and eigenvectors (2/2). In general we have ‖A‖ ≥ |λ_n|, where λ_n is the eigenvalue with the largest absolute value. The matrix A is nonsingular iff none of its eigenvalues are equal to zero, and in this case the eigenvalues of A^{-1} are equal to the reciprocals of the eigenvalues of A. The eigenvalues of A^T are equal to the eigenvalues of A. We call A symmetric iff A^T = A. All eigenvalues of symmetric matrices are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal.
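A sketch of these facts for an illustrative symmetric matrix, using NumPy's `eigh`/`eigvalsh`, which are specialized for symmetric matrices and return real eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric: A^T = A

lam, V = np.linalg.eigh(A)               # eigenvalues ascending: [1. 3.]

# A v = lambda v for each eigenpair
for i in range(2):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])

# ||A|| >= |lambda| for every eigenvalue; for symmetric A the
# operator norm equals the largest |lambda|
assert np.isclose(np.linalg.norm(A, 2), abs(lam).max())

# eigenvalues of A^{-1} are the reciprocals of those of A
lam_inv = np.linalg.eigvalsh(np.linalg.inv(A))
assert np.allclose(np.sort(lam_inv), np.sort(1.0 / lam))
```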

  16. Analysis (outline slide introducing part 2: Analysis)

  17. Analysis: Taylor series. A function f(x) has the expansion

     $$ f(x + h) = f(x) + h f'(x) + \dots + \frac{h^k}{k!} f^{(k)}(x) + E, $$

     where the error term E takes the forms

     $$ E = \frac{1}{k!} \int_0^h (h - t)^k f^{(k+1)}(x + t) \, dt \quad \text{[Peano]} $$
     $$ \phantom{E} = \frac{h^{k+1}}{(k+1)!} f^{(k+1)}(x + \eta), \quad \eta \in (0, h) \quad \text{[Lagrange]} $$
     $$ \phantom{E} = O(h^{k+1}). $$
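The O(h^{k+1}) behaviour of the error is easy to observe numerically: halving h should divide the error by roughly 2^{k+1}. A sketch using f(x) = e^x at x = 0 (an illustrative choice of function and step sizes):

```python
from math import exp, factorial

def taylor_exp(h, k):
    """Order-k Taylor polynomial of e^x at x = 0, evaluated at h."""
    return sum(h**j / factorial(j) for j in range(k + 1))

k = 3
e1 = abs(exp(0.2) - taylor_exp(0.2, k))   # error at h = 0.2
e2 = abs(exp(0.1) - taylor_exp(0.1, k))   # error at h = 0.1

# the ratio is close to 2^(k+1) = 16, as E = O(h^{k+1}) predicts
print(e1 / e2)
```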

  18. Analysis: Multi-index notation. Given a list of non-negative integers α = (α_1, α_2, ..., α_n), called a multi-index, a vector z ∈ ℝ^n, and a function f : ℝ^n → ℝ, we define

     $$ \alpha! = \alpha_1! \, \alpha_2! \cdots \alpha_n!, \qquad |\alpha| = \alpha_1 + \alpha_2 + \dots + \alpha_n, $$
     $$ z^\alpha = z_1^{\alpha_1} z_2^{\alpha_2} \cdots z_n^{\alpha_n}, \qquad \partial^\alpha f(z) = \frac{\partial^{|\alpha|} f(z_1, z_2, \dots, z_n)}{\partial z_1^{\alpha_1} \, \partial z_2^{\alpha_2} \cdots \partial z_n^{\alpha_n}}. $$
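The purely combinatorial parts of the notation (α!, |α|, and z^α) can be sketched directly; the multi-index and point below are illustrative:

```python
from math import factorial

alpha = (2, 0, 1)        # a multi-index in dimension n = 3
z = (2.0, 5.0, 3.0)      # a point of R^3

# alpha! = alpha_1! alpha_2! ... alpha_n!
fact = 1
for a in alpha:
    fact *= factorial(a)

# |alpha| = alpha_1 + alpha_2 + ... + alpha_n (the order of the derivative)
order = sum(alpha)

# z^alpha = z_1^alpha_1 z_2^alpha_2 ... z_n^alpha_n
monomial = 1.0
for zi, ai in zip(z, alpha):
    monomial *= zi ** ai

print(fact, order, monomial)  # -> 2 3 12.0
```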
