

1. Announcements Monday, April 22
◮ Please fill out your CIOS survey! If 85% of the class completes the survey by April 25th, then we will drop two quizzes instead of one.
◮ My office hours are this week Monday through Wednesday at 10 am, and next week Monday at 10 am in Clough 248.
◮ DISCLAIMER: Everything I say about the final today is advice; it does not guarantee anything. In particular, if something is not on the slides today, that does not mean it is not important for the exam.
◮ The final will be about twice as long as the midterm.
◮ It is roughly split between three topics, namely "midterm 2 + linear independence", "midterm 3", and "post midterm 3".
◮ Final exam time: Tuesday, April 30th, 6–8:50pm. Location: Clough 144

  2. Section 3.5 Linear Independence

3. Linear Independence, Definition
Definition. A set of vectors $\{v_1, v_2, \ldots, v_p\}$ in $\mathbf{R}^n$ is linearly independent if the vector equation
$$x_1 v_1 + x_2 v_2 + \cdots + x_p v_p = 0$$
has only the trivial solution $x_1 = x_2 = \cdots = x_p = 0$. The set $\{v_1, v_2, \ldots, v_p\}$ is linearly dependent otherwise.
Theorem. A set of vectors $\{v_1, v_2, \ldots, v_p\}$ is linearly dependent if and only if one of the vectors is in the span of the other ones.
Theorem. A set of vectors $\{v_1, v_2, \ldots, v_p\}$ is linearly dependent if and only if you can remove one of the vectors without shrinking the span.
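In practice the definition turns into a pivot count. A minimal sketch, assuming SymPy and example vectors of my own (not from the slides): the vectors are independent exactly when every column of the matrix they form is a pivot column.

```python
# Test linear independence by row reducing the matrix whose
# columns are the given vectors (example vectors are assumed).
from sympy import Matrix

v1, v2, v3 = Matrix([1, 2, 3]), Matrix([4, 5, 6]), Matrix([5, 7, 9])

A = Matrix.hstack(v1, v2, v3)      # columns are v1, v2, v3
rref, pivots = A.rref()            # pivots = indices of pivot columns

# Independent exactly when every column has a pivot,
# i.e. Ax = 0 has only the trivial solution.
independent = len(pivots) == A.cols
print(independent)                 # False: v3 = v1 + v2 is a dependence
```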

4. Linear Dependence and Free Variables
Theorem. Let $v_1, v_2, \ldots, v_p$ be vectors in $\mathbf{R}^n$, and consider the matrix
$$A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_p \\ | & | & & | \end{pmatrix}.$$
Then you can delete the columns of $A$ without pivots (the columns corresponding to free variables) without changing $\operatorname{Span}\{v_1, v_2, \ldots, v_p\}$. The pivot columns are linearly independent, so you can't delete any more columns. This means that each time you add a pivot column, the span increases.
Upshot. Let $d$ be the number of pivot columns in the matrix $A$ above.
◮ If $d = 1$ then $\operatorname{Span}\{v_1, v_2, \ldots, v_p\}$ is a line.
◮ If $d = 2$ then $\operatorname{Span}\{v_1, v_2, \ldots, v_p\}$ is a plane.
◮ If $d = 3$ then $\operatorname{Span}\{v_1, v_2, \ldots, v_p\}$ is a 3-space.
◮ Etc.
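A small illustration of the theorem, with an example matrix of my own: deleting the non-pivot columns leaves the span (and hence the rank) unchanged.

```python
# Deleting non-pivot columns does not shrink the span.
from sympy import Matrix

A = Matrix([[1, 4, 5],
            [2, 5, 7],
            [3, 6, 9]])            # third column = first + second

rref, pivots = A.rref()
print(pivots)                      # (0, 1): columns 0 and 1 have pivots

B = A[:, list(pivots)]             # keep only the pivot columns
# Both matrices have the same column space, here a plane (d = 2):
print(A.rank(), B.rank())          # 2 2
```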

  5. Section 3.6 Subspaces

6. Definition of Subspace
Definition. A subspace of $\mathbf{R}^n$ is a subset $V$ of $\mathbf{R}^n$ satisfying:
1. The zero vector is in $V$. ("not empty")
2. If $u$ and $v$ are in $V$, then $u + v$ is also in $V$. ("closed under addition")
3. If $u$ is in $V$ and $c$ is in $\mathbf{R}$, then $cu$ is in $V$. ("closed under scalar multiplication")
Fast-forward. Every subspace is a span, and every span is a subspace. Think of a subspace as the span of some set of vectors, even if you haven't computed those vectors yet.
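To see closure concretely, here is a sketch with assumed example vectors: a vector $w$ lies in $\operatorname{Span}\{v_1, v_2\}$ exactly when appending $w$ as a column does not increase the rank, and sums of vectors in the span pass this test.

```python
# A span is closed under addition: membership via a rank test.
from sympy import Matrix

v1, v2 = Matrix([1, 0, 2]), Matrix([0, 1, 1])
V = Matrix.hstack(v1, v2)

u = 2*v1 + 3*v2                    # two vectors in Span{v1, v2}
w = -v1 + 5*v2
s = u + w                          # their sum

in_span = Matrix.hstack(V, s).rank() == V.rank()
print(in_span)                     # True: the span is closed under addition
```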

  7. Section 3.7 Basis and Dimension

8. Basis of a Subspace
Definition. Let $V$ be a subspace of $\mathbf{R}^n$. A basis of $V$ is a set of vectors $\{v_1, v_2, \ldots, v_m\}$ in $V$ such that:
1. $V = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}$, and
2. $\{v_1, v_2, \ldots, v_m\}$ is linearly independent.
The number of vectors in a basis is the dimension of $V$, and is written $\dim V$.
Important. A subspace has many different bases, but they all have the same number of vectors.

9. Bases of $\mathbf{R}^n$
The unit coordinate vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix},\quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ 0 \end{pmatrix},\quad \ldots,\quad e_{n-1} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ 0 \end{pmatrix},\quad e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}$$
are a basis for $\mathbf{R}^n$. The identity matrix $I_n$ has columns $e_1, e_2, \ldots, e_n$.
1. They span: $I_n$ has a pivot in every row.
2. They are linearly independent: $I_n$ has a pivot in every column.
In general: $\{v_1, v_2, \ldots, v_n\}$ is a basis for $\mathbf{R}^n$ if and only if the matrix
$$A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}$$
has a pivot in every row and every column.
Sanity check: we have shown that $\dim \mathbf{R}^n = n$.
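A quick sketch of the general criterion, with an assumed $3 \times 3$ example: a square matrix has a pivot in every row and every column exactly when it has full rank.

```python
# n vectors form a basis of R^n iff the square matrix with those
# columns has full rank (a pivot in every row and every column).
from sympy import Matrix, eye

A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [1, 0, 1]])

is_basis = A.rank() == A.rows == A.cols
print(is_basis)                    # True for this 3x3 example

print(eye(3).rank())               # 3: the unit coordinate vectors
```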

10. Basis for Nul $A$ and Col $A$
Fact. The vectors in the parametric vector form of the general solution to $Ax = 0$ always form a basis for Nul $A$.
Fact. The pivot columns of $A$ always form a basis for Col $A$.
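SymPy computes exactly these two bases: `nullspace()` returns the vectors from the parametric vector form of $Ax = 0$, and `columnspace()` returns the pivot columns. The example matrix below is my own.

```python
# Bases for Nul A and Col A in SymPy.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])            # rank 1

for v in A.nullspace():            # basis for Nul A (2 vectors)
    print(v.T)
for v in A.columnspace():          # basis for Col A (the 1 pivot column)
    print(v.T)
```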

11. The Rank Theorem
Definition. The rank of a matrix $A$, written $\operatorname{rank} A$, is the dimension of the column space Col $A$. The nullity of $A$, written $\operatorname{nullity} A$, is the dimension of the solution set of $Ax = 0$.
Observe:
$\operatorname{rank} A = \dim \operatorname{Col} A$ = the number of columns with pivots;
$\operatorname{nullity} A = \dim \operatorname{Nul} A$ = the number of free variables = the number of columns without pivots.
Rank Theorem. If $A$ is an $m \times n$ matrix, then
$$\operatorname{rank} A + \operatorname{nullity} A = n = \text{the number of columns of } A.$$
In other words, (dimension of column space) + (dimension of solution set) = (number of variables).
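A quick check of the Rank Theorem on an assumed example: the rank and nullity of any matrix should always add up to the number of columns.

```python
# rank A + nullity A = n (the number of columns).
from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],
            [1, 1, 1, 1]])         # a 3x4 matrix

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity == A.cols)   # 2 2 True
```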

12. The Basis Theorem
Basis Theorem. Let $V$ be a subspace of dimension $m$. Then:
◮ Any $m$ linearly independent vectors in $V$ form a basis for $V$.
◮ Any $m$ vectors that span $V$ form a basis for $V$.
Upshot. If you already know that $\dim V = m$, and you have $m$ vectors $B = \{v_1, v_2, \ldots, v_m\}$ in $V$, then you only have to check one of
1. $B$ is linearly independent, or
2. $B$ spans $V$
in order for $B$ to be a basis.

  13. Chapter 4 Linear Transformations and Matrix Algebra

  14. Section 4.1 Matrix Transformations

15. Transformations Vocabulary
Definition. A transformation (or function or map) from $\mathbf{R}^n$ to $\mathbf{R}^m$ is a rule $T$ that assigns to each vector $x$ in $\mathbf{R}^n$ a vector $T(x)$ in $\mathbf{R}^m$.
◮ $\mathbf{R}^n$ is called the domain of $T$ (the inputs).
◮ $\mathbf{R}^m$ is called the codomain of $T$ (the outputs).
◮ For $x$ in $\mathbf{R}^n$, the vector $T(x)$ in $\mathbf{R}^m$ is the image of $x$ under $T$. Notation: $x \mapsto T(x)$.
◮ The set of all images $\{T(x) \mid x \text{ in } \mathbf{R}^n\}$ is the range of $T$.
Notation: $T : \mathbf{R}^n \to \mathbf{R}^m$ means $T$ is a transformation from $\mathbf{R}^n$ to $\mathbf{R}^m$.
It may help to think of $T$ as a "machine" that takes $x$ as an input and gives you $T(x)$ as the output.
[Figure: the domain $\mathbf{R}^n$, the codomain $\mathbf{R}^m$, and the range of $T$ sitting inside the codomain.]
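A tiny sketch of the vocabulary, with a made-up matrix transformation $T : \mathbf{R}^2 \to \mathbf{R}^3$: the domain is $\mathbf{R}^2$, the codomain is $\mathbf{R}^3$, and $T(x)$ is the image of $x$.

```python
# A matrix transformation T : R^2 -> R^3 defined by T(x) = Ax.
from sympy import Matrix

A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])               # 3x2, so T maps R^2 into R^3

def T(x):
    return A * x

x = Matrix([2, 5])
print(T(x).T)                      # image of x: Matrix([[2, 5, 7]])
```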

  16. Section 4.2 One-to-one and Onto Transformations

17. One-to-one Transformations
Definition. A transformation $T : \mathbf{R}^n \to \mathbf{R}^m$ is one-to-one (or into, or injective) if different vectors in $\mathbf{R}^n$ map to different vectors in $\mathbf{R}^m$. In other words, for every $b$ in $\mathbf{R}^m$, the equation $T(x) = b$ has at most one solution $x$. Or: different inputs have different outputs. Note that not one-to-one means at least two different vectors in $\mathbf{R}^n$ have the same image.
Definition. A transformation $T : \mathbf{R}^n \to \mathbf{R}^m$ is onto (or surjective) if the range of $T$ is equal to $\mathbf{R}^m$ (its codomain). In other words, for every $b$ in $\mathbf{R}^m$, the equation $T(x) = b$ has at least one solution. Or: every possible output has an input. Note that not onto means there is some $b$ in $\mathbf{R}^m$ which is not the image of any $x$ in $\mathbf{R}^n$.

18. Characterization of One-to-One Matrix Transformations
Theorem. Let $T : \mathbf{R}^n \to \mathbf{R}^m$ be a matrix transformation with matrix $A$. Then the following are equivalent:
◮ $T$ is one-to-one.
◮ $T(x) = b$ has one or zero solutions for every $b$ in $\mathbf{R}^m$.
◮ $Ax = b$ has a unique solution or is inconsistent for every $b$ in $\mathbf{R}^m$.
◮ $Ax = 0$ has a unique solution.
◮ The columns of $A$ are linearly independent.
◮ $A$ has a pivot in every column.

19. Characterization of Onto Matrix Transformations
Theorem. Let $T : \mathbf{R}^n \to \mathbf{R}^m$ be a matrix transformation with matrix $A$. Then the following are equivalent:
◮ $T$ is onto.
◮ $T(x) = b$ has a solution for every $b$ in $\mathbf{R}^m$.
◮ $Ax = b$ is consistent for every $b$ in $\mathbf{R}^m$.
◮ The columns of $A$ span $\mathbf{R}^m$.
◮ $A$ has a pivot in every row.
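Both this characterization and the one on the previous slide come down to counting pivots. A sketch with example matrices of my own: one-to-one means a pivot in every column (rank $= n$), onto means a pivot in every row (rank $= m$).

```python
# One-to-one and onto checks for T(x) = Ax via the rank.
from sympy import Matrix

def one_to_one(A):
    return A.rank() == A.cols      # pivot in every column

def onto(A):
    return A.rank() == A.rows      # pivot in every row

tall = Matrix([[1, 0], [0, 1], [1, 1]])   # 3x2: one-to-one, not onto
wide = Matrix([[1, 0, 1], [0, 1, 1]])     # 2x3: onto, not one-to-one

print(one_to_one(tall), onto(tall))        # True False
print(one_to_one(wide), onto(wide))        # False True
```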

  20. Section 4.3 Linear Transformations

21. Linear Transformations
Definition. A transformation $T : \mathbf{R}^n \to \mathbf{R}^m$ is linear if it satisfies
$$T(u + v) = T(u) + T(v) \quad\text{and}\quad T(cv) = cT(v)$$
for all vectors $u, v$ in $\mathbf{R}^n$ and all scalars $c$. In other words, $T$ "respects" addition and scalar multiplication.
Take-Away. Linear transformations are the same as matrix transformations.
Dictionary: a linear transformation $T : \mathbf{R}^n \to \mathbf{R}^m$ corresponds to the $m \times n$ matrix
$$A = \begin{pmatrix} | & | & & | \\ T(e_1) & T(e_2) & \cdots & T(e_n) \\ | & | & & | \end{pmatrix},$$
and then $T(x) = Ax$.
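A sketch of the dictionary above (the shear example is my own): the matrix of a linear transformation has columns $T(e_1), \ldots, T(e_n)$, and multiplying by it recovers $T$.

```python
# Build the matrix of a linear map from its values on e_1, ..., e_n.
from sympy import Matrix

def T(x):
    # A linear map R^2 -> R^2 (a horizontal shear).
    return Matrix([x[0] + 2*x[1], x[1]])

e1, e2 = Matrix([1, 0]), Matrix([0, 1])
A = Matrix.hstack(T(e1), T(e2))    # columns are T(e1) and T(e2)

x = Matrix([3, 4])
print(T(x) == A * x)               # True: T(x) = Ax
```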

  22. Section 4.4 Matrix Multiplication

  23. Section 4.5 Matrix Inverses

24. The Definition of Inverse
Definition. Let $A$ be an $n \times n$ square matrix. We say $A$ is invertible (or nonsingular) if there is a matrix $B$ of the same size such that
$$AB = I_n \quad\text{and}\quad BA = I_n,$$
where $I_n$ is the identity matrix
$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
In this case, $B$ is the inverse of $A$, and is written $A^{-1}$.
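A quick sketch (the example matrix is assumed): computing an inverse and verifying both defining equations $AB = I_n$ and $BA = I_n$.

```python
# Compute an inverse and check both defining equations.
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

B = A.inv()                        # SymPy's exact inverse
print(B)                           # Matrix([[1, -1], [-1, 2]])
print(A * B == eye(2), B * A == eye(2))    # True True
```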

25. The Invertible Matrix Theorem
A.K.A. The Really Big Theorem of Math 1553
The Invertible Matrix Theorem. Let $A$ be an $n \times n$ matrix, and let $T : \mathbf{R}^n \to \mathbf{R}^n$ be the linear transformation $T(x) = Ax$. The following statements are equivalent (you really have to know these):
1. $A$ is invertible.
2. $T$ is invertible.
3. The reduced row echelon form of $A$ is the identity matrix $I_n$.
4. $A$ has $n$ pivots.
5. $Ax = 0$ has no solutions other than the trivial solution.
6. $\operatorname{Nul}(A) = \{0\}$.
7. $\operatorname{nullity}(A) = 0$.
8. The columns of $A$ are linearly independent.
9. The columns of $A$ form a basis for $\mathbf{R}^n$.
10. $T$ is one-to-one.
11. $Ax = b$ is consistent for all $b$ in $\mathbf{R}^n$.
12. $Ax = b$ has a unique solution for each $b$ in $\mathbf{R}^n$.
13. The columns of $A$ span $\mathbf{R}^n$.
14. $\operatorname{Col} A = \mathbf{R}^n$.
15. $\dim \operatorname{Col} A = n$.
16. $\operatorname{rank} A = n$.
17. $T$ is onto.
18. There exists a matrix $B$ such that $AB = I_n$.
19. There exists a matrix $B$ such that $BA = I_n$.
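A sketch checking several of the equivalent statements at once on an assumed $3 \times 3$ example; for an invertible matrix they must all agree.

```python
# Several Invertible Matrix Theorem conditions agree on one matrix.
from sympy import Matrix, eye

A = Matrix([[2, 0, 1],
            [0, 1, 0],
            [1, 0, 1]])

R, pivots = A.rref()
checks = {
    "rref is identity":  R == eye(3),             # statement 3
    "n pivots":          len(pivots) == 3,        # statement 4
    "trivial nullspace": A.nullspace() == [],     # statements 5-7
    "rank n":            A.rank() == 3,           # statements 15-16
}
print(all(checks.values()))        # True: A is invertible
```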

  26. Chapter 5 Determinants

  27. Section 5.1 Determinants: Definition
