  1. Math 313 - Linear Algebra § 1.7 - 1.10 September 9 - 16, 2016 There is no royal road to geometry. - Euclid to Ptolemy

  2. § 1.7 Linear Independence Definition (Linear Independence) An indexed set of vectors { v 1 , . . . , v p } in R n is said to be linearly independent if the vector equation x 1 v 1 + x 2 v 2 + . . . + x p v p = 0 has only the trivial solution. The set { v 1 , . . . , v p } is said to be linearly dependent if there exist weights c 1 , . . . , c p , not all zero such that c 1 v 1 + c 2 v 2 + . . . + c p v p = 0 (1) In this case equation (1) is called a dependence relation .

  3. § 1.7 Linear Independence Example: Determine if
  $$v_1 = \begin{pmatrix} 1 \\ 4 \\ 7 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 2 \\ 5 \\ 8 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 3 \\ 6 \\ 9 \end{pmatrix}$$
  are linearly independent. Example: Determine if
  $$w_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad w_2 = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \quad w_3 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$$
  are linearly independent.
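A quick numerical check of the two examples above (a NumPy sketch, not part of the original slides): a set of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

# Columns are v1, v2, v3 from the first example.
V = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Columns are w1, w2, w3 from the second example.
W = np.array([[1, 1, 1],
              [0, 0, 1],
              [1, 2, 0]])

rank_V = np.linalg.matrix_rank(V)  # 2 < 3: the v's are linearly dependent
rank_W = np.linalg.matrix_rank(W)  # 3: the w's are linearly independent
```

Here $v_3 = 2v_2 - v_1$ is a dependence relation, which is why the rank drops to 2.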

  4. § 1.7 Linear Independence Saying that the vector equation x_1 v_1 + x_2 v_2 + · · · + x_p v_p = 0 has only the trivial solution x_1 = · · · = x_p = 0 is the same as saying that the matrix equation
  $$\begin{pmatrix} v_1 & v_2 & \cdots & v_p \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
  has only the trivial solution x = 0. Fact: The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has only the trivial solution.

  5. § 1.7 Linear Independence Fact 1. A set { v } containing a single vector is linearly independent if and only if v ≠ 0. 2. A set { v, w } is linearly independent if and only if neither of v nor w is a scalar multiple of the other. Warning! A set of 3 or more vectors may be linearly dependent even though none of them is a scalar multiple of another vector in the set.

  6. § 1.7 Linear Independence Example: Determine if
  $$w_1 = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix} \quad \text{and} \quad w_2 = \begin{pmatrix} 16 \\ 8 \\ -7 \end{pmatrix}$$
  are linearly independent.

  7. § 1.7 Linear Independence Theorem An indexed set S = { v_1, . . . , v_p } of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others. In fact, if S is linearly dependent and v_1 ≠ 0, then some v_j (with j > 1) is a linear combination of the preceding vectors, v_1, . . . , v_{j−1}. Warning! This theorem does not say that every vector of a linearly dependent set can be written as a linear combination of the other vectors, just that some vector can.

  8. § 1.7 Linear Independence Example: Consider
  $$u = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} \quad \text{and} \quad v = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}.$$
  What is Span{ u, v }? If w is another vector in R^3, where will w lie if { u, v, w } is linearly independent? Where will it lie if { u, v, w } is linearly dependent?

  9. § 1.7 Linear Independence Theorem If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set { v 1 , . . . , v p } in R n is linearly dependent if p > n . Theorem If a set S = { v 1 , . . . , v p } in R n contains the zero vector, then the set is linearly dependent.

  10. § 1.7 Linear Independence Example: Which of the following sets of vectors is linearly independent?
  1. $\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \begin{pmatrix} -3 \\ 3 \\ -2 \end{pmatrix}, \begin{pmatrix} 5 \\ 5 \\ 2 \end{pmatrix}$
  2. $\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -3 \\ 3 \\ -2 \end{pmatrix}$
  3. $\begin{pmatrix} 2 \\ 2 \\ 8 \\ 4 \end{pmatrix}, \begin{pmatrix} 3 \\ 4 \\ 5 \\ 6 \end{pmatrix}, \begin{pmatrix} -3 \\ -3 \\ -12 \\ -6 \end{pmatrix}$
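These three sets can be checked with a rank computation (a NumPy sketch, not part of the slides). Set 1 has more vectors than entries, set 2 contains the zero vector, and in set 3 the third vector is −3/2 times the first, so each set comes out linearly dependent.

```python
import numpy as np

# Columns of each matrix are the vectors of the corresponding set.
S1 = np.array([[1, 1, -3, 5],
               [2, 2,  3, 5],
               [0, 1, -2, 2]])          # four vectors in R^3
S2 = np.array([[1, 0, -3],
               [2, 0,  3],
               [0, 0, -2]])             # contains the zero vector
S3 = np.array([[ 2,  3,  -3],
               [ 2,  4,  -3],
               [ 8,  5, -12],
               [ 4,  6,  -6]])          # third column is -3/2 times the first

ranks = [np.linalg.matrix_rank(M) for M in (S1, S2, S3)]
# Independent exactly when rank equals the number of columns.
independent = [r == M.shape[1] for r, M in zip(ranks, (S1, S2, S3))]
```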

  11. § 1.8 Introduction to Linear Transformations Given an m × n matrix A and a vector x ∈ R^n, we can multiply A and x to give a vector in R^m. We can think of an m × n matrix as taking vectors in R^n and transforming them to vectors in R^m. Example: If
  $$A = \begin{pmatrix} 1 & -2 & 3 \\ 1 & 1 & 7 \end{pmatrix} \quad \text{and} \quad x = \begin{pmatrix} 2 \\ -2 \\ 5 \end{pmatrix} \in \mathbb{R}^3,$$
  then
  $$Ax = \begin{pmatrix} 1 & -2 & 3 \\ 1 & 1 & 7 \end{pmatrix} \begin{pmatrix} 2 \\ -2 \\ 5 \end{pmatrix} = \begin{pmatrix} 21 \\ 35 \end{pmatrix} \in \mathbb{R}^2.$$
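The product above is easy to verify numerically (a NumPy sketch):

```python
import numpy as np

A = np.array([[1, -2, 3],
              [1,  1, 7]])    # a 2x3 matrix: sends vectors in R^3 to R^2
x = np.array([2, -2, 5])      # a vector in R^3

Ax = A @ x                    # matrix-vector product: [21, 35]
```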

  12. § 1.8 Introduction to Linear Transformations A transformation (or function or mapping ) T from R^n to R^m is a rule that assigns to each vector x in R^n a vector T ( x ) in R^m. The set R^n is called the domain of T, and the set R^m is called the codomain of T. Notation: T : R^n → R^m, x ↦ T ( x ). For a vector x ∈ R^n, the vector T ( x ) in R^m is called the image of x, and the set of all images T ( x ) is called the range of T.

  13. § 1.8 Introduction to Linear Transformations [Diagram: T sends a vector x in the domain R^n to its image T ( x ) in the codomain R^m; the range is the subset of the codomain consisting of all images.]

  14. § 1.8 Introduction to Linear Transformations Example: Let T ( x ) = Ax for all x ∈ R^3, where
  $$A = \begin{pmatrix} 1 & -2 & 3 \\ 1 & 1 & 7 \end{pmatrix}.$$
  What is the domain and codomain of T? Let
  $$b = \begin{pmatrix} -4 \\ -5 \end{pmatrix}, \quad c = \begin{pmatrix} -2 \\ -5 \end{pmatrix}, \quad \text{and} \quad u = \begin{pmatrix} 1 \\ 7 \\ -4 \end{pmatrix}.$$
  Compute T ( u ). Are b and c in the range of T? If so, find vectors x and v with T ( x ) = b and T ( v ) = c. What is the range of T?

  15. § 1.8 Introduction to Linear Transformations Definition A transformation T is linear if for all u , v in the domain of T and all scalars c ∈ R 1. T ( u + v ) = T ( u ) + T ( v ) and 2. T ( c u ) = cT ( u ). Fact If T is a linear transformation, then for all vectors u , v in the domain of T , and all scalars c, d T ( 0 ) = 0 , and T ( c u + d v ) = cT ( u ) + dT ( v ) .
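Every matrix transformation satisfies both conditions in the definition. A small check (the matrix and the test vectors below are arbitrary choices for illustration, not from the slides):

```python
import numpy as np

A = np.array([[1, -2, 3],
              [1,  1, 7]])
T = lambda v: A @ v              # the matrix transformation T(x) = Ax

u = np.array([1.0, 2.0, 3.0])    # arbitrary vectors in the domain R^3
v = np.array([-1.0, 0.0, 4.0])
c, d = 2.0, -3.0                 # arbitrary scalars

additive_ok      = np.allclose(T(u + v), T(u) + T(v))        # condition 1
scaling_ok       = np.allclose(T(c * u), c * T(u))           # condition 2
superposition_ok = np.allclose(T(c * u + d * v),             # the combined fact
                               c * T(u) + d * T(v))
zero_ok          = np.allclose(T(np.zeros(3)), np.zeros(2))  # T(0) = 0
```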

  16. § 1.8 Introduction to Linear Transformations Example: Let
  $$B = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
  $$D = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad E = \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}, \quad \text{and} \quad F = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
  Describe the linear transformations defined by these matrices. The matrix I above is called the identity matrix.

  17. § 1.9 The Matrix of a Linear Transformation Recall that for any n, we can define the following vectors in R^n:
  $$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.$$

  18. § 1.9 The Matrix of a Linear Transformation Theorem Let T : R^n → R^m be a linear transformation. Then there exists a unique matrix A such that T ( x ) = Ax for all x in R^n. In fact, A is the m × n matrix whose j th column is the vector T ( e_j ), where e_j is the j th column of the identity matrix I_n in R^n:
  $$A = \begin{pmatrix} T(e_1) & \cdots & T(e_n) \end{pmatrix}.$$
  The matrix given above is called the standard matrix for the linear transformation T.
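The theorem gives a recipe: apply T to each standard basis vector and use the results as columns. A sketch using a made-up linear map (the formula for T below is an illustration, not from the slides):

```python
import numpy as np

# A hypothetical linear map T : R^2 -> R^3, given here by an explicit formula.
def T(x):
    x1, x2 = x
    return np.array([x1 + 2 * x2, 3 * x1, x1 - x2])

n = 2
# j-th column of the standard matrix is T(e_j), e_j the j-th column of I_n.
A = np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

# The standard matrix reproduces T: A x == T(x) for any x.
x = np.array([5.0, -1.0])
same = np.allclose(A @ x, T(x))
```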

  19. § 1.9 The Matrix of a Linear Transformation Example: Find the standard matrix of the linear transformation T : R 2 → R 2 which reflects vectors in the line y = − x . Example: Find the standard matrix of the transformation S : R 3 → R 3 which reflects every vector through the xy -plane, and then projects to the xz -plane.
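One way to carry out both examples numerically, assuming the usual conventions (reflection in the line y = −x sends (x, y) to (−y, −x); reflecting through the xy-plane negates the z-coordinate; projecting onto the xz-plane zeroes the y-coordinate):

```python
import numpy as np

# T: reflection of R^2 in the line y = -x sends (x, y) to (-y, -x).
reflect = lambda v: np.array([-v[1], -v[0]])
# Standard matrix: columns are the images of the standard basis vectors.
A = np.column_stack([reflect(np.eye(2)[:, j]) for j in range(2)])
# A = [[0, -1], [-1, 0]]

# S: reflect through the xy-plane (negate z), then project onto the xz-plane
# (zero out y). Composition of matrix maps is a matrix product, applied
# right-to-left: first the reflection, then the projection.
reflect_xy = np.diag([1, 1, -1])
project_xz = np.diag([1, 0, 1])
B = project_xz @ reflect_xy   # B = diag(1, 0, -1)
```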

  20. § 1.9 The Matrix of a Linear Transformation Definition A transformation T : R^n → R^m is called onto if every vector b ∈ R^m is the image of at least one vector in R^n. [Diagram: for T, the range is a proper subset of the codomain R^m, so T is not onto; for S, the range fills the entire codomain R^m, so S is onto.]

  21. § 1.9 The Matrix of a Linear Transformation Definition A transformation T : R^n → R^m is called one-to-one if every vector b ∈ R^m is the image of at most one vector in R^n. [Diagram: T sends two distinct vectors of R^n to the same image, so T is not one-to-one; S sends distinct vectors to distinct images, so S is one-to-one.]

  22. § 1.9 The Matrix of a Linear Transformation Example: Let T : R^4 → R^3 be a linear transformation with standard matrix
  $$A = \begin{pmatrix} 2 & 1 & 8 & 1 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & -3 \end{pmatrix}.$$
  Is T one-to-one? Is T onto?

  23. § 1.9 The Matrix of a Linear Transformation Theorem Let T : R n → R m be a linear transformation, with standard matrix A (i.e. T ( x ) = A x for all vectors x ∈ R n ). Then the following are equivalent: 1. T is onto, 2. the columns of A span R m , 3. A has a pivot in every row. Proof. This is essentially Theorem 4 in § 1.4.

  24. § 1.9 The Matrix of a Linear Transformation Theorem Let T : R n → R m be a linear transformation, with standard matrix A (i.e. T ( x ) = A x for all vectors x ∈ R n ). Then the following are equivalent: 1. T is one-to-one, 2. T ( x ) = 0 has only the trivial solution x = 0 , 3. the columns of A are linearly independent, 4. A has a pivot in every column. Proof. We prove 1 ⇒ 4 ⇒ 2 ⇒ 3 ⇒ 1.
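Both theorems can be applied to the 3 × 4 matrix from the earlier example, using the rank of A as a stand-in for counting pivots (a NumPy sketch, not part of the slides):

```python
import numpy as np

A = np.array([[2, 1, 8,  1],
              [0, 2, 1,  0],
              [0, 0, 0, -3]])   # standard matrix of T : R^4 -> R^3

m, n = A.shape
r = np.linalg.matrix_rank(A)    # number of pivot positions of A

onto       = (r == m)           # pivot in every row    <=> T is onto
one_to_one = (r == n)           # pivot in every column <=> T is one-to-one
```

Here r = 3 = m but r < n = 4, so T is onto but not one-to-one, as expected for a map from a bigger space to a smaller one.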

  25. § 1.10 Linear Models in Business, Science and Engineering Difference Equations: Consider an influenza strain with the following properties. Each week, an uninfected person has a 1% chance of catching the disease. Each week, an infected person has a 70% chance of recovering and a 30% chance of remaining sick. Suppose that at week zero, 1000 people in a population of 100 000 are infected. Find a matrix equation to model this situation. Let
  $$x_k = \begin{pmatrix} \text{\# of healthy at week } k \\ \text{\# of sick at week } k \end{pmatrix}, \quad \text{then} \quad x_0 = \begin{pmatrix} 999\,000 \\ 1000 \end{pmatrix}.$$

  26. § 1.10 Linear Models in Business, Science and Engineering Let
  $$A = \begin{pmatrix} 0.99 & 0.7 \\ 0.01 & 0.3 \end{pmatrix}.$$
  Then for all k ≥ 1, x_k = A x_{k−1}.
  $$x_1 = A x_0 = \begin{pmatrix} 989710 \\ 10290 \end{pmatrix}, \quad x_2 = A x_1 = \begin{pmatrix} 987016 \\ 12984 \end{pmatrix},$$
  $$x_3 = A x_2 = \begin{pmatrix} 986235 \\ 13765 \end{pmatrix}, \quad \text{and} \quad x_4 = A x_3 = \begin{pmatrix} 986008 \\ 13992 \end{pmatrix}.$$
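The iteration is easy to reproduce (a sketch, not part of the slides; the exact iterates are fractional, and rounding to whole people gives the values above):

```python
import numpy as np

A = np.array([[0.99, 0.7],
              [0.01, 0.3]])
x = np.array([999_000.0, 1000.0])   # x_0: (healthy, sick) at week 0

weeks = []
for k in range(4):
    x = A @ x                       # x_k = A x_{k-1}
    weeks.append(np.rint(x).astype(int))  # round to whole people
```

Note that the column sums of A are 1, so the total population stays at 100 000 every week; the iterates settle toward a steady state near (986 000, 14 000).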
