

SLIDE 1

Math 313 - Linear Algebra §1.7 - 1.10

September 9 - 16, 2016

"There is no royal road to geometry." - Euclid, to Ptolemy
SLIDE 2

§1.7 Linear Independence

Definition (Linear Independence)

An indexed set of vectors {v1, . . . , vp} in Rn is said to be linearly independent if the vector equation

x1v1 + x2v2 + · · · + xpvp = 0

has only the trivial solution. The set {v1, . . . , vp} is said to be linearly dependent if there exist weights c1, . . . , cp, not all zero, such that

c1v1 + c2v2 + · · · + cpvp = 0.   (1)

In this case, equation (1) is called a dependence relation.

SLIDE 3

§1.7 Linear Independence

Example: Determine if the vectors v1 = (1, 4, 7), v2 = (2, 5, 8), and v3 = (3, 6, 9) are linearly independent.

Example: Determine if w1 = (1, 1), w2 = (1, 2), and w3 = (1, 1) are linearly independent.
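For the first example, a quick numerical check (a sketch using NumPy, which the slides themselves do not use): stack the vectors as columns and compare the rank to the number of columns.

```python
import numpy as np

# Columns are v1, v2, v3 from the first example.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Rank 2 < 3 columns, so the vectors are linearly dependent.
print(np.linalg.matrix_rank(A))  # 2

# A dependence relation: v1 - 2*v2 + v3 = 0.
print(A @ np.array([1, -2, 1]))  # [0 0 0]
```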

SLIDE 4

§1.7 Linear Independence

Saying that the vector equation x1v1 + x2v2 + · · · + xpvp = 0 has only the trivial solution x1 = · · · = xp = 0 is the same as saying that the matrix equation

[v1 v2 · · · vp] (x1, x2, . . . , xp) = 0

has only the trivial solution x = 0.

Fact

The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has only the trivial solution.

SLIDE 5

§1.7 Linear Independence

Fact

  • 1. A set {v} containing a single vector is linearly independent if and only if v ≠ 0.
  • 2. A set {v, w} is linearly independent if and only if neither of v nor w is a scalar multiple of the other.

Warning! A set of 3 or more vectors may be linearly dependent even though none of them is a scalar multiple of another vector in the set.

SLIDE 6

§1.7 Linear Independence

Example: Determine if w1 = (2, 1, −1) and w2 = (16, 8, −7) are linearly independent.
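One way to check this numerically (a sketch, not from the slides): the columns of a matrix are independent exactly when Ax = 0 has only the trivial solution, i.e. when the null space is {0}.

```python
import numpy as np

def null_space_dim(A, tol=1e-10):
    # Number of free variables in A x = 0, read off the singular values.
    s = np.linalg.svd(A, compute_uv=False)
    return A.shape[1] - int((s > tol).sum())

# w1 and w2 as the columns of A.
A = np.array([[ 2, 16],
              [ 1,  8],
              [-1, -7]])

# 0 free variables: A x = 0 forces x = 0, so w1, w2 are independent.
print(null_space_dim(A))  # 0
```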

SLIDE 7

§1.7 Linear Independence

Theorem

An indexed set S = {v1, . . . , vp} of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others. In fact, if S is linearly dependent and v1 ≠ 0, then some vj (with j > 1) is a linear combination of the preceding vectors v1, . . . , vj−1.

Warning! This theorem does not say that every vector of a linearly dependent set can be written as a linear combination of the other vectors, just that some vector can.
SLIDE 8

§1.7 Linear Independence

Example: Consider u = (1, 2) and v = (1, 1). What is Span{u, v}? If w is another vector in R3, where will w lie if {u, v, w} is linearly independent? Where will it lie if {u, v, w} is linearly dependent?

SLIDE 9

§1.7 Linear Independence

Theorem

If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, . . . , vp} in Rn is linearly dependent if p > n.

Theorem

If a set S = {v1, . . . , vp} in Rn contains the zero vector, then the set is linearly dependent.
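Both theorems can be illustrated numerically (a sketch; the random matrix is my own choice, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

# Four vectors in R^3 (p = 4 > n = 3): the rank is at most 3,
# so A x = 0 has a free variable and the columns are dependent.
A = rng.integers(-9, 9, size=(3, 4))
print(np.linalg.matrix_rank(A) < 4)  # True

# A set containing the zero vector is dependent: 1 * 0 = 0 is
# already a dependence relation.
B = np.column_stack([[1, 2], [0, 0]])
print(np.linalg.matrix_rank(B) < 2)  # True
```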

SLIDE 10

§1.7 Linear Independence

Example: Which of the following sets of vectors is linearly independent?

  1. (1, 2), (1, 2, 1), (−3, 3, −2), (5, 5, 2).
  2. (1, 2), ( ), (−3, 3, −2).
  3. (2, 2, 8, 4), (3, 4, 5, 6), (−3, −3, −12, −6).
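For set 3, no computation is needed once you spot that the third vector is −3/2 times the first (a NumPy check, assuming that reading of the entries):

```python
import numpy as np

v1 = np.array([2, 2, 8, 4])
v3 = np.array([-3, -3, -12, -6])

# v3 = -3/2 * v1 is a dependence relation, so set 3 is dependent.
print(np.array_equal(-1.5 * v1, v3))  # True
```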

SLIDE 11

§1.8 Introduction to Linear Transformations

Given an m × n matrix A and a vector x ∈ Rn, we can multiply A and x to give a vector in Rm. We can think of an m × n matrix as taking vectors in Rn and transforming them to vectors in Rm.

Example: If A = [1 −2 3; 1 1 7] and x = (2, −2, 5) ∈ R3, then Ax = (21, 35) ∈ R2.
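The multiplication in this example, checked in NumPy (a sketch, not part of the slides):

```python
import numpy as np

# The 2x3 matrix A sends vectors in R^3 to vectors in R^2.
A = np.array([[1, -2, 3],
              [1,  1, 7]])
x = np.array([2, -2, 5])

print(A @ x)  # [21 35]
```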
SLIDE 12

§1.8 Introduction to Linear Transformations

A transformation (or function or mapping) T from Rn to Rm is a rule that assigns to each vector x in Rn a vector T(x) in Rm. The set Rn is called the domain of T, and the set Rm is called the codomain of T.

Notation: T : Rn → Rm, x ↦ T(x).

For a vector x ∈ Rn, the vector T(x) in Rm is called the image of x, and the set of all images T(x) is called the range of T.
SLIDE 13

§1.8 Introduction to Linear Transformations

(Diagram: T maps x in the domain Rn to T(x) in the codomain Rm; the range of T sits inside the codomain.)

SLIDE 14

§1.8 Introduction to Linear Transformations

Example: Let T(x) = Ax for all x ∈ R3, where A = [1 −2 3; 1 1 7]. What is the domain and codomain of T?

Let b = (−4, −5), c = (−2, −5), and u = (1, 7, −4). Compute T(u). Are b and c in the range of T? If so, find vectors x and v with T(x) = b and T(v) = c. What is the range of T?
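A numerical companion to this example (a sketch; `lstsq` is my choice of solver, not the slides'): evaluate T(u), and find a preimage of b by solving the underdetermined system Ax = b.

```python
import numpy as np

A = np.array([[1, -2, 3],
              [1,  1, 7]])
u = np.array([1, 7, -4])
b = np.array([-4.0, -5.0])

print(A @ u)  # T(u) = [-25 -20]

# Minimum-norm solution of A x = b; since A has a pivot in every row,
# the system is consistent and b lies in the range of T.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x, b))  # True
```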

SLIDE 15

§1.8 Introduction to Linear Transformations

Definition

A transformation T is linear if for all u, v in the domain of T and all scalars c ∈ R

  • 1. T(u + v) = T(u) + T(v) and
  • 2. T(cu) = cT(u).

Fact

If T is a linear transformation, then for all vectors u, v in the domain of T and all scalars c, d:

T(0) = 0, and T(cu + dv) = cT(u) + dT(v).
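Any matrix map x ↦ Ax satisfies both properties; a numerical spot-check (an illustration of the fact, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))   # a random linear map R^3 -> R^2
u = rng.integers(-5, 5, size=3)
v = rng.integers(-5, 5, size=3)
c, d = 2, -3

# T(cu + dv) = cT(u) + dT(v), and T(0) = 0.
print(np.array_equal(A @ (c*u + d*v), c*(A @ u) + d*(A @ v)))  # True
print(np.array_equal(A @ np.zeros(3, dtype=int), np.zeros(2, dtype=int)))  # True
```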

SLIDE 16

§1.8 Introduction to Linear Transformations

Example: Let I = [1 1 1], B = [1 2 1], C = [1 1], D = [−1 1], E = [1 −2], and F = [1 1]. Describe the linear transformations defined by these matrices. The matrix I above is called the identity matrix.

SLIDE 17

§1.9 The Matrix of a Linear Transformation

Recall that for any n, we can define the following vectors in Rn:

e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), e3 = (0, 0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1).

SLIDE 18

§1.9 The Matrix of a Linear Transformation

Theorem

Let T : Rn → Rm be a linear transformation. Then there exists a unique matrix A such that T(x) = Ax for all x in Rn. In fact, A is the m × n matrix whose jth column is the vector T(ej) where ej is the jth column of the identity matrix In in Rn: A = [T(e1) . . . T(en)]. The matrix given above is called the standard matrix for the linear transformation T.
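The theorem gives a recipe: feed the standard basis vectors through T and stack the images as columns. A sketch (the sample map T below is my own, chosen to match the 2 × 3 matrix used earlier in these slides):

```python
import numpy as np

def T(x):
    # A sample linear map R^3 -> R^2 (an assumption for illustration).
    return np.array([x[0] - 2*x[1] + 3*x[2], x[0] + x[1] + 7*x[2]])

# Columns of the standard matrix are T(e1), T(e2), T(e3).
n = 3
A = np.column_stack([T(e) for e in np.eye(n, dtype=int)])
print(A)
# [[ 1 -2  3]
#  [ 1  1  7]]
```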

SLIDE 19

§1.9 The Matrix of a Linear Transformation

Example: Find the standard matrix of the linear transformation T : R2 → R2 which reflects vectors in the line y = −x. Example: Find the standard matrix of the transformation S : R3 → R3 which reflects every vector through the xy-plane, and then projects to the xz-plane.
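For the first example, reflection across the line y = −x sends (x, y) to (−y, −x), so the standard matrix can be read off from the images of e1 and e2 (a sketch, not from the slides):

```python
import numpy as np

def reflect(v):
    # Reflection across y = -x sends (x, y) to (-y, -x).
    x, y = v
    return np.array([-y, -x])

A = np.column_stack([reflect(e) for e in np.eye(2, dtype=int)])
print(A)
# [[ 0 -1]
#  [-1  0]]
```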

SLIDE 20

§1.9 The Matrix of a Linear Transformation

Definition

A transformation T : Rn → Rm is called onto if every vector b ∈ Rm is the image of at least one vector in Rn.

(Diagram: the range of T is a proper subset of the codomain Rm.)

T is not onto.

(Diagram: the range of S is all of the codomain Rm.)

S is onto.

SLIDE 21

§1.9 The Matrix of a Linear Transformation

Definition

A transformation T : Rn → Rm is called one-to-one if every vector b ∈ Rm is the image of at most one vector in Rn.

(Diagram: domain Rn, codomain Rm, and the range of T.)

T is not one-to-one.

(Diagram: domain Rn, codomain Rm, and the range of S.)

S is one-to-one.

SLIDE 22

§1.9 The Matrix of a Linear Transformation

Example: Let T : R4 → R3 be a linear transformation with standard matrix A = [2 1 8 1 2 1 −3]. Is T one-to-one? Is T onto?

SLIDE 23

§1.9 The Matrix of a Linear Transformation

Theorem

Let T : Rn → Rm be a linear transformation, with standard matrix A (i.e. T(x) = Ax for all vectors x ∈ Rn). Then the following are equivalent:

  • 1. T is onto,
  • 2. the columns of A span Rm,
  • 3. A has a pivot in every row.

Proof.

This is essentially Theorem 4 in §1.4.

SLIDE 24

§1.9 The Matrix of a Linear Transformation

Theorem

Let T : Rn → Rm be a linear transformation, with standard matrix A (i.e. T(x) = Ax for all vectors x ∈ Rn). Then the following are equivalent:

  • 1. T is one-to-one,
  • 2. T(x) = 0 has only the trivial solution x = 0,
  • 3. the columns of A are linearly independent,
  • 4. A has a pivot in every column.

Proof.

We prove 1 ⇒ 4 ⇒ 2 ⇒ 3 ⇒ 1.
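Both pivot criteria (a pivot in every row for onto, a pivot in every column for one-to-one) can be checked with a rank computation (a sketch; the matrix is the 2 × 3 example used earlier in these slides):

```python
import numpy as np

A = np.array([[1, -2, 3],
              [1,  1, 7]])
m, n = A.shape
r = np.linalg.matrix_rank(A)

print("onto:", r == m)        # True: a pivot in every row
print("one-to-one:", r == n)  # False: 3 columns but rank 2
```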

SLIDE 25

§1.10 Linear Models in Business, Science and Engineering

Difference Equations: Consider an influenza strain which has the following properties:

  • Each week, an uninfected person has a 1% chance of catching the disease.
  • Each week, an infected person has a 70% chance of recovering and a 30% chance of remaining sick.

Suppose that at week zero, 1000 people in a population of 1 000 000 are infected. Find a matrix equation to model this situation. Let

xk = (# of healthy at week k, # of sick at week k), so x0 = (999 000, 1000).
SLIDE 26

§1.10 Linear Models in Business, Science and Engineering

Let A = [0.99 0.7; 0.01 0.3]. Then for all k ≥ 1, xk = Axk−1:

x1 = Ax0 = (989 710, 10 290),
x2 = Ax1 = (987 016, 12 984),
x3 = Ax2 = (986 235, 13 765),
x4 = Ax3 = (986 008, 13 992).
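The iteration is a short loop (a sketch reproducing the slide's numbers, rounded to whole people):

```python
import numpy as np

# Transition matrix from the slide: columns are (healthy, sick) outcomes.
A = np.array([[0.99, 0.7],
              [0.01, 0.3]])
x = np.array([999_000.0, 1_000.0])   # x0 = (healthy, sick)

for k in range(1, 5):
    x = A @ x
    print(f"x{k} =", np.round(x).astype(int))
# The final line matches x4 = (986 008, 13 992) from the slide.
```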