Linear algebra and differential equations (Math 54): Lecture 4



SLIDE 1

Linear algebra and differential equations (Math 54): Lecture 4

Vivek Shende February 4, 2019

SLIDE 8

Hello and welcome to class!

Last time

We discussed the matrix-vector product and corresponding formulation of linear equations. We also introduced the notions of linear dependence and linear independence.

Today

We’ll see many equivalent conditions to the linear independence of the rows or columns of a matrix. Then we’ll study linear transformations and the matrices which represent them.

SLIDE 9

Review from last class

SLIDE 10

Matrices multiply vectors

An r × c matrix times a vector in Rc gives a vector in Rr:

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1c} \\
a_{21} & a_{22} & \cdots & a_{2c} \\
\vdots & \vdots & \ddots & \vdots \\
a_{r1} & a_{r2} & \cdots & a_{rc}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_c \end{pmatrix}
=
\begin{pmatrix}
a_{11}x_1 + \cdots + a_{1c}x_c \\
a_{21}x_1 + \cdots + a_{2c}x_c \\
\vdots \\
a_{r1}x_1 + \cdots + a_{rc}x_c
\end{pmatrix}
\]

and thereby defines the function A : Rc → Rr, x ↦ Ax.
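Computationally, the formula above says each output entry is the dot product of one row with x; a minimal sketch in plain Python (the `matvec` helper is mine, not the lecture's):

```python
def matvec(A, x):
    """Multiply an r x c matrix (a list of rows) by a length-c vector.

    Entry i of the result is a_i1*x_1 + ... + a_ic*x_c.
    """
    assert all(len(row) == len(x) for row in A)
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # r = 3 rows, c = 2 columns
print(matvec(A, [1, 1]))       # [3, 7, 11]
```

Note the result has one entry per row of A, matching the r on the right-hand side above.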

SLIDE 17

When does Ax = b have solutions for any b?

In terms of the row reduced matrix

When every row of A has a pivot — as we saw last time, Ax = b has a solution exactly when the augmented matrix [A|b] has no pivots in the last column. If there is already a pivot in every row of A, there can’t be a pivot in the final column.

SLIDE 22

When does Ax = b have solutions for any b?

In terms of the rows

When the rows are linearly independent. Recall this means that no nonzero linear combination of the rows of A is zero.

Indeed, if there were such an expression, then by row operations a row of the form [0 0 · · · 0 | 1] could be created in the augmented matrix for some choice of b.

SLIDE 25

When does Ax = b have solutions for any b?

In terms of the columns

When the columns of A span the entire space. Recall that solving Ax = b means expressing b as a linear combination of the columns of A.

SLIDE 32

When does Ax = b have solutions for any b?

In terms of the associated function

When the function determined by the matrix A is onto — it hits every point in Rr. Indeed, solving Ax = b means finding a point which maps to b, and if every point is hit by the map, then this can always be done.

SLIDE 38

When does Ax = b have solutions for any b?

If A has r rows and c columns, the following are equivalent

◮ Ax = b has solutions for any b
◮ The matrix A has a pivot in every row
◮ The rows of A are linearly independent
◮ The columns of A span all of Rr
◮ The function corresponding to A hits all of Rr.
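The "pivot in every row" test can be run mechanically: row-reduce and count pivots. A minimal sketch (the `pivot_rows` helper and the example matrices are my own, not from the lecture), using exact `Fraction` arithmetic to avoid rounding:

```python
from fractions import Fraction

def pivot_rows(A):
    """Row-reduce a copy of A and return the number of pivot rows."""
    M = [[Fraction(v) for v in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots = 0
    for col in range(cols):
        # find a row at or below position `pivots` with a nonzero entry here
        pr = next((r for r in range(pivots, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivots], M[pr] = M[pr], M[pivots]
        piv = M[pivots][col]
        for r in range(rows):
            if r != pivots and M[r][col] != 0:
                factor = M[r][col] / piv
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivots])]
        pivots += 1
    return pivots

A = [[1, 0, 2], [0, 1, 3]]      # pivot in every row: Ax = b solvable for all b
B = [[1, 2], [2, 4], [0, 1]]    # 2 pivots for 3 rows: some b is unreachable
print(pivot_rows(A) == len(A))  # True
print(pivot_rows(B) == len(B))  # False
```

By the equivalences above, `pivot_rows(A) == len(A)` also certifies that the rows are linearly independent and the columns span.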

SLIDE 42

The zero solution

Homogeneous equations always have the zero solution: Ax = 0 is always solved by x = 0. Inhomogeneous equations do not; an inhomogeneous equation may have no solutions at all.

SLIDE 48

When does Ax = 0 have only the zero solution?

In terms of the row reduced matrix

When every column of A has a pivot. As we saw last time, this means we get to introduce zero free parameters, so there is at most one solution. The zero solution is a solution, and there are no others.

SLIDE 53

When does Ax = 0 have only the zero solution?

In terms of the columns

When the columns are linearly independent. A solution to Ax = 0 is a way of writing 0 as a linear combination of the columns of A. If this equation has only the zero solution, that means the only way of doing this is to have all the coefficients be zero, which is the definition of linear independence of the columns of A.

SLIDE 55

When does Ax = 0 have only the zero solution?

In terms of the rows

When the rows span. I’ll let you think about this one. Hint: every column has a pivot.

SLIDE 64

When does Ax = 0 have only the zero solution?

In terms of the associated function

When the function determined by the matrix A is one-to-one — no two distinct points in Rc are mapped to the same point in Rr. Indeed, if two points x, y are sent to the same point, Ax = Ay, then we have A(x − y) = 0. So if zero is the only solution, then x − y = 0, or in other words, x = y. So the only way two points can be sent to the same point is if they were the same point to begin with.
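Read in the contrapositive, this says that dependent columns destroy one-to-oneness: a nonzero solution of A(x − y) = 0 yields two distinct points with the same image. A small sketch with a matrix I chose for illustration (its second column is twice its first):

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [2, 4]]   # dependent columns, so A is not one-to-one
x, y = [2, 0], [0, 1]  # distinct points...
print(matvec(A, x))    # [2, 4]
print(matvec(A, y))    # [2, 4]  ...with the same image
# Equivalently, x - y = [2, -1] is a nonzero solution of Av = 0.
```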

SLIDE 70

When does Ax = 0 have only the zero solution?

If A has r rows and c columns, the following are equivalent

◮ Ax = 0 has only the zero solution.
◮ The matrix A has a pivot in every column
◮ The columns of A are linearly independent
◮ The rows of A span all of Rc
◮ The function corresponding to A carries distinct points to distinct points.

SLIDE 75

A has r rows and c columns.

Ax = 0 implies x = 0                 Ax = b has solutions for any b
pivot in every column                pivot in every row
columns linearly independent         rows linearly independent
rows span all of Rc                  columns span all of Rr
distinct points to distinct points   hits all of Rr

If A is square, i.e. r = c, there's a pivot in every row if and only if there's a pivot in every column, so these are all equivalent.

SLIDE 79

Span and linear independence

A collection of vectors v1, · · · , vk ∈ Rn spans if every vector in Rn can be written as a linear combination of the vi. A collection of vectors v1, · · · , vk ∈ Rn is linearly independent if, whenever a1v1 + · · · + akvk = 0, then all the ai are zero. A collection consisting of a single vector is linearly independent so long as it’s not the zero vector, and two vectors are linearly independent as long as one isn’t a multiple of the other.

SLIDE 80

Pictures of linear transformations

Shear

SLIDE 81

Pictures of linear transformations

Reflection

SLIDE 82

Pictures of linear transformations

Rotation


SLIDE 93

Linear transformations

Geometrically, linear transformations take lines to lines. Our definitions will also be such that they preserve the origin. These two properties characterize linear transformations (assuming you know what a line is), but we will prefer the following algebraic definition.

SLIDE 94

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w)
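The defining identity can be spot-checked numerically for a candidate map. A sketch (the shear map T(x, y) = (x + 2y, y) and the sample scalars are my own choices; a passing check illustrates the identity rather than proving linearity):

```python
def shear(v):
    """A sample linear map T : R^2 -> R^2, T(x, y) = (x + 2y, y)."""
    x, y = v
    return (x + 2 * y, y)

def combo(a, v, b, w):
    """Form the linear combination av + bw componentwise."""
    return tuple(a * vi + b * wi for vi, wi in zip(v, w))

a, b = 3, -1
v, w = (1, 2), (4, 0)
lhs = shear(combo(a, v, b, w))         # T(av + bw)
rhs = combo(a, shear(v), b, shear(w))  # aT(v) + bT(w)
print(lhs == rhs)  # True: the defining identity holds for this sample
```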

SLIDE 103

Reminder about functions

Given two sets X and Y, a function f : X → Y gives some element f(x) of Y for every element x of X. We say that the domain of the function is X, and that the codomain is Y. The range is the subset of Y consisting of elements of the form f(x) for some x in X. The function is said to be one-to-one if no two elements of X map to the same element of Y, and is said to be onto if every element of Y is hit, i.e., the range and codomain are equal.
SLIDE 104

A has r rows and c columns; A : Rc → Rr

Columns below have equivalent conditions (except in parentheses)

Ax = 0 implies x = 0                 Ax = b has solutions for any b
pivot in every column                pivot in every row
columns linearly independent         rows linearly independent
rows span all of Rc                  columns span all of Rr
one-to-one                           onto
(can only happen if c ≤ r)           (can only happen if r ≤ c)

If A is square, i.e. r = c, there's a pivot in every row if and only if there's a pivot in every column, so these are all equivalent.

SLIDE 110

A (nonlinear) function example

We could define a function t : this room → R, sending each point to the temperature there. The domain is this room, the codomain is R, and the range is some subset of the interval (60◦F, 110◦F). The function is not onto or one-to-one.
slide-111
SLIDE 111

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w)

slide-112
SLIDE 112

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w) Note that Rc is the domain and Rr is the codomain.

slide-113
SLIDE 113

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w). Note that Rc is the domain and Rr is the codomain. The range, and in particular whether the function is onto, as well as whether it is one-to-one, depend on the details of T.

SLIDE 116

Example

The matrix

\[
\begin{pmatrix} 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{pmatrix}
\]

determines a linear transformation by the formula

\[
\begin{pmatrix} x \\ y \end{pmatrix} \mapsto
\begin{pmatrix} 1 & 2 \\ 1 & 3 \\ 1 & 4 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} =
\begin{pmatrix} x + 2y \\ x + 3y \\ x + 4y \end{pmatrix}
\]

It has domain R2 and codomain R3. Its range is

\[
\operatorname{Span}\left\{
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\begin{pmatrix} 2 \\ 3 \\ 4 \end{pmatrix}
\right\}
\]

The columns are linearly independent, so the linear transformation is one-to-one. The columns don't span, so it's not onto.

SLIDE 117

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w)

Example

The zero function T(x) = 0 for all x is linear, since T(av + bw) = 0 = 0 + 0 = a0 + b0 = aT(v) + bT(w)

SLIDE 118

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w)

Example

If A is a matrix with r rows and c columns, then we saw last time that the following function is linear. A : Rc → Rr x → Ax

SLIDE 119

Linear transformations

Definition

A linear transformation is a function T : Rc → Rr such that T(av + bw) = aT(v) + bT(w)

Nonexample

The function f(x) = x² is not linear. Indeed, f(1 + 1) = (1 + 1)² = 4 ≠ 2 = 1² + 1² = f(1) + f(1).

SLIDE 130

Try it yourself

Is it a linear transformation?

f(x) = 0 : yes
f(x) = 2 : no
f(x, y) = (x + 2y, y + 3x) : yes
f(x, y) = xy : no
f(x, y, z) = x + y + z : yes
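For instance, f(x, y) = xy fails the additivity half of the definition, which is one way to see the "no"; a quick numeric check with my own sample point:

```python
def f(x, y):
    """f(x, y) = xy, which is not linear."""
    return x * y

# Additivity fails: f((1,1) + (1,1)) differs from f(1,1) + f(1,1).
print(f(1 + 1, 1 + 1))    # 4
print(f(1, 1) + f(1, 1))  # 2
```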

SLIDE 135

Rotation is a linear transformation

Because the sum of the rotated vectors is the rotation of the sum of the vectors, i.e.,

Rotate(a + b) = Rotate(a) + Rotate(b)

Geometrically, rotation preserves the rule for adding vectors.

SLIDE 140

Reflection is a linear transformation

Because the sum of the reflected vectors is the reflection of the sum of the vectors, i.e.,

Reflect(a + b) = Reflect(a) + Reflect(b)

Geometrically, reflection preserves the rule for adding vectors.

SLIDE 145

Shear is a linear transformation

Because the sum of the sheared vectors is the shear of the sum of the vectors, i.e., Shear(a + b) = Shear(a) + Shear(b) Geometrically, shearing preserves the rule for adding vectors.

SLIDE 146

Rescaling is a linear transformation

Because the sum of the scaled vectors is the scaling of the sum of the vectors, i.e., Scale(a + b) = Scale(a) + Scale(b) Geometrically, scaling preserves the rule for adding vectors.

SLIDE 153

The matrix of rotation

Consider rotation of the plane by the angle θ. It takes (1, 0) to (cos θ, sin θ) and (0, 1) to (− sin θ, cos θ). A matrix which does the same is

\[
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\]

Are these the same linear transformation?
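As a sanity check on the matrix, one can apply it numerically; a sketch (the `rotate` helper is mine) where a quarter turn sends (1, 0) to (0, 1), up to floating-point error:

```python
import math

def rotate(theta, v):
    """Apply the 2x2 rotation matrix for angle theta (radians) to v."""
    x, y = v
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

# A quarter turn: (1, 0) goes to (0, 1), up to rounding.
x, y = rotate(math.pi / 2, (1, 0))
print(round(x, 12), round(y, 12))
```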
slide-154
SLIDE 154

Classifying linear transformations

slide-155
SLIDE 155

Classifying linear transformations

Warmup

slide-156
SLIDE 156

Classifying linear transformations

Warmup

Describe all linear transformations from R to R.

slide-157
SLIDE 157

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation.

slide-158
SLIDE 158

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R).

slide-159
SLIDE 159

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct

slide-160
SLIDE 160

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear,

slide-161
SLIDE 161

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw)

slide-162
SLIDE 162

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t

slide-163
SLIDE 163

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt

slide-164
SLIDE 164

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt = c(vt) + d(wt)

slide-165
SLIDE 165

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt = c(vt) + d(wt) = cT(v) + dT(w)

slide-166
SLIDE 166

Classifying linear transformations

Warmup

Describe all linear transformations from R to R. Suppose T : R → R is a linear transformation. Then T(1) has some value t (in R). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt = c(vt) + d(wt) = cT(v) + dT(w). Thus, the linear functions from R → R are exactly those functions of the form T(c) = ct for some t ∈ R.
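As a sanity check, this classification can be tested numerically. The map T below is a hypothetical example with t = 3 (a sketch, not from the slides):

```python
# A hypothetical linear map T : R -> R, here "multiply by 3".
def T(c):
    return 3.0 * c

# The classification says T must be multiplication by t = T(1).
t = T(1)
for c in [-2.0, 0.0, 0.5, 7.0]:
    assert T(c) == c * t

# And T is indeed linear: T(cv + dw) = cT(v) + dT(w).
c, d, v, w = 2.0, -1.0, 4.0, 3.0
assert T(c * v + d * w) == c * T(v) + d * T(w)
```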
slide-167
SLIDE 167

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn.

slide-168
SLIDE 168

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn. Suppose T : R → Rn is a linear transformation.

slide-169
SLIDE 169

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn. Suppose T : R → Rn is a linear transformation. Then T(1) has some value t (in Rn).

slide-170
SLIDE 170

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn. Suppose T : R → Rn is a linear transformation. Then T(1) has some value t (in Rn). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct

slide-171
SLIDE 171

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn. Suppose T : R → Rn is a linear transformation. Then T(1) has some value t (in Rn). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt = c(vt) + d(wt) = cT(v) + dT(w)

slide-172
SLIDE 172

Classifying linear transformations

Warmup, II

Describe all linear transformations from R to Rn. Suppose T : R → Rn is a linear transformation. Then T(1) has some value t (in Rn). If you want to know what T(c) is for any other c, you can write

T(c) = T(c · 1) = cT(1) = ct. Moreover, the function defined by T(c) = ct is linear, since T(cv + dw) = (cv + dw)t = cvt + dwt = c(vt) + d(wt) = cT(v) + dT(w). Thus, the linear functions from R → Rn are exactly those functions of the form T(c) = ct for some t ∈ Rn.
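The same check works one dimension up. The vector t below is a hypothetical value of T(1), with n = 3 (a sketch using NumPy, not from the slides):

```python
import numpy as np

# A hypothetical linear map T : R -> R^3, determined by the vector t = T(1).
t = np.array([2.0, -1.0, 5.0])

def T(c):
    return c * t  # T(c) = c t

# T(1) recovers t, and T is linear: T(cv + dw) = cT(v) + dT(w).
assert np.allclose(T(1.0), t)
c, d, v, w = 3.0, 2.0, 1.0, 4.0
assert np.allclose(T(c * v + d * w), c * T(v) + d * T(w))
```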

slide-173
SLIDE 173

The matrix of a linear transformation

For linear maps T : Rm → Rn,

slide-174
SLIDE 174

The matrix of a linear transformation

For linear maps T : Rm → Rn, we can’t do the same thing, since it’s no longer true that every vector in Rm is a scalar multiple of some given vector.

slide-175
SLIDE 175

The matrix of a linear transformation

For linear maps T : Rm → Rn, we can’t do the same thing, since it’s no longer true that every vector in Rm is a scalar multiple of some given vector. But, every vector is a linear combination of the ei.

slide-176
SLIDE 176

The matrix of a linear transformation

For linear maps T : Rm → Rn, we can’t do the same thing, since it’s no longer true that every vector in Rm is a scalar multiple of some given vector. But, every vector is a linear combination of the ei.

(v1, v2, . . . , vm) = v1 (1, 0, . . . , 0) + v2 (0, 1, . . . , 0) + · · · + vm (0, . . . , 0, 1) = v1 e1 + · · · + vm em

slide-177
SLIDE 177

The matrix of a linear transformation

T(v1, v2, . . . , vm) = v1 T(1, 0, . . . , 0) + · · · + vm T(0, . . . , 0, 1)
                      = v1 T(e1) + · · · + vm T(em)
                      = [ T(e1)  T(e2)  · · ·  T(em) ] (v1, v2, . . . , vm)

slide-178
SLIDE 178

The matrix of a linear transformation

Thus the linear transformation T : Rm → Rn

slide-179
SLIDE 179

The matrix of a linear transformation

Thus the linear transformation T : Rm → Rn is the linear transformation associated to the matrix

[ T(e1)  T(e2)  · · ·  T(em) ]

whose columns are the T(ei).
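This recipe is easy to automate. The sketch below builds the matrix of a hypothetical linear map T : R^2 → R^2 from its values on e1 and e2 (NumPy assumed; not part of the slides):

```python
import numpy as np

# A hypothetical linear map T : R^2 -> R^2, written without its matrix.
def T(v):
    x, y = v
    return np.array([x + y, x - y])

# Its matrix has columns T(e1), T(e2); the rows of the identity matrix
# are exactly the standard basis vectors.
A = np.column_stack([T(e) for e in np.eye(2)])

# The matrix reproduces T on an arbitrary vector.
v = np.array([1.5, -2.0])
assert np.allclose(A @ v, T(v))
```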
slide-180
SLIDE 180

Example

The matrix of the linear transformation f(x, y) = (3x + 5y, 2x + 4y, x + 2y)

slide-181
SLIDE 181

Example

The matrix of the linear transformation f(x, y) = (3x + 5y, 2x + 4y, x + 2y). We are supposed to evaluate f(e1) and f(e2) and stick them in as the columns of the matrix.

slide-182
SLIDE 182

Example

The matrix of the linear transformation f(x, y) = (3x + 5y, 2x + 4y, x + 2y). We are supposed to evaluate f(e1) and f(e2) and stick them in as the columns of the matrix. We have f(1, 0) = (3, 2, 1) and f(0, 1) = (5, 4, 2),

slide-183
SLIDE 183

Example

The matrix of the linear transformation f(x, y) = (3x + 5y, 2x + 4y, x + 2y). We are supposed to evaluate f(e1) and f(e2) and stick them in as the columns of the matrix. We have f(1, 0) = (3, 2, 1) and f(0, 1) = (5, 4, 2), so the matrix is

[ 3  5 ]
[ 2  4 ]
[ 1  2 ]

slide-184
SLIDE 184

Example

The matrix of the linear transformation f(x, y) = (3x + 5y, 2x + 4y, x + 2y). We are supposed to evaluate f(e1) and f(e2) and stick them in as the columns of the matrix. We have f(1, 0) = (3, 2, 1) and f(0, 1) = (5, 4, 2), so the matrix is

[ 3  5 ]
[ 2  4 ]
[ 1  2 ]

and indeed

[ 3  5 ]            [ 3x + 5y ]
[ 2  4 ] (x, y)  =  [ 2x + 4y ]
[ 1  2 ]            [ x + 2y  ]
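The computation above can be verified directly (a NumPy sketch, not part of the slides):

```python
import numpy as np

# Matrix built from f(e1) = (3, 2, 1) and f(e2) = (5, 4, 2) as columns.
A = np.array([[3, 5],
              [2, 4],
              [1, 2]])

# The original map f(x, y) = (3x + 5y, 2x + 4y, x + 2y).
def f(x, y):
    return np.array([3*x + 5*y, 2*x + 4*y, x + 2*y])

# The matrix-vector product reproduces f on several test inputs.
for x, y in [(1, 0), (0, 1), (2, -3), (0.5, 4)]:
    assert np.allclose(A @ np.array([x, y]), f(x, y))
```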

slide-185
SLIDE 185

The matrix of rotation

Consider rotation of the plane by the angle θ. It takes (1, 0) to (cos θ, sin θ) and (0, 1) to (− sin θ, cos θ). The matrix which does the same is

[ cos θ  − sin θ ]
[ sin θ    cos θ ]

slide-186
SLIDE 186

Try it yourself!

Find the matrix which performs the reflection in the x axis in 2 dimensions.

slide-187
SLIDE 187

Try it yourself!

Find the matrix which performs the reflection in the x axis in 2 dimensions.

[ 1   0 ]
[ 0  −1 ]
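To see where this matrix comes from: the reflection fixes e1 = (1, 0) and sends e2 = (0, 1) to (0, −1), so those are its columns. A quick NumPy check (not part of the slides):

```python
import numpy as np

# Reflection across the x axis: columns are the images of e1 and e2.
F = np.array([[1, 0],
              [0, -1]])

# The y-coordinate flips sign; the x-coordinate is unchanged.
assert np.allclose(F @ np.array([3.0, 4.0]), [3.0, -4.0])

# Reflecting twice is the identity.
assert np.allclose(F @ F, np.eye(2))
```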

slide-188
SLIDE 188

Composing linear transformations.

If S : Ra → Rb and T : Rb → Rc are linear transformations, then so is their composition T ◦ S : Ra → Rc.

slide-189
SLIDE 189

Composing linear transformations.

If S : Ra → Rb and T : Rb → Rc are linear transformations, then so is their composition T ◦ S : Ra → Rc. Indeed, (T ◦ S)(cv + dw) = T(S(cv + dw)) = T(cS(v) + dS(w)) = cT(S(v)) + dT(S(w)) = c(T ◦ S)(v) + d(T ◦ S)(w)

slide-190
SLIDE 190

Composing linear transformations.

If S : Ra → Rb and T : Rb → Rc are linear transformations, then so is their composition T ◦ S : Ra → Rc. Indeed, (T ◦ S)(cv + dw) = T(S(cv + dw)) = T(cS(v) + dS(w)) = cT(S(v)) + dT(S(w)) = c(T ◦ S)(v) + d(T ◦ S)(w). The transformations S, T, and T ◦ S are each determined by a matrix. What’s the relation between these matrices?
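One way to explore the question: build the matrix of T ◦ S column by column, as above, and compare it with products of the individual matrices. The maps S and T below are hypothetical examples (a NumPy sketch, not from the slides):

```python
import numpy as np

# Hypothetical linear maps given by matrices: S : R^2 -> R^3, T : R^3 -> R^2.
A = np.array([[1, 2],
              [0, 1],
              [3, 0]])    # matrix of S
B = np.array([[1, 0, -1],
              [2, 1, 0]]) # matrix of T

def S(v): return A @ v
def T(v): return B @ v

# Matrix of T ∘ S, built from the columns (T ∘ S)(e1), (T ∘ S)(e2).
C = np.column_stack([T(S(e)) for e in np.eye(2)])

# It agrees with the matrix product B A.
assert np.allclose(C, B @ A)
```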