Chapter 5: Least Squares / Inconsistent Systems (PowerPoint PPT Presentation)



SLIDE 1

Chapter 5

Least Squares


SLIDE 2

Inconsistent Systems

In regression (and many other applications) we have a system of equations we'd like to solve:

  Xβ = y

However, this system does not have an exact solution (i.e., all of our data points don't lie exactly on a flat surface). The best we can do is consider an equation with error, y = Xβ̂ + ε, and try to minimize that error by solving Xβ̂ = ŷ.

  • ŷ is the vector of predicted values.
  • β̂ is the vector of parameter estimates.
  • X is the design matrix.
  • ε = y − ŷ is the vector of residuals.

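As a concrete illustration of an inconsistent system (not from the slides; the data values are invented), here is a NumPy sketch of three equations in two unknowns with no exact solution, where the least-squares fit leaves a nonzero residual vector ε = y − ŷ:

```python
import numpy as np

# Hypothetical overdetermined system X b = y: 3 equations, 2 unknowns.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 1.0, 3.0])  # inconsistent: no b satisfies all three rows

# Least-squares solution minimizes ||y - X b||^2.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta_hat   # predicted values
eps = y - y_hat        # residuals; nonzero because the system is inconsistent
```

Here `beta_hat` works out to (4/3, 4/3), and the residual vector is (−1/3, −1/3, 1/3): no choice of β makes it exactly zero.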

SLIDES 3–8

The Normal Equations

Since we can't solve Xβ = y, we want to solve Xβ̂ = ŷ, where εᵀε = (y − ŷ)ᵀ(y − ŷ) is minimized. (Remember, εᵀε is just the sum of squared errors.) Then β̂ is called a least-squares solution.

The set of least-squares solutions is precisely the set of solutions to the Normal Equations, XᵀXβ̂ = Xᵀy. There is a unique solution if and only if X has full rank, i.e., the variables (columns of X) are linearly independent. # NoPerfectMulticollinearity
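Both claims, that a least-squares solution satisfies the Normal Equations and that perfect multicollinearity destroys uniqueness, can be checked numerically. This sketch uses made-up random data; `np.linalg.lstsq` is NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical full-rank design: 20 observations, 3 predictors.
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Any least-squares solution satisfies the Normal Equations X'X b = X'y.
assert np.allclose(X.T @ X @ beta_hat, X.T @ y)

# Duplicating a column creates perfect multicollinearity: 4 columns but
# rank 3, so X'X is singular and the solution is no longer unique.
X_collinear = np.column_stack([X, X[:, 0]])
assert np.linalg.matrix_rank(X_collinear) < X_collinear.shape[1]
```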

SLIDES 9–10

The Normal Equations

  XᵀXβ̂ = Xᵀy

When X has full rank, XᵀX is invertible, so we can multiply both sides by the inverse matrix:

  β̂ = (XᵀX)⁻¹Xᵀy

And then by definition, our predicted values are

  ŷ = Xβ̂ = X(XᵀX)⁻¹Xᵀy.
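A minimal check of the closed-form solution on hypothetical random data: compute β̂ = (XᵀX)⁻¹Xᵀy directly and compare with NumPy's solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design matrix and response; X has full column rank.
X = rng.normal(size=(10, 2))
y = rng.normal(size=10)

# Closed-form least-squares estimate: beta_hat = (X'X)^{-1} X'y.
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Predicted values: y_hat = X beta_hat = X (X'X)^{-1} X'y.
y_hat = X @ beta_hat

# Agrees with NumPy's least-squares solver.
assert np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0])
```

In practice, prefer `np.linalg.lstsq` (or `np.linalg.solve` on the Normal Equations) over forming the explicit inverse, which is slower and less numerically stable; the formula above is best read as a derivation, not an implementation.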

SLIDE 11

The Intercept

Remember that we generally have an intercept built into our model:

  β0 + β1x1 + · · · + βpxp = y

This means our design matrix, X, has a built-in column of ones:

            x1   x2   …  xp
  obs 1   ⎡ 1   x11  x12  …  x1p ⎤   ⎡ β0 ⎤   ⎡ y1 ⎤
  obs 2   ⎢ 1   x21  x22  …  x2p ⎥   ⎢ β1 ⎥ = ⎢ y2 ⎥
    ⋮     ⎢ ⋮    ⋮    ⋮        ⋮  ⎥   ⎢ ⋮  ⎥   ⎢ ⋮  ⎥
  obs n   ⎣ 1   xn1  xn2  …  xnp ⎦   ⎣ βp ⎦   ⎣ yn ⎦
                     X                  β        y
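Building the design matrix with its column of ones can be sketched as follows (the data values here are invented for illustration):

```python
import numpy as np

# Hypothetical raw data: n = 5 observations of p = 2 variables.
x = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0],
              [5.0, 2.5]])
y = np.array([2.0, 3.1, 4.2, 5.9, 7.1])

# Prepend the built-in column of ones so beta_0 acts as the intercept.
X = np.column_stack([np.ones(len(x)), x])

# beta_hat = [beta_0, beta_1, beta_2]: intercept first, then one slope per variable.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slopes = beta_hat[0], beta_hat[1:]
```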