Biostatistics 602 - Statistical Inference
Lecture 10: Maximum Likelihood Estimator
Hyun Min Kang
February 12th, 2013

Last Lecture

1. What is a point estimator, and a point estimate?
2. What is a method of moments estimator?
3. What are the advantages and disadvantages of the method of moments estimator?
4. What is a maximum likelihood estimator (MLE)?
5. How can you find an MLE?


Recap - Method of Moment Estimator

  • Point estimation: estimate θ or τ(θ).
  • Method of moments: equate the sample moments to the corresponding population moments and solve for the parameters:

m1 = (1/n) ∑ Xi = EX = µ1
m2 = (1/n) ∑ Xi² = EX² = µ2
⋮
mk = (1/n) ∑ Xiᵏ = EXᵏ = µk

Recap - Example of Method of Moment Estimator

X1, · · · , Xn i.i.d. ∼ N(µ, σ²)

µ̂ = X̄
µ̂² + σ̂² = EX², estimated by (1/n) ∑ᵢ₌₁ⁿ Xi²
σ̂² = ∑ (Xi − X̄)²/n

  • Easy to implement
  • Easy to understand
  • The estimators can be improved; use them as initial values to obtain other estimators
  • There is no guarantee that the estimator will fall into the range of the valid parameter space.
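The moment-matching recipe above can be checked numerically. A minimal sketch (NumPy, simulated data with known truth; not code from the lecture):

```python
import numpy as np

# Simulate an i.i.d. N(mu, sigma^2) sample with known truth for checking
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)

# Sample moments
m1 = x.mean()           # matches EX = mu
m2 = np.mean(x**2)      # matches EX^2 = mu^2 + sigma^2

# Solve the moment equations for the parameters
mu_hat = m1
sigma2_hat = m2 - m1**2     # algebraically identical to np.mean((x - x.mean())**2)
```

With the true values µ = 2 and σ² = 9, both estimates land close to the truth for a sample this large.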


Recap - Likelihood Function

Definition
X1, · · · , Xn i.i.d. ∼ fX(x|θ). The joint distribution of X = (X1, · · · , Xn) is

fX(x|θ) = ∏ᵢ₌₁ⁿ fX(xi|θ)

Given that X = x is observed, the function of θ defined by L(θ|x) = f(x|θ) is called the likelihood function.


Recap - Example Likelihood Function

  • X1, X2, X3, X4 i.i.d. ∼ Bernoulli(p), 0 < p < 1.
  • x = (1, 1, 1, 1)ᵀ
  • Intuitively, it is more likely that p is large rather than small.
  • L(p|x) = f(x|p) = ∏ᵢ₌₁⁴ p^xi (1 − p)^(1−xi) = p⁴.
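The intuition can be confirmed by evaluating the likelihood over a grid of p values; a small sketch (NumPy; the grid resolution is an arbitrary choice):

```python
import numpy as np

x = np.array([1, 1, 1, 1])  # observed data

def likelihood(p, x):
    # Bernoulli likelihood: prod_i p^{x_i} (1 - p)^{1 - x_i}
    return np.prod(p**x * (1 - p)**(1 - x))

# For x = (1,1,1,1) this reduces to p^4, which is strictly increasing on (0, 1)
grid = np.linspace(0.01, 0.99, 99)
values = np.array([likelihood(p, x) for p in grid])
```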

How do we find MLE?

If the likelihood function is differentiable with respect to θ:
1. Find candidates that make the first-order derivative zero.
2. Check the second-order derivative to confirm a local maximum.
   • For a one-dimensional parameter, a negative second-order derivative implies a local maximum.
   • For a two-dimensional parameter, suppose L(θ1, θ2) is the likelihood function. Then we need to show
     (a) ∂²L(θ1, θ2)/∂θ1² < 0 or ∂²L(θ1, θ2)/∂θ2² < 0, and
     (b) the determinant of the matrix of second-order derivatives is positive.
3. Check boundary points to see whether a boundary gives the global maximum.

If the likelihood function is NOT differentiable with respect to θ:
  • Use numerical methods, or
  • Perform direct maximization, using inequalities or properties of the function.
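The "numerical methods" route above can be as simple as evaluating the log-likelihood over a grid. A sketch using N(µ, 1), where the analytic answer (the sample mean) is known so the numerical result can be verified; the grid range and resolution are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=1.5, size=200)

def loglik(mu):
    # log-likelihood of N(mu, 1), dropping additive constants
    return -0.5 * np.sum((x - mu)**2)

# Crude numerical maximization over a grid (a stand-in for a real optimizer)
grid = np.linspace(-5.0, 5.0, 20_001)
mu_num = grid[np.argmax([loglik(m) for m in grid])]
```

The grid maximizer agrees with the sample mean up to the grid spacing.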


Example of MLE : Uniform Distribution

Problem
X1, · · · , Xn i.i.d. ∼ Uniform(0, θ), where Xi ∈ [0, θ] and θ > 0. Find the MLE of θ.

Solution
L(θ|x) = ∏ᵢ₌₁ⁿ (1/θ) I(0 ≤ xi ≤ θ)
       = (1/θⁿ) ∏ᵢ₌₁ⁿ I(0 ≤ xi ≤ θ)
       = (1/θⁿ) I(0 ≤ x1 ≤ θ ∧ · · · ∧ 0 ≤ xn ≤ θ)
       = (1/θⁿ) I(x(n) ≤ θ) I(x(1) ≥ 0)

We need to maximize 1/θⁿ subject to the constraint 0 ≤ x(n) ≤ θ. Because 1/θⁿ decreases in θ, the MLE is θ̂(X) = X(n).
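This is a case where the maximum sits at a boundary of the constraint rather than at a zero of the derivative. A sketch (NumPy, simulated data; a small n keeps θ⁻ⁿ representable in floating point):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 5.0
x = rng.uniform(0.0, theta_true, size=10)

def likelihood(theta, x):
    # L(theta|x) = theta^{-n} * I(x_(n) <= theta), for theta > 0 and x_(1) >= 0
    return theta ** (-len(x)) * float(x.max() <= theta)

theta_mle = x.max()   # the sample maximum X_(n)

# Any theta below x.max() has zero likelihood; any theta above has a smaller one
grid = np.linspace(theta_mle, theta_mle + 2.0, 50)
```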


Example of MLE : Normal Distribution

Problem
Suppose we have n pairs of data (X1, Y1), · · · , (Xn, Yn), where Xi is generated from an unknown distribution and Yi is generated conditionally on Xi:

Yi|Xi ∼ N(α + βXi, σ²)

Find the MLE of (α, β, σ²).

Solution
The joint distribution of (X1, Y1), · · · , (Xn, Yn) is

fXY(x, y) = fX(x) ∏ᵢ₌₁ⁿ fY(yi|xi) = fX(x) ∏ᵢ₌₁ⁿ (1/√(2πσ²)) exp[−(yi − α − βxi)²/(2σ²)]


Solution : Normal Distribution (cont’d)

The likelihood function is

L(α, β, σ²|x, y) = fX(x) (2πσ²)^(−n/2) exp[−∑ᵢ₌₁ⁿ (yi − α − βxi)²/(2σ²)]

The log-likelihood function can be simplified as

l(α, β, σ²) = C − (n/2) log(2πσ²) − ∑ᵢ₌₁ⁿ (yi − α − βxi)²/(2σ²)

∂l/∂α = 2∑ᵢ₌₁ⁿ (yi − α − βxi)/(2σ²) = (nȳ − nα − nβx̄)/σ² = 0

α̂ = ȳ − β̂x̄

slide-38
SLIDE 38

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

Solution : Normal Distribution (cont’d)

The likelihood function is L(α, β, σ2|x, y) = fX(x)(2πσ2)−n/2 exp [ − ∑n

i=1(yi − α − βxi)2

2σ2 ] The log-likelhood function can be simplied as l C n log

n i

yi xi l

n i

yi xi ny n n x y x

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 10 / 20

slide-39
SLIDE 39

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

Solution : Normal Distribution (cont’d)

The likelihood function is L(α, β, σ2|x, y) = fX(x)(2πσ2)−n/2 exp [ − ∑n

i=1(yi − α − βxi)2

2σ2 ] The log-likelhood function can be simplied as l(α, β, σ2) = C − n 2 log(2πσ2) − ∑n

i=1(yi − α − βxi)2

2σ2 l

n i

yi xi ny n n x y x

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 10 / 20

slide-40
SLIDE 40

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

Solution : Normal Distribution (cont’d)

The likelihood function is L(α, β, σ2|x, y) = fX(x)(2πσ2)−n/2 exp [ − ∑n

i=1(yi − α − βxi)2

2σ2 ] The log-likelhood function can be simplied as l(α, β, σ2) = C − n 2 log(2πσ2) − ∑n

i=1(yi − α − βxi)2

2σ2 ∂l ∂α = 2 ∑n

i=1(yi − α − βxi)

2σ2 = ny − nα − nβx σ2 = 0 y x

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 10 / 20

slide-41
SLIDE 41

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

Solution : Normal Distribution (cont’d)

The likelihood function is L(α, β, σ2|x, y) = fX(x)(2πσ2)−n/2 exp [ − ∑n

i=1(yi − α − βxi)2

2σ2 ] The log-likelhood function can be simplied as l(α, β, σ2) = C − n 2 log(2πσ2) − ∑n

i=1(yi − α − βxi)2

2σ2 ∂l ∂α = 2 ∑n

i=1(yi − α − βxi)

2σ2 = ny − nα − nβx σ2 = 0 ˆ α = y − ˆ βx

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 10 / 20

slide-42
SLIDE 42

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

Solution : Normal Distribution (cont’d)

∂l/∂β = 2∑ᵢ₌₁ⁿ (yi − α − βxi)xi/(2σ²) = (∑ᵢ₌₁ⁿ xiyi − nαx̄ − β∑ᵢ₌₁ⁿ xi²)/σ² = 0

Substituting α = ȳ − βx̄:

∑ᵢ₌₁ⁿ xiyi − nx̄(ȳ − βx̄) − β∑ᵢ₌₁ⁿ xi² = 0
β̂ = (∑ᵢ₌₁ⁿ xiyi − nx̄ȳ)/(∑ᵢ₌₁ⁿ xi² − nx̄²)

∂l/∂σ² = −n/(2σ²) + ∑ᵢ₌₁ⁿ (yi − α − βxi)²/(2(σ²)²) = 0

σ̂² = (1/n) ∑ᵢ₌₁ⁿ (yi − α̂ − β̂xi)²
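These closed-form estimators can be cross-checked against a standard least-squares routine, since the MLE of (α, β) under this model coincides with the least-squares fit. A sketch (NumPy, simulated data; the true values α = 1, β = 2, σ = 0.5 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # alpha=1, beta=2

# Plug into the closed-form MLE expressions
xbar, ybar = x.mean(), y.mean()
beta_hat = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x**2) - n * xbar**2)
alpha_hat = ybar - beta_hat * xbar
sigma2_hat = np.mean((y - alpha_hat - beta_hat * x) ** 2)

# np.polyfit returns (slope, intercept) for deg=1; it should agree with the MLE
slope, intercept = np.polyfit(x, y, deg=1)
```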


Putting Things Together

Therefore, the MLE of (α, β, σ²) is

α̂ = ȳ − β̂x̄
β̂ = (∑ᵢ₌₁ⁿ xiyi − nx̄ȳ)/(∑ᵢ₌₁ⁿ xi² − nx̄²)
σ̂² = (1/n) ∑ᵢ₌₁ⁿ (yi − α̂ − β̂xi)²


Example : Normal Distribution with Known Variance

Problem
X1, · · · , Xn i.i.d. ∼ N(µ, 1) where µ ≥ 0. Find the MLE of µ.

Solution
L(µ|x) = ∏ᵢ₌₁ⁿ (1/√(2π)) exp[−(xi − µ)²/2] = (2π)^(−n/2) exp[−∑ᵢ₌₁ⁿ (xi − µ)²/2]

l(µ|x) = log L(µ|x) = C − ∑ᵢ₌₁ⁿ (xi − µ)²/2

∂l/∂µ = 2∑ᵢ₌₁ⁿ (xi − µ)/2 = 0, with ∂²l/∂µ² = −n < 0

µ̂ = ∑ᵢ₌₁ⁿ xi/n = x̄

Are we done?


The MLE parameter must be within the parameter space

We need to check whether µ̂ is within the parameter space [0, ∞).

  • If x̄ ≥ 0, µ̂ = x̄ falls into the parameter space.
  • If x̄ < 0, µ̂ = x̄ does NOT fall into the parameter space. When x̄ < 0,

∂l/∂µ = ∑ᵢ₌₁ⁿ (xi − µ) = n(x̄ − µ) < 0 for µ ≥ 0.

Therefore, l(µ|x) is a decreasing function of µ on [0, ∞), so µ̂ = 0 when x̄ < 0. Combining the two cases, the MLE is µ̂(X) = max(X̄, 0).
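The boundary case can be illustrated directly: when the sample mean is negative, the log-likelihood is decreasing on [0, ∞), so the constrained MLE sits at 0. A sketch (NumPy; the sample values and grid are arbitrary):

```python
import numpy as np

def mle_mu(x):
    # MLE of mu for N(mu, 1) with mu >= 0: the sample mean, clipped at zero
    return max(x.mean(), 0.0)

def loglik(mu, x):
    # log-likelihood of N(mu, 1), up to an additive constant
    return -0.5 * np.sum((x - mu) ** 2)

x = np.array([-2.0, -1.0, -0.5])        # sample mean is negative
mu_hat = mle_mu(x)                      # constrained MLE: 0.0

# No mu in [0, inf) beats the boundary value when the mean is negative
grid = np.linspace(0.0, 3.0, 301)
```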

slide-55
SLIDE 55

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

The MLE parameter must be within the parameter space

We need to check whether ˆ µ is within the parameter space [0, ∞).

  • If x ≥ 0, ˆ

µ = x falls into the parameter space.

  • If x

, x does NOT fall into the parameter space. When x l

n i

xi n x for . Therefore, l x is a decreasing function of . So when x . Therefore, MLE is X max X

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20

slide-56
SLIDE 56

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

The MLE parameter must be within the parameter space

We need to check whether ˆ µ is within the parameter space [0, ∞).

  • If x ≥ 0, ˆ

µ = x falls into the parameter space.

  • If x < 0, ˆ

µ = x does NOT fall into the parameter space. When x l

n i

xi n x for . Therefore, l x is a decreasing function of . So when x . Therefore, MLE is X max X

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20

slide-57
SLIDE 57

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

The MLE parameter must be within the parameter space

We need to check whether ˆ µ is within the parameter space [0, ∞).

  • If x ≥ 0, ˆ

µ = x falls into the parameter space.

  • If x < 0, ˆ

µ = x does NOT fall into the parameter space. When x < 0 l

n i

xi n x for . Therefore, l x is a decreasing function of . So when x . Therefore, MLE is X max X

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20

slide-58
SLIDE 58

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

The MLE parameter must be within the parameter space

We need to check whether ˆ µ is within the parameter space [0, ∞).

  • If x ≥ 0, ˆ

µ = x falls into the parameter space.

  • If x < 0, ˆ

µ = x does NOT fall into the parameter space. When x < 0 ∂l ∂µ =

n

i=1

(xi − µ) = n(x − µ) < 0 for . Therefore, l x is a decreasing function of . So when x . Therefore, MLE is X max X

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20

slide-59
SLIDE 59

. . . . . .

. . . . . Recap . . . . . . . . . . . . . MLE . Summary

The MLE parameter must be within the parameter space

We need to check whether ˆ µ is within the parameter space [0, ∞).

  • If x ≥ 0, ˆ

µ = x falls into the parameter space.

  • If x < 0, ˆ

µ = x does NOT fall into the parameter space. When x < 0 ∂l ∂µ =

n

i=1

(xi − µ) = n(x − µ) < 0 for µ ≥ 0. Therefore, l(µ|x) is a decreasing function of µ. So ˆ µ = 0 when x < 0. Therefore, MLE is X max X

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20

slide-60
SLIDE 60

The MLE parameter must be within the parameter space

We need to check whether ˆµ is within the parameter space [0, ∞).

  • If x̄ ≥ 0, ˆµ = x̄ falls into the parameter space.
  • If x̄ < 0, ˆµ = x̄ does NOT fall into the parameter space. When x̄ < 0,

∂l/∂µ = ∑ᵢ₌₁ⁿ (xᵢ − µ) = n(x̄ − µ) < 0 for µ ≥ 0.

Therefore, l(µ|x) is a decreasing function of µ, so ˆµ = 0 when x̄ < 0. Putting the two cases together, the MLE is ˆµ(X) = max(X̄, 0).

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 14 / 20
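The boundary argument above can be sanity-checked numerically. The sketch below is illustrative only (assumptions: Xᵢ ~ N(µ, 1) with parameter space µ ∈ [0, ∞), as the slide's derivative ∑(xᵢ − µ) suggests); it maximizes the log-likelihood over a grid restricted to the parameter space and compares the result with max(x̄, 0):

```python
import numpy as np

# Illustrative sketch (assumed model: Xi ~ N(mu, 1), parameter space mu >= 0).
# Up to an additive constant, l(mu|x) = -0.5 * sum((xi - mu)^2).
rng = np.random.default_rng(0)
x = rng.normal(loc=-0.3, scale=1.0, size=50)  # true mean < 0, so xbar may be negative

mus = np.linspace(0.0, 5.0, 5001)             # grid restricted to [0, inf)
loglik = -0.5 * ((x[:, None] - mus[None, :]) ** 2).sum(axis=0)
mu_hat_numeric = mus[np.argmax(loglik)]

mu_hat_formula = max(x.mean(), 0.0)           # the slide's MLE: max(xbar, 0)
print(mu_hat_numeric, mu_hat_formula)         # agree up to the grid resolution
```

When x̄ < 0 the grid maximizer sits at the boundary point 0, matching the monotonicity argument on the slide.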

slide-63
SLIDE 63

Invariance Property of MLE

Question
If ˆθ is the MLE of θ, what is the MLE of τ(θ)?

Example
X1, · · · , Xn i.i.d. ∼ Bernoulli(p), where 0 < p < 1.
  1 What is the MLE of p?
  2 What is the MLE of the odds, defined by η = p/(1 − p)?

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 15 / 20

slide-67
SLIDE 67

MLE of p

L(p|x) = ∏ᵢ₌₁ⁿ p^xᵢ (1 − p)^(1−xᵢ) = p^∑xᵢ (1 − p)^(n−∑xᵢ)

l(p|x) = (∑ᵢ₌₁ⁿ xᵢ) log p + (n − ∑ᵢ₌₁ⁿ xᵢ) log(1 − p)

∂l/∂p = (∑ᵢ₌₁ⁿ xᵢ)/p − (n − ∑ᵢ₌₁ⁿ xᵢ)/(1 − p) = 0

ˆp = (∑ᵢ₌₁ⁿ xᵢ)/n = x̄

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 16 / 20
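The closed form ˆp = x̄ can be verified numerically. The sketch below (an illustrative grid search with simulated data, not part of the lecture) maximizes the Bernoulli log-likelihood over a grid of p values and compares the maximizer with the sample mean:

```python
import numpy as np

# Illustrative sketch: numerically verify that the Bernoulli log-likelihood
# l(p|x) = sum(xi)*log(p) + (n - sum(xi))*log(1 - p) is maximized at p-hat = xbar.
rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=200)   # simulated Bernoulli(0.3) sample
s, n = x.sum(), x.size

ps = np.linspace(0.001, 0.999, 9981)  # grid over the open interval (0, 1)
loglik = s * np.log(ps) + (n - s) * np.log(1 - ps)
p_hat_numeric = ps[np.argmax(loglik)]

print(p_hat_numeric, x.mean())        # agree up to the grid resolution
```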

slide-73
SLIDE 73

MLE of η = p/(1 − p)

  • η = p/(1 − p) = τ(p)
  • p = η/(1 + η) = τ⁻¹(η)

L*(η|x) = p^∑xᵢ (1 − p)^(n−∑xᵢ) = (p/(1 − p))^∑xᵢ (1 − p)ⁿ = η^∑xᵢ / (1 + η)ⁿ

l*(η|x) = (∑ᵢ₌₁ⁿ xᵢ) log η − n log(1 + η)

∂l*/∂η = (∑ᵢ₌₁ⁿ xᵢ)/η − n/(1 + η) = 0

ˆη = (∑ᵢ₌₁ⁿ xᵢ/n) / (1 − ∑ᵢ₌₁ⁿ xᵢ/n) = x̄/(1 − x̄) = τ(ˆp)

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 17 / 20
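The direct maximization of l*(η|x) can likewise be checked numerically. The sketch below (illustrative, with simulated Bernoulli data) maximizes l*(η|x) over a grid of odds values and compares the result with τ(ˆp) = x̄/(1 − x̄):

```python
import numpy as np

# Illustrative sketch: maximize l*(eta|x) = sum(xi)*log(eta) - n*log(1 + eta)
# directly over the odds eta, and compare with tau(p-hat) = xbar / (1 - xbar).
rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=200)
s, n = x.sum(), x.size

etas = np.linspace(0.01, 5.0, 49901)          # grid over plausible odds values
loglik = s * np.log(etas) - n * np.log(1 + etas)
eta_hat_numeric = etas[np.argmax(loglik)]

eta_hat_invariance = x.mean() / (1 - x.mean())  # tau(p-hat) from the slide
print(eta_hat_numeric, eta_hat_invariance)      # agree up to the grid resolution
```

That the two answers coincide is exactly the invariance property derived on the next slides.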

slide-77
SLIDE 77

Another way to get the MLE of η = p/(1 − p)

L*(η|x) = η^∑xᵢ / (1 + η)ⁿ

  • From the MLE of p, we know L*(η|x) is maximized when p = η/(1 + η) = ˆp.
  • Equivalently, L*(η|x) is maximized when η = ˆp/(1 − ˆp) = τ(ˆp), because τ is a one-to-one function.
  • Therefore, ˆη = τ(ˆp).

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 18 / 20

slide-82
SLIDE 82

Invariance Property of MLE

Fact
Denote the MLE of θ by ˆθ. If τ(θ) is a one-to-one function of θ, then the MLE of τ(θ) is τ(ˆθ).

Proof
The likelihood function in terms of τ(θ) = η is

L*(τ(θ)|x) = ∏ᵢ₌₁ⁿ fX(xᵢ|θ) = ∏ᵢ₌₁ⁿ f(xᵢ|τ⁻¹(η)) = L(τ⁻¹(η)|x)

We know this function is maximized when τ⁻¹(η) = ˆθ, or equivalently when η = τ(ˆθ). Therefore, the MLE of η = τ(θ) is τ(ˆθ).

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 19 / 20

slide-84
SLIDE 84

Summary

Today
  • Maximum Likelihood Estimator

Next Lecture
  • Mean Squared Error
  • Unbiased Estimator
  • Cramér-Rao inequality

Hyun Min Kang Biostatistics 602 - Lecture 10 February 12th, 2013 20 / 20