
Biostatistics 602 - Statistical Inference
Lecture 11: Evaluation of Point Estimators

Hyun Min Kang
February 14th, 2013

Topics: Recap · MLE · Evaluation · Cramer-Rao · Summary

Some News

  • Homework 3 is posted. It is due on Tuesday, February 26th.
  • Next Thursday (Feb 21) is the midterm day. We will start sharply at 1:10pm.
  • It would be better to solve Homework 3 yourself to get prepared.
  • The exam is closed book, covering all the material from Lecture 1 to Lecture 12.
  • Last year's midterm is posted on the web page.


Last Lecture

  1. What is a maximum likelihood estimator (MLE)?
  2. How can you find an MLE?
  3. Does an ML estimate always fall into a valid parameter space?
  4. If you know the MLE of θ, can you also know the MLE of τ(θ)?


Recap - Maximum Likelihood Estimator

Definition
  • For a given sample point x = (x1, · · · , xn), let θ̂(x) be the value at which L(θ|x) attains its maximum.
  • More formally, L(θ̂(x)|x) ≥ L(θ|x) for all θ ∈ Ω, where θ̂(x) ∈ Ω.
  • θ̂(x) is called the maximum likelihood estimate of θ based on data x, and θ̂(X) is the maximum likelihood estimator (MLE) of θ.

Recap - Invariance Property of MLE

Question
  If θ̂ is the MLE of θ, what is the MLE of τ(θ)?

Example
  X1, · · · , Xn i.i.d. ∼ Bernoulli(p), where 0 < p < 1.
  1. What is the MLE of p?
  2. What is the MLE of the odds, defined by η = p/(1 − p)?

Getting the MLE of η = p/(1 − p) from p̂

    L∗(η|x) = η^(∑ xi) / (1 + η)^n

  • From the MLE p̂, we know L∗(η|x) is maximized when η/(1 + η) = p̂.
  • Equivalently, L∗(η|x) is maximized when η = p̂/(1 − p̂) = τ(p̂), because τ is a one-to-one function.
  • Therefore η̂ = τ(p̂).
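The invariance argument above can be checked numerically. The sketch below is an illustration only (NumPy assumed; the toy data are ours, not from the slides): it computes p̂ = X̄ for Bernoulli data, maps it to the odds scale, and verifies on a grid that L∗(η|x) = η^(∑ xi) / (1 + η)^n peaks at τ(p̂).

```python
import numpy as np

def bernoulli_mle_odds(x):
    """MLE of p and, by invariance, of the odds eta = p / (1 - p)."""
    p_hat = np.mean(x)              # MLE of p for i.i.d. Bernoulli(p)
    eta_hat = p_hat / (1 - p_hat)   # tau(p_hat), since tau is one-to-one
    return p_hat, eta_hat

def induced_likelihood(eta, x):
    """L*(eta|x) = eta^(sum x_i) / (1 + eta)^n, as on the slide."""
    n = len(x)
    return eta ** np.sum(x) / (1 + eta) ** n

x = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # toy data: 5 successes out of 8
p_hat, eta_hat = bernoulli_mle_odds(x)

# Check numerically that L*(eta|x) peaks at eta_hat = tau(p_hat).
grid = np.linspace(0.01, 10.0, 100000)
eta_star = grid[np.argmax(induced_likelihood(grid, x))]
```

The grid maximizer agrees with τ(p̂) up to the grid spacing, which is the invariance property in action.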

Invariance Property of MLE

Fact
  Denote the MLE of θ by θ̂. If τ(θ) is a one-to-one function of θ, then the MLE of τ(θ) is τ(θ̂).

Proof
  The likelihood function in terms of τ(θ) = η is

    L∗(τ(θ)|x) = ∏ᵢ₌₁ⁿ fX(xi|θ) = ∏ᵢ₌₁ⁿ f(xi|τ⁻¹(η)) = L(τ⁻¹(η)|x)

  We know this function is maximized when τ⁻¹(η) = θ̂, or equivalently, when η = τ(θ̂). Therefore, the MLE of η = τ(θ) is τ(θ̂).


Induced Likelihood Function

Definition
  • Let L(θ|x) be the likelihood function for given data x1, · · · , xn,
  • and let η = τ(θ) be a (possibly not one-to-one) function of θ. We define the induced likelihood function L∗ by

      L∗(η|x) = sup_{θ ∈ τ⁻¹(η)} L(θ|x), where τ⁻¹(η) = {θ : τ(θ) = η, θ ∈ Ω}.

  • The value of η that maximizes L∗(η|x) is called the MLE of η = τ(θ).

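When τ is not one-to-one, the sup in the definition does real work. As a hedged illustration (NumPy assumed; the choice τ(p) = p(1 − p) and the toy data are ours, not from the slides), the sketch below evaluates L∗(η|x) for Bernoulli data by maximizing over both preimages of η, and checks that the maximizer of the induced likelihood is still τ(p̂).

```python
import numpy as np

def L(p, x):
    """Bernoulli likelihood L(p|x) = p^s (1-p)^(n-s)."""
    n, s = len(x), np.sum(x)
    return p ** s * (1 - p) ** (n - s)

def L_star(eta, x):
    """Induced likelihood for eta = tau(p) = p(1-p), which is NOT one-to-one:
    tau^{-1}(eta) = {p1, p2}, the two roots of p^2 - p + eta = 0."""
    disc = np.sqrt(1 - 4 * eta)            # requires 0 <= eta <= 1/4
    p1, p2 = (1 - disc) / 2, (1 + disc) / 2
    return np.maximum(L(p1, x), L(p2, x))  # sup over tau^{-1}(eta)

x = np.array([1, 1, 1, 0, 1, 0, 1, 1])     # toy data: 6 successes out of 8
p_hat = x.mean()                           # MLE of p: 0.75
eta_hat = p_hat * (1 - p_hat)              # tau(p_hat) = 0.1875

grid = np.linspace(1e-6, 0.25 - 1e-6, 200001)
eta_star = grid[np.argmax(L_star(grid, x))]
```

This is exactly the content of Theorem 7.2.10 below: the maximizer of the induced likelihood is τ(p̂) even without a one-to-one τ.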

Invariance Property of MLE

Theorem 7.2.10
  If θ̂ is the MLE of θ, then the MLE of η = τ(θ) is τ(θ̂), where τ(θ) is any function of θ.

Proof - Using Induced Likelihood Function

    L∗(η̂|x) = sup_η L∗(η|x) = sup_η sup_{θ ∈ τ⁻¹(η)} L(θ|x) = sup_θ L(θ|x) = L(θ̂|x)
    L(θ̂|x) = sup_{θ ∈ τ⁻¹(τ(θ̂))} L(θ|x) = L∗[τ(θ̂)|x]

  Hence, L∗(η̂|x) = L∗[τ(θ̂)|x], and τ(θ̂) is the MLE of τ(θ).


Properties of MLE

  1. Optimal in some sense: we will study this later.
  2. By definition, the MLE will always fall into the range of the parameter space.
  3. Not always easy to obtain; it may be hard to find the global maximum.
  4. Heavily depends on the underlying distributional assumptions (i.e., not robust).

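The caveat about global maxima (point 3) can be made concrete. The sketch below is illustrative only (NumPy assumed; the Cauchy location model and toy data are our choice, not from the slides): it evaluates a log-likelihood on a grid and finds more than one local maximum, so a hill-climbing routine started near the wrong mode would return the wrong estimate.

```python
import numpy as np

def cauchy_loglik(theta, x):
    """Log-likelihood of a Cauchy(theta, 1) location model, up to a constant."""
    return -np.sum(np.log1p((x[None, :] - theta[:, None]) ** 2), axis=1)

# Toy data with two distant clusters; the likelihood surface is multimodal.
x = np.array([-5.2, -4.8, -5.0, 4.9, 5.1, 5.0, 5.05])
grid = np.linspace(-10.0, 10.0, 40001)
ll = cauchy_loglik(grid, x)
theta_hat = grid[np.argmax(ll)]          # global maximizer on the grid

# Count strict local maxima of the gridded log-likelihood.
is_peak = (ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])
n_modes = int(np.sum(is_peak))
```

Here the global maximum sits near the larger cluster at +5, but a second local maximum near −5 would trap a naive optimizer.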

Method of Evaluating Estimators

Definition: Unbiasedness
  Suppose θ̂ is an estimator for θ. The bias of θ̂ is defined as

    Bias(θ̂) = E(θ̂) − θ

  If the bias is equal to 0, then θ̂ is an unbiased estimator for θ.

Example
  X1, · · · , Xn are i.i.d. samples from a distribution with mean µ. Let X̄ = (1/n) ∑ᵢ₌₁ⁿ Xi be an estimator of µ. The bias is

    Bias(X̄) = E(X̄) − µ = E[(1/n) ∑ᵢ₌₁ⁿ Xi] − µ = (1/n) ∑ᵢ₌₁ⁿ E(Xi) − µ = µ − µ = 0

  Therefore X̄ is an unbiased estimator for µ.

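The algebra above can be mirrored by simulation. A minimal sketch (NumPy assumed; the normal distribution and all constants are our choice for illustration): draw many samples of size n, average the resulting X̄ values to estimate E(X̄), and check that the estimated bias is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 3.0, 20, 100000

# Each row is one sample of size n; its mean is one realization of X-bar.
# Averaging the X-bars estimates E(X-bar), which should equal mu (zero bias).
xbars = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
bias_hat = xbars.mean() - mu
```

Up to Monte Carlo error, the estimated bias is zero, matching the derivation.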

How Important is Unbiasedness?

  [Figure: sampling distributions of two estimators of θ = 0 — θ̂1 in blue, θ̂2 in red]

  • θ̂1 (blue) is unbiased but has a chance to be very far away from θ = 0.
  • θ̂2 (red) is biased but more likely to be closer to the true θ than θ̂1.

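The figure's message can be reproduced numerically. In this hedged sketch (NumPy assumed; the shrinkage estimator 0.5·X̄ and all constants are our own illustration, not the slide's estimators), θ̂2 has a clearly non-zero bias yet a smaller mean squared error than the unbiased θ̂1 at the chosen true θ.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 10, 100000

samples = rng.normal(theta, 1.0, size=(reps, n))
t1 = samples.mean(axis=1)   # theta1-hat: unbiased sample mean, Var = 1/n
t2 = 0.5 * t1               # theta2-hat: shrunk toward 0, biased when theta != 0

bias2 = t2.mean() - theta               # about -0.5 * theta = -0.15
mse1 = np.mean((t1 - theta) ** 2)       # about 1/n = 0.1
mse2 = np.mean((t2 - theta) ** 2)       # about 0.25/n + (0.5*theta)^2 = 0.0475
```

The biased estimator trades bias for a large variance reduction, ending up closer to the truth on average, which is the point of the slide.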

Mean Squared Error

Definition
  The mean squared error (MSE) of an estimator θ̂ is defined as

    MSE(θ̂) = E[(θ̂ − θ)²]

Property of MSE

    MSE(θ̂) = E[(θ̂ − Eθ̂ + Eθ̂ − θ)²]
            = E[(θ̂ − Eθ̂)²] + E[(Eθ̂ − θ)²] + 2E[(θ̂ − Eθ̂)(Eθ̂ − θ)]
            = E[(θ̂ − Eθ̂)²] + (Eθ̂ − θ)² + 2(Eθ̂ − Eθ̂)(Eθ̂ − θ)
            = Var(θ̂) + Bias²(θ̂)

  The cross term vanishes because Eθ̂ − θ is a constant and E(θ̂ − Eθ̂) = 0.

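The decomposition MSE = Var + Bias² is an exact identity, and it also holds exactly for empirical moments (with the population-style variance). A small sketch to see this (NumPy assumed; the deliberately biased estimator X̄ + 0.3 is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.5, 5, 50000

samples = rng.normal(theta, 1.0, size=(reps, n))
ests = samples.mean(axis=1) + 0.3    # deliberately biased estimator of theta

mse = np.mean((ests - theta) ** 2)
var = np.var(ests)                   # population-style variance (ddof=0)
bias = ests.mean() - theta           # close to the built-in bias of 0.3
```

The identity holds to floating-point precision because it is algebra, not an approximation; only the bias value itself carries Monte Carlo error.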

Example

  • X1, · · · , Xn i.i.d. ∼ N(µ, 1)
  • Consider two estimators: µ̂1 = 1 and µ̂2 = X̄.

    MSE(µ̂1) = E(µ̂1 − µ)² = (1 − µ)²
    MSE(µ̂2) = E(X̄ − µ)² = Var(X̄) = 1/n

  • Suppose that the true µ = 1. Then MSE(µ̂1) = 0 < MSE(µ̂2), and no estimator can beat µ̂1 in terms of MSE when the true µ = 1.
  • Therefore, we cannot find an estimator that is uniformly the best in terms of MSE across all θ ∈ Ω among all estimators.
  • Instead, restrict the class of estimators, and find the "best" estimator within that smaller class.

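The non-dominance claim on this slide can be tabulated directly from the two closed-form MSEs. A minimal sketch (NumPy assumed; n = 25 and the grid of µ values are our choice):

```python
import numpy as np

n = 25
mus = np.linspace(-2.0, 4.0, 601)          # grid of possible true means

mse1 = (1.0 - mus) ** 2                    # MSE of mu1-hat = 1: pure squared bias
mse2 = np.full_like(mus, 1.0 / n)          # MSE of mu2-hat = X-bar: pure variance

# mu1-hat wins exactly when |mu - 1| < 1/sqrt(n); otherwise X-bar wins,
# so neither estimator is uniformly better across the parameter space.
mu1_wins = mse1 < mse2
```

Each estimator beats the other on part of the grid, which is why "best" only makes sense within a restricted class such as unbiased estimators.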

Uniformly Minimum Variance Unbiased Estimator

Definition
  W∗(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE), of τ(θ) if
  1. E[W∗(X)|θ] = τ(θ) for all θ (unbiased),
  2. and Var[W∗(X)|θ] ≤ Var[W(X)|θ] for all θ, where W is any other unbiased estimator of τ(θ) (minimum variance).

How to Find the Best Unbiased Estimator
  • Find a lower bound, say B(θ), on the variance of any unbiased estimator of τ(θ).
  • If W is an unbiased estimator of τ(θ) and satisfies Var[W(X)|θ] = B(θ), then W is the best unbiased estimator.

slide-56
SLIDE 56

. . . . . .

. . . . . Recap . . . . MLE . . . . . Evaluation . . . . . . . . . . . . . . . . . Cramer-Rao . Summary

Uniformly Minimum Variance Unbiased Estimator

.

Definition

. . W∗(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE) of τ(θ) if,

. . 1 E[W∗(X)|θ] = τ(θ) for all θ (unbiased) . . 2 and Var W

X Var W X for all , where W is any other unbiased estimator of (minimum variance). .

How to find the Best Unbiased Estimator

. . . . . . . .

  • Find the lower bound of variances of any unbiased estimator of

, say B .

  • If W is an unbiased estimator of

and satisfies Var W X B , then W is the best unbiased estimator.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 15 / 33

slide-57
SLIDE 57

. . . . . .

. . . . . Recap . . . . MLE . . . . . Evaluation . . . . . . . . . . . . . . . . . Cramer-Rao . Summary

Uniformly Minimum Variance Unbiased Estimator

.

Definition

. . W∗(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE) of τ(θ) if,

. . 1 E[W∗(X)|θ] = τ(θ) for all θ (unbiased) . . 2 and Var[W∗(X)|θ] ≤ Var[W(X)|θ] for all θ, where W is any other

unbiased estimator of τ(θ) (minimum variance). .

How to find the Best Unbiased Estimator

. . . . . . . .

  • Find the lower bound of variances of any unbiased estimator of

, say B .

  • If W is an unbiased estimator of

and satisfies Var W X B , then W is the best unbiased estimator.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 15 / 33

slide-58
SLIDE 58

. . . . . .

. . . . . Recap . . . . MLE . . . . . Evaluation . . . . . . . . . . . . . . . . . Cramer-Rao . Summary

Uniformly Minimum Variance Unbiased Estimator

.

Definition

. . W∗(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE) of τ(θ) if,

. . 1 E[W∗(X)|θ] = τ(θ) for all θ (unbiased) . . 2 and Var[W∗(X)|θ] ≤ Var[W(X)|θ] for all θ, where W is any other

unbiased estimator of τ(θ) (minimum variance). .

How to find the Best Unbiased Estimator

. .

  • Find the lower bound of variances of any unbiased estimator of τ(θ),

say B(θ).

  • If W is an unbiased estimator of

and satisfies Var W X B , then W is the best unbiased estimator.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 15 / 33

slide-59
SLIDE 59

Uniformly Minimum Variance Unbiased Estimator

Definition
W∗(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE), of τ(θ) if
  1. E[W∗(X)|θ] = τ(θ) for all θ (unbiased), and
  2. Var[W∗(X)|θ] ≤ Var[W(X)|θ] for all θ, where W is any other unbiased estimator of τ(θ) (minimum variance).

How to find the Best Unbiased Estimator
  • Find the lower bound of the variances of all unbiased estimators of τ(θ), say B(θ).
  • If W∗ is an unbiased estimator of τ(θ) and satisfies Var[W∗(X)|θ] = B(θ), then W∗ is the best unbiased estimator.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 15 / 33

SLIDE 65

Cramer-Rao inequality

Theorem 7.3.9 : Cramer-Rao Theorem
Let X1, · · · , Xn be a sample with joint pdf/pmf fX(x|θ). Suppose W(X) is an estimator satisfying
  1. E[W(X)|θ] = τ(θ), ∀θ ∈ Ω.
  2. Var[W(X)|θ] < ∞.
For h(x) = 1 and h(x) = W(x), if the differentiation and integration are interchangeable, i.e.

d/dθ E[h(X)|θ] = d/dθ ∫_{x∈X} h(x)fX(x|θ)dx = ∫_{x∈X} h(x) ∂/∂θ fX(x|θ)dx

then a lower bound of Var[W(X)|θ] is

Var[W(X)] ≥ [τ′(θ)]² / E[{∂/∂θ log fX(X|θ)}²]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 16 / 33
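As a quick numeric sanity check of the inequality (illustrative, not part of the lecture): for X1, · · · , Xn iid N(µ, 1) and W = X̄ with τ(µ) = µ, the joint score is Σ(Xi − µ), the bound equals 1/n, and X̄ attains it. A Monte Carlo sketch:

```python
import random

def crlb_check(mu=2.0, n=5, reps=40000, seed=7):
    """For X1..Xn iid N(mu, 1) and W = sample mean (unbiased for tau(mu) = mu),
    compare Var[W] with [tau'(mu)]^2 / E[{d/dmu log f(X|mu)}^2]."""
    rng = random.Random(seed)
    var_w = 0.0
    e_score2 = 0.0
    for _ in range(reps):
        xs = [rng.gauss(mu, 1.0) for _ in range(n)]
        xbar = sum(xs) / n
        var_w += (xbar - mu) ** 2            # E[(W - EW)^2], since EW = mu
        score = sum(x - mu for x in xs)      # d/dmu log of the joint density
        e_score2 += score ** 2
    var_w /= reps
    bound = 1.0 / (e_score2 / reps)          # tau'(mu) = 1
    return var_w, bound

var_w, bound = crlb_check()
print(var_w, bound)   # both close to 1/n = 0.2; the bound is attained here
```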

SLIDE 69

Proving Cramer-Rao Theorem (1/4)

By the Cauchy-Schwarz inequality, [Cov(X, Y)]² ≤ Var(X)Var(Y). Replacing X and Y,

[Cov{W(X), ∂/∂θ log fX(X|θ)}]² ≤ Var[W(X)] · Var[∂/∂θ log fX(X|θ)]

Var[W(X)] ≥ [Cov{W(X), ∂/∂θ log fX(X|θ)}]² / Var[∂/∂θ log fX(X|θ)]

Using Var(X) = EX² − (EX)²,

Var[∂/∂θ log fX(X|θ)] = E[{∂/∂θ log fX(X|θ)}²] − (E[∂/∂θ log fX(X|θ)])²

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 17 / 33

SLIDE 75

Proving Cramer-Rao Theorem (2/4)

E[∂/∂θ log fX(X|θ)]
  = ∫_{x∈X} [∂/∂θ log fX(x|θ)] fX(x|θ)dx
  = ∫_{x∈X} [{∂/∂θ fX(x|θ)} / fX(x|θ)] fX(x|θ)dx
  = ∫_{x∈X} ∂/∂θ fX(x|θ)dx
  = d/dθ ∫_{x∈X} fX(x|θ)dx   (by assumption)
  = d/dθ 1 = 0

Therefore,

Var[∂/∂θ log fX(X|θ)] = E[{∂/∂θ log fX(X|θ)}²]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 18 / 33
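The identity E[∂/∂θ log fX(X|θ)] = 0 can be checked directly for a discrete family by summing over the support. An illustrative sketch for the Poisson pmf, whose score is X/λ − 1 (function names are mine, not the lecture's):

```python
from math import exp

def e_score_poisson(lam, xmax=60):
    """E[d/dlam log f(X|lam)] for X ~ Poisson(lam); the score is X/lam - 1.
    The pmf is computed by the recurrence p(x) = p(x-1) * lam / x; the tail
    beyond xmax is negligible for the lam values used here."""
    p = exp(-lam)                       # P(X = 0)
    total = (0.0 / lam - 1.0) * p
    for x in range(1, xmax + 1):
        p *= lam / x
        total += (x / lam - 1.0) * p
    return total

for lam in (0.5, 2.0, 7.5):
    print(lam, e_score_poisson(lam))    # each essentially 0
```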

SLIDE 83

Proving Cramer-Rao Theorem (3/4)

Cov[W(X), ∂/∂θ log fX(X|θ)]
  = E[W(X) · ∂/∂θ log fX(X|θ)] − E[W(X)] E[∂/∂θ log fX(X|θ)]
  = E[W(X) · ∂/∂θ log fX(X|θ)]   (the second term vanishes by part 2/4)
  = ∫_{x∈X} W(x) [∂/∂θ log fX(x|θ)] fX(x|θ)dx
  = ∫_{x∈X} W(x) [{∂/∂θ fX(x|θ)} / fX(x|θ)] fX(x|θ)dx
  = ∫_{x∈X} W(x) ∂/∂θ fX(x|θ)dx
  = d/dθ ∫_{x∈X} W(x) fX(x|θ)dx   (by assumption)
  = d/dθ E[W(X)] = d/dθ τ(θ) = τ′(θ)

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 19 / 33

SLIDE 86

Proving Cramer-Rao Theorem (4/4)

From the previous results,

Var[∂/∂θ log fX(X|θ)] = E[{∂/∂θ log fX(X|θ)}²]
Cov[W(X), ∂/∂θ log fX(X|θ)] = τ′(θ)

Therefore, the Cramer-Rao lower bound is

Var[W(X)] ≥ [Cov{W(X), ∂/∂θ log fX(X|θ)}]² / Var[∂/∂θ log fX(X|θ)] = [τ′(θ)]² / E[{∂/∂θ log fX(X|θ)}²]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 20 / 33

SLIDE 89

Cramer-Rao bound in iid case

Corollary 7.3.10
If X1, · · · , Xn are iid samples from pdf/pmf fX(x|θ), and the assumptions in the above Cramer-Rao theorem hold, then the lower bound of Var[W(X)|θ] becomes

Var[W(X)] ≥ [τ′(θ)]² / (n E[{∂/∂θ log fX(X|θ)}²])

Proof
We need to show that E[{∂/∂θ log fX(X|θ)}²] = n E[{∂/∂θ log fX(X|θ)}²], where the expectation on the left is over the joint density of the whole sample X and the one on the right over a single observation X.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 21 / 33

SLIDE 93

Proving Corollary 7.3.10

E[{∂/∂θ log fX(X|θ)}²]
  = E[{∂/∂θ log ∏_{i=1}^n fX(Xi|θ)}²]
  = E[{∂/∂θ ∑_{i=1}^n log fX(Xi|θ)}²]
  = E[{∑_{i=1}^n ∂/∂θ log fX(Xi|θ)}²]
  = E[∑_{i=1}^n {∂/∂θ log fX(Xi|θ)}² + ∑_{i≠j} {∂/∂θ log fX(Xi|θ)}{∂/∂θ log fX(Xj|θ)}]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 22 / 33

SLIDE 97

Proving Corollary 7.3.10

Because X1, · · · , Xn are independent,

E[∑_{i≠j} {∂/∂θ log fX(Xi|θ)}{∂/∂θ log fX(Xj|θ)}] = ∑_{i≠j} E[∂/∂θ log fX(Xi|θ)] E[∂/∂θ log fX(Xj|θ)] = 0

Therefore,

E[{∂/∂θ log fX(X|θ)}²] = E[∑_{i=1}^n {∂/∂θ log fX(Xi|θ)}²] = ∑_{i=1}^n E[{∂/∂θ log fX(Xi|θ)}²] = n E[{∂/∂θ log fX(X|θ)}²]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 23 / 33
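The identity just proved (the information of an iid sample of size n is n times that of one observation) can be checked by simulation. A sketch for Poisson(λ), where the per-observation score is X/λ − 1; the sampler and all names below are illustrative, not from the lecture:

```python
import random
from math import exp

def poisson_draw(rng, lam):
    """Knuth's multiplicative Poisson sampler (fine for small lam)."""
    L = exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def info_mc(lam=3.0, n=4, reps=60000, seed=11):
    """Monte Carlo E[score^2] for one Poisson(lam) draw (=> I(lam)) and for
    the joint score of an iid sample of size n (=> I_n(lam) = n * I(lam))."""
    rng = random.Random(seed)
    one = joint = 0.0
    for _ in range(reps):
        xs = [poisson_draw(rng, lam) for _ in range(n)]
        one += (xs[0] / lam - 1.0) ** 2                 # single-observation score^2
        joint += sum(x / lam - 1.0 for x in xs) ** 2    # joint score^2
    return one / reps, joint / reps

one_obs, joint_obs = info_mc()
print(one_obs, joint_obs)   # roughly 1/lam and n/lam, i.e. joint ~ n * single
```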

SLIDE 100

Remark from Corollary 7.3.10

In the iid case, the Cramer-Rao lower bound for an unbiased estimator of θ is

Var[W(X)] ≥ 1 / (n E[{∂/∂θ log fX(X|θ)}²])

because τ(θ) = θ and τ′(θ) = 1.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 24 / 33

SLIDE 104

Score Function

Definition: Score or Score Function for X
X1, · · · , Xn i.i.d. ∼ fX(x|θ)

S(X|θ) = ∂/∂θ log fX(X|θ)
E[S(X|θ)] = 0   (shown in the proof of the Cramer-Rao theorem)
Sn(X|θ) = ∂/∂θ log fX(X|θ)   (the score of the whole sample X)

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 25 / 33

SLIDE 108

Fisher Information Number

Definition: Fisher Information Number

I(θ) = E[{∂/∂θ log fX(X|θ)}²] = E[S²(X|θ)]

In(θ) = E[{∂/∂θ log fX(X|θ)}²] = nE[{∂/∂θ log fX(X|θ)}²] = nI(θ)

The bigger the information number, the more information we have about θ, and the smaller the bound on the variance of unbiased estimators.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 26 / 33

SLIDE 111

Simplified Fisher Information

Lemma 7.3.11
If fX(x|θ) satisfies the two interchangeability conditions

d/dθ ∫_{x∈X} fX(x|θ)dx = ∫_{x∈X} ∂/∂θ fX(x|θ)dx
d/dθ ∫_{x∈X} ∂/∂θ fX(x|θ)dx = ∫_{x∈X} ∂²/∂θ² fX(x|θ)dx

which are true for the exponential family, then

I(θ) = E[{∂/∂θ log fX(X|θ)}²] = −E[∂²/∂θ² log fX(X|θ)]

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 27 / 33

SLIDE 116

Example - Poisson Distribution

  • X1, · · · , Xn i.i.d. ∼ Poisson(λ)
  • λ̂1 = X̄
  • λ̂2 = s²_X (the sample variance)
  • E[λ̂1] = E(X̄) = λ.

The Cramer-Rao lower bound is In⁻¹(λ) = [nI(λ)]⁻¹.

I(λ) = E[{∂/∂λ log fX(X|λ)}²] = −E[∂²/∂λ² log fX(X|λ)]
  = −E[∂²/∂λ² log (e^{−λ}λ^X / X!)]
  = −E[∂²/∂λ² (−λ + X log λ − log X!)]
  = E[X/λ²] = E(X)/λ² = 1/λ

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 28 / 33

SLIDE 119

Example - Poisson Distribution (cont'd)

Therefore, the Cramer-Rao lower bound is

Var[W(X)] ≥ 1/(nI(λ)) = λ/n

where W is any unbiased estimator.

Var(λ̂1) = Var(X̄) = Var(X)/n = λ/n

Therefore, λ̂1 = X̄ is the best unbiased estimator of λ. In contrast, Var(λ̂2) > λ/n (details omitted), so λ̂2 is not the best unbiased estimator.

Hyun Min Kang Biostatistics 602 - Lecture 11 February 14th, 2013 29 / 33
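A simulation consistent with these conclusions (illustrative sketch, not from the slides; here Var(λ̂2) is simply estimated numerically rather than derived):

```python
import random
from math import exp

def poisson_draw(rng, lam):
    """Knuth's multiplicative Poisson sampler (fine for small lam)."""
    L = exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def estimator_variances(lam=4.0, n=20, reps=20000, seed=3):
    """Monte Carlo variances of lam1 = sample mean and lam2 = sample variance
    (both unbiased for lam under Poisson), to compare with the CRLB lam/n."""
    rng = random.Random(seed)
    v1 = v2 = 0.0
    for _ in range(reps):
        xs = [poisson_draw(rng, lam) for _ in range(n)]
        xbar = sum(xs) / n
        s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
        v1 += (xbar - lam) ** 2     # squared error of the sample mean
        v2 += (s2 - lam) ** 2       # squared error of the sample variance
    return v1 / reps, v2 / reps, lam / n

v1, v2, bound = estimator_variances()
print(v1, v2, bound)   # v1 ~ bound = 0.2, v2 well above it
```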


With and without Lemma 7.3.11

With Lemma 7.3.11:

I(λ) = −E[(∂²/∂λ²) log f_X(X|λ)]
     = −E[(∂²/∂λ²)(−λ + X log λ − log X!)]
     = 1/λ

Without Lemma 7.3.11:

I(λ) = E[{(∂/∂λ) log f_X(X|λ)}²]
     = E[{(∂/∂λ)(−λ + X log λ − log X!)}²]
     = E[(−1 + X/λ)²]
     = E[1 − 2X/λ + X²/λ²]
     = 1 − 2E(X)/λ + E(X²)/λ²
     = 1 − 2E(X)/λ + (Var(X) + [E(X)]²)/λ²
     = 1 − 2λ/λ + (λ + λ²)/λ²
     = 1/λ
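Both routes to I(λ) can be checked numerically by summing against the Poisson pmf. This is an illustrative sketch, not from the slides; λ = 3 and the truncation at x < 100 are arbitrary choices.

```python
# Evaluate both Fisher-information expressions for Poisson(lambda) by summing
# over the (truncated) pmf, and check that they agree with 1/lambda.
from math import exp

lam = 3.0

# Poisson pmf values f(x) for x = 0..99, built iteratively to avoid factorials
probs = [exp(-lam)]
for x in range(1, 100):
    probs.append(probs[-1] * lam / x)

# E[{(d/dlam) log f}^2], with score (d/dlam) log f = -1 + x/lam
first_form = sum(p * (-1 + x / lam) ** 2 for x, p in enumerate(probs))

# -E[(d^2/dlam^2) log f], with second derivative -x/lam^2
second_form = sum(p * (x / lam**2) for x, p in enumerate(probs))

print(first_form, second_form, 1 / lam)  # all three agree
```

The first sum is Var(X)/λ² and the second is E(X)/λ², both equal to 1/λ, mirroring the two derivations above.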


Example - Normal Distribution

  • X1, · · · , Xn i.i.d. ∼ N(µ, σ²), where σ² is known.
  • The Cramer-Rao bound for µ is [nI(µ)]⁻¹.

I(µ) = −E[(∂²/∂µ²) log f_X(X|µ)]
     = −E[(∂²/∂µ²) log{(1/√(2πσ²)) exp(−(X − µ)²/(2σ²))}]
     = −E[(∂²/∂µ²){−(1/2) log(2πσ²) − (X − µ)²/(2σ²)}]
     = −E[(∂/∂µ){(X − µ)/σ²}]
     = 1/σ²
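Since I(µ) = 1/σ², the bound [nI(µ)]⁻¹ = σ²/n equals Var(X̄), so the sample mean attains it. A quick Monte Carlo sketch of this, not from the slides; µ = 2, σ = 1.5, and n = 40 are arbitrary illustrative values.

```python
# For N(mu, sigma^2) with sigma^2 known, the Cramer-Rao bound sigma^2/n
# is attained by the sample mean.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 1.5, 40, 20000

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print("Cramer-Rao bound :", sigma**2 / n)  # 0.05625
print("Var(X-bar)       :", xbar.var())    # close to the bound
```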


Applying Lemma 7.3.11

Question
When can we interchange the order of differentiation and integration?

Answer
  • For the exponential family, always yes.
  • For non-exponential families, not always; each case must be checked individually.

Example
X1, · · · , Xn i.i.d. ∼ Uniform(0, θ). Because the support depends on θ,

d/dθ ∫₀^θ h(x) f_X(x|θ) dx ≠ ∫₀^θ h(x) (∂/∂θ) f_X(x|θ) dx
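To make the failure concrete, take the hypothetical choice h(x) = 1 with f_X(x|θ) = 1/θ on (0, θ): the left side is d/dθ ∫₀^θ (1/θ) dx = d/dθ 1 = 0, while the right side is ∫₀^θ (−1/θ²) dx = −1/θ. A minimal numerical sketch of this specific choice (not from the slides):

```python
# Uniform(0, theta): the support depends on theta, so differentiating under
# the integral sign gives the wrong answer.  Here h(x) = 1.
theta, eps = 2.0, 1e-6

def integral(t):
    # closed form of int_0^t h(x) f(x|t) dx = t * (1/t) = 1 for every t
    return t * (1.0 / t)

# left side: derivative of the integral (central difference) -> 0
lhs = (integral(theta + eps) - integral(theta - eps)) / (2 * eps)

# right side: integral of the derivative, int_0^theta (-1/theta^2) dx = -1/theta
rhs = theta * (-1.0 / theta**2)

print(lhs, rhs)  # 0.0 vs -0.5: the two sides disagree
```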


Summary

Today
  • Invariance Property
  • Mean Squared Error
  • Unbiased Estimators
  • Cramer-Rao inequality

Next Lecture
  • More on the Cramer-Rao inequality