
Evaluation of Point Estimators
Lecture 11, Biostatistics 602 - Statistical Inference
Hyun Min Kang, February 14th, 2013

Outline: Recap · MLE · Evaluation · Cramer-Rao · Summary


1. Induced Likelihood Function (slide 8/33)

• Let L(θ | x) be the likelihood function for given data x_1, · · · , x_n,
• and let η = τ(θ) be a (possibly not one-to-one) function of θ.

We define the induced likelihood function L* by

    L*(η | x) = sup_{θ ∈ τ⁻¹(η)} L(θ | x),   where τ⁻¹(η) = { θ : τ(θ) = η, θ ∈ Ω }.

• The value of η that maximizes L*(η | x) is called the MLE of η = τ(θ).

2. Invariance Property of MLE (slide 9/33)

Theorem 7.2.10
If θ̂ is the MLE of θ, then the MLE of η = τ(θ) is τ(θ̂), where τ(θ) is any function of θ.

Proof - Using the Induced Likelihood Function

    L*(η̂ | x) = sup_η L*(η | x)
              = sup_η sup_{θ ∈ τ⁻¹(η)} L(θ | x)
              = sup_θ L(θ | x)
              = L(θ̂ | x)
    L(θ̂ | x)  = sup_{θ ∈ τ⁻¹(τ(θ̂))} L(θ | x) = L*[τ(θ̂) | x]

Hence, L*(η̂ | x) = L*[τ(θ̂) | x], and τ(θ̂) is the MLE of τ(θ).
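
The invariance property is easy to check numerically. Below is a minimal sketch, assuming iid Bernoulli(p) data and the odds transformation η = τ(p) = p/(1 − p); the sample values and grid resolution are illustrative, not part of the lecture.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])   # hypothetical Bernoulli(p) sample

def loglik(p):
    # log L(p | x) for iid Bernoulli(p) observations
    s = x.sum()
    return s * np.log(p) + (len(x) - s) * np.log(1.0 - p)

p_grid = np.linspace(0.001, 0.999, 9999)
ll = loglik(p_grid)
p_hat = p_grid[np.argmax(ll)]                  # MLE of p (close to the sample mean 0.7)

# Induced likelihood of eta = p / (1 - p): tau is one-to-one here, so maximizing
# L*(eta | x) over the image of the grid picks out the same index.
eta_grid = p_grid / (1.0 - p_grid)
eta_hat = eta_grid[np.argmax(ll)]              # MLE of eta

print(p_hat, eta_hat, p_hat / (1.0 - p_hat))   # eta_hat ≈ tau(p_hat), as Theorem 7.2.10 states
```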

3. Properties of MLE (slide 10/33)

1. Optimal in some sense: we will study this later.
2. By definition, the MLE will always fall into the range of the parameter space.
3. Not always easy to obtain; it may be hard to find the global maximum.
4. Heavily depends on the underlying distributional assumptions (i.e., not robust).

4. Method of Evaluating Estimators (slide 11/33)

Definition: Unbiasedness
Suppose θ̂ is an estimator for θ. The bias of θ̂ is defined as Bias(θ̂) = E(θ̂) − θ.
If the bias is equal to 0, then θ̂ is an unbiased estimator for θ.

Example
X_1, · · · , X_n are iid samples from a distribution with mean µ. Let X̄ = (1/n) ∑_{i=1}^n X_i, which is an estimator of µ. The bias is

    Bias(X̄) = E(X̄) − µ = E[(1/n) ∑_{i=1}^n X_i] − µ = (1/n) ∑_{i=1}^n E(X_i) − µ = µ − µ = 0.

Therefore X̄ is an unbiased estimator for µ.
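
A quick Monte Carlo check of this algebra; a sketch assuming Exponential data with mean µ = 1 (the seed, n, and number of replicates are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, mu = 20, 100_000, 1.0

# reps independent samples of size n, each reduced to its sample mean
xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)

bias_hat = xbar.mean() - mu   # Monte Carlo estimate of E(X̄) - µ
print(bias_hat)               # close to 0, matching Bias(X̄) = 0 above
```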

5. How important is unbiasedness? (slide 12/33)

[Figure: sampling distributions of two estimators of θ, with true θ = 0]

• θ̂_1 (blue) is unbiased but has a chance to be very far away from θ = 0.
• θ̂_2 (red) is biased but is more likely to be closer to the true θ than θ̂_1.
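
The figure is not reproduced here, but the same point can be made with a small simulation. This is a sketch under assumed choices: N(θ, 1) data with true θ = 0, θ̂_1 = X̄, and a hypothetical biased estimator θ̂_2 that shrinks X̄ toward the constant 0.2.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.0, 10, 100_000
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)

theta1 = xbar                     # unbiased, Var = 1/n
theta2 = 0.5 * xbar + 0.5 * 0.2   # biased (E = 0.1 when theta = 0), but less spread out

print(np.mean(np.abs(theta1 - theta) < 0.2))   # how often each lands within 0.2 of the truth
print(np.mean(np.abs(theta2 - theta) < 0.2))   # the biased estimator wins more often here
```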

6. Mean Squared Error (slide 13/33)

Definition
The Mean Squared Error (MSE) of an estimator θ̂ is defined as MSE(θ̂) = E[(θ̂ − θ)²].

Property of MSE

    MSE(θ̂) = E[(θ̂ − Eθ̂ + Eθ̂ − θ)²]
            = E[(θ̂ − Eθ̂)²] + E[(Eθ̂ − θ)²] + 2 E[(θ̂ − Eθ̂)] E[(Eθ̂ − θ)]
            = E[(θ̂ − Eθ̂)²] + (Eθ̂ − θ)² + 2 (Eθ̂ − Eθ̂)(Eθ̂ − θ)
            = Var(θ̂) + Bias²(θ̂)
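
A simulation sketch of the decomposition MSE = Var + Bias², using the same assumed shrinkage estimator as above (all specific values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 0.0, 10, 200_000
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
theta_hat = 0.5 * xbar + 0.5 * 0.2            # a biased estimator of theta

mse  = np.mean((theta_hat - theta) ** 2)      # direct Monte Carlo MSE
var  = np.var(theta_hat)                      # Var(theta_hat)
bias = np.mean(theta_hat) - theta             # Bias(theta_hat)

print(mse, var + bias ** 2)                   # the two agree up to Monte Carlo error
```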

7. Example: comparing MSEs (slide 14/33)

• X_1, · · · , X_n ~ iid N(µ, 1).
• µ̂_1 = 1,  µ̂_2 = X̄.

    MSE(µ̂_1) = E(µ̂_1 − µ)² = (1 − µ)²
    MSE(µ̂_2) = E(X̄ − µ)² = Var(X̄) = 1/n

• Suppose that the true µ = 1; then MSE(µ̂_1) = 0 < MSE(µ̂_2), and no estimator can beat µ̂_1 in terms of MSE when the true µ = 1.
• Therefore, we cannot find an estimator that is uniformly the best in terms of MSE across all θ ∈ Ω among all estimators.
• Restrict the class of estimators, and find the "best" estimator within the smaller class.
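
The two MSE curves can be compared directly over a grid of true values. A minimal sketch (n = 10 is an arbitrary choice):

```python
import numpy as np

n = 10                                   # illustrative sample size
mu = np.linspace(0.0, 2.0, 9)            # a few candidate true values of mu

mse1 = (1.0 - mu) ** 2                   # MSE of the constant estimator mu_hat_1 = 1
mse2 = np.full_like(mu, 1.0 / n)         # MSE of mu_hat_2 = X̄ does not depend on mu

for m, a, b in zip(mu, mse1, mse2):
    print(f"mu={m:.2f}  MSE(mu1)={a:.3f}  MSE(mu2)={b:.3f}  winner={'mu1' if a < b else 'mu2'}")
# Neither estimator dominates: mu_hat_1 wins near mu = 1, mu_hat_2 wins elsewhere.
```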

8. Uniformly Minimum Variance Unbiased Estimator (slide 15/33)

Definition
W*(X) is the best unbiased estimator, or uniformly minimum variance unbiased estimator (UMVUE), of τ(θ) if
1. E[W*(X) | θ] = τ(θ) for all θ (unbiased), and
2. Var[W*(X) | θ] ≤ Var[W(X) | θ] for all θ, where W is any other unbiased estimator of τ(θ) (minimum variance).

How to find the Best Unbiased Estimator
• Find a lower bound, say B(θ), on the variance of any unbiased estimator of τ(θ).
• If W* is an unbiased estimator of τ(θ) and satisfies Var[W*(X) | θ] = B(θ), then W* is the best unbiased estimator.

9. Cramer-Rao inequality (slide 16/33)

Theorem 7.3.9 (Cramer-Rao Theorem)
Let X_1, · · · , X_n be a sample with joint pdf/pmf f_X(x | θ). Suppose W(X) is an estimator satisfying
1. E[W(X) | θ] = τ(θ), ∀ θ ∈ Ω, and
2. Var[W(X) | θ] < ∞.
For h(x) = 1 and h(x) = W(x), if differentiation and integration are interchangeable, i.e.

    d/dθ E[h(X) | θ] = d/dθ ∫ h(x) f_X(x | θ) dx = ∫ h(x) ∂/∂θ f_X(x | θ) dx,

then a lower bound of Var[W(X) | θ] is

    Var[W(X)] ≥ [τ'(θ)]² / E[{∂/∂θ log f_X(X | θ)}²].
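
A numerical sanity check of the bound; a sketch assuming iid Exponential data with mean θ, so τ(θ) = θ, τ'(θ) = 1, and the bound works out to θ²/n (the chosen θ, n, and estimator W(X) = X̄ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 25, 200_000

# W(X) = X̄ for iid Exponential(mean=theta) samples
xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)

var_W = xbar.var()                 # Monte Carlo Var[W(X)]
cr_bound = theta ** 2 / n          # [tau'(theta)]^2 / (n * E[score^2]) with E[score^2] = 1/theta^2

print(var_W, cr_bound)             # Var[W(X)] matches the bound here, so X̄ attains it
```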

10. Proving the Cramer-Rao Theorem (1/4) (slide 17/33)

By the Cauchy-Schwarz inequality,

    [Cov(X, Y)]² ≤ Var(X) Var(Y).

Replacing X and Y with W(X) and ∂/∂θ log f_X(X | θ),

    [Cov{W(X), ∂/∂θ log f_X(X | θ)}]² ≤ Var[W(X)] Var[∂/∂θ log f_X(X | θ)]
    Var[W(X)] ≥ [Cov{W(X), ∂/∂θ log f_X(X | θ)}]² / Var[∂/∂θ log f_X(X | θ)].

Using Var(X) = EX² − (EX)²,

    Var[∂/∂θ log f_X(X | θ)] = E[{∂/∂θ log f_X(X | θ)}²] − (E[∂/∂θ log f_X(X | θ)])².

11. Proving the Cramer-Rao Theorem (2/4) (slide 18/33)

    E[∂/∂θ log f_X(X | θ)] = ∫ {∂/∂θ log f_X(x | θ)} f_X(x | θ) dx
                           = ∫ {∂/∂θ f_X(x | θ) / f_X(x | θ)} f_X(x | θ) dx
                           = ∫ ∂/∂θ f_X(x | θ) dx
                           = d/dθ ∫ f_X(x | θ) dx      (by assumption)
                           = d/dθ 1 = 0

Therefore

    Var[∂/∂θ log f_X(X | θ)] = E[{∂/∂θ log f_X(X | θ)}²].
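
The fact that the score has mean zero is easy to verify by simulation. A sketch, again assuming Exponential(mean=θ) data so the score of one observation is −1/θ + x/θ² (this specific model is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, reps = 2.0, 500_000
x = rng.exponential(scale=theta, size=reps)

score = -1.0 / theta + x / theta ** 2          # d/dtheta log f(x|theta) for Exponential(mean=theta)
print(score.mean())                            # approximately 0
print((score ** 2).mean(), 1.0 / theta ** 2)   # E[score^2] matches 1/theta^2
```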

12. Proving the Cramer-Rao Theorem (3/4) (slide 19/33)

    Cov[W(X), ∂/∂θ log f_X(X | θ)]
      = E[W(X) · ∂/∂θ log f_X(X | θ)] − E[W(X)] E[∂/∂θ log f_X(X | θ)]
      = E[W(X) · ∂/∂θ log f_X(X | θ)]
      = ∫ W(x) {∂/∂θ log f_X(x | θ)} f_X(x | θ) dx
      = ∫ W(x) {∂/∂θ f_X(x | θ) / f_X(x | θ)} f_X(x | θ) dx
      = ∫ W(x) ∂/∂θ f_X(x | θ) dx
      = d/dθ ∫ W(x) f_X(x | θ) dx      (by assumption)
      = d/dθ E[W(X)] = d/dθ τ(θ) = τ'(θ).
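
This covariance identity can also be checked by simulation. A sketch under the same assumed Exponential(mean=θ) model, with W(X) = X̄ so that τ(θ) = θ and τ'(θ) = 1:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 2.0, 25, 200_000
x = rng.exponential(scale=theta, size=(reps, n))

W = x.mean(axis=1)                                           # W(X) = X̄, unbiased for tau(theta) = theta
joint_score = (-1.0 / theta + x / theta ** 2).sum(axis=1)    # score of the whole sample

cov = np.mean(W * joint_score) - W.mean() * joint_score.mean()
print(cov)                                                   # approximately tau'(theta) = 1
```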

13. Proving the Cramer-Rao Theorem (4/4) (slide 20/33)

From the previous results,

    Var[∂/∂θ log f_X(X | θ)] = E[{∂/∂θ log f_X(X | θ)}²]
    Cov[W(X), ∂/∂θ log f_X(X | θ)] = τ'(θ).

Therefore the Cramer-Rao lower bound is

    Var[W(X)] ≥ [Cov{W(X), ∂/∂θ log f_X(X | θ)}]² / Var[∂/∂θ log f_X(X | θ)]
              = [τ'(θ)]² / E[{∂/∂θ log f_X(X | θ)}²].

14. Cramer-Rao bound in the iid case (slide 21/33)

Corollary 7.3.10
If X_1, · · · , X_n are iid samples from pdf/pmf f_X(x | θ), and the assumptions in the above Cramer-Rao theorem hold, then the lower bound of Var[W(X) | θ] becomes

    Var[W(X)] ≥ [τ'(θ)]² / ( n E[{∂/∂θ log f_X(X_1 | θ)}²] ).

Proof
We need to show that the Fisher information of the full sample is n times that of a single observation, i.e.

    E[{∂/∂θ log f_X(X | θ)}²] = n E[{∂/∂θ log f_X(X_1 | θ)}²],

where X = (X_1, · · · , X_n) denotes the whole sample.

15. Proving Corollary 7.3.10 (slide 22/33)

    E[{∂/∂θ log f_X(X | θ)}²] = E[{∂/∂θ log ∏_{i=1}^n f_X(X_i | θ)}²]
                              = E[{∑_{i=1}^n ∂/∂θ log f_X(X_i | θ)}²]
                              = E[∑_{i=1}^n {∂/∂θ log f_X(X_i | θ)}²]
                                + E[∑_{i ≠ j} {∂/∂θ log f_X(X_i | θ)}{∂/∂θ log f_X(X_j | θ)}]

16. Proving Corollary 7.3.10, continued (slide 23/33)

Because X_1, · · · , X_n are independent,

    E[∑_{i ≠ j} {∂/∂θ log f_X(X_i | θ)}{∂/∂θ log f_X(X_j | θ)}]
      = ∑_{i ≠ j} E[∂/∂θ log f_X(X_i | θ)] E[∂/∂θ log f_X(X_j | θ)] = 0.

Therefore

    E[{∂/∂θ log f_X(X | θ)}²] = E[∑_{i=1}^n {∂/∂θ log f_X(X_i | θ)}²]
                              = ∑_{i=1}^n E[{∂/∂θ log f_X(X_i | θ)}²]
                              = n E[{∂/∂θ log f_X(X_1 | θ)}²].
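
The additivity of Fisher information under iid sampling is also easy to see numerically. A sketch under the same assumed Exponential(mean=θ) model:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 2.0, 25, 200_000
x = rng.exponential(scale=theta, size=(reps, n))

score = -1.0 / theta + x / theta ** 2        # per-observation scores
joint_score = score.sum(axis=1)              # score of the whole iid sample

info_one = np.mean(score[:, 0] ** 2)         # E[{d/dtheta log f(X_1|theta)}^2]
info_all = np.mean(joint_score ** 2)         # E[{d/dtheta log f(X|theta)}^2]

print(info_all, n * info_one)                # the joint information is n times the per-observation one
```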

17. Remark from Corollary 7.3.10 (slide 24/33)

In the iid case, the Cramer-Rao lower bound for an unbiased estimator of θ is

    Var[W(X)] ≥ 1 / ( n E[{∂/∂θ log f_X(X_1 | θ)}²] ),

because τ(θ) = θ and τ'(θ) = 1.
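
As a closing worked sketch (the Poisson model is an assumed example, not from the slides): for X_1, · · · , X_n iid Poisson(θ), the per-observation score is ∂/∂θ log f = x/θ − 1, so E[{score}²] = Var(X)/θ² = 1/θ, the bound is θ/n, and Var(X̄) = θ/n attains it, making X̄ the best unbiased estimator of θ. The check below verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 3.0, 30, 200_000
x = rng.poisson(lam=theta, size=(reps, n))

xbar = x.mean(axis=1)
info_one = np.mean((x[:, 0] / theta - 1.0) ** 2)   # E[score^2] for one Poisson observation, ≈ 1/theta

cr_bound = 1.0 / (n * info_one)                    # Cramer-Rao bound for unbiased estimators of theta
print(xbar.var(), cr_bound, theta / n)             # Var(X̄) attains the bound
```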
