

  1. Statistical Inference, Lecture 3: Common Families of Distributions. MING GAO, DaSE @ ECNU. For course-related communications: mgao@dase.ecnu.edu.cn. Mar. 24, 2020

  2. Outline
     1 Discrete Distributions
     2 Continuous Distributions
     3 Exponential Family
     4 Location and Scale Families
     5 Take-aways
     MING GAO (DaSE@ECNU) Statistical Inference Mar. 24, 2020 2 / 28

  3. Discrete Distributions
     Discrete distributions: A r.v. X is said to have a discrete distribution if the range of X is countable. In most situations, the r.v. has integer-valued outcomes.
     Discrete uniform distribution: A r.v. X has a discrete uniform(1, N) distribution if
     P(X = x | N) = 1/N, x = 1, 2, ..., N,
     where N is a specified integer. This distribution puts equal mass on each of the outcomes 1, 2, ..., N.
     Recall that sum_{i=1}^{k} i = k(k+1)/2 and sum_{i=1}^{k} i^2 = k(k+1)(2k+1)/6. Hence
     E(X) = sum_{x=1}^{N} x P(X = x | N) = (N+1)/2;
     Var(X) = E(X^2) - E(X)^2 = (N+1)(N-1)/12.
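The closed forms above are easy to sanity-check by direct enumeration. A minimal Python sketch (the value N = 10 is just an example, not from the slides):

```python
from fractions import Fraction

def discrete_uniform_moments(N):
    """Exact mean and variance of the discrete uniform(1, N) distribution."""
    p = Fraction(1, N)                           # P(X = x | N) = 1/N
    mean = sum(x * p for x in range(1, N + 1))
    var = sum(x * x * p for x in range(1, N + 1)) - mean ** 2
    return mean, var

N = 10
mean, var = discrete_uniform_moments(N)
assert mean == Fraction(N + 1, 2)                # E(X) = (N+1)/2
assert var == Fraction((N + 1) * (N - 1), 12)    # Var(X) = (N+1)(N-1)/12
```

Using exact rational arithmetic (`Fraction`) makes the equalities hold exactly rather than up to floating-point tolerance.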

  4. Discrete Distributions
     Bernoulli Trials
     Definition: Each performance of an experiment with two possible outcomes is called a Bernoulli trial.

  5. Discrete Distributions
     Bernoulli Trials Cont'd
     In general, a possible outcome of a Bernoulli trial is called a success or a failure. If p is the probability of a success and q is the probability of a failure, it follows that p + q = 1.
     Coding a success as 1 and a failure as 0, the moments of the indicator X are
     E(X) = 0 * (1 - p) + 1 * p = p and E(X^2) = 0^2 * (1 - p) + 1^2 * p = p;
     Var(X) = E(X^2) - E(X)^2 = p(1 - p).
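The Bernoulli moments above can be checked mechanically. A minimal sketch (the value of p is an arbitrary example):

```python
from fractions import Fraction

p = Fraction(3, 10)        # example success probability (not from the slides)
q = 1 - p

# X takes the value 1 with probability p and 0 with probability q
mean = 0 * q + 1 * p
second_moment = 0**2 * q + 1**2 * p
var = second_moment - mean**2

assert mean == p           # E(X) = p
assert var == p * q        # Var(X) = p(1 - p)
```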

  6. Discrete Distributions
     Binomial distribution
     Many problems can be solved by determining the probability of k successes when an experiment consists of n mutually independent Bernoulli trials. Let r.v. X_i be the i-th experimental outcome (i = 1, 2, ..., n), where X_i denotes whether the i-th trial succeeds:
     X_i = 1, if the trial is a success, with probability p;
     X_i = 0, otherwise, with probability 1 - p.

  7. Discrete Distributions
     Binomial distribution Cont'd
     Let r.v. X = sum_{i=1}^{n} X_i. We have
     P(X = x | n, p) = C(n, x) p^x (1 - p)^{n-x}, x = 0, 1, 2, ..., n.
     We call this function the binomial distribution, i.e.,
     B(k; n, p) = P(X = k) = C(n, k) p^k q^{n-k}.
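The binomial pmf can be computed directly from this formula; summing it over x = 0, ..., n recovers (p + q)^n = 1 by the binomial theorem. A minimal sketch (n and p are arbitrary example values):

```python
from math import comb

def binom_pmf(x, n, p):
    """B(x; n, p) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3                               # example parameters
pmf = [binom_pmf(x, n, p) for x in range(n + 1)]
assert abs(sum(pmf) - 1.0) < 1e-12           # probabilities sum to 1
```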

  8. Discrete Distributions
     Expected value of Binomial r.v.s
     Theorem: The expected number of successes when n mutually independent Bernoulli trials are performed, where p is the probability of success on each trial, is np.

  9. Discrete Distributions
     Expected value of Binomial r.v.s Cont'd
     Proof. Let X be the r.v. equal to the number of successes in n trials, so that P(X = k) = C(n, k) p^k q^{n-k}. Hence, we have
     E(X) = sum_{k=0}^{n} k P(X = k) = sum_{k=1}^{n} k C(n, k) p^k q^{n-k}
          = n sum_{k=1}^{n} C(n-1, k-1) p^k q^{n-k}         [since k C(n, k) = n C(n-1, k-1)]
          = np sum_{j=0}^{n-1} C(n-1, j) p^j q^{n-1-j}      [substituting j = k - 1]
          = np (p + q)^{n-1} = np.
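The key combinatorial identity in the proof, k C(n, k) = n C(n-1, k-1), can be verified exhaustively for a small n, and with it the conclusion E(X) = np. A minimal sketch (n and p are arbitrary example values):

```python
from math import comb

n = 9                                        # example value
# The identity used in the proof: k * C(n, k) = n * C(n-1, k-1)
for k in range(1, n + 1):
    assert k * comb(n, k) == n * comb(n - 1, k - 1)

# Putting it together: E(X) = np for the binomial distribution
p, q = 0.3, 0.7
mean = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
assert abs(mean - n * p) < 1e-12
```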

  10. Discrete Distributions
      Variance of Binomial r.v.s
      Question: Let r.v. X be the number of successes of n mutually independent Bernoulli trials, where p is the probability of success on each trial. What is the variance of X?

  11. Discrete Distributions
      Variance of Binomial r.v.s Cont'd
      Solution:
      E(X^2) = sum_{k=0}^{n} k^2 P(X = k) = sum_{k=1}^{n} k(k-1) P(X = k) + sum_{k=1}^{n} k P(X = k)
             = n(n-1) p^2 sum_{k=2}^{n} C(n-2, k-2) p^{k-2} q^{n-k} + np
             = n(n-1) p^2 sum_{j=0}^{n-2} C(n-2, j) p^j q^{n-2-j} + np
             = n(n-1) p^2 (p + q)^{n-2} + np = n(n-1) p^2 + np.
      V(X) = E(X^2) - (E(X))^2 = n(n-1) p^2 + np - (np)^2 = np(1 - p).
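Both intermediate results, E(X^2) = n(n-1)p^2 + np and Var(X) = np(1-p), can be checked numerically by summing over the pmf. A minimal sketch (n and p are arbitrary example values):

```python
from math import comb

def binom_pmf(x, n, p):
    """Binomial pmf: C(n, x) p^x (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 12, 0.4                               # example parameters
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
# E(X^2) = E[X(X-1)] + E[X]
second = sum(k * (k - 1) * binom_pmf(k, n, p) for k in range(n + 1)) + mean
var = second - mean**2
assert abs(second - (n * (n - 1) * p**2 + n * p)) < 1e-9   # E(X^2) = n(n-1)p^2 + np
assert abs(var - n * p * (1 - p)) < 1e-9                   # Var(X) = np(1-p)
```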

  12. Discrete Distributions
      Geometric distribution
      Let r.v. Y be the number of trials until the first success is obtained in independent Bernoulli trials.

  13. Discrete Distributions
      Geometric distribution Cont'd
      P(Y = k) = P(X_1 = 0 ∧ X_2 = 0 ∧ ... ∧ X_{k-1} = 0 ∧ X_k = 1)
               = prod_{i=1}^{k-1} P(X_i = 0) * P(X_k = 1) = q^{k-1} p.

  14. Discrete Distributions
      Geometric distribution Cont'd
      We call this function the Geometric distribution, i.e., G(k; p) = p q^{k-1}.
      The geometric distribution is sometimes used to model "lifetimes" or "time until failure" of components. For example, if the probability is 0.001 that a light bulb will fail on any given day, what is the probability that it will last at least 30 days?
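The light-bulb question has a closed form: reading "lasts at least 30 days" as "no failure in the first 30 days", the answer is P(Y > 30) = q^30 = 0.999^30 ≈ 0.970. A small Python check that the tail of the geometric pmf matches q^30:

```python
def geom_pmf(k, p):
    """G(k; p) = p * q^(k-1): probability the first success occurs on trial k."""
    return p * (1 - p)**(k - 1)

p = 0.001                                    # daily failure probability from the example
q = 1 - p

# P(Y > 30) = 1 - sum of the pmf over k = 1..30, which telescopes to q^30
tail = 1 - sum(geom_pmf(k, p) for k in range(1, 31))
assert abs(tail - q**30) < 1e-12
```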

  15. Discrete Distributions
      Expectation of Geometric r.v.s
      Theorem: When a r.v. X follows a Geometric distribution, E(X) = 1/p and Var(X) = q/p^2, where p is the probability of success on each trial.

  16. Discrete Distributions
      Expectation of Geometric r.v.s Cont'd
      Proof. We know that P(X = k) = q^{k-1} p. Hence, we have
      E(X) = sum_{k=1}^{∞} k q^{k-1} p = p sum_{m=1}^{∞} sum_{k=m}^{∞} q^{k-1}
           = p sum_{m=1}^{∞} q^{m-1} / (1 - q) = sum_{m=1}^{∞} q^{m-1}
           = 1 / (1 - q) = 1/p.
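The conclusion E(X) = 1/p can be checked numerically by truncating the infinite series at a point where the remaining tail is negligible. A minimal sketch (p and the truncation point K are arbitrary example values):

```python
def geom_pmf(k, p):
    """Geometric pmf: p * q^(k-1)."""
    return p * (1 - p)**(k - 1)

p = 0.2                                      # example value
K = 500                                      # truncation point; the tail beyond K is negligible
mean = sum(k * geom_pmf(k, p) for k in range(1, K + 1))
assert abs(mean - 1 / p) < 1e-9              # E(X) = 1/p
```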


  18. Discrete Distributions
      Variance of Geometric r.v.s
      E(X^2) = sum_{k=1}^{∞} k^2 P(X = k) = sum_{k=1}^{∞} [k(k-1) + k] P(X = k)
             = 2p sum_{k=2}^{∞} sum_{j=1}^{k-1} j q^{k-1} + 1/p     [since k(k-1) = 2 sum_{j=1}^{k-1} j]
             = 2p sum_{j=1}^{∞} sum_{k=j+1}^{∞} j q^{k-1} + 1/p     [interchanging the order of summation]
             = 2q/p^2 + 1/p = (2q + p)/p^2.
      V(X) = E(X^2) - (E(X))^2 = (2q + p)/p^2 - (1/p)^2 = (2q - (1 - p))/p^2 = q/p^2.
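Both E(X^2) = (2q + p)/p^2 and Var(X) = q/p^2 can be verified with the same truncated-series approach. A minimal sketch (p and the truncation point K are arbitrary example values):

```python
def geom_pmf(k, p):
    """Geometric pmf: p * q^(k-1)."""
    return p * (1 - p)**(k - 1)

p = 0.2                                      # example value
q = 1 - p
K = 500                                      # truncation point; the tail beyond K is negligible
mean = sum(k * geom_pmf(k, p) for k in range(1, K + 1))
second = sum(k**2 * geom_pmf(k, p) for k in range(1, K + 1))
assert abs(second - (2 * q + p) / p**2) < 1e-9    # E(X^2) = (2q + p)/p^2
assert abs(second - mean**2 - q / p**2) < 1e-9    # Var(X) = q/p^2
```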


  20. Discrete Distributions
      Hypergeometric distributions
      Suppose we have a large urn filled with N balls that are identical in every way except that M are red and N - M are green. Let a r.v., denoted as X, be the number of red balls in a sample of size K. The r.v. X has a hypergeometric distribution given by
      P(X = x | N, M, K) = C(M, x) C(N-M, K-x) / C(N, K), x = 0, 1, 2, ..., K.
      By the Vandermonde identity, sum_{x=0}^{K} C(M, x) C(N-M, K-x) = C(N, K), so
      sum_{x=0}^{K} P(X = x) = 1.
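The normalization can be confirmed exactly for a concrete urn. A minimal sketch (the urn parameters N, M, K are arbitrary example values):

```python
from math import comb
from fractions import Fraction

def hypergeom_pmf(x, N, M, K):
    """P(X = x | N, M, K) = C(M, x) C(N-M, K-x) / C(N, K), exactly."""
    return Fraction(comb(M, x) * comb(N - M, K - x), comb(N, K))

N, M, K = 20, 7, 5                           # example urn: 20 balls, 7 red, sample of 5
total = sum(hypergeom_pmf(x, N, M, K) for x in range(K + 1))
assert total == 1                            # exact, by the Vandermonde identity
```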

  21. Discrete Distributions
      Hypergeometric distributions Cont'd
      Expectation and variance:
      E(X) = sum_{x=0}^{K} x C(M, x) C(N-M, K-x) / C(N, K)
           = (KM/N) sum_{x=1}^{K} C(M-1, x-1) C(N-M, K-x) / C(N-1, K-1)
           = (KM/N) sum_{y=0}^{K-1} C(M-1, y) C((N-1)-(M-1), (K-1)-y) / C(N-1, K-1)   [y = x - 1]
           = KM/N.
      Var(X) = (KM/N) (N-M)(N-K) / (N(N-1)).
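Both moments can be checked exactly by enumeration over the same example urn. A minimal sketch (the urn parameters are arbitrary example values):

```python
from math import comb
from fractions import Fraction

def hypergeom_pmf(x, N, M, K):
    """Hypergeometric pmf, exactly, as a Fraction."""
    return Fraction(comb(M, x) * comb(N - M, K - x), comb(N, K))

N, M, K = 20, 7, 5                           # example urn
mean = sum(x * hypergeom_pmf(x, N, M, K) for x in range(K + 1))
second = sum(x**2 * hypergeom_pmf(x, N, M, K) for x in range(K + 1))
var = second - mean**2
assert mean == Fraction(K * M, N)            # E(X) = KM/N
assert var == Fraction(K * M, N) * Fraction((N - M) * (N - K), N * (N - 1))
```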

  22. Discrete Distributions
      Poisson distributions
      A r.v. X, taking values in the nonnegative integers, has a Poisson(λ) distribution if
      P(X = x | λ) = (λ^x / x!) e^{-λ}, x = 0, 1, 2, ....
      Note that sum_{k=0}^{∞} x^k / k! = e^x;
      E(X) = sum_{x=0}^{∞} x (λ^x / x!) e^{-λ} = λ e^{-λ} sum_{x=1}^{∞} λ^{x-1} / (x-1)! = λ;
      E(X^2) = sum_{x=1}^{∞} [x(x-1) + x] (λ^x / x!) e^{-λ} = λ^2 + λ;
      Var(X) = E(X^2) - E(X)^2 = λ.
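The Poisson moments can be verified the same way, summing the pmf far enough out that the remaining tail is negligible. A minimal sketch (λ and the truncation point K are arbitrary example values):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """P(X = x | lam) = lam^x / x! * e^(-lam)."""
    return lam**x / factorial(x) * exp(-lam)

lam = 3.5                                    # example rate
K = 100                                      # truncation point; the tail beyond K is negligible
mean = sum(x * poisson_pmf(x, lam) for x in range(K))
second = sum(x**2 * poisson_pmf(x, lam) for x in range(K))
assert abs(mean - lam) < 1e-9                # E(X) = λ
assert abs(second - (lam**2 + lam)) < 1e-9   # E(X^2) = λ^2 + λ
assert abs(second - mean**2 - lam) < 1e-9    # Var(X) = λ
```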
