

  1. Chapter 5: Limit Theorems. Peng-Hua Wang, Graduate Institute of Communication Engineering, National Taipei University

  2. Chapter Contents: 5.1 Some Useful Inequalities; 5.2 The Weak Law of Large Numbers; 5.3 Convergence in Probability; 5.4 The Central Limit Theorem; 5.5 The Strong Law of Large Numbers

  3. 5.1 Some Useful Inequalities

  4. Markov Inequality. If a random variable X ≥ 0, then P(X ≥ a) ≤ E[X]/a for all a > 0. Proof. E[X] = ∫₀^∞ x f_X(x) dx ≥ ∫_a^∞ x f_X(x) dx ≥ ∫_a^∞ a f_X(x) dx = a ∫_a^∞ f_X(x) dx = a P(X ≥ a). ∎

  5. Markov Inequality ■ Relates a probability to the mean: use the mean to bound a probability. ■ Example 5.1. X is uniformly distributed on (0, 4), so E[X] = 2. ◆ P(X > 2) ≤ 2/2 = 1, while the exact value is P(X > 2) = 1/2. ◆ P(X > 3) ≤ 2/3, while P(X > 3) = 1/4. ◆ The bounds can be very loose.
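As a quick numerical check (an illustration added here, not part of the original slides), a minimal Python sketch compares the Markov bound E[X]/a with the empirical probability for X uniform on (0, 4):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 4, size=1_000_000)  # X ~ Uniform(0, 4), E[X] = 2

    for a in (2, 3):
        bound = x.mean() / a               # Markov: P(X >= a) <= E[X]/a
        empirical = (x >= a).mean()
        print(f"a={a}: bound {bound:.3f}, empirical P(X >= a) {empirical:.3f}")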

  6. Chebyshev Inequality. If a random variable X has mean µ and variance σ², then P(|X − µ| ≥ c) ≤ σ²/c² for all c > 0. Proof. P(|X − µ| ≥ c) = P((X − µ)² ≥ c²) ≤ E[(X − µ)²]/c² (by the Markov inequality) = σ²/c². ∎ Remark. Setting c = kσ gives P(|X − µ| ≥ kσ) ≤ 1/k².

  7. Chebyshev Inequality ■ Relates a probability to the mean and variance: use the mean and variance to bound a probability. ■ Example 5.2. X is uniformly distributed on (0, 4), with E[X] = 2 and Var(X) = 16/12. ◆ P(|X − 2| > 1) ≤ 16/12, a bound greater than 1 and hence vacuous. ◆ The bounds can be very loose.
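Again as an added numerical illustration (not from the slides), a short Python sketch compares the Chebyshev bound with the empirical probability for the same X:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 4, size=1_000_000)   # E[X] = 2, Var(X) = 16/12

    c = 1
    bound = x.var() / c**2                  # Chebyshev: P(|X - 2| >= c) <= Var(X)/c^2
    empirical = (np.abs(x - 2) > c).mean()  # exact value is 1/2 for Uniform(0, 4)
    print(f"bound {bound:.3f}, empirical {empirical:.3f}")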

  8. 5.2 The Weak Law of Large Numbers

  9. Inequality for Sample Mean ■ Let X_1, X_2, ... be a sequence of independent, identically distributed random variables with mean µ and variance σ². Let M_n be the sample mean: M_n = (X_1 + X_2 + ··· + X_n)/n. ■ We have E[M_n] = µ and Var(M_n) = σ²/n. ■ Applying the Chebyshev inequality, for any ε > 0 we have P(|M_n − µ| ≥ ε) ≤ σ²/(nε²). ■ The right-hand side of this inequality goes to zero as n increases.
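For completeness (this intermediate step is not spelled out on the slide), the variance formula follows from independence:

    Var(M_n) = (1/n²) Var(X_1 + X_2 + ··· + X_n) = (1/n²) (Var(X_1) + ··· + Var(X_n)) = nσ²/n² = σ²/n.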

  10. The Weak Law of Large Numbers ■ Therefore, the sample mean (a random variable) converges to the true mean in probability as n approaches infinity. ■ The Weak Law of Large Numbers: P(|(X_1 + X_2 + ··· + X_n)/n − µ| ≥ ε) → 0 as n → ∞, for every ε > 0.
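A small simulation (an added illustration, not from the slides) shows this probability shrinking as n grows, alongside the Chebyshev bound σ²/(nε²) from the previous slide, again with X uniform on (0, 4):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma2, eps = 2.0, 16/12, 0.1
    for n in (10, 100, 1000):
        samples = rng.uniform(0, 4, size=(20_000, n))
        m_n = samples.mean(axis=1)            # 20,000 realizations of M_n
        p = (np.abs(m_n - mu) >= eps).mean()
        print(f"n={n}: P(|M_n - mu| >= eps) ~ {p:.3f}, bound {sigma2 / (n * eps**2):.3f}")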

  11. 5.4 The Central Limit Theorem

  12. The Central Limit Theorem ■ Let X_1, X_2, ... be a sequence of independent, identically distributed random variables with mean µ and variance σ². Define Z_n as Z_n = (X_1 + X_2 + ··· + X_n − nµ)/(σ√n). ■ It can be shown that the CDF of Z_n converges to the CDF of the standard normal distribution: lim_{n→∞} P(Z_n ≤ z) = Φ(z) = (1/√(2π)) ∫_{−∞}^z e^{−x²/2} dx.
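As an added illustration (not from the slides), a Python sketch compares the empirical CDF of Z_n with Φ(z); the choice of Uniform(0, 4) summands and n = 30 is arbitrary:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu, sigma = 2.0, (16/12)**0.5            # X ~ Uniform(0, 4)
    n = 30
    s = rng.uniform(0, 4, size=(100_000, n)).sum(axis=1)
    z_n = (s - n*mu) / (sigma * n**0.5)      # standardized sums

    for z in (-1.0, 0.0, 1.0):
        print(f"z={z:+.1f}: P(Z_n <= z) ~ {(z_n <= z).mean():.4f}, Phi(z) = {norm.cdf(z):.4f}")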

  13. Example 5.12. Toss a fair coin 36 times. Estimate the probability that heads occurs at most 21 times. Let the X_i be i.i.d. Bernoulli random variables with parameter 0.5. We have µ = 0.5 and σ² = 0.5(1 − 0.5) = 0.25. The probability of interest is P(X_1 + X_2 + ··· + X_36 ≤ 21) = P((X_1 + X_2 + ··· + X_36 − 36µ)/(√36 σ) ≤ (21 − 36µ)/(√36 σ)) ≈ Φ(1) ≈ 0.8413. The exact value is Σ_{n=0}^{21} C(36, n)(0.5)^36 = 0.8785.
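Both numbers are easy to reproduce (an added check, not from the slides):

    from math import comb
    from scipy.stats import norm

    exact = sum(comb(36, k) for k in range(22)) * 0.5**36       # sum over k = 0..21
    approx = norm.cdf((21 - 36*0.5) / (0.5 * 36**0.5))          # Phi(1)
    print(f"exact {exact:.4f}, CLT approximation {approx:.4f}")  # 0.8785 vs 0.8413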

  14. Binomial Distribution and CLT ■ Let the X_i be i.i.d. Bernoulli random variables with parameter p, and let S_n = X_1 + X_2 + ··· + X_n. Then S_n is a binomial random variable with parameters n and p. ■ The probability P(k ≤ S_n ≤ ℓ) is P(k ≤ S_n ≤ ℓ) = Σ_{i=k}^{ℓ} C(n, i) p^i (1 − p)^{n−i}. ■ By the CLT, P(k ≤ S_n ≤ ℓ) = P((k − np)/√(np(1 − p)) ≤ (S_n − np)/√(np(1 − p)) ≤ (ℓ − np)/√(np(1 − p))) ≈ Φ((ℓ − np)/√(np(1 − p))) − Φ((k − np)/√(np(1 − p))).

  15. Binomial Distribution and CLT ■ A more accurate approximation can be obtained by replacing k and ℓ with k − 1/2 and ℓ + 1/2. This is called the De Moivre-Laplace approximation: P(k ≤ S_n ≤ ℓ) ≈ Φ((ℓ + 1/2 − np)/√(np(1 − p))) − Φ((k − 1/2 − np)/√(np(1 − p))). ■ In Example 5.12, P(X_1 + X_2 + ··· + X_36 ≤ 21) = P((X_1 + X_2 + ··· + X_36 − 36µ)/(√36 σ) ≤ (21 + 1/2 − 36µ)/(√36 σ)) ≈ Φ((21.5 − 18)/3) ≈ 0.879.
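The improvement is visible numerically (an added check, not from the slides):

    from math import comb
    from scipy.stats import norm

    n, p = 36, 0.5
    exact = sum(comb(n, k) for k in range(22)) * p**n
    corrected = norm.cdf((21 + 0.5 - n*p) / (n*p*(1 - p))**0.5)   # continuity correction
    print(f"exact {exact:.4f}, De Moivre-Laplace {corrected:.4f}")  # 0.8785 vs 0.8783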
