CS70 Lecture 28



  1. CS70: Lecture 28. Variance; Inequalities; WLLN
     1. Review: Independence
     2. Variance
     3. Inequalities: Markov, Chebyshev
     4. Weak Law of Large Numbers

  2. Review: Independence
     Definition: X and Y are independent
       ⇔ Pr[X = x, Y = y] = Pr[X = x] Pr[Y = y], ∀ x, y
       ⇔ Pr[X ∈ A, Y ∈ B] = Pr[X ∈ A] Pr[Y ∈ B], ∀ A, B.
     Theorem: X and Y are independent
       ⇒ f(X), g(Y) are independent for all f(·), g(·)
       ⇒ E[XY] = E[X] E[Y].

  3. Variance
     The variance measures the deviation from the mean value.
     Definition: The variance of X is σ²(X) := var[X] = E[(X − E[X])²].
     σ(X) is called the standard deviation of X.

  4. Variance and Standard Deviation
     Fact: var[X] = E[X²] − E[X]².
     Indeed:
       var(X) = E[(X − E[X])²]
              = E[X² − 2X E[X] + E[X]²]
              = E[X²] − 2 E[X] E[X] + E[X]², by linearity
              = E[X²] − E[X]².
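     A quick numerical sanity check of this identity, as a minimal Python sketch (the die-roll distribution and sample size are illustrative choices, not from the slides): it estimates both E[(X − E[X])²] and E[X²] − E[X]² from the same samples.

```python
import random

# Estimate both sides of var[X] = E[X^2] - E[X]^2 for a fair die roll.
samples = [random.randint(1, 6) for _ in range(100_000)]
mean = sum(samples) / len(samples)
mean_sq = sum(x * x for x in samples) / len(samples)

var_definition = sum((x - mean) ** 2 for x in samples) / len(samples)  # E[(X - E[X])^2]
var_shortcut = mean_sq - mean ** 2                                     # E[X^2] - E[X]^2

print(var_definition, var_shortcut)  # both approach 35/12 ≈ 2.9167
```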

  5. A simple example
     This example illustrates the term 'standard deviation.'
     Consider the random variable X such that
       X = µ − σ, w.p. 1/2
       X = µ + σ, w.p. 1/2.
     Then, E[X] = µ and (X − E[X])² = σ². Hence, var(X) = σ² and σ(X) = σ.

  6. Example
     Consider X with
       X = −1, w.p. 0.99
       X = 99, w.p. 0.01.
     Then
       E[X] = −1 × 0.99 + 99 × 0.01 = 0.
       E[X²] = 1 × 0.99 + (99)² × 0.01 ≈ 100.
     Var(X) ≈ 100 ⇒ σ(X) ≈ 10.
     Also, E[|X|] = 1 × 0.99 + 99 × 0.01 = 1.98.
     Thus, σ(X) ≠ E[|X − E[X]|]!
     Exercise: How big can you make σ(X) / E[|X − E[X]|]?

  7. Uniform
     Assume that Pr[X = i] = 1/n for i ∈ {1, ..., n}. Then
       E[X] = ∑_{i=1}^{n} i × Pr[X = i] = (1/n) ∑_{i=1}^{n} i = (1/n) · n(n + 1)/2 = (n + 1)/2.
     Also,
       E[X²] = ∑_{i=1}^{n} i² Pr[X = i] = (1/n) ∑_{i=1}^{n} i² = (1 + 3n + 2n²)/6, as you can verify.
     This gives
       var(X) = (1 + 3n + 2n²)/6 − (n + 1)²/4 = (n² − 1)/12.
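     The closed form var(X) = (n² − 1)/12 can be checked exactly for small n. A small Python sketch (the values of n are arbitrary test cases):

```python
from fractions import Fraction

def uniform_variance(n):
    """Exact var(X) for X uniform on {1, ..., n}, computed from the definition."""
    mean = Fraction(sum(range(1, n + 1)), n)                    # (n + 1) / 2
    mean_sq = Fraction(sum(i * i for i in range(1, n + 1)), n)  # (1 + 3n + 2n^2) / 6
    return mean_sq - mean ** 2

for n in (2, 6, 10):
    assert uniform_variance(n) == Fraction(n * n - 1, 12)
    print(n, uniform_variance(n))
```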

  8. Variance of geometric distribution
     X is a geometrically distributed RV with parameter p.
     Thus, Pr[X = n] = (1 − p)^(n−1) p for n ≥ 1. Recall E[X] = 1/p.
       E[X²] = p + 4p(1 − p) + 9p(1 − p)² + ...
       (1 − p) E[X²] = p(1 − p) + 4p(1 − p)² + 9p(1 − p)³ + ...
     Subtracting the second line from the first:
       p E[X²] = p + 3p(1 − p) + 5p(1 − p)² + ...
               = 2(p + 2p(1 − p) + 3p(1 − p)² + ...) − (p + p(1 − p) + p(1 − p)² + ...)
               = 2 E[X] − 1
               = 2(1/p) − 1 = (2 − p)/p.
     ⇒ E[X²] = (2 − p)/p², and
       var[X] = E[X²] − E[X]² = (2 − p)/p² − 1/p² = (1 − p)/p².
       σ(X) = √(1 − p)/p ≈ E[X] when p is small(ish).
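     A Monte Carlo sanity check of E[X] = 1/p and var[X] = (1 − p)/p² for the geometric distribution; the parameter p and the sample count below are illustrative assumptions.

```python
import random

def sample_geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

p = 0.2
xs = [sample_geometric(p) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, 1 / p)            # ≈ 5
print(var, (1 - p) / p ** 2)  # ≈ 20
```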

  9. Fixed points
     Number of fixed points in a random permutation of n items.
     "Number of students who get their own homework back."
     X = X_1 + X_2 + ··· + X_n, where X_i is the indicator variable for the i-th
     student getting their homework back.
       E(X_i²) = 1 × Pr[X_i = 1] + 0 × Pr[X_i = 0] = 1/n.
       E(X_i X_j) = 1 × Pr[X_i = 1 ∩ X_j = 1] + 0 × Pr["anything else"]
                  = 1 × (n − 2)!/n! = 1/(n(n − 1)).
     Hence,
       E(X²) = ∑_i E(X_i²) + ∑_{i≠j} E(X_i X_j)
             = n × (1/n) + n(n − 1) × 1/(n(n − 1)) = 1 + 1 = 2.
       Var(X) = E(X²) − (E(X))² = 2 − 1 = 1.
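     The surprising fact that E(X) = 1, E(X²) = 2 and Var(X) = 1 for every n can be checked by simulating random permutations. A sketch (n and the trial count are arbitrary choices):

```python
import random

def fixed_points(n):
    """Count i with perm[i] == i in a uniformly random permutation of {0, ..., n-1}."""
    perm = list(range(n))
    random.shuffle(perm)
    return sum(1 for i, v in enumerate(perm) if i == v)

n, trials = 10, 100_000
xs = [fixed_points(n) for _ in range(trials)]
mean = sum(xs) / trials
second_moment = sum(x * x for x in xs) / trials
print(mean)                      # ≈ 1  (E[X] = 1)
print(second_moment)             # ≈ 2  (E[X^2] = 2)
print(second_moment - mean ** 2) # ≈ 1  (Var(X) = 1)
```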

  10. Variance: binomial.
      E[X²] = ∑_{i=0}^{n} i² C(n, i) p^i (1 − p)^(n−i) = ...
      Really???!!##... Too hard!
      Ok.. fine. Let's do something else. Maybe not much easier... but there is a payoff.

  11. Properties of variance.
      1. Var(cX) = c² Var(X), where c is a constant. Scales by c².
      2. Var(X + c) = Var(X), where c is a constant. Shifts the center.
      Proof:
        Var(cX) = E((cX)²) − (E(cX))²
                = c² E(X²) − c² (E(X))²
                = c² (E(X²) − E(X)²) = c² Var(X).
        Var(X + c) = E((X + c − E(X + c))²)
                   = E((X + c − E(X) − c)²)
                   = E((X − E(X))²) = Var(X).

  12. Variance of sum of two independent random variables
      Theorem: If X and Y are independent, then Var(X + Y) = Var(X) + Var(Y).
      Proof: Since shifting the random variables does not change their variance,
      let us subtract their means. That is, we assume that E(X) = 0 and E(Y) = 0.
      Then, by independence, E(XY) = E(X) E(Y) = 0. Hence,
        var(X + Y) = E((X + Y)²) = E(X² + 2XY + Y²)
                   = E(X²) + 2 E(XY) + E(Y²)
                   = E(X²) + E(Y²)
                   = var(X) + var(Y).

  13. Variance of sum of independent random variables
      Theorem: If X, Y, Z, ... are pairwise independent, then
        var(X + Y + Z + ···) = var(X) + var(Y) + var(Z) + ···.
      Proof: Since shifting the random variables does not change their variance,
      let us subtract their means. That is, we assume that E[X] = E[Y] = ··· = 0.
      Then, by pairwise independence, E[XY] = E[X] E[Y] = 0.
      Also, E[XZ] = E[YZ] = ··· = 0. Hence,
        var(X + Y + Z + ···) = E((X + Y + Z + ···)²)
                             = E(X² + Y² + Z² + ··· + 2XY + 2XZ + 2YZ + ···)
                             = E(X²) + E(Y²) + E(Z²) + ··· + 0 + ··· + 0
                             = var(X) + var(Y) + var(Z) + ···.

  14. Variance of Binomial Distribution.
      Flip a coin with heads probability p. X = how many heads?
        X_i = 1 if the i-th flip is heads, 0 otherwise.
        E(X_i²) = 1² × p + 0² × (1 − p) = p.
        Var(X_i) = E(X_i²) − (E(X_i))² = p − p² = p(1 − p).
      p = 0 ⇒ Var(X_i) = 0; p = 1 ⇒ Var(X_i) = 0.
      X = X_1 + X_2 + ··· + X_n. X_i and X_j are independent:
        Pr[X_i = 1 | X_j = 1] = Pr[X_i = 1].
      Var(X) = Var(X_1 + ··· + X_n) = np(1 − p).
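      A direct check that the "too hard" sum from slide 10 really does equal np(1 − p). This sketch evaluates E[X²] − E[X]² from the binomial pmf for a couple of assumed (n, p) pairs:

```python
from math import comb

def binomial_variance(n, p):
    """var(X) for X ~ Binomial(n, p), computed directly from the pmf."""
    mean = sum(i * comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1))
    mean_sq = sum(i * i * comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1))
    return mean_sq - mean ** 2

for n, p in [(10, 0.5), (20, 0.3)]:
    print(binomial_variance(n, p), n * p * (1 - p))  # the two columns agree
```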

  15. Inequalities: An Overview
      [Figure: a distribution with mean µ, illustrating the Markov bound on
      Pr[X > a] and the Chebyshev bound on Pr[|X − µ| > ε].]

  16. Andrey Markov
      Andrey Markov is best known for his work on stochastic processes. The primary
      subjects of his research later became known as Markov chains and Markov
      processes. Pafnuty Chebyshev was one of his teachers. Markov was an atheist.
      In 1912 he protested Leo Tolstoy's excommunication from the Russian Orthodox
      Church by requesting his own excommunication. The Church complied with his request.

  17. Markov's inequality
      The inequality is named after Andrey Markov, although it appeared earlier in
      the work of Pafnuty Chebyshev. It should be (and is sometimes) called
      Chebyshev's first inequality.
      Theorem (Markov's Inequality): Assume f: ℜ → [0, ∞) is nondecreasing. Then,
        Pr[X ≥ a] ≤ E[f(X)] / f(a), for all a such that f(a) > 0.
      Proof: Observe that 1{X ≥ a} ≤ f(X)/f(a).
      Indeed, if X < a, the inequality reads 0 ≤ f(X)/f(a), which holds since f(·) ≥ 0.
      Also, if X ≥ a, it reads 1 ≤ f(X)/f(a), which holds since f(·) is nondecreasing
      (so f(X) ≥ f(a)).
      Taking the expectation yields the inequality, because expectation is monotone.

  18. A picture

  19. Markov Inequality Example: G(p)
      Let X = G(p). Recall that E[X] = 1/p and E[X²] = (2 − p)/p².
      Choosing f(x) = x, we get Pr[X ≥ a] ≤ E[X]/a = 1/(ap).
      Choosing f(x) = x², we get Pr[X ≥ a] ≤ E[X²]/a² = (2 − p)/(p² a²).
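      For a concrete feel, one can compare these two bounds with the exact geometric tail Pr[X ≥ a] = (1 − p)^(a−1). The values of p and a below are illustrative assumptions:

```python
# Exact tail of X ~ G(p) versus the two Markov bounds from this slide.
p, a = 0.2, 15
exact = (1 - p) ** (a - 1)                # Pr[X >= a] for a geometric RV
bound_f_x = 1 / (a * p)                   # from f(x) = x
bound_f_x2 = (2 - p) / (p ** 2 * a ** 2)  # from f(x) = x^2
print(exact, bound_f_x, bound_f_x2)       # exact ≤ each bound
```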

  20. Markov Inequality Example: P(λ)
      Let X = P(λ). Recall that E[X] = λ and E[X²] = λ + λ².
      Choosing f(x) = x, we get Pr[X ≥ a] ≤ E[X]/a = λ/a.
      Choosing f(x) = x², we get Pr[X ≥ a] ≤ E[X²]/a² = (λ + λ²)/a².

  21. Chebyshev's Inequality
      This is Pafnuty's inequality:
      Theorem: Pr[|X − E[X]| > a] ≤ var[X]/a², for all a > 0.
      Proof: Let Y = |X − E[X]| and f(y) = y². Then,
        Pr[Y ≥ a] ≤ E[f(Y)]/f(a) = var[X]/a².
      This result confirms that the variance measures the "deviations from the mean."
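      A small empirical check of Chebyshev's inequality, using a fair die roll as an example distribution (a choice made here for illustration, not taken from the slides):

```python
import random

# For a fair die roll X: E[X] = 3.5 and var[X] = 35/12. Bound Pr[|X - 3.5| > a].
trials, a = 200_000, 2
xs = [random.randint(1, 6) for _ in range(trials)]
empirical = sum(1 for x in xs if abs(x - 3.5) > a) / trials
chebyshev_bound = (35 / 12) / a ** 2
print(empirical, chebyshev_bound)  # empirical ≈ 1/3 (only values 1 and 6), bound ≈ 0.73
```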

  22. Chebyshev and Poisson
      Let X = P(λ). Then, E[X] = λ and var[X] = λ. Thus,
        Pr[|X − λ| ≥ n] ≤ var[X]/n² = λ/n².

  23. Chebyshev and Poisson (continued)
      Let X = P(λ). Then, E[X] = λ and var[X] = λ.
      By Markov's inequality, Pr[X ≥ a] ≤ E[X²]/a² = (λ + λ²)/a².
      Also, if a > λ, then X ≥ a ⇒ X − λ ≥ a − λ > 0 ⇒ |X − λ| ≥ a − λ.
      Hence, for a > λ,
        Pr[X ≥ a] ≤ Pr[|X − λ| ≥ a − λ] ≤ λ/(a − λ)².
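      To see how the two bounds compare with the truth, the sketch below evaluates the exact Poisson tail against the Markov (f(x) = x²) bound and the shifted Chebyshev bound; λ and a are assumed example values.

```python
from math import exp, factorial

lam, a = 5, 10
# Exact tail Pr[X >= a] for X ~ Poisson(lam).
exact = 1 - sum(exp(-lam) * lam ** k / factorial(k) for k in range(a))
markov_bound = (lam + lam ** 2) / a ** 2       # using f(x) = x^2
chebyshev_bound = lam / (a - lam) ** 2         # valid since a > lam
print(exact, markov_bound, chebyshev_bound)    # exact ≤ both bounds
```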

  24. Fraction of H's
      Here is a classical application of Chebyshev's inequality.
      How likely is it that the fraction of H's differs from 50%?
      Let X_m = 1 if the m-th flip of a fair coin is H and X_m = 0 otherwise. Define
        Y_n = (X_1 + ··· + X_n)/n, for n ≥ 1.
      We want to estimate
        Pr[|Y_n − 0.5| ≥ 0.1] = Pr[Y_n ≤ 0.4 or Y_n ≥ 0.6].
      By Chebyshev,
        Pr[|Y_n − 0.5| ≥ 0.1] ≤ var[Y_n]/(0.1)² = 100 var[Y_n].
      Now,
        var[Y_n] = (1/n²)(var[X_1] + ··· + var[X_n]) = (1/n) var[X_1] ≤ 1/(4n),
      since Var(X_i) = p(1 − p) ≤ (0.5)(0.5) = 1/4.

  25. Fraction of H's
      Y_n = (X_1 + ··· + X_n)/n, for n ≥ 1.
        Pr[|Y_n − 0.5| ≥ 0.1] ≤ 25/n.
      For n = 1,000, we find that this probability is less than 2.5%.
      As n → ∞, this probability goes to zero.
      In fact, for any ε > 0, as n → ∞, the probability that the fraction of H's is
      within ε of 50% approaches 1:
        Pr[|Y_n − 0.5| ≤ ε] → 1.
      This is an example of the Law of Large Numbers. We look at a general case next.
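      A simulation of this convergence (the sample sizes and trial counts below are arbitrary choices): it estimates Pr[|Y_n − 0.5| ≥ 0.1] for growing n and compares it with the Chebyshev bound 25/n.

```python
import random

def fraction_of_heads(n):
    """Fraction of heads in n fair coin flips."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

for n in (100, 1_000, 10_000):
    trials = 1_000
    freq = sum(1 for _ in range(trials)
               if abs(fraction_of_heads(n) - 0.5) >= 0.1) / trials
    print(n, freq, 25 / n)  # the empirical frequency falls well below the bound
```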
