
Foundations of Computer Science Lecture 21: Deviations from the Mean
Creator: Malik Magdon-Ismail



  1. Foundations of Computer Science Lecture 21: Deviations from the Mean
     How Good is the Expectation as a Summary of a Random Variable?
     Variance: Uniform; Bernoulli; Binomial; Waiting Times.
     Variance of a Sum.
     Law of Large Numbers: The 3-σ Rule.

  2. Last Time
     1 Expected value of a sum.
       ◮ Sum of dice ◮ Binomial ◮ Waiting time ◮ Coupon collecting.
     2 Build-up expectation.
     3 Expected value of a product.
     4 Sum of indicators.
       ◮ Random arrangement of hats on heads.

  3. Today: Deviations from the Mean
     How well does the expected value (mean) summarize a random variable?
     1 Variance.
     2 Variance of a sum.
     3 Law of large numbers.
     4 The 3-σ rule.

  4. Probability For Analyzing a Random Experiment
     Experiment (random) → Outcomes (complex) → Measurement X (random variable) → Summary E[X] (expectation). How good is E[X]?
     Experiment. Roll n dice and compute X, the average of the rolls.
        E[average] = E[(1/n) · sum] = (1/n) · E[sum] = (1/n) · n · 3.5 = 3.5.
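The lecture contains no code, but a minimal Python sketch of this experiment may help: roll n dice many times and check that the average hovers around 3.5. The function name and trial count below are arbitrary choices, not part of the lecture.

    import random

    def average_of_dice(n, trials=10000):
        """Estimate E[average of n dice] by simulation."""
        total = 0.0
        for _ in range(trials):
            rolls = [random.randint(1, 6) for _ in range(n)]
            total += sum(rolls) / n
        return total / trials

    for n in (4, 100):
        print(n, round(average_of_dice(n), 3))   # both estimates land near 3.5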

  5. Average of n Dice
     [Figure: the average roll of n dice plotted against n (10 to 10^5), settling at 3.5; and the probability distributions of the average of 4 dice and of 100 dice, the latter much more sharply peaked around 3.5.]

  6. Variance: Size of the Deviations From the Mean
     X = sum of 2 dice.  E[X] = 7 ← µ(X)
        X:     2     3     4     5     6     7     8     9    10    11    12
        ∆:    −5    −4    −3    −2    −1     0     1     2     3     4     5    ← X − µ
        P_X:  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
     Pop Quiz. What is E[∆]?
     Variance, σ², is the expected value of the squared deviations,
        σ² = E[∆²] = E[(X − µ)²] = E[(X − E[X])²]
        σ² = (1/36)·25 + (2/36)·16 + (3/36)·9 + (4/36)·4 + (5/36)·1 + (6/36)·0
             + (5/36)·1 + (4/36)·4 + (3/36)·9 + (2/36)·16 + (1/36)·25 = 5 5/6.
     Standard Deviation, σ, is the square root of the variance,
        σ = √E[∆²] = √E[(X − µ)²] = √E[(X − E[X])²]
        σ = √(5 5/6) ≈ 2.42,   so the sum of two dice rolls = 7 ± 2.42.
     Practice. Exercise 21.2.
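As a sanity check (not part of the slides), here is a short Python sketch that recomputes σ² = E[∆²] for the sum of two dice directly from the table above, using exact fractions.

    from fractions import Fraction

    # PMF of the sum of two dice: P[X = x] for x = 2..12.
    pmf = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}

    mu = sum(x * p for x, p in pmf.items())                # 7
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())   # E[(X - mu)^2] = 35/6
    print(mu, var, round(float(var) ** 0.5, 2))            # 7  35/6  2.42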

  7. Variance is a Measure of Risk
     Game 1 (X₁): win $2 with probability 2/3; lose $1 with probability 1/3.
     Game 2 (X₂): win $102 with probability 2/3; lose $201 with probability 1/3.
        E[X₁] = $1                                        E[X₂] = $1
        σ²(X₁) = (2/3)·(2 − 1)² + (1/3)·(−1 − 1)² = 2     σ²(X₂) = (2/3)·(102 − 1)² + (1/3)·(−201 − 1)² ≈ 2 × 10⁴
        X₁ = 1 ± 1.41                                     X₂ = 1 ± 141
     For a small expected profit you might risk a small loss (Game 1), not a huge loss.
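A simulation sketch of the two games, assuming Python: the payoffs and probabilities come from the slide, while the helper name and trial count are illustrative.

    import random, statistics

    def play(win, lose, p_win=2/3, trials=100000):
        """Sample many plays of a game paying `win` with probability p_win, else `lose`."""
        return [win if random.random() < p_win else lose for _ in range(trials)]

    game1 = play(2, -1)       # X1: win $2 or lose $1
    game2 = play(102, -201)   # X2: win $102 or lose $201

    for name, xs in (("Game 1", game1), ("Game 2", game2)):
        print(name, round(statistics.mean(xs), 2), round(statistics.stdev(xs), 2))
    # Both means come out near $1, but Game 2's standard deviation is about 100x larger.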

  8. A More Convenient Formula for Variance
        σ² = E[(X − µ)²]
           = E[X² − 2µX + µ²]       ← expand (X − µ)²
           = E[X²] − 2µE[X] + µ²    ← linearity of expectation
           = E[X²] − µ².            ← E[X] = µ
        σ² = E[X²] − µ² = E[X²] − E[X]².
     Variance: sum of two dice,
        E[X²] = Σ_{x=2}^{12} P_X(x) · x²
              = (1/36)·2² + (2/36)·3² + (3/36)·4² + (4/36)·5² + (5/36)·6² + (6/36)·7²
                + (5/36)·8² + (4/36)·9² + (3/36)·10² + (2/36)·11² + (1/36)·12² = 54 5/6.
        Since µ = 7,  σ² = 54 5/6 − 7² = 5 5/6.
     Theorem. Variance ≥ 0, which means E[X²] ≥ E[X]² for any random variable X.
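A quick check, in the same spirit as the earlier sketch, that the shortcut E[X²] − E[X]² agrees with E[(X − µ)²] for the two-dice sum.

    from fractions import Fraction

    pmf = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}   # sum of two dice
    mu  = sum(x * p for x, p in pmf.items())
    EX2 = sum(x * x * p for x, p in pmf.items())                    # E[X^2] = 329/6 = 54 5/6
    print(EX2 - mu ** 2, sum((x - mu) ** 2 * p for x, p in pmf.items()))  # both print 35/6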

  9. Variance of Uniform and Bernoulli
     Uniform. We saw earlier that E[X] = (n + 1)/2.
        E[X²] = (1/n)(1² + ··· + n²) = (1/n) · n(n + 1)(2n + 1)/6 = (n + 1)(2n + 1)/6
     so
        σ²(Uniform) = E[X²] − E[X]² = (n + 1)(2n + 1)/6 − ((n + 1)/2)² = (n² − 1)/12.
     Bernoulli. We saw earlier that E[X] = p.
        E[X²] = p · 1² + (1 − p) · 0² = p
     so
        σ²(Bernoulli) = E[X²] − E[X]² = p − p² = p(1 − p).
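A numerical sanity check of both closed forms; this is only a sketch, and the helper names and test values are mine rather than the lecture's.

    from fractions import Fraction

    def var_uniform(n):
        """sigma^2 of the uniform distribution on {1, ..., n}, computed from its PMF."""
        pmf = {x: Fraction(1, n) for x in range(1, n + 1)}
        mu  = sum(x * p for x, p in pmf.items())
        return sum(x * x * p for x, p in pmf.items()) - mu ** 2

    def var_bernoulli(p):
        """sigma^2 of a Bernoulli(p): E[X^2] - E[X]^2 with E[X^2] = p."""
        return p - p ** 2

    print(var_uniform(6), Fraction(6 ** 2 - 1, 12))                        # both 35/12
    print(var_bernoulli(Fraction(1, 3)), Fraction(1, 3) * Fraction(2, 3))  # both 2/9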

  10. Linearity of Variance?
      Let X be a Bernoulli and Y = a + X (a is a constant):
         Y = a + 1 with probability p;   Y = a with probability 1 − p.
         E[Y] = p·(a + 1) + (1 − p)·a = a + p = a + E[X]   (as expected)
      Deviations from the mean µ = a + p:
         ∆_Y = 1 − p with probability p;   ∆_Y = −p with probability 1 − p   (deviations independent of a!)
      Therefore σ²(Y) = σ²(X).
      Pop Quiz. Y = bX. Compute E[Y] and σ²(Y).
      Theorem. Let Y = a + bX. Then σ²(Y) = b²σ²(X).
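A small check of the theorem σ²(a + bX) = b²σ²(X) for a Bernoulli X; the values of a, b, p below are arbitrary illustrations, not taken from the slides.

    from fractions import Fraction

    def var(pmf):
        """Variance computed from a PMF given as {value: probability}."""
        mu = sum(x * q for x, q in pmf.items())
        return sum((x - mu) ** 2 * q for x, q in pmf.items())

    p, a, b = Fraction(1, 4), 10, 3           # illustrative values
    X = {1: p, 0: 1 - p}                      # Bernoulli(p)
    Y = {a + b * x: q for x, q in X.items()}  # Y = a + bX
    print(var(Y), b ** 2 * var(X))            # both print 27/16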

  11. Variance of a Sum
      X = X₁ + X₂.
         E[X]² = E[X₁ + X₂]² =(∗) (E[X₁] + E[X₂])² = E[X₁]² + E[X₂]² + 2E[X₁]E[X₂];
         E[X²] = E[(X₁ + X₂)²] = E[X₁² + X₂² + 2X₁X₂] =(∗) E[X₁²] + E[X₂²] + 2E[X₁X₂].
      (∗) is by linearity of expectation.
         σ²(X) = E[X²] − E[X]²
               = (E[X₁²] + E[X₂²] + 2E[X₁X₂]) − (E[X₁]² + E[X₂]² + 2E[X₁]E[X₂])
               = (E[X₁²] − E[X₁]²) + (E[X₂²] − E[X₂]²) + 2(E[X₁X₂] − E[X₁]E[X₂])
               = σ²(X₁) + σ²(X₂) + 2(E[X₁X₂] − E[X₁]E[X₂]),
      and the last term is 0 if X₁ and X₂ are independent.
      Variance of a Sum. For independent random variables, the variance of the sum is the sum of the variances.
      Practice. Compute the variance of 1 die roll. Compute the variance of the sum of n dice rolls.
      Example. The Variance of the Binomial (sum of independent Bernoullis).
         X = X₁ + ··· + Xₙ with σ²(Xᵢ) = p(1 − p), so
         σ²(Binomial) = σ²(X₁) + ··· + σ²(Xₙ) = p(1 − p) + ··· + p(1 − p) = np(1 − p).
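A simulation sketch of the Binomial example: the sample variance of a sum of n independent Bernoulli(p) trials should land near np(1 − p). The parameters chosen below are illustrative.

    import random, statistics

    def binomial_sample(n, p):
        """One Binomial(n, p) draw, built as a sum of n independent Bernoulli(p) trials."""
        return sum(1 for _ in range(n) if random.random() < p)

    n, p, trials = 50, 0.3, 20000
    samples = [binomial_sample(n, p) for _ in range(trials)]
    print(round(statistics.variance(samples), 2), n * p * (1 - p))   # both close to 10.5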

  12. 3-σ Rule: X = µ(X) ± σ(X)
      3-σ Rule. For any random variable X, the chances are at least (about) 90% that
         µ − 3σ < X < µ + 3σ,   i.e., X = µ ± 3σ.
      Lemma (Markov Inequality). For a positive random variable X, P[X ≥ α] ≤ E[X]/α.
      Proof. E[X] = Σ_{x≥0} x·P_X(x) ≥ Σ_{x≥α} x·P_X(x) ≥ Σ_{x≥α} α·P_X(x) = α·P[X ≥ α].
      Lemma (Chebyshev Inequality). P[|∆| ≥ tσ] ≤ 1/t².
      Proof. P[|∆| ≥ tσ] = P[∆² ≥ t²σ²] ≤(a) E[∆²]/(t²σ²) = σ²/(t²σ²) = 1/t².
      In (a) we used Markov's Inequality.
      To get the 3-σ rule, use Chebyshev's Inequality with t = 3.
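To see how conservative Chebyshev's bound is, here is a sketch (again Python, not from the lecture) that measures P[|X − µ| ≥ tσ] empirically for the sum of two dice at t = 2 and t = 3 and compares it with 1/t².

    import random

    def two_dice():
        return random.randint(1, 6) + random.randint(1, 6)

    mu, sigma, trials = 7, (35 / 6) ** 0.5, 200000   # mean and std from the earlier slides
    for t in (2, 3):
        tail = sum(1 for _ in range(trials) if abs(two_dice() - mu) >= t * sigma)
        print(t, round(tail / trials, 4), round(1 / t ** 2, 4))
    # t=2: empirical ~0.056 (only X in {2, 12}) vs Chebyshev's bound 0.25;
    # t=3: empirical 0 (a 3-sigma deviation is impossible here) vs bound 1/9.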

  13. Law of Large Numbers
      Expectation of the average of n dice:
         E[average] = E[(1/n) × sum] = (1/n) × E[sum] = (1/n) × n × 3.5 = 3.5
      Variance of the average of n dice:
         σ²(average) = σ²((1/n) × sum) = (1/n²) × σ²(sum) = (1/n²) × n × σ²(one die) = (1/n) × σ²(one die)
      [Figure: the 3-sigma range [µ − 3σ, µ + 3σ] for the average of n dice, shrinking toward 3.5 as n grows from 10 to 10^4.]
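A closing sketch that checks the 1/n shrinkage numerically: the sample variance of the average of n dice should track σ²(one die)/n = (35/12)/n. Trial counts and the values of n are arbitrary.

    import random, statistics

    def average_of_dice(n):
        return sum(random.randint(1, 6) for _ in range(n)) / n

    trials = 5000
    for n in (10, 100, 1000):
        samples = [average_of_dice(n) for _ in range(trials)]
        print(n, round(statistics.variance(samples), 4), round(35 / 12 / n, 4))
    # The empirical variance tracks (35/12)/n, so the 3-sigma range around 3.5
    # shrinks like 1/sqrt(n): the law of large numbers in action.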
