moment generating functions: powerful math tricks for dealing with distributions (PowerPoint PPT Presentation)


SLIDE 1

moment generating functions

powerful math tricks for dealing with distributions. We won't do much with them, but they are mentioned/used in the book, so a very brief introduction: the kth moment of r.v. X is E[X^k]; the M.G.F. is M(t) = E[e^(tX)]

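To make the definition concrete, here is a small numerical sketch (illustrative, not from the slides): for X ~ Exponential(λ) the M.G.F. is M(t) = λ/(λ − t) for t < λ, and the k-th derivative of M at t = 0 recovers the k-th moment E[X^k]. Below, finite differences approximate the first two derivatives.

```python
# Illustrative (not from the slides): X ~ Exponential(lam) has
# M(t) = E[e^(tX)] = lam / (lam - t) for t < lam, and the k-th
# derivative of M at t = 0 equals the k-th moment E[X^k].
def mgf_exp(t, lam=2.0):
    return lam / (lam - t)

# Central finite differences for the 1st and 2nd derivatives at t = 0:
h = 1e-5
m1 = (mgf_exp(h) - mgf_exp(-h)) / (2 * h)                  # ≈ E[X]  = 1/lam  = 0.5
m2 = (mgf_exp(h) - 2 * mgf_exp(0.0) + mgf_exp(-h)) / h**2  # ≈ E[X²] = 2/lam² = 0.5
print(m1, m2)
```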

SLIDE 2

the law of large numbers & the CLT

SLIDE 3

weak law of large numbers

i.i.d. (independent, identically distributed) random vars X1, X2, X3, …
Xi has μ = E[Xi] < ∞ and σ² = Var[Xi]
Consider the empirical mean: Mn = (X1 + … + Xn)/n
The Weak Law of Large Numbers: for any ε > 0, P(|Mn − μ| ≥ ε) → 0 as n → ∞

SLIDE 4

weak law of large numbers

For the empirical mean Mn = (X1 + … + Xn)/n and any ε > 0, P(|Mn − μ| ≥ ε) → 0 as n → ∞
Proof (assume σ² < ∞): E[Mn] = μ and Var[Mn] = σ²/n, so by Chebyshev's inequality,
P(|Mn − μ| ≥ ε) ≤ Var[Mn]/ε² = σ²/(nε²) → 0 as n → ∞

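A small simulation (an illustration, not part of the slides) shows both the weak law and the Chebyshev bound from the proof, using fair-die rolls (μ = 3.5, σ² = 35/12): the estimated probability that the empirical mean strays ≥ ε from μ shrinks with n, and stays under σ²/(nε²).

```python
import random

random.seed(0)

mu, var, eps = 3.5, 35 / 12, 0.5   # fair die: mean 3.5, variance 35/12

def violation_freq(n, trials=2000):
    # Monte Carlo estimate of P(|M_n - mu| >= eps) for n fair-die rolls
    count = 0
    for _ in range(trials):
        m = sum(random.randint(1, 6) for _ in range(n)) / n
        if abs(m - mu) >= eps:
            count += 1
    return count / trials

for n in (10, 100, 1000):
    # estimated violation probability vs. the Chebyshev bound sigma^2/(n eps^2)
    print(n, violation_freq(n), min(var / (n * eps * eps), 1.0))
```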

SLIDE 5

strong law of large numbers

i.i.d. (independent, identically distributed) random vars X1, X2, X3, …
Xi has μ = E[Xi] < ∞
The Strong Law of Large Numbers: for Mn = (X1 + … + Xn)/n, P(lim n→∞ Mn = μ) = 1
Strong Law ⇒ Weak Law (but not vice versa)
The strong law implies that, with probability 1, for any ε > 0 there are only finitely many n for which |Mn − μ| ≥ ε, i.e., the weak-law condition is violated only finitely often.

SLIDE 6

diffusion

http://en.wikipedia.org/wiki/Law_of_large_numbers

SLIDE 7

the law of large numbers

  • Note: Dn = E[ |Σ1≤i≤n (Xi − μ)| ] grows with n, but Dn/n → 0
  • Justifies the “frequency” interpretation of probability
  • “Regression toward the mean”
  • Gambler’s fallacy: “I’m due for a win!”
  • “Result will usually be close to the mean”
  • Many web demos, e.g. http://stat-www.berkeley.edu/~stark/Java/Html/lln.htm

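The note about Dn can be checked by Monte Carlo (an illustrative sketch using fair-die rolls; by the CLT, Dn in fact grows roughly like σ√(2n/π), i.e. like √n, so Dn/n → 0):

```python
import random

random.seed(1)

def D(n, trials=2000):
    # Monte Carlo estimate of D_n = E[ |sum_{1<=i<=n} (X_i - mu)| ]
    # for n fair-die rolls (mu = 3.5)
    mu = 3.5
    total = 0.0
    for _ in range(trials):
        s = sum(random.randint(1, 6) for _ in range(n)) - mu * n
        total += abs(s)
    return total / trials

for n in (10, 100, 1000):
    d = D(n)
    print(n, round(d, 1), round(d / n, 4))  # D_n grows, D_n/n shrinks
```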

SLIDE 8

normal random variable

  • X is a normal random variable X ~ N(μ, σ²)

SLIDE 9

normal random variable

  • X is a normal random variable, X ~ N(μ, σ²), with density f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²))
  • Z ~ N(0,1) is the “standard (or unit) normal”
  • Use Φ(z) to denote its CDF, Φ(z) = P(Z ≤ z); no closed form ☹

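Although Φ has no closed form, standard libraries expose it through the error function, Φ(z) = (1 + erf(z/√2))/2. A quick Python aside (not from the slides):

```python
import math

# Standard normal CDF via the error function:
# Phi(z) = (1 + erf(z / sqrt(2))) / 2
def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(phi(0.0))    # 0.5 by symmetry
print(phi(1.96))   # ≈ 0.975, the familiar 95% two-sided cutoff
```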

SLIDE 10

the central limit theorem (CLT)

i.i.d. (independent, identically distributed) random vars X1, X2, X3, …
Xi has μ = E[Xi] and σ² = Var[Xi]
As n → ∞, (X1 + … + Xn − nμ)/(σ√n) → Z ~ N(0,1)
Restated: As n → ∞, the empirical mean Mn = (X1 + … + Xn)/n is approximately N(μ, σ²/n)

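A quick simulation of the standardized form (illustrative, with Uniform(0,1) summands as an assumed example): even for n = 30, the standardized sums track the N(0,1) CDF closely.

```python
import math
import random

random.seed(2)

n, trials = 30, 20000
mu, sigma = 0.5, math.sqrt(1 / 12)   # Uniform(0,1): mean 1/2, variance 1/12

# Standardized sums of n i.i.d. uniforms; by the CLT these are ≈ N(0,1)
zs = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * math.sqrt(n))
      for _ in range(trials)]

frac_le_0 = sum(z <= 0 for z in zs) / trials
frac_le_1 = sum(z <= 1 for z in zs) / trials
print(round(frac_le_0, 3), round(frac_le_1, 3))  # ≈ Φ(0) = 0.5 and Φ(1) ≈ 0.841
```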

SLIDE 11

CLT in the real world

CLT is the reason many things appear normally distributed
  • Many quantities = sums of (roughly) independent random vars
  • Exam scores: sums of individual problems
  • People’s heights: sum of many genetic & environmental factors
  • Measurements: sums of various small instrument errors
  • ...

SLIDE 12

in the real world…

Human height is approximately normal. Why might that be true? R.A. Fisher (1918) noted it would follow from the CLT if height were the sum of many independent random effects, e.g. many genetic factors (plus some environmental ones like diet). I.e., he suggested part of the mechanism by looking at the shape of the curve.

[Figure: histogram of male height in inches vs. frequency]

SLIDE 13

CLT convergence

SLIDE 14

Chernoff bound was CLT (for binomial) in disguise

  • Suppose X ~ Bin(n,p)
  • μ = E[X] = pn
  • Chernoff bound:

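The bound itself did not survive extraction from the slide, so as an illustration of the stated connection, here is one standard Chernoff-style (Hoeffding) bound, P(X − μ ≥ t) ≤ e^(−2t²/n), compared against the exact binomial upper tail. Note that for p = 1/2 the exponent 2t²/n equals t²/(2np(1−p)), the same decay rate the CLT's normal approximation predicts.

```python
import math

def binom_upper_tail(n, p, k):
    # Exact P(X >= k) for X ~ Bin(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p, t = 100, 0.5, 15             # deviation t above the mean mu = pn = 50
exact = binom_upper_tail(n, p, int(n * p + t))
bound = math.exp(-2 * t * t / n)   # Hoeffding: P(X - mu >= t) <= e^(-2 t^2 / n)
print(exact, bound)                # the exact tail sits below the bound
```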

SLIDE 15

rolling more dice

  • Roll 10 6-sided dice
  • X = total value of all 10 dice
  • Win if: X ≤ 25 or X ≥ 45
  • Roll…

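Since the slide leaves the computation to the live demo, here is a sketch of it: the exact win probability via dynamic programming over the distribution of the sum, next to the CLT approximation with continuity correction (X has μ = 10 · 3.5 = 35 and σ² = 10 · 35/12).

```python
import math

# Exact distribution of the sum of 10 fair dice, via dynamic programming
counts = {0: 1}
for _ in range(10):
    new = {}
    for s, c in counts.items():
        for face in range(1, 7):
            new[s + face] = new.get(s + face, 0) + c
    counts = new

total = 6 ** 10
p_win = sum(c for s, c in counts.items() if s <= 25 or s >= 45) / total

# CLT approximation with continuity correction: X ≈ N(mu, sigma^2)
mu, sigma = 35.0, math.sqrt(10 * 35 / 12)
phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
p_clt = phi((25.5 - mu) / sigma) + (1 - phi((44.5 - mu) / sigma))
print(round(p_win, 4), round(p_clt, 4))  # exact vs. CLT estimate
```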