  1. Convergence of Random Processes. DS GA 1002: Probability and Statistics for Data Science, http://www.cims.nyu.edu/~cfgranda/pages/DSGA1002_fall17 (Carlos Fernandez-Granda)

  2. Aim: define convergence for random processes and describe two convergence phenomena, the law of large numbers and the central limit theorem.

  3. Types of convergence Law of Large Numbers Central Limit Theorem Monte Carlo simulation

  4. Convergence of deterministic sequences. A deterministic sequence of real numbers $x_1, x_2, \ldots$ converges to $x \in \mathbb{R}$, written $\lim_{i \to \infty} x_i = x$, if $x_i$ is arbitrarily close to $x$ as $i$ grows: for any $\epsilon > 0$ there is an $i_0$ such that $|x_i - x| < \epsilon$ for all $i > i_0$. Problem: random sequences do not have fixed values.

  5. Convergence with probability one. Consider a discrete random process $\tilde{X}$ and a random variable $X$ defined on the same probability space. If we fix the outcome $\omega$, $\tilde{X}(i, \omega)$ is a deterministic sequence and $X(\omega)$ is a constant, so we can determine whether $\lim_{i \to \infty} \tilde{X}(i, \omega) = X(\omega)$ for that particular $\omega$.

  6. Convergence with probability one. $\tilde{X}$ converges with probability one to $X$ if $P\left(\{\omega \mid \omega \in \Omega,\ \lim_{i \to \infty} \tilde{X}(\omega, i) = X(\omega)\}\right) = 1$, i.e., deterministic convergence occurs with probability one.

  7. Puddle. The initial amount of water is uniform between 0 and 1 gallon. After a time interval $i$ there is $i$ times less water: $\tilde{D}(\omega, i) := \omega / i$, $i = 1, 2, \ldots$

  8. Puddle. [Figure: realizations $\tilde{D}(\omega, i)$ for $\omega = 0.31$, $0.52$, and $0.89$, plotted against $i = 1, \ldots, 10$.]

  9. Puddle. If we fix $\omega \in (0, 1)$, then $\lim_{i \to \infty} \tilde{D}(\omega, i) = \lim_{i \to \infty} \omega / i = 0$, so $\tilde{D}$ converges to zero with probability one.

  10. Puddle. [Figure: realizations $\tilde{D}(\omega, i)$ plotted against $i = 1, \ldots, 50$; all of them decay to zero.]
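
The puddle example can be checked with a quick simulation; the following sketch (not part of the slides) draws a few outcomes $\omega$ and evaluates $\tilde{D}(\omega, i) = \omega / i$:

```python
# A minimal sketch: simulate the puddle process D(omega, i) = omega / i
# and check that every realization tends to zero.
import numpy as np

rng = np.random.default_rng(0)
omegas = rng.uniform(0, 1, size=5)   # initial water levels, one per outcome omega
steps = np.arange(1, 51)             # time indices i = 1, ..., 50

for omega in omegas:
    d = omega / steps                # deterministic sequence once omega is fixed
    print(f"omega = {omega:.2f}, D(omega, 50) = {d[-1]:.4f}")
# Every printed value is close to 0: the process converges to 0 with probability one.
```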

  11. Alternative idea. Instead of fixing $\omega$ and checking deterministic convergence: 1. measure how close $\tilde{X}(i)$ and $X$ are for a fixed $i$ using a deterministic quantity; 2. check whether that quantity tends to zero.

  12. Convergence in mean square. The mean square of $Y - X$ measures how close $X$ and $Y$ are: if $E\left[(X - Y)^2\right] = 0$ then $X = Y$ with probability one. Proof: by Markov's inequality, for any $\epsilon > 0$, $P\left((Y - X)^2 > \epsilon\right) \le E\left[(X - Y)^2\right] / \epsilon = 0$.

  13. Convergence in mean square. $\tilde{X}$ converges to $X$ in mean square if $\lim_{i \to \infty} E\left[(X - \tilde{X}(i))^2\right] = 0$.
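
For the puddle process this definition can be verified directly; the following worked calculation (not on the slides) uses $E[\omega^2] = 1/3$ for $\omega$ uniform on $(0, 1)$:

$E\left[\big(\tilde{D}(i) - 0\big)^2\right] = E\left[\omega^2 / i^2\right] = \frac{E[\omega^2]}{i^2} = \frac{1}{3 i^2} \to 0$ as $i \to \infty$,

so $\tilde{D}$ converges to zero in mean square.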

  14. Convergence in probability. Alternative measure: the probability that $|Y - X| > \epsilon$ for small $\epsilon$. $\tilde{X}$ converges to $X$ in probability if for any $\epsilon > 0$, $\lim_{i \to \infty} P\left(|X - \tilde{X}(i)| > \epsilon\right) = 0$.
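
As a numerical illustration (not on the slides), a Monte Carlo sketch can estimate $P(|\tilde{D}(i) - 0| > \epsilon)$ for the puddle process; the value $\epsilon = 0.05$ is an arbitrary choice:

```python
# A minimal sketch (assuming the puddle process, limit X = 0, and eps = 0.05):
# Monte Carlo estimate of P(|D(i) - 0| > eps) for growing i.
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0, 1, size=100_000)       # many independent outcomes
eps = 0.05

for i in [1, 10, 100]:
    prob = np.mean(np.abs(omega / i) > eps)   # fraction of outcomes with |D(i)| > eps
    print(f"i = {i:3d}   P(|D(i)| > {eps}) ~ {prob:.3f}")
# The estimates decrease to zero: D converges to 0 in probability.
```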

  15-19. Conv. in mean square implies conv. in probability. For any $\epsilon > 0$, $\lim_{i \to \infty} P\left(|X - \tilde{X}(i)| > \epsilon\right) = \lim_{i \to \infty} P\left((X - \tilde{X}(i))^2 > \epsilon^2\right) \le \lim_{i \to \infty} \frac{E\left[(X - \tilde{X}(i))^2\right]}{\epsilon^2} = 0$ by Markov's inequality. Convergence with probability one also implies convergence in probability.

  20. Convergence in distribution. The distribution of $\tilde{X}(i)$ converges to the distribution of $X$: $\tilde{X}$ converges in distribution to $X$ if $\lim_{i \to \infty} F_{\tilde{X}(i)}(x) = F_X(x)$ for all $x$ at which $F_X$ is continuous.

  21. Convergence in distribution. Convergence in distribution does not imply that $\tilde{X}(i)$ and $X$ are close as $i \to \infty$! Convergence in probability does imply convergence in distribution.
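
A standard counterexample, not shown on the slides, makes the first point concrete: if $X$ is standard Gaussian and $\tilde{X}(i) := -X$ for every $i$, each $\tilde{X}(i)$ has exactly the distribution of $X$, yet the two are never close. A quick numerical check:

```python
# A minimal sketch of a standard counterexample: X standard Gaussian and
# X_tilde(i) := -X for every i. Same distribution, so convergence in distribution
# holds trivially, yet |X_tilde(i) - X| = 2|X| is not small.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
x_tilde = -x                              # same N(0, 1) distribution as x (by symmetry)
print("mean |X_tilde - X| =", np.mean(np.abs(x_tilde - x)))   # about 1.6, far from 0
```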

  22. Binomial tends to Poisson. $\tilde{X}(i)$ is binomial with parameters $i$ and $p := \lambda / i$; $X$ is a Poisson random variable with parameter $\lambda$. Then $\tilde{X}(i)$ converges to $X$ in distribution: $\lim_{i \to \infty} p_{\tilde{X}(i)}(x) = \lim_{i \to \infty} \binom{i}{x} p^x (1 - p)^{i - x} = \frac{\lambda^x e^{-\lambda}}{x!} = p_X(x)$.

  23-26. [Figures: probability mass functions of $\tilde{X}(40)$, $\tilde{X}(80)$, $\tilde{X}(400)$, and $X$, plotted for $k = 0, \ldots, 40$; the binomial pmfs approach the Poisson pmf.]
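
The convergence can be checked numerically; the sketch below (with $\lambda = 20$, a value chosen for illustration rather than taken from the slides) compares the two pmfs using scipy.stats:

```python
# A minimal sketch: compare the Binomial(i, lambda/i) pmf with the Poisson(lambda) pmf
# as i grows. lambda = 20 is an assumed value, not from the slides.
import numpy as np
from scipy.stats import binom, poisson

lam = 20.0
k = np.arange(0, 41)

for i in [40, 80, 400]:
    p = lam / i
    max_gap = np.max(np.abs(binom.pmf(k, i, p) - poisson.pmf(k, lam)))
    print(f"i = {i:4d}   max |binomial pmf - Poisson pmf| = {max_gap:.4f}")
# The gap shrinks as i grows: the binomial converges to the Poisson in distribution.
```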

  27. Types of convergence Law of Large Numbers Central Limit Theorem Monte Carlo simulation

  28. Moving average. The moving average $\tilde{A}$ of a discrete random process $\tilde{X}$ is $\tilde{A}(i) := \frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j)$.
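
In code, the moving average of one realization can be computed with a cumulative sum; a minimal sketch (not from the slides):

```python
# Moving average A(i) = (1/i) * sum_{j <= i} X(j) of a single realization.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)             # one realization of an iid process
i = np.arange(1, x.size + 1)
moving_avg = np.cumsum(x) / i             # A(i) for i = 1, ..., 1000
print(moving_avg[[0, 9, 99, 999]])        # A(1), A(10), A(100), A(1000)
```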

  29. Weak law of large numbers. Let $\tilde{X}$ be an iid discrete random process with mean $\mu_{\tilde{X}} := \mu$ and bounded variance $\sigma^2$. The average $\tilde{A}$ of $\tilde{X}$ converges in mean square to $\mu$.

  30-33. Proof. $E[\tilde{A}(i)] = E\left[\frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j)\right] = \frac{1}{i} \sum_{j=1}^{i} E[\tilde{X}(j)] = \mu$.

  34-37. Proof. $\operatorname{Var}[\tilde{A}(i)] = \operatorname{Var}\left[\frac{1}{i} \sum_{j=1}^{i} \tilde{X}(j)\right] = \frac{1}{i^2} \sum_{j=1}^{i} \operatorname{Var}[\tilde{X}(j)] = \frac{\sigma^2}{i}$, using independence of the $\tilde{X}(j)$.

  38-42. Proof. $\lim_{i \to \infty} E\left[(\tilde{A}(i) - \mu)^2\right] = \lim_{i \to \infty} E\left[(\tilde{A}(i) - E[\tilde{A}(i)])^2\right] = \lim_{i \to \infty} \operatorname{Var}[\tilde{A}(i)] = \lim_{i \to \infty} \frac{\sigma^2}{i} = 0$.
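
The conclusion of the proof can be checked by simulation; the sketch below (assuming an iid geometric process with $p = 0.4$, chosen to match the later plots) estimates $E[(\tilde{A}(i) - \mu)^2]$ and compares it with $\sigma^2 / i$:

```python
# A minimal sketch (assumed iid geometric X with p = 0.4, so mu = 2.5 and
# sigma^2 = (1 - p) / p^2 = 3.75): check that E[(A(i) - mu)^2] is close to sigma^2 / i.
import numpy as np

rng = np.random.default_rng(0)
p, trials = 0.4, 20_000
mu, sigma2 = 1 / p, (1 - p) / p ** 2

for i in [10, 100, 400]:
    a_i = rng.geometric(p, size=(trials, i)).mean(axis=1)   # A(i) in each trial
    mse = np.mean((a_i - mu) ** 2)                          # estimate of E[(A(i) - mu)^2]
    print(f"i = {i:4d}   E[(A(i) - mu)^2] ~ {mse:.4f}   sigma^2 / i = {sigma2 / i:.4f}")
```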

  43. Strong law of large numbers. Let $\tilde{X}$ be an iid discrete random process with mean $\mu_{\tilde{X}} := \mu$ and bounded variance $\sigma^2$. The average $\tilde{A}$ of $\tilde{X}$ converges with probability one to $\mu$.

  44-46. [Figures: moving average of an iid standard Gaussian sequence versus the mean of the iid sequence, for $i$ up to 50, 500, and 5 000; the moving average settles at the mean.]

  47-49. [Figures: moving average of an iid geometric sequence with $p = 0.4$ versus the mean of the iid sequence, for $i$ up to 50, 500, and 5 000; the moving average settles at the mean.]

  50-52. [Figures: moving average of an iid Cauchy sequence versus the median of the iid sequence, for $i$ up to 50, 500, and 5 000; the moving average keeps fluctuating because the Cauchy distribution has no mean.]
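
The experiments behind these plots are easy to reproduce; a minimal sketch (not the original plotting code) prints the moving average at $i = 5000$ for the three processes:

```python
# Moving averages of iid standard Gaussian, geometric (p = 0.4), and Cauchy sequences.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
i = np.arange(1, n + 1)

samples = {
    "standard Gaussian (mean 0)": rng.standard_normal(n),
    "geometric, p = 0.4 (mean 2.5)": rng.geometric(0.4, size=n),
    "Cauchy (no mean)": rng.standard_cauchy(n),
}
for name, x in samples.items():
    avg = np.cumsum(x) / i                 # moving average A(i) of this realization
    print(f"{name:32s} A(5000) = {avg[-1]: .3f}")
# The Gaussian and geometric averages settle near their means, as the law of large
# numbers predicts; the Cauchy average does not, because its mean does not exist.
```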

  53. Types of convergence Law of Large Numbers Central Limit Theorem Monte Carlo simulation

  54. Central Limit Theorem. Let $\tilde{X}$ be an iid discrete random process with mean $\mu_{\tilde{X}} := \mu$ and bounded variance $\sigma^2$. Then $\sqrt{i}\,(\tilde{A}(i) - \mu)$ converges in distribution to a Gaussian random variable with mean 0 and variance $\sigma^2$. In particular, the average $\tilde{A}(i)$ is approximately Gaussian with mean $\mu$ and variance $\sigma^2 / i$.
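
A simulation makes the statement concrete; the sketch below (assuming an iid exponential process with rate 1, a distribution not used on the slides) checks that $\sqrt{i}\,(\tilde{A}(i) - \mu)$ has mean near 0 and variance near $\sigma^2$:

```python
# A minimal sketch (assumed iid Exponential(1) process, so mu = 1 and sigma^2 = 1):
# check that sqrt(i) * (A(i) - mu) looks like N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 1.0, 1.0
i, trials = 500, 20_000

x = rng.exponential(scale=1.0, size=(trials, i))
a_i = x.mean(axis=1)                       # one value of A(i) per trial
z = np.sqrt(i) * (a_i - mu)                # should be approximately N(0, sigma^2)
print("sample mean:", z.mean(), "sample variance:", z.var())
# Both are close to 0 and sigma^2 = 1, consistent with the central limit theorem.
```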

  55. Height data. Example: data from a population of 25,000 people. We compare the histogram of the heights with the pdf of a Gaussian random variable fitted to the data.

  56. Height data. [Figure: histogram of the height data (inches, roughly 60 to 76) overlaid with the fitted Gaussian pdf.]

  57. Sketch of proof. The pdf of a sum of two independent random variables is the convolution of their pdfs: $f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(z - y)\, f_Y(y)\, dy$. Repeated convolutions of any pdf with bounded variance result in a Gaussian!

  58. Repeated convolutions. [Figure: the result of repeatedly convolving a pdf with itself, for $i = 1, \ldots, 5$; the shape approaches a Gaussian.]
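
The repeated-convolution picture can be reproduced numerically; the sketch below (assuming a uniform pdf on $[0, 1]$, which the slides do not specify) convolves the density with itself a few times using np.convolve:

```python
# A minimal sketch: repeatedly convolve a uniform density with itself and watch
# the result become bell-shaped.
import numpy as np

dx = 0.01
pdf = np.ones(100)                           # uniform density on [0, 1], sampled on a grid

current = pdf                                # i = 1: the original density
for i in range(2, 6):                        # i = 2, ..., 5: density of a sum of i uniforms
    current = np.convolve(current, pdf) * dx # numerical convolution of densities
    print(f"i = {i}: support length ~ {current.size * dx:.2f}, "
          f"peak value ~ {current.max():.3f}")
# The support widens, the peak flattens, and the shape approaches a Gaussian.
```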
