  1. Advanced Simulation - Lecture 2

     Patrick Rebeschini, January 17th, 2018

  2. Outline

     Monte Carlo methods rely on random numbers to approximate integrals. Bayesian statistics in particular yields many intractable integrals. In this lecture we'll see some statistical problems involving integrals, and discuss the properties of the basic Monte Carlo estimator.

  3. Bayesian Inference: Gaussian Data

     Let $Y = (Y_1, \ldots, Y_n)$ be i.i.d. random variables with $Y_i \sim \mathcal{N}(\theta, \sigma^2)$, with $\sigma^2$ known and $\theta$ unknown. Assign a prior distribution to the parameter, $\vartheta \sim \mathcal{N}(\mu, \kappa^2)$; then one can check that

     $$p(\theta \mid y) = \mathcal{N}(\theta; \nu, \omega^2)$$

     where

     $$\omega^2 = \frac{\kappa^2 \sigma^2}{n\kappa^2 + \sigma^2}, \qquad \nu = \frac{\sigma^2}{n\kappa^2 + \sigma^2}\,\mu + \frac{n\kappa^2}{n\kappa^2 + \sigma^2}\,\bar{y}.$$

     Thus $\mathbb{E}(\vartheta \mid y) = \nu$ and $\mathbb{V}(\vartheta \mid y) = \omega^2$.
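     As a numerical check of these formulas, here is a short Python sketch (not from the slides; numpy is assumed, and the values of $\mu$, $\kappa$, $\sigma$ and the simulated data are illustrative):

         import numpy as np

         rng = np.random.default_rng(1)

         # Illustrative setup: data from N(2, sigma^2), prior N(mu, kappa^2).
         sigma, mu, kappa, n = 1.0, 0.0, 2.0, 50
         y = rng.normal(2.0, sigma, size=n)

         # Posterior parameters from the slide:
         #   omega^2 = kappa^2 sigma^2 / (n kappa^2 + sigma^2)
         #   nu      = (sigma^2 mu + n kappa^2 ybar) / (n kappa^2 + sigma^2)
         denom = n * kappa**2 + sigma**2
         omega2 = kappa**2 * sigma**2 / denom
         nu = (sigma**2 * mu + n * kappa**2 * y.mean()) / denom
         print(nu, omega2)   # posterior mean and variance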

  4. Bayesian Inference: Gaussian Data

     If $C := \left( \nu - \Phi^{-1}(1 - \alpha/2)\,\omega,\ \nu + \Phi^{-1}(1 - \alpha/2)\,\omega \right)$, then

     $$\mathbb{P}(\vartheta \in C \mid y) = 1 - \alpha.$$

     If $Y_{n+1} \sim \mathcal{N}(\theta, \sigma^2)$, then

     $$p(y_{n+1} \mid y) = \int_\Theta p(y_{n+1} \mid \theta)\, p(\theta \mid y)\, d\theta = \mathcal{N}(y_{n+1}; \nu, \omega^2 + \sigma^2).$$

     No need for Monte Carlo approximations: the prior is conjugate for the model.
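     Both quantities can be evaluated exactly, e.g. with scipy (a sketch; $\nu$, $\omega^2$, $\sigma$ are taken as given, with illustrative values):

         import numpy as np
         from scipy.stats import norm

         nu, omega2, sigma = 1.9, 0.02, 1.0   # e.g. from the previous sketch
         alpha = 0.05

         # C = (nu - Phi^{-1}(1 - alpha/2) omega, nu + Phi^{-1}(1 - alpha/2) omega).
         half = norm.ppf(1 - alpha / 2) * np.sqrt(omega2)
         print((nu - half, nu + half))        # 95% credible interval

         # Posterior predictive for Y_{n+1}: N(nu, omega^2 + sigma^2).
         predictive = norm(loc=nu, scale=np.sqrt(omega2 + sigma**2))
         print(predictive.cdf(3.0))           # e.g. P(Y_{n+1} <= 3 | y)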

  5. Bayesian Inference: Logistic Regression

     Let $(x_i, Y_i) \in \mathbb{R}^d \times \{0, 1\}$, where $x_i \in \mathbb{R}^d$ is a covariate and

     $$\mathbb{P}(Y_i = 1 \mid \theta) = \frac{1}{1 + e^{-\theta^T x_i}}.$$

     Assign a prior $p(\theta)$ to $\vartheta$. Then Bayesian inference relies on

     $$p(\theta \mid y_1, \ldots, y_n) = \frac{p(\theta) \prod_{i=1}^n \mathbb{P}(Y_i = y_i \mid \theta)}{P(y_1, \ldots, y_n)}.$$

     If the prior is Gaussian, the posterior is not a standard distribution: $P(y_1, \ldots, y_n)$ cannot be computed.
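     While the normalizing constant $P(y_1, \ldots, y_n)$ is intractable, the unnormalized posterior is easy to evaluate pointwise, which is what simulation methods exploit. A sketch (Python; the Gaussian prior variance below is an illustrative choice):

         import numpy as np

         def log_posterior_unnorm(theta, X, y, prior_var=10.0):
             """Log of p(theta) * prod_i P(Y_i = y_i | theta), up to the
             intractable constant P(y_1, ..., y_n); X is (n, d), y in {0,1}^n."""
             logits = X @ theta
             # Logistic log-likelihood, written stably:
             #   sum_i [ y_i * logits_i - log(1 + exp(logits_i)) ].
             loglik = np.sum(y * logits - np.logaddexp(0.0, logits))
             logprior = -0.5 * np.dot(theta, theta) / prior_var   # Gaussian prior
             return loglik + logprior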

  6. S&P 500 Index

     Figure: S&P 500 daily price index $(p_t)$ between 1984 and 1991.

  7. S&P 500 Index

     Figure: Daily returns $y_t = \log(p_t / p_{t-1})$ between 1984 and 1991.

  8. Bayesian Inference: Stochastic Volatility Model

     The latent stochastic volatility $(X_t)_{t \ge 1}$ of an asset is modeled through

     $$X_t = \varphi X_{t-1} + \sigma V_t, \qquad Y_t = \beta \exp(X_t)\, W_t,$$

     where $V_t, W_t \sim \mathcal{N}(0, 1)$. Intuitively, log-returns are modeled as centered Gaussians with dependent variances. This is a popular alternative to ARCH and GARCH models (Engle, 2003 Nobel Prize).

     Goals: estimate the parameters $(\varphi, \sigma, \beta)$ given the observations, and estimate $X_t$ given $Y_1, \ldots, Y_t$ on-line, based on $p(x_t \mid y_1, \ldots, y_t)$. No analytical solution is available!
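     The model is easy to simulate forward even though inference is hard; a sketch with illustrative parameter values:

         import numpy as np

         def simulate_sv(T, phi=0.95, sigma=0.25, beta=0.5, seed=0):
             """Simulate X_t = phi X_{t-1} + sigma V_t, Y_t = beta exp(X_t) W_t,
             with V_t, W_t i.i.d. N(0, 1) and X_0 = 0."""
             rng = np.random.default_rng(seed)
             x = np.zeros(T)
             y = np.zeros(T)
             x_prev = 0.0
             for t in range(T):
                 x[t] = phi * x_prev + sigma * rng.normal()
                 y[t] = beta * np.exp(x[t]) * rng.normal()
                 x_prev = x[t]
             return x, y

         x, y = simulate_sv(2000)   # y mimics a log-return series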

  9. Monte Carlo Integration

     We are interested in computing

     $$I = \int_{\mathsf{X}} \varphi(x)\, \pi(x)\, dx,$$

     where $\pi$ is a pdf on $\mathsf{X}$ and $\varphi : \mathsf{X} \to \mathbb{R}$.

     Monte Carlo method: sample $n$ independent copies $X_1, \ldots, X_n$ of $X \sim \pi$ and compute

     $$\hat{I}_n = \frac{1}{n} \sum_{i=1}^n \varphi(X_i).$$

     Remark: you can think of this as having the following empirical measure approximation of $\pi(dx)$:

     $$\hat{\pi}_n(dx) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}(dx),$$

     where $\delta_{X_i}(dx)$ is the Dirac measure at $X_i$.
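     In code, the basic estimator is essentially one line; a sketch with a toy target where the answer is known:

         import numpy as np

         def monte_carlo(phi, sampler, n):
             """Estimate I = E[phi(X)], X ~ pi, by the sample mean of phi(X_i)."""
             return np.mean(phi(sampler(n)))   # (1/n) sum_i phi(X_i)

         # Example: E[X^2] for X ~ N(0, 1); the true value is 1.
         rng = np.random.default_rng(0)
         print(monte_carlo(lambda x: x**2, lambda n: rng.normal(size=n), 100_000))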

  10. Monte Carlo Integration: Limit Theorems

     Proposition (LLN): Assume $\mathbb{E}(|\varphi(X)|) < \infty$; then $\hat{I}_n$ is a strongly consistent estimator of $I$.

     Proposition (CLT): Assume $I$ and

     $$\sigma^2 = \mathbb{V}(\varphi(X)) = \int_{\mathsf{X}} [\varphi(x) - I]^2\, \pi(x)\, dx$$

     are finite; then (see the computation in the previous lecture)

     $$\mathbb{E}\left[ \left( \hat{I}_n - I \right)^2 \right] = \mathbb{V}\left( \hat{I}_n \right) = \frac{\sigma^2}{n}$$

     and

     $$\frac{\sqrt{n}}{\sigma} \left( \hat{I}_n - I \right) \xrightarrow{D} \mathcal{N}(0, 1).$$
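     The $\sigma^2 / n$ variance can be checked empirically by replicating $\hat{I}_n$; a sketch for $\varphi(x) = x^2$ with $X \sim \mathcal{N}(0, 1)$, where $\sigma^2 = \mathbb{V}(X^2) = 2$:

         import numpy as np

         rng = np.random.default_rng(0)
         phi = lambda x: x**2   # V(phi(X)) = E[X^4] - E[X^2]^2 = 3 - 1 = 2

         # Empirical variance of I_n over 500 replications vs. sigma^2 / n.
         for n in (100, 1_000, 10_000):
             estimates = [np.mean(phi(rng.normal(size=n))) for _ in range(500)]
             print(n, np.var(estimates), 2.0 / n)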

  11. Monte Carlo Integration: Variance Estimation

     Proposition: Assume $\sigma^2 = \mathbb{V}(\varphi(X))$ exists; then

     $$S_n^2 = \frac{1}{n-1} \sum_{i=1}^n \left( \varphi(X_i) - \hat{I}_n \right)^2$$

     is an unbiased estimator of $\sigma^2$.

     Proof: let $Y_i = \varphi(X_i)$; then

     $$\mathbb{E}\left( S_n^2 \right) = \frac{1}{n-1}\, \mathbb{E}\left[ \sum_{i=1}^n \left( Y_i - \bar{Y} \right)^2 \right] = \frac{1}{n-1}\, \mathbb{E}\left[ \sum_{i=1}^n Y_i^2 - n \bar{Y}^2 \right]$$
     $$= \frac{1}{n-1} \left( n \left( \mathbb{V}(Y) + I^2 \right) - n \left( \mathbb{V}(\bar{Y}) + I^2 \right) \right) = \mathbb{V}(Y) = \mathbb{V}(\varphi(X)),$$

     using $\mathbb{E}(Y_i^2) = \mathbb{V}(Y) + I^2$, $\mathbb{E}(\bar{Y}^2) = \mathbb{V}(\bar{Y}) + I^2$ and $\mathbb{V}(\bar{Y}) = \mathbb{V}(Y)/n$.
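     In numpy, $S_n^2$ is the sample variance with ddof=1; a quick sketch continuing the toy target above:

         import numpy as np

         rng = np.random.default_rng(0)
         ys = rng.normal(size=10_000) ** 2    # phi(X_i) with phi(x) = x^2

         I_hat = np.mean(ys)                  # I_n
         S2 = np.var(ys, ddof=1)              # 1/(n-1) sum (phi(X_i) - I_n)^2
         print(I_hat, S2)                     # S2 should be close to sigma^2 = 2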

  12. Monte Carlo Integration: Error Estimates

     Chebyshev's inequality yields the bound

     $$\mathbb{P}\left( \left| \hat{I}_n - I \right| > \frac{c\,\sigma}{\sqrt{n}} \right) \le \frac{\mathbb{V}\left( \hat{I}_n \right)}{c^2 \sigma^2 / n} = \frac{1}{c^2}.$$

     An estimate follows from the CLT for large $n$:

     $$\frac{\sqrt{n}}{\sigma} \left( \hat{I}_n - I \right) \approx Z \sim \mathcal{N}(0, 1),$$

     so that

     $$\mathbb{P}\left( \left| \hat{I}_n - I \right| > \frac{c\,\sigma}{\sqrt{n}} \right) \approx 2\left( 1 - \Phi(c) \right).$$

     Hence, choosing $c = c_\alpha$ such that $2(1 - \Phi(c_\alpha)) = \alpha$, an approximate $(1 - \alpha)\,100\%$ confidence interval for $I$ is

     $$\left[ \hat{I}_n \pm c_\alpha \frac{\sigma}{\sqrt{n}} \right] \approx \left[ \hat{I}_n \pm c_\alpha \frac{S_n}{\sqrt{n}} \right],$$

     and the rate is $1/\sqrt{n}$ whatever the dimension of $\mathsf{X}$.
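     Putting the pieces together, the approximate confidence interval is cheap to compute; a sketch (scipy's norm.ppf supplies $c_\alpha = \Phi^{-1}(1 - \alpha/2)$):

         import numpy as np
         from scipy.stats import norm

         rng = np.random.default_rng(0)
         n, alpha = 10_000, 0.05
         ys = rng.normal(size=n) ** 2         # phi(X_i), true I = 1

         I_hat = np.mean(ys)
         S_n = np.std(ys, ddof=1)
         c = norm.ppf(1 - alpha / 2)          # c_alpha with 2(1 - Phi(c)) = alpha
         half = c * S_n / np.sqrt(n)
         print((I_hat - half, I_hat + half))  # approximate 95% CI for I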

  13. Toy Example

     Consider a square $S \subseteq \mathbb{R}^2$ with sides of length 2 and inscribed disk $D$ of radius 1. We want to compute through Monte Carlo the area $I$ of $D$:

     $$I = \pi = \iint_D dx_1\, dx_2 = \iint_S \mathbb{I}_D(x_1, x_2)\, dx_1\, dx_2 \quad (\text{as } D \subset S)$$
     $$= 4 \iint_{\mathbb{R}^2} \mathbb{I}_D(x_1, x_2)\, \pi(x_1, x_2)\, dx_1\, dx_2,$$

     where $S := [-1, 1] \times [-1, 1]$ and $\pi(x_1, x_2) = \frac{1}{4}\, \mathbb{I}_S(x_1, x_2)$ is the uniform density on the square $S$.
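     The corresponding estimator just counts the fraction of uniform points that land in the disk; a sketch:

         import numpy as np

         rng = np.random.default_rng(0)
         n = 1_000_000

         # Sample uniformly on the square S = [-1, 1] x [-1, 1].
         pts = rng.uniform(-1.0, 1.0, size=(n, 2))
         # I_D(x1, x2) = 1 iff the point falls inside the unit disk D.
         inside = (pts ** 2).sum(axis=1) <= 1.0
         print(4.0 * inside.mean())           # I_n, estimates I = pi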
