Monte Carlo Simulations and PcNaive
Econometrics 2 — Fall 2005
Heino Bohn Nielsen
Monte Carlo Simulations
• MC simulations were introduced in Econometrics 1: formalizing the thought experiment underlying the data sampling.
• In this course we will frequently use MC simulations. They are a standard tool in econometrics.
• Underlying the econometric results is a layer of difficult statistical theory.
  (1) Many asymptotic results are technically demanding, and sometimes also difficult to firmly understand.
      → Use MC simulations to obtain intuition.
  (2) The finite sample properties are often analytically intractable.
      → Use MC simulations to analyze the finite sample properties.

Outline of the Lecture
(1) The basic idea in Monte Carlo simulations.
(2) Example 1: Sample mean (OLS) of IID normals.
(3) Example 2: Illustration of a central limit theorem.
(4) Introduction to PcNaive.
(5) Example 3: Consistency and unbiasedness of OLS in a cross-sectional regression. General-to-Specific or Specific-to-General?

The Monte Carlo Idea
The basic idea of the Monte Carlo method:
“Replace a difficult deterministic problem with a stochastic problem with the same solution.”
If we can solve the stochastic problem by simulations, labour-intensive work can be replaced by cheap, capital-intensive simulations.
Problem:
• How can we be sure that the deterministic and the stochastic problem have the same solution?
• The general answer is the law of large numbers (LLN): sample moments converge to population moments.
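As a tiny illustration of this idea, the following Python sketch approximates a population moment by averaging random draws; the Uniform(0, 1) example, the seed, and the replication counts are illustrative choices, not part of the lecture:

```python
# Replace the deterministic problem "what is E[X] for X ~ Uniform(0, 1)?"
# (answer: 1/2) by the stochastic problem of averaging M random draws.
# By the LLN the sample moment converges to the population moment as M grows.
import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed
for M in (10, 1_000, 100_000):
    draws = rng.uniform(0.0, 1.0, size=M)  # draws from Uniform(0, 1)
    print(f"M = {M:>7}: sample mean = {draws.mean():.5f}  (population mean = 0.5)")
```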

Example
• Consider a regression
      y_i = x_i' \beta + \epsilon_i, \quad i = 1, 2, \ldots, N, \qquad (*)
  where x_i and \epsilon_i have some specified properties, and the OLS estimator
      \hat{\beta} = \Big( \sum_{i=1}^{N} x_i x_i' \Big)^{-1} \sum_{i=1}^{N} x_i y_i .
  We are often interested in E[\hat{\beta}] to check for bias. This is difficult in most situations.
• But if we could draw realizations of \hat{\beta}, we could estimate E[\hat{\beta}]. MC simulation:
  (1) Construct M artificial data sets from the model (*).
  (2) Find the estimator \hat{\beta}_m for each data set, m = 1, 2, \ldots, M.
  Then from the LLN:
      M^{-1} \sum_{m=1}^{M} \hat{\beta}_m \rightarrow E[\hat{\beta}] \quad \text{for } M \rightarrow \infty.

Note of Caution
The Monte Carlo method is a useful tool in econometrics. BUT:
(1) Simulations do not replace theory. Simulations can illustrate but not prove theorems.
(2) Simulation results are not general. Results are specific to the chosen setup: we have to fully specify the model.
(3) Simulations work like good examples. In this course we hope to give you a Monte Carlo intuition.
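A minimal Python sketch of the Monte Carlo loop just described, under an assumed DGP (two standard-normal regressors, illustrative coefficients, and arbitrarily chosen N and M); it is not the PcNaive implementation, only the same idea in code:

```python
# Construct M artificial data sets from y_i = x_i' beta + eps_i, compute the
# OLS estimator for each, and average over replications to approximate E[beta_hat].
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 5_000                         # sample size and number of replications
beta_true = np.array([1.0, 0.5])          # illustrative true coefficients

estimates = np.empty((M, beta_true.size))
for m in range(M):
    X = rng.normal(size=(N, 2))           # regressors for data set m (assumed DGP)
    y = X @ beta_true + rng.normal(size=N)
    estimates[m] = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS estimate for data set m

print("Monte Carlo approximation of E[beta_hat]:", estimates.mean(axis=0))
print("True beta:                               ", beta_true)
```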

Example 1: Mean of IID Normals
• Consider a model where we know the finite sample properties:
      y_i = \mu + \epsilon_i, \quad \epsilon_i \sim N(0, \eta^2), \quad i = 1, 2, \ldots, N. \qquad (**)
• The OLS estimator \hat{\mu} of \mu is the sample mean
      \hat{\mu} = N^{-1} \sum_{i=1}^{N} y_i.
  Note that \hat{\mu} is consistent, unbiased and (exactly) normally distributed:
      \hat{\mu} \sim N(\mu, N^{-1} \eta^2).
• The standard deviation of the estimate, in PcNaive called the estimated standard error, can be calculated as
      ESE(\hat{\mu}) = \sqrt{N^{-1} \hat{\eta}^2} = \sqrt{N^{-2} \sum_{i=1}^{N} (y_i - \hat{\mu})^2},
  where \hat{\eta}^2 = N^{-1} \sum_{i=1}^{N} (y_i - \hat{\mu})^2.

Ex. 1 (cont.): Illustration by Simulation
We can illustrate the results if we can generate data from (**). We need:
(1) A fully specified Data Generating Process (DGP), e.g.
      y_i = \mu + \epsilon_i, \quad \epsilon_i \sim N(0, \eta^2), \quad i = 1, 2, \ldots, N, \qquad (#)
    with \mu = 5 and \eta^2 = 1;
    an algorithm for drawing random numbers from N(\cdot, \cdot);
    and a sample length, e.g. N = 50 or N \in \{10, 20, 30, \ldots, 100\}.
(2) An estimation model for y_i and an estimator. Consider OLS in
      y_i = \beta + u_i. \qquad (##)
Note that the DGP (#) and the estimation model (##) need not coincide.
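The following sketch generates one realization from the DGP (#) and computes the OLS estimate and its ESE as defined above; \mu = 5, \eta^2 = 1 and N = 50 follow the slide, while the seed is arbitrary:

```python
# One realization of Example 1: draw eps_1,...,eps_N from N(0, eta^2),
# build y_i = mu + eps_i, and compute the sample mean and its ESE.
import numpy as np

rng = np.random.default_rng(2)
mu, eta2, N = 5.0, 1.0, 50                         # the DGP (#): mu = 5, eta^2 = 1, N = 50

y = mu + rng.normal(0.0, np.sqrt(eta2), size=N)    # one realization of y_1, ..., y_N
mu_hat = y.mean()                                  # OLS estimate (sample mean)
eta2_hat = np.mean((y - mu_hat) ** 2)              # eta_hat^2 = N^{-1} sum (y_i - mu_hat)^2
ese = np.sqrt(eta2_hat / N)                        # ESE(mu_hat) = sqrt(N^{-1} eta_hat^2)

print(f"mu_hat = {mu_hat:.5f}, ESE = {ese:.4f}")
```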

Ex. 1 (cont.): Four Realizations
• Suppose we draw \epsilon_1, \ldots, \epsilon_{50} from N(0, 1) and construct a data set y_1, \ldots, y_{50}.
• We then apply OLS to the regression model y_i = \beta + u_i to obtain the sample mean and the standard deviation in one realization:
      \hat{\beta} = 4.98013, \quad ESE(\hat{\beta}) = 0.1477.
• We can look at more realizations:

  Realization, m    \hat{\beta}_m    ESE(\hat{\beta}_m)
  1                 4.98013          0.1477
  2                 5.04104          0.1320
  3                 4.99815          0.1479
  4                 4.82347          0.1504
  Mean              4.96070          0.1445

[Figure: Four realizations of the simulated data, with sample means 4.98013, 5.04104, 4.99815, and 4.82347.]

Ex. 1 (cont.): Formalization
Now suppose we generate data from (#) M times,
      y_1^m, \ldots, y_{50}^m, \quad m = 1, 2, \ldots, M.
For each m we obtain a sample mean \hat{\beta}_m.
• We look at the mean estimate and the Monte Carlo standard deviation:
      MEAN(\hat{\beta}) = M^{-1} \sum_{m=1}^{M} \hat{\beta}_m
      MCSD(\hat{\beta}) = \sqrt{ M^{-1} \sum_{m=1}^{M} \big( \hat{\beta}_m - MEAN(\hat{\beta}) \big)^2 }
• For large M we expect MEAN(\hat{\beta}) to be close to the true \mu (LLN). The small sample bias is
      BIAS = MEAN(\hat{\beta}) - \mu.

Ex. 1 (cont.): Measures of Uncertainty
Note that MEAN(\hat{\beta}) is also an estimator (a stochastic variable). The standard deviation of MEAN(\hat{\beta}) is the Monte Carlo standard error
      MCSE(\hat{\beta}) = \sqrt{ M^{-1} \, MCSD(\hat{\beta})^2 } = MCSD(\hat{\beta}) / \sqrt{M}.
Note the difference:
• MCSD(\hat{\beta}) measures the uncertainty of \hat{\beta}_m (\approx ESE(\hat{\beta})).
• MCSE(\hat{\beta}) measures the uncertainty of MEAN(\hat{\beta}) in the simulation; MCSE(\hat{\beta}) \rightarrow 0 for M \rightarrow \infty.
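A sketch of the full experiment, computing MEAN, BIAS, MCSD, and MCSE exactly as defined above; the reported numbers will differ slightly from the slide's because the random draws differ:

```python
# Repeat the realization M times and summarize the M estimates.
import numpy as np

rng = np.random.default_rng(3)
mu, eta2, N, M = 5.0, 1.0, 50, 5_000

beta_hat = np.empty(M)
for m in range(M):
    y = mu + rng.normal(0.0, np.sqrt(eta2), size=N)
    beta_hat[m] = y.mean()                           # sample mean for replication m

mean = beta_hat.mean()                               # MEAN(beta_hat)
mcsd = np.sqrt(np.mean((beta_hat - mean) ** 2))      # MCSD(beta_hat)
bias = mean - mu                                     # BIAS = MEAN(beta_hat) - mu
mcse = mcsd / np.sqrt(M)                             # MCSE(beta_hat) = MCSD / sqrt(M)

print(f"MEAN = {mean:.4f}  BIAS = {bias:.5f}  MCSD = {mcsd:.5f}  MCSE = {mcse:.6f}")
```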

Ex. 1 (cont.): Results
Consider the results for N = 50, M = 5000:

  m        \hat{\beta}_m    ESE(\hat{\beta}_m)
  1        4.98013          0.1477
  2        5.04104          0.1320
  ...      ...              ...
  5000     4.92140          0.1254

  MEAN(\hat{\beta}) = 4.9985
  MEAN(ESE)         = 0.14083
  MCSD(\hat{\beta}) = 0.14061
  MCSE(\hat{\beta}) = 0.0019886

Ex. 1 (cont.): Results for Different N
[Figure: Densities of the estimates for T = 5, T = 10, and T = 50, and the estimates for different T.]

Example 2: A Central Limit Theorem (CLT)
Recall the idea of a CLT (Lindeberg-Levy):
• Let z_1, \ldots, z_N be IID with E[z_i] = \mu and V[z_i] = \sigma^2. Then
      \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \frac{z_i - \mu}{\sigma} \rightarrow N(0, 1) \quad \text{for } N \rightarrow \infty.
This can be extended to
• heterogeneous processes;
• (limited) time dependence.
We will illustrate this for two examples:
• the uniform distribution;
• the exponential distribution.

Ex. 2 (cont.): Uniform Distribution
Consider as an example
      z_i \sim \text{Uniform}(0, 1), \quad i = 1, 2, \ldots, N.
It holds that
      E[z_i] = \frac{1}{2}, \qquad V[z_i] = \frac{(1 - 0)^2}{12} = \frac{1}{12}.
We look at the estimated distribution of
      \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \frac{z_i - \mu}{\sigma} = \sqrt{12} \cdot \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \Big( z_i - \frac{1}{2} \Big),
based on M = 20000 replications.
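A rough Python sketch of this experiment; for brevity it prints only the mean and variance of the standardized sum (which should approach 0 and 1), whereas the slides plot the full simulated densities:

```python
# Simulate the standardized sum sqrt(12) * N^{-1/2} * sum(z_i - 1/2)
# for Uniform(0, 1) draws and several sample sizes N.
import numpy as np

rng = np.random.default_rng(4)
M = 20_000                                            # replications, as on the slide
for N in (1, 2, 5, 10):
    z = rng.uniform(0.0, 1.0, size=(M, N))            # M samples of size N
    stat = np.sqrt(12.0) * (z - 0.5).sum(axis=1) / np.sqrt(N)   # standardized sum
    print(f"N = {N:>2}: mean = {stat.mean():+.3f}, variance = {stat.var():.3f}")
```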

Ex. 2 (cont.): Uniform Distribution
[Figure: Simulated densities of the standardized sum for N = 1, 2, 5, and 10 (uniform distribution).]

Ex. 2 (cont.): Exponential Distribution
Consider as a second example
      z_i \sim \text{Exp}(1), \quad i = 1, 2, \ldots, N.
It holds that
      E[z_i] = 1, \qquad V[z_i] = 1^2 = 1.
We look at the estimated distribution of
      \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \frac{z_i - \mu}{\sigma} = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} (z_i - 1),
based on M = 20000 replications.
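The same sketch applies with exponential draws; only the sampler and the standardization change:

```python
# Standardized sum N^{-1/2} * sum(z_i - 1) for Exp(1) draws (mu = sigma = 1).
# The skewness of the exponential makes convergence to normality visibly slower.
import numpy as np

rng = np.random.default_rng(5)
M = 20_000
for N in (1, 2, 5, 50):
    z = rng.exponential(scale=1.0, size=(M, N))       # Exp(1) draws
    stat = (z - 1.0).sum(axis=1) / np.sqrt(N)         # standardized sum
    print(f"N = {N:>2}: mean = {stat.mean():+.3f}, variance = {stat.var():.3f}")
```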

Ex. 2 (cont.): Exponential Distribution
[Figure: Simulated densities of the standardized sum for N = 1, 2, 5, and 50 (exponential distribution).]

PcNaive
• PcNaive is a menu-driven module in GiveWin.
• Technically, PcNaive generates Ox code, which is then executed by Ox. Output is returned in GiveWin.
• Outline:
  (1) Set up the DGP: AR(1), Static, PcNaive, or General.
  (2) Specify the estimation model.
  (3) Choose estimators and test statistics to analyze.
  (4) Set specifications: M, N, etc.
  (5) Select output to generate.
  (6) Save and run.

Example 3: PcNaive Static DGP
DGP:
      y_i = \alpha_1 x_{1i} + \alpha_2 x_{2i} + \epsilon_i, \quad \epsilon_i \sim N(0, 1),
      \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & c \\ c & 1 \end{pmatrix} \right),
for i = 1, 2, \ldots, N.
Estimation model: apply OLS to the linear regression model
      y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + u_i.
Example:
(1) Unbiasedness and consistency of OLS in this setting.
(2) Effect of including a redundant regressor.
(3) Effect of excluding a relevant regressor.
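A Python sketch of this exercise outside PcNaive, with illustrative parameter values (\alpha_1 = \alpha_2 = 1, c = 0.5) that are assumptions and not taken from the slide; it compares the correctly specified regression with one that excludes the relevant regressor x_2 (a redundant-regressor variant can be added analogously):

```python
# Simulate the static DGP with correlated regressors and examine the OLS
# estimates under correct specification and under an omitted relevant regressor.
import numpy as np

rng = np.random.default_rng(6)
alpha1, alpha2, c = 1.0, 1.0, 0.5          # illustrative values, not from the slide
N, M = 100, 5_000

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_full, b_omit = np.empty((M, 3)), np.empty((M, 2))
for m in range(M):
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, c], [c, 1.0]], size=N)
    y = alpha1 * x[:, 0] + alpha2 * x[:, 1] + rng.normal(size=N)
    const = np.ones((N, 1))
    b_full[m] = ols(np.hstack([const, x]), y)          # correctly specified model
    b_omit[m] = ols(np.hstack([const, x[:, :1]]), y)   # relevant regressor x2 excluded

print("Correct model, mean of (beta0, beta1, beta2):", b_full.mean(axis=0))
print("x2 excluded,   mean of (beta0, beta1):       ", b_omit.mean(axis=0))
# With c = 0.5 the misspecified beta1 should centre near alpha1 + c*alpha2 = 1.5,
# while the correctly specified estimates are (approximately) unbiased.
```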
