Bootstrap Joint Prediction Regions
Michael Wolf, Dan Wunderli
Department of Economics, University of Zurich
(PowerPoint presentation)



Motivational Quote

"... a central bank seeking to maximize its probability of achieving its goals is driven, I believe, to a risk-management approach to policy. By this I mean that policymakers need to consider not only the most likely future path for the economy but also the distribution of possible outcomes about that path." — Alan Greenspan (2003)


Outline

1. The Problem
2. The Solution
3. Two Previous Methods
4. Monte Carlo
5. Empirical Application
6. Conclusions



The Problem

Object of interest:
- Observed time series {y_1, ..., y_T}
- Interested in the future path Y_{T,H} ≡ (y_{T+1}, ..., y_{T+H})′, where H is the maximum forecast horizon

For starters:
- Denote a forecast h periods ahead by ŷ_T(h)
- Want a path-forecast Ŷ_T(H) ≡ (ŷ_T(1), ..., ŷ_T(H))′

In the end:
- Also want a joint prediction region (JPR) that contains the entire future path Y_{T,H} with prespecified probability 1 − α
- For purposes of interpretation, such a JPR should consist of simultaneous prediction intervals for y_{T+h}, for h = 1, ..., H


Restriction To Rectangular JPRs

In general:
- Y_{T,H} is an H-dimensional vector
- In principle, a JPR can be any region in R^H that contains the vector Y_{T,H} with probability 1 − α
- For example, an elliptical JPR based on the classical Scheffé method (details later)

In practice:
- Want an implied 'prediction interval' for y_{T+h} at each horizon h
- So the JPR should represent simultaneous prediction intervals: in other words, one wants a rectangular JPR

Note:
- One can always start with a JPR of arbitrary shape and then 'project' it onto the axes of R^H to obtain a rectangular JPR
- But, clearly, such a procedure is sub-optimal
- Instead, one should construct a 'direct' rectangular JPR


Restriction To Rectangular JPRs

[Figure: an illustration of an elliptical (and projected) JPR versus a rectangular JPR]


The Non-Solution

How not to do it:
- Compute a marginal prediction interval for y_{T+h} at level 1 − α for each h = 1, ..., H
- Then 'string together' these H intervals

Advantage:
- (Relatively) easy to do: how to compute reliable marginal prediction intervals has finally been worked out

Disadvantage:
- The joint coverage probability for the path Y_{T,H} is less than 1 − α
- Furthermore, ceteris paribus, this probability decreases in H

Amazingly:
- This method is still widely used
- For example, in fan charts published by the Bank of England and the Central Bank of Norway
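To see how fast the joint coverage decays with H, consider the stylized case of coverage events that are independent across horizons (an illustrative assumption, not a property of real forecast paths, where errors are dependent):

```python
# If the H marginal coverage events were independent across horizons
# (they are not, but this gives the flavor), stringing together H
# marginal (1 - alpha) intervals yields joint coverage (1 - alpha)^H.
alpha = 0.10
joint_coverage = {H: (1 - alpha) ** H for H in (1, 6, 12, 24)}
for H, c in joint_coverage.items():
    print(f"H = {H:2d}: joint coverage ~ {c:.3f}")
```

At H = 24 the nominal 90% band would cover the whole path less than 10% of the time under this stylized calculation.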


The Non-Solution

An (unfortunate) example: [figure omitted]



Some Notation

In the real world:
- Data {y_1, ..., y_T, y_{T+1}, ..., y_{T+H}} generated by mechanism P
- Vector of prediction errors: Û_T(H) ≡ (û_T(1), ..., û_T(H))′ ≡ Ŷ_T(H) − Y_{T,H}
- Prediction standard error for û_T(h) denoted by σ̂_T(h)
- Vector of standardized prediction errors: Ŝ_T(H) ≡ (û_T(1)/σ̂_T(1), ..., û_T(H)/σ̂_T(H))′

In the bootstrap world:
- Data {y*_1, ..., y*_T, y*_{T+1}, ..., y*_{T+H}} generated by mechanism P̂_T
- Vector of bootstrap prediction errors: Û*_T(H) ≡ (û*_T(1), ..., û*_T(H))′ ≡ Ŷ*_T(H) − Y*_{T,H}
- Prediction standard error for û*_T(h) denoted by σ̂*_T(h)
- Vector of bootstrap standardized prediction errors: Ŝ*_T(H) ≡ (û*_T(1)/σ̂*_T(1), ..., û*_T(H)/σ̂*_T(H))′

Note:
- The methodology is completely generic
- All implementation details are up to the applied researcher


High-Level Assumption

Relevant quantities:
- Ĵ_T denotes the probability law under P of Ŝ_T(H) | y_T, y_{T−1}, ...
- Ĵ*_T denotes the probability law under P̂_T of Ŝ*_T(H) | y*_T, y*_{T−1}, ...

In the asymptotic framework, T tends to infinity and H remains fixed.

Assumption 2.1: Ĵ_T converges in distribution to a non-random continuous limit law Ĵ. Furthermore, Ĵ*_T consistently estimates this limit law:

    ρ(Ĵ_T, Ĵ*_T) → 0 in probability,

for any metric ρ metrizing weak convergence.


Flexible Criterion To Construct JPRs

Possible concern:
- When H is large, it may be deemed too strict that all elements of the future path must be contained in the JPR (with probability 1 − α)

We thus adapt a concept from the multiple-testing literature to offer a flexible solution, the generalized family-wise error rate (k-FWE):

    k-FWE ≡ P{at least k of the y_{T+h} are not contained in the JPR}

Implication:
- For k = 1, one wants to catch the entire future path in the JPR
- For k > 1, one is willing to miss up to k − 1 elements in the JPR, but is afforded a smaller region in return (see below)

Goal:
- The applied researcher chooses the value of k, given his needs
- The JPR should then deliver k-FWE ≤ α, at least asymptotically
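The k-FWE event can be checked mechanically for a realized path. The helper below and its inputs are hypothetical illustrations, not part of the paper:

```python
import numpy as np

def kfwe_violation(path, lower, upper, k):
    """True if at least k elements of the future path fall outside their
    simultaneous prediction intervals, i.e. a k-FWE violation occurs."""
    misses = np.sum((path < lower) | (path > upper))
    return bool(misses >= k)

# Hypothetical 4-step path and interval bounds:
path  = np.array([0.5, 1.8, -0.2, 2.5])
lower = np.full(4, -2.0)
upper = np.full(4,  2.0)
print(kfwe_violation(path, lower, upper, k=1))   # True: 2.5 lies outside
print(kfwe_violation(path, lower, upper, k=2))   # False: only one miss
```

With k = 2, the single miss at the last horizon does not count as a violation, which is exactly the flexibility the criterion buys.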


How To Make It Happen

Some further notation:
- Let X ≡ (x_1, ..., x_H)′ be a vector with H elements
- k-max(X) returns the k-th largest value of X
- |X| denotes the vector (|x_1|, ..., |x_H|)′

The ideal JPR, controlling the k-FWE in finite samples, is of the form

    × ... × [ŷ_T(h) ± d^max_{|·|,1−α}(k) · σ̂_T(h)] × ... ×

where d^max_{|·|,1−α}(k) is the 1 − α quantile of the random variable k-max(|Ŝ_T(H)|).

The feasible JPR, controlling the k-FWE asymptotically, is of the form

    × ... × [ŷ_T(h) ± d^max,*_{|·|,1−α}(k) · σ̂_T(h)] × ... ×    (1)

where d^max,*_{|·|,1−α}(k) is the 1 − α quantile of the random variable k-max(|Ŝ*_T(H)|).
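The k-max operator is simple to compute; a minimal sketch (the function name and toy numbers are ours):

```python
import numpy as np

def k_max(x, k):
    """Return the k-th largest element of x (k = 1 gives the maximum)."""
    return np.sort(np.asarray(x))[-k]

# Toy standardized prediction errors for H = 4 (made-up numbers):
s_abs = np.abs(np.array([0.3, -2.1, 1.4, -0.8]))   # |S_T(H)|
print(k_max(s_abs, 1))   # 2.1  (1-max: the usual maximum)
print(k_max(s_abs, 2))   # 1.4  (2-max: second largest)
```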

Formal Result

Proposition 2.1: Under Assumption 2.1, the JPR (1) for Y_{T,H} satisfies

    lim sup_{T→∞} k-FWE ≤ α

where k-FWE ≡ P{at least k of the y_{T+h} are not contained in the JPR}.

Alternative JPRs:
- The JPR (1) is two-sided
- Alternatively, lower and upper one-sided JPRs can be constructed in a similar fashion; see the paper for details


Bootstrap Details

Algorithm 2.1 (Computation of the JPR Multiplier)

1. Generate bootstrap data {y*_1, ..., y*_T, y*_{T+1}, ..., y*_{T+H}} from P̂_T
2. Not making use of the stretch {y*_{T+1}, ..., y*_{T+H}}, compute forecasts ŷ*_T(h) and prediction standard errors σ̂*_T(h)
3. Compute bootstrap prediction errors û*_T(h) ≡ ŷ*_T(h) − y*_{T+h}
4. Compute standardized bootstrap prediction errors ŝ*_T(h) ≡ û*_T(h)/σ̂*_T(h) and let Ŝ*_T(H) ≡ (ŝ*_T(1), ..., ŝ*_T(H))′
5. Compute k-max*_{|·|} ≡ k-max(|Ŝ*_T(H)|)
6. Repeat this process B times ⟹ k-max*_{|·|,1}, ..., k-max*_{|·|,B}
7. d^max,*_{|·|,1−α}(k) is the empirical 1 − α quantile of these B statistics
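Steps 5-7 of Algorithm 2.1 can be sketched as follows. The (B, H) matrix `S_star` stands in for the output of steps 1-4; filling it with standard normals is purely an illustrative assumption (any model-based bootstrap would supply it):

```python
import numpy as np

def jpr_multiplier(S_star, k, alpha):
    """Steps 5-7 of Algorithm 2.1: the empirical 1 - alpha quantile of
    k-max(|S*_T(H)|) over the B bootstrap repetitions.
    S_star: (B, H) array of bootstrap standardized prediction errors."""
    k_maxes = np.sort(np.abs(S_star), axis=1)[:, -k]   # k-th largest per row
    return np.quantile(k_maxes, 1 - alpha)

# Placeholder for steps 1-4: B = 1000 draws of H = 6 standardized
# prediction errors, here simply standard normal for illustration.
rng = np.random.default_rng(0)
S_star = rng.standard_normal((1000, 6))
for k in (1, 2, 3):
    print(f"k = {k}: d* = {jpr_multiplier(S_star, k, alpha=0.10):.2f}")
```

The JPR (1) is then ŷ_T(h) ± d* · σ̂_T(h) for h = 1, ..., H; note how the multiplier shrinks as k grows, which is the smaller region the k-FWE criterion affords.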


Multivariate Time Series

More general scenario:
- One observes a K-variate time series {Z_1, ..., Z_T}
- The goal is to predict the next stretch of H observations for a particular component of Z_t, say the first one w.l.o.g.; write Z_t ≡ (y_t, z_{2,t}, ..., z_{K,t})′
- The forecasts ŷ_T(h) and the prediction standard errors σ̂_T(h) are computed from {Z_1, ..., Z_T} rather than from {y_1, ..., y_T} only
- Ditto in the bootstrap world

More general relevant quantities:
- Ĵ_T denotes the probability law under P of Ŝ_T(H) | Z_T, Z_{T−1}, ...
- Ĵ*_T denotes the probability law under P̂_T of Ŝ*_T(H) | Z*_T, Z*_{T−1}, ...

Unchanged methodology:
- Given the modifications above, the bootstrap methodology to construct JPRs remains unchanged
- Proposition 2.1 continues to hold



(Modified) Scheffé JPR

Jordà and Marcellino (2010) propose an 'asymptotic' JPR based on

Assumption 3.1: √T (Ŷ_T(H) − Y_{T,H}) | Z_T, Z_{T−1}, ... →_d N(0, Ξ_H) and Ξ̂_H →_P Ξ_H.

Furthermore, let P be the lower-triangular Cholesky decomposition of Ξ̂_H/T, satisfying PP′ = Ξ̂_H/T.

The proposed Scheffé JPR is obtained in three steps:

(S1) {Y : T(Ŷ_T(H) − Y)′ Ξ̂_H^{−1} (Ŷ_T(H) − Y) ≤ χ²_{H,1−α}}   (classical Scheffé JPR)

(S2) Ŷ_T(H) ± P √(χ²_{H,1−α}/H) · 1_H   (by Bowden's (1970) Lemma ...)

(S3) Ŷ_T(H) ± P (√(χ²_{h,1−α}/h))_{h=1}^{H}   (by some 'stepwise' method)


(Modified) Scheffé JPR

Criticisms:
- Assumption 3.1 is reasonable in the context of estimation but not in the context of prediction
- The way from (S1) to (S3) is not exactly paved with theoretical justification
- The width of the proposed JPR (S3) at forecast horizon h may not be (weakly) monotonically increasing in h: this can happen since the multipliers √(χ²_{h,1−α}/h) are strictly decreasing in h (for commonly used values of α)
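A quick Monte Carlo check of this last point (our own illustration, not from the slides): estimating the χ² quantiles by simulation confirms that the step-(S3) multipliers √(χ²_{h,1−α}/h) strictly decrease in h for α = 0.10.

```python
import numpy as np

# Estimate chi2_{h, 1-alpha} for h = 1..12 via cumulative sums of squared
# normals, then form the step-(S3) multipliers sqrt(chi2_{h,1-alpha} / h).
rng = np.random.default_rng(0)
alpha, H = 0.10, 12
chi2_paths = np.cumsum(rng.standard_normal((200_000, H)) ** 2, axis=1)
chi2_quant = np.quantile(chi2_paths, 1 - alpha, axis=0)   # chi2_{h, 1-alpha}
multipliers = np.sqrt(chi2_quant / np.arange(1, H + 1))
print(np.round(multipliers, 3))   # decreasing: widths need not grow with h
```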


(Modified) Scheffé JPR

[Figure: Jordà and Marcellino (2010) multipliers of the (modified) Scheffé JPR against forecast horizon h, for H = 12 and α = 0.1]


NP Heuristic JPR

Staszewska-Bystrova (2010) proposes the following alternative bootstrap JPR:
- Generate B bootstrap path-forecasts Ŷ*,b_T(H), for b = 1, ..., B
- Discard αB of these bootstrap path-forecasts: those Ŷ*,b_T(H) that are 'furthest' away from the original path-forecast Ŷ_T(H) (where distance is measured by the Euclidean distance, say)
- The neighboring-paths (NP) JPR is defined as the envelope of the remaining (1 − α)B bootstrap path-forecasts Ŷ*,b_T(H)
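The discard-and-envelope step can be sketched as follows; the function name and the random-walk placeholder paths are ours, and the path-generation step (which in the original method uses the backward (V)AR representation) is deliberately omitted:

```python
import numpy as np

def np_jpr(paths, center, alpha):
    """Neighboring-paths (NP) envelope: drop the alpha*B bootstrap
    path-forecasts furthest (Euclidean distance) from the central
    path-forecast, then take the pointwise min/max of the rest."""
    B = paths.shape[0]
    keep_n = int(round((1 - alpha) * B))
    dist = np.linalg.norm(paths - center, axis=1)
    keep = paths[np.argsort(dist)[:keep_n]]
    return keep.min(axis=0), keep.max(axis=0)   # lower, upper envelope

# Placeholder bootstrap path-forecasts around a zero central forecast:
rng = np.random.default_rng(1)
paths = rng.standard_normal((500, 4)).cumsum(axis=1)
lower, upper = np_jpr(paths, center=np.zeros(4), alpha=0.10)
print(lower, upper)
```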


NP Heuristic JPR

Criticisms:
- The method is purely heuristic: no proof of asymptotic validity, under some suitable high-level assumption, is given
- The method seems to be restricted to (V)AR models, since it uses the backward representation of a (V)AR model to generate the bootstrap path-forecasts Ŷ*_T(H)
- The shape of the JPR can be jagged, which is unattractive



Property of Balance

Under the additional assumption that the marginal distribution of (ŷ_T(h) − y_{T+h})/σ̂_T(h) is independent of h asymptotically, it is easily seen that our bootstrap JPR (1) has the property of being balanced, asymptotically:

    P{y_{T+h} ∈ [ŷ_T(h) ± d^max,*_{|·|,1−α}(k) · σ̂_T(h)]} is independent of h

All forecasts ŷ_T(h) are treated as equally important: the probability of violating the k-FWE criterion is spread out evenly over all horizons h.

Another way to argue that balance is a desirable property is by considering the following (extremely) unbalanced JPR:

    PI_T(1) × (−∞, ∞) × ... × (−∞, ∞)

where PI_T(1) is a marginal prediction interval for y_{T+1}.



Preliminaries

We consider the general AR(p) model

    y_t = ν + ρ_1 y_{t−1} + ... + ρ_p y_{t−p} + ε_t    (2)

which can alternatively be expressed as

    y_t = ν + ρ y_{t−1} + ψ_1 Δy_{t−1} + ... + ψ_{p−1} Δy_{t−p+1} + ε_t    (3)

to bring out the role of the largest autoregressive root ρ ≡ ρ_1 + ... + ρ_p.

Estimation strategy:
- Estimate formulation (3) by OLS, yielding ρ̂_OLS
- Transform to the bias-corrected estimator (e.g., see White, 1961)

    ρ̂_BC ≡ ρ̂_OLS + (1 + 3 ρ̂_OLS)/T    (4)

- Regress y_t − ρ̂_BC y_{t−1} on (1, Δy_{t−1}, ..., Δy_{t−p+1}) by OLS to get corresponding estimators of (ν, ψ_1, ..., ψ_{p−1})
- Use the one-to-one relations between the formulations (2)-(3) to get the set of estimators (ν̂, ρ̂_1, ..., ρ̂_p) and (centered) residuals {ε̂_t}
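The bias correction (4) is a one-line computation; a minimal sketch (the function name is ours):

```python
def rho_bias_corrected(rho_ols, T):
    """Bias-corrected largest-root estimator of equation (4):
    rho_BC = rho_OLS + (1 + 3 * rho_OLS) / T."""
    return rho_ols + (1.0 + 3.0 * rho_ols) / T

print(round(rho_bias_corrected(0.90, T=100), 3))   # 0.937
```

The correction pushes the estimate upward, offsetting the well-known downward bias of OLS for the largest autoregressive root in finite samples.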


Preliminaries

Computation of the forecasts ŷ_T(h): in the usual way.

Computation of the prediction standard errors:
- Convert the AR(p) model (ν̂, ρ̂_1, ..., ρ̂_p) to an MA(∞) model {θ̂_0, θ̂_1, θ̂_2, ...}, with θ̂_0 = 1
- Then σ̂_T(h) ≡ σ̂_ε √(θ̂²_0 + ... + θ̂²_{h−1})

Generation of the bootstrap data {y*_1, ..., y*_T}:
- Conditional on {y_1, ..., y_p}, using the AR(p) model
- The AR*(p) model and the σ̂*_T(h) are obtained as in the real world

Generation of the bootstrap data {y*_{T+1}, ..., y*_{T+H}}:
- Conditional on {y_{T−H+1}, ..., y_T}, using the AR(p) model

Generation of the bootstrap path-forecast Ŷ*_T(H):
- Conditional on {y_{T−H+1}, ..., y_T}, using the AR*(p) model

⟹ Employ the bootstrap approach of Pascual et al. (2001).
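A sketch of the MA(∞)-based prediction standard errors, assuming the AR(p) coefficients are already estimated and using the standard recursion for the MA weights (the function name and implementation details are ours):

```python
import numpy as np

def prediction_se(rho, sigma_eps, H):
    """sigma_T(h) = sigma_eps * sqrt(theta_0^2 + ... + theta_{h-1}^2),
    where the MA(inf) weights of the AR(p) model follow the recursion
    theta_0 = 1, theta_j = sum_{i=1}^{min(j,p)} rho_i * theta_{j-i}."""
    p = len(rho)
    theta = np.zeros(H)
    theta[0] = 1.0
    for j in range(1, H):
        theta[j] = sum(rho[i] * theta[j - 1 - i] for i in range(min(p, j)))
    return sigma_eps * np.sqrt(np.cumsum(theta ** 2))

# AR(1) with rho = 0.5: theta_j = 0.5^j, so sigma_T(h) increases with h
# toward the stationary standard deviation.
print(np.round(prediction_se([0.5], sigma_eps=1.0, H=4), 3))
```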



Monte Carlo Details

The model:
- Use an AR(2) model with various parameters and normal errors
- The sample size is T ∈ {100, 400}
- Estimate the lag order from the (bootstrap) data by the BIC

Competing methods:
- Joint Marginals
- Scheffé (S3)
- NP Heuristic
- k-FWE JPR (1) with k ∈ {1, 2, 3}

Nominal coverage level: 1 − α = 90%

Note: a much wider set of simulation results, including non-normal errors, is reported in the paper


Monte Carlo Results I

Empirical joint coverage (%), nominal 90%:

(ρ1, ρ2) = (1.75, −0.85)
                      T = 100              T = 400
                 H=6   H=12  H=24     H=6   H=12  H=24
Joint Marginals  72.1  61.8  49.2     76.2  64.5  48.0
Scheffé          87.9  86.0  64.4     89.2  88.8  66.1
NP Heuristic     89.2  91.5  93.1     89.8  90.7  90.5
1-FWE JPR        90.4  90.5  89.6     89.8  89.7  89.7
2-FWE JPR        90.4  89.8  89.7     89.9  89.8  89.7
3-FWE JPR        90.0  90.3  89.0     90.0  89.7  89.6

(ρ1, ρ2) = (1.25, −0.75)
                 H=6   H=12  H=24     H=6   H=12  H=24
Joint Marginals  63.6  46.1  27.0     65.3  47.1  25.5
Scheffé          63.7  23.2  07.5     66.5  21.6  04.2
NP Heuristic     87.9  86.7  85.8     88.8  87.8  86.0
1-FWE JPR        90.0  89.4  89.3     89.9  89.8  89.9
2-FWE JPR        90.2  89.5  89.5     89.9  89.9  89.8
3-FWE JPR        89.8  89.5  89.3     89.9  89.8  89.7

slide-64
SLIDE 64


Monte Carlo Results II

(ρ1, ρ2) = (−0.65, 0.15)
                    T = 100                T = 400
                 H=6   H=12  H=24      H=6   H=12  H=24
Joint Marginals  65.1  48.9  30.4     64.5  47.2  26.2
Scheffé           2.6   0.2   0.0      2.9   0.1   0.0
NP Heuristic     88.8  87.9  86.8     89.1  88.0  86.1
1-FWE JPR        90.4  90.1  89.7     90.0  90.0  89.7
2-FWE JPR        90.5  89.9  89.8     90.1  90.0  90.0
3-FWE JPR        89.7  89.7  89.6     90.0  89.8  89.8

(ρ1, ρ2) = (−0.7, −0.2)
                 H=6   H=12  H=24      H=6   H=12  H=24
Joint Marginals  59.9  39.5  18.2     59.6  37.3  14.9
Scheffé           3.0   0.1   0.0      1.9   0.1   0.0
NP Heuristic     87.8  86.9  85.3     88.7  87.7  85.5
1-FWE JPR        89.4  89.3  88.7     89.9  89.8  89.8
2-FWE JPR        89.2  89.4  89.8     90.0  90.0  90.0
3-FWE JPR        89.4  89.7  89.8     90.0  90.1  89.9

slide-65
SLIDE 65



Monte Carlo Results: Summary

Joint Marginals: As expected, the performance decreases in H and is poor. Stringing together marginal prediction intervals does not yield a proper JPR.
Scheffé: The performance ranges from acceptable to horrible. It decreases strongly in ρ ≡ ρ1 + ρ2 and in H.
NP Heuristic: The performance ranges from good to acceptable. It decreases slightly in H.
k-FWE JPR: The performance ranges from very good to good. It is remarkably stable over both H and the value of k.
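A rough sketch of the k-FWE calibration idea, assuming a matrix of bootstrap paths is already available: within each bootstrap path, take the k-th largest studentized deviation over the horizons, and use its (1 − α) quantile to scale the region. The flat zero point-forecast and the simple studentization below are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def k_fwe_jpr(paths, point_forecast, k=1, alpha=0.10):
    """Given B bootstrap paths (B x H) and a point path-forecast (H,),
    calibrate a k-FWE joint prediction region: the max over horizons is
    replaced by the k-th largest studentized deviation."""
    sd = paths.std(axis=0, ddof=1)                 # per-horizon scale
    dev = np.abs(paths - point_forecast) / sd      # studentized deviations
    kth_largest = np.sort(dev, axis=1)[:, -k]      # k-th largest over h
    d = np.quantile(kth_largest, 1 - alpha)        # calibration constant
    return point_forecast - d * sd, point_forecast + d * sd

# toy bootstrap paths scattered around a flat point forecast of zero
B, H = 2000, 12
paths = np.cumsum(rng.normal(size=(B, H)), axis=1) * 0.3
lo1, hi1 = k_fwe_jpr(paths, np.zeros(H), k=1)
lo2, hi2 = k_fwe_jpr(paths, np.zeros(H), k=2)
# allowing one miss (k = 2) never widens the region
print(np.all(hi2 <= hi1))  # True
```

Setting k = 1 recovers the usual joint (familywise) criterion; larger k trades a few permitted misses for a tighter region, which is the stability-versus-volume trade-off seen in the results.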

slide-69
SLIDE 69


slide-70
SLIDE 70



Data Set & Methodology

Data set:
  Quarterly data on US real GDP from Q1/1947 until Q3/2011
  The data are seasonally adjusted and expressed in billions of chained 2005 dollars
  We focus on the first differences of the log-series (in percent), which correspond to log quarter-to-quarter growth
  There are a total of 258 observations
  We choose H = 12, which corresponds to a period of three years
  The nominal coverage is 1 − α = 90%
Methodology:
  We use the same AR(p) methodology used in the Monte Carlo study (with the lag order p estimated by the BIC)
  More complex approaches could be used alternatively:
    A nonlinear (SE)TAR model as in Potter (1995)
    A VAR model, using extra variables, as in Stock and Watson (2001)
    Others . . .
  However, our goal is to keep it (acceptably) simple and focus on the relative performances of the various JPRs
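The levels-to-growth transformation above can be sketched in a few lines (the numbers below are toy values standing in for the actual GDP series):

```python
import numpy as np

# toy levels series standing in for real GDP (billions of chained dollars)
gdp = np.array([2000.0, 2010.0, 2025.0, 2018.0, 2040.0])

# log quarter-to-quarter growth in percent: 100 * (log y_t - log y_{t-1})
y = 100 * np.diff(np.log(gdp))
print(np.round(y, 2))
```

A series of n levels yields n − 1 growth observations, which is why 259 quarterly GDP levels give the 258 observations used here.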
slide-72
SLIDE 72


Data Set

Quarterly US real GDP: original series and ∇ log-series:

[Figure: two time-series plots over 1950–2010 — "US Real GDP" (levels) and "US Log Real GDP Growth (in %)"]

slide-73
SLIDE 73



Illustration Exercise

To illustrate the salient features of the various JPRs:
  Use the last T = 120 observations to forecast the future path from Q4/2011 until Q3/2014
  Then compute the corresponding JPRs
Fitting the model:
  The lag order chosen by the BIC is p̂ = 1
  The model fitted by OLS is ŷt+1 = 0.318 + 0.542 · yt
  The bias correction (4) yields the final fitted model ŷt+1 = 0.304 + 0.564 · yt
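Iterating the bias-corrected fit produces the point path-forecast; for an AR(1) the h-step forecasts decay geometrically toward the unconditional mean 0.304/(1 − 0.564) ≈ 0.70. A sketch, with a hypothetical starting value y_T:

```python
c, rho = 0.304, 0.564   # bias-corrected AR(1) fit from the slide
y_T = 2.0               # hypothetical last observed growth rate
H = 12

path = []
prev = y_T
for h in range(H):
    prev = c + rho * prev   # h-step-ahead point forecast
    path.append(prev)

# forecasts converge geometrically to the unconditional mean c / (1 - rho)
print(round(path[-1], 3), round(c / (1 - rho), 3))
```

By h = 12 the forecast is essentially at the unconditional mean, since 0.564^12 is tiny; the JPRs quantify the uncertainty around this decaying path.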

slide-75
SLIDE 75


Illustration Exercise

First set of comparisons:

[Figure: "US Log Real GDP Growth: Path-Forecast and JPRs" over forecast horizon h = 1, . . . , 12 — Path-Forecast, Scheffé, NP Heuristic, 1-FWE JPR]

slide-76
SLIDE 76



Illustration Exercise

Major findings:
  Scheffé has a smaller volume than the other two JPRs
  The width of Scheffé at horizon h monotonically decreases from h = 7 to h = 12, if only slightly
  NP Heuristic and 1-FWE JPR have a comparable volume, but the shape of NP Heuristic is unattractively jagged (which cannot be blamed on a small number of bootstrap repetitions, since we used B = 10,000)

slide-79
SLIDE 79


Illustration Exercise

Second set of comparisons:

[Figure: "US Log Real GDP Growth: Path-Forecast and JPRs" over forecast horizon h = 1, . . . , 12 — Path-Forecast, 3-FWE JPR, 2-FWE JPR, 1-FWE JPR]

slide-80
SLIDE 80


Illustration Exercise

Major finding: The volume of the k-FWE JPR decreases in the value of k.
If the applied researcher is willing to miss up to one (or two) elements of the future path in the JPR (with probability 90%), he obtains a smaller and more informative region in return.

slide-81
SLIDE 81



Backtest Exercise

To get a feel for the out-of-sample performance of the various JPRs:
  Using the stretch {yt, . . . , yt+119} only, compute the JPR for the next H = 12 periods
  Compare the computed JPR against the path (yt+120, . . . , yt+131)′ to evaluate the 'success' in terms of the k-FWE criterion
  Do this for t = 1, . . . , 258 − 120 − 12 = 126
  Then report the empirical coverage probability as the fraction of the 'successes' out of these 126 'trials'

Using this rolling-window approach, we get a fair, if not overly accurate, assessment of the out-of-sample performance.
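The rolling-window backtest can be sketched as follows. The region builder below is a naive per-horizon mean ± 2 standard deviations, a hypothetical stand-in for any of the JPR methods, used only to make the loop runnable; a 'success' means the realized path violates the region at fewer than k horizons:

```python
import numpy as np

rng = np.random.default_rng(3)

def backtest_coverage(y, make_region, window=120, H=12, k=1):
    """Rolling-window backtest: for each start t, build a JPR from
    y[t:t+window] and check the k-FWE 'success' criterion against the
    realized path y[t+window:t+window+H]."""
    successes, trials = 0, 0
    for t in range(len(y) - window - H + 1):
        lo, hi = make_region(y[t:t + window], H)
        future = y[t + window:t + window + H]
        misses = np.sum((future < lo) | (future > hi))
        successes += misses < k
        trials += 1
    return successes / trials

def naive_region(window_data, H):
    """Illustrative stand-in for a real JPR: mean +/- 2 std at every horizon."""
    m, s = window_data.mean(), window_data.std(ddof=1)
    return np.full(H, m - 2 * s), np.full(H, m + 2 * s)

y = rng.normal(size=258)  # toy series, same length as the GDP growth sample
cov = backtest_coverage(y, naive_region)
print(0.0 <= cov <= 1.0)  # True
```

Plugging in the actual JPR constructions for `naive_region` yields the empirical coverages reported in the table that follows.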

slide-83
SLIDE 83


Backtest Exercise

Empirical out-of-sample coverages for US log real GDP growth:

  Method            Coverage
  Joint Marginals   64.6
  Scheffé           73.2
  NP Heuristic      89.7
  1-FWE JPR         89.9
  2-FWE JPR         85.1
  3-FWE JPR         87.3

slide-84
SLIDE 84


slide-85
SLIDE 85



Conclusions

Constructing joint prediction regions (JPRs) for a future path, along with a path-forecast, has not received the attention it deserves so far.
We offer generic bootstrap JPRs that allow the applied researcher to determine the implementation details as he sees fit, given the application at hand.
Compared to two previous proposals, our bootstrap JPRs are shown to be asymptotically consistent, under a mild high-level assumption, and they also enjoy better finite-sample performance.
In addition, we go beyond previous proposals by offering the more flexible k-FWE criterion: if the applied researcher is willing to miss a small number of elements of the future path, he is afforded a smaller, more informative region in return.

slide-89
SLIDE 89

References

Bowden, D. C. (1970). Simultaneous confidence bands for linear regression

  • models. Journal of the American Statistical Association, 65(329):413–421.

Greenspan, A. (2003). Remarks at a symposium sponsored by the Federal Reserve Bank of Kansas City, Jackson Hole, Wyoming on August 29, 2003. Available at http://www.federalreserve.gov/boarddocs/speeches/2003/20030829/. Jord` a, `

  • O. and Marcellino, M. G. (2010). Path-forecast evaluation. Journal of

Applied Econometrics, 25:635–662. Pascual, L., Romo, J., and Ruiz, E. (2001). Effects of parameter estimation on prediction densities: a bootstrap approach. International Journal of Forecasting, 17(1):83–103. Potter, S. M. (1995). A nonlinear approach to US GNP. Journal of Applied Econometrics, 2:109–125. Staszewska-Bystrova, A. (2010). Bootstrap prediction bands for forecast paths from vector autoregressive models. Journal of Forecasting, Online Version, DOI:10.1002/for.1205. Stock, J. H. and Watson, M. W. (2001). Vector autoregressions. Journal of Economic Perspectives, 15(4):101–115. White, J. (1961). Asymptotic expansions for the mean and variance of the serial correlation coefficient. Biometrika, 48:85–95.