State-Varying Factor Models of Large Dimensions



  1. State-Varying Factor Models of Large Dimensions. Markus Pelger and Ruoxuan Xiong, Stanford University. European Econometric Society Meeting, August 27, 2018.

  2. Introduction: Motivation. Conventional large-dimensional latent factor models assume that the exposures to the factors (the factor loadings) are constant over time. Observation: asset prices' exposures to the market (and other risk factors) are time-varying. Example: term-structure factor exposures differ between recessions and booms. Figure: PCA factor loadings for Treasuries in boom and recession, panels (a) level factor, (b) slope factor, (c) curvature factor.

  3. Introduction: This Paper. Research questions: (1) find latent factors and loadings that are state-dependent; (2) test whether the factor model is state-dependent. Key elements of the estimator: (1) statistical factors instead of pre-specified (and potentially mis-specified) factors; (2) information from large panel data sets, with many cross-sectional units and many time observations; (3) a factor structure that can vary over time as a general non-linear function of the state process.

  4. Introduction: Contribution of This Paper. Theoretical: a PCA estimator combined with a kernel projection for the factors, state-varying factor loadings, and common components; asymptotic inferential theory for the estimators as N, T → ∞ (consistency, normal distribution, and standard errors); a test for state-dependency of the latent factor model, where a generalized correlation test statistic detects for which states the model changes and exhibits non-standard superconsistency. Empirical: state-dependency of the factor loadings of US Treasury securities.

  5. Literature (partial list). Large-dimensional factor models with constant loadings: Bai (2003), distribution theory; Fan et al. (2013), sparse matrices in factor modeling. Large-dimensional factor models with time-varying loadings: Su and Wang (2017), local time-window; Pelger (2018) and Aït-Sahalia and Xiu (2017), high-frequency; Fan et al. (2016), projected PCA. Large-dimensional factor models with structural breaks: Stock and Watson (2009), inconsistency; Breitung and Eickmeier (2011) and Chen et al. (2014), detection.

  6. Model: The Model. State-varying factor model: X_it is the observed data for the i-th cross-sectional unit at time t, and S_t is the state variable at time t:
     X_{it} = \Lambda_i(S_t) F_t + e_{it},   i = 1, ..., N,  t = 1, ..., T,
  with 1 × r loadings \Lambda_i(S_t), r × 1 factors F_t, and idiosyncratic components e_{it}. There are N cross-sectional units (large), a time horizon T (large), and r systematic factors (fixed). The factors F, the loadings Λ(S_t), and the idiosyncratic components e are unknown; the data X and the state process S_t are observed.
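To fix ideas, here is a minimal simulation sketch of this data-generating process (not from the paper); the tanh loading function, the dimensions, and the noise level are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the state-varying factor model X_it = Lambda_i(S_t) F_t + e_it.
# The loading function, dimensions, and distributions below are illustrative assumptions.
rng = np.random.default_rng(0)
N, T, r = 100, 500, 2                      # cross-sectional units, periods, factors

S = rng.normal(size=T)                     # observed state process S_t
F = rng.normal(size=(T, r))                # latent factors F_t

# Illustrative loading function Lambda_i(s): smooth and nonlinear in the state
Lambda0 = rng.normal(size=(N, r))
Lambda1 = rng.normal(size=(N, r))
def loadings(s):
    return Lambda0 + Lambda1 * np.tanh(s)  # N x r loadings evaluated at state s

e = 0.5 * rng.normal(size=(T, N))          # idiosyncratic noise (iid here for simplicity)
X = np.vstack([loadings(S[t]) @ F[t] + e[t] for t in range(T)])  # T x N observed panel
```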

  7. Model: Examples (with one factor) that are equivalent to a multi-factor representation.
  Loadings linear in the state, \Lambda_i(S_t) = \Lambda_{i,1} + \Lambda_{i,2} S_t:
     X_{it} = \Lambda_{i,1} \underbrace{F_t}_{F_{t,1}} + \Lambda_{i,2} \underbrace{(S_t F_t)}_{F_{t,2}} + e_{it}.
  Loadings nonlinear in a discrete state, \Lambda_i(S_t) = g_i(S_t) with S_t ∈ {s_1, s_2}:
     X_{it} = \underbrace{g_i(s_1)}_{\Lambda_{i,1}} \underbrace{\mathbf{1}\{S_t = s_1\} F_t}_{F_{t,1}} + \underbrace{g_i(s_2)}_{\Lambda_{i,2}} \underbrace{\mathbf{1}\{S_t = s_2\} F_t}_{F_{t,2}} + e_{it}.
  Our model: loadings nonlinear in a non-discrete state, \Lambda_i(S_t) = g_i(S_t) with a continuous distribution function for S_t ⇒ a multi-factor representation is cumbersome or does not exist.
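A quick numerical illustration of the first example (all simulated values are assumptions, not from the paper): with one factor and loadings linear in the state, the panel admits an exact constant-loading two-factor representation with factors (F_t, S_t F_t).

```python
import numpy as np

# One factor with loadings linear in the state vs. the equivalent two-factor model.
rng = np.random.default_rng(1)
N, T = 50, 200
S = rng.normal(size=T)                      # state S_t
F = rng.normal(size=T)                      # single latent factor F_t
L1, L2 = rng.normal(size=N), rng.normal(size=N)
e = 0.1 * rng.normal(size=(T, N))

# State-varying one-factor form: X_it = (L1_i + L2_i S_t) F_t + e_it
X = np.vstack([(L1 + L2 * S[t]) * F[t] for t in range(T)]) + e

# Constant-loading two-factor form with factors (F_t, S_t F_t) and loadings (L1_i, L2_i)
F2 = np.column_stack([F, S * F])
Lam2 = np.column_stack([L1, L2])
X_two_factor = F2 @ Lam2.T + e

print(np.allclose(X, X_two_factor))         # True: the two representations coincide
```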

  8. Assumptions: Main Assumptions. Approximate state-varying factor model: systematic factors explain a large portion of the variance, and idiosyncratic risk is nonsystematic, with weak time-series and cross-sectional correlation. State: recurrent (infinitely many observations around the state to condition on) with a continuous stationary density. Factor loadings: deterministic functions of the state that are Lipschitz continuous (so observations at nearby states are useful):
     \exists C : \|\Lambda_i(s + \Delta s) - \Lambda_i(s)\| \le C |\Delta s|.

  9. Assumptions: Robustness to Noise in the State Process. The observed state process is a noisy approximation of the underlying state process (e.g. an omitted state):
     X_{it} = (\Lambda_i(S_t) + \varepsilon_{it})^\top F_t + e_{it},   i = 1, ..., N,  t = 1, ..., T,
  or in vector notation
     X_t = (\Lambda(S_t) + E_t) F_t + e_t = \Lambda(S_t) F_t + \psi_t + e_t,
  where X_t, e_t, and \psi_t = E_t F_t are N × 1, \Lambda(S_t) and E_t are N × r, and F_t is r × 1. Under weak conditions the noise in the state process can be treated like idiosyncratic noise ⇒ all results hold.

  10. Estimation: Intuition for Estimation. Constant loadings: the loadings are principal components of the covariance matrix
     Cov(X_t) = \Lambda \, Cov(F_t) \, \Lambda^\top + Cov(e_t).
  State-varying loadings: the loadings for S_t = s are principal components of the covariance matrix conditioned on the state S_t = s:
     Cov(X_t \mid S_t = s) = \Lambda(s) \, Cov(F_t \mid S_t = s) \, \Lambda(s)^\top + Cov(e_t \mid S_t = s).
  ⇒ Intuition: estimate the conditional covariance matrix Cov(X_t | S_t = s) with a kernel projection and apply PCA to it.
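A short sketch of this intuition (the T x N storage convention, the Gaussian kernel, and the bandwidth are assumptions for illustration): kernel-weight the observations in the state to estimate the conditional second-moment matrix, whose leading principal components span the state-s loadings.

```python
import numpy as np

# Kernel-weighted (uncentered) estimate of Cov(X_t | S_t = s); its top principal
# components span the state-s loadings. X is T x N; h is an illustrative bandwidth.
def conditional_cov(X, S, s, h=0.2):
    w = np.exp(-0.5 * ((S - s) / h) ** 2)   # Gaussian kernel weights in the state
    return (X.T * w) @ X / w.sum()          # N x N conditional covariance estimate

# Example: directions of the state-s loadings from the top-r eigenvectors
# eigval, eigvec = np.linalg.eigh(conditional_cov(X, S, s))
# loading_directions = eigvec[:, -r:]
```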

  11. Estimation: Nonparametric Estimation. Objective function: the estimators minimize the mean squared error conditioned on the state,
     \hat{F}^s, \hat{\Lambda}(s) = \arg\min_{F^s, \Lambda(s)} \frac{1}{N T(s)} \sum_{i=1}^{N} \sum_{t=1}^{T} K_s(S_t) (X_{it} - \Lambda_i(s)' F_t)^2.
  Kernel function: K_s(S_t) = \frac{1}{h} K\left(\frac{S_t - s}{h}\right), e.g. the Gaussian kernel K(u) = \frac{1}{\sqrt{2\pi}} \exp\{-u^2/2\}.
  T(s) = \sum_{t=1}^{T} K_s(S_t), and T(s)/T \to_p \pi(s), the stationary density at S_t = s.
  The bandwidth parameter h determines the local "state window".
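To illustrate the claim that T(s)/T converges to the stationary density π(s), here is a small numerical check with an assumed AR(1) state whose stationary distribution is N(0,1); the persistence, bandwidth, and target state are illustrative choices.

```python
import numpy as np

# Numerical check that T(s)/T approximates pi(s), the stationary density at s,
# for an illustrative stationary AR(1) state with N(0,1) stationary distribution.
rng = np.random.default_rng(2)
T, rho, h, s = 100_000, 0.8, 0.05, 0.5
S = np.empty(T)
S[0] = rng.normal()
for t in range(1, T):
    S[t] = rho * S[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()

w = np.exp(-0.5 * ((S - s) / h) ** 2) / (h * np.sqrt(2 * np.pi))   # K_s(S_t)
T_s = w.sum()                                                       # T(s)
pi_s = np.exp(-0.5 * s ** 2) / np.sqrt(2 * np.pi)                   # stationary density at s
print(T_s / T, pi_s)      # the two numbers should be close for large T and small h
```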

  12. Estimation: Nonparametric Estimation (continued). Project the square root of the kernel onto the data and the factors:
     X^s_{it} = K_s^{1/2}(S_t) X_{it},   F^s_t = K_s^{1/2}(S_t) F_t.
  PCA solves the optimization problem
     \hat{F}^s, \hat{\Lambda}(s) = \arg\min_{F^s, \Lambda(s)} \frac{1}{N T(s)} \sum_{t=1}^{T} \sum_{i=1}^{N} (X^s_{it} - \Lambda_i(s)' F^s_t)^2.
  ⇒ Apply PCA to the estimated conditional covariance matrix: \hat{F}^s are the eigenvectors corresponding to the top k eigenvalues of \frac{1}{N T(s)} (X^s)' X^s, and \hat{\Lambda}(s) are the coefficients from regressing X^s on \hat{F}^s.
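A compact sketch of this estimator (not the paper's code): project the square root of the kernel onto the data, take the leading principal components of the kernel-weighted panel, and regress the projected data on the estimated factors. The T x N storage convention, the eigenvector scaling, and the bandwidth are illustrative assumptions.

```python
import numpy as np

# Kernel-projected PCA estimator for the state-s factors and loadings.
# X is T x N; the factor scaling and the bandwidth h are illustrative choices.
def state_varying_pca(X, S, s, r, h=0.2):
    T, N = X.shape
    w = np.exp(-0.5 * ((S - s) / h) ** 2) / (h * np.sqrt(2 * np.pi))  # K_s(S_t)
    T_s = w.sum()                                                      # T(s)
    Xs = np.sqrt(w)[:, None] * X        # projected data X^s_it = K_s(S_t)^{1/2} X_it
    # PCA on the kernel-weighted data: eigenvectors for the top r eigenvalues
    eigval, eigvec = np.linalg.eigh(Xs @ Xs.T / (N * T_s))
    Fs = np.sqrt(T_s) * eigvec[:, -r:][:, ::-1]      # estimated F^s (T x r, up to rotation)
    # Loadings: regression coefficients of X^s on the estimated factors
    Lam_s, *_ = np.linalg.lstsq(Fs, Xs, rcond=None)
    return Fs, Lam_s.T                                # F^s_t and Lambda(s) (N x r)
```

As the consistency result on the next slide makes explicit, these estimates are only identified up to a full-rank rotation H^s.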

  13. Asymptotic Results: Nonparametric Estimation. Major challenge: a bias term arises from using nearby states,
     X^s_t = \Lambda(S_t) F^s_t + e^s_t = \underbrace{\Lambda(s) F^s_t + e^s_t}_{\bar{X}^s_t} + \underbrace{(\Lambda(S_t) - \Lambda(s)) F^s_t}_{\Delta X^s_t},
  with \Delta X^s_{it} = \Lambda_i(S_t) F^s_t - \Lambda_i(s) F^s_t = O_p(h). The kernel bias complicates the problem and lowers the convergence rates.
  Theorem (Consistency). Assume N, Th \to \infty and \delta_{NT,h} h \to 0, where \delta_{NT,h} = \min(\sqrt{N}, \sqrt{Th}). Then
     \delta_{NT,h}^2 \, \frac{1}{T} \sum_{t=1}^{T} \big\| \hat{F}^s_t - (H^s)^\top F^s_t \big\|^2 = O_p(1),
     \delta_{NT,h}^2 \, \frac{1}{N} \sum_{i=1}^{N} \big\| \hat{\Lambda}_i(s) - (H^s)^{-1} \Lambda_i(s) \big\|^2 = O_p(1),
  for a known full-rank matrix H^s.

  14. Asymptotic Results: Limiting Distribution of Estimated Factors.
  Theorem (Factors). Assume \sqrt{N}h/(Th) \to 0, Nh \to \infty, and Nh^2 \to 0. Then
     \sqrt{N} \left( K_s^{-1/2}(S_t) \hat{F}^s_t - (H^s)' F_t \right) = (V^s_{Nr})^{-1} \frac{(\hat{F}^s)' F^s}{T} \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \Lambda_i(s) e_{it} + o_p(1) \xrightarrow{D} N\left(0, (V^s)^{-1} Q^s \Gamma^s_t (Q^s)' (V^s)^{-1}\right),
  with rotation matrix H^s = \frac{\Lambda(s)' \Lambda(s)}{N} \frac{(F^s)' \hat{F}^s}{T} (V^s_{Nr})^{-1}.
  K_s^{-1/2}(S_t) \hat{F}^s_t converges to some rotation of F_t at the rate \sqrt{N}. Efficiency mainly depends on the asymptotic distribution of \frac{1}{\sqrt{N}} \sum_{i=1}^{N} \Lambda_i(s) e_{it}.
