Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm
Abstract: We study estimation of large approximate dynamic factor models implemented through the Expectation Maximization (EM) algorithm combined with the Kalman smoother. We prove that as both the cross-sectional dimension, $n$, and the sample size, $T$, diverge to infinity: (i) the estimated loadings are $\sqrt T$-consistent, asymptotically normal, and equivalent to their Quasi Maximum Likelihood estimates; (ii) the estimated factors are $\sqrt n$-consistent, asymptotically normal, and equivalent to their Weighted Least Squares estimates. Moreover, the estimated loadings are asymptotically as efficient as those obtained by Principal Components analysis, while the estimated factors are more efficient if the idiosyncratic covariance is sparse enough.
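The estimation strategy the abstract refers to can be sketched in simplified form: a one-factor model $x_t = \Lambda f_t + e_t$, $f_t = a f_{t-1} + u_t$, fitted by alternating a Kalman filter/RTS smoother (E-step) with closed-form parameter updates (M-step), initialized by principal components. This is a minimal illustration under strong simplifying assumptions (a single factor, spherical idiosyncratic noise $e_t \sim N(0, \sigma^2 I_n)$), not the paper's general approximate-factor-structure estimator; all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a one-factor dynamic factor model (hypothetical parameters) ---
n, T = 20, 300
a_true = 0.7
lam_true = rng.normal(1.0, 0.3, size=n)
f_true = np.zeros(T)
for t in range(1, T):
    f_true[t] = a_true * f_true[t - 1] + rng.normal()
X = np.outer(f_true, lam_true) + 0.5 * rng.normal(size=(T, n))  # T x n panel

def e_step(X, lam, a, sig2, q):
    """Kalman filter + RTS smoother for a scalar factor with n observables."""
    T = X.shape[0]
    fp, Pp = np.zeros(T), np.zeros(T)       # one-step-ahead predictions
    ff, Pf = np.zeros(T), np.zeros(T)       # filtered estimates
    f_prev, P_prev = 0.0, q / (1.0 - a**2)  # stationary initialization
    ll = lam @ lam / sig2
    for t in range(T):
        fp[t], Pp[t] = a * f_prev, a**2 * P_prev + q
        Pf[t] = 1.0 / (1.0 / Pp[t] + ll)    # information-form update
        ff[t] = Pf[t] * (fp[t] / Pp[t] + lam @ X[t] / sig2)
        f_prev, P_prev = ff[t], Pf[t]
    fs, Ps = ff.copy(), Pf.copy()           # smoothed moments (RTS backward pass)
    Pl = np.zeros(T)                        # lag-one smoothed covariances
    for t in range(T - 2, -1, -1):
        J = a * Pf[t] / Pp[t + 1]
        fs[t] = ff[t] + J * (fs[t + 1] - fp[t + 1])
        Ps[t] = Pf[t] + J**2 * (Ps[t + 1] - Pp[t + 1])
        Pl[t + 1] = J * Ps[t + 1]
    return fs, Ps, Pl

def m_step(X, fs, Ps, Pl):
    """Closed-form updates from the smoothed first and second moments."""
    Ef2 = fs**2 + Ps                        # E[f_t^2 | X]
    Eff = fs[1:] * fs[:-1] + Pl[1:]         # E[f_t f_{t-1} | X]
    lam = (X.T @ fs) / Ef2.sum()
    sig2 = np.mean(X**2 - 2 * X * np.outer(fs, lam) + np.outer(Ef2, lam**2))
    a = Eff.sum() / Ef2[:-1].sum()
    q = np.mean(Ef2[1:] - 2 * a * Eff + a**2 * Ef2[:-1])
    return lam, a, sig2, q

# --- initialize loadings by principal components, then iterate EM ---
_, vecs = np.linalg.eigh(X.T @ X / T)
lam, a, sig2, q = vecs[:, -1], 0.5, X.var(), 1.0
for _ in range(50):
    fs, Ps, Pl = e_step(X, lam, a, sig2, q)
    lam, a, sig2, q = m_step(X, fs, Ps, Pl)

corr = abs(np.corrcoef(fs, f_true)[0, 1])  # sign of the factor is not identified
print(round(corr, 3))  # smoothed factor tracks the simulated factor closely
```

The information-form filtering update exploits the fact that with a scalar factor, the $n$-dimensional observation block collapses to the scalar precision term `lam @ lam / sig2`, avoiding any $n \times n$ inversion; the paper's setting, with a non-diagonal idiosyncratic covariance, requires the general Kalman recursions instead.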