Fisher Information Matrix Scores
- Fisher Information Matrix (FIM) scores are quantitative measures derived from the local curvature of the likelihood, guiding uncertainty assessment in parameter estimation.
- They distinguish between observed information (sample-based Hessian) and expected information (average over the data distribution), affecting coverage properties.
- The expected FIM is shown to yield smaller mean-squared coverage error for confidence intervals than the observed FIM under regularity conditions.
The Fisher Information Matrix (FIM) is a central object in asymptotic theory and interval estimation for parametric models, providing local curvature information about the likelihood and thus quantifying parameter uncertainty. "FIM scores" typically refer to the quantitative entries of the FIM or its inverse and, by extension, to the resulting measures of statistical precision or confidence in inference tasks. Two primary forms of the Fisher information matrix arise in practice: the observed FIM, computed directly from the sample via the Hessian of the log-likelihood at the maximum-likelihood estimate (MLE), and the expected FIM, computed as the expectation of the observed information over the data-generating distribution. A long-standing question in theory and practice concerns which form yields more accurate coverage properties for confidence regions or intervals constructed via asymptotic normality of the MLE.
1. Formal Definitions: Observed and Expected FIM
Let $X_1, \ldots, X_n$ be independent random variables with joint likelihood $\prod_{i=1}^{n} f_i(x_i \mid \theta)$ and log-likelihood $L(\theta) = \sum_{i=1}^{n} \log f_i(x_i \mid \theta)$. The multivariate parameter is $\theta \in \mathbb{R}^p$.
The two principal forms of the Fisher information matrix are:
- Observed information at $\theta$:
$$H_n(\theta) = -\frac{\partial^2 L(\theta)}{\partial \theta\, \partial \theta^{\mathsf T}}$$
- Expected information at $\theta$:
$$F_n(\theta) = E\!\left[-\frac{\partial^2 L(\theta)}{\partial \theta\, \partial \theta^{\mathsf T}}\right]$$
At the MLE $\hat\theta_n$, one typically uses the plug-in observed information $H_n(\hat\theta_n)$ for the observed form and the plug-in expected information $F_n(\hat\theta_n)$ for the expected form.
The inverse of either matrix provides the standard asymptotic (plug-in) estimator for the covariance matrix of the MLE.
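As a concrete illustration (not from the paper), consider the Cauchy location model, a classic case where the observed and expected information genuinely differ at the MLE. The sketch below assumes the density $f(x \mid \theta) = 1/\big(\pi(1 + (x-\theta)^2)\big)$, whose per-observation expected information is exactly $1/2$, and computes both quantities from a simulated sample:

```python
import numpy as np

# Cauchy location model: log f(x | theta) = -log(pi) - log(1 + (x - theta)^2).
# Per-observation expected Fisher information is 1/2, so F_n = n / 2;
# the observed information depends on the realized sample.
rng = np.random.default_rng(0)
theta_true, n = 1.0, 200
x = theta_true + rng.standard_cauchy(n)

# MLE via a fine grid search around the sample median (a robust starting region).
grid = np.linspace(np.median(x) - 1.0, np.median(x) + 1.0, 20001)
loglik = -np.log1p((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
theta_hat = grid[np.argmax(loglik)]

# Observed information: negative second derivative of the log-likelihood at the MLE.
r = x - theta_hat
observed_info = np.sum((2.0 - 2.0 * r**2) / (1.0 + r**2) ** 2)

# Expected information for this model (exact): n * (1/2).
expected_info = n / 2.0

print(f"MLE = {theta_hat:.3f}, observed = {observed_info:.2f}, expected = {expected_info:.2f}")
```

Here the expected information is a fixed number, while the observed information fluctuates with the sample; that data-induced fluctuation is exactly what the coverage comparison below turns on.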
2. Asymptotic Normality and Confidence Region Construction
Classical large-sample theory, under standard regularity conditions (smoothness of the likelihood, identifiability, and interchange of differentiation and expectation), yields the following expansion for the score at the MLE:
$$0 = \frac{\partial L(\hat\theta_n)}{\partial \theta} = \frac{\partial L(\theta^*)}{\partial \theta} + \frac{\partial^2 L(\tilde\theta)}{\partial \theta\, \partial \theta^{\mathsf T}}\,(\hat\theta_n - \theta^*)$$
for some $\tilde\theta$ between $\hat\theta_n$ and the true parameter $\theta^*$. Rearrangement and scaling by $\sqrt{n}$ gives
$$\sqrt{n}\,(\hat\theta_n - \theta^*) = \left[-\frac{1}{n}\,\frac{\partial^2 L(\tilde\theta)}{\partial \theta\, \partial \theta^{\mathsf T}}\right]^{-1} \frac{1}{\sqrt{n}}\,\frac{\partial L(\theta^*)}{\partial \theta}.$$
By the central limit theorem and consistency, as $n \to \infty$:
$$\frac{1}{\sqrt{n}}\,\frac{\partial L(\theta^*)}{\partial \theta} \xrightarrow{d} N\!\big(0, \bar F(\theta^*)\big),$$
$$-\frac{1}{n}\,\frac{\partial^2 L(\tilde\theta)}{\partial \theta\, \partial \theta^{\mathsf T}} \xrightarrow{p} \bar F(\theta^*),$$
where $\bar F(\theta^*)$ denotes the limiting average Fisher information. So
$$\sqrt{n}\,(\hat\theta_n - \theta^*) \xrightarrow{d} N\!\big(0, \bar F(\theta^*)^{-1}\big).$$
Approximate confidence intervals for a scalar parameter $\theta_i$ (the $i$th component of $\theta$) take the form:
$$\hat\theta_{n,i} \pm z_{1-\alpha/2}\,\hat\sigma_i,$$
where $z_{1-\alpha/2}$ is the standard normal quantile. The variance term $\hat\sigma_i^2$ is typically taken as the $(i,i)$ entry of the inverse of either the observed information $H_n(\hat\theta_n)$ or the expected information $F_n(\hat\theta_n)$.
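To make the plug-in construction concrete, here is a minimal sketch with made-up numbers for the MLE component and the inverse-FIM entry (illustrative values, not from the paper):

```python
import math

# Hypothetical values for illustration: the i-th MLE component and the (i, i)
# entry of an inverse information matrix (observed or expected) at the MLE.
theta_hat_i = 1.07
inv_info_ii = 0.0102          # plug-in variance estimate for theta_hat_i
alpha = 0.05
z = 1.959963984540054         # standard normal quantile z_{1 - alpha/2}

half_width = z * math.sqrt(inv_info_ii)
ci = (theta_hat_i - half_width, theta_hat_i + half_width)
print(f"{(1 - alpha):.0%} CI for theta_i: ({ci[0]:.4f}, {ci[1]:.4f})")
```

The only choice the practitioner makes is which matrix the variance estimate `inv_info_ii` comes from; the interval recipe is otherwise identical.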
3. MSE Criterion for FIM-Based Interval Coverage
Rather than comparing confidence interval lengths, Jiang & Spall focus on the actual coverage probability: how closely the constructed interval attains the nominal confidence level, on average. For component $i$, let $\sigma_i^2$ denote the true (unknown) asymptotic variance of $\hat\theta_{n,i}$, and use
$$C_i(\hat\sigma_i^2) = P\!\left(\theta_i^* \in \left[\hat\theta_{n,i} - z_{1-\alpha/2}\,\hat\sigma_i,\ \hat\theta_{n,i} + z_{1-\alpha/2}\,\hat\sigma_i\right]\right)$$
to denote the realized coverage when $\hat\sigma_i^2$ is the plug-in variance estimate (taken from the inverse of either the observed or the expected FIM).
Define the mean-squared error (MSE) of the coverage error for each estimator:
$$\mathrm{MSE} = E\!\left[\big(C_i(\hat\sigma_i^2) - (1 - \alpha)\big)^2\right].$$
A smaller MSE indicates that the approximate interval more closely (in the mean-squared sense) attains its nominal coverage.
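The coverage-error MSE criterion can be illustrated with a rough Monte Carlo sketch (my own construction, not the paper's procedure): for the Cauchy location model, approximate the realized coverage of each plug-in interval by $2\Phi(z_{1-\alpha/2}\,\hat\sigma/\sigma) - 1$, where $\Phi$ is the standard normal CDF and $\sigma$ the true asymptotic standard deviation. This treats the MLE as approximately normal and independent of the variance estimate, a simplifying assumption:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, reps, alpha = 100, 500, 0.05
z = 1.959963984540054
sigma_true = math.sqrt(2.0 / n)   # asymptotic sd of the Cauchy-location MLE

def std_normal_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

mse_obs, mse_exp = 0.0, 0.0
for _ in range(reps):
    x = rng.standard_cauchy(n)                     # true location is 0
    # MLE via grid search around the sample median
    grid = np.linspace(np.median(x) - 1.0, np.median(x) + 1.0, 2001)
    loglik = -np.log1p((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
    theta_hat = grid[np.argmax(loglik)]

    r = x - theta_hat
    obs_info = max(np.sum((2.0 - 2.0 * r**2) / (1.0 + r**2) ** 2), 1e-8)
    exp_info = n / 2.0                             # exact expected information

    for info, is_obs in ((obs_info, True), (exp_info, False)):
        sd_hat = math.sqrt(1.0 / info)
        realized = 2.0 * std_normal_cdf(z * sd_hat / sigma_true) - 1.0
        err2 = (realized - (1.0 - alpha)) ** 2
        if is_obs:
            mse_obs += err2 / reps
        else:
            mse_exp += err2 / reps

print(f"coverage-error MSE: observed = {mse_obs:.2e}, expected = {mse_exp:.2e}")
```

In this toy setting the expected-information variance estimate is nonrandom, so its coverage error has essentially zero variance, while the observed-information estimate inherits sampling noise; the comparison mirrors the direction of the theorem below, though the simulation is only a caricature of the formal result.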
4. Main Theorem: Superiority of the Expected FIM in Coverage MSE
Theorem (Jiang & Spall):
Under the paper's regularity assumptions (existence and boundedness of derivatives, LLN and CLT for i.n.i.d. data, etc.),
$$\mathrm{MSE}_{\text{expected}}(i) \;\le\; \mathrm{MSE}_{\text{observed}}(i) \quad \text{for every component } i,$$
where the subscripts indicate whether the plug-in variance comes from the expected or the observed FIM. If the observed and expected information matrices differ nontrivially in the limit, this inequality is strict.
Interpretation:
Asymptotically, confidence intervals constructed with the expected FIM never have larger mean-squared coverage error than those constructed with the observed FIM, and typically perform strictly better component-wise.
Proof Sketch
- MLE Error Expansion:
$$\hat\theta_n - \theta^* \approx F_n(\theta^*)^{-1}\,\frac{\partial L(\theta^*)}{\partial \theta},$$
so each component of the estimation error is a linear combination of standardized (asymptotically normal) scores, with weights given by entries of $F_n(\theta^*)^{-1}$ (here $F_n$ and $H_n$ denote the expected and observed information, and $\theta^*$ the true parameter).
- Inverse Matrix Expansions:
$$H_n(\hat\theta_n)^{-1} = F_n(\theta^*)^{-1} + F_n(\theta^*)^{-1} D_n\, F_n(\theta^*)^{-1} + \text{higher-order terms},$$
$$F_n(\hat\theta_n)^{-1} = F_n(\theta^*)^{-1} + \text{higher-order terms},$$
with $D_n = F_n(\theta^*) - H_n(\theta^*)$ having zero mean and controlled variance.
- Taylor Expansion of the Coverage Function:
The realized coverage $C_i(\hat\sigma_i^2)$ (coverage probability as a function of the plug-in variance) is expanded around the true asymptotic variance $\sigma_i^2$:
$$C_i(\hat\sigma_i^2) \approx (1-\alpha) + C_i'(\sigma_i^2)\,(\hat\sigma_i^2 - \sigma_i^2),$$
so the coverage error is, to first order, proportional to the error in the plug-in variance.
- Combine Terms to Compare Mean-Squared Errors:
The difference in MSEs is driven by the variance of the fluctuation term $D_n$, which is absent from the expected-FIM plug-in but present in the observed-FIM plug-in.
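The inverse-expansion step can be sanity-checked numerically: for a symmetric positive-definite matrix $F$ and a small symmetric perturbation $D$ (so that $H = F - D$), the one-term correction $F^{-1} + F^{-1} D F^{-1}$ approximates $H^{-1}$ to higher order than $F^{-1}$ alone. A small sketch with made-up matrices (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3

# Made-up "expected information" F: symmetric positive definite.
A = rng.normal(size=(p, p))
F = A @ A.T + p * np.eye(p)

# Small symmetric fluctuation D; "observed information" H = F - D.
D = 0.01 * rng.normal(size=(p, p))
D = (D + D.T) / 2.0
H = F - D

F_inv = np.linalg.inv(F)
H_inv = np.linalg.inv(H)
first_order = F_inv + F_inv @ D @ F_inv   # one-term Neumann expansion of H^{-1}

err_zeroth = np.linalg.norm(H_inv - F_inv)        # O(||D||)
err_first = np.linalg.norm(H_inv - first_order)   # O(||D||^2)
print(f"zeroth-order error = {err_zeroth:.2e}, first-order error = {err_first:.2e}")
```

The leading-order discrepancy between the two inverses is the linear term in $D$, which is exactly the zero-mean fluctuation the proof sketch isolates.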
5. Practical Implications and Application Guidelines
- Interval Estimation:
For constructing confidence intervals or regions based on the asymptotic normality of the MLE, the user must choose to "score" parameter uncertainty with either the observed Hessian or the expected Fisher information.
- Empirical Recommendation:
Jiang & Spall's result establishes that, under regularity, using the expected FIM never worsens and generally improves the MSE of coverage. Whenever a closed-form or accurate numerical estimate of the expected FIM is available, its use in the construction of confidence intervals or regions is strongly justified.
- Exceptions and Variations:
In scalar cases with available ancillary statistics (Efron & Hinkley), observed information conditional on those statistics can sometimes yield even better coverage. However, for multivariate and i.n.i.d. settings, the expected FIM is at least as good as, and often strictly better than, the observed FIM for mean-squared coverage error.
- Moderate and Finite Sample Sizes:
While the superiority of the expected FIM manifests asymptotically as the sample size $n$ increases, the observed FIM may fluctuate more in small samples due to data-induced noise. When the expected FIM can be computed efficiently, it is typically preferred in practice for "scoring" uncertainty.
6. Broader Context and Impact
The findings of Jiang & Spall directly challenge a persistent heuristic preference for the observed FIM in finite-sample estimation without conditional ancillarity. The result is robust across multivariate, non-i.i.d. scenarios and is not limited to specific model classes. In application domains—such as precision forecasting, uncertainty quantification, and experimental design—adopting the expected FIM for uncertainty "scoring" in MLE-based inference yields intervals whose empirical coverage is at least as accurate as, and often more accurate than, those from observed-FIM-based procedures.
Concisely: In all regular settings where the plug-in asymptotic normal confidence region is used to quantify parameter uncertainty, the expected FIM should be used in place of the observed FIM—inverting the expected information at the MLE yields, per parameter, intervals that realize coverage probabilities no worse, and generally better, than their observed-FIM-based counterparts in mean-squared error. This conclusion now stands on rigorous asymptotic and componentwise grounds (Jiang, 2021).