
Observed-Information Variance Estimator

Updated 30 January 2026
  • The observed-information-based variance estimator is a statistical tool that computes the inverse of the observed Fisher information at the MLE to approximate variance with high accuracy in finite samples.
  • It outperforms the expected Fisher information estimator by reducing approximation error rates, as demonstrated through rigorous theoretical and empirical analysis.
  • The methodology supports advanced adaptive designs such as LOAD and MOAD, boosting efficiency in sequential experiments and diverse inferential applications.

An observed-information-based variance estimator is a statistical estimator that leverages the inverse of the observed Fisher information matrix, computed at the maximum likelihood estimator (MLE), to approximate the sampling variance of the MLE in parametric models. This approach is distinguished from traditional methods that use the expected Fisher information, offering notable advantages in finite-sample accuracy and adaptability in sequential or adaptive experimental designs. The methodology has significant implications for the design of experiments, sequential analysis, and statistical efficiency in a variety of inference contexts.

1. Definition and Theoretical Foundations

Consider a parametric statistical model with log-likelihood function $\ell(\theta)$, based on an independent sample of size $n$. The observed Fisher information matrix at parameter value $\theta$ is

$$I_{\mathrm{obs}}(\theta) = -\frac{\partial^2 \ell(\theta)}{\partial\theta\,\partial\theta^\top}$$

evaluated at $\theta = \hat{\theta}$, where $\hat{\theta}$ is the MLE. For models with $\ell(\theta) = \sum_{i=1}^n \log f(y_i \mid \theta)$,

$$I_{\mathrm{obs}}(\hat{\theta}) = -\sum_{i=1}^n \frac{\partial^2}{\partial\theta\,\partial\theta^\top} \log f(y_i \mid \theta)\,\bigg|_{\theta=\hat{\theta}}.$$

Under mild regularity conditions, $\hat{\theta}$ is asymptotically normal, and its variance is commonly approximated as

$$\mathrm{Var}(\hat{\theta}) \approx [I_{\mathrm{obs}}(\hat{\theta})]^{-1}.$$

This observed-information-based variance estimator contrasts with the classical expected Fisher information,

$$I_{\mathrm{exp}}(\theta) = E_\theta\!\left[-\frac{\partial^2 \ell(\theta)}{\partial\theta\,\partial\theta^\top}\right],$$

which is deterministic and computable a priori. Empirical and theoretical analyses demonstrate that the inverse observed Fisher information yields more accurate variance approximations in moderate to small samples than the expected information (Lane, 2017).
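The definition above can be made concrete with a short sketch. The example below fits a Cauchy location model, a standard case in which observed and expected information differ, computes the observed information at the MLE by a central finite-difference second derivative, and inverts it to obtain the variance estimate. The data, random seed, bracketing interval, and step size are illustrative choices, not taken from the source.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated data from a Cauchy location model, y_i ~ Cauchy(theta_true, 1).
rng = np.random.default_rng(0)
theta_true = 2.0
y = theta_true + rng.standard_cauchy(50)

# Negative log-likelihood (up to the additive constant n*log(pi)).
def negloglik(theta):
    return np.sum(np.log1p((y - theta) ** 2))

# MLE by bounded 1-D optimization; bracketing around the sample median
# is an illustrative choice.
res = minimize_scalar(negloglik,
                      bounds=(np.median(y) - 2.0, np.median(y) + 2.0),
                      method="bounded")
theta_hat = res.x

# Observed information: second derivative of the negative log-likelihood
# at the MLE, via central finite differences.
h = 1e-4
I_obs = (negloglik(theta_hat + h) - 2.0 * negloglik(theta_hat)
         + negloglik(theta_hat - h)) / h**2

# Expected Fisher information for the Cauchy location family is n/2.
I_exp = len(y) / 2.0

var_obs = 1.0 / I_obs   # observed-information-based variance estimate
var_exp = 1.0 / I_exp   # expected-information-based variance estimate
```

Note that `I_obs` depends on the realized sample while `I_exp` is fixed in advance; the two variance estimates generally differ in finite samples.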

2. Comparative Performance: Observed vs. Expected Information

Performance of the observed-information-based estimator relative to its expected counterpart is characterized by the rate at which it approximates the true variability of the MLE. Efron and Hinkley (1978) showed for single-parameter location families that, conditioning on a maximal ancillary statistic $a$,

$$\mathrm{Var}(\hat{\theta} \mid a) - [I_{\mathrm{obs}}(\hat{\theta})]^{-1} = O_p(n^{-1}),$$

whereas

$$\mathrm{Var}(\hat{\theta} \mid a) - [I_{\mathrm{exp}}(\hat{\theta})]^{-1} = O_p(n^{-1/2}).$$

Thus, the observed information delivers an approximation error of smaller asymptotic order, $O_p(n^{-1})$ versus $O_p(n^{-1/2})$, than the expected information. Furthermore, Lindley et al. (1997) established that, among a broad class of estimators (including bootstrap and jackknife), the inverse observed information minimizes asymptotic mean squared error for estimating $n(\hat{\theta}-\theta)^2$, with accuracy up to $O(n^{-3/2})$ (Lane, 2017).
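The Efron and Hinkley result can be probed empirically with a small Monte Carlo: replications are binned by their realized observed information (a rough stand-in for conditioning on the ancillary configuration), and the within-bin variance of the MLE is compared against the average of $[I_{\mathrm{obs}}]^{-1}$ in that bin and against the constant expected-information value. The model, sample size, and replication count below are illustrative choices, not from the source.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, reps = 20, 2000  # illustrative sample size and replication count

def fit_cauchy(y):
    """Return the location MLE and the observed information for Cauchy data."""
    nll = lambda t: np.sum(np.log1p((y - t) ** 2))
    res = minimize_scalar(nll,
                          bounds=(np.median(y) - 3.0, np.median(y) + 3.0),
                          method="bounded")
    th, h = res.x, 1e-4
    # Observed information via a central finite-difference second derivative.
    I = (nll(th + h) - 2.0 * nll(th) + nll(th - h)) / h**2
    return th, I

theta_hats = np.empty(reps)
I_obs = np.empty(reps)
for r in range(reps):
    theta_hats[r], I_obs[r] = fit_cauchy(rng.standard_cauchy(n))  # true theta = 0

# Bin replications by observed information; within each bin, the empirical
# variance of the MLE should track the mean of 1/I_obs more closely than
# the constant expected-information value 1/(n/2) = 2/n.
order = np.argsort(I_obs)
for block in np.array_split(order, 4):
    print(np.var(theta_hats[block]), np.mean(1.0 / I_obs[block]), 2.0 / n)
```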

3. Adaptive Experimental Designs Using Observed Information

When experiments can be executed in sequential runs, observed Fisher information accumulated in earlier stages can guide adaptive allocation in subsequent runs, concentrating sampling effort where it most efficiently reduces the variance of the MLE. Two major adaptive procedures utilize the observed-information-based variance estimator:

  • Local Observed-Information Adaptive Design (LOAD): Begins with a fixed locally optimal design (FLOD) at a pilot parameter value $\theta_0$, allocates initial samples accordingly, and updates weights in each run based on the discrepancy between observed and expected elemental information, adjusting allocations to improve local observed efficiency.
  • Maximum-Likelihood-Estimated Observed-Information Adaptive Design (MOAD): Updates the pilot parameter at each stage to the current MLE and solves an augmented design problem that directly targets minimizing a variance criterion applied to the expected value of the final observed information (Lane, 2017).

Both adaptive strategies materially increase the efficiency of inference, as quantified by relative D- and A-optimality criteria.
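The LOAD and MOAD update rules themselves are given in Lane (2017); the sketch below illustrates only the general two-stage pattern they share, on a toy one-parameter model $y \sim N(e^{-\theta x}, \sigma^2)$, where the per-observation information $(x e^{-\theta x})^2/\sigma^2$ is maximized at $x = 1/\theta$: stage one samples at the design locally optimal for a pilot value, stage two re-optimizes at the interim MLE, and the final variance estimate inverts the observed information. All names and numerical choices are illustrative, not the procedures from the source.

```python
import numpy as np

# Toy adaptive design for y ~ N(exp(-theta * x), sigma^2) with known sigma.
# Per-observation Fisher information at dose x is (x*exp(-theta*x))^2 / sigma^2,
# maximized at x = 1/theta, so the optimal design depends on theta itself.
rng = np.random.default_rng(2)
theta_true, sigma = 0.7, 0.1
theta0 = 1.0  # pilot parameter value

def simulate(x, m):
    return np.exp(-theta_true * x) + sigma * rng.normal(size=m)

def mle(xs, ys):
    # Grid-search least squares (equivalent to the MLE under Gaussian noise).
    grid = np.linspace(0.05, 3.0, 3000)
    sse = np.array([np.sum((ys - np.exp(-t * xs)) ** 2) for t in grid])
    return grid[np.argmin(sse)]

# Stage 1: allocate at the design locally optimal for the pilot theta0.
xs1 = np.full(20, 1.0 / theta0)
ys1 = simulate(xs1[0], 20)
theta_interim = mle(xs1, ys1)

# Stage 2: re-optimize the design at the interim MLE and sample again.
xs2 = np.full(20, 1.0 / theta_interim)
ys2 = simulate(xs2[0], 20)

# Final MLE and observed-information-based variance estimate.
xs, ys = np.concatenate([xs1, xs2]), np.concatenate([ys1, ys2])
theta_hat = mle(xs, ys)
nll = lambda t: np.sum((ys - np.exp(-t * xs)) ** 2) / (2.0 * sigma**2)
h = 1e-3
I_obs = (nll(theta_hat + h) - 2.0 * nll(theta_hat) + nll(theta_hat - h)) / h**2
var_hat = 1.0 / I_obs
```

The design choice illustrated is the key feature of both procedures: information accumulated in stage one changes where stage two samples, which a fixed design cannot do.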

4. Empirical Evaluation and Efficiency Gains

Simulation studies substantiate the superior efficiency of observed-information-based adaptive designs. In gamma regression and normal regression scenarios:

  • LOAD achieves higher local observed efficiency (median closer to 1, smaller variability) than FLOD or MOAD, for both D- and A-criteria, across various sample sizes.
  • MOAD offers the best observed efficiency at the final MLE among the considered approaches.
  • Both LOAD and MOAD reduce the unconditional variance of the MLE relative to standard fixed designs (FLOD).
| Method | Relative efficiency (D-criterion, $n=36$) | Relative efficiency (D-criterion, $n=100$) |
|--------|-------------------------------------------|--------------------------------------------|
| LOAD   | 1.32                                      | 1.05                                       |
| MOAD   | 1.14                                      | 1.06                                       |

Similar patterns are observed in normal regression with square-root link, with local observed efficiency gains for LOAD and improved observed efficiency at MLE for MOAD (Lane, 2017).

5. Practical Implementation and Computation

Implementation is facilitated by the decomposability of the observed information into elemental observed information contributions per design point. Computation of the FLOD can be performed using standard optimal design libraries. Allocation updates in LOAD follow specific formulas involving ratios of observed to expected information, negative truncation, and renormalization of weights. After the final run, the variance estimator is computed as the inverse of the observed information evaluated at the final MLE. The MOAD differs primarily by recalibrating the pilot parameter at each stage to the MLE and optimizing on a mixed design matrix.

Key computational steps:

  • For LOAD: compute the FLOD, allocate first-run samples, iteratively update allocation weights by observed/expected elemental information ratios, and finalize with the inversion of the observed information at $\hat{\theta}$.
  • For MOAD: update the pilot $\theta_0$ to the current MLE at every stage, and use a convex combination of current and empirical information matrices when optimizing the allocation (Lane, 2017).
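The ratio, truncation, and renormalization pattern mentioned for LOAD can be sketched as follows; the function `update_weights` and every numeric value here are hypothetical illustrations of that pattern, and the exact update formulas are given in Lane (2017).

```python
import numpy as np

def update_weights(w, info_obs, info_exp):
    """Hypothetical LOAD-style update: scale each design weight by the ratio
    of elemental observed to expected information, truncate negatives, and
    renormalize so the weights again form a design measure."""
    ratio = info_obs / info_exp       # observed-to-expected elemental info ratio
    w_new = w * ratio                 # shift mass toward informative points
    w_new = np.maximum(w_new, 0.0)    # negative truncation
    return w_new / w_new.sum()        # renormalization

# Example: three design points; observed info exceeded expectation at the first.
w = np.array([0.5, 0.3, 0.2])
w_next = update_weights(w,
                        np.array([1.2, 0.8, 1.1]),   # elemental observed info
                        np.array([1.0, 1.0, 1.0]))   # elemental expected info
```

After the update, the first design point (whose observed information exceeded its expectation) carries more weight, and the weights still sum to one.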

6. Broader Significance and Connections

Observed-information-based variance estimation is integral to optimal design of experiments and sequential analysis. The approach is not limited to parametric inference but has analogs in high-frequency financial econometrics, where the time-bridge class of variance estimators employs observed data extrema (highs and lows) and their occurrence times to substantially improve efficiency over realized variance and related estimators (Saichev et al., 2011). The methodological framework demonstrates versatility across inferential domains, offering practitioners tools for both theoretical gains in estimator variance and practical improvements in adaptive experimentation.

The observed-information approach remains homogeneous and asymptotically unbiased under Itô process dynamics and can, with minimal additional data recording, yield effective sample size gains up to a factor of two in volatility estimation. This suggests broad utility in risk management, volatility modelling, and potentially in contexts with complex noise or heavy-tail behavior, pending further empirical calibration of the underlying distributions (Saichev et al., 2011).
