Integrated Nested Laplace Approximation (INLA)
- INLA is a deterministic Bayesian inference method that uses nested Laplace approximations to efficiently compute marginal posteriors in latent Gaussian models.
- It employs a hierarchy of approximations over latent fields and hyperparameters, achieving fast convergence and scalability compared to MCMC.
- The R-INLA implementation demonstrates high accuracy and performance in diverse applications like spatial econometrics, phylodynamics, and state-space modeling.
Integrated Nested Laplace Approximation (INLA) is a deterministic framework for fast, accurate, and scalable approximate Bayesian inference in hierarchical models where the latent structure is a Gaussian Markov Random Field (GMRF). INLA achieves marginalization over high-dimensional latent fields and hyperparameters using a hierarchy of nested Laplace approximations, enabling efficient evaluation of posterior marginal distributions for both latent variables and hyperparameters. This methodology is especially suited for Latent Gaussian Models (LGMs), a class encompassing generalized linear mixed models, spatial econometric models, phylodynamic models, and state-space models with Gaussian latent structures. INLA is implemented in the R-INLA package and has become a default tool for applied Bayesian analysis of LGMs, widely outperforming Markov Chain Monte Carlo (MCMC) in computational speed and stability for compatible model classes.
1. Model Structure and Theoretical Foundations
The prototypical LGM analyzed by INLA is structured as follows:
- Data likelihood: $y_i \mid x, \theta \sim \pi(y_i \mid \eta_i, \theta)$, for $i = 1, \dots, n$, where the linear predictor $\eta_i$ is a component of the latent field.
- Latent field: $x \mid \theta \sim \mathcal{N}\big(0, Q(\theta)^{-1}\big)$, with sparse precision matrix $Q(\theta)$.
- Hyperprior: $\theta \sim \pi(\theta)$.
The joint posterior is
$$\pi(x, \theta \mid y) \propto \pi(\theta)\, \pi(x \mid \theta) \prod_{i=1}^{n} \pi(y_i \mid x, \theta).$$
The primary inferential targets are the one-dimensional marginals
$$\pi(x_j \mid y) = \int \pi(x_j \mid \theta, y)\, \pi(\theta \mid y)\, \mathrm{d}\theta, \qquad \pi(\theta_k \mid y) = \int \pi(\theta \mid y)\, \mathrm{d}\theta_{-k},$$
where $x_j$ denotes latent components and $\theta_k$ hyperparameters. INLA is designed to efficiently approximate these marginals without ever reconstructing the full joint posterior (Martino et al., 2019).
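To make the model structure concrete, the following sketch simulates data from a minimal LGM: a first-order random walk (RW1) latent field with sparse-banded precision and a Poisson likelihood. All names are illustrative, not R-INLA code, and the identity term added to the precision is an assumption made here so the prior is proper for simulation:

```python
import numpy as np

def rw1_precision(n, tau):
    """Precision of a first-order random walk, tau * D'D, plus an identity
    term (an assumption of this sketch) so the matrix is invertible."""
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    return tau * D.T @ D + np.eye(n)

rng = np.random.default_rng(0)
n, tau = 50, 4.0                              # theta = tau is the hyperparameter
Q = rw1_precision(n, tau)                     # banded Q(theta), stored dense here
x = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q))  # latent GMRF draw
y = rng.poisson(np.exp(x))                    # non-Gaussian (Poisson) likelihood
```

In a real application the sparsity of $Q(\theta)$ is what INLA exploits: sparse Cholesky factorization replaces the dense inverse used above.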
2. Hierarchical Nested Laplace Approximation
The core innovation of INLA is the application of two (or more) nested Laplace approximations to achieve computationally tractable marginalization in high dimension. The workflow is:
- First-level Laplace (over latent field $x$):
  - For fixed $\theta$, approximate the conditional posterior $\pi(x \mid \theta, y)$ by a Gaussian:
  $$\tilde{\pi}_G(x \mid \theta, y) = \mathcal{N}\big(x;\, x^*(\theta),\, Q^*(\theta)^{-1}\big),$$
  where $x^*(\theta)$ is the conditional mode and $Q^*(\theta)$ is the negative Hessian at the mode.
  - The marginal likelihood (for fixed $\theta$):
  $$\tilde{\pi}(\theta \mid y) \propto \left. \frac{\pi(x, \theta, y)}{\tilde{\pi}_G(x \mid \theta, y)} \right|_{x = x^*(\theta)}.$$
- Second-level Laplace / numerical integration (over $\theta$):
  - Approximate $\pi(\theta \mid y)$ using a grid, adaptive quadrature, or a further Laplace expansion (when $\dim(\theta)$ is low).
  - For each hyperparameter grid point $\theta^{(k)}$, approximate $\pi(x_j \mid \theta^{(k)}, y)$ using Gaussian, Laplace, or Simplified Laplace approximations (Hubin et al., 2016).
  - Compute the marginal for $x_j$ by weighted summation:
  $$\tilde{\pi}(x_j \mid y) \approx \sum_k \tilde{\pi}(x_j \mid \theta^{(k)}, y)\, \tilde{\pi}(\theta^{(k)} \mid y)\, \Delta_k.$$
This nested structure combines analytic approximations for high-dimensional latent fields with efficient low-dimensional numerical integration with respect to hyperparameters (Simpson et al., 2011, Gomez-Rubio et al., 2017).
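The nested scheme can be sketched end to end for a toy Poisson model with a single hyperparameter $\theta = \log\tau$. This is a simplified illustration under stated assumptions (dense linear algebra, plain Gaussian conditional marginals, a fixed grid, a standard-normal prior on $\theta$), not the R-INLA implementation:

```python
import numpy as np

def rw1_precision(n, tau):
    """RW1 precision tau * D'D plus an identity term so it is invertible."""
    D = np.diff(np.eye(n), axis=0)
    return tau * D.T @ D + np.eye(n)

def gaussian_approx(y, Q, iters=30):
    """Newton iterations for the mode x*(theta) of pi(x | theta, y) and the
    negative Hessian Q*(theta), assuming Poisson observations, log link."""
    x = np.zeros(len(y))
    for _ in range(iters):
        mu = np.exp(x)
        grad = (y - mu) - Q @ x          # gradient of log pi(x | theta, y)
        H = Q + np.diag(mu)              # negative Hessian Q*(theta)
        x = x + np.linalg.solve(H, grad)
    return x, H

def log_post_theta(theta, y):
    """Laplace approximation to log pi(theta | y), up to a constant."""
    Q = rw1_precision(len(y), np.exp(theta))
    xs, H = gaussian_approx(y, Q)
    _, ld_Q = np.linalg.slogdet(Q)       # log det of prior precision
    _, ld_H = np.linalg.slogdet(H)       # log det from the Gaussian denominator
    loglik = np.sum(y * xs - np.exp(xs))
    return -0.5 * theta**2 + 0.5 * ld_Q - 0.5 * xs @ Q @ xs + loglik - 0.5 * ld_H

rng = np.random.default_rng(1)
y = rng.poisson(1.0, size=40)
thetas = np.linspace(-2.0, 4.0, 25)      # grid over theta = log(tau)
logp = np.array([log_post_theta(t, y) for t in thetas])
w = np.exp(logp - logp.max())
w /= w.sum()                             # normalized grid weights Delta_k

# Marginal mean of x_0: weighted mixture of conditional modes over the grid
means = []
for t in thetas:
    Q = rw1_precision(len(y), np.exp(t))
    xs, _ = gaussian_approx(y, Q)
    means.append(xs[0])
post_mean_x0 = float(np.sum(w * np.array(means)))
```

The two Laplace levels appear as `gaussian_approx` (first level, over $x$) and `log_post_theta` evaluated on a grid (second level, over $\theta$); the final weighted sum is the marginalization step.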
3. Approximations for Conditional Latent Marginals
Three principal strategies are implemented for approximating the univariate conditional marginals $\pi(x_j \mid \theta^{(k)}, y)$ at each grid point:
- Gaussian Approximation: uses the mean and variance from the fitted Gaussian at the mode.
- Simplified Laplace Approximation (SLA): applies a third-order Taylor expansion in the standardized local variable $s = (x_j - \mu_j(\theta)) / \sigma_j(\theta)$, matching moments and derivatives to a Skew-Normal distribution for improved accuracy, particularly under moderate skewness (Chiuchiolo et al., 2022).
- Full Laplace Approximation: employs a numerical spline correction to more closely match the shape of the conditional marginal.
An advanced variant, the Extended Simplified Laplace Approximation (ESLA), further includes a fourth-order term to accommodate more severe skewness and kurtosis, fitting an Extended Skew Normal (ESN) distribution with four matching conditions. ESLA reduces mode bias by 70–80% over SLA in highly skewed scenarios while adding only negligible computational overhead, and it reverts safely to SLA when numerical instability is detected (Chiuchiolo et al., 2022).
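The moment-matching idea behind these skew-normal corrections can be illustrated by fitting a skew-normal to a target mean, variance, and skewness. This is a standalone sketch of the matching step, not R-INLA's correction; `skewnorm_from_moments` is a hypothetical helper using the standard skew-normal moment formulas:

```python
import numpy as np
from scipy import optimize, stats

def skewnorm_from_moments(mean, var, skew):
    """Return (a, loc, scale) of a skew-normal with the requested first three
    moments; valid only for |skew| below the skew-normal bound (~0.995)."""
    def skew_of_delta(d):
        # Standard skewness formula with delta = a / sqrt(1 + a^2)
        num = (4 - np.pi) / 2 * (d * np.sqrt(2 / np.pi)) ** 3
        return num / (1 - 2 * d ** 2 / np.pi) ** 1.5
    d = optimize.brentq(lambda d: skew_of_delta(d) - skew, -0.9999, 0.9999)
    a = d / np.sqrt(1 - d ** 2)
    scale = np.sqrt(var / (1 - 2 * d ** 2 / np.pi))
    loc = mean - scale * d * np.sqrt(2 / np.pi)
    return a, loc, scale

a, loc, scale = skewnorm_from_moments(0.0, 1.0, 0.3)
m, v, s = stats.skewnorm.stats(a, loc=loc, scale=scale, moments="mvs")
```

In SLA the target moments come from the third-order Taylor correction rather than being specified directly, but the distributional matching step is the same in spirit.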
4. Computational Implementation and Algorithmic Workflow
INLA leverages sparse matrix factorization and modern numerical optimization to keep the overall cost low, on the order of $O(K\, n^{3/2})$ for two-dimensional spatial GMRFs, where $K$ is the number of hyperparameter grid points and $n$ the latent dimension. The high-level algorithm is:
- Select hyperparameter grid: design a grid or Central Composite Design (CCD) tailored to the posterior mode and Hessian of $\tilde{\pi}(\theta \mid y)$.
- For each $\theta^{(k)}$ in the grid:
  - find $x^*(\theta^{(k)})$, the mode of the conditional posterior in $x$;
  - compute the Hessian $Q^*(\theta^{(k)})$;
  - evaluate the Laplace denominator, conditional marginals, and marginal likelihood.
- Integrate over $\theta$: use weighted summation to compute the posterior marginals for $x_j$ and $\theta_k$ (Martino et al., 2019, Gomez-Rubio et al., 2017).
- Diagnostics and model selection: calculate the marginal likelihood for model comparison, DIC, and posterior predictive checks.
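The grid-selection step above can be sketched as follows; `hyperparameter_grid` is a hypothetical helper that standardizes $\theta$ via the eigendecomposition of the negative Hessian at the mode and places an axis-aligned lattice in the standardized space (a CCD would place points differently, but the standardization is the same):

```python
import numpy as np

def hyperparameter_grid(theta_mode, neg_hessian, step=1.0, half_width=2):
    """Grid in the standardized parametrization theta = mode + V L^{-1/2} z,
    with z on a regular lattice; V, L from the eigendecomposition."""
    lam, V = np.linalg.eigh(neg_hessian)       # neg_hessian must be pos. def.
    scales = V / np.sqrt(lam)                  # column j scaled by 1/sqrt(lam_j)
    d = len(theta_mode)
    axes = [np.arange(-half_width, half_width + 1) * step] * d
    zs = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, d)
    return theta_mode + zs @ scales.T

mode = np.array([0.5, -1.0])                   # illustrative posterior mode
H = np.array([[4.0, 1.0], [1.0, 2.0]])         # illustrative negative Hessian
grid = hyperparameter_grid(mode, H)            # 5 x 5 = 25 points for dim = 2
```

Standardizing by the Hessian ensures the grid spacing reflects the local posterior curvature, so the weighted summation over grid points covers the bulk of $\tilde{\pi}(\theta \mid y)$.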
The R-INLA package provides an efficient interface for constructing and fitting such models, automatically handling grid construction, Laplace approximations, diagnostics, and reporting posterior summaries (Martino et al., 2019).
5. Extensions: Beyond Direct LGMs
The basic INLA framework applies only when the entire model can be phrased as a latent GMRF conditional on a modest number of hyperparameters. For models outside this class, extensions include:
- Conditional INLA (INLA within MCMC): Partition the parameters so that, conditional on a subset $\theta_c$, the remaining model is an LGM. Sample $\theta_c$ with MCMC (e.g., Metropolis–Hastings), and use INLA for the conditional marginal computations. Posterior marginals for the remaining parameters are averaged over the chain (Gómez-Rubio et al., 2017, Morales-Otero et al., 2022).
- Importance Sampling with INLA (IS-INLA, AMIS-INLA): For scenarios with a moderate number of non-Gaussian parameters, run INLA conditionally and combine via (adaptive) multiple importance sampling. Empirical studies confirm that IS-INLA is efficient when a good proposal is available, while AMIS-INLA is robust and effective in higher dimensions (Berild et al., 2021, Morales-Otero et al., 2022).
- Bayesian Model Averaging (INLA-BMA): For models where multiple conditional submodels (e.g., different spatial dependence structures) must be averaged, INLA is used for each, and marginal posteriors are combined using model weights proportional to conditional marginal likelihoods (Gómez-Rubio et al., 2019).
- Efficient Marginalization (Low-Discrepancy Sequences): For marginalizing high-dimensional hyperparameters, replacing the standard grid by a Korobov lattice (LDS) plus adaptive polynomial correction provides superior accuracy and computational efficiency in practical cases (Brown et al., 2019).
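The low-discrepancy alternative to a regular grid can be illustrated with a rank-1 Korobov lattice in the unit cube; the generator value below is an arbitrary illustrative choice, not a tuned generator, and the mapping to $\theta$-space is only indicated in a comment:

```python
import numpy as np

def korobov_lattice(n_points, dim, a=76):
    """Rank-1 Korobov lattice: x_i = frac(i * (1, a, a^2, ...) / n)."""
    gen = np.power(a, np.arange(dim)) % n_points
    i = np.arange(n_points)[:, None]
    return (i * gen / n_points) % 1.0

pts = korobov_lattice(101, 3)    # 101 points in the 3-dimensional unit cube
# To use these for marginalization, map each point to theta-space via the
# posterior mode and a Cholesky factor of the (approximate) covariance, then
# evaluate the Laplace-approximated density at each mapped point.
```

Compared with a tensor-product grid, the lattice covers the cube with far fewer points as $\dim(\theta)$ grows, which is the source of the efficiency gains reported by Brown et al. (2019).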
6. Applications and Empirical Performance
INLA has demonstrated accuracy and efficiency across a range of model classes:
- Generalized Linear Mixed Models: For both Gaussian and non-Gaussian responses, INLA reproduces posterior marginals to within 0.01 log-units compared to MCMC, with orders-of-magnitude lower computation time (Hubin et al., 2016).
- Spatial Econometric Models: The R-INLA package implements spatial lag and error models using sparse GMRF representations, permitting rapid inference and model comparison, matching MCMC in accuracy up to sampling error (Gomez-Rubio et al., 2017, Gómez-Rubio et al., 2019).
- Phylodynamics: Bayesian nonparametric inference of population trajectories from coalescent genealogies is accelerated by orders of magnitude relative to MCMC, reproducing credible intervals accurately (Palacios et al., 2012).
- State Space Models: INLA enables fast construction of Gaussian proposals for particle filtering and particle MCMC, improving effective sample size and reducing variance in sequential inference tasks (Amri, 2023).
- Models with Double-Hierarchies: By conditioning and using AMIS-INLA, double-hierarchical GLMs are efficiently fitted and checked against MCMC, matching posterior means and variances while achieving high effective sample sizes (Morales-Otero et al., 2022).
Table: Comparative performance of INLA and competitors in model classes (selected results).
| Model Class | INLA Accuracy | INLA Cost | MCMC Cost | Reference |
|---|---|---|---|---|
| Linear regression | Within 0.01 log-units of MCMC | Seconds | Minutes | (Hubin et al., 2016) |
| Spatial lag (SLM, SEM) | Matches MCMC marginals | Seconds–minutes | Minutes–hours | (Gomez-Rubio et al., 2017) |
| Bayesian Lasso | Matches MCMC (mean/var) | 10 minutes (AMIS-INLA) | 14 hours | (Berild et al., 2021) |
| Phylodynamic (GP) | Bands overlap with MCMC | Seconds | Hours | (Palacios et al., 2012) |
| State space (PMMH) | Superior ESS, lower variance | Fewer particles needed | Higher N required | (Amri, 2023) |
7. Limitations and Practical Guidance
Key assumptions underpinning INLA’s accuracy are:
- The latent hierarchy is (conditionally) a GMRF.
- The hyperparameter space is low-dimensional (typically up to 3–5, sometimes up to 10).
- The posterior in $\theta$ is unimodal and log-concave (mild deviations are tolerable).
Limitations arise when:
- The hyperparameter dimension grows (grid or adaptive quadrature becomes infeasible).
- The posterior is highly multimodal or non-Gaussian in $x$ or $\theta$.
- The model structure involves latent fields that depart from the GMRF class.
In such cases, conditional INLA plus MCMC or importance sampling should be used (Gómez-Rubio et al., 2017, Morales-Otero et al., 2022). For models with expected extreme skewness or small sample size, use the ESLA strategy in R-INLA to improve accuracy in the latent marginals (Chiuchiolo et al., 2022).
Typical guidelines:
- Use the default SLA for generic models, switching to ESLA for small sample sizes or large skewness.
- Monitor diagnostics for error in the Laplace approximations and compare against gold-standard MCMC in critical applications.
- Employ efficient sparse-matrix libraries, and take advantage of R-INLA’s parallelization and diagnostics features for large-scale problems (Martino et al., 2019, Palacios et al., 2012).
References
- (Chiuchiolo et al., 2022): An Extended Simplified Laplace strategy for Approximate Bayesian inference of Latent Gaussian Models using R-INLA
- (Gomez-Rubio et al., 2017): Estimating Spatial Econometrics Models with Integrated Nested Laplace Approximation
- (Hubin et al., 2016): Estimating the marginal likelihood with Integrated nested Laplace approximation (INLA)
- (Berild et al., 2021): Importance Sampling with the Integrated Nested Laplace Approximation
- (Morales-Otero et al., 2022): Fitting Double Hierarchical Models with the Integrated Nested Laplace Approximation
- (Martino et al., 2019): Integrated Nested Laplace Approximations (INLA)
- (Palacios et al., 2012): Integrated Nested Laplace Approximation for Bayesian Nonparametric Phylodynamics
- (Gómez-Rubio et al., 2019): Bayesian model averaging with the integrated nested Laplace approximation
- (Brown et al., 2019): A Novel Method of Marginalisation using Low Discrepancy Sequences for Integrated Nested Laplace Approximations
- (Gómez-Rubio et al., 2017): Markov Chain Monte Carlo with the Integrated Nested Laplace Approximation
- (Amri, 2023): Designing Proposal Distributions for Particle Filters using Integrated Nested Laplace Approximation
- (Simpson et al., 2011): Fast approximate inference with INLA: the past, the present and the future