Multifidelity ABC SMC Inference
- Multifidelity ABC SMC is a simulation-based inference technique that combines low-fidelity pre-filtering with high-fidelity evaluations to efficiently approximate complex Bayesian posteriors.
- It employs adaptive importance sampling, hierarchical thresholding, and MCMC rejuvenation to balance computational cost with estimation accuracy.
- The method achieves significant speedups and reduced high-fidelity calls, making it valuable for expensive likelihood simulations in probabilistic modeling.
Multifidelity Approximate Bayesian Computation Sequential Monte Carlo (multifidelity ABC SMC) refers to a broad class of computational inference algorithms that leverage a hierarchy of simulation models of varying cost and accuracy to accelerate sequential Monte Carlo (SMC) for likelihood-free Bayesian inference. These methods exploit the availability of both low-fidelity (LF) and high-fidelity (HF) simulators to maximize computational efficiency, particularly in situations where a direct evaluation of the likelihood is intractable and synthetic data must be compared to observations via an approximate Bayesian computation (ABC) framework. Multifidelity ABC SMC has emerged as a critical development in simulation-based inference for probabilistic models with expensive likelihoods, combining principles of model reduction, early rejection, adaptive importance sampling, and rigorous error bounds.
1. Conceptual Framework
Multifidelity ABC SMC augments classic ABC SMC by strategically embedding model hierarchies into the SMC population workflow. At a high level, particles (parameter proposals) are first screened using cheaper, lower-fidelity model evaluations. Only those that pass a pre-determined criterion proceed to expensive high-fidelity simulation and ABC acceptance testing. This hierarchical processing is motivated by the high computational cost typically associated with sufficiently accurate simulators. The essential ingredients are:
- A prior distribution π(θ) over the parameters θ.
- A sequence of decreasing ABC tolerance thresholds ε_1 > ε_2 > ⋯ > ε_T.
- For each parameter proposal θ, access to a low-fidelity simulator y_LF ∼ f_LF(· | θ) and a high-fidelity simulator y_HF ∼ f_HF(· | θ).
- An ABC distance metric d(y, y_obs) between simulated data y and observed data y_obs.
- An SMC kernel, typically adaptive, for propagating particle populations between generations.
Hierarchical importance weighting or continuation probabilities ensure correct ABC targeting despite the two-stage filtering. The ensemble may additionally exploit Markov chain Monte Carlo (MCMC) rejuvenation steps.
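The two-stage screening idea can be illustrated with a minimal Python sketch. The Gaussian toy model, the uniform prior, and all function names below are illustrative assumptions, not part of either cited algorithm; the LF simulator is simply a smaller-sample (cheaper, noisier) version of the HF one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: infer the mean of a Gaussian from its sample mean.
y_obs = 1.3

def simulate_hf(theta, n=1000):
    """High-fidelity simulator: large-sample summary (expensive stand-in)."""
    return rng.normal(theta, 1.0, size=n).mean()

def simulate_lf(theta, n=10):
    """Low-fidelity simulator: small-sample summary (cheap, noisier)."""
    return rng.normal(theta, 1.0, size=n).mean()

def distance(y, y_obs):
    return abs(y - y_obs)

def two_stage_abc_rejection(n_particles, eps_lf, eps_hf):
    """One multifidelity ABC generation with deterministic LF pre-filtering."""
    accepted, hf_calls = [], 0
    while len(accepted) < n_particles:
        theta = rng.uniform(-5, 5)                     # draw from the prior
        if distance(simulate_lf(theta), y_obs) > eps_lf:
            continue                                   # rejected at the cheap LF stage
        hf_calls += 1                                  # only survivors pay the HF cost
        if distance(simulate_hf(theta), y_obs) <= eps_hf:
            accepted.append(theta)
    return np.array(accepted), hf_calls

thetas, hf_calls = two_stage_abc_rejection(200, eps_lf=1.0, eps_hf=0.1)
```

Note that a deterministic LF pre-filter of this kind introduces the bias that the MAPS analysis bounds; the continuation-probability variants discussed below remove it at the cost of occasional HF calls for LF-rejected particles.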
2. Algorithmic Strategies
Prominent exemplars of multifidelity ABC SMC algorithms include the MAPS (Multifidelity ABC with Pre-Filtering SMC) method (Cao et al., 2 Feb 2026) and the MF-ABC-SMC of Prescott & Baker (Prescott et al., 2020). Each approach adopts a two-stage paradigm grounded in adaptive importance sampling:
Hierarchical Importance Sampling and Pre-Filtering
- Proposals θ (drawn from the SMC proposal q) are first evaluated with M low-fidelity model runs y_LF^(1), …, y_LF^(M). If d(y_LF^(m), y_obs) > ε_LF (the LF threshold) for all m, the proposal is rejected outright (zero weight).
- If at least one LF simulation satisfies d(y_LF^(m), y_obs) ≤ ε_LF, the particle advances, and HF simulations are drawn: y_HF ∼ f_HF(· | θ).
- The standard ABC indicator, 1[d(y_HF, y_obs) ≤ ε_t], defines the HF ABC likelihood at the iteration-specific threshold ε_t.
- The resulting importance weight is w(θ) ∝ (π(θ) / q(θ)) · 1[d(y_HF, y_obs) ≤ ε_t].
- In algorithms using continuation probabilities, per-particle adaptive probabilities η(θ) control the frequency of advancing to HF simulation, with optimal values calibrated to minimize mean-squared error (MSE) or maximize computational efficiency (Prescott et al., 2020).
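The coupling of weights and continuation probabilities can be sketched as follows. This is a minimal illustration of a Prescott–Baker-style estimator under simplifying assumptions (single LF run, uniform prior-as-proposal so the prior ratio is 1); `hf_acceptor` is a hypothetical callable standing in for an expensive HF run:

```python
import numpy as np

rng = np.random.default_rng(1)

def mf_abc_weight(accept_lf, eta, hf_acceptor):
    """
    Multifidelity ABC weight with a continuation probability (sketch).
    accept_lf  : bool, LF ABC indicator 1[d(y_LF, y_obs) <= eps]
    eta        : continuation probability in (0, 1]
    hf_acceptor: callable returning the HF ABC indicator (only called on continuation)
    """
    w = float(accept_lf)                   # start from the cheap LF indicator
    if rng.random() < eta:                 # continue to the expensive HF stage
        accept_hf = float(hf_acceptor())
        w += (accept_hf - w) / eta         # correction keeps E[w] = E[accept_hf]
    return w
```

The correction term divides by η, so the estimator stays unbiased for the HF ABC likelihood even though most LF-consistent particles skip the HF run; smaller η means fewer HF calls but higher weight variance, which is exactly the tradeoff the optimal-η calibration targets.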
Adaptive Thresholds and ESS Monitoring
Adaptive selection of the LF and HF acceptance thresholds (ε_LF and ε_t) is central. Quantile-based or efficiency-predicted thresholding is employed to control the effective sample size (ESS) and limit adverse variance escalation.
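Both ingredients are short to state in code. The following is a minimal sketch with illustrative function names: the next tolerance is taken as a quantile of the current generation's distances, and the ESS is the standard (Σw)²/Σw² statistic:

```python
import numpy as np

def quantile_threshold(distances, q=0.5):
    """Adaptive tolerance: the q-quantile of the current generation's distances."""
    return float(np.quantile(distances, q))

def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum w^2; falls toward 1 as the weights degenerate."""
    w = np.asarray(weights, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())

d = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
eps_next = quantile_threshold(d, q=0.5)               # median distance becomes eps
ess = effective_sample_size([1.0, 1.0, 1.0, 1.0])     # uniform weights: ESS = N
```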
SMC and Rejuvenation
Particles progress across ABC-tolerance levels through iteration. After each propagation step, resampling and MCMC mutation (such as pseudo-marginal MH moves conditioned on LF/HF acceptance) are employed to maintain sample diversity and mixing.
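The resampling step admits a compact sketch; systematic resampling is one common low-variance choice (the MCMC mutation kernel itself is model-specific and omitted here):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: low-variance selection of particle indices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n     # one stratified uniform per slot
    return np.searchsorted(np.cumsum(w), positions)   # map positions onto the CDF

idx = systematic_resample(np.array([0.1, 0.2, 0.3, 0.4]), np.random.default_rng(0))
```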
3. Theoretical Guarantees and Error Analysis
Both MAPS and MF-ABC-SMC frameworks provide guarantees of asymptotic posterior concentration and controlled bias as ABC thresholds decrease and sample size increases:
- Under standard ABC regularity conditions and an explicit LF filter false-rejection constraint (false-negative rate at most δ), the bias in the filtered posterior relative to the standard ABC posterior admits an explicit bound controlled by δ and the ABC tolerance ε_t. This quantifies the impact of LF pre-filtering on the ABC posterior approximation (Cao et al., 2 Feb 2026).
- In MF-ABC-SMC, the unbiasedness of ABC importance sampling is preserved through the coupling of weights and continuation probabilities, with explicit error terms dependent on the probabilities of false positives and negatives between the LF and HF models (Prescott et al., 2020).
The rates at which model discrepancies (e.g., the gap between LF and HF acceptance probabilities for the same parameter) decay are key to ensuring that multifidelity accelerations do not unduly bias inference.
4. Efficiency, Computational Tradeoffs, and Surrogate Construction
Efficiency enhancements derive from the early rejection of proposals using LF surrogates:
- The average per-particle cost is c̄ = c_LF + p_HF · c_HF, where c_LF and c_HF are the costs of LF and HF runs, and p_HF is the fraction of proposals reaching the HF stage (Cao et al., 2 Feb 2026).
- Increasing the number of LF replicates or the LF threshold ε_LF reduces the chance of discarding promising particles but increases overall simulation costs.
- Surrogate (LF) models are commonly derived by mesh coarsening, time truncation, or dimension-reduction techniques, ensuring that they are at least one order of magnitude faster than the HF simulations (Prescott et al., 2020).
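The cost model can be made concrete with a short calculation; the cost numbers and survivor fraction below are hypothetical:

```python
def per_particle_cost(c_lf, c_hf, p_hf, m_lf=1):
    """Average cost per proposal: m_lf LF runs plus HF runs for survivors."""
    return m_lf * c_lf + p_hf * c_hf

# Hypothetical regime: LF runs 50x cheaper than HF, 30% of proposals survive the LF filter.
c_mf = per_particle_cost(c_lf=0.02, c_hf=1.0, p_hf=0.3)
c_std = 1.0                       # baseline: every proposal pays the full HF cost
speedup = c_std / c_mf
```

In this regime the multifidelity cost is 0.32 HF-equivalents per proposal, roughly a 3x speedup over HF-only sampling, which is why tightening the LF filter (lowering p_HF) pays off only as long as it does not discard too many HF-acceptable particles.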
Continuation probabilities in MF-ABC-SMC are set as piecewise constants, η(θ, y_LF) = η_1 if the LF simulation is accepted and η_2 otherwise, and optimized analytically to maximize efficiency given empirical MSE and computational cost models.
5. Diagnostics and Practical Recommendations
Diagnostic strategies are crucial to the robust application of multifidelity ABC SMC:
- The false-negative rate ρ_fn quantifies LF/HF alignment: a low ρ_fn implies the LF filter does not overly discard true positives. It is computed by comparing LF outputs on a pilot batch with confirmed HF acceptances at a desired ABC threshold (Cao et al., 2 Feb 2026).
- Monitor the ESS and the estimated false-negative rate at each SMC iteration to detect potential degeneracy or excessive bias accumulation.
- Sensible defaults for the key hyperparameters (the LF threshold quantile, the number of LF replicates, and the false-rejection budget) are reported in the cited works and have been found effective in practice, though the best values are problem-dependent.
- In case studies (toy, Ornstein–Uhlenbeck, Kuramoto network), MAPS reduces HF simulations by approximately 35–45% compared to standard SMC with equivalent posterior fidelity (Cao et al., 2 Feb 2026), while MF-ABC-SMC demonstrates multiple-fold acceleration, especially during early SMC generations (Prescott et al., 2020).
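A pilot-batch estimator of the false-negative diagnostic might look like the following sketch; the paired-distance interface (one LF and one HF distance per pilot parameter) is an assumption of this illustration:

```python
import numpy as np

def lf_false_negative_rate(d_lf, d_hf, eps_lf, eps_hf):
    """
    Pilot-batch estimate of the LF filter's false-negative rate:
    the fraction of HF-acceptable particles the LF pre-filter would discard.
    d_lf, d_hf : paired LF/HF distances for the same pilot parameters.
    """
    d_lf, d_hf = np.asarray(d_lf), np.asarray(d_hf)
    hf_accept = d_hf <= eps_hf
    if hf_accept.sum() == 0:
        return 0.0                       # no HF acceptances: nothing to lose
    lf_reject = d_lf > eps_lf
    return float((hf_accept & lf_reject).sum() / hf_accept.sum())
```

A large estimate signals that the LF surrogate disagrees with the HF model near the posterior mode, and that ε_LF should be loosened (or the surrogate improved) before trusting the filtered inference.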
6. Comparisons, Variants, and Related Approaches
Multifidelity ABC SMC is distinct from but related to multilevel Monte Carlo ABC (MLMC-ABC), which leverages a telescoping sum decomposition across ABC tolerance “levels,” running a coupled SMC sampler to efficiently exploit variance decay properties (Jasra et al., 2017). MLMC-ABC targets optimal cost-MSE scaling via allocation of computational effort across ABC approximations with varying error and cost. While both frameworks offer substantial computational gain over single-level ABC SMC, multifidelity ABC SMC focuses on model hierarchy (LF/HF), whereas MLMC typically varies simulation granularity or kernel accuracy. Both admit theoretical analysis of bias, variance, and computational complexity, linked to coupling properties and mixing assumptions.
An encapsulation of practical multifidelity ABC SMC methods is provided in the table:
| Approach | Early Rejection via LF | Adaptivity | Theoretical Bias Bound | Tested Speedup (Typical) |
|---|---|---|---|---|
| MAPS (Cao et al., 2 Feb 2026) | Deterministic pre-filter | Adaptive thresholds | Explicit bound via false-rejection constraint | ~35–45% HF calls reduction |
| MF-ABC-SMC (Prescott et al., 2020) | Stochastic continuation/acceptance | Adaptive kernel & thresholds | Bias via coupling probabilities | ×2–10 wall-time acceleration |
7. Implementation and Future Directions
Efficient implementations of multifidelity ABC SMC are publicly available, for example via the R package for MAPS at https://github.com/caofff/MAPS (Cao et al., 2 Feb 2026). Key considerations include the design of high-quality LF surrogates, tight coupling of random seeds to control false positive/negative probabilities, and dynamic adjustment of thresholds and continuation parameters.
A plausible implication is that continued advances in multifidelity surrogate construction and tighter posterior error analysis will further expand the range of scientific applications accessible to ABC methods, especially in domains with substantial model complexity or scarce computational budgets. Emerging hybridization with MLMC and automated surrogate design remains an active research frontier.