Repeatedly Nested Expectations
- Repeatedly Nested Expectations (RNEs) are mathematical constructs involving recursive compositions of non-linear expectation operators, crucial for modeling multi-level uncertainty.
- They appear in diverse applications such as stochastic control, finance, probabilistic programming, and network games where layered decision processes and inner estimations are essential.
- Advanced estimation methods like Nested Monte Carlo, multilevel techniques, and quantum algorithms address the computational challenges by mitigating bias and improving convergence rates.
Repeatedly nested expectations (RNEs) are mathematical objects that arise when one must compute compositions of expectations, where each stage involves a potentially nonlinear function of an inner expectation—often recursively, and with arbitrary depth. These structures are central in diverse areas, including probabilistic programming, optimal stopping, risk-sensitive control, multi-agent network games, and the simulation of stochastic differential equations. RNEs present unique computational and theoretical challenges due to the growth in complexity with each level of nesting and the proliferation of bias and variance with standard estimation approaches.
1. Formal Definition and Mathematical Structure
A $D$-level repeatedly nested expectation has the canonical form

$$
I \;=\; \mathbb{E}_{x_1 \sim \pi_1}\Big[f_1\Big(x_1,\; \mathbb{E}_{x_2 \sim \pi_2(\cdot \mid x_1)}\big[f_2\big(x_2,\; \cdots\, \mathbb{E}_{x_D \sim \pi_D(\cdot \mid x_{1:D-1})}\big[f_D(x_{1:D})\big] \cdots \big)\big]\Big)\Big],
$$

where each $\pi_d$ is a (possibly conditional) probability measure on the respective space, and $f_1, \dots, f_D$ are link functions, possibly nonlinear and usually assumed to possess some regularity (Lipschitz, or higher smoothness).
Alternatively, a recursive definition applies:

$$
\gamma_D(x_{1:D-1}) = \mathbb{E}_{x_D \sim \pi_D(\cdot \mid x_{1:D-1})}\big[f_D(x_{1:D})\big], \qquad
\gamma_d(x_{1:d-1}) = \mathbb{E}_{x_d \sim \pi_d(\cdot \mid x_{1:d-1})}\big[f_d\big(x_{1:d}, \gamma_{d+1}(x_{1:d})\big)\big],
$$

with the full expectation $I = \gamma_1$.
The structure of RNEs is such that each level's integrand is itself a function of the output of a deeper expectation; this recursive dependence is the origin of significant computational hardness, most notably the “curse of depth” for naive estimators (Rainforth et al., 2016).
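As a concrete (and deliberately naive) illustration, the recursive definition can be evaluated by nesting Monte Carlo loops. The toy model below, a hypothetical Gaussian chain with link $f(z) = z^2$, is a sketch for intuition rather than any cited algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_expectation(depth, n_samples, x=0.0):
    """Naive nested MC for a toy RNE with Gaussian conditionals.

    Every level draws x_d ~ N(x_{d-1}, 1) and applies the nonlinear
    link f(z) = z**2 to the estimate returned by the level below.
    """
    xs = rng.normal(loc=x, scale=1.0, size=n_samples)
    if depth == 1:
        return np.mean(xs ** 2)  # innermost level: plain Monte Carlo
    # each outer sample spawns its own inner estimate: cost ~ n_samples**depth
    inner = np.array([nested_expectation(depth - 1, n_samples, xi) for xi in xs])
    return np.mean(inner ** 2)

# depth-2 target: E_x[(E_y[y^2 | y ~ N(x,1)])^2] = E[(x^2 + 1)^2] = 6 for x ~ N(0,1)
estimate = nested_expectation(depth=2, n_samples=200)
```

Because each outer sample spawns a full inner estimate, the cost of this sketch already scales as $O(n^D)$ in the per-level sample size $n$, which is the curse of depth in miniature.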
2. Occurrence in Applications
RNEs are central in multiple domains:
- Stochastic Control & Finance: Valuation of American and Bermudan options, dynamic programming in finite-horizon optimal stopping, and risk-averse optimization all require the iteration of conditional expectations, sometimes across several sources of randomness (Beck et al., 2020, Sun et al., 8 Feb 2026).
- Uncertainty Quantification: In global sensitivity analysis and Bayesian design, quantifying risk or expected improvement often yields deeply nested expectations (Hironaka et al., 2023).
- Probabilistic Programming: The semantics of probabilistic programs induce repeated nesting when models invoke stochastic conditions or inner inference procedures (Rainforth et al., 2016).
- Networked Multi-Agent Systems: In economic theory, higher-order expectations—agents’ beliefs about others’ beliefs—are mathematically expressed as iterated expectation operators across network topologies (Golub et al., 2020).
3. Computational Schemes and Complexity
3.1 Nested Monte Carlo (NMC)
The straightforward approach estimates nested expectations with a separate Monte Carlo procedure at each level. For a $D$-fold nested expectation, the mean-squared error (MSE) using $N_k$ samples at layer $k$ is

$$
\mathrm{MSE} = O\!\left(\frac{1}{N_0} + \sum_{k=1}^{D} \frac{1}{N_k^2}\right),
$$

and with a total budget $T = \prod_{k=0}^{D} N_k$, a balanced allocation yields $\mathrm{MSE} = O\big(T^{-2/(D+2)}\big)$.
This convergence rate deteriorates exponentially in $D$: the cost for $\varepsilon$-accuracy is $O\big(\varepsilon^{-(D+2)}\big)$ (Rainforth et al., 2016). Moreover, general NMC estimators are necessarily biased due to the nonlinearity of outer functions with respect to inner expectations.
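The source of the bias is elementary: for $f(z) = z^2$ and an inner mean $\hat{\mu}_M$ built from $M$ samples, $\mathbb{E}[f(\hat{\mu}_M)] = f(\mu) + \mathrm{Var}(\hat{\mu}_M) = f(\mu) + \sigma^2/M$, so the bias decays only as $O(1/M)$. A minimal numerical check on a toy Gaussian model (all parameter choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.0, 2.0     # inner variable Y ~ N(mu, sigma^2)
target = mu ** 2         # f(E[Y]) with f(z) = z**2

def nmc_bias(n_outer, m_inner):
    """Empirical bias of f(mean of m_inner samples), averaged over n_outer runs."""
    inner_means = rng.normal(mu, sigma, size=(n_outer, m_inner)).mean(axis=1)
    return float(np.mean(inner_means ** 2) - target)

biases = {m: nmc_bias(100_000, m) for m in (1, 10, 100)}
# theory: bias = sigma^2 / m = 4/m, i.e. roughly 4.0, 0.4, 0.04
```

The bias shrinks only linearly in the inner sample count, so every extra nesting level multiplies the budget needed to control it.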
3.2 Multilevel and Full-History Recursive Approaches
Advances in multilevel Monte Carlo (MLMC) and related techniques overcome this curse of depth. Recursive multilevel Picard (MLP) algorithms (Beck et al., 2020) construct full-history estimators sharing simulations across levels via telescoping sums, under suitable Lipschitz or contractivity assumptions, yielding computational cost

$$
O\big(d\, \varepsilon^{-(2+\delta)}\big) \quad \text{for every } \delta > 0,
$$

where $d$ is the problem dimension and $\varepsilon$ the target accuracy (Beck et al., 2020). Optimal randomized multilevel methods (e.g., the READ estimator) further achieve the canonical cost

$$
O\big(\varepsilon^{-2}\big)
$$

for fixed depth and strong regularity, and nearly-optimal $\widetilde{O}(\varepsilon^{-2})$ bounds for merely Lipschitz continuous link functions (Syed et al., 2023).
| Method | Error Rate | Cost Scaling | Conditions |
|---|---|---|---|
| NMC | MSE $O\big(T^{-2/(D+2)}\big)$ | $O\big(\varepsilon^{-(D+2)}\big)$ | General, but biased |
| MLP/READ | MSE $O(T^{-1})$ (up to $\delta$ or log factors) | $O\big(\varepsilon^{-(2+\delta)}\big)$ / $O(\varepsilon^{-2})$ | Lipschitz/contractive links |
| Kernel Quadrature | Problem-specific | $O\big(\varepsilon^{-d/s}\big)$ (best) | Sufficient smoothness (Chen et al., 25 Feb 2025) |
4. Unbiased Estimation and Randomized Multilevel Methods
Unbiased estimators for RNEs circumvent the negative result on general-purpose NMC bias (Rainforth et al., 2016) by employing randomized telescoping expansions and antithetic corrections (e.g., Russian Roulette estimators). The READ estimator (Syed et al., 2023) applies at each nesting level a randomized MLMC procedure:
- For each layer, a geometric number of inner samples is drawn, with estimators constructed to ensure the overall output is unbiased.
- The complexity is controlled via careful choice of geometric probabilities; for models with bounded second derivatives with respect to the inner argument ("LBS" condition), a central limit theorem applies and batch averages converge at the canonical $O(\varepsilon^{-2})$ cost.
- Under just Lipschitz regularity, similar nearly-optimal bounds hold for the mean absolute error.
These techniques support massive parallelization, as independent unbiased replicates can be averaged arbitrarily.
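For a single nesting level, the randomized-telescoping construction can be sketched as follows. This is a generic Blanchet–Glynn-style Russian Roulette estimator with antithetic splitting on a toy Gaussian model; the parameter choices are illustrative, not those of the READ paper:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 1.0          # inner variable Y ~ N(mu, sigma^2)
f = lambda z: z ** 2          # nonlinear link; target is f(E[Y]) = mu**2 = 1
r = 0.35                      # level probabilities p_n = (1 - r) * r**n

def roulette_replicate():
    """One unbiased replicate of f(E[Y]) by randomized telescoping.

    Uses the identity f(E[Y]) = E[f(Y)] + sum_n E[Delta_n], where Delta_n
    couples f of a 2^(n+1)-sample mean with the average of f over its two
    antithetic halves; sampling n with probability p_n and weighting by
    1/p_n turns the infinite sum into a single unbiased draw.
    """
    n = rng.geometric(1.0 - r) - 1            # n >= 0 with P(n) = (1 - r) * r**n
    p_n = (1.0 - r) * r ** n
    y = rng.normal(mu, sigma, size=2 ** (n + 1))
    half_a, half_b = y[::2], y[1::2]          # antithetic split
    delta = f(y.mean()) - 0.5 * (f(half_a.mean()) + f(half_b.mean()))
    return f(rng.normal(mu, sigma)) + delta / p_n

estimates = np.array([roulette_replicate() for _ in range(100_000)])
# independent replicates average to f(E[Y]) = 1.0
```

Choosing the decay rate $r$ strictly between $1/4$ and $1/2$ in this smooth toy example keeps both the variance and the expected cost per replicate finite, which is the trade-off the geometric probabilities in READ are tuned to balance.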
5. Advanced Algorithms: Kernel Quadrature and Sparse Grid Methods
Nested Kernel Quadrature (NKQ)
For problems where the integrands are sufficiently smooth (lying in Sobolev spaces of index $s$), NKQ (Chen et al., 25 Feb 2025) replaces MC estimators at each layer with reproducing kernel Hilbert space methods. The recursive application of kernel quadrature at each level leverages smoothness to provide convergence rates up to $O\big(T^{-s/d}\big)$, dramatically outperforming MC when $s > d/2$ for Sobolev index $s$ and dimension $d$, and is especially efficient for moderate-depth, low-dimensional, smooth problems.
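To see where the smoothness gain enters, consider a single level: kernel quadrature with nodes $x_1, \dots, x_n$ uses weights $w = K^{-1}z$, where $K_{ij} = k(x_i, x_j)$ is the Gram matrix and $z_i = \int k(x, x_i)\,\mathrm{d}\pi(x)$ is the kernel mean embedding, so the rule integrates the kernel interpolant of the integrand exactly. A minimal one-level sketch with the Brownian-motion kernel $k(x, y) = \min(x, y)$ on $\mathrm{Uniform}[0, 1]$ (an illustrative kernel choice, not the one prescribed by NKQ):

```python
import numpy as np

nodes = np.linspace(0.1, 0.9, 9)              # quadrature nodes in (0, 1)
K = np.minimum.outer(nodes, nodes)            # Gram matrix, k(x, y) = min(x, y)
# closed-form embedding for Uniform[0, 1]: z_i = int_0^1 min(x, x_i) dx
z = nodes - nodes ** 2 / 2
weights = np.linalg.solve(K, z)               # kernel quadrature weights w = K^{-1} z

def kernel_quadrature(f):
    """Estimate int_0^1 f(x) dx as the weighted node sum sum_i w_i f(x_i)."""
    return float(weights @ f(nodes))

estimate = kernel_quadrature(lambda x: x)     # true integral is 0.5
```

The small residual error here comes entirely from the interpolant flattening beyond the last node; smoother kernels and denser nodes shrink it at the RKHS rate rather than the slow $n^{-1/2}$ MC rate.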
Sparse-Grid Monte Carlo
For cases where sampling from inner conditional distributions is infeasible or carries a high computational burden, sparse-grid approaches (Hironaka et al., 2023) employ stratification and telescoping sums over partitions of joint samples, achieving a mean-squared error of order $\varepsilon^2$ at close to the canonical $O(\varepsilon^{-2})$ cost when the outer (non-nested) dimension $d$ is small. This is efficient for low $d$ but suffers from the curse of dimensionality as $d$ increases.
6. Quantum Algorithms for RNEs
Quantum algorithms further improve the scaling of RNE estimation. A quantum version of derandomized MLMC, quantizing each level via Quantum Amplitude Estimation, achieves worst-case cost

$$
\widetilde{O}\big(\varepsilon^{-1}\big)
$$

for fixed nesting depth $D$ and under suitable bounded Lipschitz conditions on the integrands (Sun et al., 8 Feb 2026). This matches the $\Omega(\varepsilon^{-1})$ quantum lower bound for single-level expectation estimation, hence is optimal up to logarithmic factors. Applications include optimal stopping (e.g., Bermudan option pricing), nested risk estimation, and probabilistic program semantics, where the RNE problem is prevalent. This quantum approach represents an almost-quadratic speedup over the best classical $O(\varepsilon^{-2})$-cost algorithms.
7. Higher-Order Expectations in Networked Systems
In multi-agent settings and network games, RNEs manifest as "higher-order average expectations":

$$
\bar{E}^{(1)}[\theta] = \sum_i w_i\, E_i[\theta], \qquad
\bar{E}^{(k)}[\theta] = \sum_i w_i\, E_i\big[\bar{E}^{(k-1)}[\theta]\big],
$$

where the $w_i$ represent network weights and the $E_i$ are conditional expectations relative to agent $i$'s information (Golub et al., 2020). The limiting consensus expectation is determined by the stationary distribution of an associated Markov chain, integrating network structure and private information. Applications include economic coordination, speculative markets, and the analysis of "contagion" phenomena such as cascades of optimism or the "tyranny of the least-informed".
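The iteration toward consensus can be made concrete with a small power-iteration sketch (a hypothetical three-agent weight matrix, not taken from the cited paper):

```python
import numpy as np

# Row-stochastic weights: W[i, j] is the weight agent i places on agent j.
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
first_order = np.array([0.0, 1.0, 2.0])   # each agent's own expectation of theta

# k-th order average expectations: E^(k) = W @ E^(k-1) = W^k @ first_order.
expectations = first_order.copy()
for _ in range(60):
    expectations = W @ expectations       # one more layer of "beliefs about beliefs"

# Limit: every agent agrees on pi @ first_order, with pi the stationary
# distribution of the Markov chain whose transition matrix is W.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
consensus = float(pi @ first_order)
```

Iterating the averaging operator $W$ drives all agents' higher-order expectations to the same value: the first-order expectations weighted by the stationary distribution of $W$, exactly as the Markov-chain characterization above predicts.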
References
- (Rainforth et al., 2016) Rainforth et al., On the Pitfalls of Nested Monte Carlo
- (Beck et al., 2020) Hutzenthaler et al., Nonlinear Monte Carlo methods with polynomial runtime for high-dimensional iterated nested expectations
- (Syed et al., 2023) Syed and Wang, Optimal randomized multilevel Monte Carlo for repeatedly nested expectations
- (Hironaka et al., 2023) Hironaka and Goda, Estimating nested expectations without inner conditional sampling and application to value of information analysis
- (Chen et al., 25 Feb 2025) Oates et al., Nested Expectations with Kernel Quadrature
- (Sun et al., 8 Feb 2026) Childs et al., Optimal Quantum Speedups for Repeatedly Nested Expectation Estimation
- (Golub et al., 2020) Golub and Morris, Expectations, Networks, and Conventions