Greedy Time-Marching Scheme
- Greedy Time-Marching Scheme denotes a family of iterative algorithms that solve PDEs and control problems by selecting, at each step, the update with the optimal local residual so as to maintain causality.
- The approach encompasses fast-marching methods and multi-level extensions that use active band detection and Dijkstra-like ordering to reduce complexity and ensure convergence.
- Meshfree time-stepping employs greedy trial subspace selection to balance spatial and temporal errors, enhancing stability and reducing computation in high-dimensional settings.
A greedy time-marching scheme refers to a family of algorithms for the numerical solution of PDEs and optimal control problems, characterized by iterative advancement in either time or pseudo-time through the prioritized or residual-aware selection of solution updates. The unifying principle is greedy update selection: at every step, the scheme chooses the update (spatial node, trial function, stencil element, or system dimension) that locally minimizes, maximizes, or balances a prescribed measure of approximation, causality, or conditioning. Two major branches of this paradigm are (1) greedy fast-marching methods for static Hamilton-Jacobi (HJ) equations, particularly in minimum-time control and front-propagation, and (2) greedy trial subspace selection in meshfree time-stepping schemes for evolutionary parabolic PDEs, aiming to optimize spatial discretization relative to temporal error. These approaches leverage causality, local monotonicity, and residual-driven adaptivity to achieve significant computational gains while ensuring numerical stability and fidelity to the target PDE.
1. Fast Marching and Greedy Node Acceptance in Minimum-Time Hamilton–Jacobi Problems
The classical greedy time-marching scheme applied to the stationary Hamilton–Jacobi equation for minimum-time optimal control involves propagating a front through the domain by iteratively accepting spatial nodes with the smallest tentative arrival value. The control system is given by

$$\dot{y}(s) = f(y(s))\,\alpha(s), \qquad \alpha(s) \in S^{d-1},$$

with $f > 0$ the speed function and $S^{d-1}$ the unit sphere. The central object is the value function $T(x)$, the minimal time to reach a target set $\mathcal{K}$ from $x$. The eikonal equation

$$f(x)\,\|\nabla T(x)\| = 1 \quad \text{in } \Omega \setminus \mathcal{K},$$

with boundary condition $T = 0$ on $\partial\mathcal{K}$, admits a unique viscosity solution under standard controllability and inward-pointing conditions.
On a Cartesian mesh of step size $h$, each node carries a label ("Far", "Considered", "Accepted"). At each iteration, the greedy scheme accepts the node in "Considered" with minimal tentative value $T_h$, updating its neighbors using a local monotone solver (the upwind eikonal update) and preserving causality. The "single-pass" nature implies each node is updated only $O(1)$ times, and the method mimics Dijkstra's algorithm. This greedy, Dijkstra-style ordering is essential for correctness and efficiency (Akian et al., 2023).
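A minimal single-grid sketch of this greedy acceptance loop, assuming an isotropic speed field on a 2-D grid and a first-order upwind local solver (the function name and argument layout are illustrative, not the authors' interface):

```python
import heapq
import math

def fast_march(speed, h, sources):
    """Greedy fast marching for the eikonal equation f(x) * |grad T(x)| = 1
    on a 2-D grid: repeatedly accept the Considered node of smallest
    tentative value, Dijkstra-style, and relax its neighbors."""
    n, m = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * m for _ in range(n)]
    accepted = [[False] * m for _ in range(n)]
    heap = []  # the "Considered" band, keyed by tentative arrival time
    for i, j in sources:  # target nodes, where T = 0
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def local_update(i, j):
        # first-order upwind solve using the smallest neighbor in each axis
        tx = min(T[i - 1][j] if i > 0 else INF, T[i + 1][j] if i < n - 1 else INF)
        ty = min(T[i][j - 1] if j > 0 else INF, T[i][j + 1] if j < m - 1 else INF)
        hf = h / speed[i][j]
        lo, hi = min(tx, ty), max(tx, ty)
        if hi == INF or hi - lo >= hf:
            return lo + hf  # one-sided update (a single characteristic)
        return 0.5 * (tx + ty + math.sqrt(2.0 * hf * hf - (tx - ty) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)  # greedy step: minimal tentative value
        if accepted[i][j]:
            continue  # stale heap entry; node already finalized
        accepted[i][j] = True  # "Accepted": the value is now causal and final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and not accepted[a][b]:
                new = local_update(a, b)
                if new < T[a][b]:
                    T[a][b] = new
                    heapq.heappush(heap, (new, a, b))
    return T
```

The heap-with-stale-entries pattern stands in for the "Considered" band; each node is accepted exactly once, which is the single-pass property.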
2. Multi-Level Extension: State-Space Reduction and Band Shrinking
The multi-level fast-marching approach introduces several nested Cartesian grids of decreasing mesh sizes $h_1 > h_2 > \cdots > h_N$. At each level $l$, two partial fast-marching computations (to-target and from-source) yield approximate value functions $v_l^{\mathrm{src}}$ and $v_l^{\mathrm{dst}}$. These are combined into

$$F_l(x) = v_l^{\mathrm{src}}(x) + v_l^{\mathrm{dst}}(x),$$

the approximate minimal time among trajectories constrained to pass through $x$. Thresholding $F_l$ at $\min_x F_l(x) + \eta_l$, with a margin $\eta_l$ tied to the level-$l$ error bound, identifies "active" bands likely to contain optimal geodesics. Subsequent finer grids restrict computation to neighborhoods of these active bands, greatly reducing the number of nodes involved at each level. The greedy ordering is preserved within these bands, providing a causality-respecting, adaptive marching scheme. As bands shrink toward true geodesics, the sequence of approximated value functions converges to the unique viscosity solution (Akian et al., 2023).
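The band-detection step at one level can be sketched in a few lines of numpy (the function name and the margin parameter `eta` are illustrative; in the actual algorithm the margin is coupled to the level's error bound):

```python
import numpy as np

def active_band(v_src, v_dst, eta):
    """Combine to-target and from-source value fields and keep the nodes
    whose through-x travel time is within eta of the best found: these
    form the band refined at the next (finer) level."""
    F = v_src + v_dst          # approximate cost of passing through x
    return F <= F.min() + eta  # boolean mask of the active band
```

Only nodes where the mask is `True` are instantiated on the next grid, which is what shrinks the per-level work.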
3. Convergence Properties and Error Balancing
Restricting the Hamilton–Jacobi PDE to the band, with state constraints on its boundary, yields the same solution at interior geodesic points, due to viscosity uniqueness under boundary regularity. If the single-level fast-marching scheme with step $h$ has uniform error $O(h^{\gamma})$, then choosing the thresholding margin at level $l$ of order $h_l^{\gamma}$ ensures the bands contain all $O(h_l^{\gamma})$-approximate geodesics. The final solution at the finest level then converges at rate $O(h_N^{\gamma})$ (Akian et al., 2023).
For parabolic PDEs with meshfree time-marching, error balancing is achieved by equating the temporal discretization error (of order $O(\tau^{q})$ for a $q$th-order scheme with step $\tau$) and the spatial discretization error (of order $O(h^{p})$ for $p$th-order kernel approximation with fill distance $h$). This balance is encoded in the greedy selection and stopping criteria, e.g.,

$$\tau^{q} \approx h^{p},$$

ensuring neither error dominates the overall accuracy (Su et al., 2024).
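A concrete reading of this criterion: given the two orders, pick the time step from the fill distance so the error terms match (a sketch with constants dropped; the names `tau`, `h`, `p`, `q` are illustrative):

```python
def balanced_timestep(h, p, q):
    """Solve tau**q = h**p for tau: with spatial error O(h**p) and temporal
    error O(tau**q), this choice lets neither term dominate (constants dropped)."""
    return h ** (p / q)
```

For instance, a 4th-order kernel discretization (`p = 4`) paired with a 2nd-order time integrator (`q = 2`) tolerates a time step as large as `tau = h**2`.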
4. Greedy Trial Subspace Selection in Meshfree Time-Stepping
For parabolic systems such as reaction–diffusion-coupled bulk–surface models of the form

$$\partial_t u = \delta_u \Delta u + f(u, v) \ \ \text{in } \Omega, \qquad \partial_t v = \delta_v \Delta_{\Gamma} v + g(u, v) \ \ \text{on } \Gamma = \partial\Omega,$$

with coupling through the boundary conditions on $\Gamma$, kernel-based collocation is used for high-order spatial discretization. Trial centers $X = \{x_1, \ldots, x_n\}$ define the trial subspace $U_X = \mathrm{span}\{\Phi(\cdot, x_j) : x_j \in X\}$, where $\Phi$ is a radial positive-definite kernel (e.g., Gaussian, Matérn, Wendland). The greedy block-selection algorithm iteratively builds an effective trial basis and collocation set by maximizing the effect on primal and dual residuals. The process proceeds until stopping criteria based on conditioning and residual norms are satisfied, reducing system dimension and controlling conditioning (Su et al., 2024).
After trial subspace selection at the first timestep, subsequent time-marching proceeds by solving overdetermined least-squares systems in the fixed (reduced) trial space, unless the solution becomes highly nonstationary, in which case the greedy selection may be re-invoked. This method is particularly effective for slowly varying solutions, ensuring stable time-marching while guarding against under- or over-resolution in either the spatial or temporal discretization (Su et al., 2024).
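The flavor of the selection loop can be illustrated with a residual-maximizing column pick over the collocation matrix, in the spirit of orthogonal matching pursuit (a simplified one-column-at-a-time sketch, not the authors' exact block algorithm; all names are illustrative):

```python
import numpy as np

def greedy_trial_selection(K, rhs, max_dim, tol):
    """Greedily pick columns of the collocation matrix K that most reduce
    the residual of K c = rhs: one column per step, stopping when the
    residual norm drops below tol or the dimension budget is exhausted."""
    chosen = []
    residual = rhs.copy()
    for _ in range(max_dim):
        # score each column by its correlation with the current residual
        scores = np.abs(K.T @ residual)
        scores[chosen] = -np.inf  # never re-select a chosen column
        chosen.append(int(np.argmax(scores)))
        # least-squares fit on the selected subspace, then update the residual
        c, *_ = np.linalg.lstsq(K[:, chosen], rhs, rcond=None)
        residual = rhs - K[:, chosen] @ c
        if np.linalg.norm(residual) < tol:
            break
    return chosen, c
```

In the full method the score also accounts for dual residuals and conditioning, and selection is done in blocks rather than single columns; the reduced basis is then reused across timesteps.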
5. Complexity, Efficiency, and Empirical Outcomes
In classical fast-marching on a $d$-dimensional grid of step $h$, the arithmetic cost is $O(h^{-d} \log(h^{-1}))$. Achieving accuracy $\epsilon$ with a scheme of order $\gamma$ (so $h = O(\epsilon^{1/\gamma})$) results in complexity of order $\epsilon^{-d/\gamma}$, up to logarithmic factors. The multi-level fast-marching algorithm reduces the required complexity to order $\epsilon^{-\theta}$, where the exponent $\theta$ depends on $d$, $\gamma$, and the "stiffness" of the neighborhoods of optimal geodesics.
For typical smooth, nondegenerate data, $\theta$ is much smaller than $d/\gamma$, substantially accelerating computation for higher dimension $d$ or smaller $\epsilon$ (Akian et al., 2023).
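The source of the speedup can be sanity-checked with a back-of-the-envelope node count, under the simplifying assumption that each refinement level keeps only a fixed fraction of its grid inside the active band (all parameter names are illustrative):

```python
def node_counts(h_coarse, h_fine, d, levels, band_fraction):
    """Compare a full fine grid (~ h_fine**-d nodes) against nested grids
    whose steps shrink geometrically from h_coarse to h_fine and which
    keep only `band_fraction` of each successive level in the active band."""
    ratio = (h_fine / h_coarse) ** (1.0 / (levels - 1))
    steps = [h_coarse * ratio ** l for l in range(levels)]
    full = steps[-1] ** (-d)  # node count of the single fine grid
    banded = sum(band_fraction ** l * s ** (-d) for l, s in enumerate(steps))
    return full, banded
```

With `h_coarse = 0.1`, `h_fine = 0.01`, `d = 2`, two levels, and a 10% band, the banded count is roughly an order of magnitude below the full grid, and the gap widens with dimension.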
For meshfree time-stepping with block-greedy subspace selection, numerical simulations report reducing the trial space to a small fraction of the original basis size in high-dimensional bulk–surface problems. The method stabilizes time-marching under ill-conditioned kernels and preserves both spatial and temporal accuracy, with selection overhead amortized over multiple timesteps (Su et al., 2024).
| Scheme | Complexity | Key Feature |
|---|---|---|
| Single-grid FM | $O(\epsilon^{-d/\gamma})$ | All nodes updated, full mesh |
| Multi-level FM | $O(\epsilon^{-\theta})$, $\theta < d/\gamma$ | Band-shrinking around geodesics |
| Meshfree Greedy | Reduced trial-space dimension | Adaptive trial subspace, residual balancing |
6. Applications and Theoretical Justification
Greedy time-marching schemes are broadly applicable to minimum-time control, front propagation, and pattern formation in coupled bulk–surface systems. In minimum-time HJ equations, the methods efficiently resolve optimal trajectories and arrival times. In meshfree parabolic PDE settings, kernel collocation with block-greedy trial selection addresses the curse of dimensionality and loss of stability in high-smoothness or densely sampled settings, and is validated by near-optimal recovery rates in native RKHS norms. These methods inherently ensure monotonicity, causality, and stability, critical for applications sensitive to under-resolved fronts or stiffness-induced numerical blowup (Akian et al., 2023, Su et al., 2024).
A further implication is the emergence of frameworks that blend traditional PDE discretization with adaptive data selection through greedy strategies, opening new avenues for high-dimensional and complex-coupling scenarios.
7. Outlook
Greedy time-marching schemes continue to evolve at the intersection of computational optimal control, numerical analysis, and data-driven approximation. Their structural compatibility with causality and adaptivity underpins both theoretical advances in convergence and practical gains in scalable modeling of complex evolutionary and front-propagation phenomena. Ongoing developments focus on higher-order schemes, fully implicit marching, efficient re-selection procedures for rapidly changing solutions, and rigorous quantification of the tradeoffs between spatial adaptivity and residual-driven selection. In sum, greedy time-marching provides a foundational mechanism for robust, scalable solution of high-dimensional control and PDE problems (Akian et al., 2023, Su et al., 2024).