Nonlinear Model Reduction Methods
- Nonlinear Model Reduction Methods are techniques that construct low-dimensional surrogate models to efficiently capture the dynamics of high-dimensional nonlinear systems.
- They employ algorithms like POD-Galerkin, DEIM, autoencoders, and operator-theoretic approaches to manage strong nonlinearity and complex behaviors.
- Theoretical metrics based on Kolmogorov n-widths and error analysis guide these methods in preserving energy, stability, and accuracy in practical applications.
Nonlinear model reduction methods encompass a broad suite of algorithmic and theoretical approaches for constructing low-dimensional surrogate models that efficiently capture the input-output and dynamical behavior of high-dimensional nonlinear systems. Unlike their linear counterparts, these methods must grapple with slow decays in Kolmogorov n-widths, strong nonlinearity, moving shocks or coherent structures, and the challenge of inheriting structural properties such as energy conservation or stability from the full-order model. Current techniques include projection-based approaches (Galerkin, Petrov-Galerkin, DEIM), manifold and autoencoder-based methods, operator-theoretic reductions, transformation-based approaches, balanced truncation strategies, and various combinations thereof. This article surveys the foundational concepts, algorithmic structures, critical metrics, and limitations that define the field.
1. Mathematical Principles of Nonlinear Model Reduction
Classical model reduction for parametrized dynamical systems and PDEs is rooted in the approximation of the trajectory or solution manifold by low-dimensional subspaces. For a system

$$\dot{x}(t) = f(x(t), t; \mu), \qquad x(t) \in \mathbb{R}^N,$$

the goal is to approximate $x(t) \approx V \hat{x}(t)$, where $V \in \mathbb{R}^{N \times r}$ is a matrix of reduced basis vectors and $\hat{x}(t) \in \mathbb{R}^r$ are reduced coordinates. Projection-based techniques, such as Proper Orthogonal Decomposition (POD)-Galerkin, project the dynamics onto the subspace spanned by $V$, yielding

$$\dot{\hat{x}}(t) = V^\top f(V \hat{x}(t), t; \mu),$$

but the computation of the nonlinear term $f(V\hat{x})$ remains dependent on the full dimension $N$ unless additional approximations, e.g., the Discrete Empirical Interpolation Method (DEIM), are applied (Sipp et al., 2020, Wang, 2013).
When linear subspace models fail—especially for transport-dominated, multiscale, or highly nonlinear behavior (where the Kolmogorov n-width decays only algebraically)—the field turns to nonlinear parameterizations. Here, the solution is expressed on a low-dimensional nonlinear manifold, e.g., $x \approx g(\hat{x})$, with $\hat{x}$ living in a latent space, and $g$ realized through a decoder network, polynomial mapping, or diffeomorphic transformation (Schulze et al., 15 Jun 2025, Hesthaven et al., 1 Feb 2026, Kleikamp et al., 2022, Cocola et al., 2023).
Balanced truncation extends to nonlinear settings via empirical Gramians, variational analysis, and “lifting” transformations, but has required either trajectory-dependent Gramian computations or specially constructed polynomial representations of the original nonlinear system (Kawano et al., 2019, Ritschel et al., 2020, Redmann, 4 Aug 2025, Kramer et al., 2018). Structure preservation (energy, stability, input-output behavior) remains a major consideration in both the design and theoretical analysis of these methods.
2. Algorithmic Approaches
2.1. Projection-Based ROMs and Hyperreduction
The dominant family of methods uses projection onto linear or nonlinear trial manifolds derived from snapshot data. Key approaches are:
- POD-Galerkin: Project dynamics onto the span of leading POD modes; appropriate for moderate nonlinearities and slow modes when sufficient basis vectors capture the dominant behavior (Sipp et al., 2020, Schulze et al., 15 Jun 2025).
- POD-DEIM: Approximate nonlinear terms using a collateral basis for nonlinear snapshots and a sparse sampling/interpolation procedure to break the $N$-scaling of the nonlinear-term evaluation (Sipp et al., 2020, Wang, 2013, Cenedese et al., 2021, Tolle et al., 2021).
- Petrov-Galerkin/LSPG/APG: Employ time-varying, residual-based test spaces; APG uses a Markovian closure from the Mori–Zwanzig formalism and demonstrates improved robustness and stability in nonlinear or advection-dominated settings (Parish et al., 2018).
- POD-DMD: Replace explicit nonlinear evaluation with data-driven surrogates built by Dynamic Mode Decomposition of the nonlinear term (Alla et al., 2016, Schulze et al., 15 Jun 2025).
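A minimal sketch of the DEIM ingredient shared by several of these methods: greedy point selection on a basis of nonlinear-term snapshots, and the interpolation operator that lets the nonlinearity be evaluated at only a handful of entries. The snapshot matrix here is synthetic, not from an actual PDE solver:

```python
import numpy as np

def deim_indices(Uf):
    """Greedy DEIM point selection on a basis Uf of nonlinear-term snapshots."""
    m = Uf.shape[1]
    idx = [int(np.argmax(np.abs(Uf[:, 0])))]
    for j in range(1, m):
        # Interpolate the next mode at the chosen points, pick the worst residual.
        c = np.linalg.solve(Uf[idx][:, :j], Uf[idx, j])
        r = Uf[:, j] - Uf[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_operator(Uf, idx):
    """Matrix M with f ≈ M @ f[idx]: only len(idx) entries of f are evaluated."""
    return Uf @ np.linalg.inv(Uf[idx, :])

# Synthetic nonlinear-term snapshots (illustrative only).
rng = np.random.default_rng(1)
F = np.exp(rng.standard_normal((200, 50)))
Uf = np.linalg.svd(F, full_matrices=False)[0][:, :8]
idx = deim_indices(Uf)
M = deim_operator(Uf, idx)
g = Uf @ rng.standard_normal(8)  # any vector in span(Uf) is matched exactly
```

By construction, DEIM reproduces exactly any vector lying in the span of the collateral basis; the approximation error for general vectors is controlled by the projection error onto that span.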
2.2. Manifold and Autoencoder Approaches
When linear subspaces are insufficient, decoders trained via autoencoders, polynomial regression, or kernel methods define nonlinear trial manifolds (Cocola et al., 2023, Glas et al., 7 Jan 2025, Eivazi et al., 2024): $x \approx g(\hat{x})$, where $\hat{x}$ parametrizes the manifold, and both encoder and decoder can be realized as neural networks. The reduced dynamics are constructed by projecting the full dynamics onto the tangent space of the manifold, often using a least-squares Petrov–Galerkin residual projection.
Recent findings demonstrate that, under smoothness conditions and appropriately extended datasets (embedding time and parameter as explicit inputs), a linear encoder is sufficient and training complexity can be halved without sacrificing accuracy (Glas et al., 7 Jan 2025).
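The tangent-space projection described above can be sketched with a simple quadratic decoder standing in for a trained autoencoder; the decoder form, the toy dynamics, and all dimensions are illustrative assumptions:

```python
import numpy as np

def decoder(V, W, z):
    """Quadratic decoder g(z) = V z + W (z ⊗ z): a polynomial trial manifold."""
    return V @ z + W @ np.kron(z, z)

def decoder_jacobian(V, W, z):
    """dg/dz assembled column by column (exact for the quadratic decoder)."""
    r = z.size
    J = V.copy()
    for k in range(r):
        e = np.zeros(r); e[k] = 1.0
        J[:, k] += W @ (np.kron(e, z) + np.kron(z, e))
    return J

def manifold_rom_rhs(V, W, f, z):
    """Least-squares projection onto the tangent space: z' = J(z)^+ f(g(z))."""
    J = decoder_jacobian(V, W, z)
    return np.linalg.lstsq(J, f(decoder(V, W, z)), rcond=None)[0]

rng = np.random.default_rng(2)
n, r = 50, 3
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
W = 0.1 * rng.standard_normal((n, r * r))
f = lambda x: -x               # toy full-order dynamics
z = rng.standard_normal(r)
dz = manifold_rom_rhs(V, W, f, z)
```

The least-squares solve plays the role of the Petrov–Galerkin residual projection: among all tangent directions, it picks the one that best matches the full-order right-hand side.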
2.3. Operator-Theoretic and Data-Driven Methods
Approaches such as Koopman operator theory (EDMDc, KW), Dynamic Mode Decomposition with control (DMDc), and manifold learning with latent predictors provide non-intrusive ROMs by framing the reduced latent dynamics as linear or weakly nonlinear in a lifted or encoded space (Schulze et al., 15 Jun 2025). These approaches are favored for real-time and many-query applications in process engineering and input-rich scenarios.
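A minimal DMD-with-control fit, a building block of several of these non-intrusive approaches; the data here are generated from a known linear system (an assumption for verification, not part of the method) so the identified operators can be checked:

```python
import numpy as np

def dmdc(X, Xp, U):
    """Least-squares fit of x_{k+1} ≈ A x_k + B u_k (DMD with control)."""
    AB = Xp @ np.linalg.pinv(np.vstack([X, U]))
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]

# Data from a known stable linear system driven by random inputs.
rng = np.random.default_rng(3)
n, m, T = 4, 1, 200
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]
B_true = rng.standard_normal((n, m))
X = np.zeros((n, T))
Uc = rng.standard_normal((m, T))
X[:, 0] = rng.standard_normal(n)
for k in range(T - 1):
    X[:, k + 1] = A_true @ X[:, k] + B_true @ Uc[:, k]

A_fit, B_fit = dmdc(X[:, :-1], X[:, 1:], Uc[:, :-1])
```

In a Koopman/EDMD setting, the same regression is applied after lifting the state through a dictionary of observables or a learned encoder, so that the latent dynamics become (approximately) linear.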
2.4. Balanced Truncation and System Lifting
Balanced truncation is extended to nonlinear systems via empirical Gramians computed along fixed trajectories, structured quadratization/lifting, or Lyapunov-based differential inequalities, resulting in balanced nonlinear ROMs (Kawano et al., 2019, Ritschel et al., 2020, Redmann, 4 Aug 2025, Kramer et al., 2018). For models with complex nonlinearities, auxiliary variables are introduced to yield polynomial or quadratic-bilinear lifted systems amenable to analytic Gramian computation.
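The empirical-Gramian route can be sketched as snapshot quadrature along simulated trajectories followed by square-root balancing. The toy nonlinear system, the reuse of the controllability Gramian as an observability surrogate, and the regularization are illustrative assumptions:

```python
import numpy as np

def empirical_gramian(trajs, dt):
    """Snapshot quadrature W ≈ (1/L) Σ_traj Σ_k x_k x_k^T dt."""
    return sum(dt * X @ X.T for X in trajs) / len(trajs)

def balance_and_truncate(Wc, Wo, r):
    """Square-root balanced truncation from (empirical) Gramian factors."""
    n = Wc.shape[0]
    Lc = np.linalg.cholesky(Wc + 1e-12 * np.eye(n))
    Lo = np.linalg.cholesky(Wo + 1e-12 * np.eye(n))
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S          # reduction map
    Ti = S @ U[:, :r].T @ Lo.T     # its left inverse
    return T, Ti, s                # s: (empirical) Hankel singular values

# Toy stable nonlinear system, simulated from a few initial conditions.
rng = np.random.default_rng(4)
n = 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = lambda x: A @ x - 0.05 * x**3
trajs = []
for _ in range(3):
    x = rng.standard_normal(n); X = []
    for _ in range(500):
        x = x + 1e-2 * f(x); X.append(x.copy())
    trajs.append(np.column_stack(X))

Wc = empirical_gramian(trajs, 1e-2)
Wo = Wc.copy()  # illustration only; in practice from adjoint/output trajectories
T, Ti, hsv = balance_and_truncate(Wc, Wo, r=2)
```

The key property of the square-root construction is that the reduction map and its left inverse are biorthogonal, so the retained balanced coordinates are well defined.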
2.5. Transformation and Symmetry Approaches
For transport-dominated phenomena, explicit symmetry-separation or freezing transformations (shifted POD, Lagrangian basis, OT/Wasserstein projections) ‘align’ moving structures, enabling low-dimensional linear reduction in the transformed coordinates (Hesthaven et al., 1 Feb 2026, Kleikamp et al., 2022). Diffeomorphic transformations of the space-time domain are highly effective for hyperbolic problems with moving shocks, enabling rapid singular value decay of transported features in the velocity field (Kleikamp et al., 2022).
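The effect of such an alignment is easy to demonstrate on a transported profile: shifting each snapshot back to a reference frame collapses the singular value spectrum. The periodic grid and integer grid shifts are simplifying assumptions:

```python
import numpy as np

nx, nt = 256, 100
xg = np.linspace(0, 1, nx, endpoint=False)
u0 = np.exp(-200 * (xg - 0.5) ** 2)         # reference profile

snapshots, aligned = [], []
for k in range(nt):
    shift = int(round(k * nx / nt)) % nx     # transport: one period over nt steps
    u = np.roll(u0, shift)                   # raw (transported) snapshot
    snapshots.append(u)
    aligned.append(np.roll(u, -shift))       # shifted-back ("frozen") snapshot

S = np.column_stack(snapshots)
Sa = np.column_stack(aligned)
s_raw = np.linalg.svd(S, compute_uv=False)
s_al = np.linalg.svd(Sa, compute_uv=False)
# Raw spectrum decays slowly; the aligned matrix is rank one up to roundoff.
```

The raw snapshot matrix needs many modes because the pulse visits every grid location, whereas in the co-moving frame a single mode suffices; this is exactly the mechanism shifted-POD-type methods exploit.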
3. Structural Properties and Quantitative Metrics
Reduction methods are evaluated with a suite of quantitative metrics:
| Metric | Definition / Purpose |
|---|---|
| POD truncation error | Relative energy loss due to the number of kept modes ($1 - \sum_{i \le r} \sigma_i^2 / \sum_i \sigma_i^2$) |
| Model-prediction error | Deviation of the ROM trajectory from the POD projection ($\lVert V\hat{x}(t) - VV^\top x(t) \rVert$) |
| Eigenvalue error | For stability analysis: $\lvert \lambda_i^{\mathrm{ROM}} - \lambda_i^{\mathrm{FOM}} \rvert$ |
| Energy-structural error | Violation of skew-symmetry (e.g., of the convection operator in convective systems) |
| ROM accuracy (RMSE) | Root-mean-square error over states and time |
| Output error | Used in balanced truncation error bounds ($\lVert y - \hat{y} \rVert$) |
Structural preservation (e.g., energy conservation, stability, correct steady-state reconstruction) is critical for robustness (Sipp et al., 2020). Methods that rely on aggressive approximations or break antisymmetry can lose stability or suffer energy blow-up during transients, as observed for POD-DEIM in nonstationary flows (Sipp et al., 2020).
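The POD truncation error from the table above can be computed directly from the snapshot singular values; a minimal sketch on synthetic snapshots with a fast-decaying mode spectrum (the snapshot construction is an illustrative assumption):

```python
import numpy as np

def pod_truncation_error(snapshots, r):
    """Relative energy loss: 1 - (sum_{i<=r} sigma_i^2) / (sum_i sigma_i^2)."""
    s = np.linalg.svd(snapshots, compute_uv=False)
    return 1.0 - np.sum(s[:r] ** 2) / np.sum(s ** 2)

rng = np.random.default_rng(5)
# Synthetic snapshots with geometrically decaying mode energies.
U = np.linalg.qr(rng.standard_normal((80, 20)))[0]
S = U @ np.diag(2.0 ** -np.arange(20)) @ rng.standard_normal((20, 60))
err5 = pod_truncation_error(S, 5)
err15 = pod_truncation_error(S, 15)
```

Because the error is a tail sum of squared singular values, it is monotone in the number of kept modes, which makes it a convenient a priori criterion for choosing the reduced dimension.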
4. Theoretical Guarantees and Complexity
Theoretical performance is often tied to the decay rates of Kolmogorov $n$-widths, the structure of the underlying solution manifold, and the properties of the reduced dynamics:
- Library/dictionary approaches: Partition the parameter domain and assign a dedicated low-dimensional affine (possibly nonlinear) subspace per cell; can achieve linear-width approximation rates with a trade-off in library size and complexity [2005.025