Latent Space Element Method (LSEM)
- LSEM is a non-intrusive, element-based surrogate modeling framework that leverages local latent ODE surrogates to approximate and predict PDE dynamics.
- It employs a modular autoencoder-based architecture to learn and couple local latent dynamics, enabling scalable simulation across varying geometries.
- The method eliminates the need for intrusive PDE operator access and Schwarz iterations, delivering rapid inference and smooth, artifact-free solution reconstructions.
The Latent Space Element Method (LSEM) is a non-intrusive, element-based surrogate modeling framework for partial differential equations (PDEs) that leverages local latent-space ordinary differential equation (ODE) surrogates, direct latent coupling, and smooth reconstruction. It enables scalable, geometry-flexible, and interpretable surrogate solvers that can train efficiently on small reference domains and predict on arbitrarily large or heterogeneous assemblies, without requiring intrusive access to PDE residuals or operator code. This approach generalizes the Data-Driven Finite Element Method (DD-FEM) paradigm in a fully latent and non-intrusive manner, avoiding Schwarz interface iterations and enabling rapid inference on domains and initial conditions far outside the training set (Chung et al., 5 Jan 2026).
1. Motivation and Conceptual Foundations
Reduced-order modeling (ROM) and data-driven surrogate methods have revolutionized scientific computing for PDEs, but their traditional formulations are limited by two core constraints: (i) the need for intrusive access to underlying PDE operators (as in projection-based POD-Galerkin and SINDy-ROMs) and (ii) inflexibility to varying geometries or domain sizes (as in standard neural operators and many physics-informed neural networks) (Chung et al., 5 Jan 2026). DD-FEM introduced the notion of reusable, localized ROMs associated with finite elements and classical domain decomposition, but existing frameworks incur substantial overhead from iterative Schwarz coupling and require intrusive element-level operator extraction.
LSEM eliminates both bottlenecks by learning a set of transferable, local latent ODE surrogates ("elements") trained using only high-fidelity solution snapshots over small reference patches. Domain-scale PDE surrogates are then assembled by coupling latent blocks directly through learned, direction-specific latent fluxes, reconstructing the global solution via window-based partition-of-unity blending. This approach delivers a “train small, predict large” paradigm for surrogate PDE solvers, enabling extensible and scalable scientific computing on domains and source conditions not seen during training (Chung et al., 5 Jan 2026).
2. Single-Element Model and Latent ODE Surrogates
Each LSEM "element" is associated with an autoencoder, composed of an encoder $E_i$ and a decoder $D_i$, mapping the local high-dimensional state $u_i \in \mathbb{R}^{n}$ to a low-dimensional latent $z_i = E_i(u_i) \in \mathbb{R}^{r}$ with $r \ll n$, and approximating $u_i \approx D_i(E_i(u_i))$.
Local dynamics are modeled by a learned latent ODE,

$$\dot{z}_i = A_i \, \Phi(z_i),$$

where $\Phi$ is a feature library (polynomial or otherwise nonlinear in $z_i$) and $A_i$ is a learnable coefficient matrix. In the simplest linear case, $\dot{z}_i = A_i z_i$. Eigenvalue-based regularization of $A_i$ ensures dynamical stability when required. Encoder/decoder architectures for the 1D Burgers’ and KdV equations utilize fully connected layers with smooth nonlinear activations such as softplus (Chung et al., 5 Jan 2026).
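The single-element pipeline (encode, evolve the latent ODE, decode) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the neural encoder/decoder are replaced by a fixed random linear map and its pseudoinverse, and the latent dimension, feature library, and coefficient values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 4                                   # full-state dim, latent dim (assumed)

E = rng.standard_normal((r, n)) / np.sqrt(n)   # stand-in linear "encoder"
D = np.linalg.pinv(E)                          # stand-in linear "decoder"

def phi(z):
    """Feature library: linear and quadratic monomials (one possible choice)."""
    return np.concatenate([z, z * z])

# Learnable coefficient matrix A; here only the linear part is nonzero,
# with negative eigenvalues so the latent dynamics are stable.
A = -0.1 * np.hstack([np.eye(r), np.zeros((r, r))])

def latent_rhs(z):
    return A @ phi(z)                          # dz/dt = A * Phi(z)

def rk4_step(z, dt=1e-2):
    """One explicit RK4 step of the latent ODE."""
    k1 = latent_rhs(z)
    k2 = latent_rhs(z + 0.5 * dt * k1)
    k3 = latent_rhs(z + 0.5 * dt * k2)
    k4 = latent_rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u0 = np.sin(np.linspace(0, 2 * np.pi, n))      # local high-dimensional state
z = E @ u0                                     # encode
for _ in range(100):
    z = rk4_step(z)                            # evolve entirely in latent space
u_pred = D @ z                                 # decode back to full space
```

Because all time stepping happens in the $r$-dimensional latent space, the cost per step is independent of the full-order resolution $n$.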
3. Latent Coupling and Global System Assembly
Neighboring elements exchange information in latent space through directionally parameterized coupling blocks. For element $i$ with latent $z_i$ and set of neighbors $\mathcal{N}(i)$,

$$\dot{z}_i = A_i \, \Phi(z_i) + \sum_{j \in \mathcal{N}(i)} B_{d(i,j)} \, z_j,$$

where each direction $d$ (e.g., left/right, upwind/downwind) admits its own learnable matrix $B_d$. The global latent state $Z = (z_1, \dots, z_N)$ then evolves by

$$\dot{Z} = \mathcal{M}(Z),$$

with block-diagonal (internal) and off-diagonal (directional coupling) structure in $\mathcal{M}$. The coupling is trained end-to-end within the global latent-dynamics loss, without the need for explicit interface residuals or Schwarz iterations. The exact directional decomposition is problem-dependent (e.g., left/right for 1D, upwind/downwind for hyperbolic PDEs) (Chung et al., 5 Jan 2026).
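The block structure of the assembled linear latent system can be made concrete with a small sketch. Assuming (hypothetically) a 1D chain of identical elements with shared internal dynamics `A` and direction-specific coupling matrices `B_left`/`B_right`, the global operator is assembled block by block; note that the same three learned matrices suffice for any chain length, which is what enables "train small, predict large":

```python
import numpy as np

r, n_elem = 3, 5                               # latent dim, element count (assumed)
rng = np.random.default_rng(1)
A = -np.eye(r)                                 # internal latent dynamics (linear case)
B_left = 0.1 * rng.standard_normal((r, r))     # coupling from the left neighbor
B_right = 0.1 * rng.standard_normal((r, r))    # coupling from the right neighbor

# Assemble the global operator M: diagonal blocks carry internal dynamics,
# off-diagonal blocks carry direction-specific coupling.
M = np.zeros((n_elem * r, n_elem * r))
for i in range(n_elem):
    M[i*r:(i+1)*r, i*r:(i+1)*r] = A
    if i > 0:
        M[i*r:(i+1)*r, (i-1)*r:i*r] = B_left
    if i < n_elem - 1:
        M[i*r:(i+1)*r, (i+1)*r:(i+2)*r] = B_right
```

The resulting block-tridiagonal structure mirrors a numerical flux stencil: each element interacts only with its immediate neighbors, so assembly and time stepping scale linearly in the number of elements.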
4. Field Reconstruction and Partition-of-Unity Blending
To reconstruct the high-dimensional global solution from overlapping per-element predictions $\hat{u}_i = D_i(z_i)$, LSEM employs a smooth window-based blending,

$$u(x, t) = \sum_i w_i(x) \, \hat{u}_i(x, t),$$

where the $w_i$ are compactly supported cosine windows, typically possessing exact partition-of-unity properties ($\sum_i w_i(x) = 1$), thus ensuring smoothness and continuity across element boundaries even for overlapping tiles. This eliminates the need for explicit interface corrections and yields mesh-free, artifact-free solution reconstructions (Chung et al., 5 Jan 2026).
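The partition-of-unity property of overlapping cosine windows can be verified directly. The sketch below uses raised-cosine ($\cos^2$) windows with 50% overlap on a periodic 1D domain, a minimal setting chosen for illustration; the paper's exact window construction and boundary handling may differ.

```python
import numpy as np

n_elem, pts = 6, 600
L = 1.0
h = L / n_elem                                 # center spacing = half the window support
x = np.linspace(0.0, L, pts, endpoint=False)

def window(x, c):
    """cos^2 window centered at c with support width 2h (periodic distance)."""
    d = np.abs((x - c + L / 2) % L - L / 2)    # wrapped distance to the center
    w = np.cos(np.pi * d / (2 * h)) ** 2
    return np.where(d < h, w, 0.0)

W = np.stack([window(x, i * h) for i in range(n_elem)])   # shape (n_elem, pts)

# Exact partition of unity: each x lies in the support of exactly two
# windows, and cos^2 + sin^2 = 1 across every overlap.
assert np.allclose(W.sum(axis=0), 1.0)

# Blending overlapping per-element fields; when all local predictions
# agree, the blended field reproduces them exactly.
u_local = np.stack([np.sin(2 * np.pi * x) for _ in range(n_elem)])
u_global = (W * u_local).sum(axis=0)
```

Because the windows sum identically to one, small disagreements between neighboring element predictions are averaged smoothly rather than producing jumps at interfaces.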
5. Training Protocols and Hyperparameters
Training of LSEM is conducted entirely in a non-intrusive fashion:
- Snapshot collection: Multiple high-fidelity simulations on small reference assemblies (e.g., four overlapping elements) yield snapshot collections.
- Subdomain extraction: Each global state $u$ is restricted to element $i$ as $u_i = R_i u$, where $R_i$ denotes restriction onto the element's subdomain.
- Autoencoders: Each element model is trained to minimize the reconstruction loss
  $$\mathcal{L}_{\mathrm{AE}} = \big\| u_i - D_i\big(E_i(u_i) + \varepsilon\big) \big\|^2,$$
  with additive latent noise $\varepsilon$ regularizing decoder stability.
- Latent-dynamics and coupling training: The joint loss
  $$\mathcal{L}_{\mathrm{dyn}} = \sum_i \Big\| \dot{z}_i - A_i \, \Phi(z_i) - \sum_{j \in \mathcal{N}(i)} B_{d(i,j)} \, z_j \Big\|^2$$
  supervises both internal and coupling dynamics, with stability regularization where needed (e.g., eigenvalue penalties for linear $A_i$).
- Optimization: Hyperparameters are system-specific: the Burgers' models train for 2000 epochs, while KdV employs deeper networks; optimization uses Adam or SOAP (Chung et al., 5 Jan 2026).
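The two losses above can be evaluated non-intrusively from snapshot data alone. The sketch below is a simplified illustration: the encoder/decoder are stand-in linear maps, the latent time derivative is approximated by finite differences, and all shapes and values are hypothetical; the paper trains neural networks end-to-end with automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, T = 32, 3, 50                            # state dim, latent dim, snapshots (assumed)
dt = 1e-2
U = rng.standard_normal((T, n))                # snapshot matrix u_i(t_k) for one element

E = rng.standard_normal((r, n)) / np.sqrt(n)   # stand-in encoder
D = np.linalg.pinv(E)                          # stand-in decoder
A = -0.5 * np.eye(r)                           # latent dynamics (linear case)

# Autoencoder loss with additive latent noise regularizing the decoder:
Z = U @ E.T                                    # encoded snapshots, shape (T, r)
noise = 0.01 * rng.standard_normal(Z.shape)
loss_ae = np.mean((U - (Z + noise) @ D.T) ** 2)

# Latent-dynamics loss: match finite-difference dz/dt against A z
# (coupling terms would enter the same residual for multi-element training):
dZ = (Z[1:] - Z[:-1]) / dt
loss_dyn = np.mean((dZ - Z[:-1] @ A.T) ** 2)

# Stability check in the spirit of eigenvalue-based regularization:
stable = np.all(np.linalg.eigvals(A).real < 0)

total = loss_ae + loss_dyn
```

Only solution snapshots `U` enter either loss; no PDE residual or operator evaluation is required, which is what makes the training fully non-intrusive.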
6. Numerical Experiments and Scalability
LSEM has been validated on the 1D Burgers’ and Korteweg–de Vries equations. In both, models trained on 4 overlapping elements generalize to assemblies with 12–24 elements—spatial domains up to 6–8 times larger than in training—without retraining or Schwarz iterations:
- Burgers’ equation: small relative training error, comparable error on the scaled domains, and inference substantially faster than the full-order model (FOM).
- KdV equation: similarly small training and scaled-domain errors, with a substantial inference speedup over the FOM.

Reconstructed solution fields are smooth across interfaces, with no mode-jumping or artifacts. The method scales linearly in domain size due to the block structure of the assembled latent ODE, and prediction reuses the same trained local models in new configurations (Chung et al., 5 Jan 2026).
7. Interpretability, Limitations, and Future Directions
LSEM explicitly separates internal subdomain dynamics from direction-resolved latent coupling, with analogous structure to numerical flux stencils in conventional discretizations. This yields interpretability of learned operators and clarity in error analysis. The method’s non-intrusiveness reduces the reliance on operator-level access, requiring only solution snapshots for training.
Current limitations include the restriction to one-dimensional problems and structured overlaps; extension to 2D and 3D will require adaptation of the windowing and latent coupling, and possibly the incorporation of CNN/GNN-based architectures. Expanding LSEM to more complex PDEs (multiphysics, turbulence), unstructured meshes, and adaptive element refinement remains an open direction. Integration with uncertainty quantification and multi-fidelity modeling is a noted prospect (Chung et al., 5 Jan 2026).
LSEM establishes a foundation-model approach to surrogate PDE solvers by combining the geometric flexibility of modular local models with the scalability and interpretability of latent dynamical systems, without reliance on PDE operator intrusiveness. This represents a convergence of advances in latent PDE solvers and domain-decomposition-inspired surrogate assembly (Ranade et al., 2021, Chung et al., 5 Jan 2026).