Sparse Proper Generalized Decomposition (sPGD)
- sPGD is a methodology that decomposes high-dimensional, parametrized fields into sparse, low-rank separable modes, enabling efficient simulation predictions.
- It employs non-intrusive collocation and greedy mode extraction to enforce sparsity and maintain computational tractability in real-time evaluations.
- Integration with Rank-Reduction Autoencoders and latent space regression allows on-the-fly reconstruction of fields, validated by low error metrics in two-phase microstructure predictions.
Sparse Proper Generalized Decomposition (sPGD) is a non-intrusive, collocation-based methodology for constructing low-rank, separated representations of high-dimensional, parametrized fields in simulation-based engineering and design. By expressing quantities of interest—such as stress fields—as sums of products of spatial and parametric modes, sPGD enables rapid multiparametric solution predictions while maintaining computational tractability. This approach is a cornerstone within the Generative Parametric Design (GPD) framework, facilitating real-time geometry generation and on-the-fly, reduced-order field evaluation for complex materials and microstructures (Idrissi et al., 12 Dec 2025).
1. Separated Representation in sPGD
sPGD targets the approximation of multiparametric fields $u(\mathbf{x}; \mu_1, \dots, \mu_d)$, e.g., the von Mises stress distribution, using a sum of separable modes:

$$u(\mathbf{x}; \mu_1, \dots, \mu_d) \approx \sum_{i=1}^{N} F_i(\mathbf{x}) \prod_{j=1}^{d} g_i^j(\mu_j),$$

where $F_i$ denotes the $i$-th spatial mode and $g_i^j$ are one-dimensional parametric functions in each parameter dimension ($j = 1, \dots, d$). The number of terms $N$ is kept small relative to the number of degrees of freedom, enforcing sparsity in the representation. In practice, each parametric function is further expanded in a finite basis, such as a Kriging or polynomial basis of size $m$,

$$g_i^j(\mu_j) = \sum_{k=1}^{m} a_{ik}^{j}\, N_k(\mu_j),$$

and the coefficients $a_{ik}^{j}$ are selected so as to retain sparsity.
This construction is particularly effective when only a reduced set of collocation points in parameter space is available, rather than full PDE assemblies.
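The separated structure above is what makes evaluation cheap: each query reduces to evaluating $N$ products of one-dimensional functions and summing weighted spatial modes. A minimal numpy sketch, with illustrative sizes and a simple monomial basis standing in for the Kriging/polynomial basis (all names and dimensions here are assumptions, not the paper's values):

```python
import numpy as np

# Hypothetical sizes: N separable modes, a spatial grid of n_x points,
# d = 2 scalar parameters, and an m-term monomial basis per parameter.
rng = np.random.default_rng(0)
N, n_x, d, m = 3, 64, 2, 4

# Spatial modes F_i(x), one column per mode.
F = rng.standard_normal((n_x, N))

# Coefficients a[i, j, k] of the parametric functions:
# g_i^j(mu) = sum_k a[i, j, k] * mu**k  (illustrative basis choice).
a = rng.standard_normal((N, d, m))

def g(i, j, mu):
    """Evaluate the i-th parametric function along parameter j at mu."""
    return sum(a[i, j, k] * mu**k for k in range(m))

def evaluate(mu):
    """u(x; mu) ~ sum_i F_i(x) * prod_j g_i^j(mu_j) over the whole grid."""
    u = np.zeros(n_x)
    for i in range(N):
        weight = np.prod([g(i, j, mu[j]) for j in range(d)])
        u += weight * F[:, i]
    return u

field = evaluate(mu=(0.3, -0.7))   # one cheap multiparametric evaluation
```

Note that the cost per query is independent of any full-order solver: only the low-rank factors are touched.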
2. Non-Intrusive Collocation and Mode Extraction
Unlike classical intrusive PGD approaches, sPGD utilizes a collocation-based, non-intrusive projection. The weak form,

$$\int u^*(\boldsymbol{\mu})\,\bigl(u(\boldsymbol{\mu}) - u^N(\boldsymbol{\mu})\bigr)\, d\boldsymbol{\mu} = 0,$$

is enforced at a limited set of collocation points $\{\boldsymbol{\mu}_k\}$ via Dirac test functions $u^*(\boldsymbol{\mu}) = \sum_k \delta(\boldsymbol{\mu} - \boldsymbol{\mu}_k)$. Each new mode is greedily extracted using finite element projections at these samples, iteratively improving the accuracy of the separated expansion. This workflow retains computational efficiency, as the weak form is evaluated only at selected parameter points, and the subsequent decomposition maintains a low effective rank through mode selection.
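Greedy mode extraction of this kind can be illustrated with a simplified alternating-least-squares loop: fit one separable term to field snapshots known only at sampled parameter points, deflate the residual, and repeat. This is a toy stand-in for the non-intrusive projection, not the paper's implementation; the synthetic data, polynomial basis, and iteration counts are all assumptions:

```python
import numpy as np

# Collocation-only data: P snapshots of a field on an n_x grid, each taken
# at a sampled parameter pair (mu1, mu2). The ground truth is separable.
rng = np.random.default_rng(1)
n_x, P = 50, 40
mus = rng.uniform(-1, 1, size=(P, 2))
x = np.linspace(0, 1, n_x)
U = np.outer(mus[:, 0] * np.cos(mus[:, 1]), np.sin(np.pi * x))

def poly_basis(mu, m=3):
    return np.vander(mu, m, increasing=True)   # columns [1, mu, mu^2]

def extract_mode(R, mus, iters=20):
    """Alternating least squares for one mode F(x) * g1(mu1) * g2(mu2)."""
    a1, a2 = np.ones(3), np.ones(3)
    B1, B2 = poly_basis(mus[:, 0]), poly_basis(mus[:, 1])
    for _ in range(iters):
        w = (B1 @ a1) * (B2 @ a2)          # parametric weight at each sample
        F = (w @ R) / (w @ w)              # best spatial mode for these weights
        c = (R @ F) / (F @ F)              # per-sample scalar coefficients
        a1 = np.linalg.lstsq(B1 * (B2 @ a2)[:, None], c, rcond=None)[0]
        a2 = np.linalg.lstsq(B2 * (B1 @ a1)[:, None], c, rcond=None)[0]
    return F, a1, a2

R, modes = U.copy(), []
for _ in range(2):                          # greedy enrichment
    F, a1, a2 = extract_mode(R, mus)
    w = (poly_basis(mus[:, 0]) @ a1) * (poly_basis(mus[:, 1]) @ a2)
    R = R - np.outer(w, F)                  # deflate the residual
    modes.append((F, a1, a2))

rel_err = np.linalg.norm(R) / np.linalg.norm(U)
```

Because the loop only ever touches the sampled snapshots, no access to the underlying PDE operators is needed, which is the essence of the non-intrusive setting.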
3. Rank-Reduction Autoencoders (RRAE) for sPGD Modes
To further compress the sPGD solution, each separated mode (spatial and parametric) is encoded using a Rank-Reduction Autoencoder (RRAE). The RRAE architecture processes three sets of objects:
- Spatial Modes: treated as three-channel 2D images, encoded with four convolutional layers followed by a multi-layer perceptron (MLP); the resulting latent representation is truncated by SVD to a reduced rank, yielding the spatial latent codes.
- First Parameter Modes: each set of three curves, sampled at the collocation points and flattened into a single vector, is embedded via a shallow MLP, SVD-truncated, and decoded by a deeper MLP to reconstruct the parametric curves.
- Second Parameter Modes: processed analogously to the first parameter modes, but with a lower internal latent dimension.
The overall RRAE training minimizes the normalized Frobenius norm reconstruction error,

$$\mathcal{L} = \frac{\lVert X - \mathcal{D}(\mathcal{E}(X)) \rVert_F}{\lVert X \rVert_F},$$

where $\mathcal{E}$ and $\mathcal{D}$ denote the encoder and decoder, with rank constraints imposed by the SVD truncation.
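The rank-reduction mechanism can be demonstrated in a minimal linear setting: encode a batch, SVD-truncate the batch of latent codes, decode, and measure the normalized Frobenius error. This is an assumed simplification (real RRAEs use convolutional/MLP encoders and learned decoders; here the decoder is fitted by least squares for illustration):

```python
import numpy as np

# Synthetic data with exact low-rank structure so truncation is lossless.
rng = np.random.default_rng(2)
n_samples, n_features, latent_dim, rank = 200, 30, 10, 3
X = rng.standard_normal((n_samples, rank)) @ rng.standard_normal((rank, n_features))

# Linear "encoder": project each sample to a latent_dim code.
W_enc = rng.standard_normal((n_features, latent_dim)) * 0.1
Z = X @ W_enc                                   # one latent row per sample

# Rank reduction: SVD-truncate the *batch* of latent codes to `rank`.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
Z_trunc = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Linear "decoder" fitted by least squares for this illustration.
W_dec = np.linalg.lstsq(Z_trunc, X, rcond=None)[0]
X_hat = Z_trunc @ W_dec

# Normalized Frobenius reconstruction error, as in the training loss.
loss = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Since the synthetic data has rank equal to the truncation rank, the loss is numerically zero here; on real mode sets the truncation rank controls the compression/accuracy trade-off.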
4. Latent Space Regression and On-the-Fly Solution Assembly
Each geometry is encoded to a low-dimensional latent variable by a dedicated geometry RRAE. Multilayer perceptron regressors are then trained to map this geometry code to the sPGD latent codes of the spatial and parametric modes. The three regressors are configured as follows:
- One MLP with three $128$-unit hidden layers for the spatial-mode codes,
- Two MLPs with two $64$-unit hidden layers each for the first and second parameter-mode codes.
The mapping is trained using the Adam optimizer, with mean absolute error as the loss, over a fixed number of epochs with batch size $32$.
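The regressor architectures can be sketched as plain ReLU MLP forward passes with the layer widths quoted above. The geometry latent dimension and output code sizes are assumptions (the weights here are random, not trained):

```python
import numpy as np

# Assumed dimensions: geometry latent size and per-family code sizes are
# placeholders, not the paper's values.
rng = np.random.default_rng(3)
geo_dim = 8
out_dims = {"spatial": 3, "param1": 3, "param2": 3}

def make_mlp(sizes):
    """Random (untrained) weight/bias pairs for consecutive layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, z):
    for i, (W, b) in enumerate(layers):
        z = z @ W + b
        if i < len(layers) - 1:
            z = np.maximum(z, 0.0)             # ReLU on hidden layers only
    return z

regressors = {
    "spatial": make_mlp([geo_dim, 128, 128, 128, out_dims["spatial"]]),
    "param1":  make_mlp([geo_dim, 64, 64, out_dims["param1"]]),
    "param2":  make_mlp([geo_dim, 64, 64, out_dims["param2"]]),
}

z_geo = rng.standard_normal(geo_dim)           # a geometry latent code
codes = {name: forward(net, z_geo) for name, net in regressors.items()}
```

Splitting the mapping into three small regressors keeps each output space low-dimensional, which is what makes the mean-absolute-error regression tractable on a few hundred samples.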
For a new geometry, the workflow is:
- Encode the geometry to its latent code with the geometry RRAE.
- Predict the sPGD latent codes with the trained regressors.
- Decode the spatial modes and the two families of parametric modes.
- Assemble the separated sPGD approximation:

$$u(\mathbf{x}; \mu_1, \mu_2) \approx \sum_{i=1}^{N} F_i(\mathbf{x})\, g_i^1(\mu_1)\, g_i^2(\mu_2),$$

where $F_i$ are the decoded spatial modes and $g_i^1$, $g_i^2$ the decoded parametric functions.
This yields real-time evaluations at negligible computational cost, consisting primarily of matrix–vector multiplications and lightweight neural network passes.
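The four steps above can be wired together as a single prediction function. In this sketch the encoder, regressor, and decoders are stubs (random or trivial maps with assumed sizes) standing in for the trained RRAEs and MLPs; only the assembly step reflects the actual sPGD structure:

```python
import numpy as np

# Assumed sizes: grid points, number of modes, and basis size per parameter.
rng = np.random.default_rng(4)
n_x, N, m = 64, 3, 4

def encode_geometry(geometry):        # stub for the geometry RRAE encoder
    return geometry.mean(axis=0)

def regress_codes(z_geo):             # stub for the latent-space MLPs
    return rng.standard_normal(N * (n_x + 2 * m))

def decode(codes):                    # stub for the RRAE decoders
    F = codes[:N * n_x].reshape(N, n_x)           # spatial modes
    A = codes[N * n_x:].reshape(N, 2, m)          # parametric coefficients
    return F, A

def predict(geometry, mu):
    """Encode -> regress -> decode -> assemble the separated approximation."""
    F, A = decode(regress_codes(encode_geometry(geometry)))
    basis = lambda mu_j: np.array([mu_j**k for k in range(m)])
    weights = np.array([(A[i, 0] @ basis(mu[0])) * (A[i, 1] @ basis(mu[1]))
                        for i in range(N)])
    return weights @ F                # a few small matrix-vector products

field = predict(rng.standard_normal((16, 8)), mu=(0.2, 0.8))
```

Everything downstream of the encoder operates on a few hundred numbers at most, which is why the evaluation is effectively instantaneous.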
5. Application: Two-Phase Microstructure Prediction
The sPGD-RRAE scheme has been validated on a dataset of $599$ pixelated representative volume elements (RVEs) modeling two-phase microstructures, characterized by two independent Young's modulus parameters (in MPa), one per phase. The sPGD expansion uses a small number of basis functions per parameter and retains three modes. Results include:
- MAPE: low mean absolute percentage error on both training and test collocation points.
- Spatial Mode MAE: small mean absolute errors across the spatial modes.
- Parametric Mode MAE: similarly small mean absolute errors for the parametric modes.
- Latent Space Fidelity: true vs. decoded latent codes align on the identity line for both train and test sets.
- Generated Microstructures: GMM sampling of geometry latent codes yields a low FID, indistinguishable from test-set distributions, confirming the capability to generate realistic novel morphologies.
- Comparison to FE+PGD: On novel designs, the reconstructed sPGD fields retain qualitative agreement with full finite element plus PGD references, with minor smoothing near phase interfaces.
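The error metrics quoted above follow their standard definitions; a short sketch with synthetic placeholder arrays (not the paper's data) makes them concrete:

```python
import numpy as np

# Synthetic stand-ins: 10 reference fields on a 64-point grid, and a
# prediction perturbed by ~1% multiplicative noise.
rng = np.random.default_rng(5)
u_ref = rng.uniform(1.0, 2.0, size=(10, 64))
u_pred = u_ref * (1 + 0.01 * rng.standard_normal(u_ref.shape))

# Mean absolute percentage error (in percent) and mean absolute error.
mape = 100.0 * np.mean(np.abs((u_pred - u_ref) / u_ref))
mae = np.mean(np.abs(u_pred - u_ref))
```

MAPE is only well-defined when the reference field is bounded away from zero, which holds for quantities such as von Mises stress over a loaded RVE.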
6. Integration within Generative Parametric Design (GPD) Framework
The synergy between sparse PGD, RRAEs, and latent regression underlies the GPD framework, which delivers a unified pipeline for generative design and rapid field prediction (Idrissi et al., 12 Dec 2025). By encoding both geometry and high-fidelity field solutions into compact, coupled latent spaces, the framework enables exploration and optimization of new morphologies with instant access to their parametric physical responses. This capability accelerates the development of digital and hybrid twins, supporting predictive modeling and real-time engineering decision-making.
Extensions under consideration include direct end-to-end training of RRAEs and regressors for improved latent space alignment, adaptive rank determination (aRRAE), and generalization to 3D geometries, additional parameter types (e.g., anisotropy, nonlinearities), multiphysics problems, and complex domain families.
7. Significance and Prospects
sPGD, as deployed in GPD, combines the efficiency of separated, sparse model reduction with learned latent representations for both geometry and parametric modes. This offers a scalable route to real-time answers in high-dimensional, multiparametric engineering design tasks. A plausible implication is the applicability of this paradigm to a broad class of simulation-based optimization and digital twin workflows, especially where rapid, on-the-fly solutions for novel geometries are required. The framework’s extensibility to higher-dimensional parameter spaces and more complex physics suggests ongoing relevance for mathematical modeling and computational mechanics (Idrissi et al., 12 Dec 2025).