
Sparse Proper Generalized Decomposition (sPGD)

Updated 19 December 2025
  • sPGD is a methodology that decomposes high-dimensional, parametrized fields into sparse, low-rank separable modes, enabling efficient simulation predictions.
  • It employs non-intrusive collocation and greedy mode extraction to enforce sparsity and maintain computational tractability in real-time evaluations.
  • Integration with Rank-Reduction Autoencoders and latent space regression allows on-the-fly reconstruction of fields, validated by low error metrics in two-phase microstructure predictions.

Sparse Proper Generalized Decomposition (sPGD) is a non-intrusive, collocation-based methodology for constructing low-rank, separated representations of high-dimensional, parametrized fields in simulation-based engineering and design. By expressing quantities of interest—such as stress fields—as sums of products of spatial and parametric modes, sPGD enables rapid multiparametric solution predictions while maintaining computational tractability. This approach is a cornerstone within the Generative Parametric Design (GPD) framework, facilitating real-time geometry generation and on-the-fly, reduced-order field evaluation for complex materials and microstructures (Idrissi et al., 12 Dec 2025).

1. Separated Representation in sPGD

sPGD targets the approximation of multiparametric fields $u(x;\mu)$, e.g., the von Mises stress distribution, using a sum of separable modes:

$$\sigma(x,\mu_1,\mu_2) \;\approx\; \sum_{i=1}^{m} F^i(x)\,M^i_1(\mu_1)\,M^i_2(\mu_2),$$

where $F^i(x)$ denotes the $i$-th spatial mode and $M^i_j(\mu_j)$ are one-dimensional parametric functions in each parameter dimension ($j=1,2$). The number of terms $m$ is chosen such that $m \ll$ the number of degrees of freedom, enforcing sparsity in the representation. In practice, each parametric function $M^m_j$ is further expanded in a finite basis, such as a Kriging or polynomial basis of size $D_s$, and the coefficients $\lambda_k$ are selected to retain sparsity:

$$M^m_j(\mu_j) \;=\; \sum_{k=1}^{D_s}\phi_k(\mu_j)\,\lambda_k \;=\; \boldsymbol{\Phi}_j^{T}\,\boldsymbol{\lambda}_j.$$

This construction is particularly effective when only a reduced set of collocation points in parameter space is available, rather than full PDE assemblies.
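The separated representation above can be sketched in a few lines of numpy. This is a minimal illustration, assuming a simple monomial basis as a stand-in for the Kriging or polynomial basis; all function names, shapes, and values are illustrative, not the paper's implementation.

```python
import numpy as np

# Minimal sketch: evaluate sigma(x, mu1, mu2) ≈ sum_i F_i(x) * M1_i(mu1) * M2_i(mu2),
# with each parametric mode expanded in a small basis of size Ds.

def poly_basis(mu, Ds):
    """phi_k(mu) = mu**k, k = 0..Ds-1 (a stand-in for a Kriging basis)."""
    return np.array([mu**k for k in range(Ds)])

def evaluate_spgd(F, lam1, lam2, mu1, mu2):
    """F: (m, Nx) spatial modes; lam1, lam2: (m, Ds) basis coefficients."""
    m, Ds = lam1.shape
    sigma = np.zeros(F.shape[1])
    for i in range(m):
        M1 = poly_basis(mu1, Ds) @ lam1[i]   # scalar M1_i(mu1)
        M2 = poly_basis(mu2, Ds) @ lam2[i]   # scalar M2_i(mu2)
        sigma += F[i] * M1 * M2
    return sigma

# Toy usage: m = 3 modes, Ds = 8 basis functions, 100 spatial DOFs
rng = np.random.default_rng(0)
F = rng.normal(size=(3, 100))
lam1 = rng.normal(size=(3, 8))
lam2 = rng.normal(size=(3, 8))
field = evaluate_spgd(F, lam1, lam2, mu1=0.5, mu2=0.2)
```

Note that evaluating the surrogate costs only $m$ basis evaluations and one weighted sum of spatial modes, which is what makes real-time prediction feasible.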

2. Non-Intrusive Collocation and Mode Extraction

Unlike classical intrusive PGD approaches, sPGD utilizes a collocation-based, non-intrusive projection. The weak form,

$$\int_{\Omega_x\times\Omega_\mu} s^*(x,\mu)\bigl(\sigma_{\rm app}(x,\mu)-\sigma_{\rm ref}(x,\mu)\bigr)\,dx\,d\mu = 0,$$

is enforced at a limited set of collocation points $\mu^k$ via Dirac test functions $s^*(x,\mu)=\delta(\mu-\mu^k)$. Each new mode is greedily extracted using finite element projections at these samples, iteratively improving the accuracy of the separated expansion. This workflow retains computational efficiency, as the weak form is evaluated only at selected parameter points, and the subsequent decomposition maintains a low effective rank through mode selection.
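The greedy enrichment loop can be illustrated with a simplified alternating-least-squares sketch: each pass fits one new rank-one term to the residual of the snapshot matrix sampled at the collocation points, then deflates. This is a generic stand-in for the projection machinery described above, not the paper's actual solver; the per-sample coefficient `c` plays the role of the combined parametric factor $M^i_1(\mu_1^k)M^i_2(\mu_2^k)$ before it is regressed onto the sparse basis.

```python
import numpy as np

def greedy_modes(S, n_modes=3, sweeps=20):
    """Greedy rank-one extraction from S: (K, Nx) snapshots at K collocation points."""
    K, Nx = S.shape
    R = S.copy()                        # residual to be deflated
    modes = []
    for _ in range(n_modes):
        c = np.ones(K)                  # parametric coefficient at each sample
        for _ in range(sweeps):
            F = (c @ R) / (c @ c)       # least-squares spatial mode
            c = (R @ F) / (F @ F)       # least-squares per-sample coefficients
        modes.append((F, c))
        R = R - np.outer(c, F)          # deflate: remove the fitted term
    return modes, R

# Toy check: exactly rank-one data is recovered by a single mode
rng = np.random.default_rng(1)
S = np.outer(rng.normal(size=10), rng.normal(size=50))
modes, R = greedy_modes(S, n_modes=1)
```

On genuinely rank-one data, the residual after one pass drops to machine precision; on real snapshot data the loop is stopped once $m$ modes capture the field to the desired tolerance.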

3. Rank-Reduction Autoencoders (RRAE) for sPGD Modes

To further compress the sPGD solution, each separated mode (spatial and parametric) is encoded using a Rank-Reduction Autoencoder (RRAE). The RRAE architecture processes three sets of objects:

  • Spatial Modes ($F^i(x)$): Treated as three-channel 2D images (size $148\times148\times3$), encoded with four convolutional layers followed by a multi-layer perceptron (MLP), yielding a latent representation of size $L_x=3000$, which is truncated by SVD to $k_x=12$, resulting in $\gamma_x\in\mathbb{R}^{12}$.
  • First Parameter Modes ($M^i_1(\mu_1)$): Each set of three curves sampled at $1000$ points (flattened to a $3000$-vector), embedded via a shallow MLP to $L_1=1700$, SVD-truncated to $k_1=3$, and decoded by a deeper MLP, forming $\gamma_1\in\mathbb{R}^3$.
  • Second Parameter Modes ($M^i_2(\mu_2)$): Processed analogously to $M^i_1$, but with lower internal dimension ($L_2=800$, $k_2=3$), yielding $\gamma_2\in\mathbb{R}^3$.

The overall RRAE training minimizes the normalized Frobenius norm reconstruction error:

$$\mathcal{L}_{\rm recon} = \frac{\|X - \widetilde{X}\|_F}{\|X\|_F}\times 100\%,$$

with rank constraints imposed by the SVD truncation.
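The rank-constraint mechanism and the reconstruction loss above can be sketched as follows. The sketch replaces the learned encoder with a random latent matrix and applies only the SVD-truncation step; shapes mirror the spatial-mode branch ($L_x=3000$, $k_x=12$) but are otherwise illustrative.

```python
import numpy as np

def svd_truncate(L, k):
    """Project a latent matrix L (batch x latent_dim) onto its top-k SVD modes."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def frobenius_error_pct(X, X_tilde):
    """Normalized Frobenius reconstruction error, in percent."""
    return 100.0 * np.linalg.norm(X - X_tilde) / np.linalg.norm(X)

# Stand-in latent batch: 32 samples with latent size L_x = 3000
rng = np.random.default_rng(2)
L = rng.normal(size=(32, 3000))
L12 = svd_truncate(L, k=12)          # rank constraint k_x = 12
err = frobenius_error_pct(L, L12)
```

In the actual RRAE the truncation acts on the latent layer during training, so the decoder learns to reconstruct from the rank-constrained codes; here the SVD step alone shows how the constraint is imposed.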

4. Latent Space Regression and On-the-Fly Solution Assembly

Each geometry is encoded to a low-dimensional latent variable $\alpha$ (dimension $k_\alpha=4$) by a dedicated geometry RRAE. Multilayer perceptron regressors ($f_{\rm MLP}$) are trained to map this geometry code to the sPGD latent codes $(\gamma_x, \gamma_1, \gamma_2)$. The three regressors are configured as follows:

  • One MLP with three $128$-unit hidden layers for $\gamma_x$,
  • Two MLPs with two $64$-unit hidden layers each for $\gamma_1$ and $\gamma_2$.

The mapping is trained using the Adam optimizer, with mean absolute error as the loss, over $2000$–$3000$ epochs with batch size $32$.

For a new geometry $x^\dagger$, the workflow is:

  1. Encode the geometry to $\alpha$.
  2. Predict the sPGD latent codes $(\gamma_x, \gamma_1, \gamma_2)$.
  3. Decode the spatial modes $\hat{F}^i$ and parametric modes $\hat{M}^i_1$, $\hat{M}^i_2$.
  4. Assemble the separated sPGD approximation:

$$\sigma_{\rm app}(x,\mu_1,\mu_2) = \sum_{i=1}^{3} \hat{F}^i(x)\,\hat{M}^i_1(\mu_1)\,\hat{M}^i_2(\mu_2).$$

This yields real-time evaluations at computational cost $O(k_x + k_1 + k_2)$, consisting primarily of matrix–vector multiplications and lightweight neural network passes.
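The four steps above can be sketched end to end, with the trained encoder, regressors, and decoders replaced by stand-in linear maps. All names, shapes, and weights are illustrative placeholders, not the paper's actual models; only the pipeline structure is faithful.

```python
import numpy as np

def encode_geometry(x_img, W_enc):
    """Geometry-encoder stand-in: image -> latent alpha (k_alpha = 4)."""
    return W_enc @ x_img.ravel()

def regress_latents(alpha, Wx, W1, W2):
    """Regressor stand-ins: alpha -> (gamma_x, gamma_1, gamma_2)."""
    return Wx @ alpha, W1 @ alpha, W2 @ alpha

def decode_and_assemble(gx, g1, g2, Dx, D1, D2, m=3):
    """Decode m modes per branch and assemble sigma on the parametric grids."""
    F  = (Dx @ gx).reshape(m, -1)       # m spatial modes
    M1 = (D1 @ g1).reshape(m, -1)       # m curves over a mu1 grid
    M2 = (D2 @ g2).reshape(m, -1)       # m curves over a mu2 grid
    # sigma[x, p, q] = sum_i F[i, x] * M1[i, p] * M2[i, q]
    return np.einsum('ix,ip,iq->xpq', F, M1, M2)

# Toy run: 148x148 geometry, 100 spatial DOFs, 20-point parameter grids
rng = np.random.default_rng(3)
alpha = encode_geometry(rng.normal(size=(148, 148)),
                        rng.normal(size=(4, 148 * 148)))
gx, g1, g2 = regress_latents(alpha,
                             rng.normal(size=(12, 4)),   # k_x = 12
                             rng.normal(size=(3, 4)),    # k_1 = 3
                             rng.normal(size=(3, 4)))    # k_2 = 3
sigma = decode_and_assemble(gx, g1, g2,
                            rng.normal(size=(3 * 100, 12)),
                            rng.normal(size=(3 * 20, 3)),
                            rng.normal(size=(3 * 20, 3)))
```

The entire evaluation is a handful of small matrix–vector products plus one `einsum`, which is why the cost scales with the latent dimensions rather than with the finite element mesh.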

5. Application: Two-Phase Microstructure Prediction

The sPGD-RRAE scheme has been validated on a dataset of $599$ pixelated representative volume elements (RVEs) modeling two-phase microstructures ($148^2$ images), characterized by two independent Young's modulus parameters $\mu_1=E_1\in[800,2400]$ MPa and $\mu_2=E_2\in[12\,000,68\,000]$ MPa. The sPGD expansion uses $D_s=8$ basis functions per parameter and retains three modes. Results include:

  • MAPE: $<9\%$ on training and $<8\%$ on test collocation points.
  • Spatial Mode MAE: $\approx 10^{-3}$–$10^{-2}$.
  • Parametric Mode MAE: $<7\times10^{-3}$.
  • Latent Space Fidelity: True vs. decoded latent codes align on the identity for both train and test sets.
  • Generated Microstructures: GMM sampling of geometry latent codes yields FID $\approx 49$, indistinguishable from test-set distributions, confirming the capability to generate realistic novel morphologies.
  • Comparison to FE+PGD: On novel designs, the reconstructed sPGD fields retain qualitative agreement with full finite element plus PGD references, with minor smoothing near phase interfaces.
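For concreteness, the two scalar metrics quoted above can be computed as follows; this is a generic definition of MAPE and MAE, with a small epsilon guard added here as an assumption to avoid division by zero.

```python
import numpy as np

def mape(ref, pred, eps=1e-12):
    """Mean absolute percentage error between reference and predicted fields."""
    return 100.0 * np.mean(np.abs(pred - ref) / (np.abs(ref) + eps))

def mae(ref, pred):
    """Mean absolute error."""
    return np.mean(np.abs(pred - ref))

# Toy check: a uniform 10% overshoot gives MAPE = 10% and MAE = 0.1
m_pct = mape(np.ones(4), np.full(4, 1.1))
m_abs = mae(np.ones(4), np.full(4, 1.1))
```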

6. Integration within Generative Parametric Design (GPD) Framework

The synergy between sparse PGD, RRAEs, and latent regression underlies the GPD framework, which delivers a unified pipeline for generative design and rapid field prediction (Idrissi et al., 12 Dec 2025). By encoding both geometry and high-fidelity field solutions into compact, coupled latent spaces, the framework enables exploration and optimization of new morphologies with instant access to their parametric physical responses. This capability accelerates the development of digital and hybrid twins, supporting predictive modeling and real-time engineering decision-making.

Extensions under consideration include direct end-to-end training of RRAEs and regressors for improved latent space alignment, adaptive rank determination (aRRAE), and extensions to 3D geometries, additional parameter types (e.g., anisotropy, nonlinearities), multiphysics problems, and complex domain families.

7. Significance and Prospects

sPGD, as deployed in GPD, combines the efficiency of separated, sparse model reduction with learned latent representations for both geometry and parametric modes. This offers a scalable route to real-time answers in high-dimensional, multiparametric engineering design tasks. A plausible implication is the applicability of this paradigm to a broad class of simulation-based optimization and digital twin workflows, especially where rapid, on-the-fly solutions for novel geometries are required. The framework’s extensibility to higher-dimensional parameter spaces and more complex physics suggests ongoing relevance for mathematical modeling and computational mechanics (Idrissi et al., 12 Dec 2025).
