Finite-Basis PINNs: Scalable PDE Solvers
- FBPINNs are a domain decomposition method that uses localized neural networks with smooth window functions to ensure global continuity.
- They enhance training efficiency and accuracy for high-frequency, multi-scale PDEs by mitigating spectral bias through per-patch rescaling and blending.
- FBPINNs support parallel Schwarz-type training and multilevel extensions, offering robust scalability and reduced error metrics.
Finite-Basis Physics-Informed Neural Networks (FBPINNs) generalize the standard physics-informed neural network (PINN) paradigm by introducing explicit domain decomposition and an overlapping, partition-of-unity architecture. This approach divides the computational domain into multiple overlapping subdomains, each equipped with a localized neural network and blended globally via smooth window functions. FBPINNs provide scalable, mesh-free, and automated frameworks for solving partial differential equations (PDEs) and related inverse problems, demonstrating improved accuracy, training efficiency, and scalability—especially for high-frequency, multiscale, and large-domain problems—relative to conventional monolithic PINNs (Moseley et al., 2021, Dolean et al., 2023, Dolean et al., 19 Nov 2025).
1. Mathematical Foundations and Core Ansatz
FBPINNs begin by decomposing the global domain $\Omega$ into $J$ overlapping subdomains $\{\Omega_j\}_{j=1}^{J}$ with $\bigcup_j \Omega_j = \Omega$. Each subdomain is assigned a smooth window function $\omega_j$ with compact support $\mathrm{supp}(\omega_j) \subseteq \Omega_j$, satisfying the partition-of-unity property
$$\sum_{j=1}^{J} \omega_j(x) = 1 \quad \text{for all } x \in \Omega.$$
The global trial solution is constructed as
$$u(x;\theta) = \sum_{j=1}^{J} \omega_j(x)\, u_j(x;\theta_j).$$
Here, $u_j(\cdot;\theta_j)$ is a small neural network (perceptron or other architecture) localized to $\Omega_j$. The blending via $\omega_j$ ensures global continuity without explicit interface penalties. This ansatz is inspired by classical finite element and domain decomposition methods but employs neural networks to represent the finite basis functions within each patch (Moseley et al., 2021, Dolean et al., 2022, Dolean et al., 2023, Dolean et al., 19 Nov 2025).
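The ansatz above can be sketched in a few lines of NumPy. This is a minimal 1D illustration, not a reference implementation: the function names are invented here, and simple smooth callables stand in for the trained local networks.

```python
import numpy as np

def cos2_window(x, center, half_width):
    """Squared-cosine window, compactly supported on [center-hw, center+hw]."""
    t = (x - center) / half_width
    return np.where(np.abs(t) < 1.0, np.cos(0.5 * np.pi * t) ** 2, 0.0)

def fbpinn_ansatz(x, centers, half_widths, local_nets):
    """Global trial solution u(x) = sum_j omega_j(x) * u_j(x) (1D sketch)."""
    raw = np.stack([cos2_window(x, c, h) for c, h in zip(centers, half_widths)])
    weights = raw / raw.sum(axis=0)           # normalize -> partition of unity
    locals_ = np.stack([net((x - c) / h)      # per-patch rescaling to [-1, 1]
                        for net, c, h in zip(local_nets, centers, half_widths)])
    return (weights * locals_).sum(axis=0)

# toy "local networks": arbitrary smooth callables standing in for MLPs
nets = [np.sin, np.cos, np.tanh]
centers, hws = [0.0, 0.5, 1.0], [0.35, 0.35, 0.35]
x = np.linspace(0.0, 1.0, 201)
u = fbpinn_ansatz(x, centers, hws, nets)
```

Because each local network only ever sees rescaled inputs in $[-1,1]$, swapping in trained subnetworks leaves the blending logic unchanged.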
2. Window Functions, Local Rescaling, and Partition-of-Unity
A key architectural aspect of FBPINNs is the choice of partition-of-unity (window) functions, which enable seamless gluing of subdomain solutions. A common choice is squared-cosine windows,
$$\hat{\omega}_j(x) = \prod_{d=1}^{D} \cos^2\!\left(\frac{\pi\,(x_d - \mu_{j,d})}{2\,\sigma_{j,d}}\right) \ \text{for } |x_d - \mu_{j,d}| \le \sigma_{j,d} \text{ (zero otherwise)}, \qquad \omega_j(x) = \frac{\hat{\omega}_j(x)}{\sum_k \hat{\omega}_k(x)},$$
where $\mu_{j,d}$ and $\sigma_{j,d}$ denote the center and half-width of $\Omega_j$ in direction $d$, and the overlap ratio controls the blending width (Dolean et al., 19 Nov 2025, Dolean et al., 2023). The partition-of-unity guarantees global coverage and smooth transitions. Local rescaling ("per-patch normalization") maps each subdomain to a canonical coordinate system (e.g., $[-1,1]^D$), mitigating the spectral bias and allowing each local network to efficiently approximate high-frequency local features (Moseley et al., 2021, Dolean et al., 2023, Dolean et al., 2022).
3. Loss Functions and Training Schedules
The composite loss functional in FBPINNs consists of physics-informed residuals evaluated at collocation points (interior and boundary), with optional data-fitting or parameter constraints for inverse problems:
$$\mathcal{L}(\theta) = \frac{1}{N}\sum_{i=1}^{N} \big\| \mathcal{N}[u](x_i;\theta) - f(x_i) \big\|^2 + \lambda_b\, \mathcal{L}_{\mathrm{bc}}(\theta) + \lambda_d\, \mathcal{L}_{\mathrm{data}}(\theta),$$
where $\mathcal{N}$ is the differential operator, $\mathcal{L}_{\mathrm{bc}}$ enforces boundary conditions, and $\mathcal{L}_{\mathrm{data}}$ collects optional data or parameter constraints (Anderson et al., 2024, Dolean et al., 2022, Saha et al., 2024). The overlapping architecture naturally enforces interface smoothness, requiring no additional "gluing" penalties.
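A dependency-free 1D illustration of such a collocation loss is given below. The function names are invented for this sketch, and derivatives are approximated with central finite differences purely to keep the example self-contained; actual FBPINN implementations compute them by automatic differentiation.

```python
import numpy as np

def physics_loss(u, f, xs, eps=1e-4):
    """Mean squared PDE residual for -u''(x) = f(x) at collocation points xs.
    Central finite differences stand in for automatic differentiation."""
    upp = (u(xs + eps) - 2.0 * u(xs) + u(xs - eps)) / eps**2
    return np.mean((-upp - f(xs)) ** 2)

def boundary_loss(u, bc_points, bc_values):
    """Mean squared mismatch of u at Dirichlet boundary points."""
    return np.mean((u(np.asarray(bc_points)) - np.asarray(bc_values)) ** 2)

# sanity check: the exact solution of -u'' = pi^2 sin(pi x), u(0)=u(1)=0,
# is u = sin(pi x), so the total loss should be near zero
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
xs = np.linspace(0.0, 1.0, 101)
total = physics_loss(u_exact, f, xs) + boundary_loss(u_exact, [0.0, 1.0], [0.0, 0.0])
```

In an FBPINN, `u` would be the blended partition-of-unity ansatz, so minimizing this loss trains all subdomain networks against the same global residual.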
FBPINN training exploits Schwarz-type iterative schedules:
- Additive Schwarz (fully parallel): all subdomains updated simultaneously.
- Multiplicative Schwarz (sequential): one subdomain updated at a time.
- Hybrid Schwarz (coloring): updates on non-overlapping subdomain sets in parallel (Dolean et al., 2022).
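The three schedules differ only in how subdomain updates are grouped per sweep. A minimal sketch (function name and the 1D-chain assumption, where only neighboring subdomains overlap, are ours):

```python
def schwarz_schedule(num_subdomains, mode):
    """Return groups of subdomain indices updated together in one sweep.
    Assumes a 1D chain decomposition where only neighbors overlap."""
    idx = list(range(num_subdomains))
    if mode == "additive":          # fully parallel: all subdomains at once
        return [idx]
    if mode == "multiplicative":    # sequential: one subdomain per step
        return [[j] for j in idx]
    if mode == "hybrid":            # coloring: non-overlapping sets in parallel
        return [idx[0::2], idx[1::2]]
    raise ValueError(f"unknown mode: {mode}")
```

For the hybrid schedule, two colors suffice on a chain because non-adjacent subdomains do not overlap; general decompositions need a graph coloring of the overlap structure.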
Multilevel extensions introduce hierarchies of coarse-to-fine domain decompositions, enhancing global communication between subdomains and improving scalability for problems with large numbers of subdomains $J$ (Dolean et al., 2023, Dolean et al., 19 Nov 2025).
4. Algorithmic Extensions: ELM Linearization and Preconditioned Training
Recent work extends FBPINNs in two orthogonal directions:
- ELM-FBPINN: Each subdomain network is replaced by an Extreme Learning Machine (single hidden layer with random weights, only training linear coefficients). The global problem then becomes a high-dimensional, but sparse, linear or least-squares system, drastically reducing training time. This yields orders-of-magnitude speedup and retains scalability with respect to the number of subdomains but is currently demonstrated only on 1D examples (Anderson et al., 2024).
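The linearization idea can be demonstrated on a single subdomain (a simplification we make here; ELM-FBPINN assembles such blocks across overlapping subdomains into one sparse system). With frozen random hidden weights, only the linear output coefficients are unknown, so PDE collocation reduces to a least-squares solve:

```python
import numpy as np

# ELM sketch: tanh features with frozen random weights; only the linear
# coefficients c are solved for, via a single least-squares system.
rng = np.random.default_rng(0)
m = 200                                   # number of hidden features
w = rng.uniform(-6.0, 6.0, m)             # frozen input weights
b = rng.uniform(-6.0, 6.0, m)             # frozen biases

def phi(x):                               # features phi_k(x) = tanh(w_k x + b_k)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):                            # analytic second derivative in x
    t = np.tanh(np.outer(x, w) + b)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

# solve -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0
# (exact solution: u = sin(pi x))
xs = np.linspace(0.0, 1.0, 100)
A = np.vstack([-phi_xx(xs), phi(np.array([0.0, 1.0]))])   # residual + BC rows
rhs = np.concatenate([np.pi**2 * np.sin(np.pi * xs), [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u_elm = phi(xs) @ c
```

No gradient-based training occurs: the entire "training" is the `lstsq` call, which is the source of the reported speedups. The weight-scale choice here is ad hoc; as the cited work notes, principled random initialization remains an open question.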
- Multi-Preconditioned LBFGS (MP-LBFGS): For standard nonlinear FBPINNs (with general MLP subnetworks), parallel, subdomain-local quasi-Newton corrections are constructed and optimally combined through a subspace minimization. MP-LBFGS reduces global epochs and communication, accelerates convergence, and achieves lower final validation error compared to standard LBFGS, especially for large numbers of subdomains (Salvadó-Benasco et al., 13 Jan 2026).
5. Theoretical Justification and Spectral Bias Mitigation
The domain decomposition structure of FBPINNs directly addresses the spectral bias of neural networks. By localizing the representation, high-frequency modes of the global solution correspond to lower-frequency modes within each rescaled patch. This accelerates convergence and enables accurate resolution of multi-scale and oscillatory solutions without excessively enlarging network size or collocation density (Moseley et al., 2021, Dolean et al., 2022, Heinlein et al., 2024, Dolean et al., 2023, Dolean et al., 19 Nov 2025). Multilevel and coarse-space correction strategies further restore scalability, with coarse global networks capturing global (low-frequency) modes and local subnetworks capturing high-frequency or residual content (Dolean et al., 2022, Dolean et al., 2023).
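The frequency-lowering effect of rescaling is easy to verify numerically. A global mode $\sin(\omega x)$ restricted to a patch of half-width $\delta$ and rescaled via $x = c + \delta \hat{x}$ becomes $\sin(\omega\delta\,\hat{x} + \omega c)$, whose local angular frequency $\omega\delta$ is far smaller than $\omega$. A small demo (parameter values are illustrative):

```python
import numpy as np

def count_sign_changes(y):
    """Count zero crossings of a sampled signal."""
    s = np.sign(y)
    return int(np.sum(s[:-1] * s[1:] < 0))

omega = 50.0 * np.pi                  # 25 oscillations on [0, 1]
c, delta = 0.5, 0.05                  # patch center and half-width
x_global = np.linspace(1e-4, 1.0 - 1e-4, 2001)
xhat = np.linspace(-1.0, 1.0, 2001)   # rescaled patch coordinate

global_crossings = count_sign_changes(np.sin(omega * x_global))
local_crossings = count_sign_changes(np.sin(omega * (c + delta * xhat)))
```

The local network on this patch only has to represent a few oscillations over $[-1,1]$, which gradient-based training handles far more readily than the full-frequency global signal.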
6. Empirical Performance and Applications
Empirical evaluations consistently demonstrate that FBPINNs outperform standard PINNs in accuracy, efficiency, and data requirements for problems involving high-frequency features, multiscale structure, and larger domains. Representative results include:
- For the Helmholtz equation at moderate frequency, FBPINNs reduce relative errors in the real part of the solution compared to PINNs under both Adam+L-BFGS and ENGD optimization (Dolean et al., 19 Nov 2025).
- Multi-level FBPINNs further improve scaling and reduce error, consistently achieving one to two orders-of-magnitude lower errors at comparable cost and rapid convergence, even in the presence of many subdomains (Dolean et al., 2023, Heinlein et al., 2024).
- Parameter identification and model discovery via domain-decomposed FBPINNs yield lower MSE and enhanced robustness to noise, especially when data is quasi-stationary or sparse (Saha et al., 2024).
In addition, FBPINN concepts have been adapted to Kolmogorov-Arnold Networks ("FBKANs") and operator learning architectures, retaining the core domain decomposition, windowing, and parallelization strategies (Howard et al., 2024, Heinlein et al., 2024).
7. Practical Considerations, Limitations, and Future Directions
FBPINNs inherit key advantages:
- Scalability to large and multi-scale problems via parallelizable, patch-wise neural architectures.
- Robust mitigation of spectral bias.
- Automatic imposition of continuity and smoothness across subdomains.
However, challenges include:
- Only moderate error reduction at very high frequencies; performance plateaus for extremely oscillatory regimes.
- Sensitivity to overlap ratio and smoothness of window functions.
- In some settings, training the imaginary component (e.g., in Helmholtz) remains challenging (Dolean et al., 19 Nov 2025).
- For ELM-FBPINN, application to higher-dimensional PDEs, best practices for random initialization, and ill-conditioning in complex geometries remain open questions (Anderson et al., 2024).
Prospective extensions include:
- Full multilevel (coarse-to-fine) domain decompositions (Dolean et al., 2023, Dolean et al., 19 Nov 2025).
- Adaptive overlap or windowing strategies.
- Physics-informed operator networks using FBPINN-style domain partitioning (Heinlein et al., 2024).
- Further algorithmic improvements in nonlinear preconditioning, communication patterns, and robust hyperparameter selection in large-scale high-performance computing environments (Salvadó-Benasco et al., 13 Jan 2026).
FBPINNs thus represent a principled, domain decomposition-inspired advance in physics-informed machine learning, providing a pathway to mesh-free, scalable, and accurate neural solvers for challenging PDEs and inverse problems spanning high-frequency, multi-scale, and data-sparse regimes (Moseley et al., 2021, Dolean et al., 2023, Dolean et al., 19 Nov 2025).