FEENet: Finite Element Eigenfunction Network
- Finite Element Eigenfunction Network (FEENet) is a hybrid spectral learning framework that leverages FEM eigenfunction theory to accurately solve PDEs on complex and irregular domains.
- It decouples geometry encoding via offline eigenbasis computation from neural regression, predicting spectral coefficients to ensure resolution-independent and interpretable solutions.
- Benchmark tests demonstrate FEENet's superior accuracy and significantly reduced training times compared to DeepONet and MIONet across various PDE scenarios.
The Finite Element Eigenfunction Network (FEENet) is a hybrid spectral learning framework designed for the efficient and accurate solution of partial differential equations (PDEs) on complex or irregular geometries. FEENet leverages the eigenfunction theory of self-adjoint elliptic differential operators in conjunction with the finite element method (FEM) to construct a geometry-adapted spectral basis, facilitating the representation of PDE solutions in terms of spectral coefficients. Neural operator learning is reduced to the prediction of these spectral coefficients, combining the structure-preserving benefits of FEM with the resolution-independence and data efficiency characteristic of neural operators. FEENet achieves superior accuracy and computational efficiency compared to reference architectures such as DeepONet and MIONet, particularly on challenging geometries and for nonlocal operators (Li et al., 31 Jan 2026).
1. Theoretical Foundations: Spectral FEM and Solution Expansion
FEENet is rooted in the spectral decomposition of PDE solution spaces. Given a bounded domain $\Omega \subset \mathbb{R}^d$ with homogeneous Dirichlet boundary conditions, consider a self-adjoint, strongly elliptic operator $\mathcal{L}$ (e.g., the Laplace–Beltrami operator). The associated eigenvalue problem is
$$\mathcal{L}\varphi_k = \lambda_k \varphi_k \ \text{in } \Omega, \qquad \varphi_k = 0 \ \text{on } \partial\Omega.$$
The weak formulation seeks eigenpairs $(\lambda_k, \varphi_k) \in \mathbb{R} \times H_0^1(\Omega)$ such that
$$a(\varphi_k, v) = \lambda_k (\varphi_k, v)_{L^2(\Omega)} \quad \text{for all } v \in H_0^1(\Omega),$$
where $a(\cdot,\cdot)$ is the bilinear form associated with $\mathcal{L}$. Discretization with an FEM basis leads to the generalized eigenproblem
$$K \boldsymbol{\varphi}_k = \lambda_k M \boldsymbol{\varphi}_k,$$
with $K$ and $M$ the stiffness and mass matrices. The eigenfunctions form an $L^2$-orthonormal, geometry-adapted basis. Any $u \in L^2(\Omega)$ admits the expansion
$$u = \sum_{k=1}^{\infty} c_k \varphi_k, \qquad c_k = (u, \varphi_k)_{L^2(\Omega)},$$
truncated in practice to the first $M$ dominant modes.
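Concretely, the offline stage reduces to a sparse generalized eigensolve. A minimal sketch in Python/SciPy, using a 1-D P1 discretization of the Dirichlet Laplacian on $[0,1]$ as a stand-in for the unstructured meshes used in the paper (the matrix formulas and mode count here are illustrative assumptions, not taken from the source):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def p1_matrices(n):
    """Assemble 1-D P1 stiffness K and mass M on [0,1] with n interior
    nodes and homogeneous Dirichlet BCs (uniform mesh, h = 1/(n+1))."""
    h = 1.0 / (n + 1)
    K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                 [-1, 0, 1]) / h
    M = sp.diags([np.ones(n - 1), 4.0 * np.ones(n), np.ones(n - 1)],
                 [-1, 0, 1]) * (h / 6.0)
    return K.tocsc(), M.tocsc()

n, n_modes = 199, 8
K, M = p1_matrices(n)
# Generalized eigenproblem K phi = lambda M phi; shift-invert around 0
# targets the smallest eigenvalues.
lam, Phi = eigsh(K, k=n_modes, M=M, sigma=0.0)
idx = np.argsort(lam)
lam, Phi = lam[idx], Phi[:, idx]
# Enforce M-orthonormality explicitly (columns scaled to unit M-norm).
Phi /= np.sqrt(np.einsum("ij,ij->j", Phi, M @ Phi))
print(lam[:3])  # approximates (k*pi)^2: about 9.87, 39.5, 88.8
```

For the 1-D Laplacian the exact Dirichlet eigenvalues are $(k\pi)^2$, which makes the discretization error directly checkable.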
Projection of a wide range of elliptic and parabolic PDEs (including Poisson and heat equations, as well as nonlocal operators of the form $f(\mathcal{L})$) onto this eigenbasis yields diagonal operator representations and analytical or numerically robust forms for solution reconstruction.
2. FEENet Architecture and Learning Workflow
FEENet decouples geometry encoding from neural learning by splitting the workflow into offline and online components:
- Offline eigenbasis computation:
- Mesh the domain $\Omega$ and assemble the FEM stiffness and mass matrices $K$ and $M$.
- Compute the first $M$ FEM eigenpairs $(\lambda_k, \varphi_k)$ via sparse eigensolvers.
- Store eigenfunctions on the mesh; these serve as the fixed “trunk” of the operator.
- Branch network for spectral coefficient regression:
- Inputs: forcing $f$, initial data $u_0$, and/or problem parameters, sampled at $P$ sensor points.
- A fully-connected network, typically with a single hidden layer, maps the $P$ sensor values to the $M$ predicted spectral coefficients $\hat{c}_1, \dots, \hat{c}_M$.
- ReLU activations, Xavier initialization, Adam optimizer; only the branch parameters are trainable (the trunk, i.e., the fixed eigenbasis, is non-trainable).
- Spectral synthesis and time dependence:
- The solution is reconstructed via
$$\hat{u}(x) = \sum_{k=1}^{M} \hat{c}_k \varphi_k(x).$$
For homogeneous heat equations the spectral coefficients decay analytically in time:
$$\hat{u}(x, t) = \sum_{k=1}^{M} \hat{c}_k e^{-\lambda_k t} \varphi_k(x).$$
For inhomogeneous problems, additional precomputed terms are incorporated.
- Training objective:
- Minimize the mean-squared error between predicted and ground-truth solution fields, $\tfrac{1}{N}\sum_{i=1}^{N} \|\hat{u}^{(i)} - u^{(i)}\|^2$, over the training samples.
FEENet is agnostic to the particular choice of operator, enabling learning for both local and nonlocal operator mappings.
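The branch-plus-fixed-trunk workflow above can be sketched as follows. All sizes, the random stand-in eigenbasis, and the helper names (`branch`, `synthesize`) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
P, H, M = 64, 128, 32          # sensors, hidden width, retained modes
N = 500                        # mesh nodes

# Branch net: single hidden ReLU layer mapping P sensor values of the
# input function to M predicted spectral coefficients (Xavier init).
W1 = rng.normal(0, np.sqrt(2.0 / (P + H)), (H, P)); b1 = np.zeros(H)
W2 = rng.normal(0, np.sqrt(2.0 / (H + M)), (M, H)); b2 = np.zeros(M)

def branch(f_sensors):
    h = np.maximum(W1 @ f_sensors + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2                          # predicted c_1..c_M

# Fixed spectral trunk: eigenpairs (lam, Phi) are precomputed offline by
# the FEM eigensolver; random stand-ins with the right shapes here.
lam = np.sort(rng.uniform(1.0, 100.0, M))
Phi = rng.normal(size=(N, M))

def synthesize(c, t=None):
    """u_hat = sum_k c_k phi_k; for the homogeneous heat equation the
    coefficients decay as exp(-lam_k * t)."""
    if t is not None:
        c = c * np.exp(-lam * t)
    return Phi @ c

c_hat = branch(rng.normal(size=P))
u0 = synthesize(c_hat)           # elliptic / initial field, shape (N,)
u1 = synthesize(c_hat, t=0.1)    # heat solution at t = 0.1
u_true = rng.normal(size=N)      # placeholder ground-truth field
loss = np.mean((u0 - u_true) ** 2)   # supervised MSE objective
```

Only `W1, b1, W2, b2` would be updated by the optimizer; `lam` and `Phi` stay frozen, which is what makes the trunk resolution-independent.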
3. Computational Considerations and Resolution Independence
The FEENet framework exhibits several computational advantages:
- Eigenbasis computation: one-time cost per geometry; for 400 modes, wall-clock times were approximately 0.12 min (Square), 1.07 min (Fins), and 6.53 min (Bunny) on representative hardware.
- Training efficiency: Dramatically reduced compared to DeepONet/MIONet. For example, training on the Square (Poisson) required 4 min for FEENet, versus 13 min for DeepONet; the Bunny (inhomogeneous heat) required 144+7 min (FEENet) versus 2,208 min (MIONet).
- Inference and resolution independence: Once the FEM eigenfunctions are available as continuous FEM objects, $\hat{u}$ can be efficiently evaluated at arbitrary points in $\Omega$. Numerical tests showed virtually identical $L^2$ and $H^1$ errors on both training and much finer grids, confirming full mesh agnosticism.
- Interpretability: Each branch output coefficient corresponds directly to a physical eigenmode, aiding analysis and debugging.
- Nonlocal operator extension: FEENet natively handles nonlocal operators due to the diagonal action in the eigenbasis.
This architecture differs fundamentally from approaches such as FEONet or Sparse FEONet, which learn the coefficient map directly on the FEM basis without spectral diagonalization (Ko et al., 2 Jan 2026).
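The diagonal action that makes nonlocal operators tractable, $f(\mathcal{L})u = \sum_k f(\lambda_k)\,(u,\varphi_k)\,\varphi_k$, can be illustrated with the exact Dirichlet-Laplacian eigenpairs on $[0,1]$ standing in for FEENet's FEM eigenbasis; the fractional power $f(\lambda)=\lambda^{1/2}$ is an illustrative choice:

```python
import numpy as np

# Exact Dirichlet Laplacian eigenpairs on [0,1]:
# phi_k = sqrt(2) sin(k pi x), lam_k = (k pi)^2.
x = np.linspace(0.0, 1.0, 1001)
w = np.full_like(x, x[1] - x[0]); w[[0, -1]] *= 0.5  # trapezoid weights
K = 50
ks = np.arange(1, K + 1)
lam = (ks * np.pi) ** 2
Phi = np.sqrt(2.0) * np.sin(np.outer(x, ks * np.pi))  # (Nx, K)

def apply_f_of_L(u, f):
    c = Phi.T @ (w * u)          # L2 projection onto the eigenbasis
    return Phi @ (f(lam) * c)    # diagonal action, then synthesis

u = np.sqrt(2.0) * np.sin(np.pi * x)            # u = phi_1
Lu = apply_f_of_L(u, lambda s: s)               # recovers lam_1 * u
frac = apply_f_of_L(u, lambda s: s ** 0.5)      # fractional power L^{1/2}
```

The same projection-scale-synthesize pattern applies unchanged to any spectrally defined $f(\mathcal{L})$, which is why FEENet handles nonlocal operators at no extra architectural cost.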
4. Benchmark Results and Empirical Performance
FEENet has been systematically benchmarked against DeepONet and MIONet on canonical elliptic and parabolic PDEs across increasingly complex geometries:
| Problem & Geometry | FEENet error ($L^2$ / $H^1$) | Reference error (DeepONet/MIONet) | Training time, min (FEENet + eigen / reference) |
|---|---|---|---|
| Poisson, Square (2D) | | DeepONet | 4.16 (+0.12) / 12.7 |
| Heat (homog.), Fins (2D, nonconvex) | | DeepONet: $1.6$ | 24.8 (+1.07) / 676 |
| Heat (inhomog.), Bunny (3D) | | MIONet | 143.6 (+6.5) / 2,208 |
Across all configurations, FEENet delivers one to two orders of magnitude lower errors, 20–50x shorter training times, and robust performance on highly irregular domains. Increasing the eigenfunction truncation $M$ yields systematic error reduction, with marked $L^2$ and $H^1$ error decreases at higher mode counts. Qualitative assessments reveal that FEENet captures small-scale features even near complex domain boundaries, where competitor methods often manifest large, spurious deviations (Li et al., 31 Jan 2026).
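The reported error decay under increasing truncation has a simple spectral-projection analogue. This toy sketch (the target function and mode counts are illustrative assumptions, not the paper's benchmarks) shows the monotone $L^2$ error reduction as $M$ grows:

```python
import numpy as np

# L2 error of projecting a smooth target onto the first M Dirichlet
# Laplacian eigenfunctions on [0,1]; error shrinks monotonically with M.
x = np.linspace(0.0, 1.0, 2001)
w = np.full_like(x, x[1] - x[0]); w[[0, -1]] *= 0.5  # trapezoid weights
u = x * (1.0 - x) * np.exp(x)     # smooth target, zero on the boundary

def l2_error(M):
    ks = np.arange(1, M + 1)
    Phi = np.sqrt(2.0) * np.sin(np.outer(x, ks * np.pi))
    c = Phi.T @ (w * u)           # spectral coefficients by quadrature
    r = u - Phi @ c               # truncation residual
    return np.sqrt(np.sum(w * r * r))

errs = [l2_error(M) for M in (4, 16, 64)]
```

The decreasing sequence `errs` mirrors the trend FEENet reports as the number of retained eigenmodes grows.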
5. Comparison with FEONet and Related Approaches
FEENet and FEONet share FEM foundations but diverge in representation and training philosophy. FEONet (and Sparse FEONet) learns the parameter-to-coefficient map in the standard FEM basis via operator networks, often without requiring training data by leveraging bilinear form evaluations for loss construction (Ko et al., 2 Jan 2026). Sparse FEONet further exploits mesh-locality to impose graph-sparse neural architectures, yielding significant parameter reductions and mesh-size independent stability.
FEENet, in contrast, projects solutions into an eigenfunction basis intrinsic to the geometry, targeting spectral coefficients via data-driven optimization. Key distinctions:
| Property | FEENet | FEONet/Sparse FEONet |
|---|---|---|
| Representation | FEM eigenfunction (spectral) basis | Standard local FEM basis |
| Training loss | Field-wise supervised loss | Data-free variational residual |
| Network architecture | FC branch, fixed trunk | FC or sparse operator network |
| Interpretability | Physical eigenmode coefficients | FEM node coefficients |
| Resolution independence | Yes (spectral trunk continuous) | No (fixed mesh basis) |
While both approaches provide mathematical expressivity, FEENet exhibits advantages in interpretability, generalization to nonlocal operators, and empirical performance on complex geometries.
6. Significance and Future Directions
FEENet exemplifies a successful fusion of structure-preserving numerical methods (FEM eigensolvers) with neural regression of spectral coefficients, resulting in a scalable, robust, and interpretable neural operator for PDE solution tasks. By decoupling geometry handling from coefficient learning, FEENet achieves mesh-independent inference, efficient training, and superior accuracy on intricate physical domains. Natural extensions include adapting the spectral trunk to time-dependent or parameter-varying geometries, incorporating additional physical constraints into the coefficient network, and extending the framework to nonlinear or higher-order PDEs (Li et al., 31 Jan 2026).
A plausible implication is that the FEENet paradigm may catalyze further development of hybrid-physics neural operators, particularly in scientific applications that demand geometric fidelity and efficient operator generalization.