
FEENet: Finite Element Eigenfunction Network

Updated 7 February 2026
  • Finite Element Eigenfunction Network (FEENet) is a hybrid spectral learning framework that leverages FEM eigenfunction theory to accurately solve PDEs on complex and irregular domains.
  • It decouples geometry encoding via offline eigenbasis computation from neural regression, predicting spectral coefficients to ensure resolution-independent and interpretable solutions.
  • Benchmark tests demonstrate FEENet's superior accuracy and significantly reduced training times compared to DeepONet and MIONet across various PDE scenarios.

The Finite Element Eigenfunction Network (FEENet) is a hybrid spectral learning framework designed for the efficient and accurate solution of partial differential equations (PDEs) on complex or irregular geometries. FEENet leverages the eigenfunction theory of self-adjoint elliptic differential operators in conjunction with the finite element method (FEM) to construct a geometry-adapted spectral basis, facilitating the representation of PDE solutions in terms of spectral coefficients. Neural operator learning is reduced to the prediction of these spectral coefficients, combining the structure-preserving benefits of FEM with the resolution-independence and data efficiency characteristic of neural operators. FEENet achieves superior accuracy and computational efficiency compared to reference architectures such as DeepONet and MIONet, particularly on challenging geometries and for nonlocal operators (Li et al., 31 Jan 2026).

1. Theoretical Foundations: Spectral FEM and Solution Expansion

FEENet is rooted in the spectral decomposition of PDE solution spaces. Given a bounded domain $\Omega \subset \mathbb{R}^d$ with homogeneous Dirichlet boundary conditions, consider a self-adjoint, strongly elliptic operator $\mathcal{L}$ (e.g., the Laplace–Beltrami operator). The associated eigenvalue problem is

$$-\Delta \phi_k = \lambda_k \phi_k \quad \text{in } \Omega, \qquad \phi_k = 0 \quad \text{on } \partial\Omega.$$

The weak formulation seeks $\phi_k \in H_0^1(\Omega)$ such that

$$\int_\Omega \nabla\phi_k \cdot \nabla v\,dx = \lambda_k \int_\Omega \phi_k v\,dx \quad \forall v \in H_0^1(\Omega).$$

Discretization with an FEM basis $\{\psi_i\}_{i=1}^N$ leads to the generalized eigenproblem $K\Phi = M\Phi\Lambda$, where $K$ and $M$ are the stiffness and mass matrices. The eigenfunctions $\{\phi_k\}_{k\geq 1}$ form an $L^2(\Omega)$-orthonormal, geometry-adapted basis. Any $u \in H_0^1(\Omega)$ admits the expansion

$$u(x) = \sum_{k=1}^\infty a_k \phi_k(x), \qquad a_k = \int_\Omega u(x)\,\phi_k(x)\,dx,$$

truncated in practice to the $M$ dominant modes.

Projecting a wide range of elliptic and parabolic PDEs (including the Poisson and heat equations, as well as nonlocal operators $g(\mathcal{L})$) onto this eigenbasis diagonalizes the operators and yields analytical or numerically robust forms for solution reconstruction.
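The offline discretization step can be made concrete with a minimal 1D sketch (names and setup are illustrative, not the paper's implementation): piecewise-linear elements on $(0,1)$ give tridiagonal stiffness and mass matrices, and the generalized eigenproblem $K\Phi = M\Phi\Lambda$ is reduced to a standard symmetric problem via a Cholesky factor of $M$. The paper's setting uses general meshes and sparse eigensolvers; dense numpy suffices here.

```python
import numpy as np

def fem_eigenpairs(n, num_modes):
    # 1D piecewise-linear FEM on (0, 1) with homogeneous Dirichlet BCs
    h = 1.0 / (n + 1)
    I, E = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
    K = (2 * I - E) / h           # stiffness matrix
    Mmat = (4 * I + E) * h / 6    # consistent mass matrix
    # reduce K x = lam M x to a standard symmetric problem via Cholesky of M
    L = np.linalg.cholesky(Mmat)
    Linv = np.linalg.inv(L)
    lam, Y = np.linalg.eigh(Linv @ K @ Linv.T)
    Phi = Linv.T @ Y              # eigenvectors; columns are M-orthonormal
    return lam[:num_modes], Phi[:, :num_modes]

lam, Phi = fem_eigenpairs(n=199, num_modes=4)
print(lam[:2])   # first eigenvalues, close to pi^2 and 4*pi^2
```

For $-u'' = \lambda u$ on $(0,1)$ the exact eigenvalues are $k^2\pi^2$, so the computed $\lambda_k$ should match these up to an $O(h^2)$ discretization error.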

2. FEENet Architecture and Learning Workflow

FEENet decouples geometry encoding from neural learning by splitting the workflow into offline and online components:

  • Offline eigenbasis computation:
    • Mesh $\Omega$ and assemble the FEM stiffness $K$ and mass $M$ matrices for $\mathcal{L}$.
    • Compute the first $M$ FEM eigenpairs $\{(\lambda_k, \phi_k)\}_{k=1}^M$ via sparse eigensolvers.
    • Store eigenfunctions on the mesh; these serve as the fixed “trunk” of the operator.
  • Branch network for spectral coefficient regression:
    • Inputs: forcing $f(x_i)$, initial data $u_0(x_i)$, and parameters $\mu$, sampled at $P$ sensor points.
    • A fully connected network, typically with a single hidden layer of size $M$ (a $[P, M]$ layout), maps $\mathbb{R}^P \to \mathbb{R}^M$ to predict the spectral coordinates $c = (c_1, \ldots, c_M)$.
    • ReLU activations, Xavier initialization, and the Adam optimizer; only the branch parameters are trainable (the trunk, i.e., the fixed basis, is not).
  • Spectral synthesis and time dependence:

    • The solution is reconstructed via

    $$\hat{u}(x) = \sum_{k=1}^M c_k \phi_k(x).$$

    For homogeneous heat equations:

    $$\hat{u}(x,t) = \sum_{k=1}^M c_k(0)\, e^{-D\lambda_k t}\, \phi_k(x).$$

    For inhomogeneous problems, additional precomputed terms are incorporated.

  • Training objective:

    • Minimize the mean-squared $L^2$ error between predicted and ground-truth fields,

    $$L(\theta) = \frac{1}{N}\sum_{i=1}^N \left\| u_i - \hat{u}_i(\cdot;\theta) \right\|_{L^2(\Omega)}^2.$$
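In the discrete setting, the $L^2(\Omega)$ norm in this loss is induced by the mass matrix: for nodal values $e$ of an error field, $\|e\|_{L^2}^2 \approx e^\top M e$. A minimal numpy sketch of this discrete loss (function names are hypothetical; the 1D mass matrix is only a stand-in for the general FEM one):

```python
import numpy as np

def l2_sq(e, Mmat):
    # discrete L^2 norm: ||e||^2 = e^T M e for nodal error values e
    return float(e @ (Mmat @ e))

def mse_l2_loss(U_true, U_pred, Mmat):
    # mean squared L^2 error over a batch of N fields (one per row)
    return np.mean([l2_sq(u - v, Mmat) for u, v in zip(U_true, U_pred)])

# sanity check on (0, 1): the exact value of ||sin(pi x)||_{L^2}^2 is 1/2
n = 199
h = 1.0 / (n + 1)
I, E = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
Mmat = (4 * I + E) * h / 6            # 1D consistent mass matrix
x = np.linspace(0, 1, n + 2)[1:-1]
u = np.sin(np.pi * x)
print(l2_sq(u, Mmat))                 # ≈ 0.5
```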

FEENet is agnostic to the particular choice of $\mathcal{L}$, enabling learning for both local and nonlocal operator mappings.
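The spectral synthesis and time-dependence steps above can be illustrated on a case with a known ground truth. The sketch below builds the 1D eigenbasis offline, projects an initial condition onto it, and applies the diagonal heat propagator $e^{-D\lambda_k t}$; in FEENet the coefficients $c_k(0)$ would come from the branch network rather than a direct projection, and the 1D FEM construction is only an illustrative stand-in.

```python
import numpy as np

def fem_eigenbasis(n, m):
    # offline step: 1D linear FEM on (0, 1), Dirichlet BCs; returns the
    # first m eigenvalues, mass-orthonormal eigenvectors, and mass matrix
    h = 1.0 / (n + 1)
    I, E = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
    K = (2 * I - E) / h
    Mmat = (4 * I + E) * h / 6
    L = np.linalg.cholesky(Mmat)
    Linv = np.linalg.inv(L)
    lam, Y = np.linalg.eigh(Linv @ K @ Linv.T)
    return lam[:m], Linv.T @ Y[:, :m], Mmat

def heat_spectral(u0, t, lam, Phi, Mmat, D=1.0):
    # project u0 onto the eigenbasis, decay each mode, then re-synthesize
    c0 = Phi.T @ (Mmat @ u0)
    return Phi @ (np.exp(-D * lam * t) * c0)

n, m = 199, 20
lam, Phi, Mmat = fem_eigenbasis(n, m)
x = np.linspace(0, 1, n + 2)[1:-1]
u0 = np.sin(np.pi * x)                 # exact first eigenfunction of -d^2/dx^2
u_t = heat_spectral(u0, 0.1, lam, Phi, Mmat)
```

Since $u_0 = \sin(\pi x)$ is (up to discretization error) the first eigenfunction, the result should closely track the analytic solution $e^{-\pi^2 t}\sin(\pi x)$.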

3. Computational Considerations and Resolution Independence

The FEENet framework exhibits several computational advantages:

  • Eigenbasis computation: A one-time cost per geometry; for 400 modes, wall-clock times were approximately 0.12 min (Square), 1.07 min (Fins), and 6.53 min (Bunny) on representative hardware.
  • Training efficiency: Dramatically reduced compared to DeepONet/MIONet. For example, training on the Square (Poisson) required ≈4 min for FEENet versus 13 min for DeepONet; the Bunny (inhomogeneous heat) required 144 + 7 min (FEENet) versus 2,208 min (MIONet).
  • Inference and resolution independence: Once the FEM eigenfunctions $\phi_k(x)$ are known as continuous FEM objects, $\hat{u}(x)$ can be evaluated efficiently at arbitrary points in $\Omega$. Numerical tests showed virtually identical $L^2$ and $H^1$ errors on both the training grid and much finer grids, confirming full mesh agnosticism.
  • Interpretability: Each branch output coefficient $c_k$ corresponds directly to a physical eigenmode, aiding analysis and debugging.
  • Nonlocal operator extension: FEENet natively handles nonlocal operators $g(\mathcal{L})$ thanks to their diagonal action in the eigenbasis.
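Resolution independence can be sketched in 1D: each eigenvector holds nodal values of a continuous piecewise-linear function, so $\hat{u}(x) = \sum_k c_k \phi_k(x)$ can be evaluated at arbitrary off-mesh points by FEM interpolation, here via `np.interp`. The setup below is a hypothetical 1D stand-in for the general FEM evaluation the paper uses.

```python
import numpy as np

# offline basis on a coarse mesh (same 1D FEM construction as before)
n, m = 99, 10
h = 1.0 / (n + 1)
I, E = np.eye(n), np.eye(n, k=1) + np.eye(n, k=-1)
K = (2 * I - E) / h
Mmat = (4 * I + E) * h / 6
L = np.linalg.cholesky(Mmat)
Linv = np.linalg.inv(L)
lam, Y = np.linalg.eigh(Linv @ K @ Linv.T)
Phi = Linv.T @ Y[:, :m]
nodes = np.linspace(0.0, 1.0, n + 2)        # mesh nodes incl. boundary

def eval_offgrid(x_query, coeffs, Phi, nodes):
    # u_hat(x) = sum_k c_k phi_k(x), with each nodal eigenvector extended
    # to a continuous piecewise-linear function (zero on the boundary)
    u_hat = np.zeros_like(x_query)
    for c_k, phi_k in zip(coeffs, Phi.T):
        vals = np.concatenate(([0.0], phi_k, [0.0]))
        u_hat += c_k * np.interp(x_query, nodes, vals)
    return u_hat

u0 = np.sin(np.pi * nodes[1:-1])
c = Phi.T @ (Mmat @ u0)                     # spectral coordinates of u0
x_fine = np.linspace(0.0, 1.0, 1001)        # much finer evaluation grid
u_fine = eval_offgrid(x_fine, c, Phi, nodes)
```

Evaluating on the fine grid should reproduce $\sin(\pi x)$ up to the coarse mesh's interpolation error, mirroring the reported mesh agnosticism.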

This architecture differs fundamentally from approaches such as FEONet or Sparse FEONet, which learn the coefficient map directly on the FEM basis without spectral diagonalization (Ko et al., 2 Jan 2026).

4. Benchmark Results and Empirical Performance

FEENet has been systematically benchmarked against DeepONet and MIONet on canonical elliptic and parabolic PDEs across increasingly complex geometries:

| Problem & Geometry | FEENet error ($L^2$, $H^1$) | Reference error (DeepONet/MIONet) | Training time, FEENet / reference (min) |
|---|---|---|---|
| Poisson, Square (2D) | $5.8\times10^{-4}$, $4.6\times10^{-3}$ | DeepONet: $1.9\times10^{-2}$, $4.2\times10^{-2}$ | 4.16 (+0.12 eigen) / 12.7 |
| Heat (homog.), Fins (2D, nonconvex) | $8.3\times10^{-3}$, $1.1\times10^{-2}$ | DeepONet: $7.2\times10^{-1}$, $1.6$ | 24.8 (+1.07) / 676 |
| Heat (inhomog.), Bunny (3D) | $8.6\times10^{-3}$, $4.0\times10^{-2}$ | MIONet: $1.6\times10^{-1}$, $3.5\times10^{-1}$ | 143.6 (+6.5) / 2,208 |

Across all configurations, FEENet delivers one to two orders of magnitude lower errors, 20–50× shorter training times, and robust performance on highly irregular domains. Increasing the eigenfunction truncation $M$ yields systematic error reduction, with marked $L^2$ and $H^1$ error decreases at higher $M$ (e.g., $M = 800$ modes). Qualitative assessments reveal that FEENet captures small-scale features even near complex domain boundaries, where competitor methods often exhibit large, spurious deviations (Li et al., 31 Jan 2026).

5. Comparison with FEONet and Sparse FEONet

FEENet and FEONet share FEM foundations but diverge in representation and training philosophy. FEONet (and Sparse FEONet) learns the parameter-to-coefficient map in the standard FEM basis via operator networks, often without requiring training data by leveraging bilinear-form evaluations for loss construction (Ko et al., 2 Jan 2026). Sparse FEONet further exploits mesh locality to impose graph-sparse neural architectures, yielding significant parameter reductions and mesh-size-independent stability.

FEENet, in contrast, projects solutions into an eigenfunction basis intrinsic to the geometry, targeting spectral coefficients via data-driven optimization. Key distinctions:

| Property | FEENet | FEONet/Sparse FEONet |
|---|---|---|
| Representation | FEM eigenfunction (spectral) basis | Standard local FEM basis |
| Training loss | Field-wise supervised $L^2$ loss | Data-free variational residual |
| Network architecture | FC branch, fixed trunk | FC or sparse operator network |
| Interpretability | Physical eigenmode coefficients | FEM node coefficients |
| Resolution independence | Yes (continuous spectral trunk) | No (fixed mesh basis) |

While both approaches provide mathematical expressivity, FEENet exhibits advantages in interpretability, generalization to nonlocal operators, and empirical performance on complex geometries.

6. Significance and Future Directions

FEENet exemplifies a successful fusion of structure-preserving numerical methods (FEM eigensolvers) with neural regression of spectral coefficients, resulting in a scalable, robust, and interpretable neural operator for PDE solution tasks. By decoupling geometry handling from coefficient learning, FEENet achieves mesh-independent inference, efficient training, and superior accuracy on intricate physical domains. Natural extensions include adapting the spectral trunk to time-dependent or parameter-varying geometries, incorporating additional physical constraints in the coefficient network, and extending the framework to nonlinear or higher-order PDEs (Li et al., 31 Jan 2026).

A plausible implication is that the FEENet paradigm may catalyze further development of hybrid-physics neural operators, particularly in scientific applications that demand geometric fidelity and efficient operator generalization.
