B-NAVEM: Neural VEM for Polygon Meshes
- B-NAVEM is a neural-based extension of NAVEM that constructs local basis functions with exact inter-element continuity and approximate interior harmonicity.
- It employs a PINN strategy with a boundary-correction operator to achieve precise polynomial traces on mesh boundaries while relaxing interior harmonicity constraints.
- Numerical experiments demonstrate competitive convergence rates for smooth PDE problems despite increased training time and memory compared to traditional P-NAVEM.
B-NAVEM is a globally continuous neural-based extension of the Neural Approximated Virtual Element Method (NAVEM) for the discretization of partial differential equations on polygonal meshes. Distinct from classical VEM and other NAVEM variants, B-NAVEM employs a Physics-Informed Neural Network (PINN) strategy to construct local basis functions that ensure exact continuity across mesh elements, while approximately satisfying harmonicity within each element interior. This architecture makes B-NAVEM suitable for problems requiring both accurate representation of solution spaces and robust inter-element conformity (Berrone et al., 14 Jan 2026).
1. Conceptual Foundations and NAVEM Context
B-NAVEM belongs to the NAVEM family of neural-based discretizations aiming to replicate and enhance the properties of Virtual Element Methods on generic polygonal (and possibly polyhedral) meshes. Traditional NAVEM (also referred to as H-NAVEM) explicitly constructs basis functions as harmonic combinations, enforcing exact harmonicity within each element (Property (i)), with the polynomial edge trace only approximately realized via a loss function (Property (ii)). B-NAVEM inverts this trade-off: it enforces exact continuity and polynomial trace across element boundaries via a boundary-correction operator, while relaxing the interior harmonicity constraint to hold only approximately via a PINN loss, thereby advancing both mesh conformity and solution space expressivity (Berrone et al., 14 Jan 2026).
2. Mathematical Formulation
Given a polygon $E$ with vertices $\mathbf{x}_1,\dots,\mathbf{x}_{N_E}$, the construction proceeds as follows:
- The boundary data interpolant $\psi_{j,E}$ is defined such that $\psi_{j,E} = \varphi_{j,E}$ on $\partial E$ and $\psi_{j,E}(\mathbf{x}_i) = \delta_{ij}$ at the vertices.
- A bubble function $\psi_E^0$, vanishing on $\partial E$, is constructed.
- The boundary-enforcing operator maps a scalar field $N$ to $\psi_E^0\, N + \psi_{j,E}$.
- For any scalar neural network output $N(\mathbf{x})$, the local B-NAVEM basis is defined by
$\varphi_{j,E}^{\mathrm{NAVEM}}(\mathbf{x}) = \psi_E^0(\mathbf{x})\, N(\mathbf{x}) + \psi_{j,E}(\mathbf{x})$
Since $\psi_E^0$ vanishes on $\partial E$, exact edge continuity is achieved by construction. Harmonicity in the interior is enforced through the loss function
$L_{\rm harm}(E) = \lVert \Delta \varphi_{j,E}^{\mathrm{NAVEM}} \rVert_{L^2(E)}$
evaluated over all sampled interior points, driving the basis towards $\Delta \varphi_{j,E}^{\mathrm{NAVEM}} = 0$ within $E$, but not necessarily reaching exactness.
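The boundary-correction construction above can be sketched numerically. The snippet below uses a unit-square element with a polynomial bubble and a bilinear vertex interpolant; these concrete choices (and the stand-in "network" `N`) are illustrative assumptions, not the paper's actual functions. The point it demonstrates is structural: because the bubble vanishes on the boundary, the basis trace there equals the interpolant regardless of the network.

```python
import numpy as np

# Minimal sketch of the B-NAVEM boundary-correction operator on a unit
# square element [0,1]^2. The bubble psi0 and the bilinear boundary
# interpolant psi_j are illustrative choices, not the paper's.

def psi0(x, y):
    # Bubble function: vanishes on the whole boundary of [0,1]^2.
    return x * (1.0 - x) * y * (1.0 - y)

def psi_j(x, y):
    # Boundary-data interpolant for the Lagrange basis of vertex (1,1):
    # equals 1 at (1,1), 0 at the other vertices, linear on each edge.
    return x * y

def basis(x, y, N):
    # B_E[N] = psi0 * N + psi_j: any network output N leaves the
    # boundary trace untouched, so inter-element continuity is exact.
    return psi0(x, y) * N(x, y) + psi_j(x, y)

# A stand-in "network": any smooth scalar field works here.
N = lambda x, y: np.sin(3.0 * x) + y**2

# On the boundary the basis reduces to the interpolant exactly.
xb = np.linspace(0.0, 1.0, 11)
top = basis(xb, np.ones_like(xb), N)       # edge y = 1
assert np.allclose(top, psi_j(xb, np.ones_like(xb)))
```

Only the interior values of the basis depend on the network, which is exactly where the harmonicity loss acts.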
3. Neural Network Architecture and Training Protocol
B-NAVEM employs a multilayer feed-forward neural network, typically of width 40 (for pentagons and hexagons the width is reduced relative to P-NAVEM for memory efficiency). The input encoding consists of the coordinate $\mathbf{x}$, the vertex data, and the basis index $j$. The scalar output $N(\mathbf{x})$ is mapped via the boundary-correction operator. Training follows a PINN paradigm: each polygon is tessellated, and interior points are sampled on each triangle. The loss includes both boundary property terms and the harmonicity residual. Precomputing the interpolation functions and their gradients for efficiency is standard (Berrone et al., 14 Jan 2026).
Optimization typically proceeds with 2,000 Adam steps under a decaying learning rate, followed by up to 10,000 BFGS quasi-Newton iterations. B-NAVEM's training incurs higher computational expense than P-NAVEM's due to the requirement of second derivatives (the Laplacian) in its PINN loss.
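The harmonicity residual $L_{\rm harm}$ driving this training can be sketched as a Monte Carlo estimate over sampled interior points. In the sketch below the Laplacian is approximated by central finite differences rather than the automatic differentiation the paper's PINN loss would use; the test functions and sampling box are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of the squared harmonicity
# residual ||Delta phi||^2_{L^2(E)} over sampled interior points.
# Finite differences stand in for autograd second derivatives.

def laplacian_fd(f, x, y, h=1e-3):
    # Five-point central finite-difference Laplacian (exact for quadratics).
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

def harmonicity_loss(phi, xs, ys, area):
    # Mean squared residual times element area ~ squared L^2 norm.
    res = laplacian_fd(phi, xs, ys)
    return area * np.mean(res**2)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(0.05, 0.95, size=(2, 2000))

harmonic = lambda x, y: x**2 - y**2       # Delta = 0: loss ~ 0
nonharmonic = lambda x, y: x**2 + y**2    # Delta = 4: loss ~ 16

assert harmonicity_loss(harmonic, xs, ys, 1.0) < 1e-6
assert abs(harmonicity_loss(nonharmonic, xs, ys, 1.0) - 16.0) < 1e-3
```

Minimizing this quantity over the network parameters pushes the basis toward harmonicity in the interior without ever enforcing it exactly, which is precisely the relaxation B-NAVEM accepts in exchange for exact boundary traces.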
4. Inter-Element Continuity and Polynomial Trace
B-NAVEM guarantees exact continuity of its basis functions across mesh elements. On each shared edge $e$, the boundary interpolant $\psi_{j,E}$ coincides with the VEM trace of the global Lagrange basis and is unambiguously shared between adjacent polygons:
$\varphi_{j,E}^{\mathrm{NAVEM}}|_e = \psi_{j,E}|_e = \varphi_{j,E}|_e$
No extra constraints, interface degrees of freedom, or post-processing for conformity are necessary. Polynomial reproduction, while present via enforced boundary traces, is not exact for arbitrary polynomials throughout the element, distinguishing B-NAVEM from P-NAVEM (Berrone et al., 14 Jan 2026).
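This conformity-by-construction can be checked on a toy two-element configuration. Below, two square elements share the edge $x = 1$ and carry independent stand-in "networks"; the element shapes, bubbles, and bilinear interpolants are illustrative assumptions. The traces on the shared edge nonetheless coincide exactly, because both bubbles vanish there and both interpolants restrict to the same edge polynomial.

```python
import numpy as np

# Sketch of exact inter-element continuity: E1 = [0,1]^2 and
# E2 = [1,2]x[0,1] share the edge x = 1. Each element has its own
# network, yet the basis traces on the shared edge agree exactly.

bubble1 = lambda x, y: x * (1 - x) * y * (1 - y)        # zero on bdry(E1)
bubble2 = lambda x, y: (x - 1) * (2 - x) * y * (1 - y)  # zero on bdry(E2)
interp1 = lambda x, y: x * y        # Lagrange trace for vertex (1,1) in E1
interp2 = lambda x, y: (2 - x) * y  # Lagrange trace for vertex (1,1) in E2

net1 = lambda x, y: np.cos(x + y)   # independent stand-in "networks"
net2 = lambda x, y: x * y**3 - 2.0

phi1 = lambda x, y: bubble1(x, y) * net1(x, y) + interp1(x, y)
phi2 = lambda x, y: bubble2(x, y) * net2(x, y) + interp2(x, y)

ye = np.linspace(0.0, 1.0, 21)
xe = np.ones_like(ye)               # points on the shared edge x = 1
assert np.allclose(phi1(xe, ye), phi2(xe, ye))  # traces agree exactly
```

No interface penalty or post-processing is involved: the agreement is algebraic, which is the content of the trace identity above.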
5. Computational Performance and Comparison
In numerical experiments on Voronoi and convex-concave quadrilateral mesh families, B-NAVEM demonstrates competitive convergence rates. It matches VEM in both the $L^2$-norm and $H^1$-seminorm errors for smooth diffusion-advection-reaction problems, with error constants close to those of P-NAVEM (P-NAVEM often achieves strictly better constants, especially on concave meshes). Online assembly and solution times are similar to P-NAVEM; however, training time for B-NAVEM is up to eight times longer due to the computational cost of harmonicity enforcement.
Memory usage is higher for B-NAVEM when the number of polygon classes and network size increases, occasionally necessitating network parameter reduction for larger mesh elements (e.g., pentagons/hexagons). A plausible implication is that B-NAVEM’s PINN-based strategy scales less efficiently with growing mesh complexity compared to polynomial-based losses.
6. Advantages, Limitations, and Research Outlook
B-NAVEM’s primary advantage is the robust enforcement of exact inter-element continuity and boundary polynomial trace, achieved with a single neural model per polygon class. Approximating harmonicity throughout the element supports solution regularity and preserves key VEM-like discretization properties. However, the lack of exact polynomial reproduction in the interior may slightly impact convergence constants for certain PDE problems, as seen in direct comparisons with P-NAVEM.
A plausible implication is that B-NAVEM is best suited for scenarios prioritizing mesh conformity and harmonic behavior over interior polynomial exactness. Use cases include domains where Laplacian regularity is essential and highly nonstandard polygonal meshes arise. The demonstrated computational tradeoffs highlight directions for future work in loss-function design and network architecture optimization for even larger mesh families (Berrone et al., 14 Jan 2026).