MAD-BNO: Boundary Neural Operator for Elliptic PDEs
- MAD-BNO is an operator learning framework that fuses analytic boundary data generation with boundary integral methods to recover solutions for linear elliptic PDEs.
- It employs a simple bias-free linear neural architecture to accurately map boundary data, achieving resolution-independence and enhanced training efficiency.
- The approach synthesizes error-free Dirichlet-Neumann pairs, enabling scalable solution recovery for Laplace, Poisson, and Helmholtz equations across complex geometries.
The Mathematical Artificial Data Boundary Neural Operator (MAD-BNO) is an operator learning framework that merges the physics-consistent generation of analytic boundary data through fundamental solutions with boundary-integral-based neural architectures. MAD-BNO is designed to recover the solution to a class of linear partial differential equations (PDEs) given only data sampled on the boundary, with no reliance on measured or simulated interior data. The method leverages the Mathematical Artificial Data (MAD) paradigm to synthesize large sets of error-free Dirichlet-Neumann (D-N) boundary-data pairs, then trains a neural operator to realize the true boundary mapping, enabling subsequent interior solution reconstruction via classical boundary integral equations. This yields a highly efficient and resolution-independent route to operator learning for classical elliptic PDEs (Laplace, Poisson, Helmholtz) in two and three dimensions, supporting arbitrary geometries and boundary conditions (Wu et al., 16 Jan 2026, Wu et al., 9 Jul 2025). Integrations with physics-informed loss terms and low-rank kernel learning further generalize the boundary-to-domain paradigm (Kashi et al., 2024).
1. Mathematical Foundations and Boundary-Only Operator Learning
At the core of MAD-BNO is the recognition that, for many linear elliptic PDEs, the solution inside a domain $\Omega$ is fully determined by values (and derivatives) prescribed on the boundary $\partial\Omega$. For the canonical Dirichlet problem,

$$\Delta u = 0 \ \text{in } \Omega, \qquad u = g \ \text{on } \partial\Omega,$$

one can abstractly define the boundary-to-domain solution operator

$$\mathcal{G}: g \mapsto u,$$

with an analogous formulation for Neumann or mixed boundary conditions (Kashi et al., 2024). The MAD-BNO framework is built to learn approximations to this operator, using exclusively boundary data during training (Wu et al., 16 Jan 2026, Wu et al., 9 Jul 2025).
For domains where the Green's function (fundamental solution) for the PDE is known, the Dirichlet-to-Neumann (DtN) and Neumann-to-Dirichlet (NtD) maps are true linear boundary operators. Once these mappings are learned, the complete boundary data can be assembled, and the interior solution is efficiently reconstructed by the boundary integral representation

$$u(x) = \int_{\partial\Omega}\Big[\Phi(x,y)\,\frac{\partial u}{\partial n}(y) - u(y)\,\frac{\partial \Phi}{\partial n_y}(x,y)\Big]\,ds(y), \qquad x \in \Omega,$$

where $\Phi$ denotes the fundamental solution.
Classic examples include the single- and double-layer potentials for Laplace, Poisson, and Helmholtz equations (Meng et al., 2024).
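As a concrete instance of the boundary operators in play, the Dirichlet-to-Neumann map for the Laplace equation on the unit disk acts diagonally on Fourier modes (a standard result, included here for illustration): the harmonic extension of the boundary mode $g(\theta) = e^{ik\theta}$ is $u(r,\theta) = r^{|k|} e^{ik\theta}$, so

$$\Lambda g = \frac{\partial u}{\partial r}\Big|_{r=1} = |k|\, e^{ik\theta} = |k|\, g.$$

For a fixed geometry the DtN operator $\Lambda$ is thus exactly linear, which is the structural fact the MAD-BNO architecture exploits.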
2. Generation of Mathematical Artificial Data
A cornerstone of MAD-BNO is synthetic, physics-enforced artificial data generation via analytic combinations of fundamental solutions. For a PDE with linear operator $\mathcal{L}$ and known fundamental solution $\Phi$, one forms artificial solutions as

$$u(x) = \sum_{j=1}^{J} c_j\, \Phi(x, y_j),$$

with source points $y_j$ sampled outside $\overline{\Omega}$ and coefficients $(c_1, \dots, c_J)$ chosen on the probability simplex. The Dirichlet and Neumann data are analytically evaluated on the boundary:

$$g(x) = u(x)\big|_{\partial\Omega}, \qquad h(x) = \frac{\partial u}{\partial n}(x)\Big|_{\partial\Omega} = \sum_{j=1}^{J} c_j\, \nabla_x \Phi(x, y_j)\cdot n(x).$$

The method extends analogously to Poisson (via splitting with a volume potential) and Helmholtz equations (using complex Bessel/Hankel kernel combinations and derivative identities).
Because boundary traces of linear combinations of fundamental solutions (with exterior sources) are dense in the space of relevant boundary data, such artificial data span the boundary conditions of interest, providing exact, noise-free operator training data without recourse to costly numerical PDE solvers or experimental measurements (Wu et al., 16 Jan 2026, Wu et al., 9 Jul 2025).
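The construction above can be sketched for the 2D Laplace equation on the unit disk. The helper name `mad_laplace_pairs`, the source radii, and the sample counts below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mad_laplace_pairs(n_bd=128, n_src=20, n_samples=100, seed=0):
    """Sketch of MAD-style data generation for the 2D Laplace equation on
    the unit disk: superpose fundamental solutions
    Phi(x, y) = -log|x - y| / (2*pi) with sources y_j placed outside the
    closed domain, then evaluate the exact Dirichlet trace g and the
    Neumann trace h = grad(u) . n on the boundary."""
    rng = np.random.default_rng(seed)
    t = 2 * np.pi * np.arange(n_bd) / n_bd
    bx = np.stack([np.cos(t), np.sin(t)], axis=1)   # boundary collocation points
    nrm = bx                                        # outward unit normal on the circle

    G_data = np.empty((n_samples, n_bd))
    H_data = np.empty((n_samples, n_bd))
    for i in range(n_samples):
        rs = rng.uniform(1.5, 3.0, n_src)           # source radii outside the disk
        ts = rng.uniform(0.0, 2 * np.pi, n_src)
        src = np.stack([rs * np.cos(ts), rs * np.sin(ts)], axis=1)
        c = rng.dirichlet(np.ones(n_src))           # coefficients on the simplex

        d = bx[:, None, :] - src[None, :, :]        # x - y_j, shape (n_bd, n_src, 2)
        r2 = np.sum(d ** 2, axis=2)
        # Dirichlet trace: u = sum_j c_j * Phi(x, y_j)
        G_data[i] = -(c * 0.5 * np.log(r2)).sum(axis=1) / (2 * np.pi)
        # Neumann trace: grad_x Phi = -(x - y) / (2*pi*|x - y|^2), dotted with n(x)
        H_data[i] = -(c * np.einsum('bsk,bk->bs', d / r2[..., None], nrm)).sum(axis=1) / (2 * np.pi)
    return bx, G_data, H_data

bx, G_data, H_data = mad_laplace_pairs()
```

Because each artificial solution is harmonic inside the disk, the generated Neumann data satisfy the exact compatibility condition $\oint_{\partial\Omega} h\, ds = 0$, which offers a cheap sanity check on the pipeline.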
3. Neural Operator Architectures for Boundary Mapping
MAD-BNO employs a neural operator mapping known boundary data to unknown boundary quantities, e.g. the known Dirichlet and Neumann traces on their respective boundary portions to the unknown complementary traces for Dirichlet/Neumann splits, or specializing further for problems requiring only Dirichlet-to-Neumann or Neumann-to-Dirichlet mapping (Wu et al., 16 Jan 2026). The architecture is notably found to be most effective as a single bias-free linear layer,

$$h = W g, \qquad W \in \mathbb{R}^{N \times N},$$

where $N$ is the number of discretization points on the boundary and $g \in \mathbb{R}^N$ is a vector of sampled boundary data (Wu et al., 16 Jan 2026). Nonlinearities or deep stacking do not improve, and typically degrade, accuracy, given the strict linearity of the true map for fixed domains and PDEs.
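A minimal sketch of such a bias-free linear layer, in the unit-disk setting where the true DtN map is known in closed form ($\cos k\theta \mapsto k \cos k\theta$). The least-squares fit below is a stand-in assumption for the paper's gradient-based training:

```python
import numpy as np

# The true Dirichlet-to-Neumann map on the unit disk sends the Fourier
# mode cos(k*theta) to k*cos(k*theta) (and likewise for sines). A single
# bias-free linear layer h_hat = W g can represent this exactly; here W
# is fitted by least squares from synthetic pairs built out of
# low-order Fourier modes.
rng = np.random.default_rng(1)
n, m, K = 64, 500, 8
theta = 2 * np.pi * np.arange(n) / n

G = np.zeros((m, n))
H = np.zeros((m, n))
for i in range(m):
    a = rng.normal(size=K + 1)
    b = rng.normal(size=K + 1)
    for k in range(K + 1):
        G[i] += a[k] * np.cos(k * theta) + b[k] * np.sin(k * theta)
        H[i] += k * (a[k] * np.cos(k * theta) + b[k] * np.sin(k * theta))

# Fit the bias-free linear layer: minimize ||G W^T - H||_F over W.
W = np.linalg.lstsq(G, H, rcond=None)[0].T

# The learned layer reproduces the DtN action on an unseen combination
# of the trained modes.
g_test = np.cos(3 * theta) + 0.5 * np.sin(5 * theta)
h_true = 3 * np.cos(3 * theta) + 2.5 * np.sin(5 * theta)
err = np.abs(W @ g_test - h_true).max()
```

Since the target map is linear, the single layer recovers it on the trained subspace to numerical precision, illustrating why added depth or nonlinearity buys nothing here.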
For more general or non-elliptic problems, or when learning parameterized families of operators (e.g., varying geometry), higher-capacity networks or kernel factorization approaches (see LP-FNO/MAD-BNO extensions) may be required (Kashi et al., 2024, Meng et al., 2024).
4. Solution Recovery via Boundary Integral Equations
Upon learning the full set of boundary data, the interior solution at any $x \in \Omega$ is reconstructed using the standard boundary integral representation

$$u(x) = \int_{\partial\Omega}\Big[\Phi(x,y)\,h(y) - \frac{\partial \Phi}{\partial n_y}(x,y)\,g(y)\Big]\,ds(y).$$
Numerically, boundary integrals are discretized (e.g., trapezoidal rule), and volume integrals in cases such as Poisson are computed via Gaussian quadrature, with specialized handling of singularities (Wu et al., 16 Jan 2026, Meng et al., 2024).
This approach generalizes seamlessly to Dirichlet, Neumann, and mixed boundary conditions, as well as to three-dimensional settings, provided the required surface or curve quadrature can be carried out given the learned boundary data. Scaling to larger boundary discretizations is feasible and can be accelerated by adopting Fast Multipole Methods (FMM) or hierarchical matrix approximations (Wu et al., 16 Jan 2026).
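The recovery step can be sketched for the 2D Laplace equation on the unit disk, using a manufactured harmonic solution as an assumed test case rather than learned boundary data:

```python
import numpy as np

# Once the Dirichlet data g and Neumann data h are both known on the
# boundary, interior values follow from Green's representation formula,
# discretized here with the trapezoidal rule.
# Illustrative test case (an assumption): u(x1, x2) = x1^2 - x2^2 on the
# unit disk, so on the unit circle g = cos(2t) and h = du/dr|_{r=1} = 2*cos(2t).
n = 256
t = 2 * np.pi * np.arange(n) / n
y = np.stack([np.cos(t), np.sin(t)], axis=1)   # boundary quadrature points
nrm = y                                        # outward unit normal on the circle
ds = 2 * np.pi / n                             # trapezoidal arc-length weight

g = np.cos(2 * t)
h = 2 * np.cos(2 * t)

def interior_value(x):
    """Evaluate u(x) = int [Phi(x,y) h(y) - dPhi/dn_y(x,y) g(y)] ds(y)."""
    d = x - y                                   # x - y, shape (n, 2)
    r2 = np.sum(d ** 2, axis=1)
    Phi = -0.5 * np.log(r2) / (2 * np.pi)       # fundamental solution -log|x-y|/(2*pi)
    dPhi_dn = np.sum(d * nrm, axis=1) / (2 * np.pi * r2)  # grad_y Phi . n(y)
    return float(np.sum(Phi * h - dPhi_dn * g) * ds)

u_num = interior_value(np.array([0.3, 0.2]))   # should match 0.3**2 - 0.2**2
```

For interior points away from the boundary the integrand is smooth and periodic, so the plain trapezoidal rule converges spectrally; only points near $\partial\Omega$ need the singularity handling mentioned above.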
5. Training Procedures and Quantitative Benchmarks
MAD-BNO training relies on large datasets of analytically generated Dirichlet-Neumann boundary pairs. For each PDE type and geometry:
- A large set of random boundary configurations is synthesized using the MAD framework, each sampled at a fixed number of boundary collocation points (on a curve in 2D, a surface in 3D).
- The loss function is a mean-squared error on the predicted boundary data,

$$\mathcal{L} = \frac{1}{M}\sum_{i=1}^{M}\Big(\|\hat{g}_i - g_i\|_2^2 + \|\hat{h}_i - h_i\|_2^2\Big),$$

with equal weighting of the Dirichlet and Neumann predictions.
- Optimization is performed with the Adam optimizer at a fixed learning rate and batch size for a fixed number of epochs (Wu et al., 16 Jan 2026).
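A minimal sketch of this training loop, substituting (as assumptions) plain full-batch gradient descent for Adam and a random linear target map for the analytic boundary operator:

```python
import numpy as np

# Minimal sketch of the training objective: mean-squared error on the
# predicted boundary data for the bias-free linear layer h_hat = W g.
rng = np.random.default_rng(0)
n, m = 32, 256
A = rng.normal(size=(n, n)) / np.sqrt(n)   # hypothetical "true" boundary operator
G = rng.normal(size=(m, n))                # synthetic input boundary data
H = G @ A.T                                # exact, noise-free targets

W = np.zeros((n, n))
lr = 0.5
for step in range(2000):
    R = G @ W.T - H                        # residuals h_hat_i - h_i, shape (m, n)
    loss = 0.5 * np.mean(np.sum(R ** 2, axis=1))
    W -= lr * (R.T @ G) / m                # gradient of the MSE w.r.t. W
```

Because the objective is a convex quadratic in $W$ and the targets are noise-free, the loop drives the loss to numerical zero and recovers the target map exactly, mirroring why MAD-BNO training on exact artificial data is so efficient.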
MAD-BNO demonstrates superior training efficiency compared to interior-sampled DeepONet or PINN-based (physics-informed) baselines. Representative 2D Laplace results:
- Training time: MAD-BNO (2.61 h), MAD-DeepONet (14.93 h), PI-DeepONet (31.09 h)
- Final relative error: MAD-BNO attains the lowest error of the three, outperforming both MAD-DeepONet and PI-DeepONet (Wu et al., 16 Jan 2026)
- For Poisson and Helmholtz, similar or better accuracy is achieved; for high-frequency Helmholtz problems (large wavenumber), competing approaches fail, while MAD-BNO maintains low errors (Wu et al., 16 Jan 2026).
6. Resolution-Independence, Generalization, and Scalability
A key property inherited from the operator-theoretic formulation is strict resolution-independence: MAD-BNO generalizes accurately to finer boundary discretizations and larger test domains without retraining. Interior (domain-wide) solution inference is always performed via the integral representation, which automatically resolves at any chosen point or mesh.
The boundary-only approach is naturally extensible to arbitrary two- and three-dimensional geometries, as only the boundary needs to be discretized and processed. In three dimensions, the architecture scales to large boundary point counts, and further efficiencies can be gained with FMM acceleration of the integral recovery (Wu et al., 16 Jan 2026, Wu et al., 9 Jul 2025).
7. Connections and Extensions: From MAD-BNO to General Operator Networks
The MAD-BNO architecture has close conceptual ties to a spectrum of operator learning frameworks:
- MAD-BNO vs. LP-FNO: LP-FNO learns a (low-rank) data-driven boundary-to-domain map using outer product factorization of boundary embeddings, viewed as learning a parametric version of the Green's function. MAD-BNO, in contrast, fixes the kernel to the analytic Green's function, learning only the boundary operators, thus recovering the true operator structure directly (Kashi et al., 2024).
- Boundary-Integral Neural Operators: Architectures such as BI-DeepONet and BI-TDONet embed the boundary integral equation directly, targeting the map for variable geometry, with SVD-inspired modular factorization offering both efficiency and generalization to new domains (Meng et al., 2024).
- Physics-Informed or PDE-Constrained Losses: MAD-BNO can incorporate PDE residuals into the loss to enforce additional physical constraints, achieving close connections with classical operator-theoretic and PDE-constrained learning (Kashi et al., 2024).
A plausible implication is that future neural operator frameworks can hybridize analytic kernel factorization, boundary-only data, and data-driven kernel learning. This would extend MAD-BNO to nonlinear or non-elliptic regimes, or to settings where the kernel is not analytically available (e.g., Stokes or Maxwell systems), with learned modal expansions embedding additional physical or geometric priors.
References
- "Operator learning on domain boundary through combining fundamental solution-based artificial data and boundary integral techniques" (Wu et al., 16 Jan 2026)
- "Mathematical artificial data for operator learning" (Wu et al., 9 Jul 2025)
- "Learning the boundary-to-domain mapping using Lifting Product Fourier Neural Operators for partial differential equations" (Kashi et al., 2024)
- "Solving Partial Differential Equations in Different Domains by Operator Learning method Based on Boundary Integral Equations" (Meng et al., 2024)