Geometry-Informed Neural Operator (GINO)
- Geometry-informed neural operators (GINO) are deep learning frameworks that combine explicit geometric parameterizations with neural networks to learn solution operators for PDEs on variable domains.
- They employ diverse geometry encodings—including NURBS, point clouds, and graph-based methods—integrated via attention and concatenation techniques for robust modeling.
- GINO models achieve significant speedups and accuracy improvements over traditional solvers, making them effective for forward simulation, inverse design, and surrogate modeling.
A geometry-informed neural operator (GINO) is a class of operator learning methods that combine deep neural networks with explicit geometric parameterization or encoding, enabling the approximation of solution operators for partial differential equations (PDEs) defined on domains with arbitrary or variable geometry. GINO models, including physics-informed variants, have proven effective in forward simulation, inverse design, and surrogate modeling across a range of engineering and scientific domains where geometric flexibility and mesh independence are critical.
1. Mathematical Formulation and Problem Setting
Geometry-informed neural operators address problems where the domain geometry is either variable or plays a central role in the governing physics. The typical setting introduces a parameterization of the domain, such as a shape encoded by a vector of geometric parameters, and defines the solution operator as a mapping from geometry (and possibly additional PDE or boundary parameters) to the solution field. For example, in (Nair et al., 2024), the governing equation is the 2D exterior Helmholtz equation for acoustic scattering, $\Delta u + k^2 u = 0$, with Neumann and impedance boundary conditions imposed on the scatterer boundary and the outer truncation boundary, respectively. The solution operator is the mapping $\mathcal{G}: \theta \mapsto u$, where $\theta$ encodes the geometry of the scatterer via NURBS (non-uniform rational B-spline) parameters. GINO aims to learn an approximation $\mathcal{G}_\phi \approx \mathcal{G}$ that holds for arbitrary shapes.
More generally, for PDEs posed on domains $\Omega_g$ parameterized by a geometry code $g$, and with (possibly variable) PDE parameters $\mu$, GINO models learn the mapping $(g, \mu) \mapsto u$, with $u$ satisfying the appropriate boundary value problem on $\Omega_g$ (Zhong et al., 2024).
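As a concrete toy instance of this abstract setting (not drawn from any of the cited papers), consider a Poisson problem on a disk whose radius serves as a one-dimensional geometry code. The sketch below defines the exact geometry-to-solution operator and generates the kind of (geometry code, field samples) training pairs an operator learner would consume; all names are illustrative.

```python
import numpy as np

# Toy GINO problem setting: the geometry code is a single scalar R (radius of
# a disk Omega_R), and the solution operator maps R to the field u solving
# -Laplace(u) = 1 on Omega_R with u = 0 on the boundary. The exact radial
# solution is u(r) = (R^2 - r^2) / 4, and u = 0 outside the disk.

def solution_operator(R, query_pts):
    """Map a geometry code R and query points (N, 2) to solution values (N,)."""
    r2 = np.sum(query_pts**2, axis=1)
    return np.clip(R**2 - r2, 0.0, None) / 4.0

# Training data for an operator learner: pairs of geometry codes and fields
# sampled at shared query locations.
rng = np.random.default_rng(0)
radii = rng.uniform(0.5, 1.5, size=16)                # sampled geometry codes
pts = rng.uniform(-1.5, 1.5, size=(64, 2))            # shared query points
fields = np.stack([solution_operator(R, pts) for R in radii])  # (16, 64)
```

A learned operator $\mathcal{G}_\phi$ would be fit to reproduce `fields` from `radii` and `pts`, then queried on unseen radii.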
2. Geometry Encoding and Parameterization
GINO frameworks implement geometry encoding in various ways, adapted to the type and complexity of the geometry:
- NURBS-based parametric encoding: For 2D boundaries, a closed curve (e.g., the scatterer boundary) is represented as a NURBS curve with a fixed degree and a compact set of control points (e.g., 8 free points), resulting in a low-dimensional geometry code (Nair et al., 2024).
- Point cloud representations: For arbitrary or complex shapes (2D and 3D), the boundary or surface is sampled as a point cloud. Learning-based encoders process raw coordinates using locality-preserving sampling and grouping (e.g., PointNet++) with positional encodings (e.g., NeRF) and aggregate geometric information using attention mechanisms (Liu et al., 28 Apr 2025).
- Graph-based encoders: Inputs can be irregular point clouds with associated geometric features (coordinates, signed distance, curvature, normals). Graph message passing or graph-integral layers exploit adjacency built from spatial proximity (Li et al., 2023, Naghavi et al., 10 Jun 2025).
- Spectral and intrinsic geometry: On manifolds, encodings use local curvature, metric tensors, geodesic distances, or the Laplace–Beltrami eigenbasis (Quackenbush et al., 2024, Tang et al., 18 Dec 2025).
- Statistical descriptors: Local mesh statistics, such as neighbor distances, covariance of local neighborhoods, and eigenvalues of local patches, are aggregated for robust geometry features, notably in geometry-aware transformer architectures (Wen et al., 24 May 2025).
Geometry encodings are fused into the operator model via concatenation, attention, or more sophisticated compositional architectures.
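To make the statistical-descriptor idea concrete, the following minimal sketch (an illustration in the spirit of the local mesh statistics described above, not code from the cited papers) computes, for each point of a point cloud, the mean neighbor distance and the eigenvalues of the local neighborhood covariance, a rough proxy for local scale and anisotropy.

```python
import numpy as np

def local_geometry_features(points, k=8):
    """Illustrative statistical geometry descriptor: for each point, aggregate
    the mean distance to its k nearest neighbors and the (descending) eigenvalues
    of the local neighborhood covariance matrix."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    feats = []
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]      # k nearest neighbors (excl. self)
        nbrs = points[idx]
        mean_dist = d[i, idx].mean()
        evals = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
        feats.append(np.concatenate([[mean_dist], evals]))
    return np.asarray(feats)                  # (n, 1 + dim)

# Example: features for a point cloud sampled on the unit circle.
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
ring = np.stack([np.cos(t), np.sin(t)], axis=1)
feats = local_geometry_features(ring)         # (32, 3) in 2D
```

Descriptors like these are then concatenated to raw coordinates or fed to attention layers, as described above.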
3. Architecture of Geometry-Informed Neural Operators
Several architectures instantiate the GINO paradigm:
- Physics-informed DeepONet with geometry parameterization: Separate subnetworks process the geometry code (branch net) and field coordinates (trunk net). The outputs are combined via an inner-product expansion, $u(x; g) \approx \sum_{k=1}^{p} b_k(g)\, t_k(x)$, where the branch outputs $b_k(g)$ encode geometry and the trunk outputs $t_k(x)$ encode spatial location (Nair et al., 2024).
- Attention-based models: Transformer layers encode sets of surface or domain points, integrating local and global geometry information through self-attention and cross-attention mechanisms. Geometry tokens are dynamically fused with query points in the solution decoder (Liu et al., 28 Apr 2025, Chen et al., 12 Feb 2026).
- Graph-augmented operators: Graph neural operator blocks operate directly on irregular meshes or point sets, transforming inputs to a latent regular grid where global operators (e.g., Fourier Neural Operator (FNO) layers) are applied (Li et al., 2023, Naghavi et al., 10 Jun 2025).
- Multiscale architectures: Gaussian-mollified kernel splitting (inspired by Ewald summation) decomposes kernel operators into long-range (Fourier-convolutive) and short-range (Taylor or MLP-based) contributions, ensuring accurate handling of geometric singularities and maintaining low complexity (Han et al., 2 Feb 2026).
- Spectral-manifold architectures: Operator action is realized in the Laplace–Beltrami eigenbasis, with pole–residue decomposition to capture general, decaying, or non-periodic dynamics on arbitrary Riemannian manifolds. Basis coefficients are functions of geometric potentials (distance to boundary, curvature) (Tang et al., 18 Dec 2025).
The selection of geometry encoding module (NURBS branch, point cloud transformer, graph kernel layer, Laplace-based spectral module) is dictated by the geometric complexity and available domain representation.
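The branch/trunk inner-product structure of the DeepONet variant above can be sketched in a few lines. This is a shape-level illustration with untrained random-weight networks standing in for the branch and trunk subnetworks; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP with tanh activations, a stand-in for the (trained)
    branch/trunk subnetworks; this sketch shows shapes only, no training."""
    Ws = [rng.standard_normal((m, n)) / np.sqrt(m)
          for m, n in zip(sizes, sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

p = 32                            # number of basis terms in the expansion
branch = mlp([8, 64, p])          # geometry code g in R^8 -> coefficients b_k(g)
trunk = mlp([2, 64, p])           # spatial location x in R^2 -> basis t_k(x)

def deeponet(g, x):
    """u(x; g) ~ sum_k b_k(g) t_k(x): the DeepONet inner-product expansion."""
    return branch(g) @ trunk(x).T  # (n_geom, n_pts)

g = rng.standard_normal((4, 8))    # batch of 4 geometry codes
x = rng.standard_normal((100, 2))  # 100 query points
u = deeponet(g, x)                 # (4, 100) predicted field values
```

Because the trunk net takes raw coordinates, the resulting model can be queried at arbitrary locations, which is the source of the mesh independence discussed below.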
4. Training Strategies and Physics-Informed Losses
GINO models are typically trained with a combination of physics-based and data-driven objectives:
- Physics-informed losses: Enforce the governing PDE and boundary conditions at collocation points using automatic differentiation for interior () and boundary residuals (), enabling simulation-free training (Nair et al., 2024, Zhong et al., 2024, Sarkar et al., 13 Aug 2025).
- Supervised data-driven loss: When labeled simulation data is available, train on the mean squared or relative error between predicted and ground truth fields over query points (Li et al., 2023, Han et al., 2 Feb 2026).
- Domain decomposition: Local neural operators are trained on small subdomains (e.g., simple random polygons), and a Schwarz-style iterative "Schwarz Neural Inference" method is used to recover global solutions for arbitrary geometries, yielding significant gains in data efficiency and geometric generalization (Huang et al., 1 Apr 2025).
- Statistical and data augmentation: Random rotations, scalings, and local pre- and post-processing via spatial symmetries improve generalization across unseen geometries.
- Self-supervised physics-imposed differentiation: Especially for problems involving moving boundaries or complex domain parameterizations, stochastic projection-based gradient estimates supplant automatic differentiation to reduce computational cost and expand activation function choice (Sarkar et al., 13 Aug 2025).
Hyperparameters (learning rate, batch sizes, collocation point densities, neighborhood radii, and architecture width/depth) are problem-specific and critical for model accuracy and efficiency.
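A minimal sketch of a physics-informed loss is shown below for a 1D Poisson problem $u''(x) = f(x)$ on $(0,1)$ with $u(0)=u(1)=0$. The cited works use automatic differentiation for the interior residual; here central finite differences are substituted purely to keep the sketch dependency-free, and `model` is any callable candidate solution.

```python
import numpy as np

def physics_loss(model, f, n_colloc=64, h=1e-4):
    """Interior PDE residual + boundary residual for u'' = f on (0, 1) with
    homogeneous Dirichlet conditions. Central finite differences stand in for
    the automatic differentiation used in the cited papers."""
    x = np.linspace(0.0, 1.0, n_colloc)[1:-1]          # interior collocation pts
    u, up, um = model(x), model(x + h), model(x - h)
    residual = (up - 2 * u + um) / h**2 - f(x)          # interior residual
    bc = np.array([model(np.array([0.0]))[0],           # boundary residuals
                   model(np.array([1.0]))[0]])
    return np.mean(residual**2) + np.mean(bc**2)

# Sanity check: u = sin(pi x) exactly solves u'' = -pi^2 sin(pi x), so its
# physics loss is (numerically) near zero, while a wrong candidate is penalized.
exact = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
```

In simulation-free training, such a loss is minimized over the operator's parameters at sampled geometries and collocation points, with the supervised data loss added whenever labeled fields are available.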
5. Computational Performance and Generalization
A recurring theme in recent GINO research is demonstration of superior performance relative to both classical solvers and geometry-agnostic neural operators:
- Speed: GINO achieves substantial speedups over FEM-based acoustic solvers for 2D scattering (Nair et al., 2024), and large speedups for 3D aerodynamics compared to GPU-based CFD setups (Li et al., 2023).
- Accuracy and generalization: On out-of-sample shapes:
  - Low test error for surface pressure prediction on cars (Li et al., 2023).
- Mean error for acoustic scattering: $0.078$ on circular, $0.147$ on arbitrary NURBS shapes (Nair et al., 2024).
- Across PDE benchmarks, geometry-aware operator transformers outperform FNO, Geo-FNO, and DeepONet on arbitrary domains (Liu et al., 28 Apr 2025, Wen et al., 24 May 2025).
- Discretization convergence: Many GINO models possess discretization invariance, such that trained operators yield physically consistent results on arbitrary discretizations or mesh refinements—enabling zero-shot super-resolution (Li et al., 2023, Naghavi et al., 10 Jun 2025).
- Robustness: Models are robust to geometric noise (perturbed input coordinates) and maintain low error under mesh refinement (Naghavi et al., 10 Jun 2025). Geometry parameterization via NURBS or point cloud ensures invariance to point order and sampling density (Nair et al., 2024, Liu et al., 28 Apr 2025).
- Mesh independence and scalability: Architectures such as geometry-aware operator transformers (GAOT) combine efficient multiscale graph encoding, robust geometry embeddings, and transformer processing for both accuracy and throughput at large scales (Wen et al., 24 May 2025). Training and inference remain tractable at large point counts, and GAOT attains state-of-the-art performance on datasets with $0.5$ million surface points.
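The discretization-invariance property above can be illustrated with a toy check: an operator whose output is a coordinate-based function defines a continuous field, so the same trained model can be queried on any mesh without retraining. Below, an analytic function stands in for a trained operator's output.

```python
import numpy as np

# Toy discretization-invariance check: a coordinate-queried "operator output"
# is a continuous field, so evaluations on a coarse mesh and on a zero-shot
# refined mesh agree wherever the meshes coincide.
u = lambda x: np.sin(2 * np.pi * x) * np.exp(-x)   # stand-in continuous output

coarse = np.linspace(0.0, 1.0, 17)    # 17-point mesh
fine = np.linspace(0.0, 1.0, 257)     # 257-point mesh (16x refinement)

# Every 16th fine-mesh node coincides with a coarse-mesh node, and the field
# values there match: no interpolation or retraining of the model is needed.
assert np.allclose(u(coarse), u(fine)[::16])
```

Discretization-convergent architectures (e.g., the graph-integral/FNO stack of Li et al., 2023) are designed so the learned operator behaves this way in the refinement limit.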
6. Extensions and Applications
GINO has demonstrated strong performance and adaptability in multiple contexts:
- Inverse design and shape optimization: Once trained, GINO can serve as a fast, differentiable surrogate in outer optimization loops for geometry-aware control and inverse problems (Nair et al., 2024, Chen et al., 12 Feb 2026).
- Multi-physics and parameterized operator learning: Branch/trunk nets are extendable to handle multi-physics problems (e.g., thermo-mechanical coupling) by concatenating extra input parameters (Liu et al., 28 Apr 2025).
- 3D and non-Euclidean domains: Spectral-geometric formulations and Laplace–Beltrami-based architectures extend to arbitrary Riemannian manifolds, non-Euclidean surfaces, and high-genus domains (Quackenbush et al., 2024, Tang et al., 18 Dec 2025).
- Medical applications: Real-time, geometry-informed neural operators for cardiac activation time prediction enable interactive planning in cardiac resynchronization therapy, with models exhibiting mesh-independence and clinical workflow compatibility (Naghavi et al., 10 Jun 2025).
- CFD, solid mechanics, elasticity, and more: Geometry-agnostic point-cloud and transformer-based GINO variants serve as surrogates for a variety of operators across 2D and 3D engineering applications, including elasticity, Poisson’s equation, time-dependent/conductive flows, and plasticity (Liu et al., 28 Apr 2025, Li et al., 2023, Wen et al., 24 May 2025).
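The inverse-design use case above can be sketched with the disk toy problem: a trained forward surrogate is treated as a cheap differentiable map from geometry code to a quantity of interest, and gradient descent recovers the code matching a design target. The surrogate here is an analytic stand-in, and the finite-difference gradient is used only to keep the sketch framework-free; in practice the surrogate's own autodiff gradients are used.

```python
import numpy as np

# Hypothetical inverse-design loop: u(R) = R^2 / 4 (the center value of the
# disk Poisson problem) stands in for a trained GINO forward pass. We recover
# the geometry code R whose predicted field value matches a target.
surrogate = lambda R: R**2 / 4.0   # stand-in for a trained, differentiable GINO
target = 0.16                      # desired field value at the disk center
obj = lambda r: (surrogate(r) - target)**2

R, lr, eps = 2.0, 0.5, 1e-6        # initial geometry code, step size, FD step
for _ in range(200):
    grad = (obj(R + eps) - obj(R - eps)) / (2 * eps)  # design-objective gradient
    R -= lr * grad
# converges to R = 0.8, since 0.8^2 / 4 = 0.16
```

Replacing the scalar code with a NURBS control-point vector and the objective with, e.g., drag or scattered pressure yields the shape-optimization loops described in (Nair et al., 2024, Chen et al., 12 Feb 2026).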
7. Limitations and Future Directions
Current GINO models display several areas for further advancement:
- Sharp features and local errors: Undersampling near sharp corners or high-curvature boundaries can degrade local accuracy. Adaptive sampling strategies and clustering are proposed remedies (Nair et al., 2024, Liu et al., 28 Apr 2025).
- Computational scaling: Cross-attention and message-passing costs can grow rapidly with point count; hybrid attention, dynamic sparsification, or hierarchical pooling may alleviate bottlenecks (Liu et al., 28 Apr 2025, Wen et al., 24 May 2025).
- Time-dependence: While most architectures focus on static PDEs, time-dependent models require extensions with time-marching, auto-regressive, or causal-attention layers (Sarkar et al., 13 Aug 2025, Ramezankhani et al., 16 Jun 2025).
- Generative inverse design: Coupling forward GINO surrogates with diffusion models, variational inference, or Bayesian optimization for full inverse design and uncertainty quantification remains a promising direction (Liu et al., 28 Apr 2025).
- Physical inductive bias: Further encoding of strong physics, exploiting boundary-integral structures, or combining classical numerical approaches with operator learning may enhance both interpretability and generalization (Han et al., 2 Feb 2026, Zhong et al., 2024).
- Unified geometric frameworks: Synthesizing boundary parameterizations, point clouds, SDFs, and intrinsic representations may yield universal GINO models with enhanced expressivity and data efficiency.
Selected references
- Physics and geometric-informed DeepONet for acoustic scattering (Nair et al., 2024).
- Geometry-Informed Neural Operator Transformer (Liu et al., 28 Apr 2025).
- GINO for large-scale 3D PDEs using graph-integral layers and discretization-convergent FNOs (Li et al., 2023).
- Physics-Informed Geometry-Aware Neural Operator (Zhong et al., 2024).
- Kernel integral perspective and multiscale point cloud neural operators (Han et al., 2 Feb 2026).
- Operator learning with domain decomposition and Schwarz neural inference (Huang et al., 1 Apr 2025).
- Spectro-spatial (πG-Sp²GNO) and Laplace–Beltrami (GLNO) operator learning on arbitrary geometry (Sarkar et al., 13 Aug 2025, Tang et al., 18 Dec 2025).
- GINO applications in cardiac therapy planning (Naghavi et al., 10 Jun 2025).
- Geometry-aware operator transformer (GAOT) for efficiency/accuracy on arbitrary domains (Wen et al., 24 May 2025).
- Arbitrary geometry-encoded transformer (ArGEnT) operator learning (Chen et al., 12 Feb 2026).