
Geometry-Conditioned Learning

Updated 25 January 2026
  • Geometry-conditioned learning is a paradigm that integrates geometric optimization, regularization, and input encoding to adaptively learn metrics, curvatures, and latent shapes.
  • It unifies manifold, geometric deep, operator, and graph learning approaches by explicitly embedding geometric structure into model training and architecture design.
  • This method enhances model expressiveness, generalization, and physical plausibility while reducing overfitting and data requirements compared to fixed-geometry approaches.

Geometry-conditioned learning is a paradigm wherein models incorporate geometric structure—either by explicit geometric optimization, geometric regularization, or geometric input encoding—into the learning process. Unlike traditional approaches that operate within fixed geometry (often Euclidean), geometry-conditioned learning treats the geometric structure of the model’s space, data, or operators as a primary object of optimization or adaptation. This enables dynamic and adaptive modeling capacities, provides strong inductive priors for structure- or invariance-aware domains, and elevates geometry from “background” context to explicit variable or regularizer. The concept unifies frameworks spanning manifold learning, geometric deep learning, PDE operator learning, and geometry-based regularization, under the central principle that geometric structures—metrics, curvatures, topologies, or latent shapes—should be learned or controlled based on data.

1. Metric-based Geometry Optimization of Latent Spaces

In the framework introduced by “Learning Geometry: A Framework for Building Adaptive Manifold Models through Metric Optimization” (Zhang, 30 Oct 2025), geometry-conditioned learning is formalized as the joint optimization of a Riemannian metric tensor field g on a manifold M of fixed topology. The model’s latent or parameter space is treated as a differentiable manifold (M, g), with the metric g being the chief subject of optimization.

The learning objective minimizes a composite variational loss,

L(g) = L_{\rm data}(g) + \lambda\, L_{\rm geometry}(g),

where L_{\rm data} ensures data fidelity and L_{\rm geometry} penalizes geometric complexity via integrated curvature and volume control. L_{\rm geometry} comprises the total scalar curvature term \int_M |R(g)|^p \, dV_g, a metric-smoothness penalty, and a volume-deviation penalty. This construction encourages the learned geometry to remain simple while flexibly adapting to the data distribution, preventing overfitting and capturing intrinsic nonlinear, multi-scale geometry unattainable with fixed-metric spaces.

Practically, the continuous metric field is discretized via a triangular mesh, with the metric parameterized by edge-lengths subject to triangle inequalities. Discrete geometric quantities, including vertex-wise angle defects (for curvature) and triangle areas, are computed, and the variational loss is optimized using automatic differentiation and constrained gradient descent.
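The discrete pipeline above can be sketched in a few lines. This is an illustrative reconstruction using the standard angle-defect formula for curvature from edge lengths; the `geometry_loss` weighting and its parameters are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

def triangle_angles(a, b, c):
    """Interior angles of a triangle with edge lengths a, b, c (law of
    cosines); the returned angles are opposite a, b, c respectively."""
    alpha = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))
    beta = np.arccos((a**2 + c**2 - b**2) / (2 * a * c))
    return alpha, beta, np.pi - alpha - beta

def angle_defect(vertex_angles):
    """Discrete Gaussian curvature at an interior vertex: 2*pi minus the
    sum of the triangle angles meeting at that vertex."""
    return 2 * np.pi - sum(vertex_angles)

def geometry_loss(defects, areas, lam=0.1, p=2):
    """Illustrative discrete analogue of lam * int |R|^p dV: an
    area-weighted curvature penalty (the weighting is an assumption)."""
    return lam * sum(abs(d) ** p * a for d, a in zip(defects, areas))

# A fan of six equilateral triangles around a vertex is intrinsically flat;
# shortening the outer edges shrinks the tip angles and produces positive
# curvature (a cone point).
flat = angle_defect([triangle_angles(1.0, 1.0, 1.0)[0]] * 6)
cone = angle_defect([triangle_angles(0.8, 1.0, 1.0)[0]] * 6)
```

In an autodiff framework, `geometry_loss` would be added to the data term and minimized over the edge lengths, subject to triangle-inequality constraints.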

This geometry-conditioning approach markedly increases model expressive power. Nonuniform adaptive curvature enables the model manifold to “stretch” or “flatten” in response to data density, capturing sophisticated structures such as loops or clusters that fixed-geometry models cannot represent. The paper draws a direct analogy to the Einstein–Hilbert action in general relativity, interpreting the data-driven geometric regularizer as a statistical analog of spacetime curvature minimization.

2. Geometry-Aware Deep Architectures and Operator Learning

Geometry-conditioned learning extends to operator learning, particularly for the recovery of integral transformations with nontrivial geometry such as double fibration transforms. “Rates and architectures for learning geometrically non-trivial operators” (Roddenberry et al., 10 Dec 2025) proves that compactly supported operators satisfying the Bolker condition can be learned with superalgebraic error decay—thus overcoming the curse of dimensionality—when the network parameterization explicitly encodes geometry.

The key architectural innovation is the “smoothed level-set” kernel,

L^\lambda u(y) = w(y) \, \frac{\int_X e^{-\lambda |f(y, x)|^2} u(x) \, dx}{\int_X e^{-\lambda |f(y, x)|^2} \, dx},

where the function f(y, x) encodes the geometric locus (level set) of integration. This kernel can be factorized in a cross-attention form, f(y, x) = \langle \psi_Y(y), \psi_X(x) \rangle + b(y), enabling learnable query/key architectures that explicitly align the network’s attention weights with the integration manifold structure. The result is a provably universal and stable parameterization: neural networks can approximate arbitrary smooth level sets, capturing geometric structure efficiently.
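A direct Riemann-sum discretization of this kernel shows how the Gaussian factor concentrates the integration on the level set f(y, x) = 0. The toy locus f(y, x) = x − y and the grid here are our assumptions for illustration, not from the paper.

```python
import numpy as np

def smoothed_level_set_kernel(u, x_grid, y_grid, f, lam, w=None):
    """Riemann-sum sketch of
    L^lam u(y) = w(y) * int exp(-lam|f(y,x)|^2) u(x) dx / int exp(-lam|f(y,x)|^2) dx.
    The exponential concentrates mass on the level set f(y, x) = 0."""
    F = f(y_grid[:, None], x_grid[None, :])   # (n_y, n_x) locus values
    Kmat = np.exp(-lam * np.abs(F) ** 2)
    out = (Kmat @ u) / Kmat.sum(axis=1)       # grid spacing cancels in the ratio
    return out if w is None else w(y_grid) * out

# Toy check: with f(y, x) = x - y the level set is the single point x = y,
# so the kernel returns a slightly smoothed copy of u.
x = np.linspace(0.0, 1.0, 200)
u = np.sin(2 * np.pi * x)
approx = smoothed_level_set_kernel(u, x, x, lambda y, xx: xx - y, lam=5e3)
```

As λ grows, the effective integration window shrinks and the operator converges to exact integration over the level set.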

This mechanism enables the learning of complex physical forward operators (such as geodesic X-ray or Radon transforms) from very few samples, while unstructured networks fail or require far more data. The error rate decays as \epsilon(J) = O(J^{-r}) for arbitrarily large r, contrasting sharply with the O(\epsilon^{-d}) sample requirements for standard feedforward architectures.

3. Geometry-Conditioned Embedding and Propagation in Graphs

Geometry-conditioned learning in graphs is exemplified by the “Graph Geometry Interaction Learning” (GIL) model (Zhu et al., 2020). GIL performs simultaneous message propagation in Euclidean and hyperbolic embedding spaces, employing Möbius transformations and distance-aware GAT layers for the hyperbolic component. After several layers of parallel propagation, node features in both spaces are fused through geometry-preserving operations—Euclidean-to-hyperbolic and hyperbolic-to-Euclidean mappings—based on mutual distances.

A node-specific mechanism learns convex combination weights so each node adaptively balances Euclidean and hyperbolic classifiers: P(i) = \lambda_{i,0} \, P_{\mathbb{D}_c}(i) + \lambda_{i,1} \, P_{\mathbb{R}^n}(i), where the weights are data-driven and reflect local geometric structure (tree-like regions drive hyperbolic preference; grid-like regions favor Euclidean). This enables per-node geometry conditioning, conferring state-of-the-art results on both node-classification and link-prediction tasks across datasets with mixed geometry.
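The per-node convex blending can be sketched as follows. The softmax parameterization of the weights is an assumption made for the sketch; GIL's exact weighting mechanism may differ.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def blend_predictions(p_hyp, p_euc, scores):
    """Per-node convex combination P(i) = lam_{i,0} P_hyp(i) + lam_{i,1} P_euc(i).
    `scores` (n_nodes, 2) are learned logits; the softmax guarantees the
    weights are positive and sum to one for every node."""
    lam = softmax(scores, axis=1)
    return lam[:, :1] * p_hyp + lam[:, 1:] * p_euc

# Toy setup: node 0's scores strongly favor the hyperbolic head (tree-like
# neighborhood), node 1's favor the Euclidean head (grid-like neighborhood).
p_hyp = np.array([[0.9, 0.1], [0.5, 0.5]])
p_euc = np.array([[0.5, 0.5], [0.2, 0.8]])
scores = np.array([[4.0, -4.0], [-4.0, 4.0]])
P = blend_predictions(p_hyp, p_euc, scores)
```

Because each row of the output remains a valid probability distribution, the blend can be trained end-to-end with an ordinary classification loss.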

4. Geometric Regularization and Discrete Partition Geometry

The analysis in “The Geometry of Machine Learning Models” (Gajer et al., 4 Aug 2025) generalizes geometry-conditioned learning to model partition spaces. Here, a model’s decision regions (partitions of input space) are described as Riemannian simplicial complexes, with explicit tracking of cell volumes, facet areas, dihedral angles, and derived curvature measures. Geometric regularizers penalize problematic aspects of the partition (e.g., small angles, extreme cell volumes), while extended Laplacians and simplicial splines enable geometric interpolation, smoothing, and diagnostic tools.

During learning, geometric penalties and curvature diagnostics (such as vertex-based ball-growth curvature and edgewise statistical Ricci curvature) are incorporated into the loss, structuring partition evolution and regularization in a geometry-aware manner. For neural networks, the geometry of decision partitions is tracked via layerwise differential-form pullbacks. This explicit geometry conditioning supports model refinement, early stopping, interpretability, and the embedding of physics-inspired constraints.
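A toy regularizer in the spirit described, penalizing near-degenerate angles and extreme cell volumes, might look as follows. The specific functional forms (inverse squared sine, variance of normalized volumes) are illustrative assumptions, not the paper's exact penalties.

```python
import numpy as np

def partition_penalty(cell_volumes, dihedral_angles, alpha=1.0, beta=1.0):
    """Illustrative geometric regularizer on a cell partition: the angle
    term blows up as angles approach 0 or pi (sliver cells), and the
    volume term penalizes cells far from the mean volume."""
    v = np.asarray(cell_volumes, dtype=float)
    theta = np.asarray(dihedral_angles, dtype=float)
    angle_term = np.sum(1.0 / np.sin(theta) ** 2)
    volume_term = np.sum((v / v.mean() - 1.0) ** 2)
    return alpha * angle_term + beta * volume_term

# A well-shaped partition (equal volumes, right angles) scores low;
# slivers and extreme volumes score high.
uniform = partition_penalty([1.0, 1.0, 1.0], [np.pi / 2] * 3)
skewed = partition_penalty([0.1, 1.0, 2.9], [np.pi / 12] * 3)
```

Such a penalty would be added to the task loss so that partition evolution during training stays geometry-aware.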

5. Geometry-Conditioned Learning in Image and Mesh Generation

In high-dimensional generative modeling, geometry conditioning is used both for material modeling and mesh-based synthesis. “Collaborative Control for Geometry-Conditioned PBR Image Generation” (Vainer et al., 2024) introduces bidirectional cross-network communication between an RGB diffusion UNet and a PBR-specific UNet. The PBR network directly receives a per-pixel normal map (encoding surface geometry) alongside its latent, while cross-attention layers enforce alignment between the two network branches. This results in a geometry-anchored denoising process that produces physically consistent outputs even in out-of-distribution regimes, with ablations showing that geometry-conditioned communication layers are crucial for generalization.

Similarly, “SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes” (Sanyal et al., 2023) trains a two-stage architecture: a geometry generator (predicting per-vertex clothing offsets) is driven by latent vectors, pose, and clothing-type attributes, while a texture generator is conditioned on both geometry intermediate features and appearance attributes. The cross-block feature fusion ensures that generated textures conform to the underlying synthetic geometry—realized as a geometry-conditioned unpaired learning procedure, shown to significantly outperform unconditioned baselines.

6. Geometry-Aware Learning in Operator, Solver, and Control Systems

Geometry-conditioned strategies are also prominent in numerical solvers and control systems on geometric domains. Attention-based hybrid solvers for PDEs (Versano et al., 2024) use a geometry-aware DeepONet with masked self-attention layers that block information transfer across domain boundaries, encoding geometry via signed distance fields and mask matrices. The result is a preconditioner that generalizes robustly to new geometries without fine-tuning.
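The boundary-blocking mechanism can be sketched as standard scaled dot-product attention with disallowed logits set to negative infinity. This is a minimal single-head sketch, not the paper's DeepONet architecture; the block-diagonal mask stands in for the signed-distance-derived mask matrices.

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Single-head scaled dot-product attention with a geometry mask:
    mask[i, j] = True lets query i attend to key j; False (e.g. a pair of
    points separated by a domain boundary) forces zero attention weight."""
    logits = (Q @ K.T) / np.sqrt(Q.shape[-1])
    logits = np.where(mask, logits, -np.inf)       # block cross-boundary flow
    logits -= logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Two disjoint subdomains {0, 1} and {2, 3}: a block-diagonal mask ensures
# no information crosses between them.
mask = np.kron(np.eye(2), np.ones((2, 2))).astype(bool)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
out = masked_attention(Q, K, V, mask)
```

Perturbing the values in one subdomain leaves the other subdomain's outputs untouched, which is exactly the inductive bias the solver needs to generalize across geometries.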

In “Neural Preconditioning via Krylov Subspace Geometry” (Dimola et al., 21 Jul 2025), solver-aligned geometry conditioning is attained by encoding the geometric parameters of mixed-dimensional PDEs directly as input channels to a U-Net preconditioner. Training is explicitly aligned to the geometric evolution of Krylov subspaces, using a differentiable implementation of FGMRES and optimizing angles between residuals and subspaces. Key geometric variables (e.g., distance fields representing the 1D geometry) are crucial for robust generalization across parametrized domain families.
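Measuring the angle between a residual and a subspace, the quantity this training objective optimizes, reduces to an orthogonal projection. This is a generic sketch of the geometric quantity; the differentiable FGMRES machinery is not shown.

```python
import numpy as np

def subspace_angle(r, Q):
    """Principal angle between a vector r and the subspace spanned by the
    orthonormal columns of Q, via the norm of the orthogonal projection."""
    proj = Q @ (Q.T @ r)
    cos_theta = np.linalg.norm(proj) / np.linalg.norm(r)
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))

# A Krylov-geometry-style training signal: drive the angle between the
# current residual and the (growing) Krylov basis toward zero.
basis = np.linalg.qr(np.random.default_rng(1).normal(size=(6, 3)))[0]
residual = np.random.default_rng(2).normal(size=6)
theta = subspace_angle(residual, basis)
```

A residual lying inside the subspace gives angle 0 (the solver has converged in that subspace); an orthogonal residual gives π/2.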

In reinforcement learning, “Geometric Reinforcement Learning For Robotic Manipulation” (Alhousani et al., 2022) leverages geometry conditioning by conducting policy optimization in tangent spaces of non-Euclidean manifolds (e.g., rotations on S^3, SPD matrices). The policy is learned in a fixed tangent (parameterization) space and mapped to the local tangent of the current state via parallel transport, before being exponentiated onto the manifold. Experimental results on both simulation and real robot tasks show consistently improved learning when policies are aligned with the underlying manifold geometry.
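The tangent-space-then-exponential-map step can be sketched on the unit sphere (unit quaternions live on S^3). This minimal illustration omits the parallel-transport stage and is not the paper's full algorithm.

```python
import numpy as np

def project_to_tangent(p, v):
    """Project an ambient vector v onto the tangent space of the unit
    sphere at p (remove the radial component)."""
    return v - np.dot(p, v) * p

def exp_map_sphere(p, v):
    """Exponential map on the unit sphere: follow the geodesic from p in
    tangent direction v and land back on the sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

# Policy step for an orientation state on S^3 (unit quaternions): the
# update is computed in the tangent space, then mapped onto the manifold,
# so the new state is guaranteed to remain a valid rotation.
p = np.array([1.0, 0.0, 0.0, 0.0])
step = project_to_tangent(p, np.array([0.0, 0.1, 0.0, 0.0]))
p_new = exp_map_sphere(p, step)
```

Performing updates this way keeps every iterate exactly on the manifold, rather than drifting off it and renormalizing after the fact.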

7. Applications, Expressive Power, and Broader Impact

Geometry-conditioned learning enables a class of models and solvers that discover, regularize, and operate on complex geometric structures in data and latent spaces. Applications span:

  • Scientific model discovery: adaptive geometry learning uncovers latent structure and invariants in high-dimensional data (Zhang, 30 Oct 2025).
  • Operator learning: integration over dynamically parameterized manifolds is achieved with architecture-encoded kernels (Roddenberry et al., 10 Dec 2025).
  • Representation learning: geometric regularization provably smooths and robustifies learned representations (Gajer et al., 4 Aug 2025).
  • Generative modeling: geometry conditioning in texture and mesh generation yields physically coherent outputs and improved attribute control (Sanyal et al., 2023).
  • Graph learning: per-sample geometry blending enhances node and link prediction accuracy (Zhu et al., 2020).
  • Structured solvers: geometry-aware deep preconditioners generalize across variable PDE geometries (Versano et al., 2024, Dimola et al., 21 Jul 2025).
  • Reinforcement learning: manifold-aware policy learning enables data efficiency and accuracy on non-Euclidean domains (Alhousani et al., 2022).

Empirically, geometry-conditioned frameworks consistently demonstrate improved model expressiveness, generalization, physical plausibility, and diagnostic interpretability over fixed-geometry or geometry-agnostic approaches.

The current frontiers in geometry-conditioned learning involve full dynamic topology evolution (true meta-learning of both topology and metric), integration with high-dimensional and discrete combinatorial structures, and efficient geometric regularization for large-scale, real-world data. The connection to foundational physics (e.g., the Einstein–Hilbert action analogies) and universal approximation for operator learning indicate a deep and enduring role for geometry as both constraint and modeling freedom in machine learning.
