Hamiltonian ODE Graph Networks

Updated 4 February 2026
  • Hamiltonian ODE Graph Networks (HOGN) are models that integrate Hamiltonian dynamics into graph neural networks to enforce energy conservation and adapt to varying geometries.
  • They employ symplectic integrators and message-passing architectures to simulate complex dynamics while preserving energy and structural invariants.
  • HOGNs demonstrate superior adversarial robustness and zero-shot generalization, achieving high accuracy in both physics-based simulations and node embedding tasks.

Hamiltonian ODE Graph Networks (HOGN) are a class of models that embed the inductive bias of Hamiltonian dynamics into graph neural networks (GNNs), enabling principled modeling of complex interacting systems with strict conservation laws and adaptive geometry. They have been applied to both dynamics simulation and graph node embedding tasks, offering significant advantages in energy preservation, adversarial robustness, geometry adaptation, and zero-shot generalization.

1. Foundations and Mathematical Structure

Hamiltonian ODE Graph Networks formulate the evolution of a system of interacting entities, represented as nodes on a graph, according to Hamilton's equations on a phase space. For a node with "position" $q^i \in \mathbb{R}^d$ and auxiliary "momentum" $p_i \in \mathbb{R}^d$, the evolution is prescribed by a node-level or global Hamiltonian function $H(q, p; \theta)$, where $\theta$ are learnable parameters:

$$\dot q^i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q^i}$$

The Hamiltonian function $H$ can be parameterized in multiple ways:

  • Pseudo-Riemannian/geodesic-inspired: $H_{\mathrm{geo}}(q,p) = \frac{1}{2} g_{\mathrm{net}}^{ij}(q)\, p_i p_j$ with a learnable metric $g_{\mathrm{net}}(q)$.
  • Fully flexible MLP: $H_{\mathrm{FC}}(q,p) = \mathrm{MLP}_\theta([q;p])$, with no explicit geometry constraints.
  • Graph-structured Hamiltonians: $H(q, p)$ can be a global output of a graph neural network receiving node and edge features, enabling the Hamiltonian to encode both physical symmetries and graph topology (Kang et al., 2023, Sanchez-Gonzalez et al., 2019).

This structure ensures the flow in the $(q,p)$ phase space is symplectic and conservative, exactly or approximately preserving the energy $H(q(t),p(t))$ over time, depending on the numerical integrator and Hamiltonian parameterization.
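The flow above can be sketched numerically: given any Hamiltonian, learned or closed-form, the phase-space vector field follows from its gradients via Hamilton's equations. A minimal sketch, using a closed-form harmonic-oscillator Hamiltonian in place of a learned GNN and finite-difference gradients in place of autodiff:

```python
import numpy as np

# Toy per-node Hamiltonian H(q, p) = |p|^2/2 + |q|^2/2 (harmonic oscillator).
# A HOGN would replace this closed form with a GNN output, but Hamilton's
# equations are applied to it in exactly the same way.
def hamiltonian(q, p):
    return 0.5 * np.sum(p**2) + 0.5 * np.sum(q**2)

def hamiltonian_vector_field(q, p, eps=1e-6):
    """Central-difference gradients give (dq/dt, dp/dt) = (dH/dp, -dH/dq)."""
    dH_dq = np.array([(hamiltonian(q + eps * e, p) - hamiltonian(q - eps * e, p)) / (2 * eps)
                      for e in np.eye(len(q))])
    dH_dp = np.array([(hamiltonian(q, p + eps * e) - hamiltonian(q, p - eps * e)) / (2 * eps)
                      for e in np.eye(len(p))])
    return dH_dp, -dH_dq

q, p = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dq, dp = hamiltonian_vector_field(q, p)
# For this H, Hamilton's equations reduce to dq/dt = p and dp/dt = -q
```

In practice the two gradients are obtained by automatic differentiation through the network rather than finite differences; the structure of the update is identical.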

2. Graph Network Integration and Permutation Invariance

Graph topology is integrated via message-passing architectures. Each node is updated using information from its local neighborhood via learned GNN layers. In dynamics settings, nodes correspond to physical particles, and edges model interactions such as springs or potentials. In node embedding tasks, the graph structure is injected at the aggregation stage after Hamiltonian evolution.

Permutation invariance is achieved by summing or averaging over node and edge features, ensuring the Hamiltonian HH and the resulting dynamics are independent of node ordering. Variants such as G-SympGNN and LA-SympGNN further enhance permutation equivariance and symplecticity through explicit node- and edge-wise energy parameterizations or linear-algebraic message-passing (Varghese et al., 2024).
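The sum-based invariance described above can be illustrated directly. In this sketch, `phi_node` and `phi_edge` are hypothetical stand-ins for learned MLPs; summing their contributions over nodes and edges makes the total Hamiltonian independent of node ordering:

```python
import numpy as np

def phi_node(p_i):
    # kinetic contribution of one node (stand-in for a learned MLP)
    return 0.5 * np.sum(p_i**2)

def phi_edge(q_i, q_j):
    # pairwise spring-like potential with unit stiffness (stand-in for a learned MLP)
    return 0.5 * np.sum((q_i - q_j)**2)

def graph_hamiltonian(q, p, edges):
    # sums over nodes and edges make H permutation invariant
    kinetic = sum(phi_node(p[i]) for i in range(len(p)))
    potential = sum(phi_edge(q[i], q[j]) for (i, j) in edges)
    return kinetic + potential

rng = np.random.default_rng(0)
q, p = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
edges = [(0, 1), (1, 2), (2, 3)]
H1 = graph_hamiltonian(q, p, edges)

# Relabel the nodes: the Hamiltonian value is unchanged.
perm = [2, 0, 3, 1]
inv = np.argsort(perm)                       # new index of each old node
edges_perm = [(inv[i], inv[j]) for (i, j) in edges]
H2 = graph_hamiltonian(q[perm], p[perm], edges_perm)
# H1 == H2 up to floating-point noise
```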

Table: Graph message-passing vs. Hamiltonian parameterization

| Parameterization | Graph structure | Permutation invariant |
| --- | --- | --- |
| Local Hamiltonian | No (default) | Yes (node-level sum) |
| Graph Hamiltonian | Yes (GNN) | Yes |
| Split-step (SympGNN) | Yes (explicit sum and message op) | Yes |

3. Numerical Integration and Training Framework

The continuous-time Hamiltonian ODE is discretized using explicit symplectic integrators, such as:

  • Symplectic (semi-implicit) Euler: $q_{k+1} = q_k + h \nabla_p H(q_k, p_k),\quad p_{k+1} = p_k - h \nabla_q H(q_{k+1}, p_k)$; evaluating the second gradient at the updated $q_{k+1}$ is what makes the step symplectic.
  • Störmer–Verlet (leapfrog): Used especially for physical dynamics to minimize long-term energy drift (Bishnoi et al., 2023, Rahma et al., 6 Jun 2025).
  • Velocity-Verlet: Similarly preserves the symplectic structure and is efficiently compatible with autodiff frameworks.

ODE solvers, including explicit Euler and higher-order Runge–Kutta schemes, are employed for node embedding architectures. During training, gradients are propagated through the ODE integration using the adjoint sensitivity method, supporting efficient end-to-end optimization (Kang et al., 2023, Zhao et al., 2023, Sanchez-Gonzalez et al., 2019).
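The practical difference between a symplectic and a naive explicit step can be seen on a scalar toy problem; this is a sketch of the integrator behavior only, not of the training framework itself:

```python
import numpy as np

# Compare plain explicit Euler with symplectic (semi-implicit) Euler on the
# harmonic oscillator H = (q^2 + p^2)/2, where dH/dp = p and dH/dq = q.
def explicit_euler(q, p, h):
    return q + h * p, p - h * q          # both gradients at the old state

def symplectic_euler(q, p, h):
    q_new = q + h * p
    return q_new, p - h * q_new          # p-update uses the updated q

def energy(q, p):
    return 0.5 * (q**2 + p**2)

q_e = q_s = 1.0
p_e = p_s = 0.0
h = 0.01
E0 = energy(1.0, 0.0)
for _ in range(10_000):                  # integrate to t = 100
    q_e, p_e = explicit_euler(q_e, p_e, h)
    q_s, p_s = symplectic_euler(q_s, p_s, h)

drift_explicit = abs(energy(q_e, p_e) - E0)
drift_symplectic = abs(energy(q_s, p_s) - E0)
# the symplectic drift stays bounded at O(h); the explicit drift grows without bound
```

This bounded long-term energy error is the reason Störmer–Verlet and velocity-Verlet are preferred for physical rollouts.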

Recent advances introduce random feature-based parameterizations for the GNNs within HOGNs, enabling non-iterative training (e.g., Extreme Learning Machine or SWIM) and achieving 100–600× faster training compared to standard gradient-based optimizers (Rahma et al., 6 Jun 2025).
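The random-feature idea behind this speedup can be sketched in a few lines: hidden weights are drawn once and frozen, and only a linear readout is fit by a single least-squares solve instead of iterative gradient descent. The names and shapes below are illustrative, not the actual SWIM or ELM API:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # e.g. sampled (q, p) phase-space points
y = 0.5 * np.sum(X**2, axis=1)           # target values of a known toy Hamiltonian

# Random, frozen hidden layer producing nonlinear features.
W = rng.normal(size=(4, 256))
b = rng.normal(size=256)
Phi = np.tanh(X @ W + b)

# Fitting the readout is one closed-form least-squares solve, not an
# iterative optimization loop.
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ beta
rel_err = np.linalg.norm(pred - y) / np.linalg.norm(y)
```

Because the only trained component is linear, training cost is dominated by a single factorization, which is the source of the reported 100–600× speedups.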

4. Adaptivity, Stability, and Over-smoothing Mitigation

By learning a Hamiltonian energy function $H(q,p)$ over the cotangent bundle, HOGNs generalize beyond fixed-geometry (e.g., exponential map-based) manifolds. The architecture can automatically capture and adapt to local geometric structure (e.g., Euclidean, hyperbolic, or mixed curvature), a key advantage in handling datasets with heterogeneous subgraphs or varying hyperbolicity (Kang et al., 2023).

The conservation law inherent in the Hamiltonian flow anchors the node embedding or simulation trajectory, providing strong control over layer-wise feature evolution. This mitigates two systemic GNN pathologies:

  • Over-smoothing: Deep propagation does not drive embeddings to a constant subspace, as measured by stable accuracy with increasing layer depth (up to 20 or 64 layers without collapse) (Kang et al., 2023).
  • Adversarial robustness: Empirically, HOGNs exhibit minimal accuracy degradation (drop ≤ 5 points) under adversarial perturbations of edges or features, compared to significant drops in standard GCN/GAT/GRAND models. The flow is stable (BIBO and—when the Hamiltonian takes a natural form—Lyapunov stable) and resilient to chaotic drift under both evasion and poisoning attacks (Zhao et al., 2023, Kang et al., 2023).

5. Task Domains and Empirical Results

Node Embedding and Classification

  • HOGN (HamGNN, HDG) matches or improves upon Euclidean GNNs (GCN/GAT) on high-hyperbolicity (Euclidean) graphs (Cora, Citeseer, Pubmed) and outperforms all Euclidean and hyperbolic GNNs on low-hyperbolicity ("tree-like") graphs (Disease, Airport) and mixed-geometry unions (accuracy ∼95% vs. 75–90% for baselines) (Kang et al., 2023).
  • Link prediction (using inner product similarity of the final $q(T)$): HOGN achieves higher ROC-AUC than standard baselines across multiple geometries.
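The inner-product scoring rule for link prediction is simple enough to state directly; shapes and names here are illustrative:

```python
import numpy as np

def link_scores(qT, candidate_edges):
    """Score candidate edges by the inner product of final-time embeddings
    q(T), squashed to (0, 1) with a sigmoid."""
    scores = np.array([qT[i] @ qT[j] for i, j in candidate_edges])
    return 1.0 / (1.0 + np.exp(-scores))

# Three toy node embeddings: nodes 0 and 1 are nearly aligned, node 2 opposes them.
qT = np.array([[1.0, 0.0],
               [0.9, 0.1],
               [-1.0, 0.2]])
probs = link_scores(qT, [(0, 1), (0, 2)])
# the aligned pair (0, 1) receives the higher edge probability
```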

Physics-based Simulation

  • Hamiltonian ODE Graph Networks learn accurate large-scale dynamical models (mass-spring systems, pendulum chains, Lennard-Jones molecular dynamics) and preserve total energy over long rollouts (energy drift ≪ competing models).
  • Zero-shot generalization: Trained on minimal systems (e.g., 8-node chains), HOGN transfers to 4096-node graphs or 2000-particle MD without retraining, with trajectory errors below $10^{-5}$ (Rahma et al., 6 Jun 2025, Bishnoi et al., 2023, Varghese et al., 2024).
  • Hybrid system composition: HOGN models trained independently on different systems can be composed for accurate simulation of hybrid systems, a property not matched by non-Hamiltonian baselines (Bishnoi et al., 2023).

Adversarial Robustness

  • On standard attack benchmarks (PGD-GIA, TDGIA, MetaGIA, Nettack, Metattack), Hamiltonian neural flow GNNs (HANG, HANG-quad variants) substantially outperform all standard baselines, often doubling robust accuracy (Zhao et al., 2023).
  • HOGN achieves accuracy stability unaffected by the depth of message-passing layers, resisting feature collapse under both deep architectures and adversarial manipulations.

6. Symbolic Regression and Interpretability

HOGNs support interpretable scientific discovery by enabling closed-form symbolic regression on their learned kinetic and potential energy functions. By regressing outputs of the Hamiltonian's MLPs onto standard operator libraries, the model can recover underlying physical laws (e.g., $\tfrac{1}{2} m \|\dot x\|^2$, $\tfrac{1}{2} k (r-1)^2$, Lennard-Jones potentials) from trajectory data, including in highly nonlinear or hybrid settings (Bishnoi et al., 2023).

This interpretability extends to hybrid systems: the symbolic regression can separately reconstruct the physical terms corresponding to each composed subsystem.
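A toy version of the regression step makes the idea concrete: samples of a learned potential are projected onto a small library of candidate terms, and the dominant coefficient identifies the physical law. A real pipeline would use a dedicated symbolic-regression tool; this least-squares projection is only a sketch, and the target below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(0.5, 2.0, size=200)      # sampled pairwise distances
V = 0.5 * (r - 1.0)**2                   # pretend these values came from a learned model

# Candidate operator library: constant, linear, spring, and an r^-6 term.
library = np.column_stack([
    np.ones_like(r),
    r,
    (r - 1.0)**2,
    1.0 / r**6,
])
coeffs, *_ = np.linalg.lstsq(library, V, rcond=None)
# the (r - 1)^2 column carries essentially all the weight, with coefficient ~0.5,
# recovering the spring law (1/2) k (r - 1)^2 with k = 1
```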

7. Limitations and Extensions

  • HOGNs are fundamentally restricted to conservative (Hamiltonian) dynamics; modeling dissipative or frictional forces requires extending the formalism, for instance via Rayleigh dissipation or Langevin terms (Bishnoi et al., 2023).
  • Scalability of dense GNNs can be a computational bottleneck for fully-connected physical graphs, but the inherent locality of message-passing supports efficient scaling on sparse/topological graphs (Rahma et al., 6 Jun 2025, Varghese et al., 2024).
  • Future research directions include symplecticity-preserving graph message passing, treating continuum systems or deformable bodies via mesh-based HOGN, and incorporating E(3) equivariance for enhanced physical fidelity (Bishnoi et al., 2023, Varghese et al., 2024).
