Hamiltonian ODE Graph Networks
- Hamiltonian ODE Graph Networks (HOGN) are models that integrate Hamiltonian dynamics into graph neural networks to enforce energy conservation and adapt to varying geometries.
- They employ symplectic integrators and message-passing architectures to simulate complex dynamics while preserving energy and structural invariants.
- HOGNs demonstrate superior adversarial robustness and zero-shot generalization, achieving high accuracy in both physics-based simulations and node embedding tasks.
Hamiltonian ODE Graph Networks (HOGN) are a class of models that embed the inductive bias of Hamiltonian dynamics into graph neural networks (GNNs), enabling principled modeling of complex interacting systems with strict conservation laws and adaptive geometry. They have been applied to both dynamics simulation and graph node embedding tasks, offering significant advantages in energy preservation, adversarial robustness, geometry adaptation, and zero-shot generalization.
1. Foundations and Mathematical Structure
Hamiltonian ODE Graph Networks formulate the evolution of a system of interacting entities—represented as nodes on a graph—according to Hamilton's equations on a phase space. For a node with "position" $q_i$ and auxiliary "momentum" $p_i$, the evolution is prescribed by a node-level or global Hamiltonian function $H_\theta(q, p)$, where $\theta$ are learnable parameters:

$$\dot{q}_i = \frac{\partial H_\theta}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H_\theta}{\partial q_i}.$$
The Hamiltonian function can be parameterized in multiple ways:
- Pseudo-Riemannian/Geodesic-inspired: $H_\theta(q, p) = \tfrac{1}{2}\, p^\top g_\theta(q)^{-1} p$, with learnable metric $g_\theta$.
- Fully flexible MLP: $H_\theta(q, p) = \mathrm{MLP}_\theta(q, p)$, with no explicit geometry constraints.
- Graph-structured Hamiltonians: can be a global output of a graph neural network receiving node and edge features, enabling the Hamiltonian to encode both physical symmetries and graph topology (Kang et al., 2023, Kang et al., 2023, Sanchez-Gonzalez et al., 2019).
This structure ensures the flow in the phase space is symplectic and conservative, exactly or approximately preserving energy over time, depending on the numerical integrator and Hamiltonian parameterization.
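As a minimal sketch of Hamilton's equations above, the following uses an analytic harmonic-oscillator Hamiltonian in place of a learned network; in a HOGN, the two gradients would instead be obtained by automatic differentiation through $H_\theta$ (the function names here are illustrative stand-ins):

```python
import numpy as np

# Toy separable Hamiltonian H(q, p) = p^2 / (2m) + k q^2 / 2 (harmonic oscillator);
# in a HOGN, grad_q_H and grad_p_H would come from autodiff through a learned network.
m, k = 1.0, 1.0

def grad_q_H(q, p):
    return k * q            # dH/dq

def grad_p_H(q, p):
    return p / m            # dH/dp

def hamiltonian_vector_field(q, p):
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
    return grad_p_H(q, p), -grad_q_H(q, p)

# At q = 1, p = 0 the oscillator is momentarily at rest and being pulled back.
dq, dp = hamiltonian_vector_field(np.array([1.0]), np.array([0.0]))
```

Integrating this vector field with a symplectic scheme (Section 3) yields the conservative flow described in the text.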
2. Graph Network Integration and Permutation Invariance
Graph topology is integrated via message-passing architectures. Each node is updated using information from its local neighborhood via learned GNN layers. In dynamics settings, nodes correspond to physical particles, and edges model interactions such as springs or potentials. In node embedding tasks, the graph structure is injected at the aggregation stage after Hamiltonian evolution.
Permutation invariance is achieved by summing or averaging over node and edge features, ensuring the Hamiltonian and the resulting dynamics are independent of node ordering. Variants such as G-SympGNN and LA-SympGNN further enhance permutation equivariance and symplecticity through explicit node- and edge-wise energy parameterizations or linear-algebraic message-passing (Varghese et al., 2024).
Table: Graph message-passing vs. Hamiltonian parameterization
| Parameterization | Graph Structure | Permutation Invariant |
|---|---|---|
| Local Hamiltonian | No (default) | Yes (node-level sum) |
| Graph Hamiltonian | Yes (GNN) | Yes |
| Split-step (SympGNN) | Yes (explicit sum and message op) | Yes |
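The sum-based invariance described above can be sketched as follows; `node_energy` and `edge_energy` are hypothetical stand-ins for the learned per-node and per-edge MLPs:

```python
import numpy as np

# Sketch of a permutation-invariant graph Hamiltonian: total energy is a sum of
# per-node terms plus per-edge terms, so relabeling nodes leaves H unchanged.
def node_energy(p):
    return 0.5 * np.dot(p, p)        # kinetic-like term per node

def edge_energy(q_i, q_j):
    d = q_i - q_j
    return 0.5 * np.dot(d, d)        # spring-like pair potential

def graph_hamiltonian(q, p, edges):
    H = sum(node_energy(p[i]) for i in range(len(q)))
    H += sum(edge_energy(q[i], q[j]) for i, j in edges)
    return H

q = np.array([[0.0], [1.0], [3.0]])
p = np.array([[1.0], [0.0], [0.0]])
edges = [(0, 1), (1, 2)]
H1 = graph_hamiltonian(q, p, edges)

# Permute node labels (and relabel edges accordingly): the energy is unchanged.
perm = [2, 0, 1]
inv = {orig: new for new, orig in enumerate(perm)}
q2, p2 = q[perm], p[perm]
edges2 = [(inv[i], inv[j]) for i, j in edges]
H2 = graph_hamiltonian(q2, p2, edges2)
```

Because both sums range over unordered sets of nodes and edges, any relabeling yields the same Hamiltonian and hence the same induced dynamics.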
3. Numerical Integration and Training Framework
The continuous-time Hamiltonian ODE is discretized using explicit symplectic integrators, such as:
- Symplectic (semi-implicit) Euler: $p_{n+1} = p_n - h\,\nabla_q H(q_n, p_n)$, $\; q_{n+1} = q_n + h\,\nabla_p H(q_n, p_{n+1})$, explicit for separable Hamiltonians.
- Störmer–Verlet (leapfrog): Used especially for physical dynamics to minimize long-term energy drift (Bishnoi et al., 2023, Rahma et al., 6 Jun 2025).
- Velocity-Verlet: Similarly preserves the symplectic structure and is efficiently compatible with autodiff frameworks.
For node embedding architectures, general-purpose ODE solvers such as explicit Euler and higher-order Runge–Kutta schemes are employed. During training, gradients are propagated through the ODE integration using the adjoint sensitivity method, supporting efficient end-to-end optimization (Kang et al., 2023, Kang et al., 2023, Zhao et al., 2023, Sanchez-Gonzalez et al., 2019).
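A minimal sketch of the Störmer–Verlet (leapfrog) step for a separable Hamiltonian $H = T(p) + V(q)$, with an analytic spring force standing in for a learned graph network (the `force` closure is an assumption for illustration):

```python
import numpy as np

# Leapfrog (Störmer–Verlet) for separable H(q, p) = T(p) + V(q), force = -dV/dq.
# Symplecticity keeps the long-horizon energy drift bounded instead of growing.
def leapfrog(q, p, force, dt, steps):
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half-kick
        q = q + dt * p                # drift (unit mass)
        p = p + 0.5 * dt * force(q)   # half-kick
    return q, p

force = lambda q: -q                       # V(q) = q^2 / 2, spring force
energy = lambda q, p: 0.5 * (p**2 + q**2)  # total energy

q0, p0 = 1.0, 0.0
E0 = energy(q0, p0)
q, p = leapfrog(q0, p0, force, dt=0.05, steps=10_000)
drift = abs(energy(q, p) - E0)             # stays small over 10,000 steps
```

The same kick–drift–kick structure is what the velocity-Verlet variant mentioned above uses, and each step is differentiable, so it composes directly with autodiff frameworks.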
Recent advances introduce random feature-based parameterizations for the GNNs within HOGNs, enabling non-iterative training (e.g., Extreme Learning Machine or SWIM) and achieving 100–600× faster training compared to standard gradient-based optimizers (Rahma et al., 6 Jun 2025).
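The random-feature idea can be sketched in miniature: hidden weights are sampled once and frozen, and only a linear readout is fit in closed form, so no gradient-descent loop is needed. The toy quadratic target below is an assumption standing in for energy labels:

```python
import numpy as np

# ELM-style random-feature regression: sample hidden weights once, freeze them,
# and solve the linear readout by least squares (no iterative training).
rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(200, 2))           # inputs, e.g. (q, p) pairs
y = 0.5 * (X ** 2).sum(axis=1)                  # toy quadratic energy target

W = rng.normal(size=(2, 64))                    # frozen random hidden weights
b = rng.normal(size=64)
Phi = np.tanh(X @ W + b)                        # random nonlinear features

beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # closed-form readout fit

y_hat = Phi @ beta
mse = float(np.mean((y_hat - y) ** 2))
```

Because the only fitting step is a single linear solve, training cost is dominated by one least-squares problem rather than thousands of gradient steps, which is the source of the reported speedups.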
4. Adaptivity, Stability, and Over-smoothing Mitigation
By learning a Hamiltonian energy function over the cotangent bundle, HOGNs generalize beyond fixed-geometry (e.g., exponential map-based) manifolds. The architecture can automatically capture and adapt to local geometric structure (e.g., Euclidean, hyperbolic, or mixed curvature), a key advantage in handling datasets with heterogeneous subgraphs or varying hyperbolicity (Kang et al., 2023).
The conservation law inherent in the Hamiltonian flow anchors the node embedding or simulation trajectory, providing strong control over layer-wise feature evolution. This mitigates two systemic GNN pathologies:
- Over-smoothing: Deep propagation does not drive embeddings to a constant subspace, as measured by stable accuracy with increasing layer depth (up to 20 or 64 layers without collapse) (Kang et al., 2023, Kang et al., 2023).
- Adversarial robustness: Empirically, HOGNs exhibit minimal accuracy degradation (drop ≤ 5 points) under adversarial perturbations of edges or features, compared to significant drops in standard GCN/GAT/GRAND models. The flow is stable (BIBO and—when the Hamiltonian takes a natural form—Lyapunov stable) and resilient to chaotic drift under both evasion and poisoning attacks (Zhao et al., 2023, Kang et al., 2023).
5. Task Domains and Empirical Results
Node Embedding and Classification
- HOGN (HamGNN, HDG) matches or improves upon Euclidean GNNs (GCN/GAT) on high δ-hyperbolicity (approximately Euclidean) graphs (Cora, Citeseer, Pubmed) and outperforms all Euclidean and hyperbolic GNNs on low δ-hyperbolicity ("tree-like") graphs (Disease, Airport) and mixed-geometry unions (accuracy ∼95% vs. 75–90% for baselines) (Kang et al., 2023, Kang et al., 2023).
- Link prediction (using inner product similarity of final node embeddings): HOGN achieves higher ROC-AUC than standard baselines across multiple geometries.
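The inner-product link score used above can be sketched as follows; the embeddings are hypothetical stand-ins for final HOGN outputs:

```python
import numpy as np

# Link prediction via inner-product similarity of final node embeddings,
# squashed through a sigmoid to give an edge probability.
def link_score(z, i, j):
    return 1.0 / (1.0 + np.exp(-np.dot(z[i], z[j])))

# Toy embeddings: nodes 0 and 1 are nearby, node 2 points the opposite way.
z = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]])
s_close = link_score(z, 0, 1)   # similar embeddings -> score above 0.5
s_far = link_score(z, 0, 2)     # dissimilar embeddings -> score below 0.5
```

Ranking candidate pairs by this score is what the ROC-AUC figures in the cited papers evaluate.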
Physics-based Simulation
- Hamiltonian ODE Graph Networks learn accurate large-scale dynamical models (mass-spring systems, pendulum chains, Lennard-Jones molecular dynamics) and preserve total energy over long rollouts (energy drift ≪ competing models).
- Zero-shot generalization: Training on minimal systems (e.g., 8-node chains), HOGN transfers to 4096-node graphs or 2000-particle MD without retraining, with small trajectory errors (Rahma et al., 6 Jun 2025, Bishnoi et al., 2023, Varghese et al., 2024).
- Hybrid system composition: HOGN models trained independently on different systems can be composed for accurate simulation of hybrid systems, a property not matched by non-Hamiltonian baselines (Bishnoi et al., 2023).
Adversarial Robustness
- On standard attack benchmarks (PGD-GIA, TDGIA, MetaGIA, Nettack, Metattack), Hamiltonian neural flow GNNs (HANG, HANG-quad variants) substantially outperform all standard baselines, often doubling robust accuracy (Zhao et al., 2023).
- HOGN accuracy remains stable as message-passing depth grows, resisting feature collapse under both deep architectures and adversarial manipulation.
6. Symbolic Regression and Interpretability
HOGNs support interpretable scientific discovery by enabling closed-form symbolic regression on their learned kinetic and potential energy functions. By regressing outputs of the Hamiltonian's MLPs onto standard operator libraries, the model can recover underlying physical laws (e.g., harmonic spring and Lennard-Jones potentials) from trajectory data, including in highly nonlinear or hybrid settings (Bishnoi et al., 2023).
This interpretability extends to hybrid systems: the symbolic regression can separately reconstruct the physical terms corresponding to each composed subsystem.
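A minimal sketch of this regression step: evaluate a library of candidate terms and fit coefficients by least squares (the cited work uses a proper symbolic-regression pipeline; here the "learned" potential is replaced by synthetic Lennard-Jones values for illustration):

```python
import numpy as np

# Regress a (here, synthetic) learned potential onto a small candidate-term
# library and read off the dominant coefficients; sparse solvers with
# thresholding would be used in practice to select terms.
rng = np.random.default_rng(1)
r = rng.uniform(0.8, 2.0, size=500)             # sampled pairwise distances

V = 4.0 * (r ** -12 - r ** -6)                  # stand-in for MLP potential output

# Candidate terms: linear, quadratic, and the two Lennard-Jones powers.
library = np.column_stack([r, r ** 2, r ** -6, r ** -12])
coef, *_ = np.linalg.lstsq(library, V, rcond=None)
```

The fitted coefficients concentrate on the $r^{-6}$ and $r^{-12}$ columns (approximately $-4$ and $+4$), recovering the generating law from the sampled values.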
7. Limitations and Extensions
- HOGNs are fundamentally restricted to conservative (Hamiltonian) dynamics; modeling dissipative or frictional forces requires extending the formalism, for instance via Rayleigh dissipation or Langevin terms (Bishnoi et al., 2023).
- Scalability of dense GNNs can be a computational bottleneck for fully-connected physical graphs, but the inherent locality of message-passing supports efficient scaling on sparse/topological graphs (Rahma et al., 6 Jun 2025, Varghese et al., 2024).
- Future research directions include symplecticity-preserving graph message passing, treating continuum systems or deformable bodies via mesh-based HOGN, and incorporating E(3) equivariance for enhanced physical fidelity (Bishnoi et al., 2023, Varghese et al., 2024).
References
- Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks (Kang et al., 2023)
- Node Embedding from Hamiltonian Information Propagation in Graph Neural Networks (Kang et al., 2023)
- Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks (Bishnoi et al., 2023)
- Hamiltonian Graph Networks with ODE Integrators (Sanchez-Gonzalez et al., 2019)
- Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach (Zhao et al., 2023)
- SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification (Varghese et al., 2024)
- Rapid training of Hamiltonian graph networks without gradient descent (Rahma et al., 2025)