Node Embedding from Neural Hamiltonian Orbits
- The paper proposes a Hamiltonian-based node embedding framework that leverages learnable energy functions to generate robust, geometry-aware representations, mitigating over-smoothing in deep GNNs.
- It employs continuous-time Hamiltonian dynamics and symplectic integrators to preserve energy conservation, ensuring numerical stability over deep layers.
- The work demonstrates significant improvements over classical GNNs on diverse graph structures, validated across node classification and link prediction tasks.
Node embedding from neural Hamiltonian orbits refers to a family of graph representation learning techniques leveraging principles from Hamiltonian dynamics. These approaches address the limitations of classical Graph Neural Networks (GNNs) in handling diverse graph geometries and the over-smoothing phenomenon in deep architectures by constructing node embeddings as trajectories (orbits) evolving under learnable Hamiltonian systems. The resulting embeddings are capable of automatically inferring latent geometry, preserving information under deep propagation, and providing robustness to perturbations (Kang et al., 2023, Liu et al., 2023, Kang et al., 2023).
1. Hamiltonian Formulation for Node Embedding
Each node $i$ in the graph is associated with a "position" vector $q_i \in \mathbb{R}^d$ and a "momentum" vector $p_i \in \mathbb{R}^d$, which are assembled into the state $x_i = (q_i, p_i)$. A learnable Hamiltonian function $H_{\text{net}}(q, p)$, typically parameterized as a small neural network (e.g., a two-layer GCN or MLP), assigns an "energy" to each state in the phase space $\mathbb{R}^{2d}$.
- In HDG (Kang et al., 2023), the Hamiltonian is realized as a graph convolution over the concatenated phase-space state, $H_{\text{net}}(q, p) = \mathrm{GCN}_\theta(A,\, q \,\|\, p)$, where $A$ is the adjacency matrix and $\|$ denotes feature concatenation.
- In HamGNN (Kang et al., 2023), $H_{\text{net}}$ can adopt several forms, including a metric-based quadratic form for manifold learning, $H(q, p) = \tfrac{1}{2}\, p^\top M(q)^{-1} p$ with a learnable metric $M(q)$, or a general MLP on the concatenated state $(q, p)$.
- In SAH-GNN (Liu et al., 2023), the total energy decomposes into kinetic and potential terms, $H(q, p) = T(p) + U(q)$, with a key innovation being the use of a learnable symplectic structure.
No explicit regularizer is necessary as the Hamiltonian flow guarantees energy conservation by construction.
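As a concrete but deliberately simplified illustration, a quadratic Hamiltonian of the kind described above can be sketched as follows. The parameterization, variable names, and the fixed "metric" are illustrative assumptions for this sketch, not the papers' exact networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: a quadratic energy over per-node phase-space states,
# with d-dimensional position q_i and momentum p_i per node.
d = 4
W = rng.standard_normal((d, d))
M = W @ W.T + d * np.eye(d)  # a learnable SPD "metric" (held fixed for the demo)

def hamiltonian(q, p):
    """H(q, p) = 1/2 p^T M^{-1} p + 1/2 ||q||^2 per node, summed over nodes."""
    kinetic = 0.5 * np.einsum("nd,de,ne->", p, np.linalg.inv(M), p)
    potential = 0.5 * np.sum(q * q)
    return kinetic + potential

q = rng.standard_normal((3, d))  # 3 nodes
p = rng.standard_normal((3, d))
print(hamiltonian(q, p))         # a single scalar energy for the whole state
```

In practice the metric (or the whole energy function) would be the output of a trained network; here it is frozen so the energy assignment itself is visible.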
2. Continuous-time Hamiltonian Dynamics
Node features evolve according to the canonical Hamiltonian ODEs:

$$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q},$$

defining a symplectic orbit on the phase space $\mathbb{R}^{2d}$. The time evolution preserves $H$ exactly (the energy-conservation law), ensuring stable long-range propagation and well-separated node representations.
- In the most general framework (HamGNN), the system can recover any local geometry, with the exponential map (shortest-path geodesics) arising as a special case for quadratic $H$.
- In SAH-GNN, the symplectic structure itself is learned on a Riemannian manifold, generalizing standard Hamiltonian evolution and adapting phase space geometry to the task (Liu et al., 2023).
3. Discretization and GNN Integration
Discretization of the continuous flow is essential for implementation and message passing. The standard procedure is:
- Set the integration time $T$, step size $\epsilon$, and select a symplectic integrator.
- Most approaches adopt the symplectic (semi-implicit) Euler scheme:

$$p_{k+1} = p_k - \epsilon\, \nabla_q H(q_k, p_{k+1}), \qquad q_{k+1} = q_k + \epsilon\, \nabla_p H(q_k, p_{k+1}).$$
- Higher-order symplectic schemes (e.g., Leapfrog/Störmer–Verlet) can be used for improved numerical stability (Liu et al., 2023).
- Each integration step is analogous to a GNN layer; stacking $L$ iterations yields an $L$-layer network.
Between integration steps, standard message passing is performed:

$$q \leftarrow \hat{A}\, q,$$

where $\hat{A}$ is a normalized adjacency (e.g., $\hat{A} = D^{-1/2}(A + I)\,D^{-1/2}$).
After the final time step, the embedding of node $i$ is $q_i(T)$, sometimes refined by an additional readout network.
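A minimal numeric sketch of the discretization above, for the separable toy Hamiltonian $H(q, p) = \tfrac{1}{2}|p|^2 + \tfrac{1}{2}|q|^2$ (so $\nabla_p H = p$ and $\nabla_q H = q$), contrasts the semi-implicit (symplectic) Euler update with plain explicit Euler; message passing is omitted to isolate the integrator's behavior:

```python
import numpy as np

def symplectic_euler(q, p, eps, steps):
    """Semi-implicit Euler: momentum first, then position at the new momentum."""
    for _ in range(steps):
        p = p - eps * q   # p_{k+1} = p_k - eps * grad_q H(q_k, .)
        q = q + eps * p   # q_{k+1} = q_k + eps * grad_p H(., p_{k+1})
    return q, p

def explicit_euler(q, p, eps, steps):
    """Plain Euler: both gradients evaluated at the old state (not symplectic)."""
    for _ in range(steps):
        q, p = q + eps * p, p - eps * q
    return q, p

H = lambda q, p: 0.5 * (q @ q + p @ p)

q0, p0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h0 = H(q0, p0)
qs, ps = symplectic_euler(q0, p0, 0.01, 1000)
qe, pe = explicit_euler(q0, p0, 0.01, 1000)
print("symplectic drift:", abs(H(qs, ps) - h0))  # bounded, stays small
print("explicit drift:  ", abs(H(qe, pe) - h0))  # accumulates with depth
```

The symplectic update keeps the energy within a small bounded band however many "layers" are stacked, while explicit Euler inflates it by a factor $(1+\epsilon^2)$ per step, which is the numerical analogue of the stability claim in the text.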
4. End-to-End Training and Algorithms
All model parameters, including the Hamiltonian network, the momentum network, and (when applicable) the symplectic matrix, are learned via backpropagation through the discretized ODE steps. Automatic differentiation computes the required gradients $\nabla_q H$ and $\nabla_p H$.
- Objective for node classification: cross-entropy on labeled nodes with a softmax readout of $q(T)$.
- Objective for link prediction: cross-entropy or margin ranking loss on embedding pairs.
- Symplectic matrix learning (SAH-GNN): Riemannian gradient descent projects updates onto the tangent space of the symplectic Stiefel manifold, followed by Cayley retraction to guarantee manifold constraints.
- Energy regularization (optional in SAH-GNN): a penalty on energy drift along the discrete trajectory, e.g. $\mathcal{L}_{\text{energy}} = \sum_k \big| H(q_{k+1}, p_{k+1}) - H(q_k, p_k) \big|$, although exact integrators and the learned symplectic structure yield near-perfect conservation.
Pseudocode for generic implementation appears in (Kang et al., 2023, Kang et al., 2023), and (Liu et al., 2023).
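The Cayley step mentioned for SAH-GNN can be illustrated in miniature: the Cayley transform maps a Hamiltonian matrix $S$ (one satisfying $S^\top J + J S = 0$) to a symplectic matrix $M$ with $M^\top J M = J$. The sketch below only checks this algebraic fact numerically; the paper's actual retraction on the symplectic Stiefel manifold may differ in its details:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# A Hamiltonian matrix S = J A with A symmetric satisfies S^T J + J S = 0.
A = rng.standard_normal((2 * n, 2 * n))
A = 0.5 * (A + A.T)
S = 0.1 * (J @ A)          # small scale keeps (I - S) well-conditioned

# Cayley transform: M = (I - S)^{-1} (I + S) is symplectic.
I = np.eye(2 * n)
M = np.linalg.solve(I - S, I + S)

print(np.allclose(M.T @ J @ M, J))  # True: M preserves the symplectic form
```

This is why a Cayley-style retraction can guarantee the manifold constraint exactly after each gradient update, rather than enforcing it with a soft penalty.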
5. Geometry Adaptation and Robustness
A principal advantage is the ability to learn and adapt to latent graph geometry. The Hamiltonian is agnostic to manifold type; it discerns local or mixed curvature (Euclidean, hyperbolic, pseudo-Riemannian) during training, without pre-prescribed structure (Kang et al., 2023, Kang et al., 2023).
The conservation of $H$ precludes energy "collapse," directly addressing over-smoothing: node features remain diverse even after deep propagation (empirically verified up to 32–64 layers, whereas GCN/HGCN collapse by contrast (Kang et al., 2023, Kang et al., 2023)).
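The over-smoothing that energy conservation is said to prevent is easy to reproduce on the non-Hamiltonian side: repeatedly applying a normalized adjacency, as plain GCN-style propagation does, drives all node features toward a common value. A small hypothetical example on a 4-node graph:

```python
import numpy as np

# Toy 4-node graph; repeated row-normalized propagation mimics deep GCN layers.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                              # add self-loops
A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalize

X = np.array([[1.0], [0.0], [-1.0], [2.0]])        # initial node features
for depth in (1, 8, 32):
    Y = np.linalg.matrix_power(A_hat, depth) @ X
    print(depth, float(Y.std()))                   # std shrinks with depth
```

On a connected graph the powers of a row-stochastic matrix converge to a rank-one projector, so feature variance decays to zero; conserving $H$ along the orbit rules out exactly this collapse.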
Empirical studies under adversarial perturbations (SPEIT, TDGIA, Nettack) show only ≈5% performance drop for Hamiltonian models, outperforming attention-based and diffusion GNNs in resilience (Kang et al., 2023).
6. Experimental Results and Analysis
Benchmarking
Key datasets: Cora, Citeseer, Pubmed (higher hyperbolicity), Disease, Airport (tree-like, low hyperbolicity), and various mixed-geometry graphs formed by union.
Baselines
- Euclidean GNNs: GCN, SAGE, SGC, GAT
- Hyperbolic/Mixed GNNs: HGNN, HGCN, HGAT, LGCN, κ-GCN, Q-GCN, GIL
- Neural ODE/PDE: GRAND, GraphCON
Metrics
- Node classification accuracy.
- Link prediction ROC–AUC.
Results
| Model | Disease (%) | Airport (%) | Cora (%) | Mixed Geo (best) |
|---|---|---|---|---|
| HDG / HamGNN | 91.3–91.5 | 94.5–95.5 | 82+ | >5% over best baseline in 3/4 cases |
| Best baseline | 90.8 | 91.5 | 82.8 | — |
- Hamiltonian-based models outperform all baselines on tree-like and mixed-geometry graphs, and on link-prediction tasks except one marginal case (Kang et al., 2023, Kang et al., 2023, Liu et al., 2023).
- All variants of $H_{\text{net}}$ (metric-based, unconstrained, convex, and relaxed forms) attain comparable results; the metric-based/geodesic variant is both simple and effective.
- Ablations show a roughly 10% performance gap over plain neural ODEs lacking Hamiltonian structure; the choice of ODE solver (Euler, RK4, Dopri5) is not critical.
- Inference time is competitive: HDG is 5.8 ms/sample vs GCN 2.96 ms, HGCN 6.1 ms, GraphCON 4.3 ms.
Stability
Energy curves remain nearly constant over long training, confirming theoretical guarantees. When stacking many layers, Hamiltonian models maintain accuracy, while GCN/HGCN degrade rapidly (Kang et al., 2023, Kang et al., 2023).
7. Extensions, Limitations, and Future Work
- Symplectic learning: SAH-GNN (Liu et al., 2023) extends the framework with a learnable symplectic structure, optimized on the symplectic Stiefel manifold, enabling adaptation to arbitrary graph data.
- ODE integration: Future directions include exact symplectic integrators for stricter energy conservation and generalization to full (non-diagonal) metrics or low-rank structures.
- Applicability: Current methods focus on node representations; further development is needed for graph-level classification, dynamic or heterophilic graphs, and theoretical analyses of generalization properties (Kang et al., 2023).
- Implementation: Open-source code is provided for HamGNN (Kang et al., 2023).
A plausible implication is that Hamiltonian-inspired node embedding frameworks offer a unified, geometrically flexible, and robust solution to representation learning on complex graphs, generalizing and improving upon traditional GNNs without significant increases in inference time or hyperparameter sensitivity.
References
(Kang et al., 2023) Node Embedding from Hamiltonian Information Propagation in Graph Neural Networks
(Liu et al., 2023) Symplectic Structure-Aware Hamiltonian (Graph) Embeddings
(Kang et al., 2023) Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks