
HenonNet Modules: Symplectic Neural Maps

Updated 8 February 2026
  • HenonNet modules are specialized neural network blocks that construct structure-preserving symplectic maps, ensuring conservation of invariants in Hamiltonian systems.
  • They employ neural parameterizations of canonical transformations and layered Henon maps to efficiently simulate toroidal magnetic field-line Poincaré flows.
  • Time-adaptive and non-autonomous extensions enable modeling of varying Hamiltonian dynamics with provable universal approximation in separable cases.

HenonNet modules are specialized neural network building blocks designed to construct structure-preserving, symplectic maps for applications in Hamiltonian dynamics and plasma physics. At their core, these modules implement neural parameterizations of canonical transformations, enabling the approximation of complex dynamical flows—most notably for toroidal magnetic field-line Poincaré maps—while guaranteeing strict preservation of physical invariants such as magnetic flux and phase-space volume. This class of networks is also extensible to time-adaptive and non-autonomous Hamiltonian systems, leveraging their compositional symplectic architecture for provable universal approximation in the separable case (Burby et al., 2020, Janik et al., 24 Sep 2025).

1. Foundations: Symplectic Henon Map as Module

A HenonNet module is rooted in the construction of canonical, symplectic maps inspired by the classical Henon map. For $n$ degrees of freedom, a phase-space vector is $(q,p)\in\mathbb R^n\times\mathbb R^n$. Each module is parameterized by:

  • A scalar-valued “potential” function $V:\mathbb R^n \to \mathbb R$ realized as a feed-forward neural network.
  • A constant shift vector $\eta \in \mathbb R^n$.

The module acts as: $H[V,\eta](q,p) = (Q, P)$, with $Q = p + \eta$ and $P = -q + \nabla_p V(p)$. This map is canonically symplectic, preserving the form $\sum_i dq^i \wedge dp^i$, and can be shown to originate from a Type-II generating function, guaranteeing exact conservation of invariants such as magnetic flux in Hamiltonian flows (Burby et al., 2020).
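For one degree of freedom, the module is a two-line function, and symplecticity can be checked numerically: a $2\times 2$ symplectic Jacobian must have determinant exactly 1. The potential $V(p) = p^4/4$ and the values of $\eta$, $(q_0, p_0)$ below are arbitrary illustrations, not taken from the papers.

```python
import numpy as np

def henon_module(q, p, grad_V, eta):
    """One Henon module H[V, eta]: (q, p) -> (Q, P) = (p + eta, -q + V'(p))."""
    return p + eta, -q + grad_V(p)

grad_V = lambda p: p**3        # hypothetical potential V(p) = p^4 / 4
eta = 0.3

# Numerical Jacobian at an arbitrary point via central differences.
q0, p0, eps = 0.7, -0.2, 1e-6
J = np.zeros((2, 2))
for j, (dq, dp) in enumerate([(eps, 0.0), (0.0, eps)]):
    Qp, Pp = henon_module(q0 + dq, p0 + dp, grad_V, eta)
    Qm, Pm = henon_module(q0 - dq, p0 - dp, grad_V, eta)
    J[0, j] = (Qp - Qm) / (2 * eps)   # column j of d(Q,P)/d(q,p)
    J[1, j] = (Pp - Pm) / (2 * eps)
print(np.linalg.det(J))        # ≈ 1.0, independent of V, eta, and the point
```

Analytically the Jacobian is $\begin{pmatrix}0 & 1\\ -1 & V''(p)\end{pmatrix}$, whose determinant is 1 regardless of $V$.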

2. Neural Parameterization and Architectural Layering

Each potential $V$ in a Henon module is implemented by a single-hidden-layer neural network, $V(p) = W^{(2)}\phi\left(W^{(1)}p + b^{(1)}\right) + b^{(2)}$, with $M$ hidden units and a nonlinearity $\phi$ (commonly $\tanh$). The parameter count per module is $(n+2)M + 1$ for $n$ degrees of freedom.
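The $(n+2)M+1$ count can be confirmed directly from the weight shapes of such a network; the sizes $n=2$, $M=16$ below are arbitrary toy values.

```python
import numpy as np

n, M = 2, 16                       # degrees of freedom, hidden units
rng = np.random.default_rng(0)
W1 = rng.normal(size=(M, n))       # hidden weights: M*n parameters
b1 = rng.normal(size=M)            # hidden biases:  M
W2 = rng.normal(size=M)            # output weights: M
b2 = rng.normal()                  # output bias:    1

def V(p):
    # Single-hidden-layer potential with tanh nonlinearity.
    return W2 @ np.tanh(W1 @ p + b1) + b2

count = W1.size + b1.size + W2.size + 1
print(count, (n + 2) * M + 1)      # both 65: M*n + M + M + 1 = (n+2)M + 1
```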

A Henon layer consists of four successive applications of $H[V,\eta]$: $L[V,\eta] = H[V,\eta] \circ H[V,\eta] \circ H[V,\eta] \circ H[V,\eta]$, ensuring that $L[0,0]=\mathrm{id}$ (the identity), which stabilizes network initialization and preserves symplecticity under arbitrary parameter settings.
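The identity property at zero parameters follows because $H[0,0]$ is a quarter rotation of phase space, $(q,p)\mapsto(p,-q)$, so four applications return to the start; a few lines verify this:

```python
def H(q, p, grad_V, eta):
    # One Henon module: (q, p) -> (p + eta, -q + grad_V(p))
    return p + eta, -q + grad_V(p)

zero_V = lambda p: 0.0             # V = 0, eta = 0
q, p = 1.25, -0.5
for _ in range(4):                 # a Henon layer = four sub-maps
    q, p = H(q, p, zero_V, 0.0)
print(q, p)                        # back to (1.25, -0.5) exactly
```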

A HenonNet is then constructed by stacking $N$ such layers: $\mathcal H(q,p) = L[V_N, \eta_N] \circ \cdots \circ L[V_1, \eta_1](q,p)$, yielding a universal symplectic approximator (by Turaev’s theorem) for any canonical map on $(q, p)$ (Burby et al., 2020).
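Because each sub-map is exactly symplectic, the full stacked network is symplectic for any weight values, not only after training. A minimal numpy sketch (toy sizes, randomly initialized weights with small output scales, all chosen here for illustration) checks the Jacobian determinant of the whole composition:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n = 3, 8, 1                  # layers, hidden units, dof (toy sizes)

def make_layer():
    # Random single-hidden-layer potential V and shift eta.
    W1, b1 = rng.normal(size=(M, n)), rng.normal(size=M)
    W2, eta = rng.normal(size=M) * 0.01, rng.normal(size=n) * 0.1
    # grad of V(p) = W2 . tanh(W1 p + b1):  W1^T (W2 * sech^2(W1 p + b1))
    grad_V = lambda p: W1.T @ (W2 * (1 - np.tanh(W1 @ p + b1) ** 2))
    return grad_V, eta

layers = [make_layer() for _ in range(N)]

def henonnet(q, p):
    for grad_V, eta in layers:
        for _ in range(4):         # four sub-maps per Henon layer
            q, p = p + eta, -q + grad_V(p)
    return q, p

def F(v):                          # flatten (q, p) -> R^2 for differencing
    q, p = henonnet(np.array([v[0]]), np.array([v[1]]))
    return np.array([q[0], p[0]])

v0, eps = np.array([0.4, -0.3]), 1e-6
J = np.column_stack([(F(v0 + eps * e) - F(v0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.linalg.det(J))            # ≈ 1.0 for arbitrary (untrained) weights
```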

3. Time-Adaptive and Non-autonomous Extensions

Time-adaptive HenonNets (T-HenonNets) introduce explicit dependence on the integration step $h$: $\mathcal H_{V,\eta}(h; p, q) = \left( h\,\nabla V(p) - q,\; p + \eta \right)$, and stack these in layers, preserving symplecticity for each $h$ (Janik et al., 24 Sep 2025). The input to the network includes both $(p, q)$ and $h$.
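Symplecticity for every step size can again be checked numerically: the Jacobian of the sub-shear is $\begin{pmatrix}h\,V''(p) & -1\\ 1 & 0\end{pmatrix}$, with determinant 1 for all $h$. The potential gradient $\sin p$ and the test points below are illustrative choices, not from the papers.

```python
import numpy as np

def t_henon(h, p, q, grad_V, eta):
    # Time-adaptive sub-shear: (p, q) -> (h * grad_V(p) - q, p + eta)
    return h * grad_V(p) - q, p + eta

grad_V = np.sin                    # hypothetical potential gradient
eta, eps = 0.1, 1e-6
dets = []
for h in (0.01, 0.1, 1.0):         # symplectic for every step size h
    p0, q0 = 0.3, -0.8
    cols = []
    for dp, dq in ((eps, 0.0), (0.0, eps)):
        Pp, Qp = t_henon(h, p0 + dp, q0 + dq, grad_V, eta)
        Pm, Qm = t_henon(h, p0 - dp, q0 - dq, grad_V, eta)
        cols.append([(Pp - Pm) / (2 * eps), (Qp - Qm) / (2 * eps)])
    J = np.array(cols).T           # d(P,Q)/d(p,q)
    dets.append(np.linalg.det(J))
print(dets)                        # each ≈ 1.0
```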

Non-autonomous HenonNets (NAT-HenonNets) extend this to explicitly time-dependent potentials $V_i: \mathbb R \times \mathbb R^d \to \mathbb R$, with time advanced at each sub-shear, enabling modeling of systems with explicit time dependence. However, networks based on this principle are intrinsically limited to separable Hamiltonians; non-separable ($p,q$-coupled) Hamiltonians cannot, in general, be represented by compositions of such shears due to intrinsic Taylor-expansion constraints (Janik et al., 24 Sep 2025).

4. Training Methodology and Loss Structures

HenonNets are commonly trained via supervised regression to data generated by high-order symplectic or Runge-Kutta integrators. The MSE loss is $\mathcal L(\{W_k, \eta_k\}) = \frac{1}{N}\sum_i \|\mathcal H[\{V_k[W_k],\eta_k\}](x_i) - y_i\|^2$, where the pairs $(x_i, y_i)$ are typically sampled from phase space and advanced using a reference field-line or Hamiltonian integrator. Owing to structure preservation by construction, no additional regularization for symplecticity is required; optional $L_2$ weight decay may be used for parameter regularization (Burby et al., 2020, Janik et al., 24 Sep 2025).
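A minimal sketch of the data pipeline and loss, using the pendulum $H = p^2/2 - \cos q$ as the target system; here a finely substepped symplectic Euler scheme stands in for the higher-order reference integrators cited, and the untrained one-layer model with $\nabla V = \tanh$ is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def ref_step(q, p, h=0.1, substeps=100):
    # Reference map for the pendulum H = p^2/2 - cos(q):
    # finely substepped symplectic Euler (stand-in for a high-order scheme).
    dt = h / substeps
    for _ in range(substeps):
        p = p - dt * np.sin(q)
        q = q + dt * p
    return q, p

# Training pairs (x_i, y_i): random phase-space points and their images.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
Y = np.array([ref_step(q, p) for q, p in X])

def model(q, p, grad_V=np.tanh, eta=0.0):
    # One untrained Henon layer (four sub-maps, shared V and eta).
    for _ in range(4):
        q, p = p + eta, -q + grad_V(p)
    return q, p

pred = np.array([model(q, p) for q, p in X])
loss = np.mean(np.sum((pred - Y) ** 2, axis=1))   # the MSE loss above
print(loss)                                       # nonnegative scalar
```

In practice the weights of each $V_k$ and the shifts $\eta_k$ would then be optimized by gradient descent on this loss; no symplecticity penalty term is needed.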

5. Theoretical Guarantees: Universal Approximation and Limitations

T-HenonNets satisfy a universal approximation theorem: for any $C^{r+1}$ separable Hamiltonian flow, there exists an $m$-layer T-HenonNet approximator with error $O(1/m)$ on compact sets, provided sufficiently expressive potentials (i.e., the activation satisfies the $r$-finite property) (Janik et al., 24 Sep 2025). In the non-separable case, networks built by composing these modules cannot represent coupling terms, because the maps lack mixed $(p, q)$ derivatives other than those allowed by the separable ansatz.

6. Implementation: Pseudocode, Hyperparameters, and Empirical Results

A typical T-HenonNet forward pass, with mm layers and four sub-shears per layer:

def T_HenonNet_forward(x, h):
    p, q = x                         # unpack the phase-space point
    for i in range(m):               # m Henon layers
        for k in range(4):           # four sub-shears per layer
            g = grad_Vi(p)           # ∇V_i at current p
            p, q = h * g - q, p + eta_i
    return p, q
For non-autonomous variants, the state is augmented with $t$, which is advanced at each sub-shear.

Typical architectural parameters for $\dim = 1$ (pendulum): $m=8$, hidden size $H=16$, yielding $\sim$400 total parameters. Training batch sizes of 400–2000 and up to 20,000 epochs are used to achieve convergence on stiff systems.
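The $\sim$400 figure is reproducible from the per-module count $(n+2)M+1$, under the assumption (made here for illustration) that the four sub-shears in a layer share one potential $V$ and one shift $\eta$:

```python
m, M, n = 8, 16, 1                 # layers, hidden width, degrees of freedom
per_module = (n + 2) * M + 1       # V-network parameters: 49
per_layer = per_module + n         # plus the shift eta:   50
total = m * per_layer
print(total)                       # 400
```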

Numerical experiments confirm that both fixed-step and time-adaptive HenonNets achieve phase error $\sim 10^{-3}$ and negligible energy drift on the mathematical pendulum after 100 steps; however, failure manifests rapidly for non-separable test problems, directly illustrating the architectural expressivity limitations (Janik et al., 24 Sep 2025).

7. Significance and Practical Applications

HenonNet modules provide exactly symplectic, structure-preserving neural maps for learning or emulating complex dynamical systems, notably field-line Poincaré maps in toroidal magnetic configurations (Burby et al., 2020). Their use of compositional, trainable canonical maps offers computational speedup—evaluating tens of times faster than classical integrators—and ensures preservation of critical invariants. In plasma physics, this enables fast, data-driven modeling of field topology and confinement properties, including mimicking sticky chaotic regions near magnetic islands via neural invariant manifolds. A plausible implication is that such models could offer new approaches to magnetic confinement design that extend beyond traditional KAM torus-based methods.

The modular framework established by HenonNet layers—potential network, canonical shift, intrinsic flux preservation, and learnable composition—forms a blueprint for symplectic neural integration schemes applicable wherever physics-informed, structure-preserving dynamics are required (Burby et al., 2020, Janik et al., 24 Sep 2025).
