Convergence Guarantees for the Two-Timescale RSGN Learning System

Establish rigorous convergence guarantees for the full two-timescale learning system of Resonant Sparse Geometry Networks (RSGN), which combines gradient descent with Hebbian structural learning, and determine the conditions under which the combined training dynamics converge.

Background

Resonant Sparse Geometry Networks (RSGN) introduce a two-timescale learning paradigm in which fast learning uses gradient descent to optimize differentiable components (including the ignition embedding function, transformation matrices, and output projections), while slow learning adjusts network structure through Hebbian-inspired updates to connection affinities, threshold adaptation, and pruning and sprouting mechanisms. This coupling of continuous optimization with local structural plasticity enables input-dependent sparse routing in hyperbolic space.
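To make the structure of the coupled dynamics concrete, the following is a minimal sketch of such a two-timescale loop. It is an illustrative toy, not the paper's implementation: the variable names (`affinity`, `alpha_fast`, `beta_slow`, `prune_thresh`, `sprout_prob`), the regression target, and the co-activity proxy used for the Hebbian update are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (all values are assumptions).
d_in, d_hidden, d_out = 8, 16, 4
n_steps, slow_every = 2000, 10          # slow updates fire less often (slower timescale)
alpha_fast, beta_slow = 1e-2, 1e-3      # fast (gradient) vs. slow (structural) step sizes
prune_thresh, sprout_prob = 0.05, 0.01  # structural-plasticity hyperparameters

# Differentiable ("fast") parameters: a toy two-layer map standing in for the
# ignition embedding / transformation / output-projection stack.
W1 = rng.normal(scale=0.3, size=(d_hidden, d_in))
W2 = rng.normal(scale=0.3, size=(d_out, d_hidden))

# Structural ("slow") state: connection affinities gate which hidden units fire.
affinity = np.abs(rng.normal(scale=0.5, size=d_hidden))
threshold = 0.1

def forward(x):
    """Gated forward pass: hidden units with affinity below threshold are silenced."""
    gate = (affinity > threshold).astype(float)
    h = np.maximum(0.0, W1 @ x) * gate
    return h, W2 @ h

for step in range(n_steps):
    # Toy regression target so the loop is self-contained.
    x = rng.normal(size=d_in)
    y = np.tanh(x[:d_out])

    # ---- Fast timescale: gradient descent on the differentiable parameters.
    h, y_hat = forward(x)
    err = y_hat - y                      # dL/dy_hat for squared error
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err
    grad_pre = grad_h * (h > 0)          # ReLU/gate mask (pruned units get zero gradient)
    grad_W1 = np.outer(grad_pre, x)
    W1 -= alpha_fast * grad_W1
    W2 -= alpha_fast * grad_W2

    # ---- Slow timescale: Hebbian affinity update, threshold adaptation,
    # ---- pruning, and sprouting, applied less often and with a smaller step size.
    if step % slow_every == 0:
        hebb = h * np.abs(grad_h)        # post-activity times an error-driven signal
                                         # (illustrative co-activity proxy, an assumption)
        affinity += beta_slow * (hebb - affinity)                # leaky Hebbian trace
        threshold += beta_slow * (affinity.mean() - threshold)   # homeostatic threshold
        affinity[affinity < prune_thresh] = 0.0                  # prune weak connections
        sprout = (affinity == 0) & (rng.random(d_hidden) < sprout_prob)
        affinity[sprout] = prune_thresh * 2.0                    # sprout new candidates
```

The open question is precisely whether a loop of this shape, with RSGN's actual updates in place of these toy ones, can be shown to converge: the pruning and sprouting steps are discrete and non-smooth, so standard single-timescale gradient arguments do not apply directly.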

The paper provides partial theoretical results such as bounded gradient properties for soft-thresholded activations and stability conditions for Hebbian affinity updates, as well as a computational complexity analysis. However, a complete proof that the combined gradient-based and Hebbian structural updates lead to convergence of the overall system is explicitly identified as unresolved, motivating a formal analysis of convergence for the joint dynamics.
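One standard framework for this kind of analysis, offered here only as a possible route and not as a result from the paper, is two-timescale stochastic approximation in the style of Borkar, where the fast (gradient) and slow (Hebbian/structural) step sizes $\alpha_t$ and $\beta_t$ are typically assumed to satisfy:

```latex
% Standard two-timescale step-size conditions (Borkar-style stochastic approximation);
% one possible proof framework, not a condition stated in the RSGN paper.
% \alpha_t: fast (gradient) step size, \beta_t: slow (structural) step size.
\begin{align}
  \sum_{t} \alpha_t = \infty, &\qquad \sum_{t} \alpha_t^2 < \infty, \\
  \sum_{t} \beta_t = \infty,  &\qquad \sum_{t} \beta_t^2 < \infty, \\
  \frac{\beta_t}{\alpha_t} &\longrightarrow 0 \quad \text{as } t \to \infty .
\end{align}
```

Under such step-size conditions, together with standard Lipschitz, noise, and boundedness assumptions, the slow structural variables effectively see the fast gradient variables at their equilibrium, which is the usual way coupled dynamics of this kind are analyzed; whether RSGN's non-smooth pruning and sprouting operations can be accommodated in such a framework is part of the open problem.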

References

Theoretical Understanding: While we provide complexity analysis and stability conditions, complete convergence guarantees for the full system combining gradient descent with Hebbian structural learning remain an open theoretical question.

Resonant Sparse Geometry Networks (2601.18064 - Hays, 26 Jan 2026) in Section 6 (Discussion), Limitations and Future Work