- The paper introduces a novel framework that integrates structural plasticity with GPU acceleration to dynamically update sparse neural connectivity.
- It extends the GeNN simulator with sparse data structures and custom connectivity updates, enabling efficient supervised learning with DEEP R and topographic map formation.
- The framework achieves up to 10× faster training and 90× fewer parameters for classifiers, while refining receptive fields faster than real time, advancing neuromorphic research.
Flexible GPU-Accelerated Framework for Structural Plasticity in Sparse Spiking Neural Networks
Introduction
The paper "A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks" (2510.19764) addresses the challenge of simulating structural plasticity in Spiking Neural Networks (SNNs) on GPU hardware. Structural plasticity, the dynamic formation and elimination of synapses, is crucial not only for learning but also for reorganizing neural circuits after damage and for managing neural resources. Despite its importance, structural plasticity remains underexplored, largely because conventional dense frameworks optimized for backpropagation-based training incur prohibitive computational overhead when simulating dynamic connectivity changes in large models.
Methods and Implementation
The paper introduces a novel framework integrated with the GeNN simulator, capable of efficiently handling structural plasticity on GPUs, which are inherently suited to parallel computation. GeNN, originally developed for computational neuroscience, has been extended with sparse data structures and custom connectivity updates, which are integral to simulating structural plasticity. The framework lets users define connectivity updates that move minimal data, avoid memory reallocation, and retain efficient indexing.
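To make the "no reallocation, efficient indexing" idea concrete, here is a minimal NumPy sketch of a padded row-wise ("ragged") sparse connectivity structure in which removing a synapse swaps the last valid entry of the row into the freed slot. This is an illustration of the general technique, not GeNN's actual data layout or API; the class and method names are hypothetical.

```python
import numpy as np

class RaggedConnectivity:
    """Padded row-wise sparse matrix: each presynaptic neuron owns a
    fixed-width row of postsynaptic indices plus a row-length counter.
    Removal swaps the row's last valid entry into the freed slot, so
    synapses can come and go without any memory reallocation."""

    def __init__(self, num_pre, max_row_length):
        self.row_length = np.zeros(num_pre, dtype=np.int32)
        self.ind = np.full((num_pre, max_row_length), -1, dtype=np.int32)
        self.weight = np.zeros((num_pre, max_row_length), dtype=np.float32)

    def add_synapse(self, pre, post, w=0.0):
        slot = self.row_length[pre]
        assert slot < self.ind.shape[1], "row is full"
        self.ind[pre, slot] = post
        self.weight[pre, slot] = w
        self.row_length[pre] += 1

    def remove_synapse(self, pre, slot):
        # Swap-with-last keeps each row's valid entries contiguous.
        last = self.row_length[pre] - 1
        self.ind[pre, slot] = self.ind[pre, last]
        self.weight[pre, slot] = self.weight[pre, last]
        self.ind[pre, last] = -1
        self.row_length[pre] -= 1
```

Because valid entries stay packed at the front of each row, per-row iteration remains a simple bounded loop, which maps naturally onto one GPU thread (or warp) per presynaptic row.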
Two main applications of the framework are presented:
- Supervised Learning with DEEP R: The DEEP R algorithm is employed to dynamically rewire connections in sparse networks, maintaining a constant level of connectivity throughout training. This allows efficient training of sparse SNN classifiers on datasets such as N-MNIST and DVS Gesture, achieving accuracy comparable to dense models at significantly reduced computational cost.
- Topographic Map Formation: A model simulates the process of topographic map formation using structural plasticity rules that incorporate both activity-dependent elimination and distance-dependent formation. The interplay between synaptic and structural plasticity leads to refined receptive fields in a spatially embedded network.
Results
The proposed framework demonstrates substantial performance gains in simulating both the classifier training and the topographic map formation tasks:
- Classifier Training: Sparse classifiers trained using DEEP R achieved performance on par with dense models, with up to 10× reduction in training time and 90× fewer parameters, showcasing the efficacy of structural plasticity in optimizing network architectures.
- Topographic Map Formation: The model refines receptive fields faster than real time, and the paper analyses the evolution of connectivity and simulation speed across network sizes. Although the structural plasticity updates constitute the largest portion of simulation runtime, they are handled efficiently within the proposed framework.
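To illustrate how activity-dependent elimination and distance-dependent formation can interact in a topographic-map model, here is a generic NumPy sketch: weak synapses (whose plastic weight has decayed below a threshold) are removed, and new synapses form with a probability that falls off with the distance between pre- and postsynaptic cells. The specific functional forms and parameters (`w_min`, `sigma`, the Gaussian falloff) are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def structural_update(weights, mask, pre_pos, post_pos,
                      w_min=0.05, sigma=0.5, p_base=0.1, rng=None):
    """Illustrative combined rule on a dense weight matrix + boolean mask.

    1. Activity-dependent elimination: prune synapses whose (plastic)
       weight has decayed below w_min.
    2. Distance-dependent formation: form absent synapses with
       probability p_base * exp(-d^2 / 2 sigma^2), where d is the
       distance between pre and post cell positions.
    """
    rng = rng or np.random.default_rng()
    eliminate = mask & (weights < w_min)
    mask = mask & ~eliminate
    # (num_pre, num_post) matrix of squared pre/post distances
    d2 = np.sum((pre_pos[:, None, :] - post_pos[None, :, :]) ** 2, axis=-1)
    p_form = p_base * np.exp(-d2 / (2.0 * sigma ** 2))
    form = (~mask) & (rng.random(mask.shape) < p_form)
    mask = mask | form
    weights[form] = w_min  # newly formed synapses start weak
    return weights, mask
```

Because formation is biased toward nearby cells while elimination removes synapses that synaptic plasticity fails to strengthen, repeated application of such a rule tends to sharpen receptive fields, which is the interplay the paper's spatially embedded model exploits.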
Discussion
This study highlights the feasibility of integrating structural plasticity rules in SNN simulators, addressing both computational neuroscience and machine learning contexts. The flexibility of the framework allows researchers to explore various rules and mechanisms of structural plasticity, potentially leading to advancements in biologically plausible learning algorithms and neuromorphic computing applications.
The framework's design, while tailored to GeNN, holds potential for adaptation in other simulation software and neuromorphic hardware, expanding the possibilities for studying and implementing structural plasticity at scale. Future work may include exploring gradient-guided formation of connections, neurogenesis, and extending the framework to other neuromorphic systems and hardware.
Conclusion
The framework presented in this paper provides a valuable tool for exploring structural plasticity in sparse SNNs, leveraging the computational power of GPUs to overcome the challenges of simulating dynamic connectivity. By enabling efficient simulation and flexible specification of plasticity rules, it promises to advance both the theoretical understanding and the practical application of neuro-inspired architectures.