
A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks

Published 22 Oct 2025 in cs.NE and q-bio.NC | (2510.19764v2)

Abstract: The majority of research in both training Artificial Neural Networks (ANNs) and modeling learning in biological brains focuses on synaptic plasticity, where learning equates to changing the strength of existing connections. However, in biological brains, structural plasticity - where new connections are created and others removed - is also vital, not only for effective learning but also for recovery from damage and optimal resource usage. Inspired by structural plasticity, pruning is often used in machine learning to remove weak connections from trained models to reduce the computational requirements of inference. However, the machine learning frameworks typically used for backpropagation-based training of both ANNs and Spiking Neural Networks (SNNs) are optimized for dense connectivity, meaning that pruning does not help reduce the training costs of ever-larger models. The GeNN simulator already supports efficient GPU-accelerated simulation of sparse SNNs for computational neuroscience and machine learning. Here, we present a new flexible framework for implementing GPU-accelerated structural plasticity rules and demonstrate this first using the e-prop supervised learning rule and DEEP R to train efficient, sparse SNN classifiers and then, in an unsupervised learning context, to learn topographic maps. Compared to baseline dense models, our sparse classifiers reduce training time by up to 10x while the DEEP R rewiring enables them to perform as well as the original models. We demonstrate topographic map formation in faster-than-realtime simulations, provide insights into the connectivity evolution, and measure simulation speed versus network size. The proposed framework will enable further research into achieving and maintaining sparsity in network structure and neural communication, as well as exploring the computational benefits of sparsity in a range of neuromorphic applications.

Summary

  • The paper introduces a novel framework that integrates structural plasticity with GPU acceleration to dynamically update sparse neural connectivity.
  • It extends the GeNN simulator with sparse data structures and custom connectivity updates, enabling efficient supervised learning with DEEP R and topographic map formation.
  • The framework achieves up to 10× faster training and 90× fewer parameters for classifiers, while refining receptive fields in faster-than-real-time simulations, advancing neuromorphic research.

Flexible GPU-Accelerated Framework for Structural Plasticity in Sparse Spiking Neural Networks

Introduction

The paper "A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks" (2510.19764) addresses the challenge of simulating structural plasticity in Spiking Neural Networks (SNNs) on GPU hardware. Structural plasticity, the dynamic formation and elimination of synapses, is crucial not only for learning but also for recovery from damage and for efficient use of neural resources. Despite its importance, structural plasticity remains underexplored because conventional dense frameworks, optimized for backpropagation-based training, incur substantial computational overhead when simulating dynamic connectivity changes in large models.

Methods and Implementation

The paper introduces a novel framework integrated with the GeNN simulator, capable of efficiently handling structural plasticity on GPUs, which are inherently suitable for parallel computations. GeNN, originally developed for computational neuroscience, has been extended to handle sparse data structures and custom connectivity updates, which are integral to simulating structural plasticity. The framework enables defining connectivity updates with minimal data movement, avoiding reallocation of memory, and retaining efficient indexing.
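To make the "minimal data movement, no reallocation" idea concrete, the following is a minimal NumPy sketch of a ragged sparse-connectivity structure of the kind GeNN uses: each presynaptic row stores its postsynaptic targets in a fixed-capacity slab with a per-row length, so removing a synapse is an O(1) swap with the row's last entry and adding one is an O(1) append. This is an illustrative data structure under assumed names, not GeNN's actual implementation.

```python
import numpy as np

class RaggedConnectivity:
    """Illustrative ragged sparse format: a fixed-capacity (num_pre x
    max_row_length) index matrix plus a per-row length. Synapse removal
    and addition never reallocate memory (sketch, not GeNN code)."""

    def __init__(self, num_pre, max_row_length):
        self.ind = np.zeros((num_pre, max_row_length), dtype=np.int32)
        self.row_length = np.zeros(num_pre, dtype=np.int32)

    def add_synapse(self, pre, post):
        # append postsynaptic target to the end of the presynaptic row
        r = self.row_length[pre]
        assert r < self.ind.shape[1], "row is at capacity"
        self.ind[pre, r] = post
        self.row_length[pre] = r + 1

    def remove_synapse(self, pre, slot):
        # overwrite the removed entry with the row's last entry, in place
        last = self.row_length[pre] - 1
        self.ind[pre, slot] = self.ind[pre, last]
        self.row_length[pre] = last
```

Because a removal only touches two entries of one row, many such updates can proceed in parallel across presynaptic rows, which is what makes the scheme a good fit for GPU execution.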

Two main applications of the framework are presented:

  1. Supervised Learning with DEEP R: The DEEP R algorithm is employed to dynamically rewire connections in sparse networks, maintaining a constant level of connectivity throughout training. This allows for efficient training of sparse SNN classifiers on datasets such as N-MNIST and DVS gesture, achieving comparable accuracy to dense models but with significantly reduced computational cost.
  2. Topographic Map Formation: A model simulates the process of topographic map formation using structural plasticity rules that incorporate both activity-dependent elimination and distance-dependent formation. The interplay between synaptic and structural plasticity leads to refined receptive fields in a spatially embedded network.
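The core invariant of DEEP R, pairing every pruned connection with a randomly regrown one so the connectivity level stays constant, can be sketched as follows. This is a simplified illustration on a flat parameter vector; the variable names, learning-rate, and regularization constants are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def deep_r_step(theta, active, grad, lr=1e-3, l1=1e-4, temp=1e-5):
    """One illustrative DEEP R update. `theta` holds non-negative synapse
    parameters, `active` marks which candidate connections exist. Every
    pruning is paired with a random regrowth, so active.sum() is conserved
    (sketch under assumed constants, not the paper's exact rule)."""
    theta = theta.copy()
    active = active.copy()
    # gradient step with L1 shrinkage and exploratory noise on active synapses
    theta[active] -= lr * (grad[active] + l1)
    theta[active] += np.sqrt(2.0 * lr * temp) * rng.standard_normal(active.sum())
    # prune: connections whose parameter crossed zero become dormant
    crossed = active & (theta <= 0.0)
    active &= ~crossed
    # regrow: reactivate the same number of randomly chosen dormant connections
    n = int(crossed.sum())
    if n > 0:
        dormant = np.flatnonzero(~active)
        revive = rng.choice(dormant, size=n, replace=False)
        active[revive] = True
        theta[revive] = lr  # nascent connections start with a tiny weight
    return theta, active
```

Holding the number of active synapses fixed is what lets the sparse classifier keep a constant memory footprint and per-step cost throughout training.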

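The interplay the topographic-map model relies on, activity-dependent elimination of weak synapses combined with distance-dependent formation of new ones, can be illustrated with a small NumPy sketch over 1-D neuron positions. The threshold, Gaussian width, and formation count below are hypothetical parameters, and the rule is a simplification of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def structural_step(w, pre_pos, post_pos, w_min=0.05, sigma=2.0, n_new=4):
    """One illustrative rewiring step for topographic map formation:
    synapses depressed below `w_min` are eliminated, and `n_new` synapses
    form at empty sites with Gaussian distance-dependent probability.
    All names and constants are assumptions, not the paper's exact rule."""
    w = w.copy()
    w[w < w_min] = 0.0                       # activity-dependent elimination
    d = np.abs(pre_pos[:, None] - post_pos[None, :])
    p = np.exp(-d**2 / (2.0 * sigma**2))     # distance-dependent formation profile
    p = np.where(w == 0.0, p, 0.0)           # only empty sites can form synapses
    if p.sum() > 0.0:
        flat = rng.choice(p.size, size=n_new, replace=False,
                          p=(p / p.sum()).ravel())
        w.ravel()[flat] = w_min              # nascent synapses start at threshold
    return w
```

Because formation is biased toward nearby pre/post pairs while elimination removes synapses that synaptic plasticity has depressed, repeated application of such a rule progressively sharpens receptive fields into a topographic map.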
Results

The proposed framework demonstrates substantial performance gains in simulating both the classifier training and the topographic map formation tasks:

  • Classifier Training: Sparse classifiers trained using DEEP R achieved performance on par with dense models, with up to a 10× reduction in training time and 90× fewer parameters, showcasing the efficacy of structural plasticity in optimizing network architectures.
  • Topographic Map Formation: The model shows successful refinement of receptive fields faster than real-time, with insights into the evolution of connectivity and simulation speed across various network sizes. The structural plasticity updates, constituting the largest portion of simulation runtime, are efficiently managed within the proposed framework.

Discussion

This study highlights the feasibility of integrating structural plasticity rules in SNN simulators, addressing both computational neuroscience and machine learning contexts. The flexibility of the framework allows researchers to explore various rules and mechanisms of structural plasticity, potentially leading to advancements in biologically plausible learning algorithms and neuromorphic computing applications.

The framework's design, while tailored to GeNN, holds potential for adaptation in other simulation software and neuromorphic hardware, expanding the possibilities for studying and implementing structural plasticity at scale. Future work may include exploring gradient-guided formation of connections, neurogenesis, and extending the framework to other neuromorphic systems and hardware.

Conclusion

The framework presented in this paper provides a valuable tool for exploring structural plasticity in sparse SNNs, leveraging the computational power of GPUs to overcome the challenges associated with simulating dynamic connectivity. By facilitating efficient simulations and allowing flexible adaptation of plasticity rules, the framework promises to advance research in both the theoretical understanding and practical applications of neuro-inspired architectures.
