- The paper introduces CogniSNN, which employs random graph architectures to achieve enhanced depth-scalability and dynamic path-plasticity in spiking neural networks.
- It utilizes a modified ResNode design and a critical path-based learning algorithm to maintain spike-based computations and mitigate network degradation in deep pathways.
- Experimental results on neuromorphic datasets demonstrate competitive accuracy with reduced latency and energy consumption compared to traditional architectures.
"CogniSNN: An Exploration to Random Graph Architecture based Spiking Neural Networks with Enhanced Depth-Scalability and Path-Plasticity"
Abstract
The paper introduces CogniSNN, a novel paradigm for Spiking Neural Networks (SNNs) utilizing Random Graph Architectures (RGA). This approach deviates from traditional hierarchical frameworks, aiming to emulate the complex and dynamic connectivity patterns found in biological neural systems. This emulation allows for enhanced depth-scalability and path-plasticity—terms reflecting the network’s ability to extend pathway depth dynamically and adjust pathways for lifelong learning. CogniSNN features a modified residual node (ResNode) design to counteract network degradation in deep pathways and a critical path-based algorithm for path reusability in adapting to new tasks. Experimental results demonstrate CogniSNN's comparable or superior performance against state-of-the-art models on several neuromorphic datasets.
Introduction
Spiking Neural Networks (SNNs) attempt to mirror certain characteristics of biological neurons to provide advantages in interpretability and energy efficiency over traditional Artificial Neural Networks (ANNs). However, mainstream SNN models often still rely on static, hierarchical architectures reminiscent of ANNs, limiting their adaptability to dynamic environments. This paper proposes a model that aims to bridge this gap: CogniSNN, an SNN architecture grounded in Random Graph Architecture (RGA) inspired by the human brain's intricate web of randomized connections.
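The spike-based computation the paper builds on is typically modeled with a spiking neuron such as the leaky integrate-and-fire (LIF) unit. A minimal sketch of discrete-time LIF dynamics follows; the neuron model, time constant, and threshold here are illustrative assumptions, not details taken from the paper:

```python
def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete time step of an illustrative LIF neuron:
    leaky integration of input x, a binary spike if the membrane
    potential crosses threshold, and a hard reset after firing."""
    v = v + (x - v) / tau          # leak toward input (membrane update)
    spike = 1 if v >= v_th else 0  # binary spike output, never real-valued
    if spike:
        v = v_reset                # reset membrane potential after firing
    return v, spike

def run_lif(inputs):
    """Drive one neuron with a sequence of input currents; collect spikes."""
    v, spikes = 0.0, []
    for x in inputs:
        v, s = lif_step(v, x)
        spikes.append(s)
    return spikes
```

Strong inputs fire immediately (`run_lif([2.0, 0.0, 2.0])` yields `[1, 0, 1]`), while weak inputs leak away without spiking; the binary output is what makes SNN inference cheap on neuromorphic hardware.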
Existing SNNs do not effectively leverage the depth-scalability and path-plasticity inherent in biological intelligence—the former pertaining to dynamic extensions in network pathways, and the latter allowing for selective, adaptive path restructuring across different tasks. The biological neural structure, analogous to an RGA, is inherently capable of addressing complex cognitive tasks through dynamic activation of pathways.
Model Architecture
CogniSNN Architecture
CogniSNN utilizes RGA, structured using Erdős–Rényi and Watts–Strogatz graph frameworks, to form the backbone of the network. This approach allows model pathways to dynamically extend and selectively adapt—key components for depth-scalability and path-plasticity. At the core of CogniSNN is the ResNode, an adaptation of residual learning configured to maintain spike-based computations without degenerating into real-valued operations, thus preserving energy-efficient and biologically plausible computation.
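The two graph families named above can be sampled in a few lines of standard-library Python. This is a sketch of the generators themselves, not of how CogniSNN wires neurons onto them; node counts and probabilities are illustrative:

```python
import random

def erdos_renyi(n, p, seed=0):
    """Erdős–Rényi G(n, p): each of the n*(n-1)/2 possible undirected
    edges is included independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def watts_strogatz(n, k, beta, seed=0):
    """Watts–Strogatz small-world graph: start from a ring lattice where
    each node links to its k nearest neighbors, then rewire each edge
    to a random endpoint with probability beta."""
    rng = random.Random(seed)
    lattice = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            lattice.add((i, (i + d) % n))   # k/2 neighbors on each side
    rewired = set()
    for (u, v) in lattice:
        if rng.random() < beta:
            w = rng.randrange(n)
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)        # avoid self-loops and duplicates
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return sorted(rewired)
```

With `beta = 0` the Watts–Strogatz graph is a pure ring lattice (`n * k / 2` edges); raising `beta` trades regularity for the short path lengths that make small-world backbones attractive for deep, sparse networks.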
Depth-Scalability and Path-Plasticity
- Depth-Scalability: Achieved through OR operations (in place of ADD) and tailored pooling mechanisms within ResNodes. These components let pathways extend in depth while mitigating the network degradation and non-spike transmission issues often seen in deeper architectures.
- Path-Plasticity: Implemented via a critical-path-based learning-without-forgetting (LwF) strategy. This approach uses betweenness centrality from graph theory to prioritize essential pathways, allowing selective, task-relevant parameter retraining that handles both similar and distinct tasks without catastrophic forgetting.
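The two mechanisms above can be sketched concretely: an OR merge keeps residual outputs binary where ADD would not, and betweenness centrality scores which nodes sit on the most shortest paths. The function names and the brute-force centrality below are our own illustration under those assumptions, not the paper's implementation:

```python
from collections import deque

def or_residual(spikes_in, spikes_skip):
    """Spike-preserving residual merge: elementwise OR stays binary,
    whereas ADD on two spike trains can produce non-spike values like 2."""
    return [a | b for a, b in zip(spikes_in, spikes_skip)]

def shortest_paths(adj, s, t):
    """All shortest s -> t paths in a directed graph, by breadth-first
    search (paths are dequeued in nondecreasing length)."""
    queue, found, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break                       # longer than known shortest: stop
        if path[-1] == t:
            best = len(path)
            found.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return found

def betweenness(adj):
    """Brute-force betweenness centrality: for each ordered pair (s, t),
    add to every intermediate node its fraction of the shortest s -> t
    paths. Fine for the small illustrative graphs used here."""
    score = {v: 0.0 for v in adj}
    for s in adj:
        for t in adj:
            if s == t:
                continue
            paths = shortest_paths(adj, s, t)
            for p in paths:
                for v in p[1:-1]:
                    score[v] += 1.0 / len(paths)
    return score
```

On a directed diamond `{0: [1, 2], 1: [3], 2: [3], 3: []}`, nodes 1 and 2 each carry half the shortest 0-to-3 paths and score 0.5; a critical-path strategy would prioritize such high-centrality nodes when selecting which parameters to retrain.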
Experimental Results
CogniSNN's performance was evaluated across multiple datasets—DVS-Gesture, CIFAR10-DVS, N-Caltech101, and Tiny-ImageNet. It consistently achieved accuracy comparable to or surpassing existing models while exhibiting reduced computational overhead. The empirical results affirm the effectiveness of RGA-based SNNs in reducing latency while maintaining parameter efficiency.
- Performance on Neuromorphic Data: CogniSNN substantially outperforms traditional chain-like architectures and offers competitive latency reduction.
- Energy Efficiency: The OR-based merge in ResNodes consumed less energy than conventional ADD operations, a benefit that grows for tasks requiring extended network activations.
Conclusion
CogniSNN represents a significant stride towards aligning spiking neural networks with the complex, adaptable structures of biological systems. By integrating depth-scalability and path-plasticity, this model offers a promising platform for advancing brain-like processing in computational neuroscience. Future research may explore integrated architectures for enhanced continual learning and optimized path-selection algorithms to further harness the power of RGAs in neural computing. These results open a path toward combining computational neuroscience with intelligent system design, fostering more autonomous and adaptive AI agents.