- The paper reviews multiple spiking neural network models, including LIF, SRM, and Izhikevich, demonstrating their potential in image classification tasks.
- It details SNN coding schemes and learning rules, notably rate, temporal, and phase coding, along with STDP, R-STDP, and adapted backpropagation.
- It presents strategies for converting trained DNNs to SNNs and compares the reviewed models, which reach accuracies of up to 99.42% with improved energy efficiency in brain-like simulations.
Models Developed for Spiking Neural Networks
Spiking Neural Networks (SNNs) represent an advancement in artificial neural networks, offering biologically plausible modeling of the brain's dynamics. The paper "Models Developed for Spiking Neural Networks" reviews the evolution and performance of SNNs, focusing particularly on their application in image classification tasks. This stands in contrast to Deep Neural Networks (DNNs), which, despite their success, lack the biological plausibility of SNNs. The paper compares various SNN structures, emphasizing their potential to address complex machine learning tasks with increased energy efficiency.
Introduction to Spiking Neural Networks
SNNs have emerged as the third generation of artificial neural networks, surpassing previous models in biological plausibility. Unlike DNNs, SNNs leverage spatio-temporal information for processing, using spikes to transmit binary signals akin to those found in the brain. This structure facilitates high sparsity rates, contributing to their energy efficiency. The paper outlines common challenges faced by DNNs, such as high data and energy consumption, and contrasts these with the capabilities of the human brain, which can learn efficiently from sparse data.
Building Blocks of SNNs
Models of Biological Neurons
The paper discusses three predominant neuron models that form the core of SNNs, each offering varying degrees of complexity and computational demand:
- Leaky Integrate-and-Fire (LIF) Model: Simplifies spike firing by using an electrical circuit analogy, focusing on the potential threshold for neuron firing.
- Spike-response Model (SRM): Extends LIF's capabilities with temporal functions, providing flexibility in simulating neuron dynamics with both fixed and variable thresholds.
- Izhikevich Model: Balances biological plausibility with computational efficiency to replicate spiking patterns observed in cortical neurons.
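As an illustration of the simplest of these, a LIF neuron can be simulated in a few lines. The constants below (membrane time constant, threshold, reset value) are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate a single LIF neuron with Euler integration.

    Parameter values are illustrative. Returns the membrane-potential
    trace and the spike times (as step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest
        # while integrating the input current.
        v += (dt / tau_m) * (-(v - v_rest) + i_in)
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset               # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold current produces regular spiking.
trace, spikes = simulate_lif(np.full(200, 1.5))
```

The SRM generalizes this by expressing the potential as a sum of response kernels to past spikes, and the Izhikevich model adds a second recovery variable to reproduce richer cortical firing patterns.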
Neural Coding Schemes
SNNs convert analog input signals into spike trains via different coding mechanisms:
- Rate Coding: Encodes information through the firing rate, closely linked to input signal intensity.
- Temporal Coding: Uses spike timing to convey information rapidly and sparsely, offering efficiency in processing.
- Phase Coding: Captures information through neuron firing patterns correlated with background oscillations.
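The first two schemes can be sketched as follows; the step count, the per-step firing probability, and the latency mapping are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensities, n_steps=100, max_rate=0.5):
    """Poisson-style rate coding: each input value in [0, 1] sets the
    per-step firing probability of its neuron.
    Returns a (n_steps, n_neurons) binary spike train."""
    p = np.clip(intensities, 0.0, 1.0) * max_rate
    return (rng.random((n_steps, len(intensities))) < p).astype(np.uint8)

def latency_encode(intensities, n_steps=100):
    """Temporal (latency) coding: stronger inputs fire earlier, and each
    neuron fires at most once -- a sparse, fast code."""
    x = np.clip(intensities, 0.0, 1.0)
    times = np.round((1.0 - x) * (n_steps - 1)).astype(int)
    train = np.zeros((n_steps, len(intensities)), dtype=np.uint8)
    train[times, np.arange(len(intensities))] = 1
    return train

pixels = np.array([0.1, 0.5, 0.9])
rate_train = rate_encode(pixels)    # brighter pixels spike more often
lat_train = latency_encode(pixels)  # one spike each, brightest first
```

Note the trade-off visible here: rate coding needs many time steps to estimate a firing rate, while latency coding conveys each value with a single, precisely timed spike.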
Learning Rules
SNNs employ biologically inspired learning rules:
- Spike-timing-dependent plasticity (STDP): Adjusts synaptic weights based on the relative firing times of pre- and post-synaptic neurons.
- Reward-modulated STDP (R-STDP): Enhances STDP by incorporating reinforcement learning principles to optimize synaptic changes based on feedback.
- Backpropagation Adaptations: Despite challenges in differentiating spike trains, adaptations allow backpropagation use in SNNs, enabling supervised learning.
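A minimal sketch of the pair-based STDP rule follows; the learning rates and time constants are illustrative, and R-STDP would additionally scale each update by a reward signal:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate the synapse if the pre-synaptic spike
    precedes the post-synaptic one, depress it otherwise. The magnitude
    decays exponentially with the timing gap. Constants are illustrative."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> long-term potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:         # post before (or with) pre -> long-term depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pair: weight grows
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # anti-causal: weight shrinks
```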
ANN-to-SNN Conversion
The paper details strategies for converting trained DNNs to SNNs, optimizing for minimal accuracy loss during the conversion, particularly through weight normalization techniques and layer adjustments.
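One such technique, data-based weight normalization, can be sketched as follows. The function name and the assumption that inputs are already normalized to [0, 1] are mine, not the paper's; the idea is to rescale each layer so spiking activity stays within the range the neurons can represent:

```python
import numpy as np

def normalize_weights(weights, activations):
    """Data-based weight normalization (sketch). `weights[l]` is layer l's
    weight matrix and `activations[l]` the activations that layer produced
    on a calibration set. Each layer is rescaled by the ratio of the
    previous layer's maximum activation to its own, so no unit is driven
    beyond the firing rate it can express as spikes."""
    scaled = []
    prev_max = 1.0  # inputs assumed normalized to [0, 1]
    for w, act in zip(weights, activations):
        layer_max = float(np.max(act))
        scaled.append(w * prev_max / layer_max)
        prev_max = layer_max
    return scaled

# Toy two-layer example: observed layer maxima are 4.0 and 2.0.
w1 = np.array([[2.0, 0.0], [0.0, 4.0]])
w2 = np.array([[1.0], [3.0]])
scaled = normalize_weights([w1, w2],
                           [np.array([2.0, 4.0]), np.array([0.5, 2.0])])
```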
Developed Models for SNNs
Several models reviewed demonstrate differing approaches and outcomes in terms of accuracy on tasks such as MNIST image classification:
STDP Networks
The Kheradpisheh et al. model utilizes STDP with two convolutional layers, achieving a notable accuracy of 98.40%, while the Shirsavar et al. model enhances this setup to reach 99.42% accuracy by optimizing runtime and training processes.
R-STDP Networks
Mozafari et al. integrate R-STDP to improve decision-making capabilities by rewarding correct neuronal responses, reaching a test accuracy of 97.20%.
Backpropagation Networks
Lee et al. employ a modified backpropagation approach using leaky integrate-and-fire neurons and lateral inhibition, achieving high accuracy by treating membrane potentials as continuous, differentiable signals.
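Beyond Lee et al.'s specific formulation, a widely used way to make the non-differentiable spike function usable with backpropagation is a surrogate gradient. The sketch below pairs a hard threshold in the forward pass with a fast-sigmoid derivative in the backward pass; the surrogate shape and its `beta` parameter are illustrative choices, not the paper's method:

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: hard threshold (a Heaviside step, whose true
    derivative is zero almost everywhere and undefined at v_th)."""
    return (v >= v_th).astype(float) if isinstance(v, np.ndarray) \
        else float(v >= v_th)

def surrogate_grad(v, v_th=1.0, beta=5.0):
    """Backward pass: replace the step's derivative with that of a fast
    sigmoid centred on the threshold, so gradients flow for membrane
    potentials near v_th (illustrative surrogate and beta)."""
    return beta / (2.0 * (1.0 + beta * np.abs(v - v_th)) ** 2)
```

During training, the surrogate is used only in the backward pass, so inference still emits genuine binary spikes.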
ANN-to-SNN Networks
Diehl et al. propose converting existing DNNs to SNNs for inference, employing methods that effectively reduce conversion loss and reaching 99.14% accuracy, albeit with little biological plausibility.
Conclusion
The potential of spiking neural networks lies in their ability to efficiently model brain-like processes, offering significant energy savings and computational advantages over traditional deep learning systems. While SNNs have showcased promising results in specific areas such as digital recognition tasks, further research is needed to extend their application to broader and more complex machine learning problems. Future advancements may arise from deeper exploration into brain dynamics and the integration of more sophisticated models that replicate biological processes.