
Models Developed for Spiking Neural Networks

Published 8 Dec 2022 in cs.NE, cs.CV, and q-bio.NC | (2212.04377v1)

Abstract: Emergence of deep neural networks (DNNs) has raised enormous attention towards artificial neural networks (ANNs) once again. They have become the state-of-the-art models and have won different machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility, and they have structural differences compared to the brain. Spiking neural networks (SNNs) have been around for a long time, and they have been investigated to understand the dynamics of the brain. However, their application in real-world and complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks. Due to their energy efficiency and temporal dynamics, their future development holds much promise. In this work, we reviewed the structures and performances of SNNs on image classification tasks. The comparisons illustrate that these networks show great capabilities for more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be a potential alternative to replace the backpropagation algorithm used in DNNs.

Citations (3)

Summary

  • The paper reviews multiple spiking neural network models, including LIF, SRM, and Izhikevich, demonstrating their potential in image classification tasks.
  • It details SNN coding schemes and learning rules, notably rate, temporal, and phase coding, along with STDP, R-STDP, and adapted backpropagation.
  • It shows strategies for converting DNNs to SNNs, achieving high accuracy (up to 99.42%) and improved energy efficiency in brain-like simulations.

Models Developed for Spiking Neural Networks

Spiking Neural Networks (SNNs) represent an advancement in artificial neural networks, offering biologically plausible modeling of the brain's dynamics. The paper "Models Developed for Spiking Neural Networks" reviews the evolution and performance of SNNs, focusing particularly on their application in image classification tasks. This is in contrast to Deep Neural Networks (DNNs), which, despite their success, lack the biological plausibility of SNNs. The paper compares various SNN structures, emphasizing their potential to address complex machine learning tasks with increased energy efficiency.

Introduction to Spiking Neural Networks

SNNs emerge as the third generation of artificial neural networks, surpassing previous models in terms of biological plausibility. Unlike DNNs, SNNs leverage spatio-temporal information for processing, using spikes to transmit binary signals akin to those found in the brain. This structure facilitates high sparsity rates, contributing to their energy efficiency. The paper outlines common challenges faced by DNNs, such as high data and energy consumption, and contrasts these with the capabilities of the human brain, which can learn efficiently from sparse data.

Building Blocks of SNNs

Models of Biological Neurons

The paper discusses three predominant neuron models that form the core of SNNs, each offering varying degrees of complexity and computational demand:

  • Leaky Integrate-and-Fire (LIF) Model: Simplifies spike firing by using an electrical circuit analogy, focusing on the potential threshold for neuron firing.
  • Spike-response Model (SRM): Extends LIF's capabilities with temporal functions, providing flexibility in simulating neuron dynamics with both fixed and variable thresholds.
  • Izhikevich Model: Balances biological plausibility with computational efficiency to replicate spiking patterns observed in cortical neurons.
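The LIF model above is simple enough to sketch in a few lines. The following is a minimal illustration, not the paper's implementation; all parameter values (time constant, threshold, reset potential) are assumed defaults chosen only for demonstration:

```python
import numpy as np

def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r=1.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    current: 1-D array of input current at each time step.
    Returns the membrane-potential trace and a binary spike train.
    """
    v = v_rest
    trace, spikes = [], []
    for i_t in current:
        # Membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (-(v - v_rest) + r * i_t)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset            # hard reset after firing
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(trace), np.array(spikes)
```

With a constant input above threshold the neuron fires periodically; below threshold the potential settles short of the threshold and the neuron stays silent, which is the behavior the circuit analogy describes.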

Neural Coding Schemes

SNNs convert analog input signals into spike trains via different coding mechanisms:

  • Rate Coding: Encodes information through the firing rate, closely linked to input signal intensity.
  • Temporal Coding: Uses spike timing to convey information rapidly and sparsely, offering efficiency in processing.
  • Phase Coding: Captures information through neuron firing patterns correlated with background oscillations.
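Rate coding, the first scheme above, is commonly realized with Poisson-like spike generation. The sketch below assumes pixel intensities normalized to [0, 1] and a hypothetical `max_rate` parameter; it is illustrative, not the paper's encoder:

```python
import numpy as np

def rate_encode(pixels, n_steps=100, max_rate=0.5, rng=None):
    """Poisson-style rate coding: each intensity in [0, 1] becomes a
    spike train whose per-step firing probability is proportional to
    the intensity."""
    rng = np.random.default_rng() if rng is None else rng
    pixels = np.asarray(pixels, dtype=float)
    # Independent Bernoulli draw at every step: P(spike) = intensity * max_rate.
    probs = np.clip(pixels * max_rate, 0.0, 1.0)
    return (rng.random((n_steps,) + pixels.shape) < probs).astype(np.uint8)
```

Brighter pixels thus produce denser spike trains; temporal and phase coding instead move the information into *when* the spikes occur, which lets them convey the same value with far fewer spikes.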

Learning Rules

SNNs employ biologically inspired learning rules:

  • Spike-timing-dependent plasticity (STDP): Adjusts synaptic weights based on the relative firing times of pre- and post-synaptic neurons.
  • Reward-modulated STDP (R-STDP): Enhances STDP by incorporating reinforcement learning principles to optimize synaptic changes based on feedback.
  • Backpropagation Adaptations: Despite challenges in differentiating spike trains, adaptations allow backpropagation use in SNNs, enabling supervised learning.
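The classic pair-based form of STDP can be written as an exponential window over the pre/post spike-time difference. The update below is a generic textbook sketch with assumed amplitudes and time constants, not the specific rule used by any model in the paper:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise.

    t_pre, t_post: spike times (ms) of the pre- and post-synaptic neuron.
    """
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post -> potentiation (LTP)
        w += a_plus * np.exp(-dt / tau_plus)
    else:                             # post before pre -> depression (LTD)
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))
```

R-STDP keeps the same timing-dependent term but multiplies it by a reward signal, so updates that led to correct responses are reinforced and those that led to errors are punished.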

ANN-to-SNN Conversion

The paper details strategies for converting trained DNNs to SNNs, optimizing for minimal accuracy loss during the conversion, particularly through weight normalization techniques and layer adjustments.
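The weight-normalization idea can be sketched as rescaling each layer by the maximum activation observed on training data, so that spiking rates stay within the range a spiking neuron can represent. This is a simplified sketch in the spirit of data-based normalization, with a made-up interface; the paper's conversion pipelines involve additional layer adjustments:

```python
import numpy as np

def normalize_weights(weights, layer_activations):
    """Data-based weight normalization for ANN-to-SNN conversion:
    rescale each layer so the maximum ReLU activation recorded from
    the trained ANN maps to a firing rate of at most 1.

    weights: list of weight matrices, one per layer.
    layer_activations: list of activation arrays recorded from the
        trained ANN on sample inputs, one per layer.
    """
    normalized = []
    prev_factor = 1.0
    for w, act in zip(weights, layer_activations):
        factor = float(np.max(act))
        # Multiply by the previous layer's factor to undo its rescaling,
        # so the network's input-output mapping is preserved.
        normalized.append(w * prev_factor / factor)
        prev_factor = factor
    return normalized
```

Because every layer is scaled and the next layer is scaled back, the converted network computes the same function as the original ANN while keeping spike counts bounded, which is what limits the accuracy loss during conversion.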

Developed Models for SNNs

Several models reviewed demonstrate differing approaches and outcomes in terms of accuracy on tasks such as MNIST image classification:

STDP Networks

The Kheradpisheh et al. model utilizes STDP with two convolutional layers, achieving a notable accuracy of 98.40%, while the Shirsavar et al. model enhances this setup to reach 99.42% accuracy by optimizing runtime and training processes.

R-STDP Networks

Mozafari et al. integrate R-STDP to improve decision-making capabilities by rewarding correct neuronal responses, reaching a test accuracy of 97.20%.

Backpropagation Networks

Lee et al. employ a modified backpropagation approach using leaky integrate-and-fire neurons and lateral inhibition, achieving high accuracy through continuous signal treatment.
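A common way to make backpropagation work despite the non-differentiable spike function is the surrogate-gradient trick sketched below. This is an illustrative assumption, not Lee et al.'s exact formulation: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth proxy derivative:

```python
import numpy as np

def spike_and_surrogate_grad(v, v_thresh=1.0, alpha=1.0):
    """Forward pass: hard threshold on the membrane potential.
    Backward pass: a sigmoid's derivative stands in for the step
    function's gradient so errors can propagate through spikes."""
    spike = (v >= v_thresh).astype(float)      # non-differentiable step
    sig = 1.0 / (1.0 + np.exp(-alpha * (v - v_thresh)))
    grad = alpha * sig * (1.0 - sig)           # smooth surrogate gradient
    return spike, grad
```

The surrogate gradient is largest near the threshold, so training mostly adjusts weights of neurons that are close to firing, which is consistent with treating the spiking signal as continuous during learning.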

ANN-to-SNN Networks

Diehl et al. propose converting existing DNNs to SNNs for inference, employing methods that effectively reduce conversion loss and reaching 99.14% accuracy, albeit with minimal biological plausibility.

Conclusion

The potential of spiking neural networks lies in their ability to efficiently model brain-like processes, offering significant energy savings and computational advantages over traditional deep learning systems. While SNNs have showcased promising results in specific areas such as digit recognition tasks, further research is needed to extend their application to broader and more complex machine learning problems. Future advancements may arise from deeper exploration into brain dynamics and the integration of more sophisticated models that replicate biological processes.
