Spiking Neural Networks on SpiNNaker
- Spiking Neural Networks on SpiNNaker are neuromorphic systems that use many-core, event-driven architectures to simulate brain-like circuits in real time.
- Key methodologies include hardware multicast routing, dynamic voltage/frequency scaling, and spike-timing-dependent plasticity to efficiently map large-scale neural networks.
- Applications span from associative memory models and digital spiking logic to event-based machine learning, offering low-latency and energy-efficient computing.
Spiking Neural Networks (SNNs) on SpiNNaker represent a major advance in neuromorphic computation, leveraging digital many-core architectures and event-driven communication to map large-scale, biologically realistic spiking circuits. The SpiNNaker platform and its second generation, SpiNNaker2, have been used to study diverse neural phenomena, demonstrate scalable machine learning, and implement robust, efficient associative memories and hybrid AI systems.
1. SpiNNaker Architectures and Event-Driven Principles
SpiNNaker is a multicore, digital neuromorphic platform specifically designed for real-time, energy-efficient simulation of SNNs. The original SpiNNaker chips each integrate 18 ARM968E-S cores at 200 MHz, with a hardware multicast router supporting asynchronous delivery of spike packets to arbitrary fan-outs, essential for brain-scale SNNs (Reiter et al., 2020). SpiNNaker2 advances these principles with 152 ARM Cortex-M4F-based processing elements (PEs) per chip, each provisioned with 128 kB of local SRAM and low-latency DMA engines. Both generations use distributed RAM (SDRAM for SpiNNaker1, on-chip SRAM and off-chip DRAM for SpiNNaker2) for synapse matrices and routing tables (Gonzalez et al., 2024).
Key architectural features across generations:
- Many-core event-driven computation: Cores sleep until woken by incoming spike events, then sequentially process synaptic updates and neuron state evolution, reusing the same processor logic for every neuron (Mayr et al., 2019, Gonzalez et al., 2024).
- Hardware multicast routing: Each spike is mapped to a neuron-unique 32-bit key. Multicast tables compressed in hardware allow a single spike packet to rapidly fan out to thousands of synaptic destinations, analogous to axonal projections in biology (Mayr et al., 2019, Rhodes et al., 2019).
- Dynamic voltage/frequency scaling (DVFS) and body biasing (SpiNNaker2): Per-core monitoring of event load enables adaptive power management, reducing energy when activity is low (Höppner et al., 2021).
- Integrated accelerators (SpiNNaker2): Each PE includes hardware exponential/logarithm units, MAC arrays for DNN tasks, random number generators, and 2D convolution (Gonzalez et al., 2024, Höppner et al., 2021).
- Software stacks: PyNN and sPyNNaker enable high-level definition of neuron populations, sparse or all-to-all connectivity, and plasticity updates, with automated placement, mapping, and route-table generation (Mayr et al., 2019, Gonzalez et al., 2024).
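The multicast-routing principle above can be sketched as a small Python model (a simplified illustration of TCAM-style masked-key matching, not SpiNNaker's actual router firmware; the table entries and link names below are hypothetical):

```python
def route(packet_key, table):
    """Return the output links for a spike packet.

    Each table entry is (key, mask, links): a packet matches when its
    key agrees with the entry key on all bits selected by the mask,
    mimicking ternary (TCAM-style) matching in the hardware router.
    """
    for key, mask, links in table:
        if (packet_key & mask) == key:
            return links
    return []  # unmatched packets are dropped (or default-routed on real hardware)

# Hypothetical table: neuron keys 0x100-0x1FF fan out to two external
# links and one local core; keys 0x200-0x2FF stay local.
table = [
    (0x100, 0xF00, ["east", "north", "core_3"]),
    (0x200, 0xF00, ["local"]),
]
```

A single table entry thus reproduces the one-packet-to-many-destinations fan-out that makes axon-like projections cheap on SpiNNaker.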
2. Neuron, Synapse, and Plasticity Models
SpiNNaker implementations of SNNs predominantly employ the leaky integrate-and-fire (LIF) neuron model:

$$\tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R_m I_{\text{syn}}(t),$$

where the synaptic input $I_{\text{syn}}$ is typically current-based or conductance-based, and a spike is generated when $V \geq V_{\text{th}}$, after which the membrane is reset to $V_{\text{reset}}$ (Reiter et al., 2020, Casanueva-Morato et al., 2022, Gonzalez et al., 2024). Parameters are tunable per network (e.g., $\tau_m$, $V_{\text{rest}}$, $V_{\text{th}}$, $V_{\text{reset}}$, refractory period).
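The LIF dynamics above can be integrated with a simple forward-Euler step (an illustrative Python sketch with made-up parameter values; SpiNNaker's fixed-point, event-driven implementation differs in detail):

```python
def simulate_lif(i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_th=-50.0, r_m=1.0):
    """Forward-Euler LIF integration; returns spike times in ms.

    i_syn: list of input currents, one value per timestep of length dt.
    """
    v = v_rest
    spikes = []
    for step, i in enumerate(i_syn):
        # dV/dt = (-(V - V_rest) + R_m * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i) / tau_m
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset  # hard reset after a spike
    return spikes
```

Driving this neuron with a constant suprathreshold current yields regular spiking at a rate set by the membrane time constant and the distance from rest to threshold.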
Synapses can be static (current- or conductance-based with fixed exponential decay kernels) or plastic, with hardware/software support for spike-timing-dependent plasticity (STDP):

$$\Delta w = \begin{cases} A_+ \, e^{-\Delta t/\tau_+}, & \Delta t > 0 \\ -A_- \, e^{\Delta t/\tau_-}, & \Delta t < 0 \end{cases}$$

where $\Delta t = t_{\text{post}} - t_{\text{pre}}$, and updates are typically presynaptic-event-driven for efficiency on SpiNNaker (Casanueva-Morato et al., 2022, Gonzalez et al., 2024). Eligibility traces and weight-update kernels run on-core; weights, plasticity traces, and routing tables are stored in local/external RAM (Höppner et al., 2021).
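The pair-based STDP rule above can be sketched for a single spike pair as follows (illustrative Python with placeholder constants; SpiNNaker's deferred, presynaptic-event-driven bookkeeping is more involved):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Apply one pair-based STDP update to weight w.

    dt = t_post - t_pre: potentiate for dt > 0 (pre before post),
    depress for dt < 0, with exponentially decaying windows.
    The result is clipped to [w_min, w_max].
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau_plus)    # LTP branch
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau_minus)   # LTD branch
    return min(max(w, w_min), w_max)
```

In an event-driven implementation, the exponentials are evaluated from per-synapse traces at presynaptic events rather than per spike pair, which is what makes the rule cheap on SpiNNaker.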
Other neuron models, such as programmable state-machine neurons or Izhikevich-type neurons, are supported via code generation or lookup-table accelerators (mainly SpiNNaker2) (Gonzalez et al., 2024).
3. Network Architectures and Memory Models on SpiNNaker
SpiNNaker excels in mapping structured SNN topologies, including central pattern generators, cortical microcircuits, and bio-inspired associative memories.
- Hippocampal CA3 Memory Models: (Casanueva-Morato et al., 2022) implemented two types of CA3 auto-associator networks using LIF cells. The "oscillatory" model features a DG input and all-to-all recurrent excitation (plastic, STDP), plus direct CA3-CA3 inhibition, yielding attractor dynamics with continuous oscillation. The "regulated" model introduces an inhibitory interneuron pool, externally gated for energy-efficient, recall-on-demand operation. In both, STDP encodes input patterns during explicit learning phases; recall is triggered by partial cues. Oscillatory models best capture biological rhythm but are less robust to non-orthogonal patterns and are less energy-efficient. Regulated models achieve stable pattern storage, support non-orthogonal associations, and minimize power consumption by suppressing activity outside learning/recall epochs.
- Spiking Logic and Memory: Two companion studies (Ayuso-Martinez et al., 2022) present fully deterministic logic gates and RAM structures built from minimal LIF circuits with static synapses and tuned weights/delays. These enable the construction of composite digital modules (decoders, multiplexers, D-latches) as spiking networks, achieving sub-4 ms operation with only 1–3 neurons per gate and resource-efficient mapping.
- Large-Scale Cortical Microcircuits: (Rhodes et al., 2019) demonstrates real-time simulation of the 77,000-neuron, 0.3-billion-synapse cortical microcircuit benchmark by partitioning neurons and synapses among dedicated core types and leveraging SpiNNaker's multicast hardware, surpassing GPU/HPC platforms in both real-time capability and energy efficiency.
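A minimal current-threshold reading of the spiking logic gates described above can be sketched with a single memoryless unit (illustrative Python; the weights and thresholds are hypothetical, not the published parameters of (Ayuso-Martinez et al., 2022)): an AND gate only crosses threshold when both inputs spike in the same timestep, an OR gate when either does.

```python
def gate_step(spikes_in, weights, threshold):
    """One timestep of a memoryless spiking gate: the output neuron
    fires iff the summed weighted input current reaches threshold."""
    current = sum(w for s, w in zip(spikes_in, weights) if s)
    return current >= threshold

# Hypothetical parameters: each input spike injects 1.0 unit of current.
AND = dict(weights=[1.0, 1.0], threshold=2.0)  # both inputs required
OR  = dict(weights=[1.0, 1.0], threshold=1.0)  # either input suffices
```

Composite modules (decoders, latches) then follow by wiring such gates together and exploiting synaptic delays for sequencing.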
4. Mapping, Programming, and Scalability
SNN simulators on SpiNNaker—whether PyNN, sPyNNaker, or custom C—divide neuron populations and synapse tables into core-sized units (typically 100–1000 LIF neurons/core; up to 10,000 for lightweight tasks), with routing tables programmed for efficient multicast (Gonzalez et al., 2024, Reiter et al., 2020).
- Core allocation: Each population is mapped to a core or tile; larger layers are tiled spatially or functionally. For recurrent architectures (e.g., CA3, RNNs), all-to-all or sparse matrix representations are used, e.g., CSR for the large, sparse connections in language models (Nazeer et al., 2023).
- Routing: Hardware routers map neuron IDs to output links and local cores by TCAM or compressed table entries (Mayr et al., 2019, Höppner et al., 2021). Delays are quantized to integer multiples of the base simulation step (typically 1 ms for SNNs).
- Memory management: Synaptic weights, neuron states, and trace buffers are fit within on-core SRAM where feasible, with overflow to banked SDRAM/DRAM as needed for very large models (Gonzalez et al., 2024).
- DVFS and adaptive mapping: In SpiNNaker2, per-core event load and activity-driven voltage/frequency adjustment enable proportional energy scaling and performance control (Höppner et al., 2021). Mapping tools automatically assign cores to populations, balancing load and minimizing cross-chip communication.
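A compressed sparse row (CSR) synapse layout of the kind used for large sparse connection matrices can be sketched as follows (a plain-Python illustration, not SpiNNaker's actual in-memory format):

```python
def to_csr(dense):
    """Convert a dense weight matrix (row = presynaptic neuron) into
    CSR arrays: row_ptr indexes into col_idx/weights per source row."""
    row_ptr, col_idx, weights = [0], [], []
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                col_idx.append(j)
                weights.append(w)
        row_ptr.append(len(col_idx))
    return row_ptr, col_idx, weights

def targets_of(pre, row_ptr, col_idx, weights):
    """Fan-out lookup for one presynaptic spike: O(out-degree)."""
    lo, hi = row_ptr[pre], row_ptr[pre + 1]
    return list(zip(col_idx[lo:hi], weights[lo:hi]))
```

The event-driven access pattern fits CSR well: an incoming spike identifies a presynaptic row, and only that row's nonzero synapses are walked.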
Performance scales from single chips to multi-million core assemblies, with latency per spike remaining sub-millisecond per hop. Large models (e.g., 5 million cores, 35,000 chips on SpiNNaker2) are accommodated by toroidal NoC topology (Gonzalez et al., 2024).
5. Performance Benchmarks and Applications
- Energy and Latency: SpiNNaker2 achieves ≈10 pJ per synaptic event (LIF + STDP) (Gonzalez et al., 2024) and batch-one inference energies of ∼5–7 μJ/sample; for example, an EGRU running DVS Gestures needs 5 μJ/sample versus ∼20 μJ/sample on a GPU (Gonzalez et al., 2024, Nazeer et al., 2023). For very large SNNs on first-generation SpiNNaker, energy stays below 0.63 μJ per synaptic event (Rhodes et al., 2019).
- Scalability: Real-time operation holds from 1,000 neurons up to cortical models with 77,000+ neurons and over 0.3 billion synapses, with aggregate spike throughput exceeding 10⁷ spikes/s per chip (Gonzalez et al., 2024, Rhodes et al., 2019).
- Associative Memory Performance: The oscillatory CA3 model stores and recalls up to 4 orthogonal patterns in a 20-neuron network; the regulated model handles both orthogonal and non-orthogonal patterns with single-presentation recall latency <14 ms (Casanueva-Morato et al., 2022).
- Pattern and Logic Networks: Spiking logic gates on SpiNNaker sustain input frequencies up to 500 Hz, perform logic operations in 1–3 ms, and compose deterministically into RAM blocks with sub-6 ms write latencies (Ayuso-Martinez et al., 2022).
- Adaptive and Event-Driven ML: Applications include event-based visual tracking (Glover et al., 2019), neuromorphic implementations of recurrent and convolutional SNNs for gesture recognition (Arfa et al., 2025), and language modeling with event-driven RNNs (EGRU) that match LSTM perplexity baselines at an order-of-magnitude lower energy per inference (Nazeer et al., 2023).
Representative use cases span spiking CPGs for robotics (Angelidis et al., 2021), digital logic and memory, and machine learning tasks where energy efficiency and low latency are paramount in edge and embedded scenarios.
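The per-event energy figures above translate directly into chip-level power estimates; as a worked example, assume the quoted ≈10 pJ per synaptic event, the ∼10⁷ spikes/s aggregate rate, and a hypothetical average fan-out of 1,000 synapses per spike:

```python
def synaptic_power_watts(spikes_per_s, fanout, energy_pj_per_event):
    """Chip power consumed by synaptic event processing alone."""
    events_per_s = spikes_per_s * fanout
    return events_per_s * energy_pj_per_event * 1e-12  # pJ -> J

# 1e7 spikes/s * 1000 synapses/spike * 10 pJ/event = 0.1 W
p = synaptic_power_watts(1e7, 1000, 10.0)
```

Static power, neuron-state updates, and interconnect traffic add to this figure, but the estimate shows why event-driven operation keeps brain-scale simulation in the watts rather than kilowatts regime.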
6. Comparative Analysis, Limitations, and Outlook
SpiNNaker’s digital, event-driven architecture ensures uniformity and reproducibility across runs, with ≤1% accuracy loss for ANN-to-SNN conversion and no hardware retraining needed, in contrast to analog neuromorphic systems, which require hardware-in-the-loop calibration to reach comparable accuracy (Ostrau et al., 2020).
Trade-offs and current limitations:
- Oscillatory vs. regulated memory networks: Biological plausibility (oscillatory) trades off against stability/energy efficiency (regulated CA3), with the latter supporting practical embedded deployment (Casanueva-Morato et al., 2022).
- Logic and memory modules: Larger fan-in/fan-out and unfavorable neuron-count scaling for gates like XOR limit what fits on a single core; the 1 ms step granularity caps maximum throughput; and the combinational logic libraries offer no dynamic (online) learning (Ayuso-Martinez et al., 2022).
- Quantization, Precision, and Energy: Aggressive 8-bit quantization with threshold scaling on SpiNNaker2 preserves accuracy within 1% of floating-point SNNs; quantization-aware training further improves efficiency (Arfa et al., 2025).
- Hybrid and Event-/Rate-Based Coding: SpiNNaker2’s MAC arrays expand the application space to include both spiking and rate-based DNN computations, enabling hybrid neural models and bridging low-latency event-driven SNNs with conventional ANNs (Yan et al., 2020, Gonzalez et al., 2024).
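The quantization-with-threshold-scaling idea from the trade-offs above can be illustrated generically (a sketch of symmetric per-layer quantization, not the exact procedure of (Arfa et al., 2025)): weights are mapped onto signed 8-bit integers, and the neuron's firing threshold is rescaled by the same factor so that integer accumulation crosses threshold under the same inputs as the float computation.

```python
def quantize_layer(weights, threshold, n_bits=8):
    """Symmetric per-layer quantization with threshold scaling.

    Maps float weights onto signed n-bit integers (scale chosen so the
    largest-magnitude weight hits the integer range limit) and divides
    the firing threshold by the same scale, preserving which input
    patterns drive the neuron past threshold (up to rounding error).
    """
    qmax = 2 ** (n_bits - 1) - 1             # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q_weights = [round(w / scale) for w in weights]
    q_threshold = threshold / scale          # threshold scaling
    return q_weights, q_threshold, scale
```

Because threshold and weights share one scale factor, spike decisions survive quantization up to half a quantization step per synapse, which is why accuracy stays within ∼1% of the float model.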
Identified future research directions include event-driven spike-logic construction, biologically realistic gating for attractor networks, dynamic on-chip learning (e.g., e-prop integration), and scale-up to foundation-model-scale SNNs on large SpiNNaker2 clusters (Arfa et al., 2025, Gonzalez et al., 2024).
References
- Casanueva-Morato et al., "Spike-based computational models of bio-inspired memories in the hippocampal CA3 region on SpiNNaker" (2022)
- Gonzalez et al., "SpiNNaker2: A Large-Scale Neuromorphic System for Event-Based and Asynchronous Machine Learning" (2024)
- Ayuso-Martinez et al., "Spike-based building blocks for performing logic operations using Spiking Neural Networks on SpiNNaker" (2022)
- Ayuso-Martinez et al., "Construction of a spike-based memory using neural-like logic gates based on Spiking Neural Networks on SpiNNaker" (2022)
- Rhodes et al., "Real-Time Cortical Simulation on Neuromorphic Hardware" (2019)
- Höppner et al., "The SpiNNaker 2 Processing Element Architecture for Hybrid Digital Neuromorphic Computing" (2021)
- Ostrau et al., "Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware" (2020)
- Nazeer et al., "Language Modeling on a SpiNNaker 2 Neuromorphic Chip" (2023)
- Arfa et al., "Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation" (2025)