
Spiking Bayesian Neural Networks

Updated 10 February 2026
  • Spiking Bayesian Neural Networks (SBNNs) are models that integrate Bayesian inference with event-driven spiking dynamics to represent uncertainty and enhance energy efficiency.
  • They employ techniques like variational Bayesian inference, message-passing, and sampling methods to optimize synaptic weight distributions and network learning.
  • SBNNs are implemented in diverse architectures—from hardware-efficient binary networks to deep convolutional models—supporting applications in continual learning and uncertainty-aware diagnostics.

A Spiking Bayesian Neural Network (SBNN) is a model that integrates Bayesian learning principles with spiking neural network (SNN) dynamics, in which uncertainty is represented and manipulated via distributions over synaptic weights, neuronal or circuit-level noise, or spike-based representations of probabilistic quantities. SBNNs are defined by the use of spiking dynamics for information processing or inference, and the use of explicit Bayesian formalisms—most commonly variational Bayesian inference, sampling, or Bayesian message-passing—at the algorithmic or neural-circuit level. The aim is to unify the statistical rigor and uncertainty-quantification capabilities of Bayesian modeling with the energy efficiency, event-driven coding, and neurobiological plausibility of SNNs. SBNNs have been realized in architectures spanning hardware-oriented binary networks, deep convolutional models, hybrid Bayesian-signal-processing pipelines, and factor-graph message-passing spiking assemblies, with applications ranging from neuromorphic computing and continual learning to uncertainty-aware medical diagnostics.

1. Theoretical Foundations and Motivations

The core motivation for SBNNs derives from the convergence of three areas: (1) the Bayesian brain hypothesis, positing probabilistic encoding and inference in neural circuits; (2) the computational and energy advantages of SNNs implemented on neuromorphic substrates; and (3) the demand for uncertainty quantification and robust continual learning in real-world and adaptive AI systems. In contrast to deterministic SNNs, SBNNs actively learn and represent distributions—over weights, latent states, or outputs—to encode epistemic or aleatoric uncertainty. Key theoretical frameworks include mean-field variational Bayes (for Gaussian or Bernoulli posteriors over weights), probabilistic graphical models realized with spike-based message-passing, and sampling-based approximations to posteriors using neural or device-induced noise (Jang et al., 2020, Skatchkovsky et al., 2022, Chegini et al., 23 Apr 2025, Walker et al., 23 May 2025, Adamiat et al., 11 Dec 2025, Adamiat et al., 19 Dec 2025, Tavanaei et al., 2016, Paulin et al., 2014).

2. Bayesian SNN Architectures and Their Mathematical Formalisms

SBNNs instantiate Bayesian reasoning at multiple levels. Architectures include:

  • Deep Bayesian SNNs with Variational Inference: Weights are treated as stochastic variables with priors (Gaussian for real-valued, Bernoulli for binary/binarized networks). A factorized posterior is typically fit via the minimization of the variational free energy (evidence lower bound, ELBO). For binary weights, the mean-field Bernoulli posterior is parameterized by logits and optimized using natural gradient or reparameterization tricks (e.g., Gumbel-Softmax) for surrogate gradient backpropagation (Jang et al., 2020, Walker et al., 23 May 2025, Katti et al., 2024, Skatchkovsky et al., 2022).
  • Rate/Spike-Based Message Passing for Bayesian Inference: For tasks such as Gaussian or Bernoulli belief propagation, SNNs encode messages as spike train populations, with local spiking microcircuits (equality, sum, scaling nodes) converting input messages to output messages according to sum-product rules; weights are trained using STDP or are analytically programmed for exact Gaussian inference (Adamiat et al., 11 Dec 2025, Adamiat et al., 19 Dec 2025, Shukla et al., 2019).
  • Online EM and STDP-Driven GMM Learning: In HMM–SNN hybrids, each HMM state’s emission is modeled by a small spike-based network that behaves as a winner-take-all GMM, with STDP rules effecting local EM-like parameter updates for mixture means and priors, connecting Bayesian parameter learning to neurobiologically plausible plasticity (Tavanaei et al., 2016).
  • Sampling-Based SBNNs and Physical Stochasticity: Some SBNN frameworks exploit intrinsic device noise (e.g., magnetic tunnel junctions, MTJs) or neuron-intrinsic stochastic thresholds as a resource for posterior sampling. In these models, distributions are realized directly as noisy spike trains, and device randomness matches the theoretical variational distribution under appropriate calibration (Zheng et al., 3 Feb 2026, Kungl et al., 2018).
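As a concrete illustration of the variational free-energy objective named in the first bullet, the following is a minimal NumPy sketch (not any paper's reference implementation): binary weights in {-1, +1} are drawn from a mean-field Bernoulli posterior parameterized by logits, and the ELBO-style objective is estimated as a Monte Carlo data loss plus a KL penalty toward a Bernoulli prior. The names `free_energy`, `bernoulli_kl`, and `loss_fn` are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli_kl(q_logits, p_prior=0.5):
    """Summed KL( Bern(sigmoid(q_logits)) || Bern(p_prior) ) over all weights."""
    q = np.clip(sigmoid(np.asarray(q_logits, dtype=float)), 1e-8, 1 - 1e-8)
    return float(np.sum(q * np.log(q / p_prior)
                        + (1 - q) * np.log((1 - q) / (1 - p_prior))))

def free_energy(q_logits, loss_fn, rho=1.0, n_samples=8, rng=None):
    """Monte Carlo estimate of F(q) = E_q[L_D(w)] + rho * KL(q || p).

    Binary weights w in {-1, +1} are sampled from the mean-field Bernoulli
    posterior; loss_fn(w) evaluates the data loss L_D for one weight draw.
    """
    rng = rng or np.random.default_rng()
    q = sigmoid(np.asarray(q_logits, dtype=float))
    losses = [loss_fn(np.where(rng.uniform(size=q.shape) < q, 1.0, -1.0))
              for _ in range(n_samples)]
    return float(np.mean(losses)) + rho * bernoulli_kl(q_logits)
```

In an actual SBNN the loss would come from unrolling the spiking network with a sampled weight configuration; here it is left abstract to keep the objective structure visible.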

A summary of primary Bayesian formulations for weights, outputs, and inference dynamics is provided below:

| Component | Probabilistic Model | Algorithm/Rule |
| --- | --- | --- |
| Synaptic weights | $q(w) = \text{Bern}(p)$ or $q(w) = \mathcal{N}(m, P^{-1})$ | Variational Bayes, Gumbel-Softmax, natural gradient (Jang et al., 2020, Skatchkovsky et al., 2022) |
| Node/message | Gaussian $\mathcal{N}(m,\sigma^2)$ or Bernoulli $\text{Bern}(p)$ | Rate/probability represented by spike rates, decoded by counting (Adamiat et al., 11 Dec 2025, Adamiat et al., 19 Dec 2025) |
| Update rule | Local STDP (online EM, message passing), gradient or moment-matching | Surrogate gradient, online stochastic updates, expectation propagation (Tavanaei et al., 2016, Yao et al., 30 Jun 2025) |
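The Gumbel-Softmax entry above refers to a relaxed reparameterization of the Bernoulli weight posterior. A minimal sketch of the binary variant (sometimes called the Concrete distribution) is given below; the function name and temperature default are illustrative, not taken from any cited paper.

```python
import numpy as np

def relaxed_bernoulli_sample(logits, tau=0.5, rng=None):
    """Binary Gumbel-Softmax (Concrete) sample in (0, 1).

    Adds logistic noise to the posterior logits and squashes with a
    temperature-scaled sigmoid, giving a differentiable surrogate for
    sampling a Bernoulli weight; tau -> 0 recovers hard 0/1 samples.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log1p(-u)  # difference of two Gumbels
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits, dtype=float)
                                 + logistic_noise) / tau))
```

A relaxed sample $s \in (0,1)$ can be mapped to a signed weight via $w = 2s - 1$; because the sampling noise is separated from the logits, gradients can flow through the draw to the posterior parameters.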

3. Inference and Learning Algorithms

  • Variational Bayesian Learning in SNNs: The objective is to minimize

$\mathcal{F}(q) = \mathbb{E}_{q(w)}[L_D(w)] + \rho\,\mathrm{KL}(q(w)\,\Vert\,p(w))$

where the expectation is estimated via reparameterized Monte Carlo sampling, and gradients are computed using surrogate gradient or importance-weighted straight-through (IW-ST) estimators. For binary weights, Gumbel-Softmax surrogates are used for gradient compatibility with SNN non-differentiabilities (Jang et al., 2020, Walker et al., 23 May 2025, Katti et al., 2024).

  • Bayesian Message Passing and STDP: Modular spiking microcircuits are constructed so that populations of neurons realize sum-product updates on either Gaussian or Bernoulli distributions, with each node’s output spike train encoding the outgoing message. STDP learning tunes synapses so that spiking output rates realize the desired functional transformations; for factor graphs with Bernoulli variables, three-neuron microcircuits suffice per factor (Adamiat et al., 19 Dec 2025, Adamiat et al., 11 Dec 2025).
  • Expectation-Propagation (EP) in SNNs: EP algorithms recast SNNs as factor graphs, performing alternating forward and backward passes that update Gaussian or categorical messages over weights and activations by moment-matching cavity and tilted distributions. The approach provides batch-wise marginalization of nuisance variables (hidden states) and can treat continuous or binary weights in unified terms (Yao et al., 30 Jun 2025).
  • Monte Carlo Sampling with Device Stochasticity: In SBNNs implemented on physical substrates with intrinsic noise (e.g., MTJ neurons, BrainScaleS wafer), random events supply the necessary stochasticity for approximate sampling from posterior distributions without explicit generation of random numbers. The device threshold is learned or calibrated to align the empirical firing-rate distribution with the algorithmic Bayesian posterior (Zheng et al., 3 Feb 2026, Kungl et al., 2018).
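For intuition on the sum-product updates that the spiking microcircuits above are trained (or programmed) to realize, here is a small sketch of the two basic Gaussian node operations on natural-parameter form; this is standard Gaussian belief propagation, not code from the cited works, and the rate-coded spiking approximation is abstracted away.

```python
def gaussian_equality_node(m1, v1, m2, v2):
    """Sum-product update at an equality node.

    The outgoing Gaussian message has precision equal to the sum of the
    incoming precisions and a precision-weighted mean -- the computation a
    spiking microcircuit approximates with rate-coded messages.
    """
    precision = 1.0 / v1 + 1.0 / v2
    v = 1.0 / precision
    return v * (m1 / v1 + m2 / v2), v

def gaussian_sum_node(m1, v1, m2, v2):
    """Sum-product update at a sum node z = x + y: means and variances add."""
    return m1 + m2, v1 + v2
```

Combining two unit-variance messages at an equality node, for example, halves the variance, which is the message-level analogue of evidence accumulation.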

4. Uncertainty Quantification and Calibration

SBNNs provide uncertainty quantification by leveraging the posterior distribution over weights or hidden states. At prediction time, epistemic uncertainty is estimated by Monte Carlo averaging over sampled network instantiations from the posterior; predictive entropy and mutual information are commonly reported calibration metrics (Chegini et al., 23 Apr 2025, Skatchkovsky et al., 2022, Katti et al., 2024). Enhanced calibration is consistently reported compared to point-estimate or frequentist SNNs, permitting risk-aware decision-making and “referral-for-review” in applications demanding reliability (e.g., medical diagnosis) (Chegini et al., 23 Apr 2025).
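The predictive-entropy and mutual-information metrics mentioned above can be computed directly from Monte Carlo posterior samples; a minimal sketch (the BALD-style decomposition, with illustrative function names) follows.

```python
import numpy as np

def uncertainty_metrics(probs):
    """Decompose predictive uncertainty from Monte Carlo posterior samples.

    probs: array of shape (S, C) -- class probabilities from S networks drawn
    from the weight posterior. Returns (predictive_entropy, mutual_information);
    the mutual-information term isolates the epistemic component, vanishing
    when all posterior samples agree.
    """
    eps = 1e-12
    probs = np.asarray(probs, dtype=float)
    mean_p = probs.mean(axis=0)                     # Bayesian model average
    predictive_entropy = -float(np.sum(mean_p * np.log(mean_p + eps)))
    expected_entropy = -float(np.mean(np.sum(probs * np.log(probs + eps),
                                             axis=1)))
    return predictive_entropy, predictive_entropy - expected_entropy
```

High mutual information flags inputs on which the posterior samples disagree, which is the signal used for "referral-for-review" style decisions.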

Results across multiple studies demonstrate that SBNN ensemble predictors (i.e., Bayesian model averaging) are better calibrated than single-point maximum a posteriori (MAP) estimators, and predictive uncertainty increases systematically for misclassified or out-of-distribution samples (Jang et al., 2020, Skatchkovsky et al., 2022).

5. Neuromorphic and Hardware Implementations

SBNNs are optimized for hardware efficiency by leveraging binary weights, event-driven computation, and spike-sparse activity:

  • Binary-weight SBNNs implement neural multiplication with 2-to-1 multiplexers and perform inference with orders of magnitude fewer spikes than full-precision counterparts, with negligible accuracy loss (Katti et al., 2024, Jang et al., 2020).
  • Physical stochasticity is exploited for in-silico random number generation, allowing inherently Bayesian inference on neuromorphic substrates without separate RNG circuits (Zheng et al., 3 Feb 2026). Device-characterization and calibration procedures are employed to map algorithmic thresholds to physical ones.
  • Accelerated Bayesian inference is demonstrated on platforms such as the Zynq-7000 FPGA (Katti et al., 2024), the BrainScaleS wafer (Kungl et al., 2018), and Intel Lava (Skatchkovsky et al., 2022), delivering improved energy efficiency (up to 30× lower power consumption), ultra-low latency (inference in as few as T ≈ 4 timesteps), and milliwatt-scale power envelopes for real-time vision or sensor-processing tasks.
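The multiplexer trick from the first bullet can be sketched in a few lines: with weights in {-1, +1} encoded as bits, "multiplying" a 0/1 input spike by a weight reduces to selecting +spike or -spike, so a layer contributes to the membrane potential by signed accumulation alone. This is an illustrative software model of the hardware idea, not code from the cited implementations.

```python
import numpy as np

def binary_weight_accumulate(spikes, w_bits):
    """Binary-weight synaptic integration without multipliers.

    spikes: (n_in,) 0/1 vector of input spikes for one timestep.
    w_bits: (n_out, n_in) 0/1 matrix encoding weights +1 (bit 1) / -1 (bit 0).
    Each synapse acts as a 2-to-1 mux between +spike and -spike; the result
    is each output neuron's membrane-current contribution.
    """
    signed = np.where(np.asarray(w_bits) == 1, 1.0, -1.0)
    return signed @ np.asarray(spikes, dtype=float)
```

Because inactive inputs (spike = 0) contribute nothing regardless of the weight bit, spike-sparse activity directly translates into skipped accumulations on event-driven hardware.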

6. Applications, Empirical Performance, and Limitations

SBNNs have achieved state-of-the-art performance on neuromorphic benchmarks such as DVS-Gesture and SHD, as well as in medical image classification, speech recognition, continual learning, and hardware-robust inference (Chegini et al., 23 Apr 2025, Jang et al., 2020, Walker et al., 23 May 2025, Skatchkovsky et al., 2022, Katti et al., 2024, Tavanaei et al., 2016).

However, SBNNs currently face open challenges in extending principled Bayesian inference to deep, highly recurrent, or hierarchical networks at scale, especially with non-Gaussian or discrete latent states. Training may be sensitive to variational approximations, and full posterior sampling can require substantial resources if not realized in hardware.

7. Outlook: Research Directions and Neurobiological Relevance

SBNNs unify statistical inference and neurobiologically plausible computation, suggesting explanations for probabilistic coding and learning in the brain. Their modular design, hardware efficiency, and native uncertainty quantification position them as fundamental building blocks for robust, event-driven AI systems and brain-inspired computing (Shukla et al., 2019, Adamiat et al., 11 Dec 2025, Chegini et al., 23 Apr 2025). Future directions include scalable architectures for high-dimensional message-passing, adaptive SBNNs for dynamic, continual environments, and deeper integration of device-level noise as Bayesian resources. In neuroscience, SBNNs offer testable hypotheses for the implementation of probabilistic inference via spike dynamics, STDP, and distributed winner-take-all microcircuits (Paulin et al., 2014, Tavanaei et al., 2016, Shukla et al., 2019).
