Homomorphic Encryption for SNNs

Updated 3 February 2026
  • Homomorphic Encryption for SNNs is a privacy-driven approach that enables computations on encrypted, spike-based neural networks using neuromorphic principles.
  • HE schemes like TFHE, BFV, and CKKS are adapted to SNNs by approximating nonlinear firing functions and managing noise through bootstrapping and scheme-switching.
  • Optimizing discretization, coding strategies, and parameter selection boosts encrypted SNN performance, offering a balanced trade-off between accuracy and computational efficiency.

Homomorphic encryption (HE) for spiking neural networks (SNNs) is a domain at the intersection of privacy-preserving machine learning and neuromorphic computing. SNNs, often referred to as third-generation neural networks, use temporally discrete, binary spike signals for information processing, mimicking neurobiological computation and providing significant energy and sparsity advantages. Homomorphic encryption enables computation on encrypted data without decryption, thus safeguarding privacy throughout the inference pipeline. Technical advances have established the feasibility of SNN inference under various HE schemes, exploiting the SNN's discrete, spike-driven structure to alleviate traditional HE bottlenecks in evaluating nonlinear functions.

1. Homomorphic Encryption Schemes for SNNs

SNNs have been evaluated under several prominent HE schemes, each imposing unique constraints and engineering trade-offs on the SNN design and homomorphic evaluation strategy.

  • TFHE (Torus Fully Homomorphic Encryption): TFHE natively supports programmable bootstrapping over the torus, enabling efficient evaluation of arbitrary Boolean/integer functions with noise management via bootstrap operations. SNNs benefit especially from this integer message space and bootstrapping-based evaluation of spike-generation and reset steps (Li et al., 2023, Li et al., 2023).
  • BFV (Brakerski/Fan-Vercauteren): BFV supports arithmetic over modular integers, with operations expressed as additions and multiplications modulo a plaintext modulus $t$ and ciphertext modulus $q$. Nonlinearities such as SNN firing thresholds are approximated by low-degree polynomials for homomorphic tractability (Nikfam et al., 2023).
  • CKKS (Cheon-Kim-Kim-Song): CKKS operates over approximate arithmetic for real and complex values, offering rich SIMD (slot) packing. While it supports ciphertext-ciphertext operations and rescaling for noise control, it cannot natively evaluate non-polynomial functions such as thresholding. Hence, recent frameworks employ polynomial/Chebyshev approximations or hybrid scheme-switching to TFHE for exact comparison (Njungle et al., 5 Oct 2025).

Each scheme demands a trade-off between accuracy, computational complexity, memory footprint, and exactness of nonlinear operation evaluation.
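
As a concrete illustration of the polynomial route taken by BFV and CKKS, the sketch below fits a low-degree Chebyshev polynomial to the Heaviside firing step. The degree, input interval, and threshold are illustrative choices, not values from any cited framework.

```python
import numpy as np

# Approximate the Heaviside firing step H(v - theta) with a low-degree
# Chebyshev polynomial, as required when a scheme can only evaluate
# additions and multiplications. Degree, interval, and threshold are
# illustrative assumptions.
theta = 1.0
v = np.linspace(-4.0, 4.0, 512)           # assumed membrane-potential range
step = (v >= theta).astype(float)         # exact (non-polynomial) target

cheb = np.polynomial.chebyshev.Chebyshev.fit(v, step, deg=7)
approx = cheb(v)

# The error concentrates near the threshold discontinuity; away from it
# the polynomial tracks the step closely.
max_err = np.max(np.abs(approx - step))
print(f"degree-7 max error: {max_err:.3f}")
```

This is why polynomial approximation is fast but slightly lossy: spikes generated near the threshold are the ones most likely to be misclassified, which motivates the exact scheme-switching alternative.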

2. SNN Model Representation and Homomorphic Adaptation

The core SNN computation under homomorphic encryption is based on the discrete-time Leaky Integrate-and-Fire (LIF) or Integrate-and-Fire (IF) neuron models. The homomorphic adaptation involves several important steps:

  • Discretization: All continuous values (weights, membrane potentials, thresholds) are scaled and rounded to integers. For IF models: $V_{t+1} = V_t + I_t$, $s_t = \mathbf{1}_{V_t \geq \theta}$, $V_{t+1} \leftarrow V_{t+1} - \theta s_t$. For LIF: $V[t+1] = \alpha V[t] + \beta I_{\text{syn}}[t]$, $s[t] = H(V[t] - v_{\rm th})$ (Li et al., 2023, Li et al., 2023, Nikfam et al., 2023, Njungle et al., 5 Oct 2025).
  • Homomorphic Linear Operations: Weighted synaptic inputs and membrane integration are computed via ciphertext addition and scalar multiplication, fully compatible with linear HE operations.
  • Nonlinear Firing (Fire) and Reset: The firing function $\mathbf{1}_{V_t \geq \theta}$ and reset logic are implemented via:

Table: Firing/Reset Operation Implementations

| HE Scheme | Nonlinearity Evaluation | Description |
|-----------|-------------------------|-------------|
| TFHE | Programmable bootstrap | Arbitrary integer function $g:\mathbb{Z}_p\to\mathbb{Z}_p$ |
| BFV | Polynomial approximation | Soft-step (e.g., cubic) polynomial |
| CKKS | Chebyshev polynomial / scheme switch | Degree-$N$ approximation, or switch to TFHE for the exact Boolean step |

Flexible implementation of Fire/Reset is essential for accurate SNN inference on ciphertexts.
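
The discretized IF dynamics above can be sketched in plaintext as a reference for what the HE frameworks evaluate on ciphertexts; the threshold and input currents below are illustrative.

```python
import numpy as np

# Plaintext reference for the discretized IF dynamics evaluated
# homomorphically: integrate, fire on threshold, subtract-reset.
# Values are integers after scaling, matching the integer message
# spaces of TFHE/BFV. Threshold and inputs are illustrative.
def if_step(V, I, theta):
    """One timestep: V <- V + I; s = 1[V >= theta]; V <- V - theta*s."""
    V = V + I                              # linear: HE additions only
    s = (V >= theta).astype(np.int64)      # Fire: the nonlinearity needing
                                           # a bootstrap or poly approx
    V = V - theta * s                      # Reset by subtraction
    return V, s

theta = 10
V = np.zeros(3, dtype=np.int64)
spikes = []
for I in [np.array([4, 12, 7]), np.array([8, 3, 7])]:
    V, s = if_step(V, I, theta)
    spikes.append(s.tolist())
print(spikes)  # a neuron fires whenever its accumulated potential crosses theta
```

Under encryption, the addition and the scalar multiplication by $\theta$ map directly to HE operations, while the comparison inside `if_step` is replaced by one of the table's three mechanisms.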

3. Circuit Depth, Bootstrapping, and Noise Management

Efficient SNN computation under HE requires careful management of homomorphic noise and multiplicative depth.

  • Noise growth: Each linear operation increases noise. Nonlinear polynomials and ciphertext-ciphertext operations accelerate this growth. Bootstrapping resets noise, enabling deeper computation (Li et al., 2023, Li et al., 2023, Njungle et al., 5 Oct 2025).
  • Bootstrapping strategy: In TFHE-based frameworks (FHE-DiSNN/FHE-DiCSNN), each neuron undergoes two bootstraps per timestep: one for FHE-Fire and one for FHE-Reset. Techniques such as Poisson or convolutional encoding allow spikes to be generated before encryption or reduce the number of bootstraps required (Li et al., 2023, Li et al., 2023).
  • Parameter selection: The plaintext modulus $p$ (TFHE/BFV) and scaling factor $\Delta$ (CKKS) must be chosen to prevent overflow and guarantee correct decryption at the end of inference. Tuning the ring dimension $N$, noise-growth parameters, and bootstrapping interval is scenario-dependent (Nikfam et al., 2023, Njungle et al., 5 Oct 2025).

A plausible implication is that SNNs tolerate discretization and quantization well due to their event-driven, binary nature, enabling lower message moduli and faster evaluation cycles compared to conventional DNNs under HE.
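
The overflow constraint on the plaintext modulus can be illustrated with a back-of-the-envelope bound; the fan-in, weight range, and bootstrapping interval below are hypothetical, not parameters from the cited frameworks.

```python
# Back-of-the-envelope parameter check: the plaintext modulus p must
# exceed the largest (signed) membrane potential reachable between
# bootstraps, or decryption wraps around. All numbers are illustrative.
def min_plaintext_modulus(fan_in, w_max, x_max, steps_between_bootstraps):
    # Worst-case accumulated potential before the next Fire/Reset bootstrap
    bound = fan_in * w_max * x_max * steps_between_bootstraps
    # Signed values occupy roughly (-p/2, p/2], so p must exceed 2 * bound
    return 2 * bound + 1

# e.g. 784 inputs, 4-bit weights (|w| <= 15), binary spike inputs, and a
# bootstrap every timestep (as when Fire and Reset run each step)
p_needed = min_plaintext_modulus(fan_in=784, w_max=15, x_max=1,
                                 steps_between_bootstraps=1)
print(p_needed)
```

Binary spike inputs ($x_{\max} = 1$) keep this bound small, which is one concrete way the event-driven structure permits lower message moduli than dense real-valued activations would.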

4. Coding Strategies, Network Architectures, and Accuracy

Input spike coding and network architecture have a material impact on accuracy, runtime, and bootstrapping overhead.

  • Poisson Coding: Traditional spike encoding expands each input value into a long discrete spike train; while amenable to TFHE, this greatly increases the number of homomorphic operations (Li et al., 2023).
  • Convolutional Encoding: Replacing Poisson coding with learned convolutional kernel-based encodings reduces the number of time steps required (e.g., from $T \gg 20$ to $T = 2$–$4$), shortening runtime while preserving accuracy (Li et al., 2023).
  • SNN Architectures: Implemented networks include LeNet-5 SNNs, convolutional SNNs (CSNNs), and deep networks such as SNN ResNet-19. Convolutional layers and pooling are mapped to homomorphic rotations, scalar multiplications, and slot-wise sums (Njungle et al., 5 Oct 2025, Nikfam et al., 2023).
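
The Poisson coding step can be sketched as follows; the train length $T$, pixel intensities, and random seed are illustrative.

```python
import numpy as np

# Poisson rate coding: a normalized intensity x in [0, 1] becomes a
# length-T binary spike train with firing probability x per timestep.
# Encoding happens client-side before encryption, so only binary
# spikes enter the homomorphic pipeline. Values are illustrative.
rng = np.random.default_rng(0)

def poisson_encode(x, T):
    x = np.asarray(x, dtype=float)
    return (rng.random((T,) + x.shape) < x).astype(np.int8)

pixels = np.array([0.0, 0.25, 0.9])
train = poisson_encode(pixels, T=20)
rates = train.mean(axis=0)  # empirical rates approximate the intensities
print(train.shape, rates)
```

The cost is visible in the shape: every input value becomes $T$ encrypted spikes, each of which must pass through the full homomorphic pipeline, which is exactly what convolutional encoding's smaller $T$ mitigates.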

Table: Experimental Accuracy and Runtime (Selected Results)

| Framework | Dataset | Architecture | Accuracy (Enc.) | Plain Model | Time/img (s) |
|-----------|---------|--------------|-----------------|-------------|--------------|
| FHE-DiSNN | MNIST | ($k=30$, $T=20$) | 95.1% | 95.7% | 16 |
| FHE-DiCSNN | MNIST | CSNN | 97.94% | 98.47% | 0.75 |
| PrivSpike (poly.) | MNIST | LeNet-5 | 95.70% | 98.9% | 28 |
| PrivSpike (switch) | MNIST | LeNet-5 | 98.10% | 98.9% | 110 |
| PrivSpike | CIFAR-10 | ResNet-19 | 79.3% | 83.19% | 784–3,264 |
| BFV-SNN | FashionMNIST | LeNet-5 | 93.2% ($t=1000$) | — | 930 |

Notably, "FHE-DiCSNN" achieves encrypted accuracy within 0.53% of plaintext SNN while providing sub-second inference times, exploiting convolutional encoding and parallelized TFHE bootstrapping (Li et al., 2023). "PrivSpike" leverages CKKS's slot packing to scale to deeper SNNs but faces substantial latency when employing high-precision scheme-switching (Njungle et al., 5 Oct 2025).

5. Comparative Analysis and Trade-Offs

Several studies provide head-to-head comparisons between SNN and DNN implementations under HE, as well as among their own framework's variants.

  • SNNs vs. DNNs under HE: Encrypted SNNs notably outperform encrypted DNNs at small plaintext moduli. On FashionMNIST with $t=10$, the SNN achieves 60.5% vs. the DNN's 22.1%, a gain of nearly 40 percentage points, attributed to SNNs' spike-driven robustness to quantization (Nikfam et al., 2023).
  • Nonlinearity evaluation trade-offs: Polynomial approximations are fast but lose some accuracy; scheme-switching (CKKS→TFHE) provides near lossless accuracy on LIF firing at the cost of substantial memory and compute overhead (e.g., 4×–5× longer in PrivSpike) (Njungle et al., 5 Oct 2025).
  • Bootstrapping costs and engineering: In TFHE, each bootstrap resets noise but dominates runtime (e.g., 0.8s per SNN step in FHE-DiSNN, 0.1s in FHE-DiCSNN with parallelization) (Li et al., 2023, Li et al., 2023). Parallel bootstrapping and reduced coding time steps (convolutional encodings) are effective mitigations.
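
The quantization tolerance behind the SNN-vs-DNN gap can be made concrete with a sketch of the uniform weight quantization a small BFV plaintext modulus implies; the weight values and scaling convention below are illustrative, not taken from the cited work.

```python
import numpy as np

# Sketch of the weight quantization implied by a small BFV plaintext
# modulus t: real weights are mapped onto t integer levels before
# encryption. With t = 10, a signed weight retains only ~5 magnitude
# levels, coarse quantization that binary spike inputs tolerate better
# than dense real-valued activations. Weights are illustrative.
def quantize(w, t, w_max):
    levels = t // 2                            # signed range: [-t/2, t/2)
    scale = levels / w_max
    return np.clip(np.round(w * scale), -levels, levels - 1).astype(np.int64)

w = np.array([-0.80, -0.33, 0.02, 0.41, 0.79])
print(quantize(w, t=10, w_max=1.0))
```

Because a spiking layer's output depends on whether an integer sum crosses a threshold, small per-weight rounding errors often leave the spike decision unchanged, whereas in a DNN they propagate through every activation.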

A plausible implication is that, for privacy-critical, resource-constrained neuromorphic inference, SNNs under FHE provide distinct advantages over second-generation DNNs, both in quantization tolerance and homomorphic evaluation tractability.

6. Current Limitations and Prospects

Notable limitations and open research directions remain:

  • Dataset and Network Scaling: Most frameworks are demonstrated only on MNIST-scale datasets and relatively shallow SNNs. Performance, accuracy, and latency for deeper SNNs or larger datasets (e.g., CIFAR-10, ImageNet) are bottlenecked by required bootstrapping and parameter scaling (Li et al., 2023, Njungle et al., 5 Oct 2025).
  • Complex Neuron Models: The current state of the art supports IF/LIF neurons; more complex spiking neurons, such as quadratic (QIF) or exponential (EIF) integrate-and-fire models, would require nontrivial programmable bootstraps or high-degree polynomial approximations, increasing noise and runtime cost (Li et al., 2023).
  • Training on Ciphertexts: Existing work targets inference only. Efficient end-to-end private training, perhaps using homomorphic accumulators or hybrid secure multi-party computation, is an open challenge (Njungle et al., 5 Oct 2025).
  • Hardware Acceleration: TFHE bootstrapping is the primary computational bottleneck. FPGA/GPU hardware acceleration is a proposed solution for sub-second real-time encrypted SNN inference (Li et al., 2023).

Future work may address hybrid HE/enclave approaches, adaptive polynomial degrees per SNN layer, compact coding strategies (burst/population), and mixed-precision HE pipelines to further optimize the accuracy/efficiency trade-off.
