
Spark Transformer: Reactivating Sparsity in FFN and Attention

Published 7 Jun 2025 in cs.LG and stat.ML | (2506.06644v2)

Abstract: The discovery of the lazy neuron phenomenon in trained Transformers, where the vast majority of neurons in their feed-forward networks (FFN) are inactive for each token, has spurred tremendous interests in activation sparsity for enhancing large model efficiency. While notable progress has been made in translating such sparsity to wall-time benefits, modern Transformers have moved away from the ReLU activation function crucial to this phenomenon. Existing efforts on re-introducing activation sparsity often degrade model quality, increase parameter count, complicate or slow down training. Sparse attention, the application of sparse activation to the attention mechanism, often faces similar challenges. This paper introduces the Spark Transformer, a novel architecture that achieves a high level of activation sparsity in both FFN and the attention mechanism while maintaining model quality, parameter count, and standard training procedures. Our method realizes sparsity via top-k masking for explicit control over sparsity level. Crucially, we introduce statistical top-k, a hardware-accelerator-friendly, linear-time approximate algorithm that avoids costly sorting and mitigates significant training slowdown from standard top-$k$ operators. Furthermore, Spark Transformer reallocates existing FFN parameters and attention key embeddings to form a low-cost predictor for identifying activated entries. This design not only mitigates quality loss from enforced sparsity, but also enhances wall-time benefit. Pretrained with the Gemma-2 recipe, Spark Transformer demonstrates competitive performance on standard benchmarks while exhibiting significant sparsity: only 8% of FFN neurons are activated, and each token attends to a maximum of 256 tokens. This sparsity translates to a 2.5x reduction in FLOPs, leading to decoding wall-time speedups of up to 1.79x on CPU and 1.40x on GPU.

Summary

  • The paper introduces a Spark Transformer that enforces activation sparsity using statistical top-k masking in both FFN and attention modules.
  • It achieves up to 2.5× reduced FLOPs and significant speedups on CPUs and GPUs by reallocating parameters effectively.
  • Empirical evaluations demonstrate maintained model quality with only 8% active neuron utilization, highlighting practical efficiency gains.

Introduction

The Spark Transformer architecture introduces activation sparsity in both the feed-forward networks (FFN) and the attention mechanism of Transformers without sacrificing model quality or increasing training complexity. Building on the observed lazy neuron phenomenon, where the majority of FFN neurons remain inactive for each token, Spark Transformer uses statistical top-$k$ sparsity and reallocates existing parameters to enhance computational efficiency. The result is a significant reduction in FLOPs per token and wall-time speedups during decoding on both CPUs and GPUs.

Figure 1: FLOPs per token vs. quality (at 1/6 of full training).

Architectural Contributions

Spark FFN

Spark FFN enforces a sparse activation pattern through explicit top-$k$ masking. Instead of a ReLU-based activation, it uses GELU combined with statistical top-$k$, which approximates the $k$ largest values without sorting. This yields linear-time complexity and avoids the training slowdown of exact top-$k$:

$$\operatorname{Spark\text{-}FFN}(x;\, W_1, W_2, k, r) = \sigma\big(\mathrm{Top}_k(W_1^\top x[:\!r])\big) \odot \big(W_2^\top x[r\!:]\big)$$

where $x[:\!r]$ and $x[r\!:]$ denote the first $r$ and remaining coordinates of the input, $W_1$ and $W_2$ are the (reallocated) gate and up-projection weights, and $\sigma$ is GELU.

Sparsity is realized by repurposing existing FFN parameters to predict active entries, minimizing quality degradation.
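The gating computation above can be illustrated with a minimal NumPy sketch. It uses exact top-$k$ for clarity (the paper's statistical top-$k$ would estimate the threshold instead), and the names `spark_ffn`, `W1`, `W2` are illustrative assumptions, not the paper's code:

```python
import numpy as np

def spark_ffn(x, W1, W2, k, r):
    """Hedged sketch of a Spark-FFN-style gated FFN with top-k masking.

    W1 acts on the first r input dims (the low-cost predictor/gate path),
    W2 on the remaining dims. Only the k largest gate pre-activations
    survive, so most hidden neurons are skipped in the output.
    """
    gate = W1.T @ x[:r]   # predictor/gate path: first r input dims only
    up = W2.T @ x[r:]     # value/up path: remaining input dims
    # Exact top-k via partial selection; the paper uses statistical top-k.
    thresh = np.partition(gate, -k)[-k]
    gate = np.where(gate >= thresh, gate, 0.0)
    # Tanh approximation of GELU; GELU(0) = 0, so sparsity is preserved.
    gelu = 0.5 * gate * (1.0 + np.tanh(np.sqrt(2 / np.pi)
                                       * (gate + 0.044715 * gate**3)))
    return gelu * up      # elementwise gating; at most k nonzero entries

# Usage with illustrative shapes:
rng = np.random.default_rng(1)
d, h, r, k = 64, 256, 16, 20
x = rng.standard_normal(d)
W1 = rng.standard_normal((r, h))      # gate weights (first r dims)
W2 = rng.standard_normal((d - r, h))  # up-projection weights (rest)
out = spark_ffn(x, W1, W2, k, r)
```

Because the gate is zeroed outside the top-$k$ set, the elementwise product leaves at most $k$ active hidden neurons per token.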

Spark Attention

Spark Attention applies the same paradigm to restrict the attention span of each token to a fixed number of tokens, reducing the number of score evaluations per token:

$$\operatorname{Spark\text{-}Attention}(q;\, K_1, K_2, k, r) = \sigma_1\big(\mathrm{Top}_k^{(-\infty)}(K_1^\top q[:\!r]) \odot \sigma_2(K_2^\top q[r\!:])\big)$$

where $\mathrm{Top}_k^{(-\infty)}$ replaces all but the $k$ largest entries with $-\infty$, $q[:\!r]$ and $q[r\!:]$ are the predictor and remaining query coordinates, $K_1$ and $K_2$ the corresponding key slices, and $\sigma_1$ is the softmax.

This reallocation reduces the computational overhead associated with long contexts, accelerating both training and inference.

Figure 2: Architecture of Spark FFN and Spark Attention.
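The $\mathrm{Top}_k^{(-\infty)}$ masking can be sketched in NumPy as follows. This is a simplified single-query illustration using exact top-$k$ (the paper's statistical top-$k$ would estimate the threshold instead), and the function names are assumptions, not the paper's API:

```python
import numpy as np

def topk_neg_inf_mask(scores, k):
    """Keep the k largest attention logits; set the rest to -inf so they
    receive exactly zero weight under softmax."""
    if scores.size <= k:
        return scores
    thresh = np.partition(scores, -k)[-k]
    return np.where(scores >= thresh, scores, -np.inf)

def sparse_attention(q, K, V, k):
    """Hedged sketch: each query attends to at most k keys."""
    logits = K @ q / np.sqrt(q.size)          # scaled dot-product scores
    logits = topk_neg_inf_mask(logits, k)     # drop all but k keys
    w = np.exp(logits - logits.max())         # stable softmax
    w /= w.sum()
    return w @ V                              # weighted sum over <= k values

# Usage with illustrative shapes:
rng = np.random.default_rng(0)
n, d, dv, k = 32, 16, 8, 4
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, dv))
out = sparse_attention(q, K, V, k)
```

Since masked logits are $-\infty$, the softmax assigns them zero probability, so the value aggregation touches at most $k$ rows of $V$.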

Technical Benefits

Statistical Top-$k$ Operator

The statistical top-$k$ operator is a linear-complexity algorithm that avoids the inefficiencies of sorting-based methods. It estimates the masking threshold by fitting a Gaussian distribution to the activations, retaining the differentiability and computational efficiency essential for activation sparsity:

$$\theta(a, k) = \operatorname{mean}(a) + \operatorname{std}(a) \cdot Q\!\left(1 - \frac{k}{d}\right)$$

where $a \in \mathbb{R}^d$ is the pre-activation vector and $Q$ is the standard Gaussian quantile function.

This operator maintains activation sparsity, ensuring only essential neurons contribute to final outputs.
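The threshold formula above translates directly into a few lines of NumPy. This is a minimal sketch under the assumption that the pre-activations are roughly Gaussian; the function name `statistical_topk_mask` is illustrative, not from the paper's code:

```python
import numpy as np
from statistics import NormalDist

def statistical_topk_mask(a: np.ndarray, k: int) -> np.ndarray:
    """Approximate top-k masking in linear time, without sorting.

    Fits a Gaussian to the entries of `a` and uses the standard normal
    quantile function Q to estimate the threshold above which roughly
    k of the d entries fall, per the formula above.
    """
    d = a.size
    # theta = mean(a) + std(a) * Q(1 - k/d)
    theta = a.mean() + a.std() * NormalDist().inv_cdf(1.0 - k / d)
    # Keep entries at or above the estimated threshold; zero the rest.
    return np.where(a >= theta, a, 0.0)

# Usage: on roughly Gaussian pre-activations the mask keeps ~k entries.
rng = np.random.default_rng(0)
a = rng.standard_normal(4096)
masked = statistical_topk_mask(a, k=328)  # ~8% of 4096 entries survive
```

Because the threshold comes from the sample mean and standard deviation, the number of surviving entries is close to $k$ but not exactly $k$; the paper accepts this approximation in exchange for avoiding the sort.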

Empirical Evaluation

The Spark Transformer demonstrates competitive performance across standard benchmarks, maintaining model quality while enforcing substantial sparsity:

  • Sparsity levels: only 8% of FFN neurons are activated per token.
  • Reduced FLOPs: 2.5× reduction in FLOPs per token.
  • Inference speedups: up to 1.79× on CPU and 1.40× on GPU during decoding.

Figure 3: Sparsity level in Spark FFN.

Inference and Training Efficiency

Using customized kernels, the Spark Transformer delivers speedups on hardware platforms constrained by memory bandwidth and compute overhead:

  • Sparse matrix operations: optimized for CPUs and GPUs, using SIMD instructions and custom CUDA kernels to skip unnecessary computation and memory loads.
  • Wall-time improvements: reduced per-token decoding time thanks to lower FLOPs and better memory-bandwidth utilization.

Figure 4: Vector-Masked Matrix Multiplication.
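The vector-masked multiplication of Figure 4 can be illustrated with a short NumPy sketch: when the activation vector is sparse, only the matrix columns for active entries need to be read and accumulated, which is where the memory-bandwidth savings come from. Real kernels implement this with SIMD or CUDA rather than fancy indexing, and `masked_matvec` is an illustrative name, not the paper's API:

```python
import numpy as np

def masked_matvec(W, h):
    """Multiply W by a sparse vector h, touching only active columns.

    With ~8% of entries active, roughly 92% of W is never loaded from
    memory, mirroring the skipped loads in the paper's custom kernels.
    """
    active = np.flatnonzero(h)        # indices of nonzero activations
    return W[:, active] @ h[active]   # accumulate only active columns

# Usage: a highly sparse h touches only a few columns of W.
rng = np.random.default_rng(2)
W = rng.standard_normal((8, 100))
h = np.zeros(100)
h[rng.choice(100, size=8, replace=False)] = rng.standard_normal(8)  # 8% active
y = masked_matvec(W, h)
```

The result matches the dense product `W @ h` exactly, since the skipped columns are multiplied by zeros anyway.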

Conclusion

Spark Transformer represents a significant advancement in achieving efficient large-scale model deployment. Through its novel approach to enforcing activation sparsity, it offers both theoretical benefits and practical optimizations that promise enhanced performance across deployment environments. Future work could explore further integration of Spark mechanisms with quantization and other model optimization strategies to continue improving efficiency in state-of-the-art neural network models.
