Clustered Proximity Attention in Transformers
- Clustered Proximity Attention (CPA) is a fast self-attention mechanism that leverages query/key clustering to achieve linear time and memory complexity in sequence and spatial tasks.
- CPA employs techniques like LSH, K-means clustering, and polar partitioning to restrict attention computations to a small set of candidate keys while ensuring bounded approximation error.
- Empirical results in ASR, NLU, and routing demonstrate that CPA balances speed and accuracy, offering significant memory savings and tunable trade-offs based on application needs.
Clustered Proximity Attention (CPA) is a class of fast, sparsity-inducing self-attention mechanisms for Transformers that reduce the quadratic time and memory complexity of standard softmax-attention to linear in the sequence or node count. CPA algorithms achieve this by leveraging query/key grouping—through clustering or locality-aware partitioning—thereby restricting each attention computation to a small set of relevant candidates, while maintaining empirical accuracy and bounded approximation error in sequence modeling and combinatorial optimization contexts (Vyas et al., 2020, Basharzad et al., 27 Jan 2026).
1. Foundations of Clustered Proximity Attention
CPA was first introduced in the context of sequence modeling as a linear-time approximation to standard self-attention. In the traditional formulation, given queries $Q \in \mathbb{R}^{N \times d}$, keys $K \in \mathbb{R}^{N \times d}$, and values $V \in \mathbb{R}^{N \times d}$ for a sequence of length $N$, the full attention computes the matrix
$$A = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right) \in \mathbb{R}^{N \times N},$$
with the output $V' = AV$, requiring $\mathcal{O}(N^2)$ compute and memory. This scaling is prohibitive for large sequences or graphs (Vyas et al., 2020).
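For reference, the quadratic baseline can be sketched in a few lines of NumPy; the explicit $(N, N)$ score matrix materialized below is exactly the term CPA avoids (array sizes are illustrative):

```python
import numpy as np

def full_attention(Q, K, V):
    """Standard softmax attention: O(N^2) time and memory in sequence length N."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (N, N) matrix -- the quadratic cost
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)      # row-wise softmax
    return A @ V                           # (N, d) output

rng = np.random.default_rng(0)
N, d = 128, 16
Q, K, V = rng.normal(size=(3, N, d))
out = full_attention(Q, K, V)
```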
CPA circumvents this by dividing the $N$ queries into $C$ clusters, computing attention from the cluster centroids to all keys, and broadcasting the aggregate result back to each member query. In spatial-decision tasks, such as vehicle routing, CPA uses geometric locality to form fixed-size spatial clusters and restrict the attention set of each node to its cluster plus special tokens (e.g., depot) (Basharzad et al., 27 Jan 2026).
2. Algorithms and Mathematical Formulations
The core CPA methodology differs slightly by domain and implementation. Two representative algorithms are as follows:
2.1. Sequence Modeling CPA
- Clustering: Queries are assigned to clusters via Locality-Sensitive Hashing (LSH) to $B$-bit binary codes, followed by K-means clustering in Hamming space. Each query belongs to exactly one of $C$ clusters, represented by a partitioning matrix $S \in \{0,1\}^{N \times C}$, with centroids $Q^c \in \mathbb{R}^{C \times d}$ given by the per-cluster means of the queries.
- Attention Calculation: Compute centroid-to-key attention $A^c = \mathrm{softmax}\!\left(Q^c K^\top / \sqrt{d}\right) \in \mathbb{R}^{C \times N}$, then aggregate values $V^c = A^c V$.
- Broadcasting: Each query in cluster $j$ inherits the output $V^c_j$.
- Complexity: $\mathcal{O}(NCd)$ for centroid attention, plus $\mathcal{O}(NCB)$ per K-means iteration for clustering in Hamming space (Vyas et al., 2020).
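The steps above can be condensed into a minimal sketch; for clarity it substitutes plain Euclidean K-means on the raw queries for the LSH/Hamming clustering of the original method, and the cluster and iteration counts are illustrative:

```python
import numpy as np

def clustered_attention(Q, K, V, C=8, iters=10, seed=0):
    """Sketch of clustered attention: attend from C centroids instead of N queries.

    Uses plain Euclidean K-means on the queries for simplicity; the original
    method hashes queries with LSH and clusters the codes in Hamming space.
    """
    N, d = Q.shape
    rng = np.random.default_rng(seed)
    centroids = Q[rng.choice(N, C, replace=False)]
    for _ in range(iters):                               # Lloyd iterations
        assign = ((Q[:, None] - centroids[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(C):
            members = Q[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    scores = centroids @ K.T / np.sqrt(d)                # (C, N): centroid-to-key
    A_c = np.exp(scores - scores.max(1, keepdims=True))
    A_c /= A_c.sum(1, keepdims=True)                     # row-wise softmax
    V_c = A_c @ V                                        # (C, d) aggregated values
    return V_c[assign], assign                           # broadcast to member queries

rng = np.random.default_rng(1)
Q, K, V = rng.normal(size=(3, 64, 8))
out, assign = clustered_attention(Q, K, V, C=4)
```

Only $C \times N$ dot products are formed, so memory stays linear in $N$ for fixed $C$.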
2.2. Geometric CPA for Vehicle Routing
- Partitioning: Each node's coordinates are transformed to polar form around the depot and assigned a partitioning score $s_i = \lambda \tilde{\theta}_i + (1-\lambda)\tilde{r}_i$ (normalized angle $\tilde{\theta}_i$ and radius $\tilde{r}_i$, mixing parameter $\lambda \in [0,1]$). Customers are sorted by score and cut into contiguous clusters of size $k$, giving $\lceil N/k \rceil$ clusters.
- Attention Masking: For each attention head, a node attends only to its own cluster and the depot, reducing complexity to $\mathcal{O}(Nk)$ per layer for constant cluster size $k$.
- Boundary Smoothing: Optional jitter added to the partitioning scores smooths cluster boundaries between rounds (Basharzad et al., 27 Jan 2026).
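A sketch of the partitioning step under the formulation above; the parameter names (`lam`, `k`, `jitter`) are illustrative stand-ins for the paper's notation:

```python
import numpy as np

def polar_clusters(coords, depot, k=8, lam=0.7, jitter=0.0, seed=0):
    """Assign contiguous clusters of size k via a polar partitioning score
    s_i = lam * angle_i + (1 - lam) * radius_i (both normalized)."""
    rel = coords - depot
    theta = np.arctan2(rel[:, 1], rel[:, 0])
    radius = np.linalg.norm(rel, axis=1)
    t = (theta + np.pi) / (2 * np.pi)              # angle normalized to [0, 1)
    r = radius / (radius.max() + 1e-9)             # radius normalized to [0, 1]
    score = lam * t + (1 - lam) * r
    if jitter > 0:                                 # optional boundary smoothing
        score = score + np.random.default_rng(seed).uniform(-jitter, jitter, len(score))
    order = np.argsort(score)                      # stand-in for bucket sort
    cluster = np.empty(len(coords), dtype=int)
    cluster[order] = np.arange(len(coords)) // k   # ceil(N/k) contiguous clusters
    return cluster

rng = np.random.default_rng(0)
coords = rng.uniform(size=(100, 2))
cluster = polar_clusters(coords, depot=np.array([0.5, 0.5]), k=8)
```

Varying `lam` across heads or rounds yields the diverse neighborhoods discussed below.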
| Variant/Domain | Clustering Mechanism | Attention Scope | Complexity per Layer |
|---|---|---|---|
| Sequence modeling (Vyas et al., 2020) | LSH + K-means (Hamming) | Cluster centroids, refined with top-$m$ keys | $\mathcal{O}(NC)$ |
| Spatial routing (Basharzad et al., 27 Jan 2026) | Polar partitioning + bucket sort | Per cluster (size $k$) plus depot | $\mathcal{O}(Nk)$ |
3. Error Analysis and Approximation Guarantees
CPA provides theoretical bounds on the approximation error induced by clustering:
- If $\|Q_i - Q^c_j\|_2 \le \varepsilon$ for the centroid $Q^c_j$ of the cluster assigned to query $i$, then the corresponding attention rows satisfy
$$\|A_i - A^c_j\|_2 \le \varepsilon \,\|K\|_2,$$
where $\|K\|_2$ is the spectral norm of the key matrix.
Thus, attention error is small for queries close to their centroid (Vyas et al., 2020).
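The qualitative behavior is easy to check numerically: shrinking the query-centroid distance `eps` along a fixed direction shrinks the $\ell_1$ gap between the exact and centroid attention rows (synthetic data, illustrative only):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
N, d = 64, 16
K = rng.normal(size=(N, d))
q = rng.normal(size=d)
u = rng.normal(size=d)
u /= np.linalg.norm(u)                     # fixed unit perturbation direction

errs = []
for eps in (0.5, 0.1, 0.01):
    q_c = q + eps * u                      # "centroid" at distance eps from query
    errs.append(np.abs(softmax(q @ K.T) - softmax(q_c @ K.T)).sum())
# the l1 attention error shrinks as eps shrinks
```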
An improved variant, top-$m$ key refinement, selects for each cluster the $m$ keys with the highest centroid attention and computes exact per-query/key dot products for those keys. Letting $\hat{A}_i$ be the refined attention row for query $i$ and $A_i$ the full attention row,
$$\|\hat{A}_i - A_i\|_1 \le \|A^c_j - A_i\|_1,$$
i.e., refinement never increases attention error in $\ell_1$ (Vyas et al., 2020).
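A simplified rendering of the refinement step, assuming a centroid attention matrix `A_c` and a cluster assignment `assign` are already computed; it scales the exact top-$m$ softmax by the centroid probability mass on those keys, rather than reproducing the paper's exact renormalization:

```python
import numpy as np

def softmax_rows(S):
    e = np.exp(S - S.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def refine_top_m(Q, K, V, A_c, assign, m=16):
    """Top-m refinement sketch: per cluster, recompute exact attention over the
    m keys with the largest centroid probability (scaled by the centroid mass
    on those keys) and keep centroid probabilities for the remaining keys."""
    N, d = Q.shape
    out = np.empty((N, V.shape[1]))
    for j in range(A_c.shape[0]):
        members = np.where(assign == j)[0]
        if members.size == 0:
            continue
        top = np.argsort(A_c[j])[-m:]          # m keys with largest centroid prob
        mass = A_c[j][top].sum()               # centroid mass on the top keys
        exact = softmax_rows(Q[members] @ K[top].T / np.sqrt(d)) * mass
        rest = A_c[j].copy()
        rest[top] = 0.0                        # centroid probs outside top-m
        out[members] = exact @ V[top] + rest @ V
    return out

rng = np.random.default_rng(2)
Q, K, V = rng.normal(size=(3, 64, 8))
assign = rng.integers(0, 4, size=64)
A_c = softmax_rows(rng.normal(size=(4, 64)))
out = refine_top_m(Q, K, V, A_c, assign, m=8)
```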
4. Implementation and Pseudocode
The CPA pipeline consolidates into the following steps (for sequence models):
- Project queries to $B$-bit binary codes via LSH.
- Cluster the codes in Hamming space to form $C$ clusters.
- Compute centroids $Q^c$ and centroid-to-key attention $A^c = \mathrm{softmax}(Q^c K^\top / \sqrt{d})$.
- Broadcast cluster attention values to all member queries.
- For top-$m$ key refinement, identify the top-$m$ keys per cluster and recompute exact attention for these per query.
- Aggregate the final output as a sum of centroid-based and refined per-key values.
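The hashing and Hamming-space clustering steps can be sketched as follows, using random-hyperplane LSH; the bit width `B` and cluster count `C` are illustrative:

```python
import numpy as np

def lsh_codes(Q, B=32, seed=0):
    """Hash queries to B-bit binary codes via random hyperplane LSH."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(Q.shape[1], B))        # random projection directions
    return (Q @ planes > 0).astype(np.uint8)         # (N, B) sign-bit codes

def hamming_kmeans(codes, C=8, iters=5, seed=0):
    """K-means in Hamming space; centroids are per-bit majority votes."""
    rng = np.random.default_rng(seed)
    cent = codes[rng.choice(len(codes), C, replace=False)].copy()
    for _ in range(iters):
        dist = (codes[:, None] != cent[None]).sum(-1)  # (N, C) Hamming distances
        assign = dist.argmin(1)
        for j in range(C):
            members = codes[assign == j]
            if len(members):
                cent[j] = (members.mean(0) > 0.5).astype(np.uint8)
    return assign

rng = np.random.default_rng(0)
Q = rng.normal(size=(256, 16))
assign = hamming_kmeans(lsh_codes(Q, B=32), C=8)
```

Because distances are computed on short bit codes rather than $d$-dimensional vectors, the clustering step stays cheap relative to attention itself.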
For geometric CPA in routing, node partitioning uses polar-based bucketization, with each attention head assigned to one of several partitioning rounds (each varying the mixing parameter $\lambda$). Within each layer, projections and attention are computed as in standard Transformers but restricted to cluster-local keys and a single global token (depot). Boundary smoothing randomizes cluster assignments at the edges, improving stability and performance (Basharzad et al., 27 Jan 2026).
5. Empirical Results and Applications
Automatic Speech Recognition (ASR)
- On WSJ and Switchboard, improved CPA (i-CPA) attains 2× speed-up and up to 2% lower PER/WER relative to standard attention under equal FLOP or wall-clock budgets.
- Convergence speed increases: ~50% reduction in GPU hours compared to vanilla Transformers (Vyas et al., 2020).
Natural Language Understanding (Finetuned BERT)
- Using a number of clusters equal to a fraction of the sequence length, i-CPA matches RoBERTa accuracy across GLUE and SQuAD, losing less than 1% F1 (Vyas et al., 2020).
Combinatorial and Vehicle Routing Problems
- SEAFormer with CPA achieves $\mathcal{O}(N)$ memory and computation, enabling training and inference on VRP instances with thousands of nodes (e.g., 5,000–7,000 customers), matching state-of-the-art divide-and-conquer methods in solution quality. Memory savings of 85–92% are reported for large instances compared to full attention. On VRP-100 tasks, multiple partitioning rounds and boundary smoothing further reduce optimality gaps to 0.56% (Basharzad et al., 27 Jan 2026).
6. Trade-offs, Hyperparameters, and Practical Considerations
CPA exposes several hyperparameters:
- Number of clusters $C$ or cluster size $k$: Affects the memory/speed trade-off. Larger clusters improve solution quality but increase per-layer cost.
- Top-$m$ key refinement: Raising $m$ reduces approximation error at extra cost.
- Partitioning rounds and mixing parameter $\lambda$: Multiple rounds produce more diverse local neighborhoods and better performance at marginally increased overhead; $\lambda$ interpolates between angular and radial clustering in spatial CPA.
- Boundary smoothing width: Minor jitter in the partitioning scores avoids hard allocation boundaries.
- Implementation: Efficient attention kernels (e.g., FlashAttention) can be combined with cluster-based masking, with minimal changes to standard Transformer projections.
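For the masking-based variants, the attention mask itself is simple to construct. A dense boolean sketch is shown below; a production kernel would use a block-sparse layout instead:

```python
import numpy as np

def cluster_attention_mask(cluster, depot_idx=0):
    """Boolean mask: position i may attend to j iff both share a cluster or
    one of them is the depot global token (illustrative, dense layout)."""
    M = cluster[:, None] == cluster[None, :]       # cluster-local attention
    M[:, depot_idx] = True                         # everyone attends to the depot
    M[depot_idx, :] = True                         # depot attends to everyone
    return M

cluster = np.repeat(np.arange(4), 8)               # 32 nodes, clusters of size 8
M = cluster_attention_mask(cluster)
```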
Empirical ablation demonstrates that moderate cluster sizes balance speed and quality in large-scale routing; additional partitioning rounds and boundary smoothing halve optimality gaps at minimal cost (Basharzad et al., 27 Jan 2026).
7. Applications, Limitations, and Future Directions
CPA facilitates scalable Transformers in domains where full attention is prohibitive:
- Generative and discriminative sequence modeling (ASR, BERT finetuning) (Vyas et al., 2020).
- Large-scale combinatorial optimization, especially routing on spatial graphs (Basharzad et al., 27 Jan 2026).
Limitations include slight quality drops at extreme compression ratios (very small $C$ or $k$) and task-specific clustering requirements. For best performance, hyperparameters may require tuning per domain and instance size. Extensions to non-Euclidean metrics or dynamic graphs remain open for further research.
CPA achieves substantial reductions in compute and memory overhead with well-bounded approximation error, and functions as a drop-in replacement for full attention in large-scale Transformer models, unlocking applications previously infeasible due to resource constraints (Vyas et al., 2020, Basharzad et al., 27 Jan 2026).