Typical Sparsity Level in Neural Activations
Determine the expected number $k$ of simultaneously active latent concepts (the sparsity level) in neural-network activation vectors across different settings, in order to assess whether unique recovery is feasible under compressed-sensing bounds and to inform the choice of sparsity constraints when training sparse coding models and sparse autoencoders.
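A minimal sketch of how these two quantities interact, under assumed toy dimensions (the dictionary `A`, its size, and the 2% activation rate are all illustrative, not from the source): compressed-sensing theory gives a worst-case uniqueness bound in terms of the dictionary's mutual coherence (Donoho & Elad, 2003), namely that a $k$-sparse code is the unique sparsest explanation whenever $k < \tfrac{1}{2}(1 + 1/\mu(A))$, while the empirical $k$ can be estimated by counting latents above a small activation threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n latent concepts observed through d-dimensional activations.
d, n = 256, 1024
A = rng.standard_normal((d, n))
A /= np.linalg.norm(A, axis=0)  # unit-norm dictionary columns

# Mutual coherence mu(A): largest off-diagonal |<a_i, a_j>|.
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max()

# Worst-case uniqueness bound: recovery is guaranteed for k < (1 + 1/mu) / 2.
k_max = 0.5 * (1.0 + 1.0 / mu)

# Empirical sparsity level: mean number of latents above a small threshold,
# for synthetic codes with an assumed ~2% activation rate.
codes = rng.standard_normal((100, n)) * (rng.random((100, n)) < 0.02)
k_emp = (np.abs(codes) > 1e-6).sum(axis=1).mean()

print(f"coherence bound k_max ~ {k_max:.1f}, empirical k ~ {k_emp:.1f}")
```

The gap between the coherence bound (pessimistic, worst-case) and the empirical activation count is exactly why the expected value of $k$ matters: average-case recovery guarantees tolerate much larger $k$ than the coherence bound suggests, but only if $k$ is actually small relative to the dictionary size.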
References
It is unclear a priori what the expected value of $k$ would be across different activations.
— Stop Probing, Start Coding: Why Linear Probes and Sparse Autoencoders Fail at Compositional Generalisation
(2603.28744 - Pacela et al., 30 Mar 2026) in Implication box "How sparse must the codes be?", Section 3