Small Singular Values Matter: A Random Matrix Analysis of Transformer Models
Abstract: As LLMs become increasingly central to AI applications, understanding their inner workings is essential. In this work, we analyze the spectra of weight matrices in pretrained transformer models through the lens of random matrix theory (RMT) to uncover learned structure. We find that certain regions of the weight spectra deviate markedly from RMT predictions, indicating that these regions encode learned features rather than random noise. Comparing the corresponding singular vectors to the eigenvectors of activation covariance matrices, we observe substantial overlap precisely where the spectra deviate from RMT expectations. Our analysis further shows that small singular values carry significant information: removing them from the model increases perplexity. Although these small values may appear unimportant before task-specific fine-tuning, removing them afterward significantly degrades performance, indicating that fine-tuning refines the model primarily in these spectral regions. Taken together, these results highlight the critical role of small singular values and suggest that removing them from an already aligned transformer can be detrimental, as it may compromise model alignment.
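To make the two analyses in the abstract concrete, the sketch below shows (i) zeroing a weight matrix's smallest singular values via a thin SVD, so perplexity can be compared before and after, and (ii) the overlap between right singular vectors and activation-covariance eigenvectors. This is a minimal illustration assuming PyTorch; the function names, the `keep_ratio` truncation rule, and the `(num_samples, hidden_dim)` activation layout are hypothetical choices, not the paper's exact procedure.

```python
import torch


def drop_small_singular_values(weight: torch.Tensor, keep_ratio: float = 0.9) -> torch.Tensor:
    """Zero the smallest singular values of `weight` and reconstruct it.

    `keep_ratio` (fraction of the largest singular values retained) is an
    illustrative knob; the paper's exact truncation rule is not shown here.
    """
    # Thin SVD: weight = U @ diag(S) @ Vh, with S sorted in descending order.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(keep_ratio * S.numel()))
    S_trunc = S.clone()
    S_trunc[k:] = 0.0  # discard the small-singular-value tail
    return U @ torch.diag(S_trunc) @ Vh


def singular_vector_overlap(weight: torch.Tensor, activations: torch.Tensor) -> torch.Tensor:
    """Overlap |<v_i, e_j>| between the right singular vectors of `weight`
    and the eigenvectors of the activation covariance matrix.

    `activations` is assumed to have shape (num_samples, hidden_dim), where
    hidden_dim matches the input (column) dimension of `weight`.
    """
    _, _, Vh = torch.linalg.svd(weight, full_matrices=False)
    cov = torch.cov(activations.T)            # (hidden_dim, hidden_dim)
    _, eigvecs = torch.linalg.eigh(cov)       # eigenvectors as columns
    return (Vh @ eigvecs).abs()               # entry (i, j) = |<v_i, e_j>|
```

In use, one would apply `drop_small_singular_values` to each attention or MLP weight matrix of a pretrained model and re-measure perplexity on held-out text; a large overlap in `singular_vector_overlap` concentrated where the spectrum deviates from RMT predictions would match the correspondence the abstract describes.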