Transformers as Measure-Theoretic Associative Memory: A Statistical Perspective and Minimax Optimality

Published 2 Feb 2026 in stat.ML and cs.LG (arXiv:2602.01863v1)

Abstract: Transformers excel through content-addressable retrieval and the ability to exploit contexts of, in principle, unbounded length. We recast associative memory at the level of probability measures, treating a context as a distribution over tokens and viewing attention as an integral operator on measures. Concretely, for mixture contexts $\nu = I^{-1} \sum_{i=1}^{I} \mu^{(i)}$ and a query $x_{\mathrm{q}}$, the task decomposes into (i) recall of the relevant component $\mu^{(i^*)}$ and (ii) prediction from $(\mu^{(i^*)}, x_{\mathrm{q}})$. We study learned softmax attention (not a frozen kernel) trained by empirical risk minimization and show that a shallow measure-theoretic Transformer composed with an MLP learns the recall-and-predict map under a spectral assumption on the input densities. We further establish a matching minimax lower bound with the same rate exponent (up to multiplicative constants), proving sharpness of the convergence order. The framework offers a principled recipe for designing and analyzing Transformers that recall from arbitrarily long, distributional contexts with provable generalization guarantees.
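To make the measure-theoretic reading of attention concrete, here is a minimal numerical sketch, not the paper's construction: it treats softmax attention as a kernel integral against the empirical measure of a mixture context and checks that attention mass concentrates on the relevant component. All specifics (Gaussian components, dimension `d`, component count `I`, tokens per component `n`, temperature `beta`) are illustrative assumptions, not taken from the paper.

```python
# A minimal numerical sketch (not the paper's construction): softmax attention
# viewed as an integral operator acting on an empirical mixture of measures.
# The Gaussian components and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, I, n = 4, 3, 200  # token dimension, number of mixture components, tokens per component

# Mixture context nu = I^{-1} * sum_i mu^{(i)}: here each mu^{(i)} is an
# (assumed) Gaussian cloud around a well-separated mean.
means = 5.0 * rng.standard_normal((I, d))
tokens = np.concatenate([m + rng.standard_normal((n, d)) for m in means])  # (I*n, d)

def softmax_attention(x_q, keys, values, beta=1.0):
    """Attention as a kernel integral against the empirical context measure:
    Attn[nu](x_q) = ∫ v(k) softmax(beta <x_q, k>) dnu(k), approximated by a sum."""
    logits = beta * keys @ x_q
    w = np.exp(logits - logits.max())  # numerically stable softmax weights
    w /= w.sum()
    return w @ values, w

# Query drawn near component i* = 1: attention mass should concentrate there
# (recall), and the output should approximate that component's mean (predict).
x_q = means[1] + 0.1 * rng.standard_normal(d)
out, w = softmax_attention(x_q, tokens, tokens, beta=1.0)

mass = w.reshape(I, n).sum(axis=1)  # attention mass per component
print("attention mass per component:", np.round(mass, 3))
print("||output - mean of mu^(i*)||:", np.linalg.norm(out - means[1]))
```

Run as-is, the attention mass concentrates on the component nearest the query (the recall step) and the output approximates that component's mean (a crude stand-in for prediction); the learned attention and MLP head analyzed in the paper are richer than this fixed dot-product kernel.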

