Three tiers of computation in transformers and in brain architectures
Abstract: Human language and logic abilities can be computationally characterized within the well-studied grammar-automata hierarchy. We identify three hierarchical tiers, and the two transitions between them, and show their correspondence to specific abilities of transformer-based LMs. These emergent abilities have often been described in terms of scaling; we show that it is the transition between tiers, rather than scaled size itself, that determines a system's capabilities. Specifically, humans effortlessly process language yet require explicit training to perform arithmetic or logical reasoning tasks; LMs likewise possess language abilities absent from predecessor systems, yet still struggle with logical processing. We introduce a novel benchmark of computational power, report empirical evaluations of humans and fifteen LMs, and, most significantly, offer a theoretically grounded framework to promote careful thinking about these crucial topics. The resulting principled analyses provide explanatory accounts of the abilities and shortfalls of LMs, and suggest actionable insights for expanding their logic abilities.
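As a minimal illustrative sketch, not taken from the paper itself, the tiers of the grammar-automata hierarchy can be separated by textbook languages, assuming the three tiers correspond to the classic regular, context-free, and context-sensitive levels: (ab)^n needs only finite memory, a^n b^n needs unbounded counting (a stack), and a^n b^n c^n lies beyond context-free power. The function names below are hypothetical, chosen for clarity.

```python
# Illustrative recognizers for languages that separate the three classic tiers.
# (Assumed mapping of the paper's tiers to regular / context-free / context-sensitive.)

def is_regular_ab(s: str) -> bool:
    """Tier 1: (ab)^n -- recognizable with finite memory (a two-state DFA)."""
    state = 0  # 0: expecting 'a', 1: expecting 'b'
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 0
        else:
            return False
    return state == 0  # accept only if we end on a completed 'ab' pair

def is_context_free_anbn(s: str) -> bool:
    """Tier 2: a^n b^n -- needs an unbounded counter (stack); no DFA suffices."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b' breaks the a*b* shape
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0

def is_context_sensitive_anbncn(s: str) -> bool:
    """Tier 3: a^n b^n c^n -- provably not context-free; one stack is not enough."""
    n = len(s) // 3
    return len(s) % 3 == 0 and s == "a" * n + "b" * n + "c" * n

# Quick sanity checks on the separating languages:
assert is_regular_ab("abab") and not is_regular_ab("aab")
assert is_context_free_anbn("aaabbb") and not is_context_free_anbn("aabbb")
assert is_context_sensitive_anbncn("aabbcc") and not is_context_sensitive_anbncn("aabbc")
```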