In-context learning emerges in chemical reaction networks without attention
Abstract: We investigate whether chemical processes can perform in-context learning (ICL), a mode of computation typically associated with transformer architectures. ICL allows a system to infer task-specific rules from a sequence of examples without relying solely on fixed parameters. Conventional ICL relies on a pairwise attention mechanism, which is not obviously implementable in chemical systems. However, we show theoretically and numerically that chemical processes can achieve ICL through a mechanism we call subspace projection, in which the entire input vector is mapped onto comparison subspaces, with the dominant projection determining the computational output. We illustrate this mechanism analytically in small chemical systems and show numerically that performance is robust to the choice of input encoding and dynamics, with the number of tunable degrees of freedom in the input encoding as a key limitation. Our results provide a blueprint for realizing ICL in chemical or other physical media and suggest new directions for designing adaptive synthetic chemical systems and understanding possible biological computation in cells.
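The subspace-projection mechanism described above can be illustrated with a minimal numerical sketch: an input vector is projected onto several comparison subspaces, and the subspace with the largest projection magnitude selects the output. The dimensions, the random orthonormal bases, and the `classify` helper below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8        # input dimension (assumed for illustration)
k = 3        # number of comparison subspaces (assumed)
sub_dim = 2  # dimension of each comparison subspace (assumed)

# Build a random orthonormal basis for each comparison subspace.
subspaces = []
for _ in range(k):
    A = rng.standard_normal((d, sub_dim))
    Q, _ = np.linalg.qr(A)  # columns of Q span the subspace
    subspaces.append(Q)

def classify(x):
    """Return the index of the subspace onto which x projects most strongly."""
    proj_norms = [np.linalg.norm(Q.T @ x) for Q in subspaces]
    return int(np.argmax(proj_norms))

x = rng.standard_normal(d)
print(classify(x))  # index of the dominant comparison subspace
```

In the paper's chemical setting, the analogue of these projections is realized by the reaction dynamics rather than explicit linear algebra; this sketch only conveys the winner-take-all selection among subspace projections.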