Neurosymbolic processing in contemporary reasoning models

Determine whether contemporary large language models that perform chain-of-thought reasoning ("reasoning models") actually implement neurosymbolic processing, that is, an internal combination of deep learning with symbolic reasoning, such that their natural-language chain-of-thought traces correspond to genuine underlying computational steps that manipulate symbol-like representations.

Background

In Section 4.3, the paper explores whether the step-by-step reasoning outputs of modern chain-of-thought LLMs might reflect underlying computational processes that are closer to symbol manipulation than previously assumed. The authors discuss indications, such as model-internal shortcuts and non-natural-language tokens that appear to carry learned semantic roles, that suggest a possible mapping between internal computation and explicit reasoning traces.
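
To make the claimed mapping concrete, here is a minimal probing sketch in Python. It is not from the paper: the hidden states, the "carry bit" variable, and all dimensions are synthetic assumptions. The idea it illustrates is that if a chain-of-thought step (e.g., "carry the 1") corresponds to a genuine internal computation, a simple classifier should be able to recover that symbol-like variable from the model's hidden states.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for model hidden states (NOT real model data):
# a binary, symbol-like variable (a "carry bit") is linearly encoded
# along one direction in activation space, plus Gaussian noise.
d_model, n_samples = 64, 1000
carry_bit = rng.integers(0, 2, n_samples)
direction = rng.normal(size=d_model)
hidden = (carry_bit[:, None] * direction[None, :]
          + rng.normal(scale=1.0, size=(n_samples, d_model)))

X_train, X_test, y_train, y_test = train_test_split(
    hidden, carry_bit, test_size=0.2, random_state=0)

# A linear probe: high held-out accuracy would indicate the variable
# named in the reasoning trace is linearly decodable from the states.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")

High probe accuracy on its own would show decodability, not that the model causally uses the variable; that gap is part of why the question stays open.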

They point to neurosymbolic AI as a theoretical framework that integrates deep learning with symbolic reasoning and raise the possibility that some observed behaviors of reasoning models could be manifestations of such integration. However, they emphasize that it remains unresolved whether current models truly realize this neurosymbolic character in practice.
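
For concreteness, the following is a minimal sketch of the neurosymbolic pattern the authors invoke: a neural component proposes symbolic facts with confidences, and an explicit rule engine deduces over the accepted facts. The predicates, rules, threshold, and the hard-coded stand-in for the network are illustrative assumptions, not the paper's (or Sheth, Roy & Gaur's) specification.

from typing import Dict, List, Set, Tuple

def neural_perception(input_id: str) -> Dict[str, float]:
    """Stand-in for a neural network: maps raw input to fact confidences."""
    return {"bird(x)": 0.94, "penguin(x)": 0.91, "plane(x)": 0.03}

# Symbolic knowledge: (premises, conclusion) rules applied by forward chaining.
RULES: List[Tuple[Set[str], str]] = [
    ({"bird(x)"}, "can_fly(x)"),
    ({"penguin(x)"}, "cannot_fly(x)"),  # exception handled by the override below
]

def forward_chain(facts: Set[str]) -> Set[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    # Crude conflict resolution: an explicit negative fact beats the default.
    if "cannot_fly(x)" in derived:
        derived.discard("can_fly(x)")
    return derived

# Pipeline: neural confidences -> thresholded symbols -> rule-based deduction.
scores = neural_perception("input_001")
symbols = {fact for fact, p in scores.items() if p > 0.5}
print(forward_chain(symbols))  # {'bird(x)', 'penguin(x)', 'cannot_fly(x)'}

The open question the section poses is whether anything like this two-stage structure is realized implicitly inside a reasoning model, rather than wired in explicitly as above.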

References

This could be found in neurosymbolic accounts of AI, in which deep learning and symbolic reasoning are combined (Sheth, Roy & Gaur 2023). Whether we have witnessed this here, however, remains an open question.

Simulated Reasoning is Reasoning  (2601.02043 - Kempt et al., 5 Jan 2026) in Section 4.3 (Hidden Neurosymbolism?)