Fast-thinking bias in reasoning models

Ascertain whether large language models that perform chain-of-thought reasoning exhibit a bias analogous to human 'fast thinking' (System 1) as described by Kahneman.

Background

The paper contrasts human 'fast' and 'slow' thinking, noting that fast thinking relies on heuristics and is efficient but susceptible to systematic biases. It then examines whether analogous biases might emerge in reasoning models that operate probabilistically and often rely on heuristics during chain-of-thought inference.

While tools like retrieval-augmented generation can improve factuality, the authors argue that practical, context-heavy domains may still provoke brittle or biased outputs. They explicitly identify uncertainty about whether a fast-thinking-like bias will characterize these models.
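One way to probe for such a bias empirically is to test a model on items from the Cognitive Reflection Test, where the intuitive "fast" answer differs from the correct one. The sketch below is illustrative only: the `query_model` stub and the single CRT item are assumptions for this example, not part of the paper.

```python
# Hypothetical probe for a System-1-style bias in a reasoning model.
# The classic "bat and ball" CRT item: the intuitive (fast) answer is
# 10 cents, but the correct answer is 5 cents.

CRT_ITEMS = [
    {
        "prompt": (
            "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost, in cents?"
        ),
        "intuitive_wrong": "10",
        "correct": "5",
    },
]

def query_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an API client).
    Stubbed here to mimic a purely heuristic responder."""
    return "10"

def fast_thinking_rate(items, ask=query_model) -> float:
    """Fraction of items on which the model returns the intuitive
    wrong answer rather than the correct one."""
    hits = sum(
        1 for item in items
        if ask(item["prompt"]).strip() == item["intuitive_wrong"]
    )
    return hits / len(items)

print(fast_thinking_rate(CRT_ITEMS))  # 1.0 for the heuristic stub above
```

Swapping `query_model` for an actual chain-of-thought model call would turn this into a rough measure of the fast-thinking bias the paper speculates about.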

References

It is an open question whether reasoning models will exhibit a similar 'fast thinking' bias.

Simulated Reasoning is Reasoning  (2601.02043 - Kempt et al., 5 Jan 2026) in Section 5.3 (Robustness vs. Brittleness)