On the Notion that Language Models Reason

Publication: Conference contribution - not published in proceedings or journal › Paper › Research › peer review

Abstract

Language models (LMs) are said to exhibit reasoning, but what does this entail? We assess definitions of reasoning and how key papers in the field of natural language processing (NLP) use the notion, and argue that the definitions provided are not consistent with how LMs are trained, process information, and generate new tokens. To illustrate this incommensurability, we adopt the view that transformer-based LMs implement an implicit finite-order Markov kernel mapping contexts to conditional token distributions. In this view, reasoning-like outputs correspond to statistical regularities and approximate statistical invariances in the learned kernel rather than to the implementation of explicit logical mechanisms. This view illustrates the claim that LMs are "statistical pattern matchers" rather than genuine reasoners, and it clarifies why reasoning-like outputs arise in LMs without any guarantee of logical consistency. This distinction is fundamental to how epistemic uncertainty is evaluated in LMs. We invite a discussion of the importance of how the computational processes of the systems we build and analyze in NLP research are described.
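The abstract's central device is the view of an LM as a finite-order Markov kernel: a map from a bounded context window to a conditional distribution over next tokens, estimated from data rather than derived from logical rules. The toy estimator below is a minimal illustrative sketch of that object (not the paper's method): it fits a k-th order kernel from a token sequence by counting, so "reasoning-like" continuations are exactly the statistical regularities of the corpus. The function name and corpus are hypothetical.

```python
from collections import Counter, defaultdict

def fit_markov_kernel(tokens, order=2):
    """Estimate a finite-order Markov kernel: a map from the last
    `order` tokens (the context) to a conditional next-token distribution."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        counts[context][tokens[i + order]] += 1
    # Normalize raw counts into conditional probabilities P(next | context).
    kernel = {}
    for context, c in counts.items():
        total = sum(c.values())
        kernel[context] = {tok: n / total for tok, n in c.items()}
    return kernel

# Toy corpus: the kernel only reflects observed co-occurrence statistics.
corpus = "a b a b a c a b".split()
kernel = fit_markov_kernel(corpus, order=1)
print(kernel[("a",)])  # "b" follows "a" 3 times, "c" once: {'b': 0.75, 'c': 0.25}
```

A transformer is of course a far richer parameterization of such a kernel, but the point of the view stands: the object learned is a context-conditional distribution, and nothing in its construction guarantees logical consistency across contexts.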
Original language: English
Publication date: 1 Dec 2025
Number of pages: 9
DOI
Status: Published - 1 Dec 2025
Event: EurIPS 2025: Epistemic Intelligence in Machine Learning - Bella Center, Copenhagen, Denmark
Duration: 6 Dec 2025 - 7 Dec 2025
https://eurips.cc/workshops/

Conference

Conference: EurIPS 2025
Location: Bella Center
Country/Territory: Denmark
City: Copenhagen
Period: 06/12/2025 - 07/12/2025
Internet address
