
On the Notion that Language Models Reason

Research output: Contribution to conference (not published in proceedings or journal) › Paper › Research › peer-review

Abstract

Language models (LMs) are said to exhibit reasoning, but what does this entail? We assess definitions of reasoning and how key papers in natural language processing (NLP) use the notion, and argue that the definitions provided are not consistent with how LMs are trained, process information, and generate new tokens. To illustrate this incommensurability, we adopt the view that transformer-based LMs implement an implicit finite-order Markov kernel mapping contexts to conditional token distributions. In this view, reasoning-like outputs correspond to statistical regularities and approximate statistical invariances in the learned kernel rather than to the implementation of explicit logical mechanisms. This view illustrates the claim that LMs are "statistical pattern matchers" rather than genuine reasoners, and it clarifies why reasoning-like outputs arise in LMs without any guarantee of logical consistency. This distinction is fundamental to how epistemic uncertainty is evaluated in LMs. We invite a discussion on the importance of how the computational processes of the systems we build and analyze in NLP research are described.
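To make the "finite-order Markov kernel" view concrete, the following is a minimal illustrative sketch (not from the paper): an order-k kernel estimated from token counts, mapping each length-k context to a conditional distribution over next tokens. The function name `fit_markov_kernel` and the toy corpus are assumptions chosen for illustration; reasoning-like continuations here are just frequencies in the learned kernel.

```python
from collections import Counter, defaultdict

def fit_markov_kernel(tokens, order=2):
    """Estimate a finite-order Markov kernel: context -> conditional token distribution."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        counts[context][tokens[i + order]] += 1
    # Normalize raw counts into conditional probabilities per context.
    return {ctx: {tok: n / sum(c.values()) for tok, n in c.items()}
            for ctx, c in counts.items()}

corpus = "all men are mortal socrates is a man therefore socrates is mortal".split()
kernel = fit_markov_kernel(corpus, order=1)
print(kernel[("socrates",)])  # {'is': 1.0}
```

The kernel reproduces a syllogism-shaped continuation ("socrates" → "is") purely because that pattern recurs in the data, with no logical mechanism involved.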
Original language: English
Publication date: 1 Dec 2025
Number of pages: 9
Publication status: Published - 1 Dec 2025
Event: EurIPS 2025: Epistemic Intelligence in Machine Learning, Bella Center, Copenhagen, Denmark
Duration: 6 Dec 2025 – 7 Dec 2025
https://eurips.cc/workshops/

Conference

Conference: EurIPS 2025
Location: Bella Center
Country/Territory: Denmark
City: Copenhagen
Period: 06/12/2025 – 07/12/2025

Keywords

  • Language models
  • Inference
  • Markov model
  • Reasoning
