
Newer, Larger, Better? A Critique of the Unreflective LLM Adoption in Communication Research

  • Paul Balluff
  • Justin Chun-ting Ho
  • Johannes B. Gruber
  • Sean Palicki
  • Alexis Palmer
  • Luca Rossi
  • Irina Shklovski
  • Chung-hong Chan

Affiliations:

  • GESIS – Leibniz Institute for the Social Sciences
  • National Yang Ming Chiao Tung University
  • Technical University of Munich
  • Dartmouth College
  • Tulane University
  • University of Copenhagen

Research output: Journal article › Research

Abstract

The growing adoption of large language models (LLMs) in political communication research has prompted excitement but also concern. In this opinion piece, we offer an informed and critical overview of common LLM use cases in the field, including text analysis, synthetic data generation, and experiments. We argue that while these tools can be appealing, they often introduce serious epistemic, environmental, and infrastructural trade-offs that are insufficiently acknowledged. Beyond technical limitations, we highlight deeper issues related to scholarly autonomy, methodological opacity, resource inequality, and corporate dependency. Rather than dismissing innovation, we advocate for critical reflexivity and a renewed commitment to methodological rigor. While examining shortcomings of LLMs in current practices, we also point to viable alternatives. In essence, we call for a more deliberate, context-sensitive integration of LLMs in social science, one that prioritizes transparency, sustainability, and scientific integrity.
Original language: English
Journal: Political Communication
Volume: 0
Issue number: 0
Pages (from-to): 1-10
Number of pages: 10
Publication status: Published - 2026

Keywords

  • Large Language Models
  • Research Methods

