Abstract
The adoption of Large Language Models (LLMs) is not only transforming software engineering (SE) practice but is also poised to
fundamentally disrupt how research is conducted in the field. While
perspectives on this transformation range from viewing LLMs as
mere productivity tools to considering them revolutionary forces,
we argue that the SE research community must proactively engage
with and shape the integration of LLMs into research practices, emphasizing human agency throughout this shift. As LLMs rapidly
become integral to SE research—both as tools that support investigations and as subjects of study—a human-centric perspective is
essential. Ensuring human oversight and interpretability is necessary for upholding scientific rigor, fostering ethical responsibility,
and driving advancements in the field. Drawing from discussions
at the 2nd Copenhagen Symposium on Human-Centered AI in SE,
this position paper employs McLuhan’s Tetrad of Media Laws to
analyze the impact of LLMs on SE research. Through this theoretical lens, we examine how LLMs enhance research capabilities
through accelerated ideation and automated processes, make some
traditional research practices obsolete, retrieve valuable aspects of
historical research approaches, and risk reversal effects when taken
to extremes. Our analysis reveals opportunities for innovation and
potential pitfalls that require careful consideration. We conclude
with a call to action for the SE research community to proactively
harness the benefits of LLMs while developing frameworks and
guidelines to mitigate their risks, ensuring the continued rigor and
impact of research in an AI-augmented future.
| Original language | English |
|---|---|
| Journal | CoRR |
| Pages (from-to) | 1503-1507 |
| Number of pages | 4 |
| ISSN | 0000-0000 |
| DOI | |
| Status | Published - 2025 |