Abstract
How do product teams evaluate LLM-powered products? As organizations integrate large language models (LLMs) into digital products, the models' unpredictable behavior makes traditional evaluation approaches inadequate, yet little is known about how practitioners navigate this challenge. Through interviews with nineteen practitioners across diverse sectors, we identify ten evaluation practices spanning informal 'vibe checks' to organizational meta-work. Beyond confirming four documented challenges, we identify a fifth, which we term the results-actionability gap: practitioners gather evaluation data but cannot translate findings into concrete improvements. Drawing on patterns from successful teams, we contribute strategies to bridge this gap, supporting practitioners' formalization journey from ad-hoc interpretive practices (e.g., vibe checks) toward systematic evaluation. Our analysis suggests these interpretive practices are necessary adaptations to LLM characteristics rather than methodological failures. For HCI researchers, this presents an opportunity to support practitioners in systematizing emerging practices rather than developing new evaluation frameworks.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems |
| Number of pages | 17 |
| Place of Publication | New York, NY, USA |
| Publisher | Association for Computing Machinery |
| Publication date | 13 Apr 2026 |
| Pages | 1-17 |
| ISBN (Electronic) | 979-8-4007-2278-3 |
| DOIs | |
| Publication status | Published - 13 Apr 2026 |
| Event | Conference on Human Factors in Computing Systems, Centre de Convencions Internacional de Barcelona, Barcelona, Spain, 13 Apr 2026 → 17 Apr 2026, https://chi2026.acm.org/ |
Conference
| Conference | Conference on Human Factors in Computing Systems |
|---|---|
| Location | Centre de Convencions Internacional de Barcelona |
| Country/Territory | Spain |
| City | Barcelona |
| Period | 13/04/2026 → 17/04/2026 |
| Internet address | https://chi2026.acm.org/ |
Keywords
- Large language models (LLMs)
- Evaluation
- Industry practice
- Interview study
- Best practices
Title
Results-Actionability Gap: Understanding How Practitioners Evaluate LLM Products in the Wild