Don’t Get Too Excited - Eliciting Emotions in LLMs

Research output: Article in proceedings › Research › peer-review

Abstract

This paper investigates the challenges of affect control in large language models (LLMs), focusing on their ability to express appropriate emotional states during extended dialogues. We evaluate state-of-the-art open-weight LLMs to assess their affective expressive range in terms of arousal and valence. Our study employs a novel methodology combining LLM-based sentiment analysis with multi-turn dialogue simulations between LLMs.
We quantify the models' capacity to express a wide spectrum of emotions and how they fluctuate during interactions. Our findings reveal significant variations among LLMs in their ability to maintain consistent affect, with some models demonstrating more stable emotional trajectories than others.
Furthermore, we identify key challenges in affect control, including difficulties in producing and maintaining extreme emotional states and limitations in adapting affect to changing conversational contexts. These findings have important implications for the development of more emotionally intelligent AI systems and highlight the need for improved affect modelling in LLMs.
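The abstract's core idea — scoring each dialogue turn in valence-arousal space and measuring how stable the resulting trajectory is — can be illustrated with a minimal sketch. This is not the paper's implementation: the lexicon-based `score_turn` below is a hypothetical stand-in for the LLM-based sentiment judge the study uses, and `trajectory_stability` is one assumed way to quantify affect consistency (mean standard deviation of the two dimensions).

```python
import statistics

def score_turn(text: str) -> tuple[float, float]:
    """Stub sentiment judge returning (valence, arousal) in [-1, 1].
    Hypothetical toy lexicon; the paper instead prompts an LLM to rate turns."""
    lexicon = {
        "thrilled": (0.9, 0.8), "calm": (0.6, -0.5),
        "furious": (-0.8, 0.9), "bored": (-0.4, -0.6),
    }
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    if not hits:
        return (0.0, 0.0)  # no affect words found: neutral
    return (statistics.mean(v for v, _ in hits),
            statistics.mean(a for _, a in hits))

def trajectory_stability(turns: list[str]) -> float:
    """Mean population std-dev of valence and arousal across turns.
    Lower values indicate a more stable emotional trajectory."""
    scores = [score_turn(t) for t in turns]
    valences = [v for v, _ in scores]
    arousals = [a for _, a in scores]
    return (statistics.pstdev(valences) + statistics.pstdev(arousals)) / 2

dialogue = ["I am thrilled today", "Still calm and thrilled", "Now furious"]
print(trajectory_stability(dialogue))
```

A model that holds a steady affect produces a stability score near zero, while one that swings between extreme states scores higher; comparing these scores across models is one way to operationalise the "consistent affect" finding described above.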
Original language: English
Title of host publication: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Number of pages: 9
Publisher: Association for Computing Machinery
Publication date: 2025
ISBN (Electronic): 979-8-4007-1395-8
Publication status: Published - 2025
Event: Human Factors in Computing Systems - Yokohama, Japan
Duration: 26 Apr 2025 - 1 May 2025
https://chi2025.acm.org/

Conference

Conference: Human Factors in Computing Systems
Location: Japan
Country/Territory: Japan
City: Yokohama
Period: 26/04/2025 - 01/05/2025

Keywords

  • Emotion in AI
  • Large Language Models (LLMs)
  • Valence-arousal space
  • Emotion recognition
  • Affective computing
  • Human-Computer Interaction (HCI)
  • Emotionally intelligent agents
  • Trust and engagement in AI
  • Conversational agents
  • Natural language processing (NLP)

