Don’t Get Too Excited - Eliciting Emotions in LLMs

Publication: Conference article in proceedings or book/report chapter · Conference contribution in proceedings · Research · Peer-reviewed

Abstract

This paper investigates the challenges of affect control in large language models (LLMs), focusing on their ability to express appropriate emotional states during extended dialogues. We evaluate state-of-the-art open-weight LLMs to assess their affective expressive range in terms of arousal and valence. Our study employs a novel methodology combining LLM-based sentiment analysis with multi-turn dialogue simulations between LLMs.
We quantify the models' capacity to express a wide spectrum of emotions and how they fluctuate during interactions. Our findings reveal significant variations among LLMs in their ability to maintain consistent affect, with some models demonstrating more stable emotional trajectories than others.
Furthermore, we identify key challenges in affect control, including difficulties in producing and maintaining extreme emotional states and limitations in adapting affect to changing conversational contexts. These findings have important implications for the development of more emotionally intelligent AI systems and highlight the need for improved affect modelling in LLMs.
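The abstract describes the methodology only at a high level: each turn of a simulated dialogue is rated for valence and arousal (here by an LLM-based sentiment rater), and the resulting per-turn trajectory is examined for stability. A minimal sketch of that aggregation step, assuming scores have already been produced (the function name, score ranges, and example values below are illustrative, not the paper's actual pipeline):

```python
from statistics import mean, pstdev

def affect_trajectory(turn_scores):
    """Summarise a dialogue's affect trajectory.

    turn_scores: list of (valence, arousal) pairs, one per dialogue turn,
    with valence in [-1, 1] and arousal in [0, 1]. In the paper's setup
    these would come from an LLM-based sentiment rater; here they are
    hand-written for illustration.

    Returns mean affect plus per-turn spread; lower spread suggests a
    more stable emotional trajectory across the conversation.
    """
    valence = [v for v, _ in turn_scores]
    arousal = [a for _, a in turn_scores]
    return {
        "mean_valence": mean(valence),
        "mean_arousal": mean(arousal),
        "valence_spread": pstdev(valence),
        "arousal_spread": pstdev(arousal),
    }

# Hypothetical 5-turn dialogues: one settling into calm positivity,
# one swinging between emotional extremes.
stable = affect_trajectory(
    [(0.60, 0.40), (0.55, 0.35), (0.60, 0.30), (0.65, 0.30), (0.60, 0.25)]
)
erratic = affect_trajectory(
    [(0.90, 0.90), (-0.70, 0.20), (0.80, 0.95), (-0.50, 0.10), (0.70, 0.80)]
)

# The stable dialogue shows far lower per-turn spread in both dimensions.
assert stable["valence_spread"] < erratic["valence_spread"]
assert stable["arousal_spread"] < erratic["arousal_spread"]
```

Comparing such spread statistics across models is one way to operationalise the paper's finding that some LLMs maintain more consistent affect than others.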
Original language: English
Title: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Number of pages: 9
Publisher: Association for Computing Machinery
Publication date: 2025
ISBN (electronic): 979-8-4007-1395-8
DOI
Status: Published - 2025
Event: Conference on Human Factors in Computing Systems - Yokohama, Japan
Duration: 26 Apr 2025 - 1 May 2025
https://dblp.org/db/conf/chi/index.html
https://chi2025.acm.org/
