MuMTAffect - A Multimodal Multitask Affective Framework for Personality and Emotion Recognition from Physiological Signals

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

We present MuMTAffect, a novel Multimodal Multitask Affective Embedding Network designed for joint emotion classification and personality prediction (re-identification) from short physiological signal segments. MuMTAffect integrates multiple physiological modalities (pupil dilation, eye gaze, facial action units, and galvanic skin response) using dedicated transformer-based encoders for each modality and a fusion transformer to model cross-modal interactions. Inspired by the Theory of Constructed Emotion, the architecture explicitly separates core-affect encoding (valence–arousal) from higher-level conceptualization, thereby grounding predictions in contemporary affective neuroscience. Personality-trait prediction is leveraged as an auxiliary task to generate robust, user-specific affective embeddings, significantly enhancing emotion recognition performance. We evaluate MuMTAffect on the AFFEC dataset, demonstrating that stimulus-level emotional cues (Stim Emo) and galvanic skin response substantially improve arousal classification, while pupil and gaze data enhance valence discrimination. The inherent modularity of MuMTAffect allows effortless integration of additional modalities, ensuring scalability and adaptability. Extensive experiments and ablation studies underscore the efficacy of our multimodal multitask approach in creating personalized, context-aware affective computing systems, highlighting pathways for further advancements in cross-subject generalization.
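The paper itself is not reproduced on this page, so the following is only a minimal PyTorch sketch of the architecture the abstract describes: one transformer encoder per physiological modality, a fusion transformer over the modality embeddings, a core-affect (valence–arousal) output, and separate emotion and auxiliary personality heads. All module names, dimensions, layer counts, and the exact wiring of the core-affect output into the task heads are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a MuMTAffect-style network; sizes and wiring are assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one physiological modality (e.g. pupil, gaze, AUs, GSR)."""
    def __init__(self, in_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                  # x: (batch, time, in_dim)
        h = self.encoder(self.proj(x))     # (batch, time, d_model)
        return h.mean(dim=1)               # temporal pooling -> (batch, d_model)

class MuMTAffectSketch(nn.Module):
    def __init__(self, modality_dims, d_model=64, n_emotions=7, n_traits=5):
        super().__init__()
        # One encoder per modality; adding a modality (e.g. stimulus-level
        # emotion cues) is just one more entry in this list.
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, d_model) for d in modality_dims])
        fusion_layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, 2)
        # Core affect (valence, arousal) is predicted first, then fed back in
        # for the higher-level "conceptualization" heads.
        self.core_affect = nn.Linear(d_model, 2)
        self.emotion_head = nn.Linear(d_model + 2, n_emotions)
        self.personality_head = nn.Linear(d_model + 2, n_traits)  # auxiliary task

    def forward(self, inputs):             # list of (batch, time, dim) tensors
        tokens = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=1)
        fused = self.fusion(tokens).mean(dim=1)   # (batch, d_model)
        core = self.core_affect(fused)            # valence-arousal estimate
        z = torch.cat([fused, core], dim=-1)
        return core, self.emotion_head(z), self.personality_head(z)
```

Under this sketch, the multitask objective would be a weighted sum of the emotion-classification loss and the auxiliary personality loss, and the per-modality ModuleList mirrors the modularity the abstract claims.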
Original language: English
Title of host publication: Proceedings of the 3rd ACM Multimedia International Workshop on Multimodal and Responsible Affective Computing
Number of pages: 9
Publisher: Association for Computing Machinery
Publication date: 2025
Pages: 100–108
ISBN (Electronic): 9798400720529
DOIs:
Publication status: Published - 2025
Event: Workshop on Multimodal and Responsible Affective Computing - Royal Dublin Convention Centre, Dublin, Ireland
Duration: 25 Nov 2025 – 29 Nov 2025
Conference number: 3
https://react-ws.github.io/2025/

Workshop

Workshop: Workshop on Multimodal and Responsible Affective Computing
Number: 3
Location: Royal Dublin Convention Centre
Country/Territory: Ireland
City: Dublin
Period: 25/11/2025 – 29/11/2025
Internet address: https://react-ws.github.io/2025/

Keywords

  • multimodal emotion recognition
  • personality prediction
  • physiological signals
  • transformers
  • multitask learning
  • cognitive modeling
  • affective computing
  • theory of constructed emotion
