Mining Multimodal Sequential Patterns: A Case Study on Affect Detection

Héctor Pérez Martínez, Georgios N. Yannakakis

    Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

    Abstract

    Temporal data from multimodal interaction such as speech and bio-signals cannot be easily analysed without a preprocessing phase through which some key characteristics of the signals are extracted. Typically, standard statistical signal features such as average values are calculated prior to the analysis and, subsequently, are presented either to a multimodal fusion mechanism or a computational model of the interaction. This paper proposes a feature extraction methodology based on frequent sequence mining within and across multiple modalities of user input. The proposed method is applied to the fusion of physiological signals and gameplay information in a game survey dataset. The obtained sequences are analysed and used as predictors of user affect, resulting in computational models of equal or higher accuracy than models built on standard statistical features.
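
    The core of the proposed methodology is frequent sequence mining over fused event streams: each modality is discretized into a stream of event symbols, the streams are interleaved in time, and subsequences that recur across many interaction sessions are kept as candidate features. The Python sketch below illustrates the general idea only; it is not the authors' implementation, and the event symbols (e.g. HR_UP, BVP_PEAK), the pattern length and the support threshold are hypothetical.

    from collections import Counter
    from itertools import combinations

    def frequent_subsequences(sessions, length, min_support):
        """Count order-preserving subsequences (gaps allowed) of a given length
        and keep those occurring in at least `min_support` sessions."""
        counts = Counter()
        for seq in sessions:
            # Each distinct pattern counts once per session, so the final count
            # equals the number of sessions supporting the pattern.
            # (Brute-force enumeration; a real miner would prune candidates,
            # e.g. GSP- or PrefixSpan-style, for long sequences.)
            patterns = {tuple(seq[i] for i in idx)
                        for idx in combinations(range(len(seq)), length)}
            counts.update(patterns)
        return {p: c for p, c in counts.items() if c >= min_support}

    # Hypothetical fused streams: gameplay events (JUMP, HIT) interleaved by
    # timestamp with discretized physiological events (HR_UP = heart-rate rise,
    # BVP_PEAK = blood-volume-pulse peak).
    sessions = [
        ['JUMP', 'HR_UP', 'HIT', 'BVP_PEAK', 'JUMP'],
        ['HIT', 'HR_UP', 'BVP_PEAK', 'JUMP'],
        ['JUMP', 'HIT', 'HR_UP', 'BVP_PEAK'],
    ]

    for pattern, support in frequent_subsequences(sessions, 2, 3).items():
        print(pattern, support)  # e.g. ('HR_UP', 'BVP_PEAK') 3

    In such a setup, each frequent pattern would then serve as a count or binary feature for an affect predictor, in place of (or alongside) standard statistical features such as signal averages.
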
    Original language: English
    Title of host publication: ICMI '11. Proceedings of the 13th international conference on multimodal interfaces
    Number of pages: 8
    Publisher: Association for Computing Machinery
    Publication date: 2011
    Pages: 3-10
    ISBN (Print): 978-1-4503-0641-6
    Publication status: Published - 2011
