Mining Multimodal Sequential Patterns: A Case Study on Affect Detection

Héctor Pérez Martínez, Georgios N. Yannakakis

    Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › Peer-reviewed

    Abstract

    Temporal data from multimodal interaction, such as speech and bio-signals, cannot be easily analysed without a preprocessing phase that extracts key characteristics of the signals. Typically, standard statistical signal features such as average values are calculated prior to the analysis and subsequently presented either to a multimodal fusion mechanism or to a computational model of the interaction. This paper proposes a feature extraction methodology based on frequent sequence mining within and across multiple modalities of user input. The proposed method is applied to the fusion of physiological signals and gameplay information in a game survey dataset. The obtained sequences are analysed and used as predictors of user affect, resulting in computational models of equal or higher accuracy compared to models built on standard statistical features.
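
    While the paper itself gives the full method, a minimal sketch can make the pipeline concrete: each modality is discretised into a symbol stream, the streams are fused in temporal order, and frequent subsequences of the fused stream become features. The Python below is an illustration only, not the paper's algorithm; it mines contiguous n-grams rather than general (gapped) sequential patterns, and the symbols ('S+'/'S-' for rising/falling skin conductance, 'J'/'D' for hypothetical jump and death gameplay events), the pattern lengths, and the support threshold are all assumptions.

    from collections import Counter

    def ngrams(seq, n):
        """All contiguous length-n subsequences of a symbol sequence."""
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    def mine_frequent_patterns(sessions, max_len=3, min_support=0.6):
        """Patterns occurring in at least min_support of the sessions.

        Each session is one symbol sequence that already interleaves
        the modalities (physiology and gameplay) in temporal order.
        """
        frequent = {}
        for n in range(2, max_len + 1):
            support = Counter()
            for seq in sessions:
                for pattern in set(ngrams(seq, n)):  # count once per session
                    support[pattern] += 1
            for pattern, count in support.items():
                if count / len(sessions) >= min_support:
                    frequent[pattern] = count / len(sessions)
        return frequent

    def pattern_features(session, patterns):
        """Feature vector: occurrence count of each frequent pattern."""
        counts = Counter()
        for n in {len(p) for p in patterns}:
            counts.update(ngrams(session, n))
        return [counts[p] for p in patterns]

    # Hypothetical fused sessions mixing physiological symbols and
    # gameplay events, already sorted by timestamp.
    sessions = [
        ['S+', 'J', 'S+', 'D', 'S-'],
        ['J', 'S+', 'D', 'S-', 'J'],
        ['S-', 'J', 'S+', 'D', 'S-'],
    ]
    patterns = sorted(mine_frequent_patterns(sessions))
    features = [pattern_features(s, patterns) for s in sessions]

    The per-session count vectors produced this way would then stand in for, or sit alongside, the standard statistical features (means, variances) as inputs to an affect predictor, which is the comparison the abstract reports.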
    Original language: English
    Title: ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
    Number of pages: 8
    Publisher: Association for Computing Machinery
    Publication date: 2011
    Pages: 3-10
    ISBN (Print): 978-1-4503-0641-6
    Status: Published - 2011

    Keywords

    • Temporal Data Analysis
    • Multimodal Interaction
    • Feature Extraction
    • Frequent Sequence Mining
    • User Affect Prediction
