Analysis of the Effect of Dataset Construction Methodology on Transferability of Music Emotion Recognition Models

Research output: Article in proceedings › Research › peer-review


Indexing and retrieving music based on emotion is a powerful retrieval paradigm with many applications. Traditionally, studies in the field of music emotion recognition have focused on training and testing supervised machine learning models on a single music dataset. To be useful for today's vast music libraries, however, such models must be widely applicable beyond the dataset on which they were created. In this work, we analyze to what extent models trained on one music dataset can predict emotion in another dataset constructed using a different methodology, by conducting cross-dataset experiments with three publicly available datasets. Our results suggest that training a prediction model on a homogeneous dataset with carefully collected emotion annotations yields a better foundation than training on a larger, more varied dataset with less reliable annotations.
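The cross-dataset protocol the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the datasets are synthetic stand-ins, and the feature dimension, model (an SVR on audio-like features), and label-noise levels are all assumptions chosen only to show the train-on-A, test-on-B structure.

```python
# Hypothetical cross-dataset evaluation sketch (synthetic data; the
# datasets, features, and model are illustrative, not from the paper).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def make_dataset(n, noise):
    """Stand-in for a MER dataset: audio features X, emotion labels y."""
    X = rng.normal(size=(n, 20))
    w = np.linspace(1.0, 0.1, 20)          # shared underlying relationship
    y = X @ w + rng.normal(scale=noise, size=n)
    return X, y

# Dataset A: smaller, carefully annotated (low label noise).
# Dataset B: larger but with less reliable annotations (high label noise).
X_a, y_a = make_dataset(300, noise=0.5)
X_b, y_b = make_dataset(1500, noise=2.0)

def cross_dataset_r2(X_train, y_train, X_test, y_test):
    """Train on one dataset, evaluate on the other (no leakage:
    the feature scaler is fitted on the training dataset only)."""
    scaler = StandardScaler().fit(X_train)
    model = SVR(kernel="rbf").fit(scaler.transform(X_train), y_train)
    return r2_score(y_test, model.predict(scaler.transform(X_test)))

print("A -> B R^2:", cross_dataset_r2(X_a, y_a, X_b, y_b))
print("B -> A R^2:", cross_dataset_r2(X_b, y_b, X_a, y_a))
```

Running both transfer directions, as the paper's experiments do across three dataset pairs, exposes asymmetries that a single within-dataset split would hide.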
Original language: English
Title of host publication: Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR)
Editors: Cathal Gurrin, Björn Þór Jónsson, Noriko Kando, Klaus Schöffmann, Yi-Ping Phoebe Chen, Noel E. O'Connor
Place of publication: Dublin, Ireland
Publisher: Association for Computing Machinery
Publication date: Jun 2020
ISBN (Electronic): 978-1-4503-7087-5
Publication status: Published - Jun 2020

Research areas

  • Music emotion recognition
  • Cross-dataset
  • Model transferability



ID: 85596862