Analysis of the Effect of Dataset Construction Methodology on Transferability of Music Emotion Recognition Models

Sabina Hult, Line Bay Kreiberg, Sami Sebastian Brandt, Björn Thór Jónsson

Research output: Conference article in proceedings · Research · peer-review

Abstract

Indexing and retrieving music based on emotion is a powerful retrieval paradigm with many applications. Traditionally, studies in the field of music emotion recognition have focused on training and testing supervised machine learning models on a single music dataset. To be useful for today's vast music libraries, however, such machine learning models must be widely applicable beyond the dataset for which they were created. In this work, we analyze to what extent models trained on one music dataset can predict emotion in another dataset constructed using a different methodology, by conducting cross-dataset experiments with three publicly available datasets. Our results suggest that training a prediction model on a homogeneous dataset with carefully collected emotion annotations yields a better foundation than training on a larger, more varied dataset with less reliable annotations.
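The cross-dataset protocol described in the abstract can be pictured as: fit a supervised model on one corpus's audio features and emotion annotations, then score it on a corpus collected under a different methodology. Below is a minimal sketch of that idea using scikit-learn; the feature loader, dataset sizes, and the SVR regressor are illustrative placeholders and assumptions, not the authors' actual pipeline or the datasets evaluated in the paper.

```python
# Illustrative cross-dataset evaluation sketch (NOT the paper's actual pipeline).
# Train an emotion regressor on one dataset's features/annotations and test it
# on a second dataset constructed with a different methodology.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def load_dataset(n_tracks, n_features=20):
    """Placeholder loader: returns audio features X and valence annotations y.
    In a real experiment these would come from the actual corpora."""
    X = rng.normal(size=(n_tracks, n_features))
    y = np.tanh(X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n_tracks))
    return X, y

# Two corpora built under different annotation methodologies (sizes are arbitrary).
X_train, y_train = load_dataset(n_tracks=800)    # e.g. a small, carefully annotated corpus
X_test, y_test = load_dataset(n_tracks=1200)     # e.g. a larger, noisier corpus

# Fit on one dataset, then measure how well the model transfers to the other.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("Cross-dataset R^2:", r2_score(y_test, model.predict(X_test)))
```

In this setup the transfer score can be compared against a within-dataset baseline (train/test split on the same corpus) to quantify how much performance is lost when the annotation methodology changes.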
Original language: English
Title of host publication: Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR)
Editors: Cathal Gurrin, Björn Þór Jónsson, Noriko Kando, Klaus Schöffmann, Yi-Ping Phoebe Chen, Noel E. O'Connor
Place of publication: Dublin, Ireland
Publisher: Association for Computing Machinery
Publication date: Jun 2020
Pages: 316-320
ISBN (Electronic): 978-1-4503-7087-5
DOIs:
Publication status: Published - Jun 2020

Keywords

  • Music emotion recognition
  • Cross-dataset
  • Model transferability
