Indexing and retrieving music based on emotion is a powerful retrieval paradigm with many applications. Traditionally, studies in the field of music emotion recognition have focused on training and testing supervised machine learning models using a single music dataset. To be useful for today's vast music libraries, however, such machine learning models must be widely applicable beyond the dataset for which they were created. In this work, we analyze to what extent models trained on one music dataset can predict emotion in another dataset constructed using a different methodology, by conducting cross-dataset experiments with three publicly available datasets. Our results suggest that training a prediction model on a homogeneous dataset with carefully collected emotion annotations yields a better foundation than training it on a larger, more varied dataset with less reliable annotations.
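The cross-dataset protocol described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature arrays, label construction, and model choice below are hypothetical stand-ins for audio descriptors and emotion annotations from two differently constructed datasets.

```python
# Sketch of cross-dataset evaluation: train on dataset A, test on dataset B.
# All data here is synthetic; real experiments would use audio features and
# emotion annotations (e.g. valence/arousal) from the respective datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-ins for two datasets sharing a feature space but built with
# different methodologies (different sizes, different label noise).
X_a = rng.normal(size=(200, 10))
y_a = X_a[:, 0] + 0.1 * rng.normal(size=200)   # cleaner annotations
X_b = rng.normal(size=(100, 10))
y_b = X_b[:, 0] + 0.3 * rng.normal(size=100)   # noisier annotations

# Fit on A only, then measure generalization on the held-out dataset B.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_a, y_a)
print(round(r2_score(y_b, model.predict(X_b)), 3))
```

Repeating this in both directions (A→B and B→A) across all dataset pairs is what reveals whether a homogeneous, carefully annotated training set transfers better than a larger but noisier one.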
Title of host publication
Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR)
Cathal Gurrin, Björn Þór Jónsson, Noriko Kando, Klaus Schöffmann, Yi-Ping Phoebe Chen, Noel E. O'Connor