Mood Expression in Real-Time Computer Generated Music using Pure Data

Marco Scirea, Mark Nelson, Yun-Gyung Cheong, Byung Chull Bae

Research output: Conference article in proceedings · Research · peer-reviewed

Abstract

This paper presents an empirical study investigating whether procedurally generated music based on a set of musical features can elicit a target mood in the listener. Drawing on the two-dimensional affect model proposed by Russell, the musical features we chose to express mood are intensity, timbre, rhythm, and dissonance. The eight moods investigated in this study are bored, content, happy, miserable, tired, fearful, peaceful, and alarmed. We created eight short music clips using the Pure Data (PD) programming language, each representing a particular mood. We carried out a pilot study and present preliminary results.
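The abstract names the four feature controls but does not spell out how a mood's position on Russell's valence-arousal circumplex drives them. As a minimal sketch of one plausible mapping, assuming approximate (valence, arousal) coordinates for the eight moods and simple linear feature mappings (none of which are specified in the abstract), the following Python fragment illustrates the idea:

```python
# A minimal sketch, NOT the authors' implementation: places the eight moods
# on Russell's valence-arousal circumplex and maps each point to the four
# musical features named in the abstract. All coordinates and mapping
# functions below are illustrative assumptions.

# Approximate (valence, arousal) positions, each in [-1, 1], read off
# Russell's (1980) circumplex; the paper may use different values.
MOODS = {
    "alarmed":   (-0.4,  0.9),
    "fearful":   (-0.6,  0.7),
    "happy":     ( 0.8,  0.5),
    "content":   ( 0.7, -0.3),
    "peaceful":  ( 0.6, -0.6),
    "tired":     (-0.1, -0.9),
    "bored":     (-0.6, -0.7),
    "miserable": (-0.8, -0.1),
}

def mood_to_features(valence: float, arousal: float) -> dict:
    """Map a circumplex point to the four feature controls.

    The directions of these mappings (higher arousal -> greater intensity
    and faster rhythm, lower valence -> darker timbre and more dissonance)
    are common heuristics in affective music generation, assumed here
    purely for illustration.
    """
    return {
        "intensity":  0.5 + 0.5 * arousal,          # loudness/density, 0..1
        "timbre":     0.5 + 0.5 * valence,          # brightness, 0..1
        "rhythm_bpm": 70 + 60 * (arousal + 1) / 2,  # tempo, 70..130 BPM
        "dissonance": max(0.0, -valence),           # 0 when valence >= 0
    }

for mood, (v, a) in MOODS.items():
    print(mood, mood_to_features(v, a))
```

In a real-time setting, parameters like these would typically be streamed to a running Pure Data patch, for example over a network socket received with Pd's [netreceive] object.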
Original language: English
Title of host publication: Proceedings of the ICMPC-APSCOM 2014 Joint Conference
Number of pages: 5
Publisher: College of Music, Yonsei University
Publication date: Aug 2014
Pages: 263-267
Publication status: Published - Aug 2014

Keywords

  • music
  • mood
  • affective computing
