Music has the ability to evoke a wide variety of emotions in human listeners. Research has shown that treatment for depression and other mental health disorders is significantly more effective when it is complemented by music therapy. However, because each person experiences music-induced emotions differently, there is no systematic way to accurately predict how individuals will respond to different types of music. In this experiment, a model is created to predict listeners’ emotional responses to music from both their electroencephalographic (EEG) data and the acoustic features of the music. Recursive feature elimination (RFE) is used to select the most relevant and best-performing EEG and musical features, and a regression model is fit whose predictions correlate strongly with patients’ actual music-induced emotional responses. With a mean correlation of r = 0.788, this model is significantly more accurate than previous work on predicting music-induced emotions (e.g., a 370% increase in accuracy compared with Daly et al. (2015)). The results of this regression fit suggest that people’s emotional responses to music can be accurately predicted from brain activity. Furthermore, by applying the model to features extracted from any musical clip, the music most likely to evoke a happier, more pleasant emotional state in a given individual can be identified. This may allow music therapy practitioners, as well as music listeners more broadly, to select music that will improve mood and mental health.
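As a rough illustration of the pipeline described above (not the authors’ code), the sketch below uses scikit-learn’s RFE wrapped around an ordinary least-squares regressor and scores the fit as the Pearson correlation between predicted and actual responses. The feature matrix, target ratings, choice of linear regression, and the number of retained features are all placeholder assumptions; real inputs would come from EEG preprocessing and acoustic feature extraction.

# Minimal sketch, assuming scikit-learn: RFE-based feature selection,
# a regression fit, and evaluation via the Pearson correlation between
# predicted and actual emotional responses. All data are simulated.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows = musical excerpts, columns = combined
# EEG-derived and acoustic features; y = reported emotional response
# (e.g., a valence rating).
X = rng.normal(size=(200, 60))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Recursive feature elimination keeps the most informative features;
# a regression model is then fit on the reduced feature set.
selector = RFE(estimator=LinearRegression(), n_features_to_select=10)
selector.fit(X_train, y_train)

model = LinearRegression().fit(selector.transform(X_train), y_train)
y_pred = model.predict(selector.transform(X_test))

# Accuracy is reported as the correlation between predicted and actual
# responses, mirroring the r values cited in the abstract.
r, _ = pearsonr(y_test, y_pred)
print(f"Pearson r between predicted and actual responses: {r:.3f}")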
Subject: Medicine and Pharmacology - Neuroscience and Neurology
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.