
A Multimodal Approach towards Emotion Recognition of Music using Audio and Lyrical Content

Published 10 Oct 2018 in eess.AS, cs.CL, cs.CV, cs.MM, and cs.SD (arXiv:1811.05760v1)

Abstract: We propose MoodNet, a deep convolutional neural network architecture that predicts the emotion associated with a piece of music from its audio and lyrical content. We evaluate architectures consisting of varying numbers of two-dimensional convolutional and subsampling layers, followed by dense layers. We use Mel-spectrograms to represent the audio content and word embeddings (specifically, 100-dimensional word vectors) to represent the textual content of the lyrics. We feed input from both modalities to the MoodNet architecture. The outputs of the two modalities are then fused in a fully connected layer, and a softmax classifier predicts the emotion category. Using F1-score as our metric, our results show strong performance of MoodNet on the two datasets we experimented on: the MIREX Multimodal dataset and the Million Song Dataset. Our experiments support the hypothesis that more complex models perform better with more training data. We also observe that lyrics outperform audio as the more expressive modality, and we conclude that combining features from multiple modalities for prediction yields superior performance compared to using a single modality as input.
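The fusion step the abstract describes (per-modality feature vectors concatenated into a fully connected layer, followed by a softmax classifier) can be sketched minimally in numpy. All dimensions, weights, and the number of emotion categories below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors produced by the convolutional stacks
# of each modality branch (sizes are illustrative assumptions).
audio_features = rng.standard_normal(128)  # from the Mel-spectrogram branch
lyric_features = rng.standard_normal(128)  # from the word-embedding branch

# Late fusion: concatenate both modality representations.
fused = np.concatenate([audio_features, lyric_features])  # shape (256,)

# Fully connected layer mapping the fused vector to emotion logits
# (assume 4 emotion categories for illustration).
n_classes = 4
W = rng.standard_normal((n_classes, fused.size)) * 0.01
b = np.zeros(n_classes)
logits = W @ fused + b

# Softmax classifier over the emotion categories.
shifted = np.exp(logits - logits.max())  # shift for numerical stability
probs = shifted / shifted.sum()

predicted_class = int(np.argmax(probs))
```

This only illustrates the fusion-and-classify stage; in the paper, the branch feature vectors would come from trained convolutional layers over spectrograms and word vectors rather than random draws.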

Citations (9)
