Title: Multimodal music emotion recognition method based on multi data fusion

Authors: Fanguang Zeng

Addresses: Academy of Music, Pingdingshan University, Pingdingshan, 467002, China

Abstract: To overcome the low recognition accuracy and long recognition time of traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multiple data fusion is proposed. The multimodal music signal is decomposed by non-negative matrix factorisation into audio and lyric modalities, and audio-modal and text-modal emotional features are extracted from each. The two sets of modal features are then weighted and fused through the linear prediction residual, and the normalised fused data are used as training samples for a classification model based on a support vector machine, which recognises the multimodal music emotion. Experimental results show that the proposed method achieves the shortest recognition time among the compared methods and improves recognition accuracy.
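The pipeline summarised in the abstract (NMF decomposition per modality, weighted fusion, normalisation, SVM classification) can be sketched as follows. This is a minimal illustration using scikit-learn on random toy features, not the paper's implementation: the feature matrices, the number of components, and the fusion weights are all placeholder assumptions, and the fixed weights stand in for the paper's linear-prediction-residual weighting, which is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for extracted features; a real system would use audio
# descriptors (e.g. spectral features) and lyric text features instead.
n_samples = 60
audio_feats = np.abs(rng.normal(size=(n_samples, 20)))  # non-negative, as NMF requires
text_feats = np.abs(rng.normal(size=(n_samples, 30)))
labels = rng.integers(0, 4, size=n_samples)             # four hypothetical emotion classes

# Step 1: reduce each modality with non-negative matrix factorisation.
audio_red = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(audio_feats)
text_red = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(text_feats)

# Step 2: weighted fusion of the two modalities. The weights below are
# fixed placeholders; the paper derives them from the linear prediction residual.
w_audio, w_text = 0.6, 0.4
fused = np.hstack([w_audio * audio_red, w_text * text_red])

# Step 3: normalise the fused samples and train an SVM classifier on them.
fused = normalize(fused)
clf = SVC(kernel="rbf").fit(fused, labels)
pred = clf.predict(fused)
```

The fused matrix concatenates the two reduced modalities, so its width is the sum of the per-modality component counts; the SVM then treats each fused row as one training sample.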

Keywords: multi data fusion; multimodal music; emotional recognition; non-negative matrix decomposition method; support vector machines; SVMs.

DOI: 10.1504/IJART.2023.133662

International Journal of Arts and Technology, 2023 Vol.14 No.4, pp.271 - 282

Received: 28 Dec 2022
Accepted: 01 Mar 2023

Published online: 28 Sep 2023
