Multi-model fusion framework based on multi-input cross-language emotional speech recognition
Online publication date: Wed, 24-Feb-2021
by Guohua Hu; Qingshan Zhao
International Journal of Wireless and Mobile Computing (IJWMC), Vol. 20, No. 1, 2021
Abstract: To address the limitations of feature representation and network performance in cross-language emotional speech recognition, this paper proposes a multi-model fusion framework based on multi-input cross-language emotional speech recognition. First, four emotion categories shared across four languages are selected as experimental samples. Second, affective features extracted from three different modes of the multilingual emotional speech signals are combined with an SVM and two deep neural networks (MobileNet26 and ResNet38) to form the basic multi-input, multi-model fusion framework; within the deep neural networks, each feature map undergoes both global maximum pooling and global average pooling, capturing complementary features and doubling the diversity of the model pool. Finally, comparative experiments show that the multi-model fusion framework distinguishes emotional differences across multiple languages more effectively than any single network model. Moreover, by learning from widely spoken languages, transfer learning can be applied to emotional speech recognition in low-resource languages, effectively increasing the learning ability of the model.
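The dual-pooling and late-fusion ideas described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and shapes are assumptions, and the fusion is shown as a simple average of class-probability vectors from the individual models (e.g. the SVM, MobileNet26 and ResNet38 branches).

```python
import numpy as np

def global_pool_features(feature_map):
    """Collapse an (H, W, C) feature map into a 2C-dim descriptor by
    concatenating global average pooling and global maximum pooling.
    Concatenating both statistics doubles the descriptor length,
    mirroring the paper's idea of capturing different features from
    the same feature map."""
    gap = feature_map.mean(axis=(0, 1))  # global average pooling -> (C,)
    gmp = feature_map.max(axis=(0, 1))   # global maximum pooling -> (C,)
    return np.concatenate([gap, gmp])    # (2C,)

def fuse_predictions(prob_list):
    """Late fusion: average the class-probability vectors produced by
    several models into a single prediction."""
    return np.mean(prob_list, axis=0)
```

For example, a 7x7x64 feature map yields a 128-dimensional descriptor, and fusing `[0.2, 0.8]` with `[0.6, 0.4]` gives `[0.4, 0.6]`.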