Title: Recognising continuous emotions in dialogues based on DISfluencies and non-verbal vocalisation features for a safer network environment

Authors: Huan Zhao; Xiaoxiao Zhou; Yufeng Xiao

Addresses: School of Information Science and Engineering, Hunan University (Changsha), No. 2, Lushan South Road, Hunan, 410082, China (all three authors)

Abstract: With the development of networks and social media, audio and video have become increasingly popular ways to communicate. Such audio and video can spread information with negative effects, e.g., negative sentiment indicating suicidal tendencies, or threatening messages that cause panic. To maintain a safe network environment, it is therefore necessary to recognise emotion in dialogues. To improve the recognition of continuous emotion in dialogues, we propose combining DISfluency and non-verbal vocalisation (DIS-NV) features with a bidirectional long short-term memory (BLSTM) model to predict continuous emotion. DIS-NV features are effective emotion features, comprising filled pauses, fillers, stutters, laughter and breath. A BLSTM can learn from past information while also exploiting future information. State-of-the-art recognition attains 62% accuracy; our method increases accuracy to 76%.
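To make the abstract's architecture concrete, the following is a minimal sketch, not the authors' implementation: a BLSTM regressor over per-word DIS-NV feature vectors (filled pauses, fillers, stutters, laughter, breath, i.e., five features) that predicts a continuous emotion value at each step. The layer sizes, sequence length, and training data here are illustrative assumptions only.

```python
# Hedged sketch of a BLSTM over DIS-NV features for continuous emotion prediction.
# All dimensions and data below are assumed for illustration, not taken from the paper.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, TimeDistributed, Dense

SEQ_LEN, N_FEATURES = 50, 5   # assumed window of 50 words, 5 DIS-NV features per word

model = Sequential([
    # Bidirectional LSTM reads the word sequence forwards and backwards,
    # so each output step uses both past and future context.
    Bidirectional(LSTM(64, return_sequences=True),
                  input_shape=(SEQ_LEN, N_FEATURES)),
    # One continuous emotion value per time step (e.g., arousal or valence).
    TimeDistributed(Dense(1, activation="tanh")),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for DIS-NV sequences and frame-level emotion labels.
X = np.random.rand(32, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.uniform(-1, 1, (32, SEQ_LEN, 1)).astype("float32")
model.fit(X, y, epochs=1, batch_size=8)
```

In practice, the DIS-NV vectors would be extracted from dialogue transcripts and aligned audio, and the continuous labels would come from a corpus such as AVEC2012, as the keywords suggest.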

Keywords: continuous emotion; bidirectional long short-term memory; BLSTM; dialogue; knowledge-inspired features; safe network environment; DISfluencies and non-verbal vocalisation; DIS-NV; AVEC2012; discretisation; speech emotion recognition; low-level descriptors; LLD.

DOI: 10.1504/IJCSE.2019.100237

International Journal of Computational Science and Engineering, 2019 Vol.19 No.2, pp.169 - 176

Received: 06 Mar 2018
Accepted: 11 Jun 2018

Published online: 20 Jun 2019
