Title: Multi-applicable text classification based on deep neural network

Authors: Jingjing Yang; Feng Deng; Suhuan Lv; Rui Wang; Qi Guo; Zongchun Kou; Shiqiang Chen

Addresses:
Jingjing Yang: Beijing Key Laboratory of High Dynamic Navigation Technology, Beijing Information Science and Technology University, Beijing 100192, China; School of Automation, Beijing Information Science and Technology University, Beijing 100192, China
Feng Deng: Beijing Key Laboratory of High Dynamic Navigation Technology, Beijing Information Science and Technology University, Beijing 100192, China; Key Laboratory of Modern Measurement and Control Technology, Ministry of Education, Beijing Information Science and Technology University, Beijing 100192, China; School of Automation, Beijing Information Science and Technology University, Beijing 100192, China
Suhuan Lv: State Key Laboratory of Nickel and Cobalt Resources Comprehensive Utilization, Jinchang 737104, China
Rui Wang: State Key Laboratory of Nickel and Cobalt Resources Comprehensive Utilization, Jinchang 737104, China
Qi Guo: State Key Laboratory of Nickel and Cobalt Resources Comprehensive Utilization, Jinchang 737104, China
Zongchun Kou: Jinchuan Group Co., Ltd., Jinchang 737100, China
Shiqiang Chen: Beijing Key Laboratory of High Dynamic Navigation Technology, Beijing Information Science and Technology University, Beijing 100192, China; School of Automation, Beijing Information Science and Technology University, Beijing 100192, China

Abstract: Most deep-learning-based long text classification methods suffer from problems such as semantic sparsity and long-distance dependency. To tackle these problems, a novel multi-applicable text classification method based on a deep neural network (MTDNN) is proposed, which comprises bidirectional encoder representations from transformers (BERT), a dimension reduction layer, and a bidirectional long short-term memory (Bi-LSTM) network combined with an attention mechanism. BERT pre-trains the input words into word embedding vectors. The dimension reduction layer extracts the feature phrase representations with higher weights from the word embedding vectors. The Bi-LSTM captures both forward and backward context representations. An attention mechanism is then employed to focus on the salient information in the Bi-LSTM output. The experimental results show that the accuracy of MTDNN on long text classification, short text classification, and sentiment analysis reaches 94.95%, 93.53%, and 92.32%, respectively, outperforming other state-of-the-art text classification methods.
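To make the described pipeline concrete, the following is a minimal PyTorch sketch of the BERT-embeddings -> dimension-reduction -> Bi-LSTM -> attention -> classifier flow the abstract outlines. The layer sizes, the use of a learned linear projection as the dimension reduction layer, and the additive attention form are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class MTDNNSketch(nn.Module):
    def __init__(self, bert_dim=768, reduced_dim=256, lstm_hidden=128, num_classes=2):
        super().__init__()
        # Assumption: dimension reduction realised as a learned linear projection.
        self.reduce = nn.Linear(bert_dim, reduced_dim)
        self.bilstm = nn.LSTM(reduced_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Assumption: simple additive attention scores over the time steps.
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, bert_embeddings):
        # bert_embeddings: (batch, seq_len, bert_dim), e.g. the last hidden
        # state of a pre-trained BERT encoder.
        x = torch.tanh(self.reduce(bert_embeddings))      # (B, T, reduced_dim)
        h, _ = self.bilstm(x)                             # (B, T, 2*lstm_hidden)
        scores = self.attn(h).squeeze(-1)                 # (B, T)
        weights = torch.softmax(scores, dim=-1)           # attention weights
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # (B, 2*lstm_hidden)
        return self.classifier(context)                   # (B, num_classes)

# Usage with random stand-in embeddings (a real run would feed BERT outputs):
model = MTDNNSketch()
fake_bert_out = torch.randn(4, 32, 768)   # batch of 4 sequences, 32 tokens each
logits = model(fake_bert_out)
print(logits.shape)                       # torch.Size([4, 2])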

Keywords: text classification; deep neural network; BERT; long short-term memory; LSTM; attention mechanism; multi-applicable.

DOI: 10.1504/IJSNET.2022.127841

International Journal of Sensor Networks, 2022 Vol.40 No.4, pp.277 - 286

Received: 15 Jun 2022
Accepted: 16 Jun 2022

Published online: 19 Dec 2022
