Title: Multimodal systems for speech recognition

Authors: Orken Zh. Mamyrbayev; Keylan Alimhan; Beibut Amirgaliyev; Bagashar Zhumazhanov; Dinara Mussayeva; Farida Gusmanova

Addresses: Institute of Information and Computational Technologies CS MES RK, Pushkin 125 Street, Almaty, 050010, Kazakhstan; Tokyo Denki University, 5 Senjuasahicho, Adachi, Tokyo, 120-8551, Japan; Institute of Information and Computational Technologies CS MES RK, Pushkin 125 Street, Almaty, 050010, Kazakhstan; Institute of Information and Computational Technologies CS MES RK, Pushkin 125 Street, Almaty, 050010, Kazakhstan; Institute of Economy CS MES RK, Kurmangazy 24 Street, Almaty, 050010, Kazakhstan; Al-Farabi Kazakh National University, Al-Farabi Ave. 71, Almaty, 050040, Kazakhstan

Abstract: In this article, we implement a multimodal Kazakh speech recognition system based on combined audio and lip-movement recognition. Feature extraction uses voice activity detection (VAD), mel-frequency cepstral coefficients, perceptual linear prediction, relative perceptual linear prediction, and their first-order time derivatives. The article considers the main problems of Kazakh speech recognition, VAD and speech segmentation algorithms, and lip-movement recognition. We describe probabilistic modelling of audiovisual speech based on coupled hidden Markov models (HMMs), information fusion methods with weight coefficients for the audio and video speech modalities, and the parametric representation of the signals. Quantitative results for multimodal recognition of continuous Kazakh speech indicate high accuracy and reliability of the automatic system; the approach is also evaluated and compared in terms of computational time and recognition speed, with promising results.
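The abstract names two concrete steps that can be sketched briefly: MFCC features with first-order time derivatives, and fusion of the audio and video modalities with weight coefficients. The Python sketch below illustrates both under stated assumptions; it is not the paper's implementation (which uses coupled HMMs). The librosa calls, file name, weight value, and toy score arrays are all illustrative assumptions.

```python
import numpy as np
import librosa  # assumed feature-extraction library, not named in the paper

# --- Acoustic features: MFCCs plus first-order time derivatives ---
# "utterance.wav" is a placeholder; 16 kHz is a common ASR sampling rate.
y, sr = librosa.load("utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 cepstral coefficients per frame
delta = librosa.feature.delta(mfcc, order=1)        # first-order time derivatives
features = np.vstack([mfcc, delta])                 # 26-dim vectors, shape (26, n_frames)

# --- Late fusion with weight coefficients for the two modalities ---
def fuse_scores(audio_loglik, video_loglik, audio_weight=0.7):
    """Weighted combination of per-hypothesis log-likelihoods from the
    audio and video streams; audio_weight in [0, 1] is a tunable modality
    weight (0.7 is an arbitrary illustrative value, not from the paper)."""
    a = np.asarray(audio_loglik, dtype=float)
    v = np.asarray(video_loglik, dtype=float)
    return audio_weight * a + (1.0 - audio_weight) * v

# Toy log-likelihoods for three competing word hypotheses (fabricated for illustration).
audio_scores = [-120.4, -118.9, -131.2]
video_scores = [-95.1, -97.8, -92.6]
fused = fuse_scores(audio_scores, video_scores)
best = int(np.argmax(fused))  # index of the highest-scoring hypothesis
print(f"fused scores: {fused}, best hypothesis: {best}")
```

Note that this post-hoc score combination is only one way weight coefficients can fuse modalities; the paper's coupled-HMM formulation may instead apply stream weights within the model during decoding.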

Keywords: voice activity detection; VAD; speech segmentation; multimodal systems; speech recognition; information systems.

DOI: 10.1504/IJMC.2020.107097

International Journal of Mobile Communications, 2020, Vol. 18, No. 3, pp. 314–326

Received: 21 Mar 2018
Accepted: 18 Oct 2018

Published online: 04 May 2020
