Identifying optimised speaker identification model using hybrid GRU-CNN feature extraction technique
by Md. Iftekharul Alam Efat; Md. Shazzad Hossain; Shuvra Aditya; Jahanggir Hossain Setu; K.M. Imtiaz-Ud-Din
International Journal of Computational Vision and Robotics (IJCVR), Vol. 12, No. 6, 2022

Abstract: Extracting robust and discriminative features and selecting an appropriate classifier model to identify speakers from voice clips are challenging tasks. We therefore considered signal processing techniques and deep neural networks for feature extraction, along with state-of-the-art machine-learning models as classifiers. We also introduced a hybrid gated recurrent unit (GRU) and convolutional neural network (CNN) as a novel feature extractor that optimises a subspace loss to extract the best feature vector. Additionally, space and time are treated as computational parameters for finding the optimal speaker identification pipeline. We evaluated the pipeline on the large-scale VoxCeleb dataset, comprising 6,000 real-world speakers with multiple voice clips each: GRU-CNN + R-CNN achieved the highest accuracy and F1-score, GRU-CNN + CNN the maximum precision, and LPC + KNN the highest recall. LPCC + R-CNN and MFCC + R-CNN proved optimal in terms of memory usage and time, respectively.
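
The sketch below is an illustrative (not the authors') PyTorch outline of what a hybrid GRU-CNN feature extractor for speaker identification could look like: a GRU branch models the temporal structure of frame-level features while a CNN branch treats the same features as a 2-D map, and the two are fused into a single speaker embedding. Layer sizes, input shapes, and the fusion strategy are assumptions; the paper's subspace loss is not reproduced here.

```python
# Minimal sketch, assuming log-mel frame features as input; this is not the
# authors' implementation and omits their subspace-loss optimisation.
import torch
import torch.nn as nn

class HybridGRUCNN(nn.Module):
    def __init__(self, n_mels=40, gru_hidden=128, embed_dim=256):
        super().__init__()
        # GRU branch: captures temporal dynamics across frames.
        self.gru = nn.GRU(input_size=n_mels, hidden_size=gru_hidden,
                          num_layers=1, batch_first=True)
        # CNN branch: treats the feature matrix as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),   # -> (batch, 32, 1, 1)
        )
        # Fuse both branches into one fixed-size speaker embedding.
        self.proj = nn.Linear(gru_hidden + 32, embed_dim)

    def forward(self, x):
        # x: (batch, time, n_mels) frame-level features
        _, h = self.gru(x)                  # h: (1, batch, gru_hidden)
        gru_feat = h[-1]                    # (batch, gru_hidden)
        cnn_feat = self.cnn(x.unsqueeze(1)).flatten(1)  # (batch, 32)
        return self.proj(torch.cat([gru_feat, cnn_feat], dim=1))

# Usage: the embedding would feed a downstream classifier (e.g., CNN, R-CNN
# or KNN, as compared in the paper).
model = HybridGRUCNN()
emb = model(torch.randn(8, 300, 40))        # 8 clips, 300 frames, 40 mel bins
print(emb.shape)                            # torch.Size([8, 256])
```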

Online publication date: Thu, 27-Oct-2022
