Title: American sign language classification using deep learning

Authors: Harsh Parikh; Nisarg Panchal; Vraj Patel; Ankit K. Sharma

Addresses: Department of Instrumentation and Control Engineering, Nirma University, Ahmedabad, Gujarat, India; Department of Electrical Engineering, Nirma University, Ahmedabad, Gujarat, India; Department of Instrumentation and Control Engineering, Nirma University, Ahmedabad, Gujarat, India; Department of Electronics and Instrumentation Engineering, Nirma University, Ahmedabad, Gujarat, India

Abstract: Image classification is the process of analysing an image and extracting useful information from it. It addresses a wide range of real-world problems and has applications in artificial intelligence, robotics, biomedical imaging, and motion recognition, among many other fields. In this paper, we apply support vector machines (SVM), decision trees (DT), k-nearest neighbours (kNN), convolutional neural networks (CNN), VGG-16, ResNet-50, MobileNet-V2, and DenseNet-201 to an American Sign Language dataset. The paper describes a system that uses deep learning and machine learning to recognise hand gestures in images and assign to each gesture the corresponding letter of the English alphabet. The comparison metrics and performance of all these models are studied and documented here, making the results useful for the classification of American Sign Language.
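The classical-ML side of the pipeline described in the abstract (flatten each gesture image into a feature vector, then train SVM and kNN classifiers over the 26 letter classes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data below stands in for the ASL image dataset, and the 28×28 grayscale image size is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the ASL dataset: 26 letter classes,
# 20 samples each, flattened 28x28 "images" (assumed size).
rng = np.random.default_rng(0)
n_classes, per_class, n_pixels = 26, 20, 28 * 28
y = np.repeat(np.arange(n_classes), per_class)
centers = rng.random((n_classes, n_pixels))          # one prototype per letter
X = centers[y] + 0.05 * rng.normal(size=(len(y), n_pixels))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Two of the classical baselines named in the paper.
svm = SVC(kernel="rbf").fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

acc_svm = svm.score(X_test, y_test)
acc_knn = knn.score(X_test, y_test)
print(f"SVM accuracy: {acc_svm:.3f}, kNN accuracy: {acc_knn:.3f}")
```

The CNN and transfer-learning models (VGG-16, ResNet-50, MobileNet-V2, DenseNet-201) would instead consume the raw image tensors directly, with the pretrained backbones fine-tuned on the 26-class gesture labels.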

Keywords: image classification; convolutional neural network; American sign language; ASL; decision trees; k-nearest neighbour; support vector machine; SVM; transfer learning; VGG-16; ResNet-50; DenseNet-201; MobileNet-V2.

DOI: 10.1504/IJBM.2024.141950

International Journal of Biometrics, 2024 Vol.16 No.6, pp.640 - 659

Accepted: 07 Mar 2024
Published online: 03 Oct 2024
