Forthcoming Articles

International Journal of Biometrics (IJBM)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are also listed here. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licences.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Biometrics (13 papers in press)

Regular Issues

  • Optimised VGH algorithm-based deep CNN classifier for diabetic retinopathy   Order a copy of this article
    by Bhagyashree Somnath Madan, Avinash Sharma 
    Abstract: Diabetic retinopathy (DR) is a highly consequential condition affecting diabetic patients, and inaccurate detection can result in permanent vision loss. Fundus images are used to predict DR severity from segmented lesions, but manual segmentation is a slow and complex process. The proposed research therefore develops a vision guided horse optimiser (VGH algorithm) based deep convolutional neural network (DCNN) classifier to identify DR. The significance of the present research lies in the vision guided horse-optimised deep convolutional neural network (VGH-optimised DCNN) model for classifying DR. Image contrast enhancement and illumination correction are applied in the pre-processing stage, and the automatic Otsu approach together with a contour-based threshold approach is used to segment the optic disc and blood vessels. Various methods are compared with the VGH-optimised DCNN to assess the model's performance. At epoch 25, the accuracy, F1-score, sensitivity, and specificity of the developed model are 97.59%, 96.76%, 96.64%, and 96.42% at a training percentage of 90, whereas at a k-fold value of 6 the VGH-optimised DCNN model attains 97.10%, 96.77%, 96.22%, and 97.07% on the IDRID dataset.
    Keywords: diabetic retinopathy; optic disc segmentation; blood vessel segmentation; deep CNN; vision guided horse optimiser.
    DOI: 10.1504/IJBM.2026.10072442
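The automatic Otsu approach the abstract relies on for segmentation is a standard algorithm; the sketch below is the textbook method (maximising between-class variance over a greyscale histogram), not the authors' pipeline, and `otsu_threshold` is an illustrative helper name.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximises between-class variance.

    `gray` is a 2-D uint8 array. This is the classic automatic Otsu
    method; the contour-based threshold step of the paper is separate.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)

    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal image (e.g. dark vessels on a bright background) the returned threshold falls between the two intensity modes, which is what makes the method "automatic".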
     
  • Design of an ensemble face recognition system using optimal local features for unconstrained and age-difference environments   Order a copy of this article
    by Dipak Kumar, Ravi Kant Kumar, Jogendra Garain, Dakshina Ranjan Kisku, Jamuna Kanta Sing, Phalguni Gupta 
    Abstract: Biometric authentication, especially face recognition, remains a challenging problem. The face conveys rich semantic information through its numerous expressions, and these dynamic expressions are a major challenge for biometric systems. Researchers are therefore continuously trying to enhance the robustness of facial recognition. This work proposes a multi-classifier-based ensemble system for effective face recognition. Our proposed system has two modules: 1) an optimisation module that decreases computational cost by reducing the feature sets; 2) a fusion module with multiple classifiers to improve accuracy. Feature sets are extracted using the local descriptors LBP, DS-LBP, and LGS, and are optimised using a genetic algorithm (GA). The optimised feature sets are classified separately, and the results are combined using decision-level fusion methods such as the AND-rule, the OR-rule, and majority voting. Experiments conducted on the LFW, BioID, and LAG datasets show that the proposed ensemble system is more efficient and robust.
    Keywords: face recognition system; local binary pattern; LBP; local graph structure; SLGS; densely sampled local binary pattern; DS-LBP; ensemble system; genetic algorithms; majority voting.
    DOI: 10.1504/IJBM.2026.10072443
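The decision-level fusion rules named in the abstract (AND-rule, OR-rule, majority voting) can be sketched in a few lines; `fuse_decisions` and its 0/1 vote encoding are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fuse_decisions(decisions, rule="majority"):
    """Decision-level fusion of per-classifier accept/reject votes.

    `decisions` is a list of 0/1 votes, one per classifier (e.g. the
    LBP-, DS-LBP- and LGS-based classifiers of the abstract).
    """
    votes = np.asarray(decisions)
    if rule == "and":     # accept only if every classifier accepts
        return int(votes.all())
    if rule == "or":      # accept if any classifier accepts
        return int(votes.any())
    # majority voting: accept if more than half of the classifiers accept
    return int(votes.sum() * 2 > len(votes))
```

The AND-rule trades false accepts for false rejects, the OR-rule does the opposite, and majority voting sits between the two, which is why ensembles typically report all three.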
     
  • Deep learning-driven multi-sample periocular recognition for biometric authentication   Order a copy of this article
    by Nabil Hezil, Amir Benzaoui, Ghania Droua-Hamdani, Khadidja Belattar, Ahmed Bouridane 
    Abstract: The COVID-19 pandemic has accelerated the adoption of contactless biometric modalities such as face, iris, voice, and periocular recognition, which offer a safer alternative to traditional methods by reducing the risk of disease transmission in public and private spaces. While face recognition technologies have shown robust performance even with partial facial occlusions, their accuracy diminishes significantly when individuals wear medical masks, highlighting the importance of periocular biometrics for reliable personal identification. To enhance security and accuracy, multi-biometric systems, which combine multiple biometric traits, outperform single-modality approaches. We propose a multi-input convolutional neural network (MICNN) framework that fuses the left and right periocular traits from the same face image for enhanced biometric recognition. We evaluate our method on two challenging periocular datasets, achieving highly competitive correct recognition rates of 99.62% and 98.33%, respectively, outperforming recent benchmarks. These results underscore the efficacy of multi-sample periocular recognition using deep learning for contactless biometric identification.
    Keywords: multi-sample biometrics; periocular recognition; fusion; deep learning; cascade object detector; multi-input convolutional neural network; MICNN; feed-forward neural networks; FFNNs.
    DOI: 10.1504/IJBM.2026.10072550
     
  • Real-time biometric recognition using photoplethysmography signals and deep learning approaches   Order a copy of this article
    by Ali Cherry, Hayat Kourani, Mohamad Haj-Hassan, Mohamad Abou Ali, Soumaya Berro, Wassim Salameh 
    Abstract: This research investigates photoplethysmography (PPG) signals as a reliable, non-invasive method for biometric recognition, addressing modern security needs with a unique and hard-to-replicate approach. A dedicated acquisition system was designed to collect high-quality PPG data from 40 individuals under a strict protocol, ensuring data consistency and accuracy. Advanced filtration minimised noise, while preprocessing steps - including data augmentation and normalisation - enhanced dataset diversity, optimising model training. Five deep learning models were evaluated: a 1D convolutional neural network (1D CNN), long short-term memory (LSTM), bidirectional LSTM (Bi-LSTM), gated recurrent unit (GRU) and dense neural network (DNN). The 1D CNN performed exceptionally well, achieving a classification accuracy of 99.94%, making it ideal for real-time identification applications. These findings underscore the promise of PPG signals for biometric systems and demonstrate the powerful advantages of deep learning, particularly the 1D CNN, in delivering high identification accuracy across varied environments and practical applications.
    Keywords: biometric identification; deep learning; photoplethysmography; signals processing; real-time processing; classification accuracy; graphical user interface.
    DOI: 10.1504/IJBM.2026.10073273
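A single 1-D convolution stage of the kind a 1D CNN applies to a PPG window can be sketched in NumPy; the `conv1d_block` helper, kernel shapes, and single-layer design are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

def conv1d_block(signal, kernels):
    """One 1-D convolution layer with ReLU and global max pooling.

    `signal` is a 1-D PPG window, `kernels` a (n_filters, width) array.
    Returns one activation per filter, as a global-max-pooled feature.
    """
    n_filters, width = kernels.shape
    n_out = len(signal) - width + 1            # 'valid' convolution length
    feats = np.empty((n_filters, n_out))
    for i in range(n_filters):
        for j in range(n_out):
            feats[i, j] = np.dot(signal[j:j + width], kernels[i])
    feats = np.maximum(feats, 0.0)             # ReLU non-linearity
    return feats.max(axis=1)                   # global max pooling
```

A real 1D CNN stacks several such layers with learned kernels and ends in a softmax over enrolled identities; the sketch only shows the per-layer computation.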
     
  • Local occlusion face recognition based on an improved deep residual network   Order a copy of this article
    by Yan Lei, Xinhua Wang, Zitong Wang 
    Abstract: To improve the accuracy of local occlusion face recognition, a method based on an improved deep residual network is proposed. Firstly, image denoising and restoration are achieved by combining collaborative sparse measurement and compressive sensing techniques with local 2D sparse and non-local 3D sparse transformations. Secondly, greyscale morphology theory is used to perform edge detection on locally occluded facial images: by segmenting image regions and calculating an appropriate threshold for each region, the edge feature information of the facial image is accurately extracted and recognised. Finally, an optimised sRSNet-I model is designed for local occlusion face recognition, using neural network layers for feature extraction and classification. The experimental results show that the recognition accuracy of our method consistently remains above 96%, with recognition times between 0.54 and 0.62 seconds.
    Keywords: improved deep residual network; partial occlusion; facial recognition; edge feature information.
    DOI: 10.1504/IJBM.2026.10073266
     
  • Music singing emotion classification and recognition method based on deep representation learning   Order a copy of this article
    by Li Pan, Jiayao Liang, Yuyang Chen 
    Abstract: This paper proposes a music singing emotion classification and recognition method based on deep representation learning to improve accuracy. Firstly, an emotional vocabulary system is constructed, and feature parameters such as range, intensity, and rhythm are extracted based on mathematical modelling. Secondly, a multi-layer neural network model is built through deep representation learning to select the acoustic features most correlated with emotional terms for encoding and dimensionality reduction. A deep learning architecture is designed that includes a dynamic input layer, an embedding layer, a convolutional layer, and a maximum sentiment term pooling layer. Finally, unsupervised pre-training and feature concatenation strategies are adopted to automatically mine and classify complex emotional patterns in music singing. The experimental results show that the proposed method achieves an accuracy of 90% to 98% in extracting emotional features of music singing, and a classification recognition accuracy of 97.32% to 99.74%.
    Keywords: deep representation learning; music singing; emotion classification recognition; acoustic feature encoding.
    DOI: 10.1504/IJBM.2026.10073283
     
  • Current trends in gait analysis for biometric recognition using wearable sensors: a systematic review   Order a copy of this article
    by Abdullai Dwumfour, Jamal-Deen Abdulai, Ebenezer Owusu, Godwin B. Akrong, Solomon Mensah 
    Abstract: This paper systematically reviews recent advances in gait-based biometric recognition using wearable sensors, addressing the growing need for secure authentication technologies. While prior reviews focused on vision-based methods, sensor-based approaches lacked comprehensive insights and systematic methodologies. This study bridges the gap by examining wearable sensing modalities, publicly available gait datasets, and techniques for classification, feature extraction, and feature reduction to enhance recognition accuracy. Six research questions guided the review, which retrieved 11,321 studies from databases such as IEEE, ScienceDirect, ACM, and Scopus, later refined to 90 articles using the PRISMA method. The findings show accelerometers, gyroscopes, and IMUs to be the most used wearable sensors in gait analysis, with secondary datasets, particularly OU-ISIR, being widely used. Convolutional neural networks (CNNs) emerged as the leading method for classification, feature extraction, and reduction. This research offers insightful observations about state-of-the-art techniques and trends in gait-based biometric recognition through wearable sensors.
    Keywords: gait analysis; biometrics; authentication; wearable sensors; gait recognition.
    DOI: 10.1504/IJBM.2026.10073529
     
  • Symmetric LBP: the use of the symmetric finite difference formula in the local binary pattern   Order a copy of this article
    by Zeinab Sedaqatjoo, Hossein Hosseinzadeh 
    Abstract: The paper presents a mathematical perspective on the application of the symmetric finite difference (FD) formula in the local binary pattern (LBP). The symmetric FD formula is frequently employed in numerical analysis to approximate partial derivatives when the number of data points is odd. Its use in LBP leads to more accurate approximations, but our findings demonstrate that it results in linearly dependent directional derivatives, leading to correlated patterns. To mitigate this issue, we propose considering only four directional derivatives instead of the conventional eight. This adjustment produces four-bit local binary patterns and reduces the range of LBP values from 256 to 16. A new variant of LBP, referred to as symmetric LBP, is therefore proposed in this article; it enhances the standard LBP both analytically - by improving the approximation of directional derivatives - and computationally - by disregarding four directional derivatives. In practical applications, the symmetric LBP is applied to face detection and facial expression recognition tasks, and we compare its performance against standard LBP and other well-known feature extraction methods. The experimental results underscore the efficiency of symmetric LBP in feature extraction.
    Keywords: local binary pattern; LBP; feature extraction; facial expression recognition; symmetric finite difference.
    DOI: 10.1504/IJBM.2026.10073847
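The four-bit coding the abstract describes (16 codes instead of 256) can be sketched as follows; which four directions are retained is an assumption, since the abstract does not specify them, and `lbp_code` is an illustrative helper.

```python
import numpy as np

# Offsets of the 8 neighbours around a centre pixel.  The symmetric
# variant keeps only four directions; the abstract's argument is that
# the remaining four yield linearly dependent directional derivatives.
ALL_8 = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
SYM_4 = ALL_8[:4]   # assumed choice of the four retained directions

def lbp_code(img, r, c, offsets):
    """Binary code of pixel (r, c): one bit per neighbour >= centre."""
    centre = img[r, c]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= centre:
            code |= 1 << bit
    return code
```

With `ALL_8` the codes range over 0..255 (standard LBP histograms have 256 bins); with `SYM_4` they range over 0..15, giving the 16-bin histograms the paper describes.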
     
  • SignFlow: a CNN-driven system for real-time ASL-to-text translation   Order a copy of this article
    by Sugandhi Kunduvalappil, Kavya Sree Chowdary Kari 
    Abstract: Improving communication between sign language and vocal language users remains a major challenge. This research addresses this gap by introducing a method that translates American Sign Language (ASL) gestures into written text using a convolutional neural network (CNN). The system allows individuals with hearing impairments to communicate more effectively with non-sign language users. Utilising the latest advancements in computer vision and machine learning, our system processes sign language gestures in real time and converts them into text. We evaluated our approach using a custom SignFlow dataset alongside the MNIST sign language dataset. Our results showed an impressive 99.9% in recognition accuracy, precision, recall, and F1-score, significantly surpassing existing solutions. This system makes a valuable contribution to reducing communication barriers without requiring both users to know sign language.
    Keywords: sign language recognition; communication barrier; CNN; assistive technology.
    DOI: 10.1504/IJBM.2026.10074006
     
  • User continuous authentication on mobile devices based on self-supervised learning   Order a copy of this article
    by Youcef Ouadjer, Sid-Ahmed Berrani, Mourad Adnane, Nesrine Bouadjenek 
    Abstract: Continuous authentication on mobile devices aims at continuously verifying the user identity. It generally relies on behavioural attributes, such as gesture patterns. In this context, it is possible to model user behaviour thanks to deep convolutional neural networks (CNN). However, in order to achieve a high level of performance, CNN models require large-scale annotated datasets and a lot of computational resources. In this paper, we aim to develop a continuous authentication system involving less complex and less computationally expensive models, so that it can be efficiently embedded on mobile devices. To overcome the lack of large-scale annotated datasets, self-supervised contrastive learning is employed with an effective data augmentation method based on additive Gaussian noise. Our solution is evaluated using different mobile CNN architectures on two public datasets. The obtained results show noteworthy performance on both identification and verification: the proposed system achieves the best results in terms of time efficiency as well as effectiveness. In scenarios where labelled training data is scarce, our authentication system reveals notable robustness, demonstrating its capacity to operate under real-world constraints of small annotated datasets for training.
    Keywords: continuous authentication; biometrics; self-supervised learning; gesture patterns.
    DOI: 10.1504/IJBM.2025.10074870
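The additive-Gaussian-noise augmentation used to build positive pairs for contrastive learning can be sketched as follows; the window format, `sigma` value, and the `two_views`/`cosine_sim` helpers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_views(window, sigma=0.05):
    """Two augmented views of one gesture window via independent
    additive Gaussian noise.  In contrastive training the two views
    form a positive pair; windows from other users act as negatives."""
    w = np.asarray(window, dtype=float)
    return (w + rng.normal(0.0, sigma, w.shape),
            w + rng.normal(0.0, sigma, w.shape))

def cosine_sim(a, b):
    """Similarity used to score pairs in a contrastive loss (sketch)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the noise is small relative to the signal, the two views of the same window stay highly similar, which is exactly the invariance the contrastive objective rewards.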
     
  • FAUNet: iris segmentation using feature aggregation-based UNet in unconstrained environment   Order a copy of this article
    by Chinmoy Ghosh, Sagnik Roy, Satyendra Nath Mandal 
    Abstract: This study introduces FAUNet, a powerful deep neural network model for iris segmentation. Many machine learning-based iris segmentation approaches exist; however, they only perform well on limited ocular images. For segmentation via mask creation in unconstrained pictures, a deep learning-based model is effective. Current segmentation algorithms, especially UNet variants, struggle with noise and occlusions in datasets like UBIRIS.v2, which includes images with motion blur, reflections, and occlusions. The project aims to enhance segmentation accuracy by combining high- and low-level feature aggregation with a feature aggregation module (FAM) in the UNet architecture. The suggested model outperforms previous UNet variants such as UNet++, IRUNet, and ATTUNet on the publicly available UBIRIS.v2 dataset, achieving 99.50% segmentation accuracy in training and 99.46% in validation. Comparative analysis evaluates the model's real-world efficacy and relevance. This work stresses feature aggregation's role in decreasing noise and occlusions, enhancing biometric identification accuracy and reliability.
    Keywords: segmentation; iris recognition; feature aggregation; mask; universal network; UNet; FAUNet; UBIRIS.v2.
    DOI: 10.1504/IJBM.2025.10074871
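Combining high- and low-level features, as a feature aggregation module does inside a UNet decoder, can be sketched by upsampling and channel concatenation; the shapes, the x2 resolution gap, and the `aggregate` helper are assumptions for illustration, not the paper's exact FAM design.

```python
import numpy as np

def aggregate(low_level, high_level):
    """Fuse a high-resolution low-level map with an upsampled
    high-level map by channel concatenation.

    Shapes are (channels, H, W); `high_level` is assumed to be at
    half the spatial resolution of `low_level`."""
    # nearest-neighbour x2 upsample of the coarse, semantically rich map
    up = high_level.repeat(2, axis=1).repeat(2, axis=2)
    # stack along the channel axis; a conv layer would then mix them
    return np.concatenate([low_level, up], axis=0)
```

In a real FAM the concatenated tensor is passed through learned convolutions so the network can weigh fine edges (low-level) against coarse context (high-level); the sketch shows only the aggregation step.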
     
  • InCO-GBdeN: multi-modal gradient boosting enabled deep convolutional neural network for emotion detection   Order a copy of this article
    by Shruti G. Taley, M.A. Pund 
    Abstract: Emotion detection finds valuable applications in healthcare, human-computer interaction, and related fields. Numerous techniques and strategies have been developed recently, which affect the detection accuracy and robustness of the models. In addition, these conventional models have instability, generalisation, and interpretability issues that reduce their efficiency. To overcome these issues, a novel gradient-boosting-enabled deep convolutional neural network with intellect clustering optimisation (InCO-GBdeN) model is proposed for effective emotion detection. The GBdeN model reduces the dimensional size of multimodal signals and thereby improves interpretability as well as generalisation ability for effective detection. The InCO algorithm assists the parameter tuning process through gaining-sharing behaviours, which enables accurate detection with fast convergence and improves stability. Moreover, the developed InCO-GBdeN model is simple, lightweight, and able to analyse positive and negative emotions such as happiness and anger. Compared to other state-of-the-art methods, the InCO-GBdeN model reports an accuracy of 95.80%, a precision of 95.95%, and a recall of 95.62% for emotion detection.
    Keywords: emotion detection; multi-modal signals; gradient boosting; intellect clustering optimisation; deep learning; DL.
    DOI: 10.1504/IJBM.2026.10075010
     
  • Feature extraction methods for iris recognition systems: a comprehensive survey   Order a copy of this article
    by Amir Azizi, Panayiotis Charalambous 
    Abstract: In iris recognition, optimal feature extraction and subset selection are critical for achieving high-accuracy identification. While traditional techniques such as statistical analysis, frequency-domain transformations, and texture descriptors have been extensively employed, the past decade has witnessed the rise of deep learning as the dominant paradigm, reshaping the field. This paper presents a comprehensive survey of recent developments, with a particular emphasis on deep learning-based methods for feature extraction and selection. Over 250 sources, including scholarly articles, benchmark datasets, and institutional resources, are reviewed. The paper begins with an overview of classical approaches, followed by an in-depth examination of various deep learning architectures. Each method is analysed in terms of its core principles, innovations, and limitations. Key challenges are identified, and emerging trends are highlighted to guide future research and practical implementations in iris-based biometric systems.
    Keywords: biometric; deep learning; iris recognition; image processing.
    DOI: 10.1504/IJBM.2026.10075083