International Journal of Biometrics (39 papers in press)
Leveraging Bio-Maximum Inverse Rank Method for Iris and Palm Recognition
by Mallikarjuna Reddy A, Reddy Sudheer K, Santhosh Kumar Ch N, K.Srinivasa Reddy
Abstract: Biometrics is vital for recognising and confirming human identity by measuring and distinguishing physiological traits such as the iris, retina, face, fingerprint, and palm. Numerous biometric designs and frameworks distinguish human identities successfully by employing several techniques. In this article, the authors present bi-modular biometric frameworks; a bi-modular scheme is employed for the iris and palm print. Wavelet and Gabor filters are employed to extract features at various scales. The article presents the BMIR (bio maximum inverse rank) model, which is robust to variations in scores and other factors of a module. Category-support and choice-based strategies are employed to combine the magnitudes of these modules. The authors employed three datasets to carry out the investigation, which shows the accuracy, sufficiency, and appropriateness of the proposed hybrid model compared with existing frameworks.
Keywords: ranking; iris recognition; biometrics; knowledge acquisition; neural network.
Local Double Directional Stride Maximum Patterns for Facial Expression Retrieval
by Uma Maheswari, Varaprasad G, Viswanadha Raju S
Abstract: Nowadays, face recognition and expression recognition play a vital role in applications such as medicine, entertainment, criminal analysis, social media, and online business. Local texture feature descriptors such as LBP, LTrP, LTP, and DBC are popular for recognising both faces and expressions. This paper proposes a new feature descriptor, the local double directional stride maximum pattern, to identify facial expressions based on direction. The pattern is generated by calculating first-order derivatives in four directions using DBC; second-order derivatives are then calculated from the maximum and minimum intensity values among three pixels in each of the four directions to construct the feature. This helps discriminate the directional output that can be covered in an image. Facial expression recognition and retrieval performance is measured in terms of precision, recall, and ARR, and compared with existing methods on benchmark datasets such as JAFFE, CK+, LFW, and FERET.
Keywords: face recognition; expression recognition; local double directional stride maximum pattern; DBC (directional binary code); LBP (local binary patterns); LTP (local ternary patterns); LTrP (local tetra patterns).
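Several of the descriptors above build on the standard local binary pattern. As a rough illustration of how a basic 3×3 LBP code is computed (a generic sketch of LBP itself, not the authors' proposed descriptor; function names are my own):

```python
import numpy as np

def lbp_code(patch):
    """3x3 LBP: threshold the 8 neighbours against the centre pixel."""
    center = patch[1, 1]
    # neighbours in clockwise order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def lbp_image(img):
    """Map every interior pixel of a 2-D array to its LBP code."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_code(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

Histograms of these codes over image regions form the texture feature vector that descriptors such as LTP and LTrP extend with extra thresholds and directions.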
DeepVeil: Deep learning for identification of face, gender, expression recognition under veiled conditions
by Ahmad B. A. Hassanat, Abeer Ahmed Al Bustanji, Ahmad S. Tarawneh, Malek Alrashidi, Hani Alharbi, Mohammed Alanazi, Mansoor Alghamdi, Ibrahim S. AlkhazI, V. B. Surya Prasath
Abstract: Biometric recognition based on the full face is an extensive research area. However, using only partially visible faces, as in the case of veiled persons, is a challenging task. A deep convolutional neural network (CNN) is used in this work to extract features from veiled-person face images. We found that the sixth and seventh fully connected layers, FC6 and FC7 respectively, in the structure of the VGG19 network provide robust features, with each of these two layers containing 4,096 features. The main objective of this work is to test the ability of a deep-learning-based automated computer system not only to identify persons but also to recognise gender, age, and facial expressions such as the eye smile. Our experimental results indicate high accuracy on all tasks: the best recorded accuracy values are up to 99.95% for identifying persons, 99.9% for gender recognition, 99.9% for age recognition, and 80.9% for facial expression (eye smile) recognition.
Keywords: veiled-face recognition; deep learning; convolutional neural networks; age recognition; gender recognition; facial expression recognition; eye smile recognition.
Recognition of depression patients with electroencephalogram
by Lijie Zhou
Abstract: Recognising patients with depression is a very important problem, and there are few relevant studies at present. As the clinical application of electroencephalogram (EEG) signals matures, the relationship between EEG and depression has attracted wide attention. In this study, EEG signals were first analysed, then collected and processed for feature extraction, and depression patients were recognised with the support vector machine (SVM) method. The experimental results demonstrated that SVM accuracy varied across features, leads, and wavebands; accuracy was highest when all leads were used. With power spectral density (PSD) as the feature, SVM accuracy exceeded 70% and was highest on the ? wave; with activity as the feature, accuracy exceeded 75% and was highest on the ? wave. A comparison with random forest (RF) and k-nearest neighbour (KNN) showed that SVM achieved the highest accuracy. The results show that the EEG-based method performs well in recognising depression patients and can be popularised and applied in practice.
Keywords: electroencephalogram signal; depression; support vector machine; feature extraction.
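The two feature types named in the abstract, power spectral density within a waveband and Hjorth activity, can be sketched for a single EEG channel as follows (a minimal illustration, not the authors' pipeline; the band limits are placeholders):

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power spectral density of the signal within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def activity(signal):
    """Hjorth 'activity' parameter: simply the variance of the signal."""
    return np.var(signal)
```

Features like these, computed per lead and per waveband, would then be stacked into the vectors fed to the SVM (or RF/KNN) classifier.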
Machine Learning Based Iris Liveness Identification using Fragmental Energy of Cosine Transformed Iris Images
by Smita Khade, Sudeep Thepade, Swati Ahirrao
Abstract: Iris biometric identification provides contactless authentication, preventing the spread of diseases such as COVID-19. These systems face spoofing attacks attempted with contact lenses, replayed video, and print attacks, making them vulnerable and unsafe. This paper proposes an iris liveness detection method to mitigate spoofing attacks, taking fragmental coefficients of the cosine-transformed iris image as features. Seven variants of feature formation are considered in the experimental validation of the proposed method, and the features are used to train eight assorted machine learning classifiers and ensembles for iris liveness identification. Recall, F-measure, precision, and accuracy are used to evaluate the performance of the proposed variants. Experimentation on four standard datasets has shown that the best iris liveness identification is obtained with 4×4 fragmental coefficients of the cosine-transformed iris image using the random forest algorithm, at 99.18% accuracy, followed immediately by an ensemble of classifiers.
Keywords: iris images; liveness detection; discrete cosine transform; machine learning; classification; biometrics; feature extraction.
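The "fragmental coefficients" idea, keeping only a small low-frequency block of the 2-D cosine transform as the feature vector, can be sketched as follows (an illustrative reading of the abstract, with the orthonormal DCT built by hand to stay dependency-free; function names are my own):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)          # DC row gets the usual 1/sqrt(2) scaling
    return m * np.sqrt(2 / n)

def fragmental_features(img, frac=4):
    """2-D DCT of img, keeping only the top-left frac x frac low-frequency block."""
    d_r = dct_matrix(img.shape[0])
    d_c = dct_matrix(img.shape[1])
    coeffs = d_r @ img @ d_c.T      # separable 2-D DCT
    return coeffs[:frac, :frac].ravel()
```

A 4×4 block, as in the best-performing variant reported, yields a 16-dimensional feature per image region, which would then feed classifiers such as random forest.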
Revocable Iris Templates using Partial Sort and Randomized Look-up Table Mapping
by Mulagala Sandhya, Dilip Kumar Vallabhadas, Shubham Rathod
Abstract: The world's reliance on computerised systems continues to grow. In recent years, biometric systems have become vulnerable to the leakage of template information. If a biometric template is stolen, it is lost permanently and cannot be restored or reissued the way knowledge-based or possession-based credentials can. Many biometric traits can be used for recognition, but among them the iris is one of the strongest because of its considerably high accuracy. In this paper, we develop a new cancellable biometric scheme using indexing-first-one (IFO) hashing coupled with a technique called partial sort. The IFO hashing uses new mechanisms, the P-order Hadamard product and the modulo threshold function, which the partial sort technique strengthens considerably further. The use of IFO hashing yields a good balance between security and accuracy. We used the CASIA-v3 database, which provides a wide range of iris templates for our experiments. Compared with previous cancellable schemes, the analysis of these experiments shows good accuracy. Along with this improved accuracy, the modified IFO hashing schemes also provide strong resistance to various privacy and security attacks.
Keywords: iris; cancellable template; min-hashing; Jaccard similarity; security and privacy; partial sort; look-up table.
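The "partial sort" ingredient is a general selection technique: finding the positions of the k largest entries without fully ordering the vector. A hedged sketch of that building block only (not the authors' IFO-coupled scheme; the function name is my own):

```python
import numpy as np

def top_k_indices(vec, k):
    """Indices of the k largest entries of vec, ordered by decreasing value.

    np.argpartition performs the partial sort in O(n); only the k
    selected entries are then fully ordered."""
    part = np.argpartition(vec, -k)[-k:]
    return part[np.argsort(vec[part])[::-1]]
```

In rank-based hashing schemes such as IFO, it is index information of this kind, rather than raw feature values, that enters the stored template, which is what makes the representation revocable when combined with a randomised look-up table.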
Robust Perceptual Fingerprint Image Hashing: A Comparative Study
by Wafa Birouk, Atidel Lahoulou, Ali Melit, Ahmed Bouridane
Abstract: This paper presents a robust perceptual hashing scheme for biometric template protection in which the input fingerprint image is mapped into a sequence of Boolean values. Our aim is to develop a method that relies on four functions, namely SIFT, Harris, DWT, and SVD. After extracting the minutiae, the scale-invariant feature transform (SIFT) is applied to extract features robust against geometric attacks. The resulting vector is then filtered using the Harris criterion to retain only the stable key-points. Next, the fingerprint template is produced by image binarisation and decomposed into blocks. The hash code is finally obtained by concatenating the singular values computed on the approximation coefficients of each image block. Similarity between hash codes is evaluated by the normalised Hamming distance (HD). Comparative analysis against three similar methods indicates that the proposed hashing scheme performs better in terms of discriminative capability as well as robustness against acceptable image manipulations, such as JPEG compression, gamma correction, speckle noise, Gaussian blur, shearing, and slight rotation.
Keywords: fingerprint image; perceptual hashing; minutiae extraction; SIFT; Harris; SVD; DWT; acceptable attacks.
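The matching step, the normalised Hamming distance between two Boolean hash codes, is standard and can be sketched directly (the function name is my own):

```python
import numpy as np

def normalized_hamming(h1, h2):
    """Fraction of positions where two equal-length Boolean hash codes differ."""
    h1, h2 = np.asarray(h1, bool), np.asarray(h2, bool)
    if h1.shape != h2.shape:
        raise ValueError("hash codes must have equal length")
    return np.count_nonzero(h1 ^ h2) / h1.size
```

A distance near 0 indicates the same finger under acceptable manipulations, while uncorrelated codes of the same length would typically land near 0.5.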
Performance Enhancement of Symmetric Hashed Fingerprint Template using Dynamic Threshold Matching Algorithm
by Ajish Sreedharan
Abstract: Among biometric template protection schemes, fingerprint template protection is strenuous because the fingerprint template is stored as minutiae points. Enhancing the security of the fingerprint biometric template results in degraded matching performance. The modified symmetric hash method uses a key-value as a multiplication parameter for hashing the fingerprint biometric template. Irreversibility and unlinkability analysis of the modified symmetric hashed fingerprint template exhibits better security, but multiplying the fingerprint minutiae template by a key-value degrades matching accuracy. This paper proposes a dynamic threshold matching algorithm in which the threshold values are derived from the key-value. Experimental results on the FVC 2004 database indicate that the combination of the modified symmetric hash function and the dynamic threshold matching algorithm yields better security and excellent matching performance.
Keywords: fingerprint template; modified symmetric hashing; accuracy; irreversibility; unlinkability.
Effect of facial expression in face biometry for a multimodal approach
by Samik Chakraborty, Dipayan Chatterjee, Subrata Golui, Madhuchhanda Mitra, Saurabh Pal
Abstract: With the advancement of technology, security has become an inseparable part of it, yet many factors often influence the accuracy of authentication systems. In this scenario, multimodal biometric systems are used, fusing information from different modalities to address the weaknesses of any single one. In the present work, a robust biometric authentication system is proposed using the face and facial expression as biometric modalities. Facial recognition has been the most commonly used biometric system over the years. Facial expressions of an individual are unique, and they are integrated as an additional layer alongside face recognition to enhance security, as the current scenario tends towards intelligent security systems for real-time surveillance. After preprocessing, eigenvalue-based and local binary pattern (LBP) based features are extracted from the face and facial expression, and the information is fused. Finally, authentication is done using an image Euclidean distance (IMED) based classifier. The proposed work is evaluated on the JAFFE and Yale databases, achieving authentication accuracies of 95.71% and 88.89% respectively.
Keywords: face biometry; facial expression; LBP; eigenfaces; IMED classifier.
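The "eigenvalue-based" face features referred to above are typically eigenfaces: a PCA basis computed from flattened face images. A minimal sketch via the SVD (generic PCA, not the authors' exact pipeline; function names are my own):

```python
import numpy as np

def eigenfaces(faces, n_components):
    """PCA basis ('eigenfaces') of a stack of flattened face images.

    faces: (n_samples, n_pixels) array. Returns (mean, components), where
    the rows of components are the top principal directions."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # right singular vectors of the centred data are the eigenfaces
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Eigenface coefficients of one flattened face."""
    return components @ (face - mean)
```

The coefficients returned by `project` (optionally fused with LBP histograms, as in the abstract) are what a distance-based classifier such as IMED would compare.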
Recognition method of basketball players' throwing action based on image segmentation
by Cong Zhang, Miao Wang, Limin Zhou
Abstract: In order to solve the problems of low recognition accuracy and long recognition time in traditional basketball players' throwing action recognition methods, this paper proposes a new basketball players' throwing action recognition method based on image segmentation. The covariance matrix of noise data of basketball players' throwing action is constructed. The throwing action of basketball players is expressed by acceleration and angular velocity, and the acceleration vector of throwing action is obtained. The feature extraction of throwing action is completed by discrete Fourier transform algorithm. The image of basketball players' throwing action is segmented by threshold and edge, and the change features of throwing action are obtained by kernel function to complete the recognition of basketball players' throwing action. The experimental results show that the accuracy of the proposed method is about 98%, and the time cost is about 2.1s.
Keywords: image segmentation; throwing action recognition; Kalman filter; covariance matrix; multidimensional vector.
Deep Architecture Based Face Spoofing Identification in Real-Time Application
by Mayank Kumar Rusia, Dushyant Kumar Singh
Abstract: Face biometric-based recognition is a demanding and universally accepted method, especially for access control. However, face recognition systems are usually affected by identity threats such as face spoofing, an attempt to acquire the face identity privilege of another person illegally. Developing an efficient, real-time spoofing detection system that quickly detects illegal access attempts and prevents vulnerability violations is indispensable. This manuscript proposes an automated and efficient technique for face spoofing detection based on a customised convolutional neural network named SpoofNET. The proposed model can easily distinguish genuine and spoofed faces with less complex convolutional blocks. The method can be deployed in any application that requires low computation and low-resolution input samples. This manuscript also introduces a dataset synthesised in our lab, validated against the existing NUAA dataset. The proposed model achieved 99.3% validation accuracy with good generalisation capability on the synthesised dataset.
Keywords: face liveness detection; face spoofing attack; convolutional neural network; computer vision; biometrics; image processing.
Cumulative Foot Pressure Image Recognition via Gabor Filters and Sparse Representation Classifier
by Pedram Ahmad Khan Beigi, Aboozar Ghaffari
Abstract: Analysis of cumulative foot pressure images (CFPIs) can be used, like other biometrics, for personal recognition. This biometric, along with others such as gait, can be helpful in identification. It also overcomes some drawbacks of common biometrics (such as face and gait) in data recording, since foot data can be captured without the subject's knowledge. In the process of capturing a cumulative foot pressure image, the spatial and temporal changes of ground reaction force during one gait cycle are recorded. This biometric poses challenges such as walking at different speeds. In this paper, we present a new approach based on Gabor filters as the feature vector and sparse representation classification (SRC). To reduce the feature dimension, eigenfoot and linear discriminant analysis (LDA) are also used, and a normalisation preprocess is applied to obtain a translation- and rotation-invariant representation. We evaluate the proposed approach on dataset D of the Chinese Academy of Sciences (CASIA) Gait-Footprint dataset, containing the cumulative foot pressure images of 88 persons. The experimental results indicate that the proposed method has higher accuracy than the other tested methods and is also robust to changes in walking speed.
Keywords: Cumulative foot pressure; Gabor filters; sparse representation classifier; eigenfoot; LDA.
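A Gabor feature vector of the kind used here, responses of the image to a small bank of oriented filters, can be sketched as follows (a generic illustration with hand-picked kernel parameters, not the authors' configuration; names are my own):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # axis of the carrier wave
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def filter2d(img, kernel):
    """Circular 2-D convolution via the FFT (adequate for a feature sketch)."""
    padded = np.zeros_like(img, dtype=float)
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean absolute filter response per orientation, stacked as a feature vector."""
    return np.array([np.abs(filter2d(img, gabor_kernel(9, 2.0, t, 4.0))).mean()
                     for t in thetas])
```

Vectors like these would then be projected down by eigenfoot/LDA before the sparse representation classifier assigns the identity.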
Human Footprint Biometrics for Personal Identification using Artificial Neural Networks
by Kapil Nagwanshi, Amit Kumar Gupta, Tilottama Goswami, Sunil Pathak, Maleika Heenaye Mamode Khan
Abstract: This study focuses on human footprint identification for high-security applications such as the safety of public places, crime scene investigation, impostor identification, biotech and blue-chip labs, and the identification of infants in hospitals. The paper proposes low-cost hardware to scan biometric human footprints that utilises image pre-processing and enhancement capabilities for obtaining the features. The algorithm enhances footprint matching performance by selecting three sets of local invariant feature detectors, the histogram of gradients, maximally stable extremal regions, and speeded-up robust features, together with the local binary pattern as texture descriptor, a corner point detector, and PCA. Furthermore, descriptive statistics are generated from all the above-mentioned footprint features and concatenated to create the final feature vector. The proposed footprint biometric identification correctly identifies or classifies the person by training the system with patterns of the subjects of interest using an artificial neural network model specially designed for this task. The proposed method gives classification accuracy at a very encouraging level of 99.55%.
Keywords: artificial neural networks; ANNs; biometric; classification; footprint; segmentation.
Improving face recognition using deep autoencoders and feature fusion
by Ali Khider, Rafik Djemili, Ahmed Bouridane, Richard Jiang
Abstract: Uncontrolled environments are the main challenge for real face recognition systems; the recent success of deep learning and feature fusion has led to various performance improvements. This paper proposes a novel scheme called the feature autoencoder (FAE), in which an autoencoder model is not trained directly on the raw facial images but rather on a fusion of features constructed by Gabor filters, the local binary pattern, and local phase quantisation. For each feature, linear discriminant analysis is applied to reduce its high dimensionality, and a limited adaptive histogram equalisation process is employed for contrast enhancement. The proposed scheme has been evaluated on well-known datasets such as AR, ORL, and YALE, and the experimental results have been compared across three classifiers (k-nearest neighbour, multiclass support vector machine, and softmax), demonstrating the effectiveness of the proposed approach and parameters. The results, compared with recent similar approaches on six databases (ORL, YALE, AR, extended YALE B, CMU PIE, and LFWcrop), suggest that the proposed technique outperforms similar techniques, with recognition rates of 100%, 100%, 99.66%, 99.40%, 97.31%, and 90.68% respectively.
Keywords: uncontrolled environments; face recognition; deep learning; sparse; autoencoder; feature extraction; fusion.
MR Image Enhancement and Brain Tumor Detection using Soft Computing and BWT with Auto-Enhance Technique
by Nilesh Bahadure, Nagrajan Raju, Prasenjeet Patil
Abstract: In this research work, a new soft-computing algorithm with an auto-enhance technique is presented for medical image enhancement. Image enhancement is one of the most important classes of image analysis in image processing. This paper presents a complete review of the performance parameters of popular image enhancement techniques and proposes a new methodology for improving visualisation while preserving high intensity values. Images with colour intensity values cannot be processed directly by most enhancement techniques, so a suitable colour model is chosen for processing and the proposed algorithm is implemented for it. Accurate analysis of information from the region of interest is always a central issue in image analysis, and with the help of this improved soft-computing algorithm it is possible to enhance images with best-in-class clarity and visualisation. Simulation and experimental results on different test images show that the proposed algorithm gives better results than other state-of-the-art image enhancement techniques.
Keywords: Berkeley wavelet transformation; BWT; fuzzy clustering means; FCMs; magnetic resonance imaging; MRI.
Investigation of COVID-19 Symptoms Using Deep Learning Based Image Enhancement Scheme for X-Ray Medical Images
by V. Pandimurugan, A.V. Prabu, S. Rajasoundaran, SIDHESWAR ROUTRAY, Nilesh Bahadure, D. Ratna Kishore
Abstract: Image enhancement is an indispensable technique for investigating various biological features. Biological image data can be obtained from computed tomography (CT), magnetic resonance imaging (MRI), and X-ray imaging; X-ray imaging is useful for obtaining information about the lungs and respiratory system. COVID-19 has been a life-threatening contagious disease across the world for the past two years, and patients' chest images play an important role in the early detection of disease intensity. We propose a generative adversarial network (GAN) method that identifies COVID-19 from medical images and improves diagnostic sensitivity. Here we used virtual colouring methods and a platform for training the images using a deep parental training method. The method gives optimal classification results with the help of well-defined image enhancement and image extraction approaches. In our method, the accuracy lies between 87.8% and 89.6% for the dataset and the synthetic dataset respectively.
Keywords: COVID-19; image classification; medical image enhancement; generative adversarial network; deep learning.
Pulmonary Lung Nodule Detection and classification through Image Enhancement and Deep Learning
by Nuthanakanti Bhaskar, Ganashree T. S, Raj Kumar Patra
Abstract: Noise is introduced into images during the medical image capture process, and proper enhancement and pre-processing are required before these images can be analysed. Most researchers have approached this on CT lung images using ROI selection, morphological operations, histogram equalisation, and binary thresholding, achieving around 95% accuracy. To obtain better accuracy, the pre-processing stage of the present work applies resampling, morphological closure, and image denoising techniques. In the segmentation stage, a U-net model is applied; the LIDC dataset is used for labelled nodule regions, and the KDSB17 dataset for cancer/non-cancer labels. To minimise false positives among detected nodules, a CNN is applied, which converges to 84.4% validation accuracy. The AUC of the CNN model was 0.6231, with a validation loss of 0.5646 and an accuracy of 96%.
Keywords: image enhancement; deep learning; lung cancer; U-net; CNN.
Vehicle Recognition Using Convolution Neural Network (CNN)
by Maleika Heenaye Mamode Khan, Abubakar Chonnoo, Oumeir Rengonny
Abstract: A significant challenge in the development of automatic vehicle make and model recognition (VMMR) is distinguishing features between different shapes based on the appearance of objects. Automatic recognition of vehicles based on their geometric shapes is in high demand, and the diversity of makes and models further complicates this process; few applications can recognise vehicles based on their geometric shape. To bridge this gap, a convolution neural network (CNN) was adopted to predict the make and model of a car from either the rear or front view of the vehicle using the pre-trained MobileNet. First, YOLOv3 is used to detect the vehicle; the colour and licence plate of the vehicle are also extracted. Accuracies of 94.1% in recognising the make, 98.7% for the model, 99.1% for the car registration plate number, and 90.3% for the colour were achieved.
Keywords: convolution neural network; CNN; deep learning; segmentation; vehicle make and model recognition; VMMR.
A Hybrid Approach for Face Recognition using LBP and Multi Level Classifier
by Mukesh Gupta, Pankaj Dadheech, ANKIT KUMAR, Ashok Kumar Saini, Neha Janu, Sanwta Ram Dogiwal
Abstract: General face recognition, a task humans perform in daily activities, takes place in a virtually uncontrolled environment. This paper presents a facial recognition system based on the random forest and support vector machine; compared with previous methods, this approach achieves high accuracy. We propose a hybrid method combining SVM and random forest classification. The combined RF+SVM method achieves high recognition speed across a wide range of faces and emotions. We also compared the algorithm with previous techniques; each experiment used a freely available internet database of 400 photographs of 40 people. The improved results of this paper's hybrid classification methodology come from combining the advantages of both the traditional SVM and RF methods. The proposed system has an accuracy of 98.6%.
Keywords: biometrics; database; face recognition; SVM classifier; random forest.
Collaborative Representation With Hole Filling Techniques For Kinect RGBD Face Recognition
by Aniketh Gaonkar, Narayan Vetrekar, Rajendra Gad
Abstract: We present in this paper three different kernel-based filtering techniques to fill missing information in depth map images obtained from a low-resolution sensor such as the Kinect, in order to improve the accuracy of RGB-D face recognition. We propose an RGB-D face recognition scheme that combines the depth map and colour image using wavelet average fusion, followed by a collaborative representation classifier (CRC) for comparing reference and probe images. We present evaluation results on our GU-RGBD face database and the IIIT-D face database to demonstrate the significance of the three filters employed in hole filling. Our investigation further presents an extensive experimental analysis using eight different feature extraction techniques, applied independently across the three filters, to demonstrate the potential of the proposed approach. The proposed hole-filling approach improves the accuracy of RGB-D face recognition compared with not employing filtering operations.
Keywords: face biometric; depth map; wavelet average fusion; feature extraction; collaborative representation classifier; CRC.
Local Triangular Patterns: Novel Handcrafted Feature Descriptors for Facial Expression Recognition
by Mukku Nisanth Kartheek, Munaga V. N. K. Prasad, Raju Bhukya
Abstract: Facial expressions are common across cultures and are used universally by humans to express their internal emotions, intentions and thoughts. The main task in facial expression recognition (FER) systems is to develop efficient feature descriptors that could effectively classify the facial expressions into various categories. In this work, towards extracting significant features, local triangular patterns (LTrP) named mini triangular pattern (mTP) and mega triangular pattern (MTP) have been proposed in order to minimise the feature vector length and to maximise the recognition accuracy. The proposed mTP method extracts two features in a 3 x 3 circular neighbourhood, whereas, MTP method extracts two features in a 5 x 5 circular neighbourhood. The proposed methods (mTP and MTP) are implemented on three benchmark FER datasets namely TFEID, MUG and KDEF. The experiments have been performed with respect to both six and seven expressions in person independent setup to simulate a real-world scenario. The experimental results demonstrated the efficiency of the proposed methods when compared to the standard existing methods.
Keywords: binary pattern; facial expressions; feature descriptor; handcrafted features; triangular patterns.
Performance Optimization of Face Recognition based on LBP with SVM and Random Forest classifier
by Ashutosh Kumar, Gajanand Sharma, Rajneesh Pareek, Satyajeet Sharma, Pankaj Dadheech, Mukesh Kumar Gupta
Abstract: The requirements of face recognition are well described, as many industrial applications rely on it to achieve one or more goals. The local binary pattern is used for feature extraction, and the support vector machine (SVM) classifier is used for classification. To recognise faces or objects, we first provide the system with a database for training purposes. We then send an object image to the system, which extracts only the relevant information or a portion of the face and processes it using LBP and SVM. When the illumination of the object image varies, the accuracy of facial recognition drops, and when just one training sample is provided, the best matching results are not obtained. In this paper, we describe a model that extracts local binary patterns from distinct sample photos, trains the SVM classifier on them, and then categorises input probe images using binary and multiclass SVM. In this case, the accuracy for an 80% training and 20% testing ratio is 97.5%.
Keywords: face recognition; PCA; LBP; support vector machine; SVM; random forest.
Face Recognition under Large Age Gap using Age Face Generation
by Rajesh Tripathi, Anand S. Jalal
Abstract: Age-invariant face recognition (AIFR) is a challenging problem in the area of face recognition. To handle large age gaps, we propose a robust deep-learning-based approach consisting of four important steps. Pre-processing is performed for face detection. Age face generation is carried out with a modified age-conditional generative adversarial network (acGAN). The generated age face images are mixed with the training dataset, and augmentation is applied to increase the training data and counteract the bias of deep learning models towards dataset size. A modified residual convolutional neural network is used for training and testing on the face images. Performance has been evaluated using two-fold cross-validation on the standard and challenging LAG dataset. The proposed approach achieved 92.5% recognition accuracy, which is better than existing face recognition approaches for a large age gap.
Keywords: age invariant; deep learning; generative adversarial network; large age gap; convolutional neural network.
Sensor-enabled Biometric Signature based authentication method for smartphone users
by Sumaiya Ahmad, Sarthak Mishra, Farhana Javed Zareen, Suraiya Jabin
Abstract: With the ubiquity of smartphones, the need for foolproof authentication mechanisms that support authentication on the go and remote authentication is on the rise. The signature being among the most sophisticated authentication methods, this paper explores the idea of designing a robust signature-based authentication method combined with smartphone sensor-based features. We used our previously created sensor-enabled biometric signature database, iSignDB, which contains images of signatures along with the associated smartphone sensor readings recorded while signing on the touchscreen. Additionally, we considered six statistical features with the objective of designing a highly accurate authentication model. We trained our authentication model over iSignDB using the bidirectional LSTM method and obtained FAR 2%, FRR 12%, and EER 2% against skilled forgery attacks, and FAR 5%, FRR 12%, and EER 6% against random forgery attacks, which is comparable to existing state-of-the-art models for smartphone signature biometrics. The main contributions of this work are the extensive feature vector and a robust signature-based authentication method.
Keywords: authentication; biometrics; smartphone sensors; online biometric signatures; recurrent neural network; RNN; bidirectional LSTM; BiLSTM; iSignDB.
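The abstract above mentions six statistical features computed from the sensor readings but does not enumerate them; a plausible sketch, summarising one sensor channel with six common statistics (mean, standard deviation, min, max, skewness, excess kurtosis), might look like this.

```python
import numpy as np

def sensor_features(signal):
    """Six simple statistics summarising one smartphone sensor channel
    recorded while signing. The exact six features used for iSignDB are
    not specified in the abstract; these are illustrative choices."""
    s = np.asarray(signal, dtype=float)
    mean, std = s.mean(), s.std()
    # Guard against a flat signal when normalising the higher moments.
    z = (s - mean) / std if std > 0 else np.zeros_like(s)
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean() - 3.0          # excess kurtosis
    return np.array([mean, std, s.min(), s.max(), skew, kurt])

accel_x = [0.1, 0.4, 0.3, 0.9, 0.5]       # one hypothetical accelerometer trace
feats = sensor_features(accel_x)
print(feats.shape)  # (6,)
```

One such six-vector per sensor channel can then be concatenated with the signature-image features before feeding the BiLSTM.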
Autogenic, prognostic and collective signature affirmation framework based on a diverse set of features
by Sameera Khan, Megha Mishra, Vishnu Kumar Mishra
Abstract: The use of forged signatures for fraudulent practices has become extremely common in recent times. Therefore, the automatic signature verification (SV) process performs a significant role. Such verifiers need a large number of specimens of a person's signature to establish intrapersonal variability adequately, so it is important to deal with the problem of data unavailability for training. A method to train with a single reference signature is proposed here to minimise this limitation. The methodology is analysed by utilising a novel Gaussian gated recurrent unit neural network (2GRUNN) classifier. The single signature image is retrieved from the database; then, signature duplication is performed using a sinusoidal transformation. Next, pre-processing, feature extraction (FE) and feature selection (FS) are conducted, with the FS executed by employing linear chaotic shell game optimisation (LCSGO). The extracted features are fed to the proposed 2GRUNN for classification. Lastly, the results are compared with the existing methodologies.
Keywords: offline signature verification; signature duplication; sinusoidal transformation; shell game optimisation; SGO; synthetic signature; synthetic signature database.
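The sinusoidal duplication step named above can be sketched by displacing each stroke point with a sinusoid of its x coordinate; the amplitude, frequency and phase parameters here are illustrative, not the paper's exact settings.

```python
import numpy as np

def sinusoidal_duplicate(points, amp=2.0, freq=0.05, phase=0.0):
    """Create a duplicate of a signature stroke by adding a sinusoidal
    vertical displacement to each (x, y) point, yielding a plausible
    intrapersonal variant of the single reference signature."""
    pts = np.asarray(points, dtype=float)
    out = pts.copy()
    out[:, 1] += amp * np.sin(2 * np.pi * freq * pts[:, 0] + phase)
    return out

stroke = np.array([[0.0, 10.0], [5.0, 10.0], [10.0, 10.0]])
dup = sinusoidal_duplicate(stroke)
```

Varying amp/freq/phase per duplicate produces a small synthetic set from one genuine specimen.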
Identification based on feature fusion of multimodal biometrics and deep learning
by Chahreddine Medjahed, Freha Mezzoudj, Abdelatif Rahmoun, Christophe Charrier
Abstract: This paper proposes a novel methodology for individual identification based on convolutional neural networks (CNNs) and machine learning (ML) algorithms. The technique is based on fusing biometric modalities at the feature level. For this purpose, several hybrid multimodal biometric systems are used as a benchmark to measure identification accuracy. In these systems, a CNN is used for each modality to extract modality-specific features from the dataset patterns, and machine learning algorithms are used to identify (classify) individuals. We emphasise performing the fusion of biometric modalities at the feature level and apply the proposed algorithms to two challenging databases: the FEI face database and the IITD Palmprint V1 dataset. The results show good accuracy for many of the proposed multimodal biometric person identification systems. Through experimental runs on several multimodal systems, it is clearly shown that the best identification performance is obtained when using ResNet18 as the deep-learning feature extractor along with the linear discriminant machine learning algorithm.
Keywords: biometrics; multi-biometric system; feature level fusion; score level fusion; deep learning; machine learning.
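Feature-level fusion as described above is commonly implemented by normalising each modality's feature vector and concatenating them into one descriptor; the paper's exact fusion rule is not given, so this is a minimal sketch of that common choice.

```python
import numpy as np

def fuse_features(feat_a, feat_b):
    """Feature-level fusion: L2-normalise each modality's feature vector
    so neither dominates, then concatenate into a single descriptor."""
    def l2(v):
        v = np.asarray(v, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(feat_a), l2(feat_b)])

rng = np.random.default_rng(1)
face_feat = rng.random(512)   # e.g. a ResNet18 face embedding (hypothetical dim)
palm_feat = rng.random(512)   # e.g. a ResNet18 palm-print embedding
fused = fuse_features(face_feat, palm_feat)
print(fused.shape)  # (1024,)
```

The fused vector is then handed to an ordinary classifier (here the paper favours linear discriminant analysis).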
Special Issue on: Machine Learning Algorithms for Biometrics
The model of fast face recognition against age interference in deep learning
by Yuzhe Zhang, Peilin Wu, Jinhui Zhao, Hao Feng, Rongtao Liao
Abstract: In order to overcome the low recognition efficiency of traditional age-invariant face recognition methods, this paper proposes a new anti-age-interference face recognition modelling method based on deep learning. A standard face recognition information database is built and used as the matching standard for face recognition. A deep-learning convolutional neural network is constructed; the propagation process and training strategy of deep learning are set up; an age discrimination model is built; and a loss function and an objective function are constructed and solved. On this basis, facial features are extracted and, after matching with the data in the standard database and similarity calculation, the final rapid face recognition result is obtained. Experimental results show that the highest recognition accuracy of the designed face recognition model is 99.2%, and its recognition speed is faster than that of traditional methods.
Keywords: deep learning; convolutional neural network; anti-age interference; face recognition model.
Arm movement recognition of badminton players in the third hit based on visual search
by Ling Liu
Abstract: In order to overcome the problems of low recognition accuracy, a high repetition rate of corner points and long recognition time when recognising badminton players' arm movements in the third hit, a new recognition method based on visual search is proposed in this paper. The method uses visual search technology: a binocular vision camera records the video; image stabilisation technology applies histogram equalisation to the recorded visual images, yielding images after noise elimination; the scale-invariant feature transform (SIFT) algorithm collects SIFT corner features from the images; and these features are combined with a support vector machine to construct the recognition model, realising recognition of the badminton players' third-hit arm movements. The experimental results show that the signal-to-noise ratio (SNR) of the collected visual images is less than 10 dB, the repetition rate of corner points is always lower than 60%, and the recognition time is kept within 0.8 s.
Keywords: Visual search; badminton players; arm movement; recognition; visual image; corner features.
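The histogram-equalisation preprocessing named in the abstract above can be sketched in a few lines; this is the classic 8-bit greyscale formulation, not the paper's specific implementation.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalisation of an 8-bit greyscale frame: map each grey
    level through the normalised cumulative histogram to spread contrast."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalisation formula, rescaled to the 0..255 range.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

frame = np.tile(np.arange(0, 128, dtype=np.uint8), (4, 1))  # low-contrast frame
eq = hist_equalize(frame)
```

After equalisation the grey levels span the full 0–255 range, which helps subsequent corner detection.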
Multi-pose facial expression recognition based on convolutional neural network
by Yongliang Feng
Abstract: In order to overcome the problems of low expression similarity and a low recognition rate in multi-pose facial expression recognition, a new multi-pose facial expression recognition method based on a convolutional neural network is proposed. The convolution layer is constructed directly from Gabor wavelets with fixed weights, and the fully connected layer is constructed with a support vector machine (SVM). The structure of the convolutional neural network is determined by matching growth rules, and the network parameters are trained by the back-propagation algorithm. The AdaBoost algorithm is used to crop the facial expression region, while gradient integral projection and dual-threshold binarisation are used to locate the eyes. Scale normalisation and grey-scale normalisation are then applied to realise multi-pose facial expression recognition. The experimental results show that the highest expression similarity is 98.43%, the recognition rate is close to 100% under different rotation angles, and the recognition rate is as high as 99.96% for different expressions.
Keywords: Convolutional neural network; Multi-pose; Facial expression; Recognition; AdaBoost algorithm.
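Building a convolution layer from fixed-weight Gabor wavelets, as the abstract above describes, amounts to generating a bank of Gabor kernels at several orientations; a minimal sketch (kernel size and parameters are illustrative):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope multiplied by a
    cosine wave oriented at angle theta, usable as a fixed conv kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

# A small bank of four orientations, serving as a fixed-weight conv layer.
bank = np.stack([gabor_kernel(theta=t)
                 for t in np.linspace(0, np.pi, 4, endpoint=False)])
print(bank.shape)  # (4, 9, 9)
```

Because the kernels are fixed, only the later SVM layer needs training, which is consistent with the hybrid design described above.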
Facial feature localization and subtle expression recognition based on deep convolution neural network
by Qiaojun Li, Peipei Wang
Abstract: In order to solve the problems of traditional face recognition methods, such as low accuracy of facial feature localization, poor accuracy of subtle expression recognition and long recognition time, a facial feature localization and subtle expression recognition method based on a deep convolutional neural network is proposed. The principle of the deep convolutional neural network is analyzed, and facial feature extraction is placed in the convolution and pooling layers. The foreground and background entropy of the face image are obtained by binarization of the face image, the optical-flow characteristics at all positions of each frame are obtained using the deep convolutional neural network, and the recognition of subtle facial expressions is completed. The experimental results show that the accuracy of the proposed method is up to 98%, the recognition accuracy of facial expressions is high, and the recognition time is short.
Keywords: deep convolution; neural network; facial feature localization; subtle expression; recognition.
Research on adaptive conversion of AI language based on rough sets
by Yuping Fang, Da Fang
Abstract: In order to solve the problems of high complexity and low computational efficiency in traditional artificial intelligence (AI) language conversion methods, an adaptive AI language conversion method based on rough sets is proposed. AI language preprocessing is realised by pre-emphasis, windowing, framing and endpoint detection. An attribute reduction algorithm based on rough set theory is used to select the features of the AI language, reducing the dimension of the input feature vector. The experimental results show that after feature extraction the computational efficiency is obviously improved and the efficiency of the proposed method is the highest, averaging close to 100%. Compared with traditional methods, the complexity of the proposed method is lower, with an average complexity of 1.68% over the ten experimental iterations. The method simplifies the adaptive conversion process of AI language and has high computational efficiency.
Keywords: Rough set; AI language; Adaptive conversion; Feature selection; Redundant information deletion.
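Rough-set attribute reduction, as used above for feature selection, keeps only the condition attributes needed to preserve the dependency of the decision on the data. The paper's exact reduction algorithm is not specified; a common greedy sketch based on the dependency degree:

```python
def dependency(rows, attrs, decision_idx):
    """Rough-set dependency degree: the fraction of rows whose equivalence
    class (under the chosen attributes) is consistent in the decision."""
    classes = {}
    for row in rows:
        classes.setdefault(tuple(row[a] for a in attrs), []).append(row[decision_idx])
    consistent = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return consistent / len(rows)

def greedy_reduct(rows, cond_attrs, decision_idx):
    """Greedily add the attribute that raises the dependency degree most,
    stopping once full dependency is reached (a heuristic reduct)."""
    reduct, full = [], dependency(rows, cond_attrs, decision_idx)
    while dependency(rows, reduct, decision_idx) < full:
        best = max((a for a in cond_attrs if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], decision_idx))
        reduct.append(best)
    return reduct

# Toy table: columns 0-2 are condition attributes, column 3 the decision.
table = [(0, 1, 0, 'y'), (0, 0, 1, 'n'), (1, 1, 0, 'y'), (1, 0, 1, 'n')]
print(greedy_reduct(table, [0, 1, 2], 3))  # [1]
```

On the toy table the decision is fully determined by attribute 1 alone, so the reduct discards the redundant attributes 0 and 2.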
Face feature tracking algorithm for long-distance runners based on multi region fusion
by Li Wan
Abstract: In order to overcome the problems of a high error rate and poor tracking performance in traditional algorithms, a multi-region fusion-based face feature tracking algorithm for long-distance runners is proposed in this paper. Firstly, a multi-region template voting strategy classifies and obtains face features by setting a face-feature similarity threshold for each region and performing regional feature similarity classification. The mean-shift tracking algorithm is then used to model the target object, with the Bhattacharyya coefficient as the evaluation standard for model similarity measurement, and the face features are tracked through iterative operation. Experimental results show that the recognition accuracy of this algorithm is higher than 92% in different situations, and the tracking error of the centre position stays below 20 pixels at different angles and in complex environments, which fully proves the effectiveness of the algorithm.
Keywords: Multi region fusion; Long-distance runner; Feature classification; Face feature tracking; Mean Shift tracking algorithm.
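Mean-shift tracking conventionally scores a candidate region against the target model with the Bhattacharyya coefficient between their colour histograms; assuming that is the similarity measure intended above, a minimal sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalised histograms;
    1.0 means identical models, values near 0 mean no overlap."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

target = [4, 2, 2]       # colour histogram of the modelled face region
candidate = [4, 2, 2]    # histogram at the current candidate location
print(round(bhattacharyya(target, candidate), 3))  # 1.0
```

Mean-shift iterations move the candidate window toward the location that maximises this coefficient.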
Multi-information fingerprint identification method based on Interactive Genetic Algorithm
by Yuansheng Liu
Abstract: In order to overcome the shortcomings of traditional fingerprint identification methods, such as low computational efficiency and a high misclassification rate, a new multi-information fingerprint identification method based on an interactive genetic algorithm is proposed. Firstly, the multi-information fingerprint image is preprocessed to extract the feature points. Then, combined with the interactive genetic algorithm, the rotation angle between the two fingerprint images is determined; at the same time, the fingerprint offset is encoded and a matching function is set. The matching degree of the two fingerprints is determined by the interactive genetic algorithm, which effectively realises multi-information fingerprint identification. Finally, a simulation experiment is carried out. The experimental results show that the proposed method effectively reduces the misclassification rate, shortens the average matching time of fingerprint images, and improves operational efficiency. The minimum error rate is only 1.02%.
Keywords: Interactive genetic algorithm; Feature point extraction; Multi-information fingerprint identification; Offset coding.
Dynamic facial expression recognition of sprinters based on multi-scale detail enhancement
by Xiang CAO, Pengfei Li
Abstract: In order to solve the problems of a low average gradient and long recognition time in traditional facial expression recognition methods, a multi-scale detail enhancement method for recognising the dynamic facial expressions of sprinters is proposed. Principal component analysis is used to establish the facial expression feature subspace of the sprinters and to project and reduce the dimension of the dynamic facial expression feature vectors, while bilateral filtering separates the low-frequency and high-frequency information of the facial images. The multi-scale details of the expressions are enhanced using a lateral inhibition network model and an improved image S-curve. The dynamic facial expression feature vectors are then input into a support vector machine to recognise the sprinters' expressions. Experimental results show that the average gradient is about 98 and the shortest recognition time is about 1.9 s.
Keywords: Multi-scale; Image detail enhancement; Gabor wavelet transform; Feature vector; Expression recognition; Support vector machine.
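The PCA subspace projection used above for dimensionality reduction can be sketched via eigendecomposition of the covariance matrix; the sample counts and dimensions are illustrative.

```python
import numpy as np

def pca_project(X, k):
    """Project samples (rows of X) onto the top-k principal components
    found by eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)                    # centre the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))   # 20 expression feature vectors, dim 8
low = pca_project(feats, 3)
print(low.shape)  # (20, 3)
```

The reduced vectors are what would be fed to the SVM in the pipeline described above.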
Research on automatic recognition method of basketball shooting action based on background subtraction method
by Linzhu Li, Kun Wang
Abstract: In order to overcome the problems of existing methods, such as high false recognition and missed recognition rates and a low recognition rate, an automatic recognition method for basketball shooting actions based on the background subtraction method is proposed. The background information of the target image is obtained by background subtraction to improve the clarity of the whole contour of the moving object. A certain number of video frames are extracted and the background in the images is effectively separated. The two-dimensional Fourier transform is used to balance the video frames to obtain the foreground target information and the background of the video sequence, thereby completing the automatic recognition of basketball shooting actions. Experimental results show that the proposed method effectively reduces the false recognition and missed recognition rates and improves the recognition rate.
Keywords: Background subtraction method; basketball shooting; action automatic recognition; optical flow method; error rate.
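The core background-subtraction step described above (separating the moving player from a static background) can be sketched with simple thresholded frame differencing; the threshold value is an illustrative assumption.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Background subtraction: pixels whose absolute difference from the
    background model exceeds the threshold are marked as foreground."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

bg = np.zeros((4, 4), dtype=np.uint8)    # static background model
cur = bg.copy()
cur[1:3, 1:3] = 200                      # a moving player enters this region
mask = foreground_mask(cur, bg)
print(int(mask.sum()))  # 4
```

In practice the background model is re-estimated over many frames (e.g. a running average) rather than taken from a single frame.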
Recognition Method of Unspecified Face Expressions Based on Machine Learning
by ZheShu Jia, DeYun Chen
Abstract: Traditional face recognition methods usually perform facial expression recognition for designated faces, and the chaotic pixel set at the edge of the face image leads to poor accuracy when recognising unspecified facial expressions. To improve this accuracy, a machine-learning-based method for recognising the expressions of unspecified faces is proposed. A feature detection model for unspecified facial expressions is constructed, the features are divided into regional blocks, and the block feature information is fused. A spatial feature projection model is established, the feature information entropy is weighted, statistical features and edge information entropy features are extracted, the features are reorganised and matched against the edge pixel sets, and the recognition of the various facial expression features is completed. Experimental results show that the accuracy of this method is significant, reaching 1 (i.e., 100%), which effectively improves recognition efficiency and anti-interference performance.
Keywords: Machine learning; unspecified person; facial expression; recognition; feature extraction; information enhancement.
Automatic recognition of javelin athletes throwing angle based on recognizable statistical characteristics
by Zhe Dong, Xiongying Wang
Abstract: In order to overcome the problem that traditional recognition methods have poor statistical performance on the regularity of body feature data before recognising the throwing angle, which leads to deviations in judging the javelin flight trajectory, this paper proposes an automatic recognition method for javelin athletes' throwing angles based on recognisable statistical characteristics. Firstly, the technical characteristics of javelin throwers of different genders are extracted using a statistical process of distinguishing features. Then, the angle of the recognition equipment is calibrated and combined with the trigger-signal position to realise automatic recognition of the javelin throwing angle. Experimental results show that the javelin flight trajectory identified by this method is the closest to the actual trajectory, and the recognition accuracy of the throwing angle reaches more than 98%, showing that the method can accurately recognise javelin athletes' throwing angles.
Keywords: Recognizable statistical characteristics; javelin movement; throwing angle recognition; two-dimensional image; trigger signal position; recognition accuracy.
Face feature tracking algorithm of aerobics athletes based on Kalman filter and Meanshift
by Shu Yang
Abstract: In order to solve the problems of low accuracy and long processing time when tracking the face images of aerobics athletes with traditional methods, a face feature tracking algorithm based on the Kalman filter and Meanshift is proposed. A three-frame difference method is used to extract the colour features of the athletes' face images, the geometric feature similarity of the images is measured, the grey values of local face-feature images are calculated, and corner features are matched by the NCC matching algorithm. The Kalman filter is introduced to denoise the differing pixels of the feature image, and the mean shift of the face features is obtained by the mean-shift algorithm to realise tracking of the aerobics athletes' face features. The experimental results show that the tracking accuracy of the proposed method is up to 97%, and the shortest tracking time is about 1.5 s.
Keywords: Multi region fusion; aerobics athletes; image resolution; background modelling; feature tracking.
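A Kalman filter combined with mean-shift, as above, typically uses the filter to predict and smooth the tracked face-centre position between mean-shift corrections. A minimal 1-D constant-velocity predict/update sketch (the state model and noise parameters are illustrative simplifications, not the paper's settings):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One constant-velocity Kalman predict/update for a 1-D face-centre
    coordinate; state x = [position, velocity], z = noisy measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe the position only
    Q, R = q * np.eye(2), np.array([[r]])
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()                 # corrected state
    P = (np.eye(2) - K @ H) @ P             # corrected covariance
    return x, P

x, P = np.zeros(2), np.eye(2)
for meas in [1.0, 2.1, 2.9, 4.2]:           # face centre drifting right
    x, P = kalman_step(x, P, np.array([meas]))
```

In a full tracker the mean-shift result at each frame would supply the measurement z, and two such filters (x and y) track the face centre.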
Research on leg posture recognition of sprinters based on SVM classifier
by Yang HE
Abstract: In order to overcome the problems of a low recognition rate, high time consumption and a high misclassification rate caused by the difficulty of obtaining the global motion pattern information of sprinters in traditional posture recognition methods, a leg posture recognition method based on an SVM classifier is proposed. A multivariate statistical model is used to denoise the sprint images, and the effective leg movement pattern information of the sprinters is extracted. In the SVM classifier, the samples are divided by a decision function to realise the recognition of the sprinters' leg postures. A comparative experiment is designed to verify the effectiveness of the method. Experimental results show that the recognition rate of the proposed method is more than 90%, the time consumption of the recognition process is always less than 0.5 s, and the misclassification rate of leg posture features is always below 5%, which fully demonstrates the high recognition performance of the method.
Keywords: SVM classifier; Sprint; Wavelet transform; Feature vector; Leg pose recognition; Feature extraction.