International Journal of Biometrics (26 papers in press)
Machine Learning Based Iris Liveness Identification using Fragmental Energy of Cosine Transformed Iris Images
by Smita Khade, Sudeep Thepade, Swati Ahirrao
Abstract: Iris biometric identification provides contactless authentication, helping to prevent the spread of diseases such as COVID-19. These systems, however, face spoofing attacks mounted with contact lenses, replayed video and printed images, making them vulnerable and unsafe. This paper proposes an iris liveness detection method to mitigate spoofing attacks, using fragmental coefficients of the cosine-transformed iris image as features. Seven variants of feature formation are considered in the experimental validation of the proposed method, and the features are used to train eight assorted machine learning classifiers and ensembles for iris liveness identification. Recall, F-measure, precision and accuracy are used to evaluate the performance of the proposed variants. Experimentation carried out on four standard datasets shows that the best iris liveness identification is achieved by the 4x4 fragmental coefficients of the cosine-transformed iris image with the random forest algorithm, at 99.18% accuracy, closely followed by the ensemble of classifiers.
Keywords: iris images; liveness detection; discrete cosine transform; machine learning; classification; biometrics; feature extraction.
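The fragmental-coefficient idea above can be sketched as follows: take the 2-D DCT of an iris image and keep only the low-frequency top-left block (e.g. 4x4) as the feature vector. A minimal pure-Python sketch, assuming an un-normalised DCT-II on an 8x8 grayscale patch; the patch and block sizes are illustrative, not the paper's exact pipeline:

```python
import math

def dct2(img):
    """Un-normalised 2-D DCT-II of a square image (list of lists)."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (img[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = s
    return out

def fragmental_features(img, k=4):
    """Keep only the k x k low-frequency (top-left) DCT block as features."""
    coeffs = dct2(img)
    return [coeffs[u][v] for u in range(k) for v in range(k)]

# toy 8x8 "iris patch" of constant intensity 10
patch = [[10.0] * 8 for _ in range(8)]
feats = fragmental_features(patch, k=4)
# for a constant image, all energy sits in the DC term: 10 * 8 * 8 = 640
```

The 4x4 fragment yields a 16-dimensional vector, which is what the classifiers would be trained on.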
Revocable Iris Templates using Partial Sort and Randomized Look-up Table Mapping
by Mulagala Sandhya, Dilip Kumar Vallabhadas, Shubham Rathod
Abstract: The world's reliance on computerised systems continues to grow. In recent years, biometric systems have proved vulnerable to the leakage of template information. If a biometric template is stolen, it is lost permanently and cannot be restored or reissued, unlike knowledge-based or possession-based credentials. Many biometric traits can be used for recognition, but among them the iris is one of the strongest owing to its considerably high accuracy. In this paper, we develop a new cancellable biometric scheme that couples Indexing-First-One (IFO) hashing with a partial sort technique. IFO hashing uses the P-order Hadamard product and a modulo threshold function; pairing it with the partial sort technique strengthens it considerably further. The use of IFO hashing yields a good balance between security and accuracy. We used the widely adopted CASIA-v3 database, which provides a wide range of iris templates, for our experiments. Compared with previous cancellable schemes, analysis of the experimental results shows good accuracy. Along with this improved accuracy, the modified IFO hashing scheme also provides strong resistance to various privacy and security attacks.
Keywords: Iris; cancellable template; min-hashing; Jaccard similarity; security and privacy; partial sort; look-up table.
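The keywords above mention min-hashing and Jaccard similarity, the building blocks behind IFO-style cancellable templates. A minimal sketch (not the paper's IFO scheme) showing how a MinHash signature approximates the Jaccard similarity between two binarised iris codes, each treated as the set of its set-bit positions; the code sizes and permutation count are illustrative:

```python
import random

def jaccard(a, b):
    """Exact Jaccard similarity of two sets."""
    return len(a & b) / len(a | b)

def minhash_signature(s, perms):
    """For each random permutation, record the smallest mapped element."""
    return [min(p[x] for x in s) for p in perms]

random.seed(0)
universe = list(range(64))            # bit positions of a toy iris code
perms = []
for _ in range(200):                  # 200 random permutations of the universe
    p = universe[:]
    random.shuffle(p)
    perms.append(p)

a = set(range(0, 40))                 # set-bit indices of template A
b = set(range(10, 50))                # set-bit indices of template B
sig_a = minhash_signature(a, perms)
sig_b = minhash_signature(b, perms)
estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(perms)
exact = jaccard(a, b)                 # |A∩B| / |A∪B| = 30 / 50 = 0.6
```

The fraction of matching signature entries converges to the exact Jaccard similarity as the number of permutations grows, while the signatures themselves do not reveal the raw template.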
Robust Perceptual Fingerprint Image Hashing: A Comparative Study
by Wafa Birouk, Atidel Lahoulou, Ali Melit, Ahmed Bouridane
Abstract: This paper presents a robust perceptual hashing scheme for biometric template protection in which the input fingerprint image is mapped into a sequence of Boolean values. Our aim is to develop a method relying on four techniques, namely SIFT, Harris, DWT and SVD. After extracting the minutiae, the scale-invariant feature transform (SIFT) is applied to extract features robust against geometric attacks. The resulting vector is then filtered using the Harris criterion to retain only the stable key-points. Next, the fingerprint template is produced by image binarisation and decomposed into blocks. The hash code is finally obtained by concatenating the singular values computed on the approximation coefficients of each image block. Similarity between hash codes is evaluated by the normalised Hamming distance (HD). Comparative analysis against three similar methods indicates that the proposed hashing scheme performs better in terms of discriminative capability as well as robustness against acceptable image manipulations, such as JPEG compression, gamma correction, speckle noise, Gaussian blur, shearing and slight rotation.
Keywords: Fingerprint image; Perceptual hashing; Minutiae extraction; SIFT; Harris; SVD; DWT; Acceptable attacks.
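Hash-code similarity in the scheme above is measured by the normalised Hamming distance: the fraction of bit positions at which two equal-length codes differ. A minimal sketch (the bit strings are illustrative):

```python
def normalized_hamming(h1, h2):
    """Fraction of positions at which two equal-length bit sequences differ."""
    if len(h1) != len(h2):
        raise ValueError("hash codes must have equal length")
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

h1 = [1, 0, 1, 1, 0, 0, 1, 0]
h2 = [1, 0, 0, 1, 0, 1, 1, 0]
d = normalized_hamming(h1, h2)   # 2 differing bits out of 8 -> 0.25
```

A distance near 0 indicates the same fingerprint under an acceptable manipulation; a distance near 0.5 indicates unrelated templates.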
Performance Enhancement of Symmetric Hashed Fingerprint Template using Dynamic Threshold Matching Algorithm
by Ajish Sreedharan
Abstract: Among biometric template protection problems, fingerprint template protection is strenuous because the fingerprint template is stored as minutiae points. Enhancing the security of the fingerprint biometric template typically degrades matching performance. The modified symmetric hash method uses a key value as a multiplication parameter when hashing the fingerprint biometric template. Irreversibility and unlinkability analysis of the modified symmetric hashed fingerprint template shows better security, but multiplying the fingerprint minutiae template by a key value degrades matching accuracy. This paper proposes a dynamic threshold matching algorithm in which the threshold values are derived from the key value. Experimental results on the FVC 2004 database indicate that the combination of the modified symmetric hash function and the dynamic threshold matching algorithm provides better security and excellent matching performance.
Keywords: Fingerprint Template; Modified Symmetric Hashing; Accuracy; Irreversibility; Unlinkability.
Effect of facial expression in face biometry for a multimodal approach
by Samik Chakraborty, Dipayan Chatterjee, Subrata Golui, Madhuchhanda Mitra, Saurabh Pal
Abstract: With the advancement of technology, security has become inseparable from it, yet many factors influence the accuracy of authentication systems. In this scenario, multimodal biometric systems are used, fusing information from different modalities to address the weaknesses of individual ones. In the present work, a robust biometric authentication system is proposed using the face and facial expression as biometric modalities. Facial recognition has been the most commonly used biometric system over the years. Facial expressions of an individual are distinctive, and they are integrated as an additional layer on top of face recognition to enhance security, as the current trend is towards intelligent security systems for real-time surveillance. After preprocessing, eigenvalue-based and local binary pattern (LBP) based features are extracted from the face and facial expression, and the information is fused. Finally, authentication is performed using an image Euclidean distance (IMED) based classifier. The proposed work is evaluated on the JAFFE and Yale databases, achieving 95.71% and 88.89% authentication accuracy, respectively.
Keywords: Face biometry; Facial expression; LBP; Eigenfaces; IMED Classifier.
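The IMED classifier above replaces plain Euclidean distance with the image Euclidean distance, which weights pairwise pixel differences by a Gaussian of the pixels' spatial separation, making the metric tolerant of small spatial deformations. A minimal sketch on tiny flattened images (the image size and sigma are illustrative assumptions):

```python
import math

def imed(x, y, w, h, sigma=1.0):
    """Image Euclidean distance between two w*h images flattened row-major."""
    coords = [(i // w, i % w) for i in range(w * h)]
    d2 = 0.0
    for i, (r1, c1) in enumerate(coords):
        for j, (r2, c2) in enumerate(coords):
            spatial = (r1 - r2) ** 2 + (c1 - c2) ** 2
            # Gaussian weight on the spatial distance between pixels i and j
            g = math.exp(-spatial / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)
            d2 += g * (x[i] - y[i]) * (x[j] - y[j])
    return math.sqrt(d2)

a = [1.0, 2.0, 3.0, 4.0]   # 2x2 image, flattened
b = [1.0, 2.0, 3.0, 5.0]   # differs in one pixel
dist = imed(a, b, w=2, h=2)
```

Unlike plain Euclidean distance, pixels that differ but sit near other differing pixels reinforce each other, so slightly shifted versions of the same face score as closer than under the ordinary metric.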
Recognition method of basketball players' throwing action based on image segmentation
by Cong Zhang, Miao Wang, Limin Zhou
Abstract: To address the low recognition accuracy and long recognition time of traditional methods for recognising basketball players' throwing actions, this paper proposes a new recognition method based on image segmentation. The covariance matrix of the noise data of the throwing action is constructed. The throwing action is expressed by acceleration and angular velocity, and its acceleration vector is obtained. Feature extraction of the throwing action is completed with a discrete Fourier transform algorithm. The image of the throwing action is segmented by thresholding and edges, and the changing features of the action are obtained via a kernel function to complete recognition. The experimental results show that the accuracy of the proposed method is about 98%, with a time cost of about 2.1 s.
Keywords: Image Segmentation; Throw Action Recognition; Kalman Filter; Covariance Matrix; Multidimensional Vector.
Deep Architecture Based Face Spoofing Identification in Real-Time Application
by Mayank Kumar Rusia, Dushyant Kumar Singh
Abstract: Face biometric-based recognition is a widely demanded and universally accepted method, especially for access control purposes. However, face recognition systems are often affected by identity threats such as face spoofing, an attempt to illegally acquire the face identity privilege of another person. Developing an efficient, real-time spoofing detection system that quickly detects illegal access attempts and prevents vulnerability violations is indispensable. This manuscript proposes an automated and efficient technique for face spoofing detection based on a customised convolutional neural network named SpoofNET. The proposed model distinguishes genuine and spoofed faces with relatively simple convolutional blocks, so it can be deployed in any application that requires low computation and low-resolution input samples. This manuscript also introduces a dataset synthesised in our lab, validated against the existing NUAA dataset. The proposed model achieved 99.3% validation accuracy on the synthesised dataset, with good generalisation capability.
Keywords: Face Liveness Detection; Face Spoofing Attack; Convolutional Neural Network; Computer Vision; Biometrics; Image Processing.
Cumulative Foot Pressure Image Recognition via Gabor Filters and Sparse Representation Classifier
by Pedram Ahmad Khan Beigi, Aboozar Ghaffari
Abstract: Analysis of cumulative foot pressure images (CFPIs) can be used like other biometrics for personal recognition. This biometric, along with others such as gait, can be helpful in identification; it also overcomes some drawbacks of common biometrics (such as face and gait) by allowing a person's foot data to be recorded without their awareness. In capturing a cumulative foot pressure image, the spatial and temporal changes of the ground reaction force during one gait cycle are recorded. This biometric poses challenges such as walking at different speeds. In this paper, we present a new approach based on Gabor filters as the feature vector and sparse representation classification (SRC). To reduce the feature dimension, eigenfoot and linear discriminant analysis (LDA) are also used, and a normalisation preprocess is applied to obtain a translation- and rotation-invariant representation. We evaluate the proposed approach on dataset D of the Chinese Academy of Sciences (CASIA) Gait-Footprint dataset, containing the cumulative foot pressure images of 88 persons. The experimental results indicate that the proposed method has higher accuracy than the other tested methods and is also robust to changes in walking speed.
Keywords: Cumulative foot pressure; Gabor filters; sparse representation classifier; eigenfoot; LDA.
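The Gabor feature stage above can be sketched as follows: build a bank of Gabor kernels at several orientations, filter the pressure image with each, and concatenate response statistics into a feature vector. A minimal pure-Python sketch; the kernel size, wavelength, and the mean-absolute-response statistic are illustrative choices, not the paper's exact settings:

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a Gabor kernel with orientation theta (radians)."""
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xr / lam))
        k.append(row)
    return k

def filter_response(img, kernel):
    """Mean absolute 'valid' convolution response, used as one feature."""
    n, m = len(img), len(kernel)
    acc, cnt = 0.0, 0
    for i in range(n - m + 1):
        for j in range(n - m + 1):
            s = sum(kernel[a][b] * img[i + a][j + b]
                    for a in range(m) for b in range(m))
            acc += abs(s)
            cnt += 1
    return acc / cnt

img = [[(i * j) % 7 for j in range(9)] for i in range(9)]   # toy pressure image
thetas = [k * math.pi / 4 for k in range(4)]                # 4 orientations
features = [filter_response(img, gabor_kernel(5, t)) for t in thetas]
```

In the paper's pipeline this high-dimensional Gabor response would then be projected down by eigenfoot/LDA before the SRC stage.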
Human Footprint Biometrics for Personal Identification using Artificial Neural Networks
by Kapil Nagwanshi, Amit Kumar Gupta, Tilottama Goswami, Sunil Pathak, Maleika Heenaye Mamode Khan
Abstract: This study focuses on human footprint identification for high-security applications such as the safety of public places, crime scene investigation, impostor identification, biotech and blue-chip labs, and the identification of infants in hospitals. The paper proposes low-cost hardware to scan biometric human footprints, using image pre-processing and enhancement to obtain the features. The algorithm enhances footprint matching performance by selecting three sets of local invariant feature detectors, the histogram of oriented gradients, maximally stable extremal regions and speeded-up robust features, together with the local binary pattern as a texture descriptor, a corner point detector, and PCA. Furthermore, descriptive statistics are generated from all the above-mentioned footprint features and concatenated to create the final feature vector. The proposed footprint biometric identification correctly identifies or classifies a person by training the system with patterns of the subjects of interest using an artificial neural network model specially designed for this task. The proposed method gives a very encouraging classification accuracy of 99.55%.
Keywords: artificial neural networks; ANNs; biometric; classification; footprint; segmentation.
Improving face recognition using deep autoencoders and feature fusion
by Ali Khider, Rafik Djemili, Ahmed Bouridane, Richard Jiang
Abstract: Uncontrolled environments are the main challenge for real face recognition systems; recent successes of deep learning and feature fusion have led to various performance improvements. This paper proposes a novel scheme called the feature autoencoder (FAE), in which an autoencoder model is trained not directly on the raw facial images but on a fusion of features constructed by Gabor filters, the local binary pattern and local phase quantisation. For each feature, linear discriminant analysis is applied to reduce its high dimensionality, and a contrast-limited adaptive histogram equalisation process is employed for contrast enhancement. The proposed scheme has been evaluated on well-known datasets such as AR, ORL and YALE, and the experimental results have been compared using three classifiers: k-nearest neighbour, multiclass support vector machine and the softmax classifier, demonstrating the effectiveness of the proposed approach and parameters. The results, compared with recent and similar approaches on six databases (ORL, YALE, AR, extended YALE B, CMU PIE and LFWcrop), suggest that the proposed technique outperforms similar techniques, with recognition rates of 100%, 100%, 99.66%, 99.40%, 97.31% and 90.68%, respectively.
Keywords: uncontrolled environments; face recognition; deep learning; sparse; autoencoder; feature extraction; fusion.
MR Image Enhancement and Brain Tumor Detection using Soft Computing and BWT with Auto-Enhance Technique
by Nilesh Bahadure, Nagrajan Raju, Prasenjeet Patil
Abstract: In this research work, a new soft-computing algorithm with an auto-enhance technique is presented for medical image enhancement. Image enhancement is one of the most important classes of image analysis in image processing. This paper presents a complete review of the performance parameters of popular image enhancement techniques and proposes a new methodology for improving visualisation while preserving high-intensity values. Images with colour intensity values cannot be processed directly by most enhancement techniques, hence a suitable colour model is chosen for processing and the proposed algorithm is implemented on it. Accurate analysis of information from the region of interest is always a central issue in image analysis, so this improved algorithm based on soft computing makes it possible to enhance images with best-in-class clarity and visualisation. Simulation and experimental results on different test images show that the proposed algorithm gives better results than other state-of-the-art image enhancement techniques.
Keywords: Berkeley wavelet transformation; BWT; fuzzy clustering means; FCMs; magnetic resonance imaging; MRI.
Investigation of COVID-19 Symptoms Using Deep Learning Based Image Enhancement Scheme for X-Ray Medical Images
by V. Pandimurugan, A.V. Prabu, S. Rajasoundaran, SIDHESWAR ROUTRAY, Nilesh Bahadure, D. Ratna Kishore
Abstract: Image enhancement is an indispensable technique for investigating various biological features. Biological image data can be obtained from computed tomography (CT), magnetic resonance imaging (MRI) and X-ray imaging; X-ray imaging is useful for obtaining information about the lungs and respiratory system. COVID-19 has been a life-threatening contagious disease worldwide for the past two years, and patients' chest images play an important role in the early diagnosis of disease severity. We propose a generative adversarial network (GAN) method that identifies COVID-19 from medical images and improves diagnostic sensitivity. Here we use virtual colouring methods and a platform for training the images using a deep parental training method, which gives optimal classification results with the help of well-defined image enhancement and image extraction approaches. With our method, the accuracy lies between 87.8% and 89.6% on the original and synthetic datasets, respectively.
Keywords: COVID-19; image classification; medical image enhancement; generative adversarial network; deep learning.
Pulmonary Lung Nodule Detection and classification through Image Enhancement and Deep Learning
by Nuthanakanti Bhaskar, Ganashree T. S, Raj Kumar Patra
Abstract: Noise is introduced during the medical image capture process, so proper enhancement and pre-processing are required before these images can be analysed. Most researchers have approached this on CT lung images using ROI selection, morphological operations, histogram equalisation and binary thresholding, achieving around 95% accuracy. To obtain better accuracy, the pre-processing stage of the present work applies resampling, morphological closure and image denoising. In the segmentation stage, a U-Net model is applied, with the LIDC dataset used for labelled nodule regions and the KDSB17 dataset for cancer/non-cancer labels. To minimise false positives among the detected nodules, a CNN is applied, which converges to 84.4% validation accuracy. The AUC of the CNN model was 0.6231, with a validation loss of 0.5646 and an accuracy of 96%.
Keywords: image enhancement; deep learning; lung cancer; U-net; CNN.
Vehicle Recognition Using Convolution Neural Network (CNN)
by Maleika Heenaye Mamode Khan, Abubakar Chonnoo, Oumeir Rengonny
Abstract: A significant challenge in the development of automatic vehicle make and model recognition (VMMR) is distinguishing features between the different shapes based on the appearance of objects. Automatic recognition of vehicles from their geometric shapes is in high demand, and the diversity of vehicle makes and models further complicates the process; few applications can recognise vehicles based on their geometric shape. To bridge this gap, a convolutional neural network (CNN) based on the pre-trained MobileNet was adopted to predict the make and model of a car from either the rear or front view of the vehicle. First, YOLOv3 is used to detect the vehicle; the colour and the licence plate of the vehicle are also extracted. Accuracies of 94.1% for the make of the car, 98.7% for the model, 99.1% for the registration plate number, and 90.3% for the colour were achieved.
Keywords: convolution neural network; CNN; deep learning; segmentation; vehicle make and model recognition; VMMR.
A Hybrid Approach for Face Recognition using LBP and Multi Level Classifier
by Mukesh Gupta, Pankaj Dadheech, ANKIT KUMAR, Ashok Kumar Saini, Neha Janu, Sanwta Ram Dogiwal
Abstract: General face recognition, a task humans perform in daily activities, takes place in a virtually uncontrolled environment. This paper presents a facial recognition system based on the random forest and support vector machine; compared with previous methods, this approach achieves high accuracy. We propose a hybrid method combining SVM and random forest classification. The combined RF+SVM method is expected to grow rapidly in popularity, as it offers high recognition speed over a wide range of faces and emotions. We also compared the algorithm with previous techniques. Each experiment used a freely available internet database; 400 photographs of 40 people were used. The improved results of this paper's hybrid classification methodology come from combining the advantages of the traditional SVM and RF methods. The proposed system has an accuracy of 98.6%.
Keywords: biometrics; database; face recognition; SVM classifier; random forest.
Collaborative Representation With Hole Filling Techniques For Kinect RGBD Face Recognition
by Aniketh Gaonkar, Narayan Vetrekar, Rajendra Gad
Abstract: In this paper, we present three different kernel-based filtering techniques for filling missing information in depth map images obtained from low-resolution sensors such as the Kinect, in order to improve the accuracy of RGB-D face recognition. We propose an RGB-D face recognition scheme that combines the depth map and colour image using wavelet average fusion, followed by a collaborative representation classifier (CRC) for comparing reference and probe images. We present evaluation results on our GU-RGBD face database and the IIIT-D face database to demonstrate the significance of the three hole-filling filters. Our investigation further includes an extensive experimental analysis using eight different feature extraction techniques, applied independently across the three filters, to demonstrate the potential of the proposed approach. The proposed hole filling improves the accuracy of RGB-D face recognition compared with omitting the filtering operations.
Keywords: face biometric; depth map; wavelet average fusion; feature extraction; collaborative representation classifier; CRC.
Local Triangular Patterns: Novel Handcrafted Feature Descriptors for Facial Expression Recognition
by Mukku Nisanth Kartheek, Munaga V. N. K. Prasad, Raju Bhukya
Abstract: Facial expressions are common across cultures and are used universally by humans to express their internal emotions, intentions and thoughts. The main task in facial expression recognition (FER) systems is to develop efficient feature descriptors that can effectively classify facial expressions into various categories. In this work, towards extracting significant features, local triangular patterns (LTrP) named the mini triangular pattern (mTP) and the mega triangular pattern (MTP) are proposed to minimise the feature vector length while maximising recognition accuracy. The mTP method extracts two features in a 3 x 3 circular neighbourhood, whereas the MTP method extracts two features in a 5 x 5 circular neighbourhood. The proposed methods (mTP and MTP) are implemented on three benchmark FER datasets, namely TFEID, MUG and KDEF. Experiments have been performed on both six and seven expressions in a person-independent setup to simulate a real-world scenario. The experimental results demonstrate the efficiency of the proposed methods compared with standard existing methods.
Keywords: binary pattern; facial expressions; feature descriptor; handcrafted features; triangular patterns.
Performance Optimization of Face Recognition based on LBP with SVM and Random Forest classifier
by Ashutosh Kumar, Gajanand Sharma, Rajneesh Pareek, Satyajeet Sharma, Pankaj Dadheech, Mukesh Kumar Gupta
Abstract: Face recognition requirements are well established, as many industrial applications rely on them to achieve one or more goals. The local binary pattern is used for feature extraction, and the support vector machine (SVM) classifier is used for classification. To recognise faces or objects, we first provide the system with a training database. After that, we send an object image to the system, which extracts only the relevant information or portion of the face and processes it using LBP and SVM. When the illumination of the object image varies, the accuracy of facial recognition drops, and when just one training sample is provided, the best matching results are not obtained. In this paper, we describe a model that extracts local binary patterns from distinct sample photographs, trains the SVM classifier on them, and then categorises input probe images using binary and multiclass SVM. With an 80% training and 20% testing split, the accuracy is 97.5%.
Keywords: face recognition; PCA; LBP; support vector machine; SVM; random forest.
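The LBP feature extraction referred to above thresholds each pixel's 3x3 neighbourhood against the centre pixel and packs the eight comparisons into a byte; the histogram of these codes over the image (or over blocks) forms the feature vector fed to the classifier. A minimal sketch (the clockwise neighbour ordering is one common convention):

```python
def lbp_code(img, r, c):
    """8-bit LBP code of pixel (r, c): neighbours >= centre set their bit."""
    centre = img[r][c]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

flat = [[7] * 5 for _ in range(5)]   # constant 5x5 image
hist = lbp_histogram(flat)           # every interior pixel codes to 255
```

Because the code depends only on sign comparisons against the centre pixel, the descriptor is robust to monotonic illumination changes, which is why it pairs well with illumination-sensitive classifiers.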
Face Recognition under Large Age Gap using Age Face Generation
by Rajesh Tripathi, Anand S. Jalal
Abstract: Age-invariant face recognition (AIFR) is a challenging problem in the area of face recognition. To handle face recognition under a large age gap, we propose a robust approach based on deep learning. The presented approach consists of four important steps. Pre-processing is performed for face detection. Age face generation is carried out with a modified age-conditional generative adversarial network (acGAN). The generated age face images are mixed into the training dataset and augmentation is applied to increase the training data size, countering the bias of deep learning models towards dataset size. A modified residual convolutional neural network is applied for training and testing of face images. The performance has been evaluated using two-fold cross-validation on the standard and challenging LAG dataset. The proposed approach achieved 92.5% recognition accuracy, which is better than existing face recognition approaches for a large age gap.
Keywords: age invariant; deep learning; generative adversarial network; large age gap; convolutional neural network.
Sensor-enabled Biometric Signature based authentication method for smartphone users
by Sumaiya Ahmad, Sarthak Mishra, Farhana Javed Zareen, Suraiya Jabin
Abstract: With the ubiquity of smartphones, the need for foolproof authentication mechanisms supporting on-the-go and remote authentication is on the rise. The signature being a highly sophisticated authentication method, this paper explores the design of a robust signature-based authentication method combined with smartphone-sensor-based features. We used our previously created sensor-enabled biometric signature database, iSignDB, which records images of signatures along with the associated smartphone sensor readings captured while signing on the touchscreen. Additionally, we considered six statistical features with the objective of designing a highly accurate authentication model. We trained our authentication model over iSignDB using the bidirectional LSTM method and obtained FAR 2%, FRR 12% and EER 2% against skilled forgery attacks, and FAR 5%, FRR 12% and EER 6% against random forgery attacks, which is comparable to the existing state-of-the-art models for smartphone signature biometrics. The main contributions of this work are the extensive feature vector and the robust signature-based authentication method.
Keywords: authentication; biometrics; smartphone sensors; online biometric signatures; recurrent neural network; RNN; bidirectional LSTM; BiLSTM; iSignDB.
Autogenic, Prognostic, and Collective Signature Affirmation Framework based on Diverse Set of Features
by Sameera Khan, Megha Mishra, Vishnu Kumar Mishra
Abstract: The use of forged signatures for fraudulent practices has become extremely common in recent times, so the automatic signature verification (SV) process plays a significant role. Such verifiers need a large number of specimens of a person's signature to adequately establish intrapersonal variability, making it important to deal with the problem of data unavailability for training. To minimise this limitation, a method to train with a single reference signature is proposed here. The methodology is analysed using a novel Gaussian gated recurrent unit neural network (2GRUNN) classifier. The single signature image is retrieved from the database, and signature duplication is performed using a sinusoidal transformation. Next, pre-processing, feature extraction (FE) and feature selection (FS) are conducted, with FS executed by linear chaotic shell game optimisation (LCSGO). The extracted features are fed to the proposed 2GRUNN for classification. Lastly, the results are compared with existing methodologies.
Keywords: offline signature verification; signature duplication; sinusoidal transformation; shell game optimisation; SGO; synthetic signature; synthetic signature database.
Identification based on feature fusion of multimodal biometrics and deep learning
by Chahreddine Medjahed, Freha Mezzoudj, Abdelatif Rahmoun, Christophe Charrier
Abstract: This paper proposes a novel methodology for individual identification based on convolutional neural networks (CNNs) and machine learning (ML) algorithms. The technique fuses biometric modalities at the feature level. For this purpose, several hybrid multimodal biometric systems are used as a benchmark to measure identification accuracy. In these systems, a CNN is used for each modality to extract modality-specific features from the datasets, and machine learning algorithms are used to identify (classify) individuals. We emphasise performing the fusion of biometric modalities at the feature level, and apply the proposed algorithms to two challenging databases: the FEI face database and the IITD Palmprint V1 dataset. The results show good accuracy for many of the proposed multimodal biometric person identification systems. Experimental runs on several multimodal systems clearly show that the best identification performance is obtained when using ResNet18 as the deep learning tool for feature extraction along with a linear discriminant machine learning algorithm.
Keywords: biometrics; multi-biometric system; feature level fusion; score level fusion; deep learning; machine learning.
Deep Learning Based Lightweight Approach to Thermal Super Resolution
by Shashwat Pandey, Darshika Sharma, Basant Kumar, Himanshu Singh
Abstract: In this paper, we propose a thermal image super-resolution (SR) technique using a lightweight deep learning model that we refer to as the thermal lightweight network (TherLiNet). We refine interpolated images using convolutional layers interleaved with different activation functions, along with residual learning in the network. The effectiveness of the proposed architecture is evaluated against widely used deep-learning-based super-resolution models, namely the super-resolution convolutional neural network (SRCNN), the thermal enhancement network (TEN) and very deep super-resolution (VDSR). Training and testing are done on different thermal datasets at different scale factors. To explore further possibilities, red-green-blue (RGB) guided training is also performed and evaluated on the thermal image datasets. Peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), the most widely accepted metrics, are used for evaluation of the proposed model, which is also compared with other models on the computation time needed to generate results. We also present qualitative comparisons of the model against other super-resolution techniques.
Keywords: thermal images; super resolution; deep learning; RGB guidance; thermal lightweight network; TherLiNet; thermal super resolution.
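PSNR, used above to score the super-resolved outputs, compares mean squared error against the peak intensity: PSNR = 10 log10(MAX^2 / MSE). A minimal sketch for 8-bit images (flattened pixel lists here; real evaluations run over full images):

```python
import math

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

ref = [100, 120, 140, 160]
sr = [110, 130, 150, 170]           # every pixel off by 10 -> MSE = 100
value = psnr(ref, sr)               # 10 * log10(255^2 / 100) ~ 28.13 dB
```

Higher is better; SSIM complements PSNR by scoring local structure rather than raw pixel error.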
Optimized Denoising Sparse Autoencoder for the Detection of Outliers for Face Recognition
by X. Ascar Davix, D. Judson, R. Jeba
Abstract: Face recognition is a challenging research area in biometric applications owing to variations in the input data such as poorly centred faces, different poses, occlusions and low-resolution images. Detecting and removing outliers from the input data is essential for improving the performance of a face recognition algorithm. In this deep learning era, deep networks have performed well in image classification: they extract features automatically from the data and update their weights to reduce the loss function. In this paper, we present an optimised denoising sparse autoencoder (ODSAE) system to detect and remove outliers in the input dataset. The autoencoder performs well on nonlinear transformations; it uses convolutional layers for learning and extracts meaningful information from the input. A softmax classifier is used for the classification of images. Experiments carried out on the Yale and AR face datasets reveal better accuracy in removing outliers.
Keywords: outlier; face recognition; denoising sparse autoencoder; biometric; deep learning.
A Comprehensive Study of Machine Learning Approaches for Keystroke Dynamics Authentication
by Tanya Teotia, Mridula Sharma, Haytham Elmiligi
Abstract: The most popular behavioural biometric currently being considered as a second factor of authentication is keystroke dynamics. However, the adoption of this authentication technology faces several challenges, such as the lack of a standard benchmark and evaluation methodology for comparing the accuracy and performance of different frameworks. In this paper, we provide a comprehensive design space exploration of various machine learning frameworks for authenticating users based on keystroke dynamics. The paper also studies the machine learning design flow, discusses every step in the process, and provides a comparative analysis of the options available to developers, supported by experimental analysis. Our experimental work analyses the efficiency of various machine learning algorithms, compares the impact of filter-based and wrapper-based feature selection techniques, and compares the accuracy of machine learning classifiers using different feature sets.
Keywords: machine learning; keystroke dynamics; classification; feature extraction; feature selection.
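Typical keystroke-dynamics features referred to above are dwell time (how long a key is held down) and flight time (release-to-next-press interval). A minimal sketch extracting both from press/release timestamps and verifying a sample against an enrolled template by mean absolute deviation; the threshold and timing values are illustrative assumptions, not benchmark settings:

```python
def keystroke_features(events):
    """events: list of (press_time, release_time) per key, in typing order.
    Returns dwell times followed by flight times (in the timestamp unit)."""
    dwell = [rel - prs for prs, rel in events]
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return dwell + flight

def verify(template, sample, threshold=0.05):
    """Accept if the mean absolute feature deviation is below the threshold."""
    dev = sum(abs(t - s) for t, s in zip(template, sample)) / len(template)
    return dev < threshold

# toy 3-keystroke password, times in seconds
enrolled = keystroke_features([(0.00, 0.09), (0.15, 0.22), (0.30, 0.41)])
genuine  = keystroke_features([(0.00, 0.10), (0.16, 0.22), (0.31, 0.41)])
imposter = keystroke_features([(0.00, 0.30), (0.50, 0.90), (1.20, 1.80)])
```

Real frameworks replace the fixed-threshold rule with the trained classifiers the paper compares, but the dwell/flight feature vector is the common starting point.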
An empirical analysis of deep ensemble approach on COVID-19 and tuberculosis X-ray images
by Aakanksha Sharaff, Madhur Singhal, Arham Chouradiya, Pavan Gupta
Abstract: COVID-19, a pandemic and a highly contagious disease, can severely damage the respiratory organs. Tuberculosis is also one of the leading respiratory diseases affecting public health: while COVID-19 has pushed the world into chaos, tuberculosis remains a persistent problem in many countries. A chest X-ray can provide a wealth of information regarding the type of disease and the extent of damage to the lungs. Since X-rays are widely accessible and can be used in the diagnosis of COVID-19 and tuberculosis, this study leverages these properties to classify chest X-rays as COVID-19-infected lungs, tuberculosis-infected lungs or normal lungs. An ensemble deep learning model, consisting of pre-trained models for feature extraction together with machine learning classifiers, is used to classify the X-ray images. Various ensemble models were implemented, and the highest accuracy achieved among them was 93%.
Keywords: ensemble learning; COVID-19; tuberculosis; machine learning; MobileNet; Xception; ResNet50.