Forthcoming articles

 


International Journal of Biomedical Engineering and Technology

 

These articles have been peer-reviewed and accepted for publication in IJBET, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

 

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

 

Register for our alerting service, which notifies you by email when new issues of IJBET are published online.

 

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

 

International Journal of Biomedical Engineering and Technology (185 papers in press)

 

Regular Issues

 

  • A hybridized neural network and optimization algorithms for prediction & classification of neurological disorders   Order a copy of this article
    by Pravin R. Kshirsagar, Sudhir G. Akojwar, Nidhi D. Bajaj 
    Abstract: Artificial Neural Network (ANN) techniques are considered among the most efficient in biomedical applications for their effective results in classifying several complex disorders. Applying standard optimisation techniques in combination with an ANN can optimise the network's parameters and make it even more reliable and efficient. In this paper, a hybrid model of an Artificial Neural Network and the Particle Swarm Optimization (PSO) algorithm is designed for the classification and prediction of various neurological disorders. The proposed system works on EEG signals obtained from patients suffering from focal epilepsy, brain death, slow-wave conditions, etc., and is capable of performing classification and prediction of neurological diseases based on the EEG signal input. A Probabilistic Neural Network (PNN) is used, as it is very efficient for classification purposes. Classification results are verified using 10-fold cross-validation. Prediction is performed using a modified PSO. The EEG database was obtained from CIIMS Hospital, Nagpur. The results are highly reliable, with graphs for the predicted signal and the prediction error; percentage accuracy, sensitivity and mean squared error are calculated as well. With the help of this system, classification of EEG signals can be done with accuracy greater than 99% and in a short span of time. This paper presents a novel approach for designing an ANN-PSO classifier and a new prediction technique. Furthermore, this work will be helpful in assisting doctors in hospitals: EEG signals are difficult to predict and take time to analyse, so the authors propose an automated approach to prediction, which may aid the physician.
    Keywords: Probabilistic Neural Network; Particle Swarm Optimisation; Genetic Algorithm; Modified PSO; Prediction; Classification.
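The PNN at the core of this system can be illustrated with a minimal sketch: each class is scored by summing Gaussian kernels between the test point and that class's training samples, and the highest-scoring class wins. The data below are hypothetical toy points, not the authors' EEG features or implementation.

```python
import math

def pnn_classify(train, labels, x, sigma=1.0):
    """Probabilistic neural network: score each class by the summed
    Gaussian kernel between x and that class's training samples."""
    scores = {}
    for xi, yi in zip(train, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        scores[yi] = scores.get(yi, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

# Toy two-class example (illustrative values only)
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["normal", "normal", "epileptic", "epileptic"]
pred = pnn_classify(train, labels, (0.05, 0.1))
```

The smoothing parameter `sigma` controls how far each training sample's influence extends; in practice it is tuned, e.g. via the cross-validation the abstract mentions.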

  • Multimodality Image Fusion using Centre-Based Genetic Algorithm and Fuzzy Logic   Order a copy of this article
    by S.P. Velmurugan, P. Sivakumar, M. Pallikonda Rajasekaran 
    Abstract: Image fusion integrates various modalities and is used to detect and treat disease successfully. Nowadays, medical image fusion is a demanding task in healthcare applications such as tumour detection, analysis, research and treatment. In this paper, we propose multimodality medical image fusion using a centre-based genetic algorithm (CBGA) and fuzzy logic, examined by the use of quantitative measures. First, we estimate the segmentation map from the source images (MRI and CT). After that, the source images are decomposed based on the lifting wavelet transform. Then, a fuzzy-based approach is used to fuse the high-frequency wavelet coefficients of the MRI and CT images: the outputs of three fusion rules (weighted averaging, selection using a pixel-based decision map (PDM), and selection using a region-based decision map (RDM)) are incorporated by fuzzy logic based on a dissimilarity measure of the source images. A CBGA is then used to fuse the low-frequency wavelet coefficients of the MRI and CT images. Finally, we combine the low- and high-frequency wavelet coefficients of the source images to obtain the fused image.
    Keywords: high frequency; low frequency; wavelet coefficient; fuzzy logic system; RDM; PDM; weighted averaging; lifting wavelet transform.

  • Inhomogeneity correction and hybrid based segmentation in cardiac MRI   Order a copy of this article
    by A. Venkata Nageswararao, S. Srinivasan, Ebenezer Priya 
    Abstract: Image segmentation is an important step in medical image analysis, and segmentation of the ventricles in Cardiac Magnetic Resonance (CMR) images is challenging due to an in-built artifact called intensity inhomogeneity. Short-axis cine Magnetic Resonance Images (MRI) recorded under a steady-state free precession protocol were corrected for intensity inhomogeneity using the Bias Corrected Fuzzy C-means Method (BCFCM), Level Set (LS) and Multiplicative Intrinsic Component Optimization (MICO) methods. The statistical measures show that bias correction by BCFCM performs better than MICO and LS. In addition, the original and bias-corrected images are validated by Multifractal Analysis (MFA). The results show that in bias-corrected images the low-frequency components are removed, thereby enhancing the sharpness of the ventricular boundaries. Further, ventricular segmentation is performed using the proposed automatic hybrid Sobel edge detector with an optimised level set method. The validation parameters of the segmented results show that ventricular detection in bias-corrected images matches the ground truth better.
    Keywords: Cardiac Magnetic Resonance; Intensity-inhomogeneity; Bias Corrected Fuzzy C-means; Level Set; Multiplicative Intrinsic Component Optimization and multifractal analysis.

  • Hand punch movement kinematics of boxers with different qualification levels   Order a copy of this article
    by Yaodong Gu, Sergey Popik, Sergey Dobrovolsky 
    Abstract: This study focuses on qualitative and quantitative analysis of the temporal and spatiotemporal characteristics of the right-hand cross of 18 boxers at various proficiency levels. Boxers carried out punches from different positions: 1) in full coordination; 2) with lower limbs fixed; 3) with lower limbs and trunk fixed. Analysis of the punches revealed kinematic features of the cross punch and identified causes of potential deviations in punching technique.
    Keywords: kinematical features; hand punch; temporal parameters; spatiotemporal characteristics; punch velocity; body parts.

  • Classification of Skin Disease Using an Ensemble-Based Classifier   Order a copy of this article
    by Thenmozhi K., Rajesh Babu M 
    Abstract: Cancerous skin diseases such as melanoma and nevi typically result from environmental factors (such as exposure to sunlight), among other causes. The tools needed for early detection of these diseases are still not a reality in most communities. In this paper, a framework is proposed for the detection and classification of various skin diseases. The two techniques commonly used for dimensionality reduction are feature extraction and feature selection. In feature extraction, features are extracted from the original data using principal component analysis and linear discriminant analysis; the extracted features are then reduced by a feature selection technique called the Fisher ratio method, in which a subset of sufficient features is selected for classification. This technique improves performance and enhances the speed of the classifier. An ensemble-based classifier comprising Bayesian, self-organising map and support vector machine classifiers is used to classify the various skin diseases from the dataset. The proposed technique achieves better accuracy and less execution time than existing approaches.
    Keywords: skin cancer; melanoma; ensemble classifier; fisher ratio.
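The Fisher ratio criterion mentioned above ranks each feature by how well its class means separate relative to the within-class variances. A minimal per-feature sketch follows, with illustrative values rather than the paper's skin-image features.

```python
def fisher_ratio(class_a, class_b):
    """Fisher discriminant ratio of a single feature:
    (difference of class means)^2 / (sum of class variances)."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

# Feature 1 separates the two classes well; feature 2 barely at all.
f1_a, f1_b = [1.0, 1.2, 0.9], [3.0, 3.1, 2.9]
f2_a, f2_b = [5.0, 5.5, 4.5], [5.1, 5.4, 4.6]
ratios = [fisher_ratio(f1_a, f1_b), fisher_ratio(f2_a, f2_b)]
```

Features are then kept in decreasing order of ratio, which is how such a criterion selects a "subset of sufficient features" before classification.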

  • Microcalcifications segmentation from mammograms for breast cancer detection   Order a copy of this article
    by Ismahan Hadjidj, Amel Feroui, Aicha Belgherbi, Abdelhafid Bessaid 
    Abstract: The presence of microcalcifications (MCs) in X-ray mammograms provides an important early sign of breast cancer in women. However, their detection remains very complex owing to the diversity in their shape, size and distribution, and to the low contrast between cancerous areas and the surrounding bright structures in mammograms. This paper presents an effective approach based on mathematical morphology for the detection of MCs in digitised mammograms. The developed approach performs an initial step to extract the breast area and remove unwanted artifacts from the mammogram. Subsequently, an enhancement process is applied to improve the appearance and increase the contrast of the images and to eliminate noise. Once the breast region has been found, a segmentation phase using the morphological watershed is performed to detect the MCs. The performance of the approach is evaluated using a total of 22 mammograms extracted from the MIAS mammographic database, and the obtained results were compared with manual detection marked by an expert mammographic radiologist. These results show that the system is very effective, especially in terms of sensitivity.
    Keywords: Breast cancer; microcalcifications; mammograms; breast region; mathematical morphology; watershed transform.

  • Analysis of brainstem in Alzheimer MR images using lattice Boltzmann level set   Order a copy of this article
    by Ramesh Munirathinam, Sujatha Chinnaswamy Manoharan 
    Abstract: Alzheimer's disease (AD) is a progressive neurological disorder resulting in cognitive impairment in elderly people. Analysis of morphological variations before the onset of cognitive impairment is important for early diagnosis of AD. The brainstem is considered a significant pathological core for the study of the psychological and behavioural symptoms occurring before cognitive impairment. In this work, the brainstem is extracted from T1-weighted brain MR images using a region-based level set method. The segmentation results are validated using regional statistics and overlap measures. Geometric and texture features are extracted for analysing the morphological variations. The results show that the region-based segmentation is capable of extracting the brainstem more accurately by preserving its weak and blurred edges. The extracted geometric and textural features show a significant difference between normal and AD brainstem images; the feature values obtained from AD images are lower than those from normal images, possibly owing to the neuronal loss in the brainstem that characterises the atrophy. Hence, this analysis proves to be clinically significant.
    Keywords: Brainstem; Region based level set; Lattice Boltzmann method.

  • Three-Dimensional MRI Brain Tumor Classification using Hybrid Ant Colony Optimization and Gray Wolf Optimizer with Proximal Support Vector Machine   Order a copy of this article
    by Rajesh Sharma R, Akey Sungheetha, Marikkannu P 
    Abstract: A hybrid approach employing Ant Colony Optimization (ACO) and the Gray Wolf Optimizer (GWO) is proposed in this paper, along with a proximal support vector machine classifier, to carry out brain tumour classification for given 3D MRI brain images. The proposed hybrid ACO-GWO is employed to select the optimal features required for the classification process. The Proximal Support Vector Machine (PSVM) is evaluated against the Support Vector Machine (SVM), Back Propagation Network (BPN) and K-Nearest Neighbour (k-NN) to assess the effectiveness of the classifier approach.
    Keywords: ACO; GWO; PSVM; BPN; K-NN.

  • Decision Support System for Type II diabetes and its risk factor prediction using Bee based harmony search and Decision tree Algorithm   Order a copy of this article
    by Selvakumar S, Sheik Abdullah A, R. Suganya 
    Abstract: Type II diabetes is one of the main causes of disability in adults, as well as a main cause of death in many countries. The objective of this paper was to develop a prediction model for the investigation of type II diabetes and its risk factors, targeting the reduction of diabetic events. The risk factors related to type II diabetes have been considered in developing the predictive model. A total of 732 cases were collected from a government hospital in the district of Theni, Tamil Nadu, India. Predictive analysis was carried out using a bee-based harmony search algorithm and the C4.5 decision tree algorithm with its splitting criterion. From the experimental results and analysis, the most significant risk factors were found to be postprandial plasma glucose, A1c (glycosylated haemoglobin) and mean blood glucose level, with a prediction accuracy of about 92.87%. The age group from 34 to 73 was found to be the most prevalent, and the mathematical model shows that age, postprandial plasma glucose and mean blood glucose level have a strong correlation with the corresponding data values. Thereby, predictive data analysis can help in identifying risk factors for type II diabetes, in accordance with therapeutic procedures and treatment analysis.
    Keywords: Decision support systems; Decision trees; Diabetes; Optimization; Risk analysis.
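The C4.5 splitting criterion the abstract refers to is the gain ratio: information gain normalised by the split information of the candidate attribute. A minimal sketch with a hypothetical binary risk factor and outcome (not the study's data):

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain_ratio(feature, labels):
    """C4.5 splitting criterion: information gain divided by split info."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond
    split_info = entropy(feature)  # entropy of the feature's own values
    return gain / split_info if split_info else 0.0

# Hypothetical binary risk factor vs. diabetic outcome
glucose_high = [1, 1, 1, 0, 0, 0]
diabetic     = [1, 1, 0, 0, 0, 0]
gr = gain_ratio(glucose_high, diabetic)
```

C4.5 splits on the attribute with the highest gain ratio; the normalisation penalises attributes with many distinct values, which plain information gain favours.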

  • Investigation on ROI Size and Location to Classify Mammograms   Order a copy of this article
    by Amit Kamra, Akshay Girdhar, Poonam Sood 
    Abstract: Breast cancer is a major cause of death among women, and early detection can lead to longer survival. A Computer-Aided Diagnosis (CAD) system helps radiologists in the accurate detection of breast cancer. In medical images, a Region of Interest (ROI) is a portion of the image which carries the important information related to the diagnosis, and it forms the basis for applying shape and texture techniques for cancer detection. Several ROI sizes and locations have been proposed for computer-aided diagnosis systems. In the present work, various ROI sizes have been used to determine the appropriate ROI size to classify fatty and dense mammograms. Two types of mammograms, fatty and dense, are used from the MIAS database. Various texture features have been determined from each ROI size for the analysis of texture characteristics. The Fisher discriminant ratio is used to select the most relevant features for classification. Finally, a linear SVM is used for classification. The highest classification accuracy of 96.1% was achieved for ROI size 200.
    Keywords: ROI; Breast Cancer; Digital Mammograms; Breast Tissue; feature selection; classification.

  • Using worn out insole to express human foot   Order a copy of this article
    by Ahmad Yusairi Bani Hashim 
    Abstract: Humans are the only primates that can perform ideal bipedal walking; owing to this configuration, humans cannot climb trees as efficiently as a chimpanzee. As such, the human foot is unique in itself, which is evident in its print. Some studies have looked at footprints and their relationship to ideal bipedal walking, but none, to our knowledge, has determined the foot's work volume from a footprint. Five volunteers, who supplied their used shoes, participated in the study. The insole images were processed to find the effective regions using binarisation, inversion and edge-finding techniques. The spots suspected to result from repeated applications of the ground reaction forces were identified, and twelve nodes were marked on the designated spots. The drawn footprints looked geometrically similar; however, each footprint had a unique node-position profile. The aspect ratio of the foot length, width and height seemed to congregate to 3:1:1, with a 30-degree footprint angle. The actual plots revealed that the ratio and angle ranged from 2.20 to 3.00 and from 30 degrees to 45 degrees, respectively. Therefore, the human foot identity is based on the number of nodes uniquely localised on the footprint, which defines the foot workspace. It may be expressed by simple measurements of the foot length, the foot width and the foot height, following the standardised node locations.
    Keywords: Ground reaction force; insole; footprint identity; image processing.

  • Study of Structural Complexity of Optimal Order Digital Filters for De-noising ECG Signal   Order a copy of this article
    by Seema Sande, M.K. Soni, Dipali Bansal 
    Abstract: Selection and implementation of an optimal-order digital filter for denoising the ECG signal on an FPGA, based on Signal-to-Noise Ratio (SNR), error and accuracy using the wavelet toolbox, is a tedious task. To overcome this problem, an attempt has been made in MATLAB to obtain a noise-free ECG signal and to select the optimal-order digital filter based on SNR and Mean Square Error (MSE): the filter with maximum SNR and minimum MSE is selected as the optimal-order filter. An ECG sample from the MIT-BIH Arrhythmia database is considered for investigation; the signal is artificially corrupted by adding noise and is filtered through different low-pass IIR and FIR digital filters, which are designed and realised in MATLAB. Power Spectral Density (PSD) and the Fast Fourier Transform (FFT) are used to validate the performance of the optimal-order filters. The hardware complexity of the optimal-order digital filter structures is checked in terms of multipliers, adders and delays in MATLAB, and their performance is compared based on the number of basic elements, PSD and visual inspection. A final summary report on the complexity of the structure of the optimal-order filters is presented. It is found that the Chebyshev-I and Elliptic IIR filters require a low filter order for a given specification to denoise the ECG signal, and hence require fewer basic elements. Similarly, the Kaiser, Hamming and Hanning window FIR filters require fewer basic elements, although the Hanning window gives undesirable results. The Elliptic IIR digital filter and the Kaiser window FIR filter give better performance than the others, as judged from the PSD graphs and visual inspection.
    Keywords: ECG signal; MIT-BIH; SNR; MSE; PSD; FFT.
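The SNR/MSE selection rule described above can be sketched as follows: compute both metrics for each candidate filter's output against the clean reference and keep the filter with the highest SNR (and lowest MSE). The signals below are toys, not the MIT-BIH data or the MATLAB filters used in the paper.

```python
import math

def mse(clean, denoised):
    """Mean square error between the reference and a filter's output."""
    return sum((a - b) ** 2 for a, b in zip(clean, denoised)) / len(clean)

def snr_db(clean, denoised):
    """Output SNR in dB: signal power over residual-error power."""
    p_sig = sum(a * a for a in clean)
    p_err = sum((a - b) ** 2 for a, b in zip(clean, denoised))
    return 10 * math.log10(p_sig / p_err)

# Toy comparison: candidate filter B tracks the clean signal more closely.
clean = [math.sin(0.1 * i) for i in range(200)]
out_a = [s + 0.05 for s in clean]   # more biased output
out_b = [s + 0.01 for s in clean]   # closer output
best = max([("A", out_a), ("B", out_b)], key=lambda f: snr_db(clean, f[1]))[0]
```

In the paper's setting the candidates would be IIR/FIR filters of increasing order, and the same comparison picks the optimal order.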

  • Detection of masses in mammographic breast cancer images using Modified Histogram based Adaptive Thresholding (MHAT) method   Order a copy of this article
    by Bhagwaticharan Patel, G.R. Sinha, Dilip Soni 
    Abstract: Breast cancer is the leading type of cancer diagnosed in women nowadays, and for breast screening, mammography is preferred to detect and diagnose the cancer by detecting masses with the help of a Computer-Aided Diagnosis (CAD) system, which assists radiologists in accurate diagnosis and analysis for improving breast cancer prediction. Early detection is the most effective way to reduce the mortality rate, and hence many countries have started mass screening programmes that have resulted in a large increase in the number of mammograms requiring interpretation. An approach is proposed to effectively detect masses in mammographic breast cancer images using a Modified Histogram-based Adaptive Thresholding (MHAT) method. The proposed algorithm was tested on 100 mammographic images taken from a digital database, and the experimental results show that the detection method has a sensitivity of 98.3% at 0.78 false positives per image, with an accuracy of 99%. We evaluated the performance of the MHAT algorithm by comparing its output with the ground-truth boundary drawn by an expert radiologist. The improvement is statistically significant, and the results are clinically relevant according to the radiologists who evaluated them.
    Keywords: Adaptive thresholding; breast cancer; computer aided detection; mammography; mass; segmentation; screening.
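The paper does not spell out the MHAT algorithm here; as a hedged illustration of the same family of histogram-based global thresholding, Otsu's method picks the grey level that maximises between-class variance of the histogram. The pixel values below are a toy bimodal "image", not mammogram data.

```python
def otsu_threshold(pixels, levels=256):
    """Histogram-based global threshold maximising between-class variance
    (a standard stand-in, not the paper's MHAT method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two-mode toy "image": dark background around 30, bright mass around 200
pixels = [30] * 50 + [35] * 50 + [200] * 20 + [205] * 20
t = otsu_threshold(pixels)
```

An *adaptive* variant applies the same idea per local window rather than globally, which is closer in spirit to what the abstract describes.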

  • Adaptive Thresholding of Wavelet Coefficients using Generalized False Discovery Rate to Compress ECG Signal   Order a copy of this article
    by Supriya Rajankar, Sanjay Talbar 
    Abstract: In signal compression, the selection of an appropriate threshold is a challenging task. This paper proposes an algorithm to determine a signal-adaptive threshold for compressing the ECG signal, based on estimating wavelet coefficients by the Generalized False Discovery Rate (FDR). Hypothesis testing and thresholding are closely related, so multiple-hypothesis testing is used to determine an adaptive threshold called the False Discovery Threshold (FDT). The p-value of each wavelet detail coefficient is computed and arranged in ascending order. Dynamic critical significance levels are calculated using k-FWER and k-FDR and compared with the corresponding p-values to satisfy the desired FDR, which provides the FDT. Run-length encoding followed by Huffman coding provides compression. The paper also proposes a new performance evaluation parameter, the mean Structural Similarity Index (mSSIM), to check the similarity between the original and reconstructed ECG signals. Generalized FDR-based thresholding provides a much lower PRD value than standard codecs in the literature, and structural similarity very close to one, which signifies better reconstruction of the signal.
    Keywords: Generalized False Discovery Rate; step up procedure; BH procedure; k-FDR; k-FWER.
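The step-up procedure the abstract describes (sort p-values, compare each to a growing critical level, threshold at the largest passing rank) matches the classic Benjamini-Hochberg scheme; the generalised k-FDR/k-FWER variants differ only in how the critical levels are defined. A minimal BH sketch with hypothetical p-values:

```python
def fdr_threshold(p_values, q=0.05):
    """Benjamini-Hochberg step-up: find the largest rank k with
    p_(k) <= (k/m)*q; the k coefficients with smallest p-values survive."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    return set(order[:k_max])   # indices of coefficients to keep

# p-values for 8 hypothetical wavelet detail coefficients
p = [0.001, 0.20, 0.004, 0.60, 0.03, 0.90, 0.008, 0.45]
kept = fdr_threshold(p, q=0.05)
```

In the compression setting, coefficients outside `kept` are zeroed, and the run-length/Huffman stage then exploits the resulting long runs of zeros.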

  • A New Heart Sounds Segmentation Approach Based on the Correlation between ECG and PCG signals   Order a copy of this article
    by FANDI Radia, HADJ SLIMANE Zine-Eddine 
    Abstract: During the cardiac cycle, both electrical and mechanical events are present. The electrocardiogram (ECG) allows exploration of the electrical activity of the heart, while the phonocardiogram (PCG) constitutes a complementary tool to record its mechanical activity. The aim of our work is to develop a new approach for heart sounds segmentation based on the correlation between ECG and PCG signals for measurement of the systolic time interval. Two different groups of simultaneous recordings of ECG and PCG signals were tested in this study. The performance of the automatic heart sounds localisation process is compared with that of manual measurements made by two experts. The results obtained are interesting, especially for pathological heart sounds.
    Keywords: ECG signal; PCG signal; Heart Sounds (HS); Pearson’s correlation coefficient; Automatic localization (AL); systolic duration (SD); manual annotation (MA).
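The correlation measure named in the keywords is Pearson's coefficient; a minimal sketch follows, with synthetic signals standing in for simultaneous ECG/PCG recordings.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Toy signals: y is a scaled, shifted copy of x, so r is essentially 1
x = [math.sin(0.2 * i) for i in range(100)]
y = [2.0 * v + 0.5 for v in x]
r = pearson_r(x, y)
```

For segmentation, such a coefficient would typically be evaluated over sliding windows between the two channels, peaks indicating where the mechanical events align with the electrical ones.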

  • Secure and Intelligent Architecture for Cloud-Based Healthcare Applications in Wireless Body Sensor Networks   Order a copy of this article
    by Antony Rani, Baburaj  
    Abstract: Wireless Sensor Networks (WSNs) are becoming a significant enabling technology for a wide variety of applications. Advances in WSNs have facilitated the realisation of pervasive health monitoring for both homecare and hospital environments. Sensor nodes are capable of sensing, processing and communicating one or more vital signs, and they can be used in Wireless Body Sensor Networks (WBSNs) for health monitoring. Many studies have been performed and are ongoing to develop flexible, reliable, secure, real-time and power-efficient WBSNs suitable for healthcare applications. This paper concentrates on the development of an intelligent, secure architecture for cloud-based healthcare applications in wireless sensor networks. It describes the applications, issues and challenges of body sensor networks in healthcare; an energy-efficient body sensor network architecture using aggregation; secure data collection, storage and sharing using an authentication algorithm; and a novel prediction model which acts as an expert system for disease management.
    Keywords: Cloud computing; e-health care system; Integrated secure authentication; Received signal strength; Wireless body area network.

  • Automatic detection of stereotyped movements in autistic children using the kinect sensor   Order a copy of this article
    by Maha Jazouli, Aicha Majda, Djamal Merad, Rachid Aalouane, Arsalane Zarghili 
    Abstract: Autism spectrum disorder (ASD) is a developmental disorder that affects communication, social skills and behaviour. Children and adults with ASD often have repetitive motor movements or unusual behaviours. The objective of this work is to automatically detect stereotypical motor movements in real time using the Kinect sensor. The approach is based on the $P point-cloud recogniser, which identifies multi-stroke gestures as point clouds. This paper presents a new methodology to automatically detect five stereotypical motor movements: body rocking, hand flapping, finger flapping, hand on the face, and hands behind the back. Tested with many children with ASD, the proposed system gives satisfactory results, which can help to implement a smart video surveillance system and assist clinicians in diagnosing ASD.
    Keywords: ASD; Autism; stereotyped movement; repetitive motor movements ; repetitive behaviours; gesture detection ; Kinect Sensor; point cloud; nearest neighbour classifier; gesture recognition.

  • Hybrid Approach towards Feature Selection for Breast Tumor classification from Screening Mammograms   Order a copy of this article
    by Sudha M.N, Selvarajan S 
    Abstract: A hybrid approach to extract the optimal features from breast tumours using Hybrid Harmony Search (HHS) has been developed and is presented in this paper. Texture, intensity histogram, radial distance and shape features are extracted, and the optimal feature set is obtained using HHS, a hybrid feature selection scheme combining cuckoo search and harmony search. The minimum distance, k-NN and SVM classifiers are used for classification and produce 98.19%, 98.34% and 97.18% average classification accuracy, respectively, with a minimum number of features. The performance of the new hybrid algorithm is compared with the Genetic Algorithm, Particle Swarm Optimization, Cuckoo Search and Harmony Search. The results show that the hybrid of cuckoo and harmony search is more accurate than the other algorithms. The proposed system can provide valuable information to the physician in medical pathology.
    Keywords: Breast cancer classification; Segmentation; Feature Extraction; Hybrid Harmony Search.
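The harmony search half of the hybrid can be sketched as follows: a memory of candidate solutions is maintained, new harmonies are improvised from it (with occasional pitch adjustment or random reinitialisation), and a new harmony replaces the worst member when it scores better. This is a generic minimiser on a toy objective, not the paper's feature-selection encoding; all parameter values are illustrative.

```python
import random

def harmony_search(obj, dim, iters=500, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   lo=-1.0, hi=1.0, seed=42):
    """Minimal harmony search: improvise from memory with rate hmcr,
    pitch-adjust with rate par, keep the harmony if it beats the worst."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # draw from harmony memory
                v = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # random reinitialisation
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(memory, key=obj)
        if obj(new) < obj(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=obj)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, dim=3)
```

For feature selection, the objective would instead score a binary feature mask by classifier accuracy, and cuckoo search would supply the hybrid's complementary exploration step.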

  • Overlapping Group Sparse Denoising: A good choice for noise removal from EMG signal in intermittent masseter muscle activity   Order a copy of this article
    by Behrouz Alizadeh Savareh, Gholam Hossein Meftahi, Azadeh Bashiri, Boshra Hatef 
    Abstract: Purpose: Biological signals are often impregnated with a variety of noises, and noise removal for precise processing is very important. The aim of this study is to test the performance of a method called overlapping group sparse denoising in removing noise from the EMG of sequential masseter activity. Method: The overlapping group sparse denoising method was applied to EMG signals obtained from the masseter muscle of three groups of people (control, migraine without aura and tension headache). Result: Four metrics (MSE, MAE, SNR and PSNR) were calculated to analyse the method's denoising performance. Conclusion: The results indicated that the method was successful. It can be helpful for denoising signals with intervals of clustered activation and deactivation, such as sound signals.
    Keywords: EMG; Masseter; Overlapping; Group Sparse Denoising.

  • A Clique-Based Scheduling In Real-Time Query Processing Optimization for Cloud-Based Wireless Body Area Networks   Order a copy of this article
    by Smys S., Dinesh Kumar A 
    Abstract: A wireless body area network (WBAN) is a wireless network of wearable computing devices. BAN devices may be implanted inside the body, surface-mounted on the body in a fixed position, or carried by people in different positions, such as in clothes pockets, by hand or in bags, for real-time health monitoring of patients. However, WBANs face serious problems such as degraded throughput, high query latency and energy consumption. In this paper, we propose a clique-based WBAN scheduling (CBWS) algorithm and a cloud-based WBAN algorithm. In the first method, sensors that are not active together at the same time are clustered into different groups to avoid interference, and a colouring-based technique is used to schedule all groups into time slots; the sensor data are then provided to the database for user query processing. In the second method, a cloud-based technique is utilised to secure the stored information, and real-time query processing is optimised to minimise energy consumption and query latency. A Multi-Queue Scheduling (MQS) algorithm is also used, which categorises jobs as small, medium or long according to their burst time in a cloud environment. The experimental results show that the proposed approach achieves minimal energy consumption and query latency and better throughput, and also provides a secure and powerful storage infrastructure in a real-time environment.
    Keywords: WBAN sensors; Clique based technique; Cloud based technique; User query processing; Energy consumption.
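The colouring-based scheduling described above amounts to graph colouring on the interference graph: sensors that interfere must receive different time slots (colours). A greedy first-fit sketch follows, with hypothetical sensor names and an invented interference pattern.

```python
def greedy_coloring(adj):
    """Assign each sensor a time slot (colour) different from every
    interfering neighbour's slot, using a greedy first-fit scheme."""
    slots = {}
    for node in sorted(adj):                       # deterministic order
        used = {slots[n] for n in adj[node] if n in slots}
        slot = 0
        while slot in used:                        # first free slot
            slot += 1
        slots[node] = slot
    return slots

# Hypothetical interference graph: ecg/emg/spo2 clash pairwise; temp is free
adj = {
    "ecg":  ["emg", "spo2"],
    "emg":  ["ecg", "spo2"],
    "spo2": ["ecg", "emg"],
    "temp": [],
}
slots = greedy_coloring(adj)
```

Sensors sharing a slot can transmit simultaneously without interference, so the number of colours used bounds the length of one scheduling round.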

  • An Approach for Detection of Edges in Carotid Ultrasound Images and Analysis of Intima-Media Complex Using Morphological Features   Order a copy of this article
    by Sumathi Krishnaswamy, Mahesh Veezhinathan 
    Abstract: Atherosclerosis is the first clinical manifestation of cardiovascular disease. It is a complex vascular disease that causes narrowing, stiffening and hardening of artery walls, a condition that leads to serious pathologies such as heart attack and stroke. Progression of atherosclerosis is marked by intima-media thickness, which is proven to be an early indicator of the disease. In this work, an attempt is made to preserve the edges of the Intima-Media Complex (IMC) using a nonlinear Total Variation (TV) diffusion filter. The edge maps generated with optimal threshold values are used as the stopping boundary in segmenting a database of 100 longitudinal images of common carotid arteries using the variational level set method. To analyse the performance of the segmentation method, the results are validated against ground-truth values. Geometric features are extracted from the segmented region and statistical analysis is performed. The segmented IMC is found to have high correlation with the ground-truth values. Bland-Altman plots show that the values within the 95% confidence interval have an overall good fit, with minimal bias between the segmentation method and the ground truth (manually segmented by an expert). Hence, the edge map extracted using the TV filter enhances the edges and improves the performance of automated segmentation. Further, the extracted features can discriminate the structural changes in abnormal images from normal ones. Analysis of these features plays a clinically significant role in finding the pathological conditions of carotid arteries.
    Keywords: Cardiovascular disease; Atherosclerosis; Intima-Media Layer; Carotid artery; plaque; Stroke; Total Variation Diffusion Filter and Level set method.
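
The edge-preserving behavior of TV-style regularization can be illustrated with a minimal 1-D sketch: gradient descent on a smoothed total-variation objective flattens noise while keeping a step edge (such as the lumen-intima boundary). The parameters and synthetic step signal below are illustrative choices, not the paper's data.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.3, step=0.1, iters=300, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum(sqrt(du^2 + eps))."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        g = du / np.sqrt(du ** 2 + eps)   # derivative of smoothed |du|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= g                 # each difference term touches u[i]...
        tv_grad[1:] += g                  # ...and u[i+1]
        u -= step * ((u - f) + lam * tv_grad)
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0], 100)        # a step edge, like an IMC boundary
noisy = clean + 0.15 * rng.normal(size=clean.size)
smooth = tv_denoise_1d(noisy)
```

After smoothing, the total variation of the signal drops while the step edge survives, which is exactly why TV diffusion suits edge-map extraction.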

  • Evaluation of heart rate dynamics during meditation using Poincare phase plane symbolic measures   Order a copy of this article
    by Chandrakar Kamath 
    Abstract: Meditation has long been known to affect human physiology, mediated through the autonomic nervous system. The main objective of this study was to assess dynamic changes in cardiac inter-beat intervals and the autonomic nervous system during meditation, hypothesizing that Poincaré phase plane symbolic measures can capture these changes.
    Keywords: Forbidden words; Heart rate variability; Meditation; Poincaré phase plane; RR interval time series; Symbolic complexity measures; Symbolic dynamic entropy.
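
A minimal sketch of the symbolic-dynamics idea behind measures such as forbidden-word counts: quantize the RR-interval series into a small alphabet, form words of length three, and count the words that never occur. The quantization scheme here is one common choice, not necessarily the author's.

```python
import random

def symbolise(rr, levels=4):
    """Quantize RR intervals into `levels` equal-width symbols."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / levels or 1.0
    return [min(int((x - lo) / width), levels - 1) for x in rr]

def forbidden_word_count(symbols, length=3, alphabet=4):
    """Count the length-3 symbol patterns that never occur in the series."""
    seen = {tuple(symbols[i:i + length])
            for i in range(len(symbols) - length + 1)}
    return alphabet ** length - len(seen)

# a highly regular rhythm visits few words, so most words are "forbidden"
regular = symbolise([0.8, 0.9, 1.0, 1.1] * 25)
random.seed(0)
irregular = symbolise([random.random() for _ in range(400)])
```

A regular series leaves many of the 64 possible words unused, whereas an irregular series visits far more of them; the forbidden-word count thus indexes complexity.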

  • Performance Analysis of Iris-based Identification System based on Exudates   Order a copy of this article
    by D.M.D. PREETHI, V.E. JAYANTHI 
    Abstract: In the current scenario, maintaining a secure system is a challenging responsibility for the system administrator. Iris recognition is proven to provide unique biometric data that is very difficult to duplicate. Exudate is one of the disorders that occur in the retinal part of the eye; it is an early and prevalent symptom of diseases leading to blindness, such as diabetic retinopathy and wet macular degeneration. The proposed work examines the effect of exudates present on the iris, whose structural and textural changes can be used either to improve the level of security or to match a person who would otherwise go unmatched. The Hough Transform (HT) is employed to identify the unique features of the iris, and the STARE database is used for the performance analysis of the iris recognition system. The proposed system shows that the structural changes and quality degradation observed in exudate images may cause the iris recognition system to fail, so re-enrollment is essential.
    Keywords: diabetic retinopathy; structural changes; exudates; medical imaging; security;.

  • MIMO Human Handwriting Model in Stochastic Environment   Order a copy of this article
    by Ines Chihi, A. Abdelkrim 
    Abstract: The present paper deals with a Multi-Input Multi-Output (MIMO) human handwriting model that characterizes the pen-tip displacement on the plane from the activities of two forearm muscles, recorded as electromyography (EMG) signals. The proposed model takes into account perturbations and uncertainties which can affect the handwriting process (instability of the writing support, the psychological state of the writer, brusque movements, etc.). An experimental approach is presented to record the displacements of a pen tip moving on the plane, together with two EMG signals, during the handwriting motion. The writing velocity and an ARMAX structure are used to characterize this biological act. The Recursive Extended Least Squares (RELS) algorithm is used to estimate the parameters of the proposed handwriting model. The obtained structure shows good concordance with the experimentally recorded data.
    Keywords: MIMO handwriting model; pen-tip displacement; electromyography signals; perturbations and uncertainties; writing velocity; ARMAX structure; Recursive Extended Least Squares algorithm.
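
The RELS estimator extends recursive least squares with noise-model terms; the plain RLS core it builds on can be sketched as below, here identifying a hypothetical first-order ARX system rather than real EMG/pen-tip data.

```python
import numpy as np

def rls(regressors, targets, lam=0.99, delta=100.0):
    """Recursive least squares: update the parameter vector sample by sample."""
    n = regressors.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                       # inverse-covariance estimate
    for phi, y in zip(regressors, targets):
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (y - phi @ theta)   # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam    # forgetting-factor update
    return theta

# simulate a first-order ARX system: y[k] = 0.8*y[k-1] + 0.5*u[k]
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k]
phi = np.column_stack([y[:-1], u[1:]])          # regressor [y[k-1], u[k]]
theta = rls(phi, y[1:])
```

On this noise-free system the recursion recovers the true parameters (0.8, 0.5); the extended variant adds past residuals to the regressor to model colored noise.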

  • Viscoelastic Blood Flow Through Stenosed Artery in Presence of Magnetic Field   Order a copy of this article
    by M.D. ASIF IKBAL 
    Abstract: An unsteady analysis of non-Newtonian blood flow under stenotic conditions in the presence of a transverse magnetic field has been carried out. The flowing blood is characterized as a generalized Oldroyd-B fluid with shear-thinning rheology. The arterial wall is considered rigid, with a cosine-shaped stenosis in its lumen. The governing equations of motion, together with an appropriate choice of initial and boundary conditions, are solved numerically by the MAC (Marker and Cell) method, and the results are checked for numerical stability to the desired degree of accuracy. A quantitative analysis is then carried out, including the respective profiles of the flow field. Key factors such as the wall shear stress and flow separation are also examined for further qualitative insight into the flow through the arterial stenosis. The present results are quite consistent with several existing results in the literature, which sufficiently validates the applicability of the model under consideration.
    Keywords: Non-Newtonian Fluid; Stenosis; MAC; Transverse Magnetic Field.

  • Biomechanical Analysis of Implantation of Polyamide/Hydroxyapatite Shifted Architecture Porous Scaffold in an Injured Femur Bone   Order a copy of this article
    by Kumaresan Thavasiappan, Gandhinathan R, Ramu M, Gunaseelan M 
    Abstract: The femur is one of the strongest and most important bones, supporting the major part of the human body's weight. An unexpected sudden impact on the femur during an accident may cause severe fractures. When such a bone injury is not self-repairable, a bone scaffold is the only remedy. This paper analyzes the biomechanical effects of a femur implanted with a porous scaffold made of polyamide/hydroxyapatite material. Porous scaffolds are temporary load-bearing members with a 3D porous geometry that supports internal cell growth. This study focuses on the suitability of a porous scaffold fixed on a damaged femur under different loading conditions. The research uses CT scan data of the femur of a 75 kg healthy person and presents detailed information on the biomechanical analysis of the femur during common physical activities using finite element analysis.
    Keywords: Femur bone; Biomechanical analysis; Polyamide (PA); Hydroxyapatite (HA); Finite Element Analysis (FEA); Porous Scaffold.

  • A Wavelet and Adaptive Threshold Based Contrast Enhancement of Masses in Mammograms For Visual Screening   Order a copy of this article
    by Pratap Vikhe, Vijaya Thool 
    Abstract: The screening of mammograms is a difficult task for the radiologist, owing to variations in contrast and the homogeneous structure of masses and the surrounding breast tissue. Therefore, an adaptive-threshold-based contrast enhancement method is proposed in this paper for the enhancement of suspicious masses in mammograms. Homomorphic filtering and wavelet-based denoising are applied prior to enhancement in the described method. The approach first suppresses artifacts through pre-processing. The wavelet transform is then applied to the pre-processed mammogram: a homomorphic filter is used to filter the approximation coefficients, and a wavelet shrinkage operator is applied to the detail coefficients for denoising. Finally, a contrast enhancement step based on an adaptive threshold technique is used to enhance the suspicious region. Two databases, the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society (MIAS) database, were used to test the proposed method. The results obtained show better visibility of suspicious masses for all types of mammograms.
    Keywords: Mammogram screening; contrast enhancement; wavelet transform; adaptive threshold.
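
The wavelet-shrinkage step can be illustrated in 1-D with the Haar wavelet (the paper works on 2-D mammograms and does not specify the basis; this is purely illustrative): soft-threshold the detail coefficients, then invert the transform.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inv_haar_step(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(x, t):
    """Soft-threshold (wavelet shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(x, t):
    a, d = haar_step(x)
    return inv_haar_step(a, soft(d, t))

rng = np.random.default_rng(2)
clean = np.ones(64)
noisy = clean + 0.1 * rng.normal(size=64)
denoised = denoise(noisy, t=0.2)
```

With a zero threshold the transform reconstructs the signal exactly; with a threshold around twice the detail-coefficient noise level, most noise-only details are suppressed.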

  • TBAC: Tree-Based Access Control Approach for Secure Access of PHR in Cloud   Order a copy of this article
    by Athena .J, Sumathy V 
    Abstract: The Personal Health Record (PHR) system is an emerging patient-oriented model for sharing health information through a cloud environment. Previously, a single-attribute-authority-based security scheme was used for sharing PHRs in the cloud, but this scheme is not practically applicable owing to security and privacy issues, and the existing access control approaches require considerable time to encrypt and decrypt the PHR file. This paper proposes a Tree-Based Access Control (TBAC) approach for fine-grained and secure access to the PHR in the cloud environment. In our approach, the Tree-based Group Diffie-Hellman (TBGDH) algorithm is used to generate the key instance for the encryption process. The Attribute-Based Encryption (ABE) approach is used with different hierarchical levels of users to protect the personal health data. The access policies are based on user attributes.
    Keywords: Attribute-Based Encryption (ABE); Diffie-Hellman; Cloud Computing; Key Generation; Multi-Authority ABE (MA-ABE); Personal Health Record (PHR); Tree-based Access Control (TBAC) and Tree-based Group Diffie Hellman (TBGDH).
    DOI: 10.1504/IJBET.2016.10005093
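
The tree-based group Diffie-Hellman idea can be sketched as follows: sibling pairs run an ordinary DH exchange, and each resulting shared key becomes the secret used one level up the tree, so four members converge on one group key. The prime, generator and member secrets below are toy values for illustration only.

```python
P = 2 ** 61 - 1   # a Mersenne prime, toy-sized; real deployments use far larger groups
G = 3

def dh_pair(secret_a, secret_b):
    """DH exchange between two siblings: each sees only the other's public
    value g^x mod p, yet both derive the same shared key."""
    pub_a = pow(G, secret_a, P)
    pub_b = pow(G, secret_b, P)
    key_ab = pow(pub_b, secret_a, P)            # a's view of the shared key
    assert key_ab == pow(pub_a, secret_b, P)    # b's view agrees
    return key_ab

# four members at the leaves of a binary key tree
k_left = dh_pair(123456789, 987654321)    # members 1 & 2
k_right = dh_pair(192837465, 918273645)   # members 3 & 4
# the two subtree keys act as secrets one level up the tree
group_key = dh_pair(k_left, k_right)
```

Adding or evicting a member only forces key recomputation along one root-to-leaf path, which is the main appeal of the tree arrangement.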
     
  • An Effective Design of Parity Check Matrix for LDPC Codes Using Multi-Objective Gravitational Search Algorithm   Order a copy of this article
    by S. Suresh Kumar, M. Rajaram 
    Abstract: The Low-Density Parity-Check (LDPC) code is an efficient contender for capacity-approaching error correction over many important channels. In this paper, we design a parity-check matrix for error correction using a Multi-objective Gravitational Search Algorithm (MGSA). The proposed technique proceeds in three steps: (i) generation of a new objective function; (ii) generation of an optimized parity-check matrix using MGSA; and (iii) design of an LDPC encoding-decoding system. First, a multi-objective function is derived from four objectives: low density, maximum Hamming minimum distance, maximum marginal entropy and maximum cyclic lifting degree. Our GSA-based approach is developed with efficient agent representation and fitness computation, along with the usual GSA operators. Once the parity-check matrix is computed, it is utilized for LDPC encoding. The proposed approach is evaluated through the Bit Error Rate (BER) measure. The results show that the proposed technique outperforms the existing technique, achieving BERs of 0.0513, 0.0348 and 0.0270 in the fourth iteration.
    Keywords: LDPC; parity check matrix; SNR; BER; MGSA; objective function.
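
The object being optimized, a sparse parity-check matrix H, defines the code through the zero-syndrome condition H·c = 0 (mod 2). A toy example below, unrelated to the matrices evolved in the paper, shows how H partitions words into codewords and non-codewords.

```python
import numpy as np

# A tiny toy parity-check matrix H for a length-6, rank-3 code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, word):
    """Syndrome over GF(2); all-zero iff `word` is a codeword."""
    return H @ word % 2

# enumerate all valid codewords by brute force (feasible only at toy size)
codewords = [c for c in range(64)
             if not syndrome(H, np.array([(c >> i) & 1 for i in range(6)])).any()]
```

Since H has rank 3 over GF(2), exactly 2^(6-3) = 8 of the 64 words are codewords; objectives such as the Hamming minimum distance in the paper's fitness function are properties of this codeword set.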

  • NIBG: An Efficient Near-Infrared Spectroscopy Based Device For Telemonitoring Blood Glucose Level of Diabetes Outpatients in eHealth System   Order a copy of this article
    by Oladayo Olakanmi, Ismaila A. Kamil, Olayinka P. Atilola 
    Abstract: Most conventional blood glucose measuring systems pose many inconveniences to diabetes patients, such as recurring pain from finger-pricks, infections from biosensor implants in the subcutaneous tissue, recurring costs for the purchase of strips and biosensors, and an inability to perform real-time monitoring of blood glucose, making them unsuitable for e-Health systems. This paper proposes a pain-free, non-infectious and Non-Invasive Blood Glucometer (NIBG) for blood glucose measurement and monitoring. The device is based on Near-Infrared (NIR) transmittance spectroscopy, which does not involve pricking a blood capillary during measurement, thereby reducing the risk of the microvascular complications associated with invasive methods. The approach may also reduce apathy towards blood glucose measurement, thereby helping to prevent the long-term complications of diabetes. Both the validation and clinical trial results indicate that the device is clinically accurate.
    Keywords: non-invasive; spectroscopy; electronic health; near-infrared; diabetes.

  • Detection of Missing Tooth from Dental radiographic and photographic images in Forensic Odontology   Order a copy of this article
    by Jaffino G, Banumathi A, Ulaganathan Gurunathan, Vijayakumari B, Prabinjose J 
    Abstract: Victim identification using dental radiographs is receiving much attention nowadays, since teeth are among the most robust key components in forensic odontology. Missing tooth detection is one of the notable issues in dental identification systems. This work therefore proposes a novel method for identifying persons by considering missing teeth in addition to the contours of the remaining teeth. The Online Region-based Active Contour Model (ORACM) is used for shape extraction, and a distance-based matching algorithm is used to identify the person by comparing ante-mortem and post-mortem dental images. The method considers both radiographs and photographs, since adequate radiograph images may not be available in all circumstances. The proposed method is well suited for person identification, and it could assist forensic odontologists in identifying victims accurately.
    Keywords: Forensic odontology; missing tooth; Bitewing dental images; hit rate.
    DOI: 10.1504/IJBET.2017.10012585
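
The distance-based matching of ante-mortem and post-mortem contours can be sketched with a symmetric Hausdorff distance (the paper does not specify its exact distance measure; this is one common choice for comparing extracted shapes).

```python
def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two point-set contours."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(src, dst):
        # worst-case distance from a point of src to its nearest point in dst
        return max(min(dist(p, q) for q in dst) for p in src)
    return max(directed(contour_a, contour_b), directed(contour_b, contour_a))

# hypothetical ante-mortem vs post-mortem tooth contours (same shape, shifted)
am = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
pm = [(0.0, 0.5), (1.0, 0.5), (1.0, 1.5)]
```

Identical contours score zero, and a small score indicates a likely match; a missing tooth leaves one contour with no counterpart, which inflates the directed distance and flags the mismatch.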
     
  • Performance Comparison of MeRMaId-ICA and Np-ICA in Composite Abdominal ElectroCardioGram Separation   Order a copy of this article
    by Anisha Milton, Kumar S.S, Benisha M 
    Abstract: Blind source separation strives to disintegrate a multivariate composite non-invasive abdominal electrocardiogram signal into independent non-Gaussian signals such as the maternal and fetal electrocardiograms. This paper proffers two separation algorithms for fetal electrocardiogram (FECG) extraction, which plays a great role in diagnosing fetal heart deformities: the Minimum Renyi's Mutual Information (MeRMaId) algorithm and the Non-parametric Independent Component Analysis (Np-ICA) algorithm. Both algorithms are experimentally evaluated on synthetic and real abdominal data. The performance of the two algorithms is juxtaposed by scrutinizing the signal-to-noise ratio at assorted noise levels and the signal-to-interference ratio at assorted amplitude levels, and by computing the correlation coefficient between the original and the estimated maternal and fetal electrocardiograms.
    Keywords: Minimum Renyi's Mutual Information; Non-parametric Independent Component Analysis; maternal electrocardiogram; fetal electrocardiogram.

  • Time-Time Analysis of Electroencephalogram Signals for Epileptic Seizure Detection   Order a copy of this article
    by Poonam Sheoran, J.S. Saini 
    Abstract: The detection and classification of epileptic seizures from electroencephalography (EEG) signals has been actively studied by researchers for the past few decades. This paper presents a novel application of the Time-Time Transform for the analysis of electroencephalogram time series for epileptic seizure detection, transforming them into secondary, time-limited local constituent time series. The TT-transform, a time-time representation of the EEG time series, is derived from the S-transform (the Stockwell transform, an extension of wavelets), a method that represents a non-stationary time series as a set of complex time-localized spectra. With the help of the TT-transform, a more informative representation of the time features of EEG signals is obtained around a particular point on the time axis, which proves very effective in seizure detection. As the TT-transform is completely invertible, it enables frequency filtering and signal-to-noise improvements in the time domain. The features obtained by applying the TT-transform to EEG time series are classified using Quadratic Discriminant Analysis, and a correct classification rate of 100% is obtained.
    Keywords: Short time Fourier Transform (STFT); Stockwell Transform (ST); Time-Time Transform (TT); Electroencephalogram (EEG); Time-Frequency analysis; Quadratic Discriminant Analysis (QDA).

  • Flexible Naor and Reingold Key Based Data Encryption/Decryption Scheme for Secured Remote Health Monitoring of Cardiac Patient using WBAN   Order a copy of this article
    by Muthuvel Somasundaram, R. Sivakumar 
    Abstract: Wireless Body Area Networks (WBANs) increase the reach of healthcare services and improve the quality of patients' lives. In WBANs, security issues arise during the monitoring and recording of the patient's body functions. Recently, many research works have been designed for secured remote health monitoring services; however, an effective remote health monitoring service model is still needed to enhance the security and privacy of patients' medical reports. To overcome these limitations, a secure patient-activity data monitoring scheme based on a privacy preservation technique is proposed for handling the medical healthcare services of patients in remote locations. The privacy preservation technique introduced in this work is the Flexible Naor and Reingold Key based Data Encryption/Decryption (FNRK-DED) model, which is flexible in providing data security for information of different sizes. The FNRK-DED model develops a Naor and Reingold Key based Data Encryption algorithm to encrypt cardiac patients' medical information of different formats while preserving the input length. In the encryption process, the input is the cardiac patient's medical report, which is encrypted with a secret key; the output is an n-bit ciphertext, which is transmitted through the wireless network to achieve secured remote health monitoring in WBANs. On the recipient side, the decryption process extracts the encrypted cardiac patient's medical report using the Naor and Reingold Key based Data Decryption (NRKDD) algorithm. The FNRK-DED model is evaluated in simulations on parameters such as data privacy rate, security ratio of the patients' health information, energy consumption rate and response time.
    Keywords: Wireless Body Area Networks; healthcare service; patients; security; Naor and Reingold key; privacy preservation; ciphertext.

  • Analysis and Evaluation of Classification and Segmentation of Brain Tumor Images   Order a copy of this article
    by M.P. Thiruvenkatasuresh, V. Venkatachalam 
    Abstract: The development of a model to detect the tumour part in brain images is of utmost significance, because a brain tumour is one of the most serious and life-threatening tumours, created either by abnormal and uncontrolled cell division within the brain or by cancers primarily present in other parts of the body. In the initial phase of our work, brain tumour database images are passed to the preprocessing module, where an adaptive median filter technique is used to improve image clarity. Feature extraction techniques are then applied, and a Support Vector Machine (SVM) classifier is used to classify the images as normal or abnormal. After classification, the abnormal images are segmented using a Fuzzy C-Means (FCM) clustering process combined with optimization methods. For centroid optimization, FCM uses the Social Spider Optimization (SSO) technique with a Genetic Algorithm; with these optimization techniques, the segmentation sensitivity reaches 99%. In the final stage, the optimal centroid is utilized to extract the tumour part of the image. The proposed scheme attains the maximum accuracy when compared with the existing ANFIS classification technique and the FCM (GWO) segmentation technique.
    Keywords: Brain tumor images; Support Vector Machine (SVM); Fuzzy C-Means (FCM); Social Spider Optimization (SSO) and Genetic Algorithm (GA).
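
The FCM segmentation step alternates weighted centroid updates with membership updates. A minimal sketch on synthetic 1-D "intensities" follows (the paper's SSO/GA centroid optimization is not reproduced here; this is the classic FCM loop it refines).

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Classic fuzzy c-means: weighted centroids, then fuzzy memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)           # random fuzzy partition
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances from every point to every centroid (guard against 0)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = d ** -p
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centroids

# two well-separated 1-D clusters, standing in for tumour/non-tumour intensities
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, centroids = fcm(X)
```

Each point receives a membership in every cluster rather than a hard label; thresholding the memberships yields the segmented region.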

  • Computer Aided Automatic Detection of Glioblastoma Tumor in Brain Using CANFIS Classifier   Order a copy of this article
    by C.G. Ravichandran, K. Rajesh 
    Abstract: The detection and diagnosis of brain tumours is complicated by the similar characteristics of tumour and non-tumour pixels in brain images. This paper proposes an efficient methodology for the detection and segmentation of the glioblastoma tumour region in the brain. The proposed methodology for glioblastoma tumour classification comprises the following stages: noise reduction, image fusion, feature extraction and classification. A median filter is used to remove noise from the brain images, and pixel-level image fusion is applied to obtain an enhanced brain image. Features are extracted from the fused image, and a Co-Active Neuro-Fuzzy Inference System (CANFIS) classifier is used to classify the brain image as either benign or malignant. Further, morphological operations are applied to the classified malignant brain image in order to segment the glioblastoma tumour region. The proposed methodology achieves 96.43% sensitivity, 99.99% specificity and 99.91% accuracy with respect to ground truth images.
    Keywords: Glioblastoma tumor; median filter; malignant; features; classification.

  • Adaptive Digital Medical image watermarking approach in 2D-Wavelet domain using speed up robust feature method   Order a copy of this article
    by YARABHAM GANGADHAR, Giridhar Akula V.S., Chenna Reddy P 
    Abstract: In recent years, digital images have gained importance in many fields. One major field in which digital images are prominently used is health care: patients' information is stored in digital images to maintain privacy, and there is a need to preserve the content of these images. Watermarking serves well for protecting digital images. In this paper, an adaptive digital medical image watermarking approach is proposed. The proposed method utilizes the 2D wavelet domain to convert the medical image into a number of sub-bands. Interest points are identified using the Speeded-Up Robust Features (SURF) method. A scaling parameter is calculated for the digital images using the bipolar sigmoid function, and a control parameter is introduced to adjust the scaling parameter; it plays a crucial role in deciding the strength of the watermark. Experimental evaluation is carried out using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Measure (SSIM) and Normalized Correlation Coefficient (NCC). The experimental results show that the proposed method has superior performance in both visible and invisible watermarking.
    Keywords: 2D-Wavelet domain; Watermarking; Medical images; Region of interest; SURF.

  • Study & Analysis of the Effect of RF Mobile Phone Waves on Human Brain When Operating at Charging / not Charging Mode   Order a copy of this article
    by Anupriya Saini, Manoj Duhan 
    Abstract: In this era of technology, the cell phone is a great boon for communication, but it also has many effects on human health. The harmful electromagnetic (EM) radiation emitted by cell phones has increasingly adverse effects on human health owing to excessive mobile phone usage. The negative health effects include brain tumours, weakening of the immune system, increased blood pressure, genetic damage, memory errors and many others. The radiation emitted by a cell phone is invisible and intangible, and hence all the more dangerous. This paper discusses an experiment conducted to study the effects of RF mobile phone waves on the human brain when the phone is operating in charging or non-charging mode. Electroencephalography (EEG) is used to measure the electrical activity of the brain. The data used in the experiment were collected in a laboratory with the help of 5 volunteers. The results show that it is more dangerous to use a mobile phone while it is charging. The value of the power spectral density (PSD) is maximal at 0-4 Hz, decreases at 4-8 Hz, increases again at 8-12 Hz and then continues to decrease at higher frequencies. In charging mode, the P3-O1 and T5-O1 channels of the brain are most affected during ringing, and the P4-O2 and T6-O2 channels are severely affected during an ongoing call; the most affected channel during an ongoing call while charging is P4-O2. Thus, in order to reduce the effects of EMF radiation on the human brain, using the mobile phone while charging should be avoided, and calls should be answered only after removing the phone from the charger.
    Keywords: EEG; Electroencephalogram; EM; Electromagnetic; RF; Radio frequency; GSM; global system for mobile communication; DSP; digital signal processing; PSD; power spectral density; RMS; recorders and medicare system.
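
The per-band PSD comparisons reported above can in principle be reproduced with a simple periodogram estimate integrated over each EEG band. The sampling rate and test tone below are illustrative, not the study's recordings, and PSD normalization conventions vary between toolboxes.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Integrate a periodogram PSD estimate over the band [f_lo, f_hi)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum() * (fs / n)          # multiply by bin width df

fs = 128                                        # Hz, a typical EEG rate
t = np.arange(256) / fs
alpha_wave = np.sin(2 * np.pi * 10 * t)         # pure 10 Hz "alpha" tone
```

For this pure 10 Hz tone, essentially all power falls in the 8-12 Hz band and almost none in 0-4 Hz, mirroring how the study compares delta, theta and alpha band PSD across channels.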

  • Fuzzy Prediction and early detection of stomach diseases by means of combined iteration fuzzy models   Order a copy of this article
    by Riad Taha Al-Kasasbeh, Nikolay Korenevskiy, Mahdi Salman Alshamasin, Florin Ionescu, Elena Boitсova, Etab Al-Kasasbeh 
    Abstract: This work discusses aspects of decision rule synthesis for the prediction and early diagnosis of stomach diseases. The distinguishing feature of the heterogeneous fuzzy decision rules is that they use information about the energetic condition of biologically active points, together with features traditionally used in medical practice such as alcohol consumption, tobacco smoking and heredity. The use of different types of original data allows us to achieve a diagnostic efficiency of 0.9 or greater, which makes it possible to recommend the research outcome for medical practice.
    Keywords: stomach diseases; fuzzy logic; biologically active points; membership functions.

  • CLASSIFICATION OF WALL SHEAR STRESS OF HUMAN COMMON CAROTID ARTERY AND ASCENDING AORTA ON THE BASIS OF AGE   Order a copy of this article
    by Renu Saini, Sharda Vashisth, Ruchika Bhatia 
    Abstract: Wall shear stress is one of the major factors responsible for the increase in cardiovascular disease. The present work calculates the level of wall shear stress in the common carotid artery (CCA) and the aorta to determine which artery has a greater chance of developing cardiovascular disease, and at which age. Wall shear stress (WSS) is determined in the CCA and aorta of presumed healthy volunteers aged between 10 and 60 years. A real 2D model of both the aorta and the common carotid artery is constructed for different age groups using computational fluid dynamics (CFD), and the WSS of both arteries is calculated and compared across age groups. It is found that as the diameter of the common carotid artery and ascending aorta increases with advancing age, the wall shear stress decreases. The WSS of the aorta is found to be lower than that of the common carotid artery.
    Keywords: Ageing; Blood flow; Carotid artery; aorta; Wall shear stress.
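
As an analytic baseline for the CFD results, wall shear stress in fully developed Poiseuille flow is tau = 4·mu·Q / (pi·r^3), which already shows why WSS falls as vessel radius grows with age. The flow rates and radii below are typical textbook-style values, assumed for illustration, not the study's measurements.

```python
import math

def poiseuille_wss(flow_ml_s, radius_mm, viscosity_pa_s=0.0035):
    """Poiseuille wall shear stress tau = 4*mu*Q / (pi*r^3), in pascals."""
    q = flow_ml_s * 1e-6      # ml/s -> m^3/s
    r = radius_mm * 1e-3      # mm   -> m
    return 4 * viscosity_pa_s * q / (math.pi * r ** 3)

wss_cca = poiseuille_wss(6.0, 3.0)       # carotid-like calibre: about 1 Pa
wss_aorta = poiseuille_wss(80.0, 12.0)   # aorta: larger flow, far larger radius
```

Because WSS scales with 1/r^3, the aorta's much larger radius outweighs its higher flow, matching the study's finding that aortic WSS is lower than carotid WSS.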

  • PERFORMANCE ANALYSIS OF WAVELET BASIS FUNCTION IN DE-TRENDING AND OCULAR ARTIFACT REMOVAL FROM ELECTROENCEPHALOGRAM   Order a copy of this article
    by P. Prema, T. Kesavamurthy, K. Ramadoss 
    Abstract: Event-Related Potential (ERP) Brain-Computer Interface (BCI) systems make extensive use of the scalp electroencephalogram (EEG) for communication and motor control. EEG recording is a non-invasive procedure, but the ERPs are buried in the EEG because of their low amplitude, and the signal is usually contaminated with artifacts. For BCI control applications, the ocular artifacts produced by eye movements and blinks, which dominate the other physiological artifacts, are undesirable. The objective of this study is to effectively remove ocular artifacts from the EEG using the Discrete Wavelet Transform (DWT) combined with a Recursive Least Squares (RLS) adaptive noise cancellation technique, using the optimal basis function with Stein's Unbiased Risk Estimate (SURE) thresholding. The proposed methodology is tested on datasets created from the experimental setup, measuring the performance metrics Mean Square Error (MSE), Artifact-to-Signal Ratio (ASR), correlation coefficient and coherence. The results show that the db4 wavelet performs best in de-trending and ocular artifact suppression, providing a better signal-to-noise ratio and a high level of coherence from 5 Hz onwards while preserving the original EEG signal.
    Keywords: Ocular artifacts; EEG; EOG; physiological interference; DWT; adaptive filter; brain-computer interface; coherence; correlation; performance metrics.

  • DESIGN AND IMPLEMENTATION OF TEXTILE ANTENNA AND THEIR COMPARATIVE ANALYSIS OF PERFORMANCE PARAMETERS WITH OFF AND ON BODY CONDITION   Order a copy of this article
    by E.Thanga Selvi, Meena Alias Jeyanthi 
    Abstract: A wearable antenna is an important part of a body area network (BAN) for communicating health care and secured information to the central hub. This paper presents a comparative study of rectangular microstrip patch textile antennas made from different conductive textile and non-textile materials. The textile antenna is designed and evaluated using ADS 2013.06 software and measured with an N9926A 14 GHz FieldFox handheld vector network analyser for ISM-band (industrial, scientific and medical) applications, operating in the (2.4-2.485) GHz band. The antennas are analysed and compared using performance parameters such as VSWR, reflection coefficient, bandwidth, impedance, directivity and gain. The proposed design achieves a return loss of -53.32 dB, a VSWR of 1, 100% efficiency, narrow bandwidth, an effective directional radiation pattern, 50-ohm impedance, and gain and directivity of about 5 dB and 5 dBi respectively. Microstrip patch antennas are good candidates for body-worn applications, as they radiate perpendicularly to the planar structure and their ground plane shields the body tissues efficiently. The proposed textile antenna may also be suitable for wearable telecommunication and telemedicine applications, body-centric area networks, Wi-Fi, WLAN applications, and communication purposes such as tracking, navigation, mobile computing and public safety.
    Keywords: BAN(body area network),Inset feed; ISM band; pure copper polyester taffeta fabric; copper foil tape with conductive adhesive; copper sheet; N9926A 14GHz Field Fox Handheld Vector Network Analyzer.
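
The standard transmission-line-model equations give a first-cut width and length for a rectangular patch at a target frequency. The textile permittivity and substrate thickness below are assumed illustrative values, not the paper's measured fabric parameters.

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Transmission-line-model width and length of a rectangular patch (metres)."""
    w = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    # effective permittivity accounts for fringing fields in air
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / w)
    # length extension due to fringing at the radiating edges
    dl = (0.412 * h * (eps_eff + 0.3) * (w / h + 0.264)
          / ((eps_eff - 0.258) * (w / h + 0.8)))
    l = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl
    return w, l

# hypothetical textile substrate: eps_r ~ 1.6, 1 mm thick, 2.45 GHz ISM band
w, l = patch_dimensions(2.45e9, 1.6, 1e-3)
```

For a low-permittivity fabric the patch comes out a few centimetres on a side (here roughly 54 mm x 48 mm), which is why textile patches fit comfortably on clothing.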

  • A combined hierarchical algorithm of mammograms registration using mutual information and a point based matching approach   Order a copy of this article
    by Meryem Loucif, Bornia Tighuiouart 
    Abstract: Mammographic image registration is an important process in the comparative analysis of mammograms, which aims to interpret them correctly. In this paper, we propose a combined hierarchical framework for non-rigid mammogram registration. The conjoint use of the multiresolution and multigrid hierarchical strategies allows the warp complexity to be increased progressively at a fixed size of the manipulated data, contributing to a finer registration at an acceptable computing cost. To perform the registration, an intensity-based method using mutual information is hybridized with a point-based matching approach using Thin Plate Splines. This hybridization not only improves registration accuracy, by using mutual information as a similarity measure adapted to dissimilar images, but also decreases the algorithm's convergence time. Registration performance is evaluated by aligning mammograms from the MIAS database, and the results demonstrate the effectiveness of the proposed algorithm for mammogram registration.
    Keywords: mammograms registration; MIAS database; mutual information; geometric matching; multiresolution approach; Gaussian pyramid; combined hierarchical algorithm; progressive subdivision strategy; Thin Plate Spline.
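
Mutual information, the similarity measure driving the intensity-based stage, can be estimated from a joint intensity histogram. A minimal sketch on synthetic images follows; the bin count is an illustrative choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
img = rng.random((64, 64))
unrelated = rng.random((64, 64))
```

A registration loop maximizes this quantity over warp parameters: an image is maximally informative about itself and nearly uninformative about an unrelated image, so MI peaks at good alignment even when intensities differ between acquisitions.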

  • A Comparison of sEMG and MMG signal Classification for automated muscle fatigue detection   Order a copy of this article
    by M.R. Al-Mulla, Francisco Sepulveda 
    Abstract: This study compares the classification performance of sEMG and MMG signals from fatiguing dynamic contractions of the biceps brachii. Commonly used statistical features are compared with a recently developed evolved pseudo-wavelet. Based on the literature, wavelet-based methods are a promising feature extraction technique for both types of signal (sEMG and MMG) during dynamic contractions. The MMG results show that the evolved pseudo-wavelet improved the classification rate of muscle fatigue by 4.70 to 27.94 percentage points when compared to other standard wavelet functions, giving an average correct classification of 80.63%, with statistical significance (p < 0.05). For sEMG signals, the evolved pseudo-wavelet improved the classification rate of muscle fatigue by 4.45 to 14.96 percentage points when compared to other standard wavelet functions (p < 0.05), giving an average correct classification of 87.90%. The comparison demonstrates that for both the sEMG and the MMG signal, the feature giving the best classification results was the evolved pseudo-wavelet.
    Keywords: Localised Muscle Fatigue; Electromyography; Mechanomyography; Wavelet analysis; Pseudo-wavelets.

  • Analysis of Speech Imagery Using Brain Connectivity Estimators on Consonant-Vowel-Consonant (CVC) Words   Order a copy of this article
    by Sandhya Chengaiyan, Kavitha Anandan 
    Abstract: Speech imagery refers to the perceptual experience of uttering speech to oneself without any articulation. In this paper, the neural correlations between brain regions associated with articulated and imagined speech for Consonant-Vowel-Consonant (CVC) words are analyzed using brain connectivity estimators. EEG coherence, a synchronization parameter, establishes the correlation between several cortical areas. To analyze causal dependence, Partial Directed Coherence (PDC) and Directed Transfer Function (DTF) estimators are derived from multi-channel EEG data. From inter- and intra-hemispheric coherences, it is observed that theta, beta and gamma frequencies were dominant and that words sharing the same vowel and one common consonant have similar coherence values. Results from intra-hemispheric PDC and DTF parameters show that the frontal and temporal regions of the left hemisphere are more activated for all the given speech imagery tasks. The analysis thus provides a significant step towards understanding the neural interactions of the brain during thinking and articulation.
    Keywords: Speech Imagery; Electroencephalography (EEG); EEG Coherence; Partial Directed Coherence(PDC); Directed Transfer Function(DTF); Consonant-Vowel-Consonant(CVC) words.

  • Wireless Speech Control System for Robotic Arm   Order a copy of this article
    by Biswajeet Champaty, Suraj K. Nayak, Ashirbad Pradhan, Sirsendu S. Ray, Biswajit Mohapatra, Indranil Banerjee, Arfat Anis, Kunal Pal 
    Abstract: Speech-controlled devices have been explored for potential applications in rehabilitation technology. Such systems have shown great promise in improving the independence of differently-abled persons by providing hands-free operation of rehabilitative aids. The present study delineates the development of a speech-activated wireless control system for rehabilitative devices. A robotic arm was used as the representative rehabilitative device, and the technology can be extended to operate other rehabilitative aids (e.g. wheelchairs). The performance of the device was evaluated with 10 volunteers, all of whom were able to accurately complete the desired movements of the robotic arm with relative ease. The developed device is simple and user-friendly.
    Keywords: Speech; Rehabilitative device; Robotic arm; XBee; Arduino.

  • Mix-Model for Optimization of Textural Features Applied to Multiple Sclerosis Lesion-Tumor Segmentation   Order a copy of this article
    by A. Lakshmi, T. Arivoli, Pallikonda Rajasekaran Murugan 
    Abstract: Segmentation of biomedical images plays an important role in many applications, especially medical imaging, and is a key step in both medical research and clinical practice. Magnetic Resonance Imaging is normally used to distinguish and enumerate Multiple Sclerosis lesions in the brain. Segmentation of Multiple Sclerosis lesions remains a challenging issue owing to spatial variation, small size and unclear boundaries. The usual technique for brain MRI tumor detection and classification is manual investigation, which varies from person to person and is very time-consuming. Many methods have been proposed to segment lesions automatically. This paper proposes segmentation of MRI brain tumors using cellular automata and classification of tumors by a Pointing Kernal Classifier (PKC). The use of a modified Cuckoo Search with priority values together with the PKC in the proposed Mix Model for Optimization of Textural Features (M-MOTF) yields a significant improvement in classification performance at low dimensionality. The proposed system has been validated on a real-time dataset from the Frederick National Laboratory, and the experimental results show improved performance.
    Keywords: Pointing Kernal Classifier (PKC); Mix Model for Optimization of Textural Features (M-MOTF); Distributed Adaptive Median Filtering (DAMF); Multi-Angle Cellular Automata (M-ACA); Dynamic Angle Projection Pattern (DAPP).

  • Optimized DWT using cooperative particle Swarm Optimizer for hybrid domain based Medical and Natural Image Denoising   Order a copy of this article
    by A. Velayudham, K. Madhan Kumar, R. Kanthavel 
    Abstract: The quest for effective image denoising systems remains a valid challenge at the intersection of practical investigation and measurement. In spite of the sophistication of recently proposed techniques, most algorithms have not yet reached a satisfactory level of applicability. In this research, an optimal wavelet filter coefficient design-based methodology is proposed for image denoising. The method uses a new wavelet filter whose coefficients are derived by the discrete wavelet (Haar) transform using CPSO optimization, together with a bilateral filter. The optimal wavelet coefficient based denoising minimizes the noise, while the bilateral filter further reduces noise and increases the PSNR without any loss of relevant image information. Overall, the proposed approach consists of two stages: (i) design of the optimal wavelet filter and (ii) image denoising using a bilateral filter. First, optimal wavelet coefficients are selected using the cooperative particle swarm optimizer (CPSO). Then, the hybrid domain algorithm (wavelet with bilateral filter) is applied to the noisy image to obtain the denoised image. A comparative study of existing approaches and the proposed approach is made in terms of PSNR, SDME, SSIM and GP; the proposed algorithm gives better PSNR than the existing methods.
    Keywords: image denoising; optimal wavelet; bilateral filter; cooperative particle swarm optimizer; wavelet coefficient; sub-bands.
    DOI: 10.1504/IJBET.2017.10012231
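As a rough illustration of the wavelet-plus-thresholding idea described in the abstract above (a plain one-level Haar transform with soft thresholding, not the authors' CPSO-optimized filter or the bilateral stage; the threshold value is an arbitrary assumption):

```python
import math

def haar_dwt(signal):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients towards zero by t (soft thresholding)."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(signal, t=0.5):
    """Threshold only the detail band, then reconstruct."""
    a, d = haar_dwt(signal)
    return haar_idwt(a, soft_threshold(d, t))
```

With t = 0 the round trip reproduces the input exactly; increasing t suppresses small detail coefficients, which is where most of the noise energy sits.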
     
  • Effective Facial Expression Recognition System Based on Hybrid Classification Technique   Order a copy of this article
    by J. Sunetha, K. Sandhya Rani 
    Abstract: Facial expression recognition has potential applications in various sectors of daily life, but it is still not well understood, owing to the absence of efficient expression identification methods. Many methods aim to improve identification effectiveness by addressing face detection and feature extraction issues in expression identification. In the first phase, noise is eliminated from the image using preprocessing techniques, to obtain a quality image and reduce computational complexity. The next phase is feature extraction, in which related features such as the eyes, mouth and nose are extracted. The shape feature of the eye region is extracted by an Active Appearance Model (AAM), while texture features of the nose and mouth are extracted using the Gray-Level Co-occurrence Matrix (GLCM). In the final phase, the facial expression is categorized by introducing an Adaptive Genetic Fuzzy Classifier (AGFS) and a Neural Network (NN). Finally, score-level fusion of the two classification results is performed to obtain the face emotions.
    Keywords: facial expression recognition; Active Appearance Model (AAM); Gray-Level Co-occurrence Matrix (GLCM); Adaptive Genetic Fuzzy Classifier (AGFS); Neural Network (NN); score-level fusion.
    DOI: 10.1504/IJBET.2019.10011474
     
  • Evaluation of the Effect of Posture on Carotid-to-Toe Pulse Transit Time Values Estimated Using System Identification   Order a copy of this article
    by Aws Zuhair Sameen, Rosmina Jaafar, Edmond Zahedi 
    Abstract: Pulse Transit Time (PTT), a marker of arterial stiffness, is defined as the time a pulse wave needs to travel from one point of the blood circulation to another. Monitoring a person's PTT values is useful for non-invasive, cuff-less estimation of blood pressure. The challenge is how to estimate PTT values continuously and accurately from the subject. In this paper, PTT values are estimated from two PPG signals obtained by reflective photoplethysmography at the carotid and toe, collected from 12 healthy subjects (8 males and 4 females) aged 23.8
    Keywords: Pulse Transit Time (PTT); System Identification; Photoplethysmogram (PPG); ARX model; Time Delay Estimation.
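As a simpler baseline than the ARX system-identification approach the abstract describes, the transit time between two PPG sites can be estimated as the lag maximizing their cross-correlation; the waveforms and sampling rate below are invented for illustration:

```python
def best_lag(proximal, distal, max_lag):
    """Lag (in samples) maximizing the cross-correlation of the distal
    signal against the proximal one."""
    def xcorr(lag):
        return sum(proximal[i] * distal[i + lag]
                   for i in range(len(proximal) - lag))
    return max(range(max_lag + 1), key=xcorr)

def ptt_seconds(proximal, distal, fs, max_lag):
    """Pulse transit time = best lag divided by the sampling frequency fs."""
    return best_lag(proximal, distal, max_lag) / fs

# A toy pulse arriving 3 samples later at the distal site:
proximal = [0.0, 0.0, 1.0, 3.0, 1.0] + [0.0] * 7
distal = [0.0] * 5 + [1.0, 3.0, 1.0] + [0.0] * 4
delay = ptt_seconds(proximal, distal, fs=100.0, max_lag=5)  # 3 samples / 100 Hz
```

Model-based estimators such as ARX are preferred in practice because they are more robust to waveform shape changes than this simple correlation peak.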

  • GENETIC TIME SERIES MOTIF DISCOVERY FOR TIME SERIES CLASSIFICATION   Order a copy of this article
    by RAMANUJAM ELANGOVAN, Padmavathi S 
    Abstract: A time series is a sequence of continuous data, an unbounded group of observations found in many applications. Time series motif discovery is an essential task in time series mining. Several algorithms have been proposed to discover motifs in time series; these algorithms require user-defined parameters such as motif length, support or confidence, and selecting these parameters is not easy. To overcome this challenge, this paper proposes a Genetic Algorithm with constraints to discover a good trade-off between representative and interesting motifs. The discovered motifs are validated for their potential interest in the time series classification problem using a Nearest Neighbour classifier. Extensive experiments show that the proposed approach can efficiently discover motifs of different lengths and is more accurate and statistically significant than state-of-the-art time series techniques. Finally, the paper demonstrates the efficiency of motif discovery on large medical data from the MIT-BIH Arrhythmia database to classify abnormal signals.
    Keywords: Genetic Algorithm; constraints; time series classification; UCR Archive; MIT-BIH Arrhythmia.
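The baseline that the GA improves upon can be pictured as a brute-force search for the closest pair of non-overlapping subsequences of a fixed length (the classic motif definition, with the user-supplied length parameter the abstract discusses); the series below is arbitrary:

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length subsequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_motif(series, length):
    """Indices (i, j) of the closest pair of non-overlapping
    subsequences of the given length (brute force, O(n^2) pairs)."""
    n = len(series) - length + 1
    best, best_pair = float("inf"), None
    for i in range(n):
        for j in range(i + length, n):   # j >= i + length enforces non-overlap
            d = euclidean(series[i:i + length], series[j:j + length])
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair
```

The quadratic pair enumeration is what makes exact motif discovery costly on long recordings, which motivates heuristic searches such as the constrained GA above.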

  • Facial Expression Synthesis Images Using Hybrid Neural Network with Particle Swarm Optimization Techniques   Order a copy of this article
    by Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar 
    Abstract: Facial expression is the visible outer manifestation of a person's affective state, cognitive activity and communication, and it plays a key role in interaction. Human-computer interfaces that can respond to facial expressions pave the way for communication comparable to human-human interaction. In this paper, facial expression synthesis is performed using different facial expressions such as angry, sad, smile, surprise and cry from various people. The images are pre-processed and the landmark points are located automatically by the Viola-Jones algorithm. The proposed method combines two procedures, a Hybrid Neural Network (HNN) and the Particle Swarm Optimization (PSO) algorithm; by training the PSO and the hybrid neural network, the desired output is obtained. In the results section, evaluation metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE) and the Second-Derivative-Like Measure of Enhancement (SDME) are calculated for the different algorithms. In this evaluation, particle swarm optimization gives enhanced output compared with the other techniques and the existing facial expression methods.
    Keywords: Facial expression; Hybrid Neural network; viola-Jones algorithm; Particle Swarm Optimization (PSO).
    DOI: 10.1504/IJBET.2017.10011900
     
  • Embedded Binary PSO Integrating classical methods for Multilevel Improved Feature Selection in liver and kidney disease diagnosis   Order a copy of this article
    by Gunasundari Selvaraj, Janakiraman Subbiah, Meenambal Selvaraj 
    Abstract: Feature selection is an important preprocessing technique in data mining. It removes irrelevant data and thereby reduces the number of features. This paper presents a new algorithm, embedded binary particle swarm optimization (BPSO), to improve the performance of BPSO for feature selection. Embedded BPSO incorporates classical methods to select an elite feature subset: the population is refined or extended at regular intervals using the best features from sequential forward selection and sequential backward selection. In this study, a probabilistic neural network and a support vector machine with 3-fold cross-validation are used to evaluate the particles. The embedded algorithm is verified in the feature selection module of a liver and kidney cancer diagnostic system, where the elite features extracted by the wrapper-based embedded algorithm are used to characterize diseases with the classifier. Findings show that the proposed system is proficient in selecting the best features with minimum error rate.
    Keywords: Binary particle swarm optimization; feature selection; sequential forward selection; sequential backward selection; liver cancer; kidney cancer; computer-aided diagnostic system; medical imaging; benign; malignant.
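The classical sequential forward selection step that the embedded BPSO borrows from is a greedy loop; the scoring function below is a stand-in for the PNN/SVM cross-validation score used in the paper, and the feature names are invented:

```python
def forward_select(features, score, k):
    """Greedily add the feature that most improves `score`
    (a callable on a feature subset) until k features are chosen."""
    selected = []
    while len(selected) < k:
        remaining = [f for f in features if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy additive scorer standing in for classifier accuracy:
gains = {"texture": 1.0, "shape": 5.0, "intensity": 2.0, "edge": 0.0}
chosen = forward_select(list(gains), lambda s: sum(gains[f] for f in s), 2)
```

Sequential backward selection is the mirror image (start with all features, greedily drop the least useful); the embedded algorithm injects the outputs of both into the BPSO population.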

  • Iterative Modelling of the Closing-based Differential Morphological Profile   Order a copy of this article
    by Arif Muntasa, Indah Agustien Siradjuddin 
    Abstract: One application of image processing is optic disc detection in retinal images. The similarity in gray scale between the object and the background of retinal images has drawn many researchers to this problem. In this research, a Closing-based Differential Morphological Profile is proposed to detect the optic disc in retinal images. The closing process is performed iteratively: it starts with pre-processing, followed by a Differential Morphological Profile based on the closing operation, i.e. dilation and erosion. The dilation process is performed iteratively and followed by erosion. The results are enhanced to obtain a better image when the binary image transformation is conducted, and a noise removal step is needed to eliminate detection errors. Finally, the centre point of the detected object is used to delineate the optic disc. The detection rates of the proposed approach show that its maximum accuracy outperforms other methods, i.e. the 2D-Gaussian Filtering Based Mathematical Morphology Approach, the Differential Morphological Profile, Morphological Reconstruction Techniques and the Hybrid Fuzzy Classifier.
    Keywords: Differential morphological profile; iterative modelling; optic disk image detection; closing operation.

  • Aligning Large Biomedical Ontologies for Semantic Interoperability using Graph Partitioning   Order a copy of this article
    by Sangeetha Balachandran, Vidhyapriya Ranganathan, Divya Vetriveeran 
    Abstract: Ontologies, formal specifications of domain knowledge, play an imperative role in the semantic web and are developed by several domain experts in the biomedical field. Ontology alignment, or mapping, is the process of identifying correspondences among the concepts in ontologies to facilitate data integration between heterogeneous data sources. The alignments generated augment information retrieval, web service composition, drug discovery and the identification of new gene patterns in species. The proposed ontology mapping system addresses three pivotal issues: (i) facilitating the automated alignment process by incorporating Random Forests (RF), an ensemble learning method that is stable against outliers and allows the individual random trees to be trained in parallel, thereby reducing execution time; (ii) improving execution time by partitioning the ontologies using the cluster-walktrap [24] methodology and identifying correspondences between concepts in parallel; and (iii) identifying equivalence and non-equivalence correspondences based on the descriptions, labels and object properties associated with the concepts. The ontologies subjected to the mapping system are partitioned into sub-ontologies, and the sub-ontology pairs with the highest cosine similarity are selected as candidates for further mapping. The performance of the system is evaluated pragmatically on benchmark datasets in the anatomy and large biomedical ontology tracks of the Ontology Alignment and Evaluation Initiative (OAEI) 2013 and 2014. With the proposed system, a quantifiable improvement of about 4.4% is observed in average precision, recall and F-measure, and performance on large biomedical ontologies improves by about 3% compared with state-of-the-art ontology mapping tools. The alignments generated are represented using the Alignment API suggested by OAEI, for consistent representation and to ease evaluation.
    Keywords: Ontology alignment; Ontology Mapping; Semantic information retrieval; Data integration; Biomedical informatics; Semantic interoperability.
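The cosine similarity used above to pick candidate sub-ontology pairs can be illustrated on bag-of-words term vectors built from concept labels; the term counts below are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-count dictionaries."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical term vectors for two anatomy sub-ontologies:
sub_a = {"heart": 2, "valve": 1, "aorta": 1}
sub_b = {"heart": 1, "valve": 2, "ventricle": 1}
similarity = cosine(sub_a, sub_b)  # high overlap -> candidate pair
```

Pairs scoring above a chosen cut-off would be passed to the Random Forest matcher; disjoint vocabularies score 0 and are pruned, which is what bounds the execution time.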

  • Characterization of Breast Tissue using compact Microstrip antenna   Order a copy of this article
    by Vanaja Selvaraj, Poonguzhali Srinivasan, Divya Baskaran, Rahul Krishnan 
    Abstract: This paper presents an improved method to characterize breast tissue using a unique microstrip antenna. The proposed antenna consists of a radiating patch with a rectangular slot, three stubs, a feed-line and a partial ground plane. Several parameters are used to analyze the microstrip antenna, which provides a wide usable frequency band from 2.4 to 4.76 GHz. To observe the interaction between the antenna and breast tissue, a heterogeneous breast model having dielectric characteristics similar to human tissue is used. A tumor in the breast tissue is analyzed by measuring the resonant frequency of the reflected signal. The results show that the shift in resonant frequency decreases as the size of the tumor increases, due to dielectric variation in the breast tissue.
    Keywords: wideband; heterogeneous; microstrip antenna; breast tissue.

  • A Hybrid K-Means Algorithm Improving Low-Density Map Based Medical Image Segmentation with Density Modification   Order a copy of this article
    by Srinivasa Reddy A., Pakanati Chenna Reddy 
    Abstract: Segmentation is the grouping of a set of pixels mapped from the structures inside the prostate and the background image. The main aim of this research is to provide a better segmentation technique for medical images by addressing the drawbacks that currently exist in density-map-based discriminability of feature values. In this paper, we propose a medical image segmentation method based on density-map segmentation properties with density modification. The accuracy of the result may fall short of expectation when the dimension of the dataset is high, because the chosen datasets cannot be assumed to be free of noise and faults. The kernel change, i.e. the segmentation, is made using a hybrid k-means clustering algorithm. This method provides the segmentation information as well as a noise-free output in an efficient way. The developed model is implemented in Matlab and the output is compared with existing techniques such as FCM and k-means to evaluate the performance of the proposed system.
    Keywords: Medical Image Segmentation; Hybrid K-Means Algorithm; Skull striping; FCM; K-Means; Genetic Algorithm.
    DOI: 10.1504/IJBET.2017.10012965
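For reference, the plain k-means baseline that the hybrid variant above is compared against amounts to Lloyd iterations over pixel intensities; the 1-D values and initial centroids below are arbitrary:

```python
def kmeans_1d(values, centroids, iters=10):
    """Plain Lloyd's algorithm on scalar intensities (not the hybrid variant)."""
    labels = []
    for _ in range(iters):
        # assignment step: nearest-centroid index for each value
        labels = [min(range(len(centroids)),
                      key=lambda c: abs(v - centroids[c])) for v in values]
        # update step: mean of each cluster (keep old centroid if empty)
        for c in range(len(centroids)):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Two well-separated intensity groups converge to their means:
centroids, labels = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
```

In image segmentation the "values" are pixel intensities (or feature vectors) and the resulting labels form the segmentation mask; density modification changes how the cluster statistics are weighted.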
     
  • AN ENHANCED TBAHIBE-LBKQS TECHNIQUES FOR PRIVACY PRESERVATION IN CLOUD   Order a copy of this article
    by Rachel Nallathamby, Rene Robin CR 
    Abstract: Providing security to data stored in the cloud is an important and challenging task. Several privacy preservation and encryption algorithms have been proposed in existing works, but they have drawbacks such as high cost, long execution time and a low level of security. To overcome these drawbacks, this paper proposes two novel techniques, Tiered Blind and Anonymous Hierarchical Identity Based Encryption (TBAHIBE) and Location Based Keyword Query Search (LBKQS), to provide privacy preservation for data stored in the cloud. In this work, privacy is provided to medical data stored in an Electronic Health Record (EHR). The system comprises two modules: secure data storage and location-based keyword query search. In the first module, the medical data of the egg and sperm donor, receptor, doctor and lab technician are stored in encrypted form using the proposed TBAHIBE technique; only authenticated persons can view the medical data, for instance the doctor can view the donor and receptor details. In the second module, location-based search is enabled based on the keyword and query, so that the doctor, patient and other users can fetch the medical details in a filtered format. The main advantage of this work is that it provides high privacy to medical data in a secure way. The experimental results evaluate the performance of the proposed system in terms of computation cost, communication cost, query evaluation, encryption time, decryption time and key generation time.
    Keywords: Cloud Computing; Privacy Preservation; Egg Donor; Sperm Donor; Tiered Blind and Anonymous Hierarchical Identity Based Encryption (TBAHIBE) and Location Based Keyword Query Search (LBKQS); Electronic Health Record (EHR).

  • Low cost Device for early diagnosis of Chronic Obstructive Pulmonary Disease   Order a copy of this article
    by Monica Subashini Mohan Chandran, Tushar Talwar, Rohit Mazumder 
    Abstract: Chronic Obstructive Pulmonary Disease (COPD) is characterized by increasing breathlessness. Many people mistake their increased breathlessness and coughing for a normal part of aging; in the early stages of the disease the symptoms go unnoticed and only appear in the more advanced stages. That is why an easy-to-use diagnostic device is important. Devices currently on the market either need a doctor's help to interpret or are inaccurate. The proposed device serves for primary diagnosis of COPD, enabling patients to self-test their lung capacity. The lung capacity is estimated from the amount of air exhaled during the test. The sensor system of the device includes a rotary sensor, which gives the patient precise and accurate information every time. The device has a modern, interactive application that gives the patient access to a detailed report on lung condition; the application also features an exercise mode in which the patient can do simple breathing exercises to prevent and treat COPD from an early stage. The device has been validated by conducting self-diagnosis tests with people between the ages of 20 and 45, with 87% accuracy. Thus, the device can be used for primary diagnosis of COPD.
    Keywords: COPD; rotary sensor; lung capacity; diagnosis.

  • Mathematical Model based ontology for Human Papillomavirus in India   Order a copy of this article
    by GEETHA RAJESH KUMAR, SIVASUBRAMANIAN S 
    Abstract: Cervical cancer is a life-threatening disease affecting women in great numbers. It is the fifth most common cancer overall, with a high impact on human mortality, and the second most common cancer among women worldwide. Cervical cancer is caused by a sexually transmitted virus known as the Human Papillomavirus (HPV). In this paper, a mathematical model and an ontological representation of this model, the HPVMath ontology, are formulated to expose the viability of HPV leading to cervical cancer in women. Mathematical models translate data into trials that give deep insight into the women population: not suspected for HPV, suspected for HPV, with HPV without cervical cancer, and with HPV with cervical cancer. These short-term findings can lead to long-term health outcomes. In addition, the HPVMath ontology representation formalizes a common view of HPV prevalence, which can assist medical practitioners and generate awareness among the public. This paper explores and brings the circumstances of HPV into an ethical focus in an age characterized by worldwide environmental threats.
    Keywords: Cervical cancer; HPV; Mathematical model; Ontology.

  • Design and Developing a Photoplethysmographic Device Dedicated to the Evaluation of Representative Indexes in the Response to Vascular Filling Using Systolic Time Intervals   Order a copy of this article
    by Nasr Kaid Ali Moulhi, Mohammed Benabdellah, Amine Aissa Mokbil Ali 
    Abstract: In this study, we develop a human-machine interface for monitoring the cardiovascular-respiratory system through the evaluation of analogous indices obtained from a finger photoplethysmography pulse oximetry waveform. The interface consists of sensors, the electronics associated with these sensors, an acquisition card for communication with a local computer, and a graphical interface developed in the Visual Basic environment for signal tracing and data archiving. We evaluate representative indexes of the response to vascular filling using systolic time intervals (STIs), namely the pre-ejection period (PEP), the respiratory change in pre-ejection period (ΔPEP), the left ventricular ejection time (LVET) and the systolic time ratio (STR), given that STIs are highly correlated with fundamental cardiac functions. To achieve this goal, a data collection study was conducted using synchronized acquisitions of electrocardiogram (ECG), photoplethysmogram (PPG) and pneumotachogram (PTG) signals.
    Keywords: ECG; PPG; PTG; PEP; ΔPEP; LVET; STR; STIs; RS232; microcontroller; Visual Basic; vascular filling.

  • Integration of global and local features based on Hybrid Similarity Matching Scheme for Medical Image Retrieval System   Order a copy of this article
    by Ajitha Gladis 
    Abstract: Similarity measurement is a challenging task in content-based medical image retrieval (CBMIR) systems, and the matching scheme is designed to improve retrieval performance. However, conventional matching schemes have several major shortcomings that can extensively affect their application to medical image retrieval (MIR). To overcome these issues, this paper proposes a multi-level matching (MLM) method for MIR using hybrid feature similarity. Images are represented by multi-level features, at both the local and the global level. The Color and Edge Directivity Descriptor (CEDD) is used as a color and edge based descriptor, while Speeded-Up Robust Features (SURF) and the Local Binary Pattern (LBP) are used as local descriptors. The hybrid of global and local features yields enhanced retrieval accuracy, which is analyzed over collected image databases. In experiments, the proposed method achieves an accuracy of about 92%, higher than other methods.
    Keywords: CBIR; local features; global features; multi-level matching; hybrid; similarity; descriptor; CEDD; LBP; SURF.
    DOI: 10.1504/IJBET.2017.10012232
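The Local Binary Pattern descriptor named in the abstract above is computed per pixel by thresholding the 8 neighbours against the centre; the 3x3 patch and bit ordering below are illustrative (implementations differ in where bit 0 starts):

```python
def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch.
    Neighbours are read clockwise starting from the top-left corner."""
    centre = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r][c] >= centre:   # neighbour at least as bright -> bit set
            code |= 1 << bit
    return code

# Bright top row, dark elsewhere -> only the first three bits set:
example = lbp_code([[9, 9, 9], [0, 5, 0], [0, 0, 0]])
```

A histogram of these codes over an image region is the LBP texture feature that gets matched against the database descriptors.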
     
  • Automatic stenosis grading system for diagnosing coronary artery disease using coronary angiogram   Order a copy of this article
    by NANDHU KISHORE A.H., JAYANTHI VE. 
    Abstract: The coronary angiogram is considered the gold standard for diagnosing coronary artery disease. This paper proposes a system that describes the level of stenosis in coronary angiogram images using mathematical morphology and a thresholding technique. A novel method is introduced to determine the percentage of stenosis and its grade; based on the diagnostic results, myocardial infarction (MI) can be treated well in advance. A real-time clinical dataset consisting of 25 conventional coronary angiographies with 865 frames is used to evaluate the performance of the proposed system. The output of the proposed system was inspected by a cardiologist, who confirmed that the system produced excellent segmentation and automatic stenosis grading. The sensitivity, specificity, accuracy and precision of the system are 94.74%, 83.33%, 92% and 94.74% respectively, with an average computational time of 0.84 s. The kappa value also shows perfect agreement for stenosis grading.
    Keywords: coronary artery disease; coronary angiogram; coronary artery segmentation; stenosis detection; stenosis grading.
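Once the vessel is segmented, the stenosis percentage itself is a simple diameter ratio against a healthy reference segment; the grading cut-offs below are hypothetical, not the ones used in the paper:

```python
def stenosis_percent(min_diameter, reference_diameter):
    """Percentage diameter stenosis relative to the healthy reference."""
    return (1.0 - min_diameter / reference_diameter) * 100.0

def grade(pct):
    """A hypothetical three-level grading of the stenosis percentage."""
    if pct < 50.0:
        return "mild"
    if pct < 70.0:
        return "moderate"
    return "severe"

# A vessel narrowed from 3.0 mm to 1.5 mm is 50% stenosed:
pct = stenosis_percent(1.5, 3.0)
```

The segmentation step supplies the two diameters; everything after that is this arithmetic plus a clinically chosen grading table.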

  • Particle Swarm Optimization aided Weighted Averaging Fusion Strategy for CT and MRI Medical Images   Order a copy of this article
    by Madheswari Kanmani, Venkateswaran Narasimhan 
    Abstract: Multimodal medical image fusion is a technique that combines two or more images into a single output image in order to enhance the accuracy of clinical diagnosis. In this paper, a non-subsampled contourlet transform (NSCT) image fusion framework that combines CT and MRI images is proposed. The proposed method decomposes the source images into low and high frequency bands using NSCT, and the information across the bands is combined using a weighted averaging fusion rule. The weights are optimized by particle swarm optimization (PSO) with an objective function that jointly maximizes the entropy and minimizes the root mean square error to give improved image quality, which distinguishes the method from existing fusion approaches in the NSCT domain. The performance of the proposed framework is illustrated using five sets of CT and MRI images, and various performance metrics indicate that the proposed method is highly efficient and suitable for better decision making in medical applications.
    Keywords: Image fusion; CT image; MRI image; NSCT; PSO; Weighted average fusion strategy.
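The weighted-averaging rule and the entropy term of the PSO objective described above can be illustrated directly; the coefficient lists below stand in for NSCT sub-band coefficients and the weight is arbitrary:

```python
import math
from collections import Counter

def fuse(band_a, band_b, w):
    """Weighted average of two sub-bands, weight w on the first source."""
    return [w * a + (1.0 - w) * b for a, b in zip(band_a, band_b)]

def entropy(values):
    """Shannon entropy (bits) of the value histogram -- the quantity
    the PSO objective maximizes to preserve information content."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

fused = fuse([2.0, 4.0], [0.0, 0.0], w=0.5)
```

In the full framework, PSO searches over w (per band) to maximize entropy while minimizing RMSE against the sources; here both pieces are shown in isolation.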

  • Design of artificial pancreas based on the SMGC and self-tuning PI control in type-I diabetic patient   Order a copy of this article
    by Akshaya Kumar Patra, Pravat Kumar Rout 
    Abstract: Optimal closed-loop control of the blood glucose (BG) level has been a major focus for many years, with the goal of realizing an artificial self-regulating insulin device for Type-I Diabetes Mellitus (TIDM) patients. Controlled drug delivery systems with an appropriate controller are urgently needed, not only to regulate blood glucose but also for other chronic clinical disorders requiring continuous long-term medication. As a solution, a novel optimal self-tuning PI controller is proposed whose gains vary dynamically with the error signal. The controller is verified on a nonlinear model of the diabetic patient under the uncertainties arising in various physiological conditions and under a wide range of disturbances. A comparative analysis of the self-tuning PI controller's performance against the sliding mode Gaussian control (SMGC) and other optimal control techniques is carried out. The results clearly reveal the better performance of the proposed method in regulating the BG level within the normoglycaemic range (70-120 mg/dl) in terms of accuracy, robustness and handling of uncertainties.
    Keywords: type-I diabetes mellitus; insulin dose; artificial pancreas; micro-insulin dispenser; SMGC; self-tuning PI control.
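The idea of gains varying dynamically with the error signal can be sketched as a PI loop whose proportional gain is scaled by the error magnitude; the gain schedule, numbers and toy plant below are invented for illustration and are not the authors' tuned controller or patient model:

```python
def self_tuning_pi(error, integral, dt, kp0=0.05, ki=0.01, alpha=0.02):
    """One PI step with an error-dependent proportional gain:
    kp grows with |error| (a hypothetical schedule)."""
    kp = kp0 * (1.0 + alpha * abs(error))
    integral += error * dt
    u = kp * error + ki * integral
    return u, integral

def simulate(setpoint=95.0, bg=160.0, steps=200, dt=1.0):
    """Toy first-order glucose response to the control action."""
    integral = 0.0
    for _ in range(steps):
        u, integral = self_tuning_pi(setpoint - bg, integral, dt)
        bg += u * dt   # control action pushes BG towards the setpoint
    return bg
```

With this schedule the controller reacts aggressively to large hyperglycaemic errors and softens near the setpoint, which is the qualitative behaviour a self-tuning PI aims for.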

  • Optimized Denoising scheme via Opposition based Self-adaptive learning PSO algorithm for Wavelet Based ECG Signal Noise Reduction   Order a copy of this article
    by Vinu Sundararaj 
    Abstract: Among various biological signals, the electrocardiographic (ECG) signal is significant for diagnosing cardiac arrhythmia. Accurate analysis of a noisy ECG signal is a challenging task: for automated analysis, the noise present in the ECG signal must be removed for a correct diagnosis. Numerous investigators have reported different techniques for denoising the ECG signal in recent years. In this paper, an efficient scheme for denoising ECG signals is proposed based on a wavelet-based thresholding mechanism. The scheme applies opposition-based self-adaptive learning particle swarm optimization (OSLPSO) in a dual-tree complex wavelet packet framework, in which the OSLPSO is utilized for threshold optimization. Different abnormal and normal ECG signals from the MIT/BIH arrhythmia database are used to evaluate the approach, with white Gaussian noise artificially added at 5 dB, 10 dB and 15 dB. Simulation results illustrate that the proposed system performs well at various noise levels and obtains better visual quality compared with other methods.
    Keywords: Electrocardiogram; denoising; DTCWPT; Self-Adaptive Learning; Opposition learning; Particle swarm optimization; MIT/BIH arrhythmia; Thresholding.
    DOI: 10.1504/IJBET.2017.10012138
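The core shrink-and-reconstruct idea behind wavelet-threshold denoising can be shown with a single-level Haar transform and a fixed threshold. This is a deliberately minimal stand-in: the paper uses a dual-tree complex wavelet packet transform with an OSLPSO-optimized threshold, neither of which is reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform; len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(d, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def denoise(x, t):
    """Threshold the detail band and reconstruct."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft(d, t))
```

Because broadband noise spreads evenly over the detail band while a smooth signal concentrates in the approximation band, shrinking the details removes mostly noise.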
     
  • Optimal ECC Based Signcryption Algorithm for Secured Video Compression Process in H.264 Encoder   Order a copy of this article
    by S. Rajagopal, A. Shenbagavalli 
    Abstract: Video encryption is the combination of cryptographic methods and video technology, providing complete and demonstrable security of video data. For the purpose of protecting video sequences, we recommend a video compression procedure combined with encryption to provide a secured video compression framework. In this paper we suggest an ECC-based signcryption algorithm for a secured video compression process. First, the motion vectors are encrypted by applying an Elliptic Curve Cryptography (ECC) based signcryption algorithm. The suggested method uses ECC for the generation of the public and private keys. Compared with other encryption algorithms, ECC has particular advantages: a small key, greater security, increased speed, little storage space and low data-transfer requirements. The suggested method employs the Improved Artificial Bee Colony (IABC) algorithm to optimize the private key, and the optimally selected private key is then applied to encrypt the motion vectors. The security of the suggested method is examined against attacks such as man-in-the-middle (MiM), brute force and denial of service (DoS) attacks.
    Keywords: Video encryption; Video compression; signcryption; Elliptic Curve Cryptography; Improved artificial Bee Colony algorithm; Brute force; DOS attack.
    DOI: 10.1504/IJBET.2017.10011678
     
  • Automatic biometric verification algorithm based on the bifurcation points and crossovers of the retinal vasculature branches   Order a copy of this article
    by Talib Hichem Betaouaf, Etienne Decenciere, Abdelhafid Bessaid 
    Abstract: Biometric identification systems allow for the automatic recognition of individuals based on one or more biometric characteristics. In this paper, we propose an automatic identity verification algorithm based on the structure of the vascular network of the human retina. More precisely, the biometric template consists of the geometric coordinates of the bifurcation points and crossovers of the vascular network branches. The main goal of our work is to achieve an efficient system while minimizing the processing time and the size of the data handled. Therefore, this algorithm uses a novel combination of powerful feature-extraction techniques based on mathematical morphology, such as the watershed transformation for the segmentation of the retinal vasculature and the Hit-or-Miss transform for the detection of bifurcation points and crossovers. We detail each step of the method, from the acquisition and enhancement of retinal images to signature comparison through automatic registration. We test our algorithm on a retinal image database (DRIVE). Finally, we present and discuss the evaluation results of our algorithm and compare it with some methods from the literature.
    Keywords: Biometrics; biometric verification; retinal blood vessel; image segmentation; bifurcation points.
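The feature points the template is built from can be located on a one-pixel-wide vessel skeleton. The paper uses a Hit-or-Miss transform for this; the crossing-number count below is a simpler, equivalent way to state the idea on a thinned skeleton, shown here for illustration only.

```python
import numpy as np

# 8-neighbour offsets in circular order around a pixel
RING = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def crossing_number(skel, r, c):
    """Number of 0->1 transitions around pixel (r, c) of a binary skeleton:
    1 = branch ending, 2 = continuing vessel, 3 = bifurcation, 4 = crossover."""
    ring = [skel[r + dr, c + dc] for dr, dc in RING]
    return sum(ring[i] == 0 and ring[(i + 1) % 8] == 1 for i in range(8))

def feature_points(skel):
    """Coordinates of bifurcations and crossovers on a thinned vessel map."""
    pts = []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] and crossing_number(skel, r, c) >= 3:
                pts.append((r, c))
    return pts
```

The resulting (row, column) pairs are exactly the kind of geometric coordinates the abstract describes storing as the biometric template.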

  • Detection of Fovea Region in Retinal Images Using Optimization based Modified FCM and ARMD disease classification with SVM   Order a copy of this article
    by T. Vandarkuzhali, C.S. Ravichandran 
    Abstract: The motivation behind the current investigation is to design a superior recognition system for locating the fovea region in retinal images while avoiding the roadblocks encountered by present methods. The scheme comprises three specific processes: blood-vessel segmentation, optic-disc detection, and fovea detection with ARMD disease classification. In the initial stage, the retinal images are enhanced with the AHE approach and then segmented by an adaptive watershed technique. The next stage recognizes the optic disc by means of the MRG method. In the last stage, the fovea region is effectively spotted with the help of the OBMFCM technique. Along with fovea-region segmentation, classification of dry/wet ARMD is performed with an SVM classifier. The proposed technique is implemented on the Matlab 2014 platform, and its results are assessed and contrasted with those of parallel fovea-recognition approaches.
    Keywords: Optimization based modified Fuzzy C-Means (OBMFCM); Age Related Macula Degeneration (ARMD); Adaptive Histogram Equalization (AHE); Modified Region Growing (MRG); Support Vector Machine (SVM);.

  • Detection and Diagnosis of Dilated Cardiomyopathy from the Left Ventricular parameters in Echo-cardiogram sequences   Order a copy of this article
    by G.N. Balaji, T.S. Subashini, A. Suresh, M.S. Prashanth 
    Abstract: The heart has a complicated anatomy and is in constant movement. Cardiologists use echocardiograms to visualize the anatomy and its movement. It is difficult for the cardiologist to prognosticate or confirm diseases such as heart-muscle damage and valvular problems due to the limited information present in echocardiograms. In this paper a system is proposed which automatically segments the left ventricle from given echocardiogram video sequences using a combination of fuzzy C-means clustering and morphological operations, from which the left ventricular parameters and shape features are extracted. These features are then fed to linear discriminant analysis (LDA), K-nearest neighbor (K-NN) and Hopfield neural network (HNN) classifiers to determine whether the heart is normal or affected by DCM. With the LV parameters evaluated and shape features extracted, it was found that the HNN was able to model normal and abnormal hearts very well, with an accuracy of 88% compared to LDA and K-NN.
    Keywords: Echocardiogram; Left Ventricle (LV); Dilated Cardiomyopathy (DCM); Fuzzy C-Means clustering (FCM) and Morphological operations.
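The clustering core of the segmentation step can be sketched as a minimal fuzzy C-means in NumPy. This operates on generic 2-D points; the paper applies FCM to echocardiogram pixels and combines it with morphological operations, which are omitted here.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: returns (centers, membership matrix U).
    X is (n_samples, n_features); m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=(len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                    # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # standard FCM update
    return centers, U
```

Unlike hard k-means, every pixel keeps a graded membership in each cluster, which is what makes the subsequent morphological clean-up effective on fuzzy cavity boundaries.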

  • Detection of Epilepsy using Discrete Cosine Harmonic Wavelet Transform based features and Neural Network Classifier   Order a copy of this article
    by G.R. Kiranmayi, V. Udayashankara 
    Abstract: Epilepsy is a neurological disorder caused by sudden hyperactivity in certain parts of the brain. The electroencephalogram (EEG) is the commonly used, cost-effective modality for the detection of epilepsy. This paper presents a method to detect epilepsy using the Discrete Cosine Harmonic Wavelet Transform (DCHWT) and a neural network classifier. The DCHWT, a harmonic wavelet transform (HWT) based on the discrete cosine transform (DCT) that has been shown to be a spectral estimation technique with reduced bias, is used in this work. The proposed method involves decomposition of EEG signals into DCHWT subbands, extraction of features from the subbands and classification using an artificial neural network (ANN) classifier. The main focus of this study is the automatic detection of epilepsy from interictal EEG. This remains a challenge for researchers, as interictal EEG looks like normal EEG, which makes detection difficult. The proposed method gives classification accuracies of 93.33% to 100% for the various classes.
    Keywords: epilepsy; harmonic wavelet transform; HWT; discrete cosine harmonic wavelet transform; DCHWT; ictal EEG; interictal EEG; EEG subbands; neural network classifier.

  • 2D MRI Intermodal Hybrid Brain Image Fusion using Stationary Wavelet Transform   Order a copy of this article
    by Babu Gopal, Sivakumar Rajagopal 
    Abstract: Medical image fusion involves the combination of multimodal sensor images to obtain both anatomical and functional data, to be used by radiologists for disease diagnosis, monitoring and research. This paper provides a comparative analysis of multiple fusion techniques that can be used to obtain accurate information from intermodal MRI T1 and T2 images. The source images are initially decomposed using the Stationary Wavelet Transform (SWT) into approximation and detail components, and the approximation components are reconstructed by the Discrete Curvelet Transform (DCT); SWT and DCT are well suited to point and line discontinuities. This paper also provides a comparative study of the different types of image fusion techniques available for MRI image decomposition. The approximation and detail components are fused using different fusion rules, and the final fused image is obtained by inverse SWT. The fused image is used to localize abnormalities in brain images, leading to accurate identification of brain diseases: 95.7% for brain lesions, 97.3% for Alzheimer's disease and 98% for brain tumours. Various performance parameters are evaluated to compare the fusion techniques, and the method providing the best result is identified. The comparison is based on which method provides the fused image with the highest entropy, average pixel intensity, standard deviation, correlation coefficient and edge strength.
    Keywords: Inter-modal Image Fusion; MRI T1-T2; Stationary Wavelet Transform; Discrete Curvelet Transform; Principal Component Analysis.

  • Design of Wireless Contact-lens antenna for Intraocular Pressure monitoring   Order a copy of this article
    by Priya Lakshmipathy, Vijitha J, Alagappan M 
    Abstract: Intraocular pressure is an important aspect in the evaluation of patients at risk of glaucoma. Glaucoma is an ocular disorder that results in damage to the optic nerve, often associated with increased aqueous pressure in the eye. Wireless technology reduces discomfort and the risk of infection, and allows patients in remote places to be monitored by providing timely health information. In order to transmit the ocular pressure over a wireless medium, a wireless contact-lens antenna is designed. The contact-lens coupled-structure antenna uses a gap-coupled configuration to improve the reflection coefficient and minimize material density in comparison with conventional on-lens loop antennas. The return loss of the designed contact-lens antenna was -21 dB at 2.6 GHz, with a diameter ranging from 14 to 15 mm. The simulated return loss of the designed antenna was obtained using Advanced Design System.
    Keywords: Ocular pressure; reflection co-efficient; wireless technology; coupler antenna; glaucoma; conventional on-lens loop antenna; return loss; Advance Design System; aqueous pressure; ocular disorder.

  • Effect of repetitive Transcranial Magnetic Stimulation on motor function and spasticity in spastic cerebral palsy   Order a copy of this article
    by Meena Gupta, Bablu Lal Rajak, Dinesh Bhatia, Arun Mukherjee 
    Abstract: The aim is to study the effectiveness of repetitive transcranial magnetic stimulation (r-TMS) therapy in the recovery of motor disability, by normalizing muscle tone, in spastic cerebral palsy (SCP) patients. Twenty SCP participants were selected from UDAAN-for the disabled, Delhi, and were divided equally into two groups: a control group (CG) and an experimental group (EG). The ten participants in the CG (mean age 8.11 ± 4.09 years) were given physical therapy for 30 minutes daily for 20 days, and those in the EG (mean age 7.93 ± 4.85 years) were administered 5 Hz r-TMS for 15 minutes (1500 pulses) daily, followed by physical therapy of the same duration as provided to the CG. The universally accepted Modified Ashworth Scale (MAS) and Gross Motor Function Measure (GMFM) were used as outcome measures, with pre- and post-assessment completed in both study groups. The GMFM results showed an improvement in the motor function of the EG of 1.95%, compared with 0.55% in the CG. Additionally, the MAS scores of the EG showed a significant reduction in spasticity in the muscles of the lower extremity compared with the CG. Thus, our study demonstrates that r-TMS therapy followed by PT improved motor performance by decreasing spasticity in SCP patients within a limited number of sessions.
    Keywords: motor disability; spasticity; spastic cerebral palsy; physical therapy; Transcranial magnetic stimulation.

  • An optimized pixel-based classification approach for automatic white blood cells segmentation   Order a copy of this article
    by SETTOUTI Nesma, BECHAR Mohammed El Amine, CHIKH Mohammed Amine 
    Abstract: Pixel-based classification is a powerful process for image segmentation. In this process, the image is segmented into subsets by assigning a region label to each pixel; it is an important step towards pattern detection and recognition. In this paper, we are interested in the cooperation of pixel classification and region growing methods for the automatic recognition of white blood cells (WBC). Pixel-based classification is an automatic approach that classifies all pixels in the image but does not take into account the spatial information of the region of interest. Region growing methods, on the other hand, take the spatial repartition of the pixels into account, considering neighborhood relations; however, they have a major drawback: they need pixel groups called "points of interest" to initialize the growing process. We propose an optimized pixel-based classification through cooperation with a region growing strategy, performed in two phases. The first is a learning step with a characterization of each pixel of the image; the second is a region-growing step that classifies neighboring pixels starting from pixels of interest extracted by the ultimate erosion technique. This process shows that the cooperation yields nucleus and cytoplasm segmentations closer to what is expected by human experts (as given in the reference images).
    Keywords: Automatic white blood cell segmentation ; Region growing approach ; pixel-based classification ; mathematical morphology ; Random Forest.
    DOI: 10.1504/IJBET.2017.10013088
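The growing phase described above can be sketched as a breadth-first flood from a seed pixel. In the paper the seeds come from ultimate erosion and the acceptance test from a trained pixel classifier; here a hand-picked seed and a simple intensity tolerance stand in for both.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`: add 4-connected neighbours whose intensity
    is within `tol` of the seed intensity. Returns a boolean mask."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(float(img[rr, cc]) - ref) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask
```

Because only pixels connected to the seed are accepted, the result respects the spatial coherence that a purely pixel-wise classifier lacks.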
     
  • The analysis of foot loadings in high-level table tennis athletes during short topspin ball between forehand and backhand serve   Order a copy of this article
    by Yaodong Gu, Changxiao Yu, Shirui Shao 
    Abstract: The quality of the backswing has a close relationship with the forward swing: it can generate more power for the next phase and help athletes stay in an active stance. The purposes of this study are to help coaches improve their understanding of the backswing motion and to provide guidance for improving athletic performance in practice. Twelve high-level male table tennis athletes were selected, and their foot loadings during short topspin serves were measured with an Emed force plate. The anterior-posterior center of pressure (COP) displacement in the backhand serve was significantly shorter than in the forehand serve at the backward-end stage. Mean and peak pressures were higher under the big toe and lateral forefoot of the front foot in forehand than in backhand serves during the backswing. For these two regions, as well as the lateral midfoot of the front foot, contact areas were also larger for the forehand serve than for the backhand. Conversely, for backhand serves the COP velocity was much faster than for forehand serves during the backswing. Our results show that the forehand serve at backward-end involves a fuller preparation that can accumulate more power to increase racket speed in the forward swing. The forehand serve not only mainly loaded the lateral side of the front foot, but also showed larger contact areas there compared with the backhand at backswing-end. The results indicate that forehand serves of the short topspin ball are stronger and more stable than backhand serves.
    Keywords: Foot loading; pressure distribution; service stance style; COP velocity ratio.

  • A Locally Adaptive Edge Preserving Filter for Denoising of Low Dose CT using Multi-level Fuzzy Reasoning Concept   Order a copy of this article
    by Priyank Saxena, R. Sukesh Kumar 
    Abstract: To reduce radiation exposure, low-dose CT (LDCT) imaging has been widely used in modern medical practice. The fundamental difficulty of LDCT lies in the heavy noise pollution of the projection data, which leads to deterioration of image quality and diagnostic accuracy. In this study, a novel two-stage locally adaptive edge-preserving filter based on the multi-level fuzzy reasoning (LAEPMLFR) concept is proposed as an image-space denoising method for LDCT images. The first stage employs multi-level fuzzy reasoning on structured pixel regions to handle the uncertainty introduced into the local information by noise. The second stage employs a Gaussian filter to smooth both structured and non-structured pixel regions in order to retain the low-frequency information of the noisy image. Compared with traditional denoising methods, the proposed method demonstrates a noticeable improvement in noise reduction while maintaining the image contrast and edge details of LDCT images.
    Keywords: Multi-level fuzzy reasoning; Noise reduction; Bilateral filtering; Low dose CT; Edge detection; Image smoothing; Peak Signal to Noise Ratio; Image Quality Index; Gaussian filter.

  • Edge preserving de-noising method for efficient segmentation of cochlear nerve by magnetic resonance imaging   Order a copy of this article
    by Jeevakala Singarayan, A.Brintha Therese 
    Abstract: This article presents a denoising method to improve the visual quality, edge preservation, and segmentation of the cochlear nerve (CN) in magnetic resonance (MR) images. The method is based on a non-local means (NLM) filter combined with the stationary wavelet transform (SWT). Edge information is extracted from the residue of the NLM filter by processing it through cycle spinning (CS). Visual interpretation of the proposed approach shows that it not only preserves CN edges but also reduces the Gibbs phenomenon at those edges. The denoising ability of the proposed method is assessed using parameters such as root mean square error (RMSE), signal-to-noise ratio (SNR), image quality index (IQI) and feature similarity index (FSIM). The efficiency of the proposed method is further illustrated by segmenting the cochlear nerve of the inner ear with the region growing technique. Segmentation efficiency is evaluated by calculating the cross-sectional area (CSA) of the CN for the different denoising methods. The comparative results show a significant improvement in the edge preservation of the CN in MR images after denoising with the proposed technique.
    Keywords: Non-Local Means (NLM); Stationary Wavelet Transform (SWT); de-noising; Rician noise; cochlear nerve (CN); MR images; SNR.

  • Feature Based Classification and Segmentation of Mitral Regurgitation Echocardiography Images Quantification Using PISA Method   Order a copy of this article
    by Pinjari Abdul Khayum, R. Sudheer Babu 
    Abstract: Echocardiography is the widely accepted clinical modality for the evaluation of valvular regurgitation and gives significant knowledge of the severity of mitral regurgitation (MR). MR is a common heart disease which does not cause symptoms until its final phase. A technique is developed for jet-area segmentation and quantification, in numerical terms, for MR assessment. Prior to segmentation, the images are preprocessed and attributes are extracted from the records for the classification stage, in which a Support Vector Machine (SVM) classifier is developed to classify the echocardiogram images. The abnormal images are passed to the Modified Region Growing (MRG) segmentation method to segment the jet area of the MR. Quantification of the segmented jet area is carried out with the support of the Proximal Isovelocity Surface Area (PISA) method, based on various parameters such as blood flow rate, regurgitant fraction and EROA. Compared with an existing fuzzy-based PISA quantification method, the proposed work attained an accuracy rate of 99.05% in jet-area segmentation and quantification.
    Keywords: Echocardiogram; Mitral valve; Mitral Regurgitation; classification; segmentation and quantification.
    DOI: 10.1504/IJBET.2017.10012825
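The PISA quantities named in the abstract follow from the standard hemispheric-convergence formulas; the sketch below uses the textbook form, and the example values are illustrative, not the paper's measurements.

```python
import math

def pisa_quantification(r_cm, v_alias, v_peak, vti_cm):
    """Standard PISA quantification:
    flow  = 2*pi*r^2 * aliasing velocity   (ml/s, since cm^3/s)
    EROA  = flow / peak MR jet velocity    (cm^2)
    RVol  = EROA * MR jet VTI              (ml)"""
    flow = 2.0 * math.pi * r_cm ** 2 * v_alias
    eroa = flow / v_peak
    rvol = eroa * vti_cm
    return flow, eroa, rvol
```

For example, a 1.0 cm PISA radius at a 40 cm/s aliasing velocity with a 500 cm/s peak jet and 150 cm VTI yields an EROA of about 0.50 cm², in the severe-MR range.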
     
  • Multi-Objective Particle Swarm Optimization for mental task classification using Hybrid features and Hierarchical Neural Network Classifier   Order a copy of this article
    by MADHURI BAWANE 
    Abstract: Recognition of mental tasks using electroencephalograph (EEG) signals is of prime importance in man-machine interfaces and assistive technologies. The considerably low recognition rate of mental tasks is still an issue. This work combines power spectral density (PSD) features and lazy wavelet transform (LWT) coefficients to present a new approach to feature extraction from EEG signals. A simple but novel neural network classifier called the hierarchical neural network is proposed for task recognition, along with a novel methodology based on multi-objective particle swarm optimization (MOPSO) to select discriminative features and the number of hidden-layer nodes. The extracted features are presented to the hierarchical classifier to discriminate between left-hand movement, right-hand movement and a word-generation task. Features in the time-frequency domain are extracted using LWT, while those in the frequency domain are extracted using PSD; the hybrid features present complementary information about the task represented by the EEG. The features are applied to MOPSO to select the prominent features and decide the number of hidden nodes of the neural network classifier, and these features train the hierarchical neural network with the number of hidden-layer neurons decided by the MOPSO. Effective selection of the features and of the number of hidden-layer nodes of the hierarchical classifier improves the classification accuracy. The results are verified on a standard brain-computer interface (BCI) database and our own B-Alert experimental system database. The benchmarking indicates that the proposed work outperforms the state of the art.
    Keywords: Mental task classification; MOPSO; LWT; Hybrid features; Hierarchical Classifier.

  • Energy efficient and low noise OTA with improved NEF for Neural Recording Applications   Order a copy of this article
    by Bellamkonda Saidulu, Arun Manohran 
    Abstract: Analog front-end (AFE) design plays a prominent role in determining the overall performance of neural recording systems. In this paper, we present a power-efficient, low-noise operational transconductance amplifier (OTA), the main power-consuming block in a multichannel neural recording system with a shared structure. The inversion coefficient (IC) methodology is used to size the transistors. This work focuses on neural recording applications, which require >40 dB gain and up to 7.2 kHz bandwidth. The proposed architecture, referred to as a partial-sharing operational transconductance amplifier with source degeneration, results in reduced noise and hence an improved NEF. Simulations carried out in UMC 0.18 µm technology show an improved gain of 66 dB, a phase margin of 94°, input-referred voltage noise of 0.6 µV/√Hz and power consumption of 2.15 µW with a 1.8 V supply.
    Keywords: Neural Amplifier; Telescopic Cascode; Partial OTA Sharing Structure; Self-cascode Composite Current Mirror; Source Degeneration; NEF.
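The figure of merit the abstract optimizes, NEF, compares an amplifier's integrated input-referred noise and total supply current against an ideal single BJT (the Steyaert-Sansen definition). The example numbers below are illustrative assumptions, not the paper's measurements.

```python
import math

def nef(vn_rms, i_total, bw_hz, T=300.0):
    """Noise efficiency factor: NEF = Vni,rms * sqrt(2*Itot / (pi*UT*4kT*BW)),
    with UT = kT/q the thermal voltage (~25.9 mV at 300 K)."""
    k = 1.380649e-23                  # Boltzmann constant, J/K
    q = 1.602176634e-19               # electron charge, C
    ut = k * T / q
    return vn_rms * math.sqrt(2.0 * i_total / (math.pi * ut * 4.0 * k * T * bw_hz))
```

With, say, 3 µV rms integrated noise, 2 µA total current and a 7.2 kHz band, NEF comes out near 2, which is the order of magnitude reported for good neural amplifiers; note that doubling the supply current at fixed noise worsens NEF by √2.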

  • Comparison of Missing tooth and Dental work detection using Dental radiographs in Human Identification   Order a copy of this article
    by Jaffino George Peter, Banumathi A, Ulaganathan Gurunathan, Prabin Jose J 
    Abstract: Victim identification plays a vital role in identifying a person after major disasters, in critical situations where all other biometric information has been lost and there is little chance of identifying the person. The major issues with dental radiographs, namely dental work and missing or broken teeth, are addressed in this paper. The algorithm works by comparing ante-mortem (AM) and post-mortem (PM) dental images. This research work mainly focuses on the detection of dental work and broken or missing teeth, and a comparison of an active contour model with a mathematical-model-based shape extraction for dental radiographic images is proposed. In this work, a new mathematical tooth approximation is presented and compared with the Online Region-based Active Contour Model (ORACM) used for shape extraction. Similarity- and distance-based techniques give better matching between the AM and PM dental radiographs. The exact prediction of each method has been calculated and validated with suitable performance measures. The accuracy achieved for the contour method is 94% and for the graph partition method 96%, and the hit rate of the method is plotted with a Cumulative Matching Characteristic (CMC) curve.
    Keywords: Victim identification; dental work; missing tooth; Active Contour Model; Isoperimetric graph partitioning; CMC curve.
    DOI: 10.1504/IJBET.2017.10012600
     
  • Design and prototyping of a novel low stiffness cementless hip stem   Order a copy of this article
    by Ibrahim Eldesouky, Hassan Elhofy 
    Abstract: Present biocompatible materials suitable for load-bearing implants have high stiffness compared with natural human bone. This mechanical mismatch causes a condition known as stress shielding. The current trend for overcoming this problem is to use porous scaffolds instead of solid implants to reduce implant stiffness. Owing to the widespread availability of metal additive manufacturing machines, porous orthopaedic implants can be mass produced. In this regard, a porous scaffold is incorporated in the design of a low-stiffness hip stem. A 3D finite element analysis is performed to evaluate the performance of the new stem for a patient descending stairs. The results of the numerical study show that the proposed design improves the stress and strain distributions in the proximal region, which reduces the stress shielding effect. Finally, a prototype of the proposed design is produced using a 3D printer as a proof of concept.
    Keywords: 3D printing; additive manufacturing; auxetic scaffold; low stiffness; stress shielding.

  • Image Analysis for Brain Tumour Detection using GA-SVM with Auto-Report Generation Technique   Order a copy of this article
    by Nilesh Bhaskarrao Bahadure, Arun Kumar Ray, Har Pal Thethi 
    Abstract: In this study, we present image analysis for brain tumour segmentation and detection based on the Berkeley wavelet transformation, enabled by a genetic algorithm and a support vector machine. The proposed system uses a double classification analysis to conclude the tumour type: the decision on whether the tumour is benign or malignant is made by the classifier on the basis of the extracted features and the area of the tumour, and the improvement in classifier accuracy through this double decision-making system has been investigated. The proposed system also provides an auto-report generation technique using a user-friendly graphical user interface in Matlab. It is the first study of its kind to offer auto-report generation, intended to give radiologists and clinical supervisors a quicker and improved diagnostic analysis. The experimental results of the proposed technique have been evaluated and validated for performance and quality on magnetic resonance (MR) medical images in terms of accuracy, sensitivity, specificity and Dice similarity index coefficient. The experiments achieved 97.77% accuracy, 98.98% sensitivity, 94.44% specificity and an average Dice similarity index coefficient of 0.9849, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues in MR images. The results were validated by extracting 89 features and selecting the relevant features with a genetic algorithm optimized by the support vector machine. The simulation results show significant improvements in segmentation score and classification accuracy in comparison with state-of-the-art techniques.
    Keywords: Berkeley Wavelet Transformation; Feature Extraction; Genetic Algorithm; Magnetic Resonance Imaging (MRI); Support Vector Machine.
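The Dice similarity index used to validate the segmentation is simple to state: twice the overlap of the two masks divided by their total size.

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two boolean segmentation masks:
    2*|A intersect B| / (|A| + |B|); 1.0 means perfect agreement."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A score of 0.9849, as reported in the abstract, means the automatic and reference tumour masks overlap almost completely.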

  • Evaluation of endothelial response to reactive hyperemia in peripheral arteries using a physiological model   Order a copy of this article
    by Mohammad Habib Parsafar, Edmond Zahedi, Bijan Vosoughi Vahdat 
    Abstract: A common approach to the non-invasive evaluation of endothelial function, a good predictor of cardiovascular events, is the measurement of brachial artery diameter changes in flow-mediated dilation (FMD) during reactive hyperemia using ultrasound imaging. However, this method is both costly and operator-dependent, limiting its application to research settings. In this study, an attempt is made toward a model-based evaluation of the endothelial response to reactive hyperemia. To this end, a physiological model between the normalized central blood pressure and the finger photoplethysmogram (FPPG) is proposed. A genetic algorithm is utilized to estimate the model's parameters in thirty subjects grouped as normal BP (N=10), high BP (N=10) and elderly (N=10). The change in the beat-to-beat fitness between the model output and the measured FPPG (BB_fit index) during the cuff-release interval is well described by first-order dynamics. Results show that the time constant of this first-order system is significantly greater for normal BP compared with high BP (p-value = 0.004) and elderly subjects (p-value = 0.01). Indeed, the endothelial response to reactive hyperemia is more pronounced in normal-BP and young subjects than in high-BP and elderly subjects, delaying the return of the vasculature to the baseline state. Our findings suggest that the proposed model can be utilized in physiological model-based studies of cardiovascular health, eventually resulting in a reliable index for vascular characterization using the conventional FMD test.
    Keywords: flow-mediated dilation; photoplethysmography; endothelial function; cardiovascular modeling; viscoelasticity; tube-load model.

  • Automated ECG beat classification using DWT and Hilbert transform based PCA-SVM classifier   Order a copy of this article
    by Santanu Sahoo, Monalisa Mohanty, Sukanta Sabut 
    Abstract: The analysis of electrocardiogram (ECG) signals provides valuable information for the automatic recognition of arrhythmia conditions. The objective of this work is to classify five types of arrhythmia beat using wavelet- and Hilbert-transform-based feature extraction techniques. In pre-processing, the wavelet transform is used to remove noise interference from the recorded signal, and the Hilbert transform method is applied to identify the precise R-peaks. A combination of wavelet, temporal and morphological (heartbeat interval) features is extracted from the processed signal for classification. Principal component analysis (PCA) is used to select the informative features, which are fed to a support vector machine (SVM) classifier to classify arrhythmia beats automatically. The PCA-SVM based classifier achieved an average accuracy, sensitivity and specificity of 98.50%, 95.68% and 99.18%, respectively, with a cubic SVM for classifying the five types of ECG beat at fold eight of a ten-fold cross-validation. The effectiveness of our method compares favourably with published results; the proposed method may therefore be used efficiently in ECG analysis.
    Keywords: Electrocardiogram; Wavelet; Hilbert transform; support vector machine; principal component analysis; arrhythmia.
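R-peak localization via the Hilbert transform, as used in the pre-processing stage above, typically thresholds the envelope of the analytic signal of the differentiated ECG. A hedged sketch on a toy ECG surrogate (the Gaussian "R waves", sampling rate and threshold are assumptions for illustration, not the paper's pipeline):

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 360  # Hz, MIT-BIH-style sampling rate (assumption)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Toy ECG surrogate: narrow Gaussian "R waves" once per second plus noise
ecg = sum(np.exp(-((t - k) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 10))
ecg = ecg + 0.005 * rng.standard_normal(t.size)

# Envelope of the analytic signal of the differenced ECG emphasises QRS energy
envelope = np.abs(hilbert(np.diff(ecg, prepend=ecg[0])))
peaks, _ = find_peaks(envelope, height=0.5 * envelope.max(),
                      distance=int(0.4 * fs))  # refractory-period constraint
print(len(peaks))  # 9 beats detected
```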

  • Impact-induced traumatic brain injury: Effect of human head model on tissue responses of the brain   Order a copy of this article
    by Hesam Moghaddam, Asghar Rezaei, Ghodrat Karami, Mariusz Ziejewski 
    Abstract: The objective of this research is twofold: first, to understand the role of the finite element (FE) head model in predicting tissue responses of the brain, and second, to investigate the fidelity of the pressure response in validating FE head models. Two validated FE head models are impacted in two directions under two impact severities, and their tissue responses are compared. ICP peak values are less sensitive to the head model and brain material. Maximum ICPs occur on the outer surface, vanishing linearly toward the center of the brain. It is concluded that while different head models may simply reproduce the ICP variations due to impact, shear stress is affected by the head model, impact condition and brain material.
    Keywords: Intracranial pressure (ICP); shear stress; injury mechanism; finite element head model; brain injury; reproducibility.

  • Selection of Surface Electrodes for Electrogastrography and Analysis of Normal and Abnormal Electrogastrograms using Hjorth Information   Order a copy of this article
    by Paramasivam Alagumariappan, Kamalanand Krishnamurthy, Ponnuswamy Mannar Jawahar 
    Abstract: Electrogastrogram (EGG) signals, recorded non-invasively using surface electrodes, represent the electrical activity of the stomach muscles and are used to diagnose several digestive abnormalities. The surface electrodes play a significant role in the acquisition of EGG signals from the human digestive system. In this work, an attempt has been made to demonstrate the role of the contact area of surface electrodes in the efficient measurement of EGG signals. Two surface electrodes with contact diameters of 16 mm and 19 mm were adopted for the acquisition of EGG signals. Further, the Hjorth parameters of the EGG signals acquired from normal subjects and from abnormal cases suffering from diarrhea, vomiting and stomach ulcer were analyzed. Results demonstrate that the activity, mobility and complexity of the EGG signals increase with increasing contact area of the surface electrodes. Further, a significant variation in the Hjorth parameters is observed between normal and abnormal cases.
    Keywords: Surface electrodes; contact area; electrogastrogram; activity; mobility; complexity; Hjorth parameters; Information measures.
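The Hjorth descriptors analyzed above have standard variance-based definitions: activity is the signal variance, mobility the ratio of derivative-to-signal standard deviations, and complexity the mobility of the derivative over the mobility of the signal. A minimal sketch (the sine test signal is illustrative):

```python
import numpy as np

def hjorth(x):
    """Hjorth descriptors of a 1-D signal: activity, mobility, complexity."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# A pure sine has complexity ~1: its derivative is the same waveform rescaled
t = np.linspace(0, 1, 1000, endpoint=False)
act, mob, comp = hjorth(np.sin(2 * np.pi * 5 * t))
print(round(comp, 2))  # ≈ 1.0
```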

  • Plasma cell identification based on evidential segmentation and supervised learning   Order a copy of this article
    by Ismahan Baghli, Mourtada Benazzouz, Mohamed Amine Chikh 
    Abstract: Myeloma is among the most common types of cancer; it is characterized by the proliferation of plasma cells, a kind of white blood cell (WBC). Early diagnosis of the disease can improve the patient's survival rate. Manual diagnosis requires clinicians to visually examine microscopic bone marrow images for any signs of cell proliferation. This step is often laborious and can be highly subjective, depending on the clinician's expertise. An automatic system based on WBC identification and counting provides more accurate results than the manual method. Such a system comprises three major steps: cell segmentation, cell characterization and cell classification. In the proposed system, microscopic bone marrow images are segmented by combining the watershed transform and evidence theory; the segmented cells are characterized with shape and colour-texture features, and then classified as plasma cells or non-plasma cells with three supervised classifiers: support vector machines, k-nearest neighbour and decision tree. Experimental results show that plasma cell recognition with the k-nearest neighbour classifier achieved a 97% correct rate with 100% specificity.
    Keywords: Myeloma; Plasma cell; Bone marrow images; Segmentation; Evidence theory; Watershed; Characterization; Shape; Colour texture; Classification.

  • Engineering Approaches for ECG Artifact Removal from EEG: A Review   Order a copy of this article
    by Chinmayee Dora, Pradyut Kumar Biswal 
    Abstract: Electroencephalogram (EEG) signals, obtained by recording brain waves, are used to analyze health problems related to neurology and clinical neurophysiology. The signal is often contaminated by a range of physiological and non-physiological artifacts, which can lead to misinterpretation in EEG signal analysis. Hence, artifact removal is one of the preprocessing steps required for clinical usefulness of the EEG signal. One physiological artifact, the electrocardiogram (ECG), can contaminate the EEG and affect the clinical analysis and diagnosis of brain health in various ways. This paper presents a review of the engineering approaches adopted to date for ECG artifact identification and removal from contaminated EEG signals. For each method, the technical approach, computational cost, input requirements and achieved results are discussed, along with a feasibility study for real-time implementation. An analysis of these methods based on their performance is also reported.
    Keywords: EEG; ECG; Artifacts; ICA; Wavelet; EMD; EAS; ANC; Autoregression; ANFIS; TVD; SVM.

  • Enhanced Cache Sharing through Cooperative Data Cache Approach in MANET   Order a copy of this article
    by Lilly Sheeba S., Yogesh P 
    Abstract: In a Mobile Adhoc NETwork (MANET) under normal cache sharing scenarios, when data is transmitted from source to destination, all the nodes along the path store the information in their cache layer before it reaches the destination. This may result in increased node overhead, increased cache memory utilization and very high end-to-end delay. In this paper, we propose an Enhanced Cache Sharing through Cooperative Data Cache (ECSCDC) approach for MANETs. During the transmission of the desired data from the data centre back to the request originator, the data packets are cached by the intermediate caching nodes only if required, using the asymmetric cooperative cache approach. The caching nodes that retain the data in their cache for future retrieval are selected based on the scaled power community index. Simulation results show that the proposed technique reduces the communication overhead, access latency and average traffic ratio near the data centre while increasing the cache hit ratio.
    Keywords: Mobile ad hoc network; caching; cache sharing; cache replacement.

  • Quality Function Deployment Model Using Fuzzy Programming with Optimization Technique for Customer Satisfaction   Order a copy of this article
    by Mago Stalany V., Sudhahar C. 
    Abstract: Quality Function Deployment (QFD) is a customer-driven quality management and product development method for achieving higher customer satisfaction. This paper examines the implementation of QFD in a fuzzy environment and develops corresponding measures to handle fuzzy data. The customer satisfaction parameters considered in the fuzzy QFD (FQFD) study are comfort, refund and safety, evaluated using a Fuzzy Logic Controller (FLC) with an optimization method; particle swarm optimization (PSO) is employed to improve the accuracy of the FLC in the QFD procedure. For crisp data generation, different relationship functions are applied to the relative preference relation on fuzzy data, so it is not necessary to compare two fuzzy numbers to obtain standard weights in FQFD. The fuzzy results thus identify the priority level of customer satisfaction and the maximum accuracy level of the FQFD procedure.
    Keywords: Fuzzy quality function deployment; Quality Function Deployment; Customer requirements; design quality; Relative preference relation.

  • Concordance between serum and transcutaneous bilirubin levels with the Bilispect   Order a copy of this article
    by Adriana Montealegre, Nathalie Charpak, Zandra Grosso, Yaris Vargas, Julieta Villegas 
    Abstract: Screening and follow-up of neonatal hyperbilirubinemia have traditionally relied on serum bilirubin levels. This method is invasive and exposes patients to anaemia. A transcutaneous measurement, the Bilispect
    Keywords: Bilirubin; jaundice; neonatal; correlation study.

  • Influence of hip geometry to intracapsular fractures in Sri Lankan women: prediction of country specific values   Order a copy of this article
    by Shanika Arachchi, Narendra Pinto 
    Abstract: Falls are very common in daily life, and the hip is highly vulnerable during a fall. The trochanter can be compressed during a side fall, resulting in either an intracapsular or an extracapsular fracture. The relationship of bone geometry to fracture risk can be analyzed both as a determinant of the mechanical resistance of the bone and as a promising fracture prediction tool. Intracapsular fractures depend strongly on hip geometry compared to extracapsular fractures. This study aims to determine the influence of hip geometry on intracapsular fractures among Sri Lankan women. The HAL, NSA, FNW and moment arm length of intracapsular fracture patients were compared with a normal group. Concurrently, the moment applied to the proximal femur during a sideways fall was computed and compared with the normal group. We observed that the fractured group has greater NSA, HAL and FNW than the normal group. Furthermore, females with intracapsular fractures have a longer moment arm of the force in a sideways fall, resulting in a greater load on the femoral neck compared to the normal group.
    Keywords: falls; hip fractures; hip geometry; Neck Shaft Angle; Femoral Neck Width.

  • FLUID STRUCTURE INTERACTION STUDY ON STRAIGHT AND UNDULATED HOLLOW FIBER HEMODIALYSER MEMBRANES   Order a copy of this article
    by Sangeetha M S, Kandaswamy A 
    Abstract: In hemodialysis therapy, the dialyser is subjected to blood flow continuously for several hours and is also reused; the stress experienced by the fibers owing to blood flow is of utmost importance because it reflects the mechanical stability of the membrane. It is tedious to study the stress experienced by an individual fiber in real time; computer-aided techniques make it possible to gain better insight into the load-bearing capacity of the membrane. A finite-element strategy is implemented to study the effect of flow-induced stress in the hemodialyser membrane. A 3D model of the membrane was developed in straight and undulated (crimped) fiber orientations. A fluid-structure interaction study was conducted to analyse the stress distribution due to varying blood flow. It is observed that in both fiber orientations the stress varies inversely with the blood flow rate. The effects of varying the fiber length, wall thickness and crimp frequency are also studied. The analysis shows that crimped fibers experience less stress than straight fibers. Such analysis helps predict and evaluate the performance of the hemodialyser membrane.
    Keywords: finite-element strategy; hemodialyser membrane; crimping; fluid structure interaction; computer aided techniques.

  • A NOVEL CLASSIFICATION APPROACH TO DETECT THE PRESENCE OF FETAL CARDIAC ANOMALY FROM FETAL ELECTROCARDIOGRAM   Order a copy of this article
    by Anisha M, Kumar S.S, Benisha M 
    Abstract: Fetal cardiac anomaly interpretation from the Fetal Electrocardiogram (FECG) is a challenging effort. Fetal cardiac activity can be assessed by scrutinizing the FECG, because clinically crucial features are hidden in the amplitudes and waveform durations of the FECG and in the Fetal Heart Rate (FHR). These features are vital in fetal cardiac anomaly interpretation. Hence, an attempt is made here to detect the presence of fetal cardiac anomaly using a Support Vector Machine (SVM) classifier with a polynomial kernel, based on patterns extracted from the FHR, the frequency domain of the FECG signal, fetal cardiac time intervals and FECG morphology. Performance evaluation is done on real FECG signals with different combinations of the feature set, and the obtained results are compared. The SVM showed good performance, with 92% classification accuracy when all the features are fed to the classifier. The results show that the proposed approach holds considerable promise for early fetal cardiac anomaly detection from the FECG.
    Keywords: Fetal Electrocardiogram; Fetal Heart Rate; SVM; fetal cardiac anomaly; fetal cardiac activity.

  • A Multimodal Biometric Approach for the Recognition of Finger Print, Palm Print and Hand Vein using Fuzzy Vault   Order a copy of this article
    by R. Vinothkanna, Amitabh Wahi 
    Abstract: For security reasons, person identification by means of physiological features has become a primary concern; a biometric person recognition system decides who the user is. In this paper, multimodal biometrics is used for person identification with the help of three physiological features: fingerprint, palm print and hand vein. Initially, in the pre-processing stage, unwanted portions, noise content and blur effects are removed from the input fingerprint, palm print and hand vein images. The features of the three modalities are then extracted: fingerprint features directly from the pre-processed image, and palm print and hand vein features using maximum curvature points in the image cross-sectional profile. Using chaff points and all the extracted feature points, a combined feature vector is obtained, and secret key points are added to the combined feature vector to generate the fuzzy vault. Finally, in the recognition stage, the test person's combined vector is compared with the fuzzy vault database. If the combined vector matches the fuzzy vault, authentication is granted and the secret key is generated to confirm the person; otherwise, authentication is denied. The corresponding fingerprint, hand vein and palm print images can then be obtained.
    Keywords: Multimodal biometric; Maximum curvature points; Cross-sectional profile; Chaff points; Fuzzy Vault.

  • Estimation of a point along overlapping Cervical Cell Nuclei in Pap smear image using Color Space Conversion   Order a copy of this article
    by Deepa T.P., A. Nagaraja Rao 
    Abstract: The identification of normal and abnormal cells is one of the most challenging tasks for a computer-assisted Pap smear analysis system. It is even more difficult when cells overlap, as abnormal cells hidden below normal cells are less visible. Hence, there is a need for an algorithm that segments the cells in clusters formed by overlapping cells, which can be achieved using image processing techniques. The complexity of the problem depends on whether only the cytoplasm of two cells overlaps, only the nuclei overlap with disjoint cytoplasm, or both cytoplasm and nuclei overlap. A Pap smear sample may also contain a mixture of cells with disjoint and overlapping cytoplasm and nuclei. Nuclei segmentation yields the cell count, one of the important features in Pap smear analysis, so a method is needed that can simultaneously segment disjoint and overlapping nuclei. For overlapping nuclei, accurately identifying the point of overlap is a significant step and plays an important role in segmenting the overlapped cells. This paper discusses such a method, which segments disjoint nuclei and identifies the point of intersection, called the concavity point, in clusters of cells where only the nuclei overlap.
    Keywords: Papanicolaou Smear; Overlapping; Morphological and Microscopic Findings; cell nuclei.

  • Multiobjective Pareto optimization of a pharmaceutical product formulation using radial basis function network and nondominated sorting differential evolution   Order a copy of this article
    by Satyaeswari Jujjavarapu, Ch. Venkateswarlu 
    Abstract: Purpose: In a pharmaceutical formulation involving several composition factors and responses, optimal formulation requires the best configuration of formulation variables that satisfies the multiple, conflicting response characteristics. This work develops a novel multiobjective optimization strategy by integrating an evolutionary optimization algorithm with an artificial intelligence model, and evaluates it for the optimal formulation of a pharmaceutical product. Methods: A multiobjective Pareto optimization strategy is developed by combining a radial basis function network (RBFN) with non-dominated sorting differential evolution (NSDE) and applied to the optimal formulation of a trapidil product involving conflicting response characteristics. Results: RBFN models are developed using spherical central composite design data of trapidil formulation variables representing the amounts of microcrystalline cellulose, hydroxypropyl methylcellulose and compression pressure, and the corresponding response characteristic data of release order and rate constant. The RBFN models are combined with NSDE and Pareto optimal solutions are generated by augmenting it with Na
    Keywords: Pharmaceutical formulation; Multiple regression model; Response surface method; Radial basis function network; Differential evolution; Multiobjective optimization.

  • Implementation of Circular Hough transform on MRI Images for Eye Globe Volume Estimation   Order a copy of this article
    by Tengku Ahmad Iskandar Tengku Alang, Tian Swee Tan, Azhany Yaakub 
    Abstract: Eye globe volume estimation has gained attention in both the medical and biomedical engineering fields. However, most methods use manual analysis, which is tedious and prone to errors due to inter- and intra-operator variability. In the present study, we estimated the eye globe volume in MRI images of normal eye globes using the Circular Hough transform (CHT) algorithm. To test the performance of the proposed method, 24 T1-weighted magnetic resonance images, from 14 males and 10 females with normal eye globe condition, were randomly selected from the database. The mean (
    Keywords: Circular Hough transform (CHT); Magnetic Resonance Imaging (MRI); MRI images; eye globe detection; T1-weighted.
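The circular Hough transform used above votes, for each edge pixel, along a circle of candidate centres and takes the accumulator maximum. A minimal single-radius accumulator in NumPy, run on a synthetic edge map (the image size, centre and radius are illustrative; real use on MRI would scan a range of radii over a detected edge map):

```python
import numpy as np

def hough_circle(edges, radius):
    """Accumulate centre votes for circles of one fixed radius (minimal CHT)."""
    acc = np.zeros(edges.shape, dtype=float)
    ys, xs = np.nonzero(edges)
    for theta in np.linspace(0, 2 * np.pi, 100, endpoint=False):
        # Each edge point votes for the centre lying `radius` away at angle theta
        a = np.round(ys - radius * np.sin(theta)).astype(int)
        b = np.round(xs - radius * np.cos(theta)).astype(int)
        ok = (a >= 0) & (a < edges.shape[0]) & (b >= 0) & (b < edges.shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)
    return acc

# Synthetic edge map: a circle of radius 20 centred at (row 50, col 60)
edges = np.zeros((100, 120), dtype=bool)
phi = np.linspace(0, 2 * np.pi, 200)
edges[np.round(50 + 20 * np.sin(phi)).astype(int),
      np.round(60 + 20 * np.cos(phi)).astype(int)] = True

acc = hough_circle(edges, radius=20)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # centre recovered near (50, 60)
```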

  • Electroencephalogram (EEG) Signal Quality Enhancement by Total Variation Denoising Using Non-Convex Regularizer   Order a copy of this article
    by PADMESH TRIPATHI 
    Abstract: Medical practitioners have great interest in obtaining a denoised signal before analysing it. EEG is widely used in detecting several neurological diseases such as epilepsy, narcolepsy, dementia, sleep apnea syndrome, Alzheimer's disease, insomnia, parasomnia, Creutzfeldt-Jakob disease (CJD) and schizophrenia. During EEG recording, a great deal of background noise and other physiological artefacts are present, so the data is contaminated. Therefore, to analyse the EEG properly, it must first be denoised. Total variation denoising is expressed as an optimization problem, whose solution is obtained by using a non-convex penalty (regularizer) in the total variation denoising. In this article, the non-convex penalty is used to denoise the EEG signal, and the result is compared with wavelet methods. Signal-to-noise ratio (SNR) and root mean square error are computed to measure the performance of the method. The approach is observed to denoise the EEG signal well, enhancing its quality.
    Keywords: Electroencephalogram; wavelet; artefact; denoising; regularizer; convex optimization; epilepsy; tumors; empirical mode decomposition; principal component analysis; total variation.
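The abstract poses total variation denoising as an optimization problem with a non-convex penalty. As a baseline, the standard convex 1-D TV problem, min_x 0.5*||y - x||^2 + lam * sum(|x[i+1] - x[i]|), can be solved with the iterative-clipping dual method sketched below (the step signal and lambda are illustrative; the paper's non-convex regularizer would modify the penalty and clipping rule):

```python
import numpy as np

def tv_denoise(y, lam, n_iter=500):
    """1-D total-variation denoising via iterative clipping (convex TV)."""
    alpha = 4.0  # >= max eigenvalue of D D^T for the first-difference operator
    z = np.zeros(len(y) - 1)  # dual variable, one entry per difference
    x = y.copy()
    for _ in range(n_iter):
        # x = y - D^T z, where (Dx)[i] = x[i+1] - x[i]
        x = y - np.concatenate(([-z[0]], -np.diff(z), [z[-1]]))
        # Dual ascent step, clipped to the feasible box [-lam/2, lam/2]
        z = np.clip(z + np.diff(x) / alpha, -lam / 2, lam / 2)
    return x

# Noisy step signal: TV denoising recovers the piecewise-constant shape
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(100), np.ones(100)])
noisy = clean + 0.1 * rng.standard_normal(200)
denoised = tv_denoise(noisy, lam=2.0)
```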

  • IDENTIFYING THE ANOMALY IN LV WALL MOTION USING EIGEN SPACE   Order a copy of this article
    by Wiselin Jiji 
    Abstract: In this paper, we investigate left ventricular (LV) wall motion abnormalities using an Eigen LV space. We employ three phases of operation to identify LV motion abnormalities efficiently. In the first phase, an LV border detection technique is used to detect the LV area. In the second phase, the Eigen LV spaces of six abnormalities are converged into the search space. In the third phase, the query is projected onto this search space, leading to a match with the closest disease. Results assessed with a receiver operating characteristic (ROC) curve show that the proposed architecture contributes strongly to computer-aided diagnosis. Experiments were made on a set of 20 abnormal and 20 normal cases. We trained with 8 normal and 8 abnormal cases and obtained an accuracy of 88.8% for the proposed work, versus 75.81% and 79% for earlier works. Our empirical evaluation shows superior diagnostic performance compared with other recent works.
    Keywords: Eigen Space; LV border detection; indexing.

  • Recent advances on Ankle Foot Orthosis for Gait Rehabilitation: A Review   Order a copy of this article
    by Jitumani Sarma, Nitin Sahai, Dinesh Bhatia 
    Abstract: Since the early 1980s, hydraulic and pneumatic devices have been used to explore orthotic methods for the lower limb. Over the past decades, significant progress has been made in rehabilitation robotics for assistive orthotic devices for the lower limb extremities. The aim of this review article is to present a detailed insight into the development of controlled Ankle Foot Orthotic (AFO) devices for enhancing the functionality of people disabled by injury to the lower limb or by neuromuscular disorders such as multiple sclerosis and spinal muscular atrophy. Different approaches to the design, actuation and control strategies of passive and active AFOs are analyzed in this article with respect to gait rehabilitation. In currently commercialized ankle foot orthotic devices for the lower limb, overcoming the weakness and instability produced by drop foot while following a natural gait remains a challenge. This paper also focuses on the impact of active control of AFO devices, mainly to enhance the functionality of the lower limb and reduce deformities. Researchers have put enormous effort into the modeling, simulation and control of such devices, mainly for gait rehabilitation, with kinematic and dynamic analysis.
    Keywords: Foot drop; Ankle Foot Orthosis; Gait; dorsiflexion; plantarflexion.

  • Computer Aided Designing and Finite Element Analysis for development of porous 3-D tissue scaffold-A review   Order a copy of this article
    by Nitin Sahai, Manashjit Gogoi 
    Abstract: Biodegradable porous tissue scaffolds play a crucial role in the development of tissues and organs. Biomimetic porous tissue scaffolds with accurate porosity can be developed with the help of the latest analysis techniques, known collectively as Computer Aided Tissue Engineering (CATE), which comprises Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Computer Aided Design (CAD), the Finite Element Method (FEM) and other modern design and manufacturing technologies, so that the 3-D architecture of porous tissue scaffolds can be fabricated with reproducible accuracy in pore size. The aim of this paper is to review and elaborate the various recent methods developed in Computer Aided Design, Finite Element Analysis and Solid Freeform Fabrication (SFF) for the development of porous 3-D tissue scaffolds.
    Keywords: Biomaterials; Scaffolds; Tissue Engineering; Computer Aided Tissue Engineering; Finite Element Method.

  • PHASE BASED FRAME INTERPOLATION METHOD FOR VIDEOS WITH HIGH ACCURACY USING ODD FRAMES   Order a copy of this article
    by Amutha S, Vinsley SS 
    Abstract: In this project, an innovative low-complexity motion vector processing algorithm at the receiver end is proposed for motion-compensated video frame interpolation, or frame rate up-conversion. The algorithm addresses the problems of broken edges and deformed structures in frame interpolation by hierarchically refining motion vectors on different block sizes. Broken edges are handled by taking the odd frames and interpolating them to obtain high-resolution images, so that blur in the images obtained from the video is removed. Blending techniques make it easy to remove image blur and to improve the quality of the image obtained from the video, yielding a high-resolution result. In the proposed method, the input is a video rather than the images used in the existing system; the recovered output is taken as images and further processed to produce an output video. Several techniques, such as phase-based interpolation and multistage motion-compensated interpolation, are used to obtain sharp images with reduced blur from the input videos. Experimental results show that the proposed system's visual quality is better and is also rugged, even in video sequences comprising fast motion and complex scenes.
    Keywords: MCFI; BMA; phase based interpolation; steerable pyramid; blending technique.

  • Investigation on staging of breast cancer using miR-21 as a biomarker in the serum   Order a copy of this article
    by Bindu SALIM, Athira M V, Kandaswamy Arumugam, Madhulika Vijayakumar 
    Abstract: Circulating microRNAs (miRNAs) are a novel class of stable, minimally invasive disease biomarkers that are valuable in diagnostics and therapeutics. MiR-21 is an oncogenic miRNA that regulates the expression of multiple cancer-related target genes and is highly expressed in the serum of patients suffering from breast cancer. The present study focused on measuring the expression profile of the most significantly up-regulated miRNA, miR-21, in the serum of breast cancer patients, to evaluate its correlation with the clinical stage of cancer using a molecular beacon probe. MiR-21 expression was also quantitatively analyzed by TaqMan real-time PCR. Ten serum samples from confirmed breast cancer patients and one healthy control sample were used for the evaluation of miR-21 gene expression. The expression levels of miR-21 were significantly higher in breast cancer serum samples than in the healthy control, with significant differences corresponding to clinical stages II, III and IV. The findings indicate that serum miR-21 could serve as a potential marker for therapeutic regimes as well as for monitoring patient status through a simple blood test.
    Keywords: Breast Cancer; Biomarker; miR-21; Clinical stage; Real-time PCR.

  • Pose and Occlusion Invariant Face Recognition System for Video Surveillance Using Extensive Feature Set   Order a copy of this article
    by A. Vivek Yoganand, A. Celine Kavida, D. Rukmani Devi 
    Abstract: Face recognition is a challenging problem in the field of image analysis and computer vision. Different video sequences of the same subject may contain variations in resolution, illumination, pose and facial expression; these variations contribute to the challenge of designing an effective video-based face recognition algorithm. In this proposed method, we present face recognition from video sequences with varying pose and occlusion. Initially, shot segmentation separates the video sequence into frames, and the face region is detected in each frame for further processing; face detection is the first stage of a face recognition system. After the face is detected, the facial features are extracted: SURF features, appearance features and holo-entropy are used to characterize the uniqueness of the face image, and the Active Appearance Model (AAM) extracts the appearance-based features. These features are used to select the optimal key frame in the video sequence through a supervised learning method, a Modified Artificial Neural Network (MANN) with the bat algorithm, where the bat algorithm optimizes the neuron weights. Finally, the face image is recognized against the feature library.
    Keywords: face recognition; Active appearance model; Modified Artificial Neural Network; bat algorithm.

  • Automatic segmentation of Nasopharyngeal carcinoma from CT images   Order a copy of this article
    by Bilel Daoud, Ali Khalfallah, Leila Farhat, Wafa Mnejja, Ken’ichi Morooka, Med Salim Bouhlel, Jamel Daoud 
    Abstract: Nasopharyngeal carcinoma (NPC), also called cavum cancer, has become a public health problem for the Maghreb countries and Southeast Asia. This cancer can be detected from computed tomography (CT) scans. In this context, we propose two approaches based on image clustering to locate tissues affected by cavum cancer, based respectively on E-M and Otsu segmentation. Compared to the physician's manual contouring, our automatic detection shows that detection using Otsu clustering is more efficient than E-M in terms of precision, recall and F-measure. We then merged the results of the two methods using the AND and OR logical operators: the AND fusion increases precision, while the OR fusion raises recall. However, NPC detection using Otsu remains the best solution in terms of F-measure. Compared to previous studies that provide a surface analysis of the NPC, our approach provides a 3D estimation of the tumor, ensuring a better analysis of the patient record.
    Keywords: Cavum Cancer; DICOM images; image segmentation; E-M; Otsu; recall; precision; F-measure.

  • Descendant Adaptive Filter to Remove Different Noises from ECG Signals   Order a copy of this article
    by Mangesh Ramaji Kose, Mitul Kumar Ahirwal, Rekh Ram Janghel 
    Abstract: Electrocardiogram (ECG) signals are electrical signals generated by the activity of the heart; they are recorded and analyzed to monitor heart condition. In raw form, ECG signals are contaminated with different types of noise, such as electrode motion artifact, baseline wander and muscle noise, also known as electromyogram (EMG) noise. In this paper, a descendant structure consisting of adaptive filters is used to eliminate these three types of noise. Two adaptive filtering algorithms have been implemented: least mean squares (LMS) and recursive least squares (RLS). The performance of the filters is compared on the basis of fidelity parameters including mean squared error (MSE), normalized root mean squared error (NRMSE), signal-to-noise ratio (SNR), percentage root mean squared difference (PRD) and maximum error (ME).
    Keywords: Adaptive Filters; ECG; Artifacts; LMS; RLS; SMA.
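Each LMS stage of such a structure is a standard adaptive noise canceller: a reference input correlated with one noise source drives a filter whose output is subtracted from the primary channel. A hedged single-stage sketch (the sinusoidal interference, filter order and step size are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.005):
    """Adaptive noise cancellation: an LMS filter estimates the noise in the
    primary input from a correlated reference, and subtracts it."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]  # most recent reference samples first
        e = primary[n] - w @ x            # error = cleaned-signal estimate
        w = w + 2 * mu * e * x            # LMS weight update
        out[n] = e
    return out

# ECG surrogate corrupted by sinusoidal interference whose source is also
# available as the reference input (both signals are synthetic)
n = np.arange(4000)
ecg = np.sin(2 * np.pi * n / 300)               # stand-in for the ECG
noise = 0.8 * np.sin(2 * np.pi * n / 8 + 0.5)   # interference at the sensor
reference = np.sin(2 * np.pi * n / 8)           # interference source
cleaned = lms_cancel(ecg + noise, reference)
```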

  • Epileptic Seizure Detection in EEG Using Improved Entropy   Order a copy of this article
    by A. Phareson Gini, M.P. Flower Queen 
    Abstract: Epilepsy is a chronic disorder of the brain that affects people all around the world. It is characterized by recurrent seizures, and it is difficult to recognize when someone is having an epileptic seizure. The electroencephalogram (EEG) signal plays a significant part in the recognition of epilepsy. The EEG signal carries complex information stored in EEG recording systems, and investigating recorded EEG signals to analyse epileptic activity is extremely challenging and time-consuming. In this article, we propose a novel ANN-based epileptic seizure detection method for EEG signals with the help of an improved entropy technique. The proposed technique comprises pre-processing, feature extraction and EEG classification using an artificial neural network. In the first phase, the entire input data set is sampled. In the second phase, a fuzzy entropy algorithm is used to extract the features of the measured signal. In the classification stage, an artificial neural network recognizes epileptic seizures in affected patients. Finally, we compare the proposed technique with existing techniques for detecting epileptic sections, computing parameters such as accuracy, specificity, FAR, sensitivity, FRR and GAR, which establish the effectiveness of the proposed epileptic seizure recognition system.
    Keywords: Entropy; EEG; ANN.
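As a rough sketch of the fuzzy entropy feature the abstract mentions, the following Python function computes a fuzzy entropy estimate of a 1-D signal; the embedding dimension, tolerance and similarity exponent are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    """Fuzzy entropy of a 1-D signal: lower values indicate a more
    regular (predictable) signal, higher values a more irregular one."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(dim):
        # Baseline-removed templates of length `dim`
        templ = np.array([x[i:i + dim] - np.mean(x[i:i + dim])
                          for i in range(len(x) - m)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # Fuzzy (exponential) similarity instead of a hard threshold
        sim = np.exp(-(d ** 2) / tol)
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (len(templ) * (len(templ) - 1))

    return -np.log(phi(m + 1) / phi(m))
```

In a seizure-detection pipeline, such entropy values computed per EEG segment form the feature vector fed to the classifier.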

  • Kurtosis Maximization for Blind Speech Separation in Hindi Speech Processing System using Swarm Intelligence and ICA   Order a copy of this article
    by Meena Patil, J.S. Chitode 
    Abstract: Blind Source Separation (BSS) methods separate mixed signals blindly, without any information about the mixing scheme. This is a central problem in the real world, whether one has to identify a specific speaker in a crowd or extract a region of a speech signal. Moreover, BSS approaches are combined with shape and statistical features to validate the performance of each in pattern classification. To address this problem, an effective BSS algorithm based on Group Search Optimization (GSO) is proposed. In the proposed algorithm, the kurtosis of the signals is used as the objective function and the GSO is used to optimize it. Primarily, the source signals are processed with Independent Component Analysis (ICA) to generate the mixing signals, and BSS yields the maximum kurtosis. The source signal component that is separated out is then removed from the mixtures with the help of a deflation technique. For all source signals, a significant improvement in the computational cost and the quality of signal separation is achieved using the proposed BSS-GSO algorithm compared with preceding algorithms.
    Keywords: Blind source separation (BSS); Speech signal; optimization; ICA; Mixing signals and Unknown signals.
    DOI: 10.1504/IJBET.2017.10011983
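The role of kurtosis as a separation objective can be illustrated with a small Python check (a sketch, not the paper's GSO implementation): mixing independent non-Gaussian sources drives the excess kurtosis toward the Gaussian value of zero, so maximizing kurtosis over demixing directions recovers an unmixed source.

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, > 0 for super-Gaussian
    signals such as speech; used as the separation objective."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0
```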
     
  • Electrocardiogram compression using the Non-Linear Iterative Partial Least Squares algorithm: a comparison between adaptive and non-adaptive approach   Order a copy of this article
    by Pier Ricchetti, Denys Nicolosi 
    Abstract: Data compression reduces the amount of data to be stored and can be applied in several data collecting processes, using either lossy or lossless compression algorithms. Due to the large amount of data involved, compression is desirable for ECG signals. In this work, we present the established Non-Linear Iterative Partial Least Squares (NIPALS) method as an option for ECG compression, as recommended by Nicolosi. In addition, we compare results based on adaptive and non-adaptive versions of this method, using the MIT Arrhythmia Database. To obtain a better comparison, we have developed an abnormality indicator related to possible abnormalities in the waveform, and a decision method that helps to choose between the adaptive and non-adaptive approaches. Results showed that the adaptive approach is better than the non-adaptive approach for the NIPALS compression algorithm.
    Keywords: data compression; component analysis; adaptive; comparison; PCA; principal component analysis; nipals; nonlinear iterative partial least squares; ECG; electrocardiogram; compression algorithms.
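A minimal sketch of the NIPALS iteration for the first principal component, the building block of the compression scheme described above; variable names and convergence settings are illustrative assumptions.

```python
import numpy as np

def nipals_first_pc(X, max_iter=500, tol=1e-10):
    """One round of NIPALS: alternately regress loadings and scores
    until the dominant principal component converges."""
    Xc = X - X.mean(axis=0)          # centre the data
    t = Xc[:, 0].copy()              # initial score vector
    for _ in range(max_iter):
        p = Xc.T @ t / (t @ t)       # loading estimate
        p /= np.linalg.norm(p)
        t_new = Xc @ p               # score estimate
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p                      # scores and unit-norm loading
```

Compression then amounts to storing a few score/loading pairs instead of the full signal matrix; an adaptive variant would re-run the iteration as new beats arrive.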

  • A tumour segmentation approach from flair MRI brain images using SVM and genetic algorithm   Order a copy of this article
    by S.U. Aswathy, G. Glan Devadhas, S.S. Kumar 
    Abstract: This paper puts forth the framework of a medical image analysis system for brain tumor segmentation. Image segmentation helps to segregate objects from the background, thus proving to be a powerful tool in medical image processing. This paper presents an improved segmentation algorithm rooted in the Support Vector Machine (SVM) and the Genetic Algorithm (GA). SVM is the base technique used for segmentation and classification of the medical images. The MRI database used consists of FLAIR images. The proposed system consists of two stages. The first stage performs pre-processing of the MRI image, followed by block division. The second stage includes feature extraction, feature selection and, finally, SVM-based training and testing. Feature extraction is done using the first-order histogram and the co-occurrence matrix, and a GA using KNN is used to select the feature subset. The performance of the proposed system is evaluated in terms of specificity, sensitivity, accuracy, time elapsed and figure of merit.
    Keywords: segmentation; support vector machine; genetic algorithm; k nearest neighbors.
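The co-occurrence features mentioned above can be sketched as follows; this is a generic gray-level co-occurrence matrix (GLCM) with contrast and energy statistics, assumed here for illustration rather than taken from the paper's feature set.

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """High when co-occurring gray levels differ strongly."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

def energy(g):
    """1.0 for a perfectly uniform texture, smaller otherwise."""
    return float((g ** 2).sum())
```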

  • Dual Modality Tran-Admittance Mammography and Ultrasound Reflection to Improve Accuracy of Breast Cancer Detection   Order a copy of this article
    by Khusnul Ain 
    Abstract: Breast tissue and cancerous tissue have a high impedance ratio, so impedance imaging can produce high-contrast images. Trans-admittance mammography (TAM) is an impedance-based prototype for detecting breast cancer. The TAM is only able to produce a projection image; it needs the ratio between anomalous and normal admittance to obtain the anomalous volume. The size and ratio of an anomaly are very important to know precisely because they are associated with the stage and type of cancer. Acoustic data provide the depth and volume of anomalies accurately, so combining TAM data and acoustic data is expected to provide promising results. The study was conducted by measuring the trans-admittance of a breast phantom with the TAM at frequencies of 0.5, 1, 5, 10, 50 and 100 kHz. The acoustic data were obtained by scanning the breast phantom. Combining the depth and anomaly volume from ultrasonic reflection with the TAM device can provide the right information for the ratio of conductivity between anomalies and reference.
    Keywords: dual modality; trans-admittance mammography; ultrasound reflection; accurate; breast cancer.

  • Low Power DNA Protein Sequence alignment using FSM State Transition controller   Order a copy of this article
    by Sancarapu Nagaraju, Penubolu Sudhakara Reddy 
    Abstract: In this paper we propose an efficient computation technique for DNA patterns on a reconfigurable hardware (FPGA) platform. The paper also presents the results of a comparative study between existing dynamic and heuristic programming methods for the widely used Smith-Waterman pair-wise sequence alignment algorithm and an FSM-based core implementation. Traditional software implementation-based sequence alignment methods cannot meet the actual data rate requirements. A hardware-based approach gives high scalability, and parallel tasks can be processed against a large number of new databases. This paper explains an FSM (Finite State Machine) based core processing element to classify the protein sequence. In addition, we analyze the performance of bit-based sequence alignment algorithms and present an inner-stage pipelined FPGA (Field Programmable Gate Array) architecture for sequence alignment implementations. Here, synchronized controllers are used to carry out parallel sequence alignment. The complete architecture is designed to carry out parallel processing in hardware, with FSM-based bit-wise pattern comparison, with scalability as well as a minimum number of computations. Finally, the proposed design proved to deliver high performance, and its efficiency in terms of resource utilization is demonstrated through FPGA implementation.
    Keywords: DNA; Protein Sequence; FSM; Smith-Waterman algorithm; FPGA; Low Power.

  • A review on multimodal medical image fusion   Order a copy of this article
    by BYRA REDDY G R, Dr. Prasanna Kumar H 
    Abstract: Medical image fusion is defined as combining two or more images from single or multiple imaging modalities, such as Ultrasound, Computerized Tomography, Magnetic Resonance Imaging, Single Photon Emission Computed Tomography, Positron Emission Tomography and Mammography. Medical image fusion is used to optimize storage capacity, minimize redundancy and improve the quality of the image. The goal of medical image fusion is to combine complementary information from multiple imaging modalities of the same scene. This review paper describes different imaging modalities, fusion methods and major application domains.
    Keywords: Image fusion; Ultrasound; Mammography; Magnetic Resonance; Computed Tomography.

  • Heart Sound Interference Cancellation from Lung Sound Using Dynamic Neighborhood Learning-Particle Swarm Optimizer Based Optimal Recursive Least Square Algorithm   Order a copy of this article
    by Mary Mekala A, Srimathi Chandrasekaran 
    Abstract: Cancellation of acoustic interference from lung sound recordings is a challenging task. Lung sound signals enable critical analysis of lung functions; thus, lung-related diseases can be diagnosed from noiseless lung sound signals. An adaptive noise cancellation technique based on the Recursive Least Square (RLS) algorithm, which can be used to reduce heart sounds in lung sounds, is proposed in this paper. In RLS, the forgetting factor is the major parameter that determines the performance of the filter, and finding the optimal forgetting factor for the given input is the vital step in RLS operation. An improved PSO algorithm is used to find the optimal forgetting factor for the proposed RLS algorithm. Three different normal breath sounds mixed with heart sound signals are used to test the algorithm. The results are assessed with the correlation coefficient between the original uncorrupted lung sound signal and the interference-cancelled lung signals produced by the proposed optimal filter. Power spectral density plots are also used to measure the accuracy of the proposed optimal RLS algorithm.
    Keywords: Lung sound signals; Dynamic Neighborhood Learning; Recursive Least Square; Adaptive noise cancellation; Optimization; Forgetting Factor; Heart Sound Signals; Correlation Coefficient; Power Spectral Density; Bronchial Sound.
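For reference, a bare-bones RLS canceller with an explicit forgetting factor (the parameter a PSO step would tune) might look like the following sketch; the filter order, initialization and signal names are illustrative assumptions.

```python
import numpy as np

def rls_cancel(primary, reference, lam=0.99, order=8, delta=100.0):
    """RLS adaptive noise canceller. `lam` is the forgetting factor
    that the optimizer would tune; `delta` initializes the inverse
    correlation matrix P."""
    n = len(primary)
    w = np.zeros(order)
    P = delta * np.eye(order)
    cleaned = np.zeros(n)
    for i in range(order - 1, n):
        x = reference[i - order + 1:i + 1][::-1]
        k = P @ x / (lam + x @ P @ x)        # gain vector
        e = primary[i] - w @ x               # a priori error = cleaned sample
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update
        cleaned[i] = e
    return cleaned
```

A smaller `lam` forgets old data faster (better tracking, noisier weights); the paper's contribution is choosing this trade-off automatically.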

  • Prediction of Risk Factors for Prediabetes using a Frequent Pattern based Outlier Detection   Order a copy of this article
    by Rajeswari A.M., Deisy C. 
    Abstract: Prediabetes is the forerunner stage of diabetes. Prediabetes develops into type-2 diabetes slowly, without any predominant symptoms; hence, prediabetes has to be predicted a priori to stay healthy. The risk factors for prediabetes are abnormal in nature and are found to be present in a few negative test samples (without diabetes) of the Pima Indian Diabetes data. Conventional classifiers are not able to spot these abnormal samples among the negative samples as a separate group. Hence, we propose an algorithm, Frequent Pattern Based Outlier Detection (FPBOD), to spot such abnormal samples (outliers) as a separate group. FPBOD uses an associative classification technique with surprising measures such as Lift, Leverage and Dependency degree to detect outliers. Among these, the Lift measure detects more precise outliers, able to correctly classify a person who does not yet have diabetes but runs the risk of becoming a diabetic patient.
    Keywords: outlier detection; prediabetes; frequent pattern based outlier detection; associative classification; surprising measure.
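The Lift measure used by FPBOD is the standard association-rule lift; a small Python sketch (illustrative, not the authors' code) shows the computation over a transaction list:

```python
def lift(transactions, antecedent, consequent):
    """Lift of the rule antecedent -> consequent over a list of
    transactions (each a set of items). Lift > 1 means the items
    co-occur more often than expected under independence."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_c = sum(1 for t in transactions if consequent <= t)
    n_ac = sum(1 for t in transactions if (antecedent | consequent) <= t)
    return (n_ac / n) / ((n_a / n) * (n_c / n))
```

In the outlier-detection setting, samples matching patterns with surprisingly high or low lift are flagged as the abnormal group.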

  • Design of Analytic Wavelet Transform with Optimal Filter Coefficients for Cancer Diagnosis Using Genomic Signals   Order a copy of this article
    by Deepa Grover, Sonica Sondhi, Banerji B. 
    Abstract: DNA sequence analysis and gene expression analysis through genomic signal processing have played an important role in cancer diagnosis in recent years. For cancer diagnosis through gene expression data, the Discrete Fourier Transform, the Discrete Wavelet Transform (DWT) and IIR low-pass filters are frequently used, but they suffer from drawbacks such as longer essential time-support. An analytic wavelet transform with optimal filter coefficients for cancer diagnosis using genomic signals is designed in this paper. The proposed technique consists of three modules: a pre-processing module, an optimization module, and a transform and cancer diagnosis module. Initially, the filter coefficients are found optimally using the Group Search Optimizer. Then, the optimal coefficients and the pre-processed DNA sequence are applied to the analytic wavelet transform and, subsequently, a diagnosis for the cancer cell is made based on a threshold. DNA sequences obtained from the National Centre for Biotechnology Information (NCBI) form the database for the evaluation. The evaluation metrics employed are sensitivity, specificity and accuracy. Comparison is made to the base method and the analytic transform technique for further analysis. From the results, we can observe that the proposed technique has achieved good results, attaining an accuracy of 91.6%, which is better than the other compared techniques.
    Keywords: Genomic Signal Processing (GSP); Cancer diagnosis; GSO; Analytic transform; thresholding.

  • A Review of Non-Invasive BCI Devices   Order a copy of this article
    by Veena N., Anitha N. 
    Abstract: BCI enables human beings to control various devices with the help of brain waves. It is quite useful for people who are totally paralyzed by neuromuscular conditions such as spinal cord injury or brain stem stroke. BCI provides a muscle-free channel for converting the user's intent into action, which helps people with motor disabilities to control their surroundings. Various non-invasive technologies, such as electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), are available for capturing brain signals. In this article, various non-invasive BCI devices are analysed and the nature of the signals captured by them is reported. We also explore the use of these signals for disease diagnosis, their features, and the availability of these devices in the market.
    Keywords: EEG; MEG; fMRI; Non-invasive; Psychological; Physiological.

  • R peak Detection for Wireless ECG using DWT and Entropy of coefficients   Order a copy of this article
    by Tejal Dave, Utpal Pandya 
    Abstract: Investigation of a patient's electrocardiogram helps to diagnose various heart-related diseases. With correct R peak detection in the ECG wave, classification of arrhythmia can be carried out accurately. However, accurate R peak detection is a big challenge, especially in wireless patient monitoring systems, where it is desirable to capture the ECG at a lower sampling rate in order to reduce power consumption. This paper proposes an algorithm for R peak detection using the discrete wavelet transform, in which detailed coefficients are selected based on entropy. The proposed algorithm is validated on the MIT-BIH database and its performance is compared with similar work. For the MIT-BIH case, the positive predictivity and sensitivity of the proposed algorithm are 99.85% and 99.73%, respectively. Application of the proposed algorithm to wireless ECG, acquired at adjustable sampling rates from different subjects using a prototyped Bluetooth ECG module, shows the efficacy of the algorithm in detecting the R peak of the ECG with high accuracy.
    Keywords: Electrocardiogram; Wireless Monitoring System; Entropy; Discrete Wavelet Transform.
    DOI: 10.1504/IJBET.2017.10013949
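The entropy-based coefficient selection and the subsequent peak search can be sketched in Python as follows; the entropy definition and the refractory period are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def shannon_entropy(coeffs):
    """Shannon entropy of the normalized energy distribution of a
    coefficient vector; sparse (QRS-dominated) levels score low."""
    p = np.abs(coeffs) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def detect_peaks(x, fs, ratio=0.5, refractory=0.2):
    """Pick local maxima above `ratio` of the global maximum,
    enforcing a refractory period between successive R peaks."""
    thr = ratio * x.max()
    gap = int(refractory * fs)
    peaks, last = [], -gap
    for i in range(1, len(x) - 1):
        if x[i] > thr and x[i] >= x[i - 1] and x[i] > x[i + 1] and i - last >= gap:
            peaks.append(i)
            last = i
    return peaks
```

The detail level whose entropy best isolates QRS energy would be chosen, and `detect_peaks` run on its reconstruction.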
     
  • Muscle fatigue and performance analysis during fundamental laparoscopic surgery tasks   Order a copy of this article
    by Ali Keshavarz Panahi, Sohyung Cho, Michael Awad 
    Abstract: A limited working area and an impaired degree of freedom have led surgeons to encounter ergonomic challenges when performing minimally invasive surgery (MIS). As a result, they become vulnerable to associated risks such as muscle fatigue, potentially impacting their performance and causing critical errors in operations. The goal of this study is first to establish the extent of muscle fatigue and time-to-fatigue in vulnerable muscle groups, and then to determine whether the former has any effect on surgical performance. In the experiment, surface electromyography (sEMG) was deployed to record the muscle activations of 12 subjects (6 males and 6 females) while performing fundamentals of laparoscopic surgery (FLS) tasks for a total of 3 hours. In all, 16 muscle groups were tested. The resultant data were then reconstructed using recurrence quantification analysis (RQA) to achieve the first goal. In addition, a subjective fatigue assessment was conducted to draw comparisons with the RQA results. The subjects' performance was also investigated via an FLS task performance analysis, the results demonstrating that RQA can detect muscle fatigue in 12 muscle groups. The same approach also enabled an estimation of time-to-fatigue for said groups. The results also indicated that RQA and subjective fatigue assessment are very closely correlated (p-value < 0.05). Although muscle fatigue was established in all 12 groups, the performance analysis results showed that the subjects' execution of their duties improved over time.
    Keywords: Minimally Invasive Surgery; Fatigue Analysis; Recurrence Quantification Analysis; FLS Task Performance Analysis; Subjective Fatigue Assessment.
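A minimal sense of what RQA measures: build a recurrence matrix from pairwise distances and read off the recurrence rate. The sketch below is illustrative only (no phase-space embedding, arbitrary threshold), not the study's RQA configuration.

```python
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of sample pairs whose distance is within `eps`;
    regular dynamics recur often, irregular dynamics less so."""
    d = np.abs(x[:, None] - x[None, :])
    return float((d <= eps).mean())
```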

  • Automatic Diagnosis of Stomach Adenocarcinoma using Riesz Wavelet   Order a copy of this article
    by ANISHIYA P, Sasikala M 
    Abstract: Adenocarcinoma originates from the glands and causes changes in the gland architecture. The detection of adenocarcinoma requires histopathological examination of tissue specimens. At present, diagnosis and grading of the cancer depend on the visual interpretation of biopsy samples by a pathologist, which may lead to a considerable amount of inter- and intra-observer variability. To overcome this drawback, reduce the reliance on human interpretation and thereby reduce the workload of pathologists, many methods have been proposed. In this paper, a novel method to quantify a tissue for the purpose of automated cancer diagnosis and grading is introduced. The stomach tissue images are preprocessed to compensate for color variations, and the Riesz wavelet transform is applied to the preprocessed images. From the Riesz wavelet coefficients, 14 different statistical features are extracted, and wrapper-based feature selection is used. A reliability check on the final dataset is performed using ANOVA. In diagnosis, the tissue is classified as normal (non-malignant), well differentiated, moderately differentiated or poorly differentiated tissue. The proposed system yielded a classification accuracy of 93.2% in diagnosis and 98.33% in grading.
    Keywords: Stomach adenocarcinoma; histopathological image analysis; Color Normalization; Riesz wavelet transform; cancer diagnosis; Hilbert transform; Simoncelli wavelet; ANOVA; Support Vector Machine.

  • Generalized Warblet Transform Based Analysis of Biceps Brachii Muscles Contraction Using Surface Electromyography Signals   Order a copy of this article
    by Diptasree MaitraGhosh, Ramakrishnan Swaminathan 
    Abstract: In this work, an attempt has been made to utilize the time-frequency spectrum obtained using the Generalized Warblet Transform (GWT) for fatigue analysis. Signals are acquired from the biceps brachii muscles of twenty healthy volunteers during isometric contractions. The first and last 500 ms segments of a signal are taken as the nonfatigue and fatigue zones, respectively. The signals from these zones are subjected to the GWT for the computation of the time-frequency spectrum. Features such as Instantaneous Mean Frequency (IMNF), Instantaneous Median Frequency (IMDF), Instantaneous Spectral Entropy (ISPEn) and Instantaneous Spectral Skewness (ISSkw) are estimated. The results show that IMNF, IMDF and ISPEn are 24%, 34% and 36% higher, respectively, in the nonfatigue condition. In contrast, 22% higher ISSkw is observed in the fatigue condition. The statistical analysis indicates that the features are significant with p < 0.001. It appears that the current method is useful in analyzing muscle fatigue disorders using sEMG signals.
    Keywords: sEMG; biceps brachii; muscle fatigue; GWT.
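The instantaneous mean frequency feature can be written directly from a time-frequency power matrix; below is a generic sketch (rectangular-window, non-overlapping STFT with illustrative sizes), not the GWT used in the paper.

```python
import numpy as np

def stft_power(x, win=256):
    """Rectangular-window, non-overlapping STFT power (sketch).
    Returns an array of shape (n_freqs, n_frames)."""
    n_frames = len(x) // win
    frames = x[:n_frames * win].reshape(n_frames, win)
    return np.abs(np.fft.rfft(frames, axis=1)).T ** 2

def imnf(power, freqs):
    """Instantaneous mean frequency per frame: the power-weighted
    average frequency of each spectral column."""
    return (freqs[:, None] * power).sum(axis=0) / power.sum(axis=0)
```

IMDF, ISPEn and ISSkw are computed analogously from the same per-frame spectral distribution (median, Shannon entropy and skewness instead of the mean).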

  • Automated Emotion State Classification using Higher Order Spectra and Interval features of EEG   Order a copy of this article
    by Rashima Mahajan 
    Abstract: Automated analysis of electroencephalogram (EEG) signals for emotion state analysis has been progressing towards the development of affective brain-computer interfaces. However, conventional EEG signal analysis techniques, such as event related potential (ERP) analysis and power spectrum estimation, fail to provide high emotion state classification rates with distinct machine learning tools due to Fourier phase suppression. Further, only limited types of emotions have been explored for automated recognition using EEG, even though there is a wide variety of emotional states illustrating human feelings. An attempt has been made to develop an efficient emotion classification algorithm for EEG utilizing statistics of fourth-order spectra. A four-dimensional emotional model in terms of arousal, valence, liking and dominance is proposed using emotion-specific EEG signals from the DEAP dataset. A compact set of five temporal peak/interval-related features and three trispectrum-based features has been extracted to map the feature space. On this feature map, a multiclass support vector machine (SVM) classifier using the one-against-one algorithm is configured, yielding a maximum classification accuracy of 81.6% while classifying four emotional states. A comparison of the multiclass SVM with other classifiers, such as the feed forward neural network (FFNN) and the radial basis function network (RBF), has been made. Significant improvement in automated emotion state classification has been achieved using the proposed compact hybrid EEG feature set and a multiclass SVM.
    Keywords: Brain Computer Interface (BCI); Electroencephalogram (EEG); Emotions; Multiclass-SVM; Trispectrum; Temporal; DEAP.

  • Integrated neuromuscular fatigue analysis system for soldiers load carriage trial using DWT   Order a copy of this article
    by Arosha S. Senanayake, Dk N. Filzah Pg Damit, Owais A. Malik, Nor Jaidi Tuah 
    Abstract: This research work addresses the neuromuscular fatigue of soldiers using wearable sensors during a load carriage trial. Ten healthy male soldiers participated in the experiment. EMG was recorded bilaterally from selected lower extremity muscle groups, and EEG was recorded on the frontal cortex of the brain. Each subject was asked to run and/or march on a treadmill, with and without load, at 6.4 km h-1, with the inclination increased every 5 minutes until volitional exhaustion was reached. Feature extraction was performed using the discrete wavelet transform on both signals. Results demonstrated significant changes in power levels at the lower and middle frequency bands for EMG in most muscles during both unloaded and loaded conditions. For the EEG signals, however, significant changes in power distribution were observed at the frontal cortex during unloaded conditions only. Furthermore, through data visualisation, fatigue was detected at the muscle level first, before signals were sent to the brain for the decision to stop the exercise.
    Keywords: neuromuscular fatigue; EMG; EEG; load carriage.

  • The Effects of Implant Design, Bone Quality, and Insertion Process on Primary Stability of Dental Implants: An In-vitro Study   Order a copy of this article
    by Mansour Rismanchian, Mojtaba Afshari 
    Abstract: The aim of this study was to analyse the maximum insertion torque (MIT) with respect to the implant thread face angle, bone quality and the insertion process. Two implants were designed and made: one with a High Face Angle (HFA) and the other with a Low Face Angle (LFA). Bovine bones were classified, and then the MIT values of the implants were measured per round using a standard protocol. Finally, three-way ANOVA was used to interpret the MIT values. For the main effects, the MIT values (mean and standard error) of the LFA (17.4±0.7 N cm) and HFA implants (20.8±0.7 N cm) were significantly different, as were the MIT values for the D4 (9.4±0.6 N cm), D3 (19.1±0.7 N cm) and D2 (28.8±1.1 N cm) bones. Considering higher primary stability as an essential provision for immediately loaded implants, it is recommended to use implants featuring higher thread face angles.
    Keywords: Implant Design; Maximum Insertion Torque; Primary Stability; In-vitro.

  • DIABETIC RETINOPATHY DETECTION USING LOCAL TERNARY PATTERN   Order a copy of this article
    by Anitha Anbazhagan, Uma Maheswari 
    Abstract: There is a need for an intelligent way of detecting Diabetic Retinopathy (DR), which has been spreading widely in developing countries in recent years. The worst part of diabetic retinopathy is that it is initially asymptomatic, but if untreated it can lead to blindness. In this paper, an automated technique for early screening of fundus images is proposed. This work focuses on analyzing the texture of the fundus image to classify it as normal or diabetic retinopathy (DR). For texture analysis, the Local Ternary Pattern (LTP) is applied and compared with the Local Binary Pattern (LBP). As LBP is more sensitive to noise and illumination variation, LTP is employed and its discriminative power is explored. LTP is obtained for all three color components, Red (R), Green (G) and Blue (B), for radii R = 1, 2, 3 and 5, considering 8 neighborhood pixels. The importance of the RGB components over the Green channel component alone is also analyzed. The histogram of the LTP and its variance provide a statistical set of features, which are given to the KNN and Random Forest classifiers. A 10-fold cross-validation approach is incorporated in both classifiers. The Random Forest classifier provides a sensitivity and specificity of 100%, while average sensitivity and specificity of nearly 91% are achieved. DR is detected by analyzing the retina background without segmenting the lesions, which implies that the proposed algorithm is very fast and can be used as a screening test for retinal abnormality detection.
    Keywords: Local Ternary Pattern; Local Binary Pattern; Diabetic Retinopathy; Random Forest; Computer Aided Diagnosis; Fundus images; KNN.
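A hedged sketch of the basic Local Ternary Pattern encoding at radius 1 follows; the multi-radius, per-channel variant in the paper extends this, and the threshold value here is an arbitrary example.

```python
import numpy as np

def ltp(img, t=5):
    """Local Ternary Pattern: each neighbor is coded +1/0/-1 against
    the center +/- t, then split into 'upper' and 'lower' binary maps."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    upper = np.zeros((h - 2, w - 2), dtype=int)
    lower = np.zeros((h - 2, w - 2), dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = int(img[y, x])
            for k, (dy, dx) in enumerate(offsets):
                n = int(img[y + dy, x + dx])
                if n >= c + t:
                    upper[y - 1, x - 1] |= 1 << k   # +1 branch
                elif n <= c - t:
                    lower[y - 1, x - 1] |= 1 << k   # -1 branch
    return upper, lower
```

Histograms of the upper and lower code maps then form the texture features fed to KNN or Random Forest.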

  • Enhancement of Low Quality Blood Smear Image using Contrast and Edge Corrections   Order a copy of this article
    by Umi Salamah, Riyanarto Sarno, Agus Zainal Arifin, Anto Satriyo Nugroho, Ismail Eko Prayitno Rozi, Puji Budi Setia Asih 
    Abstract: A blood smear is a medical image widely used for disease diagnosis. A low-quality smear produced by a microscope with poor specifications complicates the reading of its features: the image is blurred, the true colour of objects is diminished, boundaries are unclear, and contrast between object and background is low. In this study, we propose an image enhancement technique to improve the readability of features in low-quality blood smear images. The proposed method consists of contrast and edge corrections. Contrast correction integrates global and local contrast correction, while edge correction uses unsharp masking filtering to improve object edges. Experiments are performed on images of three diseases. The results show that the proposed method achieves the best entropy and performs well on MSE and PSNR, so it can produce images containing more information than the other methods, with good effectiveness.
    Keywords: enhancement; low quality; blood smear; contrast; edge; correction; local; global; unclear boundary; readability feature.
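Unsharp masking, the edge-correction step described above, amplifies the difference between an image and a blurred copy of itself; a minimal sketch with a box blur follows (the kernel size and amount are illustrative, not the paper's settings).

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen by adding back the high-frequency residual."""
    return img + amount * (img - box_blur(img, k))
```

Flat regions pass through unchanged, while edges gain an overshoot that makes cell boundaries easier to read.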

  • An Adaptive Weighted Denoising Filter Framework for Impulse and Shot Noise : A Mammogram Image Applications   Order a copy of this article
    by Ramya A, Murugan D, Manish T. I, Ganesh Kumar T 
    Abstract: Breast screening equipment has not yet become an effective diagnostic method for the detection of abnormalities. This is mainly due to physical interference from the screening system; these problems usually introduce noise due to electric charge and photon counting in optical screening machines. This work focuses on denoising the mammogram breast image to a great extent with a two-step process. In the first phase, the mammogram image dataset is passed to a neural network to separate corrupted images from non-noisy images using different feature sets. Then a weighted average of four different filters, the Geometric Mean filter (GM), Decision Based Median filter (DBM), Directional Weighted Median filter (DWM) and Frost filter, is applied to impulse and shot noise pixels, to preserve corrupted edges and to avoid smoothing out details. Additionally, this combined filter is subjected to an exponential transformation to yield the enhanced output. The proposed filter is applied to these two noise types and compared with existing filters with respect to quality factors such as Peak Signal-to-Noise Ratio (PSNR) and Mean Absolute Error (MAE). The results show that the proposed method yields more promising results than the existing filters for mammogram images.
    Keywords: Image Denoise; Neural Network; Impulse noise; Shot noise; Adaptive Weighted filter; Mammogram Image.

  • Modelling and Simulation Analysis of Porous Polymeric Scaffold for the Replacement of Bruchs Membrane as a Therapy for Age-Related Macular Degeneration   Order a copy of this article
    by Susan Immanuel, Aswin Bob Ignatius, Alagappan Muthupalaniappan 
    Abstract: Age Related Macular Degeneration [AMD] is a common disease that is prevalent among people aged 50 and above. It is characterized by the loss of Retinal Pigment Epithelial cells [RPE] from the macula of the eye due to the deposition of drusen in the Bruch's membrane that supports the RPE cells. A treatment for AMD is to replace the Bruch's membrane with scaffolds that provide a conducive environment for the RPE cells to adhere and proliferate. In this study, a scaffold made of porous Poly(Caprolactone) [PCL] was designed using the tool COMSOL Multiphysics, and its properties, such as structural integrity and fluid flow, were analysed using Brinkman's equation. The model has a square geometry with a dimension of 100 x 100 x 4
    Keywords: Age Related Macular Degeneration; Retinal Pigment Epithelial Cells; Bruch’s membrane; PCL scaffold; Brinkman’s equation.

  • Continuous User Authentication Using Multimodal Biometric Traits with Optimal Feature Level Fusion   Order a copy of this article
    by Prakash A., Krishnaveni R., Dhanalakshmi R. 
    Abstract: Biometrics demonstrates the authenticity or approval of an individual in view of his/her physiological or behavioural characteristics. For higher security, a blend of at least two multimodal biometrics (multiple modalities) is required. Multimodal biometric technology provides potential solutions for continuous user-to-device authentication in high-security settings. This research paper proposes a Continuous Authentication (CA) process using multimodal biometric traits, applying various feature extraction processes to fingerprint and iris images. The extracted features then undergo an optimal Feature Level Fusion (FLF) process: the final feature vector is acquired by concatenating directional information and centre area features. For the optimal feature process, the Fruit Fly Optimization (FFO) model is considered, and the fused features are then used in the authentication procedure to find matching score values (Euclidean distance) for impostor and genuine users. The results accomplish maximum accuracy, sensitivity and specificity compared with existing papers, with better FPR and FRR values for the authentication process. The results show 92.23% accuracy for the proposed model when compared to GA and PSO, as attained in MATLAB programming software.
    Keywords: biometrics; authentication; feature vectors; optimization; Feature level fusion; fingerprint; iris.
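The feature-level fusion and Euclidean matching step can be sketched as follows; z-score normalization before concatenation is an assumption for illustration, and the paper's FFO-selected features are not reproduced here.

```python
import numpy as np

def fuse_features(finger_vec, iris_vec):
    """Feature-level fusion: normalize each modality's feature vector,
    then concatenate into a single fused template."""
    def znorm(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / v.std()
    return np.concatenate([znorm(finger_vec), znorm(iris_vec)])

def matching_score(template, probe):
    """Euclidean distance between fused templates; a genuine user
    scores below the decision threshold, an impostor above it."""
    return float(np.linalg.norm(template - probe))
```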

  • A comparison of foot kinematics between pregnant and non-pregnant women using the Oxford Foot Model during walking   Order a copy of this article
    by Minjun Liang, Yang Song, Wenlan Lian 
    Abstract: To investigate the differences between pregnant women (PW) and non-pregnant women (NW) in foot kinematics during walking using the Oxford Foot Model (OFM). The results contribute to the understanding of the foot biomechanics of pregnant women and may provide suggestions for customized footwear design. 20 women in the last trimester of pregnancy and 23 non-pregnant women were involved in this study. The three-dimensional motion of the forefoot, hindfoot and tibia during walking was recorded by a Vicon motion analysis system, and two force platforms were used to record the ground reaction force. Compared with NW, PW demonstrated greater plantarflexion and internal rotation of the hindfoot and greater internal tibial rotation during initial contact, and greater forefoot eversion and hindfoot external rotation during push off. Moreover, PW showed greater external tibial rotation than NW during toe off, and the center of pressure (COP) trajectory moved to the 2nd to 3rd metatarsals at this stage. It appears that the altered foot kinematics of pregnant women may contribute to the redistribution of joint loads and the maintenance of stability at a comfortable walking pace. In addition, the falling risk for pregnant women is prone to be higher at the initial contact phase, and overuse injury may arise in the 2nd to 3rd metatarsals when walking for a long time.
    Keywords: pregnant women; non-pregnant women; musculoskeletal; center of gravity; center of pressure.

  • CHARACTERIZATION OF FREQUENCY MODULATED WAVE OF NIR PHOTONS TRANSPORT IN HUMAN LOWER FOREARM PHANTOM   Order a copy of this article
    by Ebraheem Sultan, Nizar Alkhateeb 
    Abstract: The free-space broadband frequency-modulated near infrared (NIR) photon technique has been used to model and characterize the optical behaviour in a human lower forearm phantom. The NIR system measures the broadband (30MHz-1000MHz) insertion loss (IL) and insertion phase (IP) of the modulated light. This helps to characterize and model the different penetration depths and path-related modulated photon movement in the human phantom. A phantom resembling the human lower forearm (three layers) is used, along with the experimental modules and Finite Element (FE) tools, to perform characterization and modelling. The study is divided into two stages. The first stage is dedicated to performing IL and IP measurements over 30MHz-1000MHz with a back-scattering measurement method to characterize the behaviour of the modulated photons. The second stage is dedicated to modelling the modulated broadband photon behaviour using a Finite Element simulator, COMSOL. Results in both stages are analyzed using a signal processing method. This helps to identify the frequency-modulated photon stamp associated with different layers and verifies the accuracy of the 3D FE modelling. The discrepancy between experimental and 3D FE modelling results, computed at different frequencies, is shown to be less than 6%, and the results give an understanding of how modulated waves of photons behave when travelling in the human forearm. These results can be used to further investigate the functionality surrounding any biological activity around the human forearm.
    Keywords: NIR Spectroscopy; COMSOL; 3D FE; Modulated wave of photons; human lower forearm photon transport; optical transmitter; VCSEL; optical receiver; APD; IP and IL.

  • Complex Diffusion Regularization based Low dose CT Image Reconstruction   Order a copy of this article
    by Kavkirat Kaur, Shailendra Tiwari 
    Abstract: Computed Tomography (CT) is considered a significant imaging tool for clinical diagnosis. Due to the low-dose radiation in CT, the projection data is highly affected by Gaussian noise. Thus, there is a demand for a framework that can eliminate the noise and provide high-quality images. This paper presents a new statistical image reconstruction algorithm by proposing a suitable regularization method. The proposed framework is the combination of two basic terms, namely data fidelity and regularization. Maximizing the log likelihood gives the data fidelity term, which represents the distribution of noise in low-dose X-ray CT images; the maximum likelihood expectation maximization (MLEM) algorithm is introduced as the data-fidelity term. The ill-posedness of the data fidelity term is overcome with the help of a complex diffusion filter, introduced as a regularization term into the proposed framework, which minimizes the noise without blurring edges and preserves fine structural information in the reconstructed image. The proposed model has been evaluated on both simulated and real standard thorax phantoms. The final results are compared with other standard methods, and the analysis shows that the proposed model has many desirable properties such as better noise robustness, lower computational cost and an enhanced denoising effect.
    Keywords: Computed Tomography (CT); Noise Reduction; Maximum likelihood expectation maximization (MLEM) algorithm; Complex Diffusion (CD); Gaussian noise.

  • Active contours for overlapping cervical cell segmentation   Order a copy of this article
    by Flavio Araujo, Romuere Silva, Fatima Medeiros, Jeova Farias, Paulo Calaes, Andrea Bianchi, Daniela Ushizima 
    Abstract: The nuclei and cytoplasm segmentation of cervical cells is a well-studied problem. However, current segmentation algorithms are not robust enough for clinical practice, either because of their high computational cost or because they cannot accurately segment cells with heavy overlapping. In this paper, we propose a method that is capable of segmenting both the cytoplasm and the nucleus of each individual cell in a clump of overlapping cells. The proposed method consists of three steps: a) cellular mass segmentation; b) nucleus segmentation; and c) cytoplasm identification based on an active contour method. We carried out experiments on both synthetic and real cell images. The performance evaluation showed that the proposed method was less sensitive than two other existing algorithms to increases in the number of cells per image and in the overlapping ratio. It also achieved a promisingly low processing time and, hence, has the potential to support expert systems for cervical cell recognition.
    Keywords: ABSnake; Active Contour; Cervical Cells; Medical Image; Nuclei and Cytoplasm Segmentation; Overlapping Cells; Pap Test.

  • Certain investigation on Biomedical impression and Image Forgery Detection   Order a copy of this article
    by Arun Anoop Mandankandy, Poonkuntran S 
    Abstract: In today's digital age, trust in images is eroding because of malicious image forgeries. Issues related to multimedia security have turned the research focus towards tampering detection. Because the source and target regions are taken from the same image, copy-move forgery is very effective for image manipulation, as the copied region shares properties such as temperature, colour, noise and illumination conditions. In this paper, we analyse several papers related to copy-move forgery detection and conclude with a comparative analysis over a set of parameters, also covering medical and biomedical image analysis, databases, search engines, devices and system security.
    Keywords: Block based image forgery detection; key-point based image forgery detection; pixel based image forgery detection; Copy move forgery; Biomedical search engines; Biomedical Image analysis.

  • Cerebral Palsy Rehabilitation - Effectiveness of Visual Stimulation Method by Analyzing the Quantitative Assessment of Oculomotor Abnormalities   Order a copy of this article
    by ILLAVARASON P, Arokia Renjit J 
    Abstract: Cerebral Palsy (CP) arises from developmental brain abnormalities occurring before birth, during birth or after delivery. The damaged brain cells lead to further health issues affecting vision, hearing, motor activities and so on; vision problems are among the major health issues for children with cerebral palsy. The proposed approach deals with vision dysfunction and oculomotor assessment for the diagnosis and treatment of the brain disorder. Eye movement plays a vital role in obtaining accurate vision of static and dynamic objects. In the proposed approach, we assessed the oculomotor deficits of CP children by recording the eye movements of 26 CP children (age range 4-14) and comparing their performance with age-matched controls, analysing eye fixation centroid, smooth pursuit and eyelid blinking activities. These eye-movement measures provide a window into neuroplasticity for CP children. The oculomotor abnormalities indicate that the lesioned brain of CP children retains the ability to reorganize; continued practice of these visual stimulation tasks with an eye gaze direction approach can improve the cognitive rehabilitation of cerebral palsy children. These gaze-related indices in response to both static and dynamic visual stimulation techniques may serve as potential quantitative biomarkers for cerebral palsy children.
    Keywords: Cerebral Palsy; Eye fixation; Smooth Pursuit; Blink Rate.

  • Quantitative Analysis of Paraspinal Muscle Strain during Cervical Traction Using Wireless EMG Sensor   Order a copy of this article
    by Hemlata Shakya, Shiru Sharma 
    Abstract: The aim of this study is to assess the efficacy of cervical traction on the basis of fatigue analysis using a wireless EMG sensor. Neck pain is a common health-related complaint, leading to paraspinal muscle spasm and radiculopathy. Patients with neck pain complaints were visiting the therapy unit regularly for cervical traction treatment. This case study includes EMG recordings of twelve neck pain patients using a wireless EMG sensor. The patients were divided into two groups: those having radiculopathy with paraspinal muscle spasm, and those with paraspinal muscle spasm but without radiculopathy. The subjects were treated with 15 minutes of cervical traction with a 7 kg load. Various features in the time domain and frequency domain were extracted from the acquired EMG data to assess muscle fatigue during cervical traction treatment in the sitting position. Analysis of the various parameters indicated significant differences in paraspinal muscle activity. The results indicate the effectiveness of continuous traction treatment in the reduction of neck pain.
    Keywords: Electromyography; Neck pain; Muscle Fatigue; Feature extraction.
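A standard frequency-domain fatigue indicator of the kind the abstract analyses is the EMG median frequency, the frequency that splits the power spectrum into two halves of equal power; it shifts downwards as a muscle fatigues. The sketch below uses synthetic sinusoids, not the study's recordings:

```python
import numpy as np

def median_frequency(emg, fs):
    # Power spectrum of one EMG analysis window
    spectrum = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    # Median frequency: splits the total spectral power into two equal halves
    cumulative = np.cumsum(spectrum)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
fresh = np.sin(2 * np.pi * 80 * t)      # higher-frequency content at rest
fatigued = np.sin(2 * np.pi * 40 * t)   # spectrum shifts downwards with fatigue
```

Tracking this value over successive windows of a traction session gives a simple quantitative fatigue trend.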

  • Medical Video based Cryptography Model to Improve the Security and Transmission Time   Order a copy of this article
    by Edwin Dayanand, R.K. Selva Kumar, N.R. Ram Mohan 
    Abstract: This paper proposes a medical video based visual cryptography model using Graph-based Coherence Shape Lagrange Interpolation (G-CSLI). The G-CSLI model reduces the noise in medical video while generating shares and minimizes the transmission time by reducing the computational complexity during secret sharing. At first, a probabilistic polynomial-time model is used with the objective of minimizing computational complexity during secret sharing. Here, similarity is measured based on luminance and structure. Finally, a Lagrange interpolation scheme is applied with the objective of minimizing the transmission time and improving security. Overall, the proposed model minimizes the computational complexity of secret sharing; experimental evaluation considers factors such as security, transmission time and noise in medical video based visual cryptography. Experimentation shows that the proposed model reduces the computational complexity of secret sharing by 13.11% and the transmission time by 39.03% compared with state-of-the-art algorithms.
    Keywords: chaotic oscillations; probabilistic model; visual cryptography; probabilistic polynomial coherence shape; Lagrange interpolation.
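Lagrange interpolation underlies classical (k, n) secret sharing of the kind the abstract builds on: the secret is hidden in the constant term of a polynomial, shares are points on that polynomial, and any k shares reconstruct the secret by interpolating at x = 0. A minimal sketch over a small prime field; the prime and coefficients are illustrative, and a real scheme would use cryptographically large random parameters:

```python
# (k, n) secret sharing: a degree-(k-1) polynomial hides the secret in its
# constant term; any k shares recover it by Lagrange interpolation mod p.
P = 2087  # small prime field, for illustration only

def make_shares(secret, k, n, coeffs):
    # coeffs are the k-1 polynomial coefficients (fixed here for clarity;
    # they must be random in a real deployment)
    assert len(coeffs) == k - 1
    poly = [secret] + list(coeffs)
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, k=3, n=5, coeffs=[166, 94])
```

Any 3 of the 5 shares suffice; `pow(den, P - 2, P)` computes the modular inverse via Fermat's little theorem.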

  • POWER LINE INTERFERENCE REMOVAL FROM ELECTROCARDIOGRAM SIGNAL USING MULTI-ORDER ADAPTIVE LMS FILTERING   Order a copy of this article
    by Surekha Ks, B.P. Patil 
    Abstract: Electrocardiogram (ECG) signals are susceptible to noise and interference from the external world. This paper presents the reduction of unwanted 50Hz power line interference in the ECG signal using multi-order adaptive LMS filtering. The novelty of the present method is the actual hardware implementation of power line interference removal: the adaptive filter is designed both as a SIMULINK-based model and as a hardware design on FPGA. The performance measures used are SNR, PSNR, MSE and RMSE. The proposed method achieves a better Signal to Noise Ratio (SNR) through careful selection of the filter order in hardware.
    Keywords: Adaptive filter; ECG; LMS Filter; Multi-order; Power line interference; FPGA; SIMULINK model; SNR; PSNR.
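An adaptive LMS interference canceller of the kind the abstract evaluates can be sketched in a few lines: a reference pickup of the 50 Hz mains drives an FIR filter whose weights adapt so that its output matches the interference in the primary channel, and the remaining error signal is the cleaned ECG. The signals, filter order and step size below are illustrative, not those of the hardware design:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    # Adaptive LMS: the filter learns to predict the 50 Hz component from the
    # reference input; the error (primary minus prediction) is the clean signal.
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most-recent-first tap vector
        e = primary[n] - np.dot(w, x)
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out

fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)              # stand-in for the ECG
mains = 0.5 * np.sin(2 * np.pi * 50 * t)       # power line interference
reference = np.sin(2 * np.pi * 50 * t + 0.3)   # reference pickup of the mains
cleaned = lms_cancel(ecg + mains, reference)
```

The filter order trades convergence behaviour against hardware cost, which is the design dimension the paper explores on FPGA.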

  • Automated Computer Aided System for Early Diagnosis of Alzheimers Disease by Regional Atrophy Analysis in Functional Magnetic Resonance Imaging   Order a copy of this article
    by Sampath R., Indumathi J 
    Abstract: A growing body of research suggests that a preclinical stage of Alzheimer's disease (AD), characterized by specific neuropsychological and brain changes, may exist several years before the overt manifestation of clinical symptoms. FMRI is a promising tool for detecting brain change in AD. This paper proposes an Automated Computer Aided System (ACAS) for early diagnosis of AD using FMRI. The system consists of four stages: preprocessing, feature extraction, segmentation and regional atrophy analysis. The preprocessing removes the noise in the FMRI image; Multi Scale Analysis (MSA) is used to analyze the FMRI image and obtain its fractals at 6 different scales, which produce different feature vectors to discriminate between healthy and pathological patients; the Self-Organizing Map Network (SOMN) technique, used for the segmentation process, is an unsupervised network that utilizes the obtained feature vectors for competitive learning; and the regional atrophy analyses are used to differentiate AD from other neurodegenerative diseases. Compared to MRI, the proposed system gives more satisfactory results for early diagnosis and differentiation of AD from other neurodegenerative diseases.
    Keywords: Alzheimer’s Disease (AD); Functional Magnetic Resonance Imaging (FMRI); Automated Computer Aided System (ACAS); Multi Scale Analysis (MSA); Self–Organizing Map Network (SOMN); Regional Atrophy Analysis.
    DOI: 10.1504/IJBET.2019.10012451
     
  • Brain Tumor Detection and Classification using Hybrid Neural Network Classifier   Order a copy of this article
    by Krishnamurthy Nayak, Supreetha B. S, Phillip Benachour, Vijayashree Nayak 
    Abstract: Brain tumor is one of the most harmful diseases and affects many people, including children, around the world. The probability of survival can be enhanced if the tumor is detected at a premature stage. Moreover, the process of manually generating precise segmentations of brain tumors from magnetic resonance images (MRI) is time-consuming and error-prone. Hence, in this paper, an effective technique is employed to segment and classify tumor-affected MRI images. Here, the segmentation is performed with an adaptive watershed segmentation algorithm. After segmentation, the tumor images are classified by means of a hybrid ANN classifier, which employs the cuckoo search optimization technique to update the interconnection weights. The proposed methodology was implemented in MATLAB, and the results were compared with existing techniques.
    Keywords: Neural Network; Medical Image processing; magnetic resonance images.

  • Multi Features Based Approach for White Blood Cells Segmentation and Classification in Peripheral Blood and Bone Marrow Images   Order a copy of this article
    by BENOMAR MOHAMMED LAMINE, CHIKH MOHAMMED AMINE, DESCOMBES XAVIER, BENAZZOUZ MOURTADA 
    Abstract: In this paper, we propose a complete automated framework for white blood cell differential counts in peripheral blood and bone marrow images, in order to reduce the analysis time and increase the accuracy of diagnosis for several blood disorders. A new color transformation is first proposed to highlight the white blood cell regions; then a marker-controlled watershed algorithm is used to segment the region of interest, and the nucleus and cytoplasm are subsequently separated. In the identification step, a set of color, texture and morphological features is extracted from both nucleus and cytoplasm regions. Next, the performance of a random forest classifier on a set of microscopic images is compared and evaluated. The obtained results reveal high accuracies for both the segmentation and classification stages.
    Keywords: white blood cells; cells segmentation; cells classification; color transformation; texture features; morphological features; peripheral blood images; bone marrow images.

  • Review on Next Generation Wireless Power Transmission Technology for Implantable Biomedical Devices   Order a copy of this article
    by Saranya N., Kesavamurthy T. 
    Abstract: Wireless Power Transmission (WPT) is a promising technology that is bringing drastic changes to the biomedical field, especially for implantable medical devices such as pacemakers, cardiac defibrillators and cochlear implants. Traditional implantable biomedical devices get their power supply from batteries or from lead wires passing through the skin, which not only increases the burden on the patient but also increases the pain and risk of surgery. To reduce the cost of biomedical devices, the risk of wire snapping and the need for periodic surgery to replace batteries, wireless delivery of energy to these devices is desirable. WPT is capable of addressing these limitations of implantable devices: it negates the risk of infection due to cables passing through the skin, removes the need for recurrent surgeries to replace batteries, and minimizes the size of the device by excising bulky components such as batteries. This paper provides an overview of the history of wireless power transmission, the basic principle of WPT, and recent research and developments in implantable biomedical applications.
    Keywords: Wireless Power Transmission (WPT); Implantable Medical Devices (IMD); inductive coupling; resonance coupling; rectenna; Implantable Cardioverter Defibrillator (ICD); EEG recorder.

  • The Development of a Brain Controlled Interface Employing Electroencephalography to Control a Hand Prosthesis   Order a copy of this article
    by Mahonri Owen, Chikit Au 
    Abstract: The aim of this report is to test the feasibility of using electroencephalography (EEG) to control a prosthetic hand employing an adaptive grasp. The purpose of this report is to support work relating to the project The Development of a Brain Controlled Interface for Trans-radial Amputees. This work is based on the idea that an EEG-based control scheme for prosthetics is credible in the current technological climate. The presented experiments investigate the efficiency and usability of the control scheme by testing response times and grasp accuracy. Response times are determined by user training and control method, while grasp accuracy relies on the effectiveness of the support vector machine used in the control scheme. The outcome of the research is promising and has the potential to provide amputees with an intuitive and easy-to-use method to control a prosthetic device.
    Keywords: electroencephalography; prosthetic control; neural prosthetics.

  • A Near Lossless Three-Dimensional Medical Image Compression Technique using 3D-Discrete Wavelet Transform   Order a copy of this article
    by Boopathiraja S, Kalavathi Palanisamy 
    Abstract: In the field of medical image processing, several imaging modalities have emerged, and with their evolutionary growth we can obtain high-quality images that demand large amounts of storage space as well as bandwidth for transmission. Therefore, there is always a great need to develop compression techniques for such images. Moreover, these modalities produce 3-dimensional images, which can either be divided into slices and processed as 2D images or handled directly as volumes. Medical image processing operations such as segmentation and feature extraction therefore need to converge on 3D space, which is essential and has broad future scope. In this paper, we provide a wavelet-based image compression technique that can be applied directly to 3D images. We apply the 3D DWT to a 3D volume, and the resultant coefficients are taken through further compression steps (thresholding and entropy encoding). The inverse processes are performed in decompression to reconstruct the images. The effects of different wavelets on the results are assessed, and the results of the lossy mode are compared with those of the lossless mode.
    Keywords: 3D Medical Image; 3D-DWT; Huffman coding; Near lossless Compression.
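The core transform step can be illustrated with a one-level separable 3D Haar DWT, the simplest wavelet choice; the paper's actual wavelets may differ, and the averaging/differencing convention here is one of several. Perfect reconstruction through the inverse confirms the lossless path:

```python
import numpy as np

def haar3d(vol):
    # One level of a separable 3D Haar DWT: average/difference each pair of
    # samples along each axis in turn, producing 8 sub-bands.
    out = vol.astype(float)
    for axis in range(3):
        a = np.take(out, range(0, out.shape[axis], 2), axis=axis)
        b = np.take(out, range(1, out.shape[axis], 2), axis=axis)
        out = np.concatenate([(a + b) / 2.0, (a - b) / 2.0], axis=axis)
    return out

def ihaar3d(coef):
    # Undo the axes in reverse order; x0 = lo + hi, x1 = lo - hi.
    out = coef
    for axis in range(2, -1, -1):
        h = out.shape[axis] // 2
        lo = np.take(out, range(h), axis=axis)
        hi = np.take(out, range(h, 2 * h), axis=axis)
        merged = np.empty_like(out)
        sl = [slice(None)] * 3
        sl[axis] = slice(0, None, 2); merged[tuple(sl)] = lo + hi
        sl[axis] = slice(1, None, 2); merged[tuple(sl)] = lo - hi
        out = merged
    return out

vol = np.random.default_rng(0).random((8, 8, 8))
rec = ihaar3d(haar3d(vol))
```

Thresholding small detail-band coefficients before entropy coding turns this lossless pipeline into the lossy mode the abstract compares against.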

  • Feature extraction and Classification of ECG Signals with Support Vector Machines and Particle Swarm Optimization   Order a copy of this article
    by Gandham Sreedevi, Bhuma Anhuradha 
    Abstract: The present work aims to present a thorough experimental study showing the superior generalization capability of the support vector machine (SVM) approach in the classification of electrocardiogram (ECG) signals. Feature extraction is performed using principal component analysis (PCA). Further, a novel classification system based on particle swarm optimization (PSO) is used to improve the generalization performance of the SVM classifier. For this purpose, we optimize the SVM classifier design by searching for the best values of the parameters that tune its discriminant function, and upstream by looking for the best subset of features to feed the classifier. The obtained results clearly confirm the superiority of the SVM approach over traditional classifiers, and suggest that further substantial improvements in classification accuracy can be achieved by the proposed PSO-SVM classification system.
    Keywords: ECG; PCA; PSO; SVM; Arrhythmias; Classification.
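A global-best PSO loop of the kind used to tune SVM hyperparameters can be sketched as below. Since the actual cross-validation error surface is not available here, a smooth stand-in function over (log10 C, log10 gamma) with a known optimum at C = 10, gamma = 0.01 plays its role; all constants are illustrative:

```python
import numpy as np

def pso(objective, lo, hi, n_particles=20, iters=60, seed=0):
    # Global-best PSO: inertia plus cognitive/social pulls toward the
    # personal-best and swarm-best positions.
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better] = pos[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Hypothetical stand-in for the SVM cross-validation error surface over
# (log10 C, log10 gamma); its minimum sits at C = 10, gamma = 0.01.
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = pso(err, lo=np.array([-2.0, -4.0]), hi=np.array([3.0, 1.0]))
C, gamma = 10.0 ** best
```

In the real system, `err` would be replaced by the SVM's cross-validation error, and the particle encoding could be extended with binary feature-selection bits.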

  • An efficient and optimized system for detection of cancerous cells in tongue   Order a copy of this article
    by Mahnoor Rasheed, Ishtiaq Ahmad, Sumbal Zahoor, Muhammad Nasir Khan 
    Abstract: Major progress in image processing allows us to make large-scale use of medical imaging data to provide better detection and treatment of several diseases. Cancer is one of them, causing around 1.7 million deaths every year; advanced and precise detection can prevent severe complications. Tongue cancer is relatively rare and has drawn the attention of medical research groups in recent times. In this research work, an efficient tongue cancer detection system is proposed that operates in two phases. Initially, advanced filtering techniques are used to remove the noise content of microscopic images of tissue cultures from the body of the subject to be diagnosed. Image information is enhanced in a pre-processing step that suppresses undesirable distortion and enhances some image features for easier and faster assessment. In the next phase, the image is segmented in a manner that abstracts significant material and characteristics of the image. The detection of the affected part is performed using Otsu thresholding, k-means clustering and marker-controlled watershed segmentation techniques, and the performance and limitations of these schemes are compared and discussed. The simulation results show that the marker-controlled watershed offers the best segmentation and detection.
    Keywords: tongue cancer detection; k-means clustering; marker controlled watershed segmentation; Otsu thresholding.
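Otsu thresholding, the first of the three segmentation schemes compared, picks the grey-level cut that maximises the between-class variance of the histogram. A minimal sketch on a synthetic bimodal intensity sample (the data is invented; real inputs would be the microscopic tissue images):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    # Exhaustively pick the threshold maximising between-class variance.
    hist, edges = np.histogram(image, bins=bins)
    prob = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = prob[:k].sum(), prob[k:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (prob[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Synthetic bimodal intensities: dark background plus a bright affected region
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 5, 5000), rng.normal(200, 5, 1000)])
t = otsu_threshold(img)
```

The resulting binary mask would then typically feed morphological cleanup or serve as a baseline against the k-means and watershed results.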

  • Classification and Quantitative Analysis of Histopathological Images of Breast Cancer   Order a copy of this article
    by Anuranjeeta Anuranjeeta, Romel Bhattracharjee, Shiru Sharma, K.K. Shukla 
    Abstract: This paper provides a robust and reliable computational technique for the classification of benign and malignant cells using morphological features extracted from histopathological images of breast cancer through image processing. Morphological features of malignant cells show changed patterns compared to those of benign cells. However, manual analysis is time-consuming and varies with the perception level of the expert pathologist. To assist pathologists in this analysis, morphological features are extracted, and two datasets are prepared from group-cell and single-cell images for the benign and malignant categories. Finally, classification is performed using supervised classifiers. Morphological feature analysis has always been considered an important tool for analyzing abnormality in the cellular organization of cells. In the present investigation, three classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbour (k-NN) and Support Vector Machine (SVM), are trained using publicly available breast cancer datasets. Performance indicators for benign and malignant images were computed from the true positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts. Using the number of samples that fall into these classes, the performance parameters accuracy, sensitivity, specificity, balanced classification rate (BCR), F-measure and Matthews' correlation coefficient (MCC) are calculated. The statistical measures of sensitivity and specificity are summarized by the area under the Receiver Operating Characteristic (ROC) curve. It is found that the classification accuracy achieved with the single-cell dataset is better than with the group-cell dataset. Furthermore, it is established that ANN provides better results for both datasets than the other two classifiers (k-NN and SVM). The proposed computer-aided diagnosis system for the classification of benign and malignant cells provides better accuracy than other existing methods.
    Keywords: Segmentation; Cancer; Morphological features; Histopathology; Classification.
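The performance indicators listed in the abstract all derive from the four confusion-matrix counts. A compact reference implementation; the counts in the example are invented, not the paper's results:

```python
import math

def classifier_metrics(tp, tn, fp, fn):
    # The indicator set used to compare ANN, k-NN and SVM classifiers
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    bcr = (sensitivity + specificity) / 2.0   # balanced classification rate
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, bcr=bcr,
                f_measure=f_measure, mcc=mcc)

m = classifier_metrics(tp=45, tn=40, fp=5, fn=10)
```

BCR and MCC matter here because histopathology datasets are often class-imbalanced, where plain accuracy can mislead.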

  • Semiautomated detection of aortic root in human heart MSCT images using nonlinear filtering and unsupervised clustering   Order a copy of this article
    by Antonio Bravo 
    Abstract: In this research a semiautomatic technique to detect the aortic root in three-dimensional (3-D) cardiac images is proposed. This technique consists of three steps: conditioning, filtering, and detection. The cardiac images are acquired with 64-slice multislice computerized tomography (MSCT) technology. The conditioning step is based on multiplanar reconstruction (MPR) and is required in order to reformat the cardiac volume information to planes orthogonal to the aortic root. During the filtering step, a set of nonlinear filtering techniques based on similarity enhancement, median and weighted median filters is considered to reduce noise and enhance the cardiac edges in the reformatted images. In the detection step, the filtered volumes are subsequently processed with an unsupervised clustering technique based on simple linkage region growing. The Dice score, the symmetric point-to-mesh distance and the symmetric Hausdorff distance are used as metric functions to compare the segmentations obtained using the proposed method with ground truth volumes traced by a cardiologist. A clinical dataset of 90 three-dimensional images from 45 patients is used to validate the detection technique. From this validation stage, the maximum average Dice score (0.92), the minimum average symmetric point-to-mesh distance (0.96 mm) and the minimum average symmetric Hausdorff distance (4.80 mm) are obtained when segmenting volumes preprocessed with similarity enhancement.
    Keywords: Human heart; aortic root; multislice computerized tomography; segmentation; similarity enhancement; weighted median; unsupervised clustering.
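The Dice score used in the validation is simple to state in code; the sketch below applies it to a toy pair of binary volumes (the shapes are invented, standing in for the cardiologist-traced and automatically detected aortic roots):

```python
import numpy as np

def dice(a, b):
    # Dice similarity: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

truth = np.zeros((10, 10, 10), dtype=bool)
truth[2:8, 2:8, 2:8] = True        # stand-in for the traced aortic root
auto = np.zeros_like(truth)
auto[3:8, 2:8, 2:8] = True         # automatic detection, one slice short
```

The point-to-mesh and Hausdorff distances complement Dice by penalising boundary errors that volume overlap alone can hide.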

  • Finite Element Analysis of Tibia Bone   Order a copy of this article
    by Pratik Thakre, K.S. Zakiuddin, I.A. Khan, M.S. Faizan 
    Abstract: The tibia, also known as the shank bone, is the larger and stronger of the two bones in the leg below the knee in vertebrates (the other being the fibula), and it connects the knee with the ankle bones. The leg bones are the strongest long bones, as they support the rest of the body. The support and movement provided by the tibia are essential to many activities performed by the legs, including standing, walking, running, jumping and supporting the body's weight. This research was directed towards a study of the lower limb of the human body through 3-D modeling and finite element analysis of the tibia. Finite element analysis is used to evaluate the stresses and displacements of the human tibia bone under physiological loading. Three-dimensional finite element models were obtained using computed tomography (CT scan) data, which provide a thorough description of the material properties of bone and the density of bone tissues, both essential for creating accurate and realistic geometry of a bone structure. In this study, CT scan data of patients' tibia bones (a healthy tibia, and a fractured tibia at one month and at two months after surgery) are used to develop three-dimensional finite element models of the left proximal tibia, and half of an average body weight of 80 kg, i.e. 37.53 kg (368.16 N), is applied to the tibia bone. Finite element analysis is conducted to calculate the equivalent von Mises stress, maximum principal stress, total deformation and fatigue for the whole proximal tibia bone, and the results are compared across the three bones (healthy, one month after surgery and two months after surgery). These results provide a sound foundation for further studies of bone injury prevention, bone transplant model integrity and validity, and subject-specific fracture mechanisms.
    Keywords: Tibia; Fibula; Stress Analysis; Material Properties; Displacement; Modeling; Simulation; Finite Element Analysis; Hypermesh; Embodi 3D.

  • ANALYSIS AND CLASSIFICATION OF KIDNEY ULTRASOUND IMAGES USING SIFT FEATURES WITH NEURAL NETWORK CLASSIFIER   Order a copy of this article
    by Mangayarkarasi Thirunavukkarasu, Najumnissa Jamal 
    Abstract: A unique method to analyze ultrasound scan images of the kidney and classify renal abnormalities using SIFT features and an artificial neural network is presented in this paper. The ultrasound kidney images are classified into 4 classes: normal, cyst, calculi and tumor. Kidney images obtained from the scan centre and urologist, together with specialist knowledge for predicting common abnormalities, are utilized as inputs for the processing and analysis of the ultrasound images. Preprocessing and denoising techniques are applied for the removal of speckle noise using median and Wiener filters. A fuzzy C-means clustering based segmentation technique is used to obtain the ROI; 50 ROIs are extracted based on the above method. A set of statistical features is initially obtained. Second-order statistical features, the GLCM features, which give information about inter-pixel relationships, periodicity and spatial gray level dependencies, are then computed. To overcome the operator dependency of the ultrasound scanning procedure, the rotation-invariant SIFT (Scale Invariant Feature Transform) algorithm is applied and SIFT features are obtained. A total of 182 features for the normal images, and 350 GLCM features, 250 statistical features and 42 SIFT features for the abnormal images, are calculated. Abnormalities are classified using a supervised learning algorithm (ANN). With 10 hidden neurons, the reported classification accuracy is reached for the test input images.
    Keywords: ultrasound scan images; speckle noise; median filter; wiener filter; SIFT features; GLCM; fuzzy C-means segmentation; artificial neural network.
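The second-order GLCM features mentioned above come from a co-occurrence matrix of quantised grey levels at a fixed pixel offset. A compact sketch with contrast, energy and homogeneity (the images here are synthetic, and the study's feature set is larger):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    # Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    # normalised so that entries are joint probabilities.
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    i, j = np.indices(g.shape)
    return {
        'contrast': (g * (i - j) ** 2).sum(),      # local intensity variation
        'energy': (g ** 2).sum(),                  # textural uniformity
        'homogeneity': (g / (1.0 + np.abs(i - j))).sum(),
    }

flat = np.zeros((16, 16), dtype=int)               # perfectly uniform texture
noisy = np.random.default_rng(2).integers(0, 8, (16, 16))
```

Averaging such features over several offsets and directions gives the rotation-tolerant texture description that typically feeds the ANN classifier.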

  • Development of Comfort Fit lower limb Prosthesis by Reverse Engineering and Rapid prototyping methods and validated with GAIT analysis   Order a copy of this article
    by Chockalingam K, Jawahar N, Muralidharan N, Balachandar Kandeepan 
    Abstract: The development of a comfort-fit, custom-made lower limb prosthesis using the concepts of reverse engineering (RE) and rapid prototyping (RP) is introduced in this paper. A comparison of the average percentage deviation in step length was also made, through gait analysis, between a normal person, the conventional (plaster of Paris, POP) prosthesis and the reverse-engineered rapid-prototyped prosthesis. The gait analysis reveals that the average percentage deviation in step length is 2.80 for the normal person, 53.70 for the conventional (POP) prosthesis and 7.06 for the reverse-engineered rapid-prototyped prosthesis. The difference in average percentage step-length deviation between the normal person and the amputee wearing the reverse-engineered rapid-prototyped prosthesis is very small (about 4.26), and hence the prosthesis provides a comfortable fit.
    Keywords: Lower limb prosthesis; Reverse engineering; Rapid prototyping; Gait analysis.

  • Efficient T2 Brain Region Extraction Algorithm Using Morphological Operation And Overlapping test from 2D and 3D MRI images   Order a copy of this article
    by Vandana Shah 
    Abstract: In the field of magnetic resonance imaging (MRI) processing, image segmentation is an important and challenging problem in image analysis. The main purpose of segmentation in MRI images is to diagnose deviations from normal brain anatomy and to find the location of a tumour. Many algorithms have been proposed in recent years to segment medical images and identify diseases. This paper proposes a novel 3D Brain Extraction Algorithm (3D-BEA) for segmentation of MRI images to extract the exact brain region. Transverse relaxation time (T2) weighted images are used as input for the development of the algorithm, as these images provide bright compartments and dark fat tissues in the MRI brain region. The images are first denoised and smoothed for further processing. They are then used for the extraction of irregular brain masks through thresholding, which are compared with the upper and lower slices of the brain images using morphological operations. The final brain volume is generated using this 3D-BEA process. The result of the proposed algorithm is validated by comparison with the results of existing segmentation algorithms used for the same purpose. The proposed algorithm will help medical experts to understand and diagnose the tumour area of the patient.
    Keywords: segmentation; morphological operations; clustering; k-means clustering; fuzzy c means clustering; Brain Extraction Algorithm.
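As a rough single-slice illustration of the threshold-plus-morphology idea (a sketch only, not the 3D-BEA algorithm, which also compares adjacent slices), assuming SciPy's `ndimage` module; the threshold value and the toy image are illustrative:

```python
import numpy as np
from scipy import ndimage

def extract_brain_mask(slice2d, threshold):
    """Threshold the slice, keep the largest connected component,
    then clean the mask with morphological operations."""
    mask = slice2d > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)            # largest component
    mask = ndimage.binary_fill_holes(mask)             # close interior gaps
    return ndimage.binary_opening(mask, iterations=1)  # trim thin attachments

slice2d = np.zeros((10, 10))
slice2d[2:8, 2:8] = 100.0      # "brain" blob
slice2d[0, 9] = 100.0          # bright non-brain speck
mask = extract_brain_mask(slice2d, threshold=50.0)
```

The largest-component step discards isolated bright non-brain structures, while the opening removes thin bridges such as skull-to-brain connections.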

  • Secure Agent Based Diagnosis and Classification Using Optimal Kernel Support Vector Machine   Order a copy of this article
    by Kiran Tangod, Gururaj Kulkarni 
    Abstract: Diabetes is a serious, complex condition which can affect the entire body. Diabetes requires daily self-care, and if complications develop it can have a significant impact on quality of life and can reduce life expectancy. The existing multi-agent-based diabetes diagnosis and classification methods require a number of agents, and communication between those agents causes time complexity issues. Due to these issues, the existing methods are not applicable in emergency situations; to overcome them, it is essential to reduce the number of agents. Our proposed method requires only three agents: a user agent, a security agent and an updation agent. Initially, the user agent collects the user's symptoms and encrypts them. The encrypted symptoms are then directed to the updation agent. For secure communication between the user and updation agents, a Twofish-based encryption algorithm is used. After receiving the encrypted data from the security agent, the updation agent determines whether the user's diabetes level is normal or abnormal. Our proposed technique uses an Optimal Kernel Support Vector Machine (OKSVM) algorithm to classify the diabetes level, in which the traditional kernel function is optimized with the help of the Sequential Minimal Optimization (SMO) algorithm. Based on the optimal kernel, the technique effectively prescribes drugs for the corresponding user. The proposed method will be implemented on the Java platform.
    Keywords: Multi-Agent Systems (MAS); Diabetes; Two Fish Based Encryption Algorithm; Optimal Kernel Support Vector Machine algorithm (OKSVM); Sequential Minimal Optimization (SMO).
    DOI: 10.1504/IJBET.2018.10013947
     
  • Computational Study on the Effect of Human Respiratory Wall Elasticity.   Order a copy of this article
    by Vivek Kumar Srivastav, Rakesh Kumar Shukla, Akshoy Ranjan Paul, Anuj Jain 
    Abstract: The present study computationally investigates the respiratory wall behaviour and airflow characteristics inside the trachea and first-generation bronchi, considering fluid-structure interaction (FSI) between the airflow and the human respiratory wall. The human respiratory tract model is constructed using computed tomography (CT) scan data. The objective is to investigate the effect of the elasticity of the respiratory wall on the deformation and stresses induced in the respiratory tract during inhalation. The deformation in the respiratory tract is found to be insignificant for an elasticity modulus above 40 kPa, while a considerable amount of deformation is observed when the elasticity modulus is below 30 kPa. The internal flow physics is compared between rigid and compliant (flexible) human respiratory tract models. The flexibility of the respiratory tract wall decreases the maximum flow velocity by 24%, and the wall shear stress in the compliant respiratory model is reduced to one-third of that in the rigid model. The comparison of the rigid and compliant models shows that the FSI technique offers more realistic results than conventional computational fluid dynamics (CFD) analysis of a rigid tract. The simulated results suggest that it is essential to include respiratory wall deformability in the computational model to obtain realistic results, which will help medical practitioners to correlate clinical findings with more accurate computational results.
    Keywords: Human respiratory tract; rigid model; compliant model; Fluid structure interaction (FSI); Elasticity modulus.

  • Experimental Studies on Acrylic Dielectric Elastomers as Actuator for Artificial Skeletal Muscle Application   Order a copy of this article
    by Yogesh Chandra, Anuj Jain, R.P. Tewari 
    Abstract: The application of an acrylic dielectric elastomer, an electrically actuated electroactive polymer, for artificial muscles has been investigated to evaluate its suitability for prosthetic and orthotic devices, by comparing its mechanical and electrical properties with those of natural skeletal muscles. Experimental studies have been performed to obtain the properties of acrylic dielectric elastomers for the design and development of artificial skeletal muscles. A commercially available acrylic dielectric elastomer film, VHB 4910 by 3M, is subjected to uniaxial tensile tests at varying rates, a stress relaxation test and a loading-unloading test on a universal testing machine, and undergoes an electrical actuation test after pre-straining. The results of these tests are discussed separately and then compared with previous research on skeletal muscles. The material response is also observed to be highly viscous and hyper-elastic, i.e., sensitive to very high strain rates, as is the case for skeletal muscles. These results can be utilized in material selection to develop dielectric elastomer actuators for artificial muscle applications.
    Keywords: dielectric elastomers; electrical actuation; artificial muscles; stress relaxation; pre-straining; strain rate.

  • A Comparison of Detrend Fluctuation Analysis, Gaussian Mixture Model and Artificial Neural Network Performance in detection of Microcalcification from Digital Mammograms   Order a copy of this article
    by Sannasi Chakravarthy S R, Harikumar Rajaguru 
    Abstract: This paper presents a computer-aided approach that classifies the type of cancer (benign or malignant) and its associated risk from digital mammogram images. Twelve statistical features are extracted through five different wavelets (Daubechies, Haar, biorthogonal splines, Symlets and DMeyer) at decomposition levels 4 and 6. The Mammographic Image Analysis Society (MIAS) database is utilized in this paper. Micro-calcifications in the digital mammogram images are detected by Detrend Fluctuation Analysis (DFA), a Gaussian Mixture Model (GMM) and an Artificial Neural Network (ANN). The classifiers' performance is analyzed and compared based on benchmark parameters such as sensitivity, selectivity, precision and accuracy. The GMM classifier outperforms the DFA and ANN classifiers in terms of these performance metrics.
    Keywords: mammogram images; breast cancer; wavelet; detrend; gaussian mixture; neural network; classification.

  • Automatic Detection of Tuberculosis based on Adaboost Classifier and Genetic Algorithm   Order a copy of this article
    by BEAULAH JEYAVATHANA RAJENDRAN, Balasubramanian R 
    Abstract: Tuberculosis is one of the most common diseases in developing countries. Early-stage diagnosis of tuberculosis plays a significant role in curing TB patients. The work presented in this paper is focused on the design and development of a system for the detection of tuberculosis in CT lung images. The disease can be diagnosed easily by radiologists with the help of an automated tuberculosis detection system. The main objective of this paper is to obtain the best solution, selected by means of genetic programming, as the optimal feature descriptor. Five stages are used to detect tuberculosis: pre-processing, segmentation, feature extraction, feature selection and classification. These stages are used in medical image processing to enhance TB identification. In the feature extraction stage, wavelet-based statistical texture features are extracted, and from the extracted feature sets the optimal features are selected by a genetic algorithm. Finally, the Adaboost classifier is used for image classification. Experiments were conducted and intermediate results obtained. Experimental results show that Adaboost is a good classifier, giving an accuracy of 95% for classifying TB-affected and non-affected lungs using wavelet-based statistical texture features.
    Keywords: Tuberculosis; Otsu's method; GLCM approach; Genetic Algorithm; Adaboost classifier.
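The genetic-algorithm feature selection stage can be sketched as a toy bit-mask search. Everything below is an illustrative assumption: the additive `scores` stand in for per-feature usefulness, whereas in the paper the fitness would be the classifier's accuracy on the selected wavelet-texture subset.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, scores, penalty=0.01):
    """Toy fitness: reward informative features, penalize subset size."""
    if not mask.any():
        return -1.0
    return scores[mask].sum() - penalty * mask.sum()

def ga_select(scores, pop=20, gens=60, p_mut=0.05):
    """Selection (keep top half), one-point crossover, bit-flip mutation."""
    n = len(scores)
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        fits = np.array([fitness(ind, scores) for ind in population])
        parents = population[np.argsort(fits)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    fits = np.array([fitness(ind, scores) for ind in population])
    return population[np.argmax(fits)]

# Hypothetical per-feature usefulness: features 0-2 informative, rest noise.
scores = np.array([0.5, 0.4, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0])
best = ga_select(scores)
```

Keeping the top half of each generation (elitism) guarantees the best fitness never decreases, which is why this toy landscape converges quickly.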

  • MRI Images Segmentation using Improved Spatial FCM Clustering and Pillar Algorithms   Order a copy of this article
    by Boucif Beddad, Kaddour Hachemi, Sundarapandian Vaidyanathan 
    Abstract: The segmentation of brain tissue from MRI images is a vast subject of study and has many applications in medicine. The main objective of this work is to develop a new segmentation technique based on a combination of the Pillar algorithm and spatial fuzzy c-means clustering. The proposed approach applies FCM clustering to image segmentation after being optimized by the Pillar algorithm in terms of initial-centre precision and computational time. The features of the segmented brain image are extracted into different classes (white matter, WM; gray matter, GM; and cerebrospinal fluid, CSF), providing partially or fully automated tools for correct extraction of the cerebral tissue. The developed algorithm has been implemented and the program is run through Simulink. All experimental results are satisfactory, which indicates that combining several segmentation algorithms helps to obtain better results and improves the classification.
    Keywords: Brain MRI; Image Processing; Pillar Algorithm; Segmentation; Spatial Fuzzy C-Mean Clustering.
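The standard (non-spatial) fuzzy c-means updates at the core of the method can be sketched as follows; the spatial penalty term and the Pillar-based initialization of the paper are omitted, and the deterministic initialization and 1-D toy data are illustrative assumptions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50):
    """Fuzzy c-means: alternate the membership update
    u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)) and the centroid update
    v_i = sum_k u_ik^m x_k / sum_k u_ik^m."""
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)      # fuzzy memberships
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers

# Two well-separated 1-D "tissue intensity" clusters around 0 and 10.
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
u, centers = fcm(X, c=2)
```

For brain segmentation, `X` would hold voxel intensities and `c = 3` would target the WM/GM/CSF classes described above.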

  • Searching for cell signatures in multidimensional feature spaces   Order a copy of this article
    by Romuere Silva, Flavio Araujo, Mariana Rezende, Paulo Oliveira, Fatima Medeiros, Rodrigo Veras, Daniela Ushizima 
    Abstract: Despite research on cervical cells since 1925, systems to automatically screen images from conventional Pap smear tests remain unavailable. One of the main challenges in deploying precise software tools is validating cell signatures. In this paper, we introduce an analysis framework, CRIC-feat, that expedites the investigation of different image databases and their descriptors, and is particularly applicable to Pap images. This paper provides a three-fold contribution: (a) we review and discuss the main feature extraction protocols for cell description and implementations suitable for cervical cells, (b) we present a new application of Gray Level Run Length (GLRLM) features to Pap images and (c) we evaluate 93 cell classification approaches and provide a guideline for obtaining the most accurate description, based on two current public databases of digital images of real cells. Finally, we show that nucleus information is preponderant in cell classification, particularly when considering the GLRLM feature set.
    Keywords: Medical Image; Feature extraction; Cervical cells; Quantitative Microscopy; Cell Descriptors; Classification.
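Since the GLRLM application is a stated contribution, a minimal pure-NumPy sketch of a horizontal run-length matrix and one classic feature (short-run emphasis) may clarify the idea; the toy image, direction and level count are illustrative, not the authors' implementation:

```python
import numpy as np

def glrlm(img, levels, max_run):
    """Horizontal Gray Level Run Length Matrix: R[g, r-1] counts
    runs of gray level g with length r along each row."""
    R = np.zeros((levels, max_run), dtype=int)
    for row in img:
        run, prev = 1, row[0]
        for v in row[1:]:
            if v == prev:
                run += 1
            else:
                R[prev, min(run, max_run) - 1] += 1
                run, prev = 1, v
        R[prev, min(run, max_run) - 1] += 1    # close the final run
    return R

def short_run_emphasis(R):
    """SRE: emphasizes fine texture (many short runs)."""
    runs = np.arange(1, R.shape[1] + 1)
    return float((R / runs ** 2).sum() / R.sum())

img = np.array([[0, 0, 1, 1],
                [2, 2, 2, 2],
                [0, 1, 0, 1]])
R = glrlm(img, levels=3, max_run=4)
```

The alternating third row produces four length-1 runs, so it dominates the short-run emphasis, while the uniform second row contributes one long run.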

  • Skull Stripping of Brain MRI for Analysis of Alzheimer's Disease   Order a copy of this article
    by DULUMANI DAS, SANJIB KUMAR KALITA 
    Abstract: MRI is a widely used imaging technique that helps to detect different brain abnormalities such as brain tumors, Alzheimer's disease and brain stroke. Skull stripping of the MR brain image is a preliminary step in neuroimaging. An MR brain image contains non-brain tissues such as skull, skin, fat, muscle and neck, which complicate further analysis. It is therefore essential to remove these non-brain tissues before detailed analysis; this process is referred to as skull stripping. As the non-brain tissues are removed, the area of the brain is reduced. The aim of this work is to extract, by removing the non-brain tissues, a main brain region of adequate area for analysis. The main problem in skull stripping is differentiating brain tissues from non-brain tissues due to their intensity inhomogeneity. In this work, an automatic skull stripping method based on morphological operations is analyzed. Initially, MR images are segmented using entropy-based thresholding. To find a precise threshold for brain tissues, five entropy-based thresholding methods are analyzed: the maximum entropy sum method, the entropic correlation method, Rényi's entropy, Tsallis entropy and a modified Tsallis entropy method; these are compared with Otsu's thresholding method. The final skull-stripped image is obtained after performing morphological operations. In the present study, 50 T1-weighted coronal MR images are considered for the experiment. The experiment shows that the skull-stripped brain obtained using the modified Tsallis threshold gives an adequate area of interest. The accuracy obtained with this method is 80.4%. The extracted brain is further analyzed for three features to diagnose Alzheimer's disease: perimeter, hole size and boundary distance.
    Keywords: skull stripping; Alzheimer’s disease; MRI; entropy based thresholding; mathematical morphology.
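Among the thresholding methods compared, Otsu's method has a compact closed form; a minimal sketch follows (the 8-bit histogram and the bimodal toy image are illustrative assumptions):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: exhaustively pick the threshold t that maximizes
    the between-class variance w0*w1*(mu0 - mu1)^2 of the two classes."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    g = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (g[:t] * p[:t]).sum() / w0    # class means
        mu1 = (g[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy "slice": dark tissue at 10, bright skull at 200.
img = np.array([10] * 60 + [200] * 40)
t = otsu_threshold(img)
```

The entropy-based methods in the paper replace the between-class variance criterion with entropy measures computed from the same normalized histogram.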

  • Epileptogenic neurophysiological feature analysis based on an improved neural mass model   Order a copy of this article
    by Zhen Ma 
    Abstract: To elucidate the neurophysiological mechanisms underlying seizures according to electroencephalogram (EEG) signals, a neurophysiologically-based neural mass model that can produce EEG-like signals was adopted to simulate ictal and interictal EEG signals. A delay unit and a gain unit were added to the Wendling model to fit EEG signals in the time domain. An optimal parameter set minimizing the error between observed and simulated EEG was identified using a genetic algorithm. To compare the inhibition and excitation during the ictal and interictal periods, the model parameters were determined for two sets of EEG signals using the proposed method. The results show that the model with identified parameters can simulate the real EEG signal well, with a mean square error of 0.0315–0.2138. Fifty repetitions for every selected EEG signal showed that the dispersion of the identified parameters was small in most cases, and the identification procedure generally showed similar values. Comparison of the model parameters of seizure and non-seizure EEG signals showed enhanced excitability, attenuated inhibition, and a more concentrated energy distribution in the frequency domain during the ictal periods. The experimental results for long-term EEG signals revealed continuous changes in the model parameters during epileptic seizures.
    Keywords: EEG; neural mass model; genetic algorithm; fitting.

  • A hybrid multimodal biometric scheme based on face and both irises integration for person authentication.   Order a copy of this article
    by Larbi Nouar, Nasreddine Taleb, Miloud Chikr El Mezouar 
    Abstract: In the biometric field, mono-modal biometrics suffers from multiple limitations. The use of multiple biometrics has helped overcome those limitations and has allowed multi-biometric systems to outperform single biometrics in terms of recognition rate. In this paper, we propose a new approach that fuses the Gabor-Wigner transform and oriented Gabor phase information for feature extraction, as well as a hybrid scheme consisting of multi-algorithm, multi-instance and multi-modal systems that integrates the face and both the left and right irises of the same subject. The SDUMLA-HMT database has been used to evaluate the proposed approach. The results show that our multi-modal biometric system achieves higher accuracy than the single-biometric approaches as well as other existing multi-biometric systems.
    Keywords: biometrics; multi-biometric systems; iris recognition; face recognition; fusion; multi-algorithm; DET curve; feature extraction.

  • Novel Slope Based Onset Detection Algorithm for Electromyographical signals   Order a copy of this article
    by Vinayak Bairagi, Archana Kanwade 
    Abstract: Electromyography (EMG) is a technique for acquiring the neuromuscular activity of a muscle. Onset and offset give information about the activation and deactivation timings of motor units. This paper proposes a novel slope-based algorithm for onset and offset detection. EMG data are collected from different muscles of different subjects using surface EMG electrodes. The data are divided into smaller windows, and the Average Instantaneous Amplitude (AIA) and slope are calculated for each window. A threshold is set to avoid baseline noise. The maximum and minimum slopes are then detected as the onset and offset, respectively. The results are more accurate than those of the single-threshold and double-threshold methods. Accuracy increases with computational complexity (arithmetic calculations) when compared with a Root Mean Square (RMS) based algorithm. The only limitation is a decrease in accuracy if the signal is acquired between two muscle contractions. The proposed slope-based onset detection algorithm thus offers a trade-off between accuracy and computational complexity.
    Keywords: Electromyography; Onset; Offset; Slope.
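The windowed AIA-and-slope idea can be sketched as follows; this is a simplified illustration (the window length, the clean synthetic burst, and the omission of the baseline-noise threshold are assumptions), not the authors' algorithm:

```python
import numpy as np

def onset_offset(emg, fs, win_ms=50):
    """Slope-based detection sketch: compute the Average Instantaneous
    Amplitude (AIA) per window, then take the steepest rise of the AIA
    profile as onset and the steepest fall as offset."""
    win = max(1, int(fs * win_ms / 1000))
    n = len(emg) // win
    aia = np.abs(emg[: n * win]).reshape(n, win).mean(axis=1)
    slope = np.diff(aia)                    # slope between adjacent windows
    onset = (np.argmax(slope) + 1) * win    # steepest rise, in samples
    offset = (np.argmin(slope) + 1) * win   # steepest fall, in samples
    return onset, offset

emg = np.zeros(1000)        # 1 s at fs = 1000 Hz
emg[300:700] = 1.0          # a single activation burst
onset, offset = onset_offset(emg, fs=1000)
```

On real surface EMG the baseline threshold mentioned in the abstract would be applied first, so that slope extrema inside the noise floor are never selected.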

  • Performance Evaluation of De-noised Medical Images after Removing Speckled Noise by Wavelet Transform   Order a copy of this article
    by Arun Kumar, M.A. Ansari 
    Abstract: In the healthcare community, the quality of the medical image is of prime concern for making accurate observations for diagnosis. Different types of noise, such as Gaussian noise, impulse noise and speckle noise, have been observed as main causes of quality degradation in medical images. This degradation may lead to inconsistent information for diagnosis, which directly affects the patient's life. Removing noise from the medical image while maintaining its quality has become a very tough task for researchers and practitioners in the field of medical image processing. This paper presents a comparative performance evaluation of various orthogonal and biorthogonal wavelet filters commonly used for de-noising, based on statistical parameters such as mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and correlation coefficient. The results of the present study show that the biorthogonal 3.9 wavelet filter provides a more precise image after noise removal than the other wavelet filters used.
    Keywords: Wavelet filters; Feature extraction; Speckle noise; Soft threshold; Performance evaluation.
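The soft-thresholding rule at the heart of wavelet de-noising can be illustrated with a one-level Haar transform, a toy stand-in for the orthogonal and biorthogonal filters compared in the paper; the threshold value is an assumption:

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, t):
    """One-level 1-D Haar transform, soft-threshold the detail
    coefficients, then invert (perfect reconstruction when t = 0)."""
    s = np.asarray(signal, dtype=float).reshape(-1, 2)
    a = (s[:, 0] + s[:, 1]) / np.sqrt(2)     # approximation coefficients
    d = (s[:, 0] - s[:, 1]) / np.sqrt(2)     # detail coefficients
    d = soft(d, t)                           # shrink the noisy details
    out = np.empty(s.size, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Small oscillation around 5.1: a large threshold removes it entirely.
denoised = haar_denoise(np.array([5.0, 5.2, 5.0, 5.2]), t=1.0)
```

Multi-level transforms with longer biorthogonal filters follow the same pattern: decompose, shrink the detail bands, reconstruct.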

  • Cryptanalysis for Securing DICOM Medical Contents using Multilevel Encryption   Order a copy of this article
    by Subhasri Prabhakaran, Padmapriya Arumugam 
    Abstract: In healthcare, medical images are considered important and sensitive data, since patient particulars can be identified from medical data. A secure encryption algorithm is essential for transferring these data. Cryptographic schemes are used to enhance the security and confidentiality of medical data, thereby preventing unauthorized access. Data from contributors and storage systems lead to scalability and preservation issues, so a standard is required to preserve the secrecy of medical images. Digital Imaging and Communications in Medicine (DICOM) is the universal standard for secure communication of medical files. Cryptanalysis is the process by which, after encryption, intruders attempt to recover sensitive data without the precise key. A typical evaluation mechanism is therefore needed to verify the quality of the cryptographic methods employed. In this paper, the results are assessed for MRI, CT and X-ray image data, and the performance of the proposed method is evaluated using cryptanalysis measures.
    Keywords: DICOM Medical content; Cryptography; Medical Image; Cryptanalysis; Security attacks.

  • EVENT DETECTION IN SINGLE TRIAL EEG DURING ATTENTION AND MEMORY RELATED TASK   Order a copy of this article
    by PREMA PITCHAIPILLAI 
    Abstract: P300 is an endogenous event-related potential (ERP) elicited by a rare or significant visual stimulus and is widely used in Brain-Computer Interfaces (BCI) to assess the cognition level of a subject. Many researchers have contributed to P300 estimation, as this signal is of very low strength against the background electroencephalogram (EEG) activity. This paper proposes a novel signal processing algorithm to detect the P300 event in single-trial EEG, acquired from midline electrode sites in an oddball paradigm, to evaluate attention- and memory-related tasks. The algorithm incorporates a wavelet-combined adaptive noise canceller followed by an ensemble and moving averager. Time-domain analysis shows the localization of the ERP around 300 ms for target stimuli attended by the subjects. Short-Time Fourier Transform (STFT) analysis shows strong theta activity associated with the memory-related task. The proposed algorithm is thus efficient in detecting the P300, with a higher (average) correlation coefficient of 0.82 compared to other existing methods.
    Keywords: BCI; P300-event related potential; attention; adaptive filter; ensemble averaging; moving averager; latency; SNR; STFT; time-domain; frequency-domain.
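The final two stages of the pipeline, ensemble averaging followed by a moving averager, can be sketched on a synthetic P300-like bump (the epoch length, bump position and noise level are illustrative assumptions; the wavelet-combined adaptive stage is omitted):

```python
import numpy as np

def enhance_erp(trials, win=5):
    """Ensemble-average stimulus-locked epochs, then smooth the
    result with a moving averager."""
    avg = trials.mean(axis=0)                       # ensemble average
    kernel = np.ones(win) / win
    return np.convolve(avg, kernel, mode="same")    # moving average

rng = np.random.default_rng(1)
t = np.arange(500)                                  # 500 samples per epoch
erp = np.exp(-((t - 300) ** 2) / (2 * 20.0 ** 2))   # P300-like bump at sample 300
trials = erp + rng.normal(0.0, 1.0, size=(200, 500))  # 200 noisy single trials
clean = enhance_erp(trials)
```

Averaging N stimulus-locked trials reduces uncorrelated background EEG by roughly sqrt(N), which is why the bump that is invisible in any single trial emerges clearly at its true latency.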

  • Progressive Fusion of Feature Sets for Optimal Classification of MI Signal   Order a copy of this article
    by M.K.M. Rahman, Md. Omer Sadek Bhuiyan, Md.A.Mannan Joadeer 
    Abstract: The Brain-Computer Interface (BCI) is a highly popular research topic among researchers in neuroprosthetics. The ultimate goal of this research is to develop a communication channel between human brains and external devices. Feature extraction is one of the most crucial steps. Combining different features may improve classification performance, but in most cases straightforward combinations of different features lead to very poor results. It is therefore necessary to combine the orthogonal features and omit the redundant ones, which is a complex and time-consuming process. We have developed two new algorithms to find optimum sets of features for fusion, to obtain the best possible classification accuracy for both subject-specific (SS) and subject-independent (SI) cases. Experimental results indicate that our proposed algorithms, in general, improve the classification results irrespective of the different methodological setups of BCI processes, such as the number of input channels and the spatial filter.
    Keywords: brain-computer interface (BCI); feature fusion; feature extraction; optimum feature set.

  • Automated Blink Artifact Removal from EEG using Variational Mode Decomposition and Singular Spectrum Analysis   Order a copy of this article
    by Poonam Sheoran, J.S. Saini 
    Abstract: Purpose: Blink artifacts are the major source of noise while acquiring electroencephalogram (EEG) data, so designing an efficient blink artifact removal method is essential for any analysis using EEG. Method: In this paper, a novel automated eye-blink artifact removal method based on Variational Mode Decomposition (VMD) and Singular Spectrum Analysis (SSA) is presented. The noisy EEG signals are first separated into uncorrelated components using Canonical Correlation Analysis (CCA), and Variational Mode Decomposition is then performed for multiresolution analysis. The decomposed components (modes) are assessed through their singular values to find the distribution of noise using singular spectrum analysis. Phase Space Reconstruction (PSR) is also used to differentiate clean modes from noisy modes. Result: The applicability of the proposed approach is examined through statistical measures such as signal-to-noise ratio (SNR), correlation coefficient and root mean square error (RMSE). The results indicate the efficacy of the approach in artifact removal without manual intervention as compared to state-of-the-art techniques. Conclusion: The proposed method automatically identifies and removes the noisy fraction of the signal, yielding the requisite neural information without any manual intervention.
    Keywords: Artifact removal; Variational Mode decomposition (VMD); Canonical Correlation Analysis (CCA); Phase Space Reconstruction (PSR); Singular Spectrum Analysis (SSA).

  • CT Image Reconstruction from Sparse Projections Using Adaptive Total Generalized Variation with Soft Thresholding   Order a copy of this article
    by Vibha Tiwari, Prashant Bansod, Abhay Kumar 
    Abstract: CT imaging plays a vital role in the non-invasive diagnosis and surgical planning of critical diseases. However, it is essential to reduce the radiation dose during CT imaging, as excessive exposure may harm human tissues. To reduce the radiation dose, the CT image is acquired using a limited number of X-ray projections, and an adaptive Total Generalized Variation (TGV) minimization method is then proposed to reconstruct the image. The simulation results have been compared with the existing TV, TGV and TGV-with-hard-thresholding methods. Typically, two types of noise, Gaussian and Poisson distributed, are introduced during the CT imaging process, so both have been added to the measured samples. It is found that applying soft thresholding and the FISTA algorithm with the proposed method gives better results in a noisy imaging environment. The reconstructed CT image quality has been compared using parameters such as FSSIM, PSNR, NRMSE and MAE.
    Keywords: CT image reconstruction; limited angle reconstruction; total generalized variation; compressive sensing; telemedicine.

  • Performance Analysis of Data Mining Classification Algorithms for Early Prediction of Diabetes Mellitus II   Order a copy of this article
    by Delshi Ramamoorthy 
    Abstract: Diabetes Mellitus (DM) is generally referred to as diabetes. It is a group of metabolic disorders characterized by high blood sugar levels over a prolonged period. The disease can be prevented at an early stage by predicting the symptoms of diabetes using several methods. Data mining is one of the domains for predicting various diseases, and different researchers use data mining methods to categorize and predict symptoms in medical data. Among the many techniques of data mining, classification is one of the main ones; classification techniques are used to classify hidden information in all areas, including the medical diagnostic field. In this research work, we compare the machine learning classifiers (Na
    Keywords: Diabetes Mellitus; Classification; SVM; AdaBoost; NB; J48; Random Tree; Random Forest; OneR; Data Mining.

  • Nerve Stimulator for regional anaesthesia procedures: an automatic interactive closed-loop control   Order a copy of this article
    by Carlos Alexandre Ferri, Antonio Augusto Fasolo Quevedo 
    Abstract: Peripheral nerve stimulators have become widespread among anaesthesiologists and remain a popular technique. However, in commercial devices the user has to manually adjust the stimulus intensity. Thus, the aim of this study is to propose a method that automates the current intensity control. An earlier nerve stimulator prototype was modified to add an accelerometer and an sEMG module; these two sensors allow observation of the mechanical and electrical responses of the muscle contraction evoked by the stimulation. The tests were performed in two steps: the first observed how the sensors behave during stimulation and muscle contraction, and the second implemented a control algorithm and validated the automation technique. Comparing the two methods, no significant differences were found in procedure time (manual: 12,5
    Keywords: nerve stimulator; regional anaesthesia; Automatic control; accelerometer; electromyography.

  • An Improved Speckle Noise Reduction Scheme Using Switching and Flagging of Noisy Data for preprocessing of Ultrasonograms in Detection of Down Syndrome during First and Second Trimesters   Order a copy of this article
    by Jeba Shiney, Amar Pratap Singh, Priestly Shan 
    Abstract: Down Syndrome (DS) is reported to be one of the most common chromosomal abnormalities affecting newborns all over the world. Diagnosis of the syndrome at an earlier stage of pregnancy provides more options for the affected parents to make decisions on the interventional therapies required for the developing child. The techniques currently used in the diagnosis of DS, such as amniocentesis and Chorionic Villus Sampling (CVS), are invasive in nature and are associated with some percentage of risk. This paper aims at developing a Clinical Decision Support System (CDSS) for the detection of DS from ultrasound (US) fetal images. As a preliminary step, the US images have to be denoised to remove speckle noise. A Modified Mean Median (MMM) filter based on the principle of progressive switching is proposed. Experimental results show that the proposed filter provides better results in terms of Peak Signal to Noise Ratio (PSNR), Image Enhancement Factor (IEF) and other metrics.
    Keywords: Ultrasound; Down Syndrome; Modified Mean Median; Amniocentesis; Chorionic Villus Sampling; Speckle noise; filter; diagnosis; Clinical Decision Support System; Peak Signal to Noise Ratio.
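The switching idea, flag suspicious pixels first and filter only those, can be sketched as follows. This is an illustrative single-pass version (the k*MAD flagging rule and the 3x3 window are assumptions), not the proposed progressive MMM filter:

```python
import numpy as np

def switching_filter(img, k=2.0):
    """Flag pixels that deviate strongly from their local median and
    replace only the flagged pixels with that median; pixels judged
    clean are left untouched, preserving edges and fine detail."""
    pad = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    out = np.asarray(img, dtype=float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3].ravel()
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-9  # robust spread
            if abs(out[y, x] - med) > k * mad:         # flag as noisy
                out[y, x] = med
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 200.0            # isolated speckle-like spike
out = switching_filter(img)
```

Because clean neighbours of the spike match their local medians, only the corrupted pixel is replaced; a plain median filter would instead blur every pixel.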

  • VOLUME BASED INTER DIFFERENCE XOR PATTERN: A NEW PATTERN FOR PULMONARY NODULE DETECTION IN CT IMAGES   Order a copy of this article
    by Chitradevi A, Nirmal Singh N, Jayapriya K 
    Abstract: Pulmonary nodule identification, which paves the path to cancer diagnosis, is a challenging task today. The proposed work, the Volume Based Inter Difference XOR Pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern, formed for every segmented nodule by an XOR-pattern calculation of the inter-frame gray-value differences between the centre frame and its neighbourhood frames in a rotationally clockwise direction. Different classifiers, such as Random Forest (RF), Decision Tree (DT) and Adaboost, are used with 10 trials of 5-fold cross-validation for classification. Experimental analysis on the public Lung Image Database Consortium - Image Database Resource Initiative (LIDC-IDRI) database shows that the proposed scheme gives better accuracy than existing approaches. The scheme is further enhanced by combining shape information using the Histogram of Oriented Gradients (HOG), which improves the classification accuracy.
    Keywords: pulmonary nodule; classification; preprocess; segmentation; feature extraction; LIDC-IDRI; medical image segmentation; accuracy.
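    The abstract does not give the exact VIDXP construction. The following toy sketch shows one plausible reading, in which inter-frame differences against the centre frame are binarised and consecutive binary planes are XOR-ed into a per-voxel code; every detail beyond the abstract's wording (the thresholding rule, the bit packing, the histogram feature) is an assumption.

    ```python
    import numpy as np

    def inter_difference_xor_pattern(volume, centre_idx):
        """Toy sketch of an inter-difference XOR pattern: binarise the
        gray-value difference between the centre frame and each neighbour,
        then XOR consecutive binary planes into a per-voxel pattern code."""
        centre = volume[centre_idx].astype(int)
        neighbours = [volume[i].astype(int)
                      for i in range(volume.shape[0]) if i != centre_idx]
        # Binarise each inter-frame difference: 1 where neighbour >= centre.
        planes = [(nb >= centre).astype(np.uint8) for nb in neighbours]
        # XOR consecutive planes (a clockwise ordering is assumed) and
        # pack the resulting bits into one code per voxel.
        code = np.zeros_like(centre, dtype=np.uint8)
        for k in range(len(planes) - 1):
            bit = planes[k] ^ planes[k + 1]
            code = (code << 1) | bit
        return code

    def pattern_histogram(code, n_bins=256):
        """Normalised histogram of pattern codes, usable as a texture
        feature vector for a classifier such as RF or Adaboost."""
        hist, _ = np.histogram(code, bins=np.arange(n_bins + 1))
        return hist / hist.sum()
    ```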

  • Accurate detection of Dicrotic notch from PPG signal for telemonitoring applications   Order a copy of this article
    by Abhishek Chakraborty, Deboleena Sadhukhan, Madhuchhanda Mitra 
    Abstract: Recent technological advancements have inspired the modern population to adopt portable, simple personal telemonitoring systems that use easy-to-acquire biosignals such as the Photoplethysmogram (PPG) for regular monitoring of vital signs. Consequently, computerized analysis of the PPG signal through accurate detection of clinically significant PPG fiducial points, like the dicrotic notch, has become a key research area for early detection of physiological anomalies. In this research, a simple and robust algorithm is proposed for accurate detection of the dicrotic notch from the PPG signal, employing the first and second differences of the denoised signal, slope reversal and an empirical formula-based approach. Features related to the dicrotic notch are then extracted from the baseline-corrected PPG signal, and the performance of the algorithm is evaluated over different standard PPG databases as well as over originally acquired signals. The algorithm achieves high efficiency in terms of sensitivity, positive predictivity and detection accuracy, with low values of error in the detected features.
    Keywords: Photoplethysmogram; amplitude threshold; slope reversal; dicrotic notch detection.
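    A minimal sketch of the slope-reversal idea mentioned in the abstract, locating the notch as the first local minimum after the systolic peak of a single denoised beat. The authors' empirical formula and second-difference refinement are not reproduced; this illustration and its function name are assumptions.

    ```python
    import numpy as np

    def detect_dicrotic_notch(ppg):
        """Sketch: find the dicrotic notch of one PPG beat as the first
        slope reversal (negative-to-positive first difference) after the
        systolic peak. Assumes `ppg` is an already-denoised single beat."""
        d1 = np.diff(ppg)
        peak = int(np.argmax(ppg))              # systolic peak index
        for i in range(peak, len(d1) - 1):
            if d1[i] < 0 and d1[i + 1] >= 0:    # slope reverses: local minimum
                return i + 1                    # sample index of the notch
        return None                             # no reversal found
    ```

    On real recordings the first difference must be smoothed first, otherwise noise produces spurious reversals; hence the abstract's emphasis on denoising.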

  • Pulsatile flow, micro-scale erythrocyte-platelet interaction   Order a copy of this article
    by Thakir AlMomani, Suleiman Bani Hani, Samer Awad, Mohammad Al-Abed, Hesham AlMomani, Mohammad Ababneh 
    Abstract: Platelet aggregation, activation, and adhesion on blood vessels and implants result in the formation of mural thrombi. Erythrocytes (red blood cells, RBCs) have been shown to play a significant role in the aggregation of platelets toward vessel walls. A level-set sharp-interface immersed boundary method is employed in the computations, in which RBC and platelet boundaries are advected on a two-dimensional Cartesian co-ordinate grid system. RBCs and platelets are treated as rigid, non-deformable particles, where each RBC is assumed to have an elliptical shape and each platelet a discoid shape. Both steady and pulsatile flow regimes were employed, with Reynolds number values equivalent to those found in micro-blood vessels. Forces and torques between colliding blood cells are modeled using an extension of the soft-sphere model for elliptical particles. RBCs and platelets are transported under the forces and torques induced by fluid flow and cell collision, based on solving the momentum equation for each blood cell. The computational results indicate that platelets show more interaction with RBCs and greater migration toward the vessel wall in steady flow than in pulsatile flow. Velocity contours did not show major differences in the peak and minimal values. The use of physiological flow conditions showed less interaction between RBCs and platelets than found under steady flow conditions. Moreover, platelets tend to concentrate in the core region in the case of pulsatile flow.
    Keywords: Erythrocyte; platelet; interaction; pulsatile flow; migration; core region; wall region.

  • Morphological detection and neuro-genetic classification of masses and calcifications in mammograms for computer-aided diagnosis   Order a copy of this article
    by Fatma Zohra Reguieg, Nadjia Benblidia, Mhania Guerti 
    Abstract: Diagnosis of breast cancer is the main worry of oncologists of this era, which is seeing an anxiety-inducing increase in incidence worldwide. This paper addresses the semi-automatic detection of breast neoplasms from digital mammograms of the MIAS (Mammographic Image Analysis Society) database. The research focuses on the analysis of masses and calcifications. The first phase of the system therefore consists of pre-processing the pathological structures by morphological transformations in order to refine the segmentation. The second step realises the extraction of clinical signs according to an adaptive deformable model whose initialisation is guided by the annotated suspicious zone. The third block characterises abnormalities by morphometric and textural attributes to generate their signature. The ultimate systemic description categorises malignant and benign masses and calcifications from their knowledge, by a neuro-genetic classifier for computer-aided diagnosis. The elaborated decision system produces an accuracy of 99.25% for shape recognition.
    Keywords: digital mammogram; deformable model; texture and morphometry; neuro-genetic classification; computer-aided diagnosis.
    DOI: 10.1504/IJBET.2018.10015760
     
  • Fuzzy weighted histogram equalisation for contrast enhancement of mammogram images   Order a copy of this article
    by V. Magudeeswaran, K. Balasubramanian 
    Abstract: The early detection of breast cancer in mammograms is very important in the field of medicine. Histogram equalisation (HE) is mainly used for contrast enhancement, but it usually results in excessive enhancement because of the lack of control on the level of enhancement. A novel technique, Fuzzy Weighted Histogram Equalisation (FWHE), for contrast enhancement of mammogram images is presented in this paper. The proposed method consists of three stages. First, the grey-level intensities of the original mammogram image are transformed to the fuzzy plane and the amount of fuzziness is adjusted using a contrast intensification operator. In the second stage, the Probability Distribution Function (PDF) of the fuzzy matrix is modified by applying weighting and thresholding, and the HE procedure is then applied to the fuzzy plane of the mammogram image. Finally, the fuzzy plane is de-fuzzified to obtain the contrast-enhanced mammogram image. Qualitative and quantitative measures show that the proposed method provides optimum results, giving better contrast enhancement while preserving the local information of the original mammogram image.
    Keywords: contrast enhancement; histogram equalisation; entropy; contrast improvement index; mammogram images; micro calcifications.
    DOI: 10.1504/IJBET.2018.10015751
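    The three stages can be sketched loosely as follows. The membership function, the classic INT contrast-intensification operator, and the weighting exponent and clipping threshold used here are all assumptions for illustration, not the authors' values.

    ```python
    import numpy as np

    def fwhe_sketch(img, max_val=255):
        """Loose sketch of the three FWHE stages from the abstract:
        fuzzify + intensify, weight/threshold the PDF, equalise, de-fuzzify."""
        # Stage 1: fuzzify to [0, 1] and apply the classic contrast
        # intensification (INT) operator.
        mu = img.astype(float) / max_val
        mu = np.where(mu < 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
        # Stage 2: weight and clip the histogram (PDF) before equalisation.
        levels = np.round(mu * max_val).astype(int)
        hist = np.bincount(levels.ravel(), minlength=max_val + 1).astype(float)
        pdf = hist / hist.sum()
        pdf = np.minimum(pdf ** 0.8, 0.05)   # power weighting + clip (assumed values)
        cdf = np.cumsum(pdf / pdf.sum())
        # Stage 3: equalise on the fuzzy plane, then de-fuzzify back to gray levels.
        out = cdf[levels] * max_val
        return np.round(out).astype(np.uint8)
    ```

    Clipping the PDF before building the CDF is what restrains the over-enhancement that plain HE suffers from.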
     
  • Feature extraction using Pythagorean means for classification of epileptic EEG signals   Order a copy of this article
    by P.P. Muhammed Shanir, Sadaf Iqbal, Yusuf U. Khan, Omar Farooq 
    Abstract: Electroencephalogram (EEG) is a widely used tool for the study and diagnosis of epilepsy. Patients with epilepsy require long-term EEG monitoring. Automatic seizure detection eliminates the chance of missing any seizure, makes detection easy and reduces the burden on physicians. In this work, different combinations of Pythagorean means (time-domain features), namely the arithmetic mean (AM), geometric mean (GM) and harmonic mean (HM) of energy per epoch, are used as features to classify EEG data into normal, seizure-free and seizure classes using a linear classifier. A classification accuracy of 100% is achieved in the two- and three-class problems with a single feature per epoch, and in the five-class problem with two features per epoch. The novelty of this work lies in the use of new and simple features (in epileptic EEG signal classification), reduced complexity and high performance.
    Keywords: EEG; epilepsy; seizure; time domain analysis; Pythagorean mean.
    DOI: 10.1504/IJBET.2018.10015752
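    The three Pythagorean means of per-sample energy within an epoch are standard quantities and always satisfy AM ≥ GM ≥ HM. A minimal sketch (the epsilon guard against zero-energy samples is our addition):

    ```python
    import numpy as np

    def pythagorean_mean_features(epoch):
        """Arithmetic, geometric and harmonic mean of per-sample energy
        for one EEG epoch, as a 3-element feature candidate set."""
        energy = epoch.astype(float) ** 2 + 1e-12   # guard GM/HM against zeros
        am = energy.mean()
        gm = np.exp(np.mean(np.log(energy)))        # geometric mean, log domain
        hm = len(energy) / np.sum(1.0 / energy)
        return am, gm, hm
    ```

    Computing the GM in the log domain avoids overflow for long epochs; each epoch yields up to three scalar features for the linear classifier.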
     
  • An approach to extract edge maps in infrared based breast images using inverse Perona-Malik diffusion filter   Order a copy of this article
    by J. Thamil Selvi, Ganesan Kavitha, C. Manoharan Sujatha 
    Abstract: Infrared thermography is an adjunctive tool for early diagnosis of breast cancer. Infrared breast images have a low Signal to Noise Ratio (SNR) and are amorphous in nature, which makes their analysis a challenging task. In this work, an attempt is made to extract the edge map from infrared breast images using the inverse Perona-Malik (PM) model. This non-linear filter varies the diffusion near interferences and edges using an inverse gradient and a new nearest-neighbour scheme. Edge maps are extracted for various gradient thresholds. Statistical features such as average gradient, contrast, entropy and variance are extracted from the edge map to find the optimal gradient threshold. The diffused image at the optimal gradient threshold is validated using SNR. Results show that the statistical features reach their maximum value at a gradient threshold of 5. It is also observed that the SNR of the diffused image is improved by 30 dB compared to the raw image. The inverse PM model is found to reduce noise and enhance edges. The integration of denoising and edge map extraction would result in accurate segmentation and aid early diagnosis.
    Keywords: breast thermal images; inverse PM model; average gradient; contrast; entropy; variance; SNR.
    DOI: 10.1504/IJBET.2018.10015753
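    For context, the classic Perona-Malik scheme with four nearest neighbours can be sketched as below; the paper's inverse variant modifies the conductance function and gradient handling, which is not reproduced here. Periodic border handling via `np.roll` is chosen purely for brevity.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=10, K=5.0, lam=0.2):
        """Classic Perona-Malik anisotropic diffusion, 4-nearest-neighbour
        explicit scheme with periodic borders (illustrative sketch only)."""
        def g(d):
            # Conductance: small across strong edges, near 1 in flat regions.
            return 1.0 / (1.0 + (d / K) ** 2)

        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Nearest-neighbour differences in the four directions.
            dN = np.roll(u, 1, axis=0) - u
            dS = np.roll(u, -1, axis=0) - u
            dE = np.roll(u, -1, axis=1) - u
            dW = np.roll(u, 1, axis=1) - u
            u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
        return u
    ```

    The edge-stopping parameter K plays the role of the gradient threshold swept in the abstract (where a value of 5 was found optimal for their data).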
     
  • Multi-resolution transform-based image denoising and fusion for Poisson noise suppression   Order a copy of this article
    by D. Mary Sugantharathnam, D. Manimegalai, T. Jayasree 
    Abstract: Noise removal is essential in medical imaging applications in order to enhance and recover anatomical details that may be hidden in the data. Poisson noise in medical imaging, arising from the photon-counting nature of light at the time of image capture, has always been a concern. This paper addresses a novel approach for the removal of Poisson noise embedded in biomedical images, based on multi-resolution transform-based denoising and fusion. In this technique, the images are first denoised separately by applying the Discrete Wavelet Transform (DWT)/Stationary Wavelet Transform (SWT) and the Fast Discrete Curvelet Transform (FDCT) integrated with the Rudin-Osher-Fatemi (ROF) model. Next, the distinct features extracted from the denoised images are fused, resulting in an enhanced and denoised image. The proposed method was tested and compared with other denoising techniques on a set of biomedical images. Subjective and objective evaluations show better performance of this new approach compared to existing techniques.
    Keywords: wavelet transform; curvelet transform; Poisson noise; denoising; biomedical imaging.
    DOI: 10.1504/IJBET.2018.10015754
     
  • Theoretical framework, design and implementation of artificial brain architecture for service robots   Order a copy of this article
    by Naveen Kumar Malik, V.R. Singh 
    Abstract: In this investigation, a theoretical high-level brain model of human intelligence is devised to develop a theoretical high-level architecture for service robots, named the Artificial Brain Architecture for Service Robot (ABASR). It provides an intelligent mechanism for service robots with multi-heterogeneous input, meta-reasoning, a scientific inference engine and communication. The ABASR is a high-level, human-inspired theory of intelligence. To validate ABASR, a prototype robotic wheelchair was designed. Its features include obstacle avoidance, assisting the user in wheelchair mobility, and emergency communication with a handler. The developed robotic wheelchair was evaluated in a room environment with four disabled children suffering from muscular dystrophy and cerebral palsy, and four caregivers. Findings confirm the importance of intelligent wheelchairs in providing independent mobility, reducing the cognitive workload of the user, reducing the requirement for caregivers and bringing confidence to users.
    Keywords: artificial intelligence; brain model; cognitive architectures; embedded system; intelligent wheelchair; architecture of robotic system; theory of intelligence; architecture of intelligent wheelchair; muscular dystrophy; cerebral palsy; meta-reasoning mechanism; cognition; rehabilitation devices; independent mobility; architecture of artificial brain.
    DOI: 10.1504/IJBET.2018.10015755
     

Special Issue on: Developments and Issues in Medical Imaging

  • Optimal selection of threshold in EIT reconstructed images for estimating size of objects   Order a copy of this article
    by Nanda Ranade, Damayanti Gharpure 
    Abstract: Electrical Impedance Tomography (EIT) is widely used for various applications in process tomography and medical or geological imaging. An EIT system non-invasively acquires surface potential data, which is used for reconstructing conductivity images to identify the shapes and sizes of objects of interest. Based on the application, the objects of interest may be of higher or lower conductivity than the background. It is useful to convert reconstructed images to binary form for quantitatively establishing the shapes and sizes of objects. In this work, we present guidelines for selecting appropriate threshold values, based on systematic numerical investigations assuming prior knowledge of the conductivity contrast for the specific application of EIT. Various configurations of objects immersed in a background (with lower or higher conductivity) were considered. The open-source software EIDORS (Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software) was used for reconstructing differential EIT images for these configurations. Diametric conductivity profiles were used to identify appropriate values of threshold to obtain accurate object size over a wide range of contrast. The calculated values of threshold and the resulting effect on estimated object size were compared with the usually preferred thresholds of
    Keywords: EIT; EIDORS; Image thresholding; conductivity contrast.
    DOI: 10.1504/IJBET.2018.10015451
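    The binarisation step itself is simple; a minimal sketch follows, assuming a differential conductivity image in which object pixels deviate from a zero background. The helper name and the fixed fraction are hypothetical: the paper's contribution is precisely choosing that fraction from diametric profiles rather than fixing it.

    ```python
    import numpy as np

    def estimate_object_size(delta_sigma, frac=0.5, higher=True):
        """Binarise a differential EIT image at `frac` of the peak
        conductivity change and count pixels as an object-size estimate."""
        if higher:   # object more conductive than the background
            mask = delta_sigma >= frac * delta_sigma.max()
        else:        # object less conductive than the background
            mask = delta_sigma <= frac * delta_sigma.min()
        return int(mask.sum()), mask
    ```

    Because reconstructed blobs are smeared, the estimated size is sensitive to `frac`, which is why a contrast-dependent choice of threshold matters.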