Forthcoming articles

International Journal of Biomedical Engineering and Technology

International Journal of Biomedical Engineering and Technology (IJBET)

These articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Register for our alerting service, which notifies you by email when new issues are published online.

Open Access: Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.
We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Biomedical Engineering and Technology (196 papers in press)

Regular Issues

  • Automated Recognition of Obstructive Sleep Apnea Using Ensemble Support Vector Machine Classifier   Order a copy of this article
    by V. Kalaivani 
    Abstract: The ECG is widely used to diagnose Obstructive Sleep Apnea (OSA) with a high degree of accuracy in clinical care. We have developed a real-time algorithm for the detection of sleep apnea based on the electrocardiograph (ECG). In this study, features were extracted from the ECG signals of 12 normal subjects and 58 OSA patients in the PhysioNet Apnea-ECG database. The baseline noise, motion drift and muscle noise present in the raw ECG signals are removed using a median filter and a Daubechies wavelet filter. A QRS detection algorithm then extracts the R-wave amplitude and R-wave duration from the denoised signal. The proposed QRS detection algorithm contains four stages. The first stage is a derivative function that estimates the QRS-complex slope using a five-point derivative. The second stage is a squaring function, which removes negative data points by squaring the derivative values; these first two stages are used to calculate the R-peak amplitude. The third stage is moving-window integration, which estimates the R-peak slope over a fixed number of samples. The final stage is fiducial marking, which determines the R-peak value and the QRS-complex duration (width). The ECG-derived respiration (EDR) is computed from the R-wave amplitude and R-wave duration. Time-domain features are then calculated from heart rate variability and the EDR, and sleep apnea is diagnosed from these features. Support Vector Machine (SVM) and ensemble SVM techniques are used for the detection of sleep apnea; the SVM classifier was evaluated with linear, polynomial, Radial Basis Function (RBF) and multilayer perceptron kernels.
    Keywords: Obstructive Sleep Apnea (OSA); Heart Rate Variability (HRV); ECG Derived Respiration (EDR); Support Vector Machine (SVM); Ensemble Support Vector Machine.
    DOI: 10.1504/IJBET.2019.10021409
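The four-stage QRS chain described in the abstract closely resembles the classic Pan-Tompkins pipeline. A minimal sketch of those stages (derivative, squaring, moving-window integration, fiducial marking) is shown below; the window length and the 0.5 threshold fraction are illustrative assumptions, not the authors' exact values.

```python
def five_point_derivative(x, fs):
    # y[n] = (fs/8) * (-x[n-2] - 2x[n-1] + 2x[n+1] + x[n+2])
    y = [0.0] * len(x)
    for n in range(2, len(x) - 2):
        y[n] = (fs / 8.0) * (-x[n-2] - 2*x[n-1] + 2*x[n+1] + x[n+2])
    return y

def squaring(x):
    # Makes all data points positive and emphasises large slopes.
    return [v * v for v in x]

def moving_window_integration(x, width):
    # Average over a trailing window to obtain waveform-feature slope info.
    y = []
    for n in range(len(x)):
        lo = max(0, n - width + 1)
        y.append(sum(x[lo:n+1]) / width)
    return y

def detect_r_peaks(x, fs, thresh_frac=0.5):
    # Fiducial marking: local maxima of the integrated signal above a
    # fraction of its global maximum (illustrative, non-adaptive rule).
    d = five_point_derivative(x, fs)
    s = squaring(d)
    m = moving_window_integration(s, int(0.15 * fs))
    thr = thresh_frac * max(m)
    return [n for n in range(1, len(m) - 1)
            if m[n] > thr and m[n] >= m[n-1] and m[n] > m[n+1]]
```

On a clean signal the detector marks one plateau edge per beat; a real implementation would add refractory periods and adaptive thresholds.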
  • Optimized DWT using cooperative particle Swarm Optimizer for hybrid domain based Medical and Natural Image Denoising   Order a copy of this article
    by A. Velayudham, K. Madhan Kumar, R. Kanthavel 
    Abstract: The quest for efficient image denoising systems remains an open challenge at the intersection of practical investigation and measurement. In spite of the sophistication of recently proposed systems, most algorithms have not yet achieved a satisfactory level of applicability. In this research, a methodology based on optimal wavelet filter coefficient design is proposed for image denoising. The method utilizes a new wavelet filter whose coefficients are derived from the discrete wavelet (Haar) transform using CPSO optimization, together with a bilateral filter. The optimal wavelet-coefficient-based denoising minimizes the noise, while the bilateral filter further reduces the noise and increases the PSNR without any loss of relevant image information. Overall, the proposed approach consists of two stages: (i) design of the optimal wavelet filter, and (ii) image denoising using a bilateral filter. First, optimal wavelet coefficients are selected using the cooperative particle swarm optimizer (CPSO). Then, the hybrid-domain algorithm (wavelet with bilateral filter) is applied to the noisy image to obtain the denoised image. A comparative study of the performance of different existing approaches and the proposed approach is made in terms of PSNR, SDME, SSIM and GP. The proposed algorithm gives better PSNR than the existing methods.
    Keywords: image denoising; optimal wavelet; bilateral filter; cooperative particle swarm optimizer; wavelet coefficient; sub-bands.
    DOI: 10.1504/IJBET.2017.10012231
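As a minimal illustration of the wavelet-threshold denoising this entry builds on: a single-level Haar transform with a fixed soft threshold, plus a PSNR measure. The threshold here is hand-picked; in the paper the coefficients are tuned by CPSO and followed by a bilateral filter, neither of which this sketch attempts.

```python
import math

def haar_decompose(x):
    # Single-level Haar DWT of an even-length sequence.
    a = [(x[2*k] + x[2*k+1]) / math.sqrt(2) for k in range(len(x)//2)]
    d = [(x[2*k] - x[2*k+1]) / math.sqrt(2) for k in range(len(x)//2)]
    return a, d

def haar_reconstruct(a, d):
    x = []
    for ak, dk in zip(a, d):
        x.append((ak + dk) / math.sqrt(2))
        x.append((ak - dk) / math.sqrt(2))
    return x

def soft_threshold(d, t):
    # Shrink detail coefficients toward zero by t.
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in d]

def denoise(x, t):
    a, d = haar_decompose(x)
    return haar_reconstruct(a, soft_threshold(d, t))

def psnr(ref, est, peak=255.0):
    mse = sum((r - e)**2 for r, e in zip(ref, est)) / len(ref)
    return float('inf') if mse == 0 else 10 * math.log10(peak * peak / mse)
```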
  • Evaluation of the Effect of Posture on Carotid-to-Toe Pulse Transit Time Values Estimated Using System Identification   Order a copy of this article
    by Aws Zuhair Sameen, Rosmina Jaafar, Edmond Zahedi 
    Abstract: Pulse Transit Time (PTT), a marker of arterial stiffness, is defined as the time a pulse wave needs to travel from one point of the blood circulation to another. Monitoring a person's PTT values is useful for non-invasive, cuff-less estimation of blood pressure. The challenge is how to estimate PTT values continuously and accurately. In this paper, PTT values are estimated from two PPG signals, obtained by reflective photoplethysmography at the carotid artery and the toe, collected from 12 healthy subjects (8 males and 4 females) aged 23.8
    Keywords: Pulse Transit Time (PTT); System Identification; Photoplethysmogram (PPG); ARX model; Time Delay Estimation.
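The paper estimates PTT via system identification (an ARX model); a much simpler baseline for the same quantity is cross-correlation time-delay estimation between the proximal (carotid) and distal (toe) PPG waveforms, sketched here under the assumption of a common sampling rate.

```python
def estimate_ptt(proximal, distal, fs, max_lag_s=0.5):
    # Pick the lag (in samples) that maximises the correlation of the
    # distal signal shifted back onto the proximal one.
    n = len(proximal)
    best_lag, best_score = 0, float('-inf')
    for lag in range(0, int(max_lag_s * fs) + 1):
        score = sum(proximal[i] * distal[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs  # transit time in seconds
```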

    by Ramanujam Elangovan, Padmavathi S
    Abstract: A time series is a sequence of continuous, potentially unbounded observations found in many applications. Time series motif discovery is an essential and important task in time series mining. Several algorithms have been proposed to discover motifs in time series. These algorithms require user-defined parameters such as the length of the motif, support or confidence; however, selecting these parameters is not easy. To overcome this challenge, this paper proposes a Genetic Algorithm with constraints to discover a good trade-off between representative and interesting motifs. The discovered motifs are validated for their potential interest in the time series classification problem using the Nearest Neighbor classifier. Extensive experiments show that the proposed approach can efficiently discover motifs of different lengths that are more accurate and statistically significant than state-of-the-art time series techniques. Finally, the paper demonstrates the efficiency of motif discovery on large medical data from the MIT-BIH Arrhythmia database to classify abnormal signals.
    Keywords: Genetic Algorithm; constraints; Time series classification; UCR Archive; MIT-BIH Arrhythmia.
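For context, the simplest exact formulation that the GA improves on is the brute-force 1-motif search: find the closest pair of non-overlapping subsequences of a fixed length m (the GA removes the need to fix m and the other user-defined parameters). A sketch:

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def best_motif_pair(series, m):
    # Brute-force 1-motif: the closest non-overlapping subsequence pair.
    best = (None, None, float('inf'))
    for i in range(len(series) - m + 1):
        for j in range(i + m, len(series) - m + 1):  # enforce no overlap
            d = euclid(series[i:i+m], series[j:j+m])
            if d < best[2]:
                best = (i, j, d)
    return best
```

This is O(n^2) per candidate length, which is exactly why heuristic searches such as the proposed GA are attractive on long medical recordings.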

  • Facial Expression Synthesis Images Using Hybrid Neural Network with Particle Swarm Optimization Techniques   Order a copy of this article
    by Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar 
    Abstract: The facial expression is the visible outer manifestation of the human affective state, intellectual activity and interpersonal exchange, and it plays a key role in communication. Human-computer interfaces that can connect with people through facial expression pave the way for this mode of communication, which is then contrasted with human-human interaction. In this paper, facial expression synthesis is performed using different facial expressions, such as angry, sad, smiling, surprised and crying, of various people. The image is pre-processed and the landmark points are located automatically by the Viola-Jones algorithm. The proposed method uses two procedures, namely a Hybrid Neural Network (HNN) and the Particle Swarm Optimization (PSO) algorithm; by training the hybrid neural network with particle swarm optimization, we obtain the desired output. In the results section, evaluation metrics, namely the Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE) and the Second-Derivative-Like Measure of Enhancement (SDME), are calculated for the different algorithms. In this evaluation, particle swarm optimization gives enhanced output compared with the other techniques and existing methods of facial expression synthesis.
    Keywords: Facial expression; Hybrid Neural Network; Viola-Jones algorithm; Particle Swarm Optimization (PSO).
    DOI: 10.1504/IJBET.2017.10011900
  • Embedded Binary PSO Integrating classical methods for Multilevel Improved Feature Selection in liver and kidney disease diagnosis   Order a copy of this article
    by Gunasundari Selvaraj, Janakiraman Subbiah, Meenambal Selvaraj 
    Abstract: Feature selection is an important preprocessing technique in the field of data mining. This process removes irrelevant data, thereby reducing the number of features. This paper presents a new algorithm, called embedded binary particle swarm optimization (BPSO), to improve the performance of BPSO for feature selection. Embedded BPSO incorporates classical methods to select an elite feature subset. The population is refined or extended at regular intervals using the best features from the sequential forward selection and sequential backward selection methods. In this study, a probabilistic neural network and a support vector machine with 3-fold cross-validation are used to evaluate the particles. The embedded algorithm is verified in the feature selection module of a liver and kidney cancer diagnostic system. The elite features extracted by the wrapper-based embedded algorithm are used to characterize diseases using the classifier. Findings show that the proposed system is proficient in selecting the best features with a minimum error rate.
    Keywords: Binary particle swarm optimization; feature selection; sequential forward selection; sequential backward selection; liver cancer; kidney cancer; computer-aided diagnostic system; medical imaging; benign; malignant.
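A minimal BPSO loop of the kind the embedded algorithm extends: the velocity is squashed through a sigmoid into a bit-flip probability. The update constants (c1 = c2 = 2, velocity clamped at ±6) are standard textbook choices, not the paper's settings, and the fitness function is supplied by the caller (in the paper, a PNN/SVM wrapper).

```python
import random, math

def bpso(n_features, fitness, n_particles=10, n_iter=30, seed=0):
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_features)]
         for _ in range(n_particles)]
    V = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [row[:] for row in X]
    pfit = [fitness(row) for row in X]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                V[i][d] += 2*r1*(pbest[i][d] - X[i][d]) + 2*r2*(gbest[d] - X[i][d])
                V[i][d] = max(-6.0, min(6.0, V[i][d]))     # clamp velocity
                s = 1.0 / (1.0 + math.exp(-V[i][d]))       # sigmoid transfer
                X[i][d] = 1 if rng.random() < s else 0     # probabilistic bit
            f = fitness(X[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f > gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit
```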

  • Iterative Modelling of the Closing-based Differential Morphological Profile   Order a copy of this article
    by Arif Muntasa, Indah Agustien Siradjuddin 
    Abstract: One application of image processing is optic disc detection in retinal images. The gray-scale similarity between the object and the background in retinal images has drawn many researchers to this problem. In this research, a Closing-based Differential Morphological Profile is proposed to detect the optic disc in the retinal image. The closing process is performed iteratively. It starts with pre-processing, followed by the Differential Morphological Profile based on the closing operation, i.e. the dilation and erosion processes. The dilation process is performed iteratively and followed by the erosion process. The results are enhanced to obtain a better image when the binary image transformation is conducted. A noise removal process is also necessary to eliminate detection errors. Furthermore, the centre point of the detected object is used to delineate the optic disc. The detection rates of the proposed approach show that its maximum detection accuracy outperforms the other methods, i.e. the 2D Gaussian Filtering-based Mathematical Morphology Approach, the Differential Morphological Profile, Morphological Reconstruction Techniques and the Hybrid Fuzzy Classifier.
    Keywords: Differential morphological profile; iterative modelling; optic disk image detection; closing operation.
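The iterative closing-based differential profile can be illustrated in 1-D: grayscale closing (erosion of a dilation) at growing window sizes, with the difference between successive closings as the profile. This is a sketch of the general DMP idea, not the paper's 2-D retinal pipeline.

```python
def dilate(x, w):
    # Grayscale dilation = max filter over a window of width w.
    r = w // 2
    return [max(x[max(0, i-r):i+r+1]) for i in range(len(x))]

def erode(x, w):
    # Grayscale erosion = min filter over a window of width w.
    r = w // 2
    return [min(x[max(0, i-r):i+r+1]) for i in range(len(x))]

def closing(x, w):
    return erode(dilate(x, w), w)

def differential_profile(x, widths):
    # Difference between closings at successive scales; large responses
    # flag dark structures of roughly that size.
    profile, prev = [], x
    for w in widths:
        cur = closing(prev, w)  # closing applied iteratively, per the abstract
        profile.append([c - p for c, p in zip(cur, prev)])
        prev = cur
    return profile
```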

  • Aligning Large Biomedical Ontologies for Semantic Interoperability using Graph Partitioning   Order a copy of this article
    by Sangeetha Balachandran, Vidhyapriya Ranganathan, Divya Vetriveeran 
    Abstract: Ontologies, formal specifications of domain knowledge, play an imperative role in the semantic web and are developed by several domain experts in the biomedical field. Ontology alignment, or mapping, is the process of identifying correspondences among the concepts in ontologies to facilitate data integration between heterogeneous data sources. The alignments generated augment information retrieval, web service composition, drug discovery and the identification of new gene patterns in species. In particular, the proposed ontology mapping system addresses three pivotal issues: (i) facilitating the automated alignment process by incorporating Random Forests (RF), an ensemble learning method that is robust to outliers and allows the individual random trees to be trained in parallel, thereby reducing the execution time; (ii) improving the execution time by partitioning the ontologies using the cluster-walktrap [24] methodology and identifying the correspondences between concepts in parallel; and (iii) identifying equivalence and non-equivalence correspondences based on the descriptions, labels and object properties associated with concepts in the ontologies. The ontologies subjected to the mapping system are partitioned into sub-ontologies, and the sub-ontology pairs with higher cosine similarity are selected as candidates for further mapping. The performance of the system is pragmatically evaluated on benchmark datasets in the anatomy and large biomedical ontology tracks of the Ontology Alignment and Evaluation Initiative (OAEI) 2013 and 2014. With the aid of the proposed system, a quantifiable improvement of about 4.4% is observed in average precision, recall and F-measure. For large biomedical ontologies, the performance of the proposed mapping process improves on the state-of-the-art ontology mapping tools by about 3%. The alignments generated are represented using the Alignment API suggested by the OAEI for consistent representation and to ease the process of evaluation.
    Keywords: Ontology alignment; Ontology Mapping; Semantic information retrieval; Data integration; Biomedical informatics; Semantic interoperability.
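The partition-selection step — keeping only sub-ontology pairs whose term vectors are cosine-similar enough — can be sketched as follows, with each sub-ontology reduced to a bag of concept labels. The similarity threshold is an illustrative assumption.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bags of concept labels.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def candidate_pairs(parts_a, parts_b, threshold=0.5):
    # Keep only sub-ontology pairs similar enough to be worth matching.
    return [(i, j) for i, pa in enumerate(parts_a)
                   for j, pb in enumerate(parts_b)
                   if cosine(pa, pb) >= threshold]
```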

  • Characterization of Breast Tissue using compact Microstrip antenna   Order a copy of this article
    by Vanaja Selvaraj, Poonguzhali Srinivasan, Divya Baskaran, Rahul Krishnan 
    Abstract: This paper presents an improved method to characterize breast tissue using a unique microstrip antenna. The proposed antenna consists of a radiating patch with a rectangular slot, three stubs, a feed line and a partial ground plane. Several parameters are used to analyze the microstrip antenna. The designed antenna provides a wide usable frequency band from 2.4-4.76 GHz. In order to observe the interaction between the antenna and breast tissue, a heterogeneous breast model with dielectric characteristics similar to human tissue is used. A tumor in the breast tissue is analyzed by measuring the resonant frequency of the reflected signal. The results show that the shift in resonant frequency decreases as the size of the tumor increases, due to the dielectric variation in the breast tissue.
    Keywords: wideband; heterogeneous; microstrip antenna; breast tissue.
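The reported frequency shift follows from the dependence of a patch antenna's resonance on the effective permittivity of the surrounding tissue. A first-order textbook relation (dominant TM10 mode, fringing fields neglected) illustrates the direction of the effect; it is not the authors' full-wave model.

```python
import math

def patch_resonant_frequency(length_m, eps_eff):
    # f = c / (2 * L * sqrt(eps_eff)); higher tissue permittivity
    # pulls the resonance down, which is the shift being measured.
    c = 299792458.0  # speed of light, m/s
    return c / (2.0 * length_m * math.sqrt(eps_eff))
```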

  • A Hybrid K-Means Algorithm Improving Low-Density Map Based Medical Image Segmentation with Density Modification   Order a copy of this article
    by Srinivasa Reddy A., Pakanati Chenna Reddy 
    Abstract: Segmentation is the grouping of a set of pixels, here mapped from the structures inside the prostate and the background image. The main aim of this research is to provide a better segmentation technique for medical images by addressing the drawbacks that currently exist in density-map-based discriminability of feature values. In this paper, we propose a method for medical image segmentation based on density map properties. The accuracy of the result may fall short of expectations when the dimension of the dataset is high, because the chosen dataset cannot be assumed free of noise and faults. The kernel change, i.e. the segmentation, is made using a hybrid K-means clustering algorithm. This method thus provides the segmentation information as well as a noise-free output in an efficient way. The developed model is implemented in Matlab, and the output is compared with existing techniques such as FCM and K-means to evaluate the performance of the proposed system.
    Keywords: Medical Image Segmentation; Hybrid K-Means Algorithm; Skull striping; FCM; K-Means; Genetic Algorithm.
    DOI: 10.1504/IJBET.2017.10012965
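The plain K-means half of the hybrid clustering can be sketched on 1-D pixel intensities; the hybrid and density-map refinements of the paper are not reproduced here.

```python
def kmeans_1d(values, k, n_iter=20):
    # Lloyd's algorithm on scalar intensities; centroids are initialised
    # evenly across the intensity range (requires k >= 2).
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(n_iter):
        # Assignment step: nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: centroid = mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels, centroids
```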
    by Rachel Nallathamby, Rene Robin CR 
    Abstract: In recent years, providing security for data stored in the cloud has been an important and challenging task. For this purpose, several privacy preservation and encryption algorithms have been proposed in existing work, but they have drawbacks such as high cost, long execution times and low levels of security. To overcome these drawbacks, this paper proposes novel techniques, namely Tiered Blind and Anonymous Hierarchical Identity Based Encryption (TBAHIBE) and Location Based Keyword Query Search (LBKQS), for preserving the privacy of data stored in a cloud environment. In this work, privacy is provided for the medical data stored in the Electronic Health Record (EHR). The approach includes two modules: secure data storage and location-based keyword query search. In the first module, the medical data of the egg and sperm donors, receptors, doctors and lab technicians are stored in encrypted form using the proposed TBAHIBE technique. Only authenticated persons can view the medical data; for instance, the doctor can view the donor's and receptor's medical details. In the second module, location-based search is enabled based on keywords and queries, so the doctor, patient and other users can fetch medical details in filtered form. The main advantage of this approach is that it provides strong privacy for medical data in a secure way. The experimental results evaluate the performance of the proposed system in terms of computation cost, communication cost, query evaluation, encryption time, decryption time and key generation time.
    Keywords: Cloud Computing; Privacy Preservation; Egg Donor; Sperm Donor; Tiered Blind and Anonymous Hierarchical Identity Based Encryption (TBAHIBE) and Location Based Keyword Query Search (LBKQS); Electronic Health Record (EHR).

  • Low cost Device for early diagnosis of Chronic Obstructive Pulmonary Disease   Order a copy of this article
    by Monica Subashini Mohan Chandran, Tushar Talwar, Rohit Mazumder 
    Abstract: Chronic Obstructive Pulmonary Disease (COPD) is characterized by increasing breathlessness. Many people mistake their increased breathlessness and coughing for a normal part of aging. In the early stages of the disease, the symptoms go unnoticed; they only become apparent in the more advanced stages. That is why an easy-to-use device for diagnosis is important. All devices available in the market either need a doctor's help to interpret or are inaccurate. The proposed device serves the purpose of primary diagnosis of COPD, enabling patients to self-test their lung capacity. The lung capacity is estimated from the amount of air exhaled during the test. The sensor system of the device includes a rotary sensor, which gives the patient precise and accurate information every time. The device has a modern, interactive application that gives the patient access to a detailed report on his or her lung condition. The application also features an exercise mode in which the patient can do simple breathing exercises to prevent and treat COPD from an early stage. The device has been validated by conducting self-diagnosis tests with people between the ages of 20 and 45, achieving 87% accuracy. Thus, our device can be used for primary diagnosis of COPD.
    Keywords: COPD; rotary sensor; lung capacity; diagnosis.
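The core computation — lung capacity estimated from the air exhaled during the test — amounts to integrating the flow signal derived from the rotary sensor. A sketch using the trapezoidal rule; the device's actual sensor calibration is not public, so the flow trace is assumed already in litres per second.

```python
def exhaled_volume(flow_lps, fs):
    # Trapezoidal integration of a flow trace (litres/second) sampled
    # at fs Hz gives the exhaled volume in litres.
    dt = 1.0 / fs
    return sum((flow_lps[i] + flow_lps[i+1]) / 2.0 * dt
               for i in range(len(flow_lps) - 1))
```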

  • Mathematical Model based ontology for Human Papillomavirus in India   Order a copy of this article
    Abstract: Cervical cancer is a life-threatening disease affecting the women's population in great numbers. It is the fifth most common cancer overall, with a high impact on human mortality, and the second most common cancer among women worldwide. Cervical cancer is caused by a sexually transmitted virus known as Human Papillomavirus (HPV). In this paper, a mathematical model and an ontological representation of this model, the HPVMath ontology, are formulated to expose the viability of HPV, which leads to cervical cancer in women. Mathematical models translate data into trials that give deep insights about the women's population: not suspected for HPV, suspected for HPV, with HPV without cervical cancer, and with HPV with cervical cancer. These trials turn short-term findings into long-term health outcomes. In addition, the HPVMath ontology representation formalizes a common view of HPV prevalence, which can in turn assist medical practitioners and generate awareness among the public. This paper explores and frames the circumstances of HPV within an ethical focus in an age characterized by worldwide environmental threats.
    Keywords: Cervical cancer; HPV; Mathematical model; Ontology.
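A compartmental model over population groups of the kind listed above can be sketched as a discrete-time susceptible → infected → cancer chain. The rates below are illustrative placeholders, not the paper's fitted parameters.

```python
def simulate_hpv(s0, i0, c0, beta, gamma, steps):
    # Euler steps of a toy S -> I -> C chain: susceptible women acquire
    # HPV at rate beta*S*I/N and infected women progress to cervical
    # cancer at rate gamma per step. Total population is conserved.
    s, i, c = float(s0), float(i0), float(c0)
    n = s + i + c
    history = [(s, i, c)]
    for _ in range(steps):
        new_inf = beta * s * i / n
        new_cancer = gamma * i
        s -= new_inf
        i += new_inf - new_cancer
        c += new_cancer
        history.append((s, i, c))
    return history
```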

  • Design and Developing a Photoplethysmographic Device Dedicated to the Evaluation of Representative Indexes in the Response to Vascular Filling Using Systolic Time Intervals   Order a copy of this article
    by Nasr Kaid Ali Moulhi, Mohammed Benabdellah, Amine Aissa Mokbil Ali 
    Abstract: In this study, we develop a human-machine interface for monitoring the cardiovascular-respiratory system through the evaluation of analogous indices obtained from a finger photoplethysmography (pulse oximetry) waveform. This interface consists of sensors, the electronics associated with these sensors, an acquisition card for communication with a local computer, and a graphical interface developed in the Visual Basic environment for signal tracing and data archiving. In this work, we evaluated representative indexes of the response to vascular filling using systolic time intervals (STIs), namely the pre-ejection period (PEP), the respiratory change in pre-ejection period (ΔPEP), the left ventricular ejection time (LVET) and the systolic time ratio (STR), given that STIs are highly correlated with fundamental cardiac function. To achieve this goal, a data collection study was conducted using synchronized acquisitions of electrocardiogram (ECG), photoplethysmogram (PPG) and pneumotachogram (PTG) signals.
    Keywords: ECG; PPG; PTG; PEP; ΔPEP; LVET; STR; STIs; RS232; Microcontroller; Visual Basic; Vascular Filling.
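Given beat-aligned fiducial times, the STIs reduce to simple differences and a ratio. The fiducials used below (ECG R-peak, PPG pulse foot, dicrotic notch) are common surrogates, and are an assumption here rather than the authors' exact definitions.

```python
def systolic_time_intervals(r_peak_t, ppg_foot_t, notch_t):
    # Surrogate STIs from fiducial times in seconds:
    # PEP  ~ ECG R-peak to PPG pulse foot (pre-ejection period surrogate)
    # LVET ~ PPG foot to dicrotic notch  (ejection time surrogate)
    # STR  = PEP / LVET
    pep = ppg_foot_t - r_peak_t
    lvet = notch_t - ppg_foot_t
    return pep, lvet, pep / lvet
```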

  • Integration of global and local features based on Hybrid Similarity Matching Scheme for Medical Image Retrieval System   Order a copy of this article
    by Ajitha Gladis 
    Abstract: Similarity measurement is a challenging task in content-based medical image retrieval (CBMIR) systems, and the matching scheme is designed to improve retrieval performance. However, conventional approaches to matching have several major shortcomings that can extensively affect their application to medical image retrieval (MIR). To overcome these issues, this paper proposes a multi-level matching (MLM) method for MIR using hybrid feature similarity. Here, images are represented by multi-level features, including local-level and global-level features. The Color and Edge Directivity Descriptor (CEDD) is used as a color- and edge-based descriptor, while Speeded-Up Robust Features (SURF) and the Local Binary Pattern (LBP) are used as local descriptors. The hybrid of global and local features yields enhanced retrieval accuracy, which is analyzed on collected image databases. In the experiments, the proposed method achieves an accuracy of about 92%, which is higher than that of the other methods.
    Keywords: CBIR; local features; global features; multi-level matching; hybrid; similarity; descriptor; CEDD; LBP; SURF.
    DOI: 10.1504/IJBET.2017.10012232
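Of the descriptors named above, LBP is the simplest to show: each pixel's code is built from threshold comparisons against its eight neighbours. A sketch of the basic radius-1 variant (bit order clockwise from the top-left is a convention chosen here):

```python
def lbp_code(patch3x3):
    # 8-neighbour local binary pattern of the centre pixel; neighbours
    # greater than or equal to the centre set their bit.
    c = patch3x3[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch3x3[r][col] >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image region is then used as the local feature vector.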
  • Automatic stenosis grading system for diagnosing coronary artery disease using coronary angiogram   Order a copy of this article
    Abstract: The coronary angiogram is considered the gold standard for diagnosing coronary artery disease. This paper proposes a system that helps describe the level of stenosis in coronary angiogram images using mathematical morphology and a thresholding technique. A novel method is introduced to determine the percentage of stenosis and its grading. Based on the diagnostic results, myocardial infarction (MI) can be treated well in advance. A real-time clinical dataset consisting of 25 conventional coronary angiographies with 865 frames is used to evaluate the performance of the proposed system. The output of the proposed system was inspected by a cardiologist, who confirmed that the system produced excellent segmentation and stenosis grading automatically. The sensitivity, specificity, accuracy and precision of the system are 94.74%, 83.33%, 92% and 94.74% respectively, with an average computational time of 0.84 s. The kappa value also shows perfect system agreement for stenosis grading.
    Keywords: coronary artery disease; coronary angiogram; coronary artery segmentation; stenosis detection; stenosis grading.
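Once the artery is segmented, percent diameter stenosis is the standard reduction of the narrowest lumen diameter relative to a healthy reference segment. The grading cut-offs below are illustrative only; clinical bands vary and the paper's own thresholds are not stated in the abstract.

```python
def percent_stenosis(ref_diameter, min_diameter):
    # Percent diameter stenosis relative to a healthy reference segment.
    return 100.0 * (1.0 - min_diameter / ref_diameter)

def grade(pct):
    # Illustrative grading bands (assumed, not the paper's).
    if pct < 50:
        return 'minimal'
    if pct < 70:
        return 'moderate'
    return 'severe'
```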

  • Particle Swarm Optimization aided Weighted Averaging Fusion Strategy for CT and MRI Medical Images   Order a copy of this article
    by Madheswari Kanmani, Venkateswaran Narasimhan 
    Abstract: Multimodal medical image fusion is a technique that combines two or more images into a single output image in order to enhance the accuracy of clinical diagnosis. In this paper, a non-subsampled contourlet transform (NSCT) image fusion framework that combines CT and MRI images is proposed. The proposed method decomposes the source images into low- and high-frequency bands using the NSCT, and the information across the bands is combined using a weighted averaging fusion rule. The weights are optimized by particle swarm optimization (PSO) with an objective function that jointly maximizes the entropy and minimizes the root mean square error to give improved image quality, which distinguishes it from existing fusion methods in the NSCT domain. The performance of the proposed fusion framework is illustrated using five sets of CT and MRI images, and various performance metrics indicate that the proposed method is highly efficient and suitable for medical applications, supporting better decision making.
    Keywords: Image fusion; CT image; MRI image; NSCT; PSO; Weighted average fusion strategy.
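The fusion rule and its objective can be sketched directly: a weighted average of corresponding coefficients, scored by entropy (to maximize) and RMSE to the sources (to minimize). A grid search stands in for the PSO here; the 8-bin entropy estimate and the equal RMSE weighting are assumptions.

```python
import math

def fuse(a, b, w):
    # Weighted-average fusion of corresponding coefficients.
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def entropy(values, bins=8):
    # Shannon entropy of a coarse histogram of the values.
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in values:
        counts[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def rmse(ref, est):
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

def best_weight(a, b, steps=50):
    # Grid search standing in for PSO: maximise entropy minus mean RMSE.
    def score(w):
        f = fuse(a, b, w)
        return entropy(f) - 0.5 * (rmse(a, f) + rmse(b, f))
    return max((i / steps for i in range(steps + 1)), key=score)
```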

  • Design of artificial pancreas based on the SMGC and self-tuning PI control in type-I diabetic patient   Order a copy of this article
    by Akshaya Kumar Patra, Pravat Kumar Rout 
    Abstract: Optimal closed-loop control of the blood glucose (BG) level has been a major focus for many years, with the aim of realizing an artificial self-regulating insulin device for Type-I Diabetes Mellitus (TIDM) patients. There is an urgent need to design controlled drug-delivery systems with appropriate controllers, not only to regulate blood glucose but also for other chronic clinical disorders requiring continuous long-term medication. As a solution to this problem, a novel optimal self-tuning PI controller is proposed whose gains vary dynamically with respect to the error signal. The controller is verified on a nonlinear model of the diabetic patient under the various uncertainties arising in different physiological conditions and a wide range of disturbances. A comparative analysis of the self-tuning PI controller's performance has been carried out against the sliding mode Gaussian control (SMGC) and other optimal control techniques. The obtained results clearly reveal the better performance of the proposed method in regulating the BG level within the normoglycaemic range (70-120 mg/dl) in terms of accuracy, robustness and the handling of uncertainties.
    Keywords: type-I diabetes mellitus; insulin dose; artificial pancreas; micro-insulin dispenser; SMGC; self-tuning PI control.
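The self-tuning idea — PI gains that vary with the error signal — can be sketched as below; the linear gain law and the base gains are illustrative assumptions, not the paper's tuned values.

```python
def self_tuning_pi(error_seq, dt, kp0=0.1, ki0=0.02, alpha=0.5):
    # PI controller whose gains grow with |error| (the self-tuning idea
    # described above). Returns the control output per sample.
    integral, out = 0.0, []
    for e in error_seq:
        kp = kp0 * (1 + alpha * abs(e))   # illustrative gain law
        ki = ki0 * (1 + alpha * abs(e))
        integral += e * dt
        out.append(kp * e + ki * integral)
    return out
```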

  • Optimized Denoising scheme via Opposition based Self-adaptive learning PSO algorithm for Wavelet Based ECG Signal Noise Reduction   Order a copy of this article
    by Vinu Sundararaj 
    Abstract: Among biological signals, the electrocardiographic (ECG) signal is significant for diagnosing cardiac arrhythmia. The accurate analysis of noisy ECG signals is a challenging problem: for automated analysis, the noise present in the ECG signal must be removed for a correct diagnosis. Numerous investigators have reported different techniques for denoising the ECG signal in recent years. In this paper, an efficient scheme for denoising ECG signals is proposed based on a wavelet-based thresholding mechanism. The scheme applies Opposition-based Self-Adaptive Learning particle swarm optimization (OSLPSO) in the dual-tree complex wavelet packet framework, in which the OSLPSO is utilized for threshold optimization. Different abnormal and normal ECG signals from the MIT-BIH arrhythmia database are tested to evaluate this approach, with white Gaussian noise artificially added at 5 dB, 10 dB and 15 dB. Simulation results illustrate that the proposed system performs well at various noise levels and obtains better visual quality compared with other methods.
    Keywords: Electrocardiogram; denoising; DTCWPT; Self-Adaptive Learning; Opposition learning; Particle swarm optimization; MIT/BIH arrhythmia; Thresholding.
    DOI: 10.1504/IJBET.2017.10012138
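The "opposition" ingredient of OSLPSO is easy to isolate: for every candidate threshold x in [lo, hi], also score its opposite lo + hi - x and keep the better half of the pool. The sketch below is a toy 1-D search with that rule, not the paper's full swarm.

```python
import random

def opposition_search(objective, lo, hi, n=8, iters=20, seed=1):
    # Opposition-based learning, a toy stand-in for the OSLPSO threshold
    # search: alongside every candidate x, evaluate its opposite
    # lo + hi - x and keep whichever half of the pool scores best.
    rng = random.Random(seed)
    pop = [lo + (hi - lo) * rng.random() for _ in range(n)]
    best = min(pop, key=objective)
    for _ in range(iters):
        opposites = [lo + hi - x for x in pop]
        pool = sorted(pop + opposites, key=objective)[:n]
        # A small random perturbation keeps the search moving.
        pop = [min(hi, max(lo, x + (hi - lo) * 0.05 * (rng.random() - 0.5)))
               for x in pool]
        best = min(pop + [best], key=objective)
    return best
```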
  • Optimal ECC Based Signcryption Algorithm for Secured Video Compression Process in H.264 Encoder   Order a copy of this article
    by S. Rajagopal, A. Shenbagavalli 
    Abstract: Video encryption is the combination of cryptographic methods and video technology, providing complete and demonstrable security for video data. To protect the video sequence, we propose a video compression procedure together with encryption, yielding a secured video compression framework. In this paper, we propose an ECC-based signcryption algorithm for a secured video compression process. First, the motion vectors are encrypted by applying the Elliptic Curve Cryptography (ECC) based signcryption algorithm. The proposed method uses ECC for the generation of the public and private keys. Compared with other encryption algorithms, ECC has distinct advantages: a small key, more security, increased speed, little storage space and low bandwidth requirements. The proposed method employs the Improved Artificial Bee Colony (IABC) algorithm to optimize the private key, and the optimally selected private key is then applied to encrypt the motion vectors. The security of the proposed method is examined against different attacks, such as Man-in-the-Middle (MiM), brute force and Denial of Service (DoS) attacks.
    Keywords: Video encryption; Video compression; signcryption; Elliptic Curve Cryptography; Improved artificial Bee Colony algorithm; Brute force; DOS attack.
    DOI: 10.1504/IJBET.2017.10011678
  • Automatic biometric verification algorithm based on the bifurcation points and crossovers of the retinal vasculature branches   Order a copy of this article
    by Talib Hichem Betaouaf, Etienne Decenciere, Abdelhafid Bessaid 
    Abstract: Biometric identification systems allow for the automatic recognition of individuals based on one or more biometric characteristics. In this paper, we propose an automatic identity verification algorithm based on the structure of the vascular network of the human retina. More precisely, the biometric template consists of the geometric coordinates of the bifurcation points and crossovers of the vascular network branches. The main goal of our work is to achieve an efficient system while minimizing the processing time and the size of the data handled. The algorithm therefore uses a novel combination of powerful feature extraction techniques based on mathematical morphology, such as the watershed transformation for the segmentation of the retinal vasculature and the hit-or-miss transform for the detection of bifurcation points and crossovers. We detail each step of the method, from acquisition and enhancement of retinal images to signature comparison through automatic registration. We test our algorithm on a retinal image database (DRIVE). Finally, we present and discuss the evaluation results of our algorithm and compare it with methods from the literature.
    Keywords: Biometrics; biometric verification; retinal blood vessel; image segmentation; bifurcation points.

  • Detection of Fovea Region in Retinal Images Using Optimization based Modified FCM and ARMD disease classification with SVM   Order a copy of this article
    by T. Vandarkuzhali, C.S. Ravichandran 
    Abstract: The present investigation aims to design a superior recognition system for locating the fovea region in retinal images while avoiding the drawbacks of existing approaches. The scheme proceeds through three specific processes: blood-vessel segmentation, optic-disc detection, and fovea detection with ARMD disease classification. In the initial stage, the retinal images are enhanced using the AHE approach and then segmented by an adaptive watershed technique. The next stage recognizes the optic disc by means of the MRG method. In the final stage, the fovea region is effectively located with the help of the OBMFCM technique. Along with the fovea-region segmentation, dry/wet ARMD is classified with an SVM classifier. The technique is implemented on the Matlab 2014 platform and its results are assessed and contrasted with those of comparable fovea recognition approaches.
    Keywords: Optimization based modified Fuzzy C-Means (OBMFCM); Age Related Macula Degeneration (ARMD); Adaptive Histogram Equalization (AHE); Modified Region Growing (MRG); Support Vector Machine (SVM).

  • Detection and Diagnosis of Dilated Cardiomyopathy from the Left Ventricular parameters in Echo-cardiogram sequences   Order a copy of this article
    by G.N. Balaji, T.S. Subashini, A. Suresh, M.S. Prashanth 
    Abstract: The heart has a complicated anatomy and is in constant movement. Cardiologists use echocardiograms to visualize this anatomy and its movement. Because echocardiograms carry limited information, it is difficult for the cardiologist to prognosticate or confirm diseases such as heart muscle damage and valvular problems. In this paper a system is proposed which automatically segments the left ventricle from given echocardiogram video sequences using a combination of fuzzy C-means clustering and morphological operations, from which left ventricle parameters and shape features are extracted. These features are then fed to linear discriminant analysis (LDA), K-nearest neighbour (K-NN) and a Hopfield neural network (HNN) to determine whether the heart is normal or affected by DCM. With the LV parameters evaluated and shape features extracted, it was found that the HNN was able to model normal and abnormal hearts very well, with an accuracy of 88%, compared to LDA and K-NN.
    Keywords: Echocardiogram; Left Ventricle (LV); Dilated Cardiomyopathy (DCM); Fuzzy C-Means clustering (FCM) and Morphological operations.

  • Detection of Epilepsy using Discrete Cosine Harmonic Wavelet Transform based features and Neural Network Classifier   Order a copy of this article
    by G.R. Kiranmayi, V. Udayashankara 
    Abstract: Epilepsy is a neurological disorder caused by sudden hyperactivity in certain parts of the brain. The electroencephalogram (EEG) is the commonly used, cost-effective modality for the detection of epilepsy. This paper presents a method to detect epilepsy using the discrete cosine harmonic wavelet transform (DCHWT) and a neural network classifier. The DCHWT, a harmonic wavelet transform (HWT) based on the discrete cosine transform (DCT) that has been proved to be a spectral estimation technique with reduced bias, is used in this work. The proposed method involves decomposition of EEG signals into DCHWT sub-bands, extraction of features from the sub-bands and classification using an artificial neural network (ANN) classifier. The main focus of this study is the automatic detection of epilepsy from interictal EEG. This is still a challenge to researchers, as interictal EEG looks like normal EEG, which makes detection difficult. The proposed method gives a classification accuracy of 93.33% to 100% for the various classes.
    Keywords: epilepsy; harmonic wavelet transform; HWT; discrete cosine harmonic wavelet transform; DCHWT; ictal EEG; interictal EEG; EEG subbands; neural network classifier.

  • 2D MRI Intermodal Hybrid Brain Image Fusion using Stationary Wavelet Transform   Order a copy of this article
    by Babu Gopal, Sivakumar Rajagopal 
    Abstract: Medical image fusion combines multimodal sensor images to obtain both anatomical and functional data to be used by radiologists for disease diagnosis, monitoring and research. This paper provides a comparative analysis of multiple fusion techniques that can be used to obtain accurate information from intermodal MRI T1 and T2 images. The source images are initially decomposed using the stationary wavelet transform (SWT) into approximation and detail components, while the approximation components are reconstructed by the discrete curvelet transform (DCT); the SWT and DCT are well suited to point and line discontinuities respectively. This paper also provides a comparative study of the different types of image fusion techniques available for MRI image decomposition. The approximation and detail components are fused using different fusion rules, and the final fused image is obtained by the inverse SWT. The fused image is used to localize abnormalities in brain images, leading to accurate identification of brain diseases: 95.7% for brain lesions, 97.3% for Alzheimer's disease and 98% for brain tumours. Various performance parameters are evaluated to compare the fusion techniques, and the proposed method, which provides the better result, is analyzed. The comparison is based on which method provides the fused image with greater entropy, average pixel intensity, standard deviation, correlation coefficient and edge strength.
    Keywords: Inter-modal Image Fusion; MRI T1-T2; Stationary Wavelet Transform; Discrete Curvelet Transform; Principal Component Analysis.

  • Design of Wireless Contact-lens antenna for Intraocular Pressure monitoring   Order a copy of this article
    by Priya Lakshmipathy, Vijitha J, Alagappan M 
    Abstract: Intraocular pressure is an important aspect in the evaluation of patients at risk of glaucoma. Glaucoma is an ocular disorder that results in damage to the optic nerve, often associated with increased aqueous pressure in the eye. Wireless technology reduces discomfort and the risk of infection, and allows patients in remote places to be monitored by providing timely health information. In order to transmit the ocular pressure over a wireless medium, a wireless contact-lens antenna has been designed. The contact-lens coupled-structure antenna was designed to improve the reflection coefficient and to minimize material density through a gap-coupled configuration, in comparison with conventional on-lens loop antennas. The return loss of the designed contact-lens antenna was -21 dB at 2.6 GHz, with a diameter ranging from 14 to 15 mm. The return loss of the designed antenna was simulated using Advanced Design System.
    Keywords: Ocular pressure; reflection co-efficient; wireless technology; coupler antenna; glaucoma; conventional on-lens loop antenna; return loss; Advanced Design System; aqueous pressure; ocular disorder.
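    As a quick sanity check on the reported figure, return loss in dB relates to the magnitude of the reflection coefficient by the standard identity RL = -20 log10 |Γ|, so a -21 dB return loss corresponds to |Γ| ≈ 0.089, i.e. under 1% of incident power reflected. A small sketch of this textbook conversion (generic RF arithmetic, not code from the paper):

```python
def reflection_coefficient(return_loss_db):
    """|Gamma| from return loss in dB (RL = -20 * log10 |Gamma|)."""
    return 10 ** (-abs(return_loss_db) / 20.0)

def reflected_power_fraction(return_loss_db):
    """Fraction of incident power reflected back to the source."""
    return reflection_coefficient(return_loss_db) ** 2

gamma = reflection_coefficient(-21)              # reported antenna figure
pct = reflected_power_fraction(-21) * 100        # reflected power, percent
print(round(gamma, 4), round(pct, 2))            # prints: 0.0891 0.79
```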

  • Effect of repetitive Transcranial Magnetic Stimulation on motor function and spasticity in spastic cerebral palsy   Order a copy of this article
    by Meena Gupta, Bablu Lal Rajak, Dinesh Bhatia, Arun Mukherjee 
    Abstract: To study the effectiveness of repetitive transcranial magnetic stimulation (r-TMS) therapy in the recovery of motor disability by normalizing muscle tone in spastic cerebral palsy (SCP) patients, twenty SCP participants were selected from UDAAN-for the disabled, Delhi, and divided equally into two groups: a control group (CG) and an experimental group (EG). The ten participants in the CG (mean age 8.11, SD 4.09) were given physical therapy for 30 minutes daily for 20 days, while those in the EG (mean age 7.93, SD 4.85) were administered r-TMS at 5 Hz for 15 minutes (1500 pulses) daily, followed by physical therapy of the same duration as provided to the CG. The universally accepted Modified Ashworth Scale (MAS) and Gross Motor Function Measure (GMFM) were used as outcome measures, with pre- and post-assessment completed in both study groups. The GMFM results showed an improvement in motor function of 1.95% in the EG, compared with 0.55% in the CG. Additionally, the MAS scores of the EG showed a significant reduction in spasticity in the muscles of the lower extremity compared with the CG. Our study thus demonstrates that r-TMS therapy followed by PT improved motor performance by decreasing spasticity in SCP patients within a limited number of sessions.
    Keywords: motor disability; spasticity; spastic cerebral palsy; physical therapy; Transcranial magnetic stimulation.

  • An optimized pixel-based classification approach for automatic white blood cells segmentation   Order a copy of this article
    by SETTOUTI Nesma, BECHAR Mohammed El Amine, CHIKH Mohammed Amine 
    Abstract: Pixel-based classification is a promising process for image segmentation. In this process, the image is segmented into subsets by assigning a region label to each pixel; it is an important step towards pattern detection and recognition. In this paper, we are interested in the cooperation of pixel classification and region growing methods for the automatic recognition of white blood cells (WBC). Pixel-based classification is an automatic approach that classifies every pixel in the image, but it does not take into account the spatial information of the region of interest. Region growing methods, on the other hand, take the spatial repartition of the pixels into account, where neighborhood relations are considered. However, region growing methods have a major drawback: they need groups of pixels called "points of interest" to initialize the growing process. We propose an optimized pixel-based classification through cooperation with a region growing strategy, performed in two phases: the first is a learning step with a characterization of each pixel of the image; the second is a region growing application by neighboring-pixel classification from points of interest extracted by the ultimate erosion technique. This process shows that the cooperation yields nucleus and cytoplasm segmentations closer to what is expected by human experts (as given in the reference images).
    Keywords: Automatic white blood cell segmentation ; Region growing approach ; pixel-based classification ; mathematical morphology ; Random Forest.
    DOI: 10.1504/IJBET.2017.10013088
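    The growing phase described above can be sketched as a breadth-first expansion from a seed pixel ("point of interest"), adding 4-connected neighbours that a per-pixel decision rule accepts. A minimal illustration on a toy intensity grid, with a simple intensity-difference test standing in for the paper's trained pixel classifier (illustrative, not the authors' implementation):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from one seed pixel by breadth-first search.

    A 4-connected neighbour joins the region when its intensity is
    within `tol` of the seed intensity (a stand-in for the per-pixel
    classifier used in the paper).
    """
    h, w = len(image), len(image[0])
    sy, sx = seed
    target = image[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(image[ny][nx] - target) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Toy "nucleus" of dark pixels (value 1) on a bright background (9)
img = [[9, 9, 9, 9],
       [9, 1, 1, 9],
       [9, 1, 9, 9],
       [9, 9, 9, 9]]
print(len(region_grow(img, (1, 1), tol=0)))  # 3 pixels in the nucleus
```

    In the full system each accept/reject decision would come from the learned classifier rather than a fixed intensity tolerance.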
  • The analysis of foot loadings in high-level table tennis athletes during short topspin ball between forehand and backhand serve   Order a copy of this article
    by Yaodong Gu, Changxiao Yu, Shirui Shao 
    Abstract: The quality of the backswing has a close relationship with the forward swing: it can generate more power for the next phase and help athletes stay in an active status. The purposes of this study are to help coaches improve their understanding of the backswing motion and to provide guidance for improving athletic performance in practice. Twelve high-level male table tennis athletes were selected, and their foot loadings during short topspin serves were measured by an Emed force plate. Anterior-posterior center of pressure (COP) displacement in the backhand serve was significantly shorter compared with the forehand at the backward-end stage. Mean and peak pressures were higher under the big toe and lateral forefoot of the front foot in forehand than in backhand serves during the backswing. In these two regions and the lateral midfoot of the front foot, contact areas were also larger for the forehand serve compared with the backhand. Conversely, for backhand serves, the COP velocity was much faster than for the forehand during the backswing. Our results showed that the forehand serve has a more sufficient preparation at the backward-end, which can accumulate more power to improve racket speed for the forward swing. The forehand serve not only mainly used the lateral front foot off the ground, but also showed larger contact areas there compared with the backhand at the backswing-end. The results indicate that forehand serves of the short topspin ball are stronger and more stable than backhand serves.
    Keywords: Foot loading; pressure distribution; service stance style; COP velocity ratio.

  • A Locally Adaptive Edge Preserving Filter for Denoising of Low Dose CT using Multi-level Fuzzy Reasoning Concept   Order a copy of this article
    by Priyank Saxena, R. Sukesh Kumar 
    Abstract: To reduce radiation exposure, low-dose CT (LDCT) imaging has become widely used in modern medical practice. The fundamental difficulty of LDCT lies in the heavy noise pollution of the projection data, which leads to deterioration of image quality and diagnostic accuracy. In this study, a novel two-stage locally adaptive edge-preserving filter based on a multi-level fuzzy reasoning (LAEPMLFR) concept is proposed as an image-space denoising method for LDCT images. The first stage employs multi-level fuzzy reasoning in structured pixel regions to handle the uncertainty in local information introduced by noise. The second stage employs a Gaussian filter to smooth both structured and non-structured pixel regions in order to retain the low-frequency information of the noisy image. Compared with traditional denoising methods, the proposed method demonstrates noticeable improvement in noise reduction while maintaining the image contrast and edge details of LDCT images.
    Keywords: Multi-level fuzzy reasoning; Noise reduction; Bilateral filtering; Low dose CT; Edge detection; Image smoothing; Peak Signal to Noise Ratio; Image Quality Index; Gaussian filter.

  • Edge preserving de-noising method for efficient segmentation of cochlear nerve by magnetic resonance imaging   Order a copy of this article
    by Jeevakala Singarayan, A.Brintha Therese 
    Abstract: This article presents a de-noising method to improve the visual quality, edge preservation, and segmentation of the cochlear nerve (CN) in magnetic resonance (MR) images. The method is based on a non-local means (NLM) filter combined with the stationary wavelet transform (SWT). Edge information is extracted from the residue of the NLM filter by processing it through cycle spinning (CS). Visual interpretation shows that the proposed approach not only preserves CN edges but also reduces the Gibbs phenomenon at those edges. The de-noising abilities of the proposed method are assessed using parameters such as root mean square error (RMSE), signal-to-noise ratio (SNR), image quality index (IQI) and feature similarity index (FSIM). The efficiency of the proposed method is further illustrated by segmenting the cochlear nerve of the inner ear with a region growing technique. Segmentation efficiency is evaluated by calculating the cross-sectional area (CSA) of the CN for the different de-noising methods. The comparative results show a significant improvement in edge preservation of the CN in MR images after de-noising with the proposed technique.
    Keywords: Non-Local Means (NLM); Stationary Wavelet Transform (SWT); de-noising; Rician noise; cochlear nerve (CN); MR images; SNR.

  • Feature Based Classification and Segmentation of Mitral Regurgitation Echocardiography Images Quantification Using PISA Method   Order a copy of this article
    by Pinjari Abdul Khayum, R. Sudheer Babu 
    Abstract: Echocardiography is the most widely used clinical modality for the evaluation of valvular regurgitation and gives significant knowledge of the severity of mitral regurgitation (MR). MR is a common heart disease that does not cause symptoms until its final phase. A technique is developed for jet-area segmentation and quantification, in numerical terms, for MR assessment. Prior to segmentation, preprocessing is carried out and several attributes are extracted from the images. A support vector machine (SVM) classifier is developed to classify the echocardiogram images, and all abnormal images are passed to a modified region growing (MRG) segmentation method to segment the MR jet area. Quantification of the segmented jet area is carried out with the support of the proximal isovelocity surface area (PISA) method, which is based on various parameters such as blood flow rate, regurgitant fraction, and effective regurgitant orifice area (EROA). Compared with the existing fuzzy-with-PISA quantification process, the proposed work attained an accuracy rate of 99.05% in jet-area segmentation and quantification.
    Keywords: Echocardiogram; Mitral valve; Mitral Regurgitation; classification; segmentation and quantification.
    DOI: 10.1504/IJBET.2017.10012825
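    The PISA quantities named above rest on standard hemispheric flow relations: instantaneous regurgitant flow rate Q = 2πr² · v_aliasing, and EROA = Q / v_peak. A sketch of these textbook formulas with illustrative values (generic echocardiography relations, not the paper's implementation):

```python
import math

def pisa_flow_rate(radius_cm, aliasing_velocity_cm_s):
    """Instantaneous regurgitant flow rate (mL/s) through the PISA
    hemisphere: Q = 2 * pi * r^2 * v_aliasing."""
    return 2 * math.pi * radius_cm ** 2 * aliasing_velocity_cm_s

def eroa_cm2(flow_rate_ml_s, peak_mr_velocity_cm_s):
    """Effective regurgitant orifice area: EROA = Q / v_peak."""
    return flow_rate_ml_s / peak_mr_velocity_cm_s

q = pisa_flow_rate(1.0, 40.0)                     # r = 1 cm, aliasing 40 cm/s
print(round(q, 1), round(eroa_cm2(q, 500.0), 2))  # prints: 251.3 0.5
```

    The segmentation stage supplies the hemisphere radius r; the Doppler settings supply the aliasing and peak jet velocities.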
  • Multi-Objective Particle Swarm Optimization for mental task classification using Hybrid features and Hierarchical Neural Network Classifier   Order a copy of this article
    Abstract: Recognition of mental tasks using electroencephalograph (EEG) signals is of prime importance in man-machine interfaces and assistive technologies, but the considerably low recognition rate of mental tasks is still an issue. This work combines power spectral density (PSD) features and lazy wavelet transform (LWT) coefficients to present a new approach to feature extraction from EEG signals. A simple but novel classifier called the hierarchical neural network is proposed for task recognition, together with a novel methodology based on multi-objective particle swarm optimization (MOPSO) to select discriminative features and the number of hidden-layer nodes. The extracted features are presented to the hierarchical classifier to discriminate left-hand movement, right-hand movement and a word generation task. Features in the time-frequency domain are extracted using the LWT, while those in the time domain are extracted using the PSD; the hybrid features present complementary information about the task represented by the EEG. The features are applied to MOPSO to select the prominent features and decide the number of hidden nodes of the neural network classifier, and these features train the hierarchical neural network with the number of hidden-layer neurons decided by MOPSO. Effective selection of the features and of the number of hidden-layer nodes of the hierarchical classifier improves the classification accuracy. The results are verified on a standard brain-computer interface (BCI) database and our own B-Alert experimental system database, and the benchmarking indicates that the proposed work outperforms the state of the art.
    Keywords: Mental task classification; MOPSO; LWT; Hybrid features; Hierarchical Classifier.

  • Energy efficient and low noise OTA with improved NEF for Neural Recording Applications   Order a copy of this article
    by Bellamkonda Saidulu, Arun Manohran 
    Abstract: Analog front-end (AFE) design plays a prominent role in determining the overall performance of neural recording systems. In this paper, we present a power-efficient, low-noise operational transconductance amplifier (OTA), the most power-consuming block of a multichannel neural recording system with a shared structure. The inversion coefficient (IC) methodology is used to size the transistors. This work targets neural recording applications, which require a gain above 40 dB and a bandwidth of up to 7.2 kHz. The proposed architecture, referred to as a partial-sharing operational transconductance amplifier with source degeneration, results in reduced noise and hence an improved NEF. Simulations carried out in a UMC 0.18 µm process show an improved gain of 66 dB, a phase margin of 94°, an input-referred voltage noise of 0.6 µV/√Hz and a power consumption of 2.15 µW from a 1.8 V supply.
    Keywords: Neural Amplifier; Telescopic Cascode; Partial OTA Sharing Structure; Self-cascode Composite Current Mirror; Source Degeneration; NEF.

  • Comparison of Missing tooth and Dental work detection using Dental radiographs in Human Identification   Order a copy of this article
    by Jaffino George Peter, Banumathi A, Ulaganathan Gurunathan, Prabin Jose J 
    Abstract: Victim identification plays a vital role in identifying a person after major disasters, when all other biometric information may have been lost and there is otherwise little chance of identification. The major issues with dental radiographs, namely dental work and missing or broken teeth, are addressed in this paper. The algorithm works by comparing ante-mortem (AM) and post-mortem (PM) dental images. This research work focuses on the detection of dental work and broken or missing teeth, and compares an active contour model with mathematical-model-based shape extraction for dental radiographic images. A new mathematical tooth approximation is presented and compared with the online region-based active contour model (ORACM) used for shape extraction. A similarity- and distance-based technique gives better matching between the AM and PM dental radiographs. The exact prediction of each method is calculated and validated with suitable performance measures: the accuracy achieved for the contour method is 94% and for the graph partition method 96%, and the hit rate is plotted with a cumulative matching characteristic (CMC) curve.
    Keywords: Victim identification; dental work; missing tooth; Active Contour Model; Isoperimetric graph partitioning; CMC curve.
    DOI: 10.1504/IJBET.2017.10012600
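    The CMC curve used for evaluation has a simple definition: point k is the fraction of probes whose correct identity appears within the top-k ranked matches. A minimal sketch of that computation on hypothetical ranks (illustrative data, not results from the paper):

```python
def cmc_curve(true_match_ranks, max_rank):
    """Cumulative matching characteristic: entry k-1 is the fraction of
    probes whose correct identity appears within the top-k matches."""
    n = len(true_match_ranks)
    return [sum(1 for r in true_match_ranks if r <= k) / n
            for k in range(1, max_rank + 1)]

# Ranks of the correct AM record for five hypothetical PM probes
ranks = [1, 1, 2, 1, 4]
print(cmc_curve(ranks, 4))  # [0.6, 0.8, 0.8, 1.0]
```

    The "hit rate" reported in the abstract corresponds to reading such a curve at a chosen rank.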
  • Design and prototyping of a novel low stiffness cementless hip stem   Order a copy of this article
    by Ibrahim Eldesouky, Hassan Elhofy 
    Abstract: Present biocompatible materials that are suitable for load-bearing implants have high stiffness compared with natural human bone. This mechanical mismatch causes a condition known as stress shielding. The current trend for overcoming this problem is to use a porous scaffold instead of a solid implant to reduce implant stiffness. Owing to the widespread availability of metal additive manufacturing machines, porous orthopaedic implants can be mass-produced. In this regard, a porous scaffold is incorporated into the design of a low-stiffness hip stem. A 3D finite element analysis is performed to evaluate the performance of the new stem with the patient descending stairs. The results of the numerical study show that the proposed design improves the stress and strain distributions in the proximal region, which reduces the stress-shielding effect. Finally, a prototype of the proposed design is produced on a 3D printer as a proof of concept.
    Keywords: 3D printing; additive manufacturing; auxetic scaffold; low stiffness; stress shielding.

  • Image Analysis for Brain Tumour Detection using GA-SVM with Auto-Report Generation Technique   Order a copy of this article
    by Nilesh Bhaskarrao Bahadure, Arun Kumar Ray, Har Pal Thethi 
    Abstract: In this study, we present image analysis for brain tumour segmentation and detection based on the Berkeley wavelet transformation, enabled by a genetic algorithm and support vector machine. The proposed system uses a double classification analysis to conclude the tumour type: the decision on whether the tumour is benign or malignant is made by the classifier on the basis of the extracted features and the area of the tumour, and the improvement in classifier accuracy obtained through this double decision-making system is investigated. The proposed system also provides an auto-report generation technique with a user-friendly graphical user interface in Matlab; this is the first study of its kind to offer auto-report generation for quick and improved diagnostic analysis by radiologists or clinical supervisors. The experimental results of the proposed technique are evaluated and validated for performance and quality analysis on magnetic resonance (MR) images in terms of accuracy, sensitivity, specificity and Dice similarity index coefficient, achieving 97.77% accuracy, 98.98% sensitivity, 94.44% specificity and an average Dice similarity index coefficient of 0.9849, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues in MR images. The results are validated by extracting 89 features and selecting the relevant features with a genetic algorithm optimized by the support vector machine. The simulation results prove the significance of the approach in terms of segmentation score and classification accuracy in comparison with state-of-the-art techniques.
    Keywords: Berkeley Wavelet Transformation; Feature Extraction; Genetic Algorithm; Magnetic Resonance Imaging (MRI); Support Vector Machine.
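    The Dice similarity index reported above measures the overlap between the segmented tumour and the ground truth; its standard definition on binary masks is DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch (the masks below are hypothetical):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity index between two binary masks (flat 0/1 lists):
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

seg   = [0, 1, 1, 1, 0, 1]   # hypothetical predicted tumour mask
truth = [0, 1, 1, 0, 0, 1]   # hypothetical ground-truth mask
print(round(dice_coefficient(seg, truth), 4))  # 0.8571
```

    A DSC of 0.9849, as reported, indicates near-complete overlap between the automatic segmentation and the reference.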

  • Evaluation of endothelial response to reactive hyperemia in peripheral arteries using a physiological model   Order a copy of this article
    by Mohammad Habib Parsafar, Edmond Zahedi, Bijan Vosoughi Vahdat 
    Abstract: A common approach for the non-invasive evaluation of endothelial function, a good predictor of cardiovascular events, is the measurement of brachial artery diameter changes in flow-mediated dilation (FMD) during reactive hyperemia using ultrasound imaging. However, this method is both costly and operator-dependent, limiting its application to research settings. In this study, an attempt is made toward model-based evaluation of the endothelial response to reactive hyperemia. To this end, a physiological model relating normalized central blood pressure to the finger photoplethysmogram (FPPG) is proposed. A genetic algorithm is utilized to estimate the model's parameters in thirty subjects grouped as normal BP (N=10), high BP (N=10) and elderly (N=10). The change in beat-to-beat fitness between the model output and the measured FPPG (BB_fit index) during the cuff-release interval is fairly well described by a first-order dynamic. Results show that the time constant of this first-order system is significantly greater for normal BP compared with high BP (p-value = 0.004) and elderly subjects (p-value = 0.01). Indeed, the endothelial response to reactive hyperemia is more pronounced in normal-BP and young subjects than in high-BP and elderly subjects, delaying the return of the vasculature to the baseline state. Our findings suggest that the proposed model can be utilized in physiological model-based studies of cardiovascular health, eventually resulting in a reliable index for vascular characterization using the conventional FMD test.
    Keywords: flow-mediated dilation; photoplethysmography; endothelial function; cardiovascular modeling; viscoelasticity; tube-load model.

  • Automated ECG beat classification using DWT and Hilbert transform based PCA-SVM classifier   Order a copy of this article
    by Santanu Sahoo, Monalisa Mohanty, Sukanta Sabut 
    Abstract: The analysis of electrocardiogram (ECG) signals provides valuable information for the automatic recognition of arrhythmia conditions. The objective of this work is to classify five types of arrhythmia beats using wavelet- and Hilbert-transform-based feature extraction techniques. In pre-processing, the wavelet transform is used to remove noise interference from the recorded signal, and the Hilbert transform method is applied to identify the precise R-peaks. A combination of wavelet, temporal and morphological (heartbeat interval) features is extracted from the processed signal for classification. Principal component analysis (PCA) is used to select the informative features, which are fed to a support vector machine (SVM) classifier to classify the arrhythmia beats automatically. The PCA-SVM classifier achieved an average accuracy, sensitivity and specificity of 98.50%, 95.68% and 99.18% respectively with a cubic SVM for classifying the five types of ECG beats at fold eight of a ten-fold cross-validation. The effectiveness of our method compares favorably with published results, and the proposed method may therefore be used efficiently in ECG analysis.
    Keywords: Electrocardiogram; Wavelet; Hilbert transform; support vector machine; principal component analysis; arrhythmia.

  • Impact-induced traumatic brain injury: Effect of human head model on tissue responses of the brain   Order a copy of this article
    by Hesam Moghaddam, Asghar Rezaei, Ghodrat Karami, Mariusz Ziejewski 
    Abstract: The objective of this research is twofold: first, to understand the role of the finite element (FE) head model in predicting tissue responses of the brain, and second, to investigate the fidelity of the pressure response in validating FE head models. Two validated FE head models are impacted in two directions under two impact severities and their tissue responses are compared. Intracranial pressure (ICP) peak values are less sensitive to the head model and brain material. Maximum ICPs occur on the outer surface, vanishing linearly toward the center of the brain. It is concluded that while different head models may simply reproduce the ICP variations due to impact, shear stress is affected by the head model, the impact condition, and the brain material.
    Keywords: Intracranial pressure (ICP); shear stress; injury mechanism; finite element head model; brain injury; reproducibility.

  • Selection of Surface Electrodes for Electrogastrography and Analysis of Normal and Abnormal Electrogastrograms using Hjorth Information   Order a copy of this article
    by Paramasivam Alagumariappan, Kamalanand Krishnamurthy, Ponnuswamy Mannar Jawahar 
    Abstract: Electrogastrogram (EGG) signals, recorded non-invasively using surface electrodes, represent the electrical activity of the stomach muscles and are used to diagnose several digestive abnormalities. Surface electrodes play a significant role in the acquisition of EGG signals from the human digestive system. In this work, an attempt has been made to demonstrate the role of the contact area of surface electrodes in the efficient measurement of EGG signals. Two surface electrodes, with contact diameters of 16 mm and 19 mm, were adopted for the acquisition of EGG signals. Further, the Hjorth parameters of the EGG signals acquired from normal subjects and from abnormal cases suffering from diarrhea, vomiting and stomach ulcer were analyzed. The results demonstrate that the activity, mobility and complexity of the EGG signals increase with increasing contact area of the surface electrodes, and that there is a significant variation in the Hjorth parameters between normal and abnormal cases.
    Keywords: Surface electrodes; contact area; electrogastrogram; activity; mobility; complexity; Hjorth parameters; Information measures.
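    The Hjorth descriptors analyzed in this abstract are simple variance ratios of a signal and its successive derivatives. A minimal illustrative sketch of their computation (not the authors' acquisition pipeline; the sine input is a synthetic stand-in for an EGG trace):

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility and complexity of a 1-D signal.

    Activity   = var(x)
    Mobility   = sqrt(var(x') / var(x))
    Complexity = mobility(x') / mobility(x)
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)        # first derivative (finite differences)
    ddx = np.diff(dx)      # second derivative
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

# Example: a pure sine has complexity close to 1 (no extra frequency content).
t = np.linspace(0, 1, 1000, endpoint=False)
act, mob, comp = hjorth_parameters(np.sin(2 * np.pi * 5 * t))
```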

  • Plasma cell identification based on evidential segmentation and supervised learning   Order a copy of this article
    by Ismahan Baghli, Mourtada Benazzouz, Mohamed Amine Chikh 
    Abstract: Myeloma is among the most common types of cancer; it is characterized by a proliferation of plasma cells, a kind of white blood cell (WBC). Early diagnosis of the disease can improve the patient's survival rate. Manual diagnosis requires clinicians to visually examine microscopic bone marrow images for any signs of cell proliferation. This step is often laborious and can be highly subjective, depending on the clinician's expertise. An automatic system based on WBC identification and counting provides more accurate results than the manual method. Such a system is mainly based on three major steps: cell segmentation, cell characterization and cell classification. In the proposed system, microscopic bone marrow images are segmented by a combination of the watershed transform and evidence theory; the segmented cells are characterized with shape and colour-texture features, and then classified as plasma cells or non-plasma cells with three supervised classifiers: support vector machines, k-nearest neighbour and decision tree. Experimental results show that recognition of plasma cells with the k-nearest neighbour classifier achieved a 97% correct rate with 100% specificity.
    Keywords: Myeloma; Plasma cell; Bone marrow images; Segmentation; Evidence theory; Watershed; Characterization; Shape; Colour texture; Classification.

  • Engineering Approaches for ECG Artifact Removal from EEG: A Review   Order a copy of this article
    by Chinmayee Dora, Pradyut Kumar Biswal 
    Abstract: Electroencephalogram (EEG) signals, obtained by recording brain waves, are used to analyze health problems related to neurology and clinical neurophysiology. The signal is often contaminated by a range of physiological and non-physiological artifacts, which leads to misinterpretation in EEG signal analysis. Hence, artifact removal is one of the preprocessing steps required for clinical usefulness of the EEG signal. One physiological artifact, the electrocardiogram (ECG), can affect the clinical analysis and diagnosis of brain health in various ways when it contaminates the EEG. This paper presents a review of the engineering approaches adopted to date for ECG artifact identification and removal from contaminated EEG signals. In addition, the technical approach, computational cost, input requirements and results achieved with each method are discussed, along with the feasibility of real-time implementation of the algorithms. An analysis of these methods based on their performance is also reported.
    Keywords: EEG; ECG; Artifacts; ICA; Wavelet; EMD; EAS; ANC; Autoregression; ANFIS; TVD; SVM.

  • Enhanced Cache Sharing through Cooperative Data Cache Approach in MANET   Order a copy of this article
    by Lilly Sheeba S., Yogesh P 
    Abstract: In a Mobile Adhoc NETwork (MANET) under normal cache sharing scenarios, when data is transmitted from source to destination, all the nodes along the path store the information in the cache layer before it reaches the destination. This may result in increased node overhead, increased cache memory use and very high end-to-end delay. In this paper, we propose an Enhanced Cache Sharing through Cooperative Data Cache (ECSCDC) approach for MANETs. During the transmission of the desired data from the data centre back to the request originator, the data packets are cached by intermediate caching nodes only if required, using the asymmetric cooperative cache approach. The caching nodes that retain the data in their cache for future retrieval are selected based on the scaled power community index. Simulation results show that the proposed technique reduces the communication overhead, access latency and average traffic ratio near the data centre while increasing the cache hit ratio.
    Keywords: Mobile ad hoc network; caching; cache sharing; cache replacement.

  • Quality Function Deployment Model Using Fuzzy Programming with Optimization Technique for Customer Satisfaction   Order a copy of this article
    by Mago Stalany V., Sudhahar C. 
    Abstract: Quality function deployment (QFD) is a customer-driven quality management and product development scheme for achieving higher customer satisfaction. This paper examines the execution of QFD in a fuzzy environment and develops corresponding measures to deal with fuzzy data. Here, the customer satisfaction parameters considered are comfort, refund and safety in a fuzzy QFD (FQFD) study using a fuzzy logic controller (FLC) with an optimisation method; particle swarm optimisation (PSO) is used to improve the accuracy of the FLC in the QFD procedure. For crisp data generation, a relative preference relation on fuzzy data is employed, so it is not necessary to defuzzify two fuzzy numbers to obtain standard weights in FQFD. The fuzzy results thus identify the priority level of customer satisfaction and the maximum accuracy level of the FQFD procedure.
    Keywords: Fuzzy quality function deployment; Quality Function Deployment; Customer requirements; design quality; Relative preference relation.
    DOI: 10.1504/IJBET.2017.10018887
  • Influence of hip geometry to intracapsular fractures in Sri Lankan women: prediction of country specific values   Order a copy of this article
    by Shanika Arachchi, Narendra Pinto 
    Abstract: Falls are very common in daily life, and the hip is a highly vulnerable location during a fall. The trochanter can be compressed during a sideways fall, resulting in either an intracapsular or an extracapsular fracture. The relationship of bone geometry to fracture risk can be analyzed as a determinant of the mechanical resistance of the bone, as well as a promising fracture prediction tool. Intracapsular fractures depend highly on hip geometry compared with extracapsular fractures. This study aims to determine the influence of hip geometry on intracapsular fractures among Sri Lankan women. The HAL, NSA, FNW and moment arm length of intracapsular fracture patients were compared with a normal group. Concurrently, the moment applied to the proximal femur during a sideways fall was computed and compared with the normal group. We observed that the fractured group has greater NSA, HAL and FNW than the normal group. Furthermore, females with intracapsular fractures have a longer moment arm of the force in a sideways fall, resulting in a greater load on the femoral neck compared with the normal group.
    Keywords: falls; hip fractures; hip geometry; Neck Shaft Angle; Femoral Neck Width.
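    The moment applied to the proximal femur in a sideways fall, as compared between groups above, can be sketched under the simplest assumption (moment = gravitational force times the moment arm of that force; the mass and arm values below are hypothetical placeholders, not the study's data):

```python
def fall_moment(body_mass_kg, moment_arm_m, g=9.81):
    """Moment (N*m) about the femoral neck in a sideways fall, approximated
    as the gravitational force on the body times its moment arm."""
    force_n = body_mass_kg * g       # gravitational force
    return force_n * moment_arm_m    # moment = force * arm

m_normal = fall_moment(55.0, 0.09)   # hypothetical normal-group values
m_fract = fall_moment(55.0, 0.11)    # longer moment arm in the fractured group
```

A longer moment arm at the same body mass directly yields a greater load on the femoral neck, which is the comparison the abstract reports.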

    by Sangeetha M S, Kandaswamy A 
    Abstract: In hemodialysis therapy, the dialyser is subjected to continuous blood flow for several hours and is also reused; the stress experienced by the fibers owing to blood flow is of utmost importance because it reflects the mechanical stability of the membrane. It is tedious to study the stress experienced by an individual fiber in real time; computer-aided techniques enable better insight into the load-bearing capacity of the membrane. A finite-element strategy is implemented to study the effect of flow-induced stress in a hemodialyser membrane. A 3D model of the membrane was developed in straight and undulated (crimped) fiber orientations. A fluid-structure interaction study was conducted to analyse the stress distribution due to varying blood flow. It is observed that in both fiber orientations the stress varies inversely with the blood flow rate. The effects of varying the fiber length, wall thickness and crimp frequency are also studied. The analysis shows that crimped fibers experience less stress than straight fibers. Such analysis helps to predict and evaluate the performance of the hemodialyser membrane.
    Keywords: finite-element strategy; hemodialyser membrane; crimping; fluid structure interaction; computer aided techniques.

    by Anisha M, Kumar S.S, Benisha M 
    Abstract: Interpreting fetal cardiac anomalies from the fetal electrocardiogram (FECG) is a challenging task. Fetal cardiac activity can be assessed by scrutinising the FECG because clinically crucial features are hidden in the amplitudes and waveform durations of the FECG and in the fetal heart rate (FHR). These features are vital in fetal cardiac anomaly interpretation. Hence, an attempt is made here to detect the presence of fetal cardiac anomalies using a support vector machine (SVM) classifier with a polynomial kernel, based on patterns extracted from the FHR, the frequency domain of the FECG signal, fetal cardiac time intervals and FECG morphology. Performance evaluation is carried out on real FECG signals with different combinations of feature sets, and the results are compared. The SVM showed good performance, with 92% classification accuracy when all features are fed to the classifier. The results show that the proposed approach has great potential for early fetal cardiac anomaly detection from the FECG.
    Keywords: Fetal Electrocardiogram; Fetal Heart Rate; SVM; fetal cardiac anomaly; fetal cardiac activity.
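    The classifier described above is an SVM with a polynomial kernel. A minimal stand-in using scikit-learn on synthetic two-feature data (the feature values below are invented placeholders, not real FHR/FECG measurements):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical feature vectors: (mean FHR in bpm, a cardiac time interval in s)
# for normal (class 0) and anomalous (class 1) recordings -- synthetic data.
normal = rng.normal(loc=[140.0, 0.25], scale=[5.0, 0.01], size=(50, 2))
anomal = rng.normal(loc=[170.0, 0.32], scale=[5.0, 0.01], size=(50, 2))
X = np.vstack([normal, anomal])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="poly", degree=3)  # polynomial kernel, as in the abstract
clf.fit(X, y)
accuracy = clf.score(X, y)          # training accuracy on the synthetic set
```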

  • A Multimodal Biometric Approach for the Recognition of Finger Print, Palm Print and Hand Vein using Fuzzy Vault   Order a copy of this article
    by R. Vinothkanna, Amitabh Wahi 
    Abstract: For security reasons, person identification by means of physiological features has taken a primary place. For this, a biometric person recognition system is used, which decides who the user is. In this paper, multimodal biometrics is utilised for person identification with the help of three physiological features: fingerprint, palm print and hand vein. Initially, in the preprocessing stage, unwanted portions, noise and blur are removed from the input fingerprint, palm-print and hand-vein images. Then the features of these three modalities are extracted. Fingerprint features are extracted directly from the preprocessed image; palm-print and hand-vein features are extracted using maximum curvature points in the image cross-sectional profile. Using chaff points and all the extracted feature points, a combined feature vector is obtained, and secret key points are added to it to generate the fuzzy vault. Finally, in the recognition stage, the test person's combined vector is compared with the fuzzy vault database. If the combined vector matches the fuzzy vault, authentication is granted and the secret key is regenerated to confirm the person; otherwise, authentication is denied. The corresponding fingerprint, hand-vein and palm-print images can then be obtained.
    Keywords: Multimodal biometric; Maximum curvature points; Cross-sectional profile; Chaff points; Fuzzy Vault.
    DOI: 10.1504/IJBET.2017.10017685
  • Estimation of a point along overlapping Cervical Cell Nuclei in Pap smear image using Color Space Conversion   Order a copy of this article
    by Deepa T.P., A. Nagaraja Rao 
    Abstract: The identification of normal and abnormal cells is considered one of the most challenging tasks for a computer-assisted Pap smear analysis system. It is even more difficult when cells overlap, as abnormal cells hidden below normal cells have reduced visibility. Hence, there is a need for an algorithm that segments the cells in clusters formed by overlapped cells, which can be achieved using image processing techniques. The complexity of the problem depends on whether only the cytoplasm of two cells overlaps, only the nuclei overlap with disjoint cytoplasm, or both cytoplasm and nuclei overlap. Sometimes a Pap smear sample contains a mixture of cells with disjoint and overlapped cytoplasm and nuclei. Segmentation of nuclei helps to find the cell count, one of the important features in Pap smear analysis. A method is needed that can simultaneously segment disjoint and overlapped nuclei. For overlapped nuclei, accurately identifying the point of overlap is a significant step and plays an important role in segmenting the overlapped cells. This paper discusses such a method, which segments disjoint nuclei and identifies the point of intersection, called the concavity point, in clusters of cells where only the nuclei overlap.
    Keywords: Papanicolaou Smear; Overlapping; Morphological and Microscopic Findings; cell nuclei.

  • Multiobjective Pareto optimization of a pharmaceutical product formulation using radial basis function network and nondominated sorting differential evolution   Order a copy of this article
    by Satyaeswari Jujjavarapu, Ch. Venkateswarlu 
    Abstract: Purpose: In a pharmaceutical formulation involving several composition factors and responses, optimal formulation requires the best configuration of formulation variables that satisfies multiple and conflicting response characteristics. This work aims at developing a novel multiobjective optimization strategy by integrating an evolutionary optimization algorithm with an artificial intelligence model, and evaluates it for the optimal formulation of a pharmaceutical product. Methods: A multiobjective Pareto optimization strategy is developed by combining a radial basis function network (RBFN) with non-dominated sorting differential evolution (NSDE) and applied to the optimal formulation of a trapidil product involving conflicting response characteristics. Results: RBFN models are developed using spherical central composite design data of the trapidil formulation variables, representing the amounts of microcrystalline cellulose, hydroxypropyl methylcellulose and compression pressure, and the corresponding response characteristic data of release order and rate constant. The RBFN models are combined with NSDE and Pareto optimal solutions are generated by augmenting it with Na
    Keywords: Pharmaceutical formulation; Multiple regression model; Response surface method; Radial basis function network; Differential evolution; Multiobjective optimization.

  • Implementation of Circular Hough transform on MRI Images for Eye Globe Volume Estimation   Order a copy of this article
    by Tengku Ahmad Iskandar Tengku Alang, Tian Swee Tan, Azhany Yaakub 
    Abstract: Eye globe volume estimation has gained attention in both the medical and biomedical engineering fields. However, most methods use manual analysis, which is tedious and prone to errors due to inter- and intra-operator variability. In the present study, we estimated the volume of the eye globe in MRI images of normal eye globes using the circular Hough transform (CHT) algorithm. To test the performance of the proposed method, 24 T1-weighted magnetic resonance image sets, comprising 14 males and 10 females with normal eye globes, were randomly selected from the database. The mean (
    Keywords: Circular Hough transform (CHT); Magnetic Resonance Imaging (MRI); MRI images; eye globe detection; T1-weighted.
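    The CHT yields a circle radius per MRI slice, after which the globe volume can be estimated by stacking the circular cross-sections. A sketch of that volume step (the slice radii below are synthetic, generated from an ideal sphere; the CHT detection itself is not reproduced here):

```python
import numpy as np

def globe_volume_from_slices(radii_mm, slice_thickness_mm):
    """Estimate eye-globe volume (mm^3) from per-slice circle radii
    (e.g. as returned by a circular Hough transform) by stacking discs."""
    radii = np.asarray(radii_mm, dtype=float)
    return float(np.sum(np.pi * radii**2) * slice_thickness_mm)

# Synthetic check: a sphere of radius 12 mm sampled every 1 mm,
# with per-slice radius r(z) = sqrt(R^2 - z^2).
R, dz = 12.0, 1.0
z = np.arange(-R + dz / 2, R, dz)          # slice mid-positions
vol = globe_volume_from_slices(np.sqrt(R**2 - z**2), dz)
analytic = 4.0 / 3.0 * np.pi * R**3        # exact sphere volume
```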

  • Electroencephalogram (EEG) Signal Quality Enhancement by Total Variation Denoising Using Non-Convex Regularizer   Order a copy of this article
    Abstract: Medical practitioners have great interest in obtaining a denoised signal before analysing it. EEG is widely used in detecting several neurological diseases such as epilepsy, narcolepsy, dementia, sleep apnea syndrome, Alzheimer's disease, insomnia, parasomnia, Creutzfeldt-Jakob disease (CJD) and schizophrenia. During EEG recording, a great deal of background noise and other kinds of physiological artefacts are present, so the data are contaminated. Therefore, to analyse the EEG properly, it is necessary to denoise it first. Total variation denoising is expressed as an optimization problem, whose solution is obtained by using a non-convex penalty (regularizer) in the total variation denoising. In this article, the non-convex penalty is used for denoising the EEG signal. The results have been compared with wavelet methods. The signal-to-noise ratio (SNR) and root mean square error were computed to measure the performance of the method. It is observed that the approach works well in denoising the EEG signal and hence enhances its quality.
    Keywords: Electroencephalogram; wavelet; artefact; denoising; regularizer; convex optimization; epilepsy; tumors; empirical mode decomposition; principal component analysis; total variation.
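    Total variation denoising minimises a data-fidelity term plus a penalty on the signal's first differences. The sketch below solves the convex 1-D case, argmin_x 0.5||y-x||^2 + lam*||Dx||_1, with the iterative-clipping majorization-minimization algorithm; the article's non-convex regularizer would replace this clipping rule, so this is an illustration, not the authors' exact method:

```python
import numpy as np

def tv_denoise(y, lam, n_iter=300, alpha=4.0):
    """1-D total variation denoising by iterative clipping (convex penalty).
    D is the first-difference operator; alpha >= max eigenvalue of D D^T."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(len(y) - 1)                 # dual variable
    x = y.copy()
    for _ in range(n_iter):
        # x = y - D^T z  (D^T z = [-z0, z0-z1, ..., z_{N-2}])
        x = y - np.concatenate(([-z[0]], -np.diff(z), [z[-1]]))
        # clip keeps |z| <= lam/2, which enforces the TV penalty
        z = np.clip(z + np.diff(x) / alpha, -lam / 2, lam / 2)
    return x

# Synthetic piecewise-constant "signal" with additive Gaussian noise.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(100), np.ones(100), np.zeros(100)])
noisy = clean + 0.1 * rng.standard_normal(300)
denoised = tv_denoise(noisy, lam=0.5)
```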

    by Wiselin Jiji 
    Abstract: In this paper, we have investigated left ventricular (LV) wall motion abnormalities using an eigen-LV space. We employ three phases of operation to perform efficient identification of LV motion abnormalities. In the first phase, an LV border detection technique is used to detect the LV area. In the second phase, the eigen-LV spaces of six abnormalities are converged into the search space. In the third phase, the query is projected onto this search space, leading to a match with the closest disease. Results using the receiver operating characteristic (ROC) curve show that the proposed architecture contributes strongly to computer-aided diagnosis. Experiments were made on a set of 20 abnormal and 20 normal cases. We trained with 8 normal and 8 abnormal cases and obtained an accuracy of 88.8% for the proposed work, against 75.81% and 79% for earlier works. Our empirical evaluation shows superior diagnostic performance compared with other recent works.
    Keywords: Eigen space; LV border detection; indexing.

  • Recent advances on Ankle Foot Orthosis for Gait Rehabilitation: A Review   Order a copy of this article
    by Jitumani Sarma, Nitin Sahai, Dinesh Bhatia 
    Abstract: Since the early 1980s, hydraulic and pneumatic devices have been used to explore methods for lower-limb orthotic devices. Over the past decades, significant development has been made by researchers in rehabilitation robotics concerning assistive orthotic devices for the lower-limb extremities. The aim of this review article is to present a detailed insight into the development of controlled ankle-foot orthosis (AFO) devices for enhancing the functionality of people disabled by injury to the lower limb or by neuromuscular disorders such as multiple sclerosis and spinal muscular atrophy. Different approaches to the design, actuation and control strategies of passive and active AFOs are analyzed in this article with respect to gait rehabilitation. In currently available commercial ankle-foot orthotic devices, overcoming the weakness and instability produced by drop foot while following a natural gait is still a challenge. This paper also focuses on the impact of active control of AFO devices, mainly to enhance the functionality of the lower limb while reducing deformities. Researchers have put huge effort into the modeling, simulation and control of such devices, mainly for gait rehabilitation, with kinematic and dynamic analysis.
    Keywords: Foot drop; Ankle Foot Orthosis; Gait; dorsiflexion; plantarflexion.

  • Computer Aided Designing and Finite Element Analysis for development of porous 3-D tissue scaffold-A review   Order a copy of this article
    by Nitin Sahai, Manashjit Gogoi 
    Abstract: Biodegradable porous tissue scaffolds play a crucial role in the development of tissues and organs. The development of biomimetic porous tissue scaffolds with accurate porosity can be achieved with the help of recent analysis techniques known collectively as computer-aided tissue engineering (CATE), which comprises computed tomography (CT), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), computer-aided design (CAD), the finite element method (FEM) and other modern design and manufacturing technologies, so that the 3D architecture of porous tissue scaffolds can be fabricated with reproducible accuracy in pore size. The aim of this paper is to review the recent methods developed in computer-aided design, finite element analysis and solid freeform fabrication (SFF) for the development of porous three-dimensional tissue scaffolds.
    Keywords: Biomaterials; Scaffolds; Tissue Engineering; Computer Aided Tissue Engineering; Finite Element Method.

    by Amutha S, Vinsley SS 
    Abstract: In this project, an innovative low-complexity motion-vector processing algorithm at the decoder side is proposed for motion-compensated video frame interpolation, or frame rate up-conversion. The algorithm addresses the problems of broken edges and deformed structures in frame interpolation by hierarchically refining motion vectors on different block sizes. Broken edges are handled by taking the odd frames and interpolating them so as to obtain high-resolution images, removing the blur in images obtained from the video. Blending techniques make it easy to remove image blur and further improve the quality of the image obtained from the video, so that a high-resolution image is produced. In the proposed method the input is a video rather than still images as in the existing system; the recovered output is taken as images, and further processing is performed to produce the output video. Different techniques, such as phase-based interpolation and multistage motion-compensated interpolation, are used to obtain sharp images with reduced blur from the input videos. Experimental results show that the proposed system's visual quality is better, and it is also robust, even in video sequences comprising fast motion and complex scenes.
    Keywords: MCFI; BMA; phase based interpolation; steerable pyramid; blending technique.

  • Investigation on staging of breast cancer using miR-21 as a biomarker in the serum   Order a copy of this article
    by Bindu SALIM, Athira M V, Kandaswamy Arumugam, Madhulika Vijayakumar 
    Abstract: Circulating microRNAs (miRNAs) are a novel class of stable, minimally invasive disease biomarkers that are valuable in diagnostics and therapeutics. MiR-21 is an oncogenic miRNA that regulates the expression of multiple cancer-related target genes and is highly expressed in the serum of patients suffering from breast cancer. The present study focused on measuring the expression profile of the most significantly up-regulated miRNA, miR-21, in the serum of breast cancer patients, to evaluate its correlation with the clinical stage of cancer using a molecular beacon probe. miR-21 expression was also quantitatively analyzed by TaqMan real-time PCR. Ten serum samples from confirmed breast cancer patients and one healthy control sample were used for the evaluation of miR-21 expression. The expression levels of miR-21 were significantly higher in breast cancer serum samples than in the healthy control, with significant differences corresponding to clinical stages II, III and IV. The findings indicate that serum miR-21 could serve as a potential marker for therapeutic regimes as well as for monitoring patient status by a simple blood test.
    Keywords: Breast Cancer; Biomarker; miR-21; Clinical stage; Real-time PCR.
    DOI: 10.1504/IJBET.2017.10021948
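    TaqMan real-time PCR expression data of the kind described above are commonly quantified with the 2^(-ΔΔCt) method. A sketch with invented Ct values (illustrative only, not the study's measurements; "ref" denotes an endogenous control gene):

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the 2^(-ΔΔCt) method.
    Ct = threshold cycle; lower Ct means higher expression."""
    ddct = ((ct_target_sample - ct_ref_sample)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Hypothetical Ct values: miR-21 in a patient serum vs. a healthy control.
fold_change = relative_expression(22.0, 25.0, 26.0, 25.0)  # -> 16.0
```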
  • Pose and Occlusion Invariant Face Recognition System for Video Surveillance Using Extensive Feature Set   Order a copy of this article
    by A. Vivek Yoganand, A. Celine Kavida, D. Rukmani Devi 
    Abstract: Face recognition presents a challenging problem in the field of image analysis and computer vision. Different video sequences of the same subject may contain variations in resolution, illumination, pose and facial expression. These variations contribute to the challenges of designing an effective video-based face recognition algorithm. In this proposed method, we present a face recognition method for video sequences with varying pose and occlusion. Initially, shot segmentation is performed to separate the video sequence into frames. The face region is then detected in each frame for further processing; face detection is the first stage of a face recognition system. After the face is detected, the facial features are extracted. Here SURF features, appearance features and holo-entropy are used to determine the uniqueness of the face image. The active appearance model (AAM) is used to extract the appearance-based features of the face image. These features are used to select the optimal key frame in the video sequence, based on a supervised learning method, a modified artificial neural network (MANN) trained with the bat algorithm, where the bat algorithm optimises the neuron weights. Finally, based on the feature library, the face image can be recognised.
    Keywords: face recognition; Active appearance model; Modified Artificial Neural Network; bat algorithm.

  • Automatic segmentation of Nasopharyngeal carcinoma from CT images   Order a copy of this article
    by Bilel Daoud, Ali Khalfallah, Leila Farhat, Wafa Mnejja, Ken’ichi Morooka, Med Salim Bouhlel, Jamel Daoud 
    Abstract: Nasopharyngeal carcinoma (NPC), also called cavum cancer, has become a public health problem in the Maghreb countries and Southeast Asia. Detection of this cancer can be carried out from computed tomography (CT) scans. In this context, we propose two approaches based on image clustering to locate tissues affected by cavum cancer, based respectively on E-M and Otsu segmentation. Compared with the physician's manual contouring, our automatic detection shows that detection using Otsu clustering is more efficient than E-M in terms of precision, recall and F-measure. We then merged the results of the two methods using the AND and OR logical operators: the AND fusion increases precision, while the OR fusion raises recall. However, detection of the NPC using Otsu remains the best solution in terms of F-measure. Compared with previous studies that provide a surface analysis of the NPC, our approach provides a 3D estimation of the tumour, ensuring a better analysis of the patient's record.
    Keywords: Cavum Cancer; DICOM images; image segmentation; E-M; Otsu; recall; precision; F-measure.
    DOI: 10.1504/IJBET.2017.10016628
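    Otsu segmentation and the precision/recall/F-measure evaluation named above can be sketched as follows (synthetic bimodal image standing in for a CT slice; not the authors' full pipeline):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # class-0 (below threshold) probability
    mu = np.cumsum(p * centers)     # cumulative mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = ((mu[-1] * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * w1[valid]))
    return centers[np.argmax(between)]

def f_measure(pred, truth):
    """Precision/recall harmonic mean of a binary mask vs. a reference mask."""
    tp = np.sum(pred & truth)
    prec = tp / max(np.sum(pred), 1)
    rec = tp / max(np.sum(truth), 1)
    return 2 * prec * rec / max(prec + rec, 1e-12)

# Synthetic "scan": dark background with a brighter lesion region.
rng = np.random.default_rng(2)
img = rng.normal(50, 5, (64, 64))
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
img[truth] = rng.normal(150, 5, truth.sum())
t = otsu_threshold(img)
f = f_measure(img > t, truth)
```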
  • Descendant Adaptive Filter to Remove Different Noises from ECG Signals   Order a copy of this article
    by Mangesh Ramaji Kose, Mitul Kumar Ahirwal, Rekh Ram Janghel 
    Abstract: Electrocardiogram (ECG) signals are electrical signals generated by the activity of the heart. ECG signals are recorded and analyzed to monitor heart condition. In their initial raw form, ECG signals are contaminated with different types of noise, such as electrode motion artifact, baseline wander and muscle noise, also known as electromyogram (EMG) noise. In this paper a descendant structure consisting of adaptive filters is used to eliminate these three types of noise. Two adaptive filtering algorithms have been implemented: the least mean squares (LMS) and recursive least squares (RLS) algorithms. The performance of these filters is compared on the basis of fidelity parameters such as mean squared error (MSE), normalised root mean squared error (NRMSE), signal-to-noise ratio (SNR), percentage root mean squared difference (PRD) and maximum error (ME).
    Keywords: Adaptive Filters; ECG; Artifacts; LMS; RLS; SMA.
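    One stage of such a structure can be sketched as a standard LMS adaptive noise canceller (a synthetic 50 Hz sinusoid stands in for powerline-like interference and a low-frequency sine for the ECG; this illustrates the LMS algorithm, not the authors' descendant structure):

```python
import numpy as np

def lms_filter(desired, reference, order=8, mu=0.01):
    """LMS adaptive noise canceller. `desired` is the contaminated signal,
    `reference` correlates with the noise; returns the cleaned signal."""
    n = len(desired)
    w = np.zeros(order)
    out = np.zeros(n)
    for i in range(order, n):
        x = reference[i - order:i][::-1]  # most-recent-first tap vector
        noise_est = w @ x                 # filter's estimate of the noise
        e = desired[i] - noise_est        # error = cleaned-signal estimate
        w += 2 * mu * e * x               # LMS weight update
        out[i] = e
    return out

fs = 500
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 1.3 * t)                       # "ECG" stand-in
contaminated = clean + 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)
reference = np.sin(2 * np.pi * 50 * t)                    # noise reference
cleaned = lms_filter(contaminated, reference)
```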

  • Epileptic Seizure Detection in EEG Using Improved Entropy   Order a copy of this article
    by A. Phareson Gini, M.P. Flower Queen 
    Abstract: Epilepsy is a chronic disorder of the brain that affects people all around the world. It is characterised by recurrent seizures, and it is difficult to recognise when someone is having an epileptic seizure. The electroencephalogram (EEG) signal plays a significant part in the recognition of epilepsy. The EEG signal carries complex information that is stored in EEG recording systems; investigating recorded EEG signals is extremely challenging, and the analysis of epileptic activity is a time-consuming procedure. In this article, we propose a novel ANN-based epileptic seizure detection method for the EEG signal with the help of an improved entropy technique. The proposed technique comprises preprocessing, feature extraction and EEG classification using an artificial neural network. In the primary phase, the whole input data set is sampled. In the second phase, a fuzzy entropy algorithm is utilised to extract the features of the sampled signal. In the classification stage, an artificial neural network is utilised to recognise epileptic seizures in affected patients. Lastly, the proposed technique is compared with existing techniques for detecting epileptic sections. Accuracy, specificity, FAR, sensitivity, FRR and GAR are computed to establish the effectiveness of the proposed epileptic seizure recognition system.
    Keywords: Entropy; EEG; ANN.

  • Kurtosis Maximization for Blind Speech Separation in Hindi Speech Processing System using Swarm Intelligence and ICA   Order a copy of this article
    by Meena Patil, J.S. Chitode 
    Abstract: Blind source separation (BSS) methods separate mixed signals blindly, without any information about the mixing scheme. This is a major issue in the real world, whether a specific person must be identified in a crowd or a region of a speech signal is to be extracted. Moreover, these BSS approaches are combined with shape and statistical features to validate the performance of each in pattern classification. To resolve this issue, an effective BSS algorithm based on group search optimisation (GSO) is proposed. The kurtosis of the signals is used as the objective function, and the GSO is utilised to solve it in the suggested algorithm. Primarily, the source signals are processed with independent component analysis (ICA) to generate the mixed signals from which BSS yields the maximum kurtosis. The source-signal component that is separated out is then removed from the mixtures with the help of the deflation technique. For each source signal, significant improvement in the computational cost and the quality of signal separation is attained using the proposed BSS-GSO algorithm compared with the preceding algorithms.
    Keywords: Blind source separation (BSS); Speech signal; optimization; ICA; Mixing signals and Unknown signals.
    DOI: 10.1504/IJBET.2017.10011983
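    The objective function named above is kurtosis, which is zero for Gaussian signals and large for sparse, speech-like (super-Gaussian) signals, so maximising its absolute value drives a separated output toward a single source. A sketch of the measure itself (synthetic distributions; the GSO search is not reproduced):

```python
import numpy as np

def excess_kurtosis(x):
    """Normalised excess kurtosis: ~0 for a Gaussian signal; BSS/ICA
    maximises |kurtosis| so each output approaches one source."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(3)
k_gauss = excess_kurtosis(rng.standard_normal(100_000))
# Laplacian samples mimic the heavy-tailed amplitude distribution of speech.
k_speechlike = excess_kurtosis(rng.laplace(size=100_000))
```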
  • Electrocardiogram compression using the Non-Linear Iterative Partial Least Squares algorithm: a comparison between adaptive and non-adaptive approach   Order a copy of this article
    by Pier Ricchetti, Denys Nicolosi 
    Abstract: Data compression reduces the amount of data to be stored and can be applied in several data collection processes, using lossy or lossless compression algorithms. Because of the large amount of data involved, compression is desirable for ECG signals. In this work, we present the accepted non-linear iterative partial least squares (NIPALS) method as an option for ECG compression, as recommended by Nicolosi. In addition, we compare results for adaptive and non-adaptive versions of this method using the MIT-BIH Arrhythmia Database. To obtain a better comparison, we developed an abnormality indicator related to possible abnormalities in the waveform, and a decision method that helps to choose between the adaptive and non-adaptive approaches. Results showed that the adaptive approach is better than the non-adaptive approach for the NIPALS compression algorithm.
    Keywords: data compression; component analysis; adaptive; comparison; PCA; principal component analysis; nipals; nonlinear iterative partial least squares; ECG; electrocardiogram; compression algorithms.
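As a rough sketch of the NIPALS iteration underlying the method (illustrative Python, not the authors' implementation; the toy data and tolerance are invented):

```python
import numpy as np

def nipals_first_pc(X, n_iter=100, tol=1e-10):
    """One NIPALS round: leading principal component of a data
    matrix X (samples x features) via alternating regressions."""
    Xc = X - X.mean(axis=0)
    t = Xc[:, 0].copy()                  # initial score vector
    p = None
    for _ in range(n_iter):
        p = Xc.T @ t / (t @ t)           # loadings
        p /= np.linalg.norm(p)
        t_new = Xc @ p                   # scores
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p

# Lossy "compression": keep only (t, p) and rebuild the matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1)) @ np.array([[1.0, 0.5, 0.2]])
X += 0.01 * rng.normal(size=X.shape)     # nearly rank-1 data
t, p = nipals_first_pc(X)
X_hat = np.outer(t, p) + X.mean(axis=0)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Storing scores and loadings instead of the full matrix is what makes the method usable as a (lossy) compressor; further components would be extracted from the deflated residual `Xc - np.outer(t, p)`.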

  • A tumour segmentation approach from flair MRI brain images using SVM and genetic algorithm   Order a copy of this article
    by S.U. Aswathy, G. Glan Devadhas, S.S. Kumar 
    Abstract: This paper puts forth a framework for a medical image analysis system for brain tumour segmentation. Image segmentation helps to segregate objects from the background, making it a powerful tool in medical image processing. This paper presents an improved segmentation algorithm rooted in the Support Vector Machine (SVM) and the Genetic Algorithm (GA). SVMs are the base technique used for segmentation and classification of medical images. The MRI database used consists of FLAIR images. The proposed system consists of two stages. The first stage performs pre-processing of the MRI image, followed by block division. The second stage includes feature extraction, feature selection and, finally, SVM-based training and testing. Feature extraction is performed using the first-order histogram and the co-occurrence matrix, and a GA with KNN is used to select the feature subset. The performance of the proposed system is evaluated in terms of specificity, sensitivity, accuracy, elapsed time and figure of merit.
    Keywords: segmentation; support vector machine; genetic algorithm; k nearest neighbors.
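The co-occurrence features mentioned above can be sketched as follows (a generic Python illustration; the 4-level toy image and the displacement are invented, not taken from the paper):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel displacement,
    normalised to a joint probability table."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g / g.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)
i, j = np.indices(P.shape)
contrast = float(((i - j) ** 2 * P).sum())   # 7/12 for this image
energy = float((P ** 2).sum())               # 24/144 for this image
```

Scalar statistics such as contrast and energy, computed per block, are the kind of texture features an SVM can then be trained on.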

  • Dual Modality Trans-Admittance Mammography and Ultrasound Reflection to Improve Accuracy of Breast Cancer Detection   Order a copy of this article
    by Khusnul Ain 
    Abstract: Breast tissue and cancerous tissue have a high impedance ratio, so impedance imaging can produce high-contrast images. Trans-admittance mammography (TAM) is a prototype impedance-based device for detecting breast cancer, but it can only produce a projection image, and it needs the ratio between anomalous and normal admittance to obtain the anomaly volume. The size and admittance ratio of an anomaly must be known precisely because they are associated with the stage and type of cancer. Acoustic data yield the depth and volume of anomalies accurately, so combining TAM data with acoustic data is expected to provide promising results. The study was conducted by measuring the trans-admittance of a breast phantom with the TAM device at frequencies of 0.5, 1, 5, 10, 50 and 100 kHz; the acoustic data were obtained by scanning the breast phantom. The combination of anomaly depth and volume from ultrasonic reflection with the TAM device can provide the correct information for the ratio of anomalous to reference conductivity.
    Keywords: dual modality; trans-admittance mammography; ultrasound reflection; accurate; breast cancer.

  • Low Power DNA Protein Sequence alignment using FSM State Transition controller   Order a copy of this article
    by Sancarapu Nagaraju, Penubolu Sudhakara Reddy 
    Abstract: In this paper we propose an efficient computation technique for DNA patterns on a reconfigurable hardware (FPGA) platform. The paper also presents the results of a comparative study between existing dynamic-programming and heuristic implementations of the widely used Smith-Waterman pairwise sequence alignment algorithm and an FSM-based core implementation. Traditional software-based sequence alignment methods cannot meet the required data rates, whereas a hardware-based approach offers high scalability and can process tasks in parallel across a large number of new databases. This paper describes a Finite State Machine (FSM) based core processing element to classify protein sequences. In addition, we analyse the performance of bit-based sequence alignment algorithms and present an inner-stage pipelined FPGA (Field Programmable Gate Array) architecture for sequence alignment. Synchronised controllers are used to carry out parallel sequence alignment. The complete architecture is designed for parallel processing in hardware, with FSM-based bit-wise pattern comparison, scalability and a minimum number of computations. Finally, the proposed design is shown to achieve high performance, and its efficiency in terms of resource utilisation is demonstrated on an FPGA implementation.
    Keywords: DNA; Protein Sequence; FSM; Smith-Waterman algorithm; FPGA; Low Power.
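For reference, the scoring recurrence that the FSM core implements in hardware looks like this in software (a plain Python sketch; the match/mismatch/gap scores are illustrative, not the paper's parameters):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment: best local score between
    sequences a and b, via the standard dynamic-programming table."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("TTACGTT", "ACG")   # exact 3-base match: 3 x 2 = 6
```

Each cell depends only on its three neighbours, which is exactly the data dependency a systolic or FSM-based hardware array exploits for parallelism.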

  • A review on multimodal medical image fusion   Order a copy of this article
    by BYRA REDDY G R, Dr. Prasanna Kumar H 
    Abstract: Medical image fusion is defined as combining two or more images from single or multiple imaging modalities such as ultrasound, computerised tomography, magnetic resonance imaging, single-photon emission computed tomography, positron emission tomography and mammography. Medical image fusion is used to optimise storage capacity, minimise redundancy and improve image quality. The goal of medical image fusion is to combine complementary information from multiple imaging modalities of the same scene. This review paper describes the different imaging modalities, fusion methods and major application domains.
    Keywords: Image fusion; Ultrasound; Mammography; Magnetic Resonance; Computed Tomography.

  • Heart Sound Interference Cancellation from Lung Sound Using Dynamic Neighborhood Learning-Particle Swarm Optimizer Based Optimal Recursive Least Square Algorithm   Order a copy of this article
    by Mary Mekala A, Srimathi Chandrasekaran 
    Abstract: Cancellation of acoustic interference from lung sound recordings is a challenging task. Lung sound signals support critical analysis of lung function, so lung diseases can be diagnosed from noise-free lung sound signals. An adaptive noise cancellation technique based on the Recursive Least Squares (RLS) algorithm is proposed in this paper to reduce heart sounds in lung sounds. In RLS, the forgetting factor is the main parameter that determines filter performance, and finding the optimal forgetting factor for a given input is the vital step in RLS operation. An improved PSO algorithm is used to find the optimal forgetting factor for the proposed RLS algorithm. Three different normal breath sounds mixed with heart sound signals are used to test the algorithm. The results are assessed with the correlation coefficient between the original uncorrupted lung sound signal and the interference-cancelled lung signal produced by the proposed optimal filter. Power spectral density plots are also used to measure the accuracy of the proposed optimal RLS algorithm.
    Keywords: Lung sound signals; Dynamic Neighborhood Learning; Recursive Least Square; Adaptive noise cancellation; Optimization; Forgetting Factor; Heart Sound Signals; Correlation Coefficient; Power Spectral Density; Bronchial Sound.
    DOI: 10.1504/IJBET.2017.10020988
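The RLS update that the forgetting factor controls can be sketched as below (generic Python, not the authors' code; the toy lung/interference mixture, filter order and initialisation are invented, and the PSO search for the forgetting factor is omitted):

```python
import numpy as np

def rls_cancel(d, x, order=4, lam=0.99, delta=100.0):
    """RLS adaptive noise canceller: d is the primary input
    (lung sound + interference), x the noise reference; returns the
    error signal, i.e. the cleaned lung sound. lam is the forgetting
    factor that the paper tunes by PSO."""
    w = np.zeros(order)
    P = np.eye(order) * delta            # initial inverse correlation
    e = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1] # newest-first tap vector
        k = P @ u / (lam + u @ P @ u)    # gain vector
        e[n] = d[n] - w @ u              # a-priori error = cleaned sample
        w = w + k * e[n]
        P = (P - np.outer(k, u @ P)) / lam
    return e

# Toy demo: sinusoidal "lung sound" plus FIR-filtered reference noise.
rng = np.random.default_rng(2)
t = np.arange(4000)
lung = np.sin(2 * np.pi * 0.01 * t)
ref = rng.normal(size=len(t))
d = lung + np.convolve(ref, [0.8, -0.4, 0.2])[:len(t)]
clean = rls_cancel(d, ref)
corr = np.corrcoef(clean[500:], lung[500:])[0, 1]   # close to 1
```

Because the interference is an FIR-filtered copy of the reference, a 4-tap RLS filter can model it exactly, and the error output converges to the clean lung signal.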
  • Prediction of Risk Factors for Prediabetes using a Frequent Pattern based Outlier Detection   Order a copy of this article
    by Rajeswari A.M., Deisy C. 
    Abstract: Prediabetes is the forerunner stage of diabetes. Prediabetes develops into type-2 diabetes slowly, without any predominant symptoms; hence, prediabetes has to be predicted a priori to stay healthy. The risk factors for prediabetes are abnormal in nature and are found in a few negative test samples (without diabetes) of the Pima Indian Diabetes data. Conventional classifiers are not able to spot these abnormal samples among the negative samples as a separate group. Hence, we propose an algorithm, Frequent Pattern Based Outlier Detection (FPBOD), to identify such abnormal samples (outliers) as a separate group. FPBOD uses an associative classification technique with surprising measures such as Lift, Leverage and Dependency Degree to detect outliers. Among these, the Lift measure detects the most precise outliers, correctly classifying people who do not yet have diabetes but run the risk of becoming diabetic.
    Keywords: outlier detection; prediabetes; frequent pattern based outlier detection; associative classification; surprising measure.
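The Lift measure used to flag surprising rules can be illustrated with a toy transaction set (a hedged example; the item names are invented and not taken from the Pima data):

```python
def lift(transactions, lhs, rhs):
    """Lift = P(lhs and rhs) / (P(lhs) * P(rhs)).
    Values well above 1 mark a surprisingly frequent co-occurrence."""
    n = len(transactions)
    p_lhs = sum(lhs <= t for t in transactions) / n
    p_rhs = sum(rhs <= t for t in transactions) / n
    p_both = sum((lhs | rhs) <= t for t in transactions) / n
    return p_both / (p_lhs * p_rhs)

data = [{"high_bmi", "high_glucose"},
        {"high_bmi", "high_glucose"},
        {"high_bmi"},
        {"normal"},
        {"normal"}]
score = lift(data, {"high_bmi"}, {"high_glucose"})   # 0.4 / (0.6 * 0.4) = 5/3
```

Samples matching high-lift rules that contradict their class label are the kind of "surprising" negatives FPBOD separates out as outliers.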

  • Design of Analytic Wavelet Transform with Optimal Filter Coefficients for Cancer Diagnosis Using Genomic Signals   Order a copy of this article
    by Deepa Grover, Sonica Sondhi, Banerji B. 
    Abstract: DNA sequence analysis and gene expression analysis through genomic signal processing have played an important role in cancer diagnosis in recent years. For cancer diagnosis through gene expression data, the Discrete Fourier Transform, the Discrete Wavelet Transform (DWT) and IIR low-pass filters are frequently used, but they suffer from drawbacks such as a long essential time-support. In this paper, an analytic wavelet transform with optimal filter coefficients is designed for cancer diagnosis using genomic signals. The proposed technique consists of three modules: a pre-processing module, an optimisation module, and a transform and cancer-diagnosis module. Initially, the filter coefficients are found optimally using the Group Search Optimizer (GSO). Then the optimal coefficients and the pre-processed DNA sequence are applied to the analytic wavelet transform and, subsequently, the cancer cell is diagnosed based on a threshold. DNA sequences obtained from the National Centre for Biotechnology Information (NCBI) form the database for the evaluation. The evaluation metrics employed are sensitivity, specificity and accuracy. For further analysis, a comparison is made with the base method and the analytic transform technique. From the results, we observe that the proposed technique achieves good results, attaining an accuracy of 91.6%, which is better than the other compared techniques.
    Keywords: Genomic Signal Processing (GSP); Cancer diagnosis; GSO; Analytic transform; thresholding.
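A common first step in genomic signal processing, turning the DNA string into numeric tracks, can be sketched as below (the standard Voss indicator mapping, offered as background; the paper's exact pre-processing may differ):

```python
import numpy as np

def voss_map(seq):
    """Voss binary-indicator mapping: one 0/1 track per base,
    so signal-processing transforms can be applied to DNA."""
    return np.array([[1 if ch == b else 0 for ch in seq]
                     for b in "ACGT"])

x = voss_map("ACGTAC")    # shape (4, 6); each column sums to 1
```

Any transform, wavelet or otherwise, then operates on these four numeric sequences instead of the symbolic string.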

  • A Review of Non-Invasive BCI Devices   Order a copy of this article
    by Veena N., Anitha N. 
    Abstract: A BCI enables human beings to control various devices with the help of brain waves. It is particularly useful for people who are totally paralysed by neuromuscular conditions such as spinal cord injury or brain-stem stroke. A BCI provides a muscle-free channel for conveying the user's intent into action, which helps people with motor disabilities to control their surroundings. Various non-invasive technologies, such as electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), are available for capturing brain signals. In this article, various non-invasive BCI devices are analysed and the nature of the signals they capture is reported. We also explore the use of these signals for disease diagnosis, their features, and the availability of the devices on the market.
    Keywords: EEG; MEG; fMRI; Non-invasive; Psychological; Physiological.
    DOI: 10.1504/IJBET.2017.10018427
  • R peak Detection for Wireless ECG using DWT and Entropy of coefficients   Order a copy of this article
    by Tejal Dave, Utpal Pandya 
    Abstract: Investigation of a patient's electrocardiogram helps to diagnose various heart-related diseases. With correct R-peak detection in the ECG wave, arrhythmia classification can be carried out accurately. However, accurate R-peak detection is a major challenge, especially in wireless patient monitoring systems, where it is desirable to capture the ECG at a lower sampling rate in order to reduce power consumption. This paper proposes an algorithm for R-peak detection using the discrete wavelet transform in which detail coefficients are selected based on entropy. The proposed algorithm is validated on the MIT-BIH database and its performance is compared with similar work. For the MIT-BIH case, the positive predictivity and sensitivity of the proposed algorithm are 99.85% and 99.73%, respectively. Application of the proposed algorithm to wireless ECG, acquired at adjustable sampling rates from different subjects using a prototyped Bluetooth ECG module, shows the efficacy of the algorithm in detecting the R-peak of the ECG with high accuracy.
    Keywords: Electrocardiogram; Wireless Monitoring System; Entropy; Discrete Wavelet Transform.
    DOI: 10.1504/IJBET.2017.10013949
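The idea of ranking wavelet detail bands by entropy can be illustrated as follows (a minimal Haar-DWT sketch on a synthetic signal, not the authors' algorithm or data):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail halves."""
    x = x[: len(x) // 2 * 2]
    return ((x[0::2] + x[1::2]) / np.sqrt(2),
            (x[0::2] - x[1::2]) / np.sqrt(2))

def energy_entropy(c):
    """Shannon entropy of normalised coefficient energies; low
    entropy = energy packed into few coefficients (sharp peaks)."""
    p = c ** 2 / np.sum(c ** 2)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Synthetic "ECG": impulsive R peaks riding on a slow baseline.
n = np.arange(1024)
sig = 0.1 * np.sin(2 * np.pi * n / 256)
sig[::128] += 1.0                        # an R peak every 128 samples
approx, detail = haar_dwt(sig)
# The peak-bearing detail band has lower entropy than the smooth band,
# which is the cue for selecting it for R-peak detection.
```

Bands whose coefficient energy is concentrated (low entropy) are the ones that localise the QRS complexes, even at reduced sampling rates.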
  • Muscle fatigue and performance analysis during fundamental laparoscopic surgery tasks   Order a copy of this article
    by Ali Keshavarz Panahi, Sohyung Cho, Michael Awad 
    Abstract: A limited working area and impaired degree of freedom have led surgeons to encounter ergonomic challenges when performing minimally invasive surgery (MIS). As a result, they become vulnerable to associated risks such as muscle fatigue, potentially impacting their performance and causing critical errors in operations. The goal of this study is to first establish the extent of muscle fatigue and time-to-fatigue in vulnerable muscle groups, before determining whether the former has any effect on surgical performance. In the experiment, surface electromyography (sEMG) was deployed to record the muscle activations of 12 subjects (6 males and 6 females) while performing fundamentals of laparoscopic surgery (FLS) tasks for a total of 3 hours. In all, 16 muscle groups were tested. The resultant data were then reconstructed using recurrence quantification analysis (RQA) to achieve the first goal. In addition, a subjective fatigue assessment was conducted to draw comparisons with the RQA results. The subjects' performance was also investigated via an FLS task performance analysis, the results demonstrating that RQA can detect muscle fatigue in 12 muscle groups. The same approach also enabled an estimation of time-to-fatigue for said groups. The results also indicated that RQA and subjective fatigue assessment are very closely correlated (p-value <0.05). Although muscle fatigue was established in all 12 groups, the performance analysis results showed that the subjects' execution of their duties improved over time.
    Keywords: Minimally Invasive Surgery; Fatigue Analysis; Recurrence Quantification Analysis; FLS Task Performance Analysis; Subjective Fatigue Assessment.
    DOI: 10.1504/IJBET.2017.10016990
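The recurrence rate at the core of RQA can be sketched in a few lines (a generic illustration on synthetic signals; the embedding dimension, delay and threshold are invented, not taken from the study):

```python
import numpy as np

def recurrence_rate(x, dim=3, tau=1, eps=0.5):
    """Recurrence rate: fraction of embedded state pairs closer than
    eps. In RQA-based fatigue analysis, changes in such recurrence
    statistics over time are read as fatigue onset."""
    m = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + m] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return float(np.mean(d < eps))

rng = np.random.default_rng(3)
noise = rng.normal(size=300)                         # unstructured signal
periodic = np.sin(np.linspace(0, 20 * np.pi, 300))   # structured signal
# A periodic signal revisits the same states far more often than noise.
```

Here `recurrence_rate(periodic)` exceeds `recurrence_rate(noise)`, the basic contrast RQA quantifies when sEMG becomes more structured with fatigue.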
  • Automatic Diagnosis of Stomach Adenocarcinoma using Riesz Wavelet   Order a copy of this article
    by ANISHIYA P, Sasikala M 
    Abstract: Adenocarcinoma originates from the glands and causes changes in gland architecture. The detection of adenocarcinoma requires histopathological examination of tissue specimens. At present, diagnosis and grading of the cancer depend on the pathologist's visual interpretation of the biopsy samples, which may lead to considerable inter- and intra-observer variability. To overcome this drawback, reduce the reliance on human interpretation and thereby lessen the workload of pathologists, many methods have been proposed. In this paper, a novel method to quantify a tissue for automated cancer diagnosis and grading is introduced. The stomach tissue images are pre-processed to compensate for colour variations, and the Riesz wavelet transform is applied to the pre-processed images. From the Riesz wavelet coefficients, 14 different statistical features are extracted, and wrapper-based feature selection is used. A reliability check on the final dataset is performed using ANOVA. In diagnosis, the tissue is classified as normal (non-malignant), well differentiated, moderately differentiated or poorly differentiated. The proposed system yielded a classification accuracy of 93.2% in diagnosis and 98.33% in grading.
    Keywords: Stomach adenocarcinoma; histopathological image analysis; Color Normalization; Riesz wavelet transform; cancer diagnosis; Hilbert transform; Simoncelli wavelet; ANOVA; Support Vector Machine.

  • Generalized Warblet Transform Based Analysis of Biceps Brachii Muscles Contraction Using Surface Electromyography Signals   Order a copy of this article
    by Diptasree MaitraGhosh, Ramakrishnan Swaminathan 
    Abstract: In this work, an attempt has been made to utilise the time-frequency spectrum obtained using the Generalized Warblet Transform (GWT) for fatigue analysis. Signals are acquired from the biceps brachii muscles of twenty healthy volunteers during isometric contractions. The first and last 500 ms segments of each signal are taken as the non-fatigue and fatigue zones, respectively. The signals from these zones are subjected to the GWT to compute the time-frequency spectrum, and features such as Instantaneous Mean Frequency (IMNF), Instantaneous Median Frequency (IMDF), Instantaneous Spectral Entropy (ISPEn) and Instantaneous Spectral Skewness (ISSkw) are estimated. The results show that IMNF, IMDF and ISPEn are 24%, 34% and 36% higher, respectively, in the non-fatigue condition. In contrast, a 22% higher ISSkw is observed in the fatigue condition. Statistical analysis indicates that the features are significant, with p<0.001. The current method appears useful for analysing muscle fatigue disorders using sEMG signals.
    Keywords: sEMG; biceps brachii; muscle fatigue; GWT.

  • Automated Emotion State Classification using Higher Order Spectra and Interval features of EEG   Order a copy of this article
    by Rashima Mahajan 
    Abstract: Automated analysis of electroencephalogram (EEG) signals for emotion-state analysis has been progressing towards the development of affective brain-computer interfaces. However, conventional EEG analysis techniques such as event-related potentials (ERP) and power spectrum estimation fail to provide high emotion-state classification rates, owing to Fourier phase suppression, when used with distinct machine learning tools. Further, only a limited set of emotions has been explored for automated recognition using EEG, even though there is a wide variety of emotional states illustrating human feelings. An attempt has been made to develop an efficient EEG-based emotion classification algorithm utilising statistics of the fourth-order spectrum. A four-dimensional emotional model in terms of arousal, valence, liking and dominance is proposed using emotion-specific EEG signals from the DEAP dataset. A compact set of five temporal peak/interval-related features and three trispectrum-based features is extracted to map the feature space. On this feature map, a multiclass support vector machine (SVM) classifier using the one-against-one algorithm is configured, yielding a maximum classification accuracy of 81.6% while classifying four emotional states. A comparison of the multiclass SVM with other classifiers, such as the feed-forward neural network (FFNN) and the radial basis function network (RBF), is made. Significant improvement in automated emotion-state classification has been achieved using the proposed compact hybrid EEG feature set and a multiclass SVM.
    Keywords: Brain Computer Interface (BCI); Electroencephalogram (EEG); Emotions; Multiclass-SVM; Trispectrum; Temporal; DEAP.

  • Integrated neuromuscular fatigue analysis system for soldiers load carriage trial using DWT   Order a copy of this article
    by Arosha S. Senanayake, Dk N. Filzah Pg Damit, Owais A. Malik, Nor Jaidi Tuah 
    Abstract: This research addresses the neuromuscular fatigue of soldiers using wearable sensors during a load carriage trial. Ten healthy male soldiers participated in the experiment. EMG was recorded bilaterally from selected lower-extremity muscle groups, and EEG was recorded over the frontal cortex. Each subject was asked to run and/or march on a treadmill, with and without load, at 6.4 km h-1, with the inclination increased every 5 minutes until volitional exhaustion was reached. Feature extraction was performed on both signals using the discrete wavelet transform. The results demonstrated significant changes in power levels in the lower and middle frequency bands of the EMG for most muscles under both unloaded and loaded conditions. For the EEG signals, however, significant changes in power distribution were observed over the frontal cortex only under unloaded conditions. Furthermore, data visualisation showed that fatigue was detected at the muscle level first, before signals were sent to the brain for the decision to stop the exercise.
    Keywords: neuromuscular fatigue; EMG; EEG; load carriage.

  • The Effects of Implant Design, Bone Quality, and Insertion Process on Primary Stability of Dental Implants: An In-vitro Study   Order a copy of this article
    by Mansour Rismanchian, Mojtaba Afshari 
    Abstract: The aim of this study was to analyse the maximum insertion torque (MIT) with respect to the implant thread face angle, bone quality and the insertion process. Two implants were designed and made: One with High Face Angle (HFA) and the other with Low Face Angle (LFA). Bovine bones were classified, and then the MIT values of the implants were measured per round using standard protocol. Finally, three-way ANOVA was used to interpret the MIT values. For the main effects, MIT values (mean and standard error) of LFA (17.4±0.7 N cm) and HFA implants (20.8±0.7 N cm) were significantly different. The MIT values for the D4 (9.4±0.6 N cm), D3 (19.1±0.7 N cm), and D2 (28.8±1.1 N cm) bones were significantly different. Considering the higher primary stability as an essential provision for immediate load implants, it is recommended to use implants featured with higher thread face angles.
    Keywords: Implant Design; Maximum Insertion Torque; Primary Stability; In-vitro.
    DOI: 10.1504/IJBET.2017.10021840
    by Anitha Anbazhagan, Uma Maheswari 
    Abstract: In recent years there has been a need for an intelligent way of detecting Diabetic Retinopathy (DR), which is spreading widely in developing countries. The worst aspect of diabetic retinopathy is that it is initially asymptomatic, yet if untreated it can lead to blindness. In this paper, an automated technique for early screening of fundus images is proposed. This work focuses on analysing the texture of the fundus image to classify it as normal or diabetic retinopathy (DR). For texture analysis, the Local Ternary Pattern (LTP) is applied and compared with the Local Binary Pattern (LBP). As LBP is more sensitive to noise and illumination variation, LTP is employed and its discriminative power is explored. The LTP is obtained for all three colour components, Red (R), Green (G) and Blue (B), for radii R = 1, 2, 3 and 5, considering 8 neighbourhood pixels; the importance of the RGB components over the green channel alone is also analysed. The histogram of the LTP and its variance provide a statistical feature set, which is given to KNN and Random Forest classifiers, both with 10-fold cross-validation. The Random Forest classifier provides a sensitivity and specificity of 100%, and an average sensitivity and specificity of nearly 91% are achieved. DR is detected by analysing the retinal background without segmenting the lesions, which implies that the proposed algorithm is very fast and can be used as a screening test for detecting retinal abnormalities.
    Keywords: Local Ternary Pattern; Local Binary Pattern; Diabetic Retinopathy; Random Forest; Computer Aided Diagnosis; Fundus images; KNN.
    DOI: 10.1504/IJBET.2017.10022527
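The LTP coding step can be sketched as below for radius 1 (a generic Python illustration of the standard upper/lower split, not the authors' code; the threshold and toy image are invented):

```python
import numpy as np

def ltp(img, t=5):
    """Local Ternary Pattern (radius 1, 8 neighbours) for interior
    pixels: each neighbour is coded +1/0/-1 against centre +- t,
    then split into the usual upper and lower binary pattern maps."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    c = img[1:h - 1, 1:w - 1]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (di, dj) in enumerate(offs):
        nb = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        upper += (nb >= c + t).astype(int) << k
        lower += (nb <= c - t).astype(int) << k
    return upper, lower

img = np.full((3, 3), 100)
img[0, 0] = 120          # bright neighbour -> upper pattern, bit 0
img[2, 2] = 80           # dark neighbour  -> lower pattern, bit 4
u, l = ltp(img)
```

Histograms of the `upper` and `lower` code maps, per colour channel, form the feature vectors passed to the classifiers; the dead zone of width 2t is what makes LTP less noise-sensitive than LBP.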
  • Enhancement of Low Quality Blood Smear Image using Contrast and Edge Corrections   Order a copy of this article
    by Umi Salamah, Riyanarto Sarno, Agus Zainal Arifin, Anto Satriyo Nugroho, Ismail Eko Prayitno Rozi, Puji Budi Setia Asih 
    Abstract: A blood smear is one of the medical images widely used for disease diagnosis. Low-quality smears produced by poor microscope specifications complicate the reading of features: they are blurred, with diminished true object colour, unclear boundaries, and low contrast between object and background. In this study, we propose an image enhancement technique to improve the readability of features in low-quality blood smear images. The proposed method consists of contrast and edge corrections. The contrast correction integrates global and local contrast correction; the edge correction uses unsharp masking filtering to improve object edges. Experiments are performed on images of three diseases. The results show that the proposed method achieves the best entropy and performs well on MSE and PSNR, so it can produce images containing more information than the other methods while remaining effective.
    Keywords: enhancement; low quality; blood smear; contrast; edge; correction; local; global; unclear boundary; readability feature.
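The unsharp masking step can be sketched as below (a minimal 3x3 box-blur variant for illustration; the paper's kernel, gain and pre-processing are not specified here and are assumptions):

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Edge correction by unsharp masking: add back the difference
    between the image and a 3x3 box-blurred copy of it."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# A step edge gains overshoot on both sides, which reads as sharper,
# while flat regions pass through unchanged.
step = np.tile([10.0] * 4 + [200.0] * 4, (8, 1))
sharp = unsharp_mask(step)
```

The `amount` gain trades edge emphasis against noise amplification, which is why it would normally be tuned jointly with the contrast correction.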

  • An Adaptive Weighted Denoising Filter Framework for Impulse and Shot Noise : A Mammogram Image Applications   Order a copy of this article
    by Ramya A, Murugan D, Manish T. I, Ganesh Kumar T 
    Abstract: Breast screening equipment has not yet become an effective diagnostic method for detecting abnormalities, mainly because of physical interference from the screening system; these problems usually produce noise due to electric charge and to photon counting in optical screening machines. This work focuses on denoising mammogram breast images with a two-step process. In the first phase, a mammogram image dataset is fed to a neural network that separates corrupted images from noise-free images using different feature sets. Then a weighted average of four different filters, the Geometric Mean filter (GM), Decision-Based Median filter (DBM), Directional Weighted Median filter (DWM) and Frost filter, is applied to pixels affected by impulse and shot noise, to preserve corrupted edges and avoid smoothing out details. Additionally, this combined filter is subjected to an exponential transformation to yield the enhanced output. The proposed filter is applied to these two noise types and compared with existing filters using quality measures such as the Peak Signal-to-Noise Ratio (PSNR) and Mean Absolute Error (MAE). The results show that the proposed method yields more promising results than the existing filters for mammogram images.
    Keywords: Image Denoise; Neural Network; Impulse noise; Shot noise; Adaptive Weighted filter; Mammogram Image.
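The two quality measures used above are standard and easy to state exactly (a generic Python sketch with an invented 4x4 example, not the paper's data):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def mae(ref, test):
    """Mean Absolute Error between two images."""
    return float(np.mean(np.abs(ref.astype(float) - test.astype(float))))

ref = np.zeros((4, 4))
noisy = ref.copy()
noisy[0, 0] = 16.0            # one corrupted pixel out of 16
# MSE = 16^2 / 16 = 16, so PSNR = 10*log10(255^2 / 16), about 36.1 dB
```

Higher PSNR and lower MAE against a clean reference are how competing denoising filters are ranked.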

  • Modelling and Simulation Analysis of Porous Polymeric Scaffold for the Replacement of Bruchs Membrane as a Therapy for Age-Related Macular Degeneration   Order a copy of this article
    by Susan Immanuel, Aswin Bob Ignatius, Alagappan Muthupalaniappan 
    Abstract: Age-Related Macular Degeneration (AMD) is a common disease among people of age 50 and above. It is characterised by the loss of Retinal Pigment Epithelial (RPE) cells from the macula of the eye due to the deposition of drusen in the Bruch's membrane that supports the RPE cells. A treatment for AMD is to replace the Bruch's membrane with scaffolds that provide a conducive environment for the RPE cells to adhere and proliferate. In this study, a scaffold made of porous poly(caprolactone) (PCL) was designed using the tool COMSOL Multiphysics, and properties such as its structural integrity and the fluid flow through it were analysed using Brinkman's equation. The model has a square geometry with a dimension of 100 x 100 x 4
    Keywords: Age Related Macular Degeneration; Retinal Pigment Epithelial Cells; Bruch’s membrane; PCL scaffold; Brinkman’s equation.

  • Continuous User Authentication Using Multimodal Biometric Traits with Optimal Feature Level Fusion   Order a copy of this article
    by Prakash A., Krishnaveni R., Dhanalakshmi R. 
    Abstract: Biometrics establishes the authenticity of an individual on the basis of his/her physiological or behavioural characteristics. For a higher level of security, a combination of two or more biometric modalities is required; multimodal biometric technology offers potential solutions for continuous user-to-device authentication in high-security settings. This paper proposes a Continuous Authentication (CA) process using multimodal biometric traits, applying various feature-extraction processes to fingerprint and iris images. The extracted features are then combined in an optimal Feature Level Fusion (FLF) process, and the final feature vector is obtained by concatenating directional information and centre-area features. For optimal feature selection, a Fruit Fly Optimization (FFO) model is employed, and the fused features enter the authentication procedure to compute matching scores (Euclidean distances) against genuine users and impostors. The approach achieves maximum accuracy, sensitivity and specificity compared with existing work, with better FPR and FRR values for the authentication process. The results show 92.23% accuracy for the proposed model, compared with GA and PSO, implemented in MATLAB.
    Keywords: biometrics; authentication; feature vectors; optimization; Feature level fusion; fingerprint; iris.
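The fusion-then-match pipeline can be sketched generically as follows (an illustrative Python example; the z-score normalisation, tiny feature vectors and values are invented, and the FFO selection step is omitted):

```python
import numpy as np

def fuse(finger_feat, iris_feat):
    """Feature-level fusion: normalise each modality's feature
    vector, then concatenate into one fused template."""
    def z(v):
        return (v - v.mean()) / (v.std() + 1e-12)
    return np.concatenate([z(finger_feat), z(iris_feat)])

def matching_score(template, probe):
    """Euclidean distance between fused vectors; lower = more genuine."""
    return float(np.linalg.norm(template - probe))

enrolled = fuse(np.array([0.20, 0.80, 0.50]), np.array([0.10, 0.90]))
genuine  = fuse(np.array([0.21, 0.79, 0.50]), np.array([0.12, 0.88]))
impostor = fuse(np.array([0.90, 0.10, 0.30]), np.array([0.70, 0.20]))
```

A genuine probe lands much closer to the enrolled template than an impostor probe, so thresholding the distance yields the accept/reject decision that FPR and FRR measure.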

  • A comparison of foot kinematics between pregnant and non-pregnant women using the Oxford Foot Model during walking   Order a copy of this article
    by Minjun Liang, Yang Song, Wenlan Lian 
    Abstract: This study investigates differences in foot kinematics between pregnant women (PW) and non-pregnant women (NW) during walking using the Oxford Foot Model (OFM). The results contribute to the understanding of foot biomechanics in pregnant women and may inform customised footwear design. Twenty women in the last trimester of pregnancy and 23 non-pregnant women took part in the study. The three-dimensional motion of the forefoot, hindfoot and tibia during walking was recorded by a Vicon motion analysis system, and two force platforms were used to record the ground reaction force. Compared with NW, PW demonstrated greater plantarflexion and internal rotation of the hindfoot and greater internal tibial rotation during initial contact, and greater forefoot eversion and hindfoot external rotation during push-off. Moreover, PW showed greater external tibial rotation than NW during toe-off, with the centre of pressure (COP) trajectory moving towards the 2nd to 3rd metatarsals at this stage. It appears that the altered foot kinematics of pregnant women may contribute to the redistribution of joint loads and the maintenance of stability at a comfortable walking pace. In addition, the risk of falling for pregnant women appears to be higher in the initial contact phase, and overuse injury may arise at the 2nd to 3rd metatarsals when walking for a long time.
    Keywords: pregnant women; non-pregnant women; musculoskeletal; center of gravity; center of pressure.

    by Ebraheem Sultan, Nizar Alkhateeb 
    Abstract: The free-space broadband frequency-modulated near-infrared (NIR) photon technique has been used to model and characterise optical behaviour in a human lower-forearm phantom. The NIR system measures the broadband (30 MHz-1000 MHz) insertion loss (IL) and insertion phase (IP) of the modulated light, which helps to characterise and model the different penetration depths and path-related modulated-photon movement in the phantom. A phantom resembling the human lower forearm (three layers) is used, along with the experimental modules and Finite Element (FE) tools, to perform the characterisation and modelling. The study is divided into two stages. The first stage performs IL and IP measurements over 30 MHz-1000 MHz with a back-scattering measurement method to characterise the behaviour of the modulated photons. The second stage models the modulated broadband photon behaviour using a Finite Element simulator called COMSOL. Results from both stages are analysed using a signal processing method, which helps to identify the frequency-modulated photon stamp associated with different layers and verifies the accuracy of the 3D FE modelling. The difference between the experimental and 3D FE modelling results, computed at different frequencies, is shown to be less than 6%, and the results give an understanding of how modulated waves of photons behave when travelling in the human forearm. These results can be used to further investigate the functionality surrounding any biological activity around the human forearm.
    Keywords: NIR Spectroscopy; COMSOL; 3D FE; Modulated wave of photons; human lower forearm photon transport; optical transmitter; VCSEL; optical receiver; APD; IP and IL.

  • Complex Diffusion Regularization based Low dose CT Image Reconstruction   Order a copy of this article
    by Kavkirat Kaur, Shailendra Tiwari 
    Abstract: Computed Tomography (CT) is considered a significant imaging tool for clinical diagnosis. Due to the low-dose radiation in CT, the projection data is highly affected by Gaussian noise. Thus, there is a demand for a framework that can eliminate the noise and provide high-quality images. This paper presents a new statistical image reconstruction algorithm by proposing a suitable regularization method. The proposed framework is the combination of two basic terms, namely data fidelity and regularization. Maximizing the log likelihood gives the data fidelity term, which represents the distribution of noise in low-dose X-ray CT images. The maximum likelihood expectation maximization algorithm is introduced as the data-fidelity term. The ill-posedness of the data fidelity term is overcome with the help of a complex diffusion filter, introduced as a regularization term into the proposed framework, which minimizes the noise without blurring edges while preserving fine structural information in the reconstructed image. The proposed model has been evaluated on both simulated and real standard thorax phantoms. The final results are compared with other standard methods, and the analysis shows that the proposed model has many desirable properties, such as better noise robustness, lower computational cost and an enhanced denoising effect.
    Keywords: Computed Tomography (CT); Noise Reduction; Maximum likelihood expectation maximization (MLEM) algorithm; Complex Diffusion (CD); Gaussian noise.

  • Active contours for overlapping cervical cell segmentation   Order a copy of this article
    by Flavio Araujo, Romuere Silva, Fatima Medeiros, Jeova Farias, Paulo Calaes, Andrea Bianchi, Daniela Ushizima 
    Abstract: The nuclei and cytoplasm segmentation of cervical cells is a well-studied problem. However, current segmentation algorithms are not robust enough for clinical practice, either because of their high computational cost or because they cannot accurately segment highly overlapping cells. In this paper, we propose a method that is capable of segmenting both the cytoplasm and the nucleus of each individual cell in a clump of overlapping cells. The proposed method consists of three steps: a) cellular mass segmentation; b) nucleus segmentation; and c) cytoplasm identification based on an active contour method. We carried out experiments on both synthetic and real cell images. The performance evaluation of the proposed method showed that it was less sensitive than two other existing algorithms to increases in the number of cells per image and in the overlapping ratio. It also achieved a promisingly low processing time and, hence, has the potential to support expert systems for cervical cell recognition.
    Keywords: ABSnake; Active Contour; Cervical Cells; Medical Image; Nuclei and Cytoplasm Segmentation; Overlapping Cells; Pap Test.

  • Certain investigation on Biomedical impression and Image Forgery Detection   Order a copy of this article
    by Arun Anoop Mandankandy, Poonkuntran S 
    Abstract: In today's digital age, trust in images is being eroded by malicious image forgeries. Issues related to multimedia security have driven research towards tampering detection. Because the source and target regions of a copy-move forgery come from the same image, they share the same properties, such as temperature, color, noise and illumination conditions, which makes this manipulation particularly effective. In this paper, we analyze a number of papers related to copy-move forgery detection and conclude with a comparative analysis over several parameters, also covering medical and biomedical image analysis, databases, search engines, devices and system security.
    Keywords: Block based image forgery detection; key-point based image forgery detection; pixel based image forgery detection; Copy move forgery; Biomedical search engines; Biomedical Image analysis.

  • Cerebral Palsy Rehabilitation - Effectiveness of Visual Stimulation Method by Analyzing the Quantitative Assessment of Oculomotor Abnormalities   Order a copy of this article
    by ILLAVARASON P, Arokia Renjit J 
    Abstract: Cerebral Palsy (CP) is a group of developmental brain abnormalities arising before, during or after birth. The underlying brain cell damage leads to other health issues affecting vision, hearing, motor activities and so on, with vision problems being among the major health issues for children with CP. The proposed approach deals with vision dysfunction and oculomotor assessment for the diagnosis and treatment of the brain disorder. Eye movement plays a vital role in obtaining accurate vision of static and dynamic objects. In the proposed approach, we assessed the oculomotor deficits of children with CP by recording the eye movements of 26 children with CP (age range 4-14), comparing their performance with age-matched controls, and analyzing their eye fixation centroid, smooth pursuit and eyelid blinking activities. These eye-movement activities provide a window onto neuroplasticity in children with CP. The observed oculomotor abnormalities indicate that the lesioned brain of children with CP retains the ability to reorganize; continued practice of these visual stimulation tasks using the eye-gaze direction approach can improve the cognitive rehabilitation of children with cerebral palsy. These gaze-related indices in response to both static and dynamic visual stimulation techniques may serve as potential quantitative biomarkers for cerebral palsy.
    Keywords: Cerebral Palsy; Eye fixation; Smooth Pursuit; Blink Rate.

  • Quantitative Analysis of Paraspinal Muscle Strain during Cervical Traction Using Wireless EMG Sensor   Order a copy of this article
    by Hemlata Shakya, Shiru Sharma 
    Abstract: The aim of this study is to assess the efficacy of cervical traction on the basis of fatigue analysis using a wireless EMG sensor. Neck pain is a common health complaint, leading to paraspinal muscle spasm and radiculopathy. Patients with neck pain complaints visited the therapy unit regularly for cervical traction treatment. This case study includes EMG recordings of twelve neck pain patients acquired with a wireless EMG sensor. The patients were divided into two groups: those with radiculopathy and paraspinal muscle spasm, and those with paraspinal muscle spasm but without radiculopathy. The subjects were treated with 15 minutes of cervical traction under a 7 kg load. Various features in the time and frequency domains were extracted from the acquired EMG data to assess muscle fatigue during cervical traction treatment in the sitting position. Analysis of the various parameters indicated significant differences in paraspinal muscle activity. The results indicate the effectiveness of continuous traction treatment in the reduction of neck pain.
    Keywords: Electromyography; Neck pain; Muscle Fatigue; Feature extraction.

  • Medical Video based Cryptography Model to Improve the Security and Transmission Time   Order a copy of this article
    by Edwin Dayanand, R.K. Selva Kumar, N.R. Ram Mohan 
    Abstract: This paper proposes a medical-video-based visual cryptography model using Graph-based Coherence Shape Lagrange Interpolation (G-CSLI). The G-CSLI model reduces noise in the medical video while generating shares and minimizes transmission time by reducing the computational complexity of secret sharing. First, a probabilistic polynomial-time model is used with the objective of minimizing computational complexity during secret sharing. Here, similarity is measured based on luminance and structure. Finally, a Lagrange interpolation scheme is applied with the objective of minimizing transmission time and improving security. Overall, the proposed model minimizes the computational complexity of secret sharing, and an experimental evaluation is performed on factors such as security, transmission time and noise in medical-video-based visual cryptography. Experimentation shows that the proposed model reduces the computational complexity of secret sharing by 13.11% and the transmission time by 39.03% compared to state-of-the-art algorithms.
    Keywords: Chaotic Oscillations; Probabilistic Model; Visual Cryptography; Probabilistic Polynomial Coherence Shape; Lagrange Interpolation.

    by Surekha Ks, B.P. Patil 
    Abstract: Electrocardiogram (ECG) signals are susceptible to noise and interference from the external world. This paper presents the reduction of unwanted 50Hz power-line interference in the ECG signal using multi-order adaptive LMS filtering. The novelty of the present method is its actual hardware implementation for power-line interference removal. The adaptive filter design is carried out with a SIMULINK-based model and a hardware design using an FPGA. The performance measures used are SNR, PSNR, MSE and RMSE. A further contribution of the proposed method is the improved signal-to-noise ratio (SNR) achieved through careful selection of the filter order in hardware.
    Keywords: Adaptive filter; ECG; LMS Filter; Multi-order; Power line interference; FPGA; SIMULINK model; SNR; PSNR.

  • Automated Computer Aided System for Early Diagnosis of Alzheimers Disease by Regional Atrophy Analysis in Functional Magnetic Resonance Imaging   Order a copy of this article
    by Sampath R., Indumathi J 
    Abstract: A growing body of research suggests that a preclinical stage of Alzheimer's disease (AD), characterized by specific neuropsychological and brain changes, may exist several years before the overt manifestation of clinical symptoms. FMRI is a promising tool for detecting brain changes in AD. This paper proposes an Automated Computer Aided System (ACAS) for early diagnosis of AD using FMRI. The system consists of four stages: preprocessing, feature extraction, segmentation and regional atrophy analysis. Preprocessing removes the noise in the FMRI image; Multi Scale Analysis (MSA) is used to analyze the FMRI data to obtain its fractals at six different scales, which produce different feature vectors to discriminate between healthy and pathological patients; the Self-Organizing Map Network (SOMN) technique, used for the segmentation process, is an unsupervised network that utilizes the obtained feature vectors for competitive learning; and the regional atrophy analyses are used to differentiate AD from other neurodegenerative diseases. Compared to MRI, the proposed system gives more satisfactory results for early diagnosis and differentiation of AD from other neurodegenerative diseases.
    Keywords: Alzheimer’s Disease (AD); Functional Magnetic Resonance Imaging (FMRI); Automated Computer Aided System (ACAS); Multi Scale Analysis (MSA); Self–Organizing Map Network (SOMN); Regional Atrophy Analysis.
    DOI: 10.1504/IJBET.2019.10012451
  • Brain Tumor Detection and Classification using Hybrid Neural Network Classifier   Order a copy of this article
    by Krishnamurthy Nayak, Supreetha B. S, Phillip Benachour, Vijayashree Nayak 
    Abstract: A brain tumor is one of the most harmful diseases and affects many people, including children, worldwide. The probability of survival can be enhanced if the tumor is detected at a premature stage. Moreover, the process of manually generating precise segmentations of brain tumors from magnetic resonance images (MRI) is time-consuming and error-prone. Hence, in this paper, an effective technique is employed to segment and classify tumor-affected MRI images. Here, segmentation is performed with an adaptive watershed segmentation algorithm. After segmentation, the tumor images are classified by means of a hybrid ANN classifier, which employs the Cuckoo Search optimization technique to update the interconnection weights. The proposed methodology is implemented in MATLAB and the results are analyzed against existing techniques.
    Keywords: Neural Network; Medical Image processing; magnetic resonance images.

  • Multi Features Based Approach for White Blood Cells Segmentation and Classification in Peripheral Blood and Bone Marrow Images   Order a copy of this article
    Abstract: In this paper, we propose a complete automated framework for white blood cell differential counting in peripheral blood and bone marrow images, in order to reduce the analysis time and increase the accuracy of the diagnosis of several blood disorders. A new color transformation is first proposed to highlight the white blood cell regions; then a marker-controlled watershed algorithm is used to segment the region of interest. The nucleus and cytoplasm are subsequently separated. In the identification step, a set of color, texture and morphological features is extracted from both the nucleus and cytoplasm regions. Next, the performance of a random forest classifier on a set of microscopic images is compared and evaluated. The obtained results reveal high accuracies for both the segmentation and classification stages.
    Keywords: white blood cells; cells segmentation; cells classification; color transformation; texture features; morphological features; peripheral blood images; bone marrow images.

  • Review on Next Generation Wireless Power Transmission Technology for Implantable Biomedical Devices   Order a copy of this article
    by Saranya N., Kesavamurthy T. 
    Abstract: Wireless Power Transmission (WPT) is a promising technology that is causing drastic changes in the biomedical field, especially in medical implantable devices such as pacemakers, cardiac defibrillators and cochlear implants. Traditional implantable biomedical devices receive power from batteries or from lead wires passing through the skin, which not only increases the burden on the patient but also increases the pain and risk of surgery. To reduce the cost of biomedical devices, the risk of wire snapping, and the periodic surgery needed to replace batteries, wireless delivery of energy to these devices is desirable. WPT is a promising technology capable of addressing these limitations of implantable devices. It not only negates the risk of infection due to cables passing through the skin, but also removes the need for recurrent surgeries to replace batteries and minimizes the size of the device by excising bulky components such as batteries. This paper provides an overview of the history of wireless power transmission, the basic principles of WPT, and recent research and developments in implantable biomedical applications.
    Keywords: Wireless Power Transmission (WPT); Implantable Medical Devices (IMD); inductive coupling; resonance coupling; rectenna; Implantable Cardioverter Defibrillator (ICD); EEG recorder.

  • The Development of a Brain Controlled Interface Employing Electroencephalography to Control a Hand Prostheses.   Order a copy of this article
    by Mahonri Owen, Chikit Au 
    Abstract: The aim of this report is to test the feasibility of using electroencephalography (EEG) to control a prosthetic hand employing an adaptive grasp. The purpose of this report is to support work relating to the project "The Development of a Brain Controlled Interface for Trans-radial Amputees". This work is based on the idea that an EEG-based control scheme for prosthetics is credible in the current technological climate. The presented experiments investigate the efficiency and usability of the control scheme by testing response times and grasp accuracy. Response times are determined by user training and the control method. The grasp accuracy relies on the effectiveness of the support vector machine used in the control scheme. The outcome of the research is promising and has the potential to provide amputees with an intuitive and easy-to-use method of controlling a prosthetic device.
    Keywords: electroencephalography; prosthetic control; neural prosthetics.

  • A Near Lossless Three-Dimensional Medical Image Compression Technique using 3D-Discrete Wavelet Transform   Order a copy of this article
    by Boopathiraja S, Kalavathi Palanisamy 
    Abstract: In the field of medical image processing, several imaging modalities have emerged, and with their evolutionary growth we can obtain high-quality images that demand a large amount of storage space as well as bandwidth for transmission. Therefore, there is always a great need to develop compression techniques for such images. Moreover, these modalities produce 3-dimensional images, which can either be divided into slices and processed as 2D images or processed directly as volumes. Hence, medical image processing operations such as segmentation and feature extraction increasingly need to operate in 3D space, which is essential and has broad future scope. In this paper, we provide a wavelet-based image compression technique that can be applied directly to 3D images. We apply the 3D DWT to a 3D volume, and the resulting coefficients are taken through further compression steps (thresholding and entropy encoding). The inverse processes are performed for decompression to reconstruct the images. The effects of different wavelets on the results are assessed, and the results of the lossy mode are compared with those of the lossless mode.
    Keywords: 3D Medical Image; 3D-DWT; Huffman coding; Near lossless Compression.

  • Feature extraction and Classification of ECG Signals with Support Vector Machines and Particle Swarm Optimization   Order a copy of this article
    by Gandham Sreedevi, Bhuma Anhuradha 
    Abstract: The present work aims to provide a thorough experimental study showing the superior generalization capability of the support vector machine (SVM) approach in the classification of electrocardiogram (ECG) signals. Feature extraction was performed using principal component analysis (PCA). Further, a novel classification system based on particle swarm optimization (PSO) was used to improve the generalization performance of the SVM classifier. For this purpose, we optimized the SVM classifier design by searching for the best values of the parameters that tune its discriminant function, and upstream by looking for the best subset of features that feed the classifier. The obtained results clearly confirm the superiority of the SVM approach compared to traditional classifiers, and suggest that further substantial improvements in classification accuracy can be achieved by the proposed PSO-SVM classification system.
    Keywords: ECG; PCA; PSO; SVM; Arrhythmias; Classification.

  • An efficient and optimized system for detection of cancerous cells in tongue   Order a copy of this article
    by Mahnoor Rasheed, Ishtiaq Ahmad, Sumbal Zahoor, Muhammad Nasir Khan 
    Abstract: Major progress in image processing allows us to make large-scale use of medical imaging data to provide better detection and treatment of several diseases. Cancer is one such disease, causing around 1.7 million deaths every year. Early and precise detection of cancer can prevent severe complications. Tongue cancer is relatively rare and has drawn the attention of medical research groups in recent times. In this research work, an efficient tongue cancer detection system is proposed, carried out in two phases. Initially, advanced filtering techniques are used to remove the noise content of microscopic images of tissue cultures from the body of the subject to be diagnosed. Image information is enhanced in a pre-processing step that suppresses undesirable distortion and enhances certain image features for easier and faster assessment. In the next phase, the image is segmented in a manner that extracts the significant material and characteristics of the image. Detection of the affected part is performed using Otsu thresholding, k-means clustering and marker-controlled watershed segmentation techniques. The performance and limitations of these schemes are compared and discussed. The simulation results show that the marker-controlled watershed offers the best segmentation and detection.
    Keywords: tongue cancer detection; k-means clustering; marker controlled watershed segmentation; Otsu thresholding.

  • Classification and Quantitative Analysis of Histopathological Images of Breast Cancer   Order a copy of this article
    by Anuranjeeta Anuranjeeta, Romel Bhattracharjee, Shiru Sharma, K.K. Shukla 
    Abstract: This paper provides a robust and reliable computational technique for the classification of benign and malignant cells using morphological features extracted from histopathological images of breast cancer through image processing. The morphological features of malignant cells show changed patterns compared to those of benign cells. However, manual analysis is time-consuming and varies with the perception level of the expert pathologist. To assist pathologists in this analysis, morphological features are extracted, and two datasets are prepared from group-cell and single-cell images for the benign and malignant categories. Finally, classification is performed using supervised classifiers. Morphological feature analysis has always been considered an important tool for analyzing abnormality in the cellular organization of cells. In the present investigation, three classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbour (k-NN) and Support Vector Machine (SVM), are trained using publicly available breast cancer datasets. The performance indicators for benign and malignant images were calculated from the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) counts. Using the number of samples that fall into these classes, the performance parameters accuracy, sensitivity, specificity, balanced classification rate (BCR), F-measure and Matthews' correlation coefficient (MCC) are calculated. The statistical measures of sensitivity and specificity were obtained by calculating the area under the Receiver Operating Characteristic (ROC) curve. It is found that the classification accuracy achieved with the single-cell dataset is better than that with the group-cell dataset. Furthermore, it is established that ANN provides better results for both datasets than the other two classifiers (k-NN and SVM). The proposed computer-aided diagnosis method for the classification of benign and malignant cells provides better accuracy than other existing methods.
    Keywords: Segmentation; Cancer; Morphological features; Histopathology; Classification.

  • Semiautomated detection of aortic root in human heart MSCT images using nonlinear filtering and unsupervised clustering   Order a copy of this article
    by Antonio Bravo 
    Abstract: In this research, a semiautomatic technique to detect the aortic root in three-dimensional (3-D) cardiac images is proposed. The technique consists of three steps: conditioning, filtering and detection. The cardiac images are acquired with 64-slice multislice computerized tomography (MSCT) technology. The conditioning step is based on multiplanar reconstruction (MPR) and is required in order to reformat the cardiac volume information to planes orthogonal to the aortic root. During the filtering step, a set of nonlinear filtering techniques based on similarity enhancement, median and weighted median filters is applied to reduce noise and enhance the cardiac edges in the reformatted images. In the detection step, the filtered volumes are subsequently processed with an unsupervised clustering technique based on simple linkage region growing. The Dice score, the symmetric point-to-mesh distance and the symmetric Hausdorff distance are used as metric functions to compare the segmentations obtained using the proposed method with ground-truth volumes traced by a cardiologist. A clinical dataset of 90 three-dimensional images from 45 patients is used to validate the detection technique. From this validation stage, the maximum average Dice score (0.92), the minimum average symmetric point-to-mesh distance (0.96 mm) and the minimum average symmetric Hausdorff distance (4.80 mm) are obtained when segmenting preprocessed volumes using similarity enhancement.
    Keywords: Human heart; aortic root; multislice computerized tomography; segmentation; similarity enhancement; weighted median; unsupervised clustering.

  • Finite Element Analysis of Tibia Bone   Order a copy of this article
    by Pratik Thakre, K.S. Zakiuddin, I.A. Khan, M.S. Faizan 
    Abstract: The tibia, also known as the shin-bone, is the larger and stronger of the two bones in the leg below the knee in vertebrates (the other being the fibula), and it connects the knee with the ankle bones. The leg bones are the strongest long bones, as they support the rest of the body. The support and movement provided by the tibia are essential to many activities performed by the legs, including standing, walking, running, jumping and supporting the body's weight. This research was directed towards a study of the lower limb of the human body through 3-D modelling and finite element analysis of the tibia. Finite element analysis is used to evaluate the stresses and displacements of the human tibia under physiological loading. Three-dimensional finite element models were obtained using computed tomography (CT scan) data, which provide a thorough description of the material properties and density of bone tissue, essential for creating accurate and realistic geometry of a bone structure. Therefore, in this study, CT scan data of patients' tibia bones (a healthy tibia, and a broken tibia at 1 month and 2 months after surgery) are used to develop three-dimensional finite element models of the left proximal tibia, with half of an average body weight, 37.53 kg (368.16 N), applied to the tibia (full average body weight: 80 kg). Finite element analysis is conducted to calculate the equivalent von Mises stress, maximum principal stress, total deformation and fatigue (Fatigue Tool) for the whole proximal tibia, and the results for the three bones (healthy, one month after surgery and two months after surgery) are compared. These results provide a solid foundation for further studies of bone injury prevention, bone transplant model integrity and validity, and subject-specific fracture mechanisms.
    Keywords: Tibia; Fibula; Stress Analysis; Material Properties; Displacement; Modeling; Simulation; Finite Element Analysis; Hypermesh; Embodi 3D.

    by Mangayarkarasi Thirunavukkarasu, Najumnissa Jamal 
    Abstract: A unique method to analyze ultrasound scan images of the kidney and classify renal abnormalities using SIFT features and an artificial neural network is presented in this paper. The ultrasound kidney images are classified into four classes: normal, cyst, calculi and tumor. Kidney images obtained from a scan centre and a urologist, together with specialist knowledge for predicting common abnormalities, are used as inputs for the processing and analysis of the ultrasound images. Preprocessing and denoising techniques are applied for the removal of speckle noise using median and Wiener filters. A fuzzy C-means clustering based segmentation technique is used to obtain the ROI; 50 ROIs are extracted with this method. A set of statistical features is initially obtained. Second-order statistical features, the GLCM features, which give information about inter-pixel relationships, periodicity and spatial grey-level dependencies, are computed. To overcome the operator dependency of the ultrasound scanning procedure, the rotation-invariant SIFT (Scale Invariant Feature Transform) algorithm is applied and SIFT features are obtained. A total of 182 features are calculated for the normal images, and 350 GLCM features, 250 statistical features and 42 SIFT features are calculated for the abnormal images. Abnormalities are classified using a supervised learning algorithm (ANN). With the number of hidden neurons set to 10, the reported classification accuracy is achieved on the test images.
    Keywords: Ultrasound scan images; speckle noise; median filter; wiener filter ; SIFT features; GLCM; fuzzy C-means segmentation ; artificial neural network;.

  • Development of Comfort Fit lower limb Prosthesis by Reverse Engineering and Rapid prototyping methods and validated with GAIT analysis   Order a copy of this article
    by Chockalingam K, Jawahar N, Muralidharan N, Balachandar Kandeepan 
    Abstract: The development of a comfort-fit, custom-made lower limb prosthesis using the concepts of reverse engineering (RE) and rapid prototyping (RP) is introduced in this paper. A comparison of the average percentage deviation in step lengths was also made, through gait analysis, between a normal person, the conventional (plaster of Paris, POP) prosthesis and the reverse-engineered rapid-prototyped prosthesis. The gait analysis reveals that the average percentage deviation in step lengths is 2.80 for a normal person, 53.70 for the conventional (POP) prosthesis and 7.06 for the reverse-engineered rapid-prototyped prosthesis. The difference in average percentage step-length deviation between the normal person and the amputee wearing the reverse-engineered rapid-prototyped prosthesis is very small (4.26), and hence the prosthesis provides a comfortable fit.
    Keywords: Lower limb prosthesis; Reverse engineering; Rapid prototyping; Gait analysis.

  • Efficient T2 Brain Region Extraction Algorithm Using Morphological Operation And Overlapping test from 2D and 3D MRI images   Order a copy of this article
    by Vandana Shah 
    Abstract: In the field of magnetic resonance imaging (MRI) processing, image segmentation is an important and challenging problem in image analysis. The main purpose of segmentation in MRI images is to diagnose abnormalities in normal brain anatomy and to find the location of a tumour. Many algorithms have appeared in recent years that help segment medical images and identify diseases. This paper proposes a novel 3D Brain Extraction Algorithm (3D-BEA) for segmentation of MRI images to extract the exact brain region. Transverse relaxation time (T2) weighted images are used as input for the development of the algorithm, as these images provide bright compartments and dark fat tissues in the MRI brain region. The images are first denoised and smoothed for further processing. They are then used for the extraction of irregular brain masks through thresholding, which are then compared with the upper and lower slices of the brain images using morphological operations. The final brain volume is generated through this 3D-BEA process. The results of the proposed algorithm are validated by comparison with the results of an existing segmentation algorithm used for the same purpose. The proposed algorithm will help medical experts to understand and diagnose the tumour area of the patient.
    Keywords: segmentation; morphological operations; clustering; k-means clustering; fuzzy c means clustering; Brain Extraction Algorithm.

  • Secure Agent Based Diagnosis and Classification Using Optimal Kernel Support Vector Machine   Order a copy of this article
    by Kiran Tangod, Gururaj Kulkarni 
    Abstract: Diabetes is a serious, complex condition which can affect the entire body. Diabetes requires daily self-care and, if complications develop, can have a significant impact on quality of life and can reduce life expectancy. Existing multi-agent based diabetes diagnosis and classification methods require a large number of agents, and communication between those agents causes time-complexity issues, making such methods unsuitable for emergency situations. Hence, to overcome these issues, it is essential to reduce the number of agents, which motivates the proposed method. Our method requires only three agents: a user agent, a security agent and an updation agent. Initially, the user agent collects the user's symptoms and then encrypts them. The encrypted symptoms are then directed to the updation agent. For secure communication, a Twofish-based encryption algorithm is used between the user and updation agents. After receiving the encrypted data from the security agent, the updation agent determines whether the user's diabetes level is normal or abnormal. Hence, our proposed technique uses the Optimal Kernel Support Vector Machine (OKSVM) algorithm to classify the diabetes level. In the suggested technique, the traditional kernel function is optimized with the help of the Sequential Minimal Optimization (SMO) algorithm. Based on the optimal kernel, the suggested technique effectively prescribes drugs for the corresponding user. The proposed method is implemented on the JAVA platform.
    Keywords: Multi-Agent Systems (MAS); Diabetes; Twofish-Based Encryption Algorithm; Optimal Kernel Support Vector Machine algorithm (OKSVM); Sequential Minimal Optimization (SMO).
    DOI: 10.1504/IJBET.2018.10013947
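    As a minimal sketch of the kernel machinery such a classifier rests on (not the authors' OKSVM or its SMO-tuned kernel), the RBF kernel and the standard SVM decision function can be written as follows; the support vectors, coefficients and gamma are toy values chosen for illustration.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def svm_decision(x, support_vectors, alphas, labels, bias, gamma=0.5):
    """sign( sum_i alpha_i * y_i * K(x_i, x) + b )."""
    s = bias + sum(a * y * rbf_kernel(sv, x, gamma)
                   for sv, a, y in zip(support_vectors, alphas, labels))
    return 1 if s >= 0 else -1

# Two toy support vectors, one per class (e.g. "normal" vs "abnormal").
svs = [(0.0, 0.0), (2.0, 2.0)]
alphas, labels, bias = [1.0, 1.0], [-1, 1], 0.0
print(svm_decision((0.1, 0.0), svs, alphas, labels, bias))  # -1
print(svm_decision((1.9, 2.1), svs, alphas, labels, bias))  # 1
```

    In the paper's setting, SMO would additionally tune the kernel and the alpha coefficients from training data.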
  • Computational Study on the Effect of Human Respiratory Wall Elasticity.   Order a copy of this article
    by Vivek Kumar Srivastav, Rakesh Kumar Shukla, Akshoy Ranjan Paul, Anuj Jain 
    Abstract: The present study computationally investigates the respiratory wall behaviour and airflow characteristics inside the trachea and first-generation bronchi, considering fluid-structure interaction (FSI) between the airflow and the human respiratory wall. The human respiratory tract model is constructed using computed tomography (CT) scan data. The objective is to investigate the effect of the elasticity of the respiratory wall on the deformation and stresses induced in the respiratory tract during inhalation. The deformation in the respiratory tract is found to be insignificant for elasticity moduli above 40 kPa, while a considerable amount of deformation is observed when the elasticity modulus is below 30 kPa. The internal flow physics is compared between rigid and compliant (flexible) human respiratory tract models. The flexibility considered in the respiratory tract wall decreases the maximum flow velocity by 24%. It is observed that the wall shear stress in the compliant respiratory model is reduced to one-third of that in the rigid model. The comparison of results of the rigid and compliant models shows that the FSI technique offers more realistic results than conventional computational fluid dynamics (CFD) analysis of a rigid tract. The simulated results suggest that it is essential to incorporate respiratory wall deformability into the computational model to obtain realistic results. The results will help medical practitioners to correlate clinical findings with the more accurate computational results.
    Keywords: Human respiratory tract; rigid model; compliant model; Fluid structure interaction (FSI); Elasticity modulus.

  • Experimental Studies on Acrylic Dielectric Elastomers as Actuator for Artificial Skeletal Muscle Application   Order a copy of this article
    by Yogesh Chandra, Anuj Jain, R.P. Tewari 
    Abstract: The application of acrylic dielectric elastomer, an electrically actuated electroactive polymer, to artificial muscles has been investigated to evaluate its suitability for prosthetic and orthotic devices by comparing its mechanical and electrical properties with those of natural skeletal muscles. Experimental studies have been performed to obtain the properties of acrylic dielectric elastomers for the design and development of artificial skeletal muscles. Therefore, a commercially available acrylic dielectric elastomer film, VHB 4910 by 3M, is subjected to uniaxial tensile tests under varying strain rates, a stress relaxation test and a loading-unloading test on universal testing machines, and undergoes an electrical actuation test after pre-straining. The results of these tests are discussed separately and then compared with previous research on skeletal muscles. Moreover, the material response is observed to be highly viscous and hyper-elastic, i.e., sensitive to very high strain rates, as in the case of skeletal muscles. These results can be utilised in material selection for the development of dielectric elastomer actuator applications for artificial muscles.
    Keywords: dielectric elastomers; electrical actuation; artificial muscles; stress relaxation; pre-straining; strain rate.

  • A Comparison of Detrend Fluctuation Analysis, Gaussian Mixture Model and Artificial Neural Network Performance in detection of Microcalcification from Digital Mammograms   Order a copy of this article
    by Sannasi Chakravarthy S R, Harikumar Rajaguru 
    Abstract: This paper presents a computer-aided approach that classifies the type of cancer (benign or malignant) and its associated risk from digital mammogram images. Twelve statistical features are extracted through five different wavelets, namely Daubechies, Haar, Biorthogonal Splines, Symlets and DMeyer, with decomposition levels of 4 and 6. The Mammographic Image Analysis Society (MIAS) database is utilised in this paper. The micro-calcifications in the digital mammogram images are detected by Detrend Fluctuation Analysis (DFA), the Gaussian Mixture Model (GMM) and an Artificial Neural Network (ANN). The classifiers' performance is analysed and compared based on benchmark parameters such as sensitivity, selectivity, precision and accuracy. The GMM classifier outperforms the DFA and ANN classifiers in terms of these performance metrics.
    Keywords: mammogram images; breast cancer; wavelet; detrend; gaussian mixture; neural network; classification.
    DOI: 10.1504/IJBET.2018.10021543
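    The benchmark parameters the abstract names follow directly from a binary confusion matrix; a minimal sketch with toy counts (not the paper's MIAS results) is:

```python
def metrics(tp, tn, fp, fn):
    """Standard binary-classification measures from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # "selectivity" / true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, precision, accuracy

# Toy counts: 40 true positives, 45 true negatives, 5 FP, 10 FN.
sen, spe, pre, acc = metrics(tp=40, tn=45, fp=5, fn=10)
print(sen, spe, round(pre, 3), acc)  # 0.8 0.9 0.889 0.85
```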
  • Automatic Detection of Tuberculosis based on Adaboost Classifier and Genetic Algorithm   Order a copy of this article
    Abstract: Tuberculosis is one of the most common diseases in developing countries. Early-stage diagnosis of tuberculosis plays a significant role in curing TB patients. The work presented in this paper is focused on the design and development of a system for the detection of tuberculosis in CT lung images. The disease can be diagnosed easily by radiologists with the help of an automated tuberculosis detection system. The main objective of this paper is that the best solution selected by means of genetic programming is regarded as the optimal feature descriptor. Five stages are used to detect tuberculosis disease: pre-processing the image, segmentation, feature extraction, feature selection and classification. These stages are used in medical image processing to enhance TB identification. In the feature extraction stage, wavelet-based statistical texture feature extraction is used to extract the features, and from the extracted feature sets the optimal features are selected by a genetic algorithm. Finally, the Adaboost classifier method is used for image classification. The experimentation is done and intermediate results are obtained. Experimental results show that Adaboost is a good classifier, giving an accuracy of 95% for classifying TB-affected and non-affected lungs using wavelet-based statistical texture features.
    Keywords: Tuberculosis; Otsu's method; GLCM approach; Genetic Algorithm; Adaboost classifier.
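    The GLCM texture representation named in the keywords can be sketched on a toy image for a single horizontal offset, together with one derived feature (contrast); this is an illustration of the general technique, not the authors' implementation.

```python
def glcm(img, levels):
    """Count co-occurrences of grey-level pairs (a, b) one pixel to the right."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """GLCM contrast: mean of (i - j)^2 weighted by co-occurrence frequency."""
    total = sum(map(sum, m))
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [
    [0, 0, 1],
    [1, 2, 2],
    [2, 2, 0],
]
m = glcm(img, levels=3)
print(m)            # [[1, 1, 0], [0, 0, 1], [1, 0, 2]]
print(contrast(m))  # 1.0
```

    Practical GLCM features are computed over several offsets and angles; the genetic algorithm in the paper then selects among such features.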

  • MRI Images Segmentation using Improved Spatial FCM Clustering and Pillar Algorithms   Order a copy of this article
    by Boucif Beddad, Kaddour Hachemi, Sundarapandian Vaidyanathan 
    Abstract: The segmentation of brain tissue from MRI images is a vast subject of study and has many applications in medicine. The main objective of this work is to develop a new segmentation technique based on a method combining the Pillar algorithm and Spatial Fuzzy C-Means clustering. The proposed approach applies FCM clustering to image segmentation after being optimised by the Pillar algorithm in terms of initial-centre precision and computational time. The features of the segmented brain image are extracted into different classes (white matter WM, grey matter GM and cerebrospinal fluid CSF) using the integrated elements, yielding partially or fully automated tools that allow a correct extraction of the cerebral tissue. The developed algorithm has been implemented and the program is run through Simulink. All experimental results are satisfactory, which indicates that combining several segmentation algorithms helps to obtain better results and improves the classification.
    Keywords: Brain MRI; Image Processing; Pillar Algorithm; Segmentation; Spatial Fuzzy C-Mean Clustering.
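    The core of fuzzy c-means, which the paper augments with spatial information and Pillar-based initialisation, is the membership update for each pixel given the current cluster centres. A hedged 1-D sketch (toy intensities and centres, without the spatial term) is:

```python
def fcm_memberships(x, centres, m=2.0):
    """FCM membership of sample x in each cluster: u_i = 1 / sum_j (d_i/d_j)^(2/(m-1))."""
    d = [abs(x - c) for c in centres]
    if 0.0 in d:                       # sample sits exactly on a centre
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centres)))
            for i in range(len(centres))]

# Toy grey-level centres standing in for CSF / GM / WM intensities.
u = fcm_memberships(x=4.0, centres=[0.0, 5.0, 10.0])
print([round(v, 3) for v in u])  # memberships sum to 1, peak at centre 5.0
```

    A full FCM iteration alternates this update with recomputing the centres from the memberships.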

  • Searching for cell signatures in multidimensional feature spaces   Order a copy of this article
    by Romuere Silva, Flavio Araujo, Mariana Rezende, Paulo Oliveira, Fatima Medeiros, Rodrigo Veras, Daniela Ushizima 
    Abstract: Despite research on cervical cells since 1925, systems to automatically screen images from conventional Pap smear tests remain unavailable. One of the main challenges in deploying precise software tools is to validate cell signatures. In this paper, we introduce an analysis framework, CRIC-feat, that expedites the investigation of different image databases and their respective descriptors, particularly applicable to Pap images. This paper provides a three-fold contribution: (a) we first review and discuss the main feature extraction protocols for cell description and implementations suitable for cervical cells, (b) we present a new application of Gray Level Run Length (GLRLM) features to Pap images and (c) we evaluate 93 cell classification approaches and provide a guideline for obtaining the most accurate description, based on two current public databases with digital images of real cells. Finally, we show that the nucleus information is preponderant in cell classification, particularly when considering the GLRLM feature set.
    Keywords: Medical Image; Feature extraction; Cervical cells; Quantitative Microscopy; Cell Descriptors; Classification.
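    The GLRLM representation at the heart of contribution (b) counts runs of consecutive identical grey levels; a toy horizontal-direction sketch (illustrative, not the CRIC-feat implementation) is:

```python
from itertools import groupby

def glrlm(img, levels, max_run):
    """Grey-level run-length matrix: m[g][r-1] counts horizontal runs
    of grey level g with length r (lengths clipped to max_run)."""
    m = [[0] * max_run for _ in range(levels)]
    for row in img:
        for g, run in groupby(row):
            m[g][min(len(list(run)), max_run) - 1] += 1
    return m

img = [
    [0, 0, 1, 1, 1],
    [2, 2, 2, 2, 0],
]
m = glrlm(img, levels=3, max_run=4)
print(m)  # [[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

    Scalar GLRLM features (short-run emphasis, run-length non-uniformity, etc.) are then computed from this matrix.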

  • Skull Stripping of Brain MRI for Analysis of Alzheimers Disease   Order a copy of this article
    Abstract: MRI is a widely used imaging technique that helps to detect different brain abnormalities such as brain tumours, Alzheimer's disease and brain stroke. Skull stripping of the MR brain image is a preliminary step in neuroimaging. An MR brain image contains some non-brain tissues such as skull, skin, fat, muscle and neck. These non-brain tissues are considered a source of difficulty in further analysis, so it is essential to remove them before detailed analysis; this process is referred to as skull stripping. As the non-brain tissues are removed, the area of the brain is reduced. The aim of this work is to extract the main region of the brain, with adequate area for analysis, by removing the non-brain tissues. The main problem in skull stripping is differentiating brain tissues from non-brain tissues due to their intensity inhomogeneity. In this work, an automatic skull stripping method based on morphological operations is analysed. Initially, MR images are segmented using entropy-based thresholding. To find a precise threshold for brain tissues, five entropy-based thresholding methods are analysed: the maximum entropy sum method, the entropic correlation method, Rényi's entropy, Tsallis entropy and a modified Tsallis entropy method; these are compared with Otsu's thresholding method. The final skull-stripped image is obtained after performing morphological operations. In the present study, 50 T1-weighted coronal MR images are considered for the experiment. The experiment shows that the skull-stripped brain obtained using the modified Tsallis threshold gives an adequate area of interest. The accuracy obtained with this method is 80.4%. Further, the extracted brain is analysed for three features to diagnose Alzheimer's disease: perimeter, hole size and boundary distance.
    Keywords: skull stripping; Alzheimer’s disease; MRI; entropy based thresholding; mathematical morphology.
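    One representative member of the compared family, maximum-entropy (Kapur-style) thresholding, picks the threshold that maximises the summed entropies of the foreground and background histograms. A toy sketch on an 8-bin histogram (assumed values, not the paper's data or its exact criterion) is:

```python
import math

def max_entropy_threshold(hist):
    """Return the bin index t maximising H(background) + H(foreground)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

hist = [10, 40, 10, 0, 0, 8, 40, 12]   # toy bimodal histogram
print(max_entropy_threshold(hist))     # 3 (the valley between the modes)
```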

  • Epileptogenic neurophysiological feature analysis based on an improved neural mass model   Order a copy of this article
    by Zhen Ma 
    Abstract: To elucidate the neurophysiological mechanisms underlying seizures according to electroencephalogram (EEG) signals, a neurophysiologically based neural mass model that can produce EEG-like signals was adopted to simulate ictal and interictal EEG signals. A delay unit and a gain unit were added to the Wendling model to fit EEG signals in the time domain. An optimal parameter set minimising the error between observed and simulated EEG was identified using a genetic algorithm. To compare inhibition and excitation during the ictal and interictal periods, the model parameters were determined for two sets of EEG signals using the proposed method. The results show that the model with identified parameters can simulate the real EEG signal well, with a mean square error of 0.0315 to 0.2138. Fifty repetitions for every selected EEG signal showed that the dispersion of the identified parameters was small in most cases, and the identification procedure generally yielded similar values. Comparison of the model parameters of seizure and non-seizure EEG signals showed enhanced excitability, attenuated inhibition, and a more concentrated energy distribution in the frequency domain during the ictal periods. The experimental results for long-term EEG signals revealed continuous changes in the model parameters during epileptic seizures.
    Keywords: EEG; neural mass model; genetic algorithm; fitting.

  • A hybrid multimodal biometric scheme based on face and both irises integration for person authentication.   Order a copy of this article
    by Larbi Nouar, Nasreddine Taleb, Miloud Chikr El Mezouar 
    Abstract: In the biometric field, mono-modal biometrics suffers from several limitations. The use of multiple biometrics has helped to overcome those limitations and has allowed single-biometric systems to be outperformed in terms of recognition rate. In this paper, we propose a new approach that fuses the Gabor-Wigner transform and the oriented Gabor phase information for feature extraction, as well as a hybrid scheme consisting of multi-algorithm, multi-instance and multi-modal systems that integrates the face and both the left and right irises of the same subject. The SDUMLA-HMT database has been used to evaluate the proposed approach. The results show that our multi-modal biometric system achieves higher accuracy than the single-biometric approaches as well as other existing multi-biometric systems.
    Keywords: biometrics; multi-biometric systems; iris recognition; face recognition; fusion; multi-algorithm; DET curve; feature extraction.

  • Novel Slope Based Onset Detection Algorithm for Electromyographical signals   Order a copy of this article
    by Vinayak Bairagi, Archana Kanwade 
    Abstract: Electromyography (EMG) is a technique for acquiring the neuromuscular activity of a muscle. Onset and offset give information about the activation and deactivation timings of motor units. This paper proposes a novel slope-based algorithm for onset and offset detection. EMG data are collected from different muscles of different subjects using surface EMG electrodes. The data are divided into smaller windows, and the Average Instantaneous Amplitude (AIA) and slope are calculated for each window. A threshold is set to avoid baseline noise. Relative to this threshold, the maximum and minimum slopes are detected as the onset and offset, respectively. The results are more accurate than those of the single-threshold and double-threshold methods, and accuracy increases with computational complexity (arithmetic calculations) when compared with a Root Mean Square (RMS) based algorithm. The only limitation is a decrease in accuracy if the signal is acquired between two muscle contractions. The proposed slope-based onset detection algorithm can thus offer a trade-off between accuracy and computational complexity.
    Keywords: Electromyography; Onset; Offset; Slope.
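    The windowed-AIA-plus-slope idea can be sketched as follows; the window width, threshold and synthetic signal are assumed values for illustration, not the authors' parameters.

```python
def window_means(signal, width):
    """Average instantaneous amplitude (mean |x|) per non-overlapping window."""
    return [sum(abs(v) for v in signal[i:i + width]) / width
            for i in range(0, len(signal) - width + 1, width)]

def detect_onset(signal, width, thresh):
    """Mark onset at the first between-window slope exceeding the threshold."""
    means = window_means(signal, width)
    slopes = [b - a for a, b in zip(means, means[1:])]
    for k, s in enumerate(slopes):
        if s > thresh:
            return (k + 1) * width   # first sample of the active window
    return None

# Quiet baseline followed by a burst of muscle activity.
sig = [0.1, -0.1, 0.1, -0.1, 0.0, 0.1, 2.0, 2.1, 1.9, 2.2, 2.0, 2.1]
print(detect_onset(sig, width=3, thresh=0.5))  # 6
```

    Offset detection is symmetric: search for the most negative slope after the active region.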

  • Performance Evaluation of De-noised Medical Images after Removing Speckled Noise by Wavelet Transform   Order a copy of this article
    by Arun Kumar, M.A. Ansari 
    Abstract: In the healthcare community, the quality of the medical image is of prime concern for making accurate observations for diagnosis. Different types of noise, such as Gaussian noise, impulse noise and speckle noise, have been observed as the main causes of quality degradation of medical images. This degradation may further lead to inconsistent information for diagnosis, which directly affects the patient's life. The removal of noise from medical images while maintaining their quality has become a very tough task for researchers and practitioners in the field of medical image processing. This paper presents a comparative performance evaluation of various orthogonal and biorthogonal wavelet filters that are commonly used for de-noising, based on statistical parameters such as mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and correlation coefficient. The results of the present study show that the biorthogonal 3.9 wavelet filter provides a more precise image after noise removal compared with the other wavelet filters used.
    Keywords: Wavelet filters; Feature extraction; Speckle noise; Soft threshold; Performance evaluation.
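    Two of the quality measures named in the abstract, MSE and PSNR, can be sketched for an 8-bit image flattened to a list (toy pixel values, not the paper's data):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)

clean = [100, 120, 130, 140]
noisy = [102, 118, 131, 143]
print(mse(clean, noisy), round(psnr(clean, noisy), 2))  # 4.5 41.6
```

    SSIM and the correlation coefficient compare local structure rather than pointwise error, which is why the paper reports all four.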

  • Cryptanalysis for Securing DICOM Medical Contents using Multilevel Encryption   Order a copy of this article
    by Subhasri Prabhakaran, Padmapriya Arumugam 
    Abstract: In healthcare, medical images are considered important and sensitive data, since patient particulars can be identified from medical data. A secure encryption algorithm is essential for transferring these data. Cryptographic schemes are used to enhance the security and confidentiality of medical data, thereby preventing unauthorised access. Data from contributors and storage systems lead to scalability and preservation issues; therefore a standard is required to preserve the secrecy of medical images. Digital Imaging and Communications in Medicine (DICOM) is the universal standard for secure communication of medical files. In cryptographic techniques, the process by which intruders recover sensitive data after encryption without the precise key is known as cryptanalysis, so a rigorous evaluation mechanism is needed to verify the quality of the cryptographic methods employed. In this paper, the results are assessed for MRI, CT and X-ray image data. The performance of the proposed method is evaluated using cryptanalysis measures.
    Keywords: DICOM Medical content; Cryptography; Medical Image; Cryptanalysis; Security attacks.

    Abstract: P300 is an endogenous event-related potential (ERP) elicited by a rare or significant visual stimulus and is widely preferred in Brain-Computer Interfaces (BCI) to assess the cognition level of the subject. Many researchers have contributed to P300 estimation, as this signal is of very low strength within the background electroencephalogram (EEG) activity. This paper proposes a novel signal processing algorithm to detect the P300 event in a single-trial EEG acquired from midline electrode sites in an oddball paradigm to evaluate attention- and memory-related tasks of subjects. The algorithm incorporates a wavelet-combined adaptive noise canceller followed by an ensemble and moving averager. Time-domain analysis shows the localisation of the ERP around 300 ms for target stimuli attended by the subjects. Short-Time Fourier Transform (STFT) analysis shows strong theta activity associated with the memory-related task. Thus, the proposed algorithm is efficient in detecting the P300, with a higher correlation coefficient of 0.82 (average) compared to other existing methods.
    Keywords: BCI; P300-event related potential; attention; adaptive filter; ensemble averaging; moving averager; latency; SNR; STFT; time-domain; frequency-domain.
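    The ensemble-averaging stage rests on a simple fact: averaging time-locked trials attenuates zero-mean noise while the repeated ERP waveform survives. A toy sketch (synthetic trials, not EEG data) is:

```python
def ensemble_average(trials):
    """Pointwise average of equally long, time-locked trials."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

# Same underlying "ERP" waveform plus trial-wise offsets that cancel on average.
base = [0.0, 1.0, 3.0, 1.0, 0.0]
trials = [[b + noise for b in base] for noise in (2, -2, 1, -1)]
avg = ensemble_average(trials)
print(avg)  # [0.0, 1.0, 3.0, 1.0, 0.0]
```

    For uncorrelated noise, averaging N trials improves SNR by a factor of roughly sqrt(N), which is why single-trial detection (the paper's goal) is the harder problem.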

  • Progressive Fusion of Feature Sets for Optimal Classification of MI Signal   Order a copy of this article
    by M.K.M. Rahman, Md. Omer Sadek Bhuiyan, Md.A.Mannan Joadder 
    Abstract: The Brain-Computer Interface (BCI) is one of the most popular research topics among researchers in neuroprosthetics. The ultimate goal of this research is to develop a communication channel between human brains and external devices. Feature extraction is one of the most crucial steps in this research. Combining different features may improve classification performance, but in most cases straightforward combinations of different features lead to very poor results. It is therefore necessary to combine the orthogonal features and omit the redundant ones, which is a complex and time-consuming process. We have developed two new algorithms to find optimum sets of features for fusion, to obtain the best possible classification accuracy for both subject-specific (SS) and subject-independent (SI) cases. Experimental results indicate that our proposed algorithms, in general, improve the classification results irrespective of the different methodological setups of BCI processes, such as the number of input channels and the spatial filter.
    Keywords: brain-computer interface (BCI); feature fusion; feature extraction; optimum feature set.

  • Automated Blink Artifact Removal from EEG using Variational Mode Decomposition and Singular Spectrum Analysis   Order a copy of this article
    by Poonam Sheoran, J.S. Saini 
    Abstract: Purpose: Blink artifacts are the major source of noise while acquiring electroencephalogram (EEG) data for analysis. Designing an efficient method for blink artifact removal is essential for conducting any sort of analysis using EEG. Method: In this paper, a novel automated eye-blink artifact removal method based on Variational Mode Decomposition (VMD) and Singular Spectrum Analysis (SSA) is presented. The noisy EEG signals are first separated into uncorrelated components using Canonical Correlation Analysis (CCA), and then Variational Mode Decomposition is performed for multiresolution analysis. The decomposed components (modes) are assessed through their singular values to find the distribution of noise using singular spectrum analysis. Phase Space Reconstruction (PSR) is also used to differentiate the clean modes from the noisy modes. Result: The applicability of the proposed approach is examined through statistical measures such as signal-to-noise ratio (SNR), correlation coefficient and root mean square error (RMSE). The results indicate the efficacy of the approach in artifact removal without manual intervention, as compared to the state-of-the-art techniques. Conclusion: The proposed method automatically identifies and removes the noisy fraction of the signal, yielding the requisite neural information without any manual intervention.
    Keywords: Artifact removal; Variational Mode decomposition (VMD); Canonical Correlation Analysis (CCA); Phase Space Reconstruction (PSR); Singular Spectrum Analysis (SSA).

  • CT Image Reconstruction from Sparse Projections Using Adaptive Total Generalized Variation with Soft Thresholding   Order a copy of this article
    by Vibha Tiwari, Prashant Bansod, Abhay Kumar 
    Abstract: CT imaging plays a vital role in the non-invasive diagnosis and surgical planning of critical diseases. However, it is essential to reduce the radiation dose during CT imaging, as excessive exposure may harm human tissues. To reduce the radiation dose, the CT image is acquired using a limited number of X-ray projections, and an adaptive Total Generalized Variation (TGV) minimisation method has then been proposed to reconstruct the image. The simulation results have been compared with the existing TV, TGV and TGV-with-hard-thresholding methods. Typically, two types of noise, Gaussian and Poisson distributed, are introduced during the CT imaging process, so both have been added to the measured samples. It is found that, after applying soft thresholding and the FISTA algorithm with the proposed method, better results are obtained in a noisy imaging environment. The reconstructed CT image quality has been compared using parameters such as FSSIM, PSNR, NRMSE and MAE.
    Keywords: CT image reconstruction; limited angle reconstruction; total generalized variation; compressive sensing; telemedicine.
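    The soft-thresholding (shrinkage) operator used alongside FISTA has a simple closed form, soft(x, t) = sign(x) * max(|x| - t, 0); a minimal sketch with toy coefficients is:

```python
def soft_threshold(x, t):
    """Shrink x toward zero by t; values within [-t, t] become exactly zero."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [3.0, -0.5, 1.2, -2.5, 0.1]
shrunk = [round(soft_threshold(c, 1.0), 4) for c in coeffs]
print(shrunk)  # [2.0, 0.0, 0.2, -1.5, 0.0]
```

    Unlike hard thresholding, which keeps surviving coefficients unchanged, soft thresholding also shrinks them by t, which is what makes it the proximal operator of the L1 penalty.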

  • Performance Analysis of Data Mining Classification Algorithms for Early Prediction of Diabetes Mellitus II   Order a copy of this article
    by Delshi Ramamoorthy 
    Abstract: Diabetes mellitus (DM) is generally referred to as diabetes. It is a group of metabolic disorders in which there are high blood sugar levels over a prolonged period. Data mining is used for predicting various diseases, and among its many methods, classification is one of the main techniques. Classification techniques are used to classify hidden information in all areas, including the medical diagnostic field. In this research work, we compare machine learning classifiers (naive Bayes, J48 decision tree, OneR, AdaBoost, random forest, random tree and support vector machines) to classify patients as diabetic or non-diabetic. These algorithms have been tested with data samples downloaded from the UCI repository. The performance of the algorithms has been considered in both cases, i.e., data samples with noisy data and data samples without noisy data. Results are evaluated in terms of accuracy, sensitivity and specificity. Experimental results suggest that the support vector machine (SVM) classifier is the best classifier for predicting diabetes mellitus type 2.
    Keywords: Diabetes Mellitus; Classification; SVM; AdaBoost; NB; J48;Random Tree; Random Forest; OneR; Data Mining.

  • An Improved Speckle Noise Reduction Scheme Using Switching and Flagging of Noisy Data for preprocessing of Ultrasonograms in Detection of Down Syndrome during First and Second Trimesters   Order a copy of this article
    by Jeba Shiney, Amar Pratap Singh, Priestly Shan 
    Abstract: Down Syndrome (DS) is reported to be one of the most common chromosomal abnormalities, affecting newborns all over the world. Diagnosis of the syndrome at an earlier stage during pregnancy provides more options for the affected parents to make decisions on the interventional therapies required for the developing child. The techniques currently used in the diagnosis of DS, such as amniocentesis and Chorionic Villus Sampling (CVS), are invasive in nature and are associated with some percentage of risk. This paper aims at developing a Clinical Decision Support System (CDSS) for the detection of DS from ultrasound (US) fetal images. As a preliminary step towards this, the US images have to be denoised to remove speckle noise. A Modified Mean Median (MMM) filter based on the principle of progressive switching theory has been proposed. Experimental results show that the proposed filter provides better results in terms of Peak Signal-to-Noise Ratio (PSNR), Image Enhancement Factor (IEF) and other measures.
    Keywords: Ultrasound; Down Syndrome; Modified Mean Median; Amniocentesis; Chorionic Villus Sampling; Speckle noise; filter; diagnosis; Clinical Decision Support System; Peak Signal to Noise Ratio.
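    The switching idea behind such filters is to flag a pixel as noisy only when it deviates strongly from its local median, and to replace only flagged pixels so clean detail passes through untouched. A hedged toy sketch (a plain 3x3 switching median, not the authors' MMM filter) is:

```python
def switching_median(img, flag_thresh):
    """Replace a pixel with its 3x3 median only if it is flagged as noisy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            med = window[4]
            if abs(img[i][j] - med) > flag_thresh:   # flag as noisy
                out[i][j] = med
    return out

img = [
    [10, 10, 10, 10],
    [10, 200, 11, 10],   # 200 is an impulse-like outlier; 11 is detail
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
out = switching_median(img, flag_thresh=50)
print(out[1][1], out[1][2])  # 10 11
```

    The outlier is replaced while the small genuine variation (11) survives, which is the behaviour a plain median filter cannot guarantee.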

    by Chitradevi A, Nirmal Singh N, Jayapriya K 
    Abstract: Pulmonary nodule identification, which paves the path to cancer diagnosis, is a challenging task today. The proposed work, the Volume-Based Inter-Difference XOR Pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern, formed by XOR-pattern calculation of the inter-frame grey-value differences between the centre frame and its neighbourhood frames in a rotationally clockwise direction, for every segmented nodule. Different classifiers such as Random Forest (RF), Decision Tree (DT) and Adaboost are used, with 10 trials of 5-fold cross-validation for classification. The experimental analysis on the public Lung Image Database Consortium - Image Database Resource Initiative (LIDC-IDRI) database shows that the proposed scheme gives better accuracy compared with existing approaches. Further, the proposed scheme is enhanced by combining shape information using the Histogram of Oriented Gradients (HOG), which improves the classification accuracy.
    Keywords: pulmonary nodule; classification; preprocess; segmentation; feature extraction; LIDC-IDRI; medical image segmentation; accuracy.

  • Accurate detection of Dicrotic notch from PPG signal for telemonitoring applications   Order a copy of this article
    by Abhishek Chakraborty, Deboleena Sadhukhan, Madhuchhanda Mitra 
    Abstract: Recent technological advancements have inspired the modern population to adopt portable, simple personal telemonitoring systems that use easy-to-acquire biosignals such as the Photoplethysmogram (PPG) for regular monitoring of vital signs. Consequently, computerised analysis of the PPG signal through accurate detection of clinically significant PPG fiducial points, such as the dicrotic notch, has become a key research area for early detection of physiological anomalies. In this research, a simple and robust algorithm is proposed for accurate detection of the dicrotic notch from the PPG signal, employing the first and second differences of the denoised signal, slope reversal and an empirical formula-based approach. Features related to the dicrotic notch are then extracted from the baseline-corrected PPG signal, and the performance of the algorithm is evaluated over different standard PPG databases as well as over originally acquired signals. The algorithm achieves high efficiency in terms of sensitivity, positive predictivity and detection accuracy, and low error values in the detected features.
    Keywords: Photoplethysmogram; amplitude threshold; slope reversal; dicrotic notch detection.
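    The slope-reversal idea can be sketched as follows: after the systolic peak, take the dicrotic notch as the first sample where the first difference turns from negative to positive. This is an illustrative simplification under assumed rules, not the authors' full algorithm (which also uses the second difference and an empirical formula), and the pulse below is synthetic.

```python
def dicrotic_notch(ppg):
    """Index of the first slope reversal after the systolic peak, or None."""
    peak = max(range(len(ppg)), key=ppg.__getitem__)   # systolic peak
    diff = [b - a for a, b in zip(ppg, ppg[1:])]       # first difference
    for k in range(peak, len(diff) - 1):
        if diff[k] < 0 and diff[k + 1] > 0:            # slope reversal
            return k + 1
    return None

# One toy PPG-like pulse: rise, systolic peak, descent, notch, small rebound.
ppg = [0, 3, 7, 10, 8, 6, 4, 5, 6, 4, 2, 1]
print(dicrotic_notch(ppg))  # 6
```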

    by Thakir AlMomani, Suleiman Bani Hani, Samer Awad, Mohammad Al-Abed, Hesham AlMomani, Mohammad Ababneh 
    Abstract: Platelet aggregation, activation, and adhesion on blood vessels and implants result in the formation of mural thrombi. Erythrocytes (red blood cells, RBCs) have been shown to play a significant role in the aggregation process of platelets toward vessel walls. A level-set sharp-interface immersed boundary method is employed in the computations, in which RBC and platelet boundaries are advected on a two-dimensional Cartesian co-ordinate grid system. RBCs and platelets are treated as rigid non-deformable particles, where the RBC is assumed to have an elliptical shape while the platelet is assumed to have a discoid shape. Both steady and pulsatile flow regimes were employed, with Reynolds number values equivalent to those found in micro-blood vessels. Forces and torques between colliding blood cells are modelled using an extension of the soft-sphere model for elliptical particles. RBCs and platelets are transported under the forces and torques induced by fluid flow and cell collisions, based on solving the momentum equation for each blood cell. The computational results indicated that platelets tend to show more interaction with RBCs and more migration toward the vessel wall in steady flow than in pulsatile flow. Velocity contours did not show major differences in the peak and minimum values. The use of physiological flow conditions showed less interaction between RBCs and platelets than found under steady flow conditions. Moreover, platelets tend to concentrate in the core region in the case of pulsatile flow.
    Keywords: Erythrocyte; platelet; interaction; pulsatile flow; migration; core region; wall region.

  • A Review on Motor Neuron Disabilities and Treatments   Order a copy of this article
    by Ankita Tiwari, O.P. Singh, Dinesh Bhatia 
    Abstract: Neuromotor or motor neuron disabilities (MND) are a set of medical conditions that are incurable and come with a number of associated problems. The disease affects the individual's motor neuron functioning, either in the whole body or in a specific part of the body. The disability could be the result of improper communication between motor neurons and muscle fibres. In this review paper, we study and enumerate different neuromotor disabilities and the related treatments available to date. Although several interventions have been proposed for the rehabilitation of such patients, accurate and reliable methods still need to be researched for improvement of the patient's condition.
    Keywords: Motor Neuron Disease; Rehabilitation; muscle fibre.

  • Comparison between analyzing wavelets in Continuous Wavelet Transform Based on the Fast Fourier Transform: Application to estimate pulmonary arterial hypertension by heart sound.   Order a copy of this article
    by Lotfi HAMZA CHERIF 
    Abstract: Wavelet analysis makes it possible to use long windows when we need more accurate low-frequency information, and shorter ones when we need high-frequency information. Since the conventional CWT requires considerable power and computation time, including in application to non-stationary signal analysis, we used CWT analysis based on the Fast Fourier Transform (FFT). The CWT was obtained using the properties of circular convolution to increase the speed of computation. This method provides results for long recordings of PCG signals in a short time. The use of the CWT gives a better localisation of the frequency components in PCG signals than the commonly used short-time Fourier transform (STFT). Such an analysis is possible by means of a sliding window, which corresponds to the analysis time scale. This paper presents the analysis of phonocardiogram (PCG) signals using the continuous wavelet transform based on the fast Fourier transform (CWTFT). The results of the CWT depend on the mother wavelet function. This analysis is based on the application of several analysing wavelets (the Morlet wavelet, derivatives of Gaussian wavelets of the same order, and the Paul and Bump wavelets), and each time the mean difference (in absolute value) between the original signal and the synthesis signal obtained by the Fast Fourier Transform (FFT) is measured. In this study, we indicate the possibility of parametric analysis of PCG signals using the continuous wavelet transform, which is a completely new solution. The performance of the CWTFT in PCG signal analysis is evaluated and discussed in this article. The results obtained show the clinical utility of our extraction methods for the recognition of heart sounds (the PCG signal) and the estimation of pulmonary arterial hypertension. The results also show that the severity of a mitral lesion involves severe pulmonary arterial hypertension.
    Keywords: Phonocardiogram signal; Continuous wavelet transform; analyzing wavelets.
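    The FFT route to the CWT described in the abstract above rests on one identity: pointwise multiplication of spectra equals circular convolution in time. A minimal NumPy sketch of that idea (the Morlet sampling and helper names are illustrative assumptions, not code from the paper):

```python
import numpy as np

def morlet(n, scale, w0=6.0):
    """Sampled real part of a Morlet wavelet of length n at a given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(-0.5 * t**2) * np.cos(w0 * t)

def cwt_fft(signal, scales):
    """One CWT row per scale via FFT-based circular convolution."""
    n = len(signal)
    sig_hat = np.fft.fft(signal)
    rows = []
    for s in scales:
        w = morlet(n, s)
        # multiplication in the frequency domain == circular convolution in time
        rows.append(np.fft.ifft(sig_hat * np.fft.fft(w)).real)
    return np.array(rows)
```

Because the per-scale work is a pair of FFTs instead of an O(n²) sliding inner product, long PCG recordings can be processed quickly, as the abstract claims.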

    by Tamilarasan Ananth Kumar, G. Rajakumar, T.S. Arun Samuel 
    Abstract: In this paper, two Neighborhood Structural Similarity (NSS) features combined with the Gray Level Co-occurrence Matrix (GLCM) are proposed for feature extraction from mammographic masses, and a Random Forest (RF) classifier is used to classify the extracted masses as benign or malignant. NSS describes the equivalence among proximate regions of masses by combining two new features, NSS-I and NSS-II. Benign masses are analogous and have systematic patterns, whereas malignant masses contain indiscriminate patterns because of their miscellaneous attributes. For benign-malignant mass classification, a number of texture features are used, namely correlation, contrast, energy and homogeneity; these quantify the relationship between neighbouring pixels but are unable to capture structural similarity within proximate regions. The performance of the features is evaluated using images from the mini-MIAS and DDSM datasets, with the Random Forest classifier performing the recognition. This yields proper classification of masses with high accuracy.
    Keywords: Neighbourhood Structural Similarity; contrast; energy; homogeneity; correlation; Gray level Co-occurrence Matrix.
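    Since the abstract above leans on four GLCM texture features, a compact NumPy sketch may help fix their definitions; the offset, level count and quantised test image are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised gray-level co-occurrence matrix for a single pixel offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(P):
    """Contrast, energy, homogeneity and correlation from a GLCM."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
    }
```

All four measures are functions of the co-occurrence matrix alone, which is exactly why they capture neighbouring-pixel statistics but not the region-level structure that NSS targets.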

  • Principal and Independent Component Based Analysis to Enhance Adaptive Noise Canceller for Electrocardiogram Signals   Order a copy of this article
    by Mangesh Ramaji Kose, Mitul Kumar Ahirwal, Rekh Ram Janghel 
    Abstract: In this paper, the proposed methodology suggests a way to fulfil the need for a reference signal in adaptive filtering (AF) of electrocardiogram (ECG) signals. ECG signals are the most important form of representation and observation of different heart conditions. During the recording process, ECG signals get contaminated with different types of noise such as baseline wander (BW), electrode motion artifacts (MA) and muscle noise, also known as electromyogram (EMG) noise. Noise contamination distorts the normal structure of the ECG signal. Adaptive filters work well for ECG noise cancellation, but they require a reference signal or an estimate of the noise signal. To solve this problem, the principal and independent components (PCA and ICA) of the noisy signal have been analyzed to extract the noise signal, which is then used in adaptive noise cancellation of ECG signals. Fidelity parameters such as Mean Square Error (MSE), Signal to Noise Ratio (SNR) and Maximum Error (ME) have been observed to measure the quality of the filtered signals.
    Keywords: PCA; ICA; Adaptive Filters; ECG; Artifacts.
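    Since the abstract above pairs a PCA/ICA-derived reference with an adaptive noise canceller, a small NumPy sketch of the standard LMS update may clarify what the reference signal is for. The filter order, step size and synthetic signals below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise canceller.
    primary   = signal + noise (e.g. contaminated ECG)
    reference = correlated noise estimate (e.g. a PCA/ICA component)
    Returns the error signal, i.e. the cleaned-signal estimate."""
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # tap-delay line of the reference
        y = w @ x                          # current noise estimate
        e = primary[n] - y                 # cleaned sample
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out
```

The filter converges so that its output mimics the noise component correlated with the reference, leaving the ECG in the error signal — which is why a good reference (here supplied by PCA/ICA) is the whole game.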

  • Accuracy comparison of the Data mining Classification techniques for the Diabetic Disease Prediction   Order a copy of this article
    by Rakesh Garg 
    Abstract: In the present scenario, rapid adoption of data mining (DM) techniques can be observed for predicting and categorizing symptoms in large medical datasets. Classification is one major DM technique widely used for extracting unnoticed information from diagnostic data. In a populous country like India, diabetes is characterized as a dangerous disease that affects a large share of the population. The present research emphasizes the accuracy comparison of various classifiers, namely J48, Random Forest, Sequential Minimal Optimization (SMO), Stochastic Gradient Descent (SGD), Naive Bayes, Logistic Regression, Random Tree, Decision Stump, Simple Logistic, Hoeffding Tree, Adaboost and Bagging, when applied to diabetes data.
    Keywords: Data Mining; Diabetes; Classification; Weka.
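    The comparison described above can be reproduced in outline with scikit-learn (assumed here as a stand-in toolchain; the study itself uses Weka, and the synthetic data below merely stands in for a real diabetes dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              BaggingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a diabetes dataset (8 features, binary outcome).
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

classifiers = {
    "J48-like decision tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SMO-like SVM": SVC(),
    "SGD": SGDClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
}
# 5-fold cross-validated accuracy per classifier, the comparison the paper runs.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
```

Ranking the entries of `scores` gives exactly the kind of accuracy table the abstract promises.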

  • Breast Cancer Image Enhancement with the Aid of Optimum Wavelet-Based Image Enhancement using Social Spider Optimization   Order a copy of this article
    by T. Venkata Satya Vivek, C. Raju, D. Girish Kumar 
    Abstract: This paper aims to enhance features and obtain higher-quality breast cancer images using Optimum Wavelet-Based Image Enhancement (OWBIE) with Social Spider Optimization (SSO). Many biomedical images are of low quality, making it difficult to detect and extract information. The converted gray images are used in the filtering approach; here OWBIE with SSO, histogram equalization, an anti-forensic contrast enhancement process and curvelet-based contrast enhancement are used. The proposed technique removes noise while keeping regions moderately sharp in the input images. In the results, Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) evaluation metric graphs are analyzed. The proposed process performed better in comparison with other enhancement techniques.
    Keywords: Optimum Wavelet-Based Image Enhancement (OWBIE); Social Spider Optimization (SSO); Breast Cancer.

    by M.Lavanya  
    Abstract: Lung nodule segmentation is an important component of automated disease screening systems for cancer identification. The morphological variations of lung nodules correspond to the likelihood of cancer. Incorrect detection of lung nodules due to misclassification leads to false results and incorrect diagnosis strategies; such misclassification also misdirects pharmaceutical experts towards the wrong preparation of drugs. Different detection methods are available, but there is always room for improvement in various parameters for better results. Therefore, in this work image enhancement is done by histogram equalization and noise removal is carried out by an anisotropic diffusion filter. Nodule segmentation is carried out by the Firefly Algorithm Fuzzy C-Means (FA-FCM) process. Finally, after feature extraction, classification of the lung cancer stage is carried out using a Support Vector Machine (SVM) classifier. The nodule is thus accurately detected, taking into account the morphological changes, which leads to proper medicine preparation and accurate diagnosis of lung nodules.
    Keywords: Lung Nodule; histogram equalization; anisotropic filter; segmentation; FAFCM; SVM.
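    Histogram equalization, the enhancement step the pipeline above starts with, can be sketched in a few lines of NumPy (a generic global implementation, not the paper's exact code):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalisation of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalised cumulative histogram,
    # stretching occupied levels across the full dynamic range.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]
```

The look-up-table form means the cost is one pass for the histogram plus one vectorised remap, which is why equalization is a cheap first stage before the heavier diffusion filtering and FA-FCM segmentation.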

  • Performance Analysis of Preprocessing Filters Using Computed Tomography Images for Liver Lesion Diagnosis   Order a copy of this article
    Abstract: In medical research, segmentation can be used to separate different tissues from each other by extracting and classifying features. This paper discusses the need for, concept of and advantages of preprocessing techniques normally used to enhance scanned images before segmentation. In addition, three preprocessing filters used to remove artifacts from the scanned images are implemented, and the problems that arose during the simulation of liver preprocessing methods are analysed. Of the three methods, the curvature anisotropic diffusion filter performed better than the other filters, and the obtained results are satisfactory. Finally, a detailed analysis of parameter selection for curvature anisotropic diffusion filtering of computed tomography images is performed.
    Keywords: Image Processing; Liver Segmentation; MRI; Computed Tomography; Preprocessing; Curvature Anisotropic Diffusion Filter; Noise Removal.

  • Detection of Glaucoma Disease in fundus images based on Morphological Operation and Finite Element Method   Order a copy of this article
    by S.J. Grace Shoba, A. Brintha Therese 
    Abstract: The retinal vasculature has been recognized as a fundamental element in glaucoma as well as diabetic retinopathy. Segmentation of retinal blood vessels is of considerable clinical significance for diagnosing glaucoma at an early stage. With the intention of glaucoma detection, retinal images are initially acquired using advanced image capture devices. The present investigation develops a detailed computational model analysis of blood flow in physiologically sensible retinal arterial and venous networks. The geometrical views of both the retinal artery and vein have been extracted from the blood vessels of the retinal fundus image using morphological operations. The segmented arteries and veins are modelled using the impedance-modelling method, and Finite Element Analysis is used to characterize the arteries and veins and determine the biomechanical parameters of the blood, incorporating structural analysis and computational fluid analysis. The estimated parameters are classified based on the blood flow attributes using a Support Vector Machine (SVM). The proposed technique accomplishes a maximum accuracy of 94.86% for proficient prediction of glaucomatous disease compared with existing strategies.
    Keywords: Glaucoma; Retinal images; blood vessel segmentation; Morphological operation; Impedance-modeling; Finite Element Analysis and Support Vector Machine.

  • Need for customization in preventing pressure ulcers for wheelchair patients a load distribution approach   Order a copy of this article
    by Sivasankar Arumugam, Rajesh Ranganathan, T. Ravi 
    Abstract: Pressure ulcer (PU) is a healthcare problem that develops due to factors such as pressure, shear and friction. The causes, stages and treatment methods, along with mechanical factors and prevention methods for PUs, are identified and analyzed. A survey revealed that, as people in wheelchairs have different weights and sitting postures, the pressure distribution varies from patient to patient; therefore, using one type of product for all is inappropriate. In this work, 22 healthy subjects were selected for analyzing the pressure distribution. An EMED sensor platform is used for measuring the interface pressure distribution, surface area and peak pressure distribution. From the results it was found that the pressure distribution points for each individual mostly vary drastically. Hence, individual customization is needed for PU reduction, reducing shear and frictional forces. Here, surface customization is identified as a novel approach for wheelchair patients.
    Keywords: Pressure Ulcer; Customization; Cushions; Wheelchair patients; Load distribution; Surface customization.

  • An Optimized Clustering Algorithm With Dual Tree Ds For Lossless Image Compression   Order a copy of this article
    by Ruhiat Sultana, Nisar Ahmed, Syed Abdul Sattar 
    Abstract: The growing utilization of the web and other electronic applications has drawn much attention to image compression systems, which save storage space and reduce transmission time by shrinking the size of an image through discarding repetitive data sequences. Most techniques are based on lossy compression, where the compression ratio would be low. The proposed system, based on a lossless compression technique, achieves the best compression ratio, good image quality and a good PSNR value by extracting the best features of the image to be compressed and encoded, incorporating the firefly algorithm with the k-means algorithm to avoid the local optima problem. To achieve eminent compression of the best derived features, quad-tree decomposition and the Huffman encoding technique are combined, which provides a high compression ratio by fetching correct probabilities of occurrence of pixel intensities. The proposed technique is implemented in MATLAB, and the experimental results demonstrate its effectiveness in terms of high compression ratio, low noise ratio and reduced compression and decompression time when compared with existing techniques.
    Keywords: Medical imaging; Information systems; Signal processing; hybrid firefly Clustering algorithm; Utilization of quad-tree.
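    The Huffman stage described above assigns shorter codes to more probable pixel intensities; a stdlib-only Python sketch of the idea (illustrative, unrelated to the paper's MATLAB implementation):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code by repeatedly merging the two rarest subtrees."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tick = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tick, merged])
        tick += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    inv = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out
```

Because the code is built from the actual symbol frequencies, frequent intensities get the short codewords — the "correct probabilities of occurrence" the abstract refers to.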

  • A Comparative study of Feature Projection and Feature Selection approaches for Parkinson's disease detection and classification using T1-weighted MRI scans   Order a copy of this article
    by Gunjan Pahuja, T.N. Nagabhushan, Bhanu Prasad 
    Abstract: In this research, a multivariate analysis of feature projection and feature subset selection methods has been performed with the objective of identifying a subset of features that would help in the detection and classification of people affected by Parkinson's disease. For this study, T1-weighted MRI data have been collected from the Parkinson's Progression Markers Initiative (PPMI) organization. The accuracy of the Support Vector Machine classifier has been checked with different numbers of selected features during the exploratory phase. The obtained results show a clear potential for using these methods in distinguishing Parkinson's patients from normal persons. Further, to identify the brain region responsible for this disease, the selected features are mapped back to the standard MNI brain template. An ANOVA test has been employed to show the statistical significance of the obtained results.
    Keywords: Parkinson’s disease (PD); Voxel-based morphometry (VBM); Genetic Algorithm (GA); Eigenvector Centrality based discriminant analysis (ECDA); Support Vector Machine (SVM); Analysis of Variance (ANOVA).
    DOI: 10.1504/IJBET.2018.10019041
    by Nirmaladevi Periyasamy 
    Abstract: In this paper, a new wavelet shrinkage algorithm in the undecimated wavelet domain is proposed, using an interscale adaptive threshold and an exponential thresholding function for the removal of speckle noise in ultrasound images. An improved spatially adaptive threshold is discussed, exploiting the inter- and intra-scale dependency of the wavelet coefficients. A new exponential thresholding function is proposed to overcome the smoothing effect produced by the hard and soft thresholding functions. The reconstructed image exhibits improved noise removal with preservation of fine details. Performance of the filter is measured using various metrics such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity Index Measure (SSIM), Equivalent Number of Looks (ENL) and Edge Preservation Index (EPI). A comparison of the results reveals that the proposed filter shows significant improvement both in quantitative measures and in the visual quality of the images.
    Keywords: Undecimated Wavelet Transform; Inter and Intra scale dependency; Speckle; Adaptive threshold; Ultrasound images and Edge Preservation.
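    To make the thresholding-function comparison above concrete, here is a NumPy sketch of the classic soft rule alongside an exponential shrinkage rule of the general kind the abstract describes. The exact functional form used in the paper is not given in the abstract, so the `exp_threshold` form below is an illustrative assumption:

```python
import numpy as np

def soft_threshold(w, t):
    """Classic soft thresholding: shrinks every coefficient by t (biased)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def exp_threshold(w, t, k=2.0):
    """An exponential shrinkage rule (illustrative, not the paper's exact
    function): close to the identity for |w| >> t, decaying smoothly to zero
    below t, so large coefficients avoid the soft rule's constant bias."""
    return w * np.exp(-k * (t / np.maximum(np.abs(w), 1e-12)) ** 2)
```

The smooth transition is what avoids both the discontinuity of hard thresholding and the over-smoothing of soft thresholding that the abstract mentions.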

  • e-Health Relationships Diabetes; 50 Weeks Evaluation   Order a copy of this article
    by Luuk Simons, Hanno Pijl, John Verhoef, Hildo Lamb, Ben Van Ommen, Bas Gerritsen, Maurice Bizino, Marieke Snel, Ralph Feenstra, Catholijn Jonker 
    Abstract: Hybrid eHealth support was given to 11 insulin-dependent Type 2 Diabetes Mellitus (DM2) patients, combining electronic support with a multi-disciplinary health support team. The challenges were low ICT literacy and low health literacy. After 50 weeks, the attractiveness and feasibility of the intervention were perceived as high: recommendation 9.5 out of 10 and satisfaction 9.6 out of 10. TAM (Technology Acceptance Model) surveys showed high usefulness and feasibility. Acceptance and health behaviours were reinforced by the prolonged health results: aerobic and strength capacity levels were improved at 50 weeks, as was health-related quality of life (with biometric benefits and medication reductions reported elsewhere). Regarding eHealth theory, we conclude that iterative skill-growth cycles are beneficial for long-term adoption and e-relationships. Further, the design analysis shows opportunities for additional affective and social support, on top of the strong benefits already apparent from the direct progress feedback loops used within the health coach processes.
    Keywords: Type 2 Diabetes (DM2); eHealth; Lifestyle; Monitoring; Coaching; Blended Care; Service Design.

  • Application of Variational Mode Decomposition in Automated Migraine Disease Diagnosis   Order a copy of this article
    by K. Jindal, R. Upadhyay, M. Vijay, A. Dube, A. Sharma, K. Gupta, J. Gupta 
    Abstract: The clinical diagnosis of migraine, if supplemented by the modality of electroencephalograph signals, may help in delineating the neural correlates, management and prognosis of the disease's progress. Recent advances in biomedical signal processing have led to the development of various feature extraction and classification techniques for multiresolution analysis of electroencephalograph signals and diagnosis of diseased conditions. In the present work, a methodology using Variational Mode Decomposition is proposed for migraine disease diagnosis from electroencephalogram signals. In the proposed methodology, Variational Mode Decomposition is employed to decompose electroencephalogram signals into a number of modes. Sample Entropy and Higuchi's Fractal Dimension are estimated from the decomposed modes as features, and three soft computing techniques, viz. Neural Network, Support Vector Machine and Random Forest, are used for classifying the extracted features. Classification results obtained from the soft computing techniques indicate that the proposed methodology effectively identifies migraine patients from electroencephalogram data.
    Keywords: Electroencephalogram; Variational Mode Decomposition (VMD); Migraine; Fractal Dimension; Entropy.

  • A method for the classification of mammograms using a statistical based feature extraction   Order a copy of this article
    by Nebi Gedik 
    Abstract: This paper presents a classification system for mammograms using the wave atom transform and a feature selection process based on t-test statistics. Mammogram images are transformed into wave atom coefficients using the wave atom transform. Next, a matrix is constructed from the coefficients and used as the feature matrix to classify mammograms. To achieve the maximum classification accuracy rate, t-test statistics with dynamic thresholding are additionally applied. A support vector machine is employed as the classifier in the classification phase. According to the experimental results, the proposed method makes a successful contribution to the classification of mammographic images.
    Keywords: Mammogram; Classification; Feature extraction; Feature selection; Thresholding; t-test statistics; Wave atom transform; SVM; Normal-abnormal classification; Benign-malignant classification.
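    The t-test selection step above can be sketched with SciPy. The fixed significance threshold and the synthetic coefficient matrix below are illustrative assumptions; the paper uses a dynamic threshold rather than a fixed alpha:

```python
import numpy as np
from scipy.stats import ttest_ind

def select_by_ttest(X, y, alpha=0.05):
    """Keep the columns whose means differ significantly between classes.
    X: (samples, features) coefficient matrix; y: 0/1 class labels."""
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(p < alpha)[0]
```

Features that separate the two classes get small p-values and survive; the surviving columns then form the reduced feature matrix fed to the SVM.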

  • A Methodological Review on Computer Aided Diagnosis of Glaucoma in Fundus Images   Order a copy of this article
    by Sumaiya Pathan, Preetham Kumar, Radhika M. Pai 
    Abstract: Advances in computerized image analysis and retinal imaging modalities have significantly contributed to the growth of image-based diagnosis. Glaucoma is an ocular disorder which results in irreversible vision loss. The progression of glaucoma is quiet and, in the early stages, does not show any symptoms. Vision loss due to glaucoma has been increasing significantly compared to other retinal disorders. The reliability of glaucoma diagnosis is limited by the experience and domain knowledge of the ophthalmologist. A computer-based diagnostic system can be developed using image processing algorithms for screening large populations at less cost, reducing human errors and thus making the diagnosis more objective. A review of the state-of-the-art methodologies employed for developing computer-aided diagnosis of glaucoma using retinal fundus images is presented in this paper, along with future trends.
    Keywords: Classification; Cup-to-Disk Ratio; Glaucoma; Optic Disk; Optic Cup; Segmentation.
    DOI: 10.1504/IJBET.2018.10019521
  • Automated Melanoma Skin Cancer Detection from Digital Images   Order a copy of this article
    by Shalu , RAJNEESH RANI, Aman Kamboj 
    Abstract: In the early stages, diagnosis of melanoma is important for treating the illness and saving lives. This paper focuses on the development of a system for automatic detection of melanoma skin cancer. The objective of this study is to identify the importance of different colour spaces in melanoma skin cancer detection. Another objective is to compare colour features and texture features to find out which type of features has more discriminative power to correctly identify melanoma. The whole analysis is done using the MEDNODE dataset of digital images, which contains a total of 170 images (100 nevi and 70 melanoma). The results show that the combination of features extracted from the HSV (Hue, Saturation and Value) and YCbCr (Y is the luma component; Cb and Cr are two chroma components) colour spaces gives better performance than features extracted from other colour spaces. Moreover, the system performs better with colour features than with texture features. Using features extracted from the HSV and YCbCr colour spaces, the system gives an accuracy of 84.11%, which is higher than earlier approaches on this dataset.
    Keywords: Malignant Melanoma; Skin Cancer Diagnosis; Color and Texture Features.

  • Segmentation of Cartilage in Knee Magnetic Resonance Images using Gabor and Matched Filter and Classification of Osteoarthritis using Adaptive Neuro-Fuzzy Inference System   Order a copy of this article
    by Jayashree Palanisamy, Ragupathy Uthandipalayam Subramaniyam 
    Abstract: Osteoarthritis (OA), also known as degenerative arthritis, is a group of mechanical abnormalities occurring in joints such as the knee, finger and hip regions. Knee OA is believed to be highly prevalent today because of aging and obesity. The knee region contains complex objects whose appearance varies significantly from one image to another. Measuring or detecting the presence of particular structures in such images can be a daunting task, since there will be variation in each image. OA in a knee image can be identified by segmenting the bone and cartilage; finding the region of interest between bone and fat tissue is difficult. Manual and some semi-automatic segmentation methods are time consuming and complex, which the proposed methodology overcomes. A method is described here for classification of OA in knee Magnetic Resonance Images (MRI), which deals with segmentation of the cartilage region from the femur and tibia bones. The images are pre-processed using a contrast enhancement technique, Contrast Limited Adaptive Histogram Equalization (CLAHE), and further processed using Matched and Gabor filters for clear recognition of cartilage from the background. The noise present is further eliminated using a median filter. Features are extracted using the Gray Level Co-occurrence Matrix (GLCM) and the extracted features are used for classification of OA with an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier. The datasets are obtained from the Osteoarthritis Initiative (OAI) database and Ganga Hospital, Coimbatore.
    Keywords: Osteoarthritis; MRI; Gabor filter; Matched filter; CLAHE; Grey Level Co-occurrence Matrix (GLCM); Adaptive Neuro Fuzzy Inference System (ANFIS).
    DOI: 10.1504/IJBET.2018.10020814
  • A New Deep Learning Structure to Improve Detection of P300 Signals: Using Supervised Neural Networks as Convolutional Kernel of CNN   Order a copy of this article
    by Syed Vahab Shojaedini, Sajedeh Morabbi, MohammadReza Keyvanpour 
    Abstract: Brain-Computer Interface (BCI) systems provide a safe and reliable interface between the brain and the outer world, and detecting the P300 signal plays a vital role in these systems. In recent years, Convolutional Neural Networks (CNNs) have brought vast and rapid development in P300 signal detection. In this paper, a novel CNN structure is proposed to enhance the separability of the features selected in its convolutional layer. In the proposed scheme, an artificial neural network is applied in that layer as a nonlinear filter extracting nonlinear features, which leads to improved detection of P300 signals. The performance of the proposed structure is assessed on the EPFL BCI group dataset, and the achieved results are compared with the basic structure for P300 detection. The obtained results demonstrate an improvement in True Positive Rate (TPR) of the proposed structure over its alternative by 19.69%. The corresponding improvements in false detections and accuracy are 1.97% and 10.87%, which shows the effectiveness of the proposed structure in detecting P300 signals.
    Keywords: Brain-Computer Interface; P300 Signal Detection; Convolutional Neural Network; Convolutional Kernel; Nonlinear Filter.

  • Gustatory Stimulus Based ElectroEncephaloGram (EEG) Signal Classification   Order a copy of this article
    by Kalyana Sundaram Chandran, Marichamy Perumalsamy 
    Abstract: A Brain Computer Interface (BCI) gives prompt communication between the human brain and a personal computer (PC). A BCI acquires signals from the brain and interprets them for controlling external devices. Taste Composition (TASCO) based EEG signal classification is used to differentiate normogeusia and hypogeusia. Since an electroencephalography (EEG) signal is non-stationary and time-varying, features can be extracted either in the time domain or the frequency domain. The proposed method is mainly used to identify problems in human organs by using TASCO. The EEG signal of TASCO is preprocessed using an FIR band-pass filter to mitigate noise artifacts. In this work, the Discrete Wavelet Transform (DWT) is used as the feature extraction method; it gives both time and frequency domain representations. The DWT decomposes the EEG signal into its associated frequency bands, and the statistical features of the detail coefficients of the alpha wave are analyzed in the time domain. The Mean Absolute Value (MAV), which is the average of the absolute value of the EEG signal, and the variance of the signal are considered as statistical features. The extracted features are classified using a multilayer perceptron neural network classifier, which provides high classification accuracy. In this paper, sour TASCO is analyzed to identify gall bladder problems. The proposed method improves the accuracy and performance of the system to as much as 95%, which cannot be achieved by conventional methods.
    Keywords: BCI; Discrete Wavelet Transform; EEG; FIR Band pass filter; gustatory stimuli; MLP; Taste Composition.
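    The DWT feature extraction outlined above (band decomposition, then MAV and variance per band) can be sketched with a hand-rolled Haar transform in NumPy. Haar is chosen here purely for brevity; the abstract does not state which mother wavelet the paper uses:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half
    return a, d

def band_features(x, levels=3):
    """MAV and variance of the detail coefficients at each decomposition level."""
    feats = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [np.mean(np.abs(d)), np.var(d)]
    return np.array(feats)
```

Each level halves the band, so three levels yield a six-number feature vector (MAV and variance per band) of the kind fed to the multilayer perceptron.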

  • MRI Brain Image Volume Property based Accelerate Medical Image Algorithms using CUDA Supported GPU Machine   Order a copy of this article
    by Sriramakrishnan Pathmanaban 
    Abstract: This paper elaborates the design and implementation details of parallel image processing techniques that are used to accelerate medical image algorithms with a CUDA-supported GPU. The algorithms are chosen from denoising, morphology, clustering and segmentation. Three parallel computing models are proposed based on the properties of the algorithms and the MRI volume. The acceleration of the parallel algorithms is compared with that of the sequential CPU implementation, measured in terms of speedup folds.
    Keywords: Graphics processing units; Compute unified device architecture; Parallel processing; Medical imaging; Brain volume; Per-pixel threading; Per-slice threading; Hybrid threading; Bilateral filter; Non-local means; K-means clustering; Fuzzy-c-means clustering.

  • Steady State- VEP based BCI to Control 5 Digit Robotic Hand Using LabVIEW   Order a copy of this article
    by Sandesh R S, Nithya Venkatesan 
    Abstract: This paper proposes Steady State Visual Evoked Potential (SSVEP) signals for the control of a five-digit robotic hand using LabVIEW as the software platform. The experimental setup consists of Ag/AgCl electrodes with 10-20 gel; a low-cost, rechargeable battery-operated EEG amplifier; a handmade stimulation panel flickering at a frequency of 21 Hz with a Light Emitting Diode as the source; LabVIEW as the software platform to implement wavelet analysis for feature extraction and linear discriminant analysis for classification; and an NI USB-DAQ to provide an interface between EEG acquisition and the robotic hand. A state machine chart algorithm using the PWM technique is implemented for speed control of a miniature metal-gear DC motor with 71 RPM, positioned in the robotic hand. The experiment was carried out with five different subjects, each undergoing five trials, with two recordings of SSVEP signals per trial. Experimental results indicate that the subjects' SSVEP signals were used to control the robotic hand to pick up a woollen ball, achieving an accuracy of 84% and a mean time of 44.6 seconds. The obtained experimental results were compared against the results of similar works.
    Keywords: SSVEP signals; EEG amplifier; LabVIEW; NI-USB DAQ; Robotic Hand; Wavelet analysis; Linear Discriminant Analysis.
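    Detecting whether a subject is attending the 21 Hz flicker ultimately reduces to finding stimulus-frequency power in the EEG spectrum. A NumPy sketch of that idea (the sampling rate, epoch length and band width below are illustrative assumptions; the paper itself uses wavelet analysis plus linear discriminant analysis rather than a raw FFT ratio):

```python
import numpy as np

def ssvep_target_power(eeg, fs, target_hz, band=0.5):
    """Fraction of an EEG epoch's spectral power within +/- band Hz of the
    flicker frequency; a large value suggests the subject is attending
    the flickering stimulus."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    mask = np.abs(freqs - target_hz) <= band
    return spectrum[mask].sum() / spectrum.sum()
```

Thresholding this ratio per epoch gives a simple attend/ignore decision of the kind the state machine can translate into PWM motor commands.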

  • Adaptive Fractional Order Controller with Smith Predictor based Propofol Dosing in Intravenous Anaesthesia Automation   Order a copy of this article
    by Bhavina Patel 
    Abstract: This paper proposes a clinical simulation model for automatic propofol dose delivery. We suggest an Adaptive Fractional Order Controller with Smith Predictor (AFCSP) design based on the CRONE (Commande Robuste d'Ordre Non Entier) principle to provide an adequate hypnotic intravenous drug infusion regimen for propofol. The main aim of the proposed design is to avoid frequent adaptation complexity and to provide another approach to adaptation based on sensitivity parameters in place of the BIS signal. The proposed controller is designed via a model-based analytical method with two time-domain tuning parameters, using explicit equations instead of complex nonlinear equations yet yielding the same results. AFCSP utilizes a combination of bolus and continuous dosing. Robustness tests of AFCSP are carried out with patient variability, time delays and surgical stimuli, and are compared with conventional methods. This scheme is advantageous in improving the speed of response and reducing oscillations and overshoots in BIS, and is also examined on a real dataset of 31 different patients.
    Keywords: Adaptive Fractional Order Controller with Smith Predictor Controller; Depth of Anaesthesia; Intravenous; BIS; Propofol.
    DOI: 10.1504/IJBET.2018.10019664
  • Quantitative Evaluation of Denoising Techniques of Lung Computed Tomography Images: An Experimental Investigation   Order a copy of this article
    by Bikesh Kumar Singh 
    Abstract: Appropriate selection of a denoising method is a critical component of lung computed tomography (CT) based computer-aided diagnosis (CAD) systems, since noise and artifacts may deteriorate the image quality significantly, thereby leading to incorrect diagnosis. This study presents a comparative investigation of various techniques used for denoising lung CT images. Current practices, evaluation measures, research gaps and future challenges in this area are also discussed. Experiments on 20 real-time lung CT images indicate that the Gaussian filter with 3
    Keywords: Image denoising; lung computed tomography; computer aided diagnosis; image smoothening; edge preservation; quantitative evaluation; image contrast; picture signal to noise ratio; image quality; noise attenuation; time domain and frequency domain.

  • Real-Time Epileptic Detection from EEG Signals using Statistical Features Optimization and Neural Networks Classification   Order a copy of this article
    by Badreddine Mandhouj, Sami Bouzaiane, Mouhamed Ali Cherni, Ines Ben Abdelaziz, Slim Yacoub, Mounir Sayadi 
    Abstract: This paper describes a completely automated approach to enhance the diagnosis of epilepsy, one of the most prevalent neurological disorders. The major aim of this work is to make a potential contribution to the domain. The paper is divided into three main parts. In the first part, we optimize the statistical features extracted from the EEG signals by a characterization degree. These features are then applied to a multilayer neural network (MNN) classifier. In the third part, we use a Digital Signal Peripheral Interface Controller (dsPIC) for the implementation of the real-time EEG classification process. The EEG data used are taken from the publicly available database of the University of Bonn and are classified into healthy and epileptic subjects. To assess the performance of this classification method, several performance measures (sensitivity, specificity and accuracy) have been evaluated and have provided interesting results.
    Keywords: Electroencephalogram; Epilepsy; Statistical Features; classification; Characterization degree; Optimization; Multilayer neural network; Real-Time; dsPIC.

  • The evaluation of the healing process of diabetic foot wounds using image segmentation and neural networks classification   Order a copy of this article
    by Bruno Da Costa Motta, Marina Pinheiro Marques, Guilherme Dos Anjos Guimarães, Renan Utida Ferreira, Suélia De Siqueira Rodrigues Fleury Rosa 
    Abstract: Objective: The diabetic foot is characterized as an infection or ulceration of the lower-limb tissues. To hinder the evolution of this disease, patients need to be monitored, and the evolution as well as the healing process of the ulcers must be documented. Method: In this regard, this paper proposes the development of an easy-to-use computer program that performs segmentation of ulcers based on the color of the scar tissue and automatically classifies them into three classes using an artificial neural network, in order to help and ease the diagnosis given by health professionals. Result: The total area of the ulcer, color characteristics of the scar tissue and dimensions of the ulcer can be used as parameters in the diagnosis. Conclusion: The technique developed detected and computed the area of the ulcers using an imaging protocol, facilitating the application of the technique at hospitals and care units.
    Keywords: Wound healing; Medical informatics; Diabetic foot; Image processing; Neural Network.

  • Mental task classification using wavelet transform and support vector machine   Order a copy of this article
    Abstract: The present research classifies various mental tasks related to human cognitive function disorders using the Discrete Wavelet Transform (DWT) and Support Vector Machine (SVM). The Electroencephalogram (EEG) database was obtained from the online Brain Computer Interface (BCI) Competition paradigm III and the offline B-Alert EEG machine at CARE Hospital, Nagpur. EEG signals from paralyzed patients are decomposed into frequency sub-bands using the DWT, and a set of statistical features extracted from the sub-bands, representing the distribution of wavelet coefficients, is used to reduce the dimension of the data. These features are applied to an SVM for classification of left-hand and right-hand movement. With this system, classification of EEG signals achieves an accuracy of 91.66% for BCI Competition paradigm III and 97% for the B-Alert machine.
    Keywords: BCI; Brain Computer Interface; EEG; Electroencephalogram; Mental Task; DWT; Discrete Wavelet Transform; B- Alert Machine; Classification; SVM; Support Vector Machine; Accuracy; Error; ANN; Artificial Neural Network.
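The pipeline this abstract describes (DWT sub-band decomposition, statistical features per sub-band, then classification) can be sketched in outline. The snippet below is a hypothetical illustration using a hand-rolled Haar DWT; it is not the authors' implementation, and the feature choices (mean absolute value, standard deviation, energy) follow common practice rather than the paper.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def subband_features(signal, levels=4):
    """Statistical features (mean |c|, std, energy) of each sub-band."""
    feats, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats += [np.mean(np.abs(detail)), np.std(detail), np.sum(detail**2)]
    feats += [np.mean(np.abs(approx)), np.std(approx), np.sum(approx**2)]
    return np.array(feats)

t = np.linspace(0, 1, 256, endpoint=False)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
fv = subband_features(eeg_like, levels=4)
print(fv.shape)   # (15,) — a compact vector to pass to an SVM
```

The resulting low-dimensional feature vector is what would then be fed to the SVM for left- versus right-hand classification.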

  • Optimization of Data-set for Classification of Diabetic Retinopathy using Support Vector Machine with Minimal Processing   Order a copy of this article
    by Amol Golwankar, Pranav Pailkar, Purvika Patil, Rajendra Sutar 
    Abstract: Diabetic retinopathy is a disease observed in the retinal region, caused by a reduced level of insulin in the body or by the pancreas failing to process it properly. If the disease is not recognized in time it may cause permanent blindness. This paper illustrates an optimized approach towards developing a classifier that helps in diagnosing the disease and checking its severity. A large dataset of 1900 retinal photographs obtained from the Kaggle Diabetic Retinopathy Detection Dataset is used. The proposed classifier classifies the retinal pictures based on relevant feature values calculated from primary features extracted from the pre-processed and raw images. Classification is performed by a support vector machine algorithm that classifies the retinal images into categories: normal image with no signs of retinopathy, mild retinopathy, moderate retinopathy, severe retinopathy, and proliferation of blood vessels, with an accuracy of 91.2%.
    Keywords: Diabetic Retinopathy; Retinal images; Pre-processing; Feature extraction; Machine learning; Support Vector Machine.

  • Weighted Poisson solver fusion of magnetic resonance and computed tomography images to remove the cupping artefact in brain muscle region   Order a copy of this article
    by K. Narayanan, P. Thangaraj, N. Rajalakshmi 
    Abstract: In this paper, a weighted Poisson solver fusion algorithm is proposed for MR and CT brain images to reduce the cupping artefact and increase the depth view of muscle regions such as the masseter, lateral pterygoid and temporalis. Existing fusion algorithms increase only the depth view of the bone and spinal cord regions in the fused brain images and never show the muscle regions. The proposed Poisson fusion algorithm removes the cupping artefact, increases the depth view of the muscle region, and also enhances the lacrimal bone in the fused images. Simulation results show that the proposed algorithm is more efficient than other existing algorithms. The performance of the method is evaluated using the SSIM, LSS and ESS image quality metrics, on which the proposed algorithm increases the depth view and enhances the muscle region with 95% greater efficiency than the existing algorithm.
    Keywords: magnetic resonance imaging; computer tomography; luminance; chrominance; fused image; structural similarity; local structural similarity; edge similarity score.

  • A Topological Approach for Mammographic Density Classification Using a Modified Synthetic Minority Over-Sampling Technique Algorithm   Order a copy of this article
    by Imane NEDJAR, Said MAHMOUDI, Mohamed Amine Chikh 
    Abstract: Mammographic density is known to be a risk indicator for the development of breast abnormalities. Therefore, breast tissue classification is an important component of computer aided diagnosis (CAD) systems for cancer detection. In this paper, a CAD system for breast tissue classification using an equilibrating approach is proposed. The first contribution of this paper consists of representing the textons distribution by a topological map. This approach allows good mammographic density classification using the distribution of breast tissue. The second contribution consists of equilibrating the dataset in the CAD system: an improvement of the Synthetic Minority Over-Sampling Technique (SMOTE) algorithm is developed. Our experiments are carried out with the MIAS and DDSM datasets to validate the CAD system, and with two further datasets to validate the proposed modified SMOTE algorithm. The obtained results confirm the validity of the presented proposal.
    Keywords: breast tissue classification; SMOTE; textons; computer aided diagnosis systems; mammography; parenchymal patterns; feature extraction; BI-RADS; classification; imbalanced data sets.
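The SMOTE algorithm that this paper modifies balances a dataset by interpolating synthetic minority-class samples between nearest neighbours. A minimal numpy sketch of the classic, unmodified algorithm (the paper's improvement is not reproduced here):

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Classic SMOTE: interpolate between a minority sample and one of
    its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation fraction in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(0, 1, (20, 4))
new_samples = smote(minority, n_new=30, k=5)
print(new_samples.shape)   # (30, 4)
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority data already occupies.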

  • Wavelet-based Imagined Speech Classification Using Electroencephalography   Order a copy of this article
    by Dipti Pawar, Sudhir Dhage 
    Abstract: Introduction: Oral communication is the natural way in which humans interact. However, in some circumstances it is not possible to emit an intelligible acoustic signal, or it is desired to communicate without making sounds. In these conditions, systems that enable spoken communication in the absence of an acoustic signal are desirable. In this context, the Brain-Computer Interface (BCI) is a remarkable way of solving daily life problems. Objective: The major objective of the proposed research is to develop an imagined speech classification system based on Electroencephalography (EEG). Research in the field shows that there is an association between recorded EEG data and the production of speech; we wish to analyse whether this also holds for imagined speech. Approach: The proposed imagined speech recognition system consists of preprocessing, feature extraction and classification. In the preprocessing stage, EOG artefacts are removed via independent component analysis (ICA). The discrete wavelet transform (DWT) is used to extract wavelet-based features from EEG segments. Finally, a support vector machine (SVM) is employed to discriminate the extracted features. Main Results: The proposed research achieves promising results in classification accuracy compared with some of the most common classification techniques in BCI. Significance: These results indicate significant potential for the utilisation of a speech prosthesis controller for clinical and military applications.
    Keywords: Electroencephalography; Brain-Computer Interface; DWT; Imagined Speech; SVM.

  • Optimization of Electrical Impedance Techniques based System for Medical & Non-Medical Application Monitoring   Order a copy of this article
    by Ramesh Kumar, Sharvan Kumar, Amit Sengupta 
    Abstract: Non-invasive techniques are presently the preferred standard approach because of their many advantages in monitoring real-time phenomena occurring within the human body without much interference. In this paper, the proposed electrical impedance tomography (EIT) system helps in monitoring and recording the distributed electric field over an object's surface. This imaging technique is based on the internal electrical conductivity distribution of the body, reconstructing an image from the electrical measurements of electrodes attached to the circumference of the object. A constant current is passed into the boundary of the object through a pair of electrodes, and the resulting boundary voltages are measured and fed into a computer. Reconstruction of the cross-sectional resistivity image requires sufficient data collection. The image reconstruction algorithm is controlled by a graphical user interface (GUI) window on the MATLAB platform. The EIT technique offers several benefits over other imaging modalities.
    Keywords: Bio-Impedance; Electrical Impedance Tomography; Impedance Plethysmography; Electrode; Phantom; Current source; Conductivity; Resistivity; Imaging modalities; Image Reconstruction; Forward Problem; Inverse Problem; Finite Element Method; Graphical User Interface; Medical Monitoring; Industrial Monitoring.

  • Numerical Assessment of a 3-D Human Upper Respiratory Tract Model: Effect of Anatomical Structure on Asymmetric Tidal Pulmonary Ventilation Characteristics   Order a copy of this article
    by Digamber Singh, Anuj Jain, Akshoy Ranjan Paul 
    Abstract: The analysis of airway ventilation characteristics is important for diagnosis and pathological assistance for respiratory diseases. It is therefore imperative to study the impact of anatomical features on the internal flow field. The article is focused on an in-silico study of the impact of the anatomical structure of the human upper respiratory tract on transient asymmetric tidal pulmonary ventilation characteristics. A three-dimensional human airway model is reconstructed from the nasal cavity up to the 7th generation bronchi from computed tomography (CT) images of a 48-year-old healthy man using computational modelling techniques. A validated low Reynolds number (LRN) Realizable k-ε turbulence model is used to capture the internal mixed turbulence characteristics of the flow. The numerical simulations were performed for asymmetric low and high tidal pulmonary ventilation (ALTPV, 10 L/min and AHTPV, 40 L/min). The numerical analysis helps to predict near-realistic airway ventilation phenomena and internal flow physics in the upper respiratory tract.
    Keywords: Human upper respiratory tract (HURT); Asymmetric low tidal pulmonary ventilation (ALTPV); Asymmetric high tidal pulmonary ventilation (AHTPV); Computed tomography (CT); Computational fluid dynamics (CFD); Transient; Wall shear stress (WSS); LRN k- ε turbulence model.

  • Automated Segmentation and Classification of Nuclei in Histopathological Images   Order a copy of this article
    by Sanjay Vincent, Chandra J 
    Abstract: Various kinds of cancer are detected and diagnosed using histopathological analysis. Recent advances in whole slide scanner technology and the shift towards digitisation of whole slides have inspired the application of computational methods to histological data. Digital analysis of histopathological images has the potential to tackle issues accompanying conventional histological techniques, like the lack of objectivity and high variability. In this paper, we present a framework for the automated segmentation of nuclei from human histopathological whole slide images, and their classification using morphological and colour characteristics of the nuclei. The segmentation stage consists of two methods, thresholding and the watershed transform. The features of the segmented regions are recorded for the classification stage. Experimental results show that the knowledge from the selected features is capable of classifying a segmented object as a candidate nucleus and filtering out the incorrectly identified segments.
    Keywords: Histopathological Images; Whole Slide Images; Digital Image Analysis; Segmentation; Nuclei; Annotated; Nuclear; Computer-Assisted Diagnosis; Machine Learning; Classifier; Deep Learning;.

  • Comparison of Variational Mode Decomposition and Empirical Wavelet Transform methods on EEG signals for Motor Imaginary applications   Order a copy of this article
    by Keerthi Krishnan K, Soman K P 
    Abstract: A reliable method for implementing brain computer interface (BCI) systems using electroencephalogram (EEG) signals is proposed. The applicability of two modal decomposition methods, Variational Mode Decomposition (VMD) and Empirical Wavelet Transform (EWT), to EEG signals for identifying four different motor imagery movements is analysed and compared through investigation of Event-Related Desynchronisation (ERD) activity in the Mu-beta rhythm of the EEG signals. The EEG signals from each electrode corresponding to the sensorimotor cortex area of the brain are decomposed using the VMD and EWT methods. Each decomposed mode is modelled using Auto Regressive (AR) modelling, and a feature vector is formed from the AR model parameters. On classification, better accuracy is observed for the VMD method in comparison with the EWT and Common Spatial Pattern (CSP) methods developed on the same data set.
    Keywords: VMD; EWT; EEG; SMR; Event-Related Desynchronisation; Motor Imaginary-BCI; BCI competition data set IIIa; Short Time Fourier Transform; AR model; libSVM classifier; Neural network classifier.
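The AR-modelling step used above to turn each decomposed mode into a feature vector can be illustrated with the Yule-Walker equations. This is a generic sketch, not the authors' code; the model order and the synthetic AR(2) test signal are assumptions.

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Estimate AR model parameters by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])   # the feature vector

rng = np.random.default_rng(3)
# Synthetic AR(2) process: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + e[t]
e = rng.normal(0, 1, 5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
a = ar_coefficients(x, order=2)
print(np.round(a, 2))   # close to the true coefficients [0.75, -0.5]
```

Concatenating such coefficient vectors across the VMD or EWT modes of each electrode yields the feature vector that is then passed to the classifier.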

  • Power Line Interference Cancellation from ECG Using Proportionate Normalized Least Mean Square Sub band Adaptive Algorithms   Order a copy of this article
    Abstract: The electrocardiogram (ECG) is a non-invasive recording of the electrical activity of the heart, which is commonly contaminated by noise such as 60 Hz power-line interference (PLI). To efficaciously correct the signal and retain more of the underlying components of an ECG, a powerful tool for the removal of PLI from a range of signals was introduced earlier. In this research, a multiband structured sub-band adaptive filter (MSAF) is developed to resolve structural problems in conventional sub-band adaptive filters. This paper investigates a detailed adaptive noise canceller (ANC) for ECG signals with robustness based on a multi-level decomposition of the noisy signal, splitting it into low and high sub-bands using uniform filter bank (UFB) and non-uniform filter bank (NUFB) structured MSAF with the Proportionate NLMS (PNLMS) and Improved Proportionate NLMS (IPNLMS) algorithms. Computer simulation demonstrates that the proposed design gives elevated performance and achieves correct adaptation.
    Keywords: ECG; IPNLMS; NUFB; UFB; MSAF; SAF.
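An adaptive noise canceller of the kind described can be sketched in its simplest full-band NLMS form (without the sub-band filter banks or proportionate updates of the paper); the filter order, step size, and the crude sinusoidal stand-ins for the ECG and mains reference are all illustrative assumptions.

```python
import numpy as np

def nlms_cancel(primary, reference, order=8, mu=0.05, eps=1e-6):
    """NLMS adaptive noise canceller: the reference input is filtered to
    match the interference in the primary input; the error signal is the
    cleaned output."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]          # most recent sample first
        e = primary[n] - np.dot(w, x)             # error = cleaned sample
        w += mu * e * x / (np.dot(x, x) + eps)    # normalised LMS update
        out[n] = e
    return out

fs = 500
t = np.arange(0, 4, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)                 # crude stand-in for ECG
pli = 0.8 * np.sin(2 * np.pi * 60 * t + 0.3)      # power-line interference
primary = ecg + pli
reference = np.sin(2 * np.pi * 60 * t)            # reference from the mains
cleaned = nlms_cancel(primary, reference)
# After convergence the residual interference is strongly attenuated
err_before = np.mean((primary[1000:] - ecg[1000:])**2)
err_after = np.mean((cleaned[1000:] - ecg[1000:])**2)
print(err_before > 5 * err_after)
```

The paper's MSAF variants apply this same adaptation idea within each sub-band, which speeds convergence for coloured inputs.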

  • Biotechnical System and fuzzy logic Models for Prediction and Prevention of Post-Traumatic Inflammatory Complications in Patients with Closed Renal Trauma   Order a copy of this article
    by Riad Taha Al-kasasbeh, Nikolay Korenevskiy, Stanislav Petrovich Seregin, Marina Sergeevna Chernega, Altyn Amanzholovna Aikeyeva, Maksim Ilyash 
    Abstract: A fuzzy logic approach is developed and trained to predict the occurrence of health complications in patients with blunt kidney injury. Fuzzy logic is selected because it merges expert judgement with analysis of real patient data. A fuzzy decision-rule system is constructed for forecasting the post-traumatic inflammatory complications of patients with blunt kidney injury according to medical and laboratory testing. The research shows a high level of lipid peroxidation and antioxidant activity. By predicting the occurrence of complications, the system enables the physician to prescribe prevention and treatment, combining physical therapy with antioxidant and detoxification therapy.
    Keywords: closed injury of the kidney; prognosis; prevention; fuzzy mathematical model.

  • An operative acute Brain Tumor recognition by jointure inward unswerving Probabilistic Neural network classifier   Order a copy of this article
    by Anitha V. 
    Abstract: Introduction: Brain tumors have to be predicted early to avoid the risk of mortality. For effective detection, an adaptive segmentation with two-tier tumor region extraction is needed. Materials and Methods: This framework offers preprocessing to avoid noise by fusing median and Wiener filters, and utilises the adaptive pillar C-means algorithm for obtaining the essential feature set, thus reducing the processing time. Results and Discussion: The attained feature sets are then classified by means of an unswerving PNN (Probabilistic Neural Network) classifier, where classification is done twice: initially to classify the tumor as benign or malignant, and subsequently to classify different sorts of brain tumor such as astrocytoma, meningioma, glioblastoma and medulloblastoma. Conclusion: The non-linearity of the PNN distance factor consumes more computation time; this is tackled by replacing the radial basis function with the linear distance factor of LS-SVM (Least Square Support Vector Machine), further reducing the computation time.
    Keywords: brain tumor; median filter; wiener filter; adaptive pillar C-means; LS-SVM and PNN; MRI- Magnetic Reasoning Imaging; PSNR- Peak Signal to Noise Ratio; SNR- Signal to Noise Ratio.

  • CBIR BASED DIAGNOSIS OF DERMATOLOGY   Order a copy of this article
    by WISELIN JIJI, Rajesh A, Johnson DuraiRaj P 
    Abstract: In this work, we present a computer-aided diagnosis approach to assist the diagnosis of dermatological diseases. The proposed framework is used to retrieve images from a skin lesion repository, which in turn assists the dermatologist during the diagnosis process. The system uses Eigen disease spaces of the respective diseases to converge the search space more efficiently. The results, evaluated using the Receiver Operating Characteristic (ROC) curve, prove that the proposed architecture contributes significantly to computer-aided diagnosis of skin lesions. Experiments on a set of 1210 images yielded a specificity of 98.44% and a sensitivity of 86%. Our empirical evaluation shows superior retrieval and diagnosis performance when compared to other recent works.
    Keywords: Eigen space; retrieval system; border detection.

  • Automatic Method Recognition of Ischemic Stroke Area on unenhanced CT Brain Images   Order a copy of this article
    by Amina Fatima Zahra Yahiaoui, Abdelhafid Bessaid 
    Abstract: The purpose of this study was to develop a novel automatic method for detecting areas of subtle hypodensity change of ischemia on unenhanced CT images using comparison of the brain hemispheres. The Alberta Stroke Program Early CT Score (ASPECTS) has been proposed to help radiologists make decisions regarding thrombolytic treatment; only patients with favorable baseline scans (ASPECTS 8-10) benefitted from endovascular revascularization therapy. The classification of the images into normal and abnormal depends on features of the left and right sides of the brain. For accurate detection, we integrated an automatic midline estimation algorithm to trace the midline correctly. The proposed method has five steps: preprocessing, segmentation of 10 Regions of Interest (ROIs), elimination of old infarcts and cerebrospinal fluid (CSF) space, and feature extraction. The features obtained from the ten ROIs were then used to select the abnormal regions and to compute the corresponding ASPECTS score. The method was applied to 50 patients with infarctions of the Middle Cerebral Artery (MCA) who presented to the LA MEKERRA imaging center. Good results are achieved, especially for midline estimation compared with manual detection. The performance of the method is quite satisfactory, with an AUC of 0.845 on ROC analysis for the ASPECTS score. Our approach has the potential to be used as a second opinion in stroke diagnosis.
    Keywords: CT scan; stroke detection; midline estimation; ASPECTS score; hemispheres comparison.

  • Application of Data mining techniques for early detection of Heart Diseases using Framingham Heart Study Dataset   Order a copy of this article
    by Nancy Masih, Sachin Ahuja 
    Abstract: Health care organizations accumulate large amounts of healthcare data, but these are rarely mined for the hidden patterns that can make the decision-making process more efficient. Data mining techniques prove useful in gaining insights by discovering hidden patterns in data sets that remain undetected manually. Heart disease is the main cause of mortality worldwide; hence, it is critical to predict heart disease at an early stage, with accuracy and speed, to save millions of lives. This paper examines and compares the accuracy of four different machine learning algorithms for predicting and diagnosing heart disease using the Framingham Heart Study (FHS) data set. The output of the study confirms the most prominent features that cause heart disease and that must be analyzed for early detection of the disease. This study can serve as prognostic information in the treatment of heart disease.
    Keywords: Heart Disease; Prediction; Framingham heart study; Decision tree; Naïve Bayes; Support Vector Machine; Artificial Neural Network.

  • An Enhanced Nonlinear Filter and Its Applications to Medical Image Restoration   Order a copy of this article
    by Boucif Beddad, Kaddour Hachemi, Jack-Gérard Postaire, Sundarapandian Vaidyanathan 
    Abstract: In this work, we describe an efficient algorithm developed to enhance medical images corrupted by impulsive noise. The main objective is to remove both low- and high-density impulsive noise using an Enhanced Nonlinear Filter (ENLF). The filter performs spatial information processing to identify the pixels in an image that have been affected, and restores each one by the median value of the proposed 2-D moving window that has the lowest variance. The proposed denoising algorithm was optimized and implemented on a fixed-point TMS320C6416 digital signal processor from Texas Instruments; it was successfully tested on multiple medical images, providing very good restoration and better Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) results than well-known existing nonlinear filters. The execution time of the algorithm is also appreciable.
    Keywords: Code Composer Studio; Impulse Noise; Medical Images; Nonlinear Filter; Peak Signal-to-Noise Ratio; TMS320C6416 DSK.
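The selective-restoration idea described above (replace a pixel by a local median only when it is identified as corrupted, so uncorrupted pixels are preserved) can be sketched as follows. The impulse-detection rule and its threshold are illustrative assumptions, not the paper's exact ENLF.

```python
import numpy as np

def selective_median_denoise(image, window=3, thresh=50.0):
    """Sketch of a selective median filter: a pixel is replaced by the
    local median only when it deviates strongly from that median,
    which preserves uncorrupted pixels (an assumed impulse detector)."""
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    out = image.astype(float).copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            med = np.median(padded[i:i+window, j:j+window])
            if abs(image[i, j] - med) > thresh:   # looks like an impulse
                out[i, j] = med
    return out

rng = np.random.default_rng(4)
clean = np.full((32, 32), 120.0)
noisy = clean.copy()
idx = rng.random(clean.shape) < 0.1               # 10% salt-and-pepper
noisy[idx] = rng.choice([0.0, 255.0], size=idx.sum())
restored = selective_median_denoise(noisy)
# True: the selective filter removes nearly all impulses on this test image
print(np.mean((restored - clean)**2) < np.mean((noisy - clean)**2) / 10)
```

A plain median filter would also blur uncorrupted detail; the selective variant leaves clean pixels untouched, which is the motivation the abstract gives for the ENLF.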

  • Design of Band-Pass Filters by Experimental and Simulation Methods at the range of 100-125 keV of X-ray in Fluoroscopy   Order a copy of this article
    by Goli Khaleghi, Jamshid Soltani-Nabipour, Abdollah Khorshidi, Fariba Taheri 
    Abstract: Filters that remove low-energy spectra and attenuate high-energy spectra without essentially influencing image quality can reduce the absorbed patient dose. This work examines the impact of filter thickness and material on contrast, resolution, absorbed patient dose and image quality. Based on the attenuation curves of the elements, and taking into account cost-effectiveness and availability, four elements (tin, tungsten, lead and copper) in different thicknesses were studied in the range of 100-125 keV. The simulations were executed using the MCNPX code with an error of less than 1%, indicating their accuracy. Experimental data were obtained, based on the results of the calculations and simulations, using fluoroscopy equipment. The results show that applying the filters improves the resolution and image quality and remarkably reduces the output dose rate. In conclusion, the 0.1 mm thick lead filter was found to be the most appropriate.
    Keywords: Fluoroscopy; Alpha phantom; Band-Pass filter; Lead; Tin; Tungsten; Copper; Filter thickness; Absorption edge; Image quality; Resolution; Dose rate; Attenuation curve; Output intensity ratio; Monte Carlo N-Particle - MCNP code.

  • The Adjuvant role of Acupuncture to treat the diabetes mellitus and its analysis using thermogram   Order a copy of this article
    Abstract: This work describes the effects of acupuncture on glycemic control and validates the results using infrared thermography. Two groups of patients undergoing diabetes mellitus treatment are considered for experimentation: group A is treated with both drugs and acupuncture, while group B is treated with drugs alone. The patients' blood sugar and foot surface temperature are studied. Infrared thermography is used to take thermograms before and after acupuncture treatment, and the effect of the treatment is analyzed. The liver and spleen acupoints are stimulated and the temperature changes at these points are analyzed. The results show that the foot temperature (at the treated acupoints) increases after acupuncture treatment in group A and the postprandial glucose level reduces by up to 20 mg/dl, whereas in group B only a 6 mg/dl change is observed, with negligible temperature change. The obtained results suggest acupuncture as an optional treatment for diabetes, with no side effects and no pain.
    Keywords: Acupuncture; diabetes mellitus; glycemic control; foot diagnosis; Infrared thermography;Acupoints; Postprandial Blood Glucose; Fasting Glucose; Line analysis; Spot analysis.

  • Large-scale brain network model and multi-band Electroencephalogram rhythm simulations   Order a copy of this article
    by Auhood Al-Hossenat 
    Abstract: Electroencephalogram (EEG) alpha oscillations play a considerable role in understanding cognitive and physiological aspects of human life, and in diagnosing neurocognitive disorders such as Alzheimer's disease (AD) and dementia. In this work, we developed a large-scale brain network model (LSBNM) to simulate multi-alpha band EEG rhythms. This model includes six cortical areas in the left hemisphere, each implemented as a local Jansen and Rit (JR) network. The proposed model is developed using a biologically realistic large-scale connectome. The implementation and simulations were performed on the neuroinformatics platform The Virtual Brain (TVB v1.5.4). Experimental results show that the proposed brain network model enables the generation of multiple alpha-band EEG rhythms at different frequency ranges (7-8 Hz, 8-9 Hz and 10-11 Hz) by combining the local dynamics of the JR model with the connectome. This model can help physicians understand the general mechanism of EEG rhythms and is also helpful in accurately diagnosing neurocognitive disorders.
    Keywords: Large-scale brain network model; local neural masses modelling; human connectome; The Virtual Brain package.

  • A Biomechanical Analysis of Prosthesis Disc in Lumbar Spinal Segment using Three-Dimensional Finite Element Modeling   Order a copy of this article
    by Mai S. Mabrouk, Samir Y. Marzouk, Heba Afifi 
    Abstract: Lumbar total disc replacement (LTDR) is an operation for handling chronic disc illness and spinal mutilation while safeguarding the range of motion (ROM). The SB Charité
    Keywords: Lumbar total disc replacement (LTDR); biomechanical model; finite element method (FEM); SB Charité™ disc; von Mises stress.

  • Epilepsy Detection from Electroencephalogram Signal Using Singular Value Decomposition and Extreme Learning Machine Classifier   Order a copy of this article
    by Nalini Singh, Satchidananda Dehuri 
    Abstract: Automatic detection of seizures plays an important role in both long-term monitoring and diagnosis of epilepsy. In this work, the proposed singular value decomposition-extreme learning machine (SVD-ELM) classifier technique provides good generalisation performance with remarkably fast learning speed in comparison to existing conventional techniques. Both feature extraction and classification of EEG signals are carried out for the detection of epileptic seizures, using the Bonn University dataset. The proposed method is based upon multi-scale eigenspace analysis of the matrices generated from the discrete wavelet transform (DWT) of the EEG signal by SVD at each scale; the signals are classified using the extracted singular value features and an extreme learning machine (ELM) with dissimilar activation functions. The proposed SVD-ELM technique has been applied for the first time to EEG signals for epilepsy detection using five-class classification, producing an overall accuracy of 95% (p < 0.001) with sine and radbas activation functions.
    Keywords: EEG; Epilepsy; DWT; SVD; ELM; Eigen value; EEG Classification; Neurons; Activation functions.
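The SVD-feature plus ELM pipeline can be illustrated generically: singular values of a matrix built from the signal serve as features, and the ELM solves its output weights in one least-squares step. The reshaping, feature standardisation, network size and toy data below are demonstration assumptions, not the authors' configuration.

```python
import numpy as np

def svd_features(segment, rows=16):
    """Reshape a 1-D segment into a matrix and keep its singular values
    as a compact feature vector (an assumed matrix construction)."""
    seg = np.asarray(segment, dtype=float)
    mat = seg[:len(seg) // rows * rows].reshape(rows, -1)
    return np.linalg.svd(mat, compute_uv=False)

def elm_train(X, y, hidden=40, seed=0):
    """Extreme learning machine: random fixed hidden layer, output
    weights solved in a single least-squares step."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1, (X.shape[1], hidden))
    b = rng.normal(0, 1, hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class data: "seizure" segments carry higher-amplitude content
rng = np.random.default_rng(5)
X, y = [], []
for label, scale in [(0, 1.0), (1, 4.0)]:
    for _ in range(40):
        X.append(svd_features(rng.normal(0, scale, 256)))
        y.append(label)
X = np.array(X)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise the features
y = np.array(y, dtype=float)
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5))
print(acc)   # high training accuracy on this easily separable toy set
```

Because the hidden weights are never trained, fitting reduces to one pseudo-inverse, which is the source of the "remarkably fast learning speed" the abstract mentions.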

  • A hybrid approach for analysis of brain lateralization in autistic children using graph theory techniques and deep belief networks   Order a copy of this article
    by Vidhusha Srinivasan, Udayakumar N, Hualou Liang, Kavitha Anandan 
    Abstract: Cerebral lateralization refers to the inclination and specialisation of a neural function towards one hemisphere of the brain over the other for a specific activity. Autism spectrum disorder (ASD) encompasses a wide range of presentations, including reduced language processing capacity and impaired communication. This work analyses the lateralization patterns present at the language regions of the brain for typical controls (TC), low functioning (LFA) and high functioning autistic (HFA) individuals using resting state fMRI (rsfMRI). A total of 101 participants were considered for this study. The active and inactive regions in the left and right hemispheres responsible for language processing have been analyzed through graph theory techniques. Results showed overall left-hemisphere (LH) activation for TCs, impaired LH activation for LFA, and unique right-hemisphere (RH) activation for the HFA group. Using deep belief networks (DBN), the average classification accuracy of the left/right lateralization exhibited by each participant was measured. The accuracy was highest in the LH for controls, with 97.88%, and the LFA group measured 78.17% in the LH, while the HFA group showed dominance in the RH with 94.23%. These results were validated by a senior expert professional. Thus, this work shows the variations of hemispherical lateralization using graph theory techniques and a deep learning classifier to bring out the functional differences among ASD children who exhibit overlapping brain behavioral characteristics.
    Keywords: Autism; ASD; fMRI; Lateralization; Language processing in autism; High functioning autism; Graph theory; Deep belief networks.

  • Characterising Leg-Dominance in Healthy Netballers Using 3-D Kinematics-Electromyography Features Integration and Machine Learning Techniques   Order a copy of this article
    by Umar Yahya, S.M.N. Arosha Senanayake, Abdul Ghani Naim 
    Abstract: The present study utilised machine learning techniques to characterise differences between the dominant (DL) and non-dominant (nDL) legs of healthy female netballers during a single-leg lateral jump. Electromyography (EMG) activity of eight lower-extremity muscles and 3-dimensional motion of the ankle, knee, and hip joints were recorded for both jumping (JL) and landing (LL) legs. The integrated EMG of each muscle and the joints' range-of-motion (ROM) in all three planes were computed. Using hierarchical clustering, two subgroups were identified in both the JL and LL feature subsets. The LL subgroups exhibited significant differences (p<0.05) in ROM of all joints in at least one plane. A support vector machine classifier outperformed artificial neural networks at recognising DL and nDL patterns in subsets LL and JL, with accuracies (F-measure) of 86.21% and 81.36% respectively. These findings suggest DL-nDL differences are more manifested during landing than during jumping, a vital insight for coaches as both legs are alternately used during single-leg jump-landing tasks.
    Keywords: Leg Dominance; Netball; Machine Learning; Surface EMG; 3D-Kinematics; Single-Leg Jump; Dominant Leg; non-Dominant Leg; Lower Extremity; Functional Asymmetry; Support Vector Machine; Artificial Neural Network; Hierarchical Clustering; Principal Component Analysis.
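
The F-measure used above as the accuracy criterion is the harmonic mean of precision and recall; a quick sketch (the trial counts are invented for illustration, not the study's data):

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 50 correctly recognised dominant-leg trials, 8 false positives, 8 misses
score = f_measure(50, 8, 8)   # ~0.8621, i.e. ~86.21%
```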

  • Monitoring optical responses and physiological status of human skin in vivo with diffuse reflectance difference spectroscopy   Order a copy of this article
    by Jung Huang, Jyun-Ying Chen 
    Abstract: Fourier-transform visible-near infrared spectroscopy was applied to analyse diffuse reflectance from human skin perturbed with three skin-agitating methods. Principal component analysis (PCA) was applied to deduce three characteristic spectral responses of human skin. Based on Monte Carlo multilayer simulation, the responses can be attributed to changes in light scattering and in haemoglobin and melanin content. The eigenspectra form a basis for resolving the optical responses of human skin from diffuse reflectance difference spectra measured at different time points after the skin tissue is mechanically stressed. We demonstrate that by applying this analysis scheme to in vivo measured diffuse reflectance difference spectra, valuable information about the responses of skin tissue can be deduced, and thereby the physiological status of the skin can be monitored.
    Keywords: diffuse reflectance spectroscopy; skin tissue; optical response; monte-carlo simulation; principal component analysis.
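
The PCA step described above, deducing eigenspectra from a set of difference spectra, can be sketched with a plain SVD; the synthetic spectra below are invented stand-ins for measured data, not the study's measurements:

```python
import numpy as np

def pca_eigenspectra(spectra, n_components):
    """Return the leading principal-component spectra (eigenspectra) and
    explained-variance ratios of a (n_samples, n_wavelengths) matrix."""
    centered = spectra - spectra.mean(axis=0)
    # SVD: rows of vt are orthonormal eigenspectra ordered by variance
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return vt[:n_components], explained[:n_components]

# Synthetic example: 20 noisy spectra built from two underlying responses
rng = np.random.default_rng(0)
wav = np.linspace(400, 1000, 50)           # wavelengths, nm
base1 = np.exp(-((wav - 550) / 40) ** 2)   # haemoglobin-like band
base2 = np.exp(-((wav - 800) / 120) ** 2)  # scattering-like band
coeffs = rng.normal(size=(20, 2))
spectra = coeffs @ np.vstack([base1, base2]) + 0.01 * rng.normal(size=(20, 50))
eig, var = pca_eigenspectra(spectra, 3)
```

In practice the measured difference spectra replace the synthetic matrix, and the leading eigenspectra serve as the basis onto which new difference spectra are decomposed.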

  • Neonatal Heart Disease Screening Using An Ensemble of Decision Trees   Order a copy of this article
    by Amir M. Amiri, Giuliano Armano, Seyedhossein Ghasemi 
    Abstract: This paper is concerned with the occurrence of heart disease in neonates, as those seriously affected face an increased risk of death. A novel computer-based tool is proposed for medical-centre diagnosis, aimed at monitoring neonates who are potentially vulnerable to heart disease. In particular, cardiac cycles of phonocardiograms (PCGs) are first preprocessed and then used to train an ensemble of decision trees (DTs). The classifier model consists of 12 trees, with bagging and hold-out methods used for training and testing. Several feature encoding methods, including Shannon energy and the Wigner bispectrum, have been experimented with to generate the feature space over which the classifier has been tested. On average, 93.91% classification accuracy, 96.15% sensitivity and 91.67% specificity have been obtained, validated on a balanced dataset of 110 PCG signals taken from healthy and unhealthy medical cases.
    Keywords: Neonate; Heart Diseases; Phonocardiogram; Ensemble of Decision Trees.
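
The bagging scheme described above (an ensemble of trees trained on bootstrap resamples, combined by majority vote) can be sketched with one-level decision stumps standing in for full trees; the toy data and all names below are illustrative, not the authors' implementation:

```python
import numpy as np

def train_stump(x, y):
    """Best single-feature threshold classifier (decision stump)."""
    best = (0, 0.0, 1, np.inf)  # feature, threshold, polarity, error
    for f in range(x.shape[1]):
        for t in np.unique(x[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (x[:, f] - t) > 0, 1, 0)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (f, t, pol, err)
    return best[:3]

def bagged_predict(x, y, x_new, n_trees=12, seed=0):
    """Train n_trees stumps on bootstrap resamples; majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(x_new))
    for _ in range(n_trees):
        idx = rng.integers(0, len(x), len(x))   # bootstrap resample
        f, t, pol = train_stump(x[idx], y[idx])
        votes += np.where(pol * (x_new[:, f] - t) > 0, 1, 0)
    return (votes > n_trees / 2).astype(int)

# Toy 1-D feature (e.g. an energy descriptor): class 0 low, class 1 high
x = np.array([[0.1], [0.2], [0.3], [0.9], [1.0], [1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = bagged_predict(x, y, np.array([[0.05], [1.5]]))
```

Full decision trees replace the stumps in the paper's 12-tree model, but the resample-train-vote loop is the part that defines bagging.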

  • False positives reduction in pulmonary nodule detection using a connected component analysis based approach   Order a copy of this article
    by Satya Prakash Sahu, Narendra D. Londhe, Shrish Verma, Priyanka Agrawal, Sumit K. Banchhor 
    Abstract: In this paper, we propose a connected component analysis (CCA) based approach for reducing the false positive rate (FPR) per scan in the early detection of pulmonary lung nodules using computed tomography (CT) images. The lung CT scans were obtained from the Lung Image Database Consortium - Image Database Resource Initiative database. The proposed study consists of four stages: (i) segmentation of the lung parenchyma through the K-means clustering algorithm, (ii) nodule extraction using an automated threshold-based approach (Santos), (iii) noise removal using the CCA-based approach, and (iv) detection of lung nodules using the sphericity (roundness) feature. The results were validated against annotated ground truth provided by four expert radiologists. The study showed a reduced FP/scan rate of 0.76 with an overall accuracy of 84.03%. The proposed well-balanced system reduces the FPR while maintaining high accuracy in lung nodule detection and thus can be used in clinical settings.
    Keywords: K-means; multi-thresholding; connected component analysis; sensitivity; false positives.
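
The CCA-based noise-removal stage rests on labelling connected groups of candidate pixels and discarding those too small to be nodules; a 2-D sketch of that idea (generic code, not the authors' implementation):

```python
def connected_components(grid):
    """4-connected component labelling on a binary 2-D grid (list of lists).
    Returns a list of components, each a list of (row, col) pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                comps.append(comp)
    return comps

def remove_small_components(grid, min_size):
    """Suppress components below min_size pixels (the noise-removal step)."""
    for comp in connected_components(grid):
        if len(comp) < min_size:
            for i, j in comp:
                grid[i][j] = 0
    return grid

grid = [[1, 1, 0, 0, 1],
        [1, 1, 0, 0, 0],
        [0, 0, 0, 1, 0]]
remove_small_components(grid, min_size=2)   # isolated pixels are suppressed
```

In the 3-D CT setting the same idea applies voxel-wise, and surviving components are further screened with shape features such as sphericity.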

  • Deep 3D multi-scale dual path network for automatic lung nodule classification   Order a copy of this article
    by Shengsheng Wang, Xiaowei Kuang, Yungang Zhu 
    Abstract: Lung cancer is the cancer with the highest mortality rate in the US. Computed tomography (CT) scans for early diagnosis of pulmonary nodules can detect lung cancer in time. To overcome the segmentation and handcrafted features required by traditional methods, we use a deep neural network to diagnose lung cancer. In this work, we propose a deep end-to-end 3D multi-scale network based on a dual path architecture (3D MS-DPN) for lung nodule classification. The 3D MS-DPN model incorporates the dual path architecture to reduce the complexity and improve the accuracy of the model, fully considering the 3D nature of CT scans by performing 3D convolution. Meanwhile, multi-scale feature fusion is used to mitigate the effects of lung nodule sizes varying widely and of nodules occupying only a few regions and slices in CT scans. Our model achieves competitive performance on the LIDC-IDRI dataset compared to recent related works.
    Keywords: Lung nodule classification; Deep neural network; Computed tomography scans; LIDC-IDRI.

  • New Methodology Based on Images Processing for the Diabetic Retinopathy Disease Classification   Order a copy of this article
    by BENSMAIL Ilham, MESSADI Mahammed, Feroui Amel, Lazzouni Mohammed Elamine, Bessaid Abdelhafid 
    Abstract: Diabetes is a chronic disease that cannot be cured, but can be treated and controlled. It is caused by the body's inability to produce or properly use insulin. In the long run, a high blood sugar level causes complications, especially in the eyes, leading to the development of diabetic retinopathy (DR), a serious illness if it is not diagnosed and treated as soon as it appears. Poor care can lead to blindness. In this paper, we propose a new system for early detection of DR. The tested algorithm includes several important phases, notably the detection of the retinal lesions caused by the disease (microaneurysms and hemorrhages) through pretreatment and segmentation processes, as well as the classification of the different stages of non-proliferative DR. Several classifiers have been tested, and the support vector machine (SVM) gave very good sensitivity, specificity and accuracy of 97.56%, 99.01% and 97.52%, respectively. These values show that our approach can be used for diagnostic assistance in ophthalmology.
    Keywords: Diabetic Retinopathy; Extraction of microaneurysms; Detection of hemorrhages; classification of the diabetic retinopathy stages.

  • Brain Tumor Segmentation from Magnetic Resonance Images using Improved FCM and Active Contour Model   Order a copy of this article
    by Nagaraja Perumal, Kalaiselvi Thiruvenkadam 
    Abstract: The proposed multimodal brain tumor segmentation method (MBTSM) is based on improved fuzzy c-means (IFCM) and an active contour model (ACM). MBTSM segments magnetic resonance imaging (MRI) human head scans into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), edema, core tumor and complete tumor. The method consists of three stages. Stage-1 is the IFCM method, which modifies conventional FCM for the brain tissue segmentation process and gives results comparable to existing segmentation techniques. Stage-2 is an abnormality detection process that checks the results of the IFCM method using a fuzzy symmetric measure (FSM). Stage-3 segments the tumor region from multimodal MRI head scans using a modified Chan-Vese (MCV) model. The accuracy of the proposed MBTSM was analyzed using the dice coefficient (DC), positive predictive value (PPV), sensitivity, kappa coefficient (KC) and processing time. The mean DC values are 83% for GM, 86% for WM, 13% for CSF and 75% for complete tumor.
    Keywords: Active Contour; Brain Tumor; Clustering; Magnetic Resonance Image; Segmentation.
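
Fuzzy c-means, the starting point of the IFCM stage, alternates membership and centre updates; a compact generic sketch (plain FCM, not the authors' improved variant, and the intensity data are invented):

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a (n_points, n_features) array.
    Returns (centers, membership) with membership rows summing to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Centres: fuzzily weighted means of the data
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)          # avoid division by zero
        # Membership update: u_ik proportional to dist_ik^(-2/(m-1))
        inv = dist ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated intensity clusters (e.g. two tissue classes)
data = np.array([[0.1], [0.2], [0.15], [0.9], [1.0], [0.95]])
centers, u = fuzzy_c_means(data, 2)
```

The alternating updates shown here are the part common to the whole FCM family; the paper's improvement modifies this baseline.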

  • Automated methodology for breast segmentation and mammographic density classification using co-occurrence and statistical and SURF descriptors   Order a copy of this article
    by Roberto Pavusa Junior, Joao C. L. Fernandes, Alessandro P. Da Silva, Marcia A. S. Bissaco, Silvia R. M. S. Boschi, Terigi A. Scardovelli, Silvia C. Martini 
    Abstract: This paper presents a fully automated process for the segmentation and classification of mammographic images in medio-lateral oblique projections. For this purpose, we developed a new set of descriptors for the determination of breast density based on the standard used in the MIAS database. The process begins with new preprocessing techniques comprising detection of the laterality of the image, removal of the image background and its artifacts, and identification and segmentation of the pectoral muscle. From the resulting segments, namely breast and pectoral muscle, descriptors were extracted from histogram, co-occurrence, and point-of-interest analyses. The descriptors were reduced by three different techniques: Spearman correlation analysis, principal component analysis and linear discriminant analysis. Image classification is performed by two different classifiers, k-nearest neighbours (KNN) and support vector machine (SVM). The SVM classifier achieved a precision of 72.05% and the KNN classifier a precision of 91.30%. Compared to other related works, the developed preprocessing technique is promising, as are the descriptors used for density classification, which surpassed most previous works that used all images from the database.
    Keywords: Breast density; mammography; computer-aided diagnosis; SVM; KNN; SURF.

  • An effective Fast Conventional pattern measure based suffix feature selection to search gene expression data   Order a copy of this article
    by Surendar A 
    Abstract: Biomedical gene sequences are incompletely or erroneously annotated because of a lack of experimental evidence or prior functional knowledge in sequence datasets. Identifying useful genomic selections, rather than relying on correlations across large experimental datasets or on sequence similarity, remains a problem. This study proposes a fast conventional suffix feature pattern search algorithm (FcsFPs) for searching gene sequences in expression data, using fast feature patterns and measuring the conventionality of search accuracy on a gene expression dataset. The aim is to obtain an efficient search algorithm. In this approach, features from the state matrix and sequence centers are described in the form of a string, and the assignment of points to different sequences is done by suffix term search. Overall, the conventional pattern selection reduces the computing complexity of fast gene search, improves search accuracy, and reduces the time complexity and dimensionality of nonlinear gene expression data.
    Keywords: gene search; pattern matching; suffix point; sequence data; throughput; gene expression; genome sequence; feature selection; clustering; suffix feature.
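
A suffix-based search of the kind described above can be sketched by sorting all suffixes of a sequence and binary-searching for a pattern prefix; this is a generic illustration, not the FcsFPs algorithm itself:

```python
from bisect import bisect_left, bisect_right

def build_suffix_array(seq):
    """Sorted list of (suffix, start_index) pairs for a gene sequence."""
    return sorted((seq[i:], i) for i in range(len(seq)))

def find_pattern(suffix_array, pattern):
    """All start positions of pattern, via binary search on the suffixes."""
    suffixes = [s for s, _ in suffix_array]
    lo = bisect_left(suffixes, pattern)
    # '\xff' sorts after every nucleotide letter, closing the prefix range
    hi = bisect_right(suffixes, pattern + '\xff')
    return sorted(suffix_array[k][1] for k in range(lo, hi))

seq = "GATTACAGATC"
sa = build_suffix_array(seq)
hits = find_pattern(sa, "GAT")
```

Production suffix-array construction is linear-time rather than the quadratic-space sketch above, but the prefix-range search is the same idea.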

  • An effective morphological-stabled denoising method for ECG signals using wavelet based techniques   Order a copy of this article
    by Hui Yang, Zhiqiang Wei 
    Abstract: Wavelet transform has been identified as an effective denoising method for ECG signals, with its advantage of multi-resolution analysis. However, it should be noted that important morphological features, such as the peak of the QRS complex, should be retained after denoising for further medical practice. In this paper, an effective morphologically stable denoising method for ECG signals is proposed through optimal selection of the wavelet basis function, design of a new threshold method, and optimisation of the decomposition levels and thresholding scheme. When validated on the MIT-BIH Arrhythmia Database, the denoising method achieved mean square error and signal-to-noise ratio values of 0.0146 and 68.6925 respectively, while successfully retaining a QRS complex amplitude close to its full value. In addition, a total of 23 simulations were carried out to compare our proposed method with other methods. The experimental results indicate that the proposed denoising method outperforms other state-of-the-art wavelet-based methods while remaining morphologically stable.
    Keywords: ECG denoising; noise; morphology; QRS complex; wavelet transform; basis function; multi-resolution; thresholding.
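
Wavelet-threshold denoising of the kind discussed above decomposes the signal, shrinks small (noise-dominated) detail coefficients, and reconstructs; a one-level Haar sketch (a generic illustration, not the paper's optimised basis or threshold rule, and the signal is invented):

```python
import numpy as np

def haar_decompose(signal):
    """One-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def soft_threshold(coeffs, thr):
    """Shrink coefficients towards zero; small (noisy) ones become zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar transform."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

x = np.array([1.0, 1.1, 4.0, 4.2, 1.0, 0.9, 0.0, 0.1])
a, d = haar_decompose(x)
den = haar_reconstruct(a, soft_threshold(d, 0.2))   # threshold details only
```

Leaving the approximation coefficients untouched is what helps preserve large-scale morphology such as the QRS amplitude.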

  • Segmentation of Liver Computed Tomography Images using Dictionary based Snakes   Order a copy of this article
    by SHANILA NAZEERA, Vinod Kumar R S, Ramya Ravi R 
    Abstract: In medical research, segmentation can be used to separate different tissues from one another by extracting and classifying features. Segmentation of the liver from computed tomography (CT) and magnetic resonance imaging (MRI) is a challenging task, and many image segmentation methods have been used in medical applications. In addition to briefly reviewing the need for, concept of and advantages of a few liver segmentation methods, this paper introduces a novel approach for the segmentation of liver CT images using dictionary-based snakes. The performance of the proposed method is quite satisfactory.
    Keywords: Image Processing; Liver Segmentation; Computed Tomography; Preprocessing; Active contour; Snakes; Dictionary Snakes; Segmentation.

  • Non-Invasive Estimation of Random Blood Glucose from Smartphone-based PPG   Order a copy of this article
    by UTTAM KUMAR ROY, Shivashis Ganguly, Arijit Ukil 
    Abstract: Traditional blood glucose meters are invasive in nature; blood is collected by needle pricking, which is painful, carries a high risk of infection and damages tissue over repeated use. Although a few non-invasive methods have been proposed, they require costly, high-end, non-portable custom devices and lack accuracy. This work presents a non-invasive estimate of blood glucose using only a smartphone, based on photoplethysmography (PPG). The method supports 24x7 monitoring without any extra hardware. The system leverages the fact that glucose molecules enter the red blood cells (RBC), attach to hemoglobin and affect blood color. We cleaned the noisy PPG signals of 25 patients, extracted the red component, applied non-linear regression to estimate glucose, and cross-validated against the laboratory invasive method. The RMS error comes out to be 2.1525 mg/dL, which is superior to existing non-invasive techniques. Three methods, viz. geometric regression, Bland-Altman analysis and the surveillance error grid, are used to verify the correctness.
    Keywords: Non-invasive measurement; Blood glucose estimate; Regression; PhotoPlethysmoGraphy.
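
The non-linear regression step, mapping a PPG-derived red-channel feature to glucose, can be sketched with a simple polynomial fit; the calibration numbers below are fabricated for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical calibration: mean red-channel intensity vs. lab glucose (mg/dL)
red = np.array([120.0, 135.0, 150.0, 165.0, 180.0, 195.0])
glucose = np.array([82.0, 95.0, 110.0, 128.0, 149.0, 173.0])

# Fit a degree-2 polynomial, one simple form of non-linear regression
coeffs = np.polyfit(red, glucose, 2)
model = np.poly1d(coeffs)

# Root-mean-square error of the fit on the calibration points
rmse = np.sqrt(np.mean((model(red) - glucose) ** 2))
```

In the paper the fit is cross-validated against invasive laboratory readings rather than evaluated on its own calibration points as done here.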

  • Design and implementation of textile antenna and their comparative analysis of performance parameters with off and on body condition   Order a copy of this article
    by E. Thangaselvi, K. Meena Alias Jeyanthi 
    Abstract: A wearable antenna is an important part of a body area network (BAN), communicating healthcare and secured information to the central hub. This paper presents a comparative study of a rectangular microstrip patch textile antenna for different conductive textile and conductive non-textile materials. The textile antenna is designed and evaluated using ADS 2013.06 software and measured with an N9926A 14 GHz FieldFox handheld vector network analyser for industrial, scientific and medical (ISM) band applications operating in the 2.4-2.485 GHz band. These textile antennas are analysed and compared using performance parameters such as VSWR, reflection coefficient, bandwidth, impedance, directivity and gain. The results of the proposed design show a return loss of −53.32 dB, a VSWR of 1, 100% efficiency, narrow bandwidth, an effective directional radiation pattern, 50 ohm impedance, and gain and directivity of about 5 dB and 5 dBi, respectively.
    Keywords: body area network; BAN; inset feed; ISM band; PCPTF; CFTCA; copper sheet; vector network analyser.
    DOI: 10.1504/IJBET.2019.10022441
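
The reported VSWR and return loss both derive from the antenna's reflection coefficient; a quick reference sketch (note that sign conventions vary: a measured S11 of −53.32 dB corresponds to a 53.32 dB return loss):

```python
import math

def vswr(gamma_mag):
    """Voltage standing-wave ratio from |reflection coefficient|."""
    return (1 + gamma_mag) / (1 - gamma_mag)

def return_loss_db(gamma_mag):
    """Return loss in dB; larger values mean a better impedance match."""
    return -20 * math.log10(gamma_mag)

# A perfectly matched antenna (|Γ| → 0) gives VSWR → 1;
# |Γ| = 0.1 corresponds to a 20 dB return loss and VSWR ≈ 1.22
```
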
  • Analysis of speech imagery using brain connectivity estimators on consonant-vowel-consonant words   Order a copy of this article
    by Chengaiyan Sandhya, Anandan Kavitha 
    Abstract: Speech imagery refers to the perceptual experience of uttering speech to oneself without any articulation. In this paper, the neural correlations between brain regions associated with articulated and imagined speech processing of consonant-vowel-consonant (CVC) words are analysed using brain connectivity estimators. EEG coherence, a synchronisation parameter, establishes the correlation between several cortical areas. To analyse causal dependence, partial directed coherence (PDC) and directed transfer function (DTF) estimators are derived from multi-channel EEG data. From inter- and intra-hemispheric coherences it has been observed that theta, beta and gamma frequencies were dominant, and that words sharing the same vowel and one common consonant have similar coherence values. Results inferred from intra-hemispheric PDC and DTF parameters show that the frontal and temporal regions of the left hemisphere are more activated for all the given speech imagery tasks. Thus, the analysis provides a significant step in understanding the neural interactions of the brain during thinking and articulation processes.
    Keywords: speech imagery; electroencephalography; EEG; EEG coherence; partial directed coherence; PDC; directed transfer function; DTF; consonant-vowel-consonant words.
    DOI: 10.1504/IJBET.2019.10022442
  • Wireless speech control system for robotic arm   Order a copy of this article
    by Biswajeet Champaty, Suraj K. Nayak, Ashirbad Pradhan, Sirsendu S. Ray, Indranil Banerjee, Kunal Pal, Biswajit Mohapatra, Arfat Anis 
    Abstract: Speech-controlled devices have been explored for potential applications in rehabilitation technology. These systems have shown great promises in improving the independence of the differently-abled persons by providing hands-free operation of the rehabilitative aids. The present study delineates the development of a speech-activated wireless control system for controlling rehabilitative devices. A robotic arm was used as the representative rehabilitative device. This technology can be extended to operate other rehabilitative aids (e.g., wheelchairs). The performance of the device was evaluated using ten volunteers. All the volunteers were able to accurately complete the desired movements of the robotic arm with relative ease. The developed device is simple and user-friendly.
    Keywords: speech; rehabilitative device; robotic arm; XBee; Arduino.
    DOI: 10.1504/IJBET.2019.10022443
  • Mix-model for optimisation of textural features applied to multiple sclerosis lesion-tumour segmentation   Order a copy of this article
    by A. Lakshmi, Thangadurai Arivoli, M. Pallikonda Rajasekaran 
    Abstract: Segmentation of biomedical images plays an important role in many applications, especially in medical imaging, forming an important step in enabling quantification in medical research as well as clinical practice. Magnetic resonance imaging is normally used to distinguish and enumerate multiple sclerosis lesions in the brain. Segmentation of multiple sclerosis lesions is a challenging issue due to spatial variation, small size and unclear boundaries. The usual technique for brain MRI tumour detection and classification is manual investigation, which varies from person to person and is very time consuming. Many new methods have been proposed to segment lesions automatically. This paper proposes segmentation of MRI brain tumours using cellular automata and classification of tumours by a pointing kernel classifier (PKC). The utilisation of modified Cuckoo search with priority values and the PKC in the proposed mix-model for optimisation of textural features (M-MOTF) provides a significant improvement in classification performance with low dimensionality. The proposed system has been validated with a real-time dataset from the Frederick National Laboratory, and the experimental results showed improved performance.
    Keywords: angle projection-based method; Cuckoo search; CS; magnetic resonance imaging; multiple sclerosis; optimisation algorithm; skull region stripping.
    DOI: 10.1504/IJBET.2019.10022444
  • Effective facial expression recognition system based on hybrid classification technique   Order a copy of this article
    by J. Sunetha, K. Sandhya Rani 
    Abstract: Facial expression recognition has potential applications in various sectors of daily life, but remains poorly solved owing to the absence of efficient expression identification methods. Many methods attempt to improve identification effectiveness by addressing issues in the face detection and feature extraction stages of expression identification. In the first phase, noise is eliminated from the image using preprocessing techniques to obtain a quality image and decrease computational complexity. The following phase is feature extraction, in which related features such as the eyes, mouth and nose are extracted. The shape feature of the eye region is extracted by an active appearance model (AAM), while the texture features of the nose and mouth are extracted using the grey-level co-occurrence matrix (GLCM). In the final phase, the facial expression is categorised using an adaptive genetic fuzzy classifier (AGFC) and a neural network (NN). Finally, score-level fusion of these two classification results is performed to obtain the facial emotion.
    Keywords: preprocessing; feature extraction; classification; active appearance model; AAM; grey-level co-occurrence matrix; GLCM; neural network; adaptive genetic fuzzy classifier.
    DOI: 10.1504/IJBET.2019.10011474
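
GLCM texture features of the kind mentioned above come from a co-occurrence matrix of grey-level pairs at a fixed pixel offset; a minimal sketch with one offset and one descriptor (the tiny image is invented for illustration):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalised so its entries sum to 1."""
    dr, dc = offset
    mat = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                mat[image[r, c], image[nr, nc]] += 1
    return mat / mat.sum()

def glcm_contrast(p):
    """Contrast texture descriptor: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 1]])
p = glcm(img, levels=3)
contrast = glcm_contrast(p)
```

In practice several offsets and angles are combined, and further descriptors (energy, homogeneity, correlation) are computed from the same matrix.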

Special Issue on: Developments and Issues in Medical Imaging

  • Optimal selection of threshold in EIT reconstructed images for Estimating size of Objects   Order a copy of this article
    by Nanda Ranade, Damayanti Gharpure 
    Abstract: Electrical Impedance Tomography (EIT) is widely used for various applications in process tomography and medical or geological imaging. EIT system non-invasively acquires surface potential data which is used for reconstructing conductivity images for identifying shapes and sizes of objects of interest. Based on the application, the objects of interest may be of higher or lower conductivity than the background conductivity. It is useful to convert reconstructed images to binary form for quantitatively establishing shapes and sizes of objects. In this work, we present guidelines for selecting appropriate threshold values based on systematic numerical investigations assuming prior knowledge of conductivity contrast for specific application of EIT. Various configurations of objects immersed in a background (with lower or higher conductivity) were considered. Open source software EIDORS (Electrical Impedance Diffused Optical Reconstruction Software) was used for reconstructing differential EIT images for these configurations. Diametric conductivity profiles were used to identify appropriate values of threshold to obtain accurate object size over a wide range of contrast. The calculated values of threshold and the resulting effect on estimated object size were compared with the usually preferred thresholds of
    Keywords: EIT; EIDORS; Image thresholding; conductivity contrast.
    DOI: 10.1504/IJBET.2018.10015451
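
Threshold selection of the kind investigated above amounts to binarising the reconstructed conductivity image at some fraction of the background-to-object contrast; a minimal sketch (the fraction, the profile values and the function are illustrative, not the paper's recommended settings):

```python
import numpy as np

def binarize_by_contrast(image, background, target, fraction=0.5):
    """Binarise a reconstructed conductivity image at a threshold placed
    a given fraction of the way from background to target conductivity."""
    threshold = background + fraction * (target - background)
    if target >= background:        # object more conductive than background
        return image >= threshold
    return image <= threshold       # object less conductive than background

# Toy diametric profile: background 1.0 S/m, object around 2.0 S/m
profile = np.array([1.0, 1.1, 1.6, 1.9, 2.0, 1.8, 1.2, 1.0])
mask = binarize_by_contrast(profile, background=1.0, target=2.0, fraction=0.5)
```

The estimated object size then follows from the extent of the binary mask, which is why the chosen fraction directly controls the size error.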