International Journal of Biomedical Engineering and Technology (170 papers in press)
Automated Recognition of Obstructive Sleep Apnea Using Ensemble Support Vector Machine Classifier
by V. Kalaivani
Abstract: ECG is widely used to diagnose Obstructive Sleep Apnea (OSA) with a high degree of accuracy in clinical care applications. We have developed a real-time algorithm for the detection of sleep apnea based on the electrocardiograph (ECG). In this study, features were extracted from the ECG signals of 12 normal subjects and 58 OSA patients from the PhysioNet Apnea-ECG database. The baseline noise, motion drift and muscle noise present in the raw ECG signals are removed using a median filter and a Daubechies wavelet filter. A QRS detection algorithm then extracts the R-wave amplitude and R-wave time duration from the denoised signal. The proposed QRS detection algorithm contains four stages. The initial stage is a derivative function, which estimates the slope of the QRS complex using a five-point derivative. The next stage is a squaring function, which removes negative data points by squaring the derivative values. The first two stages are used to calculate the R-peak amplitude. The third stage is moving-window integration, which captures the R-peak slope over a window of samples. The final stage is fiducial marking, which locates the R-peak and measures the QRS-complex time duration (width). The ECG-derived respiration (EDR) signal is based on the R-wave amplitude and R-wave time duration. Time-domain features are calculated from heart rate variability and the EDR signal, and sleep apnea is diagnosed from these features. Support Vector Machine (SVM) and ensemble SVM techniques are used for the detection of sleep apnea. The SVM classifier was evaluated with linear, polynomial, Radial Basis Function (RBF) and multilayer perceptron kernels.
Keywords: Obstructive Sleep Apnea (OSA); Heart Rate Variability (HRV); ECG-Derived Respiration (EDR); Support Vector Machine (SVM); Ensemble Support Vector Machine.
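The four-stage QRS pipeline described in the abstract (derivative, squaring, moving-window integration, fiducial marking) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the five-point derivative coefficients, the 150 ms integration window and the 50% peak threshold are assumptions borrowed from the classic Pan-Tompkins formulation.

```python
import numpy as np

def qrs_stages(ecg, fs=100):
    """Four-stage QRS pipeline: derivative, squaring, moving-window
    integration, and fiducial (peak) marking."""
    # Stage 1: five-point derivative emphasizes the QRS slope
    d = np.convolve(ecg, np.array([1, 2, 0, -2, -1]) * (fs / 8.0), mode="same")
    # Stage 2: squaring removes negative data points and amplifies large slopes
    sq = d ** 2
    # Stage 3: moving-window integration (~150 ms window)
    win = int(0.15 * fs)
    mwi = np.convolve(sq, np.ones(win) / win, mode="same")
    # Stage 4: fiducial marking - threshold the envelope, keep local maxima
    thr = 0.5 * mwi.max()
    peaks = [i for i in range(1, len(mwi) - 1)
             if mwi[i] > thr and mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]]
    return mwi, peaks
```

The R-peak amplitude then comes from the original signal at each fiducial index, and the QRS width from the extent of the integrated envelope around it.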
Evaluation of endothelial response to reactive hyperemia in peripheral arteries using a physiological model
by Mohammad Habib Parsafar, Edmond Zahedi, Bijan Vosoughi Vahdat
Abstract: A common approach for the non-invasive evaluation of endothelial function - a good predictor of cardiovascular events - is the measurement of brachial artery diameter changes in flow-mediated dilation (FMD) during reactive hyperemia using ultrasound imaging. However, this method is both costly and operator-dependent, limiting its application to research settings. In this study, an attempt is made toward model-based evaluation of the endothelial response to reactive hyperemia. To this end, a physiological model between normalized central blood pressure and the finger photoplethysmogram (FPPG) is proposed. A genetic algorithm is utilized for estimating the model's parameters in thirty subjects grouped as normal BP (N=10), high BP (N=10) and elderly (N=10). The change in beat-to-beat fitness between the model output and the measured FPPG (BB_fit index) during the cuff-release interval is well described by a first-order dynamic. Results show that the time constant of this first-order system is significantly greater for normal BP compared to high BP (p-value=0.004) and elderly subjects (p-value=0.01). Indeed, the endothelial response to reactive hyperemia is more pronounced in normal BP and young subjects compared to high BP and elderly ones, delaying the return of the vasculature to the baseline state. Our findings suggest that the proposed model can be utilized in physiological model-based studies of cardiovascular health, eventually yielding a reliable index for vascular characterization using the conventional FMD test.
Keywords: flow-mediated dilation; photoplethysmography; endothelial function; cardiovascular modeling; viscoelasticity; tube-load model.
Influence of hip geometry to intracapsular fractures in Sri Lankan women: prediction of country specific values
by Shanika Arachchi, Narendra Pinto
Abstract: Falls are very common in daily life, and the hip is a highly vulnerable location during a fall. The trochanter can be compressed during a sideways fall, resulting in either an intracapsular or an extracapsular fracture. The relationship of bone geometry to fracture risk can be analysed as a determinant of the mechanical resistance of the bone, as well as a promising fracture-prediction tool. Intracapsular fractures depend strongly on hip geometry compared with extracapsular fractures. This study aims to determine the influence of hip geometry on intracapsular fractures among Sri Lankan women. The HAL, NSA, FNW and moment arm length of intracapsular fracture patients were compared with a normal group. Concurrently, the moment applied to the proximal femur during a sideways fall was computed and compared between groups. We observed that the fractured group has greater NSA, HAL and FNW than the normal group. Furthermore, females with intracapsular fractures have a longer moment arm of the force in a sideways fall, resulting in a greater load on the femoral neck compared to the normal group.
Keywords: falls; hip fractures; hip geometry; Neck Shaft Angle; Femoral Neck Width.
FLUID STRUCTURE INTERACTION STUDY ON STRAIGHT AND UNDULATED HOLLOW FIBER HEMODIALYSER MEMBRANES
by Sangeetha M S, Kandaswamy A
Abstract: In hemodialysis therapy, the dialyser is subjected to blood flow continuously for several hours and is also reused; the stress experienced by the fibers owing to blood flow is therefore of utmost importance because it reflects the mechanical stability of the membrane. It is tedious to study the stress experienced by an individual fiber in real time; computer-aided techniques enable better insight into the load-bearing capacity of the membrane. A finite-element strategy is implemented to study the effect of flow-induced stress in the hemodialyser membrane. A 3D model of the membrane was developed in straight and undulated (crimped) fiber orientations. A fluid-structure interaction study was conducted to analyse the stress distribution due to varying blood flow. It is observed that in both fiber orientations the stress varies inversely with the blood flow rate. The effect of varying the fiber length, wall thickness and crimp frequency is also studied. From the analysis it is found that the crimped fibers experience less stress than the straight fibers. Such analysis helps predict and evaluate the performance of the hemodialyser membrane.
Keywords: finite-element strategy; hemodialyser membrane; crimping; fluid structure interaction; computer aided techniques.
A NOVEL CLASSIFICATION APPROACH TO DETECT THE PRESENCE OF FETAL CARDIAC ANOMALY FROM FETAL ELECTROCARDIOGRAM
by Anisha M, Kumar S.S, Benisha M
Abstract: Fetal cardiac anomaly interpretation from the Fetal Electrocardiogram (FECG) is a challenging task. Fetal cardiac activity can be assessed by scrutinizing the FECG because clinically crucial features are hidden in the amplitudes and waveform time durations of the FECG and in the Fetal Heart Rate (FHR). These features are vital in fetal cardiac anomaly interpretation. Hence, an attempt is made here to detect the presence of fetal cardiac anomaly using a Support Vector Machine (SVM) classifier with a polynomial kernel, based on the patterns extracted from the FHR, the frequency domain of FECG signals, fetal cardiac time intervals and FECG morphology. Performance evaluation is done on real FECG signals with different combinations of feature sets, and the obtained results are compared. The SVM showed good performance, with 92% classification accuracy when all the features are fed to the classifier. The results show that the proposed approach has great promise for early fetal cardiac anomaly detection from the FECG.
Keywords: Fetal Electrocardiogram; Fetal Heart Rate; SVM; fetal cardiac anomaly; fetal cardiac activity.
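The polynomial-kernel decision function at the heart of the classifier above can be illustrated with a kernelized perceptron, a much simpler stand-in for an SVM that uses the same decision rule f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x). The kernel degree, coef0 and toy data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def poly_kernel(x, y, degree=3, coef0=1.0):
    """Polynomial kernel (x.y + coef0)^degree."""
    return (x @ y + coef0) ** degree

def kernel_perceptron(X, y, degree=3, epochs=20):
    """Kernelized perceptron: learns dual coefficients alpha for the
    SVM-style decision function f(x) = sum_i alpha_i y_i K(x_i, x)."""
    n = len(X)
    alpha = np.zeros(n)
    K = (X @ X.T + 1.0) ** degree          # precomputed Gram matrix
    for _ in range(epochs):
        for i in range(n):
            # update alpha_i whenever sample i is misclassified
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1.0
    def predict(x):
        return int(np.sign(sum(a * yi * poly_kernel(xi, x, degree)
                               for a, yi, xi in zip(alpha, y, X))))
    return predict
```

A real SVM additionally maximizes the margin over the αᵢ, but the kernel trick and the sign-of-weighted-kernel-sum prediction are the same.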
A Multimodal Biometric Approach for the Recognition of Finger Print, Palm Print and Hand Vein using Fuzzy Vault
by R. Vinothkanna, Amitabh Wahi
Abstract: For security reasons, person identification based on physiological features has become a primary concern. A biometric person-recognition system decides who the user is. In this paper, multimodal biometrics is utilized for person identification with the help of three physiological features: fingerprint, palm print and hand vein. Initially, in the pre-processing stage, unwanted portions, noise and blur are removed from the input fingerprint, palm print and hand vein images. Then the features of these three modalities are extracted. Fingerprint features are extracted directly from the pre-processed image, while palm print and hand vein features are extracted using maximum curvature points in the image cross-sectional profile. Using chaff points and all the extracted feature points, a combined feature vector is obtained. Secret key points are then added to the combined feature vector to generate the fuzzy vault. Finally, in the recognition stage, the test person's combined vector is compared with the fuzzy vault database. If the combined vector matches the fuzzy vault, authentication is granted and the secret key is regenerated to confirm the person's identity; otherwise, authentication is denied. The corresponding fingerprint, hand vein and palm print images can then be retrieved.
Keywords: Multimodal biometric; Maximum curvature points; Cross-sectional profile; Chaff points; Fuzzy Vault.
Estimation of a point along overlapping Cervical Cell Nuclei in Pap smear image using Color Space Conversion
by Deepa T.P., A. Nagaraja Rao
Abstract: The identification of normal and abnormal cells is considered one of the most challenging tasks for a computer-assisted Pap smear analysis system. It is even more difficult when cells overlap, as abnormal cells hidden below normal cells have reduced visibility. Hence, there is a need for an algorithm that segments cells in the cluster formed by overlapping cells, which can be achieved using image processing techniques. The complexity of the problem depends on whether only the cytoplasm of two cells overlaps, only the nuclei overlap with disjoint cytoplasm, or, in some cases, both cytoplasm and nuclei overlap. Sometimes a Pap smear sample contains a mixture of cells with disjoint or overlapping cytoplasm and nuclei. The segmentation of nuclei helps to find the cell count, which is one of the important features in Pap smear analysis. A method is therefore needed that can simultaneously segment disjoint and overlapping nuclei. In the case of overlapping nuclei, accurately identifying the point of overlap is a significant step and plays an important role in segmenting the overlapped cells. This paper discusses such a method, which segments disjoint nuclei and identifies the point of intersection, called the concavity point, in clusters of cells where only the nuclei overlap.
Keywords: Papanicolaou Smear; Overlapping; Morphological and Microscopic Findings; cell nuclei.
Multiobjective Pareto optimization of a pharmaceutical product formulation using radial basis function network and nondominated sorting differential evolution
by Satyaeswari Jujjavarapu, Ch. Venkateswarlu
Abstract: Purpose: In a pharmaceutical formulation involving several composition factors and responses, optimal formulation requires the best configuration of formulation variables that satisfies the multiple, conflicting response characteristics. This work aims at developing a novel multiobjective optimization strategy by integrating an evolutionary optimization algorithm with an artificial intelligence model, and evaluates it for the optimal formulation of a pharmaceutical product. Methods: A multiobjective Pareto optimization strategy is developed by combining a radial basis function network (RBFN) with non-dominated sorting differential evolution (NSDE) and applied to the optimal formulation of a trapidil product involving conflicting response characteristics. Results: RBFN models are developed using spherical central composite design data of the trapidil formulation variables, representing the amounts of microcrystalline cellulose, hydroxypropyl methylcellulose and compression pressure, and the corresponding response characteristic data of release order and rate constant. The RBFN models are combined with NSDE and Pareto optimal solutions are generated by augmenting it with Na
Keywords: Pharmaceutical formulation; Multiple regression model; Response surface method; Radial basis function network; Differential evolution; Multiobjective optimization.
Implementation of Circular Hough transform on MRI Images for Eye Globe Volume Estimation
by Tengku Ahmad Iskandar Tengku Alang, Tian Swee Tan, Azhany Yaakub
Abstract: Eye globe volume estimation has gained attention in both the medical and biomedical engineering fields. However, most existing methods use manual analysis, which is tedious and prone to errors arising from inter- and intra-operator variability. In the present study, we estimated the volume of the eye globe in MRI images of normal eye globes using the Circular Hough transform (CHT) algorithm. To test the performance of the proposed method, 24 T1-weighted Magnetic Resonance image sets, comprising 14 males and 10 females with normal eye globe condition, were randomly selected from the database. The mean (
Keywords: Circular Hough transform (CHT); Magnetic Resonance Imaging (MRI); MRI images; eye globe detection; T1-weighted.
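The core voting step of the Circular Hough transform used above can be sketched for a single known radius: every edge pixel votes for all candidate centers one radius away, and the accumulator peak marks the detected circle center. The angular sampling density and single-radius simplification are assumptions; a full CHT sweeps a range of radii.

```python
import numpy as np

def hough_circle(edges, radius):
    """Minimal circular Hough transform for one known radius:
    each edge pixel votes for every center lying `radius` away."""
    H = np.zeros_like(edges, dtype=float)          # accumulator
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < H.shape[0]) & (cx >= 0) & (cx < H.shape[1])
        np.add.at(H, (cy[ok], cx[ok]), 1.0)        # accumulate votes
    return H
```

Applied slice by slice, the detected circle areas can then be summed across slices (times slice thickness) to estimate globe volume.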
Electroencephalogram (EEG) Signal Quality Enhancement by Total Variation Denoising Using Non-Convex Regularizer
by PADMESH TRIPATHI
Abstract: Medical practitioners have great interest in obtaining a denoised signal before analysing it. EEG is widely used in detecting several neurological diseases, such as epilepsy, narcolepsy, dementia, sleep apnea syndrome, Alzheimer's disease, insomnia, parasomnia, Creutzfeldt-Jakob disease (CJD) and schizophrenia. During EEG recording, considerable background noise and other kinds of physiological artefacts are present, so the data are contaminated. Therefore, to analyse EEG properly, it is necessary to denoise it first. Total variation denoising is expressed as an optimization problem, and its solution is obtained by using a non-convex penalty (regularizer) in the total variation formulation. In this article, the non-convex penalty is used for denoising the EEG signal. The results have been compared with wavelet methods. The signal-to-noise ratio (SNR) and root mean square error have been computed to measure the performance of the method. It has been observed that the approach works well in denoising the EEG signal and hence enhances its quality.
Keywords: Electroencephalogram; wavelet; artefact; denoising; regularizer; convex optimization; epilepsy; tumors; empirical mode decomposition; principal component analysis; total variation.
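Total variation denoising of a 1-D signal minimizes a data-fidelity term plus a penalty on successive differences. The sketch below uses the standard convex |·| penalty (smoothed near zero) and plain gradient descent; the paper's non-convex regularizer replaces that penalty, and the step size, smoothing constant and regularization weight here are illustrative assumptions.

```python
import numpy as np

def tv_denoise(y, lam=0.5, step=0.02, iters=2000, eps=1e-2):
    """TV denoising by gradient descent on
        0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|,
    with |d| smoothed as sqrt(d^2 + eps) for differentiability."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        s = d / np.sqrt(d * d + eps)   # smooth surrogate for sign(d)
        g = x - y                       # gradient of the data term
        g[:-1] -= lam * s               # gradient of the TV penalty
        g[1:] += lam * s
        x -= step * g
    return x
```

The non-convex variant (e.g. a log or arctan penalty) penalizes large jumps less, preserving sharp EEG transients better than the convex version sketched here.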
IDENTIFYING THE ANOMALY IN LV WALL MOTION USING EIGEN SPACE
by Wiselin Jiji
Abstract: In this paper, we investigate Left Ventricular (LV) wall motion abnormalities using an Eigen LV space. We employ three phases of operation to perform efficient identification of LV motion abnormalities. In the first phase, an LV border detection technique is used to detect the LV area. In the second phase, the Eigen LV spaces of six abnormalities are converged to form the search space. In the third phase, the query is projected onto this search space, which leads to matching of the closest disease. The results, evaluated using the Receiver Operating Characteristic (ROC) curve, show that the proposed architecture contributes substantially to computer-aided diagnosis. Experiments were conducted on a set of 20 abnormal and 20 normal cases. We trained with 8 normal and 8 abnormal cases and obtained an accuracy of 88.8% for the proposed work, compared with 75.81% and 79% for earlier works. Our empirical evaluation shows superior diagnostic performance compared to other recent works.
Keywords: Eigen Space; LV border detection; indexing.
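The eigen-space projection and nearest-match step described above can be sketched with PCA: vectorized LV regions are projected onto the top eigenvectors, and a query is matched to the closest projected training sample. The dimensionality, toy data and nearest-neighbor matching rule are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def build_eigen_space(X, k=2):
    """Eigen-space construction via PCA: rows of X are vectorized
    LV images; returns the mean, top-k basis, and projected data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data gives the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                       # k x d eigen-basis
    return mu, W, Xc @ W.T           # projections of the training set

def match(query, mu, W, proj, labels):
    """Project the query into the eigen space and return the label
    of the nearest training projection."""
    q = (query - mu) @ W.T
    i = int(np.argmin(np.linalg.norm(proj - q, axis=1)))
    return labels[i]
```

With one eigen space per abnormality class, the query would instead be matched to the class whose space reconstructs it with the smallest residual.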
Recent advances on Ankle Foot Orthosis for Gait Rehabilitation: A Review
by Jitumani Sarma, Nitin Sahai, Dinesh Bhatia
Abstract: Since the early 1980s, hydraulic and pneumatic devices have been used to explore methods for lower-limb orthotic devices. Over the past decades, significant progress has been made by researchers in rehabilitation robotics concerning assistive orthotic devices for the lower limb extremities. The aim of this review article is to present a detailed insight into the development of controlled Ankle Foot Orthotic (AFO) devices for enhancing the functionality of people disabled by injury to the lower limb or by neuromuscular disorders such as multiple sclerosis and spinal muscular atrophy. Different approaches to the design, actuation and control strategies of passive and active AFOs are analysed in this article in the context of gait rehabilitation. In currently available commercial ankle foot orthotic devices, overcoming the weakness and instability produced by drop foot while following a natural gait is still a challenge. This paper also focuses on the impact of active control of AFO devices, mainly to enhance the functionality of the lower limb while reducing deformities. Researchers have put huge amounts of effort into the modeling, simulation and control of such devices, mainly for gait rehabilitation with kinematic and dynamic analysis.
Keywords: Foot drop; Ankle Foot Orthosis; Gait; dorsiflexion; plantarflexion.
Computer Aided Designing and Finite Element Analysis for development of porous 3-D tissue scaffold-A review
by Nitin Sahai, Manashjit Gogoi
Abstract: Biodegradable porous tissue scaffolds play a crucial role in the development of tissues and organs. The development of biomimetic porous tissue scaffolds with accurate porosity can be achieved with the help of recent analysis techniques known collectively as Computer Aided Tissue Engineering (CATE), which encompasses Computed Tomography (CT) scanning, Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Computer Aided Design (CAD), the Finite Element Method (FEM) and other modern design and manufacturing technologies. With these, the 3-D architecture of porous tissue scaffolds can be fabricated with reproducible accuracy in pore size. The aim of this paper is to review and elaborate the various recent methods developed in Computer Aided Design, Finite Element Analysis and Solid Freeform Fabrication (SFF) for the development of porous three-dimensional tissue scaffolds.
Keywords: Biomaterials; Scaffolds; Tissue Engineering; Computer Aided Tissue Engineering; Finite Element Method.
PHASE BASED FRAME INTERPOLATION METHOD FOR VIDEOS WITH HIGH ACCURACY USING ODD FRAMES
by Amutha S, Vinsley SS
Abstract: In this work, an innovative low-complexity motion-vector processing algorithm at the decoder side is proposed for motion-compensated frame interpolation, or frame rate up-conversion. The algorithm addresses the problems of broken edges and deformed structures in frame interpolation by hierarchically refining motion vectors over different block sizes. Broken edges are handled by interpolating from the odd frames so as to obtain high-resolution images, removing the blur in images extracted from the video. Blending techniques further remove image blur and improve the quality of the images obtained from the video, yielding high-resolution results. In the proposed method the input is a video rather than the still images of the existing system; the recovered output is a set of images that undergo further processing to produce the output video. Techniques such as phase-based interpolation and multistage motion-compensated interpolation are used to obtain sharp images with reduced blur from the input videos. Experimental results show that the proposed system gives better visual quality and is also robust, even in video sequences comprising fast motion and complex scenes.
Keywords: MCFI; BMA; phase based interpolation; steerable pyramid; blending technique.
Investigation on staging of breast cancer using miR-21 as a biomarker in the serum
by Bindu SALIM, Athira M V, Kandaswamy Arumugam, Madhulika Vijayakumar
Abstract: Circulating microRNAs (miRNAs) are a novel class of stable, minimally invasive disease biomarkers that are valuable in diagnostics and therapeutics. MiR-21 is an oncogenic miRNA that regulates the expression of multiple cancer-related target genes and is highly expressed in the serum of patients suffering from breast cancer. The focus of the present study was on measuring the expression profile of the most significantly up-regulated miR-21 in the serum of breast cancer patients, using a molecular beacon probe, to evaluate its correlation with the clinical stage of cancer. MiR-21 expression was also quantitatively analysed by TaqMan real-time PCR. Ten serum samples from confirmed breast cancer patients and one healthy control sample were used for the evaluation of miR-21 gene expression. The expression levels of miR-21 were significantly higher in breast cancer serum samples than in the healthy control, with significant differences corresponding to clinical stages II, III and IV. The findings indicate that serum miR-21 could serve as a potential marker for therapeutic regimes as well as for monitoring patient status by a simple blood test.
Keywords: Breast Cancer; Biomarker; miR-21; Clinical stage; Real-time PCR.
Pose and Occlusion Invariant Face Recognition System for Video Surveillance Using Extensive Feature Set
by A. Vivek Yoganand, A. Celine Kavida, D. Rukmani Devi
Abstract: Face recognition presents a challenging problem in the field of image analysis and computer vision. Different video sequences of the same subject may contain variations in resolution, illumination, pose and facial expression. These variations contribute to the challenge of designing an effective video-based face-recognition algorithm. In the proposed method, we present a face recognition approach for video sequences with varying pose and occlusion. Initially, a shot segmentation process separates the video sequence into frames. The face region is then detected in each frame for further processing; face detection is the first stage of a face recognition system. After the face is detected, the facial features are extracted. Here, SURF features, appearance features and holo-entropy are used to characterize the uniqueness of the face image. The Active Appearance Model (AAM) is used to extract the appearance-based features of the face image. These features are used to select the optimal key frame in the video sequence, based on a supervised learning method: a Modified Artificial Neural Network (MANN) with the Bat algorithm, where the bat algorithm optimizes the neuron weights. Finally, the face image is recognized against the feature library.
Keywords: face recognition; Active appearance model; Modified Artificial Neural Network; bat algorithm.
Automatic segmentation of Nasopharyngeal carcinoma from CT images
by Bilel Daoud, Ali Khalfallah, Leila Farhat, Wafa Mnejja, Ken’ichi Morooka, Med Salim Bouhlel, Jamel Daoud
Abstract: Nasopharyngeal carcinoma (NPC), also called cavum cancer, has become a public health problem in the Maghreb countries and Southeast Asia. This cancer can be detected from computed tomography (CT) scans. In this context, we propose two approaches based on image clustering to locate the tissues affected by cavum cancer, based respectively on E-M and Otsu segmentation. Compared to the physician's manual contouring, our automatic detection shows that detection using Otsu clustering is more efficient than E-M in terms of precision, recall and F-measure. We then merged the results of these two methods using the AND and OR logical operators. The AND fusion increases precision, while the OR fusion raises recall. However, detection of the NPC using Otsu remains the best solution in terms of F-measure. Compared to previous studies that provide a surface analysis of the NPC, our approach provides a 3D estimation of the tumor, ensuring a better analysis of the patient record.
Keywords: Cavum Cancer; DICOM images; image segmentation; E-M; Otsu; recall; precision; F-measure.
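Otsu thresholding and the AND/OR mask fusion described above can be sketched directly. The 256-bin histogram is a conventional choice assumed here; the fusion step shows why AND favors precision (fewer, safer positives) and OR favors recall (more positives kept).

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(img, bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    m = np.cumsum(p * centers)        # cumulative mean
    mt = m[-1]                        # global mean
    best_t, best_var = centers[0], -1.0
    for i in range(1, 255):
        w1 = 1 - w0[i]
        if w0[i] == 0 or w1 == 0:
            continue
        var = (mt * w0[i] - m[i]) ** 2 / (w0[i] * w1)
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def fuse(mask_a, mask_b):
    """AND fusion (raises precision) and OR fusion (raises recall)."""
    return mask_a & mask_b, mask_a | mask_b
```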
Descendant Adaptive Filter to Remove Different Noises from ECG Signals
by Mangesh Ramaji Kose, Mitul Kumar Ahirwal, Rekh Ram Janghel
Abstract: Electrocardiogram (ECG) signals are electrical signals generated by the activity of the heart. ECG signals are recorded and analysed to monitor heart condition. In their initial raw form, ECG signals are contaminated with different types of noise, such as electrode motion artifact, baseline wander and muscle noise, the latter also known as electromyogram (EMG) noise. In this paper, a descendant structure consisting of adaptive filters is used to eliminate these three types of noise (i.e., motion artifact, baseline wander and muscle noise). Two adaptive filtering algorithms have been implemented: the least mean square (LMS) and recursive least square (RLS) algorithms. The performance of these filters is compared on the basis of different fidelity parameters, namely mean square error (MSE), normalized root mean squared error (NRMSE), signal-to-noise ratio (SNR), percentage root mean squared difference (PRD) and maximum error (ME).
Keywords: Adaptive Filters; ECG; Artifacts; LMS; RLS; SMA.
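One stage of the descendant structure, an LMS adaptive noise canceller, can be sketched as follows; the three noise types would be removed by cascading one such stage per noise reference. The tap count and step size are illustrative assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """LMS adaptive noise canceller. `primary` = signal + noise,
    `reference` = a signal correlated with the noise only.
    The error signal e[n] is the cleaned estimate."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # most recent sample first
        y = w @ x                                # noise estimate
        e = primary[n] - y                       # error = cleaned sample
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out
```

In a descendant (cascade) arrangement, the output of this stage would feed the next stage with a different noise reference (e.g., baseline wander, then EMG).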
Epileptic Seizure Detection in EEG Using Improved Entropy
by A. Phareson Gini, M.P. Flower Queen
Abstract: Epilepsy is a chronic disorder of the brain that affects people all around the world. It is characterized by recurrent seizures, and it is difficult to recognize when someone is having an epileptic seizure. The electroencephalogram (EEG) signal plays a significant part in the recognition of epilepsy. The EEG signal carries complex information, which is stored in EEG recording systems. It is extremely challenging to investigate recorded EEG signals, and the analysis of epileptic activity is a time-consuming procedure. In this article, we propose a novel ANN-based epileptic seizure detection method for EEG signals using an improved entropy technique. The proposed technique comprises pre-processing, feature extraction and EEG classification using an artificial neural network. In the primary phase, we sample the whole input data set. In the second phase, a fuzzy entropy algorithm is utilized to extract the features of the sampled signal. In the classification stage, we utilize an artificial neural network to recognize epileptic seizures in affected patients. Lastly, we compare the proposed technique with prevailing techniques for detecting epileptic sections. Parameters such as accuracy, specificity, FAR, sensitivity, FRR and GAR are computed, establishing the effectiveness of the proposed epilepsy seizure recognition system.
Keywords: Entropy; EEG; ANN.
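The fuzzy entropy feature mentioned above is like sample entropy, but replaces the hard tolerance threshold with a smooth exponential membership function. This is a simplified sketch under assumed parameters (template length m=2, tolerance r=0.2·std, Gaussian membership), not the paper's exact variant.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    """Fuzzy entropy: -log of the ratio of average template similarity
    at lengths m+1 and m, with an exponential (fuzzy) membership."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def phi(mm):
        # all length-mm templates, baseline-removed as in FuzzyEn
        tpl = np.array([x[i:i + mm] - x[i:i + mm].mean()
                        for i in range(len(x) - mm)])
        # Chebyshev distance between every pair of templates
        d = np.abs(tpl[:, None, :] - tpl[None, :, :]).max(axis=2)
        sim = np.exp(-(d / tol) ** 2)      # fuzzy similarity in (0, 1]
        n = len(tpl)
        return (sim.sum() - n) / (n * (n - 1))   # exclude self-matches
    return -np.log(phi(m + 1) / phi(m))
```

Irregular (seizure-like or noisy) segments yield higher fuzzy entropy than regular rhythms, which is what makes it a useful ANN input feature.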
Kurtosis Maximization for Blind Speech Separation in Hindi Speech Processing System using Swarm Intelligence and ICA
by Meena Patil, J.S. Chitode
Abstract: Blind Source Separation (BSS) methods divide mixed signals blindly, without any data on the mixing scheme. This is a major issue in the real world, whether one has to identify a specific person in a crowd or a particular speech signal is to be extracted. Moreover, BSS approaches are combined with shape as well as statistical features to validate the performance of each in pattern classification. To resolve this issue, an active BSS algorithm based on Group Search Optimization (GSO) is proposed. The kurtosis of the signals is used as the objective function, and GSO is utilized to optimize it in the suggested algorithm. Primarily, the source signals are processed via Independent Component Analysis (ICA) to generate the mixed signals, and BSS extracts the output of maximum kurtosis. Each source-signal component that is separated out is then removed from the mixtures with the help of the deflation technique. For each source signal, significant improvement in the computational cost and the quality of signal separation is attained using the proposed BSS-GSO algorithm compared with the preceding algorithms.
Keywords: Blind source separation (BSS); Speech signal; optimization; ICA; Mixing signals and Unknown signals.
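The kurtosis objective used above measures non-Gaussianity: by the central limit theorem a mixture of sources is closer to Gaussian (kurtosis near zero) than any individual source, so an un-mixing vector that maximizes |kurtosis| recovers one source. A minimal sketch of the objective itself:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: the non-Gaussianity objective maximized in
    kurtosis-based BSS. Zero for Gaussian data, negative for
    sub-Gaussian (e.g. uniform), positive for super-Gaussian (speech)."""
    x = x - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2) - 3.0
```

In the full algorithm, GSO would search over unit-norm un-mixing vectors w to maximize |kurtosis(w @ mixtures)|, deflating after each recovered source.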
Electrocardiogram compression using the Non-Linear Iterative Partial Least Squares algorithm: a comparison between adaptive and non-adaptive approach
by Pier Ricchetti, Denys Nicolosi
Abstract: Data compression reduces the amount of data to be stored and can be applied in several data-collecting processes, using either lossy or lossless compression algorithms. Owing to the large amount of data involved, compression is desirable for ECG signals. In this work, we present the well-established Non-Linear Iterative Partial Least Squares (NIPALS) method as an option for ECG compression, as recommended by Nicolosi. In addition, we compare the results of adaptive and non-adaptive versions of this method using the MIT-BIH Arrhythmia Database. To obtain a better comparison, we developed an abnormality indicator related to possible abnormalities in the waveform, together with a decision method that helps to choose between the adaptive and non-adaptive approaches. Results show that the adaptive approach outperforms the non-adaptive approach for the NIPALS compression algorithm.
Keywords: data compression; component analysis; adaptive; comparison; PCA; principal component analysis; nipals; nonlinear iterative partial least squares; ECG; electrocardiogram; compression algorithms.
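The NIPALS algorithm named above extracts principal components one at a time by alternating score/loading updates and deflation, which is what makes it attractive for compression (stop after k components and store only scores and loadings). A minimal sketch, with the component count and convergence tolerance as assumptions:

```python
import numpy as np

def nipals(X, n_comp=2, iters=100, tol=1e-8):
    """NIPALS PCA: extract components one at a time by alternating
    score (t) / loading (p) updates, deflating X after each."""
    X = X - X.mean(axis=0)
    T, P = [], []
    for _ in range(n_comp):
        t = X[:, 0].copy()               # initial score vector
        for _ in range(iters):
            p = X.T @ t / (t @ t)        # loading from current scores
            p /= np.linalg.norm(p)
            t_new = X @ p                # scores from current loading
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, p)           # deflate: remove this component
        T.append(t)
        P.append(p)
    return np.array(T).T, np.array(P).T  # scores (n x k), loadings (d x k)
```

Compression stores T and P instead of X; reconstruction is T @ P.T plus the mean, and the adaptive variant would re-fit the basis as the signal statistics drift.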
A tumour segmentation approach from flair MRI brain images using SVM and genetic algorithm
by S.U. Aswathy, G. Glan Devadhas, S.S. Kumar
Abstract: This paper puts forth the framework of a medical image analysis system for brain tumor segmentation. Image segmentation helps to segregate objects from the background, making it a powerful tool in medical image processing. This paper presents an improved segmentation algorithm rooted in the Support Vector Machine (SVM) and the Genetic Algorithm (GA). The SVM is the core technique used for the segmentation and classification of the medical images. The MRI database used consists of FLAIR images. The proposed system consists of two stages. The first stage performs pre-processing of the MRI image, followed by block division. The second stage includes feature extraction, feature selection and, finally, SVM-based training and testing. Feature extraction is done using the first-order histogram and the co-occurrence matrix, and a GA with KNN is used to select the feature subset. The performance of the proposed system is evaluated in terms of specificity, sensitivity, accuracy, elapsed time and figure of merit.
Keywords: segmentation; support vector machine; genetic algorithm; k nearest neighbors.
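The co-occurrence-matrix features feeding the SVM above can be sketched for one pixel offset; contrast and energy then follow directly from the matrix. The gray-level quantization (8 levels) and the single horizontal offset are illustrative assumptions.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy) on an
    image with values in [0, 1); returns the matrix plus two classic
    texture features used before the SVM stage."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1   # count level pairs
    M /= M.sum()
    i, j = np.indices(M.shape)
    contrast = ((i - j) ** 2 * M).sum()          # local intensity variation
    energy = (M ** 2).sum()                      # uniformity
    return M, contrast, energy
```

Tumor blocks typically show different contrast/energy than healthy tissue, which is what the GA-selected feature subset exploits.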
Low Power DNA Protein Sequence alignment using FSM State Transition controller
by Sancarapu Nagaraju, Penubolu Sudhakara Reddy
Abstract: In this paper we propose an efficient computation technique for DNA patterns on a reconfigurable hardware (FPGA) platform. The paper also presents the results of a comparative study between existing dynamic programming and heuristic methods for the widely used Smith-Waterman pairwise sequence alignment algorithm and an FSM-based core implementation. Traditional software-implementation-based sequence alignment methods cannot meet the required data rates. A hardware-based approach gives high scalability, and parallel tasks can be processed over a large number of new databases. This paper describes an FSM (Finite State Machine) based core processing element to classify protein sequences. In addition, we analyse the performance of bit-based sequence alignment algorithms and present an inner-stage pipelined FPGA (Field Programmable Gate Array) architecture for sequence alignment implementations. Synchronized controllers are used to carry out parallel sequence alignment. The complete architecture is designed for parallel processing in hardware, with FSM-based bit-wise pattern comparison, scalability and a minimum number of computations. Finally, the proposed design proves to offer high performance, and its efficiency in terms of resource utilization is demonstrated on an FPGA implementation.
Keywords: DNA; Protein Sequence; FSM; Smith-Waterman algorithm; FPGA; Low Power.
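The scoring recurrence of the Smith-Waterman algorithm that the FSM cores above implement in hardware can be sketched as a plain-Python reference version; the scoring parameters here are illustrative, not those of the paper:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment: returns the best local alignment score.
    H[i][j] is clamped at 0, which is what makes the alignment local."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("GATTACA", "GATTACA")  # perfect self-alignment
```

A software version like this is the natural golden model against which an FPGA implementation's outputs can be verified cell by cell.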
A review on multimodal medical image fusion
by BYRA REDDY G R, Dr. Prasanna Kumar H
Abstract: Medical image fusion is defined as combining two or more images from single or multiple imaging modalities such as Ultrasound, Computerized Tomography, Magnetic Resonance Imaging, Single Photon Emission Computed Tomography, Positron Emission Tomography and Mammography. Medical image fusion is used to optimize storage capacity, minimize redundancy and improve image quality. The goal of medical image fusion is to combine complementary information from multiple imaging modalities of the same scene. This review paper describes different imaging modalities, fusion methods and major application domains.
Keywords: Image fusion; Ultrasound; Mammography; Magnetic Resonance; Computed Tomography.
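The simplest pixel-level fusion rules surveyed in work of this kind can be sketched as follows. This is an illustrative Python sketch; real multimodal fusion operates on registered images and usually in a transform domain:

```python
def fuse_images(img_a, img_b, rule="max"):
    """Pixel-level fusion of two registered, same-size grayscale images."""
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        if rule == "max":   # keep the more salient (brighter) detail per pixel
            fused.append([max(p, q) for p, q in zip(row_a, row_b)])
        else:               # simple averaging rule
            fused.append([(p + q) / 2 for p, q in zip(row_a, row_b)])
    return fused
```

The "max" rule tends to preserve bright structures from either modality, while averaging suppresses noise at the cost of contrast; wavelet-domain methods apply similar rules per subband.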
Heart Sound Interference Cancellation from Lung Sound Using Dynamic Neighborhood Learning-Particle Swarm Optimizer Based Optimal Recursive Least Square Algorithm
by Mary Mekala A, Srimathi Chandrasekaran
Abstract: Cancellation of acoustic interference from lung sound recordings is a challenging task. Lung sound signals enable critical analysis of lung function, so lung-related diseases can be diagnosed from noise-free lung sound signals. An adaptive noise cancellation technique based on the Recursive Least Squares (RLS) algorithm is proposed in this paper to reduce heart sounds in lung sounds. In RLS, the forgetting factor is the major parameter that determines the performance of the filter, and finding the optimal forgetting factor for a given input is the vital step in RLS operation. An improved PSO algorithm is used to find the optimal forgetting factor for the proposed RLS algorithm. Three different normal breath sounds mixed with heart sound signals are used to test the algorithm. The results are assessed with the correlation coefficient between the original uncorrupted lung sound signal and the interference-cancelled lung signal produced by the proposed optimal filter. Power spectral density plots are also used to measure the accuracy of the proposed optimal RLS algorithm.
Keywords: Lung sound signals; Dynamic Neighborhood Learning; Recursive Least Square; Adaptive noise cancellation; Optimization; Forgetting Factor; Heart Sound Signals; Correlation Coefficient; Power Spectral Density; Bronchial Sound.
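The RLS noise canceller with its forgetting factor can be sketched compactly. In the paper the factor is tuned by the DNL-PSO optimiser; here it is a fixed assumed value `lam`, the signals are synthetic, and the whole sketch is illustrative rather than the authors' implementation:

```python
def rls_filter(desired, reference, order=4, lam=0.98, delta=100.0):
    """RLS adaptive noise canceller: subtracts the part of `desired` that is
    correlated with `reference`. `lam` is the forgetting factor (0 < lam <= 1)."""
    M = order
    w = [0.0] * M
    # P is the inverse-correlation-matrix estimate, initialised to delta * I
    P = [[delta if i == j else 0.0 for j in range(M)] for i in range(M)]
    x = [0.0] * M
    out = []
    for d, r in zip(desired, reference):
        x = [r] + x[:-1]                                  # sliding reference window
        Px = [sum(P[i][j] * x[j] for j in range(M)) for i in range(M)]
        denom = lam + sum(x[i] * Px[i] for i in range(M))
        k = [Px[i] / denom for i in range(M)]             # gain vector
        y = sum(w[i] * x[i] for i in range(M))            # interference estimate
        e = d - y                                         # cleaned sample
        w = [w[i] + k[i] * e for i in range(M)]
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(M)] for i in range(M)]
        out.append(e)
    return out
```

With `desired` as the heart-sound-corrupted lung recording and `reference` correlated with the heart sound, the output converges to the lung signal; the choice of `lam` trades tracking speed against steady-state error, which is exactly why the paper optimises it.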
Prediction of Risk Factors for Prediabetes using a Frequent Pattern based Outlier Detection
by Rajeswari A.M., Deisy C.
Abstract: Prediabetes is the forerunner stage of diabetes. Prediabetes develops into type-2 diabetes slowly, without any predominant symptoms. Hence, prediabetes has to be predicted a priori to stay healthy. The risk factors for prediabetes are abnormal in nature and are found to be present in a few negative test samples (without diabetes) of the Pima Indian Diabetes data. Conventional classifiers are not able to spot these abnormal samples among the negative samples as a separate group. Hence, we propose an algorithm, Frequent Pattern Based Outlier Detection (FPBOD), to spot such abnormal samples (outliers) as a separate group. FPBOD uses an associative classification technique with surprisingness measures such as Lift, Leverage and Dependency degree to detect outliers. Among these, the Lift measure detects the most precise outliers, correctly identifying persons who do not have diabetes but run the risk of becoming diabetic patients.
Keywords: outlier detection; prediabetes; frequent pattern based outlier detection; associative classification; surprising measure.
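The surprisingness measures FPBOD relies on can be computed directly from itemset supports. The sketch below is illustrative Python with made-up transactions, showing Lift and Leverage for a rule A -> B:

```python
def interestingness(transactions, A, B):
    """Lift and Leverage of the rule A -> B over a list of transactions (sets)."""
    n = len(transactions)
    sA = sum(A <= t for t in transactions) / n          # support of A
    sB = sum(B <= t for t in transactions) / n          # support of B
    sAB = sum((A | B) <= t for t in transactions) / n   # joint support
    lift = sAB / (sA * sB) if sA and sB else 0.0
    leverage = sAB - sA * sB
    return lift, leverage

# Hypothetical risk-factor transactions
tx = [{"a", "b"}, {"a", "b"}, {"a"}, {"b"}, {"c"}, {"c"}]
lift, lev = interestingness(tx, {"a"}, {"b"})
```

Lift above 1 (and positive leverage) means A and B co-occur more than independence predicts; negative samples that satisfy rare high-lift risk patterns are the outlier candidates the abstract describes.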
Design of Analytic Wavelet Transform with Optimal Filter Coefficients for Cancer Diagnosis Using Genomic Signals
by Deepa Grover, Sonica Sondhi, Banerji B.
Abstract: DNA sequence analysis and gene expression analysis through genomic signal processing have played an important role in cancer diagnosis in recent years. For cancer diagnosis through gene expression data, the Discrete Fourier Transform, Discrete Wavelet Transform (DWT) and IIR low-pass filters are frequently used, but they suffer from drawbacks such as long essential time-support. An analytic wavelet transform with optimal filter coefficients for cancer diagnosis using genomic signals is designed in this paper. The proposed technique consists of three modules, namely a pre-processing module, an optimization module, and a transform and cancer diagnosis module. Initially, the filter coefficients are optimised using the Group Search Optimizer (GSO). Then, the optimal coefficients and the pre-processed DNA sequence are applied to the analytic wavelet transform and, subsequently, the diagnosis of the cancer cell is made based on a threshold. DNA sequences obtained from the National Centre for Biotechnology Information (NCBI) form the database for the evaluation. The evaluation metrics employed are sensitivity, specificity and accuracy. For further analysis, a comparison is made with the base method and the analytic transform technique. From the results, we observe that the proposed technique achieves an accuracy of 91.6%, which is better than the compared techniques.
Keywords: Genomic Signal Processing (GSP); Cancer diagnosis; GSO; Analytic transform; thresholding.
A Review of Non-Invasive BCI Devices
by Veena N., Anitha N.
Abstract: BCI enables human beings to control various devices with the help of brain waves. It is particularly useful for people who are totally paralysed by neuromuscular conditions such as spinal cord injury or brain stem stroke. BCI provides a muscle-free channel for conveying user intent into action, which helps people with motor disabilities to control their surroundings. Various non-invasive technologies such as Electroencephalography (EEG), Magnetoencephalography (MEG) and functional Magnetic Resonance Imaging (fMRI) are available for capturing brain signals. In this article, various non-invasive BCI devices are analysed and the nature of the signals they capture is reported. We also explore the use of these signals for disease diagnosis, their features, and the availability of such devices in the market.
Keywords: EEG; MEG; fMRI; Non-invasive; Psychological; Physiological.
R peak Detection for Wireless ECG using DWT and Entropy of coefficients
by Tejal Dave, Utpal Pandya
Abstract: Investigation of a patient's electrocardiogram helps to diagnose various heart-related diseases. With correct R-peak detection in the ECG wave, arrhythmia classification can be carried out accurately. However, accurate R-peak detection is a big challenge, especially in wireless patient monitoring systems. In a wireless ECG system, in order to reduce power consumption, it is desirable to capture the ECG at a lower sampling rate. This paper proposes an algorithm for R-peak detection using the discrete wavelet transform, in which detail coefficients are selected based on entropy. The proposed algorithm is validated with the MIT-BIH database and its performance is compared with similar work. For the MIT-BIH case, the positive predictivity and sensitivity of the proposed algorithm are 99.85% and 99.73%, respectively. Application of the proposed algorithm to wireless ECG, acquired at adjustable sampling rates from different subjects using a prototyped Bluetooth ECG module, shows the efficacy of the algorithm in detecting the R-peak of the ECG with high accuracy.
Keywords: Electrocardiogram; Wireless Monitoring System; Entropy; Discrete Wavelet Transform.
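The idea of entropy-guided selection of DWT detail coefficients can be illustrated with a Haar-wavelet sketch on a synthetic impulse train. This is pure illustrative Python, not the paper's algorithm; a real detector would add band-limiting, adaptive thresholds and refractory rules:

```python
import math

def haar_dwt(sig):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = [(sig[i] + sig[i + 1]) / math.sqrt(2) for i in range(0, len(sig) - 1, 2)]
    d = [(sig[i] - sig[i + 1]) / math.sqrt(2) for i in range(0, len(sig) - 1, 2)]
    return a, d

def shannon_entropy(coeffs):
    """Entropy of the normalised energy distribution of a coefficient band."""
    energy = sum(c * c for c in coeffs) or 1.0
    probs = [c * c / energy for c in coeffs if c]
    return -sum(p * math.log2(p) for p in probs)

def detect_r_peaks(ecg, levels=3):
    """Pick the detail band with the lowest entropy (energy most concentrated,
    i.e. most peak-like), then threshold its magnitude to locate QRS regions."""
    approx, bands = ecg, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        bands.append(d)
    best = min(range(levels), key=lambda i: shannon_entropy(bands[i]))
    d = bands[best]
    thr = 0.5 * max(abs(c) for c in d)
    scale = 2 ** (best + 1)          # map coefficient index back to sample index
    return [i * scale for i, c in enumerate(d) if abs(c) > thr]
```

On a synthetic signal with sharp spikes, the low-entropy band concentrates the QRS energy and the thresholded coefficient indices map back to approximate R-peak sample positions.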
Muscle fatigue and performance analysis during fundamental laparoscopic surgery tasks
by Ali Keshavarz Panahi, Sohyung Cho, Michael Awad
Abstract: A limited working area and an impaired degree of freedom have led surgeons to encounter ergonomic challenges when performing minimally invasive surgery (MIS). As a result, they become vulnerable to associated risks such as muscle fatigue, potentially impacting their performance and causing critical errors in operations. The goal of this study is first to establish the extent of muscle fatigue and time-to-fatigue in vulnerable muscle groups, and then to determine whether the former has any effect on surgical performance. In the experiment, surface electromyography (sEMG) was deployed to record the muscle activations of 12 subjects (6 males and 6 females) while performing fundamentals of laparoscopic surgery (FLS) tasks for a total of 3 hours. In all, 16 muscle groups were tested. The resultant data were then reconstructed using recurrence quantification analysis (RQA) to achieve the first goal. In addition, a subjective fatigue assessment was conducted to draw comparisons with the RQA results. The subjects' performance was also investigated via an FLS task performance analysis. The results demonstrate that RQA can detect muscle fatigue in 12 muscle groups, and the same approach enabled an estimation of time-to-fatigue for those groups. The results also indicated that RQA and the subjective fatigue assessment are very closely correlated (p-value < 0.05). Although muscle fatigue was established in all 12 groups, the performance analysis results showed that the subjects' execution of their tasks improved over time.
Keywords: Minimally Invasive Surgery; Fatigue Analysis; Recurrence Quantification Analysis; FLS Task Performance Analysis; Subjective Fatigue Assessment.
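At its simplest, RQA reduces to statistics of a recurrence matrix. A minimal recurrence-rate sketch in Python (illustrative only; the study would use embedded state vectors and richer measures such as determinism):

```python
def recurrence_rate(signal, eps=0.1):
    """Recurrence Quantification Analysis, simplest measure: the fraction of
    sample pairs (i, j) whose values lie within eps of each other (RR)."""
    n = len(signal)
    recurrent = sum(1 for i in range(n) for j in range(n)
                    if abs(signal[i] - signal[j]) < eps)
    return recurrent / (n * n)
```

Tracking how RR (and related measures) drifts over successive sEMG windows is the kind of change that signals the onset of muscle fatigue.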
Automatic Diagnosis of Stomach Adenocarcinoma using Riesz Wavelet
by ANISHIYA P, Sasikala M
Abstract: Adenocarcinoma originates in the glands and causes changes in gland architecture. The detection of adenocarcinoma requires histopathological examination of tissue specimens. At present, diagnosis and grading of the cancer depend on the pathologist's visual interpretation of the biopsy samples, which may lead to a considerable amount of inter- and intra-observer variability. To overcome this drawback, reduce the reliance on human interpretation and thereby lessen the workload of pathologists, many methods have been proposed. In this paper, a novel method to quantify a tissue for the purpose of automated cancer diagnosis and grading is introduced. The stomach tissue images are preprocessed to compensate for color variations. The Riesz wavelet transform is applied to the preprocessed stomach tissue images, and 14 different statistical features are extracted from the Riesz wavelet coefficients. Wrapper-based feature selection is used, and a reliability check on the final dataset is performed using ANOVA. In diagnosis, the tissue is classified as normal (non-malignant), well differentiated, moderately differentiated or poorly differentiated tissue. The proposed system yielded a classification accuracy of 93.2% in diagnosis and 98.33% in grading.
Keywords: Stomach adenocarcinoma; histopathological image analysis; Color Normalization; Riesz wavelet transform; cancer diagnosis; Hilbert transform; Simoncelli wavelet; ANOVA; Support Vector Machine.
Generalized Warblet Transform Based Analysis of Biceps Brachii Muscles Contraction Using Surface Electromyography Signals
by Diptasree MaitraGhosh, Ramakrishnan Swaminathan
Abstract: In this work, an attempt has been made to utilize the time-frequency spectrum obtained using the Generalized Warblet Transform (GWT) for fatigue analysis. Signals are acquired from the biceps brachii muscles of twenty healthy volunteers during isometric contractions. The first and last 500 ms of each signal are taken as the nonfatigue and fatigue zones, respectively. The signals from these zones are then subjected to GWT for the computation of the time-frequency spectrum. Features such as Instantaneous Mean Frequency (IMNF), Instantaneous Median Frequency (IMDF), Instantaneous Spectral Entropy (ISPEn) and Instantaneous Spectral Skewness (ISSkw) are estimated. The results show that IMNF, IMDF and ISPEn are 24%, 34% and 36% higher, respectively, in the nonfatigue condition. In contrast, 22% higher ISSkw is observed in the fatigue condition. The statistical analysis indicates that the features are significant, with p<0.001. The current method thus appears useful for analyzing muscle fatigue disorders using sEMG signals.
Keywords: sEMG; biceps brachii; muscle fatigue; GWT.
Automated Emotion State Classification using Higher Order Spectra and Interval features of EEG
by Rashima Mahajan
Abstract: Automated analysis of electroencephalogram (EEG) signals for emotion state analysis has been progressing towards the development of affective brain-computer interfaces. However, conventional EEG signal analysis techniques such as event-related potential (ERP) and power spectrum estimation fail to provide high emotion state classification rates, owing to Fourier phase suppression, when used with distinct machine learning tools. Further, only a limited range of emotions has been explored for automated recognition using EEG, even though there is a wide variety of emotional states to describe human feelings. An attempt has been made to develop an efficient emotion classification algorithm via EEG utilising statistics of the fourth-order spectra. A four-dimensional emotional model in terms of arousal, valence, liking and dominance is proposed using emotion-specific EEG signals from the DEAP dataset. A compact set of five temporal peak/interval-related features and three trispectrum-based features has been extracted to map the feature space. Over this feature map, a multiclass support vector machine (SVM) classifier using the one-against-one algorithm is configured, yielding a maximum classification accuracy of 81.6% while classifying four emotional states. A comparison of the multiclass SVM with other classifiers, such as the feed-forward neural network (FFNN) and the radial basis function network (RBF), has been made. Significant improvement in automated emotion state classification has been achieved using the proposed compact hybrid EEG feature set and a multiclass SVM.
Keywords: Brain Computer Interface (BCI); Electroencephalogram (EEG); Emotions; Multiclass-SVM; Trispectrum; Temporal; DEAP.
Integrated neuromuscular fatigue analysis system for soldiers load carriage trial using DWT
by Arosha S. Senanayake, Dk N. Filzah Pg Damit, Owais A. Malik, Nor Jaidi Tuah
Abstract: This research work addresses the neuromuscular fatigue of soldiers during a load carriage trial using wearable sensors. Ten healthy male soldiers participated in the experiment. EMG was recorded bilaterally from selected lower extremity muscle groups, and EEG was recorded over the frontal cortex of the brain. Each subject was asked to run and/or march on a treadmill with and without load at 6.4 km h-1, with the inclination increased every 5 minutes, until volitional exhaustion was reached. Feature extraction was performed using the discrete wavelet transform on both signals. Results demonstrated significant changes in power levels at the lower and middle frequency bands for EMG in most muscles during both unloaded and loaded conditions. However, for the EEG signals, significant changes in power distribution were observed at the frontal cortex during unloaded conditions only. Furthermore, through data visualisation, fatigue was detected at the muscle level first, before signals were relayed to the brain for the decision to stop the exercise.
Keywords: neuromuscular fatigue; EMG; EEG; load carriage.
The Effects of Implant Design, Bone Quality, and Insertion Process on Primary Stability of Dental Implants: An In-vitro Study
by Mansour Rismanchian, Mojtaba Afshari
Abstract: The aim of this study was to analyse the maximum insertion torque (MIT) with respect to the implant thread face angle, bone quality and the insertion process. Two implants were designed and made: one with a High Face Angle (HFA) and the other with a Low Face Angle (LFA). Bovine bones were classified, and then the MIT values of the implants were measured per round using a standard protocol. Finally, three-way ANOVA was used to interpret the MIT values. For the main effects, the MIT values (mean and standard error) of the LFA (17.4±0.7 N cm) and HFA implants (20.8±0.7 N cm) were significantly different, as were the MIT values for the D4 (9.4±0.6 N cm), D3 (19.1±0.7 N cm) and D2 (28.8±1.1 N cm) bones. Considering higher primary stability as an essential provision for immediately loaded implants, the use of implants with higher thread face angles is recommended.
Keywords: Implant Design; Maximum Insertion Torque; Primary Stability; In-vitro.
DIABETIC RETINOPATHY DETECTION USING LOCAL TERNARY PATTERN
by Anitha Anbazhagan, Uma Maheswari
Abstract: There is a need for an intelligent way of detecting Diabetic Retinopathy (DR), which has been spreading widely in developing countries in recent years. The worst aspect of diabetic retinopathy is that it is initially asymptomatic, but if untreated it can lead to blindness. In this paper, an automated technique for early screening of fundus images is proposed. This work focuses on analyzing the texture of the fundus image to classify it as normal or diabetic retinopathy (DR). For texture analysis, the Local Ternary Pattern (LTP) is applied and compared with the Local Binary Pattern (LBP). As LBP is more sensitive to noise and illumination variation, LTP is employed and its discriminative power is explored. LTP is obtained for all three color components, Red (R), Green (G) and Blue (B), for different radii R = 1, 2, 3 and 5, considering 8 neighborhood pixels. The importance of the RGB components over the Green channel component alone is also analyzed. The histogram of LTP and the variance provide a statistical feature set, which is given to the KNN and Random Forest classifiers. A 10-fold cross-validation approach is used with both classifiers. The Random Forest classifier provides a sensitivity and specificity of 100%; the average sensitivity and specificity achieved are nearly 91%. DR is detected by analyzing the retinal background without segmenting the lesions, which implies that the proposed algorithm is very fast and can be used as a screening test for retinal abnormality detection.
Keywords: Local Ternary Pattern; Local Binary Pattern; Diabetic Retinopathy; Random Forest; Computer Aided Diagnosis; Fundus images; KNN.
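The Local Ternary Pattern at radius 1 with 8 neighbours can be sketched directly. This is an illustrative pure-Python version (not the paper's code); each neighbour is coded +1/0/-1 against a tolerance band around the centre, and the ternary pattern splits into an "upper" and a "lower" binary code:

```python
def ltp_codes(img, r, c, t=5):
    """Local Ternary Pattern at pixel (r, c) with radius-1 neighbourhood.
    Returns the upper and lower LBP-style codes of the ternary pattern."""
    center = img[r][c]
    # 8 neighbours, clockwise from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for bit, (dr, dc) in enumerate(offs):
        p = img[r + dr][c + dc]
        if p >= center + t:        # ternary +1 -> bit in the upper pattern
            upper |= 1 << bit
        elif p <= center - t:      # ternary -1 -> bit in the lower pattern
            lower |= 1 << bit
        # otherwise ternary 0: within the tolerance band, no bit set
    return upper, lower
```

Histograms of the upper and lower codes over the whole fundus image (per colour channel and radius) would then form the statistical feature set fed to KNN or Random Forest. The tolerance `t` is what makes LTP less noise-sensitive than LBP.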
Enhancement of Low Quality Blood Smear Image using Contrast and Edge Corrections
by Umi Salamah, Riyanarto Sarno, Agus Zainal Arifin, Anto Satriyo Nugroho, Ismail Eko Prayitno Rozi, Puji Budi Setia Asih
Abstract: A blood smear is one of the medical images widely used for disease diagnosis. Low-quality smears produced by poor microscope specifications complicate the reading of features: such images are blurred, with diminished true object color, unclear boundaries, and low contrast between object and background. In this study, we propose an image enhancement technique to improve the readability of features in low-quality blood smear images. The proposed method consists of contrast and edge corrections. Contrast correction integrates global and local contrast correction, while edge correction uses Unsharp Masking Filtering to improve object edges. Experiments are performed on images of three diseases. The results show that the proposed method achieves the best entropy and competitive MSE and PSNR, so it can produce images containing more information than the other methods while remaining effective.
Keywords: enhancement; low quality; blood smear; contrast; edge; correction; local; global; unclear boundary; readability feature.
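The unsharp-masking edge correction follows the standard rule sharpened = original + amount x (original - blurred). An illustrative Python sketch with a 3x3 mean blur (the paper's exact kernel and parameters are not specified here, so these are assumed values):

```python
def unsharp_mask(img, amount=1.0):
    """Unsharp masking on a 2D grayscale image (list of rows):
    sharpened = original + amount * (original - blurred)."""
    h, w = len(img), len(img[0])

    def blur(r, c):
        # 3x3 mean filter with edge clamping at the image border
        vals = [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        return sum(vals) / 9.0

    return [[img[r][c] + amount * (img[r][c] - blur(r, c))
             for c in range(w)] for r in range(h)]
```

Flat regions pass through unchanged, while intensity steps, such as cell boundaries in a smear, are overshot on both sides, which is what makes edges visually crisper.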
An Adaptive Weighted Denoising Filter Framework for Impulse and Shot Noise: A Mammogram Image Application
by Ramya A, Murugan D, Manish T. I, Ganesh Kumar T
Abstract: Breast screening equipment has not yet become an effective diagnostic method for the detection of abnormalities. This is mainly due to physical interference from the screening system: these problems usually produce noise due to electric charge and to photon counting in optical screening machines. This work focuses on denoising the mammogram breast image to a great extent with a two-step process. In the first phase, the mammogram image dataset is fed to a neural network that uses different feature sets to separate corrupted images from non-noisy ones. Then a weighted average of four different filters, the Geometric Mean filter (GM), Decision Based Median filter (DBM), Directional Weighted Median filter (DWM) and Frost filter, is applied to impulse and shot noise pixels to preserve corrupted edges and to avoid smoothing out details. Additionally, this combined filter is subjected to an exponential transformation to yield the enhanced output. The proposed filter is applied to these two noise types and compared with existing filters with respect to quality factors such as Peak Signal-to-Noise Ratio (PSNR) and Mean Absolute Error (MAE). The results show that the proposed method is more promising than the existing filters for mammogram images.
Keywords: Image Denoise; Neural Network; Impulse noise; Shot noise; Adaptive Weighted filter; Mammogram Image.
Modelling and Simulation Analysis of Porous Polymeric Scaffold for the Replacement of Bruchs Membrane as a Therapy for Age-Related Macular Degeneration
by Susan Immanuel, Aswin Bob Ignatius, Alagappan Muthupalaniappan
Abstract: Age-Related Macular Degeneration [AMD] is a common disease that is prevalent among people of age 50. It is characterized by the loss of Retinal Pigment Epithelial cells [RPE] from the macula of the eye due to the deposition of drusen in the Bruch's membrane that supports the RPE cells. A treatment for AMD is to replace the Bruch's membrane with scaffolds that provide a conducive environment for the RPE cells to adhere and proliferate. In this study, a scaffold made of porous Poly(caprolactone) [PCL] was designed using the tool COMSOL Multiphysics, and its properties, such as structural integrity and fluid flow, were analysed using Brinkman's equation. The model has a square geometry with a dimension of 100 x 100 x 4
Keywords: Age Related Macular Degeneration; Retinal Pigment Epithelial Cells; Bruch’s membrane; PCL scaffold; Brinkman’s equation.
Continuous User Authentication Using Multimodal Biometric Traits with Optimal Feature Level Fusion
by Prakash A., Krishnaveni R., Dhanalakshmi R.
Abstract: Biometrics establishes the authenticity of an individual on the basis of his/her physiological or behavioural characteristics. For higher security, the combination of two or more biometric modalities (multimodal biometrics) is required. Multimodal biometric technology provides potential solutions for continuous user-to-device authentication in high-security settings. This paper proposes a Continuous Authentication (CA) process using multimodal biometric traits, subjecting fingerprint and iris images to various feature extraction processes. The extracted features then enter an optimal Feature Level Fusion (FLF) process, and the final feature vector is obtained by concatenating Directional Information and Centre Area Features. For optimal feature selection, a Fruit Fly Optimization (FFO) inspired model is employed, and the fused features are passed to the authentication procedure to compute matching scores (Euclidean distance) against impostor and genuine users. The approach achieves higher accuracy, sensitivity and specificity than existing work, with better FPR and FRR values for the authentication process. The results, obtained in MATLAB, show 92.23% accuracy for the proposed model compared to GA and PSO.
Keywords: biometrics; authentication; feature vectors; optimization; Feature level fusion; fingerprint; iris.
A comparison of foot kinematics between pregnant and non-pregnant women using the Oxford Foot Model during walking
by Minjun Liang, Yang Song, Wenlan Lian
Abstract: This study investigated the differences in foot kinematics during walking between pregnant women (PW) and non-pregnant women (NW) using the Oxford Foot Model (OFM). The results contribute to the understanding of the foot biomechanics of pregnant women and may provide suggestions for customised footwear design. 20 women in the last trimester of pregnancy and 23 non-pregnant women were involved in this study. Three-dimensional motion of the forefoot, hindfoot and tibia during walking was recorded by a Vicon motion analysis system, and two force platforms were used to record the ground reaction force. Compared with NW, PW demonstrated greater plantarflexion and internal rotation of the hindfoot and internal tibial rotation during initial contact, and greater forefoot eversion and hindfoot external rotation during push off. Moreover, PW showed greater external tibial rotation than NW during toe off, and the center of pressure (COP) trajectory moved to the 2nd to 3rd metatarsals at this stage. It appears that the altered foot kinematics of pregnant women may contribute to the redistribution of joint loads and the maintenance of stability at a comfortable walking pace. In addition, the falling risk for pregnant women tends to be higher at the initial contact phase, and overuse injury may arise at the 2nd to 3rd metatarsals when walking for a long time.
Keywords: pregnant women; non-pregnant women; musculoskeletal; center of gravity; center of pressure.
CHARACTERIZATION OF FREQUENCY MODULATED WAVE OF NIR PHOTONS TRANSPORT IN HUMAN LOWER FOREARM PHANTOM
by Ebraheem Sultan, Nizar Alkhateeb
Abstract: The free-space broadband frequency-modulated near infrared (NIR) photon technique has been used to model and characterize the optical behaviour of a lower forearm human phantom. The NIR system measures the broadband (30 MHz-1000 MHz) insertion loss (IL) and insertion phase (IP) of the modulated light. This helps to characterize and model the different penetration depths and path-related modulated photon movement in the human phantom. A phantom that resembles the human lower forearm (three layers) is used, along with the experimental modules and Finite Element (FE) tools, to perform the characterization and modelling. The study is divided into two stages. The first stage is dedicated to performing IL and IP measurements over 30 MHz-1000 MHz with a back-scattering measurement method to characterize the behaviour of the modulated photons. The second stage is dedicated to modelling the modulated broadband photon behaviour using a Finite Element simulator called COMSOL. Results from both stages are analysed using a signal processing method. This helps to identify the frequency-modulated photon stamp associated with the different layers and verifies the accuracy of the 3D FE modelling. The discrepancy between experimental and 3D FE modelling results, computed at different frequencies, is shown to be less than 6%, and the results give an understanding of how modulated waves of photons behave when travelling in the human forearm. These results can be used to further investigate the functionality surrounding any biological activity around the human forearm.
Keywords: NIR Spectroscopy; COMSOL; 3D FE; Modulated wave of photons; human lower forearm photon transport; optical transmitter; VCSEL; optical receiver; APD; IP and IL.
Complex Diffusion Regularization based Low dose CT Image Reconstruction
by Kavkirat Kaur, Shailendra Tiwari
Abstract: Computed Tomography (CT) is considered a significant imaging tool for clinical diagnosis. Due to the low-dose radiation in CT, the projection data is highly affected by Gaussian noise. Thus, there is a demand for a framework that can eliminate the noise and provide high-quality images. This paper presents a new statistical image reconstruction algorithm with a suitable regularization method. The proposed framework is the combination of two basic terms, namely data fidelity and regularization. Maximizing the log-likelihood gives the data fidelity term, which represents the distribution of noise in low-dose X-ray CT images; the maximum likelihood expectation maximization (MLEM) algorithm is introduced as this data-fidelity term. The ill-posedness of the data fidelity term is overcome with the help of a complex diffusion filter, introduced as a regularization term into the proposed framework, which minimizes the noise without blurring edges and preserves fine structural information in the reconstructed image. The proposed model has been evaluated on both simulated and real standard thorax phantoms. The final results are compared with other standard methods, and the proposed model is found to have many desirable properties, such as better noise robustness, lower computational cost and an enhanced denoising effect.
Keywords: Computed Tomography (CT); Noise Reduction; Maximum likelihood expectation maximization (MLEM) algorithm; Complex Diffusion (CD); Gaussian noise.
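The MLEM data-fidelity update used as the starting point above has a closed multiplicative form: each pixel is scaled by the back-projected ratio of measured to predicted counts. A toy Python sketch on a tiny hypothetical system matrix (illustrative only; the paper couples this with the complex-diffusion regulariser, which is omitted here):

```python
def mlem(A, y, iters=50):
    """Maximum-Likelihood Expectation-Maximization reconstruction.
    A is the system matrix (projections x pixels), y the measured counts."""
    n_proj, n_pix = len(A), len(A[0])
    x = [1.0] * n_pix                   # strictly positive initial estimate
    # Sensitivity: column sums of A, the normalisation in the update
    sens = [sum(A[i][j] for i in range(n_proj)) for j in range(n_pix)]
    for _ in range(iters):
        # Forward projection of the current estimate
        fwd = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_proj)]
        ratio = [y[i] / fwd[i] if fwd[i] else 0.0 for i in range(n_proj)]
        # Multiplicative update: back-project the measured/predicted ratio
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(n_proj))
             if sens[j] else x[j] for j in range(n_pix)]
    return x
```

For consistent, noise-free data the iterates converge to the true image; with noisy low-dose data the iterates start fitting noise, which is exactly why a regularization term is needed.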
Active contours for overlapping cervical cell segmentation
by Flavio Araujo, Romuere Silva, Fatima Medeiros, Jeova Farias, Paulo Calaes, Andrea Bianchi, Daniela Ushizima
Abstract: The segmentation of the nuclei and cytoplasm of cervical cells is a well-studied problem. However, current segmentation algorithms are not robust enough for clinical practice, due to their high computational cost or because they cannot accurately segment highly overlapping cells. In this paper, we propose a method that is capable of segmenting both the cytoplasm and the nucleus of each individual cell in a clump of overlapping cells. The proposed method consists of three steps: a) cellular mass segmentation; b) nucleus segmentation; and c) cytoplasm identification based on an active contour method. We carried out experiments on both synthetic and real cell images. The performance evaluation showed that the proposed method was less sensitive to increases in the number of cells per image and in the overlapping ratio than two other existing algorithms. It also achieved a promisingly low processing time and, hence, has the potential to support expert systems for cervical cell recognition.
Keywords: ABSnake; Active Contour; Cervical Cells; Medical Image; Nuclei and Cytoplasm Segmentation; Overlapping Cells; Pap Test.
Quantitative Analysis of Paraspinal Muscle Strain during Cervical Traction Using Wireless EMG Sensor
by Hemlata Shakya, Shiru Sharma
Abstract: The aim of this study is to assess the efficacy of cervical traction on the basis of fatigue analysis using a wireless EMG sensor. Neck pain is a common health complaint, leading to paraspinal muscle spasm and radiculopathy. Patients with neck pain complaints visited the therapy unit regularly for cervical traction treatment. This case study includes EMG recordings of twelve neck pain patients acquired with a wireless EMG sensor. The patients were divided into two groups: those with radiculopathy and paraspinal muscle spasm, and those without radiculopathy but with paraspinal muscle spasm. The subjects were treated with 15 minutes of cervical traction with a 7 kg strain. Various time-domain and frequency-domain features were extracted from the acquired EMG data to assess muscle fatigue during cervical traction treatment in the sitting position. Analysis of the various parameters indicated significant differences in paraspinal muscle activity. The results indicate the effectiveness of continuous traction treatment in the reduction of neck pain.
Keywords: Electromyography; Neck pain; Muscle Fatigue; Feature extraction.
Medical Video based Cryptography Model to Improve the Security and Transmission Time
by Edwin Dayanand, R.K. Selva Kumar, N.R. Ram Mohan
Abstract: This paper proposes a medical video based visual cryptography model using Graph-based Coherence Shape Lagrange Interpolation (G-CSLI). The G-CSLI model reduces the noise in medical video while generating shares and minimizes the transmission time by reducing the computational complexity of secret sharing. First, a probabilistic polynomial-time model is used with the objective of minimizing the computational complexity of secret sharing; here, similarity is measured based on luminance and structure. Finally, a Lagrange interpolation scheme is applied with the objectives of minimizing the transmission time and improving security. Overall, the proposed model minimizes the computational complexity of secret sharing, and an experimental evaluation is performed on factors such as security, transmission time and noise in medical video based visual cryptography. Experimentation shows that the proposed model reduces the computational complexity of secret sharing by 13.11% and the transmission time by 39.03% compared with state-of-the-art algorithms.
Keywords: Chaotic Oscillations; Probabilistic Model; Visual Cryptography; Probabilistic Polynomial Coherence Shape; Lagrange Interpolation.
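The Lagrange-interpolation secret sharing named in the abstract can be illustrated with a minimal Shamir-style sketch over pixel values; this is not the authors' G-CSLI model, and the prime, pixel value and share counts below are illustrative assumptions:

```python
import random

PRIME = 257  # smallest prime above the 8-bit pixel range

def make_shares(secret, k, n):
    """Split one pixel value into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(173, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares -> 173
```

In a video setting this per-pixel split would be applied frame by frame, which is where the transmission-time cost the paper targets comes from.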
Power Line Interference Removal from Electrocardiogram Signal using Multi-Order Adaptive LMS Filtering
by Surekha Ks, B.P. Patil
Abstract: Electrocardiogram (ECG) signals are susceptible to noise and interference from the external world. This paper presents the reduction of unwanted 50 Hz power line interference in the ECG signal using multi-order adaptive LMS filtering. The novelty of the present method is the actual hardware implementation of power line interference removal: the adaptive filter is designed both as a SIMULINK-based model and as a hardware design on FPGA. The performance measures used are SNR, PSNR, MSE and RMSE. The proposed method achieves a better Signal to Noise Ratio (SNR) through careful selection of the filter order in hardware.
Keywords: Adaptive filter; ECG; LMS Filter; Multi-order; Power line interference; FPGA; SIMULINK model; SNR; PSNR.
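As a software sketch of the LMS principle used here (not the paper's SIMULINK/FPGA design; the sampling rate, step size and test signal are illustrative assumptions), a two-weight adaptive canceller with a quadrature 50 Hz reference looks like:

```python
import math

FS = 500.0   # sampling rate in Hz (assumed)
F0 = 50.0    # power-line frequency to cancel
MU = 0.01    # LMS step size (assumed)

def lms_notch(signal, fs=FS, f0=F0, mu=MU):
    """Adaptive 50 Hz canceller: two weights track the amplitude and phase
    of a quadrature reference; the error output is the cleaned signal."""
    w = [0.0, 0.0]
    out = []
    for n, d in enumerate(signal):
        x = (math.sin(2 * math.pi * f0 * n / fs),
             math.cos(2 * math.pi * f0 * n / fs))
        y = w[0] * x[0] + w[1] * x[1]   # estimated interference
        e = d - y                       # error = denoised sample
        w[0] += 2 * mu * e * x[0]       # LMS weight update
        w[1] += 2 * mu * e * x[1]
        out.append(e)
    return out

# toy test: a pure 50 Hz "hum" should be driven towards zero
hum = [0.5 * math.sin(2 * math.pi * 50 * n / FS + 0.3) for n in range(2000)]
clean = lms_notch(hum)
print(max(abs(v) for v in clean[-200:]))  # residual after convergence
```

A higher-order variant would simply carry more reference taps and weights, which is the "multi-order" trade-off the abstract refers to.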
Brain Tumor Detection and Classification using Hybrid Neural Network Classifier
by Krishnamurthy Nayak, Supreetha B. S, Phillip Benachour, Vijayashree Nayak
Abstract: Brain tumor is one of the most harmful diseases and affects a large number of people, including children, worldwide. The probability of survival can be enhanced if the tumor is detected at an early stage. Moreover, the process of manually generating precise segmentations of brain tumors from magnetic resonance images (MRI) is time-consuming and error-prone. Hence, in this paper, an effective technique is employed to segment and classify tumor-affected MRI images. Here, the segmentation is performed with an Adaptive Watershed Segmentation algorithm. After segmentation, the tumor images are classified by means of a hybrid ANN classifier, which employs the Cuckoo Search Optimization technique to update the interconnection weights. The proposed methodology is implemented in MATLAB and the results are analyzed against existing techniques.
Keywords: Neural Network; Medical Image processing; magnetic resonance images.
Multi Features Based Approach for White Blood Cells Segmentation and Classification in Peripheral Blood and Bone Marrow Images
by Benomar Mohammed Lamine, Chikh Mohammed Amine, Descombes Xavier, Benazzouz Mourtada
Abstract: In this paper, we propose a complete automated framework for white blood cell differential counts in peripheral blood and bone marrow images, in order to reduce the analysis time and increase the accuracy of diagnosis for several blood disorders. A new color transformation is first proposed to highlight the white blood cell regions; then a marker-controlled watershed algorithm is used to segment the region of interest. The nucleus and cytoplasm are subsequently separated. In the identification step, a set of color, texture and morphological features is extracted from both the nucleus and cytoplasm regions. Next, the performance of a random forest classifier is evaluated on a set of microscopic images. The obtained results reveal high recognition accuracies for both the segmentation and classification stages.
Keywords: white blood cells; cells segmentation; cells classification; color transformation; texture features; morphological features; peripheral blood images; bone marrow images.
Review on Next Generation Wireless Power Transmission Technology for Implantable Biomedical Devices
by Saranya N., Kesavamurthy T.
Abstract: Wireless Power Transmission (WPT) is a promising technology that is bringing drastic changes to the biomedical field, especially for implantable medical devices such as pacemakers, cardiac defibrillators and cochlear implants. Traditional implantable biomedical devices get their power supply from batteries or from lead wires passing through the skin, which not only increases the burden on the patient but also increases the pain and risk of surgery. To reduce the cost of biomedical devices, the risk of wire snapping, and the periodic surgery required to replace batteries, wireless delivery of energy to these devices is desirable. WPT is capable of addressing these limitations of implantable devices: it negates the risk of infection due to cables passing through the skin, removes the need for recurrent surgeries to replace batteries, and minimizes the size of the device by excising bulky components such as batteries. This paper provides an overview of the history of wireless power transmission, the basic principle of WPT, and recent research and developments in implantable biomedical applications.
Keywords: Wireless Power Transmission (WPT); Implantable Medical Devices (IMD); inductive coupling; resonance coupling; rectenna; Implantable Cardioverter Defibrillator (ICD); EEG recorder.
The Development of a Brain Controlled Interface Employing Electroencephalography to Control a Hand Prosthesis
by Mahonri Owen, Chikit Au
Abstract: The aim of this report is to test the feasibility of using electroencephalography (EEG) to control a prosthetic hand employing an adaptive grasp. The purpose of this report is to support work relating to the project 'The Development of a Brain Controlled Interface for Trans-radial Amputees'. This work is based on the idea that an EEG-based control scheme for prosthetics is credible in the current technological climate. The presented experiments investigate the efficiency and usability of the control scheme by testing response times and grasp accuracy. Response times are determined by user training and control method. The grasp accuracy relies on the effectiveness of the support vector machine used in the control scheme. The outcome of the research is promising and has the potential to provide amputees with an intuitive and easy-to-use method to control a prosthetic device.
Keywords: Electroencephalography; Prosthetic Control; Neural Prosthetics.
A Near Lossless Three-Dimensional Medical Image Compression Technique using 3D-Discrete Wavelet Transform
by Boopathiraja S, Kalavathi Palanisamy
Abstract: In the field of medical image processing, several imaging modalities have emerged, and with their evolutionary growth we obtain high-quality images that demand large amounts of storage space as well as bandwidth for transmission. Therefore, there is always a great need to develop compression techniques for such images. Moreover, these modalities produce 3-dimensional images, which can either be divided into slices and processed as 2D images or handled directly as image volumes. Hence, medical image processing operations such as segmentation and feature extraction increasingly need to work in 3D space (on the 3-dimensional image), which is essential and has broad future scope. In this paper, we provide a wavelet-based image compression technique that can be applied directly to the 3D images themselves. We apply the 3D DWT to a 3D volume, and the resultant coefficients are passed to further compression stages (thresholding and entropy encoding). The inverse processes are performed during decompression to reconstruct the images. The effects of different wavelets on the results are assessed, and the results of the lossy mode are compared with those of the lossless mode.
Keywords: 3D Medical Image; 3D-DWT; Huffman coding; Near lossless Compression.
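The transform-threshold-encode pipeline described above can be sketched with an unnormalized Haar DWT, shown in 1D for brevity; in the 3D case the same step is applied separably along each axis, and Huffman coding of the quantized coefficients would follow. All values here are illustrative, not from the paper:

```python
def haar1d(v):
    """One level of the (unnormalized) Haar transform: averages + differences."""
    a = [(v[2*i] + v[2*i + 1]) / 2 for i in range(len(v) // 2)]
    d = [(v[2*i] - v[2*i + 1]) / 2 for i in range(len(v) // 2)]
    return a, d

def compress(v, thresh):
    """Near-lossless step: zero out detail coefficients below the threshold."""
    a, d = haar1d(v)
    return a, [0.0 if abs(x) < thresh else x for x in d]

def decompress(a, d):
    """Inverse Haar: x = a + d, y = a - d."""
    out = []
    for ai, di in zip(a, d):
        out.extend([ai + di, ai - di])
    return out

row = [100, 102, 150, 156, 90, 91, 200, 210]   # one row of one slice
a, d = compress(row, thresh=1.5)
recon = decompress(a, d)
err = max(abs(x - y) for x, y in zip(row, recon))  # bounded by the threshold
```

The zeroed details are what the entropy coder later exploits; raising the threshold trades reconstruction error for compression ratio, which is the lossy-versus-near-lossless comparison the abstract reports.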
Feature extraction and Classification of ECG Signals with Support Vector Machines and Particle Swarm Optimization
by Gandham Sreedevi, Bhuma Anhuradha
Abstract: The present work presents a thorough experimental study that shows the superiority of the generalization capability of the support vector machine (SVM) approach in the classification of electrocardiogram (ECG) signals. Feature extraction was performed using principal component analysis (PCA). Further, a novel classification system based on particle swarm optimization (PSO) was used to improve the generalization performance of the SVM classifier. For this purpose, we optimized the SVM classifier design by searching for the best values of the parameters that tune its discriminant function, and upstream by looking for the best subset of features to feed the classifier. The obtained results clearly confirm the superiority of the SVM approach compared with traditional classifiers, and suggest that further substantial improvements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system.
Keywords: ECG; PCA; PSO; SVM; Arrhythmias; Classification.
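The PSO-driven parameter search can be sketched as follows; the objective below is a toy stand-in for the SVM cross-validation error over (C, gamma), and the swarm constants are illustrative assumptions rather than the authors' settings:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize objective over box bounds with a standard PSO swarm."""
    random.seed(0)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate back into its bound
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# stand-in objective: pretend the CV error is minimized at (C, gamma) = (10, 0.1)
err = lambda p: (p[0] - 10) ** 2 + (p[1] - 0.1) ** 2
best, best_f = pso(err, [(0.1, 100.0), (1e-4, 1.0)])
```

In the actual system, `objective` would train an SVM with the candidate (C, gamma) pair and return the cross-validation error; the same loop can also score binary feature-subset masks.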
An efficient and optimized system for detection of cancerous cells in tongue
by Mahnoor Rasheed, Ishtiaq Ahmad, Sumbal Zahoor, Muhammad Nasir Khan
Abstract: Major progress in image processing allows us to make large-scale use of medical imaging data to provide better detection and treatment of several diseases. Cancer is one such disease, causing around 1.7 million deaths every year; advanced and precise detection of cancer can prevent severe complications. Tongue cancer is relatively rare but has drawn the attention of medical research groups in recent times. In this research work, an efficient tongue cancer detection system is proposed that is carried out in two phases. Initially, advanced filtering techniques are used to remove the noise content of microscopic images of tissue cultures from the body of the subject to be diagnosed. The image information is enhanced in a pre-processing step that suppresses undesirable distortion and enhances image features for easier and faster assessment. In the next phase, the image is segmented in a manner that extracts the significant material and characteristics of the image. The detection of the affected part is performed using Otsu thresholding, k-means clustering and marker-controlled watershed segmentation techniques. The performance and limitations of these schemes are compared and discussed. The simulation results show that the marker-controlled watershed offers the best segmentation and detection.
Keywords: tongue cancer detection; k-means clustering; marker controlled watershed segmentation; Otsu thresholding.
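Of the three schemes compared, Otsu thresholding is the simplest to sketch; a minimal histogram-based version follows, where the toy pixel values are assumptions for illustration, not data from the paper:

```python
def otsu_threshold(pixels, levels=256):
    """Exhaustively pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                      # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0                    # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2     # between-class variance (unnormalized)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# bimodal toy "image": dark background (~30) and bright affected tissue (~200)
img = [30] * 500 + [35] * 300 + [198] * 150 + [205] * 50
t = otsu_threshold(img)
```

k-means with two clusters on intensities finds essentially the same split, while the marker-controlled watershed additionally uses spatial structure, which is why it wins in the authors' comparison.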
Classification and Quantitative Analysis of Histopathological Images of Breast Cancer
by Anuranjeeta Anuranjeeta, Romel Bhattracharjee, Shiru Sharma, K.K. Shukla
Abstract: This paper provides a robust and reliable computational technique for the classification of benign and malignant cells using morphological features extracted from histopathological images of breast cancer through image processing. Morphological features of malignant cells show changed patterns compared with those of benign cells. However, manual analysis is time-consuming and varies with the perception level of the expert pathologist. To assist pathologists in this analysis, morphological features are extracted and two datasets are prepared, from group-cell and single-cell images, for the benign and malignant categories. Finally, classification is performed using supervised classifiers. Morphological feature analysis is always considered an important tool for analyzing abnormality in the cellular organization of cells. In the present investigation, three classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbour (k-NN) and Support Vector Machine (SVM), are trained using publicly available breast cancer datasets. The performance indicators for benign and malignant images were calculated from the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) counts. Using the numbers of samples that fall into these classes, the performance parameters accuracy, sensitivity, specificity, balanced classification rate (BCR), F-measure and Matthews' correlation coefficient (MCC) are calculated. The statistical measure of sensitivity against specificity has been obtained by calculating the area under the Receiver Operating Characteristic (ROC) curve. It is found that the classification accuracy achieved with the single-cell dataset is better than with the group-cell dataset. Furthermore, it is established that ANN provides better results for both datasets than the other two classifiers (k-NN and SVM). The proposed computer-aided diagnosis system for the classification of benign and malignant cells provides better accuracy than other existing methods.
Keywords: Segmentation; Cancer; Morphological features; Histopathology; Classification.
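The indicators listed above follow directly from the confusion-matrix counts; a small helper makes the definitions concrete (the counts in the example are made up for illustration):

```python
import math

def metrics(tp, tn, fp, fn):
    """Standard binary-classification indicators from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)            # sensitivity / recall / true-positive rate
    spec = tn / (tn + fp)            # specificity / true-negative rate
    prec = tp / (tp + fp)            # precision
    f1   = 2 * prec * sens / (prec + sens)   # F-measure
    bcr  = (sens + spec) / 2                 # balanced classification rate
    mcc  = ((tp * tn - fp * fn) /
            math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                f_measure=f1, bcr=bcr, mcc=mcc)

m = metrics(tp=45, tn=40, fp=5, fn=10)
```

MCC is the most informative single number here because, unlike accuracy, it stays near zero when a classifier merely predicts the majority class.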
Semiautomated detection of aortic root in human heart MSCT images using nonlinear filtering and unsupervised clustering
by Antonio Bravo
Abstract: In this research, a semiautomatic technique to detect the aortic root in three-dimensional (3-D) cardiac images is proposed. This technique consists of three steps: conditioning, filtering, and detection. The cardiac images are acquired with 64-slice multislice computerized tomography (MSCT) technology. The conditioning step is based on multiplanar reconstruction (MPR) and is required in order to reformat the cardiac volume information to planes orthogonal to the aortic root. During the filtering step, a set of nonlinear filtering techniques based on similarity enhancement, median and weighted median filters is considered to reduce noise and enhance the cardiac edges in the reformatted images. In the detection step, the filtered volumes are subsequently processed with an unsupervised clustering technique based on simple-linkage region growing. The Dice score, the symmetric point-to-mesh distance and the symmetric Hausdorff distance are used as metric functions to compare the segmentations obtained using the proposed method with ground truth volumes traced by a cardiologist. A clinical dataset of 90 three-dimensional images from 45 patients is used to validate the detection technique. From this validation stage, the maximum average Dice score (0.92), the minimum average symmetric point-to-mesh distance (0.96 mm) and the minimum average symmetric Hausdorff distance (4.80 mm) are obtained when segmenting preprocessed volumes using similarity enhancement.
Keywords: Human heart; aortic root; multislice computerized tomography; segmentation; similarity enhancement; weighted median; unsupervised clustering.
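Two of the validation metrics used above, the Dice score and the symmetric Hausdorff distance, are simple to state in code; this is a minimal sketch on toy masks and point sets, not the validation code used in the paper:

```python
def dice(a, b):
    """Dice similarity of two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    # directed distance: worst-case nearest-neighbour gap from P to Q
    h = lambda P, Q: max(min(d(p, q) for q in Q) for p in P)
    return max(h(A, B), h(B, A))

mask_gt  = [1, 1, 1, 0, 0, 0]
mask_seg = [1, 1, 0, 1, 0, 0]
print(dice(mask_gt, mask_seg))                         # 2*2/(3+3) ≈ 0.667
print(hausdorff([(0, 0), (1, 0)], [(0, 0), (4, 0)]))   # 3.0
```

Dice rewards overlapping volume, while the Hausdorff distance penalizes the single worst boundary error, which is why papers in this area usually report both.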
Finite Element Analysis of Tibia Bone
by Pratik Thakre, K.S. Zakiuddin, I.A. Khan, M.S. Faizan
Abstract: The tibia, also known as the shin-bone, is the larger and stronger of the two bones in the leg below the knee in vertebrates (the other being the fibula), and it connects the knee with the ankle bones. The leg bones are the strongest long bones, as they support the rest of the body. The support and movement provided by the tibia are essential to many activities performed by the legs, including standing, walking, running, jumping and supporting the body's weight. This research was directed towards a study of the lower limb of the human body through 3-D modeling and finite element analysis of the tibia. Finite element analysis is used to evaluate the stresses and displacements of the human tibia under physiological loading. Three-dimensional finite element models were obtained using computed tomography (CT scan) data, which provide a thorough description of the material properties and density of the bone tissues; this is essential to create accurate and realistic geometry of a bone structure. Therefore, in this study, CT scan data of a patient's tibia (healthy, and a broken tibia at 1 month and 2 months after surgery) are used to develop three-dimensional finite element models of the left proximal tibia, and half of an average body weight of 80 kg, i.e. 37.53 kg (368.16 N), is applied to the tibia. Finite element analysis is conducted to calculate the equivalent von Mises stress, maximum principal stress, total deformation and fatigue (Fatigue Tool) for the whole proximal tibia, and the results are compared. These results provide a good foundation for further studies of bone injury prevention, bone transplant model integrity and validity, and subject-specific fracture mechanisms, as the results for the three bones (healthy, one month after surgery and two months after surgery) are compared.
Keywords: Tibia; Fibula; Stress Analysis; Material Properties; Displacement; Modeling; Simulation; Finite Element Analysis; Hypermesh; Embodi 3D.
Analysis and Classification of Kidney Ultrasound Images using SIFT Features with Neural Network Classifier
by Mangayarkarasi Thirunavukkarasu, Najumnissa Jamal
Abstract: A unique method to analyze ultrasound scan images of the kidney and classify renal abnormalities using SIFT features and an Artificial Neural Network is presented in this paper. The ultrasound kidney images are classified into four classes: normal, cyst, calculi and tumor. Kidney images obtained from a scan centre and a urologist, together with specialist knowledge for predicting common abnormalities, are utilized as inputs to carry out the processing and analysis of the ultrasound images. Preprocessing and denoising techniques are applied for the removal of speckle noise using median and Wiener filters. A Fuzzy C-Means clustering based segmentation technique is used to obtain the ROI; 50 ROIs are extracted based on this method. A set of statistical features is initially obtained. Second-order statistical features, the GLCM features that give information about inter-pixel relationships, periodicity and spatial gray-level dependencies, are then computed. To overcome the operator dependency of the ultrasound scanning procedure, the rotation-invariant SIFT (Scale Invariant Feature Transform) algorithm is applied and SIFT features are obtained. A total of 182 features for the normal images, and 350 GLCM features, 250 statistical features and 42 SIFT features for the abnormal images, are calculated. Abnormalities are classified using a supervised learning algorithm (ANN). With the number of hidden neurons set to 10, the classification accuracy is reached for the test input images.
Keywords: Ultrasound scan images; speckle noise; median filter; wiener filter; SIFT features; GLCM; fuzzy C-means segmentation; artificial neural network.
Development of Comfort Fit lower limb Prosthesis by Reverse Engineering and Rapid prototyping methods and validated with GAIT analysis
by Chockalingam K, Jawahar N, Muralidharan N, Balachandar Kandeepan
Abstract: The development of a comfort-fit, custom-made lower limb prosthesis using the concepts of reverse engineering (RE) and rapid prototyping (RP) is introduced in this paper. A comparison of the average percentage deviation in step lengths is also made, through gait analysis, among a normal person, the conventional (plaster of Paris, POP) prosthesis and the reverse engineered rapid prototyping prosthesis. The gait analysis reveals that the average percentage deviation in step lengths is 2.80 for a normal person, 53.70 for the conventional (POP) prosthesis and 7.06 for the reverse engineered rapid prototyping prosthesis. The difference in average percentage step-length deviation between a normal person and an amputee wearing the reverse engineered rapid prototyping prosthesis is very small (4.26), and hence it provides a comfortable fit.
Keywords: Lower limb prosthesis; Reverse engineering; Rapid prototyping; Gait analysis.
Efficient T2 Brain Region Extraction Algorithm using Morphological Operations and Overlapping Test from 2D and 3D MRI Images
by Vandana Shah
Abstract: In the field of magnetic resonance imaging (MRI) processing, image segmentation is an important and challenging problem in image analysis. The main purpose of segmentation in MRI images is to diagnose problems in the normal brain anatomy and to find the location of a tumour. Many algorithms have been proposed in recent years that help segment medical images and identify diseases. This paper proposes a novel 3D Brain Extraction Algorithm (3D-BEA) for the segmentation of MRI images to extract the exact brain region. Transverse relaxation time (T2) weighted images are used as input for the development of the algorithm, as these images provide bright compartments and dark fat tissues in the MRI brain region. The images are first denoised and smoothed for further processing. They are then used for the extraction of irregular brain masks through thresholding, which are compared with the upper and lower slices of the brain images using morphological operations. The final brain volume is generated using this 3D-BEA process. The result of the proposed algorithm is validated by comparison with the results of existing segmentation algorithms used for the same purpose. The proposed algorithm will help medical experts to understand and diagnose the tumour area of the patient.
Keywords: segmentation; morphological operations; clustering; k-means clustering; fuzzy c means clustering; Brain Extraction Algorithm.
Secure Agent Based Diagnosis and Classification Using Optimal Kernel Support Vector Machine
by Kiran Tangod, Gururaj Kulkarni
Abstract: Diabetes is a serious, complex condition which can affect the entire body. Diabetes requires daily self-care and, if complications develop, can have a significant impact on quality of life and can reduce life expectancy. Existing multi-agent based diabetes diagnosis and classification methods require a number of agents, and the communication between those agents causes time-complexity issues. Because of these complexity issues, the existing methods are not applicable in emergency situations; to overcome this, it is essential to reduce the number of agents, which motivates the proposed method. Our method requires only three agents: a user agent, a security agent and an updation agent. Initially, the user agent collects the user's symptoms and then encrypts them. The encrypted symptoms are then directed to the updation agent; for secure communication, a Twofish-based encryption algorithm is used between the user and updation agents. After receiving the encrypted data from the security agent, the updation agent needs to determine the user's diabetes level, i.e. whether it is normal or abnormal. Hence our proposed technique uses the Optimal Kernel Support Vector Machine (OKSVM) algorithm to classify the diabetes level. In our technique, the traditional kernel function is optimized with the help of the Sequential Minimal Optimization (SMO) algorithm. Based on the optimal kernel, the technique effectively prescribes drugs for the corresponding user. The proposed method is implemented on the JAVA platform.
Keywords: Multi-Agent Systems (MAS); Diabetes; Twofish-Based Encryption Algorithm; Optimal Kernel Support Vector Machine algorithm (OKSVM); Sequential Minimal Optimization (SMO).
Computational Study on the Effect of Human Respiratory Wall Elasticity
by Vivek Kumar Srivastav, Rakesh Kumar Shukla, Akshoy Ranjan Paul, Anuj Jain
Abstract: The present study computationally investigates the respiratory wall behaviour and airflow characteristics inside the trachea and first-generation bronchi, considering fluid-structure interaction (FSI) between the airflow and the human respiratory wall. The human respiratory tract model is constructed using Computed Tomography (CT) scan data. The objective of the present study is to investigate the effect of the elasticity of the respiratory wall on the deformation and stresses induced in the respiratory tract during inhalation. The deformation in the respiratory tract is found to be insignificant for elasticity moduli above 40 kPa, while a considerable amount of deformation is observed when the elasticity modulus is below 30 kPa. The internal flow physics is compared between rigid and compliant (flexible) human respiratory tract models. The flexibility considered in the respiratory tract wall decreases the maximum flow velocity by 24%. It is observed that the wall shear stress in the compliant respiratory model is reduced to one-third of that in the rigid respiratory model. The comparison of the results of the rigid and compliant models shows that the FSI technique offers more realistic results than conventional computational fluid dynamics (CFD) analysis of a rigid tract. The simulated results suggest that it is essential to incorporate respiratory wall deformability into the computational model to obtain realistic results, which will help medical practitioners to correlate clinical findings with more accurate computational results.
Keywords: Human respiratory tract; rigid model; compliant model; Fluid structure interaction (FSI); Elasticity modulus.
Experimental Studies on Acrylic Dielectric Elastomers as Actuator for Artificial Skeletal Muscle Application
by Yogesh Chandra, Anuj Jain, R.P. Tewari
Abstract: The application of an acrylic dielectric elastomer, an electrically actuated electroactive polymer, for artificial muscles has been investigated to evaluate its suitability for prosthetic and orthotic devices by comparing its mechanical and electrical properties with those of natural skeletal muscles. Experimental studies have been performed to obtain the properties of acrylic dielectric elastomers for the design and development of artificial skeletal muscles. A commercially available acrylic dielectric elastomer film, VHB 4910 by 3M, was subjected to uniaxial tensile tests at varying strain rates, a stress relaxation test and a loading-unloading test on universal testing machines, and underwent an electrical actuation test after pre-straining. The results of these tests are discussed separately and then compared with previous research on skeletal muscles. Moreover, the material response is observed to be highly viscous and hyper-elastic, i.e. sensitive to very high strain rates, as in the case of skeletal muscles. These results can be utilized in material selection for developing dielectric elastomer actuator applications for artificial muscles.
Keywords: dielectric elastomers; electrical actuation; artificial muscles; stress relaxation; pre-straining; strain rate.
A Comparison of Detrend Fluctuation Analysis, Gaussian Mixture Model and Artificial Neural Network Performance in detection of Microcalcification from Digital Mammograms
by Sannasi Chakravarthy S R, Harikumar Rajaguru
Abstract: This paper presents a computer-aided approach that classifies the type of cancer (benign or malignant) and its associated risk from digital mammogram images. Twelve statistical features are extracted through five different wavelets, namely Daubechies, Haar, Biorthogonal Splines, Symlets and DMeyer, with decomposition levels of 4 and 6. The Mammographic Image Analysis Society (MIAS) database is utilized in this paper. Micro-calcifications in the digital mammogram images are detected by Detrend Fluctuation Analysis (DFA), a Gaussian Mixture Model (GMM) and an Artificial Neural Network (ANN). The classifiers' performance is analyzed and compared based on the benchmark parameters sensitivity, selectivity, precision and accuracy. The GMM classifier outperforms the DFA and ANN classifiers in terms of these performance metrics.
Keywords: mammogram images; breast cancer; wavelet; detrend; gaussian mixture; neural network; classification.
Automatic Detection of Tuberculosis based on Adaboost Classifier and Genetic Algorithm
by BEAULAH JEYAVATHANA RAJENDRAN, Balasubramanian R
Abstract: Tuberculosis is one of the most common diseases in developing countries. Early-stage diagnosis of tuberculosis plays a significant role in curing TB patients. The work presented in this paper is focused on the design and development of a system for the detection of tuberculosis in CT lung images. The disease can be diagnosed easily by radiologists with the help of an automated tuberculosis detection system. The main objective of this paper is to obtain the best solution, selected by means of a genetic algorithm, which is regarded as the optimal feature descriptor. Five stages are used to detect tuberculosis disease: pre-processing the image, segmentation, feature extraction, feature selection and classification. These stages are used in medical image processing to enhance TB identification. In the feature extraction stage, wavelet-based statistical texture feature extraction is used, and from the extracted feature sets the optimal features are selected by the genetic algorithm. Finally, the Adaboost classifier method is used for image classification. The experimentation is done and intermediate results are obtained. Experimental results show that Adaboost is a good classifier, giving an accuracy of 95% for classifying TB-affected and non-affected lungs using wavelet-based statistical texture features.
Keywords: Tuberculosis; Otsu's method; GLCM approach; Genetic Algorithm; Adaboost classifier.
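The genetic-algorithm feature selection stage can be sketched with a binary-coded GA; the cost function below is a toy stand-in for classifier error, and the feature counts, rates and fitness are illustrative assumptions, not the paper's setup:

```python
import random

def ga_select(n_features, fitness, pop=30, gens=40, pmut=0.05):
    """Binary-coded GA: each chromosome is a feature mask; lower cost wins."""
    random.seed(7)
    P = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        P = sorted(P, key=fitness)[:pop // 2]      # elitist truncation selection
        while len(P) < pop:
            a, b = random.sample(P[:pop // 2], 2)  # two elite parents
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < pmut) for g in child]  # bit-flip mutation
            P.append(child)
    return min(P, key=fitness)

# toy cost: features 0-4 are informative (reward), the rest add noise (penalty)
cost = lambda mask: -sum(mask[:5]) + 0.5 * sum(mask[5:])
best = ga_select(12, cost)
```

In the real pipeline, `fitness` would train the Adaboost classifier on the masked wavelet-texture features and return its validation error, so the GA directly trades feature-set size against accuracy.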
MRI Images Segmentation using Improved Spatial FCM Clustering and Pillar Algorithms
by Boucif Beddad, Kaddour Hachemi, Sundarapandian Vaidyanathan
Abstract: The segmentation of brain tissue from MRI images is a vast subject of study and has many applications in medicine. The main objective of this work is to develop a new segmentation technique based on a combination of the Pillar algorithm and Spatial Fuzzy C-Means clustering. The proposed approach applies FCM clustering to image segmentation after initialization by the Pillar algorithm, improving the precision of the initial centers and the computational time. The features of the segmented brain image are extracted as different classes (white matter WM, gray matter GM and cerebrospinal fluid CSF), providing partially or fully automated tools for a correct extraction of the cerebral tissue. The developed algorithm has been implemented and the program is run through Simulink. All experimental results are satisfactory, which indicates that combining several segmentation algorithms helps to obtain better results and improves the classification.
Keywords: Brain MRI; Image Processing; Pillar Algorithm; Segmentation; Spatial Fuzzy C-Mean Clustering.
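The standard FCM update at the core of the method can be sketched on scalar intensities; the cluster count, fuzzifier and toy values are illustrative assumptions, the spatial term of the paper's variant is omitted, and the deterministic range-based initialization stands in for the Pillar step:

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on scalar intensities (spatial term omitted).
    Initial centres are spread across the data range for determinism."""
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * i / (c - 1) for i in range(c)]
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = [[1.0 / sum(((abs(x - vi) + 1e-12) / (abs(x - vj) + 1e-12))
                        ** (2.0 / (m - 1)) for vj in centers)
              for vi in centers] for x in data]
        # centre update: means weighted by membership^m
        centers = [sum(U[k][i] ** m * data[k] for k in range(len(data)))
                   / sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return sorted(centers)

# toy intensities: two tissue-like classes around 40 and 160
vals = [38, 41, 40, 39, 42, 158, 161, 160, 162, 159]
centers = fcm(vals)
```

The quality of those initial centers is exactly what the Pillar algorithm improves in the paper, since FCM converges faster and more reliably from well-placed starting points.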
Searching for cell signatures in multidimensional feature spaces
by Romuere Silva, Flavio Araujo, Mariana Rezende, Paulo Oliveira, Fatima Medeiros, Rodrigo Veras, Daniela Ushizima
Abstract: Despite research on cervical cells since 1925, systems to automatically screen images from conventional Pap smear tests remain unavailable. One of the main challenges in deploying precise software tools is validating cell signatures. In this paper, we introduce an analysis framework, CRIC-feat, that expedites the investigation of different image databases and their descriptors, particularly applicable to Pap images. This paper provides a three-fold contribution: (a) we review and discuss the main feature extraction protocols for cell description and implementations suitable for cervical cells, (b) we present a new application of Gray Level Run Length (GLRLM) features to Pap images, and (c) we evaluate 93 cell classification approaches and provide a guideline for obtaining the most accurate description, based on two current public databases of digital images of real cells. Finally, we show that the nucleus information is preponderant in cell classification, particularly when considering the GLRLM feature set.
Keywords: Medical Image; Feature extraction; Cervical cells; Quantitative Microscopy; Cell Descriptors; Classification.
Skull Stripping of Brain MRI for Analysis of Alzheimer's Disease
by Dulumani Das, Sanjib Kumar Kalita
Abstract: MRI is a widely used imaging technique that helps to detect different brain abnormalities such as brain tumor, Alzheimer's disease and brain stroke. Skull stripping of the MR brain image is a preliminary step in neuroimaging. The MR brain image contains non-brain tissues such as skull, skin, fat, muscle and neck, which cause difficulty in further analysis, so it is essential to remove them before detailed analysis; this process is referred to as skull stripping. As the non-brain tissues are removed, the area of the brain is reduced. The aim of this work is to extract the main region of the brain, with adequate area for analysis, by removing the non-brain tissues. The main problem in skull stripping is differentiating brain tissues from non-brain tissues because of their intensity inhomogeneity. In this work, an automatic skull stripping method based on morphological operations is analyzed. Initially, MR images are segmented using entropy-based thresholding. To find a precise threshold for brain tissues, five entropy-based thresholding methods are analyzed: the maximum entropy sum method, the entropic correlation method, Rényi's entropy, Tsallis entropy and a modified Tsallis entropy method; these are compared with Otsu's thresholding method. The final skull-stripped image is obtained after performing morphological operations. In the present study, 50 T1-weighted coronal MR images are considered for the experiment. The experiment shows that the skull-stripped brain obtained using the modified Tsallis threshold gives an adequate area of interest; the accuracy obtained with this method is 80.4%. Further, three features of the extracted brain are analyzed to diagnose Alzheimer's disease: perimeter, hole size and boundary distance.
Keywords: skull stripping; Alzheimer’s disease; MRI; entropy based thresholding; mathematical morphology.
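The maximum entropy sum (Kapur) threshold mentioned in the abstract above can be sketched in a few lines. This is a generic illustration on a toy 16-bin histogram, not the authors' implementation:

```python
import math

def max_entropy_threshold(hist):
    """Kapur's maximum entropy sum method: choose the threshold t that
    maximizes the sum of background and foreground entropies."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])          # background probability mass
        w1 = 1.0 - w0            # foreground probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Toy bimodal histogram: dark background (bins 0-3), bright tissue (bins 12-15).
hist = [40, 60, 50, 30, 0, 0, 0, 0, 0, 0, 0, 0, 20, 45, 55, 25]
t = max_entropy_threshold(hist)  # any t in the empty valley separates the modes
```

The same loop structure applies to the entropic correlation and Tsallis variants compared in the paper; only the per-class entropy expression changes.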
Epileptogenic neurophysiological feature analysis based on an improved neural mass model
by Zhen Ma
Abstract: To elucidate the neurophysiological mechanisms underlying seizures according to electroencephalogram (EEG) signals, a neurophysiologically-based neural mass model that can produce EEG-like signals was adopted to simulate ictal and interictal EEG signals. A delay unit and a gain unit were added to the Wendling model to fit EEG signals in the time domain. An optimal parameter set minimizing the error between observed and simulated EEG was identified using a genetic algorithm. To compare the inhibition and excitation during the ictal and interictal periods, the model parameters were determined for two sets of EEG signals using the proposed method. The results show that the model with identified parameters can simulate the real EEG signal well, with a mean square error ranging from 0.0315 to 0.2138. Fifty repetitions for every selected EEG signal showed that the dispersion of the identified parameters was small in most cases, and the identification procedure generally yielded similar values. Comparison of the model parameters of seizure and non-seizure EEG signals showed enhanced excitability, attenuated inhibition, and a more concentrated energy distribution in the frequency domain during the ictal periods. The experimental results for long-term EEG signals revealed continuous changes in the model parameters during epileptic seizures.
Keywords: EEG; neural mass model; genetic algorithm; fitting.
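The genetic-algorithm parameter identification described above can be illustrated on a deliberately simple surrogate problem (fitting two parameters of a straight line rather than the Wendling model). The selection/crossover/mutation loop is the part being sketched; all settings are illustrative assumptions:

```python
import random

random.seed(7)

# Toy identification problem: recover (a, b) of y = a*x + b from observations.
xs = [i / 10 for i in range(20)]
ys = [2.5 * x - 1.0 for x in xs]

def mse(ind):
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=mse)
    parents = pop[:10]                       # truncation selection (elitist)
    children = []
    while len(children) < 20:
        pa, pb = random.sample(parents, 2)
        w = random.random()                  # arithmetic crossover
        child = [w * u + (1 - w) * v for u, v in zip(pa, pb)]
        child = tuple(g + random.gauss(0, 0.1) for g in child)  # Gaussian mutation
        children.append(child)
    pop = parents + children
best = min(pop, key=mse)                     # should be close to (2.5, -1.0)
```

In the paper the fitness would instead be the error between observed and model-simulated EEG, with one gene per physiological model parameter.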
A hybrid multimodal biometric scheme based on face and both irises integration for person authentication
by Larbi Nouar, Nasreddine Taleb, Miloud Chikr El Mezouar
Abstract: In the biometric field, mono-modal biometry suffers from several limitations. The use of multiple biometrics has helped overcome these limitations and outperforms single biometrics in terms of recognition rate. In this paper, we propose a new approach that fuses the Gabor-Wigner transform and oriented Gabor phase information for feature extraction, as well as a hybrid scheme consisting of multi-algorithm, multi-instance and multi-modal systems that integrates the face and both the left and right irises of the same subject. The SDUMLA-HMT database has been used to evaluate the proposed approach. The results show that our multi-modal biometric system achieves higher accuracy than the single-biometric approaches as well as other existing multi-biometric systems.
Keywords: biometrics; multi-biometric systems; iris recognition; face recognition; fusion; multi-algorithm; DET curve; feature extraction.
Novel Slope-Based Onset Detection Algorithm for Electromyographic Signals
by Vinayak Bairagi, Archana Kanwade
Abstract: Electromyography (EMG) is a technique for acquiring the neuromuscular activity of a muscle. Onset and offset give information about the activation and deactivation timings of motor units. This paper proposes a novel slope-based algorithm for onset and offset detection. EMG data are collected from different muscles of different subjects using surface EMG electrodes. The data are divided into smaller windows, and the Average Instantaneous Amplitude (AIA) and slope are calculated for each window. A threshold is chosen to avoid baseline noise, and the maximum and minimum slopes are detected as the onset and offset, respectively. The results are more accurate than those of the single-threshold and double-threshold methods. Compared with a Root Mean Square (RMS) based algorithm, accuracy increases with computational complexity (arithmetic calculations). The only limitation is a decrease in accuracy if the signal is acquired between two muscle contractions. The proposed slope-based onset detection algorithm can thus offer a trade-off between accuracy and computational complexity.
Keywords: Electromyography; Onset; Offset; Slope.
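The windowed AIA-and-slope procedure in the abstract can be sketched as follows. The window length, threshold, synthetic burst and the mapping from slope index to sample index are all illustrative assumptions, one plausible reading of the method:

```python
def detect_onset_offset(emg, win, thresh):
    """Windowed AIA + slope: the largest positive inter-window slope marks the
    onset, the most negative marks the offset; slopes whose windows stay below
    the baseline threshold are ignored."""
    aia = [sum(abs(s) for s in emg[i:i + win]) / win
           for i in range(0, len(emg) - win + 1, win)]
    slopes = [b - a for a, b in zip(aia, aia[1:])]
    valid = [i for i, s in enumerate(slopes) if max(aia[i], aia[i + 1]) > thresh]
    onset = max(valid, key=lambda i: slopes[i])
    offset = min(valid, key=lambda i: slopes[i])
    # The rise from window i to i+1 places activation at the start of window i+1.
    return (onset + 1) * win, (offset + 1) * win

# Synthetic burst: quiet baseline, contraction over samples 100-199, quiet again.
emg = [0.02] * 100 + [1.0, -1.0] * 50 + [0.02] * 100
on, off = detect_onset_offset(emg, win=20, thresh=0.1)  # 100 and 200 here
```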
Performance Evaluation of De-noised Medical Images after Removing Speckle Noise by Wavelet Transform
by Arun Kumar, M.A. Ansari
Abstract: In the healthcare community, the quality of the medical image is of prime concern for making accurate observations for diagnosis. Different types of noise, such as Gaussian noise, impulse noise and speckle noise, have been observed as main causes of quality degradation in medical images. This degradation may further lead to inconsistent information for diagnosis, which directly affects the patient's life. Removing noise from the medical image while maintaining its quality has become a very tough task for researchers and practitioners in the field of medical image processing. This paper presents a comparative performance evaluation of various orthogonal and biorthogonal wavelet filters commonly used for de-noising, based on statistical parameters such as mean square error (MSE), peak signal to noise ratio (PSNR), structural similarity index (SSIM) and correlation coefficient. The results of the present study show that the biorthogonal 3.9 wavelet filter provides a more precise image after noise removal than the other wavelet filters used.
Keywords: Wavelet filters; Feature extraction; Speckle noise; Soft threshold; Performance evaluation.
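Wavelet shrinkage of the kind evaluated above can be illustrated with a one-level Haar transform and soft thresholding. The paper compares many orthogonal and biorthogonal filters; Haar is used here only because it fits in a few lines, and the signal, noise level and threshold are assumptions:

```python
import math, random

def haar_dwt(x):
    """One-level orthonormal Haar transform (even-length input)."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft(coeffs, lam):
    """Soft threshold: shrink each coefficient toward zero by lam."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in coeffs]

random.seed(0)
clean = [math.sin(2 * math.pi * i / 64) for i in range(256)]
noisy = [c + random.gauss(0, 0.3) for c in clean]

approx, detail = haar_dwt(noisy)
denoised = haar_idwt(approx, soft(detail, 0.4))   # shrink only the detail band

err_noisy = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_denoised = sum((d - c) ** 2 for d, c in zip(denoised, clean))
```

Noise spreads evenly across bands while the smooth signal concentrates in the approximation band, so thresholding the detail coefficients lowers the error against the clean signal.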
Cryptanalysis for Securing DICOM Medical Contents using Multilevel Encryption
by Subhasri Prabhakaran, Padmapriya Arumugam
Abstract: In healthcare, medical images are considered important and sensitive data, as patient particulars can be identified from medical data. A secure encryption algorithm is essential for transferring these data. Cryptographic schemes are used to enhance the security and confidentiality of medical data, thereby preventing unauthorized access. Data from contributors and storage systems lead to scalability and preservation issues, so a standard is required to preserve the secrecy of medical images. Digital Imaging and Communications in Medicine (DICOM) is the universal standard for the secure communication of medical files. In cryptography, the process by which intruders attempt to recover encrypted sensitive data without the correct key is known as cryptanalysis, so a typical evaluation mechanism is needed to verify the quality of the cryptographic methods employed. In this paper, the results are assessed for MRI, CT and X-ray image data, and the performance of the proposed method is evaluated using cryptanalysis measures.
Keywords: DICOM Medical content; Cryptography; Medical Image; Cryptanalysis; Security attacks.
EVENT DETECTION IN SINGLE TRIAL EEG DURING ATTENTION AND MEMORY RELATED TASK
by Prema Pitchaipillai
Abstract: P300 is an endogenous event related potential (ERP) elicited by a rare or significant visual stimulus, and is widely preferred in Brain Computer Interface (BCI) research to assess the cognition level of a subject. Many researchers contribute to P300 estimation, as this signal is of very low strength relative to background Electroencephalogram (EEG) activity. This paper proposes a novel signal processing algorithm to detect the P300 event in single-trial EEG acquired from midline electrode sites in an oddball paradigm, in order to evaluate attention and memory related tasks. The algorithm incorporates a wavelet-combined adaptive noise canceller followed by ensemble and moving averagers. Time domain analysis shows the localization of the ERP around 300 ms for target stimuli attended by the subjects, and Short-Time Fourier Transform (STFT) analysis shows strong theta activity associated with the memory related task. The proposed algorithm is thus efficient in detecting the P300, with a higher average correlation coefficient of 0.82 than other existing methods.
Keywords: BCI; P300-event related potential; attention; adaptive filter; ensemble averaging; moving averager; latency; SNR; STFT; time-domain; frequency-domain.
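The ensemble-and-moving-averager stage of the pipeline above can be sketched on synthetic time-locked epochs. The wavelet-combined adaptive noise canceller is not reproduced here; the ERP shape, noise level and window length are assumptions:

```python
import math, random

random.seed(1)

n, trials = 100, 40
# Toy P300-like bump centred at 60% of the epoch, buried in unit-variance noise.
erp = [math.exp(-((i - 0.6 * n) ** 2) / (2 * (0.05 * n) ** 2)) for i in range(n)]
epochs = [[e + random.gauss(0, 1.0) for e in erp] for _ in range(trials)]

# Ensemble average across time-locked trials (noise shrinks as 1/sqrt(trials))...
ens = [sum(ep[i] for ep in epochs) / trials for i in range(n)]
# ...followed by a 5-point moving averager.
smooth = [sum(ens[max(0, i - 2):i + 3]) / len(ens[max(0, i - 2):i + 3])
          for i in range(n)]

peak = max(range(n), key=lambda i: smooth[i])   # should sit near sample 60
```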
Progressive Fusion of Feature Sets for Optimal Classification of MI Signal
by M.K.M. Rahman, Md. Omer Sadek Bhuiyan, Md.A.Mannan Joadder
Abstract: The Brain-Computer Interface (BCI) is one of the most popular research topics among researchers in neuroprosthetics. The ultimate goal of this research is to develop a communication channel between the human brain and external devices, and feature extraction is one of the most crucial steps. Combining different features may improve classification performance, but in most cases a straightforward combination of different features leads to very poor results. It is therefore necessary to combine the orthogonal features and omit the redundant ones, which is a complex and time-consuming process. We have developed two new algorithms to find optimum sets of features for fusion to obtain the best possible classification accuracy for both subject-specific (SS) and subject-independent (SI) cases. Experimental results indicate that our proposed algorithms, in general, improve the classification results irrespective of the methodological setup of the BCI process, such as the number of input channels and the spatial filter.
Keywords: brain-computer interface (BCI); feature fusion; feature extraction; optimum feature set.
Automated Blink Artifact Removal from EEG using Variational Mode Decomposition and Singular Spectrum Analysis
by Poonam Sheoran, J.S. Saini
Abstract: Purpose: Blink artifacts are the major source of noise when acquiring Electroencephalogram (EEG) data for analysis, so designing an efficient blink artifact removal method is essential for any EEG-based analysis. Method: In this paper, a novel automated eye blink artifact removal method based on Variational Mode Decomposition (VMD) and Singular Spectrum Analysis (SSA) is presented. The noisy EEG signals are first separated into uncorrelated components using Canonical Correlation Analysis (CCA), and then Variational Mode Decomposition is performed for multiresolution analysis. The decomposed components (modes) are assessed through their singular values to find the distribution of noise using singular spectrum analysis, and Phase Space Reconstruction (PSR) is used to differentiate clean modes from noisy modes. Result: The applicability of the proposed approach is examined through statistical measures such as signal to noise ratio (SNR), correlation coefficient and root mean square error (RMSE). The results indicate the efficacy of the approach in artifact removal without manual intervention as compared to state-of-the-art techniques. Conclusion: The proposed method automatically identified and removed the noisy fraction of the signal, yielding the requisite neural information without any manual intervention.
Keywords: Artifact removal; Variational Mode decomposition (VMD); Canonical Correlation Analysis (CCA); Phase Space Reconstruction (PSR); Singular Spectrum Analysis (SSA).
CT Image Reconstruction from Sparse Projections Using Adaptive Total Generalized Variation with Soft Thresholding
by Vibha Tiwari, Prashant Bansod, Abhay Kumar
Abstract: CT imaging plays a vital role in the non-invasive diagnosis and surgical planning of critical diseases. However, it is essential to reduce the radiation dose during CT imaging, as excessive exposure may harm human tissues. To reduce the dose, the CT image is acquired using a limited number of X-ray projections, and an adaptive Total Generalized Variation (TGV) minimization method is proposed to reconstruct the image. The simulation results have been compared with the existing TV, TGV and TGV-with-hard-thresholding methods. Typically, two types of noise, Gaussian and Poisson distributed, are introduced during the CT imaging process, so both have been added to the measured samples. Applying soft thresholding and the FISTA algorithm with the proposed method yields better results in a noisy imaging environment. The reconstructed CT image quality has been compared using parameters such as FSSIM, PSNR, NRMSE and MAE.
Keywords: CT image reconstruction; limited angle reconstruction; total generalized variation; compressive sensing; telemedicine.
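The soft-thresholding step referred to above is the proximal operator applied inside each ISTA/FISTA iteration; a minimal scalar version:

```python
def soft_threshold(x, lam):
    """Proximal operator of lam*|x|: the shrinkage applied after each
    gradient step on the data-fidelity term in ISTA/FISTA."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# For the scalar problem min_x 0.5*(x - y)**2 + lam*|x| the exact minimizer
# is soft_threshold(y, lam).
vals = [soft_threshold(v, 0.5) for v in [-2.0, -0.3, 0.0, 0.4, 1.5]]
# vals == [-1.5, 0.0, 0.0, 0.0, 1.0]
```

In the TGV setting the same shrinkage is applied componentwise to the sparsifying-transform coefficients rather than to scalars; hard thresholding, by contrast, keeps coefficients above the threshold unchanged.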
Performance Analysis of Data Mining Classification Algorithms for Early Prediction of Diabetes Mellitus II
by Delshi Ramamoorthy
Abstract: Diabetes mellitus (DM), generally referred to as diabetes, is a group of metabolic disorders in which high blood sugar levels persist over a prolonged period. Data mining is used for predicting various diseases, and classification is one of its main techniques. Classification techniques are used to uncover hidden information in all areas, including the medical diagnostic field. In this research work, we compare machine learning classifiers (naive Bayes, J48 decision tree, OneR, AdaBoost, random forest, random tree and support vector machines) for classifying patients as diabetic or non-diabetic. These algorithms have been tested with data samples downloaded from the UCI repository. The performance of the algorithms has been considered in both cases, i.e., data samples with and without noisy data. Results are evaluated in terms of accuracy, sensitivity and specificity. Experimental results suggest that the support vector machine (SVM) classifier is the best classifier for predicting diabetes mellitus type 2.
Keywords: Diabetes Mellitus; Classification; SVM; AdaBoost; NB; J48;Random Tree; Random Forest; OneR; Data Mining.
An Improved Speckle Noise Reduction Scheme Using Switching and Flagging of Noisy Data for Preprocessing of Ultrasonograms in Detection of Down Syndrome during First and Second Trimesters
by Jeba Shiney, Amar Pratap Singh, Priestly Shan
Abstract: Down Syndrome (DS) is reported to be one of the most common chromosomal abnormalities affecting newborns all over the world. Diagnosis of the syndrome at an early stage of pregnancy gives the affected parents more options when deciding on the interventional therapies required for the developing child. The techniques currently used in the diagnosis of DS, such as amniocentesis and Chorionic Villus Sampling (CVS), are invasive in nature and carry some degree of risk. This paper aims at developing a Clinical Decision Support System (CDSS) for the detection of DS from Ultrasound (US) fetal images. As a preliminary step, the US images have to be denoised to remove speckle noise. A Modified Mean Median (MMM) filter based on the principle of progressive switching theory has been proposed. Experimental results show that the proposed filter provides better results in terms of Peak Signal to Noise Ratio (PSNR), Image Enhancement Factor (IEF) and other measures.
Keywords: Ultrasound; Down Syndrome; Modified Mean Median; Amniocentesis; Chorionic Villus Sampling; Speckle noise; filter; diagnosis; Clinical Decision Support System; Peak Signal to Noise Ratio.
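The progressive-switching idea behind the proposed filter (flag likely speckle pixels first, then filter only those) can be sketched as follows; the flagging rule and threshold here are illustrative assumptions, not the authors' exact criterion:

```python
def switching_median(img, t):
    """Flag pixels deviating from their 3x3 neighbourhood median by more
    than t, then replace only flagged pixels with the median of their
    unflagged neighbours; clean pixels are left untouched."""
    h, w = len(img), len(img[0])
    def nbrs(y, x):
        return [(j, i) for j in range(max(0, y - 1), min(h, y + 2))
                       for i in range(max(0, x - 1), min(w, x + 2))
                       if (j, i) != (y, x)]
    def med(vals):
        s = sorted(vals)
        return s[len(s) // 2]
    flagged = {(y, x) for y in range(h) for x in range(w)
               if abs(img[y][x] - med([img[j][i] for j, i in nbrs(y, x)])) > t}
    out = [row[:] for row in img]
    for y, x in flagged:
        good = [img[j][i] for j, i in nbrs(y, x) if (j, i) not in flagged]
        if good:
            out[y][x] = med(good)
    return out

# A flat 5x5 region of intensity 100 with two speckle outliers.
img = [[100] * 5 for _ in range(5)]
img[1][1], img[3][3] = 230, 5
out = switching_median(img, t=50)   # both outliers restored to 100
```

The switching step is what distinguishes this family from a plain median filter, which would also blur the clean pixels.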
VOLUME BASED INTER DIFFERENCE XOR PATTERN: A NEW PATTERN FOR PULMONARY NODULE DETECTION IN CT IMAGES
by Chitradevi A, Nirmal Singh N, Jayapriya K
Abstract: Pulmonary nodule identification, which paves the way to cancer diagnosis, is a challenging task today. The proposed work, the Volume Based Inter Difference XOR Pattern (VIDXP), provides an efficient lung nodule detection system using a 3D texture-based pattern formed, for every segmented nodule, by XOR calculation of the inter-frame gray-value differences between the centre frame and its neighbouring frames in a rotationally clockwise direction. Different classifiers, such as Random Forest (RF), Decision Tree (DT) and AdaBoost, are used with 10 trials of a 5-fold cross-validation test for classification. Experimental analysis on the public Lung Image Database Consortium - Image Database Resource Initiative (LIDC-IDRI) database shows that the proposed scheme gives better accuracy compared with existing approaches. Further, the proposed scheme is enhanced by combining shape information using the Histogram of Oriented Gradients (HOG), which improves the classification accuracy.
Keywords: pulmonary nodule; classification; preprocess; segmentation; feature extraction; LIDC-IDRI; medical image segmentation; accuracy.
Accurate detection of Dicrotic notch from PPG signal for telemonitoring applications
by Abhishek Chakraborty, Deboleena Sadhukhan, Madhuchhanda Mitra
Abstract: Recent technological advancements have inspired the modern population to adopt portable, simple personal telemonitoring systems that use easy-to-acquire biosignals such as the Photoplethysmogram (PPG) for regular monitoring of vital signs. Consequently, computerized analysis of the PPG signal through accurate detection of clinically significant fiducial points, such as the dicrotic notch, has become a key research area for early detection of physiological anomalies. In this research, a simple and robust algorithm is proposed for accurate detection of the dicrotic notch from the PPG signal, employing the first and second differences of the denoised signal, slope reversal and an empirical formula-based approach. Features related to the dicrotic notch are then extracted from the baseline-corrected PPG signal, and the performance of the algorithm is evaluated over different standard PPG databases as well as over originally acquired signals. The algorithm achieves high efficiency in terms of sensitivity, positive predictivity and detection accuracy, with low errors in the detected features.
Keywords: Photoplethysmogram; amplitude threshold; slope reversal; dicrotic notch detection.
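The slope-reversal step of the detection scheme can be sketched on a synthetic two-bump pulse. The empirical formula and baseline correction used by the authors are not reproduced, and the pulse shape is an assumption:

```python
import math

def dicrotic_notch(ppg):
    """Locate the systolic peak, then the first slope reversal (first
    difference crossing from negative to non-negative) after it, which is
    where the dicrotic notch sits between systolic and diastolic waves."""
    d1 = [b - a for a, b in zip(ppg, ppg[1:])]
    sys_peak = max(range(len(ppg)), key=lambda i: ppg[i])
    for i in range(sys_peak, len(d1) - 1):
        if d1[i] < 0 <= d1[i + 1]:
            return i + 1
    return None

# Two-bump synthetic pulse: systolic wave near sample 30, diastolic near 65.
ppg = [math.exp(-((i - 30) / 8) ** 2) + 0.5 * math.exp(-((i - 65) / 10) ** 2)
       for i in range(120)]
notch = dicrotic_notch(ppg)   # the local minimum between the two waves
```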
PULSATILE FLOW, MICRO-SCALE ERYTHROCYTE-PLATELET INTERACTION
by Thakir AlMomani, Suleiman Bani Hani, Samer Awad, Mohammad Al-Abed, Hesham AlMomani, Mohammad Ababneh
Abstract: Platelet aggregation, activation, and adhesion on blood vessels and implants result in the formation of mural thrombi. Erythrocytes (red blood cells, RBCs) have been shown to play a significant role in the aggregation of platelets toward vessel walls. A level-set sharp-interface immersed boundary method is employed in the computations, in which RBC and platelet boundaries are advected on a two-dimensional Cartesian coordinate grid system. RBCs and platelets are treated as rigid non-deformable particles, where the RBC is assumed to have an elliptical shape and the platelet a discoid shape. Both steady and pulsatile flow regimes were employed, with Reynolds number values equivalent to those found in micro blood vessels. Forces and torques between colliding blood cells are modeled using an extension of the soft sphere model for elliptical particles. RBCs and platelets are transported under the forces and torques induced by fluid flow and cell collisions, based on solving the momentum equation for each blood cell. The computational results indicate that platelets show more interaction with RBCs and migration toward the vessel wall under steady flow than under pulsatile flow. Velocity contours did not show major differences in the peak and minimum values. Physiological flow conditions showed less interaction between RBCs and platelets than steady flow conditions; moreover, platelets tend to concentrate in the core region in the case of pulsatile flow.
Keywords: Erythrocyte; platelet; interaction; pulsatile flow; migration; core region; wall region.
A Review on Motor Neuron Disabilities and Treatments
by Ankita Tiwari, O.P. Singh, Dinesh Bhatia
Abstract: Neuromotor or Motor Neuron Disabilities (MND) are a set of incurable medical conditions with a range of associated problems. The disease affects an individual's motor neuron functioning, either throughout the whole body or in a specific part of it, and the disability can result from improper communication between motor neurons and muscle fibres. In this review paper, we study and enumerate different neuromotor disabilities and the related treatments available to date. Although several interventions have been proposed for the rehabilitation of such patients, accurate and reliable methods still need to be researched to improve the patient's condition.
Keywords: Motor Neuron Disease; Rehabilitation; muscle fibre.
Comparison between Analyzing Wavelets in Continuous Wavelet Transform Based on the Fast Fourier Transform: Application to Estimate Pulmonary Arterial Hypertension by Heart Sounds
by Lotfi Hamza Cherif
Abstract: Wavelet analysis makes it possible to use long windows when more accurate low-frequency information is needed, and shorter windows for high-frequency information. Since conventional CWT requires considerable power and computation time, particularly for non-stationary signal analysis, we used CWT analysis based on the Fast Fourier Transform (FFT). The CWT was obtained using the properties of circular convolution to increase the speed of computation, which provides results for long recordings of PCG signals in a short time. The CWT gives a better localization of the frequency components in PCG signals than the commonly used short-time Fourier transform (STFT); such an analysis is made possible by a sliding window corresponding to the analysis time scale. This paper presents the analysis of phonocardiogram (PCG) signals using the continuous wavelet transform based on the fast Fourier transform (CWTFT). The results of the CWT depend on the mother wavelet function, so the analysis applies several analyzing wavelets (the Morlet wavelet, derivatives of Gaussian wavelets of the same order, and the Paul and Bump wavelets), and in each case the mean absolute difference between the original signal and the synthesis signal obtained by the FFT is measured. In this study, we indicate the possibility of a parametric analysis of PCG signals using the continuous wavelet transform, which is a completely new solution. The performance of the CWTFT in PCG signal analysis is evaluated and discussed in this article. The results obtained show the clinical utility of our extraction methods for the recognition of heart sounds (the PCG signal) and the estimation of pulmonary arterial hypertension, and also show that the severity of a mitral lesion involves severe pulmonary arterial hypertension.
Keywords: Phonocardiogram signal; Continuous wavelet transform; analyzing wavelets.
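The speed-up claimed above rests on the convolution theorem: pointwise multiplication of spectra is equivalent to circular convolution in the time domain. A minimal demonstration with a naive O(n²) DFT (a real CWTFT implementation would use an FFT, with one wavelet spectrum per scale):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circ_conv_direct(x, h):
    """Direct circular convolution, O(n^2), used here as the reference."""
    n = len(x)
    return [sum(x[m] * h[(k - m) % n] for m in range(n)) for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.0]
h = [0.25, 0.5, 0.25, 0, 0, 0, 0, 0]        # a small smoothing kernel
X, H = dft(x), dft(h)
fast = [v.real for v in idft([a * b for a, b in zip(X, H)])]
direct = circ_conv_direct(x, h)              # matches `fast` to rounding error
```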
ANALYSIS OF BREAST CANCER USING GRAY LEVEL CO-OCCURRENCE MATRIX AND RANDOM FOREST CLASSIFIER
by Tamilarasan Ananth Kumar, G. Rajakumar, T.S. Arun Samuel
Abstract: In this paper, two Neighborhood Structural Similarity (NSS) features combined with the Gray Level Co-occurrence Matrix (GLCM) are proposed for feature extraction from mammographic masses, and a Random Forest (RF) classifier is used to classify the extracted masses as benign or malignant. NSS describes the equivalence among proximate regions of masses by combining two new features, NSS-I and NSS-II. Benign masses are analogous and have systematic patterns, whereas malignant masses contain indiscriminate patterns because of their miscellaneous attributes. For benign-malignant mass classification, a number of texture features are used, namely correlation, contrast, energy and homogeneity; the GLCM quantifies the relationship between neighbouring pixels but is unable to capture the structural similarity within proximate regions, which motivates the NSS features. The performance of the features is evaluated using images from the mini-MIAS and DDSM datasets, with the Random Forest classifier performing the recognition, yielding proper classification of masses with high accuracy.
Keywords: Neighbourhood Structural Similarity; contrast; energy; homogeneity; correlation; Gray level Co-occurrence Matrix.
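The GLCM features named in the abstract can be computed directly from a normalised co-occurrence matrix. A sketch for the single (0, 1) offset on a standard toy image; correlation follows the same pattern using the marginal means and variances:

```python
def glcm_features(img, levels):
    """Normalised GLCM for the (0, 1) offset (each pixel and its right
    neighbour), with three of the Haralick-style features from the abstract."""
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            total += 1
    p = [[v / total for v in row] for row in glcm]
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for row in p for v in row)
    homogeneity = sum(p[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return p, contrast, energy, homogeneity

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p, contrast, energy, homogeneity = glcm_features(img, levels=4)
```

In practice the GLCM is usually accumulated over several offsets and angles and made symmetric before the features are computed.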
Principal and Independent Component Based Analysis to Enhance Adaptive Noise Canceller for Electrocardiogram Signals
by Mangesh Ramaji Kose, Mitul Kumar Ahirwal, Rekh Ram Janghel
Abstract: In this paper, the proposed methodology suggests a way to fulfil the need for a reference signal in adaptive filtering (AF) of electrocardiogram (ECG) signals. ECG signals are the most important form of representation and observation of different heart conditions. During the recording process, ECG signals get contaminated with different types of noise, such as baseline wander (BW), electrode motion artifact (MA) and muscle noise, also known as electromyogram (EMG) noise; this contamination distorts the normal structure of the ECG signal. Adaptive filters work well for ECG noise cancellation, but they require a reference signal or an estimate of the noise. To solve this problem, the principal and independent components (PCA and ICA) of the noisy signal are analyzed to extract the noise signal, which is then used in adaptive noise cancellation of ECG signals. Fidelity parameters such as Mean Square Error (MSE), Signal to Noise Ratio (SNR) and Maximum Error (ME) have been observed to measure the quality of the filtered signals.
Keywords: PCA; ICA; Adaptive Filters; ECG; Artifacts.
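The adaptive-noise-cancellation stage can be sketched with a single-tap LMS filter. Here the reference is simply taken to be the noise itself, standing in for the PCA/ICA-extracted component; the signal, noise gain and step size are assumptions:

```python
import random

random.seed(3)

n = 2000
clean = [1.0 if (i // 50) % 2 == 0 else -1.0 for i in range(n)]  # toy periodic "ECG"
noise = [random.gauss(0, 1.0) for _ in range(n)]
primary = [c + 0.8 * v for c, v in zip(clean, noise)]            # recorded = signal + noise
reference = noise                                                # assumed noise estimate

# Single-tap LMS: the weight w adapts so that w*reference cancels the noise.
w, mu = 0.0, 0.01
out = []
for d, x in zip(primary, reference):
    e = d - w * x        # error signal = cleaned output
    w += mu * e * x      # LMS weight update
    out.append(e)

def mse(a, b, start):
    return sum((u - v) ** 2 for u, v in zip(a[start:], b[start:])) / (len(a) - start)

before = mse(primary, clean, 1000)   # noise power before cancellation
after = mse(out, clean, 1000)        # residual after convergence
```

The weight converges toward the true noise gain (0.8 here) because the signal component of the error is uncorrelated with the reference; a multi-tap filter handles delayed or filtered references the same way.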
Accuracy Comparison of Data Mining Classification Techniques for Diabetic Disease Prediction
by Rakesh Garg
Abstract: In the present scenario, the rapid adoption of data mining (DM) techniques for predicting and categorizing symptoms in large medical datasets can be observed. Classification is one major DM technique that is widely used to classify unnoticed information in diagnostic data. In a populous country like India, diabetes is characterized as a dangerous disease that has affected a majority of the population. The present research emphasizes the accuracy comparison of various classifiers, such as J48, Random Forest, Sequential Minimal Optimization (SMO), Stochastic Gradient Descent (SGD), Naive Bayes, Logistic Regression, Random Tree, Decision Stump, Simple Logistic, Hoeffding Tree, AdaBoost and Bagging, when applied to diabetic data.
Keywords: Data Mining; Diabetes; Classification; Weka.
Breast Cancer Image Enhancement with the Aid of Optimum Wavelet-Based Image Enhancement using Social Spider Optimization
by T. Venkata Satya Vivek, C. Raju, D. Girish Kumar
Abstract: This paper aims to enhance features and obtain higher-quality breast cancer images using Optimum Wavelet-Based Image Enhancement (OWBIE) with Social Spider Optimization (SSO). Many biomedical images are of low quality, making it difficult to detect and extract information. The converted gray images are used in the filtering approach; here OWBIE with SSO, histogram equalization, an anti-forensic contrast enhancement process and curvelet-based contrast enhancement are used. The proposed technique removes noise while keeping regions moderately sharp in the input images. In the results, evaluation metrics including Mean Absolute Error (MAE), Mean Square Error (MSE), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are analyzed and plotted. The proposed process performed better in comparison with other enhancement techniques.
Keywords: Optimum Wavelet-Based Image Enhancement (OWBIE); Social Spider Optimization (SSO); Breast Cancer.
LUNG CANCER DIAGNOSIS AND STAGING USING FIREFLY ALGORITHM FUZZY C-MEANS SEGMENTATION AND SUPPORT VECTOR MACHINE CLASSIFICATION OF LUNG NODULES
Abstract: Lung nodule segmentation is an important part of automated disease screening systems for cancer identification. The morphological variations of lung nodules correspond to the likelihood of cancer. Incorrect detection of these lung nodules due to misclassification leads to false results and incorrect diagnostic strategies, and also misdirects pharmaceutical experts in preparing drugs for treatment. Different detection methods are available, but there is always scope for improvement in various parameters for better results. Therefore, in this work image enhancement is performed by histogram equalization and noise removal is carried out by an anisotropic diffusion filter. Nodule segmentation is carried out by a Firefly Algorithm Fuzzy C-Means (FA-FCM) process. Finally, after feature extraction, classification of lung cancer staging is carried out using a Support Vector Machine (SVM) classifier. The nodule is thus accurately detected, taking into account the morphological changes noted in the results, which leads to proper medicine preparation and accurate diagnosis of lung nodules.
Keywords: Lung Nodule; histogram equalization; anisotropic filter; segmentation; FAFCM; SVM.
Performance Analysis of Preprocessing Filters Using Computed Tomography Images for Liver Lesion Diagnosis
by Shanila Nazeera, Vinod Kumar R.S, Ramya Ravi R
Abstract: In medical research, segmentation can be used to separate different tissues from each other by extracting and classifying features. This paper aims to discuss the need, concept and advantages of preprocessing techniques normally used for enhancing scanned images before segmentation. In addition, three preprocessing filters used to remove artifacts from the scanned images are implemented, and the problems that arose during the simulation of liver preprocessing methods are analyzed. Of the three methods, the curvature anisotropic diffusion filter performed better than the other filters, and the obtained results are satisfactory. Finally, a detailed analysis of parameter selection for curvature anisotropic diffusion filtering in Computed Tomography images is performed.
Keywords: Image Processing; Liver Segmentation; MRI; Computed Tomography; Preprocessing; Curvature Anisotropic Diffusion Filter; Noise Removal.
Detection of Glaucoma Disease in fundus images based on Morphological Operation and Finite Element Method
by S.J. Grace Shoba, A. Brintha Therese
Abstract: The retinal vasculature has been recognized as a fundamental element in glaucoma as well as diabetic retinopathy, and segmentation of retinal blood vessels is of considerable clinical significance for diagnosing glaucoma at an early stage. For glaucoma detection, retinal images are first acquired using advanced capture devices. The present investigation develops a detailed computational model analysis of blood flow in physiologically realistic retinal arterial and venous networks. The geometries of both retinal arteries and veins are extracted from the blood vessels of the retinal fundus image using morphological operations. The segmented arteries and veins are modeled using an impedance-modeling method, and Finite Element Analysis is used to determine the biomechanical parameters of the blood, incorporating structural analysis and computational fluid analysis. The predicted parameters are classified based on blood flow attributes using a Support Vector Machine (SVM). The proposed technique achieves a maximum accuracy of 94.86% for the prediction of glaucomatous disease, compared with existing strategies.
Keywords: Glaucoma; Retinal images; blood vessel segmentation; Morphological operation; Impedance-modeling; Finite Element Analysis and Support Vector Machine.
Need for customization in preventing pressure ulcers for wheelchair patients: a load distribution approach
by Sivasankar Arumugam, Rajesh Ranganathan, T. Ravi
Abstract: Pressure ulcer (PU) is a healthcare problem that develops due to factors such as pressure, shear and friction. The causes, stages and treatment methods, along with the mechanical factors and prevention methods for PUs, are identified and analyzed. A survey revealed that, as people in wheelchairs have different weights and sitting postures, the pressure distribution varies from patient to patient; using one type of product for all is therefore inappropriate. In this work, 22 healthy subjects were selected for analysis of pressure distribution. An EMED sensor platform is used for measuring the interface pressure distribution, surface area and peak pressure distribution. The results show that the pressure distribution points vary drastically between individuals. Hence, individual customization is needed to reduce the shear and frictional forces that cause PUs, and surface customization is identified as a novel approach for wheelchair patients.
Keywords: Pressure Ulcer; Customization; Cushions; Wheelchair patients; Load distribution; Surface customization.
An Optimized Clustering Algorithm with Dual Tree DS for Lossless Image Compression
by Ruhiat Sultana, Nisar Ahmed, Syed Abdul Sattar
Abstract: The growing use of the web and other electronic applications has drawn much attention to image compression systems, which save storage space and reduce transmission time by shrinking an image through the removal of redundant data sequences. Most techniques are based on lossy compression, where the compression ratio is low. The proposed system is based on lossless compression and achieves the best compression ratio, good image quality and a high PSNR value by extracting the best features of the image to be compressed and encoding them, incorporating the firefly algorithm with the k-means algorithm to avoid the local-optima problem. For efficient compression of the derived features, quad-tree decomposition and Huffman encoding are combined, providing a high compression ratio by capturing the correct probabilities of occurrence of pixel intensities. The proposed technique is implemented in MATLAB, and the experimental results demonstrate its effectiveness in terms of high compression ratio, low noise and reduced compression and decompression times compared with existing techniques.
Keywords: Medical imaging; Information systems; Signal processing; hybrid firefly Clustering algorithm; Utilization of quad-tree.
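The clustering step can be illustrated with a plain 1-D k-means pass over pixel intensities; the firefly-based initialisation the paper adds to escape local optima is omitted here, and the fixed seed centroids are purely illustrative.

```python
# Minimal 1-D k-means (Lloyd's algorithm) on pixel intensities.
# The paper couples this with a firefly-based initialisation to avoid
# local optima; that step is omitted here in favour of fixed seeds.

def kmeans_1d(pixels, centroids, iters=20):
    centroids = list(centroids)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in pixels:
            # assign each pixel to the nearest centroid
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep a centroid unchanged if its cluster is empty
                centroids[i] = sum(c) / len(c)
    return centroids

pixels = [10, 12, 11, 200, 205, 198, 90, 95]
print(kmeans_1d(pixels, [0, 128, 255]))
```

On this toy input the three centroids converge to the means of the dark, mid and bright pixel groups.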
A Comparative Study of Feature Projection and Feature Selection Approaches for Parkinson's Disease Detection and Classification Using T1-weighted MRI Scans
by Gunjan Pahuja, T.N. Nagabhushan, Bhanu Prasad
Abstract: In this research, a comparative multivariate analysis of feature projection and feature subset selection methods has been performed, with the objective of identifying a subset of features that helps in the detection and classification of people affected by Parkinson's disease. For this study, T1-weighted MRI data were collected from the Parkinson's Progression Markers Initiative (PPMI) organization. The accuracy of a Support Vector Machine classifier was checked with different numbers of selected features during the exploratory phase. The obtained results show a clear potential for using these methods in distinguishing Parkinson's patients from normal persons. Further, to identify the brain regions responsible for the disease, the selected features are mapped back to the standard MNI brain template. An ANOVA test has been employed to show the statistical significance of the obtained results.
Keywords: Parkinson’s disease (PD); Voxel-based morphometry (VBM); Genetic Algorithm (GA); Eigenvector Centrality based discriminant analysis (ECDA); Support Vector Machine (SVM); Analysis of Variance (ANOVA).
Interscale Adaptive Threshold Wavelet Filter for Ultrasound Image Despeckling
by Nirmaladevi Periyasamy
Abstract: In this paper, a new wavelet shrinkage algorithm in the undecimated wavelet domain is proposed, using an interscale adaptive threshold and an exponential thresholding function, for the removal of speckle noise in ultrasound images. An improved spatially adaptive threshold is presented, exploiting the inter- and intra-scale dependency of the wavelet coefficients. A new exponential thresholding function is proposed to overcome the over-smoothing produced by the hard and soft thresholding functions. The reconstructed image exhibits improved noise removal with preservation of fine details. Performance of the filter is measured using metrics such as Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity Index Measure (SSIM), Equivalent Number of Looks (ENL) and Edge Preservation Index (EPI). A comparison of the results reveals that the proposed filter shows significant improvement both in quantitative measures and in the visual quality of the images.
Keywords: Undecimated Wavelet Transform; Inter and Intra scale dependency; Speckle; Adaptive threshold; Ultrasound images; Edge Preservation.
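The thresholding rules discussed above can be sketched as follows. The hard and soft rules are standard; the exponential form shown is a common shrinkage variant used here as an illustrative assumption, not necessarily the exact function proposed in the paper.

```python
import math

# Hard and soft wavelet-coefficient thresholding, plus one exponential
# shrinkage variant. The exponential form is an illustrative assumption:
# it shrinks small coefficients smoothly toward zero while leaving large
# coefficients almost untouched, avoiding the abrupt cut of hard
# thresholding and the constant bias of soft thresholding.

def hard_threshold(w, T):
    return w if abs(w) > T else 0.0

def soft_threshold(w, T):
    return math.copysign(max(abs(w) - T, 0.0), w)

def exp_threshold(w, T, k=2.0):
    if w == 0.0:
        return 0.0
    return w * math.exp(-((T / abs(w)) ** k))

coeffs = [-3.0, -0.5, 0.2, 1.5, 4.0]
T = 1.0
print([round(soft_threshold(w, T), 3) for w in coeffs])
```

Applied to a detail sub-band, each rule suppresses the small (noise-dominated) coefficients while retaining the large (edge-dominated) ones.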
e-Health Relationships in Diabetes: 50-Week Evaluation
by Luuk Simons, Hanno Pijl, John Verhoef, Hildo Lamb, Ben Van Ommen, Bas Gerritsen, Maurice Bizino, Marieke Snel, Ralph Feenstra, Catholijn Jonker
Abstract: Hybrid eHealth support was given to 11 insulin-dependent Type 2 Diabetes Mellitus (DM2) patients: electronic support plus a multi-disciplinary health support team. The challenges were low ICT literacy and low health literacy. After 50 weeks, the attractiveness and feasibility of the intervention were perceived as high: recommendation 9.5 out of 10 and satisfaction 9.6 out of 10. Technology Acceptance Model (TAM) surveys showed high usefulness and feasibility. Acceptance and health behaviours were reinforced by the sustained health results: aerobic and strength capacity levels were improved at 50 weeks, as was health-related quality of life (biometric benefits and medication reductions are reported elsewhere). Regarding eHealth theory, we conclude that iterative skill-growth cycles are beneficial for long-term adoption and e-relationships. The design analysis also shows opportunities for additional affective and social support, on top of the strong benefits already apparent from the direct progress-feedback loops used within the health-coach processes.
Keywords: Type 2 Diabetes (DM2); eHealth; Lifestyle; Monitoring; Coaching; Blended Care; Service Design.
Application of Variational Mode Decomposition in Automated Migraine Disease Diagnosis
by K. Jindal, R. Upadhyay, M. Vijay, A. Dube, A. Sharma, K. Gupta, J. Gupta
Abstract: The clinical diagnosis of migraine, if supplemented by analysis of electroencephalograph (EEG) signals, may help in delineating the neural correlates, management and prognosis of the disease. Recent advances in biomedical signal processing have led to the development of various feature extraction and classification techniques for multiresolution analysis of EEG signals and diagnosis of diseased conditions. In the present work, a methodology using Variational Mode Decomposition (VMD) is proposed for migraine diagnosis from EEG signals. In the proposed methodology, VMD is employed to decompose the EEG signals into a number of modes. Sample Entropy and Higuchi's Fractal Dimension are estimated from the decomposed modes as features, and three soft-computing techniques, viz. Neural Network, Support Vector Machine and Random Forest, are used to classify the extracted features. The classification results indicate that the proposed methodology effectively identifies migraine patients from EEG data.
Keywords: Electroencephalogram; Variational Mode Decomposition (VMD); Migraine; Fractal Dimension; Entropy.
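Sample entropy, one of the two features extracted from the decomposed modes, can be computed directly from its definition. In this sketch the tolerance r is an absolute value; in practice it is often set to 0.2 × the signal's standard deviation.

```python
import math

# Sample entropy SampEn(m, r) = -ln(A / B), where B counts matching
# template pairs of length m and A of length m + 1 (Chebyshev distance
# <= r, self-matches excluded). Used as a feature on a decomposed mode.

def sample_entropy(x, m=2, r=0.2):
    n = len(x)

    def count_matches(mm):
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float('inf')
```

A strictly periodic signal yields a low value (high regularity); an irregular signal yields a higher one.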
A method for the classification of mammograms using statistics-based feature extraction
by Nebi Gedik
Abstract: This paper presents a classification system for mammograms using the wave atom transform and a feature selection process based on t-test statistics. Mammogram images are transformed into wave atom coefficients, and a matrix is constructed from the coefficients. This matrix is used as the feature matrix for classifying the mammograms. To achieve the maximum classification accuracy, t-test statistics with dynamic thresholding are additionally applied. A support vector machine is employed as the classifier. According to the experimental results, the proposed method provides a successful contribution to the classification of mammographic images.
Keywords: Mammogram; Classification; Feature extraction; Feature selection; Thresholding; t-test statistics; Wave atom transform; SVM; Normal-abnormal classification; Benign-malignant classification.
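The t-test-based feature ranking can be sketched as a per-feature two-sample (pooled-variance) t statistic; the fixed threshold below is a hypothetical parameter standing in for the paper's dynamic thresholding.

```python
import math

# Per-feature two-sample t statistic (pooled variance), as used to rank
# features such as wave-atom coefficients; features whose |t| exceeds a
# threshold are kept for classification.

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

def select_features(X_a, X_b, thresh):
    # X_a, X_b: lists of samples (rows) from the two classes,
    # one column per feature
    n_feat = len(X_a[0])
    ts = [t_statistic([r[j] for r in X_a], [r[j] for r in X_b])
          for j in range(n_feat)]
    return [j for j, t in enumerate(ts) if abs(t) >= thresh]
```

Features whose class means differ strongly relative to their pooled spread get a large |t| and survive the selection.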
A Methodological Review on Computer Aided Diagnosis of Glaucoma in Fundus Images
by Sumaiya Pathan, Preetham Kumar, Radhika M. Pai
Abstract: Advances in computerised image analysis and retinal imaging modalities have significantly contributed to the growth of image-based diagnosis. Glaucoma is an ocular disorder that results in irreversible vision loss. The progression of glaucoma is silent and shows no symptoms in the early stages. Vision loss due to glaucoma has been increasing significantly compared with other retinal disorders. The reliability of glaucoma diagnosis is limited by the experience and domain knowledge of the ophthalmologist. A computer-based diagnostic system built on image processing algorithms can screen large populations at lower cost, reduce human error and make the diagnosis more objective. This paper reviews the state-of-the-art methodologies employed in developing computer-aided diagnosis of glaucoma from retinal fundus images, along with future trends.
Keywords: Classification; Cup-to-Disk Ratio; Glaucoma; Optic Disk; Optic Cup; Segmentation.
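The cup-to-disc ratio (CDR) named in the keywords can be illustrated from binary segmentation masks. This sketch uses an area-based ratio; note that clinical CDR is often defined on the vertical diameters instead.

```python
# Cup-to-Disc Ratio (CDR), the most common structural glaucoma
# indicator reviewed here, computed from binary segmentation masks of
# the optic cup and optic disc. Area-based variant (an assumption);
# clinical practice often uses the vertical diameter ratio.

def area(mask):
    # mask: 2-D list of 0/1 values
    return sum(sum(row) for row in mask)

def cup_to_disc_ratio(cup_mask, disc_mask):
    return area(cup_mask) / area(disc_mask)

disc = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]
cup  = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(cup_to_disc_ratio(cup, disc))
```

A larger CDR (an enlarged cup relative to the disc) is the feature most CAD pipelines in this review ultimately threshold or classify on.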
Automated Melanoma Skin Cancer Detection from Digital Images
by Shalu, Rajneesh Rani, Aman Kamboj
Abstract: Early diagnosis of melanoma is important for treating the illness and saving lives. This paper focuses on the development of a system for automatic detection of melanoma skin cancer. One objective of this study is to identify the importance of different colour spaces in melanoma detection; another is to compare colour features and texture features to determine which type has more discriminative power for correctly identifying melanoma. The analysis uses the MEDNODE dataset of digital images, which contains a total of 170 images (100 nevi and 70 melanoma). The results show that the combination of features extracted from the HSV (Hue, Saturation, Value) and YCbCr (Y is the luma component; Cb and Cr are the two chroma components) colour spaces gives better performance than features extracted from other colour spaces. The system also performs better with colour features than with texture features. Using features extracted from the HSV and YCbCr colour spaces, the system achieves an accuracy of 84.11%, higher than earlier approaches on this dataset.
Keywords: Malignant Melanoma; Skin Cancer Diagnosis; Color and Texture Features.
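The colour-feature extraction can be sketched for the HSV space using the standard library; per-channel mean and standard deviation are assumed as the statistics, and the YCbCr branch is omitted.

```python
import colorsys

# Per-channel mean/std colour features after converting RGB pixels to
# HSV. The study concatenates HSV and YCbCr statistics before
# classification; only the HSV branch is shown, and the choice of
# mean/std as the statistics is an assumption.

def hsv_features(rgb_pixels):
    # rgb_pixels: list of (r, g, b) tuples with values in [0, 255]
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in rgb_pixels]
    feats = []
    for ch in range(3):
        vals = [p[ch] for p in hsv]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats += [mean, var ** 0.5]
    return feats  # [mean_H, std_H, mean_S, std_S, mean_V, std_V]
```

For a lesion region the six numbers summarise its dominant hue, colour purity and brightness, plus their spreads.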
Segmentation of Cartilage in Knee Magnetic Resonance Images using Gabor and Matched Filter and Classification of Osteoarthritis using Adaptive Neuro-Fuzzy Inference System
by Jayashree Palanisamy, Ragupathy Uthandipalayam Subramaniyam
Abstract: Osteoarthritis (OA), also known as degenerative arthritis, is a group of mechanical abnormalities occurring in joints such as the knee, finger and hip regions. Knee OA is believed to be highly prevalent today because of aging and obesity. The knee region contains complex structures whose appearance varies significantly from one image to another, so measuring or detecting particular structures in such images can be a daunting task.
OA in a knee image can be identified by segmenting the bone and cartilage, but finding the region of interest between bone and fat tissue is difficult, and manual and semi-automatic segmentation methods are time-consuming and complex. The proposed methodology overcomes these limitations. A method is described for the classification of OA in knee Magnetic Resonance Images (MRI) that segments the cartilage region from the femur and tibia bones. The images are pre-processed using contrast enhancement and Contrast Limited Adaptive Histogram Equalization (CLAHE), then processed with Matched and Gabor filters for clear recognition of cartilage against the background. Remaining noise is eliminated using a Median filter. Features are extracted using the Gray Level Co-occurrence Matrix (GLCM) and used for the classification of OA with an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier. The datasets are obtained from the Osteoarthritis Initiative (OAI) database and Ganga Hospital, Coimbatore.
Keywords: Osteoarthritis; MRI; Gabor filter; Matched filter; CLAHE; Grey Level Co-occurrence Matrix (GLCM); Adaptive Neuro Fuzzy Inference System (ANFIS).
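The GLCM feature step can be sketched for a horizontal one-pixel offset, with contrast and energy as two representative Haralick-style features; the exact offsets and feature set fed to the ANFIS classifier may differ.

```python
# Gray-Level Co-occurrence Matrix (GLCM) for a horizontal offset of one
# pixel, followed by two common GLCM features (contrast, energy) of the
# kind used as texture descriptors before classification.

def glcm(img, levels):
    # img: 2-D list of integer gray levels in [0, levels)
    M = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):  # (pixel, right neighbour) pairs
            M[a][b] += 1
            total += 1
    return [[v / total for v in r] for r in M]  # normalised to probabilities

def contrast(P):
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(P):
    return sum(v * v for r in P for v in r)

img = [[0, 0, 1],
       [1, 1, 0]]
P = glcm(img, 2)
print(contrast(P), energy(P))
```

Contrast grows with local intensity variation, while energy is highest for uniform textures; together they summarise the texture differences between healthy and degraded cartilage regions.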
A New Deep Learning Structure to Improve Detection of P300 Signals: Using Supervised Neural Networks as Convolutional Kernel of CNN
by Syed Vahab Shojaedini, Sajedeh Morabbi, MohammadReza Keyvanpour
Abstract: Brain-Computer Interface (BCI) systems provide a safe and reliable interface between the brain and the outer world, and detecting the P300 signal plays a vital role in these systems. In recent years, Convolutional Neural Networks (CNNs) have driven vast and rapid development in P300 signal detection. In this paper, a novel CNN structure is proposed to enhance the separability of the features selected in its convolutional layer. In the proposed scheme, an artificial neural network is applied in that layer as a nonlinear filter, extracting nonlinear features that improve the detection of P300 signals. The performance of the proposed structure is assessed on the EPFL BCI group dataset, and the results are compared with the basic structure for P300 detection. The proposed structure improves the True Positive Rate (TPR) over its alternative by 19.69%; the corresponding improvements in false detections and accuracy are 1.97% and 10.87%, which show the effectiveness of the proposed structure in detecting P300 signals.
Keywords: Brain-Computer Interface; P300 Signal Detection; Convolutional Neural Network; Convolutional Kernel; Nonlinear Filter.
Gustatory Stimulus Based ElectroEncephaloGram (EEG) Signal Classification
by Kalyana Sundaram Chandran, Marichamy Perumalsamy
Abstract: A Brain Computer Interface (BCI) provides direct communication between the human brain and a personal computer (PC). A BCI acquires signals from the brain and interprets them to control external devices. Taste Composition (TASCO)-based EEG signal classification is used to differentiate normogeusia from hypogeusia. Since an electroencephalography (EEG) signal is non-stationary and time-varying, features can be extracted in either the time domain or the frequency domain. The proposed method is mainly used to identify problems in human organs using TASCO. The TASCO EEG signal is pre-processed with an FIR band-pass filter to mitigate noise artefacts. The Discrete Wavelet Transform (DWT), which gives both time- and frequency-domain representations, is used for feature extraction: it decomposes the filtered EEG signal into its associated frequency bands, and the statistical features of the detail coefficients of the alpha wave are analysed in the time domain. The Mean Absolute Value (MAV), the average of the absolute value of the EEG signal, and the variance of the signal are used as statistical features. The extracted features are classified using a multilayer perceptron neural network classifier, which provides high classification accuracy. In this paper, sour TASCO is analysed to identify gall bladder problems. The proposed method improves the accuracy and performance of the system to 95%, which is not achieved by conventional methods.
Keywords: BCI; Discrete Wavelet Transform; EEG; FIR Band pass filter; gustatory stimuli; MLP; Taste Composition.
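The MAV and variance features described above can be sketched on the detail coefficients of a one-level Haar DWT. The unnormalised average/difference convention is used here for clarity; the paper's wavelet and normalisation may differ.

```python
# One-level Haar DWT (average/difference convention) followed by the two
# statistical features the paper computes on the detail coefficients:
# Mean Absolute Value (MAV) and variance.

def haar_dwt(x):
    # x: even-length signal; returns (approximation, detail) coefficients
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def mav(x):
    return sum(abs(v) for v in x) / len(x)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

_, d = haar_dwt([4, 2, 6, 6, 1, 3, 5, 1])
print(mav(d), variance(d))
```

Applying the decomposition recursively to the approximation band yields the frequency sub-bands (e.g. the alpha range) from which these statistics are taken.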
MRI Brain Image Volume Property-based Acceleration of Medical Image Algorithms Using a CUDA-Supported GPU Machine
by Sriramakrishnan Pathmanaban
Abstract: This paper elaborates the design and implementation of parallel image processing techniques used to accelerate medical image algorithms on a CUDA-supported GPU. The algorithms are chosen from denoising, morphology, clustering and segmentation. Three parallel computing models are proposed based on the properties of the algorithms and of the MRI volume. The acceleration of the parallel algorithms is compared with a sequential CPU implementation, measured in terms of speedup folds.
Keywords: Graphics processing units; Compute unified device architecture; Parallel processing; Medical imaging; Brain volume; Per-pixel threading; Per-slice threading; Hybrid threading; Bilateral filter; Non-local means; K-means clustering; Fuzzy-c-means clustering.
Steady State VEP-based BCI to Control a 5-Digit Robotic Hand Using LabVIEW
by Sandesh R S, Nithya Venkatesan
Abstract: This paper proposes the use of Steady State Visual Evoked Potential (SSVEP) signals to control a five-digit robotic hand, with LabVIEW as the software platform. The experimental setup consists of Ag/AgCl electrodes with 10-20 gel; a low-cost, rechargeable battery-operated EEG amplifier; a handmade stimulation panel flickering at a frequency of 21 Hz with a Light Emitting Diode as the source; LabVIEW to implement wavelet analysis for feature extraction and linear discriminant analysis for classification; and an NI USB-DAQ providing the interface between EEG acquisition and the robotic hand. A state-machine chart algorithm using the PWM technique is implemented for speed control of the 71 RPM miniature metal-gear DC motors positioned in the robotic hand. The experiment was carried out on five subjects, each undergoing five trials with two recordings of SSVEP signals per trial. The experimental results indicate that the subjects' SSVEP signals controlled the robotic hand to pick up a woollen ball with an accuracy of 84% and a mean time of 44.6 seconds. The obtained results were compared against those of similar works.
Keywords: SSVEP signals; EEG amplifier; LabVIEW; NI-USB DAQ; Robotic Hand; Wavelet analysis; Linear Discriminant Analysis.
Adaptive Fractional Order Controller with Smith Predictor based Propofol Dosing in Intravenous Anaesthesia Automation
by Bhavina Patel
Abstract: This paper proposes a clinical simulation model for automatic propofol dose delivery. We suggest an Adaptive Fractional Order Controller with Smith Predictor (AFCSP), designed on the CRONE (Commande Robuste d'Ordre Non Entier) principle, to provide an adequate hypnotic intravenous infusion regimen for propofol. The main aim of the proposed design is to avoid the complexity of frequent adaptation and to provide an alternative approach to adaptation based on sensitivity parameters in place of the BIS signal. The controller is designed by a model-based analytical method with two time-domain tuning parameters, using explicit equations instead of complex nonlinear equations while yielding the same results. The AFCSP uses a combination of bolus and continuous dosing. Robustness tests of the AFCSP are carried out under patient variability, time delays and surgical stimuli, and the results are compared with conventional methods. The scheme is advantageous in terms of speed of response and reduced oscillations and overshoots in BIS, and it is also examined on a real dataset of 31 different patients.
Keywords: Adaptive Fractional Order Controller with Smith Predictor Controller; Depth of Anaesthesia; Intravenous; BIS; Propofol.
Quantitative Evaluation of Denoising Techniques of Lung Computed Tomography Images: An Experimental Investigation
by Bikesh Kumar Singh
Abstract: Appropriate selection of a denoising method is a critical component of lung computed tomography (CT) based computer-aided diagnosis (CAD) systems, since noise and artifacts may deteriorate the image quality significantly, leading to incorrect diagnosis. This study presents a comparative investigation of various techniques used for denoising lung CT images. Current practices, evaluation measures, research gaps and future challenges in this area are also discussed. Experiments on 20 real-time lung CT images indicate that a Gaussian filter with 3
Keywords: Image denoising; lung computed tomography; computer aided diagnosis; image smoothening; edge preservation; quantitative evaluation; image contrast; picture signal to noise ratio; image quality; noise attenuation; time domain and frequency domain.
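Two of the usual quantitative measures in such comparisons, MSE and PSNR, can be sketched directly from their definitions for 8-bit images.

```python
import math

# Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) between
# a reference image and its denoised version, two of the quantitative
# measures used to compare denoising filters.

def mse(ref, img):
    # ref, img: 2-D lists of equal shape
    n = sum(len(r) for r in ref)
    return sum((a - b) ** 2
               for ra, rb in zip(ref, img)
               for a, b in zip(ra, rb)) / n

def psnr(ref, img, peak=255.0):
    e = mse(ref, img)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)

ref = [[100, 100], [100, 100]]
den = [[100, 110], [90, 100]]
print(mse(ref, den), round(psnr(ref, den), 2))
```

Higher PSNR means the denoised output is closer to the reference; edge-preservation metrics such as EPI complement it by penalising over-smoothing that PSNR can miss.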
Real-Time Epileptic Detection from EEG Signals using Statistical Features Optimization and Neural Networks Classification
by Badreddine Mandhouj, Sami Bouzaiane, Mouhamed Ali Cherni, Ines Ben Abdelaziz, Slim Yacoub, Mounir Sayadi
Abstract: This paper describes a completely automated approach to enhance the diagnosis of epilepsy, one of the most prevalent neurological disorders. The work is organised in three main parts. In the first part, we optimise the statistical features extracted from the EEG signals using a characterisation degree. These features are then applied to a multilayer neural network (MNN) classifier. In the third part, we use a Digital Signal Peripheral Interface Controller (dsPIC) to implement the real-time EEG classification process. The EEG data are taken from the publicly available database of the University of Bonn and are classified into healthy and epileptic subjects. To assess the performance of the classification method, several measures (sensitivity, specificity and accuracy) were evaluated and provided interesting results.
Keywords: Electroencephalogram; Epilepsy; Statistical Features; classification; Characterization degree; Optimization; Multilayer neural network; Real-Time; dsPIC.
The evaluation of the healing process of diabetic foot wounds using image segmentation and neural networks classification
by Bruno Da Costa Motta, Marina Pinheiro Marques, Guilherme Dos Anjos Guimarães, Renan Utida Ferreira, Suélia De Siqueira Rodrigues Fleury Rosa
Abstract: Objective: The diabetic foot is characterised by infection or ulceration of the tissues of the lower limbs. To hinder the evolution of this disease, patients need to be monitored, and the evolution and healing processes of the ulcers must be documented. Method: This paper proposes an easy-to-use computer program that segments ulcers based on the colour of the scar tissue and automatically classifies them into three classes using an artificial neural network, in order to help and ease the diagnosis given by health professionals. Result: The total area of the ulcer, the colour characteristics of the scar tissue and the dimensions of the ulcer can be used as parameters in the diagnosis. Conclusion: The developed technique detected and computed the area of the ulcers using an imaging protocol, facilitating its application at hospitals and care units.
Keywords: Wound healing; Medical informatics; Diabetic foot; Image processing; Neural Network.
Mental task classification using wavelet transform and support vector machine
by Pravin Kshirsagar
Abstract: The present research classifies various mental tasks relating to human cognitive function disorders using the Discrete Wavelet Transform (DWT) and a Support Vector Machine (SVM). The Electroencephalogram (EEG) database was obtained from the online Brain Computer Interface (BCI) Competition paradigm III and from an offline B-Alert EEG machine at CARE Hospital, Nagpur. EEG signals from paralysed patients are decomposed into frequency sub-bands using the DWT, and a set of statistical features extracted from the sub-bands, representing the distribution of the wavelet coefficients, is used to reduce the dimension of the data. These features are applied to the SVM for classification of left-hand and right-hand movement. With this system, EEG signals are classified with an accuracy of 91.66% for BCI Competition paradigm III and 97% for the B-Alert machine.
Keywords: BCI; Brain Computer Interface; EEG; Electroencephalogram; Mental Task; DWT; Discrete Wavelet Transform; B- Alert Machine; Classification; SVM; Support Vector Machine; Accuracy; Error; ANN; Artificial Neural Network.
Optimization of Data-set for Classification of Diabetic Retinopathy using Support Vector Machine with Minimal Processing
by Amol Golwankar, Pranav Pailkar, Purvika Patil, Rajendra Sutar
Abstract: Diabetic retinopathy is a disease of the retinal region caused by a reduced level of insulin in the body or by the pancreas failing to process insulin properly. If the disease is not recognised in time, it may cause permanent blindness. This paper illustrates an optimised approach to developing a classifier that helps in diagnosing the disease and checking its severity, using a large dataset of 1900 retinal photographs obtained from the Kaggle Diabetic Retinopathy Detection dataset. The proposed classifier classifies the retinal pictures based on relevant feature values calculated from primary features extracted from the pre-processed and raw images. Classification is performed by a support vector machine algorithm that assigns the retinal images to stages: normal with no signs of retinopathy, mild retinopathy, moderate retinopathy, severe retinopathy, and proliferation of blood vessels, with an accuracy of 91.2 percent.
Keywords: Diabetic Retinopathy; Retinal images; Pre-processing; Feature extraction; Machine learning; Support Vector Machine.
A Topological Approach for Mammographic Density Classification Using a Modified Synthetic Minority Over-Sampling Technique Algorithm
by Imane Nedjar, Said Mahmoudi, Mohamed Amine Chikh
Abstract: Mammographic density is known to be a risk indicator for the development of breast abnormalities. Breast tissue classification is therefore an important part of computer-aided diagnosis (CAD) systems for detecting cancer. In this paper, a CAD system for breast tissue classification using a class-balancing approach is proposed. The first contribution of this paper is a representation of the texton distribution by a topological map, which allows good mammographic density classification using the distribution of breast tissue. The second contribution is the balancing of the dataset in the CAD system; to this end, an improvement of the Synthetic Minority Over-Sampling Technique (SMOTE) algorithm is developed. Our experiments are carried out on the MIAS and DDSM datasets to validate the CAD system, and on two further datasets to validate the proposed modified SMOTE algorithm. The obtained results confirm the validity of the proposal.
Keywords: breast tissue classification; SMOTE; textons; computer aided diagnosis systems; mammography; parenchymal patterns; feature extraction; BI-RADS; classification; imbalanced data sets.
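The baseline SMOTE algorithm that the paper modifies can be sketched as nearest-neighbour interpolation within the minority class; the paper's modification itself is not reproduced here.

```python
import random

# Minimal baseline SMOTE: for each synthetic point, pick a minority
# sample, choose one of its k nearest minority neighbours, and
# interpolate a new point at a random position between them.
# (The paper proposes a modified variant; this is the standard scheme.)

def smote(minority, n_synthetic, k=3, seed=0):
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        p = rng.choice(minority)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist(p, q))[:k]
        q = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(p, q)))
    return synthetic
```

Because each synthetic point lies on a segment between two minority samples, the oversampled class stays inside the convex hull of the original minority data.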
Wavelet-based Imagined Speech Classification Using Electroencephalography
by Dipti Pawar, Sudhir Dhage
Abstract: Introduction: Oral communication is the natural way in which humans interact. However, in some circumstances it is not possible to emit an intelligible acoustic signal, or it is desired to communicate without making sounds. In these conditions, systems that enable spoken communication in the absence of an acoustic signal are desirable. In this context, the Brain-Computer Interface (BCI) is a remarkable way of solving daily-life problems.
Objective: The major objective of the proposed research is to develop an imagined-speech classification system based on Electroencephalography (EEG). Research in the field shows an association between recorded EEG data and the production of speech; we wish to analyse whether this also holds for imagined speech.
Approach: We propose an imagined-speech recognition system consisting of preprocessing, feature extraction and classification. In the preprocessing stage, EOG artefacts are removed via independent component analysis (ICA). The discrete wavelet transform (DWT) is used to extract wavelet-based features from EEG segments. Finally, a support vector machine (SVM) is employed to discriminate the extracted features.
Main Results: The proposed research achieves promising classification accuracy compared with some of the most common classification techniques in BCI.
Significance: The results indicate significant potential for the use of a speech-prosthesis controller in clinical and military applications.
Keywords: Electroencephalography; Brain-Computer Interface; DWT; Imagined Speech; SVM.
Optimization of an Electrical Impedance Techniques-based System for Medical and Non-Medical Application Monitoring
by Ramesh Kumar, Sharvan Kumar, Amit Sengupta
Abstract: Non-invasive techniques are presently in vogue and are the preferred standard approach because of their many advantages in monitoring, in real time, phenomena occurring within the human body without much interference. In this paper, the proposed Electrical Impedance Tomography (EIT) system monitors and records the distributed electric field over an object's surface. This imaging technique is based on the internal electrical-conductivity distribution of the body: the image is reconstructed from electrical measurements taken by electrodes attached to the circumference of the object. A constant current is injected at the boundary of the object through a pair of electrodes, the resulting voltages are measured at the boundary, and the measurements are fed into a computer. The reconstruction of the cross-sectional resistivity image requires sufficient data collection. The image reconstruction algorithm is controlled by a graphical user interface (GUI) window on the MATLAB platform. The EIT technique offers several benefits over other imaging modalities.
Keywords: Bio-Impedance; Electrical Impedance Tomography; Impedance Plethysmography; Electrode; Phantom; Current source; Conductivity; Resistivity; Imaging modalities; Image Reconstruction; Forward Problem; Inverse Problem; Finite Element Method; Graphical User Interface; Medical Monitoring; Industrial Monitoring.
Numerical Assessment of a 3-D Human Upper Respiratory Tract Model: Effect of Anatomical Structure on Asymmetric Tidal Pulmonary Ventilation Characteristics
by Digamber Singh, Anuj Jain, Akshoy Ranjan Paul
Abstract: The analysis of airway ventilation characteristics is important for the diagnosis of, and pathological assistance with, respiratory diseases. It is therefore imperative to study the impact of anatomical features on the internal flow field. This article presents an in-silico study of the impact of the anatomical structure of the human upper respiratory tract on transient asymmetric tidal pulmonary ventilation characteristics. A three-dimensional model of the human airways, from the nasal cavity to the 7th-generation bronchi, is reconstructed from Computed Tomography (CT) images of a healthy 48-year-old man using computational modelling techniques. A validated low Reynolds number (LRN) realizable k-ε turbulence model is used to capture the mixed turbulence characteristics of the internal flow. Numerical simulations were performed for asymmetric low and high tidal pulmonary ventilation (ALTPV, 10 L/min, and AHTPV, 40 L/min). The numerical analysis helps predict near-realistic airway ventilation phenomena and the internal flow physics in the upper respiratory tract.
Keywords: Human upper respiratory tract (HURT); Asymmetric low tidal pulmonary ventilation (ALTPV); Asymmetric high tidal pulmonary ventilation (AHTPV); Computed tomography (CT); Computational fluid dynamics (CFD); Transient; Wall shear stress (WSS); LRN k- ε turbulence model.
Automated Segmentation and Classification of Nuclei in Histopathological Images
by Sanjay Vincent, Chandra J
Abstract: Various kinds of cancer are detected and diagnosed using histopathological analysis. Recent advances in whole-slide scanner technology and the shift towards digitisation of whole slides have inspired the application of computational methods to histological data. Digital analysis of histopathological images has the potential to tackle issues accompanying conventional histological techniques, like the lack of objectivity and high variability. In this paper, we present a framework for the automated segmentation of nuclei from human histopathological whole-slide images, and their classification using morphological and colour characteristics of the nuclei. The segmentation stage consists of two methods, thresholding and the watershed transform. The features of the segmented regions are recorded for the classification stage. Experimental results show that the knowledge from the selected features is capable of classifying a segmented object as a candidate nucleus and filtering out the incorrectly identified segments.
Keywords: Histopathological Images; Whole Slide Images; Digital Image Analysis; Segmentation; Nuclei; Annotated; Nuclear; Computer-Assisted Diagnosis; Machine Learning; Classifier; Deep Learning.
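The thresholding stage of such a nuclei-segmentation pipeline can be sketched with Otsu's method, one common choice (the abstract does not specify the variant, so the function below is an illustrative NumPy implementation on a toy image, not the authors' code):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the gray level that maximises between-class variance."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins, range=(0, 255))
    prob = hist.astype(float) / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0

    w0 = np.cumsum(prob)             # cumulative weight of the background class
    w1 = 1.0 - w0                    # weight of the foreground class
    mu0 = np.cumsum(prob * centers)  # unnormalised background mean
    mu_total = mu0[-1]

    # Between-class variance for every candidate threshold (guard against 0/0).
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_total * w0[valid] - mu0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Toy grayscale image: dark background (~30) with two bright "nuclei" (~200).
rng = np.random.default_rng(0)
img = rng.normal(30, 5, size=(64, 64))
img[10:20, 10:20] = rng.normal(200, 5, size=(10, 10))
img[40:55, 35:50] = rng.normal(200, 5, size=(15, 15))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
mask = img > t  # candidate nucleus pixels for the watershed/classification stages
```

The resulting binary mask would then feed the watershed transform and the feature-recording step described in the abstract.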
Comparison of Variational Mode Decomposition and Empirical Wavelet Transform methods on EEG signals for Motor Imaginary applications
by Keerthi Krishnan K, Soman K P
Abstract: A reliable method for implementing brain-computer interface (BCI) systems using electroencephalogram (EEG) signals is proposed. The applicability of two modal decomposition methods, Variational Mode Decomposition (VMD) and Empirical Wavelet Transform (EWT), to EEG signals for identifying four different motor imagery movements is analysed and compared through the investigation of Event-Related Desynchronisation (ERD) activity in the Mu-beta rhythm of the EEG signals. The EEG signals from each electrode corresponding to the sensorimotor cortex area of the brain are decomposed using the VMD and EWT methods. Each decomposed mode is modelled using Auto Regressive (AR) modelling, and a feature vector is formed from the AR model parameters. On classification, better accuracy is observed for the VMD method in comparison with the EWT and Common Spatial Pattern (CSP) methods developed on the same data set.
Keywords: VMD; EWT; EEG; SMR; Event-Related Desynchronisation; Motor Imaginary-BCI; BCI competition data set IIIa; Short Time Fourier Transform; AR model; libSVM classifier; Neural network classifier.
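The AR-modelling step used to build the feature vectors can be illustrated with the Yule-Walker equations; the following is a minimal NumPy sketch on a synthetic AR(2) process (function names are illustrative, not from the paper):

```python
import numpy as np

def ar_coefficients(x, order):
    """Estimate AR model parameters via the Yule-Walker equations.

    Solves R a = r, where R is the Toeplitz autocorrelation matrix,
    and returns the coefficients plus the driving-noise variance.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    noise_var = r[0] - np.dot(a, r[1:])
    return a, noise_var

# Synthetic AR(2) process: x[t] = 0.75*x[t-1] - 0.5*x[t-2] + e[t]
rng = np.random.default_rng(1)
e = rng.normal(0, 1, 5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

a, v = ar_coefficients(x, order=2)  # a should be close to [0.75, -0.5]
```

In the paper's pipeline, such coefficients would be computed for every VMD or EWT mode and concatenated into the feature vector.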
Power Line Interference Cancellation from ECG Using Proportionate Normalized Least Mean Square Sub band Adaptive Algorithms
by B. Bhaskara Rao, B. Prabhakara Rao
Abstract: The electrocardiogram (ECG) is a noninvasive recording of the electrical activity of the heart, during which noise such as power-line interference (PLI) at 60 Hz is picked up from power lines. To remove PLI effectively while preserving the underlying components of an ECG signal, a powerful tool for the removal of PLI from a range of signals was introduced earlier. In this research, a multiband-structured subband adaptive filter (MSAF) is developed to clear up structural problems in conventional subband adaptive filters. This paper investigates in detail an adaptive noise canceller (ANC) for ECG signals: a multi-level decomposition is carried out on the noisy signal, which is then split into low and high subbands using uniform filter bank (UFB) and non-uniform filter bank (NUFB) structured MSAF with the Proportionate NLMS (PNLMS) and Improved Proportionate NLMS (IPNLMS) algorithms. Computer simulation demonstrates that the proposed design gives elevated performance and achieves correct adaptation.
Keywords: ECG; IPNLMS; NUFB; UFB; MSAF; SAF.
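The adaptive noise cancellation idea behind this work can be sketched with a plain full-band NLMS canceller, the baseline on which the proportionate subband variants (PNLMS/IPNLMS with UFB/NUFB) build; the sketch below is a simplified illustration, not the paper's algorithm:

```python
import numpy as np

def nlms_cancel(primary, reference, taps=8, mu=0.05, eps=1e-8):
    """Adaptive noise canceller: subtract an adaptively filtered reference
    (mains interference) from the primary (noisy ECG) input via NLMS."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        u = reference[n - taps:n][::-1]        # reference tap vector
        y = np.dot(w, u)                       # interference estimate
        e = primary[n] - y                     # cleaned sample = error signal
        w += mu * e * u / (np.dot(u, u) + eps) # normalised LMS update
        out[n] = e
    return out

fs = 360.0                                     # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)              # stand-in for the ECG component
pli = 0.5 * np.sin(2 * np.pi * 60 * t + 0.7)   # 60 Hz power-line interference
noisy = ecg + pli
ref = np.sin(2 * np.pi * 60 * t)               # reference picked up from mains

clean = nlms_cancel(noisy, ref)
tail = slice(len(t) // 2, None)                # evaluate after convergence
err = np.mean((clean[tail] - ecg[tail]) ** 2)
```

The proportionate variants differ only in how the step size is distributed across taps, and the subband structure applies this update per band after filter-bank decomposition.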
Biotechnical System and fuzzy logic Models for Prediction and Prevention of Post-Traumatic Inflammatory Complications in Patients with Closed Renal Trauma
by Riad Taha Al-kasasbeh, Nikolay Korenevskiy, Stanislav Petrovich Seregin, Marina Sergeevna Chernega, Altyn Amanzholovna Aikeyeva, Maksim Ilyash
Abstract: A fuzzy logic approach is developed and trained to predict the occurrence of health complications in patients with blunt kidney injury. Fuzzy logic is selected because it merges expert judgement with the analysis of real patient data. A system of fuzzy decision rules forecasts the post-traumatic inflammatory complications of patients with blunt kidney injury according to the medical and laboratory testing of the research. The research shows high levels of lipid peroxidation and antioxidant activity. The approach predicts the occurrence of complications so that the physician can prescribe prevention and treatment, combining physical therapy with antioxidant and detoxification therapy.
Keywords: closed injury of the kidney; prognosis; prevention; fuzzy mathematical model.
CBIR Based Diagnosis of Dermatology
by Wiselin Jiji, Rajesh A, Johnson DuraiRaj P
Abstract: In this work, we present a computer-aided diagnosis approach to assist the diagnosis of dermatological diseases. The proposed framework retrieves images from a skin lesion repository, which in turn facilitates the dermatologist during the diagnosis process. The system uses Eigen disease spaces of the respective diseases to converge the search space more efficiently. Receiver Operating Characteristic (ROC) analysis showed that the proposed architecture contributes strongly to the computer-aided diagnosis of skin lesions. Experiments on a set of 1210 images yielded a specificity of 98.44% and a sensitivity of 86%. Our empirical evaluation shows superior retrieval and diagnosis performance when compared with other recent works.
Keywords: Eigen Space; Retrieval System; Border Detection.
Automatic Method for Recognition of Ischemic Stroke Area on Unenhanced CT Brain Images
by Amina Fatima Zahra Yahiaoui, Abdelhafid Bessaid
Abstract: The purpose of this study was to develop a novel automatic method for detecting areas of subtle hypodense ischemic change on unenhanced CT images by comparing the brain hemispheres. The Alberta Stroke Program Early CT Score (ASPECTS) has been proposed to help radiologists make decisions regarding thrombolytic treatment. Only patients with favorable baseline scans (ASPECTS 8-10) benefitted from endovascular revascularization therapy. The classification of the images into normal and abnormal depends on features of the left and right sides of the brain. For accurate detection, we integrated an automatic midline estimation algorithm to trace the midline correctly. The proposed method consists of preprocessing, segmentation of 10 Regions of Interest (ROIs), elimination of old infarcts and cerebrospinal fluid (CSF) space, and feature extraction. The features obtained from the ten ROIs were then used to select the abnormal regions and to compute the corresponding ASPECTS score. The method was applied to 50 patients with infarctions of the Middle Cerebral Artery (MCA) who presented to the LA MEKERRA imaging center. Good results were achieved, especially for midline estimation compared with manual detection. The performance of our method is quite satisfactory, with an AUC of 0.845 on ROC analysis for the ASPECTS score. Our approach has the potential to be used as a second opinion in stroke diagnosis.
Keywords: CT scan; stroke detection; midline estimation; ASPECTS score; hemispheres comparison.
Application of Data mining techniques for early detection of Heart Diseases using Framingham Heart Study Dataset
by Nancy Masih, Sachin Ahuja
Abstract: Health care organizations accumulate large amounts of healthcare data, but these data are rarely mined for the hidden patterns that could make the decision-making process more efficient. Data mining techniques prove useful for gaining insights by discovering hidden patterns in data sets that would remain undetected manually. Heart disease is a leading cause of mortality across the globe. Hence, it is critical to predict heart diseases at an early stage, with greater accuracy and speed, to save millions of lives. This paper aims to examine and compare the accuracy of four different machine learning algorithms for predicting and diagnosing heart disease using the Framingham Heart Study (FHS) data set. The output of the study confirms the most prominent features that cause heart diseases and that must be analyzed for early detection of the disease. This study can be used as prognostic information in the treatment of heart diseases.
Keywords: Heart Disease; Prediction; Framingham heart study; Decision tree; Naïve Bayes; Support Vector Machine; Artificial Neural Network.
An Enhanced Nonlinear Filter and Its Applications to Medical Image Restoration
by Boucif Beddad, Kaddour Hachemi, Jack-Gérard Postaire, Sundarapandian Vaidyanathan
Abstract: In this work, we describe an efficient algorithm developed to enhance medical images corrupted by impulsive noise. The main objective is to remove both low- and high-density impulsive noise using an Enhanced Nonlinear Filter (ENLF). The filter performs spatial information processing to identify the affected pixels in an image and restores each of them only with the median value of the proposed 2D moving window that has the lowest variance. The proposed denoising algorithm was optimized and implemented on a fixed-point TMS320C6416 digital signal processor from Texas Instruments. It was successfully tested with multiple medical images, provides very good restoration, and gives better Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) results than the output of well-known existing nonlinear filters. The execution time of the algorithm is also appreciable.
Keywords: Code Composer Studio; Impulse Noise; Medical Images; Nonlinear Filter; Peak Signal-to-Noise Ratio; TMS320C6416 DSK.
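The detect-then-restore principle behind such filters can be sketched as follows: flag a pixel as an impulse when it deviates strongly from its local median, and replace only flagged pixels. This NumPy sketch simplifies the ENLF (it omits the variance-based window selection) and is illustrative only:

```python
import numpy as np

def impulse_median_filter(img, win=3, thresh=60):
    """Replace only impulse-like pixels with the median of their window,
    leaving uncorrupted pixels untouched (a simplification of the ENLF idea)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            med = np.median(window)
            if abs(img[i, j] - med) > thresh:  # likely an impulse
                out[i, j] = med
    return out

# Toy image corrupted by 5% salt-and-pepper noise.
rng = np.random.default_rng(2)
clean = np.full((32, 32), 120.0)
noisy = clean.copy()
hits = rng.random(clean.shape) < 0.05
noisy[hits] = rng.choice([0.0, 255.0], size=hits.sum())

restored = impulse_median_filter(noisy)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((restored - clean) ** 2)
```

Because untouched pixels pass through unchanged, this family of filters preserves edges better than applying a median filter to every pixel.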
Design of Band-Pass Filters by Experimental and Simulation Methods at the range of 100-125 keV of X-ray in Fluoroscopy
by Goli Khaleghi, Jamshid Soltani-Nabipour, Abdollah Khorshidi, Fariba Taheri
Abstract: Filters that remove low-energy and attenuate high-energy spectra do not essentially degrade image quality and can therefore reduce the absorbed patient dose. This work examines the impact of filter thickness and material on contrast, resolution, absorbed patient dose and image quality. Based on the attenuation curves of the elements, and taking into account cost-effectiveness and availability, four elements, namely tin, tungsten, lead and copper, were studied in different thicknesses at the range of 100-125 keV. The simulations were executed using the MCNPX code with an error of less than 1%, which represents their accuracy. Experimental data were obtained, based on the results of the calculations and simulations, using fluoroscopy equipment. The results showed that applying the filters improved resolution and image quality and markedly reduced the output dose rate. In conclusion, 0.1 mm thick lead was found to be the most appropriate element for filtration.
Keywords: Fluoroscopy; Alpha phantom; Band-Pass filter; Lead; Tin; Tungsten; Copper; Filter thickness; Absorption edge; Image quality; Resolution; Dose rate; Attenuation curve; Output intensity ratio; Monte Carlo N-Particle - MCNP code.
The Adjuvant role of Acupuncture to treat the diabetes mellitus and its analysis using thermogram
by Raja Gomathi, S. Jeyadevi, K. Hema Latha, P.K. Ramalingam, S.P. Raja
Abstract: This work describes the effects of acupuncture on glycemic control and validates the results using infrared thermography. Two groups of patients undergoing diabetes mellitus treatment are considered for the experiment. Group A is treated with both drugs and acupuncture, while group B is treated with drugs alone. The patients' blood sugar and the surface temperature of the foot are studied. Infrared thermography is used to take thermograms before and after acupuncture treatment, and the effect of the treatment is analyzed. The liver and spleen acupoints are stimulated and the temperature changes at these points are analyzed. The results show that the foot temperature (at the treated acupoints) increases after acupuncture treatment in group A and the postprandial glucose level reduces by up to 20 mg/dl, whereas in group B only a 6 mg/dl change is observed, with negligible temperature change. The obtained results suggest acupuncture as an optional treatment for diabetes, with no side effects or pain.
Keywords: Acupuncture; diabetes mellitus; glycemic control; foot diagnosis; Infrared thermography;Acupoints; Postprandial Blood Glucose; Fasting Glucose; Line analysis; Spot analysis.
Large-scale brain network model and multi-band Electroencephalogram rhythm simulations
by Auhood Al-Hossenat
Abstract: Electroencephalogram (EEG) alpha oscillations play a considerable role in understanding cognitive and physiological aspects of human life, and in diagnosing neurocognitive disorders such as Alzheimer's disease (AD) and dementia. In this work, we developed a large-scale brain network model (LSBNM) to simulate multi-alpha band EEG rhythms. The model includes six cortical areas in the left hemisphere, each implemented as a local Jansen and Rit (JR) network. The proposed model is developed using a biologically realistic, large-scale connectivity connectome. The implementation and simulations were performed on the neuroinformatics platform The Virtual Brain (TVB v1.5.4). Experimental results show that the proposed brain network model enables the generation of multiple alpha-band EEG rhythms at different frequency ranges (7-8 Hz, 8-9 Hz and 10-11 Hz) by combining the local dynamics of the JR model with the connectome. This model can help physicians understand the general mechanism of EEG rhythms; it is also helpful for accurately diagnosing neurocognitive disorders.
Keywords: Large-scale brain network model; local neural masses modelling; human connectome; The Virtual Brain package.
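The local dynamics of such a model can be illustrated by simulating a single, uncoupled Jansen-Rit column with the standard parameters from Jansen and Rit (1995) and a constant drive; this is a minimal sketch of one network node, not the six-node TVB network of the paper:

```python
import numpy as np

# Standard Jansen-Rit parameters (Jansen & Rit, 1995).
A, B = 3.25, 22.0          # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0         # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def sigm(v):
    """Sigmoid converting membrane potential to firing rate."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def simulate_jr(p=200.0, dt=1e-4, T=3.0):
    """Euler integration of one Jansen-Rit column with constant drive p."""
    n = int(T / dt)
    y = np.zeros(6)
    eeg = np.zeros(n)
    for i in range(n):
        y0, y1, y2, y3, y4, y5 = y
        dy = np.array([
            y3, y4, y5,
            A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
            A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
            B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
        ])
        y = y + dt * dy
        eeg[i] = y1 - y2       # pyramidal-cell PSP, the EEG proxy
    return eeg

dt = 1e-4
eeg = simulate_jr(dt=dt)
tail = eeg[int(1.0 / dt):]     # discard the initial transient
spec = np.abs(np.fft.rfft(tail - tail.mean()))
freqs = np.fft.rfftfreq(len(tail), dt)
peak_freq = freqs[np.argmax(spec)]  # expected in the alpha band
```

In the full model, the six columns are coupled through the connectome weights, which is what shifts the emergent rhythms across the 7-11 Hz sub-bands reported in the abstract.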
A Biomechanical Analysis of Prosthesis Disc in Lumbar Spinal Segment using Three-Dimensional Finite Element Modeling
by Mai S. Mabrouk, Samir Y. Marzouk, Heba Afifi
Abstract: Lumbar total disc replacement (LTDR) is an operation for the treatment of chronic disc disease and spinal deformity that safeguards the range of motion (ROM). The SB Charité
Keywords: Lumbar total disc replacement (LTDR); biomechanical model; finite element method (FEM); SB Charité™ disc; von Mises stress.
Epilepsy Detection from Electroencephalogram Signal Using Singular Value Decomposition and Extreme Learning Machine Classifier
by Nalini Singh, Satchidananda Dehuri
Abstract: Automatic detection of seizures plays an important role in both long-term monitoring and diagnosis of epilepsy. In this work, the proposed singular value decomposition-extreme learning machine (SVD-ELM) classifier technique provides good generalisation performance with remarkably fast learning speed in comparison to existing conventional techniques. Both feature extraction and classification of the EEG signal are performed for the detection of epileptic seizures, using the Bonn University dataset. The proposed method is based on multi-scale eigenspace analysis: matrices generated by the discrete wavelet transform (DWT) of the EEG signal are decomposed by SVD at each scale, and the extracted singular value features are classified using an extreme learning machine (ELM) with different activation functions. The SVD-ELM technique has been applied for the first time to EEG signals for epilepsy detection using five-class classification, producing an overall accuracy of 95% (p < 0.001) with the sine and radbas activation functions.
Keywords: EEG; Epilepsy; DWT; SVD; ELM; Eigen value; EEG Classification; Neurons; Activation functions.
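The DWT-plus-SVD feature idea can be sketched with a hand-rolled Haar transform and NumPy's SVD; this simplified stand-in (single wavelet, fixed framing, synthetic signals) illustrates why singular values separate signal classes, and is not the paper's exact feature set:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def svd_features(signal, levels=3, frame=32):
    """Stack DWT approximation frames into a matrix and keep its
    singular values as a compact feature vector."""
    coeffs = np.asarray(signal, dtype=float)
    for _ in range(levels):
        coeffs, _ = haar_dwt(coeffs)
    n = (len(coeffs) // frame) * frame
    M = coeffs[:n].reshape(-1, frame)       # frames x coefficients matrix
    return np.linalg.svd(M, compute_uv=False)

# Two toy "EEG" signals: low-amplitude background vs high-amplitude spiky.
rng = np.random.default_rng(3)
t = np.arange(4096) / 256.0
normal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
seizure = 4 * np.sign(np.sin(2 * np.pi * 3 * t)) + 0.3 * rng.normal(size=t.size)

f_normal = svd_features(normal)
f_seizure = svd_features(seizure)   # larger leading singular value
```

The singular values summarise the energy distribution of the wavelet coefficients, which is what the ELM classifier then separates.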
A hybrid approach for analysis of brain lateralization in autistic children using graph theory techniques and deep belief networks
by Vidhusha Srinivasan, Udayakumar N, Hualou Liang, Kavitha Anandan
Abstract: Cerebral lateralization refers to the inclination of neural function to specialize in one hemisphere of the brain over the other for a specific activity. Autism spectrum disorder (ASD) encompasses a wide range of presentations, including reduced language processing capacity and impaired communication. This work analyses the lateralization patterns present in the language regions of the brain for typical controls (TC), low functioning (LFA) and high functioning autistic (HFA) individuals using resting state fMRI (rsfMRI). A total of 101 participants were considered for this study. The active and inactive regions in the left and right hemispheres responsible for language processing were analyzed through graph theory techniques. Results showed overall left hemisphere (LH) activation for TCs, impaired LH activation for LFA, and unique right hemisphere (RH) activation for the HFA group. Using deep belief networks (DBN), the average classification accuracy of the left/right lateralization exhibited by each participant was measured. The accuracy was highest in the LH for controls at 97.88%, with LFA measuring 78.17% in the LH, while the HFA group showed dominance in the RH with 94.23%. These results were validated by a senior expert professional. Thus, this work shows the variations in hemispheric lateralization using graph theory techniques and a deep learning classifier to bring out the functional differences among ASD children who exhibit overlapping brain-behavioral characteristics.
Keywords: Autism; ASD; fMRI; Lateralization; Language processing in autism; High functioning autism; Graph theory; Deep belief networks.
Characterising Leg-Dominance in Healthy Netballers Using 3-D Kinematics-Electromyography Features Integration and Machine Learning Techniques
by Umar Yahya, S.M.N. Arosha Senanayake, Abdul Ghani Naim
Abstract: The present study utilised machine learning techniques to characterise differences between the dominant (DL) and non-dominant (nDL) legs of healthy female netballers during a single-leg lateral jump. Electromyography (EMG) activity of eight lower-extremity muscles and 3-dimensional motion of the ankle, knee, and hip joints were recorded for both the jumping (JL) and landing (LL) legs. The integrated EMG of each muscle and the joints' range of motion (ROM) in all three planes were computed. Using hierarchical clustering, two subgroups were identified in both the JL and LL feature subsets. The LL subgroups exhibited significant differences (p<0.05) in the ROM of all joints in at least one plane. A support vector machine classifier outperformed artificial neural networks at recognising DL and nDL patterns in subsets LL and JL, with accuracies (F-measure) of 86.21% and 81.36% respectively. These findings suggest DL-nDL differences are more manifest during landing than during jumping, a vital insight for coaches as both legs are used alternately during single-leg jump-landing tasks.
Keywords: Leg Dominance; Netball; Machine Learning; Surface EMG; 3D-Kinematics; Single-Leg Jump; Dominant Leg; non-Dominant Leg; Lower Extremity; Functional Asymmetry; Support Vector Machine; Artificial Neural Network; Hierarchical Clustering; Principal Component Analysis.
Monitoring optical responses and physiological status of human skin in vivo with diffuse reflectance difference spectroscopy
by Jung Huang, Jyun-Ying Chen
Abstract: Fourier-transform visible-near infrared spectroscopy was applied to analyse diffuse reflectance from human skin perturbed with three skin-agitating methods. Principal component analysis (PCA) was applied to deduce three characteristic spectral responses of human skin. Based on Monte Carlo multilayer simulation, the responses can be attributed to changes in light scattering and haemoglobin and melanin content. The eigenspectra form a basis for resolving the optical responses of human skin from diffuse reflectance difference spectra measured at different time points after the skin tissue is mechanically stressed. We demonstrate that by applying this analysis scheme on in vivo measured diffuse reflectance difference spectra, valuable information about the responses of skin tissue can be deduced and thereby the physiological status of skin can be monitored.
Keywords: diffuse reflectance spectroscopy; skin tissue; optical response; Monte Carlo simulation; principal component analysis.
Neonatal Heart Disease Screening Using An Ensemble of Decision Trees
by Amir M. Amiri, Giuliano Armano, Seyedhossein Ghasemi
Abstract: This paper is concerned with the occurrence of heart disease specifically in neonates, as those seriously affected may face an increased risk of death. In this paper, a novel computer-based tool is proposed for medical-center diagnosis, aimed at monitoring neonates who are potentially vulnerable to heart disease. In particular, cardiac cycles of phonocardiograms (PCGs) are first preprocessed and then used to train an ensemble of decision trees (DTs). The classifier model consists of 12 trees, with bagging and hold-out methods used for training and testing. Several feature encoding methods have been experimented with to generate the feature space over which the classifier has been tested, including Shannon Energy and the Wigner Bispectrum. On average, 93.91% classification accuracy, 96.15% sensitivity and 91.67% specificity have been obtained from the given data, which has been validated with a balanced dataset of 110 PCG signals taken from healthy and unhealthy medical cases.
Keywords: Neonate; Heart Diseases; Phonocardiogram; Ensemble of Decision Trees.
False positives reduction in pulmonary nodule detection using a connected component analysis based approach
by Satya Prakash Sahu, Narendra D. Londhe, Shrish Verma, Priyanka Agrawal, Sumit K. Banchhor
Abstract: In this paper, we propose a connected component analysis (CCA) based approach for reducing the false positive rate (FPR) per scan in the early detection of pulmonary lung nodules using computed tomography (CT) images. The lung CT scans were obtained from the Lung Image Database Consortium - Image Database Resource Initiative database. The proposed study consists of four stages: (i) segmentation of the lung parenchyma through the K-means clustering algorithm, (ii) nodule extraction using an automated threshold-based approach (Santos), (iii) noise removal using the CCA-based approach, and (iv) detection of lung nodules using the sphericity (roundness) feature. The results were validated against the annotated ground truth provided by four expert radiologists. The study showed a reduced rate of 0.76 FPs/scan with an overall accuracy of 84.03%. The proposed well-balanced system reduces the FPR while maintaining high accuracy in lung nodule detection and thus can be used in clinical settings.
Keywords: K-means; multi-thresholding; connected component analysis; sensitivity; false positives.
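Stages (iii) and (iv) can be sketched in two steps: label the connected components of the binary candidate mask, then keep only components whose shape is round enough. The sketch below uses a 2-D roundness measure (4·pi·area/perimeter², 1.0 for a disc) as a simplified stand-in for the paper's 3-D sphericity, and pure-Python BFS labelling rather than an optimised library routine:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling via breadth-first search."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and labels[si, sj] == 0:
                current += 1
                labels[si, sj] = current
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and \
                                mask[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = current
                            q.append((ni, nj))
    return labels, current

def roundness(component):
    """4*pi*area / perimeter**2: 1.0 for a perfect disc."""
    area = int(component.sum())
    p = np.pad(component.astype(int), 1)
    inner = p[1:-1, 1:-1]
    neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    perim = int(np.sum(inner * (4 - neighbors)))  # exposed pixel edges
    return 4 * np.pi * area / perim ** 2 if perim else 0.0

# Toy candidate mask: a disc (nodule-like) and a thin line (vessel-like).
yy, xx = np.mgrid[0:40, 0:40]
mask = (yy - 12) ** 2 + (xx - 12) ** 2 <= 36
mask |= (yy == 30) & (xx >= 5) & (xx < 35)

labels, n = label_components(mask)
round_ids = [k for k in range(1, n + 1) if roundness(labels == k) > 0.3]
```

Only the disc survives the roundness test, which is exactly how elongated vessel fragments are rejected as false positives.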
Deep 3D multi-scale dual path network for automatic lung nodule classification
by Shengsheng Wang, Xiaowei Kuang, Yungang Zhu
Abstract: Lung cancer is the cancer with the highest mortality rate in the US. Computed tomography (CT) screening for early diagnosis of pulmonary nodules can detect lung cancer in time. To overcome the limitations of the segmentation and handcrafted features required by traditional methods, we use a deep neural network to diagnose lung cancer. In this work, we propose a deep end-to-end 3D multi-scale network based on a dual path architecture (3D MS-DPN) for lung nodule classification. The 3D MS-DPN model incorporates the dual path architecture to reduce the complexity and improve the accuracy of the model, fully accounting for the 3D nature of CT scans by performing 3D convolutions. Meanwhile, multi-scale feature fusion is used to mitigate the effects of lung nodule sizes varying widely and of nodules occupying only a few regions and slices in CT scans. Our model achieves competitive performance on the LIDC-IDRI dataset compared to recent related works.
Keywords: Lung nodule classification; Deep neural network; Computed tomography scans; LIDC-IDRI.
New Methodology Based on Images Processing for the Diabetic Retinopathy Disease Classification
by BENSMAIL Ilham, MESSADI Mahammed, Feroui Amel, Lazzouni Mohammed Elamine, Bessaid Abdelhafid
Abstract: Diabetes is a chronic disease that cannot be cured, but can be treated and controlled. It is caused by deficient production or use of insulin. In the long run, a high blood sugar level causes complications, especially in the eyes, which leads to the development of diabetic retinopathy (DR), a serious illness if it is not diagnosed and treated as soon as it appears. Poor care can cause blindness. In this paper, we propose a new system for early detection of DR. The tested algorithm includes several important phases, in particular the detection of the retinal lesions caused by the disease (microaneurysms and hemorrhages) through pretreatment and segmentation processes, as well as the classification of the different stages of non-proliferative DR. Several classifiers have been tested, and the Support Vector Machine (SVM) gave very good sensitivity, specificity, and accuracy of 97.56%, 99.01%, and 97.52%, respectively. These values show that our approach can be used for diagnostic assistance in ophthalmology.
Keywords: Diabetic Retinopathy; Extraction of microaneurysms; Detection of hemorrhages; classification of the diabetic retinopathy stages.
Brain Tumor Segmentation from Magnetic Resonance Images using Improved FCM and Active Contour Model
by Nagaraja Perumal, Kalaiselvi Thiruvenkadam
Abstract: The proposed multimodal brain tumor segmentation method (MBTSM) is based on improved fuzzy c-means (IFCM) and an active contour model (ACM). The MBTSM segments magnetic resonance imaging (MRI) human head scans into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), edema, core tumor and complete tumor. The method consists of three stages. Stage 1 is the IFCM method, which modifies conventional FCM for the brain tissue segmentation process and gives results comparable to existing segmentation techniques. Stage 2 is an abnormality detection process that checks the results of the IFCM method using a fuzzy symmetric measure (FSM). Stage 3 segments the tumor region from multimodal MRI head scans using a modified Chan-Vese (MCV) model. The accuracy of the proposed MBTSM was analysed using the Dice coefficient (DC), positive predictive value (PPV), sensitivity, kappa coefficient (KC) and processing time. The mean DC values are 83% for GM, 86% for WM, 13% for CSF and 75% for the complete tumor.
Keywords: Active Contour; Brain Tumor; Clustering; Magnetic Resonance Image; Segmentation.
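The baseline that Stage 1 improves upon, standard fuzzy c-means, can be sketched in a few lines of NumPy; this is the conventional FCM update (alternating membership and centroid steps), not the authors' IFCM modification:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternately update centroids V and
    memberships U to minimise sum_ik U[i,k]**m * ||x_k - v_i||**2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                  # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # weighted centroids
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=0)                       # membership update
    return V, U

# Two well-separated 1-D intensity clusters (e.g. two tissue classes).
rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(50, 3, 200), rng.normal(150, 3, 200)])[:, None]
V, U = fuzzy_cmeans(X, c=2)
centers = np.sort(V.ravel())   # should land near 50 and 150
```

Unlike hard k-means, every voxel keeps a graded membership in each class, which is what makes FCM attractive for partial-volume effects at tissue boundaries.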
Automated methodology for breast segmentation and mammographic density classification using co-occurrence and statistical and SURF descriptors
by Roberto Pavusa Junior, Joao C. L. Fernandes, Alessandro P. Da Silva, Marcia A. S. Bissaco, Silvia R. M. S. Boschi, Terigi A. Scardovelli, Silvia C. Martini
Abstract: This paper presents a fully automated process for the segmentation and classification of mammographic images in medio-lateral oblique projections. For this purpose, we developed a new set of descriptors for the determination of breast density based on the standard used in the MIAS database. The process starts with new preprocessing techniques: detecting the laterality of the image, removing the image background and its artifacts, and identifying and segmenting the pectoral muscle. From the resulting segments, namely the breast and the pectoral muscle, descriptors were extracted from histogram, co-occurrence, and points-of-interest analysis. The descriptors were reduced by three different techniques: Spearman correlation analysis, principal component analysis and linear discriminant analysis. Image classification is performed by two different classifiers, k-nearest neighbors (KNN) and support vector machine (SVM). The SVM classifier achieved a precision of 72.05% and the KNN classifier a precision of 91.30%. Compared with related works, the developed preprocessing technique is promising, as are the descriptors used for density classification, which surpassed most previous works that used all images from the database.
Keywords: Breast density; mammography; computer-aided diagnosis; SVM; KNN; SURF.
An effective Fast Conventional pattern measure based suffix feature selection to search gene expression data
by Surendar A
Abstract: Biomedical gene sequences are incompletely or erroneously annotated because of a lack of experimental evidence or prior functional knowledge in sequence datasets. Identifying useful genomic selections without relying on correlations across large experimental datasets or on sequence similarity remains a problem. This study proposes a Fast Conventional suffix feature pattern search algorithm (FcsFPs) for searching gene sequences in expression data using fast feature patterns, measuring the conventionality of search accuracy on a gene expression dataset. The aim is to obtain an efficient search algorithm. Features from the state matrix and sequence centers are described in the form of a string, and the assignment of points to different sequences is done by suffix term search. Overall, the conventional pattern selection reduces the computational complexity of fast gene search, improves search accuracy, and reduces the time complexity and dimensionality of nonlinear gene expression data.
Keywords: gene search; pattern matching; suffix point; sequence data; throughput; gene expression; genome sequence; feature selection; clustering; suffix feature.
An effective morphological-stabled denoising method for ECG signals using wavelet based techniques
by Hui Yang, Zhiqiang Wei
Abstract: The wavelet transform has been identified as an effective denoising method for ECG signals owing to its capacity for multi-resolution analysis. However, important morphological features, such as the peak of the QRS complex, should be retained after denoising for further medical practice. In this paper, an effective morphology-stable denoising method for ECG signals is proposed through optimal selection of the wavelet basis function, the design of a new thresholding method, and optimisation of the decomposition levels and thresholding scheme. When validated on the MIT-BIH Arrhythmia Database, the denoising method achieved Mean Square Error and Signal-to-Noise Ratio values of 0.0146 and 68.6925 respectively, while successfully retaining a QRS complex amplitude close to its full value. A total of 23 simulations were carried out to compare the proposed method with others. The experimental results indicate that the proposed denoising method outperforms other state-of-the-art wavelet-based methods while remaining morphologically stable.
Keywords: ECG denoising; noise; morphology; QRS complex; wavelet transform; basis function; multi-resolution; thresholding.
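The generic wavelet-thresholding pipeline that such methods refine can be sketched with a hand-rolled Haar transform and the universal soft threshold; the paper optimises the basis, threshold and levels, so this fixed-choice NumPy version is only a baseline illustration:

```python
import numpy as np

def haar_forward(x, levels):
    """Multi-level Haar DWT: returns final approximation and detail bands."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(detail)
        a = approx
    return a, coeffs

def haar_inverse(approx, coeffs):
    a = approx
    for detail in reversed(coeffs):
        out = np.empty(2 * len(a))
        out[0::2] = (a + detail) / np.sqrt(2)
        out[1::2] = (a - detail) / np.sqrt(2)
        a = out
    return a

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(x, levels=4):
    approx, details = haar_forward(x, levels)
    # Universal threshold; noise sigma estimated from the finest detail band.
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(x)))
    return haar_inverse(approx, [soft(d, t) for d in details])

# Synthetic "ECG-like" signal: slow baseline plus periodic sharp peaks.
rng = np.random.default_rng(5)
n = 2048
t_ax = np.arange(n) / 256.0
sig = 0.2 * np.sin(2 * np.pi * 1.0 * t_ax)
sig[::256] += 1.0                       # sharp R-peak-like spikes
noisy = sig + 0.05 * rng.normal(size=n)

den = wavelet_denoise(noisy)
mse_before = np.mean((noisy - sig) ** 2)
mse_after = np.mean((den - sig) ** 2)   # noise reduced, peaks largely kept
```

Note that even this baseline attenuates the sharp peaks slightly, which is exactly the morphology-preservation problem the paper's tailored basis and thresholding scheme address.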
Segmentation of Liver Computed Tomography Images using Dictionary based Snakes
by Shanila Nazeera, Vinod Kumar R S, Ramya Ravi R
Abstract: In medical research, segmentation can be used to separate different tissues from each other by extracting and classifying features. Segmentation of the liver from computed tomography (CT) and magnetic resonance imaging (MRI) is a challenging task, and many image segmentation methods have been used in medical applications. In addition to briefly reviewing the need for, concept of and advantages of a few liver segmentation methods, this paper introduces a novel approach for the segmentation of liver computed tomography images using dictionary-based snakes. The performance of the proposed method is quite satisfactory.
Keywords: Image Processing; Liver Segmentation; Computed Tomography; Preprocessing; Active contour; Snakes; Dictionary Snakes; Segmentation.
Non-Invasive Estimation of Random Blood Glucose from Smartphone-based PPG
by UTTAM KUMAR ROY, Shivashis Ganguly, Arijit Ukil
Abstract: Traditional blood glucose meters are invasive in nature; blood is collected by needle pricking, which is painful, carries a high risk of infection and damages tissue over repeated use. Although a few non-invasive methods have been proposed, they require very costly, non-portable, high-end custom devices and lack accuracy. This work presents a non-invasive estimate of blood glucose using only a smartphone, based on photoplethysmography (PPG). The method supports 24x7 monitoring without any extra hardware. The system leverages the fact that glucose molecules enter the Red Blood Cells (RBC), attach to hemoglobin and affect blood color. We cleaned the noisy PPG signal, extracted the red component from the PPG of 25 patients, applied non-linear regression to estimate glucose and cross-validated against the laboratory invasive method. The RMS error is 2.1525 mg/dL, which is superior to existing non-invasive techniques. Three methods, viz. geometric regression, Bland-Altman analysis and the Surveillance Error Grid, are used to validate the results.
Keywords: Non-invasive measurement; Blood glucose estimate; Regression; PhotoPlethysmoGraphy.
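Of the three validation methods named, Bland-Altman analysis is the most mechanical to reproduce: it reports the mean bias between two measurement methods and the 95% limits of agreement. A minimal sketch of the standard formulas (not the paper's code):

```python
import statistics

def bland_altman(ref, est):
    """Bland-Altman analysis between a reference method (e.g., laboratory
    glucose) and an estimate: returns (bias, lower LoA, upper LoA)."""
    diffs = [e - r for r, e in zip(ref, est)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If most differences fall within the limits of agreement and the bias is clinically negligible, the two methods are considered interchangeable.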
Non-rigid Registration (Computed Tomography Ultrasound) of Liver Using B-Splines and Free Form Deformation
by Romel Bhattacharjee, Ashish Verma, Neeraj Sharma, Shiru Sharma
Abstract: Medical image registration is a key enabling technology and a highly challenging task. Medical images captured using different modalities (or sometimes the same modality) undergo registration for applications such as diagnosis of a tumor, image-guided surgery, image-guided radiotherapy, etc. Registration is achieved by iteratively minimizing a cost function and optimizing transformation parameters. In this paper, a semi-automatic non-rigid registration method is utilized to register computed tomography (CT) and ultrasound (US) images of the liver. The global motion is modeled by an affine transformation, while the local motion is described by Free Form Deformation (FFD) based on B-Splines. As local deformation between US and CT images is inevitable due to respiratory phases, two different techniques are included and investigated for registration refinement: transformation using multi-level B-splines, and use of gradient orientation information. This work also inspects three different optimization strategies: Steepest Gradient Descent, quasi-Newton and the Levenberg-Marquardt method. The method is tested on six clinical datasets, and quantitative measures are assessed. Visual examination and experimental results verify a lower level of registration error and a higher degree of accuracy when the method is employed using Levenberg-Marquardt optimization while utilizing gradient orientation information for registration refinement.
Keywords: non-rigid registration; free form deformation; multilevel B-Splines; gradient orientation information.
EVALUATION OF THE MECHANICAL BEHAVIOR OF A BIPOLAR HIP PROSTHESIS UNDER TRANSIENT LOADING
by Rabiteja Patra, Harish Chandra Das, Shreeshan Jena
Abstract: Most of the studies available in the open literature make use of static analysis and discretization of the load components for studying the mechanical behavior of implants and prostheses. The present study discusses the effect of time-varying loading on the prosthesis and femur assembly. The solid model of the femur was reconstructed using bone slices obtained from computed tomography (CT). The components of the hip joint forces and moments were applied at the femoral head of the prosthesis. The results from the present study were compared with data from the literature, and the study shows that a time-varying loading analysis can provide much more realistic information about the prosthesis than the prevailing static analyses.
Keywords: transient loading; CT; gait; finite element analysis; femoral prosthesis.
COLOR SPACE BASED THRESHOLDING FOR SEGMENTATION OF SKIN LESION IMAGES
by Sudhriti Sengupta, Neetu Mittal, Megha Modi
Abstract: In Computer Aided Diagnosis (CAD) of various skin diseases, skin lesion image segmentation is an important phase. The quality of skin lesion images is severely affected by factors such as poor contrast, low illumination, texture complexity and the presence of artifacts such as hair. Thus, the existing image segmentation techniques used in the diagnosis of skin lesions are often inadequate. For better skin lesion detection, these limitations are overcome by an improved color-space-based split-and-merge process combined with global thresholding segmentation. The obtained results are further enhanced by a self-guided edge smoothing-color space technique. The effectiveness of the proposed technique has been verified by quantitatively comparing the obtained results with existing Otsu thresholding, adaptive thresholding and color-space techniques. The computed results show much better values of the performance measures, viz. entropy, Dice similarity index and structural content, for the edge smoothing-color space technique, indicating far superior quality of images compared with the existing Otsu, adaptive and color-space techniques. The proposed technique may assist medical professionals in early and accurate detection of skin lesions and associated diseases for the benefit of patients.
Keywords: Skin lesions; Segmentation; Color space; Thresholding; Entropy; Merging; Split; Adaptive Thresholding; Otsu Thresholding; Global Thresholding; Skin diseases; Self-guided Edge Smoothing.
Dual Feature Set Enabled with Optimized Deep Belief Network for Diagnosing Diabetic Retinopathy
by Shafiulla Basha, K. Venkata Ramanaiah
Abstract: Diabetic retinopathy (DR) detection faces many challenges in delivering good performance and accuracy. A problem that still remains in DR detection is the selection of image features and classifiers appropriate to the dataset. In order to develop a better detection method, this paper proposes an advanced model for detecting DR using fundus images. The detection model is accomplished in four phases: preprocessing, blood vessel segmentation, feature extraction and classification. Initially, Contrast Limited Adaptive Histogram Equalization (CLAHE) and median filtering are used for preprocessing. For blood vessel segmentation, Fuzzy C-Means (FCM) thresholding works well for rough clustering of pixels. Further, local features and morphological-transformation-based features are extracted from the segmented blood vessels. A deep learning classifier, the Deep Belief Network (DBN), classifies the extracted features to detect whether the image is healthy or affected. As a novelty, the number of hidden neurons in the DBN is optimized using a modified Monarch Butterfly Optimization (MBO) termed Distance-based MBO (D-MBO). In the simulation study, the performance of the proposed D-MBO-DBN-based DR detection model is compared with existing models by analyzing the most relevant positive and negative performance measures, substantiating the overall performance.
Keywords: Diabetic Retinopathy Detection; Fuzzy C-Mean; Deep Belief Network; Monarch Butterfly Optimization; Hidden Neuron Optimization.
Machine Learning Approach for Automatic Brain Tumor Detection using Patch based Feature Extraction and Classification
by T. Kalaiselvi, P. Kumarashankar, Sriramakrishnan Pathmanaban
Abstract: Manual selection of tumorous slices from an MRI volume is a time-consuming process. In the proposed work, we have developed an automatic method for tumorous slice classification from MRI head volumes. The proposed method is named patch-based classification (PBC). PBC uses 8
Keywords: Tumor detection; Feature extraction; Feature blocks; Brain tumor; BraTS dataset.
3D Printing for Aneurysms Clipping Elective Surgery
by Stefano Guarino, Enrico Marchese, Gennaro Salvatore Ponticelli, Alba Scerrati, Vincenzo Tagliaferri, Federica Trovalusci
Abstract: This paper deals with the realization of 3D-printed cerebral aneurysms using the Direct Light Processing (DLP) technique. The aim was to improve anatomical knowledge, training and surgical planning on an individualized, patient-specific basis. Computed Tomography Angiography and Digital Subtraction Angiography scans of three patients were used to create 3D virtual models using commercial image-processing software. The DLP technique was used to realize the corresponding 3D physical models. These were first evaluated by the surgeons and then, if acceptable, used for patient-specific treatment planning. All three models provided a comprehensive 3D representation of the anatomical structure of the aneurysms, improving the understanding of the surrounding vessels and their relationships. Moreover, the use of DLP technology allowed the 3D models of the cerebral aneurysms to be fabricated in a time- and cost-efficient way.
Keywords: 3D Printing; Aneurysms; DLP; Neurosurgery; Rapid Prototyping; Solid Modelling.
Modified U-Net for Fully Automatic Liver Segmentation from Abdominal CT-Image
by Gajendra Kumar Mourya, Sudip Paul, Akash Handique, Ujjwal Baid, Prasad Dutande, S.N. Talbar
Abstract: Liver volume estimation using segmentation is the first step in liver diagnosis and therapeutic planning. Liver segmentation from abdominal CT images has always been a universal challenge for researchers because of the low contrast among surrounding organs. An automatic liver segmentation technique is highly desirable in clinical practice. In this paper, we have modified the conventional U-Net architecture for automatic liver segmentation. The method precisely delineates the boundaries between the liver and other abdominal organs and outperforms other state-of-the-art methods. We extensively evaluated our method on the CHAOS challenge 2019 dataset of volumetric CT images from 20 subjects. Quantitative evaluation of the proposed method is carried out in terms of various evaluation parameters with respect to the ground truth, achieving an average Dice Similarity Coefficient of 0.97.
Keywords: Computed tomography; liver segmentation; U-Net; semantic segmentation; Deep Learning.
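The Dice Similarity Coefficient reported above has a standard definition on binary masks, 2|A∩B| / (|A| + |B|); a minimal sketch on flat 0/1 lists:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks
    (flat lists of 0/1): 2*|intersection| / (|pred| + |truth|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / total if total else 1.0
```

A value of 1.0 means the predicted liver mask exactly matches the ground truth; 0.0 means no overlap at all.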
Age-Related Macular Degeneration identification based on HRC layers analyses in OCT images
by Amel BEN KHELFALLAH, MESSADI Mahammed, BESSAID Abdelhafid, LAZZOUNI Mohammed Amine
Abstract: Age-related Macular Degeneration (AMD) is a very dangerous disease which usually affects the eyes of people above 50 years of age. AMD is characterized by extracellular deposits that accumulate between the retinal pigment epithelium (RPE) and the inner collagenous layer of Bruch's membrane, causing the death of RPE cells and subsequent loss of photoreceptor cells. Optical coherence tomography (OCT) is a powerful imaging technique that can detect different macular abnormalities at an early stage, owing to its high-resolution cross-sectional images. The purpose of this work is to separate healthy images from AMD OCT images by analysing and quantifying the extracted Hyper Reflective Complex (HRC) layer using image processing techniques. The extracted layer is divided into 10 quadrants. In each quadrant, the number of white pixels is counted and the mean value of these counts is then calculated. The average mean value is calculated for both healthy and AMD-affected images, and based on this value a decision rule is fixed to classify the images of interest. The proposed method showed an accuracy of 87.5%.
Keywords: Age-related Macular Degeneration (AMD); Hyper Reflective Complex (HRC); automatic segmentation; Optical Coherence Tomography (OCT); AMD classification.
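The counting step described in the abstract can be sketched generically. Everything here is a hypothetical illustration: the function names, the `white_level` intensity cutoff and the flat-list representation of the HRC layer are assumptions, not the paper's parameters:

```python
def quadrant_white_counts(layer_pixels, n_quadrants=10, white_level=200):
    """Split a flat list of pixel intensities (0-255) into n_quadrants
    equal sections and count the 'white' pixels in each section.
    white_level is an illustrative cutoff, not the paper's value."""
    step = len(layer_pixels) // n_quadrants
    return [sum(1 for p in layer_pixels[i * step:(i + 1) * step] if p > white_level)
            for i in range(n_quadrants)]

def mean_white_count(layer_pixels, **kw):
    """Mean white-pixel count over all quadrants, the quantity the
    abstract's decision rule thresholds on."""
    counts = quadrant_white_counts(layer_pixels, **kw)
    return sum(counts) / len(counts)
```

The paper's decision rule then compares this mean against a value fixed from the healthy and AMD training images; which side of the threshold indicates AMD is not stated in the abstract, so it is not encoded here.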
Resampling schemes within a particle filter framework for brain source localization
by Santhosh Veeramalla
Abstract: One of the critical aspects of neuroscience research is locating neural sources from EEG data. The particle filter has been used to locate sources due to its superior tracking and prediction performance, and can track an unknown number of neural sources in EEG data. Adjustments to particle filters have been proposed that improve resampling techniques for EEG applications in order to alleviate particle degeneracy. Various resampling methods need to be studied and examined for neural source localization, evaluating their viability on large datasets. In this paper, we propose a new approach for localizing the neural sources of real EEG data based on residual and residual systematic resampling methods within the particle filter. Robustness and performance are validated using the root mean square error (RMSE), relative accuracy (RA) and execution time. We show that with the proposed residual systematic resampling algorithm the filter improves the RMSE estimation performance, improves the estimated source position and reduces run time. Taking these efficiency measures into account, the suggested residual systematic resampling approach provides better performance than the other resampling methods used in particle filters for source localization.
Keywords: particle filter; resampling; EEG; state estimation; source localization; inverse problem.
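Residual systematic resampling admits a compact single-pass implementation, which is one reason it reduces run time; a sketch of the textbook algorithm (the `u0` argument fixes the random offset for reproducibility and is an illustration detail, not part of the paper):

```python
import math
import random

def residual_systematic_resample(weights, u0=None):
    """Residual systematic resampling: given normalized particle weights,
    return the number of copies of each particle to keep. One pass,
    O(N), and the counts always sum to N."""
    n = len(weights)
    u = random.uniform(0.0, 1.0 / n) if u0 is None else u0
    counts = []
    for w in weights:
        r = math.floor((w - u) * n) + 1   # replication count for this particle
        counts.append(r)
        u += r / n - w                    # carry the residual offset forward
    return counts
```

Particles with large weights are replicated, those with negligible weights are dropped, and the surviving set is again uniformly weighted.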
An IoT Based Smart Hearing Aid for Hearing and Speech Impaired Persons
by Solomon Nwaneri, Charles Osuagwu
Abstract: This paper presents a smart hearing aid designed to assist individuals suffering from both hearing and speech impairment. The hardware consists mainly of a digital hearing aid unit, a Bluetooth audio receiver module and a smartphone running Android applications developed in Android Studio using the RecognizerIntent and Google application programming interfaces (APIs). An innovative Internet of Things (IoT) based interaction between the modules enables hearing and speech impaired patients to communicate effectively through the hearing aid, a text-to-speech converter and a speech-to-text converter. The device was tested on thirty subjects from the Ear, Nose, and Throat (ENT) clinic of Lagos University Teaching Hospital, Lagos, Nigeria. The results demonstrate the effectiveness of the device in assisting patients suffering from various degrees of hearing loss.
Keywords: Android Application; Analogue-to-Digital Converters; Digital-to-Analogue Converters; Hearing Loss; Internet of Things; Smart hearing aid; Smart phone; Text-to-speech.
Mitotic Cells Detection in H&E-Stained Breast Carcinoma Images
by Afiqah Abu Samah, Mohammad Faizal Ahmad Fauzi, See Y. Khor, Jenny T.H. Lee, Kean H. Teoh, Lai M. Looi
Abstract: Breast cancer is the most common cancer occurring in women, and is the second leading cause of cancer-related deaths in women. Grading of breast cancer is carried out based on characteristics such as gland formation, nuclear features and mitotic activity, all of which need to be correctly detected first. In this paper, we propose a system to detect mitotic cells from H&E-stained whole-slide images of breast carcinoma. The system consists of three stages, namely superpixel segmentation to group similar pixels into superpixel regions, blob analysis to separate the cells from the tissue and the background, and shape analysis and classification to distinguish mitotic from non-mitotic cells. The proposed system, with the histogram of oriented gradients (HOG) and Fourier descriptor (FD) as features, is able to detect mitotic cells reliably, with more than 90% true positive rate, true negative rate and overall accuracy.
Keywords: breast carcinoma; mitosis detection; superpixel segmentation; digital pathology.
Mass Detection in Mammographic Images Using Improved Marker-Controlled Watershed Approach
by Pratap Vikhe, Vaishali Mandhare, Chandrakant Kadu
Abstract: Mass detection in mammograms plays a vital role in the early diagnosis of breast cancer. However, screening for masses is a challenging task for radiologists due to contrast variation, noise and imprecise edges in mammographic images. In this paper, an improved marker-controlled watershed approach is presented to segment and detect suspicious regions in mammograms precisely. Morphological operations and a thresholding technique are used in the proposed algorithm to suppress artifacts and the pectoral region. The magnitude gradient is computed to obtain mass edges. Finally, internal and external markers are determined and the watershed transform is applied to the modified gradient image to segregate suspicious regions. The proposed approach was applied to 140 mammograms from two datasets, MIAS and DDSM, yielding True Positive Fractions of 93.7% and 94.3% respectively, at 0.72 and 0.45 average False Positives per Image. These results show that the proposed approach performs well for mass detection, helping radiologists in diagnosis at an early stage.
Keywords: Watershed Transform; Mass Detection; Marker-Controlled; Segmentation; Mammograms.
Ease Drug Delivery: Wirelessly Controlled Medication Delivery System via Android Application
by Maham Sarvat, Suhaib Masroor, Muhammad Muzammil Khan
Abstract: A medication delivery system, or syringe driver, is used for administering a predefined amount of drug into a patient within a specific period of time through an intravenous procedure, i.e. when the patient is incapable of taking the drug orally. The injected medicine or fluid is absorbed into the body via blood circulation. In the last decade, numerous authors have studied and proposed various methods related to medication delivery systems, such as touch-screen syringe pumps, microcontroller-based syringe pumps and dual syringe pumps. However, all these medication systems have drawbacks: they are manual, follow a crude methodology and require constant monitoring by medical staff. In some hospitals, a shortage of medical staff or untrained staff further compounds the drawbacks of these systems. In this paper, a novel approach is presented to create a cost-effective wireless ease-drug-delivery system, which can overcome the deficiencies of the aforesaid drug delivery systems. In the proposed system, the control and operation of the drug delivery system are performed wirelessly from the nursing counter, located within a range of 30 m, via an Android device. The device provides information on all the installed drug delivery systems on a single screen. Moreover, it requires only a single staff member to monitor them and give them the necessary instructions via the same Android device. Thus, the proposed system can overcome all the shortcomings of older drug delivery systems.
Keywords: Electro-Medical Instrument; Syringe Pump; Wireless Control.
Performance analysis of different segmentation methods applied to positron emission tomography image fusion
by Abdallah Mehidi, Malika Mimi, Jerome Lapuyade-Lahorgue
Abstract: Medical imaging provides objective quantitative functional information leading to decision-making on diseases. Image segmentation is of great importance in extracting this information, and the labeling of regions of interest on these volumes is an issue for automatic or semi-automatic segmentation methods. The objective of this paper is to present and analyze the main PET image segmentation techniques and to provide a comparative study of these methods in terms of precision, accuracy and reproducibility. We report the most recent tumor image segmentation results used in the literature. Six state-of-the-art tumor segmentation algorithms are applied to a set of PET tumor images characterized by the following properties: noise level, wide range of contrast, uptake heterogeneity and complexity of shape, considering clinical tumor cases. The obtained results show that the Fuzzy Locally Adaptive Bayesian (FLAB) method provides superior accuracy and higher precision compared with the recently used methods, namely Hidden Fuzzy Markov Fields (HFMF) and Fuzzy Hidden Markov Chains (FHMC), as well as other clustering-based approaches such as Fuzzy C-Means (FCM), Fuzzy Local Information C-Means (FLICM) and Automated Generalized Fuzzy C-Means (GFCM), when the estimated norm is less than 3. Furthermore, we show that GFCM achieves the best results, outperforming all other techniques, when the estimated norm values, denoted Norm, are greater than 3.
Keywords: Image Segmentation; Clustering Methods; Bayesian Segmentation; Fuzzy C-means; Hilbertian norm; Positron Emission Tomography (PET); Image Fusion.
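Several of the compared methods build on fuzzy c-means, which alternates membership and centroid updates to minimize the weighted within-cluster distance. A minimal 1-D sketch of the standard update equations (illustrative only; not any of the compared implementations, which work on full images with spatial information):

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50, init=None):
    """Plain fuzzy c-means on 1-D data. Alternates:
      membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
      centroid:   v_k  = sum_i u_ik^m * x_i / sum_i u_ik^m
    Returns the sorted cluster centers."""
    lo, hi = min(data), max(data)
    # Default init: centers spread evenly over the data range.
    centers = list(init) if init is not None else [
        lo + (hi - lo) * k / (c - 1) for k in range(c)]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for k in range(c)])
        centers = [sum(u[i][k] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][k] ** m for i in range(len(data)))
                   for k in range(c)]
    return sorted(centers)
```

Unlike hard k-means, every point keeps a graded membership in every cluster, which is what makes fuzzy variants attractive for the blurred uptake boundaries in PET.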
An Automatic detection of Microcalcification in Mammogram Images using Neuro-Fuzzy classifier
by Neha Shahare, Dinkar Yadav
Abstract: Breast cancer is among the most widely recognized diseases and has a high mortality rate worldwide, significantly endangering women's health because of insufficient awareness of health check-ups and breast screening and a shortage of medical experts. Among the existing modalities of medical scans, mammography is the most preferred modality for preliminary examination for breast cancer. In mammogram images, microcalcifications are one of the important signs for breast cancer detection. An automatic technique considering different statistical features, followed by an advanced fuzzy-based artificial neural network for classification and detection of breast cancer, is proposed. As mammogram images suffer from different noises, anisotropic diffusion filtering is used to pre-process the medical scan as an initial step. Further, a combined discrete wavelet transform and grey-level co-occurrence technique is used to extract the different statistical features. Finally, the extracted feature vectors are fed as input to the advanced fuzzy-based artificial neural network for classification and detection of the microcalcifications present in mammogram images. For the experimental analysis, the mini-MIAS database is considered, with sensitivity, specificity and accuracy as evaluation parameters. The qualitative and quantitative results show that the proposed classification method achieves significantly improved performance compared with existing state-of-the-art classification techniques such as SVM and ANN.
Keywords: Microcalcifications; Mammogram; GLCM; Cellular automata; Neuro-fuzzy.
Classification of Primary and Secondary Malignant Liver Lesions using Laws Mask Analysis and PNN classifier
by Jitendra Virmani, Dilsheen Dhoat
Abstract: A common technique to identify liver cancer is subjective analysis of ultrasound (US) images. The process of subjective analysis and classification of ultrasound images is sometimes difficult and confusing for radiologists. Due to the limited sensitivity of US images, a computer-aided classification (CAC) system is developed for differential diagnosis between malignant liver lesions (MLLs). The differential diagnosis between primary malignant, i.e. hepatocellular carcinoma (HCC), and secondary malignant, i.e. metastases (MET), lesions of the liver has been carried out using three experiments based on various ROI extraction protocols: (a) IROIs and NROI extraction: multiple IROIs are extracted within the lesion and one neighboring ROI (NROI) is extracted from the region surrounding the lesion; (b) LROI and NROI extraction: a single largest ROI (LROI) is extracted from the region within the lesion, along with an NROI; (c) GROI extraction: a single ROI is extracted such that the lesion is contained within the GROI, i.e. this ROI includes the region inside the lesion, its margin and some of the surrounding area. For the three experiments, feature extraction has been carried out using Laws' mask analysis with 1D kernels of lengths 3, 5, 7 and 9. A probabilistic neural network (PNN) has been used for the classification task. Experiment 1, which uses ratio features obtained by dividing texture features from IROIs by texture features from the NROI, yields a classification accuracy of 78.8% using Laws' masks of length 7. Experiment 2, which uses ratio features obtained by dividing texture features from the LROI by texture features from the NROI, yields a classification accuracy of 90% using Laws' masks of length 3. Experiment 3, which uses GROI extraction, yields a classification accuracy of 90% using Laws' masks of length 7.
The feature vectors yielding maximum accuracy in Experiments 2 and 3 were concatenated to form a concatenated feature vector (CFV), consisting of Laws' masks of length 3 for LROI and NROI extraction and Laws' masks of length 7 for GROI extraction. This fourth experiment yields an accuracy of 93%.
Keywords: Focal liver lesions; Malignant liver lesions; HCC; MET; B-Mode Ultrasound images; Laws’ Mask Analysis; Probabilistic neural network classifier.
Swarm Optimization Based Bag of Visual Words Model for Content-Based X-Ray Scan Retrieval
by Karthik K
Abstract: Classification and retrieval of medical images (MedIR) are emerging applications of computer vision for enabling intelligent medical diagnostics. Medical images are multi-dimensional and require specialized processing to extract features from their manifold underlying content. Existing models often fail to consider the inherent characteristics of the data and have thus fallen short when applied to medical images. In this paper, we present a MedIR approach based on the Bag of Visual Words (BoVW) model for content-based medical image retrieval. Dataset imbalance is a common issue for medical models, so the approach also considers a balanced set of categories drawn from an imbalanced dataset. The proposed BoVW model extracts features from each image, which are used to train a supervised machine learning classifier for X-ray medical image classification and retrieval. In the experimental validation, the proposed model performed well, with a classification accuracy of 89.73% and good retrieval results using our filter-based approach.
Keywords: Content Based Medical Image Retrieval; Image classification; Visual Space Modeling.
Hierarchical Fusion in Feature and Decision Space for Detection of Valvular Heart Disease using PCG Signal
by M.K.M. Rahman, Ainul Anam Shahjamal Khan, Tasmeea Rahman
Abstract: Detection of valvular heart disease from the phonocardiogram (PCG) signal is an important non-invasive and low-cost tool that can have a big impact on the health care market. We have developed two techniques, namely Weighted Fusion of Features in Decision Space (WFFDS) and Hierarchical Fusion in Feature and Decision Space (HFFDS), which combine information from multiple feature domains to improve disease-detection accuracy. We have shown that fusion of multiple features improves detection accuracy compared with individual features. The accuracy is further improved by the WFFDS technique, where the fusion is performed in decision space instead of feature space. In WFFDS, classifiers of the same type are trained on different feature sets, and weights calculated from the confusion matrix are used to combine information in decision space for classifying new data. In HFFDS, fusion is performed in both feature and decision space. Our experimental results corroborate that both WFFDS and HFFDS perform better than traditional feature representations and their straightforward fusion.
Keywords: Phonocardiogram; valvular diseases; neural network; feature fusion; decision fusion.
Detection of Abnormal Electromyograms Employing DWT Based Amplitude Envelope Analysis Using Teager Energy Operator
by Sayanjit Singha Roy, Debangshu Dey, Anwesha Karmakar, Ankita Singha Roy, Kumar Ashutosh, Niladri Ray Choudhary
Abstract: In this contribution, discrete wavelet transform based amplitude envelope analysis is proposed for automated detection and classification of healthy, myopathy and neuropathy electromyography signals. Electromyograms of the healthy, myopathy and neuropathy classes were initially decomposed into several frequency bands using discrete wavelet transform based multi-resolution analysis. Following this, instead of using the Hilbert transform, amplitude envelopes were extracted from the decomposed frequency subbands using a novel technique: the discrete energy separation algorithm implementing the Teager energy operator. Three distinct features were extracted from the amplitude envelopes of each subband, and an analysis-of-variance test was carried out to measure their statistical significance. The extracted features were finally fed as input to a support vector machine classifier to classify the different categories of electromyography signals. A classification accuracy of 100% was obtained, outperforming existing methods studied on the same database.
Keywords: Classification; electromyograms; envelope analysis; support vector machines and Teager energy operator.
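The Teager energy operator and an energy-separation envelope can be sketched as follows. This uses the standard DESA-2 amplitude estimate as an illustration; the paper's exact discrete energy separation variant may differ:

```python
import math

def teager(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Output covers samples 1..len(x)-2."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def desa2_envelope(x):
    """DESA-2 amplitude envelope: |a| ~= 2*psi(x) / sqrt(psi(y)),
    where y[n] = x[n+1] - x[n-1]. For x[n] = A*cos(W*n + phi) this
    recovers A exactly."""
    y = [x[n + 1] - x[n - 1] for n in range(1, len(x) - 1)]
    px = teager(x)[1:-1]  # trim to align with psi(y) sample indices
    py = teager(y)
    return [2 * p / math.sqrt(q) if q > 0 else 0.0 for p, q in zip(px, py)]
```

Applied to each wavelet subband, the operator yields an instantaneous amplitude envelope without the block transforms a Hilbert-based envelope would require.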
Early Onset/Offset Detection of Epileptic Seizure using M-band Wavelet Decomposition
by Yash Vardhan Varshney, Garima Chandel, Prashant Upadhyaya, Omar Farooq, Yusuf Uzzaman Khan
Abstract: Early detection of seizures and their diagnosis play an important role in the effective treatment of epileptic patients. Most research in this field has focused on seizure detection; however, it is also very important to detect seizures with minimum delay, which is useful for patient care. In this paper, an efficient approach for seizure detection with low onset/offset latency is proposed using three-band wavelet decomposition. Variance and higher-order moments are computed from features extracted using three-level wavelet decomposition. For comparative analysis, the extracted features are classified using two classifiers: a decision tree (DT) and a shallow artificial neural network (ANN). The DT shows better classification performance than the ANN, with classification specificity, sensitivity and accuracy of 99.6%, 98.97% and 99.49% respectively, and onset and offset latencies of 4.01 s and -0.21 s.
Keywords: Onset/Offset Seizure Detection; M-band Wavelet Transform; Decision tree (DT); Shallow network.
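The variance and higher-order moment features mentioned above reduce to standard formulas; a minimal sketch using population moments with standardized skewness and kurtosis (the paper's exact feature definitions may differ):

```python
def moment_features(subband):
    """Variance plus standardized third and fourth moments (skewness,
    kurtosis) of one wavelet subband, a common seizure feature set."""
    n = len(subband)
    mu = sum(subband) / n
    var = sum((v - mu) ** 2 for v in subband) / n
    sd = var ** 0.5
    skew = sum(((v - mu) / sd) ** 3 for v in subband) / n
    kurt = sum(((v - mu) / sd) ** 4 for v in subband) / n
    return var, skew, kurt
```

These scalars summarize each subband compactly, so a windowed EEG segment yields a short feature vector for the DT or ANN classifier.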
Fully automatic segmentation of LV from Echocardiography images and calculation of Ejection Fraction using Deep Learning
by Pallavi Kulkarni, Deepa Madathil
Abstract: Echocardiography is a widely used ultrasound imaging technique for cardiac health diagnosis. Echocardiography segmentation is a crucial process for evaluating multiple cardiac parameters such as ejection fraction, heart wall thickness, etc. Recently, machine learning techniques, especially deep learning using convolutional neural network models, have found increasing application in echo image analysis, including segmentation. In this paper, we present a unique convolutional neural network (CNN) model for automatic left ventricle (LV) segmentation of echo images. Denoising and feature extraction processes are integrated with the CNN model to enhance its prediction accuracy after training. The proposed system is trained on two-dimensional image sequences of 60 patients and tested on data from 22 patients. An automatic method for evaluating the ejection fraction is appended using the LV segmentation predictions generated by the CNN model. The performance of the CNN architecture is evaluated using various similarity- and distance-based measures, as well as ejection fraction correlation with ground-truth segmentation labels. CNN layer visualization methods are applied to obtain deeper insight into the trained network.
Keywords: Echocardiography; Left ventricle; Convolutional Neural Network; Autoencoders; feature extraction; Layer Visualization.
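The ejection-fraction step can be illustrated independently of the CNN. The abstract does not specify the volume formula, so this sketch assumes the single-plane area-length estimate V = 8A²/(3πL) applied to end-diastolic and end-systolic LV areas and lengths as would be measured from segmentation masks; the dimensions below are illustrative, not patient data.

```python
import numpy as np

def area_length_volume(area_cm2, length_cm):
    # Single-plane area-length volume estimate: V = 8 A^2 / (3 pi L)
    return 8.0 * area_cm2 ** 2 / (3.0 * np.pi * length_cm)

def ejection_fraction(ed_area, ed_length, es_area, es_length):
    # EF (%) = (EDV - ESV) / EDV * 100, from end-diastole/end-systole frames
    edv = area_length_volume(ed_area, ed_length)
    esv = area_length_volume(es_area, es_length)
    return 100.0 * (edv - esv) / edv

# Illustrative LV area (cm^2) and long-axis length (cm) values
ef = ejection_fraction(ed_area=35.0, ed_length=8.0, es_area=20.0, es_length=6.5)
```

In a full pipeline the areas and lengths would be extracted from the CNN's predicted LV masks at the end-diastolic and end-systolic frames.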
Special Issue on: Developments and Issues in Medical Imaging
Optimal selection of threshold in EIT reconstructed images for Estimating size of Objects
by Nanda Ranade, Damayanti Gharpure
Abstract: Electrical Impedance Tomography (EIT) is widely used for various applications in process tomography and in medical or geological imaging. An EIT system non-invasively acquires surface potential data, which are used to reconstruct conductivity images for identifying the shapes and sizes of objects of interest. Depending on the application, the objects of interest may have higher or lower conductivity than the background. It is useful to convert reconstructed images to binary form to quantitatively establish the shapes and sizes of objects. In this work, we present guidelines for selecting appropriate threshold values, based on systematic numerical investigations that assume prior knowledge of the conductivity contrast for a specific EIT application. Various configurations of objects immersed in a background (with lower or higher conductivity) were considered. The open-source software EIDORS (Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software) was used to reconstruct differential EIT images for these configurations. Diametric conductivity profiles were used to identify appropriate threshold values that yield accurate object sizes over a wide range of contrasts. The calculated threshold values and the resulting effect on estimated object size were compared with the usually preferred thresholds of
Keywords: EIT; EIDORS; Image thresholding; conductivity contrast.
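As a minimal illustration of the thresholding step, the sketch below binarizes a synthetic differential image at a chosen fraction of its peak contrast and reads off object size as a pixel count. The Gaussian blob, image size and 50% fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np

def binarize(image, fraction=0.5):
    # Threshold a differential EIT image at a fraction of its peak contrast;
    # the sign test handles both higher- and lower-conductivity objects
    peak = image.flat[np.abs(image).argmax()]
    return image >= fraction * peak if peak > 0 else image <= fraction * peak

# Synthetic reconstruction: a smoothed circular inclusion on zero background
yy, xx = np.mgrid[0:64, 0:64]
r2 = (xx - 32.0) ** 2 + (yy - 32.0) ** 2
img = np.exp(-r2 / (2 * 8.0 ** 2))  # Gaussian blob mimicking EIT blurring

object_pixels = int(binarize(img, 0.5).sum())  # estimated size in pixels
```

Sweeping `fraction` and comparing `object_pixels` against the known object size is the kind of systematic investigation the abstract describes.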
Special Issue on: Machine Learning Techniques for Medical and Biological Applications
Cerebral Palsy Rehabilitation - Effectiveness of Visual Stimulation Method by Analyzing the Quantitative Assessment of Oculomotor Abnormalities
by ILLAVARASON P, Arokia Renjit J
Abstract: Cerebral Palsy (CP) comprises developmental brain abnormalities arising before, during, or after birth. Damage to cerebral brain cells leads to other health issues such as vision, hearing and motor impairments; vision problems are among the major health issues for children with CP. The proposed approach deals with vision dysfunction and oculomotor assessment for the diagnosis and treatment of the brain disorder. Eye movement plays a vital role in achieving accurate vision of static and dynamic objects. In the proposed approach, we assessed the oculomotor deficits of CP children by recording the eye movements of 26 CP children (age range 4-14) and comparing their performance with age-matched controls, analyzing the children's eye-fixation centroid, smooth pursuit and eyelid-blinking activities. These activities show that eye movement provides a window into neuroplasticity for CP children. The oculomotor abnormalities indicate that the lesioned brain of CP children retains the ability to reorganize; continued practice of these visual stimulation tasks using an eye-gaze direction approach will improve the cognitive rehabilitation of children with cerebral palsy. These gaze-related indices in response to both static and dynamic visual stimulation may serve as potential quantitative biomarkers for cerebral palsy children.
Keywords: Cerebral Palsy; Eye fixation; Smooth Pursuit; Blink Rate.
Special Issue on: Bioscience and Computational Methods
Certain investigation on Biomedical impression and Image Forgery Detection
by Arun Anoop Mandankandy, Poonkuntran S
Abstract: In today's digital age, trust in images is eroding because of maliciously forged images. Issues related to multimedia security have focused research attention on tampering detection. Because the source and target regions come from the same image, copy-move forgery is very effective for image manipulation, since the copied region shares properties such as temperature, color, noise and illumination conditions. In this paper, we analyze papers related to copy-move forgery detection and conclude with a comparative analysis over several parameters, also covering medical and biomedical image analysis, databases, search engines, devices and system security.
Keywords: Block based image forgery detection; key-point based image forgery detection; pixel based image forgery detection; Copy move forgery; Biomedical search engines; Biomedical Image analysis.
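As an illustration of the block-based family of detectors mentioned in the keywords, the sketch below flags exactly duplicated, block-aligned regions; real detectors match robust block features (e.g. DCT coefficients) rather than raw bytes and handle arbitrary offsets, so this is a simplified assumption, not any surveyed method.

```python
import numpy as np

def copy_move_blocks(img, block=8):
    # Block-based detection: hash each block; a repeated hash marks a
    # candidate copy-move pair (source position, duplicate position)
    h, w = img.shape
    seen, matches = {}, []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Synthetic forgery: copy one region of a random image onto another
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
img[32:40, 32:40] = img[0:8, 0:8]  # the pasted (forged) patch
pairs = copy_move_blocks(img)
```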
Special Issue on: Evolutionary Deep Learning Methods for Intelligent Biomedical Computation Engineering
Systematic CO2 monitoring using Machine Learning enabled WSN to develop the Anti-Hazard Strategies for the future
by Jeyakkannan N, Manimegalai P, Prasanna Venkatesan G.K.D
Abstract: The life cycle of an organism cannot complete without carbon dioxide (CO2); CO2 is an essential ingredient of life on Earth. On the other hand, excessive CO2 affects the atmosphere through rapid climate change, the greenhouse effect, corrosive rain and many other hazards. Excessive CO2 also degrades natural resources, depleting them to hazardous levels. Therefore, the atmosphere needs attention in the form of monitoring under various conditions. In this paper, Wireless Sensor Network (WSN) technology is used to monitor CO2 and other gases; the collected data are stored and fed to machine learning methods to develop future strategies. This paper analyzes the feasibility and effectiveness of the various methodologies. During the analysis, the Generalized Regression Neural Network (GRNN) was identified as a suitable algorithm for learning and for anti-hazard strategies in CO2 monitoring. The results produced by the GRNN method are promising, reaching up to 96% accuracy compared with the other algorithms.
Keywords: GRNN; CO2 monitoring; Artificial Neural Networks; Atmospheric Gas Management.
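A GRNN reduces to Gaussian-kernel-weighted regression over the stored training set, which can be sketched in a few lines. The sensor trend, noise level and smoothing width below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    # GRNN output: kernel-weighted average of training targets, where the
    # weights decay with squared distance from the query point
    d2 = (x_train[None, :] - x_query[:, None]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy data: noisy CO2 readings (ppm) following a smooth diurnal-like trend
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 400.0 + 30.0 * np.sin(x) + rng.normal(0.0, 2.0, 50)
pred = grnn_predict(x, y, np.array([2.0, 5.0]))
```

Because a GRNN has no iterative training phase, new WSN readings can be appended to the stored set and used immediately, which suits on-line monitoring.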
Development of empirical model and controller for Negative Pressure Wound Therapy System
by Jimry H, Sanjeevi Gandhi
Abstract: Negative Pressure Wound Therapy helps diabetic open wounds heal much better by applying a smooth, controlled vacuum. The therapy's positive effects derive from stimulating granulation-tissue formation, blood flow, angiogenesis and cell proliferation, accelerating secondary wound closure, and removing bacteria from the wound. Maintaining the negative pressure in the system is generally a challenging task, and the patient should remain comfortable throughout the healing period. In this work, the derivation and validation of a mathematical model of a Negative Pressure Wound Therapy System is presented. The established model is based on the vacuum system together with the pump and other elements. The objective of the model is to support the design and analysis of a Proportional Integral Derivative control algorithm. Simulation in MATLAB Simulink has validated that this algorithm is suitable for controlling the vacuum pressure in the chamber.
Keywords: Negative Pressure Wound Therapy System; Vacuum Pressure; System modelling; PID algorithm; MATLAB Simulink.
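The PID loop described above can be sketched in discrete time. The gains, the crude first-order plant model and the -125 mmHg setpoint are illustrative assumptions (the paper's actual model and controller are built in MATLAB Simulink).

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    # One discrete PID update; state carries (integral, previous error)
    integral, prev = state
    integral += error * dt
    derivative = (error - prev) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy first-order chamber: pressure relaxes toward the pump drive u
setpoint = -125.0  # mmHg, a commonly cited NPWT target
pressure, state = 0.0, (0.0, 0.0)
for _ in range(300):  # 30 s of simulated time at dt = 0.1 s
    u, state = pid_step(setpoint - pressure, state)
    pressure += 0.1 * 0.5 * (u - pressure)  # crude plant update
```

The integral term is what removes the steady-state offset a purely proportional pump drive would leave.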
Machine Learning Algorithms using Binary Classification and Multi Model Ensemble Techniques for Skin Diseases Prediction
by Vikas Chaurasia, Saurabh Pal
Abstract: Unlike many other diseases, skin disease causes considerable irritation. Dermatological sicknesses range from common skin rashes to serious skin infections, which arise from a range of causes such as infections, heat, allergens, systemic disorders and drugs. A common skin problem is dermatitis; atopic dermatitis is a chronic condition that causes itchy, inflamed skin, most often appearing as patches on the face, neck, trunk or limbs.
The main focus of this research paper is the Dermatology database, which contains the erythemato-squamous disease classes psoriasis, seborrheic dermatitis, lichen planus, pityriasis rosea, chronic dermatitis and pityriasis rubra pilaris. The differential diagnosis of erythemato-squamous diseases is a real problem in dermatology: these diseases share the clinical features of erythema and scaling, with very few differences. Each record is a collection of 34 attributes, 33 of which take linear values and one of which is nominal. 75% of the dataset is used for modelling and 25% is held back for validation. The objective of this paper is to find the best-performing classifier for the dermatology dataset; intuition suggests that distance-based algorithms such as k-Nearest Neighbors and Support Vector Machines may do well. Algorithms are assessed using 10-fold cross-validation with the accuracy metric, a gross metric that indicates which developed model is best.
Keywords: Erythemato-Squamous; k-Nearest Neighbors; CART; Support Vector Machines; Ensemble methods.
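The evaluation protocol described in the abstract (10-fold cross-validation scored by accuracy, with a distance-based classifier) can be sketched with a hand-rolled k-NN on synthetic data, since the dermatology dataset itself is not reproduced here; the class means, the choice of k and the fold shuffling are illustrative assumptions.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=5):
    # Distance-based classification: majority vote of the k nearest points
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    votes = ytr[np.argsort(d, axis=1)[:, :k]]
    return (votes.mean(axis=1) > 0.5).astype(int)

def cross_val_accuracy(X, y, folds=10, k=5):
    # Plain (unstratified) 10-fold cross-validation scored by accuracy
    order = np.random.default_rng(0).permutation(len(y))
    accs = []
    for test_idx in np.array_split(order, folds):
        train = np.ones(len(y), dtype=bool)
        train[test_idx] = False
        pred = knn_predict(X[train], y[train], X[test_idx], k)
        accs.append((pred == y[test_idx]).mean())
    return float(np.mean(accs))

# Two well-separated synthetic classes stand in for the dermatology data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(4, 1, (100, 4))])
y = np.repeat([0, 1], 100)
acc = cross_val_accuracy(X, y)
```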
Review on retinal blood vessel segmentation: An algorithmic perspective
by Pearl Mary S, Thanikaiselvan V
Abstract: Medical image processing has progressed in leaps and bounds with the advent of radical medical imaging modalities. Blood vessel segmentation from the retinal fundus image is very useful in the diagnosis of chronic vascular diseases, arteriosclerosis, diabetic retinopathy, hypertension, etc. This review paper aims to bring out the existing algorithms developed for the segmentation of vessels in the fundus. This paper covers various segmentation approaches categorized under template matching, multi-scale approach, region growing, active contour model and pattern recognition methods. Pattern recognition is further classified as unsupervised, supervised and deep learning approaches. Performance metrics such as Accuracy, Specificity, Sensitivity, and Area under the Curve measures for these algorithms performed on the appropriate retinal databases are tabulated and discussed. Moreover, this paper discusses the impact of retinal blood vessel segmentation in screening Cardiovascular and Cerebrovascular Diseases. Also, this paper recommends a universal blood vessel segmentation algorithm for the medical vasculature images.
Keywords: Diabetic Retinopathy; supervised learning; Convolutional Neural Network; Deep Neural Network; Matched filter; unsupervised learning; Fully Convolutional Neural Network; retina; blood vessels; segmentation; region growing; pathology.
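The metrics tabulated across the surveyed algorithms reduce to pixel-wise confusion counts, which can be sketched directly; the tiny prediction/ground-truth vectors below are illustrative stand-ins for full vessel maps.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    # Pixel-wise confusion counts for a binary vessel map
    tp = int(((pred == 1) & (truth == 1)).sum())
    tn = int(((pred == 0) & (truth == 0)).sum())
    fp = int(((pred == 1) & (truth == 0)).sum())
    fn = int(((pred == 0) & (truth == 1)).sum())
    sensitivity = tp / (tp + fn)   # vessel pixels correctly detected
    specificity = tn / (tn + fp)   # background pixels correctly rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
se, sp, acc = segmentation_metrics(pred, truth)
```

Because vessel pixels are a small minority of a fundus image, accuracy alone is optimistic, which is why the surveyed papers also report sensitivity, specificity and area under the curve.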
Special Issue on: Electromagnetic Wave Propagation for Biomedical Applications
Highly Efficient Particle Swarm Optimization for Energy Efficiency in Wireless Sensor Network WSN using Energy Capping and Predictive Energy Allocation (EC-PEA)
by Vandana Reddy, Gayathri P
Abstract: WSN technology has changed the human way of life over the course of technological evolution. This change has brought many improvements and much sophistication to day-to-day devices. These devices help humans with daily routines and have become an integral part of them. In any smart environment in particular, many of the devices we use are necessities rather than mere conveniences; examples include fire sensors, smoke sensors, water-level controllers, CCTV and so on. These devices are energy-critical: they need a power source to run but are constrained by the energy storage they carry. Hence, it is critical to devise an energy conservation mechanism for these sensors using highly accurate Particle Swarm Optimization (PSO), and to devise an algorithm, Energy Capping and Predictive Energy Allocation (EC-PEA), that supervises conservation and scheduling schemes to achieve optimal energy consumption by the devices.
Keywords: Internet of Things (IoT); Energy Consumption; Home Automation; Particle Swarm Optimization (PSO); Energy Capping and Predictive Energy Allocation (EC-PEA).
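The PSO component can be sketched in its canonical form; the quadratic cost standing in for per-node energy consumption, and all swarm parameters, are illustrative assumptions rather than the paper's EC-PEA formulation.

```python
import numpy as np

def pso(cost, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Canonical PSO: inertia w plus cognitive (c1) and social (c2) pulls
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

# Hypothetical per-node energy cost, minimised at duty-cycle settings (1, 2)
best, best_cost = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
```

In an EC-PEA-style scheduler, the cost function would instead score a candidate allocation of capped energy budgets across sensor nodes.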