Forthcoming articles


International Journal of Computer Aided Engineering and Technology


These articles have been peer-reviewed and accepted for publication in IJCAET, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.




International Journal of Computer Aided Engineering and Technology (121 papers in press)


Regular Issues


  • Interactive Rendering of Light Scattering in Dust Molecules using Particle Systems   Order a copy of this article
    by Hatam Ali, Mohd Shahriza Sunar, Hoshang Kolivand 
    Abstract: This paper introduces a technique for rendering volumetric lighting effects in a dusty atmosphere using particle systems. Although numerous techniques have been proposed for rendering these effects, they still lack realism in interactive applications. The technique is based on sampling planes to compute the radiance transport equation and uses a dynamic model of the dust. The scattering of light is computed using fragment shaders and 2D textures, making use of the graphics hardware, while the dust is generated using a particle-engine technique. The technique is efficient and accurate enough to mimic realistic scenes with effects such as the scattering of light in dust molecules. In addition, volumetric shadows are created as a result of the density and size of particles within the participating media. Scattering of light is therefore generated in the presence of dusty media, providing visual cues closer to reality.
    Keywords: Interactive rendering; light scattering; particle system; real-time shadows; volumetric shadow.

  • Multiple Phases-based Classification for Cloud Services   Order a copy of this article
    by Abdullah Ali, Siti Mariyam Shamsuddin, Fathy E. Eassa 
    Abstract: The current problem in cloud services discovery is the lack of standardization in naming conventions and the heterogeneous types of service features. Therefore, to accurately retrieve the appropriate services, an intelligent service discovery is required. To achieve this, cloud service attributes should be extracted from the heterogeneous formats and represented in a uniform manner, such as an ontology, to increase the accuracy of discovery. The extraction process can be done by classifying the cloud services into different types. In this paper, single and multiple phases-based classifications are performed using Support Vector Machine (SVM) and Naive Bayes as classifiers. The Cloud Armor dataset, which represents four classes of cloud services, is used. Topic modeling using the MALLET tool is used for dataset preprocessing. The experimental results showed that the classification accuracy for the two phases-based and single phase-based classification reached 87.90% and 92.78% respectively.
    Keywords: Cloud Computing; Cloud services; classification; SVM; Naïve Bayes; Features Selection; Topic Modeling; Cloud Services Discovery; Preprocessing; Standard Deviation

  • Noise Removal Using Statistical Operators for Efficient Leaf Identification   Order a copy of this article
    by Muhammad Ghali Aliyu 
    Abstract: Plant species identification and classification based on leaf shape is becoming a popular trend, since each leaf carries substantial information that can be used to identify and classify the type of a plant. This is difficult because the features of a leaf shape can be influenced by other leaves that have similar features but belong to different categories or classes. To overcome this problem, an efficient preprocessing stage needs to be considered. This paper presents the most popular statistical operators, such as the mean filtering technique (MFT), median filtering technique (MDFT), adaptive (Wiener) filtering technique (WFT), rank order filtering technique (ROFT) and adaptive two-pass rank order filtering technique (ATRFT), for noise removal during the preprocessing stage. The five different filtering techniques were applied to various categories of plant leaves and their performance was evaluated using mean square error (MSE) and peak signal to noise ratio (PSNR). Ten morphological features were extracted from the pre-processed images. Mean values were calculated from the extracted features using the Modified Weighted Mean (MWM) approach. Retrieval or identification performance was evaluated using precision and recall measurements. Wu's standard database was used to test the proposed algorithm. The results showed that the best filtering technique gives the highest identification performance. It is found that WFT is the best filtering technique and gives the best identification accuracy of 95.1%. Hence, better strategies for improving image pre-processing and mean feature extraction offer good results for the performance of a plant identification system, as proven by our results.
    Keywords: preprocessing; plant identification; statistical operators; noise removal
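The MSE and PSNR measures used above to rank the five filters are standard and can be sketched directly; the pixel sequences in the usage below are illustrative, not taken from the paper's dataset.

```python
import math

def mse(original, filtered):
    """Mean square error between two equal-length grayscale pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, filtered)) / len(original)

def psnr(original, filtered, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the filter preserved more detail."""
    err = mse(original, filtered)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)
```

When comparing filtering techniques on the same noisy image, the filter whose output has the lowest MSE (equivalently, the highest PSNR) against the clean reference wins.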

  • Design of Power Saving Protocol with Duty Cycle Adjustment for Public Safety-LTE   Order a copy of this article
    by Mingoo Kang 
    Abstract: In this paper, a power saving protocol with duty cycle adjustment for PS-LTE (Public Safety-Long Term Evolution) is proposed according to the emergency state of PS-LTE-based mobile stations. Because the PS-LTE transceiver consumes power whenever it is active, it is most efficient to leave the receiver off and wake it up asynchronously only when needed. As an alternative to protocol-based duty-cycle control, an asynchronous wake-up scheme is designed for battery saving of PS-LTE devices. This type of PS-LTE phone with the power saving protocol has adaptive cycle control during an EMERGENCY-CALL. The dedicated wake-up EMERGENCY PS-LTE phone can continuously monitor the channel, listening for a wake-up signal from other nodes and activating the main receiver upon detection of the asynchronous wake-up protocol. As a result, the PS-LTE-based mobile station reduces power consumption by maximizing node sleeping time without compromising network latency, and the wake-up PS-LTE phone with the power saving protocol (without PS-LTE function activities: Group Communication Systems Enabler, Proximity-based Service, Mission Critical Push-to-Talk, Isolated E-UTRAN Operation for Public Safety, Relay, etc.) also enables the improvement of overall network performance.
    Keywords: Power Saving Protocol; Duty Cycle Adjustment; Public Safety-LTE.
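The battery-saving argument above rests on a simple duty-cycle power budget; a minimal sketch, with hypothetical power figures rather than measured PS-LTE numbers:

```python
def avg_power(duty_cycle, p_active, p_sleep):
    """Average power draw for a radio that is active a fraction `duty_cycle` of the time."""
    return duty_cycle * p_active + (1.0 - duty_cycle) * p_sleep

def battery_life_hours(capacity_mwh, duty_cycle, p_active_mw, p_sleep_mw):
    """Hours of operation from a battery of `capacity_mwh`, ignoring wake-up overhead."""
    return capacity_mwh / avg_power(duty_cycle, p_active_mw, p_sleep_mw)
```

Shrinking the duty cycle toward zero (waking only on an asynchronous signal) drives the average draw toward the sleep power, which is the gain the wake-up scheme targets.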

  • A Method for Creating Balanced Cluster areas in Wireless Sensor Networks   Order a copy of this article
    by ChongGun Kim, HyunJin Park, Mary Wu 
    Abstract: Balancing node energy is an important issue in sensor networks, since sensor nodes are deployed with limited power in arbitrary regions. Clustering is one of the basic approaches to designing energy-efficient distributed sensor networks. In a representative clustering method, LEACH, cluster headers are elected based on a probability method. Therefore, cluster headers may be located with geographical randomness. With LEACH, some nodes may not belong within the transmission range of any cluster header, or may be very far from their cluster header. Nodes that are far from their cluster header consume energy quickly, which has a bad effect on the operation of the sensor network. Poor coverage of a sensor network has a fatal effect on the QoS of the sensor application. In order to solve this problem, we propose a balanced clustering method that causes cluster headers to be geographically evenly distributed across the entire clustered sensor network. The experimental results show excellent performance in terms of the coverage of the sensor network and the lifetime of the entire set of nodes.
    Keywords: Cluster sensor networks; energy efficiency; the connectivity of sensor networks; distribution of clusters.
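For reference, the probabilistic election that LEACH uses (which the abstract identifies as the source of the geographic randomness) follows the classic threshold T(n); this sketch assumes the standard formulation with desired head fraction p and round number r:

```python
def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) = p / (1 - p*(r mod 1/p)).
    A node that has not yet served as head in the current epoch becomes a
    cluster head when a uniform random draw falls below this threshold."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))
```

The threshold grows within each epoch so that every node eventually serves as head, but nothing in the formula constrains *where* the elected heads sit, which is exactly the gap the balanced clustering method addresses.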

  • Intelligent Breast Cancer Diagnosis Based on Enhanced Pareto Optimal and Multilayer Perceptron Neural Network   Order a copy of this article
    by Ashraf Osman, Siti Mariyam Shamsuddin 
    Abstract: Breast cancer is among the most common cancer diseases. Diagnosis of this disease depends on human experience; it is time consuming and its results have an element of human error. Pareto optimal evolutionary multi-objective optimization is used to obtain multiple final results in a single run for simultaneous parameter optimization of artificial neural networks (ANNs). In this paper, a computer-based method of an automatic classifier for the breast cancer diagnosis task is proposed. The proposed method applies a multilayer perceptron (MLP) neural network based on the enhanced non-dominated sorting genetic algorithm (NSGA-II) to achieve an accurate classification result for breast cancer diagnosis. Moreover, it is also used to optimize the network structure and reduce the error rate of the MLP neural network simultaneously. Compared to other methods found in the literature, the proposed method is viable for breast cancer disease diagnosis.
    Keywords: Breast Cancer diagnosis; Multilayer Perceptron; Pareto Optimal; NSGA-II.

  • Efficiency and stability of EN-ReliefF, a new method for feature selection   Order a copy of this article
    by Nicole Challita, Mohamad Khalil, Pierre Beauseroy 
    Abstract: One of the most advanced forms of industrial maintenance is predictive maintenance, in which analysis of the present behavior of a material helps to predict its future behavior. Since the diagnosis of faults in rotating machines is an important subject for increasing their productivity and reliability, the choice of features to be used for classification and diagnosis constitutes a crucial point. Using all possible features increases the computational cost and can even increase the classification error because of the existence of redundant and non-significant features. In this context, we are interested in presenting different methods of feature selection and proposing a new approach that tends to select the best features among the existing ones and perform classification-identification using the selected features. A study of the proposed method's stability is also provided.
    Keywords: feature selection; dimension reduction; feature selection methods; Lasso; Elastic Net; Relief; performance; stability; classification error; important features.
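As background for the Relief family on which the proposed EN-ReliefF builds, the core weight update rewards a feature that separates a sample from its nearest miss (different class) and penalizes one that differs from its nearest hit (same class); a minimal sketch with made-up feature vectors:

```python
def relief_update(weights, sample, near_hit, near_miss, n_iter):
    """One Relief iteration over all features.
    A feature's weight rises when it distinguishes the sample from its
    nearest miss and falls when it differs from its nearest hit."""
    for f in range(len(weights)):
        weights[f] += (abs(sample[f] - near_miss[f])
                       - abs(sample[f] - near_hit[f])) / n_iter
    return weights
```

After repeating the update over sampled instances, features are ranked by weight and the top ones kept, which is the selection step the combined Elastic Net/ReliefF approach refines.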

  • ECG signal compression using filter bank based on Hermite polynomial   Order a copy of this article
    by Ram Kanhe, Satish Hamde 
    Abstract: Compression of the electrocardiogram (ECG) signal is necessary for storage and transmission. In this paper we propose a new filter bank based on Hermite functions for effective compression of the ECG signal. The Hermite function is used to derive the low-pass filter taps, and compression using the discrete wavelet transform (DWT) is carried out. The compression scheme is implemented and a performance evaluation is presented based on the standard compression indices: compression ratio (CR), percent root mean square difference (PRD) and cross correlation coefficient (CCC). The retrieval of the dominant morphological features of the ECG waveform, such as the P-QRS-T complex, upon reconstruction is also verified. The results are presented using the CSE-DS-5 and MIT-BIH databases. The results reflect that the quality of the reconstructed signal is excellent, with minimum loss of the diagnostically important features.
    Keywords: compression ratio; Hermite functions; morphology; percent root mean square difference; wavelet.
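The CR and PRD indices used for the evaluation are standard and easy to state; the sample values below are illustrative only:

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """CR: how many times smaller the compressed representation is."""
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """Percent root-mean-square difference between original and reconstructed signals."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)
```

A good ECG compressor pushes CR up while holding PRD low enough that diagnostic features such as the P-QRS-T complex survive reconstruction.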

  • Comparative Study of Design and Analysis of Gripper Systems for Bore well Rescue Operation   Order a copy of this article
    by Sridhar K P, Hema C R, Deepa S 
    Abstract: Rescuing a trapped child from a bore well is always a challenging task for the rescue operation team. In recent years, rescue robots have been used to save the child in a short duration. In this work, we design and develop three types of robotic arms, with rectangular, square and cylindrical based mechanical gripper systems, to rescue the child from the bore well safely. We performed structural and performance analysis using the NX 10 software package to check the effectiveness of the robotic arms. From our experiments, it is observed that the rectangular robotic arm has a high displacement of 9.08 mm during the gripping operation. A high displacement of the arm may cause injuries to the child. To overcome this issue, we designed a square robotic arm based gripper system. However, the square robotic arm reduces the displacement only from 9.08 mm to 7.84 mm. We then explored a cylindrical robotic arm based gripper system to improve the effectiveness of the robotic arm. Our experimental results show that the cylindrical robotic arm outperforms the rectangular and square robotic arm based mechanical gripper systems. It is noted that the cylindrical robotic arm reduces the displacement to 0.01 mm, which would ensure the safety of the child during the rescue operation. We also implemented all the robotic arms and tested them practically in real rescue operations.
    Keywords: Square robotic arm; cylindrical robotic arm; gripper system; NX 10 software.

  • Performance Metrics on Ultra Low Power Polyphase decimation filter using Carbon Nanotube Field Effect Transistor Technology   Order a copy of this article
    by Mathan N, Vadivel M, Jayashri S 
    Abstract: Low power consumption and reduction in area are the most pre-eminent criteria in designing a digital signal processor. Multi-rate signal processing studies digital signal processing systems that include sample-rate conversion. Filters are substantial building blocks of DSP, and polyphase filters are a momentous component in crafting various filter structures, because they reduce the cost and complexity of the filter by performing decimation prior to filtering, which reduces the multiplications per input sample. The polyphase structure employs an FIR filter, which leads to a very efficient implementation. The polyphase decimation filter is generally built with multipliers, parallel-in serial-out shift registers, serial-in parallel-out shift registers, a ripple carry adder, a carry look-ahead adder and parallel-in parallel-out shift registers as delay elements. To improve the efficiency of the system, a low power D flip-flop is employed in the shift registers. Of the devices used in the circuit, the multiplier circuit plays a dynamic role. To accomplish the desired results in the performance parameters of the multiplier, an efficient adder is proposed and embodied in the multiplier. The carbon nanotube field effect transistor (CNTFET) is a promising new device that may overcome some of the restraints of a silicon-based MOSFET. The circuits are designed in 32nm CMOS and CNTFET technology in Synopsys HSPICE. Performance parameters such as power, delay and power delay product are assessed and compared in both technologies.
    Keywords: Polyphase decimation filter; Radix-4 multiplier architecture; CNTFET; delay; average power; power delay product.

    by Murali Subramanian, Jaisankar N 
    Abstract: Air pollution has become a key concern in India owing to rapid economic development, urbanization and industrialization connected with increased energy demands. Existing monitoring methods are expensive and provide low-resolution sensing data; the monitoring system also has high communication overhead, power consumption and time requirements. To solve these problems, a clustered wireless sensor network based air pollution monitoring system with swarm intelligence is discussed. Initially, the sensor nodes in the network are grouped into clusters and the cluster head is selected using the Glowworm Swarm Optimization (GSO) algorithm and the Cuckoo Search Algorithm (CSA). Then an Air Quality Index (AQI) based fuzzy rule is formed using a Fuzzy Inference System (FIS). Data aggregation is then performed using the Improved Artificial Fish Swarm Algorithm (IAFSA) and the Hybrid Bat Algorithm (HBA) to find the optimal path for efficient data transmission by reducing the communication overhead. The bat fitness function is calculated using Differential Evolution (DE). The results show that the proposed method improves on the existing one in terms of network energy utilization, delay, throughput and aggregation latency.
    Keywords: Air pollution; air quality index; cuckoo search algorithm; artificial fish swarm algorithm; cluster; aggregation; fuzzy logic.

    by Kamalakannan Jayaseelan, M. Rajasekhara Babu 
    Abstract: Breast cancer is the second most common cancer among women, its major victims. Breast cancer is an abnormal growth of cancer cells in the breast. In the U.S., one woman out of eight is diagnosed with breast cancer [1]. Researchers are striving hard to detect breast cancer at an early stage and provide appropriate remedial measures to extend the lifetime of cancer patients. Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and quantitative assessment. Image preprocessing is an essential procedure used for reducing image noise, highlighting edges, or displaying digital images. Mammography is the best way of screening the breast [2]. Applying medical image techniques can help in identifying and classifying the abnormalities present in the breast. There are many classifiers available for classifying the data; the features extracted from medical images can be given as input to a classifier for classification. A mammogram is a grey scale image, which is given as input to the proposed system. Mammograms are preprocessed before being given to the classifier. The features are extracted through the GLCM [5]. A decision tree classifier is used in this paper for classifying breast abnormalities as benign or malignant. Samples are taken from the MIAS database, which contains all types of indicators of breast cancer.
    Keywords: Mammogram; Screening; Feature; GLCM; Malignant; Benign.

  • MMQuiz: Multi Modal Human Computer Interaction based online Assessment System   Order a copy of this article
    by Vairamuthu Subbiah, Margret S 
    Abstract: Ever-changing computing techniques and novel innovations play a major role in the mode of education institutions prefer to adopt nowadays. Universities and institutions have started moving their mode of education from traditional classroom-based teaching to internet-based online teaching. As part of this process, the way students can be assessed online also plays a major role. In this work, a framework has been proposed and implemented to enable physically challenged students to complete their online assessments efficiently. Although many voice-based browsers exist, they are all far from practical. Several frameworks have been designed for specific objectives like interactive assessment, while supporting generic features like browsing and mailing. Any individual who is able to communicate via speech may utilize our proposed framework. The same framework could be employed by any university or institution to conduct speech-based assessments online.
    Keywords: HCI; modality; knowledge base; screen readers.

  • An EM-MPM Algorithmic approach to detect and classify Thyroid Dysfunction in Medical Thermal Images   Order a copy of this article
    by Gopinath M P, Prabu S 
    Abstract: In this paper, a non-invasive method to diagnose the thyroid using a thermal imaging process is proposed. The heat distribution in an object is referred to as thermography; it is utilized in medical analysis as the human body emits a certain amount of heat. The proposed technique is based on the following computational methods: the Expectation Maximization - Maximization of the Posterior Marginal (EM-MPM) algorithm for segmenting the thyroid region, the Gray-Level Co-occurrence Matrix (GLCM) for feature extraction and the Support Vector Machine (SVM) for classifying abnormalities. The experiment was carried out with 40 thermal images of the real human thyroid region, of which 10 were normal and 30 abnormal (hyper and hypo). The accuracy of the proposed system is 97.5%, which is significantly good. As a result, domain users are able to analyse the predictions given by the proposed system as a decision support tool.
    Keywords: Thyroid; Thermal Imaging; Segmentation; Classification.

  • A Novel Performance aware Real-Time data handling for big data platforms on Lambda Architecture   Order a copy of this article
    by Rizwan Patan, M. Rajasekhara Babu 
    Abstract: Big data is becoming a popular technology for analytics, but its techniques and tools are very limited in solving energy-aware real-time data handling problems. Real-time data handling can occur in one of two computing areas: (1) batch computing and (2) stream computing. The stream computing environment uses the round robin algorithm as its default scheduling strategy, whereas batch processing uses distributed scheduling for the allocation of its resources. However, these computing models do not consider proper energy-aware distributed scheduling policies for the allocation of their resources. This paper presents the development of management policies that reduce the energy used in the allocation of resources. Big Data Fusion has been used to improve the efficiency of handling different data types: batch data, online data and real-time data. A hybrid computational model has been applied to improve the performance further through the Lambda Architecture. Finally, the experimental results have shown a 20% performance improvement.
    Keywords: Big Data; Batch Computing; Energy efficiency; Stream Computing; Response Time; Resource Scheduling.

  • Spreadsheet-based Neural Networks Modeling and Simulation for Training and Predicting Inverse Kinematics of Robot Arm   Order a copy of this article
    by Khairul Annuar Abdullah, Zuriati Yusof, Riza Sulaiman 
    Abstract: This paper proposes to solve the inverse kinematics (IK) problem of a two-degree-of-freedom planar robot arm using a neural network (NN) technique. Several NN model designs with distinct numbers of hidden neurons, based on the sum of square error (SSE) function of the joint angle, are developed and trained with the generalized reduced gradient algorithm. The paper is also intended to demonstrate the modeling process of a feed-forward NN topology in a spreadsheet environment. Technicalities of the NN, including layer, neuron, bias, weight, summation function, transfer function, net input, response, objective function and training algorithm, together with the IK problem parameters, are stepwise-modeled using built-in front-end spreadsheet properties. The spreadsheet functions INDEX, SUMPRODUCT, EXP and SUMSQ; the utilities Name Manager, Data Validation, Data Table, ActiveX Controls, Answer Report and Charts; and the add-in Solver are utilized to develop the models. From the series of experiments conducted, it is discovered that most of the spreadsheet-based NN models developed are successful in solving the IK problem of the robot arm with satisfactory accuracy, owing to minimal SSEs. With the input parameters of link lengths and end-effector position and orientation, two models with the structures 5-12-1 and 5-10-1 surpass the other models in training and are best capable of predicting the first and second joint angles respectively. This NN-based IK technique contributes significantly to the optimal motion control of robot arms for quality processing and assembly tasks.
    Keywords: feed-forward neural networks; generalized reduced gradient algorithm; inverse kinematics; multiple linear regression; robot arm; spreadsheet modeling and simulation.
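The spreadsheet formulas the authors describe (SUMPRODUCT for the net input, EXP inside the transfer function) map directly onto a conventional feed-forward pass; this sketch assumes a logistic transfer function and a single hidden layer, with weights chosen purely for illustration:

```python
import math

def sumproduct(xs, ws):
    """Spreadsheet-style SUMPRODUCT: dot product of inputs and weights."""
    return sum(x * w for x, w in zip(xs, ws))

def sigmoid(net):
    """Logistic transfer function, built from EXP as in the spreadsheet models."""
    return 1.0 / (1.0 + math.exp(-net))

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """One forward pass of a feed-forward net with a single hidden layer.
    hidden_w is a list of weight vectors, one per hidden neuron."""
    hidden = [sigmoid(sumproduct(inputs, w) + b) for w, b in zip(hidden_w, hidden_b)]
    return sigmoid(sumproduct(hidden, out_w) + out_b)
```

In the paper's setting, training means letting Solver (running the generalized reduced gradient algorithm) adjust the weight and bias cells to minimize the SSE between such outputs and the target joint angles.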

  • SOSIoT: SOS Optimization to leverage the Energy Efficient Internet of Things(IoT) based on Route Search Optimization   Order a copy of this article
    by Kallam Suresh, M.Rajasekhara Babu 
    Abstract: The IoT has become popular in the smart vision of world development. It is more and more complex because of billions of heterogeneous wireless devices communicating with each other, and each wireless sensor node or device consumes considerable energy for its communication. There are various techniques for reducing this energy, such as low-energy routing algorithms using PSO (Particle Swarm Optimization) and advances in heuristic techniques with SOS (Symbiotic Organism Search). However, these techniques are inefficient because sensor nodes are deployed directly in the network without considering the greater energy consumed when transmitting. This paper proposes an SOS Internet of Things (SOSIoT) technique that handles and regulates energy factors in the IoT efficiently. It is a self-adaptive technique that aims to minimize energy consumption on the Internet of Things in a significant manner. Finally, it presents comparative results against existing methods on energy consumption factors. Through the impact of SOSIoT and the optimal techniques, the battery life of smart devices increases by up to 40 to 60 percent.
    Keywords: Internet of Things; Wireless Network; Energy Efficient Technique; Symbiotic Organism Search(SOS); Optimization.

  • Implementation of Biologically Motivated Optimization Approach for Tumor Categorization   Order a copy of this article
    by Sushruta Mishra, Hrudaya Kumar Tripathy, Brojo Kishore Mishra 
    Abstract: Tumor prediction and classification is regarded as a complex task that needs attention, and medical experts often lack expertise in this area. Hence an intelligent clinical system model is the need of the hour. Recently, biologically motivated techniques have emerged as an efficient computing method to solve imprecise and complex problems. Nature forms an immense source of inspiration for finding solutions to sophisticated problems in the IT sector, since it is highly robust and dynamic; the results obtained are highly optimized and balanced solutions. This is the basic idea of such nature-motivated techniques. In our research, we have analyzed and implemented some important bio-inspired optimization techniques to categorize different kinds of tumor. The multilayer perceptron is the classifier used in the process. We have then evaluated our results with critical metrics like RMSE, the kappa coefficient, accuracy and others to determine the effectiveness of the system model developed. It is observed that using a bio-inspired computation approach enhances the efficiency of tumor classification. The results are depicted in this paper.
    Keywords: Bio-inspired computation; PSO search; Genetic search; Evolutionary search; Kappa Coefficient; Classification Accuracy.

  • Mathematical modeling for fatigue life prediction of a symmetrical 65Si7 leaf spring   Order a copy of this article
    by Vinkel Kumar Arora, Gian Bhushan, M.L. Aggarwal 
    Abstract: The leaf spring is a suspension component designed to sustain a required fatigue life before failure or permanent set. The fatigue life of a leaf spring depends upon various factors such as geometry, design, material, processing, fatigue strength reduction and some uncontrollable factors. Determining the effect of variation of an individual factor on the fatigue life is always a challenging task, as the experimental procedure is time consuming and costly, and no such attempt has previously been made to predict this effect. The work presented in this paper depicts the effect of variation of an individual factor on the fatigue life of a leaf spring. A computer program written in FORTRAN for determination of the fatigue life of a light commercial vehicle spring has been validated experimentally. Two processing factors, five strength reduction factors, one design factor, one material factor and two geometry factors have been considered for investigation in this work. The effect of varying one factor at a time on the fatigue life of the leaf spring has been determined and modeled using the statistical tool NCSS. The regression model has been depicted and validated analytically and experimentally.
    Keywords: Fatigue life; Design factor; Geometry factor; Strength reduction factor; Material & processing factor.

  • Review on IMS-4G-Cloud networks maintaining service continuity using Distributed multi agent scheme   Order a copy of this article
    by Bagubali Annasamy, Prithiviraj V, Mallick Partha S 
    Abstract: In the current networking field, research is ongoing with the aim of providing users their desired QoS everywhere. This paper discusses the DMAS (Distributed Multi Agent Scheme) used in IMS-4G-Cloud networks to support session (service) continuity. It explains the role of the different agents, which make use of knowledge sources in solving the problem and performing tasks. A Q-learning awareness algorithm is used to estimate the QoS by calculating a cost function with the help of mathematical formulae comprising QoS parameters such as jitter, delay, packet loss, network resource availability, mobility and a service fare parameter. Agents in the DMAS interact with each other and work cooperatively to provide both QoS and mobility information. Based on the type of information provided, two types of handoff arise. A detailed explanation of these handoffs and their phases is given in this paper. Finally, a comparison is made between the general IMS handoff procedure and the DMAS-IMS handoff procedure with the help of signal flow diagrams.
    Keywords: 4G; IMS; CLOUD; Q- learning algorithm.
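The Q-learning estimation the scheme relies on follows the standard tabular update rule; the states, actions and rewards below are hypothetical stand-ins for the network-QoS quantities in the paper:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    `q` is a dict of dicts mapping state -> action -> value."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q.setdefault(state, {}).setdefault(action, 0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]
```

In the DMAS setting, the reward would be derived from the cost function over jitter, delay, packet loss and the other listed parameters, so that learned Q-values steer handoff decisions toward higher-QoS networks.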

  • Design of 5-3 Multicolumn Compressor for High Performance Multiplier   Order a copy of this article
    by Marimuthu Ramakrishnan, Balamurugan Subramani, Partha S. Mallick 
    Abstract: Compressors are widely used in multipliers to reduce the partial products. This paper proposes the design of a 5-3 multicolumn compressor, which is used to design multipliers of various sizes. In this paper, we have designed 6
    Keywords: Full adder; 5-3 multi-column compressor; multiplier; EDP (Energy delay product).
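Functionally, a 5-3 compressor outputs the population count of its five same-weight input bits as a 3-bit number (0 through 5 fits in 3 bits); a behavioral sketch for checking truth tables (the paper's gate-level design is not reproduced here):

```python
def compressor_5_3(bits):
    """5-3 column compressor, behavioral model: the 3-bit output
    (MSB, mid, LSB) encodes how many of the five input bits are 1."""
    assert len(bits) == 5 and all(b in (0, 1) for b in bits)
    total = sum(bits)
    return (total >> 2) & 1, (total >> 1) & 1, total & 1
```

In a multiplier's partial-product reduction tree, such compressors replace chains of full adders, shrinking each column of five bits to three in one stage.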

  • Efficient Storage and Retrieval of Medical Records using Fusion Based Multimodal Biometrics   Order a copy of this article
    by Lalithamani Nithyanandham, Amrutha C 
    Abstract: Biometrics helps to uniquely identify a person using their biological features and hence is used to develop systems with a high level of security. Multimodal biometrics further increases the level of security and provides controlled access, using a combination of multiple traits to identify a person. This paper presents an application of multimodal biometrics to efficiently store, access and retrieve the medical records of a patient, independent of hospital servers across the country. In emergencies, where it is important to know the medical history of a patient, the record can be securely accessed from a cloud server using the patient's biometric traits. Here we use two traits, face print and fingerprint, to simulate the process of uniquely identifying a patient's record stored on a cloud server. The records can only be accessed by authorized representatives of the hospitals, which preserves their confidentiality. A feature level fusion technique is used to determine whether a record corresponding to a patient is available in the database. Cryptographic methods such as shuffling algorithms are applied for further security of the records.
    Keywords: Multimodal Biometrics; Feature Level Fusion; Fingerprint; Faceprint; Histogram Equalization; Principal Component Analysis; Crossing Number Algorithm.

  • Hierarchy based Knapsack approach for Network Selection in HetNets   Order a copy of this article
    by Trinatha Rao P, HimaBindu Valiveti 
    Abstract: To realize the dynamic network environment of the current day, where end-users are equipped with various smart mobile devices and technologies, it is necessary to incorporate mechanisms that yield high speed and low end-to-end delay with minimum transition loss. Appropriate selection of the target network helps resolve this issue to a major extent. Information regarding the next point of attachment is provided by the network to assist the node in deciding whether to change base stations to suit the prevailing network situation. In this paper, an intelligent approach is adopted that ranks the networks available in proximity and ignores the less compatible ones. This is accomplished between Wi-Fi and WiMAX networks by the proposed hierarchical Knapsack approach with dynamic programming, which helps identify the optimal scenario to support end-users when selecting a network for the handoff.
    Keywords: Heterogeneous Networks; Network selection; Hierarchical Knapsack.
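The knapsack formulation the abstract alludes to can be sketched with a standard 0/1 dynamic program: each candidate network carries a utility score and a resource cost, and the subset maximising utility within a budget is selected. The scores, costs and budget below are hypothetical, not the paper's model:

```python
# Illustrative 0/1 knapsack DP for network selection.  Each candidate network
# is (name, resource_cost, utility); pick the subset maximising total utility
# within a resource budget.  Values are hypothetical, not from the paper.

def select_networks(networks, budget):
    """networks: list of (name, cost, utility); returns chosen names."""
    n = len(networks)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, cost, utility) in enumerate(networks, start=1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if cost <= b:
                dp[i][b] = max(dp[i][b], dp[i - 1][b - cost] + utility)
    # Backtrack to recover which networks were chosen.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            name, cost, _ = networks[i - 1]
            chosen.append(name)
            b -= cost
    return list(reversed(chosen))
```

Ranking then falls out naturally: networks excluded by the DP are the "less compatible" ones the abstract mentions ignoring.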

  • PR-LRU: Partial Random LRU Technique for Performance Improvement of Last Level Cache   Order a copy of this article
    by Sheela Kathavate, Lakshmi Rajesh, Srinath N. K.  
    Abstract: As Chip Multiprocessors (CMPs) have become eminent in all areas of computing, it is inevitable for the operating system to schedule processes efficiently on different cores. These multi-cores pose various challenges, of which shared resource contention is the dominant one, as cores share resources such as the last level cache (LLC) and main memory. This can lead to poor and unpredictable performance of the threads running on the system. The cache replacement policy of the LLC becomes critical in managing the cache data efficiently. Though prominent, the Least Recently Used (LRU) algorithm has issues with applications that do not follow the temporal locality pattern. This study proposes a modification to the LRU algorithm in which a random selection of the victim from the N least-recently-used blocks yields better results than the conventional method. The algorithm is evaluated using the Multi2Sim simulator with the PARSEC and SPLASH2 benchmarks. The results show an overall improvement in hit ratio of up to 6% and 2% over LRU for the PARSEC and SPLASH2 benchmarks respectively.
    Keywords: multi-core; last level cache; LLC; least recently used; LRU; multi2sim; parsec; splash; hit ratio; performance.
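The eviction rule described in the abstract, choosing the victim at random among the N least-recently-used blocks rather than always evicting the single oldest one, can be sketched as follows (an illustrative model, not the authors' simulator code):

```python
import random
from collections import OrderedDict

# Illustrative sketch of partial-random LRU: on a miss with a full cache,
# the victim is chosen at random among the N least-recently-used blocks.
# Parameters and structure are assumptions, not the paper's implementation.

class PartialRandomLRU:
    def __init__(self, capacity, n_candidates=4, seed=None):
        self.capacity = capacity
        self.n = n_candidates
        self.blocks = OrderedDict()          # oldest (least recent) first
        self.rng = random.Random(seed)

    def access(self, tag):
        hit = tag in self.blocks
        if hit:
            self.blocks.move_to_end(tag)     # refresh recency on a hit
        else:
            if len(self.blocks) >= self.capacity:
                candidates = list(self.blocks)[:self.n]   # N LRU blocks
                self.blocks.pop(self.rng.choice(candidates))
            self.blocks[tag] = True
        return hit
```

With `n_candidates=1` this degenerates to conventional LRU, which makes the policy easy to compare against the baseline.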

    by Sivaraman Eswaran, Manickachezian R 
    Abstract: Cloud computing is a rapidly growing field used in many domains to handle and manage multimedia applications, owing to the large amount of available resources. When multiple multimedia requests enter a cloud server, finding and provisioning the resources required by requests of different kinds is a difficult task. In the existing system, a centralized hierarchical cloud-based multimedia system (CMS) is used, consisting of elements such as a resource manager, cluster heads and server clusters. This system can assign the resources required by users effectively and at reduced cost. Existing research also addresses, using a genetic algorithm, the load balancing problem that occurs when multiple multimedia services with different characteristics enter the system and are allocated to servers without considering their load capacity. However, a genetic algorithm can fail to find the optimal resource with the optimal load level because of its local search optimization problem, and load balancing cannot be done effectively when multimedia tasks with varying characteristics arrive. The proposed research methodology resolves these problems with a novel load balancing system that addresses both task unevenness and the genetic algorithm's local search optimization problem. This paper proposes an effective Multimedia Load Balancing (MLB) scheme for a Cloud-Based Multimedia System (CMS) using a Support Vector Machine (SVM) and dynamic multiservice load balancing with an Adaptive Genetic Algorithm (AGA) (MLB-SVM-AGA). SVM is used to quantify the unevenness in the utilization of multiple resources on a resource manager on the client side, confirmed at the server side in each cluster. The unevenness scenario can be modeled as a mathematical hyperplane problem, but in most cases this is computationally intractable, so the Adaptive Genetic Algorithm (AGA) is used to solve the dynamic load balancing problem. Experimental evaluation of both the proposed and the existing methodologies is conducted in the CloudSim toolkit; the results demonstrate that the AGA approach is able to dynamically spread the multimedia task load equally.
    Keywords: Cloud computing; Adaptive Genetic Algorithm (AGA); Load balancing; Meta heuristic; Cloud-based Multimedia System (CMS); Support Vector Machine (SVM) and Unevenness.

  • Wear prediction of hard carbon coated hard-on-hard hip implants using finite element method   Order a copy of this article
    by Shankar Subramaniam, Siddarth R, Nithyaprakash R, Uddin M. S 
    Abstract: Wear is an important factor affecting the performance of hip implants. Wear can be reduced by selecting a proper material pair for the bearing components. In addition to their excellent biocompatibility, hard carbon coatings on the articulating surfaces are shown to exhibit improved friction, wear characteristics and mechanical properties. This paper presents a finite element (FE) analysis of the effects of hard carbon coatings on the wear evolution of hard-on-hard hip implants. Three different types of thin-film hard carbon coating on the articulating surfaces of the bearing components (i.e. on both head and cup) are considered: nanocrystalline diamond (NCD), diamond-like carbon (DLC) and polycrystalline diamond (PCD). A uniform coating of 0.01 mm thickness was applied on both head and cup, while the head was 28 mm in diameter with a radial clearance of 0.05 mm. Considering the 3D angular rotation as well as the gait loading of a normal walking cycle, linear and volumetric wear are computed for 20 million cycles. First, the current FE wear model was validated against an experimental hip simulator study available in the literature. FE simulation results showed that as wear progresses, the contact stress at the interface between the head and cup decreases with the increase of gait cycles or the number of years. Wear modelling indicated that the PCD coated bearing couple had the least wear evolution compared to the NCD and DLC coated couples. Thus, PCD can be considered a promising coating material for hard bearing surfaces to improve tribological and biomechanical performance, thereby increasing the longevity of implants.
    Keywords: hip implants; linear wear; volumetric wear; hard carbon coatings; finite element method.

  • SysML Model-Driven Approach to Verify Blocks Compatibility   Order a copy of this article
    by Hamida Bouaziz, Samir Chouali, Ahmed Hammad, Hassan Mountassir 
    Abstract: In the component paradigm, the system is seen as an assembly of heterogeneous components, where system reliability depends on the compatibility of these components. In our approach, we focus on verifying the compatibility of components modelled with SysML diagrams. Thus, we model component interactions with sequence diagrams (SDs) and components with SysML blocks. The SDs constitute a good starting point for compatibility verification. However, this verification cannot be applied directly to SDs, because they are expressed in an informal language. Thus, to apply a verification method, it is necessary to translate the SDs into formal models and then verify the desired properties. In this paper, we propose a high-level model-driven approach consisting of an ATL grammar that automates the transformation of SDs into interface automata. Also, to allow easy use of the Ptolemy tool to verify properties on automata, we have proposed some Acceleo templates that generate the Ptolemy entry specification.
    Keywords: model-driven; SysML; sequence diagram; interface automata; ATL; Acceleo.

  • Optimization of Speed Control for Switched Reluctance Motor using Matrix Converter   Order a copy of this article
    by Sridharan Subbiah, Sudha S 
    Abstract: This paper proposes a new converter-based speed control optimization for the Switched Reluctance Motor. The main objective of the speed control technique is to reduce torque ripples and improve control performance, achieved using a matrix converter based PID controller. Speed control performance is optimized using the Particle Swarm Optimization algorithm. The potential benefits of the matrix converter are a flexible current profile and the lowest switching frequency, hence minimal loss. Simulation and analysis have been carried out using MATLAB/Simulink with a reference speed of 1500 rpm within a time frame of 0.045 seconds.
    Keywords: EMC – Embedded Micro Controller; PID- Proportional Integral and Derivative; SRM- Switched Reluctance Motor; NPSO- New Particle Swarm Optimization.
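The optimisation loop the abstract describes can be illustrated with a generic particle swarm sketch tuning a three-gain vector such as (Kp, Ki, Kd) against a cost function. The cost function, bounds and PSO coefficients below are illustrative assumptions, not the paper's settings:

```python
import random

# Generic PSO sketch for tuning a 3-element gain vector against a cost
# function.  Bounds and coefficients are illustrative assumptions.

def pso(cost, dim=3, n_particles=20, iters=50, lo=0.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In a speed-control setting the cost would be a simulated performance index (e.g. integrated speed error or torque ripple) rather than the toy quadratic used for testing here.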

  • Radius Problem for Pascu Type Functions with Fixed Second Coefficient   Order a copy of this article
    by S.Sunil Varma, Thomas Rosy 
    Abstract: In this paper we derive sharp radii results for functions of the form f(z) = z + Σ_{n=2}^∞ a_n z^n whose Taylor coefficients satisfy |a_2| = 2b, 0 ≤ b ≤ 1, and |a_n| ≤ cn + d (c, d ≥ 0), or |a_n| ≤ c/n (c > 0), or |a_n| ≤ M (M > 0) for n ≥ 3, to be in the Pascu class.
    Keywords: Fixed Second Coefficient; Pascu Class.

  • A New Robust Fuzzy-PID Controller Design Using Gravitational Search Algorithm   Order a copy of this article
    by Nour E.L. Yakine Kouba, Mohamed Menaa, Mourad Hasni, Mohamed Boudour 
    Abstract: This paper proposes the design of a novel robust Load Frequency Control (LFC) strategy based on an optimised fuzzy-PID controller employing the Gravitational Search Algorithm (GSA). The GSA is applied to optimise the input scaling factors of the fuzzy logic and the PID controller gains. To show the potential of the proposed control methodology, a multi-source two-area interconnected power system was investigated in the simulation. The considered test system comprises various power generating units from hydro, thermal and nuclear sources in Area-1, and power generation from hydro, nuclear and diesel sources in Area-2. Initially, the simulation was carried out considering a centralised controller for both areas to cope with load changes, and then extended with a decentralised controller. Further, a sensitivity analysis was performed to demonstrate the ability of the proposed approach in the face of wide changes in system parameters and in the position of load changes. The frequency deviations and the tie-line power flow change are presented, and the superiority of the proposed control strategy is demonstrated by comparing the results with an individual Gravitational Search Algorithm (GSA), Fuzzy Logic Control (FLC) and some techniques reported in the literature such as Ziegler-Nichols, the Genetic Algorithm (GA), the Bacterial Foraging Optimisation Algorithm (BFOA) and Particle Swarm Optimisation (PSO).
    Keywords: Ancillary Frequency Control; Load Frequency Control (LFC); PID Controller; Fuzzy Logic Controller (FLC); Optimal Control; Gravitational Search Algorithm (GSA).

    by Samuel Eweni, Chukwuebuka Eweni 
    Abstract: The most popular light bulb in homes is the incandescent; it is also the least energy efficient. Its filament is so thin that it resists the electric current, turning much of the electrical energy into heat; the incandescent bulb therefore wastes a great deal of energy generating heat rather than light, producing about 15 lumens per watt of input power. Increasing energy efficiency is considered a feasible policy option for many economies and motivates nations to reconsider their energy and climate policies. A recorded MATLAB demonstration showcased LED versatility and its use with an Arduino UNO board. The purpose of this experimental study was to determine to what extent LEDs can reduce energy consumption through the use of an Arduino UNO board and MATLAB, and to discuss the applications of LEDs. LEDs will be the future of home lighting and will eventually completely replace incandescent bulbs once companies complete the necessary improvements. Light-emitting diodes (LEDs) offer an exciting opportunity to improve energy efficiency.
    Keywords: Simulation; Light Emitting Diodes (LEDs); Incandescent bulbs; Communications; control systems; numerical simulation.

  • A Formal Framework based K-Maude for Modeling Scalable Software Architectures   Order a copy of this article
    by Sahar Smaali, Aïcha Choutri, Faïza Belala 
    Abstract: Dynamic software architecture (DySA) is one of the most important challenges and a crucial problem for managing dynamically scalable software evolution. In this paper, we propose DySAL, a DySA-specific modeling language, primarily based on interfaces, dealing with both types of evolution: spatial reconfiguration at the system level and changes to architectural elements (types and behavior) at the architecture level. DySAL combines MDE and K-Maude techniques for modeling different DySA aspects (topology, behaviour, interactions and dynamic evolution). It is defined according to an incremental, multi-level approach and benefits from the strengths of the approaches used: meta-modeling (MDE) and formal semantics (Maude). This has the advantage of managing all types of dynamic evolution and their eventual side effects. Moreover, DySAL facilitates DySA definition with multiple views while covering all phases of the dynamic application development life cycle, including validation and verification. Our approach is illustrated and evaluated through a realistic example of a ubiquitous system.
    Keywords: Scalable software architecture; Dynamic evolution; DySAL; MDE; K-Maude framework; Formal semantics.

  • Removed material volume calculations in CNC milling by exploiting CAD functionality   Order a copy of this article
    by Panorios Benardos, George-Christopher Vosniakos 
    Abstract: Material removal volume calculations are important in a variety of milling simulation applications, including material removal rate estimation and machining force calculation. In this paper two different approaches to this end are presented, i.e. Z-maps and Boolean operations with solid models. The Z-map method is simple but results in large files and needs sophisticated routines to render acceptable accuracy. Boolean operations between accurate solid models of the tool and the workpiece are implemented on the Application Programming Interface of a readily available CAD system. Besides the computational load, which is bound to the accuracy level, this approach requires a sufficient number of interpolated points through one revolution of the tool to be trustworthy. It is practical to use at particular points of interest along the toolpath.
    Keywords: material volume; milling; z-map; solid model; toolpath; Computer-Numerical Control.

  • Design Optimization of Cutting Parameters for a Class of Radially-Compliant Spindles via Virtual Prototyping Tools   Order a copy of this article
    by Giovanni Berselli, Marcello Pellicciari, Gabriele Bigi, Roberto Razzoli 
    Abstract: The widespread adoption of Robotic Deburring (RD) can be effectively enhanced by the availability of methods and integrated tools capable of quickly analyzing the overall process performance in a virtual environment. On the other hand, despite the current availability of several CAM tools, the tuning of the parameters of an RD process is still mainly based on numerous physical tests, which drastically reduce the robotic cell productivity. The reason is that several potential sources of error, which are unavoidable in the physical cell, are simply neglected in state-of-the-art CAM software. For instance, the effectiveness of an RD process is highly influenced by the limited accuracy of the robot motions and by the unpredictable variety of burr sizes and shapes. In most cases, it is strictly necessary to maintain a uniform contact pressure between the tool and the workpiece at all times, regardless of the burr thickness, so that either active force feedback or a passive compliant spindle must be employed. Focusing on the latter solution, the present paper proposes a Virtual Prototype (VP) of a radially-compliant spindle, suitable for quickly assessing deburring efficiency in different scenarios. The proposed VP is created by integrating a 3D multi-body model of the spindle's mechanical structure with a behavioural model of the process forces. Differently from previous literature and from state-of-the-art CAM packages, the proposed VP allows quick estimation of the process forces (accounting for the presence of workpiece burr and tool compliance) and of the optimal deburring parameters, which are readily provided as contour maps of the envisaged deburring error as a function of the cutting parameters. As an industrial case study, a commercial compliant spindle is considered and numerical simulations are provided, concerning the prediction of the surface finishing accuracy for either optimal or sub-optimal parameter tuning.
    Keywords: Virtual Prototyping; Parameter Design; Robotic Deburring; Passively Compliant Spindle.

    by Koussaila Iffouzar, Mohamed Fouad BENKHORIS, Haroune AOUZELLAG, Kaci GHEDAMSI, Djamal AOUZELLAG 
    Abstract: A behavioral analysis of the dual star induction machine (DSIM) fed by voltage inverters is presented in this article. One of the drawbacks of supplying the DSIM via PWM voltage inverters is the occurrence of large-amplitude circulating harmonic currents; these induce losses in the machine due to the chopping frequency of the inverter. Based on an advanced dynamical model and equivalent circuit of the DSIM developed in this paper, the DSIM fed by a PWM inverter is studied and analysed. The impact of increasing the number of levels of the voltage inverter controlled with natural PWM is studied. Minimization of the circulating currents between the two stars with the space vector PWM technique is analysed; using the simplified SVPWM of a three-level NPC inverter eases the simulation and thereby reveals the effect of this technique on torque ripples and on the quality of energy in the machine.
    Keywords: Dual stars induction machine; multilevel voltage source inverter; vector control; dynamical modelling; behavior analysis; current quality; torque ripples.

  • Investigating The Applicability of Generating Test Cases for Web Applications Based on Traditional Graph Coverage   Order a copy of this article
    by Ahmad A. Saifan, Mahmoud Bani Ata, Bilal Abul-Huda 
    Abstract: Web applications provide services to hundreds of millions of people throughout the world. However, developers face a range of problems and challenges in testing them, including the fact that web applications run on diverse and heterogeneous platforms and are written in diverse programming languages. Moreover, they can be dynamic, with their contents and structures determined by user inputs, so they need to be tested to ensure their validity. In this paper we investigate the ability to generate a set of test cases for web applications based on traditional graph coverage criteria. First, we extracted the in-links and out-links from given web applications in order to draw a web graph, before extracting the prime paths from the graph. After that, the invalid transitions were built from the prime paths. Finally, all the invalid transitions were extended with valid transitions. We evaluated our investigation process using web applications of different sizes. Two case studies were used in this paper, the first a small-sized application and the second a medium-sized one. The results show how difficult it is to run the huge number of test cases generated manually using graph coverage criteria, even for a small web application.
    Keywords: Web testing; graph coverage criteria; prime paths; invalid transitions; invalid paths; test case generation.

  • A Fuzzy Rule Based Approach for Test Case Selection Probability Estimation in Regression Testing   Order a copy of this article
    by LEENA SINGH, Shailendra Narayan Singh, Sudhir Dawra, Renu Tuli 
    Abstract: Regression testing is an essential activity during software maintenance. Due to time and cost constraints, it is not possible to re-execute every test case for every change that occurs. Thus, a technique is required that selects and prioritizes test cases efficiently. This paper proposes a novel fuzzy rule based approach for selecting and ordering test cases from an existing test suite, predicting the selection probability of each test case using multiple factors. Test cases with a high fault detection rate, maximum coverage and minimum execution time are selected. The results show the effectiveness of the proposed model in predicting the selection probability of individual test cases.
    Keywords: Prioritization; regression testing; selection probability; fuzzy rule.
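The kind of fuzzy-rule scoring the abstract describes can be sketched with triangular memberships combined by simple IF-THEN rules (min for AND, max to aggregate). The membership breakpoints and the two rules below are illustrative assumptions, not the paper's rule base:

```python
# Minimal fuzzy-rule sketch for test-case selection probability.  Inputs are
# normalised to [0, 1]; memberships and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def selection_probability(fault_rate, coverage, exec_time):
    high_fault = tri(fault_rate, 0.4, 1.0, 1.6)
    high_cov = tri(coverage, 0.4, 1.0, 1.6)
    low_time = tri(1.0 - exec_time, 0.4, 1.0, 1.6)
    # Rule 1: IF fault rate is high AND coverage is high THEN select.
    # Rule 2: IF coverage is high AND execution time is low THEN select.
    r1 = min(high_fault, high_cov)
    r2 = min(high_cov, low_time)
    return max(r1, r2)
```

Ranking the suite by this score and executing test cases in descending order gives a simple prioritisation consistent with the factors named in the abstract.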

  • Base station Placement Optimization Using Genetic Algorithms Approach   Order a copy of this article
    by Ouamri Mohamed Amine, Abdelkrim KHireddine 
    Abstract: The base station (BS) placement, or cell planning, problem involves choosing the position and infrastructure configuration for cellular networks. This is a mathematical optimization problem, which our study solves using genetic algorithms. Parameters such as the site coordinates (x, y), transmitting power, height and tilt are taken as design parameters for BS placement. This paper takes signal coverage, interference and cost as objective functions, and handover, traffic demand and overlap as important constraints. The received field strength at all test points is computed via simulation, and path loss is calculated using the Hata model. Assuming a flat area, the performance of the proposed algorithm was evaluated, with 97% of the users in the network being covered with a good quality signal.
    Keywords: Base Station; Network Planning; Antenna; Propagation model; Genetic Algorithms; WSM (Weighted Sum Method).
    DOI: 10.1504/IJCAET.2020.10006440
  • Enhanced Approach for Test suite Optimization Using Genetic Algorithm   Order a copy of this article
    by Manju Khari 
    Abstract: Software is growing in size and complexity every day, so the research community feels a strong need for techniques that can optimize test cases effectively. Search-based test case optimization has been a key domain of interest for researchers. Test case optimization techniques selectively pick, from the pool of all available test data, only those test cases that satisfy predefined testing criteria. The current study is inspired by the path-finding behaviour of ants and by genetic evolution for the purpose of finding a good optimal solution. The proposed GACO algorithm combines the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) to find suitable solutions to optimization problems. The performance of the proposed algorithm is verified on the basis of various parameters, namely running time, complexity, efficiency of test cases and branch coverage. The results suggest that the proposed algorithm is significantly better on average than ACO and GA in reducing the number of test cases needed to accomplish the optimization target. These encouraging results motivate future work.
    Keywords: Bio Inspired Computation; Genetic; Ant Colony optimization; Fitness function.

  • Cost Minimization Technique in Geo-Distributed Data Centers   Order a copy of this article
    by Ayesheh Ahrari Khalaf 
    Abstract: The significant growth of Big Data presents a great opportunity for data analysis. Data centers are continuously becoming more popular; at the same time, data center costs are increasing as the amount of data grows. As Big Data increases significantly, data centers face new challenges, hence the idea of the geo-distributed data center. This project investigates the main challenges that data centers face and presents an enhanced technique for cost optimization in geographically distributed data centers. The parameters involved, such as task assignment, task placement, big data processing and quality of service, are analyzed. Analytical evaluation results show that the proposed joint-parameters technique outperforms separate-parameter techniques, in some cases with an enhancement of 20 percent. The academic Gurobi solver is used for the evaluation.
    Keywords: Cloud computing; data flow; data placement; geo-distributed data centers; cost minimization; task assignment.
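The joint task-assignment idea can be illustrated with a toy model: each task is placed on the data center minimising combined compute and data-transfer cost, subject to per-DC capacity. The cost figures and capacities below are hypothetical, and a brute-force search stands in for the Gurobi formulation used in the paper:

```python
from itertools import product

# Toy joint task placement: minimise compute + transfer cost over all
# feasible assignments, respecting a per-data-center capacity.  All numbers
# are hypothetical; the paper formulates this as an optimisation for Gurobi.

def assign_tasks(tasks, dcs, compute_cost, transfer_cost, capacity):
    """Brute-force search over all assignments (fine for toy sizes)."""
    best, best_cost = None, float('inf')
    for assignment in product(dcs, repeat=len(tasks)):
        load = {d: 0 for d in dcs}
        cost = 0.0
        for task, dc in zip(tasks, assignment):
            load[dc] += 1
            cost += compute_cost[dc] + transfer_cost[(task, dc)]
        if all(load[d] <= capacity[d] for d in dcs) and cost < best_cost:
            best, best_cost = dict(zip(tasks, assignment)), cost
    return best, best_cost
```

Optimising assignment and placement jointly, as here, is what the abstract contrasts with tuning each parameter separately.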

  • Hall effects on MHD flow of a Visco-elastic fluid through a porous medium over an infinite oscillating plate with Heat source and Chemical reaction   Order a copy of this article
    by Mangali Veera Krishna 
    Abstract: In this paper, we consider the unsteady flow of an incompressible visco-elastic liquid of the Walters B model with simultaneous heat and mass transfer near an oscillating porous plate in the slip flow regime, taking the Hall current into account. The governing equations of the flow field are solved by a regular perturbation method for a small elastic parameter. Expressions for the velocity, temperature and concentration have been derived analytically, and their behaviour is discussed computationally with reference to different flow parameters with the help of graphs. The skin friction on the boundary, the heat flux in terms of the Nusselt number, and the rate of mass transfer in terms of the Sherwood number are also obtained and their behaviour discussed.
    Keywords: Heat and mass transfer; Hall effects; MHD flows; porous medium; unsteady flows and visco-elastic fluids.

  • Optimized Adaptive Speech Coder for Software Defined Radio   Order a copy of this article
    by Sheetal Gunjal, Rajeshree Raut 
    Abstract: In this paper, the use of the Discrete Wavelet Transform (DWT) together with the Discrete Cosine Transform (DCT) is proposed to exploit speech coding parameters such as bit rate, compression ratio, delay and quality, so as to fit the proposed coder into the family of existing speech coders. The proposed coding technique is applied to different speech signals with a fixed frame size and the desired bit rates. The simulation results show that the proposed coding technique outperforms existing coders in compression ratio with comparable processing delay. The Mean Opinion Score (MOS) assessment shows its effective working at different bit rates (13 kbps to 256 kbps). The coder was also tested successfully on an ARM9-based Software Defined Radio (SDR) platform at different frequency bands with the desired bit rates. Hence, the coder may be considered a "one size fits all" coder for efficient utilization of the available frequency spectrum in mobile communication.
    Keywords: DCT; DWT; Software Defined Radio.

  • Intelligent Mobile Robot Navigation Using a Neuro-Fuzzy Approach   Order a copy of this article
    by Somia Brahimi, Ouahiba Azouaoui, Malik Loudini 
    Abstract: This paper introduces an intelligent navigation system allowing a car-like robot to reach its destination autonomously, intelligently and safely. Based on a Neuro-Fuzzy (FNN) approach, the applied technique permits the robot to avoid all encountered obstacles and seek its target's location in a local manner, referring to the concepts of learning and adaptation. It uses two Fuzzy Artmap neural networks, a reinforcement trial-and-error neural network and a Mamdani fuzzy logic controller (FLC). Experimental results in the Generator of Modules (GenoM) robotics architecture, in an unknown environment, show the FNN's effectiveness for the autonomous mobile robot Robucar.
    Keywords: Mobile robots; autonomous systems; intelligent navigation; fuzzy logic; neural networks; obstacle avoidance; targets seeking; Fuzzy Artmap; Mamdani model.

  • Effect of algorithm parameters in development of spiral tool path for machining of 2.5D star-shaped pockets   Order a copy of this article
    by Divyangkumar Patel, Devdas Lalwani 
    Abstract: 2.5D pocket milling, used for manufacturing many mechanical parts, is one of the main milling operations and is extensively used in the aerospace, shipyard, automobile, die and mold industries. In the machining of 2.5D pockets, direction-parallel and contour-parallel tool paths are widely used. However, these tool paths significantly limit machining efficiency in terms of machining time, surface finish and tool wear because of repeated alteration of the machining direction, stop-and-go motion, sharp velocity discontinuities, and frequent repositioning, retraction, acceleration and deceleration of the tool. In the present work, to overcome the above-mentioned problems, an attempt has been made to generate a spiral tool path for machining a 2.5D star-shaped pocket. The successful generation of a spiral tool path depends on various algorithm parameters such as mesh size, permissible error and number of degree-steps. The effect of these parameters on spiral tool path generation is discussed and the best values are reported. The spiral tool path is developed using a second-order elliptic partial differential equation (PDE) and is free from sharp corners inside the pocket region. The developed algorithm is formulated and presented in steps using MATLAB.
    Keywords: Pocket Machining; Spiral Tool Path; High Speed Machining (HSM); Partial Differential Equation (PDE); Star-shaped Pocket.

  • Automatic Generation of Agent-based Models of Migratory Waterfowl for Epidemiological Analyses   Order a copy of this article
    by Dhananjai Rao, Alexander Chernyakhovsky 
    Abstract: The seasonal migration of waterfowl, in which avian influenza viruses are enzootic, plays a strong role in the ecology of the disease and has been implicated in several zoonotic epidemics and pandemics. Recent investigations have established that with just one mutation, current avian influenza viral strains gain the ability to be readily transmitted between humans. These investigations further motivate the need for detailed analysis, in addition to satellite surveillance, of migratory patterns and their influence on the ecology of the disease, to aid the design and assessment of prophylaxis and containment strategies for emergent epidemics. Accordingly, this paper proposes a novel methodology for generating a global agent-based stochastic epidemiological model involving detailed migratory patterns of waterfowl. The methodology transforms Geographic Information Systems (GIS) data containing the global distribution of various species of waterfowl to generate metapopulations for agents that model collocated flocks of birds. Generic migratory flyways are suitably adapted to model migratory flyways for each waterfowl metapopulation, and the migratory characteristics of the various species are used to determine temporal attributes for the flyways. The resulting data is generated in an XML format compatible with our simulation-based epidemiological analysis environment called SEARUMS. Case studies conducted using SEARUMS and the generated models for high-risk waterfowl species indicate good correlation between simulated and observed viral dispersion patterns, demonstrating the effectiveness of the proposed methodology.
    Keywords: Migratory Flyways; Tessellation; Agent-based Modeling; Simulation; Computational Epidemiology; Avian Influenza (H5N1).

  • Model-Driven Development of Self-Adaptive Multi-Agent Systems with Context-Awareness   Order a copy of this article
    by Farid Feyzi 
    Abstract: In recent years, there has been an increasing interest in distributed and complex software systems which are capable of operating in open, dynamic and heterogeneous environments, and are required to adapt themselves to cope with environmental or contextual changes. In order to achieve or preserve their specific design objectives, such systems need to operate in an adaptive manner. Self-adaptive systems have the capability to dynamically modify their behavior at run-time in response to different kinds of changes. This paper presents a methodology to develop context-aware self-adaptive software systems by attempting to exploit the advantages of model-driven architecture (MDA) and agent-oriented technology. The approach aims to combine these two promising research areas in order to overcome the complexity associated with the development of these systems and improve the quality and efficiency of the development process. The methodology focuses on the key issues in the analysis and design of self-adaptive multi-agent systems. Different abstraction levels based on MDA have been proposed and mappings between models at these levels are provided. These mappings bridge the gap between the high-level models produced as computation independent models (CIM) and platform independent models (PIM) and the low-level models based on a specific implementation platform called SADE (Self-adaptation Development Environment). The proposed approach has been evaluated through a case study described in the paper.
    Keywords: Self-Adaptive System; Multi-Agent Systems; Self-* properties; Model-Driven Development.

  • Execution of UML based oil palm fruit harvester algorithm: novel approach   Order a copy of this article
    by Gaurang Patkar 
    Abstract: Farmers in rural India have minimal access to agricultural experts who can analyse crop images and render advice. Delayed expert responses to queries often reach farmers too late. This work addresses the above problem with the objective of developing a new algorithm to grade the Elaeis guineensis species of palm fruit to help farmers and researchers. The designed framework can solve the problems of human grading based on two attributes and predict the percentage of free fatty acid and oil content. Advice can then be rendered from best practices on this basis. After group consultation with farmers and initial investigation, it was found that alongside colour, the number of detached fruitlets also plays a significant role in grading. In the newly designed algorithm both factors are taken into account for decision making. Since manual grading is prone to error, the quality of oil expelled from the pulp is low. Hence, there is a need to design an algorithm which serves as a framework for farmers and researchers. This framework can be used with any colour model in any environmental conditions. The automation of the manual grading process is accomplished with the proposed palm fruit harvester algorithm, modelled using Unified Modeling Language (UML) diagrams.
    Keywords: fruitlets; elaeis guineensis; modeling; oil palm fruit; unified modeling language; free fatty acid.

  • A BIM-based framework for construction project scheduling risk management   Order a copy of this article
    by F.H. Abanda 
    Abstract: The management of risks has been at the heart of most construction projects. Building Information Modelling (BIM) provides opportunities to manage risks in construction projects. However, studies about the use of BIM in risk management are sketchy and lack a systematic approach to using BIM for managing risk in construction projects. Based on existing risk models, this study investigated and developed a BIM-based framework for the management of construction project scheduling risk. Although the frameworks were developed by mining risk management processes from Synchro and Vico, both being amongst the leading 4D/5D BIM software systems, they can inform risk management in BIM projects that are supported by 4D/5D BIM software systems that contain risk management modules. The frameworks were validated for their syntactic and semantic correctness.
    Keywords: BIM; construction projects; risk; Synchro; Vico; 4D/5D BIM.

  • On The Order Reduction of MIMO Large Scale Systems Using Block-Roots of Matrix Polynomials   Order a copy of this article
    by Belkacem Bekhiti, Abdelhakim Dahimene, Bachir Nail, Kamel Hariche 
    Abstract: The present paper deals with the problem of approximating linear time-invariant MIMO large scale systems with a reduced order system via the so-called block-moment matching method, based on the dominance existing between solvents of the system characteristic matrix polynomial, where the block-roots are reconstructed using a newly proposed procedure. The validation and study of the accuracy of the approximation is done by a specified performance index called the pulse energy criterion. The necessary condition for the correctness and applicability of the proposed method is block-controllability or block-observability. Finally, to demonstrate the efficiency of the proposed method, a numerical example is illustrated.
    Keywords: Solvents; Block-roots; Matrix polynomial; Moment matching; MIMO systems.

  • A combining technique based on channel shortening equalization for ultra wideband cooperative Systems   Order a copy of this article
    by Asma Ouardas, Sidahmed Elahmar 
    Abstract: This paper presents a novel combining technique based on the channel shortening approach for cooperative diversity in the context of time hopping ultra wideband (TH-UWB) systems. Since the UWB channel has a very long impulse response compared to the very narrow pulse used by the system, TH-UWB performance is affected by inter-symbol interference (ISI). Therefore, the use of Rake diversity combining is very effective, but it increases the receiver complexity due to its large number of correlations. The idea is to introduce a channel shortening equalizer (CSE) [also named time domain equalizer (TEQ)] before the Rake reception in the first and second time slots at the relay and destination, respectively. This proposed combination structure gives great results in both decreasing the complexity of the receiver architecture, by significantly reducing the number of effective channel taps, and mitigating ISI. Decode and Forward (DF) is used as the relay protocol to retransmit signals from the source to the destination; the relay is assumed to be equipped with multiple antennas, and an antenna selection criterion is used to exploit the diversity with reduced complexity. In the considered relay network, UWB links between the nodes are modeled according to IEEE 802.15.4a standards. The performance of the proposed structure is compared to cases where the relay is equipped with a single antenna and with multiple antennas (full diversity). Numerical results show that a significant improvement in the BER of the UWB system (lower than 10^(-5)) is obtained by combining the cooperative diversity technique and the channel shortening technique, both improving the system performance and reducing the system complexity by using the antenna selection strategy, which achieves the full diversity gain.
    Keywords: Time Hopping Ultra Wideband; TH-UWB; Channel Shortening Equalizer; RAKE receiver; Cooperative diversity; Antenna selection; Decode and Forward.

  • A Rewriting Logic Based Semantics and Analysis of UML Activity Diagrams: A Graph Transformation Approach   Order a copy of this article
    by Elhillali Kerkouche, Khaled Khalfaoui, Allaoua Chaoui 
    Abstract: Activity diagrams are UML behaviour diagrams which describe the global dynamic behaviour of systems in a user-friendly manner. Nevertheless, UML notations lack firm semantics, which makes them unsuitable for formal analysis. Formal methods are suitable techniques for systems analysis. Rewriting Logic and its language Maude provide a powerful formal method with flexible and expressive semantics for the specification and analysis of system behaviour. However, the learning cost of these methods is very high. The aim of this paper is to integrate UML with a formal notation in order to make the UML semantics more precise, which allows rigorous analysis of its models. In this paper, we propose a graph transformation based approach to generate Maude specifications automatically from UML Activity diagrams. The proposed approach is automated using the AToM3 tool, and it is illustrated through an example.
    Keywords: UML Activity Diagrams; Rewriting Logic; Maude language; Meta-Modelling; Graph Grammars; Graph Transformation; AToM3.

  • Shape definition and parameters validation through sheet metal feature for CNC dental wire bending   Order a copy of this article
    by Rahimah Abdul Hamid, Teruaki Ito 
    Abstract: The present study is conducted to validate the calculated Computer-aided Manufacturing (CAM) data, or the bending code (B-code), according to the theory of the 3D linear segmentation algorithm. The theory uses Cartesian coordinates of the segmented 3D lines and produces the desired bending parameters in terms of feeding length (L), plane rotation angle (β) and bend angle (θ). The parameters are intended to control and drive the Computer Numerical Control (CNC) dental wire bending machine. Until recently, wire bending in dentistry was performed manually in both orthodontic and prosthodontic applications. The study proposes the idea of automating the dental wire bending operation by means of a CNC desktop wire bender. For this reason, a theory of 3D linear segmentation is introduced and the present work discusses the validation process of this approach. This paper aims to give an early theoretical result of the wire bending operation and does not consider material properties in the calculation. A reverse engineering approach is adopted, where a pre-fabricated dental target shape is physically measured and re-designed. A sheet metal feature is used to virtually simulate the wire bending operation based on the theory and to show that the procedure of translating the design into CAM data works well. As a result, the generated sheet metal bending parameters are analysed and compared with the calculated parameters. To conclude, the B-code for the wire bending mechanism has been validated in the present work.
    Keywords: Concurrent engineering; 3D linear segmentation; parameters validation; dental wire; bending code; reverse engineering; CAD/CAM.

    by T.R. Ganesh Babu, S. Nirmala, K. Vidhya 
    Abstract: To image trabeculectomy blebs using anterior segment optical coherence tomography (AS-OCT) and to measure bleb morphological features such as bleb height, area and extent. In this paper, fuzzy local information C-means clustering is used to segment the bleb boundary. A batch of 25 AS-OCT images is used to assess the performance of the determined parameters against the clinical parameters, and 91.43% accuracy is obtained for the determined parameters. The mean values of bleb height, area and extent are 0.2 mm, 1.618 mm2 and 0.343 mm, respectively. The results show the potential applicability of the method for automated and objective mass screening for detection of the bleb boundary.
    Keywords: Blebs; Fuzzy local information C-means clustering; Trabeculectomy; Anterior chamber optical coherence tomography; median filter.

  • Towards a new supporting platform for collaboration in industrial diagnosis within an agent-based WEB DSS   Order a copy of this article
    Abstract: The main objective of this study is to provide a professional social network which is integrated within an agent-based WEB DSS. This DSS network will facilitate interaction, discussion, and the sharing of information among production operators, especially to deal with nominal situations of resource faults. The coordination among the agents in our approach is made possible by a dynamic agent coordination protocol. The general architecture is based on agents named: Production Agent (PA), Ontology Agent (OA), Evaluator Agent (EA), and Coordinator Agent (COA). The coordination protocol that is applied comprises a set of behaviors used to maintain and control the message exchanges among the agents. Learning features enable these agents to save time in calculations and executions. As a representation of the agents' common knowledge, a domain ontology has been developed to represent major generic concepts in the industrial domain. The Analytic Hierarchy Process (AHP) methodology is applied to evaluate sorting solutions with respect to the human participants' preferences. In this study, we chose the ALFATRON electronics industry as the field of application.
    Keywords: Analytic Hierarchy Process (AHP); Collaborative Interface; Coordinator Agent (COA); Evaluator Agent (EA); Decision Support System (DSS); Production Agents (PA); WEB DSS.
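The AHP evaluation step mentioned above derives priority weights from pairwise comparison matrices. As a minimal illustrative sketch (the standard geometric-mean approximation of the principal eigenvector is assumed here; the paper's actual criteria and comparison values are not shown):

```python
import math

def ahp_weights(matrix):
    """Priority weights from an AHP pairwise comparison matrix using the
    geometric-mean approximation of the principal eigenvector."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Perfectly consistent example: option 1 is twice as preferred as option 2
# and four times as preferred as option 3.
comparisons = [[1.0, 2.0, 4.0],
               [0.5, 1.0, 2.0],
               [0.25, 0.5, 1.0]]
weights = ahp_weights(comparisons)  # -> [4/7, 2/7, 1/7]
```

For a consistent matrix the geometric-mean method recovers the exact eigenvector; for mildly inconsistent judgements it remains a widely used approximation.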

  • A Probabilistic Analysis of Transactions Success Ratio in Real-Time Databases   Order a copy of this article
    by Mourad Kaddes, Majed Abdouli, Laurent Amanton, Alexandre Berred, Bruno Sadeg, Rafik Bouaziz 
    Abstract: Nowadays, due to rapidly changing technologies, applications handling more data and providing real-time services are becoming more frequent. Real-time database systems are the most appropriate systems to manage these applications. In this paper, we study statistically the behavior of real-time transactions under the Generalized Earliest Deadline First (GEDF) scheduling policy. GEDF is a new scheduling policy in which a priority is assigned to a transaction according to both its deadline and a parameter which expresses the importance of the transaction in the system. In this paper, we focus our study on the influence of transaction composition. More precisely, we study the influence of transaction distribution on system performance and on the approximation of the transactions' success ratio by a probability distribution. To this end, we have developed our own RTDBS simulator and have conducted intensive Monte Carlo simulations.
    Keywords: Real-time database systems; Transactions; Scheduling; GEDF; Stochastics; Monte Carlo simulation.
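The GEDF idea described in the abstract (priority derived from both the deadline and a transaction-importance parameter) can be illustrated with a toy dispatcher; the linear weighting and the tuple layout below are assumptions for illustration, not the authors' formula:

```python
import heapq

def gedf_priority(deadline, importance, alpha=0.5):
    """Illustrative GEDF-style priority: smaller value = higher priority.
    Combines the deadline (as in plain EDF) with an importance term; the
    linear weighting by alpha is an assumption, not the paper's formula."""
    return alpha * deadline - (1.0 - alpha) * importance

def schedule(transactions, alpha=0.5):
    """Return transaction names in dispatch order.
    Each transaction is a tuple (name, deadline, importance)."""
    heap = [(gedf_priority(d, imp, alpha), name) for name, d, imp in transactions]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

txns = [("t1", 10, 1), ("t2", 8, 1), ("t3", 10, 9)]
order = schedule(txns)  # -> ['t3', 't2', 't1']
```

With alpha = 1.0 the rule degenerates to pure EDF and t2 (earliest deadline) runs first; with the mixed weighting, the very important t3 is promoted ahead of it.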

    by Arun Kumar M, Agilan P, Ramamoorthy S, MaheshKumar N 
    Abstract: In this paper, the authors investigate the general solution and generalized Ulam-Hyers stability of a n-dimensional additive functional equation with n > 2 in Banach spaces by applying direct and fixed point methods.
    Keywords: additive functional equation; fixed point; generalized Ulam-Hyers stability.

  • Optimistic and Pessimistic Solutions of the Fuzzy Shortest Path Problem by the Physarum Polycephalum Approach   Order a copy of this article
    by Renu Tuli, Vini Dadiala 
    Abstract: The remarkable behavior of Physarum polycephalum has been used to solve the fuzzy shortest path problem. A novel algorithm has been developed for varying degrees of optimism ranging from purely pessimistic to purely optimistic. Providing the decision maker (DM) with a range of solutions gives him/her more flexibility in choosing the solution according to his/her degree of optimism. The triangular and trapezoidal fuzzy numbers representing cost or duration of travel are converted to crisp numbers by finding their total integral values, and thereafter optimal solutions for varying degrees of optimism are obtained. The process is explained by four numerical examples including a tourist network problem, and the results obtained are compared with existing work. It has been observed that, in comparison to the existing work, this method is not only easier to understand and implement but also gives better non-dominated optimal solutions.
    Keywords: Physarum polycephalum; triangular fuzzy numbers; trapezoidal fuzzy numbers; optimistic and pessimistic approaches.
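The crisp-ranking step described above (total integral value parameterised by a degree of optimism) can be sketched as follows; the Liou-Wang total integral formula is assumed as the concrete defuzzification, and the example numbers are illustrative:

```python
def total_integral_triangular(a, b, c, alpha):
    """Total integral value of a triangular fuzzy number (a, b, c) for a
    degree of optimism alpha in [0, 1]; alpha = 0 and alpha = 1 give the two
    extreme attitudes, alpha = 0.5 a moderate one (Liou-Wang style ranking)."""
    left = (a + b) / 2.0    # left integral value
    right = (b + c) / 2.0   # right integral value
    return alpha * right + (1.0 - alpha) * left

def total_integral_trapezoidal(a, b, c, d, alpha):
    """Same ranking for a trapezoidal fuzzy number (a, b, c, d)."""
    return alpha * (c + d) / 2.0 + (1.0 - alpha) * (a + b) / 2.0

# Fuzzy travel cost "about 4", modelled as (2, 4, 6): the crisp value slides
# from 3.0 (purely pessimistic) through 4.0 (moderate) to 5.0 (optimistic).
crisp = [total_integral_triangular(2, 4, 6, a) for a in (0.0, 0.5, 1.0)]
```

Once every arc weight is crisp for a chosen alpha, any shortest-path solver (here, the Physarum-inspired one) can be run per alpha to produce the solution range offered to the decision maker.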

  • Employment Effects and Efficiency of Ports   Order a copy of this article
    by Torsten Marner, Matthias Klumpp 
    Abstract: Expected increasing transport volumes in Germany and Europe, combined with increasing sustainability requirements, lead to a prospective major role of sea and inland ports in future transport systems. But especially for inland ports, these increased expectations more and more lead to conflicts regarding port property denomination, as city development heavily pursues non-transport and non-industry dedications, e.g. with high-scale living quarters, recreation and office space concepts like e.g. in D
    Keywords: Employment effects; inland ports; cost-benefit analysis; bottlenecks; freight transport performance; data envelopment analysis.

  • Evolutionary Neural Network Classifiers for Software Effort Estimation   Order a copy of this article
    by Noor Alhamad, Fawaz Alzaghoul, Esra Alzaghoul, Mohammed Akour 
    Abstract: The estimation of software development effort has become a crucial activity in software project management. Due to this importance, many researchers have focused their efforts on proposing models for constructing the relationship between effort and software size and requirements. However, there are still gaps and problems in the software effort estimation process, due to the lack of data available in the initial stage of the project life cycle. The need for an enhanced and accurate method for software effort estimation is an urgent issue that challenges software project-management researchers around the world. This work proposes a model based on an Artificial Neural Network (ANN) and the Dragonfly Algorithm (DA), in order to provide a more accurate model for software effort estimation. The applicability of the model was evaluated using several experiments and the results were in favour of the enhancement, with more accurate effort estimation.
    Keywords: COCOMO 81; Artificial Neural Network; Dragonfly Algorithm; Effort estimation.

Special Issue on: ICEE2015 Computational Techniques, Simulation and Optimisation Applied to Engineering Problems

  • Artificial Neural Networks (ANN) and Genetic Algorithm Modeling and Identification of Arc Parameters in Insulator Flashover Voltage and Leakage Current   Order a copy of this article
    by Khaled Belhouchet, Abdelhafid Bayadi, M.Elhadi Bendib 
    Abstract: In this paper we present an optimization method, based on genetic algorithms and Artificial Neural Networks (ANN) with experimental data from artificially polluted insulators, for the determination of the arc constants and the dielectric properties of the surface. The flashover phenomenon in polluted insulators has not yet been described accurately through a mathematical model. The arc constants are very difficult to define for the arc that is created in the dry bands when the voltage exceeds its critical value. In this work a generalized pollution flashover model is used. The obtained results show that the mathematical model with optimized arc constants simulates the experimental data accurately and corroborates the inverse relationship between flashover voltage and pre-flashover leakage current. For this purpose, an ANN was constructed in MATLAB and trained with several MATLAB training functions, while tests regarding the number of neurons, the number of epochs and the value of the learning rate took place, in order to find which network architecture and which values of the other parameters give the best result. To validate our method, experimental tests on different insulators show very good agreement between the measured values and the computed ones.
    Keywords: Insulators; Flashover; Critical Voltage; Genetic Algorithm; ANN; Leakage Current.
    DOI: 10.1504/IJCAET.2019.10007320
  • Multiple description image coding using contourlet transform   Order a copy of this article
    by Amina Naimi, Kamel Belloulata 
    Abstract: In our work we present a multiple description coding (MDC) scheme using the contourlet transform (CT) for grayscale images. The principle is to prove the performance of our MDC algorithm using the CT in contrast to the wavelet transform, which is widely used for image coding. The main goal of our approach is to encode the directional subbands resulting from the directional stage of the decomposition to form four descriptions, compressed and transmitted over different packet loss channels. Most multiple description scenarios are implemented in the transform domain. A controlled amount of correlation is appended to the formed descriptions using a pairwise correlating transform (PCT). The experimental results for the contourlet transform (CT) compared to the discrete wavelet transform (DWT) show that our method achieves better performance in terms of delivered peak signal to noise ratio (PSNR) and visual quality of the reconstructed image.
    Keywords: Image Coding; MDC; DWT; CT; PCT; PSNR.
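The pairwise correlating transform (PCT) mentioned above couples coefficient pairs so that either description alone still carries information about both. A generic sketch with a fixed orthonormal 2x2 transform (the paper's PCT tunes the *amount* of added correlation, which this simplified version does not):

```python
import math

def pct_pair(a, b):
    """Encode one coefficient pair into two correlated descriptions using
    an orthonormal 2x2 transform (generic MDC sketch, not the paper's exact PCT)."""
    c = (a + b) / math.sqrt(2.0)
    d = (a - b) / math.sqrt(2.0)
    return c, d

def reconstruct_both(c, d):
    """Central decoder: both descriptions received -> exact inverse."""
    a = (c + d) / math.sqrt(2.0)
    b = (c - d) / math.sqrt(2.0)
    return a, b

def reconstruct_from_c(c):
    """Side decoder: only description c arrives; estimate by assuming the
    missing d is at its mean (0), so a ~= b ~= c / sqrt(2)."""
    a = c / math.sqrt(2.0)
    return a, a

c, d = pct_pair(5.0, 3.0)
exact = reconstruct_both(c, d)      # -> (5.0, 3.0)
estimate = reconstruct_from_c(c)    # -> (4.0, 4.0): graceful degradation
```

The side-decoder estimate (4.0, 4.0) illustrates why added correlation helps under packet loss: a single surviving description still yields a usable approximation of both coefficients instead of losing one entirely.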

  • Robust Load Angle Direct Torque Control with SVM for Sensorless Induction Motor Using Sliding Mode Controller and Observer   Order a copy of this article
    by Abdelkarim AMMAR, Amor BOUREK, Abdelhamid BENAKCHA 
    Abstract: The Direct Torque Control (DTC) technique for AC drives has been designed to obtain high performance for flux and torque control. Because of some downsides, space vector modulation (SVM) has been proposed for the improvement of the classical DTC strategy. Besides, sliding mode control has proved that it can solve robustness and disturbance problems. Furthermore, it plays a major part in sensorless applications. In this paper, a robust modified DTC scheme is presented for an induction motor drive based on various control strategies. Firstly, a ripple reduction strategy based on load torque angle variation and space vector modulation (SVM) is presented. This technique is known as load angle SVM-DTC. Secondly, a sliding mode speed controller is used instead of the conventional PI for the speed control loop. Moreover, this paper aims to design dual sliding mode observers for speed/flux and load torque estimation. They can improve the control performance by decreasing the cost and increasing the reliability of the global control system. The proposed sensorless control method is investigated by simulation and experimentally using Matlab/Simulink in a real-time environment using dSpace 1104.
    Keywords: Induction Motor; Direct torque control (DTC); Load Angle; Space vector modulation (SVM); Sliding Mode Control; Sliding Mode Observer; dS1104.

  • Channel shortening equalizer through energy concentration for TH-UWB systems   Order a copy of this article
    by BENOTMANE BENAMARA Noureddine, ELAHMAR Sidahmed 
    Abstract: The channel shortening equalizer (CSE) plays an important role in ultra wide band (UWB) systems. This paper presents a method for updating the coefficients of the CSE based on the adaptation of the energy optimization method using the singular value decomposition (SVD) algorithm in a UWB channel with time hopping (TH) and pulse position modulation (PPM). This is to suppress inter-symbol interference (ISI) and to simultaneously simplify the Rake receiver architecture by significantly reducing the number of channel taps. This CSE can concentrate most of the channel energy within just the first taps and suppress most of the energy outside the desired window. The presence of the proposed CSE before the Rake receiver enables the Rake receiver implementation with a smaller number of correlators. Computer simulation results are provided to compare the performance of the proposed method with a MSSNR CSE, the lower bound [also known as All-Rake], Partial-Rake, and Selective-Rake in terms of channel shortening and bit error rate (BER).
    Keywords: Time-hopping Ultra-Wide-Band; Channel Shortening Equalizers; MSSNR; SVD; Rake receiver.

  • Optimization of Brake Pedal Linkage: A Comparative Analysis towards Material Saving Using CAE Tools   Order a copy of this article
    by Amandeep Garg, Krishan Kumar 
    Abstract: This work is concerned with the design optimization of a brake pedal linkage. An existing model of the brake pedal linkage was analyzed through the finite element analysis module of ANSYS. The target outcomes were observed under defined loading and boundary conditions to provide a realistic analysis environment. The maximum deflection, von Mises stress, etc. were determined for the maximum load through FEA. Attention was concentrated on the maximum stress-prone areas of the assembly as well as the parts under high stress. To optimize the existing model, the stress-free volume of material in the linkage assembly was targeted. The CAD model of the new design of the linkage is modeled using CATIA features. Keeping in view the design concepts of mechanical components, this stress-free volume of material has been removed to make the linkage lightweight without compromising its strength. The optimized model has been analyzed through ANSYS again to observe its outcomes. The prime objective of this work is material saving and the related minimization of material cost.
    Keywords: CAE; Brake pedal; FEA; Optimization.

  • Reliability and safety analysis using Fault Tree and Bayesian Networks   Order a copy of this article
    by Hamza Zerrouki, Hacene Smadi 
    Abstract: Fault tree analysis (FTA) is one of the most prominent techniques used in risk analysis; this method aims to identify how component failures lead to system failure using logical gates (i.e. AND, OR gates). However, some limitations appear in FTA due to its static structure. Bayesian networks (BNs) have become a popular technique in reliability analysis; a BN represents a set of random variables and their conditional dependencies. This paper discusses the advantages of Bayesian networks over fault trees in reliability and safety analysis. It also shows the ability of BNs to update probabilities and to represent multi-state variables, dependent failures and common cause failures. An example taken from the literature is used to illustrate the application and compare the results of both the Fault Tree and Bayesian Network techniques.
    Keywords: Fault Tree; Bayesian Network; Safety analysis; Reliability assessment.
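The FT-to-BN mapping discussed above encodes each logic gate as a deterministic node, after which the BN supports the diagnostic (probability-updating) queries that a static fault tree cannot answer directly. A minimal sketch by exhaustive enumeration (the fault tree TOP = (A AND B) OR C and the probabilities are illustrative, not the paper's example):

```python
from itertools import product

# Illustrative basic-event failure probabilities (not from the paper)
p_fail = {"A": 0.01, "B": 0.02, "C": 0.05}
events = list(p_fail)

def top(s):
    """Gate logic TOP = (A AND B) OR C, i.e. the deterministic CPTs of the
    BN nodes obtained from the fault tree's AND/OR gates."""
    return (s["A"] and s["B"]) or s["C"]

def joint(s):
    """Joint probability of one full assignment (independent basic events)."""
    pr = 1.0
    for e in events:
        pr *= p_fail[e] if s[e] else 1.0 - p_fail[e]
    return pr

def prob(condition):
    """Sum the joint probability over every state satisfying the condition."""
    return sum(joint(dict(zip(events, s)))
               for s in product([False, True], repeat=len(events))
               if condition(dict(zip(events, s))))

p_top = prob(top)                                          # forward: P(top event)
p_c_given_top = prob(lambda s: top(s) and s["C"]) / p_top  # diagnosis via Bayes
```

The forward pass reproduces what FTA computes; the conditional query P(C failed | top event occurred) is the kind of backward diagnostic inference that motivates the BN representation.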

Special Issue on: APIC-IST 2015 Intelligent Information Services and Systems

  • Computer-Vision based Bare-Hand Augmented Reality Interface for Controlling an AR Object   Order a copy of this article
    by Hyungwoo Lee, Junchul Chun, Deepanjal Shrestha 
    Abstract: In this paper we design and implement a vision-based bare-hand interface which manipulates a virtual object in Augmented Reality environments in a natural fashion. Over the years, much research on vision-based human-computer interaction has been conducted with augmented reality technology. Various vision-based interfaces have been developed by utilizing the movement of the eyes, hands and body gestures for interaction between humans and objects, to solve problems such as the portability and efficiency of extensively used interfaces like the mouse and keyboard. Many previous studies of augmented reality technology actively seek to make it possible to interact with the user interface through the information in the image seen by the human eyes when people wear glasses. In this work, we developed a vision-based AR interface which can interact with and control augmented virtual objects, such as architectural or engineering 3D models, by recognizing simple hand gestures of both bare human hands in Augmented Reality.
    Keywords: Augmented Reality; fingertip detection; AR interface; skin colour model; hand interface.
    DOI: 10.1504/IJCAET.2018.10006394
  • sFlow Monitoring System in a Disaster-Resilient Global SDN Testbed based on KOREN/APII/TEIN Network   Order a copy of this article
    by Afaq Muhammad, Song Wang-Cheol, Seok Seung-Joon, Kang M.G. 
    Abstract: This paper provides insights into the traffic flow monitoring system of a disaster-resilient SDN testbed which is based on the KOREN (Korea advanced Research Network)/APII (Asia Pacific Information Infrastructure)/TEIN (Trans-Eurasia Information Network) network. This testbed consists of the research and education networks of several countries including South Korea, Japan, and the United States of America. Since it is a large multi-tenant testbed deployed over high-speed research networks across several countries, it is not possible for traditional traffic monitoring solutions, such as NetFlow and Network Packet Brokers, to provide network-wide visibility. Therefore, we have implemented a new sFlow-based monitoring system that is not only tailored to the requirements of this testbed, but can also provide real-time network-wide visibility. It is carried out by deploying open-source Host sFlow agents with a Graphite collector, which offers a complete, highly scalable monitoring solution. It periodically fetches sFlow metrics and other statistics from the sFlow agents, and stores them in time-series format in the Whisper RRD database. These statistics are then exported to the Disaster Management Server of the global SDN testbed where they are used for further analysis.
    Keywords: sFlow; monitoring system; SDN; KOREN; Graphite; disaster-resilient network.
    DOI: 10.1504/IJCAET.2018.10002607

Special Issue on: Research and Challenges in Soft Computing Theory and Applications

  • User Authentication and Key Agreement Scheme for the Internet of Things: A Study   Order a copy of this article
    by Kaliyanasundaram Muthuramalingam 
    Abstract: The Internet of Things (IoT) brings together different computing devices, embedded devices, communication devices and sensing devices to form a customized system, in order to combine the real and digital worlds in interdependent interactions. The huge number of interdependent devices and the substantial amount of available data open new prospects for producing valuable services to society. As the Internet of Things derives its reliability from evolving networks, security in particular appears as a challenging issue given the devices' limited resources. All security approaches need a certain amount of resources, including data, memory and power, for processing on the devices. Security must also be addressed in the widely used remote networks, owing to the immense development of computer network services. Password-based authentication is widely used to ensure that system resources are not improperly accessed, and it is one of the simplest and most suitable authentication mechanisms. Thus, the challenge is to conceive security protocols which are capable of minimizing the processing requirements and saving energy on the devices. Even though several studies address security mechanisms for IoT environments, the main objective in analyzing security is to acknowledge that integration with minimal power and processing requirements is the real solution to design for in this context. This paper presents a brief review of some recent articles addressing the security issues of IoT environments.
    Keywords: Authentication; Complex networks; Light Weight; Key Establishment; Host Identity Protocol; Explicit authenticated key agreement.
    DOI: 10.1504/IJCAET.2018.10009631

Special Issue on: ICMCE-2015 Advances in Applied Mathematics

  • Robust optimal sub-band wavelet cepstral coefficient method for speech recognition   Order a copy of this article
    by John Sahaya Rani Alex, Nithya Venkatesan 
    Abstract: The objective of this paper is to propose a robust feature extraction technique for a speech recognition system which is insusceptible to adverse environments. The efficacy of a speech recognition system depends on the feature extraction method. This paper proposes auditory-scale-like filter banks using optimal sub-band tree structuring based on the wavelet transform. The optimized wavelet filter banks, along with energy, logarithmic, discrete cosine transform and cepstral mean normalisation blocks, form a robust feature extraction method. This method is validated on a Hidden Markov Model (HMM) based single-Gaussian isolated word recognition system for additive white Gaussian noise, street and airport noises at different noise levels. Compared with Fourier transform based methods such as the Mel-Frequency Cepstral Coefficient (MFCC) and Perceptual Linear Predictive (PLP) methods, the wavelet transform based method yielded significant improvement across all noise levels. Experiments were also performed with higher dimensions of MFCC features, including delta and acceleration features (MFCC_D_A). This study shows that the wavelet transform based method gives an increase in recognition accuracy of 13% over MFCC_D_A for non-stationary noises.
    Keywords: Speech recognition; feature extraction; wavelet transform; robust; noisy environments; MFCC; PLP.

Special Issue on: RAME 16 Advances in Mechanical Engineering Research

  • Effect of Turbo A/R Ratio on a High Speed Turbocharged Automotive Diesel Engine   Order a copy of this article
    by Senthil Kumar, Gandhi Amarnadh, Akshay Gupta 
    Abstract: In light of the recent trend towards improving fuel economy and emission control in diesel engines to meet future energy conservation standards, much research is being done on turbochargers. A turbocharger is bolted to the engine exhaust; it recovers waste energy from the exhaust gas to power the compressor at the inlet, helping to achieve a volumetric efficiency greater than 100%. The turbines in turbochargers are characterised mainly by the A/R ratio. Varying A/R ratios can adversely affect the performance, fuel economy and emissions of a diesel engine through the effects of pressure boost, back pressure and exhaust gas recirculation. The current study deals with the influence of turbocharger A/R ratios on the performance and emissions of a two-cylinder diesel engine. The experiments show that the oxides of nitrogen emitted by the engine for all tested turbochargers increased from low to medium engine speeds and followed the reverse trend from medium to higher speeds. The other emissions, namely unburned hydrocarbons, carbon monoxide and smoke, showed the inverse of the oxides-of-nitrogen trend.
    Keywords: Turbocharger; Performance; Emissions; A/R ratio; Diesel engine.

  • A hybrid crow search algorithm to minimize the weighted sum of makespan and total flow time in a flow shop environment   Order a copy of this article
    Abstract: In this paper, flow shop scheduling problems, which have been proved to be strongly NP-hard (non-deterministic polynomial-time hard), are considered. The objective is to minimise the weighted sum of makespan and total flow time. To solve this problem, a recently developed meta-heuristic called the crow search algorithm is proposed. Moreover, dispatching rules are hybridised with the crow search algorithm to improve solution quality. The performance of the proposed algorithm is evaluated on an industrial scheduling problem, and the results obtained are much better than those of many dispatching rules and constructive heuristics. Random problem instances are also used to validate the performance of the proposed algorithm. The results are compared with many other meta-heuristics addressed in the literature and indicate the effectiveness of the proposed algorithm in terms of solution quality and computational time. To the best of our knowledge, this is the first reported application of the crow search algorithm to scheduling problems.
    Keywords: flow shop; scheduling; NP-hard; crow search algorithm; makespan; total flow time.
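    For reference, the bi-criteria objective described in the abstract above is conventionally written as a weighted sum of the two measures (the weight symbols w1 and w2 are notation assumed here for illustration, not taken from the paper):

    ```latex
    \min \; Z \;=\; w_1 \, C_{\max} \;+\; w_2 \sum_{j=1}^{n} C_j,
    \qquad w_1 + w_2 = 1,\; w_1, w_2 \ge 0,
    ```

    where \(C_{\max}\) is the makespan and \(\sum_j C_j\) is the total flow time of the n jobs.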

  • Dynamic analysis of composite propeller of ship using FEA   Order a copy of this article
    by Roopsandeep Bammidi 
    Abstract: Ships and underwater vehicles use propellers for propulsion. Propellers serve as propulsors, developing the significant thrust needed to propel the vehicle at its operational speed and RPM. The blade geometry and design are complex, involving many controlling parameters. In recent years, the increased need for lightweight structural elements made of composite materials has led to the use of S2 glass fabric/epoxy for propellers. The present research carries out modal and static analysis of propellers made of aluminium and of a composite combination of GFRP (glass fibre reinforced plastic) materials. The work deals with modelling and analysing the propeller blade of an underwater vehicle for strength. A propeller is a complex geometry that requires high-end modelling software; the solid model of the propeller was developed in CATIA V5 R21. Static and modal analyses of the propellers made of aluminium and composite materials were carried out in ANSYS. We applied thrust force at the blade blend section and centrifugal force at the centre of gravity, and obtained the von Mises stresses, total deformation, directional deformation, and principal and shear stresses of the aluminium and composite propellers.
    Keywords: Propeller; GFRP; Composites; CATIA; ANSYS.

    by Balamurugan Ponnamabalam, Uthayakumar Marimuthu 
    Abstract: Copper-based composites play a vital role in structural, automobile and aerospace applications. In the present study, composites with fly ash reinforcement of 0%, 2.5%, 5%, 7.5% and 10% are prepared by the powder metallurgy route. The influence of manufacturing constraints such as sintering temperature, compaction pressure and sintering time, along with the reinforcement percentage, on the copper matrix composite is studied using a full factorial design. Fly ash is used as the reinforcement. The output parameters studied are relative density, hardness and compressive strength, and the contribution of each input parameter to the output responses is determined. The results show that compaction pressure contributes most to relative density, with the highest contribution of 60.21%, while for hardness and compressive strength the reinforcement percentage made the highest contribution, at 85.5% and 51.94% respectively.
    Keywords: copper; fly ash; powder metallurgy; manufacturing constraints.

  • Preparation, characterization and machining of LaPO4-Y2O3 composite by abrasive water jet machine   Order a copy of this article
    by Balamurugan Karnan, Uthayakumar M, Sankar S, Hareesh U.S, Warrier K.G.K 
    Abstract: This article illustrates work on preparing a new ceramic matrix composite with a proportional concentration of lanthanum phosphate (LaPO4) (80 wt.%) and yttria (Y2O3) (20 wt.%) by an aqueous sol-gel process. The prepared nano powder is compacted at 480 MPa at 28°C (room temperature), followed by sintering at 1400°C for 2 hours to obtain the required disc-shaped geometry. The prepared ceramic matrix composite was machined using Abrasive Water Jet Machining (AWJM). Grey Relational Analysis (GRA) is performed with input parameters of jet pressure (JP), stand-off distance (SOD) and traverse speed (TS) to measure the deviations in performance characteristics such as material removal rate (MRR), kerf angle (KA) and surface roughness (Ra). The significance of these parameters was analysed using Analysis of Variance (ANOVA). The experimental work shows that SOD has a significant contribution of about 60.89% and affects the output performance characteristics to a great extent. A confirmation experiment was also carried out to validate the results.
    Keywords: Lanthanum phosphate-Yttria composite; Abrasive water jet machine; Kerf angle; Material removal rate; Surface roughness.
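    As background to the GRA step mentioned in the abstract above, the grey relational coefficient commonly used in the standard formulation of Grey Relational Analysis (the distinguishing coefficient \(\zeta\), typically set to 0.5, is standard notation and not taken from the paper) is:

    ```latex
    \gamma\!\left(x_0(k), x_i(k)\right)
      \;=\; \frac{\Delta_{\min} + \zeta\,\Delta_{\max}}
                 {\Delta_{0i}(k) + \zeta\,\Delta_{\max}},
    \qquad \Delta_{0i}(k) = \left| x_0(k) - x_i(k) \right|,
    ```

    where \(x_0(k)\) is the reference (ideal) sequence, \(x_i(k)\) the normalised comparison sequence, and \(\Delta_{\min}\), \(\Delta_{\max}\) the global minimum and maximum deviations.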

  • A short review on Fretting wear behaviour of Al7075   Order a copy of this article
    by Poomarimuthukumar Gurumoorthy, Siva Irulappasamy, Thirukumaran Manoharan, Winowlin Jappes Jebas Thangiah 
    Abstract: Generally, mating components deal with friction at their contacting surfaces, and gripping in the contact area governs the service life of the mating materials. In practical components, service life can also be measured in terms of wear. Owing to their higher bearing capacity and fatigue resistance, aluminium (Al) based alloys have long been preferred in high-end applications. Nevertheless, because of their weak tribological strength, many researchers have explored the possible wear mechanisms on the worn surfaces of Al-based alloys in order to enhance their strength. Fretting, one of the primary failure mechanisms that promotes severe damage in mating metal parts, especially in Al 7075, has recently been revealed by researchers. In general, the severity of damage during wear is analysed through fretting, which results from the synergy of rubbing between mating parts and vibration. The fretting regime represents many primary wear mechanisms, and fretting maps support the cause analysis of wear from the worn morphology. The present survey deals with the use of the fretting regime to explore possible wear mechanisms and the factors causing wear damage, drawing on the research output of leading tribology research groups.
    Keywords: Aluminum alloy; Fretting wear; Survey study; Surface topography.

  • Characteristic study on Al7020 Friction Stir joints with various rotational speeds   Order a copy of this article
    by Anselm Lenin, Nagaraj P, Lincy George, Lakshman Thangavelu 
    Abstract: This article provides information on the weld quality of joints made by friction stir welding (FSW), which has wide industrial application, especially in aerospace and automotive manufacturing. Two aluminium 7020 plates were welded. The joints were produced using the same welding parameters; only the rotational speed (rpm) of the tool was varied. The joints are characterised in terms of mechanical properties and microstructure. With the selected welding parameters, excellent weldments form, but the micrographs of the joint cross-sections made at different rotational speeds differ. As the tool's rotary speed changes, the number of particles stirred into the stir zone varies. Moreover, the failure loads of the joints also vary with the tool rpm. The analysis shows that the joint made with 1400 rpm tool rotation and a feed rate of 32 mm/min achieved higher mechanical properties and better metal dispersion with fewer defects. Finally, directions for future research and potential applications are examined and explained.
    Keywords: FSW interface characteristics; Al7020 alloy joining; Friction stir welding.

Special Issue on: ICIS 2016 Computer Aided Inventive Computational Techniques

    by P. Pitchandi, M. Rajendran 
    Abstract: With an abundance of heat, especially in tropical countries and Middle East nations, power harnessing techniques have become more focussed on the construction of solar panels and ponds for trapping and harvesting energy. This research paper addresses the design considerations for high-efficiency harvesting of solar heat using solar ponds with thermocouples. The design considerations discussed and implemented include the analysis of harvested power for varying pond structures such as shallow, medium and deep water ponds. A design of the experimental layout is presented. Data collected over a period of 30 days are used to determine the percentage efficiency of the proposed construction. The final part of the paper proposes an optimised flash evaporator for electric power generation, simulated and analysed for different working fluids, which could be considered a future scope for large-scale power generation.
    Keywords: Solar pond; design considerations; convection zones; flash separator.

  • A Novel Approach for Feature Fatigue Analysis using HMM stemming and Adaptive Invasive Weed Optimization with Hybrid Firework Optimization Method   Order a copy of this article
    by Midhun Chakkaravarthy 
    Abstract: The rapid growth of customer product reviews on e-commerce websites means that new online customers analyse reviews to learn about the features of the product they want to buy. Integrating many features into a single product makes it more attractive and encourages the customer to buy it; however, after working with a high-feature product, the customer may become dissatisfied, which eventually reduces the manufacturer's customer equity (CE). Thus, it is necessary to analyse the usability of the product. Existing usability evaluation methods have limitations in determining which features should be integrated into a product and which unnecessary features should be removed. In this paper, a novel approach is proposed to help designers find an optimal feature set, providing decision support for product designers to enhance product usability. The most up-to-date customer reviews on product usability are collected from the web. Latent Dirichlet Allocation is used to extract product features through a stemming process integrated with a Hidden Markov Model. The k-optimal rule discovery technique with an adaptive invasive weed optimisation algorithm is adopted to obtain the optimal customer opinions on the usability of product features. Finally, a hybrid firework optimisation method with differential evolution is adopted for feature fatigue analysis based on usability; based on the analysed features, feature fatigue is alleviated efficiently. Experiments show that the proposed approach achieves 97% accuracy, which is higher than existing work.
    Keywords: Feature Fatigue; Latent Dirichlet Allocation; Hybrid Firework Optimization; Differential Evolution.
    DOI: 10.1504/IJCAET.2019.10009148
  • Investigation of Methodical Framework for Cross-Platform Mobile Application Development: Significance of Codename One   Order a copy of this article
    by Munir Kolapo Yahya-Imam, Sellappan Palaniappan, Seyed Mohammadreza Ghadiri 
    Abstract: The mobile application development landscape is changing very rapidly, with developers moving from the traditional approach to write once, run anywhere. In any case, most mobile application projects face comparable constraints, such as tight schedules, limited budgets, and the need to support both Android and iOS. For most developers, particularly those migrating from web to mobile applications, cross-platform mobile application tools are often preferred as development tools that promise some native-like functionality and performance. However, developers often ask questions such as "which cross-platform development tool should we choose?" and "which framework is easier, better and supports our requirements?". This paper presents answers to these questions by evaluating some popular cross-platform mobile application development tools. It concludes by recommending Codename One to cross-platform mobile application developers because of its unique strengths and significance.
    Keywords: Cross-Platform; Codename One; PhoneGap; Xamarin; SenchaTouch; Titanium; Mobile App Development.

  • OntoCommerce: An Ontology Focused Semantic Framework for Personalized Product Recommendation for User Targeted E-Commerce   Order a copy of this article
    by Gerard Deepak, Dheera Kasaraneni 
    Abstract: In recent times, with the increase in the number of users of the Internet and the World Wide Web, there has been a paradigm shift in business strategy towards online marketing and e-commerce. Several e-commerce websites serve as a platform for bringing together products and users with the goal of selling the products. Although many e-commerce websites are available, the recommendation of relevant products to users can always be improved. With the World Wide Web transforming into a more intelligent Semantic Web, there is a perpetual need for semantically driven e-commerce systems that recommend products according to user preferences. In this paper, OntoCommerce, an e-commerce system that incorporates semantically driven algorithms for personalised product recommendation, is proposed, along with a new variant of normalised pointwise mutual information, called Enriched Normalized Pointwise Mutual Information, for semantic similarity computation. OntoCommerce incorporates ontologies and recommends products based on the user query, recorded user navigation and user profile analysis. To make the recommendations more relevant and lower the false discovery rate, OntoCommerce uses fuzzification of certain parameters to enhance the number of recommendable products. OntoCommerce yields an average accuracy of 88.68% with a very low false discovery rate of 0.13, which makes it a best-in-class semantically driven product recommendation system.
    Keywords: E-Commerce; Ontologies; Personalized Recommendation System; Semantic Similarity; Enriched Normalized Pointwise Mutual Information.

  • Road Segmentation and Tracking on Indian Road Scenes   Order a copy of this article
    by VIPUL MISTRY, Ramji Makwana 
    Abstract: Vision-based road detection is a challenging task owing to the variety of surrounding scenes and road types. This paper describes an efficient and effective algorithm for general road segmentation and tracking. The major contributions of this paper are three aspects: 1) an optimised voter selection strategy based on a modified voting process for vanishing point detection; 2) use of a Kalman filter to avoid false detection of the vanishing point and to reduce the computational complexity of the final road segmentation; 3) evaluation of the algorithm on different road types with varying surrounding scenes. The method has been implemented and tested on 10,000 video frames of Indian road scenes. Experimental results demonstrate that the algorithm achieves better efficiency than some texture-based vanishing point detection algorithms and successfully segments drivable road regions from varying Indian road scenes.
    Keywords: Vanishing Point; Kalman Filter; Road segmentation; Drivable road detection.
    DOI: 10.1504/IJCAET.2019.10010547
  • Advanced Prediction of Learner's Profile based on Felder Silverman Learning Styles using Web Usage Mining approach and Fuzzy C-Means Algorithm   Order a copy of this article
    by Youssouf EL ALLIOUI 
    Abstract: Problem Statement: One of the biggest problems in e-learning is how to predict the learner's profile in order to personalise the e-learning process effectively. The lack of information about the learner makes it very complicated for a learning environment to provide information and to identify the starting difficulty of the content, which reduces the efficiency of the e-learning process. Research Questions: Which model for identifying the starting difficulty of the content will be accurate and general enough to give a stable prediction for different learners? Purpose of the Study: To develop an automatic, optimal and universal prediction model that identifies the starting difficulty of the content in the e-learning process, and to test the accuracy of the model for different levels of learners. Research Methods: The learning behaviour is captured using the Web Usage Mining (WUM) technique, and the captured data are then converted into a standard learning style model. The work focuses mainly on the identification of learning styles. The captured data are preprocessed and converted into XML format based on sequences of accessed content on the portal. These sequences are mapped to the eight categories of the Felder-Silverman Learning Style Model (FSLSM) using the Fuzzy C-Means (FCM) algorithm. A Gravitational Search Based Back Propagation Neural Network (GSBPNN) algorithm is used to predict the learning styles of a new learner; in this algorithm, the neural network approach is modified by computing the weights using the Gravitational Search Algorithm (GSA). Findings: The accuracy of the prediction model is compared with the basic Back Propagation Neural Network (BPNN) algorithm.
    Keywords: Learner's Profile; Felder Silverman Learning Styles; Web Usage Mining; Fuzzy C-Means Algorithm.

    by Sivakumaran AR, Marikkannu P 
    Abstract: Web mining (WM) is the automatic discovery of user access patterns from web servers. Universities collect vast volumes of data in their daily operations, generated automatically by web servers and gathered in server access logs. This research presents Forecasting and Enhancing University Navigation from Web Log Data (F&EUN-WLD). In the primary stage, F&EUN-WLD concentrates on isolating potential users in the web log data (WLD). Experimental results show that the methodology can enhance the quality of clustering of user navigation patterns in web usage mining systems. These outcomes can be used to predict a user's next request on large web sites.
    Keywords: Web Mining; Web Log Data; route design; Navigation; web utilization.

  • A safety system for school children using GRAG   Order a copy of this article
    by Joe Louis Paul Ignatius, Sasirekha Selvakumar 
    Abstract: Millions of children commute between home and school every day. Safe transportation of school children is a critical issue, as children may be prone to abductions, accidents and so on, and parents worry about their children's safety. Hence many systems using Radio Frequency Identification (RFID) and the Global Positioning System (GPS) have been built. This work provides another solution to these problems by integrating both technologies with the Global System for Mobile Communication (GSM) into an efficient system called GRAG (GPS, RFID And GSM). RFID monitors the entry and exit of children into and out of the school, and messages are sent to parents accordingly using the Short Message Service (SMS). If a child does not reach school or home within the expected time, the parents can call the registered number of the GPS tracker to activate it. The GPS tracker then determines the child's position, and the latitude and longitude coordinates are sent to the parent's mobile. These coordinates can be entered into the Android application to find the exact location.
    Keywords: Radio Frequency Identification (RFID); Global System for Mobile Communication (GSM); GPS tracker; Arduino UNO; Parallax Data Acquisition (PLX-DAQ); Android application.

  • An Android based Hardware System for Accident Avoidance and Detection on Sharp Turns   Order a copy of this article
    by Shilpa Mahajan, Nikhat Ikram 
    Abstract: Accidents are a major cause of loss of human life, being unintended events that cause great damage and injury. One reason for the increase in road accidents is the growing prosperity of the world, which has led to more vehicles on the roads; this in turn increases traffic density, travelling distance and time spent travelling, thus increasing the chance of vehicle collisions. In view of the current scenario of increasing road accidents, a working hardware prototype along with an Android-based application is proposed to avoid road accidents. This paper presents a scenario in which collision between vehicles is avoided by alerting the driver with a buzzer that sounds whenever another ZigBee-equipped car is in range of the current car, while a message flashes on an LCD inside the car indicating that a vehicle has been detected nearby. Communication between the cars takes place via ZigBee. The positions of approaching vehicles can be seen on an Android application installed on a mobile device, which depicts the exact location of the vehicles on Google Maps. This helps to reduce collisions. The proposed method differs from others in that most techniques invented to date are built-in, while the method proposed here is a prototype that can be implemented in the real world.
    Keywords: WSN; Accidents; VANET.

  • Computer Aided Software Integrated Automated Safety System   Order a copy of this article
    by SOWJANYA Pentakota 
    Abstract: Software for any system must deal with the hazards identified by safety analysis in order to make the system safe. Building safety software requires special procedures in all phases of the software development process. In this work we apply safety analysis techniques such as Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) in a safety-critical approach to the development of an integrated automotive system from a safety perspective. A Software Safety Architecture and a Software Safety Lifecycle are developed here using several important safety techniques. A new software development lifecycle with an integration approach, the Agile-V model, is proposed. A driver assistance system such as the Adaptive Cruise Control System (ACCS) is an automotive system that helps prevent accidents by reducing the workload on the driver. The basic design and functionality of the ACCS include a safety command that bypasses to the braking system when needed. As a safety approach to certain limitations, we introduce an integrated architecture using fuzzy logic, which has fewer failure cases and improves efficiency. The basic design and functionality of the braking system are evaluated with and without ABS, showing that the stopping distance also decreases.
    Keywords: Adaptive Cruise Control System (ACCS); Anti-Lock Braking System (ABS); FMEA; FTA; Software Safety Architecture (SSA); Software Safety Lifecycle (SSL).

  • Performance Comparison of SDN OpenFlow Controllers   Order a copy of this article
    by Vishnupriya Achuthan, Radhika N 
    Abstract: Software Defined Networking (SDN) is a centralised network management technology that can reduce the network administration and policy enforcement overhead of traditional IP networking. The SDN controller is the network operating system responsible for all network operations. Many open-source controllers are available, such as NOX, POX, FloodLight and OpenDaylight, each with its own properties supporting specific requirements. In this paper, we compare the performance of the most familiar OpenFlow controllers, namely NOX, POX, Ryu, FloodLight and the OpenFlow reference controller, based on their packet handling capacity, varying the packet size, the number of packets and the arrival pattern of the IP traffic flows. The Distributed Internet Traffic Generator (D-ITG) tool is used to measure performance in terms of delay, jitter, throughput and packet loss. Our experimental results show that FloodLight has better throughput and less delay than the other controllers. This work assists researchers in choosing the appropriate controller for their requirements.
    Keywords: Software Defined Networking; SDN Controllers; Traffic Generation; QoS parameters.

Special Issue on: Recent Trends in Computing and Engineering

  • A smooth three-dimensional reconstruction of human head from minimally selected computed tomography slices   Order a copy of this article
    by Haseena Thasneem, Mohamed Sathik, Mehaboobathunnisa R 
    Abstract: Three-dimensional reconstruction has been deeply investigated by researchers all over the world. This is a comprehensive effort to find an effective interpolation technique that can provide an accurate and enhanced three-dimensional reconstruction of the human head from a selected set of computed tomography slice data. Based on a structural similarity measure, a set of slices is selected and segmented using phase field segmentation. Keeping these segmented slices as the base, the intermediate slices are re-created using linear and modified-curvature-registration-based interpolation, and the results are compared. To further enhance the result and provide a better reconstruction, we apply a refinement process using a modified Cahn-Hilliard equation to the interpolated slices. The results are validated both quantitatively and qualitatively, and show that modified-curvature-registration-based interpolation with our proposed refinement outperforms linear interpolation with refinement, providing a simultaneous improvement in sensitivity (95.95%) and specificity (95.94%) with an accuracy of more than 96% and minimal mean square error.
    Keywords: structural similarity measure; phase field segmentation; curvature registration based interpolation; three dimensional reconstruction; computed tomography head slices.

  • A New High Performance Empirical Model for software Cost Estimation   Order a copy of this article
    by H. Parthasarath Patra 
    Abstract: A software project is successful when it is delivered on time, within budget and with the required quality as per the client's requirements. In today's software industry, however, cost estimation is a critical issue for developers, and estimating effort and cost is a significantly difficult and challenging task. Over the last 20 years more than 30 models have been developed to estimate effort and cost for the betterment of the software industry, but these models cannot satisfy the modern software industry because of the dynamic behaviour of software across all kinds of environments. In this study an empirical high-performance interpolation model is developed to estimate the effort of software projects. The model is compared with COCOMO-based equations, and its predictions are analysed individually for different cost factors. The equation consists of one independent variable (KLOC) and two constants, a and b, chosen empirically from historical data of different NASA projects, and the results of this model are compared with the COCOMO model for different scale factor values.
    Keywords: Kilo Lines of code; Software cost estimation; MRE; MMRE; PRED.
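    For context, the basic COCOMO form that the abstract above compares against relates effort to program size as:

    ```latex
    E \;=\; a \,(\mathrm{KLOC})^{b}
    ```

    where \(E\) is the estimated effort in person-months, KLOC is the size in thousands of lines of code, and \(a\), \(b\) are empirically determined constants (in the paper's model they are fitted to NASA project data; in classic COCOMO they depend on the project mode).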

  • Cryptographic Key Management Scheme for Supporting Multi-User SQL Queries over Encrypted Databases   Order a copy of this article
    Abstract: Database outsourcing is becoming more popular, bringing in a new paradigm called database-as-a-service, in which an organization's database is stored in the cloud. In such a setting, both access control and data confidentiality play an important role, particularly when a data owner wishes to publish data for external use. Every cloud provider promises the security of its platform, while the implementation of solutions to ensure the confidentiality of data stored in cloud databases is left to the data owner. State-of-the-art solutions address only a few preliminary issues with the aid of SQL queries on encrypted data. In this paper, we propose a novel cryptographic key management scheme that combines data encryption and key management and supports multi-user SQL queries over encrypted databases. Our approach provides solutions for enforcing access control and for ensuring the confidentiality of data. The experimental results obtained in this paper show the performance of the proposed scheme.
    Keywords: data confidentiality; access control; key derivation; encryption; metadata.

  • A New Intelligent System for Glaucoma Disease Detection   Order a copy of this article
    by Mohamed El Amine Lazouni, Amel Feroui, Saïd MAHMOUDI 
    Abstract: Glaucoma is a widespread disease and a major cause of blindness, resulting from damage to the optic nerve; its major risk factor is increased intraocular pressure. The disease generally progresses very slowly and shows no symptoms at the beginning. The research presented in this paper is both a clinical and a technological aid for the diagnosis of early glaucoma based on four artificial intelligence classification techniques: multi-layer perceptron, support vector machine, K-nearest neighbour and decision tree. A majority vote system is applied to these four classifiers in order to optimise the performance of the proposed system. For the cup-to-disc ratio, one of the descriptors of the collected database, we developed an unsupervised classification technique, the K-means algorithm, for the detection of the cup, and a drainage divide algorithm (a mathematical morphology method) for the detection of the disc. Moreover, we propose a contour adjustment technique, the ellipse fitting method. We also applied a feature selection method (ReliefF) to our database in order to detect the pertinent descriptors, i.e. those responsible for early glaucoma disease. The results obtained are satisfying and promising, and prove the efficiency and coherence of our new database; they were also confirmed and validated by several ophthalmologists.
    Keywords: Glaucoma; Classification; SVM; MLP; RBF; K-NN; Majority voting; ReliefF; Segmentation; LPE; K-Means; Ellipse Fitting.
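Once the cup and disc are segmented as the abstract describes, the cup-to-disc ratio reduces to a pixel count. The sketch below is a hypothetical illustration (not the authors' pipeline), assuming a mask in which 0 is background, 1 is disc and 2 is cup, with the cup lying inside the disc.

```python
def cup_to_disc_ratio(mask):
    """CDR from a segmentation mask: 0 = background, 1 = disc, 2 = cup.
    The cup lies inside the disc, so disc area includes cup pixels."""
    cup = sum(v == 2 for row in mask for v in row)
    disc = sum(v >= 1 for row in mask for v in row)
    return cup / disc if disc else 0.0

# Toy fundus mask: a 2-pixel cup inside an 8-pixel disc region.
mask = [[0, 1, 1, 0],
        [1, 2, 2, 1],
        [0, 1, 1, 0]]
assert cup_to_disc_ratio(mask) == 0.25
```

A CDR above roughly 0.6 is a commonly cited warning sign for glaucoma, which is why the ratio serves as a descriptor in the database above.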

Special Issue on: IJCAET ECEC 2013, ESM 2013, SIMEX'2013 and MESM'2014 New Trends in European Simulation and Modelling

  • Finite Element Analysis of Implant Design Used in Elbow Arthroplasty Process   Order a copy of this article
    by VIKKY KUMHAR, Amit Sarda 
    Abstract: This article describes biomedical engineering modelling of the elbow arthroplasty process. The investigation also presents a conceptual design of a total replacement elbow joint that allows formulation and analysis. The complete elbow model assembly was designed in Creo Parametric, modifying the surface geometry of the existing design from sharp to smooth edges, and analysed using the ANSYS tool, which gives results for the proposed design such as von Mises stress and principal stress; the existing and proposed designs are then compared. The design was tested in a complex environment to find more suitable biomaterials than the existing approach. For validation, the existing and proposed simulation results obtained using ANSYS are provided in the paper.
    Keywords: Total elbow arthroplasty; finite element analysis; von mises stress; principal stress; implant; bio materials.

Special Issue on: Future Directions in Computer-Aided Engineering and Technology

  • Improved Indoor Location Tracking System for Mobile Nodes   Order a copy of this article
    by SUNDAR S, KUMAR R, Harish M.Kittur 
    Abstract: The problem of tracking a wireless node is conventionally approached by (i) proximity detection, (ii) triangulation and (iii) scene analysis methods. Of these, the scene analysis method is simple, accurate and less expensive. Indoor localization technologies need to address the inaccuracy and inadequacy of Global Positioning System (GPS) based approaches in indoor environments (such as urban canyons, the interiors of large buildings, etc.). This paper presents a novel indoor Wi-Fi tracking system with minimal error in the presence of barriers, using the Bayesian inference method. The system integrates an Android app and Python scripts (running on a server) to identify the position of a mobile node within an indoor environment. The received signal strength indicator (RSSI) method is used for tracking. Experimental results are presented to illustrate the performance of the system in comparison with other methods. From the tracked nodes, a theoretical solution is proposed for finding the shortest path using Steiner nodes.
    Keywords: Location Tracking; GPS; MANETs; Mobile nodes; Wi-Fi Access points; WLAN; Bayesian Inference; RSSI; Shortest paths; Steiner nodes.
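To make the Bayesian RSSI idea concrete, here is a minimal sketch (not the authors' system): a log-distance path-loss model predicts the RSSI each access point should see from a candidate position, and a flat-prior posterior is maximized by grid search. The access-point layout and the constants `RSSI_1M`, `N_EXP` and `SIGMA` are hypothetical.

```python
import math

# Hypothetical access-point positions and a log-distance path-loss model:
# rssi(d) = RSSI_1M - 10 * N_EXP * log10(d), with path-loss exponent N_EXP.
APS = {"ap1": (0.0, 0.0), "ap2": (10.0, 0.0), "ap3": (0.0, 10.0)}
RSSI_1M, N_EXP, SIGMA = -40.0, 2.0, 4.0

def expected_rssi(ap_xy, xy):
    d = max(math.dist(ap_xy, xy), 0.1)          # clamp to avoid log(0)
    return RSSI_1M - 10.0 * N_EXP * math.log10(d)

def posterior_peak(measured, step=0.5):
    """Grid-search the maximum of the (flat-prior) Gaussian posterior."""
    best, best_ll = None, float("-inf")
    for i in range(21):
        for j in range(21):
            xy = (i * step, j * step)
            ll = sum(-((measured[a] - expected_rssi(APS[a], xy)) ** 2)
                     / (2 * SIGMA ** 2) for a in measured)
            if ll > best_ll:
                best, best_ll = xy, ll
    return best

# Simulated readings at the true position (5, 5): the estimate lands nearby.
truth = (5.0, 5.0)
meas = {a: expected_rssi(p, truth) for a, p in APS.items()}
est = posterior_peak(meas)
assert math.dist(est, truth) < 1.0
```

In practice the measured RSSI is noisy, so the posterior is a smeared peak rather than a point; barriers are what the paper's Bayesian treatment is designed to absorb.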

  • Bi-level User Authentication for Enriching Legitimates and Eradicating Duplicates (EnEra) in Cloud Infrastructure   Order a copy of this article
    by Thandeeswaran R, Saleem Durai M A 
    Abstract: The ease of use of cloud computing has led to exponential growth in all sectors. Exponential growth always attracts duplicates, which consume and deplete resources. The cloud is not exempt from invaders, and when resource utilization is overwhelmed, availability becomes a threat. Availability issues arise from multiple requests towards the same victim: a DDoS attack. Hence, the major concern in the cloud is to correctly identify legitimate users and to provide the required services at all times by avoiding DDoS attacks. Multiple techniques are available to identify and authenticate users. This paper not only authenticates users but also works on eliminating invaders, in two phases. In the first phase, the user ID is scrambled in four different steps. In the second phase, users are authenticated depending on their credits. Based on the traffic flow (in the case of a network-level attack) and on the interval between consecutive service requests (in the case of a service-level attack), users are authenticated and services are provisioned accordingly. The simulation results presented here exhibit the strength of the proposed method in the detection and prevention of DDoS attacks in a cloud computing environment.
    Keywords: DDoS attack; SSID; Authentication; credits; cloud environment; legitimate; attackers.

  • Hybrid Algorithm for Twin Image Removal in Optical Scanning Holography   Order a copy of this article
    by P. Bhuvaneswari Samuel, A. Brintha Therese 
    Abstract: Optical scanning holography is an incoherent optical image processing system. It is a technique in which the complete information of an object or image is recorded as a hologram and later reconstructed to recover the original image. In the hologram reconstruction process, a virtual image is formed along with the real image, which appears as twin image noise. To eliminate such noise, a hybrid algorithm is applied while recording the hologram itself. The hybrid algorithm is derived from the combination of the conventional optical transfer function (OTF) used in the existing method and a proposed OTF obtained by varying the spatial frequency, arriving at an optimal spatial frequency which imparts good image quality. Various images are tested with the hybrid algorithm. The Matlab R2012b image processing tool is used for simulation, and the simulated values are tabulated and compared with the existing method in terms of peak signal-to-noise ratio (PSNR) and mean square error (MSE). In the reconstruction, the proposed method shows a 26% improvement in MSE and PSNR values. To further improve these values, a case study using different denoising techniques combined with the proposed hybrid algorithm was carried out, yielding a considerable improvement of 32%. Hence the image quality is increased.
    Keywords: Optical Scanning Holography; Fresnel Zone plate; OTF ; spatial frequency; twin image noise; denoising.
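The MSE and PSNR figures quoted above follow the standard definitions; the sketch below computes both for small nested-list images (a generic illustration, not the authors' Matlab code).

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized 8-bit images (nested lists)."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)

ref = [[100, 100], [100, 100]]
rec = [[102, 98], [100, 104]]
assert mse(ref, rec) == (4 + 4 + 0 + 16) / 4          # 6.0
assert abs(psnr(ref, rec) - 10 * math.log10(255 ** 2 / 6.0)) < 1e-9
```

Note that a 26% "improvement" means MSE going down while PSNR goes up: the two move in opposite directions by construction.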

  • Evaluation of Video Watermarking Algorithms on Mobile Device   Order a copy of this article
    by Venugopala P S 
    Abstract: The advancement of Internet services and the design of image and video capturing devices, along with various storage technologies, have made video piracy an issue. Asserting the originality of digital data and holding a copyright on a file is always a challenging task. Digital watermarking is a technique of embedding secret information, known as watermarks, within an image or video file; this can be used for authentication and ownership verification. This paper presents an analysis of the mobile deployment of various video watermarking algorithms, carried out using quality parameters such as PSNR, execution time and power consumption. The goal of a video watermarking method implemented on a mobile phone is to enhance security and achieve copyright protection for video files captured with the phone. Three different watermarking methods, DCT, LSB and bit stream, are applied to the video file and compared for their performance using PSNR, power consumed and execution time. It is observed that the proposed bit stream method gives better performance than the other methods on these parameters.
    Keywords: DCT; LSB; Bit stream; Watermarking; Copyright protection.
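Of the three methods compared, LSB is the simplest to illustrate: the watermark bits overwrite the least-significant bit of each pixel, so the distortion is at most one grey level. This is a generic sketch of the technique, not the authors' mobile implementation.

```python
def embed_lsb(pixels, bits):
    """Write watermark bits into the least-significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 13, 77, 254, 9, 100]
mark = [1, 0, 1, 1, 0, 1]
stego = embed_lsb(cover, mark)
assert extract_lsb(stego, 6) == mark
# Distortion is at most 1 grey level per pixel, hence a high PSNR.
assert all(abs(s - c) <= 1 for s, c in zip(stego, cover))
```

The trade-off the paper measures follows directly: LSB is cheap and near-invisible but fragile, whereas transform-domain methods like DCT cost more computation in exchange for robustness.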

  • Automatic Identification of Acute Arthritis from Ayurvedic Wrist Pulses   Order a copy of this article
    by Arunkumar N, Mohamed Shakeel P, Venkatraman V 
    Abstract: Traditional Ayurvedic doctors examine the state of the body by analysing the wrist pulse of the patient. Remarkably, the characteristics of the pulses vary with various changes in the body. The three pulses acquired from the wrist are named Vata, Pitta and Kapha. Ayurveda holds that an imbalance in these three doshas indicates disease, and two different diseases will show different patterns in their pulse characteristics. Thus the wrist pulse signal serves as a tool to analyse the health status of a patient. In earlier work, we standardized the signals for healthy persons and then classified diabetic cases using approximate entropy (ApEn) [10], later enhancing the results using sample entropy. In the present work, sample entropy (SampEn) is used to classify acute arthritis cases.
    Keywords: Vata; Pitta; Kapha; Approximate Entropy (ApEn); Sample Entropy (SampEn).
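Sample entropy, the measure used above, quantifies the irregularity of a signal: it is the negative logarithm of the probability that two sequences which match for m points also match for m+1 points. A minimal (brute-force) sketch, not the authors' code:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
    and A of length m+1 (Chebyshev distance <= r, self-matches excluded)."""
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    a, b = count(m + 1), count(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic signal is highly regular, so its SampEn is low;
# a more irregular pulse waveform would score higher.
regular = [0.0, 1.0] * 20
assert sample_entropy(regular) < 0.2
```

For pulse analysis, r is usually set relative to the signal's standard deviation; diseased pulses are expected to shift the SampEn value away from the healthy baseline.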

  • A Real-Time Auto Calibration Technique for Stereo Camera   Order a copy of this article
    by Hamdi Boukamcha, Fethi Smach, Mohamed Atri 
    Abstract: Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in computer vision. Usually, to get accurate 3D results, the camera must itself be calibrated accurately, typically by hand. This paper proposes a robust approach to stereo camera auto-calibration without user intervention. Several calibration methods and techniques have been proven; in this work we exploit a geometric constraint, namely epipolar geometry. We specifically use seven feature extraction techniques (SURF, BRISK, FAST, FREAK, MinEigen, MSERF, SIFT) and establish correspondences between points extracted in the stereo images with various matching techniques (SSD, SAD, Hamming). We then exploit the fundamental matrix to estimate the epipolar lines, choosing the best among several eight-point algorithm variants (Norm8Point, LMedS, RANSAC, MSAC, LTS). A large number of experiments have been carried out, and very good results have been obtained by comparing and choosing the best technique at every stage.
    Keywords: Auto calibration; Robust matching; Epipolar geometry; Fundamental matrix; Matching Technique.
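All the eight-point variants named above start from the same linear estimate of the fundamental matrix F, which maps each point to its epipolar line via the constraint x2ᵀ F x1 = 0. A minimal sketch of that linear step on synthetic data (identity intrinsics, small rotation plus translation); this is the textbook algorithm, not the authors' robust pipeline.

```python
import numpy as np

def eight_point(x1, x2):
    """Linear eight-point estimate of F satisfying x2^T F x1 = 0,
    with the rank-2 constraint enforced via SVD."""
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)           # null vector of the constraint matrix
    u, s, vt = np.linalg.svd(F)
    s[2] = 0.0                         # force rank 2
    return u @ np.diag(s) @ vt

# Synthetic two-view geometry: 3D points in front of camera 1, then a small
# rotation about z and a sideways shift for camera 2.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(8, 3)) + np.array([0.0, 0.0, 5.0])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.1, 0.0])
x1 = X[:, :2] / X[:, 2:]               # projections in camera 1
X2 = (R @ X.T).T + t
x2 = X2[:, :2] / X2[:, 2:]             # projections in camera 2
F = eight_point(x1, x2)
res = [abs(np.append(q, 1.0) @ F @ np.append(p, 1.0)) for p, q in zip(x1, x2)]
assert max(res) < 1e-6
```

RANSAC, MSAC, LMedS and LTS wrap this linear solver in different outlier-rejection loops, which is the dimension the paper's comparison explores.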

  • Improved automatic age estimation algorithm using a hybrid feature selection   Order a copy of this article
    by Santhosh Kumar G, Suresh H. N 
    Abstract: Age estimation (AE) is one of the significant biometric behaviours for emphasizing identity authentication. Automatic AE from facial images is an actively researched topic, and an important but challenging problem in the field of face recognition. This paper explores several algorithms utilized to improve AE and the associated combinations of features and classifiers. Initially, the facial image databases are trained, and then features are extracted employing several algorithms: Histogram of Oriented Gradients (HOG), Binary Robust Invariant Scalable Keypoints (BRISK), and Local Binary Patterns (LBP). Here, AE is performed over three age groups: 20 to 30, 31 to 50, and above 50. The age groups are classified utilizing a Naïve Bayes Classifier (NBC).
    Keywords: Age estimation; BRISK; HOG; LBP; NBC.
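Of the three descriptors listed, LBP is the simplest to show in a few lines: each pixel gets an 8-bit code from thresholding its neighbours against the centre, and the histogram of codes describes local texture (wrinkles, skin). A generic 3x3 sketch, not the authors' feature pipeline:

```python
def lbp_code(patch):
    """8-neighbour LBP code for the centre of a 3x3 patch: each neighbour
    >= centre contributes one bit, clockwise from the top-left."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << k for k, n in enumerate(neighbours) if n >= c)

# Uniform patch: every neighbour equals the centre, so all 8 bits are set.
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
assert lbp_code(flat) == 255

# An edge patch produces a distinctive code: bits 2, 3 and 4 (the darker
# right-hand column) are cleared.
edge = [[9, 9, 1], [9, 5, 1], [9, 9, 1]]
assert lbp_code(edge) == 255 - (1 << 2) - (1 << 3) - (1 << 4)
```

Because the code depends only on the sign of intensity differences, LBP is invariant to monotonic illumination changes, which is what makes it attractive for face texture.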

Special Issue on: Computer-Aided Intelligent Systems

  • Optimized RBIDS: Detection and Avoidance of Black Hole Attack through NTN Communication in Mobile Ad-hoc Networks   Order a copy of this article
    by Gayathri VM, Supraja Reddy 
    Abstract: Mobile ad hoc networks (MANETs) are an emerging technology in various fields of computer science and, together with sensor applications, offer great promise for smart innovation. In such a network, nodes connect on demand. Since the network is infrastructure-less, any node can enter the topology and participate in packet transmission. Nodes join the topology based on their sequence number, distance and RF-based calculations, and any node satisfying these requirements can take part in transferring packets as a router or intermediate node. This becomes an open door for attackers to enter the network, leaving it more vulnerable. In this paper, we address the black hole attack, in which packets that should be forwarded to the destination are instead dropped because of the false identity of a node. The implementation uses the NS2 simulator with the on-demand protocol AODV. An algorithm called RBIDS is proposed to improve network performance by detecting malicious nodes; it is applied to every individual node over a period of time to evaluate its behaviour based on regression values.
    Keywords: NTN; RBIDS; AODV; Regression.

  • A new parallel DSP hardware compatible algorithm for noise reduction and contrast enhancement in video sequence using Zynq-7020   Order a copy of this article
    by MADHURA S, Suresha K 
    Abstract: Various video processing applications, such as liquid crystal display processing, high-quality video photography, terrestrial video recording and medical imaging systems, require robust noise reduction and contrast enhancement techniques that provide visually pleasing results. For real-time implementation, a novel hardware architecture has been designed using a Look-Up-Table (LUT) acceleration approach, which helps achieve high-speed processing. Many researchers have worked on noise removal and contrast enhancement of digital videos, but the developed algorithms work with only some varieties of noise, fail to produce desirable results for various types of distortion, and real-time implementation remains a challenge. Hence an appropriate filter needs to be designed which addresses both kinds of error. In this paper, an adaptive trilateral filter has been designed for noise reduction; the results are measured using qualitative and quantitative analysis, which has aided in better utilization of hardware for real-time implementation. The experimental results show that the proposed algorithm provides a frame rate of 40 fps on average at a resolution of 720x576. The proposed algorithm was implemented on the ZedBoard Zynq-7020 development kit by Xilinx.
    Keywords: video enhancement; segmentation; trilateral filtering; real-time implementation; Zynq-7020.

  • HDFS Based Parallel and Scalable Pattern Mining Using Clouds for Incremental Data   Order a copy of this article
    by Sountharrajan S., Suganya E, Aravindhraj N, Rajan C 
    Abstract: Increased usage of the Internet has led to the migration of large amounts of data to the cloud environment, which uses the Hadoop and MapReduce frameworks for managing various mining applications in a distributed setting. Earlier research in distributed mining comprised solving complex problems using distributed computational techniques and new algorithmic designs, but as the nature of the data and user requirements becomes more complex and demanding, existing distributed algorithms fail in multiple respects. In our work, a new distributed frequent pattern algorithm, named Hadoop-based Parallel Frequent Pattern mining (HPFP), is proposed to utilize clusters optimally and mine repeated patterns from large databases very effectively. The empirical evaluation shows that the HPFP algorithm improves the performance of the mining operation by increasing the level of parallelism and execution efficiency. HPFP achieves complete parallelism and delivers superior performance, making it a more efficient algorithm in HDFS than existing distributed pattern mining algorithms.
    Keywords: Cloud Computing; Hadoop Distributed File System; Map Reduce; Association Rules; Frequent Pattern Growth Algorithm; Distributed Mining; Parallel Pattern Mining.
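The MapReduce shape of such an algorithm can be sketched in miniature: mappers emit (itemset, 1) pairs per transaction, the shuffle groups by key, and reducers sum counts and apply the minimum-support threshold. This toy single-process sketch illustrates the pattern only; it is not HPFP and omits the FP-growth machinery.

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transactions):
    """Map step: emit (itemset, 1) pairs for all 1- and 2-itemsets."""
    for t in transactions:
        for item in sorted(t):
            yield (item,), 1
        for pair in combinations(sorted(t), 2):
            yield pair, 1

def reduce_phase(pairs, min_support):
    """Reduce step: sum counts per key, keep itemsets meeting min_support."""
    counts = defaultdict(int)
    for key, v in pairs:
        counts[key] += v
    return {k: c for k, c in counts.items() if c >= min_support}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
frequent = reduce_phase(map_phase(db), min_support=3)
assert frequent[("a",)] == 4 and frequent[("a", "b")] == 3
```

In a real HDFS deployment the transactions are partitioned across mappers and the shuffle is handled by the framework, which is where the parallelism claimed above comes from.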

    by Muthukumaresan Mb, Sakthivel S 
    Abstract: Most military organizations now take the help of robots to carry out many risky jobs that cannot be done by soldiers. The robots used in the military usually come with integrated systems, including video screens, sensors, grippers and cameras, and they take different shapes according to their purposes. Here a new system is proposed, using a low-power Zigbee wireless sensor network, to trace intruders (unknown persons) so that the robot can take the necessary action automatically. Thus the proposed system, an Intelligent Unmanned Robot (IUR) using Zigbee, saves human lives and reduces manual error on the defence side. It is a specially designed robotic system to save human life and protect the country from enemies.
    Keywords: Microcontroller; ZIGBEE module; IUR robot.

  • An Efficient Packet Image Transmission based on Texture Content for Border side Security Using Sensor Networks   Order a copy of this article
    by Pitchai Ramu, Reshma Gulsar, Raja Jayamani 
    Abstract: In the field of surveillance, several algorithms have been developed to extract meaningful information from an image captured via a camera. When an intrusion event occurs, these cameras transmit the captured images to the sink node via intermediate nodes. Since WSNs operate with limited resources, efficient resource utilization is needed while processing and transporting images, and not all of the image data is equally necessary. Prioritization is one method of utilizing the available resources: images are prioritized dynamically from their macro-blocks. Here the camera is attached to a sensor node, forming a Wireless Multimedia Sensor Network (WMSN). The scheme employs encoding at the source node, labelling blocks as important or not important based on the information they contain. Image texture features and spectral information are used as priority measures to weight the importance of macro-blocks via their textural GLCM properties. Experimental results show that the priority encoding scheme adapts itself to the application's quality requirements while comparatively reducing the required bandwidth.
    Keywords: Wireless Sensor Network; Prioritization; Wireless Multimedia Sensor Networks; Texture; GLCM; Macro Block.
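The GLCM properties mentioned above come from counting how often pairs of grey levels co-occur at a fixed offset; statistics such as contrast then summarize the texture of a macro-block. A minimal sketch for the horizontal (0, 1) offset, not the paper's encoder:

```python
from collections import Counter

def glcm(img):
    """Normalized grey-level co-occurrence counts for the (0, 1) offset."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def contrast(p):
    """GLCM contrast: high for blocks with strong local grey-level changes."""
    return sum(prob * (i - j) ** 2 for (i, j), prob in p.items())

smooth = [[1, 1, 1, 1], [1, 1, 1, 1]]
busy = [[0, 3, 0, 3], [3, 0, 3, 0]]
assert contrast(glcm(smooth)) == 0.0
assert contrast(glcm(busy)) == 9.0     # every pair differs by 3 levels
```

A prioritization scheme along these lines would tag the high-contrast (texture-rich) macro-blocks as important and transmit them first, which matches the bandwidth behaviour the abstract reports.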

  • Hidden Object Detection for Classification of Threat   Order a copy of this article
    by Gautam KS, Senthil Kumar Thangavel 
    Abstract: Automated video surveillance has become important owing to the focus from governments and users on improving the smart nature of buildings. A system developed for this purpose can be used in prisons, airports, banks, etc. Though solutions exist, they fail in situations involving mishaps and hidden objects that could become a threat to the environment. In this paper, a framework has been built using a modified K-means segmentation algorithm to detect hidden objects. The framework operates in two phases: Phase 1 applies the modified K-means segmentation algorithm to segment the hidden objects; Phase 2 uses a deep convolutional neural network to classify the hidden object. The algorithm searches for an approximately optimal value of K and segments the object, and the result is given to the deep convolutional neural network to classify the type of object. The algorithm is tested on a manually built dataset acquired with a Fluke TiS40 thermal imager. The experiments were carried out in batches of 50x50 images, and the performance of the approach is reported using Top-1 accuracy and mean average precision, which are 0.94 and 0.64 respectively. From the experimental analysis, we infer that the proposed algorithm works with a precision of 0.88 and a false discovery rate of 0.12.
    Keywords: Video Analytics; Deep Learning; Deep Convolutional Neural Network; Thermal image; K-Means Segmentation.
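K-means segmentation on a thermal frame amounts to clustering pixel intensities and labelling each pixel by its nearest centre. The sketch below is plain Lloyd's algorithm in one dimension, shown only to ground the idea; the paper's modified variant (including the search for K) is not reproduced.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on pixel intensities (1-D feature space)."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda c: abs(v - centres[c]))].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two well-separated intensity populations (e.g. cool background vs. a warm
# hidden object in a thermal frame) yield centres near 20 and 200.
pixels = [18, 20, 22, 19, 21] * 10 + [198, 200, 202, 199, 201] * 10
lo, hi = kmeans_1d(pixels, k=2)
assert abs(lo - 20) < 2 and abs(hi - 200) < 2
```

Thresholding each pixel against the midpoint of the two centres then produces the binary object mask that a classifier stage would consume.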

  • Deep Learning based Techniques to Enhance the Precision of Phrase-Based Statistical Machine Translation System for Indian Languages   Order a copy of this article
    by Sanjanasri JP, Anand Kumar M, Soman KP 
    Abstract: This paper focuses on improving the existing phrase-based statistical machine translation (PB-SMT) system by integrating deep learning knowledge into it. A deep learning based PB-SMT system for Indian languages is developed that improves the conditional probabilities of the phrase table and replaces the existing back-off n-gram language model with a neural probabilistic language model to improve language modelling performance. It is shown that the deep-feature-based PB-SMT is better than the standard PB-SMT system, and that integrating manually created dictionaries, trained as a separate translation model, can enhance the results of the statistical machine translation system during decoding. For automatic evaluation, it is shown that RIBES is a better evaluation metric for Indian languages than BLEU, the standard one.
    Keywords: Indian Languages; Phrase-based Statistical Machine Translation (PB-SMT); Neural Probabilistic Language Model (NPLM); Deep Belief Network (DBN); Pruning; Minimum Error Rate Training (MERT); Bilingual Evaluation Understudy (BLEU); Rank-based Intuitive Bilingual Evaluation Score (RIBES).

  • Enhancing Performance Of WSN By Utilizing Secure Qos Based Explicit Routing   Order a copy of this article
    by Kantharaju HC 
    Abstract: Wireless sensor networks (WSNs) are infrastructure-less, self-configuring wireless networks that allow monitoring of the physical conditions of an environment. Many researchers focus on enhancing WSN performance in order to provide effective data delivery, but quality of service remains low in terms of data transmission time, energy consumption, delay and routing. We tackle this problem by introducing a new routing algorithm, the QoS-based Explicit Routing Algorithm, which helps transmit data from source node to destination node in a WSN. We also perform clustering in the WSN based on genetic algorithm (GA) and particle swarm optimization (PSO), followed by a cluster head selection process, which is central to routing. Secure communication is the most important need of a WSN; to this end we propose IBDS (identity-based digital signature) and EIBDS (enhanced identity-based digital signature), which reduce computation overhead and increase resilience in the WSN. We also use AES (Advanced Encryption Standard) to ensure security between nodes and prevent data from being captured by intruders. This process is carried out on the base station, sensor nodes and cell coordinator nodes. The proposed framework is thus effective, increasing the lifetime of nodes and improving secure communication between them.
    Keywords: Wireless Sensor Network; Cryptography; Digital Signature; Quality of Service.

  • Hybrid Data Model Of PACE and Quadruple: An Efficient Data Model for Cloud Computing   Order a copy of this article
    by CLARA KANMANI, Dr Suma V. , Guruprasad N 
    Abstract: Cloud computing is a promising computing paradigm that involves outsourcing computing resources, with expandable resource scalability and on-demand provisioning, at little or no up-front IT infrastructure investment cost. The semantic web is an extension of the web through standards from the World Wide Web Consortium (W3C). The Resource Description Framework (RDF) is a semantic data model for cloud computing that provides interoperability but is not effective in terms of scalability, formal semantics, query optimization and reification. One of the challenges in cloud computing is therefore to enhance the RDF data model, which is achievable by addressing the current weakness of the RDF reification mechanism. This paper hence puts forth a comprehensive overview of the challenges in RDF reification. Further, the paper introduces a data model which uses a hybrid approach of Provenance-Aware Context Entity (PACE) and quadruple reification. This hybrid RDF data model is deployed and tested for its performance on the AWS public cloud. Experimental results indicate that the proposed hybrid data model enhances accessibility and maintainability, and also accelerates query execution.
    Keywords: Cloud computing; Semantic web; Resource description framework; Data model; PACE; Quadruple.

  • A Semi-Automated System for Smart Harvesting Of Tea Leaves   Order a copy of this article
    by Manesh Murthi, Senthil Kumar Thangavel 
    Abstract: Tea cultivation is a major part of livelihood in hill stations like the Nilgiris. The conventional method of tea leaf plucking is manual, with a knife; harvesting machines have also been designed for quick plucking, though the best results come from workers with experience and knowledge of the terrain. This paper proposes a semi-automatic working model with an arm that can move around and pluck the leaves. A complete preprocessing phase using key frame extraction, rice counting, and optical flow with a noise model was presented by the authors in an earlier paper. This process is improved here using an active contour with optical flow algorithm that minimizes the region on which the tea leaf detection algorithm is applied. The second phase of the paper also suggests how a deep learning approach can improve the performance of the proposed work. The proposed work is novel in that it combines motion with keyframe capabilities and a noise model using deep learning. The work is evaluated with parameters such as precision, recall, FAR and FRR to assess the nature of misclassifications.
    Keywords: Video analytics; Noise model Keyframe; Raspberry Pi; Arduino due; optical flow; rice counting; segmentation.

    Abstract: The most popular renewable energy technology is the hybrid power system consisting of wind and solar energy sources, because the system is reliable and the sources are complementary in nature. Wind/PV hybrid systems are commonly used in distributed generation (DG). This paper proposes a new solution for improved voltage stability with quality power output. In this system, the output voltages from the wind energy conversion system (WECS) and the photovoltaic panel are fed to separate, independently controlled DC-DC converters connected to a common DC bus, from which the power is inverted. In the proposed controller, voltage stability is obtained by applying a honey bee (HB) optimization algorithm along with a PI controller. The proposed method is implemented on the Simulink platform. The performance of the suggested coordinated control system is analysed by comparing computer simulation results with and without the controllers, showing that the proposed system is more efficient.
    Keywords: Hybrid Power System ; Distributed Generation(DG); Honey Bee algorithm; PI; Wind and solar energy.

Special Issue on: Image Processing in Computer Vision - Techniques and Advancements

  • Feature Extraction and Classification of COPD Chest X-ray Images   Order a copy of this article
    by P. Bhuvaneswari Samuel, A. Brintha Therese 
    Abstract: COPD (chronic obstructive pulmonary disease) is a group of lung diseases including emphysema, chronic bronchitis, asthma and some kinds of bronchiectasis. This group of diseases is expected to be one of the major causes of morbidity and the third leading cause of mortality by 2020. Many people with COPD also develop lung cancer, likely due to a history of cigarette smoking, and India contributes the highest COPD mortality in the world. If the disease is identified at an early stage, the survival rate increases. In this paper, a novel method is proposed to classify COPD in chest x-ray images. Prior to classification, essential features are extracted: structural features including the number of ribs in the chest x-ray, heart shape, diaphragm shape, and the distance between ribs are obtained by means of various image processing techniques. Based on these features, the input image is classified as normal or COPD with various classifiers, including MLC, LDA, neural networks and a genetic algorithm. 600 x-ray images (PA view) were tested with the proposed method and classified based on the above features. The maximum classification accuracy achieved is 97.9%. Comparison of the different classifiers shows the genetic algorithm based classification method to be the most accurate. This work not only classifies COPD images but also enables clinicians to identify the heart condition cardiomegaly.
    Keywords: COPD; Adaptive histogram equalization; Hough transform; Zernike moments; classification; MLC; LDA; Neural Network; Genetic Algorithm.
    DOI: 10.1504/IJCAET.2020.10010445
    by Koppola Mohan 
    Abstract: Object face liveness detection for genuine face recognition and user authentication is a difficult task, and it is becoming an increasingly interesting problem in real-time vision and security applications. Over many years, various authors have proposed and developed new techniques and methods, but systems still need to improve at distinguishing genuine object faces from spoofing objects with increasing accuracy. Many existing methods fail to identify genuine objects across various object shapes and individual differences between objects, and an ordinary classifier cannot generalise well to various kinds of objects in different orientations, especially for blurred images. To overcome this problem, we propose an object-specific face authentication system for liveness detection using combined feature descriptors with a fuzzy-based SVM classifier. It allows a specific area to be selected from the whole object, and extracting features only from that area reduces processing time and the complexity of feature extraction. The system then recognises the respective faces and finally checks for live objects with the help of the fuzzy logic based SVM classifier. The proposed system makes it practical to train each individual object against its face with liveness detection, and achieves improved performance and accuracy.
    Keywords: Object-Specific Face; Genuine Object; Spoofing objects; Liveness Detection; Authentication; Anti-Spoofing; Feature Extractors; Region of Interest; HOG-LPQ Descriptors and Fuz-SVM Classifier.

  • Event Recognition and Classification in Sports Video Using HMM   Order a copy of this article
    by VIJAYAN ELLAPPAN, Rajkumar Rajasekaran 
    Abstract: Sports event recognition and classification is a challenging task due to the number of possible categories. On the one hand, how to define legitimate event category labels and how to acquire training samples for these classes must be investigated; on the other hand, it is non-trivial to achieve acceptable classification performance. To address these issues, we propose using the spatio-temporal behaviour of an object in the footage as an embodiment of a semantic event. This is accomplished by modelling the evolution of the position of the object with a Hidden Markov Model (HMM). Snooker is used as an example for this research. The system first parses the video sequence based on the geometry of the content in the camera view and classifies the footage by view type. Second, we consider the relative position of the white ball on the snooker table over the duration of a clip to embody semantic events. The temporal behaviour of the white ball is modelled using HMMs, where each model represents a particular semantic event.
    Keywords: HMM; Event Recognition.
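Classifying a clip with per-event HMMs reduces to scoring the observation sequence under each model with the forward algorithm and picking the most likely model. A minimal sketch with a hypothetical two-state snooker model (the states, symbols and probabilities below are invented for illustration, not taken from the paper):

```python
def forward(obs, start, trans, emit):
    """Forward algorithm: P(observation sequence) under an HMM, summing
    over all hidden state paths."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical model: the white ball is either near the cushion or mid-table,
# emitting coarse position symbols "edge"/"centre".
start = {"cushion": 0.5, "mid": 0.5}
trans = {"cushion": {"cushion": 0.7, "mid": 0.3},
         "mid": {"cushion": 0.4, "mid": 0.6}}
emit = {"cushion": {"edge": 0.9, "centre": 0.1},
        "mid": {"edge": 0.2, "centre": 0.8}}

p = forward(["edge", "edge", "centre"], start, trans, emit)
assert 0.0 < p < 1.0
```

With one such model trained per semantic event (break-off, safety shot, and so on), the clip is assigned to the event whose model gives the highest forward probability.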

  • Automated extraction of dominant endmembers from hyperspectral image using SUnSAL and HySime   Order a copy of this article
    by Nareshkumar Patel, Himanshukumar Soni 
    Abstract: Linear spectral unmixing (LSU) is a widely used technique in the field of remote sensing (RS) for the accurate estimation of the number of endmembers, their spectral signatures and their fractional abundances. Large data size, poor spatial resolution, the unavailability of pure endmember signatures in the data set, mixing of materials at various scales and variability in spectral signatures make linear spectral unmixing a challenging, ill-posed inverse task. There are three basic approaches to the linear spectral unmixing problem: geometrical, statistical and sparse regression. The first two are kinds of blind source separation (BSS). The third approach assumes the availability of standard, publicly available spectral libraries containing the spectral signatures of many materials measured on the earth's surface with advanced spectroradiometers. The problem of linear spectral unmixing, in a semi-supervised manner, is then simplified to finding the optimal subset of spectral signatures from a spectral library known in advance. In this paper, the concept of soft thresholding is incorporated along with sparse regression for the automatic extraction of endmember signatures and their fractional abundances. Our simulation results, for both a standard publicly available synthetic fractal data set and a real hyperspectral data set, the Cuprite image, show procedural improvement in spectral unmixing.
    Keywords: Spectral Unmixing; Sparse Unmixing; Hyperspectral Unmixing; Alternating Direction Method of Multipliers; ADMM; HySime.
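    The soft-thresholding idea the abstract incorporates is the core of iterative-shrinkage (ISTA-style) sparse regression: alternate a gradient step on the data-fit term with a shrinkage step that zeroes small abundances, so only a few library signatures survive. A toy sketch with a hypothetical 3-band, 3-signature library (not the SUnSAL/HySime pipeline itself):

    ```python
    # ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 over a spectral library A.
    def soft(v, t):
        """Soft-thresholding operator: shrink v toward zero by t."""
        return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

    def ista(A, y, lam=0.01, step=0.1, iters=500):
        m, n = len(A), len(A[0])
        x = [0.0] * n
        for _ in range(iters):
            r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
            g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]   # A^T r
            x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
        return x

    # Toy library of 3 signatures; the pixel mixes signatures 0 and 2 only.
    A = [[1.0, 0.0, 0.2],
         [0.0, 1.0, 0.4],
         [0.2, 0.1, 1.0]]
    y = [0.6 * A[i][0] + 0.4 * A[i][2] for i in range(3)]
    x = ista(A, y)   # abundances: large for 0 and 2, (near) zero for 1
    ```

    The nonzero entries of `x` identify the dominant endmembers; the shrinkage threshold plays the role the abstract assigns to soft thresholding, pruning library signatures that do not contribute to the pixel.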

Special Issue on: Recent Trends and Developments of Computer Vision and Image Processing

  • An Approach for Infrared Image Pedestrian Classification based on Local Directional Pixel Structure Elements' Descriptor   Order a copy of this article
    by S. Rajkumar 
    Abstract: Pedestrian classification is a major problem in infrared (IR) images due to lack of shape, low signal-to-noise ratio and complex backgrounds, and it finds applications in agriculture, forestry, night-vision monitoring, intelligence and defence systems. In this paper, a local directional pixel structure elements' descriptor (LDPSED) based pedestrian classification approach is proposed to overcome these problems. In addition, an interest-point detection approach is proposed to segment the objects (pedestrian and non-pedestrian) from an IR image. The proposed method consists of three steps: segmentation, feature extraction and classification. First, objects are segmented from the input image. Second, feature extraction is carried out on the segmented objects. Finally, a support vector machine (SVM) classifies the objects in the IR image as pedestrian or non-pedestrian. To prove the effectiveness of the proposed approach, we have conducted experiments on the standard OTCBVS-BENCH thermal collection, specifically the OSU thermal pedestrian database. The classification results of the proposed approach are also compared with existing approaches; its efficiency is demonstrated by high classification accuracy.
    Keywords: Infrared image; Local directional pattern; Structure element descriptor; Support Vector Machine; Pedestrian classification.
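    Descriptors in the "local directional" family are typically built from compass (Kirsch-style) edge responses around each pixel. The sketch below encodes one 3x3 neighbourhood by the indices of its k strongest directional responses; the mask values and k = 3 are generic assumptions from the local-directional-pattern literature, not the paper's exact LDPSED definition:

    ```python
    # Clockwise ring of the 8 neighbours around the centre of a 3x3 patch.
    RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

    def kirsch_masks():
        """Eight compass masks as rotations of one clockwise ring of weights."""
        base = [5, 5, 5, -3, -3, -3, -3, -3]
        return [base[-r:] + base[:-r] for r in range(8)]

    def ldp_code(patch, k=3):
        """8-bit code with bits set for the k strongest directional responses."""
        responses = [abs(sum(m * patch[i][j] for m, (i, j) in zip(mask, RING)))
                     for mask in kirsch_masks()]
        top = sorted(range(8), key=lambda d: responses[d], reverse=True)[:k]
        return sum(1 << d for d in top)

    edge_h = ldp_code([[10, 10, 10], [0, 0, 0], [0, 0, 0]])  # horizontal edge
    edge_v = ldp_code([[10, 0, 0], [10, 0, 0], [10, 0, 0]])  # vertical edge
    ```

    A histogram of such codes over a segmented object region would then form the feature vector fed to the SVM.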

Special Issue on: ICATS'15 Emerging Advances in Control Systems and Automation

  • On Simple Adaptive Control of Plants not Satisfying Almost Strict Passivity and Positivity Conditions: An Introduction to Parallel Feedforward Configuration   Order a copy of this article
    by Khalil Mokhtari, Mourad Abdelaziz 
    Abstract: Simple adaptive control systems are known to be robust against a class of disturbances and globally stable if the controlled plant is almost strictly positive real (ASPR), that is, if there exists a positive definite static output feedback (unknown and not needed for implementation) such that the resulting closed-loop transfer function is strictly positive real (SPR). The present paper discusses the simple adaptive control scheme for plants that are not almost strictly positive real and gives a brief review of the parallel feedforward configuration, which makes the augmented plant satisfy the almost-passivity or positivity conditions based on the stabilizability property of the system. The validity of the simplified adaptive algorithm under the positivity condition is examined through numerical simulation for both single-input single-output (SISO) and multi-input multi-output (MIMO) systems.
    Keywords: Simple Adaptive Control; Almost Strictly Positive Real; Parallel Feedforward Configuration.
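    The role of the parallel feedforward can be illustrated numerically. A relative-degree-two plant such as G(s) = 1/(s² + s + 1) (our example, not taken from the paper) violates the frequency-domain positivity condition Re G(jω) > 0, but adding a constant parallel feedforward D = 0.5 restores it for the augmented plant G(s) + D:

    ```python
    # Check Re[G(jw) + D] over a frequency grid for G(s) = 1/(s^2 + s + 1).
    def re_augmented(d, w):
        """Real part of the augmented frequency response at frequency w."""
        s = complex(0.0, w)
        return (1.0 / (s * s + s + 1.0) + d).real

    freqs = [0.01 * k for k in range(1, 2000)]        # w in (0, 20)
    min_plain = min(re_augmented(0.0, w) for w in freqs)  # dips to about -1/3
    min_aug = min(re_augmented(0.5, w) for w in freqs)    # stays positive
    ```

    Positivity of the real part over all frequencies is a necessary condition for SPR; this is the sense in which a simple parallel branch lets the adaptive law be applied to a plant that is not ASPR on its own.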

Special Issue on: ICMCE-2015 Advances in Applied Mathematics

    by Subramani Rajamanickam, Vijayalakshmi C 
    Abstract: This paper deals with the development of an energy management model using a SCADA (Supervisory Control and Data Acquisition) system. A predictive controller is implemented on top of the centralised SCADA platform. The focus is on distribution networks, where equipment in the substations is monitored, controlled and maintained to reduce operating costs. This research proposes a new energy management model that enables flexible and efficient operation of various power plants. The Distribution Control Centre (DCC) is monitored and controlled by SCADA systems, and the DCC has become an important energy-efficiency policy concept. Numerical calculations and graphical representations show that, in both configurations, the renewable energy sources are independent of the availability of the enduring or intermittent main energy resource, which can lead to effective production.
    Keywords: Distribution Automation; DCC; SCADA; Transmission Capacity; Demand-Side Management; Lagrangian Relaxation (LR).

    by Karunamurthy Krishnasamy, Chandrasekar M, Manimaran R 
    Abstract: In this paper, an artificial neural network (ANN) model is used to predict the performance parameters of a laboratory-scale salinity gradient solar pond (SGSP) used for supplying hot water. Experiments were conducted on three solar ponds, with and without twisted tapes in the flow passage of the in-pond heat exchanger, during May 2015 under Chennai (India) weather conditions. The performance parameters of the solar pond, namely outlet water temperature, solar pond efficiency and in-pond heat exchanger effectiveness, were determined experimentally at two flow rates corresponding to Reynolds numbers of 1746 and 8729. The experimental data were used for training, validating and testing the proposed artificial neural network model. Parameters such as incident solar radiation, inlet water temperature, lower convective zone (LCZ) temperature and flow rate determine the outlet water temperature of the solar pond. A computational program taking the experimental readings as inputs was developed in Python and trained as an artificial neural network with the back-propagation algorithm to predict the outlet water temperature of the in-pond heat exchanger. The results predicted by the developed model are in good agreement with the experimental results.
    Keywords: Solar Pond; Performance Parameters; Artificial Neural Network; Twisted Tapes.
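    As a minimal sketch of the kind of supervised gradient-descent training the abstract describes, the snippet below fits a single linear neuron (the base case of back-propagation) to toy normalised data. The synthetic [radiation, inlet temperature] inputs and the target coefficients are our stand-ins, not the paper's measurements:

    ```python
    import random

    random.seed(0)
    # Toy normalised samples: [radiation, inlet_temp] -> outlet_temp,
    # generated from a known linear rule so convergence can be checked.
    inputs = [[random.random(), random.random()] for _ in range(200)]
    data = [(x, 0.6 * x[0] + 0.3 * x[1] + 0.05) for x in inputs]

    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(1000):                                   # epochs
        for x, y in data:
            err = w[0] * x[0] + w[1] * x[1] + b - y         # prediction error
            w = [w[0] - lr * err * x[0], w[1] - lr * err * x[1]]
            b -= lr * err                                   # delta-rule update
    ```

    The paper's network would replace the single neuron with a hidden layer and propagate the same error signal backwards, but the per-sample gradient step shown here is the mechanism the back-propagation algorithm repeats layer by layer.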

  • Logistic Regression Model as Classifier for Early Detection of Gestational Diabetes Mellitus   Order a copy of this article
    Abstract: Gestational diabetes mellitus (GDM) is any degree of glucose intolerance during pregnancy. In view of maternal morbidity and mortality as well as fetal complications, early diagnosis is an utmost necessity in the present scenario. In a developing country like India, early detection and prevention will be more cost-effective. The oral glucose tolerance test (OGTT), usually performed between the 24th and 28th weeks of pregnancy, is the crucial method for diagnosing GDM. The proposed work focuses on the early detection of GDM, without a hospital visit, for women who are pregnant for the second time onwards (multigravida patients). In recent years, prediction models using multivariate logistic regression analysis have been developed in many areas of healthcare research. With an accuracy of 82.45%, the classifier has proved to be an efficient model for diagnosing GDM without the conventional blood test, by providing newly designed parameters as inputs to the model.
    Keywords: Gestational Diabetes Mellitus; Diagnosis; Logistic Regression; Risk Factors.
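    A logistic-regression classifier of the kind the abstract describes reduces to fitting a sigmoid over weighted risk factors by gradient descent on the log-loss. The single "risk score" feature and its cut-off below are illustrative assumptions, not the paper's clinical parameters:

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Toy training set: normalised risk score -> GDM label (1 = positive).
    data = [(x / 10.0, 1 if x >= 5 else 0) for x in range(10)]

    w, b, lr = 0.0, 0.0, 0.5
    for _ in range(2000):
        for x, y in data:
            p = sigmoid(w * x + b)        # predicted probability of GDM
            w -= lr * (p - y) * x         # gradient of the log-loss
            b -= lr * (p - y)

    def predict(x):
        """Classify as GDM-positive when the model's probability is >= 0.5."""
        return 1 if sigmoid(w * x + b) >= 0.5 else 0
    ```

    With several risk factors, `x` becomes a vector and `w` a weight per factor; the fitted weights then quantify each factor's contribution to the odds of GDM, which is why this model family is favoured in clinical prediction work.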

  • Information Hiding using LSB Replacement Technique and Adaptive Image Fusion   Order a copy of this article
    by Lakshmi Priya S, Namitha K, Neela Niranjani V, Manoj Kumar Natha 
    Abstract: Steganography is a branch of information hiding that allows people to communicate securely. As more information is transferred electronically, the need for its confidentiality increases. Our paper combines two techniques: (1) the LSB replacement technique for hiding text messages in an image and (2) an iterative image fusion algorithm. This algorithm uses a calculated fusion parameter to select and fuse an optimal carrier image with the input image containing the hidden text. Both images belong to the same class, and this classification is based on the values of three coefficients (brightness, texture and variation) obtained from the Haar wavelet transform. This additional fusion step increases the PSNR value of the final image, which in turn enhances security during transmission. The results show how the PSNR value increases for the final image, and a comparative analysis is carried out in three ways: (1) PSNR values before and after fusion, (2) effects of using an optimal carrier versus a random carrier image, and (3) effects of varying the length of the input text message on the PSNR value. From the three cases, we are able to show that the PSNR value of the final image is increased after fusion with an optimal carrier image.
    Keywords: Steganography; Adaptive Image Fusion; LSB Replacement.
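    The LSB-replacement half of the scheme can be sketched in a few lines; here a flat list of 8-bit grayscale values stands in for the image, and the fusion and classification stages are not reproduced:

    ```python
    # Hide a text message in the least significant bits of pixel values.
    def embed(pixels, message):
        """Write the message's bits, MSB-first, into the pixels' LSBs."""
        bits = [(byte >> i) & 1
                for byte in message.encode() for i in range(7, -1, -1)]
        out = list(pixels)
        for k, bit in enumerate(bits):
            out[k] = (out[k] & ~1) | bit     # replace only the lowest bit
        return out

    def extract(pixels, n_chars):
        """Read n_chars bytes back out of the pixels' LSBs."""
        bits = [p & 1 for p in pixels[:8 * n_chars]]
        data = bytes(sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
                     for k in range(0, len(bits), 8))
        return data.decode()

    cover = list(range(64))
    stego = embed(cover, "hi")
    ```

    Each used pixel changes by at most one grey level, which is why plain LSB replacement is visually imperceptible; the paper's fusion step then further raises the PSNR of the transmitted image.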

Special Issue on: New Trends in European Simulation and Modelling (Eurosis 2013/14)

  • Detection and localization of water leaks in water nets supported by an ICT system   Order a copy of this article
    by Jan Studzinski, Izabela Rojek 
    Abstract: This paper presents a comprehensive approach to detecting and localising water leaks in water networks. To realise the approach, a monitoring system is designed and implemented on the water network, a hydraulic model of the network is calibrated, and neural networks are then used to develop a water-leak classifier. The programs realising these tasks are included in an ICT system developed at the Systems Research Institute of the Polish Academy of Sciences. The neural networks used are of the MLP and Kohonen types.
    Keywords: Municipal water networks; hydraulic models; neural networks; SCADA systems; water leak detection and localization.