International Journal of Advanced Intelligence Paradigms (356 papers in press)
A New Fractal Watermarking Method for Images of Text
by Kourosh Kiani, Arash Mousavi, Shahaboddin Shamshirband
Abstract: A new method using orthogonal fractal coding is developed for fractal watermarking of high-contrast, low-density text images. The image is divided into sub-images one pixel in height, and each sub-image is coded separately using the orthogonal fractal coding technique. A binary watermark is re-ordered using a chaotic sequence and inserted into the range-block means of the fractal codes. The fractal code is then decoded to obtain the watermarked image. The watermark sequence is retrieved by comparing the original image with the watermarked code, and the extracted watermark is re-ordered using the key of the chaotic sequence. The method is robust against JPEG and noise attacks and has very low watermark visibility.
Keywords: fractal watermarking; high contrast image; text watermarking; steganography.
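The chaotic re-ordering step mentioned in the abstract can be sketched as follows (a minimal Python illustration, not the authors' exact scheme: the logistic-map parameters `x0` and `r` stand in for the secret chaotic key):

```python
def chaotic_permutation(n, x0=0.7, r=3.99):
    """Generate a permutation of range(n) by rank-ordering logistic-map values."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)   # logistic map iteration
        xs.append(x)
    # the ranks of the chaotic values define the permutation
    return sorted(range(n), key=lambda i: xs[i])

def scramble(bits, x0=0.7, r=3.99):
    """Re-order watermark bits with the chaotic permutation."""
    perm = chaotic_permutation(len(bits), x0, r)
    return [bits[p] for p in perm]

def unscramble(scrambled, x0=0.7, r=3.99):
    """Invert the re-ordering using the same (x0, r) key."""
    perm = chaotic_permutation(len(scrambled), x0, r)
    out = [0] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out
```

The same `(x0, r)` key regenerates the permutation, so `unscramble(scramble(bits))` recovers the original watermark.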
Methodology of Wavelet Analysis in Research of Dynamics of Phishing Attacks
by Mehdi Dadkhah, Vyacheslav V. Lyashenko, Zhanna V. Deineko, Shahaboddin Shamshirband, Mohammad Davarpanah Jazi
Abstract: The transfer and reception of data over the Internet can be accompanied by harmful components in the transmitted content, and phishing attacks are one such component. It is therefore important to know the relationship between Phishes Verified as Valid and Suspected Phishes Submitted, which is necessary for forecasting. To address this problem, we apply wavelet analysis to the time series representing Phishes Verified as Valid and Suspected Phishes Submitted. We examine changes in the Hurst indicator and analyse the wavelet energy spectrum, which makes it possible to identify the main characteristics of the time series under consideration. The research shows a substantial long-term dependence in the investigated data and reveals a trend component in the structure of the series. This makes it possible to study the recurrence of phishing attacks and to concentrate resources during the periods when such harmful activity intensifies. The analysis is performed on real data, which underlines the importance of the conclusions obtained.
Keywords: Internet; phishing; trend; wavelet analysis; wavelet energy; wavelet expansion; Hurst indicator; Daubechies Wavelet.
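The Hurst indicator discussed in the abstract can be estimated, for instance, by classical rescaled-range (R/S) analysis; the sketch below is a minimal illustration of that indicator, not the authors' wavelet-based procedure:

```python
import math

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_sum, count = 0.0, 0
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev = [x - mean for x in chunk]
            # range of cumulative deviations from the chunk mean
            cum, c = [], 0.0
            for d in dev:
                c += d
                cum.append(c)
            r = max(cum) - min(cum)
            s = math.sqrt(sum(d * d for d in dev) / size)
            if s > 0:
                rs_sum += r / s
                count += 1
        if count:
            sizes.append(size)
            rs_vals.append(rs_sum / count)
        size *= 2
    # slope of log(R/S) against log(size) is the Hurst estimate
    lx = [math.log(s) for s in sizes]
    ly = [math.log(v) for v in rs_vals]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

White noise yields an estimate near 0.5, while a strongly persistent series (such as a random walk) yields an estimate near 1, which is how long-term dependence is read off the indicator.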
Energy and Velocity based Multipath Routing Protocol for VANET
by Bhagyavathi Miriam, Saritha Vankadara
Abstract: A VANET is a type of network that can be built randomly, quickly and temporarily without any standard infrastructure. In VANETs, routing of data is an interesting and challenging task because of the high mobility of nodes. The routing algorithm is therefore an imperative issue, particularly in vehicle-to-vehicle communication. This paper proposes a multipath routing algorithm for VANETs named the Energy and Velocity based Multipath Routing Protocol (EVMRP), based on available bandwidth, residual energy and relative velocity. The most important feature of the proposed algorithm is setting CWmax according to the available bandwidth of the path. The proposed algorithm is evaluated on QoS parameters such as end-to-end delay, throughput and packet loss. The results clearly indicate that EVMRP outperforms legacy protocols such as AOMDV.
Keywords: Routing; VANET; available bandwidth; Multipath.
Mutation Based Genetic Algorithm for Efficiency Optimization of Unit Testing
by Rijwan Khan, Mohd. Amjad
Abstract: Faults in a software program can be detected by mutation testing. However, mutation testing is an expensive process in the software testing domain. In this paper, we introduce a method based on a genetic algorithm and mutation analysis for the unit testing process. The software industry produces high-quality software, in which software testing plays an important role. First, we take a program, insert some mutants into it, find the most critical paths, and optimise test cases using a genetic algorithm for unit testing. The initially generated test cases are refined using the genetic algorithm. We use a mutant function to measure the adequacy of the test-case set; this function is used to calculate a mutation score. We achieved 100% path coverage and boundary coverage using mutation testing. The objective is to produce a set of good test cases that kill one or more undesired mutants that differ from the original program. Unlike simple algorithms, genetic algorithms reduce data generation at a comparable cost. An optimised set of test cases is generated by the proposed approach for cost reduction and for revealing or killing undesired mutants.
Keywords: Genetic Algorithms (GA); Software Testing (ST); Automatic Test Case Coverage (ATCC); Boundary Value Analysis (BVA); Mutation Testing (MT).
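The mutation score the abstract relies on is the fraction of injected mutants that at least one test case kills. A minimal sketch (with a hypothetical unit under test, not the paper's programs):

```python
def mutation_score(original, mutants, test_inputs):
    """Fraction of mutants killed: a mutant is killed when some test
    input makes its output differ from the original program's output."""
    killed = 0
    for mutant in mutants:
        if any(mutant(t) != original(t) for t in test_inputs):
            killed += 1
    return killed / len(mutants)

# Hypothetical unit under test and two hand-made mutants.
def is_positive(x):
    return x > 0

mutants = [
    lambda x: x >= 0,   # boundary mutant: > replaced by >=
    lambda x: x < 0,    # relational mutant: > replaced by <
]
```

Note how the boundary input 0 is needed to kill the first mutant, which is the connection to boundary value analysis: a GA that rewards higher mutation scores is pushed toward exactly such boundary test cases.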
AMST-MAC: Adaptive Sleeping Multi-Frames Selective Data Transmission Control for Wireless Sensor Networks
by Bindiya Jain, Gursewak Brar, Jyoteesh Malhotra
Abstract: Energy efficiency is the major issue in the design of wireless sensor networks. Given its importance, designing an efficient MAC protocol is of paramount importance. The proposed MAC protocol, AMST-MAC (Adaptive Sleeping Multi-frames Selective data Transmission MAC), is an energy-saving mechanism whose objective is to remove redundancy and reduce the number of packets sent for the same amount of information by using selective data transmission (SDT). It allows a node to sleep when idle, even during the data cycle, using the concept of a dynamic duty cycle (DDC). The aim of this simulation study was to evaluate the proposed protocol in terms of energy efficiency, end-to-end delay and packet delivery ratio compared to the S-MAC protocol, without degrading service quality. The results obtained clearly show that AMST-MAC is more energy efficient than S-MAC and maintains the lowest sender and receiver duty cycles. AMST-MAC decreases delay by a factor of 10%, so the overall mean delay shows a reasonable decrease. AMST-MAC consumes less energy in every round, making it a better protocol than S-MAC both without and with SDT.
Keywords: Sensor networks; Medium Access Control; Energy Efficient; AMST-MAC Protocol; Selective data Transmission; Dynamic duty cycle.
An Intelligent Clustering Approach for Improving Search Result of a Website
by Shashi Mehrotra, Shruti Kohli, Aditi Sharan
Abstract: These days the Internet has become part of our lives, and web data usage has increased tremendously. We propose a model that improves search results using a clustering approach. Clustering is used to group data into relevant folders so that information can be accessed quickly. The K-means clustering algorithm is very efficient in terms of speed and is suitable for large data sets. However, K-means has some drawbacks: the number of clusters must be defined at the start, the initialization affects the output, and it often gets stuck in local optima. We propose a hybrid model that determines the number of clusters itself and gives a globally optimal result. The number obtained is passed as a parameter to K-means. Thus, our novel hybrid model integrates the features of K-means and the genetic algorithm, combining their best characteristics while overcoming their individual drawbacks.
Keywords: Clustering; K-Means algorithm; Genetic algorithm; Hybrid algorithm.
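A minimal sketch of the hybrid idea (an assumed, simplified form, not the paper's model: a tiny GA searches over the number of clusters k, scoring each candidate by K-means SSE plus a penalty per cluster, and the winning k is passed to K-means):

```python
import random

def _d2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans_sse(points, k, iters=20):
    """Lloyd's K-means with deterministic farthest-point init; returns SSE."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(_d2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: _d2(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return sum(min(_d2(p, c) for c in centers) for p in points)

def ga_select_k(points, k_max=8, pop_size=6, gens=10, penalty=5.0, seed=1):
    """Toy GA over candidate cluster counts; fitness = -(SSE + penalty*k)."""
    rng = random.Random(seed)
    cache = {}
    def fitness(k):
        if k not in cache:
            cache[k] = -(kmeans_sse(points, k) + penalty * k)
        return cache[k]
    pop = [rng.randint(2, k_max) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            k = rng.choice(elite)
            # integer mutation: nudge k up or down within bounds
            children.append(min(k_max, max(2, k + rng.choice([-1, 0, 1]))))
        pop = elite + children
    return max(pop, key=fitness)
```

The penalty term plays the role of the model's "determine the number of clusters itself": without it, SSE alone would always favour the largest k.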
A Clustered Neighborhood Consensus Algorithm for a Generic Agent Interaction Protocol
by Aarti Singh, Dimple Juneja, Rashmi Singh, Saurabh Mukherhjee
Abstract: The premise of the paper is twofold. It not only improves the existing generic agent interaction protocol (GIPMAS) but also uniquely addresses the issue of generating consensus amongst agents participating in the protocol. In a multi-agent system, agents cooperate and coordinate to reach a decision while sending information. In a clustered multi-agent system, all member agents of a given cluster send data to the cluster head, which forwards the processed information to the next level for further processing. It is quite apparent that agents in close proximity (belonging to the same or different clusters) would transmit redundant information. Hence, it is desirable that before sending raw data, member agents mutually agree on a common decision (based on some common metrics) and send only the relevant, agreed-upon information to the next higher level. The paper contributes a consensus algorithm that marries a neighborhood algorithm with a discrete-time consensus protocol. The proposed neighborhood algorithm gives more weight to communication links joining two clusters than to links joining two agents within a cluster, which increases the rate of convergence of information. Thus, in a clustered network of agents, the cluster head and executive cluster head are responsible for deriving consensus on the received information. Simulation reflects that the proposed mechanism improves the convergence time of information. However, a slight increase in task execution time is also observed due to the trade-off between output quality and mechanism complexity.
Keywords: Multiagent Systems; Agent Interaction Protocol; Clustered Network; Neighborhood Algorithm.
Evolutionary Optimization Based Fractional Order Controller for Web Transport Systems in Process
by Haripriya N., Kavitha Paneerselvam, Seshadhri Srinivasan, Juri Belikov
Abstract: This investigation presents an optimization-based design of a fractional order proportional integral (FO-PI) controller for web transport systems used in the paper industry. The objective of the optimization algorithm is to reduce the integral absolute error of the closed-loop web transport system while considering the underlying physical and operating constraints. The resulting optimization problem is non-linear, and to compute the controller parameters, two evolutionary algorithms, particle swarm optimization (PSO) and bacterial foraging optimization (BFO), are used. The performance improvements achieved using the FOC are compared with a traditional proportional integral derivative controller. Our results show that the BFO-tuned FOC gives better performance.
Keywords: Web Transport Systems (WTS); Web Transport Controllers (WTC); Fractional Order Controllers (FOC); Particle Swarm Optimization (PSO); Bacterial Foraging Optimization (BFO); Offline optimization.
Protagonist and deuteragonist based video indexing and retrieval system for movie and video song sequences
by Tushar Ratanpara, Narendra Patel
Abstract: The protagonist and deuteragonist are the two main characters who play the leading roles in an Indian Hindi movie (IHM). Currently such information is attached using textual captions, which are highly unreliable. The research presented in this paper automatically indexes and retrieves content based on the protagonist and deuteragonist from large IHMs and video song sequences (VSS). In module 1, video song sequence indexes are extracted from the IHM using an audio-based approach. These indexes are used as input to module 2, where faces are identified in every VSS. Colour histogram and spatiogram descriptors are extracted from the faces, and the similarity between two faces is computed using the Bhattacharyya coefficient. A similarity-based clustering technique then produces clusters of faces, and the protagonist and deuteragonist are recognized using SURF feature points. The experiments are carried out on Indian Hindi movies of different genres.
Keywords: Content based video indexing and retrieval; song sequences; clustering; similarity; color histogram; Spatiogram.
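The face-similarity step rests on the Bhattacharyya coefficient between descriptors; for histograms it reduces to a sum of square roots of matched bin products (a minimal sketch, with toy histograms rather than the paper's colour/spatiogram features):

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two histograms of equal length.

    Histograms are normalised internally; 1.0 means identical
    distributions, 0.0 means no overlapping mass at all."""
    s1, s2 = sum(h1), sum(h2)
    return sum(math.sqrt((a / s1) * (b / s2)) for a, b in zip(h1, h2))
```

In a clustering loop, two faces would be merged when their coefficient exceeds a chosen threshold.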
An Improved Key Management Scheme in Cloud Storage
by VijayaKumar V, Abdul Quadir, Kiran Mary Matthew
Abstract: Nowadays, cloud services are used by numerous people all around the globe. One of their major applications is cloud storage. Users can store data in the cloud without needing their own hardware resources; they pay only for the amount of resources they use. For storage applications, the user typically provides the cloud with the data to be stored. The cloud encrypts this data and returns a key to the user, so the user needs to store only this key for decryption. Storage of this key is a matter of concern: if the key is lost, the probability of data loss is very high. To avoid this, a large number of key management techniques have been proposed. In this paper, a key management scheme is proposed that regenerates the key, in case of its loss, using the attributes of the user.
Keywords: cloud computing; data privacy; key management.
An Effective System for Video Transmission and Error Recovery Mechanisms in Multimedia Networks
by U. Rahamathunnisa, R. Saravanan
Abstract: In this paper an effective system is proposed for video transmission. The system solves problems arising from errors in the transmitted video and delivers video with the required quality of service. The reconstructed video maintains the quality of service at the decoder side. A video-dynamics-based error concealment algorithm is applied to recover errors that occur during transmission. The performance of the proposed system is measured through simulations using the JM reference software.
Keywords: Error concealment; Video dynamics; Video transmission; Quality of service; Reconstructed Video.
Clustering Mixed Data Using Neighborhood Rough Sets
by Sharmila Banu Kather, B.K. Tripathy
Abstract: Data of varied nature and in huge quantities are being generated every day, ranging from tabulated, structured and semi-structured data with numerical or categorical attributes. Data preprocessing presents data in a format suitable for applying analytics algorithms and deriving knowledge from them. Data analytics has revolutionized our era by unwinding the knowledge and patterns mined from data. Clustering is an unsupervised learning technique with popular algorithms based on distance, density, dimensions and other functions. These algorithms operate on numerical attributes, and special algorithms for data involving categorical features have also been reported. In this paper we propose a straightforward way of clustering data involving both numerical and categorical features based on neighborhood rough sets. It does not require the calculation of extra parameters such as entropy, saliency or dependency, nor does it call for discretization of the data. Hence its complexity is lower than that of algorithms proposed for categorical or mixed data, and it offers better efficiency.
Keywords: clustering; mixed; categorical and numerical data; continuous data; rough sets; neighborhood rough sets; granulation.
A Unified Approach for Skin Colour Segmentation Using Generic Bivariate Pearson Mixture Model
by B.N. Jagadesh, K. Srinivasa Rao, Ch Satyanarayana
Abstract: Skin colour segmentation is a rapidly growing area of research in computer science for the identification and authentication of persons. In this paper, a novel generic bivariate Pearsonian mixture model for skin colour segmentation is proposed. It is observed that the hue and saturation of a colour image better characterize the features of individual human races. In general, the human subjects are characterized in three categories, namely Asian, African and European. The African skin colour feature can be modeled by a bivariate Pearson type-IIb distribution, the Asian skin colour feature by a bivariate Pearson type-IIaα distribution, and the European skin colour feature by a bivariate Pearson type-IVa distribution. The combination of all three races of people in an image can be characterized by a three-component mixture model. The model parameters are estimated by deriving the updated equations of the EM algorithm for the generic bivariate Pearson mixture model. The model parameters are initialized through the method of moments and the K-means algorithm. The segmentation algorithm is developed using component maximum likelihood under a Bayesian framework. The performance of the proposed algorithm is evaluated by experimentation with a random sample of five images, collected from our own database and various magazine websites, containing a combination of the three races (Asian, African and European), and by computing segmentation performance metrics such as PRI, GCE and VOI. The efficiency of the proposed model relative to the bivariate GMM is assessed through confusion matrices and ROC curves. It is observed that the proposed algorithm outperforms the existing algorithms.
Keywords: Skin colour segmentation; Generic bivariate Pearsonian mixture model; EM-Algorithm; Segmentation performance metrics; Feature Vector.
An Intelligent and Interactive AR based Location Identifier for Indoor Navigation
by Shriram K Vasudevan, Karthik Venkatachalam, Harii Shree, Keerthana Rani, Priya Dharshini
Abstract: Augmented Reality (AR) has existed for more than five decades, but the techniques and methods for implementing the technology have developed only in the recent past, i.e. in the last decade. We have built an application using AR techniques with Android as the base platform, combining the Global Positioning System (GPS) and AR for indoor navigation. Even though applications like Google Maps already exist for navigation, our application offers users more ease and attractiveness through AR. Data about the surroundings of a particular location are stored in the cloud as latitude, longitude and altitude (geo-location). When a user visits a location for the first time, the geo-location details are entered and stored in the cloud. The next time the same user, or a new user, visits the location, the stored information about it is displayed. The location details are updated as and when a new location is identified, and are displayed as markers through the camera integrated into the application. For example, when a new student visits a school or college for a cultural fest, even after finding the correct building it can be tedious to locate the correct venue or classroom because the area is so vast. With our app, one can reach the correct venue, and the augmented reality feature makes the experience more interactive and user friendly.
Keywords: Augmented Reality (AR); Android Application Development; Global Positioning System (GPS); Geo Location; Location; Location Manager; Indoor Navigation; Cloud Computing.
River flow prediction with memory based artificial neural networks: A case study of Dholai river basin
by Shyama Debbarma, Parthatsarathi Choudhury
Abstract: Prediction of hydrologic time series has been one of the most challenging tasks in water resources management due to the non-availability of adequate data. Recently, applications of Artificial Neural Networks (ANNs) have proved quite successful in such situations in various fields. This paper demonstrates the use of memory-based ANNs to predict daily river flows. Two networks, the gamma memory neural network (GMN) and the genetic algorithm-gamma memory neural network (GA-GMN), are chosen. The best topologies for both ANN models are achieved with the Tanh transfer function and the Levenberg-Marquardt learning rule after calibration with multiple combinations of network parameters. The selected ANN models are then used to predict the daily mean flows of the Dholai (Rukmi) river in Assam, India, a sub-basin of the Barak river basin. A comparative study of the two networks indicates that the GA-GMN model performs better than the GMN model, giving better results for both the training and testing datasets, with a minimum training MSE of 0.018 and a minimum testing MSE of 22.97. Hence the GA-GMN model is selected as an effective tool for predicting the flow features of the Dholai river.
Keywords: Prediction; gamma memory; genetic algorithm; flow.
The Recommender System: A Survey
by Bushra Alhijawi, Yousef Kilani
Abstract: A recommender system is a helpful tool that reduces the time a user needs to find personalized products, documents, friends, places and services. In addition, the recommender system addresses this century's web problem: information overload. At the same time, many environments and technologies (e.g. cloud, mobile, social networks) have become popular and face the problem of large amounts of information, and researchers recognize that the recommender system is a suitable solution to this problem in those environments. This paper reviews recent research papers that apply recommender systems in mobile, social network or cloud environments. We classify these recommender systems into four groups (mobile, social, cloud and traditional (PC) recommender systems) depending on the technology or environment in which the RS is applied. The survey presents comparisons, advantages and challenges of these types of recommender systems, and will directly support researchers and professionals in their understanding of them.
Keywords: Recommender system; Collaborative filtering; Recommendation; Hybrid; Mobile; Cloud; Social; cold-start; Content-based filtering; Demographic-based filtering.
Occlusion Detection and Processing using Optical Flow and Particle Filter
by Wesam Askar, Osama Elmowafy, Anca Ralescu, Aliaa Youssif, Gamal Elnashar
Abstract: Object tracking systems continue to be an intensive area of research, for which detection and processing of occlusion is a well-known challenge. This paper proposes a new approach to detection and handling of occlusion based on the integration of two known techniques, optical flow and particle filtering. Results of preliminary experiments show that the proposed method can detect and overcome the occlusion problem successfully during the tracking process.
Keywords: Video tracking; optical flow; particle filter; occlusion.
Improving Recommendation quality and performance of Genetic-Based Recommender System
by Bushra Alhijawi, Yousef Kilani, Ayoub Alsarhan
Abstract: The recommender system helps the user find a required item in a short time by filtering the available choices. This paper addresses the problem of recommending items to users by presenting three new genetic-based recommender systems (GARS+, GARS++ and HGARS). HGARS, a combination of GARS+ and GARS++, is an enhanced version of GARS that works without the need for a hybrid model. In the proposed algorithm, the genetic algorithm is used to find the optimal similarity function, which depends on a linear combination of values and weights. We experimentally show that HGARS improves accuracy by 16.1%, recommendation quality by 17.2% and performance by 40%.
Keywords: Collaborative filtering; Recommender System; Genetic Algorithms; Similarity.
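A minimal sketch of the underlying idea (with assumed toy features, not the paper's components): similarity is a linear combination of weighted terms, and a GA evolves the weight vector to minimise squared error against known targets:

```python
import random

def linear_similarity(weights, features):
    """Similarity as a weighted linear combination of feature values."""
    return sum(w * f for w, f in zip(weights, features))

def evolve_weights(pairs, targets, n_feat, pop=20, gens=40, seed=0):
    """Toy GA: one-point crossover + Gaussian mutation over weight vectors."""
    rng = random.Random(seed)
    def fitness(w):
        err = sum((linear_similarity(w, f) - t) ** 2
                  for f, t in zip(pairs, targets))
        return -err
    population = [[rng.uniform(-1, 1) for _ in range(n_feat)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_feat)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_feat)           # Gaussian mutation
            child[i] += rng.gauss(0, 0.1)
            children.append(child)
        population = elite + children
    return max(population, key=fitness)
```

In a real collaborative-filtering setting the features would be per-user-pair quantities and the targets would come from held-out rating predictions.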
Energy Aware Task Scheduling Using Hybrid Firefly-GA in Big Data
by M. Senthilkumar, P. Ilango
Abstract: Task scheduling is an important area of research in big data and operates at two levels: the user level and the system level. User-level scheduling deals with issues between the service provider and the customer, while system-level scheduling deals with resource management in the data center. A drawback of various existing methods is the increase in the power consumption of data centers, which has become a significant issue. MapReduce clusters now constitute a major part of the data center for big data applications; their sheer size, highly fault-tolerant nature and low utilization levels make them less energy efficient. The complexity of scheduling increases with the size of the task, making it very tedious to schedule effectively. Existing scheduling algorithms incur higher computational cost and are less efficient, and multi-objective scheduling in cloud computing makes it difficult to resolve the problem for complex tasks. These are the primary drawbacks of several existing works, which prompted us to conduct this research on task scheduling in cloud computing.
Keywords: Firefly algorithm (FA); genetic algorithm (GA); task scheduling; Hadoop; Map Reduce framework.
An Interactive and Innovative Application For Hand Rehabilitation Through Virtual Reality
by Shriram K. Vasudevan, S. Aakash Preethi, Karthik Venkatachalam, Mithula G, Navethra G, Krithika Nagarajan
Abstract: Physiotherapy has been very monotonous for patients, and they tend to lose interest and motivation in exercising. Introducing games with short-term goals into rehabilitation is the best alternative for maintaining patients' motivation. Our research focuses on the gamification of hand rehabilitation exercises to engage patients fully in rehab and to maintain their compliance with repeated exercising, for a speedy recovery from hand injuries (wrist, elbow and fingers). This is achieved by integrating the Leap Motion sensor with the Unity game development engine: exercises (as gestures) are recognized and validated by the Leap Motion sensor, and the game applications for the exercises are developed in Unity. This gamification alternative has been implemented by very few groups globally, and we took it as a challenge in our research. We successfully designed and built an interactive, real-time engine providing a platform for rehabilitation. We tested it with patients and received positive feedback, and the user can view their score through the GUI.
Keywords: Rehabilitation; Physiotherapy; Gesture; Leap Motion Sensor; Recovery; Virtual Reality.
Discovering Communities for Web Usage Mining Systems
by Yacine SLIMANI, Abdelouaheb MOUSSAOUI, Yves LECHEVALLIER, Ahlem DRIF
Abstract: Discovering community structure in the field of web usage mining has been addressed in many different ways. In this paper, we present a new method for detecting community structure using Markov chains based on a set of frequent motifs. The basic idea is to analyze the occurrence probability of different frequent sequences during different user sessions in order to extract the communities that describe user behavior. The proposed method is constructed and successfully applied to the web site of the university campus of Farhat Abbas Setif.
Keywords: Web usage mining; Community detection; Complex networks; Markov chains; Quality function.
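The Markov-chain ingredient can be illustrated by estimating transition probabilities between pages from user sessions (a minimal sketch with hypothetical page names; the paper's method additionally restricts attention to frequent motifs):

```python
from collections import defaultdict

def transition_matrix(sessions):
    """Maximum-likelihood transition probabilities between visited pages."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

# Three toy user sessions on a campus web site.
sessions = [
    ["home", "courses", "exams"],
    ["home", "courses", "staff"],
    ["home", "news"],
]
P = transition_matrix(sessions)
```

Pages whose mutual transition probabilities are high would then be grouped into the same community.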
Person Re-Identification Using kNN Classifier Based Fusion Approach
by Poongothai Elango, Andavar Suruliandi
Abstract: Re-identification is the process of identifying the same person in images or videos taken from different cameras. Although many methods have been proposed for re-identification, it remains challenging because of unsolved issues such as occlusions and variations in viewpoint, pose and illumination. The objective of this paper is to propose a fusion-based re-identification method that improves identification accuracy. To meet this objective, texture and colour features are considered, and the proposed method employs a Mahalanobis-metric-based kNN classifier for classification. The performance of the proposed method is compared with existing feature-based re-identification methods using the CAVIAR, VIPeR, 3DPeS and PRID datasets. Results show that the proposed method outperforms the existing methods, and that the Mahalanobis-metric-based kNN classifier improves recognition accuracy in the re-identification process.
Keywords: Person re-identification; Colour features; Texture feature; Feature Fusion.
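A minimal sketch of the classification step (2-D toy features standing in for the paper's colour/texture descriptors): kNN voting under a Mahalanobis metric, with the inverse covariance estimated from the gallery:

```python
def inv_cov2(points):
    """Inverse of the 2x2 sample covariance of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    det = sxx * syy - sxy * sxy
    return [[syy / det, -sxy / det], [-sxy / det, sxx / det]]

def mahalanobis2(x, y, vi):
    """Squared Mahalanobis distance given inverse covariance vi."""
    d = [x[0] - y[0], x[1] - y[1]]
    return (d[0] * (vi[0][0] * d[0] + vi[0][1] * d[1])
            + d[1] * (vi[1][0] * d[0] + vi[1][1] * d[1]))

def knn_predict(query, gallery, labels, vi, k=3):
    """Majority vote among the k Mahalanobis-nearest gallery entries."""
    order = sorted(range(len(gallery)),
                   key=lambda i: mahalanobis2(query, gallery[i], vi))
    votes = {}
    for i in order[:k]:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

In practice the metric matrix would be learned from labelled same/different pairs rather than taken as the plain inverse covariance.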
Graph Embedded Discriminant Analysis for the Extraction of Features in Hyperspectral Images
by Hannah M. Adebanjo, Jules R. Tapamo
Abstract: In remote-sensed hyperspectral imagery (HSI), class discrimination has been a major concern in the process of reducing the dimensionality of hyperspectral images. Local Discriminant Analysis (LDA) is a widely accepted dimensionality reduction (DR) technique in HSI processing. LDA discriminates between classes of interest in order to extract features from the image. However, its drawbacks when applied to HSI are the scarcity of labeled samples and its inability to extract an adequate number of features for the classes in the image. This paper proposes a new graph-based manifold DR algorithm for HSI with two objectives: to maximize class separability using unlabeled samples and to preserve the manifold structure of the image. The unlabeled samples are clustered, and the cluster labels are used in our semi-supervised feature extraction approach. Classification is then performed using Support Vector Machines and Neural Networks. Analysis of the results shows that the proposed algorithm preserves both the spatial and spectral properties of HSI while reducing the dimension. Moreover, it performs better than some related state-of-the-art dimensionality reduction methods.
Keywords: feature extraction; graph-based methods; manifold learning; hyperspectral image (HSI).
Adaptive Tutoring System based on Fuzzy Logic
by Makram Soui, Abed Mourad, Ghannem Adnane, Daouas Karim
Abstract: In recent years, education methods have changed and become very innovative and modern. Online adaptive learning seems to be a revolutionary, competitive method, and the advancement of computer and networking technologies is the key to this shift from classic education to modern online adaptive education. The majority of e-learning systems are based on Boolean logic: the system considers only whether the learner likes a course characteristic or not, whereas the user may prefer that characteristic gradually (low, medium, high). To this end, the proposed approach exploits semantic relations between data elements and learners' preferences to determine adapted UI components appropriate to a learner's characteristics based on fuzzy logic. The evaluation results confirm the efficiency of our technique, with an average of more than 77% precision and recall.
Keywords: Adaptation; Adaptive course; Evaluation; Multi-criteria Decision Making; Intelligent Tutoring System.
Technical Analysis based Fuzzy support system for stock market Trading
by Aviral Sharma, Vishal Bhatnagar, Abhay Bansal
Abstract: Technical analysis forms an integral part of the life of a stock trader. In econometric analysis, technical analysis is a method for predicting the course of prices of a security through the study of past statistics relating to the equity, mostly price and volume. Traders tend to use this type of analysis to make decisions about a particular security. Fuzzy-logic-based systems can be used to develop decision models in which the experience of traders is incorporated. In this paper, we present a hybrid approach combining fuzzy logic and technical analysis. The system generates a signal on the direction of movement of the stock, helping the trader to better understand the underlying behavior of the stock under consideration and make decisions accordingly.
Keywords: Technical analysis; Commodity Channel index; relative strength index; William %R; ultimate oscillator; Aroon; Fuzzy Logic; Artificial intelligence.
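One of the listed indicators, the relative strength index, combined with triangular fuzzy memberships, gives a flavour of the hybrid (a minimal sketch; the membership breakpoints are assumptions, not the paper's tuned values):

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes."""
    deltas = [b - a for a, b in zip(prices, prices[1:])][-period:]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def triangular(x, a, b, c):
    """Triangular fuzzy membership rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_signal(value):
    """Degrees of membership of an RSI value in three linguistic terms."""
    return {
        "oversold":   triangular(value, -1, 0, 40),
        "neutral":    triangular(value, 30, 50, 70),
        "overbought": triangular(value, 60, 100, 101),
    }
```

A rule base would then combine such memberships across several indicators into a single buy/hold/sell signal.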
Adaptive Savitzky-Golay Filtering and Its Applications
by Jozsef Dombi, Adrienn Dineva
Abstract: Noise reduction is a central issue in the theory and practice of signal processing. The Savitzky-Golay (SG) smoothing and differentiation filter is widely acknowledged as a simple and efficient method for denoising; however, only a few books on signal processing cover it. As is well known, the performance of the classical SG filter depends on the appropriate setting of the window length and the polynomial degree, which should match the scale of the signal; for signals with a high rate of change, the performance of the filter may be limited. This paper presents a new adaptive strategy for smoothing irregular signals based on the Savitzky-Golay algorithm. The proposed technique ensures high-precision noise reduction by iterative multi-round smoothing and correction. In each round the parameters change dynamically according to the results of the previous smoothing. Our study also supports data compression based on an optimal resolution of the signal with linear approximation. Simulation results validate the applicability of the novel method.
Keywords: Savitzky-Golay filter; adaptive multi-round smoothing; iterative smoothing and correction; noise removal; data compression.
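For readers unfamiliar with the underlying filter, the sketch below implements the classical Savitzky-Golay smoother from first principles (a local least-squares polynomial fit evaluated at the window centre) and exposes a `rounds` parameter that loosely mimics the multi-round idea by re-smoothing the previous result. It is not the authors' adaptive parameter-selection strategy:

```python
import numpy as np

def savgol_coeffs(window, degree):
    """Least-squares coefficients that evaluate the fitted polynomial
    at the centre of an odd-length window (classical SG filter)."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    A = np.vander(offsets, degree + 1, increasing=True)  # columns: 1, t, t^2, ...
    # Row of the pseudo-inverse corresponding to the constant term,
    # i.e. the fitted polynomial evaluated at t = 0:
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=7, degree=2, rounds=1):
    """Smooth `y` with an SG filter; `rounds > 1` applies the filter
    repeatedly to the previous output. Edges use edge-value padding."""
    c = savgol_coeffs(window, degree)
    half = window // 2
    out = np.asarray(y, float)
    for _ in range(rounds):
        padded = np.pad(out, half, mode="edge")
        out = np.convolve(padded, c[::-1], mode="valid")
    return out
```

A useful sanity check is that, away from the edges, a degree-2 filter reproduces a noise-free quadratic exactly, since the local fit is then exact.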
A New Hybrid Genetic Algorithm for the Job Shop Scheduling Problem
by Marjan Kuchaki Rafsanjani, Milad Riyahi
Abstract: The job shop scheduling problem is an NP-hard problem. This paper proposes a new hybrid genetic algorithm to solve the problem effectively. A new selection criterion is introduced to tackle the premature convergence problem. To make full use of the structure of the problem itself, a new crossover based on the machines is designed. Furthermore, a new local search is designed that improves the local search ability of the proposed GA. The new approach is run on a set of benchmark problems, and computer simulation shows the effectiveness of the proposed approach.
Keywords: Job shop scheduling problem (JSSP); Genetic algorithm; Selection operator; Crossover operator; Local search.
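Crossovers for permutation-encoded schedules, of the kind the abstract alludes to, can be illustrated with the classical order crossover (OX). The sketch below is a generic operator for job sequences, not the machine-based crossover the paper designs; its key property is that the child is always a valid permutation:

```python
def order_crossover(p1, p2, cut1, cut2):
    """Order crossover (OX) for permutation-encoded job sequences:
    the child inherits p1's slice [cut1:cut2] in place, and the
    remaining jobs fill the other positions in the order they
    appear in p2. Preserves permutation validity."""
    hole = set(p1[cut1:cut2])
    filler = [g for g in p2 if g not in hole]
    return filler[:cut1] + p1[cut1:cut2] + filler[cut1:]
```

For example, crossing [1, 2, 3, 4, 5] with [5, 4, 3, 2, 1] at cuts (1, 3) keeps [2, 3] from the first parent and fills the rest in the second parent's order.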
An Optimized Component Selection Algorithm for Self-Adaptive Software architecture using the Component Repository
by Y. Mohana Roopa, A. Rama Mohan Reddy
Abstract: Component based software engineering focuses on the development and reuse of components. Component reuse depends on the storage and retrieval process, which is carried out by a component repository. This paper presents a component repository model that helps developers achieve good productivity. Selecting a component from the repository according to functionality and requirements is a crucial part. This paper proposes an algorithm for optimizing component selection under functionality constraints such as customer size, reliability and performance. The experimental results evaluate the performance of the algorithm and show that the proposed algorithm performs better in terms of component selection.
Keywords: component; software system selection; adaptability; functionality.
Test Optimization: An Approach Based on a Modified Algorithm for Software Networks
by Manju Khari, Prabhat Kumar, Gulshan Shrivastava
Abstract: Testing is an indispensable part of the software development life cycle. It is performed to improve the performance, quality, efficiency and reliability of the software network. In this paper, three algorithms are implemented, namely the Genetic Algorithm (GA), the Cuckoo Search Algorithm (CSA) and the Artificial Bee Colony (ABC) algorithm, for the purpose of test suite optimization; with the help of the results obtained from these three algorithms, a novel hybrid algorithm is proposed to enhance the optimization result. To test a system, suitable test cases are developed, but these test cases need to be optimized, as executing all of them is time-consuming. Testing a system with all possible test cases increases the time required for testing and also affects the cost of the product. It is therefore desirable to reduce the number of test cases, which in turn reduces the testing time and the work of a software tester. The authors focus on optimizing test suites so that only the best test cases need to be executed to test the software network. Nature-inspired algorithms are used to optimize the test cases, as they provide strong optimization techniques. The proposed algorithm is implemented and experiments are conducted on various real-time programs to evaluate the efficiency of the proposed approach. Experimental results show that the hybrid algorithm generates better or comparable results compared with existing state-of-the-art algorithms.
Keywords: Genetic; Cuckoo Search; Artificial bee; Test suite Optimization; Hybrid algorithm; software network; Test data.
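Test-suite reduction of the kind described is often benchmarked against a simple greedy baseline: repeatedly pick the test case that covers the most still-uncovered requirements. The sketch below uses a hypothetical coverage map and is a classical baseline, not the paper's GA/CSA/ABC hybrid:

```python
def reduce_suite(coverage):
    """Greedy test-suite reduction (greedy set cover).
    `coverage` maps a test-case name to the set of requirements it covers;
    returns an ordered list of selected tests covering everything coverable."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # nothing left is coverable
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen
```

Greedy set cover is a standard yardstick here because the underlying minimization problem is NP-hard, which is precisely what motivates the metaheuristics the paper studies.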
Application of Artificial Neural Network (ANN) on deformation and densification behaviour of sintered Fe-C steel under cold upsetting
by T.K. Kandavel, T. Ashok Kumar, D. Vijay, S. Aswanth Samraj
Abstract: Cold upsetting is one of the densification processes used on P/M materials to achieve the desired density by applying the required load. The present work aims to study the deformation and densification characteristics of plain carbon steel (Fe-C) containing various levels of carbon, viz. 0.2%, 0.5% and 1%, under cold upsetting.
Elemental powders of iron (Fe) and graphite (C) were accurately weighed based on the required compositions and blended homogeneously using a pot mill. Cylindrical preforms of Fe and Fe-C powders were prepared on a 100 T capacity Universal Testing Machine (UTM) by applying suitable axial pressure to obtain 80% of the theoretical density of the respective alloy steels. The green compacts were sintered in a 3.5 kW electric muffle furnace, with nitrogen gas purged to prevent oxidation during sintering.
The sintered preforms of the various Fe-C compositions were subjected to cold upsetting. The axial and lateral deformations were calculated from physical measurements taken from the deformed and non-deformed specimens, and the density of the deformed preforms was measured by the Archimedes principle. The experimental data were further used to generate deformation and densification models using Artificial Neural Networks (ANN).
It is observed from the experimental results that increasing the carbon content improves the deformation and densification properties of the iron material, as the carbon behaves like a lubricant and increases the binding strength between the grains. As the target value of the ANN model approaches unity, it can be concluded that the ANN predictions and the experimental values agree well with each other. It is further noted that ANN can be used as a prediction model for the deformation and densification behaviour of any P/M material.
Keywords: Artificial Neural Network; Powder metallurgy; Densification; Deformation; True axial stress; Plain carbon steel.
A Study of Total Technical Life (TTL) for an Aircraft, with Implementation and Suggestions for Improvement
by Balachandran A, P.R. Suresh, Shriram K. Vasudevan
Abstract: Travel has become more sophisticated and inevitable these days, and aircraft have become one of the preferred ways to reach a destination, used not only by civilians but also by the military for operational purposes. With such complicated designs, there is a need for more reliable systems and for effective use of the service life of the aircraft, known as the Total Technical Life (TTL). The present system of fixing the TTL for an aircraft is a passive method in which predicted values are compared with values obtained from a sample aircraft especially monitored for this purpose. However, the actual fatigue of each aircraft is different, as every aircraft undergoes a different way of flying under different conditions at different locations. To cater for these unknown parameters, a factor of safety is applied and hence a safe utilization life is obtained. When the aircraft reaches its safe life limit, it is withdrawn from service even though useful life remains. In the absence of actual data for each aircraft, the present method is the only way to fly the aircraft safely, at the cost of under-utilization. In recent years, much advancement has taken place in data sensing, capturing and processing, and highly reliable computing platforms are available at much lower cost. With this technological advancement, it is possible to monitor the fatigue of all the aircraft structures dynamically and collect the actual data. The actual fatigue experienced by the aircraft during its usage period can be compared against the predicted value so that the life of the aircraft can be extended without compromising safety.
The proposed system is one of the ways forward for optimal use of aircraft and scientific way of providing life extensions to the aircraft based on actual data rather than approximation of service life of aircraft fleet.
Keywords: TTL; Aircraft; Total Technical Life; Under-utilization; Life of the aircraft; safety; Arduino; Microcontroller.
A Stable Routing Algorithm for Mobile Ad Hoc Network Using Fuzzy Logic System
by D. Helen, D. Arivazhagan
Abstract: The Mobile Ad Hoc Network (MANET) is an infrastructure-less network in which nodes communicate either directly or indirectly through intermediate nodes. The network topology can change frequently due to its dynamic nature and limited resource availability. Energy-efficient routing is a major issue in MANETs because nodes operate on limited battery power. An energy-efficient routing algorithm can ensure high performance by increasing the network lifetime. To make the network more scalable, the routing algorithm needs to maximize the usage of network resources. This paper proposes a novel routing approach, the Energy Aware Fuzzy Controlled Routing (EAFCR) algorithm. The proposed algorithm adds intelligence to the node by applying fuzzy decision tools to develop a more stable and energy-efficient route during the route discovery phase. The fuzzy logic system uses the per-hop delay, available energy and link quality to form a more stable route. With the proposed EAFCR algorithm, the packet delivery ratio, end-to-end delay, residual energy and throughput show improvements of 3.05%, 1.38%, 4.25% and 3.3%, respectively, over the existing Fuzzy Logic Modified AODV Routing (FMAR) protocol.
Keywords: infrastructure-less; topologies; fuzzy decision; routing; protocol.
Automatic Short Answer Grading using Rough Concept Clusters
by Udit Kr. Chakraborty, Debanjan Konar, Samir Roy, Sankhayan Choudhury
Abstract: Evaluation of text based answers has remained a challenge for researchers in recent years, and with the growing acceptance of e-learning systems, a solution needs to be reached quickly. While assessing the knowledge content, correctness of expression and linguistic patterns are complex issues in themselves, a shorter answer may be evaluated using keyword matching only. The work proposed in this paper is aimed at evaluating short text answers, no longer than a single sentence, using keyword matching. The proposed method agglomerates keywords from a group of model answers to form clusters of words. The evaluation process then exploits the inherent roughness of the keyword clusters to evaluate a learner's response through comparison and keyword matching. The novelty of the proposed system lies in the use of fuzzy membership functions along with rough set theory to evaluate the answers.
Rigorous tests conducted on a dataset built for the purpose returned good correlation values with the average of two human evaluators. The proposed system also fares better than Latent Semantic Analysis (LSA) based and Link Grammar based evaluation systems.
Keywords: Text answer; Single Sentence; Keyword; Concept Cluster; Rough Set; Latent Semantic Analysis; Link grammar.
A hybrid grey wolf optimization and pattern search algorithm for automatic generation control of multi area interconnected power systems
by Vikas Soni, Girish Parmar, Mithilesh Kumar
Abstract: A hybrid grey wolf optimization-pattern search (hGWO-PS) algorithm is proposed to optimize the parameters of two degree of freedom-proportional integral derivative (2DOF-PID) controllers for automatic generation control in multi area power systems. The integral of time multiplied by absolute error (ITAE) is taken as the objective function. The algorithm is first applied to a two area non-reheat thermal power system, for which the ITAE, dynamic responses and robustness are analysed. The dynamic behaviour of the system optimized by the proposed approach hardly alters under broad changes in the load and system parameters within the range [-50%, +50%]. The proposed algorithm is also applied extensively to a three area hydro-thermal power system with appropriate generation rate constraints (GRC). The simulation results show that the proposed algorithm outperforms recently published approaches in terms of lower ITAE value, settling time and overshoot, and a faster return of frequency and tie-line power deviations to zero.
Keywords: Automatic generation control; two area parallel interconnected thermal power system; three area interconnected hydro thermal power system; two degree of freedom-PID controllers; grey wolf optimization; pattern search; generation rate constraints; governor dead band nonlinearities.
Privacy Preservation Using Hybrid Cloud Environment and Map-Reduce for Data Deduplication
by Rutuja Mote, Ambika Pawar
Abstract: The cloud is an umbrella under which internet based development and services are scrutinized and explored, pioneering new opportunities for a large-scale, flexible computing framework. The actors of a cyber supply chain engage through important functionalities such as the elastic, utility model of consumption and the abstraction of the framework. Hybrid clouds vary greatly in sophistication, facilitating portability of workloads across the entire inter-cloud without compromising users' availability, security or performance requirements. This paper develops a privacy design model as cloud computing adoption hits the fast lane. In the first phase, the system devises a hybrid cloud architecture. In the second phase, the system implements security tactics, the Advanced Encryption Standard (AES) technique and the Byte Replacement Shuffling (BRS) algorithm, in accordance with the sensitivity level assigned to each file, to preserve privacy. The third phase optimizes the response time (to upload and download a file) and the workflow using Map-Reduce for data deduplication, yielding a thorough privacy and security solution.
Keywords: Hybrid Cloud Architecture; File Upload; File Download; Byte Replacement Shuffling; Map-Reduce; Data Deduplication; Security; Privacy.
Possible Adoption of Various Machine Learning Techniques in Cognitive Radio: A Survey
by Barnali Dey, Ashraf Hossain, Rabindranath Bera
Abstract: The Cognitive Radio (CR) system is needed in next-generation wireless communication technology to provide intelligence and superior performance to a wireless device. CR is essentially an intelligent system that is aware of its environment and capable of adapting to the changing environment and user needs. Such adaptation of the communication system can be realised well with machine learning capability built into the system. It is well known that a key strength of any machine learning paradigm is its ability to adapt to dynamically changing system parameters. In this paper an attempt has been made to compile various applications of machine learning techniques for the different activities of the CR cycle. Further, this survey reviews work on the development of machine learning techniques for spectrum sensing in CR, in order to make the CR system as a whole practically feasible and robust, thus mitigating the computational limitations of conventional techniques.
Keywords: Cognitive Radio; Machine Learning; Spectrum Sensing; Energy Detection.
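Energy detection, listed in the keywords above, is the simplest spectrum sensing scheme and easy to sketch: compare the average received energy against a threshold set above the noise floor. The margin factor below is an illustrative constant, not a calibrated constant-false-alarm-rate design:

```python
import numpy as np

def energy_detect(samples, noise_var, margin=2.0):
    """Classical energy detector for spectrum sensing: declare the band
    occupied when the average sample energy exceeds `margin * noise_var`.
    Returns (occupied?, measured average energy)."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > margin * noise_var, energy
```

With unit-variance noise, noise-only samples average energy near 1, while a sinusoid of amplitude 2 adds about 2 units of power, comfortably clearing the illustrative threshold.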
Raga Recognition through Tonic Identification using Flute Acoustics
by Sinith M S, Shikha Tripathi, Murthy K V V
Abstract: Tonic identification is traditionally approached using pitch histograms, and the acoustic characteristics of musical instruments have not been used for the purpose. Conventional tonic identifiers are either knowledge based or multi-pitch analysis based. These methods depend, directly or indirectly, on the drone sound, and their efficiency decreases drastically in its absence. In this paper, a tonic identification method that is independent of the drone sound is proposed for flute signals, making use of the acoustic characteristics of the instrument. In addition, tonic identification is utilized for real-time raga recognition.
Keywords: Tonic identification; Indian Classical Music; Raga recognition; Flute acoustics.
Performance Improvement in the Cardiology Department of a Hospital by Simulation
by Shriram K. Vasudevan, Narassima Seshadri, Anbuudayasankar SP, Thennarasu M
Abstract: The healthcare industry plays a vital role in the life of humankind and in the economic development of a country. Healthcare services have to be provided as and when required, without delay or compromise on quality. This research focuses on reducing patients' waiting time, as it is considered one of the important parameters governing service quality and improving patient satisfaction. This was achieved through a case study in the cardiology outpatient department of a private hospital in South India; cardiology was chosen as it is one of the most critical areas demanding immediate attention. The study follows a Discrete Event Simulation approach to analyse the trajectory of patients in the cardiology department, determine various performance parameters, suggest changes to the existing system and develop alternate models whose results are compared with those of the existing model. Reducing waiting time permits physicians to attend to more patients in a given period, as is evident from the results obtained from the developed models. Simulation results revealed that the four proposed alternate systems were more effective than the existing system.
Keywords: Discrete Event Simulation; Arena model; Healthcare; Cardiology; Outpatient department; Waiting time reduction.
Computing the Shortest path with Words
by Arindam Dey, Anita Pal
Abstract: Computing with Words is a soft computing technique for solving decision making problems with information described in natural language. It is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements or computations. In this paper, we propose a generalized Dijkstra's algorithm to solve the shortest path problem from a specific node to every other node of a fuzzy graph in which words taken from natural language are assigned to the arcs as their arc lengths. We call this problem computing the shortest path with words (CSPWW). In a shortest path problem, the arc lengths may represent time or cost. Human beings describe those arc costs in real life with terms such as small, large and some, which do not supply natural numbers or fuzzy numbers; we describe those terms as words. The same word may have different meanings to different people, so uncertainty appears in the description of a word in natural language. Here, we use Interval Type-2 Fuzzy Sets (IT2FSs) to capture the uncertainty of the words. A perceptual computer (Per-C) model is introduced for use in our algorithm. The Per-C associated with the shortest path problem is called a shortest path advisor (SPA), and its design is described in detail in this paper. It consists of three components: encoder, CWW engine and decoder. The encoder receives all the words present in a path and transforms them into IT2FSs. The CWW engine adds all the IT2FSs and returns an IT2FS for the corresponding path. The decoder receives the output of the CWW engine and calculates the corresponding centroid based ranking value of the path; this rank is used to determine the shortest path. A numerical example of a transportation network is used to illustrate the effectiveness of the proposed method.
Keywords: Computing with words; Interval type-2 fuzzy sets; perceptual computer; centroid rank.
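A drastically simplified version of the encoder/engine/decoder pipeline can be sketched by encoding each word as a plain interval (a crude stand-in for the IT2FS encoder), summing intervals along a path (the CWW engine) and ranking paths by the interval midpoint (the centroid decoder). The word vocabulary below is invented for illustration; Dijkstra's greedy step remains valid here because the midpoint of a sum of intervals is the sum of the midpoints:

```python
import heapq

# Illustrative vocabulary: each word encodes to an interval [lo, hi].
WORDS = {"tiny": (0, 2), "small": (1, 3), "some": (2, 6), "large": (5, 9)}

def centroid(iv):
    """Decoder step: rank an interval cost by its midpoint."""
    return (iv[0] + iv[1]) / 2.0

def shortest_path_with_words(graph, src, dst):
    """Dijkstra's algorithm where arc lengths are words; path costs are
    interval sums ranked by centroid. `graph` maps node -> {neighbour: word}.
    Returns (path, interval cost) or (None, None) if unreachable."""
    pq = [(0.0, (0.0, 0.0), src, [src])]
    seen = {}
    while pq:
        rank, cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path, cost
        if node in seen and seen[node] <= rank:
            continue
        seen[node] = rank
        for nbr, word in graph.get(node, {}).items():
            lo, hi = WORDS[word]
            ncost = (cost[0] + lo, cost[1] + hi)
            heapq.heappush(pq, (centroid(ncost), ncost, nbr, path + [nbr]))
    return None, None
```

In the toy graph below, going A→B→C ("small" then "tiny", centroid 3) beats the direct "large" arc (centroid 7), which is the kind of ranking decision the SPA's decoder makes.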
An Analysis of the Most Accident Prone Regions within the Dhaka Metropolitan Region Using Clustering
by M. Rashedur Rahman
Abstract: Most of the world's developed countries have reduced unnatural deaths, such as traffic accident fatalities, among their citizens by taking efficient steps. In Bangladesh, injuries from road accidents have become a regular occurrence, and the highly populated cities still see such incidents daily. As the number of vehicles increases and most drivers are unwilling to follow the traffic rules, injuries due to traffic accidents are not going down at all. Among all the big cities in Bangladesh, Dhaka has the highest number of road accidents, so in this paper we focus on the most hazardous regions in the Dhaka Metropolitan area. We collected accident related data from the Accident Research Institute (ARI) at Bangladesh University of Engineering and Technology (BUET), located in the city of Dhaka. We used Fuzzy C-means Clustering, Expectation Maximization, Hierarchical Agglomerative Clustering and K-means Clustering to identify the regions of the Dhaka Metropolitan area where traffic incidents occur most often. Missing values for some attributes in the dataset were replaced by the mean/mode of the attribute itself.
Keywords: data mining; accidental injury severity; clustering; hazardous areas; Dhaka metropolitan area.
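Of the four clustering methods named, k-means is the most compact to sketch. The version below runs on 2-D accident coordinates and uses a deterministic farthest-point initialisation; the data in the usage example is synthetic, not the ARI dataset:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means on 2-D coordinates with deterministic
    farthest-point initialisation. Returns (centers, labels)."""
    # Seed with the first point, then repeatedly add the point
    # farthest from all chosen centers.
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

On accident data, each resulting cluster center would mark a candidate hazardous region, with cluster size indicating how accident-prone it is.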
A statistical comparison for evaluating the effectiveness of linear and nonlinear manifold detection techniques for software defect prediction
by Soumi Ghosh, Ajay Rana, Vineet Kansal
Abstract: Most software systems are released without defect prediction; this paper therefore presents a new, effective manifold detection technique (MDT), which differs from previously applied defect prediction methods such as regression and feature selection. The performance of classifiers has been compared with and without MDTs to evaluate the effectiveness of different (linear and nonlinear) MDTs in reducing the dimensions of software datasets. In this process, eight classifiers were applied to four PROMISE datasets to determine the best performing classifier with respect to prediction performance measures (accuracy, precision, recall, F-measure, AUC, misclassification error) with and without MDTs. The experimental results, statistically tested by a paired two-tailed t-test, show that FastMVU produces the most accurate results of all the nonlinear MDTs and that the Bayesian network (BN) is the most effective technique for software defect prediction both with and without MDTs.
Keywords: defects; linear; nonlinear; manifold detection; promise datasets; prediction; software system.
Modified SVPWM Technique for a Sensorless Controlled Induction Motor Drive using Neural Network Observer and Predictive Controller
by Shoeb Hussain, Mohammad Abid Bazaz
Abstract: The use of a multi-level inverter in a sensorless control scheme increases reliability in state parameter estimation. In this paper, sensorless control is presented using a neural network observer that uses the direct and quadrature current and voltage components for speed estimation. Distortion in current and voltage results in deviations in the speed estimate. To address this problem, this paper presents a modified space vector modulation scheme for sensorless control of an induction motor drive fed by a multi-level inverter. The modulation scheme uses fewer switching states and is employed on a cascaded H-bridge inverter configuration. This results in reliable speed estimation by reducing distortion in the current and voltage measurements. Moreover, the paper uses a predictive controller for speed control. Simulation is carried out in MATLAB and the results show improved performance of sensorless operation.
Keywords: Induction motor; predictive controller; neural network observer; Sensorless Vector control; SVPWM.
Determination of Reliability Index of Cantilever Retaining Wall by RVM, MPMR and MARS
by Pijush Samui, Rahul Kumar, Sunita Kumari, Sanjiban Sekhar Roy
Abstract: The overturning criterion is an important parameter in the design of a cantilever retaining wall. This study adopts Relevance Vector Machine (RVM) based First Order Second Moment (FOSM), Minimax Probability Machine Regression (MPMR) based FOSM and Multivariate Adaptive Regression Spline (MARS) based FOSM methods for determining the reliability index of a cantilever retaining wall based on the overturning criterion. RVM, MPMR and MARS are used to overcome the limitations of the FOSM model. An example illustrates how the proposed RVM based FOSM, MPMR based FOSM and MARS based FOSM analyses can be carried out. A comparative study has been carried out between the developed models, and the results demonstrate that they are able to overcome the limitations of FOSM.
Keywords: Retaining Wall; Reliability; First Order Second Moment Method; Minimax Probability Machine Regression; Relevance Vector Machine; Multivariate Adaptive Regression Spline.
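The FOSM backbone of these hybrid models amounts to a few lines of numerics: evaluate the limit state function at the mean point, propagate the input variances through a first-order Taylor expansion, and take beta = mu_g / sigma_g. The sketch below treats the limit state g as a black box evaluated with finite differences; in the paper, the RVM, MPMR or MARS surrogate of the overturning limit state would play that role. The example limit state is invented for illustration:

```python
import numpy as np

def fosm_beta(g, means, stds, h=1e-6):
    """First Order Second Moment reliability index for a limit state g(x)
    with independent inputs: beta = E[g] / sigma[g], with the gradient of
    g taken at the mean point by forward finite differences."""
    means = np.asarray(means, float)
    mu_g = g(means)
    n = len(means)
    grad = np.array([(g(means + h * np.eye(n)[i]) - mu_g) / h for i in range(n)])
    sigma_g = np.sqrt(np.sum((grad * np.asarray(stds, float)) ** 2))
    return mu_g / sigma_g
```

For a linear margin g(x) = x0 - x1 with means (5, 2) and unit standard deviations, the index is exactly 3 / sqrt(2), which makes the routine easy to verify.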
Thumb Movement for a Prosthetic Hand Based on Fuzzy Logic
by Anilesh Dey, Amarjyoti Goswami, Abdur Rohman, Jamini Das, Nilanjan Dey, Amira S. Ashour, Fuqian Shi
Abstract: Electromyography innovation has led to the development of modern prosthesis (artificial limb) control. Prosthetic hands are developed to assist amputees in their daily activities. Over the years, it has been seen that the fluid movements required to carry out different functions, such as gripping and holding, do not reach their full potential, especially in the thumb movement pattern. Consequently, the current work proposes an efficient mechanism for the movement of the prosthetic thumb that can position the thumb even at intermediate angles such as 45.3 and 78.6 degrees. Such flexibility in movement leads to a pattern that is closer to that of the human hand. A fuzzy based control strategy is applied to design a prosthetic thumb with the above-mentioned movement pattern. A Mamdani fuzzy control model is proposed with three input variables, namely the thumb's first joint bend, second joint bend, and the second joint's movement in the left and right directions. The proposed system provided the expected results, where twenty-seven combinations of the rules facilitate the alignment of the prosthetic thumb at different angles.
Keywords: Intermediate movements; Mamdani fuzzy control; Prosthetic thumb movement.
QoS-Aware Online Mechanism for Dynamic VM Provisioning in Cloud Market Using Q-learning
by Ayoub Alsarhan
Abstract: A cloud provider (CP) leases various resources, such as CPUs, memory and storage, in the form of Virtual Machine (VM) instances to clients over the internet. This paper tackles the issue of quality of service (QoS) provisioning in a cloud environment. We examine the use of Q-learning for provisioning VMs in the cloud market. The extracted decision function decides when to reject new requests for VMs that would violate the QoS guarantee. This problem requires the reward for the CP to be maximized while simultaneously meeting the QoS constraints. These complex, contradicting objectives are embedded in the Q-learning model developed and implemented in this paper. Numerical analysis shows the ability of our solution to earn significantly higher revenue than the alternatives.
Keywords: Quality of Service; Cloud Computing; Resource Management; Q-learning; Cloud Service Trading.
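The shape of such a Q-learning admission policy can be illustrated on a toy simulator (all parameters and the environment dynamics below are invented assumptions, not the paper's cloud market model): the state is the number of VMs in use, the actions are accept/reject, accepting earns the lease price, and accepting beyond capacity triggers a QoS penalty:

```python
import random

def train_admission_policy(capacity=4, price=10.0, penalty=-50.0,
                           episodes=3000, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning for VM admission control on a toy simulator.
    State = VMs in use, actions = {0: reject, 1: accept}. Accepting past
    `capacity` violates QoS and is penalised; VMs depart at random."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(capacity + 2) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(20):  # 20 arrival events per episode
            # Epsilon-greedy action selection (epsilon = 0.2).
            a = random.choice((0, 1)) if random.random() < 0.2 else \
                max((0, 1), key=lambda x: Q[(s, x)])
            if a == 1:
                ns = min(s + 1, capacity + 1)
                r = price if ns <= capacity else penalty  # QoS violation cost
            else:
                ns, r = s, 0.0
            if random.random() < 0.3:  # a running VM departs
                ns = max(ns - 1, 0)
            best = max(Q[(ns, 0)], Q[(ns, 1)])
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = ns
    return Q
```

After training, the learned table prefers accepting when the provider has spare capacity and rejecting when a further VM would breach the QoS guarantee, which is exactly the admission trade-off the abstract describes.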
Using Modified Background Subtraction for Detecting Vehicles in Videos
by Mohamed Maher Ata, Mohamed El-Darieby, M.Abd Elnaby, Sameh A. Napoleon
Abstract: In this paper, a comparison study is presented between a traditional foreground detector (background subtraction technique) and a modified background subtraction method (empty frame subtraction technique). Our case study estimates the average vehicular speed and the level of crowdedness in three test traffic videos with five different properties (frame rate, resolution, number of frames, duration and extension). The proposed modification to the background subtraction strategy aims to reduce the vehicle detection processing time, which increases vehicle tracking efficacy. In addition, we applied several video degradations (salt and pepper noise, Gaussian noise and speckle noise) to the traffic videos in order to evaluate the effect of challenging weather conditions on the detection processing time. This degradation was applied to both the traditional and the modified background subtraction approaches for detecting vehicles in traffic videos. Results show an obvious improvement in the processing time of the detected vehicles with the proposed modification compared with the traditional background detector.
Keywords: computer vision; foreground object detection; background subtraction; video degradation.
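The empty-frame subtraction idea reduces to a per-pixel comparison against a pre-captured frame of the empty road, which can be sketched in a few lines (the threshold value is an illustrative assumption, and a real pipeline would add morphological clean-up and blob tracking):

```python
import numpy as np

def detect_vehicles(frame, empty_frame, thresh=30):
    """Empty-frame subtraction: pixels that differ from a pre-captured
    empty-road frame by more than `thresh` grey levels are marked as
    foreground. Returns a boolean mask of candidate vehicle pixels."""
    diff = np.abs(frame.astype(int) - empty_frame.astype(int))
    return diff > thresh

def crowdedness(mask):
    """Fraction of foreground pixels, a crude crowdedness index."""
    return mask.mean()
```

Compared with a running-average background model, subtracting a fixed empty frame needs no per-frame model update, which is where the processing-time saving reported in the abstract comes from.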
An Efficient Prefix-Based Labeling Scheme for Dynamic Update of XML Documents
by Dhanalekshmi Gopinathan, Krishna Asawa
Abstract: The increasing volume of XML documents and the real-world requirement to support updates have motivated the research community to develop dynamic labeling schemes. Each of the dynamic labeling schemes proposed to date differs in its characteristics and has its own advantages and limitations; they may differ in the queries supported, their update performance, label size, etc. In this paper, a new prefix based labeling scheme is proposed which is compact and dynamic, and which also facilitates the computation of structural relationships, the core part of query processing. The proposed scheme can handle both static and dynamic XML documents. Experiments are conducted to evaluate the storage requirement, structural relationship computation and update processing, and the results are compared with some existing labeling mechanisms.
Keywords: Labeling Scheme; XML; Structural relationship; dynamic update; ancestor-descendant; parent-child relationship.
Content based load balancing of tasks using task clustering for cost optimization in cloud computing environment
by Kaushik Sekaran, Venkata Krishna P
Abstract: Cloud computing is the recent mantra of technologists and internet users all around the world. The power of cloud computing is enormous, as it provides big services at optimal cost and in a reliable manner. Load balancing of tasks across cloud servers is an important issue to be addressed. In this paper, we propose a task clustering algorithm that minimizes the load across the cloud servers through content based load balancing of tasks, together with a cost reduction method for optimal energy consumption at the cloud data center heads. The results analysed in our paper are better than those of existing content based load balancing models. Our approach achieves optimal load balancing of tasks with respect to upload bandwidth utilization, minimal latency and other QoS (Quality of Service) metrics.
Keywords: Cloud computing; load balancing; tasks clustering; cost reduction; energy consumption; QoS (Quality of service) metrics.
A Two Step Clustering Method for Facility Location Problem
by Ashish Sharma, Ashish Sharma, A.S. Jalal, Krishna Kant
Abstract: Facility location problems are designed with the objective of gaining more profit. Profit is gained when the maximum demand is satisfied, and demand is satisfied when the maximum number of customers is covered or served. Various approaches have been investigated to reach the maximum number of customers. In general, most approaches to facility location models treat a radius around the facility as its service area. Facilities whose service area really is a radius can therefore be handled by the conventional approach; however, conventional approaches fail to allocate facilities that are constrained by topographical and road network barriers. In this paper, we propose a model to optimize facility allocation in such scenarios. In the proposed model, we use a two-step clustering approach to solve the facility location problem. Experimental results illustrate that the proposed algorithm, based on density affinity propagation (DAP), can be used to construct a solution for maximal service and covering area.
Keywords: Facility location; Proximity; Density; Approximation; Clustering.
Marker and Modified Graph Cut Algorithm for Augmented Reality Gaming
by Shriram K. Vasudevan, R.M.D. Sundaram
Abstract: Augmented reality aims at superimposing a computer-generated image on a user's view of the real world, thereby creating a composite view. Virtual reality, on the other hand, keeps the user isolated from the real world and immersed in a world that is completely fabricated. The main objective of this research is to capture a real-life image and augment it as a component of a gaming environment using the principles of augmented reality. For this implementation, we have chosen car racing as our gaming environment. The core elements are image segmentation using a CIELAB color space based graph cut algorithm, 2D-to-3D modelling, and game development with augmented reality. The tools utilised are MATLAB, insight3d and Unity3D. The proposed idea will enable someone to view a virtual environment with real components that are integrated dynamically.
Keywords: Augmented Reality; Gaming; Image extraction; Modelling; Image segmentation; Racing.
Predicting longitudinal dispersion coefficient in natural streams using Minimax Probability Machine Regression and Multivariate Adaptive Regression Spline
by Sanjiban Sekhar Roy, Pijush Samui
Abstract: This article employs Minimax Probability Machine Regression (MPMR) and Multivariate Adaptive Regression Spline (MARS) for prediction of the longitudinal dispersion coefficient in natural streams. Hydraulic variables such as channel width (B), flow depth (H), flow velocity (U) and shear velocity (u*), and geometric features such as channel sinuosity (σ) and channel shape parameter (β), were taken as the input. The dispersion coefficient Kx was the decision parameter for the proposed machine learning models. MARS does not assume any functional relationship between inputs and output: it is a non-parametric regression model that splits the data and fits each interval with a basis function. MPMR is a probabilistic model which maximizes the minimum probability that the predicted output lies within some bound of the true regression function. The proposed study gives an equation for prediction of the longitudinal dispersion coefficient based on the developed MARS model, which is compared with the proposed MPMR model. Finally, the performances of the models are measured by different performance metrics.
Keywords: Longitudinal Dispersion Coefficient; Natural Streams; Minimax Probability Machine Regression; Prediction; Multivariate Adaptive Regression Spline.
A Brain-like Cognitive Process with Shared Methods
by Kieran Greer
Abstract: This paper describes a new entropy-style of equation that may be useful in a general sense, but can be applied to a cognitive model with related processes. The model is based on the human brain, with automatic and distributed pattern activity. Methods for carrying out the different processes are suggested. The main purpose of this paper is to reaffirm earlier research on different knowledge-based and experience-based clustering techniques. The overall architecture has stayed essentially the same and so it is the localised processes or smaller details that have been updated. For example, a counting mechanism is used slightly differently, to measure a level of cohesion instead of a correct classification, over pattern instances. The introduction of features has further enhanced the architecture and the new entropy-style equation is proposed. While an earlier paper defined three levels of functional requirement, this paper re-defines the levels in a more human vernacular, with higher-level goals described in terms of action-result pairs.
Keywords: Cognitive model; distributed architecture; entropy; neural network; concept tree.
Cross-corpus Classification of Affective Speech
by Imen Trabelsi, Mohammed Salim Bouhlel
Abstract: Automatic speech emotion recognition still has to overcome several obstacles before it can be employed in realistic situations. One of these barriers is the lack of suitable training data, both in quantity and quality. The aim of this study is to investigate the effect of cross-corpus data on automatic classification of emotional speech. In this work, feature vectors constituted by the Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal are used to train Support Vector Machines (SVM) and Gaussian mixture models (GMM). We evaluate on three different emotional databases from three different languages (English, Polish, and German) following three cross-corpus strategies. In the intra-corpus scenario, the accuracies were found to vary widely between 70% and 87%. In the inter-corpus scenario, the obtained average recall is 70.87%. The accuracies of the cross-corpus scenario were found to be below 50%.
Keywords: Cross corpus strategies; Speech emotion recognition; GMM; SVM; MFCC.
GA based efficient Resource allocation and task scheduling in multi-cloud environment
by Tamanna Jena, Jnyana Ranjan Mohanty
Abstract: Efficient resource allocation to balance load evenly in a heterogeneous multi-cloud computing environment is challenging. Resource allocation followed by competent scheduling of tasks is of crucial concern in cloud computing. Load balancing assigns incoming job-requests to resources evenly so that every involved resource is efficiently utilized. The number of cloud users is immense, the volume of incoming job-requests is arbitrary, and the data in cloud applications is enormous. Since resources in cloud computing are limited, it is challenging to deploy various applications with irregular capacities and functionalities in a heterogeneous multi-cloud environment. In this paper, Genetic Algorithm based task mapping followed by priority scheduling in a multi-cloud environment is proposed. The proposed algorithm has two important phases, namely mapping and scheduling. Rigorous simulations were performed on synthetic data for a heterogeneous multi-cloud environment, and the experimental results are compared with existing First In First Out (FIFO) mapping and scheduling. The results clearly show better performance of the entire system in terms of makespan time and throughput.
Keywords: Load Balancing; Task Scheduling; Cloud Computing; multi-cloud environment; Genetic Algorithm.
Efficient and Secure Approaches for Routing in VANETs
by Marjan Kuchaki Rafsanjani, Hamideh Fatemidokht
Abstract: Vehicular ad hoc networks (VANETs) are a particular type of mobile ad hoc network (MANET). These networks provide communication services between nearby vehicles and between vehicles and roadside infrastructure that improve road safety and provide travellers' comfort. Due to the characteristics of VANETs, such as self-organization, low bandwidth, variable network density, rapid changes in network topology, support for safe driving and enhanced traffic efficiency, and due to their applications, problems related to these networks, such as routing and security, are popular research topics. A lot of research has been performed on providing efficient and secure routing protocols. In this paper, we investigate and compare various routing protocols based on swarm intelligence and key distribution in VANETs.
Keywords: Vehicular ad hoc networks (VANETs); Swarm intelligence; Routing protocols; Cryptography.
Analysis of Energy Efficiency Based on Shortest Route Discovery in Wireless Sensor Network
by Mohit Mittal
Abstract: Today's scenario is totally based on the advancement of existing technologies to achieve more reliable wireless communication. Wireless sensor networks are one of the popular emerging technologies and are commonly deployed in harsh environments. These networks depend mainly on battery power, so our mission is to reduce energy consumption as much as possible; every routing protocol for sensor networks has been designed around minimum energy consumption. In this paper, the LEACH protocol is modified with various shortest-path algorithms to find the best-performing configuration of the sensor network. Simulation results show that the Dijkstra algorithm performs better than the other algorithms.
Keywords: LEACH; Energy efficiency; Bellman-ford algorithm; Dijkstra algorithm; BFS algorithm.
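The shortest-route comparison above can be illustrated with a minimal Dijkstra sketch over an energy-weighted topology. The network, node names and link costs below are hypothetical stand-ins, not data from the paper:

```python
import heapq

def dijkstra(graph, source):
    """Return minimum-cost (e.g. per-hop energy) distances from source.

    graph: dict mapping node -> list of (neighbour, link_cost) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical sensor network: edge weights model per-hop energy cost.
network = {
    "sink": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 1.0)],
    "c": [],
}
# The cheaper route to b is sink -> a -> b (cost 3), not the direct link.
assert dijkstra(network, "sink") == {"sink": 0.0, "a": 2.0, "b": 3.0, "c": 4.0}
```

The same routine could be swapped for Bellman-Ford or BFS to reproduce the kind of comparison the abstract describes.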
Optimum Generation and VAr Scheduling on a Multi-Objective Framework using Exchange Market Algorithm
by Abhishek Rajan, T. Malakar
Abstract: This paper presents an application of the Exchange Market Algorithm (EMA) to solving multi-objective optimization problems in power systems. This optimization algorithm is based on the activities of shareholders seeking to maximize their profit in the exchange market. The uniqueness of this algorithm lies in the fact that it enjoys double exploitation and exploration properties, unlike several other algorithms. In order to investigate its search capability, the EMA is utilized to solve active- and reactive-power-related objectives simultaneously in the presence of several non-linear constraints. Both the optimum generation and VAr planning problems are formulated as conventional Optimal Power Flow (OPF) problems. Fuel cost (an active-related objective), transmission line loss and total voltage deviation (reactive-related objectives) are taken as the objective functions. The multi-objective optimization is performed through a weighted-sum approach, and both fuzzy and equal-weight approaches are utilized to declare the compromised solution. Programs are developed in MATLAB and simulations are performed on the standard IEEE-30 and IEEE-57 bus systems. The search capability of EMA in solving the multi-objective power system problems is compared with PSO-based solutions.
Keywords: Optimal Power Flow; Exchange Market algorithm; multi-objective optimization; Pareto front; fuzzy decision making.
A Novel Three-Tier Model with Group Based CAC for Effective Load Balancing in Heterogeneous Wireless Networks
by Kalpana S, Chandramathi S, Shriram KV
Abstract: Seamless and ubiquitous connections are the ultimate objectives of 4G technologies. But due to randomised mobility and different service class of applications, the connection failure rate increases, which can be overcome through handover (HO). With the increased demand for handovers, the number of networks scanned for decision making and the number of negotiations for connectivity become too large. To improve their efficiency, a three tier model is proposed, where requests for similar type are grouped and a common negotiation is made to reduce the number of communication messages. Only qualified networks among all the reachable access points are chosen for decision. Handover need estimation is performed to reduce the unwanted handovers. Finally, an adaptive resource management is made possible through a group based call admission control (GB-CAC) algorithm that harmonises up to 50 percent of the resource utilisation, ensuring higher numbers of connections with negligible percent call blocking and dropping.
Keywords: Point of Attachment; handover; candidate networks; elimination factor; queues; Quality of Service; Smart Terminal.
Knowledge based Semantic Discretization using Data Mining Techniques
by Jatinderkumar R. Saini, Omprakash Chandrakar
Abstract: Discretization is an important and, sometimes, essential pre-processing step for data mining. Certain data mining techniques, such as Bayesian networks, induction rules or association rule mining, can be applied only to discretized nominal data, and various studies show significant improvement for certain techniques when applied to discretized rather than continuous data. Several discretization methods based on statistical techniques have been reported in the literature. Such statistical techniques are inadequate for capturing and exploiting the underlying knowledge inherent in the data and the context of study; big data with high dimension, and the unavailability of any a priori knowledge of the study context, make the situation worse. To overcome this limitation, we propose a novel knowledge-based semantic discretization method using data mining techniques, in which discretization is based on semantic data. Semantic data is the domain knowledge inherent in the data itself and in the context of the study. Unlike semantic data mining, no explicit ontology is associated with the data in semantic discretization; it is therefore a challenging task to identify, capture, interpret and exploit the semantic data. This study presents the novel concept of semantic discretization and demonstrates the application of data mining techniques in extracting semantic data, which is further used in knowledge-based semantic discretization. We show the effectiveness of the proposed methodology by applying it to the Pima Indian Diabetes dataset, a standard dataset taken from the UCI Machine Learning repository.
Keywords: Association rule mining; Data mining; Discretization; Machine learning; Pima Indian Diabetes Dataset; Prediction Model; Semantic Discretization; Type-2 Diabetes.
Intricacies in Image steganography and Innovative Directions
by Krishna Veni, Sudhakar P
Abstract: With the advancement of digital communication and data sets growing huge due to the computerization of data gathering worldwide, the need for data security in transmission also increases. Cryptography and steganography are well-known methods for providing security: the former transforms information to conceal its contents, while the latter conceals the very presence of the data. Steganography is the practice of masking data, especially multimedia data, within other data. Visual content attracts more attention from people than audio content, and a visual content file is also huge compared to an audio file, which helps increase the robustness of hiding algorithms. In this paper, we propose image steganography algorithms in three domains: a rule-based LSB substitution method in the spatial domain, XOR-based hiding in the frequency domain, and data encryption standard based embedding in the wavelet domain. Experiments on the USC-SIPI image database demonstrate the improvement of these algorithms over traditional ones: the proposed algorithms achieve a better PSNR value, averaging close to 53 after embedding the secret data, while the existing algorithms achieve values of around 50.
Keywords: Peak Signal to Noise Ratio; Quantization; Discrete Cosine Transformation; Wavelet; Steganalysis; cipher text.
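As a rough illustration of the spatial-domain idea, plain (not rule-based) LSB substitution can be sketched as follows. The pixel values and payload are invented for the example; the paper's actual rule-based variant is not reproduced here:

```python
def embed_lsb(pixels, bits):
    """Embed a bit string into the least significant bits of pixel values."""
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # replace the LSB with a payload bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the first n_bits least significant bits back out."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

cover = [120, 121, 122, 123, 124, 125, 126, 127]  # hypothetical grey levels
secret = "1011"
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
# Each pixel changes by at most 1, which is why LSB distortion (and hence
# the PSNR penalty the abstract reports) stays small.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```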
Fuzzy Soft Set Approach for Classifying Malignant and Benign Breast Tumors
by Sreedevi Saraswathy Amma, Elizabeth Sherly
Abstract: Breast cancer is one of the most common health problems faced by women all over the world, and mammography is an effective technique for its early detection. This work concentrates on developing machine learning algorithms, combined with a mathematical model, for classifying images in digital mammograms as malignant or benign. The mathematical concept of fuzzy soft set theory is advocated here, which extends crisp and fuzzy sets with parameterization. Even though fuzzy and other soft computing techniques have made great progress in solving complex systems that involve uncertainty, imprecision and vagueness, the theory of soft sets opens up a new way of managing uncertain data with parameterization. The classification is performed using a fuzzy soft aggregation operator to identify an abnormality in a mammogram image as malignant or benign. This is a fully automated computer-aided detection method which involves automated noise removal, pectoral muscle removal, segmentation of the ROI, identification of micro-calcification clusters, feature extraction and feature selection, followed by classification. The experiment, performed on images from the MIAS dataset, resulted in 95.12% accuracy.
Keywords: Digital Mammography; computer-aided diagnosis (CAD); fuzzy soft set theory; fuzzy c-means; NL-means; fuzzy soft aggregation operator.
Using Lego EV3 to Explore Robotic Concepts in a Laboratory
by Jeffrey W. Tweedale
Abstract: During a recent Massive Open On-line Course (MOOC) at the Queensland University of Technology (QUT) titled An Introduction to Robotics, a young student used the forum to ask what skills are required to gain employment. The resounding response was the need for multiple disciplines, typically including mechatronics, software, mechanical and electrical/electronics engineering. Similarly, the curriculum focused on professional systems and the scientific rigour involved in their evolution. This limits the growing community of enthusiasts and keen observers seeking greater involvement, as they are often constrained by the lack of Science, Technology, Engineering and Maths (STEM) skill sets. For these reasons, a means of accelerating the learning of key concepts is required, as well as a mechanism for providing cheap and reliable access to the necessary tools and techniques. Although LEGO Mindstorms is considered a toy that has traditionally been targeted at children aged 8-14, it does cater for enthusiasts and is increasingly being used to support STEM initiatives. Because of its low cost and availability, Mindstorms was recently used as the focal solution in the MOOC to enable every student to demonstrate robotic concepts independent of the pre-requisite skills. This raises a new question about how effectively LEGO can be used to explore robotic concepts in a laboratory. The course shows it can be used for sensor development, and it was successfully used to enhance conceptual learning for the uninitiated (enthusiast, interested observer, undergraduate, post-graduate and even those being integrated within the domain).
Keywords: Cartesian Coordinates; Forward Kinematics; Inverse Kinematics; Lego; Mindstorms; Robotics.
Applying Genetic Algorithm to Optimize the Software Testing Efficiency with Euclidean Distance
by Rijwan Khan
Abstract: Software testing ensures that developed software is error-free and reliable for customer use. For verification and validation of software products, testing is applied to these products across different software industries, so before delivery of the software to the customer all types of testing have been applied. In this paper, test cases are generated automatically with the help of a genetic algorithm for data flow testing, and these tests are divided into groups using Euclidean distance. Elements of each group are applied to the data flow diagram of the program/software, and all the du-paths covered by the given test suites are found. New test suites are then generated with the help of the genetic algorithm to cover all du-paths.
Keywords: Software Testing; Automatic test cases; Data flow testing; Genetic Algorithm.
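The grouping step can be pictured with a minimal sketch that partitions numeric test-case vectors by Euclidean distance to a set of seed points. The vectors and seeds below are invented for illustration; the paper's GA-generated test cases are not reproduced:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_by_nearest_seed(test_cases, seeds):
    """Partition test-case vectors by their nearest seed (Euclidean)."""
    groups = {i: [] for i in range(len(seeds))}
    for tc in test_cases:
        nearest = min(range(len(seeds)), key=lambda i: euclidean(tc, seeds[i]))
        groups[nearest].append(tc)
    return groups

# Hypothetical two-dimensional test-case input vectors and two seeds.
cases = [(1, 1), (2, 2), (9, 9), (10, 8)]
seeds = [(0, 0), (10, 10)]
groups = group_by_nearest_seed(cases, seeds)
assert groups == {0: [(1, 1), (2, 2)], 1: [(9, 9), (10, 8)]}
```

Each resulting group would then be exercised against the program's data flow diagram, as the abstract describes.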
How can Reasoning improve ontology-based Context-Aware systems?
by Hatim Guermah, Tarik Fissaa, Bassma Guermah, Hatim Hafiddi, Mahmoud Nassar, Abdelaziz Kriouile
Abstract: Over the past two decades, the large evolution of software engineering, telecommunication and pervasive devices has led to the emergence of a new vision of development aiming at building systems that meet more complex and personalized needs, known as context-aware systems. This type of system is becoming the next computing paradigm, in which infrastructure and services are sensitive to any change of context, and thus plays a crucial role in providing interactive intelligent environments. In parallel, a contextual situation refers to a higher level of information inferred from the different context data flows that can be extracted from physical and virtual sensors. The power of using situations lies in their ability to provide a simple and comprehensible representation of a context property, which shields the services that manipulate them from the complexity of sensor readings, data transmission errors and inferencing activities. In this work, we aim to explore the added value of using ontology-based reasoning, focusing on first-order logic and fuzzy logic, to produce contextual situations.
Keywords: Context; Context-Aware; Situation; Semantic Web; Ontologies; Context modeling; First Order Reasoning; Fuzzy logic Reasoning; inference and Reasoning.
Fractional Inverse Full State Hybrid Projective Synchronization
by Adel Ouannas, Ahmad Taher Azar, Toufik Ziar
Abstract: Referring to fractional-order systems, this paper investigates the inverse full state hybrid projective synchronization (IFSHPS) of non-identical systems characterized by different dimensions and different orders. By taking a master system of dimension $n$ and a slave system of dimension $m$, the method enables each master system state to be synchronized with a linear combination of slave system states, where the scaling factors of the linear combination can be arbitrary real constants. Based on the fractional Lyapunov approach and the stability theory of linear fractional-order systems, the method enables commensurate and incommensurate fractional-order systems with different dimensions to be synchronized. Two different numerical examples are reported. The examples clearly highlight the capability of the conceived approach in effectively achieving synchronized dynamics for any scaling constants.
Keywords: Full state hybrid projective synchronization; Fractional chaos; Incommensurate and commensurate systems; Fractional Lyapunov approach.
Dominion Algorithm - A novel metaheuristic optimization method
by Bushra Alhijawi
Abstract: In this paper, a novel bio-inspired and nature-inspired algorithm, namely the Dominion Algorithm, is proposed for solving optimization tasks. The fundamental concepts and ideas which underlie the proposed algorithm are inspired by nature and based on observation of the social structure and collective behavior of wolf packs in the real world. Several experiments were performed to evaluate the proposed algorithm and examine the correlation between its main parameters.
Keywords: Dominion Algorithm; Metaheuristic methods; Biologically-inspired algorithm; Artificial intelligence.
Fitness Inheritance in Multi-objective Genetic Algorithms: A Case Study on Fuzzy Classification Rule Mining
by Harihar Kalia, Satchidananda Dehuri, Ashish Ghosh
Abstract: In this paper, the trade-off between accuracy and interpretability in fuzzy rule-based classifiers is examined through the incorporation of fitness inheritance in multi-objective genetic algorithms. The aim of this mechanism is to reduce the number of fitness evaluations by estimating the fitness value of an offspring individual from the fitness values of its parents. The multi-objective genetic algorithm with this efficiency enhancement technique is a hybrid of the Michigan and Pittsburgh approaches. Each fuzzy rule is represented by its antecedent fuzzy sets as an integer string of fixed length, and each fuzzy rule-based classifier, which is a set of fuzzy rules, is represented as a concatenated integer string of variable length. Our algorithm simultaneously maximizes the accuracy of rule sets and minimizes their complexity (i.e., maximizes interpretability). As a result of adopting fitness inheritance, it minimizes the total fitness computation time (i.e., the overall time to generate a rule set). Accuracy is measured by the number of correctly classified training samples, while rule complexity is measured by the number of fuzzy rules and/or the total number of antecedent conditions of the fuzzy rules. We examine our method through computational experiments on some benchmark datasets. The experimental outcome confirms that the proposed method reduces the computational cost without decreasing the quality of the results in a significant way.
Keywords: Classification; fuzzy classification; multi-objective genetic algorithm; fitness inheritance; accuracy; interpretability.
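The inheritance mechanism itself is simple to sketch: with some probability, an offspring's fitness is estimated as the mean of its parents' fitnesses instead of being evaluated. The sphere function and parameter names below are illustrative stand-ins, not the paper's fuzzy-rule objectives:

```python
import random

def evaluate(individual):
    """Stand-in for an expensive true fitness function (here, the sphere)."""
    return sum(x * x for x in individual)

def offspring_fitness(parent_fits, child, inherit_prob, rng):
    """With probability inherit_prob, estimate the child's fitness as the
    mean of its parents' fitnesses; otherwise evaluate it for real.
    Returns (fitness, inherited_flag)."""
    if rng.random() < inherit_prob:
        return sum(parent_fits) / len(parent_fits), True
    return evaluate(child), False

rng = random.Random(0)
parents = ([1.0, 2.0], [3.0, 0.0])
parent_fits = tuple(evaluate(p) for p in parents)  # (5.0, 9.0)
child = [2.0, 1.0]

# With inherit_prob=1.0 the child is never evaluated: fitness = (5+9)/2.
fit, inherited = offspring_fitness(parent_fits, child, 1.0, rng)
assert fit == 7.0 and inherited
# With inherit_prob=0.0 the child is always truly evaluated: 4 + 1.
fit, inherited = offspring_fitness(parent_fits, child, 0.0, rng)
assert fit == 5.0 and not inherited
```

The fraction of inherited evaluations is what drives the computation-time saving the abstract reports.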
Geometric Based Histograms for Shape Representation and Retrieval
by Nacera Laiche, Slimane Larabi
Abstract: In this paper, we present a new approach for shape representation and retrieval based on histograms. At the core of the proposed histogram descriptor is the concept of curve points; this makes the approach quite different, since the geometric description is stored directly in the histograms. The proposed description is not only effective and invariant to geometric transformations and deformations, but is also insensitive to articulations and occluded shapes, as it has the advantage of exploiting the geometric information of points. The generated histograms are then used to match shapes by comparing their histograms using dynamic programming. Experimental results of shape retrieval on different kinds of shape databases show the efficiency of the proposed approach compared with existing shape matching algorithms in the literature.
Keywords: Log-polar histogram; Least squares curve; High curvature points; Shape description; Shortest augmenting path algorithm; Shape retrieval.
Improved Biogeography-based Optimization
by Raju Pal, Mukesh Saraswat
Abstract: Biogeography-based optimization (BBO) is one of the popular evolutionary algorithms, inspired by the theory of island biogeography. It has been successfully applied to various real-world optimization problems such as image segmentation, data clustering, combinatorial problems, and many more. BBO finds the optimal solution using its two main operators, namely migration and mutation. However, it sometimes gets trapped in local optima and converges slowly, due to the poor population diversity generated by the mutation operator. Moreover, the single-feature migration property of BBO gives poor performance on non-separable functions. Therefore, this paper introduces a new variant of BBO, known as improved BBO (IBBO), by enhancing its migration and mutation operators.
The proposed variant successfully improves the population diversity and convergence behavior of BBO as well as shows better solutions for non-separable functions. The performance of proposed variant has also been compared and analyzed with other existing algorithms over 20 benchmark functions.
Keywords: Evolutionary algorithm; Biogeography-based optimization; Migration operator; Mutation operator.
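For readers unfamiliar with the baseline operator IBBO modifies, standard BBO migration copies features from good habitats (high emigration rate) into poor ones (high immigration rate). The rank-linear rates and tiny population below are a textbook-style sketch, not the paper's IBBO variant:

```python
import random

def migrate(population, fitnesses, rng):
    """One standard BBO-style migration sweep (minimisation).

    Habitats are ranked by fitness; immigration rate lam is high for poor
    habitats, emigration rate mu is high for good ones.
    """
    n = len(population)
    order = sorted(range(n), key=lambda i: fitnesses[i])      # best first
    rank = {idx: r for r, idx in enumerate(order)}
    mu = [1.0 - (rank[j] + 1) / n for j in range(n)]          # emigration
    total = sum(mu)
    new_pop = [list(h) for h in population]
    for i in range(n):
        lam = (rank[i] + 1) / n                               # immigration
        for d in range(len(population[i])):
            if rng.random() < lam:
                # roulette-wheel choice of an emigrating habitat
                pick, acc = rng.random() * total, 0.0
                for j in range(n):
                    acc += mu[j]
                    if acc >= pick:
                        new_pop[i][d] = population[j][d]       # copy feature
                        break
    return new_pop

rng = random.Random(1)
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
fits = [0.0, 1.0, 2.0]  # habitat 0 is best, habitat 2 worst
new_pop = migrate(pop, fits, rng)
# Every feature in the new population is copied from some existing habitat.
assert all(x in (0.0, 1.0, 2.0) for h in new_pop for x in h)
```

Note the single-feature copying in the inner loop: this is exactly the per-dimension migration the abstract identifies as weak on non-separable functions.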
Sequential Pattern based Activity Recognition model for Ambient Computing
by GITANJALI J, Muhammad Rukunuddin Ghalib
Abstract: In recent years, human activity recognition has gained popularity in ambient computing. Human activity recognition consists of identifying the daily activities of users by observing their actions; identifying actions from the data generated by each sensor is a more complex task. In this paper, sequential pattern based activity recognition is proposed for identifying sequential patterns among actions in a given dataset, with a support value used as a parameter to validate each sequence. The experimental evaluation is performed on a real-time dataset, and it is observed that the sequential pattern approach is very beneficial in reducing execution time and increasing the classification accuracy of the classifiers.
Keywords: Action; Activity; sensor based data; sequence patterns; classifiers.
Evaluation of Large Shareholders Monitoring or Tunneling Behavior in Companies Accepted in Tehran Stock Exchange
by Sahar Mojaver
Abstract: Shareholders' wealth is very important in the real world of finance, and the focus on it has grown in recent years. Although the purpose of each investment, and consequently the main purpose of each company, is maximizing shareholder wealth, over the past decades most companies have not paid enough attention to it. Ownership composition, particularly the ownership concentration of majority shareholders, is one of the most important factors influencing the control and management of companies. When large shareholders or internal shareholders such as managers have the capacity to control the company, they may have incentives to extract private benefits. Given the importance of the monitoring and behavior of controlling shareholders, this study investigates the monitoring or tunneling behavior of large shareholders in companies accepted on the Tehran Stock Exchange. To do so, 125 companies over the period 2010 to 2011 (a total of 750 company-years) are analyzed using a systematic elimination sampling method. Results show that there is a significant relationship between large shareholders' tunneling behavior and financial performance (return on equity and Tobin's Q indexes) in companies accepted on the Tehran Stock Exchange, and this relationship is U-shaped.
Keywords: Tunneling Behavior; Large Shareholders; Companies Accepted in Tehran Stock Exchange.
A practical approach to Energy Consumption in Wireless Sensor Networks
by Sonam Khera, Neelam Turk, Navdeep Kaur
Abstract: A Wireless Sensor Network (WSN) is a network formed by a large number of spatially distributed, wirelessly communicating sensor nodes deployed for remote environment monitoring. These networks are deployed to perform various sensing operations, such as measurement of temperature, pressure, vibration and humidity, in environments where human intervention is not possible. Once deployed, the WSN performs its functions while consuming energy from the limited power source installed in the sensor nodes. Due to the inaccessibility of the sensor nodes, these power sources are non-replaceable once the nodes are deployed in the physical environment, so the energy consumption of sensor nodes plays a significant role in determining the lifetime of a WSN. Various studies have used simulation environments to increase network lifetime by reducing energy consumption; in our previous studies we observed that a controlled software environment is typically created with modelling tools and simulators such as MATLAB, NS2 and OMNET++. Though simulation and modelling in a software environment is convenient in terms of scalability and for exploring various scenarios, it lacks exposure to the real-time issues faced during actual deployment. This paper is based on our experience of creating a physical WSN test bed to gain first-hand knowledge of real-world deployment. The test bed has been designed to explore practical aspects of energy consumption in sensor networks; it monitors the temperature at different locations in a building. In this paper we also cover different scenarios for analysing the energy consumption of our WSN test bed.
Keywords: WSN; wireless sensor network; sensor; energy efficiency; power consumption; sleep mode; testbed.
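The lifetime question the abstract raises is usually reasoned about with a first-order radio energy model. The sketch below is a generic illustration of that model; the constants are textbook assumptions, not the testbed's measured figures.

```python
# First-order radio energy model often assumed in WSN lifetime studies.
# Parameter values are illustrative assumptions, not measurements.

E_ELEC = 50e-9      # J/bit, transceiver electronics energy
E_AMP  = 100e-12    # J/bit/m^2, amplifier energy (free-space model)

def tx_energy(bits, distance):
    """Energy (J) to transmit `bits` over `distance` metres."""
    return E_ELEC * bits + E_AMP * bits * distance ** 2

def rx_energy(bits):
    """Energy (J) to receive `bits`."""
    return E_ELEC * bits

def rounds_until_depletion(battery_j, bits_per_round, distance):
    """Rounds a node survives if it sends one packet per round."""
    return int(battery_j // tx_energy(bits_per_round, distance))
```

The quadratic distance term is why clustering and sleep modes matter: shortening the average transmit distance dominates the savings.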
Local Patterns for Offline Arabic Handwritten Recognition
by Yasser Qawasmeh, Sari Awwad, Ahmed Otoom, Feras Hanandeh, Emad Abdallah
Abstract: Off-line recognition of Arabic handwritten text is a challenging problem due to the cursive nature of the language and the high inter- and intra-writer variability. The majority of existing approaches are based on structural and statistical features and are constrained to a specific task with a vast number of pre-processing steps. In this paper, we explore the performance of local features for unconstrained offline Arabic text recognition with no prior assumptions or pre-processing steps. Our approach is based on local SIFT features. To capture important information and remove redundancy, we apply a Fisher encoding algorithm and a dimensionality reduction approach, Principal Component Analysis (PCA). The resulting features are combined with a contemporary Support Vector Machine (SVM) classifier and tested on a dataset of 12 classes. We obtain substantial improvements in recall and precision over SIFT features alone and over SIFT features with other encoding algorithms, with more than 35% improvement under a 5-fold cross-validation test.
Keywords: Local Features; Offline Recognition; Arabic Handwriting; Fisher Encoding.
A Supervised Learning Approach for Link Prediction in Complex Social Networks
by Upasana Sharma
Abstract: The use of internet-based social media for establishing links with family, friends and customers has become very popular. Social networking platforms such as Facebook, Twitter and LinkedIn now serve both social and business purposes, and new links are created every fraction of a second. Predicting future links is a major challenge in the link prediction domain. Various techniques proposed in the past are based on similarity, maximum likelihood estimation and machine learning. The focus of this work is a supervised machine learning approach for link prediction in complex social networks. Previous supervised approaches have used only unweighted networks; our aim is to assign a weight to each connection. The weight represents the strength of a connection and improves the accuracy of the link predictor. This paper introduces a new approach using the closed-triangle concept to recommend future links in social networks. Extensive experiments on a real YouTube dataset show that the proposed technique performs well.
Keywords: Link Prediction; Social Networks; Artificial Neural Network; Supervised Learning Approach; Learning Algorithms.
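The closed-triangle idea behind this abstract can be made concrete: a candidate pair that closes many strong triangles through shared neighbours scores high. The sketch below is a minimal weighted common-neighbour scorer on a toy graph, not the paper's YouTube pipeline or its neural-network classifier.

```python
# Weighted closed-triangle score for link prediction (illustrative).
# A graph is a dict: node -> {neighbour: edge_weight}.

def closed_triangle_score(graph, u, v):
    """Sum of min edge weights over the common neighbours of u and v."""
    common = set(graph.get(u, {})) & set(graph.get(v, {}))
    return sum(min(graph[u][w], graph[v][w]) for w in common)

graph = {
    "a": {"b": 3, "c": 1},
    "b": {"a": 3, "c": 2, "d": 1},
    "c": {"a": 1, "b": 2, "d": 4},
    "d": {"b": 1, "c": 4},
}
# "a" and "d" are not linked but share neighbours "b" and "c".
score_ad = closed_triangle_score(graph, "a", "d")
```

In a supervised setting such scores become features, and a classifier is trained on pairs labelled by whether the link later appeared.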
Trust Based Quality Awareness Using Combinatorial Auction Web Service Selection In Service Based Systems
by Suvarna Pawar, Prasanth Yalla
Abstract: The service-oriented paradigm supports the engineering of service-based systems (SBSs) through service composition, where existing services are composed to create new ones. The selection of services that fulfil the quality constraints becomes critical and challenging to the success of SBSs, especially when the constraints are stringent. However, none of the existing approaches for quality-aware service composition has sufficiently considered the following two critical issues for increasing the success rate of finding a solution: 1) the complementarities between services; and 2) the competition among service providers. This paper proposes a novel approach called combinatorial auction for service selection (CASS) to support effective and efficient service selection for SBSs based on combinatorial auction. In CASS, service providers can bid for combinations of services and apply discounts or premiums to their offers for the multi-dimensional quality of the services. Based on the received bids, CASS attempts to find a solution that achieves the SBS owner's optimization goal while fulfilling all quality constraints for the SBS. When a solution cannot be found from the current bids, the auction iterates so that service providers can improve their bids to increase their chances of winning.
Keywords: Combinatorial auction; Quality of service; Service composition; Service selection; Trust.
Computational Modelling of Cerebellum Granule Neuron Temporal Responses for Auditory and Visual Stimuli
by Arathi Rajendran, Asha Vijayan, Chaitanya Medini, Bipin Nair, Shyam Diwakar
Abstract: Sensorimotor signals from the cerebral cortex modulate the pattern-generating capabilities of the cerebellum. To better understand the functional integration of multisensory information by single granule neurons and the role of multimodal information in the cerebellum's motor guidance, we modelled the granular layer microcircuit of the cerebellum and analysed the encoding of information during auditory and visual stimuli. A multi-compartmental granule neuron model comprising excitatory and inhibitory synapses was used, and in vivo-like behaviour was modelled with short and long bursts. Changes in the model's intrinsic parameters helped quantify the effect of spike-time-dependent plasticity on granule neuron firing. Computer simulations indicate a correlation of output patterns with temporal excitatory stimuli. We examined the role of induced plasticity and of the granular layer in the sparse recoding of auditory and visual inputs, and the model predicts how plasticity mechanisms affect the average amount of information transmitted through single granule neurons during multimodal stimuli.
Keywords: Cerebellum; Computational Neuroscience; Auditory; Visual; Plasticity; Sparse Coding.
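The burst-evoked spiking studied in this abstract can be illustrated with a far simpler stand-in than the paper's multi-compartmental model: a leaky integrate-and-fire neuron. All constants below are illustrative assumptions, not the paper's fitted parameters.

```python
# Leaky integrate-and-fire neuron driven by an input current trace.
# A toy substitute for a multi-compartmental granule neuron model.

def lif_spikes(current, dt=0.1, tau=10.0, v_rest=-70.0,
               v_thresh=-54.0, v_reset=-70.0, r_m=10.0):
    """Integrate the membrane equation; return the spike count."""
    v = v_rest
    spikes = 0
    for i_in in current:
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:           # threshold crossing -> spike, then reset
            spikes += 1
            v = v_reset
    return spikes

# A strong short burst drives spiking; weak sustained input does not.
burst = [3.0] * 200 + [0.0] * 200   # 20 ms of strong drive, then silence
weak  = [0.5] * 400                 # sub-threshold drive throughout
```

The contrast between the two inputs mirrors the short-burst vs. long-burst coding regimes the abstract describes.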
Resource discovery in inter-cloud environment: A Review
by Mekhla Sharma, Ankur Gupta, Jaiteg Singh
Abstract: The inter-cloud has emerged as a logical evolution of cloud computing, extending computational scale and geographic boundaries through collaboration across individual Cloud Service Providers (CSPs). Resource discovery in this large-scale, distributed and highly heterogeneous environment remains a fundamental challenge for effective cross-utilization of resources and services. This review examines various resource discovery approaches in the inter-cloud, outlining their key challenges. Finally, the paper presents some ideas for building effective and efficient resource discovery strategies for the inter-cloud.
Keywords: inter-cloud resource discovery; inter-cloud challenges; resource discovery challenges; resource discovery approaches.
Building a Simulated Educational Environment for the Diagnosis of Lumbar Disk Herniation Using Axial View MRI Scans
by Mohammad Alsmirat, Khaled Alawneh, Mahmoud Al-Ayyoub, Mays Al-dwiekat
Abstract: Computer-aided diagnosis systems have been the focus of many research endeavors. They are based on the idea of processing and analyzing various types of inputs (such as a patient's medical history, physical examination results, and images of different parts of the human body) to help physicians reach a quick and accurate diagnosis. In addition to being a great asset for any hospital (especially less fortunate ones with few or no radiologists), such systems represent invaluable platforms for educational and research purposes. In this work, we propose a system for the diagnosis of lumbar disk herniation from Magnetic Resonance Imaging (MRI) scans, and for training on this diagnosis. The proposed system makes three main novel contributions. First, it utilizes the axial MRI spine view of the suspected region instead of the sagittal view; the axial view is usually more accurate and provides more information about lumbar disk herniation. Second, instead of simply classifying cases as normal or abnormal, the proposed system can determine the type of lumbar disk herniation and pinpoint its location. To the best of our knowledge, this is the first work to address determining the type and location of lumbar disk herniation from the axial MRI spine view. The final contribution of this work is a simulated training environment that can be used to train novice radiologists in the diagnosis of lumbar disk herniation. The experiments conducted to evaluate the system show that it is quick and accurate, besides being very useful for training purposes.
Keywords: Axial MRI Spine View; Classification; Computer-aided Diagnosis; Feature Extraction; Lumbar Disk Herniation; ROI Enhancement; ROI Extraction.
Types of fuzzy graph coloring and polynomial ideal theory
by Arindam Dey, Anita Pal
Abstract: The graph coloring problem (GCP) is one of the most important optimization problems in graph theory. In real-life scenarios, many applications of graph coloring are fuzzy in nature. Fuzzy sets and fuzzy graphs can manage the uncertainty associated with the information of a problem where conventional mathematical models/graphs may fail to yield satisfactory results. To include those fuzzy properties in solving such problems, we have extended various classical graph coloring methods to fuzzy graph coloring methods. In this study, we describe three basic types of fuzzy graph coloring methods, namely fuzzy vertex coloring, fuzzy edge coloring and fuzzy total coloring. We introduce a method to color the vertices of a fuzzy graph using polynomial ideal theory and find the fuzzy vertex chromatic number of the fuzzy graph. A practical example of scheduling committee meetings is given to demonstrate our proposed algorithm.
Keywords: Fuzzy graph; Fuzzy coloring; Chromatic number; Polynomial ideal; Groebner basis.
Selective Harmonic Elimination Strategy in the Multilevel Inverters for Grid Connected Photovoltaic System
by Sihem Ghoudelbourk, Ahmad Taher Azar, Djalel Dib, Amar Omeiri
Abstract: In recent years, power electronic converters have been widely used in industrial as well as domestic applications for the control of power flow, automation and energy efficiency. Multilevel inverter topologies have several advantages, such as high output voltage, lower total harmonic distortion (THD) and reduced voltage ratings for the power semiconductor switching devices. This paper deals with a control strategy for multilevel converters in photovoltaic (PV) systems integrated into distribution grids. The objective of the proposed work is to design multilevel inverters for solar energy applications so as to reduce THD and improve power quality. Multilevel converters are very appropriate as the power interface for grid-connected PV systems, since grid-connected photovoltaic power plants are consistently increasing in power rating while the cost of photovoltaic modules is falling. The proposed control strategy implements selective harmonic elimination (SHE) modulation for 5, 7, 9 and 11 levels. This technique removes harmonics through judicious selection of the inverter firing angles and eliminates the need for expensive low-pass filters in the system. Previous research considered constant and equal DC sources with invariant behavior; in practice the sources may differ, giving an unequal-DC-source multilevel converter. Since the voltage levels depend on the available DC sources, it is possible to reduce the harmonic content in the unequal-DC-source case as well. This article addresses harmonic reduction for the multilevel converter with equal DC voltages and then extends it to unequal DC voltages.
Keywords: Multilevel inverter; Selective Harmonic Elimination (SHE); Total harmonic distortion (THD); Photovoltaic (PV); Battery.
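For a staircase multilevel waveform with equal DC sources, the standard Fourier expression for the n-th odd harmonic is b_n = (4*Vdc/(n*pi)) * sum_k cos(n*theta_k), and SHE picks the switching angles theta_k to zero selected harmonics. The sketch below evaluates that formula; the single-angle example is a textbook illustration (one angle of 18 degrees nulls the 5th harmonic), not one of the paper's 5- to 11-level solutions.

```python
import math

# Odd-harmonic amplitude of an equal-source staircase inverter waveform:
#   b_n = (4 * Vdc / (n * pi)) * sum(cos(n * theta_k))

def harmonic(n, angles, vdc=1.0):
    """Amplitude of the n-th odd harmonic for switching angles in radians."""
    return (4.0 * vdc / (n * math.pi)) * sum(math.cos(n * t) for t in angles)

# With a single switching angle of 18 deg, cos(5 * 18 deg) = cos(90 deg) = 0,
# so the 5th harmonic is eliminated exactly.
theta = [math.pi / 10]
h1 = harmonic(1, theta)   # fundamental amplitude
h5 = harmonic(5, theta)   # the eliminated harmonic
```

With more levels, one angle per level gives enough degrees of freedom to set the fundamental and null several low-order harmonics simultaneously, which is the system of transcendental equations SHE solves.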
Design and Analysis of SRRC filter in wavelet based multiuser environment of mobile WiMax
by Harpreet Kaur, Manoj Kumar, Ajay Sharma, Harjit P. Singh
Abstract: Wavelets, with their capability to provide simultaneous information in both the time and frequency domains along with interference minimization and improved bandwidth efficiency, are considered an efficient replacement for the Fast Fourier Transform (FFT) in conventional Orthogonal Frequency Division Multiplexing (OFDM) systems. To improve the quality of service (QoS) in such systems, spectrally efficient filter pulses are employed to mitigate inter-symbol interference (ISI) while satisfying the bandwidth limitations imposed by multipath fading channels. Moreover, allowing multiple users to share the transmission channel simultaneously aims at optimal resource allocation with acceptable error rates despite the undesirable effects of correlated fading in the channel. In this paper, a multi-user environment is simulated in wavelet-based OFDM for a WiMAX system, with SRRC pulses employed as transmit and receive filters to perform matched filtering. The performance in terms of bit error rate (BER) as a function of signal-to-noise ratio (SNR) is investigated for a varying number of users, comparing relative performance across various modulation schemes under an AWGN channel. The simulation outcome substantiates that the multiuser implementation, while overcoming co-channel interference, elevates channel capacity and meets higher data-rate demands along with effective utilization of spectral resources. The simulation model is developed in MATLAB.
Keywords: DWT; OFDM; Square Root Raised Cosine; Pulse shaping filter; multiuser; mobile WiMax.
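The SRRC pulse used for matched filtering has a standard closed-form impulse response. The sketch below implements that textbook formula (with its two removable singularities handled explicitly); the roll-off and sampling choices are illustrative, not the paper's simulation parameters.

```python
import math

# Square-root raised-cosine impulse response (textbook form).
# T = symbol period, beta = roll-off factor in (0, 1].

def srrc(t, T=1.0, beta=0.35):
    eps = 1e-12
    if abs(t) < eps:                      # removable singularity at t = 0
        return (1.0 - beta + 4.0 * beta / math.pi) / math.sqrt(T)
    if abs(abs(t) - T / (4.0 * beta)) < eps:   # singularity at t = +-T/(4*beta)
        a = math.pi / (4.0 * beta)
        return (beta / math.sqrt(2.0 * T)) * (
            (1.0 + 2.0 / math.pi) * math.sin(a)
            + (1.0 - 2.0 / math.pi) * math.cos(a))
    x = t / T
    num = (math.sin(math.pi * x * (1.0 - beta))
           + 4.0 * beta * x * math.cos(math.pi * x * (1.0 + beta)))
    den = math.pi * x * (1.0 - (4.0 * beta * x) ** 2)
    return num / (den * math.sqrt(T))

# A truncated filter: 8 samples per symbol, spanning +-4 symbol periods.
taps = [srrc(k * 0.125 - 4.0) for k in range(65)]
```

Using the same SRRC at transmitter and receiver makes the cascade a raised-cosine pulse, which satisfies the Nyquist zero-ISI criterion at the symbol instants.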
A hybrid approach for improving data classification based on PCA and enhanced ELM
by Doaa L. El-Bably, Khaled M. Fouad
Abstract: Efficiently and effectively extracting useful information from high-dimensional data is a problem worth studying: such data is so big and complex that it becomes difficult to process and classify. Dimensionality reduction (DR) is a key method for addressing these problems. This paper presents a hybrid approach for data classification constituted from the combination of principal component analysis (PCA) and an enhanced extreme learning machine (EELM). The proposed approach has two basic components. First, PCA, as a linear data reduction, is applied to reduce the number of dimensions by removing irrelevant attributes, speeding up the classification method and minimizing computational complexity. Second, EELM is performed by modifying the activation function of a single-hidden-layer feed-forward neural network (SLFN) to achieve a proper distribution of categories. The approach relies on a static determination of the reduced number of principal components. It is applied to several datasets and its effectiveness is assessed through different experiments. For further reliability, the proposed approach is compared with two previous works that used PCA and ELM in data analysis.
Keywords: Data mining; Data classification; Principal component analysis (PCA); Neural Network; Extreme Learning Machine (ELM).
Fuzzy Fault-Tolerant Control for doubly fed induction generator in wind energy conversion system
by Samir Abdelmalek, Ahmad Taher Azar, Djalel Dib
Abstract: Fault-tolerant control systems have received considerable interest in academic research. This paper presents an efficient fault-tolerant control (FTC) scheme for additive voltage measurement faults (AVMFs) of a controlled doubly-fed induction generator (DFIG) driving a wind energy conversion system (WECS). First, the nonlinear model of the DFIG is transformed into an equivalent Takagi-Sugeno (TS) fuzzy model using the sector nonlinearity approach (SNA). Then, based on the obtained generator model, a new FTC strategy is proposed to ensure the nominal performance and stability of the plant under AVMFs and noisy outputs (NOs). The proposed FTC strategy combines a fuzzy proportional-integral observer with the nominal and faulty system models. In addition, the stability of the closed-loop system is demonstrated by means of Lyapunov analysis formulated in terms of linear matrix inequalities (LMIs), proving the stability of the whole closed-loop system while attenuating the effects of the faults and noisy outputs. Finally, simulations in the MATLAB/Simulink environment highlight the performance of the designed FTC strategy and its robustness with respect to the occurrence of AVMFs.
Keywords: Fault-tolerant control; Additive Voltage Measurement Faults (AVMFs); Observer; Doubly-fed induction Generator (DFIG); Fuzzy Proportional Integral Observer.
A Comprehensive Review on Time Series Motif Discovery using Evolutionary Techniques
by Ramanujam Elangovan, Padmavathi S
Abstract: Time series data are produced daily in large quantities in virtually every field, and most are stored in time series databases. A time series motif is a frequent, recurrent or previously unknown pattern occurring in a time series database that is used to aid decision making. Time series motif mining is also a useful preprocessing step for techniques such as classification and clustering. Recently, diverse techniques have been proposed for time series motif discovery. This paper surveys time series motif discovery using evolutionary techniques on various real-world data and their characteristics. The primary aim of this review is to provide a glossary for researchers interested in time series motif discovery and to aid them in identifying potential research directions using evolutionary techniques.
Keywords: Time Series; Motif; Data Mining; Evolutionary techniques; Genetic Algorithm.
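The target the surveyed evolutionary techniques approximate can be stated exactly as brute-force 1-motif discovery: the closest pair of non-overlapping subsequences under Euclidean distance. The O(n^2) sketch below defines that target on a toy series with a planted pattern.

```python
# Brute-force 1-motif discovery: find the pair of non-overlapping
# subsequences of length m with the smallest Euclidean distance.
# Evolutionary methods replace this exhaustive scan with guided search.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_motif(series, m):
    """Return (i, j, dist) for the closest non-overlapping pair."""
    best = (None, None, float("inf"))
    n = len(series)
    for i in range(n - m + 1):
        for j in range(i + m, n - m + 1):   # j >= i + m skips trivial matches
            d = euclidean(series[i:i + m], series[j:j + m])
            if d < best[2]:
                best = (i, j, d)
    return best

# The pattern [1, 5, 1] is planted at positions 1 and 7.
ts = [0, 1, 5, 1, 0, 2, 9, 1, 5, 1, 0]
i, j, d = find_motif(ts, 3)
```

Excluding overlapping ("trivial") matches is essential: without the `j >= i + m` constraint, every subsequence would be nearest to its own one-step shift.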
Performance index assessment of intelligent computing methods in e-learning systems
by Aditya Khamparia, Babita Pandey
Abstract: With today's rapidly advancing smart technologies, e-learning systems occupy a dominant position among learning styles. Many research studies have evaluated e-learning systems using criteria such as prediction accuracy, satisfaction degree and pre-post analysis, but none has established a common methodology for appraising such systems. The proposed work addresses the lack of common benchmarks for evaluating e-learning system performance by including Importance (I) and Complexity (CC), and also determines measurements for different learning problems and learning techniques. Finally, the Performance Index (PI) is computed on the basis of I and CC and is presented graphically in a comparative view of I, CC and PI for all the models.
Heterogeneous Mixing of Dynamic Differential Evolution Variants in a Distributed Framework for Global Optimization Problems
by G. Jeyakumar, C. Shunmuga Velaytham
Abstract: Differential Evolution (DE) is a real-parameter optimization algorithm in the family of evolutionary computing, well known for its simplicity and robustness. Dynamic Differential Evolution (DDE) was proposed in the literature as an extension of DE that alleviates DE's static population update mechanism. Since island-based distributed models are the natural way to parallelize DE with a structured population, they can also be extended to DDE. This paper first implements distributed versions of 14 DDE variants and then proposes an algorithm, hmDDEv (heterogeneous mixing of dynamic differential evolution variants), that mixes different DDE variants in an island-based distributed model. The proposed hmDDEv algorithm is implemented and validated on a well-defined benchmark suite of 14 functions, comparing it with its constituent DDE variants. The efficacy of hmDDEv is also validated against two state-of-the-art distributed DE algorithms.
Keywords: Dynamic Differential Evolution; Island Models; Distributed Algorithm; Mixed Variants.
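The "dynamic" population update that distinguishes DDE from DE can be shown in a few lines: a winning trial vector replaces its target immediately, so later mutations in the same pass already draw on it. The sketch below is one DE/rand/1/bin island on the sphere function; it is a minimal illustration, not one of the paper's 14 variants or its island topology.

```python
import random

# Minimal dynamic DE (DE/rand/1/bin with immediate replacement).

def sphere(x):
    return sum(v * v for v in x)

def dde(fn, dim=5, pop_size=20, gens=200, f=0.5, cr=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fn(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([k for k in range(pop_size) if k != i], 3)
            j_rand = rng.randrange(dim)           # forced crossover position
            trial = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
                     if (rng.random() < cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            tf = fn(trial)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf        # dynamic: replace at once
    return min(fit)

best = dde(sphere)
```

In the static (classic DE) scheme, replacements are deferred until the generation ends; moving the replacement inside the loop is the entire difference.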
A New Approach for Automatic Arabic-Text Detection and Localization in Video Frames
by Sadek Mansouri, Mbarak Charhad, Mounir Zrigui
Abstract: Text embedded in video frames provides useful information for semantic indexing and browsing systems. In this paper, we propose an efficient approach for automatic Arabic-text detection which combines edge information and the Maximally Stable Extremal Region (MSER) method in order to extract text region candidates. These regions are then grouped and filtered on the basis of geometric properties such as area and orientation. In addition, we introduce a new geometric descriptor of Arabic text, the baseline, to improve the filtering process. Our proposed approach was tested on a large collection of Arabic TV news and the experimental results are satisfying.
Keywords: Arabic text detection; Arabic news; baseline estimation; MSER.
Proposed Enhancement for Vehicle Tracking in Traffic Videos Based Computer Vision Techniques
by Mohamed Maher Ata, Mohamed El Darieby, Mustafa Abdelnabi, Sameh A. Napoleon
Abstract: In this paper, traffic video enhancement is approached by means of computer vision algorithms. We measure the average number of tracks assigned correctly over the whole video; these tracks express the correct prediction of vehicles and guarantee that each vehicle is tracked from the first frame until the last. In addition, several video degradations (salt & pepper, speckle and Gaussian noise) are applied in order to measure their effect on tracking efficacy. Several filters are then applied to the degraded traffic video to determine the mask that yields the least deviation in the number of assigned tracks. Experimental results show that the Wiener and disk filters are the best masks for salt-and-pepper degradation, while the median filter mask is the best choice for both speckle and Gaussian degradations.
Keywords: Video disturbance; Prediction; Assigned track; GMM; Spatial filtering.
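Why an order-statistic mask handles impulse noise well can be seen in a few lines: the median of a 3x3 window discards the extreme values that salt-and-pepper noise injects. The sketch below is a minimal grayscale median filter (edges left unfiltered for brevity), illustrative rather than the paper's exact filtering setup.

```python
# Minimal 3x3 median filter for a grayscale image (list of lists).

def median_filter(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # border pixels pass through
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]            # median of the 9 values
    return out

# A flat region hit by one "salt" pixel (255) and one "pepper" pixel (0).
noisy = [[10, 10, 10, 10],
         [10, 255, 10, 10],
         [10, 10, 0, 10],
         [10, 10, 10, 10]]
clean = median_filter(noisy)
```

A linear (mean) mask would instead smear both impulses into their neighbourhoods, which is why the median wins for salt-and-pepper degradation in the experiments.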
Speckle Noise Reduction in SAR Images using Type-II Neuro-Fuzzy Approach
by S. Vijayakumar, V. Santhi
Abstract: Synthetic Aperture Radar (SAR) images play a vital role in remote sensing applications, and because they are affected by speckle noise, quality enhancement is required. Speckle is a noise that multiplies pixel intensities, arising from interference in the backscattered signal. In this paper, a computational intelligence based approach is proposed to remove speckle noise while preserving edge and texture information. In particular, the proposed system uses a type-II neuro-fuzzy approach over pixel neighbourhood topologies. The performance of the proposed system is demonstrated by comparing its results with existing methods.
Keywords: SAR Image; Speckle Noise; Fuzzy Logic System; Artificial Neural Network Approach; Noise Reduction; Gaussian Model.
Implementing Reverse UP-Growth Tracking Approach under Distributed Data Mining
by R. Aswini, Praveen Kumar Rajendran, A. Piosajin
Abstract: Data mining is the methodology that discovers useful and hidden information from large databases, and researchers have proposed innumerable algorithms in the field. In this system, an improvised UP-Growth is considered for mining high-utility itemsets from potential high-utility itemsets and is improvised under different constraints. Node utility and reorganized transaction utility (RTU) are the key quantities in the proposed system and are manipulated using the same technique as in UP-Growth. However, mining potential high-utility itemsets from the RTU using UP-Growth requires a number of tree traversals. The proposed system reduces these by introducing a bottom-up approach and merging certain manipulations. Since running the system as a sequential process would be time consuming, a distributed environment is adopted to overcome this limitation of the existing methodology.
Keywords: Node utility; Transaction Utility; Transaction Weight Utility; Reorganized Transaction Utility; Potential High Utility Itemset.
An Enhanced Secure Data Aggregation Routing Protocol for Sensor Networks
by A.L. Sreenivasulu, Chenna Reddy P
Abstract: Over the past decade, the use of sensor devices in real-world applications has increased rapidly. To meet application demands, sensor nodes are deployed in remote areas where operation is very complex and the security of the nodes can be compromised at any time. Therefore, a secure data aggregation mechanism is needed to overcome these limitations. In this paper, a secure data aggregation mechanism is proposed for protecting data from unauthorized access. The proposed method comprises three modules: data encryption, data aggregation and data decryption. The data aggregation module additionally removes redundant data to minimize the energy consumption of the sensor nodes. The proposed method is evaluated under different conditions and shows superior performance in reducing communication overhead, minimizing differences in energy consumption and increasing data aggregation accuracy.
Keywords: Data Communication; Aggregation; Encryption; Security; Sensor nodes.
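The energy argument for the aggregation module can be sketched: collapsing near-duplicate readings before forwarding means fewer values travel to the sink. The toy aggregator below (tolerance value and readings are illustrative; the encryption/decryption modules are omitted) shows the resulting saving.

```python
# Redundancy-eliminating aggregation: readings within `tolerance` of
# each other are collapsed into their mean before forwarding.

def aggregate(readings, tolerance=0.5):
    """Group near-duplicate readings; return one mean value per group."""
    groups = []
    for r in sorted(readings):
        if groups and abs(r - groups[-1][-1]) <= tolerance:
            groups[-1].append(r)        # extends the current run of values
        else:
            groups.append([r])
    return [sum(g) / len(g) for g in groups]

raw = [21.0, 21.2, 21.1, 25.0, 25.3, 30.0]   # readings from nearby nodes
agg = aggregate(raw)
saving = 1 - len(agg) / len(raw)             # fraction of values not sent
```

Here six raw readings shrink to three forwarded values, halving the payload; in a real scheme the aggregate would be computed over (or under) the encryption layer.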
An Efficient Approach towards Building CBIR Based Search Engine for Embedded Computing Board
by Shriram K Vasudevan, P.L.K. Priyadarsini, Sundaram RMD
Abstract: Investigating a picture gives us more information than can be expressed through words. Image processing is an ever-growing field that handles vast numbers of images, and thanks to technology we can store and retrieve massive image datasets from anywhere. Search engines provide a way to link images and queries: images are searched using factors such as keywords, image dimensions and texture, which is called content-based image retrieval (CBIR). In this search methodology, the input query image is analysed and its properties or features are saved; using the recorded features, other images that match the input image are retrieved. However, searching by name, colour or texture alone is not very efficient, so we propose a novel algorithm that takes features such as colour, texture, SURF and entropy, and examines how they perform individually and what distinct results they produce when combined. An implementation of CBIR on a BeagleBoard produced satisfactory results, which encouraged further research.
Keywords: Retrieval; Wavelet; Histogram; Texture; OpenCV; MATLAB; Region of interest.
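The simplest of the features the abstract combines is a colour histogram: each image becomes a normalised histogram and results are ranked by histogram intersection. The sketch below illustrates only that one feature; the SURF/texture/entropy fusion and the BeagleBoard port are not reproduced.

```python
# Colour-histogram similarity for CBIR (single-channel illustration).

def histogram(pixels, bins=4, max_val=256):
    """Normalised intensity histogram of a flat list of pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

dark  = histogram([10, 20, 30, 40])       # all mass in the lowest bin
light = histogram([200, 210, 220, 230])   # all mass in the highest bin
```

A query is answered by computing the query histogram once and ranking the stored histograms by intersection, which is why this feature is cheap enough for an embedded board.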
Cayley Bipolar Fuzzy Graphs Associated with Bipolar Fuzzy Groups
by Ali Asghar Talebi, Samaneh Omidbakhsh
Abstract: In this paper, we introduce the concept of Cayley bipolar fuzzy graphs on bipolar fuzzy groups. Some properties of Cayley bipolar fuzzy graphs, such as connectivity and transitivity, are also provided.
Keywords: Bipolar fuzzy groups; Cayley fuzzy graphs; isomorphism.
Impact of multimedia in learning profiles
by Ariel Zambrano, Daniela Lopez De Luise
Abstract: The original contribution of this paper is an automated model of a user's behavior when confronted with certain types of images in a context of playful learning. Entropy is used to classify profiles, starting from temporal information mixed with certain characteristics previously extracted from the images. The aim is to determine to what extent visual images trigger comprehension and abstraction functions on topics of high complexity. The obtained model is intended to generate learning profiles, which will be enriched in the future with other non-invasive devices that observe user behavior, for example cameras and keyboard and mouse monitoring. The profiles are discovered and described with the minimum information needed, and the collected information is processed with bio-inspired techniques essentially based on deep learning concepts.
Keywords: Audiovisual Techniques; Engineering Teaching; Video Games; Learning Model; Deep Learning; Multimedia; Data Mining.
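The entropy-based profiling the abstract describes reduces to a standard computation: the Shannon entropy of a user's empirical event distribution, low for a consistent user and high for an erratic one. The sketch below illustrates this with made-up event categories.

```python
import math
from collections import Counter

# Shannon entropy (in bits) of a user's interaction-event distribution.

def entropy(events):
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# A consistent user vs. an erratic one (event names are illustrative).
focused = ["zoom", "zoom", "zoom", "zoom"]
erratic = ["zoom", "skip", "replay", "click"]
```

Thresholding (or clustering on) this value is one simple way the temporal interaction stream can be turned into a learning-profile feature.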
Domination number of complete restrained fuzzy graphs
by R. Jahir Hussain, S. Satham Hussain, Sankar Sahoo, Madhumangal Pal
Abstract: This work is concerned with the restrained complete domination number and the triple connected domination number of fuzzy graphs. Some basic definitions and needed results are given with an example. Necessary and sufficient conditions for a fuzzy graph to have a complete restrained domination set are formulated and proved. The relation between a complete restrained domination set and an $n$-dominated set is also illustrated. Finally, the triple connected domination number of a restrained complete fuzzy graph is provided.
Keywords: Fuzzy graphs; Complete restrained domination set; Complete restrained domination number; Triple connected domination number.
Introducing the Rock Hyrax Intelligent Optimization Algorithm: An Exploration for Web 3.0 Domain Selection
by B. Suresh Kumar, Deepshikha Bharghava, Arpan Kumar Kar, Chinwe Peace Igiri
Abstract: The immense growth of internet usage has become a bottleneck for web developers trying to meet customer requirements. To handle this changing scenario, developers turn to optimization techniques, and numerous such techniques are available for exploring the Web 3.0 domain. In this research, the authors propose a new metaheuristic aimed at providing an appropriate solution to these analysis and optimization issues. The main motivation for designing this algorithm, despite the existing ones, is a wider search space and less optimization time, modelled on the foraging behavior of rock hyraxes. The proposed swarm intelligence metaheuristic is based on the biological behavior of the rock hyrax found in East Africa. This novel Rock Hyrax Intelligent Optimization Algorithm (RHIO) is used to optimize results in the Web 3.0 domain.
Keywords: Metaheuristics; Web 3.0; Optimization; Swarm Intelligence; Rock Hyrax Intelligent Optimization (RHIO).
Hardware Implementation of a New Chaotic Secured Transmission System
by Hamid Hamiche, Karim Kemih, Sid-Ali ADDOUCHE, Ahmad Taher Azar, Rafik Saddaoui, Mourad Laghrouche
Abstract: In this paper, a novel secured transmission system implemented on Arduino Uno boards is proposed. The transmission scheme is composed of two coupled discrete-time chaotic systems and two combined observers. For the first observer, sufficient conditions on a varying equal impulsive distance are established in order to guarantee the impulsive synchronization method. For the second, we design an exact discrete-time observer in order to reconstruct all states and the message information. Simulation results are presented to highlight the performance of the proposed method. One of the main contributions is to show that the proposed scheme, based on impulsive synchronization of discrete-time chaotic systems, is experimentally feasible with digital devices using Arduino Uno boards. The obtained experimental results validate our approach.
Keywords: Chaotic synchronization; Impulsive synchronization; Step by step observer; Implementation; Arduino-Uno board.
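The transmission principle can be illustrated with a toy analogue: chaotic masking with a logistic map, where the sender adds a chaotic sequence to the message and the receiver regenerates the same sequence from a shared initial condition and subtracts it. In the paper the receiver instead recovers the chaotic state via impulsive-synchronization observers; the logistic map and parameters below are illustrative assumptions.

```python
# Toy chaotic masking with a logistic map (illustrative analogue only).

def logistic_stream(x0, n, r=3.99):
    """Generate n iterates of the logistic map x <- r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def mask(message, key):
    stream = logistic_stream(key, len(message))
    return [m + s for m, s in zip(message, stream)]

def unmask(cipher, key):
    stream = logistic_stream(key, len(cipher))
    return [c - s for c, s in zip(cipher, stream)]

msg = [0.1, 0.5, -0.3, 0.9]
cipher = mask(msg, key=0.123456)
recovered = unmask(cipher, key=0.123456)
```

Sensitivity to the initial condition is what makes the scheme work: a receiver with even a slightly wrong key regenerates a rapidly diverging stream and recovers noise.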
Energy Efficient Cluster Head Selection for Wireless Sensor Network by Improved Firefly Optimization
by Achyut Shankar, N. Jaisankar
Abstract: In WSNs, energy efficiency is a major issue for enhancing network lifetime, and it becomes even more critical in large sensor networks due to the volume of data collection and packet transmission. In this study, an energy-efficient cluster head selection methodology for WSNs is proposed using a Firefly with Dual Update Process (FFDUP) algorithm. The proposed approach conserves the most energy and prolongs network lifetime. An analysis based on network sustainability, the manner of cluster head distribution, risk mode and the trade-offs arising from the proposed FFDUP algorithm is carried out and validated by comparison with conventional algorithms such as Artificial Bee Colony (ABC), FABC, Firefly (FF) and Artificial Bee Colony with Dynamic Scout Bee (ABC-DS). The simulation results reveal that the proposed algorithm outperforms the existing algorithms.
Keywords: WSN; Cluster head selection; Energy Awareness; Risk awareness; FFDUP.
A Novel Statistical Approach to Event Management: A Study and Analysis of a Techfest with Suggestions for Improvements
by Narassima Seshadri, Shriram KV
Abstract: Events play a vital role in day-to-day life, whether casual or professional. Formal events that occur routinely over a period of time need to be successful in order to become sustainable. Event management strategies vary constantly, as the choices of different people, and even of the same people, change over time. Educational institutions showcase their talents by organizing annual fests, gathering like-minded people from various institutions to exhibit their talents and gain knowledge. These events need to be successful in order to attract an audience and sustain themselves over the long term. This study examines various aspects of Anokha 2016, the sixth annual Techfest of Amrita School of Engineering, so as to improve Anokha 2017. The paper investigates the aspects that remained favourites as well as those that fell short of participants' expectations, and suggestions to improve these aspects are also discussed.
Keywords: Event management; Techfest; Educational institution; Reliability analysis; Construct validity; Hypothesis testing.
A Hybrid Approach to Missing Data Imputation for Upper Gastrointestinal Diagnosis
by Khaled Fouad
Abstract: Gastrointestinal and liver diseases (GILDs) are major causes of death and disability in the Middle East and North Africa. Investigating upper gastrointestinal (GI) symptoms in a medically resource-limited area is a challenge. Real-world clinical data analysis using data mining techniques often faces observations that contain missing values for a number of attributes. The main challenge in mining a real clinical dataset of upper GI symptoms to diagnose diseases is the existence of these missing values, which must first be tackled to achieve highly accurate and effective results from a data mining approach for diagnosing and predicting upper GI diseases.
In this paper, an approach to missing data imputation is proposed to pre-process the real clinical upper GI dataset so that feature selection and classification algorithms can be applied with accurate and effective results, thereby supporting accurate diagnosis and prediction of upper GI diseases. The proposed approach aims to tackle missing data in the categorical upper GI dataset and to enhance classifier accuracy by exploiting a feature selection method before the imputation process. The approach is evaluated with an experimental framework of five phases: partitioning the dataset into eight datasets with various ratios of missing data, performing feature selection, imputing the missing data, classifying the imputed data, and finally evaluating the outcome using k-fold cross-validation over nine evaluation measures.
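To make the imputation phase concrete, the following minimal Python sketch shows baseline per-column mode imputation for categorical data. This is an illustrative assumption, not the paper's hybrid method, which additionally performs feature selection before imputing:

```python
from collections import Counter

def mode_impute(rows, missing="?"):
    """Replace each missing categorical value with its column's mode.

    A simple baseline imputer for illustration; a hybrid approach would
    first select informative features, then impute only those columns.
    """
    cols = list(zip(*rows))
    modes = []
    for col in cols:
        observed = [v for v in col if v != missing]
        modes.append(Counter(observed).most_common(1)[0][0])
    return [[modes[j] if v == missing else v for j, v in enumerate(row)]
            for row in rows]

data = [["a", "x"], ["a", "?"], ["b", "x"], ["?", "y"]]
print(mode_impute(data))
```

The imputed dataset can then be fed to any classifier and scored with k-fold cross-validation, as in the evaluation framework described above.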
Keywords: Data mining; Data classification; Feature selection; Missing data imputation; Categorical data mining; Diagnosis of upper GI diseases.
Evaluation Method based on a Tracing Mechanism for Adaptive User Interfaces: Application in Intelligent Transport Systems
by Soui Makram, Soumaya Moussa, Christophe Kolski, Mourad Abed
Abstract: Nowadays, Adaptive User Interfaces (AUI) are increasingly present in our daily activities (at home, at work, in public places, etc.). They can have different adaptation capabilities, can be disseminated in the users' environment, and can take different user profiles into account. Many academic and industrial studies address user modelling, design methods and tools for User Interface (UI) generation. However, the evaluation of such user interfaces is difficult; relatively few works in the literature deal with AUI evaluation. To fill this gap, it is necessary to devise new evaluation methods focused on the adaptation quality of UIs. In this research work, we propose an evaluation method called MetTra (METhod based on a TRAcing system). The method has been validated by evaluating AUIs in the transportation field.
Keywords: Adaptive User Interface (AUI); Evaluation; MetTra; Intelligent Transport Systems (ITS).
Types of uncertain nodes in a fuzzy graph
by Arindam Dey, Anita Pal
Abstract: Graph theory has numerous applications in problems of operations research, economics, systems analysis, and transportation systems. However, real applications of graph theory are full of linguistic vagueness, i.e., uncertainty. For example, the vehicle travel time or the number of vehicles on a road network may not be known precisely. For those types of problem, a fuzzy graph model can be used to deal with the uncertainties. In a fuzzy graph it is very important to identify the nature (strength) of nodes, and no such analysis of nodes is available in the literature. In this paper, we introduce a method to find the strength of a node in a fuzzy graph. The degree of the node and the maximum membership value of its adjacent edges are used to compute the strength of the node. The strength of a fuzzy node is itself a fuzzy set. Depending upon their strength, we classify the nodes of a fuzzy graph into six types, namely α-strong fuzzy node, β-strong fuzzy node, regular fuzzy node, α-weak fuzzy node, β-weak fuzzy node and balance fuzzy node.
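The two ingredients named above (the node's fuzzy degree and the maximum membership among its adjacent edges) can be computed with a short sketch. How the paper combines them into the final strength is not reproduced here; this merely extracts the two quantities from an adjacency-map representation:

```python
def node_strength(graph, node):
    """Return (fuzzy degree, strongest adjacent edge) for a fuzzy-graph node.

    `graph` maps each node to {neighbour: edge membership in [0, 1]}.
    The paper derives the node's strength from these two quantities;
    the representation here is an assumption for illustration.
    """
    memberships = list(graph[node].values())
    degree = sum(memberships)   # fuzzy degree: sum of adjacent edge memberships
    peak = max(memberships)     # maximum membership over adjacent edges
    return degree, peak

g = {"u": {"v": 0.6, "w": 0.3}, "v": {"u": 0.6}, "w": {"u": 0.3}}
print(node_strength(g, "u"))
```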
Keywords: Fuzzy graph; fuzzy node; strength of node; vagueness of object.
A Kernel-Based SVM for Semantic Relation Extraction from Biomedical Literature
by Kanimozhi Uma
Abstract: To identify and extract semantic relationships among named entities, relation extraction is a significant approach in knowledge representation. In order to capture the semantic as well as syntactic structures in text and to enable deep understanding of the biomedical literature, relation extraction becomes essential. The automatic extraction of disease-gene relations is presented by utilizing shallow linguistic features of global and local word sequence context with a string-kernel-based support vector machine (SVM) for efficient disease-gene relation extraction. The performance of the proposed work shows that bag-of-features kernel-based SVM classification is a promising solution for disease-gene association mining.
Keywords: Biomedical Relation Extraction; Natural Language Processing; Machine Learning; Biomedical Literature.
Implementing the RSA algorithm for network security using the Dual Prime Secure Protocol (DPSP) in cryptanalysis
by Durga R, Sudhakar P
Abstract: Cryptography is the most important approach to secure communication in network security, and the RSA algorithm is commonly used in efficient cryptographic mechanisms. Here, RSA is used to monitor scenarios involving hackers and to vary the transpositions. The original RSA crypto mechanism is extended with the behavioural characteristics of a multi-privacy system and a strengthening technique. In this methodology, the user applies the RSA (DPSP) algorithm, generating dual prime pairs for the encrypted messages, which are sorted priority-wise; the intractable algorithm is translated and rotated to obtain an essential security enhancement. The methodology reduces the danger of man-in-the-middle attacks and timing attacks, because the encrypted and decrypted messages are additionally reordered according to their priority. The RSA (DPSP) algorithm is mainly applied for distributing data across different environments, and a variety of approaches is available to implement the computation of the designed algorithm. To process real-time data, a cryptographic encryption algorithm is used along with the RSA crypto algorithm. Secure RSA (DPSP) is introduced for secure file transmission, since there are several cases where secure file transmission is needed to avoid attacks from intruders. In the RSA (DPSP) algorithm, the key representation is a symmetric random key of the crypto mechanism. The mechanism is applied to transfer data confidentially and to manage messages of different sizes, with the time complexity governed by the sizes of the messages and of the key derived from the prime numbers.
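For reference, the underlying RSA primitive that DPSP builds on can be sketched with textbook key generation over a pair of primes. This is the standard scheme with deliberately tiny demo primes, not the authors' DPSP variant or a production implementation:

```python
def rsa_keygen(p, q, e=65537):
    """Textbook RSA key generation from two primes p and q.

    Returns (public, private) key pairs as (exponent, modulus) tuples.
    Uses Python 3.8+ three-argument pow for the modular inverse.
    """
    n = p * q
    phi = (p - 1) * (q - 1)       # Euler's totient for n = p*q
    d = pow(e, -1, phi)           # private exponent: e^-1 mod phi
    return (e, n), (d, n)

# Tiny demo primes only -- real RSA uses primes of ~1024+ bits each.
pub, priv = rsa_keygen(61, 53, e=17)
m = 42
c = pow(m, pub[0], pub[1])            # encrypt: c = m^e mod n
assert pow(c, priv[0], priv[1]) == m  # decrypt round-trips
print(c)
```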
Keywords: cryptography; RSA algorithm; secured protocol; file transmission; nodes; ns2 tool; priority programming.
Reliability Analysis of Shallow Foundation Based on Settlement Criteria
by Pijush Samui, Aditi Palsapure, Sanjiban Roy
Abstract: Foundation settlement is an important design criterion, as it affects the durability of a structure. Conventional methodologies calculate only a global factor of safety to determine the safety of the structure. However, this does not account for uncertainties due to soil variability and measurement errors. Therefore, reliability-based design principles must be incorporated to determine the performance and reliability of a structure. The First Order Second Moment (FOSM) method is generally used for this analysis, but it is time-consuming. On the other hand, the Relevance Vector Machine (RVM) achieves very good generalization performance. Thus, in our study we have used RVM-based FOSM and ELM and compared the results obtained from both. For this, a dataset of 480 readings was developed for cohesive-frictional soil, taking Poisson's ratio and elastic modulus as random variables. 70% of the readings were used for training and 30% for testing, with normalised data. Additionally, several error and correlation functions were calculated to assess the performance of the models.
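The FOSM reliability index the abstract refers to can be written down directly. The sketch below uses the standard first-order formula for independent random variables; the numerical values are hypothetical, not taken from the paper's 480-reading dataset:

```python
import math

def fosm_beta(g_mean, grads, sigmas):
    """First Order Second Moment reliability index (independent variables).

    g_mean : performance function g evaluated at the mean values
             (e.g. g = allowable settlement - predicted settlement)
    grads  : partial derivatives of g w.r.t. each random variable, at means
    sigmas : standard deviations of the random variables
    """
    var_g = sum((dg * s) ** 2 for dg, s in zip(grads, sigmas))
    return g_mean / math.sqrt(var_g)

# Hypothetical example with elastic modulus and Poisson's ratio
# as the random variables (values are illustrative only).
beta = fosm_beta(g_mean=12.0, grads=[0.004, -8.0], sigmas=[500.0, 0.05])
print(beta)
```

In an RVM-based FOSM, the gradients would come from the trained surrogate model rather than a closed-form settlement equation.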
Keywords: settlement; Reliability analysis; FOSM; RVM; ELM.
Some Applications of Vague Sets
by Hossein Rashmanlou, Kishore Kumar Krishna, S. Firouzian, Mostafa Noori
Abstract: In this paper, we give a concise note on vague fuzzy sets and present two applications of vague sets. The first is an application of vague fuzzy sets to career determination using assumed data, conducted with the aid of a new distance measure for vague fuzzy sets. The second deals with research questionnaire construction, filling, analysis and interpretation: respondents' decisions are obtained assuming the questionnaire is distributed among respondents, converted into a vague data set and analysed, and an interpretation is drawn from the result.
Keywords: Vague set; fuzzy set; distance measure; hesitancy.
Domination in Hesitancy Fuzzy Graphs
by R. Jahir Hussain, S. Satham Hussain, Sankar Sahoo, Madhumangal Pal
Abstract: Hesitant fuzzy sets (HFS), introduced by Torra, are a novel and recent extension of fuzzy sets that aims to model the uncertainty originated by the hesitation arising in the assignment of membership degrees of elements to a fuzzy set. Hesitancy Fuzzy Graphs (HFG) were introduced to capture the common intricacy that occurs when selecting the membership degree of an element from several possible values, which makes one hesitate. HFG have been used to choose a Time Minimized Emergency Route (TiMER) to transport accident victims. This paper addresses the study of domination in hesitancy fuzzy graphs. Using the concepts of strength of a path, strength of connectedness and strong arcs, the domination set is established. The necessary and sufficient condition for the minimum domination set of an HFG is investigated. Further, some properties of the independent domination number of HFG are obtained, and the proposed concepts are illustrated with suitable examples.
Keywords: Domination number; Hesitancy fuzzy graphs; Independent domination set; Necessary and sufficient condition; Strong arc.
Multi-objective Artificial Bee Colony Algorithm in Redundancy Allocation Problem
by Monalisa Panda, Satchidananda Dehuri, Alok Jagadev
Abstract: This paper presents an empirical study of uncovering Pareto fronts by multi-objective artificial bee colony for the redundancy allocation problem (RAP). Multi-objective artificial bee colony has been successfully applied to many optimization problems; however, very little effort has been extended towards solving the RAP. In this work, we consider the simultaneous optimization of the unavoidable objectives, namely maximization of reliability, minimization of cost, and minimization of weight in a series-parallel system, which leads to a multiple-objective redundancy allocation problem (MORAP). The objective of this paper is to uncover true Pareto fronts populated with non-dominated solution sets as a solution to the MORAP using the multi-objective artificial bee colony algorithm (MOABC). Two MOABC algorithms have been developed, inspired by the popular and established multi-objective genetic algorithms Vector Evaluated Genetic Algorithm (VEGA) and Non-dominated Sorting Genetic Algorithm II (NSGA-II); we name them MOABC-I and MOABC-II, respectively. From the experimental results, we observe that MOABC-II approximates the true Pareto front better than MOABC-I. Further, the resultant Pareto fronts are assessed by two multi-criteria decision making (MCDM) methods, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Analytic Hierarchy Process (AHP), to reach a definite goal.
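The non-dominated solution sets at the core of such studies rest on the standard Pareto-dominance test, sketched below with toy MORAP-style objective vectors (cost, weight, negated reliability, all minimized). This illustrates the dominance relation only, not MOABC-I/II themselves:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy vectors: (cost, weight, -reliability); reliability is negated so
# that all three objectives are minimized uniformly.
pts = [(3, 2, -0.9), (2, 4, -0.8), (4, 4, -0.7), (2, 2, -0.9)]
print(pareto_front(pts))
```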
Keywords: Redundancy allocation problem; Genetic algorithms; Multi-objective optimization; Artificial bee colony; Multi-objective artificial bee colony; Multi-criteria decision making.
Cost Effective Hybrid Genetic Algorithm for Scheduling Scientific Workflows in Cloud under Deadline Constraint
by Gursleen Kaur, Mala Kalra
Abstract: Cloud has emerged as a convenient platform for executing complicated scientific applications from multiple disciplines by providing on-demand, scalable infrastructure on a rental basis. The research and scientific community often opt for workflows to model these scientific applications. Workflow scheduling has been extensively studied for decades with regard to grid and cluster computing, but few initiatives have been tailored for the cloud. What's more, previous work fails to incorporate basic principles of IaaS clouds such as the pay-as-you-go model, elasticity, heterogeneity and dynamic provisioning, as well as the issues of VM performance variation and acquisition delay, besides other QoS requirements. This paper proposes a resource provisioning and scheduling strategy using a genetic algorithm with the aim of optimizing the overall execution cost while staying below the given deadline. The performance is further enhanced by using a high-quality seed generated by the Predict Earliest Finish Time (PEFT) algorithm, which acts as a catalyst and helps the algorithm converge faster. The proposed approach is simulated in WorkflowSim and evaluated using various well-known realistic scientific workflows of different sizes. The results validate the better performance of our approach over numerous state-of-the-art algorithms.
Keywords: Cloud Computing; Workflow Scheduling; PEFT; Genetic Algorithm; Time-Cost trade off; Dynamic Resource Provisioning.
Optimal capacitor placement and sizing in distribution system using Competitive Swarm Optimizer algorithm
by Soumyabrata Das, Tanmoy Malakar
Abstract: This article investigates the implementation of the Competitive Swarm Optimizer (CSO) algorithm for solving Optimal Capacitor Locations and Sizing (OCLS) problems for Radial Distribution System (RDS) networks. The problem is formulated as a two-stage Mixed Integer Non-Linear Programming problem. In the first stage, a new parameter called the Relative Emission Index is developed to assess the impact of Shunt Capacitors (SC) on the environment, and using it the probable capacitor locations are determined; in the second stage, a novel CSO algorithm is applied to find the optimal capacitor locations and sizes in RDS networks. The SC locations and their outputs are taken as binary and discrete control variables, respectively, in the optimization problem. The proposed algorithm is tested on IEEE 34-bus and IEEE 85-bus RDS networks under different loading conditions. Parametric sensitivity studies are performed to select the optimum values of the free parameters of the CSO algorithm. The results obtained by the proposed algorithm are compared with other results reported in the state-of-the-art literature, and the comparison confirms the superiority of the CSO algorithm over other methods in solving OCLS problems for RDS networks.
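The pairwise-competition mechanism that distinguishes CSO can be sketched on a generic continuous objective. This follows the usual CSO scheme (random pairs compete; each loser learns from its winner and the swarm mean, winners pass unchanged); it is an illustration, not the authors' binary/discrete OCLS encoding:

```python
import random

def cso_step(swarm, vel, fitness, phi=0.1):
    """One Competitive Swarm Optimizer iteration (minimization).

    Particles compete in random pairs; each loser's velocity is pulled
    toward its winner and the swarm mean, while winners are unchanged.
    """
    n, dim = len(swarm), len(swarm[0])
    mean = [sum(p[d] for p in swarm) / n for d in range(dim)]
    idx = list(range(n))
    random.shuffle(idx)
    for i, j in zip(idx[::2], idx[1::2]):
        w, l = (i, j) if fitness(swarm[i]) <= fitness(swarm[j]) else (j, i)
        for d in range(dim):
            r1, r2, r3 = random.random(), random.random(), random.random()
            vel[l][d] = (r1 * vel[l][d]
                         + r2 * (swarm[w][d] - swarm[l][d])
                         + phi * r3 * (mean[d] - swarm[l][d]))
            swarm[l][d] += vel[l][d]

sphere = lambda x: sum(v * v for v in x)  # toy objective for illustration
swarm = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in range(20)]
best0 = min(sphere(p) for p in swarm)
for _ in range(200):
    cso_step(swarm, vel, sphere)
print(min(sphere(p) for p in swarm))
```

Because winners are never moved, the best fitness in the swarm can only improve or stay put from one iteration to the next.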
Keywords: Competitive Swarm Optimizer; Radial Distribution System; Optimal Capacitor Locations and Sizing; Mixed Integer Non-Linear Programming; Emission.
Multi-Key Searchable Encryption Technique for Index-Based Searching
by P. Sri Vani, S. Ramachandram, R. Sridevi
Abstract: A multi-key searchable encryption scheme enables keyword search on data encrypted with different keys. The scheme is practical for client-server applications: it achieves data confidentiality while enabling the server to perform search operations on the encrypted data. So far, this algorithm has been implemented for sequential search. This paper presents an improved version of the multi-key searchable encryption algorithm implemented for index-based searching, and also shows the experimental results of the index-based multi-key searchable encryption scheme implemented using C and the PBC library. This research uses Elliptic Curve Cryptography (ECC) to improve key security. The ECC technique contains a sequence of steps for secure key generation by the user using a hash function; through the use of the hash function in ECC, performance is enhanced for index-based searching. Experimental results show the execution time of the improved multi-key searchable scheme for index-based search constructed over different elliptic curves. An application is also designed using the new scheme to perform search over one lakh (100,000) encrypted collections, with Java as the front end and MongoDB as the back end.
Keywords: encryption; search token; delta; token; client; server; confidentiality; searchable; multi key; index; public key; elliptic curve cryptography.
Further Improved Stability Condition for T-S Fuzzy Time-Varying Delay Systems via Generalized Inequality
by Rupak Datta, Rajeeb Dey, Baby Bhattacharya
Abstract: This paper deals with the problem of stability analysis for a class of nonlinear systems with time-varying delay, represented by the Takagi-Sugeno (T-S) fuzzy model. By choosing an appropriate augmented Lyapunov-Krasovskii (L-K) functional and utilizing a generalized integral inequality combined with the reciprocal convex lemma, a new and improved delay-range-dependent stability condition is obtained in terms of linear matrix inequalities (LMIs) guaranteeing the asymptotic stability of the studied fuzzy systems. Two numerical examples are solved to validate the efficiency and improvement of the proposed theoretical results over existing stability methods.
Keywords: Stability analysis; T-S fuzzy model; Integral inequality; L-K functional; Linear matrix inequalities.
Optimization of SPARQL queries over the RDF data in the Cloud Environment
by Ranichandra Dharmaraj, Tripathy B.K.
Abstract: The semantic web is built with the support of the Resource Description Framework (RDF). The changing face of the semantic web has created the need for new approaches to store and query RDF data. RDF datasets contain large volumes of data with a large number of bindings, and processing SPARQL queries over RDF data in the cloud creates some challenges: network cost and query processing time majorly impact the performance of queries over the cloud. This paper proposes an optimization algorithm for query processing over large datasets. The proposed algorithm takes the parallel execution of queries as a major objective, to reduce the network cost as well as to minimize query response time. The experimental evaluation is carried out using the LUBM 400-university dataset on hardware rented from Amazon Web Services. The proposed algorithm proves its efficiency in terms of reducing query response time and minimizing network traffic.
Keywords: Query; SPARQL; RDF data; Response time; Distributed cloud.
Privacy Preserving Using Diffie-Hellman and an Envelope Protocol through Key Handling Techniques in Cloud Storage
by K. Santhi Sri, N. Veeranjaneyulu
Abstract: Cloud computing is continuously advancing and demonstrating reliable growth in the arena of computing. It is gaining popularity by providing distinctive computing services such as cloud storage, cloud hosting and cloud servers for various kinds of enterprises as well as academia. On the other side, there are many issues related to cloud security and privacy; security is still a basic challenge in the cloud computing paradigm. These challenges include loss of users' secret data, data leakage and disclosure of personal data. In view of security and privacy within the cloud, there are various vulnerabilities affecting users' sensitive data in cloud storage. In this paper, considering the risk posed by a third party when storing data in the cloud and when cloud users access the stored data, we propose a novel mechanism that gives cloud users confidence in their security against that third party (the cloud service provider) and also provides privacy to cloud data users, using an efficient group key management scheme.
Keywords: cloud computing; Privacy Preserving; Data owner; Cloud User; key.
Hybrid Rough Set with Black Hole Optimization Based Feature Selection Algorithm for Protein Structure Prediction
by H. Hannah Inbarani, Ahmad Taher Azar, M. Bagyamathi
Abstract: Protein structure prediction is one of the most important problems in modern computational biology. The structure of a protein is predicted using amino acid composition (AAC) and pseudo amino acid composition (PseAAC) features extracted from its primary sequence. A major problem with protein datasets is the complexity of their analysis due to the enormous number of features. Feature selection techniques are capable of dealing with this high-dimensional feature space. Rough set theory is one of the effective methods for feature selection that can preserve the originality of features; the essence of the rough set approach to feature selection is to find a subset of the original features. Since finding a minimal subset of features is an NP-hard problem, it is necessary to investigate effective and efficient heuristic algorithms. In this paper, we propose a new approach hybridizing the Rough Set Quick Reduct and Relative Reduct approaches with the Black Hole optimization algorithm. This algorithm is inspired by black holes: a black hole is a region of space-time whose gravitational field is so strong that nothing which enters it, not even light, can escape, and every black hole has mass and charge. In the algorithm, each solution of the problem is considered a black hole; the gravitational force is used for global search and the electrical force for local search. The proposed algorithm is compared with leading algorithms such as Rough Set Quick Reduct, Rough Set Relative Reduct, Rough Set PSO-based Quick Reduct, Rough Set PSO-based Relative Reduct, Rough Set Harmony Search-based Quick Reduct, and Rough Set Harmony Search-based Relative Reduct. The experiments are carried out on protein primary sequence datasets derived from the PDB on SCOP classification, based on structural class prediction for the classes all α, all β, all α+β and all α/β.
The effectiveness of the proposed black hole algorithm combined with Rough Set Quick Reduct and Relative Reduct for protein structure prediction is studied and compared on the basis of classification techniques. Experimental results on protein datasets show that the proposed algorithm is efficient, with testing accuracy comparable to that of the existing algorithms.
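The gravitational-pull metaphor above can be sketched in a few lines for a generic continuous objective. This is an illustration of the basic Black Hole optimizer (stars drift toward the current best; any star crossing the event horizon is re-seeded at random), not the authors' hybrid rough-set reduct method:

```python
import random

def black_hole_step(stars, f, bounds):
    """One Black Hole optimization iteration (minimization).

    The best star is the black hole and never moves; every other star
    drifts toward it, and a star falling inside the event horizon is
    replaced by a fresh random star within the search bounds.
    """
    bh = min(stars, key=f)
    dim = len(bh)
    horizon = f(bh) / sum(f(s) for s in stars)   # event-horizon radius
    lo, hi = bounds
    for i, s in enumerate(stars):
        if s is bh:
            continue
        moved = [sd + random.random() * (bd - sd) for sd, bd in zip(s, bh)]
        dist = sum((md - bd) ** 2 for md, bd in zip(moved, bh)) ** 0.5
        stars[i] = ([random.uniform(lo, hi) for _ in range(dim)]
                    if dist < horizon else moved)

sphere = lambda x: sum(v * v for v in x)  # toy objective for illustration
stars = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
best0 = min(map(sphere, stars))
for _ in range(100):
    black_hole_step(stars, sphere, (-5, 5))
print(min(map(sphere, stars)))
```

In the paper's setting the "position" would instead encode a candidate feature subset, with a rough-set dependency measure as the fitness.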
Keywords: Data Mining; Bioinformatics; Feature Selection; Protein Sequence; Rough Set; Quick Reduct; Relative Reduct; Black Hole algorithm; Particle Swarm Optimization; Harmony Search; Protein Structure Prediction; classification.
A Comparative Investigation of Approaches for Web Search Results Clustering
by Zaher Salah, Abdel-rahman Al-ghuwairi, Ahmad Aloqaily, Aladdin Baarah, Ayoub Alsarhan
Abstract: Online files, especially textual documents of different forms (books, papers, emails, news, lyrics, etc.), now number in the billions and continue to increase. The huge diversity of topics covered by this massive amount of documents is expected, as these documents originate from various resources worldwide and cover different topics in science, engineering, economy, politics, history, etc. Given all of these aspects, how can one search for and find documents relevant to the specific topic in the user's mind, and how can the browsing process be facilitated? And how can the user's intention be properly conveyed to the information retrieval system so that the searching and delivering task is performed precisely and quickly? This paper investigates various techniques used for clustering the web search results produced by a web search engine in response to a user's query, in order to meet that user's information needs. The goal of clustering is not only to facilitate finding specific documents (navigation between documents), but also to make it easier to preview the general structure and distribution of topics among documents. Furthermore, clustering may be used to induce or reveal hidden or embedded topics in the corpus. The aim of this paper is to provide the reader with the relevant background concerning clustering of web search results (short-text snippets) in much more detail.
Keywords: Information Retrieval; Machine Learning; Text Mining; Web Search Results Clustering.
Certain graph parameters in bipolar fuzzy environment
by Ganesh Ghorai, Sankar Sahoo, Madhumangal Pal
Abstract: Yang et al. introduced the concept of generalized bipolar fuzzy graphs in 2013. In this paper, we introduce certain concepts of covering, matching and paired domination using strong arcs in bipolar fuzzy graphs, with suitable examples, and investigate some of their properties. We also calculate the strong node covering number, the strong independence number and other parameters of complete and complete bipartite bipolar fuzzy graphs.
Keywords: Bipolar fuzzy graphs; strong arcs; covering; matching; paired domination.
Teaching Learning Based Optimization for Job Scheduling in Computational Grids
by Tarun Kumar Ghosh, Sanjoy Das
Abstract: Grid computing is a framework that enables the sharing, selection and aggregation of geographically distributed resources dynamically to meet current and growing computational demands. Job scheduling is the key issue in Grid computing, and the scheduling algorithm has a direct effect on the performance of the whole system. Because of the distributed, heterogeneous nature of resources, job scheduling in a computational Grid is an NP-complete problem; thus, the use of meta-heuristics is a more appropriate option for obtaining optimal results. In this paper, the recent Teaching-Learning-Based Optimization (TLBO) is proposed to solve the job scheduling problem in a computational Grid system, with minimization of makespan, processing cost and job failure rate, and maximization of resource utilization, as criteria. In order to measure the efficacy of the proposed TLBO, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are considered for comparison. The comparative results show that the proposed TLBO technique outperforms the other two algorithms.
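The two TLBO phases (teacher and learner) can be sketched on a toy continuous objective. This shows the general TLBO scheme with greedy acceptance, not the authors' Grid-scheduling encoding of jobs to resources:

```python
import random

def tlbo_step(pop, f):
    """One Teaching-Learning-Based Optimization iteration (minimization).

    Teacher phase: each learner moves toward the best solution relative
    to the class mean. Learner phase: random pairs learn from each other.
    Moves are accepted greedily, so no individual ever gets worse.
    """
    n, dim = len(pop), len(pop[0])
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / n for d in range(dim)]
    for i in range(n):                       # teacher phase
        tf = random.choice((1, 2))           # teaching factor
        cand = [pop[i][d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        if f(cand) < f(pop[i]):
            pop[i] = cand
    for i in range(n):                       # learner phase
        j = random.randrange(n)
        if j == i:
            continue
        sign = 1 if f(pop[i]) < f(pop[j]) else -1
        cand = [pop[i][d] + sign * random.random() * (pop[i][d] - pop[j][d])
                for d in range(dim)]
        if f(cand) < f(pop[i]):
            pop[i] = cand

sphere = lambda x: sum(v * v for v in x)  # toy objective for illustration
pop = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(15)]
best0 = min(map(sphere, pop))
for _ in range(100):
    tlbo_step(pop, sphere)
print(min(map(sphere, pop)))
```

For job scheduling, each position would instead encode a job-to-resource assignment and the fitness would combine makespan, cost and failure rate.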
Keywords: Computational Grid; Job Scheduling; Makespan; Processing Cost; Fault Rate; Resource Utilization; GA; PSO; TLBO.
An Intelligent Model for diagnosis of breast cancer
by Raj Kamal Kaur Grewal, Babita Pandey
Abstract: Breast cancer, which is more common in India than in the United States and China, is not easily diagnosed in its initial stage. Early diagnosis of breast cancer can save lives, so it is very important to diagnose it at the initial stage, and the development of an effective diagnosis model is an important issue in breast cancer treatment. This study accordingly employs the J48 classification algorithm and case-based reasoning to construct an intelligent integrated diagnosis model, aiming to provide a comprehensive analytic framework that raises the accuracy of breast cancer diagnosis at two levels. The dataset used in the diagnosis is based on the advice and assistance of doctors and medical specialists in breast cancer. At the first level, the J48 algorithm is deployed to classify the breast cancer dataset into malignant and benign cancer types. At the second level, malignant cases are further classified as ductal carcinoma in situ, lobular carcinoma in situ, invasive ductal carcinoma, invasive lobular carcinoma, or mucinous carcinoma using case-based reasoning. The results show a J48 accuracy rate of 90%; at the second level, each new case is matched by similarity ratio, and the case-based reasoning diagnostic accuracy rate is 98.25%. The implemented results show that the intelligent integrated diagnosis model is able to examine breast cancer with considerable accuracy and can be helpful in making decisions regarding breast cancer diagnosis.
Keywords: Breast cancer; Data mining; Case based reasoning; J48.
On the learning machine in quaternionic domain and its application
by Sushil Kumar, Bipin Kumar Tripathi
Abstract: There are various high-dimensional engineering and scientific applications in communication, control, robotics, computer vision, biometrics, etc., where researchers face the problem of designing an intelligent and robust neural system that can process higher-dimensional information efficiently. In the literature, conventional real-valued neural networks have been tried on problems with high-dimensional parameters, but the required network structures possess high complexity, are very time-consuming, and are weak to noise. These networks are also unable to learn magnitude and phase values simultaneously in space. A quaternion is a number that possesses magnitude in all four directions, with phase information embedded within it. This paper presents a learning machine with a quaternionic-domain neural network that can finely process the magnitude and phase information of high-dimensional data without any hassle. The learning and generalization capability of the proposed learning machine is demonstrated through 3D linear transformations, 3D face recognition and chaotic time series prediction (the Lorenz system and Chua's circuit) as benchmark problems, which demonstrate the significance of the work.
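For readers unfamiliar with quaternions, the Hamilton product that underlies quaternionic arithmetic can be written compactly. This is the standard definition, independent of the proposed learning machine:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples.

    Non-commutative: qmul(i, j) == k but qmul(j, i) == -k, which is
    exactly the structure that lets a quaternionic network carry
    magnitude and 3D phase together.
    """
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))  # the unit quaternion k
```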
Keywords: Quaternion; quaternionic domain neural network; 3D motion; 3D imaging; time series prediction.
Categorization of Random Images into Fog and Blur Based on Statistical Analysis
by Monika Verma, Vandana Dixit Kaushik, Vinay Pathak
Abstract: Noisy images are a bottleneck in solving image processing problems. The present paper aims to classify images into different types of foggy and blurry images. A feature-based classifier, called the FB Classifier, has been proposed. Given an image, the classifier can tell whether the image is clear or unclear, which type of distortion is present (foggy or blurry), and the category of blur or fog. The quality of images taken with any equipment depends on a few factors: (1) the medium in which the photograph is taken; (2) the movement of the camera, of the object, or of both; and (3) the quality of the equipment used for capturing. All algorithms for classification or removal of distortions are made to handle these three scenarios, which encompass all types of foggy or blurry images. The images are assigned different threshold values according to their properties, and finally the cumulative threshold value decides which type of image it is. The algorithm is simple to implement, yet comparable to state-of-the-art methods.
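A common, generic sharpness feature in this spirit is the variance of a Laplacian response: blurred or foggy images suppress high-frequency content, so the variance drops. The sketch below is a stand-in illustration only, not the paper's FB Classifier or its threshold scheme:

```python
def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response over the image interior.

    Low variance suggests blur (little high-frequency detail).
    `img` is a 2-D list of grey levels; a generic sharpness proxy,
    not the FB Classifier's actual feature set.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

edge = [[0, 0, 0, 0], [0, 255, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
flat = [[7] * 4 for _ in range(4)]
print(laplacian_variance(edge), laplacian_variance(flat))
```

A classifier of this kind would compare several such per-image statistics against thresholds and combine them into a cumulative decision.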
Keywords: statistical analysis; classifier; categorization; point spread function; cumulative probability of blur detection; eccentricity; textured segments; deblurring.
Aspect based Summarization in the Big Data Environment
by Krishnakumari Kalyanasundaram, Sivasankar Elango
Abstract: Due to the large amount of information available, it is difficult for customers to select a superior product. Reviews on shopping sites may confuse the customer when purchasing a product, and with the large volume of information it is difficult for customers to assess all of the reviews. Sentiment analysis plays an active role in extracting and identifying the opinion of the customer who purchased the product. Sentiment summarization helps the customer to buy the best product based on its features and values. Our technique involves aspect based sentiment analysis followed by summarization. The size of the datasets analyzed is huge and cannot be handled by traditional single-machine systems. To handle large datasets, we propose a parallel approach using a Hadoop cluster to extract features and opinions. By referring to an online sentiment dictionary and the interaction information (IIn) method, the sentiments are predicted and then summarized using clustering. After classifying each opinion word, our summarization system generates a short summary of the product based on several features. This makes the customer feel comfortable and improves competitive intelligence.
Keywords: Sentiment summarization; Opinion; Aspects; Hadoop; MapReduce; big data.
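The pipeline the abstract describes (extract aspect/opinion pairs in parallel, then aggregate them into a summary) can be sketched as a map function and a reduce function. This is a toy single-machine illustration, not the paper's Hadoop code; the sentiment lexicon and aspect list are invented for the example:

```python
from collections import defaultdict

# Toy opinion lexicon and aspect terms; the paper consults an online
# sentiment dictionary and mines aspects from the reviews instead.
LEXICON = {"great": 1, "good": 1, "poor": -1, "terrible": -1}
ASPECTS = {"battery", "screen", "camera"}

def map_review(review):
    """Map step: emit (aspect, polarity) for each aspect/opinion co-occurrence
    within a small word window around the aspect term."""
    words = review.lower().split()
    for i, w in enumerate(words):
        if w in ASPECTS:
            for neighbour in words[max(0, i - 2): i + 3]:
                if neighbour in LEXICON:
                    yield (w, LEXICON[neighbour])

def reduce_pairs(pairs):
    """Reduce step: aggregate polarity per aspect into a summary score."""
    scores = defaultdict(int)
    for aspect, polarity in pairs:
        scores[aspect] += polarity
    return dict(scores)
```

On Hadoop the same two functions become the mapper and reducer, with the shuffle phase grouping the emitted pairs by aspect key across machines.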
Deep Filter Bridge For Malaria Identification And Classification In Microscopic Blood Smear Images
by Priyadarshini Adyasha Pattanaik, Tripti Swarnkar, Debabala Swain
Abstract: Malaria is a major global infectious disease threat. Identifying and quantifying malaria is one of the most challenging tasks in the field of microscopy image processing due to variations in sample preparation and uncertainty of cell classes. Motivated by these challenges, we present a novel simplified deep learning model, the deep filter bridge, combining a multi-rolling stacked denoising autoencoder (SAE) and Fisher vector (FV) to automatically classify the different types of single cells in microscopic blood smear images as either infected or uninfected. The results indicate that the SAE kernels of the proposed model can extract representative malaria features from large unlabelled data, and an extreme learning machine (ELM) is used as the final ensemble base classifier to improve the learning speed of the algorithm. We have experimentally evaluated performance based on 39,000 single cell elements in a ten-fold cross-validation, obtaining an average classification F-score of 98.36% and accuracy of 98.12% on the microscopic blood smear image datasets.
Keywords: Deep Filter Bridge; Fisher Vector Pooling layer; Malaria Classification; Stacked Sparse Autoencoder.
On Lifetime Enhancement of Wireless Sensor Network using Particle Swarm Optimization
by Ashish Pandey, Shashank Shekhar, Arnab Nandi, Banani Basu
Abstract: In this article, a multi-dimensional, multi-objective optimization method to manage the network as per specific requirements, while considering the constraints of Wireless Sensor Networks (WSNs), is studied. A particle swarm optimization (PSO) based technique is used to address the energy management and lifetime issues. A predefined percentage of nodes are assumed to be supernodes having higher energy than ordinary nodes. Supernodes are used as cluster heads to enhance the lifetime of the network. Free space and fading channel models are considered to evaluate the performance of the PSO-based WSNs. The location of the sink node and the node density are varied to study their effect on network performance metrics. The population of supernodes is also varied to demonstrate its effect on the network while keeping all other parameters unchanged. The energy consumed by each cluster head (CH) is also studied to observe the load distribution among the CHs.
Keywords: WSNs; Wireless sensor networks; cluster; supernodes; PSO.
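The PSO technique referenced above can be sketched generically: cluster-head placement reduces to minimizing a fitness such as the total node-to-head distance. A minimal sketch, with illustrative parameter values rather than the paper's settings:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO minimizer: inertia-weighted velocity updates pulled toward
    each particle's personal best and the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = pbest[pbest_f.index(min(pbest_f))][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
                if f < fitness(g):
                    g = pos[i][:]
    return g
```

For example, placing one cluster head among sensor nodes at the corners of a square drives the solution toward the centroid, the point minimizing the summed squared distances.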
Design of robust H∞ fuzzy output feedback controller for affine nonlinear systems: Fuzzy Lyapunov function approach
by Leila Rajabpour, Mokhtar Shasadeghi, Alireza Barzegar
Abstract: In this paper, we propose a new systematic approach based on a non-quadratic Lyapunov function and the technique of introducing slack matrices, for a class of affine nonlinear systems with disturbance. To achieve the goal, first, the affine nonlinear system is represented via a Takagi–Sugeno (T–S) fuzzy bilinear model. Subsequently, the robust H∞ controller is designed based on the parallel distributed compensation (PDC) scheme. Then, the stability conditions are derived in terms of linear matrix inequalities by utilizing a Lyapunov function proposed in a non-quadratic context. Moreover, some slack matrices are introduced to reduce the conservativeness of the LMI stability conditions. Finally, to illustrate the merits and verify the effectiveness of the proposed approach, the application of an isothermal continuous stirred tank reactor (CSTR) for the Van de Vusse reaction is discussed in detail.
Keywords: T–S fuzzy bilinear model; robust H∞ controller; fuzzy output feedback controller; fuzzy Lyapunov function; linear matrix inequality; slack matrices; CSTR benchmark.
A Novel Map Matching Algorithm for Real-Time Location using Low-Frequency Floating Trajectory Data
by Kanta Prasad Sharma
Abstract: Continuous enhancement of technologies and modern, well-equipped infrastructure are necessary for easy life. Road accident and missing-vehicle ratios are continually increasing due to traffic hazards, which makes preventing mishaps very challenging. One way to protect human life from such conditions is more reliable navigation services, such as correct location tracking of vehicles on the road network. Real-time location tracking methods fully depend on map matching algorithms, which also compute a reliable path on the road network. A smart vehicle can provide more reliable tracking services during or before any mishap using the proposed map matching algorithm. This work contributes to ensuring correct location for necessary action during a mishap, alerting the accident zone and communicating messages without wasting valuable time. The proposed approach is validated on real tracking data and is evaluated under poor GPS service conditions.
Keywords: Map Matching; Confidence level; GAGAN; GPS trajectory; Perpendicular Distance; Euclidean distance; Signal Frequency; Kalman Filter; Clusters.
New Concepts of Product Vague Graphs with Applications
by Hossein Rashmanlou, Kishore Kumar Krishna, S. Lavanya, Ali Asghar Talebi
Abstract: It is known that vague models give more precision, flexibility and compatibility to a system compared to classic and fuzzy models. Vague graphs have an important role in neural networks, computer networks, and clustering. In the design of a network, it is important to analyze connections by levels. The structural properties of vague graphs provide a tool that identifies a solution to operations research problems. In this paper, we define the ring sum of two product vague graphs and analyze some interesting properties of isomorphism on product vague graphs.
Keywords: Ring sum; direct product; product vague graphs.
Construction of Complete Minimal Test Set for Single intra-level Bridging and Stuck-at Faults in Reversible Circuits
by MOUSUM HANDIQUE
Abstract: Reversible logic computing is gaining considerable interest in the field of low-power circuit design technology. Moreover, the concept of reversible computing is widely used in quantum circuit computation. This has motivated researchers to explore reversible logic as a circuit design alternative. Several problems of synthesis and optimization of reversible circuits have been reported in the literature. In this context, testing plays an important role in the effective performance of these circuits. For detecting all possible faults in reversible circuits, numerous fault models have been proposed, many of which are common to conventional logic circuits. In this paper, we consider the problem of testing NCT or GT library based reversible circuits with respect to efficient test set generation for single intra-level bridging faults and single stuck-at faults. The proposed test generation method has been developed based on the one-to-one mapping property of reversible circuits. Finally, experimental results show that the generated test set achieves 100% fault coverage for both fault models, and the results are compared with existing methods in terms of test set size.
Keywords: Complete test set; Fault Description List (FDL); Reversible Circuits; Single intra-level Bridging faults; Single stuck-at faults.
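The one-to-one mapping property the method relies on can be seen in a small simulation: a fault-free reversible cascade is a bijection on input vectors, so a single stuck-at fault is detected by any input vector whose faulty output differs from the fault-free one. The following is a simplified sketch of that idea, not the paper's test generation algorithm; the gate list and fault injection model are illustrative:

```python
def apply_gate(bits, controls, target):
    """NCT-library gate: invert the target line if all control lines are 1
    (NOT/CNOT/Toffoli, depending on how many controls are given)."""
    if all(bits[c] for c in controls):
        bits = bits[:target] + (1 - bits[target],) + bits[target + 1:]
    return bits

def run(circuit, bits, stuck=None):
    """Run a gate cascade on a tuple of bits. stuck = (line, value) forces one
    line to a constant at every level, modelling a single stuck-at fault."""
    for controls, target in circuit:
        if stuck is not None:
            line, val = stuck
            bits = bits[:line] + (val,) + bits[line + 1:]
        bits = apply_gate(bits, controls, target)
    if stuck is not None:
        line, val = stuck
        bits = bits[:line] + (val,) + bits[line + 1:]
    return bits

def detects(circuit, vector, stuck):
    """A test vector detects a fault iff faulty and fault-free outputs differ."""
    return run(circuit, vector) != run(circuit, vector, stuck)
```

A complete test set is then a smallest set of vectors such that every modelled fault is detected by at least one vector in the set.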
Multi-Resolution Image Fusion With Regularization Framework Using Enhanced Gabor Prior
by Ravikanth Garladinne, K.V.N. Sunitha, B. Eswara Ruddy
Abstract: Image fusion is defined as the process of combining relevant information from two or more images captured using different sensors, in order to produce a single output image with an appealing visual perception. It is considered one of the most powerful tools transforming the field of image processing in areas such as medicine, astronomy and defence. Satellite photography has made it possible to observe the Earth's surface without being in contact with the region of interest, and over the last four decades advances in remote sensing technologies have improved the methods for collection, processing and analysis of the data. Many researchers have used model-based approaches for fusion, with emphasis on improving the fused image quality and reducing colour distortion. In a model-based method, the low-resolution multispectral image is modelled as the blurred and noisy version of its ideal high-resolution fused image. Since this problem is ill-posed, it requires regularization to obtain the final solution. In the proposed model-based approach, a learning-based method that uses panchromatic data is employed to obtain the required degradation matrix that accounts for aliasing. Then, using the proposed model, the final solution is obtained by solving the inverse problem, where a Markov random field smoothness prior is used for regularizing the solution.
In order to better preserve the spatial details and to improve the estimate of the fused image, we solve the multi-resolution fusion problem in a regularization framework by making use of a new prior called the enhanced Gabor prior. Use of the enhanced Gabor prior ensures that features at different spatial frequencies of the fused image match those of the available high-resolution panchromatic image. Along with the enhanced Gabor prior, we also include an MRF prior which maintains the spatial correlatedness among the HR pixels.
Keywords: Resolution; fusion problem; image fusion; image enhancement; remote sensing.
Efficient Route Discovery Method in MANETs and Packet Loss Reduction Mechanisms
by Lakshman Narayana
Abstract: With the expansion of mobile devices, the number of nodes in Mobile Ad-hoc Networks (MANETs) has increased. MANETs being dynamic in nature, issues arise in determining the best route for packets. Moreover, packets may face excess traffic and congestion in the network, which degrades overall network performance; in the worst case these issues even lead to packet losses. In this paper an attempt is made to propose a new routing protocol which combines the properties of both static and dynamic routing protocols and thereby tries to eliminate the problems inherent in the network through density-based routing. The paper concentrates on an efficient route discovery process for secure data transfer with a lower ratio of packet loss. The analysis mostly concentrates on the normal traffic of the network, and after analyzing it the packet is given a path from source to destination which is less congested. For the minimization of attacks and packet dropping, various authors have built methods such as node authentication, passive feedback schemes, ACK-based methods, status-based schemes and incentive-based schemes; ACK-based schemes suffer from massive overhead due to the extra acknowledgment packet, and also suffer from decision ambiguity if the requested node declines to send back an acknowledgment. In this paper we use a 2-ACK based scheme over a secure channel to overcome the problem of decision ambiguity for the requested node, enhance node authentication and limit packet dropping in ad hoc networks.
Keywords: Dynamic routing; MANETs; Traffic analysis; packet loss reduction; 2-ACK method.
An efficient Quantum Hash based CP-ABE framework on Cloud Storage data
by Kranthi Kumar
Abstract: With the exponential growth of cloud data and storage space, cloud security has become one of the most interesting research areas in cloud computing. Attribute-based encryption (ABE) is a public key encryption scheme that allows cloud users to secure their sensitive information on public cloud servers. Quantum key distribution (QKD) is required to improve the security of communication systems. Quantum cryptographic schemes depend entirely on quantum mechanics, and the major objective of quantum key distribution is to generate a key that takes part in encryption. Traditional attribute-based encryption models are insecure and vulnerable to key distribution attacks such as man-in-the-middle attacks. Also, as the size of the input data increases, traditional ABE models fail to compute an efficient secret key due to computational time and network overhead. To overcome these issues, a novel chaotic integrity and quantum key distribution (QKD) based ciphertext-policy ABE model is implemented in a cloud environment. Experimental results proved that the proposed model has higher computation speed, lower storage overhead and more secure key distribution compared to traditional CP-ABE, KP-ABE and QKD-ABE models.
Keywords: ABE; CP-ABE; Quantum distribution; Data security; Cloud computing.
A Study on Automatic Early Detection of Skin Cancer
by Vikash Yadav, Vandana Dixit Kaushik
Abstract: Skin cancer is one of the most deadly among all existing types of cancer. Its growth rate is very high, so only early detection of skin cancer can help cure it successfully. We can therefore say that curability of and survival from skin cancer directly depend upon its diagnosis in the early stages. Since clinical observation faces several difficulties in detection, automatic detection of skin cancer can help to increase accuracy. This paper mainly emphasizes reviewing the research work which has already been done in this area and provides an overview of automatic detection of skin cancer.
Keywords: Skin Cancer detection; Classification; Segmentation; Feature Extraction; Automatic detection.
Fractional order Control of Switched Reluctance Motor
by Sihem Ghoudelbourk, Ahmad Taher Azar, Djalel Dib, Abelkrim Rechach
Abstract: In recent years, the switched reluctance machine (SRM) has represented a competitive contemporary technology in the automotive and aeronautics fields, because of the growing need for electric drives with frequent and large high-speed variations. This paper presents an application of fractional order control of the speed of a switched reluctance motor for electric and hybrid vehicles. It also presents a comparative study between speed control with a fuzzy logic control (FLC) regulator and with a fractional order proportional integral controller. Numerical simulations showed that speed control with the designed fractional order proportional integral controller achieves better dynamic behaviour of the motor, better speed tracking and good accommodation of load disturbances. The application of the fractional order proportional integral (FOPI) controller achieved higher performance and a longer lifetime for the SRM than the results obtained with a fuzzy logic controller.
Keywords: Fractional order Proportional integral controller (FOPI); Fuzzy Logic Control (FLC); speed control; Variable Reluctance Motor; Direct torque control (DTC).
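A discrete fractional order PI controller is commonly realized with the Grünwald–Letnikov (GL) approximation of the fractional integral. The sketch below follows that standard construction; the gains, fractional order and memory length are illustrative, not the paper's tuned values:

```python
def gl_coeffs(alpha, n):
    """Grünwald–Letnikov weights w_j = (-1)^j * C(alpha, j), computed with the
    standard recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

class FOPI:
    """Fractional-order PI: u = Kp*e + Ki * I^lam e with 0 < lam <= 1, where the
    fractional integral I^lam is a truncated GL sum over the error history."""
    def __init__(self, kp, ki, lam, h, memory=500):
        self.kp, self.ki, self.lam, self.h = kp, ki, lam, h
        self.w = gl_coeffs(-lam, memory)  # negative order => integration
        self.hist = []

    def step(self, error):
        self.hist.append(error)
        recent = self.hist[-len(self.w):]
        frac_int = self.h ** self.lam * sum(
            wj * e for wj, e in zip(self.w, reversed(recent)))
        return self.kp * error + self.ki * frac_int
```

With lam = 1 every GL weight equals 1 and the controller reduces to an ordinary PI with a rectangular-rule integral, which is a convenient sanity check; fractional lam gives the integral term a slowly fading memory instead of equal weighting.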
Smart and Efficient IoT Based Quality Tracking System for Perishables Pertaining to Indian Conditions
by Shriram K V, Sriharsha P, Ikram Shah
Abstract: In most countries, vegetables, fruits or any eatables (which could be raw materials as well) are cultivated in one place and circulated to all other places in the country through various modes of transport. For example, most of the vegetables consumed in Coimbatore and other parts of Tamil Nadu are cultivated in and circulated from Ooty through trucks, lorries and other modes of transport. The transport of vegetables happens from one place to another, and the same get distributed to further interior places. In the process, the order in which the vegetable baskets are stacked in determines the order in which they are taken out: the first placed basket is the last one to go out, and by the time it goes out it could already be spoiled and unusable, while some other vegetables could withstand much more time inside the transport vehicle. Our innovation is aimed at identifying the vegetable baskets which are most likely to be spoiled and delivering them in order of the chance that they may be spoiled earlier, thereby delivering all the baskets at appropriate times in an identified order without letting them go to waste. Our product is to be kept in the truck or lorry cabin, with multiple sensors and a micro-controller arranged to track each basket/bag by a tag number, which keeps track of the basket/bag even if it is transferred or shifted to another truck. The overall cost of this product is very minimal, it is a one-time investment for owners, and it would ensure healthy vegetables being delivered to the customer.
Keywords: Perishable tracking; IoT; Food quality monitoring; Food Products; Food wastage; Technology for quality monitoring; Food transport.
Secure Data Transmission for Protecting the Users' Privacy in Medical Internet of Things (M-IoT)
by Purushotham Jyotheeswari, N. Jeyanthi
Abstract: Internet of Things (IoT) is the catchphrase of recent years, with transdisciplinary research. The Medical Internet of Things (M-IoT) is a novel development in the fields of healthcare and information technology to store and retrieve medical data, which contains sensitive patient information and heterogeneous medical data. To preserve users' privacy, we propose a secure data transmission mechanism for M-IoT. The proposed approach encompasses three phases: user authentication, symmetric key generation and disjoint multipath data transmission. In the first phase, gateways are validated with the cloud data servers. In the second phase, secret keys are generated for scrambling the message. Finally, the disjoint multipath data transmission divides the encrypted data into fragments and sends them to the server. The experimental evaluation proved the efficiency of the proposed protocol in terms of reduced delay and response time for the upload and download of medical data from the cloud servers.
Keywords: Authentication; Wireless Medium; Medical Data; Privacy; Internet of Things.
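The third phase — dividing the encrypted payload into fragments for disjoint paths — can be illustrated with a simple round-robin byte split, so that no single path carries the whole message. This is an illustrative splitting policy, not the paper's exact fragmentation scheme:

```python
def split_fragments(data: bytes, n_paths: int):
    """Divide a message into n_paths fragments, round-robin by byte, so an
    eavesdropper on any single path sees only an interleaved subset."""
    return [data[i::n_paths] for i in range(n_paths)]

def reassemble(fragments):
    """Inverse of split_fragments: interleave the fragments back in order."""
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)
```

In the proposed protocol the input to such a split would already be ciphertext, so compromising one path reveals neither the plaintext nor a contiguous ciphertext block.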
Fuzzy Associated Trust Based Data Security in Cloud Computing by Mining User Behavior
by Thulasi Bikku
Abstract: Nowadays, cloud computing plays a vital role in various areas. Although the cloud is flexible and cost-effective, it has several challenging issues to be addressed; some of the fundamental ones are cloud security and privacy. The proposed fuzzy-based security mechanism enhances the security level of data storage in the cloud by computing cloud users' trustworthiness depending on their behavior. Reliability is assessed using parameters that express user behavior, such as transfer rate, bandwidth, number of bytes per second of data from service provider to user, time period of access to the cloud system, timings of user visits, and IP address used by the user for cloud access. Cloud data is protected by encryption using a key generated based on the trust level of users and their frequent access pattern. The frequent access pattern is detected by mining users' past behavior using the FP-Growth algorithm. Experimental results show that the proposed scheme withstands the blackhole attack and offers a higher packet delivery ratio.
Keywords: Cloud computing; Security; Privacy; Trust; fuzzy analysis; pattern mining.
The Report of Questionnaire Survey on Privacy in Social Networking Websites
by R.G. Kavitha, N. Jayalakshmi
Abstract: Social Networking Sites (SNS) have become an important live information source. The large amount of personal information available over SNS attracts the attention of corporate, business and marketing people, who misuse the personal information of users in different ways. This process leads to critical user concerns over privacy. This paper tries to identify the factors which influence privacy disclosure on SNS through a questionnaire. The main focus of this paper is to examine the behavior and privacy issues of users on SNS. It also attempts to analyze the survey results, which illustrate interesting findings on the use of SNS and the awareness of personal data protection.
Keywords: Privacy; Social Networking Sites; questionnaire survey; new findings.
Mining Historical Changes to Predict Software Evolution
by Mustafa Hammad, Maen Hammad, Batool Horani, Sari Awwad
Abstract: Software evolution reflects the progress volume of the development process. This increasing volume is based on a sequence of incremental changes and maintenance activities. As the volume increases, more resources are needed to control and handle future change requests. Developers and designers need to be able to analyze and predict in advance how the software evolves, in order to better allocate the needed resources. Better and correct predictions help in estimating the required resources, such as cost and maintainers. Prediction can also help in setting the strategies and assumptions to solve the problems that can be encountered when evolving a project from one version to the next. This paper presents a model to predict the evolution of software projects. A set of hypotheses is examined to predict the evolution of specific parameters based on machine learning techniques. Data mining approaches are used to test the hypotheses on different versions of two software projects. Experimental results showed that future changes in software systems can be predicted using the changed parameters of the previous version.
Keywords: software evolution; prediction model; machine learning.
A System to prevent toiletry (lavatory) based diseases such as Norovirus, Staphylococcus, Escherichia and Streptococcus through IoT and Embedded Systems
by K.V. Shriram, Giridhararajan R, Ikram Shah, Karthikeyan S, Abhishek SN
Abstract: One would be surprised to know that toiletry-based diseases play a major role in world hygiene; many a time they have even been fatal, killing many. On one side there are no toilets and people still use open spaces; on the other, there are toilets but they lack maintenance. Both are dangerous. Not only at home, lavatories are everywhere: from the simplest bus stands in interior villages to the most sophisticated airports, toilets play a major role, including lavatories inside trains, flights and other modes of transport. All of these lavatories, if not maintained well, lead to many disastrous and dangerous diseases caused by Staphylococcus, Escherichia, and Streptococcus. Surveys reveal that unclean lavatories are agents which mostly target children, with victims mostly under 5. Hence, with this much abundant technology growth, it is important to provide a technical solution to monitor the quality of lavatories, so as to maintain them as and when required. Here, we aim at a frugal IoT-based solution for monitoring the quality of lavatories on the go, while alerting the concerned people to carry out the necessary action. We also provide a simple feedback mechanism which can be used to alert the concerned about the quality of the lavatories. We capitalized on IoT, data analytics, and embedded systems to build this system. The system is tested for its working, and it is observed that, if implemented, it would be a definite value addition for users and would help in building a healthy chain of lavatories.
Keywords: cleanliness; unclean lavatories; toiletry diseases; IoT; Cloud; Data Analytics; On the go; feedback; hygiene.
Exploring real domain problems on the second generation neural network
by Amit Gupta, Bipin Kumar Tripathi, Vivek Srivastava
Abstract: This paper presents the competitive performance of a second generation neural network (CVNN) on two-dimensional space over a first generation neural network (RVNN) on single-dimensional space. Real datasets are selected for the proposed research work. The second generation neural network is based on the theory of complex numbers, which have both magnitude and phase to represent a real-valued phenomenon. For the testing and training of real-valued problems in the complex domain, a mathematical approach, the Hilbert transformation, is used to convert all the real-valued data into complex form by shifting the phase by π/2.
Keywords: real-valued neural network; complex-valued neural network; complex activation function; back propagation algorithm; Hilbert transformation.
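The conversion step the abstract describes — lifting real-valued samples into the complex domain via a π/2 phase shift — is usually implemented through the FFT construction of the analytic signal (real part = original data, imaginary part = its Hilbert transform). A scipy-free sketch using only numpy:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal x + j*H(x) via the FFT method: keep DC (and Nyquist for
    even n), double the positive frequencies, zero the negative ones."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)
```

For a pure cosine the imaginary part comes out as the sine, i.e., the same signal phase-shifted by π/2, and the magnitude is constant — exactly the magnitude/phase pair a complex-valued network trains on.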
A node combination approach with fuzziness in shortest path problem
by Pushpi Rani, Dilip K. Shaw, Jayakrushna Sahoo
Abstract: The shortest path problem is one of the most popular and frequently used network optimization problems. In this paper, a fuzzy node combination method is proposed to find the shortest path under an uncertain environment. The proposed method incorporates fuzziness into the node combination algorithm, an alternative to Dijkstra's algorithm. An illustration of the proposed fuzzy node combination method is presented and the impact of the method is evaluated on a transportation network. Experimental results reveal that the fuzzy node combination algorithm is more efficient than existing fuzzy shortest path finding methods.
Keywords: Fuzzy sets; fuzzy number; node combination; canonical representation; graded mean integration.
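The fuzzy arithmetic such methods rest on can be sketched with triangular fuzzy arc lengths ranked by graded mean integration. For brevity the sketch grafts this onto a Dijkstra-style search rather than reproducing the paper's node combination procedure, so it illustrates the fuzzy-length bookkeeping, not the proposed algorithm itself:

```python
import heapq

def graded_mean(tfn):
    """Graded mean integration of a triangular fuzzy number (a, b, c):
    P = (a + 4b + c) / 6, used to rank fuzzy path lengths."""
    a, b, c = tfn
    return (a + 4 * b + c) / 6.0

def fadd(p, q):
    """Addition of triangular fuzzy numbers is component-wise."""
    return tuple(x + y for x, y in zip(p, q))

def fuzzy_shortest_path(graph, src, dst):
    """Dijkstra-style search over fuzzy arc lengths; candidates are ordered by
    the graded mean of the accumulated triangular fuzzy length."""
    best = {src: (0.0, 0.0, 0.0)}
    pq = [(0.0, src, (0.0, 0.0, 0.0))]
    while pq:
        _, u, acc = heapq.heappop(pq)
        if u == dst:
            return acc, graded_mean(acc)
        for v, w in graph.get(u, []):
            cand = fadd(acc, w)
            if v not in best or graded_mean(cand) < graded_mean(best[v]):
                best[v] = cand
                heapq.heappush(pq, (graded_mean(cand), v, cand))
    return None
```

On a small network with arcs s→a→t of fuzzy length (1, 2, 3) each and s→b→t of lengths (4, 5, 6) and (0, 1, 2), the search returns the fuzzy path length (2, 4, 6) with crisp rank 4.0.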
Map Reduce Approach For Road Accident Data Analysis Using Data Mining Techniques
by Nagendrababu Cs
Abstract: Nowadays, one of the most life-threatening risks to humans is road accidents. Traffic accidents that cause a lot of damage occur all over. The best answer for these sorts of accidents is to foresee future accidents ahead of time, giving drivers chances to avoid the dangers or reduce the damage by reacting quickly. Anticipating accidents on the road can be accomplished using classification analysis, a data mining technique requiring enough data to build a learning model. However, developing such a prediction system involves several issues: it requires many hardware resources to collect and analyze traffic data for predicting road accidents, since the data are extremely large. The purpose of this manuscript is to build a prediction framework that can resolve all of these issues. This paper recommends using the MapReduce framework to process and analyze huge traffic data efficiently. Based on this, the prediction system first pre-processes the huge traffic data and analyzes it to create data for the learning system. To enhance the prediction accuracy, the corrected data are arranged into several groups, to which classification analysis is applied.
Keywords: Road accident prediction; MapReduce; Clustering; pre-processing; Association rules; data set.
Multiple Polynomial Regression for Solving Atmospheric Scattering Model
by Monika Verma, Vikash Yadav, Vandana Dixit Kaushik, Vinay Kumar Pathak
Abstract: Multiple polynomial regression (MPR) is a novel technique for removing haze from images. Haze removal is a challenging task because not much information is available when single-image haze removal is considered. MPR is a method that solves the atmospheric scattering model (ASM). To solve the ASM from a single image, many unknown variables, like the depth of the scene, the scattering coefficient, the transmission of the medium and the atmospheric light, have to be calculated. If all of these variables are calculated with precision, then it is possible to obtain a haze-free image. The current work has shown improvements over state-of-the-art methods.
Keywords: Atmospheric Scattering Model; Regression; Peak Signal to Noise Ratio; Entropy.
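The ASM referred to above is I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)) for scattering coefficient β and depth d(x). Once t and A are estimated (by MPR in this paper), the haze-free radiance J follows by inverting the model. A minimal per-pixel sketch of that inversion (the clamp value t0 is a common heuristic, not taken from the paper):

```python
import math

def transmission(beta, depth):
    """t(x) = exp(-beta * d(x)): transmission from scattering and scene depth."""
    return math.exp(-beta * depth)

def dehaze_pixel(I, A, t, t0=0.1):
    """Invert I = J*t + A*(1 - t) per channel: J = (I - A) / t + A.
    t is clamped below by t0 to avoid amplifying noise where haze is dense."""
    t = max(t, t0)
    return tuple((i - A) / t + A for i in I)
```

Forward-simulating a pixel with known J, A and t and then inverting recovers the original radiance exactly, which is the consistency check the estimated variables must pass.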
Shunt Active Power Filter with a Three-Phase Neutral-Point-Clamped Inverter for Power Quality Improvement Based on Fractional Order Proportional Integral Control
by Ahmad Taher Azar, Hayette Dendani, Mohamed Adjabi, Amar Omeiri, Sihem Ghoudelbourk, Djalel Dib
Abstract: The quality of the electrical wave is far from perfect, due to the use of non-linear loads that generate current harmonics and consume reactive power; this quality can be altered by several types of disturbances. Knowing the origins and effects of harmonic pollution on electrical networks, this study implements a system of active filters with a three-level inverter under hysteresis-based control, which injects into the network a current equal to that absorbed by the polluting load but in phase opposition, thus leading the supply current to be sinusoidal. The regulation and stability of the power supply of the filter during a load variation are ensured first by a classical PI and then by a fractional PIα. A comparative study has been conducted and the results have been validated and improved in the MATLAB/Simulink environment.
Keywords: Active filters; shunt active power filter; Reactive Power; Harmonics; fractional order PI controller (FOPI).
Cooperative Retransmission Based MAC Method for Underwater Sensor Networks
by Abdul Gaffar Humayun, Venkata Krishna P
Abstract: Designing a medium access control (MAC) protocol is a challenging task for underwater sensor networks (USNs). In this paper, a cooperative retransmission based medium access control (CRMAC) method for underwater sensor networks is proposed. When the direct transmission between the source node and the destination node fails, retransmission is initiated using cooperative nodes. In the CRMAC method, the cooperative nodes are selected based on a virtual backoff algorithm. The CRMAC method is compared with Slotted FAMA, and the simulation results show that the proposed method performs well in terms of throughput and packet delivery ratio.
Keywords: Underwater Sensor Network; Cooperative retransmission; medium access control; throughput.
Facial Expression Recognition of Multiple Stylized Characters using Deep Convolutional Neural Network
by Yogesh Kumar, Shashi Kant Verma, Sandeep Sharma
Abstract: Human faces reveal a wealth of information, including emotions, character, state of mind and much more. Beyond what is spoken, human faces convey plenty of information in the form of facial expressions. Recognition of facial expressions has become significant in the discipline of human-computer interaction for determining the emotional state of human beings. This paper proposes a Facial Expression Identification Method (FEIM) for the recognition of six basic facial expressions (anger, sadness, fear, happiness, surprise and disgust) plus one neutral expression. The features are extracted by an integrated Gabor and Local Binary Pattern (LBP) feature extraction method, and Principal Component Analysis (PCA) is applied for feature selection. A deep neural network is trained on the FERG-DB (Facial Expression Research Group Database) dataset to classify the facial expression images into seven emotion expression classes (anger, fear, disgust, happiness, neutral, sadness and surprise). The effectiveness of the proposed system is demonstrated by comparing its recognition rate with state-of-the-art techniques. The overall results in terms of precision, recall and F-score also favour the efficacy of the proposed method.
Keywords: Facial Expressions; Deep Learning; Convolutional Neural Network; Deep Neural Network; Facial Features; Gabor Filter; Principal Component Analysis; Local Binary Pattern.
Medical Data Clustering Based on Particle Swarm Optimization and Genetic Algorithm
by INDRESH KUMAR GUPTA, VIKASH YADAV, SUSHIL KUMAR
Abstract: Medical data clustering is a popular scientific approach for finding hidden patterns in large medical datasets. Medical experts utilize these patterns to make a clinical diagnosis of the likelihood of a disease. Clustering groups the data objects of a dataset such that data similarity within a group is higher than between groups. In this work, a hybrid PSO-GA algorithm based on Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) is developed for medical data clustering. The performance of hybrid PSO-GA has been examined against K-means, PSO and GA on six popular medical datasets, namely iris, thyroid, breast cancer, heart, diabetes and pima, adopted from the UCI machine learning repository, using three criteria: sum of intra-cluster distances, error rate and CPU running time. Tabular and graphical simulation results confirm that the hybrid PSO-GA technique for medical data clustering is superior to K-means, PSO and GA.
Keywords: Medical data clustering; Particle swarm optimization; Genetic algorithm; Influence factor; Clustering metric.
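The sum of intra-cluster distances mentioned above is the standard clustering fitness that a PSO-GA particle (encoding centroid coordinates) would minimize. The following minimal sketch is illustrative only, using hypothetical 2-D points, and is not the authors' implementation:

```python
import math

def assign(points, centroids):
    """Assign each point to its nearest centroid by Euclidean distance."""
    return [min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            for p in points]

def intra_cluster_distance(points, centroids, assignment):
    """Sum of distances from each point to its assigned centroid
    (the fitness a PSO/GA candidate solution would minimize)."""
    return sum(math.dist(p, centroids[c]) for p, c in zip(points, assignment))

# Hypothetical data: two tight clusters and two candidate centroids.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centroids = [(0.05, 0.0), (5.05, 5.0)]
labels = assign(points, centroids)
print(labels)                                                       # [0, 0, 1, 1]
print(round(intra_cluster_distance(points, centroids, labels), 2))  # 0.2
```

A metaheuristic would perturb the centroid coordinates and keep candidates that lower this fitness.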
Some results on edge irregular product vague graphs
by Ganesh Ghorai, Abolfazl Lakdashti, Hossein Rashmanlou, Kishore Kumar P.K, Madhumangal Pal
Abstract: Vague graphs have recently become a rapidly growing research area, as they generalize fuzzy graphs. In this paper, we analyze the concept of edge regular product vague graphs and its properties. The concepts of edge irregular and strongly edge irregular product vague graphs are also analyzed, together with their properties.
Keywords: Product vague graph; edge regular and irregular product vague graph; strongly edge irregular.
Node Replication Attacks in Mobile Wireless Sensor Networks
by Mojgan Rayenizadeh, Marjan Kuchaki Rafsanjani
Abstract: Mobile wireless sensor networks (MWSNs) commonly operate in hostile environments such as battlefields and surveillance zones. Owing to their operating nature, MWSNs are often unattended and generally not equipped with tamper-resistant tools. Because of node mobility, an MWSN is more vulnerable than a static WSN (wireless sensor network). Node replication attacks are among the most insidious attacks on sensor networks and can serve as the foundation for many other attacks. Many methods have been proposed to detect and prevent this attack; however, almost all practical schemes assume a stationary network model in which sensor nodes are fixed and immobile, so they are not suitable for MWSNs. In this article, we survey several node replication attack detection methods for mobile wireless sensor networks, which use different detection strategies, and we compare them with respect to parameters such as memory overhead and communication overhead.
Keywords: Mobile Wireless Sensor Network; Node replication attacks; Wireless network security; Detection method.
A METHOD FOR PREDICTING THE RELEASE DATE OF SOFTWARE DURING ITS TESTING STAGE
by Poonam Panwar, Arvind Kumar Lal, Chander Mohan
Abstract: Software developers are generally interested in estimating the release date of software during its testing stage, so that commitments regarding the release date can be made in advance. In this paper, a method is proposed for predicting the release date of software midway through its testing stage. The proposed method first chooses the reliability growth model that best fits the test data available at that point and then uses it to predict the likely release date. The method is simple and easy to implement, and its performance has been tested on ten real datasets. The results show that in most cases a nearly accurate prediction of the release date is possible midway through the testing stage. The performance of the method has also been compared with earlier methods proposed in the literature for this purpose.
Keywords: Software Reliability Growth Models; Testing; Model Ranking; Reliability; Release time.
Damping of oscillations in multi machine power system by PSO-GWO optimized dual UPFC based controller
by Narayan Nahak, Ranjan Kumar Mallick
Abstract: In this work, a hybrid Particle Swarm Optimization-Grey Wolf Optimizer (PSO-GWO) technique is proposed to tune the parameters of a UPFC-based dual damping controller. The proposed optimized controller is applied to damp inter-area oscillations in a multi-machine power system. The dual controller simultaneously controls two independent control actions of the UPFC: the modulation index of the series converter and the phase angle of the shunt converter. Before being applied to the multi-machine system, the controller is tested on a single-machine system to validate its efficacy. The results obtained with the proposed controller are compared with PSO- and GWO-optimized controllers to establish its superiority. A broad comparison is performed between single lead-lag and dual controllers optimized by the PSO, GWO and PSO-GWO techniques. The system responses and eigenvalues show that the proposed PSO-GWO-optimized dual controller damps oscillations to a much larger extent than all other single and dual optimized controllers.
Keywords: FACTS; UPFC; PSO-GWO; dual damping controller; multi machine stability.
An IoT based accident severity detection for automobiles with alerting the appropriate location of the accident - An innovative attempt
by Juluru Anudeep, G. Kowshik, G.I. Aswath, Shriram KV
Abstract: Although occupant protection systems aid present means of transportation such as cars, crash severity statistics from the past few years indicate that the mortality rate has increased to about 35%, pointing to the need to improve the quality of service given to citizens. An article in the Times of India reports that 27% of deaths in India are due to the lack of, or delay in, medical attention, a persistent cause of death mainly in accidents on highways. The NHTSA (National Highway Traffic Safety Administration) reports that, on average, about 15,913 accidents occur per day in the USA, based on a five-year survey. For instance, consider a place on a long highway with only one hospital offering basic facilities such as an emergency ward, ambulance service and an operating theatre, and suppose that four or five accidents have occurred within the hospital's province. The hospital authorities will then face a dilemma, because it is uncertain where the ambulance must be sent first. If the ambulance is sent to the nearest site, where the severity of the accident is very low, a critically injured person elsewhere may die. The problem lies in conveying the severity of the accident to the hospital. Systems exist that detect a collision and deploy airbags and other safety measures, but many do not measure the severity level of the accident. Here we introduce a system that measures and reports the severity of the accident and also provides a geotag, i.e., details of the place where the accident occurred, together with a timestamp. The system is built with force sensitive resistors (FSRs), which can detect an impact accurately, and a GPS module is used to obtain the longitude and latitude of the crash site.
The data from the sensors are processed by a Python script that checks whether a crash has occurred. The system mainly strives to decrease the delay in the arrival of medical services at the place of the accident. It also helps hospitals decide where medical services should be sent first in the case of multiple simultaneous accidents, by using the severity level.
Keywords: Severity level; GPS data; force sensitive resistor; Collision; webpage.
Ant Colony Optimization for Latent Fingerprint Matching
by Richa Jindal, Sanjay Singla
Abstract: Biometric recognition is a prominent tool for recognizing individuals based on personal and biological traits such as fingerprint, iris, voice and face. Fingerprint recognition has been used substantially over the decades to identify and verify an individual's identity, and in forensic investigations fingerprint matching is one of the most reliable tools for person identification. Latent fingerprints collected from crime scenes are matched with full fingerprints for person identification; they are accidentally left finger impressions with overlapping patterns, cluttered backgrounds and spoiled minutiae information. This paper proposes a system for automated latent fingerprint matching. The latent fingerprint images are initially enhanced to remove noise and to obtain useful information; enhancement includes the pre-processing steps of segmentation followed by normalization, filtering and image binarization. Minutiae features are then extracted from the pre-processed data, and the final matching is performed using an Ant Colony Optimization (ACO) algorithm to optimize the process of minutiae matching. The experimental results on NIST Special Database 27 are reported in terms of accuracy assessment measures (precision, recall, F-score), similarity score and identification rate. The proposed system shows satisfactory results in comparison with other existing fingerprint matching techniques.
Keywords: Ant Colony Optimization; Latent Fingerprints; Biometric Recognition; Fingerprint Matching; Swarm Intelligence; Optimization.
An Exploratory Data Analysis on Rating Data using Recommender System Algorithms
by Lakshmi Pathi
Abstract: The day-to-day uploading of data to the world wide web and the growth of e-commerce have driven the development of recommender systems. A recommender system filters information based on the user's interests, and recommender systems are nowadays used in almost every domain; their main advantage is making search easy. Recommender systems are classified into content-based filtering, collaborative filtering and hybrid approaches. In this paper, we analyse the performance of the item similarity, matrix factorization and popularity-based recommender algorithms and evaluate them with precision-recall and root mean square error metrics.
Keywords: Recommender Systems; Collaborative filtering; Matrix factorization; evaluation metrics.
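The root mean square error metric used above for rating prediction can be sketched in a few lines. The ratings here are hypothetical, purely for illustration:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and actual ratings:
    sqrt of the mean squared prediction error."""
    assert len(predicted) == len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical ratings on a 1-5 scale.
actual = [4, 3, 5, 2]
predicted = [3.5, 3.0, 4.0, 2.5]
print(round(rmse(predicted, actual), 3))  # 0.612
```

A lower RMSE means the recommender's predicted ratings lie closer to the users' true ratings.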
Towards A Standard-Based Model of System Dependability Requirements
by Ghadeer Al-Qahmouss, Khaled Almakadmeh
Abstract: System dependability is a quality factor indicating that a system is able to provide trusted services and that system failures will not cause catastrophic or unexpected events. Identifying dependability requirements helps develop dependable real-time critical systems and build user trust. A review of the literature shows that there is no standard-based model that captures dependability requirements for all types of real-time critical systems. This paper presents a standard-based model for capturing the dependability requirements of real-time critical systems, identifying them using concepts defined in the ISO 25010 and ISO 19761 international standards. Further, this paper presents an approach that demonstrates the practical steps needed to build a dependability requirements model. An experiment conducted on the requirements specification of an aircraft control system is presented to verify the applicability of the proposed standard-based model for capturing the dependability requirements of such a real-time system.
Keywords: Dependability; Real-Time Systems; System Requirements; ISO19761; ISO/IEC 25010.
Optimal Path Planning with Hybrid Firefly Algorithm and Cuckoo Search Optimization
by Monica Sood, Vinod Kumar Panchal
Abstract: Background/Objective: Path planning is one of the core and most extensively studied problems in robotics. Its scope is not limited to robotics; it is also pertinent in many application areas, including simulations and gaming, computer graphics, very large scale integration (VLSI) and more. This paper proposes an optimization algorithm to identify the optimum path from a defined source to a destination without any obstacle collision. Method: A hybrid algorithm is proposed by combining the properties of two swarm intelligence techniques: cuckoo search and the firefly algorithm. The multi-agent firefly algorithm uses the Lévy flight property for the random movement of fireflies and puts forth the best path from the defined source to the destination without colliding with any obstacle. The cuckoo's brood-parasitic behaviour of imitating the pattern of the host's eggs is used by the fireflies to handle obstacles present in the path. Result/Conclusion: The experimental results are in adequately acceptable agreement with the proposed hybrid algorithm. Three experiments are performed on the red-band satellite image of the urban and vegetation area of the Alwar region in Rajasthan, India. The results indicate the efficiency of the proposed hybrid algorithm compared with the individual cuckoo search and firefly algorithms. The proposed hybrid algorithm detected the optimum path at iteration 27, with a path length of 246 pixels and a simulation time of between 112 and 167 seconds, whereas cuckoo search achieved the optimum path at iteration 49 with a simulation time of between 179 and 230 seconds, and the firefly algorithm achieved the optimum path length at iteration 56 with a simulation time of between 151 and 195 seconds.
Keywords: Optimal Path Planning; Cuckoo Search; Firefly Algorithm; Nature Inspired Computing; Computational Intelligence; Machine Learning.
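The Lévy flight property mentioned above is commonly generated with Mantegna's algorithm, which mixes many short local moves with occasional long jumps. This sketch shows that generation step only, under the usual choice β = 1.5; it is not the authors' path planner:

```python
import math
import random

def mantegna_sigma(beta):
    """Scale of the Gaussian numerator in Mantegna's Levy-flight algorithm."""
    return (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

def levy_step(beta=1.5, rng=random):
    """One heavy-tailed Levy-flight step length: u / |v|^(1/beta),
    with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    u = rng.gauss(0.0, mantegna_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(1000)]
print(round(mantegna_sigma(1.5), 3))  # numerator scale for beta = 1.5
```

In a firefly or cuckoo update, each agent's position would be perturbed by such a step scaled by the distance to the current best solution.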
Advanced Cryptography Technique in Certificateless Environment using SDBAES
by Naveen Kumar
Abstract: Certificateless encryption is a kind of public key encryption that eliminates the disadvantages of the traditional PKI-based public key encryption scheme and the identity-based encryption scheme. Existing certificateless environments lack security, which leads to numerous security issues, and they are not efficient in terms of time and performance. To reduce these issues, the proposed work uses the SDBAES algorithm, which mitigates most security threats in the cloud. It also increases the efficiency of the system by reducing time and improving performance. The experimental results show that the proposed work provides higher security and efficiency than the existing techniques.
Keywords: EMV-CLSC; CLSC; IBAS; SDBAES.
Role of Data Mining and Machine Learning Techniques in Medical Imaging
by Abhishek Agnihotri, VIKASH YADAV, VANDANA DIXIT KAUSHIK
Abstract: Medical images are among the most ubiquitous images around us. With the expansion in the use of medical images and their growing resolution and size, medical image processing has become a subclass of image processing, and the field is continuously gaining the attention of researchers all around the world. There is a need to handle medical images for better diagnosis, which calls for combining image processing with data mining and machine learning techniques. Data mining and machine learning have given rise to computer-based applications for better, more reliable and faster diagnosis of patients, detection of lesions, change detection and, more importantly, better therapy and diagnosis planning. In this paper, various data mining and machine learning techniques used in the field of medical imaging are presented.
Keywords: Machine Learning; Neural Network; Bayesian Classification; Support Vector Machine; Decision Tree.
OPTIMISED RETINA VERIFICATION SYSTEM FOR PATHOLOGICAL RETINA
by Rani Bms
Abstract: In retinal biometrics, the recognition rate is affected by the vascular complexity of retinal images; the vascular pattern becomes highly irregular in affected retinal images owing to pathological signs. This paper presents a retina verification method that includes an AWN classifier to detect the blood vessel structure of a pathological retina. A distinctive retinal feature that remains constant under pathological changes is the bifurcation angle, and this paper demonstrates a method for its extraction. The particular bifurcation points are generated, and the positions of the corresponding bifurcation indications are calculated. A sparse matrix representation is used to store the retina template, optimizing memory, and the templates are then compared.
Keywords: Retinal biometrics; vascular; AWN classifier; bifurcation angle; retina template; sparse matrix.
On Interval Covering Salesman Problem
by Siba Prasada Tripathy, Amit Tulshyan, Samarjit Kar, Tandra Pal
Abstract: After a disaster, during humanitarian relief transportation or mass fatality management, the cost of a journey between two places may be uncertain owing to the varying degree of devastation in the affected area. In such scenarios, a viable model that can handle this uncertainty is essential to manage the situation in a cost-effective and reliable manner. In this paper, we introduce the Interval Covering Salesman Problem (ICSP), in which the cost of an edge is represented by an interval number. The ICSP is a variant of the Covering Salesman Problem (CSP), which is helpful for many real-world problems in uncertain environments. We formulate a mathematical model for the ICSP with uncertain costs of travel between nodes/places, propose a Metameric Genetic Algorithm (MGA) for the ICSP, and present its simulation results. For the implementation, we use some benchmark TSP instances with the costs changed to interval numbers.
Keywords: Traveling Salesman problem; Covering Salesman Problem; Uncertainty; Interval Constraint; Metameric Genetic Algorithm; Global parent.
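The interval-number edge costs described above add componentwise, and candidate tours can be ranked by, for example, the interval midpoint. This minimal sketch of that idea uses hypothetical tours and one common ranking rule, not necessarily the comparison the authors adopt:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi] representing an uncertain edge cost."""
    lo: float
    hi: float

    def __add__(self, other):
        # Interval addition: best and worst cases accumulate independently.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def midpoint(self):
        return (self.lo + self.hi) / 2

def tour_cost(edges):
    """Total interval cost of a tour, given its interval edge costs."""
    total = Interval(0.0, 0.0)
    for e in edges:
        total = total + e
    return total

# Two hypothetical candidate tours compared by interval midpoint.
tour_a = tour_cost([Interval(2, 4), Interval(3, 5)])
tour_b = tour_cost([Interval(1, 9), Interval(2, 3)])
print(tour_a)                                    # Interval(lo=5.0, hi=9.0)
print(tour_a.midpoint() < tour_b.midpoint())     # True: tour_a preferred
```

A genetic algorithm for the ICSP would evaluate each chromosome's tour with such an interval fitness and select parents under the chosen interval-ordering rule.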
Home Automation System using Raspberry Pi Zero W
by VIKASH YADAV, Deepak Kumar Mishra, Prathmesh Singh, Priytosh Kumar Tripathi
Abstract: This paper proposes a home automation system using the IoT (Internet of Things). The paper focuses on the use of the latest technologies, namely the Internet of Things and the Raspberry Pi Zero W. The home automation system remotely connects the appliances and other electronic devices in a home or workplace to the internet so that they can be controlled via remote devices from anywhere in the world.
Keywords: Home Automation; Internet of Things; Raspberry Pi; Remote devices.
OABC Scheduler: A Multi-Objective Load Balancing Based Task Scheduling in a Cloud Environment
by Shameer A.P, A.C. Subhajini
Abstract: The primary goal of scheduling is to allocate each task to a corresponding virtual machine in the cloud. Load balancing of virtual machines (VMs) is an imperative part of task scheduling in clouds: whenever certain VMs are overloaded with tasks while the remaining VMs are underloaded, the load must be adjusted to achieve ideal machine utilization. This paper proposes a multi-objective task scheduling algorithm using an oppositional artificial bee colony (OABC) algorithm, which aims to achieve a well-balanced load across virtual machines while minimizing execution cost and completion time. The generated solution meets quality of service (QoS) requirements and enhances the IaaS provider's credibility and economic benefit. The OABC algorithm is designed around an oppositional strategy, employed bees, onlooker bees, scout bees and a suitable fitness function for the corresponding task. The experimental results demonstrate that the proposed approach achieves better task scheduling results (minimum cost, time and energy) compared with other approaches.
Keywords: Cloud computing; Virtual machine; Load balancing; oppositional artificial bee colony; Time; Cost; Task scheduling.
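The oppositional strategy named above is usually opposition-based learning: for a candidate x in [lo, hi], its opposite is lo + hi − x, and evaluating both doubles the chance of starting near a good solution. This sketch shows only that initialization step with a toy fitness, not the full OABC scheduler:

```python
import random

def opposite(x, lo, hi):
    """Opposition-based learning: the opposite of x in [lo, hi] is lo + hi - x,
    applied per dimension."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def opposition_init(pop_size, lo, hi, fitness, rng=random):
    """Initialize a random population, add the opposite of each candidate,
    and keep the pop_size fittest (lower fitness = better)."""
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    pop += [opposite(x, lo, hi) for x in pop]
    return sorted(pop, key=fitness)[:pop_size]

# Toy fitness: squared distance to the point (1, 1); bounds [0, 4] per dimension.
fit = lambda x: sum((xi - 1) ** 2 for xi in x)
best = opposition_init(5, [0, 0], [4, 4], fit, rng=random.Random(0))
print(len(best))  # 5
```

In the full algorithm, the employed, onlooker and scout bee phases would then refine this opposition-seeded population against the scheduling fitness (cost, time, energy).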
COPY MOVE IMAGE FORGERY DETECTION USING CUCKOO SEARCH
by Tarun Kumar, Gourav Khurana
Abstract: Most people face the dilemma of whether to accept photographs as authentic, mainly in forensics, where images influence judgments. Research communities constantly provide methods to identify such forged images. Particular attention goes to cases where a region of an image is copied within the same image (copy move forgery, CMF). To detect these forged images, the Scale Invariant Feature Transform (SIFT) has proved its worth and robustness under various geometrical transformations. However, the framework needs to be optimized over the numerous parameters involved, because a wrong selection of values leads to wrong identifications. To solve this problem, a novel method named Cuckoo Search based Copy Move Forgery Detection (CSCMFD) is proposed for optimizing the parameter values in the SIFT framework. CSCMFD attains better results by automatically determining the values of the different parameters, compared with state-of-the-art work. Experimentation is performed on the Christlein et al. database and the MICC-F220 dataset. The experimental results prove that CSCMFD is able to capture small forged areas as well as regions that are difficult to identify with other methods.
Keywords: Cuckoo Search; Meta-heuristic Algorithm; Copy Move Forgery Detection; Region Duplication; Scale Invariant Features Transform.
An inventive and innovative approach to monitor warehouse with Drone and IoT
by Aswath G.I., Shriram Vasudevan, Sundaram RMD, Giri Dhararajan, Sowmiya Nagarajan
Abstract: In recent years, the technology growth in the sector of USVs/UAVs has been enormous. Among UAVs, quadcopters play a major role, and the availability of platforms and hardware-software has become abundant, offering more choices and enhanced performance. The use of UAVs for the delivery of goods, pizzas, etc. is well known. Building on this growth and the available facilities, we have attempted to use quadcopters to increase the efficiency of warehouse monitoring through a frugal and cost-effective approach. We propose using drones inside a warehouse for inventory monitoring. Through the literature survey, we understood that there are significant losses because of inefficient monitoring techniques, which mostly involve human effort. As manual verification is both time consuming and error prone, we have used drones, data analytics, Android and the IoT as the backbone to simplify the process. This approach is found to be affordable, accurate and viable. Our drone flies inside the warehouse, tracks the goods and components rack by rack, and gives an alert/update to the store manager through both a web interface and an Android application that we have developed. In this way, every individual box in the warehouse can be tracked, eliminating the chance of it being lost or untracked.
Keywords: Drones; Warehouse Inventory control; IoT controlled drone; RFID; NFC; Raspberry Pi; Android; Intelligent Warehouse Monitoring.
Collaborative Computing Methods With Enhanced Trust and Security Mechanisms
by Dileep Kumar Gopaluni
Abstract: Security and protection issues have traditionally been studied in the context of a single organization exercising control over its users' access to resources. In such a computing environment, security policies are defined and managed statically within the boundary of an organization and are often centrally controlled. However, developing large-scale Internet-based application systems presents new challenges. There is a need for a model, and a framework, for specifying and enforcing the agreements established by collaborating organizations regarding trust and security. This trust agreement is needed to build inter-organizational security policies that govern the communication, coordination, cooperation and resource sharing of the collaborating group of networks. In this paper, application-level, trust-based security technologies to support Internet-based collaborative systems are introduced. An efficient collaborative method is proposed that performs network creation and authorization of nodes in the network, and then maintains security and trust levels on the network so as to provide a secure path for data transmission among trusted nodes. In the proposed work, an Enhanced Key Management Scheme (EKMS) is introduced to enhance security in the network, and several constraints are proposed for identifying trusted nodes. The manuscript also presents an Intrusion Detection System (IDS) for identifying faults in the established network, for smooth and efficient collaborative computing. The proposed method uses the NS2 simulator for network creation and the MATLAB environment for analyzing the performance of the collaborative network.
Keywords: security; trust; collaborative computing; certificate authority; network authentication; intrusion detection system.
Development of Deep Intelligent System in Complex Domain for Human Recognition
by Swati Srivastava, Bipin K. Tripathi
Abstract: This paper aims to develop a deep intelligent system that can perform human recognition through proficient and compressed deep learning. The proposed Complex Deep Intelligent System (CDIS) incorporates multiple segments, including image representation in a lower-dimensional feature space, a Fused Fuzzy Distribution (FFD) and a Complex Hybrid Neural Classifier (CHNC). One advantage of the CHNC is its reduced computational complexity, because very few of the novel complex higher-order neurons are sufficient to recognize a human identity. Further, the proposed intelligent system uses the advantages of both supervised and unsupervised learning to enhance recognition rates. CDIS outperforms the best results reported in the literature on three benchmark biometric datasets (CASIA iris, Yale face and Indian face), with recognition accuracies of 99.8%, 100% and 98.0%, respectively.
Keywords: Fused fuzzy distribution (FFD); complex hybrid neural classifier (CHNC); biometric; deep architecture.
SECONDARY USER SELECTION FOR DISTORTED COGNITIVE RADIO NETWORK
by Suresh Babu
Abstract: The cognitive radio framework (CRF) offers a vast number of possibilities; routing, secondary user selection and path optimization are all parts of the CRF. Cognitive radio is a promising technology for supporting dynamic spectrum access, the approach that addresses the spectrum scarcity problem experienced in many countries. Secondary users (SUs) are permitted to utilize temporarily unused licensed spectrum without disturbing the primary users (PUs) of cognitive radio networks. However, advanced CRFs increase the energy cost of their cognitive functionalities, which is undesirable for battery-powered devices. The spectrum sensing problem has also gained new dimensions with cognitive radio systems, since radio spectrum is the most important resource in wireless communication. This paper focuses on the development of secure routing and secondary node selection in the event of distortion in the CRF. The proposed algorithm uses machine learning (ML) for secondary user selection and for detecting distortion in the CRF. The evaluation has been carried out in terms of throughput and energy consumed in transferring data from one end to the other.
Keywords: cognitive radio framework; alteration; machine learning; radio association; routing method.
Neighborhood Rough Set Approach With Biometric Application
by B. Lavanya, Ahmad Taher Azar, H. Hannah Inbarani
Abstract: This paper provides a new approach for human identification based on the Neighborhood Rough Set (NRS) algorithm, with a biometric application to ear recognition. The traditional rough set model can only evaluate categorical features; the neighborhood model evaluates both numerical and categorical features by assigning different thresholds to different classes of features. The feature vectors are obtained from ear images and an ear matching process is performed; matching is the process of ear identification, in which the extracted features are matched against the classes of ear images enrolled in the database. The NRS algorithm is developed in this work for feature matching. A set of 20 persons, with six images per person, is used for the experimental analysis. The experimental results illustrate the high accuracy of the NRS approach compared with other existing techniques. Results on the AMI (Mathematical Analysis of Images) and IIT (Indian Institute of Technology) ear databases illustrate that the proposed method is effective and feasible for ear recognition.
Keywords: Neighborhood Rough Set (NRS); Feature selection; Biometric; Classification.
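The neighborhood model described above replaces equivalence classes with distance-based neighborhoods so that numerical features can be handled. A minimal sketch of that core idea, on hypothetical 1-D data and with a simple Euclidean threshold rather than the paper's feature-class-specific thresholds:

```python
import math

def neighborhood(idx, data, delta):
    """Indices of samples within Euclidean distance delta of sample idx."""
    return [j for j, x in enumerate(data) if math.dist(data[idx], x) <= delta]

def neighborhood_consistent(idx, data, labels, delta):
    """A sample lies in the lower approximation of its class when every
    sample in its delta-neighborhood shares its label."""
    return all(labels[j] == labels[idx] for j in neighborhood(idx, data, delta))

# Hypothetical 1-D features for five samples of two classes.
data = [(0.0,), (0.1,), (0.2,), (1.0,), (1.1,)]
labels = ["a", "a", "a", "b", "b"]
print(neighborhood(0, data, 0.15))                     # [0, 1]
print(neighborhood_consistent(2, data, labels, 0.15))  # True
print(neighborhood_consistent(2, data, labels, 1.0))   # False
```

Features whose neighborhoods stay label-consistent discriminate well, which is the basis for both the feature selection and the matching step.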
TaskTracker Aware Scheduler with Resource Availability Control for Hadoop MapReduce
by Jisha S. Manjaly, T. Subbulakshmi
Abstract: In cloud computing, scheduling is the process of allocating the right resources to the desired machine within a certain time, and schedulers play a vital role in task assignment for Hadoop MapReduce. Hadoop normally has three schedulers: the FIFO scheduler, the Fair scheduler and the Capacity scheduler. There are use cases where jobs have an external dependency on services such as a database or a web service. The external services might be overloaded if the number of connections established from a particular machine exceeds a limit. Running tasks may fail in this scenario, and Hadoop must then rerun them in another TaskTracker slot. To address this issue with Hadoop's default schedulers, the TaskTracker Aware Scheduler has been introduced. This paper focuses on the resource availability control of the TaskTracker Aware Scheduler. The proposed scheduler will not allow a task to run and fail if the load of the TaskTracker has reached its threshold for the job. It also supports executing jobs on a specified list of TaskTrackers configured within the job. The performance of this scheduler may increase when it is aware of the status of the resources on the TaskTracker nodes. The main features of this scheduler are user controllability of jobs and configuration-based resource utilization control for task allocation. The performance comparison depends on the amount of data used in each job and the number of nodes in the cluster.
Keywords: Hadoop; Scheduling; MapReduce; HDFS; Fair scheduler.
Big data secure storing in cloud and privacy preserving mechanism for outsourced cloud data
by Dr B. Renuka
Abstract: Big data is a buzzword of this decade and has received tremendous attention from researchers because of its characteristics and features. Big data also poses major challenges, namely storage, processing and security. In any technology, security is the prime concern. In this manuscript, we examine the new security complications of big data and direct our attention toward effective, privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding security requirements, and present an efficient, privacy-preserving scheme for big data stored in the cloud.
Keywords: Privacy Preserving; Security; Big data; Cloud Computing; outsourced data.
Sensor and sensorless speed control of doubly fed induction machine
by Amel Bouchemha, Ahmad Taher Azar, Yousfi Laatra, Chouaib Souaidia, Djallel Dib
Abstract: In this paper, the behavior of a doubly fed induction machine operating in motor mode is studied. The proposed system consists of a doubly fed induction machine with the stator connected directly to the AC supply and the wound rotor supplied from a pulse-width modulation voltage inverter. To control torque and flux separately, despite the coupling between these two variables, indirect field-oriented control is used. Furthermore, sensor and sensorless control of the doubly fed induction machine are studied through a comparison of the proportional-integral controller and an observer method based on the Kalman filter. The simulation results obtained show the effectiveness, feasibility and robustness of the proposed control approach under different operating conditions.
Keywords: Doubly fed induction Machine; Indirect Field Oriented Control; PWM voltage-source inverter; PI controller; Sensor and Sensorless control; Kalman Filter.
A Novel Approach for Increased Transaction Security with Biometrics and One Time Password: A Complete Implementation
by Deveshwar H, Gowtham V, K.V. Shriram
Abstract: The advent of distributed and ubiquitous computing systems has resulted in an increase in digital financial transactions, consequently making security a primary concern. Here we address the problem by proposing the use of biometric sensors embedded in mobile systems to authenticate the user and generate a One Time Pin (OTP), as opposed to existing systems that rely on static, constant pins. This reduces the risk of spoofing and makes the user impervious to attacks on Automatic Teller Machine (ATM) centers. We propose the use of a central server that keeps track of requests and processes them. This ensures a wider scope for randomization of the pins, reducing predictability to almost zero.
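The idea of deriving a one-time pin from a biometric reading can be illustrated with a minimal, HOTP-style sketch. This is a hypothetical construction for illustration only; the hashing scheme, counter use and 6-digit format are assumptions, not the authors' exact protocol:

```python
import hashlib
import hmac
import struct

def biometric_otp(fingerprint_template: bytes, secret_key: bytes, counter: int) -> str:
    """Derive a 6-digit one-time pin from a fingerprint template,
    a server-shared secret and a per-transaction counter (HOTP-style)."""
    # Bind the pin to the biometric by hashing the template first
    template_digest = hashlib.sha256(fingerprint_template).digest()
    msg = template_digest + struct.pack(">Q", counter)
    mac = hmac.new(secret_key, msg, hashlib.sha256).digest()
    # Dynamic truncation as in RFC 4226, reduced to 6 decimal digits
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

otp = biometric_otp(b"demo-template", b"server-secret", counter=1)
print(otp)  # a 6-digit pin; a new counter yields a fresh pin
```

In such a scheme the central server advances the counter per transaction, so an intercepted pin is useless for the next request.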
Keywords: Biometrics; Fingerprint; One Time Pin (OTP); Mobile devices; Transactions; Debit/Credit cards.
Facial Expression Recognition using Local Binary Pattern and Modified Hidden Markov Model
by Mayur Rahul, Narendra Kohli, Rashi Agarwal
Abstract: Facial expression is the non-verbal communication used to send and receive inner emotional states. It plays an important role in interpersonal communication and a person's behaviour. Facial expression recognition can be applied in different fields such as human-computer interaction, surveillance systems, medicine, home security and credit card verification. An efficient and robust face descriptor is very important in FER systems. In this paper, we analyse the Local Binary Pattern (LBP) to represent facial features in face images. The LBP is calculated over all eight neighbours of each pixel to obtain a binary-coded number, so each expression is represented by a binary code. We combine this LBP with our new modified HMM, which acts as a classifier. The modified HMM consists of two layers: the bottom layer consists of atomic expressions and the upper layer of combinations of atomic expressions. Seven classes of known expressions, i.e. anger, disgust, fear, joy, sadness, surprise and neutral, are recognised with this approach. We have also tested our new framework using a confusion matrix, ROC curve, recognition performance, processing time and error rates, and found an overall accuracy of 85%.
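The basic eight-neighbour LBP coding described above can be sketched per pixel as follows; the clockwise neighbour ordering and the "greater than or equal to centre" convention are one common choice, not necessarily the authors' exact variant:

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code for the centre of a 3x3 patch:
    each neighbour >= centre contributes one bit."""
    c = patch[1][1]
    # neighbours clockwise starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for k, (i, j) in enumerate(offsets):
        if patch[i][j] >= c:
            code |= 1 << k
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # -> 241 (bits 0, 4, 5, 6, 7 set)
```

Histograms of these per-pixel codes over face regions are what typically serve as the feature vector fed to the classifier.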
Keywords: LBP; Machine learning; ROC curve; Processing time; Error rates; Binary code; Face descriptor.
Circular Local Search for Unconstrained Optimization Problems
by Mohammed A. El-Shorbagy, Aboul Ella Hassanien, Ahmad Taher Azar
Abstract: In this paper, a heuristic algorithm to solve unconstrained optimization problems (UOPs) in two dimensions is proposed. This algorithm, called circular local search (CLS), is an efficient local search. The algorithm starts with an arbitrarily chosen point in the search domain. Second, a radius of CLS is defined around the current search point, where any point in this region is feasible. Finally, by moving through an angle with a decaying step length, CLS moves from the current search point to a new base point. The radius and the angle of CLS are modified during the search. CLS is evaluated on many benchmark problems taken from the literature. The numerical results obtained show the robustness and effectiveness of the proposed method.
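The steps above can be sketched on a two-dimensional test function as follows; the angle discretization, shrink factor and stopping rule are assumptions for illustration, not the paper's exact parameter schedule:

```python
import math

def circular_local_search(f, start, radius=1.0, n_angles=12,
                          shrink=0.5, min_radius=1e-8, iters=200):
    """Scan circles of shrinking radius around the current base point,
    moving to any improving point found on the circle."""
    x, y = start
    best = f(x, y)
    r = radius
    for _ in range(iters):
        improved = False
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cx, cy = x + r * math.cos(theta), y + r * math.sin(theta)
            val = f(cx, cy)
            if val < best:            # new base point found on the circle
                x, y, best = cx, cy, val
                improved = True
        if not improved:
            r *= shrink               # no improvement: refine the radius
            if r < min_radius:
                break
    return (x, y), best

sphere = lambda x, y: x * x + y * y   # classic unconstrained benchmark
point, value = circular_local_search(sphere, start=(3.0, 4.0))
print(value)  # close to the global minimum 0 at (0, 0)
```

On convex benchmarks such as the sphere function this radius-halving scheme converges geometrically toward the minimizer.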
Keywords: Circular Local Search; Unconstrained Optimization; Global Optimization.
Usability Estimation of Component Based Software System using Adaptive Neuro Fuzzy Approach
by Jyoti Agarwal, Sanjay Kumar Dubey, Rajdev Tiwari
Abstract: Cost-effective development is the prime goal of software developers. To achieve this goal, Component Based Software Systems (CBSS) are nowadays developed. In CBSS, existing components are reused to develop a new software system, which increases the reusability of components. It also reduces the time and effort of software developers, which is cost effective. The success or failure of a software system depends on its usability, and usability can increase market revenue. So, to increase the acceptance rate of CBSS among users, it is important to evaluate the usability of a CBSS before the software is released. In this paper, the usability of CBSS is evaluated based on four input factors using two widely used soft computing techniques, i.e., Fuzzy Logic and Adaptive Neuro Fuzzy Logic. Experimental results obtained from both techniques are compared, and it is observed that the ANFIS approach reduces the error rate and provides more accurate results. This research work will help software developers estimate the usability of CBSS in a more efficient manner.
Keywords: Usability; Component; Software; Fuzzy; Adaptive Neuro Fuzzy; Membership Function.
THE RELATIONSHIP BETWEEN CULTURAL DIVERSITY AND CULTURAL INTELLIGENCE: A CROSS-CULTURAL RESEARCH
by Mazlum Çelik, Ahmet Keser, Ümit Körcük Yapıcı
Abstract: This research investigates the relationship between cultural diversity and cultural intelligence. Current studies on cultural values usually focus on the diversity of national characteristics. One newly emerging topic on the agenda is cultural intelligence and its possible impacts on the integration of different cultural environments, and this new agenda and its uncovered aspects motivated us to conduct such research. Within this frame, we examined hypotheses relating the four dimensions of culture to the four dimensions of cultural intelligence. We find that the individualism/collectivism dimension of culture causes a differentiation in meta-cognitive, cognitive and motivational intelligence, but not in behavioral cultural intelligence. The value of power distance causes a change in all dimensions of cultural intelligence except meta-cognitive intelligence, while the value of uncertainty avoidance causes a change in all dimensions except cognitive intelligence. The femininity/masculinity dimension of culture, on the other hand, has an impact only on cognitive intelligence. The results of the research may contribute to both academics and practitioners in the areas of intelligence and culture.
Keywords: Cultural intelligence; meta-cognitive intelligence; cognitive intelligence; motivational intelligence; behavioral intelligence; national culture.
EARLY DETECTION OF CHRONIC KIDNEY DISEASE USING DATA MINING METHODS
by Suresh Cse
Abstract: Early examination and detection of kidney disease is a fundamental issue in helping to stop progression to kidney failure. Data mining and analytics techniques can be used for predicting Chronic Kidney Disease (CKD) by utilizing historical patient data and diagnosis records. In this study, predictive analytics techniques such as Decision Trees, Logistic Regression, Naive Bayes, and Artificial Neural Networks are used for predicting CKD. Pre-processing of the data is performed to impute any missing data and to identify the attributes that should be considered in the detection models. The different predictive models are evaluated and compared on the basis of prediction accuracy. The study provides a decision-support tool that can help in the detection of CKD. With the promise of predictive analytics on big data and the use of machine learning algorithms, predicting the future is no longer a difficult task, especially for the health sector, which has seen great progress following the development of new computer technologies that have opened up diverse fields of research. Many efforts are being made to cope with the explosion of medical data on the one hand, and to extract useful knowledge from it, predict diseases and anticipate cures on the other. This has led researchers to apply technical advances such as big data analytics, predictive analytics, machine learning and learning algorithms in order to extract useful information and support decision making. In this paper, we also present a review of the growth of big data in healthcare systems.
Keywords: predictive analytics; machine learning; big data analytics; kidney failure; learning algorithms; diagnostics; data analysis; data mining.
Automatic Detection of Brain Cancer Using Cluster Evaluation and Image Processing Techniques
by Bobbillapati Suneetha, A. Jhansi Rani
Abstract: Brain cancer is detected by radiologists using MRI, which takes considerable time. Most brain tumor detection procedures give complex information about the tumor and fall short of giving a correct result on the presence of a tumor. Consequently, a formal consultation with a radiologist is necessary, which becomes a surplus expense in the event of a non-tumor case. The objective of this work is to develop a supporting system that assists the radiologist in obtaining the aforesaid result, reducing the time taken for brain tumor detection. The proposed procedure includes the following stages. First, the MRI (Magnetic Resonance Imaging) brain image is acquired from a brain MRI image dataset. In the second stage, the acquired MRI image is passed to a pre-processing stage, where film artifact labels are removed. In the third stage, high-frequency components are removed from the MRI image using various filtering techniques. Finally, the best optimization method, Ant Colony Optimization (ACO), is considered in this work. The proposed techniques reduce the time complexity of brain tumor detection and also improve accuracy. In this work, MRI brain images are taken as input, and end users can examine the MRI report through normal visualization without consulting a radiologist.
Keywords: image processing; enhancement; median and adaptive filters; automatic detection; filtering; brain cancer; cluster evaluation.
Question Answering System for Semantic Web: A Review
by Irphan Ali, Divakar Yadav, A.K. Sharma
Abstract: Querying and searching the huge, heterogeneous contents of the Web has become an increasingly challenging task with the contemporary growth of the semantic web. In order to make the vision of the semantic web a reality, user-friendly interfaces are needed that can help users query and search this huge and heterogeneous information space. Because of the complexity of natural language, question answering over the semantic web presents many challenges and thus research opportunities. This work presents a survey on question answering, which has emerged in recent years as an important tool to exploit the opportunities offered by semantic information on the Web. It provides a comprehensive view by analyzing the history of the question answering research field, from open domain question answering systems to the latest semantic question answering solutions, before discussing the latest developments in open, user-friendly interfaces for the semantic web. We explore the potential of question answering techniques to go beyond the current state of the art in supporting users in reusing and querying semantic web contents. In the initial part of the article, the classification of Question Answering Systems (QAS) into four categories, namely Natural Language Interfaces to Databases (NLIDB), Open Domain Question Answering over text, Semantic Ontology-Based Question Answering, and QA systems based on question types, is discussed in detail. In the later part of the work, the common challenges of question answering over the semantic web are identified, along with solutions and recommendations for future systems. Finally, the review concludes with an outlook on the techniques that need to be pursued to realize efficient retrieval of answers from the huge, heterogeneous and continuously evolving semantic information on the Web.
Keywords: Ontology; semantic web (SW); question answering systems (QASs); natural language processing (NLP); named entity recognition (NER).
A Frugal and Innovative, Intelligent Messaging Assistant: A Futuristic Approach
by Shriram Vasudevan, Rahul Ignatius, Himanshu Batra, Aswin Tekur
Abstract: For systems receiving large amounts of information and requiring prompt decision making, it is essential to facilitate quick sifting of the data (regardless of scale) and follow a specified course of action with the least possible response time, providing real-time decision-making capability. Our system allows users to handle large amounts of streaming data and adopt an appropriate course of action through a trained data model. The system assesses incoming text (or audio, which can be converted to text by a speech-to-text system) and decides the criticality (i.e., urgency) of the message based on pre-marked datasets and configurations preset by the user. It evaluates each received request and generates a score based on multiple parameters (such as geography, user history and time). A large score implies that the message is of high importance; a low score implies the message is trivial and may be discarded or placed in a spam folder, based on the user's specified preferences. Such a real-time decision system will be very useful in use cases where large amounts of information are received and a course of action must be decided instantly. Scalability of conventional systems is an issue, as deploying more personnel to facilitate decision making may not be feasible. We elaborate three use cases for our system: an enterprise scenario, a first-responders scenario and a personal use case (as a personal message/call assistant).
Keywords: Message filtering; real-time decision system; scalable message ranking system; scalable text classification system.
NETWORK DATABASE SECURITY WITH INTELLIGENT ACCESS SUPERVISION USING OUTLIER DETECTION TECHNIQUES
by Ch Mani
Abstract: With the rise of the Internet, security has become a critical concern. An intrusion detection system is used to enhance the security of networks by evaluating all inbound and outbound network activity and flagging suspicious patterns as possible intrusions. For two decades, many researchers have worked on intrusion detection systems. Of late, anomaly detection has gained prominence for its ability to detect novel attacks. Nowadays researchers focus on applying outlier detection techniques to anomaly detection because of their promising results in detecting genuine attacks and in reducing the false alarm rate. In this paper, anomaly detection for network database protection is analyzed and the results are examined. Anomaly detection in network systems is a continuously growing field with compelling applications in areas such as security, detection of network intrusions and malicious network routes, and the human sciences. In this manuscript, several important components are considered, namely a framework structure and a data-assumption-driven anomaly detection technique. We demonstrate their adequacy and results on real datasets with intelligent access supervision from the domains of social, spatial, and platform networks. Finally, we offer an outline for extending the examined anomaly detection techniques to the dynamic setting of graphs. The MATLAB environment was used for detecting outliers in network databases, and the performance of the proposed method is also examined.
Keywords: IDS; network anomaly; security; database protection; outlier identification.
Regression Test Case Prioritization Using Genetic Algorithm
by Anand Kumar Yadav, Anil Kumar Malviya
Abstract: On customers' demand, new requirements are implemented in software. The modified software may not work as it did earlier because of the newly added requirements, so it must be retested. Regression Testing (RT) is defined as the retesting of modified software. It is performed using the already developed test suite together with newly developed test cases. Large software systems have large test suites, and for a single requirement change, running all test cases is not beneficial for the development organization. To make RT more effective, the test suite is prioritized. Here we present a Genetic Algorithm (GA) for Test Case Prioritization (TCP). Different approaches are discussed and implemented using the APFD (Average Percentage of Faults Detected) metric. The discussed approaches are applied to a single problem and the results are shown in tabular form. The APFD metric is applied to all the discussed approaches to suggest which one is better. This paper uses a GA to arrange the test cases in prioritized order on the basis of the faults detected.
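The APFD metric used to compare orderings can be computed directly. A minimal sketch follows; the fault matrix is a hypothetical example, and the sketch assumes every fault is revealed by some test in the suite:

```python
def apfd(order, detects, n_faults):
    """Average Percentage of Faults Detected for a test ordering.

    order   -- list of test ids in execution order (n tests)
    detects -- mapping test id -> set of fault ids it reveals (m faults total)
    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)
    """
    n = len(order)
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, ()):
            first_pos.setdefault(fault, pos)   # keep earliest detection only
    tf_sum = sum(first_pos.values())
    return 1 - tf_sum / (n * n_faults) + 1 / (2 * n)

detects = {"T1": {"F1", "F2"}, "T3": {"F3"}, "T5": {"F4"}}
print(apfd(["T1", "T2", "T3", "T4", "T5"], detects, n_faults=4))  # 0.6
print(apfd(["T5", "T3", "T1", "T2", "T4"], detects, n_faults=4))  # 0.65
```

A GA-based prioritizer would use this value (or an estimate of it) as the fitness of each candidate ordering.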
Keywords: APFD; Genetic algorithm; Regression testing; Test cases prioritization.
Text Document Classification using a hybrid approach of ACOGA for feature selection
by Avjeet Singh, Anoj Kumar
Abstract: Categorization of text documents involves a great deal of information, or features. Feature selection is normally used to reduce the dimensionality of datasets that have a large number of attributes or features; accordingly, feature selection is considered one of the pivotal parts of text document classification. Many methodologies have been employed by various researchers to handle this issue. In this paper, we present an approach for the categorization of text documents using a kNN classifier that works on the best-selected features of the document. The feature selection process is optimized through a hybrid approach that combines the Ant Colony Optimization (ACO) algorithm and a Genetic Algorithm (GA), reducing the dimensionality of the feature space and consequently increasing performance.
Keywords: Text categorization; Ant Colony Optimization; Genetic Algorithm; Feature Selection.
Facial Expression Recognition using Local Multidirectional Score Pattern (LMSP) descriptor and Modified Hidden Markov Model
by Mayur Rahul, Narendra Kohli, Rashi Agarwal
Abstract: Facial expression recognition is a biometric software application that can recognize particular expressions in a digital image by comparing and analysing different patterns. Such software is popularly used for security purposes and is common in other applications such as medicine, home security, human-computer interfaces, credit card verification and surveillance systems. Recognizing faces becomes very difficult when facial expressions change frequently. In this paper, a two-tier modification of the normal Hidden Markov Model (HMM) is used to identify continuous effective facial expressions. The two-tier extension of the HMM consists of a bottom tier and an upper tier: the bottom tier represents the atomic facial expressions made by the mouth, eyes and nose separately, and the upper tier represents the combinations of these atomic expressions, such as neutral and sadness. In the HMM, the optimal state sequence, the observed sequence probability and the parameter estimates are calculated by the Viterbi, Forward and Baum-Welch methods, respectively. This paper introduces the Improved Modified Local Decision Based Unsymmetrical Trimmed Median Filter (IMLDBUTMF) for noise reduction, the Local Multidirectional Score Pattern (LMSP) for feature extraction and the modified Hidden Markov Model for classification. Experimental results show that the proposed method achieves an enhanced recognition rate of 85% on the publicly available JAFFE dataset.
Keywords: Facial Expression Recognition; Feature Extraction; Machine Learning; Hidden Markov Model; Baum-Welch method; Forward method.
Object Recognition based on Topology Preserving Skeleton Features
by Neelima Nalla
Abstract: Object recognition is a procedure for recognizing a particular object in a digital video or image. Appearance-based or feature-based techniques are used for object recognition. An object's skeleton is a useful cue for recognition, providing a structural representation that specifies the relationships among object parts; the shape's geometry and topology can be efficiently encoded. In this paper, an effective object recognition method is introduced using a multi-kernel SVM on skeletons derived from the topology preserving skeleton features method.
Keywords: skeleton; classification; object recognition; support vector machine; junction points.
Special Issue on: Intelligence in Communication Systems
Non-Dominated Sorting Particle Swarm Optimization (NSPSO) for Multi-Channel Cooperative Spectrum Sensing in Heterogeneous Green CRNs
by Senthil Kumar Babu, C.V.M.S.N. Pavan Kumar
Abstract: Exploiting white spaces in the radio spectrum requires fast, robust and accurate approaches for recognizing spectrum scarcity. To overcome these difficulties, new approaches are introduced to identify white spaces. Such approaches are used specifically in Cognitive Radio Networks (CRNs), where each node may execute cooperative spectrum sensing (CSS) based on energy detection. This paper focuses on online algorithms, employing Non-dominated Sorting Particle Swarm Optimization (NSPSO) to iteratively estimate the solution. At the micro level, cluster formation proceeds in two tasks. The first task is to choose Cluster Heads (CHs) with reliable reporting paths that minimize the errors between cluster members and the CHs. The second task is to determine the optimal sensing attributes, such as sensing periods and detection thresholds, of every Secondary User (SU), so as to minimize the power consumption of the entire cluster subject to Primary User (PU) protection and spectrum-usage constraints. Employing a Poisson-Beta-binomial distribution, a new, general K-out-of-N voting rule is introduced for heterogeneous CRNs that permits the Secondary Users to have different detection performances. A convex optimization framework is then used to minimize the intra-cluster power cost by jointly obtaining the optimal sensing periods and thresholds of the feature detectors under the new voting rule. Simulation results illustrate that the combination of the new CH selection and cooperation methodologies provides efficient performance with respect to energy efficiency and robustness against faults.
Keywords: Cooperative Spectrum Sensing (CSS); Non-dominated Sorting Particle Swarm Optimization (NSPSO); Poisson-Beta-binomial distribution; Cognitive Radio (CR); Clustering; Cluster Head (CH) selection; Energy; Heterogeneous Green Cognitive Radio Networks (HCRNs).
An Integrated and Secured Medical Data Framework for Effective Tele Health Applications
by G. Vallathan, K. Jayanthi
Abstract: The rapid progression of health care technologies and transmission strategies makes it possible to acquire, distribute and manage data over medical devices, and also improves conventional hospital information systems (HIS) to deliver effective health care services. When medical information is communicated over a wireless network, there is a high chance of the information being modified, so before examining the patient, the physician has to check the integrity of the received medical image. A futuristic tele-healthcare framework is proposed to ensure security, data integrity and quality while minimizing bandwidth requirements, offering complete healthcare services at reduced cost. The proposed framework integrates three modules, viz. steganography, compression and encryption. Initially, the Bhattacharya coefficient segmentation method is applied to brain tumour images, which are segregated into Region of Interest (ROI) and Non-Region of Interest (NROI) regions. Patient information and the hash value of the ROI are embedded into the NROI using an improved data embedding algorithm, and subsequently the SPIHT encoding technique is applied to the embedded image, representing the image at different scales and directions to accomplish a better compression ratio. The framework also validates the integrity of the ROI, ensures the robustness of the data embedded in the NROI and preserves the ROI perfectly for investigation. Finally, the whole image is encrypted with logistic map encryption in order to afford complete medical data security. Experimental outcomes demonstrate that the proposed framework offers robustness in terms of security, quality and reliability, which alleviates misdiagnosis at the physician's end in telemedicine applications.
Keywords: Bhattacharya coefficient segmentation; SHA-1; Contourlet Transform; SPIHT Encoding Technique; Improved EMD technique; Modified Chaotic Map Encryption.
Distributed Genetic Algorithm for Lifetime Coverage Optimization in Wireless Sensor Networks
by Ali Kadhum IDREES, Wathiq Laftah Al-Yaseen
Abstract: The coverage problem represents a research challenge in designing energy-efficient Wireless Sensor Networks (WSNs), in which both the coverage ratio and energy saving should be considered. In this paper, a protocol called Distributed Genetic Algorithm for Lifetime Coverage Optimization (DiGALCO) is proposed to preserve coverage and enhance the lifetime of a WSN. The DiGALCO protocol is based on two steps. First, the sensing field is logically divided into smaller uniform subfields. Second, the protocol is implemented at each sensor node in each subfield. To achieve our goal, the proposed protocol combines three energy-efficient schemes: virtual network subdivision into subfields, distributed cluster head selection in each subfield, followed by sensor activity scheduling based on Genetic Algorithm (GA) optimization performed by each elected cluster head. The DiGALCO protocol works in rounds; more precisely, a round consists of three phases: (i) discovery, (ii) cluster head selection, and (iii) GA decision and sensing. The decision process, which results in an activity scheduling vector, is carried out by the cluster head node by executing the GA to pick out a set of sensors that stay active for monitoring during the current sensing round. Every set is constructed to guarantee coverage at a low energy cost, enabling an improved WSN lifetime. Several experiments performed with the OMNeT++ network simulator show that, in comparison with other protocols, the DiGALCO protocol is capable of prolonging the lifetime of a WSN and gives enhanced coverage performance.
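The GA decision step, in which a cluster head picks a subset of sensors to stay active, can be sketched in miniature. The fitness weighting, the toy coverage model and all parameter values below are assumptions for illustration, not the protocol's exact formulation:

```python
import random

def fitness(mask, covers, n_points, energy_cost=0.1):
    """Reward covered field points, penalise each sensor kept active."""
    covered = set()
    for bit, cov in zip(mask, covers):
        if bit:
            covered |= cov
    return len(covered) / n_points - energy_cost * sum(mask)

def ga_schedule(covers, n_points, pop_size=20, gens=40, seed=1):
    """Evolve activity bit-vectors: elitism, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(covers)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, covers, n_points), reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                # one-point crossover
            i = rng.randrange(n)
            child[i] ^= 1                            # single-bit mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, covers, n_points))

# 4 sensors, 6 field points; sensors 0 and 2 together cover everything
covers = [{0, 1, 2}, {1, 2}, {3, 4, 5}, {4}]
best = ga_schedule(covers, n_points=6)
print(best, fitness(best, covers, 6))
```

Because the best individual always survives in the elite, the best fitness found is non-decreasing over generations.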
Keywords: Wireless Sensor Networks; Coverage; Network lifetime; Genetic Algorithm; Scheduling.
Investigations on Scheduling Algorithms in LTE-Advanced Networks with Carrier Aggregation
by Shaffath Husssain Shakir S, Rajesh A
Abstract: The driving force behind Long Term Evolution-Advanced (LTE-A) development was to provide high data rates in a cost-efficient way and to fulfil the requirements set by the International Telecommunication Union (ITU) for Fourth Generation (4G) systems. LTE-A is a fast-growing technology that supports a variety of applications such as audio/video conferencing, video streaming, Voice over LTE (VoLTE), Voice over Internet Protocol (VoIP), browsing and file transfer. Supporting multiple applications requires an effective and efficient Radio Resource Management (RRM) procedure, which plays a major role in maximizing resource utilization. With the Carrier Aggregation (CA) concepts included in the LTE-Advanced protocol, the complexity of RRM and data scheduling increases. The Third Generation Partnership Project (3GPP) does not define any specification for scheduling algorithms; hence they have become of special interest to vendors and service providers. In this paper, basic LTE-A concepts and a study of different downlink scheduling algorithms published in the literature are discussed, and performance evaluations of the different algorithms are carried out. The key issues to be considered by scheduling algorithms are also discussed.
Keywords: LTE-Advanced; resource allocation; radio resource management; scheduling.
Development and Analysis of Downlink Scheduling Algorithm in LTE System with Imperfect Channel Quality Indicator
by S. Fouziya Sulthana, R. Nakkeeran
Abstract: Long Term Evolution (LTE) is a broadband technology introduced by the Third Generation Partnership Project (3GPP) to support a variety of multimedia services. Scheduling in LTE plays an important role in meeting the system performance targets. In this paper, a new scheduling method is proposed that considers the achievable rate, based on the estimated channel condition of the user, in the priority metric calculation. It is highly uncertain that the scheduler will have a perfect Channel Quality Indicator (CQI) report, due to poor channel quality and the variation of the current channel condition relative to the received CQI report. So, the channel condition of the user is ascertained from imperfect CQI, where a Kalman filter is used to estimate the channel condition and effectively recover the correct CQI from the imperfect CQI to improve system performance. The proposed scheduler provides better performance in terms of throughput, delay and Packet Loss Rate (PLR) when compared with the previously proposed Service Based Scheduler (SBS) method.
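The channel-tracking idea can be illustrated with a scalar Kalman filter smoothing noisy CQI reports. This is a simplified sketch; the constant-state model and the noise variances are assumptions, not the paper's exact filter design:

```python
def kalman_track(measurements, q=0.0, r=1.0, x0=0.0, p0=1e6):
    """Scalar Kalman filter: estimate a (nearly) constant channel quality
    from a stream of noisy CQI reports."""
    x, p = x0, p0
    for z in measurements:
        p += q                     # predict: state assumed (nearly) constant
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with the innovation
        p *= (1 - k)
    return x

# Imperfect CQI reports scattered around a true quality of 10
reports = [11.0, 9.0, 10.5, 9.5, 10.0]
print(kalman_track(reports))  # close to 10.0
```

With `q = 0` and a large initial uncertainty `p0`, the estimate approaches the running average of the reports; a nonzero `q` would let the filter follow a slowly varying channel instead.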
Keywords: LTE; resource allocation; scheduling; channel quality indicator; Kalman filter; throughput; delay; packet loss rate.
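The abstract leaves the estimator unspecified beyond naming a Kalman filter; a minimal sketch of the idea, with a scalar state, illustrative noise variances `q` and `r`, and synthetic CQI reports (all hypothetical, not the paper's model), is:

```python
import random

def kalman_track(reports, q=0.01, r=1.0):
    """Scalar Kalman filter: smooth noisy CQI reports.
    q: process-noise variance (channel drift), r: report-noise variance."""
    x, p = reports[0], 1.0          # initial state estimate and its variance
    estimates = []
    for z in reports:
        p += q                      # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the new report
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(42)
true_cqi = 9.0
reports = [true_cqi + random.gauss(0, 1.0) for _ in range(200)]
est = kalman_track(reports)
```

With a small `q`/`r` ratio the filter smooths heavily, so the estimate hugs the underlying channel quality rather than each noisy report.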
Dynamic Service Oriented Resource Allocation system for Interworking Broadband Networks
by Kokila Subramanian, Sivaradje Gopalakrishnan
Abstract: Efficiently allocating the available radio resources to diverse traffic categories in a heterogeneous interworking network is the key issue of Radio Resource Management (RRM). In this paper, an advanced RRM method, the Dynamic Application Centric Resources Provisioning Algorithm (DAC-RP), is proposed to provide users with a dedicated set of suitable channels for real-time (RT) and non-real-time (NRT) services based on bandwidth conditions, maximizing capacity while satisfying QoS constraints. DAC-RP is realized over an Ultra Mobile Broadband (UMB)-Worldwide Interoperability for Microwave Access (WiMAX)-Wireless Local Area Network (WLAN) hybrid interworking network, linked over a novel Intelligent Internet Protocol (IIP) architecture. IIP is a unified architecture obtained by merging IMS Call Session Control Functions (CSCFs), application services, enhanced IMS and centralized services under a single layer with a common set of control and routing functions, to converge heterogeneous protocols, functional entities and applications. The competency of IIP and DAC-RP is validated by comparing the performance metrics of RT and NRT applications, simulated for an IIP-based UMB-WiMAX-WLAN network developed using OPNET, against the scenario using existing IMS and against a UMTS-WiMAX-WLAN network.
Keywords: Radio Resource Provisioning; Quality of Service; Broadband wireless network; absolute partition; Heterogeneous network; Call Control layer; Real-Time; Non-Real-Time Application; IP Multimedia Subsystem.
Image Denoising using Fast Non Local Means Filter and Multi-Thresholding with Harmony Search Algorithm for WSN
by Rekha Haridoss, Samundiswary Punniakodi
Abstract: Image denoising is one of the challenging tasks in Wireless Sensor Networks (WSNs). Several image denoising algorithms have been developed to obtain better denoised sensor images, but they fail to preserve the edges of the images because of spatial averaging. To overcome the loss of image edges, an attempt is made in this paper by incorporating filters such as the Fast Non Local Means Filter (FNLMF) and the high boost filter into the existing wavelet-thresholding-based denoising method. However, the denoised output and the computation time are significantly affected by the wavelet properties. Hence, instead of wavelet thresholding, this paper concentrates on Histogram-based Multi-Thresholding (HMT) as the major denoising step. Here, the corrupted image is first denoised in the FNLMF filtering stage. Then the edges and fine details of the denoised output are enhanced by applying HMT with a Harmony Search Algorithm (HSA) based optimization technique. Further, various images with different noise deviations are considered in order to evaluate the performance of the proposed method using MATLAB simulation. The simulation results indicate that the proposed method gives better results in terms of Peak Signal to Noise Ratio (PSNR), Image Quality Index (IQI) and computation time than the existing method.
Keywords: Denoising; Multi-Thresholding; Bilateral Filtering; Non Local Means Filter; Harmony Search Algorithm.
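The abstract does not give the HSA formulation; a sketch under common assumptions — Otsu-style between-class variance as the fitness and a basic harmony search over threshold vectors, with illustrative parameters `hms`, `hmcr` and `par` and a synthetic trimodal histogram — might look like:

```python
import random

def between_class_variance(hist, thresholds):
    """Otsu-style fitness: weighted variance of class means around the global mean."""
    total = sum(hist)
    mu_t = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]
    fitness = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi])
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / w
        fitness += (w / total) * (mu - mu_t) ** 2
    return fitness

def harmony_search(hist, k=2, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    rng = random.Random(seed)
    levels = len(hist)
    memory = [sorted(rng.sample(range(1, levels), k)) for _ in range(hms)]
    score = lambda h: between_class_variance(hist, h)
    for _ in range(iters):
        cand = []
        for j in range(k):
            if rng.random() < hmcr:                 # draw from harmony memory
                v = rng.choice(memory)[j]
                if rng.random() < par:              # pitch adjustment
                    v = min(levels - 1, max(1, v + rng.choice((-1, 1))))
            else:                                   # random consideration
                v = rng.randrange(1, levels)
            cand.append(v)
        cand = sorted(set(cand))
        if len(cand) < k:
            continue
        worst = min(memory, key=score)
        if score(cand) > score(worst):              # replace worst harmony
            memory[memory.index(worst)] = cand
    return max(memory, key=score)

# trimodal synthetic histogram: dark, mid and bright populations
hist = [0] * 256
for centre, spread, mass in ((40, 8, 400), (128, 8, 400), (210, 8, 400)):
    for i in range(centre - spread, centre + spread):
        hist[i] += mass
best = harmony_search(hist, k=2)
```

On this histogram the two recovered thresholds fall in the gaps between the three intensity populations.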
Reduction of jitter in 3D video by transmitting over multiple network Paths
by Vishwa Kiran, Raghuram Shivram, Thriveni J, Venugopal K R
Abstract: Stereoscopic video transmission in telemedicine applications requires data to be transferred with minimal jitter. It is not possible to send stereoscopic video at full HD rate over a single Internet Service Provider (ISP), as the bandwidth becomes a bottleneck and congestion can lead to packet drops, eventually causing jitter in the video. This can be circumvented by employing multiple ISPs to stream stereoscopic video over multiple Real-time Transport Protocol (RTP) sessions. Using multiple ISPs results in multiple network paths between the video streaming device and the video consumers. This approach effectively involves aggregating bandwidth, delay, jitter, packet loss and other qualitative network attributes across every ISP participating in the video transmission process. This article analyses, through simulation, the collective delay and jitter that affect the video reconstruction process, and concludes with an estimate of the minimum qualitative network parameters required.
Keywords: 3D Video; Bandwidth; Cloud Aggregation Server; Discrete Event Simulator; ISP; jitter; Multipath; Multiple ISPs; Simpy Simulator; Stereoscopic Video.
Trust-based Tuning of Bayesian Watchdog Intrusion Detection for Fast and Improved Detection of Black Hole Attacks in Mobile Ad hoc Networks
by Ruchi Makani, B.V.R. Reddy
Abstract: The watchdog is a well-known intrusion detection mechanism for Mobile Ad hoc Networks (MANETs), which not only monitors the traffic between peer nodes but also analyses the data to discern malicious activity; it has been widely adopted for detecting black-hole attacks. The watchdog, however, suffers from serious limitations, namely a high number of false positives/negatives. Integrating Bayesian filtering into the watchdog improves performance in terms of data throughput, speed of attack detection and accuracy in reporting malicious activity. The capability of the Bayesian watchdog can be further enhanced by effective tuning. This paper presents the concept of trust-based tuning of the Bayesian watchdog, a novel approach towards enhancing detection speed, eliminating false alarms and improving data throughput. The proposed trust-based tuning of the Bayesian watchdog has been evaluated through simulations, and encouraging results have been obtained to support the proposed approach.
Keywords: Bayesian; Intrusion Detection; MANET; Trust; Watchdog.
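The tuning rule itself is not given in the abstract; one common Bayesian trust model a watchdog can maintain per neighbour is a Beta-distribution update over observed forward/drop events, sketched here (the prior parameters are illustrative, not the paper's):

```python
def beta_trust(alpha=1.0, beta=1.0):
    """Bayesian (Beta-distribution) trust state for one neighbour.
    alpha counts observed forwards, beta counts observed drops."""
    state = {"a": alpha, "b": beta}
    def observe(forwarded):
        state["a" if forwarded else "b"] += 1
    def trust():
        return state["a"] / (state["a"] + state["b"])   # posterior mean
    return observe, trust

observe, trust = beta_trust()
for ok in [True] * 8 + [False] * 2:   # watchdog sees 8 forwards, 2 drops
    observe(ok)
```

A node whose trust falls below a threshold would be reported as a suspected black hole; the threshold itself is what a tuning scheme adjusts.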
Connectivity Analysis of Multihop Wireless Networks Using Route Distribution Model
by Abdullah Waqas
Abstract: Most management and routing protocols in multihop wireless networks rely on a strict connectivity condition among nodes. In this paper, we construct a mathematical framework to model the connectivity of wireless ad hoc and wireless sensor networks. We derive expressions for the distribution of the distance between nodes, which is used to calculate the transmission power required to establish a connection between nodes that are inside each other's communication circle. We then present a Route Distribution Model (RDM) to establish routes between a source and a destination that are outside each other's communication range. The results show that the transmission power required to establish a connected network depends on the number of nodes in the network as well as on the distribution of the nodes. The results are analysed for low, medium and high density networks with uniform and Poisson distributed nodes, and show that a connected network is achieved at relatively lower transmission power if nodes establish multihop routes to transmit their data towards the destination.
Keywords: Ad hoc networks; connectivity; minimum transmission range; node degree model; sensor networks.
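As a rough illustration of the claim that the required transmission range depends on node count and placement, a Monte Carlo sketch (the node counts, radii and trial counts below are arbitrary choices, not the paper's) can estimate the connectivity probability for uniformly placed nodes:

```python
import random

def connected(points, r):
    """True when linking all node pairs within distance r yields one component
    (union-find with path halving)."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = points[i], points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def connectivity_prob(n, r, trials=200, seed=7):
    """Fraction of random unit-square deployments that are fully connected."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        hits += connected(pts, r)
    return hits / trials

p_small = connectivity_prob(30, 0.15)
p_large = connectivity_prob(30, 0.60)
```

Sweeping `r` for a fixed `n` gives the familiar sharp threshold: below a critical range the network is almost never connected, above it almost always.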
Special Issue on: ICACCP-2017 Advanced Intelligent Computing and Communication Networks
A New Fuzzy and Gaussian Distribution Induced Two Directional Inverse FDA for Feature Extraction and Face Recognition
by Aniruddha Dey, Shiladitya Chowdhury, Jamuna Kanta Sing
Abstract: In face recognition research, the high dimensionality of the data is a crucial problem. Two-dimensional inverse Fisher's discriminant analysis (2DIFDA) is a popular method for binary class assignment. This paper proposes a new fuzzy and Gaussian distribution induced two-directional inverse Fisher's discriminant analysis (FGD-2DIFDA), which computes fuzzy and Gaussian distribution membership values and combines them with the training samples to obtain the class-wise means and the global mean. These fuzzy membership values are taken into account in the inter- and intra-class scatter matrices along the x- and y-axes. Moreover, the intra-class scatter matrices include the Gaussian probabilistic distribution information. Finally, eigenvalue problems are solved to find the optimal inverse projection vectors. These vectors are used to generate significant discriminant features and to solve the binary classification problem. The FGD-2DIFDA method has been evaluated on the AT&T (formerly known as ORL), UMIST and FERET face databases using a support vector machine (SVM). Simulation results demonstrate that the proposed FGD-2DIFDA method obtains higher recognition rates than some state-of-the-art face recognition methods.
Keywords: FGD-2DIFDA; projection vector; FKNN; SVM; Gaussian probability distribution; feature extraction.
Special Issue on: Intelligent Computation Systems
A Prominent Approach to Design Low Noise Amplifiers for 802.11 Wireless Receiver Frontends
by Sharath Rao, Suma Latha
Abstract: With the efforts of an innovative consortium, the Wireless Local Area Network (WLAN) became an alternative standard for wireless local area networking applications. Since its inception, data rates have grown from megabits per second (Mbps) up to gigabits per second. The ease of adaptability of the WLAN protocol and the need for high-data-rate communication systems motivated solutions that incorporate real-time wireless transceivers on network interface cards and FPGAs. Over the past decade, RFIC technology has become the hive of the communication industry, with the proliferation of the 802.11 a, g, n, ac and ad wireless protocols. Applications such as Ultra Wide Band (UWB), digital TV broadcasting and the Global Positioning System have found their way into mainstream gadgets. With design considerations such as interconnect parasitics, interconnects and coupling between devices, the passive elements must be precisely designed. In this article, the requirements of a novel Low Noise Amplifier (LNA) design for 2.4 GHz applications, together with noise modelling, are presented.
Keywords: 802.11; Local Area Network (LAN); Wireless LAN (WLAN); Wireless Sensor Networks (WSNs); Low Noise Amplifier (LNA); Generic Process Design Kit (GPDK); Berkeley Short-Channel IGFET Model version 3 (BSIMv3); Cascode LNA; Folded Cascode LNA; Differential Folded Cascode LNA.
Attribute Weight Gain Ratio (AWGR): New Distance Measure to select optimal features from multivalued attributes
by Prakash LNC, Anuradha K
Abstract: Identifying the appropriate features or attributes remains the most prominent stage of any information retrieval and knowledge discovery process. The process involves selecting specific features, and subsets of them, that hold the vital portion of the data. However, despite the prominence of this stage, most feature selection techniques opt for choosing mono-valued features. Accordingly, these techniques cannot be extended to multivalued attributes, which require capturing different features from the dataset in parallel. To enable optimal feature selection for multivalued attributes, this manuscript proposes a novel technique aimed at calculating the optimal combination of multivalued attribute entries with respect to clusters in unsupervised learning and classes in supervised learning. The proposal is a distance metric motivated by the traditional relevance-assessing metrics information gain and gain ratio. To analyse the performance of the proposed technique, an SVM classifier is trained on the optimal multivalued attribute features selected using the proposed distance metric and is then used to perform the classification process. Also, to evince the significance of the proposed distance metric with regard to clustering, the k-means clustering method with Attribute Weight Gain Ratio is executed on benchmark datasets. Simulation results depict the superior performance of the model for feature selection on multivalued attributes.
Keywords: Multiclass attributes; optimal feature; k-means clustering; transaction weight; mining techniques.
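The proposed AWGR metric itself is not specified in the abstract; for reference, the classical quantities it is motivated by — information gain and Quinlan's gain ratio for a mono-valued attribute — can be computed as follows (the toy attribute and labels are invented):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute, labels):
    """Information gain of `attribute` over `labels`, normalised by the
    attribute's own split information (gain ratio)."""
    n = len(labels)
    gain, split_info = entropy(labels), 0.0
    for value in set(attribute):
        subset = [l for a, l in zip(attribute, labels) if a == value]
        w = len(subset) / n
        gain -= w * entropy(subset)      # subtract conditional entropy
        split_info -= w * log2(w)
    return gain / split_info if split_info else 0.0

# toy example: the attribute separates the two classes perfectly
attr = ["x", "x", "y", "y"]
cls  = ["A", "A", "B", "B"]
```

A perfectly separating attribute scores 1.0; an attribute carrying no class information scores 0.0.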
Automated transformation of NL to OCL Constraints via SBVR
by Murali Mohanan
Abstract: This paper presents a neoteric method to automatically generate Object Constraint Language (OCL) constraints from natural language (NL) statements. In the Unified Modeling Language (UML) standards, OCL is used to check whether a model follows a given process or domain-specific heuristics, and also to improve the precision of model specifications. As constraints are the key components in the skeleton of business or software models, one has to write constraints to semantically complement business models or UML models. To support software practitioners in using OCL, we present a novel method. Its aim is to produce a framework in which the user of a UML tool can write constraints and pre/post conditions in a natural language such as English, and the framework converts such natural language expressions into equivalent OCL statements. Here the state of the art of two well-known technologies, Open Natural Language Processing (OpenNLP) and Semantics of Business Vocabulary and Rules (SBVR), is used. OpenNLP is used in a preprocessing phase to process the natural language statements; preprocessing includes sentence splitting, tokenization and parts-of-speech (POS) tagging. In the second phase, the transformation phase, SBVR is used to automatically transform the preprocessed natural language statements into SBVR specifications. SBVR has a major role in this transformation as it uses the syntax of natural language. The main aim of the research is to provide automated tool support for model processing tasks in UML models via SBVR, to model-transform the input SBVR specifications into OCL specifications as explained in the Model Driven Architecture (MDA).
Keywords: Natural language processing; SBVR; UML; OCL.
Study of Skin flow motion pattern using photoplethysmogram
by Neelamshobha Nirala
Abstract: Microcirculatory dysfunction is related to many diseases and occurs long before their clinical manifestation. We used the wavelet transform to study the microcirculatory regulatory mechanism in three different groups (18 diabetic subjects, 8 with peripheral arterial disease (PAD) and 14 healthy controls) using the toe photoplethysmogram (PPG), and 11 different features were derived. Compared to healthy subjects, we obtained a significant decrease in neurogenic (VNe: 286.41 vs. 125.29 (a.u.), p-value=0.000), myogenic (VMe: 281.55 vs. 29.02, p-value=0.000) and respiratory activity (VRe: 37.68 vs. 9.35, p-value=0.022) in the diabetic group, and a significant increase in cardiac activity (VCe: 19.69 vs. 33.89, p-value=0.007) in the PAD group. Results of linear multiple regression analysis showed a significant negative association of age and BMI with myogenic activity (p-value=0.002, r-value=0.173) and neurogenic activity (p-value=0.036, r-value=0.375), respectively. Our study showed that the PPG signal can be used as a non-invasive tool for studying vasomotion impairment in diabetic patients under resting conditions.
Keywords: Continuous Wavelet Transform; Laser Doppler flow meter; Photoplethysmogram; Microcirculation; Skin blood flow; Vasomotion.
Incorporating Security in Opportunistic Routing and Traffic Management in Opportunistic Sensor Networks
by Mohammed Salman Arafath, Khaleel Ur Rahman Khan, K.V.N. Sunitha
Abstract: Nowadays, wireless sensor or Opportunistic Sensor Networks (OSNs) and their technologies are mainly used for bridging the gap between the physical world and virtual electronics. Due to several constraints, such as limited storage, unreliable communication, higher communication latency, unattended operation of the network and limited sensor node power, OSNs face many problems in routing and traffic management. Security in OSNs concerns routing and data aggregation, which involve collaboration among the nodes of the network owing to its ad-hoc nature. To solve these problems, our proposed method combines a novel traffic management scheme with opportunistic secure routing on sensor networks. The method involves three main algorithms: Range Based Clustering (RBC) for clustering the sensor nodes, the Minimum Waiting Time Routing (MWR) algorithm for routing the data packets and, to provide secure communication when coalition and replica attacks are present in the network, a Light Weight Key Generation mechanism (LWKG). The proposed system thus imposes low computational complexity on the network, and we provide effective results in an experimental evaluation based on average packet reception ratio, replica detection ratio, connectivity and coalition attack resistance.
Keywords: Wireless Opportunistic Sensor Network Security; Clustering; Routing and Traffic Management; Traffic Management; Routing; OSN; Secure Communication; Minimum Waiting Time; MWT; Range Based Clustering; RBC; Light Weight Key Generation Mechanism (LWKG).
IMPROVING RELIABILITY IN MAS BY RULE BASED LOGIC AND CRYPTOGRAPHIC TECHNIQUES
by Prashant Kumar Mishra, Raghuraj Singh, Vibhash Yadav
Abstract: The Mobile Agent (MA) is a software paradigm with the capability to move from one host to another across a dynamic network environment and execute the task assigned by its user. Reliability is one of the most important issues in Mobile Agent based Systems (MAS) and has been addressed by many researchers. In this paper, we enhance reliability in MAS with the support of an Intrusion Detection System (IDS) and effective routing. Routing is an important process for finding an optimal path, which improves network throughput and reliability; a Collision-Free (CF) network graph exploration method is designed for identifying optimal paths for MAs. Security is a crucial aspect due to various malicious node activities and must be considered while estimating reliability. In our process, the HMAC-SHA1 algorithm is used for detecting malicious agents, and Rule Based Logic (RBL) is designed to identify malicious hosts. Further, we calculate the reliability of the MAS with respect to the status and condition of the network, which includes link connectivity and malicious node probability. Finally, we evaluate performance through simulation using various factors such as the size of the MAS, the malicious ratio and the number of mobile agents in the MAS. These factors are significant in showing the improved reliability of our proposed system.
Keywords: Mobile Agent; Reliability; Intrusion Detection System; Malicious activities; Routing.
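The abstract names HMAC-SHA1 for malicious-agent detection; a minimal integrity-tag sketch (the key, payload and function names below are hypothetical, not from the paper) is:

```python
import hmac
import hashlib

SECRET = b"shared-dispatcher-key"   # hypothetical key shared by dispatcher and host

def sign_agent(code: bytes) -> str:
    """Dispatcher side: tag the agent's code with HMAC-SHA1."""
    return hmac.new(SECRET, code, hashlib.sha1).hexdigest()

def verify_agent(code: bytes, tag: str) -> bool:
    """Host side: recompute the tag; a mismatch flags a tampered (malicious) agent."""
    return hmac.compare_digest(sign_agent(code), tag)

agent_code = b"collect(readings); migrate(next_host)"
tag = sign_agent(agent_code)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.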
THD Minimization Using Genetic Algorithm on Nine-Level Multilevel Inverters
by RAVIKUMAR SUKUMARAN
Abstract: Determining optimal switching angles in inverters is a significant research area and involves effective use of the DC sources to increase the efficiency of the power output. Maximum, high-quality power is achieved by reducing the total harmonic distortion (THD) present in the output waveform; hence, by computing appropriate switching angles for the inverter, the harmonics appearing in the output voltage can be reduced. The problem formulated in this paper lies in devising an appropriate switching strategy to maintain optimal power quality. A Genetic Algorithm (GA) optimization technique for computing the optimal switching angles of a nine-level multilevel inverter is investigated, the algorithm is formulated, and an experimental analysis is carried out. In three-phase multilevel inverters, the optimization algorithm is generally applied to the phase voltage of the inverter; this results in minimum THD in the phase voltage but not necessarily minimum line-to-line THD. In three-phase applications, the line-voltage harmonics are the main concern from the load point of view. In this paper, using the genetic algorithm and the sinusoidal PWM technique, a THD minimization process is applied to the line-to-line voltage of the inverter. The paper also presents a comparison between a seven-level cascaded and a nine-level diode-clamped multilevel inverter.
Keywords: Optimal minimization of THD (OMTHD); Genetic algorithm (GA); line-voltage THD; multilevel inverter; Phase Voltage THD; THD reduction.
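As a sketch of the kind of optimization described — not the paper's exact GA or its SPWM/line-voltage formulation — the following uses the standard quarter-wave staircase harmonic model for a four-angle (nine-level) inverter and a simple elitist GA; all GA parameters are illustrative:

```python
import math
import random

S = 4  # switching angles per quarter cycle for a nine-level inverter

def thd(angles, harmonics=range(3, 50, 2)):
    """THD of a quarter-wave-symmetric staircase waveform: the n-th harmonic
    is proportional to sum(cos(n*theta_i))/n; THD = rms(harmonics)/fundamental."""
    h1 = sum(math.cos(t) for t in angles)
    if h1 <= 1e-9:
        return float("inf")
    hs = [sum(math.cos(n * t) for t in angles) / n for n in harmonics]
    return math.sqrt(sum(h * h for h in hs)) / h1

def ga_minimize(pop=30, gens=80, seed=3):
    rng = random.Random(seed)
    rand_ind = lambda: sorted(rng.uniform(0.01, math.pi / 2 - 0.01) for _ in range(S))
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=thd)
        parents = population[: pop // 2]          # truncation selection + elitism
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # blend crossover
            child = [min(max(t + rng.gauss(0, 0.05), 0.01),      # gaussian mutation
                         math.pi / 2 - 0.01) for t in child]
            children.append(sorted(child))
        population = parents + children
    return min(population, key=thd)

best = ga_minimize()
```

Because the parent half of each generation is carried over unchanged, the best THD found never worsens across generations.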
Simulink Implementation of RLS Algorithm for Resilient Artifacts Removal in ECG Signal
by V. Tejaswi, Surendar Aravindhan
Abstract: Noise is an undesired signal that corrupts the desired signal, and it is a serious problem affecting signals during the transmission of information. In this work, two different noisy signals are considered: a speech signal and an ECG signal. The speech signals are taken from the NOIZEUS database and the ECG signals from the PhysioNet ECG database. The major noises affecting the ECG signal are baseline wander, electrode motion, power line interference and muscle artefact noise. Baseline wander is caused by patient movement, breathing and poor electrode contact with the skin; electrode motion noise occurs when an electrode moves away from the skin, leading to impedance changes and resulting in variations in the ECG; muscle artefact noise is caused by contraction of muscles other than the heart. A Simulink model is designed for cancelling the noise from the noisy signals. The adaptive algorithm chosen is the RLS algorithm because it has a faster convergence rate than algorithms such as LMS and NLMS. The Simulink model is tested for different cases to show that it works efficiently, and its performance can be observed from the mean square error obtained.
Keywords: Noise; ECG; Artifact; Baseline wander; Electrode motion; Muscle artifact; Power line interference; RLS filter; MSE.
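The paper's implementation is a Simulink model; the underlying RLS recursion it relies on can be sketched in plain Python as an adaptive noise canceller (the signals, filter order and forgetting factor below are illustrative, not taken from the paper):

```python
import math
import random

def rls_cancel(desired, reference, order=4, lam=0.99, delta=100.0):
    """RLS adaptive noise canceller: predict the noise in `desired` from
    `reference` and subtract it; returns the cleaned error signal."""
    w = [0.0] * order
    # inverse correlation matrix, initialised to delta*I (large => weak prior)
    P = [[delta if i == j else 0.0 for j in range(order)] for i in range(order)]
    out, buf = [], [0.0] * order
    for d, x in zip(desired, reference):
        buf = [x] + buf[:-1]                        # recent reference samples
        Px = [sum(P[i][j] * buf[j] for j in range(order)) for i in range(order)]
        g = lam + sum(buf[i] * Px[i] for i in range(order))
        k = [v / g for v in Px]                     # gain vector
        e = d - sum(w[i] * buf[i] for i in range(order))   # error = cleaned sample
        w = [w[i] + k[i] * e for i in range(order)]
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(order)]
             for i in range(order)]
        out.append(e)
    return out

random.seed(0)
n = 2000
clean = [math.sin(2 * math.pi * 0.01 * t) for t in range(n)]   # "ECG" stand-in
ref = [random.gauss(0, 1) for _ in range(n)]                   # noise source
noise = [0.8 * ref[t] - 0.4 * (ref[t - 1] if t else 0.0) for t in range(n)]
noisy = [c + v for c, v in zip(clean, noise)]
cleaned = rls_cancel(noisy, ref)
```

After the filter converges, the error output tracks the clean signal because the noise, but not the signal, is correlated with the reference input.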
Semantic Linkage of Source Content Dynamically with Virtual Documents using Wikipedia in Hadoop
by Priyadarshini R., LATHA TAMILSELVAN
Abstract: In recent years, the World Wide Web has grown enormously and become more multifaceted because of the rising number of users and emerging technologies. Web 1.0 was static and not interactive compared to the current web; Web 2.0 changed this by enabling users to create and share content and to collaborate. As a result, web content has grown enormously and relevant information retrieval has become a challenging task. To handle this problem, application of the semantic web is essential: traditional web content is augmented with a semantic repository, and semantic-web-based tools are being developed and researched to make information retrieval more efficient. This paper describes a semantic weblog that can locate the exact source content from reference URLs. As Wikipedia is a large encyclopedia containing many links and much meta-content, retrieving the original document content for a user-created document is difficult. The proposed work extracts the meta-content of each link using the wikiAPI and stores it in a MongoDB-based repository. The meaningful words of the meta-content are then compared semantically with the created virtual document with the help of the Alchemy API. The meta-content matching is implemented with the Jaccard similarity measure together with the Alchemy API. Finally, the exact source document for the virtual document is retrieved for future use. The Jaccard similarity measure, along with content categorization, yields accurate source URLs for selected content in the wiki. The time taken for static and dynamic retrieval is compared and plotted, as are the recall and precision of dynamic retrieval. An enhanced version of dynamic retrieval is also implemented using WordNet semantics, and the Jaccard similarity approach is compared with and analysed against various other methods.
Keywords: Virtual document; Semantic repository; Locate source content; Semantic article; WikiAPI.
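The Jaccard measure used for meta-content matching is straightforward to state: on word sets it is |A∩B|/|A∪B|. A minimal sketch (the sample texts are invented):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the word sets of two texts: |A∩B| / |A∪B|."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

meta = "semantic retrieval of source content from wikipedia links"
virtual = "retrieval of source content from reference links"
```

The candidate source URL whose meta-content scores highest against the virtual document would be returned as the match.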
A Novel System for Early Detection of Breast Cancer using Area and Entropy Features of Malignant Tumor
by Varalatchoumy M, Ravishankar M
Abstract: A computer-aided detection and classification system has been developed to detect breast cancer at an early stage by predicting the area and texture of malignant tumours. Noise removal and image enhancement are carried out in the preprocessing stage using an adaptive median filter and contrast-limited adaptive histogram equalization. An improved watershed segmentation technique with appropriate internal and external markers has proved to be an efficient approach for detecting the region of interest. The detected tumours are classified using a feedforward Artificial Neural Network trained on textural features. Area and entropy features extracted from malignant tumours aid early detection of breast cancer by categorizing malignant tumours as belonging to stage I or stage II. The overall efficiency of the system in identifying the stage of a malignant tumour is 92%, which is high compared to existing systems. Mammogram images from the Mammographic Image Analysis Society (MIAS) database were used for training the system, and its efficiency was tested using real-time hospital images.
Keywords: Novel CAD system; adaptive median filter; CLAHE; watershed segmentation; internal and external markers; textural features; ANN; area and entropy of malignant tumour; stage of breast cancer.
BREAST CANCER DIAGNOSIS USING A MINKOWSKI DISTANCE METHOD BASED ON MUTUAL INFORMATION AND GENETIC ALGORITHM
by Neha Vutakuri, Amineni Uma Maheswari
Abstract: Breast cancer is one of the most frequently diagnosed cancers and can lead to death in women worldwide. Diagnosing breast cancer is one of the most challenging tasks, as symptoms may only be present in later stages; early diagnosis may save lives. Various algorithms and techniques have been proposed to diagnose breast cancer. This paper presents MIGA (mutual information genetic algorithm) for diagnosing breast cancer. MIGA is a combination of two algorithms: mutual information (MI) and a genetic algorithm (GA). Among information theory approaches, MI is the best known and most widely used owing to its non-linearity, robustness and scalability. This process reduces computational complexity and improves the accuracy of the system. The method of this work is as follows: attributes of breast cancer patients were collected from the Breast Cancer Wisconsin Diagnostic dataset. Evolutionary computation offers a variety of techniques and approaches based on natural selection. A breast cancer diagnosis system was developed using a GA and a hybrid algorithm (genetic and K-nearest neighbour), and then using MI and GA, with the GA fitness calculated using the Minkowski distance method. Nine attributes from the dataset were included: clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli and mitoses. The obtained solutions were verified for three algorithms (GA, GA+KNN and MIGA). The results show that the highest accuracy (99%) was obtained with the GA-based MI features. The proposed MIGA algorithm reveals an enhancement in performance compared with the methods of previous works.
Keywords: Breast cancer diagnosis; Genetic algorithm; Mutual information; Breast Cancer Wisconsin Dataset; Minkowski distance method.
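The Minkowski distance used for the GA fitness, and its use inside a KNN vote, can be sketched as follows (the toy two-attribute samples and the order p=3 are illustrative, not the paper's data):

```python
def minkowski(u, v, p=3):
    """Minkowski distance: p=1 is Manhattan, p=2 is Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def knn_predict(train, query, k=3, p=3):
    """Majority vote among the k nearest training samples under Minkowski distance."""
    nearest = sorted(train, key=lambda s: minkowski(s[0], query, p))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# toy samples: label 0 = benign-like, 1 = malignant-like (illustrative only)
train = [((1, 1), 0), ((2, 1), 0), ((1, 2), 0),
         ((8, 9), 1), ((9, 8), 1), ((9, 9), 1)]
```

In a GA+KNN hybrid, each chromosome selects a feature subset, and the KNN accuracy on that subset (under the chosen distance) serves as the fitness.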
Threshold Algorithm for the cell formation problem
by RAGURAMAN T.R, SUDHAKARAPANDIAN RAMASAMY, KAMALAKANNAN RAMALINGAM
Abstract: Advanced, or smart, manufacturing has recently been gaining increasing attention from academia and from industry in small and medium enterprises (SMEs). Smart manufacturing for Industry 4.0, which integrates resources, information, materials and people into a cyber-physical system, has become the priority of many enterprises, especially small and medium-sized ones. The threshold accepting algorithm is useful for resolving the cell formation problem; here it is based on three perturbation techniques: pair-wise exchange, insertion and random insertion. This paper aims to maximize the grouping efficacy, as it is one of the best performance measures for the cell formation problem. The performance of the threshold accepting algorithm is evaluated on benchmark problems from the literature. The evaluation shows that all three perturbation schemes are able to solve the cell formation problem, and among them the random insertion perturbation scheme provides better solutions than the others.
Keywords: SMEs; Cellular Manufacturing System; Smart Manufacturing; Threshold Accepting Algorithm; Part Machine grouping; Grouping Efficacy.
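Grouping efficacy, the measure the paper maximizes, has a standard closed form; given a machine-part incidence matrix and cell assignments (the 4x4 example below is invented) it can be computed as:

```python
def grouping_efficacy(matrix, machine_cell, part_cell):
    """Grouping efficacy = (e - e_out) / (e + e_v), where e is the number of 1s
    in the machine-part incidence matrix, e_out counts 1s outside the diagonal
    blocks (exceptional elements) and e_v counts 0s inside them (voids)."""
    e = e_out = e_v = 0
    for i, row in enumerate(matrix):
        for j, a in enumerate(row):
            inside = machine_cell[i] == part_cell[j]
            e += a
            if a and not inside:
                e_out += 1
            if not a and inside:
                e_v += 1
    return (e - e_out) / (e + e_v)

# perfect two-cell block-diagonal case: efficacy is exactly 1
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
```

A threshold accepting search would perturb the cell assignments (pair-wise exchange, insertion, random insertion) and accept any move whose efficacy loss stays within the current threshold.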
Support Vector Machine based proactive fault-tolerant scheduling for Grid Computing Environment
by A.Shamila Ebenezer, Elijah Blessing Rajsingh, Baskaran Kaliaperumal
Abstract: To classify reliable resources accurately and perform proactive fault-tolerant scheduling in a grid computing environment, a combination of a Support Vector Machine (SVM) with Quantum-behaved Particle Swarm Optimization using a Gaussian distributed local attractor point (GAQPSO) is proposed in this paper. When tuned with appropriate kernel parameters, the SVM classifier provides high accuracy in reliable resource prediction. The higher diversity of GAQPSO compared to other variants of QPSO reduces the makespan of the schedule significantly. The performance of the SVM-GAQPSO scheduler is analysed in terms of makespan, reliability and accuracy. The empirical results show that the reliability of the SVM-GAQPSO scheduler is 14% higher than the average reliability of the compared algorithms. Also, the prediction accuracy of the SVM classifier is 92.55%, which is 37.2% higher than that of the Classification and Regression Trees (CART), Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN) and Random Forest (RF) algorithms.
Keywords: SVM classification algorithm; Particle Swarm Optimization; Proactive Fault tolerance; Failure Data Analytics; Grid Computing.
Automatic Classification for Preventing Duplication of Online Multimedia Data in Secure Cloud Infrastructure
by Suganya E, Aravindhraj N, Sountharrajan S, Rajan C
Abstract: Cloud computing provides various types of software and hardware services, collectively working together in different computational environments, to the end user through the internet. It is an emerging technology that provides a variety of applications, and end users can access their cloud services anywhere in the world, at any time, in a secured way. Nowadays, free online hosting websites produce a large amount of multimedia data, which may include images, audio, 2D videos and 3D videos. Some people use videos and audio in the distributed environment against the copyrights of the original content creators, creating a large revenue loss for them, and protecting this multimedia content is a challenging task. Cloud computing also has security issues such as authentication, privacy and data security. Our proposed system provides a highly scalable storage infrastructure in the cloud environment. It also protects multimedia content by creating a depth signature and classifying duplicated content using an SVM classifier. The proposed system avoids the security issues in multimedia content, increases revenue for the original content creator, and can work in both public and private cloud infrastructures.
Keywords: Video Watermarking; 3D videos; Cloud applications; Depth signatures; Cryptography; Security and Classification.
ENHANCEMENT OF ENTERPRISE RESOURCE PLANNING SYSTEM BY ANALYZING FEASIBILITY AND CRITICAL FACTORS
by Valanarasu R., Christy A
Abstract: ERP is a business strategy, with industry-domain-specific applications, for building the value network system of customers and service providers. The issue faced by previous systems is a lack of integration in the access to information and records: they suffer from missing information, isolated financial information systems and poorly integrated processes. A common platform is essential for integrating all the information for all terminals. The existing system provides an integrated platform for stakeholders to access the data, but the overall complexity increases because of potential management and environmental factors. A distributed module in the proposed work combines all the management processes and the key features, integrating the platforms and overcoming the limitations of the existing system. The ERP system is used to determine organizational needs and adaptation requirements, combining the key features lacking in previous systems; it is also used to analyse various adoption behaviours of organizations. This research work implements ERP in large- and small-scale SMEs and identifies solutions to practical problems in the real world.
Keywords: ERP; Risk management Analysis.
Best-case, worst-case and mean integral-square-errors for reduction of continuous interval systems
by Vinay Pratap Singh, Jagadish Kumar Bokam, Sugandh Pratap Singh
Abstract: In this brief, the best-case, worst-case and mean integral-square-errors (ISEs) are defined for model reduction of continuous interval systems. First, rational transfer functions of the interval system and of the model are obtained using the Kharitonov theorem; the different integral-square-errors are then derived with the help of the alpha and beta parameters obtained for these rational transfer functions. The ISEs are obtained for the impulse response and can be treated as measures of goodness for model reduction of continuous interval systems. The whole procedure is illustrated with a numerical example.
Keywords: Kharitonov theorem; integral-square-error; interval systems; Model reduction.
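To make the three error measures concrete, the sketch below (not the authors' code) numerically computes best-case, worst-case and mean ISE between the impulse responses of a first-order interval system a/(s+b) at its Kharitonov-style vertex combinations and a fixed reduced model; the interval bounds and model parameters are illustrative assumptions.

```python
import math

def impulse(a, b, t):
    # impulse response of the first-order system a/(s+b): a * exp(-b t)
    return a * math.exp(-b * t)

def ise(a1, b1, a2, b2, dt=1e-3, T=20.0):
    # Riemann-sum approximation of the integral-square-error
    # between two impulse responses over [0, T]
    total = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        e = impulse(a1, b1, t) - impulse(a2, b2, t)
        total += e * e * dt
    return total

# interval system: a in [1.0, 1.2], b in [0.9, 1.1]; fixed model a=1.1, b=1.0
vertices = [(a, b) for a in (1.0, 1.2) for b in (0.9, 1.1)]
errors = [ise(a, b, 1.1, 1.0) for (a, b) in vertices]
best, worst = min(errors), max(errors)
mean = sum(errors) / len(errors)
```

For first-order responses the closed form is a1²/(2b1) + a2²/(2b2) - 2·a1·a2/(b1+b2), which the numerical routine can be checked against.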
Residential Load Scheduling Considering Maximum Demand Using Binary Particle Swarm Optimization
by Remani Thankamma, JASMIN EA, Imthias Ahamed
Abstract: Demand response (DR) programs are gaining importance in the smart grid owing to continuously increasing energy demand. The primary objective of DR programs is to motivate consumers to change their power consumption patterns so as to limit the maximum demand. The success of residential DR programs largely depends on schedulable loads and the nature of the utility tariff. Binary particle swarm optimization (BPSO) is an effective tool for solving scheduling problems. In this paper, a BPSO-based solution to the residential load scheduling problem is presented through a case study, including the consumer demand constraints and the maximum demand (MD) limit specified by the utility. The objective of the algorithm is to automatically schedule the consumer's load so as to minimize the energy cost subject to various constraints. The performance of the algorithm is investigated by considering a domestic consumer with schedulable and non-schedulable appliances. Simulation experiments are conducted under different tariff and MD limit conditions. Test results show that the proposed method reduces both the domestic consumer's energy cost and the maximum demand on the system.
Keywords: Binary PSO; Load scheduling; Maximum Demand; Energy Management System; Demand Response.
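A minimal Python sketch of the kind of BPSO scheduler the abstract describes (not the authors' implementation): each particle is a binary appliance-by-hour schedule, and MD violations and unmet run-hour requirements are handled with penalty terms. The tariff, appliance ratings and penalty weights below are invented for illustration.

```python
import math, random

random.seed(1)
HOURS = 6
price = [2, 5, 8, 8, 5, 2]   # hypothetical tariff per kWh per hour
loads = [1.0, 1.5]           # appliance power ratings (kW), assumed
need  = [2, 3]               # required operating hours per appliance
MD    = 2.0                  # maximum-demand limit (kW), assumed

def cost(bits):
    # bits[a * HOURS + h] == 1 means appliance a runs in hour h
    c = 0.0
    for h in range(HOURS):
        demand = sum(loads[a] * bits[a * HOURS + h] for a in range(len(loads)))
        c += sum(price[h] * loads[a] * bits[a * HOURS + h] for a in range(len(loads)))
        if demand > MD:                    # penalise maximum-demand violation
            c += 100 * (demand - MD)
    for a in range(len(loads)):            # penalise unmet run-hour requirement
        run = sum(bits[a * HOURS + h] for h in range(HOURS))
        c += 50 * abs(run - need[a])
    return c

def bpso(n=20, iters=100):
    dim = len(loads) * HOURS
    pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))  # sigmoid transfer
                pos[i][d] = 1 if random.random() < prob else 0
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

best = bpso()
```

The sigmoid transfer function is what makes this the binary PSO variant: velocities are mapped to bit-flip probabilities rather than added to positions.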
SCALABLE INFORMATION RETRIEVAL SYSTEM IN SEMANTIC WEB BY QUERY EXPANSION AND ONTOLOGICAL BASED LSA RANKING SIMILARITY MEASUREMENT
by Uma Devi M, Meera Gandhi G
Abstract: In recent years, the Semantic Web has played a key role in intelligent information retrieval systems that provide actual semantic information in text documents. Several semantics-based research works have addressed information retrieval (IR). However, achieving scalable IR on the Semantic Web remains a challenging issue, facing problems of inaccurate, irrelevant and redundant information in large datasets. The semantic IR problem is addressed here by ontology-based semantic similarity measurement using natural language processing, concentrating mainly on ontological representation, query expansion, similarity measurement and ranking. Two novel algorithms for semantic similarity measurement, namely the syntactic correlation coefficient (SCC) and mapping-based k-nearest neighbour (M-KNN), are proposed, which improve the accuracy of relevant results. Ontology construction with a word sense disambiguation (WSD) algorithm for the document repository improves the conceptual relationships and reduces ambiguities in the ontology. The ontology construction also improves scalability by intensively analysing the semantic relationships and dynamically reconstructing the ontology when the document collection is updated. The query expansion process, based on pre-processing steps and semantic analysis, reduces the vocabulary mismatch problem by including additional relevant terms. Ranking is done with latent semantic analysis (LSA) after the semantic similarity analysis, which improves the retrieval results and reduces the complexity of relevance assessment. Finally, the performance of the system is analysed with respect to metrics such as processing time, F-measure, time complexity and space complexity, which are significant for assessing the results and the overall performance of the proposed system.
Keywords: Information Retrieval; Semantic Similarity; Ontology; K-Nearest Neighbor; Latent Semantic Analysis; Word Sense Disambiguation; SPARQL; Singular Value Decomposition.
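The similarity-and-ranking step of such a pipeline can be illustrated with a minimal term-frequency cosine ranker (a generic sketch only; the paper's SCC, M-KNN and LSA components are not reproduced here, and the documents and expansion term are invented):

```python
import math
from collections import Counter

def tf_vector(text):
    # bag-of-words term-frequency vector
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["the semantic web enables intelligent retrieval",
        "ontology construction for the semantic web",
        "stock market price forecasting"]
query = "semantic web retrieval"
expanded = query + " ontology"   # toy query expansion with one related term
ranked = sorted(docs,
                key=lambda d: cosine(tf_vector(expanded), tf_vector(d)),
                reverse=True)
```

Documents sharing no terms with the expanded query score zero and fall to the bottom of the ranking; LSA would additionally capture latent term co-occurrence beyond exact matches.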
Multi-objective Multi-Join Query Optimization using Modified Grey Wolf Optimization
by Deepak Kumar, Sushil Kumar, Rohit Bansal
Abstract: Nowadays the information retrieved by a query is often drawn from data located at different data sites across the world. In distributed database management systems (DDBMS), because data are partitioned or replicated among several sites, the relations required to answer a query may be stored at several data sites (DS). Many experimental results have shown that combining an optimal join order (OJO) with an optimal selection of relations in the query plan (QP) gives better results than several existing query optimization methodologies such as teacher-learner based optimization (TLBO) and genetic algorithms (GA). In this paper, an approach is proposed to compute the best optimal QP, answering the user query with minimal cost and minimum time, using a multi-objective constrained modified grey wolf optimization algorithm (MGWO). The proposed approach also aims to produce the OJO in order to reduce the dimensionality complexity of the QP.
Keywords: Data Site; Distributed Database Management Systems; Grey Wolf Optimization; Optimal Join Order; Teacher-Learner Based Optimization.
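For readers unfamiliar with the base metaheuristic, here is a minimal grey wolf optimizer (a greedy-acceptance variant on a toy continuous objective, not the paper's multi-objective MGWO or its discrete join-order encoding): candidate solutions are attracted toward the three best wolves (alpha, beta, delta) under a decaying exploration factor.

```python
import random

random.seed(7)

def sphere(x):
    # toy stand-in for a query-plan cost model
    return sum(v * v for v in x)

def gwo(cost, dim=4, wolves=15, iters=200, lo=-10.0, hi=10.0):
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=cost)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 - 2.0 * t / iters          # exploration factor decays 2 -> 0
        for i in range(wolves):
            new = []
            for d in range(dim):
                xs = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * random.random() - 1)
                    C = 2 * random.random()
                    # encircling step toward each leader
                    xs.append(leader[d] - A * abs(C * leader[d] - pack[i][d]))
                new.append(max(lo, min(hi, sum(xs) / 3)))
            if cost(new) < cost(pack[i]):  # greedy acceptance
                pack[i] = new
    return min(pack, key=cost)

best = gwo(sphere)
```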
2^n Factorial Design of Thermal Image Views for Detecting Correlation Coefficient Factors of Objects in Environmental Issues
by P. Mukilan, Mikias Hailu Kebede, N.M. Saravana Kumar
Abstract: Thermal imaging improves the visibility and clarity of objects in dark environments by detecting objects through a camera via infrared radiation. Objects emit infrared energy (heat or flame) according to their temperature; hotter objects emit more radiation and flame, and the spread of heat waves cannot easily be predicted. A thermal imager identifies heat with a sensor and detects very small temperature differences: it collects the infrared radiation from the captured scene, creates an angular position, and produces an information-based view as output. This research applies a 2^n factorial design to examine objects from various angles and to identify the correlation coefficient factors of images captured from the front, back, right, left, bottom and top views of an object. A 2^2 factorial design is worked out as an example, i.e., the X view and Y view (right-side view), with the help of the Yates method, to identify the affected areas of objects.
Keywords: thermal image; 2n factorial design; thermal image object view; correlation coefficient factors.
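The Yates method the abstract mentions reduces a 2^n factorial design to n passes of pairwise sums and differences. A minimal sketch (the response values are invented; in the paper they would be correlation scores for the different views):

```python
def yates(responses):
    # responses in standard order: (1), a, b, ab, ... for a 2^n design
    col = list(responses)
    n = len(col).bit_length() - 1            # number of factors
    for _ in range(n):
        sums  = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    total, contrasts = col[0], col[1:]
    # each effect = contrast / 2^(n-1) for a single-replicate design
    effects = [c / 2 ** (n - 1) for c in contrasts]
    return total, effects

# toy 2^2 example: factors X (view angle) and Y (side), hypothetical responses
total, effects = yates([10, 14, 12, 20])
```

For this example the main effects of X and Y are 6 and 4 and the XY interaction is 2, matching the hand-computed contrasts (a + ab - (1) - b)/2 and so on.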
Image Classification using higher order Statistics based ICA for NOAA Multispectral Satellite image
by Venkata Krishna Moorrthy, G.Umamaheswara Reddy
Abstract: The main objective of this research work is object classification using a reduced set of bands of a multispectral National Oceanic and Atmospheric Administration (NOAA) image, performed with higher-order-statistics-based independent component analysis (ICA) and a clustering method. Enhancement should not only improve the spatial resolution but also preserve the spectral information. ICA is used for dimensionality reduction of the multispectral image, and enhancement techniques improve the spectral and spatial values. The integrated composite image is classified using the K-means clustering algorithm, in which objects are separated by homogeneity feature levels of their pixel values. This unsupervised classification extracts land, water and clouds with good accuracy and kappa coefficient values compared with band calibration values of NDVI and surface temperature. The proposed output image gives less colour distortion, high resolution, improved visual quality and accurate information with good statistical parameter values.
Keywords: K-means clustering; maximum likelihood; Kappa Coefficient; Independent Component Analysis; higher order statistics.
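The K-means step used for the unsupervised classification can be sketched in a few lines (a generic illustration, not the authors' pipeline; the toy feature vectors below stand in for pixel values of land-, water- and cloud-like regions):

```python
def kmeans(points, k, iters=20):
    # deterministic seeding with the first k points (k-means++ seeding
    # would be more robust but is omitted for brevity)
    centres = [points[i] for i in range(k)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:   # assignment step: nearest centre by squared distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):   # update step: centroid of each cluster
            if cl:
                centres[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centres, clusters

# toy 2-D feature vectors forming three obvious groups
pts = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.95), (0.5, 0.1), (0.55, 0.15)]
centres, clusters = kmeans(pts, 3)
```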
Speech based Automatic Personality Trait Prediction Analysis
by Jayaraman Sangeetha, R. Brindha, S. Jothilakshmi
Abstract: Automatic personality perception is the prediction of the personality that others attribute to a person in a given situation. Its aim is to predict the personality of the speaker, as perceived by the listener, from nonverbal behaviour. Extroversion, Conscientiousness, Agreeableness, Neuroticism, and Openness are the speaker traits used for personality assessment. In this work, a speaker trait prediction approach for automatic personality assessment is proposed, based on modelling the relationship between the speech signal and personality traits. The experiments are performed on the SSPNet Speaker Personality Corpus. For speaker trait prediction, support vector machines (SVM), multilayer perceptrons (MLP), and instance-based k-nearest neighbour classifiers were analysed with multiple features. Various features were analysed to find suitable features for the different speaker traits. The analyses were conducted using pitch, formants, and Mel frequency cepstral coefficients (MFCC), and the results are presented. An accuracy of 100% was obtained for MFCC features with 19 coefficients.
Keywords: Personality traits; Automatic Personality Perception; Mel Frequency Cepstral Coefficients.
Factors influencing effectiveness of testing applications in cloud using regression testing : A statistical analysis
by NARASIMHA MURTHY MS, Suma V, Chandrappa CN, Shankar MM
Abstract: Innovation in the field of software engineering and allied technologies has paved the way for different strategies and approaches across the software development life cycle. It also guides the software testing process to improve by adopting them, so that the effectiveness and efficiency of software testing can be enhanced to a greater level. The emergence of cloud computing has changed the way the IT industry operates with regard to software testing. Since software testing plays a major role in producing good and outstanding software products with an enhanced customer satisfaction index (CSI), it is essential to focus on defect-related factors. Among the many testing techniques, regression testing has proven the most popular across software industries. Hence, this research contributes towards identifying, measuring and analyzing the influential factors of regression testing with respect to defects. The paper presents an empirical investigation through a case study comprising two testing environments, with projects taken from two different domains, in order to analyze the factors that significantly influence testing effectiveness. The investigation indicates that testing applications in the cloud has the added benefit of detecting defects of high severity, in addition to the other proven benefits of the cloud.
Keywords: Software Engineering; Software Testing; Cloud Computing; Regression Testing; Correlation; Performance; Customer Satisfaction Index.
Nature of Life and Survivability of Women and Men with Breast Cancer
by Smita Jhajharia
Abstract: Breast cancer stands out as one of the most widely recognised cancers affecting overall health and well-being in women, and as a prominent cause of cancer-related mortality. Obesity is a distinctive factor that has been associated with developing breast cancer, while weight loss after a breast cancer diagnosis has usually been associated with a reduction in the risk of breast cancer recurrence and mortality. The purpose of this study is to examine the barriers to, and sustainability of, an exercise intervention program offered at a cancer research foundation in India to overweight women newly diagnosed with breast cancer. Methods: the breast cancer database was queried for women newly diagnosed with breast cancer and with a body mass index (BMI) ≥ 25 kg/m2. Eligible patients took part in 18 sessions of the Moving Forever (MFL) exercise program. Surveys were administered, and the statistical analyses included descriptive and paired t-tests to summarise patient characteristics and evaluate changes over time. Results: of 46 patients, 24 declined, 22 agreed and 17 (77%) completed the study. The mean age was 62 years (range: 34-72) and the mean BMI was 31 kg/m2. After the intervention, there was a reduction in weight and BMI (p = 0.04), with an average weight reduction of 10 lbs. Participants reported greater enjoyment of exercise (p = 0.02) and diminished treatment-related pain (p = 0.05). These initial positive results were not maintained at 6 months and 1 year. Conclusions: the MFL intervention had a high rate of acceptance among overweight women newly diagnosed with breast cancer. These outcomes showed significant benefits of exercise immediately after cancer detection and highlight the importance of creating sustainable lifestyle interventions.
Interventions targeted at modifiable lifestyle factors in women with early-stage disease may provide a benefit equivalent to certain adjuvant systemic treatments. Lifestyle interventions supported by clinicians may therefore improve breast cancer survival outcomes.
Keywords: Breast; Cancer; Obesity; BMI.
Multi-layer Composite Shielding for Electromagnetic Radiation Protection
by Saradhi Vijay, Dharma Raju Ch
Abstract: Tremendous growth in the use of ICT solutions has led to increased levels of radiation, with a direct impact in terms of electromagnetic pollution. With rising radiation levels, there is a need for improved device designs that emit less radiation. Numerous research studies related to electromagnetic interference (EMI) shielding have been carried out in the past. PbO-SiO2 coating of polyboron in the laboratory environment has gained importance in EMI shielding, although its efficacy is yet to be proven. This study evaluates the proposed shielding solution, a multi-layer PbO-SiO2 coating using a polyboron solution that can lower the intensity of γ radiation, and finds that EMI shielding can be effectively improved with the proposed solution.
Keywords: Electromagnetic radiation; Polymer Combinations; mass attenuation coefficient; linear attenuation coefficient; PbO–SiO2; polyboron.
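The attenuation quantities named in the keywords relate through the Beer-Lambert law: the linear attenuation coefficient is the mass attenuation coefficient times density, and attenuations of stacked layers multiply. A small sketch with purely illustrative (not measured) coefficients for a hypothetical two-layer stack:

```python
import math

def transmitted_fraction(mu_mass, density, thickness_cm):
    # Beer-Lambert attenuation: I/I0 = exp(-mu_linear * x)
    # mu_mass in cm^2/g, density in g/cm^3, thickness in cm
    mu_linear = mu_mass * density   # linear attenuation coefficient (1/cm)
    return math.exp(-mu_linear * thickness_cm)

def multilayer_transmission(layers):
    # layers: list of (mu_mass, density, thickness); attenuations multiply
    frac = 1.0
    for mu, rho, x in layers:
        frac *= transmitted_fraction(mu, rho, x)
    return frac

# illustrative values only, standing in for a PbO-SiO2 / polyboron stack
stack = [(0.06, 6.0, 0.5), (0.08, 1.0, 1.0)]
remaining = multilayer_transmission(stack)
```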
Optimization of Training Samples in Recognition of Overlapping Speech and Identification of Speaker in a Two Speakers Situation
by Shanthi Therese, Dr. Chelpa Lingam
Abstract: Recognition of overlapping speech is still a challenging problem in the area of automatic speech recognition (ASR). In this paper, we propose a technique for overlapping speech recognition integrated with the firefly optimization technique. Overlapped segments are thoroughly analysed for the different dominant frequencies involved in the mixture. We created an audio splitting function to split the mixture signal into independent individual signals using the frequency information obtained by the frequency analyser. The split audio signals are converted into mel cepstral coefficients, and the intensity variations of the signals are indicated by their cepstrum. Phoneme density updated cepstrum (PDUC) features are extracted from both the spectrum feature analysis and the mel frequency cepstral coefficients (MFCC). Further, the firefly optimization technique is used for clustering and selecting the best relevant features, which are then used to separate the mixed sources in the input audio signal. The separated audio signals are passed to an HKL classifier to measure both speech and speaker accuracy. Datasets of the Speech Separation Challenge (SSC) corpus are used to evaluate the results. From the results, we conclude that a minimum of 20% to 30% of the samples is sufficient to achieve recognition accuracy above 90%. With a minimum number of training samples, our proposed PDUC feature extraction technique gives a satisfactory recognition rate.
Keywords: Overlapping Speech Recognition; Audio Split; Spectrum Feature Analysis; Clustering using Firefly Algorithm.
Intentional/Un-intentional Islanding Control Strategy for Distributed Generation System
by Nayana P. Shetty, Chakrasali R L.
Abstract: The integration of distributed generation (DG) into the utility grid is ever increasing, to provide reliable, secure and quality power. DG minimizes peak loads, power losses, loading on the distribution network and the reserve margin, and improves voltage regulation. Islanding is the situation where part of the utility grid is isolated from the rest of the system and continues to be powered by the DG; it can be intentional or unintentional. It is essential to detect islanding in order to ensure the safety of appliances as well as of personnel working on the network. In this paper, a DC-source-based DG is connected to the grid and operates in both grid-connected and islanded modes. When operating in grid-connected mode, the DG provides power to the utility grid at grid voltage and grid frequency. To provide seamless power to the local loads during grid maintenance, or if a grid fault occurs, the DG is intentionally islanded and made to work in islanded mode. Islanding is examined for this model in seamless operation. The model is developed in the Matlab/Simulink environment and results are obtained. The simulated results show that the controller and inverters operate without disturbance in seamless operation.
Keywords: DC Source; Distributed Generation; Inverter controller; Islanding.
Adapted Bucolic and Farming Region Pattern Classification Using Artificial Neural Networks for Remote Sensing Images
by P.S.Jagadeesh Kumar
Abstract: This paper explicates the use of multi-layer perceptron neural networks for the classification of rural and farming regions in remotely sensed images. Spectral remote sensing images were grouped into rural and agricultural taxonomies. Cumulative histograms, Voronoi tessellation and spatial pixel matrices extracted from the geographical information system were used for training the dataset as input to the multi-layer perceptron neural networks. The persistence of image texture features obtained using Voronoi tessellation proved to be principal for aerial image pattern classification of rural and farming regions.
Keywords: Agricultural Region; Cumulative Histogram; Geographical Information System; Multi-layer Perceptron Based Neural Networks; Pattern Classification; Remote Sensing Image; Voronoi Tessellation.
Special Issue on: CICBA-2017 Advances in Computational Intelligence
Machine Transliteration Using SVM and HMM
by Soma Chatterjee, Kamal Sarkar
Abstract: Name transliteration plays an important role in developing automatic machine translation and cross-lingual information retrieval systems because these systems cannot directly translate out-of-vocabulary (OOV) words. In this article, an SVM-based name transliteration approach is presented. This approach treats transliteration as a multi-class pattern classification problem, where the input is a source transliteration unit (a chunk of source graphemes) and the classes are the distinct transliteration units (chunks of target graphemes) in the target language. A study on using a hidden Markov model (HMM) to solve machine transliteration viewed as a sequence learning problem is also presented. Bengali-to-English forward and backward name transliteration are considered in this study. Our proposed methods are compared with an existing transliteration method that uses a modified version of the joint-source channel model. Evaluation shows that our proposed SVM-based model gives the best results, and our experiments also reveal that the performance of the HMM-based system is comparable with that of the SVM-based system.
Keywords: Name Transliteration; Support Vector Machines; Hidden Markov Model; Modified Joint-Source Channel Model; Machine Transliteration; Machine Translation.
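Sequence decoding with an HMM, as used in the study, is typically done with the Viterbi algorithm. A self-contained sketch on a toy two-state model (the states, observations and probabilities below are invented for illustration; in transliteration the hidden states would be target grapheme chunks and the observations source grapheme chunks):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # standard Viterbi decoding: most likely hidden state sequence for obs
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# hypothetical two-state model
states = ["A", "B"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}
decoded = viterbi(["x", "y", "y"], states, start_p, trans_p, emit_p)
```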
SPIDER-based Out-of-Order Execution Scheme for Ht-MPSOC
by Ramachandran Karthick, M. Sundararajan
Abstract: In this work, the influence of the dynamic task scheduling process is examined. Out-of-order (OoO) execution processes show remarkable promise for task-level parallelism in multiprocessor system-on-chip (MPSOC) designs. Superior performance can be attained with the help of a precise mapping of tasks onto the right processors. Hence, to obtain this performance, SPIDER-based task parallelism is presented in this work, and the software-related dynamic operations are illustrated on a heterogeneous MPSOC (Ht-MPSOC). SPIDER follows a cooperative population-based search modelled on the social behaviour of spider groupings, merging local search approaches with global ones. This implementation reduces the task management problem. The performance of the proposed design is compared with existing work in terms of power, area and speed.
Keywords: Multiprocessor SOC; SPIDER; task-level parallelism; Out-of-Order implementation.
Modified FPred-Apriori: Improving Function Prediction of Target Proteins from Essential Neighbors by Finding their Association with Relevant Functional Groups Using Apriori Algorithm
by Sovan Saha, Abhimanyu Prasad, Piyali Chatterjee, Subhadip Basu, Mita Nasipuri
Abstract: Drug treatments for various harmful diseases remain undiscoverable while the functions of the proteins responsible for those diseases are unannotated. Computational function annotation of unknown proteins is therefore a very challenging task that can be used to formulate biological hypotheses. Given the huge amounts of protein sequence data produced by high-throughput techniques, rapid annotation is only feasible with computational techniques rather than costly, time-consuming, low-throughput wet-lab experiments. Here a novel prediction method, Modified FPred-Apriori, is proposed, which aims to annotate proteins from their unannotated level-1 and annotated level-2 neighbours with low computational overhead. This is accomplished by efficiently selecting active target proteins at three threshold levels (high, medium and low) through the application of closure on the adjacency matrix formed between the protein sets, followed by the computation of protein connectivity scores. Once the target proteins are selected, their corresponding neighbourhood interaction networks are formed. Non-essential neighbours are pruned from each interaction network of the target set using a functional overlap score. The functional association of level-1 and level-2 neighbours in the pruned neighbourhood graph of the target set is investigated simultaneously, and only frequently occurring relevant functional groups, found using the Apriori algorithm, are considered when annotating the functions of the target set from the functions of their corresponding annotated level-2 neighbours. Modified FPred-Apriori is an improved version of FPred-Apriori: the incorporation of closure, the protein connectivity score, the function overlap score, and features such as reshuffling the neighbourhood proteins of the target set through the mutual crossover of a genetic algorithm make it unique in comparison to its predecessor. It achieves an overall precision, recall and F-score of 0.887, 0.708 and 0.787, respectively. A comprehensive comparison demonstrates that the proposed method outperforms the other competing methods.
Keywords: Protein-Protein Interaction Network; Apriori algorithm; Essential neighbor; Function Overlapping Score; Closeness Centrality Score; Target proteins.
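The Apriori step, mining frequently co-occurring functional groups from neighbour annotations, can be sketched as a level-wise search (a generic illustration with invented annotation labels, not the paper's implementation):

```python
def apriori(transactions, min_support):
    # level-wise search for frequently co-occurring items (functional groups)
    items = sorted({i for t in transactions for i in t})
    freq = {}
    current = [frozenset([i]) for i in items]
    k = 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(survivors)
        k += 1
        # candidate generation: unions of surviving itemsets that have size k
        current = [c for c in {a | b for a in survivors for b in survivors}
                   if len(c) == k]
    return freq

# toy annotation sets (hypothetical labels) for annotated level-2 neighbours
neighbours = [frozenset(t) for t in
              [{"binding", "transport"}, {"binding", "catalysis"},
               {"binding", "transport", "catalysis"}, {"transport"}]]
frequent = apriori(neighbours, min_support=2)
```

The anti-monotonicity of support (no superset of an infrequent set can be frequent) is what lets each level build only on the survivors of the previous one.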
An Efficient Pattern Matching Approach Using Double Measures of Correlation and Rank Reduction
by Himanshu Jaiswal, Dakshina Ranjan Kisku
Abstract: This paper discusses an efficient pattern matching approach based on K-NN (k-nearest neighbour) rank-order reduction and the Haar transform for detecting a pattern in a large scene image. To accomplish the task, the scene image is divided into a number of candidate windows, and both the input pattern and the candidate windows are characterised by the Haar transform. This characterisation determines distinctive coefficients known as Haar projection values (HPVs). To obtain a more relevant and useful representation of the HPVs, rectangle sums are computed, and the sum of absolute differences (SAD) correlation measure is then applied between the input pattern and the candidate windows. This increases the possibility of finding the object in the scene image before it is detected and localised. The proposed pattern matching approach is tested on the COIL-100 database, and the matching accuracy proves the efficacy of the proposed algorithm.
Keywords: Pattern Matching; Haar Transform; Sum of Absolute Difference; K-NN Approach.
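The SAD measure over sliding candidate windows, the core comparison step of the approach, can be illustrated directly in pixel space (a minimal sketch omitting the Haar/HPV stage; the tiny scene and pattern are made up):

```python
def sad(window, pattern):
    # sum of absolute differences between two equal-size patches
    return sum(abs(w - p)
               for row_w, row_p in zip(window, pattern)
               for w, p in zip(row_w, row_p))

def match(scene, pattern):
    # exhaustive sliding-window search for the lowest-SAD position
    ph, pw = len(pattern), len(pattern[0])
    best, best_pos = float("inf"), None
    for r in range(len(scene) - ph + 1):
        for c in range(len(scene[0]) - pw + 1):
            window = [row[c:c + pw] for row in scene[r:r + ph]]
            score = sad(window, pattern)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos, best

scene = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
pattern = [[9, 8],
           [7, 9]]
pos, score = match(scene, pattern)
```

In the paper, the Haar-domain representation and K-NN rank reduction serve to prune candidate windows before an exact comparison like this one.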
Gain Parameter and Dropout Based Fine Tuning of Deep Networks
by M. Arif Wani, Saduf Afzal
Abstract: Dealing with high-dimensional data is one of the major current challenges for many classical classification algorithms. While shallow architectures are well suited to small datasets with many features, they can be relatively inefficient at modelling variation in high-dimensional datasets. Deep architectures such as deep neural networks can express more complex relationships among variables than shallower ones. Training deep neural networks can involve two learning phases: unsupervised pretraining and supervised fine tuning. Unsupervised pretraining is used to learn the initial parameter values of deep networks, while supervised fine tuning improves upon what has been learned in the pretraining stage. The backpropagation algorithm can be used for supervised fine tuning of deep neural networks. In the field of shallow neural networks, however, researchers have used a number of modifications to the backpropagation algorithm that improve the performance of the trained model. One such variant is backpropagation with a gain parameter. In this paper we evaluate the use of the backpropagation-with-gain-parameter algorithm for fine tuning of deep networks. We further propose a modification in which this algorithm is integrated with the dropout technique, and evaluate its performance in fine tuning of deep networks. The effectiveness of fine tuning by the proposed technique is also compared with other variants of the backpropagation algorithm on benchmark datasets. The experimental results show that fine tuning of deep networks using the proposed technique yields promising results among all the studied methods on the tested datasets.
Keywords: Deep Learning; Deep Neural Networks; Fine Tuning; Drop Out Technique; Gain Parameter and Drop Out Technique.
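The two ingredients the paper combines can be sketched in isolation (a minimal illustration, not the authors' training code): a sigmoid with a gain parameter g has derivative g·f(x)·(1-f(x)), which enters the backpropagation delta rule, and inverted dropout zeroes units at random while rescaling the survivors.

```python
import math, random

def sigmoid_gain(x, gain):
    # sigmoid with gain parameter: larger gain gives a steeper slope
    return 1.0 / (1.0 + math.exp(-gain * x))

def sigmoid_gain_deriv(x, gain):
    # derivative used in the delta rule: g * f(x) * (1 - f(x))
    f = sigmoid_gain(x, gain)
    return gain * f * (1.0 - f)

def dropout(activations, rate, rng):
    # inverted dropout: zero each unit with probability `rate`,
    # rescale survivors so the expected activation is unchanged
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```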
A New Image Binarization Technique for Segmentation of Text from Digital Images
by Ranjit Ghoshal, Sayan Das, Aditya Saha
Abstract: Text segmentation in digital images is a prerequisite for many image analysis and interpretation tasks. In this article, we propose an effective binarization technique for text segmentation from digital images. The binarization creates numerous text as well as non-text connected components, from which the possible text components must then be separated. To distinguish between text and non-text components, a set of features is considered. During training, we use two feature files, text and non-text, prepared by us. K-nearest neighbour (K-NN) and support vector machine (SVM) classifiers are used for this two-class classification problem. The experiments are based on the ICDAR 2011 Born Digital dataset; our binarization technique is also applied to the publicly available Street View Text (SVT) dataset, DIBCO 2009, and the ICDAR 2011 Robust Reading Competition dataset. We succeed both in binarization and in separating text from non-text.
Keywords: Binarization; Connected Component; Feature extraction; K-NN classifier; SVM classifier; Text segmentation.
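As background for readers, a standard global binarization method, Otsu's threshold, which maximises between-class variance over the grey-level histogram, can be written compactly (this is a generic baseline, not necessarily the authors' technique; the toy pixel values mimic dark text on a bright background):

```python
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_b, w_b, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(levels):
        w_b += hist[t]                       # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                    # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (sum_all - sum_b) / w_f        # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy bimodal "document" pixels: dark text on a bright background
pixels = [20, 25, 22, 30, 200, 210, 205, 220, 215, 25]
t = otsu_threshold(pixels)
binary = [1 if p <= t else 0 for p in pixels]   # 1 = text (dark)
```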
Metaheuristics-based Routing Optimization, Balanced Workload Distribution and Security Strategy in an IoT Environment
by Subhrapratim Nath, Subir Kumar Sarkar
Abstract: The advancement of wireless sensor networks (WSN) has given society new degrees of freedom, particularly with respect to connectivity. The emergence of the Internet of Things (IoT), together with the onset and development of evolutionary computing, addresses the various needs of growing urbanisation. Applications built on cloud infrastructure are deployed for job segregation, rapid processing and utilisation, but with limited computational capability in high-density IoT environments. This paper strives to resolve some of the issues faced by the IoT paradigm with the help of a new metaheuristic hybrid data-routing algorithm based on the directed artificial bat algorithm (DABA) and particle swarm optimization (PSO), optimising connection issues such as real-time delay and network congestion. The approach also introduces the clustering concept together with fog computing to distribute the network stress, and optimises bandwidth usage using a dynamic graph partitioning algorithm. The paper further incorporates the advanced metaheuristic constricted PSO (C-PSO) into the hybrid algorithm, and proposes an effective security strategy to enhance the efficiency of the IoT environment.
Keywords: Internet of Things; Cloud computing; Fog Servers; Metaheuristics; Routing optimization; Directed Artificial Bat Algorithm; Particle Swarm Optimization; load balancing; Dynamic Graph Partitioning.
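The constricted PSO variant named in the abstract uses Clerc's constriction coefficient χ = 2/|2 - φ - √(φ² - 4φ)| with φ = c1 + c2 ≈ 4.1, which guarantees convergent velocity dynamics without explicit clamping. A minimal sketch on a toy objective (a stand-in for a routing cost such as delay plus congestion; not the paper's hybrid DABA/PSO algorithm):

```python
import math, random

random.seed(3)

def cpso(cost, dim=3, n=20, iters=150, lo=-5.0, hi=5.0):
    phi1 = phi2 = 2.05
    phi = phi1 + phi2
    # Clerc constriction coefficient, approximately 0.7298 for phi = 4.1
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = chi * (vel[i][d]
                                   + phi1 * random.random() * (pbest[i][d] - pos[i][d])
                                   + phi2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = cpso(lambda x: sum(v * v for v in x))   # toy routing-cost objective
```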
Special Issue on: Intelligent Solutions to Industrial Problems through Swarm Intelligence Algorithms
Swarm Intelligence for a Single Source Product Distribution
by Surafel Tilahun
Abstract: Distributing products under different constraint sets is one of the challenging tasks in industry. In this paper, the distribution of resources from a single source to different centres using a finite and small number of transportation vehicles is discussed. For a production centre, the source, there can be multiple centres to which products are transported by trucks, and each truck should carry an equal load in terms of the centres to which it distributes products. The problem can be seen as a multiple travelling salesman problem. The problem is formulated, and a custom-made swarm intelligence algorithm based on the prey predator algorithm is used to solve it. Three data sets of different categories, including cases where there is a restriction on travelling from one centre to another, are generated and used to test the algorithm. The data sets are also attached in the appendix for future research and comparison.
Keywords: Swarm intelligence; product distribution; travelling salesman problem; prey predator algorithm.
Prediction of Exchange Rate Using Improved Particle Swarm Optimized Radial Basis Function Networks
by Trilok Pandey, Satchidananda Dehuri, Alok Jagadev
Abstract: In this paper, a radial basis function neural network (RBFN) model is trained by canonical particle swarm optimization (PSO) and improved particle swarm optimization (IMPSO) algorithms to efficiently predict the exchange rate of the Indian rupee against the currencies of the G-7 countries for future days. We use two variants of PSO, canonical PSO and IMPSO, to optimize the parameters of the radial basis function neural network through learning from past exchange rate data. We consider forty-three countries' exchange rates to predict the Indian rupee against the G-7 countries: the forty-three exchange rates were collected and, based on their correlation analysis, a dataset was prepared to validate the proposed model. In addition, a fair comparison is carried out between the IMPSO-tuned RBFN and the canonical PSO-tuned RBFN with respect to the results obtained by varying the number of iterations for future-day prediction. From the experimental results, it is observed that the predictive performance of the IMPSO-tuned RBFN model with a higher number of iterations is promising vis-à-vis the canonical PSO-tuned RBFN.
Keywords: radial basis function network; neural network; radial basis function; canonical particle swarm optimization; improved particle swarm optimization model; exchange rate.
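The canonical PSO used above to tune the RBFN parameters can be illustrated in isolation. The sketch below is a minimal canonical PSO minimising a toy sphere function rather than RBFN prediction error; the swarm size, inertia weight and acceleration coefficients are illustrative defaults, not values from the paper.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO minimising f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def sphere(x):
    return sum(v * v for v in x)

best, best_val = pso(sphere, dim=3)
```

In the paper's setting, `f` would instead evaluate RBFN prediction error for a candidate parameter vector.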
Software Fault Prediction Using Hybrid Swarm Intelligent Cuckoo and Bat based k-means++ Clustering Technique
by Shruti Aggarwal, Paramvir Singh
Abstract: k-means and its various hybrids are widely used for software fault prediction. k-means++ is a hybrid clustering algorithm that overcomes the major issue of getting stuck at local optima. In this paper, swarm-intelligence-based hybrid techniques, namely the Cuckoo Algorithm, which improves the fitness function, and the Bat Algorithm, whose swarm moves with varying speeds, are applied to the k-means++ algorithm to design a new hybrid clustering technique. The KBat++ algorithm is a hybrid clustering technique with an increased convergence rate; it is further improved by applying the robust Cuckoo swarm intelligence technique to generate the CKBat++ algorithm, which is expected to produce optimized, high-quality clusters. Experiments are performed using open-source UCI and Promise datasets to implement and compare the performance of the designed algorithms with the KBat and k-means++ algorithms. Accuracy, cluster quality checks and CPU time are used for performance comparison. Results indicate that the designed technique, which is used to predict and categorize software faults into faulty and non-faulty clusters in order to avoid errors and increase software reliability, performs considerably better than its counterparts.
Keywords: Fault prediction; clustering; swarm intelligence; Cuckoo Algorithm; Bat Algorithm; k-means; k-means++ algorithm.
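The k-means++ seeding that underlies KBat++ and CKBat++ avoids poor local optima by spreading the initial centres: after the first uniform pick, each subsequent centre is chosen with probability proportional to its squared distance from the nearest centre already chosen. A minimal one-dimensional sketch of that seeding step (not the authors' hybrid) is:

```python
import random

def kmeanspp_seeds(points, k, seed=0):
    """k-means++ seeding over 1-D points: first centre uniform at
    random, each later centre with probability proportional to the
    squared distance to its nearest already-chosen centre."""
    rng = random.Random(seed)
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min((p - c) ** 2 for c in centres) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:            # roulette-wheel pick
                centres.append(p)
                break
    return centres

# two well-separated 1-D clusters
points = [0.1, 0.2, 0.3, 99.8, 99.9, 100.0]
seeds = kmeanspp_seeds(points, k=2)
```

With well-separated clusters, the squared-distance weighting makes the second seed land in the far cluster with very high probability, which is exactly what makes k-means++ robust to bad initialisation.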
A State-of-the-Art Neuro-Swarm Approach for Prediction of Software Reliability
by Ajit Behera, Ch. Sanjeev Dash, Mrutyunjaya Panda, Satchidananda Dehuri, Rajib Mall
Abstract: Software reliability is one of the foremost factors in assessing the quality of software. It is evident from past research that no single general model has been developed in software reliability research to predict the reliability of software. Therefore, many attempts are continuously being made from different corners to build a generic and widely acceptable model. In this paper, we propose a neuro-swarm software reliability model by combining the best attributes of the functional link artificial neural network (FLANN) and particle swarm optimization (PSO). FLANNs have been successfully employed to solve non-linear regression and time series problems; however, their application to software reliability is rare. This work elucidates the feasibility of using FLANNs to predict software reliability. PSO is used to tune the parameters of the FLANN during the development of the model. An extensive experimental study on a few benchmark software reliability datasets reveals that the FLANN model with particle swarm optimization (PSO-FLANN) gives better predictions than methods such as the Back-propagation Neural Network (BPNN), Dynamic Evolving Neuro-Fuzzy Inference System (DENFIS), Non-linear Ensemble Back Propagation Neural Network (NEBPNN) and canonical FLANN. Hence, the proposed model may be a suitable and promising alternative for predicting software reliability.
Keywords: Software reliability; Functional link artificial neural network; Particle swarm optimization; Normalized Root Mean Square Error.
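A FLANN has no hidden layer: each input is expanded through a fixed functional basis, and only the output-layer weights are learned. The sketch below uses a common trigonometric expansion and plain LMS weight updates in place of the paper's PSO tuning; the basis size, learning rate and toy target are illustrative choices, not the paper's setup.

```python
import math
import random

def flann_expand(x):
    """Trigonometric functional expansion: lift a scalar input into a
    fixed basis so a single linear output layer suffices."""
    return [1.0, x, math.sin(math.pi * x), math.cos(math.pi * x),
            math.sin(2 * math.pi * x), math.cos(2 * math.pi * x)]

def train_lms(samples, epochs=200, lr=0.05):
    """Train the linear output weights with least-mean-squares updates
    (stand-in here for the PSO tuning used in the paper)."""
    w = [0.0] * 6
    for _ in range(epochs):
        for x, t in samples:
            phi = flann_expand(x)
            y = sum(wi * pi for wi, pi in zip(w, phi))
            err = t - y
            w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
    return w

# toy target sin(pi*x) lies exactly in the basis, so LMS converges fast
data = [(x / 10.0, math.sin(math.pi * x / 10.0)) for x in range(-10, 11)]
w = train_lms(data)
```

In the PSO-FLANN model, the weight vector `w` would instead be the particle position optimised by the swarm against prediction error on the reliability data.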
Performance Enhancement in Optimal Location and Sizing of Wind and Solar Based Distributed Generation in Distribution System Using Communal Spider Optimization Algorithm
by Vijay Raviprabakaran
Abstract: The appropriate placement of Distributed Generation (DG) in the Distribution System (DS) remains a very challenging concern for attaining its maximum potential benefits. This paper proposes the application of the Communal Spider Optimization Algorithm (CSOA) to performance models of a Wind Turbine Unit (WTU) and a Photovoltaic (PV) array in the DG locating method. It also addresses power loss reduction and voltage stability improvement of the ring main distribution system. The paper demonstrates the effectiveness of the WTU and PV array performance models in DG placement. To study this effectiveness, the Voltage Stability Factor (VSF) is used in computing the voltage stability levels of the buses in the distribution system. The optimal placement and sizing of wind and solar based DGs is tested on 15- and 69-bus test systems. The results obtained reveal that the proposed approach produces superior results with less simulation time for the problem considered.
Keywords: Distributed Generation; Distributed System; Wind Turbine Unit; PV Array; Voltage Stability; Communal Spider Optimization Algorithm.
Genetic Algorithm based Rule Generation for Approximate Keyword Search
by Priya Mani, Kalpana R
Abstract: Many problems in natural language processing, data mining, information retrieval and bioinformatics can be formulated as string transformation. In string transformation, given an input string, the system generates the k most likely output strings corresponding to the input string. The existing rule-based method for approximate keyword search uses two processes, learning and generation, which improve both accuracy and efficiency, but not to the expected level. A new genetic algorithm based approach to rule generation is introduced to support pattern matching, and the generated rules are learned by applying a maximum likelihood function in order to build the rule dictionary. The given query keyword is searched in the database by constructing a tree-based index, the Aho-Corasick tree, and performing pattern matching against the rule dictionary, so that a document is retrieved even if it contains a misspelled string. The experimental results show improved accuracy and efficiency when compared to existing methods.
Keywords: Bigram Dice Coefficient; Rule dictionary; Divide and Conquer; Error Correction; Maximum a likelihood.
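The bigram Dice coefficient listed in the keywords is a standard string-similarity measure: twice the number of shared character bigrams divided by the total number of bigrams in both strings. A minimal sketch, independent of the paper's pipeline:

```python
def bigrams(s):
    """Set of character bigrams of a string."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Bigram Dice coefficient: 2|X & Y| / (|X| + |Y|) over bigram sets."""
    x, y = bigrams(a), bigrams(b)
    if not x and not y:          # two strings too short for bigrams
        return 1.0
    return 2 * len(x & y) / (len(x) + len(y))
```

For example, "night" and "nacht" share only the bigram "ht" out of four bigrams each, giving 2/8 = 0.25; such scores are a natural way to rank candidate corrections for a misspelled query keyword.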
Enhanced Differential Evolution with Information Preserving Selection Strategy
by Pravesh Kumar
Abstract: In the present paper, two modifications of the Differential Evolution (DE) algorithm are proposed. The first is a new selection technique called the Information Preserving Strategy (IPS), which tries to preserve and utilize important information about the search domain; the corresponding DE variant is called IpDE. The second is a new mutation strategy, giving the Enhanced DE algorithm (EDE). Furthermore, a new variant named IpEDE, combining EDE and IPS, is also proposed. The performance of the proposed variants IpDE, EDE and IpEDE is validated on a set of test problems, including standard test problems and selected test problems of CEC-2008. The algorithms are compared with some of the prominent DE variants, and it is observed that the proposed modifications help improve the performance of DE in terms of convergence rate and solution quality.
Keywords: Differential evolution; Information preserving selection; Mutation; Global Optimization.
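For context, the canonical DE/rand/1/bin baseline that IpDE, EDE and IpEDE modify can be sketched as follows. The IPS selection and the new mutation are defined only in the paper, so this sketch keeps classical greedy selection; F, CR and the population size are illustrative defaults.

```python
import random

def de(f, dim, pop_size=30, iters=300, F=0.5, CR=0.9, seed=0):
    """Canonical DE/rand/1/bin with greedy selection on [-5, 5]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantees one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            fv = f(trial)
            if fv <= fit[i]:                # greedy selection; IPS replaces this
                pop[i], fit[i] = trial, fv
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

def sphere(x):
    return sum(v * v for v in x)

_, best_val = de(sphere, dim=5)
```

The greedy `fv <= fit[i]` step discards the losing vector entirely; IPS, as described in the abstract, is an alternative selection rule that retains some of that discarded search-domain information.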
Improving the Flexible Neural Tree model with Swarm Intelligence
by Tomas Burianek, Sebastian Basterrech
Abstract: A type of feedforward neural network with a specific architecture was developed around ten years ago under the name Flexible Neural Tree (FNT). The model has two families of adjustable parameters: the parameters of the activation functions of the neurons, and the topology of the tree. The method uses meta-heuristic algorithms to find a good tree topology and the set of embedded parameters. The technique has been successfully applied to machine learning problems with time-series and sequential data. The canonical FNT was introduced with the radial basis function as the activation function of the neurons. In this article, we analyze the performance of the FNT when different types of activation functions are present in the tree. We present a comparative analysis among different types of neurons, studying the performance of the model with the following four types: Gaussian, hyperbolic tangent, the Fermi function and a linear variation of the Fermi function. The empirical analysis was made over a well-known simulated time-series benchmark and a real-world networking problem.
Keywords: Feedforward Neural Network; Swarm Intelligence; Flexible Neural Tree; Time-series modeling; Forecasting.
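Three of the four neuron types compared in the article have standard closed forms; the "linear variation of the Fermi function" is sketched here under an assumed clipped-linear definition, since the article itself specifies the exact variant.

```python
import math

def gaussian(x, a=0.0, b=1.0):
    """Radial-basis (Gaussian) neuron of the canonical FNT;
    a is the centre and b the width."""
    return math.exp(-((x - a) / b) ** 2)

def fermi(x):
    """Fermi (logistic) activation."""
    return 1.0 / (1.0 + math.exp(-x))

def linear_fermi(x):
    """ASSUMED clipped-linear variant of the Fermi function: the line
    through (0, 0.5) with the logistic's slope 1/4 at the origin,
    clipped to [0, 1]. The article's exact variant may differ."""
    return min(1.0, max(0.0, 0.25 * x + 0.5))

# the hyperbolic tangent neuron is simply math.tanh
```

All four map onto bounded outputs, which is what lets them be swapped freely at the internal nodes of the tree while the meta-heuristic searches the topology.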
A Novel Ant Colony Optimization Approach for Fretting Out Wormhole Attack in Mobile Ad Hoc Networks
by Ashutosh Sharma, Lokesh Tharani
Abstract: Ad hoc networks are becoming more prevalent day by day owing to their viability and inexpensive infrastructure. With their exponentially increasing popularity, ad hoc networks are becoming vulnerable to various attacks. The wormhole attack is the most severe threat to ad hoc networks: it can lure and bypass a large volume of traffic, enabling the attacker to receive and manipulate network traffic. This kind of access to network resources enables the invader to launch various other attacks. We propose a novel ant colony approach to detect the wormhole link by keeping a close watch on the pheromone value carried by each backward ant (BANT). The proposed technique not only detects the wormhole link but also improves network performance, demonstrating its edge over other techniques.
Keywords: Mobile ad hoc networks (Manets); AODV; Wormhole attack; Reverse Trip Time (RTT); Ant Colony Optimization; Network simulator; DelPHI.
Feature Extraction and Analysis of Overt and Covert EEG Signals with Speller Devices
by Mridu Sahu, Saumya Vishwal, Sneha Shukla
Abstract: Brain Computer Interfaces (BCIs) are used by motor neuron disease patients for communication. Hence, the use of a BCI greatly improves the performance of rehabilitative techniques for such patients. The brain's electroencephalogram (EEG) signals, detected and recorded by BCI devices, are used for analysis. Using the P300 component of the ERP to detect attention towards a character is a popular approach. The first and most successful speller device, named the P300 Speller, was proposed by Farwell and Donchin. The device consists of a 6x6 matrix of alphanumeric characters whose rows and columns are randomly intensified in order to generate a stimulus in the user's brain. This stimulus, elicited as a consequence of the rare intensification of the intended character, aids patients in communicating through an external device. This method, however, works well only under overt attention, and the low signal-to-noise ratio of EEG signals hinders the efficiency of communication. Hence, for efficient use of BCI even under covert attention, a new method was devised: the Geometric Speller, used for patients with locked-in syndrome. In our study, the two devices are compared on the basis of user experience as well as efficiency and accuracy of signal detection. A dataset containing both GeoSpeller and P300 Speller readings is selected and segmented. Statistical features are extracted from the data and used for analysis. A comparative study of the GeoSpeller and the P300 Speller under covert and overt attention is presented to analyze both methods and their usability.
Keywords: Geometric Speller; P300 Speller; overt and covert attention; Brain Computer Interfaces; Electroencephalography; Motor Neuron Disease.
An Ant Colony Algorithm for Recurrent Target Assignment for a Group of Control Objects
by Viacheslav Abrosimov
Abstract: In practice, a common problem is to perform periodic monitoring of a territory under uncertain conditions when the current situation evolves unpredictably. In this case, at each cycle the routes of control objects must take into account the situation at the previous cycle. We consider a target assignment approach for a group of control objects performing such monitoring. For each object in the group, the routing problem is solved using an ant colony algorithm that involves in explicit form the situation parameters defined at the previous cycle of monitoring.
Keywords: vehicle routing problem; ant colony algorithm; control object; recurrence; situation intensity.
Special Issue on: ICASISET 2018 M-learning Applications for Future Mobile Networks
Resource Allocation with Effective Multi-Attribute Combinational Double Auction in Cloud Using O-DBS
by N. Vijayaraj, T. Senthil Murugan
Abstract: The cloud provides different types of resources based on user requirements. A cloud user sends the requirements and the needed resources to the cloud service provider (CSP), and the CSP allocates different levels of service and resources based on the user's required resources and investment. Thus users and providers enter into an auction, one of the interesting resource allocation scenarios that has also become a major area of research in recent trends. The key behind the auction is a demand and supply scheme between cloud user requirements and the cloud service provider. A major part of the double auction is the Winner Determination Problem (WDP), since determining the winners is a hard combinatorial problem. The proposed scenario also concentrates on the Quality of Service (QoS) of cloud service providers and cloud users. As stated in the problem statement, an effective multi-attribute combinational double auction with an on-demand bidding strategy is proposed; a multi-round bidding strategy is imposed; and an effective Imposition of Penalty/Compensation Strategy (IP-CS), penalising failure to provide the promised QoS, is proposed. The IP-CS is simulated using CloudSim, a Java-based simulator for cloud environments. The results confirm the double auction with multi-round bidding and the Imposition of Penalty/Compensation Strategy (IP-CS).
Keywords: CSP; WDP; Double Auctioneer.
Evaluation of Quality Attributes of Software Design Patterns Using Association Rules
by Sudha Rajesh, A. Chandra Sekar
Abstract: For the past decade, many analysis methods have been used, but they analyze only the view of a single stakeholder. The analysis process enables us to guarantee the quality of the overall design; many limitations, however, lead to critical situations in the development process and to an excessive amount of time spent performing the complete analysis. This work proposes to collect all the stakeholders' decisions into a single structural view, which exposes the centric-view decision in an architectural design. It also deals with disagreements in vision by examining them against premium software quality characteristics. It captures structural features that demonstrate how the design addresses the concerns, requirements, intentions and objectives that stakeholders have for the architecture design. The proposed system also gives a better evaluation of efficient architecture decisions by analyzing the stakeholders' views against the finest quality features that meet the non-functional requirements. Consequently, enclosing the centric vision of the stakeholders within excellent software quality characteristics promises the best possible quality for the software architecture design. By evaluating the best quality attributes using association rules, the software design can be improved. Association rules are implemented using the R tool to obtain the support, confidence, lift and count values of each attribute along with its metric. The best quality attributes are chosen after managing the stakeholders' conflicts, since all systems are designed around stakeholders' concerns. The best design patterns can be evaluated by the importance of the patterns along with the attributes, and these patterns are evaluated by fixing the basic principles of patterns based on quality.
Keywords: Association Rules; Confidence; Design Patterns; Support; Quality Attributes.
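The support, confidence and lift measures computed above with the R tool have standard definitions, shown here over toy transactions (not the paper's data). For a rule A -> C: support = P(A and C), confidence = P(C | A), lift = confidence / P(C).

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift for the rule antecedent -> consequent,
    over a list of transactions represented as sets of items."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

transactions = [{"x", "y"}, {"x"}, {"y"}, {"x", "y"}]
support, confidence, lift = rule_metrics(transactions, {"x"}, {"y"})
```

Here {x} -> {y} has support 2/4 = 0.5, confidence 2/3, and lift (2/3)/(3/4) = 8/9; a lift below 1 indicates the antecedent and consequent co-occur less often than independence would predict.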
Disabled Person Emotion Recognition in EEG signal using deep neural network
by Pradeep Kumar M S, K. Suresh
Abstract: Emotion recognition is an important field of research in brain-computer interaction. As technology and the understanding of emotions advance, there are growing opportunities for automatic emotion recognition systems. The brain uses neuromuscular channels to communicate with and control its external environment; however, many disorders can disrupt these channels. Electroencephalogram (EEG) signals are low-amplitude signals generated in the human brain through the communication of many neurons. In this paper, hybrid feature extraction (Renyi and differential entropy) is performed on the acquired EEG signal in order to obtain feature subsets. The resulting feature values are given as input to a multi-objective classifier, a Deep Neural Network (DNN), for classifying disabled persons and their emotions. The proposed technique improves emotion prediction accuracy across different sessions. The emotion recognition classification model includes three states: positive, neutral and negative. In the experimental analysis, the proposed approach classifies disabled persons and their emotions in terms of specificity, sensitivity and accuracy. The experimental outcome shows that the proposed methodology improves emotion classification accuracy by up to 8.01% compared to the existing methods: k-nearest neighbors (KNN) and Support Vector Machine (SVM).
Keywords: Deep Neural Network; Differential Entropy; Discrete Wavelet; Electroencephalogram; Emotion Recognition; Renyi Entropy.
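The two entropies used as features above have standard closed forms: the differential entropy of a Gaussian signal is 0.5 * ln(2*pi*e*sigma^2), and the Renyi entropy of order alpha over a discrete distribution is ln(sum p_i^alpha) / (1 - alpha). A minimal sketch (the paper's band decomposition of the EEG is not reproduced here):

```python
import math

def differential_entropy(samples):
    """Differential entropy under a Gaussian assumption,
    h = 0.5 * ln(2*pi*e*sigma^2), a common closed form for
    band-filtered EEG features."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

def renyi_entropy(probs, alpha=2.0):
    """Renyi entropy of order alpha over a discrete distribution
    (alpha != 1; alpha -> 1 recovers Shannon entropy)."""
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)
```

For a zero-mean unit-variance signal the differential entropy is 0.5 * ln(2*pi*e) ~ 1.42 nats, and for a uniform distribution over four outcomes the order-2 Renyi entropy equals ln 4, matching Shannon entropy as expected for the uniform case.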
Reformed Time Slot Allocation for Data Intensive Clustered Industrial Wireless Sensor Networks Using Virtual Grid Structure with UWB
by Rama Sugavanam
Abstract: A sensor network is a special kind of wireless ad hoc network with distributed sensing and processing capability, predominantly used in critical industrial monitoring applications, where clustered nodes may need to transmit huge volumes of collected information to the base station through the cluster heads. Such data intensive networks produce excessive communication collisions and introduce significant deficits caused by data loss, retransmission and latency, which cannot be tolerated in an IWSN. A well-defined, appropriate scheduling methodology for data transmission is therefore desirable for mitigating the shortcomings of such networks. To this end, we present a method for allocating conflict-free time slots for transmitting data in clustered sensor networks, securing them from collision. This is achieved by partitioning the monitored cluster regions into equal-sized virtual grids, with Latin square properties supporting scheduling within individual grids over ultra-wideband. This decentralized protocol follows AODV routing, so each sensor is aware of the locations of its neighboring nodes and accommodates inherent topology changes. Moreover, only the nodes participating in transmission remain in the active state while the others become idle, thus conserving energy. This distributed MAC scheduling method is particularly helpful for spatial reuse of the communication channel, achieving scalability, efficiency and hence performance.
Keywords: MAC Scheduling; Performance enhancement; Virtual Grid; Channel Assignment.
Block Level Time Variant Dynamic Encryption Algorithm for Improved Cloud Security and Deduplication Using Block Level Topical Similarity
by S. Sabeetha Saraswathi, N. Malarvizhi
Abstract: The problem of security and memory management has been well studied, and a number of approaches have been discussed for data security in the cloud environment. However, these methods do not provide high security and introduce considerable data duplication in the cloud, which affects data management performance. To overcome these issues, an efficient block level encryption and deduplication scheme is presented in this paper. The method represents the data as a number of blocks and stores them in cloud storage. For each block, dynamic encryption/decryption keys are generated in each time window. The user is validated with a distinct public/private key approach. For data access, the algorithm generates different keys for different blocks, which restricts user access. Based on the generated key, the data is encrypted by the system and decrypted by the cloud user upon access. The method maintains a block key table containing the keys to be used for a particular time window. The method produces highly efficient results on data security and improves the performance of data management.
Keywords: Cloud Computing; Data Security; Block level encryption; Time variant Keys; Deduplication; BLTS.
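One illustrative way to realise time-variant per-block keys is to derive each key from a master secret, the block index and the time window with an HMAC; the paper's own key generation may differ, and the helper names below are hypothetical. A content hash is also shown, as the usual building block for block-level deduplication.

```python
import hashlib
import hmac

def block_key(master_key: bytes, block_id: int, window: int) -> bytes:
    """Hypothetical derivation of a distinct key per (block, time window)
    from a master secret via HMAC-SHA256. Deterministic, so both sides
    of a transfer can regenerate the key for the current window."""
    msg = f"block:{block_id}|window:{window}".encode()
    return hmac.new(master_key, msg, hashlib.sha256).digest()

def block_fingerprint(block: bytes) -> str:
    """Content hash of a data block: identical blocks hash identically,
    which lets the store detect duplicates before keeping a second copy."""
    return hashlib.sha256(block).hexdigest()
```

Because the derivation is deterministic, the block key table in the abstract could simply cache these values for the active window, and rotating the window automatically invalidates old keys.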
Improving the Accuracy of Item Recommendations by Combining Collaborative and Content Based Recommendations: A Hybrid Approach
by Desabandhu Parasuraman
Abstract: Recommender systems facilitate users by providing ample information about the items or products they are interested in. Users would not be aware of item details without the help of recommender systems, owing to the size of the information available on the web. Collaborative filtering and content based filtering are the two traditional filtering techniques of recommender systems. Both techniques have their advantages and, certainly, their disadvantages too. These can be addressed by combining the two filtering techniques to improve the accuracy of recommendations, which leads to a hybrid recommender system. This paper presents a novel hybrid approach that combines dynamic item based collaborative filtering with content based filtering. Time variance and machine learning algorithms are applied to the filtering techniques to overcome the problems in recommendation. The approach is demonstrated using the MovieLens datasets to confirm the effectiveness of the proposed hybrid system.
Keywords: Recommender Systems (RS); Collaborative Filtering (CF); Content Based Filtering (CBF); Hybrid Recommender System (HRS); Machine Learning (ML).
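A common way to combine the two filtering techniques is a weighted blend of an item-item rating similarity (the collaborative part) and an item metadata similarity (the content part). The sketch below uses cosine over co-ratings and Jaccard over genre sets, with an assumed mixing weight `alpha`; it illustrates the hybrid idea, not the paper's exact method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors
    (dicts mapping user -> rating)."""
    shared = set(u) & set(v)
    num = sum(u[k] * v[k] for k in shared)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def jaccard(a, b):
    """Jaccard similarity between two item metadata sets (e.g. genres)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def hybrid_similarity(ratings_i, ratings_j, genres_i, genres_j, alpha=0.6):
    """Weighted blend of collaborative and content similarity;
    alpha is an assumed mixing weight, not a value from the paper."""
    return (alpha * cosine(ratings_i, ratings_j)
            + (1 - alpha) * jaccard(genres_i, genres_j))

sim = hybrid_similarity({"u1": 5.0, "u2": 3.0}, {"u1": 5.0, "u2": 3.0},
                        {"drama"}, {"drama"})
```

The content term keeps similarities meaningful for cold-start items with few co-ratings, which is the classic weakness of the purely collaborative part.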
Hybrid Mechanism for Medical Data Classification
by Ahelam Tikotikar, Mallikarjun .M. Kodabagi
Abstract: In recent years, medical data mining has become a hot topic of research. Earlier, extraction was done manually based on medical expert advice, a time-consuming process. To overcome this drawback, machine learning techniques emerged as a new era in the medical field, automatically extracting knowledge from large datasets. In this paper, a fuzzy approach for the classification of medical data is presented. A high number of irrelevant features decreases the accuracy of medical data classification. Initially, the medical data is processed to remove missing values, and then the Orthogonal Locality Preserving Projection (OLPP) algorithm is employed for feature dimension reduction. Fuzzy rules are then generated and optimized by the Binary Cuckoo Search Algorithm. The medical data classification is carried out using the generated fuzzy rules, and the best rule is selected based on the highest accuracy using the decision tree classifier. The performance of the proposed technique has been evaluated on various University of California Irvine (UCI) datasets: Cleveland, Hungarian, Mammographic Masses, PID and Switzerland. After observation and comparison, the method achieves its highest accuracy of 97.3% for the PID dataset, whereas the existing method provides 89.5%; both the Mammographic Masses and PID datasets report a sensitivity of 0.97, and the highest specificity obtained is 0.95 for the PID dataset. Based on parameters such as accuracy, sensitivity and specificity, the proposed method using the OLPP algorithm performs well compared to the existing methods.
Keywords: Binary Cuckoo Search Algorithm; Data Mining; Decision Tree Classifier; Fuzzy System; Orthogonal Locality Preserving Projection (OLPP) Algorithm.
Satellite Image Matching and Registration using Affine Transformation and Hybrid Feature Descriptors
by Anil NS, D.N. Chandrappa
Abstract: Image registration (IR) is a primary image processing technique that determines the geometrical transformation giving the most accurate match between a reference image and a floating image. IR aligns multiple images for matching, finds the differences among them and produces the essential information shared across them. Several existing methods are used to reduce the manual processing associated with inter- and intra-operator subjectivity and to decrease this time consuming task. In existing work, the inlier and outlier ratios are equal, which reduces image registration accuracy. So, to suppress the outlier ratio and increase the inlier ratio, a Hybrid Invariant Local Features descriptor is proposed. This feature descriptor combines Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF) and Features from Accelerated Segment Test (FAST). The hybrid feature descriptor extracts the relevant features from the image. The feature matching step then finds the correct correspondences between the two sets of features. After that, an affine transformation removes false feature matching points. An experimental analysis reports the inlier ratio and repeatability metrics for the individual, combined and proposed feature descriptors. The proposed hybrid feature descriptor achieves an inlier ratio of 1.913 and a repeatability of 0.121, showing better results than the existing methods.
Keywords: Affine Transformation; Binary Robust Invariant Scalable Keypoint; Features from Accelerated Segment Test; Histogram Equalization; Image Registration; Line Blending; Speeded Up Robust Features.
A Review on various Energy Sources and Harvesting Techniques in WSN
by Immanuvel Arokia James K, Prabakaran R, Kanimozhi R
Abstract: This paper surveys the various energy sources and harvesting methods applicable in the wireless sensor network (WSN) field. It is strongly believed that the performance of a wireless sensor network can be improved by selecting proper energy sources. Similarly, by implementing good harvesting methods and better routing algorithms, the battery (power supply) lifetime can be matched to the sensor node lifetime, which helps to increase the lifetime of the WSN. An efficient energy harvesting technique can remove the need for frequent replacement of energy sources, and hence offers a near-perpetual network operating environment. This paper intends to help researchers design better energy harvesting (EH) techniques for WSN based applications using ambient energy sources. The energy consumption of the sensor nodes is the main consideration for further research. We examine the different sources and the fundamental measurements for obtaining adequate energy, and discuss various recently proposed energy prediction models that can potentially increase energy harvesting in WSNs. We have carried out a complete study of different energy harvesting strategies, comparing their performance, and have anticipated some future directions of this energy harvesting research. A number of the challenges that still need to be addressed to develop efficient and reliable energy harvesting systems for the WSN environment are also presented.
Keywords: WSN Applications; Energy Harvesting; Performance enhancement.
Diminishing the selfish nodes by reputation and pricing system through SRA scenario
by John Paul Antony T, Victor S P
Abstract: In a MANET, every node relies upon other nodes to forward data to its intended destination. However, a few nodes are not prepared to share their resources because of their selfish behavior. A reputation and pricing system provides a solution to this problem. We propose a Price and Reputation System (P&RS) scenario that helps to diminish selfish nodes in an effective manner, by efficiently combining the procedures of both the reputation and the pricing system and by building a stratified, area-aware disseminated table (DAT) to collect reputation information globally. The Stratified Report Assisted (SRA) framework overcomes the deficiencies of the existing systems by effectively combining the procedures of both the reputation and the pricing system.
Keywords: Selfish node; Disseminated table; Watchdog; Reputation.
Supervised Microarray Gene Retrieval System Based on KLFDA and ELM
by Thomas Scaria, T. Christopher
Abstract: Microarray gene data processing has gained considerable research interest these days. However, processing microarray gene data is highly challenging due to its volume. Taking this challenge into account, this work proposes a supervised microarray gene retrieval system which relies on two phases, namely feature dimensionality minimization and classification. The objective of feature dimensionality minimization is to make the classification process easier by weeding out unwanted data. The feature dimensionality of the datasets is minimized by KLFDA, and the processed dataset is passed to the classification phase, which is carried out by ELM. The proposed approach is evaluated on three benchmark datasets: colon tumour, central nervous system and ALL-AML. The experimental results show that the proposed combination of KLFDA and ELM works better for all three datasets in terms of accuracy, sensitivity and specificity.
Keywords: microarray gene retrieval; classification; feature dimensionality minimization.
Trust based Multi-level Secure Routing for Authentication and Authorization in WMN
by Parveen Kumar Sharma, Rajiv Mahajan, Surender Jangra
Abstract: Wireless Mesh Networking (WMN) is a developing technology that is receiving increased attention as a high-performance, low-cost and rapid-deployment solution for next generation wireless communication systems. The main issue in WMNs is providing security to the routing protocols. A few routers in WMNs exhibit malicious behaviour by snooping on the exchanged data. Moreover, users may have different authorization levels, such that they should not access information for which their authorization level is too low. Although many papers have been published on privacy-preserving and secure routing in WMNs, they cannot address these issues effectively. Hence, we propose a Trust based Multi-level Secure Routing protocol for Authentication and Authorization (TMSR-AA) in WMNs. In this protocol, every node in the network maintains a trust table of its neighbouring nodes. The path trust value (PTV) along a route is calculated from these trust tables. Apart from trust, the protocol uses a Multi-level Security (MLS) mechanism in which the information to be transmitted is categorized into different Security Levels (SLs). The packets transmitted in the network are assigned an SL based on the type of data, and the mesh routers are assigned an SL based on the estimated trust counter. During message transmission, a mesh router with a specific SL is allowed to send packets only at the same or a lower level. The performance of the TMSR-AA-WMN protocol is compared with the ESR protocol in terms of packet delivery ratio, packet drop, delay, computational overhead, communication overhead and detection accuracy.
Keywords: Authentication; Authorization; Communication overhead; Computational overhead; Packet Delivery Ratio; Packet drop; Secure Routing; Security Levels; Wireless Mesh Networks.
A Hybrid Feature Selection Algorithm for Big Data Dimensionality Reduction
by Anto Praveena, Bha Rathi
Abstract: High data dimensionality causes major problems in big data analytics, such as large storage requirements due to data redundancy, noisy data visualization and high computational cost. Hence, reducing high-dimensional data sets is a vital task in big data analytics. Feature selection is a dimensionality reduction approach that aims to reduce the high data dimensions to a small subset by obtaining a set of uncorrelated principal variables based on the key features of the data source, eliminating redundant, noisy and irrelevant features. In the proposed work, a hybrid algorithm for feature selection based on Ant Colony Optimization (ACO) and Quick Branch and Bound (QBB) is presented to improve the efficiency of feature selection in big data analytics. ACO performs feature selection in a manner inspired by real ants searching for food resources, while the QBB algorithm is used to initially partition the large data set into two partitions. The two algorithms are combined and implemented to reduce the data dimensionality via feature selection. The ACO-QBB hybrid algorithm was simulated and compared against existing feature selection algorithms on a set of performance metrics: precision, recall, F-score, classification accuracy and the size of the selected feature set. Simulation results prove the efficacy of the proposed hybrid algorithm for feature selection over the other competitive equivalents.
Keywords: Dimensionality Reduction; Feature Selection; Ant Colony Optimization; Quick Branch Bound.
GLOBAL COOPERATIVE IMAGE PREFETCHING IN MOBILE AD HOC NETWORKS
by SHASHIDHARA D.N., D.N. Chandrappa, Puttamadappa C
Abstract: Prefetching is a popular technique for improving data accessibility in wired and wireless networks. In mobile ad hoc networks, however, the gains in access latency and cache hit ratio may diminish because of the mobility and limited cache space of mobile hosts (MHs). The proposed algorithm, Global Cooperative Caching with Image Prefetching (GCCIP), prefetches images based on associations among data items: it prefetches highly related data items and considers the confidence of the association rules. Whenever a mobile node issues a request, the cache request processing module first logs the request and checks whether the desired data item is available in the local cache of the Mobile Node (MN) or in any of the mobile nodes in the cluster. On a cache hit, the cache manager still needs to validate the consistency of the cached item against the copy at the origin server, which it does by checking the item's time-to-live value. If the data item is verified as up to date, it is returned to the mobile node immediately. If it is a cache hit but the value is obsolete, the cache manager sends an uplink request to the server and waits for the data broadcast; when the requested data item arrives, the cache manager returns it to the requester and retains a copy in the cache. Experiments using both MATLAB and the Network Simulator 2 (NS2) tool evaluate the performance of the proposed algorithm, and the parameters are compared with Global Cooperative Caching (GCC). Compared with GCC and other previous schemes, the proposed GCCIP algorithm greatly improves system performance in terms of packet loss, delay and throughput in NS2, while the MATLAB results show MSE increasing as PSNR decreases.
Keywords: Prefetching; Cooperative Caching; Mobile ad hoc networks; Data Mining; Association Rules; Global Cooperative Caching with Image Prefetching.
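The hit-validation step described in the abstract (serve a cached item only while its time-to-live is valid, otherwise fetch from the server and retain a fresh copy) can be sketched as follows; the class and function names here are illustrative, not from the paper.

```python
import time

class CacheEntry:
    """A cached data item with a time-to-live (TTL) validity window."""
    def __init__(self, value, ttl):
        self.value = value
        self.expires_at = time.time() + ttl   # item is valid until this instant

def lookup(cache, key, fetch_from_server, ttl=60.0):
    """Serve a cache hit only while its TTL is valid; a miss or an
    obsolete hit triggers an uplink fetch, and the fresh copy is retained."""
    entry = cache.get(key)
    if entry is not None and time.time() < entry.expires_at:
        return entry.value                    # valid cache hit
    value = fetch_from_server(key)            # miss, or hit with obsolete value
    cache[key] = CacheEntry(value, ttl)       # retain a fresh copy in the cache
    return value
```

In GCCIP the lookup would additionally consult the caches of other mobile nodes in the cluster before falling back to the server; this sketch shows only the local TTL check.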
SYMMETRIC KEY GENERATION ALGORITHM USING IMAGE BASED CHAOS LOGISTIC MAPS
by KALYANAPU SRINIVAS, V. Janaki
Abstract: One of the latest hot spots in current security trends is crypto-image security, where images are used to provide security. In this paper, an algorithm for a random bit sequence generator based on images and two chaos logistic maps is proposed. The main goal of our algorithm is to generate a symmetric key (a random bit sequence) at both the sender and the receiver. In this method, a symmetric key of any length chosen by the user can be generated by selecting a number of points on a chosen image (the minimum number of points is 1). The pixel values of these chosen points are applied to the chaos logistic map and the corresponding symmetric key is obtained. The selected pixel coordinates are transmitted to the receiver in a secure manner, so the symmetric key can be regenerated at the receiver from the received coordinates. Our methodology thus concentrates on generating the symmetric key at the receiver rather than transmitting the key itself. The strength of the generated key is tested using the NIST test suite; compared with [6], [9-10], the time required per bit is lower and the randomness of the generated symmetric key is higher.
Keywords: Symmetric algorithms; Chaos logistic maps; Images; Key Generation; NIST Test suite.
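The key-generation pipeline in the abstract (chosen pixel values seed a chaotic logistic map, whose orbit is thresholded into a bit sequence) can be sketched as below. The SHA-256-based seeding rule, the parameter r = 3.99 and the 0.5 threshold are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

def logistic_keystream(pixel_values, n_bits, r=3.99):
    """Derive a symmetric-key bit sequence from the chosen pixel values
    via a chaotic logistic map x -> r*x*(1-x)."""
    # Map the pixel values deterministically to an initial condition x0 in (0, 1).
    digest = hashlib.sha256(bytes(pixel_values)).digest()
    x = (int.from_bytes(digest[:8], "big") % 10**8) / 10**8
    x = min(max(x, 1e-9), 1 - 1e-9)   # keep x0 strictly inside (0, 1)
    bits = []
    for _ in range(n_bits):
        x = r * x * (1 - x)           # one logistic-map iteration
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

Because the map is deterministic, sender and receiver produce the same key from the same pixel values, which is the property the scheme relies on; only the coordinates (not the key) need to be transmitted.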
Detection of malicious network traffic using enhanced neural network algorithm in Big Data
by Rajendran Bhojan, Saravanan Venkataraman
Abstract: Nowadays, computing technologies and systems generate very large amounts of data as the internet grows rapidly, which poses big challenges but also opens opportunities for data analytics and mining to uncover the patterns and laws beneath the data. Recently, big data analytics has been applied to areas such as e-commerce, healthcare and industry, and big-data-based security analytics in particular has received great attention. This paper presents a detailed sketch of techniques that apply big data to network security analytics. A new neural network algorithm is proposed to analyze network traffic; malicious data sent through the network and anomalous activities being carried out are detected using this approach. Experiments are carried out using the KDD dataset, and the performance of the proposed approach is compared with traditional learning approaches in terms of false positive ratio and detection ratio.
Keywords: Big data; network security; KDD dataset; neural network.
Performance Measures of Diseases Affected Iris Images using Sigmoidal Multilayer Feed Forward Neural Network
by Gino Sophia, Ceronmani Sharmila.V
Abstract: The iris is a rare natural password used for reliable and secure human identification. However, the iris can be affected by a number of diseases, which in turn affect the iris recognition process, so this work studies and analyzes the types of diseases affecting eye images. Iris localization is performed using edge detection with pixel-neighbourhood parameters and the structuring element of a morphological technique. The iris images are trained using neural networks, and the regression and performance graphs are analyzed. Various disease-affected iris images are compared with normal images using the periodogram power spectral density in MATLAB.
Keywords: Enhancement; Histogram; Localization; Neighbors of a pixel; Neural Networks; Regression; Normalization.
Medical Image watermarking technique using IWT-BSVD
by Siva Kannan S, Keerthana Ganesh, Gaya Thri, Jaishree Sundar
Abstract: Special care and concealment are required for medical images, since judgments are made on the information obtained through them. Transmission of medical images demands strong security and patent protection in telemedicine applications. A highly secure and robust watermarking technique is proposed for the transmission of medical images over the internet and mobile phones. The Region of Interest (ROI) and Region of Non-Interest (RONI) of the medical image are separated, and only the RONI is used for embedding watermarks. The medical image watermarking technique presented here is based on the integer wavelet transform (IWT) and bidiagonal singular value decomposition (BSVD). The original image is decomposed by the IWT, and then the grey-scale watermark image is embedded in the bidiagonal singular values of the low-frequency sub-band of the host image. Experimental results on benchmark images clearly show that the proposed scheme is highly resistant to attacks and has good invisibility.
Keywords: ROI; RONI; Integer Wavelet Transform (IWT); bidiagonal singular value decomposition (BSVD).
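The IWT decomposition the abstract relies on is exactly invertible over integers, which is what makes it attractive for watermark embedding. A minimal sketch of a one-level integer Haar wavelet transform via lifting (a simple member of the IWT family; the paper does not specify which wavelet it uses):

```python
def haar_iwt(signal):
    """One-level integer Haar wavelet transform via lifting.
    Returns (approximation, detail); both are integer sequences
    and the transform is exactly invertible."""
    assert len(signal) % 2 == 0
    approx, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        d = a - b                # detail coefficient (predict step)
        s = b + (d >> 1)         # approximation, equals floor((a + b) / 2)
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_iwt_inverse(approx, detail):
    """Exact inverse of haar_iwt: recovers the original integer signal."""
    out = []
    for s, d in zip(approx, detail):
        b = s - (d >> 1)
        a = d + b
        out.extend([a, b])
    return out
```

In a watermarking scheme like the one described, the `approx` (low-frequency) coefficients of the RONI would be the ones passed on to the BSVD embedding stage; perfect invertibility guarantees no rounding loss on extraction.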
A BRIEF STUDY ABOUT NATURE INSPIRED OPTIMIZATION ALGORITHMS
by S.Thanga Revathi, N.Rama Raj
Abstract: Nature inspires human beings to a great extent, as Mother Nature has guided us in solving many complex problems around us. Algorithms have been developed by analysing the behaviour of nature and the workings of groups of social agents such as ants, bees and insects; algorithms developed in this way are called nature-inspired algorithms. These nature-inspired algorithms can be based on swarm intelligence, biological systems, or physical and chemical systems. A few of these algorithms have proved to be very efficient and have thus become popular tools for solving real-world problems. Swarm intelligence is one of the most important families of algorithms developed from the inspiration of groups of organisms. The purpose of this paper is to present a comprehensive list of such algorithms that opens up the research scope in this area.
Keywords: Optimization algorithms; Nature Inspired algorithms; Genetic algorithms.
ROBUST MEDICAL IMAGE WATERMARKING TECHNIQUE USING INTEGER WAVELET TRANSFORM AND SHEARLET TRANSFORM WITH BSVD
by Siva Kannan S, Thiru Gnanam, Prabha Karan, Bennilo Fernandes
Abstract: There has been increased usage of digital devices in healthcare services over the last few decades because of the advancements made in the medical field, and the manual diagnosis method has been replaced with e-diagnosis systems. The most appropriate method for enhancing the security and authentication of medical data, which is crucial for further diagnosis and reference, is medical image watermarking. This paper focuses on medical image watermarking techniques for the protection and authentication of medical data using hybrid transforms. Several developments of the wavelet transform have been proposed in the field of mathematical analysis; one of the recent extensions of the wavelet is the shearlet transform. A hybrid scheme using the Integer Wavelet Transform (IWT) and the Discrete Shearlet Transform (DST) is presented in this paper. Here, the host image and then its low-frequency sub-band are decomposed using the IWT and DST, respectively. The bidiagonal singular value decomposition (BSVD) is applied to the selected sub-band of the shearlet transform, and the grey-scale watermark image is embedded into its bidiagonal singular values. Images with different textures are examined by this method, and resistance is evaluated against various attacks, including image processing and geometric attacks. The proposed method produces results with good transparency and high robustness.
Keywords: Medical image Watermarking; Integer wavelet transform; Discrete shearlet transform; Bidiagonal singular value decomposition.
PREDICTION OF NTH FRIENDS USING SPATIAL DATA MINING IN SOCIAL NETWORKS
by DGandhi Mathi, AJohn Sangeev Kumar
Abstract: The role of social networks, in particular Twitter, in intentional social action, and the relationship between personal networks and patterns of usage, are major concerns nowadays. The proposed work explores human activity behaviour, profiling and interestingness. Twitter4J is used to obtain individual-specific data, and the collection of individual profiles is used to exploit the relationships between individuals. The Nth friend of an individual can be found based on likeness and interest using a neighbourhood-based concept. Experimental results show that the proposed method gives better results in terms of accuracy and computational time.
Keywords: Social Network; Twitter; Spatial Mining; Clustering.
INFORMATION RETRIEVAL FROM MULTI DOMAIN SPECIFIC RESEARCH PROPOSAL USING HIERARCHICAL BASED NEURAL NETWORK CLUSTERING ALGORITHM
by R. Annamalai Saravanan, M. Rajesh Babu
Abstract: Research project selection is an essential task for government and private agencies. When a huge number of research proposals is received, it is common to group them according to their similarities in research discipline areas. In this paper, a new framework is proposed to assign proposals to reviewers and identify proposal themes automatically. First, a list of multi-word terms is extracted automatically from the domain-specific corpus of text documents and organized into a hierarchy based on their semantic similarity. Second, the proximity of two vectors is calculated using measures such as probabilistic correlation, where the conditional probability models the probability that terms appear together in a record. Third, a hierarchical neural network clustering algorithm is used to retrieve information from multi-domain-specific research proposals.
Keywords: Research & Development; probability correlation; neural network clustering algorithm; Feature vector.
Brain Tumour Segmentation using Weighted K-Means based on Particle Swarm Optimization
by Naresh Pal
Abstract: In medical science, Image Segmentation (IS) is a challenging task that subdivides an image into mutually exclusive regions. IS is the most fundamental and essential process for the classification, description and visualization of regions of interest in several kinds of medical images. In the medical field, diagnosis of the brain and other organs uses Magnetic Resonance Imaging (MRI), a very helpful diagnostic tool, but traditional MRI Brain Tumour Segmentation (BTS) is an extremely time-consuming task. This paper concentrates on an improved medical IS method based on hybrid clustering: a combination of Weighted K-Means and Fuzzy C-Means (WKFCM), and of K-Means and Particle Swarm Optimization (KPSO). The proposed techniques identify brain tumours accurately with less execution time. Experimental results on three different benchmark brain databases demonstrate that the proposed hybrid clustering performs better than earlier methods such as FCM, KM, Mean Shift (MS), expectation maximization and PSO.
Keywords: Weighted K-means; Fuzzy C-means; image segmentation.
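The baseline that the paper's WKFCM/KPSO hybrids build on is plain k-means over pixel intensities. A minimal one-dimensional sketch (this deliberately omits the intensity weighting, the fuzzy memberships and the PSO-based centre initialisation that the hybrids add):

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain k-means on scalar intensities: assign each value to the
    nearest centre, recompute centres as cluster means, repeat."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)          # random initial centres
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centres[c]))
            clusters[j].append(v)
        new = [sum(c) / len(c) if c else centres[i]
               for i, c in enumerate(clusters)]
        if new == centres:                   # converged
            break
        centres = new
    labels = [min(range(k), key=lambda c: abs(v - centres[c])) for v in values]
    return centres, labels
```

In tumour segmentation the "values" would be MRI voxel intensities and one of the resulting clusters would correspond to the tumour region; the paper's contribution lies in replacing the random initialisation and hard assignments above with weighted/fuzzy/PSO variants.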
Efficient Video Transmission Technique using Clustering and Optimization Algorithms in MANETs
by Vivekananda GN, Chenna Reddy P
Abstract: Mobile Ad hoc Networks (MANETs) are infrastructure-less wireless networks that can configure themselves and operate on top of a link layer. Video transmission needs high bandwidth together with tight delay constraints, and for continuous media streaming, packets must be delivered in a timely fashion. The network may become congested frequently due to external traffic, and many network designs cannot provide an optimised solution for the layers to adapt to particular application requirements and underlying channel conditions. Our proposed method overcomes these issues. Initially, at the sender side, the input video is partitioned into frames. The Discrete Wavelet Transform (DWT) is applied to decompose the frames into sub-bands, and then quantization is performed to obtain the bit stream from the subgroups. The bit stream is used for video transmission with Stream Control Transmission Protocol (SCTP) multi-streaming, and a cross-layer mechanism is used to promote rapid video transmission. For clustering the available nodes, we use the Enhanced Fuzzy C-Means (EFCM) algorithm, and path selection for the video stream is made using the Enhanced Cuckoo Search (ECS) algorithm. At the receiver side, reconstruction of the frames is made possible by performing the inverse of the transmission process, so the video tolerates internal congestion and is viewed on the receiver side without distortion. The proposed technique therefore achieves better video streaming performance; it is assessed using delay, delivery ratio, overhead, throughput and energy consumption while varying the number of nodes and the rates.
Keywords: Clustering; MANETs; Optimization; SCTP; Video Streaming.
A Hybrid Approach for Deep Belief Networks and Whale Optimization Algorithm to Perform Sentiment Analysis for MOOC Courses
by Jayakumar Sadhasivam, Ramesh Babu Kalivaradhan
Abstract: Sentiment classification has gained significant attention recently, as it provides a way to automatically analyse people's reviews and extract user information regarding a product or service. One of the widely used techniques is polarity classification, which determines the polarity of the texts in an opinion. Accordingly, this paper presents a technique for sentiment classification of online course reviews using a novel classifier, the Whale-based Deep Belief Network (WDBN). In the proposed technique, the input course review data is pre-processed and important features are extracted using an Emotion-SentiWordNet-based feature extraction process. To classify the sentiments in the extracted features, WDBN is introduced by combining Deep Belief Networks (DBN) with the Whale Optimization Algorithm (WOA), such that the weights of the network layers are selected optimally. Using WDBN, the proposed technique classifies course reviews into two classes, positive and negative. The proposed WDBN classifier is evaluated on a publicly available online course review dataset using three metrics: sensitivity, specificity and accuracy.
Keywords: Deep Belief Networks (DBN); MOOC; Sentiment classification; SentiWordNet; Whale Optimization Algorithm (WOA); Neural Network.
A robust low frequency integer wavelet transform based fractal encryption algorithm for image steganography
by Ambika , Rajkumar L. Biradar
Abstract: Image steganography is one of the emerging research areas in the field of information technology. In today's scenario, steganography is widely utilized in communication systems that send secret information through appropriate carriers; here, the secret information embedded in the cover image is a secret image. The primary objective of this paper is to develop a highly secure transmission technique for sending messages through a lossless channel. We propose an effective transform-domain image steganography approach using the low-frequency sub-band of the Integer Wavelet Transform (IWT) and fractal encryption combined with the L-shaped tromino theorem, which enhances the performance of the information hiding method. The proposed IWT-fractal encryption offers better embedding capacity with low computational complexity and good secret image quality. The experimental results confirm that the proposed technique delivers a high security level with low computational complexity compared with other existing approaches.
Keywords: Fractal encryption; Image steganography; Integer Wavelet Transform; L-shaped tromino.
Optimal Web Page Classification Technique using Artificial Neural Network
by Anusha Mallikarjun Meti, Mallikarjun M Kodabagi
Abstract: The rapid growth of the World Wide Web (WWW) demands automated assistance for web page classification and categorization. Web page classification is a supervised learning problem and a hard topic in the areas of data mining and machine learning: web pages contain unstructured data, and mining content from web documents and classifying them is a challenging problem. In this work, a method for web page classification is proposed comprising three phases: feature extraction, information learning and classification. In the feature extraction phase, we extract object-based features and utilize them to extract the informative contents of the web pages. The information learning phase uses the ID3 (Iterative Dichotomiser 3) decision tree algorithm to extract rules from the computed features. Based on the extracted rules, the classification phase utilizes a hybrid classifier known as the Artificial Neural Network and Group Search Optimization with Firefly (ANN-GSOFF) algorithm to improve web page classification. The performance of the proposed technique has been evaluated on the WebKB dataset using parameters such as sensitivity, specificity and accuracy: the proposed ANN-GSOFF achieves an overall sensitivity of 83.12%, an overall specificity of 67.42% and an overall accuracy of 74.77%.
Keywords: Artificial neural network; Classification; Feature extraction; Information Learning; Optimization; Webpage Classification.
A Low Power Architecture for 1D Median Filter using Carry Look Ahead adder
by Sharana Basappa, P. Ravi Babu
Abstract: In image and signal processing applications, it is essential to suppress noise while preserving the required information. In this paper, a one-dimensional median filter is used to reduce the Energy Per Sample (EPS). The median filter is one of the fundamental building blocks in many image processing applications, and the design is implemented as a VLSI architecture to determine hardware utilization and cost. A Low-Cost Carry Look-ahead Adder Median Filter (LC-CLA-MF) method is introduced to improve the speed of the median filter architecture. Area, power and delay are analysed for 5-window and 9-window configurations of 8-bit and 16-bit median filter architectures using 180nm technology. In the FPGA implementation, LUTs, number of slices, flip-flops and frequency are analysed for different Virtex devices. Finally, area, power, delay, Area-Power Product (APP) and Area-Delay Product (ADP) are all lower in the LC-CLA-MF method than in the conventional method.
Keywords: Very large scale integration; carry look-ahead adder; FPGA; 180nm technology; APP; ADP.
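The carry look-ahead principle behind the LC-CLA-MF adder can be modelled in software: each bit position produces generate (g = a AND b) and propagate (p = a OR b) signals, and the carries follow the recurrence c[i+1] = g[i] OR (p[i] AND c[i]). This bit-level model is illustrative only; in hardware the carries are produced in parallel by the look-ahead logic rather than sequentially as in this loop.

```python
def cla_add(a, b, width=8):
    """Bit-level model of a carry look-ahead adder: compute sum and
    carry-out of two unsigned integers from generate/propagate signals."""
    ai = [(a >> i) & 1 for i in range(width)]
    bi = [(b >> i) & 1 for i in range(width)]
    g = [ai[i] & bi[i] for i in range(width)]   # generate signals
    p = [ai[i] | bi[i] for i in range(width)]   # propagate signals
    c = [0] * (width + 1)                       # c[0] is the carry-in
    for i in range(width):
        c[i + 1] = g[i] | (p[i] & c[i])         # look-ahead recurrence
    s = [ai[i] ^ bi[i] ^ c[i] for i in range(width)]
    total = sum(bit << i for i, bit in enumerate(s))
    return total, c[width]                      # (sum mod 2**width, carry-out)
```

Inside a median filter, adders like this appear in the compare-and-swap stages that sort the window samples; speeding up the carry chain is what improves the filter's critical path.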
ADaas: A Secure Attribute Based Group Signature based Agri-Cloud Framework
by E. Poornima, N. Kasiviswanath, C. Shoba Bindu
Abstract: Cloud computing is very helpful for farm management: it helps to increase the productivity of farmers and also protects their products. It is an emerging way of computing in which applications, data and resources are provided as services to users over the web. Some practical challenges faced in communication among farmers are poor knowledge of weather forecasts and deficient information about production, sales and distribution of products. In this paper, an Agri-Cloud framework is developed to improve the cloud computing framework of Agriculture Data as a Service (ADaaS) over the public cloud; this model provides accurate information to the various stakeholders. To improve the security of the Agri-Cloud framework, an Attribute-Based Group Signature (ABGS) system is used, which provides secure data sharing in the cloud data centre. The experimental analysis demonstrates that the proposed Agri-Cloud framework is better than existing models such as the modified water cloud model, the Agro cloud model and the AgroMobile model.
Keywords: Agriculture Data as a Service; AgroCloud; Agri-cloud Model; Cloud Infrastructure; Key Generation; Elliptic Curve Digital Signature; Signature generation.
Evolutionary Cross-Layer Interactive Model for Sensor Communication
by Shoba Chandra, Kiran Kumari Patil, Suresha Talanki
Abstract: Cross-layer approaches have significant benefits compared to the conventional layered routing approach in Wireless Sensor Networks (WSNs); however, existing approaches in this direction have been found to offer only symptomatic solutions. Hence, an evolutionary novel approach is introduced in this paper that predominantly emphasizes the interaction among the network layer, link layer and MAC layer. The model is developed to i) ensure better network lifetime, ii) incorporate better support for ad-hoc routing policies, iii) leverage a cooperative transmission scheme, and iv) maintain a sustainable balance between the data forwarding mechanism and energy consumption. Simulation results show that the proposed scheme offers minimal delay, maximized data delivery and reduced energy consumption in WSNs.
Keywords: Cross-Layer Interactivity; Network Lifetime; Energy; Wireless Sensor Network; Optimization.
Special Issue on: ISTA'16 Metaheuristic Techniques and Applications
A Novel Self-Organization Model for Improving the Performance of Permutation Coded Genetic Algorithm
by Dinesh Karunanidy
Abstract: Genetic Algorithms (GAs) are among the most powerful evolutionary techniques and are used in a variety of fields for solving complex problems. A variety of assistive techniques have been proposed to improve the performance of GAs with respect to the nature of the application, and self-organization is one such model, aimed at improving the performance of GAs by all means. Self-organization models enable systems to acquire and maintain structure by themselves, without any external control, and there is strong evidence that, in conjunction with classical GAs, they solve complex problems with competitive efficiency. The combined version of the self-organization model and the GA has better exploration power. Accordingly, the work reported in this paper proposes an efficient pattern-based self-organization model for improving the performance of the GA on combinatorial optimization problems. The competence of the proposed model is demonstrated by means of a set of well-defined experiments over selected benchmark Travelling Salesman Problem (TSP) instances. The assessments prove the efficiency of the technique in terms of a set of generic performance criteria such as convergence rate, convergence time, error rate, nearest neighbour ratio and distinct individuals.
Keywords: Self-organization technique; Genetic algorithm; population seeding technique; traveling salesman problem; Pattern Replacement; Combinatorial Problem.
Hybrid Enhanced Shuffled Bat Algorithm (HESB) for Data Clustering
by Reshu Chaudhary, Hema Banati
Abstract: The Enhanced Shuffled Bat algorithm (EShBAT) is a recently proposed variant of the bat algorithm (BA) which has been successfully applied to numerical optimization. To leverage the optimization capabilities of EShBAT for clustering, HESB, a hybrid of EShBAT, K-Medoids and K-Means, is proposed in this paper. EShBAT works by dividing the population of bats into groups called memeplexes, each of which evolves independently according to BA. HESB improves on this by employing K-Medoids and K-Means to generate a rich starting population for EShBAT, and by refining the best solution of each memeplex at the end of every generation using the K-Means algorithm. Combined, these two modifications produce an efficient clustering algorithm. HESB is compared to BA, EShBAT, K-Means and K-Medoids over ten real-life datasets; the results demonstrate the superiority of HESB.
Keywords: Enhanced shuffled bat algorithm; k-means; k-medoids; data clustering.
Special Issue on: Advanced Intelligence and Computing Technology
Diminution of Power in Load/Store Queue for CAM and SRAM based Out-of-Order Processor
by Dhanalakshmi Gopal
Abstract: In the modern world, out-of-order superscalar processors are designed to achieve higher performance on non-numeric applications. Unfortunately, the improvement in performance has led to an increase in chip power and energy dissipation. The load/store queue is one of the major power-consuming units in the datapath during dynamic scheduling: it is designed to absorb bursts in cache accesses and to maintain the order of memory operations by keeping all in-flight memory instructions in program order. The proposed technique aims at reducing both dynamic and static power dissipation in the Load Queue/Store Queue (LQ/SQ) by using a power-gating technique and a priority encoder. Through this implementation, the least redesign and verification effort, the lowest possible design risk, and the smallest hardware overhead are achieved without significant impact on performance.
Keywords: Load/Store Queue; static power; dynamic power; CAM; SRAM.
Design of an ultra-low power, low complexity and Low Jitter PLL with digitally controlled oscillator
by N.K. Anushkannan, H. Mangalam
Abstract: This paper proposes a new area-efficient, low-power and low-jitter phase-locked loop (PLL) architecture working off a low-frequency reference. The new PLL uses a new low-complexity locking procedure, which results in an ultra-low-power design. The main challenge in designing the proposed PLL is to keep the area small while meeting the required low jitter. The proposed method uses only two up-down counters for finding the reference frequency. An efficient glitch removal filter and a new low-power digitally controlled oscillator (DCO) are also introduced in this paper; the proposed DCO achieves a reasonably high resolution of 1 ps. The PLL architecture was demonstrated for frequency ranges from 100 to 400 MHz. The power consumption of the proposed PLL at a 500 MHz frequency is 820
Keywords: phase-locked loop; digitally controlled oscillator; low power; low complexity; low jitter; glitch removal.
Effective content based pattern predicted text mining using PSE model
by Vijaya Kumar
Abstract: The main importance of the Pattern Searching Engine (PSE) model is that it provides a solution for applications that involve pattern-based mining, finding connections between patterns (e.g. emotions) and affective terms by categorizing the text in the content under examination. It discovers patterns of word use and connects documents that share similar patterns. The PSE model uses both theme-based analysis and concept-based analysis, which can predict the expected pattern by utilizing a semantics-based natural search model that links words with similar meanings and identifies uses of words with multiple meanings in an effective and fast way.
Keywords: Text mining; pattern based; pattern prediction; concept based.
Special Issue on: Advanced Intelligence Paradigms in Machine Vision, Image Processing and Pattern Analysis
Priority Based Trimmed Median Filter for Removal of High Density Salt and Pepper Noise
by Sudhakar R, Sudha V.K.
Abstract: This paper proposes an efficient and less complex Priority-Based Trimmed Median Filter algorithm for restoring images corrupted by high-density salt-and-pepper noise. In this algorithm, a noisy pixel is replaced by the trimmed median value of its four horizontally and vertically adjacent pixels. If all four of these are 0s and 255s, then the four diagonally adjacent pixels, the next priority, are used to calculate the trimmed median for replacing the noisy pixel. If these four are also 0s and 255s, the noisy pixel is left unchanged until the next iteration. Experimental results on different grey-scale and colour images show that the proposed algorithm outperforms the Standard Median Filter, the Adaptive Median Filter, the Decision Based Algorithm, the Modified Progressive Switching Median Filter and the Modified Decision Based Unsymmetric Trimmed Median Filter.
Keywords: Salt and Pepper noise; Median filter; Adaptive Median Filter; Unsymmetric Trimmed Median Filter.
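The per-pixel priority rule described in the abstract can be sketched directly: try the four axial neighbours first, trim out the extreme values 0 and 255, and fall back to the diagonal neighbours only if the axial set is entirely noisy. This single-pixel sketch is reconstructed from the abstract alone; the full algorithm iterates it over the image until no replaceable pixels remain.

```python
import statistics

def restore_pixel(img, r, c):
    """One application of the priority-based trimmed median rule to
    pixel (r, c) of a grey-scale image given as a list of rows."""
    v = img[r][c]
    if v not in (0, 255):                # not salt-and-pepper noise
        return v
    for offsets in (((-1, 0), (1, 0), (0, -1), (0, 1)),     # priority 1: axial
                    ((-1, -1), (-1, 1), (1, -1), (1, 1))):  # priority 2: diagonal
        vals = [img[r + dr][c + dc] for dr, dc in offsets
                if 0 <= r + dr < len(img) and 0 <= c + dc < len(img[0])]
        clean = [x for x in vals if x not in (0, 255)]      # trim 0s and 255s
        if clean:
            return int(statistics.median(clean))
        # all candidates at this priority were 0/255: try the next priority
    return v                             # leave unchanged until the next iteration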
An Efficient approach for handling degradation in Character Recognition
by Sandhya N
Abstract: Recognition of historical printed degraded Kannada characters is not completely solved and still remains a challenge to researchers. In this paper a scale for measuring the degradation of a character is proposed; the degradation is then characterized as high, medium or low based on this scale, and the scale is used to study the efficiency of the character restoration technique designed. A new approach for recognition, Fit Discriminant Analysis (FDA), is proposed, and its recognition accuracy is compared with the existing techniques Support Vector Machines (SVM) and Fisher Linear Discriminant Analysis (FLD). Through extensive experimentation it is established that rebuilding the characters significantly improves the recognition accuracy of the learning-based approaches SVM, FDA and FLD, and that the proposed FDA approach gives the best recognition accuracy for historical printed degraded documents. It is also shown that constructing the training-testing sets by applying the proposed degradation measure is required for better recognition accuracy.
Keywords: Degraded characters; Support Vector Machines; Fisher Linear Discriminant Analysis; Broken characters.
Pattern Analysis and Texture classification using Finite State Automata scheme
by B. Eswara Reddy, Ramireddy Obulakonda Reddy
Abstract: This paper proposes a complete model of finite state automata, along with an associated classifier, for texture classification. Pattern analysis of the texture image is performed using a proposed symbolic-pattern-based algorithm, developed from symbolic dynamics and finite state automata theory to estimate the state transitions of the texture variations. The texture image is divided into several partitions, i.e. the texture, the background of the texture, the shadow of the texture, etc. Finite automata state transitions are used to extract features from the symbolized image, and a binary classifier is designed to classify the texture categories based on these features. Pattern analysis is performed on the KTH-TIPS dataset for 10 varied categories of texture, achieving 99.12% classification accuracy. The experimental study shows the better efficiency of the proposed system compared to other existing state-of-the-art methods.
Keywords: Finite automata; symbolic pattern; texture; classification.
A Novel Method for Super Resolution Image Reconstruction
by Joseph Abraham Sundar K, Vaithiyanathan V
Abstract: The paper describes a new method for super resolution based on surveying adjustment. The idea of the method is that an observation model is developed for the sequence of low-resolution images, and from this an observation equation is developed for Super-Resolution Image Reconstruction (SRIR). The observation equations are used by the surveying adjustment to find the gray function. The proposed method is validated using both simulated and real experiments, and the experimental results are compared with several recent techniques using performance measures such as peak signal-to-noise ratio and sharpness index. In both sets of experiments the proposed surveying-adjustment-based super-resolution image reconstruction proves to be highly efficient, which is needed for satellite imaging, medical imaging diagnosis, military surveillance, remote sensing, etc.
Keywords: Super-Resolution; Image Reconstruction; Gray function; Observation model.
GLCM Based Detection and Classification of Microaneurysm in Diabetic Retinopathy Fundus Images
by Dhiravida Chelvi, Raja Mani, C.T.Manimegalai Murugeasn
Abstract: Pre-screening of the eye is very important in diabetic retinopathy to help ophthalmologists provide relevant treatment. Diabetic retinopathy is a major cause of blindness and includes lesions such as microaneurysms, haemorrhages and exudates. Microaneurysms, small red dots on retinopathy fundus images, are the first clinical sign of diabetic retinopathy and an early detectable sign of impending vision loss. The number of microaneurysms is used to indicate the severity of the disease, so the first step in preventing it is automatic detection of microaneurysms at an early stage, which also reduces the manual workload and cost. A novel method of microaneurysm detection for retinopathy images is proposed here; the proposed algorithm detects and classifies microaneurysms from diabetic retinopathy fundus images, even in low-resolution images. Initially the image is processed by a median filter and enhanced by Contrast Limited Adaptive Histogram Equalization (CLAHE). Microaneurysms are then detected by the extended-minima method for candidate extraction. Principal component analysis (PCA) is used as a pre-feature extractor in terms of the size, shape and colour of the microaneurysm. To improve the efficacy of the system, statistical features are finally extracted with the gray-level co-occurrence matrix (GLCM) and given to a KNN classifier to classify microaneurysms accurately. The detected microaneurysms are validated by comparison with ground-truth images hand-drawn by expert ophthalmologists. The simulation results show a sensitivity of 95.7%, a specificity of 90.56% and an accuracy of 93% for the proposed algorithm.
Keywords: Micro aneurysm; Diabetic Retinopathy; Image Processing; Pre-Processing; Image Classification.
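The GLCM feature stage described above can be sketched in a few lines. This is an illustrative NumPy implementation of co-occurrence statistics on synthetic patches, not the paper's code; the pixel offset, the eight gray levels and the three Haralick-style features are arbitrary illustrative choices.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()                     # normalise to joint probabilities

def glcm_features(img):
    """Contrast, energy and homogeneity: typical statistical features
    handed to a classifier such as KNN."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast    = np.sum(p * (i - j) ** 2)
    energy      = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Two synthetic 8-level "textures": a smooth ramp vs. a noisy patch.
rng = np.random.default_rng(0)
smooth = np.tile(np.arange(8, dtype=int), (8, 1))
noisy  = rng.integers(0, 8, size=(8, 8))
f_smooth, f_noisy = glcm_features(smooth), glcm_features(noisy)
print(f_smooth[0] < f_noisy[0])   # → True: the smooth patch has lower contrast
```

The feature vectors from many such patches would then be the training input of the KNN classifier mentioned in the abstract.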
Face Recognition using combined Binary particle swarm optimization and Hidden layer of Artificial Neural Network
by S.G. Charan
Abstract: Face recognition is a challenging domain in which artificial neural networks perform very well, in both detection and recognition. In this paper, we propose a novel method of feature extraction in which the features obtained at the hidden layer of a neural network are utilized. This hidden-layer output is our first level of features. To these features we apply Binary Particle Swarm Optimization (BPSO) to remove redundancy, i.e. to discard redundant hidden units in the network. BPSO over hidden-layer outputs can be implemented in two ways: 1) applying BPSO over the hidden layer during the training stage, so the network is better optimized; 2) directly applying BPSO to the hidden-layer output of an already optimized neural network. Both techniques perform well compared with a traditional neural network and conventional BPSO. Experiments on the FERET and LFW datasets show promising results.
Keywords: Face Recognition; Hidden Data Mining; Particle Swarm Optimization; Artificial Neural Network; Hybrid Intelligent model.
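The hidden-unit selection idea can be illustrated with a minimal binary PSO. The fitness function below is a toy stand-in (agreement with a hypothetical "useful unit" mask), not a trained network's recognition accuracy, and the inertia and acceleration constants are conventional defaults rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N, ITERS = 16, 10, 60            # mask length, particles, iterations
target = rng.integers(0, 2, D)      # hypothetical set of useful hidden units

def fitness(mask):
    # Toy objective: how many units agree with the useful set.
    return int(np.sum(mask == target))

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

pos = rng.integers(0, 2, (N, D))    # each particle is a binary unit mask
vel = rng.uniform(-1, 1, (N, D))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
init_best = int(pbest_fit.max())

for _ in range(ITERS):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((N, D)) < sigmoid(vel)).astype(int)   # binary update
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(fitness(gbest), "of", D, "units correctly kept/dropped")
```

In the paper's setting the fitness would instead be recognition accuracy over the masked hidden-layer features.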
Iris recognition system based on a new combined feature extraction method
by Izem Hamouchene, Saliha Aouat
Abstract: Recent scientific studies are interested in automatic systems that require no human intervention, a concept crucially needed in several research and industrial settings. Indeed, the security field is in great need of automatic identification systems based on biometrics. The human iris is considered the best biometric mark for identification owing to the stability, distinctiveness and uniqueness of its features over time; the unique texture of the human iris is thus a natural password, a property coveted by the security field. In this paper, we propose a novel automated iris recognition approach based on a combination of two systems. The first system is based on the Regional Variation (RV) method, which decomposes the iris image into several blocks and then encodes the variation of the mean and the variance to generate regional descriptors. The second system is based on a new feature extraction method called Rotation Invariant Neighborhood-based Binary Pattern (RINBP) (Hamouchene and Aouat, 2014), which extracts the relative local information between neighbouring pixels and is also robust against rotation. Two sets of support vector machine (SVM) learning algorithms are used to train the two systems, and the output scores of the two systems are normalized. Dempster-Shafer theory is used to distribute unitary mass over the two output sets of SVMs, and the combined belief measures are finally transformed into a probability by applying the Dezert-Smarandache theory. In the experiments, the CASIA iris image database is used as a benchmark, and the proposed system is compared with well-known iris recognition systems (Wildes, 1997; Masek, 2003; Han et al., 2014; Himanshu et al., 2014; Hamouchene and Aouat, 2014). The experiments illustrate that the proposed recognition system obtains better recognition rates, demonstrating the efficiency of the feature extraction methods (RV and RINBP) and of the decision model, which give promising results.
Keywords: Iris Recognition System; Neighborhood-based Binary; Texture analysis; Mean and variance variations; Dempster-Shafer theory; Support Vector Machines.
Enhanced method of using Contourlet transform for medical image compression
by Eben Sophia P, Anitha J
Abstract: With the aim of improving compression performance using the contourlet transform, Singular Value Decomposition (SVD) of the intermediate subbands is investigated. In this way, the size of the contourlet transform subbands can be efficiently reduced to induce compression. This novel lossy compression technique enhances the compression performance of the contourlet transform and produces good-quality images even at lower bit rates. In addition to SVD, normalization and prediction of the decomposed subband coefficients also improve the compression performance. The method was tested on medical MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) imaging modalities. The statistical results confirm the efficiency of the proposed method in terms of CR (Compression Ratio), PSNR (Peak Signal-to-Noise Ratio) and BPP (Bits Per Pixel): it produces good compression, with approximately 47 dB PSNR at bit rates as low as 0.1 BPP. This makes it well suited to medical image communication and storage applications such as PACS (Picture Archiving and Communication System) and RIS (Radiology Information System), and it also helps in easy search and retrieval.
Keywords: Contourlet transform; singular value decomposition; prediction; lossy compression; arithmetic coding; medical MRI and CT images.
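The subband-reduction idea can be sketched as rank truncation via SVD. This is a generic illustration on a synthetic low-rank "subband", not the paper's contourlet pipeline; the retained rank k and the storage accounting are illustrative assumptions.

```python
import numpy as np

def svd_compress(band, k):
    """Keep only the k largest singular values of a subband matrix."""
    U, s, Vt = np.linalg.svd(band, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]       # rank-k reconstruction

def psnr(orig, approx, peak=255.0):
    mse = np.mean((orig - approx) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

# A smooth synthetic "subband": low-rank structure compresses well.
x = np.linspace(0, 1, 64)
band = 255 * np.outer(x, x) + 5 * np.outer(np.sin(4 * x), np.cos(3 * x))
approx = svd_compress(band, k=4)

# Rank-k storage: k*(m + n + 1) values instead of m*n.
m, n = band.shape
ratio = (m * n) / (4 * (m + n + 1))
print(round(ratio, 1), psnr(band, approx) > 40)   # → 7.9 True
```

Smooth subbands concentrate their energy in few singular values, which is why truncation costs little quality at high compression ratios.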
Video-based assistive aid for blind people using object recognition in dissimilar frames
by Hanen Jabnoun, Faouzi Benzarti, Frédéric Morain-Nicolier, Hamid Amiri
Abstract: Developing visual aids for handicapped persons is an active research area in the computer vision community. This paper presents a visual substitution tool for blind people based on object recognition in video scenes. It focuses on optimizing the video processing by calculating the dissimilarity between frames, incorporating the Real Valued Local Dissimilarity Map method into the frame dissimilarity measures. It then uses Scale Invariant Feature Transform keypoint extraction and matching to identify objects in dissimilar frames. The experimental tests show some encouraging results for finding objects of interest. Thus, the proposed method can be a choice for solving the problem blind and disabled persons face in their interaction with the surrounding environment.
Keywords: Pattern recognition; video processing; visual substitution system; Scale Invariant Features Transform; Real Valued Local Dissimilarity Map; keypoints matching.
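The frame-skipping optimisation can be sketched with a simple dissimilarity gate: run the expensive recognizer only on frames that differ enough from the last processed one. Mean absolute difference stands in here for the Real Valued Local Dissimilarity Map, and the synthetic "video" and threshold are made up for illustration.

```python
import numpy as np

def dissimilarity(a, b):
    """Mean absolute difference -- a stand-in for the Real Valued
    Local Dissimilarity Map score used in the paper."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def frames_to_process(frames, threshold):
    """Indices of frames dissimilar enough to warrant recognition."""
    selected = [0]
    for i in range(1, len(frames)):
        if dissimilarity(frames[i], frames[selected[-1]]) > threshold:
            selected.append(i)
    return selected

# Synthetic video: a bright square appears at frame 5 and then stays.
frames = [np.zeros((32, 32)) for _ in range(10)]
for f in frames[5:]:
    f[8:16, 8:16] = 255
print(frames_to_process(frames, threshold=1.0))   # → [0, 5]
```

Only two of the ten frames would reach the SIFT keypoint-matching stage, which is the point of the optimisation.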
Brachiopods classification based on fusion of Contour and Region based descriptors
by Youssef Ait Khouya, Faouzi Ghorbel
Abstract: In this paper, we propose a contour-region shape descriptor for brachiopod classification using a combination of Fourier descriptors and the R-transform extracted from the Radon transform. Fourier descriptors are supported by the well-developed and well-understood Fourier theory and are powerful features for the recognition of two-dimensional connected shapes; we use the stable and complete Fourier descriptors proposed by Ghorbel to represent the contour information. To depict the interior content of a shape we use the R-transform, whose advantages lie in its low computational complexity and geometric invariance. We compare the proposed descriptor with the Curvature Scale Space, R-transform and Ghorbel descriptors using the city-block distance measure on our brachiopod database. The experimental results reveal the performance of the proposed descriptor, which is efficient and independent of the starting point.
Keywords: Brachiopod; Fourier descriptors; Radon transform; R-transform; Curvature Scale Space.
Identification of Human Activity Pattern in Controlled Web Environment: An Adaptive Framework
by A. Chakraborty, D. Banerjee, R.T. Goswami
Abstract: This paper presents a new aspect of research on human web-based activity pattern analysis. The web activity pattern analyzer is part of the main goal of the research, human psycho-emotional behavioural pattern analysis. In the current era, people depend heavily on the internet in many aspects of their lives, which is why the internet usage pattern of each individual user is growing into a very powerful resource for knowing that user. The needs of web users can be met more efficiently if their requirements are known to the providers. These usage patterns are found to be unique to each user, to some extent, according to the current psycho-emotional state of that individual user in a controlled web environment, and this can serve as another mark of authentication for that particular user. This concept has already been applied in some real-world application domains, namely user authentication protocols, personalized e-learning, and link data analysis for the Resource Description Framework in the Semantic Web.
Keywords: Session_Sequence; Activity Pattern; Adaptive Algorithm; Dempster–Shafer theory; Belief Function; Recommender Agent; RDF Graph.
Special Issue on: Advanced Pattern Recognition and Soft Computing Paradigms
Using a soft computing method for impedance modelling of li-ion battery current
by Mohammad (Behdad) Jamshidi, Rouzbeh Farhadi, Morteza Jamshidi, Zahra Shamsi, Seyedfoadin Naseh
Abstract: The use of soft computing as a powerful tool for modelling complex systems is highly regarded. The adaptive neuro-fuzzy inference system is one of the best soft computing methods for identifying and modelling non-linear systems. In this paper, the complex impedance behaviour of li-ion batteries is studied with an adaptive neuro-fuzzy inference system; the purpose is to present an approach for the modelling and identification of electrochemical systems. The method can be refined to reach the most accurate model of the batteries. In the presented work, the complex current, the most important element of the batteries in the impedance state, is modelled. The modelling results show that this method can produce acceptable output for impedance modelling of the batteries.
Keywords: Electrochemical; impedance modelling; li-ion battery; soft computing; complex systems; systems engineering; ANFIS.
Fuzzy Project Scheduling with Critical Path Including Risk and Resource Constraints Using Linear Programming
by Shahram Saeidi, Samira Alizadeh Aminloee
Abstract: Project scheduling is one of the important issues of project management; it has raised the interest of researchers, and several methods have been developed for solving this problem. While deterministic models are used in most studies, uncertainty is an intrinsic property of most real-world projects, which include activities with uncertain processing times and resource usages. In this paper, a fuzzy linear programming model is proposed for project scheduling under uncertainty, considering risk and resource constraints, in which the duration of each activity and the amount of resources it uses are defined by fuzzy membership functions. The proposed model is simulated in MATLAB R2009a, and four test cases adopted from the literature are implemented. The computational results show that the proposed model decreases the critical path length by about 4% in comparison with similar methods.
Keywords: Fuzzy Project Scheduling; Critical Path; Linear Programming.
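The fuzzy critical-path idea (though not the paper's LP formulation) can be illustrated with a forward pass over triangular fuzzy durations (a, m, b). Fuzzy addition is componentwise, and competing predecessor paths are ranked here by centroid, one common defuzzification choice assumed for illustration; the activity network is hypothetical.

```python
def add(t1, t2):
    """Componentwise addition of triangular fuzzy numbers."""
    return tuple(x + y for x, y in zip(t1, t2))

def centroid(t):
    """Rank a triangular fuzzy number by (a + m + b) / 3."""
    return sum(t) / 3.0

def fuzzy_cpm(durations, preds):
    """Earliest fuzzy finish time of every activity.
    'durations' must be listed in topological order."""
    finish = {}
    for act in durations:
        start = (0, 0, 0)
        for p in preds.get(act, []):       # latest-finishing predecessor wins
            if centroid(finish[p]) > centroid(start):
                start = finish[p]
        finish[act] = add(start, durations[act])
    return finish

durations = {"A": (2, 3, 4), "B": (1, 2, 3), "C": (4, 5, 7), "D": (1, 1, 2)}
preds = {"C": ["A", "B"], "D": ["C"]}
finish = fuzzy_cpm(durations, preds)
print(finish["D"])   # → (7, 9, 13), along the critical path A-C-D
```

A backward pass with fuzzy subtraction would give fuzzy slacks and hence the fuzzy critical path.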
OMCM-CAS: Organizational Model and Coordination Mechanism for Self-adaptation and Self-organization in Collective Adaptive Systems
by Ali Farahani, Eslam Nazemi
Abstract: The complexity of information systems has grown in past decades, and dealing with this complexity has become a hot research field in computer science. One of the solutions for dealing with system complexity and environmental change is self-management, announced under the term autonomic computing by IBM in 2001. In recent years the use of self-managing approaches in distributed systems without central control has been a growing trend. Self-organization is known for its use in distributed systems, whereas self-adaptation is mostly used in centralized systems. To bring these two concepts together, self-adaptive concepts are combined with self-organization, an interdisciplinary term with applications in several fields. Different usages and definitions have been provided for this term and for its relation to self-adaptive systems, and these differences have led to ambiguity in the domain. A Collective Adaptive System (CAS) is a large-scale distributed system with heterogeneous agents of different capabilities; this research field covers a large majority of distributed systems. Having self-adaptiveness in CAS can address problems of coordination and cooperation among agents. This research compares self-organization with self-adaptation in a broader view and identifies the differences and correlations between them. It also considers the applicability of coordination, reflection and architectural approaches in both domains and presents a hybrid approach. Organizational models for self-organization in distributed environments are studied and analyzed, and a new combined organizational model is introduced based on the strengths and weaknesses of current organizational models. Based on the presented organizational model, a coordination mechanism is introduced to facilitate cooperation in CAS. A case study (the NASA ANTS mission) is discussed and simulated, and the simulation results support the applicability and effectiveness of the presented organizational model and coordination mechanism.
Keywords: Self-organization; Self-adaptation; Intelligent distributed system; Decentralized control; Coordination Mechanism.
A new density-based clustering scheme to enhance energy efficiency in wireless sensor networks
by Mahdis Fathi, Mousa Nazari
Abstract: Study and research related to wireless sensor networks (WSNs) are growing today owing to their various uses in different fields. A wireless sensor network comprises many small nodes deployed in an environment of interest. Since the dimensions of these sensors are small, they run on non-rechargeable batteries and are thus energy-limited instruments, so energy conservation is very important. Clustering the sensor nodes is an effective way to diminish the energy consumed by these networks. Accordingly, a novel clustering scheme based on the density-based clustering approach is presented in this article. In this new method, nodes located in proximity to each other are placed in one cluster and, unlike in some algorithms, there is no need to determine the exact number of clusters. Simulation outcomes indicate that the lifetime and total packet delivery of the proposed method are improved compared with other related methods.
Keywords: clustering; density-based; energy efficiency; wireless sensor networks; WSNs.
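The no-preset-cluster-count property can be sketched as grouping nodes by proximity, i.e. finding connected components of the "within eps" graph. This is a generic density-style grouping, not the paper's algorithm; the node positions and the eps radius are synthetic.

```python
import numpy as np

def density_clusters(nodes, eps):
    """Label sensor nodes by connected component of the proximity
    graph -- the number of clusters emerges from the data."""
    n = len(nodes)
    close = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=2) <= eps
    labels, current = [-1] * n, 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]                        # flood-fill one component
        while stack:
            j = stack.pop()
            if labels[j] != -1:
                continue
            labels[j] = current
            stack.extend(np.flatnonzero(close[j]))
        current += 1
    return labels

rng = np.random.default_rng(3)
# Two well-separated groups of sensor nodes on the field.
group_a = rng.uniform(0, 4, (20, 2))
group_b = rng.uniform(50, 54, (20, 2))
labels = density_clusters(np.vstack([group_a, group_b]), eps=6.0)
print(len(set(labels)))   # → 2
```

A real WSN scheme would then elect a cluster head per component, e.g. by residual energy.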
The performance comparison of improved continuous mixed P-norm and other adaptive algorithms in sparse system identification
by Afsaneh Akhbari, Aboozar Ghaffari
Abstract: One of the essential uses of adaptive filters is sparse system identification, for which the performance of classic adaptive filters is not acceptable. Several algorithms have been designed especially for sparse systems; we call them sparsity-aware algorithms. In this paper we study the performance of two recently presented adaptive algorithms in which a P-norm constraint is included in the cost function. The general name of these algorithms is continuous mixed P-norm (CMPN). The performance of these algorithms is considered for the first time in sparse system identification. The performance of the l_0-norm LMS algorithm is also analyzed and compared with the studied algorithms. The performance analyses are carried out with the steady-state and transient mean square deviation (MSD) criteria of adaptive algorithms. We hope this work will inspire researchers to look for other advanced algorithms for systems that are sparse.
Keywords: Adaptive algorithms; sparse; mixed P-norm; system identification.
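The sparsity-aware update can be illustrated with a zero-attracting LMS variant, a simple relative of the l_0-norm LMS analysed above (the exact l_0 approximation term is not reproduced here). The sparse system, step size and attraction strength are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 64, 20000
h = np.zeros(N); h[[3, 17, 40]] = [1.0, -0.5, 0.8]   # sparse unknown system

x = rng.standard_normal(T + N)                        # white input
d = np.convolve(x, h)[N:T + N] + 0.01 * rng.standard_normal(T)

def identify(mu=0.01, rho=0.0):
    """Plain LMS (rho = 0) or zero-attracting LMS (rho > 0)."""
    w = np.zeros(N)
    for n in range(T):
        xn = x[n + N:n:-1]                # most recent N input samples
        e = d[n] - w @ xn
        w += mu * e * xn - rho * np.sign(w)   # sparsity-promoting term
    return w

w_lms = identify()
w_za  = identify(rho=1e-5)
msd = lambda w: float(np.sum((w - h) ** 2))
print(round(msd(w_lms), 6), round(msd(w_za), 6))
```

The zero-attraction term shrinks the many inactive taps toward zero, which is the mechanism the MSD analysis in such papers quantifies.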
Energy-aware traffic engineering in IP networks using the non-dominated sorting genetic algorithm II
by Raheleh Samadi, Mohammad Nassiri, Muharram Mansoorizadeh
Abstract: The wide spread of computer networks, along with increasing traffic demand throughout the Internet, has caused a dramatic increase in the energy consumed by networking devices and Internet infrastructure. Energy-aware traffic engineering is a promising approach towards green networking that achieves a trade-off between energy saving and network utilization in backbone networks. In this paper, we propose using the non-dominated sorting genetic algorithm (NSGA-II) for energy-aware intra-domain traffic engineering. The algorithm makes a trade-off between maximum link utilization (MLU) and energy conservation. For each pair of network topology and traffic matrix, NSGA-II computes the optimal set of links to put to sleep such that the resulting topology can still carry the traffic demand. We developed a simulator to evaluate the performance of our mechanism. The results of comprehensive evaluations show that our energy-aware TE approach improves network performance in terms of energy conservation by 50%, at the cost of a slight increase in maximum link utilization.
Keywords: Energy saving; Traffic engineering; Link utilization; Genetic algorithm; non-dominated sorting.
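The core of NSGA-II, non-dominated sorting of candidate solutions, can be sketched directly. The five (MLU, active links) sleep configurations below are hypothetical; a full NSGA-II would add crowding-distance selection, crossover and mutation on top of this sorting.

```python
def non_dominated_fronts(points):
    """Sort candidate configurations into Pareto fronts, minimising
    both objectives (e.g. max link utilisation, links kept awake)."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and p != q

    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (MLU, active links) for five hypothetical link-sleeping configurations.
configs = [(0.4, 10), (0.5, 8), (0.6, 6), (0.5, 9), (0.7, 7)]
print(non_dominated_fronts(configs))   # → [[0, 1, 2], [3, 4]]
```

The first front is the Pareto set from which an operator would pick its preferred utilisation/energy compromise.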
A Comparison of Data Mining Methods for Diagnosis and Prognosis of Heart Disease
by Mohammad Reza Afrash, Mehdi Khalili, Maral Sedigh Salekde
Abstract: Heart disease is a term that covers a range of disorders that affect the heart. Medical decisions are still mostly based on the knowledge and experience of doctors rather than on the knowledge hidden in numerous patient records, so decision making is exposed to human errors, which may lead to late discovery of disease or influence how services are offered to patients. Creating an automatic or semi-automatic detection system that combines both knowledge and experience in the field of health care is therefore very useful and necessary. This paper compares data mining algorithms for the diagnosis and prognosis of heart disease as an automatic intelligent heart disease prediction system. We first use a data set with 14 attributes, and then develop a prediction model using Naïve Bayes.
Keywords: data mining techniques; heart disease; classification; weka.
Special Issue on: Advances in Information Security, Privacy and Forensics of Multimedia Big Data in the Internet of Things
Botnet Detection based on DNS Traffic Similarity
by Ahmad Manasrah, Walaa Bani Domi, Nur Nadiyah Suppiah
Abstract: Despite the efforts made in combating the threat of botnets, they still grow in size and in evasion techniques. Bot software is written once and spreads to machines all over the world, and it is preconfigured to locate its malicious domain name (if it is not static) through the DNS system, like any other legitimate host. In this paper, a scalable approach for detecting a group of bot hosts from their DNS traffic is proposed. The approach leverages a signal processing technique, power spectral density (PSD) analysis, to discover the significant frequencies (i.e. periods) of the botnet's periodic DNS queries. It processes the timing information of the generated DNS queries, regardless of the number of queries or domain names. Measuring the level of similarity between hosts exhibiting periodic DNS queries then reveals the group of bot hosts in the monitored network. Finally, we evaluated the proposed approach using multiple DNS traces collected from different sources, along with a real-world botnet deployed in a controlled environment. The evaluation results show that the proposed approach was able to detect the group of bot hosts exhibiting similar periodic DNS patterns with high accuracy and minimal false-positive rates.
Keywords: Botnet detection; Traffic similarity; Traffic anomaly; Group Activity; Malware activity; Traffic behavior analysis; Network Intrusion Detection.
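The PSD step can be sketched as a periodogram over binned query timestamps. The threshold rule for deciding which frequencies are "significant" is an illustrative choice, not the paper's detector, and the 60-second beacon interval and traffic volumes are synthetic.

```python
import numpy as np

def query_period(timestamps, window, dt=1.0):
    """Bin query times into a rate signal and return the period (s)
    of the lowest frequency whose power stands far above the floor."""
    bins = np.arange(0, window + dt, dt)
    signal, _ = np.histogram(timestamps, bins=bins)
    signal = signal - signal.mean()            # drop the DC component
    psd = np.abs(np.fft.rfft(signal)) ** 2     # periodogram estimate
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    thresh = 20 * np.median(psd[1:])           # illustrative noise floor
    peaks = freqs[1:][psd[1:] > thresh]
    return 1.0 / peaks[0] if len(peaks) else None

rng = np.random.default_rng(7)
# A bot querying its C&C domain every 60 s (with jitter), plus
# unrelated background queries spread over the hour.
bot = np.arange(0, 3600, 60) + rng.normal(0, 0.5, 60)
background = rng.uniform(0, 3600, 60)
period = query_period(np.concatenate([bot, background]), window=3600)
print(period)
```

Hosts sharing the same dominant period would then be compared for group similarity, as the abstract describes.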
Fingerprinting Violating Machines with In-Memory Protocol Artifacts
by Mohammed Al-Saleh, Yaser Jararweh
Abstract: Cyber crime has increased as a side effect of the dramatic growth in Internet deployment. Identifying the machines responsible for a crime is a vital step in an attack investigation, and tracking the attacker's IP address to its origin is indispensable. However, beyond finding the attacker's (possible) machine, it is essential to provide supporting proof that binds the attack to the attacker's machine, rather than depending solely on the attacker's IP address, which can be dynamic. This paper proposes to implant such supporting proof by utilizing the internals of three well-known Internet protocols: IP, TCP and ICMP. Our results show that there can be potential proof in the structures of these protocols. In addition, because a violator is unaware of (and has no control over) the involved protocols, the investigation process gains stealth. To the best of our knowledge, we are the first to utilize protocol remnants in fingerprinting violating machines.
Keywords: Fingerprinting; violating machine; protocol artifacts.
Enhancement of the 3-D Playfair Algorithm using a dual key
by Arnab Kumar Das, Nabanita Das
Abstract: The Playfair cipher is one of the well-known polyalphabetic ciphers. In this paper we present a new approach for the secure transmission of a message using a modified version of the Playfair cipher combined with an XOR operation and a dual key. The technique is built from three functions: one generates the matrix, and the other two perform encryption and decryption. The proposed extended 3D Playfair cipher works with 256 (4x8x8) characters: 52 alphabetic characters (upper case and lower case), 10 numerals and the 194 most commonly used special characters of the ASCII character set. We use the 3D version of the Playfair cipher while retaining the digraph concept. The restrictions of existing 2D Playfair ciphers and of 3D Playfair ciphers using 4x4x4 and 6x4x4 matrices are overcome in the proposed work, and the proposed algorithm can accommodate more characters than the existing 3D Playfair ciphers.
Keywords: playfair; cipher; polyalphabetic; encryption; decryption; ASCII.
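A 3D Playfair-style cube lookup over a 4x8x8 arrangement of 256 byte values can be sketched as follows. The key schedule and the single column-swap rule for digraphs are simplifications for illustration: real Playfair variants use fuller rule sets, and the XOR/dual-key layer of the proposed scheme is omitted here.

```python
def build_cube(key):
    """Hypothetical key schedule: key bytes first (deduplicated),
    then the remaining byte values, laid out as a 4x8x8 cube."""
    seen, order = set(), []
    for b in key.encode() + bytes(range(256)):
        if b not in seen:
            seen.add(b)
            order.append(b)
    pos = {b: (i // 64, (i % 64) // 8, i % 8) for i, b in enumerate(order)}
    cube = {coord: b for b, coord in pos.items()}   # inverse lookup
    return pos, cube

def crypt(data, pos, cube):
    """Encrypt/decrypt a digraph stream by swapping the column
    coordinate of each pair -- this toy rule is self-inverse."""
    if len(data) % 2:
        data += b"X"                        # simple padding
    out = bytearray()
    for a, b in zip(data[::2], data[1::2]):
        (l1, r1, c1), (l2, r2, c2) = pos[a], pos[b]
        out += bytes([cube[(l1, r1, c2)], cube[(l2, r2, c1)]])
    return bytes(out)

pos, cube = build_cube("SECRETKEY")
ct = crypt(b"Attack at dawn!", pos, cube)
print(crypt(ct, pos, cube))   # → b'Attack at dawn!X'
```

Because 256 distinct byte values fill the cube exactly, any byte of plaintext has a unique (layer, row, column) coordinate, which is what lets the scheme cover the full printable set the abstract lists.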
A Knowledgebase Insider Threat Mitigation Model in the Cloud: A Proactive Approach
by Qutaibah Althebyan, Yaser Jararweh, Qussai Yaseen, Rami Mohawesh
Abstract: The security of cloud computing is a major concern for both organizations and individuals. Organizations seek more trust from individuals, while cloud users want to make sure that their private data will be safe from disclosure, whether by outsiders or by (possibly malicious) insiders of the cloud (cloud agents). Hence, the insider threat in cloud computing is a major issue that needs to be tackled and resolved. In this paper, we propose a proactive insider threat model using a knowledgebase approach: proactive in the sense that the model tries to detect, in advance, any deliberate deviation from the legal accesses an insider might perform, so that individuals' private data are protected and the cloud resources are ensured to remain secure and consistent at all times. Knowledgebase models were used earlier to prevent insider threats at both the system level and the database level; this knowledgebase work is extended here to cloud computing systems. The proposed model provides in-advance mitigation in the form of detection (and hence a chance of prevention) of possible insider breaches. This mitigation correlates the knowledge of system insiders (admins) who may grant undesired privileges to insiders of the underlying cloud data center. The proposed model handles the insider threat in a cloud data center at several levels, the host level and the network level, where insiders are categorized into several levels of privilege according to their location within the cloud data center. Simulation results show that the proposed model works well in predicting malicious acts by insiders of the cloud data center, and that although the model is effective in predicting insider threats, it performs with minimal overhead; in fact, the number of blocked insiders is reduced to the minimum.
Keywords: Insider; Proactive; Cloud Data Center; Knowledgebase; Prediction; Mitigation.
Digital Video Forensics: A Comprehensive Survey
by Mohammad A. Alsmirat, Ruba A. Al-Hussein, Wala'a T. Al-Sarayrah, Yaser Jararweh, Morad Etier
Abstract: The wide spread and advancement of digital devices and tools have simplified the manipulation of any digital multimedia content. Nowadays, digital videos and photos are not trusted as reliable evidence in major court cases. Such concerns result from the existence of various techniques that can easily be used to change the content of this evidence. These facts raise the need to find new ways or techniques to ensure the authenticity of digital multimedia content. Usually, multimedia evidence is obtained either by downloading it from the Internet or from digital storage devices such as disks and tapes; in both cases, some method should be used to guarantee the originality and authenticity of the digital evidence. Experts in digital signal processing have conducted a large number of studies to find new strategies, using digital forensics, to verify digital evidence and trace its origins. The main engine of such techniques is the assumption that the manipulation of such evidence cannot be fully concealed and leaves traces called "footprints", which can be analyzed to determine whether the evidence has been altered. The aim of this paper is to collect and provide definitions of the main concepts related to media forensics. It also gives an overview of the different techniques used in media forensics, concentrating on video forensics, and classifies the work done in the field according to the main technique used in each proposed solution approach.
Keywords: video forensic; image forensic; digital forensic; video compression; double compression; video manipulation.
Special Issue on: Green Mobile Computing for Energy-Efficient Next-Generation Wireless Communication
VANET Routing Protocol with a Traffic-Aware Approach
by Sangeetha Francis
Abstract: A Vehicular Ad hoc NETwork (VANET) is a type of Mobile Ad hoc NETwork (MANET) in which the vehicles are the nodes. Routing is a fundamental requirement of VANET applications, so it is necessary to devise a routing protocol that copes well with rapid topology changes and disconnected network conditions. To address these specific needs of VANETs, we present a novel greedy routing protocol for vehicular networks, called VRPTA, that suits both the city environment and the highway environment. With the help of the GPS (Global Positioning System) localization system, the proposed protocol is designed to relay data efficiently in the network by considering different scenarios, such as road traffic variation and various environment characteristics. The protocol communicates vehicle-to-vehicle as well as vehicle-to-infrastructure, whichever is applicable, thereby ensuring reliable transmission. In addition, we consider information about vehicle speed, direction and density for a city traffic configuration consisting of two-way roads and multiple lanes, as well as a highway scenario. The work is implemented using the NS2 simulator.
Real Time MAF Based Multi Level Access Restriction Approach for Collaborative Environment Using Ontology
by Rajeswari Sampath
Abstract: The collaborative environment encourages rapid development in many organizations but struggles with malicious access. Many access control approaches for improving the performance of the collaborative environment have been discussed earlier, but unfortunately adequate performance has not been achieved. This paper presents a novel real-time malicious access frequency (MAF) based multi-level restriction scheme. The method maintains an ontology of resources that contains data of various kinds, their properties, and the set of roles in the environment allowed to access them. The system also maintains logs of previous accesses by the various users of the environment; the log supports the method's computation of the MAF for the requested data and user. Using the computed MAF value, the method computes a multi-attribute trust measure for each level as well as a multi-level trust weight, and based on the computed values it performs access restriction to improve the quality of collaborative development.
Keywords: Collaborative Environment; MAF; Data Ontology; Access Restriction; Public Auditing; MLA.
Reconfigurable Communication Wrapper for QoS Demand for Network-on-Chip
by S. Beulah Hemalatha, Vigneswaran T
Abstract: Efficient communication wrapper design is one of the important research issues in network-on-chip. A single wrapper with fixed design parameters will not be efficient in the heterogeneous environment of a network-on-chip. A system-on-chip has many different computing and communication blocks with different data rates and data formats. To interconnect such heterogeneous blocks, standards-based wrapper frameworks such as the OCI wrapper have been proposed, but such standard wrappers do not support the QoS demands of every block. This work therefore proposes a framework for reconfigurable communication wrapper design with QoS support. The proposed framework is simulated in LabVIEW software and tested on National Instruments FlexRIO 7845R FPGA hardware. The results show that on-the-fly reconfigurability is achievable with the framework.
Keywords: OCI wrapper; reconfigurable communication wrapper; QOS; system on chip.
MMSI: A Multi-Mode Service Invocation Algorithm To Improve The Connectivity In Accessing Cloud Services In Heterogeneous Mobile Cloud
by R.K. Nadesh, M. Aramudhan
Abstract: Modern research in the cloud environment focuses on letting mobile users access data through cloud services via an arrangement of regional cloudlets when connectivity with the cloud service provider is weak or lost. As cloud services can be activated anytime from anywhere, connection management should be handled fairly to maintain the service requirements. Although cloud services can be invoked independent of location, if the service parameters do not meet the constraints, the performance of the cloud system degrades. In this paper, we propose a multi-mode service invocation algorithm for improving cloud service to mobile users. When a mobile user is connected to a cloud service and the service level drops under random mobility, the algorithm chooses a cloudlet or an ad hoc cloud to provide identical service without interruption. In our experiment we estimate parameters such as delay, signal strength and energy; when an estimated level falls below its threshold, we invoke and bind with the nearest cloudlet or ad hoc cloud, whichever is available. The client invokes services through cellular networks under normal conditions and, at every time interval, computes the signal strength, energy level and delay factors in accessing the cloud service. When the estimated parameters fall below the thresholds, the client connects through the local access point. The multi-mode algorithm computes the service invocation weight and selects the connectivity mode for continuing the service invocation. We show that this algorithm improves user performance in accessing cloud services in terms of throughput, connectivity ratio and service completion.
Keywords: Cloud Computing; Mobile Adhoc Clouds; Cloudlets; Service Invocation.
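The threshold-and-weight logic the MMSI abstract describes can be sketched roughly as follows; the normalisation ranges, weights and threshold below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of multi-mode service invocation: estimate delay,
# signal strength and residual energy, combine them into a weight, and
# fall back to a cloudlet / ad hoc cloud when the weight drops too low.

def invocation_weight(delay_ms, signal_dbm, energy_pct,
                      w_delay=0.4, w_signal=0.3, w_energy=0.3):
    """Combine normalised estimates into a single service-invocation weight."""
    delay_score = max(0.0, 1.0 - delay_ms / 500.0)      # lower delay is better
    signal_score = max(0.0, (signal_dbm + 110) / 60.0)  # assume -110..-50 dBm range
    energy_score = energy_pct / 100.0
    return w_delay * delay_score + w_signal * signal_score + w_energy * energy_score

def select_mode(delay_ms, signal_dbm, energy_pct, threshold=0.5):
    """Stay on the cellular link while the weight is above the threshold,
    otherwise bind to the nearest cloudlet (or ad hoc cloud)."""
    w = invocation_weight(delay_ms, signal_dbm, energy_pct)
    return ("cellular" if w >= threshold else "cloudlet", w)
```

A healthy link such as `select_mode(120, -70, 80)` stays cellular, while a degraded one such as `select_mode(480, -105, 10)` switches to the cloudlet.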
Malicious node detection through Run Time Self healing algorithm in WSN
by B.R. Tapasbapu, L.C. Siddanna Gowd
Abstract: A wireless sensor network (WSN) comprises a large number of randomly deployed nodes. These nodes configure themselves to form a network. A WSN's major role is to monitor the environment, collect data and communicate the data to the base node. The integrity of the data communicated by the WSN nodes is an important criterion for avoiding failures in the network, so self-healing techniques are implemented to overcome routing data losses due to misbehaving nodes. However, most protocols designed for self-healing are not energy constrained and are unsuitable for battery-powered networks. We propose a new run-time self-healing algorithm that uses individual monitoring nodes to scan the data and assess the stability of the nodes to ensure proper communication in the network. The proposed method is compared with the self-healing hybrid sensor network architecture (SASHA) and an Error Correction Code (ECC) algorithm to demonstrate the improvement in network efficiency.
Keywords: Wireless sensor network; Fault Occurrence; Self healing; Nodes management; Dead node avoidance.
Classification of Neonatal Epileptic Seizures using Support Vector Machine
by Vimala Velayutham
Abstract: Neonates are infants in their first 28 days of life. The diagnosis of neonatal seizures is supported by clinical observations and electroencephalography (EEG). The continuous monitoring of neonatal EEGs in neonatal intensive care units is tedious and requires expert intervention. The use of clinical decision support systems in neonatal intensive care units has proved to aid neonatal staff. Neonatal seizures of epileptic origin are the most common, and we recommend an approach to aid their classification using EEG signals of the neonates. The Daubechies wavelet transform is used to separate frequency bands and extract features. The theta rhythm of the EEG reliably reflects the occurrence of epileptic seizures in neonates. The features considered for classification are mean, variance, skewness and kurtosis. Support vector machine (SVM) based classification is adopted for the development of a system that detects the presence or absence of epileptic seizures. The performance of this diagnostic aid has been studied: the system has a sensitivity of 94% and a specificity of 96%. The receiver operating characteristic curve is also used in the performance assessment.
Keywords: Classification; EEG; neonatal intensive care units; neonatal epileptic seizures; support vector machine.
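The four statistical features the abstract lists (mean, variance, skewness and kurtosis) can be computed per EEG segment as in this small sketch; the Daubechies wavelet decomposition and the SVM classifier themselves are omitted, and the function assumes a non-constant segment:

```python
import math
from statistics import mean, pvariance

def eeg_features(segment):
    """Mean, variance, skewness and kurtosis of one EEG segment --
    the four features the abstract feeds to the SVM classifier.
    Assumes the segment is not constant (non-zero variance)."""
    m = mean(segment)
    var = pvariance(segment, m)
    sd = math.sqrt(var)
    n = len(segment)
    skew = sum((v - m) ** 3 for v in segment) / (n * sd ** 3)
    kurt = sum((v - m) ** 4 for v in segment) / (n * sd ** 4)
    return m, var, skew, kurt
```

In practice each wavelet sub-band (e.g. the theta band) would yield one such feature vector per segment.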
A Novel approach for Secured Transmission of DICOM Images
by Priya Selvaraj
Abstract: DICOM (Digital Imaging and Communications in Medicine) communication mainly concerns the transmission of medical images, the storage of information in medical images, and the printing and securing of images. Medical image communication mainly serves secure medical facilities for physicians and patients. The medical image is compressed in the JPEG 2000 format. A hash value is computed using an Additive Hash Function (AHF) and encrypted using RSA to form the digital signature. The combination of the digital signature and text forms the watermark; the text consists of patient information, doctor information, disease information and the prescription. Reversible watermarking is a technique in which the watermark is embedded and, when the watermarked image passes through the authentication process, the original image is extracted along with the watermark. Strict authentication is provided by implementing the Kerberos technique, giving high security for access to the secure medical images.
Keywords: Reversible watermarking; Authentication; Medical Image Compression; JPEG2000 Compression; Additive Hash Function; RSA; Kerberos.
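The hash-then-sign step (an AHF hash encrypted with RSA to form the digital signature) can be illustrated with a toy sketch; the additive hash shown and the tiny textbook RSA key are illustrative stand-ins, not the paper's actual AHF construction or realistic key sizes:

```python
def additive_hash(data: bytes, mod=2**16):
    """Toy additive hash: sum of byte values modulo a fixed modulus.
    (Illustrative only -- the paper's AHF may be constructed differently.)"""
    return sum(data) % mod

# Textbook RSA with tiny demo primes (never use such keys in practice).
p, q = 61, 53
n, e, d = p * q, 17, 2753          # e*d = 1 (mod lcm(p-1, q-1))

def sign(h):
    """'Encrypt' the hash with the private exponent to form the signature."""
    return pow(h % n, d, n)

def verify(h, sig):
    """Recover the hash with the public exponent and compare."""
    return pow(sig, e, n) == h % n

h = additive_hash(b"patient record + prescription text")
sig = sign(h)
```

The signature `sig` would then be combined with the patient/doctor text to form the watermark that is reversibly embedded.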
PRESERVING SECURITY USING CRISSCROSS AES AND FCFS SCHEDULING IN CLOUD COMPUTING
by Kalyanaraman Ramkumar, Gurusamy Gunasekaran
Abstract: Cloud computing is a developing technology in distributed computing which provides a pay-per-use service model according to user needs and requirements. A cloud comprises a collection of virtual machines that provide both computational and storage facilities, and the objective of cloud computing is to provide effective access to hyper-distributed resources. The fast-developing cloud faces many challenges, two of which are process scheduling and security. Scheduling concerns how a scheduler adapts its scheduling strategy, according to a changing set of rules, to control the order of work executed by a computer system. In this paper, a scheduling algorithm based on collocated First Come First Serve (FCFS) of supremacy elements is proposed, where system efficiency is improved by applying FCFS in a parallel manner. To address the security problem, crisscross Advanced Encryption Standard (AES) is proposed, increasing security in the cloud through a grid arrangement. Overall, the proposed work enhances system efficiency and security by combining crisscross AES with collocated FCFS of supremacy elements.
Keywords: Cloud computing; First Come First Serve; Advanced Encryption Standard; Security.
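The scheduling side of the scheme can be sketched as a plain first-come-first-serve scheduler; the parallel "collocated supremacy elements" arrangement and the crisscross AES layer are specific to the paper and are not reproduced here:

```python
from collections import deque

def fcfs_schedule(jobs):
    """First Come First Serve: run jobs in arrival order and return
    (job, start, finish) triples plus the average waiting time.
    Each job is a (name, arrival_time, burst_time) tuple."""
    queue = deque(sorted(jobs, key=lambda j: j[1]))  # order by arrival
    clock, timeline, waits = 0, [], []
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival)        # idle until the job arrives
        waits.append(clock - arrival)      # time spent waiting in queue
        timeline.append((name, clock, clock + burst))
        clock += burst
    return timeline, sum(waits) / len(waits)
```

For example, jobs A(0, 3), B(1, 2), C(2, 1) run back-to-back as A, B, C with an average wait of 5/3 time units.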
BINARY HONEY BEE MATING PARTIAL TRANSMIT SEQUENCE TO IMPROVE OFDM
by Jagarlamudi Ravisankar, B. Seetha Ramanjaneyulu
Abstract: A major shortcoming of Orthogonal Frequency Division Multiplexing (OFDM) is the high Peak-to-Average Power Ratio (PAPR) of the transmitted signals. The partial transmit sequence (PTS) method can improve the PAPR statistics of OFDM signals. In the PTS method, the data block to be transmitted is split into disjoint sub-blocks, and the sub-blocks are combined using phase factors to minimize PAPR. Because generic PTS requires an exhaustive search over every combination of permitted phase factors, search complexity rises exponentially with the number of sub-blocks. In the current work, a novel sub-optimal technique based on Binary Honey Bee Mating (BHBM-PTS) is suggested for finding a better combination of phase factors. The BHBM-PTS protocol can considerably reduce computational complexity for larger numbers of PTS sub-blocks while simultaneously providing lower PAPR. Simulations show that BHBM-PTS is an effective technique for achieving considerable PAPR reduction.
Keywords: Orthogonal Frequency Division Multiplexing (OFDM); Peak-to-Average Power Ratio (PAPR); Partial transmit sequence (PTS); Binary Honey Bee Mating (BHBM).
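Generic PTS with exhaustive phase search, the baseline whose exponential cost motivates BHBM-PTS, can be sketched as follows (stdlib-only, with a naive O(N²) inverse DFT for brevity):

```python
import cmath
from itertools import product

def ifft_naive(X):
    """Direct inverse DFT (O(N^2)); fine for a small illustration."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr(x):
    """Peak-to-average power ratio of a time-domain signal."""
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def pts_exhaustive(symbols, n_sub=4, phases=(1, -1)):
    """Split the data block into disjoint sub-blocks, try every phase
    combination and keep the one with the lowest PAPR (generic PTS)."""
    N = len(symbols)
    step = N // n_sub
    subs = [[symbols[k] if i * step <= k < (i + 1) * step else 0 for k in range(N)]
            for i in range(n_sub)]
    sub_time = [ifft_naive(s) for s in subs]  # IFFT is linear: combine in time domain
    best = None
    for b in product(phases, repeat=n_sub):   # exponential in n_sub
        x = [sum(b[i] * sub_time[i][n] for i in range(n_sub)) for n in range(N)]
        p = papr(x)
        if best is None or p < best[0]:
            best = (p, b)
    return best
```

With binary phases and `n_sub` sub-blocks the loop visits 2^n_sub combinations; BHBM-PTS replaces this exhaustive loop with a bee-mating search over the same phase vectors.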
A Survey on Internet of Vehicles: Applications, Technologies, Challenges and Opportunities
by Priyan M K, Ushadevi G
Abstract: This work provides a survey on the Internet of Things (IoT), Internet of Vehicles (IoV) and Internet of Everything (IoE). The Internet of Things provides interconnection between various physical devices such as sensor devices, mobile phones, laptops, PDAs and so on. Nowadays, IoT also enables connection between vehicles, buildings and other items fitted with sensors, actuators and gateways. The Internet of Vehicles is derived from the Internet of Things; it interconnects things, vehicles and environments to transfer data and information between networks. The Internet of Everything is an enhanced version of Internet-based technologies such as the Internet of Things, Internet of Humans and Internet of Digital. IoE provides end-to-end connectivity among procedures, knowledge and ideas engaged across all connectivity use cases. This paper discusses various challenges and issues in modern IoT, IoV and IoE systems. In addition, it discusses security issues and various applications of IoT in healthcare. Though IoT devices are used in modern applications with good performance, some challenges still exist; to overcome these issues, various open research problems are identified in this paper.
Keywords: Internet of Things; Internet of Vehicles; Internet of Everything; Vehicular ad hoc network; Big Data; Cloud Computing; Intelligent Transportation System.
Radio Spectrum Collision Avoidance in Cluster Cognitive Networks through Gazer Nodes
by V. Nagaraju, L.C. Siddanna Gowd
Abstract: Spectrum scarcity in cognitive radio can be solved effectively by better utilization of the radio spectrum. At present the spectrum is not effectively shared among all users. Since users are spread across different locations, spectrum allocation and spectrum sharing are important for using the spectrum effectively and for allocating a communication channel to all devices in the network; by doing so, all nodes in the network can communicate while covering a large area. In cognitive radio, approaches to spectrum sensing, spectrum allocation and reuse with different algorithms help improve spectrum utilization. Traditional spectrum allocation techniques such as fuzzy logic and harmony search are being replaced by new spectrum schemes, and the newer techniques bring more efficiency in spectrum utilization. Still, cognitive mesh networks suffer from collisions between secondary and primary users. To minimize the effect of collisions, we introduce a gazer-based cognitive radio network (GCRN) that provides more freedom in the frequency-sharing paradigm. The novel algorithm lets the network adapt automatically to every change in the environment of the cluster in the cognitive radio network.
Keywords: Cognitive radio network; Gazer nodes; Spectrum Sensing; Resource Sharing; Control channel.
Intelligent Intrusion Detection Techniques for Secure Communications in Wireless Networks: A Survey
by K.P. Rama Prabha, N. Jeyanthi
Abstract: Communication is at the heart of day-to-day activity in the current world. Since the world relies on electronic devices for carrying out daily activities, electronic and wireless communication, along with the Internet, plays a major role in providing a sophisticated life. Moreover, the number of Internet users has grown steadily over the past two decades as people seek easier lives through fast communication. In such a scenario, the number of intruders on the Internet is increasing dramatically. In this paper, we provide a survey on the use of machine learning algorithms for developing intelligent intrusion detection systems, which are most useful for providing secure communication in wireless networks. Moreover, we compare the important intelligent intrusion detection systems based on their performance and suggest some new ideas for improving the decision accuracy of current intelligent intrusion detection systems.
Keywords: Intrusion Detection System; Machine Learning Algorithms; Pre-processing; Classification; Wireless Networks;.
Perlustration on existing techniques and applications in cloud computing for smart buildings using IoT
by D. Shiny Irene, T. Sethukarasi
Abstract: One of the emerging applications of IoT and its devices is the design and construction of smart devices for smart buildings. Though one design goal of smart devices, anytime-anywhere presence, has been achieved, there is a pressing need to address other challenging design issues, viz. security, interoperability and energy efficiency. Many emerging algorithms and techniques address these issues. This paper surveys the emerging and promising algorithms that can address this ever-changing problem in building smart cities using IoT. Energy-efficient, environment-friendly, secured smart devices can be designed and developed in future to build perseverant smarter cities.
Keywords: Internet of Things; Smart Buildings; Smart Energy and Security; Cloud Computing.
FUZZY RULE SELECTION USING ARTIFICIAL BEE COLONY OPTIMIZATION ALGORITHM
by Naga Ravikiran Desiraju, Dethe C G
Abstract: Wireless sensor networks (WSNs) bring an innovative embedded-system model, with restrictions on computing ability, intercommunication, storage capacity and energy resources, that is applied to a wide range of applications in situations where constructing a network on conventional infrastructure is not feasible. Clustering in WSNs is a successful technique for reducing the energy use of sensor nodes. Fuzzy logic calculates the cluster head (CH) selection probability from a node's earlier communication history. The set of rules applied to the fuzzified input is the fuzzy rule base, and the output of the inference engine is converted to a crisp output by defuzzification. Artificial Bee Colony (ABC), an optimization protocol, owes its inspiration to the foraging behaviour of honey bees; it is a comparatively new optimization algorithm which has proven to be on par with classical bio-inspired protocols. In this work, the ABC optimization algorithm is applied to selecting fuzzy rules. Rule selection methods combine different rules from the fuzzy rule set to reduce the number of rules while maintaining system performance: rules that decrease the performance of the system are removed, yielding a fuzzy rule set with improved performance.
Keywords: Wireless sensor networks (WSN); Clustering; Artificial Bee Colony (ABC); fuzzy rule selection.
Image Encryption Techniques for Data Transmission in Networks: A Survey
by JAYANTHI RAMASAMY, John Singh K
Abstract: Today, the rapid growth of communication technologies such as the Internet, satellite and ground communications, and mobile networks has created the need to protect important information, individual or general, and the corresponding data against attackers. In this scenario, the privacy, integrity, confidentiality and authenticity of images have become significant issues for the storage and communication of images. Encryption is the best way to maintain the safety of transmitted data, by transforming the information into an unintelligible form. In the past, various encryption methods were proposed and applied to protect trustworthy images from unauthorized users. This study discusses and analyses previous encryption methods and identifies their issues. The paper describes the various encryption methods and reviews the related works for each scheme. Finally, the study discusses future directions for image encryption techniques.
Keywords: Image encryption; steganography; cryptography; Color image encryption; image quality measure; security analysis; Cryptanalysis.
DETECTING NEAR-DUPLICATE IMAGES USING SEGMENTED MINHASH ALGORITHM
by S. Thaiyalnayaki, J. Sasikala, R. Ponraj
Abstract: Search engines play an important mediating role between users' intentions and visual images. Digital images are easy to manipulate and modify with powerful image processing tools, but matching slightly altered copies to their originals remains a challenging task, termed near-duplicate image detection. Web image search results nowadays contain a significant portion of near duplicates, with images varying in size and resolution; since these images refer to the same or a similar image, most search engines group them in their result pages. The definition of a near-duplicate image varies depending on what resolution and geometric variations are deemed acceptable. Near-duplicate (ND) image detection has recently emerged as a timely issue, regarded as a powerful tool for various emerging applications: copyright enforcement, news topic tracking, and image and video search are among the tasks enabled by the identification of near-duplicate images. In this paper, a method is proposed for indexing near-duplicate images using a segmented minhash algorithm. First, image enhancement is performed on the user's query image and features are extracted; SURF (Speeded Up Robust Features) is used to extract the local invariant features of each web image. We then introduce a new algorithm, segmented minhash, which computes the similarity among the feature-extracted images. Finally, near-duplicate and exact-duplicate images are indexed with respect to the user query using Locality Sensitive Hashing (LSH). We demonstrate that our proposed approach is extremely effective for collections of web images.
Keywords: Indexing; near-duplicates; near-duplicate detection; Image Enhancement.
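Standard MinHash similarity estimation, which the paper's segmented variant builds on, can be sketched as below; the segmentation itself, the SURF feature extraction and the LSH indexing are not reproduced:

```python
import random

def minhash_signature(features, n_hashes=64, seed=7):
    """MinHash signature of a feature set: for each of n_hashes salted
    hash functions, keep the minimum hash value over the set. Python's
    built-in hash() is used for brevity (deterministic for int features)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(n_hashes)]
    return [min(hash((salt, f)) & 0xFFFFFFFF for f in features)
            for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature positions estimates the
    Jaccard similarity of the underlying feature sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two images whose (quantised) SURF feature sets overlap heavily produce signatures that agree in most positions, flagging them as near duplicates.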
Adaptive Multi loop IMC Based PID controller tuning using Bat Optimization algorithm for Two Interacting Conical Tank Process
by Lakshmanaprabu Sk
Abstract: In this paper, a multi-loop adaptive internal model control (IMC) based PID controller is designed for the two interacting conical tank level process (TICTLP). The nonlinear TICTLP is decomposed into a linear transfer function matrix around the operating points, and the effective open-loop transfer function (EOTF) is developed using a simplified decoupler. The IMC-based PID controller parameters are obtained for the EOTF model using the Bat optimization algorithm (BOA). A weighted sum of the integral time absolute error is used as the control design objective function for the multi-loop IMC-PID design, which yields faster settling time with minimal overshoot. Fuzzy-based adaptive gain scheduling provides complete control of the TICTLP, and a fuzzy-based adaptive decoupler is implemented to eliminate the dynamic interaction between control loops. The simulation results of the proposed controller are compared with conventional ZN-PID and IMC controllers to show its superiority. The simulation response indicates the performance improvement of the proposed control scheme in terms of time-domain performance indices, servo tracking, regulatory response and faster settling time.
Keywords: Conical tank process; Effective open loop transfer function; adaptive decoupler; Multi loop IMC control; IMC-PID; Relative Gain Array (RGA); Fuzzy gain scheduling; Bat Optimization Algorithm.
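The abstract does not state the particular IMC-PID tuning rule used; as one hedged illustration, Skogestad's SIMC rules, a widely used IMC-based tuning for a first-order-plus-dead-time model, look like this:

```python
def simc_pi(K, tau, theta, lam=None):
    """SIMC (IMC-style) PI tuning for a first-order-plus-dead-time model
    G(s) = K * exp(-theta*s) / (tau*s + 1).

    lam is the desired closed-loop time constant; Skogestad's default
    choice is lam = theta. Returns (Kc, Ti): controller gain and
    integral time. (Illustrative -- not necessarily the paper's rule.)"""
    if lam is None:
        lam = theta
    Kc = tau / (K * (lam + theta))
    Ti = min(tau, 4.0 * (lam + theta))
    return Kc, Ti
```

For a tank loop modelled as K = 2, tau = 10, theta = 1, the default tuning gives Kc = 2.5 and Ti = 8; the paper's BOA step would instead search such parameters against the weighted ITAE objective.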
SWARM DYNAMICS FOR ENHANCED ENERGY AWARE CLUSTERING
by Ramana Rao M V, Adilakshmi T
Abstract: Energy can be efficiently conserved in WSNs through clustering of nodes. As in all shared-medium networks, the Medium Access Control (MAC) protocol enables the smooth functioning of the network; an important function of MAC is to prevent collisions between two nodes sending data simultaneously. Many MAC protocols have been developed for the smooth functioning of WSNs, including Berkeley Medium Access Control (BMAC), which uses low-power listening and a suitable preamble for low-power communication. The main drawbacks of BMAC are overhearing and the power wasted in long preambles. The aim of this work is to cluster the BMAC protocol using heuristic methods based on River Formation Dynamics (RFD) and Particle Swarm Optimization (PSO). The suggested protocol's performance is evaluated for packet delivery ratio (PDR), end-to-end delay, hop count and jitter. The results show that the proposed River-PSO clustered BMAC performs better than BMAC with flooding and BMAC with cluster-based routing, under both static and varying node mobility.
Keywords: Wireless Sensor Networks (WSN); Cluster Head (CH); Medium Access Control (MAC); River Formation Dynamics (RFD); Particle Swarm Optimization (PSO).
NEURAL NETWORK BASED VIRTUAL BACKBONE TREE CONSTRUCTION AND DYNAMIC SINK IMPLEMENTATION TO ENHANCE THE LIFETIME OF THE NETWORK AND MINIMIZE THE ENERGY CONSUMPTION
by Vimal Kumar Stephen K, Mathivanan V
Abstract: Driven by technological development, the primary objective of this research is to retain the energy level of sensor nodes for a long period in a wireless sensor network; ensuring negligible energy drop leads to a long network lifetime. Secure group key management is employed to address security requirements such as authentication, confidentiality and scalability. A cluster key and a master key are used exclusively in the network to protect sensed information while communication between nodes takes place. Static and movable mobile sinks are deployed to enhance the lifetime of the sensors in the network. Initially, the static mobile sinks act as a trusted third party for computing and distributing keys between sensor nodes and clusters. Movable sinks then receive sensed data from the sensors wherever they are located, which avoids the frequent, unnecessary election of new cluster heads. Energy is retained because the trusted third-party sink performs all the cluster head's computations; the reduced computation in the cluster head thereby increases the lifetime of that cluster. Experimental outcomes show that the suggested technique produces better results than related studies.
Keywords: Key Generation; Cluster key; Master key.
An Adaptive Low Power Coding Scheme for the NOC
by M. Jasmin, T. Vigneswaran
Abstract: Low-power design is important for systems on chip, where many subsystem blocks communicate with each other at high data rates to realize the system functionality. Low-power coding reduces energy either by reducing self-switching activity or by reducing coupling switching activity. A typical Network on Chip (NOC) system, however, requires a low-power coding scheme that can handle different kinds of data traffic from different IP cores at different instants and different places in the System on Chip (SOC); a single low-power coding scheme will not satisfy all subsystem or application demands. This paper therefore presents a correlation-analysis-based adaptive data coding scheme which provides low power at any instant on any kind of data traffic. This is done by selecting and applying an encoding based on the correlation level of the data traffic, which is classified into three categories: low-correlation, moderate-correlation and high-correlation traffic, with a different coding scheme applied to each class. The proposed system is simulated in the LabVIEW FPGA tool for the USRP RIO target, a wireless transceiver that can inject megabits of test data per second for testing the coding scheme. The power consumption of the existing coding schemes is compared with the proposed adaptive scheme on different correlation-based test data sets. The results show that the proposed system saves 25% energy compared with the other coding schemes in the worst-case scenario.
Keywords: NOC; SOC; correlation analysis; USRP RIO.
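The correlation-based traffic classification step can be sketched as below; the lag-1 Pearson correlation and the two thresholds are illustrative choices, not the paper's exact correlation analysis:

```python
def lag1_correlation(samples):
    """Pearson correlation between consecutive samples of the traffic,
    a simple proxy for how correlated successive flits are."""
    x, y = samples[:-1], samples[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    if vx == 0 or vy == 0:          # constant traffic: treat as uncorrelated
        return 0.0
    return cov / (vx * vy) ** 0.5

def pick_coding_scheme(samples, low=0.3, high=0.7):
    """Route the data stream to one of three coding schemes by its
    correlation level (thresholds are illustrative)."""
    r = abs(lag1_correlation(samples))
    if r >= high:
        return "high-correlation scheme"
    if r >= low:
        return "moderate-correlation scheme"
    return "low-correlation scheme"
```

An adaptive encoder would run this classification periodically and switch encodings on the fly as the traffic mix changes.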
Severity of defect: An optimized prediction
by Kiran Kumar Reddi, Achuta Rao S. V.
Abstract: Software defect prediction (SDP) is an important activity performed to assure software quality. Historical databases are used to detect software defects using different machine learning techniques, which increases the chance of a positive outcome; conversely, undetected defects make testing expensive and yield a poor-quality, unreliable product. A bug report describes the severity of a defective code. Resources for testing and other planning activities are allocated based on defect severity assessment. This paper classifies the severity of defects using a method based on an optimized neural network (NN). The method is based on the Shuffled Frog Leaping algorithm, and the experimental outputs reveal that it outperforms a Levenberg-Marquardt based NN system (LM-NN).
Keywords: Software defect prediction (SDP); Severity; Neural Network; Levenberg Marquardt (LM); Shuffled Frog; fuzzy classifier.
High-level optimized systems design using hardware-software partitioning
by Lilia Kechiche, Lamjed Touil, Bouraoui Ouni
Abstract: Embedded systems have a wide range of uses and have become essential parts of today's life. A typical embedded system consists of application-specific hardware and programmable software. The hardware-software (HW/SW) partitioning problem plays a crucial role in embedded systems design, as it allows proposing an optimized system under predefined constraints by choosing which tasks should be mapped to software and which to hardware. In this paper, a heuristic algorithm, hybrid bee-colony optimization for multiple-choice HW/SW partitioning, is proposed with the objective of minimizing power consumption and execution time while meeting an area constraint. The heuristic algorithm generates an approximate solution within an acceptable delay. The Virtex 5 is chosen as the target platform. Simulation results are compared with existing work and show rapid generation of a near-optimal solution close to the exact one.
Keywords: hardware-software partitioning; heuristic algorithm; bee-colony optimization; SOPC.
FEATURE EXTRACTION USING CMIM FOR SENTIMENT ANALYSIS
by Madhusudhanan Baskaran, Chitra S, Anbuchelian S
Abstract: Recently, much attention has been paid to the domain of sentiment analysis (SA), with experts acknowledging both the scientific challenges and the possible applications of processing subjective language. SA is the computational analysis of opinions or sentiments conveyed in a body of text; its aim is to detect subjective data present in various sources and to determine the author's attitude towards the topic. In the current study, feature extraction is carried out using term frequency-inverse document frequency (TF-IDF) and feature selection through CMIM. Feature classification is performed with LogitBoost, CHAID and k-nearest neighbour classifiers, and the experimental results are contrasted with one another.
Keywords: Sentiment Analysis; LogitBoost; CHAID; CMIM; k-Nearest Neighbor (kNN); Term frequency / Inverse document frequency; Stemming; Stop words.
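The TF-IDF feature extraction step can be sketched as follows (a minimal bag-of-words version; the stemming, stop-word removal and CMIM selection mentioned in the keywords are omitted):

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency for a list of
    tokenised documents; returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```

A term that appears in every document (e.g. "movie" in a movie-review corpus) gets weight 0, while discriminative terms get positive weights that the classifiers can exploit.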
A BAYESIAN APPROACH FOR BRAIN COMPUTER INTERFACE USING FEATURE FUSION TECHNIQUES
by Aswin Seshadri K, Thulasi Bai V
Abstract: In the recent past many laboratories have explored the prospects of communication through cerebral activity for patients with neuromuscular disorders. A brain-computer interface (BCI) enables control of devices, or communication, through brain activity without using muscles. It has been successfully used in scientific and therapeutic applications and helps improve patients' quality of life. Electroencephalography (EEG) recorded from a person's scalp is used for controlling the BCI, and EEG signal analysis and classification is one of the prominent research areas in the field. The major challenges of BCI are the low signal-to-noise ratio of neural signals and the need for robust feature extraction from brain signals and subsequent classification. In this work, we review data fusion techniques for EEG-based BCI along with Bayesian methods for BCI. The paper compares the feature extraction techniques Laplacian, Kalman, and fused Laplacian-Kalman; the features obtained are classified using a naive Bayes classifier. Source identification and spatial noise reduction are achieved through the surface Laplacian, whose two functions are associated with prediction accuracy and signal orthogonality in BCI.
Keywords: Brain–Computer Interface (BCI); Feature Extraction; Laplacian; Kalman Filter; Naïve Bayes Classifier.
DESIGN AND FABRICATION OF AN IMPROVED GPS ANTI JAMMING ARRAY ANTENNA
by Thiyagarajan Venkatesh
Abstract: Global Positioning System (GPS) satellites produce low-power signals that travel great distances to reach the receiver. To negate a GPS system, an adversary needs only to generate a jamming signal with enough power and a suitable temporal or spectral signature to deny the use of GPS throughout a given area. The first system developed to increase GPS anti-jam capability for users on the ground or in the air was the controlled reception pattern antenna. This device consists of an array of antenna elements, all connected to an electronics box that controls phase, gain or both, and combines them into a single output. From both military and civilian perspectives it is important to establish an adequate anti-jamming capability for GPS systems and ensure the availability of this asset in all environments. This was recognized by the military and resulted in the development of several mitigation techniques in the time domain and time-frequency domain, Adaptive Antenna Arrays (AAA), and PC-based software-defined radio concepts. In this study, a circular geometry of 5 patch antennas operating at L2 = 1.227 GHz is designed and fabricated. A phase-only nulling technique based on hybrid optimization is proposed and evaluated using IE3D software.
Keywords: Global Positioning System (GPS); anti-jam; Adaptive Antenna Arrays (AAA); Circular geometry; patch antennas; Phase only nulling; Artificial Bee Colony (ABC) algorithm; Cuckoo Search (CS).
Power Audit: An estimation model-based tool as a support for monitoring power consumption in a distributed network infrastructure
by Aziz Dahbi, Asmaa El Hannani, Abdelhak Aqqal, Abdelfatteh Haidine
Abstract: Understanding the details of power consumption in distributed IT infrastructure has become essential for making efficient power management decisions. Indeed, energy costs are increasingly a major factor in the total cost of ownership (TCO) of IT equipment in both data centers and enterprise computing. However, measuring and monitoring the power consumption of systems in medium- to large-scale distributed infrastructures is often difficult due to the large and dispersed deployment of heterogeneous equipment such as personal computers (PCs), routers, switches and printers. The aspects discussed in this study are organized around: i) an approach for measuring the power consumption of devices in a distributed infrastructure, starting with computers; and ii) collecting the measurements on a monitoring server over the network, for supervision, using the Simple Network Management Protocol (SNMP). We have designed and developed software named "Power Audit" to support these aspects.
Keywords: IT equipment; SNMP protocol; distributed infrastructure; power management; power consumption.
Non-linear Channel Tracking of a High Mobility Wireless Communication System
by Sudheesh P, Jayakumar M
Abstract: Recently evolved wireless communication systems incorporate Multiple Input Multiple Output (MIMO) techniques to overcome the effects of channel fading. Orthogonal Frequency Division Multiplexing (OFDM) is also used to overcome Inter-Symbol Interference (ISI) and ensure effective signal transmission. The channel parameters in wireless communication systems are generally non-linear. Channel estimation techniques for such systems include the Kalman Filter (KF), the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF): the Kalman filter is used for linear channel estimation, whereas the EKF and UKF are applicable to non-linear systems as well. The particle filter (PF) is a Sequential Monte Carlo (SMC) method that uses the Sequential Importance Sampling (SIS) technique to track non-linear, non-Gaussian systems efficiently. In this paper, we estimate the channel parameters of a fast time-varying MIMO-OFDM system using a particle filter. The proposed scheme considers a first-order Auto-Regressive (AR) system model and a Rayleigh fading channel for mobile systems that incorporates the Doppler shift occurring in a mobile environment. The performance of the particle filter is compared with other estimation methods such as the Kalman filter and the extended Kalman filter, and the mean square error (MSE) as a function of the signal to noise ratio (SNR) is plotted to compare the particle filter with the other systems.
Keywords: Non-linear channel estimation; MIMO-OFDM system; Kalman Filter (KF); Extended Kalman Filter (EKF); Particle filter (PF).
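The sequential importance sampling with resampling that underlies the particle filter in this abstract can be illustrated on a scalar first-order AR channel gain. This is a minimal sketch, not the authors' MIMO-OFDM implementation; the AR coefficient `a`, process variance `q` and observation variance `r` are illustrative placeholder values.

```python
import numpy as np

def track_ar1_particle_filter(obs, a=0.95, q=0.1, r=0.5,
                              n_particles=500, seed=0):
    """Bootstrap particle filter for a scalar AR(1) state
    x_t = a*x_{t-1} + w_t, observed as y_t = x_t + v_t."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        # Propagate each particle through the AR(1) transition model.
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight particles by the Gaussian observation likelihood.
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        # MMSE estimate: weighted mean of the particle cloud.
        estimates.append(np.dot(w, particles))
        # Multinomial resampling to fight weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

In the paper's setting the state is a vector of MIMO channel taps rather than a scalar, but the propagate-weight-resample cycle is the same.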
Securing Ad Hoc Networks using Energy Efficient and Distributed Trust based Intrusion Detection System
by Deepika Kukreja, S.K. Dhurandher, B.V.R. Reddy
Abstract: Mobile Ad Hoc Networks (MANETs) are subject to a broad variety of attacks. Black hole and gray hole attacks are security threats that weaken MANETs by inducing packet forwarding misbehavior. This paper proposes a method for the detection and isolation of malicious nodes and the selection of the most reliable path for routing data. An Intrusion Detection System (IDS) is utilized to catch the nodes exhibiting packet forwarding misbehavior. The monitoring scheme is appropriate for MANETs as it emphasizes energy reduction, is distributed in nature and is compliant with dynamic network topology. The proposed method is simulated using the network simulator NS2. Findings show that the proposed system is efficient in terms of Packet Delivery Ratio (PDR), routing packet overhead, end-to-end delay and energy management as compared to the Dynamic Source Routing (DSR) protocol and other protocols in this area. The protocol improves the PDR by 43.44% as compared to the DSR protocol in the presence of malicious nodes.
Keywords: Ad Hoc Networks; Dynamic Source Routing Protocol; Intrusion Detection System; Trust; Gray hole attack; Energy.
Contribution to Radio Resource Distribution approach in Wireless Cellular Software Defined Networking
by Fall Hachim, Ouadoudi Zytoune, Mohamed Yahyai
Abstract: We are currently witnessing huge wireless traffic demand on a limited bandwidth. This drives the development of complex and power-hungry network technologies that are often harder to manage. Thus, core network features such as Radio Resource Management (RRM) raise important issues such as scalability and energy efficiency. This paper discusses Radio Resource Distribution (RRD) algorithms for next generation wireless cellular networks. We leverage the benefits of Software Defined Networking (SDN) by proposing AoD (Algorithms on Demand), which aggregates several schedulers at the network controller. Based on Markov prediction, a real-time context data analysis selects the most suitable RRD scheme at the evolved Node B. This choice depends on cell status (load, interference, etc.), thanks to the device programmability feature of SDN. Moreover, AoD reduces power consumption by continually optimizing the transmission rate. Simulations show that AoD can approach 5G (fifth generation) radio policies, with improved Quality of Experience and a low carbon footprint as benefits.
Keywords: Energy Efficiency; Markov Model Prediction; Openness; Radio Resource Management; Software-Defined Networking.
Special Issue on: Nature-inspired Computing and Its Applications
Improving the Search Efficiency of Differential Evolution Algorithm by Population Diversity Analysis and Adaptation of Mutation Step Sizes
by Dhanya M. Dhanalakshmy, M.S. Akhila, C.R. Vidhya, G. Jeyakumar
Abstract: The aim of this research work is to improve the efficiency of the Differential Evolution (DE) algorithm in cases where its search is unsuccessful. Initially, this work discusses and compares different methods of measuring the population diversity of the DE/rand/1/bin variant on a set of benchmark functions. Based on this comparison, a method is identified that clearly demonstrates the difference in the evolution of population diversity between successful and unsuccessful DE searches. The work is then extended to detect unsuccessful searches in advance using the evolution of population diversity measured by the identified method. On detecting a search as unsuccessful, a parameter adaptation strategy that adapts the mutation step size (F) is added to the DE algorithm to recover from it. The improved DE algorithm, which incorporates the logic of adapting the F value based on population diversity, is compared with its classical version and found to outperform it. The comparison results are reported in this paper.
Keywords: Differential Evolution; Premature Convergence; Stagnation; Mutation Step Size; Parameter Adaptation; Population Diversity; Population Variance.
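The ingredients named in this abstract — a population diversity measure and the DE/rand/1/bin variant with an adapted mutation step size F — can be sketched as follows. The diversity measure (mean per-dimension variance) and the simple threshold rule for adapting F are illustrative choices, not the paper's exact strategy.

```python
import numpy as np

def population_diversity(pop):
    # Mean per-dimension variance of the population (rows = individuals).
    return float(np.mean(np.var(pop, axis=0)))

def de_rand_1_bin_generation(pop, objective, f, cr, rng):
    # One generation of DE/rand/1/bin, minimising `objective`.
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = pop[r1] + f * (pop[r2] - pop[r3])   # mutation, step size f
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True                 # keep >= 1 mutant gene
        trial = np.where(mask, mutant, pop[i])       # binomial crossover
        if objective(trial) <= objective(pop[i]):    # greedy selection
            out[i] = trial
    return out

def run_de(objective, d=5, n=30, gens=100, cr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n, d))
    for _ in range(gens):
        # Illustrative adaptation rule: when diversity collapses,
        # enlarge F to fight premature convergence.
        f = 0.9 if population_diversity(pop) < 1e-3 else 0.5
        pop = de_rand_1_bin_generation(pop, objective, f, cr, rng)
    return min(objective(x) for x in pop)
```

On a separable test function such as the sphere, diversity decays as the population converges, which is what the detection step in the paper monitors.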
Towards Real-time Recognition of Activities in Smart Homes
by Sook-Ling Chua, Lee Kien Foo, Saed Juboor
Abstract: Many supervised methods have been proposed to infer the activities of inhabitants from a variety of sensors installed in the home. Current activity recognition systems either assume that the sensor stream has been pre-segmented or use a sliding window for activity segmentation. This makes real-time activity recognition difficult due to the presence of temporal gaps between successive sensor activations. In this paper, we propose a method based on a set of hidden Markov models that can simultaneously solve the problems of activity segmentation and recognition on streaming sensor data without relying on any sliding window methods. We demonstrate our algorithm on sensor data obtained from two publicly available smart home datasets.
Keywords: Real-time; Activity Recognition; Activity Segmentation; Streaming Data; Hidden Markov Model.
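The hidden-Markov machinery behind such recognisers can be illustrated with the standard forward algorithm, which scores an observation stream against one activity model; a minimal sketch with a toy two-state model (not the paper's trained parameters):

```python
def forward_likelihood(obs, pi, a, b):
    """HMM forward algorithm: P(observation sequence | model).
    pi[s]   - initial probability of state s
    a[t][s] - transition probability from state t to state s
    b[s][o] - probability that state s emits symbol o"""
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[s] * b[s][obs[0]] for s in range(n)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [b[s][o] * sum(alpha[t] * a[t][s] for t in range(n))
                 for s in range(n)]
    return sum(alpha)
```

In a set-of-HMMs recogniser, one such likelihood is computed per activity model and the incoming segment is assigned to the highest-scoring model.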
Supervised Approach for Object Identification using Speeded Up Robust Features
by Pooja Agrawal, Teena Sharma, Nishchal K. Verma
Abstract: This paper proposes a novel vision-based approach for real-time object counting that uses textural information. Speeded Up Robust Features (SURF) are used to extract the textural information from the image. First, the approach selects stable SURF features from a prototype image of the object of interest. These features are matched with the SURF features of the scene image captured through the vision interface. Feature Grid Vectors (FGVs) and Feature Grid Clusters (FGCs) are formed for matched SURF features in the scene to indicate the presence of the object. Support Vector Machine (SVM) learning is used to identify true instances of the object, and a parameter tuning approach is used to find optimized heuristics for higher accuracy and less computation. The proposed approach performs well irrespective of illumination, rotation and scale. A run-time environment for the proposed approach is also developed to report the object count in real time.
Keywords: Object identification; object counting; SURF; SVM classifier; feature grid vector; feature grid cluster.
Optimal Design of QFT Controller for Pneumatic Servo Actuator System using Multi-objective Genetic Algorithm
by Nitish Katal, Shiv Narayan
Abstract: Loop shaping is the principal step in synthesizing robust controllers based on Quantitative Feedback Theory (QFT). The controller assures performance robustness in the presence of plant uncertainties. This paper explores a template- and bounds-free approach for the automated synthesis of a low-order fixed-structure QFT controller for a highly uncertain pneumatic servo actuator system. In this work, the loop-shaping problem is posed as a multi-objective optimization problem and solved using the multi-objective variant of the genetic algorithm. At the end of the design process a set of Pareto optimal solutions (POS) is obtained; to aid the decision maker in choosing an ideal solution from the POS, the use of level diagrams is explored. Simulation and time- and frequency-domain analyses have been carried out using Matlab, and the results clearly show that the designed QFT controller offers robust behavior over a range of plant parametric uncertainty.
Keywords: Quantitative Feedback Theory; Multi-objective Genetic Algorithm; Automatic Loop Shaping; Robust Stability; Level Diagrams.
Hybrid BATGSA: A Meta Heuristic Model For Classification of Breast Cancer Data
by M. Umme Salma, Doreswamy
Abstract: Nature-inspired algorithms have a vast range of applications. One such application is in the field of medical data mining, where the major focus is on building models for the classification and prediction of various diseases. Breast cancer has grabbed the interest of numerous researchers because it is a major killer disease, killing millions of women across the globe. In this paper, we propose a hybrid diagnostic model which is a fusion of the Bat Algorithm (Bat), the Gravitational Search Algorithm (GSA) and a feed-forward neural network (FNN). Here, the potential of the FNN and the advantages of nature-inspired algorithms are exploited to build a hybrid model for the classification of breast cancer data. The proposed model consists of two modules: a training module, where the data is trained using a feed-forward neural network, and an error-minimizing module, built using the Bat and GSA metaheuristic algorithms. The hybrid model minimizes the error, thus producing better classification results. The accuracy obtained on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset is found to be 94.28% for training and 92.10% for testing.
Keywords: Breast Cancer; Bat algorithm; Gravitational Search Algorithm; Classification; Metaheuristic.
Special Issue on: New Trends for Security in Network Analytics and Internet of Things
A Novel Encryption Compression Scheme using Julia sets
by Kunti Mishra, Bhagwati Prasad
Abstract: The intent of this paper is to propose a novel fractal-based encryption-compression scheme using the logistic map and Julia sets. In our study of medical images, we obtain significant lossless compression and secure encryption of the image data. The proposed technique is expected to be useful for the transmission of various confidential image data in medical imaging, military and other multimedia applications.
Keywords: Logistic map; Encryption; Decryption; Compression; Julia sets.
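The logistic-map component common to such chaos-based schemes can be illustrated with a toy XOR keystream cipher. This sketch is purely illustrative: the scheme above additionally uses Julia sets and targets lossless compression, and a bare logistic-map keystream is not secure on its own.

```python
def logistic_keystream(x0, r=3.99, n=16, burn=100):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x),
    keyed by the initial condition x0 in (0, 1)."""
    x = x0
    for _ in range(burn):        # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantise state to a byte
    return bytes(out)

def xor_encrypt(data, key_x0):
    # XOR with the chaotic keystream; applying it twice decrypts.
    ks = logistic_keystream(key_x0, n=len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

The sensitivity of the map to its initial condition is what makes x0 act as a secret key: a tiny change in x0 yields an unrelated keystream after the burn-in iterations.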
Perplexed Bayes Classifier based Secure & Intelligent Approach for Aspect Level Sentiment Analysis
by Sumit Kumar Yadav, Devendra K. Tayal, Shiv Naresh Shivhare
Abstract: In this work, we use machine learning methods to classify review documents. We use two methods: the Naive Bayes classifier and the Perplexed Bayes classifier. First, we briefly introduce the Naive Bayes classifier, its shortcomings, and the Perplexed Bayes classifier. We then train the classifiers on a small training set and use a test set of reviews having dependency among their features. We show how the Naive Bayes classifier fails to classify such reviews, and that the Perplexed Bayes classifier can classify the given test set despite the dependency among its features.
Keywords: sentiment-analysis; machine-learning techniques; naïve bayes; perplexed bayes; aspect level sentiment analysis.
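For reference, the baseline Naive Bayes classifier discussed in the abstract can be sketched in a few lines (multinomial model with Laplace smoothing). The Perplexed Bayes variant, which relaxes the feature-independence assumption, is the paper's contribution and is not reproduced here.

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.priors = Counter(labels)                   # class counts
        self.word_counts = {c: Counter() for c in self.priors}
        self.totals = Counter()                         # words per class
        self.vocab = set()
        for doc, c in zip(docs, labels):
            for w in doc.split():
                self.word_counts[c][w] += 1
                self.totals[c] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        n_docs = sum(self.priors.values())
        v = len(self.vocab)
        scores = {}
        for c in self.priors:
            # log P(c) + sum_w log P(w | c): words assumed independent
            # given the class -- exactly the assumption that fails for
            # reviews with dependent features.
            lp = math.log(self.priors[c] / n_docs)
            for w in doc.split():
                lp += math.log((self.word_counts[c][w] + 1) /
                               (self.totals[c] + v))
            scores[c] = lp
        return max(sorted(scores), key=scores.get)
```

The product-of-independent-likelihoods in `predict` is the point of failure the paper targets when review features depend on each other.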
An Efficient Crypto-compression Scheme for Medical Images by Selective Encryption using DCT
by Med Karim Abdmouleh, Hedi Amri, Ali Khalfallah, Med Salim Bouhlel
Abstract: Nowadays, modern communication inevitably uses computer networks. The images transmitted across these networks are special because of their large amount of information. The use of information technology in the medical field has generated many applications (especially telemedicine) in which the exchange of medical information remains the foundation of success. The transmission of these images raises a large number of unresolved problems. The efficiency of a transmission network depends, on the one hand, on the degree of security and, on the other hand, on the times of transmission and archiving. These requirements can be satisfied by encryption and compression. This work presents a method of partial or selective encryption for medical images. It is based on the encryption of some quantized Discrete Cosine Transform (DCT) coefficients in the low and high frequencies. The results of several experiments show that the proposed scheme provides a significant reduction of the processing time during encryption and decryption, without compromising the high compression rate of the compression algorithm.
Keywords: Crypto-compression; Medical image; Telemedicine; DCT; RSA.
Hybrid Approach to Enhance Contrast of Image for Forensic Investigation Using Segmented Histogram
by Sachin Dube, Kavita Sharma
Abstract: Digital images can be used in the detection of various crimes, ranging from active to passive attack applications. To suit a particular attack application an image needs to be enhanced, and it should have good quality in general for forensic investigation. For normal investigation use, a vibrant, vivid and eye-pleasing image is desired. In this paper, various existing methods and their drawbacks are examined. This information is then used to develop an approach for contained enhancement that retains the natural look of the image while enhancing its quality enough to make it usable as evidence. The existence of a spike in the histogram can result in over-enhancement of the image; a spike is created when a large number of pixels share a small set of intensities. The ten most commonly used standard images are used for performance comparison. The proposed method outperforms the compared methods in terms of PSNR and AMBE values, while keeping entropy and standard deviation almost the same as the input image.
Keywords: Image Forensic; Segmented Histogram; Image Contrast Enhancement.
Use of A Light Weight Secure Image Encryption Scheme Based on Chaos & DNA Computing for Encrypted Audio Watermarking
by Bhaskar Mondal, Tarni Mandal, Tanupriya Choudhury
Abstract: Watermarking is one of the best ways to authenticate the ownership or source of data by embedding copyright information into an image, audio or video. At the same time, to conceal it from unintended users, the watermark needs to be encrypted before embedding. This paper presents an effective use of an encryption algorithm in audio watermarking. The watermark data is initially encrypted with "A Light Weight Secure Image Encryption Scheme Based on Chaos and DNA Computing". In the second part, the encrypted data is embedded into an audio signal using the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The test results are promising, and the watermarked audio does not lose its quality.
Keywords: Audio watermarking; cryptography; deoxyribonucleic acid (DNA); watermark encryption.
Malware Intelligence: Beyond Malware Analysis
by Ekta Gandotra, Divya Bansal, Sanjeev Sofat
Abstract: A number of malware samples are available online, but little research has attempted to thoroughly analyze them for insights or intelligence about their behavioral trends, which can further be used to issue early warnings about future threats. In this paper, we have performed an in-depth analysis of about 0.1 million historical malware specimens in a sandbox environment to generate their attributes and behavior. Afterwards, intelligent information is mined using statistical analysis to study their behavioral trends and capabilities. The information so obtained can help to gain insight into the future measures that malware authors may use to design their programs. The paper also highlights the challenges evolving out of these trends, which provide future research directions for malware analysts and security researchers. Further, the insights generated can be shared with security experts, CERTs (Computer Emergency Response Teams) or other stakeholders so that they can issue preventive measures for future threats, or at least minimize the risks posed by them. Furthermore, this type of analysis helps the research community in selecting the parameters/factors for building faster and improved techniques for detecting unknown malicious programs.
Keywords: Malware analysis; statistical analysis; security intelligence; behavioral trends; prediction.
Trust evaluation of websites: A comprehensive study
by Himani Singal, Shruti Kohli
Abstract: People rely heavily on the internet to fulfill even the most minuscule of their needs. According to a survey, 41% of time spent on the web is for finding information from search engines or reading information. This is largely because such information is easily accessible, cost effective and perceived to be of high value. But this perceived high-value information can prove fatal if consumed without any authoritative checks, especially when related to issues like health. A template is needed to measure the trustworthiness of such information. This paper explores a novel approach to quantify trust in such information-led websites. Analytical data is collected for various informational websites using similarweb.com, and trust is modeled for these websites using human behavior as an aggregate. Analytical data is believed to capture the actual behavior of every visitor visiting a website for information, thus making the study reliable and dependable. Results have been compared with other accepted studies and have been found to be encouraging.
Keywords: Content Trust; Health Information; Medical Trust; Online Interaction; User Satisfaction; Web Trust.
An Epidemic Model for Security and Performance of Wireless Sensor Networks
by Rudra Pratap Ojha, Kavita Sharma, Pramod Kumar Srivastava, Goutam Sanyal
Abstract: Wireless sensor networks have inherent constraints that make security a crucial issue. Worm transmission starts from a single node and spreads through the entire network via wireless communication, a process that can lead to the failure of the whole network. The proposed mathematical model is based on epidemic theory, in which different classes of nodes are considered; we examine the effect of each class on the network and develop a control mechanism to prevent worm transmission in sensor networks. We discuss the role of the communication radius in the stability of the network and examine the proposed model using the stability theory of differential equations. We determine the basic reproduction number and relate it to the communication radius, analyze how the proposed model improves the efficiency of the network in terms of stability and energy efficiency, and validate the model through extensive simulation results.
Keywords: Epidemic model; Wireless Sensor Network; Equilibrium; Stability; Communication Radius; Basic reproduction number.
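The epidemic-theoretic skeleton behind such models is the classic SIR system, with the basic reproduction number R0 = β/γ deciding whether an outbreak grows or dies out. This forward-Euler sketch is illustrative only: the paper's model adds node classes and a communication-radius dependence that are not reproduced here.

```python
def simulate_sir(beta, gamma, s0, i0, r0_frac=0.0, dt=0.01, steps=10000):
    """Forward-Euler integration of the classic SIR system
    dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
    where S, I, R are fractions of the node population."""
    s, i, r = s0, i0, r0_frac
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

def basic_reproduction_number(beta, gamma):
    # R0 > 1: the worm spreads; R0 < 1: the outbreak dies out.
    return beta / gamma
```

In the paper, β effectively grows with the communication radius (more neighbours per node), which is why the radius controls whether the worm-free equilibrium is stable.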
Secure Handoff Technique with Reduced Authentication Delay in Wireless Mesh Network
by Geetanjali Rathee, Hemraj Saini
Abstract: The aim of this manuscript is to propose a secure handoff procedure by generating tickets for the mesh clients, which are divided into different zones of mesh routers according to their communication range. An authentication server surveys the entire network after a specific interval of time and is responsible for generating and updating the corresponding tickets of clients according to their zonal routers' range. Whenever a mesh client enters the range of another domain, to access the services of foreign mesh routers the roaming client has to prove its authenticity to the corresponding zonal router. Each mesh router stores the ticket of its zonal mesh client issued by the authentication server and validates the roaming client by matching the ticket. The proposed mechanism reduces storage overhead and security threats at the mesh client, as all the tickets are stored in the authentication server database and are issued upon request. The proposed technique is evaluated over authentication delay and different probabilistic scenarios of authentication, and is shown to be legitimate by an empirical study against the reported literature.
Keywords: Wireless Mesh Network; secure handoff; authentication; security threats; network delay; storage overhead.
A Secure, Fast Insert and Efficient Search Order Preserving Encryption Scheme for Outsourced Databases
by K. Srinivasa Reddy, Ramachandram S
Abstract: Order Preserving Encryption (OPE) schemes have been studied to a great extent in the cryptography literature because of their potential application to database design. A scheme called mutable order preserving encoding (mOPE) was the first to achieve IND-OCPA (Indistinguishability under Ordered Chosen Plaintext Attack) security. However, even the mOPE scheme potentially leaks the distribution of repeated ciphertexts and is less efficient. In this paper, a new scheme called Secure and Cost-efficient Order Preserving Encryption (SCOPE) is introduced, which is considerably more secure and efficient than mOPE. A new, stronger security notion called Indistinguishability under Ordered Chosen Repeated Plaintext Distribution Attack (IND-OCRPDA) is proposed, and we show that the SCOPE scheme is IND-OCRPDA secure. Finally, the experimental results show that SCOPE achieves good performance in the context of an encrypted database and has a reasonable overhead which is 3.5
Keywords: efficiency; functionality; order preserving encryption; trusted proxy; security.
Security Model against worms attack in Wireless Sensor Network
by Rudra Pratap Ojha, Pramod Kumar Srivastava, Goutam Sanyal
Abstract: The wireless sensor network is an innovative category of communication network, which has earned universal attention due to its great potential for application in various areas. It is also an insecure system due to worm attacks. In order to efficaciously defend wireless sensor networks against worms, we have proposed an epidemic model with two latent periods and vaccination. We have formulated the ODEs of the model, studied the dynamic behavior of worm propagation, and designed a model to secure the system from worm attack. The model has been simulated in MATLAB. In this study, we determine the basic reproduction number to study the dynamic performance of worms in the wireless sensor network. The global stability of the worm-free equilibrium has been established using a Lyapunov function, while the simulation results helped validate the theoretical analysis.
Keywords: Security; Epidemic model; Wireless Sensor Network; Latent period; Basic reproduction number.
Untraceable privacy-preserving authentication protocol for RFID tag using salted hash algorithm
by Pinaki Ghosh, Mahesh TR
Abstract: Radio Frequency Identification (RFID) has now become a core technology in the Internet of Things (IoT) and has gained the attention of industry and academia in tremendous ways. Due to their openness, RFID tags suffer from potential security threats, one of the major ones being privacy leakage during the authentication process. A strong Privacy Preserving Authentication (PPA) protocol is therefore a constant need for such systems. In this paper we propose a salted secure-hash-based mutual authentication protocol as a solution. The proposed protocol is designed to send a random response from the tag to the server without disclosing the tag's identity information to intermediate entities such as readers. It also updates secret keys without transmitting the secret values.
Keywords: RFID; privacy; untraceability; tag authentication; salted hash; keyed hash algorithm; mutual authentication.
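A salted-hash challenge-response round of the kind described can be sketched as follows. This only illustrates how a salted hash lets the server identify a tag without the tag ever transmitting its identity; the proposed protocol's exact message flow and key-update step are not reproduced, and SHA-256 stands in for whatever keyed hash the protocol actually uses.

```python
import hashlib

def tag_response(secret, challenge, salt):
    """Tag side: compute H(secret || challenge || salt). Only the hash,
    the challenge and the random salt cross the air interface."""
    return hashlib.sha256(secret + challenge + salt).hexdigest()

def server_verify(db, challenge, salt, response):
    """Server side: identify the tag by recomputing the salted hash over
    the stored secrets -- the tag id is never sent in the clear."""
    for tag_id, secret in db.items():
        if hashlib.sha256(secret + challenge + salt).hexdigest() == response:
            return tag_id
    return None
```

Because the salt changes every round, an eavesdropper who records a response cannot replay it, and two responses from the same tag are unlinkable — the untraceability property the title refers to.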
Comparison of different RSA Variants
by Seema Verma, Manoj Kumar
Abstract: RSA is the first public key algorithm used for encryption and decryption. Its simplicity and complexity lie in the factoring of a very large composite integer. It is still popular even after thirty-nine years since its origin. In this long journey, RSA has been studied many times and many security loopholes have been found. To remove these loopholes, researchers designed many variants of RSA. This work studies the different RSA variants that are popular in the literature, including an analysis of the performance and security of each variant.
Keywords: RSA; Public key; Cryptography; Encryption; Complexity; Security; Comparison.
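For readers new to the base scheme that the variants modify, textbook RSA looks like this. The sketch uses toy primes and no padding, so it is insecure in practice and shown only to fix notation.

```python
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rsa_keygen(p, q, e=65537):
    # n = p*q is hard to factor; d inverts e modulo phi(n).
    n, phi = p * q, (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e must be coprime to phi(n)")
    return (n, e), (n, d % phi)

def rsa_encrypt(public_key, m):
    n, e = public_key
    return pow(m, e, n)          # c = m^e mod n

def rsa_decrypt(private_key, c):
    n, d = private_key
    return pow(c, d, n)          # m = c^d mod n
```

The variants surveyed in the paper typically change the key generation step (e.g. the shape of d or the modulus) while keeping this encrypt/decrypt structure.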
GASER: Genetic Algorithm based Secure and Energy aware Routing protocol for Sparse Mobile Ad Hoc Networks
by Deepika Kukreja, Deepak Kumar Sharma, S.K. Dhurandher, B. V. R. Reddy
Abstract: Sparse Mobile Ad hoc Networks are characterized by sparse node deployment and longer network partitions. Nodes in an ad hoc network are mobile, have limited energy and are deployed in areas where connections between the nodes may be inconsistent. In a number of scenarios it is likely that the route between a source-destination pair does not exist for long durations of time. Routing in such a network, where node deployment is sparse and connections between the nodes occur less frequently, is a challenging task. In this paper, a nature-inspired Genetic Algorithm based Secure and Energy aware Routing (GASER) protocol for Sparse Mobile Ad Hoc Networks is proposed. Black hole and gray hole attacks are two security threats that weaken Mobile Ad Hoc Networks (MANETs) by inducing packet forwarding misbehavior in the network. By incorporating a genetic algorithm with other methods, the GASER protocol selects the best path for routing packets between source and destination in such a way that the selected path is shortest. Nodes on the selected path have the highest message forwarding probability among the nodes of the network and enough energy to receive and then forward messages. GASER avoids the nodes inducing gray hole/black hole attacks in the network, as it selects the next hop with the higher message forwarding probability, thus making the routing protocol secure. Simulation results show that GASER outperforms PROPHET, Epidemic and Spray and Wait in terms of packet delivery ratio, average residual energy, overhead ratio and number of dead nodes.
Keywords: Sparse Mobile Ad Hoc Networks; Genetic algorithm; Black hole attack; Gray hole attack; Energy aware routing; Secure routing.
Special Issue on: Soft Computing Application and Reviews
Performance Comparison of Bat Search and Cuckoo Search Using Software Artifact Infrastructure Repository and Regression Testing
by Arun Prakash Agrawal, Arvinder Kaur
Abstract: Software testing is essential for confidence in the quality and reliability of software products. Regression testing is conducted to ensure that no new errors have been introduced into the software as a result of maintenance activity. Re-executing all the existing test cases is one approach to gaining this confidence; it is, however, highly expensive and time consuming. Previous research has revealed that nature-inspired algorithms have vast application in this area. Bat Search and Cuckoo Search are two such powerful nature-inspired metaheuristic algorithms. In this paper, the Bat Search algorithm is tested against the Cuckoo Search algorithm on the regression test case selection problem. Two factors are considered for the comparison: the number of faults covered and the computational time. Extensive experiments have been conducted on objects adopted from the benchmark Software Artifact Infrastructure Repository. Rigorous statistical tests lead to the conclusion that Cuckoo Search is marginally advantageous over the Bat Search algorithm with respect to the performance parameters. The underlying motivation for this research is to create awareness among researchers of the computational capability of both algorithms. We believe that the results reported in this paper will enable researchers to develop more powerful algorithms for testing in the near future.
Keywords: Regression Testing; Test Effort Optimization; Metaheuristics; Bat Search Algorithm; Cuckoo Search Optimization.
On the Convergence and Optimality of the Firefly Algorithm for Opportunistic Spectrum Access
by Lakshmana Rao Kalabarige, Shanti Chilukuri
Abstract: Meta-heuristic algorithms have been proven to be efficient for engineering optimization. However, the convergence and accuracy of such algorithms depend on the objective function and also on several choices made during algorithm design. In this paper, we focus on the firefly algorithm for optimal channel allocation in cognitive radio networks. We study the effect of various probability distributions, including the Lévy alpha-stable distribution, for randomization of firefly movement. We also explore various functions for converting firefly positions from the continuous space to the discrete space, as is necessary in the spectrum allocation problem. Simulation results show that in most cases, Lévy flight gives better convergence time and results for common optimization problems such as maximizing the overall channel utilization, maximizing the channel allocation for the bottleneck user and maximizing proportional fairness. We also note that no single discretization function gives both good convergence and optimality.
Keywords: metaheuristic algorithms; Lévy flight; spectrum allocation; cognitive radio networks.
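Lévy-flight randomization of the kind studied above is commonly generated with Mantegna's algorithm; a small sketch follows (the β = 1.5 exponent is the usual default in firefly and cuckoo implementations, not necessarily the paper's setting):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One heavy-tailed step via Mantegna's algorithm:
    step = u / |v|**(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta *
                2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In the firefly update this step scales the random component of a firefly's move, so most moves stay small (local search) while occasional large jumps help escape local optima — the behaviour behind the improved convergence reported above.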
Meta-Heuristic Algorithm to Generate Optimized Test Cases for Aspect-Oriented Software Systems
by Abhishek Singhal, Abhay Bansal, Avadhesh Kumar
Abstract: Optimized test case generation is challenging for the software industry. The test-all approach is commonly used in industry, but it is not effective in terms of computational cost. Literature exists showing the applicability of meta-heuristic algorithms to address these issues, but the results obtained are not as good as expected, so scope for more optimized approaches still persists. In this paper, we propose an Artificial Bee Colony based test case optimization approach for aspect-oriented software systems. Experiments are conducted using six benchmark problems, which validate the effectiveness of the proposed approach. The results show a reduction of 20-40% in the number of test cases and more than 90% code coverage in the optimized test suite, which demonstrates the superiority of the proposed approach. This clearly indicates that the computational time and complexity of the adopted approach show remarkable improvement over GA.
Keywords: Aspect-oriented; artificial bee colony algorithm; genetic algorithm; Meta-heuristic; optimization; test cases; test case generation.
Fuzzy system for classification of microarray data using a hybrid ant stem optimisation algorithm
by S. Arul Antran Vijay, Pugalendhi GaneshKumar
Abstract: Microarray data analysis provides an effective methodology for the diagnosis of diseases and cancers. Although much research has been performed on applying several techniques to microarray data classification over the past years, it has been shown that conventional machine learning and statistical techniques have intrinsic drawbacks in achieving accurate and robust classifications. This paper presents a fuzzy-based classification system to analyse microarray data. The mutual information approach is used to extract the most informative genes from the microarray dataset. In the design of the fuzzy expert system, a novel hybrid ant stem (HAS) algorithm is used to extract the if-then rules, using membership functions, from the given diabetes microarray data. The performance of the proposed technique is evaluated using two diabetes microarray datasets. The results prove that the proposed HAS algorithm produces a highly accurate fuzzy expert system.
Keywords: microarray data; fuzzy expert system; ant colony optimisation; stem cell optimisation; mutual information.
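The mutual-information gene filter mentioned above can be sketched as follows, assuming discretised expression values; the function names and the toy data in the test are illustrative, not the paper's.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def top_genes(expr, labels, k):
    """Rank genes (columns of expr) by mutual information with the labels."""
    scores = {g: mutual_information([row[g] for row in expr], labels)
              for g in range(len(expr[0]))}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A gene whose discretised expression matches the class labels scores 1 bit, while a constant gene scores 0, so the informative gene is selected first.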
Flower Pollination Based K-Means Algorithm for Medical Image Compression
by G. Vimala Kumari, G. Sasibhushana Rao, B. Prabhakara Rao
Abstract: Image compression plays a significant role in digital image storage and transmission, because storage space and bandwidth are limited, and it benefits all multimedia applications. Magnetic Resonance Imaging (MRI) of the human body produces images of huge size that must be compressed, yet the medical field demands high image quality for better diagnosis of disease. Intelligent systems, which try to simulate human intelligence and are applied to problems in engineering, industry, medicine and education, make decisions using several inputs; however, their search process is enormous and the convergence time depends on the algorithm structure. In this paper, metaheuristic algorithms are used for the first time to obtain near-optimum solutions. This paper introduces Flower Pollination Algorithm (FPA) based vector quantization for better image compression with better reconstructed image quality. The performance of the proposed method is evaluated using Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and a fitness function.
Keywords: image compression; particle swarm optimization; quantum particle swarm optimization; flower pollination algorithm.
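Of the metrics named in the abstract, PSNR and MSE have standard definitions that can be sketched directly for 8-bit images flattened to lists; this is generic, not the authors' code.

```python
from math import log10

def mse(original, reconstructed):
    """Mean squared error between two equally sized images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    e = mse(original, reconstructed)
    return float('inf') if e == 0 else 10 * log10(peak ** 2 / e)
```

A reconstruction off by one grey level everywhere gives MSE 1 and PSNR about 48.13 dB, which is why small codebook improvements show up clearly in this metric.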
Image compression based on adaptive image thresholding by maximising Shannon or fuzzy entropy using teaching learning based optimisation
by Karri Chiranjeevi, Umaranjan Jena, M.V. Nageswara Rao
Abstract: In this paper, teaching learning based optimisation (TLBO) is used to maximise Shannon entropy or fuzzy entropy for effective image thresholding, which leads to better image compression with a higher peak signal to noise ratio (PSNR). Conventional thresholding methods are efficient for bi-level thresholding; however, they become computationally expensive when extended to multilevel thresholding, since they exhaustively search for the optimal thresholds that optimise the objective functions. To overcome this drawback, a TLBO-based multilevel image thresholding is proposed that maximises Shannon entropy or fuzzy entropy; the results are compared with differential evolution, particle swarm optimisation and the bat algorithm, and prove better in terms of standard deviation, PSNR, weighted PSNR and reconstructed image quality. The performance of the proposed algorithm is found to be better with fuzzy entropy than with Shannon entropy.
Keywords: image compression; image thresholding; Shannon entropy; fuzzy entropy; bat algorithm; teaching learning based optimisation.
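The Shannon-entropy objective that TLBO maximises, and the exhaustive bi-level search it is meant to replace, can be sketched as follows (the histogram and names are illustrative; Kapur's criterion is assumed as the Shannon-entropy formulation).

```python
from math import log

def shannon_objective(hist, t):
    """Kapur's criterion: sum of the Shannon entropies of the two
    classes produced by splitting the grey-level histogram at t."""
    def entropy(lo, hi):
        w = sum(hist[lo:hi])
        if w == 0:
            return 0.0
        return -sum((h / w) * log(h / w) for h in hist[lo:hi] if h)
    return entropy(0, t) + entropy(t, len(hist))

def best_threshold(hist):
    """Exhaustive bi-level search; metaheuristics such as TLBO replace
    this scan when several thresholds must be chosen jointly."""
    return max(range(1, len(hist)), key=lambda t: shannon_objective(hist, t))
```

For a bimodal histogram the maximiser falls in the valley between the modes; the exhaustive scan is cheap here but grows combinatorially with the number of thresholds, which motivates the metaheuristic.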
An efficient and optimized approach for secure file sharing in Cloud Computing
by Neha Agarwal, Ajay Rana, J.P. Pandey
Abstract: Cloud computing is a new paradigm that refers to delivering services over the internet to customers on demand, releasing them from worries about infrastructure requirements. To maintain efficiency and cut the cost of customers' investment, cloud service providers put their data on public clouds, and so arises the major concern of maintaining the security of the outsourced data in the cloud. To address this issue, we propose a hybrid encryption algorithm that combines symmetric and asymmetric (public key) encryption algorithms. This hybrid encryption algorithm takes advantage of the fast performance of symmetric encryption and the high security of asymmetric encryption. We also introduce the concept of proxy re-encryption to ensure the security of outsourced data against a colluding cloud and unauthorized users. Our results prove that the proposed algorithm is more efficient and secure when sharing files with other users on the cloud.
Keywords: cloud computing; cryptography; proxy re-encryption; outsourced data security; privacy; genetic algorithm.
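The hybrid structure described (a fast symmetric cipher for the file data, a public-key wrap for the session key) can be illustrated with a deliberately insecure toy: a SHA-256 keystream stands in for the symmetric cipher and textbook RSA with tiny primes stands in for the asymmetric part. None of this reflects the authors' actual algorithm; it only shows the key-wrapping pattern.

```python
from hashlib import sha256

# --- toy symmetric cipher: SHA-256 counter-mode keystream XOR ---
def stream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i, b in enumerate(data):
        block = sha256(key + (i // 32).to_bytes(8, 'big')).digest()
        out.append(b ^ block[i % 32])
    return bytes(out)

# --- toy textbook RSA with tiny primes (never use in practice) ---
P, Q, E = 61, 53, 17
N = P * Q                                  # 3233
D = pow(E, -1, (P - 1) * (Q - 1))          # modular inverse, Python 3.8+

def rsa_encrypt(m: int) -> int: return pow(m, E, N)
def rsa_decrypt(c: int) -> int: return pow(c, D, N)

def hybrid_encrypt(session_key: bytes, plaintext: bytes):
    """Encrypt data with the fast symmetric cipher; wrap the key with RSA."""
    wrapped = [rsa_encrypt(b) for b in session_key]   # per-byte wrap (toy)
    return wrapped, stream_xor(session_key, plaintext)

def hybrid_decrypt(wrapped, ciphertext):
    """Unwrap the session key with RSA, then undo the XOR keystream."""
    key = bytes(rsa_decrypt(c) for c in wrapped)
    return stream_xor(key, ciphertext)
```

The recipient only needs the RSA private exponent to recover the session key, so the bulk of the file never touches the slow asymmetric primitive, which is the efficiency argument the abstract makes.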
Development of ANFIS based Algorithm for MPPT Controller for Standalone Photovoltaic System
by Astitva Kumar, M. Rizwan, Uma Nangia
Abstract: The maximum power point tracking (MPPT) controller is an integral part of the efficient implementation of a photovoltaic (PV) system. In this paper, a new adaptive neuro-fuzzy inference system (ANFIS) based algorithm for MPPT has been developed and implemented to track the maximum power point in a standalone PV system. The work proposes to control the switching of the DC-DC boost converter using the ANFIS approach, replacing the conventional PI controller used to detect the error signal. The results of the proposed approach are compared with the incremental conductance approach under constant and varying irradiance and temperature conditions. With the proposed approach, the percentage error, rise time and voltage fluctuations are improved compared to the incremental conductance method. Further, the proposed adaptive controller effectively tracks the MPP considering all the major non-linear variables, and it improves the rise time and steady-state characteristics of the PV system.
Keywords: photovoltaic systems; adaptive neuro-fuzzy inference system; maximum power point tracking; control algorithms.
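The incremental conductance baseline against which the ANFIS controller is compared exploits the fact that dP/dV = 0 at the MPP, i.e. dI/dV = -I/V. A sketch on a made-up PV curve follows; the curve, step size and iteration count are illustrative, not the paper's model.

```python
def pv_current(v, i_sc=8.0, v_oc=40.0):
    """Toy monotone PV I-V curve (illustrative; not a real diode model)."""
    return max(0.0, i_sc * (1 - (v / v_oc) ** 5))

def inc_cond_step(v, v_prev, i, i_prev, step=0.5):
    """One incremental-conductance update: at the MPP, dI/dV == -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        return v + (step if di > 0 else -step if di < 0 else 0)
    if di / dv > -i / v:       # left of the MPP: raise the voltage
        return v + step
    if di / dv < -i / v:       # right of the MPP: lower the voltage
        return v - step
    return v                    # at the MPP

def track(v0=10.0, iters=200):
    """Iterate the update from a starting voltage until it settles."""
    v_prev, i_prev = v0, pv_current(v0)
    v = v0 + 0.5
    for _ in range(iters):
        i = pv_current(v)
        v_new = inc_cond_step(v, v_prev, i, i_prev)
        v_prev, i_prev, v = v, i, v_new
    return v
```

With a fixed step the tracker oscillates around the true MPP (about 27.9 V for this toy curve), which is exactly the steady-state fluctuation the abstract says the adaptive ANFIS controller reduces.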