International Journal of Advanced Intelligence Paradigms (279 papers in press)
Comparing Product Features of Motor Cycles - A Multi-group Analysis
by Alagirisamy Kamatchi Subbiah Sukumaran
Abstract: The study compares the product features of motor cycles based on consumer preferences. Inman (2001) felt the need for creating an inventory of features that can be used to elicit consumer choice. The literature dealing with satisfaction from product features is mostly based on social and demographic characteristics; few studies analyze the product features themselves based on consumer choice. This study is unique in applying multi-group analysis, in addition to regression, to identify the distinctive product features of two leading motor cycle brands. The results of the study will be useful to two-wheeler manufacturers and marketers in identifying the distinctive product features of motor cycles, formulating marketing strategy and developing new models of motor cycles.
Keywords: motor cycles; product features; multi-group analysis.
Natural Language Processing for Hybrid Knowledge Representation
by Poonam Tanwar, T.V. Prasad, Kamlesh Dutta
Abstract: The ever-growing volume of knowledge has increased the demand for tools for organizing, processing and extracting it. Organizing different types of knowledge and inferring over it intelligently is a central task in knowledge engineering (KE), natural language processing (NLP), information retrieval (IR) and knowledge management. Interactive access to information is a key factor in the development of any society or nation in the current era. The information sources available today mainly benefit people who are familiar with the English language; the pressing question is what to do for those who are not familiar with English, in India and abroad alike. One solution to this problem is graphical visualization with easy and fast access to the information available from any source. This paper presents a system that provides a user-friendly interface for knowledge gathering, discovery and retrieval to all users, whatever their level of English.
Keywords: Knowledge Representation (KR); Natural Language Processing (NLP); Semantic net; Script; Classification.
Marker-Based Augmented Reality Interface with Gesture Interaction to Access Remote File System
by Shriram K Vasudevan, Naveen T, Padminy KV, Shruthi Krithika J, Geethan P
Abstract: Augmented Reality (AR) is a technology that enriches the real world with digital information. An AR interface dynamically superimposes digital objects or interactive computer graphics onto the real world. Since its introduction, the technology has been capable of presenting possibilities that other technologies have found challenging to offer and meet. Nevertheless, AR environments have largely been limited to simple browsing or viewing of virtual information registered to the real world. In the coming years, AR will change the way individuals view the world. In this paper, we design a system in which remote files and directories are augmented in real time over the camera view of a smartphone, tablet or PC. Users can access the remote file system and perform operations using gestures. The system provides smooth and continuous interaction between the user and the digital space using only hand gestures, without special-purpose devices such as a mouse or a joystick.
Keywords: Marker-Based Augmented Reality; AR Interface; Leap Motion Controller; Gesture Interaction; Remote System Access.
An Extension of the Ontology Web Language with Multi-Viewpoints and Probabilistic Reasoning
by Mounir Hemam
Abstract: A real-world entity is unique, but it can have several representations due to various interests or perspectives. In this paper, we are interested in the problem of multi-representation in ontologies. We believe the most appropriate approach is to use the notion of viewpoint to build what we call a multi-viewpoints ontology. This type of ontology confers on the same universe of discourse several partial descriptions, each relative to a particular viewpoint. Moreover, these partial descriptions share, at the global level, probabilistic ontological elements representing uncertain knowledge across the various viewpoints. Treating this kind of information requires new approaches to knowledge representation and reasoning on the web, as existing Semantic Web languages are based on classical logic, which is known to be inadequate for representing uncertainty. Our goal is therefore to propose an ontology web language that extends OWL with viewpoints and probabilistic uncertainty, allowing multi-viewpoints and probabilistic reasoning with OWL ontologies.
Keywords: knowledge engineering; ontology; semantic web; multiple viewpoints; probabilistic reasoning.
Dynamic vs Static agent ordering in Distributed Arc Consistency
by Saida Hammoujan, Imade Benelallam, El Houssine Bouyakhf
Abstract: Recently, many approaches have been proposed for solving Distributed Constraint Satisfaction Problems (DisCSPs). Among them, Asynchronous Maintenance of Arc Consistency (AMAC) has proven to be an efficient algorithm: it performs an asynchronous arc-consistency process during sequential search. In this paper, we propose two new approaches based on AMAC. Instead of using a lexicographic ordering as a static agent/variable ordering, we present two asynchronous algorithms that exploit the structure of DisCSPs through powerful agent/variable ordering heuristics and enforce arc consistency during resolution. The first algorithm, AMAC_DO, uses dynamic variable ordering heuristics, which are very useful in centralized CSPs. The second, ILAAC, splits the problem into several sub-problems using the pseudo-tree structure of the constraint graph. We offer an analysis and interpretation of an experimental evaluation of the proposed approaches. The results clearly show the usefulness of combining the arc-consistency process with variable ordering heuristics on random problems, in terms of both communication cost and computation effort.
Keywords: Distributed Constraint Satisfaction Problems; Arc Consistency; Variable Ordering Heuristics; Pseudo-tree.
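The arc-consistency process the abstract builds on can be illustrated, in simplified centralized form, by the classic AC-3 procedure. This sketch is illustrative only; the paper's AMAC-based algorithms run this kind of revision asynchronously across agents, which is not shown here.

```python
from collections import deque

def revise(domains, constraint, xi, xj):
    """Drop values of xi that have no supporting value in xj."""
    removed = False
    for v in list(domains[xi]):
        if not any(constraint(v, w) for w in domains[xj]):
            domains[xi].remove(v)
            removed = True
    return removed

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: (xi, xj) -> predicate."""
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints[(xi, xj)], xi, xj):
            if not domains[xi]:
                return False               # domain wipe-out: inconsistent
            for (a, b) in constraints:
                if b == xi and a != xj:
                    queue.append((a, b))   # re-check arcs pointing at xi
    return True
```

For example, with domains {1,2,3} for x and y and the constraint x < y, AC-3 prunes x to {1,2} and y to {2,3}.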
An Effective e-Learning system through learners' scaffolding
by Suman Bhattacharya, Sankhayan Chowdhury, Samir Roy
Abstract: Scaffolding is an age-old technique of teacher intervention to augment and quicken the learner's progress. While a human teacher can interact with the learner in contact mode and apply his or her intelligence to assess the need for such intervention, an e-learning system lacks this capacity. This paper presents a scaffolding system for e-learners, targeted at school-level children. The system is structured around a finite state machine that models the cognitive state of the learner; learning experiences are also taken into consideration. The system was tested on a large number of school-going children. Experimental results indicate that under this system, students achieve their learning objectives to a greater extent and with a better experience.
Keywords: Intelligent Tutoring Systems; Interactive Learning Environments; Pedagogical issues; Teaching/Learning strategies; scaffolding; e-learning; finite state machine; learning experience.
An empirical study of feature selection for classification using genetic algorithm
by Saptarsi Goswami, Amlan Chakrabarti, Basabi Chakraborty
Abstract: Feature selection is one of the most important preprocessing steps for a data mining, pattern recognition or machine learning problem. Finding an optimal subset among all possible feature subsets is an NP-complete problem, and evolutionary algorithms are one approach to tackling it. The genetic algorithm (GA) is a variant of evolutionary processes based on selection, mutation and reproduction, with selection following the survival-of-the-fittest principle. An optimal feature subset should have the highest association with the target variable and low inter-feature association. Most approaches in the literature combine these objectives into a single numeric measure. In this paper, by contrast, finding an optimal feature subset is formulated as a multi-objective problem; the concept of redundancy is refined with a threshold value, and an objective maximizing the entropy of individual attributes is added in one of the multi-objective setups. Experiments on thirty-three publicly available datasets were conducted with three multi-objective and two single-objective settings. Analysis of the results reveals better classification accuracy for the multi-objective methods than for the single-objective ones: a 12% improvement on average, which improves by a further 2-3% after refining the concept of redundancy (mIRMR) using probabilistic thresholding and then adding entropy maximization (mIRMRE) as an objective. The performance improvement is statistically significant according to pairwise t-tests and Friedman's test.
Keywords: Feature Selection; Classification; Genetic Algorithm (GA); Multi-objective; Filter.
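A rough, hypothetical sketch of the kind of GA-based feature selection the abstract describes: candidate subsets are bit masks, and the score rewards relevance to the target while penalizing inter-feature redundancy. Simple averaged scores stand in for the paper's information-theoretic mIRMR measures, and the GA operators are generic, not the paper's exact setup.

```python
import random

def fitness(mask, relevance, redundancy):
    """Mean relevance of chosen features minus mean pairwise redundancy."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return float('-inf')
    rel = sum(relevance[i] for i in chosen) / len(chosen)
    red = 0.0
    if len(chosen) > 1:
        pairs = [(i, j) for i in chosen for j in chosen if i < j]
        red = sum(redundancy[i][j] for i, j in pairs) / len(pairs)
    return rel - red

def ga_select(relevance, redundancy, pop=20, gens=50, seed=0):
    rng = random.Random(seed)
    n = len(relevance)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda m: fitness(m, relevance, redundancy),
                        reverse=True)
        parents = scored[:pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # bit-flip mutation
                k = rng.randrange(n)
                child[k] = 1 - child[k]
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, relevance, redundancy))
```

With two highly redundant relevant features, the score prefers keeping only one of them, which is the behaviour the redundancy objective is meant to enforce.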
Verification on Factors of Information Technology Acceptance for Construction Users based on Davis's Technology Acceptance Model: Focused on the Application Case of IT in Construction
by Eun Soo Park, Tai Sik Lee, Min Seo Park
Abstract: Since the information wave of the 1990s, various changes, including improved business processes and management styles, have emerged as different fields attempt to create more profit through increased productivity. It is clear that the use of IT in the construction industry will spread with the changing paradigm, and that the extent of IT application will increase rapidly as well. A study of the extent of users' acceptance of IT within the context of the construction industry is therefore needed. Based on this perception, we conducted research on IT acceptance by individuals using it in the construction industry, based on Davis's (1989) technology acceptance model (TAM). To adapt Davis's model to construction IT, we hypothesized whether each internal and external construction IT factor would influence information accepters. Through a survey based on the construction IT model, and the statistical analysis of that survey, we observed accepters' perceived usefulness and ease of use before the introduction of new ITs. Finally, we aim to grasp how the IT used in the construction industry affects users, and to determine whether the latest IT should be introduced, by analyzing whether users are able to accept it.
Keywords: Information Technology Acceptance; Construction Users; Verification; Davis; Technology Acceptance Model.
Secure Minimum Loss Route Selection of MIMO based MANET in combined (Indoor, Outdoor, and Forest) Terrain
by Swati Chowdhuri, Pranab Banerjee, Sheli Sinha Chaudhury, Nilanjan Dey, Arun Mandal, V. Santhi
Abstract: Multiple-input multiple-output (MIMO) is a very promising technique in modern wireless communication systems, as it can meet the demand for high data rates within limited bandwidth. Integrating MIMO technology with mobile ad hoc networks can improve the performance of transmission in hazardous environments. To design a real MIMO wireless system and predict its performance under given circumstances, accurate MIMO wireless channel models for different scenarios are necessary. In general, a mobile ad hoc network with multiple antennas suffers from scattering effects. In this work, a combination of the two-ring model and a random scattering model is discussed to evaluate the channel impulse response of the network. The channel impulse response, or channel matrix, is then used to estimate the propagation loss of the MIMO-based mobile ad hoc network in different terrains. Finally, a minimum-loss secure path selection is carried out by the proposed PASR (path-loss-based administrator selection secure routing) protocol, whose efficiency is verified through the obtained results.
Keywords: Mobile ad hoc network (MANET); Multi Input Multi Output (MIMO); Impulse Response; Propagation loss; Routing.
Automated Lumbar-lordosis Angle Computation from Digital X-ray Image Based on Unsupervised Learning
by Raka Kundu, Amlan Chakrabarti, Prasanna Lenka
Abstract: Computation of the lumbar-lordosis angle (LLA) of the spine is a common measure for patients suffering from lower back pain (LBP), and one of the measures for proper monitoring of patients with spine problems. The LLA is the angle formed between the extreme superior lumbar vertebra (L1) and the superior sacrum vertebra (S1). Based on the Gaussian mixture model (GMM), an unsupervised method, an automated image-processing technique was developed for computing the LLA from sagittal spine X-ray images: the lumbar-sacral curvature is identified and the curvature angle is measured by Cobb's method to obtain the LLA. The LLA is one of the major parameters in finding the credible etiology of LBP syndromes. The objective of our automated technique is to ease real-life issues in medical treatment, act as a primary investigation in patients with suspected LBP syndromes, and assess the severity of the disease. To the best of our knowledge, the proposed technique for automated LLA computation from digital X-rays is the first of its kind. The technique was validated on 22 X-ray images, and promising results were achieved in the experiments.
Keywords: Automated computer-aided detection and diagnosis; lumbar-lordosis Cobb’s angle (LLA); Digital X-ray image; Gaussian mixture model; expectation maximization; lumbar-lordosis (LL).
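The final geometric step of such a pipeline can be sketched independently of the GMM segmentation: once the superior endplates of L1 and S1 have been localised (here simply given as two points each, an assumption standing in for the paper's automated detection), the Cobb-style angle is the angle between the two endplate lines.

```python
import math

def endplate_angle(p1, p2):
    """Angle (degrees) of the line through p1 and p2 w.r.t. horizontal."""
    (x1, y1), (x2, y2) = p1, p2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def lumbar_lordosis_angle(l1_endplate, s1_endplate):
    """Cobb-style angle between the L1 and S1 superior endplate lines."""
    a = endplate_angle(*l1_endplate)
    b = endplate_angle(*s1_endplate)
    diff = abs(a - b) % 180.0
    return min(diff, 180.0 - diff)   # acute angle between the two lines
```

A horizontal L1 endplate against an S1 endplate inclined at 45 degrees yields an LLA of 45 degrees.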
A Simulation of Model Selection Strategy in Hierarchical System Using the Analytic Hierarchy Process
by Gabsi Mounir, Rekik Ali
Abstract: Existing literature has recognized that superior organizational capabilities, stemming primarily from knowledge integration, bring firms strong strategic outcomes. This article argues that organizational capability can be used as a bridge to explain the relationships between strategy and knowledge. Many kinds of management information systems (MIS) exist in industries and enterprises, and a model selection strategy for managing resource flows can help a firm achieve its strategic goals and align its knowledge management with its strategies. We develop a synthesis of coordination strategies for resource flows in order to aggregate distributed hierarchical system topics. In this paper we propose a simulation of a model selection strategy for a work-travel company using the analytic hierarchy process (AHP).
Keywords: Strategy; Aggregation of hierarchy; synthesis strategies; Coordination strategies; AHP; aggregation; Decision making; Goals.
A New Fractal Watermarking Method for Images of Text
by Kourosh Kiani, Arash Mousavi, Shahaboddin Shamshirband
Abstract: A new method using orthogonal fractal coding is developed for fractal watermarking of high-contrast, low-density images of text. In this method, the image is divided into sub-images one pixel in height, and each sub-image is coded separately using the orthogonal fractal coding technique. A binary watermark is re-ordered using a chaotic sequence and inserted into the range-block means of the fractal codes. The fractal code is then decoded to obtain the watermarked image. The watermark sequence is retrieved by comparing the original image with the watermarked code, and the extracted watermark is re-ordered using the key of the chaotic sequence. The method is robust against JPEG and noise attacks and has very low watermark visibility.
Keywords: fractal watermarking; high contrast image; text watermarking; steganography.
A Clustering Based Recommendation Engine for Restaurants
by Aarti Singh, Anu Sharma
Abstract: With the widespread growth of the tourism industry, restaurant recommendation has become an important application area for recommendation systems (RS). Designing an efficient and scalable solution for restaurant recommendation is still an open area of research. Many researchers have contributed to recommendation systems for restaurants, but none of these approaches cluster the user-profile database to reduce the search space before applying recommendation techniques (RT). The aim of this research is to provide a more scalable solution for recommending restaurants. This work applies existing RT to reduced rating data obtained by clustering user profiles. Results suggest a considerable decrease in processing time while maintaining the accuracy of the recommendation.
Keywords: clustering; k-means; recommendation techniques; user profiling; restaurant recommendation.
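A minimal sketch of the cluster-then-recommend idea: user profiles are clustered first, and the recommender then searches only the active user's cluster instead of the full rating database. The naive k-means, the rating data and the item names below are all illustrative, not the paper's implementation.

```python
def kmeans(points, k, iters=20):
    """Naive k-means on fixed-length rating vectors (lists of floats)."""
    centroids = points[:k]                      # deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

def recommend(profile, centroids, clusters, item_names):
    """Recommend the item best liked, on average, in the nearest cluster."""
    d = [sum((a - b) ** 2 for a, b in zip(profile, c)) for c in centroids]
    members = clusters[d.index(min(d))]
    avg = [sum(col) / len(col) for col in zip(*members)]
    # suggest the highest-rated item the user has not rated yet
    ranked = sorted(range(len(avg)), key=lambda i: avg[i], reverse=True)
    return next(item_names[i] for i in ranked if profile[i] == 0)
```

The scalability gain is that `recommend` averages over one cluster's members rather than over every user in the database.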
Word Sense Based Approach for Hindi to Tamil Machine Translation Using English as Pivot Language
by Vimal Kumar K, Divakar Yadav
Abstract: Machine translation is the translation of a source text into a desired target text. With globalization in every field, and with resources available in many languages on the internet, there is a great need for machine translation systems that share knowledge with audiences who know only their native language. The proposed system builds a word-sense-based statistical machine translation system for translating Hindi to Tamil. Since these languages lack resources, an intermediate pivot language with high resource availability is needed; English was chosen as the pivot because of its rich resources. Initially, the Hindi text undergoes a preprocessing phase in which it is morphologically and syntactically analyzed. Based on this analysis, the senses of the words are identified using latent semantic analysis (LSA) in order to provide a meaningful translation. Once these analyses are done, the sentence undergoes statistical translation from source to pivot and then from pivot to target language. The system has improved efficiency compared with a system without sense identification and a pivot language.
Keywords: Statistical Machine Translation; Word Sense Disambiguation; Latent Semantic Analysis; Pivot based Machine Translation.
Quality Factor Optimization of Spiral Inductor using Firefly Algorithm and its Application in Amplifier
by Ram Kumar, Fazal.A Talukdar, Nilanjan Dey, Valentina E. Balas
Abstract: This paper details an optimized design of a CMOS spiral inductor for the output matching circuit of a low noise amplifier (LNA), employing the nature-inspired firefly optimization algorithm (FA). The inductor parameters are optimized under a single objective function, with constraints handled by the penalty-factor method. Using the FA technique, an inductor with a high quality factor of 5.87 at 5.5 GHz is obtained in the Matlab environment, and the computer-aided design tool ASITIC is used for validation. The output matching circuit of the LNA is designed using the Pi model obtained from ASITIC. The designed LNA has a cascode structure with inductive source degeneration and is implemented in UMC 0.18 μm CMOS technology using CADENCE software; its performance is evaluated by simulation at 5.5 GHz.
Keywords: Spiral Inductor; Optimization Technique; Particle swarm optimization; Firefly optimization; Low noise amplifier; Quality factor; ASITIC.
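The firefly algorithm named in the title can be sketched on a generic objective; the quadratic below is a stand-in for the paper's inductor quality-factor model (which comes from ASITIC with penalty-handled constraints), and the parameter values are illustrative defaults, with gamma scaled to suit the example search domain.

```python
import math, random

def firefly(objective, bounds, n=15, iters=60, beta0=1.0, gamma=0.01,
            alpha=0.2, seed=1):
    """Minimise objective over box bounds with a basic firefly algorithm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    for _ in range(iters):
        light = [objective(x) for x in pop]    # lower cost = brighter
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:        # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = (beta * (pop[j][d] - pop[i][d])
                                + alpha * (rng.random() - 0.5))
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
                    light[i] = objective(pop[i])
        alpha *= 0.97                          # cool the random walk
    return min(pop, key=objective)
```

The brightest firefly never moves, so the best solution found can only improve from one iteration to the next.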
Ant_VRP: Ant-Colony based Meta-heuristic Algorithm to Solve the Vehicle Routing Problem
by Majid Nikougoftar Nategh, Ali Asghar Rahmani Hosseinabadi, Valentina Emilia Balas
Abstract: The vehicle routing problem is one of the most important combinatorial optimization problems and remains very important to researchers and scientists today. In this kind of problem, the aim is to determine the minimum cost needed to move the vehicles, which start simultaneously from the warehouse and return to it after visiting customers. There are two constraints on customers and vehicles: each node must be visited by exactly one vehicle, and no vehicle may load more than its capacity. In this paper, a combination of the ant colony algorithm with a mutation operation, named Ant_VRP, is proposed to solve the vehicle routing problem. The performance of the algorithm is demonstrated by comparison with other heuristic and meta-heuristic approaches.
Keywords: Vehicle Routing Problem; Optimization; Ant Colony Algorithm; Mutation.
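A compact, hypothetical sketch of the ACO-plus-mutation scheme the abstract describes: ants build a customer ordering guided by pheromone and inverse distance, an Ant_VRP-style mutation (a random customer swap) perturbs each tour, and the ordering is cut into capacity-feasible routes from the depot. The encoding and parameters are illustrative, not the paper's.

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(routes, pts, depot):
    total = 0.0
    for r in routes:
        seq = [depot] + [pts[i] for i in r] + [depot]
        total += sum(dist(seq[k], seq[k + 1]) for k in range(len(seq) - 1))
    return total

def split_routes(perm, demand, capacity):
    """Cut a customer permutation into capacity-feasible routes."""
    routes, cur, load = [], [], 0
    for i in perm:
        if load + demand[i] > capacity:
            routes.append(cur)
            cur, load = [], 0
        cur.append(i)
        load += demand[i]
    if cur:
        routes.append(cur)
    return routes

def ant_vrp(pts, demand, capacity, depot=(0.0, 0.0),
            ants=10, iters=40, rho=0.1, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tau = [[1.0] * n for _ in range(n)]        # pheromone on customer edges
    best, best_cost = None, float('inf')
    for _ in range(iters):
        for _ in range(ants):
            start = rng.randrange(n)
            perm, unvis = [start], set(range(n)) - {start}
            while unvis:                       # pheromone x 1/distance roulette
                cur = perm[-1]
                cand = list(unvis)
                w = [tau[cur][j] / (1e-9 + dist(pts[cur], pts[j])) for j in cand]
                nxt = rng.choices(cand, weights=w)[0]
                perm.append(nxt)
                unvis.discard(nxt)
            a, b = rng.sample(range(n), 2)     # mutation: random swap
            perm[a], perm[b] = perm[b], perm[a]
            cost = route_cost(split_routes(perm, demand, capacity), pts, depot)
            if cost < best_cost:
                best, best_cost = perm[:], cost
            for u, v in zip(perm, perm[1:]):   # deposit on the tour's edges
                tau[u][v] += 1.0 / cost
        for row in tau:                        # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
    return split_routes(best, demand, capacity), best_cost
```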
Context Aware Power Management in Smart Grids Using Load Balancing Approach
by RajaSekhar Reddy NV, Venkata Krishna P
Abstract: A smart grid is an electrical grid with an advanced digital communication network. In the past decade, developments in the field of smart grids have attracted many researchers. This paper presents a power-aware model for context awareness and load management in smart homes. In the adaptive system of the smart grid, smart homes are treated as smart nodes. The proposed power-aware smart home management model (PASH) incorporates an evolutionary programming algorithm and context-awareness rules for communicating with users. The PASH model demonstrates the advantage of load balancing in smart homes for both users and the smart grid, and its context management module helps utilize power efficiently during peak demand.
Keywords: Power management; load balancing; context; smart homes; smart grids; evolutionary programming.
Methodology of Wavelet Analysis in Research of Dynamics of Phishing Attacks
by Mehdi Dadkhah, Vyacheslav V. Lyashenko, Zhanna V. Deineko, Shahaboddin Shamshirband, Mohammad Davarpanah Jazi
Abstract: The transfer and reception of data over the Internet can be accompanied by harmful components in the transmitted content, and the phishing attack is one such component. It is therefore important to know, for forecasting purposes, the relationship between phishes verified as valid and suspected phishes submitted. To study this, we apply wavelet analysis to the time series of phishes verified as valid and suspected phishes submitted: we consider changes in the Hurst exponent and analyze the spectrum of wavelet energy. This reveals the main characteristics of the time series under consideration. The research shows substantial long-term dependence in the investigated data and identifies a trend component in the structure of the series. This makes it possible to study the recurrence of phishing attacks and thus to concentrate forces and means during periods when such harmful influences are most active. The analysis is carried out on real data, which underlines the importance of the conclusions obtained.
Keywords: Internet; phishing; trend; wavelet analysis; wavelet energy; wavelet expansion; Hurst indicator; Daubechies Wavelet.
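One ingredient of the methodology can be illustrated in isolation: estimating the Hurst exponent of a count series by rescaled-range (R/S) analysis, where H > 0.5 indicates the kind of long-term dependence the authors report. This is a generic textbook estimator, not the paper's wavelet-based procedure; the window sizes are illustrative.

```python
import math

def rs(series):
    """Rescaled range R/S of one window."""
    n = len(series)
    mean = sum(series) / n
    dev = cum = lo = hi = 0.0
    for x in series:
        dev += (x - mean) ** 2
        cum += x - mean                 # cumulative deviation from the mean
        lo, hi = min(lo, cum), max(hi, cum)
    s = math.sqrt(dev / n)
    return (hi - lo) / s if s > 0 else 0.0

def hurst(series, window_sizes=(8, 16, 32, 64)):
    """Slope of log(R/S) versus log(n) over several window sizes."""
    xs, ys = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        vals = [v for v in (rs(c) for c in chunks) if v > 0]
        if vals:
            xs.append(math.log(n))
            ys.append(math.log(sum(vals) / len(vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)
```

A strongly trending series (such as a linear ramp) gives H near 1, matching the trend component the abstract identifies in the phishing data.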
Real Time Navigation of a Mobile Robot
by P. Raja, Ambati Akshay, Akshay Kumar Budumuru
Abstract: Path planning is one of the major challenges in robot navigation. When a robot cannot decide its direction of movement from the information available, the decision can be made with the help of an external source. Most algorithms involve computations that may require a lot of memory and time. Our work avoids those calculations by sending a direct visual feed to a control room where the environment can be analyzed, enabling the robot to move from the source to the goal position. The robot also displays its coordinates on the console screen, which can be used for creating maps. During our work we observed that decreasing the refresh interval of the coordinates also decreases the closest distance the robot can approach an obstacle, allowing the robot to move through very narrow paths it can fit through.
Keywords: Mobile robot; navigation; mapping; control room.
Energy and Velocity based Multipath Routing Protocol for VANET
by Bhagyavathi Miriam, Saritha Vankadara
Abstract: A VANET is a type of network that can be built randomly, quickly and temporarily without any standard infrastructure. In VANETs, routing data is an interesting and challenging task because of the high mobility, so the routing algorithm is an imperative issue, particularly in vehicle-to-vehicle communication. This paper proposes a multipath routing algorithm for VANETs, the energy and velocity based multipath routing protocol (EVMRP), based on available bandwidth, residual energy and relative velocity. The most important point of the proposed algorithm is setting CWmax to the available bandwidth of the path. The proposed algorithm is tested on QoS parameters such as end-to-end delay, throughput and packet loss. The results clearly indicate that EVMRP outperforms legacy protocols such as AOMDV.
Keywords: Routing; VANET; available bandwidth; Multipath.
Mutation Based Genetic Algorithm for Efficiency Optimization of Unit Testing
by Rijwan Khan, Mohd. Amjad
Abstract: Faults in a software program can be detected by mutation testing. However, mutation testing is an expensive process in the software testing domain. In this paper, we introduce a method based on genetic algorithms and mutation analysis for the unit testing process. The software industry produces high-quality software, in which software testing plays an important role. First, we take a program, inject some mutants into it, find the most critical paths, and optimize test cases for unit testing using a genetic algorithm; the initially generated test cases are refined by the genetic algorithm. We use a mutant function to measure the adequacy of the test-case set; this function is used to calculate a mutation score. We achieved 100% path coverage and boundary coverage using mutation testing. The objective is to produce a set of good test cases that kill one or more undesired mutants by distinguishing them from the original program. Unlike simple algorithms, genetic algorithms are well suited to reducing data generation at a comparable cost. The proposed approach generates optimized test cases that reduce cost and reveal or kill undesired mutants.
Keywords: Genetic Algorithms (GA); Software Testing (ST); Automatic Test Case Coverage (ATCC); Boundary Value Analysis (BVA); Mutation Testing (MT).
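The mutation-score measure the abstract uses to judge test-set adequacy is easy to illustrate: a mutant is "killed" when some test case distinguishes it from the original program. The toy program and hand-made mutants below are hypothetical examples, not the paper's subjects; in practice mutants are generated automatically.

```python
def mutation_score(original, mutants, test_inputs):
    """Fraction of mutants killed by the given test inputs."""
    killed = 0
    for mutant in mutants:
        if any(mutant(x) != original(x) for x in test_inputs):
            killed += 1
    return killed / len(mutants) if mutants else 0.0

# Toy program under test and two hand-made mutants (operator replacements).
def side_ok(a):                   # original: a side length must be positive
    return a > 0

mutants = [
    lambda a: a >= 0,             # mutant 1: > replaced by >=
    lambda a: a > 1,              # mutant 2: boundary shifted
]
```

A single interior test input (say 5) kills neither mutant, giving a score of 0.0, while adding the boundary inputs 0 and 1 kills both, giving 1.0; this is why boundary-value test cases score well under mutation analysis.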
University-timetabling problem and its solution using GELS algorithm: A Case Study
by Majid Nikougoftar Nategh, Ali Asghar Rahmani Hosseinabadi, Valentina Emilia Balas
Abstract: Course scheduling involves a large volume of data with numerous constraints and unchangeable specifications, and every university deals with it several times a year. Course scheduling is an NP-hard problem, and solving it with traditional methods is very difficult; evolutionary algorithms, however, offer good solutions for this type of problem. In this paper we use the gravitational emulation local search (GELS) algorithm, an evolutionary algorithm, to solve the course scheduling problem. Results demonstrate the good quality of the timetables produced by the proposed algorithm, as well as reduced running time compared with other algorithms.
Keywords: Course scheduling; GELS Algorithm; Genetic Algorithm.
AMST-MAC: Adaptive Sleeping Multi-Frames Selective Data Transmission Control for Wireless Sensor Networks
by Bindiya Jain, Gursewak Brar, Jyoteesh Malhotra
Abstract: Energy efficiency is the major issue in the design of wireless sensor networks, so designing an energy-efficient MAC protocol is of paramount importance. The proposed protocol, AMST-MAC (adaptive sleeping multi-frames selective data transmission MAC), is an energy-saving mechanism whose objective is to remove redundancy and reduce the number of packets sent for the same amount of information by using selective data transmission (SDT). It also allows a node to sleep whenever it is idle, even during the data cycle, using the concept of a dynamic duty cycle (DDC). The aim of this simulation study was to evaluate the proposed protocol in terms of energy efficiency, end-to-end delay and packet delivery ratio compared with the S-MAC protocol, without degrading service quality. The results clearly show that AMST-MAC is more energy efficient than S-MAC and maintains the lowest sender and receiver duty cycles. AMST-MAC decreases delay by a factor of 10%, so the overall mean delay shows a reasonable decrease, and it consumes less energy in every round, making it a better protocol than S-MAC both with and without SDT.
Keywords: Sensor networks; Medium Access Control; Energy Efficient; AMST-MAC Protocol; Selective data Transmission; Dynamic duty cycle.
An Intelligent Clustering Approach for Improving Search Result of a Website
by Shashi Mehrotra, Shruti Kohli, Aditi Sharan
Abstract: The internet has become part of our lives, and web data usage has increased tremendously. We propose a model that improves search results using a clustering approach: clustering groups the data into relevant folders so that information can be accessed quickly. The K-means clustering algorithm is very efficient in terms of speed and suitable for large data sets. However, K-means has some drawbacks: the number of clusters must be defined at the start, the initialization affects the output, and it often gets stuck in local optima. We propose a hybrid model that determines the number of clusters itself and gives a globally optimal result; the number so obtained is passed as a parameter to K-means. Our novel hybrid model thus integrates the features of K-means and the genetic algorithm, combining the best characteristics of both while overcoming their drawbacks.
Keywords: Clustering; K-Means algorithm; Genetic algorithm; Hybrid algorithm.
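A hypothetical miniature of the hybrid idea: a GA searches over the number of clusters k, scoring each candidate by k-means inertia plus a complexity penalty, and the winning k would then be handed to K-means proper. The 1-D data, deterministic initialization and penalty weight are illustrative choices, not the paper's model.

```python
import random

def kmeans_1d(data, k, iters=25):
    """Plain k-means on 1-D data; returns (centroids, inertia)."""
    centroids = data[:k]                       # deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            d = [(x - c) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    def nearest(x):
        return min(range(k), key=lambda i: (x - centroids[i]) ** 2)
    inertia = sum((x - centroids[nearest(x)]) ** 2 for x in data)
    return centroids, inertia

def ga_choose_k(data, kmax=5, pop=8, gens=15, penalty=1.0, seed=0):
    """Evolve candidate k values; lower inertia + penalty*k is fitter."""
    rng = random.Random(seed)
    score = lambda k: kmeans_1d(data, k)[1] + penalty * k
    population = [rng.randint(1, kmax) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score)
        survivors = population[:pop // 2]
        children = [min(kmax, max(1, p + rng.choice((-1, 1))))
                    for p in survivors]        # mutation: nudge k by +-1
        population = survivors + children
    return min(population, key=score)
```

The penalty term is what stops the search from always preferring the largest k, since raw inertia alone decreases monotonically as k grows.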
A Clustered Neighborhood Consensus Algorithm for a Generic Agent Interaction Protocol
by Aarti Singh, Dimple Juneja, Rashmi Singh, Saurabh Mukherhjee
Abstract: The premise of the paper is twofold. It not only improves the existing generic agent interaction protocol (GIPMAS) but also uniquely addresses the issue of generating consensus among the agents participating in it. In a multi-agent system, agents cooperate and coordinate to reach a decision when sending information. In a clustered multi-agent system, all member agents of a cluster send their data to the cluster head, which forwards the processed information to the next level for further processing. Agents in close proximity (in the same or different clusters) will clearly transmit redundant information, so it is desirable that, before sending raw data, member agents mutually agree on a common decision (based on common metrics) and pass on only the relevant, agreed-upon information to the next higher level. The paper contributes a consensus algorithm that is a marriage of a neighborhood algorithm and a discrete-time consensus protocol. The proposed neighborhood algorithm gives more weight to communication links joining two clusters than to links joining two agents within a cluster, which increases the rate of convergence of information. In the clustered network of agents, the cluster head and executive cluster head are responsible for deriving consensus on the received information. Simulation reflects that the proposed mechanism improves the convergence time of information, although a slight increase in task execution time is also observed, owing to the trade-off between output quality and mechanism complexity.
Keywords: Multiagent Systems; Agent Interaction Protocol; Clustered Network; Neighborhood Algorithm.
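The discrete-time consensus protocol with inter-cluster links weighted more heavily can be sketched as follows. The graph, weights and step size are illustrative assumptions; because the pairwise updates are symmetric, the sum is preserved and all agents converge to the global average.

```python
def consensus_step(values, edges, eps=0.2):
    """One discrete-time consensus update: x_i += eps * w_ij * (x_j - x_i)
    for every weighted undirected edge (i, j, w)."""
    new = list(values)
    for i, j, w in edges:
        new[i] += eps * w * (values[j] - values[i])
        new[j] += eps * w * (values[i] - values[j])
    return new

# Two 2-agent clusters; the bridging edge (1, 2) carries double weight,
# mirroring the paper's emphasis on inter-cluster links
edges = [(0, 1, 1.0), (2, 3, 1.0), (1, 2, 2.0)]
x = [1.0, 2.0, 8.0, 9.0]
for _ in range(100):
    x = consensus_step(x, edges)
# all agents end up near the global average (1 + 2 + 8 + 9) / 4 = 5.0
```

Raising the bridge weight increases the algebraic connectivity of the graph, which is exactly why the abstract reports faster convergence when inter-cluster edges are weighted up.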
Evolutionary Optimization Based Fractional Order Controller for Web Transport Systems in Process
by Haripriya N., Kavitha Paneerselvam, Seshadhri Srinivasan, Juri Belikov
Abstract: This investigation presents an optimization-based design of a fractional order proportional integral (FO-PI) controller for web transport systems used in paper industries. The objective of the optimization algorithm is to reduce the integral absolute error of the closed-loop web transport system while considering the underlying physical and operating constraints. The resulting optimization problem is non-linear, and to compute the controller parameters, two evolutionary algorithms, particle swarm optimization (PSO) and bacterial foraging optimization (BFO), are used. The performance improvement achieved using the FOC is compared with a traditional proportional integral derivative controller. Our results show that the BFO-tuned FOC delivers better performance.
Keywords: Web Transport Systems (WTS); Web Transport Controllers (WTC); Fractional Order Controllers (FOC); Particle Swarm Optimization (PSO); Bacterial Foraging Optimization (BFO); Offline optimization.
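Simulating the closed-loop web transport plant is beyond the scope of an abstract, but the optimization layer itself is compact. Below is a minimal PSO sketch in the spirit of the paper's tuning step, with a stand-in quadratic cost in place of the closed-loop IAE; the cost function, bounds and swarm settings are assumptions for illustration only.

```python
import random

def pso(objective, dim, n_particles=20, iters=60, seed=1,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimisation over the box [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# hypothetical stand-in for the IAE cost of a (Kp, Ki, lambda) FO-PI vector
cost = lambda p: (p[0] - 1.2) ** 2 + (p[1] - 0.4) ** 2 + (p[2] - 0.9) ** 2
best, best_val = pso(cost, dim=3)
```

In the paper's setting, `cost` would be replaced by a simulation of the web transport loop returning the integral absolute error, with the box constraints encoding the physical and operating limits.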
Protagonist and deuteragonist based video indexing and retrieval system for movie and video song sequences
by Tushar Ratanpara, Narendra Patel
Abstract: The protagonist and deuteragonist are the two main characters playing leading roles in an Indian Hindi movie (IHM). Currently such information is attached using textual captions, which are highly unreliable. The research presented in this paper automatically indexes and retrieves content based on the protagonist and deuteragonist from large collections of IHMs and video song sequences (VSS). In module 1, video song sequence indexes are extracted from the IHM using an audio-based approach. These indexes are used as input to module 2. Faces are identified in every VSS, and colour histogram and spatiogram descriptors are extracted from them. Similarity between two faces is computed using the Bhattacharyya coefficient, and similarity-based clustering is performed to obtain clusters of faces. Recognition of the protagonist and deuteragonist is done using SURF feature points. Experiments are carried out on Indian Hindi movies of different genres.
Keywords: Content based video indexing and retrieval; song sequences; clustering; similarity; color histogram; Spatiogram.
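The face-similarity step reduces to the Bhattacharyya coefficient between two normalised histograms, which is small enough to show directly; the toy histograms below are illustrative (the measure assumes non-empty histograms of equal length).

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two histograms, normalised internally.
    1.0 means identical distributions, 0.0 means disjoint support."""
    s1, s2 = sum(h1), sum(h2)
    return sum(math.sqrt((a / s1) * (b / s2)) for a, b in zip(h1, h2))

same = bhattacharyya([4, 2, 2], [4, 2, 2])   # → 1.0
diff = bhattacharyya([8, 0, 0], [0, 0, 8])   # → 0.0
```

In the paper's pipeline this coefficient (computed on colour histograms and spatiograms of detected faces) drives the similarity-based clustering of face images.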
An Improved Key Management Scheme in Cloud Storage
by VijayaKumar V, Abdul Quadir, Kiran Mary Matthew
Abstract: Nowadays, cloud services are used by numerous people all around the globe. One of their major applications is cloud storage. Users can store data in the cloud without needing their own hardware resources, paying only for the resources they use. For storage applications, the user typically provides the cloud with the data to be stored; the cloud encrypts this data and returns a key to the user, so the user needs to store only this key for decryption. Storage of this key is a matter of concern: if the key is lost, the probability of data loss is very high. To avoid this, a large number of key management techniques have been proposed. In this paper, a key management scheme is proposed that regenerates the key, in case of its loss, using the attributes of the user.
Keywords: cloud computing; data privacy; key management.
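The abstract does not spell out the exact scheme, but one common way to make a key regenerable from user attributes is to derive it deterministically with a key derivation function. The sketch below uses PBKDF2 over canonicalised attributes; the attribute set, salt and iteration count are hypothetical, not the authors' construction.

```python
import hashlib

def derive_key(attributes: dict, salt: bytes, length: int = 32) -> bytes:
    """Deterministically derive a key from user attributes via PBKDF2-HMAC-SHA256.
    The same attributes and salt always regenerate the same key."""
    material = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.pbkdf2_hmac("sha256", material.encode(), salt, 100_000,
                               dklen=length)

attrs = {"user_id": "u42", "dob": "1990-01-01", "branch": "chennai"}
k1 = derive_key(attrs, salt=b"cloud-salt")
k2 = derive_key(attrs, salt=b"cloud-salt")   # regenerated after a "loss"
```

Because derivation is deterministic, a user who loses the stored key can re-present the same attributes and recover it, which is the property the abstract's scheme targets.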
An Effective System for Video Transmission and Error Recovery Mechanisms in Multimedia Networks
by U. Rahamathunnisa, R. Saravanan
Abstract: In this paper an effective system is proposed for video transmission. The system solves the problems arising from errors that occur in the transmitted video and delivers video with the required quality of service, which the reconstructed video maintains at the decoder side. A video-dynamics-based error concealment algorithm is applied to recover from errors that occur during transmission. The performance of the proposed system is measured by means of simulations using the JM reference software.
Keywords: Error concealment; Video dynamics; Video transmission; Quality of service; Reconstructed Video.
CLUSTERING MIXED DATA USING NEIGHBORHOOD ROUGH SETS
by Sharmila Banu Kather, B.K. Tripathy
Abstract: Data of varied nature and in huge quantities are generated every day, ranging from tabulated, structured and semi-structured data with numerical or categorical attributes. Data preprocessing presents data in a format favourable for applying analytics algorithms and deriving knowledge from them. Data analytics has revolutionized modern life by unwinding the knowledge and patterns mined from data. Clustering is an unsupervised learning technique with popular algorithms based on distance, density, dimensions and other functions. These algorithms operate on numerical attributes, and special algorithms for data involving categorical features have also been reported. In this paper we propose a straightforward way of clustering data involving both numerical and categorical features based on neighborhood rough sets. It does not require calculating extra parameters like entropy, saliency or dependency, nor does it call for discretization of the data. Hence its complexity is lower than that of algorithms proposed for categorical or mixed data, and it offers better efficiency.
Keywords: clustering; mixed; categorical and numerical data; continuous data; rough sets; neighborhood rough sets; granulation.
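The core of a neighborhood-rough-set approach to mixed data is a distance that handles numeric and categorical attributes together, with the objects inside a radius δ of a point forming its granule. A minimal sketch follows; the attribute ranges, δ and the averaging scheme are illustrative assumptions.

```python
def mixed_distance(x, y, numeric, ranges):
    """Average per-attribute distance: |a - b| / range for numeric attributes,
    0/1 mismatch for categorical ones. No discretization is needed."""
    total = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if i in numeric:
            total += abs(a - b) / ranges[i]
        else:
            total += 0.0 if a == b else 1.0
    return total / len(x)

def neighborhood(idx, data, numeric, ranges, delta):
    """Indices of all objects within radius delta of object idx (a granule)."""
    return [j for j, y in enumerate(data)
            if mixed_distance(data[idx], y, numeric, ranges) <= delta]

# attribute 0: age (numeric, range 40); attribute 1: colour (categorical)
data = [(25, "red"), (27, "red"), (60, "blue"), (26, "blue")]
nbr = neighborhood(0, data, numeric={0}, ranges={0: 40.0}, delta=0.1)
```

Clustering then proceeds by merging overlapping granules; note that the categorical mismatch term keeps `(26, "blue")` out of the neighborhood of `(25, "red")` even though their ages are close.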
A Unified Approach for Skin Colour Segmentation Using Generic Bivariate Pearson Mixture Model
by B.N. Jagadesh, K. Srinivasa Rao, Ch Satyanarayana
Abstract: Skin colour segmentation is a rapidly growing area of research in computer science for the identification and authentication of persons. In this paper, a novel generic bivariate Pearsonian mixture model for skin colour segmentation is proposed. It is observed that the hue and saturation of the colour image better characterize the features of individual human races. In general, the human race can be characterized into three categories, namely Asian, African and European. African skin colour features can be modeled by a bivariate Pearson type-IIb distribution, Asian skin colour features by a bivariate Pearson type-IIaα distribution, and European skin colour features by a bivariate Pearson type-IVa distribution. The combination of all three races of people in an image can be characterized by a three-component mixture model. The model parameters are estimated by deriving the update equations of the EM algorithm for the generic bivariate Pearson mixture model. The model parameters are initialized using the method of moments and the K-Means algorithm. The segmentation algorithm is developed using component maximum likelihood under a Bayesian frame. The performance of the proposed algorithm is evaluated by experimentation with a random sample of five images collected from our own database and various magazine websites, covering a combination of the three races (Asian, African and European), and by computing segmentation performance metrics such as PRI, GCE and VOI. The efficiency of the proposed model is compared with that of the bivariate GMM through confusion matrices and ROC curves. It is observed that the proposed algorithm outperforms the existing algorithms.
Keywords: Skin colour segmentation; Generic bivariate Pearsonian mixture model; EM-Algorithm; Segmentation performance metrics; Feature Vector.
An Intelligent and Interactive AR based Location Identifier for Indoor Navigation
by Shriram K Vasudevan, Karthik Venkatachalam, Harii Shree, Keerthana Rani, Priya Dharshini
Abstract: Augmented Reality (AR) has been in existence for more than five decades, but the techniques and methods for implementing this technology have been developing only in the recent past, i.e. over the past decade. We have built an application using AR techniques with Android as the base platform, combining the Global Positioning System (GPS) and AR for indoor navigation. Even though other applications like Google Maps already exist for navigation, our application offers users more ease and attractiveness through AR. The data of the surroundings of a particular location is stored in the cloud in the form of latitude, longitude and altitude (geo location). When a user visits a location for the first time, the geo location details are entered and stored in the cloud. Subsequently, when the same user or a new user visits the location, the stored information about the location is displayed. The location details are updated as and when a new location is identified, and are displayed in the form of markers through the camera that has been integrated into the application. For example, when a new student visits a school or college for a cultural fest, even after finding the correct building it becomes a tedious task to locate the correct venue or classroom, as the area can be vast. With our app, one would reach the correct venue, and the augmented reality feature makes it more interactive and user friendly.
Keywords: Augmented Reality (AR); Android Application Development; Global Positioning System (GPS); Geo Location; Location; Location Manager; Indoor Navigation; Cloud Computing.
River flow prediction with memory based artificial neural networks: A case study of Dholai river basin
by Shyama Debbarma, Parthatsarathi Choudhury
Abstract: Prediction of hydrologic time series has been one of the most challenging tasks in water resources management due to the non-availability of adequate data. Recently, applications of Artificial Neural Networks (ANNs) have proved quite successful in such situations in various fields. This paper demonstrates the use of memory-based ANNs to predict daily river flows. Two different networks, namely the gamma memory neural network (GMN) and the genetic algorithm-gamma memory neural network (GA-GMN), have been chosen. The best network topologies for both ANN models are achieved with the Tanh transfer function and the Levenberg-Marquardt learning rule after calibration with multiple combinations of network parameters. The selected ANN models are then used to predict the daily mean flows of the Dholai (Rukmi) river in Assam, India, a sub-basin of the Barak river basin. A comparative study of both networks indicates that the GA-GMN model performs better than the GMN model, giving better results for both training and testing datasets with a minimum training MSE of 0.018 and a minimum testing MSE of 22.97. Hence the GA-GMN model is selected as an effective tool for predicting flow features of the Dholai river.
Keywords: Prediction; gamma memory; genetic algorithm; flow.
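The gamma memory at the heart of both networks is a short cascade of leaky integrators whose taps feed the network input: g_0(t) = x(t) and g_k(t) = (1-μ)·g_k(t-1) + μ·g_{k-1}(t-1). A minimal sketch of that tap recursion follows; the tap count and μ are illustrative.

```python
def gamma_memory(signal, taps=3, mu=0.5):
    """Gamma memory line: g_0(t) = x(t); g_k(t) = (1-mu)*g_k(t-1) + mu*g_{k-1}(t-1).
    Returns, per time step, the tap vector that would feed the network's input."""
    g = [0.0] * (taps + 1)
    history = []
    for x in signal:
        prev = g[:]                      # taps update from last step's values
        g[0] = x
        for k in range(1, taps + 1):
            g[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        history.append(g[:])
    return history

# an impulse spreads through the taps, giving the network a fading memory
taps_over_time = gamma_memory([1.0, 0.0, 0.0, 0.0], taps=2, mu=0.5)
```

The parameter μ trades memory depth against resolution, which is one of the quantities a GA wrapper (as in GA-GMN) can tune alongside the network weights.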
The Recommender System: A Survey
by Bushra Alhijawi, Yousef Kilani
Abstract: A recommender system is a helpful tool that cuts the time a user needs to find personalized products, documents, friends, places and services. In addition, the recommender system addresses the defining web problem of this century: information overload. At the same time, many environments and technologies (i.e. cloud, mobile, social networks) have become popular and face the problem of large amounts of information. Researchers therefore recognize the recommender system as a suitable solution to this problem in those environments. This paper reviews recent research papers that apply recommender systems in mobile, social network, or cloud environments. We classify these recommender systems into four groups (mobile, social, cloud and traditional (PC) recommender systems) depending on the technology or environment in which the system is applied. This survey presents comparisons, advantages and challenges of these types of recommender systems, and will directly support researchers and professionals in understanding them.
Keywords: Recommender system; Collaborative filtering; Recommendation; Hybrid; Mobile; Cloud; Social; cold-start; Content-based filtering; Demographic-based filtering.
Occlusion Detection and Processing using Optical Flow and Particle Filter
by Wesam Askar, Osama Elmowafy, Anca Ralescu, Aliaa Youssif, Gamal Elnashar
Abstract: Object tracking systems continue to be an active area of research, in which the detection and processing of occlusion is a well-known challenge. This paper proposes a new approach to detecting and handling occlusion based on the integration of two known techniques, optical flow and particle filtering. Results of preliminary experiments show that the proposed method can detect and overcome the occlusion problem successfully during the tracking process.
Keywords: Video tracking; optical flow; particle filter; occlusion.
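One simple way to combine the two ideas, shown here as an illustrative sketch rather than the authors' exact method, is a 1-D particle filter whose update step is skipped when the average measurement likelihood collapses, which is taken as an occlusion signal (the tracker then coasts on prediction alone).

```python
import math
import random

def pf_step(particles, measurement, rng, noise=0.5, occ_threshold=1e-3):
    """One predict/update cycle of a 1-D particle filter.
    A collapsed average measurement likelihood is read as occlusion:
    the weighting and resampling are skipped (prediction only)."""
    particles = [p + rng.gauss(0.0, noise) for p in particles]          # predict
    likes = [math.exp(-0.5 * ((p - measurement) / noise) ** 2) for p in particles]
    occluded = sum(likes) / len(likes) < occ_threshold
    if not occluded:                                                    # update
        particles = rng.choices(particles, weights=likes, k=len(particles))
    return particles, occluded

rng = random.Random(0)
particles = [rng.gauss(10.0, 1.0) for _ in range(200)]
particles, occ1 = pf_step(particles, 10.2, rng)   # consistent measurement
particles, occ2 = pf_step(particles, 50.0, rng)   # wildly inconsistent → occlusion
```

In the paper's full system, the measurement would come from optical flow rather than a raw position, but the detect-then-coast structure is the same.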
Stochastic Modeling and Pilot Data Analysis towards Provisioning of Ambulance for Handling Emergency
by Bidyutbiman Sarkar, Pulak Kundu, Nabendu Chaki
Abstract: Emergency Medical Services (EMS) refer to various health-care-related services. Among EMS activities, ambulance site selection for the fastest service, improving the chance of saving a human life, is a very important activity in developing countries. Besides other factors, uncertainty in the demand for an ambulance at a particular location depends on the type of casualty, its service time, and its availability from the nearest service point. The relocation of ambulances in an emergency is one of the oldest optimization problems. In a distributed setup, the complexity of these algorithms increases exponentially with the number of constraints. In this work we seek an alternative framework that reduces EMS time using state-of-the-art technologies, along with other additional EMS services at a reasonable cost, using a generalized stochastic Petri net (GSPN).
Keywords: Ambulatory Care; Distributed System; ICT; PN; GSPN; EMS.
Improving Recommendation quality and performance of Genetic-Based Recommender System
by Bushra Alhijawi, Yousef Kilani, Ayoub Alsarhan
Abstract: The recommender system helps the user find the required item in a short time by filtering the available choices. This paper addresses the problem of recommending items to users by presenting three new genetic-based recommender systems (GARS+, GARS++ and HGARS). HGARS is a combination of GARS+ and GARS++; it is an enhanced version of GARS that works without the need for a hybrid model. In the proposed algorithm, the genetic algorithm is used to find the optimal similarity function, which depends on a linear combination of values and weights. We experimentally show that HGARS improves accuracy by 16.1%, recommendation quality by 17.2% and performance by 40%.
Keywords: Collaborative filtering; Recommender System; Genetic Algorithms; Similarity.
ENERGY AWARE TASK SCHEDULING USING HYBRID FIREFLY - GA IN BIG DATA
by M. Senthilkumar, P. Ilango
Abstract: Task scheduling is an important research topic in big data and is addressed at two levels: user level and system level. User-level scheduling deals with issues between the service provider and the customer, while system-level scheduling deals with resource management in the data center. With many existing methods, the increase in the power consumption of data centers has become a significant issue. MapReduce clusters now constitute a major part of data centers for big data applications; their sheer size, highly fault-tolerant design and low utilization levels make them less energy efficient. The complexity of scheduling increases with the size of the task, making it very tedious to perform scheduling effectively. Existing scheduling algorithms also incur higher computational cost and lower efficiency, and multi-objective scheduling in cloud computing makes the problem difficult to resolve for complex tasks. These are the primary drawbacks of several existing works, which prompted this research on task scheduling in cloud computing.
Keywords: Firefly algorithm (FA); genetic algorithm (GA); task scheduling; Hadoop; Map Reduce framework.
An Interactive and Innovative Application For Hand Rehabilitation Through Virtual Reality
by Shriram K. Vasudevan, S. Aakash Preethi, Karthik Venkatachalam, Mithula G, Navethra G, Krithika Nagarajan
Abstract: Physiotherapy has been very monotonous for patients, and they tend to lose interest and motivation in exercising. Introducing games with short-term goals into rehabilitation is the best alternative to maintain patients' motivation. Our research focuses on the gamification of hand rehabilitation exercises to engage patients fully in rehab and to maintain their compliance with repeated exercising, for a speedy recovery from hand injuries (wrist, elbow and fingers). This is achieved by integrating the Leap Motion sensor with the Unity game development engine: exercises (as gestures) are recognized and validated by the Leap Motion sensor, and the game applications for the exercises are developed using Unity. The gamification alternative has been implemented by very few groups worldwide, and it has been taken as a challenge in our research. We successfully designed and built an engine that is interactive and real-time, providing a platform for rehabilitation. We have tested it with patients and received positive feedback, and we enable the user to view the score through a GUI.
Keywords: Rehabilitation; Physiotherapy; Gesture; Leap Motion Sensor; Recovery; Virtual Reality.
Discovering Communities for Web Usage Mining Systems
by Yacine SLIMANI, Abdelouaheb MOUSSAOUI, Yves LECHEVALLIER, Ahlem DRIF
Abstract: Discovering community structure in the field of web usage mining has been addressed in many different ways. In this paper, we present a new method for detecting community structure using Markov chains based on the set of frequent motifs. The basic idea is to analyze the occurrence probability of different frequent sequences during different user sessions in order to extract the communities that describe the users' behavior. The proposed method is constructed and successfully applied to the website of the Farhat Abbas Setif university campus.
Keywords: Web usage mining; Community detection; Complex networks; Markov chains; Quality function.
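The building block of such a method is the Markov transition matrix estimated from user sessions; the occurrence probabilities of frequent sequences are then read off from it. A minimal sketch with hypothetical page names:

```python
from collections import defaultdict

def transition_matrix(sessions):
    """Maximum-likelihood Markov transition probabilities between pages,
    estimated from consecutive page pairs in user sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

sessions = [["home", "courses", "exams"],
            ["home", "courses", "library"],
            ["home", "news"]]
P = transition_matrix(sessions)
# P["home"]["courses"] = 2/3, P["home"]["news"] = 1/3
```

Communities would then be extracted by grouping pages whose mutual transition probabilities are high relative to the rest of the graph, which is what the paper's quality function measures.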
Person Re-Identification Using kNN Classifier Based Fusion Approach
by Poongothai Elango, Andavar Suruliandi
Abstract: Re-identification is the process of identifying the same person in images or videos taken from different cameras. Although many methods have been proposed for re-identification, it remains challenging because of unsolved issues like variations in occlusion, viewpoint, pose and illumination. The objective of this paper is to propose a fusion-based re-identification method that improves identification accuracy. To meet this objective, texture and colour features are considered. In addition, the proposed method employs a Mahalanobis-metric-based kNN classifier for classification. The performance of the proposed method is compared with existing feature-based re-identification methods using the CAVIAR, VIPeR, 3DPes and PRID datasets. Results show that the proposed method outperforms the existing methods. Further, it is observed that the Mahalanobis-metric-based kNN classifier improves recognition accuracy in the re-identification process.
Keywords: Person re-identification; Colour features; Texture feature; Feature Fusion.
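The classification step pairs a Mahalanobis metric with a majority-vote kNN. A minimal 2-D sketch follows; the inverse covariance, gallery features and labels are illustrative stand-ins for the paper's fused texture/colour features.

```python
def mahalanobis2(x, y, inv_cov):
    """Squared Mahalanobis distance (x - y)^T S^{-1} (x - y) for 2-D features."""
    d = (x[0] - y[0], x[1] - y[1])
    return (d[0] * (inv_cov[0][0] * d[0] + inv_cov[0][1] * d[1])
            + d[1] * (inv_cov[1][0] * d[0] + inv_cov[1][1] * d[1]))

def knn_predict(query, gallery, inv_cov, k=3):
    """Majority vote among the k gallery entries closest in the Mahalanobis metric."""
    nearest = sorted(gallery, key=lambda g: mahalanobis2(query, g[0], inv_cov))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

inv_cov = [[1.0, 0.0], [0.0, 4.0]]   # hypothetical S^{-1}; second feature counts more
gallery = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
           ((3.0, 3.0), "B"), ((3.1, 2.9), "B")]
person = knn_predict((0.1, 0.0), gallery, inv_cov, k=3)
```

In practice the inverse covariance is learned from training pairs, so the metric stretches exactly the feature directions that best separate identities.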
Graph Embedded Discriminant Analysis for the Extraction of Features in Hyperspectral Images
by Hannah M. Adebanjo, Jules R. Tapamo
Abstract: In remote-sensed hyperspectral imagery (HSI), class discrimination has been a major concern in the process of reducing the dimensionality of hyperspectral images. Local Discriminant Analysis (LDA) is a widely accepted dimensionality reduction (DR) technique in HSI processing; it discriminates between classes of interest in order to extract features from the image. However, the drawbacks of its application to HSI are the scarcity of labeled samples and its inability to extract an adequate number of features for the classes in the image. This paper proposes a new graphical manifold DR algorithm for HSI. The proposed method has two objectives: to maximize class separability using unlabeled samples and to preserve the manifold structure of the image. The unlabeled samples are clustered, and the labels from the clusters are used in our semi-supervised feature extraction approach. Classification is then performed using Support Vector Machines and Neural Networks. Analysis of the results shows that the proposed algorithm can preserve both spatial and spectral properties of HSI while reducing the dimension. Moreover, it performs better than some related state-of-the-art dimensionality reduction methods.
Keywords: feature extraction; graph-based methods; manifold learning; hyperspectral image (HSI).
Adaptive Tutoring System based on Fuzzy Logic
by Makram Soui, Abed Mourad, Ghannem Adnane, Daouas Karim
Abstract: In recent years, education methods have changed and become very innovative and modern. Online adaptive learning, in particular, appears to be a revolutionary, competitive method; the advancement of computer and networking technologies is the key to this shift from classic education to modern online adaptive education. The majority of e-learning systems are based on Boolean logic: the system considers only whether the learner likes a course characteristic or not, whereas the user may prefer that parameter gradually (low, medium, high). To this end, the proposed approach exploits semantic relations between data elements and learners' preferences to determine adapted UI components appropriate to learners' characteristics based on fuzzy logic. The evaluation results confirm the efficiency of our technique, with an average of more than 77% precision and recall.
Keywords: Adaptation; Adaptive course; Evaluation; Multi-criteria Decision Making; Intelligent Tutoring System.
Technical Analysis based Fuzzy support system for stock market Trading
by Aviral Sharma, Vishal Bhatnagar, Abhay Bansal
Abstract: Technical analysis forms an integral part of the life of a stock trader. In econometric analysis, technical analysis is a method for predicting the course of prices of the security under consideration through the study of past statistics relating to the equity, mostly price and volume. Traders tend to use this type of analysis to make decisions regarding a particular security. Fuzzy-logic-based systems can be used to develop decision models in which the experience of traders is incorporated. In this paper, we present a hybrid approach combining fuzzy logic and technical analysis. The system generates a signal on the direction of movement of the stock, helping the trader to better understand the underlying behavior of the stock under consideration and to make decisions accordingly.
Keywords: Technical analysis; Commodity Channel index; relative strength index; William %R; ultimate oscillator; Aroon; Fuzzy Logic; Artificial intelligence.
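As an illustrative sketch of such a hybrid (not the authors' exact rule base), a single indicator like RSI can be fuzzified with triangular memberships and defuzzified into a buy/hold/sell signal; the breakpoints below are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_signal(rsi):
    """Weighted-average defuzzification of three rules on the RSI:
    oversold -> buy (+1), neutral -> hold (0), overbought -> sell (-1)."""
    oversold = tri(rsi, 0, 20, 40)
    neutral = tri(rsi, 30, 50, 70)
    overbought = tri(rsi, 60, 80, 100)
    weight = oversold + neutral + overbought
    return (oversold * 1 + neutral * 0 + overbought * -1) / weight if weight else 0.0
```

A full system would combine several indicators (CCI, Williams %R, Aroon, etc.) with one rule per indicator state, but the fuzzify/infer/defuzzify skeleton is the same.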
Adaptive Savitzky-Golay Filtering and Its Applications
by Jozsef Dombi, Adrienn Dineva
Abstract: Noise reduction is a central issue in the theory and practice of signal processing. The Savitzky-Golay (SG) smoothing and differentiation filter is widely acknowledged as a simple and efficient method for denoising, yet only a few books on signal processing cover it. As is well known, the performance of the classical SG filter depends on the appropriate setting of the window length and the polynomial degree, which should match the scale of the signal; in the case of signals with a high rate of change, the performance of the filter may be limited. This paper presents a new adaptive strategy to smooth irregular signals based on the Savitzky-Golay algorithm. The proposed technique ensures high-precision noise reduction by iterative multi-round smoothing and correction, where in each round the parameters change dynamically according to the results of the previous smoothing. Our study provides additional support for data compression based on an optimal resolution of the signal with linear approximation. Simulation results validate the applicability of the novel method.
Keywords: Savitzky-Golay filter; adaptive multi-round smoothing; iterative smoothing and correction; noise removal; data compression.
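The classical fixed-parameter SG filter that the adaptive scheme builds on fits in a few lines using the standard 5-point quadratic kernel (-3, 12, 17, 12, -3)/35; by construction it reproduces quadratics exactly on interior points. The endpoint handling below (copying the first and last two samples) is a simplification.

```python
def savgol_smooth(signal, coeffs=(-3, 12, 17, 12, -3), norm=35):
    """Savitzky-Golay smoothing with the classic 5-point quadratic kernel.
    Endpoints (first/last two samples) are left unsmoothed for simplicity."""
    half = len(coeffs) // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = sum(c * signal[i + j - half] for j, c in enumerate(coeffs)) / norm
    return out

smoothed = savgol_smooth([0, 0, 1, 0, 0, 0, 0])   # lone spike damped to 17/35
```

The paper's adaptive strategy amounts to re-running such a pass with window length and degree adjusted each round from the previous round's residual, rather than keeping one fixed kernel.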
A new hybrid Genetic Algorithm for job shop scheduling problem
by Marjan Kuchaki Rafsanjani, Milad Riyahi
Abstract: The job shop scheduling problem is an NP-hard problem. This paper proposes a new hybrid genetic algorithm to solve it. A new selection criterion is introduced to tackle the premature convergence problem, and, to make full use of the structure of the problem itself, a new machine-based crossover is designed. Furthermore, a new local search is designed that improves the local search ability of the proposed GA. The new approach is run on several benchmark problems, and computer simulations show its effectiveness.
Keywords: Job shop scheduling problem (JSSP); Genetic algorithm; Selection operator; Crossover operator; Local search.
An Optimized Component Selection Algorithm for Self-Adaptive Software architecture using the Component Repository
by Mohana Roopa Y., Rama Mohan Reddy A
Abstract: Component-based software engineering focuses on the development and reuse of components. Component reuse depends on the storage and retrieval process, which is carried out by a component repository. This paper presents a component repository model that helps developers achieve good productivity. Selecting a component from the repository according to functionality and requirements is a crucial part, so this paper proposes an algorithm for optimizing component selection under functionality constraints like customer size, reliability and performance. The experimental results evaluate the algorithm and show that it performs better in terms of component selection.
Keywords: component; software system selection; adaptability; functionality.
Test Optimization: An Approach Based On Modified Algorithm For Software Network
by Manju Khari, Prabhat Kumar, Gulshan Shrivastava
Abstract: Testing is an indispensable part of the software development life cycle. It is performed to improve the performance, quality, efficiency and reliability of the software network. In this paper, three algorithms are implemented for test suite optimization, namely the Genetic Algorithm (GA), the Cuckoo Search Algorithm (CSA) and the Artificial Bee Colony (ABC) algorithm, and, with the help of the results obtained from these implementations, a novel hybrid algorithm is proposed to enhance the optimization results. To test a system, suitable test cases are developed, but these test cases need to be optimized, as executing all of them is time-consuming. Testing a system with all possible test cases increases the time required for testing and also affects the cost of the product. Thus, it is a good idea to reduce the number of test cases, which in turn reduces the testing time and the work of the software tester. The authors focus on optimizing test suites so that only the best test cases need to be executed to test the software network. To optimize the test cases, nature-inspired algorithms are used, as they provide the best optimization techniques. The proposed algorithm is implemented, and experiments are conducted on various real-time programs to evaluate the efficiency of the proposed approach. Experimental results show that the hybrid algorithm generates results comparable to or better than the existing state-of-the-art algorithms.
Keywords: Genetic; Cuckoo Search; Artificial bee; Test suite Optimization; Hybrid algorithm; software network; Test data.
Application of Artificial Neural Network (ANN) on deformation and densification behaviour of sintered Fe-C steel under cold upsetting
by Kandavel Thanjavur Krishnamoorthy, Ashok Kumar T, Vijay D, Aswanth Samraj
Abstract: Cold upsetting is one of the densification processes used in P/M materials to achieve the desired density by applying required amount of load. The present work aims to study the deformation and densification characteristics of plain carbon steel (Fe-C) containing various levels of carbon viz. 0.2%, 0.5% and 1% under cold upsetting.
Elemental powders of iron (Fe) and graphite (C) were accurately weighed based on the composition requirements and blended homogeneously using a pot mill. Cylindrical preforms of Fe and Fe-C powders were prepared using a 100 T capacity Universal Testing Machine (UTM) by applying suitable axial pressure to obtain 80% of the theoretical density of the respective alloy steels. The green compacts were sintered in a 3.5 kW electric muffle furnace, and nitrogen gas was purged to prevent oxidation during sintering.
The sintered preforms of the various Fe-C compositions were subjected to cold upsetting. The axial and lateral deformations were calculated from physical measurements taken on the deformed and non-deformed specimens, and the density of the deformed preforms was measured by the Archimedes principle. The experimental data were then used to generate the deformation and densification model using Artificial Neural Networks (ANN).
It is observed from the experimental results that increasing carbon content improves the deformation and densification properties of the iron material, as the carbon behaves like a lubricant and increases the binding strength between the grains. As the target value of the ANN model approaches unity, it can be concluded that the ANN predictions and the experimental values are in good agreement. It is also noted that ANN can be used as a prediction model for the deformation and densification behaviour of any P/M material.
Keywords: Artificial Neural Network; Powder metallurgy; Densification; Deformation; True axial stress; Plain carbon steel.
A study of Total Technical Life (TTL) for an Aircraft with implementation and suggestions for improvisation.
by Balachandran A, P.R. Suresh, Shriram K. Vasudevan
Abstract: Travel has become more sophisticated and inevitable these days, and aircraft have become one of the most preferred means of reaching a destination, used not only by civilians but also by the military for operational purposes. With such complicated designs, there is a need for more reliable systems and for effective use of the service life of the aircraft, known as the Total Technical Life (TTL). The present system of fixing the TTL for an aircraft is a passive method in which predicted values are compared with values obtained from a sample aircraft specially monitored for this purpose. However, the actual fatigue of each aircraft is different, as every aircraft is flown in different ways under different conditions at different locations. To cater for these unknown parameters, a factor of safety is applied and a safe utilization life is obtained. When the aircraft reaches its safe life limit, it is withdrawn from service even though useful life still remains. In the absence of actual data for each aircraft, the present method is the only way to fly the aircraft safely, at the cost of under-utilization. In recent years, much advancement has taken place in data sensing, capturing and processing, and highly reliable computing platforms are available at much lower cost. With this technological advancement, it is possible to monitor the fatigue of all aircraft structures dynamically and to collect the actual data. The actual fatigue experienced by the aircraft during its usage period can then be compared against the predicted value, so that the life of an aircraft can be extended without compromising safety. The proposed methodology was tested with a model aircraft and the readings were found to be consistent.
The proposed system is a way forward for the optimal use of aircraft and a scientific way of providing life extensions based on actual data rather than an approximation of the service life of the aircraft fleet.
Keywords: TTL; Aircraft; Total Technical Life; Under-utilization; Life of the aircraft; safety; Arduino; Microcontroller.
A Stable Routing Algorithm for Mobile Ad Hoc Network Using Fuzzy Logic System
Abstract: The Mobile Ad Hoc Network (MANET) is an infrastructure-less network in which nodes communicate either directly or indirectly through intermediate nodes. The network topology can change frequently due to its dynamic nature and limited resource availability. In MANETs, energy-efficient routing is a major issue because nodes operate on limited battery power. An energy-efficient routing algorithm can ensure high performance by increasing the network lifetime. In order to make the network more scalable, the routing algorithm needs to maximize the usage of network resources. This paper proposes a novel routing approach, the Energy Aware Fuzzy Controlled Routing (EAFCR) algorithm. The proposed algorithm adds intelligence to the node by applying fuzzy decision tools to develop a more stable and energy-efficient route during the route discovery phase. The fuzzy logic system uses the per-hop delay, available energy and link quality to form a more stable route. With the proposed EAFCR algorithm, the packet delivery ratio, end-to-end delay, residual energy and throughput show improvements of 3.05%, 1.38%, 4.25% and 3.3%, respectively, over the existing Fuzzy Logic Modified AODV Routing (FMAR) protocol.
Keywords: infrastructure-less; topologies; fuzzy decision; routing; protocol.
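As a rough illustration of the fuzzy decision step the EAFCR abstract describes, the sketch below scores candidate routes from per-hop delay, residual energy and link quality. All membership functions, weights and route values are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def route_score(delay_ms, energy_frac, link_quality):
    """Fuzzy stability score in [0, 1]; higher means a more stable route.
    Memberships and weights are illustrative, not from the paper."""
    low_delay   = tri(delay_ms, -1, 0, 50)          # "low delay" peaks at 0 ms
    high_energy = tri(energy_frac, 0.4, 1.0, 1.6)   # "high energy" peaks at full battery
    good_link   = tri(link_quality, 0.3, 1.0, 1.7)  # "good link" peaks at quality 1.0
    # Simple weighted aggregation standing in for a full rule base + defuzzifier.
    return 0.3 * low_delay + 0.4 * high_energy + 0.3 * good_link

# Two hypothetical routes: (per-hop delay ms, residual energy fraction, link quality).
routes = {"A-B-D": (12, 0.9, 0.8), "A-C-D": (40, 0.5, 0.6)}
best = max(routes, key=lambda r: route_score(*routes[r]))
```

A full implementation would replace the weighted sum with a Mamdani-style rule base and a defuzzification step.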
Automatic Short Answer Grading using Rough Concept Clusters
by Udit Kr. Chakraborty, Debanjan Konar, Samir Roy, Sankhayan Choudhury
Abstract: Evaluation of text-based answers has remained a challenge for researchers in recent years, and with the growing acceptance of e-learning systems, a solution needs to be achieved fast. While assessing knowledge content, correctness of expression and linguistic patterns are complex issues in themselves, a shorter answer may be evaluated using keyword matching alone. The work proposed in this paper is aimed at evaluating short text answers, no longer than a single sentence, using keyword matching. The proposed method agglomerates keywords from a group of model answers, forming clusters of words. The evaluation process thereafter exploits the inherent roughness of the keyword clusters to evaluate a learner's response through comparison and keyword matching. The novelty of the proposed system lies in the use of fuzzy membership functions along with rough set theory to evaluate the answers.
Rigorous tests conducted on a dataset built for the purpose returned good correlation values with the average of two human evaluators. The proposed system also fares better than Latent Semantic Analysis (LSA) based and Link Grammar based evaluation systems.
Keywords: Text answer; Single Sentence; Keyword; Concept Cluster; Rough Set; Latent Semantic Analysis; Link grammar.
A hybrid grey wolf optimization and pattern search algorithm for automatic generation control of multi area interconnected power systems
by Vikas Soni, Girish Parmar, Mithilesh Kumar
Abstract: A hybrid grey wolf optimization-pattern search (hGWO-PS) algorithm has been proposed to optimize the parameters of two degree of freedom-proportional integral derivative (2DOF-PID) controllers in multi-area power systems for automatic generation control. The integral of time multiplied by absolute error (ITAE) has been considered as the objective function in the present work. Firstly, the algorithm has been applied to a two-area non-reheat thermal power system, and the analysis of ITAE, dynamic responses and robustness of the same has been carried out. The dynamic behaviour of the system optimized by the proposed approach hardly alters with broad changes in the load and system parameters within the range [-50%, +50%]. The proposed algorithm has also been applied extensively to a three-area hydro-thermal power system with appropriate generation rate constraints (GRC). The simulation results show that the proposed algorithm performs better than recently published approaches in terms of lower ITAE value, settling time and overshoot, and faster return of frequency and tie-line power deviations to zero.
Keywords: Automatic generation control; two area parallel interconnected thermal power system; three area interconnected hydro thermal power system; two degree of freedom-PID controllers; grey wolf optimization; pattern search; generation rate constraints; governor dead band nonlinearities.
Privacy Preservation Using Hybrid Cloud Environment and Map-Reduce for Data Deduplication
by Rutuja Mote, Ambika Pawar
Abstract: The cloud is an umbrella under which internet-based development and services are scrutinized and explored, pioneering novel opportunities to manifest a large-scale and flexible computing framework. The actors of a cyber supply chain engage through important functionalities such as the utility model of consumption with elasticity, the abstraction of the framework, and so on. Hybrid clouds vary greatly in sophistication, facilitating portability of workloads across the entire inter-cloud without compromising users' availability, security or performance requirements. This paper helps to enhance a privacy design model as cloud computing adoption hits the fast lane. In the first phase, the system devises the formation of a hybrid cloud architecture. In the second phase, the system implements security tactics, namely the Advanced Encryption Standard (AES) technique and the Byte Replacement Shuffling (BRS) algorithm, in consonance with the sensitivity level assigned to the file, to preserve privacy. The third phase delineates the optimization of response time (to upload and download a file) and workflow using Map-Reduce for data deduplication, for a comprehensive privacy and security solution.
Keywords: Hybrid Cloud Architecture; File Upload; File Download; Byte Replacement Shuffling; Map-Reduce; Data Deduplication; Security; Privacy.
Possible Adoption of Various Machine Learning Techniques in Cognitive Radio-A Survey.
by Barnali Dey, Ashraf Hossain, Rabindranath Bera
Abstract: The concept of the Cognitive Radio (CR) system addresses the need of next-generation wireless communication technology to provide intelligence and superior performance to a wireless device. The CR is essentially an intelligent system which is aware of its environment and is capable of adapting to the changing environment and user needs. This adaptation can be realised well with machine learning capability inculcated within the system, since a key strength of any machine learning paradigm is its ability to adapt to dynamically changing system parameters. In this paper, an attempt has been made to compile various applications of machine learning techniques for the different activities of the CR cycle. Further, this note reviews work on the development of machine learning techniques for spectrum sensing in CR, in order to make the CR system as a whole practically feasible and robust, thus mitigating the existing computational limitations of conventional techniques.
Keywords: Cognitive Radio; Machine Learning; Spectrum Sensing; Energy Detection.
Raga Recognition through Tonic Identification using Flute Acoustics
by Sinith M S, Shikha Tripathi, Murthy K V V
Abstract: Tonic identification is traditionally approached using a pitch histogram; acoustic characteristics of musical instruments have not been used for the purpose. Conventional tonic identifiers are either knowledge based or based on multi-pitch analysis. These methods depend, directly or indirectly, on the drone sound, and their efficiency decreases drastically in the absence of the latter. In this paper, a tonic identification method that is independent of the drone sound is proposed for flute signals, making use of the acoustic characteristics of the instrument. In addition, tonic identification is utilized for real-time raga recognition.
Keywords: Tonic identification; Indian Classical Music; Raga recognition; Flute acoustics.
Performance improvement in Cardiology department of a hospital by Simulation
by Shriram K. Vasudevan, Narassima Seshadri, Anbuudayasankar SP, Thennarasu M
Abstract: The healthcare industry plays a vital role in the life of humankind and in the economic development of a country. Healthcare services have to be provided as and when required, without time delay or compromise on quality. This research focuses on reducing the waiting time of patients, as it is considered one of the important parameters governing service quality and improving patient satisfaction. This was achieved through a case study in the cardiology outpatient department of a private hospital in South India. Cardiology was chosen as it is one of the most critical areas demanding immediate attention. The study follows a discrete event simulation approach for analysing the trajectory of patients in the cardiology department, determining various performance parameters, suggesting changes to the existing system and developing alternate models to compare against the existing model. Reducing waiting time permits physicians to attend to more patients in a given period, as is evident from the results obtained from the developed models. Simulation results revealed that the four alternate systems proposed were more effective than the existing system.
Keywords: Discrete Event Simulation; Arena model; Healthcare; Cardiology; Outpatient department; Waiting time reduction.
Computing the Shortest path with Words
by Arindam Dey, Anita Pal
Abstract: Computing with Words is a soft computing technique for solving decision-making problems with information described in natural language. It is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements or computations. In this paper, we propose a generalized Dijkstra algorithm to solve the shortest path problem from a specific node to every other node on a fuzzy graph in which words taken from natural language are assigned to the arcs as their arc lengths. We call this problem computing the shortest path with words (CSPWW). In a shortest path problem, the arc lengths may represent time or cost. Human beings describe those arc costs in real life by terms such as small, large and some, which do not supply natural numbers or fuzzy numbers; we describe those terms as words. The same word may have different meanings to different people, so uncertainty appears in the description of a word in natural language. Here, we use Interval Type-2 Fuzzy Sets (IT2FSs) to capture the uncertainty of the words. A perceptual computer model is introduced for use in our algorithm. The Per-C associated with the shortest path problem is called a shortest path advisor (SPA), and its design is described in detail in this paper. It consists of three components: encoder, CWW engine and decoder. The encoder receives all the words present in the path and transforms them into IT2FSs. The CWW engine adds all the IT2FSs and returns an IT2FS for the corresponding path. The decoder receives the output of the CWW engine and calculates the corresponding centroid-based ranking value of the path. This rank is used to determine the shortest path. A numerical example of a transportation network is used to illustrate the effectiveness of the proposed method.
Keywords: Computing with words; Interval type-2 fuzzy sets; perceptual computer; centroid rank.
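The CSPWW idea can be sketched by running Dijkstra's algorithm over word-labelled arcs, with each word collapsed to a single representative cost. The word-to-cost table below is a hypothetical stand-in for the paper's encoder/CWW-engine/decoder pipeline, which instead aggregates IT2FSs along a path and ranks paths by centroid:

```python
import heapq

# Stand-in for the encoder/decoder: each word maps to one representative cost
# (in the paper this role is played by the centroid rank of an IT2FS).
WORD_COST = {"small": 1.0, "some": 2.5, "large": 5.0}

def shortest_path_with_words(graph, src):
    """Dijkstra over arcs labelled with words; graph[u] = [(v, word), ...]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, word in graph.get(u, []):
            nd = d + WORD_COST[word]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny hypothetical transportation network.
g = {"s": [("a", "small"), ("b", "large")],
     "a": [("b", "some")],
     "b": []}
dist = shortest_path_with_words(g, "s")
```

Here the path s-a-b (small + some = 3.5) beats the direct arc s-b (large = 5.0), mirroring how the SPA would rank the aggregated word costs of each path.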
An Analysis of the Most Accident Prone Regions within the Dhaka Metropolitan Region Using Clustering
by M. Rashedur Rahman
Abstract: Most of the world's developed countries have decreased unusual deaths, such as traffic accidents, among their citizens by taking efficient steps. In Bangladesh, injuries from road accidents have become a regular occurrence, and the highly populated cities still see such incidents daily. As the number of vehicles increases and most drivers are unwilling to follow traffic rules, injuries due to traffic accidents are not going down at all. Among the big cities in Bangladesh, Dhaka has the highest number of road accidents. In this paper, therefore, we focus on the most hazardous regions in the Dhaka Metropolitan area. We collected accident-related data from the Accident Research Institute (ARI) at Bangladesh University of Engineering and Technology (BUET), located in the city of Dhaka. We used Fuzzy C-means clustering, Expectation Maximization, Hierarchical Agglomerative clustering and K-means clustering to identify the regions where traffic incidents occur most in the Dhaka Metropolitan area. Missing values for some attributes in the dataset were replaced by the mean/mode of the attribute itself.
Keywords: data mining; accidental injury severity; clustering; hazardous areas; Dhaka metropolitan area.
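To illustrate the clustering step described in the abstract, a minimal k-means over hypothetical 2-D accident coordinates (made-up points, not the ARI/BUET data) might look like:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its cluster.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Hypothetical accident coordinates forming two obvious hotspots.
pts = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 4.9), (4.9, 5.2)]
centroids, labels = kmeans(pts, 2)
```

On real data the coordinates would be geo-locations of accidents, and the cluster centres would mark the candidate hazardous regions; the other algorithms named in the abstract (Fuzzy C-means, EM, hierarchical) replace only the assignment/update rules.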
A Statistical Comparison for Evaluating the Effectiveness of Linear and Nonlinear Manifold Detection Techniques for Software Defect Prediction
by Soumi Ghosh, Ajay Rana, Vineet Kansal
Abstract: Software systems are associated with the common problem of having a wide range of defects. Nowadays, most software systems are released without any defect prediction; it is therefore essential to predict defects in time to improve software quality and security and to obtain the desired result at minimum cost. This is possible if defects in a software system can be predicted at the initial stage of the software development process by applying proper and effective techniques. This paper presents Manifold Detection Techniques (MDTs) for software defect prediction, which differ from conventional methods applied earlier, such as regression and feature selection methods. The performance of classifiers applied with and without MDTs has been compared in order to evaluate the effectiveness of different (linear and nonlinear) MDTs in reducing the dimensions of software datasets. In this process, eight classifiers were applied to four PROMISE datasets to determine the best-performing classifier with respect to prediction performance measures (accuracy, precision, recall, F-measure, AUC and misclassification error), with bias reduced by use of a 10-fold cross-validation test. The experimental results show that FastMVU is the most accurate technique among the nonlinear MDTs when applied to any defective software dataset. A comparative analysis and evaluation of the prediction performance of all classifiers demonstrated that the Bayesian Network (BN) is the most effective technique for software defect prediction with (linear or nonlinear) or without MDTs. The performance of all classifiers with and without MDTs has been statistically analysed and tested by performing a paired two-tailed t-test.
Keywords: Defects; Linear; Nonlinear; Manifold Detection; Promise Datasets; Prediction; Software System.
Modified SVPWM Technique for a Sensorless Controlled Induction Motor Drive using Neural Network Observer and Predictive Controller
by Shoeb Hussain, Mohammad Abid Bazaz
Abstract: The use of a multi-level inverter in a sensorless control scheme increases reliability in state parameter estimation. In this paper, sensorless control is presented using a neural network observer that uses the direct and quadrature current and voltage components for speed estimation. Distortion in current and voltage will result in deviations in speed estimation. To address this problem, this paper presents a modified space vector modulation scheme for sensorless control of an induction motor drive fed by a multi-level inverter. The modulation scheme uses fewer switching states and is employed on a cascaded H-bridge inverter configuration. This results in reliable speed estimation by reducing distortion in current and voltage measurement. Moreover, the paper uses a predictive controller for speed control. Simulation is carried out in MATLAB and results show improved performance of sensorless operation.
Keywords: Induction motor; predictive controller; neural network observer; Sensorless Vector control; SVPWM.
Determination of Reliability Index of Cantilever Retaining Wall by RVM, MPMR and MARS
by Pijush Samui, Rahul Kumar, Sunita Kumari, Sanjiban Sekhar Roy
Abstract: The overturning criterion is an important parameter for designing a cantilever retaining wall. This study adopts Relevance Vector Machine (RVM) based First Order Second Moment (FOSM), Minimax Probability Machine Regression (MPMR) based FOSM and Multivariate Adaptive Regression Spline (MARS) based FOSM methods for determining the reliability index of a cantilever retaining wall based on the overturning criterion. RVM, MPMR and MARS have been used to overcome the limitations of the FOSM model. An example illustrates how the proposed RVM-based FOSM, MPMR-based FOSM and MARS-based FOSM analyses can be carried out. A comparative study has been carried out between the developed models. The results demonstrate that the developed models have the ability to overcome the limitations of FOSM.
Keywords: Retaining Wall; Reliability; First Order Second Moment Method; Minimax Probability Machine Regression; Relevance Vector Machine; Multivariate Adaptive Regression Spline.
Thumb Movement for Prosthetic Hand based Fuzzy Logic
by Anilesh Dey, Amarjyoti Goswami, Abdur Rohman, Jamini Das, Nilanjan Dey, Amira S. Ashour, Fuqian Shi
Abstract: Electromyography innovation leads the development of modern prosthesis (artificial limb) control. Prosthetic hands are developed to assist amputees in their daily activities. Over the years, it has been seen that the fluid movements required to carry out different functions, such as gripping and holding, have not reached their full potential, especially in the thumb movement pattern. Consequently, the current work proposes an efficient mechanism for the movement of the prosthetic thumb that positions the thumb even at intermediate angles such as 45.3 degrees and 78.6 degrees. Obtaining such flexibility will lead to a movement pattern more similar to that of the human hand. A fuzzy-based control strategy is applied to design a prosthetic thumb with the above-mentioned movement pattern. A Mamdani fuzzy control model is proposed with three input variables, namely the thumb's first joint bend, second joint bend and the second joint's movement in the left and right directions. The proposed system provided the expected results, where twenty-seven combinations of rules facilitate the alignment of the prosthetic thumb at different angles.
Keywords: Intermediate movements; Mamdani fuzzy control; Prosthetic thumb movement.
QoS-Aware Online Mechanism for Dynamic VM Provisioning in Cloud Market Using Q-learning
by Ayoub Alsarhan
Abstract: A cloud provider (CP) leases various resources, such as CPUs, memory and storage, in the form of Virtual Machine (VM) instances to clients over the internet. This paper tackles the issue of quality of service (QoS) provisioning in the cloud environment. We examine the use of Q-learning for provisioning VMs in the cloud market. The extracted decision function decides when to reject new requests for VMs that would violate the QoS guarantee. This problem requires that the reward for the CP be maximized while simultaneously meeting QoS constraints. These complex, contradicting objectives are embedded in our Q-learning model, which is developed and implemented as shown in this paper. Numerical analysis shows the ability of our solution to earn significantly higher revenue than the alternatives.
Keywords: Quality of Service; Cloud Computing; Resource Management; Q-learning; Cloud Service Trading.
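The Q-learning admission idea can be sketched with a toy model: states are coarse load levels, actions accept or reject an incoming VM request, and the reward penalizes accepting when capacity is exhausted. All states, transitions, rewards and learning parameters below are illustrative assumptions, not the paper's model:

```python
import random

# States 0..2 are load levels (2 = full); actions: 0 = reject, 1 = accept.
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}

def reward(state, action):
    """Accepting earns revenue unless the cloud is already full (QoS violation)."""
    if action == 0:
        return 0.0
    return -5.0 if state == 2 else 1.0

def step(state, action):
    """Accepted VM raises the load; rejection / job completion lowers it."""
    if action == 1 and state < 2:
        return state + 1
    return max(0, state - 1)

rng = random.Random(1)
s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    a = rng.choice((0, 1)) if rng.random() < EPS else max((0, 1), key=lambda x: Q[(s, x)])
    r, s2 = reward(s, a), step(s, a)
    # Standard Q-learning update
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2

policy = {st: max((0, 1), key=lambda x: Q[(st, x)]) for st in range(3)}
```

The learned policy accepts requests while capacity remains and rejects at full load, which is the shape of the admission decision function the abstract describes.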
Using modified background subtraction for detecting vehicles in Videos
by Mohamed Maher Ata, Mohamed El-Darieby, M.Abd Elnaby, Sameh A. Napoleon
Abstract: In this paper, a comparison study is introduced between the traditional foreground detector (based on the background subtraction technique) and a modified detector (based on the empty-frame subtraction technique). Our case study estimates average vehicular speed and the level of crowdedness in three test traffic videos with five different indices (frame rate, resolution, number of frames, duration and extension). The proposed modification to the background subtraction strategy aims to reduce vehicle detection processing time, which increases vehicle tracking efficacy. In addition, we have applied several video degradations (salt-and-pepper noise, Gaussian noise and speckle noise) to the traffic videos in order to evaluate the effect of challenging weather conditions on detection processing time. This degradation has been applied to both the traditional and the modified background subtraction for detecting vehicles in traffic videos. Results show an obvious enhancement in the processing time of the detected vehicles with the proposed modification compared with the traditional background detector.
Keywords: computer vision; foreground object detection; background subtraction; video degradation.
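The empty-frame subtraction idea can be sketched as simple per-pixel differencing against a pre-captured empty-road frame; the tiny grayscale frames and threshold below are toy values, not the paper's videos:

```python
def detect_vehicles(frame, empty_frame, threshold=30):
    """Empty-frame subtraction: flag pixels that differ from a pre-captured
    empty-road frame by more than `threshold` (grayscale 0-255)."""
    mask = [[abs(p - q) > threshold for p, q in zip(row, erow)]
            for row, erow in zip(frame, empty_frame)]
    moving_pixels = sum(sum(row) for row in mask)  # foreground pixel count
    return mask, moving_pixels

empty = [[10, 10, 10],
         [10, 10, 10]]          # empty road (reference background)
frame = [[10, 200, 10],
         [10, 210, 10]]         # same scene with a bright "vehicle" in the middle
mask, count = detect_vehicles(frame, empty)
```

In a real pipeline the binary mask would feed blob detection and tracking; the speed gain the abstract reports comes from skipping the adaptive background modelling of the traditional detector.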
An Efficient prefix based labeling scheme for Dynamic update of XML Documents
by Dhanalekshmi Gopinathan, Krishna Asawa
Abstract: The increasing volume of XML documents and the real-world requirement to support updates have motivated the research community to develop dynamic labeling schemes. Each of the dynamic labeling schemes proposed to date differs in characteristics and has its own advantages and limitations: they may differ in the queries supported, their update performance, label size, etc. In this paper, a new prefix-based labeling scheme is proposed which is compact and dynamic, and which also facilitates the computation of structural relationships, the core part of query processing. The proposed scheme can handle both static and dynamic XML documents. Experiments were conducted to evaluate storage requirements, structural relationship computation and update processing, and the results are compared with some existing labeling mechanisms.
Keywords: Labeling Scheme; XML; Structural relationship; dynamic update; ancestor-descendant; parent-child relationship.
Content based load balancing of tasks using task clustering for cost optimization in cloud computing environment
by Kaushik Sekaran, Venkata Krishna P
Abstract: Cloud computing is the recent mantra of techies and internet users all around the world. The power of cloud computing is enormous, as it provides big services at optimal cost and in a reliable manner. Load balancing of tasks across cloud servers is an important issue to be addressed. In this paper, we propose a task clustering algorithm to minimize the load across cloud servers through content-based load balancing of tasks, together with a cost reduction method for optimal energy consumption at the cloud data center heads. The results analysed in our paper are better than those of existing content-based load balancing models. Our approach clearly achieves optimal load balancing of tasks with respect to upload bandwidth utilization, minimal latency and other QoS (quality of service) metrics.
Keywords: Cloud computing; load balancing; tasks clustering; cost reduction; energy consumption; QoS (Quality of service) metrics.
A Two Step Clustering Method for Facility Location Problem
by Ashish Sharma, Ashish Sharma, A.S. Jalal, Krishna Kant
Abstract: Facility location problems are designed with the objective of gaining more profit. Profit is gained when the maximum demand is satisfied, and demand is satisfied when the maximum number of customers is covered or served. Various approaches have been investigated to attain the maximum number of customers. In general, most approaches to facility location models are based on a radius as the service area of a facility; facilities whose service fits within a radius can therefore be handled by the conventional approach. However, conventional approaches fail to allocate facilities that are constrained by topographical and road network barriers. In this paper, we propose a model to optimize facility allocation in such scenarios. In the proposed model, we use a two-step clustering approach to solve the facility location problem. Experimental results illustrate that the proposed algorithm, based on density affinity propagation (DAP), can be used to construct a solution for maximal service and covering area.
Keywords: Facility location; Proximity; Density; Approximation; Clustering.
Marker and Modified Graph Cut Algorithm for Augmented Reality Gaming.
by Shriram K. Vasudevan, R.M.D. Sundaram
Abstract: Augmented reality aims at superimposing a computer-generated image on a user's view of the real world, thereby creating a composite view. Virtual reality, on the other hand, keeps the user isolated from the real world and immersed in a world that is completely fabricated. The main objective of this research is to capture a real-life image and augment it as a component of a gaming environment using the principles of augmented reality. For this implementation, we have chosen car racing as our gaming environment. The core elements are image segmentation using a CIELAB color space based graph cut algorithm, 2D-to-3D modelling, and game development with augmented reality. The tools utilized are MATLAB, insight3d and Unity3D. The proposed idea will enable someone to view a virtual environment with real components that are integrated dynamically.
Keywords: Augmented Reality; Gaming; Image extraction; Modelling; Image segmentation; Racing.
Predicting longitudinal dispersion coefficient in natural streams using Minimax Probability Machine Regression and Multivariate Adaptive Regression Spline
by Sanjiban Sekhar Roy, Pijush Samui
Abstract: This article employs Minimax Probability Machine Regression (MPMR) and Multivariate Adaptive Regression Spline (MARS) for prediction of the longitudinal dispersion coefficient in natural streams. Hydraulic features such as channel width (B), flow depth (H), flow velocity (U) and shear velocity (u*), and geometric features such as channel sinuosity (σ) and channel shape parameter (β), were taken as the inputs. The dispersion coefficient Kx was the decision parameter for the proposed machine learning models. MARS does not assume any functional relationship between inputs and output; it is a non-parametric regression model that splits the data and fits each interval with a basis function. MPMR is a probabilistic model which maximizes the minimum probability of the predicted output and provides output within some bound of the true regression function. The proposed study gives an equation for prediction of the longitudinal dispersion coefficient based on the developed MARS model, which has been compared with the proposed MPMR. Finally, the performances of the models have been measured by different performance metrics.
Keywords: Longitudinal Dispersion Coefficient; Natural Streams; Minimax Probability Machine Regression; Prediction; Multivariate Adaptive Regression Spline.
A Brain-like Cognitive Process with Shared Methods
by Kieran Greer
Abstract: This paper describes a new entropy-style equation that may be useful in a general sense and can be applied to a cognitive model with related processes. The model is based on the human brain, with automatic and distributed pattern activity. Methods for carrying out the different processes are suggested. The main purpose of this paper is to reaffirm earlier research on different knowledge-based and experience-based clustering techniques. The overall architecture has stayed essentially the same, so it is the localised processes, or smaller details, that have been updated. For example, a counting mechanism is used slightly differently, to measure a level of cohesion instead of a correct classification over pattern instances. The introduction of features has further enhanced the architecture, and the new entropy-style equation is proposed. While an earlier paper defined three levels of functional requirement, this paper re-defines the levels in a more human vernacular, with higher-level goals described in terms of action-result pairs.
Keywords: Cognitive model; distributed architecture; entropy; neural network; concept tree.
Cross-corpus Classification of Affective Speech
by Imen Trabelsi, Mohammed Salim Bouhlel
Abstract: Automatic speech emotion recognition still has to overcome several obstacles before it can be employed in realistic situations. One of these barriers is the lack of suitable training data, both in quantity and quality. The aim of this study is to investigate the effect of cross-corpus data on automatic classification of emotional speech. In this work, feature vectors constituted by the Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal are used to train Support Vector Machines (SVM) and Gaussian Mixture Models (GMM). We evaluate on three emotional databases in three different languages (English, Polish and German), following three cross-corpus strategies. In the intra-corpus scenario, the accuracies were found to vary widely, between 70% and 87%. In the inter-corpus scenario, the obtained average recall is 70.87%. The accuracies of the cross-corpus scenario were found to be below 50%.
Keywords: Cross corpus strategies; Speech emotion recognition; GMM; SVM; MFCC.
GA based efficient Resource allocation and task scheduling in multi-cloud environment
by Tamanna Jena, Jnyana Ranjan Mohanty
Abstract: Efficient resource allocation to balance load evenly in a heterogeneous multi-cloud computing environment is challenging. Resource allocation followed by competent scheduling of tasks is of crucial concern in cloud computing. Load balancing assigns incoming job requests to resources evenly so that each resource involved is efficiently utilized. The number of cloud users is immense, the volume of incoming job requests is arbitrary and the data in cloud applications is enormous. Since resources in cloud computing are limited, it is challenging to deploy various applications with irregular capacities and functionalities in a heterogeneous multi-cloud environment. In this paper, Genetic Algorithm based task mapping followed by priority scheduling in a multi-cloud environment is proposed. The proposed algorithm has two important phases, namely mapping and scheduling. Rigorous simulations were performed on synthetic data for a heterogeneous multi-cloud environment, and the experimental results are compared with existing First In First Out (FIFO) mapping and scheduling. The results clearly show better performance of the entire system in terms of makespan time and throughput.
Keywords: Load Balancing; Task Scheduling; Cloud Computing; multi-cloud environment; Genetic Algorithm.
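A genetic-algorithm task-mapping phase of the kind described can be sketched as follows; the task sizes, cloud speeds and GA parameters are made-up illustrative values, not the paper's experimental setup:

```python
import random

TASKS  = [4, 7, 3, 9, 2, 6]   # work units per task (illustrative)
SPEEDS = [1.0, 2.0]           # relative speed of each cloud (illustrative)

def makespan(mapping):
    """Completion time of the slowest cloud under a task-to-cloud mapping."""
    loads = [0.0] * len(SPEEDS)
    for task, cloud in zip(TASKS, mapping):
        loads[cloud] += task / SPEEDS[cloud]
    return max(loads)

def ga(pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(SPEEDS)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TASKS))     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutation: reassign one task
                child[rng.randrange(len(TASKS))] = rng.randrange(len(SPEEDS))
            children.append(child)
        pop = parents + children
    return min(pop, key=makespan)

best = ga()
```

The chromosome is simply the task-to-cloud assignment vector and fitness is makespan; the paper's second phase would then apply priority scheduling within each cloud.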
Using Artificial Intelligence Techniques in Collaborative Filtering Recommender Systems: Survey
by Yousef Kilani, Bushra Alhijawi, Ayoub Alsarhan
Abstract: The Internet currently contains a huge amount of data, which is growing exponentially. This leads to the problem of information overload, which makes the task of searching for information difficult and time consuming. A recommendation system is a filtering technique that recommends items to users in order to reduce the list of choices and hence save their time. There are two types of algorithms for building recommender systems: collaborative filtering methods and content-based filtering methods. It is common knowledge that collaborative filtering is one of the most commonly used recommendation approaches; our interest in this work is therefore in collaborative filtering algorithms. Many types of algorithms are used to build recommender systems, including data mining techniques, information retrieval techniques and artificial intelligence algorithms. Although a number of studies have developed recommendation models using collaborative filtering, few have tried to combine CF with other artificial intelligence techniques, such as genetic algorithms, as a tool to improve recommendation results. This survey presents the state-of-the-art artificial intelligence techniques used to build collaborative filtering recommender systems. These techniques include fuzzy algorithms, genetic algorithms, ant colony algorithms, swarm optimization algorithms, neural network algorithms and machine learning algorithms.
Keywords: Recommendation system; web intelligence; artificial intelligence; survey.
Efficient and Secure Approaches for Routing in VANETs
by Marjan Kuchaki Rafsanjani, Hamideh Fatemidokht
Abstract: Vehicular ad hoc networks (VANETs) are a particular type of mobile ad hoc network (MANET). These networks provide communication services between nearby vehicles and between vehicles and roadside infrastructure, which improve road safety and provide travellers' comfort. Due to the characteristics of VANETs, such as self-organization, low bandwidth, variable network density, rapid changes in network topology, support for safe driving, enhanced traffic efficiency, etc., and their applications, problems related to these networks, such as routing and security, are popular research topics. Much research has been performed on providing efficient and secure routing protocols. In this paper, we investigate and compare various routing protocols based on swarm intelligence and key distribution in VANETs.
Keywords: Vehicular ad hoc networks (VANETs); Swarm intelligence; Routing protocols; Cryptography.
Analysis of Energy Efficiency Based on Shortest Route Discovery in Wireless Sensor Network
by Mohit Mittal
Abstract: Today's scenario is based entirely on the advancement of existing technologies to achieve more reliable wireless communication. Wireless sensor networks are one of the popular emerging technologies and are commonly deployed in harsh environments. These networks depend mainly on battery power, so our mission is to reduce energy consumption as much as possible. Routing protocols for sensor networks are designed around minimum energy consumption. In this paper, the LEACH protocol has been modified with various shortest path algorithms to find the best performance of the sensor network. Simulation results show that the Dijkstra algorithm performs better than the other algorithms.
Keywords: LEACH; Energy efficiency; Bellman-ford algorithm; Dijkstra algorithm; BFS algorithm.
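The shortest-path computation that the abstract above pairs with LEACH can be sketched with a standard Dijkstra implementation over an energy-cost graph. The topology and costs below are illustrative, not taken from the paper:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-cost paths from source; graph maps node -> {neighbour: link cost}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative topology: link costs stand in for transmission energy.
network = {"sink": {"a": 1.0, "b": 4.0}, "a": {"b": 2.0}, "b": {}}
```

Here node "b" is cheaper to reach via "a" (cost 3.0) than directly (cost 4.0), which is the kind of saving a shortest-path route gives a cluster-head protocol.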
Optimum Generation and VAr Scheduling on a Multi-Objective Framework using Exchange Market Algorithm
by Abhishek Rajan, T. Malakar
Abstract: This paper presents an application of the Exchange Market Algorithm (EMA) to solving multi-objective optimization problems in power systems. This optimization algorithm is based on the activities of shareholders seeking to maximize their profit in the exchange market. The uniqueness of this algorithm lies in the fact that it enjoys a double exploitation and exploration property, unlike several other algorithms. In order to investigate its search capability, the EMA is utilized to solve active and reactive power system objectives simultaneously in the presence of several non-linear constraints. Both the optimum generation and VAr planning problems are formulated as conventional Optimal Power Flow (OPF) problems. Fuel cost (an active-power related objective), transmission line loss and total voltage deviation (reactive-power related objectives) are taken as the objective functions. The multi-objective optimization is performed through the weighted sum approach, and both fuzzy and equal weight approaches are utilized to declare the compromised solution. Programs are developed in MATLAB and simulations are performed on the standard IEEE-30 and IEEE-57 bus systems. The search capability of EMA in solving the multi-objective power system problems is compared with PSO based solutions.
Keywords: Optimal Power Flow; Exchange Market algorithm; multi-objective optimization; Pareto front; fuzzy decision making.
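The weighted sum scalarisation mentioned in the abstract above can be sketched as follows. This is a generic illustration with made-up objective values (e.g. fuel cost and line loss), not the authors' OPF formulation:

```python
def weighted_sum_best(candidates, weights):
    """Pick the candidate minimising the weighted sum of min-max normalised objectives."""
    k = len(weights)
    lo = [min(c[i] for c in candidates) for i in range(k)]
    hi = [max(c[i] for c in candidates) for i in range(k)]
    def score(c):
        # Normalise each objective to [0, 1] so weights are comparable across units.
        return sum(w * (c[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0)
                   for i, w in enumerate(weights))
    return min(candidates, key=score)
```

With hypothetical (fuel cost, line loss) pairs, `weighted_sum_best([(100, 5), (80, 9), (90, 6)], (0.5, 0.5))` picks the balanced candidate `(90, 6)`; sweeping the weights traces out the Pareto front mentioned in the keywords.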
A Novel Three-Tier Model with Group Based CAC for Effective Load Balancing in Heterogeneous Wireless Networks
by Kalpana S, Chandramathi S, Shriram KV
Abstract: Seamless and ubiquitous connections are the ultimate objectives of 4G technologies. But due to randomised mobility and different service classes of applications, the connection failure rate increases, which can be overcome through handover (HO). With the increased demand for handovers, the number of networks scanned for decision making and the number of negotiations for connectivity become too large. To improve their efficiency, a three-tier model is proposed, where requests of a similar type are grouped and a common negotiation is made to reduce the number of communication messages. Only qualified networks among all the reachable access points are chosen for the decision. Handover need estimation is performed to reduce unwanted handovers. Finally, adaptive resource management is made possible through a group based call admission control (GB-CAC) algorithm that harmonises up to 50 percent of the resource utilisation, ensuring higher numbers of connections with negligible call blocking and dropping.
Keywords: Point of Attachment; handover; candidate networks; elimination factor; queues; Quality of Service; Smart Terminal.
Knowledge based Semantic Discretization using Data Mining Techniques
by Jatinderkumar R. Saini, Omprakash Chandrakar
Abstract: Discretization is an important and, sometimes, an essential pre-processing step for data mining. Certain data mining techniques such as Bayesian networks, induction rules or association rule mining can be applied only to discretized nominal data. Various studies show significant improvement for certain data mining techniques when applied to discretized rather than continuous data. Several discretization methods based on statistical techniques have been reported in the literature. Such statistical techniques are inadequate in capturing and exploiting the underlying knowledge inherent in the data and the context of study. Big data with high dimensionality, and the unavailability of any a priori knowledge of the study context, make the situation even worse. To overcome this limitation, we propose a novel knowledge based semantic discretization method using data mining techniques, in which discretization is done based on semantic data. Semantic data is domain knowledge inherent in the data itself and in the context of the study. Unlike semantic data mining, no explicit ontology is associated with the data for semantic discretization; therefore, it is a challenging task to identify, capture, interpret and exploit the semantic data for semantic discretization. This study presents the novel concept of semantic discretization and demonstrates the application of data mining techniques in extracting semantic data, which is further used in knowledge based semantic discretization. We show the effectiveness of the proposed methodology by applying it to the Pima Indian Diabetes dataset, a standard dataset taken from the UCI Machine Learning repository.
Keywords: Association rule mining; Data mining; Discretization; Machine learning; Pima Indian Diabetes Dataset; Prediction Model; Semantic Discretization; Type-2 Diabetes.
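For contrast with the semantic approach the abstract above advocates, a purely statistical baseline such as equal-width discretization, the kind of method the authors argue is inadequate, looks like this (a generic sketch, not from the paper):

```python
def equal_width_bins(values, k):
    """Split the observed range into k equal-width intervals; return bin indices."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values)          # degenerate column: a single bin
    width = (hi - lo) / k
    # Clamp the maximum value into the last bin.
    return [min(int((v - lo) / width), k - 1) for v in values]
```

Such cut points depend only on the numeric range; a semantic method would instead place them at domain-meaningful thresholds (e.g. clinical glucose levels for a diabetes dataset).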
Intricacies in Image steganography and Innovative Directions
by Krishna Veni, Sudhakar P
Abstract: With the advancement in digital communication and data sets growing huge due to the computerization of data gathering worldwide, the need for data security in transmission also increases. Cryptography and steganography are well known methods for providing security: the former transforms information so as to cipher it, while the latter concentrates on concealing the very presence of data. Steganography is the practice of masking data, especially multimedia data, within other data. Visual content receives more attention from people than audio content, and a visual content file is huge compared to an audio file, thereby helping to increase the robustness of hiding algorithms. In this paper, we consider three domains in which image steganography algorithms are proposed, along with experimental results on the USC-SIPI image database which demonstrate the improvement of the algorithms over traditional ones. We propose a rule based LSB substitution method in the spatial domain, XOR based hiding in the frequency domain and data encryption standard based embedding in the wavelet domain. We find that the proposed algorithms have a better PSNR value, averaging close to 53 after embedding the secret data, while the existing algorithms have values of around 50.
Keywords: Peak Signal to Noise Ratio; Quantization; Discrete Cosine Transformation; Wavelet; Steganalysis; cipher text.
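The LSB substitution mentioned in the abstract above can be illustrated in its simplest, rule-free form: embedding message bits into pixel least-significant bits. This sketch omits the paper's substitution rules and is not the authors' algorithm:

```python
def embed_lsb(pixels, bits):
    """Write each message bit into the least-significant bit of one pixel value."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read the message back from the first n_bits pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]
```

Each pixel changes by at most one grey level, which is why plain LSB embedding keeps distortion, and hence PSNR, low in the spatial domain.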
Fuzzy Soft Set Approach for Classifying Malignant and Benign Breast Tumors
by Sreedevi Saraswathy Amma, Elizabeth Sherly
Abstract: Breast cancer is one of the most common health problems faced by women all over the world, and mammography is an effective technique used for its early detection. This work concentrates on developing machine learning algorithms combined with a mathematical model for classifying images in digital mammograms as malignant or benign. The mathematical concept of fuzzy soft set theory is advocated here, which is an extension of crisp and fuzzy sets with parameterization. Even though fuzzy and other soft computing techniques have made great progress in solving complex systems that involve uncertainty, imprecision and vagueness, the theory of soft sets opens up a new way of managing uncertain data with parameterization. The classification is performed by using a fuzzy soft aggregation operator to identify the abnormality in a mammogram image as malignant or benign. This work is a fully automated computer aided detection method which involves automated noise removal, pectoral muscle removal, segmentation of the ROI, identification of micro-calcification clusters, feature extraction and feature selection, followed by classification. The experiment, performed on images from the MIAS dataset, resulted in 95.12% accuracy.
Keywords: Digital Mammography; computer-aided diagnosis (CAD); fuzzy soft set theory; fuzzy c-means; NL-means; fuzzy soft aggregation operator.
Using Lego EV3 to Explore Robotic Concepts in a Laboratory
by Jeffrey W. Tweedale
Abstract: During a recent Massive Open On-line Course (MOOC) at the Queensland University of Technology (QUT) titled an Introduction to Robotics, a young student used the forum to ask what skills are required to gain employment. The resounding response was the need for multiple disciplines, typically including mechatronics, software, mechanical and electrical/electronics engineering. Similarly, the curriculum focused on professional systems and the scientific rigour involved in their evolution. This limits the growing community of enthusiasts and keen observers seeking greater involvement, as they are often constrained by the lack of Science, Technology, Engineering and Maths (STEM) skill sets. For these reasons, a means of accelerating the learning of key concepts is required, as well as a mechanism for providing cheap and reliable access to the tools and techniques required to participate. Although LEGO Mindstorms is considered a toy that has traditionally been targeted at children aged 8-14, it does cater for enthusiasts and is increasingly being used to support STEM initiatives. Because of its low cost and availability, Mindstorms was recently used as the focal solution in the MOOC course to enable every student to demonstrate robotic concepts independent of the pre-requisite skills. This raises a new question about how usefully LEGO can be employed to explore robotic concepts in a laboratory. The course shows it can be used for sensor development, and it was successfully used to enhance conceptual learning for the uninitiated (enthusiast, interested observer, undergraduate, post-graduate and even those being integrated within the domain).
Keywords: Cartesian Coordinates; Forward Kinematics; Inverse Kinematics; Lego; Mindstorms; Robotics.
Detection of Melanoma Skin Disease by Extracting High Level Features for Skin Lesions
by Vikash Yadav, Vandana Dixit Kaushik
Abstract: Melanoma is a very dangerous type of skin cancer compared to others. It can be cured when diagnosed in its early stage. The detection and diagnosis of skin cancer is difficult using earlier conventional methods; accurate detection and diagnosis of melanoma is possible using suitable image processing techniques. High level features measure the asymmetry of skin lesion images, and these features can be used to diagnose lesions as skin cancer (melanoma). This paper presents a large set of low level features for analyzing skin lesions. The best classification is obtained by combining the low level feature set with the high level feature set. The results show that this method can be used, and further developed, as a tool for the detection and classification of skin cancer (melanoma).
Keywords: Feature extraction; Feature descriptor; Melanoma; Skin lesion; Radial search.
Applying Genetic Algorithm to Optimize the Software Testing Efficiency with Euclidean Distance
by Rijwan Khan
Abstract: Software testing ensures that developed software is error free and reliable for customer use. For verification and validation of software products, testing is applied to these products across the software industry, so before delivery of the software to the customer, all types of testing have been applied. In this paper, automatic test cases are developed with the help of a genetic algorithm for data flow testing, and these test cases are divided into different groups using Euclidean distance. Elements of each group are applied to the data flow diagram of the program/software, and all the du-paths covering the given test suites are found. New test suites are generated with the help of the genetic algorithm to cover all du-paths.
Keywords: Software Testing; Automatic test cases; Data flow testing; Genetic Algorithm.
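Grouping test cases by Euclidean distance, as the abstract above describes, can be sketched as nearest-centre assignment over test-input vectors. The centres and vectors below are hypothetical, and this is only one plausible reading of the grouping step:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length input vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_tests(test_cases, centres):
    """Assign each test case (an input vector) to its nearest group centre."""
    groups = {i: [] for i in range(len(centres))}
    for tc in test_cases:
        nearest = min(range(len(centres)), key=lambda i: euclidean(tc, centres[i]))
        groups[nearest].append(tc)
    return groups
```

Each resulting group can then be exercised against the program's data flow diagram to record which du-paths its members cover.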
How can Reasoning improve ontology based Context-Aware systems?
by Hatim Guermah, Tarik Fissaa, Bassma Guermah, Hatim Hafiddi, Mahmoud Nassar, Abdelaziz Kriouile
Abstract: Over the past two decades, the large evolution of software engineering, telecommunications and pervasive devices has led to the emergence of a new vision of development aiming at building systems that meet more complex and personalized needs, known as context-aware systems. This type of system is becoming the next computing paradigm, in which infrastructure and services are sensitive to any change of context and thus play a crucial role in providing interactive intelligent environments. In parallel, a contextual situation refers to a higher level of information inferred from different context data flows that can be extracted from physical and virtual sensors. The power of using situations lies in their ability to provide a simple and comprehensible representation of context properties, which shields the services that manipulate them from the complexity of sensor readings, data transmission errors and inferencing activities. In this work, we aim to explore the added value of using ontology-based reasoning, focusing on first-order logic and fuzzy logic, to produce contextual situations.
Keywords: Context; Context-Aware; Situation; Semantic Web; Ontologies; Context modeling; First Order Reasoning; Fuzzy logic Reasoning; inference and Reasoning.
Fractional Inverse Full State Hybrid Projective Synchronization
by Adel Ouannas, Ahmad Taher Azar, Toufik Ziar
Abstract: Referring to fractional-order systems, this paper investigates the inverse full state hybrid projective synchronization (IFSHPS) of non-identical systems characterized by different dimensions and different orders. By taking a master system of dimension $n$ and a slave system of dimension $m$, the method enables each master system state to be synchronized with a linear combination of slave system states, where the scaling factors of the linear combination can be arbitrary real constants. Based on the fractional Lyapunov approach and the stability theory of linear fractional-order systems, the method enables commensurate and incommensurate fractional-order systems with different dimensions to be synchronized. Two different numerical examples are reported. The examples clearly highlight the capability of the conceived approach in effectively achieving synchronized dynamics for any scaling constants.
Keywords: Full state hybrid projective synchronization; Fractional chaos; Incommensurate and commensurate systems; Fractional Lyapunov approach.
Dominion Algorithm - A novel metaheuristic optimization method
by Bushra Alhijawi
Abstract: In this paper, a novel bio-inspired and nature-inspired algorithm, namely the Dominion Algorithm, is proposed for solving optimization tasks. The fundamental concepts and ideas underlying the proposed algorithm are inspired by nature and based on observation of the social structure and collective behavior of wolf packs in the real world. Several experiments were performed to evaluate the proposed algorithm and examine the correlation between its main parameters.
Keywords: Dominion Algorithm; Metaheuristic methods; Biologically-inspired algorithm; Artificial intelligence.
Fitness Inheritance in Multi-objective Genetic Algorithms: A Case Study on Fuzzy Classification Rule Mining.
by Harihar Kalia, Satchidananda Dehuri, Ashish Ghosh
Abstract: In this paper, the trade-off between accuracy and interpretability in fuzzy rule-based classifiers is examined through the incorporation of fitness inheritance in multi-objective genetic algorithms. The aim of this mechanism is to reduce the number of fitness evaluations by estimating the fitness value of offspring individuals from the fitness values of their parents. The multi-objective genetic algorithm with this efficiency enhancement technique is a hybrid of the Michigan and Pittsburgh approaches. Each fuzzy rule is represented by its antecedent fuzzy sets as an integer string of fixed length, and each fuzzy rule-based classifier, which is a set of fuzzy rules, is represented as a concatenated integer string of variable length. Our algorithm simultaneously maximizes the accuracy of rule sets and minimizes their complexity (i.e., maximizes interpretability). As a result of adopting fitness inheritance, it minimizes the total fitness computation time (i.e., the overall time to generate a rule set). Accuracy is measured by the number of correctly classified training samples, while rule complexity is measured by the number of fuzzy rules and/or the total number of antecedent conditions of the fuzzy rules. We examine our method through computational experiments on some benchmark datasets. The experimental outcome confirms that the proposed method reduces the computational cost without significantly decreasing the quality of the results.
Keywords: Classification; fuzzy classification; multi-objective genetic algorithm; fitness inheritance; accuracy; interpretability.
Geometric Based Histograms for Shape Representation and Retrieval
by Nacera Laiche, Slimane Larabi
Abstract: In this paper, we present a new approach for shape representation and retrieval based on histograms. In the design of the proposed histogram descriptor, we consider the concept of curve points. This integration into the proposed histogram-based approach is quite different from prior work, since geometric description is stored in the histograms. The proposed description is not only effective and invariant to geometric transformations and deformations, but is also insensitive to articulations and occluded shapes, as it has the advantage of exploiting the geometric information of points. The generated histograms are then used to match shapes by comparing their histograms using dynamic programming. Experimental results of shape retrieval on different kinds of shape databases show the efficiency of the proposed approach compared with existing shape matching algorithms in the literature.
Keywords: Log-polar histogram; Least squares curve; High curvature points; Shape description; Shortest augmenting path algorithm; Shape retrieval.
Improved Biogeography-based Optimization
by Raju Pal, Mukesh Saraswat
Abstract: Biogeography-based optimization (BBO) is one of the popular evolutionary algorithms, inspired by the theory of island biogeography. It has been successfully applied to various real world optimization problems such as image segmentation, data clustering, combinatorial problems, and many more. BBO finds the optimal solution by using its two main operators, namely migration and mutation. However, it sometimes gets trapped in local optima and converges slowly due to the poor population diversity generated by the mutation operator. Moreover, the single-feature migration property of BBO gives poor performance on non-separable functions. Therefore, this paper introduces a new variant of BBO, known as improved BBO (IBBO), that enhances its migration and mutation operators. The proposed variant successfully improves the population diversity and convergence behavior of BBO and obtains better solutions for non-separable functions. The performance of the proposed variant has also been compared and analyzed against other existing algorithms over 20 benchmark functions.
Keywords: Evolutionary algorithm; Biogeography-based optimization; Migration operator; Mutation operator.
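The single-feature migration operator of classical BBO, which the abstract above identifies as a weakness on non-separable functions, can be sketched as follows. This is a generic textbook-style sketch, not the IBBO variant proposed in the paper, and the rate scheme is one common linear choice:

```python
import random

def bbo_migrate(population, fitness, rng=None):
    """One migration pass of classical BBO: habitats with poor fitness (high
    immigration rate) import single solution features from habitats with good
    fitness (high emigration rate), chosen by roulette wheel."""
    rng = rng or random.Random(0)
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i])   # best (lowest) first
    rank = {habitat: r for r, habitat in enumerate(order)}
    lam = [(rank[i] + 1) / n for i in range(n)]          # immigration rate
    mu = [1.0 - lam[i] for i in range(n)]                # emigration rate
    new_pop = [list(ind) for ind in population]
    for i in range(n):
        for d in range(len(population[i])):
            if rng.random() < lam[i]:
                pick, acc = rng.random() * sum(mu), 0.0
                for j in range(n):                       # roulette wheel on mu
                    acc += mu[j]
                    if acc >= pick:
                        new_pop[i][d] = population[j][d]
                        break
    return new_pop
```

Because each dimension migrates independently, good combinations of coupled variables are easily broken apart, which is exactly the non-separability problem the proposed IBBO variant targets.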
Sequential Pattern based Activity Recognition model for Ambient Computing
by Gitanjali J, Muhammad Rukunuddin Ghalib
Abstract: In recent years, human activity recognition has gained popularity in ambient computing. Human activity recognition consists of identifying the daily activities of users by observing their actions. Action identification is a complex task given the data generated by each sensor. In this paper, sequential pattern based activity recognition is proposed for identifying sequential patterns among actions in a given dataset, with a support value used as the parameter to validate a sequence. The experimental evaluation is performed on a real-time dataset, and it is observed that the sequential pattern approach is very beneficial in reducing the execution time and increasing the classification accuracy of the classifiers.
Keywords: Action; Activity; sensor based data; sequence patterns; classifiers.
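The support value the abstract above uses to validate action sequences can be illustrated as the fraction of recorded sequences containing a candidate pattern. The action names are hypothetical, and contiguous matching is only one possible definition:

```python
def sequence_support(sequences, pattern):
    """Fraction of recorded action sequences containing `pattern` as a
    contiguous subsequence -- the support used to validate a candidate pattern."""
    def contains(seq, pat):
        return any(seq[i:i + len(pat)] == pat
                   for i in range(len(seq) - len(pat) + 1))
    return sum(contains(s, pattern) for s in sequences) / len(sequences)
```

Patterns whose support exceeds a chosen threshold would be kept as valid sequential patterns and fed to the downstream classifiers.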
Evaluation of Large Shareholders Monitoring or Tunneling Behavior in Companies Accepted in Tehran Stock Exchange
by Sahar Mojaver
Abstract: Shareholders' wealth is very important in the real world of finance and has received increasing attention in recent years. Although the purpose of each investment, and consequently the main purpose of each company, is maximizing shareholder wealth, over the past decades most companies have not paid enough attention to it. Ownership composition, particularly the ownership concentration of majority shareholders, is one of the most important factors influencing the control and management of companies. When large shareholders or internal shareholders such as managers have the capacity to control the company, they may have incentives to extract private benefits. Given the importance of the monitoring behavior of controlling shareholders, this study investigates the monitoring or tunneling behavior of large shareholders in companies accepted in the Tehran Stock Exchange. To do so, 125 companies over the period 2010 to 2011 (a total of 750 company-years) are analyzed using a systematic elimination sampling method. Results show that there is a significant relationship between large shareholders' tunneling behavior and financial performance (return on equity and Tobin's Q indexes) in companies accepted in the Tehran Stock Exchange, and this relationship is U-shaped.
Keywords: Tunneling Behavior; Large Shareholders; Companies Accepted in Tehran Stock Exchange.
A practical approach to Energy Consumption in Wireless Sensor Networks
by Sonam Khera, Neelam Turk, Navdeep Kaur
Abstract: A Wireless Sensor Network (WSN) is a network formed by a large number of spatially distributed, wirelessly communicating sensor nodes deployed for remote environment monitoring. These networks perform various sensing operations, such as measuring temperature, pressure, vibration and humidity, in environments where human intervention is not possible. Once deployed, the WSN performs its functions while consuming energy from the limited power source installed in the sensor nodes. Due to the inaccessibility of sensor nodes, these power sources are non-replaceable once the nodes are deployed in the physical environment; therefore, the energy consumption of sensor nodes plays a significant role in determining the lifetime of a WSN. Various studies have used available simulation environments to increase the lifetime of the network by reducing energy consumption. In our previous studies it was observed that a controlled software environment is created with the help of various modelling tools and simulators such as MATLAB, NS2 and OMNET++. Though simulation and modelling in a software environment are convenient in terms of scalability and for simulating various scenarios, they lack exposure to the real-time issues faced during actual deployment. We have written this paper based on our experience of creating a physical WSN test bed to get first-hand information about real-time deployment. The test bed has been designed to help understand practical aspects of energy consumption in sensor networks; it monitors the temperature at different locations in a building. In this paper we also cover different scenarios to analyse the energy consumption in our WSN test bed.
Keywords: WSN; wireless sensor network; sensor; energy efficiency; power consumption; sleep mode; testbed.
Local Patterns for Offline Arabic Handwritten Recognition
by Yasser Qawasmeh, Sari Awwad, Ahmed Otoom, Feras Hanandeh, Emad Abdallah
Abstract: Off-line recognition of Arabic handwritten text is a challenging problem due to the cursive nature of the language and the high inter- and intra-writer variability. The majority of existing approaches are based on structural and statistical features and are constrained to a specific task with a vast amount of pre-processing steps. In this paper, we explore the performance of local features for unconstrained offline Arabic text recognition with no prior assumptions or pre-processing steps. Our approach is based on local SIFT features. To capture important information and remove redundancy, we apply a Fisher encoding algorithm and a dimensionality reduction approach, Principal Component Analysis (PCA). The resulting features are combined with a contemporary Support Vector Machine (SVM) classifier and tested on a dataset of 12 different classes. There are great improvements in recall and precision values in comparison with SIFT features alone or SIFT features with other encoding algorithms, with more than 35% improvement when tested with 5-fold cross-validation.
Keywords: Local Features; Offline Recognition; Arabic Handwriting; Fisher Encoding;.
A Supervised Learning Approach for Link Prediction in Complex Social Networks
by Upasana Sharma
Abstract: The use of internet-based social media for establishing links with family, friends and customers has become very popular. In the current scenario, social networking sites such as Facebook, Twitter and LinkedIn are used for social and business purposes, and new links are created every fraction of a second. Predicting future links is a major challenge in the link prediction domain. Various techniques have been proposed in the past that are based on similarity, maximum likelihood estimation and machine learning. The focus of this work is on a supervised machine learning approach for link prediction in complex social networks. In the past, many researchers have worked on the supervised approach using only unweighted networks. Our aim is to assign a weight to each connection in the network; the weight represents the strength of the connection and improves the accuracy of the link predictor. This paper introduces a new approach using the closed triangle concept to recommend future links in social networks. Extensive experiments have been performed on a real YouTube data set, and the proposed technique performs well.
Keywords: Link Prediction; Social Networks; Artificial Neural Network; Supervised Learning Approach; Learning Algorithms.
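A weighted variant of the closed-triangle (common-neighbour) idea in the abstract above can be sketched as a link score. The graph and weights below are illustrative, and this is not the authors' exact feature set:

```python
def weighted_common_neighbours(graph, u, v):
    """Score a candidate link (u, v): each common neighbour w would close a
    triangle u-w-v, and the edge weights give the strength of that evidence."""
    common = set(graph.get(u, {})) & set(graph.get(v, {}))
    return sum(graph[u][w] + graph[v][w] for w in common)
```

Scores like this, computed over node pairs with and without a later link, are exactly the kind of feature a supervised learner (e.g. a neural network) can be trained on.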
Trust Based Quality Awareness Using Combinatorial Auction Web Service Selection In Service Based Systems
by Suvarna Pawar, Prasanth Yalla
Abstract: The service-oriented paradigm offers support for engineering service-based systems (SBSs) based on service composition, where existing services are composed to create new services. The selection of services with the aim of fulfilling the quality constraints becomes critical and challenging to the success of SBSs, especially when the quality constraints are stringent. However, none of the existing approaches for quality-aware service composition has sufficiently considered the following two critical issues that increase the success rate of finding a solution: 1) the complementarities between services; and 2) the competition among service providers. This paper proposes a novel approach called combinatorial auction for service selection (CASS) to support effective and efficient service selection for SBSs based on combinatorial auction. In CASS, service providers can bid for combinations of services and apply discounts or premiums to their offers for the multi-dimensional quality of the services. Based on received bids, CASS attempts to find a solution that achieves the SBS owner's optimization goal while fulfilling all quality constraints for the SBS. When a solution cannot be found based on current bids, the auction iterates so that service providers can improve their bids to increase their chances of winning.
Keywords: Combinatorial auction; Quality of service; Service composition; Service selection; Trust.
Computational Modelling of Cerebellum Granule Neuron Temporal Responses for Auditory and Visual Stimuli
by Arathi Rajendran, Asha Vijayan, Chaitanya Medini, Bipin Nair, Shyam Diwakar
Abstract: Sensorimotor signals from the cerebral cortex modulate the pattern generating metaheuristic capabilities of the cerebellum. To better understand the functional integration of multisensory information by single granule neurons and the role of multimodal information in the motor guidance of the cerebellum, we have modelled the granular layer microcircuit of the cerebellum and analysed the encoding of information during auditory and visual stimuli. A multi-compartmental granule neuron model comprising excitatory and inhibitory synapses was used, and in vivo like behaviour was modelled with short and long bursts. Changing intrinsic parameters in the model helped to quantify the effect of spike-time dependent plasticity on the firing of granule neurons. Computer simulations indicate a correlation between output patterns and temporal excitatory stimuli. We observed the role of induced plasticity, and of the granular layer, in sparse recoding of auditory and visual inputs, and the model predicts how plasticity mechanisms affect the average amount of information transmitted through single granule neurons during multimodal stimuli.
Keywords: Cerebellum; Computational Neuroscience; Auditory; Visual; Plasticity; Sparse Coding.
Resource discovery in inter-cloud environment: A Review
by Mekhla Sharma, Ankur Gupta, Jaiteg Singh
Abstract: The inter-cloud has emerged as a logical evolution of cloud computing, extending computational scale and geographic boundaries through collaboration across individual Cloud Service Providers (CSPs). Resource discovery in this large-scale, distributed and highly heterogeneous environment remains a fundamental challenge for enabling effective cross-utilization of resources and services. This review paper examines various resource discovery approaches in the inter-cloud, outlining key challenges. Finally, the paper presents some ideas for building effective and efficient resource discovery strategies for the inter-cloud.
Keywords: inter-cloud resource discovery; inter-cloud challenges; resource discovery challenges; resource discovery approaches.
Building a Simulated Educational Environment for the Diagnosis of Lumbar Disk Herniation Using Axial View MRI Scans
by Mohammad Alsmirat, Khaled Alawneh, Mahmoud Al-Ayyoub, Mays Al-dwiekat
Abstract: Computer-aided diagnosis systems have been the focus of many research endeavors. They are based on the idea of processing and analyzing various types of inputs (such as the patient's medical history, physical examination results, images of different parts of the human body, etc.) to help physicians reach a quick and accurate diagnosis. In addition to being a great asset for any hospital (especially less fortunate ones with no radiologists or only a few), such systems represent invaluable platforms for educational and research purposes. In this work, we propose a system for the diagnosis of, and training on the diagnosis of, lumbar disk herniation from Magnetic Resonance Imaging (MRI) scans. The proposed system makes three main novel contributions. First, it utilizes the axial MRI spine view of the suspected region instead of the sagittal spine view; the axial view is usually more accurate and provides more information about lumbar disk herniation. Second, instead of simply classifying cases as normal or abnormal, the proposed system is capable of determining the type of lumbar disk herniation and pinpointing its location. To the best of our knowledge, this is the first work to address the problem of determining the type and location of lumbar disk herniation based on the axial MRI spine view. The final contribution is the simulated training environment, which can be used to train novice radiologists in the diagnosis of lumbar disk herniation. The experiments conducted to evaluate the system show that it is quick and accurate, besides being very useful for training purposes.
Keywords: Axial MRI Spine View; Classification; Computer-aided Diagnosis; Feature Extraction; Lumbar Disk Herniation; ROI Enhancement; ROI Extraction.
Types of fuzzy graph coloring and polynomial ideal theory
by Arindam Dey, Anita Pal
Abstract: The graph coloring problem (GCP) is one of the most important optimization problems in graph theory. In real-life scenarios, many applications of graph coloring are fuzzy in nature. Fuzzy sets and fuzzy graphs can manage the uncertainty associated with the information of a problem where conventional mathematical models/graphs may fail to yield satisfactory results. To include those fuzzy properties in solving such problems, we have extended the various types of classical graph coloring methods to fuzzy graph coloring methods. In this study, we describe three basic types of fuzzy graph coloring methods, namely fuzzy vertex coloring, fuzzy edge coloring and fuzzy total coloring. We introduce a method to color the vertices of a fuzzy graph using polynomial ideal theory and find the fuzzy vertex chromatic number of the fuzzy graph. A practical example of scheduling committee meetings is given to demonstrate our proposed algorithm.
Keywords: Fuzzy graph; Fuzzy coloring; Chromatic number; Polynomial ideal; Groebner basis.
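The polynomial-ideal machinery the abstract relies on can be illustrated in the crisp (non-fuzzy) case; a fuzzy version would apply the same idea per membership level. The sketch below (illustrative only, not the authors' construction) encodes 2-colorability as a polynomial system and decides it with SymPy's Groebner basis routine:

```python
from sympy import symbols, groebner

def two_colorable(vertices, edges):
    """Encode 2-colorability as a polynomial system: x_v**2 - 1 = 0
    forces each vertex variable to a colour in {+1, -1}, and
    x_u + x_v = 0 forces adjacent vertices to differ.  The system
    has a solution iff the Groebner basis of the ideal is not {1}."""
    xs = {v: symbols(f"x_{v}") for v in vertices}
    polys = [xs[v] ** 2 - 1 for v in vertices]
    polys += [xs[u] + xs[v] for u, v in edges]
    gb = groebner(polys, *xs.values(), order="lex")
    return list(gb.exprs) != [1]

# a path on three vertices is 2-colorable; a triangle is not
path_ok = two_colorable("abc", [("a", "b"), ("b", "c")])
triangle_ok = two_colorable("abc", [("a", "b"), ("b", "c"), ("a", "c")])
```

For k colours the same encoding uses k-th roots of unity; the Groebner basis collapsing to {1} certifies infeasibility.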
Selective Harmonic Elimination Strategy in the Multilevel Inverters for Grid Connected Photovoltaic System
by Sihem Ghoudelbourk, Ahmad Taher Azar, Djalel Dib, Amar Omeiri
Abstract: In recent years, power electronic converters have been widely used in industrial as well as domestic applications for the control of power flow, automation and energy efficiency. Multilevel inverter topologies have several advantages, such as high output voltage, lower total harmonic distortion (THD) and reduced voltage ratings of the power semiconductor switching devices. The paper deals with a control strategy for multilevel converters in Photovoltaic (PV) systems integrated into distribution grids. The objective of the proposed work is to design multilevel inverters for solar energy applications so as to reduce THD and improve power quality. Multilevel converters are highly appropriate as the power interface for grid-connected PV systems, because grid-connected photovoltaic power plants are consistently increasing in power rating while the cost of photovoltaic modules is falling. The proposed control strategy implements selective harmonic elimination (SHE) modulation for 5, 7, 9 and 11 levels. This technique removes selected harmonics by judicious choice of the firing angles of the inverter and eliminates the need for expensive low-pass filters in the system. Previous research considered constant and equal DC sources with invariant behavior; when the sources differ, the circuit becomes an unequal-DC-source multilevel converter. The voltage levels depend on the availability of the DC sources, so it is possible to reduce the harmonic content of an unequal-DC-source multilevel converter. This article deals with the reduction of harmonics for the multilevel converter in the equal DC voltage case and then extends it to unequal DC voltages.
Keywords: Multilevel inverter; Selective Harmonic Elimination (SHE); Total harmonic distortion (THD); Photovoltaic (PV); Battery.
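To make the SHE idea concrete: for the simplest 5-level case with equal DC sources, two firing angles per quarter-wave are chosen by solving a pair of transcendental equations, one setting the fundamental to the modulation index and one nulling the 5th harmonic. The sketch below is a minimal illustration of that standard two-angle case, not the paper's 7/9/11-level or unequal-source formulation:

```python
import numpy as np
from scipy.optimize import fsolve

def she_equations(angles, m):
    """Residuals for a 5-level (two firing angles) cascaded inverter
    with equal DC sources: the first equation sets the fundamental
    to modulation index m, the second nulls the 5th harmonic."""
    a1, a2 = angles
    return [np.cos(a1) + np.cos(a2) - 2.0 * m,
            np.cos(5 * a1) + np.cos(5 * a2)]

def solve_firing_angles(m, guess=(0.3, 0.9)):
    """Solve the SHE transcendental system numerically."""
    angles = fsolve(she_equations, guess, args=(m,))
    return tuple(np.sort(angles))

# e.g. modulation index 0.8 yields two angles in (0, pi/2)
a1, a2 = solve_firing_angles(0.8)
```

More levels add one equation per extra angle, each nulling another low-order odd harmonic (7th, 11th, ...), solved the same way.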
Design and Analysis of SRRC Filter in Wavelet-Based Multiuser Environment of Mobile WiMax
by Harpreet Kaur, Manoj Kumar, Ajay Sharma, Harjit P. Singh
Abstract: Wavelets, with their capability to provide simultaneous information in both the time and frequency domains along with minimized interference and improved bandwidth efficiency, are considered an efficient replacement for the Fast Fourier Transform (FFT) in conventional Orthogonal Frequency Division Multiplexing (OFDM) systems. To improve the Quality of Service (QoS) in such systems, spectrally efficient filter pulses are employed to mitigate the effect of inter-symbol interference (ISI) while satisfying the bandwidth limitations imposed by multipath fading channels. Moreover, allowing multiple users to utilize the transmission channel at the same time aims at optimal resource allocation with acceptable error rates, considering the undesirable effects of correlated fading in the channel. In this paper, a multiuser environment is simulated in wavelet-based OFDM for a WiMax system, with SRRC pulses employed as transmit and receive filters to perform matched filtering. The performance in terms of Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) is investigated for a varying number of users, comparing relative performance across modulation schemes under an AWGN channel. The simulation outcome substantiates that the multiuser implementation, while overcoming co-channel interference, raises channel capacity and meets higher data rate demands along with effective utilization of spectral resources. The simulation model is developed in MATLAB.
Keywords: DWT; OFDM; Square Root Raised Cosine; pulse shaping filter; multiuser; mobile WiMax.
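The SRRC (square-root raised-cosine) pulse used for matched transmit/receive filtering has a standard closed form with two removable singularities. The generic unit-energy implementation below is a sketch for illustration, independent of the paper's WiMax parameters (roll-off β and symbol period T are free inputs):

```python
import numpy as np

def srrc(t, beta, T=1.0):
    """Square-root raised-cosine impulse response (unit energy),
    with the removable singularities at t = 0 and |t| = T/(4*beta)
    handled explicitly."""
    t = np.asarray(t, dtype=float) / T
    h = np.empty_like(t)
    at0 = np.isclose(t, 0.0)
    atb = np.isclose(np.abs(t), 1.0 / (4 * beta))
    reg = ~(at0 | atb)
    tr = t[reg]
    h[reg] = (np.sin(np.pi * tr * (1 - beta))
              + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))) \
             / (np.pi * tr * (1 - (4 * beta * tr) ** 2))
    h[at0] = 1 - beta + 4 * beta / np.pi
    h[atb] = (beta / np.sqrt(2)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    return h / np.sqrt(T)
```

Convolving this pulse with itself at the receiver gives the full raised-cosine response, which is ISI-free at symbol spacing; that is the matched-filtering property the abstract exploits.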
A hybrid approach for improving data classification based on PCA and enhanced ELM
by Doaa El-Bably, Khaled Fouad
Abstract: Efficiently and effectively extracting useful information from high-dimensional data is a problem worth studying. High-dimensional data is so big and complex that it becomes difficult to process and classify. Dimensionality reduction (DR) is an important and key method to address these problems. This paper presents a hybrid approach for data classification constituted from the combination of principal component analysis (PCA) and an enhanced extreme learning machine (EELM). The proposed approach has two basic components. First, PCA, as a linear data reduction, is implemented to reduce the number of dimensions by removing irrelevant attributes, to speed up the classification method and to minimize computational complexity. Second, EELM is performed by modifying the activation function of the single-hidden-layer feed-forward neural network (SLFN) for a better distribution of categories. The proposed approach depends on a static determination of the reduced number of principal components. The proposed approach is applied to several datasets and its effectiveness is assessed through different experiments. For added reliability, the proposed approach is compared with two previous works that used PCA and ELM in data analysis.
Keywords: Data mining; Data classification; Principal component analysis (PCA); Neural Network; Extreme Learning Machine (ELM).
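The PCA-then-ELM pipeline described above can be sketched in a few lines: PCA as a linear projection onto the top principal components, then a single-hidden-layer ELM whose random hidden weights are fixed and whose output weights are solved by least squares. This is a generic baseline with a plain tanh activation, not the paper's enhanced activation function:

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its top-k principal components (linear DR)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def elm_train(X, y, hidden=50, rng=None):
    """Basic ELM: random fixed input weights, output weights fitted
    by least squares against one-hot class targets."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)              # random hidden-layer features
    T = np.eye(y.max() + 1)[y]          # one-hot targets
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because only the output weights are fitted (a single linear solve), training is fast, which is the usual motivation for combining ELM with an aggressive dimensionality reduction step.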
Fuzzy Fault-Tolerant Control for doubly fed induction generator in wind energy conversion system
by Samir Abdelmalek, Ahmad Taher Azar, Djalel Dib
Abstract: Fault-tolerant control systems have received considerable interest in academic research. This paper presents an efficient Fault-Tolerant Control (FTC) of Additive Voltage Measurement Faults (AVMFs) for a controlled doubly-fed induction generator (DFIG) driven wind energy conversion system (WECS). First, the nonlinear model of the DFIG is transformed into an equivalent Takagi-Sugeno (TS) fuzzy model by using the sector nonlinearity approach (SNA). Then, based on the obtained model of the generator, a new FTC strategy is proposed in order to ensure the nominal performance and stability of the plant in the presence of AVMFs and Noisy Outputs (NOs). The proposed FTC strategy combines a fuzzy proportional integral observer with the nominal and faulty system models. In addition, the stability of the closed-loop system is demonstrated by means of Lyapunov analysis formulated in terms of Linear Matrix Inequalities (LMIs), proving the stability of the whole closed-loop system while reducing the effects of actuator faults and attenuating noisy outputs. Finally, simulations have been performed in the MATLAB/Simulink environment to highlight the performance of the designed FTC strategy and its robustness with respect to the occurrence of AVMFs.
Keywords: Fault-tolerant control; Additive Voltage Measurement Faults (AVMFs); Observer; Doubly-fed induction Generator (DFIG); Fuzzy Proportional Integral Observer.
A Comprehensive Review on Time Series Motif Discovery using Evolutionary Techniques
by Ramanujam Elangovan, Padmavathi S
Abstract: Time series data are produced daily in large quantities in virtually every field, and most are stored in time series databases. A time series motif is a frequent, recurrent or previously unknown pattern occurring in a time series database, used to aid the decision-making process. Time series motif mining is a useful technique for supporting further techniques such as classification or clustering. Recently, diverse techniques have been proposed for time series motif discovery. This paper explores time series motif discovery using evolutionary techniques on various real-world data and their characteristics. The primary aim of this research is to provide interested researchers with a guide to time series motif discovery and to aid in identifying potential research directions using evolutionary techniques.
Keywords: Time Series; Motif; Data Mining; Evolutionary techniques; Genetic Algorithm.
Performance index assessment of intelligent computing methods in e-learning systems
by Aditya Khamparia, Babita Pandey
Abstract: With today's rapidly advancing technology, e-learning systems occupy a dominant position among learning styles. Many research studies have evaluated e-learning systems using various criteria such as prediction accuracy, satisfaction degree and pre-post analysis, but none has established a common methodology for appraising such systems. The proposed research work focuses on resolving the lack of common benchmarks for evaluating the performance of e-learning systems by including Importance (I) and Complexity (CC), and also determines measurements for different learning problems and learning techniques. Finally, the Performance Index (PI) is computed on the basis of I and CC, and is plotted to give a comparative view of Importance (I), Complexity (CC) and Performance Index (PI) for all the models.
Heterogeneous Mixing of Dynamic Differential Evolution Variants in a Distributed Framework for Global Optimization Problems
by Gurusamy Jeyakumar, Shunmuga Velayuthma
Abstract: Differential Evolution (DE) is a real-parameter optimization algorithm in the pool of Evolutionary Computing techniques, well known for its simplicity and robustness. Dynamic Differential Evolution (DDE) was proposed in the literature as an extension of DE to alleviate DE's static population update mechanism. Since island-based distributed models are the natural way to parallelize DE with a structured population, they can also be extended to DDE. This paper first implements distributed versions of 14 DDE variants and then proposes an algorithm, hmDDEv (heterogeneous mixing of dynamic differential evolution variants), to mix different DDE variants in an island-based distributed model. The proposed hmDDEv algorithm is implemented and validated against a well-defined benchmark suite of 14 functions, comparing it with its constituent DDE variants. The efficacy of hmDDEv is also validated against two state-of-the-art distributed DE algorithms.
Keywords: Dynamic Differential Evolution; Island Models; Distributed Algorithm; Mixed Variants.
A New Approach for Automatic Arabic-Text Detection and Localization in Video Frames
by Sadek Mansouri, Mbarak Charhad, Mounir Zrigui
Abstract: Text embedded in video frames provides useful information for semantic indexing and browsing systems. In this paper, we propose an efficient approach for automatic Arabic-text detection which combines edge information and the Maximally Stable Extremal Region (MSER) method in order to extract text region candidates. These regions are then grouped and filtered on the basis of geometric properties such as area and orientation. Besides, we introduce a new geometric descriptor of Arabic text, called the baseline, to improve the filtering process. Our proposed approach was tested on a large collection of Arabic TV news and the experimental results have been satisfying.
Keywords: Arabic text detection; Arabic news; baseline estimation; MSER.
Proposed Enhancement for Vehicle Tracking in Traffic Videos Based on Computer Vision Techniques
by Mohamed Maher Ata, Mohamed El Darieby, Mustafa Abdelnabi, Sameh A. Napoleon
Abstract: In this paper, traffic video enhancement is approached by means of computer vision algorithms. We measure the average number of tracks assigned correctly over the whole video. These tracks express the correct prediction of vehicles, guaranteeing that each vehicle is tracked from the first frame to the last. In addition, several video degradations (salt & pepper, speckle and Gaussian noise) are applied in order to measure their effect on tracking efficacy. Several filters are applied to the degraded traffic video in order to determine the filter mask yielding the least deviation in the number of assigned tracks. Experimental results show that the Wiener and disk filters are the best masks for salt-and-pepper degradation, whereas the median filter mask is the best choice for both speckle and Gaussian degradations.
Keywords: Video disturbance; Prediction; Assigned track; GMM; Spatial filtering.
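The degrade-then-filter experiment can be reproduced in miniature: corrupt a synthetic frame with salt-and-pepper noise and check that a 3x3 median filter lowers the reconstruction error. The pure-NumPy sketch below is illustrative only, not the paper's video pipeline:

```python
import numpy as np

def add_salt_pepper(img, density, seed=0):
    """Corrupt a float image in [0, 1] with salt-and-pepper noise."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0          # pepper
    noisy[mask > 1 - density / 2] = 1.0      # salt
    return noisy

def median_filter3(img):
    """3x3 median filter with edge replication (pure NumPy)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def mse(a, b):
    return float(np.mean((a - b) ** 2))
```

The median is well suited to impulse noise because isolated extreme pixels never survive a neighbourhood median, whereas a linear (e.g. Wiener) filter smears them.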
Speckle Noise Reduction in SAR Images Using a Type-II Neuro-Fuzzy Approach
by S. Vijayakumar, V. Santhi
Abstract: Synthetic Aperture Radar (SAR) images play a vital role in remote sensing applications and thus demand quality enhancement, as they are affected by speckle noise, a kind of noise that multiplies pixel intensities due to interference of the backscattered signal. In this paper, a computational intelligence based approach is proposed to remove speckle noise while preserving edges and texture information. In particular, the proposed system uses a type-II neuro-fuzzy approach based on pixel neighbourhood topologies. The performance of the proposed system is demonstrated by comparing its results with existing methods.
Keywords: SAR Image; Speckle Noise; Fuzzy Logic System; Artificial Neural Network Approach; Noise Reduction; Gaussian Model.
Implementing a Reverse UP-Growth Tracking Approach under Distributed Data Mining
by Aswini R, Praveen Kumar Rajendran, Piosajin A
Abstract: Data mining is the methodology that discovers useful and hidden information from large databases, and researchers have proposed numerous algorithms in the field. In this system, an improvised UP-Growth is considered for mining high utility itemsets from Potential High Utility Itemsets and is improvised under different constraints. Node Utility and Reorganized Transaction Utility (RTU) are the key aspects of the proposed system and are manipulated using the technique of UP-Growth. However, mining Potential High Utility Itemsets from the RTU using UP-Growth needs a number of tree traversals; this is reduced in the proposed system by introducing a bottom-up approach and merging certain manipulations. Moreover, running the system as a sequential process would be time-consuming, so a distributed environment is adopted in the proposed system to overcome this problem in the existing methodology.
Keywords: Node utility; Transaction Utility; Transaction Weight Utility; Reorganized Transaction Utility; Potential High Utility Itemset.
An Enhanced Secure Data Aggregation Routing Protocol for Sensor Networks
by A.L. Sreenivasulu, Chenna Reddy P
Abstract: Over the past decade, the use of sensor devices in real-world applications has increased rapidly. To meet application demands, sensor nodes are deployed in remote areas where operation is very complex, and the security of the sensor nodes can be compromised at any time. Therefore, a secure data aggregation mechanism is needed to overcome these limitations. In this paper, a secure data aggregation mechanism is proposed for protecting data from unauthorized access. The proposed method comprises three modules: data encryption, data aggregation and data decryption. Additionally, the data aggregation module concentrates on removing redundant data to minimize the energy consumption of the sensor nodes. The proposed method is evaluated under different conditions and shows superior performance in terms of reducing communication overhead, minimizing differences in energy consumption and increasing data aggregation accuracy.
Keywords: Data Communication; Aggregation; Encryption; Security; Sensor nodes.
An Efficient Approach towards Building CBIR Based Search Engine for Embedded Computing Board
by Shriram K Vasudevan, P.L.K. Priyadarsini, Sundaram RMD
Abstract: Investigating a picture gives us more information than words can express. Image processing is an ever-booming field that handles vast numbers of images, and thanks to technology we are able to store and retrieve such massive image datasets from anywhere. Search engines provide a way to link images and queries: images are searched using factors like keywords, image dimensions and texture, which is called content-based image retrieval (CBIR). In this search methodology, the input query image is analysed and its properties or features are saved; using the recorded features, other images matching the input image are retrieved. However, searching by just name, colour or texture is not very efficient, so we have proposed a novel algorithm. The proposed algorithm takes features such as colour, texture, SURF and entropy, and examines how differently they work and what distinct results they produce when combined. Implementation of CBIR on a Beagle board led to some satisfactory results, which encouraged us to do further research.
Keywords: Retrieval; Wavelet; Histogram; Texture; OpenCV; MATLAB; Region of interest.
Cayley Bipolar Fuzzy Graphs Associated with Bipolar Fuzzy Groups
by Ali Asghar Talebi, Samaneh Omidbakhsh
Abstract: In this paper, we introduce the concept of Cayley bipolar fuzzy graphs on bipolar fuzzy groups. Some properties of Cayley bipolar fuzzy graphs, such as connectivity and transitivity, are also provided.
Keywords: Bipolar fuzzy groups; Cayley fuzzy graphs; isomorphism.
Impact of multimedia in learning profiles
by Ariel Zambrano, Daniela Lopez De Luise
Abstract: The original contribution of this paper is the definition of an automated model of the behaviour of a user faced with certain types of images in a context of playful learning. Entropy is used to classify profiles, starting from temporal information mixed with characteristics previously extracted from the images. The aim is to determine to what extent visual images trigger functions of comprehension and abstraction on highly complex topics. The obtained model is intended to generate learning profiles, to be enriched in the future with other non-invasive devices that observe the user's behaviour, for example cameras and keyboard and mouse monitoring. The profiles are discovered and described with the minimum information needed. The collected information is processed with bio-inspired techniques, essentially based on deep learning concepts.
Keywords: Audiovisual Techniques; Engineering Teaching; Video Games; Learning Model; Deep Learning; Multimedia; Data Mining.
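The entropy step in such profiling reduces to computing Shannon entropy over a discretized interaction signal, e.g. a histogram of response times: a flat histogram (an unpredictable user) has maximal entropy, a concentrated one has low entropy. A minimal sketch, where the histogram semantics are an assumption rather than taken from the paper:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in bits) of an empirical histogram."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()          # normalize, drop empty bins
    return float(-np.sum(p * np.log2(p)))

# e.g. a flat 4-bin response-time histogram yields 2 bits,
# a fully concentrated one yields 0 bits
flat = shannon_entropy([1, 1, 1, 1])
peaked = shannon_entropy([4, 0, 0, 0])
```

Profiles can then be compared or clustered on such entropy values alongside the image features.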
Domination number of complete restrained fuzzy graphs
by R. Jahir Hussain, S. Satham Hussain, Sankar Sahoo, Madhumangal Pal
Abstract: This work is concerned with the restrained complete domination number and the triple connected domination number of fuzzy graphs. Some basic definitions and needed results are given with an example. The necessary and sufficient conditions for a set in a fuzzy graph to be a complete restrained domination set are formulated and proved. The relation between a complete restrained domination set and an $n$-dominated set is also illustrated. Finally, the triple connected domination number of a restrained complete fuzzy graph is provided.
Keywords: Fuzzy graphs; Complete restrained domination set; Complete restrained domination number; Triple connected domination number.
Introducing the Rock Hyrax Intelligent Optimization Algorithm: An Exploration for Web 3.0 Domain Selection
by B. Suresh Kumar, Deepshikha Bharghava, Arpan Kumar Kar, Chinwe Peace Igiri
Abstract: Currently, the immense growth of internet usage has made it difficult for web developers to meet customer requirements. To address this changing scenario, developers need various optimization techniques, and innumerable such techniques are available to explore the Web 3.0 domain. In this research, the authors propose a new metaheuristic approach aimed at providing an appropriate solution to these analysis and optimization issues. Compared with existing algorithms, the design aims at a wider search space and a shorter optimization time, based on the foraging behaviour of rock hyraxes. Here, a swarm intelligence metaheuristic is proposed based on the biological behaviour of the rock hyrax, found in East Africa. This novel Rock Hyrax Intelligent Optimization Algorithm (RHIO) is used to optimize results in the Web 3.0 domain.
Keywords: Metaheuristics; Web 3.0; Optimization; Swarm Intelligence; Rock Hyrax Intelligent Optimization (RHIO).
Hardware Implementation of a New Chaotic Secured Transmission System
by Hamid Hamiche, Karim Kemih, Sid-Ali ADDOUCHE, Ahmad Taher Azar, Rafik Saddaoui, Mourad Laghrouche
Abstract: In this paper, a novel secured transmission system implemented on Arduino-Uno boards is proposed. The transmission scheme is composed of two coupled discrete-time chaotic systems and two combined observers. In the first observer, some sufficient conditions on a varying equal impulsive distance are established in order to guarantee the impulsive synchronization method. In the second observer, we design an exact discrete-time observer in order to reconstruct all states and the message information. Simulation results are presented to highlight the performance of the proposed method. One of the main contributions is to show that the proposed scheme, based on impulsive synchronization of discrete-time chaotic systems, is experimentally feasible with digital devices using Arduino-Uno boards. The obtained experimental results validate our approach.
Keywords: Chaotic synchronization; Impulsive synchronization; Step by step observer; Implementation; Arduino-Uno board.
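The impulsive-synchronization idea, letting transmitter and receiver copies of a discrete-time chaotic map run freely and applying a corrective impulse to the receiver only at fixed instants, can be sketched with a logistic map. The map, the impulse interval and the gain below are illustrative stand-ins, not the paper's systems:

```python
def logistic(x, r=3.9):
    """A discrete-time chaotic map used as a toy transmitter/receiver."""
    return r * x * (1 - x)

def impulsive_sync(x0, y0, steps=200, interval=2, gain=0.95):
    """Transmitter x and receiver y iterate the same chaotic map
    freely; every `interval` steps the receiver receives an impulsive
    correction pulling it toward the transmitter state.  Returns the
    synchronization error history."""
    x, y, err = x0, y0, []
    for k in range(steps):
        x, y = logistic(x), logistic(y)
        if (k + 1) % interval == 0:
            y += gain * (x - y)      # impulsive correction
        err.append(abs(x - y))
    return err
```

Between impulses the chaotic divergence can grow the error, so the interval and gain must be chosen so that each correction more than cancels the worst-case growth; with the values above the error contracts to numerical zero.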
Energy Efficient Cluster Head Selection for Wireless Sensor Network by Improved Firefly Optimization
by Achyut Shankar, N. Jaisankar
Abstract: In WSNs, energy efficiency is a major issue for enhancing network lifetime. Due to greater data collection and packet transmission, this issue becomes even more critical in large sensor networks. In this study, an energy efficient cluster head selection methodology is proposed for WSNs using the Firefly with Dual Update Process (FFDUP) algorithm. The proposed approach conserves the maximum energy and prolongs the network lifetime. Subsequently, the proposed FFDUP algorithm is analysed in terms of network sustainability, the manner of cluster head distribution, risk mode and the resulting trade-offs, and is validated by comparison with conventional algorithms such as Artificial Bee Colony (ABC), FABC, Firefly (FF) and Artificial Bee Colony-Dynamic Scout Bee (ABC-DS). The simulation results reveal that the proposed algorithm provides superior performance compared with the existing algorithms.
Keywords: WSN; Cluster head selection; Energy Awareness; Risk awareness; FFDUP.
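The core firefly move, each firefly drifting toward every brighter (lower-cost) firefly with an attraction that decays with squared distance plus a cooled random walk, can be sketched as a minimal minimizer. The dual-update mechanism of FFDUP is not reproduced here; this is the baseline algorithm on a toy objective:

```python
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0,
                     gamma=0.05, alpha=0.25, seed=0):
    """Baseline firefly algorithm: every firefly moves toward each
    brighter firefly with attraction beta0 * exp(-gamma * r^2),
    plus a geometrically cooled uniform random step."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n, dim))
    cost = np.array([f(x) for x in X])
    for t in range(iters):
        a = alpha * 0.97 ** t            # cooling schedule
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:    # j is brighter than i
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    X[i] = (X[i]
                            + beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                            + a * rng.uniform(-0.5, 0.5, dim))
                    cost[i] = f(X[i])
    best = int(np.argmin(cost))
    return X[best], float(cost[best])
```

Note that the current best firefly never moves, so the best cost found is non-increasing over iterations; in cluster head selection the "cost" would encode residual energy and distance terms rather than this toy sphere function.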
A Novel Statistical Approach to Event Management: A Study and Analysis of a Techfest with Suggestions for Improvements
by Narassima Seshadri, Shriram KV
Abstract: Events play a vital role in day-to-day life, in either a casual or a professional manner. Formal events that occur routinely over a period of time need to be successful to become sustainable. Event management strategies vary continually, as the choices of different people, and even of the same people, change as time progresses. Educational institutions showcase their talents by organizing annual fests, gathering like-minded people from various institutions to exhibit their talents and gain knowledge. These events need to be successful in order to attract audiences and sustain themselves over a long time. This study examines various aspects of Anokha 2016, the sixth annual Techfest of Amrita School of Engineering, so as to improve Anokha 2017. The paper investigates the aspects that remained favourites as well as the aspects that did not meet the participants' expectations, and suggestions to improve these aspects are also discussed.
Keywords: Event management; Techfest; Educational institution; Reliability analysis; Construct validity; Hypothesis testing.
A Hybrid Approach to Missing Data Imputation for Upper Gastrointestinal Diagnosis
by Khaled Fouad
Abstract: Gastrointestinal and liver diseases (GILDs) are major causes of death and disability in the Middle East and North Africa. The investigation of upper gastrointestinal (GI) symptoms in a medically resource-limited area is a challenge. Real-world clinical data analysis using data mining techniques often faces observations that contain missing values for a number of attributes. The main challenge in mining a real clinical upper GI dataset to diagnose diseases is the existence of these missing values, which must be tackled first to achieve highly accurate and effective data mining results for diagnosing and predicting upper GI diseases.
In this paper, the proposed missing data imputation approach pre-processes the real clinical upper GI dataset so that feature selection and classification algorithms can be applied with accurate and effective results, which in turn provide accurate diagnosis and prediction of upper GI diseases. The proposed approach aims at tackling the missing data in the upper GI categorical dataset and enhancing the accuracy of the classifiers by exploiting a feature selection method before the imputation process. The approach is evaluated by implementing an experimental framework of five phases: partitioning the dataset into eight datasets with various ratios of missing data, performing feature selection, imputing the missing data, classifying the imputed data and, finally, evaluating the outcome using k-fold cross-validation over nine evaluation measures.
Keywords: Data mining; Data classification; Feature selection; Missing data imputation; Categorical Data mining; Diagnosis of upper GI diseases.
Evaluation Method based on a Tracing Mechanism for Adaptive User Interfaces: Application in Intelligent Transport Systems
by Soui Makram, Soumaya Moussa, Christophe Kolski, Mourad Abed
Abstract: Nowadays, Adaptive User Interfaces (AUI) are increasingly present in our daily activities (at home, at work, in public places, etc.). They can have different adaptation capabilities, can be disseminated in the users' environment and can take into account different user profiles. Many academic and industrial studies have been conducted on user modelling, design methods and tools for User Interface (UI) generation. However, the evaluation of such user interfaces is difficult; in fact, relatively few works in the literature address AUI evaluation. To fill this gap, it is necessary to envisage new evaluation methods focused on the adaptation quality of UIs. In this research work, we propose an evaluation method called MetTra (METhod based on a TRAcing system). This method has been validated by evaluating AUIs in the transportation field.
Keywords: Adaptive User Interface (AUI); Evaluation; MetTra; Intelligent Transport Systems (ITS).
Types of uncertain nodes in a fuzzy graph
by Arindam Dey, Anita Pal
Abstract: Graph theory has numerous applications in problems of operations research, economics, systems analysis and transportation systems. However, real applications of graph theory are full of linguistic vagueness, i.e., uncertainty; for example, the vehicle travel time or the number of vehicles on a road network may not be known precisely. In those types of problems, a fuzzy graph model can be used to deal with the uncertainties. In a fuzzy graph, it is very important to identify the nature (strength) of nodes, and no such analysis of nodes is available in the literature. In this paper, we introduce a method to find the strength of a node in a fuzzy graph. The degree of the node and the maximum membership value of the adjacent edges of that node are used to compute the strength of the node; the strength of a fuzzy node is itself a fuzzy set. Depending upon their strength, we classify the nodes of a fuzzy graph into six types, namely α-strong, -strong, regular, α-weak, -weak and balance fuzzy nodes.
Keywords: Fuzzy graph; fuzzy node; Strength of node; vagueness of object.
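The two ingredients the abstract names, the degree of a fuzzy node (sum of the memberships of its incident edges) and the maximum membership among its adjacent edges, are straightforward to compute. The paper's exact strength formula and six-type classification are not reproduced here; this sketch only computes the two inputs on a toy fuzzy graph:

```python
def fuzzy_degree(node, edges):
    """Degree of a fuzzy node: the sum of the membership values
    of its incident edges."""
    return sum(mu for (u, v), mu in edges.items() if node in (u, v))

def max_adjacent_membership(node, edges):
    """Largest membership value among the edges adjacent to the node."""
    return max(mu for (u, v), mu in edges.items() if node in (u, v))

# a toy fuzzy graph: {(u, v): membership value of edge uv}
edges = {("a", "b"): 0.6, ("a", "c"): 0.9, ("b", "c"): 0.3}
```

Comparing these two quantities per node is the kind of evidence on which a classification into strong, regular, weak and balanced nodes can be built.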
A Kernel-Based SVM for Semantic Relation Extraction from Biomedical Literature
by Kanimozhi Uma
Abstract: Relation extraction, which identifies and extracts semantic relationships among named entities, is a significant approach in knowledge representation. In order to capture the semantic as well as syntactic structures in text and to enable deep understanding of the biomedical literature, relation extraction becomes essential. The automatic extraction of disease-gene relations is presented, utilizing shallow linguistic features of global and local word-sequence context with a string-kernel-based support vector machine (SVM) for efficient disease-gene relation extraction. The performance of the proposed work shows that bag-of-features kernel-based SVM classification is a promising solution for specific disease-gene association mining.
Keywords: Biomedical Relation Extraction; Natural Language Processing; Machine Learning; Biomedical Literature.
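The abstract names a string-kernel SVM but does not specify the kernel. Purely as an illustration, one common string kernel is the p-spectrum kernel, which scores two strings by the number of length-p substrings they share; the sequences below are hypothetical:

```python
from collections import Counter

def spectrum_features(s, p=3):
    """Count all contiguous length-p substrings (p-grams) of s."""
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))

def spectrum_kernel(s, t, p=3):
    """p-spectrum string kernel: inner product of p-gram count vectors."""
    fs, ft = spectrum_features(s, p), spectrum_features(t, p)
    return sum(c * ft[g] for g, c in fs.items())

# Hypothetical text fragments; the shared 3-grams are "BRC", "RCA", "CA1", "A1 ".
print(spectrum_kernel("BRCA1 mutation", "BRCA1 variant", p=3))  # 4
```

An SVM library (e.g. one accepting a precomputed Gram matrix) could then be trained on the pairwise kernel values.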
Implementing the RSA Algorithm for Network Security Using a Dual Prime Secure Protocol (DPSP) in Cryptanalysis
by Durga R, Sudhakar P
Abstract: Cryptography is the most important approach to secure communication in network security. The RSA algorithm is commonly used in efficient cryptographic mechanisms, and here it is used to monitor scenarios involving attackers and to vary the transpositions applied to messages. The original RSA crypto mechanism is extended with the behavioural characteristics of a multi-privacy system. In this methodology, the user runs the RSA (DPSP) algorithm and generates dual prime pairs for the encrypted messages, which are sorted priority-wise; the encrypted and decrypted messages are additionally reordered according to their priority. This methodology reduces the danger of man-in-the-middle attacks and timing attacks. The RSA (DPSP) algorithm is mainly applied for distributing data across different environments, and a variety of approaches is available to implement the computation of the algorithm. To evaluate the process with real-time data, the proposed cryptographic encryption algorithm is used alongside the standard RSA crypto algorithm. Secure RSA (DPSP) is introduced for secure file transmission, since there are several cases where secure file transfer is needed to avoid attacks from intruders. In the RSA (DPSP) algorithm, the most important key representation is the symmetric random key of the crypto mechanism. We apply the mechanism to transfer data confidentially and to manage messages of different sizes, analysing the time complexity in terms of the message size and the size of the key derived from the prime numbers.
Keywords: cryptography; RSA algorithm; secured protocol; file transmission; nodes; ns2 tool; priority programming.
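The DPSP construction itself is not public, so it is not reproduced here. For orientation, the underlying textbook RSA round — key generation from a prime pair, encryption, decryption — can be sketched as follows (the primes are toy-sized for readability and completely insecure in practice):

```python
from math import gcd

def rsa_keypair(p, q, e=17):
    """Textbook RSA: n = p*q, d = e^-1 mod lcm(p-1, q-1)."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael lambda
    d = pow(e, -1, lam)                           # modular inverse (Python 3.8+)
    return (n, e), (n, d)

pub, priv = rsa_keypair(61, 53)   # toy prime pair
n, e = pub
_, d = priv
m = 42                            # plaintext message block, m < n
c = pow(m, e, n)                  # encrypt: c = m^e mod n
assert pow(c, d, n) == m          # decrypt: m = c^d mod n
print(c)
```

The paper's scheme additionally generates dual prime pairs and reorders messages by priority; neither extension is shown here.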
Reliability Analysis of Shallow Foundation Based on Settlement Criteria
by Pijush Samui, Aditi Palsapure, Sanjiban Roy
Abstract: Foundation settlement is an important design criterion as it affects the durability of a structure. Conventional methodologies calculate only a global factor of safety to determine the safety of the structure. However, this does not account for the uncertainties due to soil variability and measurement errors. Therefore, reliability-based design principles must be incorporated to determine the performance and reliability of a structure. The First Order Second Moment (FOSM) method is generally used for this analysis, but it is time consuming. On the other hand, the Relevance Vector Machine (RVM) achieves very good generalization performance. Thus, in our study we have used RVM-based FOSM and the Extreme Learning Machine (ELM) and compared the results obtained from both. For this, a dataset of 480 readings was developed for cohesive-frictional soil, taking Poisson's ratio and the elastic modulus as random variables. 70% of the readings were used for training and 30% for testing, with normalised data. Additionally, several error and correlation measures were calculated to assess the performance of the models.
Keywords: settlement; Reliability analysis; FOSM; RVM; ELM.
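A minimal numeric FOSM sketch is shown below, assuming a hypothetical settlement limit state with the elastic modulus and Poisson's ratio as random variables; the paper's actual settlement model and data are not reproduced:

```python
import math

def fosm_beta(g, means, sigmas, h=1e-6):
    """First Order Second Moment reliability index:
    beta = g(mu) / sqrt(sum((dg/dx_i * sigma_i)^2)),
    with the gradients estimated by central finite differences."""
    mu_g = g(means)
    var_g = 0.0
    for i, s in enumerate(sigmas):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        grad = (g(up) - g(dn)) / (2 * h)
        var_g += (grad * s) ** 2
    return mu_g / math.sqrt(var_g)

# Hypothetical limit state: allowable settlement (mm) minus computed
# settlement, with elastic modulus E (MPa) and Poisson's ratio nu random.
def g(x):
    E, nu = x
    settlement = 1000.0 * (1 - nu ** 2) / E   # toy elastic settlement model
    return 25.0 - settlement                  # safe when positive

print(fosm_beta(g, means=[50.0, 0.3], sigmas=[10.0, 0.05]))
```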
Some Applications of Vague Sets
by Hossein Rashmanlou, Kishore Kumar Krishna, S. Firouzian, Mostafa Noori
Abstract: In this paper, we give a concise note on vague fuzzy sets and present two applications of vague sets. The first is an application of vague fuzzy sets to career determination using assumed data, conducted with the aid of a new distance measure for vague fuzzy sets. The second deals with the construction, completion, analysis and interpretation of a research questionnaire. Respondents' decisions are obtained by assuming the questionnaire is distributed among respondents; these decisions are converted into a vague data set and analysed, from which an interpretation is drawn.
Keywords: Vague set; fuzzy set; distance measure; hesitancy.
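The career-determination application can be sketched as follows. The paper's new distance measure is not reproduced; a normalized Hamming-type distance on (truth, false) membership pairs stands in for it, and all data are made up:

```python
def vague_distance(A, B):
    """Normalized Hamming-type distance between two vague sets, each a
    list of (t, f) pairs with t + f <= 1:
    D = (1/2n) * sum(|tA - tB| + |fA - fB|)."""
    n = len(A)
    return sum(abs(ta - tb) + abs(fa - fb)
               for (ta, fa), (tb, fb) in zip(A, B)) / (2 * n)

# Hypothetical career determination: assign the candidate to the career
# profile at minimum vague distance over three assessed attributes.
candidate = [(0.7, 0.2), (0.5, 0.3), (0.8, 0.1)]
careers = {
    "engineering": [(0.8, 0.1), (0.6, 0.2), (0.7, 0.2)],
    "medicine":    [(0.4, 0.5), (0.9, 0.0), (0.3, 0.6)],
}
best = min(careers, key=lambda c: vague_distance(candidate, careers[c]))
print(best)  # engineering
```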
Domination in Hesitancy Fuzzy Graphs
by R. Jahir Hussain, S. Satham Hussain, Sankar Sahoo, Madhumangal Pal
Abstract: Hesitant fuzzy sets (HFS) were introduced by Torra as a novel and recent extension of fuzzy sets that aims to model the uncertainty originated by the hesitation arising in the assignment of membership degrees of elements to a fuzzy set. Hesitancy fuzzy graphs (HFG) were introduced to capture the common intricacy that occurs when selecting a membership degree for an element from several possible values, which makes one hesitate. HFGs have been used to choose a Time Minimized Emergency Route (TiMER) to transport accident victims. This paper addresses the study of domination in hesitancy fuzzy graphs. Using the concepts of the strength of a path, the strength of connectedness and strong arcs, the domination set is established. The necessary and sufficient condition for the minimum domination set of an HFG is investigated. Further, some properties of the independent domination number of an HFG are obtained, and the proposed concepts are described with suitable examples.
Keywords: Domination number; Hesitancy fuzzy graphs; Independent domination set; Necessary and sufficient condition; Strong arc.
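The strength-of-connectedness and strong-arc notions underpinning the domination set can be sketched for an ordinary fuzzy graph (hesitancy memberships are simplified to single values here; these are the standard definitions, not the paper's HFG extension):

```python
def conn_strength(edges, u, v, banned=frozenset()):
    """Strength of connectedness between u and v: the maximum over u-v
    paths of the minimum edge membership along the path (arcs listed in
    `banned` are excluded from every path)."""
    best = 0.0
    def dfs(node, seen, strength):
        nonlocal best
        if node == v:
            best = max(best, strength)
            return
        for e, mu in edges.items():
            if node in e and e not in banned and not (e - {node}) & seen:
                nxt = next(iter(e - {node}))
                dfs(nxt, seen | {nxt}, min(strength, mu))
    dfs(u, {u}, 1.0)
    return best

def strong_arcs(edges):
    """An arc is strong when its membership is at least the strength of
    connectedness between its endpoints computed without that arc."""
    return [e for e, mu in edges.items()
            if mu >= conn_strength(edges, *sorted(e), banned=frozenset({e}))]

edges = {
    frozenset({"a", "b"}): 0.9,
    frozenset({"b", "c"}): 0.7,
    frozenset({"a", "c"}): 0.3,
}
print(strong_arcs(edges))  # arcs {a,b} and {b,c}; {a,c} is not strong
```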
Multi-objective Artificial Bee Colony Algorithm in Redundancy Allocation Problem
by Monalisa Panda, Satchidananda Dehuri, Alok Jagadev
Abstract: This paper presents an empirical study of uncovering Pareto fronts by multi-objective artificial bee colony algorithms for the redundancy allocation problem (RAP). The multi-objective artificial bee colony approach has been successfully applied to many optimization problems; however, very little effort has been extended towards solving the RAP. In this work, we consider the simultaneous optimization of the unavoidable objectives of maximizing reliability, minimizing cost and minimizing weight in a series-parallel system, which leads to a multiple-objective redundancy allocation problem (MORAP). The objective of this paper is to uncover true Pareto fronts populated with non-dominated solution sets as a solution to the MORAP using the multi-objective artificial bee colony (MOABC) algorithm. Two MOABC algorithms have been developed, inspired by popular and established multi-objective genetic algorithms, namely the Vector Evaluated Genetic Algorithm (VEGA) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II); we name these two algorithms MOABC-I and MOABC-II, respectively. From the experimental results, we observe that the approximation of the true Pareto front by MOABC-II is better than the Pareto front obtained through MOABC-I. Further, the resultant Pareto fronts are examined by two multi-criterion decision making (MCDM) methods, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Analytical Hierarchy Process (AHP), to reach a definite goal.
Keywords: Redundancy allocation problem; Genetic algorithms; Multi-objective optimization; Artificial bee colony; Multi-objective artificial bee colony; Multi-criteria decision making.
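The non-dominated filtering at the heart of uncovering a Pareto front can be sketched as follows; the candidate objective vectors are hypothetical, with reliability negated so that every objective is minimized:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the current Pareto front estimate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical RAP candidates as (-reliability, cost, weight).
candidates = [(-0.95, 120, 30), (-0.90, 100, 25), (-0.95, 130, 28), (-0.80, 140, 40)]
print(pareto_front(candidates))  # the last candidate is dominated by the first
```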
Cost Effective Hybrid Genetic Algorithm for Scheduling Scientific Workflows in Cloud under Deadline Constraint
by Gursleen Kaur, Mala Kalra
Abstract: Cloud has emerged as a convenient platform for executing complicated scientific applications from multiple disciplines by providing on-demand and scalable infrastructure on a rental basis. The research and scientific community often opt for workflows to model these scientific applications. Workflow scheduling has been extensively studied for decades with regard to grid and cluster computing, but few initiatives have been tailored for the cloud. What is more, previous work fails to incorporate the basic principles of IaaS clouds, such as the pay-as-you-go model, elasticity, heterogeneity and dynamic provisioning, and the issues of VM performance variation and acquisition delay, besides other QoS requirements. This paper proposes a resource provisioning and scheduling strategy using a genetic algorithm with the aim of optimizing the overall execution cost while staying below the given deadline. The performance is further enhanced by using a high-quality seed generated by the Predict Earliest Finish Time (PEFT) algorithm, which acts as a catalyst and helps the algorithm converge faster. The proposed approach is simulated in WorkflowSim and evaluated using various well-known realistic scientific workflows of different sizes. The results validate the better performance of our approach over numerous state-of-the-art algorithms.
Keywords: Cloud Computing; Workflow Scheduling; PEFT; Genetic Algorithm; Time-Cost trade off; Dynamic Resource Provisioning.
Optimal capacitor placement and sizing in distribution system using Competitive Swarm Optimizer algorithm
by Soumyabrata Das, Tanmoy Malakar
Abstract: This article investigates the implementation of the Competitive Swarm Optimizer (CSO) algorithm for solving the Optimal Capacitor Locations and Sizing (OCLS) problem for Radial Distribution System (RDS) networks. The problem is formulated as a two-stage Mixed Integer Non-Linear Programming problem. In the first stage, a new parameter called the Relative Emission Index is developed to assess the environmental impact of Shunt Capacitors (SC), and using it the probable capacitor locations are determined; in the second stage, the CSO algorithm is applied to find the optimal capacitor locations and sizes in RDS networks. The SC locations and their outputs are taken as binary and discrete control variables, respectively, in the optimization problem. The proposed algorithm is tested on the IEEE 34-bus and IEEE 85-bus RDS networks under different loading conditions. Parametric sensitivity studies are performed to select optimum values for the free parameters of the CSO algorithm. The results obtained by the proposed algorithm are compared with other results reported in the state-of-the-art literature, and the comparison confirms the superiority of the CSO algorithm over other methods in solving OCLS problems for RDS networks.
Keywords: Competitive Swarm Optimizer; Radial Distribution System; Optimal Capacitor Locations and Sizing; Mixed Integer Non-Linear Programming; Emission.
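A minimal sketch of the CSO generation loop — particles compete in random pairs, the loser learns from the winner and the swarm mean, the winner survives unchanged (following Cheng and Jin's published update rule) — applied to a stand-in objective rather than the paper's OCLS model:

```python
import random

def cso_step(swarm, velocities, fitness, phi=0.1):
    """One CSO generation: random pairwise competitions; each loser's
    velocity is pulled toward its winner and the swarm mean."""
    n, dim = len(swarm), len(swarm[0])
    mean = [sum(p[d] for p in swarm) / n for d in range(dim)]
    order = random.sample(range(n), n)
    for i, j in zip(order[::2], order[1::2]):
        w, l = (i, j) if fitness(swarm[i]) <= fitness(swarm[j]) else (j, i)
        for d in range(dim):
            velocities[l][d] = (random.random() * velocities[l][d]
                                + random.random() * (swarm[w][d] - swarm[l][d])
                                + phi * random.random() * (mean[d] - swarm[l][d]))
            swarm[l][d] += velocities[l][d]

# Minimize the sphere function as a stand-in objective.
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
swarm = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in range(20)]
for _ in range(200):
    cso_step(swarm, vel, sphere)
print(min(sphere(p) for p in swarm))
```

Because the best particle always wins its pairing, the best fitness never worsens between generations.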
Multi-Key Searchable Encryption Technique for Index-Based Searching
by P. Sri Vani, S. Ramachandram, R. Sridevi
Abstract: A multi-key searchable encryption scheme enables keyword search on data encrypted under different keys. The scheme is practical for client-server applications, achieving data confidentiality while enabling the server to perform search operations on the encrypted data. So far, this algorithm has been implemented only for sequential search. This paper presents an improved version of the multi-key searchable encryption algorithm implemented for index-based searching, and also shows the experimental results of the index-based multi-key searchable encryption scheme implemented using C and the PBC library. This research uses Elliptic Curve Cryptography (ECC) to improve key security. The ECC technique contains a sequence of steps for secure key generation by the user using a hash function. Through the use of the hash function in ECC, performance is enhanced for index-based searching. Experimental results show the execution time of the improved multi-key searchable scheme for index-based search constructed over different elliptic curves. An application is also designed using the new scheme to perform search on one lakh (100,000) encrypted collections, with Java as the front end and MongoDB as the back end.
Keywords: encryption; search token; delta; token; client; server; confidentiality; searchable; multi key; index; public key; elliptic curve cryptography.
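The paper's pairing-based multi-key construction is not reproduced here. The index-based search idea itself — the server matches opaque tokens without ever seeing keywords — can be sketched with deterministic HMAC tokens (a deliberate simplification: one user key, hypothetical documents):

```python
import hashlib
import hmac

def token(key, word):
    """Deterministic search token for one keyword under one user's key."""
    return hmac.new(key, word.encode(), hashlib.sha256).hexdigest()

def build_index(key, doc_id, words):
    """Index entry: a token set the server can match without seeing words."""
    return doc_id, {token(key, w) for w in words}

def search(index, trapdoor):
    """Server-side lookup: ids of documents whose token set contains the
    trapdoor; the server never learns the underlying keyword."""
    return [doc_id for doc_id, tokens in index if trapdoor in tokens]

alice_key = b"alice-secret-key"   # hypothetical per-user key
index = [build_index(alice_key, "doc1", ["cloud", "encryption"]),
         build_index(alice_key, "doc2", ["index", "search"])]
print(search(index, token(alice_key, "encryption")))  # ['doc1']
```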
Further Improved Stability Condition for T-S Fuzzy Time-Varying Delay Systems via Generalized Inequality
by Rupak Datta, Rajeeb Dey, Baby Bhattacharya
Abstract: This paper deals with the problem of stability analysis for a class of nonlinear systems with time-varying delay, which is represented by the Takagi-Sugeno (T-S) fuzzy model. By choosing an appropriate augmented Lyapunov-Krasovskii (L-K) functional and utilizing the efficiency of a generalized integral inequality combining with the reciprocal convex lemma, a new and improved delay-range-dependent stability condition is obtained in terms of linear matrix inequalities (LMIs) for guaranteeing the asymptotic stability of the studied fuzzy systems. Two numerical examples are solved to validate the efficiency and improvement of the proposed theoretical results over some existing stability methods.
Keywords: Stability analysis; T-S fuzzy model; Integral inequality; L-K functional; Linear matrix inequalities.
Optimization of SPARQL queries over the RDF data in the Cloud Environment
by Ranichandra Dharmaraj, Tripathy B.K.
Abstract: The semantic web is built with the support of the Resource Description Framework (RDF). The changing face of the semantic web has created the need for new approaches to store and query RDF data. RDF datasets contain large volumes of data with a large number of bindings, and processing SPARQL queries over RDF data in the cloud creates several challenges: the network cost and the query processing time have a major impact on the performance of queries over the cloud. This paper proposes an optimization algorithm for query processing over large datasets. The proposed algorithm takes the parallel execution of queries as its major objective, so as to reduce the network cost as well as to minimize the response time of the query. The experimental evaluation is carried out using the LUBM 400-university dataset on hardware rented from Amazon Web Services. The proposed algorithm proves its efficiency in terms of reducing the query response time and minimizing the network traffic.
Keywords: Query; SPARQL; RDF data; Response time; Distributed cloud.
Privacy Preserving Using Diffie-Hellman and An Envelope Protocol Through Key Handling Techniques In Cloud Storage
by Shanthi Sri
Abstract: Cloud computing is continuously advancing and demonstrating reliable growth in the arena of computing. It has gained popularity by providing distinctive computing services such as cloud storage, cloud hosting and cloud servers for various kinds of enterprises as well as academia. On the other side, there are many issues related to cloud security and privacy: security is still a fundamental challenge in the cloud computing paradigm. These challenges include loss of users' secret data, data leakage and disclosure of personal data. Because of these security and privacy concerns within the cloud, users' sensitive data in cloud storage is exposed to various vulnerabilities. In this paper, considering the risk of storing data in the cloud with a third party and of accessing the stored data by cloud users, we propose a novel mechanism that gives the cloud user confidence about security with respect to the third party, that is, the cloud service provider, and also provides privacy to cloud data users using an efficient group key management scheme.
Keywords: cloud computing; Privacy Preserving; Data owner; Cloud User; key.
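A toy Diffie-Hellman exchange — the key-agreement primitive named in the title — can be sketched as below. The prime is far too small for real use, and the envelope protocol and group key schema of the paper are not reproduced:

```python
import secrets

# Toy finite-field Diffie-Hellman: both parties derive the same shared
# secret without transmitting it, which could then seed a group key.
p = 2_305_843_009_213_693_951   # Mersenne prime 2^61 - 1 (demo size only)
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides agree on the secret
print(shared_alice == shared_bob)  # True
```

Production systems would use a standardized large group (e.g. an RFC 3526 2048-bit group) or elliptic curves.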
Hybrid Rough Set with Black Hole Optimization Based Feature Selection Algorithm for Protein Structure Prediction
by H. Hannah Inbarani, Ahmad Taher Azar, M. Bagyamathi
Abstract: Protein structure prediction is one of the most important problems in modern computational biology. The structure of a protein is predicted using amino acid composition (AAC) and pseudo amino acid composition (PseAAC) features extracted from its primary sequence. A major problem with protein datasets is the complexity of their analysis due to the enormous number of features. Feature selection techniques are capable of dealing with this high-dimensional feature space. Rough set theory is one of the effective methods for feature selection, as it can preserve the originality of features; the essence of the rough set approach to feature selection is to find a subset of the original features. Since finding a minimal subset of features is an NP-hard problem, it is necessary to investigate effective and efficient heuristic algorithms. In this paper, we propose a new approach hybridizing the Rough Set Quick Reduct and Relative Reduct approaches with the Black Hole optimization algorithm. This algorithm is inspired by black holes: a black hole is a region of space-time whose gravitational field is so strong that nothing which enters it, not even light, can escape, and every black hole has mass and charge. In this algorithm, each candidate solution is considered a black hole; the gravitational force is used for global search and the electrical force for local search. The proposed algorithm is compared with leading algorithms such as Rough Set Quick Reduct, Rough Set Relative Reduct, Rough Set PSO-based Quick Reduct, Rough Set PSO-based Relative Reduct, Rough Set Harmony Search-based Quick Reduct and Rough Set Harmony Search-based Relative Reduct. The experiments are carried out on protein primary sequence datasets derived from the PDB under the SCOP classification, based on structural class prediction for the classes all-α, all-β, α+β and α/β. The effectiveness of the proposed new approach combining the Black Hole algorithm with Rough Set Quick Reduct and Relative Reduct for protein structure prediction is studied and compared using classification techniques. Experimental results on the protein datasets show that the proposed algorithm offers efficiency and testing accuracy comparable to that of the existing algorithms.
Keywords: Data Mining; Bioinformatics; Feature Selection; Protein Sequence; Rough Set; Quick Reduct; Relative Reduct; Black Hole algorithm; Particle Swarm Optimization; Harmony Search; Protein Structure Prediction; classification.
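The greedy QuickReduct component named in the abstract can be sketched on a toy decision table; the Black Hole hybridization and the protein data themselves are not reproduced:

```python
def partition(rows, attrs):
    """Equivalence classes of the indiscernibility relation over attrs."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return blocks.values()

def gamma(rows, attrs, decision):
    """Dependency degree: the fraction of objects whose decision class is
    fully determined by the chosen attributes (the positive region)."""
    pos = sum(len(block) for block in partition(rows, attrs)
              if len({rows[i][decision] for i in block}) == 1)
    return pos / len(rows)

def quick_reduct(rows, conditional, decision):
    """Greedy QuickReduct: repeatedly add the attribute that raises gamma
    most, until the reduct explains as much as the full attribute set."""
    target = gamma(rows, conditional, decision)
    reduct = []
    while gamma(rows, reduct, decision) < target:
        best = max((a for a in conditional if a not in reduct),
                   key=lambda a: gamma(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# Hypothetical toy table: columns 0-2 are conditional features, column 3
# is the decision class. Column 0 alone determines the class here.
rows = [(0, 1, 0, "a"), (0, 1, 1, "a"), (1, 0, 0, "b"), (1, 1, 1, "b")]
print(quick_reduct(rows, [0, 1, 2], decision=3))  # [0]
```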
A Comparative Investigation of Approaches for Web Search Results Clustering
by Zaher Salah, Abdel-rahman Al-ghuwairi, Ahmad Aloqaily, Aladdin Baarah, Ayoub Alsarhan
Abstract: Online files, especially textual documents in different forms (books, papers, emails, news, lyrics, etc.), now number in the billions and continue to increase. The huge diversity of topics covered by this massive number of documents is expected, as these documents originate from various resources worldwide and cover different topics in science, engineering, economics, politics, history and more. Given all of these aspects, how can one search for and find specific documents relevant to the topic in the user's mind, and how can the browsing process be facilitated? And how can the user's intention be properly conveyed to the information retrieval system so that the searching and delivery task is performed precisely and quickly? This paper investigates various techniques used for clustering the web search results produced by a web search engine when running a user's query, in order to meet that user's information needs. The goal of clustering is not only to facilitate finding specific documents (navigation between documents), but also to make it easier to preview the general structure and distribution of the topics among documents. Furthermore, clustering may be used to induce or reveal hidden or embedded topics in the corpus. The aim of this paper is to provide the reader with the relevant background concerning the clustering of web search results (short text snippets) in much more detail.
Keywords: Information Retrieval; Machine Learning; Text Mining; Web Search Results Clustering.
Certain graph parameters in bipolar fuzzy environment
by Ganesh Ghorai, Sankar Sahoo, Madhumangal Pal
Abstract: Yang et al. introduced the concept of generalized bipolar fuzzy graphs in 2013. In this paper, we introduce certain concepts of covering, matching and paired domination using strong arcs in bipolar fuzzy graphs, with suitable examples, and investigate some of their properties. We also calculate the strong node covering number, the strong independence number and other parameters of complete and complete bipartite bipolar fuzzy graphs.
Keywords: Bipolar fuzzy graphs; strong arcs; covering; matching; paired domination.
Teaching Learning Based Optimization for Job Scheduling in Computational Grids
by Tarun Kumar Ghosh, Sanjoy Das
Abstract: Grid computing is a framework that enables the sharing, selection and aggregation of geographically distributed resources dynamically to meet current and growing computational demands. Job scheduling is the key issue in Grid computing, and its algorithm has a direct effect on the performance of the whole system. Because of the distributed heterogeneous nature of resources, job scheduling in a computational Grid is an NP-complete problem. Thus, the use of meta-heuristics is a more appropriate option for obtaining optimal results. In this paper, the recent Teaching Learning Based Optimization (TLBO) algorithm is proposed to solve the job scheduling problem in a computational Grid system, with minimization of makespan, processing cost and job failure rate, and maximization of resource utilization as criteria. In order to measure the efficacy of the proposed TLBO, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are considered for comparison. The comparative results show that the proposed TLBO technique outperforms the other two algorithms.
Keywords: Computational Grid; Job Scheduling; Makespan; Processing Cost; Fault Rate; Resource Utilization; GA; PSO; TLBO.
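The two TLBO phases — the teacher phase, pulling learners toward the best solution relative to the class mean, and the learner phase, where each learner interacts with a random peer — can be sketched on a stand-in objective; the Grid scheduling encoding itself is not reproduced:

```python
import random

def tlbo_step(pop, fitness, lo=-5.0, hi=5.0):
    """One TLBO generation for minimization, with greedy acceptance."""
    n, dim = len(pop), len(pop[0])
    mean = [sum(x[d] for x in pop) / n for d in range(dim)]
    teacher = min(pop, key=fitness)
    for x in pop:  # teacher phase: X + r * (teacher - TF * mean)
        tf = random.choice((1, 2))  # teaching factor
        new = [min(hi, max(lo, x[d] + random.random() * (teacher[d] - tf * mean[d])))
               for d in range(dim)]
        if fitness(new) < fitness(x):
            x[:] = new
    for x in pop:  # learner phase: move toward (or away from) a random peer
        peer = random.choice(pop)
        if peer is x:
            continue
        sign = 1 if fitness(x) < fitness(peer) else -1
        new = [min(hi, max(lo, x[d] + sign * random.random() * (x[d] - peer[d])))
               for d in range(dim)]
        if fitness(new) < fitness(x):
            x[:] = new

random.seed(7)
makespan = lambda x: sum(v * v for v in x)   # stand-in scheduling objective
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(15)]
for _ in range(100):
    tlbo_step(pop, makespan)
print(min(makespan(x) for x in pop))
```

Greedy acceptance means the best solution never worsens from one generation to the next.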
An Intelligent Model for diagnosis of breast cancer
by Raj kamal Kaur Grewal, Babita Pandey
Abstract: Breast cancer, among the most common diseases in India in comparison to the United States and China, is not easily diagnosed in its initial stage. Early diagnosis of breast cancer can save lives; therefore, it is very important to diagnose it at the initial stage. The development of an effective diagnosis model is an important issue in breast cancer treatment. This study accordingly employs the J48 classification algorithm and case-based reasoning to construct an intelligent integrated diagnosis model, aiming to provide a comprehensive analytic framework that raises the accuracy of breast cancer diagnosis at two levels. The dataset used in the diagnosis is based on the advice and assistance of doctors and medical specialists in breast cancer. At the first level, the J48 algorithm is deployed to classify the breast cancer dataset into malignant and benign cancer types. At the second level, malignant cases are further classified as ductal carcinoma in situ, lobular carcinoma in situ, invasive ductal carcinoma, invasive lobular carcinoma or mucinous carcinoma using case-based reasoning. The results show that the J48 accuracy rate is 90%. In the case-based reasoning at the second level, a new case is matched by a similarity ratio, and the case-based reasoning diagnostic accuracy rate is 98.25%. The implemented results show that the intelligent integrated diagnosis model is able to examine breast cancer with considerable accuracy. This model can be helpful in making decisions regarding breast cancer diagnosis.
Keywords: breast cancer; data mining; case-based reasoning; J48.
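The second-level case-based reasoning step — retrieving the most similar stored case and reusing its diagnosis — can be sketched as a weighted nearest-neighbour lookup; the features, weights and cases below are hypothetical:

```python
def similarity(case, query, weights):
    """Weighted similarity over features already scaled to [0, 1]."""
    total = sum(weights.values())
    score = sum(w * (1.0 - abs(case[f] - query[f])) for f, w in weights.items())
    return score / total

def retrieve(case_base, query, weights):
    """CBR retrieval: reuse the diagnosis of the most similar stored case."""
    best = max(case_base, key=lambda c: similarity(c["features"], query, weights))
    return best["diagnosis"], similarity(best["features"], query, weights)

# Hypothetical malignant cases with normalized feature values.
case_base = [
    {"features": {"size": 0.8, "margin": 0.9},
     "diagnosis": "invasive ductal carcinoma"},
    {"features": {"size": 0.3, "margin": 0.2},
     "diagnosis": "ductal carcinoma in situ"},
]
weights = {"size": 0.6, "margin": 0.4}
print(retrieve(case_base, {"size": 0.75, "margin": 0.85}, weights))
```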
On the learning machine in quaternionic domain and its application
by Sushil Kumar, Bipin Kumar Tripathi
Abstract: There are various high-dimensional engineering and scientific applications in communication, control, robotics, computer vision, biometrics, etc., where researchers face the problem of designing an intelligent and robust neural system that can process high-dimensional information efficiently. In the literature, conventional real-valued neural networks have been applied to problems with high-dimensional parameters, but the required network structures are highly complex, very time consuming and weak to noise. These networks are also not able to learn magnitude and phase values simultaneously in space. A quaternion is a number that possesses magnitude in all four components, with phase information embedded within it. This paper presents a learning machine with a quaternionic-domain neural network that can finely process the magnitude and phase information of high-dimensional data without any hassle. The learning and generalization capability of the proposed learning machine is demonstrated through 3D linear transformations, 3D face recognition and chaotic time-series predictions (the Lorenz system and Chua's circuit) as benchmark problems, which demonstrate the significance of the work.
Keywords: Quaternion; quaternionic domain neural network; 3D motion; 3D imaging; time series prediction.
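The quaternion algebra underlying such networks can be sketched directly: the Hamilton product and the standard q p q* rotation of a 3D point. This is the general algebra, not the paper's network:

```python
import math

def qmul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    """Rotate a 3D point about a unit axis by angle: p' = q p q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    p = (0.0, *point)                  # embed the point as a pure quaternion
    return qmul(qmul(q, p), q_conj)[1:]

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives approximately (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```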
Special Issue on: ICONI 2015 Internet Computing and its Applications
Pedagogical Agility and Agile Methodologies in Computer System Development Education
by Roy Morien
Abstract: The agile development debate has been won, and there is substantial evidence to support this contention. Agile development can now be considered the major, first-preference software development method, and there is much research to support its effective use and efficient practices, as well as its widespread adoption in organizations. The battle that now must be won is the acceptance of agile development methods as an integral part of the systems development curriculum in colleges and universities. Agile development in software system development can also be viewed as part of a much larger context, which we can call 'organizational agility'. The term organizational agility is not unknown on the web and in the management literature; it basically means the ability of an organization to rapidly change or adapt in response to changes in the market. An associated concept is that of lean development, which has been known and understood for many years, and whose concepts and practices have, of particular interest here, been adapted as lean software development, a subset of agile development.

This paper is therefore best seen as an education paper, based on a research approach now understood as the 'teacher-researcher in the classroom'. The author draws on 30 years of experience as a teaching academic to propose a radical approach to computer systems development pedagogy. It is now considered imperative to include agile development in university and college curricula. As well, it is proposed that the philosophy and practices inherent in organizational agility and lean product development be adopted to inform the educational and pedagogical processes, particularly in the teaching and learning of computer system development courses, however styled: information systems, information technology, business computing or computer science.
Keywords: Agile Development; Agile Adoption; Organizational Agility; Lean Education; Agile Education; Student Self-Assessment; Project Based Learning; teacher-based research.
A Framework for Collaborative Information Management in the Construction Industry
by Qusay Al-Maatouk, Mohd Shahizan Othman
Abstract: The majority of architecture, engineering and construction projects spend a considerable amount of time collecting and analyzing related information throughout the execution of each single project activity. The flow of this related information among project activities is usually more frequent than the workflow itself. Therefore, collaboration is vital to project success and is considered one of the causal success factors in project management and development; teams with high levels of collaboration and coordination have been shown to be more effective. There is a global realization of how important it is to implement and integrate IT in the construction process in order to reduce cost and achieve more efficient projects. On the other hand, the ineffective use of IT in managing information exacerbates the amount of rework that occurs during many construction projects.
Keywords: collaboration; cloud computing; Information management; Architecture Engineering and Construction.
A software defined networking-based resilient framework combined with power-efficient PS-LTE network
by Muhammad Afaq, Wang-Cheol Song, M.G. Kang
Abstract: Computer networks have an increasingly important societal role, requiring them to be resilient to a range of challenges. For a network to be resilient, it should be accompanied by a state-of-the-art monitoring system which is not only tailored to the requirements of the network, but is also able to provide real-time network-wide visibility. Besides resilience, a network should also be power-efficient. For this purpose, we propose to combine an IP-based resilient SDN framework with a power-efficient PS-LTE network. With this combination, data communication can still be made possible in case of a disaster. In this paper, we focus on (1) the sFlow monitoring system required to make our SDN-based framework resilient against disasters, and (2) the power-efficient PS-LTE network. Our goal is also to trigger a more profound discussion on combining SDN-based frameworks with power-efficient PS-LTE networks.
Keywords: SDN; PS-LTE; resilience; power-efficient; sFlow monitoring.
Media-aware Scheduling Method for Transmitting Signaling Message over MPEG Media Transport-based broadcast
by Yejin Sohn, Minju Cho, Jongho Paik
Abstract: A broadcasting system should send signaling messages frequently, which is necessary because users access the broadcast service at random times. However, the overhead caused by repeated transmission remains a big issue because of the limited bandwidth. To solve this problem, we propose a media-aware scheduling method for signaling messages in MPEG Media Transport (MMT)-based broadcasting. MMT recommends that the sending entity may send signaling messages at regular time intervals, but our method considers the media type when sending messages. We compared and analyzed the two methods with various media encoding parameters that impact overhead size. As a result, the proposed method not only maintained a latency similar to that of the MMT proposal but also reduced the overhead size.
Keywords: broadcasting system; scheduling method; signaling message; MPEG Media Transport; Scalable High Efficiency Video Coding; media-aware; media encoding; random access.
An Automatic Detection of a Natural Marker and Augmentation of 3D Models in AR with Sketch-based Object Matching
by Junchul Chun, Jaejoon Seho
Abstract: This paper introduces a sketch-based localization approach to detect a desired natural marker in an input video image. The proposed method also retrieves, from a 3D database, a 3D virtual object to be augmented in Augmented Reality, based on an object matching method. Sketch-based image matching has been used in content-based retrieval to compare database images with a sketch drawn by users and to estimate the degree of similarity between the database images and the query image. In this paper, we adopt a sketch-based object matching method to localize the natural marker in the video images in order to register a 3D virtual object in the AR system. The most similar object in the input image is determined as the natural marker for AR by comparison with the user-defined sketched image, based on the basic features of the sketched object. Unlike other image matching methods, this matching technique makes it possible to produce the query image without constraints, by drawing it intuitively. In addition, in the proposed sketch-based AR system, the 3D object augmented on the marker is also determined by object matching between the detected marker and the 3D database images.
Keywords: Augmented Reality; Sketch-based Image Matching; Object Matching; SURF; GrabCut Method; Local Binary Pattern; Natural Marker Detection.
Dynamic Spectrum Access for M2M-WANs: The African Regulators' Spectrum Policy Reform Conundrum
by Luzango Mfupe, Fisseha Mekuria
Abstract: This paper presents work that addresses the network capacity demands of Internet of Things (IoT) and Machine-to-Machine (M2M) communications, based on efficient management and utilization of radio spectrum resources. IoT/M2M applications are predicted to grow exponentially and cause a massive surge in network traffic. We argue that existing mobile network architectures are not optimized to handle the billions of small, intermittent transactions generated by M2M connections; therefore, a technology based on Dynamic Spectrum Access (DSA)-enabled low-power M2M Wide Area Networks (WANs) is proposed. Subsequently, the article presents a use-case scenario demonstrating a possible deployment of a smart-metering M2M-WAN using TV white-space (TVWS) channels and a geo-location spectrum database technique. The simulated experimental M2M-WAN showed that with only 4 TVWS channels an entire metropolitan city can be covered to provide smart-metering services. Furthermore, the article suggests changes to existing spectrum management policies and technical regulations to accommodate the new DSA-based technologies. Hence a cost-effective techno-regulatory policy model is suggested to promote DSA-enabled low-power M2M-WANs.
Keywords: IOT; M2M-WAN; TVWS; DSA; Smart-meter; Spectrum; Spectrum policy; Geo-location spectrum database; WSD; Spectrum regulator; Low-Power.
An Integrated framework for Posture Recognition
by Shipra Madan, Devpriya Soni, Harvinder
Abstract: Postures can be identified and classified from video sequences using scale-invariant keys as classification features, and the results of classification can be used in fields such as surveillance, medical diagnosis and training. In this paper, frames are extracted from the given video files and transformed into a large collection of local feature vectors using the Scale Invariant Feature Transform (SIFT), each of which is unaffected by image translation, scaling and rotation, and to some extent invariant to illumination changes and affine or 3D projection. Features are grouped using k-means clustering, in which each posture belongs to the cluster with the nearest mean. A multiclass support vector machine, the directed acyclic graph SVM (DAGSVM), then assigns labels to the centres obtained from clustering. AdaBoost is incorporated to boost the accuracy of the classifier. The dataset used in this study is a Bharatnatyam video dataset. The posture classification model is also shown to outperform state-of-the-art classification systems on videos, as the classification accuracy achieved using this framework is 89%.
Keywords: Posture classification; Support Vector Machine (SVM); Scale Invariant Feature Transform (SIFT); k-means clustering.
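The pipeline described in this abstract (local descriptors, a k-means visual vocabulary, then a multiclass SVM) can be illustrated with a minimal bag-of-visual-words sketch. This is not the authors' code: the SIFT descriptors are replaced by synthetic 128-D vectors, and scikit-learn's built-in one-vs-one SVC stands in for the DAG-SVM.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for SIFT descriptors: each "frame" yields a set of
# 128-D local feature vectors (here drawn from class-specific means).
def fake_descriptors(class_id, n=40):
    return rng.normal(loc=class_id, scale=1.0, size=(n, 128))

train = [(fake_descriptors(c), c) for c in (0, 1, 2) for _ in range(10)]

# Step 1: k-means over all descriptors builds the visual vocabulary.
k = 16
kmeans = KMeans(n_clusters=k, n_init=5, random_state=0)
kmeans.fit(np.vstack([d for d, _ in train]))

# Step 2: each frame becomes a k-bin histogram of cluster assignments.
def bow_histogram(desc):
    words = kmeans.predict(desc)
    return np.bincount(words, minlength=k) / len(words)

X = np.array([bow_histogram(d) for d, _ in train])
y = np.array([c for _, c in train])

# Step 3: multiclass SVM (sklearn uses one-vs-one internally,
# a close relative of the DAG-SVM used in the paper).
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([bow_histogram(fake_descriptors(1))])
```

The AdaBoost stage from the abstract is omitted; it would wrap the final classifier in the same way.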
Low-Illuminated SPOT-5 Image Improvement for Density-based Vegetation Identification using 3-Layer Color Manipulation Approach
by Nursyafikah Hamid, Hishammuddin Asmuni, Rohayanti Hassan, Razib M. Othman
Abstract: Poor illumination quality of a satellite image is one of the challenges encountered in vegetation analysis, especially with regard to pan-sharpened medium-spatial-resolution SPOT-5 imagery, and it affects the accuracy of vegetation identification. In this paper, a 3-layer color manipulation approach is proposed to overcome the low illumination of SPOT-5 images and increase the performance of precise vegetation identification. The SPOT-5 image is pre-processed, and three layers of image enhancement techniques are applied to identify vegetation, reduce shadow appearance, and enhance contrast for color uniformity, in order to improve the low illumination quality of the images. These steps are followed by a supervised classification process for density-based vegetation area discrimination. The approach was tested using multispectral medium-spatial-resolution SPOT-5 imagery covering the Ramsar Convention site of Tanjung Piai, located at the southernmost tip of mainland Asia, over the years 2008, 2011 and 2013. The results showed that the proposed approach performed better than existing techniques when dealing with low-illuminated medium-resolution multispectral imagery, specifically with regard to density-based vegetation identification. The results are supported by accuracy assessments and ground truth validation.
Keywords: low illumination; illumination enhancement; medium spatial resolution; vegetation identification; multispectral image.
A Study on the Security Impact of the Web Services Implementation in the Malaysian Government's Online Applications
by Weilin Chan, Mohammad Faidzul Nasrudin, Ibrahim Mohamed
Abstract: Over the years since their introduction, most organizations have believed that web services could be the best solution to address security issues in online application services, one of the top issues being the resolution of existing threats and vulnerabilities. However, without proper configuration, web services may introduce new problems to the application environment without the system developer being aware of them. The purpose of this paper is to determine the relevant security factors, and the degree of security each factor provides, when implementing web services in the Malaysian government's online applications. The result of this study is a model of security-level determinant factors, with each factor colour-coded based on its impact on security. The model consists of four core groups of factors, namely policies, expected vulnerabilities, security standards and quality of service, plus others; an additional 13 environmental factors were found to influence the core factors in web services implementation. The classification of these factors was based on the nature of business and the code of conduct in public sector agencies. Factor groups assessed with an impact value between 2 and 3 require high attention and express action by the organization, as the impact level is high and may affect the severity of their web services implementation. The model will assist administrators and decision makers in determining which of the security factors require protection against a possible threat to the organization.
Keywords: e-Government; web services security; online security policy; online applications; application vulnerabilities; security standards; quality of service.
Validated Agile Cost Management Success Factors in Software Development Projects
by Zulkefli Mansor, Saadiah Yahya, Noor Habibah Arshad
Abstract: Managing agile project costs effectively and efficiently is important in ensuring project success. The objective of this paper is therefore to validate the factors that contribute to the success of agile cost management. The paper outlines eight key success factors: customer engagement, changes in requirements, communication, corporate culture, time allocation, simplicity, a cost-effective management process and the selection of a computerized tool. The study employed a mixed method, collecting data through questionnaires and interviews, and the Rasch Measurement Model was used to analyze the results. The results showed that all eight factors contribute to the success of agile cost management. The findings can assist practitioners and academics in avoiding problems in managing the costs of agile software development projects.
Keywords: Effective; Practitioners; Requirements; Culture; Time; Simplicity; Process; Tool.
Computer Forensic Problem of Sample Size in File Type Analysis
by Hassan Chizari, Shukor Abd Razak, Mojib Majidi, Shaharuddin Bin Salleh
Abstract: File Type Identification (FTI) is the problem of determining a file's type from its content. FTI, as a computer forensic challenge, has been studied extensively, with many solutions provided by researchers. One of the most popular methodologies is mathematical analysis, which examines the distribution of bytes to infer the file type (Byte Frequency Distribution (BFD) equations). The main open question is how one can generalize a proposed FTI algorithm to all files. In this work, firstly, a normality assessment test was applied to various BFD equations, which showed that none of the BFD histograms follows a normal distribution. Then, using the Renkonen correlation to compare non-normal distributions, proper sample sizes that are representative of the population were presented based on the file type and the BFD equations. Finally, it was shown that, using the bootstrap method, the BFD distribution can be converted into a normal distribution.
Keywords: File Type Identification; Sample Size; Non-normal Distribution; Byte Frequency Distribution.
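The two building blocks named in this abstract — a file's Byte Frequency Distribution and the Renkonen (percentage) similarity used to compare non-normal distributions — can be sketched in a few lines. The sample inputs are illustrative, not from the paper's corpus.

```python
import numpy as np

def bfd(data: bytes) -> np.ndarray:
    """Byte Frequency Distribution: relative frequency of each byte 0..255."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / counts.sum()

def renkonen(p: np.ndarray, q: np.ndarray) -> float:
    """Renkonen (percentage) similarity between two distributions,
    usable for non-normal distributions: sum of element-wise minima."""
    return float(np.minimum(p, q).sum())

a = bfd(b"hello world, hello bytes")
b = bfd(b"hello world, hello again")
c = bfd(bytes(range(256)))  # a uniform spread over all byte values
# Similar text samples score near 1; the uniform spread scores much lower.
```

A file-type signature would be built by averaging BFDs over many files of the same type and comparing an unknown file's BFD against each signature.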
An Improved Data Pre-Processing Method for Classification and Insider Information Leakage Detection
by Sung-Sam Hong, Dong-Wook Kim, Myung-Mook Han
Abstract: Data pre-processing, a step performed prior to data processing, converts data into a form that is easy to analyze. In this study, we propose a method for the pre-processing and integration of data collected from various sources to detect insider information leakage; further, we evaluate the performance of the data pre-processing by performing classification and detection experiments with collected normal and abnormal log data. An insider information leakage attack scenario was created, and the attack data for this scenario were generated in order to collect the corresponding log data. Log data in a normal environment were also collected. During normalization, the log was extracted as unstructured data and normalized into a mathematical model, and dimension reduction was performed on the high-dimensional feature matrix. This pre-processing stage improved the efficiency of information leakage analysis and detection, as demonstrated by the results of our experiments. From the experimental results, we observe that securing the attack scenario and actual attack data is a very important factor in insider information leakage detection, owing to the small amount of attack data. The results of classification can improve depending on the number of classification categories and the amount of data; it is therefore important to secure existing data and to build a knowledge base. In addition, the experimental results show that the Naive Bayes (NB) and Support Vector Machine (SVM) classifiers have superior performance, with accuracies of 0.9991 and 0.9997, respectively, in source classification.
Keywords: Data Pre-Processing; Data Leakage Detection; Classification; Log Analysis; Information Security; Intelligent Security Data Analysis; Feature Extraction.
A method of improving PRR for WiFi Interference Avoidance in ZigBee Networks in Indoor Environments
by Youn-Sik Hong, Sung-Jae Kho, Uk-Jin Jang, Jae-Ho Lee
Abstract: This paper focuses on how to avoid RF interference when deploying WiFi and IEEE 802.15.4/ZigBee radios simultaneously or in close proximity in indoor environments. The circumstances are particularly unfavorable for ZigBee networks, which share the 2.4 GHz ISM band with WiFi senders capable of 10 to 100 times higher transmission power. However, ZigBee devices by nature transmit small amounts of data infrequently. Thus, we propose a solution that minimizes interference from WiFi while limiting ZigBee's channel occupancy rate.
Another important point considered in this paper is that the packet reception ratio (PRR) varies with the shape of crossing corridors, which typically take L, T, and + shapes. Thus, a mobile ad-hoc network topology must be configured to transmit wireless packets via intermediate nodes.
The first method proposed in this paper to avoid interference is channel hopping. A hop is triggered by evaluating two values at the receiver node: the latest received signal strength (RSS) values and the received acknowledgement (ACK) packets. The minimum RSS value is set to 50 dBm to guarantee reliable transmission. Our experiments show that a receiver node with a PRR of less than 65% cannot receive two or more consecutive ACK packets.
The second method, which increases PRR depending on the type of crossing corridor, is to deploy intermediate nodes at the shortest distance from their neighbours. This method yields an efficient multi-hop ad-hoc wireless network topology.
Keywords: Ad-hoc wireless networks; indoor environment; coexistence; interference; channel hopping.
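The hop decision described in this abstract (latest RSS plus consecutive missed ACKs) can be sketched as a small state machine. This is an illustrative reconstruction, not the authors' code: the RSS threshold is taken as -50 dBm (the abstract gives "50 dBm" without a sign, and RSS is conventionally negative), and the channel list is an assumption.

```python
# Illustrative sketch of the hop decision; names, thresholds and the
# channel set are assumptions, not taken from the paper's implementation.
RSS_MIN_DBM = -50      # reliability threshold on received signal strength
MAX_MISSED_ACKS = 2    # hop after two or more consecutive missed ACKs
CHANNELS = [11, 15, 20, 25]  # candidate ZigBee channels to rotate through

class Receiver:
    def __init__(self):
        self.channel_idx = 0
        self.missed_acks = 0

    def on_packet(self, rss_dbm, ack_received):
        """Decide whether to hop based on the latest RSS and ACK status;
        returns the channel to use next."""
        self.missed_acks = 0 if ack_received else self.missed_acks + 1
        if rss_dbm < RSS_MIN_DBM or self.missed_acks >= MAX_MISSED_ACKS:
            self.channel_idx = (self.channel_idx + 1) % len(CHANNELS)
            self.missed_acks = 0  # reset after hopping
        return CHANNELS[self.channel_idx]

rx = Receiver()
rx.on_packet(-40, True)        # good link: stay on channel 11
rx.on_packet(-45, False)       # one missed ACK: still stay
ch = rx.on_packet(-45, False)  # second consecutive miss: hop to 15
```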
Special Issue on: Sensor Networks and Cloud Computing
Prolonging the Network Life in Wireless Sensor Networks using a Refined Region of Interest
by Pritee Parwekar, Sireesha Rodda
Abstract: Wireless Sensor Networks are testing new domains with increasingly new applications. Resource constraints have been the classic problem associated with these networks, and maximizing the network life without compromising the efficacy of the network is the focus of every research endeavor. Considering the relevance of data, this paper presents a concept of refining the region of interest and concentrating the network resources in that area, to optimize the network life without losing relevant data at adequate resolution. Field trials using a limited number of sensors have been undertaken to validate the idea of a refined region of interest. The concept has helped increase the network life compared to its traditional equivalent.
Keywords: Wireless Sensor Networks; internet of things; region of interest; energy conservation; optimum protocol.
A Review on Congestion Control System using APU and D-FPAV in VANET
by Christy Jackson, Vijayakumar V
Abstract: Over the last few years, Vehicular Ad hoc Networks (VANETs) have been playing a vital role in much research around the world. VANETs have a wide range of applications, among which Intelligent Transport Systems (ITS) is a major area; applications such as safety, entertainment on the go and traffic advisories are some of the recent advances. This paper addresses the issues concerning vehicular traffic congestion. It is observed that the United States and the United Kingdom lose 2% and 5% of their Gross National Product (GNP), respectively, due to traffic congestion. The paper provides a review of two congestion mechanisms: Adaptive Position Update (APU) and Distributed Fair Transmit Power Adjustment in VANET (D-FPAV). APU is a strategy that dynamically adjusts the frequency of position updates based on the mobility dynamics of the nodes and the forwarding patterns in the network. D-FPAV controls congestion by adjusting node transmission power, where a node's transmit-power setting depends on predictions of application-layer traffic and the observed number of surrounding vehicles. The paper includes a simulation of a normal VANET setup without congestion control and a VANET setup with APU and D-FPAV employed. The simulation results show that employing these congestion techniques reduces the time delay caused by traffic congestion.
Keywords: APU; D-FPAV; GPSR; VANET.
Fault Tolerant Big Bang-Big Crunch for Task Allocation in Cloud Infrastructure
by Punit Gupta, Satya Prakash Ghrera
Abstract: Cloud computing is now an industrial standard for large-scale computing and for solving problems with high reliability. It has been adopted by companies worldwide, such as Google, Microsoft and Apple, for resource computing and resource sharing. But as the number of requests to the data centers in the cloud increases, the load and failure probability of a data center increase. The requests therefore need to be balanced efficiently, with a strategy that improves resource utilization, reduces request failures and improves system reliability. Moreover, surveys on cloud computing show that the failure probability increases as the load over distributed independent resources increases. To overcome these issues in cloud Infrastructure as a Service (IaaS), we propose a learning-based, fault-aware big bang-big crunch algorithm for task allocation, to minimize request failures and improve the QoS (Quality of Service) of a data center. The proposed algorithm is inspired by the theory of the evolution of the universe in cosmology. The proposed strategy is shown to have better performance in terms of execution time, scheduling time and request failure rate compared to previously proposed task allocation algorithms.
Keywords: Cloud computing; QoS; Resource utilization; Failure probability; Reliability; Cloud Infrastructure as a service; Makespan.
Adaptive Type-2 Fuzzy Controller for a Nonlinear Delay Dominant MIMO Systems: An Experimental Paradigm in LabVIEW
by M. Kalyan Chakravarthi, Nithya Venkatesan
Abstract: Higher-order nonlinear systems with dominant delay pose significant challenges to stability and process performance. This paper investigates the performance of a Type-2 Mamdani intelligent controller implemented on a delay-dominant system, modelled using a black-box approach and identified as a second-order nonlinear model, for a Dual Spherical Tank Liquid Level System (DSTLLS) in the LabVIEW environment. The adaptive approach of the intelligent Mamdani-based fuzzy controller proves very competent compared with previously reported methods. Performance indices such as the Integrated Absolute Error (IAE) and the Integrated Squared Error (ISE) are also calculated for different set-point changes of the DSTLLS. The response and error-reduction efficiency of this Adaptive Type-2 Intelligent Fuzzy (ATIF) controller has been evaluated for different flow configurations of the DSTLLS: Multiple Input Single Output (MISO), Multiple Input Multiple Output (MIMO) and Single Input Single Output (SISO).
Keywords: Non linearity; Mathematical Modelling; MIMO systems; Fuzzy Controllers; Mamdani.
Round Estimation Period for cluster based routing in Mobile Wireless Sensor Networks
by Maryam El Azhari
Abstract: Recent technological advances in digital electronics and robotics manufacturing have enabled the evolution of sensors to a higher level: sensor nodes can now change their locations according to the zone of coverage. The overall set of scattered sensor nodes forms a particular type of wireless communication network, the Mobile Wireless Sensor Network (MWSN). This type of wireless network is widely used in many applications, including environmental, healthcare and military applications. The mobility constraint, when added to a communication process, brings a number of challenges, and reliability constitutes a major issue to take into account.
In this paper, we propose a new technique to enhance the performance of cluster-based routing protocols for MWSNs. A probabilistic approach is used to balance the reconfiguration frequency of cluster forming for data transmission within a mobile environment. The results prove the efficiency of our technique, as it increases the performance of cluster-based routing protocols in terms of energy consumption, end-to-end delay and throughput.
Keywords: Mobility; Cluster Based Routing; Sensor Networks; Mobile Wireless Sensor Networks; routing protocols; Cluster Head; LEACH; Poisson distribution.
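The probabilistic balancing of cluster re-formation described in this abstract builds on LEACH-style cluster-head election (LEACH appears in the keywords). A minimal sketch of the standard LEACH round threshold — not the authors' refined protocol — is:

```python
import random

def leach_threshold(p, r):
    """Standard LEACH cluster-head threshold T(n) for desired head
    fraction p at round r; nodes not yet elected in the current epoch
    compare a uniform draw against this value."""
    return p / (1 - p * (r % int(round(1 / p))))

def elect_heads(node_ids, p, r, rng):
    """Each eligible node independently becomes a cluster head with
    probability T(n); heads then advertise and clusters re-form."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(1)
heads = elect_heads(range(100), p=0.05, r=0, rng=rng)
```

The threshold grows within an epoch (e.g. T = 0.05 at r = 0 but 0.1 at r = 10 for p = 0.05), so nodes that have not yet served become increasingly likely to be elected, which balances energy consumption across rounds.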
Energy Efficient Virtual Machine Consolidation for Cloud Data Centers Using Analytic Hierarchy Process
by Oshin Sharma, Hemraj Saini
Abstract: The ever-increasing demand for cloud computing immensely increases the consumption of energy and power. Data centers consume 1.1% to 1.5% of the electricity consumed worldwide, a share that is growing by 12% every year. A data center contains many electrical components and therefore needs a huge amount of electricity to power and cool those components, which results in high carbon dioxide emissions. Minimizing the energy consumption of data centers is very important for environmental sustainability, and it can be achieved by using fewer cloud resources and improving their utilization. Dynamic consolidation of VMs (virtual machines) plays an important role and is an effective method for reducing energy consumption. Consolidation of VMs can be done on the basis of CPU utilization, the memory occupied by a VM, or the migration time taken by VMs to move from one host to another. Along with this, a correlation policy, and switching idle servers to sleep or hibernate mode or switching them off, can also reduce energy and power consumption. In the current study, a technique based on the Analytic Hierarchy Process for selecting the VM to migrate is proposed, to minimize the total energy consumption and SLA violations of the cloud environment. Results obtained from the CloudSim toolkit using the PlanetLab data set reveal that the proposed approach performs better on energy saving and the QoS metrics of cloud data centers compared to conventional techniques.
Keywords: VM consolidation; VM Migration; Energy consumption; Analytic Hierarchy Process; Cloud computing; VM selection.
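The Analytic Hierarchy Process step can be sketched as follows: criteria (here CPU utilisation, memory, migration time, as named in the abstract) are compared pairwise on Saaty's 1-9 scale, the weights are taken as the principal eigenvector of the comparison matrix, and each candidate VM is scored as a weighted sum. The comparison values and VM scores below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Pairwise comparison of three criteria (illustrative Saaty-scale values):
# CPU utilisation vs memory vs migration time.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# AHP criteria weights = normalised principal eigenvector of A.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Candidate VMs scored per criterion (normalised; higher = better to migrate).
vms = {"vm1": [0.9, 0.2, 0.8],
       "vm2": [0.4, 0.9, 0.3],
       "vm3": [0.7, 0.6, 0.9]}
scores = {name: float(np.dot(w, feats)) for name, feats in vms.items()}
selected = max(scores, key=scores.get)  # VM chosen for migration
```

A full implementation would also check the consistency ratio of A before trusting the weights.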
Special Issue on: Advanced Intelligence Paradigms in Machine Vision, Image Processing and Pattern Analysis
Face Recognition using combined Binary particle swarm optimization and Hidden layer of Artificial Neural Network
by S.G. Charan
Abstract: Face recognition is a challenging domain in which Artificial Neural Networks have been seen to perform very well, for both detection and recognition. In this paper, we propose a novel method of feature extraction in which the features obtained at the hidden layer of a neural network are utilized. This hidden-layer output is our first level of features. To these features we apply Binary Particle Swarm Optimization (BPSO) to remove the redundancy contributed by some of the hidden units in the network. BPSO over the hidden-layer outputs can be implemented in two ways: 1) applying BPSO over the hidden layer during the training stage, so the network is better optimized; or 2) directly applying BPSO to an optimized neural network's hidden-layer output. Both techniques performed better than a traditional neural network and conventional BPSO. Experiments on the FERET and LFW datasets show promising results.
Keywords: Face Recognition; Hidden Data Mining; Particle Swarm Optimization; Artificial Neural Network; Hybrid Intelligent model.
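BPSO-based feature selection over hidden-layer outputs can be sketched generically: each particle is a binary mask over features, velocities are updated as in standard PSO, and positions are re-sampled through a sigmoid. This is a toy illustration under stated assumptions (synthetic "hidden-layer outputs", a nearest-centroid fitness instead of the paper's recognition accuracy, and illustrative PSO coefficients), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hidden-layer outputs": 60 samples, 20 features; the labels depend
# only on the first 5 features, so BPSO should favour masks covering them.
X = rng.normal(size=(60, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Score a binary feature mask by nearest-centroid accuracy."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

n_p, n_f = 12, 20
pos = rng.integers(0, 2, size=(n_p, n_f)).astype(float)
vel = np.zeros((n_p, n_f))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(30):
    r1, r2 = rng.random((n_p, n_f)), rng.random((n_p, n_f))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary position update: bit i is 1 with probability sigmoid(vel_i).
    pos = (rng.random((n_p, n_f)) < sigmoid(vel)).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

best_acc = fitness(gbest)  # accuracy of the best feature mask found
```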
Video-based assistive aid for blind people using object recognition in dissimilar frames
by Hanen Jabnoun, Faouzi Benzarti, Frédéric Morain-Nicolier, Hamid Amiri
Abstract: Developing visual aids for handicapped persons is an active research area in the computer vision community. This paper presents a visual substitution tool for blind people based on object recognition in video scenes. It focuses on optimizing the video processing through the calculation of dissimilarity between frames. The approach includes the Real Valued Local Dissimilarity Map method in the frame dissimilarity measures, and uses Scale Invariant Feature Transform keypoint extraction and matching to identify objects in dissimilar frames. The experimental tests show some encouraging results for finding objects of interest. Thus, the proposed method can be a choice for solving the problem of blind and disabled persons' interaction with the surrounding environment.
Keywords: Pattern recognition; video processing; visual substitution system; Scale Invariant Features Transform; Real Valued Local Dissimilarity Map; keypoints matching.
Priority Based Trimmed Median Filter for Removal of High Density Salt and Pepper Noise
by Sudhakar R, Sudha V.K.
Abstract: This paper proposes an efficient and less complex Priority Based Trimmed Median Filter algorithm for restoring images corrupted by high-density salt and pepper noise. In this algorithm, a noisy pixel is replaced by the trimmed median value of its four horizontally and vertically adjacent pixels. If all four of these are 0s and 255s, then the next-priority diagonally adjacent four pixels are used to calculate the trimmed median for replacing the noisy pixel. If these four are also found to be 0s and 255s, then the noisy pixel is left unchanged until the next iteration. Experimental results on different gray-scale and color images show that the proposed algorithm outperforms the Standard Median Filter, the Adaptive Median Filter, the Decision Based Algorithm, the Modified Progressive Switching Median Filter and the Modified Decision Based Unsymmetric Trimmed Median Filter.
Keywords: Salt and Pepper noise; Median filter; Adaptive Median Filter; Unsymmetric Trimmed Median Filter.
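The three-step rule stated in this abstract maps directly to code. The sketch below follows that rule for one iteration; the edge-padding choice and helper names are assumptions, not the authors' implementation.

```python
import numpy as np

def priority_trimmed_median(img):
    """One pass of the priority-based trimmed median filter described
    above: noisy pixels (0 or 255) are replaced by the trimmed median of
    valid horizontal/vertical neighbours, falling back to the diagonal
    neighbours, and otherwise left unchanged for the next iteration."""
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0, 255):
                continue  # not salt-and-pepper noise
            pi, pj = i + 1, j + 1  # position in the padded image
            hv = [padded[pi-1, pj], padded[pi+1, pj],
                  padded[pi, pj-1], padded[pi, pj+1]]
            diag = [padded[pi-1, pj-1], padded[pi-1, pj+1],
                    padded[pi+1, pj-1], padded[pi+1, pj+1]]
            for neigh in (hv, diag):  # priority: H/V first, then diagonal
                valid = [v for v in neigh if v not in (0, 255)]
                if valid:  # "trimmed": 0s and 255s are excluded
                    out[i, j] = int(np.median(valid))
                    break
    return out

noisy = np.array([[10, 255, 12],
                  [0,  11,  13],
                  [14, 12,  255]], dtype=np.uint8)
clean = priority_trimmed_median(noisy)
```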
An Efficient approach for handling degradation in Character Recognition
by Sandhya N
Abstract: Recognition of historical printed degraded Kannada characters is not completely solved and still remains a challenge to researchers. In this paper, a scale for measuring the degradation of a character is proposed. The degradation is further characterized as high, medium or low based on this scale, which is used to study the efficiency of the character restoration technique designed. A new approach for recognition, Fit Discriminant Analysis (FDA), is proposed, and its recognition accuracy is compared with the existing techniques Support Vector Machines (SVM) and Fisher Linear Discriminant Analysis (FLD). Through extensive experimentation it is established that rebuilding characters significantly improves the recognition accuracy of the learning-based approaches SVM, FDA and FLD. It is further established that the proposed FDA approach gives the best recognition accuracy for historical printed degraded documents, and that constructing the training and testing sets by applying the proposed degradation measure is required for better recognition accuracy.
Keywords: Degraded characters; Support Vector Machines; Fisher Linear Discriminant Analysis; Broken characters.
Pattern Analysis and Texture classification using Finite State Automata scheme
by B. Eswara Reddy, Ramireddy Obulakonda Reddy
Abstract: The paper proposes a complete model of finite state automata, along with an associated classifier, for texture classification. Pattern analysis of the texture image is performed by a proposed symbolic-pattern-based algorithm, developed from symbolic dynamics and finite state automata theory, which estimates the state transitions of the texture variations. The texture image is divided into several partitions, i.e., the texture, the background of the texture, the shadow of the texture, etc. Finite automata state transitions are used to extract features from the symbolized image, and a binary classifier is designed to classify the texture categories based on these features. Pattern analysis is performed on the KTH-TIPS dataset for 10 varied categories of texture, and a classification accuracy of 99.12% is achieved. The experimental study shows the better efficiency of the proposed system compared to other state-of-the-art methods.
Keywords: Finite automata; symbolic pattern; texture; classification.
A Novel Method for Super Resolution Image Reconstruction
by Joseph Abraham Sundar K, Vaithiyanathan V
Abstract: This paper describes a new method for super-resolution based on surveying adjustment. The idea is that an observation model is developed for the sequence of low-resolution images, and from this an observation equation is derived for Super-Resolution Image Reconstruction (SRIR). The observation equations are used by the surveying adjustment to find the gray function. The proposed method is validated using both simulated and real experiments, and the results are compared with various recent techniques using performance measures such as the peak signal-to-noise ratio and the sharpness index. In both sets of experiments, the proposed surveying-adjustment-based super-resolution image reconstruction proved highly efficient, which is needed for satellite imaging, medical imaging diagnosis, military surveillance, remote sensing, etc.
Keywords: Super-Resolution; Image Reconstruction; Gray function; Observation model.
GLCM Based Detection and Classification of Microaneurysm in Diabetic Retinopathy Fundus Images
by Dhiravida Chelvi, Raja Mani, C.T.Manimegalai Murugeasn
Abstract: Pre-screening of the eye is very important in diabetic retinopathy to help ophthalmologists provide relevant treatment. Diabetic retinopathy is a major cause of blindness, and it includes lesions such as microaneurysms, haemorrhages and exudates. Microaneurysms, which appear as small red dots on retinopathy fundus images, are the first clinical sign of diabetic retinopathy and an early detectable sign of a disease that can soon cause vision loss; their number indicates the severity of the disease. The first step in preventing the disease is the automatic detection of microaneurysms at an early stage, which reduces the manual workload and cost. A novel method of microaneurysm detection for retinopathy images is proposed here; the proposed algorithm detects and classifies microaneurysms from diabetic retinopathy fundus images, including low-resolution images. Initially, the image is processed by a median filter and enhanced by Contrast Limited Adaptive Histogram Equalization (CLAHE). Microaneurysm candidates are then extracted by the extended-minima method. PCA (principal component analysis) is used as a pre-feature extractor in terms of the size, shape and colour of the MA. To improve the efficacy of the system, statistical features are finally extracted with the gray-level co-occurrence matrix (GLCM) and given to a k-NN classifier to classify microaneurysms accurately. The detected MAs are validated by comparison with expert ophthalmologists' hand-drawn ground-truth images. The simulation results show a sensitivity of 95.7%, a specificity of 90.56% and an accuracy of 93% for the proposed algorithm.
Keywords: Micro aneurysm; Diabetic Retinopathy; Image Processing; Pre-Processing; Image Classification.
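The GLCM feature-extraction step named in this abstract can be sketched directly in NumPy: count co-occurring grey-level pairs for one pixel offset, normalise, and derive a few standard Haralick-style statistics of the kind typically fed to a k-NN classifier. The offset, quantisation and feature choices below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one offset, normalised to a
    joint probability over (reference level, neighbour level) pairs."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy and homogeneity of a normalised GLCM."""
    idx = np.indices(p.shape)
    d = idx[0] - idx[1]            # grey-level difference per cell
    contrast = float((p * d**2).sum())
    energy = float((p**2).sum())
    homogeneity = float((p / (1.0 + np.abs(d))).sum())
    return contrast, energy, homogeneity

flat = np.zeros((8, 8), dtype=int)    # uniform region: no transitions
stripes = np.tile([0, 7], (8, 4))     # high-contrast vertical stripes
f_flat = glcm_features(glcm(flat))
f_stripes = glcm_features(glcm(stripes))
```

A uniform patch yields zero contrast and maximal energy, while the striped patch yields high contrast, which is exactly the kind of separation a k-NN over these features exploits.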
Iris recognition system based on a new combined feature extraction method.
by Izem Hamouchene, Saliha Aouat
Abstract: Recent scientific studies are interested in automatic systems that operate without human intervention, a concept crucially needed in several research areas and in industry. Indeed, the security field is in great need of automatic identification systems based on biometrics. The human iris is considered the best biometric mark for identification due to its stability, distinctiveness and features that remain unique over time; the uniqueness of the texture present in the human iris is a natural password, a property coveted by the security field. In this paper, we propose a novel automated iris recognition approach based on a combination of two systems. The first system is based on the Regional Variation (RV) method, which decomposes the iris image into several blocks and then encodes the variation of the mean and the variance to generate regional descriptors. The second system is based on a new feature extraction method called the Rotation Invariant Neighborhood-based Binary Pattern (RINBP) (Hamouchene and Aouat, 2014), which extracts the relative local information between neighboring pixels and is robust against rotation. Two sets of support vector machine (SVM) based learning algorithms are used to train the two systems, and the output scores of the two systems are normalized. Dempster-Shafer theory is used to distribute a unitary mass over the two output sets of SVMs, and the combined belief measures are finally transformed into a probability by applying the Dezert-Smarandache theory. In the experiments, the CASIA iris image database is used as a benchmark, and the proposed system is compared to well-known iris recognition systems (Wildes, 1997; Masek, 2003; Han et al., 2014; Himanshu et al., 2014; Hamouchene and Aouat, 2014). The experiments illustrate that the proposed recognition system obtains better recognition rates, demonstrating the efficiency of the feature extraction methods (RV and RINBP) and of the decision model, which give promising results.
Keywords: Iris Recognition System; Neighborhood-based Binary Pattern; Texture analysis; Mean and variance variations; Dempster-Shafer theory; Support Vector Machines.
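The fusion step the abstract describes, normalized classifier scores treated as belief masses and merged before a final decision, can be illustrated with Dempster's rule of combination. The mass values, class labels and two-system setup below are hypothetical placeholders, not the authors' actual figures; a minimal sketch:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal
    elements) using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical normalized scores from the two recognition systems,
# with some mass left on the whole frame (ignorance).
G, I = frozenset({"genuine"}), frozenset({"impostor"})
theta = G | I
m_rv    = {G: 0.6, I: 0.2, theta: 0.2}   # regional-variation system
m_rinbp = {G: 0.7, I: 0.1, theta: 0.2}   # RINBP system
fused = dempster_combine(m_rv, m_rinbp)
```

Agreement between the two systems concentrates mass on the shared class, which is the effect the combined decision model relies on.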
Enhanced method of using Contourlet transform for medical image compression
by Eben Sophia P, Anitha J
Abstract: With the aim of improving the compression performance of the contourlet transform, Singular Value Decomposition (SVD) of the intermediate subbands is investigated. In this way, the size of the contourlet subbands can be efficiently reduced to induce compression. This novel lossy compression technique enhances the compression performance of the contourlet transform and produces good quality images even at lower bit rates. In addition to SVD, normalization and prediction of the decomposed subband coefficients also improve the compression performance. The method was tested on medical MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) imaging modalities. The statistical results confirm the efficiency of the proposed method in terms of CR (Compression Ratio), PSNR (Peak Signal to Noise Ratio) and BPP (Bits Per Pixel): it achieves approximately 47 dB PSNR at bit rates as low as 0.1 BPP. This makes it well suited for medical image communication and storage applications such as PACS (Picture Archiving and Communication System) and RIS (Radiology Information System), and also facilitates easy search and retrieval.
Keywords: Contourlet transform; singular value decomposition; prediction; lossy compression; arithmetic coding; medical MRI and CT images.
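The SVD step the abstract describes, shrinking a subband's effective size by keeping only its largest singular values, can be sketched as follows. The block size, retained rank and noise level are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def truncated_svd_approx(block, k):
    """Keep only the k largest singular values of a subband block --
    an illustrative stand-in for the SVD reduction step."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
# A low-rank "subband" plus mild noise compresses well under SVD:
# only k column/row vectors and k singular values need be stored.
base = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
block = base + 0.01 * rng.standard_normal((64, 64))
approx = truncated_svd_approx(block, k=4)
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
```

Storing `U[:, :k]`, `s[:k]` and `Vt[:k, :]` instead of the full block is where the size reduction, and hence the compression, comes from.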
Brachiopods classification based on fusion of Contour and Region based descriptors
by Youssef Ait Khouya, Faouzi Ghorbel
Abstract: In this paper, we propose a contour-region based shape descriptor for brachiopod classification that combines Fourier descriptors with the R-transform extracted from the Radon transform. Fourier descriptors are supported by the well-developed and well-understood Fourier theory and are powerful features for the recognition of two-dimensional connected shapes; we use the stable and complete Fourier descriptors proposed by Ghorbel to represent the contour information. To depict the interior content of the shape we use the R-transform, whose advantages lie in its low computational complexity and geometric invariance. We compare the proposed descriptor with the Curvature Scale Space, R-transform and Ghorbel descriptors using the city block distance measure on our brachiopod database. The experimental results show that the proposed descriptor is efficient and independent of the starting point.
Keywords: Brachiopod; Fourier descriptors; Radon transform; R-transform; Curvature Scale Space.
Identification of Human Activity Pattern in Controlled Web Environment: An Adaptive Framework
by A. Chakraborty, D. Banerjee, R.T. Goswami
Abstract: This paper presents a new perspective on research into human web-based activity patterns. The web activity pattern analyser is part of the broader goal of this research: analysing human psycho-emotional behavioural patterns. In the current era, people depend heavily on the internet in many aspects of their lives, so each individual user's internet usage pattern is becoming a powerful resource for understanding that user. The needs of web users can be met more efficiently if their requirements are known to the providers. These usage patterns are found to be, to some extent, unique to each user according to his or her current psycho-emotional state in a controlled web environment, and can therefore serve as an additional mark of authentication for that user. This concept has already been applied in several real-world application domains, namely user authentication protocols, personalised e-learning, and link data analysis for the Resource Description Framework in the Semantic Web.
Keywords: Session_Sequence; Activity Pattern; Adaptive Algorithm; Dempster–Shafer theory; Belief Function; Recommender Agent; RDF Graph.
Special Issue on: Green Mobile Computing for Energy-Efficient Next-Generation Wireless Communication
Reconfigurable Communication Wrapper for QoS Demand in Network on Chip
by S. Beulah Hemalatha, Vigneswaran T
Abstract: Efficient communication wrapper design is one of the important research issues in network on chip. A single wrapper with fixed design parameters will not be efficient in a heterogeneous network-on-chip environment: a system on chip contains many different computing and communication blocks with different data rates and data formats. To interconnect such heterogeneous blocks, standards-based wrapper frameworks such as the OCI wrapper have been proposed, but such standard wrappers do not support the QoS demands of every block. This work therefore proposes a framework for a reconfigurable communication wrapper design with QoS support. The proposed framework is simulated in LabVIEW software and tested on National Instruments FlexRIO 7845R FPGA hardware. The results show that on-the-fly reconfigurability is achievable with this framework.
Keywords: OCI wrapper; reconfigurable communication wrapper; QoS; system on chip.
A Novel approach for Secured Transmission of DICOM Images
by Priya Selvaraj
Abstract: DICOM (Digital Imaging and Communications in Medicine) communication mainly concerns the transmission of medical images, the storage of the information they contain, and the printing and securing of the images. Medical image communication must primarily provide secure medical facilities for physicians and patients. In the proposed approach, the medical image is compressed in the JPEG 2000 format. A hash value is computed using an Additive Hash Function (AHF) and encrypted using RSA to form a digital signature; the combination of the digital signature and a text block constitutes the watermark. The text consists of patient information, doctor information, disease information, and the prescription. Reversible watermarking is a technique in which the watermark is embedded and, after the watermarked image passes through the authentication process, the original image is extracted along with the watermark. Strict authentication is provided by implementing the Kerberos technique, ensuring high security for access to the medical images.
Keywords: Reversible watermarking; Authentication; Medical Image Compression; JPEG2000 Compression; Additive Hash Function; RSA; Kerberos.
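The signature construction the abstract outlines, hash the image data, then encrypt the hash with the RSA private key, can be sketched with a toy additive hash and textbook RSA. The hash definition, key values and input bytes below are illustrative assumptions, not the paper's actual parameters, and the tiny key is of course not secure:

```python
def additive_hash(data: bytes, mod: int = 2**32) -> int:
    """Toy additive hash -- an illustrative stand-in for the paper's
    Additive Hash Function (AHF), whose exact definition may differ."""
    return sum(data) % mod

# Textbook RSA signing of the hash (tiny toy key -- NOT secure).
p, q, e = 61, 53, 17
n = p * q                            # public modulus
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

h = additive_hash(b"pixel data of a DICOM image") % n
signature = pow(h, d, n)             # sign: h^d mod n
verified = pow(signature, e, n) == h # verify: s^e mod n recovers h
```

In the scheme described, this signature would then be concatenated with the patient/doctor text to form the reversible watermark payload.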
Adaptive Multi loop IMC Based PID controller tuning using Bat Optimization algorithm for Two Interacting Conical Tank Process
by Lakshmanaprabu Sk
Abstract: In this paper, a multi-loop adaptive internal model control (IMC) based PID controller is designed for the two interacting conical tank level process (TICTLP). The nonlinear TICTLP is decomposed into a linear transfer function matrix around the operating points, and the effective open loop transfer function (EOTF) is developed using a simplified decoupler. The IMC-PID controller parameters are obtained for the EOTF model using the Bat optimization algorithm (BOA). A weighted sum of the integral time absolute error is used as the control design objective for the multi-loop IMC-PID design, which yields faster settling with minimal overshoot. Fuzzy-based adaptive gain scheduling is used to provide complete control of the TICTLP, and a fuzzy-based adaptive decoupler is implemented to eliminate the dynamic interaction between control loops. The simulation results of the proposed controller are compared with conventional ZN-PID and IMC controllers to show its superiority. The simulation responses indicate improved performance in terms of time domain performance indices, servo tracking, regulatory response and settling time.
Keywords: Conical tank process; Effective open loop transfer function; adaptive decoupler; Multi loop IMC control; IMC-PID; Relative Gain Array (RGA); Fuzzy gain scheduling; Bat Optimization Algorithm.
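As background to the IMC-based design, one widely used IMC tuning rule for a first-order-plus-dead-time (FOPDT) model yields a PI controller directly from the model parameters. This is a generic textbook rule with made-up example numbers, not the paper's Bat-optimized IMC-PID tuning:

```python
def imc_pi_tuning(K, tau, theta, lam):
    """Common IMC-based PI rule for an FOPDT model
    G(s) = K * exp(-theta*s) / (tau*s + 1), with IMC filter constant lam:
        Kc = tau / (K * (lam + theta)),   Ti = tau
    A larger lam gives a slower, more robust closed loop."""
    Kc = tau / (K * (lam + theta))
    Ti = tau
    return Kc, Ti

# Hypothetical FOPDT fit of one loop of a level process.
Kc, Ti = imc_pi_tuning(K=2.0, tau=10.0, theta=1.0, lam=2.0)
```

In the paper's setting, an optimizer such as the Bat algorithm would search over the filter constant (and the derivative term of a full PID) to minimize the weighted integral-time-absolute-error objective instead of fixing it by hand.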
An Adaptive Low Power Coding Scheme for the NOC
by M. Jasmin, T. Vigneswaran
Abstract: Low power design is important for systems on chip, where many subsystem blocks communicate with each other at high data rates to realize the system functionality. Low power coding reduces energy either by reducing self-switching activity or by reducing coupling switching activity. In a typical Network on Chip (NOC), however, we require a low power coding scheme that can handle different kinds of data traffic from different IP cores, at different times and different places in the System on Chip (SOC); a single low power coding scheme will not satisfy all subsystem or application demands. This paper therefore presents a correlation-analysis-based adaptive data coding scheme that provides low power at any instant on any kind of data traffic. This is done by selecting and applying a coding scheme according to the correlation level of the data traffic, which is classified into three categories: low, moderately and highly correlated. Based on this classification, a different coding scheme is applied. The proposed system is simulated in the LabVIEW FPGA tool for the USRP RIO target, a wireless transceiver that can inject megabits of test data per second for testing the coding scheme. The power consumption of existing coding schemes is compared with the proposed adaptive scheme on test data sets of different correlation levels. The results show that the proposed system saves 25% energy compared to the other coding schemes in the worst case scenario.
Keywords: NOC; SOC; Correlation analysis; USRP RIO.
VANET Routing Protocol with a Traffic-Aware Approach
by Sangeetha Francis
Abstract: A Vehicular Ad hoc NETwork (VANET) is a type of Mobile Ad hoc NETwork (MANET) in which vehicles form the nodes. Routing is a fundamental requirement of VANET applications, so it is necessary to devise a routing protocol that copes well with rapid topology changes and disconnected network conditions. To address these specific needs of VANETs, we present a novel greedy routing protocol for vehicular networks, called VRPTA, suited to both city and highway environments. With the help of the GPS (Global Positioning System) localization system, the proposed protocol efficiently relays data in the network by considering different scenarios, such as road traffic variation and various environmental characteristics. The protocol communicates vehicle-to-vehicle as well as vehicle-to-infrastructure, whichever is applicable, thereby ensuring reliable transmission. In addition, we consider information about vehicle speed, direction and density for a city traffic configuration consisting of bidirectional roads, multiple lanes and a highway scenario. The work is implemented using the NS2 simulator.
Real Time MAF Based Multi Level Access Restriction Approach for Collaborative Environment Using Ontology
by Rajeswari Sampath
Abstract: Collaborative environments encourage rapid development in many organizations but struggle with malicious access. Many access control approaches for improving the performance of collaborative environments have been discussed earlier, but the expected performance has unfortunately not been achieved. This paper presents a novel real-time, malicious access frequency (MAF) based multi-level restriction scheme. The method maintains an ontology of resources, which contains data of various kinds, their properties, and the set of roles in the environment permitted to access them. The system also maintains logs of previous accesses by the various users of the environment. The log supports the computation of the MAF for the requested data and user. Using the computed MAF value, the method computes a multi-attribute trust measure for each level as well as a multi-level trust weight, and based on these values performs access restriction to improve the quality of collaborative development.
Keywords: Collaborative Environment; MAF; Data Ontology; Access Restriction; Public Auditing; MLA.
MMSI: A Multi-Mode Service Invocation Algorithm To Improve The Connectivity In Accessing Cloud Services In Heterogeneous Mobile Cloud
by R.K. Nadesh, M. Aramudhan
Abstract: Modern research in the cloud environment focuses on how mobile users can access data through cloud services via an arrangement of regional cloudlets when connectivity with the cloud service provider is weak or lost. As cloud services can be activated anytime from anywhere, connection management should be handled fairly to maintain the service requirements. Although cloud services can be invoked independently of location, if the service parameters do not meet the constraints, the performance of the cloud system degrades. In this paper, we propose a multi-mode service invocation algorithm for improving cloud service to mobile users. When a mobile user is connected to a cloud service and the service level drops under random mobility, the algorithm chooses a cloudlet or an ad hoc cloud to provide an identical service without interruption. In our experiment we estimate parameters such as delay, signal strength and energy; when the estimated levels fall below threshold values, we invoke and bind to the nearest cloudlet or ad hoc cloud, whichever is available. The client invokes services through cellular networks under normal conditions and, at every time interval, computes the signal strength, energy level and delay factors involved in accessing the cloud service. When the estimated parameters are below the thresholds, it connects through the local access point. The multi-mode algorithm computes the service invocation weight and selects the connectivity mode for continuing the service invocation. We show that this algorithm improves user performance in accessing cloud services in terms of throughput, connectivity ratio and service completion.
Keywords: Cloud Computing; Mobile Adhoc Clouds; Cloudlets; Service Invocation.
Malicious node detection through Run Time Self healing algorithm in WSN
by B.R. Tapasbapu, L.C. Siddanna Gowd
Abstract: A wireless sensor network (WSN) possesses a large number of randomly deployed nodes that configure themselves to form a network. A WSN's major role is to monitor the environment, collect data and communicate the data to the base node. The integrity of the data communicated by the WSN nodes is an important criterion for avoiding failures in the network, so self-healing techniques are implemented to overcome data losses in routing caused by misbehaving nodes. However, most protocols designed for self-healing are not energy-constrained and are unsuitable for battery-powered networks. We propose a new run-time self-healing algorithm that employs individual monitoring nodes which scan the data and assess the stability of the nodes to ensure proper communication in the network. The proposed method is compared with the self-healing hybrid sensor network architecture (SASHA) and an Error Correction Code (ECC) algorithm to demonstrate the improvement in network efficiency.
Keywords: Wireless sensor network; Fault occurrence; Self healing; Node management; Dead node avoidance.
Classification of Neonatal Epileptic Seizures using Support Vector Machine
by Vimala Velayutham
Abstract: Neonates are infants in their first 28 days of life. The diagnosis of neonatal seizures relies on clinical observation and electroencephalography (EEG). Continuous monitoring of neonatal EEGs in neonatal intensive care units is tedious and requires expert intervention, and clinical decision support systems introduced into neonatal intensive care units have proved to be a valuable aid to neonatal staff. Neonatal seizures of epileptic origin are the most common, and we propose an approach to aid in their classification using the EEG signals of neonates. The Daubechies wavelet transform is used to separate the frequency bands and extract features; the theta rhythm of the EEG closely reflects the occurrence of epileptic seizures in neonates. The features considered for classification are the mean, variance, skewness and kurtosis. Support Vector Machine (SVM) based classification is adopted to develop a system that detects the presence or absence of epileptic seizures. The performance of this diagnostic aid has been studied: the system has a sensitivity of 94% and a specificity of 96%, and the receiver operating characteristic curve is also used in the performance assessment.
Keywords: Classification; EEG; neonatal intensive care units; neonatal epileptic seizures; support vector machine.
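The feature pipeline the abstract describes, wavelet band separation followed by mean, variance, skewness and kurtosis per band, can be sketched with a one-level Haar filter bank standing in for the Daubechies transform. The signal is synthetic and the epoch length is an assumption; the result is the kind of feature vector an SVM would consume:

```python
import numpy as np

def haar_step(x):
    """One level of a Haar analysis filter bank -- a simplified stand-in
    for the Daubechies decomposition used in the paper."""
    x = x[: len(x) // 2 * 2]                    # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass band
    return approx, detail

def band_features(band):
    """Mean, variance, skewness and kurtosis of one subband."""
    m = band.mean()
    v = band.var()
    z = (band - m) / np.sqrt(v)
    return np.array([m, v, np.mean(z ** 3), np.mean(z ** 4)])

rng = np.random.default_rng(1)
eeg = rng.standard_normal(512)                  # hypothetical EEG epoch
approx, detail = haar_step(eeg)
features = np.concatenate([band_features(approx), band_features(detail)])
# `features` (an 8-vector here) would be fed to the SVM classifier.
```

Repeating `haar_step` on the approximation band isolates successively lower frequency bands, which is how the theta band would be reached in practice.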
PRESERVING SECURITY USING CRISSCROSS AES AND FCFS SCHEDULING IN CLOUD COMPUTING
by Kalyanaraman Ramkumar, Gurusamy Gunasekaran
Abstract: Cloud computing is a developing technology in distributed computing which provides a pay-per-use service model according to user needs and requirements. The cloud comprises a collection of virtual machines offering both computation and storage. The objective of cloud computing is to provide effective access to hyper-distributed resources. The cloud is developing fast and faces many challenges, two of which are scheduling and security. Scheduling concerns how a scheduler adapts its strategy, according to changing conditions, to control the order in which work is executed by a computer system. In this paper, a scheduling algorithm based on collocated First Come First Served (FCFS) with supremacy elements is proposed, improving system efficiency by applying FCFS in a parallel manner. To address the security problem, a crisscross Advanced Encryption Standard (AES) scheme is proposed, increasing security in the cloud through a grid arrangement. Together, the proposed crisscross AES and collocated FCFS with supremacy elements enhance both system efficiency and security.
Keywords: Cloud computing; First Come First Served; Advanced Encryption Standard; Security.
BINARY HONEY BEE MATING PARTIAL TRANSMIT SEQUENCE TO IMPROVE OFDM
by Jagarlamudi Ravisankar, B. Seetha Ramanjaneyulu
Abstract: A major shortcoming of Orthogonal Frequency Division Multiplexing (OFDM) is the extreme Peak-to-Average Power Ratio (PAPR) of the transmitted signals. The partial transmit sequence (PTS) method is capable of enhancing the PAPR statistics of OFDM signals: the data block to be transmitted is split into disjoint sub-blocks, which are then combined using phase factors chosen to minimize the PAPR. Because generic PTS requires an extensive search over every combination of permitted phase factors, the search complexity rises exponentially with the number of sub-blocks. In this work, a novel sub-optimal technique based on Binary Honey Bee Mating (BHBM-PTS) is suggested for finding a good combination of phase factors. BHBM-PTS considerably reduces the computational complexity for larger numbers of PTS sub-blocks while simultaneously providing lower PAPR. Simulations show that BHBM-PTS is an effective technique for achieving a considerable PAPR reduction.
Keywords: Orthogonal Frequency Division Multiplexing (OFDM); Peak-to-Average Power Ratio (PAPR); Partial transmit sequence (PTS); Binary Honey Bee Mating (BHBM).
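The exhaustive PTS search that BHBM-PTS approximates can be sketched directly: split the frequency-domain symbol into sub-blocks, try every phase combination, and keep the signal with the lowest PAPR. The sub-block count, the binary phase set and the QPSK symbols are illustrative assumptions:

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_search(X, n_sub=4, phases=(1, -1)):
    """Exhaustive PTS: partition X into n_sub disjoint sub-blocks and
    search all phase combinations for the minimum-PAPR signal."""
    N = len(X)
    subs = []
    for i in range(n_sub):
        s = np.zeros(N, complex)
        s[i * N // n_sub:(i + 1) * N // n_sub] = X[i * N // n_sub:(i + 1) * N // n_sub]
        subs.append(np.fft.ifft(s))   # IFFT is linear: combine in time domain
    best_sig, best_papr = None, np.inf
    for b in product(phases, repeat=n_sub):          # 2**n_sub candidates
        sig = sum(ph * s for ph, s in zip(b, subs))
        val = papr_db(sig)
        if val < best_papr:
            best_papr, best_sig = val, sig
    return best_sig, best_papr

rng = np.random.default_rng(2)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)  # QPSK symbols
baseline = papr_db(np.fft.ifft(X))
_, reduced = pts_search(X)
```

The `2**n_sub` loop is exactly the exponential search the abstract mentions; a heuristic such as BHBM replaces it with a guided sampling of phase vectors.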
A Survey on Internet of Vehicles: Applications, Technologies, Challenges and Opportunities
by Priyan M K, Ushadevi G
Abstract: This work provides a survey on the Internet of Things (IoT), the Internet of Vehicles (IoV) and the Internet of Everything (IoE). The IoT interconnects various physical devices such as sensor devices, mobile phones, laptops, PDAs and so on; nowadays, it also connects vehicles, buildings and other items fitted with sensors, actuators and gateways. The IoV grew out of the IoT and is used to interconnect things, vehicles and environments so that data and information can be transferred between networks. The IoE is an enhanced version of Internet-based technologies such as the Internet of Things, the Internet of Humans and the Internet of Digital; it provides end-to-end connectivity among processes, knowledge and ideas engaged across all connectivity use cases. This paper discusses various challenges and issues in modern IoT, IoV and IoE systems. In addition, it discusses security issues and various applications of the IoT in healthcare. Although IoT devices perform well in modern applications, some challenges still exist, and to overcome these issues various open research problems are identified in this paper.
Keywords: Internet of Things; Internet of Vehicles; Internet of Everything; Vehicular ad hoc network; Big Data; Cloud Computing; Intelligent Transportation System.
Radio Spectrum Collision Avoidance in Cluster Cognitive Networks through Gazer Nodes
by V. Nagaraju, L.C. Siddanna Gowd
Abstract: The spectrum deficiency in cognitive radio can be addressed effectively through better utilization of the radio spectrum, which at present is not shared effectively among all users. Since users are spread across different locations, spectrum allocation and spectrum sharing are important for using the spectrum effectively and for allocating a communication channel to every device in the network; in this way all nodes can communicate while covering a large area. In cognitive radio, spectrum sensing, spectrum allocation and reuse approaches, supported by different algorithms, help improve spectrum utilization. Traditional spectrum allocation techniques, such as fuzzy logic and harmony search, have given way to newer spectrum schemes that bring greater efficiency in spectrum utilization. Still, cognitive mesh networks suffer from collisions between secondary and primary users. To minimize the effect of collisions we introduce a gazer-based cognitive radio network (GCRN), which provides more freedom in the frequency sharing paradigm. The novel algorithm enables the network to adapt automatically to every change in the environment of the cluster in the cognitive radio network.
Keywords: Cognitive radio network; Gazer nodes; Spectrum Sensing; Resource Sharing; Control channel.
Intelligent Intrusion Detection Techniques for Secure Communications in Wireless Networks: A Survey
by K.P. Rama Prabha, N. Jeyanthi
Abstract: Communication is at the heart of day-to-day activity in the current world. Since people now rely on electronic devices to carry out their daily activities, electronic and wireless communication, together with the internet, plays a major role in providing a sophisticated life. Moreover, the number of internet users has grown steadily over the past two decades as people seek to make their lives easier through fast communication. In such a scenario, the number of intruders on the Internet is increasing dramatically. In this paper, we provide a survey of machine learning algorithms for developing intelligent intrusion detection systems, which are most useful for providing secure communication in wireless networks. Moreover, we compare the important intelligent intrusion detection systems based on their performance and suggest new ideas for improving the decision accuracy of current intelligent intrusion detection systems.
Keywords: Intrusion Detection System; Machine Learning Algorithms; Pre-processing; Classification; Wireless Networks.
Perlustration on existing techniques and applications in cloud computing for smart buildings using IoT
by D. Shiny Irene, T. Sethukarasi
Abstract: One of the emerging applications of the IoT and its devices is the design and construction of smart devices for smart buildings. Though one design goal of smart devices, anytime-anywhere presence, has been achieved, there is a pressing need to address other challenging design issues, namely security, interoperability and energy efficiency. Many emerging algorithms and techniques address these issues. This paper surveys the emerging and promising algorithms that can address these ever-changing issues in building smart cities using the IoT. Energy-efficient, environmentally friendly and secure smart devices can be designed and developed in future to build perseverant smarter cities.
Keywords: Internet of Things; Smart Buildings; Smart Energy and Security; Cloud Computing.
FUZZY RULE SELECTION USING ARTIFICIAL BEE COLONY OPTIMIZATION ALGORITHM
by Naga Ravikiran Desiraju, Dethe C G
Abstract: A wireless sensor network (WSN) embodies an innovative model of embedded systems with restricted computing ability, intercommunication, storage capacity and energy resources, applied to a wide range of applications in situations where constructing a network on conventional infrastructure is not feasible. Clustering in WSNs is a successful technique for reducing the energy consumption of sensor nodes. Fuzzy logic calculates the cluster head (CH) selection probability based on a node's earlier communication history. The set of rules applied to the fuzzified input is the fuzzy rule base, and the output of the inference engine is converted to a crisp output by defuzzification. The Artificial Bee Colony (ABC) optimization algorithm owes its inspiration to the foraging behaviour of honey bees; it is a comparatively recent optimization algorithm that has proven to be on par with classical bio-inspired protocols. In this work, the ABC optimization algorithm is suggested for selecting fuzzy rules. Rule selection methods combine different rules from the fuzzy rule set to reduce the number of rules while maintaining system performance: rules that decrease the performance of the system are removed, yielding a fuzzy rule set with improved performance.
Keywords: Wireless sensor networks (WSN); Clustering; Artificial Bee Colony (ABC); fuzzy rule selection.
Image Encryption Techniques for Data Transmission in Networks: A Survey
by JAYANTHI RAMASAMY, John Singh K
Abstract: Today, the rapid growth of communication technologies such as the internet, satellite and ground communications, and mobile networks has created the need to protect important information, whether personal or public, and the associated data against attackers. In this scenario, the privacy, integrity, confidentiality and authenticity of images have become significant issues for image storage and communication. Encryption is the best way to maintain the safety of transmitted data, transforming the information into an unintelligible form. In the past, various encryption methods have been proposed and applied to protect trustworthy images from unauthorized users. This study discusses, analyses and identifies the issues with previous encryption methods, discusses the various encryption schemes and reviews the related work for each. Finally, it discusses the future purpose of image encryption techniques.
Keywords: Image encryption; steganography; cryptography; color image encryption; image quality measure; security analysis; cryptanalysis.
DETECTING NEAR-DUPLICATE IMAGES USING SEGMENTED MINHASH ALGORITHM
by S. Thaiyalnayaki, J. Sasikala, R. Ponraj
Abstract: Search engines play an important role in connecting what users are thinking of with visual images. Digital images are easy to manipulate and modify with powerful image processing tools, but matching slightly altered copies to their originals, termed near-duplicate image detection, remains a challenging task. Web image search results nowadays contain a significant portion of near duplicates, with images varying in size and resolution; since these refer to the same or similar images, most search engines group them in their result pages. The definition of a near-duplicate image varies depending on what resolution and geometric variations are deemed acceptable. Near-duplicate (ND) image detection has recently become a timely issue, regarded as a powerful tool for various emerging applications: copyright enforcement, news topic tracking, and image and video search are among the tasks enabled by identifying near-duplicate images. In this paper, a method is proposed for indexing near-duplicate images using a segmented minhash algorithm. First, image enhancement is performed on the user's query image and features are extracted; SURF (Speeded Up Robust Features) is used to extract the local invariant features of each web image. We then introduce a new algorithm, called segmented minhash, which is used to calculate the similarity among the feature-extracted images. Finally, near-duplicate and exact-duplicate images are indexed based on the user query, using Locality Sensitive Hashing (LSH). We demonstrate that the proposed approach is extremely effective for collections of web images.
Keywords: Indexing; near-duplicates; near-duplicate detection; Image Enhancement.
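Plain (unsegmented) MinHash, on which the proposed segmented variant builds, estimates the Jaccard similarity of two feature sets from short signatures. The integer features standing in for quantized SURF descriptors, and the signature length, are illustrative assumptions:

```python
import random

def minhash_signature(features, n_hashes=128, seed=0):
    """MinHash signature of a feature set. Each salted hash defines a
    random ordering; the signature stores the minimum per ordering."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(n_hashes)]
    return [min(hash((salt, f)) for f in features) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = set(range(0, 100))
b = set(range(10, 110))   # true Jaccard = 90 / 110 ≈ 0.82 (near duplicates)
est = estimated_jaccard(minhash_signature(a), minhash_signature(b))
```

For indexing, LSH then bands the signature so that pairs agreeing on a whole band hash to the same bucket, avoiding all-pairs comparison across the web collection.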
SWARM DYNAMICS FOR ENHANCED ENERGY AWARE CLUSTERING
by Ramana Rao M V, Adilakshmi T
Abstract: Energy can be efficiently conserved in WSNs through the clustering of nodes. As in all shared-medium networks, a Medium Access Control (MAC) protocol enables the smooth functioning of the network; an important function of MAC is to prevent collisions between two nodes sending data simultaneously. Many MAC protocols have been developed for the smooth functioning of WSNs, including Berkeley Medium Access Control (BMAC), which utilizes low power listening and a suitable preamble for low power communication. The main drawbacks of BMAC are overhearing and the power wasted in long preambles. The aim of this work is to cluster the BMAC protocol using heuristic methods based on River Formation Dynamics (RFD) and Particle Swarm Optimization (PSO). The suggested protocol's performance is evaluated for Packet Delivery Ratio (PDR), end-to-end delay, hop count and jitter. The results show that the proposed River-PSO clustered BMAC performs better than BMAC with flooding and BMAC with cluster-based routing, under both static and varying node mobility.
Keywords: Wireless Sensor Networks (WSN); Cluster Head (CH); Medium Access Control (MAC); River Formation Dynamics (RFD); Particle Swarm Optimization (PSO).
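The PSO heuristic applied here to clustering can be illustrated on a generic minimization problem. The inertia and acceleration constants below are common textbook values, and the sphere objective is a stand-in for a cluster-quality cost, not the paper's actual fitness function:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=3):
    """Minimal global-best PSO sketch: each particle is pulled toward
    its own best position and the swarm's best position."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso_minimize(lambda p: sum(x * x for x in p), dim=3)
```

In the clustering setting, a particle would encode candidate cluster-head assignments and `f` would score them on energy, delay and delivery metrics.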
NEURAL NETWORK BASED VIRTUAL BACKBONE TREE CONSTRUCTION AND DYNAMIC SINK IMPLEMENTATION TO ENHANCE THE LIFETIME OF THE NETWORK AND MINIMIZE THE ENERGY CONSUMPTION
by Vimal Kumar Stephen K, Mathivanan V
Abstract: In the context of ongoing technological development, the primary objective of this research is to retain the energy level of sensor nodes over a long period in a wireless sensor network, since ensuring a negligible energy drop leads to a long network lifetime. A secure group key management technique is imposed to solve security problems such as authentication, confidentiality and scalability. A cluster key and a master key are used exclusively within the network to protect the sensed information while communication between nodes takes place. Static and movable mobile sinks are deployed to enhance the lifetimes of the sensors in the network. Initially, the static mobile sinks act as a trusted third party for computing and distributing keys between sensor nodes and clusters. Movable sinks are then used to receive sensed data from a sensor wherever it is located, which avoids the unnecessary step of frequently choosing a new cluster head. Energy is retained because the trusted third-party sink performs all the computations of the cluster head; the reduced computation in the cluster head thereby increases the lifetime of that cluster. Experimental outcomes show that the suggested technique produces better results than related studies.
Keywords: Key Generation; Cluster key; Master key.
Severity of defect: An optimized prediction
by Kiran Kumar Reddi, Achuta Rao S. V.
Abstract: To assure the quality of software, an important activity is performed, namely Software Defect Prediction (SDP). Historical databases are used to detect software defects using different machine learning techniques, which increases the potential for a positive outcome. Without such prediction, testing becomes expensive, quality suffers and the product becomes unreliable for use. A bug report illustrates the severity of a defective code. Resource allocation for testing and other planning activities is done based on defect severity assessment. This paper classifies the severity of defects by using a method based on an optimized Neural Network (NN). The method is based on the Shuffled Frog Leaping algorithm, and the experimental outputs reveal that it can do better than a Levenberg-Marquardt based NN system (LM-NN).
Keywords: Software defect prediction (SDP); Severity; Neural Network; Levenberg Marquardt (LM); Shuffled Frog; fuzzy classifier.
High-level optimized systems design using hardware-software partitioning
by Lilia Kechiche, Lamjed Touil, Bouraoui Ouni
Abstract: Embedded systems have a wide range of uses and have become essential parts of today's life. A typical embedded system consists of application-specific hardware and programmable software. The Hardware-Software (HW/SW) partitioning problem plays a crucial role in embedded systems design, as it allows the proposition of an optimized system under predefined constraints: it decides which tasks should be mapped to software and which to hardware. In this paper, a heuristic algorithm, hybrid bee-colony optimization for multiple-choice HW/SW partitioning, is proposed with the objective of minimizing power consumption and execution time while meeting an area constraint. The heuristic algorithm is developed to generate an approximate solution within an acceptable delay. The Virtex 5 is chosen as the target platform. Simulation results are compared with existing works and show rapid generation of a near-optimal solution close to the exact one.
Keywords: hardware-software partitioning; heuristic algorithm; bee-colony optimization; SOPC.
FEATURE EXTRACTION USING CMIM FOR SENTIMENT ANALYSIS
by Madhusudhanan Baskaran, Chitra S, Anbuchelian S
Abstract: Recently, a lot of attention has been paid to the domain of sentiment analysis (SA), with experts acknowledging both the scientific challenges and the possible applications of processing subjective language. SA is the computational analysis of opinions or sentiments conveyed in a body of text. The aim of SA is to detect subjective data present in several sources and figure out the attitude of the author regarding the topic. In the current study, feature extraction is carried out using term frequency / inverse document frequency (TF-IDF) and feature selection through Conditional Mutual Information Maximization (CMIM). Feature classification is done through LogitBoost, CHAID and k-Nearest Neighbor classifiers. The experimental results of the classifiers were contrasted with one another.
Keywords: Sentiment Analysis; LogitBoost; CHAID; CMIM; k-Nearest Neighbor (kNN); Term frequency / Inverse document frequency; Stemming; Stop words.
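The TF-IDF weighting mentioned in the abstract above can be sketched in a few lines. The toy documents and tokenization are hypothetical, and the paper's own preprocessing (stemming, stop-word removal) is omitted:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the raw term count normalized by document length; IDF is
    log(N / df), where N is the number of documents and df the number
    of documents containing the term.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))            # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["good", "movie"], ["bad", "movie"], ["good", "plot", "good", "cast"]]
w = tf_idf(docs)
# "bad" occurs in only one of the three documents, so it outweighs
# "movie", which occurs in two
```

Terms appearing in every document receive zero weight, which is exactly the behaviour that makes TF-IDF useful for discarding uninformative words before a selection step such as CMIM.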
A BAYESIAN APPROACH FOR BRAIN COMPUTER INTERFACE USING FEATURE FUSION TECHNIQUES
by Aswin Seshadri K, Thulasi Bai V
Abstract: In the recent past, many laboratories have explored the prospects of communication through cerebral activity for patients with neuromuscular disorders. A Brain-Computer Interface (BCI) enables control of devices or communication via brain activity without using muscles. It has been successfully used in scientific and therapeutic applications and helps improve the patient's quality of life. Electroencephalography (EEG) recorded from a person's scalp is used for controlling the BCI. EEG signal analysis and classification is one of the prominent research areas in the field of BCI. The major challenges of BCI are the low signal-to-noise ratio of neural signals and the need for robust extraction of feature sets from the brain signals and their classification. In this work, we review data fusion techniques for EEG-based BCI along with Bayesian methods for BCI. This paper provides a comparison of three feature extraction techniques: Laplacian, Kalman and fused Laplacian-Kalman. The features obtained were classified using a Naive Bayes classifier. Source identification and spatial noise reduction are achieved through the surface Laplacian. The two functions of the surface Laplacian are associated with prediction accuracy and signal orthogonality in BCI.
Keywords: Brain–Computer Interface (BCI); Feature Extraction; Laplacian; Kalman Filter and Naïve Bayes Classifier.
DESIGN AND FABRICATION OF AN IMPROVED GPS ANTI JAMMING ARRAY ANTENNA
by Thiyagarajan Venkatesh
Abstract: Global Positioning System (GPS) satellites produce low-power signals that travel great distances to reach the receiver. To negate a GPS system, an adversary needs only to generate a jamming signal with enough power and a suitable temporal or spectral signature to deny the use of GPS throughout a given area. The first system developed to increase GPS anti-jam capability for users on the ground or in the air was the controlled reception pattern antenna. This device consists of an array of antenna elements, all connected to an electronics box that controls the phase or gain, or both, and combines them to give a single output. From both military and civilian perspectives, it is important to establish an adequate anti-jamming capability for GPS systems and to ensure availability of this asset in all environments. This was recognized by the military and resulted in the development of several mitigation techniques in the time domain, the time-frequency domain, Adaptive Antenna Arrays (AAA) and PC-based software-defined radio concepts. In this study, a circular geometry of five patch antennas operating at L2 = 1.227 GHz is designed and fabricated. A phase-only nulling technique based on hybrid optimization is proposed and evaluated using IE3D software.
Keywords: Global Positioning System (GPS); anti-jam; Adaptive Antenna Arrays (AAA); Circular geometry; patch antennas; Phase only nulling; Artificial Bee Colony (ABC) algorithm; Cuckoo Search (CS).
Power Audit: An estimation model-based tool as a support for monitoring power consumption in a distributed network infrastructure
by Aziz Dahbi, Asmaa El Hannani, Abdelhak Aqqal, Abdelfatteh Haidine
Abstract: Understanding the details of power consumption in distributed IT infrastructure has become essential for making efficient power management decisions. Indeed, energy costs are an increasingly major factor in the Total Cost of Ownership (TCO) of IT equipment in both data centers and enterprise computing. However, measuring and monitoring the power consumption of systems in medium-scale to large-scale distributed infrastructures is often difficult due to the large and dispersed deployment of heterogeneous equipment such as personal computers (PCs), routers, switches, printers, etc. The various aspects discussed in this study are organized around: i) proposing an approach for measuring the power consumption of devices in a distributed infrastructure, especially computers as a first step, and ii) collecting the measurements on a monitoring server through the network, for supervisory purposes, using the Simple Network Management Protocol (SNMP). We have designed and developed software named "Power Audit" in support of the above aspects.
Keywords: IT equipment; SNMP protocol; distributed infrastructure; power management; power consumption.
Non-linear Channel Tracking of a High Mobility Wireless Communication System
by Sudheesh P, Jayakumar M
Abstract: Recently evolved wireless communication systems incorporate Multiple Input Multiple Output (MIMO) systems to overcome the effects of channel fading. Orthogonal Frequency Division Multiplexing (OFDM) is further used to overcome Inter-Symbol Interference (ISI) and ensure effective signal transmission. The channel parameters in wireless communication systems are generally non-linear. Channel estimation techniques for such systems include the Kalman Filter (KF), Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF): the Kalman filter is used for linear channel estimation, whereas the EKF and UKF are applicable to non-linear systems as well. The particle filter (PF) is a Sequential Monte Carlo (SMC) method that uses the Sequential Importance Sampling (SIS) technique to effectively track a non-linear system, and is an efficient tracking method able to deal with non-Gaussian and non-linear systems. In this paper, we estimate the channel parameters of a fast time-varying MIMO-OFDM system using a particle filter. The proposed scheme considers a first-order Auto-Regressive (AR) system model and a Rayleigh fading channel for mobile systems that incorporates the Doppler shift occurring in a mobile environment. The performance of the particle filter is compared with other estimation methods, namely the Kalman filter and the extended Kalman filter, by plotting the mean square error (MSE) as a function of the signal-to-noise ratio (SNR).
Keywords: Non-linear channel estimation; MIMO-OFDM system; Kalman Filter (KF); Extended Kalman Filter (EKF); Particle filter (PF).
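A minimal particle-filter sketch in the spirit of the abstract: a scalar AR(1) fading coefficient is tracked from noisy pilot observations using sequential importance sampling with resampling. All parameters (AR coefficient, noise variances, particle count) are illustrative, and the full MIMO-OFDM and Doppler model of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.01, 0.1          # AR(1) coefficient, process and obs. noise
n_steps, n_particles = 200, 500

h = 1.0                            # true channel coefficient
particles = rng.normal(1.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
true_h, est_h = [], []

for _ in range(n_steps):
    h = a * h + rng.normal(0.0, np.sqrt(q))      # channel evolves
    y = h + rng.normal(0.0, np.sqrt(r))          # pilot observation (x = 1)
    # SIS: propagate particles through the state model, reweight by likelihood
    particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
    weights *= np.exp(-((y - particles) ** 2) / (2 * r))
    weights /= weights.sum()
    true_h.append(h)
    est_h.append(float(np.dot(weights, particles)))
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

mse = float(np.mean((np.array(est_h) - np.array(true_h)) ** 2))
```

Because the state and observation models here happen to be linear-Gaussian, a Kalman filter would be optimal; the particle filter's advantage, as the abstract notes, appears once the dynamics or noise become non-linear or non-Gaussian.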
Securing Ad Hoc Networks using Energy Efficient and Distributed Trust based Intrusion Detection System
by Deepika Kukreja, S.K. Dhurandher, B.V.R. Reddy
Abstract: Mobile Ad Hoc Networks (MANETs) are subject to a broad variety of attacks. Black hole and gray hole attacks are security threats that weaken MANETs by inducing packet forwarding misbehavior. This paper proposes a method for the detection and isolation of malicious nodes and the selection of the most reliable path for routing data. An Intrusion Detection System (IDS) is utilized to catch the nodes exhibiting packet forwarding misbehavior. The monitoring scheme is appropriate for MANETs as it emphasizes energy reduction, is distributed in nature and is compliant with dynamic network topology. The proposed method is simulated using the network simulator NS2. Findings show that the proposed system is efficient in terms of Packet Delivery Ratio (PDR), routing packet overhead, end-to-end delay and energy management as compared to the Dynamic Source Routing (DSR) protocol and other protocols in this area. The protocol improves the PDR by 43.44% as compared to the DSR protocol in the presence of malicious nodes.
Keywords: Ad Hoc Networks; Dynamic Source Routing Protocol; Intrusion Detection System; Trust; Gray hole attack; Energy.
Contribution to Radio Resource Distribution approach in Wireless Cellular Software Defined Networking
by Fall Hachim, Ouadoudi Zytoune, Mohamed Yahyai
Abstract: We are currently witnessing huge wireless traffic demand on a limited bandwidth. This leads to the development of complex and power-hungry network technologies that are often harder to manage. Thus, some core network features such as Radio Resource Management (RRM) raise important issues such as scalability and energy efficiency. This paper addresses Radio Resource Distribution (RRD) algorithms for next-generation wireless cellular networks. We leverage Software Defined Network (SDN) benefits by proposing AoD (Algorithms on Demand), which aggregates several schedulers at the network controller. Based on Markov prediction, a real-time context data analysis selects the most suitable RRD scheme at the evolved Node B. This choice depends on cell status (load, interference, etc.), thanks to the device programmability feature of SDN. Moreover, AoD reduces power consumption by always optimizing the transmission rate. Simulations show that one can approach 5G (fifth generation) radio policies with AoD, with Quality of Experience and a low carbon footprint as benefits.
Keywords: Energy Efficiency; Markov Model Prediction; Openness; Radio Resource Management; Software-Defined Networking.
Special Issue on: New Trends for Security in Network Analytics and Internet of Things
Perplexed Bayes Classifier based Secure & Intelligent Approach for Aspect Level Sentiment Analysis
by Sumit Kumar Yadav, Devendra K. Tayal, Shiv Naresh Shivhare
Abstract: In this work, we use machine learning methods to classify a review document. We use two machine learning methods: the Naive Bayes classifier and the Perplexed Bayes classifier. First we briefly introduce the Naive Bayes classifier, its shortcomings, and the Perplexed Bayes classifier. Further, we train the classifiers using a small training set and use a test set containing reviews with dependency among their features. We then show how the Naive Bayes classifier fails to classify such reviews, and that the Perplexed Bayes classifier can be used to classify the given test set having dependency among its features.
Keywords: sentiment-analysis; machine-learning techniques; naïve bayes; perplexed bayes; aspect level sentiment analysis.
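For readers unfamiliar with the baseline, here is a minimal multinomial Naive Bayes classifier with Laplace smoothing. The tiny review set is hypothetical, and the Perplexed Bayes variant the paper builds on is not reproduced here:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial Naive Bayes training: count class priors and
    per-class word frequencies."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(model, words):
    """Pick the class maximizing log P(class) + sum log P(word|class),
    with add-one (Laplace) smoothing."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, cnt in class_counts.items():
        lp = math.log(cnt / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [(["great", "acting"], "pos"), (["loved", "it"], "pos"),
         (["boring", "plot"], "neg"), (["terrible", "acting"], "neg")]
model = train_nb(train)
```

The log-probability sum makes the independence assumption explicit: each word contributes its term separately, which is precisely the shortcoming the Perplexed Bayes classifier is designed to address for dependent features.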
A Novel Encryption Compression Scheme using Julia sets
by Kunti Mishra, Bhagwati Prasad
Abstract: The intent of the paper is to propose a novel fractal-based encryption and compression scheme using the logistic map and Julia sets. In our study of medical images, we obtain significant lossless compression and secure encryption of the image data. The proposed technique is expected to be useful for the transmission of various confidential image data relating to medical imaging, military and other multimedia applications.
Keywords: Logistic map; Encryption; Decryption; Compression; Julia sets.
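The logistic-map half of such a scheme can be sketched as a keystream generator: iterates of x → r·x·(1−x) are quantized to bytes and XOR-ed with the data. The seed and parameter values below are illustrative, and the Julia-set stage and the compression step of the proposed scheme are not shown:

```python
def logistic_keystream(x0, r, n):
    """Generate n key bytes by iterating the logistic map
    x -> r*x*(1-x) and quantizing each iterate to one byte."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return bytes(ks)

def xor_crypt(data, x0=0.3141, r=3.9999):
    """XOR data with the chaotic keystream; applying it twice with the
    same seed and parameter decrypts."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"confidential scan"
cipher = xor_crypt(plain)
```

For r close to 4 the map is in its chaotic regime, so a tiny change in the seed x0 yields a completely different keystream; the (x0, r) pair plays the role of the secret key.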
An Efficient Crypto-compression Scheme for Medical Images by Selective Encryption using DCT
by Med Karim Abdmouleh, Hedi Amri, Ali Khalfallah, Med Salim Bouhlel
Abstract: Nowadays, modern communication inevitably uses computer networks. The images transmitted across these networks are special because of their large amount of information. The use of information technology in the medical field has generated many applications (especially telemedicine) where the exchange of medical information remains the foundation of their success. The transmission of these images raises a large number of unresolved problems. The efficiency of a transmission network depends, on the one hand, on the degree of security and, on the other hand, on the times of transmission and archiving. These requirements can be satisfied by encryption and compression. This work presents a method of partial or selective encryption for medical images. It is based on the encryption of some quantized Discrete Cosine Transform (DCT) coefficients in the low and high frequencies. The results of several experiments show that the proposed scheme provides a significant reduction of the processing time during encryption and decryption, without compromising the high compression rate of the compression algorithm.
Keywords: Crypto-compression; Medical image; Telemedicine; DCT; RSA.
Hybrid Approach to Enhance Contrast of Image for Forensic Investigation Using Segmented Histogram
by Sachin Dube, Kavita Sharma
Abstract: Digital images can be used in the detection of various crimes, ranging from active to passive attack applications. To suit a particular attack application, an image needs to be enhanced, and it should have good quality in general for forensic investigation. For normal investigative use, a vibrant, vivid and eye-pleasing image is desired. In this paper, various existing methods and their drawbacks are examined. This information is then used to develop an approach for contained enhancement that retains the natural look of an image while enhancing its quality to make it usable as evidence. The existence of a spike in the histogram can result in over-enhancement of the image; a spike is created when a large number of pixels share a small set of intensities. The ten most commonly used standard images are used for performance comparison. The proposed method outperforms the compared methods in terms of PSNR and AMBE values, while keeping entropy and standard deviation almost identical to those of the input image.
Keywords: Image Forensic; Segmented Histogram; Image Contrast Enhancement.
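The over-enhancement problem described above can be illustrated with a clipped-histogram equalization sketch: bins holding more than a fixed fraction of the pixels are clipped before the cumulative distribution is built, so a spike cannot dominate the mapping. The clip fraction and the toy image are hypothetical, and the paper's segmented-histogram method itself is not reproduced:

```python
import numpy as np

def clipped_equalize(img, clip_frac=0.05):
    """Histogram equalization with spike clipping: any bin above
    clip_frac of the total pixel count is clipped before the CDF is
    built, limiting the over-enhancement that spikes cause."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = clip_frac * img.size
    hist = np.minimum(hist, limit)                 # clip the spikes
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)     # intensity mapping
    return lut[img]

img = np.full((8, 8), 10, dtype=np.uint8)   # a flat region creates a spike
img[0, :4] = np.arange(4, dtype=np.uint8)   # plus a few darker pixels
out = clipped_equalize(img)
```

Without clipping, the 60-pixel spike at intensity 10 would absorb almost the entire output range; with clipping, the mapping still stretches contrast but stays bounded.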
Use of A Light Weight Secure Image Encryption Scheme Based on Chaos & DNA Computing for Encrypted Audio Watermarking
by Bhaskar Mondal, Tarni Mandal, Tanupriya Choudhury
Abstract: Watermarking is one of the best ways to authenticate the ownership or the source of data by embedding copyright information into an image, audio or video. At the same time, to hide the source of the data from unintended users, the watermark needs to be encrypted before embedding. This paper presents an effective use of an encryption algorithm in audio watermarking. The watermark data is initially encrypted with "A Light Weight Secure Image Encryption Scheme Based on Chaos and DNA Computing". In the second part, the encrypted data is embedded into an audio signal using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT). The test results are promising, and the watermarked audio does not lose its quality.
Keywords: Audio watermarking; cryptography; deoxyribonucleic acid (DNA); watermark encryption.
Malware Intelligence: Beyond Malware Analysis
by Ekta Gandotra, Divya Bansal, Sanjeev Sofat
Abstract: A large number of malware samples are available online, but little research has attempted to thoroughly analyze them for insights or intelligence about their behavioral trends, which can further be used to issue early warnings about future threats. In this paper, we have performed an in-depth analysis of about 0.1 million historical malware specimens in a sandbox environment to generate their attributes and behavior. Afterwards, intelligent information is mined using statistical analysis to study their behavioral trends and capabilities. The information so obtained can help to gain insight into the future measures that malware authors can use to design their programs. The paper also highlights the challenges evolving out of these trends, which provide future research directions for malware analysts and security researchers. Further, the insights generated can be shared with security experts, CERTs (Computer Emergency Response Teams) or other stakeholders so that they can issue preventive measures for future threats, or at least minimize the risks posed by them. Furthermore, this type of analysis helps the research community in selecting the parameters/factors for building faster and improved techniques for detecting unknown malicious programs.
Keywords: Malware analysis; statistical analysis; security intelligence; behavioral trends; prediction.
Trust evaluation of websites: A comprehensive study
by Himani Singal, Shruti Kohli
Abstract: People rely heavily on the internet to fulfill even the smallest of their needs. According to a survey, 41% of time spent on the web is spent finding information through search engines or reading information. This is mainly due to easily accessible, cost-effective information of perceived high value. But this perceived high-value information can prove fatal if consumed without any authoritative checks, especially when related to issues like health. Some template is needed to measure the trustworthiness of such information. This paper explores a novel approach to quantify trust in such information-led websites. Analytical data is collected for various informational websites using similarweb.com, and trust is modeled for these websites using aggregate human behavior. Analytical data is believed to capture the actual behavior of every visitor visiting a website for information, thus making the study reliable and dependable. Results have been compared with other accepted studies and have been found encouraging.
Keywords: Content Trust; Health Information; Medical Trust; Online Interaction; User Satisfaction; Web Trust.
An Epidemic Model for Security and Performance of Wireless Sensor Networks
by Rudra Pratap Ojha, Kavita Sharma, Pramod Kumar Srivastava, Goutam Sanyal
Abstract: Wireless sensor networks have inherent constraints that make security a crucial issue. Worm transmission starts from a single node and spreads through the entire network via wireless communication; this process can lead to the failure of the whole wireless sensor network. The proposed mathematical model is based on epidemic theory, in which different classes of nodes are considered in order to examine the effect of each class on the network and to develop a control mechanism that prevents worm transmission in sensor networks. We discuss the role of the communication radius in the stability of the network and examine the proposed model using the stability theory of differential equations. We determine the basic reproduction number and relate it to the communication radius, analyze how the proposed model improves the efficiency of the network in terms of stability and energy efficiency, and validate the proposed model through extensive simulation results.
Keywords: Epidemic model; Wireless Sensor Network; Equilibrium; Stability; Communication Radius; Basic reproduction number.
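The role of the basic reproduction number can be illustrated with a minimal SIR-style sketch (the paper's actual model includes additional node classes and the communication radius, which are omitted here). For transmission rate β and recovery rate γ, R0 = β/γ, and the worm spreads through the network when R0 > 1; all parameter values below are illustrative:

```python
# Euler integration of a simple SIR-style worm model on a sensor network:
# S susceptible, I infected, R recovered node fractions.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, steps=2000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptible nodes become infected
        new_rec = gamma * i * dt      # infected nodes recover (are patched)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

r0 = 0.3 / 0.1                        # basic reproduction number R0 = beta/gamma
s, i, r = simulate_sir()
# With R0 = 3 > 1 the worm spreads: far fewer susceptible nodes remain
```

Lowering the communication radius effectively lowers β, and once R0 falls below 1 the worm-free equilibrium becomes stable, which is the control lever the abstract refers to.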
Secure Handoff Technique with Reduced Authentication Delay in Wireless Mesh Network
by Geetanjali Rathee, Hemraj Saini
Abstract: The aim of this manuscript is to propose a secure handoff procedure by generating tickets for mesh clients, which are divided into different zones of mesh routers according to their communication range. An authentication server scans the entire network after a specific interval of time and is responsible for generating and updating the corresponding tickets of clients according to their zonal router's range. Whenever a mesh client enters the range of another domain, the roaming client has to prove its authenticity to the corresponding zonal router in order to access services from foreign mesh routers. Each mesh router stores the tickets of its zonal mesh clients issued by the authentication server and validates a roaming client by matching the ticket. The proposed mechanism reduces storage overhead and security threats at the mesh client, as all the tickets are stored in the authentication server database and are issued upon request. The proposed technique is validated over authentication delay and different probabilistic scenarios of authentication, and is proved legitimate by discussing an empirical study against the reported literature.
Keywords: Wireless Mesh Network; secure handoff; authentication; security threats; network delay; storage overhead.
A Secure, Fast Insert and Efficient Search Order Preserving Encryption Scheme for Outsourced Databases
by K. Srinivasa Reddy, Ramachandram S
Abstract: Order Preserving Encryption (OPE) schemes have been studied to a great extent in the cryptography literature because of their potential application to database design. A scheme called mutable order preserving encoding (mOPE) was the first to achieve IND-OCPA (Indistinguishability under Ordered Chosen Plaintext Attack) security. However, even the mOPE scheme potentially leaks the distribution of repeated ciphertexts and is less efficient. In this paper, a new scheme is introduced, called Secure and Cost-efficient Order Preserving Encryption (SCOPE), which is considerably more secure and efficient than the mOPE scheme. A new, stronger security notion called Indistinguishability under Ordered Chosen Repeated Plaintext Distribution Attack (IND-OCRPDA) is proposed, and we show that the SCOPE scheme is IND-OCRPDA secure. Finally, the experimental results show that SCOPE achieves good performance in the context of an encrypted database and has a reasonable overhead which is 3.5
Keywords: efficiency; functionality; order preserving encryption; trusted proxy; security.
Security Model against worms attack in Wireless Sensor Network
by Rudra Pratap Ojha, Pramod Kumar Srivastava, Goutam Sanyal
Abstract: The wireless sensor network is an innovative category of communication network which has earned universal attention due to its great potential for application in various areas. It is an insecure system due to worm attacks. In order to efficaciously defend wireless sensor networks against worms, we have proposed an epidemic model with two latent periods and vaccination. We have formulated the ODEs of the model, studied the dynamic behavior of worm propagation, and designed a model to secure the system from worm attack. The model has been simulated in MATLAB. In this proposed study, we have determined the basic reproduction number for the study of the dynamic performance of worms in the wireless sensor network. The global stability of the worm-free equilibrium has been established using a Lyapunov function, while the simulation results helped validate the theoretical analysis.
Keywords: Security; Epidemic model; Wireless Sensor Network; Latent period; Basic reproduction number.
Untraceable privacy-preserving authentication protocol for RFID tag using salted hash algorithm
by Pinaki Ghosh, Mahesh TR
Abstract: Radio Frequency Identification (RFID) has now become a core technology in the Internet of Things (IoT) and has gained the attention of industry and academia in tremendous ways. Due to their openness, RFID tags suffer from potential security threats. One of the major threats is privacy leakage during the authentication process. A strong Privacy Preserving Authentication (PPA) protocol is always needed in such a system. In this paper we propose a salted secure hash based mutual authentication protocol as a solution. The proposed protocol is designed to send a random response from the tag to the server without disclosing its identity information to intermediate entities such as readers. It also updates secret keys without transmitting the secret values.
Keywords: RFID; privacy; untraceability; tag authentication; salted hash; keyed hash algorithm; mutual authentication.
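A salted challenge-response of the kind described can be sketched as follows. The message format, field order, and database layout are assumptions for illustration, not the protocol's actual specification:

```python
import hashlib
import hmac
import os

def tag_response(tag_id: bytes, key: bytes, server_nonce: bytes):
    """Tag side: answer the server's challenge with a salted hash, so the
    plain tag ID never crosses the air interface; a fresh random salt
    makes successive responses unlinkable (untraceability)."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + server_nonce + key + tag_id).digest()
    return salt, digest

def server_verify(tag_db, salt, server_nonce, digest):
    """Server side: search the back-end database for the tag whose
    recomputed digest matches the received one."""
    for tag_id, key in tag_db:
        expect = hashlib.sha256(salt + server_nonce + key + tag_id).digest()
        if hmac.compare_digest(expect, digest):
            return tag_id
    return None

tag_db = [(b"tag-001", b"k1"), (b"tag-002", b"k2")]
nonce = os.urandom(16)
salt, resp = tag_response(b"tag-002", b"k2", nonce)
found = server_verify(tag_db, salt, nonce, resp)
```

The reader merely relays (salt, digest); only the server, which holds the keys, can resolve the identity, which is the privacy property the abstract describes.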
Comparison of different RSA Variants
by Seema Verma, Manoj Kumar
Abstract: RSA is the first public key algorithm used for encryption and decryption. Its simplicity lies in its structure, and its security lies in the difficulty of factoring a very large composite integer. It is still popular even thirty-nine years after its origin. In this long journey, RSA has been studied many times and many security loopholes have been found. To remove these loopholes, researchers have designed many variants of RSA. This work presents a study of the different RSA variants that are popular in the literature, including an analysis of the performance and security of each variant.
Keywords: RSA; Public key; Cryptography; Encryption; Complexity; Security; Comparison.
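For reference, the textbook RSA core shared by all the variants can be sketched with the classic toy parameters (real deployments use 2048-bit moduli and randomized padding such as OAEP):

```python
# Textbook RSA on toy primes (illustration only).
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

m = 65                         # message encoded as an integer < n
c = pow(m, e, n)               # encryption: c = m^e mod n
recovered = pow(c, d, n)       # decryption: m = c^d mod n
```

The variants surveyed in the paper change pieces of this template, e.g. the key generation, the exponent sizes, or the modulus structure, to trade off speed against resistance to the known attacks.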
GASER: Genetic Algorithm based Secure and Energy aware Routing protocol for Sparse Mobile Ad Hoc Networks
by Deepika Kukreja, Deepak Kumar Sharma, S.K. Dhurandher, B. V. R. Reddy
Abstract: Sparse Mobile Ad hoc Networks are characterized by sparse node deployment and longer network partitions. Nodes in an ad hoc network are mobile, have limited energy and are deployed in areas where connections between the nodes may be inconsistent. In a number of scenarios it is likely that the route between a source-destination pair does not exist for long durations of time. Routing in such a network, where node deployment is sparse and connections between nodes occur infrequently, is a challenging task. In this paper, a nature-inspired Genetic Algorithm based Secure and Energy aware Routing (GASER) protocol for Sparse Mobile Ad Hoc Networks is proposed. Black hole and gray hole attacks are two security threats that weaken Mobile Ad Hoc Networks (MANETs) by inducing packet forwarding misbehavior in the network. By incorporating a genetic algorithm with other methods, the GASER protocol selects the best path for routing packets between source and destination, such that the selected path is the shortest, its nodes have the highest message forwarding probability among the nodes of the network, and they have enough energy to receive and then forward messages. GASER avoids nodes inducing gray hole/black hole attacks, as it selects the next hop with a higher message forwarding probability, thus making the routing protocol secure. Simulation results prove that GASER outperforms PROPHET, Epidemic, and Spray and Wait in terms of packet delivery ratio, average residual energy, overhead ratio and number of deceased nodes.
Keywords: Sparse Mobile Ad Hoc Networks; Genetic algorithm; Black hole attack; Gray hole attack; Energy aware routing; Secure routing.
Special Issue on: ISTA'16 Metaheuristic Techniques and Applications
A Novel Self-Organization Model for Improving the Performance of Permutation Coded Genetic Algorithm
by Dinesh Karunanidy
Abstract: Genetic Algorithms (GAs) are extremely powerful among evolutionary methods and are used in a variety of fields for solving complex problems. A variety of assistive techniques have been proposed to improve the performance of GAs with respect to the nature of the application, and self-organization is one such model, aimed at improving the performance of GAs by all means. Self-organization models enable systems to acquire and maintain their structure by themselves, without any external control. There is strong evidence that self-organization brings great benefit in solving complex problems with competent efficiency when used in conjunction with classical GAs, and the combination of a self-organization model with a GA has better exploration power. Accordingly, the work reported in this paper proposes an efficient pattern-based self-organization model for improving the performance of the GA on combinatorial optimization problems. The competency of the proposed model is demonstrated by means of a set of well-defined experiments over selected benchmark Travelling Salesman Problem (TSP) instances. The assessments proved the efficiency of the technique in terms of a set of generic performance criteria such as convergence rate, convergence time, error rate, nearest neighbor ratio and distinct individuals.
Keywords: Self-organization technique; Genetic algorithm; population seeding technique; traveling salesman problem; Pattern Replacement; Combinatorial Problem.
Hybrid Enhanced Shuffled Bat Algorithm (HESB) for Data Clustering
by Reshu Chaudhary, Hema Banati
Abstract: Enhanced Shuffled Bat algorithm (EShBAT) is a recently proposed variant of bat algorithm (BA) which has been successfully applied for numerical optimization. To leverage the optimization capabilities of EShBAT for clustering, HESB, a hybrid between EShBAT, K-Medoids and K-Means is proposed in this paper. EShBAT works by dividing the population of bats into groups called memeplexes, each of which evolve independently according to BA. HESB improves on that by employing K-Medoids and K-Means to generate a rich starting population for EShBAT. It also refines the memeplex best solutions at the end of every generation by employing K-Means algorithm. Both these modifications combined together produce an efficient clustering algorithm. HESB is compared to BA, EShBAT, K-Means and K-Medoids, over ten real-life datasets. The results demonstrate the superiority of HESB.
Keywords: Enhanced shuffled bat algorithm; k-means; k-medoids; data clustering.
Special Issue on: Advances in Information Security, Privacy and Forensics of Multimedia Big Data in the Internet of Things
Fingerprinting Violating Machines with In-Memory Protocol Artifacts
by Mohammed Al-Saleh, Yaser Jararweh
Abstract: Cyber crime has increased as a side effect of the dramatic growth in Internet deployment. Identifying the machines that are responsible for crimes is a vital step in an attack investigation, and tracking the attacker's IP address to its origin is indispensable. However, apart from finding the attacker's (possible) machine, it is essential to provide supportive proofs that bind the attack to the attacker's machine, rather than depending solely on the attacker's IP address, which can be dynamic. This paper proposes to implant such supportive proofs by utilizing the internals of three well-known Internet protocols: IP, TCP, and ICMP. Our results show that there can be potential proofs in the structures of these protocols. In addition, because a violator is unaware of (and has no control over) the involved protocols, the investigation process is empowered with stealth. To the best of our knowledge, we are the first to utilize protocol remnants in fingerprinting violating machines.
Keywords: Fingerprinting; violating machine; protocol artifacts.
Digital Video Forensics: A Comprehensive Survey
by Mohammad Alsmirat, Wala'a Al-Sarayrah, Yaser Jararweh, Morad Etier, Ruba Al-Hussien
Abstract: The widespread availability and advancement of digital devices and tools have simplified the manipulation of digital multimedia content. Nowadays, digital videos and photos are not trusted as reliable evidence in major court cases. Such concerns result from the existence of various techniques that can easily be used to change the content of this evidence. These facts raise the need for new techniques to ensure the authenticity of digital multimedia content. Usually, multimedia evidence is obtained either by downloading it from the Internet or from digital storage devices such as disks and tapes. In both cases, methods are needed to guarantee the originality and authenticity of the digital evidence. Experts in digital signal processing have conducted a large body of research to find new strategies, using digital forensics, to verify digital evidence and trace its origins. The main engine of such techniques is the assumption that the manipulation of evidence cannot be fully reversed and leaves traces called "footprints". These effects can be analyzed to determine whether the evidence has been altered. The aim of this paper is to collect and provide definitions of the main concepts related to media forensics. It also gives an overview of the different techniques used in media forensics, concentrating on video forensics, and classifies the work done in the field according to the main technique used in each proposed solution.
Keywords: video forensic; image forensic; digital forensic; video compression; double compression; video manipulation.
Botnet Detection based on DNS Traffic Similarity
by Ahmad Manasrah, Walaa Bani Domi, Nur Nadiyah Suppiah
Abstract: Despite the efforts in combating the threat of botnets, they continue to grow in size and in the sophistication of their evasion techniques. The bot software is written once and spreads to machines all over the world. It is preconfigured to locate the malicious domain name (if it is not static) through the DNS system, like any legitimate host. In this paper, a scalable approach for detecting a group of bot hosts from their DNS traffic is proposed. The approach leverages a signal processing technique, power spectral density (PSD) analysis, to discover the significant frequencies (i.e. periods) of the botnet's periodic DNS queries. It processes only the timing information of the generated DNS queries, regardless of the number of queries or domain names. Measuring the level of similarity between hosts exhibiting periodic DNS queries then reveals the group of bot hosts in the monitored network. Finally, we evaluated the proposed approach using multiple DNS traces collected from different sources, along with a real-world botnet deployed in a controlled environment. The evaluation results show that the proposed approach detects groups of bot hosts exhibiting similar periodic DNS patterns with high accuracy and minimal false positive rates.
Keywords: Botnet detection; Traffic similarity; Traffic anomaly; Group Activity; Malware activity; Traffic behavior analysis; Network Intrusion Detection.
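The PSD step the abstract describes can be illustrated as follows: bin the DNS query timestamps into a counting signal, take its power spectrum, and read off the strongest non-DC frequency as the bots' query period. This is a toy sketch; the one-second bin width and the mean-removal step are assumptions, not the paper's parameters.

```python
# Toy sketch of PSD-based periodicity detection over DNS query timestamps.
import numpy as np

def dominant_period(event_times, bin_width=1.0):
    """Bin timestamps, compute the power spectral density via the FFT,
    and return the period (seconds) of the strongest non-DC peak."""
    t = np.asarray(event_times, dtype=float)
    n_bins = int(np.ceil(t.max() / bin_width)) + 1
    counts, _ = np.histogram(t, bins=n_bins, range=(0, n_bins * bin_width))
    centered = counts - counts.mean()          # remove the DC component
    psd = np.abs(np.fft.rfft(centered)) ** 2   # power spectral density
    freqs = np.fft.rfftfreq(n_bins, d=bin_width)
    peak = np.argmax(psd[1:]) + 1              # skip the zero frequency
    return 1.0 / freqs[peak]
```

Hosts whose dominant periods agree would then be grouped as candidate bots, which is the similarity step the abstract mentions.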
Enhancement of 3-D Playfair Algorithm using dual key
by Arnab Kumar Das, Nabanita Das
Abstract: The Playfair cipher is one of the well-known polyalphabetic ciphers. In this paper we present a new approach for the secure transmission of a message using a modified version of the Playfair cipher combined with an XOR operation and a dual key. The technique uses three functions: one generates the matrix, and the other two perform encryption and decryption. The proposed extended 3D Playfair cipher works with 256 (4x8x8) characters: 52 alphabetic characters (upper and lower case), 10 numerals and 194 commonly used special characters of the ASCII character set. We use the 3D version of the Playfair cipher while retaining the digraph concept. The restrictions of existing 2D Playfair ciphers and of 3D Playfair ciphers using 4x4x4 and 6x4x4 matrices are overcome in the proposed work, which can accommodate more characters than the existing 3D Playfair ciphers.
Keywords: playfair; cipher; polyalphabetic; encryption; decryption; ASCII.
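The 4x8x8 layout can be sketched by mapping each of the 256 characters to a (layer, row, column) coordinate, with digraphs encrypted by a column swap. The character ordering and the pairing rule below are illustrative assumptions, not the paper's exact tables.

```python
# Illustrative sketch of a 4x8x8 Playfair-style cube (ordering assumed:
# 52 letters, 10 digits, then the remaining character codes).
import string

letters = string.ascii_uppercase + string.ascii_lowercase + string.digits
specials = "".join(chr(c) for c in range(256) if chr(c) not in letters)
ALPHABET = (letters + specials)[:256]

def to_coords(ch):
    """Map a character to its (layer, row, column) position in the cube."""
    i = ALPHABET.index(ch)
    return i // 64, (i % 64) // 8, i % 8

def from_coords(layer, row, col):
    return ALPHABET[layer * 64 + row * 8 + col]

def encrypt_pair(a, b):
    """Toy digraph rule: exchange the column coordinates of the pair
    (a generalized Playfair 'rectangle' rule; assumed, not the paper's)."""
    la, ra, ca = to_coords(a)
    lb, rb, cb = to_coords(b)
    return from_coords(la, ra, cb), from_coords(lb, rb, ca)
```

Because swapping columns twice restores the originals, this toy pair rule is its own inverse.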
A Knowledgebase Insider Threat Mitigation Model in the Cloud: A Proactive Approach
by Qutaibah Althebyan, Yaser Jararweh, Qussai Yaseen, Rami Mohawesh
Abstract: The security of cloud computing is a major concern for both organizations and individuals. Organizations seek greater trust from individuals, while cloud users want assurance that their private data will be safe from disclosure, whether by outsiders or by (possibly malicious) insiders of the cloud (cloud agents). Hence, insider threats in cloud computing are a major issue that needs to be tackled and resolved. In this paper, we propose a proactive insider threat model using a knowledgebase approach: the model tries to detect, in advance, any deliberate deviation from the legal accesses an insider might perform, so that individuals' private data is protected and the cloud resources remain secure and consistent at all times. Knowledgebase models were used earlier to prevent insider threats at both the system level and the database level; this work extends them to cloud computing systems. The proposed model ensures in-advance mitigation, in the form of detection (and hence a chance of prevention) of possible insider breaches, by correlating the knowledge of system administrators who may grant undesired privileges to insiders of the underlying cloud data center. The model handles the insider threat at several levels of the cloud data center, the host level and the network level, where insiders are categorized into several levels of privilege according to their location within the data center. Simulation results show that the proposed model predicts malicious acts of insiders well, and that it does so with minimal performance overhead; in particular, the number of blocked insiders is reduced to a minimum.
Keywords: Insider; Proactive; Cloud Data Center; Knowledgebase; Prediction; Mitigation.
Special Issue on: Intelligence in Communication Systems
Distributed Genetic Algorithm for Lifetime Coverage Optimization in Wireless Sensor Networks
by Ali Kadhum IDREES, Wathiq Laftah Al-Yaseen
Abstract: The coverage problem represents a research challenge in designing energy-efficient Wireless Sensor Networks (WSNs), in which both the coverage ratio and energy saving must be considered. In this paper, a protocol called Distributed Genetic Algorithm for Lifetime Coverage Optimization (DiGALCO) is proposed to preserve coverage and enhance the lifetime of a WSN. The DiGALCO protocol is based on two steps. First, the sensing field is logically divided into smaller uniform subfields. Second, the protocol is implemented at each sensor node in each subfield. To achieve our goal, the proposed protocol combines three energy-efficient schemes: virtual network subdivision into subfields, distributed cluster head selection in each subfield, and sensor activity scheduling based on Genetic Algorithm (GA) optimization performed by each elected cluster head. The DiGALCO protocol operates in rounds. More precisely, a round consists of three phases: (i) discovery, (ii) cluster head selection, and (iii) GA decision and sensing. The decision process, which results in an activity scheduling vector, is carried out by a cluster head node executing the GA to pick out a set of sensors that stay active for monitoring during the current sensing round. Every set is constructed to guarantee coverage at a low energy cost, thereby improving the WSN lifetime. Experimental results obtained using the OMNeT++ network simulator show that, in comparison with other protocols, DiGALCO is capable of prolonging the lifetime of a WSN and gives enhanced coverage performance.
Keywords: Wireless Sensor Networks; Coverage; Network lifetime; Genetic Algorithm; Scheduling.
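The GA decision phase described above can be sketched as a search over activity bitstrings, where the fitness rewards covered points and penalizes the number of active sensors. This is a minimal stand-in; the fitness weight, elitist selection and one-bit mutation are illustrative assumptions, not DiGALCO's actual operators.

```python
# Minimal sketch of the cluster head's GA decision step (assumptions:
# coverage measured on sample points, weight w and GA operators illustrative).
import random

def fitness(active, coverage_sets, n_points, w=0.3):
    """Reward covered points, penalise the number of active sensors."""
    covered = set()
    for sensor, on in enumerate(active):
        if on:
            covered |= coverage_sets[sensor]
    return len(covered) / n_points - w * sum(active) / len(active)

def ga_schedule(coverage_sets, n_points, pop=20, gens=40, seed=0):
    """Evolve an activity scheduling vector (1 = sensor stays active)."""
    rng = random.Random(seed)
    n = len(coverage_sets)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, coverage_sets, n_points),
                        reverse=True)
        elite = population[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1    # one-bit mutation
            children.append(child)
        population = elite + children
    return max(population, key=lambda c: fitness(c, coverage_sets, n_points))
```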
Non-Dominated Sorting Particle Swarm Optimization (NSPSO) for Multi-Channel Cooperative Spectrum Sensing in Heterogeneous Green CRNs
by Senthil Kumar Babu, Ch.V.M.S.N.Pavan Kumar
Abstract: Exploiting white spaces in the radio spectrum requires fast, robust and accurate approaches for detecting spectrum availability. To overcome these difficulties, new approaches are introduced to identify white spaces in the radio spectrum. Such approaches are used particularly in Cognitive Radio Networks (CRNs), where nodes execute cooperative spectrum sensing (CSS) based on energy detection. This paper focuses on online algorithms for minimizing energy consumption, and a Non-dominated Sorting Particle Swarm Optimization (NSPSO) is employed to iteratively estimate the result. At the micro level, cluster formation proceeds in two tasks. The first is to choose Cluster Heads (CHs) with reliable reporting paths and reduced faults between the cluster members and the CHs. The second is to determine the optimal sensing attributes, such as sensing periods and detection thresholds, of every Secondary User (SU), so as to reduce the power consumption of the whole cluster subject to Primary User (PU) protection and spectrum-usage constraints. Employing a Poisson-Beta-binomial distribution, a new and general K-out-of-N voting rule is implemented for heterogeneous CRNs to allow the secondary users to have different detection performances. A convex optimization design is then applied to reduce the intra-cluster power cost by jointly acquiring the optimal sensing periods and thresholds of the feature detectors under the new voting rule. The simulation outcomes illustrate that combining the new CH selection and collaboration methodologies provides efficient performance in terms of energy efficiency and robustness against faults.
Keywords: Cooperative Spectrum Sensing (CSS); Non-dominated Sorting Particle Swarm Optimization (NSPSO); Poisson-Beta-binomial distribution; Cognitive Radio (CR); Clustering; Cluster Head (CH) selection; Energy; Heterogeneous Green Cognitive Radio Networks(HCRNs).
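The K-out-of-N voting rule can be illustrated for independent, heterogeneous secondary users: the fused detector declares the primary user present when at least K of the N users do. The per-user detection probabilities below are illustrative, and this brute-force enumeration (not the paper's Poisson-Beta-binomial machinery) is just the defining sum.

```python
# Toy sketch of hard-decision K-out-of-N fusion for heterogeneous SUs
# (assumption: independent local decisions; probabilities illustrative).
from itertools import combinations
from math import prod

def k_out_of_n_detection(p_detect, k):
    """Probability that at least k of the N secondary users, with
    per-user detection probabilities p_detect, report the PU present."""
    n = len(p_detect)
    total = 0.0
    for m in range(k, n + 1):
        for idx in combinations(range(n), m):
            total += prod(p_detect[i] if i in idx else 1 - p_detect[i]
                          for i in range(n))
    return total
```

Heterogeneity enters simply as unequal entries of `p_detect`; the optimization the abstract describes would then tune sensing periods and thresholds that determine those entries.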
Investigations on Scheduling Algorithms in LTE-Advanced Networks with Carrier Aggregation
by Shaffath Husssain Shakir S, Rajesh A
Abstract: The driving force for Long Term Evolution - Advanced (LTE-A) development was to provide high data rates in a cost-efficient way and to fulfill the requirements set by the International Telecommunication Union (ITU) for the Fourth Generation (4G). LTE-A is a fast-growing technology that supports a variety of applications such as audio/video conferencing, video streaming, Voice over LTE (VoLTE), Voice over Internet Protocol (VoIP), browsing and file transfer. To support multiple applications, an effective and efficient Radio Resource Management (RRM) procedure is required, which plays a major role in maximizing resource utilization. With Carrier Aggregation (CA) concepts included in the LTE-Advanced protocol, the complexity of RRM and data scheduling increases. The Third Generation Partnership Project (3GPP) does not define any specification for scheduling algorithms; hence they have become an area of special interest for vendors and service providers. In this paper, basic LTE-A concepts and a study of different downlink scheduling algorithms published in the literature are discussed, performance evaluations of the different algorithms are carried out, and the key issues to be considered in scheduling algorithm design are examined.
Keywords: LTE-Advanced; resource allocation; radio resource management; scheduling.
An Integrated and Secured Medical Data Framework for Effective Tele Health Applications
by Vallathan Govindu, Jayanthi K
Abstract: The rapid progression of healthcare technologies, systems and transmission strategies has made it feasible to acquire, distribute and manage data over medical devices, and improves conventional hospital information systems (HIS) to deliver effective healthcare services. When medical information is communicated through a wireless network, there is a high chance of the information being modified; before examining the patient, the physician has to check the integrity of the received medical image. A futuristic tele-healthcare framework is proposed here to ensure security, data integrity and quality while minimizing bandwidth requirements, offering complete healthcare services at reduced cost. The proposed framework integrates three modules: steganography, compression and encryption. Initially, Bhattacharya coefficient segmentation is applied to brain tumour images, which are segregated into Region of Interest (ROI) and Non-Region of Interest (NROI) regions. Patient information and the hash value of the ROI are embedded into the NROI using an improved data embedding algorithm, and the SPIHT encoding technique is then applied to the embedded image, representing the image at different scales and directions to accomplish a better compression ratio. The framework also validates the integrity of the ROI, ensures robustness of the data embedded in the NROI, and preserves the ROI perfectly for investigation. Finally, the whole image is encrypted with logistic map encryption to afford complete medical data security. Experimental outcomes demonstrate that the proposed framework offers robustness in terms of security, quality and reliability, which alleviates misdiagnosis at the physician's end in telemedicine applications.
Keywords: Bhattacharya coefficient segmentation; SHA-1; Contourlet Transform; SPIHT Encoding Technique; Improved EMD technique; Modified Chaotic Map Encryption.
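The logistic-map encryption stage mentioned in the abstract can be sketched as a chaotic stream cipher: iterating x = r*x*(1-x) yields a keystream that is XORed with the data, so decryption is the same operation with the same initial value. The parameter r = 3.99 and the byte quantization below are illustrative assumptions; the paper's exact chaotic-map variant may differ.

```python
# Minimal sketch of logistic-map stream encryption (assumed parameters).

def logistic_keystream(x0, n, r=3.99):
    """Iterate the logistic map x = r*x*(1-x), quantizing each state
    to one keystream byte."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def logistic_xor(data, x0=0.4567):
    """Encryption and decryption are the same XOR operation, keyed by
    the initial condition x0."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Sensitivity to x0 gives the scheme its key: even a tiny change in the initial value produces a completely different keystream.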
Development and Analysis of Downlink Scheduling Algorithm in LTE System with Imperfect Channel Quality Indicator
by S.Fouziya Sulthana, R. Nakkeeran
Abstract: Long Term Evolution (LTE) is a broadband technology introduced by the Third Generation Partnership Project (3GPP) to support a variety of multimedia services. Scheduling in LTE plays an important role in meeting system performance targets. In this paper, a new scheduling method is proposed that considers the achievable rate, based on the estimated channel condition of the user, in the priority metric calculation. A perfect Channel Quality Indicator (CQI) report at the scheduler is highly uncertain, owing to poor channel quality and the variation of the current channel condition from that of the received CQI report. Therefore, the channel condition of the user is ascertained from the imperfect CQI, where a Kalman filter is used to estimate the channel condition and effectively recover the correct CQI, improving system performance. The proposed scheduler provides better performance in terms of throughput, delay and Packet Loss Rate (PLR) when compared with the previously proposed Service Based Scheduler (SBS) method.
Keywords: LTE; resource allocation; scheduling; channel quality indicator; Kalman filter; throughput; delay; packet loss rate.
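The Kalman filtering idea can be illustrated with a scalar filter over the CQI report stream: a random-walk channel model is predicted forward and corrected by each (imperfect) report. The process and measurement noise values below are illustrative assumptions, not the paper's tuning.

```python
# Sketch of a scalar Kalman filter smoothing noisy CQI reports
# (assumptions: random-walk channel model; q and r are illustrative).

def kalman_cqi(reports, q=0.05, r=2.0):
    """Return filtered CQI estimates from a list of imperfect reports."""
    x, p = reports[0], 1.0          # initial state estimate and covariance
    estimates = []
    for z in reports:
        p = p + q                   # predict (random-walk model)
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # correct with the measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

The filtered value, rather than the raw report, would then feed the achievable-rate term of the priority metric.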
Dynamic Service Oriented Resource Allocation system for Interworking Broadband Networks
by Kokila Subramanian, Sivaradje Gopalakrishnan
Abstract: Optimizing the available radio resources efficiently across diverse traffic categories in a heterogeneous interworking network is the key issue of Radio Resource Management (RRM). In this paper, an advanced RRM method, the Dynamic Application Centric Resources Provisioning Algorithm (DAC-RP), is proposed; it provides users with a dedicated set of suitable channels for real-time (RT) and non-real-time (NRT) services based on bandwidth conditions, maximizing capacity while satisfying QoS constraints. DAC-RP is realized over an Ultra Mobile Broadband (UMB) - Worldwide Interoperability for Microwave Access (WiMAX) - Wireless Local Area Network (WLAN) hybrid interworking network, linked over a novel Intelligent Internet Protocol (IIP) architecture. IIP is a unified architecture obtained by merging IMS Call Session Control Functions (CSCFs), application services, enhanced IMS and centralized services under a single layer with a common set of control and routing functions, to converge heterogeneous protocols, functional entities and applications. The competency of IIP and DAC-RP is validated by comparing the performance metrics of RT and NRT applications simulated for the IIP-based UMB-WiMAX-WLAN network developed in OPNET against the scenario using the existing IMS and against a UMTS-WiMAX-WLAN network.
Keywords: Radio Resource Provisioning; Quality of Service; Broadband wireless network; absolute partition; Heterogeneous network; Call Control layer; Real- Time; Non-Real-Time Application; IP Multimedia Subsystem.
Image Denoising using Fast Non Local Means Filter and Multi-Thresholding with Harmony Search Algorithm for WSN
by Rekha Haridoss, Samundiswary Punniakodi
Abstract: Image denoising is one of the challenging tasks in Wireless Sensor Networks (WSNs). Several image denoising algorithms have been developed to obtain better denoised sensor images, but they fail to preserve the edges of the images because of spatial averaging. To overcome the loss of image edges, an attempt is made in this paper to incorporate filters such as the Fast Non Local Means Filter (FNLMF) and the high boost filter into the existing wavelet-thresholding-based denoising method. However, the denoised output and the computation time are significantly affected by the wavelet properties. Hence, instead of wavelet thresholding, this paper concentrates on Histogram-based Multi-Thresholding (HMT) as the main denoising stage. Here, the corrupted image is first denoised by the FNLMF filtering section. Then the edges and fine details of the denoised output are enhanced using HMT with Harmony Search Algorithm (HSA) based optimization. Further, various images with different noise deviations are considered to evaluate the performance of the proposed method in MATLAB simulation. The simulation results indicate that the proposed method performs better in terms of Peak Signal to Noise Ratio (PSNR), Image Quality Index (IQI) and computation time than the existing method.
Keywords: Denoising; Multi-Thresholding; Bilateral Filtering; Non Local Means Filter; Harmony Search Algorithm.
Reduction of jitter in 3D video by transmitting over multiple network Paths
by Vishwa Kiran, Raghuram Shivram, Thriveni J, Venugopal K R
Abstract: Stereoscopic video transmission in telemedicine applications requires data to be transferred with minimal jitter. It is not possible to send stereoscopic video at full HD rate over a single Internet Service Provider (ISP), as bandwidth becomes a bottleneck and congestion can lead to packet drops, eventually producing jitter in the video. This can be circumvented by employing multiple ISPs to stream stereoscopic video over multiple Real-time Transport Protocol (RTP) sessions. Using multiple ISPs results in multiple network paths between the video streaming device and the video consumers. This approach effectively involves aggregating the bandwidth, delay, jitter, packet loss and other qualitative network attributes of every ISP participating in the video transmission process. This article analyses, through simulation, the collective delay and jitter that affect the video reconstruction process, and concludes with an estimation of the minimum qualitative network parameters required.
Keywords: 3D Video; Bandwidth; Cloud Aggregation Server; Discrete Event Simulator; ISP; jitter; Multipath; Multiple ISPs; Simpy Simulator; Stereoscopic Video;.
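The jitter metric at the heart of such an analysis is typically the RTP interarrival jitter estimator from RFC 3550, computed per path from packet send and receive timestamps. The sketch below is a plain-Python rendition of that standard formula (not code from the article); the example timestamps are illustrative.

```python
# Sketch of the RFC 3550 interarrival jitter estimator, applied per ISP
# path (timestamps in seconds).

def interarrival_jitter(send_times, recv_times):
    """J = J + (|D(i-1, i)| - J) / 16, where D is the difference in
    packet transit times between consecutive packets."""
    j = 0.0
    for i in range(1, len(send_times)):
        transit_prev = recv_times[i - 1] - send_times[i - 1]
        transit = recv_times[i] - send_times[i]
        d = abs(transit - transit_prev)
        j += (d - j) / 16.0
    return j
```

A constant-delay path yields zero jitter; variation in transit time raises the smoothed estimate, which is what the multipath aggregation has to keep below the reconstruction deadline.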
Trust-based Tuning of Bayesian Watchdog Intrusion Detection for Fast and Improved Detection of Black Hole Attacks in Mobile Ad Hoc Networks
by Ruchi Makani, B.V.R. Reddy
Abstract: The watchdog is a well-known intrusion detection mechanism for Mobile Ad hoc Networks (MANETs), which not only monitors the traffic between peer nodes but also analyses the data to discern malicious activity; it has been widely adopted for detecting black-hole attacks. The watchdog, however, suffers from serious limitations, namely a high number of false positives/negatives. Integrating Bayesian filtering into the watchdog improves performance in terms of enhanced data throughput, speed of attack detection and accuracy in reporting malicious activity. The Bayesian watchdog's capability can be further enhanced by effective tuning. This paper presents the concept of trust-based tuning of the Bayesian watchdog, a novel approach towards enhancing detection speed, eliminating false alarms and improving data throughput. The proposed trust-based tuning of the Bayesian watchdog has been evaluated through simulations, and encouraging results have been obtained in support of the approach.
Keywords: Bayesian; Intrusion Detection; MANET; Trust; Watchdog.
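A common way to realise Bayesian trust in a watchdog is a Beta-distribution update over forwarded-vs-dropped observations per neighbour; the expected forwarding probability then serves as the trust value that tunes the detection threshold. This is a generic sketch of that standard construction, not the paper's model; the 0.3 threshold is an illustrative assumption.

```python
# Sketch of Beta-distribution trust for watchdog tuning (threshold and
# uniform Beta(1,1) prior are illustrative assumptions).

class BetaTrust:
    """Track (forwarded, dropped) observations for one neighbour node."""
    def __init__(self):
        self.alpha = 1.0   # successful forwards + 1 (uniform prior)
        self.beta = 1.0    # observed drops + 1

    def observe(self, forwarded):
        if forwarded:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def trust(self):
        """Expected probability that the neighbour forwards packets."""
        return self.alpha / (self.alpha + self.beta)

def is_black_hole(trust_value, threshold=0.3):
    """Flag a neighbour whose expected forwarding rate is too low."""
    return trust_value < threshold
```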
Connectivity Analysis of Multihop Wireless Networks Using Route Distribution Model
by Abdullah Waqas
Abstract: Most management and routing protocols in multihop wireless networks rely on a strict connectivity condition among nodes. In this paper, we construct a mathematical framework to model the connectivity of wireless ad hoc and wireless sensor networks. We derive expressions for the distribution of the distance between nodes, which is used to calculate the transmission power required to establish a connection between nodes that are inside each other's communication circle. We then present a Route Distribution Model (RDM) to establish routes between a source and a destination that are outside each other's communication range. The results show that the transmission power required to establish a connected network depends on the number of nodes in the network as well as on their distribution. The results are analyzed for low-, medium- and high-density networks with uniform and Poisson-distributed nodes, and show that a connected network is achieved at relatively lower transmission power when nodes establish multihop routes to transmit their data towards the destination.
Keywords: Ad hoc networks; connectivity; minimum transmission range; node degree model; sensor networks.
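The relationship between transmission range and connectivity can be illustrated empirically: place nodes uniformly in a unit square, connect pairs within the range, and estimate the probability that the resulting geometric graph is connected. This Monte Carlo sketch is an illustration of the phenomenon the abstract analyses, not the paper's analytical model.

```python
# Toy sketch: connectivity probability vs transmission range for
# uniformly placed nodes in a unit square (illustrative setup).
import random

def is_connected(points, tx_range):
    """BFS over the geometric graph induced by the transmission range."""
    n = len(points)
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        xi, yi = points[i]
        for j in range(n):
            if j not in seen:
                xj, yj = points[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= tx_range ** 2:
                    seen.add(j)
                    stack.append(j)
    return len(seen) == n

def connectivity_probability(n, tx_range, trials=200, seed=0):
    """Monte Carlo estimate over random uniform node placements."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        hits += is_connected(pts, tx_range)
    return hits / trials
```

Sweeping `tx_range` (which maps to transmission power) for several node densities reproduces the qualitative result: denser networks reach full connectivity at shorter ranges.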
Special Issue on: Soft Computing Application and Reviews
On the Convergence and Optimality of the Firefly Algorithm for Opportunistic Spectrum Access
by Lakshmana Rao Kalabarige, Shanti Chilukuri
Abstract: Meta-heuristic algorithms have been proven to be efficient for engineering optimization. However, the convergence and accuracy of such algorithms depend on the objective function and also on several choices made during algorithm design. In this paper, we focus on the firefly algorithm for optimal channel allocation in cognitive radio networks. We study the effect of various probability distributions, including the Lévy alpha-stable distribution, for randomization of firefly movement. We also explore various functions for converting firefly positions from the continuous space to the discrete space, as is necessary in the spectrum allocation problem. Simulation results show that in most cases, Lévy flight gives better convergence time and results for common optimization problems such as maximizing the overall channel utilization, maximizing the channel allocation for the bottleneck user and maximizing proportional fairness. We also note that no single discretization function gives both good convergence and optimality.
Keywords: metaheuristic algorithms; Lévy flight; spectrum allocation; cognitive radio networks.
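Lévy-flight randomization is typically generated with Mantegna's algorithm, which produces heavy-tailed step lengths from two Gaussian draws. The sketch below shows that standard construction with the common stability index beta = 1.5; it is a generic illustration, not code from the paper.

```python
# Sketch of a Levy-flight step via Mantegna's algorithm (assumption:
# stability index beta = 1.5, a common choice in firefly variants).
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one heavy-tailed step length u / |v|^(1/beta)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # numerator: Gaussian with Mantegna sigma
    v = rng.gauss(0.0, 1.0)       # denominator: standard Gaussian
    return u / abs(v) ** (1 / beta)
```

Most steps are small, but occasional very long jumps occur, which is exactly the exploration behaviour that helps the firefly swarm escape local optima.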
Meta-Heuristic Algorithm to Generate Optimized Test Cases for Aspect-Oriented Software Systems
by Abhishek Singhal, Abhay Bansal, Avadhesh Kumar
Abstract: Optimized test case generation is challenging for the software industry. The test-all approach is commonly used, but it is not effective in terms of computational cost. The available literature shows the applicability of meta-heuristic algorithms to this problem, but the results obtained are not as good as expected, so scope for more optimized approaches remains. In this paper, we propose an Artificial Bee Colony based test case optimization approach for aspect-oriented software systems. Experiments are conducted using six benchmark problems, which validate the effectiveness of the proposed approach. The results show a 20-40% reduction in the number of test cases and more than 90% code coverage in the optimized test suite, demonstrating the superiority of the proposed approach. This clearly indicates that the computational time and complexity of the adopted approach show remarkable improvement over GA.
Keywords: Aspect-oriented; artificial bee colony algorithm; genetic algorithm; Meta-heuristic; optimization; test cases; test case generation.
Fuzzy System for Classification of Microarray Data using a Hybrid Ant Stem Optimization Algorithm
by Arul Antran Vijay Subramanian, GaneshKumar Pugalendhi
Abstract: In recent years, microarray analysis has become a widely used tool for gene expression profiling and data analysis. Microarray data analysis and classification have demonstrated convincingly that they provide an effective methodology for the diagnosis of diseases and cancers. Although much research has been performed on applying several techniques to microarray data classification in past years, it has been shown that conventional machine learning and statistical techniques have intrinsic drawbacks in achieving accurate and robust classifications. This paper presents a fuzzy-based classification system to analyse microarray data. In classification, feature selection is an important but difficult problem; a mutual information technique is used in this approach to extract the most informative genes from the microarray dataset. For the design of the fuzzy expert system, a novel Hybrid Ant Stem (HAS) algorithm is presented that extracts the if-then rules and their membership functions from the given diabetes microarray data. The algorithm uses a novel Stem Cell Optimization (SCO) to tune the points of the membership functions and Ant Colony Optimization (ACO) to generate the optimal rule set. The performance of the proposed technique is evaluated using two diabetes microarray datasets. The results show that the proposed Hybrid Ant Stem algorithm produces a highly accurate fuzzy expert system with rule sets that are more interpretable than those of existing methodologies.
Keywords: Microarray Data; Fuzzy Expert System; Ant Colony Optimization; Stem Cell Optimization; Mutual Information.
Flower Pollination Based K-Means algorithm for Medical Image Compression
by G. Vimala Kumari, G. Sasibhushana Rao, B.Prabhakara Rao
Abstract: Image compression plays a significant role in digital image storage and transmission because of limited storage space and insufficient bandwidth, and is beneficial for all multimedia applications. Magnetic Resonance Imaging (MRI) of a human body produces images of huge size that must be compressed, yet the medical field demands high image quality for better diagnosis of disease. In this technologically advanced world, intelligent systems try to simulate human intelligence; they are applied to engineering, industrial, medical and educational problems and make decisions using several inputs. However, the search process is enormous, and convergence time depends on the algorithm structure. In this paper, metaheuristic algorithms are used for the first time to obtain near-optimal solutions to this problem. The paper introduces Flower Pollination Algorithm (FPA) based vector quantization for better image compression with better reconstructed image quality. The performance of the proposed method is evaluated using the Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and a fitness function.
Keywords: Image compression; Particle Swarm Optimization; Quantum Particle Swarm Optimization; Flower Pollination Algorithm.
Image Compression based on adaptive Image Thresholding by maximizing Shannon or Fuzzy Entropy using Teaching Learning Based Optimization
by Karri Chiranjeevi, Umaranjan Jena
Abstract: In this paper, Teaching Learning Based Optimization (TLBO) is used to maximize Shannon entropy or fuzzy entropy for effective image thresholding, which leads to better image compression with a higher peak signal to noise ratio (PSNR). Conventional thresholding methods are efficient for bi-level thresholding, but they become computationally expensive when extended to multilevel thresholding, since they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, a TLBO-based multilevel image thresholding is proposed that maximizes Shannon entropy or fuzzy entropy; the results are compared with differential evolution, particle swarm optimization and the bat algorithm, and prove better in standard deviation, PSNR, weighted PSNR and reconstructed image quality. The performance of the proposed algorithm is found to be better with fuzzy entropy than with Shannon entropy.
Keywords: Image compression; Image thresholding; Shannon entropy; Fuzzy entropy; Bat algorithm; Teaching learning based optimization.
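The Shannon-entropy objective that the optimizer maximizes can be sketched as Kapur-style class entropies summed over the segments a threshold vector induces on the grey-level histogram. The brute-force search below is a stand-in for TLBO, included only to show what the objective rewards; the tiny 8-bin histogram is illustrative.

```python
# Sketch of the entropy objective for multilevel thresholding (brute
# force replaces TLBO here purely for illustration).
import math
from itertools import combinations

def shannon_objective(hist, thresholds):
    """Sum of Shannon entropies of the classes induced by the thresholds
    on the grey-level histogram."""
    n = sum(hist)
    edges = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        w = sum(hist[lo:hi]) / n           # class probability mass
        if w == 0:
            continue
        total -= sum((h / n / w) * math.log(h / n / w)
                     for h in hist[lo:hi] if h)
    return total

def best_thresholds(hist, k):
    """Exhaustive search over k-threshold vectors (stand-in for TLBO,
    which avoids this combinatorial cost for many thresholds)."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: shannon_objective(hist, t))
```

The combinatorial growth of `combinations` with k is exactly the cost the abstract cites as the motivation for using TLBO instead of exhaustive search.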
An efficient and optimized approach for secured file sharing in Cloud Computing
by Neha Agarwal, Ajay Rana, J.P. Pandey
Abstract: Cloud computing is a paradigm in which services are delivered over the Internet to customers on demand, freeing them from infrastructure concerns. To maintain efficiency and cut investment costs, customers place their data on public clouds run by cloud service providers, which raises the major concern of maintaining the security of the outsourced data. To address this issue, we propose a hybrid encryption algorithm comprising symmetric and asymmetric public-key encryption algorithms; it takes advantage of the fast performance of symmetric encryption and the high security of asymmetric encryption. We also introduce the concept of proxy re-encryption to ensure the security of outsourced data against colluding clouds and unauthorized users. Our results show that the proposed algorithm is more efficient and secure when sharing files with other users on the cloud.
Keywords: Cloud computing; Cryptography; Proxy Re-encryption; Outsourced Data Security; Privacy; Genetic Algorithm.
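The hybrid pattern the abstract describes can be illustrated in miniature: a random session key encrypts the bulk data quickly (symmetric part), and only the session key is wrapped with the slower public-key operation (asymmetric part). Everything below is a toy, not the paper's algorithm: textbook RSA with tiny primes and a PRNG-based XOR stream stand in for real primitives, which in practice would be AES and padded RSA from a vetted library.

```python
# Toy sketch of hybrid (symmetric + asymmetric) file encryption.
# All parameters are illustrative and insecure by design.
import random

# Tiny textbook-RSA keypair (illustrative only).
P, Q, E = 61, 53, 17
N, PHI = P * Q, (P - 1) * (Q - 1)
D = pow(E, -1, PHI)                      # private exponent

def xor_stream(data, key):
    """Toy symmetric cipher: XOR with a key-seeded pseudorandom stream."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

def hybrid_encrypt(data):
    session_key = random.randrange(2, N)   # fast symmetric part
    wrapped = pow(session_key, E, N)       # slow asymmetric key wrap
    return wrapped, xor_stream(data, session_key)

def hybrid_decrypt(wrapped, ciphertext):
    session_key = pow(wrapped, D, N)       # unwrap with the private key
    return xor_stream(ciphertext, session_key)
```

Proxy re-encryption, as used in the paper, would additionally let the cloud transform `wrapped` for another user's key without ever learning the session key.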
Development of ANFIS based Algorithm for MPPT Controller for Standalone Photovoltaic System
by Astitva Kumar, M. Rizwan, Uma Nangia
Abstract: The maximum power point tracking controller is integral to the efficient implementation of a PV system. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) based algorithm for maximum power point tracking (MPPT) has been developed and implemented to track the maximum power point of a standalone photovoltaic (PV) system. The work proposes to control the switching of the DC-DC boost converter using the ANFIS approach, replacing the conventional PI controller used to process the error signal. The results of the proposed approach are compared with the incremental conductance approach under constant and varying irradiance and temperature conditions. With the proposed approach, the percentage error, rise time and voltage fluctuations are improved compared to the incremental conductance method. Further, the proposed adaptive controller effectively tracks the MPP considering all the major non-linear variables, improving the rise time and steady-state characteristics of the PV system.
Keywords: Photovoltaic Systems; Adaptive Neuro-Fuzzy Inference System; Maximum Power Point Tracking; Control Algorithms.
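For reference, the incremental conductance baseline that the ANFIS controller is compared against can be sketched in a few lines. The sign convention (raising the duty cycle lowers the PV operating voltage of a boost converter) and the step size are common textbook choices, not taken from the paper.

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update for a boost converter.
    Sign convention: raising the duty cycle lowers the PV operating voltage
    (a common textbook choice). Returns the new duty cycle."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return duty                       # at the MPP: hold
        return duty - step if di > 0 else duty + step
    dp_dv = i + v * (di / dv)                 # dP/dV = I + V * dI/dV
    if abs(dp_dv) < 1e-6:
        return duty                           # at the MPP: hold
    # dP/dV > 0: operating left of the MPP, so raise V (lower duty)
    return duty - step if dp_dv > 0 else duty + step
```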
Performance Comparison of Bat Search and Cuckoo Search Using Software Artifact Infrastructure Repository and Regression Testing
by Arun Prakash Agrawal, Arvinder Kaur
Abstract: Software testing is inevitable for confidence in the quality and reliability of software products. Regression testing is conducted to ensure that no new errors have been introduced into the software as a result of maintenance activity. Re-executing all existing test cases is one approach to gaining this confidence, but it is highly expensive and time consuming. Previous research has revealed that nature-inspired algorithms have wide application in this area. Bat Search and Cuckoo Search are two such powerful nature-inspired metaheuristics. In this paper, the Bat Search algorithm is tested against the Cuckoo Search algorithm on the regression test case selection problem. Two factors are considered for the comparison: the number of faults covered and the computational time. Extensive experiments have been conducted on objects adopted from the benchmark Software-artifact Infrastructure Repository. Rigorous statistical tests lead to the conclusion that Cuckoo Search is marginally advantageous over Bat Search with respect to the performance parameters. The underlying motivation for this research is to create awareness among researchers of the computational capability of both algorithms. We believe the results reported in this paper will enable researchers to develop more powerful testing algorithms in the near future.
Keywords: Regression Testing; Test Effort Optimization; Metaheuristics; Bat Search Algorithm; Cuckoo Search Optimization.
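To give a flavor of one of the two metaheuristics compared, here is a minimal continuous Cuckoo Search sketch (Yang-Deb style Lévy flights via the Mantegna approximation). The paper applies the algorithms to binary test-case selection; this generic minimization sketch only illustrates the update rule, and all parameter values are illustrative.

```python
import math
import random

def cuckoo_search(f, dim, n=15, pa=0.25, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal Cuckoo Search minimizing f over a box; Levy-flight steps use
    the Mantegna approximation. A sketch, not a tuned library."""
    rng = random.Random(seed)
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    clip = lambda v: max(lo, min(hi, v))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    best = min(range(n), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n):
            # Levy flight around nest i, scaled by its distance to the best nest
            step = sigma * rng.gauss(0, 1) / abs(rng.gauss(0, 1)) ** (1 / beta)
            new = [clip(nests[i][j] + 0.01 * step * (nests[i][j] - nests[best][j]))
                   for j in range(dim)]
            fn = f(new)
            if fn < fit[i]:
                nests[i], fit[i] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them at random
        k = max(1, int(pa * n))
        for i in sorted(range(n), key=lambda i: fit[i])[-k:]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n), key=lambda i: fit[i])
    return nests[best], fit[best]
```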
Special Issue on: Intelligent Computation Systems
Incorporating Security in Opportunistic Routing and Traffic Management in Opportunistic Sensor Networks
by Mohammed Salman Arafath, Khaleel Ur Rahman Khan, K.V.N. Sunitha
Abstract: Nowadays, wireless sensor networks and opportunistic sensor networks (OSNs) are mainly used to bridge the gap between the physical world and virtual electronics. Owing to several constraints, such as limited storage, unreliable communication, high communication latency, unattended operation and limited sensor node power, OSNs face many problems in routing and traffic management. Security in OSNs involves routing and data aggregation, which, because of the ad-hoc nature of the network, require collaboration among nodes. To solve these problems, the proposed method combines a novel traffic management scheme with opportunistic secure routing for sensor networks. It involves three algorithms: Range Based Clustering (RBC) for clustering the sensor nodes, the Minimum Waiting Time Routing (MWR) algorithm for routing data packets and, to secure communication when coalition and replica attacks are present on the network, a Light Weight Key Generation mechanism (LWKG). The proposed system thus imposes low computational complexity on the network, and the experimental evaluation shows effective results in terms of average packet reception ratio, replica detection ratio, connectivity and coalition attack resistance.
Keywords: Wireless Opportunistic Sensor Network Security; Clustering; Traffic Management; Routing; OSN; Secure Communication; Minimum Waiting Time; MWT; Range Based Clustering; RBC; Light Weight Key Generation Mechanism; LWKG.
IMPROVING RELIABILITY IN MAS BY RULE BASED LOGIC AND CRYPTOGRAPHIC TECHNIQUES
by Prashant Kumar Mishra, Raghuraj Singh, Vibhash Yadav
Abstract: A Mobile Agent (MA) is a software paradigm with the capability to move from one host to another across a dynamic network environment and execute tasks assigned by its user. Reliability is one of the most important issues in Mobile Agent based Systems (MAS) and has been addressed by many researchers. In this paper, we enhance the reliability of MAS with the support of an Intrusion Detection System (IDS) and effective routing. Routing is an important process for finding an optimal path, which improves network throughput and reliability. A Collision-Free (CF) network graph exploration method is designed to identify optimal paths for MAs. Security is a crucial aspect because of various malicious node activities and must be considered while estimating reliability. In our process, the HMAC-SHA1 algorithm is used to detect malicious agents and Rule Based Logic (RBL) is designed to identify malicious hosts. Further, we calculate the reliability of the MAS with respect to network status and condition, including link connectivity and malicious node probability. Finally, we simulate the performance using factors such as the size of the MAS, the malicious ratio and the number of mobile agents. These factors are significant in showing the improved reliability of the proposed system.
Keywords: Mobile Agent; Reliability; Intrusion Detection System; Malicious activities; Routing.
THD Minimization Using a Genetic Algorithm for Nine-level Multilevel Inverters
by RAVIKUMAR SUKUMARAN
Abstract: Determination of optimal switching angles in inverters is a significant research area and involves effective use of the DC sources to increase the efficiency of the power output. Maximum, high-quality power is achieved by reducing the total harmonic distortion (THD) present in the output waveform; by computing appropriate switching angles, the harmonics appearing in the output voltage can be reduced. The problem addressed here is the formulation of a switching strategy that maintains optimal power quality. A genetic algorithm (GA) optimization technique for computing the optimum switching angles of a nine-level multilevel inverter is investigated, the algorithm is formulated and an experimental analysis is carried out. In three-phase multilevel inverters, the optimization algorithm is generally applied to the phase voltage of the inverter. This results in minimum THD in the phase voltage but not necessarily minimum line-to-line THD, whereas in three-phase applications the line-voltage harmonics are the main concern from the load point of view. In this paper, using the genetic algorithm and the sinusoidal PWM technique, THD minimization is applied to the line-to-line voltage of the inverter. The paper also compares a seven-level cascaded inverter with a nine-level diode-clamped multilevel inverter.
Keywords: Optimal minimization of THD (OMTHD); Genetic algorithm (GA); line-voltage THD; multilevel inverter; Phase Voltage THD; THD reduction.
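The quantity being minimized can be made concrete: for a quarter-wave-symmetric staircase waveform with one switching angle per DC source, the n-th odd harmonic is V_n = (4/(nπ)) Σ_k cos(nθ_k), and the search is over angles θ_1 < … < θ_4 (four angles for a nine-level inverter). The sketch below evaluates THD and uses a crude random search in place of the paper's GA; all constants are illustrative.

```python
import math
import random

def harmonic(n, angles):
    # n-th odd harmonic amplitude of a quarter-wave-symmetric staircase
    # waveform with unit DC sources: V_n = (4 / (n*pi)) * sum_k cos(n*theta_k)
    return 4.0 / (n * math.pi) * sum(math.cos(n * t) for t in angles)

def thd(angles, n_max=49):
    # Total harmonic distortion of the staircase, harmonics up to n_max
    v1 = harmonic(1, angles)
    hsum = sum(harmonic(n, angles) ** 2 for n in range(3, n_max + 1, 2))
    return math.sqrt(hsum) / abs(v1)

# Crude random search standing in for the paper's GA (illustration only):
rng = random.Random(0)
best = min((sorted(rng.uniform(0, math.pi / 2) for _ in range(4))
            for _ in range(5000)), key=thd)
```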
Simulink Implementation of RLS Algorithm for Resilient Artifacts Removal in ECG Signal
by V. Tejaswi, Surendar Aravindhan
Abstract: Noise is an undesired signal that corrupts the desired signal and is a serious problem affecting signals during the transmission of information. In this work, two kinds of noisy signals are considered: speech signals, taken from the NOIZEUS database, and ECG signals, taken from the PhysioNet ECG database. The major noises affecting the ECG signal are baseline wander, electrode motion, power line interference and muscle artifacts. Baseline wander is caused by patient movement, breathing and bad electrode contact with the skin; electrode motion noise occurs when an electrode moves away from the skin, leading to impedance changes that cause variations in the ECG; and muscle artifact noise is caused by the contraction of muscles other than the heart. A Simulink model is designed for cancelling the noise from the noisy signals. The RLS algorithm is chosen as the adaptive algorithm because it has a faster convergence rate than algorithms such as LMS and NLMS. The Simulink model is tested on different cases to show that it works efficiently, and its performance is assessed through the mean square error obtained.
Keywords: Noise; ECG; Artifact; Baseline wander; Electrode motion; Muscle artifact; Power line interference; RLS filter; MSE.
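The RLS update the abstract refers to is compact enough to sketch directly. This is the standard exponentially weighted RLS recursion (gain k = Px/(λ + xᵀPx), then a rank-one update of P); the filter order, forgetting factor and toy identification task below are illustrative, not the paper's Simulink configuration.

```python
import random

class RLS:
    """Exponentially weighted recursive least squares (standard textbook form)."""
    def __init__(self, order, lam=0.99, delta=100.0):
        self.w = [0.0] * order           # filter weights
        self.lam = lam                   # forgetting factor
        # Inverse correlation matrix, initialised to delta * I
        self.P = [[delta if i == j else 0.0 for j in range(order)]
                  for i in range(order)]

    def step(self, x, d):
        m = len(x)
        Px = [sum(self.P[i][j] * x[j] for j in range(m)) for i in range(m)]
        denom = self.lam + sum(x[i] * Px[i] for i in range(m))
        k = [v / denom for v in Px]                      # gain vector
        y = sum(wi * xi for wi, xi in zip(self.w, x))    # filter output
        e = d - y                                        # a-priori error
        self.w = [wi + ki * e for wi, ki in zip(self.w, k)]
        self.P = [[(self.P[i][j] - k[i] * Px[j]) / self.lam for j in range(m)]
                  for i in range(m)]
        return y, e

# Toy use: identify a 2-tap FIR "noise path" from a clean reference input.
rng = random.Random(0)
f = RLS(order=2)
w_true = [0.5, -0.3]
buf = [0.0, 0.0]
for _ in range(500):
    buf = [rng.gauss(0, 1)] + buf[:1]
    d = w_true[0] * buf[0] + w_true[1] * buf[1]
    y, e = f.step(buf, d)
```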
Semantic Linkage of Source Content Dynamically with Virtual Documents Using Wikipedia in Hadoop
by Priyadarshini R., LATHA TAMILSELVAN
Abstract: In recent years, the World Wide Web has grown enormously and become more multifaceted because of the rising number of users and emerging technologies. Web 1.0 was static and not interactive compared with the current web; Web 2.0 changed this by enabling users to create, share and collaborate on content. As a result, web content has grown enormously and relevant information retrieval has become a challenging task. Applying semantic web techniques is essential for handling this problem: traditional web content is augmented with a semantic repository, and semantic-web-based tools are being developed and researched to make information retrieval more efficient. This paper describes a semantic weblog that can locate the exact source content behind reference URLs. Because Wikipedia is a large encyclopedia containing many links and much meta-content, retrieving the original source of user-created documents is difficult. The proposed work extracts the meta-content of each link using the Wiki API and stores it in a MongoDB-based repository. The meaningful words of the meta-content are then compared semantically with the created virtual document with the help of the Alchemy API. Meta-content matching is implemented using the Jaccard similarity measure together with the Alchemy API, and finally the exact source document for the virtual document is retrieved for future use. The Jaccard similarity measure, combined with content categorisation, yields accurate source URLs for content selected in a wiki. The times taken for static and dynamic retrieval are compared and plotted, as are the recall and precision of dynamic retrieval. An enhanced version of dynamic retrieval, updated semantically using WordNet, is also provided, and the Jaccard similarity approach is compared with and analysed against various other methods.
Keywords: Virtual document; Semantic repository; Locate source content; Semantic article; WikiAPI.
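The Jaccard similarity used for meta-content matching is simply the overlap of two token sets; a minimal sketch follows (the token lists are made-up examples, not real Wikipedia meta-content):

```python
def jaccard(a, b):
    """Jaccard similarity of two token collections: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Made-up token lists standing in for meta-content and a virtual document:
doc = "semantic weblog locates the exact source content".split()
wiki = "wikipedia meta content locates source content".split()
score = jaccard(doc, wiki)
```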
A Novel System for Early Detection of Breast Cancer using Area and Entropy Features of Malignant Tumor
by Varalatchoumy M, Ravishankar M
Abstract: A computer-aided detection and classification system has been developed to detect breast cancer at an early stage by predicting the area and texture of malignant tumors. Noise removal and image enhancement are carried out in the preprocessing stage using an adaptive median filter and contrast-limited adaptive histogram equalization. An improved watershed segmentation technique with appropriate internal and external markers proves to be an efficient approach for detecting the region of interest. The detected tumors are classified using a feedforward artificial neural network trained on textural features. Area and entropy features extracted from malignant tumors aid early detection of breast cancer by categorizing malignant tumors as stage I or stage II. The overall efficiency of the system in identifying the stage of a malignant tumor is 92%, which is high compared with existing systems. Mammogram images from the Mammographic Image Analysis Society (MIAS) database were used to train the system, and its efficiency was tested on real-time hospital images.
Keywords: Novel CAD system; adaptive median filter; CLAHE; watershed segmentation; internal and external markers; textural features; ANN; area and entropy of malignant tumor; stage of breast cancer.
BREAST CANCER DIAGNOSIS USING A MINKOWSKI DISTANCE METHOD BASED ON MUTUAL INFORMATION AND GENETIC ALGORITHM
by Neha Vutakuri, Amineni Uma Maheswari
Abstract: Breast cancer is one of the most frequently diagnosed cancers and can lead to death in women worldwide. Diagnosing breast cancer is among the most challenging tasks, as symptoms may only be present in later stages, and early diagnosis may save lives. Various algorithms and techniques have been proposed to diagnose breast cancer. This paper presents MIGA (mutual information genetic algorithm) for diagnosing breast cancer. MIGA is a combination of two algorithms: mutual information (MI) and a genetic algorithm (GA). Among information theory approaches, MI is the best known and most widely used owing to its non-linearity, robustness and scalability; this process reduces computational complexity and improves the accuracy of the system. The method of this work is as follows: attributes of breast cancer patients were collected from the Breast Cancer Wisconsin Diagnostic dataset. Evolutionary computation offers a variety of techniques and approaches based on natural selection. A breast cancer diagnosis system was developed using a GA and a hybrid algorithm (genetic and K-nearest neighbor) and then using MI and GA, with GA fitness calculated by the Minkowski distance method. Nine attributes from the dataset were included: clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses. The obtained solutions were verified for three algorithms (GA, GA+KNN, and MIGA). The results show that the highest accuracy (99%) was obtained with the GA based on MI features. The proposed MIGA algorithm shows enhanced performance compared with the methods of previous works.
Keywords: Breast cancer diagnosis; Genetic algorithm; Mutual information; Breast Cancer Wisconsin Dataset; Minkowski distance method.
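The distance at the core of the method is easy to state: the Minkowski distance generalizes Manhattan (p=1) and Euclidean (p=2). A sketch of how it could drive a K-nearest-neighbour vote over WBCD-style nine-feature vectors follows; the toy records and labels are invented for illustration, not taken from the dataset.

```python
def minkowski(x, y, p=2):
    """Minkowski distance; p=1 is Manhattan, p=2 Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def knn_predict(train, labels, query, k=3, p=2):
    """Majority vote among the k training vectors nearest to the query."""
    idx = sorted(range(len(train)), key=lambda i: minkowski(train[i], query, p))[:k]
    votes = [labels[i] for i in idx]
    return max(set(votes), key=votes.count)

# Invented nine-feature records in the 1-10 range used by the Wisconsin dataset:
train = [[1, 1, 1, 1, 2, 1, 2, 1, 1],
         [2, 1, 1, 1, 2, 1, 1, 1, 1],
         [8, 7, 8, 5, 6, 9, 7, 8, 3],
         [9, 8, 8, 6, 5, 10, 8, 7, 2]]
labels = ["benign", "benign", "malignant", "malignant"]
```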
Threshold Algorithm for the cell formation problem
by RAGURAMAN T.R, SUDHAKARAPANDIAN RAMASAMY, KAMALAKANNAN RAMALINGAM
Abstract: Advanced or smart manufacturing has recently been gaining attention from academia and from industry in small and medium enterprises (SMEs). Smart manufacturing for Industry 4.0, which integrates resources, information, materials and people to form a cyber-physical system, has become a priority for many enterprises, especially small and medium-sized ones. The threshold accepting algorithm is useful for solving the cell formation problem and is based here on three perturbation techniques: pair-wise exchange, insertion and random insertion. This paper aims to maximize grouping efficacy, one of the best performance measures for the cell formation problem. The threshold accepting algorithm was evaluated on benchmark problems from the literature, and the evaluation shows that all three perturbation schemes are able to solve the cell formation problem. Among the three, the random insertion perturbation scheme provides better solutions than the others.
Keywords: SMEs; Cellular Manufacturing System; Smart Manufacturing; Threshold Accepting Algorithm; Part Machine grouping; Grouping Efficacy.
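Grouping efficacy, the objective being maximized, has a standard closed form: Γ = (e − e_out)/(e + e_void), where e is the number of operations (1s) in the machine-part matrix, e_out the exceptional elements outside the diagonal blocks and e_void the voids inside them. A sketch follows, together with the pair-wise exchange perturbation used by the threshold accepting algorithm; the example matrix is made up.

```python
import random

def grouping_efficacy(matrix, machine_cell, part_cell):
    """Grouping efficacy = (ones - exceptional) / (ones + voids):
    ones = operations in the machine-part matrix, exceptional = 1s outside
    the diagonal blocks, voids = 0s inside them."""
    ones = exceptional = voids = 0
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            inside = machine_cell[i] == part_cell[j]
            if v == 1:
                ones += 1
                if not inside:
                    exceptional += 1
            elif inside:
                voids += 1
    return (ones - exceptional) / (ones + voids)

def pairwise_exchange(assign, rng):
    """Pair-wise exchange perturbation: swap the cells of two random machines."""
    a = assign[:]
    i, j = rng.sample(range(len(a)), 2)
    a[i], a[j] = a[j], a[i]
    return a

# Made-up 4-machine x 4-part incidence matrix with a perfect 2-cell structure:
M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
```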
Support Vector Machine based proactive fault-tolerant scheduling for Grid Computing Environment
by A.Shamila Ebenezer, Elijah Blessing Rajsingh, Baskaran Kaliaperumal
Abstract: To classify reliable resources accurately and perform proactive fault-tolerant scheduling in a grid computing environment, a combination of the Support Vector Machine (SVM) with Quantum-behaved Particle Swarm Optimization using a Gaussian-distributed local attractor point (GAQPSO) is proposed in this paper. When tuned with appropriate kernel parameters, the SVM classifier provides high accuracy in reliable resource prediction, while the higher diversity of GAQPSO compared with other QPSO variants significantly reduces the makespan of the schedule. The performance of the SVM-GAQPSO scheduler is analysed in terms of makespan, reliability and accuracy. The empirical results show that the reliability of the SVM-GAQPSO scheduler is 14% higher than the average reliability of the compared algorithms. Also, the prediction accuracy of the SVM classifier is 92.55%, which is 37.2% higher than that of the Classification and Regression Trees (CART), Linear Discriminant Analysis (LDA), K-Nearest Neighbor (KNN) and Random Forest (RF) algorithms.
Keywords: SVM classification algorithm; Particle Swarm Optimization; Proactive Fault tolerance; Failure Data Analytics; Grid Computing.
Automatic Classification for Preventing Duplication of Online Multimedia Data in Secure Cloud Infrastructure
by Suganya E, Aravindhraj N, Sountharrajan S, Rajan C
Abstract: Cloud computing provides various types of software and hardware services, working together across different computational environments, to end users through the internet. It is an emerging technology that offers a variety of applications, and users can access their cloud services anywhere in the world, at any time, in a secure way. Nowadays, free online hosting websites produce a large amount of multimedia data, including images, audio, 2D videos and 3D videos. Some people use videos and audio in the distributed environment in violation of the original content creators' copyrights, causing a large revenue loss for the creators, and protecting this multimedia content is a challenging task. Cloud computing also has security issues such as authentication, privacy and data security. The proposed system provides a highly scalable storage infrastructure in the cloud environment. It protects multimedia content by creating a depth signature and classifying duplicated content with an SVM classifier. The system thus avoids the security issues around multimedia content, increases revenue for original content creators, and can work in both public and private cloud infrastructures.
Keywords: Video Watermarking; 3D videos; Cloud applications; Depth signatures; Cryptography; Security and Classification.
ENHANCEMENT OF ENTERPRISE RESOURCE PLANNING SYSTEM BY ANALYZING FEASIBILITY AND CRITICAL FACTORS
by Valanarasu R., Christy A
Abstract: ERP is a business strategy for industry-domain-specific applications that builds the customer and service provider value network system. Previous systems lacked integration in the access of information and records, with problems in information availability, financial information systems and integration processes. A common platform is essential for integrating all information across all terminals. Existing systems provide an integrated platform through which stakeholders can access data, but overall complexity increases because of management and environmental factors. The distributed module proposed in this work combines all the management processes and key features, integrating the platforms and overcoming the limitations of existing systems. The ERP system is used to determine organizational needs and adaptation requirements, combining the key features lacking in previous systems, and is also used to analyse the adoption behaviour of organizations. This research work implements ERP in small and medium enterprises of various scales and identifies solutions to practical real-world problems.
Keywords: ERP; Risk management Analysis.
Best-case, worst-case and mean integral-square-errors for reduction of continuous interval systems
by Vinay Pratap Singh, Jagadish Kumar Bokam, Sugandh Pratap Singh
Abstract: In this brief, the best-case, worst-case and mean integral-square-errors (ISEs) are defined for model reduction of continuous interval systems. First, rational transfer functions of the interval system and of the model are obtained using the Kharitonov theorem, and then the different integral-square-errors are derived with the help of the alpha and beta parameters obtained for the rational transfer functions. The ISEs are obtained for the impulse response and can be treated as a measure of goodness for model reduction of continuous interval systems. The whole procedure is illustrated with a numerical example.
Keywords: Kharitonov theorem; integral-square-error; interval systems; Model reduction.
Residential Load Scheduling Considering Maximum Demand Using Binary Particle Swarm Optimization
by Remani Thankamma, JASMIN EA, Imthias Ahamed
Abstract: Demand Response (DR) programs are gaining importance in the Smart Grid, owing to continuously increasing energy demand. The primary objective of DR programs is to motivate consumers to change their power consumption patterns to limit the maximum demand. The success of residential DR programs largely depends on the schedulable loads and the nature of the utility tariff. Binary Particle Swarm Optimization (BPSO) is an effective tool for solving scheduling problems. In this paper, a BPSO-based solution is presented through a case study of the residential load scheduling problem, including the consumer demand constraints and the Maximum Demand (MD) limit specified by the utility. The objective of the algorithm is to automatically schedule the consumer's load so as to minimize the energy cost subject to various constraints. The performance of the algorithm is investigated for a domestic consumer with schedulable and non-schedulable appliances. Simulation experiments are conducted under different tariff and MD limit conditions. Test results show that the proposed method saves energy cost for the domestic consumer and reduces the maximum demand on the system.
Keywords: Binary PSO; Load scheduling; Maximum Demand; Energy Management System; Demand Response.
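The BPSO machinery behind such a scheduler can be sketched compactly: each bit says whether an appliance runs in a slot, and the sigmoid transfer rule turns a particle's velocity into the probability of that bit being 1. The tariff, run-length requirement and penalty below are invented for illustration, not the paper's case study (which also includes an MD constraint).

```python
import math
import random

def bpso(fitness, n_bits, n_particles=20, iters=100, seed=0):
    """Minimal binary PSO (sigmoid transfer rule), minimizing `fitness`."""
    rng = random.Random(seed)
    w, c1, c2, vmax = 0.7, 1.5, 1.5, 4.0
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pfit = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_bits):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (pbest[i][j] - X[i][j])
                           + c2 * rng.random() * (gbest[j] - X[i][j]))
                V[i][j] = max(-vmax, min(vmax, V[i][j]))
                # Sigmoid transfer: velocity sets the probability of bit = 1
                X[i][j] = 1 if rng.random() < 1 / (1 + math.exp(-V[i][j])) else 0
            fi = fitness(X[i])
            if fi < pfit[i]:
                pbest[i], pfit[i] = X[i][:], fi
                if fi < gfit:
                    gbest, gfit = X[i][:], fi
    return gbest, gfit

# Toy day of 6 slots: run an appliance for exactly 3 slots at minimum cost.
price = [5, 5, 8, 3, 2, 2]               # hypothetical tariff per slot
need = 3
def cost(bits):
    return (sum(p * b for p, b in zip(price, bits))
            + 100 * abs(sum(bits) - need))   # penalty for a wrong run length
sched, c = bpso(cost, 6)
```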
Multi-objective Multi-Join Query Optimization using Modified Grey Wolf Optimization
by Deepak Kumar, Sushil Kumar, Rohit Bansal
Abstract: Nowadays, the information retrieved by a query is based on extracting data from across the world, located at different data sites. In Distributed Database Management Systems (DDBMS), owing to partitioning or replication of data among several sites, the relations required to answer a query may be stored at several Data Sites (DS). Many experimental results have shown that combining an Optimal Join Order (OJO) with an optimal selection of relations in the Query Plan (QP) gives better results than several existing query optimization methodologies such as Teacher-Learner Based Optimization (TLBO) and the Genetic Algorithm (GA). In this paper, an approach is proposed to compute the best QP, answering the user query with minimal cost and in minimum time, using a multi-objective constrained Modified Grey Wolf Optimization (MGWO) algorithm. The approach also aims to produce the OJO in order to reduce the dimensionality complexity of the QP.
Keywords: Data Site; Distributed Database Management Systems; Grey Wolf Optimization; Optimal Join Order; Teacher-Learner Based Optimization;.
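For orientation, the canonical Grey Wolf Optimizer update — followers encircle the three best wolves (alpha, beta, delta) with a coefficient that decays from 2 to 0 — can be sketched for a continuous objective. The paper's modified GWO operates on the combinatorial join-order space; this sketch only illustrates the base update rule, in an elitist variant where the leaders are kept unchanged each iteration.

```python
import random

def gwo(f, dim, n=20, iters=200, lo=-10.0, hi=10.0, seed=0):
    """Minimal Grey Wolf Optimizer, minimizing f over a box.
    Elitist variant: the three leaders are not moved within an iteration."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters            # exploration coefficient decays 2 -> 0
        for i in range(3, n):
            new = []
            for j in range(dim):
                xs = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    # Encircling step toward each leader, then average the three
                    xs.append(leader[j] - A * abs(C * leader[j] - wolves[i][j]))
                new.append(max(lo, min(hi, sum(xs) / 3)))
            wolves[i] = new
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])
```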
2n Factorial Design of Thermal Image Views for Detecting Correlation Coefficient Factors of Objects for Environmental Issues
by Mukilan P., Saravanakumar N M
Abstract: Thermal imaging is a method that improves the visibility and clarity of objects in a dark environment by detecting them with a camera via infrared radiation and creating an image from that thermal information. All objects emit infrared energy (heat) according to their temperature: the hotter an object, the more radiation it emits, and the spread of this heat cannot be predicted directly. A thermal image essentially acts as a heat sensor for an object and is capable of detecting tiny differences in temperature across views. The device collects infrared radiation from the objects in a view, captures the scene and creates an image from a given angular position. Because objects differ in temperature from their surroundings, a thermal camera can detect them in a single view, and they will appear distinct in another view of the thermal image. The proposed methodology applies a 2n factorial design to thermal images to study the various viewing angles of an object and to identify the correlation coefficient factors of images captured from the front, back, right, left, bottom and top views. As an example, a 22 factorial design is used with two factors, the X view (front view) and the Y view (right-side view) of the thermal image, analysed with Yates' method and the correlation coefficient formulation, to help users identify objects whose temperatures affect the surrounding area at different viewing angles.
Keywords: Thermal image; 2n factorial design; thermal image object view; correlation co-efficient factors.
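Yates' method for a 2^2 design, which the abstract invokes, reduces to repeated pairwise sums and differences over the responses in standard order (1), x, y, xy; dividing the resulting contrasts by 2^(k-1) gives the main and interaction effects. A sketch with made-up thermal-image responses follows, plus the Pearson correlation coefficient used for the view factors.

```python
import math

def yates(responses):
    """Yates' algorithm for a 2^k factorial design in standard order.
    Returns [grand total, contrast A, contrast B, contrast AB, ...];
    effects are contrast / 2**(k-1)."""
    k = int(math.log2(len(responses)))
    col = list(responses)
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    return col

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical mean intensities for the 2^2 runs (1), x, y, xy:
contrasts = yates([10, 14, 12, 18])
effects = [c / 2 for c in contrasts[1:]]   # X, Y and XY interaction effects
```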
Automated transformation of NL to OCL Constraints via SBVR
by Murali Mohanan
Abstract: This paper presents a neoteric method for automatically generating Object Constraint Language (OCL) constraints from natural language (NL) statements. In Unified Modeling Language (UML) standards, OCL is used to check whether a model follows given process- or domain-specific heuristics and to improve the precision of model specifications. As constraints are key components in the skeleton of business or software models, one has to write constraints to semantically complement business or UML models. To support software practitioners in using OCL, we present a novel method whose aim is a framework in which the user of a UML tool can write constraints and pre/post conditions in a natural language such as English, and the framework converts such natural language expressions into equivalent OCL statements. The state of the art of two well-known technologies, Open Natural Language Processing (OpenNLP) and the Semantics of Business Vocabulary and Rules (SBVR), is used here. OpenNLP is used in a preprocessing phase to process the natural language statements; preprocessing includes sentence splitting, tokenization and part-of-speech (POS) tagging. In the second phase, the transformation phase, SBVR is used to automatically transform the preprocessed natural language statements into SBVR specifications. SBVR has a major role in this transformation as its syntax is close to natural language. The main aim of the research is to provide automated tool support for model processing tasks in UML models, via SBVR, by model-transforming the input SBVR specifications into OCL specifications as described in the Model Driven Architecture (MDA).
Keywords: Natural language processing; SBVR; UML; OCL.
Study of Skin flow motion pattern using photoplethysmogram
by Neelamshobha Nirala
Abstract: Microcirculatory dysfunction is related to many diseases and occurs long before their clinical manifestation. We used the wavelet transform to study the microcirculatory regulatory mechanism in three different groups (18 diabetic subjects, 8 with peripheral arterial disease (PAD) and 14 healthy controls) using the toe photoplethysmogram (PPG), from which 11 different features were derived. Compared with healthy subjects, we obtained a significant decrease in neurogenic (VNe: 286.41 vs. 125.29 (a.u.), p-value = 0.000), myogenic (VMe: 281.55 vs. 29.02, p-value = 0.000) and respiratory activity (VRe: 37.68 vs. 9.35, p-value = 0.022) in the diabetic group and a significant increase in cardiac activity (VCe: 19.69 vs. 33.89, p-value = 0.007) in the PAD group. Linear multiple regression analysis showed a significant negative association of age with myogenic activity (p-value = 0.002, r-value = 0.173) and of BMI with neurogenic activity (p-value = 0.036, r-value = 0.375). Our study showed that the PPG signal can be used as a non-invasive tool for studying vasomotion impairment in diabetic patients under resting conditions.
Keywords: Continuous Wavelet Transform; Laser Doppler flow meter; Photoplethysmogram; Microcirculation; Skin blood flow; Vasomotion.
SCALABLE INFORMATION RETRIEVAL SYSTEM IN SEMANTIC WEB BY QUERY EXPANSION AND ONTOLOGICAL BASED LSA RANKING SIMILARITY MEASUREMENT
by Uma Devi M, Meera Gandhi G
Abstract: In recent days, the Semantic Web plays a key role in intelligent information retrieval systems that provide actual semantic information from text documents, and several semantics-based research works have addressed Information Retrieval (IR). However, achieving scalable IR on the Semantic Web is challenging because of inaccurate, irrelevant and redundant information in large datasets. The semantic IR problem is addressed here by ontology-based semantic similarity measurement using natural language processing, concentrating mainly on ontological representation, query expansion, similarity measurement and ranking. Two novel algorithms for semantic similarity measurement, the Syntactic Correlation Coefficient (SCC) and Mapping-based K-Nearest Neighbor (M-KNN), are proposed to improve the accuracy of relevant results. The ontological constructs, with a Word Sense Disambiguation (WSD) algorithm for the document repository, improve the conceptual relationships and reduce ambiguities in the ontology. Ontology construction also improves scalability by intensively analysing the semantic relationships and dynamically reconstructing the ontology as documents are updated. The query expansion process, based on preprocessing steps and semantic analysis, reduces the vocabulary mismatch problem by including additional relevant terms. After semantic similarity analysis, ranking is done with Latent Semantic Analysis (LSA), which improves the retrieval results and reduces the complexity of relevance judgments. Finally, the performance of the system is analysed with respect to metrics such as processing time, F-measure, time complexity and space complexity, which are significant for improving the results and the overall performance of the proposed system.
Keywords: Information Retrieval; Semantic Similarity; Ontology; K-Nearest Neighbor; Latent Semantic Analysis; Word Sense Disambiguation; SPARQL; Singular Value Decomposition.
Special Issue on: CICBA-2017 Advances in Computational Intelligence
SPIDER based Out-of-Order Execution Scheme for Ht-MPSOC
by Karthick Ramachandran
Abstract: In this work, the influence of the dynamic task scheduling process is examined. Out-of-Order (OoO) execution shows remarkable promise for task-level parallelism in multiprocessor system-on-chip (MPSOC) designs, and superior performance can be attained by precisely mapping tasks onto the right processors. To obtain this performance, SPIDER-based task parallelism is presented in this work, and the software-related dynamic operations are illustrated on a heterogeneous MPSOC (Ht-MPSOC). SPIDER follows a cooperative population-based search modelled on the social activities of spider groups, merging local search approaches with global ones, and this implementation reduces the task management problem. The performance of the proposed design is compared with existing work in terms of power, area and speed.
Keywords: Multiprocessor SOC; SPIDER; task-level parallelism; Out-of-Order implementation.
Modified FPred-Apriori: Improving Function Prediction of Target Proteins from Essential Neighbors by Finding their Association with Relevant Functional Groups Using Apriori Algorithm
by Sovan Saha, Abhimanyu Prasad, Piyali Chatterjee, Subhadip Basu, Mita Nasipuri
Abstract: Drugs for various harmful diseases remain undiscovered partly because the functions of the proteins responsible for these diseases are still unannotated. Computational function annotation of unknown proteins is therefore a very challenging task that can be used to formulate biological hypotheses. Given the huge volume of protein sequences produced by high-throughput techniques, rapid annotation is feasible only with computational methods rather than costly, time-consuming, low-throughput wet-lab experiments. Here a novel prediction method, Modified FPred-Apriori, is proposed, which aims to annotate a protein from its unannotated level-1 and annotated level-2 neighbors with low computational overhead. This is accomplished by efficiently selecting active target proteins at three threshold levels (high, medium and low) through the application of closure on the adjacency matrix formed over the protein sets, followed by the computation of protein connectivity scores. Once the target proteins are selected, their corresponding neighborhood interaction networks are formed. Non-essential neighbors are pruned from each interaction network using a functional overlap score. The functional association of level-1 and level-2 neighbors in the pruned neighborhood graph is investigated simultaneously, and only frequently occurring relevant functional groups, found using the Apriori algorithm, are considered when annotating the target proteins from the functions of their annotated level-2 neighbors. Modified FPred-Apriori is an improved version of FPred-Apriori: the incorporation of closure, the protein connectivity score, the function overlap score and features such as reshuffling the neighborhood proteins of the target set through the mutual crossover of a genetic algorithm make it unique in comparison to its predecessor. It achieves an overall precision, recall and F-score of 0.887, 0.708 and 0.787 respectively, and a comprehensive comparison demonstrates that the proposed method outperforms the other competing methods.
Keywords: Protein-Protein Interaction Network; Apriori algorithm; Essential neighbor; Function Overlapping Score; Closeness Centrality Score; Target proteins.
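The annotation step above relies on standard Apriori mining of frequently co-occurring functional groups. A minimal, generic Apriori sketch (not the authors' Modified FPred-Apriori, and with no protein-specific scoring) illustrates the level-wise candidate generation and support pruning:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori: level-wise generation of frequent itemsets,
    pruning any candidate whose support falls below min_support
    or whose (k-1)-subsets are not all frequent."""
    transactions = [frozenset(t) for t in transactions]

    def support(c):
        return sum(1 for t in transactions if c <= t) / len(transactions)

    # Level 1: frequent single items.
    level = {frozenset([i]) for t in transactions for i in t}
    level = {c for c in level if support(c) >= min_support}
    frequent, k = {}, 1
    while level:
        frequent.update({c: support(c) for c in level})
        k += 1
        # Join step: combine frequent (k-1)-itemsets into k-candidates.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # Prune step: all (k-1)-subsets must already be frequent.
        level = {c for c in candidates
                 if all(frozenset(s) in frequent for s in combinations(c, k - 1))
                 and support(c) >= min_support}
    return frequent
```

With transactions standing in for the functional annotations of a protein's neighbors, the returned frequent itemsets are the "frequently occurring relevant functional groups" the abstract refers to.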
An Efficient Pattern Matching Approach Using Double Measures of Correlation and Rank Reduction
by Himanshu Jaiswal, Dakshina Ranjan Kisku
Abstract: This paper discusses an efficient pattern matching approach based on K-NN (K-nearest neighbor) rank order reduction and the Haar transform for detecting a pattern in a large scene image. To accomplish the task, the scene image is divided into a number of candidate windows, and both the input pattern and the candidate windows are characterized by the Haar transform. This characterization seeks to determine distinctive coefficients known as Haar Projection Values (HPVs). To obtain a more relevant and useful representation of the HPVs, a rectangle sum is computed, and the sum of absolute differences (SAD) correlation measure is then applied between the input pattern and the candidate windows. This increases the possibility of finding the object in the scene image before it is detected and localized. The proposed pattern matching approach is tested on the COIL-100 database, and the matching accuracy proves the efficacy of the proposed algorithm.
Keywords: Pattern Matching; Haar Transform; Sum of Absolute Difference; K-NN Approach.
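The SAD correlation step described in the abstract can be illustrated with a minimal exhaustive template-matching sketch. This toy code omits the Haar Projection Values and K-NN rank reduction of the actual method; it only shows how a SAD score ranks candidate windows:

```python
import numpy as np

def sad(window, pattern):
    """Sum of absolute differences between a candidate window and the pattern."""
    return int(np.abs(window.astype(int) - pattern.astype(int)).sum())

def match_pattern(scene, pattern):
    """Slide the pattern over every candidate window of the scene and
    return the top-left corner with the smallest SAD score."""
    ph, pw = pattern.shape
    best_score, best_pos = None, None
    for y in range(scene.shape[0] - ph + 1):
        for x in range(scene.shape[1] - pw + 1):
            score = sad(scene[y:y + ph, x:x + pw], pattern)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The paper's contribution is precisely to avoid this exhaustive scan: the Haar coefficients and rank reduction discard most candidate windows before the SAD comparison is made.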
Gain Parameter and Dropout Based Fine Tuning of Deep Networks
by M. Arif Wani, Saduf Afzal
Abstract: Dealing with high-dimensional data is one of the major current challenges for many classical classification algorithms. While shallow architectures are best suited to small datasets with many features, they can be relatively inefficient at modelling variation in high-dimensional datasets. Deep architectures such as deep neural networks can express more complex relationships among variables than shallower ones. Training a deep neural network can involve two learning phases: unsupervised pretraining and supervised fine tuning. Unsupervised pretraining is used to learn the initial parameter values of the deep network, while supervised fine tuning improves upon what has been learned in the pretraining stage. The backpropagation algorithm can be used for supervised fine tuning of deep neural networks. In the field of shallow neural networks, however, researchers have used a number of modifications of the backpropagation algorithm that improve the performance of the trained model. One such variant is backpropagation with a gain parameter. In this paper we evaluate the use of the backpropagation-with-gain-parameter algorithm for fine tuning deep networks. We further propose a modification in which this algorithm is integrated with the dropout technique, and evaluate its performance in fine tuning deep networks. The effectiveness of the fine tuning done by the proposed technique is also compared with other variants of the backpropagation algorithm on benchmark datasets. The experimental results show that fine tuning deep networks using the proposed technique yields the most promising results among all the studied methods on the tested datasets.
Keywords: Deep Learning; Deep Neural Networks; Fine Tuning; Dropout Technique; Gain Parameter and Dropout Technique.
Machine Transliteration Using SVM and HMM
by Soma Chatterjee, Kamal Sarkar
Abstract: Name transliteration plays an important role in developing automatic machine translation and cross-lingual information retrieval systems, because these systems cannot directly translate out-of-vocabulary (OOV) words. In this article, an SVM-based name transliteration approach is presented. This approach treats transliteration as a multi-class pattern classification problem, where the input is a source transliteration unit (a chunk of source graphemes) and the classes are the distinct transliteration units (chunks of target graphemes) in the target language. A study on using a Hidden Markov Model (HMM) to solve the machine transliteration problem, viewed as a sequence learning problem, is also presented. Bengali-to-English forward and backward name transliteration is considered in this study. Our proposed methods are compared with an existing transliteration method that uses a modified version of the joint source-channel model. The evaluation results show that our proposed SVM-based model gives the best results, and our experiments also reveal that the performance of the HMM-based system is comparable with that of the SVM-based system.
Keywords: Name Transliteration; Support Vector Machines; Hidden Markov Model; Modified Joint-Source Channel Model; Machine Transliteration; Machine Translation.
A New Image Binarization Technique for Segmentation of Text from Digital Images
by Ranjit Ghoshal, Sayan Das, Aditya Saha
Abstract: Text segmentation in digital images is requisite for many image analysis and interpretation tasks. In this article, we propose an effective binarization technique for text segmentation from digital images. This image binarization technique creates numerous text as well as non-text connected components, from which the possible text components must then be separated. To distinguish between text and non-text components, a set of features is considered. During training, we use two feature files, text and non-text, prepared by us. K-Nearest Neighbour (K-NN) and support vector machine (SVM) classifiers are employed for this two-class classification problem. The experiments are based on the ICDAR 2011 Born Digital dataset; our binarization technique is also applied to the publicly available Street View Text (SVT) dataset, DIBCO 2009 and the ICDAR 2011 Robust Reading Competition dataset. We achieve good performance both in binarization and in separating text from non-text.
Keywords: Binarization; Connected Component; Feature extraction; K-NN classifier; SVM classifier; Text segmentation.
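For readers unfamiliar with image binarization, the classic global Otsu method gives a feel for how a single threshold splits pixels into text-like and background classes. This baseline sketch is illustrative only and is not the binarization technique proposed in the paper:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold on an 8-bit grayscale image: pick the
    threshold t that maximizes the between-class variance of the
    two pixel populations (<= t and > t)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = (np.arange(256) * hist).sum()
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]           # weight of class 0 (pixels <= t)
        sum0 += t * hist[t]     # intensity sum of class 0
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Real text segmentation, as in the paper, goes further: the binary map is decomposed into connected components, which are then classified as text or non-text.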
Metaheuristics-based Routing Optimization, Balanced Workload Distribution and Security Strategy in IoT Environment
by Subhrapratim Nath, Subir Kumar Sarkar
Abstract: The advancement of Wireless Sensor Networks (WSNs) has given society new degrees of freedom, particularly with respect to connectivity. The emergence of the Internet of Things (IoT), together with the onset and development of evolutionary computing, addresses the various needs of growing urbanization. Applications built on cloud infrastructure are deployed for job segregation, rapid processing and utilization, but are limited in their ability to perform proper computation in high-density IoT environments. This paper strives to resolve some of the issues faced by the IoT paradigm with the help of a new metaheuristic hybrid data-routing algorithm, based on the Directed Artificial Bat Algorithm (DABA) and Particle Swarm Optimization (PSO), that optimizes connection issues such as real-time delay and network congestion. The approach also introduces clustering, together with fog computing, to distribute the network stress and to optimize bandwidth usage using the Dynamic Graph Partitioning algorithm. The paper further proposes an advanced metaheuristic, Constricted PSO (C-PSO), within the hybrid algorithm, and an effective security strategy to enhance the efficiency of the IoT environment.
Keywords: Internet of Things; Cloud computing; Fog Servers; Metaheuristics; Routing optimization; Directed Artificial Bat Algorithm; Particle Swarm Optimization; load balancing; Dynamic Graph Partitioning.
Special Issue on: Advanced Pattern Recognition and Soft Computing Paradigms
Energy-aware traffic engineering in IP networks using non-dominated sorting genetic II algorithm
by Raheleh Samadi, Mohammad Nassiri, Muharram Mansoorizadeh
Abstract: The wide spread of computer networks, along with increasing traffic demand throughout the Internet, has caused a dramatic increase in the energy consumed by networking devices and Internet infrastructure. Energy-aware traffic engineering is a promising approach towards green networking that achieves a trade-off between energy saving and network utilization in backbone networks. In this paper, we propose to use the non-dominated sorting genetic algorithm (NSGA-II) for energy-aware intra-domain traffic engineering. The algorithm seeks a trade-off between maximum link utilization (MLU) and energy conservation: for each pair of network topology and traffic matrix, NSGA-II computes the optimal set of links to put to sleep such that the resulting topology can still carry the traffic demand. We developed a simulator to evaluate the performance of our mechanism. The results of comprehensive evaluations show that our energy-aware TE approach improves energy conservation by 50% at the cost of a slight increase in maximum link utilization.
Keywords: Energy saving; Traffic engineering; Link utilization; Genetic algorithm; non-dominated sorting.
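The non-dominated sorting at the heart of NSGA-II can be shown in miniature. The sketch below extracts the Pareto front from candidate configurations scored on the two objectives named in the abstract, MLU and energy use (here proxied by the number of active links), both minimized; it is a toy illustration, not the authors' simulator:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples,
    e.g. (max_link_utilization, active_links)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

NSGA-II repeatedly applies this sorting (plus crowding-distance ranking) to drive a genetic population towards the front, so the operator is handed a spectrum of sleep-link configurations trading MLU against energy.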
Using a soft computing method for impedance modelling of Li-ion battery current
by Mohammad (Behdad) Jamshidi, Rouzbeh Farhadi, Morteza Jamshidi, Zahra Shamsi, Seyedfoadin Naseh
Abstract: Soft computing is highly regarded as a powerful tool for modelling complex systems. The adaptive neuro-fuzzy inference system (ANFIS) is one of the best soft computing methods for identifying and modelling non-linear systems. In this paper, the complex impedance behaviour of Li-ion batteries is studied using ANFIS, with the purpose of presenting an approach for the modelling and identification of electrochemical systems. The method can be refined to reach the most accurate model of the batteries. In the presented work, the complex current is modelled as the most important element of the battery in the impedance state. The modelling results show that this method produces acceptable output for impedance modelling of the batteries.
Keywords: Electrochemical; impedance modelling; li-ion battery; soft computing; complex systems; systems engineering; ANFIS.
Fuzzy Project Scheduling with Critical Path Including Risk and Resource Constraints Using Linear Programming
by Shahram Saeidi, Samira Alizadeh Aminloee
Abstract: Project scheduling is one of the important issues of project management; it has raised the interest of researchers, and several methods have been developed for solving this problem. While deterministic models are used in most studies, uncertainty is an intrinsic property of most real-world projects, which include activities with uncertain processing times and resource usages. In this paper, a fuzzy linear programming model is proposed for project scheduling considering risk and resource constraints under an uncertain environment, in which the duration of each activity and the amount of resources it uses are defined by fuzzy membership functions. The proposed model is simulated in MATLAB R2009a, and four test cases adopted from the literature are implemented. The computational results show that the proposed model decreases the critical path length by about 4% compared with similar methods.
Keywords: Fuzzy Project Scheduling; Critical Path; Linear Programming.
OMCM-CAS: Organizational Model and Coordination Mechanism for Self-adaptation and Self-organization in Collective Adaptive Systems
by Ali Farahani, Eslam Nazemi
Abstract: The complexity of information systems has grown in recent decades, and dealing with this complexity has become a hot research field in computer science. One solution for dealing with system complexity and environmental change is self-management, announced by IBM in 2001 under the term autonomic computing. In recent years, the use of self-managing approaches in distributed systems without central control has been trending. Self-organization is known for its use in distributed systems, whereas self-adaptation is mostly used in centralized systems. To bring these two concepts together, self-adaptive concepts are combined with self-organization, an interdisciplinary term with applications in several fields. Different usages and definitions have been provided for this term and for its relation to self-adaptive systems, and these differences have led to ambiguity in the domain. A Collective Adaptive System (CAS) is a large-scale distributed system with heterogeneous agents of different capabilities; this research field covers a large majority of distributed systems. Self-adaptiveness in a CAS can address problems concerning the coordination and cooperation of agents. This research compares self-organization with self-adaptation in a broader view and identifies their differences and correlations. It also considers the applicability of coordination, reflection and architectural approaches in both domains and presents a hybrid approach. Organizational models for self-organization in distributed environments are studied and analyzed, and a new combined organizational model is introduced based on the strengths and weaknesses of current models. Based on the presented organizational model, a coordination mechanism is introduced to facilitate cooperation in CAS. A case study (the NASA ANTS mission) is discussed and simulated, and the simulation results support the applicability and effectiveness of the presented organizational model and coordination mechanism.
Keywords: Self-organization; Self-adaptation; Intelligent distributed system; Decentralized control; Coordination Mechanism.
A New Density-based Clustering Scheme for Enhancing Energy Efficiency in Wireless Sensor Networks
by Mahdis Fathi, Mousa Nazari
Abstract: Research on wireless sensor networks (WSNs) is growing today due to their various uses in different fields. A wireless sensor network comprises many small nodes located in a target environment. Since these sensors are small, they run on non-rechargeable batteries and are energy-limited devices, so energy conservation is very important. Clustering the sensor nodes is an effective way to reduce the energy consumed by these networks. Accordingly, a novel scheme based on density-based clustering is presented in this article. In this new method, nodes located within proximity of each other are placed in one cluster and, unlike some algorithms, there is no need to specify the exact number of clusters in advance. Simulation outcomes indicate that the lifetime and total packet delivery of the proposed method are higher than those of other related methods.
Keywords: clustering; density-based; energy efficiency; wireless sensor networks; WSNs.
The performance comparison of improved continuous mixed P-norm and other adaptive algorithms in sparse system identification
by Afsaneh Akhbari, Aboozar Ghaffari
Abstract: One of the essential uses of adaptive filters is sparse system identification, in which the performance of classic adaptive filters is not acceptable. Several algorithms have been designed especially for sparse systems; we call them sparsity-aware algorithms. In this paper we study the performance of two newly presented adaptive algorithms in which a P-norm constraint is included in the cost function; the general name of these algorithms is continuous mixed P-norm (CMPN). Their performance is considered for the first time in sparse system identification. The performance of the l_0-norm LMS algorithm is also analyzed and compared with the proposed algorithms. The analyses are carried out using the steady-state and transient mean square deviation (MSD) criteria of adaptive algorithms. We hope this work will inspire researchers to look for other advanced algorithms for sparse systems.
Keywords: Adaptive algorithms; sparse; mixed P-norm; system identification.
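A common way to make LMS sparsity-aware is to add a zero-attraction term to the coefficient update. The sketch below illustrates this general idea on a toy sparse system; it is a generic zero-attracting LMS, not the CMPN algorithm studied in the paper, and the step sizes are illustrative choices:

```python
import numpy as np

def za_lms(x, d, taps, mu=0.01, rho=1e-4):
    """Zero-attracting LMS for sparse system identification:
    the standard LMS gradient step plus a small term that pulls
    coefficients toward zero (an l0/l1-style penalty approximation)."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # regressor, most recent sample first
        e = d[n] - w @ u                    # a priori estimation error
        w += mu * e * u - rho * np.sign(w)  # LMS step + zero attraction
    return w
```

On a sparse impulse response, the attraction term keeps the many inactive taps near zero, which is exactly where classic LMS wastes its excess mean square deviation.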
A Comparison of Data Mining Methods for Diagnosis and Prognosis of Heart Disease
by Mohammad Reza Afrash, Mehdi Khalili, Maral Sedigh Salekde
Abstract: Heart disease is a term that covers a range of disorders affecting the heart. Medical decisions are still mostly based on the knowledge and experience of doctors rather than on the knowledge hidden in the numerous patient records, so the process is exposed to human error, which may lead to late discovery of disease or affect the quality of the services offered to patients. Creating an automatic or semi-automatic detection system that combines knowledge and experience in the field of health care is therefore very useful and necessary. This paper compares data mining algorithms for the diagnosis and prognosis of heart disease as an automatic intelligent heart disease prediction system. We first use a dataset with 14 attributes, and then develop a prediction model using the Naïve Bayes classifier.
Keywords: data mining techniques; heart disease; classification; Weka.
Special Issue on: Advanced Intelligence and Computing Technology
Diminution of Power in Load/Store Queue for CAM and SRAM based Out-of-Order Processor
by Dhanalakshmi Gopal
Abstract: In the modern world, Out-of-Order superscalar processors are designed to achieve higher performance for non-numeric applications. Unfortunately, the improvement in performance has led to an increase in chip power and energy dissipation. The Load/Store queue is one of the major power-consuming units in the datapath during dynamic scheduling. It is designed to absorb bursts in cache accesses and to maintain the order of memory operations by keeping all in-flight memory instructions in program order. The proposed technique aims at reducing both dynamic and static power dissipation in the Load Queue/Store Queue (LQ/SQ) by using a power-gating technique and a priority encoder. This implementation achieves the least redesign and verification effort, the lowest possible design risk and the least hardware overhead, without a significant impact on performance.
Keywords: Load /Store Queue; static Power; dynamic power; CAM; SRAM.
Design of an Ultra-low Power, Low Complexity and Low Jitter PLL with Digitally Controlled Oscillator
by N.K. Anushkannan, H. Mangalam
Abstract: This paper proposes a new area-efficient, low-power and low-jitter phase-locked loop (PLL) architecture working off a low-frequency reference. The new PLL uses a new low-complexity locking procedure, which results in an ultra-low-power design. The main design challenge is to keep the area small while meeting the required low jitter. The proposed method uses only two up-down counters for finding the reference frequency. An efficient glitch removal filter and a new low-power digitally controlled oscillator (DCO) are also introduced; the proposed DCO achieves a reasonably high resolution of 1 ps. The PLL architecture was demonstrated for frequencies ranging from 100 to 400 MHz. The power consumption of the proposed PLL at 500 MHz is 820 µW.
Keywords: phase-locked loop; digitally controlled oscillator; low power; low complexity; low jitter; glitch removal.
Effective Content-based Pattern-predicted Text Mining Using the PSE Model
by Vijaya Kumar
Abstract: The Pattern Searching Engine (PSE) model provides a solution for applications that involve pattern-based mining and finding connections between patterns (e.g., emotions) and affective terms by categorizing the text in the content under examination. It discovers patterns of word use and connects documents that share similar patterns. The PSE model uses both theme-based and concept-based analysis, and can predict the expected pattern using a semantics-based natural search model that links words with similar meanings and recognizes uses of words with different meanings in an effective and fast manner.
Keywords: Text mining; pattern based; pattern prediction; concept based.
Special Issue on: Nature-inspired Computing and Its Applications
Improving the Search Efficiency of Differential Evolution Algorithm by Population Diversity Analysis and Adaptation of Mutation Step Sizes
by Dhanya M. Dhanalakshmi, M.S. Akhila, C.R. Vidhya, Gurusamy Jeyakumar
Abstract: The aim of this research work is to improve the efficiency of the Differential Evolution (DE) algorithm in the cases where its search is unsuccessful. Initially, this work discusses and compares different methods of measuring the population diversity of the DE algorithm, implemented for the DE/rand/1/bin variant on a set of benchmark functions. Based on this comparison, a method is identified that clearly demonstrates the difference in the evolution of population diversity between successful and unsuccessful DE searches. The work is then extended to detect unsuccessful searches in advance using the evolution of the population diversity measured by the identified method. On detecting a search as unsuccessful, a parameter adaptation strategy that adapts the mutation step size (F) is added to the DE algorithm to recover from it. The improved DE algorithm, which adapts the F value based on the population diversity, is compared with its classical version and found to outperform it. The comparison results are reported in this paper.
Keywords: Differential Evolution; Premature Convergence; Stagnation; Mutation Step Size; Parameter Adaptation; Population Diversity; Population Variance.
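The diversity-triggered adaptation of F can be sketched on DE/rand/1/bin. The diversity measure, trigger threshold and adaptation rule below are illustrative assumptions, not the specific rule identified in the paper:

```python
import numpy as np

def diversity(pop):
    """Population diversity as the mean coordinate-wise variance."""
    return pop.var(axis=0).mean()

def de_rand_1_bin(f, bounds, pop_size=20, gens=200, F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin with a simple diversity-based adaptation of the
    mutation step size F: when diversity collapses (stagnation),
    F is enlarged to re-diversify the search."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        if diversity(pop) < 1e-6:        # stagnation detected
            F = min(0.9, F * 1.5)        # enlarge mutation step size
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])      # DE/rand/1 mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # force one mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()
```

The only change from classical DE/rand/1/bin is the two-line stagnation check at the top of each generation, which is where the paper's diversity-based detection would plug in.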
Towards Real-time Recognition of Activities in Smart Homes
by Sook-Ling Chua, Lee Kien Foo, Saed Juboor
Abstract: Many supervised methods have been proposed to infer the activities of inhabitants from a variety of sensors deployed in the home. Current activity recognition systems either assume that the sensor stream has been pre-segmented or use a sliding window for activity segmentation. This makes real-time activity recognition difficult due to the presence of temporal gaps between successive sensor activations. In this paper, we propose a method based on a set of hidden Markov models that simultaneously solves the problems of activity segmentation and recognition on streaming sensor data without relying on any sliding-window method. We demonstrate our algorithm on sensor data obtained from two publicly available smart home datasets.
Keywords: Real-time; Activity Recognition; Activity Segmentation; Streaming Data; Hidden Markov Model.
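The segmentation-free idea can be illustrated with recursive HMM forward filtering, which updates an activity belief after every sensor event instead of waiting for a segmented window. The transition matrix, emission matrix and two-activity setup below are toy assumptions, not a trained model from the paper:

```python
import numpy as np

def online_filter(A, B, pi, stream):
    """Recursive HMM forward filtering: after each observed sensor event,
    emit the most probable current hidden activity.
    A: state transition matrix, B[state, obs]: emission probabilities,
    pi: initial state distribution, stream: iterable of observation indices."""
    alpha = pi * B[:, stream[0]]
    alpha /= alpha.sum()                 # normalized belief over activities
    labels = [int(alpha.argmax())]
    for obs in stream[1:]:
        alpha = (alpha @ A) * B[:, obs]  # predict, then weight by evidence
        alpha /= alpha.sum()
        labels.append(int(alpha.argmax()))
    return labels
```

Because the belief is carried across events, activity boundaries emerge where the argmax switches state, so no explicit pre-segmentation or sliding window is needed.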
Supervised Approach for Object Identification using Speeded Up Robust Features
by Pooja Agrawal, Teena Sharma, Nishchal K. Verma
Abstract: This paper proposes a novel vision-based approach for real-time object counting that uses textural information. Speeded Up Robust Features (SURF) are used to extract the textural information from the image. First, the approach selects stable SURF features from a prototype image of the object of interest. These features are matched with the SURF features of a scene image captured through the vision interface. Feature Grid Vectors (FGVs) and Feature Grid Clusters (FGCs) are formed for the matched SURF features in the scene to indicate the presence of the object, and Support Vector Machine (SVM) learning is used to identify true instances of the object. A parameter tuning approach is used to find optimized heuristics for higher accuracy and less computation. The proposed approach performs well irrespective of illumination, rotation and scale. A run-time environment is also developed to provide the real-time status of the object count.
Keywords: Object identification; object counting; SURF; SVM classifier; feature grid vector; feature grid cluster.
Optimal Design of QFT Controller for Pneumatic Servo Actuator System using Multi-objective Genetic Algorithm
by Nitish Katal, Shiv Narayan
Abstract: Loop shaping is the principal step in synthesizing robust controllers based on Quantitative Feedback Theory (QFT); the controller assures performance robustness in the presence of plant uncertainties. This paper explores a template- and bounds-free approach for the automated synthesis of a low-order, fixed-structure QFT controller for a highly uncertain pneumatic servo actuator system. The loop-shaping problem is posed as a multi-objective optimization problem and solved using the multi-objective variant of the genetic algorithm. At the end of the design process, a set of Pareto optimal solutions (POS) is obtained, and the use of level diagrams is explored to aid the decision maker in choosing an ideal solution from the POS. The simulation of the results and the time- and frequency-domain analyses have been carried out in Matlab, and the results clearly show that the designed QFT controller offers robust behaviour over a range of plant parametric uncertainty.
Keywords: Quantitative Feedback Theory; Multi-objective Genetic Algorithm; Automatic Loop Shaping; Robust Stability; Level Diagrams.
Hybrid BATGSA: A Metaheuristic Model for Classification of Breast Cancer Data
by Umme Salma M, Doreswamy H
Abstract: Nature-inspired algorithms have a vast range of applications. One such application is in the field of medical data mining, where the major focus is on building models for the classification and prediction of various diseases. Breast cancer has grabbed the interest of numerous researchers because it is a major killer disease, killing millions of women across the globe. In this paper, we propose a hybrid diagnostic model that is a fusion of the Bat Algorithm (BAT), the Gravitational Search Algorithm (GSA) and a feed-forward neural network (FNN). The potential of the FNN and the advantages of nature-inspired algorithms are exploited to build a hybrid model for the classification of breast cancer data. The proposed model consists of two modules: a training module, in which the data is trained using a feed-forward neural network, and an error-minimizing module built using the BAT and GSA metaheuristic algorithms. The hybrid model minimizes the error, thus producing better classification results. The accuracy obtained on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset is 94.28% for training and 92.10% for testing.
Keywords: Breast Cancer; Bat algorithm; Gravitational Search Algorithm; Classification; Metaheuristic.