International Journal of Information and Communication Technology (107 papers in press)
Mobile entities in wireless sensor networks: comparative study and performance analysis
by Regis Anne, Elijah Blessing Rajsingh
Abstract: A Wireless Sensor Network (WSN) is a collection of intelligent sensors that can communicate to form a self-organizing network and can function without human intervention for long periods of time. Traditionally, WSNs were static, but due to the necessities of today's applications, there has been a paradigm shift from static WSNs to dynamic mobile WSNs. This dynamism can be realized by adding mobility to a static WSN. Mobility can be incorporated by introducing extra network elements called Mobile Entities (MEs), such as Mobile Sinks (MSs), Mobile Cluster Heads (MCHs), Mobile Relays (MRs), Mobile Sensor Nodes (MSNs), Mobile Agents (MAs) and Mobile Chargers (MCs). Adding MEs to a WSN has attracted much research interest because it significantly improves the capability and functionality of the WSN by making it resilient to failures, easing data collection, increasing energy efficiency, enhancing connectivity, improving coverage and prolonging network lifetime. Thus, the full potential of MEs can be harnessed to yield maximum benefits in static WSNs. In this work, we first analyze the parameters that affect the lifetime when an ME is added to a static network. Second, we present a survey and performance analysis of each of the MEs discussed in the literature. Finally, this paper concludes with simulations and summaries of each of the MEs. This paper intends to spark new interest and development in mobile-assisted WSNs.
Keywords: Mobility, Mobile Sink, Mobile Cluster Head, Mobile Relay, Mobile Sensor Node, Mobile Agent, Mobile Charger, Network Lifetime, Theory, Performance Metrics.
Modeling and Simulation of Wireless Link for Vehicular Mobiles in HAPS CDMA System
by Mingxiang Guan
Abstract: An information system formed by HAPs (High Altitude Platforms) will be a new-generation system for wireless communications. A HAPS (HAP Station) communication system combines the advantages of both terrestrial and satellite communication systems and avoids, to different extents, their disadvantages. Link simulation scenarios and results are given for vehicular mobiles in 3.84 Mcps TDD systems in three different kinds of channels: 3GPP Case 3, Vehicular A, and Rician. It is shown that the maximum performance degradation due to high mobility in the Case 3 and Vehicular A channels is about 3 dB, while less than 1 dB of performance loss is observed in the Rician channel. Simulations show that no significant performance impact is introduced by high-speed mobility. Even in the worst channel conditions, only an extra 3 dB margin is required to deliver the high-speed data service.
Keywords: HAPS, CDMA, vehicular mobile, wireless link, channel
Cooperative Caching Architecture for Mobile Ad hoc Networks
by Preetha Joy, K. Poulose Jacob
Abstract: Cooperative caching is widely used in mobile ad hoc networks to efficiently reduce data access cost. In this paper, we propose a new cooperative caching algorithm, COPN (Cooperative New), for efficient cache discovery and replacement. Existing cache discovery protocols pay little attention to the message overhead occurring in the process of cache discovery. Our proposed algorithm explores a zone-based approach to optimize cache discovery. We also propose a cache replacement policy that aims at increasing the cache hit ratio. The simulation results show that, compared with representative broadcast-based and cluster-based algorithms, our proposed scheme can significantly increase the cache hit ratio and reduce both message cost and access delay.
Keywords: cooperative caching; cache discovery; cache replacement; cache consistency.
Meaning Negotiation based on merged individual Context Ontology and Part of semantic Web Ontology
by Nabil Keskes
Abstract: The Pragmatic Web is a major step in the advance of Web-based technology. This paper discusses the meaning negotiation process, which is a key factor in the construction of the Pragmatic Web. We mainly focus on whether or not the merging of ontologies affects the meaning negotiation process. We propose merging part of a Semantic Web ontology with an Individual Context Ontology for the Pragmatic Web. We then study to what extent this merging may affect the meaning negotiation process. This is shown through a case study that uses ontology to capture the semantics of context knowledge in communication and meaning negotiation processes. Furthermore, experiments demonstrate the relevance of the proposed method.
Keywords: Pragmatic Web; Semantic Web; Meaning Negotiation; Ontology Merging; Individual Context Ontology; Common Community Context Ontology.
Enhancement of Web Proxy Caching Using Discriminative Multinomial Naive Bayes Classifier
by Julian Benadit, Sagayaraj Francis, Muruganantham Udhayasuriyan
Abstract: The Discriminative Multinomial Naïve Bayes
Keywords: Web caching; Proxy server; Cache replacement; Classification; Discriminative Multinomial Naive Bayes classifier.
Routing Protocols in Wireless Mesh Networks: A Survey
by Jamal N. Al-Karaki, Ghada Al-Mashaqbeh
Abstract: Wireless Mesh Networks (WMNs) are one of the emerging wireless networking technologies that can deliver self-configuring, scalable, flexible, and market-viable networks. Routing protocols play a crucial role in the functionality of WMNs since their performance affects the network throughput, connectivity, and quality of service (QoS) level, among others. Initially, WMNs adopted many routing protocols that were originally designed for other types of networks, e.g., Mobile Ad hoc Networks (MANETs). However, due to the specific characteristics and architecture of WMNs, many new routing protocols with various metrics were specifically developed for WMNs over the past few years. In this paper, we survey these routing protocols with emphasis on their routing metrics, operation, and design considerations. Extensive summaries and many comparisons among different categories of these protocols are discussed throughout this paper. We also highlight the main issues that affect the general design of both protocols and routing metrics in WMNs. The paper concludes with future directions in this vital field.
Keywords: Wireless Mesh Networks; Routing Protocols; Design Challenges; Survey.
PTS using a novel constellation extension scheme for PAPR reduction of OFDM signals without side information
by Alok Joshi, Davinder Saini
Abstract: A high peak-to-average power ratio (PAPR) is a major hindrance to the performance of orthogonal frequency division multiplexing (OFDM) systems. When an OFDM signal with high PAPR is passed through a high power amplifier (HPA), a large back-off is required to operate in the linear region; this demands a large dynamic range, and such HPAs are costly and complex, with poor efficiency. Partial Transmit Sequence (PTS) is one of the most promising techniques for PAPR reduction. The major drawback of this technique is that it requires an exhaustive search over all allowable phase factors to find the optimal phase factor with the lowest PAPR, and this information must be sent to the receiver as side information (SI) for decoding. Sending SI puts an additional burden on transmission bandwidth, thus reducing bandwidth efficiency. In this paper, a novel constellation-extension-based PTS is proposed which uses a novel octagonal geometry (OM-PTS) for mapping. This scheme does not require transmission of any side information for decoding at the receiver. Simulations show that the proposed scheme provides the same PAPR performance as conventional PTS, but without the need for SI transmission.
Keywords: OFDM; PAPR; C-PTS; Side information; OM-PTS.
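For background, the PAPR this abstract targets is the ratio of the peak instantaneous power of the time-domain OFDM symbol to its average power. The sketch below is illustrative only: it implements neither OM-PTS nor conventional PTS, and the 64-subcarrier QPSK setup is an assumption chosen for the example.

```python
import cmath
import math
import random

def idft(X):
    """Inverse DFT: time-domain OFDM symbol from N subcarrier symbols."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """PAPR in dB: peak instantaneous power over mean power."""
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(0)
# Hypothetical setup: random QPSK symbols on 64 subcarriers
qpsk = [complex(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(64)]
symbol = idft(qpsk)
print("PAPR = %.2f dB" % papr_db(symbol))
```

Conventional PTS would partition the subcarriers into sub-blocks and search phase rotations that minimize this metric; the scheme above in the abstract additionally avoids telling the receiver which rotation was chosen.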
Balanced Energy Routing Protocol for Clustered Wireless Sensor Network
by Meenakshi Tripathi, Manoj Singh Gaur, Vijay Laxmi
Abstract: In a Wireless Sensor Network (WSN), hundreds of tiny sensors with limited resources are deployed to sense information from the field. Transfer of the gathered information from the sensing field to the base station must be done efficiently to sustain the network longer. Clustering of sensor nodes is one way to achieve this goal. This paper introduces an energy-efficient clustered routing protocol based on LEACH-C for WSNs. In LEACH-C (Low Energy Adaptive Clustering Hierarchy-Centralized), the cluster heads are selected by the base station randomly. This paper introduces a novel cluster-based routing protocol in which the base station finds the highest-energy node in each cluster and marks it as the cluster head for the current round. Thus, in the proposed system, the energy consumption of the various nodes becomes more uniform compared to LEACH-C. The simulation results indicate that our proposed method leads to efficient transmission of data packets with less energy and therefore increases network longevity compared to LEACH-C and LEACH.
Keywords: WSN; Cluster; LEACH-C; Energy Efficiency; NS2.
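The cluster-head rule described in the abstract, where the base station picks the highest-energy node in each cluster, can be sketched in a few lines. The `(node_id, residual_energy)` representation below is a hypothetical format chosen for illustration, not the paper's data model.

```python
def select_cluster_heads(clusters):
    """For each cluster, pick the node with the highest residual energy
    as cluster head for the current round."""
    return {cid: max(nodes, key=lambda n: n[1])[0]
            for cid, nodes in clusters.items()}

# Hypothetical example: two clusters of (node_id, residual_energy) pairs
clusters = {
    "A": [(1, 0.42), (2, 0.91), (3, 0.55)],
    "B": [(4, 0.30), (5, 0.28)],
}
print(select_cluster_heads(clusters))  # {'A': 2, 'B': 4}
```

Re-running the selection each round is what evens out energy consumption compared with LEACH-C's random choice.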
Cluster Analysis for User Segmentation in e-Government Service Domain
by Leo Iaquinta, M. Alessandra Torsello
Abstract: E-Government (e-Gov) is becoming more attentive to providing personalized services to citizens so that they can benefit from better services with less time and effort. To develop user-centred services, a crucial activity is user segmentation, which consists of mining the needs and preferences of users by identifying homogeneous groups of users, also known as user segments, sharing similar characteristics. This work provides a comprehensive analysis of studies focusing on user segmentation. Moreover, it proposes an approach based on cluster analysis for deriving and characterizing segments of users experiencing services in the e-Gov domain. Applications of the proposed approach to two real-world case studies are described in order to show its suitability for deriving useful user segments.
Keywords: cluster analysis; user segmentation; partitional around medoid clustering; segment profiles; survey study; e-Gov service domain; user-centred services; segmentation variables; demographic segmentation; geographic segmentation.
Configurable Quasi-Cyclic LDPC Decoder for Multiple Code Lengths of WiMAX
by G. Amirtha Gowri Gopalakrishnan, S. Subha Rani Sundaresan
Abstract: In wireless communication, the code parameters should have great flexibility to adapt to varying channel conditions. Hence, there is a need for configurable decoders capable of meeting various service requirements and interference conditions. Therefore, a reconfigurable LDPC decoder has been proposed to support multiple code lengths (19 different code lengths) with code rate
Keywords: Low Density Parity Check codes; Min-Sum decoding algorithm; configurable data router; IEEE 802.16e WiMAX standard; Field Programmable Gate Arrays.
Analysis of Energy Preservation Technique based on ACR-LEACH and Cyclic LEACH
by Anita Sofia, Arokiasamy S, Ranjit Jeba Thangaiah
Abstract: Wireless Sensor Networks (WSNs) are an emerging field that has attracted growing attention from both the research community and real-world users. Since sensor nodes are typically power-constrained devices, reducing energy consumption is critical, and doing so can prolong the network lifespan to sensible durations. This work aims to contribute to the development of an energy preservation technique based on an Active Clustering Rule for LEACH (ACR-LEACH) and a cyclic model called Cyclic LEACH. The proposed ACR-LEACH performs the choice of the Cluster Head (CH), detects changing patterns, and disregards redundant information. Cyclic LEACH detects idle nodes and switches them off, which improves the lifespan of the network. The experimental results show that the proposed ACR-LEACH and Cyclic LEACH protocols yield a better packet delivery ratio and throughput with lower energy consumption and overhead than the existing LEACH and R-HEED protocols.
Keywords: Active Clustering Rule (ACR) - LEACH; Cyclic model; Cluster Head (CH); Cross layer approach.
E-Governance Service Deployment: An Empirical Study with Structural Equation Model
by Prasant Kumar Patra, Arun Kumar Ray, Ramkrushna Padhy
Abstract: The deployment of e-governance in public organizations promises to connect government better with citizens, boost public participation in government decision making, and improve the efficiency of service delivery and information dissemination. While each of these outcomes is important for both the government and its citizens, we know little about how various factors mediate the effectiveness of ICTs in producing these outcomes. This study tries to find the factors responsible for the successful implementation of e-governance. Four factors, related to technological, organizational, legal and regulatory, and implementation aspects, are measured using latent variables, and the causal relationships between them are captured in a hypothetical model. The hypotheses conceptualized in the study are tested using a structural equation model and the relationships are established. The study yields some interesting findings. The results show that the technological and organizational aspects are dominant in influencing the implementation of electronic governance in the state of Odisha. The research findings show that legal and regulatory policies are dictated by changes in technology; however, they have little effect on the implementation of electronic governance.
Keywords: electronic governance; structural equation model; factor analysis; implementation.
A rough set based expert system for diagnosing information system communication networks
by Aaron Don Africa
Abstract: In the diagnostics of Information System Communication Networks, delays in troubleshooting happen when information is incomplete. Diagnostics is a straightforward process if the information is complete enough to deduce the possible causes, but there are cases when the information is insufficient to do so. In such cases, it is possible to design an Expert System algorithm that can create diagnostic rules even if the inputted information is incomplete, using Rough Set Theory. The design of this Expert System algorithm was done by developing a theorem to help in formulating the data structures, and the data structures satisfy the conditions of the theorem. Thus, the Expert System can output the correct possible cause even if the inputted symptoms are incomplete. The Expert System algorithm created diagnostic rules, and the rules were verified with 100% validity using empirical testing; the possible causes output by the Expert System were verified by comparing them with historical data, which gave a 90% score. This research developed an Expert System algorithm that can handle incomplete information.
Keywords: Rough Set Theory; Expert Systems; Information Technology; Information Systems.
Mimir: a Term-Distributed Retrieval System for Secret Documents
by Guoqiang Gao
Abstract: In order to access sensitive documents shared over government, army and enterprise intranets, users rely on an indexing facility where they can quickly locate relevant documents they are allowed to access, (1) without leaking information about the remaining documents, and (2) with a balanced load on the index servers. To address this problem, we propose Mimir, a distributed cipher retrieval system for sensitive documents. Mimir constructs distributed indexes based on load-balanced term distribution for better search efficiency and balanced query load. Mimir utilizes encryption with random keys and partial key updates to protect sensitive data and improve query efficiency. Our experiments show that Mimir can effectively protect secret data and answer queries nearly as fast as an ordinary inverted index.
Keywords: ciphertext retrieval system; index; search; term distribution; encryption.
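The term-distributed layout the abstract describes can be pictured with a toy plaintext sketch: each term's full posting list lives on exactly one server, so a query touches a single server. The hash-based assignment and the example documents below are assumptions for illustration; the paper balances load more carefully and encrypts the postings.

```python
from collections import defaultdict

def build_term_distributed_index(docs, num_servers):
    """Toy term-partitioned inverted index: each term's posting list
    is placed on one server, chosen here by a simple hash."""
    servers = [defaultdict(set) for _ in range(num_servers)]
    for doc_id, text in docs.items():
        for term in set(text.split()):
            servers[hash(term) % num_servers][term].add(doc_id)
    return servers

def query(servers, term):
    """Route the query to the single server that owns the term."""
    return sorted(servers[hash(term) % len(servers)].get(term, set()))

docs = {1: "secret report alpha", 2: "alpha budget", 3: "secret budget"}
servers = build_term_distributed_index(docs, 3)
print(query(servers, "secret"))  # [1, 3]
```

Because a term maps to one server, answering a query needs no cross-server merge, which is the efficiency property Mimir preserves under encryption.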
Research on eye localization method based on adaptive correlation filter
by Jin Wu
Abstract: Eye localization is one of the most important steps in face recognition and visual tracking systems. This article combines a correlation filter with integral projection to detect eye positions precisely, and puts forward two improvements in the training and test phases. Firstly, the adaptive synthetic correlation filters are rotated within the angle range of -0.2, -0.1, 0, 0.1, 0.2, and the location corresponding to the maximum grey value serves as an initial anchor point. Secondly, within the anchor point 5
Keywords: Eye Localization; Correlation Filter; Grey Integral Projection; Adaptive.
An Adaptive Event-based System for Anytime, Anywhere, Awareness Services in Online Teamworks
by Vladi Kolici, Fatos Xhafa, Santi Caballé, Leonard Barolli
Abstract: The fast development of mobile technologies is drastically changing the way people work, learn, collaborate and socialize. One important activity that has emerged and is being increasingly consolidated is online learning through virtual campuses. While most online learning services are at present offered through web-based platforms, due to the ever-increasing use of smart devices such as smartphones and tablets, researchers and developers are paying attention to exploiting the advantages of mobile systems to support online learning. Specifically, the implementation of the A3 paradigm (Anytime, Anywhere, Awareness), that is, notifying users about ongoing activity in their online workspace, provides various advantages to online learners organized in online teams. In this paper we present the requirement analysis, the building blocks of the architecture for an efficient event-based system, and a prototype implementation of the A3 paradigm that adaptively supports online collaborative activity.
Keywords: online learning; collaborative teamwork; events; mobile computing; awareness.
A new approach to Supporting Runtime Decision Making in mobile OLAP
by Djenni.Rezoug Nachida
Abstract: A mobile OLAP (On-Line Analytical Processing) system offers decision makers real-time, relevant analyses anywhere, at any time. In order to generate them, a mobile OLAP system should not only use user preferences, but also exploit information about the contextual situation (meeting, business travel, office work, or home work) in which analyses are done. For instance, when generating analyses, a mobile OLAP system could take into account whether the decision maker's contextual situation is business travel (using a device with limited resources) or office work (using a device with high capacity). To this end, we propose in this paper a mobile context-aware recommender system (MCARS for short) based on both user preferences and context. Unfortunately, the limited resources available to the MCARS make reducing context acquisition a necessity. To achieve this goal, our system: (i) uses a learning approach which generates relevant contextual factors (contextual factors shown to be important); (ii) deduces a relationship between a context and the user's preferences (called contextual preferences); and finally (iii) recommends a set of analyses based on the user's contextual preferences.
Keywords: relevant Context; Context-aware recommender system; knowledge-based recommender system; K2.
Parallel Topic Model and Its Application on Document Clustering
by Lidong Wang, Yuhuai Wang, Shihua Cao, Yun Zhang, Kang An
Abstract: This paper presents PLDACOL, our parallel implementation of the LDACOL model, to effectively cluster large-scale documents. Since a phrase contains more semantic information than the sum of its individual words, we use the topic model LDACOL for phrase discovery and use Gibbs sampling for parameter inference. PLDACOL overcomes the high computation time of parameter inference through a distributed computing framework based on Hadoop. We show that PLDACOL can be applied to the clustering of large-scale document collections of different sizes and produces significant improvements in both effectiveness and efficiency compared with related traditional algorithms.
Keywords: document clustering; topic model; parallel computing; Hadoop; LDACOL model.
Finding the High Probabilistic Potential Fishing Zone by Accelerated SVM Classification
by Andrews Samraj, B.S. Varun Babu, P. Subash, M.R. Swaminathan
Abstract: Potential Fishing Zone (PFZ) advisories play an important role in forecasting the spots for precise and optimized fishing. The PFZ advisories obtained from the Indian National Centre for Ocean Information Services (INCOIS) from November 2003 to December 2011 for the Keelakarai coast were analyzed to identify effective, high-probability fishing spot locations. The original data for Keelakarai covered four directions: South, Southeast, Southwest and East. The INCOIS data were converted into FFT features to classify the areas using a support vector machine (SVM) based on direction, and the support vectors were plotted to distinguish locations of interest. The identified support vectors for the direction combinations South-Southeast, South-Southwest, South-East, Southeast-Southwest, Southeast-East and Southwest-East help greatly in the classification of PFZ groups. The SVM is used to plot the support vectors that differentiate the direction zones. We have accelerated the process by fine-tuning the box constraint value, which is used to fix the support vectors. In this work, the high-probability potential fishing zones are effectively cropped to obtain the assured fishing zones.
Keywords: PFZ; Fishing advisories; Fourier transform; Support Vector Machines.
An adaptive denoising algorithm for chaotic signals based on complete ensemble empirical mode decomposition
by Mengjiao Wang, Yeiyu Yu, Jiuchao Feng
Abstract: In this paper, a new algorithm to denoise chaotic signals based on complete ensemble empirical mode decomposition (CEEMD) is proposed. In our scheme, the CEEMD technique is first used to decompose the noisy chaotic signal into so-called intrinsic mode functions (IMFs). A novel criterion is proposed to determine which modes are used to reconstruct the denoised signal. In the experiments, we compare the proposed method with algorithms based on empirical mode decomposition (EMD) and on ensemble empirical mode decomposition (EEMD). The experimental results show that the proposed method performs better than both the EMD-based and the EEMD-based methods.
Keywords: chaos; denoising; complete ensemble empirical mode decomposition; adaptive filtering.
Fruit fly image segmentation and species determination algorithm
by Pei XU
Abstract: For large-scale, real-time and accurate monitoring of citrus orchard fruit flies, an algorithm was developed to identify three kinds of mature fruit flies, Bactrocera dorsalis, B. cucurbitae and B. tau (Diptera: Tephritidae), based on machine vision technology. A fruit fly sample image library was produced and the body characteristics for distinguishing the three different kinds of flies were analyzed. Within the identification algorithm, segmentation of the image target and background regions was conducted in the YCbCr color space with long-axis searching of the scutellum area. Image registration was achieved using the Hough transform, and a BP neural network fruit fly identification model was established to distinguish the fruit fly in each image. Every algorithm in this study was implemented in Matlab. The image registration algorithm was applied to 120 images, each of which contains a single specimen of B. dorsalis, B. cucurbitae or B. tau from the self-captured sample library. The BP neural network model was applied to the identification of the fruit flies. Experimental results indicated that: (1) the yellow scutellum at the waist and abdomen of the three kinds of fruit fly contained the largest yellow area within the body, so it could be used as the long-axis search area during image registration. The vertical yellow lines in the middle of the back torso of both B. cucurbitae and B. tau could be identified to distinguish these two types from B. dorsalis, and the ratio of the area of these lines to the whole body could be used to distinguish among the three types. (2) The image registration accuracy was 100%, with an average registration time of 0.4 seconds. (3) The BP neural network model accuracy was 100%, indicating satisfactory identification.
Further research will focus on the improvement of the self-adaptive aspect of the threshold selection, since the threshold selection for target region segmentation was influenced by the experimental environment.
Keywords: machine vision; fruit fly; citrus orchard; Hough transform; precision agriculture.
Boosting Prediction Performance on Imbalanced Dataset
by Masoumeh Zareapoor, Pourya Shamsolmoali
Abstract: Mining from imbalanced data is an important problem in algorithm design and performance evaluation. When a dataset is imbalanced, a classification technique does not consider both classes equally. It is obvious that standard classifiers are not suitable for dealing with imbalanced data, since they are likely to classify all instances into the majority class, which is the less important class. Additionally, some performance measures, like accuracy, which is known to be a biased metric in the case of imbalanced data, do not perform well when the data is imbalanced. In this paper we first apply various techniques commonly used to handle class imbalance before giving the data to the classifiers. However, the performance of the classifiers is found to degrade because of the highly imbalanced nature of the datasets. Hence, we propose an integrated sampling technique with an AdaBoost ensemble to improve the prediction performance. Meanwhile, through an empirical study, we show the more appropriate performance measures for mining imbalanced datasets.
Keywords: Imbalanced dataset; classification; re-sampling; ensemble.
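One of the simple re-sampling techniques the abstract alludes to applying before boosting is random oversampling of the minority class. The sketch below is a minimal illustration of that idea, not the paper's integrated sampler; the fraud-vs-normal labels are a made-up example.

```python
import random

def random_oversample(X, y, minority_label, seed=0):
    """Duplicate minority-class samples at random until both classes
    have the same number of samples."""
    rng = random.Random(seed)
    minority = [(x, l) for x, l in zip(X, y) if l == minority_label]
    majority = [(x, l) for x, l in zip(X, y) if l != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    combined = minority + majority + extra
    rng.shuffle(combined)
    return [x for x, _ in combined], [l for _, l in combined]

X = [[0], [1], [2], [3], [4], [5]]
y = [1, 0, 0, 0, 0, 0]          # one rare positive case vs five negatives
Xb, yb = random_oversample(X, y, minority_label=1)
print(yb.count(1), yb.count(0))  # 5 5
```

The balanced set can then be handed to a boosted ensemble; accuracy alone would still be misleading here, which is why the abstract also argues for alternative performance measures.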
Convergence Analysis of Adaptive MSFs used for Acoustic Echo Cancellation
by Alaka Barik, Mihir N. Mohanty, Kunal Das
Abstract: As technology advances day by day, new challenges arise accordingly. In the modern era of communication, echo cancellation is a major problem. Cancellation of the acoustic echo arising from loudspeaker-microphone coupling is an essential as well as challenging task. At present, adaptive filters are mostly used to cancel the echo, and many researchers are still working in this area. The LMS algorithm is used extensively in communication networks to correct the echoes created by line impedance mismatches and is useful for compensating for imperfections in telephony networks. This paper shows how the LMS algorithm with Multiple Sub-Filters (MSFs) is useful for solving echo problems. A large number of coefficients is required for acoustic echo cancellation over a long path, but a lengthy filter results in slow convergence. This paper addresses this tradeoff using the LMS algorithm for echo cancellation. The proposed algorithm is based upon decomposing a long adaptive filter into smaller sub-filters. Different types of errors have been analyzed, namely the mean error, the common error and a combination of both. A comparative analysis between the mean error and the common error shows the performance in terms of convergence. Simulation results show that the decomposed algorithm performs better than the long adaptive filter.
Keywords: Echo Cancellation; Adaptive Algorithm; Convergence; Mean Error; Common Error; Composite Error.
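For background, the basic LMS update that the abstract builds on adapts the filter weights against the instantaneous estimation error. The sketch below uses a single short filter to identify an unknown echo path; it does not implement the paper's multiple sub-filter decomposition, and the 3-tap echo path is a made-up example.

```python
import random

def lms(x, d, num_taps, mu):
    """Standard LMS: adapt weights w so the filter output tracks the
    desired signal d; returns the final weights and the error trace."""
    w = [0.0] * num_taps
    errors = []
    for n in range(num_taps - 1, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]   # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                                # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        errors.append(e)
    return w, errors

random.seed(1)
echo_path = [0.6, 0.3, 0.1]   # hypothetical unknown echo path to identify
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [sum(h * x[n - k] for k, h in enumerate(echo_path) if n - k >= 0)
     for n in range(len(x))]
w, errors = lms(x, d, num_taps=3, mu=0.1)
print([round(wi, 2) for wi in w])
```

For a long echo path the single filter above converges slowly; splitting it into several shorter sub-filters, each adapted with its own error term, is the tradeoff the paper studies.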
Probability Least Squares Support Vector Machine with L1 Norm for Remote Sensing Image Retrieval
by Jinhua Zhu
Abstract: This paper proposes a probability least squares support vector machine (PLSSVM) classification method aimed at the characteristics of remote sensing image data, such as high dimensionality, nonlinearity, and massive numbers of unlabeled samples. A hybrid entropy was designed by combining quasi-entropy with entropy difference, and was used to select the most valuable samples to be labeled from a larger set of samples. An L1 norm distance measure was then used to further select and remove outliers and redundant data. Finally, based on the originally labeled samples and the screened samples, the PLSSVM was obtained through training. The experimental results for the classification of ROSIS hyperspectral remote sensing images show that the overall accuracy and Kappa coefficient of the proposed classification method are higher than those of existing methods. The proposed method obtains higher classification accuracy with fewer training samples, which makes it very applicable to current classification problems.
Keywords: remote sensing image; hybrid entropy; L1 norm; active learning; PLSSVM (probability least squares support vector machine).
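The L1-norm screening step mentioned in the abstract can be illustrated with a minimal sketch: samples far from a reference point under the L1 (Manhattan) distance are discarded as outliers. The fixed centre and threshold below are assumptions for the example, not the paper's actual criterion.

```python
def l1_distance(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def screen_outliers(samples, center, threshold):
    """Keep samples whose L1 distance to the reference centre is within
    the threshold; the rest are treated as outliers or redundant."""
    return [s for s in samples if l1_distance(s, center) <= threshold]

center = [0.0, 0.0]
samples = [[0.1, 0.2], [0.5, -0.3], [4.0, 5.0]]
print(screen_outliers(samples, center, threshold=1.0))
# [[0.1, 0.2], [0.5, -0.3]] -- the distant [4.0, 5.0] is dropped
```

Compared with the L2 norm, the L1 distance grows linearly in each coordinate, which makes the screening less dominated by a single extreme feature.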
Application of Soft Computing Neural Network Tools to Line Congestion Study of Electrical Power Systems
by Prasanta Kumar Satpathy
Abstract: This paper presents a scheme for the application of soft computing neural network tools, namely a feed-forward neural network with backpropagation and a radial basis function neural network, to the study of transmission line congestion in electrical power systems. The authors performed sequential training of the two proposed neural networks for monitoring the level of line congestion in the system. Finally, a comparative analysis is drawn between the two neural networks, and it is observed that the radial basis function neural network yields the fastest convergence. The proposed method is tested on the IEEE 30-bus test system subject to various operating conditions.
Keywords: line congestion index; neural network; hidden layer; training performance.
Development and implementation of a wireless sensor system for landslide monitoring application in Vietnam
by Dinh-Chinh Nguyen, Duc-Tan Tran
Abstract: The effects of climate change and human activities lead to a series of dangerous phenomena, such as landslides, floods, etc. Therefore, it is necessary to build systems to monitor environmental hazards. Some studies have built landslide monitoring systems based on wireless sensor networks (WSNs); however, no WSN design has been established as the standard for landslide monitoring. Energy saving, which helps extend the lifetime of a landslide monitoring system, is very important. This paper focuses on the development of a WSN system with a proposed energy-efficient scheme. In this work, we build a complete system that consists of sensor nodes, a gateway node, a database, a website interface and an Android application for landslide monitoring. The energy efficiency scheme is applied at the sensor nodes to increase the lifetime of a sensor node by up to 154 times and significantly increase the rate of successfully received packets. The system was also assembled and measured in the lab and outdoors to analyze initial results.
Keywords: Landslide; Energy; Wireless Sensor Network; Power Consumption.
A 24 GHz Dual FMCW Radar to Improve Target Detection for Automotive Radar Applications
by Quang Nguyen, MyoungYeol Park, YoungSu Kim, Franklin Bien
Abstract: A 24 GHz automotive radar with a Dual Frequency Modulated Continuous Waveform (Dual FMCW) is proposed. By using this modulation waveform, ghost targets can be avoided, especially in multi-target situations; thus, the detection ability of radar systems can be significantly improved. In this paper, in order to generate the Dual FMCW signal, a Dual FMCW Modulation Control Logic (DFMCL) is proposed. This block incorporates a 24 GHz Fractional-N Phase-Locked Loop (PLL) to synthesize the 24 GHz modulation waveform. The proposed architecture was designed using a 130 nm CMOS process. Two alternating chirps of 12 ms and 6 ms were generated in the coherent processing interval, and the modulation bandwidth was 200 MHz. Moreover, a radar transceiver consisting of the 24 GHz Dual FMCW generator and Monolithic Microwave Integrated Circuits (MMICs) was implemented. A behavioural simulation was conducted to evaluate the operation of the proposed generator. Then, the transceiver was modelled to test the detection ability of the automotive radar. The results demonstrate that the proposed scheme can realize the Dual FMCW waveform for automotive radar systems and that the system can avoid ghost targets in multi-target scenarios.
Keywords: automotive radar; 24 GHz; FMCW; transceiver; multi-target.
Converged Services Composition with Case-Based Reasoning
by Hui Na Chua, S.M.F.D Syed Mustapha
Abstract: In order to achieve converged service composition in a Next Generation Network environment, it is necessary to have an approach that is capable of managing the potential complexities caused by service unavailability and network failures. In response to these challenges, we propose a converged service composition (CSC) framework with a management function that uses case-based reasoning (CBR) for handling service unavailability and/or network failures during the service composition process.
Keywords: web service composition; case-based reasoning; next generation network service layer
Optimised Cost Considering Huffman Code for Biological Data Compression
by Youcef GHERAIBIA, Sohag Kabir, Abdelouahab Moussaoui, Smaine Mazouzi
Abstract: The classical Huffman code has been widely used to compress biological datasets. Though a considerable reduction in data size can be obtained with the classical Huffman code, a more efficient encoding is possible by treating the binary bits differently according to requirements such as transmission time and energy consumption. A number of techniques have already modified the Huffman code algorithm to obtain optimal prefix codes for unequal letter costs, in order to reduce the overall transmission cost (time). In this paper, we propose a new approach to improve the compression performance of one such extension, the cost considering approach (CCA), by applying a genetic algorithm for the optimal allocation of codewords to symbols. The idea of the proposed approach is to sacrifice some cost to minimise the total number of bits; hence, the genetic algorithm works by imposing a penalty on the cost. The performance of the approach is evaluated by using it to compress some standard biological datasets. The experiments show that the proposed approach improves the compression performance of the CCA considerably without increasing the cost significantly.
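As a point of reference for the baseline discussed above, the classical Huffman construction can be sketched as follows. This is an illustrative sketch only; the paper's CCA extension and genetic-algorithm codeword allocation are not reproduced here.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a classical Huffman code table for the symbols in `text`."""
    freq = Counter(text)
    # Each heap entry is (frequency, tie-breaker, tree), where tree is a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    if n == 1:  # degenerate case: only one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, n, (t1, t2)))
        n += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

Frequent symbols receive shorter codewords; a cost-considering variant would instead weight the "0" and "1" branches by unequal letter costs when scoring candidate codes.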
Keywords: Data Compression; Huffman Code; Information Coding; Genetic Algorithm; Cost Considering Approach; Optimization.
Sybil Attack Resistant Location Privacy in VANET
by Balaram Allam, Pushpa S
Abstract: Vehicular Ad Hoc Networks (VANETs) are susceptible to a large number of attacks due to their open medium and anomalous nature. Location privacy is an imperative challenge in VANETs: an attacker can easily trace a vehicle's activities if it has knowledge of its location. In most location privacy-preserving mechanisms, each RSU provides secret information to a vehicle entering its range, known only to the corresponding RSU. However, these mechanisms are vulnerable to the Sybil attack, whereby a malicious vehicle compromises an RSU and pretends to be multiple vehicles. Thus, an effective mechanism is needed to identify the attacker and the compromised RSU in a VANET. This paper proposes a Sybil Attack Resistant Location Privacy (SARLP) system to identify the attacker even when a compromised RSU is present in the VANET. The SARLP system employs a Location Privacy Unit (LPU) to provide an effective authentication mechanism. It hides the real identity of a vehicle by providing a temporary key and a trusted certificate to the user, thus improving location privacy. The RSU imposes its signature on a secret random number generated by each vehicle entering its region, and each vehicle and RSU maintains the random number secretly. When two vehicles communicate, the sender reveals the sequence of secret random numbers received from the different RSUs along its travelling path. The secret maintenance mechanism verifies the secret random number at the interference range of two RSUs, so a genuine RSU can successfully detect a compromised one. The simulation results show that the SARLP system achieves a higher level of location privacy preservation and attack resilience than the existing footprint scheme.
Keywords: VANET; Location privacy; Compromised RSU; Sybil attack; Authentication.
A trust management based on a cooperative scheme in VANET
by Ahmed Zouinkhi, Amel Ltifi, Chiraz Chouaib, Mohamed Naceur Abdelkrim
Abstract: A VANET is a vehicular ad hoc network, which is highly dynamic, self-organised and without a pre-existing infrastructure. A VANET works properly only if the participating vehicles cooperate to ensure the exchange of packets. This special network confronts many constraints, such as attacks by malicious entities and the absence of trust between nodes. To solve these problems, we propose an approach with a decentralised architecture combining two models: a cluster model and a trust management model. Our approach encourages cooperation between vehicles in broadcasting packets using a reward concept, and ensures the detection of selfish nodes using a trust concept. By applying a punishment mechanism, our approach aims to prevent malicious nodes from disrupting the network by injecting false information. Besides, our network guarantees authentic packet forwarding, controlled by the group leader, which acts as a watchdog. Our approach is based on asymmetric cryptography, using RSA encryption and digital signatures to ensure security. Simulation results from the NS3 simulator show that our approach achieves better performance.
Keywords: VANET, Cooperation, Trust management, Security, NS3.
Visible Light Communication based High-Speed High-Performance Multimedia Transmission
by Atul Sewaiwar, Samrat Vikramaditya Tiwari, Yeon-Ho Chung
Abstract: A novel scheme for high-speed, high-performance multimedia transmission using visible light communication is presented. Initially, the multimedia image is converted to digital data. This digital data is divided into three sub-streams, each of which is then transmitted over one of three parallel channels. Prior to transmission, each sub-stream is modulated using On-Off Keying (OOK). Three parallel channels corresponding to each colour (red, green and blue) of an RGB LED are utilised for transmission, thus giving a total of nine channels. Colour filters (CFs) and selection combining (SC) are also utilised for performance improvement and high-speed transmission. Simulations are performed to evaluate the effectiveness of the proposed scheme. Results show that the proposed scheme is efficient in terms of bit-error-rate (BER) performance and data rate, and can thus effectively be used for high-speed multimedia transmission.
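The sub-stream splitting and OOK mapping described above can be illustrated with a minimal sketch. The round-robin split and the function names are illustrative assumptions, not the paper's implementation.

```python
def split_rgb_streams(data: bytes):
    """Round-robin split of a byte stream into three sub-streams, one
    per LED colour channel (an illustrative stand-in for the paper's
    three-channel RGB scheme)."""
    streams = ([], [], [])
    for i, byte in enumerate(data):
        streams[i % 3].append(byte)
    return tuple(bytes(s) for s in streams)

def ook_modulate(stream: bytes):
    """OOK: the LED is driven on for a 1 bit and off for a 0 bit;
    here each byte is expanded MSB-first into its on/off symbols."""
    return [(byte >> k) & 1 for byte in stream for k in range(7, -1, -1)]
```

The three modulated symbol streams would then drive the red, green and blue emitters in parallel, tripling the aggregate data rate relative to a single channel.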
Keywords: free space optical communications; visible light communication; multimedia; multiplexer; demultiplexer
Analysis of Energy Aware Job Offloading in Mobile Cloud
by Junyoung Heo, Hong Min, Jinman Jung
Abstract: Mobile cloud computing is the combination of cloud computing and mobile computing, and provides rich computational resources to mobile devices. In mobile cloud computing, computation offloading techniques are used to overcome the limitations of resource-constrained mobile devices. Offloading techniques execute some parts of a mobile device's job in the cloud on its behalf: if the cost of running a part of the job on the mobile device is larger than the cost associated with offloading it, that part is executed in the cloud. Traditional cost analysis models for deciding which parts of a job to execute in the cloud or on the mobile device estimated only the cost of offloading itself, composed of the data transfer and response time required for the function call. In this paper, we propose a novel offloading cost analysis model based on the data synchronization rate and the data exchange rate for the function's input, to improve the accuracy of offloading cost estimation. We confirm through experiments that the offloading technique with the proposed model can reduce the execution time of a job and consequently improve energy efficiency compared to previous techniques.
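The decision rule described above can be sketched as a toy cost comparison. The parameter names and the linear cost form are illustrative assumptions, not the paper's model.

```python
def should_offload(local_cost, transfer_cost, response_time,
                   sync_rate=0.0, exchange_rate=0.0, input_size=0.0):
    """Decide whether to run a job part in the cloud.

    The traditional model compares the local cost against data transfer
    plus response time; a model in the spirit of the abstract would
    additionally charge for the data that must be synchronised and
    exchanged for the function's input.
    """
    offload_cost = transfer_cost + response_time
    offload_cost += (sync_rate + exchange_rate) * input_size
    return local_cost > offload_cost
```

Note how accounting for synchronisation can flip the decision: a part that looks cheap to offload under the traditional model may stay on the device once its input data traffic is charged.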
Keywords: Mobile Cloud Computing, Profiling, offloading, Remote Procedure Call
Method of Trajectory Privacy Protection Based on Restraining Trajectory in LBS*
by Zwmao Zhao, Jiabo Wang, Chuanlin Sun, Youwei Yuan, Bin Li
Abstract: With the development of mobile positioning technology, location-based services are becoming more and more widely used in daily life, but they have introduced the security problem of user privacy leakage. In this paper, the problem of user trajectory privacy protection in location-based services is introduced, and a method of trajectory privacy protection based on restraining the trajectory in LBS is proposed. The proposed method restrains the release of sensitive positions and chooses a non-sensitive position where the user might stay with maximum probability to replace the sensitive position. It thus prevents the leakage of sensitive positions on the user's trajectory and protects the user's activity trajectory, and a method for calculating the privacy protection degree of the restraining method is given.
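The replacement step described above can be sketched as follows. The interface and the flat stay-probability table are hypothetical simplifications; the paper's probability model is richer.

```python
def release_position(position, sensitive, stay_prob):
    """Suppress a sensitive position: if `position` is sensitive,
    release instead the non-sensitive position where the user is most
    likely to stay; otherwise release the true position unchanged."""
    if position not in sensitive:
        return position
    candidates = {p: pr for p, pr in stay_prob.items() if p not in sensitive}
    return max(candidates, key=candidates.get)
```

An observer of the released trajectory then sees a plausible non-sensitive stay point in place of each sensitive one.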
Keywords: Location-based service (LBS); privacy preservation; trajectory privacy; restraining trajectory.
An improved design of P4P based on distributed Tracker
by Lixin Li, Feng Wang, Wentao Yu, Xiuqing Mao, Mengmeng Yang, Zuohui Li
Abstract: The P4P architecture is mainly composed of the appTracker, iTracker and peers. A single appTracker manages shared resources across the different ISP domains. Every peer registers with the appTracker when joining the network, and then requests resources from it. In this architecture, the workload on the single appTracker is too high; thus, a bottleneck problem often appears when the scale of the network grows in this centralised structure. An improved design of P4P based on a distributed tracker is proposed to solve the overload problem of the single appTracker server. In the improved P4P system, a distributed tracker overlay network replaces the appTracker to manage the resources in the different ISP domains. The functions of the iTracker deployed by each ISP are extended, and the info interfaces of the iTracker are designed in detail in order to share resources among the different iTrackers. The experiments show that the P4P framework based on the distributed tracker can solve the server bottleneck problem and improve scalability and stability while maintaining the characteristics of locality and transmission capacity.
Keywords: Proactive network Provider Participation for P2P (P4P); distributed Tracker; overlay network; weighted graph
Chinese-Naxi Syntactic Statistical Machine Translation Based on Tree-to-Tree
by Shengxiang Gao, Zhiwen Tang, Zhengtao Yu, Chao Liu, Lin Wu
Abstract: For the purpose of using Naxi syntax information efficiently, we put forward a method of Chinese-Naxi syntactic statistical machine translation based on the tree-to-tree model. First, to exploit the syntax information of both the source and target languages, the method collects a Chinese-Naxi aligned parallel corpus and parses both sides, obtaining the corresponding phrase structure trees of Chinese and Naxi. Then, the GHKM algorithm is used to extract a large number of translation rules between Chinese treelets and Naxi treelets, and their probabilistic relationships are inferred from these rules to obtain the translation templates. Finally, these translation templates guide the decoding through a tree-parsing algorithm, translating each Chinese phrase treelet bottom-up to obtain the final translation. In comparison with the tree-to-string model, the experiments show that this method improves the BLEU score by 1.2 points. This proves that both Chinese and Naxi syntactic information are very helpful in improving the performance of Chinese-Naxi machine translation.
Keywords: machine translation; Chinese-Naxi; syntax; tree-to-tree
Research on wireless sensor network for mechanical vibration monitoring
by Liang Zong, Wencai Du, Yong Bai
Abstract: Mechanical vibration monitoring systems based on cable connections suffer from drawbacks such as complex cabling layout, high cost, and poor maintainability and system flexibility. This paper introduces the wireless sensor network (WSN) into mechanical vibration monitoring, where monitoring data are transmitted by radio. This brings significant advantages: low cost, remote monitoring, and easier diagnosis and maintenance. The multi-hop capability and topological flexibility of a WSN can effectively counter the attenuation of wireless signals by buildings and equipment. This paper presents a wireless sensor network model for mechanical vibration monitoring, analyses two kinds of wireless sensor network topology, and sets up a vibration sensor monitoring network system. The system uses vehicle vibration sensors to collect monitoring data and, combining the ADTCP algorithm for multi-hop networks, puts forward a scheme that limits the congestion window to reduce network congestion. The scheme can effectively alleviate congestion at the sink nodes in the mechanical vibration monitoring wireless sensor network and improve the monitoring performance of the network.
Keywords: sensor network; mechanical vibration; vibration monitoring
Security and Robustness of a Modified ElGamal Encryption Scheme
by Karima Djebaili, Lamine Melkemi
Abstract: In this paper we propose a new and practical variant of ElGamal encryption which is secure against both passive and active adversaries. Under the hardness of the decisional Diffie-Hellman assumption, we prove that the proposed scheme is secure against adaptive chosen-ciphertext attacks in the standard model. Such security guarantees not only the confidentiality but also the integrity and authentication of communications. We show that the modified scheme furthermore achieves anonymity as well as strong robustness.
Keywords: ElGamal encryption; adaptive chosen ciphertext attacks; decisional Diffie-Hellman assumption; robustness.
Design and Realisation of a Wireless Data Acquisition System for Vibration Measurements
by Surgwon Sohn
Abstract: Nowadays, the sensing, processing and analysis of vibration signals are key components of structural health monitoring systems. Wireless data acquisition (DAQ) systems for vibration measurements are becoming more and more important in the sensing and processing field. This paper presents the hardware and software design of a wireless data acquisition system for this purpose. The DAQ system is based on the TMS320 digital signal processor, which enables real-time processing of vibration signals. Sensitivity is one of the key performance features of the DAQ system, and we use an integrated circuit piezoelectric (ICP) accelerometer as the detection sensor for best performance. For faster wireless transmission of large amounts of acceleration data, a new data link protocol over the Bluetooth interface between the wireless DAQ and a smartphone is proposed. An Android smartphone is a good choice of user interface in the mobile data acquisition system. In order to display vibration signals in real time on the Android smartphone, a commercial Java graphics library is used.
Keywords: Wireless Data Acquisition System; Vibration Signal; Accelerometer; Data Link Protocol; Smartphone Interface.
Weighted Estimation for Texture Analysis based on Fisher Discriminative Analysis
by Xiaoping Jiang, Chuyu Guo, Hua Zhang, Chenghua Li
Abstract: Traditional texture analysis methods use only the relative contribution of each face area to mark global similarity. To solve the problem of feature extraction being local rather than global, a Weighted Estimation for Texture Analysis (WETA) method based on Fisher Discriminative Analysis (FDA) is proposed. First, Local Binary Patterns (LBP) or Local Phase Quantization (LPQ) is used for image texture encoding. Then the image is divided into equal, non-overlapping local patches. The most discriminative axes, extracted from the similarity space by FDA, are applied to texture analysis, and the best solution is obtained through weight optimisation. Finally, experiments on two major general face databases (FERET and FEI) verify the effectiveness of the proposed method. The experimental results show that, compared with texture methods in other papers, the proposed method obtains better recognition performance.
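The LBP encoding mentioned above, in its basic 3x3 form, can be sketched as follows. This is the standard operator; the paper's WETA weighting and FDA step are not reproduced here.

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours
    against the centre pixel (1 if neighbour >= centre, else 0) and
    pack the results into one byte, walking the ring clockwise."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= c:
            code |= 1 << bit
    return code
```

A face image encoded this way yields one byte per pixel; histograms of these codes over local patches form the texture features that the weighting scheme then combines.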
Keywords: Face recognition; Fisher discriminative analysis; Weighted estimation; Texture coding.
A New Lightweight RFID Mutual Authentication Protocol Based on CRC Code
by Xiaohong Zhang, Juan Lu
Abstract: A new lightweight RFID mutual authentication protocol, in which the security keys are updated dynamically, is presented using cyclic redundancy check (CRC) code operations and simple logic operations. GNY logic proof and security analysis show that the protocol not only effectively achieves the mutual authentication requirements between readers and tags, but also prevents many security and privacy problems such as eavesdropping, replay and replication attacks and tag tracking, without increasing the computation and communication traffic of the RFID system. In particular, the new protocol meets the EPC Class 1 Gen 2 standard; compared with existing protocols of the same security level, it has low hardware implementation complexity, occupying only 18.75% of the reserved tag storage space, and is suitable for low-cost RFID systems. Hence, the new protocol combines high security with low tag cost.
Keywords: RFID; CRC code; mutual authentication protocol; GNY logic; security analysis.
Application of PatchNet in Image representation
by Hao Cheng, Zhonglin He
Abstract: PatchNet, a graph model with a hierarchical structure, is a new technology for image representation. Its description structure for images conforms well to the cognitive features of the human visual system, and both semantic information and geometric structural information can be stored and represented compactly. PatchNet can realise an abstract representation of an input image. This paper introduces the PatchNet representation of images, describes the detailed structure of PatchNet, and analyses how to apply PatchNet to content-based searching and editing over an image library.
Keywords: PatchNet; Image representation; Geometric structure.
Experimental study on fibre Bragg grating temperature sensor and its pressure sensitivity
by Deng-pan ZHANG, Jin WANG, Yong-jie WANG
Abstract: At present, ocean temperature measurement depends almost entirely on electrical signal inspection, but such sensors are not safe in water. In order to overcome this shortcoming and improve measurement accuracy and safety, the temperature sensing theory of the fibre Bragg grating was analysed and a new ocean temperature sensor was presented, with the advantages of all-optical elements, underwater safety and feasibility of sensor networking. By setting a spring in the metal tube, the temperature was measured with pressure-isolation encapsulation. Multiple heating and cooling temperature tests were carried out. Results show that, under no pressure and temperatures of 0~35 °C, the sensor has excellent repeatability, hysteresis and linearity. The temperature sensitivity is 29.9 pm/°C, which is close to the theoretical value. Pressure sensitivity tests were executed under 0~5 MPa; results show that the wavelength does not change and is not affected by pressure.
Keywords: FBG; temperature; sensor; pressure; ocean.
Design and Development of Smart fishing Poles using ICT enabled systems
by Zhenghua Xin, Ma Lu, Guolong Chen, Hong Li, Qixiang Song, Meng Xiao
Abstract: Consumers wish fishing poles to have high quality, diverse functions and personalisation. This intelligent float focuses mainly on its functions. Compared with existing fishing pole products, it enables fishing at night because it has a night light: when the float is put into a river, lake or the sea, the LED flashes, and the green light is easy to recognise. This makes fishing more accurate, and anglers enjoy themselves.
The fishing pole also has an accelerometer sensor. When the float is suddenly pulled by a fish in the water, the resulting acceleration turns the light red, indicating that there is a fish on the hook. This is especially convenient in the evening.
Nowadays, the smartphone is very important for people to communicate with others. When a fish bites the food on the hook, an SMS is sent to the phone, telling the angler to take up the rod and line. We deliberately designed a Bluetooth module linked with the floats, so that one can use a mobile phone and fish simultaneously. When a fish bites the food on the hook, the line triggers the infrared sensor, and an SMS with the content "get fish" is generated and sent to the angler. The designed alarm works automatically so that anglers can enjoy the game. Similarly, a shaken float triggers the infrared sensor and the buzzer sounds to remind the angler.
Keywords: smart fishing poles; infrared sensors; Bluetooth
Detection and Filling of Pseudo-hole in Complex Curved Surface objects
by Mei Zhang
Abstract: The detection and filling of pseudo-holes in complex curved surfaces has always been a hot and difficult problem in the computer vision field. Aiming at the defect that traditional methods can only detect pseudo-hole regions of small curvature, this paper proposes a new method to improve pseudo-hole detection. First, holes are classified by a projection method into simple holes and complex holes, and the method of filling a simple hole is described. A complex hole is divided into a number of simple holes, and each simple hole is filled to complete the filling of the complex pseudo-hole; finally, the filling results are mapped back to the original hole region of the object.
Keywords: Laser point cloud; complex curved surfaces object; pseudo-hole detection; pseudo-hole filling
A variant of Random WayPoint mobility model to improve routing in Wireless Sensor Networks
by Lyamine Guezouli, Kamel Barka, Souheila Bouam, Abdelmadjid Zidani
Abstract: The mobility of nodes in a wireless sensor network is a factor affecting the quality of service offered by the network. We believe that node mobility presents an opportunity when the nodes move in an appropriate manner, and routing algorithms can benefit from it. The purpose of our work is to study a mobility model and adapt it to ensure optimal routing in a dynamic network. We apply a variant of the RWP mobility model, named Routing-Random WayPoint ("R-RWP"), to the whole network in order to maximise the coverage radius of the base station (which is fixed in our study) and thus optimise the end-to-end data delivery delay.
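For context, the classic RWP model that the paper modifies can be sketched as follows. This is plain RWP only; the R-RWP bias toward the base station's coverage is not reproduced here, and the area and speed parameters are illustrative.

```python
import math
import random

def random_waypoint(steps, area=(100.0, 100.0), vmax=5.0, seed=1):
    """Generate a classic Random WayPoint (RWP) trace: the node picks a
    random destination and speed, travels there in a straight line, then
    repeats. Returns a list of (x, y) positions, one per time step."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    trace = [(x, y)]
    while len(trace) < steps:
        tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        speed = rng.uniform(0.1, vmax)
        n = max(1, int(math.hypot(tx - x, ty - y) / speed))  # steps to waypoint
        for i in range(1, n + 1):
            if len(trace) >= steps:
                break
            trace.append((x + (tx - x) * i / n, y + (ty - y) * i / n))
        x, y = trace[-1]
    return trace
```

A routing-aware variant would replace the uniform destination draw with one biased toward positions inside the base station's coverage radius.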
Keywords: WSN; wireless sensor networks; Random Waypoint; Mobility model; Routing; RWP; random waypoint.
A Modified Extended Particle Swarm Optimization Algorithm to Solve the Directing Orbits of Chaotic Systems
by Simin Mo, Jianchao Zeng, Weibin Xu, Chaoli Sun
Abstract: In order to solve the problem of the poor local search capability of the extended particle swarm optimization algorithm (EPSO), a Modified Extended Particle Swarm Optimization algorithm (MEPSO) is proposed, which reduces the magnitude of the total force exerted on each particle by decreasing the number of other particles that affect it. Meanwhile, the number of particles removed is analysed theoretically, and it is proved that MEPSO converges to the global optimum with probability 1. Compared with related algorithms, the presented algorithm can effectively balance global and local search and improve optimisation performance. Finally, MEPSO better solves the problem of directing the orbits of chaotic systems.
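For reference, a plain global-best PSO baseline can be sketched as follows. This is not the paper's MEPSO: the extended force model and the particle-removal analysis are not reproduced, and the inertia/acceleration constants are standard textbook values.

```python
import random

def pso_minimise(f, dim=2, swarm=20, iters=200, seed=3):
    """Plain global-best PSO minimising f over [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49            # standard inertia/acceleration
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                   # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]            # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

In the extended variant every particle exerts a force on every other; the modification described in the abstract trims that interaction set to sharpen local search.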
Keywords: Extended Particle Swarm Optimization algorithm; magnitude of total forces exerting on each particle; global and local search.
Development and application research of smart distribution district based on IDTT-B new-type transformer terminal unit
by Aidong Xu, Lefeng Cheng, Xiaobin Guo, Ganyang Jian, Tao Yu, Wenxiao Wei, Li Yu
Abstract: Aimed at the problems of a low degree of automation and the lack of remote monitoring of operating conditions in distribution districts, an IDTT-B-type transformer terminal unit (TTU) based smart distribution district (SDD) is designed, which provides scientific and advanced technical means for operation management units to achieve fine-grained management of distribution districts. The construction of the SDD is described, i.e. the upgrading and reconstruction of the original distribution district, the construction of the new-type SDD based on the IDTT-B TTU, and the building of the communication network and main station. The focus is on the design of the IDTT-B-type TTU, which can perform transformer monitoring, power quality monitoring, temperature measurement and low-voltage switch communication, etc.; it is highly integrated, supports remote control, communication and software upgrading, and offers low investment and high cost performance. The technical features, overall performance and application situation of the IDTT-B-based SDD are introduced. Finally, an applied example of an SDD based on the IDTT-B is given, and power quality detection and analysis are carried out. The construction of the new-type distribution district has certain significance for the unified building of a strong and smart grid, and also provides guidance and reference for operation and management units.
Keywords: distribution district; transformer terminal unit; distribution automation; operation management; safe protection; main station.
PRIVACY PRESERVING METHOD FOR KNOWLEDGE DISCOVERED BY DATA MINING
by Sara Tedmori
Abstract: In spite of its success in a wide variety of applications, data mining technology raises a variety of ethical concerns, including privacy, intellectual property rights, and data security. In this paper, the author focuses on the privacy problem of unauthorised use of information obtained from knowledge discovered by secondary usage of data in clustering analysis. To address this problem, the author proposes a combination of isometric data transformation methods as an approach to guarantee that data mining does not breach privacy. The three transformation methods of reflection, rotation, and translation are used to distort confidential numerical attributes in order to satisfy the privacy requirements while maintaining the general features of the clusters in clustering analysis. Experimental results show that the proposed algorithm is effective and provides an acceptable balance between privacy and accuracy.
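Rotation, one of the three isometric transformations listed above, can be sketched for 2-D records as follows. The function name and 2-D restriction are illustrative simplifications; the point is that pairwise distances, and hence cluster geometry, survive while the raw values change.

```python
import math

def rotate2d(points, angle_deg):
    """Isometric (distance-preserving) rotation of 2-D records about
    the origin: confidential values are distorted, but every pairwise
    distance, and therefore the clustering structure, is preserved."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]
```

Reflection and translation are isometries as well, so composing the three still leaves all inter-record distances intact.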
Keywords: Privacy Preserving, Data Mining, Discovering Knowledge, Data Engineering
Recognition of the Anti-Collision Algorithm for RFID Systems Based on Tag Grouping
by Zhi Bai, Yigang He
Abstract: In an RFID system, tag collision is one of the key problems that must be solved. An anti-collision algorithm for RFID systems based on tag grouping is put forward. Compared with conventional algorithms, the proposed algorithm can achieve high system efficiency when there are a large number of tags in the field, by restricting the number of unread tags. Simulation results show that the proposed algorithm improves the slot efficiency to at least 80% compared with conventional algorithms when the number of tags reaches 1000.
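For context, the conventional framed slotted ALOHA baseline that such grouping schemes improve upon can be simulated with a short Monte-Carlo sketch. This is a toy baseline, not the paper's algorithm; the parameters are illustrative.

```python
import random

def slotted_aloha_efficiency(tags, frame_size, rounds=200, seed=7):
    """Monte-Carlo slot efficiency of framed slotted ALOHA: each tag
    picks one slot per frame uniformly at random; a slot succeeds iff
    exactly one tag chose it. Returns successful slots / total slots."""
    rng = random.Random(seed)
    success = 0
    for _ in range(rounds):
        slots = [0] * frame_size
        for _ in range(tags):
            slots[rng.randrange(frame_size)] += 1
        success += sum(1 for s in slots if s == 1)
    return success / (rounds * frame_size)
```

With the frame size matched to the tag count, this baseline peaks near 1/e ≈ 36.8%, which is why grouping schemes that thin out the contending tag population can raise the effective efficiency so markedly.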
Keywords: RFID; anti-collision algorithm; tag grouping; adaptive frame slotted
Towards a robust palmprint representation for person identification
by Meraoumia Abdallah, Bendjenna Hakim, Chitroub Salim
Abstract: Biometrics, which refers to the automatic identification of individuals based on their physiological and/or behavioural characteristics, is a widely studied field. This identification technology has evolved rapidly and has very strong potential to be widely adopted in many civilian applications such as e-banking, e-commerce and access control. Among the physiological biometric modalities, those based on the palm have received the most attention due to its stable and unique features, which are rich in information even at low resolution. Although there are several palm capture devices, none of them can provide the full features of the same palm. By using different capture devices, the palm features can be represented in different formats, such as greyscale images, near-infrared images, colour images, multispectral images and 3D shapes. In this context, we present in this paper a study that proposes a robust palmprint representation for a reliable person identification system. Thus, a comparative study of the palmprint image representations used in practice is performed, and a new scheme for improving person identification using palmprint images is proposed. The proposed scheme combines the results of several classifiers. The discrete contourlet transform (2D-CNT) is used as the feature extraction technique: at each level, the palm image is decomposed into several bands using the 2D-CNT, and some of the resulting bands are used to create the feature vector. Given this feature vector, two sub-systems are created: the first is based directly on the feature vector, while the second uses a Hidden Markov Model (HMM) to model it. The classifiers' results are combined using a matching-score-level fusion strategy. The proposed system is tested and evaluated using several databases of the Hong Kong Polytechnic University that contain 400 users.
Keywords: Biometrics; Person Identification; Palmprint; Feature extraction; Contourlet transform; Hidden Markov Model; Data fusion.
Robust Adaptive Array Processing based on Modified Multistage Wiener Filter Algorithm
by Peng Wang, Ke Gong, Shuai-bin Lian, Qiu-ju Sun, Wen-xia Huang
Abstract: The multistage Wiener filter (MSWF) is a very efficient algorithm for adaptive array processing because of its low complexity and prominent rank-reduction advantage. However, if the training sample data are contaminated by outliers, especially outliers having the same DOA as the target, the MSWF results degrade severely. In this paper, the MSWF's backward iteration is improved, and a median cascaded canceller (MCC) strategy is adopted so that the optimal weight calculation is obtained via sorting and median processing, effectively removing the impact of outliers. The blocking-matrix solving in the MSWF forward iteration is completed by the Householder transform to enhance fixed-point performance. The newly designed algorithm attains an excellent compromise between robustness and complexity. To verify the presented algorithm's performance, an array with 50 elements was established on a simulation platform, and the simulation results prove that it can cope effectively with outlier-contaminated applications.
Keywords: Multistage Wiener filter; rank-reduction; Householder transform; outlier; median cascaded canceler.
Improving Multidimensional Point Query Search using Multiway Peer-to-Peer Tree Network
by Shivangi Surati, Devesh Jinwala, Sanjay Garg
Abstract: Nowadays, Peer-to-Peer (P2P) networks are widely accepted in multidimensional applications like social networking, multiplayer games, P2P e-learning, P2P mobile ad-hoc networks, etc. Various P2P overlay networks combining Multidimensional Indexing (MI) methods are preferable for efficient multidimensional point or range search in a distributed environment. However, point query search in existing P2P networks has limitations: (i) it either does not support MI or uses replication to support MI, or (ii) the point query search cost is limited to O(log2 N). Hence, traditional MI techniques based on a multiway tree structure (having larger fanout) can be employed to enhance multidimensional point query search capabilities. Based on our observations, a hybrid model combining an m-ary (m = fanout of the tree, > 2) P2P tree network and MI based on space containment relationships is preferred, reducing the point query search performance bound to O(logmN) using a single overlay network. The present paper shows how this model improves the search performance of point queries to O(logmN) steps, independent of the dimensionality of the objects.
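The O(logmN) versus O(log2N) bound discussed above can be made concrete with a small sketch. The balanced-tree assumption and function name are illustrative, not the paper's overlay.

```python
def search_hops(n_peers, fanout):
    """Worst-case point-query hop count in a balanced m-ary tree
    overlay: the smallest h with fanout**h >= n_peers, i.e.
    ceil(log_m N). Computed with integer arithmetic to avoid
    floating-point rounding at exact powers."""
    hops, reach = 0, 1
    while reach < n_peers:
        reach *= fanout
        hops += 1
    return hops
```

For a million peers, a binary overlay needs 20 hops in the worst case while a fanout-10 tree needs only 6, which is the motivation for preferring larger fanouts in the hybrid model.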
Keywords: Peer-to-Peer overlay networks; Distributed computing; Multidimensional Indexing; Point query search; Multiway trees
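As an illustration of why a larger fanout shortens point query search, the sketch below (not the paper's overlay protocol; a plain in-memory m-ary search tree with hypothetical `Node`, `build` and `search` helpers) counts the hops a lookup takes for m = 16 versus m = 2 on the same key set:

```python
import bisect
import math

class Node:
    def __init__(self, keys, children):
        self.keys = keys          # separator keys (or leaf keys)
        self.children = children  # empty list for a leaf

def build(keys, m):
    """Build a balanced m-ary search tree from a sorted list of keys."""
    if len(keys) <= m - 1:
        return Node(list(keys), [])
    chunk = math.ceil(len(keys) / m)
    parts = [keys[i:i + chunk] for i in range(0, len(keys), chunk)]
    seps = [p[0] for p in parts[1:]]  # routing keys between subtrees
    return Node(seps, [build(p, m) for p in parts])

def search(node, key):
    """Return (found, hops); hops is the number of nodes visited."""
    hops = 1
    while node.children:
        node = node.children[bisect.bisect_right(node.keys, key)]
        hops += 1
    return key in node.keys, hops
```

With 10,000 keys, a fanout of 16 resolves a point query in 4 hops, while a binary fanout needs roughly log2(10000) ≈ 14, mirroring the O(log_m N) versus O(log_2 N) bound in the abstract.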
IoT-based risk monitoring system for safety management in warehouses
by Sourour Trab, Ahmed Zouinkhi, Eddy Bajic, Mohamed Naceur Abdelkrim, Hassen Chekir
Abstract: This paper relies on the concepts and architecture of the IoT to design a risk monitoring system for a hazardous product warehouse. Enhancing a product into a smart product, a sensor-equipped communicating device, makes it possible to control and monitor product interactions with the objective of risk prevention and avoidance. A generic warehouse safety policy supported by the smart products is presented that relies on a set of parametric safety rules for the storage, picking and handling of products. Our proposal aims to bring the benefits of information availability, communication and decision-making deep into the physical warehousing world, oriented toward global safety assurance. We present an implementation case for chemical product warehousing that uses a ZigBee wireless sensor network platform and LabVIEW software. Smart products and remote monitoring enable dynamic risk assessment by analysing product information and status, together with the ambient condition parameters of the warehouse, for safety assurance.
Keywords: IoT; Risk monitoring system; WMS; Intelligent product; Safety management.
The Study of Access Point Outdoor Coverage Deployment for Wireless Digital Campus Network
by Augustinus B. Primawan, Nitin K. Tripathi
Abstract: Wireless Local Area Network design needs further development to obtain appropriate and effective results. Site surveys in the design process give realistic results, but require time and effort. Predicting signal strength using empirical models can give appropriate results for access point placement to obtain good signal coverage.
Geospatial analysis methods, such as Inverse Distance Weighting, Kriging and Global Polynomial Interpolation, have been compared. This study showed that Kriging is an appropriate method to predict values over the coverage area. Furthermore, predictive signal strength models such as the classical, empirical and COST 231 Hata models have been studied. The empirical model was shown to give the best predictive calculations.
The empirical model used to predict signal strength, combined with Kriging geostatistical analysis, gave usable signal coverage predictions for access point placement. This model will support GIS spatial analysis tools in performing effective planning of access point placement.
Keywords: Access Point Placement; GIS Spatial Analysis; Received Signal Level.
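Of the geospatial methods compared above, Inverse Distance Weighting is the simplest to illustrate. The sketch below uses an assumed textbook IDW formulation (not the study's implementation) to estimate received signal strength at a query point from surveyed samples:

```python
def idw(samples, query, power=2):
    """Inverse Distance Weighting: estimate RSS (dBm) at `query`
    from measured (x, y, rss) survey samples."""
    num = den = 0.0
    for x, y, rss in samples:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return rss                    # exact hit on a measured point
        w = 1.0 / d2 ** (power / 2)       # weight decays with distance^power
        num += w * rss
        den += w
    return num / den
```

Kriging additionally fits a variogram to model spatial correlation, which is what gives it the edge reported in the study; IDW only weights by geometric distance.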
Feature-Opinion Pair Identification Method in Two-Stage based on Dependency Constraints
by Shulong Liu, Xudong Hong, Zhengtao Yu, Hongying Tang, Yulong Wang
Abstract: Feature-opinion pair identification, which includes opinion word and opinion target extraction and the identification of their relations, is important for analysing online reviews. In this paper, we propose a two-stage feature-opinion pair identification method based on dependency constraints, according to the relationship between feature-opinion pair identification and dependency constraints. In the first stage, we construct dependency constraints based on the dependency information of words. Then, dependency constraints and seed words are employed to extract opinion words and opinion targets. In the second stage, we use the opinion words and opinion targets extracted in the first stage to construct candidate feature-opinion pairs. Thereafter, we integrate dependency constraints, location features and part-of-speech features into a support vector machine to identify feature-opinion pairs. Our experimental results on online reviews demonstrate that the proposed method is effective in identifying feature-opinion pairs, with an F-score of 83.85%.
Keywords: opinion mining; opinion word; opinion target; dependency constraints; feature-opinion pair.
Middleware-managed High Availability for Cloud Applications
by Ali Kanso, Abdelouahed Gherbi, Yves Lemieux
Abstract: High availability is a key non-functional requirement that software and telecom service providers strive to achieve. With the ongoing shift to Cloud computing, the challenge of satisfying the high availability requirement becomes more arduous, as the Cloud introduces additional features, such as on-demand access, scalability and virtualization, which add to the complexity of the high availability solution. In this paper we target the issue of achieving high availability for software applications running in the Cloud. We first benchmark two of the most prominent middleware implementations for HA. We then build on the more responsive one to present our solution to the complexity of making applications deployed in the Cloud highly available. Finally, we discuss a quantitative and qualitative assessment of the overhead of our proposed solution.
Keywords: High-availability; Cloud platform; HA middleware; State-aware applications; Runtime integration; High-availability as a service; REST architecture.
Low-Complexity LDPC-Convolutional Codes based on Cyclically Shifted Identity Matrices
by Fotios Gioulekas, Constantinos Petrou, Athanasios Vgenis, Michael Birbas
Abstract: In this study, a construction methodology for Low-Density Parity-Check Convolutional Code (LDPC-CC) ensembles based on cyclically shifted identity matrices is proposed. The proposed method directly generates the syndrome former matrices according to the specified code parameters and constraints, i.e., code rate, degree distribution, constraint length, period and memory, in contrast to the majority of the available approaches, which derive the relevant error-correcting codes from block codes, protographs or spatially-coupled codes. Simulation results show that the constructed ensembles provide an error-correcting gain of up to 0.2 dB in terms of frame-error and bit-error rates in the convergence region, when compared with the error-correcting schemes adopted by various communication standards, with equivalent hardware complexity even at short codeword lengths. Specifically, the constructed LDPC-CCs have been assessed against the corresponding error-correcting codes used in the WiMAX and G.hn standards for wireless and wireline telecommunications, respectively.
Keywords: FEC; LDPC-Convolutional Codes; Complexity; error-correction; WiMAX; G.hn; cyclically shifted identity matrices; LDPC-Block Codes; Schedulable memory; syndrome-former.
A Big Data and Cloud Computing Specification, Standards and Architecture: Agricultural and Food Informatics
by N.P. Mahalik, Na Li
Abstract: Big data has gone from an emerging to a widely used technology in industrial, commercial, research, and database applications. It is used for processing and analyzing massive sets of data to derive useful patterns, inferences, and relations. The real-time data storage and management architecture plays an important role. This paper introduces big data, including its background and definitions, characteristics, related technologies, and the challenges associated with implementing big data-based application technologies. The paper also introduces cloud computing, a related yet independent emerging technology, covering modern technologies and standards, definitions, service and deployment models, advantages and challenges, and development prospects. The paper considers computing specification, standardized procedure, and system architecture with regard to big data systems and cloud computing.
Keywords: Big data; KDD; data mining; cloud computing; Virtualization; Architecture.
Wi-Fi Received Signal Strength Based Hyperbolic Location Estimation for Indoor Positioning Systems
by Anvar Narzullaev, MOHD Hasan Selamat, Khaironi Yatim Sharif, Zahriddin Muminov
Abstract: Nowadays, Wi-Fi fingerprinting based positioning systems give enterprises the ability to track their various resources more efficiently and effectively. The main idea behind fingerprinting is to build a signal strength database of the target area prior to location estimation. This process is called calibration, and the positioning accuracy highly depends on calibration intensity. Unfortunately, the calibration procedure requires a huge amount of time and effort, making large-scale deployments of Wi-Fi based indoor positioning systems non-trivial.
In this research we present a novel location estimation algorithm for Wi-Fi based indoor positioning systems. The proposed algorithm combines signal sampling and hyperbolic location estimation techniques to estimate the location of mobile users. The algorithm achieves cost-efficiency by reducing the number of fingerprint measurements while providing reliable location accuracy. Moreover, it does not require any hardware upgrades to the existing network infrastructure. Experimental results show that the proposed algorithm, with its easy-to-build signal strength database, performs more accurately than conventional signal strength based methods.
Keywords: Indoor positioning; hyperbolic location estimation; wi-fi fingerprinting; TDOA; trilateration; received signal strength.
Target coverage algorithm with energy constraint for wireless sensor networks
by Liandong Lin, Chengjun Qiu
Abstract: As wireless sensor networks are made up of low-cost, low-power tiny sensor nodes, it is of great importance to study how to cover targets while saving energy. In this paper, we propose a novel target coverage algorithm with an energy constraint for wireless sensor networks. A wireless sensor network can be described as a graph model in which nodes and edges represent sensors and maximum signal transmission ranges, respectively. In particular, three types of sensor nodes are utilized: 1) base stations, 2) gateways, and 3) sensors. The main innovations of this paper are that we organize the network lifetime in a cyclic mode and divide it into rounds of equal duration. At the beginning of each round, sensors independently determine which sensing units should be turned on in the working step. Afterwards, the status of each sensing unit is determined by integrating sensing ability and remaining energy. Finally, we construct a simulation environment to test the performance of our algorithm. Experimental results demonstrate that the proposed algorithm performs better than the remaining-energy-first and max-lifetime target coverage schemes under various numbers of sensors and attributes, and its performance is second only to integer programming. Furthermore, we also find that the proposed algorithm is able to effectively cover targets with low energy consumption.
Keywords: Wireless sensor networks; Target coverage; Energy constraint; Network lifetime.
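The abstract does not give the exact activation rule, so the following is a hedged sketch of one plausible round-start decision: a greedy selection that scores each idle sensor by the number of still-uncovered targets it reaches, weighted by its remaining energy (the function name and scoring rule are illustrative assumptions, not the paper's algorithm):

```python
def select_active_sensors(coverage, energy, targets):
    """Greedy per-round activation: repeatedly switch on the sensor whose
    score = (# uncovered targets it covers) * remaining energy is highest,
    until every target is covered or no sensor adds coverage.
    coverage: {sensor: set_of_targets}, energy: {sensor: float}."""
    uncovered = set(targets)
    active = []
    idle = sorted(coverage)           # deterministic tie-breaking
    while uncovered and idle:
        best = max(idle, key=lambda s: len(coverage[s] & uncovered) * energy[s])
        gain = coverage[best] & uncovered
        if not gain:
            break                     # no remaining sensor covers anything new
        active.append(best)
        idle.remove(best)
        uncovered -= gain
    return active, uncovered
```

Weighting coverage gain by residual energy spreads the duty cycle across rounds, which is the intuition behind integrating sensing ability and remaining energy in the abstract.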
Safety Message Data Transmission Model and Congestion Control Scheme in VANET
by Zhixiang Hou, Jiakun Gao
Abstract: When a VANET encounters a large traffic density, beacons produced by periodic safety messages may occupy the whole channel bandwidth, resulting in link congestion. To ensure safe message data transmission and effective congestion control, we propose an active safety-message congestion control framework, derived from an analysis of VANET features, that comprises channel detection, load estimation, congestion control, sending restoration, and so on. On this basis, we propose a novel congestion control mechanism that adjusts the beacon frequency and the vehicle communication model. Building on control theory and the features of VANETs, the active safety-information congestion structure is put forward first. Then the CACP algorithm is adopted to estimate the link bandwidth for congestion prediction. For periodic status messages and security messages, a channel assignment algorithm is also proposed to ensure there is enough channel resource to transfer emergency messages. As a consequence, status message accuracy and system safety can be ensured and the number of accommodated users can be increased, avoiding network congestion and improving channel utilization. The simulation results show that the proposed algorithm can effectively and accurately detect the link load degree, improve throughput, decrease delay, reduce network energy consumption and guarantee data fidelity. We conclude that the proposed scheme achieves a safe and efficient transmission mechanism for VANETs.
Keywords: VANET; Congestion control; Channel; Beacon; D-FPAV.
Design and Implementation of ITS Information Acquisition System under IoS Environment
by Ying Zhang, Jiajun Li, Baofei Xia
Abstract: The problems and efficiency of existing intelligent transportation system (ITS) models are studied, and the significance of the model and architecture of an ITS under the Internet of things (IoS) is discussed. Drawing on the research status at home and abroad concerning the relationship between the IoS and ITS, the necessity of introducing IoS technology into ITS is explained first. Then, based on the logical structure and physical model of ITS, the ITS structure model under an IoS environment is established. Furthermore, the system architecture of a complete and comprehensive ITS information acquisition system is formed. The design principles of system requirements, overall design, function module design and database design are described in detail. The key modules of the system are introduced and tested. The test results show that the acquisition system can effectively monitor vehicle speed and road traffic in real time, and obtain effective information about vehicles and the road environment. It can also send the acquired information via the Internet or a GPRS network to the data processing center for further processing and intelligent ITS decisions.
Keywords: ITS; IoS; information acquisition; GPRS; communication.
Traffic Route Optimization Based on Cloud Computing Parallel ACS
by Changyu Li
Abstract: Intelligent traffic demands a massive-data environment and high-performance processing, which requires a cloud computing platform to process the massive data and distributed parallel guidance algorithms to improve system efficiency. Therefore, this paper proposes an improved ACS algorithm based on cloud computing. It first adopts MapReduce to parallelize the traditional ACS, processing the problem in a distributed parallel mode and remedying the defects of ACS. The improved ACS applies the Map function to parallelize the most time-consuming part, that is, the independent solving process of each ant. The Reduce function is then used to describe the processes of pheromone updating and obtaining better solutions. At the same time, to address the defects of ACS, namely long search time and premature convergence to a non-optimal solution, we integrate a simulated annealing algorithm into ACS and describe the corresponding realization process. For the experiments, a Hadoop cloud computing platform was constructed, and the improved algorithm was run and tested on this platform. Analysis of the experimental results shows that our parallel ACS improves the query efficiency of the shortest path, and also has an advantage in running time and speedup ratio compared with classic algorithms.
Keywords: cloud computing; ACS; MapReduce; Traffic network; Pheromone.
Optimal configuration of M-for-N shared spare-server systems
by Hirokazu Ozaki
Abstract: In this study, we investigate the user-perceived availability of M-for-N shared spare-server systems. We assume that there are N identical working servers, each serving a single user group, and M identical shared spare servers in the system. We also assume that the time to failure of a server follows an exponential distribution, and the time needed to repair a failed server follows an Erlang type-k distribution. Under these assumptions, our numerical computation shows that there exists an optimal size (M + N) for shared spare-server systems with respect to availability and cost for a given condition.
Keywords: Cloud computing; user-perceived reliability; shared protection systems; probability distribution; availability.
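As a rough illustration of the model above, simplified to the Erlang k = 1 case (exponential repair) with a single repair crew and cold spares, the assumptions in parentheses being simplifications of the paper's more general setting, the birth-death steady state can be computed directly:

```python
def availability(N, M, lam, mu):
    """Steady-state user-perceived availability of an M-for-N system:
    birth-death chain over n = number of failed servers, failure rate
    `lam` per serving server, one repair crew with rate `mu`."""
    S = N + M
    # unnormalised state probabilities from the detailed balance equations
    p = [1.0]
    for n in range(S):
        birth = lam * min(N, S - n)     # only serving servers fail (cold spares)
        p.append(p[-1] * birth / mu)
    Z = sum(p)
    # a given user's server is up with probability min(N, S-n)/N in state n
    return sum(pn * min(N, S - n) / N for n, pn in enumerate(p)) / Z
```

Increasing M raises availability monotonically, so the optimal (M + N) in the paper comes from trading this gain off against server cost.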
Image analysis by efficient Gegenbauer Moments Computation for 3D Objects Reconstruction
by Bahaoui Zaineb, Hakim El Fadili, Khalid Zenkouar, Hassan Qjidaa
Abstract: In this paper, we suggest a new technique for the fast computation of Gegenbauer orthogonal moments for the reconstruction of 3D images/objects. A typical comparison of the proposed method with the conventional ZOA methods shows significant improvements in terms of error reduction, image quality and computation time. We then compare our new approach with existing methods using Legendre and Zernike moments in the case of 3D images/objects. The obtained results show that, although Legendre and Zernike moments are slightly better than Gegenbauer moments, the latter are still very efficient and give very good results in terms of MSE and PSNR, while Zernike moments have a higher computational cost than Gegenbauer moments.
Keywords: Gegenbauer Moments computation; Legendre Moments; Zernike moments; 3D images/object; Computation time.
Integration of a quantum scheme for key distribution and authentication within EAP-TLS protocol
by GHILEN AYMEN, Mostafa AZIZI
Abstract: The extensive deployment of wireless networks has led to significant progress in security approaches that aim to protect confidentiality. The current methods for exchanging a secret key within the Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) protocol are based on Public Key Infrastructure (PKI). Although this technique remains one of the most widely implemented solutions for authenticating users and ensuring secure data transmission, its security is only computational. In other words, with the emergence of the quantum computer, the existing cryptosystems will become completely insecure. Improving contemporary cryptographic schemes by integrating quantum cryptography becomes a much more attractive prospect, since its technology does not rely on difficult mathematical problems such as factoring large integers or computing discrete logarithms. Thus, we propose a quantum extension of EAP-TLS that allows a cryptographic key to be exchanged and a remote client to be authenticated with unconditional security, ensured by the laws of quantum physics. The PRISM tool is applied as a probabilistic model checker to verify specific security properties of the new scheme.
Keywords: EAP-TLS; Quantum Cryptography; Authentication; Key Agreement; Entanglement; PRISM; Model Checking.
A rapid detection method of earthquake infrasonic wave based on decision-making tree and the BP neural network
by Yun Wu, Zuoxun Zeng
Abstract: In this paper, a rapid detection method for earthquake infrasonic waves combining a decision-making tree and a neural network is proposed. The method is designed for the automated monitoring of earthquake occurrence and advance forecasting. Firstly, different kinds of signal data are collected and analyzed to find the most meaningful attributes, which are important for describing the features of the signal. Then, in the decision part, two decision lines are designed to supply a final result. In the first, the important attributes are chosen to be the nodes of the decision-making tree. In the second, many previously stored signals are analyzed using the neural network to build a mapping model between the attributes of signals and their classification. Lastly, the most suitable node sequence and thresholds are determined according to two experiments. Experimental analysis is also presented to discuss some important issues in the decision tree and to finalize the decision system.
Keywords: Infrasonic Wave; Earthquake; Decision-making Tree; Neural Network.
Data Dissemination on MANET by repeatedly transmission via CDN nodes
by Nattiya Khaitiyakun, Teerapat Sanguankotchakorn, Kanchana Kanchanasut
Abstract: Recently, much research on MANETs (Mobile Ad Hoc Networks) has been carried out due to their various applications in information exchange. Efficient data dissemination in such an infrastructure-less environment is considered one of the challenging issues. This paper proposes to adopt the concept of the CDN (Content Delivery Network) technique, normally used in the Internet, for disseminating information in a MANET. The source node disseminates data to surrounding nodes by repeatedly transmitting batches of packets via a set of CDN nodes acting as relay nodes. Our proposed data dissemination technique via CDN nodes is developed based on the OLSR (Optimized Link State Routing) protocol on MANET. A limited number of CDN nodes is selected from the MPRs (Multi-Point Relays) in OLSR in order to optimally cover all subscriber nodes and to avoid the interference problem. Packets are transmitted from the CDN nodes to destination nodes using the same broadcasting technique as the one adopted in MPR. In this work, the performance of our proposed technique is evaluated in terms of the probability of successful transmission by simulation using NS3. The performance is compared with typical OLSR and with a recent clustering-based data transmission algorithm for VANETs (Vehicular Ad Hoc Networks). Our proposed algorithm drastically improves the overall probability of successful transmission compared with typical OLSR. Additionally, it achieves a higher probability of successful transmission at high node density compared with the clustering-based data transmission algorithm for VANETs. Finally, a closed-form mathematical expression for the probability of successful transmission of our proposed algorithm in a multi-hop network environment is derived and verified.
Keywords: Content Delivery Network (CDN); MANET; OLSR.
Suboptimal Joint User Equipment Pairing and Power Control for Device-to-Device Communications Underlaying Cellular Networks
by Chaoping Guo, Xiaoyan Li, Wei Li, Hongyang Li
Abstract: In Device-to-Device (D2D) communications underlaying cellular networks, the cellular interference to D2D User Equipment (DUE) is larger than the D2D interference to Cellular User Equipment (CUE). A joint resource allocation scheme is presented that performs both user equipment pairing and power allocation to minimize the total interference to DUEs and CUEs. The scheme is composed of two parts: first, the base station assigns power to each CUE and each D2D transmitter by a graphical method; second, it selects the optimal CUE to pair with each D2D pair by a modified Hungarian algorithm in order to minimize total interference. Simulation results show that the proposed scheme not only decreases the interference caused by D2D pairs and by CUEs, but also increases the number of permitted D2D connections.
Keywords: device-to-device communications; power control; resources allocation; Suboptimal Joint; Cellular Networks; User Equipment.
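The pairing step can be viewed as an assignment problem: given an interference matrix (the values below are hypothetical), find the CUE-to-D2D mapping with minimum total interference. The sketch enumerates assignments instead of using the paper's modified Hungarian algorithm, but reaches the same optimum on small instances:

```python
from itertools import permutations

def pair_cues_with_d2d(interference):
    """Optimal CUE <-> D2D pairing minimising total interference.
    interference[i][j] = combined interference when CUE i shares its
    resource block with D2D pair j.  Brute-force enumeration is O(n!);
    the Hungarian algorithm solves the same problem in O(n^3)."""
    n = len(interference)
    best_cost, best = float('inf'), None
    for perm in permutations(range(n)):
        cost = sum(interference[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best = cost, perm
    return best, best_cost
```

For example, with three CUEs and three D2D pairs, `pair_cues_with_d2d([[4, 1, 3], [2, 0, 5], [3, 2, 2]])` pairs CUE 0 with D2D 1, CUE 1 with D2D 0 and CUE 2 with D2D 2 for a total interference of 5.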
A Cognitive Approach of Collaborative Requirements Validation based on Action Theory
by Sourour MAALEM, Nacereddine ZAROUR
Abstract: Requirements must be validated at an early stage of the analysis. Requirements validation usually involves natural language, which is often inaccurate and error-prone, or formal models, which are difficult for non-technical stakeholders to understand and use. The majority of existing approaches validate requirements in a heterogeneous process, using a variety of relatively independent techniques, without any methodological or cognitive approach in which mechanisms of human or artificial thought are used. In this work we present a cognitive approach to collaborative requirements validation based on the theory of action, through a set of steps intended to increase the involvement of the client at this stage of the engineering cycle and to bring the customer's mental model closer to the analyst's. In the proposed process, the analyst starts by extracting needs one by one from the requirements documents. Each need goes through a step of formulating the intention, which transforms needs into requirements. This transformation is performed by respecting a new syntax and generating a checklist of quality attributes from the viewpoint of each stakeholder. This is followed by a step that specifies, on the basis of the intentions, the actions that the analyst perceives and verifies in order to build a rapid prototype of the software interface, executable on a machine. The customer perceives the prototype, interprets it and validates the needs. A database of valid needs is created; needs that remain invalid must be negotiated, and if conflicts persist, an error base is created.
At the end of this collaborative requirements validation process, decisions are made concerning who must participate in needs validation meetings, with respect to a mental effort metric that classifies the mental difficulty of executing the prototype and the articulatory and semantic problems, and measures the commitment and motivation of stakeholders.
Keywords: Requirements engineering; Requirements validation; Action Theory; Prototype; Cognitive Approach.
A comprehensive review and evaluation of LPT, MULTIFIT, COMBINE and LISTFIT for scheduling identical parallel machines
by Dipak Laha, Dhiren Kumar Behera
Abstract: This paper addresses the problem of scheduling n independent jobs processed non-preemptively on m identical parallel machines with the objective of minimising makespan. We consider four popular construction algorithms from the literature, LPT, MULTIFIT, COMBINE and LISTFIT, which are used for minimising makespan in identical parallel machine scheduling problems. The objectives of this study are twofold: first, a critical review of the previous literature on these algorithms is reported; next, we present an experimental framework to investigate their performance in a comprehensive comparative evaluation. The computational results reveal that LISTFIT performs best among the four algorithms in most problem instances across different problem sizes. Regarding computational times, LISTFIT, due to its higher time complexity, requires more time than MULTIFIT and COMBINE, whereas LPT consumes the least computational time.
Keywords: scheduling; parallel machine scheduling; makespan; optimisation; algorithm.
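Of the four algorithms, LPT is the simplest to sketch: sort the jobs in non-increasing processing time and always assign the next job to the currently least-loaded machine. A minimal version:

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time first on m identical machines.
    A min-heap of machine loads gives the least-loaded machine in
    O(log m) per job; returns the resulting makespan."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)
```

For instance, `lpt_makespan([5, 5, 4, 4, 3, 3, 3], 3)` yields 11 even though the optimal makespan is 9 (5+4, 5+4, 3+3+3), consistent with LPT's 4/3 − 1/(3m) worst-case bound; MULTIFIT, COMBINE and LISTFIT trade extra computation for tighter results.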
Multi-image authentication in frequency domain
by Anirban Goswami, Nabin Ghoshal
Abstract: Today, multimedia documents are widely used for data authentication. In this paper, we propose a model for image authentication with robustness against certain attacks. To achieve this, we effectively utilise two distinct colour cover images simultaneously and apply the discrete cosine transform to convert them from the spatial to the frequency domain. The payload data, payload size and a 160-bit message digest are inserted in blocks of pixels of size 2 × 2 taken alternately from the two images. The embedding capacity is increased by fabricating two payload bits in each converted frequency component, while for enhanced security we use pseudo-random insertion positions for the payload bits. On the receiving end, the integrity of the extracted payload is verified using the message digest. The algorithm has been tested effectively against certain steganographic attacks. The experimental results have also been analysed using different metrics and compared with other similar algorithms.
Keywords: image authentication; discrete cosine transform; DCT; inverse discrete cosine transform; IDCT; message digest; MD; statistical attack; visual attack; collusion attack; HSI.
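The paper's embedding (two payload bits per coefficient, pseudo-random positions, message digest) is more elaborate; as a minimal illustration of frequency-domain insertion in a 2 × 2 block, the sketch below embeds a single bit by quantisation-index modulation of one DCT coefficient (the QIM rule and `step` value are illustrative assumptions, not the paper's scheme):

```python
import math

R = 1.0 / math.sqrt(2.0)

def dct2x2(b):
    """Orthonormal 2x2 DCT-II (rows, then columns).  The 2-point DCT
    matrix is symmetric and orthonormal, so the transform is its own
    inverse."""
    r = [[R * (row[0] + row[1]), R * (row[0] - row[1])] for row in b]
    return [[R * (r[0][j] + r[1][j]) for j in range(2)],
            [R * (r[0][j] - r[1][j]) for j in range(2)]]

idct2x2 = dct2x2  # self-inverse, see above

def embed_bit(block, bit, step=8):
    """Force the quantised mid coefficient to the parity of `bit`."""
    d = dct2x2(block)
    q = round(d[0][1] / step)
    if q % 2 != bit:
        q += 1
    d[0][1] = q * step
    return idct2x2(d)

def extract_bit(block, step=8):
    return round(dct2x2(block)[0][1] / step) % 2
```

Because only one mid-frequency coefficient moves by at most one quantisation step, the pixel-domain distortion stays small, which is why DCT-domain insertion survives mild processing better than spatial LSB embedding.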
Wavelet-FastICA-based separation method for single-channel and time-frequency overlapped signal in electromagnetic surveillance
by Lihui Pang, Bin Tang
Abstract: In the signal processing community, much attention has been paid to blind source separation (BSS) due to its 'blind' property and wide applications. However, there are still some open problems, such as single-channel BSS (SCBSS). In this paper, an SCBSS method called wavelet-fast independent component analysis (wavelet-FastICA) is proposed for simultaneously received multi-system frequency-overlapped signals in a single-channel electromagnetic surveillance system. Firstly, a wavelet is employed to decompose the single-channel recording into high-dimensional data, as most ICA algorithms rely on spatial (i.e., multichannel) analysis. The Morlet wavelet is selected in this work for its non-orthogonality. Then, the method applies FastICA to the wavelet decomposition results so as to find the independent components. Finally, the spectra of the independent components (ICs) provided by FastICA are used to identify and recover the original sources. Numerical simulation results obtained in evaluating the proposed methodology's performance confirm the effectiveness of the proposed algorithm and demonstrate its anti-noise superiority.
Keywords: single-channel BSS; SCBSS; frequency-overlapped signal; wavelet-FastICA; electromagnetic surveillance system.
A cross layer protocol to mitigate effects of radio's linear impairments
by Abhay Samant, Sandeep Kumar Yadav, Venkataramana Badarla, Shirish Mishra
Abstract: The impact of signal-to-noise ratio (SNR) of an additive white Gaussian noise (AWGN) channel on bit error rate (BER) has been extensively studied in the literature through theoretical, simulation-based, and experimental results. However, it is too simplistic to assume that BER in a real-world communication system is influenced by AWGN alone. The system's radio, which contributes both linear and nonlinear impairments, also impacts BER. This paper studies the impact of linear impairments, namely quadrature skew and gain imbalance, on the BER of an angular digital modulation scheme, such as M-ary phase shift keying (M-PSK). It presents a mathematical derivation of a technique to measure these impairments. It implements this technique on an experimentation system and presents statistical properties of these measurements. A novel contribution of this paper is a cross layer protocol through which the receiver shares these measured values with the transmitter. The transmitter uses this information to combat the effects of radio's impairments on BER, thereby improving system performance. This paper also presents a cost benefit analysis which proves that the idea proposed in this paper is feasible and practical to implement on real-world systems.
Keywords: cross layer design; gain imbalance; quadrature skew; radio front end; throughput; USRP.
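A common baseband model of the two linear impairments, and its inversion given the measured gain imbalance and quadrature skew, can be sketched as follows (this is one of several conventions in use, chosen as an assumption for illustration; it is not necessarily the paper's derivation):

```python
import math

def impair(sym, gain, skew):
    """Apply gain imbalance and quadrature skew to a complex baseband
    symbol: I passes through, Q is scaled by `gain` and leaks a
    `skew`-radian rotated copy of I."""
    i, q = sym.real, sym.imag
    return complex(i, gain * (q * math.cos(skew) + i * math.sin(skew)))

def correct(sym, gain, skew):
    """Invert the model above, given the measured gain and skew (the
    quantities the receiver would feed back to the transmitter)."""
    i = sym.real
    q = (sym.imag / gain - i * math.sin(skew)) / math.cos(skew)
    return complex(i, q)
```

Since the model is linear and invertible for cos(skew) ≠ 0, feeding the measured parameters back, as the proposed cross layer protocol does, lets either end undo the distortion before it degrades BER.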
Location area planning problem in WiMAX networks using nature inspired techniques: performance study
by J. Sangeetha, Nikhil Goel, Ram P. Rustagi, K.N. Balasubramanya Murthy
Abstract: Worldwide Interoperability for Microwave Access (WiMAX) is a broadband wireless technology that provides an efficient service to mobile stations (MSs). Whenever communication must be established and service provided to MSs, the network has to track the location of the MSs through the base station. Tracking the location of the MSs is a very difficult and complex problem in a WiMAX network. This paper discusses the location area planning problem, which can be solved by partitioning the WiMAX network into location areas so that the cost per call arrival is minimum. Finding the optimal number of location areas and the corresponding configuration of the partitioned network is an NP-complete problem. In this study, we use nature inspired techniques, namely the genetic algorithm (GA), artificial bee colony (ABC) and artificial immune system (AIS), to find an optimal solution to the location area planning problem. The performance of all these nature inspired techniques is analysed and compared, and we gauge their suitability for solving the location area planning problem. From the obtained results, we conclude that ABC gives the better optimal solution, while AIS takes less computational time to locate the optimal solution.
Keywords: location area planning; WiMAX network; nature inspired techniques.
New image quality assessment metric based on distortion classification
by Xin Jin, Mei Yu, Shanshan Liu, Yang Song, Gangyi Jiang
Abstract: Image quality assessment (IQA) has been a crucial task in image processing applications. In this paper, we propose an adjusted structural similarity (ASSIM) metric that considers the properties of different distortion types. In detail, we firstly define four particular features in accordance with the different characteristics of distortion types. Secondly, a support vector machine-based multi-classifier is built to identify the distortion type of each distorted image on the basis of those four features. Thirdly, the well-known SSIM metric is adjusted by reallocating weighting values among its three evaluating factors for each distortion type. Finally, by combining the distortion classification (DC) and ASSIM, the subjective quality of each image is predicted. The experimental results derived from public test image databases show that the proposed DC method outperforms existing methods across databases and degradations. Moreover, the proposed ASSIM metric can achieve high consistency with subjective perception.
Keywords: image processing; image quality assessment; IQA; distortion classification; support vector machine; SVM; multi-classifier.
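The reweighting idea in the abstract builds on SSIM's standard factorisation into luminance, contrast and structure terms, each raised to an exponent. A minimal single-window sketch with adjustable exponents is shown below; the exponent values are placeholders, not the weights the paper derives per distortion type, and real SSIM is computed over local windows rather than globally.

```python
import numpy as np

def ssim_global(x, y, alpha=1.0, beta=1.0, gamma=1.0, L=255):
    """Single-window SSIM with adjustable exponents on the luminance (l),
    contrast (c) and structure (s) terms: SSIM = l^alpha * c^beta * s^gamma.
    Non-integer gamma assumes the structure term is non-negative."""
    x, y = x.astype(float), y.astype(float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilisers
    C3 = C2 / 2
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()           # cross-covariance
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    s = (sxy + C3) / (sx * sy + C3)
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```

For identical images all three terms equal 1, so the metric returns 1 regardless of the exponent allocation; reallocating the exponents only changes how the metric penalises specific degradations.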
Enhancing clustering accuracy by finding initial centroid using k-minimum-average-maximum method
by S. Dhanabal, S. Chandramathi
Abstract: Determining the initial seed for clustering is an issue in k-means which has attracted considerable interest, especially in recent years. Despite its popularity among clustering algorithms, k-means still has many problems: it converges to local optimum solutions, the results obtained depend strongly on the selection of the initial seeds, the number of clusters needs to be known in advance, etc. Various initialisation methods have been proposed to improve the performance of the k-means algorithm. In this paper, a novel approach, k-minimum-average-maximum (k-MAM), is proposed for finding the initial centroids by considering distances at the extreme ends. The proposed algorithm is tested with UCI repository datasets and data collected from Facebook. We compare our proposed method with simple k-means and the k-means++ initialisation method in terms of efficiency and effectiveness. The results show that the proposed algorithm converges very fast with better accuracy.
Keywords: clustering; k-means; initialisation techniques; k-minimum-average-maximum; k-MAM.
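For reference, the k-means++ seeding that the abstract uses as a comparison baseline can be sketched as follows. This is the standard algorithm (each new centroid is drawn with probability proportional to its squared distance from the nearest centroid already chosen), not the proposed k-MAM method, whose details are not given in the abstract.

```python
import random

def kmeans_pp_init(points, k):
    """Standard k-means++ seeding over a list of coordinate tuples."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    centroids = [random.choice(points)]
    while len(centroids) < k:
        # Squared distance of each point to its nearest chosen centroid.
        d2 = [min(dist2(p, c) for c in centroids) for p in points]
        r = random.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):   # weighted roulette-wheel draw
            acc += w
            if acc >= r:
                centroids.append(p)
                break
        else:  # numerical edge case: fall back to the last point
            centroids.append(points[-1])
    return centroids
```

Because far-away points carry large weights, the seeds tend to spread across clusters, which is the behaviour any improved initialisation (k-MAM included) is measured against.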
New degree distribution design scheme for LT codes
by Meng Zhang, Weijia Lei
Abstract: Decoding efficiency deteriorates badly when existing degree distributions are used to encode source blocks with a small number of packets. In this paper, we present a new way to design a degree distribution for LT codes. First, we combine two degree distributions in a certain proportion and then further adjust the degrees by optimising the ripple size, so as to obtain a new degree distribution that performs well when the message size is small. Through Monte Carlo simulation experiments, we have verified that the degree distribution generated by our scheme can greatly improve encoding and decoding performance, which remains good even when the source data size becomes small.
Keywords: LT codes; degree distribution; ripple size.
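As background, most LT-code designs start from Luby's robust soliton distribution, and the abstract's first step, blending two distributions in a given proportion, is a simple convex combination. The sketch below shows both; the parameters and the blending weight are illustrative, not the paper's optimized values.

```python
import math

def robust_soliton(k, c=0.05, delta=0.5):
    """Robust soliton degree distribution over k source packets:
    ideal soliton rho plus the spike term tau, then normalised.
    Returns a list p where p[d] is the probability of degree d."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * (k + 1)
    spike = int(round(k / R))
    for d in range(1, k + 1):
        if d < spike:
            tau[d] = R / (d * k)
        elif d == spike:
            tau[d] = R * math.log(R / delta) / k
    Z = sum(rho) + sum(tau)                     # normalising constant
    return [(rho[d] + tau[d]) / Z for d in range(k + 1)]

def mix(p1, p2, w):
    """Blend two degree distributions with proportion w and renormalise,
    mirroring the combination step described in the abstract."""
    mixed = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
    s = sum(mixed)
    return [x / s for x in mixed]
```

For small k the spike of the robust soliton sits at a low degree, which is exactly the regime where tuning via the ripple size (the paper's second step) matters most.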
Cronbach alpha reliability coefficient-based reputation mechanism for mitigating root node attack in MANETs
by S. Parthiban, Paul Rodrigues
Abstract: In MANETs, establishing a secure and trustworthy route between the source and the destination nodes is always difficult due to the lack of centralised infrastructure and the high dependency on intermediate nodes for routing packets during multicast communication. Moreover, reliable dissemination of data necessitates a trustworthy route that is free from root node attack, which disrupts the degree of coordination in group communication. Hence, a mechanism is needed to mitigate root node attack. In this paper, we propose a Cronbach alpha reliability coefficient-based reputation mechanism (CARCRM) for mitigating root node attack in an ad hoc environment. In this mitigation mechanism, the detection of malicious nodes is based on a factor called the Cronbach alpha reliability coefficient (CARC), which aids in estimating the reputation level of every mobile node participating in the group, thereby mitigating root node attack effectively and efficiently. The performance analysis of CARCRM is carried out using the ns-2 simulator, considering performance metrics such as packet delivery ratio, throughput, control overhead and total overhead. The simulation results obtained portray that the proposed CARCRM outperforms the existing mechanisms proposed for mitigating root node attack.
Keywords: root node attack; multicast ad hoc on-demand distance vector; MAODV; Cronbach alpha reliability coefficient; CARC; optimal mitigation point; group communication; packet drop variance.
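The Cronbach alpha reliability coefficient on which the reputation mechanism is built has a standard closed form, alpha = k/(k-1) * (1 - sum(var_i) / var_total). A minimal sketch is shown below; how node observations are mapped onto the "items" is the paper's contribution and is not reproduced here.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score vectors (one list per
    item, each with the same number of observations). Population
    variances are used throughout."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score of each observation across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))
```

When the items are perfectly correlated, alpha equals 1; as they decorrelate, alpha falls, which is the property a reputation scheme can exploit to flag inconsistent (potentially malicious) behaviour reports.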
Special Issue on: Collaboration Technologies and Systems for Disaster Management
Coordinated Route Reconfiguration for Throughput Optimization under Rician Fading Channel
by Adnan Fida, Nor Tuah Jaidi, Trung Dung Ngo
Abstract: This paper focuses on high-throughput data transmission to deliver ample sensor data, such as images or videos, in a post-disaster scenario using mobile wireless sensor networks. We formulate the route optimization problem in the presence of a Rician fading channel. The coordinated route reconfiguration strategy integrates communication quality, a shortest-path algorithm, particle swarm optimization, and the mobility of multiple routers to optimize the end-to-end throughput of data transmission routes. We show how a coordinator can be used to identify routers with critical links and to gradually manoeuvre them towards a reconfigured route with higher end-to-end throughput. The evaluation of our solution shows that the proposed strategy considers the inherent characteristics of the Rician fading channel and exploits the routers' mobility to provide routes with better performance than those generated by a non-coordinated route reconfiguration framework.
Keywords: Throughput Optimization; Rician Fading Channel; Communication Aware; Particle Swarm Optimization; Mobility.
Coverage Enhancement with Occlusion Avoidance in Networked Rotational Video Sensors for Post-Disaster Management
by Nawel Bendimerad, Bouabdellah Kechar
Abstract: Wireless Video Sensor Networks (WVSNs) have recently emerged as a new class of sensor networks in which large amounts of visual data are sensed and processed in real time. This kind of network, strengthened by the rotation capability of each video sensor node thanks to dramatic advances in miniature robotics and its increasingly affordable prices, is envisioned to be deployed in the physical environment to monitor, with great flexibility, a plethora of real-world phenomena such as surveillance, monitoring and disaster recovery. In this work, we propose a WVSN model for post-disaster management in order to assist search and rescue operations by locating survivors, identifying risky areas and making the rescue crew more aware of the overall situation. To build an overview of the environment and to assess the current situation, rotatable video sensors are randomly deployed and used to switch to the best direction, with the purpose of attaining high coverage while avoiding obstacles. To address the fault tolerance problem in WVSNs, which may arise in the case of damage caused by disasters, we build potential cover sets using redundant video sensors to ensure field-of-view coverage of failed nodes. We have conducted a set of comprehensive experiments using the well-known OMNeT++ simulator, and the obtained results reveal that our proposal gives better performance in terms of coverage enhancement and fault tolerance.
Keywords: Wireless Video Sensor Network; coverage; scheduling; fault tolerance; occlusion; obstacles avoidance; OMNeT++.
Constructing collective competence: A new CSCW-based Approach
by Djalal Hedjazi
Abstract: In the majority of contexts, it is individuals who are considered competent or incompetent. However, in many cases it is the performance of groups and teams that matters most. This implies a concept of collective competence that integrates the set of skills in a group. In addition, the collective competence construction process is enriched through collaboration, which implies exchanges, confrontations, negotiations and interpersonal interactions.
This paper presents our CSCW-based approach supporting collective competence construction. As a case study, we consider the industrial maintenance workspace, which is fundamentally a collaborative context. Our contribution in this area led us, first, to analyze the related tasks in order to highlight the vital needs of collaborative maintenance and to design the required group awareness supports, which are then used to support collective competence. Finally, an experimental study identifies the most effective group awareness tools.
Keywords: Computer-supported cooperative work; Collective competence construction; Groupware assessment; collaborative E-maintenance.
RTCP: a Redundancy aware Topology Control Protocol for Wireless Sensor Networks
by Bahia Zebbane, Manel Chenait, Chafika Benzaid, Nadjib Badache
Abstract: Topology control based on sleep scheduling aims at exploiting node redundancy to save energy and extend the network lifetime by putting as many nodes as possible into sleep mode while maintaining a connected network. In this paper, we propose a redundancy-aware topology control protocol (RTCP) for wireless sensor networks, which exploits sensor redundancy within the same region. This is achieved by dividing the network into groups so that a connected backbone can be maintained by keeping a necessary set of working nodes and turning off the redundant ones. RTCP allows applications to parameterize the desired connectivity degree. It identifies node redundancy in terms of communication, groups redundant nodes together according to their redundancy degrees and connectivity-level threshold, and finally schedules the nodes in each group for active or sleep mode. The simulation results illustrate that RTCP outperforms other existing algorithms in terms of energy conservation, network lifetime and connectivity guarantee.
Keywords: WSNs; topology control; energy conservation; duty cycling; connectivity; node redundancy; leader election.
Architecture for Gathering and Integrating Collaborative Information for Decision Support in Emergency Situations
by Tiago Marino, Maria Luiza Campos, Marcos Borges
Abstract: The involvement of citizens in supporting crisis situations is no longer a new phenomenon. In the past, the lack of data was one of the main barriers faced by public managers in the decision-making process. Today the situation has been reversed, such that the challenge is managing an excessive mass of data, which is totally dynamic and originates from different sources such as remote environmental sensors, social networks, and response teams in the field. During an emergency response, the concern is no longer being able to collect data for a better understanding of the affected environment, but knowing how to organize, aggregate and separate what is actually useful for crisis managers. This research proposes a collaborative information architecture that considers aspects of environmental complexity in the context of emergency scenarios, in order to support response teams in decision making by gathering and integrating information originating from different media resources and, hence, enriching a decision team's current contextual knowledge base.
Keywords: Emergency; Heterogeneous Information Sources; Complex Systems; Information Processing; Crisis and Disaster Management; Social Media; Collaboration; Domain Vocabulary; Decision Support; Data Integration.
A Framework combining Agile, User-centered design and Service Oriented Architecture approaches for Collaborative Disaster Management system Design
by Karima Ait Abdelouhab, Djilali Idoughi, Christophe Kolski
Abstract: Disaster management involves a special type of complex human organization in which heterogeneous human actors belonging to different authorities collaborate and work together with the shared aim of resolving, or at least mitigating, the disaster situation. Collaboration, both within team members and with other teams operating at the disaster site(s), is therefore very critical and complex; the achievement of the desired goal depends heavily on it. Interactive and easy-to-use services in these scenarios are very valuable and necessary, as they can improve collaboration, coordination and communication among teams. For this purpose, in this paper we propose a novel design framework for complex disaster management systems. We combine agile characteristics and principles, user-centered techniques and the service-oriented architecture paradigm. Our aim is to take into account the needs of disaster managers in an iterative development process, to improve the involvement of human actors in design projects, and to accommodate changes in order to produce highly usable and interactive service-based collaborative services.
Keywords: User-centered design; agile methods; SOA; Service design; Disaster management; collaboration.
A Hybrid Ad Hoc Networking Protocol for Disaster Recovery Resource Management
by Doug Lundquist, Aris Ouksel
Abstract: Following a major disaster, the infrastructure supporting wired and mobile networking is expected to be inoperable over large areas. Thus, emergency response teams must communicate using their own networking equipment. A large-scale peer-to-peer network offers fast and flexible deployment but requires cooperation among nodes. Usage constraints must be imposed to prevent overloading the shared network capacity. In particular, disparate communication models (for route building and optimization, resource management, and localized status flooding) must be integrated. We propose a hybrid network protocol which dynamically assigns network capacity to these three communication models by imposing transmission delays in accordance with their attempted usage rates.
Keywords: Disaster recovery; communication systems; ad hoc networking; hybrid protocol design; contingency theory.
Medium Access Control in Wireless Sensor Networks: A Survey
by Mohamed Hefeida, Ashfaq Khokhar
Abstract: Wireless Sensor Networks (WSNs) are being integrated across a wide spectrum of military, commercial, and environmental applications, such as field surveillance, environmental monitoring, and disaster management. This variety in WSN applications has led to the development of a large number of Medium Access Control (MAC) protocols with different objectives. A common goal of these protocols, however, is preserving energy in order to maximize network lifetime. In an effort to facilitate the study of existing MAC protocols and the development of novel medium access techniques, this paper presents a classification and critical review of existing MAC protocols adopted in different WSN environments. We classify these protocols based on channel access into three main categories: contention-based, contention-free, and hybrid protocols. Cross-layer protocols (involving the MAC layer) are also studied. We describe the major characteristics of these classes, the differences among them, and possible improvements, and outline ongoing and future research challenges.
Keywords: Medium Access Control; Duty cycling; Energy Efficiency; Contention; Wireless Sensor Networks; Disaster Management.
Special Issue on: Cloud Computing and Data Centre Security
A Review: The Effects of Imperfect Data on Incremental Decision Trees
by Hang Yang, Xiaobin Guo, Huajun Chen, Zhiqiang Lin
Abstract: The decision tree, as one of the most widely used methods in data mining, has been applied in many realistic applications. Incremental decision trees handle the streaming-data scenario, which is applicable to big data analysis. However, imperfect data are unavoidable in real-world applications. Studying state-of-the-art incremental decision tree induction using the Hoeffding bound, we investigated the influence of imperfect data on the decision tree model. We found that imperfect data worsen the performance of decision tree learning, resulting in lower accuracy and higher resource consumption. This paper should serve as a useful reference for future research: when designing a new generation of incremental decision trees, the negative effects of imperfect data should be overcome.
Keywords: data mining; data stream mining; classification.
The Design of Strain Sensitizing of High-sensitivity SAW Sensor Based on FBG
by Wei Zhang
Abstract: Ensuring the integrity of the oil and gas transport pipe network is very necessary for the steady development of the oil and gas industry. Many oil and gas pipelines in China have been in service for more than two decades, which implies a big security risk, and pipeline leakage is a typical example of such security incidents. Therefore, establishing an effective pipeline leak detection system is a priority. We propose FBG (fiber Bragg grating)-based optical fiber sensors for detecting pipeline leakage in oil and gas transportation systems. By placing a sensor on each end of a pipe, the velocity and position of the pipeline leakage can be detected through a series of calculations involving correlation analysis, the time difference and the characteristics of surface waves. We improve the sensitivity and the positioning accuracy of the FBG-based optical fiber sensors by enhancing the FBG strain response sensitivity.
Keywords: oil and gas transport; pipeline leakage; surface wave; FBG based optical fiber sensors
A Novel Identity-based Anonymous Authentication Scheme from Multilinear Maps
by Zhengjun Jing, Guoping Jiang, Chunsheng Gu
Abstract: Anonymous authentication is very useful for protecting users' privacy, and plays an important role in building e-commerce that involves many partners, such as in cloud computing. With its anonymity property, the ring signature provides a cryptographic tool for constructing a secure authentication scheme. In this paper, we construct an identity-based ring signature (IBRS) based on the Garg-Gentry-Halevi (GGH) graded encoding system, a candidate multilinear map from ideal lattices, and prove its security in the random oracle model. Under the GGH graded decisional Diffie-Hellman (GDDH) assumption, the proposed ring signature guarantees the anonymity of the signer against full key exposure attacks. Regarding unforgeability, under the GGH graded computational Diffie-Hellman (GCDH) assumption, the new scheme provides unforgeability both against selectively chosen subring attacks and against insider corruption.
Keywords: Multilinear maps; ring signature; anonymity; privacy; cloud computing.
Quantum Information Exchange Protocol Associated with the Quantum Cloud
by Xiaoqing Tan, Xiaoqian Zhang
Abstract: Quantum cryptography discusses the security of quantum information. Cloud computing has emerged as a computational paradigm and an alternative to conventional computing, and cloud security is associated with it. In this paper, we define the new concept of the quantum cloud and discuss the security of quantum information exchange on the quantum internet. A tripartite simultaneous quantum information exchange protocol associated with the quantum cloud, based on entanglement swapping and Bell states, is proposed. The proposed secure quantum information exchange protocol can resist intercept-and-resend, intercept-and-measure, intercept-and-entangle-auxiliary and denial-of-service attacks. It can also be generalized to the N-party case, which is feasible and efficient.
Keywords: cloud computing; quantum cloud; quantum information exchange; intercept-and-resend attack; intercept-and-measure attack; intercept-and-entangle auxiliary attack; denial-of-service attack.
An Efficient Speech Perceptual Hashing Authentication Algorithm Based on DWT and Symmetric Ternary String
by Zhang Qiuyu
Abstract: Since existing speech perceptual hashing methods are not appropriate for real-time speech content authentication in mobile computing environments, a novel DWT-based perceptual hashing algorithm, which uses combined features from the time domain and the frequency domain, is proposed to protect speech data in the cloud. Firstly, by discrete wavelet transform (DWT), a new frequency-domain signal is generated from the original speech signal after pre-processing and intensity-loudness transform (ILT). Secondly, the coefficients of the low-frequency wavelet decomposition are partitioned into equal-sized, non-overlapping blocks, and the logarithmic short-time energy of each block is computed to obtain the frequency-domain features of the speech signal. Finally, combined with the spectral flux features (SFF) of the speech signal in the time domain, a ternary perceptual hashing sequence is created. Experimental results illustrate that the ternary form represents the hash digest better than the binary form, and that the proposed algorithm has good robustness against content-preserving operations, good discrimination, good compaction and high efficiency, and supports tamper localization as well.
Keywords: speech perceptual authentication; perceptual hashing; DWT; symmetric ternary string; tamper localization
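The shape of the pipeline (wavelet decomposition, block log-energies, ternary quantisation) can be illustrated with a one-level Haar DWT. The sketch below is a drastically simplified stand-in with made-up block counts and thresholds; it is not the paper's algorithm, which also uses intensity-loudness transform and spectral flux features.

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal) - 1, 2)]
    return a, d

def ternary_hash(signal, n_blocks=8):
    """Block log-energies of the low-frequency band, quantised to a
    symmetric ternary alphabet {-1, 0, 1} around the median energy."""
    approx, _ = haar_dwt(signal)
    size = max(1, len(approx) // n_blocks)
    energies = []
    for b in range(n_blocks):
        block = approx[b * size:(b + 1) * size]
        energies.append(math.log(sum(x * x for x in block) + 1e-12))
    med = sorted(energies)[len(energies) // 2]
    spread = (max(energies) - min(energies)) / 6   # ad hoc dead zone
    if spread == 0:
        spread = 1.0
    return [0 if abs(e - med) < spread else (1 if e > med else -1)
            for e in energies]
```

Because the digest compares each block's log-energy to the median, uniform amplitude scaling shifts all energies by the same constant and tends to leave the hash unchanged, which is the kind of robustness to content-preserving operations the abstract targets.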
Intelligent Phishing Detection System using Similarity Matching Algorithms
by B B Gupta
Abstract: Today, the phishing attack is one of the most common and serious threats on the Internet. It is used to defraud users and steal their personal information through spoofed emails, fake websites or both. In this paper, we propose a novel intelligent phishing detection system, CUMP (CSS and URI Matching based Phishing detection system), to detect zero-day phishing attacks. Our approach is based on the concept of URI (Uniform Resource Identifier) and CSS (Cascading Style Sheet) matching. This concept is used because phishers always try to mimic the URI pattern and the visual design, in the hope that even experienced users will not be able to detect a phishing website by visual inspection. To mimic the visual appearance, phishers generally use the same CSS style; without it, achieving the same design is very difficult. To defend against phishing website attacks, especially zero-day attacks, our system uses these basic properties of phishing attacks for URI and CSS matching. The proposed solution is very effective in detecting a wide range of website phishing attacks, with TP and TN rates of 93.27% and 100%, respectively, and results in a low false-positive rate.
Keywords: Intrusion; spoofing; phishing; website; e-mail; URI; CSS; zero-day attack.
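A similarity-matching core of the kind the abstract describes can be illustrated with Jaccard similarity over CSS rules and URI tokens: a page whose styling and URI pattern closely match a protected target while being hosted elsewhere is suspicious. The weights and the scoring function below are illustrative placeholders, not CUMP's actual parameters or matching procedure.

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def phishing_score(page_css_rules, page_uri_tokens,
                   target_css_rules, target_uri_tokens,
                   w_css=0.6, w_uri=0.4):
    """Weighted combination of CSS-rule and URI-token similarity to a
    known legitimate target; a score near 1 on a page served from a
    different host would be flagged as a likely phishing attempt."""
    return (w_css * jaccard(page_css_rules, target_css_rules)
            + w_uri * jaccard(page_uri_tokens, target_uri_tokens))
```

Because the comparison is against the legitimate site's own CSS and URI structure rather than against a blacklist, this style of matching can in principle flag pages never seen before, which is the zero-day property the abstract emphasises.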
Effective and Secure Data Storage in Multi-Cloud Storage Architectures
by Syam Kumar Pasupuleti
Abstract: In multi-cloud storage, data owners host their data on cloud servers and access it from those servers; we consider the existence of multiple cloud service providers that cooperatively store and maintain the owners' data. However, this new paradigm of data-hosting service also introduces new security challenges, such as the integrity, availability and confidentiality of data. Prior work on ensuring remote data integrity in multi-cloud storage often fails to address the confidentiality and availability issues, which are always important aspects of quality of service (QoS). In this paper, we propose an effective and secure data storage protocol for multiple cloud environments. In our design, we encrypt and encode the data before outsourcing it to the clouds, to ensure the confidentiality and availability of the data, respectively. Then, we consider the task of allowing a third-party auditor (TPA) to verify the integrity of the data stored in the multi-cloud, which greatly reduces the computational overhead of the data owner. Further, we extend our verification method to support batch verification for multiple owners to improve system performance. Extensive security and performance analysis and experimental results show that our scheme is efficient and more secure in multi-cloud environments against data corruption and data leakage. Finally, we compare the results of our scheme with existing schemes.
Keywords: multi-cloud; data storage; availability; integrity; confidentiality; bilinear maps; sobol sequence; raptor codes.
Grey Neural Network Prediction Model based on Fruit Fly Optimization Algorithm and its application
by Yang Jing
Abstract: The vinyl acetate (VAC) polymerization rate is considered an important quality index in the production of polyvinyl alcohol. However, the quality of polyvinyl alcohol cannot be controlled effectively because the rate cannot be measured online. Therefore, designing a reliable estimator of the VAC polymerization rate is crucial. The fruit fly optimization algorithm (FOA), a novel meta-heuristic evolutionary algorithm, has several merits, such as high prediction accuracy, few parameters to adjust and the ability to reach the global optimum. This paper proposes a grey neural network prediction model that improves prediction performance by using FOA to optimize the whitening parameters of the grey neural network. Based on data from a real plant, the proposed grey neural network prediction model combined with FOA (FOA_GNN) is evaluated and shown to be valid. The simulation results also confirm the advantage of the FOA_GNN algorithm over the traditional grey neural network model (GNN), the adaptive compete genetic neural network prediction model (ACGA) and the radial basis function neural network model (RBF).
Keywords: fruit fly optimization algorithm (FOA); grey neural network (GNN); prediction; the vinyl acetate polymerization rate
Performance Overhead Analysis of Virtualization on ARM
by Xiaoli Gong, Zhaoyang Shao, Qi Du, Aimin Yu, Jin Zhang
Abstract: There is growing interest in replacing traditional servers with low-power systems. ARM has released a new architecture with hardware virtualization extension support. However, the performance of virtualization on ARM-based low-power platforms is unclear. In this paper, a number of performance measurements of basic operations, such as disk speed, CPU, memory and network throughput, were carried out on a dual-core ARM Cortex-A7 hardware platform with two popular open-source hypervisors, Xen and KVM. Basic benchmarks and three popular applications are measured to compare virtual machines with the native machine. Our results show that the average performance overhead of Xen and KVM virtual machines is between 3 and 4 percent when the host is lightly loaded, while the performance of the three applications drops drastically as the workload stress in the system increases. I/O virtualization should be optimized for the industrial use of ARM-based virtualization servers.
Keywords: ARM Cortex-A7; Xen; KVM; virtualization; performance
S2PAD: Secure Self-certified Public Auditing for Data Integrity in Cloud Storage and Its Extension
by Jianhong Zhang, Weinan Zhen
Abstract: Cloud storage is an important service of cloud computing, it can o
Keywords: cloud computing; self-certified cryptography; integrity checking; security proof; provably secure; random oracle model; cryptography.
Frequent Failure Monitoring and Reporting in Virtualization Environment using Backing Algorithm Technique
by Anthoniraj Selvaraj, S. Saraswathi
Abstract: In virtual platforms, disaster management relies on platform-oriented services delivered through virtual machines. Because user workloads vary widely, keeping the structure simple becomes a complex task. An automated approach, the backing technique, is therefore introduced, which guarantees proficient performance administration and a simplified constitution. In this paper, we present the backing method with an additional feature that refines the structure so that complexity reduction becomes feasible. Analysis shows that the malfunction of Exe, Win-x, the kernel or the host can be detrimental to a machine. On this basis, a new algorithm, called the backing algorithm, is developed, which performs best in its class: it is able to provide continuous monitoring and can accomplish failure reporting proficiently. In addition to the backing algorithm, we build a Virtual Machine Solution Provider (VMSP) to sustain this system through frequent monitoring using a Virtual Machine Monitor (VMM). The results show that the backing algorithm with the VMM provides an effective precaution technique when a crash or system boot failure occurs, and also provides the best alert solution for the data recovery process at the time of failure.
Keywords: virtual machine; VM; physical machine; PM; virtual machine monitor; VMM; virtual machine solution provider; VMSP; Xen; virtual power sub-controller; VPSC; virtual disk sub-controller; VDSC; virtual network sub-controller; VNSC.
Special Issue on: Information Technology for Organisation Development
De-Noising by Gammachirp and Wiener Filter Based Methods for Speech Enhancement
by Hajer Rahali
Abstract: In this paper, we propose a method for enhancing speech corrupted by noise. The new speech enhancement approach combines RASTA, the Wiener filter (WF) and the gammachirp filter (GF) in series to construct a two-stage hybrid system (named RASTA-WF-GF) in the frequency domain to enhance speech with additive noise. It is shown that the proposed method significantly outperforms the spectral subtraction (SS), Wiener filter (WF), Kalman filter (KF) and RASTA speech enhancement methods in the presence of noise.
Keywords: Gammachirp filter; Wiener filter; robust speech recognition; noise reduction.
A lossless image encryption algorithm using matrix transformations and XOR operation
by Assia Beloucif, Lemnouar Noui
Abstract: Encryption is the way to ensure the confidentiality of data. Digital images have special features, such as bulky data volumes and strong correlation between pixels, which make traditional encryption algorithms unsuitable for image encryption. To address this, we propose a novel lossless encryption scheme for digital images based on a combination of matrix transformations and the XOR operation. The numerical experimental results confirm that the proposed method achieves a high security level against brute-force attacks, statistical attacks and sensitivity analysis; moreover, the suggested algorithm provides good randomness properties. Thus, our method can be applied to image encryption and transmission in sensitive domains.
Keywords: lossless image encryption; confidentiality; matrix transformations; multimedia security.
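The XOR half of such a scheme is easy to illustrate: XOR with a key-derived stream is its own inverse, so decryption is bit-exact and the scheme is lossless by construction. The keystream construction below (iterated SHA-256) is a stand-in for illustration only, not the paper's matrix-transformation design.

```python
import hashlib

def keystream(key, n):
    """Deterministic keystream of n bytes derived from a key by
    chained SHA-256 (illustrative construction, not the paper's)."""
    out, block = bytearray(), key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

def xor_encrypt(pixels, key):
    """XOR each pixel byte with the keystream. Because XOR is an
    involution, applying the same function again with the same key
    restores the original bytes exactly."""
    ks = keystream(key, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))
```

In a full scheme of the kind the abstract describes, an invertible matrix transformation would additionally permute or mix pixels to break spatial correlation before or after the XOR stage.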
A New Multi-Criteria Decision Process to Prioritize Requirements
by Amroune Mohamed, Zarour Nacereddine, Charrel Pierre Jean
Abstract: Most software projects have more candidate requirements than can be realized within the time and cost constraints. Prioritization helps to identify the most valuable requirements in this set by distinguishing the critical few from the trivial many. However, many studies have shown that requirements prioritization is still an ambiguous concept and that current practices in companies are informal. This paper presents a novel multi-criteria decision analysis process to prioritize requirements. The novelty of the presented idea is three-fold. Firstly, to prioritize requirements, it distinguishes two categories of requirements according to their level of abstraction: low-level requirements and high-level requirements, i.e. business goals. It then relates business goals to the low-level requirements that contribute to their fulfilment, which improves the completeness and traceability of requirements. Secondly, requirements prioritization is based on the requirements' degree of contribution to the identified business goals and on their importance, so the business goals become the evaluation criteria. Finally, the process takes into account the relationships and dependencies that may exist between business goals. To this end, we employ the analytic hierarchy process (AHP) to assign weights to business goals and the Choquet integral to calculate a global score, i.e. a priority, for each requirement. A case study illustrates the applicability of this process and the effectiveness of using the fuzzy Choquet integral and the AHP method.
Keywords: Requirements prioritization; Decision analysis; Fuzzy measure; Choquet integral; AHP.
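As an illustration of the scoring step, the discrete Choquet integral aggregates a requirement's per-goal contribution scores under a fuzzy measure, which can capture interactions between goals. The two-goal measure below is hypothetical, standing in for the AHP-derived weights:

```python
def choquet(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure mu.
    mu maps frozensets of criterion indices to [0, 1],
    with mu(empty set) = 0 and mu(all criteria) = 1."""
    idx = sorted(range(len(scores)), key=lambda i: scores[i])  # ascending scores
    total, prev = 0.0, 0.0
    for k, i in enumerate(idx):
        coalition = frozenset(idx[k:])   # criteria scoring at least scores[i]
        total += (scores[i] - prev) * mu[coalition]
        prev = scores[i]
    return total

# hypothetical measure over two business goals g0, g1 with positive interaction
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.5,
      frozenset({0, 1}): 1.0}
priority = choquet([0.6, 0.8], mu)   # global score of one requirement
```

With an additive measure this reduces to a weighted sum; the non-additive `mu` is what lets the process model dependencies between business goals.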
Modeling UML State Machines with FoCaLiZe
by Messaoud Abbas, Choukri-Bey Ben-Yelles, Renaud Rioboo
Abstract: UML and OCL are widely adopted as a standard to describe the static and dynamic aspects of systems and to specify their properties. Model Driven Engineering (MDE) techniques can be used to automatically generate code from such models. For critical systems, formal methods are frequently used together with UML/OCL models in order to analyze and check model properties. In this paper we propose a combination of UML/OCL and FoCaLiZe, an object-oriented development environment using a proof-based formal approach. More specifically, we propose a formal transformation of UML state machines conditioned with OCL constraints into FoCaLiZe specifications that can be refined to generate executable code. The proposed transformation supports communication between a class structure and its state machine. Thanks to Zenon, the automatic theorem prover of FoCaLiZe, errors in the original UML/OCL model, if any, are automatically detected. We illustrate our translation with a concrete case study.
Keywords: FoCaLiZe; UML; OCL; state machine; MDE; proof; Coq; Zenon.
Remote Sensing Image Retrieval Using Object-Based Semantic Classifier Techniques
by Suresh Kumar Nagarajan, Arun Manoharan
Abstract: The volume of data captured by satellites for agriculture and crop management, health, climate change and plant prediction is growing exponentially. Reliable, automated satellite image classification and retrieval systems are therefore required. A massive amount of remotely sensed data is collected and transmitted by satellites every day. Many retrieval systems have been proposed for image content and information retrieval, but their output generally falls short of expectations. This paper presents a new remote sensing image retrieval scheme that combines content-based image retrieval with grid computing and advanced database concepts, which speeds up both input processing and system response time. In particular, it presents the idea of processing input data and queries in parallel and storing images in the database using advanced indexing structures such as B+ trees or binary search trees (BSTs).
Keywords: 2-D MHMM ( Two-Dimensional Multi-resolution Hidden Markov Model); Remote Sensing Images; and Semantic network.
An Empirical Study of Clone Detection in MATLAB/Simulink Models
by Dhavleesh Rattan, Rajesh Bhatia, Maninder Singh
Abstract: Complex systems consisting of millions of components are very difficult to develop and manage, so model-driven development has become an essential development paradigm. Very large-scale models, however, suffer from unexpected overlaps of parts. These overlapped and copied fragments in models are known as model clones, and they increase maintenance cost and resource requirements. Recent research has shown the presence of clones in MATLAB/Simulink models. To gain a better understanding, we conducted an in-depth empirical study of 18 MATLAB/Simulink models using ConQAT, an open-source clone detection framework. Our study shows significant cloning in the models and identifies some interesting clone patterns that can help improve maintenance.
Keywords: model clone detection; empirical study; software maintenance; MATLAB/Simulink models; ConQAT.
Intuitionistic Fuzzy Local Binary Pattern for Features Extraction
by Mohd Dilshad Ansari, Satya Prakash Ghrera
Abstract: We propose a novel intuitionistic fuzzy feature extraction method to encode local texture. The proposed method extends the fuzzy local binary pattern approach by incorporating intuitionistic fuzzy set theory in the representation of local texture patterns in images. An intuitionistic fuzzy local binary pattern may also contribute to more than one bin of the distribution of pattern values, which is used as a feature vector. The proposed intuitionistic fuzzy local binary pattern approach was experimentally evaluated on the Lena image of size 256×256. The results validate the effectiveness of the proposed intuitionistic fuzzy local binary pattern over the local binary pattern and fuzzy local binary pattern feature extraction methods.
Keywords: Fuzzy local binary pattern; Intuitionistic fuzzy sets; Intuitionistic fuzzy local binary pattern; Entropy.
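For context, the crisp local binary pattern that the fuzzy and intuitionistic variants generalize can be sketched as follows; the fuzzy versions replace the hard threshold with membership (and, in the intuitionistic case, non-membership and hesitancy) degrees:

```python
def lbp_code(patch):
    """Crisp 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the comparison bits into an 8-bit code."""
    center = patch[1][1]
    # neighbours enumerated clockwise from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << k for k, n in enumerate(neighbours) if n >= center)

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch))   # prints 120
```

A histogram of such codes over all 3×3 windows of an image forms the texture feature vector; in the intuitionistic fuzzy variant each window spreads fractional weight over several bins instead of incrementing exactly one.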
Swarm intelligence algorithms in cryptanalysis of Simple Feistel Ciphers
by Tahar Mekhaznia, Abdelmadjid Zidani
Abstract: Recent cryptosystems pose a hard task for cryptanalysis algorithms due to the nonlinearity of their structure; the problem can be formulated as NP-hard. Such systems have long been subject to various attacks, yet the results remain insufficient, especially on large instances, because resource requirements grow with the size of the problem. Heuristic optimization methods, on the other hand, are techniques able to explore large spaces of candidate solutions. Swarm intelligence algorithms, a family of heuristic methods, are characterized by their fast convergence and easy implementation. The purpose of this paper is to provide, for the first time, a detailed study of the performance of two swarm intelligence algorithms, the BAT algorithm and the Wolf Pack Search (WPS) algorithm, for the cryptanalysis of some variants of Feistel ciphers. Experiments were carried out to study the effectiveness of these algorithms in solving the considered problem. Moreover, a comparison with the VMMAS, PSO and DE algorithms establishes this advantage.
Keywords: Cryptanalysis; Feistel Ciphers; BAT Algorithm; WPS Algorithm; VMMAS Algorithm; PSO Algorithm; DE Algorithm.
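The abstract does not define the search objective, but in heuristic cryptanalysis a candidate key is typically scored by how many known plaintext/ciphertext pairs it reproduces; swarm algorithms such as BAT and WPS then maximise this fitness over the key space. A minimal sketch, using a hypothetical 16-bit Feistel cipher (not any cipher from the paper):

```python
def feistel_encrypt(block, keys):
    """Toy 16-bit Feistel cipher, one round per 8-bit subkey (illustrative only)."""
    left, right = block >> 8, block & 0xFF
    for k in keys:
        # swap halves; new right mixes old left with a keyed round function
        left, right = right, left ^ ((right + k) & 0xFF)
    return (left << 8) | right

def fitness(candidate, pairs):
    """Fraction of known plaintext/ciphertext pairs the candidate key explains;
    this is the objective a swarm of candidate keys would climb."""
    return sum(feistel_encrypt(p, candidate) == c for p, c in pairs) / len(pairs)

true_key = [23, 58]
pairs = [(p, feistel_encrypt(p, true_key)) for p in (0x1234, 0xBEEF, 0x0042)]
assert fitness(true_key, pairs) == 1.0   # the correct key explains every pair
assert fitness([0, 0], pairs) < 1.0      # a wrong key does not
```

The nonlinearity the abstract mentions shows up here as a fitness landscape with no gradient structure, which is why population-based search (BAT, WPS, PSO, DE) is applied rather than analytic attacks.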