Forthcoming articles

 


International Journal of High Performance Computing and Networking

 

These articles have been peer-reviewed and accepted for publication in IJHPCN, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

 

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

 

Register for our alerting service, which notifies you by email when new issues of IJHPCN are published online.

 

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

 

International Journal of High Performance Computing and Networking (125 papers in press)

 

Regular Issues

 

  • Energy efficiency of a heterogeneous multicore system based on the enhanced Amdahl's law   Order a copy of this article
    by Songwen Pei, Junge Zhang, Naixue Xiong, Myoung-Seo Kim, Jean-luc Gaudiot 
    Abstract: Energy efficiency, beyond performance, is one of the main challenges in designing future heterogeneous multicore systems. In this paper, we propose an analytical energy-efficiency model for heterogeneous multicore systems based on the enhanced Amdahl's law. The model extends the traditional computing-centric model by considering the overhead of data preparation, which potentially includes the overheads of communication, data transfer, synchronisation, etc. The analysis clearly shows that decreasing the overhead of data preparation is a promising approach to reaching higher computational performance and greater energy efficiency. Therefore, more informed tradeoffs should be made when designing a modern heterogeneous processor system within a limited energy budget.
    Keywords: energy efficiency; overhead of data preparation; dataflow computing model; performance evaluation; heterogeneous multicore system.
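
    The entry above describes the model only qualitatively. As a hedged illustration (our assumption, not the paper's exact formula), an Amdahl-style speedup bound extended with a normalised data-preparation overhead d, for a parallelisable fraction f running on n cores, can be written as:

      % f: parallelisable fraction, n: number of cores,
      % d: normalised data-preparation overhead (communication,
      %    data transfer, synchronisation, ...)
      \[ S(n) = \frac{1}{(1 - f) + \frac{f}{n} + d} \]

    Shrinking d raises S(n) directly, which mirrors the abstract's argument that reducing data-preparation overhead improves both performance and energy efficiency.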

  • Pixel classified colorisation method based on neighborhood similarity priori   Order a copy of this article
    by Jie Chen, Zongliang Gan, Xiuchang Zhu, Jin Wang 
    Abstract: Colorisation is a computer-aided technology that automatically adds colours to grayscale images. This paper presents a scribble-based colorisation method that treats flat and edge pixels differently. First, we classify the pixels into flat or edge categories using the neighborhood similarity pixels searching algorithm. Then, we compute the weighted coefficients of the edge pixels by solving a constrained quadratic programming problem, and compute the weighted coefficients of the flat pixels based on their luminance distances. Finally, we transmit the weighted coefficients to the chrominance images according to the joint correlation between the luminance and chrominance channels, and combine them with the colours scribbled by the user to compute all the unknown colours. The experimental results show that our method is effective, especially in reducing colour bleeding at boundaries, and gives better results when only a few colours are scribbled.
    Keywords: colorisation; joint correlation; linear weighted combination; quadratic programming; active set; neighborhood similarity pixels.

  • A novel distributed node searching method in delay-tolerant networks   Order a copy of this article
    by Yiqin Deng, Ming Zhao, Zhengjiang Wu, Longhua Xiao 
    Abstract: Among conventional target tracking methods in delay-tolerant networks (DTNs), there exists a type of distributed approach that is efficient with regard to target tracking. These methods leave the position and time information of each node in its original sub-area, select particular nodes in each sub-region as anchor nodes, and use messenger nodes to collect and update the spread information, so that the detector is able to track the target node. However, some nodes in these methods consume energy too quickly, resulting in energy holes and a short network lifetime. Based on the above, this research develops a novel target tracking strategy (MAtracking) on the basis of a mobile anchor (MA) and the correlation between nodes. It improves target tracking through two new mechanisms. First, a new data collection method is adopted in which the MA moves to collect the real-time location and time of nodes so as to balance the energy consumption of the whole network; an approximate path for the MA is obtained by formulating its path planning as a non-deterministic polynomial (NP) optimisation problem. Second, the correlation between nodes is used to track the target in the absence of valid mobility information about it. By doing so, the node that has the shortest time to meet the target node is tracked, thus guaranteeing the success rate. A large number of experimental results from real-data simulations show that the proposed MAtracking can improve network lifetime and tracking efficiency compared with existing approaches.
    Keywords: delay-tolerant networks; target tracking; mobile anchor; correlation of nodes.

  • ElasticQ: an active queue management algorithm with flow trust   Order a copy of this article
    by Su Chenglong, Jin Guang, Jiang Xianliang, Niu Jun 
    Abstract: Active Queue Management (AQM) can improve network transmission performance and reduce packet delay. However, most previous algorithms cannot achieve efficiency and fairness simultaneously when high-bandwidth flows exist. In this paper, a novel scheme named ElasticQ (Elastically Fair AQM) is proposed to suppress high-bandwidth non-responsive flows and enhance the fairness among different flows. Differently from previous works, the concept of flow trust is introduced into the design of AQM algorithms to measure flows' reliability and security effectively. The trust degrees of different flows are estimated to decide whether packets are discarded or not, using a sample-match mechanism and a packet-dropping interval. Simulation results show that ElasticQ ensures the fairness of various flows, maintains the stability of queues, and decreases the completion time of responsive flows, especially when responsive and non-responsive flows coexist.
    Keywords: flow trust; active queue management; elastic fairness; non-responsive flow; responsive flow.

  • A hybrid mutation artificial bee colony algorithm for spectrum sharing   Order a copy of this article
    by Ling Huang 
    Abstract: In order to alleviate the spectrum scarcity problem in wireless networks, achieve efficient allocation of spectrum resources, and balance users' access to the spectrum, a hybrid mutation artificial bee colony algorithm based on the artificial bee colony algorithm is presented. The presented algorithm aims to enhance the efficiency of global searching by improving, with the differential evolution algorithm, the way leaders search the nectar source. In addition, the onlookers' searching method is improved by the bat algorithm to guarantee the convergence efficiency of the algorithm and the precision of the result: it is assumed that the onlookers are equipped with bats' echolocation and get close to the nectar source by adjusting the rate of pulse emission and the loudness while searching. The simulation results show that the proposed algorithm has faster convergence, higher efficiency and more optimal solutions compared with other algorithms.
    Keywords: spectrum sharing; artificial bee colony algorithm; bat algorithm; differential evolution.

  • When is the immune inspired B-Cell algorithm superior to the (1+1) evolutionary algorithm?   Order a copy of this article
    by Xiaoyun Xia, Langping Tang, Xue Peng 
    Abstract: There exist many experimental investigations of artificial immune systems (AIS), and it has been shown that AIS are useful and efficient for many real-world optimisation problems. However, we know little about whether AIS can outperform traditional evolutionary algorithms on some optimisation problems in theory. This work rigorously proves that a simple AIS called the B-Cell Algorithm (BCA), with somatic contiguous hypermutations, can efficiently optimise two instances of the multiprocessor scheduling problem in expected polynomial runtime, whereas local search algorithms and the (1+1) evolutionary algorithm ((1+1) EA), which uses only one individual in the search space and standard bit mutation, are highly inefficient. This work is helpful for gaining insight into whether there exists any algorithm that is efficient for all specific problems.
    Keywords: artificial immune system; somatic contiguous hypermutations; multiprocessor scheduling problem; runtime analysis.
    DOI: 10.1504/IJHPCN.2016.10007954
     
  • Scalable bootstrap attribute reduction for massive data   Order a copy of this article
    by Suqin Ji, Hongbo Shi, Yali Lv, Min Guo 
    Abstract: Attribute reduction is one of the fundamental techniques for knowledge acquisition in rough set theory. Traditional attribute reduction algorithms have to load the whole dataset into memory at a time; however, this is unfeasible for attribute reduction on a massive decision table owing to memory limitations. To solve this problem, we propose the Bag of Little Bootstraps Attribute Reduction algorithm (BLBAR), which combines the bag of little bootstraps with attribute discernibility. Specifically, the algorithm first samples from the original decision table to generate a number of sub decision tables; it then finds the reducts of bootstrap samples of each sub-table through attribute discernibility; finally, all of the reducts are integrated as the reduct of the original massive decision table. Experimental results demonstrate that BLBAR achieves improved feasibility, scalability and efficiency for attribute reduction on massive decision tables.
    Keywords: bag of little bootstraps; attribute reduction; massive data; discernibility of attribute; reduct.
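
    The bag-of-little-bootstraps skeleton underlying BLBAR can be sketched as follows; find_reduct and integrate are hypothetical stand-ins for the paper's attribute-discernibility reduction and reduct-integration steps, which are not reproduced here:

      import numpy as np

      def blbar_sketch(table, n_subsets=10, subset_exp=0.6,
                       n_boot=20, seed=0):
          rng = np.random.default_rng(seed)
          n = len(table)
          b = int(n ** subset_exp)      # small subset size, e.g. n^0.6
          reducts = []
          for _ in range(n_subsets):
              sub = table[rng.choice(n, size=b, replace=False)]
              for _ in range(n_boot):
                  # resample the small subset back up to size n via weights
                  weights = rng.multinomial(n, np.ones(b) / b)
                  reducts.append(find_reduct(sub, weights))  # hypothetical
          return integrate(reducts)     # hypothetical, e.g. union/vote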

  • Assessing nodes importance in complex networks using structural holes   Order a copy of this article
    by Hui Xu, Jianpei Zhang, Jing Yang, Lijun Lun 
    Abstract: Accurate measurement of important nodes in complex networks has great practical and theoretical significance. Mining important nodes should not only consider the core nodes, but also take into account the locations of nodes in the network. Despite much research on assessing important nodes, the importance of nodes located in structural holes is still easily ignored. Therefore, a local measuring method is proposed, which evaluates nodes' importance by the total constraint caused by the lack of primary and secondary structural holes around the nodes. This method simultaneously considers both the centrality and the bridging property of a node's first-order and second-order neighbours. Further, to prove the accuracy of the method (TCM), we carry out deliberate attack simulations by selectively deleting a certain proportion of network nodes, and then calculate the decline in network efficiency before and after the attacks. Experimental results show that the average effect of TCM on four real networks is improved by 50.64% and 14.92% compared with the clustering coefficient index and the k-shell decomposition method, respectively. TCM is thus more accurate in mining important nodes than the other two methods, and it is suitable for quantitative analysis in large-scale networks.
    Keywords: complex networks; structural holes; secondary holes; nodes importance; constraints.
    DOI: 10.1504/IJHPCN.2016.10014482
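
    For readers wanting a baseline to compare against, Burt's constraint (which the TCM described above extends with secondary structural holes) is available in networkx; lower constraint means the node spans more structural holes. A minimal sketch:

      import networkx as nx

      G = nx.karate_club_graph()            # stand-in network
      constraint = nx.constraint(G)         # Burt's constraint per node
      ranked = sorted(constraint, key=constraint.get)
      print(ranked[:5])                     # most "bridging" nodes first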
     
  • Load-balanced overlay recovery scheme with delay minimisation   Order a copy of this article
    by Shengwen Tian, Hongyong Yang 
    Abstract: Recovery from a link or node failure in the internet is often subject to seconds or minutes of routing convergence, during which certain end-to-end connections may experience outages of similar duration. To address this problem, existing approaches reroute the data traffic to a pre-defined backup path to detour the failed components. However, maintaining backup paths incurs significant bandwidth expenditure. On the other hand, the diverted traffic may cause congestion on the backup path if it is not carefully split over multiple paths according to their available capacity. In this paper, we propose an efficient recovery scheme using one-hop overlay multipath source routing, which is a post-failure recovery method. Once a failure happens, multiple one-hop overlay paths are constructed by strategically selecting multiple relay nodes, and the affected traffic is diverted to these paths in a well-balanced manner. We formulate the traffic allocation problem as a tractable linear programming (LP) optimisation problem whose goal is to minimise the worst-case network congestion ratio. Simulations based on a real ISP network and a synthetic internet topology show that our scheme can effectively balance link usage and improve the reliability of the network.
    Keywords: failure recovery; load balance; overlay routing; linear programming.
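
    The LP in the abstract above can be illustrated with a toy instance: split the displaced demand over k relay paths so that the worst-case utilisation r is minimised. The capacities and demand below are made-up numbers; a sketch using SciPy:

      import numpy as np
      from scipy.optimize import linprog

      cap = np.array([10.0, 6.0, 4.0])  # residual capacity per relay path
      demand = 8.0                      # traffic displaced by the failure
      k = len(cap)

      # variables: [x_1..x_k, r]; minimise r
      c = np.zeros(k + 1); c[-1] = 1.0
      # demand * x_i - cap_i * r <= 0   (path i's utilisation is <= r)
      A_ub = np.hstack([demand * np.eye(k), -cap.reshape(-1, 1)])
      b_ub = np.zeros(k)
      # sum x_i = 1                     (all affected traffic is rerouted)
      A_eq = np.append(np.ones(k), 0.0).reshape(1, -1)
      b_eq = [1.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
      print(res.x[:k], "congestion ratio:", res.x[-1])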

  • Negation scope detection with recurrent neural network models in review texts   Order a copy of this article
    by Lydia Lazib, Yanyan Zhao, Bing Qin, Ting Liu 
    Abstract: Identifying negation scopes in a text is an important subtask of information extraction, which can benefit other natural language processing tasks, such as relation extraction, question answering and sentiment analysis. It also serves the task of social media text understanding. The task of negation scope detection can be regarded as a token-level sequence labelling problem. In this paper, we propose different models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results show that RNNs, without using any hand-crafted features, outperform a feature-rich CRF-based model.
    Keywords: negation scope detection; natural language processing; recurrent neural networks.
    DOI: 10.1504/IJHPCN.2016.10011341
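
    A minimal token-level tagger of the kind the abstract describes can be sketched in PyTorch; the vocabulary size, dimensions and tag set (e.g. outside/cue/scope) are assumptions, not the paper's configuration:

      import torch
      import torch.nn as nn

      class NegScopeTagger(nn.Module):
          def __init__(self, vocab=20000, emb=100, hidden=128, tags=3):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.lstm = nn.LSTM(emb, hidden, batch_first=True,
                                  bidirectional=True)
              self.out = nn.Linear(2 * hidden, tags)

          def forward(self, token_ids):           # (batch, seq_len)
              h, _ = self.lstm(self.embed(token_ids))
              return self.out(h)                  # per-token tag scores

      scores = NegScopeTagger()(torch.randint(0, 20000, (2, 15)))
      print(scores.shape)                         # (2, 15, 3)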
     
  • The optimisation of speech recognition based on convolutional neural network   Order a copy of this article
    by Weipeng Jing, Tao Jiang, Xingge Zhang, Liangkuan Zhu 
    Abstract: The convolutional neural network (CNN) is introduced as the acoustic model in a speech recognition system based on mobile computing. To improve recognition accuracy, two optimisation methods are proposed for CNN-based speech recognition. First, to address the problem that existing pooling algorithms ignore the locally relevant characteristics of speech data, a dynamic adaptive pooling (DA-pooling) algorithm is proposed for the pooling layer of the CNN model. The DA-pooling algorithm calculates the Spearman correlation coefficient of the extracted data to determine data correlation, and then selects an appropriate pooling strategy for data of different correlation according to weight. Second, since traditional dropout hides neuron nodes randomly, a dropout strategy based on sparseness is proposed for the fully-connected layer of the CNN model. By adding a unit-sparseness determination mechanism at the output stage of the network unit, we can reduce the influence of smaller units on the model results, thereby improving the generalisation ability of the model. Experimental results show that these strategies can improve the performance of CNN-based acoustic models.
    Keywords: CNN; speech recognition; DA-pooling; overfitting; sparseness.
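
    The pooling-selection idea above can be sketched as follows; the trend-based correlation test and the 0.5 threshold are illustrative assumptions, not the paper's exact rule:

      import numpy as np
      from scipy.stats import spearmanr

      def da_pool(window, threshold=0.5):
          flat = window.ravel()
          rho, _ = spearmanr(np.arange(flat.size), flat)
          # strongly correlated data: average preserves local structure;
          # weakly correlated data: max keeps the salient activation
          return flat.mean() if abs(rho) >= threshold else flat.max()

      print(da_pool(np.array([[0.1, 0.2], [0.3, 0.4]])))  # -> mean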

  • Hybrid feature selection technique for intrusion detection system   Order a copy of this article
    by Muhammad Hilmi Kamarudin, Carsten Maple, Tim Watson 
    Abstract: The enormous volume of network traffic has created a major challenge for Intrusion Detection Systems (IDS). High-dimensionality problems have rendered feature selection one of the most important criteria in determining the efficiency of an IDS. The objective of this study is to seek the best fit for selecting features that can provide a high detection rate with a low false positive rate. In this study we have selected a hybrid feature selection model that potentially combines the strengths of both the filter and the wrapper selection procedures. The hybrid solution is expected to effectively select the optimal set of features for detecting intrusion. Several performance metrics are used to study the impact of feature selection optimisation, including accuracy, true positives, true negatives, detection rate, false positives, false negatives and learning time. The proposed hybrid model uses Correlation Feature Selection (CFS) together with three different search techniques: best-first, greedy stepwise and genetic algorithm. The wrapper-based subset evaluation uses a Random Forest (RF) classifier to evaluate each of the features that were first selected by the filter method. The reduced feature sets on both the KDD99 (network-based) and DARPA 1999 (host-based) datasets were tested using RF classification with 10-fold cross-validation in a supervised environment. The experimental results show that the hybrid feature selection produced a satisfactory outcome in terms of the smallest number of features and lower processing time, and achieved almost 100% detection of known attacks on both datasets. This encouraging result suggests that the hybrid feature selection model would be a method of choice for IDS environments.
    Keywords: machine learning; filter-subset evaluation; wrapper-subset evaluation; genetic algorithm; random forest.
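
    A hedged sketch of the filter-then-wrapper pipeline with scikit-learn follows; mutual information stands in for CFS (which scikit-learn does not provide), and synthetic data replaces the KDD99/DARPA 1999 datasets:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=500, n_features=40,
                                 random_state=0)
      # filter step: keep the 15 features most related to the class
      X_red = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)
      # wrapper-style evaluation: random forest with 10-fold CV
      acc = cross_val_score(RandomForestClassifier(random_state=0),
                            X_red, y, cv=10).mean()
      print(f"10-fold CV accuracy on reduced features: {acc:.3f}")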

  • A dynamic and QoS-effective resource management system   Order a copy of this article
    by Amril Nazir 
    Abstract: This paper presents the design and implementation of HASEX, which supports dynamic resource provisioning during application run-time and provides an effective quality-of-service (QoS) resource management system for QoS- and deadline-driven applications. The most important feature of HASEX is its ability to serve high-performance and distributed applications with minimal infrastructure and operating costs. Resource provisioning is controlled by a 'rental' mechanism backed by a pool of computing resources that the system may rent from external resource owners/providers in times of high demand. HASEX differentiates the roles of application management, job scheduling, and resource provisioning tasks. This approach significantly reduces the overhead of managing distributed resources. We demonstrate the effectiveness of HASEX in renting groups of resource nodes across geographically disparate sites. We then compare HASEX with the OpenNebula cloud system to demonstrate its performance, scalability, and QoS effectiveness.
    Keywords: deadline-driven jobs; SLA management; SLA/QoS middleware; resource provisioning; autonomic and self management.
    DOI: 10.1504/IJHPCN.2016.10009696
     
  • Performance identification in large-scale class data from advanced facets of computational intelligence and soft computing techniques   Order a copy of this article
    by You-Shyang Chen 
    Abstract: The enormous number of hemodialysis (HD) treatments in Taiwan, which has the world's highest prevalence of end-stage renal disease, has caused concern. This motivates the identification of an adequate HD remedy. Although previous researchers have devised various models to address HD adequacy, the following five deficiencies form obstacles: (1) lack of consideration for imbalanced class problems (ICPs) in medical data; (2) lack of methods to handle the mathematical distributions of the given data; (3) lack of explanatory ability for the given data; (4) lack of effective methods to identify the determinants of HD adequacy; and (5) lack of appropriate classifiers to define HD adequacy. This study proposes hybrid models that integrate expert knowledge, imbalanced resampling methods, decision tree and random-forests-based feature-selection methods, the LEM2 algorithm, rough set theory, and rule-filtering techniques to process medical practice with ICPs. Empirical results show that these models perform better than the listed methods.
    Keywords: rough set theory; decision tree; random forests; imbalanced class problem data.

  • Assessing complex evolving cyber-physical systems: a case study on smart medical devices   Order a copy of this article
    by Jan Sliwa 
    Abstract: Our environment is more and more permeated by intelligent devices and systems that directly interact with physical objects, including our bodies. In this way, complex cyber-physical systems are created in which the cyber part is intertwined with the physical part, so that novel dynamic dependencies appear. These intelligent devices produce immense amounts of data that can be stored and analysed. Often, high hopes are raised that processing those data will easily increase our knowledge and permit good decisions to be taken based on hard facts. If not based on a solid understanding, such data processing can lead to the well-known problem of GIGO (garbage in, garbage out). If presented in a visually compelling way, useless results will look like truth and will be misleading and damaging. In order to obtain valid analysis results that can be used as "actionable knowledge", it is necessary to understand the working of the physical systems and also to be aware of possible statistical fallacies, such as biased selection. Even if the big data collected by intelligent devices are not perfect, we nevertheless want to use them to evaluate cyber-physical systems and their safety, efficiency and quality. One of the major challenges is the changing nature of the technical systems, of the environment in which they operate, and of the humans who use them. This raises the problem of partial invalidation of collected statistics when conditions change. We discuss the general problems related to assessing cyber-physical systems and present an important and interesting case study: smart medical devices. We have to stress that the statistical questions raised here are open, and one of the goals of this paper is to raise interest and instigate cooperation to solve them.
    Keywords: cyber-physical systems; quality assessment; smart medical devices; statistical models.

  • Distributed data-dependent locality sensitive hashing   Order a copy of this article
    by Yanping Ma 
    Abstract: Locality sensitive hashing (LSH) is a popular algorithm for approximate nearest neighbour (ANN) search. As LSH partitions the vector space uniformly while the distribution of vectors is usually non-uniform, it fits real datasets poorly and has limited efficiency. In this paper, we propose a novel data-dependent LSH (DP-LSH) algorithm, which has a two-level structure. In the first level, we train a number of cluster centres and use them to divide the dataset, so that the vectors in each cluster have a nearly uniform distribution. In the second level, we construct LSH tables for each cluster. Given a query, we first determine the few clusters that it belongs to, and then perform an ANN search in the corresponding LSH tables. Furthermore, we present an optimised distributed scheme and a distributed DP-LSH algorithm. Experimental results on the reference datasets show that the search speed of DP-LSH can be 48 times that of E2LSH while keeping high search precision, and that the distributed DP-LSH can further improve search efficiency.
    Keywords: locality sensitive hashing; approximate nearest neighbour; data-dependent; distributed high dimensional search.
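
    The two-level structure described above can be sketched as follows; the cluster count, hash width and random-projection tables are illustrative assumptions (note the returned indices are local to the matched cluster):

      import numpy as np
      from collections import defaultdict
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      data = rng.normal(size=(10000, 32))
      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(data)

      def build_table(vectors, n_bits=12):
          planes = rng.normal(size=(n_bits, vectors.shape[1]))
          table = defaultdict(list)
          for i, v in enumerate(vectors):
              table[tuple((planes @ v) > 0)].append(i)  # sign hash
          return planes, table

      tables = {c: build_table(data[km.labels_ == c]) for c in range(8)}

      def query(q):
          c = km.predict(q.reshape(1, -1))[0]   # level 1: nearest centre
          planes, table = tables[c]             # level 2: LSH bucket
          return table.get(tuple((planes @ q) > 0), [])

      print(len(query(data[0])))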

  • Real-time human action recognition using depth motion maps and convolutional neural networks   Order a copy of this article
    by Jiang Li, Xiaojuan Ban, Guang Yang, Yitong Li 
    Abstract: This paper presents an effective approach for recognising human actions from depth video sequences by employing Depth Motion Maps (DMMs) and Convolutional Neural Networks (CNNs). Depth maps are projected onto three orthogonal planes, and frame differences under each view (front/side/top) are accumulated through an entire depth video sequence, generating a DMM. We build a Multi-View Convolutional Neural Network (MV-CNN) architecture containing multiple networks to deal with the three DMMs (DMM_f, DMM_s, DMM_t). The output of the fully-connected layer under each view is integrated as the feature representation, which is then learned in the last softmax regression layer to predict human actions. Experimental results on the MSR-Action3D and UTD-MHAD datasets indicate that the proposed approach achieves state-of-the-art recognition performance and is suitable for real-time recognition.
    Keywords: real-time human action recognition; depth motion maps; multi-view convolutional neural networks.
    DOI: 10.1504/IJHPCN.2016.10011433
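
    The front-view accumulation step can be sketched in a few lines of NumPy (side and top views would project each depth frame along the other two axes before differencing; the dimensions here are arbitrary):

      import numpy as np

      def dmm_front(depth_video):        # (T, H, W) depth frames
          diffs = np.abs(np.diff(depth_video.astype(np.float32), axis=0))
          return diffs.sum(axis=0)       # (H, W) accumulated motion energy

      video = np.random.rand(50, 240, 320)   # stand-in depth sequence
      print(dmm_front(video).shape)          # (240, 320); fed to the CNN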
     
  • A compact construction for non-monotonic key-policy attribute-based encryption   Order a copy of this article
    by Junqi Zhang, Haiying Gao 
    Abstract: The Attribute-Based Encryption (ABE) scheme with a monotonic access structure cannot deal with access structures associated with the negation of attributes, which is inconvenient for real-world applications. In this paper, we propose a more expressive non-monotonic ABE scheme via a new method. To achieve this goal, we first propose a linear Two-mode Identity-Based Broadcast Encryption (TIBBE) scheme based on an Identity-Based Broadcast Encryption (IBBE) scheme, introducing the concept of Identity-Based Revocation (IBR) without increasing the size of the parameters in IBBE. The scheme is selective-identity secure under the m-DBDHE assumption. Then we convert the TIBBE scheme into a non-monotonic Key-Policy ABE (KP-ABE) scheme with compact parameters. Our KP-ABE scheme achieves constant-size ciphertexts, and the size of the private keys grows linearly with the size of the attribute set. Besides, its computational cost is the lowest compared with other existing non-monotonic KP-ABE schemes.
    Keywords: identity-based broadcast encryption; revocation scheme; linear secret-sharing schemes; non-monotonic access structure; selective security.
    DOI: 10.1504/IJHPCN.2016.10016085
     
  • Research on link blocks recognition of web pages   Order a copy of this article
    by Gu Qiong, Wang Xianming 
    Abstract: The link block is a typical type of block structure in web pages; it is also an important and basic research object in the fields of web data processing and web data mining. Nevertheless, existing research on links focuses only on granularities such as websites, pages and single links; research results based on block-level links are extremely rare. In view of the significance of this issue and the deficiencies of existing work, firstly, the block and the block tree are proposed as the basic concepts for subsequent exploration, and an approach to building block trees is put forward. Secondly, four rules for link-block discrimination and two indicators for evaluating recognition results are put forward based on the concept of the block; the two evaluation indicators are named Link Coverage Rate (LCR) and Code Coverage Rate (CCR), respectively. Finally, a strategy named the Forward Algorithm for Discovery of Link Blocks (FAD) is proposed, and a corresponding experiment with different parameters is performed to verify it. The results show that FAD can flexibly recognise link blocks under different granularity conditions. The concepts and approaches presented in this paper have good prospects in web data processing and web data mining, such as advertising-block recognition, web page purification, page importance evaluation and web content extraction.
    Keywords: web; block trees; link blocks; discrimination; recognition.

  • AdaBoost based conformal prediction with high efficiency   Order a copy of this article
    by Yingjie Zhang, Jianxing Xu, Hengda Cheng 
    Abstract: Conformal prediction presents a novel idea whose error rate is provably controlled by a given significance level, so the remaining goal of conformal prediction is efficiency. High efficiency means that the predictions are as certain as possible. As we know, ensemble methods are able to obtain better predictive performance than any of their constituent models. An ensemble method such as random forest has been used as an underlying method to build a conformal predictor, but we do not know how conformal predictors with and without ensemble methods differ, or how much the corresponding performance improves. In this paper, an ensemble method, AdaBoost, is used to build a conformal predictor, and we introduce another evaluation metric, correct efficiency, which measures the efficiency of correct classifications. The good performance of the AdaBoost-based conformal predictor (CP-AB) has been validated on seven datasets. The experimental results show that the proposed method has a much higher efficiency.
    Keywords: machine learning; conformal prediction; AdaBoost; efficiency; ensemble; support vector machine; decision tree; weak classifiers; p-value; prediction label.
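
    The conformal step on top of AdaBoost can be sketched as follows; the nonconformity score (one minus the predicted probability of the candidate label) is a common choice assumed here, not necessarily the paper's:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=600, random_state=0)
      X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, random_state=0)

      clf = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
      # calibration nonconformity: 1 - probability of the true label
      cal = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

      def p_value(score):               # rank the new score among cal
          return (np.sum(cal >= score) + 1) / (len(cal) + 1)

      probs = clf.predict_proba(X_cal[:1])[0]
      pvals = {c: p_value(1 - probs[i]) for i, c in enumerate(clf.classes_)}
      print(pvals)   # predict every label whose p-value exceeds epsilon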

  • Comparative analysis of hierarchical cluster protocols for wireless sensor networks   Order a copy of this article
    by Chirihane Gherbi, Zibouda Aliouat, Mohamed Benmohammed 
    Abstract: Wireless Sensor Networks (WSNs) basically consist of low-cost sensor nodes deployed in an area of interest for collecting data from the environment and relaying them to a sink, where they are processed and then sent to an end user. Since wireless nodes are severely power-constrained, the major concern is how to conserve nodes' energy so that the network lifetime lasts long enough to reach the expected end of the network mission. Since WSNs may be formed by a large number of nodes, it is rather complex, or even unfeasible, to model a WSN analytically, and doing so usually leads to oversimplified analysis with limited confidence. Besides, deploying test-beds requires a huge effort. Therefore, simulation is essential to study WSN behaviour; however, it requires a suitable model based on solid assumptions and an appropriate framework to ease implementation. In addition, simulation results rely on the particular scenario under study (environment) and on hardware and physical layer assumptions, which are not usually accurate enough to capture the real behaviour of a WSN, thus jeopardising the credibility of the results. On the other hand, detailed models lead to scalability and performance issues, owing to the large number of nodes that, depending on the application, must be simulated. Therefore, the tradeoff between scalability and accuracy becomes a major issue when simulating WSNs. In particular, we systematically analyse a few prominent WSN clustering routing protocols and compare these different approaches according to our taxonomy and several significant metrics. Finally, we summarise and conclude the paper with some pertinent future directions.
    Keywords: energy saving; distributed algorithm; load balancing; cluster-based routing; wireless sensor network.

  • Virtual cluster optimisation for MapReduce-like applications   Order a copy of this article
    by Cairong Yan, Guangwei Xu 
    Abstract: Infrastructure-as-a-service clouds are becoming ubiquitous for provisioning virtual machines on demand. Cloud service providers expect to use the least resources to deliver the best services. As users frequently request virtual machines to build virtual clusters and run MapReduce-like jobs for big data processing, cloud service providers intend to optimise the virtual cluster to minimise network latency and subsequently reduce data movement cost. In this paper, we focus on the virtual machine placement issue for provisioning virtual clusters with minimum network latency in clouds. We define the distance as the latency between virtual machines and use it to measure the affinity of a virtual cluster. Such a metric of distance reflects both virtual machine placement and the topology of physical nodes in clouds. Then we formulate our problem as the classical shortest distance problem and solve it by building an integer programming model. A greedy virtual machine placement algorithm is designed to get a compact virtual cluster. Furthermore, an improved heuristic algorithm is also presented for achieving a global resource optimisation. The simulation results verify our algorithms, and the experimental results validate the improvement achieved by our approaches.
    Keywords: virtual cluster; provisioning; resource optimisation; MapReduce programming model; shortest distance.
    DOI: 10.1504/IJHPCN.2016.10005007
     
  • Harnessing betweenness centrality for virtual network embedding in tree topologies   Order a copy of this article
    by Mydhili Palagummi, Ricardo Lent 
    Abstract: We examine the virtual network embedding problem with QoS constraints and formulate an approach that exploits the betweenness centrality of VNE requests to improve performance. A pay-per-use revenue model is introduced to evaluate the algorithm. An evaluation study using datacentre-like substrates and a wide area topology compares the approach with four embedding methods from the literature and reports on the average revenue rate, embedding success probability, average number of VNE deployments, cost, and impact of substrate failures on the operation of the VNEs, confirming the efficacy of the proposed approach.
    Keywords: virtual network embedding; revenue metric; cloud computing; network overlay; datacentre; simulation.

  • Detecting fake reviews via dynamic multimode network   Order a copy of this article
    by Jun Zhao, Hong Wang 
    Abstract: Online product reviews can greatly affect consumers' shopping decisions. Thus, a large number of unscrupulous merchants post fake or unfair reviews to mislead consumers for profit and fame. The common approaches to finding these spam reviews analyse text similarity or rating patterns. With these approaches we can easily identify ordinary spammers, but we cannot find the unusual ones who manipulate their behaviour to act just like genuine reviewers. In this paper, we propose a novel method to recognise these unusual spammers by using the relations among reviewers, reviews, commodities and stores. Firstly, we present four fundamental concepts (the quality of the merchandise, the honesty of the review, the trustworthiness of the reviewer and the reliability of the store), enabling us to identify spam reviewers more efficiently. Secondly, we propose our multimode network model for identifying suspicious reviews and give three corresponding algorithms. Eventually, we find from our experiments that multiview spam detection based on the multimode network can detect more subtle false reviews.
    Keywords: fake review detection; honesty degree; shopping behaviour; multiview spam detection; dynamic multimode network.

  • DBSCAN-PSM: An improvement method of DBSCAN algorithm on Spark   Order a copy of this article
    by Guangsheng Chen, Yiqun Cheng, Weipeng Jing 
    Abstract: DBSCAN is a density-based data clustering algorithm, widely used in image processing, data mining, machine learning and other fields. With the increasing size of datasets, parallel DBSCAN algorithms are widely used. However, we consider the current partitioning method of DBSCAN too simple, and the GETNEIGHBORS query step repeatedly accesses the dataset on Spark. We therefore propose DBSCAN-PSM, which applies a new data partitioning and merging method. In the first stage of our method we import the KD-tree and combine partitioning with the GETNEIGHBORS query, reducing the number of accesses to the dataset and decreasing the influence of I/O on the algorithm. In the second stage we use the features of points in merging so as to avoid the time cost of global labelling. Experimental results show that our new method can improve both the parallel efficiency and the clustering performance.
    Keywords: big data; DBSCAN; data partitioning; data merging.
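
    The gain from the KD-tree stage comes from answering every GETNEIGHBORS query against a prebuilt index instead of rescanning the partition; a minimal sketch with SciPy:

      import numpy as np
      from scipy.spatial import cKDTree

      points = np.random.rand(100000, 2)     # one partition's points
      tree = cKDTree(points)                 # built once per partition
      eps = 0.01
      # one GETNEIGHBORS query: all points within eps of points[0]
      print(len(tree.query_ball_point(points[0], r=eps)))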

  • Multimedia auto-annotation via label correlation mining   Order a copy of this article
    by Feng Tian 
    Abstract: How to automatically determine labels for multimedia objects is crucial for multimedia retrieval. Over the past decade, significant efforts have been devoted to the task of multimedia annotation. The problem is difficult because an arbitrary multimedia object can capture a variety of concepts, each of which would require separate detection. The neighbour-voting mechanism is known to be effective for multimedia object annotation. However, it estimates the relevance of a label with respect to multimedia content by the frequency of labels derived from the object's nearest neighbours, which does not take the assigned label set into account as a whole. We propose LSLabel, a novel algorithm that achieves comparable results with label correlation mining. By incorporating label correlation and label relevance with respect to multimedia content, the problem of assigning labels to multimedia objects is formulated in a joint framework. The problem can be efficiently optimised in a heuristic manner, which allows us to incorporate a large number of feature descriptors efficiently. On two standard real-world benchmark datasets, we demonstrate that LSLabel matches the current state of the art in annotation quality and has lower complexity.
    Keywords: label correlation; multimedia annotation; auto-annotation; correlation mining.

  • Dynamic trust evaluation model based on bidding and multi-attributes for social networks   Order a copy of this article
    by Gang Wang, Jaehong Park, Ravi Sandhu 
    Abstract: Mutual trust is the most important basis of social networks. However, many malicious nodes often deceive, collaboratively cheat, and maliciously recommend other nodes to gain more benefits. Meanwhile, because of the lack of an effective incentive strategy, many nodes neither evaluate nor recommend. Thus, malicious actions have been aggravated in social networks. To solve these issues, we design a bidding strategy that incentivises nodes to do their best to recommend or evaluate service nodes. At the same time, we use the TOPSIS method to select the correct service node for the system from the network. To guarantee the reliability of the selected service node, we bring a recommendation-time influence function, a service-content similarity function and a recommendation acquaintance function into the model to compute the general trust of a node. Finally, we give an update method for the trust degree of a node and an experimental analysis.
    Keywords: dynamic trust; trust evaluation model; bid; multi-attributes; TOPSIS; information entropy; recommendation trust; direct trust.

  • A personal local area information interaction system based on NFC and Bluetooth technology   Order a copy of this article
    by Tian Wang, Wenhua Wang, Ming Huang, Yongxuan Lai 
    Abstract: Taking attendance is a regular activity in society, and required class attendance is common in Chinese colleges and universities. In most traditional classrooms, taking attendance may consume much time, and some students may cheat by pretending to be their classmates, which makes the results unreliable. Moreover, the interaction mode between the teacher and students is limited and cannot support fast data interaction. To solve these problems, this paper proposes an information interaction system that can not only speed up the process of taking attendance but also extend the information exchange mode. Firstly, we propose an NFC (Near Field Communication)-based method for taking attendance, which uses the rapid information exchange capability of NFC in mobile phones. Furthermore, an ad-hoc scheme is introduced in which some students may be selected as relays, which can greatly accelerate the attendance-taking process. Moreover, a lazy unbinding mechanism is proposed to prevent students from taking attendance for others. Finally, based on Bluetooth technology, the system realises file transfer, which extends the information exchange mode. Real experimental results demonstrate the feasibility of the proposed system.
    Keywords: taking attendance; information interaction; NFC; lazy unbinding scheme; relay scheme.
    DOI: 10.1504/IJHPCN.2016.10009054
     
  • A risk adaptive access control model based on Markov for big data in the cloud   Order a copy of this article
    by Hongfa Ding, Changgen Peng, Youliang Tian, Shuwen Xiang 
    Abstract: One of the most important problems faced by cloud computing and cloud storage is identity and access management. The main difficulty in applying access control in the cloud is providing the flexibility and scalability necessary to support a large number of users and resources in a dynamic and heterogeneous environment with collaboration and information-sharing needs. This paper proposes a risk-adaptive dynamic access control model for big data stored in the cloud and processed by cloud computing, employing the Markov method and Shannon information theory. First, a simple formal adversary model for our risk-adaptive access control model is presented. Second, a modification of the eXtensible Access Control Markup Language (XACML) framework is given, with some new and enhanced components (including a novel risk evaluation component) added. Then, we present Markov-based methods to calculate the risk values of access requests, identify users and supervise access behaviour according to users' job obligations and the classification of the data. Finally, an incentive mechanism similar to a credit system is designed to supervise all access behaviours of subjects; risky requests and risky users are restrained effectively by this mechanism. Our method is easy to deploy, as the model extends standard XACML: compared with other work, the administrator just needs to label the object data and record the requests and access behaviour. The method is effective and suitable for controlling access in large-scale information systems (e.g. cloud-based systems) and for protecting data owners' sensitive and private data.
    Keywords: risk-based access control; privacy protection; risk management; cloud computing.

  • Coverless information hiding method based on the keyword   Order a copy of this article
    by Hongyong Ji, Zhangjie Fu 
    Abstract: Information hiding is an important
    Keywords: coverless information hiding; big data; Chinese mathematical expression; word segmentation.

  • Measurement method of carbon dioxide using spatial decomposed parallel computing   Order a copy of this article
    by Nan Liu, Weipeng Jing 
    Abstract: Exploiting carbon dioxide's weak absorption in the ultraviolet and visible (UV-VIS) band, a measurement method based on spatial decomposed parallel computing for traditional differential optical absorption spectroscopy is proposed to measure the CO2 vertical column concentration in the ambient atmosphere. First, the American Standard Profile is used to define the solar absorption spectrum, and the acquisition of the incident light converged by the telescope is described by observed parameters. On this basis, a spectrometer line model is established. Then, atmospheric radiative transfer is simulated using parallel computing, which reduces the computational complexity while balancing the interference terms that participate in the fitting. Simulation analyses show that the proposed method reduces the computational complexity, with the run time reduced by 1.18 s compared with IMLM and IMAP-DOAS in the same configuration. The proposed method also increases accuracy, with its inversion error reduced by 5.3% and its residual reduced by 0.8% compared with differential optical absorption spectroscopy. The spatial decomposed parallel computing method has advantages in processing CO2, and it can be further used in research into carbon sinks.
    Keywords: differential optical absorption spectroscopy; ultraviolet and visible band; spatial decomposed parallel computing method; vertical column concentration; spectrometer; fitting.

  • Design and implementation of an Openflow SDN controller in the NS-3 discrete-event network simulator   Order a copy of this article
    by Ovidiu Mihai Poncea, Andrei Sorin Pistirica, Florica Moldoveanu, Victor Asavei 
    Abstract: The NS-3 simulator comes with the Openflow protocol and a rudimentary controller for classic layer-2 bridge simulations. This controller lacks the basic functionality provided by real SDN controllers, making it inadequate for experimentation in the scientific community. In this paper, we propose a new controller with an architecture and functionality similar to that of full-fledged controllers yet simple, extensible and easy to modify - all characteristics specific to simulators.
    Keywords: networking; software defined networking; SDN controller; NS3; NS-3; simulators.

  • Exploring traffic conditions based on massive taxi trajectories   Order a copy of this article
    by Dongjin Yu, Jiaojiao Wang, Ruiting Wang 
    Abstract: As increasing volumes of urban traffic data become available, more and more opportunities arise for data-driven analysis that can lead to improvements in traffic conditions. In this paper, we focus on a particularly important type of urban traffic dataset: taxi trajectories. With GPS devices installed, moving taxis become valuable sensors of traffic conditions. However, analysing these GPS data presents many challenges owing to their complex nature. We propose a new approach that transforms the trajectory of each moving taxi into a document consisting of the traversed street names, which enables semantic analysis of massive taxi datasets as document corpora. More specifically, we identify traffic topics through textual topic-modelling techniques, and then cluster trajectories under these topics to explore traffic conditions. The effectiveness of our approach is illustrated by a case study using a large taxi trajectory dataset acquired from 3743 taxis in a city.
    Keywords: vehicle trajectory; map matching; traffic regions; latent Dirichlet allocation; trajectory clustering; visualisation.
    DOI: 10.1504/IJHPCN.2016.10011059
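
    The trajectory-as-document idea can be sketched as below; the street names and topic count are illustrative only:

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      trips = [                      # each taxi trip as a "document"
          "main_st river_rd bridge_ave main_st",
          "bridge_ave harbor_blvd main_st",
          "hill_rd park_ln hill_rd park_ln",
      ]
      counts = CountVectorizer().fit_transform(trips)
      lda = LatentDirichletAllocation(n_components=2,
                                      random_state=0).fit(counts)
      print(lda.transform(counts))   # per-trip topic mixtures to cluster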
     
  • Keyword guessing on multi-user searchable encryption   Order a copy of this article
    by Zhen Li, Minghao Zhao, Han Jiang, Qiuliang Xu 
    Abstract: Searchable encryption provides a practical method that enables a client to store an encrypted database on an untrusted server while supporting keyword search in a secure manner, and it has gained extensive research interest with increasing concerns about security in cloud computing. Multi-user searchable encryption is better suited to the multi-tenancy and massive scalability of cloud services. Most such schemes are constructed using public key encryption. However, public key encryption with keyword search is vulnerable to the keyword guessing attack, mainly because the keyword space is overwhelmingly smaller than the polynomial level of the security parameter and users usually query commonly used keywords with low entropy. Consequently, a secure channel is necessarily involved for transmitting secret information, which places an extra severe burden on the cloud system. This vulnerability is recognised in traditional searchable encryption, but it was previously undecided whether it also exists in the multi-user setting. In this paper, we first point out that the keyword guessing attack is also a problem in the multi-user setting without the supposed secure channel. Through an in-depth investigation of some recently proposed multi-user searchable encryption schemes and by simulating the keyword guessing attack on them, we show that none of these schemes can resist this attack. We give a comprehensive security definition and propose some open problems related to multi-user searchable encryption.
    Keywords: cloud computing; keyword guessing; searchable encryption; multi-user.
    DOI: 10.1504/IJHPCN.2017.10007944
     
  • Semi-supervised dimensionality reduction based on local estimation error   Order a copy of this article
    by Xianfa Cai 
    Abstract: Graph construction is one of the key steps in graph-based semi-supervised learning. However, the neighbourhood graph of most semi-supervised methods is unstable owing to sensitivity to the selection of the neighbourhood parameter and inaccuracy of the edge weights, which easily leads to dramatic degradation of performance. Since local models are trained only with the points related to a particular one, local learning methods often outperform global ones; this good performance indicates that the label of a point can be well estimated by its neighbours. Inspired by this, this paper proposes a feasible strategy called semi-supervised dimensionality reduction based on local estimation error (LEESSDR), which uses local learning projections (LLP) for semi-supervised dimensionality reduction. The algorithm sets the edge weights of the neighbourhood graph by minimising the local estimation error, and can effectively preserve the global geometric structure of the sampled dataset as well as its local one. Since LLP does not require its input space to be locally linear, even if it is nonlinear, LLP maps it to the feature space by using kernel functions and then obtains the local estimation error in the feature space. The feasibility and effectiveness of the proposed method are verified on two popular face databases (YaleB and CMU PIE) with promising classification accuracy and favourable robustness.
    Keywords: local learning projections; side-information; semi-supervised learning; graph construction.

  • Securing SDN controller and switches from attacks   Order a copy of this article
    by Uday Tupakula, Vijay Varadharajan 
    Abstract: Software Defined Networking (SDN) enables programmable networks and offers several advantages to simplify network management tasks for administrators. Hence it is increasingly used for the management of complex networks such as the cloud. Although there are several benefits with SDN, it also leads to new types of attack. In this paper we describe how the attacks in traditional networks can lead to attacks in SDN. Then we propose techniques for securing the SDN controller and the switches from malicious end-host attacks. Our model makes use of trusted computing and introspection-based intrusion detection to deal with attacks in SDN. We have developed a security application for the SDN controller to validate the state of the switches in the data plane and enforce the security policies to monitor the virtual machines. The attack detection component takes into account the specific features of the virtual machine in detecting the attacks and isolating the malicious virtual machines. It uses the introspection feature at the hypervisor layer to collect the system call traces of programs running in a monitored VM. We have developed a feature extraction method, named vector of n-grams, which represents the traces in an efficient way without losing the ordering of system calls. The flows from the malicious hosts are dropped before they are processed by the switches or forwarded to the SDN controller. Hence our model protects the switches and the SDN controller from the attacks.
    Keywords: SDN security; trusted computing; virtual machine introspection; security attacks.

  • Multi-objective fuzzy optimisation of knowledge transfer organisations in the big data environment   Order a copy of this article
    by Chuanrong Wu, Feng Li 
    Abstract: With the advent of the big data era, the information from big data has become a type of important knowledge that enterprises need for innovation. The knowledge transfer mode and the influence factors of big data knowledge providers are different from those of traditional knowledge providers. Based on an analysis of organisational composition and characteristics of knowledge transfer in the big data environment, the influence factors and evaluation index systems of big data knowledge providers are established. A multi-objective fuzzy optimisation model is constructed to derive satisfactory knowledge providers by finding optimal sequences. Meanwhile, this model can provide a cooperative decision-making method for knowledge transfer organisations to enhance the efficiency of knowledge transfer in the big data environment.
    Keywords: big data; knowledge transfer; multi-objective optimisation; organisation; decision making method.

  • On Parallelisation of image dehazing with OpenMP   Order a copy of this article
    by Tien-Hsiung Weng, Yi-Siang Chen, Huimin Lu 
    Abstract: In this paper, we present our experience of designing and implementing parallel image dehazing code with OpenMP, developed from an existing fast sequential version. The aim of this work is to present a case-study analysis of the development of parallel haze removal that makes practical and efficient use of shared-memory multi-core servers. We discuss implementation techniques and results in terms of the program improvements that may be needed to support parallel application developers with similar high-performance goals. Preliminary studies, results and experiments on the haze removal application executed on multi-core shared-memory platforms show that the performance of the proposed parallel code is promising.
    Keywords: OpenMP; image haze removal; multicores; parallel programming.

  • Fast graph centrality computation via sampling: a case study of influence maximisation over online social networks   Order a copy of this article
    by Rui Wang, Yongkun Li, Yinlong Xu 
    Abstract: Graph centrality computation, e.g., asking for the most important vertices in a graph, may incur a high time cost with the increasing size of graphs. To address this challenge, this paper presents a sampling-based framework to speed up the computation of graph centrality. As a use case, we study the problem of influence maximisation, which asks for the k most influential nodes in a graph to trigger the largest influence spread, and presents an IM-RWS algorithm. We experimentally compare IM-RWS with the state-of-the-art influence maximisation algorithms IMM and IM-RW, and the results show that our solution can bring a significant improvement in efficiency as well as some improvement in empirical accuracy. In particular, our algorithm can solve the influence maximisation problem in graphs containing millions of nodes within tens of seconds, with an even better performance result in terms of influence spread.
    Keywords: random walk; sampling; graph centrality; online social networks; influence maximisation.
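
    Not the paper's IM-RWS itself, but the quantity all the compared algorithms optimise, influence spread under the independent-cascade model, can be estimated by plain Monte Carlo sampling:

      import random

      def estimate_spread(graph, seeds, p=0.1, runs=200):
          total = 0
          for _ in range(runs):
              active, frontier = set(seeds), list(seeds)
              while frontier:
                  u = frontier.pop()
                  for v in graph.get(u, []):     # each edge fires w.p. p
                      if v not in active and random.random() < p:
                          active.add(v)
                          frontier.append(v)
              total += len(active)
          return total / runs

      g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0]}   # toy adjacency list
      print(estimate_spread(g, seeds=[0]))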

  • Non-intrusive load monitoring and its challenges in a non-intrusive load monitoring system framework   Order a copy of this article
    by Qi Liu, Min Lu, Xiaodong Liu, Nigel Linge 
    Abstract: With increasing energy demand and electricity prices, researchers show more and more interest in residential load monitoring. In order to feed back the energy consumption of individual appliances instead of whole-house energy consumption, Non-Intrusive Load Monitoring (NILM) is a good choice for residents to respond to time-of-use pricing and achieve electricity savings. In this paper, we discuss the system framework of NILM and analyse the challenges in every module. Besides, we study and compare the public datasets and accuracy metrics of NILM techniques.
    Keywords: NILM; system framework; data acquisition; event detection; feature extraction; load disaggregation.
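
    The event detection module referred to above can be illustrated by a simple sliding-window edge detector on the aggregate power signal; a minimal sketch with an assumed sampling layout, not a method from the paper:

      #include <math.h>

      /* Flag an appliance on/off event when the mean aggregate power of
         two adjacent windows of length win differs by more than thresh
         (illustrative detector only). Returns the number of events. */
      int detect_events(const double *p, int n, int win, double thresh,
                        int *event_idx, int max_events)
      {
          int found = 0;
          for (int i = win; i + win <= n && found < max_events; i++) {
              double before = 0.0, after = 0.0;
              for (int j = 0; j < win; j++) {
                  before += p[i - win + j];
                  after  += p[i + j];
              }
              if (fabs(after - before) / win > thresh)
                  event_idx[found++] = i;    /* candidate switching event */
          }
          return found;
      }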

  • A similarity algorithm based on Hamming distance used to detect malicious users in cooperative spectrum sensing   Order a copy of this article
    by Libin Xu, Pin Wan, Yonghua Wang, Ting Liang 
    Abstract: Spectrum sensing (SS) is a key technology in cognitive radio networks (CRNs). Many statistical methods have been proposed to improve the sensing performance, but few studies take security into account. Collaborative spectrum sensing (CSS) is vulnerable to potential attacks from malicious users (MUs). Most existing MU detection methods are reputation-based and become incapable when MUs dominate the network. Although the Markov model characterises the spectrum state behaviour more precisely, MU detection based on it is scarce in the literature. In this paper, a Hamming distance check (HDC) is proposed to detect MUs. The Hamming distance between the reports of all sensing nodes is calculated; because the reports from MUs differ from those of honest users (HUs), the MUs can be found and excluded from the fusion process. A new trust factor (TF) is proposed to increase the weight of trustworthy nodes in the final decision. The simulation results show that the impact of MUs on CSS cannot be ignored, and that the proposed algorithm can effectively detect MUs without prior knowledge. In addition, the proposed method performs better than existing approaches.
    Keywords: malicious user; attack; Hamming distance; cognitive radio networks.
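
    The core of such a check is the Hamming distance between the binary sensing reports of the nodes: a node whose reports lie consistently far from everyone else's is a malicious-user candidate. A minimal sketch (illustrative names, not the HDC algorithm itself):

      /* Hamming distance between two length-n binary sensing reports. */
      int hamming(const unsigned char *a, const unsigned char *b, int n)
      {
          int d = 0;
          for (int i = 0; i < n; i++)
              d += (a[i] != b[i]);
          return d;
      }

      /* Mean Hamming distance of node i's report r[i] to the m-1 other
         reports; a large value flags node i as suspicious. */
      double mean_distance(unsigned char **r, int m, int n, int i)
      {
          long sum = 0;
          for (int j = 0; j < m; j++)
              if (j != i) sum += hamming(r[i], r[j], n);
          return (double)sum / (m - 1);
      }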

  • A self-adaptive quantum steganography algorithm based on QLSb modification in watermarked quantum images   Order a copy of this article
    by Qu Zhiguo 
    Abstract: As an important research branch of quantum information hiding, quantum steganography embeds secret information into quantum images for covert communication by integrating quantum secure communication technology and classical steganography. In this paper, based on the Novel Enhanced Quantum Representation (NEQR), a novel quantum steganography algorithm is proposed to transfer secret information by virtue of quantum watermarked images. To achieve this goal, the Least Significant Qubit (LSQb) of the quantum carrier image is replaced with the secret information by implementing a quantum circuit. Compared with previous quantum steganography algorithms, the communicating parties can recover tampered secret information, and the tamperers can be located effectively. In the experimental results, the Peak Signal-to-Noise Ratios (PSNRs) are calculated for different quantum watermarked images and quantum watermarks, which demonstrates that the imperceptibility of the algorithm is good and that the embedded secret information can be recovered by virtue of its self-adaptive mechanism.
    Keywords: quantum steganography; quantum least significant bit; watermarked quantum carrier image.
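
    The PSNR measure used to assess imperceptibility is the standard one for 8-bit images, PSNR = 10 log10(255^2 / MSE); a minimal C sketch:

      #include <math.h>

      /* PSNR between an original and a watermarked 8-bit image of n
         pixels; identical images yield infinity. */
      double psnr(const unsigned char *x, const unsigned char *y, int n)
      {
          double mse = 0.0;
          for (int i = 0; i < n; i++) {
              double d = (double)x[i] - (double)y[i];
              mse += d * d;
          }
          mse /= n;
          return mse == 0.0 ? INFINITY : 10.0 * log10(255.0 * 255.0 / mse);
      }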

  • Distributed continuous KNN query over moving objects   Order a copy of this article
    by Xiaolin Yang, Zhigang Zhang, Yilin Wang, Cheqing Jin 
    Abstract: Continuous k-nearest neighbour (CKNN) queries over moving objects have been widely researched in many fields. However, existing centralised solutions no longer scale, and distributed solutions suffer from the problems of index maintenance, high communication cost and query latency. In this paper, we first propose a distributed hybrid indexing strategy that combines the SSGI (Spatial-temporal Sensitive Grid Index) and the DQI (Dynamic Quad-tree Index): the SSGI locates the spatial range that contains the final results, and the DQI is used for data partitioning. Then, we introduce an algorithm named HDCKNN to implement CKNN queries. In comparison with existing work, HDCKNN achieves the final result in one round of iteration, while existing methods require at least two rounds. Extensive experiments show that the proposed method is more efficient than state-of-the-art algorithms.
    Keywords: moving objects; continuous k-nearest neighbour query; distributed query processing; hybrid indexing.

  • A cross-layer QoS model for space DTN   Order a copy of this article
    by Aiguo Chen, Xuemei Li, Guoming Lu, Guangchun Luo 
    Abstract: Delay/disruption tolerant networking (DTN) technology is considered a new solution for highly stressed communications in space environments. An IP-based DTN network can support more flexible communication services, which is one of the research hotspots of space networking. At the same time, it remains an important and difficult problem to optimise the allocation of limited network resources and guarantee the QoS of different services in a DTN network. To address this challenge, a cross-layer QoS model that considers application, network and node layer QoS requirements and resource limitations is proposed in this paper. In addition, a comprehensive admission control scheme that ensures productivity and fairness is employed. The results of the experiments and analyses conducted demonstrate the benefits of this approach.
    Keywords: DTN; QoS model; cross-layer; space network.
    DOI: 10.1504/IJHPCN.2017.10004929
     
  • Industrial software abnormal behaviour analysis based on multi-granularity error propagation   Order a copy of this article
    by Cheng Peng 
    Abstract: Bugs in software are inevitable. This study of the propagation mechanism of industrial software abnormal behaviour triggered by bugs provides a way to grasp the execution rules and to adopt corresponding pinning measures. According to the situation of abnormal behaviour propagation at software entities of different granularity, the factor of error propagation probability affecting abnormal behaviour propagation is proposed, and the corresponding definition and calculation method are investigated. The software abnormal behaviour propagation process model is constructed with reference to compartment and individual models together with the factor mentioned above, which improves the model's expressive ability and enhances its competence and accuracy. An abnormal behaviour propagation analytical method is then applied to an online electronic shopping system; the results verify the correctness and feasibility of the propagation mechanism.
    Keywords: industrial software; abnormal behaviour; propagation model; system bug.

  • Modelling the propagation of soft errors in programs   Order a copy of this article
    by Lixing Xue, Decheng Zuo, Zhan Zhang 
    Abstract: Soft errors are a category of typical transient errors caused by multiple external factors. Owing to the continuous exponential growth of transistor counts in processors, the mainstream computing paradigm and increasingly complex operational environments, they have become an urgent challenge in ground-level systems. To handle these errors, understanding how they propagate is the key step. Fault injection campaigns used to study error propagation generate results only for the injected soft errors and cannot be generalised to other errors; as the injection space becomes vast, this traditional method appears powerless. To address this problem, this paper proposes a method to study and model the propagation of soft errors within a program. Based on the dynamic instructions traced in an error-free program execution, ACE analysis classifies soft errors in architectural registers into benign and non-benign ones; the benign errors are considered to be derated during propagation. We then build a crash model and an improved DDG to analyse the propagation of each non-benign error and to predict its consequence (crash or silent data corruption). If the error is considered to cause a crash, the crash latency and the propagation path are also predicted. The method can be used to predict the outcomes of programs under soft errors as well as the occurrence of a certain category of error consequences. Extensive fault-injection experiments validate the proposed method from multiple perspectives.
    Keywords: soft error; error propagation; silent data corruption; crash; architecturally correct execution; dynamic dependency graph.

  • Mimicry honeypot: an evolutionary decoy system   Order a copy of this article
    by Leyi Shi, Yuwen Cui, Han Xu, Honglong Chen, Deli Liu 
    Abstract: Motivated by the mimic-and-evolve phenomenon in species rivalry, we present the novel concept of a mimicry honeypot, which can bewilder adversaries by comprehensively exploiting protective coloration, warning coloration and mimicry evolution according to changes in the network circumstance. The paper first defines protective coloration and warning coloration for cyber defence, formalises the mimicry honeypot model, discusses the critical issues of environment perception and mimicry evolution, and implements a mimicry prototype through a web service platform and a genetic algorithm. Afterwards, we perform experiments with the mimicry prototype deployed both in our private campus network and on the internet. Our empirical study demonstrates that the mimicry honeypot is more effective than traditional decoy systems.
    Keywords: mimicry honeypot; warning coloration; protective coloration; evolution; genetic algorithm.

  • Protection of location privacy with recommendation from a decision tree algorithm   Order a copy of this article
    by Hongjun Dai 
    Abstract: Location-based services (LBSs) have been widely used in apps on phones. They help apps provide better service for us; for example, a navigation app guides us during a trip. However, LBSs can also cause privacy disclosure. For some apps, such as weather apps, a coarse location accuracy (LA) is enough for their function, yet they obtain our location to within several metres. In this paper, we propose a framework for LBS Management (LBSM) to protect users' personal location information by providing multi-level LA to different apps. The framework creates two decision trees, for apps and users, to analyse their properties, and then uses the decision trees for classification. According to the classification result, the LA level of an app is finally determined. For better service, the decision trees are adjusted continually as the cloud servers obtain new data on apps and users or as their properties change. Furthermore, some improvements are made to the traditional Iterative Dichotomiser 3 (ID3) algorithm to produce more accurate classification results. Experiments show that users' personal information can be better protected while still obtaining satisfactory service with our proposed LBSM.
    Keywords: privacy protection; LBS; ID3 algorithm; location accuracy; app classification; user classification.
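
    The ID3 machinery such a framework builds on selects, at each tree node, the attribute with the highest information gain; a minimal sketch of the entropy and gain computations for a binary attribute and binary class (illustrative, not the paper's improved variant):

      #include <math.h>

      /* Shannon entropy of a binary class split: H = -sum p log2 p. */
      double entropy(int pos, int neg)
      {
          double n = pos + neg, h = 0.0;
          if (pos > 0) h -= (pos / n) * log2(pos / n);
          if (neg > 0) h -= (neg / n) * log2(neg / n);
          return h;
      }

      /* Information gain of a binary attribute splitting (pos, neg)
         into (pos0, neg0) and (pos1, neg1):
         Gain = H(S) - sum_v (|S_v|/|S|) H(S_v). */
      double info_gain(int pos0, int neg0, int pos1, int neg1)
      {
          int pos = pos0 + pos1, neg = neg0 + neg1;
          double n = pos + neg;
          return entropy(pos, neg)
               - ((pos0 + neg0) / n) * entropy(pos0, neg0)
               - ((pos1 + neg1) / n) * entropy(pos1, neg1);
      }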

  • Evaluation of directive-based performance portable programming models   Order a copy of this article
    by M. Graham Lopez, Wayne Joubert, Veronica Vergara Larrea, Oscar Hernandez, Azzam Haidar, Stanimire Tomov, Jack Dongarra 
    Abstract: We present an extended exploration of the performance portability of directives provided by OpenMP 4 and OpenACC to program various types of node architecture with attached accelerators, both self-hosted multicore and offload multicore/GPU. Our goal is to examine how successful OpenACC and the newer offload features of OpenMP 4.5 are for moving codes between architectures, and we document how much tuning might be required and what lessons we can learn from these experiences. To do this, we use examples of algorithms with varying computational intensities for our evaluation, as both compute and data access efficiency are important considerations for overall application performance. To better understand fundamental compute vs. bandwidth bound characteristics, we add the compute-bound Level 3 BLAS GEMM kernel to our linear algebra evaluation. We implement the kernels of interest using various methods provided by newer OpenACC and OpenMP implementations, and we evaluate their performance on various platforms including both x86_64 and Power8 with attached NVIDIA GPUs, x86_64 multicores, self-hosted Intel Xeon Phi KNL, as well as an x86_64 host system with Intel Xeon Phi coprocessors. We update these evaluations with the newest version of the NVIDIA Pascal architecture (P100), Intel KNL 7230, Power8+, and the newest supporting compiler implementations. Furthermore, we present in detail what factors affected the performance portability, including how to pick the right programming model, its programming style, its availability on different platforms, and how well compilers can optimise and target multiple platforms.
    Keywords: OpenMP 4; OpenACC; performance portability; programming models.
    DOI: 10.1504/IJHPCN.2017.10009064
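
    As a flavour of the directives under evaluation, a compute-bound GEMM-style kernel can be offloaded with OpenMP 4.5 target directives as follows; a minimal sketch, not one of the paper's tuned kernels:

      /* C = alpha*A*B + beta*C for n x n row-major matrices, offloaded
         to an attached device via OpenMP 4.5 target directives. */
      void gemm_offload(int n, float alpha, const float *A,
                        const float *B, float beta, float *C)
      {
          #pragma omp target teams distribute parallel for collapse(2) \
                  map(to: A[0:n*n], B[0:n*n]) map(tofrom: C[0:n*n])
          for (int i = 0; i < n; i++)
              for (int j = 0; j < n; j++) {
                  float acc = 0.0f;
                  for (int k = 0; k < n; k++)
                      acc += A[i*n + k] * B[k*n + j];
                  C[i*n + j] = alpha * acc + beta * C[i*n + j];
              }
      }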
     
  • Genuine and secure public auditing scheme for the outsourced data   Order a copy of this article
    by Jianhong Zhang 
    Abstract: The most common concerns for users of cloud storage are data integrity, confidentiality and availability, so various data integrity auditing schemes for cloud storage have been proposed in the past few years, some of which achieve privacy-preserving public auditing, data sharing and group dynamics, or support data dynamics. However, as far as we know, no practical auditing scheme yet exists that can realise all of these functions simultaneously. In addition, in all existing schemes, block authentication tags (BATs) are computed by the data owner to enable data integrity auditing, which is an arduous task for a resource-constrained data owner. In this paper, we propose a novel privacy-preserving public auditing scheme for shared data in the cloud, which also supports data dynamics and group dynamics. Our scheme has the following advantages: (1) we introduce proxy signatures into the existing auditing scheme to reduce the cloud user's computation burden; (2) by introducing a Lagrange interpolating polynomial, our scheme realises identity privacy-preservation without increasing computation cost or communication overhead, and moreover makes group dynamics simple; (3) it realises practical and secure dynamic operations on shared data by combining the Merkle hash tree with an index-switch table that we build; (4) to protect data privacy and resist active attacks, the cloud storage server hides the actual proof information by inserting its private key into the proof generation process. After theoretical analysis demonstrates our scheme's security, experimental results show that our scheme not only has low computational and communication overhead for data verification but can also complete group dynamics at great speed.
    Keywords: cloud computing; self-certified cryptography; integrity checking; security proof; provably secure; random oracle model; cryptography.

  • A new mobile opportunity perception network strategy and reliability research in coal mines   Order a copy of this article
    by Ping Ren, Jing-Zhao Li, Da-Yu Yang 
    Abstract: Considering the characteristics of Internet of Things applications in underground coal mines, this paper proposes a new method of mobile opportunity perception that makes full use of the mine's mobile resources. The method mainly uses existing resources, without adding sensing devices: by deploying wired nodes, wireless fixed nodes and wireless mobile nodes, it establishes a collaborative working mechanism between mobile and fixed nodes, and accordingly builds environment opportunity perception and information transmission for the whole mine, achieving comprehensive perception of the mine. A novel sparse heterogeneous fusion network, its mixed node arrangement strategy and a low-order error detection model between nodes are established, and the loss probability and redundancy perception between the mobile and fixed nodes are analysed, which improves the reliability of the system. The moving speed, the average moving distance and the communication coverage time of mobile nodes are analysed, and the interactive data quantity is obtained. On this basis, the system hardware design and experiments are carried out. Experimental results show the superiority of this method, which provides a new method and idea for industrial and mining enterprises in sensing-layer and transport-layer applications of the Internet of Things.
    Keywords: mobile sensing; opportunistic routing; heterogeneous fusion; low false alarm; loss probability.

  • Distributed admission control algorithm for random access wireless networks in the presence of hidden terminals   Order a copy of this article
    by Ioannis Marmorkos, Costas Constantinou 
    Abstract: We address the problem of admission control for wireless clients in WLANs, taking into account collisions between competing access points and explicitly considering the effect of hidden terminals, which play a prominent role in optimised client association. We propose an efficient, distributed admission control algorithm in which the wireless client node decides locally which access point to associate with in order to maximise its link throughput. The client can choose to optimise either its uplink or downlink throughput, depending on the type of traffic it predominantly intends to exchange with the network. The proposed approach takes into account the full contention resolution of the RTS/CTS IEEE 802.11 medium access control protocol and leads to an increase in the total throughput of the whole network. Finally, the algorithm can also serve as the basis for the development of efficient traffic offloading protocols in heterogeneous 5G networks.
    Keywords: admission control; IEEE802.11; multi-cell; WiFi offloading.
    DOI: 10.1504/IJHPCN.2017.10013362
     
  • Modelling geographical effect of user neighborhood on collaborative web service QoS prediction   Order a copy of this article
    by Zhen Chen, Limin Shen, Dianlong You, Huihui Jia, Chuan Ma, Feng Li 
    Abstract: QoS prediction is the task of predicting the unknown QoS value of an active user for a web service that he/she has not accessed previously, in support of appropriate web service recommendation. Existing studies adopt collaborative filtering methods for QoS prediction, but the inherent issues of data sparsity and cold-start in collaborative filtering have not been resolved satisfactorily, and the role of geographical context is also underestimated. Through data analysis on a public real-world dataset, we observe that there exists a positive correlation between a user's QoS values and the ratings of geographical neighborhoods. Based on this observation, we model the geographical effect of user neighborhood on QoS prediction and propose a unified matrix factorisation model that capitalises on the advantages of geographical neighborhood and latent factor approaches. Experimental results exhibit the significance of geographical context in modelling user features and demonstrate the feasibility and effectiveness of our approach in improving QoS prediction performance.
    Keywords: web service; QoS prediction; collaborative filtering; geographical effect; matrix factorisation.
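
    The latent factor part of such a model is typically trained by stochastic gradient descent; the sketch below shows one SGD step in C in which the prediction is blended with the average QoS observed by the user's geographical neighbours. It is a generic illustration under assumed notation, not the paper's exact unified model:

      /* One SGD step for QoS prediction: pred = (1-alpha)*dot(p,q) +
         alpha*gnb, where p, q are k-dimensional user/service factors and
         gnb is the neighbours' mean QoS (all names illustrative). */
      void mf_sgd_step(double *p, double *q, int k, double r, double gnb,
                       double alpha, double lr, double reg)
      {
          double dot = 0.0;
          for (int f = 0; f < k; f++) dot += p[f] * q[f];
          double e = r - ((1.0 - alpha) * dot + alpha * gnb);
          for (int f = 0; f < k; f++) {
              double pf = p[f];
              p[f] += lr * (e * (1.0 - alpha) * q[f] - reg * p[f]);
              q[f] += lr * (e * (1.0 - alpha) * pf   - reg * q[f]);
          }
      }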

  • Research on a data encryption scheme based on CP-ASBE for cloud storage   Order a copy of this article
    by Xiaohui Yang, Wenqing Ding 
    Abstract: Ciphertext-policy attribute-set based encryption (CP-ASBE) based on a single authorisation centre easily becomes the security bottleneck of the system. With the support of trusted measurement technology, a novel CP-ASBE method based on multiple attribute authorities (AAs) is proposed to solve this problem, and an encryption scheme is designed for cloud storage, which includes data storage, data access, data encryption and a trusted measurement scheme for AAs. The security performance and time cost of the encryption scheme are simulated, and the results show that the scheme can improve the security of users' data in the cloud storage environment.
    Keywords: cloud storage; CP-ASBE; authorisation centre; attribute authority; trusted measurement.

  • Deferred virtual machine migration   Order a copy of this article
    by Xiaohong Zhang, Jianji Ren, Zhizhong Liu, Zongpu Jia 
    Abstract: The rapid growth in demand for cloud services has led to the establishment of huge numbers of virtual machines. Online Virtual Machine (VM) migration techniques offer cloud providers a means to reduce power consumption while maintaining quality of service. However, the efficiency of online migration is suboptimal, since it is degraded by the execution of redundant migrations. To alleviate this problem, we introduce a load-aware VM migration technique. The key idea is to defer migrations and perform a quick analysis of the loads on target servers before launching any migration. This consolidation is applied in two steps: first, all migrations are tested, and a set of candidates suspected to be redundant is formed and postponed for a short time; then, the servers are analysed and only a subset of the migration candidates is activated. Our selection mechanism is conservative in the sense that it only activates VM migrations that are unlikely to cause harmful overloading of the target servers. This conservative migration policy leads to an overall effective execution of virtual machines. Our experimental results demonstrate the usefulness and effectiveness of our method.
    Keywords: cloud computing; virtual machine migration; virtual machine consolidation; power consumption.
    DOI: 10.1504/IJHPCN.2017.10013423
     
  • Impact of using multiple levels of parallelism on the performance of HPC applications hosted on Azure cloud computing   Order a copy of this article
    by Hanan Hassan, Mona Kashkoush, Mohamed Azab, Walaa Sheta 
    Abstract: The use of High-Performance Computing (HPC) applications has grown progressively in scientific research and industry. Cloud computing attracts HPC users because of its extreme cost efficiency; the reduced cost is the result of the successful employment of multilayer virtualisation, enabling dynamic, elastic resource sharing between different tenants. In this paper, we evaluate the impact of using multiple levels of parallelism on computationally intensive parallel tasks hosted on a cloud-virtualised HPC cluster. We exploit multiple levels of parallelism through a set of experiments employing both message passing and multi-threading techniques. Our evaluation addresses two main perspectives: the performance of applications and the cost of running HPC applications on clouds. We use Millions of Operations Per Second (MOPS) and speed-up to evaluate computational performance, and United States Dollars per MOPS (USD/MOPS) to evaluate cost. The experiments on two different clouds are compared against each other and against published results for the Amazon EC2 cloud. Results show that balancing the workload between processes and threads per process is the key factor in maintaining high performance at reasonable cost.
    Keywords: HPC performance on cloud computing; Azure cloud computing; hybrid MPI+OpenMP; NPB and NPB-MZ benchmarks.
    DOI: 10.1504/IJHPCN.2017.10011492
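
    The two levels of parallelism combined in such experiments can be illustrated with the canonical hybrid MPI+OpenMP skeleton, with MPI ranks across nodes and OpenMP threads within each rank (a minimal sketch, not the benchmark code used in the paper):

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int provided, rank;
          /* MPI across processes; OpenMP threads inside each process. */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          #pragma omp parallel
          {
              printf("rank %d, thread %d of %d\n",
                     rank, omp_get_thread_num(), omp_get_num_threads());
          }

          MPI_Finalize();
          return 0;
      }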
     
  • NFV deployment strategies in SDN network   Order a copy of this article
    by Chia-Wei Tseng, Po-Hao Lai, Bo-Sheng Huang, Li-Der Chou, Meng-Chiou Wu 
    Abstract: The growth of the internet has resulted in increasingly complicated network architectures, and the traditional network architecture can no longer meet the demands of new and rapidly changing network services. With the emergence of software-defined networking (SDN) and network functions virtualisation (NFV), the current complicated network architecture can be transformed into a programmable, virtualised and standardised managed architecture. This study aims to design and implement rapid deployment strategies for NFV services in SDN. Six rapid deployment technologies are addressed, based on linked-clone and full-clone cases; these technologies accelerate deployment by enabling the NFV technology with intelligent configuration. Experimental results show that the proposed parallel clone strategy in the linked-clone scenario exhibits better time efficiency than the other strategies.
    Keywords: software defined network; network functions virtualisation; rapid deployment; network management.

  • A new localisation strategy with wireless sensor networks for tunnel space model   Order a copy of this article
    by Ying Huang, Yezhen Luo 
    Abstract: Because three-dimensional distance vector hop (DV-Hop) localisation suffers large errors in the tunnel space model, an improved three-dimensional DV-Hop node localisation strategy based on wireless sensor networks is proposed that exploits the geometric model of the tunnel space. This strategy analyses the deficiencies of the traditional three-dimensional DV-Hop algorithm in hop counting and distance calculation between two WSN nodes; it employs the relationship between beacon nodes and distance to modify the DV-Hop hop count, and then uses a differential method to correct the distance error from the unknown node to the beacon node. According to the tunnel space model, a selection mechanism is introduced, and the three optimal beacon nodes are selected to locate the unknown nodes and further improve the location accuracy. The traditional three-dimensional DV-Hop hop count neglects the distance between nodes, and the improved strategy corrects the distance between unknown nodes and beacon nodes. The experimental results show that the improved DV-Hop localisation strategy greatly improves the location accuracy and can be widely used to solve Internet of Things problems.
    Keywords: tunnel space model; three-dimensional distance vector hop; received signal strength indicator; localisation algorithm.
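
    In classic DV-Hop, each beacon derives an average hop size from the known positions of the other beacons, and an unknown node then estimates its distance to a beacon as hop size × hop count; a minimal 3-D sketch of the hop-size step (illustrative arrays, not the paper's corrected variant):

      #include <math.h>

      /* Average hop size at beacon i: total Euclidean distance to the
         other beacons divided by total hop count. hops[j] is the hop
         count between beacons i and j (hops[i] unused). */
      double avg_hop_size(const double bx[], const double by[],
                          const double bz[], const int hops[],
                          int n_beacons, int i)
      {
          double dist_sum = 0.0;
          int hop_sum = 0;
          for (int j = 0; j < n_beacons; j++) {
              if (j == i) continue;
              dist_sum += sqrt((bx[i]-bx[j])*(bx[i]-bx[j]) +
                               (by[i]-by[j])*(by[i]-by[j]) +
                               (bz[i]-bz[j])*(bz[i]-bz[j]));
              hop_sum  += hops[j];
          }
          return dist_sum / hop_sum;   /* unknown node: d = this * hops */
      }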

  • Path self-deployment algorithm of three-dimensional space in directional sensor networks   Order a copy of this article
    by Li Tan, Chaoyu Yang, Minghua Yang, Xiaojiang Tang 
    Abstract: In contrast to two-dimensional directional sensor networks, three-dimensional directional sensor networks have greater complexity and diversity, and the external environment and sensor limitations affect target monitoring and coverage. Adjustment strategies provide better auxiliary guidance in the process of self-deployment, while strengthening the coverage rate of the monitoring area and the monitoring capability of sensor nodes. Based on the above issues, we propose a path self-deployment algorithm, TPSA (Three-dimensional Path Self-deployment Algorithm). The concept of virtual force is extended from two to three dimensions and includes target path control. During initialisation, a node obtains location information about the monitoring target and the target path, calculates the virtual forces they exert, and finally obtains its next movement location and direction. We analyse the self-deployment process of both static and polymorphic nodes. The simulation results verify that the proposed algorithm enables better node control during deployment and improves the efficiency of sensor node deployment.
    Keywords: directional sensor networks; path self-deployment; three-dimensional deployment; virtual force.
    DOI: 10.1504/IJHPCN.2017.10013346
     
  • A secure reversible chaining watermark scheme with hidden group delimiter for wireless sensor networks   Order a copy of this article
    by Baowei Wang, Qun Ding, Xiaodu Gu 
    Abstract: Chaining watermarks are considered to be one of the most practical methods for verifying data integrity in wireless sensor networks. However, the synchronisation points (SPs) or group delimiters (GDs), which are indispensable for keeping the sender and receivers synchronised, have been the biggest bottlenecks of existing methods: 1) if the SPs are tampered with, the false negative rate rises to 50%, making the authentication meaningless; 2) the additional GDs are easily detected by adversaries. We propose a more secure reversible chaining watermark scheme, called RWC, to authenticate data integrity in WSNs. RWC has the following characteristics: 1) fragile watermarks are embedded in a dynamic grouping chaining way to verify data integrity; 2) hidden group delimiters are designed to synchronise the sending and receiving sides in case the SPs are tampered with; 3) a difference-expansion-based reversible watermark algorithm achieves lossless authentication. The experimental results show that RWC can authenticate the sensory data without distortion and significantly improves the ability to detect various attacks.
    Keywords: chaining watermark; hidden group delimiter; reversible watermark; data integrity authentication; wireless sensor networks.
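
    The difference-expansion primitive referred to in characteristic 3) works on value pairs: the payload bit goes into the expanded difference, and the original pair is exactly recoverable. A minimal Tian-style sketch, assuming x >= y and ignoring overflow handling:

      /* Embed one bit into the pair (x, y) by difference expansion. */
      void de_embed(int x, int y, int bit, int *xp, int *yp)
      {
          int h = x - y;              /* difference (non-negative) */
          int l = (x + y) / 2;        /* integer average */
          int hp = 2 * h + bit;       /* expanded difference */
          *xp = l + (hp + 1) / 2;
          *yp = l - hp / 2;
      }

      /* Recover the bit and the original pair losslessly. */
      void de_extract(int xp, int yp, int *bit, int *x, int *y)
      {
          int hp = xp - yp;
          int l  = (xp + yp) / 2;
          *bit = hp & 1;
          int h = hp >> 1;            /* original difference */
          *x = l + (h + 1) / 2;
          *y = l - h / 2;
      }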

  • A new segmentation algorithm based on evolutionary algorithm and bag of words model   Order a copy of this article
    by Kangshun Li, Weiguang Chen, Ying Huang, Shuling Yang 
    Abstract: Crop disease and insect pest detection and recognition using machine vision can provide precise diagnosis and preventive suggestions. However, traditional bag-of-words (BOW) based identification of agricultural pests and diseases has high complexity and only mediocre accuracy. This paper presents a histogram quadric segmentation algorithm based on an evolutionary algorithm, which observes the features (colour, texture) of disease spots and learns from the guided filtering algorithm, in order to obtain the precise positions of disease spots in images. Dense-SIFT, which extracts features, and spatial pyramids, which map image features to a high-spatial-resolution space, are applied together in the BOW-based recognition of crop diseases and insect pests. The experimental results show that the new segmentation algorithm can effectively locate disease spots in corn images, and the improved BOW model substantially increases the recognition accuracy of crop diseases and insect pests.
    Keywords: evolutionary algorithm; disease spot segmentation; image recognition; diseases and insect pests.

  • A jamming detection method for multi-hop wireless networks based on association graph   Order a copy of this article
    by Xianglin Wei, Qin Sun 
    Abstract: Jamming attacks have been a great challenge for researchers because such attacks can severely damage the Quality of Service (QoS) of Multi-Hop Wireless Networks (MHWNs). Therefore, how to detect and distinguish multiple jamming attacks, and thus restore network service, has been a hot topic in recent years. Different jamming attacks cause different network status changes in an MHWN; based on this observation, a jamming detection algorithm based on an association graph is put forward in this paper. The proposed algorithm consists of two phases: a learning phase and a detection phase. In the learning phase, different symptoms are extracted from samples collected in both jamming and jamming-free scenarios, and a symptom-attack association graph is built. In the detection phase, the built symptom-attack association graph is used to detect the jamming attacks that lead to the symptoms observed by a particular network node. A series of simulation experiments on NS3 validates that the proposed method can efficiently detect and classify typical jamming attacks, such as reactive, random and constant jamming.
    Keywords: jamming detection; multi-hop wireless network; association graph.

  • Secure deduplication of encrypted data in online and offline environments   Order a copy of this article
    by Hua Ma, Linchao Zhang, Zhenhua Liu, Enting Dong 
    Abstract: Deduplication is a critical technology for saving cloud storage space; in particular, client-side deduplication can save both storage and bandwidth. However, there are security risks in existing client-side deduplication schemes, such as file proof replay attacks and online/offline brute-force attacks, and these schemes do not consider offline deduplication. To solve these problems, we present a secure client-side deduplication scheme for encrypted data in online and offline environments. In our scheme, we mix a dynamic coefficient with the randomly selected original file, so that a new file proof is produced in each challenge. For the offline case, we introduce a trusted third party as a checker to run the proof of ownership with an uploader. The main difference between online and offline deduplication is the input value, which ensures that the program can be used efficiently, so the cost of storage and design is reduced. Furthermore, the proposed scheme resists online and offline brute-force attacks, relying on a per-client rate-limiting method and a high-collision hash function, respectively. Interestingly, the security of the proposed scheme relies on a secure cryptographic hash function.
    Keywords: deduplication; proof of ownership; online/offline brute-force attack; file proof replay attack.

  • An efficient temporal verification algorithm for intersectant constraints in scientific workflow   Order a copy of this article
    by Lei Wu, Longshu Li, Xuejun Li, Futian Wang, Yun Yang 
    Abstract: Temporal verification is usually essential to ensure overall temporal correctness, so that the execution results of a scientific workflow are useful. To improve the efficiency of temporal verification, many efforts have been dedicated to selecting more effective and efficient checkpoints along scientific workflow execution, so that temporal violations can be found and handled in time. Most checkpoint selection strategies involve temporal constraints, but the relations among the temporal constraints have been ignored. The only existing strategy that considers relations among temporal constraints addresses the nested situation and is based on temporal dependency only; however, the intersectant situation and non-dependency relationships among temporal constraints also have to be considered. In this paper, a constraint-adjustment-based algorithm for efficient temporal verification in scientific workflows, involving both temporal dependency and temporal reverse-dependency, is presented for the intersectant situation. Simulations show that the temporal verification efficiency of our algorithm is significantly improved.
    Keywords: scientific workflow; temporal verification; temporal constraint; temporal consistency; intersectant situation; temporal dependency; temporal reverse-dependency.
    DOI: 10.1504/IJHPCN.2017.10016275
     
  • A united framework with multi-operator evolutionary algorithms and interior point method for efficient single objective optimisation problem solving   Order a copy of this article
    by Junying Chen, Jinhui Chen, Huaqing Min 
    Abstract: Single objective optimisation problem solving is a big challenge in science and engineering, because such optimisation problems usually have high dimensionality, many local optima and limited iteration budgets. Therefore, an efficient single objective optimisation problem solving method is investigated in this study. A united algorithm framework using multi-operator evolutionary algorithms and the interior point method is proposed. Within this framework, three multi-operator evolutionary algorithms are combined to search for the global optimum, and the interior point method is used to make the evolutionary search more efficient. The proposed algorithm framework was tested on the CEC-2014 benchmark suite, and the experimental results demonstrated good optimisation performance on most single objective optimisation problems through efficient iterations.
    Keywords: single objective optimisation; efficient problem solving; united framework; evolutionary algorithms; interior point method.

  • Load forecasting for cloud computing based on wavelet support vector machine   Order a copy of this article
    by Wei Zhong, Yi Zhuang, Jian Sun, Jingjing Gu 
    Abstract: Because the tasks submitted by users in the cloud computing environment have random and nonlinear characteristics, it is very difficult to forecast the load in a cloud data centre. In this paper, we combine the wavelet transform and the support vector machine (SVM) to propose a wavelet support vector machine load forecast (WSVMLF) model for cloud computing. The model uses the wavelet transform to analyse the cycles and frequencies of the input data, combined with the nonlinear regression characteristics of the SVM, so that the task load can be modelled more accurately. A WSVMLF algorithm is then proposed, which improves the accuracy of cloud load prediction. Finally, the Google cloud computing centre dataset was selected to test the proposed WSVMLF model. The comparative experimental results show that the proposed algorithm achieves better performance and accuracy than similar forecasting algorithms.
    Keywords: cloud computing; wavelet transform; support vector machine; load forecasting algorithm.
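
    The wavelet front-end of such a model splits the load series into low-frequency approximation and high-frequency detail parts that can each be regressed separately; one level of a Haar transform in C as a minimal sketch (n assumed even):

      #include <math.h>

      /* One Haar wavelet level: pairwise averages (approximation) and
         pairwise differences (detail), each scaled by 1/sqrt(2). */
      void haar_level(const double *x, int n, double *approx, double *detail)
      {
          for (int i = 0; i < n / 2; i++) {
              approx[i] = (x[2*i] + x[2*i + 1]) / sqrt(2.0);
              detail[i] = (x[2*i] - x[2*i + 1]) / sqrt(2.0);
          }
      }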

  • A new method of text information hiding based on open channel   Order a copy of this article
    by Yongjun Ren, Wenyue Ma, Xiaohua Wang 
    Abstract: Text is one of the most constantly and widely used information carriers. Compared with several other carriers, a new method of text information hiding based on an open channel is put forward in this article. Public normal texts are used to transmit information directly, without any changes, in order to hide it. At the same time, the sender negotiates with the intended recipient to produce a session key; the text, including the locations of the hidden information and the associated hiding rules, delivers the information through the session key. Moreover, this method evades the steganographic testing used by attackers unless they break the session key, which ensures the safety of the hidden information. An instruction protocol for hidden information serves as a basic tool in the proposed method. In this paper, the idea of implicit authentication in the MTI protocol family (the key agreement protocols constructed by Matsumoto, Takashima and Imai) is derived from an identity-based protocol without pairing. Moreover, the proposed protocol is provably secure in the CK model (Canetti-Krawczyk model) for text information hiding.
    Keywords: text information hiding; big text data; open channel.

  • Data residency as a service: a secure mechanism for storing data in the cloud   Order a copy of this article
    by K. Rajesh Rao, Ashalatha Nayak 
    Abstract: Recently, researchers have been working on cloud data assurance models to ensure that the data is in compliance with the policies. In such a model, the data is placed in the intended data centres without focusing on data residency issues. However, there are data residency issues, such as undesired data location, multi-jurisdiction laws, extraterritorial access and unauthorised access, that may violate the data privacy and confidentiality. So far, no framework has been developed to address these data residency issues and challenges that combine data security together with the work on compliance. Thus, in order to solve these issues, we have proposed a framework known as the Data Resident Storage (DRS) which is intended to achieve data residency protection, thus providing data residency as a service (DRaaS). DRaaS is a secure mechanism to protect the data in terms of data privacy and confidentiality, which are in compliance with data residency laws. In this paper, the model checking approach and cloud simulation environment are used for verification and validation of DRaaS, respectively. The finite state machine model is developed for the purpose of verifying the methodology in terms of data location and data access, which satisfies the identified specification. The validation is carried out in the CloudSim toolkit with the help of defined services. The developed test automation framework places the data only in data centres that are in compliance with data residency laws. Further, different scenarios are used to execute the experiment manually among end users having different roles, which can be used to validate data location along with data access. Finally, this framework can be hosted by the cloud service provider to provide DRaaS to the end users.
    Keywords: data residency; data residency protection; data residency laws; data privacy; data confidentiality.

  • Visual vocabulary tree-based partial-duplicate image retrieval for coverless image steganography   Order a copy of this article
    by Yan Mu, Zhili Zhou 
    Abstract: Traditional image steganographic approaches embed the secret message into covers by modifying their contents. The modification traces left in the cover therefore cause some damage to it, especially when embedding more messages; more importantly, the modification traces make successful steganalysis possible. In this paper, visual vocabulary tree-based partial-duplicate image retrieval for coverless image steganography is proposed to embed secret messages without any modification. The main idea of our method is to retrieve a set of duplicates of a given secret image as stego-images from a natural image database. The images in the database are divided into a number of image patches and then indexed by the features extracted from those patches. We search for duplicates of the secret image in the image database to obtain the stego-images, each of which shares one similar image patch with the secret image. When a receiver obtains those stego-images, our method can recover the secret image approximately by using the designed protocols. Experimental results show that our method not only resists existing steganalysis tools but also has high capacity.
    Keywords: coverless image steganography; robust hashing algorithm; vocabulary tree; image retrieval; stego-image; image database; high capacity.

  • Telecom customer clustering via glowworm swarm optimisation algorithm   Order a copy of this article
    by YanLi Liu, Mengyu Zhu 
    Abstract: The glowworm swarm optimisation (GSO) algorithm is a novel algorithm that simultaneously computes multiple optima of multimodal functions. Data clustering techniques are classification algorithms with a wide range of applications. In this paper, the GSO algorithm is used for telecom customer clustering. We extract customer consumption data by means of the RFM (Recency, Frequency, Monetary) model and cluster the standardised data automatically using the GSO algorithm's synchronous optimisation ability. Compared with the K-means clustering algorithm, the GSO approach can automatically determine the number of clusters, and the RFM model effectively reduces the size of the data to be processed. The results of the experiments demonstrate that the GSO-based clustering technique is promising for data clustering problems.
    Keywords: glowworm swarm optimisation; customers' subdivision; data clustering.
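
    The core GSO iteration updates each glowworm's luciferin from its fitness and moves the glowworm one step towards a brighter neighbour; a minimal sketch of these two steps (generic GSO, not the telecom-specific pipeline):

      #include <math.h>

      /* Luciferin update: l <- (1-rho)*l + gamma*fitness. */
      void gso_luciferin(double *luc, const double *fitness, int n,
                         double rho, double gamma)
      {
          for (int i = 0; i < n; i++)
              luc[i] = (1.0 - rho) * luc[i] + gamma * fitness[i];
      }

      /* Move a glowworm at x one step of size s towards a brighter
         neighbour at xj, in d dimensions. */
      void gso_move(double *x, const double *xj, int d, double s)
      {
          double norm = 0.0;
          for (int k = 0; k < d; k++)
              norm += (xj[k] - x[k]) * (xj[k] - x[k]);
          norm = sqrt(norm);
          if (norm == 0.0) return;     /* already at the neighbour */
          for (int k = 0; k < d; k++)
              x[k] += s * (xj[k] - x[k]) / norm;
      }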

  • Efficient algorithm for on-line data retrieval with one antenna in wireless networks   Order a copy of this article
    by Ping He, Shuli Luan 
    Abstract: Given a set of requested data items and a set of channels, the data retrieval problem is to find a data retrieval sequence for downloading all requested data items from these channels within a reasonable time. Most existing schemes are applied in an offline environment, in which the client has prior knowledge of the wireless data broadcast, such as the set of broadcast channels and the broadcast times of data items; in the online setting, this information is not known. This paper therefore proposes an online algorithm (MRLR) that selects the most recently and longest unretrieved channel for retrieving all requested data items, and achieves a competitive ratio of (k-1) against the optimal offline data retrieval algorithm, where k is the number of channels. To address the many redundant channel switches in MRLR, the paper proposes two online randomised marker algorithms (RM and MRM), which add two flags to mark the channel with a requested data item and the retrieved channel without a requested data item, respectively; RM and MRM achieve competitive ratios of (log(k-1)+k/2) and k/2, respectively. Comparing the competitive ratios and experimental results of the proposed online algorithms, we observe that the performance of online data retrieval in wireless networks is improved.
    Keywords: wireless data broadcast; on-line data retrieval; data schedule; indexing.

  • A novel Monte Carlo based neural network model for electricity load forecasting   Order a copy of this article
    by Binbin Yong, Zijian Xu, Jun Shen, Huaming Chen, Jianqing Wu, Fucun Li, Qingguo Zhou 
    Abstract: The rapid growth of electricity use over the past few decades greatly increases the need for accurate electricity load forecasting. However, despite a great number of studies, electricity load forecasting remains an enormous challenge owing to its complexity. Recently, developments in machine learning technologies in different research areas have demonstrated great advantages. General Vector Machine (GVM) is a new machine learning model that has proven very effective in time series prediction. In this article, we first review the basic concepts and implementation of GVM. We then apply it to electricity load forecasting, based on the electricity load dataset of Queensland, Australia, and present a detailed comparison with a traditional back-propagation (BP) neural network. To improve the load forecasting accuracy, we propose to train the GVM model using the weights-fixed method, the ReLU activation function, an efficient algorithm for reducing the training time, and an analysis of the influence of the parameter matrix β. Analysis of our approach on the historical Queensland electricity load dataset demonstrates that GVM achieves better forecasting results, which shows the strong potential of GVM for general electricity load forecasting.
    Keywords: electricity demand forecasting; BP neural network; general vector machine.
    DOI: 10.1504/IJHPCN.2018.10011508
     
  • An efficient 3D point clouds covariance descriptor for object classification with mismatching correction algorithm   Order a copy of this article
    by Heng Zhang, Bin Zhuang 
    Abstract: We introduce a new covariance descriptor combining object visual information (colour, gradient, depth, etc.) and geometric information (3D coordinates, normal vectors, Gaussian curvature, etc.) for a mobile robot with an RGB-D camera to process point cloud data. An improved mismatching correction algorithm is applied to correct mismatched feature points of 3D point clouds. This descriptor can quickly match the feature points of point clouds in the surrounding environment and realise object classification. Experimental results show that this descriptor is more compact and flexible than previous descriptors and greatly reduces the required storage space. At the same time, the instance and category recognition accuracies of the proposed descriptor reach 94.6% and 86.8%, respectively, which are higher than those of previous methods for object recognition in 3D point clouds.
    Keywords: object classification; point clouds; covariance descriptor; mismatching correction.

  • Parallel fast Fourier transform in SPMD style of CILK   Order a copy of this article
    by Tien-Hsiung Weng, Teng-Xian Wang, Meng-Yen Hsieh, Hai Jiang, Jun Shen, Kuan-Ching Li 
    Abstract: In this paper, we propose a parallel 1-D non-recursive fast Fourier transform (FFT) based on the conventional Cooley-Tukey algorithm, written in C and parallelised with CILK in SPMD (Single Program Multiple Data) style. We compare our code with a highly tuned parallel recursive FFT using CILK, implemented by Matteo Frigo and included in CILK package version 5.4.6. The experiments were run on a machine with four dual-core AMD Opteron 8200 CPUs. Our newly designed non-recursive FFT code is highly compact, and the experimental results show that the performance of our CILK FFT parallel code on this eight-core shared-memory machine is competitive.
    Keywords: FFT; SPMD; CILK.
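
    The serial skeleton of such a non-recursive FFT is the iterative radix-2 Cooley-Tukey transform: a bit-reversal permutation followed by log2(n) butterfly stages, whose independent inner loops are what an SPMD parallelisation distributes. A minimal serial C sketch (n a power of two; the CILK constructs are omitted):

      #include <complex.h>
      #include <math.h>

      void fft_iterative(double complex *a, int n)
      {
          const double PI = 3.14159265358979323846;
          /* bit-reversal permutation */
          for (int i = 1, j = 0; i < n; i++) {
              int bit = n >> 1;
              for (; j & bit; bit >>= 1) j ^= bit;
              j |= bit;
              if (i < j) { double complex t = a[i]; a[i] = a[j]; a[j] = t; }
          }
          /* butterfly stages of doubling length */
          for (int len = 2; len <= n; len <<= 1) {
              double complex wlen = cexp(-2.0 * PI * I / len);
              for (int i = 0; i < n; i += len) {
                  double complex w = 1.0;
                  for (int k = 0; k < len / 2; k++) {
                      double complex u = a[i + k];
                      double complex v = a[i + k + len / 2] * w;
                      a[i + k]           = u + v;
                      a[i + k + len / 2] = u - v;
                      w *= wlen;
                  }
              }
          }
      }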

  • Graph-based model and algorithm for minimising big data movement in a cloud environment   Order a copy of this article
    by Yassir Samadi, Mostapha Zbakh, Claude Tadonki 
    Abstract: In this paper, we discuss load balancing and data placement strategies in heterogeneous cloud environments. Load balancing is crucial in large-scale data processing applications, especially in a distributed heterogeneous context such as the cloud. The main goal of data placement strategies is to improve overall performance by reducing data movements among the participating datacentres, which are geographically distributed, taking into account their characteristics such as processing speed, storage capacity and data dependency. Load balancing and efficient data placement on cloud systems are critical problems that are difficult to cope with simultaneously, especially in emerging heterogeneous clusters. In this context, we propose a threshold-based load balancing algorithm, which first balances the load between datacentres and afterwards minimises the overhead of data exchanges. The proposed approach is divided into three phases. First, the dependencies between the datasets are identified. Second, the load threshold of each datacentre is estimated based on its processing speed and storage capacity. Third, the load balancing between the datacentres is managed through the threshold parameters. The heterogeneity of the datacentres and the dependencies between the datasets are both taken into account. Our experimental results show that our approach can efficiently reduce the frequency of data movement while keeping a good load balance between the datacentres.
    Keywords: graph model; big data; cloud computing; load balancing; data placement; data dependency.
    DOI: 10.1504/IJHPCN.2018.10013848
     
  • Processed RGB-D SLAM based on HOG-Man algorithm   Order a copy of this article
    by Yanli Liu, Mengyu Zhu 
    Abstract: SLAM (Simultaneous Localization and Mapping) is the key to achieving autonomous control of robots and a significant topic in the field of mobile robotics. Aiming at 3D modelling of complex indoor environments, this paper presents a fast three-dimensional SLAM method for mobile robots. On the basis of the HOG-Man algorithm, which is the core of the RGB-D SLAM algorithm, open-source software combining an RGB-D sensor such as Kinect with a wheeled mobile robot is used to obtain odometry data; location information is then matched through image feature extraction, and finally the map is optimised by the HOG-Man algorithm. The feasibility and effectiveness of the proposed method are verified by experiments in an indoor environment.
    Keywords: RGB-D SLAM; mobile robot; HOG-Man algorithm; Kinect.

  • A new revocable reputation evaluation system based on blockchain   Order a copy of this article
    by Haoxuan Li, Hui Huang, Shichong Tan, Ning Zhang, Xiaotong Fu 
    Abstract: Reputation evaluation systems, which publish and analyse evaluations, are an important influencing factor for users and sellers in online business. Traditional reputation evaluation systems always require a third party to perform the analysis and publication. However, a third party often exposes the identities of users and leaks their information. As far as we know, all existing revocable reputation evaluation systems are based on the third-party model. In this paper, we present a new reputation evaluation system based on blockchain. Compared with traditional reputation evaluation systems, our system removes the third party and allows users to modify their own evaluation information; moreover, user privacy is protected. The experimental performance demonstrates that the overhead of the system is acceptable and that the system is feasible and efficient.
    Keywords: reputation system; cryptography protocol; blockchain; linkable ring signature; smart contract.

  • A comparative study on automatic parallelisation tools and methods to improve their usage   Order a copy of this article
    by S. Prema, R. Jehadeesan, B.K. Panigrahi 
    Abstract: Automatic parallelisation assists users in parallelising the serial code even without acquiring knowledge about the application. Auto-parallelisers focus on loop-level parallelisation and dependence analysis. Apart from dependences present within the loop, specific coding complexities make the code not amenable to parallelisation owing to the limitations of the parallelising tool. To overcome these programming complications, we can explore the possibility of minimal manual intervention to make the code acquiescent for parallelisation. This paper provides a study of currently available auto-parallelisers and their competence on parallelisation of different programming features. The pitfalls faced by these tools are unveiled and categorised for detailed analysis. A solution-based approach in the form of coding changes circumvents the pitfalls and achieves efficient parallelisation. It also underlines the overall capability of the tools in supporting programming features during parallelisation.
    Keywords: automatic parallelisation; OpenMP parallel programming; coding complexities; loop parallelisation.
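
    A typical instance of the pitfalls and minimal manual fixes discussed above: possible pointer aliasing prevents a tool from proving a loop dependence-free, and declaring the pointers restrict makes the loop amenable to parallelisation (a hypothetical example, not one of the paper's benchmarks):

      /* Without restrict, out and in may alias, so an auto-paralleliser
         must assume a loop-carried dependence and keep the loop serial.
         The C99 restrict qualifier asserts the buffers do not overlap. */
      void scale(float *restrict out, const float *restrict in,
                 float k, int n)
      {
          for (int i = 0; i < n; i++)
              out[i] = k * in[i];      /* now provably dependence-free */
      }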

  • FollowMe: a mobile crowd-sensing platform for spatial-temporal data sharing   Order a copy of this article
    by Mingzhong Wang 
    Abstract: Mobile crowd sensing is a promising solution for massive data collection with public participation. Besides the challenges of user incentives and of diversified data sources and quality, the requirement of sharing spatial-temporal data makes the privacy concerns of contributors one of the priorities in the design and implementation of a sound crowdsourcing platform. In this paper, FollowMe is introduced as a use case of a mobile crowd sensing platform to explain possible design guidelines and solutions to address these challenges. The incentive mechanisms are discussed according to both the quantity and quality of users' contributions. Then, a k-anonymity based solution is applied to protect contributors' privacy in both scenarios of trustworthy and untrustworthy crowdsourcers. Thereafter, a reputation-based filtering solution is proposed to detect fake or malicious reports, and finally a density-based clustering algorithm is introduced to find hotspots, which can help the prediction of future events. FollowMe is designed around the virtual world of the popular mobile game Pokémon Go.
    Keywords: mobile crowd sensing; spatial-temporal data; crowdsourcing; privacy; k-anonymity; hotspot; reputation; incentive mechanism.

  • A novel graph compression algorithm for data-intensive scientific networks   Order a copy of this article
    by Xiao Lin, Haizhou Du, Shenshen Chen 
    Abstract: As one of the world's leading scientific and data-intensive computing grids, the Worldwide LHC Computing Grid (WLCG) faces the challenge of improving its computing efficiency and network utilisation. To achieve this goal, WLCG needs an important piece of information: the network topology graphs of participating computing grids. Directly collecting such information from all of the grids, however, would cause high communication overhead and raise many security issues. In this paper, we address these issues by proposing a novel algorithm to compress such a large network topology into a compact, equivalent network topology. We formally define our problem, develop a novel, efficient topology-compression algorithm and evaluate its performance using real-world network topologies. Our results show that our algorithm not only achieves a much higher topology compression ratio than state-of-the-art topology transformation algorithms, but also leads to up to 100x reduction in computation time.
    Keywords: network topology; data-intensive; compression; shortest path tree; weighted graph.

  • Lattice-based identity-based ring signature without trapdoors   Order a copy of this article
    by Yongxuan Sang, Zhongwen Li, Lili Zhang 
    Abstract: So far, most ring signature schemes rely on hard number-theoretic problems, such as the discrete logarithm and bilinear pairings. Unfortunately, these underlying number-theoretic problems will be solvable in the post-quantum era. Lattice-based cryptography has recently become a research hotspot, owing to its implementation simplicity and provable security reductions. When the hash-and-sign signature scheme was constructed based on the hardness of worst-case lattice problems, provably secure lattice-based ring signature schemes were finally constructed. However, the hash-and-sign ring signatures were rather inefficient (with megabytes-long signatures). In this paper, we propose an alternative method for constructing lattice-based, identity-based ring signature schemes that does not use the hash-and-sign methodology. In the random oracle model, the proposed signature scheme, based on a hard problem in general lattices, is unforgeable and anonymous. Compared with previous instantiations of hash-and-sign ring signature schemes, the lengths of the secret key, public key and signatures in the proposed scheme are much shorter. The signing algorithm is quite simple, requiring only matrix-vector multiplications and rejection sampling.
    Keywords: ring signature; lattice; rejection sampling; anonymity; unforgeability.
    DOI: 10.1504/IJHPCN.2018.10015388
     
  • Outlier detection of time series with a novel hybrid method in cloud computing   Order a copy of this article
    by Qi Liu, Zhen Wang, Xiaodong Liu, Nigel Linge 
    Abstract: With the development of science and technology, cloud computing has attracted increasing attention in different fields. Meanwhile, outlier detection for data mining in cloud computing plays an ever more significant role in many research domains, and massive research effort has been devoted to outlier detection, including distance-based, density-based and clustering-based methods. However, the existing methods require high computation time. Therefore, an improved outlier detection algorithm with higher detection performance is presented. The proposed method improves a spectral clustering algorithm (SKM++) to make it fit for handling outliers; it then prunes the data to reduce computational complexity and combines the distance-based Manhattan distance (distm) to obtain an outlier score. Finally, the method confirms the outliers by extreme-value analysis. This paper validates the presented method by experiments on real data collected by sensors and by comparison against existing approaches. The experimental results show that the proposed method improves on the existing approaches.
    Keywords: cloud computing; data mining; outlier detection; spectral clustering; Manhattan distance.
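    To make the distance-based step concrete, a toy sketch (an illustrative simplification, not the paper's SKM++ pipeline): score each point by its Manhattan distance (distm) to the cluster centroid, then confirm outliers with a simple extreme-value cut-off:

      #include <stdio.h>
      #include <math.h>

      #define N 8
      #define D 2

      /* Toy sketch: Manhattan distance of each point to the centroid,
         then a mean + 2*sigma cut-off as the extreme-value test. */
      int main(void) {
          double x[N][D] = {{1,1},{1,2},{2,1},{2,2},
                            {1.5,1.5},{2,1.5},{1,1.5},{9,9}};
          double c[D] = {0,0};
          for (int i = 0; i < N; i++)
              for (int d = 0; d < D; d++) c[d] += x[i][d] / N;

          double score[N], mean = 0, var = 0;
          for (int i = 0; i < N; i++) {
              score[i] = 0;
              for (int d = 0; d < D; d++) score[i] += fabs(x[i][d] - c[d]);
              mean += score[i] / N;
          }
          for (int i = 0; i < N; i++)
              var += (score[i] - mean) * (score[i] - mean) / N;

          double cut = mean + 2.0 * sqrt(var);
          for (int i = 0; i < N; i++)
              if (score[i] > cut)
                  printf("point %d is an outlier (score %.2f)\n", i, score[i]);
          return 0;
      }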

  • A priority-based queuing system for P2P-SIP call communications control   Order a copy of this article
    by Mourad Amad, Djamil Aissani, Razika Bouiche, Nouria Madi 
    Abstract: Regarding the shortcomings of the fundamental existing solutions for VoIP communications (e.g. SIP) based on centralisation, both academia and industry have initiated research projects focused on the integration of P2P paradigms into SIP communication systems (P2P-SIP). P2P-SIP builds an overlay network to provide efficient, interoperable and flexible SIP-based services. In this paper, we propose a new model for critical calls, which takes into consideration the priority of some specific requests (e.g. emergency calls). The proposed model is generic with regard to the underlying P2P and physical architectures. For illustration purposes, we consider Gnutella, as a representative of unstructured P2P networks, and Chord, as a representative of structured P2P networks. In order to validate the proposed solution, an M/M/1 queuing model is considered. Performance evaluations show that the preliminary results are globally satisfactory, and that our proposed model, under certain conditions, is relevant.
    Keywords: VoIP; P2P-SIP; calls control; priority; queuing systems.
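    For reference, a textbook non-preemptive two-class priority M/M/1 result (not taken from the paper) quantifies the effect of such prioritisation: with arrival rates \lambda_1 (critical calls) and \lambda_2, common service rate \mu, loads \rho_i = \lambda_i/\mu and mean residual work R = (\lambda_1 + \lambda_2)/\mu^2, the mean queueing delays are

      W_{q,1} = \frac{R}{1-\rho_1}, \qquad
      W_{q,2} = \frac{R}{(1-\rho_1)\,(1-\rho_1-\rho_2)},

    so the critical class is shielded from the low-priority load.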

  • Budget-aware task scheduling technique for efficient management of cloud resources   Order a copy of this article
    by Mokhtar A. Alworafi, Atyaf Dhari, Sheren A. El-Booz, Suresha Mallappa 
    Abstract: Cloud computing technology offers many services using the pay-per-use concept, where the user gets to specify constraints such as the budget. Task scheduling algorithms are therefore the most preferred option under a budget constraint, which is used to improve metrics such as makespan and cost. In this paper, we propose a Budget-Aware Scheduling (BAS) model to schedule tasks based on the budget constraint. At first, the VMs which meet the budget are labelled and the task priority is determined. Next, the task attributes are checked and tasks are assigned to the resources that meet the budget constraint, to keep the makespan as low as possible with minimal cost of resource usage. The experiments demonstrate that the proposed model outperforms other algorithms by reducing the average makespan, the mean response time and the cost of resources, while increasing resource usage and provider profit.
    Keywords: cloud computing; scheduling; budget constraint; budget-aware scheduling; makespan; provider profit.
    DOI: 10.1504/IJHPCN.2018.10014669
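    A minimal sketch of budget-constrained assignment in this spirit (the data, the greedy earliest-finish policy and all names are illustrative, not the BAS algorithm itself): each task goes to the VM with the earliest finish time among those whose cost still fits the remaining budget:

      #include <stdio.h>

      #define NT 4   /* tasks */
      #define NV 3   /* virtual machines */

      int main(void) {
          double len[NT]   = {40, 30, 20, 10};   /* task lengths, longest first */
          double mips[NV]  = {4, 2, 1};          /* VM speeds                   */
          double price[NV] = {0.9, 0.4, 0.1};    /* VM cost per length unit     */
          double budget = 60.0, finish[NV] = {0};

          for (int t = 0; t < NT; t++) {
              int best = -1; double bestEft = 0;
              for (int v = 0; v < NV; v++) {
                  if (price[v] * len[t] > budget) continue;   /* over budget  */
                  double eft = finish[v] + len[t] / mips[v];  /* finish time  */
                  if (best < 0 || eft < bestEft) { best = v; bestEft = eft; }
              }
              if (best < 0) { printf("task %d unschedulable within budget\n", t); continue; }
              budget      -= price[best] * len[t];
              finish[best] = bestEft;
              printf("task %d -> VM %d\n", t, best);
          }
          double makespan = 0;
          for (int v = 0; v < NV; v++) if (finish[v] > makespan) makespan = finish[v];
          printf("makespan %.1f, budget left %.2f\n", makespan, budget);
          return 0;
      }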
     
  • Sparse reconstruction of piezoelectric signal for phased array structural health monitoring   Order a copy of this article
    by Yajie Sun, Feihong Gu, Sai Ji 
    Abstract: Structural health monitoring technology has been widely used in the detection and identification of damage in plate structures. Ultrasonic phased array technology has become an important method for structural health monitoring because of its flexible beam scanning and strong focusing performance. However, a large number of phased array signals is produced, which creates difficulty in storage, transmission and processing. Provided the signal is sparse, compressive sensing theory can achieve signal acquisition at a much lower sampling rate than the traditional Nyquist sampling theorem requires. Firstly, a sparse orthogonal transformation is used to obtain a sparse representation. Then, a measurement matrix is used for the projection observation, and a reconstruction algorithm is used for sparse reconstruction. In this paper, experimental verification on an antirust aluminium plate is carried out. The experiment shows that the proposed method is useful for reconstructing phased array structural health monitoring signals.
    Keywords: structural health monitoring; ultrasonic phased array; compressive sensing; matching pursuit algorithm.
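    The reconstruction step can be any greedy pursuit; below is a toy (non-orthogonal) matching pursuit over an illustrative unit-norm dictionary, with sizes and data assumed for the example:

      #include <stdio.h>
      #include <math.h>

      #define M 3        /* measurements     */
      #define K 4        /* dictionary atoms */
      #define ITERS 3

      int main(void) {
          /* unit-norm dictionary atoms (columns); 0.5774 ~ 1/sqrt(3) */
          double D[M][K] = {
              {1, 0, 0, 0.5774},
              {0, 1, 0, 0.5774},
              {0, 0, 1, 0.5774},
          };
          double y[M] = {2.0, 0.1, 0.1};   /* measurement vector */
          double r[M], coef[K] = {0};
          for (int i = 0; i < M; i++) r[i] = y[i];

          for (int it = 0; it < ITERS; it++) {
              int best = 0; double bestc = 0;
              for (int k = 0; k < K; k++) {        /* most correlated atom */
                  double c = 0;
                  for (int i = 0; i < M; i++) c += r[i] * D[i][k];
                  if (fabs(c) > fabs(bestc)) { bestc = c; best = k; }
              }
              coef[best] += bestc;                 /* record coefficient   */
              for (int i = 0; i < M; i++)          /* peel off contribution */
                  r[i] -= bestc * D[i][best];
          }
          for (int k = 0; k < K; k++) printf("coef[%d] = %.3f\n", k, coef[k]);
          return 0;
      }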

  • Malicious webpages detection using feature selection techniques and machine learning   Order a copy of this article
    by Dharmaraj Patil, Jayantrao Patil 
    Abstract: Today, the popularity of the World Wide Web (WWW) and its usability in online banking, e-commerce and social networking have attracted cyber-criminals who exploit vulnerabilities for illegitimate benefit. Attackers use web pages to launch different types of attack, such as drive-by downloads, phishing, spamming and malware distribution, to exploit legitimate users and misuse their identities. In recent years, many researchers have provided significant and effective solutions to detect malicious web pages; however, owing to the ever-changing nature of cyber attacks, there are still many open issues. This paper proposes a methodology for the effective detection of malicious web pages using feature selection methods and machine learning classifiers. Our methodology consists of three modules: 1) feature selection; 2) training; and 3) classification. To evaluate the proposed methodology, six state-of-the-art feature selection methods and eight supervised machine learning classifiers are used, with experiments performed on a balanced binary dataset. It is found that, using feature selection, the classifiers achieved significant detection accuracy of 94-99%, an error rate of 0.19-5.55%, an FPR of 0.006-0.094, an FNR of 0.000-0.013 and minimum system overhead. Our multi-model system, using a majority-voting classifier and the Wrapper+Naive Bayes feature selection method with the GreedyStepwise search technique, achieved the highest accuracy of 99.15%, an FPR of 0.017 and an FNR of 0.000 using only 15 features. The experimental analysis shows that our approach outperforms 18 well-known anti-virus and anti-malware software products, with an overall detection accuracy of 99.15%.
    Keywords: malicious web pages; feature selection; machine learning; web security; cyber security.

  • Greedily assemble tandem repeats for next generation sequences   Order a copy of this article
    by Yongqing Jiang, Jinhua Lu, Jingyu Hou, Wanlei Zhou 
    Abstract: Eukaryotic genomes contain large volumes of intronic and intergenic regions in which repetitive sequences are abundant. These repetitive sequences pose challenges in the genomic assignment of short reads generated through next-generation sequencing and are often excluded from analyses, thus losing valuable genomic information. Here we present a method, known as TRA (Tandem Repeat Assembler), for the assembly of repetitive sequences by constructing contigs directly from paired-end reads. Using an experimentally acquired dataset for human chromosome 14, tandem repeats > 200 bp were assembled. Alignment of the contigs to the human genome reference (GRCh38) revealed that 84.3% of tandem repetitive regions were correctly covered. For tandem repeats, this method outperformed state-of-the-art assemblers, generating a correct N50 of contigs up to 512 bp.
    Keywords: tandem repeat; assembly; NGS.

  • A novel localised network coding-based overhearing strategy   Order a copy of this article
    by Zuoting Ning, Lan He, Dafang Zhang, Kun Xie 
    Abstract: In recent years, more and more researchers have studied wireless overhearing with network coding, as network coding is a very effective approach to improve network throughput and reduce end-to-end delay. However, the existing approaches cannot thoroughly solve the problem of how to deal with newly overheard packets when the buffer is full; meanwhile, the coding node does not schedule the packets in the coding queue according to the packet information in the overhearing buffer. These methodologies therefore lack flexibility and demand quite a few assumptions. To address these limitations, we propose a new network coding overhearing strategy based on a data packet switching and scheduling (DPSS) algorithm. First, when the overhearing buffer is full and the sink nodes have overheard new packets, the sink nodes drop the newly overheard packets but record their IDs. Second, sink nodes report the packet information to the coding node, which schedules the packets in the coding queue for ease of encoding. Finally, sink nodes delete the packets that have been used for decoding, and request the previously dropped packets when the decoding ratio reaches a threshold. Theoretical analysis and simulation demonstrate that, compared with traditional overhearing policies, our scheme achieves a higher coding ratio and lower delay.
    Keywords: network coding; data packet switching and scheduling; overhearing.
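    The coding idea underneath such schemes fits in a few lines (a generic XOR example, not the DPSS algorithm itself): a relay XORs two packets, and a sink that overheard one of them recovers the other:

      #include <stdio.h>

      int main(void) {
          unsigned char p1[8] = "alice..", p2[8] = "bob....";
          unsigned char coded[8], decoded[8];

          for (int i = 0; i < 8; i++)      /* relay encodes: p1 XOR p2 */
              coded[i] = p1[i] ^ p2[i];

          /* a sink that overheard p1 decodes p2 from the coded packet */
          for (int i = 0; i < 8; i++)
              decoded[i] = coded[i] ^ p1[i];

          printf("recovered: %.8s\n", decoded);
          return 0;
      }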

  • GeaBase: a high-performance distributed graph database for industry-scale applications   Order a copy of this article
    by Zhisong Fu, Zhengwei Wu, Houyi Li, Yize Li, Xiaojie Chen, Xiaomeng Ye, Benquan Yu, Xi Hu 
    Abstract: Graph analytics has been gaining traction rapidly in the past few years. It has a wide array of application areas in industry, ranging from e-commerce, social networks and recommendation systems to fraud detection and virtually any problem that requires insight into data connections, not just the data itself. In this paper, we present GeaBase, a new distributed graph database that provides the capability to store and analyse graph-structured data in real time at massive scale. We describe the details of the system and its implementation, including a novel update architecture, called Update Center (UC), and a new language that is suitable for both graph traversal and analytics. We also compare the performance of GeaBase to the widely used open-source graph database Titan. Experiments show that GeaBase is up to 182x faster than Titan in our testing scenarios, and achieves 22x higher throughput on social network workloads.
    Keywords: graph database; distributed database; high performance.

  • Parallel big image data retrieval by conceptualised clustering and un-conceptualised clustering   Order a copy of this article
    by Ja-Hwung Su, Chu-Yu Chin, Jyun-Yu Li, Vincent S. Tseng 
    Abstract: Content-based image retrieval is a hot topic that has been studied for a few decades. Although a number of recent studies have addressed this topic, it is still hard to achieve high retrieval performance for big image data. To address this issue, in this paper we propose a parallel content-based image retrieval method that efficiently retrieves relevant images by un-conceptualised clustering and conceptualised clustering. For un-conceptualised clustering, the un-conceptualised image data is automatically divided into a number of sets, while the conceptualised image data is divided into multiple sets by conceptualised clustering. Based on the clustering index, a depth-first-search strategy is performed to retrieve the relevant images by parallel comparisons. Through experimental evaluations on a large image dataset, the proposed approach is shown to improve the performance of content-based image retrieval substantially in terms of efficiency.
    Keywords: content-based image retrieval; un-conceptualised clustering; conceptualised clustering; big data; parallel computation.

  • Exponential stability of big data in networked control systems for a class of uncertain time-delay and packet dropout   Order a copy of this article
    by Huaiyu Zheng, Shigang Liu, Fengjie Sun 
    Abstract: This paper studies the problem of exponential stability for a networked control system with uncertain time-delay and packet dropout. The controller gain, designed to obtain a better result, is assumed to have additive and multiplicative gain variations. The network-induced time-delay is assumed to be uncertain but no longer than one sampling period, and packet dropout may occur. Using Lyapunov theory and a linear matrix inequality formulation, we obtain a sufficient condition for exponential stability of the asynchronous dynamical system for all admissible uncertainties and packet dropouts. Finally, a simulation example illustrates the effectiveness of the approach.
    Keywords: networked control systems; data packet dropout; Lyapunov function; linear matrix inequalities; exponential stability.

Special Issue on: ICNC-FSKD 2015 Parallel Computing and Signal Processing

  • 1.25 Gbits/s-message experimental transmission using chaos-based fibre-optic secure communications over 143 km   Order a copy of this article
    by Hongxi Yin, Qingchun Zhao, Dongjiao Xu, Xiaolei Chen, Ying Chang, Hehe Yue, Nan Zhao 
    Abstract: Chaotic optical secure communication (COSC) is a high-speed hardware encryption technique at the physical layer. For practical applications, high-speed, long-haul message transmission is always the goal. In this paper, we report an experimental demonstration of long-haul COSC, where the bit rate reaches 1.25 Gbit/s and the transmission distance reaches 143 km. Moreover, low cost is ensured by using off-the-shelf optical components, and no dispersion-compensating fibre (DCF) or forward error correction (FEC) is required. To the best of our knowledge, this is the longest transmission distance experimentally demonstrated for a COSC system. Our results show that high-quality chaotic synchronisation can be maintained in both the time and frequency domains, even after 143 km of transmission; the bandwidth of the transmitter is enlarged by external optical injection, which enables 2.5 Gbit/s secure message transmission up to 25 km. In addition, the effects of device parameters on the COSC are discussed for supplementary detail.
    Keywords: long-haul; high-speed; chaotic optical secure communications; semiconductor laser.

  • Optimisation of ANFIS using mine blast algorithm for predicting strength of Malaysian small and medium enterprises   Order a copy of this article
    by Kashif Hussain, Mohd. Najib Mohd. Salleh, Abdul Mutalib 
    Abstract: The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a popular fuzzy inference system, widely applied in business and economics. Many studies have trained ANFIS parameters using metaheuristic algorithms, but very few have tried optimising its rule-base. The auto-generated rules, using grid partitioning, comprise both potential and weak rules, increasing the complexity of the ANFIS architecture as well as the computational cost. Therefore, pruning less-contributing or non-contributing rules would optimise the rule-base. However, reducing the complexity while increasing the accuracy of the ANFIS network needs an effective training and optimisation mechanism. This paper proposes an efficient technique for optimising the ANFIS rule-base without compromising on accuracy. The newly developed Mine Blast Algorithm (MBA) is used to optimise ANFIS, and the ANFIS optimised by MBA is employed to predict the strength of Malaysian small and medium enterprises (SMEs). Results show that the rule-base and parameters trained by MBA are more efficient than those optimised by the Genetic Algorithm (GA) and Particle Swarm Optimisation (PSO).
    Keywords: ANFIS; neuro-fuzzy; fuzzy system; mine blast algorithm; rule optimisation; SME.

  • Spectrum prediction and aggregation strategy in multi-user cooperative relay networks   Order a copy of this article
    by Yifei Wei, Qiao Li, Xia Gong, Da Guo, Yong Zhang 
    Abstract: In order to meet the constantly increasing demand by mobile terminals for higher data rates with limited wireless spectrum resources, cooperative relay and spectrum aggregation technologies have attracted much attention owing to their capacity to improve spectrum efficiency. Combining the two, in this paper we propose a spectrum aggregation strategy based on Markov prediction of the spectrum state for cooperative relay networks in a multi-user, multi-relay scenario, aiming to guarantee the user channel capacity and maximise the network throughput. The strategy is executed in two steps: first, the state of the spectrum is predicted with a Markov model; then, based on the prediction results, spectrum is aggregated. Simulation results show that the spectrum prediction process can observably lower the outage rate, and the spectrum aggregation strategy can greatly improve the network throughput.
    Keywords: Markov model; spectrum aggregation; multi-user; cooperative relay; outage probability; network throughput.
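    A toy sketch of the prediction step (the two-state channel model and transition probabilities are illustrative assumptions): predict each channel's most probable next state from an empirical one-step transition matrix, and aggregate only channels predicted to be idle:

      #include <stdio.h>

      #define S 2   /* channel states: 0 = idle, 1 = busy */

      int main(void) {
          double P[S][S] = {{0.8, 0.2},   /* P[i][j] = Pr(next = j | now = i) */
                            {0.4, 0.6}};
          int now[4] = {0, 1, 0, 1};      /* current states of four channels */

          for (int ch = 0; ch < 4; ch++) {
              int pred = 0;               /* most probable next state */
              for (int j = 1; j < S; j++)
                  if (P[now[ch]][j] > P[now[ch]][pred]) pred = j;
              printf("channel %d: now %s, predicted %s\n", ch,
                     now[ch] ? "busy" : "idle",
                     pred ? "busy (skip)" : "idle (aggregate)");
          }
          return 0;
      }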

Special Issue on: ICICS 2016 Next Generation Cloud, Mobile Cloud, Mobile Edge Computing and Internet of Things Systems and Networking

  • Classifying environmental monitoring data to improve wireless sensor network management   Order a copy of this article
    by Emad Alsukhni, Shayma Almallahi 
    Abstract: Wireless sensor networks are considered the most useful way of collecting data and monitoring the environment. Owing to the large amount of data that these networks produce, data-mining techniques are required to extract interesting knowledge. This paper presents the effectiveness of using data-mining techniques to discover knowledge that can improve the management of wireless sensor networks in environmental monitoring. Data reduction in a network increases the network's lifetime. The classification model can predict the effect of sensed data, which is used to reduce the number of readings reported to the sink, in order to improve wireless sensor network management. In this paper, we demonstrate the efficiency and accuracy of data-mining classifiers in predicting the effect of sensed data. The results show that the accuracy of the J48 classification model, Multilayer Perceptron and REPTree classifiers reached 90%. Using the classification model, the number of reported readings decreased by 37%. Hence, this significant reduction increases the wireless sensor network's lifetime by reducing the consumed energy, i.e., the total energy dissipated.
    Keywords: classifying environmental monitoring data; data mining; wireless sensor network management; data reduction; energy consumption.

  • Context-aware latency reduction protocol for secure encryption and decryption   Order a copy of this article
    by Muder Almiani, Abdul Razaque, Toufik Aidja, Ayman Al-Dmour 
    Abstract: Security is one of the biggest challenges for data communication. As the use of mobile phones has increased, people have started using several mobile applications to handle the internet of things. On the other hand, the frequent use of mobile phones in business has drawn the attention of researchers to protecting manufacturers and customers. To handle these issues, several paradigms for encrypting and decrypting outsourced data have been proposed. However, new threats and challenges keep arising for researchers, owing to the launch of new attack methods and the malicious actions of adversaries. Although manufacturers have implemented industry standards to protect customers' privacy, the existing standards do not provide a comfortable zone for customers, particularly regarding the latency of the emerging applications used in the internet of things. In this paper, we introduce a context-aware security (ConSec) protocol that supports internet of things applications by reducing the latency of encryption and decryption. Furthermore, elliptic curve cryptography is used to fully secure the encryption and decryption processes. The proposed method is implemented on the Java platform, and the results are verified and compared with a full-disk method from the latency perspective.
    Keywords: security; elliptic curve cryptography; mobile platform; encryption; decryption; context aware system.

  • A power-controlled interference management mechanism for femtocell-based networks   Order a copy of this article
    by Haythem Bany Salameh, Rawan Shabbar, Ahmad Al-shamali 
    Abstract: Femtocell deployment is one of the promising solutions to meet the increasing demand for wireless services and applications. By deploying femtocells within already-existing macrocells, operators can improve indoor coverage and increase both spectral efficiency and data rate. This improvement is achieved by reusing the spectrum assigned to the macrocell and by being closer to the user. However, femtocell deployment faces many challenges, one of the most difficult being interference management. In this paper, a power-controlled channel assignment algorithm to manage the interference between femtocell and macrocell networks is proposed. The proposed algorithm allows frequency reuse among femtocell users to provide better throughput performance. Simulation results show an improvement in throughput of up to 90% compared with previous models.
    Keywords: femtocell; macrocell.
    DOI: 10.1504/IJHPCN.2016.10015432
     
  • Exploring the relationships between web accessibility, web traffic, and university rankings: a case study of Jordanian universities   Order a copy of this article
    by Mohammed Al-Kabi 
    Abstract: Most statistics state that disabled people constitute approximately 20% of the world population; therefore, webmasters should consider this part of the population when they design and implement their websites, by following the Web Content Accessibility Guidelines. This paper aims to explore the relationships between web accessibility, web traffic and university rankings, using the metrics of 27 Jordanian university websites as a case study. Objective evaluations are conducted using a number of online tools, an extensive analysis of the tool outputs is presented, and the relationships among these outputs are explored. The study represents a longitudinal overview of the accessibility of Jordanian university websites, re-examining the relationships between web accessibility, web traffic and university rankings, as well as the differences between the accessibility of private and public Jordanian universities; its scope is therefore larger than that of earlier publications.
    Keywords: university websites; accessibility guidelines; WCAG 1.0 and WCAG 2.0; web accessibility; web traffic; website popularity; university rankings; automated evaluation.
    DOI: 10.1504/IJHPCN.2016.10015584
     

Special Issue on: AINA'2016 Cloud Computing Projects and Initiatives

  • Transaction management across data stores   Order a copy of this article
    by Marta Patino 
    Abstract: Companies have evolved from a world where they only had SQL databases to a world where they use different kinds of data store, such as key-value data stores, document-oriented data stores and graph databases. They have introduced this diversity of persistence models because different NoSQL technologies bring different data models with associated query languages and/or APIs. However, they are now confronted with a problem: their data are scattered across different data stores. When a business action requires an update, the data involved may reside in different data stores and are subject to inconsistencies in the event of failure and/or concurrent access. These inconsistencies appear because the transactional consistency guaranteed by traditional SQL databases is guaranteed neither within NoSQL data stores nor across data stores and databases. CoherentPaaS remedies this need: it provides an ultra-scalable transactional management layer that can be integrated with any data store with multi-versioning capabilities. The layer has been integrated with six different data stores: three NoSQL data stores and three SQL-like databases. In this paper, we describe this generic ultra-scalable transactional management layer, focusing on its API and on how it can be integrated in different ways with different data stores and databases.
    Keywords: data management; NoSQL; consistency.

  • Enabling and monitoring platform for cloud-based application   Order a copy of this article
    by Silviu Panica, Bogdan Irimie, Dana Petcu 
    Abstract: The deployment of applications able to consume cloud infrastructure services is currently a tedious process, especially when it must be repeated often during the application development phase. Furthermore, the deployed applications need to be monitored in order to detect possible anomalies and act on them. We present a technical solution that simplifies the deployment process, making it almost fully automated, and that offers monitoring services for the deployed applications. Its integration as a module of an open-source platform for enforcing security controls by its users is also discussed.
    Keywords: cloud computing; enabling platform; automatic deployment; distributed monitoring.

  • Exploring the complete data path for data interoperability in cyber-physical systems   Order a copy of this article
    by Athanasios Kiourtis, Argyro Mavrogiorgou, Dimosthenis Kyriazis, Ilias Maglogiannis, Marinos Themistocleous 
    Abstract: The amount of digital information increases tenfold every year, owing to the exponential growth of cyber-physical systems (CPS) and of real and virtual internet-connected sources. Most research focuses on data processing and interconnection, which raises a question concerning the interoperable use of data: if data is efficiently processed, how can unknown data be used in an application of a different nature? This paper presents a three-step approach to this question. Following the data lifecycle, a known CPS's dataset is first stored in a domain-specific language, then translated into a domain-agnostic language, and finally, using the fitting function of an ANN, compared with an unknown dataset, resulting in the translation of the unknown dataset into the first dataset's domain. A scenario of this approach is provided, analysing the data interoperability challenges and needs emerging from today's Internet of Everything evolution and studying the fields of data annotation, semantics, modelling and characterisation.
    Keywords: cyber-physical systems; semantics; interoperability; domain agnostic; domain specific; data path; data lifecycle.

  • Monitoring and management of a cloud application within a federation of cloud providers   Order a copy of this article
    by Rocco Aversa, Luca Tasquier 
    Abstract: Cloud federation is an emerging computing model where multiple resources from independent cloud providers are leveraged to create large-scale distributed virtual computing clusters, operating as if within a single cloud organisation. This concept of service aggregation is characterised by interoperability features, which address several problems of inter-cloud collaboration, such as vendor lock-in, and approaches challenges like performance and disaster recovery through methods such as co-location and geographic distribution. One of the main issues within a cloud federation is the monitoring of an application deployed on resources coming from the different vendors belonging to the federation. In this work we present an agent-based architecture, and its prototype implementation, that aims at monitoring the user's cloud environment provided by the federation: the elasticity of the proposed architecture allows the configuration and customisation of the monitoring infrastructure to adapt it to the specific cloud application. A multi-layer architecture is proposed, where each layer monitors different aspects of the multi-cloud infrastructure, starting from the detection of critical conditions on low-level parameters of the computational units and composing different monitoring levels in order to check the federated SLA. The agent-based approach introduces fault tolerance and scalability into the monitoring architecture, while the agents' reactivity and proactivity allow deep and intelligent monitoring, in which each agent can focus on different aspects of the monitoring activity, from low-level performance indexes to checking federated SLA compliance. Agents are strengthened by algorithms and rules used to monitor QoS parameters that are critical for the specific application, and the configuration of the adaptive monitoring environment is made easier by an interface that helps users describe their application's deployment. The prototype implementation of the proposed framework is applied to a testbed application to validate the monitoring architecture.
    Keywords: cloud federation; agent-based monitoring; multi-cloud environment.

  • New architecture for virtual appliance deployment in the cloud   Order a copy of this article
    by Amel Haji, Asma Ben Letaifa, Sami Tabbane 
    Abstract: Cloud computing is a model that offers convenient access to a set of configurable resources which can be provisioned and released with minimal administration effort. The main obstacle is to ensure interoperability and portability between cloud services. We deal with a special type of service called the 'virtual appliance': pre-packaged, preconfigured applications encapsulated in virtual machine images running on virtualisation platforms. This type of service enables flexible and easy deployment, while ensuring interoperability, reuse and migration of services. This paper proposes an architecture for deploying virtual appliances in a cloud environment.
    Keywords: cloud computing; virtual appliance; SDN; NFV; OpenStack.

  • Intelligent medical record management: a diagnosis support system   Order a copy of this article
    by Flora Amato, Giovanni Cozzolino, Antonino Mazzeo, Sara Romano 
    Abstract: Increasing life expectancy and low birth rates have radically changed the demographic structure of the European Union. The number of elderly people is growing, as is the number of chronic diseases: this implies ever higher healthcare costs, a reduction in the number of healthcare personnel, and demand for better care services. E-health has led to a growth of the health organisation, providing innovative and non-intrusive systems together with value-added services to healthcare actors, which contribute to enhancing the efficiency and reducing the costs of complex informative systems. In this work, we present a system for supporting medical decisions, based on semantic analysis of available medical data. The system implements an innovative methodology that combines different semantic approaches in order to extract the representation of a given document expressed in natural language and to associate it with a set of RDF triples.
    Keywords: e-health; knowledge management; data integration; natural language processing; decision support system.

  • Semi-supervised PSO clustering algorithm based on self-adaptive parameter optimization   Order a copy of this article
    by Xiuqin Pan, Wenmin Zhou, Yong Lu, Dongyin Sun 
    Abstract: Particle swarm optimisation based on semi-supervised learning (SSPSO) is known for higher clustering accuracy than classical clustering algorithms. However, it uses a fixed parameter representing the usage ratio of labelled to unlabelled samples in the clustering, and this way of determining the parameter makes it difficult to reach the best clustering result. In this paper, we propose an improved clustering algorithm to solve this parameter optimisation problem for PSO based on semi-supervised learning. The new approach, called APO_SSPSO, employs an adaptive strategy based on PSO to dynamically adjust the usage ratio of labelled and unlabelled samples for the clustering. Experiments are conducted on two sets of test samples. Simulation results show that the proposed algorithm is effective and valid.
    Keywords: particle swarm optimisation; clustering; semi-supervised; self-adaptive; parameter optimisation.
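    For readers unfamiliar with the underlying mechanics, a minimal PSO loop adapting a single parameter in [0, 1] (the fitness function is a stand-in for the semi-supervised clustering objective; all constants are illustrative):

      #include <stdio.h>
      #include <stdlib.h>

      #define P 5        /* particles, each encoding the label-usage ratio */
      #define STEPS 20

      double frand(void) { return (double)rand() / RAND_MAX; }

      /* Stand-in fitness: in the real algorithm this would be the
         semi-supervised clustering objective evaluated at this ratio. */
      double fitness(double ratio) { double d = ratio - 0.3; return -d * d; }

      int main(void) {
          double x[P], v[P], pbest[P], gbest;
          const double w = 0.7, c1 = 1.5, c2 = 1.5;
          for (int i = 0; i < P; i++) { x[i] = frand(); v[i] = 0; pbest[i] = x[i]; }
          gbest = pbest[0];

          for (int t = 0; t < STEPS; t++)
              for (int i = 0; i < P; i++) {
                  /* standard velocity and position update */
                  v[i] = w*v[i] + c1*frand()*(pbest[i]-x[i]) + c2*frand()*(gbest-x[i]);
                  x[i] += v[i];
                  if (x[i] < 0) x[i] = 0;
                  if (x[i] > 1) x[i] = 1;
                  if (fitness(x[i]) > fitness(pbest[i])) pbest[i] = x[i];
                  if (fitness(pbest[i]) > fitness(gbest)) gbest = pbest[i];
              }
          printf("adapted ratio = %.3f\n", gbest);
          return 0;
      }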

  • From business process models to the cloud: a semantic approach   Order a copy of this article
    by Beniamino Di Martino, Antonio Esposito, Giuseppina Cretella 
    Abstract: The fast evolution of IT services and the shift from server-centred applications to widely distributed frameworks and platforms, of which cloud computing is the best representative, have strongly conditioned the way enterprise managers deal with their businesses. Several tools for the design and execution of business processes have been proposed, together with standards for their formal representation, such as BPMN. Despite the expressiveness of such tools and representations, there is still a lack of integration with cloud services, mostly owing to the high variety of current offers, which creates confusion among customers. Moreover, portability and interoperability issues tend to hinder the adoption of cloud solutions for the immediate deployment of business processes. In order to overcome these issues and support the adoption of cloud services for the deployment of business processes, a semantic approach is proposed in this paper. The presented representation aims at easing the mapping process and at suggesting to users the most suitable cloud services to compose. A case study demonstrating the applicability and efficacy of the approach is also described.
    Keywords: business process; semantics; BPMN; OWL-S; cloud computing; ontologies; modelling; cloud services.
    DOI: 10.1504/IJHPCN.2016.10011967
     
  • Migrating mission-critical application in federated cloud: a case study   Order a copy of this article
    by Alba Amato, Rocco Aversa, Massimo Ficco, Salvatore Ventcinque 
    Abstract: Although virtualisation, elasticity and resource sharing enable new levels of flexibility, convenience and economic benefit, they also add new challenges and more areas for potential failures and security vulnerabilities, which are major concerns for companies and public organisations that want to shift their business- and mission-critical applications and sensitive data to the cloud. This paper discusses some technical issues that must be addressed to migrate mission-critical applications to public clouds. Moreover, by means of a case study, an approach to brokering the cloud infrastructure needed to satisfy more restrictive critical requirements is presented.
    Keywords: mission-critical applications; cloud federation; brokering; security; dependability.
    DOI: 10.1504/IJHPCN.2016.10012339
     

Special Issue on: WACCPD High-level Programming Approaches for Accelerators

  • Acceleration of unstructured implicit low-order finite-element earthquake simulation using OpenACC on Pascal GPUs   Order a copy of this article
    by Takuma Yamaguchi, Kohei Fujita, Tsuyoshi Ichimura, Muneo Hori, Lalith Maddegedara 
    Abstract: We accelerate CPU-based unstructured implicit low-order finite-element simulations by porting them to a GPU-CPU heterogeneous compute environment using OpenACC. We modified the algorithms of performance-sensitive parts, such as sparse matrix-vector multiplication and MPI communication, so that the computations are suitable for GPUs. Other parts of the earthquake simulation code are ported by directly inserting OpenACC directives into the CPU code. This porting approach enables high performance at relatively low development cost. Comparing eight K computer nodes with eight NVIDIA Pascal P100 GPUs, we achieve a 20.8 times speedup for the 3-by-3 block Jacobi preconditioned conjugate gradient finite-element solver. We show the effectiveness of the proposed method through many-case crust-deformation simulations and a large-scale computation using a finite-element model with a billion degrees of freedom on a GPU cluster.
    Keywords: MPI; element-by-element; conjugate gradient method.
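    To give the flavour of such a port on the kernel the abstract singles out, a minimal CRS sparse matrix-vector product with standard OpenACC directives (the matrix data are illustrative, not the authors' tuned code):

      #include <stdio.h>

      int main(void) {
          int    row[5] = {0, 2, 4, 6, 8};        /* 4x4 CRS row pointers */
          int    col[8] = {0,1, 0,1, 2,3, 2,3};
          double val[8] = {4,1, 1,4, 4,1, 1,4};
          double x[4] = {1,2,3,4}, y[4];

          #pragma acc parallel loop copyin(row,col,val,x) copyout(y)
          for (int i = 0; i < 4; i++) {
              double s = 0.0;
              #pragma acc loop seq               /* short inner row loop */
              for (int j = row[i]; j < row[i+1]; j++)
                  s += val[j] * x[col[j]];
              y[i] = s;
          }
          for (int i = 0; i < 4; i++) printf("y[%d] = %.1f\n", i, y[i]);
          return 0;
      }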

  • PACC: a directive-based programming framework for out-of-core stencil computation on accelerators   Order a copy of this article
    by Nobuhiro Miki, Fumihiko Ino, Kenichi Hagihara 
    Abstract: We present a directive-based programming framework, the pipelined accelerator (PACC), to accelerate large-scale stencil computation on an accelerator device such as a graphics processing unit (GPU). PACC provides a collection of extended OpenACC directives to facilitate out-of-core stencil computation accelerated by temporal blocking. The proposed framework includes a source-to-source translator capable of generating out-of-core OpenACC code from PACC code: large data is automatically decomposed into smaller chunks that are processed using the limited-capacity device memory. The generated code is optimised with a temporal blocking technique to minimise CPU-GPU data transfer, and is further accelerated by a multithreaded pipeline engine that maximises data copy throughput and overlaps GPU execution with data transfer. In experiments, we applied the proposed translator to three stencil computation codes. The out-of-core performance for 107 GB of data on an NVIDIA Tesla K40 GPU with 12 GB of memory reached 69.3 GFLOPS, which is 17% below the in-core performance for 8 GB of data. We believe that the proposed directive-based approach can facilitate out-of-core stencil computation on a GPU.
    Keywords: accelerator; directive-based programming; out-of-core execution; OpenACC; GPU.
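    The essence of out-of-core chunking, written by hand with plain OpenACC (a conceptual sketch only: PACC generates such code automatically and additionally overlaps the transfers, which this sketch does not):

      #include <stdlib.h>
      #include <stdio.h>

      #define N     (1L<<24)   /* total elements (stand-in for "large data") */
      #define CHUNK (1L<<20)   /* what fits comfortably in device memory     */

      int main(void) {
          double *a = malloc(N * sizeof *a);
          for (long i = 0; i < N; i++) a[i] = i;

          /* stream host data through the device one chunk at a time */
          for (long base = 0; base < N; base += CHUNK) {
              double *c = a + base;
              #pragma acc parallel loop copy(c[0:CHUNK])
              for (long i = 0; i < CHUNK; i++)
                  c[i] = 2.0 * c[i];      /* stand-in for a stencil step */
          }
          printf("a[N-1] = %.1f\n", a[N-1]);
          free(a);
          return 0;
      }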

  • Efficient implementation of OpenACC cache directive on NVIDIA GPUs   Order a copy of this article
    by Ahmad Lashgar, Amirali Baniasadi 
    Abstract: OpenACC's programming model presents a simple interface to programmers, offering a trade-off between performance and development effort. OpenACC relies on compiler technology to generate efficient code and optimise for performance. The cache directive is among the most challenging directives to implement: it allows the programmer to use the accelerator's hardware- or software-managed caches by passing hints to the compiler. In this paper, we investigate the implementation of the cache directive on NVIDIA-like GPUs and propose optimisations for the CUDA backend, using CUDA's shared memory as the software-managed cache space. We first show that a straightforward implementation can be very inefficient and can undesirably downgrade performance. We investigate the differences between this implementation and hand-written CUDA alternatives and introduce the following optimisations to bridge the performance gap between the two: i) improving occupancy by sharing the cache among several parallel threads, and ii) optimising cache fetch and write routines via parallelisation and minimised control flow. Investigating three test cases, we show that the best cache directive implementation can perform very close to hand-written CUDA equivalents and improve performance by up to 2.4x (compared with the baseline OpenACC).
    Keywords: OpenACC; cache memory; CUDA; software-managed cache; performance.
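    What the directive looks like at the source level, on a minimal 1-D three-point stencil of our own devising (under the CUDA backend studied here, the hinted window would be staged in shared memory):

      #include <stdio.h>

      #define N 1024

      int main(void) {
          float a[N], b[N];
          for (int i = 0; i < N; i++) a[i] = (float)i;

          #pragma acc parallel loop copyin(a) copyout(b)
          for (int i = 1; i < N-1; i++) {
              /* hint: keep a sliding 3-element window of `a` in fast
                 on-chip (software-managed) cache */
              #pragma acc cache(a[i-1:3])
              b[i] = (a[i-1] + a[i] + a[i+1]) / 3.0f;
          }
          printf("b[1] = %.1f\n", b[1]);
          return 0;
      }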

  • Developments in memory management in OpenMP   Order a copy of this article
    by Jason Sewall, John Pennycook, Alejandro Duran, Christian Terboven, Xinmin Tian, Ravi Narayanaswamy 
    Abstract: Modern computers with multi-/many-core processors and accelerators feature a sophisticated and deep memory hierarchy, potentially including distinct main memory, high-bandwidth memory, texture memory and scratchpad memory. The performance characteristics of these memories are varied, and studies have demonstrated the importance of using them effectively. In this article, we explore some of the major issues in developing software to effectively and portably implement these technologies and describe enhancements being added to the OpenMP language to bridge this software-hardware gap. Our proposal separately exposes the characteristics of memory resources (such as kind) and the characteristics of allocations (such as alignment), and is fully compatible with existing OpenMP constructs.
    Keywords: memory management; programming languages; compiler directives; heap allocation.
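    The developments described here fed into the allocator API that OpenMP 5.0 later standardised; a sketch in terms of that 5.0 API (requires an OpenMP 5.0 compiler and runtime; the kernel is illustrative):

      #include <stdio.h>
      #include <omp.h>

      int main(void) {
          /* allocator bound to high-bandwidth memory, with 64-byte
             alignment expressed as an allocation trait */
          omp_alloctrait_t traits[] = { { omp_atk_alignment, 64 } };
          omp_allocator_handle_t hbw =
              omp_init_allocator(omp_high_bw_mem_space, 1, traits);

          double *field = omp_alloc(1024 * sizeof *field, hbw);
          if (!field) return 1;   /* allocation may fail or fall back,
                                     depending on the fallback trait */

          #pragma omp parallel for
          for (int i = 0; i < 1024; i++) field[i] = i;

          printf("field[42] = %.1f\n", field[42]);
          omp_free(field, hbw);
          omp_destroy_allocator(hbw);
          return 0;
      }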

  • Using the loop chain abstraction to schedule across loops in existing code   Order a copy of this article
    by Ian Bertolacci, Michelle Strout, Stephen Guzik, Jordan Riley, Eddie Davis, Catherine Olschanowsky 
    Abstract: Exposing opportunities for parallelisation while explicitly managing data locality is the primary challenge to porting and optimising existing computational science simulation codes to improve performance and accuracy. OpenMP provides many mechanisms for expressing parallelism, but it primarily remains the programmer's responsibility to group computations to improve data locality. The Loop Chain abstraction, where a summary of data access patterns is included as pragmas associated with parallel loops, provides compilers with sufficient information to automate the parallelism versus data locality tradeoff. In this paper, we present the syntax and semantics of loop chain pragmas for indicating information about the loops that belong to the chain and orthogonal specification of a high-level schedule (i.e., transformation plan) for the whole loop chain. We show example usage of the pragmas, detail attempts to automate the transformation of a legacy scientific code written with specific language constraints to loop chain codes, describe the compiler implementation for loop chain pragmas, and exhibit performance results for a computational fluid dynamics benchmark.
    Keywords: loop chain abstraction; loop optimisation; source-to-source compiler; performance optimisation.

  • Performance evaluation of OpenMP's target construct on GPUs   Order a copy of this article
    by Akihiro Hayashi, Jun Shirako, Ettore Tiotto, Robert Ho, Vivek Sarkar 
    Abstract: OpenMP is a directive-based shared-memory parallel programming model that has been widely used for many years. From OpenMP 4.0 onwards, GPU platforms are supported by extending OpenMP's high-level parallel abstractions with accelerator programming. This extension allows programmers to write GPU programs in standard C/C++ or Fortran, without exposing too many details of the GPU architecture. However, such high-level programming models generally impose additional demands for program optimisation on compilers and runtime systems; otherwise, OpenMP programs could be slower than fully hand-tuned, or even naive, implementations in low-level programming models such as CUDA. To study potential performance improvements from compiling and optimising high-level programs for GPU execution, in this paper we 1) evaluate a set of OpenMP benchmarks on two NVIDIA Tesla GPUs (K80 and P100) and 2) conduct a comparative performance analysis of hand-written CUDA programs and GPU programs automatically generated by the IBM XL and clang/LLVM compilers.
    Keywords: OpenMP; GPUs; performance evaluation; compilers; target constructs; LLVM; XL compiler; Kepler; Pascal.
    DOI: 10.1504/IJHPCN.2017.10009068
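    The style of construct under evaluation, shown on an illustrative SAXPY kernel (standard OpenMP 4.x directives, not one of the paper's benchmarks):

      #include <stdio.h>
      #include <omp.h>

      #define N 1000000

      int main(void) {
          static float x[N], y[N];
          for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

          /* offload the loop to the default device (e.g. a GPU) */
          #pragma omp target teams distribute parallel for \
                  map(to: x) map(tofrom: y)
          for (int i = 0; i < N; i++)
              y[i] += 2.0f * x[i];        /* SAXPY with a = 2 */

          printf("y[0] = %.1f\n", y[0]);
          return 0;
      }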
     
  • Creating a portable, high-level graph analytics paradigm for compute and data-intensive applications   Order a copy of this article
    by Robert Searles, Stephen Herbein, Travis Johnston, Michela Taufer, Sunita Chandrasekaran 
    Abstract: High performance computing (HPC) offers tremendous potential to process large amounts of data, commonly referred to as big data. Owing to the immense computational requirements of big data applications, the HPC and big data communities are converging. As a result, heterogeneous and distributed systems are becoming commonplace. In order to take advantage of the immense computing power of these systems, distributing data efficiently and leveraging specialised hardware (e.g. accelerators) is critical. In this paper, we develop a portable, high-level paradigm that can be used to run big data applications on existing and future HPC systems. More specifically, we will target graph analytics applications, since these types of application are becoming increasingly popular in the big data and machine learning communities. Using our paradigm, we accelerate three real-world, compute- and data-intensive, graph analytics applications: a function call graph similarity application, a triangle enumeration subroutine, and a graph assaying application. Our paradigm uses the popular MapReduce framework, Apache Spark, in conjunction with CUDA, in order to simultaneously take advantage of automatic data distribution and specialised hardware present on each node of our HPC systems. We demonstrate scalability with regard to compute-intensive portions of the code that are parallelisable, as well as an exploration of the parameter space for each application. We show that our method yields a portable solution that can be used to leverage almost any legacy, current, or next-generation HPC or cloud-based system.
    Keywords: graph analytics; GPU; distributed systems; high performance computing; cloud; heterogeneous systems; cluster; big data; Apache Spark; CUDA.
    DOI: 10.1504/IJHPCN.2017.10007922
     

Special Issue on: IEEE TrustCom-16 Trust Computing and Communications

  • A trust-based evaluation model for data privacy protection in cloud computing   Order a copy of this article
    by Wang Yubiao, Wen Junhao, Zhou Wei 
    Abstract: To address quality and privacy protection problems, this paper proposes a trust-based evaluation model for data privacy protection in cloud computing (TEM-DPP). In order to make the final trust evaluation values more practical, the model introduces a comprehensive trust evaluation, composed of direct trust and recommendation trust. Service attributes and a combined-weights method are used to calculate the direct trust, reflecting its timeliness and rationality. To protect data security, we propose a data protection method based on a normal cloud model. The customer satisfaction, decay time, transaction amount and a penalty factor are then used to update the direct trust. Simulation results show that the proposed trust evaluation model can not only adapt to dynamic changes in the environment, but also ensure the actual quality of service. It can improve service requesters' satisfaction and has a certain resilience to fraudulent entities.
    Keywords: cloud service; trust; privacy-aware; evaluation model.

  • A novel algorithm for TOP-K optimal path on complex multiple attribute graph   Order a copy of this article
    by Kehong Zhang, Keqiu Li 
    Abstract: In the rapidly changing information world, diverse users and personalised requirements create an urgent need for complex multiple-attribute decision-making, and the optimal solution of a single-attribute decision cannot meet actual needs. The TOP-K optimal path is an effective way to solve this problem. Existing TOP-K techniques mainly comprise non-repeatable-vertex algorithms, repeatable-vertex algorithms and index-based algorithms, but they are mostly based on a single attribute; few works so far address the complex multiple-attribute decision-making problem. Therefore, the Tdp algorithm is presented in this paper. Firstly, it uses interval numbers and extreme values to handle uncertain attribute values, and the TOPSIS technique to solve the complex multiple-attribute decision-making problem, yielding a comprehensive score. Secondly, building on an analysis of Yen's algorithm, the paper proposes a blocking, bidirectional shortest-path algorithm for the TOP-K optimal path. Finally, Tdp is compared with Yen's algorithm, and the results confirm that Tdp improves on TOP-K optimal path technology.
    Keywords: multiple attribute; optimal path; blocking; bidirectional; deviation vertex; TOP-K; decision-making.
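    A compact sketch of the TOPSIS scoring used to merge several attributes into one comprehensive score (the decision matrix, the weights and the assumption that both attributes are benefit-type are illustrative):

      #include <stdio.h>
      #include <math.h>

      #define NP 3   /* candidate paths */
      #define NA 2   /* attributes (both benefit-type here) */

      int main(void) {
          double a[NP][NA] = {{0.9, 0.3}, {0.6, 0.8}, {0.2, 0.9}};
          double w[NA] = {0.5, 0.5}, norm[NA] = {0};

          for (int j = 0; j < NA; j++) {           /* weighted vector normalisation */
              for (int i = 0; i < NP; i++) norm[j] += a[i][j] * a[i][j];
              norm[j] = sqrt(norm[j]);
              for (int i = 0; i < NP; i++) a[i][j] = w[j] * a[i][j] / norm[j];
          }
          double best[NA], worst[NA];              /* ideal and anti-ideal points */
          for (int j = 0; j < NA; j++) {
              best[j] = worst[j] = a[0][j];
              for (int i = 1; i < NP; i++) {
                  if (a[i][j] > best[j])  best[j]  = a[i][j];
                  if (a[i][j] < worst[j]) worst[j] = a[i][j];
              }
          }
          for (int i = 0; i < NP; i++) {           /* relative closeness to ideal */
              double dp = 0, dm = 0;
              for (int j = 0; j < NA; j++) {
                  dp += (a[i][j]-best[j])  * (a[i][j]-best[j]);
                  dm += (a[i][j]-worst[j]) * (a[i][j]-worst[j]);
              }
              printf("path %d score %.3f\n", i, sqrt(dm) / (sqrt(dp) + sqrt(dm)));
          }
          return 0;
      }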

Special Issue on: Security and Privacy in Complex Large-scale Computing Systems for Big Data Management

  • A high efficient map-matching algorithm for the GPS data processing intended for the highways   Order a copy of this article
    by Wenbo Mei, Hongyu Wang, Haihua Han, Xiaoguang Wang, Ruochen Fang 
    Abstract: This paper presents a highly efficient map-matching algorithm intended for the analysis of big data collected on highways, based on the topological characteristics of the highway road network. Map-matching algorithms, which correct vehicles' position errors on digital maps, are crucial in the majority of transportation research projects based on floating car data. However, most studies focus on improving map-matching performance for complex road networks (e.g., urban areas), while the highway road network is unjustly neglected even though its importance in Intelligent Transportation Systems (ITS) is increasing. The algorithm presented in this paper has two main improvements: it uses a fuzzy estimation algorithm to reduce the redundant calculations of map-matching processing on arterial links, and it introduces a new parameter for evaluating the most suitable travelling path for vehicles on the highway road network. The experimental results show that the proposed map-matching algorithm improves efficiency while assuring high accuracy of highway GPS data processing.
    Keywords: map-matching algorithm; highway traffic; vehicle tracking data; big data analysis; floating car data.

  • Comparison analysis and efficient implementation of reconciliation-based RLWE key exchange protocol   Order a copy of this article
    by Xinwei Gao, Jintai Ding, Saraswathy RV, Lin Li, Jiqiang Liu 
    Abstract: Error reconciliation is an important technique for Learning With Errors (LWE) and Ring-LWE (RLWE) based constructions. In this paper, we present a comparative analysis of two error-reconciliation-based RLWE key exchange protocols: Ding et al. in 2012 (DING12) and Bos et al. in 2015 (BCNS15). We take them as examples to explain the core idea of error reconciliation and of building key exchange over the RLWE problem, discuss implementation and real-world performance, and compare them comprehensively. We also analyse Frodo, an LWE key exchange that uses an improved version of the error-reconciliation mechanism of BCNS15. To the best of our knowledge, our work is the first to present parameter choices for DING12 with at least 128-bit classical (80-bit quantum) and 256-bit classical (>200-bit quantum) security, together with efficient portable C/C++ implementations. Benchmarking shows that our implementation is 11x faster than BCNS15, and one key exchange execution costs only 0.07 ms on a 4-year-old mid-range CPU; error reconciliation alone is 1.57x faster than in BCNS15.
    Keywords: RLWE; post quantum; key exchange; implementation; analysis.
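    The core idea of DING12-style reconciliation, on a single coefficient (a toy sketch with an assumed modulus; the real protocol works over polynomial rings and arranges for the two sides' values to differ by a small even error):

      #include <stdio.h>

      #define Q 12289                /* toy odd modulus */

      /* centred representative in [-Q/2, Q/2] */
      int centre(long x) {
          long r = ((x % Q) + Q) % Q;
          return (r > Q/2) ? (int)(r - Q) : (int)r;
      }
      /* signal: 0 iff the centred value lies in the inner region */
      int sig(long y) {
          int c = centre(y);
          return (c >= -(Q/4) && c <= Q/4) ? 0 : 1;
      }
      /* robust extractor: parity of the centred value of x + w(Q-1)/2 */
      int extract(long x, int w) {
          int c = centre(x + (long)w * ((Q - 1) / 2));
          return (c < 0 ? -c : c) % 2;
      }
      int main(void) {
          long kb = 6100;            /* Bob's coefficient                 */
          long ka = kb + 2 * 7;      /* Alice's: differs by a small EVEN
                                        error, as the protocol arranges  */
          int  w  = sig(kb);         /* Bob publishes this one-bit hint  */
          printf("signal %d: Alice bit %d, Bob bit %d\n",
                 w, extract(ka, w), extract(kb, w));
          return 0;
      }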

  • Distributed and personalised social network privacy protection   Order a copy of this article
    by Xiao-lin Zhang, Xiao-yu He, Fang-ming Yu, Li-xin Liu, Huan-xiang Zhang, Zhuo-lin Li 
    Abstract: Considering the privacy issues on social networks, a variety of anonymisation techniques have been proposed, but these techniques neglect the differences among individuals in their demand for privacy protection. With the development of internet technology, the number of social network individuals increases yearly, and network data volumes are growing massively. Motivated by this, we specify three levels of privacy information for victim individuals and propose a personalised k-degree-m-label (PKDML) anonymity model. Furthermore, we design and implement a distributed and personalised k-degree-m-label (DPKDML) anonymisation algorithm, which takes advantage of the 'vertex-centric' GraphX programming model to complete the entire anonymisation process through multiple rounds of message passing and vertex-value updating. Finally, we conduct experiments on real social network datasets to evaluate DPKDML. The experimental results show that our methods overcome the shortcomings of traditional methods in processing massive data, reduce anonymisation costs and increase data utility.
    Keywords: social networks; privacy protection; distributed; personalised; GraphX.

  • A delegation token based method to authenticate the third party in transport layer security   Order a copy of this article
    by Lu Yan, Xiao Chen, Haojiang Deng, Xiaozhou Ye 
    Abstract: Transport layer security is an important security protocol used to protect end-to-end communication. However, limitations arise when it is applied to content delivery networks, in which a proxy server rather than the origin server provides the service to the client. Under such circumstances, the proxy server acts as a third party that the client is not able to authenticate. This paper discusses the authentication problem for the proxy server and proposes a delegation-token-based method to authenticate it, with multi-level proxy servers also taken into consideration. Furthermore, a client-based cache strategy is employed to improve the proposed method in terms of time consumption, and the security of the method is analysed. Experimental results demonstrate the effectiveness of our method; moreover, with the client-based cache strategy, the authentication process can be accomplished much more efficiently, with a 15.63% decrease in connection time.
    Keywords: transport layer security; content delivery network; authentication; proxy server; delegation token.

  • A secure cloud storage scheme with key-updating in hybrid cloud   Order a copy of this article
    by Ge Gao, Lei Wu, Yunxue Yan 
    Abstract: As cloud computing continues to evolve, hybrid cloud security is becoming increasingly important to cloud research. In order to improve the multi-user data sharing mechanism in hybrid clouds and enhance their security, this paper proposes a new secure storage model for hybrid clouds, which combines the public cloud's cost savings and elasticity with the private cloud's security and customisation. In addition, after examining the security of data storage and the interoperability of hybrid clouds, this paper adopts a forward-secure key-updating encryption scheme to ensure the security of private data in hybrid clouds (the sketch after this entry illustrates the key-updating idea).
    Keywords: hybrid cloud; forward-secure; key-updating; digital signature.
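    Illustrative sketch: forward security through key updating can be obtained generically by evolving the current key through a one-way function each period and erasing the old key, so that compromise of today's key reveals nothing about data encrypted under earlier keys. The hash-chain evolution below is a generic illustration of that idea, not the paper's specific scheme.

        import hashlib

        def update(key):
            """One-way key evolution: K_{i+1} = H('update' || K_i)."""
            return hashlib.sha256(b'update' + key).digest()

        k = hashlib.sha256(b'initial secret').digest()  # period-0 key (toy)
        for period in range(3):
            # ... encrypt this period's data under k, then evolve and erase k.
            k = update(k)
        # Recovering k now gives no way back to the period-0, -1 or -2 keys,
        # because the hash cannot feasibly be inverted.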

  • Survey of intrusion detection techniques and architectures in cloud computing   Order a copy of this article
    by Pinki Sharma, Jyotsna Sengupta, P.K. Suri 
    Abstract: Cloud computing helps end-users connect easily to various services and applications through the internet, yields better usage of resources and hence offers end-users a reasonable service access cost. Using virtualisation technologies, cloud computing virtually and dynamically provides computing and data resources to a variety of users, based on their needs. Because cloud computing is a shared facility accessed remotely, it is vulnerable to various attacks, including host- and network-based attacks, and hence requires immediate attention. Moreover, cloud networks have characteristics of their own that give rise to additional security threats. An effective solution for protecting the cloud from such attacks is the Intrusion Detection System (IDS), which has become a popular security technology for detecting attacks in a wide variety of networks. A cloud intrusion detection system can achieve a higher level of security while maintaining the cloud's individuality; in the most widely used arrangement, IDS instances connected over the network are combined with various anomaly detection techniques. This paper provides an extensive survey of cloud-based IDS implementations that address various security issues, and also discusses their architectures.
    Keywords: security; cloud computing; intrusion detection system; intrusion detection techniques; intrusion detection architecture.

  • ROI-based fragile watermarking for medical image tamper detection   Order a copy of this article
    by Nour El-Houda Golea, Kamal Eddine Melkemi 
    Abstract: In this paper, we propose a Region of Interest (ROI) based fragile watermarking scheme for medical image tamper detection. The proposed methodology is inspired by network transmission, where a transmitted message is divided into packets of fixed size and redundant information is added to each packet to handle errors; the Cyclic Redundancy Check (CRC) code is one of the most widely used error-detection tools in digital communication systems. Accordingly, the ROI to be protected is treated as a message to be transmitted without errors: it is decomposed into packets of fixed size, and a CRC encoder based on the standard CRC-32 generator polynomial, chosen for its mathematical properties, is applied to each packet to produce a 32-bit checksum, which serves as a watermark inserted into the first and second least significant bits of the corresponding packet (a minimal sketch of this step follows this entry). At the receiving end, the watermark is extracted from each packet and the CRC decoder procedure is applied to detect errors. Depending on the degree of alteration and the importance of the packet, the receiver can ask the sender to retransmit only the corrupted packets. Experiments conducted on six different modalities of medical image show the validity of the proposed approach in terms of imperceptibility and its ability to reliably detect tampering, including strong attacks.
    Keywords: fragile watermarking; medical image watermarking; tamper detection; cyclic redundancy check code.
    DOI: 10.1504/IJHPCN.2017.10013846
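    Illustrative sketch: the per-packet CRC step can be pictured as below, where each fixed-size ROI packet is checksummed with CRC-32 (computed with its two least significant bits cleared, since those carry the watermark) and the 32 checksum bits are hidden in the two LSBs of the packet's first 16 pixels. The packet size and embedding layout are assumptions made for this sketch and may differ from the paper's.

        import zlib

        def embed(packet):
            """Hide the packet's CRC-32 in the 2 LSBs of its first 16 pixels."""
            crc = zlib.crc32(bytes(p & ~3 for p in packet))   # CRC of content
            bits = [(crc >> (2 * i)) & 3 for i in range(16)]  # 16 x 2-bit chunks
            return [(p & ~3) | b for p, b in zip(packet, bits)] + packet[16:]

        def check(packet):
            """Recompute the CRC and compare it with the embedded watermark."""
            crc = zlib.crc32(bytes(p & ~3 for p in packet))
            bits = [(crc >> (2 * i)) & 3 for i in range(16)]
            return all((p & 3) == b for p, b in zip(packet, bits))

        roi_packet = list(range(64, 128))   # one toy 64-pixel packet
        marked = embed(roi_packet)
        assert check(marked)                # an intact packet passes
        marked[40] ^= 0x10                  # simulate tampering in the ROI
        assert not check(marked)            # the tampered packet is detected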
     

Special Issue on: Recent Advances in Security and Privacy for Big Data

  • A mathematical model for intimacy-based security protection in social networks without violation of privacy   Order a copy of this article
    by Hui Zheng, Jing He, Yanchun Zhang, Junfeng Wu 
    Abstract: Protection against spam, fraud and phishing is becoming increasingly important in social network applications. Online social network providers such as Facebook and MySpace collect data from users, including their relationship and education status. While these data are used to provide users with convenient services, improper use of them, such as spam advertising, can be annoying and even harmful; worse, if these data are stolen or illegally gathered, users may be exposed to fraud and phishing. To further protect individual privacy, we employ an intimacy algorithm that itself does not violate privacy, and we expose spammers by detecting unusual intimacy patterns (a toy illustration follows this entry). We therefore propose a mathematical model for intimacy-based security protection in a social network without violation of privacy. Moreover, the feasibility and effectiveness of our model are demonstrated both theoretically and experimentally.
    Keywords: social network; privacy protection; intimacy; spam detection.
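    Illustrative sketch: the kind of check such a model enables can be pictured with a toy intimacy score derived from interaction counts, flagging accounts most of whose contacts are near-strangers, a pattern typical of spammers. The scoring rule, data and threshold below are assumptions made for illustration, not the paper's model.

        interactions = {  # (user, contact) -> number of mutual interactions
            ('alice', 'bob'): 25, ('alice', 'carol'): 14,
            ('spam1', 'alice'): 0, ('spam1', 'bob'): 0, ('spam1', 'carol'): 1,
        }

        def intimacy(u, v):
            """Symmetric toy intimacy score between two users."""
            return interactions.get((u, v), 0) + interactions.get((v, u), 0)

        def looks_like_spammer(u, contacts, low=1, ratio=0.8):
            """Flag u if more than `ratio` of its contacts are near-strangers."""
            strangers = sum(1 for v in contacts if intimacy(u, v) <= low)
            return strangers / len(contacts) > ratio

        assert looks_like_spammer('spam1', ['alice', 'bob', 'carol'])
        assert not looks_like_spammer('alice', ['bob', 'carol'])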

Special Issue on: Advances in Information Security and Networks

  • Dynamic combined with static analysis for mining network protocols' hidden behaviour
    by YanJing Hu 
    Abstract: The hidden behaviour of unknown protocols is becoming a new challenge in network security. This paper takes both captured messages and the binary code that implements the protocol as the objects of study, using dynamic taint analysis combined with static analysis. First, we monitor and analyse the process by which the protocol program parses a message in HiddenDisc, a virtual-platform prototype system we developed, and record the protocol's public behaviour; then, based on our proposed hidden-behaviour perception and mining algorithm, we statically analyse the trigger conditions of the protocol's hidden behaviour and its hidden instruction sequences (the sketch after this entry illustrates the taint-tracking idea). According to the trigger conditions, new protocol messages carrying the sensitive information are generated, and the hidden behaviours are executed by dynamic triggering. HiddenDisc can thus sense, trigger and analyse a protocol's hidden behaviour. Based on the statistical analysis results, we also propose a method for evaluating the security of protocol execution. The experimental results show that the present method can accurately mine a protocol's hidden behaviour and evaluate the execution security of an unknown protocol.
    Keywords: protocol reverse analysis; protocols' hidden behaviour; protocol message; protocol software.
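    Illustrative sketch: the dynamic-taint idea underlying the approach can be shown on a toy three-address trace: bytes of the received message are marked tainted, taint propagates along data flow, and any branch whose condition depends on tainted data is recorded as a candidate hidden-behaviour trigger for later static analysis. The trace format is an assumption made for this sketch; HiddenDisc itself works on real binary code.

        trace = [
            ('mov', 'a', 'msg[0]'),  # a := first message byte (taint source)
            ('add', 'b', 'a'),       # b := b + a   (taint propagates)
            ('mov', 'c', 'const'),   # c := constant (untainted)
            ('jcc', None, 'b'),      # branch on b -> message-dependent
            ('jcc', None, 'c'),      # branch on c -> not message-dependent
        ]

        tainted = {'msg[0]'}         # taint source: the received message
        triggers = []
        for op, dst, src in trace:
            if op == 'jcc':
                if src in tainted:
                    triggers.append(src)  # candidate hidden-behaviour trigger
            elif src in tainted:
                tainted.add(dst)          # propagate taint to the destination

        assert triggers == ['b']          # this branch is analysed statically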