Forthcoming articles

 


International Journal of High Performance Computing and Networking

 

These articles have been peer-reviewed and accepted for publication in IJHPCN, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

 

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

 

Register for our alerting service, which notifies you by email when new issues of IJHPCN are published online.

 

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

 

International Journal of High Performance Computing and Networking (155 papers in press)

 

Regular Issues

 

  • A real-time vehicle management system implementation on a cloud computing platform   Order a copy of this article
    by Hua-Yi Lin, Jiann-Gwo Doong, Meng-Yen Hsieh, Kuan-Ching Li 
    Abstract: This study uses the high-speed computing capability of cloud computing together with smartphones to realise a vehicle management system running on moving cars. Our research exploits smartphones or tablet PCs as the in-car unit, providing location-based services on the mobile device with GPS (Global Positioning System), a wireless camcorder, Google Maps, visualised information and graphical presentation to deliver personalised services. The system allows users to instantly access this information and manage the movement of cars. Additionally, the location of golf carts, the surrounding environment and personnel information are transmitted through a wireless network to the monitoring centre, which can thus provide real-time services, motorcade management and caddy-care services. If the proposed mechanism performs well, we will adopt it in other applications in the near future.
    Keywords: cloud computing, vehicle management, GPS

  • pvFPGA: paravirtualising an FPGA-based hardware accelerator towards general purpose computing   Order a copy of this article
    by Miodrag Bolic, Wei Wang, Jonathan Parri 
    Abstract: This paper presents an improved design of pvFPGA, a novel system design solution for virtualising an FPGA-based hardware accelerator via a virtual machine monitor (VMM). The accelerator design on the FPGA can be used to accelerate various applications, regardless of their computation latencies. In our implementation, we adopt the Xen VMM to build a paravirtualised environment and a Xilinx Virtex-6 as the FPGA accelerator. Data is transferred between the x86 server and the FPGA accelerator through direct memory access (DMA), and a streaming pipeline technique is adopted to improve the efficiency of data transfer. Several solutions to streaming pipeline hazards are discussed in this paper. In addition, we propose a technique, hyper-requesting, which enables portions of two requests bound for different accelerator applications to be processed on the FPGA accelerator simultaneously through DMA context switches, achieving request-level parallelism. The experimental results show that hyper-requesting reduces request turnaround time by up to 80%.
    Keywords: FPGA; hardware accelerator; paravirtualisation; hyper-requesting; streaming pipeline; DMA context switch

  • ReconsMap: a reliable controller-switch mapping in software defined networks   Order a copy of this article
    by Wenbo Wang, Binqiang Wang, Yuxiang Hu, Bangzhou Liu 
    Abstract: Software-defined networking (SDN) innovates the future network structure by decoupling the control plane from the data plane. In large SDN networks, multiple controllers or controller domains are deployed, where each controller has a logically centralised view while managing a set of switches. Recent studies focus mostly on controller placement and simply assign each switch to its closest controller. Such latency-first controller-switch mapping may lead to controller overload and a vulnerable spanning tree. In this paper, we illustrate this case and propose a Reliable Controller-Switch Mapping Model (ReconsMap Model). This model 1) adjusts the number of mapped switches according to the network traffic, 2) ensures an acceptable propagation delay and 3) builds a robust spanning tree for each controller to ensure the lowest data loss. Computational results on OS3E and the Topology Zoo show that ReconsMap effectively reduces data loss, relieves controller overload and improves the robustness of the spanning tree.
    Keywords: SDN; controller-switch mapping; reliability; load balance

  • Improving the performance of message broadcasting in VANETs   Order a copy of this article
    by Sakthi Ganesh 
    Abstract: In vehicular ad hoc networks, messages are broadcast spontaneously by active mobile nodes to all neighbouring nodes within connectivity range. These safety messages are time-sensitive and suffer severe delay. The main aim of this work is to broadcast safety messages while avoiding packet collisions and reducing packet loss, so that the efficiency of the network is improved; this can be analysed using different protocols in the vehicular ad hoc network. Typical carrier-sense multiple access schemes, in which users contend for channel access, are not suitable for this application. Instead, we broadcast safety messages using protocol sequences: binary sequences of 0s and 1s that each user reads to decide in which time slots to transmit a packet. This approach does not require time synchronisation between the mobile nodes. We compare the delay performance with the dedicated short range communication protocol, an ALOHA-type random access scheme and the zone routing protocol. By arranging the data packets, a hard delay guarantee can be achieved; the network delay is lowest under the zone routing protocol.
    Keywords: RSU, ALOHA, DSRC, ZRP.
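
    The slot-based transmission rule can be illustrated with a short simulation. A minimal Python sketch, assuming invented per-node binary sequences and counting a slot as successful when exactly one node transmits; the paper's actual sequence construction is not reproduced here:

      # Toy protocol-sequence broadcasting: node n transmits in slot t iff
      # its binary sequence has a 1 at position t mod len(sequence); a slot
      # delivers a packet when exactly one node transmits (no collision).
      SEQUENCES = {                        # hypothetical per-node sequences
          "car1": [1, 0, 0, 1, 0],
          "car2": [0, 1, 0, 0, 1],
          "car3": [1, 0, 1, 0, 0],         # slot 0 collides with car1
      }

      def simulate(num_slots):
          delivered = {node: 0 for node in SEQUENCES}
          for t in range(num_slots):
              senders = [n for n, seq in SEQUENCES.items() if seq[t % len(seq)]]
              if len(senders) == 1:        # collision-free slot
                  delivered[senders[0]] += 1
          return delivered

      print(simulate(100))                 # collision-free deliveries per node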

  • Improved reconfigurable hyper-pipeline soft-core processor on FPGA for SIMD   Order a copy of this article
    by Maheswari Raja, Pattabiraman Venkatasubbu 
    Abstract: Reconfiguration has become a powerful computational model in which processors can be changed dynamically during execution. This paper presents dynamic reconfigurable register file allocation in a hyper-pipelined OR1200 (OpenRISC) for SIMD (Single Instruction Multiple Data). The OR1200 instantly reconfigures the actual register file into the reconfigurable register file according to the requirements of the application. The unused general-purpose registers obtained during the reconfiguration process can be used for the hyper-pipelining technique, which improves the overall performance of the single-core processor system. Releasing the unused registers thus reduces power consumption and increases the execution speed of the OR1200. The proposed reconfigurable technique is implemented in Verilog and tested with the MediaBench multimedia benchmark dataset, achieving a 16.80% reduction in register usage for the multimedia dataset and a power reduction of up to 72.7% with reconfigurable modules. The proposed technique is configured on a Virtex-6 FPGA (Field Programmable Gate Array) and the results were analysed for the existing and the proposed reconfigured OR1200.
    Keywords: dynamic reconfiguration, MediaBench, hyper pipeline, soft-core, FPGA

  • Modelling epidemic routing with heterogeneous infection rate   Order a copy of this article
    by Peiyan Yuan, Shangwang Liu, En Zhang 
    Abstract: Epidemic routing has been integrated into many applications, ranging from worm propagation in online social networks to message diffusion in offline physical systems. Modelling epidemic routing provides a baseline for evaluating system performance, and it gives engineers theoretical guidance before they deploy a real system. Early works analyse the dynamics of epidemic routing with the average contact rate, i.e., assuming each node encounters the same number of other nodes in a time slot. They neglect the status of encountered nodes (i.e., infected or susceptible), which makes the existing models inaccurate. In this paper, we observe that the infectivity of nodes is heterogeneous rather than homogeneous: two nodes with the same contact rate may exhibit different infectivities. Motivated by this observation, we first use the infection rate to reflect the infectivity of infected nodes. We then model epidemic routing with the average infection rate instead of the contact rate. We finally compare our model with existing works through theoretical analysis and simulations. The results show that our model matches reality more closely than the state-of-the-art works, and it provides an upper bound on the number of infected nodes.
    Keywords: epidemic routing; scaling law; contact rate; infection rate.
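
    A minimal Python sketch of the modelling difference, with invented parameters: the homogeneous model integrates the classic contact-rate equation dI/dt = beta * I * (N - I), while the heterogeneous variant sums per-node infection rates over the currently infected set:

      # Infected population over time: homogeneous contact rate vs.
      # per-node heterogeneous infection rates (all values illustrative).
      import random

      N, BETA, DT, STEPS = 100, 0.001, 0.1, 500

      def homogeneous():
          i = 1.0
          for _ in range(STEPS):               # Euler step of dI/dt
              i += BETA * i * (N - i) * DT
          return i

      def heterogeneous(rates):
          infected = {0}                       # node 0 starts infected
          for _ in range(STEPS):
              p = min(sum(rates[k] for k in infected) * DT, 1.0)
              infected |= {s for s in range(N)
                           if s not in infected and random.random() < p}
          return len(infected)

      random.seed(1)
      rates = [random.uniform(0.0005, 0.0015) for _ in range(N)]
      print(homogeneous(), heterogeneous(rates))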

  • Anonymous hierarchical identity-based encryption with bounded leakage resilience and its application   Order a copy of this article
    by Chengyu Hu, Pengtao Liu, Shanqing Guo, Qiuliang Xu 
    Abstract: Hierarchical identity-based encryption can be used to protect sensitive data in cloud systems. However, as the traditional security model does not capture side-channel attacks, many hierarchical identity-based encryption schemes do not resist this kind of attack, which can exploit various forms of unintended information leakage. In response, leakage-resilient cryptography formalises models of side-channel attacks. In this paper, we consider memory leakage resilience in anonymous hierarchical identity-based encryption schemes. By applying Lewko et al.'s tools, we construct a master-key leakage-resilient anonymous hierarchical identity-based encryption scheme based on dual system encryption techniques. As an interesting application of our scheme, we consider security for public-key encryption with multi-keyword ranked search (PEMKRS) in the presence of secret-key leakage in the trapdoor generation algorithm, and provide a generic construction of leakage-resilient secure PEMKRS from a master-key leakage-resilient anonymous hierarchical identity-based encryption scheme.
    Keywords: side-channel attacks; leakage-resilient; anonymous HIBE; multi-keyword ranked search

  • OGPADSM2: oriented-group public auditing for data sharing with supporting multi-user modification   Order a copy of this article
    by Jianhong Zhang, Xubing Zhao, Weina Zhen 
    Abstract: In most data-sharing protocols for cloud storage, updates to the outsourced data may be executed only by the data owner. This is far from practical owing to the tremendous computational cost it places on the data owner. Until now, there have been few protocols in which multiple cloud users are allowed to update the outsourced data with integrity assurance, and those protocols do not consider collusion attacks between misbehaving cloud servers and revoked users. This is an important challenge for data-sharing protocols. In this paper, we propose a novel group-oriented public auditing protocol for data sharing with multi-user modification. The scheme is provably secure under the bilinear Diffie-Hellman problem, and it resists collusion attacks between the cloud server and revoked users. An improved protocol is then given to increase the efficiency of the auditor's verification: only one pairing operation is required in the auditor's verification phase. Meanwhile, our scheme also supports public checking and efficient user revocation as well as backward security. Compared with other protocols, our scheme has a lower computation cost for the auditor and stronger security.
    Keywords: public auditing; MHT; bilinear Diffie-Hellman problem; collusion attack

  • A probabilistic mix-critical task scheduling algorithm for wireless networked control system   Order a copy of this article
    by Qiang Lin, Guowei Wu, Zihao Song, Chen Xu 
    Abstract: As a special application of real-time systems, a Wireless Networked Control System (WNCS) is a control system wherein the control loops are closed through a wireless communication network. In WNCSs, tasks have different time constraints as well as criticality requirements; how to schedule mixed-criticality tasks is therefore extremely important. However, traditional mixed-criticality task scheduling algorithms for WNCSs usually suffer from long transmission delays and high failure rates, which hinders the performance of WNCSs. In this paper, we propose a Probabilistic Real Time Scheduling algorithm (PRTS) for solving such scheduling problems. We introduce a schedulability analysis scheme for testing whether a group of tasks can be scheduled. The scheduling algorithm PRTS is then developed, aiming to raise the success rate of scheduling and to reduce the real-time transmission delay. Finally, several simulations are conducted to show the advantages of our scheme. The simulation results reveal that PRTS solves the mixed-criticality task scheduling problem better and is feasible for use in WNCSs.
    Keywords: wireless networked control system, scheduling, real time, probabilistic tasks

  • Load-balanced and efficient data collection protocol for wireless sensor networks   Order a copy of this article
    by Jumin Zhao, Deng-ao Li, Haibin Wen, Qingming Tang 
    Abstract: As the state-of-the-art data collection protocol for wireless sensor networks (WSNs), the Collection Tree Protocol (CTP) has been applied in many practical applications. However, it can lead to load imbalance and data congestion because each node chooses the optimal path in CTP. To address the problem of load imbalance, this paper presents and evaluates V-CTP, a novel data collection protocol for WSNs based on CTP. V-CTP takes the number of child nodes into account and uses a virtual metric (v-eetx) to choose a path. We implement V-CTP and evaluate its performance on an indoor testbed with 30 to 100 TelosB nodes. The experimental results show that V-CTP effectively relieves load imbalance while maintaining a data delivery rate as high as CTP's. In addition, V-CTP achieves desirable energy efficiency, prolongs network lifetime and improves network performance.
    Keywords: WSN; CTP; data collection; load imbalance; data congestion; optimal path; V-CTP; data delivery rate; energy efficiency; network life.
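
    A minimal Python sketch of the parent-selection idea: CTP would minimise the path ETX alone, whereas a virtual metric can also penalise a candidate parent's child count. The weight ALPHA and the additive combination are invented here, as the paper's v-eetx formula is not given in the abstract:

      # Hypothetical load-aware parent selection for tree-based collection.
      ALPHA = 0.5                          # invented weight on parent load

      def v_eetx(path_etx, num_children):
          return path_etx + ALPHA * num_children

      candidates = [                       # (parent id, path ETX, child count)
          ("A", 2.1, 8),
          ("B", 2.4, 2),
      ]
      best = min(candidates, key=lambda c: v_eetx(c[1], c[2]))
      print("chosen parent:", best[0])     # B wins despite its worse raw ETX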

  • Maximising resource use and minimising hardware cost in a specific cloud   Order a copy of this article
    by Gao Yang, Li Keqiu 
    Abstract: Cloud computing is a new computing model that provides various services through networks, letting customers obtain resources and services on demand. A large number of storage services can be used to manage customers' needs in the cloud computing environment. Cloud storage services offer users a convenient and reliable way to store and share data from anywhere, on any device, at any time. However, many resources and much energy are wasted in the cloud computing environment, especially in low-I/O-demand cloud systems. We seek to save resources and hardware cost at the same time. Although this is hard in high-I/O-demand systems, we have found a way to do so in low-I/O systems. Maximising resource use is the most popular approach to saving resources and energy in cloud computing. In this paper, we propose to maximise the use of storage resources in the cloud environment. We assign the data stored in the cloud's chunk devices with our data placement algorithms, which run on a central 'decision centre' that holds the status of all devices in the cloud. Our experiments show that minimising disk usage reduces the waste of resources and energy, and that our approach is efficient.
    Keywords: data placement; cloud storage; maximising resource use; cost saving of resources and energies.
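
    The abstract does not spell out the placement algorithm; the classic heuristic behind packing data onto as few active disks as possible is first-fit decreasing, sketched below in Python with invented capacities and chunk sizes:

      # Sketch: pack chunks onto as few disks as possible (first-fit
      # decreasing) so idle disks can be powered down. This is the generic
      # bin-packing heuristic, not the paper's specific algorithm.
      DISK_CAPACITY = 100

      def place(chunks):
          disks = []                           # remaining space per disk
          placement = {}
          for name, size in sorted(chunks.items(), key=lambda c: -c[1]):
              for i, free in enumerate(disks):
                  if size <= free:             # first disk that fits
                      disks[i] -= size
                      placement[name] = i
                      break
              else:                            # open a new disk
                  disks.append(DISK_CAPACITY - size)
                  placement[name] = len(disks) - 1
          return placement, len(disks)

      chunks = {"c1": 60, "c2": 50, "c3": 40, "c4": 30, "c5": 20}
      print(place(chunks))                     # 200 units of data on 2 disks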

  • Study on routing protocol based on traffic density in VANET   Order a copy of this article
    by Bin Feng, Xiangjie Kong, Han Yao, Junyi Li, Jiusheng Peng 
    Abstract: This paper focuses on routing protocols in VANETs. In the traditional GPSR protocol, the lack of neighbour nodes in small areas with low traffic density reduces overall network efficiency. To address this issue, the FS-GPSR protocol, based on traffic density, is proposed. When the vehicle is located in a large area with high traffic density, the classic GPSR protocol is used for routing and forwarding. When the vehicle is located in a small area with low traffic density, the next-hop node is determined using the improved FS-GPSR protocol, which also takes the vehicle's travelling direction and speed into account. Simulation results show that the improved FS-GPSR protocol achieves superior network performance: smaller transmission overhead and a higher data delivery success rate.
    Keywords: VANET; FS-GPSR; routing protocols; traffic density
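
    A minimal Python sketch of the two-mode next-hop choice: greedy geographic forwarding when the neighbourhood is dense, and a score that also rewards neighbours heading towards the destination at speed when it is sparse. The weights and the score form are invented; the paper's exact rule is not given in the abstract:

      # Hypothetical FS-GPSR-style next-hop selection.
      import math

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def next_hop(pos, dest, neighbours, dense, w_dir=30.0, w_speed=0.5):
          if dense:        # classic GPSR: neighbour closest to destination
              return min(neighbours, key=lambda n: dist(n["pos"], dest))
          def score(n):    # sparse: also reward direction and speed
              to_dest = math.atan2(dest[1] - n["pos"][1], dest[0] - n["pos"][0])
              align = math.cos(n["heading"] - to_dest)   # 1 = heading at dest
              return dist(n["pos"], dest) - w_dir * align - w_speed * n["speed"]
          return min(neighbours, key=score)

      nbrs = [{"pos": (50, 10), "heading": 0.0, "speed": 20},
              {"pos": (40, 0),  "heading": 3.1, "speed": 15}]
      print(next_hop((0, 0), (100, 0), nbrs, dense=False)["pos"])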

  • A low-cost and highly integrated control system for lower limb rehabilitation robot   Order a copy of this article
    by Fucheng Cao, Yuanchun Li, Lirong Wang, Zhen Liu 
    Abstract: This paper presents a lower-limb rehabilitation control system designed to train stroke patients by providing variable damping at the knee joint. The hardware is based on the TMS320F28335 digital signal processor (DSP) and the software on the DSP/BIOS embedded operating system. Effective control of the brushless DC (BLDC) motor is achieved by combining high-level impedance control algorithms with an underlying field-oriented control (FOC) algorithm, which improves the performance of the BLDC control system. The integrated control system is robust, dependable and real-time; it is also easy to upgrade and expand.
    Keywords: rehabilitation robot; DSP/BIOS; brushless DC motor; field oriented control; impedance control

  • Optimising the differential trail of the authenticated encryption algorithm Tiaoxin-346   Order a copy of this article
    by Wenying Zhang, Zhen Xiao, Mengzhu Li 
    Abstract: Tiaoxin-346 is a nonce-based authenticated encryption scheme designed by Ivica Nikolić.
    Keywords: Tiaoxin-346, authenticated encryption, CAESAR, AES

  • Advanced distributed architecture for a complex and large scale Arabic handwriting recognition framework   Order a copy of this article
    by Hamdi Hassen, Kay Dörnemann, Maher Khemakhem 
    Abstract: Arabic Handwriting Recognition Systems (AHRS) have been widely used since the early nineties, offering high recognition rates but limited to small and medium quantities of documents. A variety of approaches, methods, algorithms and techniques have been proposed to build a powerful AHRS able to recognise and digitise such documents; unfortunately, these methods do not succeed for large quantities of documents. Reading Arabic handwriting is sometimes hard even for humans. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate when needed. These complementary approaches and techniques are commonly complex and require substantial computing power to reach an acceptable recognition speed. Distributed computing architectures, such as clusters, grid computing, peer-to-peer (P2P) and cloud computing, provide ample computing power and data storage capacity. For these reasons, we believe these technologies can help greatly in building fast and powerful AHRS solutions. This paper describes the design and implementation of a large AHRS solution based on distributed computing architectures. The experiments were conducted on a real grid computing environment, a P2P scheduling architecture and the Amazon Elastic Compute Cloud, with a real large-scale dataset from the IFN/ENIT database. Experimental results confirm that distributed computing environments constitute adequate infrastructures for building a fast and powerful AHRS.
    Keywords: complex and large Arabic handwriting; Arabic handwriting; complementary approaches and techniques; distributed architecture; P2P; grid computing; cloud computing.

  • A new code-based encryption scheme and its applications   Order a copy of this article
    by Jiayong Tang, Fangguo Zhang 
    Abstract: Ever more data must be handled in current network computing. An extremely efficient encryption algorithm can greatly improve the efficiency and security of processing in large-data environments. As a significant post-quantum candidate, the McEliece public-key cryptosystem (PKC) has the remarkable advantage of a very fast and efficient encryption process. In this paper, we put forward a new efficient CPA-secure variant of the McEliece cryptosystem whose advantage is that the plaintext space is enlarged while the ciphertext space is unchanged. We formally prove the security of the scheme; our proof is based on the learning parity with noise (LPN) problem. We also extend our scheme to a CCA-secure cryptosystem and a signcryption scheme.
    Keywords: code-based cryptosystem; McEliece cryptosystem; IND-CPA; IND-CCA; signcryption; cryptography; public key encryption.

  • Revenue maximisation for scheduling deadline-constrained moldable jobs on high performance computing as a service platforms   Order a copy of this article
    by Kuo-Chan Huang, Chun-Hao Hung, Wei Hsieh 
    Abstract: Traditionally, High-Performance Computing (HPC) systems deal with so-called best-effort jobs, which have no deadlines and are scheduled in an as-quick-as-possible manner. Recently, the concept of HPC as a Service (HPCaaS) was proposed, aiming to transform HPC facilities and applications into a more convenient and accessible service model. To achieve that goal, new issues must be explored, such as scheduling jobs with deadlines and maximising the revenue of service providers. This paper presents a reservation-based dynamic scheduling approach for deadline-constrained moldable jobs with the aim of maximising a service provider's revenue. The proposed approach has been evaluated with a series of simulation experiments, whose results indicate that our scheduling approach achieves significantly higher revenue than previous methods. In the experiments, we also explored several research issues, including waiting queue sequencing, processor allocation decisions in time and space, admission control, and partial rescheduling.
    Keywords: moldable jobs; scheduling; deadline-constrained jobs; high performance computing; revenue maximisation; processor allocation
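
    A toy Python sketch of reservation-based admission for moldable jobs, under simplifying assumptions not taken from the paper (perfect speedup, a single reservation window, greedy ordering by revenue): a job is admitted only if some processor allocation lets it finish by its deadline:

      # Toy admission control for deadline-constrained moldable jobs.
      TOTAL_PROCS = 64

      def admit(jobs, now=0.0):
          free = TOTAL_PROCS
          accepted, revenue = [], 0.0
          for job in sorted(jobs, key=lambda j: -j["revenue"]):
              if free == 0:
                  continue
              p = min(free, job["max_procs"])
              finish = now + job["work"] / p       # perfect-speedup runtime
              if finish <= job["deadline"]:        # reservation fits
                  free -= p
                  accepted.append((job["name"], p))
                  revenue += job["revenue"]
          return accepted, revenue

      jobs = [{"name": "A", "work": 120, "max_procs": 16, "deadline": 10, "revenue": 50},
              {"name": "B", "work": 600, "max_procs": 32, "deadline": 12, "revenue": 80},
              {"name": "C", "work": 900, "max_procs": 64, "deadline": 5,  "revenue": 200}]
      print(admit(jobs))    # C and B miss their deadlines; only A is admitted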

  • Ciphertext policy attribute-based encryption with hierarchical domain authorities and users in cloud   Order a copy of this article
    by Jing Wei, Leyou Zhang 
    Abstract: Attribute-based encryption (ABE) can provide fine-grained access control policies for encrypted data in the cloud. Existing works treat all attributes as being at the same level, but in real life attributes and domain authorities often sit at different levels, which imposes additional computation and storage costs on users. To overcome these shortcomings, a new ABE scheme is proposed in this paper. The proposed scheme combines ciphertext-policy attribute-based encryption (CP-ABE) with hierarchical identity-based encryption (HIBE), yielding a ciphertext-policy hierarchical attribute-based encryption (CP-HABE). Both domain authorities and users obtain hierarchical encryption under fine-grained access control. In addition, the proposed scheme inherits flexibility and achieves scalability with short ciphertexts. It also achieves full security in the standard model under three static assumptions, rather than stronger assumptions.
    Keywords: hierarchical attribute-based encryption; fully secure; ciphertext policy

  • Secure and verifiable outsourcing protocol for non-negative matrix factorisation   Order a copy of this article
    by Zhenhua Liu, Bin Li, Qi Han 
    Abstract: With the rapid development of cloud computing services, resource-constrained clients can outsource their expensive computation tasks, such as scientific computations, to untrusted cloud servers. It is essential for these clients to protect their sensitive data and to verify the validity of the returned computation results. In this paper, we focus on an outsourcing protocol for non-negative matrix factorisation, an expensive computation task that has been widely applied to image processing, face recognition, text analysis, and so on. In our proposed protocol, the permutation technique is employed to transform the original problem into a new one so as to protect privacy, and the matrix 1-norm technique is used to verify the result returned from the cloud server, reducing the verification cost. Based on these two techniques, we construct a secure and verifiable outsourcing protocol for non-negative matrix factorisation. Moreover, theoretical analysis and experimental results show that our proposed protocol brings great computation savings for resource-constrained clients and fulfils the goals of correctness, security, verifiability and high efficiency.
    Keywords: cloud computing; permutation; outsourcing; cloud security; verifiability; matrix factorisation
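
    A minimal Python sketch of the permute-then-outsource pattern with a 1-norm check, using scikit-learn's NMF as a stand-in for the cloud's solver; the permutation-based disguise and the acceptance threshold below are illustrative, not the paper's exact transformation:

      # Client hides A by secret row/column permutations, the "cloud"
      # factorises the disguised matrix, the client undoes the permutations
      # and verifies the residual with a relative 1-norm.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      A = rng.random((20, 15))

      P = rng.permutation(20)                  # secret row permutation
      Q = rng.permutation(15)                  # secret column permutation
      A_disguised = A[P][:, Q]

      model = NMF(n_components=5, init="random", random_state=0, max_iter=500)
      W = model.fit_transform(A_disguised)     # done by the cloud
      H = model.components_

      W_real = W[np.argsort(P)]                # undo permutations locally
      H_real = H[:, np.argsort(Q)]
      err = np.abs(A - W_real @ H_real).sum() / np.abs(A).sum()  # 1-norm ratio
      print("accept" if err < 0.2 else "reject", round(err, 3))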

  • Comparable encryption scheme supporting multiple users in cloud computing   Order a copy of this article
    by Jun Ye, Meixia Miao, Peng Chen, Xiaofeng Chen 
    Abstract: Cloud computing provides a way to share data securely, where data should be converted into ciphertext before being uploaded to the cloud servers. At the same time, we often need to compare the data. Although privacy is enhanced when data is encrypted, comparing encrypted data is inconvenient, especially data encrypted under different keys. Techniques for secure ciphertext comparison are becoming a hot research topic in both academic research and industry practice. However, to the best of our knowledge, existing schemes can only compare a single user's data encrypted under a single key. In this paper, we design the first comparable encryption scheme for multiple users and provide a novel token transformation technique. In our scheme, ciphertexts encrypted under different keys can be compared with each other using our new algorithm and token transformation method. In addition, the security and verifiability of our scheme guarantee that users can easily verify the comparison result and that the value of the data is not revealed. Finally, our scheme is flexible and can be widely used in cloud computing.
    Keywords: comparable encryption; multiple users; token transformation; security.

  • Estimating data stream tendencies to adapt clustering parameters   Order a copy of this article
    by Marcelo Keese Albertini, Rodrigo Mello 
    Abstract: A wide range of applications based on the processing of data streams have emerged in the last decade. They require specialised techniques to obtain representative models and extract information. Traditional data clustering algorithms have been adapted to handle continuously arriving data by updating the current model. Most data stream clustering algorithms aggregate new data into models according to parameters usually set by users, and problems arise when choosing the values of these parameters. When the phenomenon under study is stable, an analysis of a sample of the data stream or a priori knowledge can be used. However, when the behaviour changes over the course of collection, parameters become obsolete and, consequently, performance degrades. In this paper we study the problem of automatically adapting the control parameters of data stream clustering algorithms. We introduce a novel approach that estimates data tendencies in order to automatically modify control parameters, and we present a proof of the convergence of our approach towards an ideal and unknown value of the control parameter. Experimental results confirm that estimating data tendencies improves the parametrisation of clustering.
    Keywords: big data; data clustering; data stream; data sequence; adaptive clustering; data analysis
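
    A minimal Python sketch of the general idea, with an invented tendency estimator: track the stream's typical point-to-point distance with an exponentially weighted moving average and let a distance-threshold parameter follow it, so the parameter does not go stale when the stream drifts:

      # Adapt a stream-clustering distance threshold from data tendency.
      ALPHA = 0.1        # EWMA smoothing (invented)
      SCALE = 1.5        # threshold = SCALE * typical distance (invented)

      def adapt_threshold(stream):
          ewma, prev, thresholds = None, None, []
          for x in stream:
              if prev is not None:
                  d = abs(x - prev)
                  ewma = d if ewma is None else (1 - ALPHA) * ewma + ALPHA * d
                  thresholds.append(SCALE * ewma)
              prev = x
          return thresholds

      # stream whose scale drifts: the parameter follows the change
      stream = [1.0, 1.2, 0.9, 1.1] + [10.0, 12.0, 9.5, 11.0]
      print([round(t, 2) for t in adapt_threshold(stream)])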

  • Fully secure hierarchical inner product encryption for privacy preserving keyword searching in cloud   Order a copy of this article
    by Leyou Zhang, Zhuanning Wang, Yi Mu, Yupu Hu 
    Abstract: Cloud computing provides dynamically scalable resources provisioned as a service over networks. But an untrustworthy Cloud Service Provider (CSP) is a big obstacle to the adoption of cloud services, since a CSP can access data in the cloud without the data owner's permission. Hierarchical Inner Product Encryption (HIPE) covers the applications of anonymous encryption, fully private communication and search on encrypted data, and provides a trusted data access control policy towards the CSP. However, existing works achieve either selective attribute-hiding or adaptive attribute-hiding under strong assumptions in the public-key setting; these schemes support only a limited class of functions and cannot protect the privacy of the query. To overcome these shortcomings, a novel HIPE scheme in the private-key setting is proposed. The scheme achieves full security, with a reduction to the Decisional Linear (DLIN) assumption in the standard model. It has O(l)-size secret keys and O(n)-size ciphertexts, and incurs high computational complexity; therefore, a variant of the basic scheme is presented with the same security but shorter ciphertexts.
    Keywords: cloud security, searchable encryption, HIPE, DLIN assumption

  • Improved pre-copy algorithm using statistical prediction and compression model for efficient live memory migration   Order a copy of this article
    by Minal Patel, Sanjay Chaudhary, Sanjay Garg 
    Abstract: Cloud computing provides on-demand network access to computing resources such as networks, servers, storage, etc. The core of the cloud is virtualisation, which runs multiple instances of different operating systems simultaneously. Virtual machines can be moved between physical hosts using the migration feature of virtualisation; migration of a virtual machine is useful for load balancing, energy efficiency, fault tolerance, etc. The Xen hypervisor executes and migrates guests on different architectures using a pre-copy algorithm. There are three major ways to improve pre-copy live migration: i) reducing dirty pages; ii) predicting dirty pages; iii) compressing memory pages. Methods based on reducing dirty pages can degrade performance, so this paper proposes a combined approach that includes both prediction and compression. Dirty pages are predicted during migration with an ARIMA (Auto-Regressive Integrated Moving Average) model to avoid sending them repeatedly: the model treats the low or high dirtying rate as a time series expressed as a linear combination of its past values and error terms. For the compression model, an LRU (Least Recently Used) stack-distance-based delta compression algorithm is proposed to achieve efficient virtual machine migration. The results show that the ARIMA-based model attains 93% prediction accuracy in a high-dirty-page environment. Compared with Xen's pre-copy algorithm, the combined approach reduces downtime by 19.16% and total migration time by 10.76% on average.
    Keywords: virtualisation; pre-copy; ARIMA; LRU; delta compression
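
    A minimal sketch of the prediction step with statsmodels' ARIMA; the per-round counts, the (p, d, q) order and the cut-off are illustrative, not taken from the paper:

      # Forecast the next pre-copy round's dirty-page count with ARIMA.
      # If the forecast stays high, repeatedly re-sending those pages can
      # be deferred instead of re-sent every round.
      from statsmodels.tsa.arima.model import ARIMA

      dirty_pages = [520, 480, 510, 650, 700, 690, 720, 710]  # per round

      model = ARIMA(dirty_pages, order=(1, 1, 1)).fit()
      forecast = model.forecast(steps=1)[0]
      print(f"predicted dirty pages next round: {forecast:.0f}")

      HIGH = 600                                   # invented cut-off
      if forecast > HIGH:
          print("high-dirty regime: defer hot pages instead of re-sending")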

  • Fail silence mechanism for dependable vehicular communications   Order a copy of this article
    by João Almeida, Joaquim Ferreira, Arnaldo Oliveira 
    Abstract: This paper presents a fault-tolerant architecture to improve the dependability of infrastructure-based vehicular networks. For that purpose, a fail-silence enforcement mechanism for road-side units (RSUs) was designed, implemented and tested. Vehicular communications based on IEEE 802.11p are inherently non-deterministic. The presence of RSUs and a backhauling network adds a degree of determinism that helps to enforce real-time behaviour and dependability, both by providing global knowledge and by supporting the operation of collision-free deterministic MAC protocols. One such protocol is V-FTT, for which the proposed mechanism was designed as a case study. Note, however, that the mechanism is protocol-independent and can be adapted to any wireless communication system. The proposed technique validates a frame for transmission by identifying faults in both the value and time domains. Experimental results show that the fail-silence enforcement mechanism has low latency and consumes few FPGA resources.
    Keywords: fail silence mechanism; fault-tolerant architecture; vehicular networks; dependable wireless communications; intelligent transportation systems

  • A revocable certificateless encryption scheme with high performance   Order a copy of this article
    by Sun Yinxia, Zhang Zhuoran, Shen Limin 
    Abstract: Featuring no key escrow and no certificates, certificateless public key cryptography (CLPKC) has received much attention. An essential problem in CLPKC is how to revoke a compromised or misbehaving user, and it still lacks an efficient solution: existing methods are impractical owing to either enormous computation (and secret channels) or a costly mediator. In this paper, we present a new approach to revocation in CLPKC and give a revocable certificateless encryption (RCLE) scheme. In the new scheme, a user's private key contains three parts: an initial partial private key, a time key and a secret value. The time key is updated at every time period and is transmitted over a public channel. Our construction offers higher performance than previous solutions. Based on the bilinear Diffie-Hellman problem, our RCLE scheme is provably secure in the random oracle model.
    Keywords: revocation; certificateless encryption; Bilinear Diffie-Hellman problem; random oracle model

  • Towards distributed acceleration of image processing applications using reconfigurable active SSD clusters: a case study of seismic data analysis   Order a copy of this article
    by Mageda Sharafeddin, Hmayag Partamian, Mariette Awad, Mazen A. R. Saghir, Haitham Akkary, Hassan Artail, Hazem Hajj, Mohammed Baydoun 
    Abstract: We propose a high-performance distributed system that consists of several middleware servers (MWS), each connected to a number of FPGAs with extended solid-state storage that we call reconfigurable active solid-state device (RASSD) nodes. An MWS manages a group of RASSD nodes and bridges the connection between a client and the RASSD nodes within a collaborative distributed environment. We provide a full data communication solution between the middleware and RASSD nodes. Because of the high levels of parallelism possible with the configurable fabric and the extended storage obtained by attaching SSDs to FPGAs, RASSD nodes in our system cooperate to deliver unprecedented speedup in image data analysis. In this work we use seismic data analysis as a case study to quantify how, and by how much, RASSD nodes can accelerate computational throughput. We present the speedup of seismic data prediction time when Gray Level Co-occurrence Matrix (GLCM) calculations are accelerated, and discuss the impact of data communication on speedup. When both GLCM and Haralick features are accelerated, the data traffic is reduced and our system achieves a 102× speedup.
    Keywords: active solid state devices, distributed systems, FPGA, GLCM, Haralick attributes, intelligent systems, machine learning, parallel architectures, reconfigurable computing, seismic data analysis.
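
    For reference, the computation being accelerated can be stated in a few lines of NumPy; this is the textbook GLCM for a single horizontal offset plus the Haralick contrast feature, not the paper's FPGA kernel:

      # Grey-level co-occurrence matrix for offset (0, 1) and the Haralick
      # "contrast" feature.
      import numpy as np

      def glcm(img, levels):
          m = np.zeros((levels, levels), dtype=np.int64)
          left, right = img[:, :-1], img[:, 1:]    # horizontal neighbours
          np.add.at(m, (left.ravel(), right.ravel()), 1)
          return m

      def contrast(m):
          p = m / m.sum()                          # normalise to probabilities
          i, j = np.indices(p.shape)
          return ((i - j) ** 2 * p).sum()

      img = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 2, 2, 2],
                      [2, 2, 3, 3]])
      g = glcm(img, levels=4)
      print(g, "contrast:", contrast(g))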

  • Distributed predictive performance anomaly detection for virtualised platforms   Order a copy of this article
    by Ali imran Jehangiri, Ramin Yahyapour, Edwin Yaqub, Philipp Wieder 
    Abstract: Predicting subsequent values of Quality of Service (QoS) properties is a key component of autonomic solutions. Predictions help in the management of cloud-based applications by preventing QoS breaches from happening. The huge amount of monitoring data generated by cloud platforms motivates the application of scalable data mining and machine learning techniques to predicting performance anomalies. Building prediction models individually for thousands of Virtual Machines (VMs) requires a robust, generic methodology with minimal human intervention. In this work, we focus on these issues and present three main contributions. First, we compare several time series modelling approaches to establish their predictive capabilities. Second, we propose estimation-classification models that augment machine learning classification methods (random forest, decision tree, support vector machine) by combining them with time series analysis methods (AR, ARIMA and ETS). Third, we show how data mining techniques in conjunction with the Hadoop framework can be a useful, practical and inexpensive method for predicting QoS attributes.
    Keywords: cloud; performance prediction; monitoring; QoS; analytics; distributed time-series database
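
    A minimal Python sketch of the estimation-classification pattern on synthetic data: a time-series model (here simple exponential smoothing, standing in for AR/ARIMA/ETS) forecasts the next metric value, and a random forest classifies the forecast-extended window. The data, window size and labelling rule are invented:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from statsmodels.tsa.holtwinters import SimpleExpSmoothing

      rng = np.random.default_rng(0)
      cpu = np.clip(50 + 5 * rng.standard_normal(300), 0, 100)
      cpu[250:] += 35                              # injected anomalous regime

      # windows of 5 samples, labelled anomalous when their mean exceeds 80
      X = np.array([cpu[i:i + 5] for i in range(295)])
      y = (X.mean(axis=1) > 80).astype(int)
      clf = RandomForestClassifier(random_state=0).fit(X[:280], y[:280])

      # estimation: forecast the next value from the latest window;
      # classification: label the forecast-extended window
      window = cpu[-5:]
      forecast = SimpleExpSmoothing(window).fit().forecast(1)[0]
      pred = clf.predict([np.append(window[1:], forecast)])[0]
      print("forecast:", round(float(forecast), 1), "anomaly" if pred else "ok")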

  • A GPU parallel optimised Blockwise NLM algorithm in a distributed computing system   Order a copy of this article
    by Salvatore Cuomo, Ardelio Galletti, Livia Marcellino 
    Abstract: Advanced computing systems are now widely adopted to process huge amounts of biomedical data in the e-health field, and an interesting challenge is to perform real-time diagnosis in complex computational environments. In this paper, we show how to handle the most computationally expensive processing steps of a distributed cloud e-health system using Graphics Processing Units (GPUs). For the case study of magnetic resonance imaging (MRI), to improve denoising quality and support real-time diagnosis, we have implemented a GPU-parallel algorithm based on the Optimized Blockwise Non-Local Means (OB-NLM) method. Experimental results show a significant improvement in execution time for healthcare processing.
    Keywords: cloud systems; e-health; MRI denoising; non-local means; GPU computing
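
    As a CPU reference for what the GPU parallelises, here is plain (pixelwise, not blockwise) non-local means for a single pixel in NumPy; the patch size, search window and filtering parameter h are illustrative:

      # Non-local means for one pixel: average search-window pixels
      # weighted by the similarity of their surrounding patches.
      import numpy as np

      def nlm_pixel(img, y, x, patch=1, search=5, h=0.1):
          p0 = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
          num = den = 0.0
          for j in range(y - search, y + search + 1):
              for i in range(x - search, x + search + 1):
                  p = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
                  w = np.exp(-np.sum((p0 - p) ** 2) / h**2)  # patch similarity
                  num += w * img[j, i]
                  den += w
          return num / den

      rng = np.random.default_rng(0)
      img = np.clip(0.5 + 0.05 * rng.standard_normal((32, 32)), 0, 1)
      print(round(nlm_pixel(img, 16, 16), 4))      # denoised centre pixel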

  • Active vehicle: a new approach integrating AmI technology for trust management in VANET   Order a copy of this article
    by Amel Ltifi 
    Abstract: Ambient intelligence (AmI) is a new concept that interests researchers in many domains and calls for highly innovative services such as vehicular ad hoc networks (VANETs). Concerning vehicle trust management, the majority of proposed solutions depend on roadside equipment, and investing in a scalable infrastructure for all roads is unreasonable; road security should therefore be managed by the vehicles themselves. Hence, we introduce the concept of the 'active vehicle', which combines the power of ambient intelligence with V2V technologies. In this paper, we propose a new secure communication model between active vehicles for alert spreading. In this model, traffic accident alerts are verified according to the trust level of the sender, and a trust management system is responsible for cutting off the spread of false warnings through the network.
    Keywords: active vehicle; ambient communication; AmI; cooperation; cluster; security; trust management; VANET; V2V

  • Mining frequent itemsets over uncertain data streams   Order a copy of this article
    by Huiting Liu, Kaishen Zhou, Peng Zhao, Sheng Yao 
    Abstract: In recent years, owing to the wide applications of sensor network monitoring, RFID, moving object search and LBS, mining frequent itemsets over uncertain data streams has attracted much attention. However, existing hyper-structure-based algorithms cannot achieve high mining accuracy. In this paper, we present two sliding-window-based false-positive-oriented algorithms, called UFIM (Uncertain data stream Frequent Itemsets Mining) and UFIMTopK, to find threshold-based and rank-based frequent itemsets from uncertain data streams efficiently. UFIM uses a global GT-tree to maintain frequent itemsets in the sliding window and outputs them when needed. In addition, an efficient deleting strategy is designed to reduce time overhead. UFIMTopK is designed to find top-k frequent itemsets, and it is modified from UFIM. Experimental results show that our proposed algorithm UFIM can obtain higher mining accuracy than previous algorithms on synthetic and real-life datasets.
    Keywords: frequent itemsets; uncertain data streams; sliding window
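
    The quantity such algorithms maintain is the expected support of an itemset over the current window. A minimal Python sketch with invented probabilities, assuming independent existential probabilities per item (the standard uncertain-data model), not UFIM's tree structure:

      # Expected support of an itemset over a sliding window of uncertain
      # transactions: P(itemset in t) is the product of its items'
      # existential probabilities (independence assumed).
      import math
      from collections import deque

      WINDOW, MIN_ESUP = 3, 0.8

      def expected_support(window, itemset):
          return sum(math.prod(t.get(i, 0.0) for i in itemset) for t in window)

      window = deque(maxlen=WINDOW)
      stream = [{"a": 0.9, "b": 0.8}, {"a": 0.5},
                {"a": 0.7, "b": 0.9}, {"b": 0.4}]
      for t in stream:
          window.append(t)                 # old transactions slide out
          esup = expected_support(window, ("a", "b"))
          print(round(esup, 3), "frequent" if esup >= MIN_ESUP else "infrequent")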

  • A semantic similarity calculation model for event ontology mapping   Order a copy of this article
    by Xu Wang, Wei Liu, Yue Tan 
    Abstract: Ontology mapping is a useful solution for matching semantics between ontologies or schemas that were designed independently of each other. Traditional ontologies mainly describe concepts and the hierarchical relations between them, which easily causes the 'tennis problem' and hurts the accuracy of concept mapping, especially when integrating data in event-centred domains. In this paper, we define event ontology mapping and then propose a semantic similarity calculation model for it, which enables mapping between event-based information with richer semantics, in particular semantic mapping between event elements in different event classes. Experiments show that the proposed model can effectively identify the semantic relations between event classes in two event ontologies.
    Keywords: ontology mapping, event ontology, event semantic similarity

  • Lattice-based ring signature scheme under the random oracle model   Order a copy of this article
    by Shangping Wang, Ru Zhao, Yaling Zhang
    Abstract: Building on the lattice-based signature scheme without trapdoors proposed by Lyubashevsky in 2012, we present a new lattice-based ring signature scheme that extends the original signature scheme. We prove that our scheme is strongly unforgeable against adaptive chosen-message attacks in the random oracle model, and, via rejection sampling, that its security reduces to the hardness of the small integer solution (SIS) problem. Compared with existing lattice-based ring signature schemes, our new scheme is more efficient and has a shorter signature length.
    Keywords: lattice; ring signature; random oracle model; unforgeability

  • Robust Student's-t mixture modelling via Markov random field and its application in image segmentation   Order a copy of this article
    by Taisong Xiong, Yuanyuan Huang, Xin Luo 
    Abstract: The finite mixture model has been widely applied to image segmentation. However, it does not consider the spatial information in images, which leads to unsatisfactory segmentation results. To address this problem, this paper proposes a Student's-t mixture model for image segmentation based on a Markov random field. The proposed model has three advantages. Firstly, it represents the spatial relationships among pixels. Secondly, the Student's-t distribution, with its heavy tails, is chosen instead of the Gaussian distribution as the component function. Thirdly, the parameters of the model are inferred with the gradient descent method. Comprehensive experiments on noisy grey-scale images and real-world colour images demonstrate the effectiveness and robustness of the proposed model.
    Keywords: Student's-t mixture model; Markov random field; image segmentation; gradient descent
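
    A minimal Python sketch of the E-step responsibilities for a two-component Student's-t mixture on pixel intensities, showing how the heavy tails damp the influence of outliers; the MRF spatial prior and the gradient-descent inference of the paper are omitted, and all parameter values are invented:

      import numpy as np
      from scipy.stats import t as student_t

      pixels = np.array([0.1, 0.12, 0.9, 0.88, 3.0])   # 3.0 is an outlier
      weights = np.array([0.5, 0.5])
      locs, scales, dof = [0.1, 0.9], [0.05, 0.05], 3  # heavy tails (nu = 3)

      # responsibilities r[k, i] proportional to weight_k * t_pdf(pixel_i)
      pdf = np.array([student_t.pdf(pixels, dof, loc=m, scale=s)
                      for m, s in zip(locs, scales)])
      r = weights[:, None] * pdf
      r /= r.sum(axis=0)
      print(np.round(r, 3))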

  • Towards workload-aware cloud resource provisioning using a multi-controller fuzzy switching approach   Order a copy of this article
    by Amjad Ullah, Jingpeng Li, Amir Hussain 
    Abstract: Elasticity enables cloud customers' applications to dynamically adjust the underlying cloud resources to their needs, minimising infrastructure cost while satisfying performance goals. Over the past few years, a plethora of techniques have been introduced to implement elasticity. Control theory is one such technique: it offers a systematic method for designing feedback controllers, and a number of recent proposals use feedback controllers to guarantee the QoS of systems deployed in the cloud. Many of these rely on a single controller, whether adaptive or fixed. However, for systems operating in time-varying and unpredictable conditions, it is difficult for such approaches to comply with the stated performance goals at all times. Systems deployed in the cloud are subject to unpredictable workloads that vary from time to time; for example, an e-commerce website may face higher workloads than normal during festivals or promotions. This paper exploits a recently developed multi-controller approach, where each controller is designed for one operating region, and uses fuzzy logic to qualitatively select the most suitable controller at run time based on the system's current behaviour. An initial experimental evaluation against the conventional single-controller approach demonstrates that our method enhances the ability of an elastic application to comply with its performance goals.
    Keywords: cloud elasticity; control theory; multi-controller; fuzzy logic; auto-scaling; dynamic resource provisioning
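
    A minimal Python sketch of run-time controller selection with fuzzy membership: triangular membership functions over the request rate score each per-region controller, and the controller with the highest membership drives scaling. The regions, shapes and controller names are invented:

      def triangle(x, a, b, c):
          # triangular membership: 0 outside (a, c), peak 1 at b
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      REGIONS = {                          # controller -> (a, b, c) in req/s
          "low-load":  (0,    0,    500),
          "moderate":  (300,  800,  1500),
          "high-load": (1000, 2000, 10_000),
      }

      def pick_controller(req_rate):
          scores = {name: triangle(req_rate, *abc)
                    for name, abc in REGIONS.items()}
          return max(scores, key=scores.get)

      for rate in (100, 700, 1200, 3000):
          print(rate, pick_controller(rate))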

  • Exploiting node mobility for fault management in RPL-based wireless sensor networks   Order a copy of this article
    by Djamila Bendouda, Lynda Mokdad, Hafid Haffaf 
    Abstract: This paper proposes an effective new local repair technique for the RPL protocol (IPv6 Routing Protocol for Low-Power and Lossy Networks) in wireless sensor networks (WSNs), using a mobile node to replace a failed node in three cases of failure position. RPL was originally designed for static networks, with no support for mobility, so handling local repair in RPL with mobile nodes is a real challenge. Replacing failed nodes through the mobility of their predecessor, without rebuilding the RPL tree, avoids triggering a rebuild of the Destination Oriented Directed Acyclic Graph (DODAG) and ensures continuity of service while collecting and transmitting data in the monitored environment. Our work is the first to implement fault management using local repair by a mobile node with the RPL protocol (MN-LR_RPL). The MN-LR_RPL technique is presented as an algorithm and simulated with the COOJA simulator on the Contiki operating system. Its performance is evaluated for different network settings in terms of control traffic, delay, energy consumption and packet delivery ratio (PDR, %). The results show that MN-LR_RPL greatly improves on the standard specification in the case of local repair, mainly in terms of packet loss ratio and average network latency. Based on our new method, we offer suggestions for applying MN-LR_RPL in vehicular ad hoc network (VANET) applications, and for using Wireless Software-Defined Networking (WSDN) to improve the RPL protocol in the Internet of Things.
    Keywords: WSN; VANETs; fault management; local repair; mobile node; RPL protocol; Contiki; Cooja

  • VKSE-DO: verifiable keyword search over encrypted data for dynamic data-owner   Order a copy of this article
    by Yinbin Miao, Jianfeng Ma, Fushan Wei, Kai Zhang, Zhiquan Liu, Xu An Wang 
    Abstract: With the popularity of cloud storage, search over encrypted data enables data-owners to securely store and efficiently retrieve ciphertext in a privacy-preserving way. In practice, however, the cloud service provider (CSP) is a semi-honest-but-curious entity that may execute only a fraction of the search operations and return partial or incorrect search results to save computation resources. A searchable encryption (SE) scheme should therefore provide a result verification mechanism to guarantee the correctness of search results. To the best of our knowledge, most existing SE schemes do not consider the dynamic data-owner scenario, and they incur extra computational overhead when updating the whole ciphertext. To this end, we devise a secure and efficient cryptographic primitive: verifiable keyword search over encrypted data for dynamic data-owners. Formal security analysis proves that our scheme is secure against chosen keyword attack without random oracles. As a further contribution, experimental results on a real-world dataset show its efficiency in practice.
    Keywords: cloud storage; result verification; dynamic data-owner; chosen keyword attack.

  • Acceleration of sequential Monte Carlo for computationally intensive target distribution by parallel lookup table and GPU computing   Order a copy of this article
    by Di Zhao 
    Abstract: Sequential Monte Carlo (SMC) is the key solver in applications such as object tracking, signal processing and statistical distribution approximation. However, if the target distribution is complicated, conventional SMC is too slow to satisfy the real-time requirements of these applications. In this paper, using the novel idea of a GPU-based lookup table (GPU-LTU), we develop an acceleration method for conventional SMC (GPU-LTU accelerated SMC) and illustrate its efficiency on a statistical approximation problem. Computational results show that GPU-LTU accelerated SMC cuts the solution time of conventional SMC from hours to seconds.
    Keywords: sequential Monte Carlo, parallel lookup table, GPU computing, statistical distribution approximation
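
    A minimal CPU sketch of the lookup-table idea in NumPy (the paper places the table on the GPU): the expensive target density is tabulated once on a grid, and each SMC iteration weighs and resamples particles via O(1) table lookups instead of fresh evaluations. The target, grid and particle counts are illustrative:

      import numpy as np

      def expensive_target(x):                 # stand-in for a costly density
          return np.exp(-0.5 * x**2) * (1 + 0.3 * np.sin(5 * x))

      GRID = np.linspace(-5, 5, 10_001)
      TABLE = expensive_target(GRID)           # built once, reused every step

      def target_lookup(x):                    # O(1) per particle
          idx = np.clip(np.searchsorted(GRID, x), 0, GRID.size - 1)
          return TABLE[idx]

      rng = np.random.default_rng(0)
      particles = rng.normal(0, 3, size=5000)
      for _ in range(10):                      # SMC iterations
          particles += rng.normal(0, 0.2, size=particles.size)  # propagate
          w = target_lookup(particles)
          w /= w.sum()
          particles = rng.choice(particles, size=particles.size, p=w)  # resample
      print(round(particles.mean(), 3), round(particles.std(), 3))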

  • HtComp: bringing reconfigurable hardware to future high-performance applications   Order a copy of this article
    by Alessandro Cilardo 
    Abstract: Current trends in computer architecture are increasingly moving towards heterogeneous platforms, now including FPGAs as first-class components enabling unprecedented levels of performance and power-efficiency. Programming such next-generation machines is however extremely difficult as it requires architecture-specific code and low-level hardware design. This paper describes the main outcomes of the HtComp project, a two-year research programme aimed at exploring methodologies and tools allowing the automated generation of FPGA-based accelerators from high-level applications written in traditional software languages. In particular, the paper describes the main contributions brought by the project, covering the generation of hardware systems from high-level parallel code, the performance-oriented optimisation of memory architectures tailored on the application access patterns, as well as the automated definition of application-driven special-purpose on-chip interconnects. Overall, the above innovations contributed to creating a viable path allowing generic software developers to access tomorrow's hardware-accelerated high-performance platforms with minimum development overheads.
    Keywords: HPC; FPGA; design automation; programming models

  • Enhanced e-learning system performance with a cloud and crowd oriented approach   Order a copy of this article
    by Chun-Hsiung Tseng, Ching-Lien Huang, Yung-Hui Chen, Chu-Chun Chuang, Han-Ci Syu, Yan-Ru Jiang, Fang-Chi Tsai, Pin-Yu Su, Jun-Yan Chen
    Abstract: The flipped classroom is a very popular emerging teaching methodology and is highly emphasised in education today. Several e-learning systems focus on the flipped classroom concept; for example, with the Moodle e-learning system, tutors prepare their materials in advance and ask students to read them. The basic concept of the flipped classroom is to have students learn by themselves before attending a real class at school. Once this background learning takes place outside class time, tutors have free time to lead students in higher-order thinking. However, as shown in the report of Katie Ash (9), the performance of the flipped classroom method is in fact still arguable. Our survey shows that the contents offered by most modern e-learning systems are relatively static, while new information appears on the web very quickly. Teachers, as material providers, can of course upload new contents to e-learning systems, but creating contents requires effort. Teachers today are required to perform many administrative tasks, such as recruiting students, and must also devote effort to academic-industrial cooperative projects and research. Their workload is already heavy, so expecting teachers to update contents very frequently is not practical. Hence, we believe that one of the major challenges faced by e-learning systems today is the richness of contents. Moreover, crowd intelligence is usually considered helpful for enhancing learning performance. In this research, a system that uses both cloud materials and crowd intelligence is proposed, and an experiment to validate the system is included.
    Keywords: e-learning; crowd intelligence

  • Broker-based resource management in dynamic multi-cloud environment   Order a copy of this article
    by Naidila Sadashiv, S. M. Dilip Kumar 
    Abstract: Cloud computing is being widely embraced by small, medium and large business organisations to host interactive web-based applications, as it provides less restricted services than the classical computing approach. However, providing uninterrupted service at an economical price with efficient use of resources is a challenge for cloud service providers, especially when serving users spread across the globe. Services from many different clouds can be combined to address resource availability issues and deliver the desired QoS. This paper presents a resource management approach for deploying three-tier applications over a broker-based multi-cloud environment. Strategies for quick cloud site selection, dynamic resource adaptation, and two-level load balancing with high availability are considered as part of this approach. Experiments are carried out on an extended CloudSim simulator using realistic session workloads synthesised from different statistical distributions. Performance evaluation reveals that these strategies improve resource use, throughput and SLA compliance, even under varying workload scenarios.
    Keywords: cloud computing; resource management; multi-cloud environment; cloud site selection; dynamic adaptation; fair share load balancer; broker.

  • An efficient traceable data-sharing scheme in cloud computing for mobile devices   Order a copy of this article
    by Zhiying Wang, Jun Ye, Jianfeng Wang 
    Abstract: Owing to the convenience of mobile devices and the advantages of cloud computing, sharing data efficiently from mobile devices in cloud computing has become common practice. However, it may be insecure to upload plaintext directly to a remote cloud server; the data should instead be encrypted with a secure encryption scheme. Most existing data-sharing schemes in cloud computing are either insecure against collusion attacks or suffer from a large computational overhead. In this paper, an efficient data-sharing scheme in cloud computing for mobile devices is proposed, based on convergent encryption and polynomial reconstruction techniques. Moreover, pre-decryption is used to further reduce the computational overhead on mobile devices. Finally, we use traitor tracing to track users who have leaked private keys for their own benefit. Experimental results indicate that the proposed scheme is extremely suitable for mobile devices.
    Keywords: data sharing; mobile cloud computing; convergent encryption; polynomial reconstruction; traitor tracing.
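    For illustration only, a minimal sketch of the convergent-encryption building block named in the abstract, not the authors' exact construction: the key is derived from the file content itself, so identical plaintexts yield identical ciphertexts and can be deduplicated by the server. The function names are hypothetical and the third-party cryptography package is assumed.
```python
# Minimal convergent-encryption sketch (illustrative only): the key is the
# SHA-256 digest of the plaintext, and the nonce is derived deterministically,
# so equal plaintexts produce equal ciphertexts and are deduplicable.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext):
    key = hashlib.sha256(plaintext).digest()     # content-derived 256-bit key
    nonce = hashlib.sha256(key).digest()[:12]    # deterministic 96-bit nonce
    return key, AESGCM(key).encrypt(nonce, plaintext, None)

def convergent_decrypt(key, ciphertext):
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```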

  • Malicious URL detection with feature extraction based on machine learning   Order a copy of this article
    by Baojiang Cui, Shanshan He, Xi Yao, Peilin Shi 
    Abstract: Many web applications suffer from various web attacks due to the lack of awareness concerning security. To create a web attack, executable code is usually embedded into a URL. Therefore, it is necessary to improve the reliability and security of web applications by accurately detecting malicious URLs. In this paper, statistical analyses based on gradient learning and feature extraction using sigmoidal threshold level are combined to propose a new detection approach based on machine learning. Twenty-one features from five different dimensions are extracted by analysing the characteristics to distinguish benign from malicious behaviour. Moreover, the na
    Keywords: malicious URLs; feature selection; machine learning; multiple algorithms.
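    The abstract does not enumerate the twenty-one features across five dimensions, so the sketch below invents a handful of plausible lexical URL features purely for illustration; the feature names and choices are assumptions, not the paper's feature set.
```python
# Hypothetical lexical URL features of the kind such detectors extract.
from urllib.parse import urlparse

def url_features(url):
    p = urlparse(url)
    return {
        'url_length': len(url),
        'host_length': len(p.netloc),
        'path_depth': p.path.count('/'),
        'num_digits': sum(c.isdigit() for c in url),
        'num_special': sum(c in "%@&=?-_~" for c in url),
        'has_ip_host': p.netloc.replace('.', '').isdigit(),
        'num_params': len(p.query.split('&')) if p.query else 0,
    }
```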

  • An efficient symmetric searchable encryption scheme for dynamic dataset in cloud computing paradigms   Order a copy of this article
    by Minghao Zhao, Han Jiang, Zhen Li, Qiuliang Xu, Hao Wang, Shaojing Li 
    Abstract: Searchable encryption is a significant cryptographic primitive to ensure storage security and data privacy in the cloud computing environment. It allows a client to store a collection of encrypted documents on a server and, later, to perform keyword-based searches and retrieve documents according to specific search criteria, while ensuring that minimal information is revealed to the server. Early research on searchable encryption mainly focused on efficiency, security and query expressiveness; nowadays, studies are paying attention to searchable encryption that supports dynamic dataset updates. In this paper, we propose a new dynamic symmetric searchable encryption scheme. In terms of efficiency, the complexity of the search algorithm is O(1), while file addition and deletion are O(m'n) and O(N), respectively (here m' is the number of keywords in a document, N the number of document-keyword pairs and n the size of the keyword dictionary), which indicates that the overall efficiency is superior to existing schemes; in terms of security, this scheme can resist selective keyword attacks and, compared with former schemes, achieves less information leakage.
    Keywords: cloud computing security; dynamic; searchable encryption; security and privacy-preserving.

  • Modelling and coordinating multisource distributed power system models in service oriented architecture   Order a copy of this article
    by Yan Liu 
    Abstract: Distributed power grid applications aim to combine multisource sensor device data with conventional power grid monitoring and control data to improve data redundancy and model accuracy. To support such an application, a software platform is essential that coordinates multiple data sources and integrates with the data communication and computation infrastructure. In this paper, we present a service oriented coordination platform that exposes parallel programs as services and data sources as ports. We model services and ports by extending classes of the standard Common Information Model (CIM) for energy management and distribution management. These services are then coordinated based on the typed ports in the context of domain specific power applications. They are integrated with two platform components: (1) a middleware for data communication and aggregation; and (2) a job launching mechanism to run parallel programs on HPC clusters. Our contribution is building a service oriented platform that is compliant with CIM management standards and provides power engineers with an integrated environment for composing distributed power applications, disseminating data and launching HPC jobs. We demonstrate this platform with a use case of deploying distributed state estimation on the IEEE 118 bus model. Our experience leads to insights into future research directions on building a general service oriented platform for exploring distributed power applications.
    Keywords: service oriented architecture, workflow, power grid

  • Energy efficiency of a heterogeneous multicore system based on the enhanced Amdahl's law   Order a copy of this article
    by Songwen Pei, Junge Zhang, Naixue Xiong, Myoung-Seo Kim, Jean-luc Gaudiot 
    Abstract: Energy efficiency is, beyond performance, one of the main challenges in designing future heterogeneous multicore systems. In this paper, we propose an energy efficiency analytical model for heterogeneous multicore systems based on the enhanced Amdahl's law (one plausible form is sketched below). The model extends the traditional computing-centric model by considering the overhead of data preparation, which potentially includes the overhead of communication, data transfer, synchronisation, etc. The analysis clearly shows that decreasing the overhead of data preparation is a promising approach to reach higher computational performance and greater energy efficiency. Therefore, more informed tradeoffs should be made when we design a modern heterogeneous processor system within a limited energy budget.
    Keywords: energy efficiency; overhead of data preparation; dataflow computing model; performance evaluation; heterogeneous multicore system.
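    The abstract does not give the closed form, but under the assumption that data preparation contributes a normalised serial overhead d, a natural way to write such an enhanced Amdahl's law is:
```latex
% f: parallelisable fraction, n: number of cores,
% d: normalised data-preparation overhead (d = 0 recovers the classic law)
S(n) = \frac{1}{(1 - f) + \frac{f}{n} + d}
```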

  • Pixel classified colorisation method based on neighborhood similarity priori   Order a copy of this article
    by Jie Chen, Zongliang Gan, Xiuchang Zhu, Jin Wang 
    Abstract: Colorisation is a kind of computer-aided technology that automatically adds colours to grayscale images. This paper presents a scribble-based colorisation method that treats the flat and edge pixels differently. First, we classify the pixels to flat or edge pixel categories using the neighborhood similarity pixels searching algorithm. Then, we compute the weighted coefficients of the edge pixels by solving a constraint quadratic programming problem and compute the weighted coefficients of the flat pixels based on their luminance distances. Finally, we transmit the weighted coefficients to the chrominance images according to the joint correlation property between luminance and chrominance channels, and combine with the colours scribbled on by the user to compute all the unknown colours. The experimental results show that our method is effective, especially on reducing colour bleeding in the boundary parts, and can give better results when only a few colours are scribbled on.
    Keywords: colorisation; joint correlation; linear weighted combination; quadratic programming; active set; neighborhood similarity pixels

  • A novel distributed node searching method in delay-tolerant networks   Order a copy of this article
    by Yiqin Deng, Ming Zhao, Zhengjiang Wu, Longhua Xiao 
    Abstract: Among conventional target tracking methods in delay-tolerant networks (DTNs), there exists a type of distributed approach that is efficient with regard to target tracking. These methods leave the position and time information of each node in their original sub-areas, select particular nodes in each sub-region as anchor nodes, and use messenger nodes to collect and update the spread information, so that the detector is able to track the target node. However, some nodes in these methods consume energy too quickly, resulting in energy holes and a short network lifetime. Based on these observations, this research develops a novel target tracking strategy (MAtracking) built on a mobile anchor (MA) and the correlation between nodes. It improves target tracking through two new mechanisms. First, a new data collection method is adopted, in which the MA moves to collect nodes' real-time location and time information so as to balance the energy consumption of the whole network; the MA's route is obtained by casting its path planning as a non-deterministic polynomial (NP) path optimisation problem and computing an approximate path. Secondly, the correlation between nodes is used to track the target in the absence of valid mobility information about it. By doing so, the node that has the shortest time to meet the target node is tracked, thus guaranteeing the success rate. A large number of experimental results from real data simulations show that the proposed MAtracking method can improve the network lifetime and tracking efficiency compared with existing approaches.
    Keywords: delay-tolerant networks; target tracking; mobile anchor; correlation of nodes.

  • ElasticQ: an active queue management algorithm with flow trust   Order a copy of this article
    by Su Chenglong, Jin Guang, Jiang Xianliang, Niu Jun 
    Abstract: Active Queue Management (AQM) can improve network transmission performance and reduce packet delay. However, most previous algorithms cannot achieve efficiency and fairness simultaneously when high-bandwidth flows exist. In this paper, a novel scheme, named ElasticQ (Elastically Fair AQM), is proposed to suppress high-bandwidth non-responsive flows and enhance the fairness of different flows. Different from previous works, the concept of flow trust is introduced into the design of AQM algorithms to measure flows' reliability and security effectively. The trust degrees of different flows are estimated to decide whether packets are discarded or not in the proposed scheme, using a sample-match mechanism and a packet dropping interval. Simulation results show that ElasticQ can ensure the fairness of various flows, maintain the stability of the queues, and decrease the completion time of responsive flows, especially when responsive and non-responsive flows coexist.
    Keywords: flow trust, active queue management, elastic fairness, non-responsive flow, responsive flow

  • A hybrid mutation artificial bee colony algorithm for spectrum sharing   Order a copy of this article
    by Ling Huang 
    Abstract: In order to alleviate the spectrum scarcity problem in wireless networks, achieve efficient allocation of spectrum resources, and balance users' access to the spectrum, a hybrid mutation artificial bee colony algorithm based on the artificial bee colony algorithm is presented. The presented algorithm aims to enhance the efficiency of global searching by improving the way leader bees search the nectar source, using the differential evolution algorithm (a sketch of this mutation step is given below). In addition, the onlookers' searching method is improved with the bat algorithm to guarantee the convergence efficiency of the algorithm and the precision of the result: the onlookers are assumed to be equipped with bats' echolocation and get close to the nectar source by adjusting the rate of pulse emission and loudness while searching. The simulation results show that the proposed algorithm has faster convergence, higher efficiency and more optimal solutions compared with other algorithms.
    Keywords: spectrum sharing, artificial bee colony algorithm, bat algorithm, differential evolution
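    A minimal sketch of the differential-evolution mutation assumed for the leader (employed-bee) phase; the scaling factor F and the random index choices are illustrative, not the paper's exact update rule.
```python
# DE/rand/1-style candidate generation for food source i.
import numpy as np

def de_mutation(population, i, F=0.5):
    """Return a candidate for source i as x_r1 + F * (x_r2 - x_r3)."""
    n = len(population)
    others = [j for j in range(n) if j != i]
    r1, r2, r3 = np.random.choice(others, 3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])
```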

  • Outsourcing privacy-preserving ID3 decision tree over horizontal partitioned data for multiple parties   Order a copy of this article
    by Ye Li, Zoe L. Jiang, Xuan Wang, S.M. Yiu 
    Abstract: Today, many small and medium-sized companies want to share data for data mining; however, privacy and security concerns restrict such data sharing. Privacy-preserving data mining has emerged as a solution to this problem. Nevertheless, the traditional cryptographic solutions are too inefficient and infeasible to allow the large-scale analytics needed for big data. In this paper, we focus on the outsourcing of privacy-preserving ID3 decision trees over horizontally partitioned data for multiple parties. We outsource most of the protocol computation to the cloud and propose the OPPWAP to protect users data privacy. By this method, each party can have the correct results calculated with data from other parties and the cloud, and each partys data are kept private from other parties and the cloud. Our findings indicate that an increase in the number of participating parties results in a slight computing cost increase on the users side.
    Keywords: cloud computing; privacy preserving data mining; decision tree; horizontally partitioned data; multiple parties; PPWAP

  • When is the immune inspired B-Cell algorithm superior to the (1+1) evolutionary algorithm?   Order a copy of this article
    by Xiaoyun Xia, Langping Tang, Xue Peng 
    Abstract: There exist many experimental investigations of artificial immune systems (AIS), and it has been shown that AIS are useful and efficient for many real-world optimisation problems. However, little is known in theory about whether AIS can outperform traditional evolutionary algorithms on some optimisation problems. This work rigorously proves that a simple AIS called the B-Cell Algorithm (BCA), with somatic contiguous hypermutations (sketched below), can efficiently optimise two instances of the multiprocessor scheduling problem in expected polynomial runtime, whereas local search algorithms and the (1+1) evolutionary algorithm ((1+1) EA), which uses only one individual in the search space and standard bit mutation, are highly inefficient. This work is helpful for gaining insight into whether there exists any algorithm that is efficient for all specific problems.
    Keywords: artificial immune system; somatic contiguous hypermutations; multiprocessor scheduling problem; runtime analysis
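    For concreteness, a minimal sketch of a somatic contiguous hypermutation operator over bit strings; some BCA variants wrap the mutated region around the string end, which this simplified version does not.
```python
# Flip a random *contiguous* region, unlike standard per-bit mutation.
import random

def contiguous_hypermutation(bits):
    n = len(bits)
    start = random.randrange(n)             # random start position
    length = random.randint(0, n - start)   # random contiguous length
    child = bits[:]
    for k in range(start, start + length):
        child[k] ^= 1                       # flip every bit in the region
    return child
```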

  • Scalable bootstrap attribute reduction for massive data   Order a copy of this article
    by Suqin Ji, Hongbo Shi, Yali Lv, Min Guo 
    Abstract: Attribute reduction is one of the fundamental techniques for knowledge acquisition in rough set theory. Traditional attribute reduction algorithms have to load the whole dataset into memory at once; however, this is infeasible for attribute reduction on massive decision tables owing to memory limitations. To solve this problem, we propose the Bag of Little Bootstraps Attribute Reduction algorithm (BLBAR), which combines the bag of little bootstraps with attribute discernibility (a skeleton of the procedure is sketched below). Specifically, the algorithm first samples from the original decision table to generate a number of sub decision tables, then finds the reducts of bootstrap samples of each sub-table through attribute discernibility; finally, all of the reducts are integrated as the reduct of the original massive decision table. Experimental results demonstrate that BLBAR leads to improved feasibility, scalability and efficiency for attribute reduction on massive decision tables.
    Keywords: bag of little bootstraps; attribute reduction; massive data; discernibility of attribute; reduct
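    A generic bag-of-little-bootstraps skeleton, with the discernibility-based reduct computation abstracted behind a placeholder callable; the parameters s, r and gamma are illustrative defaults, not the paper's settings.
```python
import numpy as np

def blb(table, find_reduct, s=10, r=20, gamma=0.6):
    """table: array of rows; find_reduct: placeholder for the
    discernibility-based reduct step, taking (subsample, weights)."""
    n = len(table)
    b = int(n ** gamma)                        # small subsample size
    results = []
    for _ in range(s):
        sub = table[np.random.choice(n, b, replace=False)]    # subsample
        for _ in range(r):
            weights = np.random.multinomial(n, [1.0 / b] * b) # resample to n
            results.append(find_reduct(sub, weights))
    return results                             # integrate reducts afterwards
```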

  • Assessing nodes' importance in complex networks using structural holes   Order a copy of this article
    by Hui Xu, Jianpei Zhang, Jing Yang, Lijun Lun 
    Abstract: Accurate measurement of important nodes in complex networks has great practical and theoretical significance. Mining important nodes should not only consider the core nodes, but also take into account the locations of the nodes in the network. Despite much research on assessing important nodes, the importance of nodes in structural holes is still easily ignored. Therefore, a local measuring method is proposed, which evaluates the nodes' importance by the total constraints caused by the lack of primary and secondary structural holes around the nodes (the constraint measure is illustrated below). This method simultaneously considers both the centrality and the bridging property of the nodes' first-order and second-order neighbours. To further demonstrate the accuracy of the proposed method (TCM), we carry out deliberate attack simulations by selectively deleting a certain proportion of network nodes, and then calculate the decrease in network efficiency before and after the attacks. Experimental results show that the average effect of TCM on four real networks is improved by 50.64% and 14.92% compared with the clustering coefficient index and the k-shell decomposition method, respectively. TCM is thus more accurate in mining important nodes than the other two methods, and it is suitable for quantitative analysis in large-scale networks.
    Keywords: complex networks; structural holes; secondary holes; nodes importance; constraints
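    Burt's structural-hole constraint, on which this line of work builds, is available in NetworkX; the aggregation of first-order neighbour constraints shown here is only a guess at a TCM-like score for illustration, not the paper's formula.
```python
import networkx as nx

G = nx.karate_club_graph()
c = nx.constraint(G)                        # Burt's per-node constraint
# Hypothetical TCM-like score: own constraint plus mean neighbour constraint.
score = {v: c[v] + sum(c[u] for u in G[v]) / G.degree(v) for v in G}
top = sorted(score, key=score.get)[:5]      # lower constraint ~ more important
print(top)
```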

  • Load-balanced overlay recovery scheme with delay minimisation   Order a copy of this article
    by Shengwen Tian, Hongyong Yang 
    Abstract: Recovery from a link or node failure in the internet is often subject to seconds or minutes of routing convergence, during which certain end-to-end connections may experience seconds or minutes of outage. To address this problem, existing approaches reroute the data traffic to a pre-defined backup path to detour the failed components. However, maintaining backup paths incurs significant bandwidth expenditure. On the other hand, the diverted traffic may cause congestion on the backup path if it is not carefully split over multiple paths according to their available capacity. In this paper, we propose an efficient recovery scheme using one-hop overlay multipath source routing, which is a post-failure recovery method. Once a failure happens, multiple one-hop overlay paths are constructed by strategically selecting multiple relay nodes, and the affected traffic is diverted to these paths in a well-balanced manner. We formulate the traffic allocation problem as a tractable linear programming (LP) optimisation problem whose goal is to minimise the worst-case network congestion ratio (a toy instance is sketched below). Simulations based on a real ISP network and a synthetic internet topology show that our scheme can effectively balance link usage and improve the reliability of the network.
    Keywords: failure recovery; load balance; overlay routing; linear programming.
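    A toy version of the congestion-minimisation LP: split one demand d over K relay paths with capacities c so that the worst-case utilisation r is minimal. The demand, capacities and variable layout are invented for illustration; the paper's full formulation covers a whole network.
```python
# Variables: split fractions x_1..x_K plus the congestion ratio r.
import numpy as np
from scipy.optimize import linprog

d, c = 10.0, np.array([8.0, 6.0, 4.0])         # demand and path capacities
K = len(c)
obj = np.r_[np.zeros(K), 1.0]                  # minimise r
A_ub = np.c_[np.diag([d] * K), -c]             # d*x_k - r*c_k <= 0
b_ub = np.zeros(K)
A_eq = np.r_[np.ones(K), 0.0].reshape(1, -1)   # sum of x_k = 1
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * K + [(0, None)], method='highs')
print(res.x[:K], res.x[K])                     # split ratios, congestion ratio
```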

  • Negation scope detection with recurrent neural network models in review texts   Order a copy of this article
    by Lydia Lazib, Yanyan Zhao, Bing Qin, Ting Liu 
    Abstract: Identifying negation scopes in a text is an important subtask of information extraction, which can benefit other natural language processing tasks, such as relation extraction, question answering and sentiment analysis. It also serves the task of social media text understanding. The task of negation scope detection can be regarded as a token-level sequence labelling problem. In this paper, we propose different models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any task-specific feature engineering effort. Our experimental results show that RNNs, without using any hand-crafted features, outperform a feature-rich CRF-based model.
    Keywords: negation scope detection; natural language processing; recurrent neural networks.

  • The optimisation of speech recognition based on convolutional neural network   Order a copy of this article
    by Weipeng Jing, Tao Jiang, Xingge Zhang, Liangkuan Zhu 
    Abstract: The convolutional neural network (CNN) is introduced as the acoustic model in a speech recognition system based on mobile computing. To improve recognition accuracy, two optimisation methods are proposed for CNN-based speech recognition. First, to address the problem that existing pooling algorithms ignore the locally relevant characteristics of speech data, a dynamic adaptive pooling (DA-pooling) algorithm is proposed for the pooling layer of the CNN model. The DA-pooling algorithm calculates the Spearman correlation coefficient of the extracted data to determine data correlation, and then selects an appropriate pooling strategy according to the correlation (a sketch of this decision rule is given below). Secondly, to address the fact that traditional dropout hides neuron nodes randomly, a dropout strategy based on sparseness is proposed for the fully-connected layer of the CNN model. By adding a unit-sparseness determination mechanism at the output stage of the network unit, we can reduce the influence of smaller units on the model results, thereby improving the generalisation ability of the model. Experimental results show that these strategies can improve the performance of CNN-based acoustic models.
    Keywords: CNN; speech recognition; DA-pooling; overfitting; sparseness
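    A minimal guess at the DA-pooling decision rule using the Spearman coefficient of a pooling window; the 0.5 threshold and the mean/max switch are assumptions made only to make the idea concrete.
```python
import numpy as np
from scipy.stats import spearmanr

def da_pool(window, threshold=0.5):
    flat = np.asarray(window).ravel()
    rho, _ = spearmanr(np.arange(flat.size), flat)  # local rank correlation
    if abs(rho) > threshold:
        return flat.mean()     # strongly correlated window: average pooling
    return flat.max()          # weakly correlated window: max pooling
```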

  • Hybrid feature selection technique for intrusion detection system   Order a copy of this article
    by Muhammad Hilmi Kamarudin, Carsten Maple, Tim Watson 
    Abstract: The enormous volume of network traffic has created a major challenge for Intrusion Detection Systems (IDS). High-dimensionality problems have made feature selection one of the most important criteria in determining the efficiency of an IDS. The objective of this study is to seek the best-fitting feature selection approach, one that can provide a high detection rate with a low false positive rate. In this study we have selected a hybrid feature selection model that potentially combines the strengths of both the filter and the wrapper selection procedures (an approximate pipeline is sketched below). The hybrid solution is expected to effectively select the optimal set of features for detecting intrusion. Several performance metrics are used to study the impact of feature selection optimisation, including accuracy, true positives, true negatives, detection rate, false positives, false negatives and learning time. The proposed hybrid model was built using Correlation Feature Selection (CFS) together with three different search techniques known as best-first, greedy stepwise and genetic algorithm. The wrapper-based subset evaluation uses a Random Forest (RF) classifier to evaluate each of the features that were first selected by the filter method. The reduced feature sets on both the KDD99 (network-based) and DARPA 1999 (host-based) datasets were tested using the RF classification algorithm with 10-fold cross-validation in a supervised environment. The experimental results show that the hybrid feature selection produced a satisfactory outcome in terms of the smallest number of features and a lower processing time. It also achieved almost 100% detection rate of known attacks on both datasets. This encouraging result suggests that the hybrid feature selection model would be a method of choice in IDS environments.
    Keywords: machine learning; filter-subset evaluation; wrapper-subset evaluation; genetic algorithm; random forest.
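    An approximate scikit-learn pipeline in the spirit of the hybrid model: a filter stage followed by a random-forest wrapper evaluated with 10-fold cross-validation. CFS itself is not in scikit-learn, so mutual information stands in for the filter here, and the dataset is synthetic.
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=40, random_state=0)
X_f = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)  # filter
rf = RandomForestClassifier(n_estimators=100, random_state=0)     # wrapper
print(cross_val_score(rf, X_f, y, cv=10).mean())                  # 10-fold CV
```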

  • A dynamic and QoS-effective resource management system   Order a copy of this article
    by Amril Nazir 
    Abstract: This paper presents the design and implementation of HASEX, which supports dynamic resource provisioning at application run-time and provides an effective quality-of-service (QoS) resource management system for QoS- and deadline-driven applications. The most important feature of HASEX is its ability to serve high performance and distributed applications with minimal infrastructure and operating costs. Resource provisioning is controlled by a 'rental' mechanism, supported by a pool of computing resources that the system may rent from external resource owners/providers in times of high demand. HASEX differentiates the roles of application management, job scheduling, and resource provisioning tasks. This approach significantly reduces the overhead of managing distributed resources. We demonstrate the effectiveness of HASEX in renting groups of resource nodes across geographically disparate sites. We then compare HASEX with the OpenNebula cloud system to demonstrate its performance, scalability, and QoS effectiveness.
    Keywords: deadline-driven jobs; SLA management; SLA/QoS middleware; resource provisioning; autonomic and self management.

  • Performance identification in large-scale class data from advanced facets of computational intelligence and soft computing techniques   Order a copy of this article
    by You-Shyang Chen 
    Abstract: The enormous number of hemodialysis (HD) treatments is a matter of concern in Taiwan, which has the world's highest prevalence of end-stage renal disease. This motivates the identification of an adequate HD remedy. Although previous researchers have devised various models to address HD adequacy, the following five deficiencies form obstacles: (1) lack of consideration for imbalanced class problems (ICPs) with medical data; (2) lack of methods to accommodate the mathematical distributions of the given data; (3) lack of explanatory ability on the given data; (4) lack of effective methods to identify the determinants of HD adequacy; and (5) lack of appropriate classifiers to define HD adequacy. This study proposes hybrid models that integrate expert knowledge, imbalanced resampling methods, decision tree and random forests-based feature-selection methods, the LEM2 algorithm, rough set theory, and rule-filtering techniques to process medical practice with ICPs. Empirical results show that these models outperform the listed methods.
    Keywords: rough set theory; decision tree; random forests; imbalanced class problem data.

  • Assessing complex evolving cyber-physical systems: a case study on smart medical devices   Order a copy of this article
    by Jan Sliwa 
    Abstract: Our environment is more and more permeated by intelligent devices and systems that directly interact with physical objects, including our bodies. In this way, complex cyber-physical systems are created in which the cyber part is intertwined with the physical part, so that novel dynamic dependencies appear. These intelligent devices produce immense amounts of data that can be stored and analysed. Often, high hopes are raised that processing those data will easily increase our knowledge and permit good decisions to be taken based on hard facts. If not based on a solid understanding, such data processing can lead to the well-known problem of GIGO (garbage in, garbage out). If presented in a visually compelling way, useless results will look like truth and will be misleading and damaging. In order to obtain valid analysis results that can be used as "actionable knowledge", it is necessary to understand the working of the physical systems and also to be aware of possible statistical fallacies, such as biased selection. Even if big data collected by intelligent devices are not perfect, we nevertheless want to use them to evaluate cyber-physical systems and their safety, efficiency and quality. One of the major challenges is the changing nature of the technical systems, of the environment in which they operate, and of the humans who use them. This raises the problem of partial invalidation of collected statistics if the conditions change. We discuss the general problems related to assessing cyber-physical systems and present an important and interesting case study: smart medical devices. We stress that the statistical questions raised here are open, and one of the goals of this paper is to raise interest and instigate cooperation to solve them.
    Keywords: cyber-physical systems; quality assessment; smart medical devices; statistical models.

  • Distributed data-dependent locality sensitive hashing   Order a copy of this article
    by Yanping Ma 
    Abstract: Locality sensitive hashing (LSH) is a popular algorithm for approximate nearest neighbour (ANN) search. As LSH partitions the vector space uniformly while the distribution of vectors is usually non-uniform, it fits real datasets poorly and has limited efficiency. In this paper, we propose a novel data-dependent LSH (DP-LSH) algorithm, which has a two-level structure (sketched below). In the first level, we train a number of cluster centres and use them to divide the dataset, so that the vectors in each cluster have a near-uniform distribution. In the second level, we construct LSH tables for each cluster. Given a query, we first determine the few clusters that it belongs to, and then perform an ANN search in the corresponding LSH tables. Furthermore, we present an optimised distributed scheme and a distributed DP-LSH algorithm. Experimental results on the reference datasets show that the search speed of DP-LSH can be increased by 48 times compared with E2LSH, while keeping high search precision; the distributed DP-LSH can further improve search efficiency.
    Keywords: locality sensitive hashing; approximate nearest neighbour; data-dependent; distributed high dimensional search.
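    A compact sketch of the two-level idea: k-means forms the first level, and a sign-of-random-projection LSH table is built per cluster. The cluster count, number of hyperplanes and data are placeholders, not the paper's configuration.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 64))
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)   # level 1

planes = rng.normal(size=(12, 64))                             # 12 hyperplanes

def hash_keys(vectors):
    return [tuple((v @ planes.T) > 0) for v in vectors]        # sign-bit hash

tables = {}
for cid in range(16):                                          # level 2
    members = np.where(km.labels_ == cid)[0]
    table = {}
    for idx, key in zip(members, hash_keys(X[members])):
        table.setdefault(key, []).append(idx)
    tables[cid] = table

q = rng.normal(size=64)
cid = km.predict(q[None, :])[0]                                # route query
candidates = tables[cid].get(hash_keys([q])[0], [])            # ANN candidates
```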

  • Real-time human action recognition using depth motion maps and convolutional neural networks   Order a copy of this article
    by Jiang Li, Xiaojuan Ban, Guang Yang, Yitong Li 
    Abstract: This paper presents an effective approach for recognising human actions from depth video sequences by employing Depth Motion Maps (DMMs) and Convolutional Neural Networks (CNNs). Depth maps are projected onto three orthogonal planes, and frame differences under each view (front/side/top) are accumulated through an entire depth video sequence, generating a DMM (the front-view computation is sketched below). We build a Multi-View Convolutional Neural Network (MV-CNN) architecture containing multiple networks to deal with the three DMMs (DMM_f, DMM_s, DMM_t). The output of the fully-connected layer under each view is integrated as the feature representation, which is then learned in the last softmax regression layer to predict human actions. Experimental results on the MSR-Action3D and UTD-MHAD datasets indicate that the proposed approach achieves state-of-the-art recognition performance and is suitable for real-time recognition.
    Keywords: real-time human action recognition; depth motion maps; multi-view convolutional neural networks.
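    A sketch of the front-view DMM under the usual definition (accumulated absolute frame differences); the side and top views would first project each depth frame by taking maxima along its axes. The threshold parameter is an assumption.
```python
import numpy as np

def dmm_front(depth_seq, eps=0.0):
    """depth_seq: (T, H, W) stack of depth frames."""
    diffs = np.abs(np.diff(depth_seq.astype(np.float32), axis=0))
    diffs[diffs <= eps] = 0.0          # optional suppression of small motion
    return diffs.sum(axis=0)           # (H, W) accumulated motion-energy map
```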

  • A compact construction for non-monotonic key-policy attribute-based encryption   Order a copy of this article
    by Junqi Zhang, Haiying Gao 
    Abstract: The Attribute-Based Encryption (ABE) scheme with monotonic access structure cannot deal with an access structure that is associated with the negation of attributes, which is not convenient for real world applications. In this paper, we attempt to propose a more expressive non-monotonic ABE scheme through a new method. To achieve this goal, we firstly propose a linear Two-mode Identity-Based Broadcast Encryption (TIBBE) scheme based on an Identity-Based Broadcast Encryption (IBBE) scheme. We introduce the concept of Identity-Based Revocation (IBR) for this scheme without increasing the size of parameters in IBBE. The scheme is selective identity secure under the m-DBDHE assumption. Then we convert the TIBBE scheme into a non-monotonic Key-Policy ABE (KP-ABE) scheme with compact parameters. Our KP-ABE scheme could achieve constant-size ciphertexts, and the scale of the private keys grows linearly with the scale of the attribute set. Besides, the computational cost is the lowest compared with other existing non-monotonic KP-ABE schemes.
    Keywords: identity-based broadcast encryption; revocation scheme; linear secret-sharing schemes; non-monotonic access structure; selective security.

  • Research on link blocks recognition of web pages   Order a copy of this article
    by Gu Qiong, Wang Xianming 
    Abstract: The link block is a typical type of block structure in web pages; it is also an important and basic research object in the fields of web data processing and web data mining. Nevertheless, existing research on links only focuses on granularities such as websites, pages and single links; research results based on block-level links are extremely rare. In view of the significance and deficiency of this issue, block and block tree are first proposed as the basic concepts for subsequent exploration, and an approach to building block trees is put forward. Secondly, four rules for link block discrimination and two indicators for evaluating recognition results are put forward based on the concept of block; the two evaluation indicators are named Link Coverage Rate (LCR) and Code Coverage Rate (CCR), respectively. Finally, a strategy named Forward Algorithm for Discovery of Link Block (FAD) is proposed, and a corresponding experiment with different parameters is performed to verify the strategy. The results show that FAD can flexibly recognise link blocks under different granularity conditions. The concepts and approaches presented in this paper have good prospects in the fields of web data processing and web data mining, such as advertising block recognition, web page purification, page importance evaluation and web content extraction.
    Keywords: web; block trees; link blocks; discrimination; recognition.

  • AdaBoost based conformal prediction with high efficiency   Order a copy of this article
    by Yingjie Zhang, Jianxing Xu, Hengda Cheng 
    Abstract: Conformal prediction presents a novel idea whose error rate is provably controlled by given significance levels, so the remaining goal of conformal prediction is efficiency. High efficiency means that the predictions are as certain as possible. As we know, ensemble methods are able to obtain better predictive performance than any of the constituent models. An ensemble method such as random forest has been used as an underlying method to build a conformal predictor. However, we do not know how conformal predictors with and without ensemble methods differ, or how much the corresponding performance improves. In this paper, an ensemble method, AdaBoost, is used to build a conformal predictor (a minimal construction is sketched below), and we introduce another evaluation metric, correct efficiency, which measures the efficiency of correct classifications. The good performance of the AdaBoost-based conformal predictor (CP-AB) has been validated on seven datasets. The experimental results show that the proposed method has a much higher efficiency.
    Keywords: machine learning; conformal prediction; AdaBoost; efficiency; ensemble; support vector machine; decision tree; weak classifiers; p-value; prediction label.
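    A minimal inductive conformal predictor wrapped around AdaBoost, using 1 minus the predicted class probability as the nonconformity score; this is one common choice, not necessarily the paper's exact score function.
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
# Calibration nonconformity scores: 1 - probability of the true label.
alpha_cal = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

def p_values(x):
    probs = clf.predict_proba([x])[0]
    return {c: (np.sum(alpha_cal >= 1 - probs[c]) + 1) / (len(alpha_cal) + 1)
            for c in range(len(probs))}   # keep labels with p >= epsilon

print(p_values(X_cal[0]))
```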

  • Comparative analysis of hierarchical cluster protocols for wireless sensor networks   Order a copy of this article
    by Chirihane Gherbi, Zibouda Aliouat, Mohamed Benmohammed 
    Abstract: Wireless Sensor Networks (WSNs) basically consist of low-cost sensor nodes deployed in an area of interest to collect data from the environment and relay them to a sink, where they are processed and then sent to an end user. Since wireless nodes are severely power-constrained, the major concern is how to conserve nodes' energy so that the network lifetime can last long enough to reach the expected end of the network mission. Since WSNs may be formed by a large number of nodes, it is rather complex, or even infeasible, to analytically model a WSN, and doing so usually leads to oversimplified analysis with limited confidence. Besides, deploying test-beds requires a huge effort. Therefore, simulation is essential to study WSN behaviour. However, it requires a suitable model based on solid assumptions and an appropriate framework to ease implementation. In addition, simulation results rely on the particular scenario under study (environment), hardware and physical layer assumptions, which are usually not accurate enough to capture the real behaviour of a WSN, thus jeopardising the credibility of the results. On the other hand, detailed models lead to scalability and performance issues, owing to the large number of nodes that have to be simulated depending on the application. Therefore, the tradeoff between scalability and accuracy becomes a major issue when simulating WSNs. In particular, we systematically analyse a few prominent WSN clustering routing protocols and compare these different approaches according to our taxonomy and several significant metrics. Finally, we summarise and conclude the paper with some pertinent future directions.
    Keywords: energy saving; distributed algorithm; load balancing; cluster-based routing; wireless sensor network.

  • Virtual cluster optimisation for MapReduce-like applications   Order a copy of this article
    by Cairong Yan 
    Abstract: Infrastructure-as-a-service clouds are becoming ubiquitous for provisioning virtual machines on demand. Cloud service providers expect to use the least resources to deliver the best services. As users frequently request virtual machines to build virtual clusters and run MapReduce-like jobs for big data processing, cloud service providers intend to optimise the virtual cluster to minimise network latency and subsequently reduce data movement cost. In this paper, we focus on the virtual machine placement issue for provisioning virtual clusters with minimum network latency in clouds. We define the distance as the latency between virtual machines and use it to measure the affinity of a virtual cluster. Such metric of distance indicates the considerations of virtual machine placement and the topology of physical nodes in clouds. Then we formulate our problem as the classical shortest distance problem and solve it by building an integer programming model. A greedy virtual machine placement algorithm is designed to get a compact virtual cluster. Furthermore, an improved heuristic algorithm is also presented for achieving a global resource optimisation. The simulation results verify our algorithms and the experiment results validate the improvement achieved by our approaches.
    Keywords: virtual cluster; provisioning; resource optimisation; MapReduce programming model; shortest distance.

  • Harnessing betweenness centrality for virtual network embedding in tree topologies   Order a copy of this article
    by Mydhili Palagummi, Ricardo Lent 
    Abstract: We examine the virtual network embedding problem with QoS constraints and formulate an approach that exploits the betweenness centrality of VNE requests to improve performance. A pay-per-use revenue model is introduced to evaluate the algorithm. An evaluation study using datacentre-like substrates and a wide area topology compares the approach with four embedding methods from the literature and reports on the average revenue rate, embedding success probability, average number of VNE deployments, cost, and impact of substrate failures on the operation of the VNEs, confirming the efficacy of the proposed approach.
    Keywords: virtual network embedding; revenue metric; cloud computing; network overlay; datacentre; simulation.

  • Detecting fake reviews via dynamic multimode network   Order a copy of this article
    by Jun Zhao, Hong Wang 
    Abstract: Online product reviews can greatly affect consumers' shopping decisions. Thus, a large number of unscrupulous merchants post fake or unfair reviews to mislead consumers for profit and fame. The common approaches to finding these spam reviews analyse text similarity or rating patterns. With these common approaches we can easily identify ordinary spammers, but we cannot find the unusual ones who manipulate their behaviour to act just like genuine reviewers. In this paper, we propose a novel method to recognise these unusual spammers by using the relations among reviewers, reviews, commodities and stores. Firstly, we present four fundamental concepts: the quality of the merchandise, the honesty of the review, the trustworthiness of the reviewer and the reliability of the store, which enable us to identify spam reviewers more efficiently. Secondly, we propose our multimode network model for identifying suspicious reviews and give three corresponding algorithms. Our experiments show that multiview spam detection based on the multimode network can detect more subtle false reviews.
    Keywords: fake review detection; honesty degree; shopping behaviour; multiview spam detection; dynamic multimode network.

  • DBSCAN-PSM: An improvement method of DBSCAN algorithm on Spark   Order a copy of this article
    by Guangsheng Chen, Yiqun Cheng, Weipeng Jing 
    Abstract: DBSCAN is a density-based data clustering algorithm, widely used in image processing, data mining, machine learning and other fields. With the increasing size of datasets, the parallel DBSCAN algorithm is widely used. However, we consider the current partitioning method of DBSCAN too simple, and the GETNEIGHBORS query steps repeatedly access the dataset on Spark. We therefore propose DBSCAN-PSM, which applies a new data partitioning and merging method. In the first stage of our method we introduce the KD-tree, combine partitioning with the GETNEIGHBORS query, reduce the number of accesses to the dataset, and decrease the influence of I/O in the algorithm (the neighbourhood query is sketched below). In the second stage we use the features of points in merging so as to avoid the time cost of global labelling. Experimental results show that our new method can improve both the parallel efficiency and the clustering performance.
    Keywords: big data; DBSCAN; data partitioning; data merging.
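    The first-stage optimisation can be pictured as answering DBSCAN's eps-neighbourhood (GETNEIGHBORS) queries from a KD-tree built once per partition rather than scanning the data for every point; a minimal single-machine sketch with SciPy, with eps and min_pts chosen arbitrarily.
```python
import numpy as np
from scipy.spatial import cKDTree

X = np.random.rand(100000, 3)
tree = cKDTree(X)                                # built once per partition
eps, min_pts = 0.05, 10
neighbours = tree.query_ball_point(X[0], r=eps)  # GETNEIGHBORS for one point
is_core = len(neighbours) >= min_pts             # DBSCAN core-point test
```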

  • Multimedia auto-annotation via label correlation mining   Order a copy of this article
    by Feng Tian 
    Abstract: How to automatically determine the label for a multimedia object is crucial for multimedia retrieval. Over the past decade, significant efforts have been devoted to the task of multimedia annotation. The problem is difficult because an arbitrary multimedia object can capture a variety of concepts, each of which would require separate detection. The neighbour voting mechanism is known to be effective for multimedia object annotation. However, it estimates the relevance of a label with respect to multimedia content by the labels' frequency derived from its nearest neighbours, which does not take into account the assigned label set as a whole. We propose LSLabel, a novel algorithm that achieves comparable results with label correlation mining. By incorporating label correlation and label relevance with respect to multimedia content, the problem of assigning labels to a multimedia object is formulated in a joint framework. The problem can be efficiently optimised in a heuristic manner, which allows us to incorporate a large number of feature descriptors efficiently. On two standard real-world benchmark datasets, we demonstrate that LSLabel matches the current state-of-the-art in annotation quality with lower complexity.
    Keywords: label correlation; multimedia annotation; auto-annotation; correlation mining.

  • Dynamic trust evaluation model based on bidding and multi-attributes for social networks   Order a copy of this article
    by Gang Wang, Jaehong Park, Ravi Sandhu 
    Abstract: Mutual trust is the most important basis of social networks. However, many malicious nodes deceive, collaboratively cheat, and maliciously recommend other nodes to gain more benefits. Meanwhile, because of the lack of an effective incentive strategy, many nodes neither evaluate nor recommend. Thus, malicious actions have been aggravated in social networks. To solve these issues, we design a bidding strategy to incentivise nodes to do their best to recommend or evaluate service nodes. At the same time, we use the TOPSIS method to select the correct service node from the network (a standard TOPSIS ranking is sketched below). To guarantee the reliability of the selected service node, we incorporate a recommendation-time influence function, a service-content similarity function and a recommendation acquaintance function into the model to compute the general trust of a node. Finally, we give an update method for node trust degrees and an experimental analysis.
    Keywords: dynamic trust; trust evaluation model; bid; multi-attributes; TOPSIS; information entropy; recommendation trust; direct trust.
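    A standard TOPSIS ranking over a node-by-attribute decision matrix, for concreteness; the attribute values, the weights and the benefit-only treatment of criteria here are placeholders, not the paper's trust attributes.
```python
import numpy as np

def topsis(matrix, weights):
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation
    v = norm * weights                               # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)       # benefit criteria only
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness: higher wins

scores = topsis(np.array([[0.8, 0.6, 0.9], [0.7, 0.9, 0.5]]),
                np.array([0.5, 0.3, 0.2]))
print(scores)
```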

  • A personal local area information interaction system based on NFC and Bluetooth technology   Order a copy of this article
    by Tian Wang, Wenhua Wang, Ming Huang, Yongxuan Lai 
    Abstract: Taking attendance is a regular activity in society, and required class attendance is common in Chinese colleges and universities. In most traditional classrooms, taking attendance may consume much time, and some students may cheat by pretending to be their classmates, which makes the results unreliable. Moreover, the interaction mode between the teacher and students is limited and cannot support fast data interaction. To solve these problems, this paper proposes an information interaction system that can not only speed up the process of taking attendance but also extend the information exchange mode. Firstly, we propose an NFC (Near Field Communication) based method to take attendance, which uses the rapid information exchange capability of NFC in the mobile phone. Furthermore, an ad-hoc scheme is introduced, in which some students may be selected as relays, which can greatly accelerate the attendance-taking process. Moreover, a lazy unbinding mechanism is proposed to prevent students from taking attendance for others. Finally, based on Bluetooth technology, the system realises file transfer, which extends the information exchange mode. Real experimental results demonstrate the feasibility of the proposed system.
    Keywords: taking attendance; information interaction; NFC; lazy unbinding scheme; relay scheme.

  • A risk adaptive access control model based on Markov for big data in the cloud   Order a copy of this article
    by Hongfa Ding, Changgen Peng, Youliang Tian, Shuwen Xiang 
    Abstract: One of the most important problems faced by cloud computing and cloud storage is identity and access management. The main difficulty in applying access control in the cloud is providing the flexibility and scalability needed to support a large number of users and resources in a dynamic, heterogeneous environment with collaboration and information-sharing needs. This paper proposes a risk-adaptive dynamic access control model for big data stored in the cloud and processed by cloud computing. The suggested model employs the Markov method and Shannon information theory. First, a simple formal adversary model for our risk-adaptive access control model is presented. Second, a modification of the eXtensible Access Control Markup Language (XACML) framework is given, with some new and enhanced components (including a novel risk evaluation component) added. Then, we present Markov-based methods to calculate the risk values of access requests, identify the user and supervise access behaviour according to the job obligations of users and the classification of data. Finally, an incentive mechanism similar to a credit system is designed to supervise all access behaviours of subjects, and risky requests and risky users are effectively restrained by this mechanism. Our method is easy to deploy as the model is extended from the standard XACML: compared with other work, the administrator only needs to label the object data and record requests and access behaviour. The method is effective and suitable for controlling access in large-scale information systems (e.g. cloud-based systems) and for protecting sensitive and private data for data owners.
    Keywords: risk-based access control; privacy protection; risk management; cloud computing.

  • Watermarking based authentication and image restoration in multimedia sensor networks   Order a copy of this article
    by Hai Huang, Dongqing Xie 
    Abstract: In this paper, we propose a novel watermarking-based secure image acquisition scheme for Multimedia Sensor Networks (MSNs), which provides support for image authentication and restoration. In the proposed scheme, the sensor node groups image frames: two successive frames compose a non-overlapping authentication and restoration group. The watermarking bits, which are a down-sampled version of the first image, are computed from the first image and reversibly embedded into the second image. The sink performs verification and restoration using the reversible watermark. Compared with previous work, our approach can not only implement authentication but also improve image quality. Experimental results show that our scheme achieves gains in terms of authentication and packet loss tolerance, improving the image quality.
    Keywords: multimedia sensor networks; secure image restoration; authentication; watermarking.

  • Coverless information hiding method based on the keyword   Order a copy of this article
    by Hongyong Ji, Zhangjie Fu 
    Abstract: Information hiding is an important
    Keywords: coverless information hiding; big data; Chinese mathematical expression; word segmentation.

  • Measurement method of carbon dioxide using spatial decomposed parallel computing   Order a copy of this article
    by Nan Liu, Weipeng Jing 
    Abstract: According to the carbon dioxide's characteristics of weak absorption in the ultraviolet and visible (UV-VIS) band, a measurement method based on spatial decomposed parallel computing of traditional differential optical absorption spectroscopy is proposed to measure CO2 vertical column concentration in ambient atmosphere. First, the American Standard Profile is used to define the solar absorption spectrum, and the spectrum acquisition of the incident light converged by the telescope is described as observed parameters. On these bases, a spectrometer line model is established. Then, atmospheric radiation transmission is simulated using parallel computing, which reduces the computational complexity while balancing the interference that participates in the fitting. Simulation analyses show that the proposed method can reduce the computational complexity, and the run time is reduced by 1.18 s compared with IMLM and IMAP-DOAS in the same configuration. The proposed method can also increase accuracy, with its inversion error reduced by 5.3% and residual reduced by 0.8% compared with differential optical absorption spectroscopy. The spatial decomposed parallel computing method has advantages in processing CO2, and it can be further used in the research into carbon sinks.
    Keywords: differential optical absorption spectroscopy; ultraviolet and visible band; spatial decomposed parallel computing method; vertical column concentration; spectrometer; fitting.

  • Design and implementation of an Openflow SDN controller in the NS-3 discrete-event network simulator   Order a copy of this article
    by Ovidiu Mihai Poncea, Andrei Sorin Pistirica, Florica Moldoveanu, Victor Asavei 
    Abstract: The NS-3 simulator comes with the Openflow protocol and a rudimentary controller for classic layer-2 bridge simulations. This controller lacks the basic functionality provided by real SDN controllers, making it inadequate for experimentation in the scientific community. In this paper, we propose a new controller with an architecture and functionality similar to that of full-fledged controllers yet simple, extensible and easy to modify - all characteristics specific to simulators.
    Keywords: networking; software defined networking; SDN controller; NS3; NS-3; simulators.

  • Exploring traffic conditions based on massive taxi trajectories   Order a copy of this article
    by Dongjin Yu, Jiaojiao Wang, Ruiting Wang 
    Abstract: As increasing volumes of urban traffic data become available, more and more opportunities arise for data-driven analysis that can lead to improvements in traffic conditions. In this paper, we focus on a particularly important type of urban traffic dataset: taxi trajectories. With GPS devices installed, moving taxis become valuable sensors of traffic conditions. However, analysing these GPS data presents many challenges owing to their complex nature. We propose a new approach that transforms the trajectories of each moving taxi into a document consisting of the traversed street names, which enables semantic analysis of massive taxi datasets as document corpora (the idea is sketched below). More specifically, we identify traffic topics through textual topic modelling techniques, and then cluster trajectories under these topics to explore traffic conditions. The effectiveness of our approach is illustrated by a case study using a large taxi trajectory dataset acquired from 3743 taxis in a city.
    Keywords: vehicle trajectory; map matching; traffic regions; latent Dirichlet allocation; trajectory clustering; visualisation.
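    A minimal sketch of the document analogy: each trajectory becomes a bag of traversed street names and LDA extracts traffic topics. The street names, trajectories and topic count are invented for illustration.
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

trajectories = [
    "main_st river_rd station_ave main_st",
    "airport_expwy ring_rd main_st",
    "river_rd harbour_blvd ring_rd",
]
counts = CountVectorizer().fit_transform(trajectories)   # bag of street names
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # cluster trajectories by dominant topic
print(doc_topics)
```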

  • Keyword guessing on multi-user searchable encryption   Order a copy of this article
    by Zhen Li, Minghao Zhao, Han Jiang, Qiuliang Xu 
    Abstract: Searchable encryption provides a practical method that enables a client to store an encrypted database on an untrusted server while supporting keyword search in a secure manner. It has gained extensive research interest with increasing concerns about security in cloud computing. Multi-user searchable encryption is more compatible with the multi-tenancy and massive scalability properties of cloud services. Most of these schemes are constructed using public key encryption. However, public key encryption with keyword search is vulnerable to the keyword guessing attack, mainly because the keyword space is overwhelmingly smaller than the polynomial level of the security parameter, and users usually query commonly used keywords with low entropy. Consequently, a secure channel is necessarily involved for transmitting secret information, which imposes an extra severe burden on the cloud system. This vulnerability is recognised in traditional searchable encryption, but it was still undecided whether it also exists in the multi-user setting. In this paper, we first point out that the keyword guessing attack is also a problem in the multi-user setting without the supposed secure channel. Through an in-depth investigation of several recently proposed multi-user searchable encryption schemes and by simulating the keyword guessing attack on them, we show that none of these schemes can resist this attack. We give a comprehensive security definition and propose some open problems related to multi-user searchable encryption.
    Keywords: cloud computing; keyword guessing; searchable encryption; multi-user.

  • Semi-supervised dimensionality reduction based on local estimation error   Order a copy of this article
    by Xianfa Cai 
    Abstract: Graph construction is one of the key steps in graph-based semi-supervised learning. However, the neighbourhood graph of most semi-supervised methods is unstable owing to its sensitivity to the selection of the neighbourhood parameter and to inaccuracy in the edge weights, which easily leads to dramatic degradation of performance. Since local models are trained only with the points related to a particular point, local learning methods often outperform global ones. The good performance of local learning methods indicates that the label of a point can be well estimated by its neighbours. Inspired by this, this paper proposes a feasible strategy called semi-supervised dimensionality reduction based on local estimation error (LEESSDR), which uses local learning projections (LLP) for semi-supervised dimensionality reduction. The algorithm sets the edge weights of the neighbourhood graph by minimising the local estimation error, and can effectively preserve the global geometric structure of the sampled dataset as well as its local one. Since LLP does not require its input space to be locally linear, even if it is nonlinear, LLP maps it to the feature space by using kernel functions and then obtains the local estimation error in the feature space. The feasibility and effectiveness of the proposed method are verified on two popular face databases (YaleB and CMU PIE) with promising classification accuracy and favourable robustness.
    Keywords: local learning projections; side-information; semi-supervised learning; graph construction.

Special Issue on: Advances in Big Data Processing via Convergence of Emerging Techniques

  • A fast and parallel algorithm for frequent pattern mining from big data in many-task environments   Order a copy of this article
    by Wei-Tee Lin, Chih-Ping Chu 
    Abstract: Many past studies have tried to discover frequent patterns efficiently from large databases; they can be classified into two main categories: Apriori-based algorithms and FP-growth-based algorithms. The Apriori algorithm is a generate-and-test approach whose performance suffers from the testing of too many candidate itemsets, so most recent studies have applied the FP-growth approach to the discovery of frequent patterns. The rapid growth of data, however, has brought new challenges to frequent pattern mining: execution efficiency and scalability. Big data often contains a large number of items, a large number of transactions and a long average transaction length, which result in a large FP-tree. In addition to the data characteristics, the size of the FP-tree is also sensitive to the minimum support threshold, because a small support is likely to create many branches per node, greatly enlarging the FP-tree and the number of reconstructed conditional pattern-based trees. In this paper, we propose a novel algorithm and architecture for efficiently mining frequent patterns from big data in distributed and many-task environments. Through empirical evaluations under various simulation conditions, the proposed method is shown to deliver excellent performance in terms of execution time.
    Keywords: data mining; big data; many-task computing; frequent pattern mining
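
    A minimal FP-tree insertion sketch in Python (illustrative only, not the authors' implementation) shows why the tree stays compact when transactions share prefixes and grows when they diverge, as happens under a small minimum support:

        # FP-tree node: item label, support count, parent link and child links.
        class FPNode:
            def __init__(self, item, parent=None):
                self.item, self.count, self.parent = item, 0, parent
                self.children = {}

        def insert_transaction(root, transaction):
            """Insert one transaction whose items are pre-sorted by descending frequency."""
            node = root
            for item in transaction:
                child = node.children.get(item)
                if child is None:
                    child = FPNode(item, parent=node)  # diverging suffix: the tree grows
                    node.children[item] = child
                child.count += 1                       # shared prefix: the path is reused
                node = child

        root = FPNode(None)
        for t in [["a", "b", "c"], ["a", "b"], ["a", "c"]]:
            insert_transaction(root, t)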

Special Issue on: Wireless Network Technologies and Applications

  • Checkpointing distributed application running on mobile ad hoc networks   Order a copy of this article
    by Houssem Mansouri, Nadjib Badache, Makhlouf Aliouat, Al-Sakib Khan Pathan 
    Abstract: A Mobile Ad hoc NETwork (MANET) is a type of wireless network consisting of a set of self-configured mobile hosts that can communicate with each other using wireless links without the assistance of any fixed infrastructure. This has made it possible to create distributed mobile computing applications and has also brought several new challenges to distributed algorithm design. Checkpointing is a well-explored fault-tolerance technique for wired and cellular mobile networks. However, it is not directly applicable to MANETs owing to their dynamic topology, limited availability of stable storage, frequent partitioning and absence of fixed infrastructure. In this paper, we propose an adaptive, coordinated and non-blocking checkpointing algorithm to provide fault tolerance in cluster-based MANETs, in which only a minimum number of mobile hosts in each cluster take checkpoints. The performance analysis and simulation results show that the proposed scheme incurs a lower coordinating-message cost and performs well compared with related previous work.
    Keywords: MANET; distributed mobile computing; fault tolerance; clustering; checkpointing.

  • Cluster-based routing protocol using traffic information   Order a copy of this article
    by Hamza Toulni, Benayad Nsiri 
    Abstract: In recent years, vehicular ad-hoc networks (VANETs) have gained much attention from researchers and the various actors in the transport field because of their crucial role in inter-vehicle communication and road safety. However, VANETs still face many implementation challenges, and routing is the most critical issue because of its major role in communication between vehicles. The routing strategy must therefore account for the frequent topology changes and other characteristics of VANETs, and the method adopted must provide the best flow of data with respect to various performance metrics. In this paper, we propose a cluster-based routing protocol that uses road traffic information to ensure packet transmission reliably and quickly. The proposed routing protocol is simulated in a city environment, and the experimental results show that our protocol is effective.
    Keywords: ontology, vehicular ad-hoc networks, traffic information, cluster-based routing protocols

  • Detection and mitigation of pulse-delay attacks in pairwise-secured wireless sensor networks   Order a copy of this article
    by Tooska Dargahi, Hamid H.S. Javadi, Hosein Shafiei, Payam Mousavi 
    Abstract: With advances in technology, there has been increasing interest in the use of Wireless Sensor Networks (WSNs). WSNs are vulnerable to a wide class of attacks, among which the pulse-delay attack poses a severe threat to the dependability of such networks. This paper proposes a distributed approach to detect and mitigate such attacks in pairwise-secured WSNs. It provides a lower bound on the radio range under which the distributed approach can be performed. Our simulation experiments validate the correctness and efficiency of the proposed approach.
    Keywords: wireless sensor networks; security; time synchronisation; pulse-delay attacks; self-healing systems.

  • PSCAR: a proactive-optimal-path selection with coordinator agents assisted routing for vehicular ad hoc networks   Order a copy of this article
    by Souaad Boussoufa-Lahlah, Fouzi Semchedine, Louiza Bouallouche-Medjkoune, Nadir Farhi 
    Abstract: In this paper, we propose the Proactive-optimal-path Selection with Coordinator Agents Assisted Routing (PSCAR) protocol for Vehicular Ad hoc NETworks (VANETs) in urban environments. The main idea of PSCAR is to deploy static nodes as coordinator agents at each intersection, in order to improve routing performance and to cope with the radio obstacles (buildings, trees, etc.) and voids encountered in urban environments. Since the coordinator agents are static nodes, each one knows all the paths to any other coordinator agent in the network. Thus, instead of searching for an optimal path towards the destination node, PSCAR determines an optimal path to the coordinator agent nearest the destination node, so as to better anticipate any change in the destination's position. The optimal path is selected according to two criteria: the total physical distance and the vehicle density on the path. The vehicle density is estimated from a fundamental diagram of traffic, which allows the vehicular traffic density on each road segment to be estimated. To evaluate the performance of PSCAR, we used the Network Simulator 2 (ns-2) and the mobility simulator SUMO. We compare our scheme with some existing solutions to show its effectiveness in terms of packet delivery ratio, end-to-end delay and network overhead.
    Keywords: vehicular ad hoc networks; position-based routing; greedy forwarding; carry and forward; radio obstacles; urban environments; traffic density; ns-2; SUMO.

  • EAHKM+: energy-aware secure clustering scheme in wireless sensor networks   Order a copy of this article
    by Mohamed-Lamine Messai, Hamida Seba 
    Abstract: Clustering is an effective technique for saving energy in Wireless Sensor Networks (WSNs). However, organising WSNs into clusters securely is a challenging task given the vulnerabilities of these networks. In this paper, we provide secure cluster formation by proposing a new symmetric key management scheme for hierarchical WSNs (HWSNs), called EAHKM+ (Energy Aware Hierarchical Key Management in WSNs). Only three keys are pre-distributed in each sensor node before deployment. EAHKM+ ensures the establishment of a pairwise key between each sensor node and its cluster head, and then the establishment of a broadcast key in each cluster in the network (a hedged sketch of pairwise key derivation follows the keywords below). Through comparison with other key management schemes proposed for hierarchical WSNs, we show that EAHKM+ is an energy-efficient, flexible, robust and scalable solution to the key management problem in clustered WSNs.
    Keywords: secure clustering; hierarchical wireless sensor networks; key management; energy saving
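
    A minimal sketch of how two parties can derive the same pairwise key from pre-distributed secret material and public identifiers; the key names and the HMAC-based derivation rule are illustrative assumptions, not the EAHKM+ construction itself:

        import hashlib
        import hmac

        def pairwise_key(network_key: bytes, node_id: bytes, head_id: bytes) -> bytes:
            # Both the node and its cluster head hold network_key, so each can
            # compute the same pairwise key from the two public identifiers.
            return hmac.new(network_key, node_id + head_id, hashlib.sha256).digest()

        k = pairwise_key(b"pre-distributed-secret", b"node-17", b"cluster-head-3")
        print(k.hex())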

  • RPSE: reenactment of data using polynomial-variant in sensor environment   Order a copy of this article
    by Ambika Nagaraj 
    Abstract: A sensor environment relieves the investigator's burden by monitoring the environment under study and delivering results in time; users need be concerned only with where these tiny elements are installed in the environment under study. Without any management and control, however, such networks can be infiltrated by adversaries. As a precautionary measure, encryption can be adopted: encrypted messages are insured against exposure unless the intruders acquire the decryption keys. In this study, a polynomial-variant is generated and used to encrypt the transmitted data. The variant is calculated from various properties of the deployed nodes. Adopting this protocol helps the base station to reconstruct manipulated data. The study preserves backward and forward secrecy, and the work mitigates wormhole and sinkhole attacks in the network.
    Keywords: prevention measure, wireless sensor network, polynomial-variant, wormhole attack, reconstruction of data, key generation, location-based keys, sinkhole attack.

  • Compact UWB BPF with notch-band using SIR and DGS   Order a copy of this article
    by Hassiba Louazene, Mouloud Challal, M’hamed Boulakroune 
    Abstract: This paper presents a novel ultra-wideband (UWB) bandpass filter (BPF) with a notch-band at a specified frequency, together with its equivalent circuit model (ECM). The proposed filter consists of a stepped impedance resonator (SIR), a parallel coupled feed-line on the top, and a rectangular defected ground structure (DGS) on the bottom of the structure. The notch-band can be shifted to any other desired frequency by tuning the length of the parallel coupled feed-line. Good performance is achieved in terms of wide stop-band rejection, low insertion loss, high return loss and a size (7.99 x 6 mm2) more compact than those reported in the literature. In addition, the ECM simulation results are in good agreement with full-wave EM simulations.
    Keywords: ultra-wideband; bandpass filter; stepped impedance resonator; defected ground structure; notch-band.

Special Issue on: Big Data and Cloud Computing Challenges

  • Virtual networks dependability assessment framework   Order a copy of this article
    by Jemal Abawajy, Baker Alrubaiey 
    Abstract: Advances in virtualisation technology have enabled network virtualisation, which complements server virtualisation by enabling continuous workload agility irrespective of the addressing and protocol of the underlying physical network. Despite huge benefits in both cost and accessibility, network virtualisation is susceptible to failure from a wide variety of causes. Dependability in a virtualised network environment is therefore a significant issue that needs to be addressed before the full benefits of network virtualisation can be exploited. In this paper, we propose a framework to estimate dependability risks in virtual network environments (VNEs) under variations in virtual network configurations. The proposed framework uses Reliability Block Diagrams (RBDs) and Continuous Time Markov Chains (CTMCs) to model and analyse the dependability level of a virtual network (a worked RBD example follows the keywords below). The proposed framework can be helpful in the design and construction of more dependable VNEs.
    Keywords: virtual network, substrate network, dependability, reliability, availability
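
    As a worked example of the RBD side of such an analysis (illustrative arithmetic, not the proposed framework): series blocks multiply availabilities, while parallel (redundant) blocks multiply unavailabilities.

        # Availability of series and parallel reliability block diagrams.
        def series(*avail):
            a = 1.0
            for x in avail:
                a *= x          # every block must be up
            return a

        def parallel(*avail):
            u = 1.0
            for x in avail:
                u *= (1.0 - x)  # the block set fails only if all replicas fail
            return 1.0 - u

        # e.g. a virtual link needing one substrate switch (0.999) in series
        # with two redundant substrate paths (0.99 each):
        print(series(0.999, parallel(0.99, 0.99)))  # ~0.9989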

  • HB-PPAC: hierarchy based privacy preserving access control technique in public clouds   Order a copy of this article
    by Sudha Senthilkumar, Madhu Viswanatham V 
    Abstract: Cloud computing has evolved drastically over the years. It involves deploying groups of networked remote servers and software that allow centralised data storage and online access to computer services or resources. An important problem in public clouds, however, is how to selectively share data according to fine-grained attribute-based access control policies while assuring the confidentiality of the data and preserving the privacy of users from the cloud. Many schemes and encryption algorithms have been proposed to provide this confidentiality, but most impose high computational costs and overload on the data owner. In this paper, we propose a hierarchy-based privacy-preserving access control scheme in which data is encrypted and decrypted by the cloud using a blinded encryption and decryption technique that keeps the computation cost to the owner and the user constant for any file size, making our scheme suitable for energy-deficient devices. The experimental results show that our scheme is more suitable for energy-deficient devices without compromising the security of the data.
    Keywords: access control policy; attribute-based access control; hierarchy-based; blind encryption/decryption.

  • Enhanced low latency queuing algorithm with active queue management for multimedia applications in wireless networks   Order a copy of this article
    by Rukmani Panjanathan, Ganesan Ramachandran 
    Abstract: Low-cost broadband services, widely used in recent years, have increased the demand for multimedia applications. Many of these applications require different Quality of Service (QoS) in terms of throughput and delay. In today's resource-constrained wireless networks, resource management plays a vital role in network design: to handle resources effectively and increase QoS, proper packet scheduling algorithms are needed. Packet scheduling algorithms determine the order of delivery among the packets waiting in a router's ready queue. Low-latency queuing (LLQ) is a packet scheduling algorithm that combines strict priority queuing (SPQ) with class-based weighted fair queuing (CBWFQ): LLQ places delay-sensitive applications such as voice and video in the SPQ and treats them preferentially over other traffic by processing and sending them first. In this paper, an Enhanced Low Latency Queuing (ELLQ) algorithm is proposed. An additional strict priority queue is introduced for scheduling video applications separately, alongside the dedicated SPQ for voice applications (the service order is sketched after the keywords below). QoS is further improved by integrating a congestion avoidance algorithm with ELLQ (CA-ELLQ). The performance of the proposed algorithm is compared with the existing LLQ algorithm through simulations using the OPNET Modeler. Simulation results show that the proposed algorithm outperforms the existing algorithm in terms of throughput and delay for multimedia applications.
    Keywords: low latency queuing; random early detection; multimedia applications; quality of service; scheduling algorithms; wireless networks.
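
    A hedged sketch of the service order described above, with voice served from its own strict priority queue, video from the additional strict priority queue, and remaining classes by weight; the class names and weights are illustrative assumptions:

        from collections import deque

        voice, video = deque(), deque()
        classes = {"data": deque(), "bulk": deque()}
        weights = {"data": 3, "bulk": 1}  # illustrative CBWFQ weights

        def dequeue():
            if voice:
                return voice.popleft()  # strict priority: voice always first
            if video:
                return video.popleft()  # second SPQ dedicated to video
            # serve the highest-weight backlogged class (a crude CBWFQ stand-in)
            for name in sorted(weights, key=weights.get, reverse=True):
                if classes[name]:
                    return classes[name].popleft()
            return None

        voice.append("v1"); classes["data"].append("d1")
        print(dequeue(), dequeue(), dequeue())  # v1 d1 None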

  • On the field design bug tolerance on a multi-core processor using FPGA   Order a copy of this article
    by Harini Sriraman, Pattabiraman Venkatasubbu 
    Abstract: As multi-core processors are densely packed with many functional units, they are increasingly vulnerable to hardware faults. Permanent hardware faults can be intrinsic or extrinsic. Intrinsic faults are due to the shrinking size of processor components, which causes early failure of the system. Extrinsic faults are due to incorrect design that goes undetected during processor testing. This paper addresses extrinsic faults, also known as design defects. With more components fabricated per unit area, it is not practical to verify the correctness of all components exhaustively under all scenarios; this is why design bugs escape into the system despite multiple levels of testing. These design bugs have to be handled efficiently without substantial loss of overall performance. The proposed design uses an FPGA for self-repair of design bugs arising in any part of the processor's data-path: the FPGA is reconfigured at run time to take over the functions of the faulty component. The extrinsic faults of the Intel Xeon E5 processor are analysed and categorised into five types, and an architecture and algorithm for in-field self-repair of these five types of design bug using an FPGA are proposed. To verify the effectiveness of the proposed design, one fault from each of the five categories is injected into a Verilog design of the MIPS architecture. The behaviour of the proposed design, along with area overhead calculations, is evaluated using Cadence simulation and synthesis tools; the gem5 simulator is used to calculate the time overhead. The proposed technique handles design bugs in the field with a very small area overhead of under 1%, and the execution time for handling extrinsic faults improves by 2.5% compared with existing fault-repair techniques.
    Keywords: multi-core, design bugs, fault tolerance, extrinsic faults, self-repairing and manufacturing defects, FPGA integrated design

  • A personalised movie recommendation system based on collaborative filtering   Order a copy of this article
    by Vairavasundaram Subramaniyaswamy, M Chandrasekhar, Anirudh Chella, Vijayakumar Vijayakumar, R Logesh 
    Abstract: Over the last decade, there has been a burgeoning of data owing to social media, e-commerce and the overall digitisation of enterprises. This data is exploited to make informed choices and to predict marketplace trends and patterns in consumer preferences. Recommendation systems have become ubiquitous with the penetration of internet services among the masses. The idea is to use filtering and clustering techniques to suggest items of interest to users. For a media commodity such as movies, suggestions are made by finding user profiles of individuals with similar tastes. Initially, user preference is obtained by letting users rate movies of their choice; with usage, the recommender system understands the user better and suggests movies that are more likely to be rated higher (a minimal sketch follows the keywords below). Experimental results on the MovieLens dataset yield a reliable model that is precise and generates more personalised movie recommendations than other models.
    Keywords: data mining, collaborative filtering, movie recommendation, data acquisition, personalisation
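
    A minimal user-based collaborative filtering sketch (the dataset and the cosine-similarity choice are illustrative assumptions, not the paper's exact pipeline):

        import math

        ratings = {  # user -> {movie: rating}
            "alice": {"m1": 5, "m2": 3, "m3": 4},
            "bob":   {"m1": 4, "m2": 3},
            "carol": {"m2": 2, "m3": 5},
        }

        def cosine(u, v):
            common = set(u) & set(v)
            if not common:
                return 0.0
            num = sum(u[m] * v[m] for m in common)
            den = (math.sqrt(sum(x * x for x in u.values())) *
                   math.sqrt(sum(x * x for x in v.values())))
            return num / den

        def predict(user, movie):
            # similarity-weighted average over neighbours who rated the movie
            pairs = [(cosine(ratings[user], r), r[movie])
                     for other, r in ratings.items()
                     if other != user and movie in r]
            s = sum(w for w, _ in pairs)
            return sum(w * x for w, x in pairs) / s if s else None

        print(predict("bob", "m3"))  # ~4.2: bob resembles alice more than carol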

  • An optimised background modelling for efficient foreground extraction   Order a copy of this article
    by M Sivagami, T Revathi, L Jeganathan 
    Abstract: Nowadays, analysing videos from a surveillance system in real time is very important for resolving security-related social issues, and foreground extraction and object detection are vital tasks in video analysis. In the proposed method, background modelling is treated as an optimisation problem and solved using particle swarm optimisation. The background is modelled at regular intervals of time to adapt to changes in the environment, and background subtraction is then applied between the current frame and the corresponding modelled background frame to extract the foreground. In addition, the optical-flow image is compared with the extracted foreground image to avoid false positives (FP) and false negatives (FN). The proposed foreground extraction technique for real-time videos outperforms previous algorithms with respect to extraction quality and space complexity.
    Keywords: particle swarm algorithm; foreground extraction; optical flow; GMM; K-means; fuzzy-c-means.

  • Fuzzy logic for decision-enablement: a novel context-awareness framework for smarter environments   Order a copy of this article
    by S Rajaraajeswari, R Selvarani, Raj Pethuru, P Mohanavadivu 
    Abstract: With the emergence of pioneering technologies and tools in the information and communication technology (ICT) space, our everyday environments (personal, social, professional, etc.) are being deeply ICT-enabled to become smarter in their operations and offerings. Smarter healthcare is one shining example. Provisioning real-time, precision-centric healthcare facilities for people, especially those in disabled, debilitated and diseased conditions, is being made possible by numerous advances (the growing array of smart sensors, digitised entities, wearables, robots, controllers, and other disposable and diminutive actuators). The growing variety and volume of connected devices enable remote diagnostics and real-time monitoring, measuring and management by transmitting various body health parameters to support sound decision-making and medication. In this paper, we describe an adaptive framework that leverages the pivotal power of fuzzy logic, well known for rule-based decision-making, to accelerate the systematic realisation of smarter environments, especially in the healthcare sector.
    Keywords: sensors and actuators; body area networks; smarter homes; cloud; big data analytics; internet of things; context-awareness; fuzzy logic.

  • An analysis of the performance of hash table-based dictionary implementations with different data usage models   Order a copy of this article
    by Thenmozhi Manivasagam, H Srimathi 
    Abstract: The efficiency of in-memory computing applications depends on the choice of mechanism to store and retrieve strings. Trees and tries are abstract data types (ADTs) that offer good efficiency for an ordered dictionary, while hash tables are among the ADTs that provide an efficient implementation of an unordered dictionary. The performance of a data structure depends on the hardware capabilities of the computing device, such as RAM size, cache memory size and even the speed of the physical storage media; an application running in a real or virtualised hardware environment will certainly have restricted access to memory and other resources of the real hardware. Hashing is heavily used in such applications for speed, but if they use big hash tables or other memory-intensive data structures, their performance becomes questionable and depends strongly on the allocated virtual or real resources. Further, the time taken for any operation on a data structure depends on the data usage model, and the most significant operations are very much dependent on the size of the 'character payload objects' used in dictionary-like implementations. In this work, the performance of six popular hash-table-based dictionary ADT implementations, Khash, Uthash, GoogleDenseHash, TommyHashtable, TommyHashdyn and TommyHashlin, is analysed with different data usage models under different hardware and software configurations.
    Keywords: cache, hash table, Tommy DS, Trie

  • Privacy preserving framework for brute force attacks in cloud environment   Order a copy of this article
    by Ambika Pawar, Ajay Dani 
    Abstract: The cloud model of computing will be widely adopted by different organisations if it can support a higher level of data privacy than is currently available. A higher level of data privacy is mandatory for storing and querying sensitive data in cloud-based information system applications, such as Customer Relationship Management (CRM) systems. Identity-based homomorphic encryption and tokenisation have proved efficient at providing privacy while simultaneously allowing encrypted data to be queried. However, in a cloud-based Software-as-a-Service (SaaS) model, an adversary colluding with the service provider can run brute force attacks that reveal attribute values, and detecting and preventing such attacks is a significant challenge. This paper presents a comprehensive solution using application-independent metrics consisting of different types of vulnerability measure, together with the detailed design of a system that uses these metrics to prevent brute force attacks.
    Keywords: privacy, querying, cloud computing, information systems, brute force attacks, vulnerability metrics, homomorphic encryption.

  • Achieving fine-grained access control and mitigating role explosion by using ABE with RBAC   Order a copy of this article
    by Balamurugan Balusamy, Siddharth Ramachandran, NaliniPriya Anbu 
    Abstract: Cloud systems store vast amounts of sensitive data whose access must be well regulated: a good access control policy ensures the security of this data while providing high flexibility in access management. In this paper, we introduce an access control architecture that mitigates the issue of role explosion in RBAC and achieves a high degree of fine-grained access control by combining an attribute-based encryption scheme with RBAC. In our model, we propose a user-tree with a hierarchical structure composed of groups and sub-groups to which users are assigned. These sub-groups have their own sets of attributes as well as common inherited attributes, so a user assigned to a specific sub-group receives a key with the sub-group's specific attributes as well as the inherited ones (a minimal sketch of this inheritance follows the keywords below).
    Keywords: cloud computing; cloud security; RBAC; fine-grained access control.
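
    A minimal sketch of the inheritance idea: a user's effective attribute set is the sub-group's own attributes plus everything inherited along the path to the root (the group names and attributes are illustrative assumptions):

        tree = {  # group -> (parent group, own attributes)
            "staff":    (None,      {"org:acme"}),
            "finance":  ("staff",   {"dept:finance"}),
            "auditors": ("finance", {"role:auditor"}),
        }

        def effective_attributes(group):
            attrs = set()
            while group is not None:
                parent, own = tree[group]
                attrs |= own  # common attributes are inherited from ancestors
                group = parent
            return attrs

        print(effective_attributes("auditors"))
        # {'org:acme', 'dept:finance', 'role:auditor'}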

  • Customer experience and associated customer behaviour in end-user devices and technologies   Order a copy of this article
    by Sujata Joshi, Sanjay Bhatia, Kiran Raikar, Harmanpreet Pall 
    Abstract: This paper establishes the relationship between customer experience and the customer behavioural intentions of churn, advocacy, cross-sell, up-sell and complaint for cellular service providers, for end-user devices and technologies such as smartphones, mobile internet and mobile financial services. The method incorporates determinants across the customer lifecycle that suffice to define customer experience holistically. A primary survey of 5231 respondents was conducted by questionnaire along with personal interviews, and the data was analysed using descriptive analysis as well as logistic regression tests. Results indicate a significant relationship between the customer experience of smartphone and mobile financial services users and their behavioural intentions of advocacy, churn, cross-sell, up-sell and complaint. The implications of this research can prove useful to cellular service providers in formulating their marketing, cross-sell and up-sell, churn management, and customer acquisition/retention strategies.
    Keywords: customer experience; customer behavioural intentions; smartphone; mobile internet; mobile financial services; churn; advocacy; cross-sell; up-sell; cellular service provider.

  • DCRMRP: distributed cooperative relay-based multihop routing protocol for WiMedia networks   Order a copy of this article
    by K.S. Umadevi, Arunkumar Thangavelu 
    Abstract: WiMedia plays a major role in the domain of personal area networks owing to its high data rate support. It is used for constructing home networks and for establishing virtual multimedia connectivity for video or audio sharing with high-speed data transfer. Maximising the efficient use of available resources is a key objective in wireless networks; to use resources well, parameters such as resource allocation, provisioning and monitoring need to be considered. In heavily occupied situations, mobility, interference and link quality are the main issues to be addressed: poor link quality may result in slow and unreliable network connections, so resource provisioning with good signal strength is a challenging task. By considering mobility and link quality factors along with relaying, a simple distributed cooperative relay-based routing protocol is designed to reduce resource starvation and enhance the use of WiMedia. Unlike conventional routing algorithms, the proposed approach minimises delay and, in turn, maximises throughput by choosing links of better quality.
    Keywords: WiMedia MAC, distributed reservation protocol, multihop, relay, link quality, DCRMRP.

  • Live migration of virtual machines with their local persistent storage in a data intensive cloud   Order a copy of this article
    by Abhinit Modi, Raghavendra Achar, P. Santhi Thilagam 
    Abstract: Processing large volumes of data to drive core business is nowadays a primary objective of many firms and scientific applications. Cloud computing, as a large-scale distributed computing paradigm, can cater to the needs of data-intensive applications. Among the various approaches for managing the workload on a data-intensive cloud, live migration of virtual machines is the most prominent. Existing approaches to live migration use network-attached storage, where only the run-time state needs to be transferred; live migration of virtual machines with local persistent storage, however, offers advantages such as security, availability and privacy. This paper presents an optimised approach for migrating a virtual machine along with its local storage that considers the locality of storage accesses: a count map combined with a restricted block transfer mechanism is used to minimise downtime and overhead (a sketch of the count-map idea follows the keywords below). The proposed solution is tested by varying parameters such as bandwidth, write access patterns and threshold, and the results show an improvement in downtime and a reduction in overhead.
    Keywords: data intensive; migration; local persistent storage; count map.
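
    A hedged reading of the count-map idea, based only on the abstract: blocks written frequently are deferred so they are not retransmitted repeatedly during iterative copying.

        from collections import Counter

        write_counts = Counter()  # block id -> writes observed this round

        def record_write(block):
            write_counts[block] += 1

        def blocks_to_send(dirty, threshold=3):
            # send cold dirty blocks now; defer hot ones to a later round
            return sorted(b for b in dirty if write_counts[b] < threshold)

        for b in [7, 7, 7, 9]:
            record_write(b)
        print(blocks_to_send({7, 9, 11}))  # [9, 11]: block 7 is still too hot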

  • Bloom filter-based framework for cache management in large cloud metadata databases   Order a copy of this article
    by Anitha Balaji, Saswati Mukherjee 
    Abstract: As data in the cloud computing environment has grown exponentially over the past few years, retrieving the required data quickly has become increasingly difficult. This paper proposes a probabilistic framework for efficient retrieval of data from huge datasets using a combined approach of clustering and frequent pattern analysis: a maximum frequent transaction set (MFT) algorithm based on the similarity of transactions, supported by a novel data structure called the Bloom Matrix Filter (BMF). In the proposed model, the metadata file is clustered on two levels. The first level is a base cluster, created offline while uploading the data, based on keywords using TF-IDF; the second level is a derived cluster, created online while downloading the data. Frequent transactions are generated from the run-time transaction statistics provided by BMF analysis, and the dynamic cluster is derived from these statistics (a minimal Bloom filter sketch follows the keywords below). We have implemented the model in a cloud environment, and the experimental results show that our approach is more efficient than existing search technology, increasing throughput by handling more queries efficiently with reduced latency.
    Keywords: cloud storage; clustering; metadata; Bloom filter; frequent itemset.
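
    The Bloom Matrix Filter itself is a richer structure; the following minimal Bloom filter sketch only illustrates the underlying probabilistic membership test (sizes and hash count are arbitrary choices):

        import hashlib

        class BloomFilter:
            def __init__(self, m=1024, k=3):
                self.m, self.k = m, k
                self.bits = bytearray(m)

            def _positions(self, item):
                for i in range(self.k):
                    h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(h, 16) % self.m

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p] = 1

            def __contains__(self, item):
                # no false negatives; false positives are possible
                return all(self.bits[p] for p in self._positions(item))

        bf = BloomFilter()
        bf.add("transaction-42")
        print("transaction-42" in bf, "transaction-99" in bf)  # True False (w.h.p.)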

  • Fuzzy cost probability-based suppressed flooding multi-constrained QoS multicast routing for MANETs   Order a copy of this article
    by H. Santhi, N. Jaisankar 
    Abstract: The core objective of our approach is to build a highly robust forwarding group and a stable mesh structure using a fuzzy inference system. The fuzzy inference system takes multiple interrelated QoS parameters, Residual Bandwidth (RB), Residual Energy (RE), Link Loss Ratio (LLR), End-to-end Delay (D), Number of intermediate nodes (N) and Link Expiration Time (LET), to identify strong nodes among a set of mobile nodes: a node is classified as strong when its Fuzzy Cost Probability (FCP) is high, and as weak otherwise (an illustrative scoring sketch follows the keywords below). The proposed Fuzzy Cost Probability based Multi-constrained QoS Multicast Routing (FCPMQMR) consists of two phases: the first selects forwarding nodes using the fuzzy logic technique, and the second builds a stable backbone mesh structure. In case of node failure, an alternative path from the primary route through another forwarding node is selected for communication. Simulation results demonstrate that the proposed FCPMQMR improves the packet delivery ratio by 5-10% and the success ratio by 35%, while decreasing the average end-to-end delay by 10-15% and the control overhead by 45%.
    Keywords: ad hoc networks, multicast routing, fuzzy logic, flooding, QoS, path reliability
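
    An illustrative scoring sketch in the spirit of FCP, with triangular memberships over normalised inputs; the membership shapes, thresholds and equal weighting are assumptions, not the paper's rule base:

        def tri(x, a, b, c):
            """Triangular membership on [a, c] peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fcp(rb, re, llr, delay):
            # "good" membership for each normalised input in [0, 1]
            good = [tri(rb, 0.2, 1.0, 1.8),        # residual bandwidth: higher is better
                    tri(re, 0.2, 1.0, 1.8),        # residual energy
                    tri(1 - llr, 0.2, 1.0, 1.8),   # low loss ratio is good
                    tri(1 - delay, 0.2, 1.0, 1.8)] # low delay is good
            return sum(good) / len(good)           # simple aggregate score

        print(fcp(0.9, 0.8, 0.1, 0.2) > 0.6)  # True: classify the node as strong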

  • Study on minimising energy consumption in wireless sensor networks using network coding   Order a copy of this article
    by Navaneethan Chenniappan, K. Helen Prabha 
    Abstract: Important breakthroughs in electronics and wireless communication technologies have enabled the development of large-scale wireless sensor networks (WSNs), where analysis of incoming and transmitted data can greatly increase network efficiency. In particular, for data-centric WSNs with heavy information exchange, network coding (NC) can improve broadcast efficiency by combining different incoming data with appropriate coding methods (a classic XOR example follows the keywords below). Security, however, is vital for many WSN applications, and WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture and lack of infrastructure, which impose unique security challenges and make innovative approaches desirable. In this paper, we first survey security, modulation and energy saving in WSNs, and then discuss methods for improving them through NC.
    Keywords: wireless sensor network, network coding, security, modulation, energy saving, WSN protocols
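
    The classic two-way relay example of network coding: instead of forwarding packets A and B separately, the relay broadcasts A XOR B once, and each endpoint decodes with the packet it already holds.

        a = b"hello"  # packet from node A (same length as B's for simplicity)
        b = b"world"  # packet from node B

        coded = bytes(x ^ y for x, y in zip(a, b))  # single relay broadcast

        at_b = bytes(x ^ y for x, y in zip(coded, b))  # B recovers A's packet
        at_a = bytes(x ^ y for x, y in zip(coded, a))  # A recovers B's packet
        print(at_b, at_a)  # b'hello' b'world'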

Special Issue on: ICA3PP 2015 and PRDC 2015 Advances in Parallel and Distributed Systems and Applications

  • Diagnosabilities of regular networks under three-valued comparison models   Order a copy of this article
    by Xiang Xu, Shuming Zhou, Li Xu 
    Abstract: Under the comparison model, earlier studies (Sengupta and Ree, 1990) introduced t/x- and t[x]-diagnosis strategies based on multiple-valued logic. A multiprocessor system is t/x- (respectively, t[x]-) diagnosable if all faulty units can be uniquely identified from the syndrome, provided that there are no more than t faulty units and no more than x missing (respectively, incorrect) test outcomes. In this paper, we present some determinant characterisations of the t/x-diagnosability and t[x]-diagnosability of multiprocessor systems based on regular networks.
    Keywords: multiprocessor system; three-valued models; MM* model; diagnosability; regular network.

  • A sprouting graph-based approach to analysing timed workflow processes with shared resources   Order a copy of this article
    by Yanhua Du, Ruyue Li, Benyuan Yang 
    Abstract: Recently, temporal constraint satisfiability has come to be regarded as an important criterion in business process management for guaranteeing the timely completion of workflow processes. Existing methods for this problem do not fully investigate the various modes of shared resources in workflow processes, do not provide solutions for temporal violations, or do not offer a prototype system that automatically verifies temporal constraints while considering shared resources. In this paper, we propose a new sprouting graph-based approach to analysing the temporal constraints of workflow processes with shared resources. First, to handle the various modes of shared resources, we propose corresponding patterns for recording the execution information of shared resources. Then, based on these patterns, we show how to construct a sprouting graph automatically and present a procedure for resolving temporal violations. Finally, we develop a prototype system by extending the open-source tool Platform Independent Petri net Editor (PIPE). We validate our approach and prototype system on a real-life business scenario from a manufacturing enterprise. Compared with existing work, our approach is more effective and practical because it considers the various modes of shared resources, provides solutions for temporal violations, and supports automatic analysis of temporal constraints through the prototype system.
    Keywords: timed workflow net; sprouting graph; temporal constraint; shared resource; Petri net

  • Detecting spammers using review graph   Order a copy of this article
    by Zhixiang He, Chonglin Gu, Shi Chen, Hejiao Huang, Xiaohua Jia 
    Abstract: In recent years, e-commerce has become so popular that many consumers make transactions online. To make more profit, some merchants hire spammers to give high ratings that promote certain products, or to post malicious negative reviews that defame competitors' products. Such misleading reviews are destructive to the fairness of the e-commerce environment, so it is very important to detect the spammers who post deceptive reviews; existing methods, however, have low recognition rates. In this paper, we first propose using SCTD to reduce the whole dataset, so that we can focus on the periods when spammers are most likely to operate. Then, a similarity graph is built to describe the relationships between reviewers who post reviews on the same products. Finally, we propose an iterative algorithm that calculates a spam score for each reviewer using the edge weights and key features of adjacent reviewers in the graph (an illustrative propagation sketch follows the keywords below). Experimental results show that our proposed method is much more effective for spammer detection.
    Keywords: spammer detection; review spam; similarity graph
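
    A hedged sketch of iterative scoring on a reviewer-similarity graph; the damped update rule is an illustrative assumption, not the paper's exact formula:

        graph = {  # reviewer -> {neighbour: edge weight}
            "r1": {"r2": 0.9, "r3": 0.1},
            "r2": {"r1": 0.9},
            "r3": {"r1": 0.1},
        }
        score = {r: 0.5 for r in graph}  # neutral prior
        score["r1"] = 0.9                # seed: a suspicious reviewer

        for _ in range(20):  # propagate until roughly stable
            new = {}
            for r, nbrs in graph.items():
                pulled = sum(w * score[n] for n, w in nbrs.items())
                new[r] = 0.5 * score[r] + 0.5 * pulled  # damped weighted pull
            score = new
        print(score)  # r2, tightly linked to r1, scores higher than r3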

  • Evaluation and comparison of ten data race detection techniques   Order a copy of this article
    by Zhen Yu, Zhen Yang, Xiaohong Su, Peijun Ma 
    Abstract: Many techniques for dynamically detecting data races in multithreaded programs have been proposed, but it is not yet clear how these techniques compare in terms of precision, overhead and scalability. This paper presents an experiment evaluating ten data-race detection techniques on one hundred small- and middle-scale C/C++ programs. The ten techniques, all implemented in the same Maple framework, cover both classical and state-of-the-art dynamic data-race detection (a sketch of the classic lockset discipline follows the keywords below). We compare the ten techniques with each other and try to give reasonable explanations for why some techniques are weaker or stronger than others. Evaluation results show that no single technique performs perfectly for all programs according to the three criteria. Based on the evaluation and comparison, we give suggestions on which technique is most suitable when the target program exhibits particular characteristics; later researchers can also draw on our results to construct better detection techniques.
    Keywords: concurrent testing; concurrency bugs; data race; data race detection; lockset; happens-before
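
    For one of the classical techniques compared, the lockset discipline, a minimal sketch: a variable's candidate lockset is intersected with the locks held at each access, and an empty result flags a possible race.

        locksets = {}  # shared variable -> candidate lockset

        def on_access(var, held_locks):
            if var not in locksets:
                locksets[var] = set(held_locks)   # first access initialises it
            else:
                locksets[var] &= set(held_locks)  # refine on every later access
            if not locksets[var]:
                print(f"possible race on {var!r}")

        on_access("x", {"L1"})  # thread 1 accesses x holding L1
        on_access("x", {"L2"})  # thread 2 holds L2: empty intersection, warning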

  • Reliable broadcast in anonymous distributed systems with fair lossy channels   Order a copy of this article
    by Jian Tang, Mikel Larrea, Sergio Arévalo, Ernesto Jiménez 
    Abstract: Reliable Broadcast (RB) is a basic abstraction in distributed systems because it allows processes to communicate consistently and reliably with each other, guaranteeing that all correct processes deliver the same set of messages. This abstraction has been extensively investigated in distributed systems where all processes have distinct identifiers and the communication channels are reliable. However, more and more anonymous systems are appearing, motivated by privacy, so it is worthwhile to extend RB to an anonymous system model where processes have no identifiers. On the other hand, the assumption of reliable communication channels is not always satisfied in real systems. Hence, this paper studies RB in anonymous distributed systems with fair lossy communication channels. In distributed systems, two systems are considered symmetric if they behave identically, and two components of a system are considered symmetrical if they are indistinguishable; anonymous distributed systems are therefore symmetrical, and the difficulty in designing RB algorithms lies in breaking this symmetry. In this paper, we propose using a random function to break it. First, a non-quiescent RB algorithm tolerating an arbitrary number of crashed processes is given. Then, we introduce an anonymous perfect failure detector AP*. Finally, we propose an extended, quiescent RB algorithm using AP*, in which eventually no process sends messages.
    Keywords: anonymous distributed system; asynchronous system; reliable broadcast; fair lossy communication channels; failure detector; quiescent.

  • Communication-aware task scheduling algorithm for heterogeneous computing   Order a copy of this article
    by Tehui Huang, Tao Li, Qiankun Dong, Kezhao Zhao, Wenjing Ma, Yulu Yang 
    Abstract: This paper proposes a new task scheduling algorithm for heterogeneous computing platforms, called Communication-aware Earliest Finish Time (CEFT). It combines the features of list scheduling and task duplication, assigning task priority according to the communication ratio (CR), a notion defined to represent communication cost, and adds a duplication mechanism that cooperates with CR to reduce communication overhead (the earliest-finish-time step is sketched after the keywords below). The time complexity of the algorithm is O(v²p) for v tasks and p processors. Experimental results show that CEFT improves performance, in terms of scheduling length ratio, by 11% compared with the state-of-the-art list-based algorithm PEFT and by 15.6% compared with the duplication-based algorithm HDCPD. CEFT also outperforms PEFT, HDCPD and HEFT in terms of efficiency and average surplus time.
    Keywords: list scheduling; task duplication; heterogeneous computing; communication ratio.
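
    An illustrative earliest-finish-time step of list scheduling (textbook EFT, not CEFT itself): a ready task goes to the processor that finishes it soonest, with communication charged only when parent and child sit on different processors.

        def eft(task_cost, parent_finish, parent_proc, comm_cost, proc_ready):
            best = None
            for p, ready in proc_ready.items():
                comm = 0 if p == parent_proc else comm_cost
                start = max(ready, parent_finish + comm)
                finish = start + task_cost[p]
                if best is None or finish < best[1]:
                    best = (p, finish)
            return best  # (chosen processor, finish time)

        # the task needs 3 units on p0 and 2 on p1; its parent ended at t=5 on p0
        print(eft({"p0": 3, "p1": 2}, 5, "p0", comm_cost=4,
                  proc_ready={"p0": 6, "p1": 0}))  # ('p0', 9): avoids the transfer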

  • An adaptive hybrid ARQ method for coexistence of ZigBee and WiFi   Order a copy of this article
    by Quan Liu, Xiaorui Li, Zizhong Quan, Duzhong Zhang, Wenjun Xu 
    Abstract: With the rapid development of modern communication technology, all kinds of radio technologies continue to emerge. As different wireless networks share the same spectrum, the coexistence of heterogeneous networks becomes a challenging problem, especially coexistence between ZigBee and WiFi. Much research has been done on this problem, and coexistence methods such as spectrum allocation and power control have been proposed; however, these methods are either too complex to implement or require additional hardware. This paper proposes an easy and feasible adaptive hybrid ARQ method for ZigBee, called AHA, which modifies the current FEC coding method and additionally combines sensing with adaptive retransmission. By implementing the method on ZigBee using USRP N210 devices, it is demonstrated that AHA effectively improves the communication performance of ZigBee under WiFi interference.
    Keywords: heterogeneous networks, coexistence, ZigBee, WiFi, adaptive

  • Thwarting Android app repackaging by executable code fragmentation   Order a copy of this article
    by Ruxia Fan, Dingyi Fang, Zhanyong Tang, Xiaojiang Chen, Fangyuan Liu, Zhengqiao Li 
    Abstract: With the increasing popularity and adoption of Android-based smartphones, there is more and more Android malware in the app marketplaces, and most of it consists of repackaged versions of legitimate applications (apps). Existing solutions against such malware have mostly focused on detecting repackaged applications, which tends to be a postmortem measure. Similar to packing protections on the x86 architecture, Android app packing mechanisms have been proposed to enable apps to defend themselves against repackaging. However, current app packing systems cannot prevent repackagers from dumping an app's executable file (dex file), because this file is always loaded into process memory as an intact file in plaintext; once the repackager has dumped the intact dex file, repackaging is enabled again. To address this problem, we propose a more effective protection model, DexSplit, to prevent legitimate applications from being repackaged. Informed by the weakness of the current app packing model, DexSplit keeps the protected dex file in several pieces throughout the application's lifecycle, making it difficult for repackagers to obtain the intact dex file. Experiments with a DexSplit prototype on six typical Android apps show that DexSplit effectively defends against repackaging threats on the Android platform with reasonable performance overhead.
    Keywords: Android security; malware; repackaging; memory dump.

  • A service industry perspective on software defined radio access networks   Order a copy of this article
    by Casimer DeCusatis, Ioannis Papapanagiotou 
    Abstract: Despite the rapid growth of service science, relatively little attention has been paid to the service architecture requirements of software defined radio access networks (SDRAN). In this concept paper, we propose repurposing cloud computing network services to address issues specific to SDRAN. In particular, a multi-level backhaul slicing approach derived from cloud computing networks is discussed as a way to mitigate interference-limited networks with a frequency reuse factor of one. An experimental demonstration of the control plane implementation in a virtual cloud network is presented, and implications for service provider development and training are also discussed.
    Keywords: software defined networks; radio access networks; SDRAN; SSME; RAN; SDN

  • U-Search: usage-based search with collective intelligence   Order a copy of this article
    by Pengfei Yin, Guojun Wang, Wenjun Jiang 
    Abstract: With the emergence of big data in networking environments, searching for the most suitable information for users is becoming more challenging. We propose a usage-based search model, U-Search, to retrieve high-quality resources that meet users' requirements. The model has two main processes: First-stage Retrieval (FRet) and Second-stage Retrieval (SRet). In FRet, general users collect useful resources from a variety of channels and import them into the resource-sharing platform after resource checking. In SRet, a user first inputs a search purpose; the model then matches the purpose on the platform by judging the relevance between the user profile and resource profiles, and finally generates the result list by integrating the usage of influential users with the calculated relevance. Experiments on an offline dataset show that our model can provide suitable search results to users.
    Keywords: usage-based search; influential user discovery; resource ranking; personalised search; collective intelligence

  • An energy-aware task consolidation algorithm for cloud computing data centres   Order a copy of this article
    by Yonghua Xiong, Ya Chen, Keyuan Jiang, Yongbing Tang 
    Abstract: Energy efficiency has become an increasingly prominent issue in cloud computing, and task consolidation is an effective way to maximise the use of cloud computing resources and reduce energy consumption. This paper presents an improved energy-aware task consolidation algorithm that optimises the scheduling of tasks in cloud computing data centres. The algorithm is based on the linear relationship between energy consumption and CPU usage. Instead of consolidating tasks onto a service node until its CPU usage reaches 100%, our algorithm uses an optimal point at 70% CPU usage and preferentially assigns every task to service nodes that satisfy this threshold (the placement rule is sketched after the keywords below). Our experiments demonstrate that the algorithm can significantly reduce the energy consumption of cloud computing data centres without performance degradation, and that it performs well compared with the ECTC (energy-conscious task consolidation) approach.
    Keywords: cloud computing, energy-aware, task consolidation, data centre
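
    A sketch of the 70%-threshold placement rule described above, with a linear power model; the coefficients and the fallback rule are illustrative assumptions:

        nodes = [{"id": 0, "cpu": 0.0}, {"id": 1, "cpu": 0.0}]

        def place(task_cpu, threshold=0.70):
            # prefer a node whose usage stays at or below the optimal point
            for n in nodes:
                if n["cpu"] + task_cpu <= threshold:
                    n["cpu"] += task_cpu
                    return n["id"]
            # otherwise fall back to any node with spare capacity
            for n in nodes:
                if n["cpu"] + task_cpu <= 1.0:
                    n["cpu"] += task_cpu
                    return n["id"]
            return None

        def power(u, idle=100.0, peak=250.0):
            return idle + (peak - idle) * u  # linear in CPU usage (watts)

        for t in [0.3, 0.3, 0.3]:
            print(place(t))                      # 0, 0, 1
        print([power(n["cpu"]) for n in nodes])  # [190.0, 145.0]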

  • Kernel mechanisms for turning a 3D MMOG from single into multi-server architecture   Order a copy of this article
    by Mei-Ling Chiang, Ching-Chung Tseng, Yen-Chu Chang, Shu-Chun Chao, Han Wang 
    Abstract: Popular Massively Multiplayer Online Games (MMOGs) are usually designed with rich game displays, and their execution consumes a great deal of system resources and network bandwidth. Because of hardware restrictions, a game designed with a single-server architecture cannot serve a large number of players at the same time; to increase service capacity, one must either upgrade the server with more powerful equipment or redesign the game software for multi-server processing. This paper experiments with using operating system mechanisms to effectively support 3D MMOGs: instead of modifying the game software, the proposed mechanisms turn a game originally designed for a single server into one with a server cluster architecture. Minecraft, a very popular 3D MMOG designed with a single-server architecture, is used in the experiments. Two main mechanisms are developed in our experimental game server cluster. First, a game connection handoff mechanism transparently migrates a client's live game connection between servers, keeping the game connection and game state continuous after migration. Second, a load distribution policy treats a huge virtual world as a set of regions and distributes regions and players to servers in a balanced way; for scalability and extensibility, the policy automatically adjusts the regions assigned to servers when servers are added to or removed from the game cluster, with minimal effect on the servers involved. Experiments on our server cluster with a real 3D game, i.e. Minecraft, show that under some constraints the original single-server Minecraft game is successfully turned into a server cluster architecture, significantly increasing system capacity, efficiency and scalability.
    Keywords: Linux kernel; massively multiplayer online games; server cluster; load balance

  • Communication-aware virtual machine migration in cloud data centres   Order a copy of this article
    by Weilian Xue, Wenxin Li, Heng Qi, Keqiu Li 
    Abstract: Virtual machine (VM) migration is widely used in cloud data centres for achieving load balance or reducing energy consumption. Despite the potential benefits of VM migration, relatively little work has taken inter-VM communication into account, with researchers mainly focusing on reducing the network cost of migration or the energy cost. In this paper, we focus on the VM migration problem in the context of large volumes of inter-VM communication traffic. To this end, we make decisions on three questions: when, which and where VMs shall be migrated. Specifically, we formulate an optimisation that minimises the number of migrations as well as the cost of both inter-VM traffic and migration traffic. To solve it, we design an efficient communication-aware VM migration algorithm that seamlessly combines three types of resource: CPU, memory and bandwidth. Finally, we conduct comprehensive experiments based on the CloudSim simulator. Extensive simulation results show that the proposed method outperforms the state-of-the-art RAIL and Sandpiper in terms of three metrics: the number of migrations, the inter-VM communication cost and the migration network cost.
    Keywords: data centre; virtual machines; inter-VM communication; VM migration.

  • Para-Join: an efficient parallel method for string similarity join   Order a copy of this article
    by Cairong Yan 
    Abstract: In the big data era, a significant challenge in string similarity join is finding all similar pairs efficiently. In this paper, we propose an efficient parallel method, called Para-Join, which first splits the input into small sets according to the joint-frequency vector and the interval vector of each string, and then joins the pairs for each small set in parallel (a simplified partition-and-join sketch follows the keywords below). The Para-RR and Para-RS algorithms are proposed to extend the partition-based algorithm and adopt multi-threading to implement the string similarity join within each set and between two different sets, respectively. We prove that Para-Join not only avoids duplicate computation but also ensures the completeness of the result, and we put forward an effective pruning strategy to improve performance. Experimental results show that our method achieves high efficiency and significantly outperforms state-of-the-art approaches.
    Keywords: string similarity join; partition-based; parallel computation; algorithm; multi-threading
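
    A simplified partition-and-join sketch: strings are bucketed by length so comparisons stay within a tight length bound, and the buckets are joined in parallel (the bucketing key and similarity measure are stand-ins for Para-Join's vectors):

        from concurrent.futures import ThreadPoolExecutor
        from difflib import SequenceMatcher

        strings = ["kitten", "sitten", "mitten", "flour", "flower"]
        tau = 0.8  # similarity threshold

        buckets = {}
        for s in strings:
            buckets.setdefault(len(s), []).append(s)

        def join_bucket(length):
            pool, out = buckets.get(length, []), []
            for i, a in enumerate(pool):
                for b in pool[i + 1:]:  # pairs within the bucket, no duplicates
                    if SequenceMatcher(None, a, b).ratio() >= tau:
                        out.append((a, b))
                for b in buckets.get(length + 1, []):  # adjacent-length pairs
                    if SequenceMatcher(None, a, b).ratio() >= tau:
                        out.append((a, b))
            return out

        with ThreadPoolExecutor() as ex:
            for pairs in ex.map(join_bucket, sorted(buckets)):
                print(pairs)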

  • Hardware support for message-passing in chip multi-processors   Order a copy of this article
    by Yanhua Li, Youhui Zhang, Cihang Jiang, Weiming Zheng 
    Abstract: Compared with the traditional shared-memory programming model, message-passing models for chip multiprocessors (CMPs) have distinct advantages owing to their relative ease of validation and greater portability. This paper proposes a design that integrates a message-passing engine into each router of the network-on-chip, along with a programming-friendly message-passing interface for these engines. Combined with a DMA mechanism, the proposed design uses on-chip RAM as an intermediary message buffer and largely frees the CPU core from message-passing operations. The detailed design and implementation, including register-transfer-level (RTL) descriptions of the engine, are presented. Evaluations show that, compared with a software-based solution, the design can decrease message-passing latency by one to two orders of magnitude. Co-simulation also demonstrates that the proposed design effectively boosts the performance of on-chip point-to-point communications, while power and chip-area consumption both remain limited.
    Keywords: message-passing; chip-multiprocessor; network-on-chip; hardware support

  • Reboot-based recovery of performance anomalies in adaptive bitrate video-streaming services   Order a copy of this article
    by Carlos Cunha, Luis Silva 
    Abstract: Performance anomalies are a common type of failure in internet servers. Overcoming these failures without server downtime is of the utmost importance in video-streaming services, which incur large user-abandonment costs when failures occur after users have watched a significant part of a video. Reboot is the most popular and effective technique for overcoming performance anomalies, but it takes several minutes from start until the server is warmed up again and running at full capacity; during that period, the server is unavailable or offers limited capacity to process end-user requests. This paper presents a recovery technique for performance anomalies in HTTP streaming services that relies on container-based virtualisation to implement an efficient multi-phase server reboot, minimising service downtime. The recovery process includes analysis of variance of request-response times to delimit the server warm-up period, after which the server runs at full capacity. Experimental results show that the virtual container recovery process completes in 72 seconds, in contrast with the 434 seconds required for a full operating system recovery; both recovery types generate service downtimes imperceptible to end users.
    Keywords: performance anomalies; reboot; recovery; adaptive bitrate; video-streaming; self-healing

  • An energy-aware cooperative content distributed strategy based on MST in the mobile environment   Order a copy of this article
    by Nao Wang, Gaocai Wang, Tianxiao Xie 
    Abstract: In a mobile environment, mobile terminals consume less energy if they download content of common interest from each other using short-range wireless transmission technology. This paper focuses on an energy-aware cooperative content distribution strategy for such mobile networks. Considering the characteristics of wireless networks and the energy consumption of mobile terminals, we construct an energy consumption optimisation model with an energy-consumption fairness constraint, and solve it using a Minimum Spanning Tree (MST)-based distributed algorithm (a centralised MST sketch follows the keywords below). We obtain simulation results with the NS-2 tool and compare them with a minimal-energy-consumption scheme and a no-cooperation scheme in terms of total energy consumption and fairness of energy consumption. The performance analysis shows that the proposed strategy significantly decreases the total energy consumption of mobile terminals while balancing energy consumption among them.
    Keywords: mobile environment; energy awareness; energy consumption optimisation; fairness.
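
    The centralised textbook version of the MST step (the paper's algorithm is distributed) over an energy-cost graph, using Prim's algorithm:

        import heapq

        graph = {  # node -> [(energy cost, neighbour), ...]
            "a": [(1, "b"), (4, "c")],
            "b": [(1, "a"), (2, "c")],
            "c": [(4, "a"), (2, "b")],
        }

        def prim(start):
            visited = {start}
            heap = [(c, v, start) for c, v in graph[start]]
            heapq.heapify(heap)
            tree = []
            while heap and len(visited) < len(graph):
                cost, v, u = heapq.heappop(heap)  # cheapest edge leaving the tree
                if v in visited:
                    continue
                visited.add(v)
                tree.append((u, v, cost))
                for c, w in graph[v]:
                    heapq.heappush(heap, (c, w, v))
            return tree

        print(prim("a"))  # [('a', 'b', 1), ('b', 'c', 2)]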

  • Optimising the deployment of virtual machine image replicas in cloud storage clusters   Order a copy of this article
    by Cong Xu, Jiahai Yang, Jianping Weng, Ye Wang, Hui Yu 
    Abstract: With the immense proliferation of cloud-based applications, the diversity and dynamicity of cloud services exacerbate the complexity of VM (Virtual Machine) instance provisioning. To address this performance issue, a series of solutions has been proposed to optimise VM image replica deployment in object storage clusters and speed up VM instance installation. However, most existing image replica deployment mechanisms concentrate either on optimising the locations of VM image replicas based on the performance of the physical platform, or on resizing the different image replica sets based on service popularity, without considering both platform and application features together. The purpose of this paper is to optimise the deployment of VM image replicas in cloud storage clusters. We study the impact of both application characteristics and platform features on the VM instance installation rate, and put forward a stochastic model of the performance of a cloud storage cluster. Based on this model, we propose a novel mechanism for optimising the deployment of VM image replicas. The mechanism has been implemented in a real cloud storage cluster built with OpenStack and Ceph, and experimental results show that it improves the overall throughput of the storage cluster and effectively speeds up image installation operations.
    Keywords: object storage cluster; optimal deployment; OpenStack Glance; image replica

  • Addressing statistical significance of fault injection: empirical studies of the soft error susceptibility   Order a copy of this article
    by Qiang Guan, Nathan DeBardeleben, Sean Blanchard, Song Fu 
    Abstract: Soft errors are becoming an important issue in computing systems. Near-threshold voltage (NTV), reduced circuit sizes, high performance computing (HPC), and high-altitude computing all present interesting challenges in this area. Much of the existing literature has focused on techniques to mitigate and measure soft errors at the hardware level. Instead, in this paper we explore the soft-error susceptibility of three common sorting algorithms at the software layer. We focus on the comparison operator and use our software fault injection tool to place faults with fine precision during the execution of these algorithms. We explore how algorithm susceptibility varies with input and bit position, and relate the faults back to the source code to study how algorithmic decisions affect the reliability of the codes. Finally, we look at the number of fault injections required for statistical significance. Using equations that are standard practice in hardware fault injection experiments, we calculate the number of injections that should be required to achieve confidence in our results. Then we show, empirically, that more fault injections are required before we gain confidence in our experiments.
    Keywords: soft error; fault injection; resilience; vulnerability; sorting algorithms.
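
    The sample-size calculation the abstract alludes to is commonly done with the finite-population formula sketched below. The population size, margin of error and confidence level shown are illustrative assumptions; the paper's exact equations may differ.

        import math

        def required_injections(N, e=0.01, t=1.96, p=0.5):
            """Sample size for a fault-injection campaign drawn from a
            population of N candidate faults, with margin of error e,
            confidence coefficient t (1.96 ~ 95%), and estimated fault
            activation probability p (0.5 is the conservative worst case)."""
            return math.ceil(N / (1 + e**2 * (N - 1) / (t**2 * p * (1 - p))))

        # e.g. a campaign over 10 million candidate fault sites
        print(required_injections(10_000_000))   # -> 9595 for these parameters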

Special Issue on: Advances in Parallel Methods for Scientific Computing

  • Parallel solution of the discretised and linearised G-heat equation   Order a copy of this article
    by Pierre Spiteri, Amar Ouaoua, Ming Chau, Hacene Boutabia 
    Abstract: The present study deals with the numerical solution of the G-heat equation. Since the G-heat equation is defined in an unbounded domain, we first show that the solution of the G-heat equation defined in a bounded domain converges to the solution of the G-heat equation as the measure of the domain tends to infinity. Moreover, after time discretisation by an implicit time-marching scheme, we define a method of linearisation of each stationary problem, which leads to the solution of a large-scale algebraic system. Then we analyse, in a unified approach, the convergence of the sequential and parallel relaxation methods. Finally, we present the results of numerical experiments.
    Keywords: G-heat equation; relaxation methods; parallel computing; asynchronous iteration; financial application.
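
    For readers unfamiliar with relaxation methods, the sketch below shows classical Jacobi relaxation on a 2-D Poisson-type system, the kind of large sparse system an implicit time step produces. It is illustrative only; the paper's G-heat discretisation and its asynchronous parallel variants are more involved.

        import numpy as np

        def jacobi_relaxation(u, f, h, tol=1e-8, max_iter=10_000):
            """Jacobi relaxation for a 2-D Poisson-type system; the
            boundary values of u are held fixed."""
            u = u.copy()
            for _ in range(max_iter):
                u_new = u.copy()
                u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                            + u[1:-1, :-2] + u[1:-1, 2:]
                                            - h * h * f[1:-1, 1:-1])
                if np.max(np.abs(u_new - u)) < tol:   # stop when stationary
                    return u_new
                u = u_new
            return u

        n = 64
        u = jacobi_relaxation(np.zeros((n, n)), np.ones((n, n)), h=1.0 / (n - 1))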

Special Issue on: Advances in Information Security and Networks

  • Full secure identity-based encryption scheme over lattices for wireless sensor networks in the standard model   Order a copy of this article
    by Jizhong Wang, Chunxiao Wang 
    Abstract: This paper proposes a lattice-based identity-based encryption scheme that can be used in wireless sensor networks to achieve secure communication. More precisely, a known lattice-based chosen-plaintext secure encryption scheme and the Bonsai trees primitive are used to design an identity-based encryption (IBE) scheme. To reduce the public key size of the proposed IBE scheme, a public-matrix selection rule is used that reduces the public key size to nearly half of that of a known lattice-based scheme. Under the hardness of the decision variant of the learning with errors (LWE) problem, we prove that the proposed IBE scheme is indistinguishable against adaptive chosen-identity and chosen-plaintext attacks in the standard model. Moreover, the message-to-ciphertext expansion factor of the scheme is also controlled efficiently and is close to that of Gentry's scheme. Owing to the quantum intractability of the LWE problem on which the scheme is based, the proposed IBE scheme remains secure even in the quantum era.
    Keywords: wireless sensor networks; lattice-based cryptography; learning with errors problem; standard model
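
    As background, a toy Regev-style LWE encryption of a single bit is sketched below, with deliberately tiny, insecure parameters chosen only so the arithmetic is easy to follow; the paper's IBE scheme builds on LWE via Bonsai trees and is considerably richer.

        import numpy as np

        rng = np.random.default_rng(0)

        n, m, q = 16, 64, 4093                  # toy, insecure parameters

        A = rng.integers(0, q, size=(n, m))     # public matrix
        s = rng.integers(0, q, size=n)          # secret key
        e = rng.integers(-2, 3, size=m)         # small error terms
        b = (s @ A + e) % q                     # public key is (A, b)

        def encrypt(bit):
            r = rng.integers(0, 2, size=m)      # random binary vector
            return (A @ r) % q, (b @ r + bit * (q // 2)) % q

        def decrypt(u, c):
            d = (c - s @ u) % q                 # = e.r + bit*(q//2) mod q
            return int(min(d, q - d) > q // 4)  # round away the small noise

        u, c = encrypt(1)
        assert decrypt(u, c) == 1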

  • The fault diagnosis method of RVM based on FOA and improved multi-class classification algorithm   Order a copy of this article
    by Kun Wu, Jianshe Kang, Kuo Chi, Xuan Wang 
    Abstract: In order to address current problems in fault diagnosis and improve on traditional diagnosis models, an intelligent fault diagnosis approach for the relevance vector machine (RVM), based on the fruit fly optimisation algorithm (FOA) and an improved multi-class classification algorithm, is proposed. The optimal parameter values of the RVM kernel function are determined by the FOA, and an improved multi-class classification algorithm for the RVM, based on the traditional one-against-one (OAO) and one-against-rest (OAR) algorithms, is presented. This classification method translates the multi-class classification problem into multiple three-class classification problems to accelerate the running speed while maintaining high classification accuracy. Theoretical analysis and experimental results both demonstrate that the proposed method outperforms traditional methods in diagnosis accuracy and running time, with greater model sparsity and higher diagnosis efficiency.
    Keywords: fault diagnosis; relevance vector machine; fruit fly optimisation algorithm; multi-class classification

  • Secure data outsourcing scheme in cloud computing with attribute-based encryption   Order a copy of this article
    by Shuaishuai Zhu, Yiliang Han 
    Abstract: In our IT society, cloud computing is clearly becoming one of the dominating infrastructures for enterprises as well as end users. As more cloud-based services become available to end users, their oceans of data are outsourced to the cloud as well. Without special mechanisms, the data may be leaked to a third party for unauthorised use. Most published work on cloud computing emphasises computing utility or new types of application. But in the view of cloud users, such as traditional big companies, data in cloud computing systems tends to be out of their control and privacy-fragile, so most of the data they outsource is of low importance. A mechanism to guarantee the ownership of data is required. In this paper, we analyse a couple of recently presented scalable data management models to describe the storage patterns of data in cloud computing systems. Then we define a new tree-based dataset management model to solve the storage and sharing problems in cloud computing. A set of operation strategies, including data encryption, data boundary maintenance and data proof, is derived from the viewpoints of the different entities in the cloud. The behaviour of different users is controlled by view management on the tree. Based on these strategies, a flexible data management mechanism is designed in the model to guarantee entity privacy, data availability and secure data sharing.
    Keywords: cloud computing; outsourcing data; data privacy; database management

  • An optimisation research on two-dimensional age-replacement interval of two-dimensional product   Order a copy of this article
    by Yutao Wu, Wenyuan Song, Yongsheng Bai, Yucheng Han, Wenbin Cao 
    Abstract: Because the failures of a two-dimensional product are affected by both calendar time and usage time, traditional one-dimensional preventive maintenance cannot meet the actual maintenance demands of such products. This paper therefore proposes a two-dimensional preventive maintenance strategy for two-dimensional products, making maintenance decisions based jointly on calendar time and usage time. Firstly, we establish the failure rate function of a two-dimensional product and analyse the two-dimensional preventive maintenance process in detail; we then propose a cost model and an availability model from the economic and mission viewpoints over a finite horizon. Based on the cost and availability models, and through examples, we derive the optimal two-dimensional age-replacement interval by a numerical algorithm and particle swarm optimisation, and verify the models' applicability and validity through comparison and analysis. Lastly, we give a brief discussion of future research on two-dimensional maintenance.
    Keywords: two-dimensional product; two-dimensional age-replacement; math model; maintenance interval
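
    A minimal particle-swarm sketch of the optimisation step: particles search over the two-dimensional replacement interval (T, U). The cost-rate function below is a hypothetical stand-in, not the paper's cost or availability model.

        import numpy as np

        rng = np.random.default_rng(1)

        def cost_rate(x):
            """Hypothetical expected cost per unit time as a function of
            the replacement interval (T, U) = (calendar time, usage)."""
            T, U = x[..., 0], x[..., 1]
            return (50 + 0.8 * T**1.5 + 0.5 * U**1.4) / (T + 0.6 * U + 1e-9)

        # Minimal particle swarm optimisation over (T, U)
        n, dims, iters = 30, 2, 200
        lo, hi = 1.0, 60.0
        pos = rng.uniform(lo, hi, (n, dims))
        vel = np.zeros((n, dims))
        pbest, pbest_val = pos.copy(), cost_rate(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random((2, n, dims))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            val = cost_rate(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]

        print("optimal (T*, U*):", gbest)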

  • Extensional schemes of multipartite non-interactive key exchange from multilinear maps and their applications   Order a copy of this article
    by Huiwen Jia, Yupu Hu 
    Abstract: The question of generalising the celebrated two-party non-interactive key exchange (NIKE), the Diffie-Hellman protocol, to a multipartite setting was left as an important open problem. In 2003, Boneh and Silverberg put forward a theoretical construction of a multipartite NIKE protocol from a new notion called multilinear maps. In their protocol, however, the number of users N and the multilinearity k are related by N = k+1, so the system must initialise another multilinear map whenever the number of users who want to exchange a session key changes. In this paper, we describe two extensional schemes of multipartite NIKE that enable any group of up to N users to derive a common shared key from an (N-1)-multilinear map. In addition, using our extensional schemes, we show a concrete scenario: the establishment of an arbitrary discussion group within a user group, together with a privacy-preserving version of it. Furthermore, we analyse the schemes' security.
    Keywords: multipartite non-interactive key exchange; multilinear maps; MCDH assumption

  • Dynamic combined with static analysis for mining network protocols' hidden behaviour   Order a copy of this article
    by YanJing Hu 
    Abstract: The hidden behaviour of unknown protocols is becoming a new challenge in network security. This paper takes both captured messages and the binary code that implements a protocol as the objects of study, and uses dynamic taint analysis combined with static analysis for protocol analysis. Firstly, we monitor and analyse the process by which the protocol program parses a message in HiddenDisc, a virtual-platform prototype system that we developed, and record the protocol's public behaviour; then, based on our proposed hidden-behaviour perception and mining algorithm, we statically analyse the protocol's hidden-behaviour trigger conditions and hidden-behaviour instruction sequences. According to the trigger conditions, new protocol messages carrying the sensitive information are generated, and the hidden behaviours are executed by dynamic triggering. The HiddenDisc prototype system can sense, trigger and analyse a protocol's hidden behaviour. Based on the statistical analysis results, we propose a method for evaluating protocol execution security. The experimental results show that the present method can accurately mine a protocol's hidden behaviour and can evaluate the execution security of an unknown protocol.
    Keywords: protocol reverse analysis; protocols' hidden behaviour; protocol message; protocol software

Special Issue on: Algorithm and Technology Advances for Future Internet

  • UOSM: user-oriented semi-automatic method of constructing domain ontology   Order a copy of this article
    by Chao Qu, Fagui Liu, Dacheng Deng, Ruifen Yuan 
    Abstract: Based on an analysis of the main existing ontology construction methods and tools, we propose a user-oriented method of semi-automatic domain ontology construction. The method builds a descriptor set according to the user's requirements and establishes a hierarchical structure based on it. Users can construct ontologies with this method without the participation of experts. Additionally, the method has advantages for the expansion and collaborative development of ontologies.
    Keywords: domain ontology; ontology construction; semi-automatic; user-oriented

  • Cloud workflow scheduling algorithm based on reinforcement learning   Order a copy of this article
    by Delong Cui, Zhiping Peng, Wende Ke, Jinglong Zuo 
    Abstract: How to fairly schedule multiple workflows with different priorities, submitted at different times, has become an increasing concern in Workflow Management Systems (WMS). To solve this problem, a novel workflow scheduling algorithm based on reinforcement learning is proposed in this study. In our scheme, we first define the basic reinforcement-learning concepts in cloud computing, such as the state space, action space and immediate reward. Then single-DAG and multiple-DAG cloud workflow scheduling algorithms based on reinforcement learning are designed, respectively. Reinforcement learning learns a policy that maximises the cumulative long-term reward through repeated trial-and-error interactions with the cloud computing environment. Finally, we analyse the algorithm's performance using queuing theory and test the proposed scheme on real cloud workflows. Our results demonstrate, on the one hand, that the proposed scheme can reasonably schedule multiple DAGs with multiple priorities and improve resource utilisation and, on the other hand, that the optimisation objective function achieves fair workflow scheduling in a cloud computing environment.
    Keywords: multiple DAGs; reinforcement learning; workflow scheduling; cloud computing
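
    A toy tabular Q-learning loop is sketched below to make the reinforcement-learning ingredients (state space, action space, immediate reward) concrete; the environment and reward here are hypothetical placeholders, not the paper's DAG scheduling model.

        import numpy as np

        rng = np.random.default_rng(2)

        n_states, n_actions = 10, 3          # e.g. load buckets x candidate VMs
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1

        def step(state, action):
            """Hypothetical environment: reward favours a less-loaded VM."""
            reward = -abs(state - 3 * action) - rng.random()
            next_state = rng.integers(n_states)
            return reward, next_state

        state = 0
        for _ in range(5000):
            if rng.random() < eps:                   # epsilon-greedy exploration
                action = rng.integers(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            reward, nxt = step(state, action)
            # Q-learning update: move towards the bootstrapped return
            Q[state, action] += alpha * (reward + gamma * Q[nxt].max()
                                         - Q[state, action])
            state = nxt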

  • Quick convergence algorithm of ACO based on convergence grads expectation   Order a copy of this article
    by Zhongming Yang, Yong Qin, Yunfu Jia 
    Abstract: While ACO (ant colony optimisation) can find the optimal path in a network, it requires too many iterations and its convergence is very slow. This paper proposes Q-ACO QoSR, based on convergence expectation, to meet the real-time and efficiency requirements of QoS routing. The algorithm defines an index expectation function for a link, and proposes the notions of convergence expectation and convergence grads. For the multi-constraint QoS routing model, the algorithm controls the iteration and searches for the optimal path that meets the QoS constraints under a higher degree of convergence. By comparing convergence grads, the algorithm finds the optimal path faster and with higher probability, improving both routing ability and convergence speed. This rapid path-finding algorithm can be applied not only to routing but also to other optimisation problems, such as task scheduling and load balancing (17-18) with ant colony algorithms in the field of cloud computing. During task scheduling, the expectation function is determined by constraint conditions such as execution speed, scheduling time and computation cost. The algorithm reaches the optimal solution by controlling the number of iterations required for convergence.
    Keywords: ACO; QoS; QoSR; CG expectation; Q-ACO
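
    The compact ACO sketch below finds a cheapest path on a small random graph and stops early once the pheromone distribution has concentrated, a crude stand-in for the convergence-grads criterion described above; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        n = 6
        w = rng.uniform(1, 10, (n, n))          # random symmetric link costs
        w = (w + w.T) / 2
        np.fill_diagonal(w, np.inf)
        tau = np.ones((n, n))                   # pheromone trails
        alpha, beta, rho, ants = 1.0, 2.0, 0.3, 20

        def tour(src, dst):
            """Walk from src towards dst, never revisiting a node."""
            path, node, visited = [src], src, {src}
            while node != dst:
                cand = [j for j in range(n) if j not in visited]
                p = np.array([tau[node, j] ** alpha * (1.0 / w[node, j]) ** beta
                              for j in cand])
                node = cand[rng.choice(len(cand), p=p / p.sum())]
                visited.add(node)
                path.append(node)
            return path

        for it in range(100):
            tau *= (1 - rho)                        # evaporation
            for _ in range(ants):
                path = tour(0, n - 1)
                cost = sum(w[a, b] for a, b in zip(path, path[1:]))
                for a, b in zip(path, path[1:]):
                    tau[a, b] += 1.0 / cost         # pheromone deposit
            if tau.max() / tau.mean() > 20:         # crude convergence test
                break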

  • Short-term vegetable prices forecast based on improved gene expression programming   Order a copy of this article
    by Lei Yang, Kangshun Li, Wensheng Zhang, Yaolang Kong 
    Abstract: Gene expression programming (GEP) is an evolutionary algorithm based on genotype and phenotype, commonly used in network optimisation and prediction. Aiming at the problem that traditional GEP is susceptible to noise interference, leading to premature convergence and local optima, this paper proposes an improved GEP algorithm that adds 'inverted series' and 'extraction' operators. The improved algorithm effectively increases gene utilisation, converges faster and to higher precision, and avoids premature convergence. Taking the price trends of the Chinese vegetables mooli, scallion, white gourd, eggplant, green pepper and potato as examples, this paper shows how to solve the forecasting problem with gene expression programming by constructing time series data with a unified time interval and normalising it. Through training and model construction, it realises the simulation and forecasting of price trends. The experimental results show that the improved GEP algorithm is fast and achieves high forecasting precision.
    Keywords: gene expression programming; improved gene expression programming; short-term vegetable prices forecast; use of genes; gene extraction

  • High-performance energy-saving policy based on IND   Order a copy of this article
    by Lin Liu, JingGuo Dai 
    Abstract: The IND (Intelligent Network Disk) storage solution has the advantages of intelligence and low cost, and can be widely used in mass storage and portable devices; however, its energy cost is a key problem. Traditional energy-saving policies lead to performance delays because of frequent changes in component status. This research focuses on the performance loss that RAM suffers under the current energy-saving policy and designs an optimised energy management policy for IND RAM. In our method, a replica space is established in the memory of the NIC and a copy of frequently read, read-only data is placed in it, thereby reducing the frequency of status changes in the IND's memory bank during I/O and extending the time the IND memory and bus spend in low-power states, so that more energy is saved. Meanwhile, the copy of frequently read data in the replica space reduces transfers of that data over the bus and the bus transfer time, which improves system performance. The experimental results show that the optimised policy saves more than 20% of memory energy compared with the traditional IND energy-saving policy, and improves the I/O performance of the IND system by more than 10%. Additionally, the strategy can be extended to similar architectures, giving it good theoretical and practical value.
    Keywords: IND; energy saving; high performance; dynamic power management

  • A privacy-preserving and fine-grained access control scheme in DaaS based on efficient DSP re-encryption   Order a copy of this article
    by Jin Yang 
    Abstract: Database as a service (DaaS) is an important cloud service, and preserving privacy is one of its most crucial needs. To preserve privacy in DaaS more efficiently and safely, a new access control scheme is proposed. Firstly, some definitions of data privacy value are introduced and used to formulate a data table. Then, an efficient DSP re-encryption mechanism is constructed based on the proposed data privacy value. Furthermore, the traditional DaaS service framework is improved to achieve more flexible and fine-grained access control. Finally, by combining the proposed mechanism with an access control policy, our new access control scheme is carefully designed. The correctness and security analyses show that the new scheme is correct and IND-CCA secure in the random oracle model; that is, it can effectively prevent collusion attacks and protect the privacy of data owners and data users.
    Keywords: database as a service; proxy re-encryption; privacy preserving; access control; IND-CCA security

  • A parallel immune genetic algorithm for community detection in complex networks   Order a copy of this article
    by Lu Xiong, Kangshun Li, Lei Yang 
    Abstract: This paper proposes a complex-network community discovery method based on a parallel immune genetic algorithm, addressing the low efficiency, slow convergence and population degradation of community mining methods based on genetic algorithms. The algorithm uses the principles of a parallel immune system to maintain population diversity, enhances the search ability by applying a single-path crossover operator to the initial population and the crossover operation, and uses improved character encoding and an adaptive mutation operator to further reduce the search space and alleviate population degradation. Experiments show that the improved parallel immune genetic algorithm can solve the complex-network community discovery problem with high accuracy, effectiveness and efficiency.
    Keywords: genetic algorithm; community discovery; parallel immune; data mining

  • An automatic traffic-congestion detection method for bad weather based on traffic video   Order a copy of this article
    by Jieren Cheng 
    Abstract: In order to solve the problem that automatic traffic-congestion detection is inaccurate in bad weather, we analyse current vehicle identification and image processing algorithms, and then propose a traffic congestion detection method based on histogram equalisation and discrete-frame difference. Firstly, the method uses a discrete-frame difference algorithm to extract the images that contain vehicle information, which saves resources and enables real-time performance. Then, the method employs a histogram equalisation algorithm to remove noise from the vehicle images. We also propose a way to calculate a traffic congestion index for the detection method. The method then recognises vehicles in the video, computes the traffic congestion index, transforms its dimension, and obtains the state of traffic congestion. Experiments and theoretical analysis show that the method decreases the false-negative rate and increases the accuracy of automatic traffic congestion detection in bad weather. The method is promising for areas that frequently experience bad weather.
    Keywords: traffic congestion; histogram equalisation; video processing; inter-frame difference.
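
    A minimal version of the two image-processing ingredients named above, using OpenCV: histogram equalisation followed by frame differencing, with the moving-pixel ratio as a crude congestion signal. The video path and threshold are placeholders, and the paper's discrete-frame sampling and congestion-index computation are not reproduced.

        import cv2

        cap = cv2.VideoCapture("traffic.mp4")       # placeholder video path
        ok, prev = cap.read()
        prev = cv2.equalizeHist(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            diff = cv2.absdiff(gray, prev)          # frame difference
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            motion_ratio = cv2.countNonZero(mask) / mask.size
            # A persistently low moving-pixel ratio on a busy road can
            # indicate stopped (congested) traffic.
            print(f"moving-pixel ratio: {motion_ratio:.4f}")
            prev = gray
        cap.release()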

  • Constrained evolution algorithm based on adaptive differential evolution   Order a copy of this article
    by Kangshun Li, Liang Zhong, Lei Zuo, Zhaopeng Wang 
    Abstract: Constrained optimisation is widely used in science and engineering, but slow convergence and premature convergence are the biggest problems researchers face. This paper proposes a constrained evolutionary algorithm (CO-JADE) based on adaptive differential evolution (JADE) for solving constrained optimisation problems. Exploiting the properties of the Gaussian distribution, the Cauchy distribution and the mutation factor, we adapt the crossover probability of each individual to improve the search strategy. To effectively balance the value of the objective function against the degree of constraint violation, the paper uses an improved adaptive trade-off model to evaluate the individuals of a population. This trade-off model applies different treatment schemes at different stages of the population's evolution and is evaluated on nine standard test functions. The experiments show that CO-JADE has better accuracy and stability than COEA/ODE and HCOEA.
    Keywords: adaptive differential evolution; adaptive trade-off model; constrained optimisation.
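
    For orientation, a compact JADE-flavoured differential evolution loop is sketched below: each individual's F is drawn from a Cauchy distribution and its CR from a Gaussian, with the means adapted from successful values. Constraint handling and the paper's adaptive trade-off model are omitted, and the objective is a placeholder.

        import numpy as np

        rng = np.random.default_rng(4)

        def sphere(x):                       # placeholder objective
            return float(np.sum(x * x))

        NP, D, iters = 30, 10, 300
        mu_F, mu_CR, c = 0.5, 0.5, 0.1
        pop = rng.uniform(-5, 5, (NP, D))
        fit = np.array([sphere(x) for x in pop])

        for _ in range(iters):
            good_F, good_CR = [], []
            for i in range(NP):
                # Cauchy-distributed F, Gaussian CR, both clipped
                F = np.clip(mu_F + 0.1 * np.tan(np.pi * (rng.random() - 0.5)),
                            0.01, 1.0)
                CR = np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0)
                best = pop[np.argmin(fit)]           # current-to-best mutation
                r1, r2 = rng.choice(NP, 2, replace=False)
                mutant = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
                cross = rng.random(D) < CR
                cross[rng.integers(D)] = True        # keep at least one gene
                trial = np.where(cross, mutant, pop[i])
                f_trial = sphere(trial)
                if f_trial < fit[i]:                 # greedy selection
                    pop[i], fit[i] = trial, f_trial
                    good_F.append(F)
                    good_CR.append(CR)
            if good_F:                               # adapt the means (JADE rule)
                mu_F = (1 - c) * mu_F + c * (np.sum(np.square(good_F))
                                             / np.sum(good_F))
                mu_CR = (1 - c) * mu_CR + c * np.mean(good_CR)

        print("best fitness:", fit.min())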

Special Issue on: CloudTech'2015 Cloud Computing Technologies and Applications

  • Swift personal emergency help facilitated by the mobile cloud   Order a copy of this article
    by Hazzaa Alshareef, Dan Grigoras 
    Abstract: Many emergency cases require the swiftest possible response from an appropriate medical service if they are not to become life-threatening. In the medical emergency field, response times are a major concern and receive a high degree of attention. Many of the systems proposed in the literature either are intended to replace the existing emergency system with a fully automated one, or build on unreliable or less efficient frameworks based on some sort of social media application, such as redirecting emergency requests to Facebook friends. We have designed a mobile cloud service that works side by side with the existing emergency system and aims to reduce the time spent waiting for emergency help to arrive, as well as to make the best use of medical professionals who may be located in close proximity to the case. The experimental results show that the time needed to find a medical professional and establish communication was between 4 and 25 seconds, depending on the communication method used. This result means that virtually no extra time is added to the total medical response time, which could enhance the chance of a better outcome.
    Keywords: mobile; cloud; healthcare; medical emergency; help; doctors; nurses; SMS; medical response time

Special Issue on: ICNC-FSKD 2015 Parallel Computing and Signal Processing

  • 1.25 Gbits/s-message experimental transmission using chaos-based fibre-optic secure communications over 143 km   Order a copy of this article
    by Hongxi Yin, Qingchun Zhao, Dongjiao Xu, Xiaolei Chen, Ying Chang, Hehe Yue, Nan Zhao 
    Abstract: Chaotic optical secure communications (COSC) are a class of high-speed hardware encryption techniques operating at the physical layer. For practical applications, high-speed, long-haul message transmission is always the goal. In this paper, we report experimentally a long-haul COSC scheme in which the bit rate reaches 1.25 Gbit/s and the transmission distance 143 km. Moreover, a distinct low-cost advantage is obtained by using off-the-shelf optical components, and no dispersion-compensating fibre (DCF) or forward error correction (FEC) is required. To the best of our knowledge, this is the first experimental demonstration of such a long transmission distance in a COSC system. Our results show that high-quality chaotic synchronisation can be maintained in both the time and frequency domains, even after 143 km of transmission, and that the bandwidth of the transmitter is enlarged by external optical injection, which enables 2.5 Gbit/s secure message transmission over up to 25 km. In addition, the effects of device parameters on COSC are discussed for supplementary detail.
    Keywords: long-haul; high-speed; chaotic optical secure communications; semiconductor laser

  • Optimisation of ANFIS using mine blast algorithm for predicting strength of Malaysian small and medium enterprises   Order a copy of this article
    by Kashif Hussain, Mohd. Najib Mohd. Salleh, Abdul Mutalib 
    Abstract: The Adaptive Neuro-Fuzzy Inference System (ANFIS) is a popular fuzzy inference system that is widely applied in business and economics. Many researchers have trained ANFIS parameters using metaheuristic algorithms, but very few have tried optimising its rule-base. The rules auto-generated by grid partitioning comprise both potential and weak rules, increasing the complexity of the ANFIS architecture as well as its computational cost. Pruning rules that contribute little or nothing would therefore optimise the rule-base. However, reducing the complexity while increasing the accuracy of the ANFIS network requires an effective training and optimisation mechanism. This paper proposes an efficient technique for optimising the ANFIS rule-base without compromising accuracy. The newly developed Mine Blast Algorithm (MBA) is used to optimise ANFIS, and the MBA-optimised ANFIS is employed to predict the strength of Malaysian small and medium enterprises (SMEs). Results show that the MBA-optimised ANFIS rule-base and trained parameters are more efficient than those produced by the Genetic Algorithm (GA) and Particle Swarm Optimisation (PSO).
    Keywords: ANFIS; neuro-fuzzy; fuzzy system; mine blast algorithm; rule optimisation; SME

  • Spectrum prediction and aggregation strategy in multi-user cooperative relay networks   Order a copy of this article
    by Yifei Wei, Qiao Li, Xia Gong, Da Guo, Yong Zhang 
    Abstract: To meet mobile terminals' constantly increasing demand for higher data rates over limited wireless spectrum resources, cooperative relay and spectrum aggregation technologies have attracted much attention owing to their capacity to improve spectrum efficiency. Combining the two technologies, this paper proposes a spectrum aggregation strategy based on Markov prediction of the spectrum state for cooperative relay networks in a multi-user, multi-relay scenario, aiming to guarantee user channel capacity and maximise network throughput. The strategy is executed in two steps: first, the state of the spectrum is predicted by a Markov model; then, based on the prediction results, spectrum aggregation is performed. Simulation results show that the spectrum prediction process observably lowers the outage rate, and that the spectrum aggregation strategy greatly improves network throughput.
    Keywords: Markov model; spectrum aggregation; multi-user; cooperative relay; outage probability; network throughput
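
    The prediction step can be pictured as a two-state Markov chain per channel; the sketch below predicts idle probabilities one slot ahead and aggregates the channels most likely to be idle. The transition matrix and channel states are illustrative placeholders.

        import numpy as np

        # Two-state (idle/busy) channel-occupancy Markov chain
        P = np.array([[0.8, 0.2],     # idle -> idle / busy
                      [0.4, 0.6]])    # busy -> idle / busy

        def predict(state, steps=1):
            """Distribution over {idle, busy} after 'steps' transitions."""
            dist = np.eye(2)[state]
            return dist @ np.linalg.matrix_power(P, steps)

        # Aggregate the channels most likely to be idle in the next slot
        current = [0, 1, 1, 0, 1]             # observed channel states
        idle_prob = np.array([predict(s)[0] for s in current])
        aggregated = np.argsort(idle_prob)[::-1][:3]
        print("channels selected for aggregation:", aggregated)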

  • An optimised multicast data transmission scheme in IEEE 802.16j wireless relay networks   Order a copy of this article
    by Hanwu Wang 
    Abstract: Relay-enabled wireless networks play a significant role in data transmission, as they not only extend the BS coverage range but also improve the transmission efficiency of MSs with poor channel conditions. Multicast is an efficient transmission mode and is well suited to relay-based networks. However, how to perform the desired multicast transmission in a wireless relay network has not been specified. To realise effective multicast-based data delivery over the whole relay network, we propose an optimised transmission scheme, formulating an objective problem that captures the desired performance goals. Specifically, an efficient multicast data transmission mechanism over two scheduling hops is detailed to implement the optimisation process. Moreover, we incorporate spatial reuse into the transmission mechanism to improve scheduling efficiency. The simulation results verify the efficacy and efficiency of the proposed transmission scheme.
    Keywords: relay network; multicast; spatial reuse; optimisation

Special Issue on: ICICS 2016 Next Generation Cloud, Mobile Cloud, Mobile Edge Computing and Internet of Things Systems and Networking

  • Classifying environmental monitoring data to improve wireless sensor network management   Order a copy of this article
    by Emad Alsukhni, Shayma Almallahi 
    Abstract: Wireless sensor networks are considered among the most useful means of collecting data and monitoring the environment. Owing to the large amount of data these networks produce, data-mining techniques are required to extract interesting knowledge. This paper demonstrates the effectiveness of data-mining techniques in discovering knowledge that can improve the management of wireless sensor networks for environmental monitoring. Reducing the data sent through a network increases the network's lifetime: a classification model can predict the significance of sensed data and thus reduce the number of readings reported to the sink. In this paper, we demonstrate the efficiency and accuracy of data-mining classifiers in predicting the significance of sensed data. The results show that the accuracy of the J48, Multilayer Perceptron and REPTree classifiers reached 90%. Using the classification model, the number of reported readings decreased by 37%; this significant reduction extends the wireless sensor network's lifetime by reducing the energy consumed, i.e., the total energy dissipated.
    Keywords: classifying environmental monitoring data; data mining; wireless sensor network management; data reduction; energy consumption
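
    A sketch of the classification step on synthetic sensor readings, with scikit-learn's CART decision tree standing in for the J48 (C4.5) classifier used in the paper; the data, features and labelling rule are fabricated placeholders for illustration.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(5)

        X = np.column_stack([rng.normal(25, 5, 1000),     # temperature
                             rng.normal(50, 10, 1000)])   # humidity
        y = ((X[:, 0] > 30) | (X[:, 1] > 65)).astype(int) # 1 = "report to sink"

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
        print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

        # Readings predicted as uninteresting need not be transmitted,
        # saving radio energy and extending network lifetime.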

  • A power-controlled interference management mechanism for femtocell-based networks   Order a copy of this article
    by Haythem Bany Salameh, Rawan Shabbar, Ahmad Al-shamali 
    Abstract: Femtocell deployment is one of the promising solutions for meeting the increasing demand for wireless services and applications. By deploying femtocells within already-existing macrocells, operators can improve indoor coverage and increase both spectral efficiency and data rate. This improvement is achieved by reusing the spectrum assigned to the macrocell and by being closer to the user. However, femtocell deployment faces many challenges, among which interference management is one of the most difficult. In this paper, a power-controlled channel assignment algorithm for managing the interference between femtocell and macrocell networks is proposed. The proposed algorithm allows frequency reuse among femtocell users to provide better throughput performance. Simulation results show an improvement in throughput of up to 90% compared with previous models.
    Keywords: femtocell; macrocell.

  • Exploring the relationships between web accessibility, web traffic, and university rankings: a case study of Jordanian universities   Order a copy of this article
    by Mohammed Al-Kabi 
    Abstract: Most statistics state that disabled people constitute approximately 20% of the world population; webmasters should therefore consider this segment of the population when designing and implementing their websites, by following the Web Content Accessibility Guidelines. This paper aims to explore the relationships between web accessibility, web traffic, and university rankings, using the metrics of 27 Jordanian university websites as a case study. Objective evaluations are conducted using a number of online tools, an extensive analysis of the tool outputs is presented, and the relationships among these outputs are explored. The study provides a longitudinal overview of the accessibility of Jordanian university websites, re-examines the relationships between web accessibility, web traffic, and university rankings, and compares the accessibility of private and public Jordanian universities. Because it relates all three parameters, its scope is larger than that of earlier publications.
    Keywords: university websites; accessibility guidelines; WCAG 1.0 and WCAG 2.0; web accessibility; web traffic; website popularity; university rankings; automated evaluation.

Special Issue on: AINA'2016 Cloud Computing Projects and Initiatives

  • Transaction management across data stores   Order a copy of this article
    by Marta Patino 
    Abstract: Companies have evolved from a world where they only had SQL databases to a world where they use different kinds of data store, such as key-value data stores, document-oriented data stores and graph databases. They have introduced this diversity of persistence models because different NoSQL technologies bring different data models with associated query languages and/or APIs. However, they are now confronted with a problem: their data are scattered across different data stores. When a business action requires an update, the data involved reside in different data stores and are subject to inconsistencies in the event of failure and/or concurrent access. These inconsistencies arise from the lack of the transactional consistency that was guaranteed in traditional SQL databases but is guaranteed neither within NoSQL data stores nor across data stores and databases. CoherentPaaS remedies this. It provides an ultra-scalable transaction management layer that can be integrated with any data store with multi-versioning capabilities. The layer has been integrated with six different data stores: three NoSQL data stores and three SQL-like databases. In this paper, we describe this generic ultra-scalable transaction management layer, focusing on its API and on how it can be integrated in different ways with different data stores and databases.
    Keywords: data management; NoSQL; consistency.

  • Enabling and monitoring platform for cloud-based application   Order a copy of this article
    by Silviu Panica, Bogdan Irimie, Dana Petcu 
    Abstract: Deploying applications that consume cloud infrastructure services is currently a tedious process, especially when it must be repeated often during the application development phase. Furthermore, the deployed applications need to be monitored in order to detect, and act against, possible anomalies. We present a technical solution that simplifies the deployment process, making it almost fully automated, and that offers monitoring services for the deployed applications. Its integration as a module of an open-source platform that lets users enforce security controls is also discussed.
    Keywords: cloud computing; enabling platform; automatic deployment; distributed monitoring.

  • Exploring the complete data path for data interoperability in cyber-physical systems   Order a copy of this article
    by Athanasios Kiourtis, Argyro Mavrogiorgou, Dimosthenis Kyriazis, Ilias Maglogiannis, Marinos Themistocleous 
    Abstract: The amount of digital information increases tenfold every year, owing to the exponential growth of Cyber-Physical Systems (CPS) and of real and virtual internet-connected sources. Most research focuses on data processing and interconnection, which raises a question about the interoperable use of data: if data is efficiently processed, how can unknown data be used in an application of a different nature? This paper presents a three-step approach to this question. Following the data lifecycle, a known CPS's dataset is first stored in a domain-specific language, then translated into a domain-agnostic language, and finally, using the fitting function of an ANN, compared with an unknown dataset, resulting in the translation of the unknown dataset into the first dataset's domain. A scenario illustrating the approach is provided, analysing the data interoperability challenges and needs emerging from today's Internet of Everything evolution and touching on the fields of data annotation, semantics, modelling and characterisation.
    Keywords: cyber-physical systems; semantics; interoperability; domain agnostic; domain specific; data path; data lifecycle.

  • Monitoring and management of a cloud application within a federation of cloud providers   Order a copy of this article
    by Rocco Aversa, Luca Tasquier 
    Abstract: Cloud federation is an emerging computing model in which multiple resources from independent cloud providers are leveraged to create large-scale distributed virtual computing clusters that operate as if within a single cloud organisation. This form of service aggregation is characterised by interoperability features and addresses inter-cloud collaboration problems such as vendor lock-in. Furthermore, it approaches challenges such as performance and disaster recovery through methods like co-location and geographic distribution. One of the main issues within a cloud federation is the monitoring of applications deployed on resources coming from the federation's different vendors. In this work we present an agent-based architecture, and its prototype implementation, for monitoring the user's cloud environment as provided by the federation: the elasticity of the proposed architecture allows the monitoring infrastructure to be configured and customised for the specific cloud application. A multi-layer architecture is proposed in which each part monitors different aspects of the multi-cloud infrastructure, from the detection of critical conditions on low-level parameters of the computational units up to the composition of monitoring levels that check the federated SLA. The agent-based approach introduces fault tolerance and scalability to the monitoring architecture, while the agents' reactivity and proactivity enable deep and intelligent monitoring, with each agent focusing on a different aspect of the monitoring activity, from low-level performance indexes to checking federated SLA compliance. The agents are strengthened by algorithms and rules for monitoring QoS parameters that are critical for the specific application, and the configuration of the adaptive monitoring environment is eased by an interface that helps users describe their application's deployment. The prototype implementation of the proposed framework is applied to a testbed application to validate the monitoring architecture.
    Keywords: cloud federation; agent-based monitoring; multi-cloud environment.

  • New architecture for virtual appliance deployment in the cloud   Order a copy of this article
    by Amel Haji, Asma Ben Letaifa, Sami Tabbane 
    Abstract: Cloud computing is a model that offers convenient access to a set of configurable resources that can be provisioned and released with minimal administration. The main obstacle is ensuring interoperability and portability between cloud services. We deal with a special type of service called the 'virtual appliance': a pre-packaged, preconfigured application encapsulated in a virtual machine image running on a virtualisation platform. This type of service enables flexible and easy deployment, while ensuring interoperability, reuse and migration of services. This paper offers an architecture for deploying virtual appliances in a cloud environment.
    Keywords: cloud computing; virtual appliance; SDN; NFV; OpenStack.

  • From business process models to the cloud: a semantic approach   Order a copy of this article
    by Beniamino Di Martino, Antonio Esposito, Giuseppina Cretella 
    Abstract: The fast evolution of IT services and the shift from server-centred applications to widely distributed frameworks and platforms, of which cloud computing is the best representative, have strongly influenced the way enterprise managers deal with their businesses. Several tools for the design and execution of business processes have been proposed, together with standards for their formal representation, such as BPMN. Despite the expressiveness of such tools and representations, there is still a lack of integration with cloud services, mostly owing to the wide variety of current offerings, which creates confusion among customers. Moreover, portability and interoperability issues tend to hinder the adoption of cloud solutions for the immediate deployment of business processes. To overcome these issues and support the adoption of cloud services for the deployment of business processes, this paper proposes a semantic approach. The presented representation aims to ease the mapping process and to suggest to users the most suitable cloud services to compose. A case study demonstrating the applicability and efficacy of the approach is also described.
    Keywords: business process; semantics; BPMN; OWL-S; cloud computing; ontologies; modelling; cloud services.

  • Migrating mission-critical application in federated cloud: a case study   Order a copy of this article
    by Alba Amato, Rocco Aversa, Massimo Ficco, Salvatore Venticinque 
    Abstract: Although virtualisation, elasticity and resource sharing enable new levels of flexibility, convenience and economic benefit, they also add new challenges and more areas for potential failures and security vulnerabilities, which are major concerns for companies and public organisations that want to shift their business- and mission-critical applications and sensitive data to the cloud. This paper discusses some technical issues that must be addressed when migrating mission-critical applications to public clouds. Moreover, through a case study, it presents an approach to brokering the cloud infrastructure needed to satisfy the more restrictive requirements of critical applications.
    Keywords: mission-critical applications; cloud federation; brokering; security; dependability.