Forthcoming articles


International Journal of Grid and Utility Computing


These articles have been peer-reviewed and accepted for publication in IJGUC, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.


Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.










International Journal of Grid and Utility Computing (59 papers in press)


Regular Issues


  • An improved multi-instance multi-label learning algorithm based on representative instances selection and label correlations
    by Chanjuan Liu, Tongtong Chen, Hailin Zou, Xinmiao Ding, Yuling Wang 
    Abstract: Multi-instance multi-label learning (MIML) has been successfully used in image and text classification problems. It is noteworthy that few previous studies consider the pattern-label relations. Inevitably, there are some useless instances in a bag that reduce the accuracy of the annotation. In this paper, we focus on this problem. Firstly, an instance selection method via a joint-norms constraint is employed to eliminate the useless instances and select representative instances by modelling the instance correlation. Then, bags are mapped to these representative instances. Finally, the classifier is trained by an optimisation algorithm based on label correlations. Experimental results on an image dataset, text datasets and a birdsong audio dataset show that the proposed algorithm significantly improves the performance of the MIML classifier compared with state-of-the-art methods.
    Keywords: multi-instance multi-label learning; representative instances selection; joint-norms constraint; label correlations.

  • Mining top-k approximate closed patterns in an imprecise database
    by Yu Xiaomei, Hong Wang, Xiangwei Zheng 
    Abstract: Over the last few years, the growth of data has been exponential, leading to colossal amounts of information being produced by computational systems. Meanwhile, the data in real-life applications are usually incomplete and imprecise, which poses big challenges for researchers seeking exact and valid analytical results with traditional frequent pattern mining methods. Since potential faults can break the original characteristics of data patterns into multiple small fragments, it is impossible to recover the long true patterns from these fragments. To explore this huge amount of imprecise data by means of frequent pattern mining, we propose a service-oriented model that enables a new way of service provisioning based on users' QoS (quality of service) requirements. The novel model is developed to solve the problem of mining top-k approximate closed patterns in imprecise databases and will be further applied to the diagnosis and treatment of potential patients in online medical applications. We test the novel model on an imprecise medical database, and the experimental results show that the new model can successfully improve the health services for online customers.
    Keywords: data mining; approximate frequent pattern; frequent closed pattern; clustering; equivalence class; health service.

  • Improved quantisation mechanisms in impulse radio ultra wideband systems based on compressed sensing
    by Yunhe Li, Qinyu Zhang, Shaohua Wu
    Abstract: To reduce the impact of quantisation noise during low-rate sampling of Impulse Radio Ultra-Wideband (IR-UWB) signals under the compressed sensing framework, while taking into account the equal information carrying of compressed measurements and the characteristics of the Gaussian distribution, three modified quantisation mechanisms are designed in this study: overload uniform quantisation, non-uniform quantisation, and overload non-uniform quantisation. Besides, the factors influencing the overload factor in the overload mechanism are examined in order to obtain, by fitting, an optimisation scheme close to the optimal overload. The simulation results show that all three modified mechanisms, especially the overload non-uniform quantisation mechanism, greatly improve performance compared with uniform quantisation. What's more, the performance of the overload uniform quantisation mechanism, featuring low complexity, is better than that of the non-uniform quantisation mechanism with high complexity, thus providing a practical quantisation method for the IR-UWB system under the compressed sensing framework.
    Keywords: compressed sensing; impulse radio ultra-wideband; quantisation mechanism; quantisation noise; overload interval factor.

  • An ontology-based cloud infrastructure service discovery and selection system
    by Manoranjan Parhi, Binod Kumar Pattanayak, Manas Ranjan Patra 
    Abstract: In recent years, owing to the global economic downturn, many organisations have sought to reduce their Information Technology (IT) expenses by adopting innovative computing models such as cloud computing, which allows businesses to reduce their fixed IT costs through a greener, scalable, cost-effective alternative for using their IT resources. A growing number of pay-per-use cloud services are now available on the web in the form of Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). With the increase in the number of services, there has also been an increase in the demand for and adoption of cloud services, making cloud service identification and discovery a challenging task. This is due to varied service descriptions, non-standardised naming conventions, and heterogeneity in the types and features of cloud services. Thus, selecting an appropriate cloud service according to consumer requirements is a daunting task, especially for applications that use a composition of different cloud services. In this paper, we design an ontology-based cloud infrastructure service discovery and selection system that defines the functional and non-functional concepts, attributes and relations of infrastructure services. We show how the system enables one to discover appropriate services optimally as requested by consumers.
    Keywords: cloud computing; cloud service discovery and selection; infrastructure as a service; cloud ontology.

  • Certificateless multi-signcryption scheme in standard model
    by Xuguang Wu 
    Abstract: A signcryption scheme can realise the security objectives of encryption and signature simultaneously, with lower computational cost and communication overhead than the sign-then-encrypt approach. To adapt to multi-user settings and solve the key escrow problem of ID-based multi-signcryption schemes, this paper defines the formal model of certificateless multi-signcryption schemes and proposes a certificateless multi-signcryption scheme in the standard model. The scheme is proved secure against adaptive chosen ciphertext attacks and adaptive chosen message attacks under the decisional bilinear Diffie-Hellman assumption and the computational Diffie-Hellman assumption, respectively.
    Keywords: signcryption; multi-signcryption; certificateless encryption.

  • Per-service security SLAs for cloud security management: model and implementation
    by Valentina Casola, Alessandra De Benedictis, Jolanda Modic, Massimiliano Rak, Umberto Villano 
    Abstract: In the cloud computing context, Service Level Agreements (SLAs) tailored to specific Cloud Service Customers (CSCs) seem to be still a utopia, and things are even worse as regards the security terms to be guaranteed. In fact, existing cloud SLAs focus on only a few service terms, and Cloud Service Providers (CSPs) mainly provide uniform guarantees for all offered services and for all customers, regardless of any particular service characteristics or of customer-specific needs. In order to expand their business volume, CSPs are currently starting to explore alternative approaches, based on the adoption of a CSC-based per-service security SLA model. This paper presents a framework that enables the adoption of a per-service SLA model, supporting the automatic implementation of cloud security SLAs tailored to the needs of each customer for specific service instances. In particular, the process and the software architecture for per-service SLA implementation are shown. A case study application, related to the provisioning of a secure web container service, is presented and discussed, to demonstrate the feasibility and effectiveness of the proposed solution.
    Keywords: cloud security; per-service SLA; security service level agreement.

  • A (multi) GPU iterative reconstruction algorithm based on Hessian penalty term for sparse MRI
    by Salvatore Cuomo, Pasquale De Michele, Francesco Piccialli 
    Abstract: A recent trend in the Magnetic Resonance Imaging (MRI) research field is to design and adopt machines that are able to acquire undersampled clinical data, reducing the time for which the patient is lying in the body scanner. Unfortunately, the missing information in these undersampled acquired datasets leads to artifacts in the reconstructed image, therefore computationally expensive image reconstruction techniques are required. In this paper, we present an iterative regularisation strategy with a second order derivative penalty term for the reconstruction of undersampled image datasets. Moreover, we compare this approach with other constrained minimisation methods, resulting in improved accuracy. Finally, an implementation on a massively parallel architecture environment, a multi Graphics Processing Unit (GPU) system, of the proposed iterative algorithm is presented. The resulting performance gives clinically feasible reconstruction run times, speed-up and improvements in terms of reconstruction accuracy of the undersampled MRI images.
    Keywords: compressed sensing; MRI iterative reconstruction; numerical regularisation; graphics processing unit; parallel and scientific computing.

  • Labelled evolutionary Petri nets/genetic algorithm based approach for workflow scheduling in cloud computing
    by Manel Femmam, Okba Kazar, Laid Kahloul, Mohamed El-kabir Fareh 
    Abstract: Nowadays, more and more evolutionary algorithms for workflow scheduling in cloud computing are being proposed. Most of these algorithms focus on effectiveness, discarding the issue of flexibility. Research on Petri nets addresses the issue of flexibility; many extensions have been proposed to facilitate the modelling of complex systems. Typical extensions are the addition of "colour", "time" and "hierarchy". By mapping scheduling problems onto Petri nets, we are able to use standard Petri net theory. In this case, the scheduling problem can be reduced to finding an optimal sequence of transitions leading from an initial marking to a final one. To find the optimal scheduling, we propose a new approach based on a recently proposed formalism, the Evolutionary Petri Net (EPN), which is an extension of the Petri net enriched with two genetic operators, crossover and mutation. The objectives of our research are to minimise the workflow application completion time (makespan) as well as the cost incurred by using cloud resources. Some numerical experiments are carried out to demonstrate the usefulness of our algorithm.
    Keywords: workflow scheduling; cloud computing; petri nets; genetic algorithm.

  • An improved SMURF scheme for cleaning RFID data
    by He Xu, Jie Ding, Peng Li, Daniele Sgandurra, Ruchuan Wang 
    Abstract: With the increasing usage of internet of things devices, our daily life is facing big data. RFID technology enables reading over a long distance, provides high storage capacity, and is widely used in internet of things supply chain management for object tracking and tracing. With the expansion of RFID application areas, the demand for reliable business data is increasingly important. In order to fulfil the needs of upper-layer applications, data cleaning is essential and directly affects the correctness and completeness of the business data, so RFID data need to be filtered and handled. The traditional statistical smoothing for unreliable RFID data (SMURF) algorithm dynamically adjusts the size of the sliding window according to the tags' average reading rate during the process of data cleaning. To some extent, SMURF overcomes the disadvantages of a fixed sliding window size; however, the SMURF algorithm is aimed only at constant-speed data flow in ideal situations. In this paper, we overcome the shortcomings of the SMURF algorithm and propose a SMURF scheme improved in two aspects. The first is based on dynamic tags, and the second is an RFID data cleaning framework that considers the influence of data redundancy. The experiments verify that the improved scheme sets the sliding window reasonably in dynamic settings, and the accuracy of the cleaning effect is improved as well.
    Keywords: RFID; data cleaning; internet of things; sliding window.

  • Adaptive co-operation in parallel memetic algorithms for rich vehicle routing problems
    by Jakub Nalepa, Miroslaw Blocho 
    Abstract: Designing and implementing co-operation schemes for parallel algorithms has become a very important task recently. The scheme, which defines the co-operation topology, frequency and strategies for handling transferred solutions, has a tremendous influence on the algorithm search capabilities, and can help to balance the exploration and exploitation of the vast solution space. In this paper, we present both static and dynamic schemes: the former are selected before the algorithm execution, whereas the latter are dynamically updated on the fly to better respond to the optimisation progress. To understand the impact of such co-operation approaches, we applied them in the parallel memetic algorithms for solving rich routing problems, and performed an extensive experimental study using well-known benchmark sets. This experimental analysis is backed with the appropriate statistical tests to verify the importance of the retrieved results.
    Keywords: co-operation; parallel algorithm; memetic algorithm; rich routing problem; VRPTW; PDPTW.

  • Cloud computing based on agent technology, superrecursive algorithms and DNA
    by Rao Mikkilineni, Mark Burgin 
    Abstract: Agents and agent systems are becoming more and more important in the development of a variety of fields, such as ubiquitous computing, ambient intelligence, autonomous computing, data analytics, machine learning, intelligent systems and intelligent robotics. In this paper, we examine interactions of theoretical computer science with computer and network technologies, analysing how agent technology is presented in mathematical models of computation. We demonstrate how these models are used in the novel distributed intelligent managed element (DIME) network architecture (DNA), which extends the conventional computational model of information processing networks, allowing improvement of the efficiency and resilience of computational processes. Two implementations of DNA described in the paper illustrate how the application of agent technology radically improves the current cloud computing state of the art. The first example demonstrates the live migration of a database from a laptop to a cloud without losing transactions and without using containers or moving virtual machine images. The second example demonstrates the implementation of cloud agnostic computing over a network of public and private clouds, where live computing process workflows are moved from one cloud to another without losing transactions. Both these implementations demonstrate the power of scientific thought for dramatically extending the current state of the art of cloud computing practice.
    Keywords: cloud computing; agent technology; inductive Turing machine; grid computing; DIME network architecture; intelligent systems; super-recursive algorithms.

  • Attendance management system using selfies and signatures
    by Jun Iio 
    Abstract: There have been many proposals to optimise student attendance management in higher education. However, each method has pros and cons and we have not yet found a perfect solution. In this study, a novel framework for attendance management is proposed that consists of a mobile device and a web application. During lectures, students participating in the lecture can register their attendance on the mobile device with their selfie or their signature. After the lecture is finished, the registration data are sent to the database and they are added to the 'RollSheet'. This paper reports an overview of this system and the results of an evaluation after a trial period, which was conducted in the second semester of the 2015 fiscal year.
    Keywords: attendance management system; selfie photograph; hand-writing signature; mobile device; web application.

  • Trust modelling for opportunistic cloud services
    by Eric Kuada 
    Abstract: This paper presents a model for the concept of trust and a trust management system for opportunistic cloud services platforms. Results from applying the systematic review methodology to review trust-related studies in cloud computing revealed that the concept of trust is used loosely without any formal specification in cloud computing discussions and trust engineering in general. Formal definition and a model of the concept of trust is, however, essential in the design of trust management systems. The paper therefore presents a model for the formal specification of the concept of trust. A trust management system for opportunistic cloud services is also presented. The applicability of the trust model and the trust management system is demonstrated for cloud computing by applying it to software as a service and infrastructure as a service usage scenarios in the context of opportunistic cloud services environments.
    Keywords: opportunistic cloud services; trust engineering; trust in cloud computing; trust modeling; trust management system; pseudo service level agreements.

  • Efficient cache replacement policy for minimising error rate in L2-STT-MRAM caches
    by Rashidah F. Olanrewaju, Burhan Ul Islam Khan, A. Raouf Khan, Mashkuri Yaacob, Md Moktarul Alam 
    Abstract: In recent times, various challenges have been encountered in the design and development of Static RAM (SRAM) caches, which has led to designs where other memory cell technologies are converted into on-chip embedded caches. Current research on cache design reveals that Spin Torque Transfer Magnetic RAMs, commonly termed STT-MRAMs, have become one of the most promising technologies in the field of memory chip design, gaining a lot of attention from researchers owing to their dynamic direct map and data access policies for reducing the average cost, i.e. both time and energy. Although STT-MRAMs possess high density, low power rating and non-volatility, increasing rates of WRITE failures and READ disturbances strongly affect the reliability of STT-MRAM caches. Besides workload behaviour, process variations directly affect these failure/disturbance rates. Furthermore, cache replacement algorithms play a significant part in minimising the Error Rate (ER) induced by WRITE operations. In this paper, the vulnerability of STT-MRAM caches is investigated to examine the effect of workloads as well as process variations in characterising the reliability of STT-MRAM caches. The current study analyses and evaluates an existing efficient cache replacement policy, namely Least Error Rate (LER), which uses Hamming Distance (HD) computations to reduce the Write Error Rate (WER) of L2 STT-MRAM caches with acceptable overheads. The performance analysis of the algorithm confirms its effectiveness in reducing the WER and cost overheads compared with the conventional LRU technique implemented on SRAM cells.
    Keywords: cache replacement algorithm; field assisted STT-MRAM; error rate; L2 caches.

  • An infrastructure model for smart cities based on big data
    by Eliza Helena Areias Gomes, Mario Antonio Ribeiro Dantas, Douglas D. J. De Macedo, Carlos Roberto De Rolt, Julio Dias, Luca Foschini 
    Abstract: The spread of projects focused on smart cities has grown in recent years. With this, the massive amount of data generated in these initiatives creates a degree of complexity in how to manage all this information. To solve this problem, several approaches have been developed in recent years. In this paper, we propose a big data infrastructure model for a smart city project. The goal of this model is to present the stages of data processing, namely extraction, storage, processing and visualisation, as well as the types of tool needed for each phase. To implement our proposed model, we used ParticipACT Brazil, a smart city project. This project uses different databases to compose its big data and uses these data to solve urban problems. We observe that our model provides a structured vision of the software to be used in the big data server of ParticipACT Brazil.
    Keywords: big data; smart city; big data tools.

  • Playing in traffic: an investigation of low-cost, non-invasive traffic sensors for street lighting luminaire deployment
    by Karl Mohring, Trina Myers, Ian Atkinson 
    Abstract: Real-time traffic monitoring is essential to the development of smart cities and offers potential for energy savings. However, real-time traffic monitoring is a task that requires sophisticated and expensive hardware. Owing to the prohibitive cost of specialised sensors, accurate traffic counts are typically limited to intersections where traffic information is used for signalling purposes. This sparse arrangement of traffic detection points does not provide adequate information for intelligent lighting applications, such as adaptive dimming. This paper investigates low-cost, off-the-shelf sensors that can be installed inside street lighting luminaires for traffic sensing. A luminaire-mounted sensor test-bed installed on a moderately busy road trialled three non-invasive presence-detection sensors: Passive Infrared (PIR), sonar (UVD) and lidar. The proof-of-concept study revealed that an HC-SR501 PIR motion detector could count traffic with 73% accuracy at a low cost and may be suitable for intelligent lighting applications if accuracy can be further improved.
    Keywords: commodity; internet of things; vehicle detection; sensors; smart cities; wireless sensor networks.

  • Real-time web-cast system by multihop WebRTC communications
    by Daiki Ito, Michitoshi Niibori, Masaru Kamada 
    Abstract: A software system is developed for casting the screen images and voice from a host PC to the client web browsers on many other PCs in real time. This system is intended to be used in classrooms. Students only have to bring their own PCs and connect to the teacher's host PC by a web browser via a wireless network to see and listen to the teaching materials presented on the host PC. The client web browsers are organised in the shape of a binary tree along which the video and audio data are relayed in a multihop fashion by the Web Real-Time Communication (WebRTC) protocol. This structure of binary multihop relay is adopted in order not to burden the host PC with communications load. A test has shown that voice and motion pictures at a rather small size of 320 x 240 pixels on a teacher's PC can be presented at a rate of five frames per second without any noticeable delay on the web browsers running on 38 client devices for students under a local WiFi network. To host more client devices, we have to lower the frame rate to that of a slide show of still pictures.
    Keywords: real-time web-cast system; bring your own device; WebSocket; web real-time communication.

  • Dynamic migration of virtual machines to reduce energy consumption in a cluster
    by Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa 
    Abstract: Virtual machines are widely used to support applications with virtual services in server clusters. Here, a virtual machine can migrate from a host server to a guest server. In this paper, we consider a cluster where virtual machines are dynamically created and dropped depending on the number of processes. We propose a dynamic virtual machine migration (DVMM) algorithm to reduce the total electric energy consumption of servers. If an application issues a process to a cluster, the most energy-efficient host server is first selected and the process is performed on a virtual machine of that server. Then, a virtual machine migrates from a host server to a guest server so that the total electric energy consumption of the servers can be reduced. In the evaluation, we show that the total electric energy consumption and active time of servers and the average execution time of processes can be reduced by the DVMM algorithm.
    Keywords: energy-efficient computation; virtual machine; power consumption model; energy-aware dynamic migration of virtual machines.

  • Energy-efficient placement of virtual machines in cloud datacentres, based on fuzzy decision making
    by Leili Salimian, Faramarz Safi-Esfahani 
    Abstract: Placement of virtual machines (VMs) on physical nodes as a sub-problem of dynamic VM consolidation has been driven mainly by energy efficiency and performance objectives. However, owing to varying workloads in VMs, placement of the VMs can cause a violation in the Service Level Agreement (SLA). In this paper, the VM placement is regarded as a bin packing problem, and a fuzzy energy-aware algorithm is proposed to estimate the host resource usage. The estimated resource usage is used to find the most energy-efficient host to reallocate the VMs. The fuzzy algorithm generates rules and membership functions dynamically to adapt to workload changes. The main objective of the proposed algorithm is to optimise the energy-performance trade-off. The effectiveness of the proposed algorithm is evaluated through simulations on the random and real-world PlanetLab workloads. Simulation results demonstrate that the proposed algorithm reduces the energy consumption, while it provides a high level of adherence to the SLAs.
    Keywords: dynamic VM consolidation; CPU usage; VM placement; fuzzy decision making.

  • An identity-based cryptographic scheme for cloud storage applications
    by Manel Medhioub, Mohamed Hamdi 
    Abstract: The use of remote storage systems, namely cloud storage based services, is gaining increasing interest. In fact, one of the factors that led to the popularity of cloud computing is the availability of storage resources provided at a reduced cost. However, when outsourcing data to a third party, security issues become critical concerns, especially confidentiality, integrity, authentication, anonymity and resiliency. Facing this challenge, this work provides a new approach to ensure authentication in cloud storage applications. ID-based cryptosystems (IBC) have many advantages over certificate-based systems, such as simplified key management. This paper proposes an original ID-based authentication approach in which the cloud tenant is assigned the IBC Private Key Generator (PKG) function. Consequently, it can issue public elements for its users, and can keep the resulting IBC secrets confidential. Moreover, in our scheme, the public key infrastructure is still used to establish trust relationships between the PKGs.
    Keywords: cloud storage; authentication; identity-based cryptography; security; Dropbox.

  • COBRA-HPA: a block generating tool to perform hybrid program analysis
    by Thomas Huybrechts, Yorick De Bock, Haoxuan Li, Peter Hellinckx 
    Abstract: The Worst-Case Execution Time (WCET) of a task is an important value in real-time systems. This metric is used by the scheduler in order to schedule all tasks before their deadlines. However, the code and hardware architecture have a significant impact on the execution time and thus the WCET. Therefore, different analysis methodologies exist to determine the WCET, each with their own advantages and/or disadvantages. In this paper, a hybrid approach is proposed that combines the strengths of two common analysis techniques. This hybrid methodology tackles the problem that can be described as 'the gap between a machine and a human in solving problems'. The two-layer hybrid model splits the code of tasks into so-called basic blocks. The WCET can be determined by performing execution time measurements on each block and statically combining those results. The COBRA-HPA framework presented in this paper is developed to facilitate the creation of hybrid block models and automate the measurements/analysis process. Additionally, an elaborated discussion on the implementation and performance of the framework is given. In conclusion, the results of the COBRA-HPA framework show a significant reduction in analysis effort while keeping sound WCET predictions for the hybrid method compared with the static and measurement-based approach.
    Keywords: worst-case execution time; WCET; hybrid analysis methodology; COde Behaviour fRAmework; COBRA; basic block generator.

  • The big data mining forecasting model based on combination of improved manifold learning and deep learning
    by Xiurong Chen, Yixiang Tian 
    Abstract: The most important dilemma in big data processing is that extensive redundant information and useful information are mixed with each other, which makes such big data difficult to use effectively for establishing prediction models. In this work, we combine the manifold learning dimension reduction algorithm LLE with the deep learning feature extraction algorithm CDBN as the input of an RBF network, constructing a mixed-feature RBF forecast model. As the LLE algorithm depends too heavily on the local neighbourhood, which is not easy to determine, we use the kernel function mapping idea of KECA to transfer the original global nonlinear problem into a global linear one in a high-dimensional kernel feature space, removing the redundant information more accurately and reducing data complexity. As it is difficult to determine the network structure of the CDBN and its learning process lacks supervision, we use the kernel entropy information computed in KECA to determine the number of network layers and to supervise the learning process, which makes it more effective at extracting the deep features that capture the essential characteristics of big data. In the empirical part, we choose foreign exchange rate time series for our study. The results show that the improved KELE can effectively reduce the dimensionality of the sample data, yielding a more optimised and reasonable representation of the original data and providing an assurance for further learning and understanding of big data. The improved KECDBN can extract distributed features of the data more effectively, thereby improving the prediction accuracy of the mixed-feature RBF forecast model based on KELE and KECRBM.
    Keywords: locally linear embedding; continuous deep belief network; kernel entropy component analysis; kernel entropy linear embedding; kernel entropy continuous deep belief network.

  • Cost-aware hybrid cloud scheduling of parameter sweep calculations using predictive algorithms   Order a copy of this article
    by Stig Bosmans, Glenn Maricaux, Filip Van Der Schueren, Peter Hellinckx 
    Abstract: This paper investigates various techniques for scheduling parameter sweep calculations cost-efficiently in a hybrid cloud environment. Combining a private and a public cloud environment integrates the advantages of being cost-effective and of having virtually unlimited scaling capabilities at the same time. To make an accurate estimate of the required resources, multiple prediction techniques are discussed. The estimate can be used to create an efficient scheduler that respects both deadline and cost. These findings have been implemented and tested in a Java-based cloud framework that operates on Amazon EC2 and OpenNebula. We also present a theoretical model to further optimise the cost by leveraging the Amazon Spot Market.
    Keywords: parameter sweep; cloud computing; Amazon AWS EC2; predictive algorithms; OpenNebula; machine learning; Amazon spot market.
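As a minimal sketch of the kind of deadline/cost trade-off such a scheduler has to make, the following estimates how many public-cloud instances must be rented once the private cloud is saturated. All function names, the uniform task runtime and the hourly price are illustrative assumptions, not values from the paper.

```python
import math

def instances_needed(n_tasks, task_runtime_s, deadline_s, private_slots):
    """Estimate how many public-cloud instances must be rented so that
    n_tasks parameter-sweep tasks finish before the deadline."""
    # How many tasks a single slot can serialise before the deadline.
    tasks_per_slot = max(1, deadline_s // task_runtime_s)
    slots_required = math.ceil(n_tasks / tasks_per_slot)
    # Overflow beyond the private cloud goes to the public cloud.
    return max(0, slots_required - private_slots)

def estimated_cost(public_instances, task_runtime_s, n_public_tasks,
                   price_per_hour=0.10):
    """Cost of running the overflow tasks on the public cloud,
    billed per started instance-hour."""
    busy_hours = math.ceil(n_public_tasks * task_runtime_s
                           / public_instances / 3600)
    return public_instances * busy_hours * price_per_hour
```

A scheduler built on such an estimate can then compare the public-cloud cost against the deadline slack before committing resources.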

  • Impact of software architecture on execution time: a power window TACLeBench case study   Order a copy of this article
    by Haoxuan Li, Paul De Meulenaere, Siegfried Mercelis, Peter Hellinckx 
    Abstract: Timing analysis is used to extract the timing properties of a system. Various timing analysis techniques and tools have been developed over the past decades. However, changes in hardware platforms and software architecture have introduced new challenges for timing analysis techniques. In our research, we aim to develop a hybrid approach that provides safe and precise timing analysis results. In this approach, we divide the original code into smaller code blocks and then construct a timing model based on the information acquired by measuring the execution time of every individual block. This process can introduce changes in the software architecture. In this paper we use a multi-component benchmark to investigate the impact of software architecture on the timing behaviour of a system.
    Keywords: WCET; timing analysis; hybrid timing analysis; power window; embedded systems; TACLEBench; COBRA block generator.

  • Accountability management for multi-tenant cloud services   Order a copy of this article
    by Fatma Masmoudi, Mohamed Sellami, Monia Loulou, Ahmed Hadj Kacem 
    Abstract: The widespread adoption of multi-tenancy in the Software as a Service delivery model triggers several data protection issues that could decrease tenants' trust. In this context, accountability can strengthen the trust of tenants in the cloud by providing reassurance that the personal data hosted in the cloud is processed according to their requirements. In this paper, we propose an approach for the accountability management of multi-tenant cloud services that allows: compliance checking of services' behaviour against defined accountability requirements based on monitoring rules, detection of accountability violations otherwise, and post-violation analysis based on evidence. A tool suite is developed and integrated into a middleware to implement our proposal. Finally, the experiments we have carried out show the efficiency of our approach according to several criteria.
    Keywords: cloud computing; accountability; multi-tenancy; monitoring; accountability violation.

  • A big data approach for multi-experiment data management   Order a copy of this article
    by Silvio Pardi, Guido Russo 
    Abstract: Data sharing among similar experiments is limited by the use of ad hoc directory structures, data and metadata naming, and the variety of data access protocols used in different computing models. The Open Data and Big Data paradigms provide the context to overcome these heterogeneity problems. In this work, we present a study of a Global Storage Ecosystem designed to manage large and distributed datasets in the context of physics experiments. The proposed environment is entirely based on the open protocols HTTP/WebDAV, together with modern data-searching technologies, in line with the Big Data paradigm. More specifically, the main goal is to aggregate multiple storage areas exported with open protocols and to simplify data retrieval through a set of search-engine-like tools based on Elasticsearch and the Apache Lucene library. This platform offers physicists an effective instrument for simplifying multi-experiment data analysis by enabling data searching without knowing a priori the directory format or the data itself. As a proof of concept, we realised a prototype on the ReCaS supercomputing infrastructure by aggregating and indexing the files stored in a set of existing storage systems.
    Keywords: big data; data federation.

  • A WLAN triage testbed based on fuzzy logic and its performance evaluation for different number of clients and throughput parameter   Order a copy of this article
    by Kosuke Ozera, Takaaki Inaba, Shinji Sakamoto, Kevin Bylykbashi, Makoto Ikeda, Leonard Barolli 
    Abstract: Many devices communicate over Wireless Local Area Networks (WLANs). The IEEE 802.11e standard for WLANs is an important extension of the IEEE 802.11 standard focusing on QoS that works with any PHY implementation. IEEE 802.11e introduces EDCF and HCCA, both of which are useful for QoS provisioning to support delay-sensitive voice and video applications. EDCF uses the contention window to differentiate high-priority and low-priority services; however, it does not consider the priority of users. In this paper, to deal with this problem, we propose a Fuzzy-based Admission Control System (FACS). We implemented a triage testbed using FACS and carried out experiments. The experimental results show that the number of connected clients increases during the Avoid phase but does not change during the Monitoring phase, and that the implemented testbed performs better than conventional WLANs.
    Keywords: WLAN triage; congestion control.
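A minimal sketch of a fuzzy admission decision in the spirit of FACS: inputs are fuzzified with triangular membership functions and combined by simple rules into a crisp score. The membership shapes, rule weights and threshold are illustrative assumptions, not the parameters used in the paper's testbed.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def admit_score(channel_load, user_priority):
    """Combine fuzzified inputs (both in 0..1) into a crisp admission score."""
    load_low  = tri(channel_load, -0.5, 0.0, 0.6)
    load_high = tri(channel_load,  0.4, 1.0, 1.5)
    prio_high = tri(user_priority, 0.4, 1.0, 1.5)
    # Rule 1: low load -> admit. Rule 2: high load but high-priority user
    # -> admit with reduced strength (illustrative weight 0.5).
    return max(load_low, min(load_high, prio_high) * 0.5)

def should_admit(channel_load, user_priority, threshold=0.3):
    """Crisp admission decision against an assumed threshold."""
    return admit_score(channel_load, user_priority) >= threshold
```

The point of the user-priority input is exactly what the abstract notes EDCF lacks: a loaded channel can still admit a high-priority user.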

  • Enriching folksonomy for online videos   Order a copy of this article
    by Hiroki Sakaji, Masaki Kohana, Akio Kobayashi, Hiroyuki Sakai 
    Abstract: We propose a method that enriches folksonomy by using user comments on online videos. Folksonomy is a process in which users tag videos so that they can be searched for easily. On some videos, users can post both tags and comments; a tag corresponds to folksonomy. One such video-sharing website is Nico Nico Douga; however, users cannot post more than 12 tags on a video, so some important tags that could be posted sometimes are not. We present a method for acquiring some of these missing tags by choosing new tags that score well under our scoring method, which is based on information theory and a novel algorithm for estimating new tags using distributed databases that we constructed.
    Keywords: text mining; distributed database; information extraction.
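One information-theoretic way to score candidate tags from comments is pointwise mutual information between a term's frequency in one video's comments and its corpus-wide frequency; this is a hedged illustration of the general idea, not the paper's actual scoring formula.

```python
import math
from collections import Counter

def pmi_scores(video_comments, corpus_comments):
    """Score terms in this video's comments by how strongly they are
    associated with the video relative to their corpus-wide frequency.
    Higher scores suggest better candidate tags."""
    video_counts = Counter(w for c in video_comments for w in c.split())
    corpus_counts = Counter(w for c in corpus_comments for w in c.split())
    n_video = sum(video_counts.values())
    n_corpus = sum(corpus_counts.values())
    scores = {}
    for term, count in video_counts.items():
        p_in_video = count / n_video
        p_in_corpus = corpus_counts[term] / n_corpus
        scores[term] = math.log(p_in_video / p_in_corpus)  # PMI-style ratio
    return scores
```

Terms that are frequent in one video's comments but rare corpus-wide score highest, which matches the intuition of surfacing tags specific to that video.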

  • A web platform for oral exam of programming class   Order a copy of this article
    by Masaki Kohana, Shusuke Okamoto 
    Abstract: We develop a system to support oral exams in a programming class. Our programming class suffers from long waiting times for students. We assume that the waiting time can be reduced if a teacher can check a source code and the result of the program smoothly. A student uploads C++ source code and registers on a waiting list. The system compiles the code to HTML and JavaScript files using Emscripten, so the compiled program can run in a web browser. A teacher can check the order of students for the oral exam and, at the same time, see the source code and the program's output. Our system keeps the waiting list of the oral exam fair and prevents the teacher from overlooking invalid code, helping the teacher grade students correctly.
    Keywords: oral exam; programming; runtime environment.

  • A novel test case generation method based on program structure diagram   Order a copy of this article
    by Mingcheng Qu, XiangHu Wu, YongChao Tao, GuanNan Wang, ZiYu Dong 
    Abstract: At present, embedded software testing suffers from lag, lack of visualisation and low efficiency, and depends heavily on the testers' own test design, so neither the quality of the test cases nor the quality of the testing can be guaranteed. In this paper, a software program structure diagram model is established and verified, and test points are planned manually. Finally, we fill in the contents of the test items, generate the corresponding set of test cases according to the algorithm, and save them into a database for management. This method improves the reliability and efficiency of testing, ensures visual tracking and management of test cases, and strongly assists the planning and generation of test cases.
    Keywords: program structure diagram; test item planning; test case generation.

  • A dynamic cloud service selection model based on trust and service level agreements in cloud computing   Order a copy of this article
    by Yubiao Wang, Junhao Wen, Quanwang Wu, Lei Guo, Bamei Tao 
    Abstract: For high-quality, trusted service selection, we propose a dynamic cloud service selection model (DCSSM). Cloud service resources are divided into different service levels by Service-Level Agreement Management (SLAM), with each SLAM managing some cloud service registration information. To make the final trust evaluation values more realistic, the model computes a comprehensive trust value consisting of direct trust and indirect trust. First, combined weights consist of a subjective weight and an objective weight; using rough sets, an analytic hierarchy process method calculates the subjective weight. Direct trust also considers transaction time and transaction amount, yielding an accurate direct trust value. Secondly, indirect trust considers the similarity of users' trust evaluations, and comprises the indirect trust of friends and the indirect trust of strangers. Finally, when a transaction is completed, the direct trust value is dynamically updated. The model is simulated using CloudSim and compared with three other methods; the experimental results show that DCSSM performs better than all three.
    Keywords: dynamic cloud service; trust; service-level agreement; selection model; combining weights.
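The trust combination the abstract describes can be sketched at a high level as follows. The blending constants, the recency weighting and all function names are illustrative assumptions; the paper's actual rough-set and AHP computations are not reproduced.

```python
def combined_weight(subjective_w, objective_w, alpha=0.5):
    """Blend an AHP-style subjective weight with a rough-set objective
    weight; alpha is an assumed mixing constant."""
    return alpha * subjective_w + (1 - alpha) * objective_w

def direct_trust(ratings):
    """ratings: list of (score, transaction_amount, recency) triples.
    Recent, high-value transactions count more, mirroring the abstract's
    use of transaction time and amount."""
    num = sum(s * amt * rec for s, amt, rec in ratings)
    den = sum(amt * rec for _, amt, rec in ratings)
    return num / den if den else 0.0

def comprehensive_trust(direct, indirect, w_direct):
    """Final trust value as a weighted mix of direct and indirect trust."""
    return w_direct * direct + (1 - w_direct) * indirect
```

After each completed transaction, `direct_trust` would be recomputed with the new rating appended, which is one simple way to realise the dynamic update step.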

  • Research on regression test method based on multiple UML graphic models   Order a copy of this article
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most existing graph-based regression testing schemes target one given type of UML diagram and are not flexible in regression testing. This paper proposes a regression testing method that is universal across modifications of multiple UML graphical models. Influence-domain analysis of the effect of a modification on the UML graphical model determines which parts of the modified model structure must be retested and the range of affected generated test cases; test cases are then regenerated automatically. The method is shown to achieve a high logical coverage rate. Because it fully considers all kinds of dependency, it does not limit the types of UML diagram modification and offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • E-XY: an entropy-based XY routing algorithm   Order a copy of this article
    by Akash Punhani, Pardeep Kumar, Nitin Chanderwal 
    Abstract: Communication between cores or processing elements has become an important issue owing to the continuous increase in their numbers on a chip. In recent years, Network on Chip (NoC) has been used to handle these communication issues. The most common topology for NoC is the mesh, and the XY routing algorithm is the one most commonly used for it, being popular for its simplicity and deadlock-prevention capability. Its major drawback is that it cannot handle a high traffic load, which has led to the development of adaptive routing algorithms. An adaptive routing algorithm requires information about the adjacent routers before routing packets in order to avoid congestion. This information is transferred over extra dedicated links, as the normal links may be congested, and any delay in sending it adds traffic to the already congested links. In this paper, an E-XY (entropy-based XY) routing algorithm is proposed that estimates the state of adjacent routers locally from previously communicated packets. The experiments were carried out on an 8x8 mesh topology simulated in OMNeT++ 4.4.1 using HNOCS. Different types of traffic were considered, including uniform, bit complement, neighbour and tornado. The proposed algorithm is compared with other routing algorithms, including XY, IX/Y and Odd Even. Results demonstrate that it is comparable with XY up to a load factor of 0.8 and performs better than XY, IX/Y and Odd Even as the load increases. The proposed algorithm also helps reduce hardware cost, since the extra links and the router ports needed to connect them are not required. Hence, it is a better option for communication in a parallel computing environment.
    Keywords: routing algorithm; adaptive; parallel communication; router architecture; maximum entropy model.
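The baseline that E-XY extends is standard dimension-order XY routing, which can be sketched in a few lines; this shows only the deterministic XY baseline, not the entropy-based neighbour estimation, and the coordinate convention is an assumption for illustration.

```python
def xy_route(src, dst):
    """Hop-by-hop path from src to dst on a 2D mesh using XY
    (dimension-order) routing: move fully along X first, then along Y.
    Routing X before Y on a mesh is what makes XY deadlock-free."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                     # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path
```

E-XY keeps this hop structure but, at each router, uses an entropy model over recently seen packets to estimate neighbouring routers' congestion instead of querying them over dedicated links.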

  • Involving users in energy conservation: a case study in scientific clouds   Order a copy of this article
    by David Guyon, Anne-Cécile Orgerie, Christine Morin, Deb Agarwal 
    Abstract: Services offered by cloud computing are convenient to users for reasons such as their ease of use, flexibility, and financial model. Yet the data centres used for their execution are known to consume massive amounts of energy. The growing resource utilisation following the cloud's success highlights the importance of reducing its energy consumption. This paper investigates a way to reduce the footprint of HPC cloud users by varying the size of the virtual resources they request. We analyse the influence of concurrent applications with different resource sizes on the system's energy consumption. Simulation results show that larger resources are more energy-consuming despite completing applications faster. Although smaller resources offer energy savings, reducing the size too much is not always favourable in terms of energy. High energy savings depend on the distribution of user profiles.
    Keywords: cloud computing; green computing; HPC applications; energy savings; users' involvement.

  • Distributed and multi-core version of k-means algorithm   Order a copy of this article
    by Ilias Savvas, Dimitrios Tselios, Georgia Garani 
    Abstract: Nowadays, huge quantities of data are generated by billions of machines and devices. Numerous methods have been employed to make use of this valuable resource, some of them altered versions of established algorithms. Clustering is one of the most seminal methods for mining data sources, and k-means is a key algorithm that clusters data according to a set of attributes. However, its main shortcoming is its high computational complexity, which makes k-means very inefficient on big datasets. Although k-means is very widely used, a practical distributed variant that also exploits the multi-core power of contemporary machines has not yet been established. In this work, a three-phase distributed/multi-core version of k-means is presented, together with an analysis of its results. The experimental results are in line with the theoretical outcomes and demonstrate the correctness, efficiency and scalability of the proposed technique.
    Keywords: parallel algorithm; clustering; multi-core; distributed; k-means; OpenMP; MPI.
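The decomposition that makes k-means distribute well can be sketched as follows: each worker computes per-cluster partial sums over its data shard, and a combiner merges them into the next centroids. This is the generic reduce-friendly formulation, not the paper's specific three-phase scheme, and the shard layout is illustrative.

```python
def assign(point, centroids):
    """Index of the nearest centroid (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2
                                 for p, c in zip(point, centroids[i])))

def partial_sums(shard, centroids):
    """One worker's contribution: per-cluster (vector sum, count).
    Only these small aggregates need to cross the network, not the shard."""
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p in shard:
        i = assign(p, centroids)
        counts[i] += 1
        for d in range(dim):
            sums[i][d] += p[d]
    return sums, counts

def merge_and_update(results, centroids):
    """Combine all workers' partial sums into the next centroids;
    an empty cluster keeps its old centroid."""
    k, dim = len(centroids), len(centroids[0])
    total_s = [[0.0] * dim for _ in range(k)]
    total_c = [0] * k
    for sums, counts in results:
        for i in range(k):
            total_c[i] += counts[i]
            for d in range(dim):
                total_s[i][d] += sums[i][d]
    new = []
    for i in range(k):
        if total_c[i]:
            new.append([s / total_c[i] for s in total_s[i]])
        else:
            new.append(centroids[i])
    return new
```

Iterating these two steps until the centroids stop moving reproduces sequential k-means exactly, which is why a distributed version can match the single-machine result.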

  • A data replication strategy for document oriented NoSQL databases   Order a copy of this article
    by Khaoula Tabet, Riad Mokadem, Mohamed Ridda Laouar 
    Abstract: Cloud providers aim to maximise their profits while satisfying tenant requirements, e.g. performance. Relational database management systems face many obstacles in achieving those needs, so the use of NoSQL databases becomes necessary when dealing with heterogeneous workloads and voluminous data. In this context, we propose a new replication strategy that balances the workload of nodes and dynamically adjusts the number of replicas while taking the provider's profit into account. The resulting analysis shows that the proposed strategy reduces resource consumption, which improves the provider's profit while satisfying the tenants' performance service level objective.
    Keywords: cloud environment; NoSQL databases; data replication; provider profit; performance.

  • Logic programming as a service in multi-agent systems for the Internet of Things   Order a copy of this article
    by Roberta Calegari, Enrico Denti, Stefano Mariani, Andrea Omicini 
    Abstract: The widespread diffusion of low-cost computing devices, along with improvements of cloud computing platforms, is paving the way towards a whole new set of opportunities for Internet of Things (IoT) applications and services. Varying degrees of intelligence are required for supporting adaptation and self-management: yet, they should be provided in a lightweight, easy to use and customise, highly-interoperable way. In this paper we explore Logic Programming as a Service (LPaaS) as a novel and promising re-interpretation of distributed logic programming in the IoT era. After introducing the reference context and motivating scenarios of LPaaS as an effective enabling technology for intelligent IoT, we define the LPaaS general architecture, and discuss two different prototype implementations - as a web service and as an agent in a multi-agent system (MAS), both built on top of the tuProlog system, which provides the required interoperability and customisation. We finally showcase the LPaaS potential through two case studies, designed as simple examples of the motivating scenarios.
    Keywords: IoT; logic programming; multi-agent systems; pervasive computing; LPaaS; artificial intelligence; interoperability.

  • Cognitive workload management on globally interoperable network of clouds   Order a copy of this article
    by Giovanni Morana, Rao Mikkilineni, Surendra Keshan 
    Abstract: A new computing paradigm using distributed intelligent managed elements (DIME) and the DIME network architecture (DNA) is used to demonstrate a globally interoperable public and private cloud network deploying cloud-agnostic workloads. The workloads are cognitive: they can autonomously adjust their structure and maintain the desired quality of service. DNA is designed to provide a control architecture for workload self-management of non-functional requirements, addressing rapid fluctuations in either workload demand or available resources. Using DNA, a transaction-intensive three-tier workload is migrated from a physical server to a virtual machine hosted in a public cloud without interrupting service transactions. After migration, cloud-agnostic inter-cloud and intra-cloud auto-scaling, auto-failover and live migration are demonstrated, again without disrupting the user experience or losing transactions.
    Keywords: cloud computing; datacentre; manageability; DIME; DIME network architecture; cloud agnostic; cloud native.

  • Towards autonomous creation of service chains on cloud markets   Order a copy of this article
    by Benedikt Pittl, Irfan Ul-Haq, Werner Mach, Erich Schikuta 
    Abstract: Today, cloud services such as virtual machines are traded directly at fixed prices between consumers and providers on platforms such as Amazon EC2. The recent development of Amazon's EC2 spot market shows that dynamic cloud markets are gaining popularity. Hence, autonomous multi-round bilateral negotiations, also known as bazaar negotiations, are a promising approach for trading cloud services on future cloud markets, and they play a vital role in composing service chains. Based on a formal description, we describe such service chains and derive different negotiation types, implement them in a simulation environment, and evaluate our approach by executing different market scenarios. To this end, we developed three negotiation strategies for cloud resellers. Our simulation results show that cloud resellers, as well as their negotiation strategies, have a significant impact on the resource allocation of cloud markets: very high as well as very low markups reduce a reseller's profit.
    Keywords: cloud computing; cloud marketplace; IaaS; bazaar negotiation; SLA negotiation; cloud service chain; cloud reseller; multi-round negotiation; cloud economics.

  • OFQuality: a quality of service management module for software-defined networking   Order a copy of this article
    by Felipe Volpato, Madalena Pereira Da Silva, Mario Antonio Ribeiro Dantas 
    Abstract: The exponential growth of online devices has been causing difficulties for network management and maintenance. At the same time, applications are getting richer in terms of content and quality, thus requiring more and more network guarantees. To overcome this issue, new network approaches, such as Software-Defined Networking (SDN), have emerged. OpenFlow, one of the most widely used protocols for SDN, is not sufficient on its own to provide QoS based on queue prioritisation. In this paper, we propose the architecture of a controller module that implements the Open vSwitch Database Management Protocol (OVSDB) in order to provide QoS management with queue prioritisation. Our module differs from others in that it features mechanisms to test and facilitate user configuration. Our experiments showed that the module behaved as expected, introducing only small delays when managing switch elements, making it a useful tool for QoS management in SDN.
    Keywords: software-defined networking; SDN; OpenFlow; quality of service; QoS; Open vSwitch; OVS; Open vSwitch database; OVSDB; management plane.

  • Cache replication for information centric networks through programmable networks   Order a copy of this article
    by Erick B. Nascimento, Edward David Moreno, Douglas D. J. De Macedo 
    Abstract: Software-Defined Networking (SDN) is a new approach that decouples control from the data transmission function and is directly programmable. In parallel, Information-Centric Networking (ICN) promotes the use of information through network caching and multiparty communication. Owing to their programmable characteristics, these designs aim to add flexibility, solve traffic problems and transfer content through a scalable network structure with simple management. The premise of SDN that benefits ICN, besides decoupling, is the flexibility of network configuration to reduce segment overhead caused by the retransmission of duplicate files over the same segment. Based on this, an architecture is designed to provide reliable content that can be replicated in the network (Trajano, 2016). The ICN architecture of this proposal stores information in a logical volume for later access and can connect to remote controllers to store files reliably in cloud environments.
    Keywords: software defined networking; information centric network; programmability; flexibility; management; storage; controller.

  • Winning the war on terror: using social networking tools and GTD to analyse the regularity of terrorism activities   Order a copy of this article
    by Xuan Guo, Fei Xu, Zhi-ting Xiao, Hong-guo Yuan, Xiaoyuan Yang 
    Abstract: To grasp the temporal and spatial characteristics and activity patterns of terrorist attacks in China, and thus to inform effective counter-terrorism strategies, two different intelligence sources were analysed by means of social network analysis and mathematical statistics. First, using the social network analysis tool ORA, we build a terrorist-activity meta-network from text information, extract four categories of node (persons, places, organisations and times) and analyse the characteristics of the network's key nodes; the meta-network is then decomposed into four binary subnets (person-organisation, person-location, organisation-location and organisation-time) to analyse the temporal and spatial characteristics of terrorist activities. Next, using the GTD dataset, we analyse the characteristics of terrorist attacks in China from 1989 to 2015 and summarise the geo-spatial and temporal distribution of terrorist events. Combined with data visualisation, the earlier social-network-analysis results obtained from open-source text are verified and compared. Finally, the paper puts forward suggestions on counter-terrorism prevention strategy in China.
    Keywords: social network analysis; GTD; meta-network; ORA; counter-terrorism; terrorism activities.

  • Model-based deployment of secure multi-cloud applications   Order a copy of this article
    by Valentina Casola, Alessandra De Benedictis, Massimiliano Rak, Umberto Villano, Erkuden Rios, Angel Rego, Giancarlo Capone 
    Abstract: The wide diffusion of cloud services, offering functionality in different application domains and addressing different computing and storage needs, opens up the possibility of building multi-cloud applications that rely on heterogeneous services offered by multiple cloud service providers (CSPs). This flexibility not only enables efficient usage of existing resources but also allows, in some cases, specific security and performance requirements to be met. On the downside, resorting to multiple CSPs requires a huge amount of time and effort for application development. The MUSA framework enables a DevOps approach to developing multi-cloud applications with desired security Service Level Agreements (SLAs). This paper describes the MUSA Deployer models and tools, which aim to decouple multi-cloud application modelling and development from application deployment and cloud service provisioning. With the MUSA tools, application designers and developers can easily express and evaluate security requirements and subsequently deploy the application automatically, acquiring cloud services and installing and configuring software components on them.
    Keywords: cloud security; multi-cloud deployment; automated deployment.

  • Improving the MXFT scheduling algorithm for a cloud computing context   Order a copy of this article
    by Paul Moggridge, Na Helian, Yi Sun, Mariana Lilley, Vito Veneziano, Martin Eaves 
    Abstract: In this paper, the Max-min Fast Track (MXFT) scheduling algorithm is improved and compared against a selection of popular algorithms. The improved versions of MXFT are called Min-min Max-min Fast Track (MMMXFT) and Clustering Min-min Max-min Fast Track (CMMMXFT); the key difference is using min-min for the fast track. Experimentation revealed that, despite min-min's characteristic of prioritising small tasks at the expense of overall makespan, the overall makespan was not adversely affected, and the benefits of prioritising small tasks were identified in MMMXFT. Experiments were conducted in a simulator, with the exception of one real-world experiment, which identified the challenges faced by algorithms that rely on accurate execution-time prediction.
    Keywords: cloud computing; scheduling algorithms; max-min.
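The min-min heuristic used on the fast track can be sketched as follows: repeatedly pick the task whose earliest possible completion time is smallest and place it on the machine achieving that time. This is the textbook heuristic only; execution times are assumed machine-uniform here for simplicity, and the fast-track/clustering layers of MMMXFT and CMMMXFT are not shown.

```python
def min_min(task_lengths, n_machines):
    """Schedule tasks with the min-min heuristic.
    Returns (machine assigned to each task, makespan)."""
    ready = [0.0] * n_machines                 # when each machine frees up
    assignment = [None] * len(task_lengths)
    remaining = list(range(len(task_lengths)))
    while remaining:
        # Earliest completion time over all (task, machine) pairs;
        # picking the minimum is what favours small tasks first.
        t, m = min(((t, m) for t in remaining for m in range(n_machines)),
                   key=lambda tm: ready[tm[1]] + task_lengths[tm[0]])
        assignment[t] = m
        ready[m] += task_lengths[t]
        remaining.remove(t)
    return assignment, max(ready)
```

Max-min differs only in taking the task with the *largest* such minimum completion time, which is why swapping the fast-track heuristic between the two is a small change with measurable effects.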

Special Issue on: Recent Developments in Parallel, Distributed and Grid Computing for Big Data

  • GPU accelerated video super-resolution using transformed spatio-temporal exemplars   Order a copy of this article
    by Chaitanya Pavan Tanay Kondapalli, Srikanth Khanna, Chandrasekaran Venkatachalam, Pallav Kumar Baruah, Kartheek Diwakar Pingali, Sai Hareesh Anamandra 
    Abstract: Super-resolution (SR) is the method of obtaining a high-resolution (HR) image or image sequence from one or more low-resolution (LR) images of a scene. Super-resolution has been an active area of research in recent years owing to its applications in defence, satellite imaging, video surveillance and medical diagnostics. In a broad sense, SR techniques can be classified into external-database-driven and internal-database-driven approaches. In the first approach, the training phase is computationally intensive, as it learns the LR-HR patch relationships from huge datasets, while the test procedure is relatively fast. In the second approach, the super-resolved image is constructed directly from the available LR image, eliminating the need for any learning phase, but the testing phase is computationally intensive. Recently, Huang et al. (2015) proposed a transformed self-exemplar internal-database technique that takes advantage of the fractal nature of an image by expanding the patch search space using geometric variations. This method fails if there is no patch redundancy within and across image scales, and also if it fails to detect the vanishing points (VP) used to determine the perspective transformation between the LR image and its subsampled form. In this paper, we expand the patch search space by taking advantage of the temporal dimension of the image frames in the scene video, and we use the efficient VP detection technique of Lezama et al. (2014). We are thereby able to successfully super-resolve even the failure cases of Huang et al. (2015) and achieve an overall improvement in PSNR. We also focused on reducing the computation time by exploiting the embarrassingly parallel nature of the algorithm: by parallelising it, we achieved a speedup of 6 on multi-core, up to 11 on GPU, and around 16 on a hybrid multi-core/GPU platform. Using our hybrid implementation, we achieved a 32x super-resolution factor in limited time. We also demonstrate superior results for the proposed method compared with current state-of-the-art SR methods.
    Keywords: super-resolution; self-exemplar; perspective geometry; temporal dimension; vanishing point; GPU; multicore.

  • Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation   Order a copy of this article
    by Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, Emmanuel S. Pilli 
    Abstract: In a cloud environment, overload leads to performance degradation and Service Level Agreement (SLA) violations, while underload results in inefficient use of resources and needless energy consumption. Dynamic Virtual Machine (VM) consolidation is considered an effective solution to both problems. However, dynamic VM consolidation is not trivial, as the runtime overheads of VM migration can themselves violate the negotiated SLA. Further, dynamic VM consolidation approaches must answer several questions: (i) when to migrate a VM, (ii) which VM to migrate, and (iii) where to migrate the selected VM. In this work, we develop a comprehensive approach to these problems. We explore forecasting methods for host overload detection, develop a fuzzy-logic-based VM selection approach that enhances the performance of the VM selection strategy, and develop a VM placement algorithm based on destination CPU utilisation. The performance of the proposed approaches is evaluated in the CloudSim toolkit using the PlanetLab dataset. The simulation results show significant improvements in the number of VM migrations, energy consumption and SLA violations.
    Keywords: cloud computing; virtual machines; dynamic virtual machine consolidation; exponential smoothing; fuzzy logic.
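The exponential-smoothing forecast used for host overload detection can be sketched as follows; the 0.5 smoothing factor and 0.8 utilisation threshold are illustrative assumptions, not values from the paper:

```python
def forecast_load(history, alpha=0.5):
    """One-step-ahead CPU-load forecast via single exponential smoothing."""
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def host_overloaded(history, threshold=0.8, alpha=0.5):
    """Flag a host for VM migration when the forecast load exceeds the threshold."""
    return forecast_load(history, alpha) > threshold
```

Forecasting (rather than reacting to the current sample) lets consolidation act before the SLA is actually violated.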

  • A distributed framework for cyber-physical cloud systems in collaborative engineering   Order a copy of this article
    by Stanislao Grazioso, Mateusz Gospodarczyk, Mario Selvaggio, Giuseppe Di Gironimo 
    Abstract: Distributed cyber-physical systems play a significant role in enhancing group decision-making processes, as in collaborative engineering. In this work, we develop a distributed framework to allow the use of collaborative approaches in group decision-making problems. We use the fuzzy analytic hierarchy process, a multiple criteria decision-making method, as the algorithm for the selection process. The architecture of the framework makes use of open-source utilities. The information components of the distributed framework act in response to the feedback provided by humans. Cloud infrastructures are used for data storage and remote computation. The motivation behind this work is to make possible the implementation of group decision-making in real scenarios. Two illustrative examples show the feasibility of the approach in different application fields. The main outcome is the achievement of a time reduction for the selection and evaluation process.
    Keywords: distributed systems; cyber-physical systems; web services; group decision making; fuzzy AHP; product design and development.
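The priority weights that the (fuzzy) AHP derives from a pairwise comparison matrix can be approximated with the row geometric-mean method; this crisp sketch omits the fuzzification step the paper describes, and the example matrix is illustrative:

```python
def ahp_weights(pairwise):
    """Normalised priority vector from a reciprocal pairwise comparison
    matrix, using the row geometric-mean approximation."""
    gms = []
    for row in pairwise:
        prod = 1.0
        for v in row:
            prod *= v
        gms.append(prod ** (1.0 / len(row)))
    total = sum(gms)
    return [g / total for g in gms]
```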

Special Issue on: Resource Provisioning in Cloud Computing

  • A power saver scheduling algorithm using DVFS and DNS techniques in cloud computing datacentres   Order a copy of this article
    by Saleh Atiewi, Salman Yussof, Mohd Ezanee, Mutasem Zalloum 
    Abstract: Cloud computing is a fascinating and profitable area of modern distributed computing. Aside from providing millions of users with the means to use offered services through their own computers, terminals, and mobile devices, cloud computing provides an environment with low cost, a simple user interface, and low power consumption by employing server virtualisation in its offered services (e.g., infrastructure as a service). The pool of virtual machines in a cloud computing datacentre (DC) must be managed by an efficient task scheduling algorithm to achieve good resource usage and quality of service, thus ensuring low energy consumption in the cloud computing environment. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage and frequency scaling technique. The proposed scheduling algorithm can efficiently reduce the energy consumed in executing jobs by increasing resource usage. The GreenCloud simulator is used to simulate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server usage, reduce energy consumption, and reduce execution time.
    Keywords: DVFS; DNS; virtual machine; datacentre; cloud computing; power consumption.
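The saving that DVFS exploits follows from the classic CMOS dynamic power model P = C·V²·f; a toy sketch (the constants in the example are illustrative, not server measurements):

```python
def dynamic_power(capacitance, voltage, frequency):
    """CMOS dynamic power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

def job_energy(cycles, capacitance, voltage, frequency):
    """Energy to execute a job: power multiplied by execution time (cycles / f)."""
    return dynamic_power(capacitance, voltage, frequency) * (cycles / frequency)
```

Because voltage enters quadratically, lowering V and f together on an underutilised server cuts energy per job far more than it lengthens execution time, which is the intuition behind DVFS-aware scheduling.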

  • Towards providing middleware-level proactive resource reorganisation for elastic HPC applications in the cloud   Order a copy of this article
    by Rodrigo Righi, Cristiano Costa, Vinicius Facco, Igor Fontana, Mauricio Pillon, Marco Zanatta 
    Abstract: Elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. Particularly for High Performance Computing (HPC), elasticity can provide better use of resources and also a reduction in the execution time of applications. Today, we observe the emergence of proactive initiatives for combining elasticity and HPC, but each presents at least one drawback: reliance on prior user experience, long processing times, manual completion of parameters, or a design tied to a specific infrastructure and workload setting. In this context, this article presents ProElastic, a lightweight model that uses proactive elasticity to drive resource reorganisation decisions for HPC applications. Using ARIMA-based time series and analysing the mean time to launch virtual machines, ProElastic anticipates under- and over-loaded situations, triggering elasticity actions beforehand to address them. Our idea is to explore both performance and adaptivity at the middleware level in a way that is effortless from the user's perspective: the user needs neither to set elasticity parameters nor to rewrite the HPC application. Based on ProElastic, we developed a prototype that was evaluated with a master-slave iterative application and compared against reactive-based elasticity and non-elastic approaches. The results showed performance gains and a competitive cost (application time multiplied by consumed resources) in favour of ProElastic when confronted with these two approaches.
    Keywords: cloud elasticity; proactive optimisation; performance; resource management; adaptivity.
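The proactive trigger can be illustrated with a deliberately simplified forecaster; a naive linear-trend extrapolation stands in here for the paper's ARIMA model, and the 0.9/0.3 thresholds are illustrative assumptions:

```python
def forecast_next(loads):
    """Naive linear-trend forecast of the next load observation
    (a simplified stand-in for an ARIMA forecaster)."""
    if len(loads) < 2:
        return loads[-1]
    return loads[-1] + (loads[-1] - loads[-2])

def proactive_action(loads, upper=0.9, lower=0.3):
    """Trigger elasticity before the threshold is actually crossed,
    hiding the VM launch latency from the application."""
    f = forecast_next(loads)
    if f > upper:
        return "scale_out"
    if f < lower:
        return "scale_in"
    return "hold"
```

Acting on the forecast rather than the current load is what lets the middleware absorb the mean time to launch a VM.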

  • Virtual machine placement in distributed cloud centres using bin packing algorithm   Order a copy of this article
    by Kumaraswamy S, Mydhili K. Nair 
    Abstract: A virtual machine is a logical framework for providing services on the cloud. These virtual machines form a logical partition of the physical infrastructure present in the cloud centre. Virtual machines are not only prone to cost escalation but also responsible for huge power consumption. Hence, the cloud centre needs to optimise cost and power consumption by migrating these virtual machines from their current physical machines to other, more suitable physical machines. So far, this problem has been addressed by considering the virtual machines present in a single-location cloud centre. But current cloud centres have multiple locations, all synchronised to provide cloud services. In this more complex setting, it is important to distinguish same-location from different-location virtual machines and provide suitable placement algorithms. In this work, the problem of virtual machine placement is modelled as a 3-slot bin packing problem. It is shown to be NP-Complete, and suitable approximation algorithms are proposed. A polynomial-time approximation scheme is also designed for this problem, which provides a facility to control the quality of approximation. Empirical studies performed through simulation confirm the theoretical bounds that were obtained.
    Keywords: cloud computing; virtual machine; virtual machine placement; distributed cloud; bin packing.
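A single-location baseline for this placement problem is the classic first-fit-decreasing heuristic for bin packing; the one-dimensional integer CPU units below are an illustrative simplification of the paper's 3-slot formulation:

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Place VMs onto as few hosts as possible: sort by demand
    (descending), then put each VM on the first host where it fits."""
    remaining = []   # free capacity per opened host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(remaining):
            if demand <= free:
                remaining[i] -= demand
                placement[vm] = i
                break
        else:
            remaining.append(host_capacity - demand)
            placement[vm] = len(remaining) - 1
    return placement, len(remaining)
```

Fewer opened hosts directly translates into lower power consumption, which is the objective the abstract targets.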

Special Issue on: Emergent Peer-to-Peer Network Technologies for Ubiquitous and Wireless Networks

  • An improved energy efficient multi-hop ACO-based intelligent routing protocol for MANET   Order a copy of this article
    by Jeyalaxmi Perumaal, Saravanan R 
    Abstract: A Mobile Ad-hoc Network (MANET) consists of a group of mobile nodes that communicate without any supporting centralised structure. Routing in a MANET is difficult because of its dynamic features, such as high mobility, constrained bandwidth, and link failures due to energy loss. The objective of the proposed work is to implement an intelligent routing protocol. Selecting the best hops is essential to provide good throughput in the network; therefore, Ant Colony Optimisation (ACO) based intelligent routing is proposed. The ACO technique selects the best intermediate hops for the intelligent routing path, greatly reducing network delay and link failures by validating the co-ordinator nodes. The best co-ordinator nodes are selected as intermediate hops in the intelligent routing path. The performance is evaluated using the NS2 simulation tool, and the metrics considered for evaluation are delivery and loss rate of sent data, throughput, network lifetime, delay, and energy consumption.
    Keywords: ant colony optimisation; intelligent routing protocol; best co-ordinator nodes; MANET.
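The core ACO bookkeeping behind such route selection is the evaporate-then-deposit pheromone update; a minimal sketch, where the evaporation rate and deposit constant are illustrative assumptions:

```python
def update_pheromone(tau, ant_paths, rho=0.5, q=1.0):
    """One ACO iteration: evaporate pheromone on every known edge,
    then let each ant deposit q/|path| on the edges it traversed."""
    tau = {edge: (1.0 - rho) * t for edge, t in tau.items()}
    for path in ant_paths:
        deposit = q / len(path)
        for edge in path:
            tau[edge] = tau.get(edge, 0.0) + deposit
    return tau
```

Shorter, more-travelled paths accumulate pheromone while stale links decay, which is how ACO-based routing steers traffic toward reliable co-ordinator nodes.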

  • Analysis of spectrum handoff schemes for cognitive radio networks considering secondary user mobility   Order a copy of this article
    by K.S. Preetha, S. Kalaivani 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability, or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary users (SUs), also known as unlicensed users, make use of the spectrum in an opportunistic manner. The SU uses the spectrum in such a manner that the primary (licensed) user (PU) does not face interference above a threshold level of tolerance. Whenever a PU comes back to reclaim its licensed channel, the SU using it needs to perform a spectrum handoff (SHO) to another channel that is free of PUs. This way of functioning is termed spectrum mobility, which can be achieved by means of SHO. Initially, the SUs continuously sense the channels to identify an idle channel; errors in channel sensing are possible. A detection theory is put forth to analyse spectrum sensing errors through the receiver operating characteristic (ROC), considering false alarm probability, miss detection probability and detection probability. In this paper, we meticulously investigate and analyse the probability of spectrum handoff (PSHO), and hence the performance of spectrum mobility, with Lognormal-3 and Hyper-Erlang distribution models, considering SU call duration and the residual availability time of spectrum holes as measurement metrics designed for tele-traffic analysis.
    Keywords: cognitive radio networks; detection probability; probability of a miss; SNR; false alarm probability; primary users; secondary users.
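For a Gaussian-approximated sensing statistic, one point of the ROC the abstract analyses can be sketched via the Q-function; the means and variance in the example are illustrative, not taken from the paper:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def roc_point(threshold, mu0, mu1, sigma):
    """False-alarm and detection probabilities for a Gaussian test
    statistic with mean mu0 under noise only and mu1 with a PU present."""
    p_fa = q_function((threshold - mu0) / sigma)
    p_d = q_function((threshold - mu1) / sigma)
    return p_fa, p_d
```

Sweeping the threshold traces the ROC curve, trading false alarms (wasted handoffs) against missed detections (interference to the PU).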

Special Issue on: Real-time Communication Issues in the Internet of Things using Big Data Stream Computing

  • Consistent and effective energy utilisation of node model for securing data in wireless sensor networks   Order a copy of this article
    by P. Balamurugan, Marimuthu Karuppiah, A. Mummoorthy, A. Viswabharathi, R. Niranchana 
    Abstract: This paper introduces a new routing model, the effective energy utilisation of node model (EEUNM), created for energy-efficient securing of data in wireless sensor networks. EEUNM is intended for wireless sensor applications such as military and forest field deployments. The aim of this model is to find the best path based on the energy consumption of nodes. The focus of EEUNM is a new approach that simultaneously factors energy and trustworthiness of routes into the routing model. EEUNM finds and selects the path with the maximum node capability while incurring little additional overhead compared with the common protocols AODV-EHA and LTB-AODV. Simulation results show that the proposed EEUNM model maintains the same security level while making more efficient use of node energy for data packet delivery.
    Keywords: EEUNM; energy; utility; routing; transmission cost; efficiency.
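The path-selection idea — prefer the route whose weakest node retains the most energy — can be sketched as a max-min bottleneck choice; the node names and energy values below are illustrative:

```python
def best_path(paths, node_energy):
    """Choose the route maximising the minimum residual energy along
    it, so that no single node is drained prematurely."""
    return max(paths, key=lambda path: min(node_energy[n] for n in path))
```

Avoiding routes through nearly depleted nodes is what extends overall network lifetime.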

  • Image encryption based on random scrambling and chaotic logistic map   Order a copy of this article
    by Sudeept Yadav, Y. Singh 
    Abstract: This paper aims to enhance the performance of an image encryption technique that first scrambles the locations of the pixels and then applies a chaotic map, keyed by a 32-bit symmetric key, to change the pixel values of the image. To make the plain image unidentifiable, the scrambling operation uses a one-dimensional vector technique that changes the correlations of all adjacent pixels. The chaotic map algorithm then produces the cipher image by changing the pixel values of the given image. To increase the security level, keys are applied in both the encryption and decryption processes. Even for large images, this encryption and decryption process is simple and provides high security. The proposed encryption method has been tested on different grey images and showed good results, further increasing the security level of image encryption and decryption.
    Keywords: image encryption; random scrambling; chaotic logistic map.
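After scrambling, the substitution step can be realised by iterating the logistic map x_{k+1} = r·x_k·(1 − x_k) and XORing the quantised states with the pixels; a minimal sketch, where the seed and r = 3.99 are illustrative stand-ins for the paper's 32-bit key:

```python
def logistic_keystream(x0, r, n):
    """n pseudo-random bytes from the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k), quantising each state to 8 bits."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_pixels(pixels, x0=0.3141, r=3.99):
    """XOR each pixel with the keystream; applying it twice decrypts."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because XOR is its own inverse, the same key (seed and r) both encrypts and decrypts, and a tiny change in the seed yields a completely different keystream — the sensitivity that chaotic ciphers rely on.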

  • Predicting the soil profile through modified regression by discretisation algorithm for the crop yield in Trichy district   Order a copy of this article
    by M.C.S. Geetha, Elizabeth Shanthi 
    Abstract: Agriculture is the most important segment of the Tamil Nadu state economy, as around 70% of the residents are engaged in agriculture and associated activities for their livelihood. Yet the state still faces problems such as resource-poor farmers, fragmentation of holdings, reliance on monsoon rain, and low soil productivity. In our research we have taken up the issue of predicting the yield level by analysing the characteristics of a particular soil. As gathering information from an enormous volume of data is a complicated task, we have concentrated only on Trichy district, from which the soil data samples were collected. A preprocessing element was used to construct training and testing sets for use with the proposed and compared classifiers. Classifiers such as linear regression, simple linear regression, additive regression, and regression by discretisation were used for training and testing on the datasets. We modified the regression by discretisation classifier, and the results were explored in order to obtain the crop yield prediction for the soil. We compared our methodology with various classifiers, analysing the error of the predicted values, the root relative squared error and the cover coefficient for each classifier model. This study found that the modified regression by discretisation classifier is appropriate for training soil data for class prediction.
    Keywords: agriculture; classifiers; regression by discretisation; soil; Trichy district.
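The "regression by discretisation" idea — bin the continuous yield, classify the bin, then predict the bin midpoint — rests on an equal-width discretisation step like the sketch below; the bin count and values are illustrative:

```python
def discretise(values, n_bins):
    """Equal-width discretisation of a continuous target: returns a
    bin index per value plus the midpoints used as predicted values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant target
    bins = [min(int((v - lo) / width), n_bins - 1) for v in values]
    midpoints = [lo + (i + 0.5) * width for i in range(n_bins)]
    return bins, midpoints
```

Any classifier can then be trained on the bin labels, turning a regression task into classification.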

  • Computing and e-learning in the perspective of distributed models   Order a copy of this article
    by Kalyanaraman Pattabiraman, S.Margret Anouncia 
    Abstract: Electronic learning systems have been an area of interest for researchers over the past few decades, as they offer flexible, reusable, scalable, available, and affordable services to a wide variety of users, including content generators, content users, and content managers. An e-learning system involves digital content development, administration, and delivery of learning content that can be made available even to remote locations. With the huge volume of digital content now available on the internet, an effective e-learning system requires collaboration among content creators, users, and administrators; further, it requires substantial computational power and storage capacity. Existing systems focus mostly on the delivery of content to learners rather than on the effective storage and efficient retrieval of content based on the skill of the individual learner. Several approaches have attempted personalised content delivery based on the basic skill of learners. However, these systems maintain a repository of verified content rather than validated content. In order to maintain validated content for efficient retrieval, distributed models examined from different validation perspectives may be fruitful. This paper surveys various distributed models developed in the e-learning domain with respect to the learning environment, learning objects and learning style.
    Keywords: e-learning; distributed model; content retrieval; learning environment; learning objects; learning style.

  • A personalised user preference and feature-based semantic information retrieval system in semantic web search   Order a copy of this article
    by Princess Maria John, S. Arockiasamy, Ranjith Jeba Thangiah 
    Abstract: The existing web Information Retrieval System (IRS) gathers details according to keywords, which is insufficient for large quantities of information: it has limited ability to capture the user's intent and the relationships among key terms. This issue is addressed by a novel web architecture called semantic web search, also known as conceptual or semantic search, which resolves the drawbacks of the keyword-based search technique. In this work, an ontology-based Semantic Supported Information Retrieval System (SIRS) is introduced, in which the user's input query is processed by a Hypertext Markup Language (HTML) parser; the Probabilistic Latent Semantic Indexing (PLSI) algorithm is then used to gather the details in an effective manner. These results are combined, with the assistance of the field concepts of pre-existing domain ontologies and a mediator thesaurus, to determine the semantic associations amongst them. The proposed work concentrates on resolving web search issues, including personalised web search. From the user's browsing record, an ontological outline is produced, which is used for personalisation. The SIRS is thus coupled with personalised search over item features, termed the Personalised User Preference and Feature based Semantic Information Retrieval System (PUFSIRS) architecture. PUFSIRS operates through a particle agent, which drives the SIRS according to the interests of the user via Multi-Criteria Particle Swarm Optimisation (MCPSO). When the PUFSIRS architecture is employed to obtain web details, search performance is enhanced and more exact output is obtained. The appropriate details for the semantic query are gathered and categorised based on the pertinence measure of the MCPSO procedure. The performance investigation shows that the proposed PUFSIRS architecture can enhance the accuracy and efficiency of gathering the appropriate web documents, in contrast to current schemes.
    Keywords: information retrieval system; semantic web; semantic search; ontology; semantic query; semantic indexing; user preferences.
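Independently of the PLSI and ontology machinery, the final ranking step of any such retrieval system can be illustrated with plain term-frequency cosine similarity; the documents and query below are illustrative, and the paper's actual scoring uses semantic associations rather than raw term counts:

```python
import math
from collections import Counter

def cosine(query, doc):
    """Cosine similarity between two texts using raw term counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def rank(query, docs):
    """Return documents ordered by decreasing similarity to the query."""
    return sorted(docs, key=lambda d: cosine(query, d), reverse=True)
```

Semantic approaches such as PLSI improve on this baseline by matching documents that share meaning with the query even when they share no keywords.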

  • Investigation on association of self esteem and students' academic performance   Order a copy of this article
    by M. Amala Jayanthi, Lakshmana Kumar Ramaswamy, S. Swathi 
    Abstract: Educationalists have formulated a taxonomy called Bloom's Taxonomy. This states that, during education, a student progresses not only in knowledge (cognitive), but also in emotions (attitude/behaviour) and skill sets (psychomotor). In general, only the knowledge of the students is assessed. This paper researches how students' self esteem (attitude) influences their academic performance. Self esteem is an emotional evaluation of self worth, either positively or negatively. Rosenberg's Self Esteem Scale is used to evaluate the individual's self esteem. The students are categorised based on their self esteem scale and performance scale, respectively, using supervised and unsupervised learning. The relation between self esteem and performance is proven using predictive and descriptive modelling. The study reveals the positive association between self esteem and performance. This research will help the teaching community to realise the influence that students' self esteem has on their performance, and help students to grow positively in their knowledge, emotions and skills.
    Keywords: Bloom’s Taxonomy; educational data mining; Rosenberg Self Esteem Scale; multilayer perceptron; criterion reference model.
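Banding the 0-30 Rosenberg total into categories for the classification step can be sketched as below; the 15/25 cut-offs are the commonly quoted guideline values, not necessarily the ones used in this study:

```python
def esteem_category(score):
    """Band a Rosenberg total (0-30) into low / normal / high self esteem."""
    if not 0 <= score <= 30:
        raise ValueError("Rosenberg totals range from 0 to 30")
    if score < 15:
        return "low"
    if score <= 25:
        return "normal"
    return "high"
```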

Special Issue on: Emerging Scalable Edge Computing Architectures and Intelligent Algorithms for Cloud-of-Things and Edge-of-Things

  • A survey on fog computing and its research challenges   Order a copy of this article
    by Jose Dos Santos Machado, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This paper reviews fog computing, a new paradigm of distributed computing, presenting its concept, characteristics and areas of application. It performs a literature review on the problems of its implementation and analyses its research challenges, such as security issues, operational issues and standardisation. We show and discuss that many questions still need to be researched in academia before its implementation becomes a reality, but it is clear that its adoption is inevitable for the internet of the future.
    Keywords: fog computing; edge computing; cloud computing; IoT; distributed computing; cloud integration to IoT.