Forthcoming articles

 


International Journal of Grid and Utility Computing

 

These articles have been peer-reviewed and accepted for publication in IJGUC, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

 

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

 

Register for our alerting service, which notifies you by email when new issues of IJGUC are published online.

 

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

 

International Journal of Grid and Utility Computing (62 papers in press)

 

Regular Issues

 

  • Trust modelling for opportunistic cloud services   Order a copy of this article
    by Eric Kuada 
    Abstract: This paper presents a model of the concept of trust and a trust management system for opportunistic cloud services platforms. A systematic review of trust-related studies in cloud computing revealed that the concept of trust is used loosely, without any formal specification, both in cloud computing discussions and in trust engineering in general. A formal definition and model of trust are, however, essential for the design of trust management systems. The paper therefore presents a model for the formal specification of the concept of trust, together with a trust management system for opportunistic cloud services. The applicability of the trust model and the trust management system is demonstrated for cloud computing by applying them to software-as-a-service and infrastructure-as-a-service usage scenarios in the context of opportunistic cloud services environments.
    Keywords: opportunistic cloud services; trust engineering; trust in cloud computing; trust modeling; trust management system; pseudo service level agreements.

  • Efficient cache replacement policy for minimising error rate in L2-STT-MRAM caches   Order a copy of this article
    by Rashidah F. Olanrewaju, Burhan Ul Islam Khan, A. Raouf Khan, Mashkuri Yaacob, Md Moktarul Alam 
    Abstract: In recent times, various challenges have been encountered in the design and development of Static RAM (SRAM) caches, which has led to designs in which other memory cell technologies are used for on-chip embedded caches. Current research on cache design indicates that Spin Torque Transfer Magnetic RAMs (STT-MRAMs) have become one of the most promising technologies in the field of memory chip design, gaining considerable attention from researchers owing to their dynamic direct-map and data access policies for reducing the average cost in both time and energy. Although STT-MRAMs offer high density, low power consumption and non-volatility, increasing rates of WRITE failures and READ disturbances strongly affect the reliability of STT-MRAM caches. Besides workload behaviour, process variations directly affect these failure/disturbance rates. Furthermore, cache replacement algorithms play a significant part in minimising the Error Rate (ER) induced by WRITE operations. In this paper, the vulnerability of STT-MRAM caches is investigated to examine the effect of workloads as well as process variations in characterising the reliability of STT-MRAM caches. The study analyses and evaluates an existing efficient cache replacement policy, Least Error Rate (LER), which uses Hamming Distance (HD) computations to reduce the Write Error Rate (WER) of L2 STT-MRAM caches with acceptable overheads. The performance analysis of the algorithm confirms its effectiveness in reducing the WER and cost overheads compared with the conventional LRU technique implemented on SRAM cells.
    Keywords: cache replacement algorithm; field assisted STT-MRAM; error rate; L2 caches.
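    The abstract above hinges on using Hamming distance to pick a replacement victim. Below is a minimal sketch of one plausible reading of that idea (an assumption for illustration, not the paper's exact LER algorithm): evict the way whose stored content differs from the incoming line in the fewest bit positions, so the write flips as few STT-MRAM cells as possible. All names and values are illustrative.

      # Hedged sketch: pick a cache-way victim so that the incoming block flips
      # as few cells as possible (fewer WRITE attempts that can fail).

      def hamming_distance(a, b):
          """Number of differing bits between two cache-line payloads."""
          return bin(a ^ b).count("1")

      def select_victim(ways, incoming):
          """Index of the way whose content is closest (in Hamming distance)
          to the incoming line, i.e. the write that flips the fewest bits."""
          return min(range(len(ways)), key=lambda i: hamming_distance(ways[i], incoming))

      if __name__ == "__main__":
          ways = [0b1010_1100, 0b1111_0000, 0b0000_1111]   # current contents of a set
          incoming = 0b1010_1110                           # line to be written
          print(select_victim(ways, incoming))             # -> 0 (only one bit differs)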

  • An infrastructure model for smart cities based on big data   Order a copy of this article
    by Eliza Helena Areias Gomes, Mario Antonio Ribeiro Dantas, Douglas D. J. De Macedo, Carlos Roberto De Rolt, Julio Dias, Luca Foschini 
    Abstract: The number of projects focused on smart cities has grown in recent years. The massive amount of data generated by these initiatives creates considerable complexity in managing all this information, and several approaches have been developed in recent years to address this problem. In this paper, we propose a big data infrastructure model for a smart city project. The goal of this model is to present the stages of data processing, namely extraction, storage, processing and visualisation, as well as the types of tool needed for each phase. To implement our proposed model, we used ParticipACT Brazil, a smart city project. This project combines different databases to compose its big data and uses these data to solve urban problems. We observe that our model provides a structured view of the software to be used in the big data server of ParticipACT Brazil.
    Keywords: big data; smart city; big data tools.

  • Playing in traffic: an investigation of low-cost, non-invasive traffic sensors for street lighting luminaire deployment   Order a copy of this article
    by Karl Mohring, Trina Myers, Ian Atkinson 
    Abstract: Real-time traffic monitoring is essential to the development of smart cities and offers potential for energy savings. However, real-time traffic monitoring is a task that requires sophisticated and expensive hardware. Owing to the prohibitive cost of specialised sensors, accurate traffic counts are typically limited to intersections, where traffic information is used for signalling purposes. This sparse arrangement of traffic detection points does not provide adequate information for intelligent lighting applications, such as adaptive dimming. This paper investigates low-cost, off-the-shelf sensors that can be installed inside street lighting luminaires for traffic sensing. A luminaire-mounted sensor test-bed installed on a moderately busy road trialled three non-invasive presence-detection sensors: passive infrared (PIR), sonar (UVD) and lidar. The proof-of-concept study revealed that an HC-SR501 PIR motion detector could count traffic with 73% accuracy at a low cost and may be suitable for intelligent lighting applications if accuracy can be further improved.
    Keywords: commodity; internet of things; vehicle detection; sensors; smart cities; wireless sensor networks.

  • Real-time web-cast system by multihop WebRTC communications   Order a copy of this article
    by Daiki Ito, Michitoshi Niibori, Masaru Kamada 
    Abstract: A software system is developed for casting the screen images and voice from a host PC to client web browsers on many other PCs in real time. The system is intended for use in classrooms. Students only have to bring their own PCs and connect to the teacher's host PC through a web browser over a wireless network to see and listen to the teaching materials presented on the host PC. The client web browsers are organised in the shape of a binary tree, along which the video and audio data are relayed in multihop fashion using the Web Real-Time Communication (WebRTC) protocol. This binary multihop relay structure is adopted so as not to burden the host PC with the communication load. A test has shown that voice and motion pictures of a rather small size of 320 x 240 pixels on a teacher's PC can be presented at a rate of five frames per second, without any noticeable delay, on web browsers running on 38 client devices for students under a local WiFi network. To host more client devices, the frame rate has to be lowered, down to a rate as slow as a slide show of still pictures.
    Keywords: real-time web-cast system; bring your own device; WebSocket; web real-time communication.

  • Dynamic migration of virtual machines to reduce energy consumption in a cluster   Order a copy of this article
    by Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa 
    Abstract: Virtual machines are widely used to provide applications with virtual services in server clusters. Here, a virtual machine can migrate from a host server to a guest server. In this paper, we consider a cluster where virtual machines are dynamically created and dropped depending on the number of processes. We propose a dynamic virtual machine migration (DVMM) algorithm to reduce the total electric energy consumption of the servers. If an application issues a process to the cluster, the most energy-efficient host server is first selected and the process is performed on a virtual machine of that server. A virtual machine then migrates from a host server to a guest server so that the total electric energy consumption of the servers can be reduced. In the evaluation, we show that the total electric energy consumption and active time of the servers and the average execution time of processes can be reduced by the DVMM algorithm.
    Keywords: energy-efficient computation; virtual machine; power consumption model; energy-aware dynamic migration of virtual machines.
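    A minimal sketch of the host-selection step described above, under the assumption of a simple linear power model (the paper's actual energy model and migration logic are not reproduced): the cluster picks the server whose estimated power draw grows least when the new process is placed on one of its virtual machines.

      # Hedged sketch: choose the server with the smallest marginal power increase.
      # The linear power model and the numbers are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass
      class Server:
          name: str
          idle_power: float        # watts when idle
          max_power: float         # watts at full utilisation
          load: float              # current utilisation in [0, 1]
          per_process_load: float  # utilisation added by one more process

          def power(self, load):
              return self.idle_power + (self.max_power - self.idle_power) * min(load, 1.0)

          def marginal_power(self):
              return self.power(self.load + self.per_process_load) - self.power(self.load)

      def most_energy_efficient(servers):
          return min(servers, key=lambda s: s.marginal_power())

      servers = [
          Server("s1", idle_power=120, max_power=240, load=0.7, per_process_load=0.1),
          Server("s2", idle_power=90,  max_power=200, load=0.2, per_process_load=0.1),
      ]
      print(most_energy_efficient(servers).name)   # -> s2 (smaller marginal increase)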

  • A power saver scheduling algorithm using DVFS and DNS techniques in cloud computing datacentres   Order a copy of this article
    by Saleh Atiewi, Salman Yussof, Mohd Ezanee, Mutasem Zalloum 
    Abstract: Cloud computing is a fascinating and profitable area of modern distributed computing. Aside from providing millions of users with the means to use the offered services through their own computers, terminals and mobile devices, cloud computing presents an environment with low cost, a simple user interface and low power consumption by employing server virtualisation in its offered services (e.g., infrastructure as a service). The pool of virtual machines in a cloud computing datacentre (DC) must be driven by an efficient task scheduling algorithm to achieve high resource usage and good quality of service, thereby ensuring low energy consumption in the cloud computing environment. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage and frequency scaling (DVFS) technique. The proposed scheduling algorithm can efficiently reduce the energy consumed in executing jobs by increasing resource usage. The GreenCloud simulator is used to simulate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server usage, reduce energy consumption and reduce execution time.
    Keywords: DVFS; DNS; virtual machine; datacentre; cloud computing; power consumption.
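    The energy argument behind DVFS can be illustrated with the common textbook simplification that dynamic power grows roughly with the cube of the clock frequency when supply voltage is scaled down with frequency. The sketch below uses that simplification for illustration only; it is not GreenCloud's exact model.

      # Hedged sketch: energy of a fixed-size job under frequency scaling.
      # Power ~ k * f^3 (voltage scaled with frequency), time = cycles / f,
      # so energy falls roughly with f^2 when the deadline allows slowing down.

      def job_energy(cycles, freq_ghz, k=1.0):
          power = k * freq_ghz ** 3
          time = cycles / (freq_ghz * 1e9)
          return power * time

      cycles = 2e9
      print(job_energy(cycles, 2.0))   # full speed
      print(job_energy(cycles, 1.0))   # half speed: about 4x less energy, 2x longer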

  • Energy-efficient placement of virtual machines in cloud datacentres, based on fuzzy decision making   Order a copy of this article
    by Leili Salimian, Faramarz Safi-Esfahani 
    Abstract: Placement of virtual machines (VMs) on physical nodes as a sub-problem of dynamic VM consolidation has been driven mainly by energy efficiency and performance objectives. However, owing to varying workloads in VMs, placement of the VMs can cause a violation in the Service Level Agreement (SLA). In this paper, the VM placement is regarded as a bin packing problem, and a fuzzy energy-aware algorithm is proposed to estimate the host resource usage. The estimated resource usage is used to find the most energy-efficient host to reallocate the VMs. The fuzzy algorithm generates rules and membership functions dynamically to adapt to workload changes. The main objective of the proposed algorithm is to optimise the energy-performance trade-off. The effectiveness of the proposed algorithm is evaluated through simulations on the random and real-world PlanetLab workloads. Simulation results demonstrate that the proposed algorithm reduces the energy consumption, while it provides a high level of adherence to the SLAs.
    Keywords: dynamic VM consolidation; CPU usage; VM placement; fuzzy decision making.

  • An identity-based cryptographic scheme for cloud storage applications   Order a copy of this article
    by Manel Medhioub, Mohamed Hamdi 
    Abstract: The use of remote storage systems, notably cloud storage based services, is attracting growing interest. In fact, one of the factors that led to the popularity of cloud computing is the availability of storage resources provided at reduced cost. However, when outsourcing data to a third party, security issues become critical concerns, especially confidentiality, integrity, authentication, anonymity and resiliency. Addressing this challenge, this work provides a new approach to ensure authentication in cloud storage applications. Identity-based cryptosystems (IBC) have many advantages over certificate-based systems, such as simplified key management. This paper proposes an original ID-based authentication approach in which the cloud tenant is assigned the IBC Private Key Generator (PKG) function. Consequently, it can issue public elements for its users and can keep the resulting IBC secrets confidential. Moreover, in our scheme, the public key infrastructure is still used to establish trust relationships between the PKGs.
    Keywords: cloud storage; authentication; identity-based cryptography; security; Dropbox.

  • COBRA-HPA: a block generating tool to perform hybrid program analysis   Order a copy of this article
    by Thomas Huybrechts, Yorick De Bock, Haoxuan Li, Peter Hellinckx 
    Abstract: The Worst-Case Execution Time (WCET) of a task is an important value in real-time systems. This metric is used by the scheduler in order to schedule all tasks before their deadlines. However, the code and the hardware architecture have a significant impact on the execution time and thus on the WCET. Therefore, different analysis methodologies exist to determine the WCET, each with its own advantages and/or disadvantages. In this paper, a hybrid approach is proposed that combines the strengths of two common analysis techniques. This hybrid methodology tackles what can be described as 'the gap between a machine and a human in solving problems'. The two-layer hybrid model splits the code of tasks into so-called basic blocks. The WCET can be determined by performing execution time measurements on each block and statically combining those results. The COBRA-HPA framework presented in this paper is developed to facilitate the creation of hybrid block models and automate the measurement/analysis process. Additionally, an elaborated discussion of the implementation and performance of the framework is given. In conclusion, the results of the COBRA-HPA framework show a significant reduction in analysis effort while keeping sound WCET predictions for the hybrid method compared with the static and measurement-based approaches.
    Keywords: worst-case execution time; WCET; hybrid analysis methodology; COde Behaviour fRAmework; COBRA; basic block generator.
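    A minimal sketch of the "measure locally, combine statically" principle described above, on a toy program structure; the block times, loop bound and combination rule are illustrative assumptions, not COBRA-HPA's actual model.

      # Hedged sketch: per-block maxima from measurements, combined statically.

      # Observed execution times (e.g. in CPU cycles) per basic block.
      measurements = {
          "B1": [40, 42, 45],
          "B2": [100, 110, 95],   # loop body
          "B3": [10, 12, 11],
      }

      block_wcet = {b: max(times) for b, times in measurements.items()}

      # Toy structure: B1, then B2 executed at most 8 times, then B3.
      LOOP_BOUND_B2 = 8
      wcet_estimate = block_wcet["B1"] + LOOP_BOUND_B2 * block_wcet["B2"] + block_wcet["B3"]
      print(wcet_estimate)   # 45 + 8*110 + 12 = 937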

  • The big data mining forecasting model based on combination of improved manifold learning and deep learning   Order a copy of this article
    by Xiurong Chen, Yixiang Tian 
    Abstract: A central dilemma in big data processing is that extensive redundant information is mixed with useful information, which makes such data difficult to use effectively for building prediction models. In this work, we combine the manifold-learning dimension reduction algorithm LLE with the deep-learning feature extraction algorithm CDBN as the input of an RBF network, constructing a mixed-feature RBF forecast model. Because LLE depends heavily on a local neighbourhood that is not easy to determine, we use the kernel-function mapping idea of KECA to transform the original global nonlinear problem into a global linear one in a high-dimensional kernel feature space, removing redundant information more accurately and reducing data complexity. Because the network structure of CDBN is difficult to determine and its learning process lacks supervision, we use the kernel entropy information computed in KECA to determine the number of network layers and to supervise the learning process, which makes the extraction of deep features more effective in exploring the essential characteristics of big data. In the empirical part, we study foreign exchange rate time series; the results show that the improved KELE can reduce the dimensionality of sample data effectively, yielding a more optimised and reasonable representation of the original data and providing a basis for further learning and understanding of big data. The improved KECDBN extracts distributed features of the data more effectively, thereby improving the prediction accuracy of the mixed-feature RBF forecast model based on KELE and KECRBM.
    Keywords: locally linear embedding; continuous deep belief network; kernel entropy component analysis; kernel entropy linear embedding; kernel entropy continuous deep belief network.

  • Cost-aware hybrid cloud scheduling of parameter sweep calculations using predictive algorithms   Order a copy of this article
    by Stig Bosmans, Glenn Maricaux, Filip Van Der Schueren, Peter Hellinckx 
    Abstract: This paper investigates various techniques for scheduling parameter sweep calculations cost-efficiently in a hybrid cloud environment. The combination of a private and a public cloud environment integrates the advantages of being cost effective and having virtually unlimited scaling capabilities at the same time. To make an accurate estimate of the required resources, multiple prediction techniques are discussed. The estimate can be used to create an efficient scheduler that respects both deadline and cost. These findings have been implemented and tested in a Java-based cloud framework that operates on Amazon EC2 and OpenNebula. We also present a theoretical model to further optimise the cost by leveraging the Amazon Spot Market.
    Keywords: parameter sweep; cloud computing; Amazon AWS EC2; predictive algorithms; OpenNebula; machine learning; Amazon spot market.

  • Impact of software architecture on execution time: a power window TACLeBench case study   Order a copy of this article
    by Haoxuan Li, Paul De Meulenaere, Siegfried Mercelis, Peter Hellinckx 
    Abstract: Timing analysis is used to extract the timing properties of a system. Various timing analysis techniques and tools have been developed over the past decades. However, changes in hardware platform and software architecture introduced new challenges in timing analysis techniques. In our research, we aim to develop a hybrid approach to provide safe and precise timing analysis results. In this approach, we will divide the original code into smaller code blocks, then construct a timing model based on the information acquired by measuring the execution time of every individual block. This process can introduce changes in the software architecture. In this paper we use a multi-component benchmark to investigate the impact of software architecture on the timing behaviour of a system.
    Keywords: WCET; timing analysis; hybrid timing analysis; power window; embedded systems; TACLEBench; COBRA block generator.

  • Accountability management for multi-tenant cloud services   Order a copy of this article
    by Fatma Masmoudi, Mohamed Sellami, Monia Loulou, Ahmed Hadj Kacem 
    Abstract: The widespread adoption of multi-tenancy in the Software as a Service delivery model raises several data protection issues that could decrease tenants' trust. In this context, accountability can be used to strengthen the trust of tenants in the cloud by providing reassurance that the personal data hosted in the cloud are processed according to their requirements. In this paper, we propose an approach for the accountability management of multi-tenant cloud services that allows compliance checking of service behaviour against defined accountability requirements based on monitoring rules, detection of accountability violations otherwise, and post-violation analysis based on evidence. A tool suite is developed and integrated into a middleware to implement our proposal. Finally, experiments we have carried out show the efficiency of our approach against several criteria.
    Keywords: cloud computing; accountability; multi-tenancy; monitoring; accountability violation.

  • A big data approach for multi-experiment data management   Order a copy of this article
    by Silvio Pardi, Guido Russo 
    Abstract: Data sharing among similar experiments is limited by the use of ad hoc directory structures, data and metadata naming, as well as by the variety of data access protocols used in different computing models. The Open Data and Big Data paradigms provide the context to overcome the current heterogeneity problems. In this work, we present a study of a Global Storage Ecosystem designed to manage large and distributed datasets in the context of physics experiments. The proposed environment is entirely based on the open protocols HTTP/WebDAV, together with modern data searching technologies, in line with the Big Data paradigm. More specifically, the main goal is to aggregate multiple storage areas exported with open protocols and to simplify data retrieval operations, thanks to a set of search-engine-like tools based on Elasticsearch and the Apache Lucene library. This platform offers physicists an effective instrument to simplify multi-experiment data analysis by enabling data searching without knowing a priori the directory format or the data itself. As a proof of concept, we realised a prototype on the ReCaS supercomputing infrastructure, by aggregating and indexing the files stored in a set of existing storage systems.
    Keywords: big data; data federation.

  • A WLAN triage testbed based on fuzzy logic and its performance evaluation for different number of clients and throughput parameter   Order a copy of this article
    by Kosuke Ozera, Takaaki Inaba, Shinji Sakamoto, Kevin Bylykbashi, Makoto Ikeda, Leonard Barolli 
    Abstract: Many devices communicate over Wireless Local Area Networks (WLANs). The IEEE 802.11e standard for WLANs is an important extension of the IEEE 802.11 standard focusing on QoS that works with any PHY implementation. The IEEE 802.11e standard introduces EDCF and HCCA. Both schemes are useful for QoS provisioning to support delay-sensitive voice and video applications. EDCF uses the contention window to differentiate between high-priority and low-priority services. However, it does not consider the priority of users. In this paper, in order to deal with this problem, we propose a Fuzzy-based Admission Control System (FACS). We implemented a triage testbed using FACS and carried out an experiment. The experimental results show that the number of connected clients increases during the Avoid phase, but does not change during the Monitoring phase, and that the implemented testbed performs better than conventional WLANs.
    Keywords: WLAN triage; congestion control.

  • Enriching folksonomy for online videos   Order a copy of this article
    by Hiroki Sakaji, Masaki Kohana, Akio Kobayashi, Hiroyuki Sakai 
    Abstract: We propose a method that enriches folksonomy by using user comments on online videos. Folksonomy is a process in which users tag videos so that the videos can be searched for easily. On some video sharing websites, users can post both tags and comments on a video; the tags correspond to folksonomy. One such website is Nico Nico Douga; however, users cannot post more than 12 tags on a video. Therefore, some important tags that could be posted are sometimes missing. We present a method for acquiring some of these missing tags by choosing new tags that score well under a scoring method we developed. The method is based on information theory and a novel algorithm for estimating new tags using distributed databases that we constructed.
    Keywords: text mining; distributed database; information extraction.

  • A web platform for oral exam of programming class   Order a copy of this article
    by Masaki Kohana, Shusuke Okamoto 
    Abstract: We develop a system to support oral exams in a programming class. Our programming class has a problem with the waiting time for students. We assume that this waiting time can be reduced if a teacher can check the source code and the result of a program smoothly. A student uploads C++ source code and registers on a waiting list. The system compiles the code to an HTML file and a JavaScript file using Emscripten, so the compiled program can run in a web browser. A teacher can check the order of the students for the oral exam and, at the same time, see the source code and the result. Our system provides a waiting list for the oral exam to keep fairness, and the teacher is less likely to overlook invalid code. This helps a teacher to grade students correctly.
    Keywords: oral exam; programming; runtime environment.

  • A novel test case generation method based on program structure diagram   Order a copy of this article
    by Mingcheng Qu, XiangHu Wu, YongChao Tao, GuanNan Wang, ZiYu Dong 
    Abstract: At present, embedded software testing suffers from test lag, a lack of visualisation and low efficiency, and depends heavily on the test design of individual testers, so neither the quality of the test cases nor the quality of the testing can be guaranteed. In this paper, a software program structure diagram model is established and verified, and the test items are planned manually. Finally, we fill in the contents of the test items, generate the corresponding set of test cases according to the algorithm, and save them in a database for management. This method can improve the reliability and efficiency of testing, ensure visual tracking and management of the test cases, and provide strong support for the planning and generation of test cases.
    Keywords: program structure diagram; test item planning; test case generation.

  • A dynamic cloud service selection model based on trust and service level agreements in cloud computing   Order a copy of this article
    by Yubiao Wang, Junhao Wen, Quanwang Wu, Lei Guo, Bamei Tao 
    Abstract: For high-quality and trusted service selection problems, we propose a dynamic cloud service selection model (DCSSM). Cloud service resources are divided into different service levels by Service-Level Agreement Management (SLAM), and each SLAM instance manages some cloud service registration information. In order to make the final trust evaluation values more realistic, the model computes a comprehensive trust, which consists of direct trust and indirect trust. First, combined weights consist of a subjective weight and an objective weight; using rough sets, an analytic hierarchy process method is used to calculate the subjective weight. The direct trust also considers transaction time and transaction amount, yielding a more accurate direct trust. Second, indirect trust considers the similarity of users' trust evaluations, and it contains the indirect trust of friends and the indirect trust of strangers. Finally, when a transaction is completed, a dynamic update of direct trust is performed. The model is simulated using CloudSim and compared with three other methods; the experimental results show that the DCSSM performs better than the other three methods.
    Keywords: dynamic cloud service; trust; service-level agreement; selection model; combining weights.

  • Research on regression test method based on multiple UML graphic models   Order a copy of this article
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most existing graph-based regression testing schemes target a single given UML diagram type and are not flexible in regression testing. This paper proposes a regression testing method applicable to modifications of a variety of UML graphical models. Using domain-of-influence analysis of the effect of a modification on the UML graphical model, it determines which parts of the modified model structure must be retested and the range of affected generated test cases, and finally regenerates test cases automatically. This method has been shown to achieve a high logical coverage rate. Because it fully considers the various kinds of dependency, it does not restrict the types of UML model modification, and it offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • E-XY: an entropy-based XY routing algorithm   Order a copy of this article
    by Akash Punhani, Pardeep Kumar, Nitin Chanderwal 
    Abstract: Communication between cores or processing elements has become an important issue owing to the continuous increase in their numbers on a chip. In recent years, Network on Chip (NoC) has been used to handle these communication issues. The most common topology for NoC is the mesh, and the XY routing algorithm is the most commonly used algorithm for this topology. This routing algorithm is popular because of its simplicity and deadlock prevention capability. Its major drawback is that it is unable to handle a high traffic load, which has led to the development of adaptive routing algorithms. An adaptive routing algorithm requires information about the adjacent routers before routing packets in order to avoid congestion. Such information is transferred through extra dedicated links, as the normal links may be congested, and delays in sending this information add traffic to the already congested links. In this paper, an E-XY (entropy-based XY) routing algorithm is proposed that generates information about the adjacent routers locally, based on previously communicated packets. Experiments were carried out on an 8x8 mesh topology simulated using OMNeT++ 4.4.1 with HNOCS. Different types of traffic have been considered, including uniform, bit complement, neighbour and tornado. The proposed algorithm is compared with other routing algorithms, including XY, IX/Y and Odd-Even. Results demonstrate that the proposed algorithm is comparable with the XY routing algorithm up to a load factor of 0.8 and performs better than the XY, IX/Y and Odd-Even routing algorithms as the load increases. The proposed algorithm also helps to reduce hardware cost, as extra links and the router ports to connect them are not required. Hence, the proposed algorithm is a better option for communication in a parallel computing environment.
    Keywords: routing algorithm; adaptive; parallel communication; router architecture; maximum entropy model.
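    For reference, a minimal sketch of the deterministic XY baseline that E-XY extends is given below; the entropy-based estimate of adjacent-router state built from previously communicated packets is not reproduced here, and the coordinates are illustrative.

      # Hedged sketch: deterministic XY routing in a 2D mesh. A packet is first
      # routed along X until the destination column is reached, then along Y.

      def xy_next_hop(cur, dst):
          """cur, dst: (x, y) router coordinates in a 2D mesh."""
          cx, cy = cur
          dx, dy = dst
          if cx != dx:
              return (cx + (1 if dx > cx else -1), cy)   # move along X first
          if cy != dy:
              return (cx, cy + (1 if dy > cy else -1))   # then along Y
          return cur                                     # arrived

      hop = (0, 0)
      path = [hop]
      while hop != (3, 2):
          hop = xy_next_hop(hop, (3, 2))
          path.append(hop)
      print(path)   # visits (1,0), (2,0), (3,0), (3,1) on the way to (3,2)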

  • Involving users in energy conservation: a case study in scientific clouds   Order a copy of this article
    by David Guyon, Anne-Cécile Orgerie, Christine Morin, Deb Agarwal 
    Abstract: Services offered by cloud computing are convenient to users for reasons such as their ease of use, flexibility and financial model. Yet the data centres used for their execution are known to consume massive amounts of energy. The growing resource utilisation that has followed the success of the cloud highlights the importance of reducing its energy consumption. This paper investigates a way to reduce the footprint of HPC cloud users by varying the size of the virtual resources they request. We analyse the influence of concurrent applications with different resource sizes on the system's energy consumption. Simulation results show that larger resources are more energy-consuming, regardless of the faster completion of applications. Although smaller resources offer energy savings, it is not always favourable in terms of energy to reduce the size too much. High energy savings depend on the distribution of user profiles.
    Keywords: cloud computing; green computing; HPC applications; energy savings; users' involvement.

  • Distributed and multi-core version of k-means algorithm   Order a copy of this article
    by Ilias Savvas, Dimitrios Tselios, Georgia Garani 
    Abstract: Nowadays, huge quantities of data are generated by billions of machines and devices. Numerous methods have been employed in order to make use of this valuable resource; some of them are altered versions of established algorithms. One of the most seminal methods for mining data sources is clustering, and k-means is a key algorithm that clusters data according to a set of attributes. However, its main shortcoming is its high computational complexity, which makes k-means very inefficient on big datasets. Although k-means is a very widely used algorithm, a practical distributed variant that also exploits the multi-core power of contemporary machines has not yet been established. In this work, a three-phase distributed/multi-core version of k-means and an analysis of its results are presented. The experimental results obtained are in line with the theoretical outcomes and prove the correctness, efficiency and scalability of the proposed technique.
    Keywords: parallel algorithm; clustering; multi-core; distributed; k-means; OpenMP; MPI.
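    A minimal sketch of the decomposition idea behind a distributed, multi-core k-means step follows. It uses Python's multiprocessing purely for illustration; the paper's three-phase MPI/OpenMP design is richer. Each worker assigns its chunk of points to the nearest centroid and returns partial sums, and the master merges them into new centroids.

      # Hedged sketch: one parallel iteration of k-means (map partial sums, reduce).

      from multiprocessing import Pool
      import numpy as np

      def partial_step(args):
          points, centroids = args
          dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
          labels = dists.argmin(axis=1)
          k, dim = centroids.shape
          sums, counts = np.zeros((k, dim)), np.zeros(k)
          for j in range(k):
              mask = labels == j
              sums[j] = points[mask].sum(axis=0)
              counts[j] = mask.sum()
          return sums, counts

      def kmeans_step(points, centroids, workers=4):
          chunks = np.array_split(points, workers)
          with Pool(workers) as pool:
              results = pool.map(partial_step, [(c, centroids) for c in chunks])
          sums = sum(r[0] for r in results)
          counts = sum(r[1] for r in results)
          new = centroids.copy()
          nonzero = counts > 0                      # keep empty clusters in place
          new[nonzero] = sums[nonzero] / counts[nonzero][:, None]
          return new

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pts = rng.normal(size=(10000, 2))
          print(kmeans_step(pts, pts[:3].copy()))   # centroids after one step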

  • A data replication strategy for document oriented NoSQL databases   Order a copy of this article
    by Khaoula Tabet, Riad Mokadem, Mohamed Ridda Laouar 
    Abstract: Cloud providers aim to maximise their profits while satisfying tenant requirements, e.g. performance. Relational database management systems face many obstacles in achieving these needs; therefore, the use of NoSQL databases becomes necessary when dealing with heterogeneous workloads and voluminous data. In this context, we propose a new replication strategy that balances the workload of nodes and dynamically adjusts the number of replicas while taking the provider's profit into account. The resulting analysis shows that the proposed strategy reduces resource consumption, which improves the profit of the provider while satisfying the performance service level objective of the tenants.
    Keywords: cloud environment; NoSQL databases; data replication; provider profit; performance.

  • Logic programming as a service in multi-agent systems for the Internet of Things   Order a copy of this article
    by Roberta Calegari, Enrico Denti, Stefano Mariani, Andrea Omicini 
    Abstract: The widespread diffusion of low-cost computing devices, along with improvements of cloud computing platforms, is paving the way towards a whole new set of opportunities for Internet of Things (IoT) applications and services. Varying degrees of intelligence are required for supporting adaptation and self-management: yet, they should be provided in a lightweight, easy to use and customise, highly-interoperable way. In this paper we explore Logic Programming as a Service (LPaaS) as a novel and promising re-interpretation of distributed logic programming in the IoT era. After introducing the reference context and motivating scenarios of LPaaS as an effective enabling technology for intelligent IoT, we define the LPaaS general architecture, and discuss two different prototype implementations - as a web service and as an agent in a multi-agent system (MAS), both built on top of the tuProlog system, which provides the required interoperability and customisation. We finally showcase the LPaaS potential through two case studies, designed as simple examples of the motivating scenarios.
    Keywords: IoT; logic programming; multi-agent systems; pervasive computing; LPaaS; artificial intelligence; interoperability.

  • Cognitive workload management on globally interoperable network of clouds   Order a copy of this article
    by Giovanni Morana, Rao Mikkilineni, Surendra Keshan 
    Abstract: A new computing paradigm using distributed intelligent managed elements (DIME) and the DIME network architecture (DNA) is used to demonstrate a globally interoperable public and private cloud network deploying cloud-agnostic workloads. The workloads are cognitive and capable of autonomously adjusting their structure to maintain the desired quality of service. DNA is designed to provide a control architecture for workload self-management of non-functional requirements, addressing rapid fluctuations either in workload demand or in available resources. Using DNA, a transaction-intensive three-tier workload is migrated from a physical server to a virtual machine hosted in a public cloud without interrupting the service transactions. After migration, cloud-agnostic inter-cloud and intra-cloud auto-scaling, auto-failover and live migration are demonstrated, again without disrupting the user experience or losing transactions.
    Keywords: cloud computing; datacentre; manageability; DIME; DIME network architecture; cloud agnostic; cloud native.

  • Towards autonomous creation of service chains on cloud markets   Order a copy of this article
    by Benedikt Pittl, Irfan Ul-Haq, Werner Mach, Erich Schikuta 
    Abstract: Today, cloud services such as virtual machines are traded directly at fixed prices between consumers and providers on platforms such as Amazon EC2. The recent development of Amazon's EC2 spot market shows that dynamic cloud markets are gaining popularity. Hence, autonomous multi-round bilateral negotiations, also known as bazaar negotiations, are a promising approach for trading cloud services on future cloud markets, and they play a vital role in composing service chains. Based on a formal description, we describe such service chains and derive different negotiation types. We implement them in a simulation environment and evaluate our approach by executing different market scenarios, for which we developed three negotiation strategies for cloud resellers. Our simulation results show that cloud resellers, as well as their negotiation strategies, have a significant impact on the resource allocation of cloud markets: very high as well as very low markups reduce the profit of a reseller.
    Keywords: cloud computing; cloud marketplace; IaaS; bazaar negotiation; SLA negotiation; cloud service chain; cloud reseller; multi-round negotiation; cloud economics.

  • OFQuality: a quality of service management module for software-defined networking   Order a copy of this article
    by Felipe Volpato, Madalena Pereira Da Silva, Mario Antonio Ribeiro Dantas 
    Abstract: The exponential growth in the number of online devices has been causing difficulties for network management and maintenance. At the same time, applications are getting richer in terms of content and quality, thus requiring more and more network guarantees. To overcome this issue, new network approaches, such as Software-Defined Networking (SDN), have emerged. OpenFlow, one of the most widely used protocols for SDN, is not by itself enough to provide QoS based on queue prioritisation. In this paper, we propose the architecture of a controller module that implements the Open vSwitch Database Management Protocol (OVSDB) in order to provide QoS management with queue prioritisation. Our module differs from others because it features mechanisms to test and facilitate user configuration. Our experiments showed that the module behaved as expected, introducing only small delays when managing switch elements, therefore making it a useful tool for QoS management in SDN.
    Keywords: software-defined networking; SDN; OpenFlow; quality of service; QoS; Open vSwitch; OVS; Open vSwitch database; OVSDB; management plane.

  • Cache replication for information centric networks through programmable networks   Order a copy of this article
    by Erick B. Nascimento, Edward David Moreno, Douglas D. J. De Macedo 
    Abstract: Software Defined Networking (SDN) is a new approach that decouples control from the data transmission function and makes the network directly programmable. In parallel, Information Centric Networking (ICN) promotes the use of information through in-network caching and multipart communication. Owing to their programmable characteristics, these designs are developed to add flexibility, solve traffic problems and transfer content through a scalable network structure with simple management. Beyond decoupling, the premise of SDN that complements ICN is the flexibility of network configuration to reduce segment overhead caused by the retransmission of duplicate files over the same segment. Based on this, an architecture is designed to provide reliable content, which can be replicated in the network (Trajano, 2016). The ICN architecture of this proposal stores the information in a logical volume for later access and can connect to remote controllers to store files reliably in cloud environments.
    Keywords: software defined networking; information centric network; programmability; flexibility; management; storage; controller.

  • Winning the war on terror: using social networking tools and GTD to analyse the regularity of terrorism activities   Order a copy of this article
    by Xuan Guo, Fei Xu, Zhi-ting Xiao, Hong-guo Yuan, Xiaoyuan Yang 
    Abstract: In order to grasp the temporal and spatial characteristics and activity patterns of terrorist attacks in China, and thereby formulate effective counter-terrorism strategies, two different kinds of intelligence source were analysed by means of social network analysis and mathematical statistics. First, using the social network analysis tool ORA, we build a terrorist activity meta-network from text information, extract the four categories of persons, places, organisations and times, and analyse the characteristics of the key nodes of the network; the meta-network is then decomposed into four binary subnets (person-organisation, person-location, organisation-location and organisation-time) to analyse the temporal and spatial characteristics of terrorist activities. Next, the GTD dataset is used to analyse the characteristics of terrorist attacks in China from 1989 to 2015, and the geo-spatial and temporal distributions of terrorist events are summarised. Combined with data visualisation, the earlier results of the social network analysis of open source text are verified and compared. Finally, the paper puts forward some suggestions on counter-terrorism prevention strategy in China.
    Keywords: social network analysis; GTD; meta-network; ORA; counter-terrorism; terrorism activities.

  • Model-based deployment of secure multi-cloud applications   Order a copy of this article
    by Valentina Casola, Alessandra De Benedictis, Massimiliano Rak, Umberto Villano, Erkuden Rios, Angel Rego, Giancarlo Capone 
    Abstract: The wide diffusion of cloud services, offering functionalities in different application domains and addressing different computing and storage needs, opens up the possibility of building multi-cloud applications that rely upon heterogeneous services offered by multiple cloud service providers (CSPs). This flexibility not only enables efficient usage of existing resources, but also makes it possible, in some cases, to cope with specific requirements in terms of security and performance. On the downside, resorting to multiple CSPs requires a huge amount of time and effort for application development. The MUSA framework enables a DevOps approach to developing multi-cloud applications with the desired Security Service Level Agreements (SLAs). This paper describes the MUSA Deployer models and tools, which aim at decoupling multi-cloud application modelling and development from application deployment and cloud service provisioning. With the MUSA tools, application designers and developers are able to easily express and evaluate security requirements and, subsequently, to deploy the application automatically, by acquiring cloud services and by installing and configuring software components on them.
    Keywords: cloud security; multi-cloud deployment; automated deployment.

  • Improving the MXFT scheduling algorithm for a cloud computing context   Order a copy of this article
    by Paul Moggridge, Na Helian, Yi Sun, Mariana Lilley, Vito Veneziano, Martin Eaves 
    Abstract: In this paper, the Max-min Fast Track (MXFT) scheduling algorithm is improved and compared against a selection of popular algorithms. The improved versions of MXFT are called Min-min Max-min Fast Track (MMMXFT) and Clustering Min-min Max-min Fast Track (CMMMXFT). The key difference is the use of min-min for the fast track. Experimentation revealed that, despite min-min's characteristic of prioritising small tasks at the expense of overall makespan, the overall makespan was not adversely affected, and the benefits of prioritising small tasks were identified in MMMXFT. Experiments were conducted using a simulator, with the exception of one real-world experiment, which identified the challenges faced by algorithms that rely on accurate execution time prediction.
    Keywords: cloud computing; scheduling algorithms; max-min.
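    For readers unfamiliar with the underlying heuristics, a minimal sketch of classic min-min and max-min scheduling, which MXFT and MMMXFT build on, is shown below. The fast-track queue itself is not modelled, and the execution-time matrix is illustrative.

      # Hedged sketch: min-min / max-min batch scheduling heuristics.

      def schedule(etc, pick_largest):
          """etc[t][m] = execution time of task t on machine m."""
          n_machines = len(etc[0])
          ready = [0.0] * n_machines            # time at which each machine is free
          pending = set(range(len(etc)))
          order = []
          while pending:
              # best machine and completion time for every pending task
              best = {t: min(range(n_machines), key=lambda m: ready[m] + etc[t][m])
                      for t in pending}
              comp = {t: ready[best[t]] + etc[t][best[t]] for t in pending}
              # max-min picks the largest such task, min-min the smallest
              choose = (max if pick_largest else min)(pending, key=comp.get)
              m = best[choose]
              ready[m] = comp[choose]
              pending.remove(choose)
              order.append((choose, m))
          return order, max(ready)              # assignment and makespan

      etc = [[5, 7], [1, 2], [8, 6], [2, 2], [9, 12], [3, 4]]
      print(schedule(etc, pick_largest=True))    # max-min
      print(schedule(etc, pick_largest=False))   # min-min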

  • Novel algorithms, emergent approaches and applications for distributed computing   Order a copy of this article
    by Ilias Savvas, Douglas Dyllon Jeronimo De Macedo 
    Abstract: The track on the Convergence of Distributed Clouds, Grids and their Management (CDCGM) started in 2009 to discuss the evolution of cloud computing with respect to infrastructure providers, who began creating next-generation hardware that is service friendly, and service developers, who began embedding business service intelligence in their computing infrastructure. Over recent years, the state of the art of cloud computing architecture has evolved towards greater complexity by piling new layers of management on top of the many layers that already exist. Moreover, in the last ten years, the problem of scaling and managing distributed applications in the cloud has taken on a new dimension, especially regarding tolerance to workload variation and proactive scaling of available computing resource pools, with particular emphasis on big data management and recent technologies. This article presents significant extensions of selected papers from CDCGM 2017. These papers describe advances in current distributed and cloud computing practice, dealing with modern techniques of parallel computing, cognitive workload management for cloud computing, emergent cloud XaaS, information service networks and the IoT.
    Keywords: cloud computing; grid computing; SLA; data science.

  • An intelligent water drops based approach for workflow scheduling with balanced resource utilisation in cloud computing   Order a copy of this article
    by Mala Kalra, Sarbjeet Singh 
    Abstract: The problem of finding optimal solutions for scheduling scientific workflows in a cloud environment has been thoroughly investigated using various nature-inspired algorithms. These solutions minimise the execution time of workflows, but they may result in severe load imbalance among the Virtual Machines (VMs) in cloud data centres. Cloud vendors desire proper utilisation of all the VMs in their data centres so that the overall system performs efficiently. Thus, load balancing of VMs becomes an important aspect of scheduling tasks in a cloud environment. In this paper, we propose an approach based on the Intelligent Water Drops (IWD) algorithm to minimise the execution time of workflows while balancing the resource utilisation of VMs in the cloud computing environment. The proposed approach is compared with a variety of well-known heuristic and meta-heuristic techniques using three real-world scientific workflows, and experimental results show that the proposed algorithm performs better than these existing techniques in terms of makespan and load balancing.
    Keywords: workflow scheduling; intelligent water drops algorithm; cloud environment; evolutionary computation; directed acyclic graphs; load balancing; balanced resource utilisation; optimisation technique.

  • The energy consumption laxity based algorithm to perform computation processes in virtual machine environments   Order a copy of this article
    by Tomoya Enokido, Dilawaer Duolikun, Makoto Takizawa 
    Abstract: In information systems, server cluster systems equipped with virtual machines are widely used to realise scalable, high-performance computing systems such as cloud computing systems. In order to satisfy application requirements, such as a deadline constraint for each application process, the processing loads of the virtual machines in a server cluster have to be balanced with one another. In addition to achieving the performance objectives, the total electric energy consumed by a server cluster to perform application processes has to be reduced, as discussed in green computing. In this paper, the energy consumption laxity based (ECLB) algorithm is proposed to allocate computation-type application processes to virtual machines in a server cluster so that both the total electric energy of the server cluster and the response time of each process can be reduced. We evaluate the ECLB algorithm in terms of the total electric energy of a server cluster and the response time of each process, compared with the basic round-robin (RR) algorithm. Evaluation results show that both the average total electric energy of a server cluster and the average response time of each process can be reduced by the ECLB algorithm compared with the RR algorithm.
    Keywords: green computing; virtual machines; energy-efficient server cluster systems; power consumption models; energy-efficient load-balancing algorithms.

  • A new bimatrix game model with fuzzy payoffs in credibility space   Order a copy of this article
    by Cunlin Li, Ming Li 
    Abstract: Uncertainty theory, based on expert evaluation and non-additive measures, is introduced to explore bimatrix games with uncertain payoffs. The uncertainty space based on the axioms of uncertain measures is presented, some basic characteristics of uncertain events are described, and the expected values of uncertain variables in the uncertainty space are given. A new model of the bimatrix game with uncertain payoffs is established and its equivalent strategy is given. Then, we develop an expected-value model of uncertain bimatrix games and define the uncertain equilibrium strategy of such games. By using the expected values of uncertain variables, we transform the model into a linear programming problem, and the expected equilibrium strategy of the uncertain bimatrix game is identified by solving linear equations.
    Keywords: bimatrix game; uncertain measure; expected Nash equilibrium strategy.

  • Data analysis of CSI 800 industry index by using factor analysis model   Order a copy of this article
    by Chunfen Xu 
    Abstract: This paper studies the linkage among industries based on the CSI 800 industry indices, which provide a mass of complicated data for industry research. Factor analysis, a useful data analysis tool, allows researchers to investigate concepts that are not easily measured directly by collapsing a large number of variables into a few interpretable underlying factors. Firstly, data on ten industries in the period from September 2009 to March 2017 are collected from the CSI 800 Index and correlation analyses are conducted. Secondly, this paper establishes an appropriate evaluation system and then uses factor analysis for dimension reduction. Finally, some characteristics and trends in the various industries are obtained.
    Keywords: CSI 800 index; correlation analysis; factor analysis; dimension reduction.
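    A minimal sketch of the dimension-reduction step described above, assuming scikit-learn's FactorAnalysis and synthetic monthly returns; the paper's evaluation system and actual CSI 800 data are not reproduced, and the industry names are placeholders.

      # Hedged sketch: fit a two-factor model and inspect industry loadings.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(1)
      industries = ["energy", "materials", "industrials", "financials", "IT"]
      returns = pd.DataFrame(rng.normal(size=(90, len(industries))), columns=industries)

      fa = FactorAnalysis(n_components=2, random_state=0)
      scores = fa.fit_transform(returns)              # factor scores per month
      loadings = pd.DataFrame(fa.components_.T, index=industries,
                              columns=["factor_1", "factor_2"])
      print(loadings.round(2))                        # how each industry loads on the factors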

  • A real-time matching algorithm using sparse matrix
    by Aomei Li, Wanji Jiang, Po Ma, Jiahui Guo, Dehui Dai 
    Abstract: Aiming at the shortcomings of traditional image feature matching algorithms, which are computationally expensive and time-consuming, this paper presents a real-time feature matching algorithm. Firstly, the algorithm constructs sparse matrices with the Laplace operator and carries out Laplace weighting. The feature points are then detected by the FAST feature point detection algorithm. The SURF algorithm is used to assign an orientation and a descriptor to each feature for rotation invariance, and a Gaussian pyramid is used to achieve scale invariance. Secondly, candidate matches are extracted by brute-force matching, and the matched pairs are purified using the Hamming distance and a symmetry check. Finally, the RANSAC algorithm is used to obtain the optimal matrix, and an affine invariance check is applied to the matching result. The algorithm is compared with classical feature point matching algorithms, which shows that the method achieves high real-time performance while guaranteeing matching precision.
    Keywords: sparse matrices; Laplace weighted; FAST; SURF; symmetry method; affine invariance check.
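    A minimal sketch of a comparable detect/describe/match/verify pipeline using OpenCV follows. It mirrors the structure in the abstract (FAST detection, brute-force matching with a symmetry check, RANSAC verification) but substitutes stock ORB descriptors for the paper's Laplace-weighted SURF variant and a homography for its affine check; the file names are placeholders.

      # Hedged sketch: FAST keypoints, ORB descriptors, cross-checked Hamming
      # matching and RANSAC outlier rejection (not the paper's exact method).

      import cv2
      import numpy as np

      img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

      fast = cv2.FastFeatureDetector_create(threshold=20)
      orb = cv2.ORB_create()                       # descriptor stand-in for SURF

      kp1 = fast.detect(img1, None)
      kp2 = fast.detect(img2, None)
      kp1, des1 = orb.compute(img1, kp1)
      kp2, des2 = orb.compute(img2, kp2)

      # Brute-force Hamming matching; crossCheck enforces the symmetry criterion.
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = matcher.match(des1, des2)

      src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

      # RANSAC rejects outlier correspondences and yields the best-fitting model.
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print(f"{int(mask.sum())} inliers out of {len(matches)} matches")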

  • How do checkpoint mechanisms and power infrastructure failures impact on cloud applications?   Order a copy of this article
    by Guto Leoni Santos, Demis Gomes, Djamel Sadok, Judith Kelner, Elisson Rocha, Patricia Takako Endo 
    Abstract: With the growth of cloud computing usage by commercial companies, providers of this service are looking for ways to improve and estimate the quality of their services. Failures in the power subsystem represent a major risk of cloud data centre unavailability at the physical level. At the same time, software-level mechanisms (such as application checkpointing) can be used to maintain application consistency after a downtime and also to improve availability. However, understanding how failures at the physical level impact application availability, and how software-level mechanisms can improve data centre availability, is a challenge. This paper analyses the impact of power subsystem failures on cloud application availability, as well as the impact of checkpoint mechanisms used to recover the system from software-level failures. To this end, we propose a set of stochastic models to represent the cloud power subsystem, the cloud application, and the checkpoint-based recovery mechanisms. To evaluate data centre performance, we also model request arrivals and processing times as a queue, and feed this model with real data acquired from experiments conducted on a real testbed. To determine which components of the power infrastructure have the greatest impact on data centre availability, we perform a sensitivity analysis. The results of the stationary analysis show that the choice of a given type of checkpoint mechanism does not have a significant impact on the observed metrics. On the other hand, improving the power infrastructure yields performance and availability gains.
    Keywords: cloud data centre; checkpoint mechanisms; availability; performance; stochastic models.

  • A review of intrusion detection approaches in cloud security systems   Order a copy of this article
    by Satyapal Singh, Mohan Kubendiran, Arun Kumar Sangaiah 
    Abstract: Cloud computing is a technology that allows the delivery of services, storage, network, computing power, etc., over the internet. The on-demand and ubiquitous nature of this technology makes it easy to use and widely available. However, for this very reason, cloud services, platforms and infrastructure are targets for attackers. The most common attack is one that tries to take control of one or more virtual machine instances running in the cloud. Since cloud and networking technologies go hand in hand, it is essential to keep such malicious attempts at bay. Intrusion detection systems are software that can detect potential intrusions within or outside a secure cloud environment. In this paper, a study is made of different intrusion detection systems that have been previously proposed in order to mitigate, or in the best case eliminate, the threats posed by such intrusions.
    Keywords: cloud computing; virtualisation; intrusion detection; networking; cyber attacks; Blockchain.

  • A dataflow runtime environment and static scheduler for edge, fog and in-situ computing   Order a copy of this article
    by Caio B. G. Carvalho, Victor Da Cruz Ferreira, Felipe Maia Galvão França, Cristiana Barbosa Bentes, Gabriele Mencagli, Tiago Assumpção De Oliveira Alves, Alexandre Da Costa Sena, Leandro Augusto Justen Marzulo 
    Abstract: In the dataflow computation model, instructions or tasks are executed according to data dependencies, instead of following program order, thus allowing natural exploitation of parallelism. A wide variety of dataflow-based solutions, in different flavours and abstraction levels (from processors to runtime libraries), have been proposed as interesting alternatives for harnessing the potential of modern computing systems. Sucuri is a dataflow library for Python that allows users to specify their application as a dependency graph and execute it transparently on clusters of multicores, while taking care of scheduling issues. Recent trends in fog and in-situ computing assume that storage and network devices will be equipped with processing elements that usually have lower power consumption and performance. An important decision in such systems is whether to move data to traditional processors (paying the communication costs), or to perform computation where the data sits, using a potentially slower processor. Hence, runtime environments that deal with that trade-off are essential. This work presents a study of the different factors that should be considered when running dataflow applications in edge/fog/in-situ environments. We use Sucuri to manage the execution in a small system with a regular PC and a Parallella board, emulating smart storage (an edge/fog/in-situ device). Experiments performed with a set of benchmarks show how data transfer size, network latency and packet loss rates affect execution time when outsourcing computation to the smart storage. A static scheduling solution is then presented, allowing Sucuri to avoid outsourcing when there would be no performance gain.
    Keywords: dataflow computing; edge computing; fog computing; scheduling techniques; smart storage.
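
    The trade-off described in the abstract above (ship the data to a fast processor and pay the communication cost, or compute in place on a slower device) can be captured by a very small cost model. The sketch below is a generic illustration with assumed parameters, not part of the Sucuri API; a static scheduler could use a comparison of this kind to decide whether to outsource a task to the smart storage.

# Toy cost model for the offload decision (all parameters are assumptions).

def time_local(work_flop, slow_flops):
    """Run where the data sits, on the slower edge/in-situ processor."""
    return work_flop / slow_flops

def time_remote(work_flop, fast_flops, data_bytes, bandwidth_bps, latency_s, loss_rate):
    """Ship the data to the fast host, inflating transfer time for retransmissions."""
    transfer = latency_s + (data_bytes * 8) / (bandwidth_bps * (1.0 - loss_rate))
    return transfer + work_flop / fast_flops

work   = 2e9          # floating-point operations in the task
data   = 50e6         # bytes that would have to be moved
local  = time_local(work, slow_flops=5e9)
remote = time_remote(work, fast_flops=50e9, data_bytes=data,
                     bandwidth_bps=100e6, latency_s=0.005, loss_rate=0.01)

print(f"in-situ: {local:.2f} s, offload: {remote:.2f} s ->",
      "compute in place" if local < remote else "offload")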

  • A novel web image retrieval method: bagging weighted hashing based on local structure information   Order a copy of this article
    by Li Huanyu 
    Abstract: Hashing is widely used in approximate nearest neighbour (ANN) search problems, especially in web image retrieval. An excellent hashing algorithm can help users search and retrieve their web images more conveniently, quickly and accurately. In order to overcome several deficiencies of ITQ in the image retrieval problem, we use ensemble learning to deal with the image retrieval problem. An elastic ensemble framework is proposed to guide the hashing design, and three important principles are proposed, namely high precision, high diversity and optimal weight prediction. Based on this, we design a novel hashing method called BWLH. In BWLH, first, the local structure information of the original data is extracted to construct the local structure data, so as to improve the similarity-preserving ability of the hash bits. Second, a weighted matrix is used to balance the variance of different bits. Third, bagging is exploited to expand diversity across different hash tables. Extensive experiments show that BWLH handles the image retrieval problem effectively, and performs better than several state-of-the-art methods at the same hash code length on the CIFAR-10 and LabelMe datasets. Finally, search-by-image, a web-based use case scenario of the proposed BWLH, is presented to detail how the method can be used in a web-based environment.
    Keywords: web image retrieval; hashing; ensemble learning; local structure information; weighted.
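
    As a loose illustration of the ingredients named in the abstract above (hash bits learned from data, per-bit weights, and bagging over several hash tables), the sketch below builds random-projection hash tables over bootstrap samples and scores candidates by a weighted Hamming distance. It is a generic stand-in with assumed parameters and synthetic data, not the BWLH algorithm itself.

# Generic bagged, weighted hashing sketch (not the BWLH method itself).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # hypothetical image features
query = rng.normal(size=64)

def train_table(data, n_bits, rng):
    """One hash table: random projections; per-bit weights learned on a bootstrap sample."""
    W = rng.normal(size=(data.shape[1], n_bits))              # projection matrix
    boot = data[rng.integers(0, len(data), size=len(data))]   # bagging sample
    weights = (boot @ W > 0).std(axis=0) + 1e-6               # balanced bits weigh more
    codes = (data @ W > 0).astype(np.uint8)                   # codes for the whole database
    return W, codes, weights

def weighted_hamming(code_a, codes, weights):
    return ((code_a != codes) * weights).sum(axis=1)

tables = [train_table(X, n_bits=32, rng=rng) for _ in range(5)]

# Aggregate weighted Hamming distances over all tables and rank the database.
scores = np.zeros(len(X))
for W, codes, weights in tables:
    q_code = (query @ W > 0).astype(np.uint8)
    scores += weighted_hamming(q_code, codes, weights)
print("top-5 neighbours:", np.argsort(scores)[:5])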

Special Issue on: Recent Developments in Parallel, Distributed and Grid Computing for Big Data

  • GPU accelerated video super-resolution using transformed spatio-temporal exemplars   Order a copy of this article
    by Chaitanya Pavan Tanay Kondapalli, Srikanth Khanna, Chandrasekaran Venkatachalam, Pallav Kumar Baruah, Kartheek Diwakar Pingali, Sai Hareesh Anamandra 
    Abstract: Super-resolution (SR) is the method of obtaining a high-resolution (HR) image or image sequence from one or more low-resolution (LR) images of a scene. Super-resolution has been an active area of research in recent years owing to its applications in defence, satellite imaging, video surveillance and medical diagnostics. In a broad sense, SR techniques can be classified into external database driven and internal database driven approaches. The training phase in the first approach is computationally intensive as it learns the LR-HR patch relationships from huge datasets, and the test procedure is relatively fast. In the second approach, the super-resolved image is constructed directly from the available LR image, eliminating the need for any learning phase, but the testing phase is computationally intensive. Recently, Huang et al. (2015) proposed a transformed self-exemplar internal database technique which takes advantage of the fractal nature of an image by expanding the patch search space using geometric variations. This method fails if there is no patch redundancy within and across image scales, and also if there is a failure in detecting vanishing points (VP), which are used to determine the perspective transformation between the LR image and its subsampled form. In this paper, we expand the patch search space by taking advantage of the temporal dimension of image frames in the scene video and also use an efficient VP detection technique by Lezama et al. (2014). We are thereby able to successfully super-resolve even the failure cases of Huang et al. (2015) and achieve an overall improvement in PSNR. We also focus on reducing the computation time by exploiting the embarrassingly parallel nature of the algorithm. We achieved a speedup of 6 on multi-core, up to 11 on GPU, and around 16 on a hybrid platform of multi-core and GPU by parallelising the proposed algorithm. Using our hybrid implementation, we achieved a 32x super-resolution factor in limited time. We also demonstrate superior results for the proposed method compared with current state-of-the-art SR methods.
    Keywords: super-resolution; self-exemplar; perspective geometry; temporal dimension; vanishing point; GPU; multicore.

  • Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation   Order a copy of this article
    by Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, Emmanuel S. Pilli 
    Abstract: In a cloud environment, overload leads to performance degradation and Service Level Agreement (SLA) violations, while underload results in inefficient use of resources and needless energy consumption. Dynamic Virtual Machine (VM) consolidation is considered an effective solution to deal with both overload and underload problems. However, dynamic VM consolidation is not a trivial solution, as it can also lead to violation of the negotiated SLA owing to the runtime overheads of VM migration. Further, dynamic VM consolidation approaches need to answer several questions, such as (i) when to migrate a VM, (ii) which VM to migrate, and (iii) where to migrate the selected VM. In this work, efforts are made to develop a comprehensive approach that achieves better solutions to these problems. In the proposed approach, forecasting methods for host overload detection are explored; a fuzzy-logic-based VM selection approach that enhances the performance of the VM selection strategy is developed; and a VM placement algorithm based on destination CPU utilisation is also developed. The performance evaluation of the proposed approaches is carried out on the CloudSim toolkit using the PlanetLab dataset. The simulation results show significant improvements in the number of VM migrations, energy consumption, and SLA violations.
    Keywords: cloud computing; virtual machines; dynamic virtual machine consolidation; exponential smoothing; fuzzy logic.
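
    The keywords above mention exponential smoothing for host overload detection; the toy sketch below (with an invented utilisation trace and thresholds, not the paper's implementation) shows how a smoothed CPU-utilisation forecast could trigger a consolidation decision.

# Exponential smoothing of host CPU utilisation for overload detection (toy values).

def smooth(series, alpha=0.4):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

cpu_trace = [0.55, 0.62, 0.70, 0.78, 0.84, 0.88]   # hypothetical utilisation samples
OVERLOAD, UNDERLOAD = 0.80, 0.20

forecast = smooth(cpu_trace)
if forecast > OVERLOAD:
    print(f"forecast {forecast:.2f}: overloaded -> select a VM to migrate away")
elif forecast < UNDERLOAD:
    print(f"forecast {forecast:.2f}: underloaded -> consolidate and switch host off")
else:
    print(f"forecast {forecast:.2f}: host within normal operating range")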

  • A distributed framework for cyber-physical cloud systems in collaborative engineering   Order a copy of this article
    by Stanislao Grazioso, Mateusz Gospodarczyk, Mario Selvaggio, Giuseppe Di Gironimo 
    Abstract: Distributed cyber-physical systems play a significant role in enhancing group decision-making processes, as in collaborative engineering. In this work, we develop a distributed framework to allow the use of collaborative approaches in group decision-making problems. We use the fuzzy analytic hierarchy process, a multiple criteria decision-making method, as the algorithm for the selection process. The architecture of the framework makes use of open-source utilities. The information components of the distributed framework act in response to the feedback provided by humans. Cloud infrastructures are used for data storage and remote computation. The motivation behind this work is to make possible the implementation of group decision-making in real scenarios. Two illustrative examples show the feasibility of the approach in different application fields. The main outcome is the achievement of a time reduction for the selection and evaluation process.
    Keywords: distributed systems; cyber-physical systems; web services; group decision making; fuzzy AHP; product design and development.

Special Issue on: Resource Provisioning in Cloud Computing

  • Towards providing middleware-level proactive resource reorganisation for elastic HPC applications in the cloud   Order a copy of this article
    by Rodrigo Righi, Cristiano Costa, Vinicius Facco, Igor Fontana, Mauricio Pillon, Marco Zanatta 
    Abstract: Elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. Particularly for High Performance Computing (HPC), elasticity can provide better use of resources and also a reduction in the execution time of applications. Today, we observe the emergence of proactive initiatives to handle the combination of elasticity and HPC, but each presents at least one limitation, such as the need for prior user experience, long processing times, manual configuration of parameters, or a design tied to a specific infrastructure and workload setting. In this context, this article presents ProElastic, a lightweight model that uses proactive elasticity to drive resource reorganisation decisions for HPC applications. Using ARIMA-based time series and analysing the mean time to launch virtual machines, ProElastic anticipates under- and over-loaded situations, triggering elasticity actions beforehand to address them. Our idea is to explore both performance and adaptivity at the middleware level in a way that is effortless from the user's perspective: the user does not need to configure elasticity parameters or rewrite the HPC application. Based on ProElastic, we developed a prototype that was evaluated with a master-slave iterative application and compared against reactive elasticity and non-elastic approaches. The results showed performance gains and a competitive cost (application time multiplied by consumed resources) in favour of ProElastic when compared with these two approaches.
    Keywords: cloud elasticity; proactive optimisation; performance; resource management; adaptivity.
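
    To make the proactive idea in the abstract above concrete, the sketch below (synthetic load trace, invented thresholds, and not the ProElastic implementation) forecasts the next observations of a load series with statsmodels' ARIMA and triggers a scale-out early enough to hide an assumed VM start-up time.

# ARIMA-based proactive scale-out sketch (synthetic data, invented thresholds).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
load = np.cumsum(rng.normal(0.8, 0.5, size=60)) + 20    # synthetic rising load (%)

VM_STARTUP_STEPS = 3        # assumed mean time to launch a VM, in monitoring intervals
OVERLOAD_THRESHOLD = 70.0   # percent CPU

model = ARIMA(load, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=VM_STARTUP_STEPS)

# If the load is predicted to cross the threshold before a new VM could be ready,
# the elasticity action has to be triggered now rather than reactively.
if forecast.max() > OVERLOAD_THRESHOLD:
    print("predicted overload within VM start-up window -> scale out now")
else:
    print("no predicted overload -> keep current allocation")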

  • Virtual machine placement in distributed cloud centres using bin packing algorithm   Order a copy of this article
    by Kumaraswamy S, Mydhili K. Nair 
    Abstract: A virtual machine is a logical framework used to provide services on the cloud. These virtual machines form a logical partition of the physical infrastructure present in the cloud centre. Virtual machines are not only prone to cost escalation but also result in huge power consumption. Hence, the cloud centre needs to optimise cost and power consumption by migrating these virtual machines from their current physical machines to other, more suitable physical machines. Until now, this problem has mostly been addressed by considering the virtual machines present in a single-location cloud centre. But current cloud centres have multiple locations, all of which are synchronised to provide cloud services. In this more complex setting, it is important to distinguish virtual machines in the same location from those in different locations and to provide suitable placement algorithms. In this work, the virtual machine placement problem is modelled as a 3-slot bin packing problem. The problem is shown to be NP-complete and suitable approximation algorithms are proposed. In addition, a polynomial-time approximation scheme is designed for this problem, which provides a facility to control the quality of the approximation. Empirical studies performed through simulation confirm the theoretical bounds that were obtained.
    Keywords: cloud computing; virtual machine; virtual machine placement; distributed cloud; bin packing.
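
    One classical approximation for this kind of packing, sketched below with hypothetical CPU demands and a single resource dimension, is first-fit decreasing: sort the VMs by demand and place each one in the first physical machine with enough residual capacity, opening a new machine when none fits. This is a standard textbook illustration, not the authors' 3-slot formulation.

# First-fit decreasing sketch for VM placement (hypothetical demands, single resource).

def first_fit_decreasing(vm_demands, host_capacity):
    """Return a list of hosts, each a list of (vm_id, demand)."""
    hosts, residual = [], []
    for vm_id, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for i, free in enumerate(residual):
            if demand <= free:                 # first host that still fits
                hosts[i].append((vm_id, demand))
                residual[i] -= demand
                break
        else:                                  # no host fits: open a new one
            hosts.append([(vm_id, demand)])
            residual.append(host_capacity - demand)
    return hosts

demands = [0.5, 0.7, 0.2, 0.4, 0.3, 0.6, 0.1]   # normalised CPU demands
for i, placed in enumerate(first_fit_decreasing(demands, host_capacity=1.0)):
    print(f"host {i}: {placed}")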

Special Issue on: Advances in Smart Learning Systems

  • Educational data modelling using curve fitting and average uniform algorithm   Order a copy of this article
    by Azmi Alazzam, Ban AlOmar 
    Abstract: One of the most widely used techniques in data modelling is curve fitting. Curve fitting is the process of creating a curve, or a function, that models the data to be analysed. Often the resultant curve will not pass through all the data points, which is why it is called a best-fit curve for the data. In this paper, we propose a curve-fitting quadratic function that is used to model educational data. The resulting model is then used to predict one or more output variables based on different values of the input variables. In order to optimise the parameters of the quadratic function, the proposed Average Uniform Algorithm (AUA) is used. The AUA is a metaheuristic optimisation algorithm that can be used to optimise a wide variety of continuous functions (linear, nonlinear, convex and non-convex). The AUA algorithm is used to find the optimal values of the coefficients of the quadratic function. Moreover, the optimal values of these coefficients are also found using the classical derivative method for the sake of comparison.
    Keywords: curve fitting; modelling; optimisation.
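
    As a baseline for what an AUA-optimised fit would be compared against, the sketch below fits a quadratic to a small invented set of (input, output) pairs by ordinary least squares; this closed-form solution plays the role of the classical derivative method mentioned in the abstract, and the data are purely illustrative.

# Least-squares quadratic curve fit on invented data (the classical baseline).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])           # e.g. hours studied (hypothetical)
y = np.array([52.0, 60.0, 71.0, 78.0, 82.0, 83.0])      # e.g. exam score (hypothetical)

# Fit y ~ a*x^2 + b*x + c by minimising the sum of squared residuals.
a, b, c = np.polyfit(x, y, deg=2)
predict = lambda t: a * t * t + b * t + c

print(f"model: y = {a:.3f}x^2 + {b:.3f}x + {c:.3f}")
print(f"prediction at x=4.5: {predict(4.5):.1f}")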

  • How mutual information interprets anomalies using different clustering techniques   Order a copy of this article
    by Zakea Il-Agure, Belsam Attallah 
    Abstract: Anomaly detection is concerned with the problem of finding non-conforming patterns in datasets. Previously (Il-agure, 2016), a novel approach was described to measure the amount of information shared between random anomaly variables. Two types of data were used to evaluate the proposed approach: proof-of-concept data in case study 1 and citation data in case study 2. The CRISP data mining methodology was updated to be applicable to link mining studies. The proposed mutual information approach provides a semantic investigation of the anomalies, and the updated methodology can be used in other link mining studies. The purpose of this paper is to evaluate how mutual information interprets semantic anomalies using a density-based clustering technique (trial 2), which differs from the clustering technique used in trial 1 (hierarchical clustering). A clustering method allows many options regarding the algorithm for combining groups, with each choice resulting in a different grouping structure. Therefore, cluster analysis can be an appropriate statistical tool for discovering underlying structures in various kinds of dataset. Cases are grouped into clusters regardless of the nature of the data (Landau and Chis Ster, 2010).
    Keywords: mutual information; anomalies; DBSCAN algorithm.
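
    A minimal illustration of the trial-2 setup described above, using scikit-learn on synthetic data rather than the citation dataset used in the paper: cluster with DBSCAN, treat the points labelled -1 as anomalies, and compute the mutual information shared between the cluster assignment and another discrete variable.

# DBSCAN anomalies plus a mutual-information check (synthetic data only).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(2)
dense_a = rng.normal(loc=0.0, scale=0.5, size=(200, 2))      # two dense groups
dense_b = rng.normal(loc=5.0, scale=0.5, size=(200, 2))
outliers = rng.uniform(-10, 15, size=(10, 2))                 # scattered anomalies
X = np.vstack([dense_a, dense_b, outliers])

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
print("anomalies found (label -1):", int((labels == -1).sum()))

# Mutual information between cluster labels and a second discrete attribute,
# here simply which half-plane each point falls in (illustrative only).
attribute = (X[:, 0] > 2.5).astype(int)
print("mutual information:", round(mutual_info_score(labels, attribute), 3))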

  • A novel integrated framework for securing online instructor-student communication   Order a copy of this article
    by Maher Salem, Khalid Samara, Mohammed Saeed 
    Abstract: Academic advising is a time-consuming effort in educational institutions. Online technologies provide various solutions for turning conventional, face-to-face advising into a fully integrated part of university services, so that students can communicate with advisors anytime and anywhere. Advising and consulting students is a critical task that can introduce latency and additional overhead for both the instructor and the student. This paper proposes an effective and secure online framework to enrich the advising experience between instructor and student and to enhance time management, thereby empowering students' innovation and creativity. The framework provides two main services: reserved time slots for each student, and a common discussion area shared by students and instructors. By reserving a time slot, a student can communicate with the advisor from anywhere and can start an embedded web-based virtual machine to interact with the instructor in real time. The second service allows a general post to be shared with all students and instructors in order to arrive at a generic solution. All posts can be published on Blackboard Learn or even on social media. Security plays a dominant role in the framework: it uses a strong authentication mechanism and encrypts the entire traffic to keep the data confidential.
    Keywords: cyber security; online education; remote access; virtual desktop; framework.

  • Energy harvesting techniques for routing issues in wireless sensor networks   Order a copy of this article
    by Sheeraz Ahmed, Ayub Khan, Zahoor Ali Khan, Atif Ishtiaq, Taimur Ali 
    Abstract: Recent years have seen remarkable advancement in the field of Energy Harvesting Wireless Sensor Networks (EH-WSNs). The popularity of energy harvesting has grown owing to its capability of prolonging the network lifetime. Devices using energy harvesting are, however, affected by inefficiencies, including energy leakage, battery degradation and storage losses. Many different protocols are being developed in this domain. These protocols can be categorised in a variety of ways with respect to the mechanisms and functionalities they follow; hence it is important to understand their working principles. In this research, we select some recent routing protocols in the field of EH-WSNs and present a comparative analysis according to the categories to which they belong. A detailed analysis of their key advantages and flaws is also provided.
    Keywords: energy harvesting; routing protocol; node; gateway.

  • Evaluating the affordances of wearable technology in education   Order a copy of this article
    by Belsam Attallah, Zakea Il-agure 
    Abstract: Wearable technology, or wearable computing, is becoming increasingly popular in learning and teaching activities. Recent research in education shows growth in the applications of this technology by educators for students at different levels. Higher education in particular is benefitting from the advanced applications of this emerging technology, owing to the mature use of the features and facilities offered by wearable devices. Essential factors such as students' interactivity and engagement in learning activities are found to be more easily achieved when academic organisations use this evolving technology effectively. This paper surveys the information and experiments published recently on wearable technology in education and its acceptance factors by consumers according to the Technology Acceptance Model (TAM). Alongside the advantages of this technology in the education field, the paper demonstrates the limitations and negative after-effects accompanying its applications, in order to judge the effectiveness of its employment in developing students' learning and success. The outcomes of this research reveal that there are a considerable number of restrictions limiting the wider application of this technology in the education sector, which educators should analyse and propose solutions for before suggesting further involvement of wearable devices in learning and teaching.
    Keywords: education; educational technology; learning; learning technology; teaching; wearable computing; wearable devices; wearable technology; wearables; technology acceptance model; TAM.

  • High-speed gesture modelling through boundary analysis of active signals from wearable data glove   Order a copy of this article
    by Andrews Samraj, Ramesh Kumarasamy, Kalvina Rajendran, Karthik Selvaraj 
    Abstract: Assistive technology uses gesture-based communication, which is highly valued in the fields of rehabilitation engineering and security. Such techniques are needed to extract communicative intentions from people with disabilities. These systems play a crucial role during emergencies as well as in regular communication during the normal course of life. To interpret and communicate the most distinctive requirements of a person with a disability, caregivers and medical support agents need a well-defined and distinguishable gesture paradigm together with a recognition system, and the conversion of such communicative gestures must be made precise and easy. In the proposed work, features are constructed by simple modelling of the gesture signals over the time zones created during the gesture activity. Identifying the most active channels for every subject involved in the experiment, and for the different gestures, reduces processing complexity and hardware cost.
    Keywords: standard deviation; data glove; assistive technology; gesture modelling; wearable computing.
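
    The keywords above point to standard deviation over data-glove channels; the sketch below (random stand-in signals, invented channel count) shows one simple way to rank channels by activity within a gesture window and keep only the most active ones. It is an illustration of the general idea, not the authors' feature-construction procedure.

# Ranking data-glove channels by activity (synthetic signals, invented sizes).
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples = 10, 200
signals = rng.normal(scale=0.05, size=(n_channels, n_samples))    # idle channels
signals[2] += np.sin(np.linspace(0, 6 * np.pi, n_samples))          # active channels
signals[7] += np.sign(np.sin(np.linspace(0, 3 * np.pi, n_samples)))

activity = signals.std(axis=1)                 # per-channel standard deviation
top = np.argsort(activity)[::-1][:3]           # keep the 3 most active channels
print("channel activity:", np.round(activity, 3))
print("selected channels:", sorted(top.tolist()))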

Special Issue on: MPP2017 and WAMCA2017 Advancements in High-level Parallel Programming Models for Edge/Fog/In-situ Computing

  • HPSM: a programming framework to exploit multi-CPU and multi-GPU systems simultaneously   Order a copy of this article
    by Joao Vicente Ferreira Lima, Daniel Di Domenico 
    Abstract: This paper presents HPSM, a high-level C++ framework to exploit multi-CPU and multi-GPU systems. HPSM enables the execution of parallel loops and reductions simultaneously over CPUs and GPUs using three parallel backends: Serial, OpenMP, and StarPU. We analysed HPSM development effort with an AXPY program through two standard metrics (NCLOC and ES). In addition, we evaluated performance and energy with three parallel benchmarks: N-Body, Hotspot, and a CFD solver. HPSM reduced code effort by up to 56.9% compared with the StarPU C interface, although it resulted in 2.5x more lines of code compared with OpenMP. The CPU-GPU combination attained speedups with Hotspot of up to 92.7x on an x86-based system with 4 GPUs and up to 108.2x on an IBM POWER8+ system with two GPUs. On both systems, the addition of GPUs improved energy efficiency.
    Keywords: high performance computing; CPU-GPU systems; parallel programming models; high-level framework; parallel loops.

  • An efficient pathfinding system in FPGA for edge/fog computing   Order a copy of this article
    by Alexandre Nery, Alexandre Sena, Leandro Guedes 
    Abstract: Pathfinding algorithms are at the heart of several classes of applications, such as network appliances (routing), GPS navigation and autonomous cars, which are all part of recent trends in artificial intelligence and the Internet of Things (IoT). Moreover, advances in semiconductor miniaturisation technologies have enabled the design of efficient Systems-on-Chip (SoC) devices with demanding performance requirements and energy consumption constraints. Such advanced systems often include Field Programmable Gate Arrays (FPGAs) to allow the design of customised co-processors/accelerators that yield lower power consumption and higher performance, as can be found today in various well-known cloud computing services: Amazon AWS, Baidu, etc. However, the number of embedded systems with processing capabilities and internet access has led to a substantial increase in network traffic towards such cloud service systems. Therefore, this work aims at designing and evaluating an efficient pathfinding accelerator system, equipped with an FPGA co-processor, based on Dijkstra's shortest path algorithm. The system aims to mitigate the network traffic problem with an efficient accelerator for the pathfinding problem, placed at the edge of the network. The system is designed using the Xilinx High-Level Synthesis (HLS) compiler and is implemented in the programmable logic of a Xilinx Zynq FPGA, embedded with an ARM microprocessor that is not only in charge of controlling the co-processor, but also of lightweight TCP/IP network communication on top of the FreeRTOS operating system. Extensive performance, circuit-area and energy consumption results show that the co-processor can find the shortest path about 2.5 times faster than the system's ARM microprocessor, in a simulation scenario based on tourist locations in the city of Rio de Janeiro, acquired from the OpenStreetMap database.
    Keywords: pathfinding; FPGA accelerator; high-level synthesis; fog computing; edge computing.
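
    For reference, a plain Python version of Dijkstra's shortest-path algorithm, the kernel that such a co-processor accelerates, is shown below on a small invented graph; it is a textbook software rendering of the algorithm, not the HLS design described in the abstract.

# Reference Dijkstra shortest-path implementation (small invented graph).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]}; returns distances from source."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {                                  # hypothetical road segments with distances (km)
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.5), ("D", 1.0)],
    "D": [("B", 4.0), ("C", 1.0)],
}
print(dijkstra(graph, "A"))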

Special Issue on: Emergent Peer-to-Peer Network Technologies for Ubiquitous and Wireless Networks

  • An improved energy efficient multi-hop ACO-based intelligent routing protocol for MANET   Order a copy of this article
    by Jeyalaxmi Perumaal, Saravanan R 
    Abstract: A Mobile Ad-hoc Network (MANET) consists of a group of mobile nodes, and communication among them takes place without any supporting centralised structure. Routing in a MANET is difficult because of its dynamic features, such as high mobility, constrained bandwidth and link failures due to energy loss. The objective of the proposed work is to implement an intelligent routing protocol. Selection of the best hops is mandatory to provide good throughput in the network; therefore, Ant Colony Optimisation (ACO) based intelligent routing is proposed. The ACO technique is used to select the best intermediate hops, which greatly reduces network delay and link failures by validating the co-ordinator nodes. The best co-ordinator nodes are selected as intermediate hops in the intelligent routing path. The performance is evaluated using the NS2 simulation tool, and the metrics considered are the delivery and loss rates of sent data, network throughput and lifetime, delay, and energy consumption.
    Keywords: ant colony optimisation; intelligent routing protocol; best co-ordinator nodes; MANET.
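
    The core of any ACO-based route selection is a probabilistic next-hop choice followed by a pheromone update; the sketch below shows those two rules on an invented link-quality table, purely as a generic illustration of the technique rather than the proposed protocol.

# Generic ACO next-hop selection and pheromone update (invented values).
import random

ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0   # pheromone weight, heuristic weight, evaporation, deposit

pheromone = {"n1": 1.0, "n2": 1.0, "n3": 1.0}          # candidate next hops
heuristic = {"n1": 0.8, "n2": 0.5, "n3": 0.9}          # e.g. residual energy / link quality

def choose_next_hop():
    """Roulette-wheel selection weighted by pheromone and heuristic desirability."""
    weights = {n: (pheromone[n] ** ALPHA) * (heuristic[n] ** BETA) for n in pheromone}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for n, w in weights.items():
        acc += w
        if r <= acc:
            return n
    return n

def reinforce(path_nodes, path_cost):
    """Evaporate everywhere, then deposit pheromone on the hops of a good path."""
    for n in pheromone:
        pheromone[n] *= (1.0 - RHO)
    for n in path_nodes:
        pheromone[n] += Q / path_cost

hop = choose_next_hop()
reinforce([hop], path_cost=3.0)
print("chosen hop:", hop, "pheromone:", {k: round(v, 3) for k, v in pheromone.items()})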

  • Analysis of spectrum handoff schemes for cognitive radio networks considering secondary user mobility   Order a copy of this article
    by K.S. Preetha, S. Kalaivani 
    Abstract: There has been a gigantic spike in the usage and development of wireless devices since wireless technology came into existence. This has contributed to a very serious problem of spectrum unavailability, or spectrum scarcity. The solution to this problem comes in the form of cognitive radio networks, where secondary users (SUs), also known as unlicensed users, make use of the spectrum in an opportunistic manner. The SU uses the spectrum in such a way that the primary or licensed user (PU) does not face interference above a tolerable threshold. Whenever a PU comes back to reclaim its licensed channel, the SU using it needs to perform a spectrum handoff (SHO) to another channel that is free of PUs. This way of functioning is termed spectrum mobility, which can be achieved by means of SHO. Initially, the SUs continuously sense the channels to identify an idle channel; errors in channel sensing are possible. A detection theory is put forth to analyse the spectrum sensing errors via the receiver operating characteristic (ROC), considering the false alarm probability, miss detection probability and detection probability. In this paper, we meticulously investigate and analyse the probability of spectrum handoff (PSHO), and hence the performance of spectrum mobility, with Lognormal-3 and Hyper-Erlang distribution models, considering the SU call duration and the residual time of availability of spectrum holes as measurement metrics designed for tele-traffic analysis.
    Keywords: cognitive radio networks; detection probability; probability of a miss; SNR; false alarm probability; primary users; secondary users.

  • Link survivability rate-based clustering for QoS maximisation in VANET   Order a copy of this article
    by D. Kalaivani, P.V.S.S.R. Chandra Mouli Chandra Mouli 
    Abstract: Clustering is used in VANETs to manage and stabilise topology information. The major requirements of this technique are data transfer through the group of nodes without disconnection, node coordination, minimised interference between nodes, and reduction of the hidden terminal problem. Data communication among the nodes in a cluster is handled by a cluster head (CH). The major issues in clustering approaches are the improper definition of the cluster structure and the maintenance of the cluster structure in a dynamic network. To overcome these issues, a link- and weight-based clustering approach is developed along with a distributed dispatching information table (DDIT), which reuses significant information to avoid data transmission failures. In this paper, the clustering algorithm is designed on the basis of the relative velocity of two vehicles travelling in the same direction, forming a cluster from a number of nodes in a VANET. The CH is then selected based on the link survival rate of the vehicle to deliver emergency messages to different vehicles in the cluster, using the data packet information stored in the DDIT table for fault prediction. Finally, an efficient medium access control (MAC) protocol is used to prioritise messages and avoid spectrum shortage for emergency messages in the cluster. A comparative analysis of the proposed link-based CH selection with DDIT (LCHS-DDIT) against existing methods, such as clustering-based cognitive MAC (CCMAC), multichannel CR ad-hoc network (MCRAN), and dedicated short range communication (DSRC), proves the effectiveness of LCHS-DDIT in terms of throughput, packet delivery ratio, and routing control overhead with minimum transmission delay.
    Keywords: vehicular ad-hoc networks; link survival rate; control channel; service channel; medium access control; roadside unit; on-board unit.

Special Issue on: Emerging Scalable Edge Computing Architectures and Intelligent Algorithms for Cloud-of-Things and Edge-of-Things

  • A survey on fog computing and its research challenges   Order a copy of this article
    by Jose Dos Santos Machado, Edward David Moreno, Admilson De Ribamar Lima Ribeiro 
    Abstract: This paper reviews fog computing, a new paradigm of distributed computing, and presents its concept, characteristics and areas of application. It provides a literature review on the problems of its implementation and analyses its research challenges, such as security issues, operational issues and standardisation. We show and discuss that many questions still need to be researched in academia before its implementation becomes a reality, but it is clear that its adoption is inevitable for the internet of the future.
    Keywords: fog computing; edge computing; cloud computing; IoT; distributed computing; cloud integration to IoT.

  • Hybrid coherent encryption scheme for multimedia big data management using cryptographic encryption methods   Order a copy of this article
    by Stephen Dass, J. Prabhu 
    Abstract: In today's world of technology, data plays an imperative role in many different technical areas. Ensuring data confidentiality, integrity and security over the internet, across different media and applications, is a challenging task. Data generated by multimedia and IoT devices is another huge source of big data on the internet. When sensitive and confidential data are accessed by attackers, serious security and privacy issues arise. Data encryption is the mechanism used to forestall this issue. Many encryption techniques are used for multimedia and IoT, but when massive volumes of data are involved, the computational challenges grow. This paper designs and proposes a new coherent encryption algorithm that addresses the issue of IoT and multimedia big data. The proposed system achieves a strong cryptographic effect without requiring much memory and allows easy performance analysis. A GPU is used to handle huge volumes of data and make data processing more efficient. The proposed algorithm is compared with other symmetric cryptographic algorithms such as AES, DES, 3-DES, RC6 and MARS in terms of architecture, flexibility, scalability and security level, as well as computational running time and throughput for both encryption and decryption processes. The avalanche effect of the proposed system is calculated to be 54.2%. The proposed framework secures multimedia against real-time attacks better than existing systems.
    Keywords: big data; symmetric key encryption; analysis; security; GPU; IoT; multimedia big data.
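
    The avalanche figure quoted in the abstract above is simply the fraction of output bits that change when a single input bit is flipped. The sketch below shows how such a figure is computed, using SHA-256 from the standard library as a stand-in transformation, since the paper's cipher is not reproduced here.

# Measuring the avalanche effect (SHA-256 used as a stand-in for the cipher).
import hashlib

def bits(data: bytes) -> str:
    return "".join(f"{byte:08b}" for byte in data)

def avalanche(transform, plaintext: bytes) -> float:
    """Flip one bit of the input and report the fraction of output bits that change."""
    flipped = bytearray(plaintext)
    flipped[0] ^= 0x01                      # flip the lowest bit of the first byte
    out_a = bits(transform(plaintext))
    out_b = bits(transform(bytes(flipped)))
    changed = sum(a != b for a, b in zip(out_a, out_b))
    return changed / len(out_a)

sha = lambda m: hashlib.sha256(m).digest()
ratio = avalanche(sha, b"multimedia big data block 0001")
print(f"avalanche effect: {ratio * 100:.1f}% of output bits changed")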

  • A study on data deduplication schemes in cloud storage   Order a copy of this article
    by Priyan Malarvizhi Kumar, Usha Devi G, Shakila Basheer, Parthasarathy P 
    Abstract: Digital data is growing at an immense rate day by day, and finding efficient storage and security mechanisms is a challenge. Cloud storage has already gained popularity because of the huge data storage capacity that cloud service providers make available to users through their storage servers. When many users upload data to the cloud, a great deal of that data can be redundant, which wastes storage space and transmission bandwidth. To ensure efficient storage, handling this redundant data is very important, and this is achieved through deduplication. The major challenge for deduplication is that most users upload data in encrypted form for privacy and security. There are many existing mechanisms for deduplication, some of which handle encrypted data as well. The purpose of this paper is to survey the existing deduplication mechanisms in cloud storage and to analyse the methodologies used by each of them.
    Keywords: deduplication; convergent encryption; cloud storage.
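
    The common building block behind the schemes surveyed above is content-defined fingerprinting: identical blocks hash to the same digest, so only one copy is physically stored, and in convergent encryption the same content hash also serves as the encryption key so that identical plaintexts still deduplicate after encryption. The sketch below illustrates the fingerprint index only, on invented blocks, and deliberately omits any real cipher.

# Content-hash deduplication index (the fingerprint step used by convergent encryption).
import hashlib

store = {}                 # digest -> block payload (simulated storage server)
refcount = {}              # digest -> number of logical references

def upload(block: bytes) -> str:
    """Store a block only if its fingerprint is new; otherwise add a reference."""
    digest = hashlib.sha256(block).hexdigest()
    if digest not in store:
        store[digest] = block              # first copy: physically stored
    refcount[digest] = refcount.get(digest, 0) + 1
    return digest

# Two users upload overlapping data (invented blocks).
blocks = [b"lecture-video-chunk-1", b"lecture-video-chunk-2", b"lecture-video-chunk-1"]
for b in blocks:
    upload(b)

print("logical blocks uploaded:", len(blocks))
print("physical blocks stored :", len(store))
print("reference counts       :", refcount)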