Forthcoming articles

 


International Journal of Grid and Utility Computing

 

These articles have been peer-reviewed and accepted for publication in IJGUC, but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

 

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

 

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

 

Articles marked with this Open Access icon are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

 

Register for our alerting service, which notifies you by email when new issues of IJGUC are published online.

 

We also offer RSS feeds which provide timely updates of tables of contents, newly published articles and calls for papers.

 

International Journal of Grid and Utility Computing (47 papers in press)

 

Regular Issues

 

  • An improved multi-instance multi-label learning algorithm based on representative instances selection and label correlations   Order a copy of this article
    by Chanjuan Liu, Tongtong Chen, Hailin Zou, Xinmiao Ding, Yuling Wang 
    Abstract: Multi-instance multi-label learning (MIML) has been successfully used in image and text classification problems. It is noteworthy that few of the previous studies consider pattern-label relations. Inevitably, there are some useless instances in a bag, which reduce the accuracy of annotation. In this paper, we focus on this problem. Firstly, an instance selection method via a joint-norms constraint is employed to eliminate the useless instances and select the representative instances by modelling the instance correlation. Then, bags are mapped to these representative instances. Finally, the classifier is trained by an optimisation algorithm based on label correlations. Experimental results on image, text and birdsong audio datasets show that the proposed algorithm significantly improves the performance of the MIML classifier compared with the state-of-the-art methods.
    Keywords: multi-instance multi-label learning; representative instances selection; joint-norms constraint; label correlations.

  • Mining top-k approximate closed patterns in an imprecise database   Order a copy of this article
    by Yu Xiaomei, Hong Wang, Xiangwei Zheng 
    Abstract: Over the last few years, the growth of data has been exponential, leading to colossal amounts of information being produced by computational systems. Meanwhile, the data in real-life applications are usually incomplete and imprecise, which poses big challenges for researchers trying to obtain exact and valid analytical results with traditional frequent pattern mining methods. Since potential faults can break the original characteristics of data patterns into multiple small fragments, it is impossible to recover the long true patterns from these fragments. To explore the huge amount of imprecise data by means of frequent pattern mining, we propose a service-oriented model that enables a new way of service provisioning based on users' QoS (quality of service) requirements. The novel model is developed to solve the problem of mining top-k approximate closed patterns in imprecise databases and will be further applied to the diagnosis and treatment of potential patients in online medical applications. We test the novel model on an imprecise medical database, and the experimental results show that the new model can successfully improve health services for online customers.
    Keywords: data mining; approximate frequent pattern; frequent closed pattern; clustering; equivalence class; health service.

  • Enhanced cuckoo search algorithm for virtual machine placement in cloud data centres   Order a copy of this article
    by Esha Barlaskar, Jayanta Singh, Biju Issac 
    Abstract: In order to enhance resource utilisation and power efficiency in cloud data centres, it is important to perform Virtual Machine (VM) placement in an optimal manner. VM placement is the process of mapping virtual machines to Physical Machines (PMs). Cloud computing researchers have recently introduced various metaheuristic algorithms for VM placement considering optimised energy consumption. However, these algorithms do not meet the optimal energy consumption requirements. This paper proposes an Enhanced Cuckoo Search (ECS) algorithm to address the issues with VM placement, focusing on energy consumption. The performance of the proposed algorithm is evaluated using three different workloads in the CloudSim tool. The evaluation includes a comparison of the proposed algorithm against the existing Genetic Algorithm (GA), Optimised Firefly Search Algorithm (OFS) and Ant Colony (AC) algorithm. The comparison results illustrate that the proposed ECS algorithm consumes less energy than the competing algorithms while maintaining a steady performance for SLA and VM migration. The ECS algorithm consumes around 25% less energy than GA, 27% less than OFS, and 26% less than AC.
    Keywords: virtual machine placement; metaheuristic algorithms; enhanced cuckoo search algorithm; cloud computing
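
    The abstract above describes a cuckoo-search metaheuristic that looks for a low-energy mapping of VMs onto physical machines. The following Python sketch illustrates the general idea only: the linear power model, the capacity data, the Levy-flight analogue and all parameter values are assumptions for illustration and do not reproduce the authors' ECS algorithm.
      import random

      # Illustrative data: VM CPU demands and PM capacities (hypothetical values).
      vm_demand = [20, 35, 10, 50, 25, 15]      # CPU units required by each VM
      pm_capacity = [100, 100, 100]             # CPU capacity of each PM
      P_IDLE, P_MAX = 70.0, 250.0               # assumed server power model (watts)

      def energy(placement):
          """Sum a linear power model over the PMs that host at least one VM."""
          load = [0] * len(pm_capacity)
          for vm, pm in enumerate(placement):
              load[pm] += vm_demand[vm]
          if any(l > c for l, c in zip(load, pm_capacity)):
              return float('inf')               # infeasible placement
          return sum(P_IDLE + (P_MAX - P_IDLE) * l / c
                     for l, c in zip(load, pm_capacity) if l > 0)

      def random_placement():
          return [random.randrange(len(pm_capacity)) for _ in vm_demand]

      def levy_step(placement, strength=2):
          """Crude Levy-flight analogue: reassign a few randomly chosen VMs."""
          new = placement[:]
          for _ in range(random.randint(1, strength)):
              new[random.randrange(len(new))] = random.randrange(len(pm_capacity))
          return new

      def cuckoo_search(nests=15, iterations=200, abandon_fraction=0.25):
          population = [random_placement() for _ in range(nests)]
          best = min(population, key=energy)
          for _ in range(iterations):
              # Generate a new solution by a Levy-style move from a random nest.
              candidate = levy_step(random.choice(population))
              victim = random.randrange(nests)
              if energy(candidate) < energy(population[victim]):
                  population[victim] = candidate
              # Abandon the worst nests and rebuild them randomly.
              population.sort(key=energy)
              for i in range(int(nests * (1 - abandon_fraction)), nests):
                  population[i] = random_placement()
              best = min([best] + population, key=energy)
          return best, energy(best)

      print(cuckoo_search())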

  • SER performance optimisation of AF cooperative communication system based on directional antenna   Order a copy of this article
    by Ruilian Tan, Zhe Li, Xi Su 
    Abstract: Aiming at the cooperative communication system with directional antennas, this paper studies the SER (Symbol Error Rate) performance under the AF (Amplify-and-Forward) protocol. The model of an AF cooperative communication system using directional antennas is first established to deduce the closed-form expression of the SER in this model, as well as the upper limit of the SER. Then, the OPA (Optimum Power Allocation) is analysed with the purpose of minimising the SER. Combining specific simulation numerical values, the SER performance of the established model is thoroughly researched. Simulation results demonstrate that the system SER decreases by adopting a cooperative communication system with directional transmitting and directional receiving. Each node's directional gain, channel quality and power allocation method all have great influence on the system's overall performance. The OPA is also proved to be superior to EPA (Equal Power Allocation).
    Keywords: cooperative communication; directional antenna; amplify and forward; symbol error rate.

  • MT-DIPS: a new data duplication integrity protection scheme for multi-tenants sharing storage in SaaS   Order a copy of this article
    by Lin Li, Yongxin Zhang, Yanhui Ding 
    Abstract: In SaaS, the data sharing storage mode and tenant isolation requirement present a new challenge to traditional remote data duplication protection schemes. This paper aims at the new requirement of tenant data duplication protection in SaaS, and presents a tuple-sampling-based tenant duplication protection mechanism, MT-DIPS (Duplication Integrity Protection Scheme for Multi-tenants). Instead of data block sampling, MT-DIPS accommodates the data isolation requirement of different tenants by sampling tenants' physical data tuples. Through periodical random sampling, MT-DIPS reduces the complexity of verification object construction on the service provider side and eliminates resource waste. Analysis and experimental results show that if the damage rate of tenant data tuples is about 1%, the number of randomly sampled tuples is about 5% of the total number of tuples. MT-DIPS makes use of homomorphic labels with an auxiliary authentication structure to allow trusted third-party verification without disclosing tenant data, relieving the verification burden on tenants' client sides.
    Keywords: SaaS; multi-tenant; duplication; integrity authentication; cloud computing.

  • An improved image classification based on K-means clustering and BoW model   Order a copy of this article
    by YongLang Liu, Zhong Cai, JiTao Zhang 
    Abstract: Image classification is an important issue in large-scale image data processing systems based on clusters. In this context, a significant number of methods relying on the BoW model and SVM have been proposed for image fusion systems. Some works classify these methods into generative and discriminative modes. Very few works deal with a classifier based on the fusion of these modes when building an image classification system. In this paper, we propose a revised algorithm based on a weighted visual dictionary obtained by K-means clustering. First, it clusters SIFT and Laplace spectrum features separately to get local characteristics of low-dimension images (sub visual dictionaries); then it clusters the low-dimension characteristics to get the super visual dictionaries of the two features; finally, the final visual dictionary is obtained by weighting the parent visual dictionaries so that the two features play a balanced role. Experimental results show that the algorithm and this model are efficient in describing image information and can improve image classification performance, and they can be applied in unmanned navigation, machine vision and other fields.
    Keywords: image classification; visual dictionary; K-means; BoW model.
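
    As a rough illustration of the bag-of-visual-words pipeline sketched in the abstract above (clustering local descriptors into a visual dictionary and representing each image as a word histogram), the following Python sketch uses scikit-learn's KMeans and a linear SVM; it assumes generic 128-dimensional descriptors and does not implement the paper's weighted fusion of SIFT and Laplace-spectrum dictionaries.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import LinearSVC

      def build_vocabulary(descriptor_sets, k=32):
          """Cluster all local descriptors into k visual words (the dictionary)."""
          all_desc = np.vstack(descriptor_sets)
          return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

      def bow_histogram(descriptors, vocabulary):
          """Map one image's descriptors to a normalised visual-word histogram."""
          words = vocabulary.predict(descriptors)
          hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
          return hist / max(hist.sum(), 1.0)

      # Hypothetical usage with random vectors standing in for SIFT descriptors.
      rng = np.random.default_rng(0)
      train_desc = [rng.normal(size=(200, 128)) for _ in range(10)]   # 10 images
      train_labels = [i % 2 for i in range(10)]
      vocab = build_vocabulary(train_desc, k=32)
      X = np.array([bow_histogram(d, vocab) for d in train_desc])
      classifier = LinearSVC().fit(X, train_labels)
      print(classifier.predict(X[:3]))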

  • Extensible markup language keywords search based on security access control   Order a copy of this article
    by MeiJuan Wang, Jian Wang, KeJun Guo 
    Abstract: With users increasingly storing and sharing information in the cloud, data storage brings new challenges to the Extensible Markup Language (XML) database in big data environments. Efficient retrieval of data with protection and privacy guarantees when accessing mass data in the cloud has become more and more important. Most existing research on XML data query and retrieval focuses on efficiency or on establishing the index, and so on. However, these methods and algorithms do not take into account the security of the data and the data structure themselves. Furthermore, traditional access control rules restrict which XML document nodes may be read in a dynamic environment, and there is little research on data security and privacy protection requirements for dynamic keyword-based queries. In order to improve search efficiency under security constraints, this paper discusses how to generate the sub-tree of matching keywords that the user can access according to the access control rules for the user's role. A corresponding algorithm is proposed to achieve safe and efficient keyword search.
    Keywords: XML access control; keywords search; SLCA.

  • A hybrid heuristic resource allocation model for computational grid for optimal energy usage   Order a copy of this article
    by Deo Vidyarthi, Achal Kaushik 
    Abstract: Computational grids help in faster execution of compute-intensive jobs. Resource allocation for job execution in a computational grid demands that many characteristic parameters be optimised, but in the process the green aspect is often ignored. Reducing the energy consumption in computational grids is a major recent research issue. Conventional systems that offer energy-efficient scheduling strategies ignore other quality of service parameters while scheduling the jobs. The proposed work tries to optimise the energy used for resource allocation while making no compromise on other related characteristic parameters. A hybrid model, which uses a genetic algorithm and graph theory concepts, is proposed for this purpose. In this model, an energy-saving mechanism is implemented using a dynamic threshold method followed by a genetic algorithm to further consolidate the saving. Eventually, the graph theory concept of the minimum spanning tree is applied. The performance of the proposed model has been studied by simulation. The results reveal the benefits achieved with the proposed model for optimal energy usage with resource allocation in the grid.
    Keywords: computational grid; green energy; resource allocation; genetic algorithm; minimum spanning tree; quality of service.

  • Improved quantisation mechanisms in impulse radio ultra wideband systems based on compressed sensing   Order a copy of this article
    by Yunhe Li, Qinyu Zhang Zhang, Shaohua Wu 
    Abstract: To reduce the impact of quantisation noise during low-rate sampling of Impulse Radio Ultra-Wideband (IR-UWB) signals under the compressed sensing framework, while considering the equal information carried by compressed measurements and their Gaussian distribution characteristics, three modified quantisation mechanisms are designed in this study: overload uniform quantisation, non-uniform quantisation, and overload non-uniform quantisation. Besides, the factors influencing the overload factor in the overload mechanism are analysed to obtain, by fitting, an optimisation scheme close to the optimal overload. The simulation results show that all three modified mechanisms, especially the overload non-uniform quantisation mechanism, greatly improve performance compared with uniform quantisation. What is more, the performance of the overload uniform quantisation mechanism, featuring low complexity, is better than that of the non-uniform quantisation mechanism with high complexity, thus providing a practical quantisation method for IR-UWB systems under the compressed sensing framework.
    Keywords: compressed sensing; impulse radio ultra-wideband; quantisation mechanism; quantisation noise; overload interval factor.

  • Performance analysis of two WMN architectures by WMN-GA simulation system considering different distributions and transmission rates   Order a copy of this article
    by Keita Matsuo, Miralda Cuka, Takaaki Inaba, Tetsuya Oda, Leonard Barolli, Admir Barolli 
    Abstract: In this paper, we evaluate the performance of two Wireless Mesh Network (WMN) architectures considering throughput, delay and fairness index metrics. For simulations, we used a Genetic Algorithm (GA) based simulation system (called WMN-GA) and ns-3. We compare the performance of the two architectures considering normal and uniform distributions, different transmission rates and the OLSR protocol. The simulation results show that the throughput and delay increase, but the fairness index decreases, as the transmission rate increases. The throughput of the hybrid WMN is higher than that of the I/B WMN, but the delay of the I/B WMN is higher than that of the hybrid WMN for the normal distribution. The fairness index of the normal distribution is higher than that of the uniform distribution.
    Keywords: genetic algorithms; wireless mesh networks; NS-3; network architecture; OLSR; SGC; NCMC; normal distribution; uniform distribution; transmission rate.

  • An ontology-based cloud infrastructure service discovery and selection system   Order a copy of this article
    by Manoranjan Parhi, Binod Kumar Pattanayak, Manas Ranjan Patra 
    Abstract: In recent years, owing to the global economic downturn, many organisations have resorted to downsizing their Information Technology (IT) expenses by adopting innovative computing models, such as cloud computing, which allows business houses to reduce their fixed IT costs by promising a greener, scalable, cost-effective alternative to use their IT resources. A growing number of pay-per-use cloud services are now available on the web in the form of Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). With the increase in the number of services, there has also been an increase in demand and adoption of cloud services, making cloud service identification and discovery a challenging task. This is due to varied service descriptions, non-standardised naming conventions, heterogeneity in type and features of cloud services. Thus, selecting an appropriate cloud service according to consumer requirements is a daunting task, especially for applications that use a composition of different cloud services. In this paper, we have designed an ontology-based cloud infrastructure service discovery and selection system that defines functional and non-functional concepts, attributes and relations of infrastructure services. We have shown how the system enables one to discover appropriate services optimally as requested by consumers.
    Keywords: cloud computing; cloud service discovery and selection; infrastructure as a service; cloud ontology.

  • Certificateless multi-signcryption scheme in standard model   Order a copy of this article
    by Xuguang Wu 
    Abstract: A signcryption scheme can realise the security objectives of encryption and signature simultaneously, with lower computational cost and communication overhead than the sign-then-encrypt approach. To adapt to multi-user settings and solve the key escrow problem of ID-based multi-signcryption schemes, this paper defines the formal model of a certificateless multi-signcryption scheme and proposes a certificateless multi-signcryption scheme in the standard model. The scheme is proved secure against adaptive chosen ciphertext attacks and adaptive chosen message attacks under the decisional bilinear Diffie-Hellman assumption and the computational Diffie-Hellman assumption, respectively.
    Keywords: signcryption; multi-signcryption; certificateless encryption.

  • Per-service security SLAs for cloud security management: model and implementation   Order a copy of this article
    by Valentina Casola, Alessandra De Benedictis, Jolanda Modic, Massimiliano Rak, Umberto Villano 
    Abstract: In the cloud computing context, Service Level Agreements (SLAs) tailored to specific Cloud Service Customers (CSCs) seem to be still a utopia, and things are even worse as regards the security terms to be guaranteed. In fact, existing cloud SLAs focus on only a few service terms, and Cloud Service Providers (CSPs) mainly provide uniform guarantees for all offered services and for all customers, regardless of any particular service characteristics or of customer-specific needs. In order to expand their business volume, CSPs are currently starting to explore alternative approaches, based on the adoption of a CSC-based per-service security SLA model. This paper presents a framework that enables the adoption of a per-service SLA model, supporting the automatic implementation of cloud security SLAs tailored to the needs of each customer for specific service instances. In particular, the process and the software architecture for per-service SLA implementation are shown. A case study application, related to the provisioning of a secure web container service, is presented and discussed, to demonstrate the feasibility and effectiveness of the proposed solution.
    Keywords: cloud security; per-service SLA; security service level agreement.

  • A (multi) GPU iterative reconstruction algorithm based on Hessian penalty term for sparse MRI   Order a copy of this article
    by Salvatore Cuomo, Pasquale De Michele, Francesco Piccialli 
    Abstract: A recent trend in the Magnetic Resonance Imaging (MRI) research field is to design and adopt machines that are able to acquire undersampled clinical data, reducing the time for which the patient is lying in the body scanner. Unfortunately, the missing information in these undersampled acquired datasets leads to artifacts in the reconstructed image, therefore computationally expensive image reconstruction techniques are required. In this paper, we present an iterative regularisation strategy with a second order derivative penalty term for the reconstruction of undersampled image datasets. Moreover, we compare this approach with other constrained minimisation methods, resulting in improved accuracy. Finally, an implementation on a massively parallel architecture environment, a multi Graphics Processing Unit (GPU) system, of the proposed iterative algorithm is presented. The resulting performance gives clinically feasible reconstruction run times, speed-up and improvements in terms of reconstruction accuracy of the undersampled MRI images.
    Keywords: compressed sensing; MRI iterative reconstruction; numerical regularisation; graphics processing unit; parallel and scientific computing.
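
    The abstract above refers to iterative regularisation with a second-order (Hessian) derivative penalty for undersampled MRI reconstruction. A generic objective of this family, written in LaTeX, is shown below; the exact functional, norms and solver used by the authors may differ.
      \min_{x} \; \tfrac{1}{2}\,\lVert F_u x - y \rVert_2^2 \;+\; \lambda\,\lVert \mathcal{H} x \rVert_1
    Here $F_u$ denotes the undersampled Fourier (acquisition) operator, $y$ the acquired k-space samples, $\mathcal{H}x$ the stack of second-order partial derivatives (Hessian components) of the image $x$, and $\lambda > 0$ the regularisation weight. The minimisation is typically carried out iteratively (e.g. with a proximal-gradient scheme), and the per-iteration FFTs and derivative filters map naturally onto one or more GPUs.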

  • Labelled evolutionary Petri nets/genetic algorithm based approach for workflow scheduling in cloud computing   Order a copy of this article
    by Manel Femmam, Okba Kazar, Laid Kahloul, Mohamed El-kabir Fareh 
    Abstract: Nowadays, more and more evolutionary algorithms for workflow scheduling in cloud computing are being proposed. Most of these algorithms focus on effectiveness, disregarding the issue of flexibility. Research on Petri nets addresses the issue of flexibility; many extensions have been proposed to facilitate the modelling of complex systems. Typical extensions are the addition of "colour", "time" and "hierarchy". By mapping scheduling problems into Petri nets, we are able to use standard Petri net theory. In this case, the scheduling problem can be reduced to finding an optimal sequence of transitions leading from an initial marking to a final one. To find the optimal scheduling, we propose a new approach based on a recently proposed formalism, the Evolutionary Petri Net (EPN), which is an extension of the Petri net enriched with two genetic operators, crossover and mutation. The objectives of our research are to minimise the workflow application completion time (makespan) as well as the cost incurred by using cloud resources. Some numerical experiments are carried out to demonstrate the usefulness of our algorithm.
    Keywords: workflow scheduling; cloud computing; petri nets; genetic algorithm.

  • An improved SMURF scheme for cleaning RFID data   Order a copy of this article
    by He Xu 
    Abstract: With the increasing usage of internet of things devices, our daily life is facing big data. RFID technology enables reading over a long distance, provides high storage capacity and is widely used in internet of things supply chain management for object tracking and tracing. With the expansion of RFID application areas, the demand for reliable business data is increasingly important. In order to fulfil the needs of upper-layer applications, data cleaning is essential and directly affects the correctness and completeness of the business data, so RFID data need to be filtered and handled. The traditional statistical smoothing for unreliable RFID data (SMURF) algorithm dynamically adjusts the size of the sliding window according to the tags' average reading rate during the process of data cleaning. To some extent, SMURF overcomes the disadvantages of a fixed sliding window size; however, the SMURF algorithm is only aimed at constant-speed data flow in ideal situations. In this paper, we overcome the shortcomings of the SMURF algorithm and propose a SMURF scheme improved in two aspects. The first is based on dynamic tags, and the second is an RFID data cleaning framework that considers the influence of data redundancy. The experiments verify that the improved scheme sets the sliding window reasonably in dynamic situations, and the accuracy of the cleaning is improved as well.
    Keywords: RFID; data cleaning; internet of things; sliding window.
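
    The sliding-window adaptation that SMURF-style cleaning performs (choose a window just large enough, given the tag's observed read rate, to smooth dropped readings while still reacting when a tag leaves the range) can be sketched in Python as follows. The binomial completeness bound, thresholds and window limits are illustrative assumptions and do not reproduce the improved scheme of the paper.
      import math

      def required_window(p_read, delta=0.05):
          """Smallest window (in epochs) that observes the tag at least once
          with probability 1 - delta, assuming independent reads per epoch."""
          p_read = max(p_read, 1e-6)
          return max(1, math.ceil(math.log(1.0 / delta) / p_read))

      def smooth_tag(readings, delta=0.05, w_min=1, w_max=25):
          """readings: list of 0/1 observations per epoch for one tag.
          Returns the smoothed presence decision for every epoch."""
          w = w_min
          present = []
          for t in range(len(readings)):
              window = readings[max(0, t - w + 1): t + 1]
              p_hat = sum(window) / len(window)       # observed read rate
              # Grow the window for statistical completeness, but cap it so the
              # cleaner still notices when the tag has actually left.
              target = required_window(p_hat, delta) if p_hat > 0 else w_max
              w = min(w_max, max(w_min, target))
              present.append(1 if sum(window) > 0 else 0)
          return present

      # Hypothetical noisy read stream: tag present for the first 12 epochs only.
      raw = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
      print(smooth_tag(raw))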

  • Adaptive co-operation in parallel memetic algorithms for rich vehicle routing problems   Order a copy of this article
    by Jakub Nalepa, Miroslaw Blocho 
    Abstract: Designing and implementing co-operation schemes for parallel algorithms has become a very important task recently. The scheme, which defines the co-operation topology, frequency and strategies for handling transferred solutions, has a tremendous influence on the algorithm search capabilities, and can help to balance the exploration and exploitation of the vast solution space. In this paper, we present both static and dynamic schemes: the former are selected before the algorithm execution, whereas the latter are dynamically updated on the fly to better respond to the optimisation progress. To understand the impact of such co-operation approaches, we applied them in the parallel memetic algorithms for solving rich routing problems, and performed an extensive experimental study using well-known benchmark sets. This experimental analysis is backed with the appropriate statistical tests to verify the importance of the retrieved results.
    Keywords: co-operation; parallel algorithm; memetic algorithm; rich routing problem; VRPTW; PDPTW.

  • Cloud computing based on agent technology, superrecursive algorithms and DNA   Order a copy of this article
    by Rao Mikkilineni, Mark Burgin 
    Abstract: Agents and agent systems are becoming more and more important in the development of a variety of fields, such as ubiquitous computing, ambient intelligence, autonomous computing, data analytics, machine learning, intelligent systems and intelligent robotics. In this paper, we examine interactions of theoretical computer science with computer and network technologies, analysing how agent technology is presented in mathematical models of computation. We demonstrate how these models are used in the novel distributed intelligent managed element (DIME) network architecture (DNA), which extends the conventional computational model of information processing networks, allowing improvement of the efficiency and resilience of computational processes. Two implementations of DNA described in the paper illustrate how the application of agent technology radically improves the current cloud computing state of the art. The first example demonstrates the live migration of a database from a laptop to a cloud without losing transactions and without using containers or moving virtual machine images. The second example demonstrates the implementation of cloud agnostic computing over a network of public and private clouds, where live computing process workflows are moved from one cloud to another without losing transactions. Both these implementations demonstrate the power of scientific thought for dramatically extending the current state of the art of cloud computing practice.
    Keywords: cloud computing; agent technology; inductive Turing machine; grid computing; DIME network architecture; intelligent systems; super-recursive algorithms.

  • Attendance management system using selfies and signatures   Order a copy of this article
    by Jun Iio 
    Abstract: There have been many proposals to optimise student attendance management in higher education. However, each method has pros and cons and we have not yet found a perfect solution. In this study, a novel framework for attendance management is proposed that consists of a mobile device and a web application. During lectures, students participating in the lecture can register their attendance on the mobile device with their selfie or their signature. After the lecture is finished, the registration data are sent to the database and they are added to the 'RollSheet'. This paper reports an overview of this system and the results of an evaluation after a trial period, which was conducted in the second semester of the 2015 fiscal year.
    Keywords: attendance management system; selfie photograph; hand-writing signature; mobile device; web application.

  • Trust modelling for opportunistic cloud services   Order a copy of this article
    by Eric Kuada 
    Abstract: This paper presents a model for the concept of trust and a trust management system for opportunistic cloud services platforms. Results from applying the systematic review methodology to review trust-related studies in cloud computing revealed that the concept of trust is used loosely without any formal specification in cloud computing discussions and trust engineering in general. Formal definition and a model of the concept of trust is, however, essential in the design of trust management systems. The paper therefore presents a model for the formal specification of the concept of trust. A trust management system for opportunistic cloud services is also presented. The applicability of the trust model and the trust management system is demonstrated for cloud computing by applying it to software as a service and infrastructure as a service usage scenarios in the context of opportunistic cloud services environments.
    Keywords: opportunistic cloud services; trust engineering; trust in cloud computing; trust modeling; trust management system; pseudo service level agreements.

  • Efficient cache replacement policy for minimising error rate in L2-STT-MRAM caches   Order a copy of this article
    by Rashidah F. Olanrewaju, Burhan Ul Islam Khan, A. Raouf Khan, Mashkuri Yaacob, Md Moktarul Alam 
    Abstract: In recent times, various challenges have been encountered in the design and development of Static RAM (SRAM) caches, which has consequently led to designs where alternative memory cell technologies are adopted for on-chip embedded caches. Current research on cache design reveals that Spin Torque Transfer Magnetic RAMs, commonly termed STT-MRAMs, have become one of the most promising technologies in the field of memory chip design, gaining a lot of attention from researchers owing to their dynamic direct map and data access policies for reducing the average cost, i.e. both time and energy optimisation. Though STT-MRAMs possess high density, low power rating and non-volatility, increasing rates of WRITE failures and READ disturbances strongly affect the reliability of STT-MRAM caches. Besides workload behaviours, process variations directly affect these failure/disturbance rates. Furthermore, cache replacement algorithms play a significant part in minimising the Error Rate (ER) induced by WRITE operations. In this paper, the vulnerability of STT-MRAM caches is investigated to examine the effect of workloads as well as process variations in characterising the reliability of STT-MRAM caches. The current study analyses and evaluates an existing efficient cache replacement policy, namely Least Error Rate (LER), which uses Hamming Distance (HD) computations to reduce the Write Error Rate (WER) of L2 STT-MRAM caches with acceptable overheads. The performance analysis of the algorithm confirms its effectiveness in reducing the WER and cost overheads compared with the conventional LRU technique implemented on SRAM cells.
    Keywords: cache replacement algorithm; field assisted STT-MRAM; error rate; L2 caches.
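
    The victim-selection step that the abstract attributes to the LER policy (use Hamming distances so that the chosen write causes as few bit flips, and hence as few write errors, as possible) can be illustrated with the short Python sketch below; the word width, set layout and tie-breaking are assumptions, not the exact policy evaluated in the paper.
      def hamming_distance(a, b):
          """Number of differing bits between two equal-width words."""
          return bin(a ^ b).count('1')

      def select_victim(cache_set, incoming_block):
          """Pick the way whose current contents need the fewest bit flips
          to store the incoming block, i.e. the smallest write disturbance."""
          return min(range(len(cache_set)),
                     key=lambda way: hamming_distance(cache_set[way], incoming_block))

      # Hypothetical 4-way set holding 16-bit words.
      ways = [0b1010101010101010, 0b1111000011110000,
              0b0000000000000000, 0b1010101011111111]
      new_block = 0b1010101010111111
      victim = select_victim(ways, new_block)
      print(victim, hamming_distance(ways[victim], new_block))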

  • An infrastructure model for smart cities based on big data   Order a copy of this article
    by Eliza Helena Areias Gomes, Mario Antonio Ribeiro Dantas, Douglas D. J. De Macedo, Carlos Roberto De Rolt, Julio Dias, Luca Foschini 
    Abstract: The number of projects focused on smart cities has grown in recent years. The massive amount of data generated in these initiatives creates considerable complexity in managing all this information. To address this problem, several approaches have been developed in recent years. In this paper, we propose a big data infrastructure model for a smart city project. The goal of this model is to present the stages of data processing, namely extraction, storage, processing and visualisation, as well as the types of tools needed for each phase. To implement our proposed model, we used ParticipACT Brazil, a project based on smart cities. This project uses different databases to compose its big data and uses these data to solve urban problems. We observe that our model provides a structured view of the software to be used in the big data server of ParticipACT Brazil.
    Keywords: big data; smart city; big data tools.

  • A negotiation-based dynamic pricing heuristic in cloud computing   Order a copy of this article
    by Gaurav Baranwal, Dinesh Kumar, Zahid Raza, Deo Prakash Vidyarthi 
    Abstract: Over the years, cloud computing has emerged as a good business platform for IT-related services. In the cloud, prices of computing resources act as a lever to control the use of the resources. That is why, as the number of cloud customers started increasing, cloud service providers started offering resources under various pricing schemes to attract customers. This work proposes a negotiation-based heuristic for dynamic pricing that considers the behaviour of both the service provider and the customer and tries to satisfy both optimally. Both customer and provider are reluctant to reveal information about their utility to each other. The designed utility function for the provider considers the payment offered by the customer and the opinion of the provider about the customer. Similarly, the utility function for the customer considers the price offered by the provider and the opinion of the customer about the provider. This encourages both to offer their true value. A performance study indicates that the proposed method performs well and is a potential candidate for implementation in a real cloud.
    Keywords: cloud pricing; negotiations; cloud market; cloud agent; trustability.
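
    A toy alternating-offers loop in the spirit of the abstract above is sketched below in Python: each side's utility combines the offered price with an opinion (trust) term, both sides concede towards each other, and an offer is accepted as soon as it is at least as good as one's own. All functional forms, opening offers and parameter values are illustrative assumptions rather than the paper's heuristic.
      def provider_utility(price, reserve, opinion):
          # The provider prefers higher prices and customers it trusts.
          return (price - reserve) * opinion if price >= reserve else -1.0

      def customer_utility(price, budget, opinion):
          # The customer prefers lower prices and providers it trusts.
          return (budget - price) * opinion if price <= budget else -1.0

      def negotiate(provider_reserve=4.0, customer_budget=10.0,
                    opinion_of_customer=0.8, opinion_of_provider=0.9,
                    rounds=20, concession=0.4):
          provider_offer, customer_offer = 12.0, 2.0        # opening offers
          for _ in range(rounds):
              # Accept when the opponent's offer is at least as good as one's own.
              if customer_utility(provider_offer, customer_budget, opinion_of_provider) >= \
                 customer_utility(customer_offer, customer_budget, opinion_of_provider):
                  return provider_offer
              if provider_utility(customer_offer, provider_reserve, opinion_of_customer) >= \
                 provider_utility(provider_offer, provider_reserve, opinion_of_customer):
                  return customer_offer
              provider_offer -= concession                  # provider concedes down
              customer_offer += concession                  # customer concedes up
          return None                                       # no agreement reached

      print(negotiate())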

  • Playing in traffic: an investigation of low-cost, non-invasive traffic sensors for street lighting luminaire deployment   Order a copy of this article
    by Karl Mohring, Trina Myers, Ian Atkinson 
    Abstract: Real-time traffic monitoring is essential to the development of smart cities and to realising their potential for energy savings. However, real-time traffic monitoring is a task that requires sophisticated and expensive hardware. Owing to the prohibitive cost of specialised sensors, accurate traffic counts are typically limited to intersections, where traffic information is used for signalling purposes. This sparse arrangement of traffic detection points does not provide adequate information for intelligent lighting applications, such as adaptive dimming. This paper investigates low-cost, off-the-shelf sensors that can be installed inside street lighting luminaires for traffic sensing. A luminaire-mounted sensor test-bed installed on a moderately busy road trialled three non-invasive presence-detection sensors: Passive Infrared (PIR), sonar (UVD) and lidar. The proof-of-concept study revealed that a HC-SR501 PIR motion detector could count traffic with 73% accuracy at a low cost and may be suitable for intelligent lighting applications if accuracy can be further improved.
    Keywords: commodity; internet of things; vehicle detection; sensors; smart cities; wireless sensor networks.

  • Real-time web-cast system by multihop WebRTC communications   Order a copy of this article
    by Daiki Ito, Michitoshi Niibori, Masaru Kamada 
    Abstract: A software system is developed for casting the screen images and voice from a host PC to the client web browsers on many other PCs in real time. This system is intended to be used in classrooms. Students have only to bring their own PCs and connect to the teacher's host PC with a web browser via a wireless network to see and listen to the teaching materials presented on the host PC. The client web browsers are organised in the shape of a binary tree, along which the video and audio data are relayed in a multihop fashion by the Web Real-time Communication (WebRTC) protocol. This structure of binary multihop relay is adopted in order not to burden the host PC with communications load. A test has shown that voice and motion pictures of a rather small size of 320 x 240 pixels on a teacher's PC were presented at a rate of five frames per second without any noticeable delay on the web browsers running on 38 client devices for students under a local WiFi network. To host more client devices, we have to lower the frame rate, down to something as slow as a slide show of still pictures.
    Keywords: real-time web-cast system; bring your own device; WebSocket; web real-time communication.
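
    The relay topology described above (each browser forwards the stream to at most two further browsers, so the host only ever feeds two peers) amounts to inserting clients into a binary tree. A minimal Python sketch of that bookkeeping is given below; signalling, WebRTC session set-up and media forwarding are omitted, and all names are illustrative.
      from collections import deque

      class RelayNode:
          def __init__(self, client_id):
              self.client_id = client_id
              self.children = []            # at most two downstream peers

      def attach_client(root, client_id):
          """Breadth-first search for the first node with a free slot, so the
          tree stays as shallow (and therefore as low-latency) as possible."""
          queue = deque([root])
          while queue:
              node = queue.popleft()
              if len(node.children) < 2:
                  child = RelayNode(client_id)
                  node.children.append(child)
                  return child
              queue.extend(node.children)

      def depth(node):
          return 1 + max((depth(c) for c in node.children), default=0)

      # The host PC is the root; student browsers attach as they connect.
      host = RelayNode("teacher-host")
      for student in ["s01", "s02", "s03", "s04", "s05"]:
          attach_client(host, student)
      print(depth(host))   # the number of relay hops grows only logarithmically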

  • Dynamic migration of virtual machines to reduce energy consumption in a cluster   Order a copy of this article
    by Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa 
    Abstract: Virtual machines are widely used to support applications with virtual services in server clusters. Here, a virtual machine can migrate from a host server to a guest server. In this paper, we consider a cluster where virtual machines are dynamically created and dropped depending on the number of processes. We propose a Dynamic Virtual Machine Migration (DVMM) algorithm to reduce the total electric energy consumption of servers. If an application issues a process to a cluster, the most energy-efficient host server is first selected and the process is then performed on a virtual machine of that server. Then, a virtual machine migrates from a host server to a guest server so that the total electric energy consumption of the servers can be reduced. In the evaluation, we show that the total electric energy consumption and active time of servers and the average execution time of processes can be reduced by the DVMM algorithm.
    Keywords: energy-efficient computation; virtual machine; power consumption model; energy-aware dynamic migration of virtual machines.
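
    The host-selection step described above (issue the new process to the server whose energy consumption would grow least) can be sketched as follows; the linear power model and the utilisation figures are assumptions for illustration, and the migration phase of the DVMM algorithm is not shown.
      P_IDLE, P_MAX = 60.0, 200.0     # assumed per-server power model (watts)

      def power(util):
          """Linear power model: idle power plus a utilisation-proportional part."""
          return P_IDLE + (P_MAX - P_IDLE) * util

      def energy_increase(server_util, extra_util):
          """Additional power the server would draw if it also ran the new process."""
          if server_util == 0.0:
              return power(extra_util)            # the server must be switched on
          return power(server_util + extra_util) - power(server_util)

      def select_host(servers, extra_util):
          """servers: dict of server name -> current CPU utilisation in [0, 1]."""
          feasible = {s: u for s, u in servers.items() if u + extra_util <= 1.0}
          return min(feasible, key=lambda s: energy_increase(feasible[s], extra_util))

      cluster = {"srv1": 0.55, "srv2": 0.10, "srv3": 0.0}
      print(select_host(cluster, 0.2))   # prefers an already-active server to waking srv3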

  • Energy-efficient placement of virtual machines in cloud datacentres, based on fuzzy decision making   Order a copy of this article
    by Leili Salimian, Faramarz Safi-Esfahani 
    Abstract: Placement of virtual machines (VMs) on physical nodes as a sub-problem of dynamic VM consolidation has been driven mainly by energy efficiency and performance objectives. However, owing to varying workloads in VMs, placement of the VMs can cause a violation in the Service Level Agreement (SLA). In this paper, the VM placement is regarded as a bin packing problem, and a fuzzy energy-aware algorithm is proposed to estimate the host resource usage. The estimated resource usage is used to find the most energy-efficient host to reallocate the VMs. The fuzzy algorithm generates rules and membership functions dynamically to adapt to workload changes. The main objective of the proposed algorithm is to optimise the energy-performance trade-off. The effectiveness of the proposed algorithm is evaluated through simulations on the random and real-world PlanetLab workloads. Simulation results demonstrate that the proposed algorithm reduces the energy consumption, while it provides a high level of adherence to the SLAs.
    Keywords: dynamic VM consolidation; CPU usage; VM placement; fuzzy decision making.

  • An identity-based cryptographic scheme for cloud storage applications   Order a copy of this article
    by Manel Medhioub, Mohamed Hamdi 
    Abstract: Remote storage systems, notably cloud storage based services, are attracting growing interest. In fact, one of the factors that led to the popularity of cloud computing is the availability of storage resources provided at a reduced cost. However, when outsourcing data to a third party, security issues become critical concerns, especially confidentiality, integrity, authentication, anonymity and resiliency. Addressing this challenge, this work provides a new approach to ensure authentication in cloud storage applications. ID-Based Cryptosystems (IBC) have many advantages over certificate-based systems, such as simplification of key management. This paper proposes an original ID-based authentication approach in which the cloud tenant is assigned the IBC Private Key Generator (PKG) function. Consequently, it can issue public elements for its users and can keep the resulting IBC secrets confidential. Moreover, in our scheme, the public key infrastructure is still used to establish trust relationships between the PKGs.
    Keywords: cloud storage; authentication; identity-based cryptography; security; Dropbox.

  • COBRA-HPA: a block generating tool to perform hybrid program analysis   Order a copy of this article
    by Thomas Huybrechts, Yorick De Bock, Haoxuan Li, Peter Hellinckx 
    Abstract: The Worst-Case Execution Time (WCET) of a task is an important value in real-time systems. This metric is used by the scheduler in order to schedule all tasks before their deadlines. However, the code and the hardware architecture have a significant impact on the execution time and thus the WCET. Therefore, different analysis methodologies exist to determine the WCET, each with its own advantages and/or disadvantages. In this paper, a hybrid approach is proposed that combines the strengths of two common analysis techniques. This hybrid methodology tackles the problem that can be described as 'the gap between a machine and a human in solving problems'. The two-layer hybrid model splits the code of tasks into so-called basic blocks. The WCET can be determined by performing execution time measurements on each block and statically combining those results. The COBRA-HPA framework presented in this paper is developed to facilitate the creation of hybrid block models and automate the measurement/analysis process. Additionally, an elaborate discussion on the implementation and performance of the framework is given. In conclusion, the results of the COBRA-HPA framework show a significant reduction in analysis effort while keeping sound WCET predictions for the hybrid method compared with the static and measurement-based approaches.
    Keywords: worst-case execution time; WCET; hybrid analysis methodology; COde Behaviour fRAmework; COBRA; basic block generator.
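
    The two-layer hybrid idea summarised above (measure each basic block on the target, then combine the per-block worst cases statically over the program structure) can be illustrated with the following Python sketch. The measurements, block names and composition rules are simplified assumptions; COBRA-HPA's actual block model is richer.
      # Per-block execution-time measurements (hypothetical, e.g. CPU cycles).
      measurements = {
          "init":    [120, 131, 118, 140],
          "loop":    [300, 342, 317],          # one loop-body iteration
          "branchA": [80, 95],
          "branchB": [60, 72],
          "finish":  [50, 55, 53],
      }
      block_wcet = {b: max(times) for b, times in measurements.items()}

      def wcet_sequence(blocks):
          """Sequential composition: worst cases simply add up."""
          return sum(block_wcet[b] for b in blocks)

      def wcet_branch(*alternatives):
          """Conditional: take the most expensive alternative."""
          return max(wcet_sequence(alt) for alt in alternatives)

      def wcet_loop(body, max_iterations):
          """Loop: body worst case times a static iteration bound."""
          return wcet_sequence(body) * max_iterations

      # WCET of: init; loop (at most 10 iterations); if ... branchA else branchB; finish
      estimate = (wcet_sequence(["init"]) + wcet_loop(["loop"], 10)
                  + wcet_branch(["branchA"], ["branchB"]) + wcet_sequence(["finish"]))
      print(estimate)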

  • The big data mining forecasting model based on combination of improved manifold learning and deep learning   Order a copy of this article
    by Xiurong Chen, Yixiang Tian 
    Abstract: The most important dilemma in big data processing is that extensive redundant information and useful information are mixed with each other, which makes big data difficult to use effectively for building prediction models. In our work, we combine the manifold learning dimension reduction algorithm LLE with the deep learning feature extraction algorithm CDBN as the input of an RBF network, constructing a mixed-feature RBF forecast model. Because the LLE algorithm depends too much on the local neighbourhood, which is not easy to determine, we use the kernel-function mapping idea of KECA to transform the original global nonlinear problem into a global linear one in the high-dimensional kernel feature space, removing redundant information more accurately and reducing data complexity. Because the network structure of the CDBN is difficult to determine and its learning process lacks supervision, we use the kernel entropy information computed in KECA to determine the number of network layers and to supervise the learning process, which makes it more effective at extracting deep features that reflect the essential characteristics of big data. In the empirical part we choose foreign exchange rate time series for our study. The results show that the improved KELE can reduce the dimensionality of sample data effectively, yielding a more optimised and reasonable representation of the original data and providing an assurance for further learning and understanding of big data, and that the improved KECDBN can extract distributed features of the data more effectively, thereby improving the prediction accuracy of the mixed-feature RBF forecast model based on KELE and KECDBN.
    Keywords: locally liner embedding; continuous deep belief network; kernel entropy component analysis; kernel entropy liner embedding; kernel entropy continuous deep belief network.

  • Cost-aware hybrid cloud scheduling of parameter sweep calculations using predictive algorithms   Order a copy of this article
    by Stig Bosmans, Glenn Maricaux, Filip Van Der Schueren, Peter Hellinckx 
    Abstract: This paper investigates various techniques for scheduling parameter sweep calculations cost efficiently in a hybrid cloud environment. The combination of both a private and public cloud environment integrates the advantages of being cost effective and having virtually unlimited scaling capabilities at the same time. To make an accurate estimate for the required resources, multiple prediction techniques are discussed. The estimation can be used to create an efficient scheduler which respects both deadline and cost. These findings have been implemented and tested in a Java-based cloud framework that operates on Amazon EC2 and OpenNebula. Also, we present a theoretical model to further optimise the cost by leveraging the Amazon Spot Market.
    Keywords: parameter sweep; cloud computing; Amazon AWS EC2; predictive algorithms; OpenNebula; machine learning; Amazon spot market.

  • Impact of software architecture on execution time: a power window TACLeBench case study   Order a copy of this article
    by Haoxuan Li, Paul De Meulenaere, Siegfried Mercelis, Peter Hellinckx 
    Abstract: Timing analysis is used to extract the timing properties of a system. Various timing analysis techniques and tools have been developed over the past decades. However, changes in hardware platform and software architecture introduced new challenges in timing analysis techniques. In our research, we aim to develop a hybrid approach to provide safe and precise timing analysis results. In this approach, we will divide the original code into smaller code blocks, then construct a timing model based on the information acquired by measuring the execution time of every individual block. This process can introduce changes in the software architecture. In this paper we use a multi-component benchmark to investigate the impact of software architecture on the timing behaviour of a system.
    Keywords: WCET; timing analysis; hybrid timing analysis; power window; embedded systems; TACLEBench; COBRA block generator.

  • Accountability management for multi-tenant cloud services   Order a copy of this article
    by Fatma Masmoudi, Mohamed Sellami, Monia Loulou, Ahmed Hadj Kacem 
    Abstract: The widespread adoption of multi-tenancy in the Software as a Service delivery model triggers several data protection issues that could decrease tenants' trust. In this context, accountability can be used to strengthen the trust of tenants in the cloud by providing reassurance that the personal data hosted in the cloud are processed according to their requirements. In this paper, we propose an approach for the accountability management of multi-tenant cloud services allowing: compliance checking of services' behaviours against defined accountability requirements based on monitoring rules; accountability-violation detection otherwise; and post-violation analysis based on evidence. A tool suite is developed and integrated into a middleware to implement our proposal. Finally, the experiments we have carried out show the efficiency of our approach against several criteria.
    Keywords: cloud computing; accountability; multi-tenancy; monitoring; accountability violation.

  • A big data approach for multi-experiment data management   Order a copy of this article
    by Silvio Pardi, Guido Russo 
    Abstract: Data sharing among similar experiments is limited by the usage of ad hoc directory structures, data and metadata naming, as well as by the variety of data access protocols used in different computing models. The Open Data and Big Data paradigms provide the context to overcome the current heterogeneity problems. In this work, we present a study of a Global Storage Ecosystem designed to manage large and distributed datasets in the context of physics experiments. The proposed environment is entirely based on the open protocols HTTP/WebDAV, together with modern data searching technologies, according to the Big Data paradigm. More specifically, the main goal is to aggregate multiple storage areas exported with open protocols and to simplify data retrieval operations, thanks to a set of search-engine-like tools based on Elasticsearch and the Apache Lucene library. This platform offers physicists an effective instrument to simplify multi-experiment data analysis by enabling data searching without knowing a priori the directory format or the data itself. As a proof of concept, we realised a prototype on the ReCaS Supercomputing infrastructure by aggregating and indexing the files stored in a set of already existing storage systems.
    Keywords: big data; data federation.

  • A WLAN triage testbed based on fuzzy logic and its performance evaluation for different number of clients and throughput parameter   Order a copy of this article
    by Kosuke Ozera, Takaaki Inaba, Shinji Sakamoto, Kevin Bylykbashi, Makoto Ikeda, Leonard Barolli 
    Abstract: Many devices communicate over Wireless Local Area Networks (WLANs). The IEEE 802.11e standard for WLANs is an important extension of the IEEE 802.11 standard focusing on QoS that works with any PHY implementation. The IEEE 802.11e standard introduces EDCF and HCCA. Both these schemes are useful for QoS provisioning to support delay-sensitive voice and video applications. EDCF uses the contention window to differentiate high-priority and low-priority services. However, it does not consider the priority of users. In this paper, in order to deal with this problem, we propose a Fuzzy-based Admission Control System (FACS). We implemented a triage testbed using FACS and carried out an experiment. The experimental results show that the number of connected clients increases during the Avoid phase, but does not change during the Monitoring phase, and that the implemented testbed performs better than conventional WLANs.
    Keywords: WLAN triage; congestion control.

  • Enriching folksonomy for online videos   Order a copy of this article
    by Hiroki Sakaji, Masaki Kohana, Akio Kobayashi, Hiroyuki Sakai 
    Abstract: We propose a method that enriches folksonomy by using user comments on online videos. Folksonomy is a process in which users tag videos so that they can be searched for easily. On some video sharing websites, users can post both tags and comments on a video; the tags correspond to folksonomy. One such website is Nico Nico Douga; however, users cannot post more than 12 tags on a video. Therefore, there are some important tags that could be posted but sometimes are not. We present a method for acquiring some of these missing tags by choosing new tags that score well under a scoring method developed by us. The method is based on information theory and a novel algorithm for estimating new tags by using distributed databases constructed by us.
    Keywords: text mining; distributed database; information extraction.
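
    As one generic example of an information-theoretic score for candidate tags mined from comments (not necessarily the measure developed in the paper), the Python sketch below ranks comment terms by their pointwise mutual information with an existing tag across a collection of videos; the data layout is hypothetical.
      import math
      from collections import Counter

      def pmi_scores(videos, target_tag):
          """videos: list of dicts with 'tags' and 'comment_terms' sets.
          Rank comment terms by pointwise mutual information with target_tag;
          high-PMI terms become candidate tags for videos lacking them."""
          n = len(videos)
          tag_count = sum(1 for v in videos if target_tag in v["tags"])
          term_count, joint_count = Counter(), Counter()
          for v in videos:
              for term in v["comment_terms"]:
                  term_count[term] += 1
                  if target_tag in v["tags"]:
                      joint_count[term] += 1
          scores = {}
          for term, c_term in term_count.items():
              if joint_count[term] == 0:
                  continue
              scores[term] = math.log((joint_count[term] / n) /
                                      ((c_term / n) * (tag_count / n)))
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      videos = [
          {"tags": {"game"},  "comment_terms": {"speedrun", "boss", "funny"}},
          {"tags": {"game"},  "comment_terms": {"boss", "glitch"}},
          {"tags": {"music"}, "comment_terms": {"funny", "piano"}},
      ]
      print(pmi_scores(videos, "game")[:3])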

  • A web platform for oral exam of programming class   Order a copy of this article
    by Masaki Kohana, Shusuke Okamoto 
    Abstract: We develop a system to support oral exams in a programming class. Our programming class has a problem with student waiting time. We assume that the waiting time can be reduced when a teacher can check the source code and the result of a program smoothly. A student uploads C++ source code and registers on a waiting list. The system compiles the code to an HTML file and a JavaScript file using Emscripten. The compiled program can run in a web browser. A teacher can check the order of the students for the oral exam; at the same time, the teacher can also see the source code and the result. Our system provides a waiting list for the oral exam to keep fairness. Also, the teacher does not overlook invalid code. It helps the teacher to grade students correctly.
    Keywords: oral exam; programming; runtime environment.
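
    The compile step described above (invoking Emscripten on an uploaded C++ file so the result runs in the student's browser) might be wrapped on the server roughly as in the Python sketch below; the file locations, the -O2 flag and the error handling are assumptions rather than details taken from the paper.
      import subprocess
      from pathlib import Path

      def compile_submission(source_path, out_dir):
          """Compile an uploaded C++ file with Emscripten; asking emcc for a
          .html target also produces the accompanying JavaScript output."""
          src = Path(source_path)
          out = Path(out_dir) / (src.stem + ".html")
          result = subprocess.run(["emcc", str(src), "-O2", "-o", str(out)],
                                  capture_output=True, text=True)
          if result.returncode != 0:
              # Surface the compiler diagnostics so the teacher sees why it failed.
              raise RuntimeError(result.stderr)
          return out

      # Hypothetical usage inside the upload handler:
      # page = compile_submission("uploads/s01/main.cpp", "static/builds/s01")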

  • A novel test case generation method based on program structure diagram   Order a copy of this article
    by Mingcheng Qu, XiangHu Wu, YongChao Tao, GuanNan Wang, ZiYu Dong 
    Abstract: At present, embedded software testing suffers from lagging test design, a lack of visualisation and low efficiency, and depends heavily on the testers' own test design. This cannot guarantee the quality of the test cases or the quality of the testing. In this paper, a program structure diagram model of the software is established and verified, and the test items are planned manually. Finally, we fill in the contents of the test items, generate the corresponding set of test cases according to the algorithm, and save them into a database for management. This method can improve the reliability and efficiency of testing, ensure visual tracking and management of test cases, and provide strong support for the planning and generation of test cases.
    Keywords: program structure diagram; test item planning; test case generation.

  • A dynamic cloud service selection model based on trust and service level agreements in cloud computing   Order a copy of this article
    by Yubiao Wang, Junhao Wen, Quanwang Wu, Lei Guo, Bamei Tao 
    Abstract: For the problem of selecting high-quality, trusted services, we propose a Dynamic Cloud Service Selection Model (DCSSM). Cloud service resources are divided into different service levels by Service-Level Agreement Management (SLAM), with each SLAM managing some cloud service registration information. In order to make the final trust evaluation values more realistic, the model computes a comprehensive trust value, which consists of direct trust and indirect trust. First, combined weights consist of a subjective weight and an objective weight; using rough sets, an analytic hierarchy process method is applied to calculate the subjective weight. The direct trust also considers transaction time and transaction amount, and thus yields an accurate direct trust value. Secondly, indirect trust considers user trust evaluation similarity, and it comprises the indirect trust of friends and the indirect trust of strangers. Finally, when a transaction is completed, a dynamic update of direct trust is proposed. The model is simulated using CloudSim and compared with three other methods; the experimental results show that the DCSSM performs better than the other three methods.
    Keywords: dynamic cloud service; trust; service-level agreement; selection model; combining weights.
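
    The comprehensive trust computation outlined above (direct trust moderated by transaction time and amount, combined with indirect trust from friends and strangers weighted by evaluation similarity) can be illustrated as follows; the exponential decay, the weighting scheme and the value of alpha are assumptions for illustration, not the paper's calibration.
      import math, time

      def direct_trust(transactions, now=None, half_life_days=30.0):
          """transactions: list of (timestamp, amount, rating in [0, 1]).
          Recent, larger transactions weigh more (exponential time decay)."""
          now = now or time.time()
          num = den = 0.0
          for ts, amount, rating in transactions:
              age_days = (now - ts) / 86400.0
              w = amount * math.exp(-math.log(2) * age_days / half_life_days)
              num += w * rating
              den += w
          return num / den if den else 0.5          # neutral prior if no history

      def indirect_trust(friend_ratings, stranger_ratings, similarities):
          """Ratings from friends and strangers, each weighted by how similar the
          recommender's past evaluations are to those of the requesting user."""
          pairs = list(zip(friend_ratings, similarities["friends"])) + \
                  list(zip(stranger_ratings, similarities["strangers"]))
          den = sum(s for _, s in pairs)
          return sum(r * s for r, s in pairs) / den if den else 0.5

      def comprehensive_trust(direct, indirect, alpha=0.7):
          return alpha * direct + (1 - alpha) * indirect

      now = time.time()
      d = direct_trust([(now - 5 * 86400, 100.0, 0.9), (now - 60 * 86400, 20.0, 0.4)], now)
      i = indirect_trust([0.8, 0.7], [0.6], {"friends": [0.9, 0.8], "strangers": [0.3]})
      print(round(comprehensive_trust(d, i), 3))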

  • Research on regression test method based on multiple UML graphic models   Order a copy of this article
    by Mingcheng Qu, Xianghu Wu, Yongchao Tao, Guannan Wang, Ziyu Dong 
    Abstract: Most existing graph-based regression testing schemes target one given type of UML graph and are not flexible in regression testing. This paper proposes a universal method that handles modifications to a variety of UML graphical models. The parts of the modified UML graphical model structure that must be retested are determined by a domain-of-influence analysis of the effect of the UML modification on the range of generated test cases, and the corresponding test cases are finally regenerated automatically. This method has been shown to achieve a high logical coverage rate. Because it fully considers all kinds of dependencies, it does not limit the type of UML graphical model modification, and it offers greater openness and comprehensiveness.
    Keywords: regression testing; multiple UML graphical models; domain analysis.

  • E-XY: an entropy-based XY routing algorithm   Order a copy of this article
    by Akash Punhani, Pardeep Kumar, Nitin Chanderwal 
    Abstract: The communication between cores or processing elements has become an important issue owing to the continuous increase in their number on the chip. In recent years, the Network on Chip (NoC) has been used to handle these communication issues. The most common topology for NoC is the mesh topology, and the XY routing algorithm is the most commonly used algorithm for this topology. This routing algorithm is popular because of its simplicity and deadlock prevention capabilities. Its major drawback is that it is unable to handle a high traffic load, which has led to the development of adaptive routing algorithms. An adaptive routing algorithm requires information about the adjacent routers before routing the packets in order to avoid congestion. Such information about the adjacent routers is transferred through extra dedicated links, as the normal links may be congested and delay in sending the information adds further traffic to the congested links. In this paper, an E-XY (entropy-based XY) routing algorithm is proposed that generates information about the adjacent routers locally, based on previously communicated packets. The experiments have been carried out on an 8x8 mesh topology simulated using OMNeT++ 4.4.1 with the HNOCS version. Different types of traffic have been considered, including uniform, bit complement, neighbour and tornado. The proposed algorithm is compared with other routing algorithms including XY, IX/Y and Odd Even. Results demonstrate that the proposed algorithm is comparable with the XY routing algorithm up to a load factor of 0.8 and performs better than the XY, IX/Y and Odd Even routing algorithms as the load increases. The proposed algorithm helps in reducing hardware cost, as extra links and the router ports to connect them are not required. Hence, the proposed algorithm is a better option for communication in a parallel computing environment.
    Keywords: routing algorithm; adaptive; parallel communication; router architecture; maximum entropy model.
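
    The key idea in the abstract above is that a router can estimate its neighbours' load locally from the packets it has already forwarded, instead of receiving congestion updates over extra dedicated wires. The Python sketch below illustrates a history-based load and entropy estimate alongside classic XY direction selection; the sliding-window estimator is an assumption for illustration and does not reproduce the paper's maximum-entropy formulation.

# Toy sketch of local, history-based neighbour-load estimation next to XY routing.
from collections import deque, Counter
import math

class EXYEstimator:
    """Local estimate of how busy each output port is, built only from
    packets this router has already forwarded (no extra status links)."""
    def __init__(self, window=64):
        self.recent = deque(maxlen=window)

    def record(self, direction):
        self.recent.append(direction)

    def load(self, direction):
        if not self.recent:
            return 0.0
        return Counter(self.recent)[direction] / len(self.recent)

    def entropy(self):
        """Entropy of the empirical direction distribution: high entropy means
        traffic is spread evenly, low entropy means one port dominates."""
        n = len(self.recent)
        if n == 0:
            return 0.0
        return -sum((c / n) * math.log2(c / n) for c in Counter(self.recent).values())

def xy_direction(src, dst):
    """Classic XY routing: correct the X offset first, then the Y offset."""
    (sx, sy), (dx, dy) = src, dst
    if dx != sx:
        return "E" if dx > sx else "W"
    if dy != sy:
        return "N" if dy > sy else "S"
    return "LOCAL"

est = EXYEstimator()
for d in ["E", "E", "N", "E", "W"]:       # directions of previously sent packets
    est.record(d)
print(xy_direction((3, 3), (6, 5)), est.load("E"), round(est.entropy(), 3))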

Special Issue on: Recent Developments in Parallel, Distributed and Grid Computing for Big Data

  • GPU accelerated video super-resolution using transformed spatio-temporal exemplars   Order a copy of this article
    by Chaitanya Pavan Tanay Kondapalli, Srikanth Khanna, Chandrasekaran Venkatachalam, Pallav Kumar Baruah, Kartheek Diwakar Pingali, Sai Hareesh Anamandra 
    Abstract: Super-resolution (SR) is the method of obtaining a high-resolution (HR) image or image sequence from one or more low-resolution (LR) images of a scene. Super-resolution has been an active area of research in recent years owing to its applications in defence, satellite imaging, video surveillance and medical diagnostics. In a broad sense, SR techniques can be classified into external database driven and internal database driven approaches. The training phase in the first approach is computationally intensive as it learns the LR-HR patch relationships from huge datasets, and the test procedure is relatively fast. In the second approach, the super-resolved image is directly constructed from the available LR image, eliminating the need for any learning phase, but the testing phase is computationally intensive. Recently, Huang et al. (2015) proposed a transformed self-exemplar internal database technique that takes advantage of the fractal nature of an image by expanding the patch search space using geometric variations. This method fails if there is no patch redundancy within and across image scales, and also if it fails to detect the vanishing points (VP) used to determine the perspective transformation between the LR image and its subsampled form. In this paper, we expand the patch search space by taking advantage of the temporal dimension of the image frames in the scene video, and we also use the efficient VP detection technique of Lezama et al. (2014). We are thereby able to successfully super-resolve even the failure cases of Huang et al. (2015) and achieve an overall improvement in PSNR. We also focus on reducing the computation time by exploiting the embarrassingly parallel nature of the algorithm: by parallelising it we achieve a speedup of 6 on multi-core, up to 11 on GPU, and around 16 on a hybrid multi-core and GPU platform. Using our hybrid implementation, we achieve a 32x super-resolution factor in limited time. We also demonstrate superior results for the proposed method compared with current state-of-the-art SR methods.
    Keywords: super-resolution; self-exemplar; perspective geometry; temporal dimension; vanishing point; GPU; multicore.
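
    The spatio-temporal exemplar idea in the abstract above can be illustrated with a toy internal-exemplar search: each low-resolution patch looks for its best match in the downscaled versions of several frames and reuses the co-located full-resolution patch as its prediction. The Python/NumPy sketch below omits the perspective (vanishing-point) transformations and the GPU/multi-core kernels that the paper relies on; patch sizes and data are illustrative.

# Toy sketch (not the paper's method) of temporal internal-exemplar search.
import numpy as np

def downscale(img, s=2):
    """Naive box-filter downscaling by an integer factor."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def best_match(patch, frame_small):
    """Exhaustive SSD search for `patch` inside a downscaled frame."""
    ph, pw = patch.shape
    best, pos = np.inf, (0, 0)
    for i in range(frame_small.shape[0] - ph + 1):
        for j in range(frame_small.shape[1] - pw + 1):
            d = np.sum((frame_small[i:i + ph, j:j + pw] - patch) ** 2)
            if d < best:
                best, pos = d, (i, j)
    return best, pos

def predict_hr_patch(patch, frames, scale=2):
    """Search all frames' downscaled versions; return the co-located
    full-resolution patch of the best match (the temporal exemplar)."""
    candidates = []
    for f in frames:
        d, (i, j) = best_match(patch, downscale(f, scale))
        candidates.append((d, f[i * scale:(i + patch.shape[0]) * scale,
                                j * scale:(j + patch.shape[1]) * scale]))
    return min(candidates, key=lambda c: c[0])[1]

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(3)]   # stand-in video frames
lr_patch = downscale(frames[1], 2)[4:9, 4:9]        # a 5x5 LR patch
print(predict_hr_patch(lr_patch, frames, 2).shape)  # -> (10, 10)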

  • Energy-efficient fuzzy-based approach for dynamic virtual machine consolidation   Order a copy of this article
    by Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, Emmanuel S. Pilli 
    Abstract: In a cloud environment, overload leads to performance degradation and Service-Level Agreement (SLA) violations, while underload results in inefficient use of resources and needless energy consumption. Dynamic Virtual Machine (VM) consolidation is considered an effective solution to both the overload and the underload problems. However, dynamic VM consolidation is not a trivial solution, as it can itself lead to violations of negotiated SLAs owing to the runtime overheads of VM migration. Further, dynamic VM consolidation approaches need to answer several questions: (i) when should a VM be migrated? (ii) which VM should be migrated? and (iii) where should the selected VM be placed? In this work, efforts are made to develop a comprehensive approach that provides better solutions to these problems. In the proposed approach, forecasting methods for host overload detection are explored; a fuzzy-logic-based VM selection approach that enhances the performance of the VM selection strategy is developed; and a VM placement algorithm based on destination CPU utilisation is also developed. The performance evaluation of the proposed approaches is carried out on the CloudSim toolkit using the PlanetLab data set. The simulation results show significant improvements in the number of VM migrations, energy consumption, and SLA violations.
    Keywords: cloud computing; virtual machines; dynamic virtual machine consolidation; exponential smoothing; fuzzy logic.
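
    The abstract above combines load forecasting for overload detection with a fuzzy VM-selection step. The Python sketch below shows a minimal version of that pipeline, assuming single-exponential smoothing and a toy two-rule fuzzy score; the paper's actual membership functions, rule base and thresholds are not reproduced here.

# Minimal sketch: forecast host load, then pick a VM to migrate with a toy fuzzy score.
def exp_smooth_forecast(history, alpha=0.5):
    """One-step-ahead CPU-utilisation forecast by exponential smoothing."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def fuzzy_migration_score(cpu_util, ram_mb):
    """Toy Mamdani-style score: prefer migrating VMs with low CPU demand and a
    small memory footprint (cheap to move). Triangular memberships assumed."""
    low_cpu = max(0.0, min(1.0, (0.5 - cpu_util) / 0.5))       # 1 at 0%, 0 at 50%+
    small_ram = max(0.0, min(1.0, (2048 - ram_mb) / 2048))     # 1 at 0 MB, 0 at 2 GB+
    return min(low_cpu, small_ram)                             # AND of the two rules

host_history = [0.62, 0.70, 0.78, 0.85, 0.91]          # recent CPU utilisation
if exp_smooth_forecast(host_history) > 0.8:            # predicted overload
    vms = [("vm1", 0.10, 512), ("vm2", 0.45, 4096), ("vm3", 0.05, 1024)]
    best = max(vms, key=lambda v: fuzzy_migration_score(v[1], v[2]))
    print("migrate", best[0])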

  • A distributed framework for cyber-physical cloud systems in collaborative engineering   Order a copy of this article
    by Stanislao Grazioso, Mateusz Gospodarczyk, Mario Selvaggio, Giuseppe Di Gironimo 
    Abstract: Distributed cyber-physical systems play a significant role in enhancing group decision-making processes, as in collaborative engineering. In this work, we develop a distributed framework to allow the use of collaborative approaches in group decision-making problems. We use the fuzzy analytic hierarchy process, a multiple criteria decision-making method, as the algorithm for the selection process. The architecture of the framework makes use of open-source utilities. The information components of the distributed framework act in response to the feedback provided by humans. Cloud infrastructures are used for data storage and remote computation. The motivation behind this work is to make possible the implementation of group decision-making in real scenarios. Two illustrative examples show the feasibility of the approach in different application fields. The main outcome is the achievement of a time reduction for the selection and evaluation process.
    Keywords: distributed systems; cyber-physical systems; web services; group decision making; fuzzy AHP; product design and development.
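
    The selection algorithm in the framework above is the fuzzy analytic hierarchy process. The Python sketch below shows only the underlying crisp AHP step, deriving a priority vector from a pairwise comparison matrix and checking its consistency; the fuzzy extension with triangular judgements, and the matrix values used here, are omitted or assumed for illustration.

# Crisp AHP priority vector and consistency check (illustrative values).
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector via the geometric-mean (row-wise) approximation."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

def consistency_ratio(pairwise, ri=0.58):
    """CR = CI / RI; RI = 0.58 is the standard random index for a 3x3 matrix."""
    w = ahp_priorities(pairwise)
    lam = float(np.mean((pairwise @ w) / w))
    n = pairwise.shape[0]
    return ((lam - n) / (n - 1)) / ri

# Pairwise comparison of three design alternatives on one criterion.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
print(ahp_priorities(A), consistency_ratio(A) < 0.1)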

Special Issue on: Resource Provisioning in Cloud Computing

  • A power saver scheduling algorithm using DVFS and DNS techniques in cloud computing datacentres   Order a copy of this article
    by Saleh Atiewi, Salman Yussof, Mohd Ezanee, Mutasem Zalloum 
    Abstract: Cloud computing is a fascinating and profitable area in modern distributed computing. Aside from providing millions of users with the means to use offered services through their own computers, terminals and mobile devices, cloud computing presents an environment with low cost, a simple user interface and low power consumption by employing server virtualisation in its offered services (e.g., infrastructure as a service). The pool of virtual machines in a cloud computing datacentre (DC) must be managed by an efficient task scheduling algorithm to achieve good resource usage and quality of service, thereby keeping energy consumption in the cloud computing environment low. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage and frequency scaling technique. The proposed scheduling algorithm can efficiently reduce the energy consumed in executing jobs by increasing resource usage. The GreenCloud simulator is used to evaluate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server usage, reduce energy consumption and reduce execution time.
    Keywords: DVFS; DNS; virtual machine; datacentre; cloud computing; power consumption.
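
    Dynamic voltage and frequency scaling saves energy because dynamic power grows roughly with the cube of frequency, so running a job at the lowest frequency that still meets its deadline reduces energy. The Python sketch below illustrates that trade-off; the frequency steps, power constant and job parameters are assumptions, and this is not the paper's scheduling algorithm.

# Illustrative DVFS frequency selection under a deadline constraint.
FREQ_STEPS = [1.0, 1.5, 2.0, 2.6]      # GHz levels supported by the CPU (assumed)

def dynamic_power(freq_ghz, k=10.0):
    """Simplified cubic dynamic-power model: P = k * f^3 (watts)."""
    return k * freq_ghz ** 3

def choose_frequency(cycles_gcycles, deadline_s):
    """Lowest frequency whose execution time (cycles / f) fits the deadline."""
    for f in FREQ_STEPS:
        if cycles_gcycles / f <= deadline_s:
            return f
    return FREQ_STEPS[-1]               # run flat out if the deadline is tight

job = {"cycles": 12.0, "deadline": 8.0}        # 12 Gcycles, 8 s deadline
f = choose_frequency(job["cycles"], job["deadline"])
energy = dynamic_power(f) * job["cycles"] / f  # energy = power * execution time
print(f, round(energy, 1), "J")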

  • Towards providing middleware-level proactive resource reorganisation for elastic HPC applications in the cloud   Order a copy of this article
    by Rodrigo Righi, Cristiano Costa, Vinicius Facco, Igor Fontana, Mauricio Pillon, Marco Zanatta 
    Abstract: Elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. For High Performance Computing (HPC) in particular, elasticity can provide better use of resources and also a reduction in application execution time. Proactive initiatives to combine elasticity and HPC are emerging, but each presents at least one limitation: dependence on prior user experience, long processing times, the need to fill in parameters manually, or a design tied to a specific infrastructure and workload setting. In this context, this article presents ProElastic, a lightweight model that uses proactive elasticity to drive resource reorganisation decisions for HPC applications. Using ARIMA-based time series and analysing the mean time to launch virtual machines, ProElastic anticipates underloaded and overloaded situations, triggering elasticity actions beforehand to address them. Our idea is to explore both performance and adaptivity at the middleware level in a way that is effortless from the user's perspective: the user does not need to set elasticity parameters or rewrite the HPC application. Based on ProElastic, we developed a prototype that was evaluated with a master-slave iterative application and compared against reactive elasticity and non-elastic approaches. The results showed performance gains and a competitive cost (application time multiplied by consumed resources) in favour of ProElastic compared with these two approaches.
    Keywords: cloud elasticity; proactive optimisation; performance; resource management; adaptivity.
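
    The proactive step in ProElastic is to forecast load far enough ahead to hide the time needed to boot a new virtual machine. The Python sketch below shows that idea using statsmodels' ARIMA; the ARIMA order, monitoring period, threshold and launch time are assumptions, not values from the paper.

# Minimal sketch of proactive, forecast-driven scale-out decisions.
from statsmodels.tsa.arima.model import ARIMA

cpu_load = [0.42, 0.45, 0.47, 0.52, 0.55, 0.58, 0.63, 0.66, 0.71, 0.74]  # cluster load
MONITOR_PERIOD_S = 30
VM_LAUNCH_TIME_S = 90                 # mean time to boot a new VM (assumed)
UPPER_THRESHOLD = 0.80

horizon = -(-VM_LAUNCH_TIME_S // MONITOR_PERIOD_S)      # steps needed to hide boot time
forecast = ARIMA(cpu_load, order=(1, 1, 1)).fit().forecast(steps=horizon)

if max(forecast) > UPPER_THRESHOLD:
    print("predicted overload in <= %ds: request a new VM now" % VM_LAUNCH_TIME_S)
else:
    print("no proactive action needed; next check in %ds" % MONITOR_PERIOD_S)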

  • Virtual machine placement in distributed cloud centres using bin packing algorithm   Order a copy of this article
    by Kumaraswamy S, Mydhili K. Nair 
    Abstract: A virtual machine is a logical framework for providing services on the cloud. Virtual machines form a logical partition of the physical infrastructure present in the cloud centre, and they are not only prone to cost escalation but also result in huge power consumption. Hence, the cloud centre needs to optimise cost and power consumption by moving these virtual machines from their current physical machines to other, more suitable physical machines. Until now, this problem has mostly been addressed by considering the virtual machines in a single-location cloud centre. However, current cloud centres span multiple locations, all of which are synchronised to provide cloud services. In this more complex setting, it is important to distinguish between virtual machines in the same location and those in different locations, and to provide suitable placement algorithms. In this work, the virtual machine placement problem is modelled as a 3-slot bin packing problem, shown to be NP-complete, and suitable approximation algorithms are proposed. A polynomial-time approximation scheme is also designed for the problem, which provides a means to control the quality of approximation. Empirical studies performed through simulation confirm the theoretical bounds obtained.
    Keywords: cloud computing; virtual machine; virtual machine placement; distributed cloud; bin packing.
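
    The placement problem above is a bin-packing variant, for which simple greedy heuristics give well-known approximation guarantees. The Python sketch below shows classic first-fit decreasing on a single resource dimension; the paper's 3-slot formulation, location awareness and polynomial-time approximation scheme are not reproduced here.

# First-fit decreasing: sort VMs by demand and place each on the first host that fits.
def first_fit_decreasing(vm_demands, host_capacity):
    """Return a list of hosts, each a list of the VM demands placed on it."""
    hosts = []           # each entry: [remaining_capacity, [placed demands]]
    for d in sorted(vm_demands, reverse=True):
        for h in hosts:
            if h[0] >= d:
                h[0] -= d
                h[1].append(d)
                break
        else:
            hosts.append([host_capacity - d, [d]])
    return [h[1] for h in hosts]

# Eight VMs (CPU cores requested) packed onto hosts with 8 cores each.
print(first_fit_decreasing([4, 3, 2, 2, 5, 1, 6, 3], host_capacity=8))

    First-fit decreasing is known to use at most roughly 11/9 of the optimal number of bins plus a small additive constant, which is why such heuristics are a common baseline for VM placement.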