Forthcoming and Online First Articles

International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except the ones stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

We also offer RSS feeds, which provide timely updates of tables of contents, newly published articles and calls for papers.

International Journal of Cloud Computing (13 papers in press)

Regular Issues

  • Hybrelastic: A Hybrid Elasticity Strategy with Dynamic Thresholds for Microservice-based Cloud Applications   Order a copy of this article
    by Jose Augusto Accorsi, Rodrigo Da Rosa Righi, Vinicius F. Rodrigues, Cristiano André Costa, Dhananjay Singh 
    Abstract: Microservices-based architectures divide an application's functionality into small services so that each can be scaled, managed, deployed, and updated individually. Microservices are increasingly used in application modelling, making applications well suited to resource elasticity. In the literature, solutions employ elasticity to improve application performance; however, most rely only on CPU utilisation metrics and purely reactive elasticity. In this context, this article proposes the hybrelastic model, which combines reactive and proactive elasticity with dynamically calculated thresholds for CPU and network metrics. The article presents three contributions in the context of microservices: 1) combination of two elasticity policies; 2) use of more than one elasticity evaluation metric; 3) use of dynamic thresholds to trigger elasticity. Experiments demonstrate 10.31% higher performance and 20.28% lower cost compared with executions without hybrelastic.
    Keywords: elasticity; reactive elasticity; proactive elasticity; scalability; dynamic thresholds; microservices.
    DOI: 10.1504/IJCC.2024.10048365
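The dynamic-threshold idea described in the abstract can be illustrated with a small sketch. This is not the hybrelastic model itself: the specific rule used here (thresholds one standard deviation around the recent mean, scale out on either metric, scale in only on both) is a hypothetical stand-in.

```python
from statistics import mean, stdev

def dynamic_thresholds(history, k=1.0):
    """Illustrative dynamic thresholds: mean +/- k standard deviations
    of the recent observations (a hypothetical rule, not the paper's)."""
    m, s = mean(history), stdev(history)
    return max(0.0, m - k * s), min(1.0, m + k * s)

def scale_decision(cpu_history, net_history, cpu_now, net_now):
    """Scale out if EITHER metric exceeds its upper threshold;
    scale in only if BOTH fall below their lower thresholds."""
    cpu_lo, cpu_hi = dynamic_thresholds(cpu_history)
    net_lo, net_hi = dynamic_thresholds(net_history)
    if cpu_now > cpu_hi or net_now > net_hi:
        return "scale-out"
    if cpu_now < cpu_lo and net_now < net_lo:
        return "scale-in"
    return "hold"
```

Recomputing the thresholds from a sliding window is what makes them "dynamic": a workload with naturally high but stable CPU usage will not trigger spurious scale-outs the way a fixed 70% threshold would.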
     
  • Integration of Cloud Based Scheme with Industrial Wireless Sensor Network for Data Publishing in Privacy of Point Source   Order a copy of this article
    by Ravindhar N.V., Sasikumar S., Bharathiraja N. 
    Abstract: Wireless sensor networks (WSNs) are normally deployed in open regions with no physical protection, and the source location can reveal significant information about monitored targets. In this paper, a cloud-based scheme using data publishing with point-source privacy is proposed to address the problem of source-location privacy. A cloud-shaped fake hotspot is created to inject fake packets into the WSN, confusing the adversary and providing a wide privacy region. Each real packet is routed along a path that is very hard for a hotspot-locating adversary to trace directly. Simulation results show that the scheme can prevent adversarial capture while maintaining a high degree of privacy protection. The scheme's energy consumption has only a limited effect on network lifetime compared with a plain cloud-based scheme and an all-direction random routing algorithm.
    Keywords: wireless sensor networks; WSNs; privacy of point source; security; technology; research; cloud computing.
    DOI: 10.1504/IJCC.2024.10051526
     
  • A Comparative Study of Collision Avoidance Medium Access Control Protocols in Internet-of-Things   Order a copy of this article
    by Sachin Kumar, Pawan Kumar Verma 
    Abstract: In wireless communications, different collision avoidance medium access control (MAC) protocols are available to avoid contention, but none of them are accepted as standard protocols to fulfil the requirements of the internet of things (IoT). So there is a need for well-defined MAC protocols to optimise the channel access mechanism. Therefore, this paper presents the fundamentals of IoT, types of collisions, features of IoT-based communication technologies, and a comparative study of collision avoidance MAC protocols in IoT. This paper first outlines the system model of IoT networks based on a comprehensive study of the reported literature. Following that, types of collisions are discussed. Further, we have provided a comprehensive study of ALOHA, CSMA, CSMA/CA, and hybrid MAC protocols, issues in MAC protocols, and their state-of-the-art solutions to avoid collisions and to provide higher throughput. Finally, future research direction for IoT has been highlighted to underline potential real-time IoT applications.
    Keywords: internet of things; IoT; machine to machine; M2M; MAC protocols; ALOHA; CSMA/CA; quality-of-service; QoS.
    DOI: 10.1504/IJCC.2024.10051835
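As background to the collision-avoidance protocols surveyed above, the contention core of CSMA/CA can be sketched in a few lines. The slot time and contention-window bounds below are typical 802.11-style values, used here purely for illustration.

```python
import random

def csma_ca_backoff(attempt, slot_time_us=9, cw_min=16, cw_max=1024, rng=random):
    """Binary exponential backoff as used in CSMA/CA: the contention
    window doubles on each failed transmission attempt (capped at
    cw_max), and the station waits a uniformly random number of slots."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    slots = rng.randrange(cw)
    return slots * slot_time_us  # backoff duration in microseconds
```

Randomising the wait within a growing window is what de-correlates contending stations: after a collision each retries at a different time with high probability, at the cost of increasing average delay under heavy load.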
     
  • Load Balancing in Cloud Computing using Cuckoo Search Algorithm   Order a copy of this article
    by Brototi Mondal 
    Abstract: A competent cloud load balancer should adapt its approach to the dynamic environment and the different types of tasks. Load balancing (LB) in cloud computing can be viewed as an optimisation problem. Because load balancing in the cloud is NP-complete, methods that search for exact optimal solutions cannot find the best solution in a reasonable amount of time; therefore, evolutionary and meta-heuristic methods should be applied. In this paper, a novel load balancing method based on the cuckoo search (CS) algorithm is proposed. The method distributes the load among the available virtual machines (VMs) while maintaining a low overall response time (RT). Comparative simulation results reveal that the suggested approach outperforms existing strategies such as round robin (RR), stochastic hill climbing (SHC), and genetic algorithm (GA).
    Keywords: cloud computing; load balancing; cuckoo search algorithm; response time.
    DOI: 10.1504/IJCC.2024.10057822
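A minimal sketch of the idea, not the paper's algorithm: each nest holds a task-to-VM assignment, fitness is the makespan of the busiest VM (a stand-in for response time), the Lévy flight is approximated by randomly re-assigning a few tasks, and a fraction of the worst nests is abandoned each iteration.

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Fitness to minimise: finish time of the busiest VM."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def cuckoo_search_lb(task_len, vm_speed, nests=15, iters=200, pa=0.25, seed=1):
    """Toy cuckoo search for a task-to-VM mapping (illustrative only)."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    fit = lambda a: makespan(a, task_len, vm_speed)
    pop = [[rng.randrange(n_vms) for _ in range(n_tasks)] for _ in range(nests)]
    best = min(pop, key=fit)
    for _ in range(iters):
        # new cuckoo: perturb the current best (crude Levy-step stand-in)
        cuckoo = best[:]
        for _ in range(max(1, n_tasks // 5)):
            cuckoo[rng.randrange(n_tasks)] = rng.randrange(n_vms)
        j = rng.randrange(nests)
        if fit(cuckoo) < fit(pop[j]):
            pop[j] = cuckoo
        # abandon the worst pa fraction of nests
        pop.sort(key=fit)
        for k in range(int((1 - pa) * nests), nests):
            pop[k] = [rng.randrange(n_vms) for _ in range(n_tasks)]
        if fit(pop[0]) < fit(best):
            best = pop[0][:]
    return best
```

On four equal tasks and two equal VMs this reliably finds the balanced 2-2 split; real CS implementations draw the step length from a Lévy distribution instead of the fixed perturbation used here.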
     
  • Genetic Algorithm with Reinforcement Learning for Optimal Allocation of Resources in Task Scheduling   Order a copy of this article
    by Deepak BB 
    Abstract: Task scheduling in cloud computing is one of the primary research problems in computer science. Finding an optimal solution for task scheduling not only enhances system performance but also reduces the total processing cost. A number of task scheduling algorithms have been developed by previous researchers, but none has been globally accepted because each has its own pros and cons. The current study attempts to solve task scheduling for optimal utilisation of virtual machines to execute the assigned tasks with the help of a genetic algorithm (GA). The proposed strategy focuses on minimising task execution time, using it as the fitness function in the GA implementation. Reinforcement learning is integrated with the proposed algorithm to enhance its performance in finding the optimal resource allocation. The methodology is validated in several simulated cloud environments to check its feasibility.
    Keywords: genetic algorithm; artificial intelligence; task scheduling; cloud computing; virtual machines.
    DOI: 10.1504/IJCC.2024.10058082
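To illustrate the approach described above (not the authors' implementation), a toy genetic algorithm with task execution time as the fitness function might look like this; the reinforcement-learning integration is omitted, and the operators chosen (tournament selection, one-point crossover, random-reset mutation) are common defaults, not the paper's.

```python
import random

def exec_time(assignment, task_len, vm_speed):
    """Fitness to minimise: completion time of the busiest VM."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assignment):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop_size=20, gens=100, pm=0.1, seed=0):
    """Toy GA for task-to-VM scheduling (illustrative sketch only)."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    fit = lambda a: exec_time(a, task_len, vm_speed)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fit)]                  # elitism: keep the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fit)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=fit)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < pm:                  # random-reset mutation
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fit)
```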
     
  • An outline of swarm-based metaheuristic approaches for task scheduling in a cloud computing environment   Order a copy of this article
    by Surinder Kaur, Jaspreet Singh, Vishal Bharti 
    Abstract: Cloud computing, being a relatively new arena of research, has attracted attention from the research and industrial communities. The most challenging issues of cloud services include task scheduling, execution, storage, energy management, and security breaches. Task scheduling is the process of allocating user requests, in the form of tasks, in a certain order so as to maximise the usage of resources. Through the cloud platform, services are supplied over the internet, where customers make their requests online but face many issues during task allocation and completion, such as increased makespan, high energy consumption, and migration problems. In this survey, an outline of swarm-based metaheuristic approaches for task scheduling in multi-cloud computing is presented, since traditional techniques are unable to solve these problems. The survey provides a systematic examination of these methods, including a new taxonomy that highlights both their advantages and disadvantages.
    Keywords: cloud computing; energy efficiency; task scheduling; virtual machine; meta-heuristic algorithms.
    DOI: 10.1504/IJCC.2024.10058317
     
  • Improved task scheduling strategy for balancing resource utilization and service quality in mobile edge computing environment   Order a copy of this article
    by Michael Pendo John Mahenge  
    Abstract: The rapid growth of resource-hungry and time-critical applications drives up the resources needed for communication, processing, and energy. Mobile edge computing (MEC), which offers cloud-computing services proximate to users at the edge of the mobile network, is considered a key technology for facilitating task scheduling closer to data sources. The objective of this paper is to propose an improved task scheduling strategy that selects the best MEC server to process each task, reducing system energy consumption and delay. We base the strategy on the non-dominated sorting genetic algorithm II (NSGA-II). To improve the performance of NSGA-II, we propose a hierarchical search policy (NSGA-H) that reduces the number of redundant comparisons and thus improves time complexity. Simulation results illustrate that the proposed strategy improves average service delay, service time, energy consumption, and system utility compared with baseline approaches.
    Keywords: mobile edge computing; MEC; task scheduling; quality of experience; QoE; resource-intensive tasks; non-dominated sorting genetic algorithm II; NSGA II.
    DOI: 10.1504/IJCC.2024.10058683
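The core of NSGA-II is non-dominated sorting over the objectives (here, for example, delay and energy, both minimised). A minimal sketch of Pareto dominance and the first front, not the paper's NSGA-H variant:

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_nondominated_front(points):
    """Solutions not dominated by any other solution: the first front
    that NSGA-II's sorting step extracts before ranking the rest."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

The quadratic all-pairs comparison here is exactly the redundancy a hierarchical search policy would aim to reduce.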
     
  • Cloud based scalable resiliency pattern using PRISM   Order a copy of this article
    by Punithavathy Ellappan, Priya N 
    Abstract: Applications in distributed systems are enhanced by microservice architecture, which enriches the cloud's distinctive features such as availability and scalability. The distributed nature brings a broad set of failure points, so resilience is the predominant factor in surviving these failures. Resilience in a microservice-based application is largely provided by the circuit breaker pattern, which monitors the failure rate and safeguards against cascading failures. This paper analyses the behaviour of a microservice-based application under transient failure. Execution time during failure is 23% faster when working with internal circuit breakers. Model-based verification using continuous-time Markov chains (CTMC) was performed to analyse the steady-state probability of completed requests for internal versus proxy circuit breakers. The resulting probability values for the internal circuit breaker assure 99% availability of the service even at times of failure.
    Keywords: circuit breaker; resiliency; microservices; cascading failures; continuous-time Markov chain; CTMC; PRISM.
    DOI: 10.1504/IJCC.2024.10058869
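The circuit breaker pattern the paper analyses can be sketched minimally as follows. The failure threshold and timeout values are illustrative; production libraries such as Resilience4j add sliding-window failure rates, half-open trial counting, and metrics on top of this state machine.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: trips open after max_failures
    consecutive failures, fails fast while open, and lets one trial
    call through (half-open) once reset_timeout seconds have passed."""
    def __init__(self, max_failures=3, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # timeout elapsed: half-open, allow one trial call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0     # any success closes the circuit again
        self.opened_at = None
        return result

# demo with a fake clock so the timeout can be advanced deterministically
t = [0.0]
cb = CircuitBreaker(max_failures=2, reset_timeout=10.0, clock=lambda: t[0])
```

Failing fast while open is what stops a slow downstream service from tying up caller threads and cascading the failure upstream.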
     
  • CRAQL: A Novel Clustering-based Resource Allocation using Q-Learning in Fog Environment   Order a copy of this article
    by Chanchal Ahlawat, Rajalakshmi Krishnamurthi 
    Abstract: Fog computing is an emerging paradigm that provides services near the end user. The tremendous increase in IoT devices and big data leads to complexity in fog resource allocation. Inefficient resource allocation can cause resource starvation and an inability to complete assigned tasks within a specific time. Hence, to enhance the efficiency of fog resources, proper resource allocation is critical. This work addresses the resource allocation problem with a novel clustering-based resource allocation using Q-learning (CRAQL) model. The problem is defined as a decision-making problem and formulated as a Markov decision process (MDP). To find the optimal resource, an enhanced optimal resource allocation (EORA) algorithm is proposed, and a detailed study analyses the impact of various performance parameters. Simulation results compare EORA with a conventional brute-force method while varying performance parameters such as the learning rate and the number of trials. The experimental results exhibit optimal solutions with significant improvement in learning rate at an average probability of 0.5 within limited epochs.
    Keywords: internet of things; IoT; fog computing; reinforcement learning; resource allocation; Q-learning; Markov decision process; MDP.
    DOI: 10.1504/IJCC.2024.10059667
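A toy illustration of the Q-learning machinery behind such models (not CRAQL itself): a single-state MDP in which each action picks a fog node and the reward is the negative observed latency. The single-state framing and the latency-only reward are simplifying assumptions for the sketch.

```python
import random

def q_learning_allocate(latency, episodes=500, alpha=0.1, gamma=0.9,
                        epsilon=0.2, seed=0):
    """Tabular Q-learning over node choices (illustrative only)."""
    rng = random.Random(seed)
    q = [0.0] * len(latency)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(len(latency))
        else:
            a = max(range(len(latency)), key=q.__getitem__)
        reward = -latency[a]
        # single-state Q-update: Q(a) += alpha * (r + gamma*max(Q) - Q(a))
        q[a] += alpha * (reward + gamma * max(q) - q[a])
    return q

q = q_learning_allocate([5.0, 1.0, 3.0])  # three candidate nodes; node 1 is fastest
```

The learned Q-values rank the nodes, so the greedy policy ends up routing tasks to the lowest-latency node while epsilon keeps probing the others in case conditions change.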
     
  • An Intelligent Blockchain based Cryptographic Data Security (IBCDS) Model for an Efficient Data Sharing in Cloud   Order a copy of this article
    by Naga Ramya Ponnada, Ravi Prakash Reddy, Supreethi KP 
    Abstract: Secure data sharing is among the most challenging and essential problems to be addressed in cloud systems. In traditional works, various blockchain and cryptographic approaches are deployed to enable secure data storage and retrieval on cloud platforms. However, conventional frameworks require third-party entities for user authentication, data verification, and identity management. In our proposed work, an intelligent blockchain-based cryptographic data security (IBCDS) scheme is developed in which no third-party auditor is required for authentication and validation, enabling secure data sharing over cloud systems with minimal overhead and computational complexity. In IBCDS, the blockchain acts as a user management system that stores information about each cloud user in the form of transactions, assuring the reliability, integrity, and tamper-resistance of each transaction. In the analysis, the IBCDS mechanism is evaluated and compared on different parameters: security performance is improved to 99%, time complexity is reduced to 98%, and overall throughput is maximised to 99%.
    Keywords: cloud systems; data security; blockchain; cryptography; transactions; encryption; decryption.
    DOI: 10.1504/IJCC.2024.10059968
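The tamper-resistance the abstract relies on comes from hash-linking each transaction to its predecessor. A minimal sketch of that property, illustrative only and not the IBCDS scheme:

```python
import hashlib
import json

def tx_hash(prev_hash, payload):
    """Hash-link a transaction record to its predecessor."""
    record = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def verify_chain(chain):
    """Re-derive every link; any tampered payload breaks verification
    from that point onward."""
    prev = "0" * 64  # genesis sentinel
    for payload, h in chain:
        if tx_hash(prev, payload) != h:
            return False
        prev = h
    return True

# build a two-transaction chain of hypothetical user-management records
h1 = tx_hash("0" * 64, {"user": "alice", "op": "store"})
h2 = tx_hash(h1, {"user": "bob", "op": "read"})
chain = [({"user": "alice", "op": "store"}, h1), ({"user": "bob", "op": "read"}, h2)]
```

Because each hash covers the previous hash, altering any stored transaction invalidates every later link, which is why no separate third-party auditor is needed to detect tampering.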
     
  • A New Throttled Adapted Load Balancing (TALB) Strategy for Dynamic VM Allocations in Cloud Datacenters   Order a copy of this article
    by S. Shanmugapriya, N. Priya 
    Abstract: Infrastructure as a service (IaaS) is a crucial cloud computing service that delivers on-demand virtual machines (VMs). As the cloud grows continuously and millions of users request the IaaS cloud simultaneously to accomplish their tasks, scheduling the VM resources is one of the major challenges. Improper allocation of VMs leads to server overloading, low resource utilisation, and increased response times. These issues can be resolved with a load balancing (LB) strategy that selects a suitable VM based on changing needs. In this study, we enhance the existing throttled load balancing (TLB) strategy and propose a new throttled adapted load balancing (TALB) approach, with the main goals of decreasing the VM searching time and response time, datacentre processing time, cost, and workload imbalance among VMs. The TALB algorithm is evaluated using the CloudAnalyst simulation tool against the existing RR, ESCE, and TLB algorithms.
    Keywords: cloud computing; infrastructure as a service; virtual machine; load balancing; throttled load balancing; CloudAnalyst; throttled adapted load balancing.
    DOI: 10.1504/IJCC.2024.10060349
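The baseline throttled algorithm that TALB builds on keeps an availability table and returns the first free VM. A minimal sketch follows; the index-remembering `start` parameter is a hypothetical illustration of how searching time might be cut, not the paper's TALB mechanism.

```python
def throttled_allocate(vm_busy, start=0):
    """Throttled LB sketch: scan the availability table from `start`
    and claim the first free VM, or return -1 if all are busy (the
    request would then wait in a queue)."""
    n = len(vm_busy)
    for i in range(n):
        vm = (start + i) % n
        if not vm_busy[vm]:
            vm_busy[vm] = True  # mark allocated until the task completes
            return vm
    return -1
```

Because the plain algorithm always scans from index 0, its searching time grows with the number of VMs; starting the scan where the last allocation left off is one simple way to spread allocations and shorten the search.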
     
  • Load Balancing Using Improved Weighted Round Robin Algorithm in Cloud Computing Environment   Order a copy of this article
    by Sree Priya S, T. Rajendran 
    Abstract: Load balancing strategies maximise resource use, system efficiency, reliability, availability, network traffic management, and responsiveness to changing circumstances. This paper presents the improved weighted round robin (IWRR) algorithm for cloud computing load balancing. Cloud infrastructure relies on load balancing to optimise resource use and server performance. IWRR improves on weighted round robin by dynamically adjusting server weights based on real-time performance parameters such as CPU utilisation and request latency. Methods such as round robin, IP hash, and weighted round robin help distribute internet traffic across servers: IP hash keeps requests from the same source on the same server, while round robin rotates requests when server capacity allows, and these methods can be assessed by response time and capacity. Weighted round robin (WRR) assigns requests according to server capabilities to reduce response time and increase throughput; automatically directing more requests to the more capable servers improves system speed. Such methods eliminate resource waste, enable scalability based on consumer demand, reduce disruptions, and improve the client experience by evenly distributing jobs over multiple resources. Load balancing lets online service providers use their physical capacity to deliver stable, adaptable, high-quality services.
    Keywords: load balancing; throughput; response time; round robin; IP hash; weighted round robin; WRR; resource allocation; improved intelligent infrastructures.
    DOI: 10.1504/IJCC.2024.10062149
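Classic weighted round robin, the baseline that IWRR adapts, can be sketched as follows; IWRR would additionally recompute the weights from live CPU and latency measurements rather than keep them static.

```python
def weighted_round_robin(servers, weights, n_requests):
    """Classic WRR sketch: in each cycle, every server receives up to
    `weight` consecutive requests before the pointer moves on."""
    schedule = []
    while len(schedule) < n_requests:
        for server, weight in zip(servers, weights):
            for _ in range(weight):
                if len(schedule) == n_requests:
                    return schedule
                schedule.append(server)
    return schedule

# server "a" (weight 3) gets three requests for every one sent to "b"
schedule = weighted_round_robin(["a", "b"], [3, 1], 8)
# -> ['a', 'a', 'a', 'b', 'a', 'a', 'a', 'b']
```

With static weights, a server that slows down keeps receiving its full share; recomputing the weights from observed utilisation is what turns this into the "improved" variant.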
     
  • Virtual Machine Workload Prediction using Deep Learning   Order a copy of this article
    by Abhilash C. S, Chaithra Usha, Veena Garag, Priyanka H 
    Abstract: This paper presents a novel approach to optimise resource allocation in virtualised systems, aiming to maximise performance and minimise operational expenses. Leveraging deep learning models, specifically long short-term memory (LSTM) and bidirectional gated recurrent unit (bi-GRU) networks, the method focuses on forecasting CPU load patterns in virtual machines (VMs). Accurate predictions are crucial for proactive resource management in dynamic cloud-based infrastructures. LSTM and bi-GRU excel at time series forecasting due to their ability to detect temporal connections in sequential data. Using pre-processed historical CPU load data, the models undergo training with hyperparameter adjustments to enhance performance. Experimental results demonstrate that the proposed models outperform others, achieving lower average root mean square error (RMSE) values (0.05636) and mean absolute error (MAE) values (0.03721). Comparative analysis with LSTM, GRU, bi-LSTM, bi-GRU, LSTM-GRU, and bi-LSTM-GRU confirms the high predictive capabilities of LSTM and bi-GRU, with the bidirectional architecture of bi-GRU enhancing accuracy by capturing connections between previous and upcoming time steps.
    Keywords: virtual machines; VMs; long-short-term memory; LSTM; bi-GRU; CPU load prediction; cloud computing.
    DOI: 10.1504/IJCC.2024.10062593
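Independent of the network architecture, the preprocessing and evaluation steps mentioned in the abstract, sliding-window supervision and RMSE/MAE scoring, can be sketched in plain Python; the lookback length and sample series here are arbitrary illustrations.

```python
import math

def make_windows(series, lookback):
    """Turn a CPU-load series into supervised (window, next value) pairs,
    the standard preprocessing step before training an LSTM/GRU model."""
    return [(series[i:i + lookback], series[i + lookback])
            for i in range(len(series) - lookback)]

def rmse(actual, predicted):
    """Root mean square error over paired observations."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error over paired observations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

cpu = [0.2, 0.4, 0.5, 0.3, 0.6, 0.7]   # toy normalised CPU-load trace
pairs = make_windows(cpu, lookback=3)
# pairs[0] -> ([0.2, 0.4, 0.5], 0.3): three past readings predict the next
```

Each window becomes one training example for the recurrent model; RMSE penalises large prediction spikes more heavily than MAE, which is why the abstract reports both.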