Forthcoming and Online First Articles

International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes, are not yet published and may not appear here in their final order of publication until they are assigned to issues. Therefore, the content conforms to our standards but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase - click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licenses.

International Journal of Cloud Computing (13 papers in press)

Regular Issues

  • Load Balancing in Cloud Computing using Cuckoo Search Algorithm   Order a copy of this article
    by Brototi Mondal 
    Abstract: A competent cloud load balancer should adapt its strategy to the dynamic environment and to the differing types of tasks. Load balancing (LB) in cloud computing can be viewed as an optimisation problem. Because load balancing in the cloud is NP-complete, gradient-based methods cannot find optimal solutions in a reasonable amount of time. Therefore, evolutionary and meta-heuristic methods should be applied to tackle the load balancing issue. In this paper, a novel load balancing method based on the cuckoo search (CS) algorithm is proposed. This method successfully distributes the load among the available virtual machines (VMs) while maintaining a low overall response time (RT). Comparative simulation results reveal that the suggested approach outperforms existing strategies such as round robin (RR), stochastic hill climbing (SHC), and genetic algorithm (GA).
    Keywords: cloud computing; load balancing; cuckoo search algorithm; response time.
    DOI: 10.1504/IJCC.2024.10057822
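The abstract above treats VM load balancing as a discrete optimisation problem solved by cuckoo search. As a rough illustration of that idea (not the authors' implementation; all names, parameters, and the makespan fitness are illustrative assumptions), the following sketch searches task-to-VM assignments, using random perturbation of the best nest as a discrete stand-in for Lévy flights:

```python
import random

def makespan(assignment, task_sizes, vm_speeds):
    """Completion time of the busiest VM under a task-to-VM assignment."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_sizes[task] / vm_speeds[vm]
    return max(loads)

def cuckoo_search_lb(task_sizes, vm_speeds, nests=15, iters=200, pa=0.25, seed=1):
    """Cuckoo-search-style search over task-to-VM assignments (minimising makespan)."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_sizes), len(vm_speeds)
    fit = lambda a: makespan(a, task_sizes, vm_speeds)
    # Each nest holds a candidate assignment of every task to a VM.
    pop = [[rng.randrange(n_vms) for _ in range(n_tasks)] for _ in range(nests)]
    best = min(pop, key=fit)[:]
    for _ in range(iters):
        # New cuckoo: perturb the best nest (a discrete stand-in for a Levy flight).
        cuckoo = best[:]
        for i in rng.sample(range(n_tasks), max(1, n_tasks // 5)):
            cuckoo[i] = rng.randrange(n_vms)
        # The cuckoo replaces a randomly chosen nest if it is fitter.
        j = rng.randrange(nests)
        if fit(cuckoo) < fit(pop[j]):
            pop[j] = cuckoo
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        pop.sort(key=fit)
        for k in range(int(nests * (1 - pa)), nests):
            pop[k] = [rng.randrange(n_vms) for _ in range(n_tasks)]
        if fit(pop[0]) < fit(best):
            best = pop[0][:]
    return best
```

A full implementation would draw step lengths from a Lévy distribution and, per the abstract, use response time rather than makespan as the fitness.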
  • Genetic Algorithm with Reinforcement Learning for Optimal Allocation of Resources in Task Scheduling   Order a copy of this article
    by Deepak BB 
    Abstract: Task scheduling in cloud computing is one of the primary research problems in computer science. Finding an optimal solution for task scheduling not only enhances system performance but also reduces the total processing cost. A number of task scheduling algorithms have been developed by previous researchers, but none has been universally adopted because each has its own pros and cons. The current study attempts to solve task scheduling for optimal utilisation of virtual machines in executing the assigned tasks with the help of a genetic algorithm (GA). The proposed strategy focuses primarily on minimising task execution time, treating it as the fitness function in the GA implementation. Reinforcement learning is integrated with the proposed algorithm to enhance its performance in finding the optimal resource allocation. Finally, the methodology is validated with simulated analysis in several cloud environments to check its feasibility.
    Keywords: genetic algorithm; artificial intelligence; task scheduling; cloud computing; virtual machines.
    DOI: 10.1504/IJCC.2024.10058082
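As a hedged illustration of the GA component described above (the reinforcement learning integration is omitted, and every operator and parameter here is an assumption rather than the paper's method), a genetic algorithm over task-to-VM assignments with execution time as the fitness function might look like:

```python
import random

def ga_schedule(task_sizes, vm_speeds, pop_size=30, gens=150, pm=0.1, seed=7):
    """GA over task-to-VM assignments; the fitness (minimised) is the
    execution time of the busiest VM, i.e. a makespan objective."""
    rng = random.Random(seed)
    n, m = len(task_sizes), len(vm_speeds)

    def fitness(ind):
        loads = [0.0] * m
        for t, vm in enumerate(ind):
            loads[vm] += task_sizes[t] / vm_speeds[vm]
        return max(loads)

    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)               # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                      # per-gene mutation
                if rng.random() < pm:
                    child[i] = rng.randrange(m)
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```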
  • Improved task scheduling strategy for balancing resource utilization and service quality in mobile edge computing environment   Order a copy of this article
    by Michael Pendo John Mahenge  
    Abstract: The rapid growth of resource-hungry and time-critical applications has increased the resources needed for communication, processing, and energy consumption. Mobile edge computing (MEC), which offers cloud-computing services proximate to users at the edge of the mobile network, is considered the key technology for facilitating task scheduling closer to data sources. The objective of this paper is to propose an improved task scheduling strategy that selects the best MEC server to process each task, reducing system energy consumption and delay. We therefore propose an improved task scheduling strategy based on the non-dominated sorting genetic algorithm II (NSGA-II). To improve the performance of NSGA-II, we propose a hierarchical search policy (NSGA-H) that reduces the number of redundant comparisons and thus improves time complexity. Simulation results illustrate that the proposed strategy improves average service delay, service time, energy consumption, and system utility compared with baseline approaches.
    Keywords: mobile edge computing; MEC; task scheduling; quality of experience; QoE; resource-intensive tasks; non-dominated sorting genetic algorithm II; NSGA II.
    DOI: 10.1504/IJCC.2024.10058683
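NSGA-II, named above, ranks candidate schedules into Pareto fronts before selection. A minimal sketch of that ranking step, assuming two minimised objectives such as delay and energy (illustrative only; the paper's NSGA-H refines this with a hierarchical search policy not shown here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Rank objective vectors into successive Pareto fronts, the core
    ranking step of NSGA-II (front 0 = non-dominated solutions)."""
    fronts = []
    remaining = list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```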
  • Cloud based scalable resiliency pattern using PRISM   Order a copy of this article
    by Punithavathy Ellappan, Priya N 
    Abstract: Applications in distributed systems are enhanced by microservice architecture, which enriches the cloud's characteristic features of availability and scalability. The distributed nature introduces a broad set of failure points, so resilience is the predominant factor in surviving these failures. The resilience of a microservice-based application is largely provided by the circuit breaker pattern, which monitors the failure rate and safeguards against cascading failures. This paper analyses the behaviour of a microservice-based application under transient failure. The results show that execution time during failure is 23% faster when working with internal circuit breakers. Model-based verification using a continuous-time Markov chain (CTMC) was performed to analyse the steady-state probability of completed requests for internal circuit breakers versus proxy circuit breakers. The probability values generated for the internal circuit breaker assure 99% availability of the service even at times of failure.
    Keywords: circuit breaker; resiliency; microservices; cascading failures; continuous-time Markov chain; CTMC; PRISM.
    DOI: 10.1504/IJCC.2024.10058869
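For readers unfamiliar with the pattern under study, a minimal circuit breaker can be sketched as below. This is a generic illustration, not the paper's internal or proxy variant; the thresholds, state names, and API are assumptions:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: CLOSED -> OPEN after max_failures consecutive
    failures; OPEN -> HALF_OPEN after reset_timeout; one probe call then
    decides whether to close again or re-open."""
    def __init__(self, max_failures=3, reset_timeout=1.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"        # allow a single probe request
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.max_failures:
                self.state = "OPEN"             # trip: shed load, avoid cascades
                self.opened_at = self.clock()
            raise
        self.failures = 0
        self.state = "CLOSED"
        return result
```

Failing fast while open is what prevents a slow downstream service from exhausting upstream threads, the cascading-failure scenario the abstract refers to.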
  • CRAQL: A Novel Clustering-Based Resource Allocation using Q-Learning in a Fog Environment   Order a copy of this article
    by Chanchal Ahlawat, Rajalakshmi Krishnamurthi 
    Abstract: Fog computing is an emerging paradigm that provides services near the end user. The tremendous increase in IoT devices and big data leads to complexity in fog resource allocation. Inefficient resource allocation can lead to resource starvation and to tasks failing to complete within a specific time. Hence, to enhance the efficiency of fog resources, it is critical to perform proper resource allocation. This work aims to solve the resource allocation problem with a novel clustering-based resource allocation using Q-learning (CRAQL) model. For this purpose, the problem is defined as a decision-making problem and formulated as a Markov decision process (MDP). Next, to find the optimal resource, an enhanced optimal resource allocation (EORA) algorithm is proposed, and a detailed study is performed to analyse the impact of various performance parameters. Simulation results compare EORA with the conventional brute-force method while varying performance parameters such as learning rate and number of trials. The experimental results exhibit optimal solutions with significant improvement in learning rate at an average probability of 0.5 within limited epochs.
    Keywords: internet of things; IoT; fog computing; reinforcement learning; resource allocation; Q-learning; Markov decision process; MDP.
    DOI: 10.1504/IJCC.2024.10059667
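The abstract formulates allocation as an MDP solved by Q-learning. A minimal tabular Q-learning sketch on a toy two-node fog environment (the environment, rewards, and hyperparameters are illustrative assumptions, not the CRAQL/EORA design):

```python
import random

def q_learning(n_states, n_actions, step, episodes=500, horizon=20,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over an MDP exposed as step(s, a) -> (reward, next_state)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            r, s2 = step(s, a)
            # Bellman update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy fog environment: the state is the index of the currently overloaded
# node; allocating to the other node earns +1, to the overloaded one -1.
env_rng = random.Random(42)
def step(s, a):
    reward = 1.0 if a != s else -1.0
    return reward, env_rng.randrange(2)

Q = q_learning(n_states=2, n_actions=2, step=step)
```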
  • An Intelligent Blockchain based Cryptographic Data Security (IBCDS) Model for an Efficient Data Sharing in Cloud   Order a copy of this article
    by Ponnada Naga Ramya, Ravi Prakash Reddy, Supreethi KP 
    Abstract: Secure data sharing is the most challenging and essential problem to be addressed in cloud systems. In traditional works, various blockchain and cryptographic approaches are deployed to enable secure data storage and retrieval on cloud platforms. However, these conventional frameworks require third-party entities for user authentication, data verification, and identity management. In our proposed work, an intelligent blockchain-based cryptographic data security (IBCDS) scheme is developed in which no third-party auditor is required for the authentication and validation of secure data sharing over cloud systems, achieved with minimal overhead and computational complexity. In IBCDS, the blockchain acts as a user management system that stores the information of each cloud user in the form of transactions, assuring the reliability, integrity, and tamper-resistance of each transaction. In the analysis, the IBCDS mechanism is evaluated and compared using different parameters: security performance is improved to 99%, time complexity is reduced to 98%, and overall throughput is maximised to 99%.
    Keywords: cloud systems; data security; blockchain; cryptography; transactions; encryption; decryption.
    DOI: 10.1504/IJCC.2024.10059968
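The tamper-resistance the abstract relies on comes from chaining transaction blocks by hash, so that altering any stored user record invalidates every later link. A generic sketch of that property (not the IBCDS scheme itself; the field names and payloads are assumptions):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block binds its payload to the previous block's hash, so altering
    any earlier transaction breaks every subsequent hash."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, blk in enumerate(chain):
        body = json.dumps({"data": blk["data"], "prev": blk["prev"]},
                          sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```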
  • A New Throttled Adapted Load Balancing (TALB) Strategy for Dynamic VM Allocations in Cloud Datacenters   Order a copy of this article
    by S. Shanmugapriya, N. Priya
    Abstract: Infrastructure as a service (IaaS) is a crucial service offered by cloud computing that delivers on-demand virtual machines (VMs). As the cloud grows continuously and millions of users simultaneously request the IaaS cloud to accomplish their tasks, scheduling the VM resources is one of the major challenges. Improper allocation of VMs, which is frequently encountered, leads to server overloading, low resource utilisation, and increased response times. These issues can be resolved with a load balancing (LB) strategy that selects a suitable VM based on changing needs. In this study, we enhance the existing throttled load balancing (TLB) strategy and propose a new throttled adapted load balancing (TALB) approach, with the main goals of decreasing VM search time, response time, datacentre processing time, and cost, while balancing the workload among VMs. The TALB algorithm is evaluated using the CloudAnalyst simulation tool against the existing RR, ESCE, and TLB algorithms.
    Keywords: cloud computing; infrastructure as a service; virtual machine; load balancing; throttled load balancing; CloudAnalyst; throttled adapted load balancing.
    DOI: 10.1504/IJCC.2024.10060349
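The throttled strategy that TALB extends keeps an index table of VM availability and allocates each request to the first available VM. A minimal sketch of that classic baseline (the TALB enhancements themselves are not published here, so only the standard behaviour is shown):

```python
class ThrottledBalancer:
    """Classic throttled LB: an index table marks each VM available/busy;
    a request goes to the first available VM, or is throttled (-1) if all
    VMs are busy and must be retried later."""
    def __init__(self, n_vms):
        self.available = [True] * n_vms

    def allocate(self):
        for vm, free in enumerate(self.available):
            if free:
                self.available[vm] = False   # mark busy until released
                return vm
        return -1                            # all VMs busy: throttle

    def release(self, vm):
        self.available[vm] = True
```

The linear scan over the index table is what the abstract's "VM searching time" goal targets.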
  • Load Balancing Using Improved Weighted Round Robin Algorithm in Cloud Computing Environment   Order a copy of this article
    by Sree Priya S, T. Rajendran 
    Abstract: Load balancing strategies maximise resource use, system efficiency, reliability, availability, network traffic management, and responsiveness to changing circumstances. This paper presents the improved weighted round robin (IWRR) algorithm for cloud computing load balancing. Cloud infrastructure relies on load balancing to optimise resource use and server performance. IWRR improves on weighted round robin by dynamically adjusting server weights according to real-time performance parameters such as CPU utilisation and request latency. Load balancing methods such as round robin, IP hash, and weighted round robin help distribute internet traffic across servers: IP hash is used for requests from the same domain name, and round robin when server capacity allows. These methods can be assessed by response time and capacity. Weighted round robin (WRR) assigns requests according to server capabilities to reduce response time and increase throughput; automatically directing more requests to capable servers improves system speed. These methods eliminate resource waste, enable scalability based on consumer demand, reduce disruptions, and improve the client experience by uniformly dispersing jobs over multiple resources. Load balancing lets online service providers use their physical capabilities to create stable, adaptable, and high-quality solutions.
    Keywords: load balancing; throughput; response time; round robin; IP hash; weighted round robin; WRR; resource allocation; improved intelligent infrastructures.
    DOI: 10.1504/IJCC.2024.10062149
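Weighted round robin, the basis of IWRR, dispatches requests to servers in proportion to assigned weights. A minimal sketch with static weights (the dynamic weight adjustment described above is the paper's contribution and is not reproduced here):

```python
def weighted_round_robin(servers):
    """Generator yielding server names in proportion to their weights.
    `servers` is a list of (name, weight) pairs with positive integer weights;
    servers are interleaved within each cycle rather than served in bursts."""
    counters = [0] * len(servers)        # credits consumed per server this cycle
    while True:
        progressed = False
        for i, (name, weight) in enumerate(servers):
            if counters[i] < weight:
                counters[i] += 1
                progressed = True
                yield name
        if not progressed:               # every server spent its weight: new cycle
            counters = [0] * len(servers)
```

An IWRR-style balancer would recompute the weights periodically from measured CPU utilisation and latency instead of keeping them fixed.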
  • Virtual Machine Workload Prediction using Deep Learning   Order a copy of this article
    by Abhilash C. S, Chaithra Usha, Veena Garag, Priyanka H 
    Abstract: This paper presents a novel approach to optimise resource allocation in virtualised systems, aiming to maximise performance and minimise operational expenses. Leveraging deep learning models, specifically long short-term memory (LSTM) and bidirectional gated recurrent unit (bi-GRU), the method focuses on forecasting CPU load patterns in virtual machines (VMs). Accurate predictions are crucial for proactive resource management in dynamic cloud-based infrastructures. LSTM and bi-GRU excel in time series forecasting owing to their ability to detect temporal connections in sequential data. Using pre-processed historical CPU load data, the models undergo training with hyperparameter adjustments to enhance performance. Experimental results demonstrate that the proposed models outperform others, achieving lower average root mean square error (RMSE) values (0.05636) and mean absolute error (MAE) values (0.03721). Comparative analysis with LSTM, GRU, bi-LSTM, bi-GRU, LSTM-GRU, and bi-LSTM-GRU confirms the high predictive capabilities of LSTM and bi-GRU, with the bidirectional architecture of bi-GRU enhancing accuracy by capturing connections between previous and upcoming time steps.
    Keywords: virtual machines; VMs; long short-term memory; LSTM; bi-GRU; CPU load prediction; cloud computing.
    DOI: 10.1504/IJCC.2024.10062593
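Forecasters like the LSTM and bi-GRU models described above are trained on sliding windows of the CPU-load series and scored with RMSE and MAE. The data preparation and metrics can be sketched without any deep learning library (the lookback length is an illustrative assumption, not the paper's setting):

```python
def make_windows(series, lookback):
    """Turn a univariate CPU-load series into (input window, next value)
    pairs, the supervised form LSTM/GRU forecasters are trained on."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

def rmse_mae(actual, predicted):
    """Root mean square error and mean absolute error of a forecast."""
    errs = [a - p for a, p in zip(actual, predicted)]
    rmse = (sum(e * e for e in errs) / len(errs)) ** 0.5
    mae = sum(abs(e) for e in errs) / len(errs)
    return rmse, mae
```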
  • A Survey on Blockchain Architecture and Consensus Mechanism: Design Vulnerability and Security Analysis   Order a copy of this article
    by Shshikant Sharma, Dharmender Singh Kushwaha 
    Abstract: Today, data security is the most crucial topic for an organisation, which needs to protect its information against cyberattacks. Cryptography, distributed ledger technology (DLT), and blockchain provide stronger security for data storage and help prevent cyberattacks. The most prominent reasons for using this technology are its specific properties: decentralisation, transparency, autonomy (no human interaction), and robustness. This paper discusses the performance and limitations of existing blockchain architectures, consensus mechanisms, and the security aspects of consensus mechanisms in an organised way. The survey presents systematic reviews of blockchain architectures and consensus mechanisms, a performance analysis of current blockchain consensus mechanisms, and the associated vulnerabilities and types of attacks. The aim is to provide a comprehensive state-of-the-art platform from which a beginner can swiftly move on to research.
    Keywords: blockchain; distributed ledger technology; decentralisation; consensus mechanism; byzantine fault-tolerance; security.
    DOI: 10.1504/IJCC.2024.10062772
  • Systematic Analysis of On-Premise and Cloud Services   Order a copy of this article
    by Asif Ali, Asif Ali Laghari, Irfan Ali Kandhro, Kamlesh Kumar, Salman Younus 
    Abstract: There are two key distinctions between cloud and on-premise (OP) software: the cost of each varies, and so does the level of control. As organisations explore ways to reduce costs, much data and many workloads are migrating to clouds such as GCP, AWS, and Azure. Cloud service providers offer the extensibility, resilience, and speed that traditional OP deployments usually lack. This paper presents a comparative analysis of how on-premise and cloud computing differ in outline, arrangement, administration, and tooling for organisations and clients. The comparison shows that cloud computing provides more flexible infrastructure and better data-processing services than on-premise deployments. In today's enterprise IT world, a business must weigh many factors when deciding whether a cloud infrastructure is the right choice; many mid-level organisations fail to adopt cloud solutions and instead rely on their proven legacy on-premises applications and software. It is therefore very challenging for a business to decide whether a cloud infrastructure is the right choice. The analysis of on-premise and cloud deployments covered in this article will help mid-level organisations decide whether to choose cloud solutions for their running business applications.
    Keywords: key differences; cloud computing; on-premise pros and cons; cloud pros and cons; on-premise risk; security; cloud services provider selection; cost; compliance; cloud deployments methods.
    DOI: 10.1504/IJCC.2024.10063641
  • Resource Scheduling in Cloud Environment using Particle Swarm Search Algorithm   Order a copy of this article
    by Malay Kumar Majhi, Manas Ranjan Kabat, Satya Prakash Sahoo 
    Abstract: Cloud computing has gained significant popularity as a platform for processing large-scale data analytics, offering benefits such as high availability, robustness, and cost-effectiveness. However, job scheduling in cloud systems presents a major challenge, as it directly impacts execution time and operational costs. To address these issues, this paper presents a novel multi-adaptive convergent particle swarm optimisation (MAC-PSO) algorithm designed to decrease the failure rate, minimise makespan values, and enhance resource utilisation. The round-robin scheduling method aids task execution by determining the appropriate time-space allocation. The proposed algorithm's performance is compared with that of the TLBO algorithm, demonstrating that MAC-PSO outperforms both TLBO and the original PSO. Moreover, a comprehensive analysis is conducted to evaluate the performance metrics within the MAC-PSO algorithm. Notably, MAC-PSO effectively increases the ratio of solutions that dominate those of previous algorithmic approaches and identifies a greater number of solutions that cater to user preferences.
    Keywords: task scheduling; particle swarm optimisation; PSO; round-robin scheduling; cloud computing.
    DOI: 10.1504/IJCC.2024.10064262
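For reference, plain global-best PSO, which MAC-PSO adapts, can be sketched as follows (a generic continuous-domain version with assumed inertia and attraction coefficients, not the paper's multi-adaptive convergent variant):

```python
import random

def pso_minimise(f, dim, bounds, particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain global-best PSO: velocities blend inertia with attraction toward
    each particle's personal best and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [f(x) for x in xs]
    g_i = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_i][:], pbest_val[g_i]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

A scheduling variant would encode a task-to-VM assignment in each particle and use makespan or failure rate as `f`.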
  • AltWOA: Enhancing Query Performance with Clustering-Based Optimisation   Order a copy of this article
    by Mursubai Sandhya Rani, N. Raghavendra Sai
    Abstract: Big data (BD) has gained a lot of attention in the information field owing to data growth over the preceding ten years. A fundamental purpose of query optimisation (QO) approaches in a BD environment is data retrieval. Numerous cloud-focused technologies have been developed to offer beneficial and practical choices for BD query optimisation. Existing big data query optimisation approaches often struggle to process complex queries on massive datasets efficiently, leading to performance bottlenecks and resource wastage. Despite significant advances in big data query optimisation, there remains a need for innovative techniques that can seamlessly handle diverse workloads and data distributions while optimising resource utilisation and query performance. To solve these query optimisation issues, this paper suggests an altruistic whale optimisation algorithm (AltWOA). The AltWOA optimiser increases overall query processing effectiveness, leaving energy-efficient query techniques aside. The metrics of classification and computation time are tested for various data sizes, instances, and dataset records.
    Keywords: big data; BD; query optimisation; QO; altruistic whale optimisation algorithm; AltWOA; fast Markov clustering algorithm.
    DOI: 10.1504/IJCC.2024.10064274