Forthcoming and Online First Articles

International Journal of Cloud Computing (IJCC)

Forthcoming articles have been peer-reviewed and accepted for publication, but are pending final changes, have not yet been published, and may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proof-reading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with the shopping trolley icon are available for purchase; click on the icon to send an email request to purchase.

Online First articles are published online here, before they appear in a journal issue. Online First articles are fully citeable, complete with a DOI. They can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Articles marked with the Open Access icon are Online First articles. They are freely available and openly accessible to all, without any restrictions except those stated in their respective CC licenses.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Cloud Computing (6 papers in press)

Regular Issues

  • Adaptive Online Task Scheduling Algorithm for Resource Regulation on Heterogeneous Platforms   Order a copy of this article
    by Yongqing Liu, Fan Yang, Fuqiang Tian, Jun Mou, Bo Hu, Peiyang Wu 
    Abstract: As computing technology advances, resource regulation on heterogeneous platforms has emerged as a key research area for future computing environments. In cloud task scheduling, work has focused on intelligent agent models and on performance indicators that balance user experience against cost-effectiveness. This study applies deep reinforcement learning, specifically the deep deterministic policy gradient (DDPG) algorithm, incorporating heterogeneous resource regulation to address the varied needs of different data centers. Key task characteristics include task length, average instruction length, and average CPU utilization, all with significant standard deviations. During training, a Poisson distribution with lambda = 1 was used, and the loss curve converged. Although the DDPG algorithm incurred a slightly higher virtual machine usage cost and an instruction response time of 306.5, it provided notable economic benefits, demonstrating improved management and utilization of computing resources. A minimal sketch of the DDPG update step follows this entry.
    Keywords: Heterogeneous platform resource regulation; Cloud task scheduling; Deep reinforcement learning algorithm; Differences in data centre environments; Computational resource management.
    DOI: 10.1504/IJCC.2025.10070909
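    The centrepiece named in this abstract is the DDPG actor-critic update. Below is a minimal sketch of that update step in PyTorch; the network sizes, state and action dimensions, and the toy batch are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal DDPG update-step sketch (illustrative; not the paper's implementation).
    import copy
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM = 8, 4   # assumed: queue/VM features -> allocation action

    actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, ACTION_DIM), nn.Tanh())
    critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                           nn.Linear(64, 1))
    target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def ddpg_update(s, a, r, s2, gamma=0.99, tau=0.005):
        # Critic: regress Q(s, a) toward the target r + gamma * Q'(s2, mu'(s2)).
        with torch.no_grad():
            target = r + gamma * target_critic(torch.cat([s2, target_actor(s2)], 1))
        critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], 1)), target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
        # Actor (deterministic policy gradient): maximise the critic's value of mu(s).
        actor_loss = -critic(torch.cat([s, actor(s)], 1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        # Polyak-average the slow-moving target networks.
        for tgt, src in ((target_actor, actor), (target_critic, critic)):
            for tp, p in zip(tgt.parameters(), src.parameters()):
                tp.data.mul_(1 - tau).add_(tau * p.data)

    # Toy transition batch. (The abstract reports a Poisson(lambda = 1) distribution
    # during training; where it enters the environment is not specified here.)
    s, s2 = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
    a, r = torch.rand(32, ACTION_DIM) * 2 - 1, torch.randn(32, 1)
    ddpg_update(s, a, r, s2)
    ```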
     
  • Geo-Distributed Multi-Cloud Data Centre Storage Tiering and Selection with Zero-Suppressed Binary Decision Diagrams   Order a copy of this article
    by Brian Lim, Miguel Saavedra, Renzo Tan, Kazushi Ikeda, William Yu 
    Abstract: The exponential growth of data in recent years has prompted cloud providers to introduce diverse geo-distributed storage solutions for various needs. The vast range of storage options, however, presents organisations with a challenge in determining the ideal data placement configuration. The study introduces a novel optimisation algorithm utilising the zero-suppressed binary decision diagram to select the optimal data centre, storage tiers, and cloud provider. The algorithm takes a holistic approach that considers cost, latency, and high availability, and applies to both geo-distributed on-premise environments and public cloud providers. Furthermore, the proposed methodology leverages the recursive structure of the zero-suppressed binary decision diagram, allowing all valid configurations to be enumerated and ranked by total cost. Overall, the study offers organisations flexibility in addressing specific priorities for cloud storage solutions by providing alternative near-optimal configurations. A simplified enumerate-and-rank sketch follows this entry.
    Keywords: cloud provider; data centre; discrete optimisation; storage solution; storage tier; zero-suppressed binary decision diagram.
    DOI: 10.1504/IJCC.2025.10071085
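    To make the enumerate-and-rank idea concrete, here is a brute-force stand-in for the paper's ZDD-based selection: it enumerates every feasible (provider, data centre, tier) replica set and ranks it by total cost. A real zero-suppressed binary decision diagram shares subproblems and scales far better; all providers, prices, and thresholds below are invented.

    ```python
    # Brute-force stand-in for ZDD-based storage selection (illustrative only).
    from itertools import product

    # (provider, data_centre, tier, monthly_cost_usd_per_gb, latency_ms, availability)
    OPTIONS = [
        ("cloudA", "us-east", "hot",  0.023, 12, 0.9999),
        ("cloudA", "us-east", "cold", 0.004, 95, 0.999),
        ("cloudB", "eu-west", "hot",  0.020, 48, 0.9995),
        ("cloudB", "ap-se",   "cold", 0.005, 180, 0.999),
    ]

    def rank_configs(n_replicas=2, max_latency_ms=100, min_availability=0.999):
        """Enumerate replica sets, keep feasible ones, rank by total cost."""
        feasible = []
        for combo in product(OPTIONS, repeat=n_replicas):
            # Require distinct data centres for geo-distribution.
            if len({opt[1] for opt in combo}) < n_replicas:
                continue
            # At least one replica must satisfy the latency bound.
            if min(opt[4] for opt in combo) > max_latency_ms:
                continue
            # Joint availability: 1 minus the product of failure probabilities.
            fail = 1.0
            for opt in combo:
                fail *= (1 - opt[5])
            if 1 - fail < min_availability:
                continue
            feasible.append((sum(opt[3] for opt in combo), combo))
        return sorted(feasible)  # cheapest (near-)optimal configurations first

    for cost, combo in rank_configs()[:3]:
        print(f"${cost:.3f}/GB:", [f"{p}/{dc}/{t}" for p, dc, t, *_ in combo])
    ```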
     
  • Design of IoT Data Security Storage and Allocation Model Based on Cloud and Mist Integration Algorithm   Order a copy of this article
    by Keqing Guan, Xianli Kong 
    Abstract: As internet and communication technology evolve, the industrial Internet of Things (IIoT) has developed rapidly. However, existing IIoT systems struggle to ensure the timely and secure transmission of user data. This study introduces a cloud-fog hybrid network architecture and establishes a latency and data security model for individual users, employing an improved ant colony algorithm to minimise latency under security constraints. For multi-user scenarios, software-defined networking enhances the architecture, and a refined allocation model is developed. Experiments indicate that, at 500 iterations, the root mean square errors (RMSE) of the compared algorithms were 0.51, 0.43, 0.28, and 0.14, respectively. With 5 users and a data volume of 50 MB, the observed latencies were 24, 22, 18, and 14 seconds, respectively. These findings demonstrate that the proposed method effectively secures data storage and reduces latency in IIoT environments. A minimal ant colony placement sketch follows this entry.
    Keywords: Industrial Internet of Things; Fog computing; Cloud computing; Data security; Time delay.
    DOI: 10.1504/IJCC.2025.10071459
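    As a concrete picture of the single-user case, here is a minimal ant colony sketch that places data blocks on fog or cloud nodes to minimise total latency under a security threshold. The node table, security model, and all parameters are assumptions, not the paper's model.

    ```python
    # Minimal ant colony placement sketch (illustrative assumptions throughout).
    import random

    NODES = [  # (name, latency_s, security_score) -- invented figures
        ("fog1", 2.0, 0.6), ("fog2", 3.0, 0.8),
        ("cloud1", 6.0, 0.95), ("cloud2", 5.0, 0.9),
    ]
    N_BLOCKS, MIN_SECURITY = 5, 0.7
    pheromone = [[1.0] * len(NODES) for _ in range(N_BLOCKS)]

    def build_solution():
        """Each ant picks one feasible node per block, biased by pheromone and 1/latency."""
        sol = []
        for b in range(N_BLOCKS):
            feasible = [i for i, n in enumerate(NODES) if n[2] >= MIN_SECURITY]
            weights = [pheromone[b][i] * (1.0 / NODES[i][1]) for i in feasible]
            sol.append(random.choices(feasible, weights=weights)[0])
        return sol

    best, best_lat = None, float("inf")
    for _ in range(200):                      # iterations
        for _ in range(10):                   # ants per iteration
            sol = build_solution()
            lat = sum(NODES[i][1] for i in sol)
            if lat < best_lat:
                best, best_lat = sol, lat
        for row in pheromone:                 # evaporation
            for i in range(len(row)):
                row[i] *= 0.9
        for b, i in enumerate(best):          # reinforce the best-so-far solution
            pheromone[b][i] += 1.0 / best_lat

    print("placement:", [NODES[i][0] for i in best], "latency:", best_lat)
    ```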
     
  • Application of Clustering Algorithm and Cloud Computing in IoT Data Mining   Order a copy of this article
    by Xu Wu 
    Abstract: This study optimizes the density-based spatial clustering of applications with noise (DBSCAN) algorithm using a cloud computing programming model (MapReduce) to enhance data mining accuracy and reduce processing time in Internet of Things (IoT) systems. The improved algorithm is applied to an IoT monitoring system, where it shows excellent performance in extracting data features and eliminates noise with a 100% removal rate. The system identifies abnormal data in just 0.9 ms with 100% accuracy. These results demonstrate that the enhanced data mining technique significantly improves mining efficiency, laying a foundation for better service quality and commercial value in IoT applications. A short DBSCAN noise-removal sketch follows this entry.
    Keywords: Density-based spatial clustering of applications with noise (DBSCAN); MapReduce cloud computing programming model; Internet of Things; Data mining; Internet of Things monitoring system.
    DOI: 10.1504/IJCC.2025.10071464
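    For a concrete picture of the clustering step, the sketch below applies DBSCAN to synthetic sensor readings and separates noise from clean data. It uses scikit-learn rather than the paper's MapReduce implementation, and eps, min_samples, and the data are assumed.

    ```python
    # DBSCAN noise-removal sketch on synthetic IoT readings (illustrative only).
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=25.0, scale=0.5, size=(200, 1))   # e.g., temperature readings
    noise = rng.uniform(low=0.0, high=60.0, size=(10, 1))     # sparse outliers
    readings = np.vstack([normal, noise])

    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(readings)
    clean = readings[labels != -1]        # DBSCAN marks noise with label -1
    anomalies = readings[labels == -1]
    print(f"kept {len(clean)} readings, flagged {len(anomalies)} as noise/abnormal")
    ```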
     
  • Scalable and Adaptable Hybrid LSTM Model with Multi-Algorithm Optimisation for Load Balancing and Task Scheduling in Dynamic Cloud Computing Environments   Order a copy of this article
    by Mubarak Idris, Mustapha Aminu Bagiwa, Muhammad Abdulkarim, Nurudeen Jibrin, Mardiyya Lawal Bagiwa 
    Abstract: Cloud computing delivers scalable, flexible resources, but dynamic workloads make efficient resource management challenging, especially for load balancing and task scheduling. Addressing these challenges is vital for optimal performance, cost efficiency, and meeting growing application demands. This study proposes the MultiOpt_LSTM model, a hybrid approach that integrates long short-term memory (LSTM) networks with multi-algorithm optimisation techniques, including binary particle swarm optimisation (BPSO), a genetic algorithm (GA), and simulated annealing (SA). The goal is to optimise resource allocation, reduce response times, and ensure balanced workload distribution across virtual machines. The proposed model is evaluated in both real-world and simulated cloud environments, and its performance is compared with state-of-the-art techniques such as ANN-BPSO and heuristic-FSA. Key performance indicators such as response time, resource utilisation, and degree of imbalance are used to measure efficiency. Results show that the MultiOpt_LSTM model outperforms competing methods, achieving near-zero imbalance at higher task volumes and demonstrating superior resource utilisation and reduced response times. For example, at 3,000 tasks, the model maintains a balanced distribution, outperforming traditional methods such as IBPSO-LBS by a significant margin. While the simulation results are promising, future work will focus on real-world implementations to assess the model's scalability and adaptability in diverse cloud environments. A simulated annealing rebalancing sketch follows this entry.
    Keywords: Cloud computing; load balancing; task scheduling; hybrid LSTM model; optimization algorithms; resource utilization; response time; degree of imbalance.
    DOI: 10.1504/IJCC.2025.10071475
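    Simulated annealing is one of the optimisers this hybrid combines. The sketch below uses it alone to reduce the degree of imbalance, taken here as DI = (T_max - T_min) / T_avg over per-VM loads, a common definition in the load-balancing literature. Task sizes, the cooling schedule, and the metric's exact form are illustrative assumptions.

    ```python
    # Simulated annealing rebalancing sketch (assumed parameters and metric).
    import math, random

    random.seed(1)
    tasks = [random.randint(100, 1000) for _ in range(50)]   # task lengths (assumed units)
    N_VMS = 5
    assign = [random.randrange(N_VMS) for _ in tasks]        # initial random placement

    def imbalance(assign):
        """Degree of imbalance: (max load - min load) / average load."""
        loads = [0.0] * N_VMS
        for t, v in zip(tasks, assign):
            loads[v] += t
        return (max(loads) - min(loads)) / (sum(loads) / N_VMS)

    temp, cur = 1000.0, imbalance(assign)
    while temp > 0.01:
        i = random.randrange(len(tasks))                     # propose moving one task
        old_vm = assign[i]
        assign[i] = random.randrange(N_VMS)
        cand = imbalance(assign)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand
        else:
            assign[i] = old_vm                               # revert rejected move
        temp *= 0.995                                        # geometric cooling

    print(f"final degree of imbalance: {cur:.4f}")
    ```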
     
  • Modified Sorted Prioritisation-Based Task Scheduling in Cloud Computing   Order a copy of this article
    by J. Magelin Mary, George Amalarethinam 
    Abstract: Cloud computing provides internet-based, pay-per-use, self-service access to scalable, on-demand computing resources. In this model, task scheduling is a major challenge for cost efficiency and resource usage, since it determines how computing activities are arranged to reduce expense and resource consumption. This paper introduces modified sorted prioritisation-based task scheduling (MSPTS), a new scheduling method that improves resource allocation and efficiency. MSPTS sorts jobs and resources by priority and by their properties, then chooses the best resource for each job based on resource wait time, task processing time, and task priority. This approach optimises task execution and resource allocation, enhancing system performance. MSPTS was compared with other scheduling algorithms using CloudSim, a popular cloud simulation toolkit. Experiments show that MSPTS greatly outperforms standard scheduling algorithms in several respects, improving makespan, cost efficiency, and resource use. These results suggest that MSPTS is a better task scheduling solution for cloud computing, improving performance and resource management. A greedy priority-sorting sketch follows this entry.
    Keywords: Cloud Computing; Pay-Per-Use; Task Scheduling; Task Priority; Resource Utilisation; Makespan; Cost; CloudSim; Cloud Environment.
    DOI: 10.1504/IJCC.2025.10071549
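    Here is a hedged sketch of the sorted, priority-based matching the abstract describes: tasks are sorted by priority (and, on ties, by length), and each is greedily assigned to the resource with the earliest finish time, i.e., wait time plus processing time. The data structures, tie-breaking, and numbers are assumptions; the paper's actual scoring may differ.

    ```python
    # Greedy, sorted priority-based scheduling sketch (assumed reading of MSPTS).
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        length_mi: float        # task length in millions of instructions (assumed unit)
        priority: int           # lower value = higher priority (assumed convention)

    @dataclass
    class Resource:
        name: str
        mips: float             # processing speed
        wait_time: float = 0.0  # time at which the resource becomes free

    def mspts_schedule(tasks, resources):
        """Return [(task, resource, finish_time)] and the resulting makespan."""
        plan = []
        # Highest-priority (and, on ties, longest) tasks are placed first.
        for task in sorted(tasks, key=lambda t: (t.priority, -t.length_mi)):
            # Pick the resource that finishes this task earliest:
            # wait time + processing time, echoing the criteria in the abstract.
            best = min(resources, key=lambda r: r.wait_time + task.length_mi / r.mips)
            finish = best.wait_time + task.length_mi / best.mips
            best.wait_time = finish
            plan.append((task.name, best.name, finish))
        return plan, max(r.wait_time for r in resources)

    tasks = [Task("t1", 4000, 1), Task("t2", 1000, 2), Task("t3", 6000, 1)]
    vms = [Resource("vm1", 500), Resource("vm2", 1000)]
    plan, makespan = mspts_schedule(tasks, vms)
    print(plan, "makespan:", makespan)
    ```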