Open Access Article

Title: Optimisation of gradient descent algorithm and resource scheduling in big data environment

Authors: Zhendong Ji

Addresses: School of Data Science and Computer Science, Shandong Women's University, Jinan 250300, China

Abstract: With the rapid development of big data technology, traditional gradient descent algorithms face problems such as low computational efficiency, slow convergence, and uneven resource allocation. This article proposes a collaborative framework that integrates dynamic resource scheduling with adaptive gradient descent optimisation for distributed machine learning in big data environments. Firstly, an asynchronous gradient descent algorithm based on hierarchical batch sampling (HB-ASGD) is designed, which dynamically adjusts the local batch size and the global synchronisation frequency to balance load differences between computing nodes and reduce communication overhead. Secondly, a resource-aware elastic scheduling (RAES) model is introduced, which uses reinforcement learning to dynamically predict the computational demand of tasks and, combined with containerisation technology, achieves fine-grained allocation of CPU/GPU resources, giving priority to the computing resources of critical iterative tasks. Experiments show that the proposed approach effectively alleviates the efficiency bottleneck of iterating over massive data.
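For illustration, the sketch below shows one way the HB-ASGD-style adjustment of local batch size and global synchronisation frequency described in the abstract could be expressed; it is a minimal sketch under assumed heuristics, and all identifiers (NodeStats, adjust_plan, BASE_BATCH, BASE_SYNC_EVERY) are hypothetical rather than taken from the paper.

```python
# Illustrative sketch: per-node batch-size and sync-interval adjustment
# in the spirit of HB-ASGD. All names and thresholds are assumptions.

from dataclasses import dataclass

BASE_BATCH = 256        # nominal local batch size
BASE_SYNC_EVERY = 10    # nominal local steps between global synchronisations


@dataclass
class NodeStats:
    node_id: str
    throughput: float   # samples processed per second in the last window
    staleness: int      # iterations the node lags behind the global model


def adjust_plan(nodes: list[NodeStats]) -> dict[str, tuple[int, int]]:
    """Scale each node's batch size with its relative throughput so fast and
    slow workers finish local steps at similar times, and relax the
    synchronisation frequency for nodes that are not stale to cut communication."""
    mean_tp = sum(n.throughput for n in nodes) / len(nodes)
    plan = {}
    for n in nodes:
        ratio = n.throughput / mean_tp
        batch = max(32, int(BASE_BATCH * ratio))  # load balancing across nodes
        # Stale nodes synchronise at the base rate; fresh nodes sync less often.
        sync_every = BASE_SYNC_EVERY if n.staleness > 2 else BASE_SYNC_EVERY * 2
        plan[n.node_id] = (batch, sync_every)
    return plan


if __name__ == "__main__":
    stats = [
        NodeStats("worker-0", throughput=1200.0, staleness=1),
        NodeStats("worker-1", throughput=600.0, staleness=4),
    ]
    for node_id, (batch, sync_every) in adjust_plan(stats).items():
        print(f"{node_id}: batch={batch}, sync every {sync_every} steps")
```

The sketch only captures the load-balancing intuition (faster workers take larger batches, less stale workers synchronise less often); the paper's actual adjustment rules and the RAES reinforcement-learning scheduler are not reproduced here.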

Keywords: big data; gradient descent algorithm; resource scheduling; reinforcement learning; resource-aware elastic scheduling; RAES.

DOI: 10.1504/IJICT.2025.147143

International Journal of Information and Communication Technology, 2025 Vol.26 No.25, pp.19-32

Received: 06 May 2025
Accepted: 23 May 2025

Published online: 10 Jul 2025