Title: Optimising DNNs for load forecasting: the power of hyperparameter tuning

Authors: Faisal Mehmood Butt; Seong-O Shim; Safa Habibullah; Abdulwahab Ali Almazroi; Lal Hussain; Umair Ahmad Salaria

Addresses:
Department of Electrical Engineering, Mirpur University of Science & Technology, Mirpur, Azad Kashmir, Pakistan; Department of Electrical Engineering, University of Azad Jammu and Kashmir (Chehla Campus), Muzaffarabad, Azad Kashmir, Pakistan
Department of Computer and Network Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah, Makkah, Saudi Arabia
Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Makkah, Saudi Arabia
Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah, Makkah, Saudi Arabia
Department of Computer Science & IT (Neelum Campus), The University of Azad Jammu and Kashmir, Athmuqam, Azad Kashmir, Pakistan; Department of Computer Science & IT (King Abdullah Campus), The University of Azad Jammu and Kashmir, Muzaffarabad, Azad Kashmir, Pakistan
Department of Electrical Engineering, Mirpur University of Science & Technology, Mirpur, Azad Kashmir, Pakistan; Department of Electrical Engineering, University of Azad Jammu and Kashmir (Chehla Campus), Muzaffarabad, Azad Kashmir, Pakistan

Abstract: This study investigated the effectiveness of deep learning for electricity demand forecasting across different timescales. For one-day forecasts, a double hidden layer network with Rectified Linear Unit (ReLU) and sigmoid activation functions achieved the lowest Mean Absolute Percentage Error (MAPE) of 4.23%, requiring only four neurons in the hidden layers. Longer timescales necessitated more complex architectures: the one-month forecast achieved a MAPE of 2.78% with a double ReLU-sigmoid network and 12 neurons in the hidden layers, while the challenging three-month forecast was handled effectively by a double ReLU network with ten neurons, resulting in a MAPE of 2.75%. These findings highlight a crucial point: capturing the non-linearity and dynamic nature of long-term forecasts requires more intricate network designs, with a strategic selection of activation functions and a sufficient number of neurons. By optimising network architecture, grid operators can adapt to demand fluctuations, allocate resources efficiently, and improve future planning.
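The architectures described above can be illustrated with a minimal sketch (not the authors' code): a forward pass through two hidden layers, the first with ReLU and the second with sigmoid activation, scored by the MAPE metric reported in the abstract. Layer sizes, weights, and input dimensions here are illustrative assumptions only.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Logistic sigmoid: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    """Double hidden layer forecast: ReLU layer -> sigmoid layer -> linear output."""
    w1, b1, w2, b2, w3, b3 = params
    h1 = relu(x @ w1 + b1)       # first hidden layer (ReLU)
    h2 = sigmoid(h1 @ w2 + b2)   # second hidden layer (sigmoid)
    return h2 @ w3 + b3          # linear output: forecast demand

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Example: four neurons per hidden layer, as in the one-day model;
# 3 input features and random weights are placeholders for illustration.
rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 4)), np.zeros(4),
          rng.normal(size=(4, 4)), np.zeros(4),
          rng.normal(size=(4, 1)), np.zeros(1))
y_hat = forward(rng.normal(size=(5, 3)), params)  # 5 samples, 3 features
```

Swapping the second activation from sigmoid to ReLU in `forward`, and widening the hidden layers to ten or twelve neurons, reproduces the kind of architecture variations the study tunes over.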

Keywords: optimisation; ReLU; rectified linear unit; convolutional neural networks; deep neural networks; sigmoid function; neurons; layers.

DOI: 10.1504/IJGUC.2025.148535

International Journal of Grid and Utility Computing, 2025 Vol.16 No.5/6, pp.432 - 450

Received: 30 Jun 2024
Accepted: 18 Jul 2024

Published online: 11 Sep 2025
