Title: Application of intelligent system based on deep reinforcement learning in electrical engineering automation control

Authors: Zhihe Wu

Addresses: Automation and Electrical Engineering, Dalian Jiaotong University, Dalian, Liaoning, China

Abstract: The application of intelligent control technology in the power electronics industry has promoted the development of power automation, changing its control and management modes and greatly improving efficiency. However, in power automation systems, use efficiency must be fully considered according to the actual situation, and the application of intelligent technology in power automation should be promoted gradually. This paper analyses and discusses the application of intelligent technology in power systems and provides a reference for future work. On this basis, a method that applies deep reinforcement learning to the automatic control of power engineering is proposed. A learning algorithm based on artificial emotion augmentation is introduced to improve the operational performance of power grids. The relationship between artificial emotion and reinforcement learning in artificial psychology is discussed from three perspectives: behaviour value selection, Q-value matrix update, and reward value function update. According to the experiments and calculations, in the ACE simulation results the Q-learning method, the Q(λ) method, and the DQL method are reduced by 39.7%, 55.8%, and 61.7%, respectively; in the Δf simulation results, the Q-learning algorithm, the Q(λ)-learning algorithm, and the deep Q-learning method are 58.3%, 75%, and 75% lower than PID, respectively. Simulation experiments showed that the proposed algorithm outperforms the other three algorithms.
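The Q-value matrix update and epsilon-greedy behaviour value selection mentioned in the abstract follow the standard tabular Q-learning scheme. The sketch below illustrates that scheme only; the state/action discretisation, the toy transition, and the frequency-deviation-style reward are illustrative assumptions, not the paper's actual load-frequency control model or its artificial-emotion augmentation.

```python
import numpy as np

# Assumed toy setup: 5 discretised deviation bins, 3 control actions
# (push down / hold / push up). These sizes are illustrative only.
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))  # Q-value matrix

def reward(state):
    # Hypothetical reward value function: penalise distance from the
    # nominal bin (index 2), i.e. larger deviation -> more negative reward.
    return -abs(state - 2)

def step(state, action):
    # Toy transition: action 0/1/2 moves the state down/holds/up, clipped.
    return int(np.clip(state + action - 1, 0, N_STATES - 1))

for episode in range(200):
    s = int(rng.integers(N_STATES))
    for _ in range(20):
        # Behaviour value selection: epsilon-greedy over the Q row.
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(Q[s].argmax())
        s_next = step(s, a)
        r = reward(s_next)
        # Q-value matrix update (standard Q-learning rule).
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy should steer every state toward the nominal bin.
policy = Q.argmax(axis=1)
```

In the paper's method, the reward and update steps are where the artificial-emotion terms would enter; here they are replaced by the plain textbook rule for clarity.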

Keywords: electrical engineering automation control; deep reinforcement learning; smart system; Q-learning algorithm; traditional power system.

DOI: 10.1504/IJGUC.2024.140117

International Journal of Grid and Utility Computing, 2024 Vol.15 No.3/4, pp.323 - 332

Received: 20 May 2023
Accepted: 27 Oct 2023

Published online: 24 Jul 2024
