Title: Research on active suspension control based on safety constraint reinforcement learning
Authors: Lin-feng Zhao; Xiao Feng; Wen-bin Shao; Zhen Mei; Jin-fang Hu
Addresses: School of Automotive and Transportation Engineering, Hefei University of Technology, Hefei, 230009, China; School of Automotive and Transportation Engineering, Hefei University of Technology, Hefei, 230009, China; School of Mechanical Engineering, Anhui Jianghuai Automobile Group Co., Ltd., Hefei University of Technology, Hefei, 230092, China; School of Automotive and Transportation Engineering, Hefei University of Technology, Hefei, 230009, China; School of Automotive and Transportation Engineering, Hefei University of Technology, Hefei, 230009, China
Abstract: Conventional active suspension control algorithms struggle to balance ride comfort with handling stability, and ordinary reinforcement learning algorithms lack practical safety considerations for the suspension system. To address these shortcomings, a deep deterministic policy gradient (DDPG) reinforcement learning (RL) algorithm founded on suspension safety boundary constraints is proposed: the limit stroke of the suspension dynamic deflection and the safe range of tyre dynamic deformation are integrated into the RL algorithm, and an ideal active suspension control strategy is obtained through offline training. Compared with the LQR algorithm, the proposed algorithm reduces the root mean square (RMS) value of the sprung mass acceleration by 6.41%, the suspension dynamic deflection by 3%, the pitch angular acceleration by 11.00%, and the tyre dynamic deformation by about 25%.
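The abstract's idea of folding safety boundaries into the learning signal can be illustrated with a minimal reward sketch. This is not the paper's exact reward function; the bound values, weights, and penalty magnitude below are illustrative assumptions, showing only the general pattern of a quadratic comfort cost plus penalties when the deflection or tyre-deformation constraints are violated.

```python
# Hypothetical safety bounds (illustrative, not from the paper):
# suspension dynamic deflection limit stroke and tyre dynamic
# deformation safety range.
DEFLECTION_LIMIT = 0.08   # m, assumed limit stroke
TYRE_DEFORM_LIMIT = 0.02  # m, assumed safe tyre deformation

def reward(sprung_acc, deflection, tyre_deform,
           w_acc=1.0, w_def=0.5, w_tyre=0.5, penalty=100.0):
    """Quadratic cost over comfort/stability terms, with large
    penalties added whenever a safety boundary is exceeded.
    Returned as a negative cost for an RL agent to maximise."""
    cost = (w_acc * sprung_acc ** 2
            + w_def * deflection ** 2
            + w_tyre * tyre_deform ** 2)
    # Safety boundary constraints: penalise leaving the safe region
    if abs(deflection) > DEFLECTION_LIMIT:
        cost += penalty
    if abs(tyre_deform) > TYRE_DEFORM_LIMIT:
        cost += penalty
    return -cost

r_safe = reward(0.5, 0.02, 0.005)      # state within both bounds
r_unsafe = reward(0.5, 0.10, 0.005)    # deflection exceeds limit stroke
```

A DDPG agent trained against such a signal is steered away from constraint-violating states because unsafe states yield sharply lower reward than any in-bounds state.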
Keywords: active suspension; reinforcement learning; safe boundaries; ride comfort; convex road surface.
International Journal of Vehicle Design, 2024 Vol.96 No.3/4, pp.171 - 192
Received: 16 Nov 2023
Accepted: 22 Jul 2024
Published online: 17 Jun 2025