Title: Data-based reinforcement learning for lane keeping with input saturation

Authors: Rui Luo; Dianwei Qian; Qichao Zhang

Addresses: School of Control and Computer Engineering, North China Electric Power University, Beijing, China; School of Control and Computer Engineering, North China Electric Power University, Beijing, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing, China

Abstract: With the development of artificial intelligence, autonomous driving has received extensive attention. As a highly complex integrated system, the autonomous vehicle comprises several modules. This paper concerns the control module, whose task is to design an optimal or near-optimal controller that tracks the desired trajectory of the vehicle. In this paper, a lateral control strategy for the lane-keeping task is proposed based on model-free reinforcement learning. Unlike model-based methods such as the linear quadratic regulator and model predictive control, our method requires only generated data, rather than perfect knowledge of the system model, to guarantee optimal performance. At the same time, to meet the two requirements of passenger comfort and fuel economy, input saturation must be considered in the design of the control module. A low-gain state-feedback control method is therefore adopted, which amounts to solving a family of algebraic Riccati equations for data-based lateral control. Finally, simulations are presented and the validity of the algorithm is verified.
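The abstract's Riccati-equation route to state feedback can be illustrated with a small sketch. The snippet below is not the authors' data-based algorithm; it shows the model-based counterpart they build on, Kleinman's policy iteration, which solves the continuous-time algebraic Riccati equation by repeated Lyapunov solves (the paper's model-free method replaces these solves with generated data). The two-state lateral-error model (lateral offset and its rate as a double integrator) and all gains here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def solve_lyap(Acl, Qbar):
    """Solve Acl^T P + P Acl + Qbar = 0 via Kronecker vectorisation."""
    n = Acl.shape[0]
    I = np.eye(n)
    M = np.kron(I, Acl.T) + np.kron(Acl.T, I)
    P = np.linalg.solve(M, -Qbar.flatten(order='F')).reshape(n, n, order='F')
    return (P + P.T) / 2  # symmetrise against round-off

def kleinman_lqr(A, B, Q, R, K0, iters=20):
    """Policy iteration (Kleinman's algorithm) for the continuous-time ARE:
    evaluate the current gain K via a Lyapunov solve, then improve it."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                       # closed-loop dynamics under K
        P = solve_lyap(Acl, Q + K.T @ R @ K)  # policy evaluation
        K = np.linalg.solve(R, B.T @ P)       # policy improvement
    return K, P

# Assumed toy lateral-error model: state = [lateral offset, offset rate].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])  # any stabilising initial gain works
K, P = kleinman_lqr(A, B, Q, R, K0)
print(K)  # converges to the LQR gain, here [[1.0, sqrt(3)]]
```

In the low-gain design for input saturation that the abstract mentions, one would additionally scale the state weight (e.g. Q = eps * I for small eps) so that the resulting feedback gain, and hence the steering command, stays within the actuator's saturation bounds.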

Keywords: lateral control; lane keeping; input saturation; model-free reinforcement learning.

DOI: 10.1504/IJAMECHS.2020.109897

International Journal of Advanced Mechatronic Systems, 2020 Vol.8 No.1, pp.9 - 15

Received: 17 May 2019
Accepted: 08 Nov 2019

Published online: 15 Sep 2020