Data-based reinforcement learning for lane keeping with input saturation
by Rui Luo; Dianwei Qian; Qichao Zhang
International Journal of Advanced Mechatronic Systems (IJAMECHS), Vol. 8, No. 1, 2020

Abstract: With the development of artificial intelligence, autonomous driving has received extensive attention. As a highly complex integrated system, an autonomous vehicle comprises several modules. This paper concerns the control module, whose task is to design an optimal or near-optimal controller that tracks the vehicle's desired trajectory. In this paper, a lateral control strategy for the lane-keeping task is proposed based on model-free reinforcement learning. Unlike model-based methods such as the linear quadratic regulator and model predictive control, our method requires only generated data, rather than perfect knowledge of the system model, to guarantee optimal performance. At the same time, to meet the dual requirements of passenger comfort and fuel economy, input saturation must be considered in the design of the control module. A low-gain state feedback control method is adopted, which reduces the data-based lateral control problem to solving algebraic Riccati equations. Finally, simulations are presented to verify the validity of the algorithm.
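The low-gain state feedback idea mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm (the paper's method is data-based, and its vehicle model is not reproduced here); it only shows the model-based low-gain construction that the abstract alludes to, on a hypothetical simplified lateral model: a parameterized algebraic Riccati equation with a small weight eps is solved, and shrinking eps shrinks the feedback gain so the saturated control stays within bounds over a larger region of the state space.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical simplified lateral dynamics: a double integrator on the
# lateral error and its rate. The paper's full vehicle model is richer.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def low_gain_feedback(A, B, eps):
    """Low-gain design: solve the parameterized ARE
        A'P + P A - P B B' P + eps*I = 0
    and return K = B'P. As eps -> 0 the gain K shrinks, which helps the
    saturated control u = clip(-K x, -u_max, u_max) avoid saturation."""
    n = A.shape[0]
    P = solve_continuous_are(A, B, eps * np.eye(n), np.eye(B.shape[1]))
    return B.T @ P

K = low_gain_feedback(A, B, eps=1e-2)
# Closed-loop matrix A - B K is Hurwitz, so the lateral error decays.
```

In the paper's data-based setting, the Riccati solution would instead be learned from generated trajectories rather than computed from a known (A, B) pair.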

Online publication date: Tue, 29-Sep-2020
