Title: Reinforcement learning of dynamic collaborative driving Part I: longitudinal adaptive control

Authors: Luke Ng, Christopher M. Clark, Jan Paul Huissoon

Addresses: Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W., Waterloo, Ontario, N2L 3G1, Canada. Computer Science Department, California Polytechnic State University, San Luis Obispo, CA, USA. Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W., Waterloo, Ontario, N2L 3G1, Canada

Abstract: Dynamic collaborative driving involves coordinating the motion of multiple vehicles, using information shared between vehicles instrumented to perceive their surroundings, in order to improve road usage and safety. A basic requirement of any vehicle participating in dynamic collaborative driving is longitudinal control; without this capability, higher-level coordination is not possible. Each vehicle involved is a composite non-linear system powered by an internal combustion engine, equipped with an automatic transmission, rolling on rubber tyres and fitted with a hydraulic braking system. This paper focuses on the problem of longitudinal motion control. A longitudinal vehicle model is introduced, which serves as the design platform for the control system. A longitudinal adaptive control system that uses Monte Carlo Reinforcement Learning (RL) is then presented. The results of the RL phase and the performance of the adaptive control system, both for a single automobile and within a multi-vehicle platoon, are presented.
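To illustrate the kind of learning the abstract refers to, the following is a minimal sketch of Monte Carlo control applied to a toy longitudinal speed-tracking task. It is not the vehicle model or controller described in the paper: the point-mass dynamics, speed-error discretization, reward, action set (TARGET_SPEED, discretize, step) and the every-visit Monte Carlo variant are all illustrative assumptions.

```python
# Sketch: every-visit Monte Carlo control for a toy longitudinal
# speed-tracking task (illustrative assumptions, not the paper's model).
import random
from collections import defaultdict

DT = 0.1                                  # simulation time step [s]
ACTIONS = [-3.0, -1.0, 0.0, 1.0, 3.0]     # commanded accelerations [m/s^2]
EPSILON = 0.1                             # exploration rate
TARGET_SPEED = 20.0                       # desired speed [m/s]

def discretize(speed_error):
    """Map continuous speed error [m/s] to a small discrete state index (0..6)."""
    bins = [-5.0, -2.0, -0.5, 0.5, 2.0, 5.0]
    return sum(speed_error > b for b in bins)

def step(speed, accel_cmd):
    """Toy point-mass longitudinal model with quadratic drag (assumed)."""
    drag = 0.0005 * speed * speed
    speed = max(0.0, speed + (accel_cmd - drag) * DT)
    reward = -abs(TARGET_SPEED - speed)   # penalize speed-tracking error
    return speed, reward

def run_episode(Q, steps=300):
    """Roll out one episode with an epsilon-greedy policy; return (s, a, r) tuples."""
    speed = random.uniform(0.0, 30.0)
    trajectory = []
    for _ in range(steps):
        s = discretize(TARGET_SPEED - speed)
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
        speed, r = step(speed, ACTIONS[a])
        trajectory.append((s, a, r))
    return trajectory

def mc_control(episodes=2000, gamma=0.99):
    """Every-visit Monte Carlo control: average sampled returns per (state, action)."""
    Q = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(episodes):
        trajectory = run_episode(Q)
        G = 0.0
        for s, a, r in reversed(trajectory):          # accumulate return backwards
            G = gamma * G + r
            counts[(s, a)] += 1
            Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]   # incremental mean
    return Q

if __name__ == "__main__":
    Q = mc_control()
    greedy = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])]
              for s in range(7)}
    print("Greedy acceleration command per speed-error bin:", greedy)
```

The key Monte Carlo ingredient is that action values are estimated purely from averaged episode returns, with no bootstrapping; the learned greedy policy maps a discretized speed error to an acceleration command.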

Keywords: mobile robots; motion control; adaptive cruise control; collaborative driving; vehicle dynamics; vehicle simulation; machine learning; reinforcement learning; adaptive control; motion coordination; longitudinal control; multi-vehicle platoon.

DOI: 10.1504/IJVICS.2008.022355

International Journal of Vehicle Information and Communication Systems, 2008 Vol.1 No.3/4, pp.208 - 228

Published online: 02 Jan 2009
