Title: Real-time road object segmentation using improved light-weight convolutional neural network based on 3D LiDAR point cloud

Authors: Guoqiang Chen; Bingxin Bai; Zhuangzhuang Mao; Jun Dai

Addresses: School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo, Henan, China; School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo, Henan, China; School of Aerospace Engineering, Beijing Institute of Technology, Beijing, China; School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo, Henan, China

Abstract: It is critical that autonomous navigation systems segment the objects captured by their sensors (cameras or LiDAR scanners) in real time. In this paper, a convolutional neural network (CNN) is proposed for real-time semantic segmentation of road objects (pedestrians, cars, cyclists). The proposed network structure is based on the light-weight network SqueezeNet, which is small enough to be stored directly on the embedded hardware of an autonomous vehicle. The input of the proposed CNN is the transformed 3D LiDAR point cloud, and a domain transform (DT) aligns the segmentation output precisely with object boundaries, yielding a refined point-wise label map as the output. In addition to comparing the segmentation results with deep-learning-based pipelines, a visual comparison with traditional 3D point cloud segmentation pipelines is also made. Experiments show that the proposed CNN achieves a fast running time (6.2 ms per frame) and realises real-time semantic segmentation of objects in autonomous driving scenes while maintaining comparable segmentation accuracy.
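The abstract does not state the exact transformation applied to the 3D LiDAR point cloud before it enters the CNN. As a rough illustration only, the sketch below shows the spherical (range-image) projection commonly used in SqueezeNet-based LiDAR segmentation pipelines; the 64 x 512 grid size and the 3° / -25° vertical field of view are assumptions typical of a Velodyne-style sensor, not values taken from the paper.

```python
import numpy as np

def spherical_projection(points, H=64, W=512, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) LiDAR point cloud (x, y, z, intensity) onto an
    H x W range image with five channels (x, y, z, intensity, range),
    the kind of 2D tensor a SqueezeNet-style encoder can consume.
    All grid/FOV parameters here are illustrative assumptions."""
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    depth = np.sqrt(x ** 2 + y ** 2 + z ** 2) + 1e-8

    yaw = np.arctan2(y, x)          # azimuth angle of each point
    pitch = np.arcsin(z / depth)    # elevation angle of each point

    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    # Map angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                  # column from azimuth
    v = (1.0 - (pitch - fov_down_rad) / fov) * H       # row from elevation

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    image = np.zeros((H, W, 5), dtype=np.float32)
    image[v, u, 0] = x
    image[v, u, 1] = y
    image[v, u, 2] = z
    image[v, u, 3] = intensity
    image[v, u, 4] = depth
    return image
```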

Keywords: road object segmentation; convolutional neural network; CNN; 3D LiDAR point cloud; domain transform.

DOI: 10.1504/IJAHUC.2022.121116

International Journal of Ad Hoc and Ubiquitous Computing, 2022 Vol.39 No.3, pp.113 - 121

Received: 15 Mar 2021
Accepted: 05 May 2021

Published online: 25 Feb 2022
