Real-time road object segmentation using improved light-weight convolutional neural network based on 3D LiDAR point cloud
by Guoqiang Chen; Bingxin Bai; Zhuangzhuang Mao; Jun Dai
International Journal of Ad Hoc and Ubiquitous Computing (IJAHUC), Vol. 39, No. 3, 2022
Abstract: It is critical that autonomous navigation systems segment the objects captured by their sensors (cameras or LiDAR scanners) in real time. This paper proposes a convolutional neural network (CNN) for real-time semantic segmentation of road objects (pedestrians, cars, cyclists). The proposed network structure is based on the light-weight network SqueezeNet, which is small enough to be stored directly in the embedded hardware of an autonomous vehicle. The input of the proposed CNN is the transformed 3D LiDAR point cloud, and a domain transform (DT) aligns segment boundaries precisely with object boundaries, yielding a high-quality point-wise label map as the output. In addition to comparing the segmentation results with deep-learning-based pipelines, a visual comparison with traditional 3D point cloud segmentation pipelines is also made. Experiments show that the proposed CNN achieves a fast running time (6.2 ms per frame) and realises real-time semantic segmentation of objects in autonomous driving scenes while maintaining comparable segmentation accuracy.
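The abstract states that the CNN's input is a transformed 3D LiDAR point cloud, but does not specify the transform. A common choice in light-weight LiDAR segmentation networks is a spherical projection that maps each point onto a 2D range image the CNN can consume. The sketch below illustrates that idea only; the resolution and field-of-view values (64 rows, 512 columns, +3° to -25°, roughly matching a 64-beam scanner) are assumptions, not values taken from the paper.

```python
import math

def spherical_projection(points, h=64, w=512, fov_up=3.0, fov_down=-25.0):
    """Project a list of (x, y, z) LiDAR points onto an h x w range image.

    Generic spherical-projection sketch; the paper's exact transform is
    not given in the abstract. FOV defaults approximate a 64-beam scanner.
    """
    fov_up_r = math.radians(fov_up)
    fov_down_r = math.radians(fov_down)
    fov = fov_up_r - fov_down_r          # total vertical field of view
    image = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)   # range of the point
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)                 # azimuth angle
        pitch = math.asin(max(-1.0, min(1.0, z / r)))
        # Normalise the angles into image row/column indices
        u = int((1.0 - (pitch - fov_down_r) / fov) * h)
        v = int(0.5 * (1.0 - yaw / math.pi) * w)
        u = max(0, min(h - 1, u))
        v = max(0, min(w - 1, v))
        image[u][v] = r                        # store range at that pixel
    return image
```

Each pixel of the resulting image holds the range of the point that landed there, giving the network a dense 2D view of the sparse 3D scan.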
Online publication date: Fri, 25-Feb-2022