Title: A framework for automatically constructing a dataset for training a vehicle detector

Authors: Changyon Kim; Jeonghwan Gwak; Daeyoung Shim; Moongu Jeon

Addresses: Defense Systems Division, Satrec Initiative, 1628-21 Yuseong-daero, Yuseong-gu, Daejeon 34054, South Korea; Biomedical Research Institute, and Department of Radiology, Seoul National University Hospital, Seoul 03080, South Korea; Department of Architecture, College of Engineering, Catholic Kwandong University, Gangneung 25601, South Korea; School of Information and Communications, Gwangju Institute of Science and Technology, Gwangju 61005, South Korea

Abstract: Object detection based on a trained detector has been widely applied to diverse tasks such as pedestrian, face, and vehicle detection. In such an approach, detectors are learned offline with an enormous number of training samples. However, the approach has a significant drawback: constructing a reliable training dataset requires heavy human intervention and effort, as well as domain knowledge. To remedy this drawback, we propose a framework that collects and labels training samples automatically. By analysing the information of foreground blobs obtained from background subtraction results, a training dataset can be constructed without any human effort. In addition, the scene conditions are investigated periodically to check the suitability of sample candidates. As a result, the framework generates an accurate vehicle detector. With the proposed method, training samples are collected automatically only when vehicle blobs in the given scene provide suitable appearance information. The effectiveness of the proposed framework is demonstrated on vehicle detection tasks under real traffic environments.
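The pipeline summarised in the abstract (background subtraction, foreground blob extraction, and blob-based filtering of sample candidates) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the median background model, the difference threshold, and the area and aspect-ratio limits for "vehicle-like" blobs are all hypothetical choices made here for illustration.

```python
# Illustrative sketch of automatic sample collection via background
# subtraction. All thresholds (diff_thresh, min_area, ar_range) are
# hypothetical values, not taken from the paper.

def background_model(frames):
    """Per-pixel median over a list of equal-sized 2D grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = sorted(f[y][x] for f in frames)
            bg[y][x] = vals[len(vals) // 2]
    return bg

def foreground_blobs(frame, bg, diff_thresh=30):
    """Threshold |frame - bg| and group foreground pixels into 4-connected blobs."""
    h, w = len(frame), len(frame[0])
    fg = [[abs(frame[y][x] - bg[y][x]) > diff_thresh for x in range(w)]
          for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood fill one connected component
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(pixels)
    return blobs

def vehicle_candidates(blobs, min_area=4, ar_range=(0.5, 3.0)):
    """Keep blobs whose bounding box looks vehicle-like (area and aspect ratio)."""
    boxes = []
    for px in blobs:
        ys = [p[0] for p in px]
        xs = [p[1] for p in px]
        wdt = max(xs) - min(xs) + 1
        hgt = max(ys) - min(ys) + 1
        if len(px) >= min_area and ar_range[0] <= wdt / hgt <= ar_range[1]:
            boxes.append((min(xs), min(ys), wdt, hgt))
    return boxes
```

Boxes surviving the filter would then be cropped from the frame and labelled as positive vehicle samples; the periodic scene-condition check described in the abstract would gate whether collection runs at all.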

Keywords: object detection; optimal vehicle detector; appearance model; scene condition investigation; automatic sample collection.

DOI: 10.1504/IJCVR.2019.098800

International Journal of Computational Vision and Robotics, 2019 Vol.9 No.2, pp.192 - 206

Available online: 18 Mar 2019
