Authors: Daniel Asmar; Samir Shaker
Addresses: Mechanical Engineering Department, American University of Beirut, Beirut 1107 2020, Lebanon
Abstract: Occupancy-grid simultaneous localisation and mapping (SLAM) has traditionally been implemented using range sensors such as lasers and sonars. This paper presents an implementation of occupancy-grid SLAM using a single camera as the only on-board exteroceptive sensor, in a manner similar to a laser-based system. Range information is extracted from each image using a new approach. First, the ground plane is determined based on the orientation and height of segmented regions; then virtual rays are cast into the camera's field of view from the optical centre to the intersection of each ray's 2D projection with the ground boundaries. Occupancy-grid SLAM then uses the extracted depth information to build a dense map of the robot's surroundings. Experiments conducted in a real laboratory setting demonstrate the effectiveness of the system.
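The range-extraction idea in the abstract can be illustrated with a short sketch. Under a flat-ground pinhole-camera assumption, a pixel on the ground at image row v (below the horizon row) lies at forward distance Z = f·h / (v − v_horizon), where h is the camera height and f the focal length in pixels. Scanning each image column from the bottom up to the edge of the ground segmentation then yields one range per column, like a virtual laser scan. All names, parameters, and the segmentation-mask input below are hypothetical; the paper's actual ground-plane detection and ray-casting details are not reproduced here.

```python
import numpy as np

def ground_row_to_range(v, v_horizon, f, h):
    """Flat-ground pinhole model (illustrative assumption): a ground pixel
    at image row v, below the horizon row v_horizon, lies at forward
    distance Z = f * h / (v - v_horizon), with camera height h."""
    dv = v - v_horizon
    if dv <= 0:
        return np.inf  # at or above the horizon: ray never meets the ground
    return f * h / dv

def virtual_scan(ground_mask, v_horizon, f, h):
    """For each image column, walk up from the bottom row while pixels are
    labelled ground; the first non-ground row is the ground boundary.
    Converting that row to a distance gives a laser-like range per column."""
    rows, cols = ground_mask.shape
    ranges = np.full(cols, np.inf)
    for u in range(cols):
        column = ground_mask[:, u]
        v = rows - 1
        while v >= 0 and column[v]:   # climb the contiguous ground run
            v -= 1
        boundary = v + 1              # topmost ground row of that run
        if boundary < rows:           # some ground was visible in this column
            ranges[u] = ground_row_to_range(boundary, v_horizon, f, h)
    return ranges
```

The resulting range array can feed a standard occupancy-grid update exactly as a laser scan would, which is the parallel the abstract draws.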
Keywords: occupancy grid SLAM; monocular SLAM; dense depth maps; structured indoor environments; single camera; range information; robot vision; robot localisation; robot mapping; robot navigation; autonomous navigation.
International Journal of Mechatronics and Automation, 2012 Vol.2 No.2, pp.112 - 124
Available online: 25 Jul 2012