
Title: Salient object detection using semantic segmentation technique

Authors: Bashir Ghariba; Mohamed S. Shehata; Peter McGuire

Addresses: Faculty of Engineering and Applied Science, Memorial University, St. John's, Newfoundland, A1B-3X7, Canada; Department of Electrical and Computer Engineering, Faculty of Engineering, Elmergib University, Khoms, Libya; Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia, Kelowna, BC, Canada; C-CORE, Captain Robert A. Bartlett Building, 1 Morrissey Road, St. John's, Newfoundland, NL, A1C-3X5, Canada

Abstract: Salient object detection (SOD) is the task of detecting and segmenting a salient object in a natural scene. Several studies have examined various state-of-the-art machine learning approaches for SOD. In particular, deep convolutional neural networks (CNNs) are commonly applied to SOD because of their powerful feature extraction abilities. In this paper, we investigate the semantic segmentation capability of several well-known pre-trained models, including FCNs, VGGs, ResNets, MobileNet-v2, Xception and InceptionResNet-v2. These models were trained on the ImageNet dataset, fine-tuned on the MSRA-10K dataset and evaluated on other public datasets, such as ECSSD, MSRA-B, DUTS and THUR15k. The results illustrate the superiority of ResNet50 and ResNet18, which achieve mean absolute errors (MAE) of approximately 0.93 and 0.92, respectively, compared to other well-known FCN models. Moreover, ResNet50 is the most robust model against noise, whereas VGG-16 is the most sensitive, relative to other state-of-the-art models.
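The abstract evaluates models by mean absolute error between predicted saliency maps and ground-truth masks. A minimal sketch of that metric, assuming both maps are normalised to [0, 1] (the array values below are hypothetical, not taken from the paper):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and a
    ground-truth binary mask, both normalised to the range [0, 1]."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return float(np.mean(np.abs(pred - gt)))

# Tiny illustrative example on a 2x2 "image" (hypothetical values).
pred = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
gt = np.array([[1.0, 0.0],
               [1.0, 0.0]])
print(mae(pred, gt))  # -> 0.15
```

In the SOD literature a lower MAE indicates a closer match to the ground-truth mask; the score is typically averaged over all images in a benchmark dataset.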

Keywords: salient object detection; SOD; deep learning; fully convolutional network; FCN; semantic segmentation.

DOI: 10.1504/IJCVR.2022.119240

International Journal of Computational Vision and Robotics, 2022 Vol.12 No.1, pp.17 - 38

Received: 08 Mar 2020
Accepted: 27 Aug 2020

Published online: 30 Nov 2021
