Title: A supervised multimodal search re-ranking technique using visual semantics

Authors: Nikhila T. Bhuvan; M. Sudheep Elayidom

Addresses: School of Engineering, Cochin University of Science and Technology, Kerala, India; Department of Information Technology, Rajagiri School of Engineering and Technology, Rajagiri Valley, Kakkanad, Kerala, India; School of Engineering, Cochin University of Science and Technology, Kerala, India

Abstract: The multimedia content in a webpage is usually given the least importance in webpage ranking. Better user satisfaction could be achieved if web pages were ranked based on multiple modalities rather than just the textual content. A better ranking of web pages is proposed using natural language descriptions of images along with the textual content of a webpage. The inter-modal correspondences between text and visual data are learned using a convolutional neural network assisted by datasets of images and their sentence descriptors. The model uses convolutional neural networks over images to generate the image descriptor and the Dandelion API to measure its similarity with the query. The image description is generated algorithmically rather than depending on any image annotations present. Finally, it is shown that the web pages re-ranked using the generated descriptions significantly outperform state-of-the-art retrieval models.
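The abstract describes a two-signal pipeline: generate captions for a page's images with a CNN model, score both the page text and the captions against the query for semantic similarity, then re-rank by a combined score. The following minimal Python sketch illustrates that combination step under stated assumptions: captions are taken as already generated by the CNN, `jaccard_sim` is a simple word-overlap stand-in for the Dandelion similarity API, and the weight `alpha` is a hypothetical parameter not given in the abstract.

```python
# Hypothetical sketch of the re-ranking step, NOT the authors' implementation.
# Assumptions: captions are pre-generated by the CNN captioning model;
# jaccard_sim is a toy stand-in for the Dandelion semantic-similarity API.

def jaccard_sim(a: str, b: str) -> float:
    """Word-overlap similarity as a placeholder for a semantic score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def rerank(pages, query, alpha=0.5):
    """Combine textual and visual (caption) similarity and sort descending.

    alpha is an assumed mixing weight between the text score and the
    best image-caption score for each page.
    """
    scored = []
    for page in pages:
        text_score = jaccard_sim(page["text"], query)
        caption_scores = [jaccard_sim(c, query) for c in page["captions"]]
        visual_score = max(caption_scores, default=0.0)
        scored.append((alpha * text_score + (1 - alpha) * visual_score,
                       page["url"]))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [url for _, url in scored]

# Toy example: the page whose image caption matches the query wins.
pages = [
    {"url": "a", "text": "holiday offers",
     "captions": ["a dog playing fetch in a park"]},
    {"url": "b", "text": "pet care tips",
     "captions": ["a dog running on grass"]},
]
print(rerank(pages, "dog playing in a park"))  # → ['a', 'b']
```

A production version would replace `jaccard_sim` with calls to the Dandelion text-similarity service and generate captions with the trained CNN, but the ranking logic itself reduces to this weighted combination and sort.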

Keywords: automatic image annotation; convolutional neural networks; image descriptor; multimodality search; search re-ranking; semantic similarity; supervised re-ranking; visual semantics.

DOI: 10.1504/IJIE.2020.104660

International Journal of Intelligent Enterprise, 2020 Vol.7 No.1/2/3, pp.279 - 290

Received: 29 Nov 2018
Accepted: 14 Jan 2019

Published online: 24 Jan 2020
