Support vector machine-based approach for text description from the video
by Vishakha Wankhede; Ramesh M. Kagalkar
International Journal of Computational Vision and Robotics (IJCVR), Vol. 8, No. 4, 2018

Abstract: Humans describe the visual world around them through language, whether written or spoken, and interest in generating textual descriptions of video is therefore growing. This paper presents a framework that produces a natural-language description for a long video. The framework is divided into two sections: training and testing. In the training section, each video is annotated with a description of the activities of the objects it contains, and this data is stored in a database together with features of the video's scenario. In the testing section, a query video is compared against the videos stored in the database (i.e., from the training section) and a description of the video is retrieved as output. Sentences are generated from the detected objects and their activities using natural language processing (NLP). For the evaluation, videos of up to 50 seconds in length are used.
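The train/test pipeline sketched in the abstract can be illustrated with a minimal example, assuming scikit-learn's SVM implementation; the feature vectors, activity labels, and sentence template below are illustrative placeholders, not the authors' actual data or method details.

```python
# Minimal sketch of the described pipeline: train an SVM on labeled
# video features, then classify a test video's features and emit a
# template sentence. All data here is synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC

# Training section: feature vectors extracted from training videos,
# each labeled with the activity observed (placeholder data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))
activities = ["walking", "running"]          # hypothetical activity classes
y_train = np.array([0] * 20 + [1] * 20)
X_train[y_train == 1] += 2.0                 # make the classes separable

clf = SVC(kernel="rbf").fit(X_train, y_train)

def describe(video_features, subject="a person"):
    """Testing section: classify the activity and generate a sentence."""
    label = clf.predict(video_features.reshape(1, -1))[0]
    return f"{subject} is {activities[label]} in the video"

sentence = describe(rng.normal(size=8) + 2.0)
print(sentence)
```

In the paper's framework, the template-based sentence generation would be replaced by the NLP module that composes sentences from the recognized objects and activities.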

Online publication date: Fri, 10-Aug-2018

The full text of this article is only available to individual subscribers or to users at subscribing institutions.

 