Navigating a 3D virtual environment of learning objects by hand gestures
by Qing Chen, A.S.M. Mahfujur Rahman, Xiaojun Shen, Abdulmotaleb El Saddik, Nicolas D. Georganas
International Journal of Advanced Media and Communication (IJAMC), Vol. 1, No. 4, 2007

Abstract: This paper presents a gesture-based Human-Computer Interface (HCI) for navigating a learning object repository mapped into a 3D virtual environment. With this interface, the user accesses learning objects by steering an avatar car with hand gestures. Haar-like features and the AdaBoost learning algorithm are used for gesture recognition, achieving real-time performance and high recognition accuracy. The learning objects are represented by different traffic signs grouped along the virtual highways. Compared with traditional HCI devices such as keyboards, hand gestures offer users a more intuitive and engaging way to communicate with virtual environments.
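The recognition approach named in the abstract pairs Haar-like features with AdaBoost, in the style of Viola-Jones detectors. As a rough illustration of the AdaBoost half of that pipeline (not the authors' implementation), the sketch below boosts one-dimensional decision stumps on toy data; in the paper's setting the weak learners would be threshold tests on Haar-like features computed over image windows.

```python
# Minimal AdaBoost sketch with decision stumps as weak learners.
# Illustrative only: real gesture detectors of this kind use Haar-like
# features on image windows, not 1-D toy samples.
import math

def train_stump(xs, ys, ws):
    """Find the (threshold, polarity) stump with minimum weighted error."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            preds = [pol if x >= thr else -pol for x in xs]
            err = sum(w for p, y, w in zip(preds, ys, ws) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best  # (weighted error, threshold, polarity)

def adaboost(xs, ys, rounds=3):
    """Return a list of (alpha, threshold, polarity) weak learners."""
    n = len(xs)
    ws = [1.0 / n] * n          # uniform initial sample weights
    model = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, ws)
        err = max(err, 1e-10)   # avoid log(0) for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, pol))
        # Re-weight: increase weight on misclassified samples.
        ws = [w * math.exp(-alpha * y * (pol if x >= thr else -pol))
              for x, y, w in zip(xs, ys, ws)]
        z = sum(ws)
        ws = [w / z for w in ws]
    return model

def predict(model, x):
    """Sign of the alpha-weighted vote of all weak learners."""
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in model)
    return 1 if score >= 0 else -1

# Toy data: positives cluster above 5, negatives below.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = adaboost(xs, ys)
print([predict(model, x) for x in (2, 8)])  # -> [-1, 1]
```

The same boosting loop, applied to hundreds of thousands of Haar-like feature stumps and arranged into a rejection cascade, is what gives Viola-Jones-style detectors their real-time performance.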

Online publication date: Thu, 09-Aug-2007

The full text of this article is only available to individual subscribers or to users at subscribing institutions.
