Unlabeled facial expression capture method in virtual reality system based on big data
by Feng Gao
International Journal of Information and Communication Technology (IJICT), Vol. 18, No. 3, 2021

Abstract: To address the high error rates and low efficiency of traditional markerless facial expression capture, a markerless facial expression capture method based on big data is proposed. Haar features are used to locate the initial position of the face, and an active shape model (ASM) extracts the unmarked facial feature points. The extracted feature points, together with the triangle mesh generated from them, are then tracked with an optical flow method; the displacement of the facial feature points drives the deformation of the whole mesh, completing the markerless capture of the facial expression. Experimental results show that the error rate of the method lies in the range of 1.2%-1.7% and that capturing a facial expression takes 20-34 s, making the method practical and efficient.
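The full text is not reproduced here, so the following is only an illustrative sketch of the kind of optical-flow tracking step the abstract describes, not the paper's actual method: a single Lucas-Kanade least-squares update that estimates how one facial landmark moves between two grayscale frames. The function name, window size, and synthetic test image are all assumptions for illustration.

```python
import numpy as np

def track_point_lk(frame1, frame2, y, x, win=15):
    """One Lucas-Kanade least-squares step (illustrative): estimate the
    (dy, dx) displacement of the feature at (y, x) between two frames."""
    f1 = np.asarray(frame1, dtype=float)
    f2 = np.asarray(frame2, dtype=float)
    Iy, Ix = np.gradient(f1)   # spatial image gradients
    It = f2 - f1               # temporal gradient
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    # Brightness constancy over the window: Iy*dy + Ix*dx + It ~= 0,
    # solved for (dy, dx) in the least-squares sense.
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dy, dx), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dy, dx

# Synthetic check: a Gaussian blob shifted one pixel to the right,
# standing in for a tracked facial landmark.
yy, xx = np.mgrid[0:64, 0:64]
f1 = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
f2 = np.exp(-((yy - 32) ** 2 + (xx - 33) ** 2) / 50.0)
dy, dx = track_point_lk(f1, f2, 32, 32)  # dx near 1, dy near 0
```

In a pipeline like the one the abstract outlines, a per-frame displacement such as (dy, dx) would be computed for each ASM landmark and used to drive the deformation of the triangle mesh; production systems typically use a pyramidal implementation such as OpenCV's cv2.calcOpticalFlowPyrLK rather than this single-point sketch.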

Online publication date: Mon, 10-May-2021
