Hybrid lip shape feature extraction and recognition for human-machine interaction
by Yi Zhang; Jiao Liu; Yuan Luo; Huosheng Hu
International Journal of Modelling, Identification and Control (IJMIC), Vol. 18, No. 3, 2013

Abstract: Deaf and speech-impaired people cannot interact with robots through traditional voice-based human-machine interfaces (HMI). Lip motion offers these users an alternative way to communicate with machines, and it is also useful for hearing users in extremely noisy environments. However, recognising lip motion is difficult because the region of interest (ROI) is non-linear and noisy. This paper proposes a novel lip shape feature extraction method to address this difficulty, based on a hybrid of the dual-tree complex wavelet transform (DT-CWT) and the discrete cosine transform (DCT). The approximate shift invariance of the DT-CWT ensures that the same lip shape yields the same feature vector even when the lips occupy different positions within the ROI. The DCT is then applied to the feature vector generated by the DT-CWT, and the larger coefficients are retained to capture the key lip shape information while reducing the feature vector's dimensionality. Experimental results show that the method greatly improves the accuracy of lip shape recognition and enhances the robustness of the lip shape-based HMI.
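The two-stage pipeline described in the abstract, approximately shift-invariant magnitudes from a complex wavelet stage followed by DCT compression that keeps the larger coefficients, can be sketched as below. This is a minimal one-dimensional illustration, not the authors' implementation: the analytic band-pass filter bank stands in for a true DT-CWT, and the scale choices, `keep` count, and function names are assumptions for illustration only.

```python
import numpy as np
from scipy.fft import dct

def complex_wavelet_magnitude(signal, scales=(4, 8, 16)):
    """Approximately shift-invariant features from the magnitudes of
    complex (analytic) wavelet responses -- a simplified stand-in for
    the DT-CWT stage described in the paper."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    feats = []
    for s in scales:
        centre = 1.0 / s
        # Gaussian band-pass around the scale's centre frequency
        h = np.exp(-0.5 * ((freqs - centre) / (0.5 * centre)) ** 2)
        h[freqs < 0] = 0.0          # one-sided spectrum -> analytic signal
        response = np.fft.ifft(spectrum * h)
        feats.append(np.abs(response))  # magnitude is tolerant to shifts
    return np.concatenate(feats)

def lip_shape_feature(signal, keep=16):
    """Hybrid feature: complex-wavelet magnitudes compressed by a DCT,
    retaining the `keep` largest-magnitude coefficients (in fixed
    index order), as in the abstract's dimensionality-reduction step."""
    mag = complex_wavelet_magnitude(signal)
    coeffs = dct(mag, norm='ortho')
    idx = np.argsort(np.abs(coeffs))[::-1][:keep]  # larger coefficients
    return coeffs[np.sort(idx)]
```

As a sanity check of the intended behaviour, the feature of a lip-contour-like profile and of a slightly shifted copy should lie closer to each other than to the feature of an unrelated signal.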

Online publication date: Sat, 16-Aug-2014
