Multi-modal motion dictionary learning for facial expression recognition
by Jin-Chul Kim; SungYong Chun; Chan-Su Lee
International Journal of Computational Vision and Robotics (IJCVR), Vol. 7, No. 4, 2017

Abstract: Dictionary learning has recently been investigated actively in image and signal processing, and a variety of dictionary learning methods for classification problems have been proposed. In this paper, we propose a facial expression recognition system using a multi-modal motion dictionary based on motion flow, composed of motion flow intensity and motion flow direction. At the dictionary learning stage, two dictionaries with different modalities are learned, one from the motion flow intensity and one from the motion flow angle data of facial expression image sequences. For classification, we construct a feature vector for each image sequence by concatenating the two weight vectors obtained from the individual dictionaries. Experimental results show higher accuracy than conventional reconstruction-based approaches, with greatly reduced classification time. The proposed approach is a promising method for real-time facial expression recognition from motion flow data.
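The pipeline the abstract describes can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the dictionary learner (scikit-learn's `DictionaryLearning`), the classifier (`LinearSVC`), and all sizes (`n_seq`, `n_dims`, `n_atoms`, the number of expression classes) are assumptions, and the random arrays stand in for per-sequence motion-flow intensity and angle features that would really come from optical flow between consecutive frames.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-sequence motion-flow features
# (hypothetical sizes; real features would be extracted from
# optical flow of facial expression image sequences).
n_seq, n_dims = 60, 40
flow_intensity = rng.random((n_seq, n_dims))
flow_angle = rng.random((n_seq, n_dims))
labels = rng.integers(0, 3, n_seq)  # e.g. 3 expression classes

# Stage 1: learn one dictionary per modality and obtain the
# sparse weight (code) vector of each sequence in that dictionary.
n_atoms = 16
dict_int = DictionaryLearning(n_components=n_atoms, max_iter=100,
                              transform_algorithm="lasso_lars",
                              random_state=0)
w_int = dict_int.fit_transform(flow_intensity)
dict_ang = DictionaryLearning(n_components=n_atoms, max_iter=100,
                              transform_algorithm="lasso_lars",
                              random_state=0)
w_ang = dict_ang.fit_transform(flow_angle)

# Stage 2: concatenate the two weight vectors per sequence and
# train a discriminative classifier on the joint feature, rather
# than classifying by reconstruction error.
features = np.hstack([w_int, w_ang])
clf = LinearSVC().fit(features, labels)
print(features.shape)  # (60, 32): 16 weights per modality
```

Classifying the concatenated codes with a single discriminative model is what avoids the per-class reconstruction step at test time, which is consistent with the reduced classification time the abstract reports.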

Online publication date: Mon, 10-Jul-2017

The full text of this article is only available to individual subscribers or to users at subscribing institutions.
