Title: Multi-modal motion dictionary learning for facial expression recognition

Authors: Jin-Chul Kim; SungYong Chun; Chan-Su Lee

Addresses: Department of Electronic Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Rep. of Korea; Department of Electronic Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Rep. of Korea; Department of Electronic Engineering, Yeungnam University, Gyeongsan, Gyeongbuk, Rep. of Korea

Abstract: Recently, dictionary learning has been actively investigated in image and signal processing, and a variety of dictionary learning methods for classification problems have been proposed. In this paper, we propose a facial expression recognition system using a multi-modal motion dictionary based on motion flow, composed of motion flow intensity and motion flow direction. At the dictionary learning stage, two dictionaries with different modalities are learned: one from the motion flow intensity and one from the motion flow angle data of facial expression image sequences. For classification, we form a feature vector for each image sequence by concatenating the two weight vectors obtained from the individual dictionaries. Experimental results show higher accuracy than conventional reconstruction-based approaches, with greatly reduced classification time. The proposed approach is thus a promising method for real-time facial expression recognition from motion flow data.
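The pipeline the abstract describes (one dictionary per modality, sparse codes concatenated into a feature vector, then a discriminative classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the descriptors, dictionary sizes, and the choice of scikit-learn's `DictionaryLearning` and `LinearSVC` are all assumptions made for the sketch.

```python
# Hypothetical sketch of the multi-modal motion dictionary pipeline:
# learn one dictionary per modality (motion-flow intensity, motion-flow
# angle), sparse-code each sequence against its dictionary, concatenate
# the two weight vectors, and train a linear classifier on the result.
# All data below is synthetic; parameters are illustrative only.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, n_features, n_atoms = 60, 32, 8
labels = rng.integers(0, 3, n_samples)  # three mock expression classes

# Synthetic stand-ins for per-sequence intensity and angle descriptors.
intensity = rng.standard_normal((n_samples, n_features)) + labels[:, None]
angle = rng.standard_normal((n_samples, n_features)) - labels[:, None]

codes = []
for modality in (intensity, angle):
    # One dictionary per modality, as in the abstract.
    dl = DictionaryLearning(n_components=n_atoms, max_iter=20,
                            random_state=0)
    codes.append(dl.fit_transform(modality))  # sparse codes = weight vectors

X = np.hstack(codes)  # concatenated multi-modal feature vector
clf = LinearSVC(dual=False).fit(X, labels)
print("feature dim:", X.shape[1], "train acc:", round(clf.score(X, labels), 2))
```

Classifying in the concatenated code space avoids reconstructing each test sample against every class-specific dictionary, which is the source of the reduced classification time the abstract reports.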

Keywords: dictionary learning; facial expression recognition; motion flow estimation.

DOI: 10.1504/IJCVR.2017.084996

International Journal of Computational Vision and Robotics, 2017 Vol.7 No.4, pp.443 - 453

Received: 09 May 2015
Accepted: 03 Sep 2015

Published online: 10 Jul 2017
