Title: Comparing 2D image features on viewpoint independence using 3D anthropometric dataset

Authors: Pengcheng Xi; Chang Shu; Rafik Goubran

Addresses: Carleton University, Ottawa, Canada; National Research Council Canada, Ottawa, Ontario, Canada; Information and Communications Technologies, National Research Council Canada, Ottawa, Ontario, Canada; Department of Systems and Computer Engineering, Carleton University, Ottawa, Ontario, Canada

Abstract: We study the viewpoint independence of image features in the classification of identities from multiple-view full-body images. A reliable vision system should be robust in classifying objects in images captured from novel viewpoints. To obtain a robust classifier, we collect 3D models and render training and testing images from various viewpoints. These images are then used to extract features and build classifiers. In this work, we compute multiple-view human-body images from a 3D anthropometric human-body database. For each subject, a majority of the views are randomly selected for the training dataset and the remaining views are used for testing. Specifically, we use a support vector machine (SVM) trained on histogram of oriented gradients (HOG) features as the baseline, and compare it with a deep auto-encoder network and deep convolutional neural networks (CNN). Through experiments, we conclude that the deep CNN performs best (with the deep auto-encoder network as runner-up) in computing viewpoint-independent image features for identity classification from 2D full-body images.
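The HOG features underlying the SVM baseline can be illustrated with a minimal sketch. This simplified version (numpy only, block normalisation omitted, and the function name `hog_descriptor` is ours) is not the paper's exact pipeline, just the core idea of magnitude-weighted orientation histograms per image cell:

```python
import numpy as np

def hog_descriptor(img, n_bins=9, cell=8):
    """Simplified HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude. (Full HOG also applies block
    normalisation, which is omitted here for brevity.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientations
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0.0, 180.0),
                                   weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)            # L2-normalised descriptor

# Toy usage: a 64x64 image with a simple body-like blob.
img = np.zeros((64, 64))
img[16:48, 24:40] = 1.0
desc = hog_descriptor(img)
print(desc.shape)                                    # (576,) = 8x8 cells x 9 bins
```

Descriptors like this one, computed for each rendered view, would then be fed to an SVM classifier; the deep models in the paper instead learn their features directly from the images.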

Keywords: machine learning; multi-layer neural networks; feature extraction; machine vision; 3D anthropometry; CAESAR; convolutional neural networks; deep learning.

DOI: 10.1504/IJDH.2016.084593

International Journal of the Digital Human, 2016 Vol.1 No.4, pp.412 - 425

Available online: 31 May 2017
