Authors: Natsuki Miyata; Yuki Shimizu; Yuichi Motoki; Yusuke Maeda; Masaaki Mochimaru
Addresses: Digital Human Research Centre, National Institute of Advanced Industrial Science and Technology, 2-3-26, Aomi, Koto-ku, Tokyo 135-0064, Japan; JFE Engineering Corporation, Nihon Building, 2-6-2, Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan; Hitachi Solutions, Ltd., 4-12-7, Higashishinagawa, Shinagawa-ku, Tokyo 140-0002, Japan; 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan; 2-3-26, Aomi, Koto-ku, Tokyo 135-0064, Japan
Abstract: This paper proposes a method to reconstruct hand behaviour from motion capture (MoCap) data by building an individual hand model. The hand model consists of a skin surface and a skeleton linkage, and can change its posture arbitrarily. To build the individual hand model, our system uses a palmar-side photograph and marker positions captured simultaneously by MoCap. From this modelling capture, several hand dimensions and marker positions are obtained. Joint centres are estimated by regression analysis relating joint centres to marker positions and hand dimensions, derived from magnetic resonance imaging (MRI) data of eight subjects. The skin surface is built by scaling a generic hand model so that it satisfies the measured dimensions. The proposed system was validated through experiments that built hand models of ten subjects and compared the reconstructed dorsal shape of one of the subjects when grasping a cylinder.
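The regression step described in the abstract can be illustrated with a minimal sketch: a multivariate linear regression that maps a feature vector (marker positions plus hand dimensions) to a 3-D joint-centre position, fitted on data from a small pool of subjects. All variable names and the synthetic data below are hypothetical; the paper's actual regressors and MRI-derived training set are not reproduced here.

```python
import numpy as np

# Hypothetical training data: for each of 8 subjects (mirroring the
# paper's MRI pool) we have a feature vector of marker coordinates
# and hand dimensions, plus a measured 3-D joint-centre position.
rng = np.random.default_rng(0)
n_subjects, n_features = 8, 5
X = rng.normal(size=(n_subjects, n_features))    # markers + dimensions
X1 = np.hstack([X, np.ones((n_subjects, 1))])    # append intercept column
true_W = rng.normal(size=(n_features + 1, 3))    # unknown linear map
Y = X1 @ true_W                                  # joint-centre (x, y, z)

# Fit the regression coefficients by least squares.
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# Predict a joint centre for a new subject from that subject's
# MoCap markers and photo-derived hand dimensions.
x_new = np.hstack([rng.normal(size=n_features), 1.0])
joint_centre = x_new @ W
print(joint_centre.shape)  # one 3-D point: (3,)
```

In practice one such regression would be fitted per joint, and the predictors would be chosen from the measured dimensions and marker positions rather than random data.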
Keywords: individual hand models; motion capture; MoCap; joint centre estimation; modelling; hand behaviour; skin surface; skeleton; hand dimensions; magnetic resonance imaging; MRI scans; dorsal shape; cylinder grasping; digital human models; digital human modelling; DHM.
International Journal of Human Factors Modelling and Simulation, 2012 Vol.3 No.2, pp.147 - 168
Available online: 19 Dec 2012