Authors: Ajune Wanis Ismail; Mohd Shahrizal Sunar
Addresses: Media and Games Innovation Centre of Excellence (MaGIC-X), UTM-IRDA Digital Media Centre, Universiti Teknologi Malaysia, 81310 Skudai Johor, Malaysia
Abstract: Multimodal fusion enables users to interact with computers through various input modalities such as speech, gesture, and eye gaze. As a first stage in proposing a multimodal interaction, input modalities such as speech and gesture need to be nominated before being integrated into an interface. This paper reviews the progress and issues in multimodal inputs for augmented reality (AR). It presents several related works to recap multimodal approaches, which have recently become one of the research trends in AR. It also discusses multimodal fusion in AR and reports on existing work in multimodal interaction with AR tools and techniques. In AR, multimodality is considered a solution for improving the interaction between virtual and physical entities; it is an ideal interaction technique for AR applications since AR supports interactions in the real and virtual worlds in real time. This paper describes recent AR developments that apply multimodal inputs to the AR environment. It also examines multimodal fusion issues and limitations, followed by the conclusion. The paper provides a guideline on multimodal fusion in AR, covering how to integrate multimodal inputs into an AR environment.
Keywords: augmented reality; multimodal fusion; user interaction; object manipulation; INNS-CIIS2014; speech; gesture; eye gaze; multimodal interaction.
International Journal of Computational Vision and Robotics, 2017 Vol.7 No.3, pp.240 - 254
Received: 19 Feb 2015
Accepted: 05 Aug 2015
Published online: 23 Mar 2017