Multimodal information interaction and fusion for the parallel computing system using AI techniques
Online publication date: Tue, 22-Feb-2022
by Yang Li; Wei Li; Na Li; Xiaoli Qiu; Karthik Bala Manokaran
International Journal of High Performance Systems Architecture (IJHPSA), Vol. 10, No. 3/4, 2021
Abstract: Recently, multimodal information fusion systems have become popular for increasing the reliability of recognition systems. These systems employ data from different modalities; since each modality captures information with different attributes, fusing this information helps achieve better solutions. In this research, the authors present a multimodal fusion scheme for parallel computing. A novel multimodal fusion-based parallel computing (MMFPC) model is proposed, along with a new technique for generating history images. Feature extraction is performed using GLCM and HOG features. The multimodal features are fused with a weighted fusion technique that prioritises the modalities containing more valuable data. Classification is analysed using different artificial intelligence algorithms. Finally, the proposed scheme is evaluated on a public fall detection dataset, where it achieves a high accuracy of 96.77% and a high specificity of 93.52%.
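The weighted fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight values, vector sizes, and the choice of L2 normalisation before concatenation are all assumptions made for the example.

```python
import numpy as np

def weighted_fusion(feat_a, feat_b, w_a=0.6, w_b=0.4):
    """Fuse two modality feature vectors by weighted concatenation.

    Each vector is L2-normalised first so that the weights, rather
    than raw feature magnitudes, control each modality's contribution.
    The weights here (0.6 / 0.4) are hypothetical values favouring the
    modality judged to carry more valuable data.
    """
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([w_a * a, w_b * b])

# Toy inputs standing in for a GLCM texture vector and a HOG vector
# (the dimensionalities are illustrative, not taken from the paper).
glcm_feat = np.random.rand(16)
hog_feat = np.random.rand(36)
fused = weighted_fusion(glcm_feat, hog_feat)
print(fused.shape)  # (52,)
```

The fused vector would then be passed to the downstream classifier; normalising before weighting keeps a high-dimensional modality (e.g. HOG) from dominating simply by having more components.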