
Title: Multimodal fusion of different medical image modalities using optimised hybrid network

Authors: Tanima Ghosh; N. Jayanthi

Addresses (both authors): Department of Electronics and Communication Engineering, Delhi Technological University, Bawana Road, Delhi-42, India

Abstract: Image fusion leverages the strengths of various imaging modalities to create a more complete and informative picture of medical conditions, which leads to better identification and treatment. Accordingly, this paper implements a new multimodal image fusion approach, named pelican optimisation algorithm-based DenseNet and ResidualNet (POA+Dense-ResNet). Here, the POA is used to train the Dense-ResNet, which is a combination of ResidualNet and DenseNet. The input images from different modalities are pre-processed, and the transformation from the spatial domain to the spectral domain is then performed by the dual-tree complex wavelet transform (DTCWT). The transformed images are segmented by an edge-attention guidance network (ET-Net), and the fusion is carried out by the POA+Dense-ResNet. The POA+Dense-ResNet achieved a minimum root mean square error (RMSE) of 0.650, a minimum mean square error (MSE) of 0.423, and a maximum peak signal to noise ratio (PSNR) of 53.525 dB.
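The quality metrics reported in the abstract (MSE, RMSE, and PSNR) have standard definitions for image fusion evaluation. The following is a minimal NumPy sketch of how such metrics are typically computed between a fused image and a reference; the function name and the 8-bit peak value of 255 are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fusion_metrics(fused, reference, peak=255.0):
    """Return (MSE, RMSE, PSNR in dB) between a fused image and a
    reference image of the same shape. `peak` is the maximum possible
    pixel value (255 assumed here for 8-bit images)."""
    fused = np.asarray(fused, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    mse = np.mean((fused - reference) ** 2)      # mean square error
    rmse = np.sqrt(mse)                          # root mean square error
    # PSNR = 10 * log10(peak^2 / MSE); infinite for identical images
    psnr = np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, rmse, psnr

# Example: a constant offset of 10 grey levels gives MSE = 100, RMSE = 10
mse, rmse, psnr = fusion_metrics(np.full((4, 4), 10.0), np.zeros((4, 4)))
```

A lower MSE/RMSE and higher PSNR indicate that the fused output preserves the reference content more faithfully, which is the sense in which the abstract's 0.423 MSE and 53.525 dB PSNR figures are reported as best.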

Keywords: ResidualNet; edge-attention guidance network; pelican optimisation algorithm; DenseNet; dual-tree complex wavelet transform; mean square error; MSE; peak signal to noise ratio; PSNR.

DOI: 10.1504/IJAHUC.2025.143546

International Journal of Ad Hoc and Ubiquitous Computing, 2025 Vol.48 No.1, pp.19 - 33

Received: 14 Oct 2023
Accepted: 20 Feb 2024

Published online: 30 Dec 2024
