Title: EMIF: Equivariant Multimodal Medical Image Fusion Network Via Super Token and Haar Wavelet Downsampling
Volume: 18
Author(s): Yukun Zhang, Lei Wang*, Zizhen Huang, Yaolong Han, Shanliang Yang and Bin Li
Affiliation:
- School of Computer Science and Technology, Shandong University of Technology, Zibo 255049, China
Keywords:
Medical image fusion, image fusion, transformer, image processing, medical diagnostic imaging, MRI.
Abstract:
Background: Multimodal medical image fusion is a core tool for enhancing the clinical
utility of medical images by integrating complementary information from multiple images. However,
existing deep learning-based fusion methods struggle to effectively extract key target
features and tend to produce blurry results.
Objective: The main objective of this paper is to propose a medical image fusion method that effectively
extracts features from the source images and preserves them in the fused results.
Methods: The proposed method employs prior knowledge and a dual-branch U-shaped structure
to extract both local and global features from images of different modalities. A
novel Transformer module is designed to capture global correlations at the super-pixel level.
Each feature extraction module uses Haar wavelet downsampling to reduce the spatial resolution
of the feature maps while preserving as much information as possible, effectively reducing
information uncertainty.
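The information-preserving property of Haar wavelet downsampling can be illustrated with a minimal NumPy sketch. This is an assumption-based illustration of the standard 2D Haar decomposition, not the authors' implementation: each 2×2 block of a feature map is transformed into one low-frequency (LL) and three detail (LH, HL, HH) coefficients, which are stacked along the channel axis, so spatial resolution is halved without discarding information.

```python
import numpy as np

def haar_wavelet_downsample(x):
    """Lossless Haar downsampling of a feature map.

    x: array of shape (C, H, W) with even H and W.
    Returns shape (4*C, H/2, W/2): the LL, LH, HL, HH sub-bands
    stacked along the channel axis. The transform is orthonormal,
    so it is invertible and preserves signal energy.
    """
    a = x[:, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, 0::2, 1::2]  # top-right
    c = x[:, 1::2, 0::2]  # bottom-left
    d = x[:, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency approximation
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return np.concatenate([ll, lh, hl, hh], axis=0)
```

Because the four sub-bands jointly retain every input coefficient, a subsequent convolution can mix them freely while the downsampling step itself introduces no information loss, unlike max pooling or strided convolution.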
Results: Extensive experiments on public medical image datasets and a biological image dataset
demonstrated that the proposed method achieves superior performance in both qualitative and
quantitative evaluations.
Conclusion: This paper applies prior knowledge to medical image fusion and proposes a novel
dual-branch U-shaped medical image fusion network. Compared with nine state-of-the-art fusion
methods, the proposed method produces fused results with richer texture details and better
visual quality.