Abstract:
Detailed information about objects of interest plays a vital role in modern medical diagnosis. However, existing multimodal sensor fusion methods suffer from low contrast and color distortion during integration. Preserving detail information with high contrast is therefore a worthwhile goal in medical image fusion. This paper presents a new multiscale fusion framework based on the local Laplacian pyramid transform (LLP) and an adaptive cloud model (ACM). The proposed framework, LLP+ACM, comprises three key modules. First, the input images are decomposed into detail-enhanced approximate images and residual images using LLP. Second, the ACM is used to fuse the approximate images, while a salience match measure is used to fuse the residual images. Third, the fused image is reconstructed using the inverse LLP. Experiments show that the proposed LLP+ACM significantly enhances detail information with high contrast and reduces color distortion in the fused images, in both subjective and objective evaluations.
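For orientation, the sketch below illustrates the three-module structure described above. It is not the paper's implementation: a standard Laplacian pyramid stands in for LLP, simple averaging of the approximate images stands in for the adaptive cloud model, and max-absolute-coefficient selection stands in for salience matching.

```python
# Minimal sketch of a multiscale decompose-fuse-reconstruct pipeline.
# Placeholders (assumptions, not the paper's method) are marked below.
import cv2
import numpy as np

def build_pyramid(img, levels=4):
    """Decompose an image into detail (residual) levels plus an approximate base."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    detail = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        detail.append(gauss[i] - up)
    return detail, gauss[-1]  # residual levels, approximate base

def fuse(img_a, img_b, levels=4):
    detail_a, base_a = build_pyramid(img_a, levels)
    detail_b, base_b = build_pyramid(img_b, levels)

    # Placeholder for the adaptive cloud model: average the approximate images.
    base = 0.5 * (base_a + base_b)

    # Placeholder for salience matching: keep the larger-magnitude coefficient.
    fused_detail = [np.where(np.abs(a) >= np.abs(b), a, b)
                    for a, b in zip(detail_a, detail_b)]

    # Inverse transform: collapse the pyramid back to a single fused image.
    out = base
    for d in reversed(fused_detail):
        out = cv2.pyrUp(out, dstsize=(d.shape[1], d.shape[0])) + d
    return np.clip(out, 0, 255).astype(np.uint8)
```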