Enhanced Deep Blind Hyperspectral Image Fusion

Abstract:

The goal of hyperspectral image fusion (HIF) is to reconstruct high spatial resolution hyperspectral images (HR-HSI) by fusing low spatial resolution hyperspectral images (LR-HSI) and high spatial resolution multispectral images (HR-MSI) without loss of spatial or spectral information. Most existing HIF methods assume that the observation models are known, which is unrealistic in many scenarios. To address this blind HIF problem, we propose a deep learning-based method that optimizes the observation model and the fusion process iteratively and alternately during reconstruction to enforce bidirectional data consistency, leading to better spatial and spectral accuracy. However, general deep neural networks inherently suffer from information loss, which prevents us from achieving this bidirectional data consistency. To overcome this problem, we enhance the blind HIF algorithm by making part of the deep neural network invertible through a slightly modified spectral normalization applied to the network weights. Furthermore, to reduce spatial distortion and feature redundancy, we introduce a Content-Aware ReAssembly of FEatures (CARAFE) module and an SE-ResBlock module into our network. The former boosts fusion performance, while the latter makes our model more compact. Experiments demonstrate that our model performs favorably against the compared methods in both nonblind and semiblind HIF settings.
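
As a rough illustration of the invertibility mechanism mentioned above, the sketch below shows standard spectral normalization in PyTorch: the largest singular value of a flattened convolution weight is estimated by power iteration, and the weight is rescaled so that its spectral norm stays below a constant c < 1, which bounds the Lipschitz constant of a residual branch and is a common way to keep residual blocks invertible. The function name spectrally_normalize, the target constant c, and the layer sizes are placeholders introduced here for illustration; the paper's "slightly modified" spectral normalization may differ in how the bound is chosen or applied.

    # Illustrative sketch only; not the authors' exact formulation.
    import torch

    def spectrally_normalize(weight: torch.Tensor, c: float = 0.9,
                             n_iters: int = 5) -> torch.Tensor:
        """Rescale `weight` so its estimated spectral norm is at most `c`."""
        w = weight.reshape(weight.shape[0], -1)        # flatten conv kernel to a 2-D matrix
        u = torch.randn(w.shape[0], device=w.device)
        for _ in range(n_iters):                       # power iteration on W^T W
            v = torch.nn.functional.normalize(w.t() @ u, dim=0)
            u = torch.nn.functional.normalize(w @ v, dim=0)
        sigma = torch.dot(u, w @ v)                    # estimated largest singular value
        scale = torch.clamp(c / sigma, max=1.0)        # shrink only if sigma exceeds c
        return weight * scale

    # Usage: normalize a convolution layer's weight before the forward pass.
    conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1)
    with torch.no_grad():
        conv.weight.copy_(spectrally_normalize(conv.weight))

Keeping the spectral norm of each residual branch strictly below 1 makes the mapping x + f(x) invertible by a fixed-point iteration, which is what allows the forward (observation) and backward (fusion) passes to remain consistent with one another.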