A Deep Framework for Hyperspectral Image Fusion Between Different Satellites

Abstract:

Recently, fusing a low-resolution hyperspectral image (LR-HSI) with a high-resolution multispectral image (HR-MSI) acquired by a different satellite has become an effective way to improve the resolution of an HSI. However, because the two images come from different imaging satellites, are captured under different illumination, and are acquired at adjacent but non-identical times, the LR-HSI and HR-MSI may not satisfy the observation models established by existing works, and the two images are difficult to register. To address these problems, we establish new observation models for LR-HSIs and HR-MSIs from different satellites, and then propose a deep-learning-based framework that handles the key steps of multi-satellite HSI fusion: image registration, blur kernel learning, and image fusion. Specifically, we first construct a convolutional neural network (CNN), called RegNet, to produce pixel-wise offsets between the LR-HSI and the HR-MSI, which are used to register the LR-HSI. Next, according to the new observation models, a tiny network, called BKLNet, is built to learn the spectral and spatial blur kernels; BKLNet and RegNet can be trained jointly. In the fusion stage, we further train a fusion network, called FusNet, by downsampling the registered data with the learned spatial blur kernel. Extensive experiments demonstrate the superiority of the proposed framework in both registration and fusion accuracy.
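
The following is a minimal PyTorch-style sketch of the registration and blur-kernel-learning steps described above. The layer widths, band counts, kernel size, scale factor, and the concrete realizations of RegNet, BKLNet, and the warping function are illustrative assumptions, not the authors' exact architecture; the intent is only to show how pixel-wise offsets can register the LR-HSI and how learnable spectral/spatial kernels can be trained jointly with the registration network via an observation-model consistency loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegNet(nn.Module):
    """Small CNN predicting pixel-wise (dx, dy) offsets between the
    (upsampled) LR-HSI and the HR-MSI (illustrative layer sizes)."""
    def __init__(self, hsi_bands, msi_bands):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(hsi_bands + msi_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),   # two channels: horizontal/vertical offsets
        )

    def forward(self, hsi_up, msi):
        return self.body(torch.cat([hsi_up, msi], dim=1))


def warp(img, offsets):
    """Register `img` by resampling it at positions shifted by the predicted offsets."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float()                # (h, w, 2) pixel coordinates
    grid = base.unsqueeze(0) + offsets.permute(0, 2, 3, 1)      # add pixel-wise offsets
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0                     # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)


class BKLNet(nn.Module):
    """Tiny network whose weights act as the spectral blur kernel (1x1 conv from
    HSI bands to MSI bands) and the spatial blur kernel (depthwise conv + subsampling)."""
    def __init__(self, hsi_bands, msi_bands, ksize=9, scale=4):
        super().__init__()
        self.scale = scale
        self.spectral = nn.Conv2d(hsi_bands, msi_bands, 1, bias=False)
        self.spatial = nn.Conv2d(msi_bands, msi_bands, ksize, padding=ksize // 2,
                                 groups=msi_bands, bias=False)

    def spectral_down(self, hsi):        # HSI -> MSI-like image (spectral degradation)
        return self.spectral(hsi)

    def spatial_down(self, msi):         # HR-MSI -> low-resolution grid (spatial degradation)
        return self.spatial(msi)[:, :, ::self.scale, ::self.scale]


# Joint training of RegNet and BKLNet with a self-supervised consistency loss:
# the spatially degraded HR-MSI should match the spectrally degraded, registered LR-HSI.
hsi_bands, msi_bands, scale = 31, 4, 4
lr_hsi = torch.rand(1, hsi_bands, 32, 32)        # placeholder data for illustration
hr_msi = torch.rand(1, msi_bands, 128, 128)

regnet, bklnet = RegNet(hsi_bands, msi_bands), BKLNet(hsi_bands, msi_bands, scale=scale)
hsi_up = F.interpolate(lr_hsi, scale_factor=scale, mode="bilinear", align_corners=False)
offsets = regnet(hsi_up, hr_msi)
hsi_reg = warp(hsi_up, offsets)                  # registered (upsampled) LR-HSI
loss = F.l1_loss(bklnet.spectral_down(hsi_reg)[:, :, ::scale, ::scale],
                 bklnet.spatial_down(hr_msi))
loss.backward()
```

In this sketch, the learned spatial kernel (`bklnet.spatial`) is exactly what the abstract describes being reused to downsample the registered data when constructing training pairs for FusNet; the fusion network itself is omitted here for brevity.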