Component Substitution Model Guided Deep Spatial–Spectral Fusion of Hyperspectral Imagery
Abstract:

Hyperspectral (HS) spatial–spectral fusion is a key technology for obtaining data with both high spatial and high spectral resolution, supporting major strategic missions such as manned spaceflight and Earth observation. Existing parameter-tuned black-box fusion models lack the guidance of mathematical theory and are not interpretable, which makes them difficult to apply in practice. In this letter, we design an interpretable deep network for spatial–spectral fusion, called S2Fusion, which embeds the mature component substitution (CS) fusion model into a deep neural network. By design, each module in the S2Fusion network corresponds to a specific operation in the CS fusion model and is therefore easily interpretable. Compared with traditional fusion models, S2Fusion more readily preserves the spatial–spectral fusion mechanism throughout the network flow. Extensive experiments demonstrate the superiority of S2Fusion, both quantitatively and visually, over state-of-the-art methods.
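To make the CS fusion model the abstract refers to concrete, the following is a minimal sketch of the classic component-substitution scheme (not the S2Fusion network itself): synthesize an intensity component from the HS bands, histogram-match the high-resolution spatial image to it, and inject the resulting detail into each band with per-band gains. The function name `cs_fusion` and the equal-weight intensity are illustrative assumptions.

```python
import numpy as np

def cs_fusion(lr_hs, pan, weights=None):
    """Illustrative classic component-substitution fusion (not S2Fusion).

    lr_hs: (H, W, B) low-resolution HS cube, already upsampled to the
           high-resolution spatial grid.
    pan:   (H, W) high-spatial-resolution image (e.g., panchromatic).
    """
    H, W, B = lr_hs.shape
    if weights is None:
        weights = np.full(B, 1.0 / B)  # assumed equal-weight intensity synthesis
    # 1) Synthesize the intensity (spatial) component from the HS bands.
    intensity = np.tensordot(lr_hs, weights, axes=([2], [0]))
    # 2) Match the spatial image's statistics to the intensity component.
    pan_matched = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) \
        + intensity.mean()
    # 3) Inject the spatial detail into every band with per-band gains
    #    (regression of each band onto the intensity component).
    detail = pan_matched - intensity
    gains = np.array([
        np.cov(lr_hs[..., b].ravel(), intensity.ravel())[0, 1]
        / (intensity.var() + 1e-12)
        for b in range(B)
    ])
    return lr_hs + detail[..., None] * gains[None, None, :]
```

In S2Fusion, each of these steps (intensity synthesis, detail extraction, detail injection) is replaced by a learned network module, which is what makes the network's flow interpretable in terms of the CS model.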