Hyperspectral Image Compression via Cross Channel Contrastive Learning

Abstract:

In recent years, advances in deep learning have greatly promoted the development of hyperspectral image (HSI) compression algorithms. However, most existing compression approaches rely solely on rate–distortion (RD) optimization, without any additional guidance during model learning. This makes it difficult to distinguish the similar features or objects that are widespread in HSIs, especially in remote sensing scenes, because quantization in lossy compression can cause informative attributes (e.g., category) to collapse or be lost at high compression ratios. In this article, we propose a novel hyperspectral compression network via contrastive learning (HCCNet) to help generate discriminative representations and preserve informative attributes as much as possible. Specifically, we design a contrastive informative feature encoding (CIFE) to extract and organize discriminative attributes from the original HSIs by enlarging the discrimination between the learned latents at different channel indexes, thereby alleviating attribute collapse. To handle attribute loss, we define a contrastive-invariant feature recovery (CIFR) to recover the lost attributes via contrastive feature refinement. Experiments on five HSI datasets show that the proposed HCCNet achieves impressive compression performance, e.g., improving the peak signal-to-noise ratio (PSNR) from 28.86 dB [at 0.2284 bits per pixel per band (bpppb)] to 30.30 dB (at 0.1960 bpppb) on the Chikusei dataset.
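To make the cross-channel contrastive idea concrete, the following is a minimal sketch (not the authors' implementation) of an InfoNCE-style loss that treats each latent channel as its own class: the same channel index in two views of the latent forms the positive pair, and all other channels act as negatives, which enlarges inter-channel discrimination of the learned latents. The two-view construction, the spatial pooling, and the temperature `tau` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def cross_channel_info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss across latent channels.

    z1, z2: two views of the encoder latent, both of shape (B, C, H, W),
    e.g., latents of two augmented crops of the same HSI patch (assumed setup).
    """
    B, C, H, W = z1.shape
    # One embedding per channel: pool over the spatial grid, then use the batch
    # dimension as the embedding axis, giving a (C, B) matrix per view.
    e1 = F.normalize(z1.mean(dim=(2, 3)).t(), dim=1)  # (C, B)
    e2 = F.normalize(z2.mean(dim=(2, 3)).t(), dim=1)  # (C, B)
    logits = e1 @ e2.t() / tau                        # (C, C) channel-to-channel similarities
    # Positive: the same channel index in both views; negatives: all other channels.
    targets = torch.arange(C, device=z1.device)
    return F.cross_entropy(logits, targets)


# Example usage (names are hypothetical): combine with the usual RD objective,
#   loss = rd_loss + lambda_contrast * cross_channel_info_nce(latent_view1, latent_view2)
```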