Contrastive Learning for Blind Super-Resolution via a Distortion-Specific Network

Abstract:

Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling), so their performance deteriorates when the real degradation does not match this assumption. To handle real-world scenarios, existing blind SR methods estimate both the degradation and the super-resolved image using an extra loss or an iterative scheme. However, degradation estimation adds computation, and the accumulated estimation errors limit SR performance. In this paper, we propose a contrastive regularization built upon contrastive learning that exploits blurry images and clear images as negative and positive samples, respectively. The contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the characteristics of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed this global prior into the distortion-specific SR network to make our method adaptive to varying distortions. We term our distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
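The core idea of the contrastive regularization can be sketched as a ratio of feature-space distances: the restored image (anchor) should be close to the clear image (positive) and far from the blurry input (negative). The sketch below is illustrative only, using NumPy and a placeholder `extract_features` function; the actual representation space, distance metric, and loss weighting used in the paper are not specified here and are assumptions.

```python
import numpy as np

def extract_features(img):
    # Placeholder for a fixed feature extractor (e.g., activations of a
    # pretrained network). Flattening is purely an illustrative stand-in.
    return img.reshape(-1)

def contrastive_regularization(restored, clear, blurry, eps=1e-8):
    """Pull the restored image toward the clear (positive) sample and
    push it away from the blurry (negative) sample in feature space.
    Lower values indicate a better restoration under this objective."""
    a = extract_features(restored)   # anchor
    p = extract_features(clear)      # positive
    n = extract_features(blurry)     # negative
    d_ap = np.abs(a - p).mean()      # L1 distance to the positive
    d_an = np.abs(a - n).mean()      # L1 distance to the negative
    return d_ap / (d_an + eps)

# Toy check: a near-perfect restoration scores lower than no restoration.
rng = np.random.default_rng(0)
clear = rng.random((3, 8, 8))
blurry = clear + 0.3 * rng.standard_normal(clear.shape)
good = clear + 0.01 * rng.standard_normal(clear.shape)
bad = blurry.copy()
assert contrastive_regularization(good, clear, blurry) < \
       contrastive_regularization(bad, clear, blurry)
```

In practice such a term would be added, with a weighting coefficient, to the usual reconstruction loss during training.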