Abstract:
Deep learning has recently made rapid progress in the field of multi-exposure image fusion. However, it remains challenging to extract useful features while retaining texture details and color. To address this issue, in this paper we propose a coordinated learning network for detail refinement in an end-to-end manner. First, we obtain shallow feature maps from extremely over-/under-exposed source images via a collaborative extraction module. Second, smooth attention weight maps are generated under the guidance of a self-attention module, which draws global connections to correlate patches at different locations. With the cooperation of these two modules, the proposed network produces a coarse fused image. Moreover, with the assistance of an edge revision module, the edge details of the fused results are refined and noise is suppressed effectively. We conduct subjective qualitative and objective quantitative comparisons between the proposed method and twelve state-of-the-art methods on two public datasets. The results show that our fused images significantly outperform the others in both visual quality and evaluation metrics. In addition, we perform ablation experiments to verify the function and effectiveness of each module in the proposed method. The source code is available at https://github.com/lok-18/LCNDR .
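As a rough illustration of the self-attention step described above, the following sketch computes scaled dot-product attention weights over flattened patch features, so that every patch is correlated with every other patch. The function names, shapes, and the omission of learned query/key projections are illustrative assumptions; the paper's actual self-attention module is not specified in this abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_weights(feats):
    """feats: (N, C) array of N patch features with C channels.
    Returns an (N, N) map correlating every pair of patch locations."""
    q, k = feats, feats  # learned query/key projections omitted for brevity
    scores = q @ k.T / np.sqrt(feats.shape[1])  # scaled dot product
    return softmax(scores, axis=-1)  # each row is a weight distribution

rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 32))  # 16 hypothetical patches, 32-dim features
attn = self_attention_weights(patches)   # (16, 16) global attention map
```

In a full fusion network, such an attention map would typically reweight and aggregate the patch features before the fused image is reconstructed.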