Looking at Boundary: Siamese Densely Cooperative Fusion for Salient Object Detection

Abstract:

Though deep learning-based saliency detection methods have achieved impressive performance recently, the predicted saliency maps still suffer from inaccurate object boundaries. From the perspective of foreground–background separation, this article extracts object edge information by exploiting the differences between color channels in the RGB color space and establishes a novel multicolor contrast extraction (MCE) mechanism to improve the network's ability to learn fine boundary information. To make full use of the MCE outputs and RGB colors, and to capture the complementary information between them, we devise a novel Siamese densely cooperative fusion (DCF) network (SDFNet) for saliency detection, which consists of two effective components: boundary-directed feature learning (BDFL) and DCF. The BDFL component provides joint learning of the MCE and RGB modalities through a Siamese network, while the DCF module is devised for complementary feature discovery, effectively combining the features learned from the two modalities. Experiments on five well-known benchmark datasets demonstrate that the proposed method outperforms state-of-the-art approaches in terms of different evaluation metrics. A detailed analysis of these results indicates that the joint modeling of MCE and RGB colors helps to better capture object details, especially at object boundaries.
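The abstract does not give the exact MCE formulation, but the core idea — deriving an edge-sensitive signal from differences between RGB color channels — can be illustrated with a minimal sketch. The function name, the choice of pairwise differences, and the normalization below are all assumptions for illustration, not the authors' actual method:

```python
import numpy as np

def multicolor_contrast(rgb):
    """Hypothetical sketch of a multicolor contrast extraction (MCE) step:
    stack pairwise differences between RGB channels, so that object
    boundaries with strong color contrast stand out in at least one map."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Pairwise channel differences (assumed design choice).
    contrast = np.stack([r - g, g - b, b - r], axis=-1)
    # Normalize each difference map to [0, 1] for downstream use.
    cmin = contrast.min(axis=(0, 1), keepdims=True)
    cmax = contrast.max(axis=(0, 1), keepdims=True)
    return (contrast - cmin) / (cmax - cmin + 1e-8)

# Toy example: a red square on a gray background.
img = np.full((8, 8, 3), 128, dtype=np.uint8)
img[2:6, 2:6] = [255, 0, 0]
mce = multicolor_contrast(img)
print(mce.shape)  # (8, 8, 3)
```

In such a scheme the three-channel MCE output could be fed to one branch of the Siamese network while the original RGB image feeds the other, matching the two-modality setup described above.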