Video Object Detection Guided by Object Blur Evaluation

Abstract:

In recent years, excellent image-based object detection algorithms have been transferred directly to video object detection. These frame-by-frame processing methods are suboptimal owing to degraded object appearance caused by motion blur, defocus, and rare poses. Existing works on video object detection mostly focus on feature aggregation at the pixel level and the instance level, but the impact of blur on the aggregation process has not been well exploited so far. In this article, we propose an end-to-end blur-aided feature aggregation network (BFAN) for video object detection. The proposed BFAN focuses on the aggregation process under the influence of blur, including motion blur and defocus, achieving high accuracy with little additional computation. In BFAN, we evaluate the object blur degree of each frame and use it as the weight for aggregation. Notably, the background is usually flat, which negatively affects the evaluation of the object blur degree. Therefore, we introduce a lightweight saliency detection network to alleviate background interference. Experiments conducted on the ImageNet VID dataset show that BFAN achieves state-of-the-art detection performance of 79.1% mAP, a 3-point improvement over the video object detection baseline.
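To make the aggregation idea concrete, the following is a minimal sketch (not the authors' implementation) of blur-weighted feature aggregation. It assumes per-frame backbone features and hypothetical per-frame blur-degree scores are already available; sharper frames receive larger weights via a softmax over the negated blur scores.

```python
import torch
import torch.nn.functional as F


def aggregate_features(frame_feats: torch.Tensor, blur_scores: torch.Tensor) -> torch.Tensor:
    """Blur-weighted aggregation over nearby frames (illustrative sketch).

    frame_feats: (T, C, H, W) features of T support frames.
    blur_scores: (T,) hypothetical blur-degree estimates (larger = blurrier).
    Returns an aggregated feature map of shape (C, H, W).
    """
    # Convert blur degree into aggregation weights: sharper frames get more weight.
    weights = F.softmax(-blur_scores, dim=0)      # (T,)
    weights = weights.view(-1, 1, 1, 1)           # broadcast over C, H, W
    return (weights * frame_feats).sum(dim=0)


# Usage with random tensors standing in for backbone features and blur scores.
feats = torch.randn(5, 256, 38, 50)   # 5 support frames
blur = torch.rand(5)                  # hypothetical blur-degree scores in [0, 1]
aggregated = aggregate_features(feats, blur)
print(aggregated.shape)               # torch.Size([256, 38, 50])
```

In the paper's pipeline, the blur scores would come from the blur evaluation module (with the lightweight saliency network suppressing flat background regions), rather than from random values as in this sketch.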