Saliency Based Semantic Weeds Detection and Classification Using UAV Multispectral Imaging

Abstract:

Weed infestation damages crops and limits agricultural production. Traditional weed control relies on agrochemicals and demands labour-intensive practices. Various methods have been proposed for weed detection using multispectral images. Machine vision-based methods require the extraction of a large number of multispectral texture features, which increases the computational cost. Deep neural networks have been used for pixel-based weed classification, but they require a large image dataset for network training, which is time-consuming and expensive to collect, particularly for multispectral images. These methods also require a Graphics Processing Unit (GPU) based system because of their high computational cost. In this article, we propose a novel weed detection model that addresses these issues: it requires neither supervised training on labelled images nor multispectral texture feature extraction. The proposed model can execute on a Central Processing Unit (CPU) based system, which reduces its computational cost. A Predictive Coding/Biased Competition-Divisive Input Modulation (PC/BC-DIM) neural network is used to compute a multispectral fused saliency map, which is then used to predict salient crops and detect weeds. The proposed model achieves 94.38% mean accuracy, a mean square error of 0.086, and a root mean square error of 0.291.
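To make the PC/BC-DIM component concrete, the following is a minimal sketch of the standard PC/BC-DIM inference loop as described in the predictive-coding literature (Spratling's formulation), not the authors' exact implementation; the weight matrix `W`, the iteration count, and the epsilon constants are illustrative assumptions. Prediction neurons iteratively explain the input via divisive error computation and multiplicative response updates; the reconstruction of the input from the converged responses plays the role of a saliency-like signal.

```python
import numpy as np

def dim_inference(x, W, n_iter=50, eps1=1e-6, eps2=1e-3):
    """Illustrative PC/BC-DIM inference (assumed standard formulation).

    x : (m,) non-negative input vector (e.g. multispectral pixel features)
    W : (n, m) non-negative feedforward weights (n prediction neurons)
    Returns the neuron responses y and the input reconstruction V @ y.
    """
    # Feedback weights: transpose of W with each neuron's weights
    # normalised by their maximum (a common convention in DIM models).
    V = (W / np.maximum(W.max(axis=1, keepdims=True), eps2)).T  # (m, n)
    y = np.zeros(W.shape[0])
    for _ in range(n_iter):
        e = x / (eps2 + V @ y)      # divisive prediction error
        y = (eps1 + y) * (W @ e)    # multiplicative response update
    return y, V @ y                 # responses and reconstruction
```

With an identity weight matrix the responses converge toward the input itself, which is a quick sanity check that the update rules are implemented consistently.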