Hercules: Deep Hierarchical Attentive Multilevel Fusion Model With Uncertainty Quantification for Medical Image Classification


Abstract:

The automatic and accurate analysis of medical images (e.g., segmentation, detection, classification) is a prerequisite for modern disease diagnosis and prognosis. Computer-aided diagnosis (CAD) systems enable accurate and effective detection of various diseases and support timely treatment decisions. The past decade witnessed a surge in deep learning (DL)-based CAD systems showing outstanding performance across many health care applications. Medical imaging, however, is hindered by multiple sources of uncertainty, ranging from measurement errors and physiological variability (aleatoric uncertainty) to limited medical knowledge (epistemic uncertainty). Uncertainty quantification (UQ) remains insufficiently investigated in most existing DL methods, particularly in medical image analysis. To address this gap, in this article we propose a simple yet novel hierarchical attentive multilevel feature fusion model with an uncertainty-aware module for medical image classification, coined Hercules. The proposed Hercules model consists of two main feature fusion blocks: the former performs attention-based fusion with an uncertainty quantification module, while the latter fuses the raw features. Hercules was evaluated on three real medical image classification tasks spanning retinal OCT, lung CT, and chest X-ray datasets, where it achieved the best classification accuracy (94.21%, 99.59%, and 96.50%, respectively) against other state-of-the-art medical image classification methods.
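The abstract only outlines the two-branch design (an attentive fusion block with a UQ module plus a raw-feature fusion block), not its implementation details. Purely as an illustrative sketch, the PyTorch snippet below shows one way such a design could be wired up, assuming pooled multilevel CNN features as input and Monte Carlo dropout as a stand-in for the paper's uncertainty-aware module; the class names, dimensions, and dropout-based UQ are assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class AttentiveFusionWithUQ(nn.Module):
    """Attention-based fusion of multilevel features; MC dropout stands in
    for an uncertainty-quantification module (assumption, not the paper's design)."""

    def __init__(self, feat_dim: int, num_levels: int, p_drop: float = 0.3):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)      # per-level attention score
        self.dropout = nn.Dropout(p_drop)       # kept active during MC sampling

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_levels, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)      # (batch, num_levels, 1)
        fused = (weights * self.dropout(feats)).sum(dim=1)    # (batch, feat_dim)
        return fused


class HerculesSketch(nn.Module):
    """Two-branch fusion: attentive branch with UQ plus a raw-feature branch."""

    def __init__(self, feat_dim: int = 256, num_levels: int = 3, num_classes: int = 4):
        super().__init__()
        self.attentive_branch = AttentiveFusionWithUQ(feat_dim, num_levels)
        self.raw_branch = nn.Linear(feat_dim * num_levels, feat_dim)  # plain fusion of raw features
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, multilevel_feats: torch.Tensor) -> torch.Tensor:
        # multilevel_feats: (batch, num_levels, feat_dim), e.g. pooled CNN stages
        fused_attn = self.attentive_branch(multilevel_feats)
        fused_raw = self.raw_branch(multilevel_feats.flatten(start_dim=1))
        return self.classifier(torch.cat([fused_attn, fused_raw], dim=1))


if __name__ == "__main__":
    model = HerculesSketch()
    x = torch.randn(2, 3, 256)                  # two samples, three feature levels
    model.train()                               # keep dropout active for MC forward passes
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(10)])
    print(probs.mean(dim=0))                    # predictive mean per class
    print(probs.var(dim=0))                     # predictive variance as an uncertainty proxy
```

In this sketch the variance across stochastic forward passes serves as a rough uncertainty estimate; the actual Hercules model may quantify uncertainty differently, and the fused feature dimensions are placeholders.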