Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos
Abstract
In recent years, deep learning has been widely used in breast cancer diagnosis, and many high-performance models have emerged. However, most existing deep learning models are based on static breast ultrasound (US) images. In the actual diagnostic process, contrast-enhanced ultrasound (CEUS) is a technique commonly used by radiologists. Compared with static breast US images, CEUS videos provide more detailed information about the blood supply of tumors and can therefore help radiologists make more accurate diagnoses. In this paper, we propose a novel diagnosis model based on CEUS videos, with a 3D convolutional neural network as its backbone. More specifically, we observe that radiologists generally follow two patterns when browsing CEUS videos: they focus on specific time slots, and they pay attention to the differences between the CEUS frames and the corresponding US images. To incorporate these two patterns into our deep learning model, we design a domain-knowledge-guided temporal attention module and a channel attention module. We validate our model on our Breast-CEUS dataset of 221 cases. The results show that our model achieves a sensitivity of 97.2% and an accuracy of 86.3%. In particular, the incorporation of domain knowledge leads to a 3.5% improvement in sensitivity and a 6.0% improvement in specificity. Finally, we also verify the effectiveness of the two domain knowledge modules in the 3D convolutional neural network (C3D) and the 3D ResNet (R3D).
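
To make the overall design concrete, the sketch below shows one possible way to attach a temporal attention module (reweighting informative time slots of a CEUS clip) and a channel attention module to a 3D-CNN backbone. This is not the authors' implementation; the layer sizes, module structure, and classifier head are illustrative assumptions only.

```python
# Minimal PyTorch sketch (assumptions, not the paper's code): a small 3D-CNN backbone
# with a temporal attention module and a squeeze-and-excitation-style channel
# attention module, followed by a binary benign/malignant classifier head.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Learn per-frame weights so that informative time slots dominate the clip feature."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(channels, channels // 4),
                                   nn.ReLU(inplace=True),
                                   nn.Linear(channels // 4, 1))

    def forward(self, x):                       # x: (B, C, T, H, W)
        f = x.mean(dim=(3, 4))                  # (B, C, T) global spatial pooling
        w = self.score(f.transpose(1, 2))       # (B, T, 1) per-frame scores
        w = torch.softmax(w, dim=1).transpose(1, 2).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # reweight frames along the time axis

class ChannelAttention(nn.Module):
    """Gate feature channels, e.g. those capturing CEUS-vs-US differences."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(inplace=True),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, T, H, W)
        g = self.fc(x.mean(dim=(2, 3, 4)))      # (B, C) channel gates
        return x * g.view(*g.shape, 1, 1, 1)

class CEUSClassifier(nn.Module):
    """3D-CNN backbone + temporal/channel attention + classifier head."""
    def __init__(self, in_ch=3, feat=64, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_ch, feat, kernel_size=3, padding=1),
            nn.BatchNorm3d(feat), nn.ReLU(inplace=True),
            nn.MaxPool3d(2))
        self.t_att = TemporalAttention(feat)
        self.c_att = ChannelAttention(feat)
        self.head = nn.Linear(feat, num_classes)

    def forward(self, clip):                    # clip: (B, 3, T, H, W) CEUS video
        x = self.backbone(clip)
        x = self.c_att(self.t_att(x))
        return self.head(x.mean(dim=(2, 3, 4)))  # global pooling + classification

# Example: a batch of two 16-frame 112x112 clips.
logits = CEUSClassifier()(torch.randn(2, 3, 16, 112, 112))
print(logits.shape)                             # torch.Size([2, 2])
```

In this sketch the toy single-block backbone stands in for C3D or R3D; in practice the two attention modules would be inserted after the feature maps of such a pretrained 3D backbone.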