Uncertainty-Aware Multiview Deep Learning for Internet of Things Applications

Abstract:

As an essential approach in many Internet of Things (IoT) applications, multiview learning synthesizes multiple features to achieve a more comprehensive description of data items. Most previous studies on multiview learning have focused on improving prediction accuracy while ignoring the reliability of the decision, which limits their deployment in high-risk IoT and industrial applications such as automated vehicles. Although a trusted multiview classification model has been proposed recently, it cannot deal well with highly complementary multiview data. In this work, we present an evidential multiview deep learning (EMDL) method for making reliable decisions. EMDL first seeks view-specific evidence for each category, which can be regarded as the amount of support for each category collected from the data. It then dynamically fuses the different views at the evidence level to construct the multiview common evidence and makes reliable predictions accordingly (strong evidence indicates high prediction confidence). In particular, we establish a degradation layer that learns the mappings from the common evidence (comprehensive information) to the view-specific evidence (partial information) for evidence fusion, explicitly modeling the consistent and complementary relations in multiview data at the evidence level. We apply EMDL to a synthetic toy dataset and five real-world datasets, three of which come from industrial scenarios. Experiments show that EMDL outperforms state-of-the-art baseline methods.
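
To make the evidence-level fusion and the degradation idea more concrete, the following is a minimal PyTorch sketch, not the authors' implementation: per-view networks output non-negative evidence, a simple additive rule (an assumption here) forms the common evidence, and per-view "degradation" heads map the common evidence back to view-specific evidence under a reconstruction loss. All names (EvidenceNet, EvidentialMultiviewSketch, degradation_loss), dimensions, and the fusion rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidenceNet(nn.Module):
    """Maps one view's features to non-negative class evidence."""
    def __init__(self, in_dim, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # Softplus keeps the evidence non-negative, as evidential models require.
        return F.softplus(self.net(x))


class EvidentialMultiviewSketch(nn.Module):
    """Illustrative sketch: per-view evidence, a fused common evidence,
    and a degradation mapping from common evidence back to each view's
    evidence (the paper's exact fusion rule may differ)."""
    def __init__(self, view_dims, num_classes):
        super().__init__()
        self.view_nets = nn.ModuleList(
            [EvidenceNet(d, num_classes) for d in view_dims])
        # One "degradation" head per view: common -> view-specific evidence.
        self.degrade = nn.ModuleList(
            [nn.Linear(num_classes, num_classes) for _ in view_dims])

    def forward(self, views):
        view_evidence = [net(x) for net, x in zip(self.view_nets, views)]
        # Simple additive fusion at the evidence level (an assumption here).
        common_evidence = torch.stack(view_evidence, dim=0).sum(dim=0)
        # Reconstruct each view's evidence from the common evidence.
        degraded = [F.softplus(h(common_evidence)) for h in self.degrade]
        return common_evidence, view_evidence, degraded


def degradation_loss(view_evidence, degraded):
    """Encourages the common evidence to explain every view's evidence."""
    return sum(F.mse_loss(d, e) for d, e in zip(degraded, view_evidence))


if __name__ == "__main__":
    model = EvidentialMultiviewSketch(view_dims=[32, 16], num_classes=5)
    views = [torch.randn(8, 32), torch.randn(8, 16)]
    common, per_view, degraded = model(views)
    # Treat evidence as Dirichlet parameters: alpha = evidence + 1.
    # Uncertainty shrinks as total evidence grows (strong evidence -> confidence).
    alpha = common + 1.0
    uncertainty = common.shape[1] / alpha.sum(dim=1)
    print(common.shape, uncertainty.shape,
          degradation_loss(per_view, degraded).item())
```

In this sketch the degradation loss would be added to the usual classification objective so that the fused common evidence remains consistent with (and accounts for the complementary parts of) each view's evidence.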