CLACTA: Comment-Level-Attention and Comment-Type-Awareness for Fake News Detection

Abstract:

In recent years, many popular communication tools have emerged for news sharing. However, the openness and speed of online communication have made the propagation of fake news a serious concern for the public and governments, and how to detect fake news automatically and as early as possible has attracted wide attention. Nevertheless, most existing methods either underutilize comments, which contain rich semantic information, or ignore them entirely. Inspired by the way some comments reveal the nature of the original post, we propose a neural network model for fake news detection that combines comment-level attention (CLA) and comment-type awareness (CTA). In CLA, we devise an attention mechanism that considers the semantic relation between the post and its comments. Based on the attention weights, the comment representations of a sample are combined into a weighted sum that serves as its comment feature, capturing the key comment information. Similar to stance, we assume that comments naturally cluster into several types. Therefore, in CTA, we store comment-type representations in a memory matrix learned over the stream of training samples. The sample's comment feature attends to this memory matrix to obtain the corresponding comment-type feature. We concatenate these two auxiliary features with the learned post feature to detect fake news. Experiments on the Weibo and Pheme datasets demonstrate the effectiveness of the proposed model.
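To make the described pipeline concrete, the following is a minimal PyTorch sketch of how the CLA attention, the CTA memory lookup, and the final feature concatenation could be wired together. The layer sizes, the bilinear scoring of comments against the post, the softmax-based memory read, and the number of comment types are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLACTASketch(nn.Module):
    """Illustrative sketch of CLA + CTA; dimensions and scoring are assumptions."""

    def __init__(self, dim=256, num_types=5, num_classes=2):
        super().__init__()
        self.attn_proj = nn.Linear(dim, dim)                      # CLA: scores comments against the post
        self.memory = nn.Parameter(torch.randn(num_types, dim))   # CTA: learnable comment-type memory matrix
        self.classifier = nn.Linear(3 * dim, num_classes)         # post + comment + comment-type features

    def forward(self, post_feat, comment_feats):
        # post_feat: (batch, dim); comment_feats: (batch, n_comments, dim)

        # CLA: attention weights from the post-comment semantic relation,
        # then a weighted sum of comment representations as the comment feature.
        scores = torch.bmm(comment_feats, self.attn_proj(post_feat).unsqueeze(2)).squeeze(2)
        alpha = F.softmax(scores, dim=1)                                          # (batch, n_comments)
        comment_feat = torch.bmm(alpha.unsqueeze(1), comment_feats).squeeze(1)    # (batch, dim)

        # CTA: make the comment feature aware of the comment-type memory
        # and read out the corresponding comment-type feature.
        type_weights = F.softmax(comment_feat @ self.memory.t(), dim=1)           # (batch, num_types)
        type_feat = type_weights @ self.memory                                    # (batch, dim)

        # Concatenate post, comment, and comment-type features for classification.
        fused = torch.cat([post_feat, comment_feat, type_feat], dim=1)
        return self.classifier(fused)
```

In this sketch the post and comment representations are assumed to come from an upstream text encoder; only the fusion logic that the abstract outlines is shown.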