Abstract:
Fake news detection is a significant problem in a setting where information is available from multiple sources across the internet. Most research on fake news detection has targeted only politics-related articles, but such models are not robust enough to tackle fake news in the real world. To address this problem, this work incorporates transfer learning using attention-based transformers (BERT, RoBERTa, XLNet, DeBERTa, GPT-2), training them on the multi-domain datasets FakeNewsAMT and Celebrity, which span the domains of politics, entertainment, sports, business, education, and technology. The proposed model obtains state-of-the-art results in both multi-domain and cross-domain testing, beating previous work comfortably; it achieves 99.3% accuracy on the FakeNewsAMT dataset and 84% accuracy on the Celebrity dataset. We believe the synergy of transfer learning in a multi-domain setting yields a robust model that would be relevant in the real world. This idea stems from the observation that the critical challenge of multi-domain research is varying data distributions, while the key benefit of transfer learning is that it can perform well even when trained and tested on different data distributions.