Explainable Natural Language Inference via Identifying Important Rationales

Abstract:

Natural language inference (NLI) is an important task in natural language processing (NLP) that requires commonsense knowledge and logical reasoning abilities. Existing pretrained models applied to NLI have difficulty achieving strong performance, and deep learning-based inference models suffer from poor interpretability. In this article, we propose an explainable NLI model that identifies important rationales, obtained by a specially designed rationale extractor and selector. Pretrained models can then be fine-tuned on the resulting rationale-enhanced dataset. In addition, the extracted rationales can be used to generate higher-quality natural language explanations (NLEs). The NLEs not only serve as explanations for the prediction results but can also be combined with the original dataset to further improve the performance of pretrained models. Experimental results on the Stanford Natural Language Inference (SNLI) corpus show that the proposed model achieves state-of-the-art accuracies of 94.19% and 94.08% on the SNLI development and test sets, respectively, and generalizes well when transferred to the out-of-domain MultiNLI dataset. Furthermore, the proposed model improves computational efficiency by focusing on important rationales.
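To make the pipeline described above concrete, the following is a minimal sketch (not the authors' code) of the extract-select-enhance flow: an extractor scores candidate rationale spans from the premise, a selector keeps the most important ones, and the selected rationales are appended to the premise-hypothesis pair to form the rationale-enhanced input on which a pretrained model would be fine-tuned. The function names and the token-overlap scoring heuristic are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of the rationale-enhanced input construction.
# The paper's extractor and selector are learned components; here a simple
# token-overlap heuristic stands in for them so the example is runnable.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # "entailment" | "neutral" | "contradiction"


def extract_rationales(example: NLIExample) -> List[Tuple[str, float]]:
    """Score each premise sentence as a candidate rationale.

    Importance is approximated by token overlap with the hypothesis;
    the paper's rationale extractor would use a learned model instead.
    """
    hyp_tokens = set(example.hypothesis.lower().split())
    candidates = []
    for sent in example.premise.split(". "):
        sent_tokens = set(sent.lower().split())
        overlap = len(sent_tokens & hyp_tokens) / max(len(sent_tokens), 1)
        candidates.append((sent, overlap))
    return candidates


def select_important(candidates: List[Tuple[str, float]], top_k: int = 1) -> List[str]:
    """Keep the top-k highest-scoring rationales (the 'selector' step)."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [sent for sent, _ in ranked[:top_k]]


def build_rationale_enhanced_input(example: NLIExample) -> str:
    """Concatenate the pair with its selected rationales, yielding the
    rationale-enhanced input a pretrained model could be fine-tuned on."""
    rationales = select_important(extract_rationales(example))
    return (
        f"premise: {example.premise} "
        f"hypothesis: {example.hypothesis} "
        f"rationale: {' '.join(rationales)}"
    )


if __name__ == "__main__":
    ex = NLIExample(
        premise="A man is playing a guitar on stage. The crowd is cheering.",
        hypothesis="A musician is performing.",
        label="entailment",
    )
    print(build_rationale_enhanced_input(ex))
```

In this sketch, restricting the model input to the top-scoring rationales is also what underlies the efficiency claim: shorter, more focused inputs reduce the computation spent on irrelevant premise content.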