MGRC: An End-to-End Multigranularity Reading Comprehension Model for Question Answering

Abstract:

Deep neural network-based models have achieved great success in extractive question answering. Recently, many works have modeled multistage matching for this task: they first retrieve relevant paragraphs or sentences and then extract an answer span from the retrieved results. However, such a pipeline-based approach suffers from error propagation, especially at the sentence-retrieval stage, where high accuracy is difficult to achieve due to severe data imbalance. Furthermore, since the paragraph/sentence selector and the answer extractor are closely related, modeling them independently does not fully exploit the power of multistage matching. To solve these problems, we propose a novel end-to-end multigranularity reading comprehension model, a unified framework that explicitly models three matching granularities: paragraph identification, sentence selection, and answer extraction. Our approach has two main advantages. First, the end-to-end design alleviates error propagation in both the training and inference phases. Second, the features shared within a unified model improve the learned representations of the different matching granularities. We conduct a comprehensive comparison on four large-scale datasets (SQuAD-open, NewsQA, SQuAD 2.0, and SQuAD Adversarial) and verify that the proposed approach outperforms both the vanilla BERT model and existing multistage matching approaches. We also conduct an ablation study to verify the effectiveness of the proposed components of our model.
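
To make the multigranularity idea concrete, the following is a minimal sketch of how a single shared encoder could feed three task heads (paragraph relevance, sentence relevance, and answer-span extraction) that are trained jointly rather than as a pipeline. This is not the authors' released implementation; the class name `MultiGranularityQA`, the head layout, and the pooling choices are illustrative assumptions based only on the description in the abstract.

```python
# Sketch only: a shared BERT encoder with three heads scoring paragraphs,
# sentences, and answer-span boundaries jointly (assumed architecture,
# not the paper's official code).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast


class MultiGranularityQA(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Paragraph-level relevance score, read off the [CLS] representation.
        self.paragraph_head = nn.Linear(hidden, 1)
        # Token-level sentence-relevance scores; in practice these would be
        # pooled per sentence (e.g. over sentence-boundary markers).
        self.sentence_head = nn.Linear(hidden, 1)
        # Answer-extraction head: start/end logits for every token.
        self.span_head = nn.Linear(hidden, 2)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                      # (batch, seq_len, hidden)
        cls_vec = hidden_states[:, 0]            # [CLS] token
        paragraph_logit = self.paragraph_head(cls_vec).squeeze(-1)
        sentence_logits = self.sentence_head(hidden_states).squeeze(-1)
        start_logits, end_logits = self.span_head(hidden_states).split(1, dim=-1)
        return (
            paragraph_logit,
            sentence_logits,
            start_logits.squeeze(-1),
            end_logits.squeeze(-1),
        )


if __name__ == "__main__":
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = MultiGranularityQA()
    batch = tokenizer(
        "Who wrote Hamlet?",
        "Hamlet is a tragedy written by William Shakespeare.",
        return_tensors="pt",
    )
    p_logit, s_logits, start, end = model(batch["input_ids"], batch["attention_mask"])
    # Joint training would sum losses from all three heads, so gradients from
    # answer extraction also shape the paragraph/sentence representations,
    # avoiding the error propagation of a retrieve-then-extract pipeline.
    print(p_logit.shape, s_logits.shape, start.shape, end.shape)
```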