Quantum Language Model With Entanglement Embedding for Question Answering
Abstract:

Quantum language models (QLMs), in which words are modeled as a quantum superposition of sememes, have demonstrated a high level of model transparency and good post-hoc interpretability. Nevertheless, in the current literature, word sequences are modeled essentially as a classical mixture of word states, which cannot fully exploit the potential of a quantum probabilistic description. A quantum-inspired neural network (NN) module that explicitly captures the nonclassical correlations within word sequences has yet to be developed. We propose an NN model with a novel entanglement embedding (EE) module, whose function is to transform a word sequence into an entangled pure-state representation. Strong quantum entanglement, the central concept of quantum information and an indication of parallelized correlations among the words, is observed within the word sequences. The proposed QLM with EE (QLM-EE) is implemented on classical computing devices with a quantum-inspired NN structure, and numerical experiments show that QLM-EE outperforms classical deep NN models and other QLMs on question-answering (QA) datasets. In addition, the post-hoc interpretability of the model can be improved by quantifying the degree of entanglement among the word states.
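The abstract's claim that interpretability can be improved by quantifying the degree of entanglement among word states can be illustrated with a standard measure: the von Neumann entropy of a subsystem's reduced density matrix, computed from the Schmidt (singular) values of the bipartite state. The sketch below is illustrative only; the paper's actual EE module and entanglement measure are not specified in the abstract, and the function name and dimensions here are hypothetical.

```python
import numpy as np

def entanglement_entropy(state, dim_a, dim_b):
    """Von Neumann entanglement entropy (in bits) of subsystem A
    for a normalized bipartite pure state of length dim_a * dim_b."""
    # Reshape the state vector into a dim_a x dim_b coefficient matrix;
    # its singular values are the Schmidt coefficients of the bipartition.
    coeffs = np.asarray(state, dtype=float).reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(coeffs, compute_uv=False)
    probs = schmidt ** 2
    probs = probs[probs > 1e-12]  # drop numerically-zero terms
    return float(-np.sum(probs * np.log2(probs)))

# A product (unentangled) state of two "word" qubits has zero entropy ...
product = np.kron([1.0, 0.0], [1.0, 0.0])
# ... while a Bell state is maximally entangled (entropy = 1 bit).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(entanglement_entropy(product, 2, 2))  # ≈ 0.0
print(entanglement_entropy(bell, 2, 2))     # ≈ 1.0
```

In a QLM-EE-style setting, a larger entropy for a pair of word states would indicate stronger nonclassical correlation between the words, which is the quantity the abstract proposes to use for post-hoc interpretation.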