Do Boat and Ocean Suggest Beach? Dialogue Summarization with External Knowledge
TLDR: We address the problem of out-of-context inference in dialogue summarization and propose a knowledge-aware model to tackle it.
Abstract: In human dialogues, utterances do not necessarily carry all the details. As pragmatics studies suggest, humans can infer meaning from the situational context even when it is not literally expressed. It is crucial for natural language processing models to understand such an inference process. In this paper, we address the problem of inferring Concepts Out of the Dialogue Context (CODC) in the dialogue summarization task. We propose a novel framework comprising a CODC inference module that leverages external knowledge from WordNet and a knowledge attention module that aggregates the inferred knowledge into a neural summarization model. To evaluate the inference capability of different methods, we also propose a new CODC-based evaluation metric. Experiments suggest that current automatic evaluation metrics for natural language generation may be insufficient to assess the quality of out-of-context inference in generated summaries, and that our proposed summarization model provides statistically significant improvements on both CODC inference and traditional automatic evaluation metrics, e.g., CIDEr. Human evaluation of the model's inference ability further demonstrates the superiority of the proposed model.
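The title's question can be made concrete with a toy sketch of CODC-style inference: concepts that never appear in the dialogue (e.g., "beach") are inferred by intersecting the knowledge-graph neighborhoods of the words that do appear (e.g., "boat", "ocean"). The paper's module queries WordNet; the tiny hand-made relation table below is a hypothetical stand-in used purely for illustration, and `min_support` is an assumed parameter, not one from the paper.

```python
# Toy stand-in for an external knowledge graph such as WordNet.
# Each word maps to a set of related concepts (hypothetical entries).
RELATED = {
    "boat": {"water", "sail", "beach", "harbor"},
    "ocean": {"water", "wave", "beach", "fish"},
    "sand": {"beach", "desert"},
}

def infer_codc(dialogue_words, min_support=2):
    """Return concepts related to at least `min_support` dialogue words
    but not literally mentioned in the dialogue itself."""
    counts = {}
    for word in dialogue_words:
        for concept in RELATED.get(word, ()):
            counts[concept] = counts.get(concept, 0) + 1
    return {concept for concept, n in counts.items()
            if n >= min_support and concept not in dialogue_words}

print(infer_codc(["boat", "ocean"]))  # {'beach', 'water'}
```

In this sketch, "beach" (and the generic "water") is supported by both "boat" and "ocean", so it is proposed as an out-of-context concept; a real system would rank and filter such candidates before feeding them to the summarizer's knowledge attention.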