Abstractive Summarization Guided by Latent Hierarchical Document Structure

Abstract

Sequential abstractive neural summarizers often do not use the underlying structure of the input article or the dependencies between input sentences. This structure is essential for integrating and consolidating information from different parts of the text. To address this shortcoming, we propose a hierarchy-aware graph neural network (HierGNN) that captures such dependencies in three main steps: 1) learning a hierarchical document structure as a latent structure tree via a sparse matrix-tree computation; 2) propagating sentence information over this structure with a novel message-passing node-propagation mechanism to identify salient information; 3) concentrating the decoder on salient information via graph-level attention. Experiments confirm that HierGNN improves strong sequence models such as BART, with average ROUGE-1/2/L margins of 0.55 and 0.75 on CNN/DM and XSum, respectively. Further human evaluation demonstrates that summaries produced by our model are more relevant and less redundant than those of the baseline into which HierGNN is incorporated. We also find that HierGNN synthesizes summaries more often by fusing multiple source sentences than by compressing a single source sentence, and that it processes long inputs more effectively.
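
The first step above relies on a matrix-tree computation to obtain marginal probabilities of edges in a latent dependency tree over sentences. As a rough illustration, the sketch below computes such marginals with the classic dense Matrix-Tree construction (in the spirit of Koo et al., 2007); the function name, score-tensor layout, and use of exponentiated weights are assumptions for illustration only, and the paper's sparse variant of the computation is not reproduced here.

```python
import torch

def latent_tree_marginals(edge_scores: torch.Tensor, root_scores: torch.Tensor):
    """Edge and root marginals of a latent (non-projective) dependency tree
    over sentences, via the dense Matrix-Tree Theorem. Illustrative only:
    HierGNN's *sparse* matrix-tree variant is not reproduced here.

    edge_scores: (n, n) scores; edge_scores[h, m] scores sentence h as the
                 parent of sentence m (diagonal ignored; hypothetical layout).
    root_scores: (n,) scores for each sentence being the tree root.
    """
    n = edge_scores.size(0)
    A = torch.exp(edge_scores) * (1.0 - torch.eye(n))  # non-negative weights, no self-loops
    r = torch.exp(root_scores)

    # Laplacian: diagonal holds each node's total incoming weight.
    L = torch.diag(A.sum(dim=0)) - A
    # Replace the first row with the root scores (the \hat{L} construction).
    L_hat = torch.cat([r.unsqueeze(0), L[1:]], dim=0)
    L_inv = torch.inverse(L_hat)

    not_first = (torch.arange(n) != 0).float()
    # P(h -> m) = A[h, m] * ([m != 0] * L_inv[m, m] - [h != 0] * L_inv[m, h])
    edge_marginals = A * (
        not_first.unsqueeze(0) * torch.diagonal(L_inv).unsqueeze(0)
        - not_first.unsqueeze(1) * L_inv.transpose(0, 1)
    )
    # P(root -> m) = r[m] * L_inv[m, 0]
    root_marginals = r * L_inv[:, 0]
    return edge_marginals, root_marginals

# Sanity check: each sentence has exactly one parent in expectation, so its
# incoming edge marginals plus its root marginal should sum to roughly one.
scores, roots = torch.randn(5, 5), torch.randn(5)
edge_m, root_m = latent_tree_marginals(scores, roots)
print(edge_m.sum(dim=0) + root_m)  # approximately a vector of ones
```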

Publication
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
Yifu Qiu