A Two-Stage Long Text Summarization Method Based on Discourse Structure

Xin Zhang, Qiyi Wei, Qing Song, Pengzhou Zhang
Copyright: © 2023 | Pages: 20
DOI: 10.4018/IJSI.331091

Abstract

This paper proposes a two-stage automatic text summarization method based on discourse structure, aiming to improve the accuracy and coherence of summaries. In the extractive stage, a text encoder divides the long text into elementary discourse units (EDUs). A parse tree based on rhetorical structure theory is then constructed for the whole discourse, with nuclearity information annotated on each node. Nucleus terminal nodes are selected according to the required summary length, and the key EDU sequence is output. In the generation stage, the authors use a pointer-generator network with a coverage mechanism. The nuclearity information of the EDUs is used to update the word-level attention distribution in the pointer generator, which not only reproduces the critical details of the text accurately but also avoids self-repetition. Experiments on the standard text summarization dataset (CNN/Daily Mail) show that the ROUGE score of the proposed two-stage model exceeds that of the strongest current baseline, with corresponding improvements in summary accuracy and coherence.

Introduction

The rapid development, continuous innovation, and widespread adoption of the internet have moved people quickly from an era of information scarcity to one of information explosion. The International Data Corporation (IDC) predicts that the global data volume will reach 175 ZB by 2025, with China's share growing to 48.6 ZB, or 27.8% of the global total (IDC, 2018). Text is an essential component of this data, and although keyword search improves retrieval efficiency, readers still face information overload. In addition, with the spread of mobile devices and the accelerating pace of work and life, people place greater demands on how they browse and read information, which has led to the new trends of digital and fragmented reading. Text summarization, which condenses text data so that its essential information can be extracted quickly, is an effective way to address these problems.

Text summarization is one of the most challenging and interesting applications of natural language processing. From the perspective of information theory, a text summary is the result of an information compression process that expresses the maximum amount of the original text's information with the minimum loss (Peyrard, 2019). Early summaries were produced manually, which was time-consuming, labor-intensive, and inefficient, so automated summarization methods were urgently needed to replace manual work. In recent years, with progress in research on unstructured text data, automatic text summarization has received widespread attention, and much research has emerged around algorithms, datasets, evaluation metrics, and systems. It is being applied rapidly in fields such as government affairs, finance, news, medicine, and media. In particular, the recently released multimodal large model GPT-4 has shown strong capability on a variety of natural language tasks (OpenAI, 2023): it can analyze large amounts of text and retrieve the required information quickly. However, it requires a large training dataset and may produce incorrect summaries when processing text in specific domains (Dylan et al., 2023).

This study proposes a two-stage generative model for long text summarization. To avoid introducing a large amount of redundant information, the long text is first segmented into fine-grained discourse units, the EDUs. The discourse structure is then analyzed under rhetorical structure theory, and a parse tree is constructed from the semantics of the text. At the same time, nuclearity information is annotated on the EDUs, and the terminal-node extraction depth is set according to the required summary length, so that the key EDU sequence can be output. To improve coherence and readability, a pointer-generator network and a coverage mechanism are adopted, and the EDU nuclearity information is used to update the word-level attention distribution, which addresses the out-of-vocabulary problem and avoids repeating crucial information. The model is validated mainly on the standard text summarization dataset CNN/Daily Mail.
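To make the extractive stage concrete, the following is a minimal sketch of nucleus-first selection over a binary RST parse tree, stopping once a word budget derived from the required summary length is met. The RSTNode structure, its field names, and select_nucleus_edus are hypothetical illustrations, not the authors' implementation, which sets an extraction depth rather than a simple word budget.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class RSTNode:
    text: Optional[str] = None          # EDU text; set only on leaf nodes
    left: Optional["RSTNode"] = None
    right: Optional["RSTNode"] = None
    nucleus: str = "left"               # which child carries the nucleus

def select_nucleus_edus(root: RSTNode, budget: int) -> List[str]:
    """Breadth-first walk that expands nucleus branches before satellite
    branches and collects leaf EDUs until the word budget is exhausted."""
    selected, frontier, words = [], [root], 0
    while frontier and words < budget:
        node = frontier.pop(0)
        if node.text is not None:       # terminal node, i.e., an EDU
            selected.append(node.text)
            words += len(node.text.split())
            continue
        # visit the nucleus child before the satellite child
        order = (node.left, node.right) if node.nucleus == "left" \
                else (node.right, node.left)
        frontier.extend(n for n in order if n is not None)
    return selected

# toy example: a nucleus EDU elaborated by a satellite EDU
tree = RSTNode(left=RSTNode(text="The model improves ROUGE scores."),
               right=RSTNode(text="because it filters satellite units."),
               nucleus="left")
print(select_nucleus_edus(tree, budget=8))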
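The abstractive stage's attention update can be sketched in the same spirit. The coverage penalty and the copy/generate mixture below follow the standard pointer-generator formulation (See et al., 2017); the nuclearity reweighting is paraphrased from the paper's description, and the weight values, cov_lambda, and function names are illustrative assumptions rather than the published formulation.

import numpy as np

def attention_with_nuclearity(scores, nuclearity_w, coverage, cov_lambda=1.0):
    """scores:       raw attention logits over source tokens, shape (T,)
       nuclearity_w: per-token weight, higher for tokens in nucleus EDUs
       coverage:     running sum of past attention distributions, shape (T,)
    Returns the reweighted attention distribution and updated coverage."""
    # penalize tokens already attended to (coverage mechanism)
    logits = scores - cov_lambda * coverage
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    # boost tokens from nucleus EDUs, then renormalize
    attn = attn * nuclearity_w
    attn /= attn.sum()
    return attn, coverage + attn

def final_distribution(p_gen, p_vocab, attn, src_ids, vocab_size):
    """Mix the generator's vocabulary distribution with the copy
    distribution induced by attention over source token ids."""
    dist = p_gen * np.pad(p_vocab, (0, max(0, vocab_size - len(p_vocab))))
    np.add.at(dist, src_ids, (1.0 - p_gen) * attn)   # scatter copy probs
    return dist

# toy step: 4 source tokens, the middle two sit in a nucleus EDU
scores = np.array([0.2, 1.0, 1.1, 0.1])
nuc_w = np.array([1.0, 1.5, 1.5, 1.0])
attn, cov = attention_with_nuclearity(scores, nuc_w, np.zeros(4))
dist = final_distribution(0.6, np.ones(10) / 10, attn,
                          np.array([3, 5, 6, 2]), vocab_size=10)
print(attn.round(3), dist.sum().round(3))   # dist sums to 1.0

Because out-of-vocabulary source tokens can still receive copy probability through their source ids, this mixture is what lets the pointer side reproduce critical details verbatim while the coverage term discourages self-repetition.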
