MHLM: Majority Voting Based Hybrid Learning Model for Multi-Document Summarization

Suneetha S. (Hasvita Institute of Engineering and Technology, Hyderabad, India) and Venugopal Reddy A. (Jawaharlal Nehru Technological University, Hyderabad, India)
DOI: 10.4018/IJAIML.2019010104

Abstract

Text summarization from multiple documents is an active research area, as data on the World Wide Web (WWW) is available in abundance. Retrieving the relevant content from this mass of data is time-consuming and tedious for users, and numerous techniques have been proposed to deliver relevant information to users in the form of a summary. Accordingly, this article presents the majority voting based hybrid learning model (MHLM) for multi-document summarization. First, the multiple documents are pre-processed, and features, such as the title-based feature, sentence length, numerical data, and TF-IDF, are extracted for each individual sentence of the document. The feature set is then passed to the proposed MHLM classifier, which combines Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Neural Network (NN) classifiers to evaluate the significance of the sentences in the document. These classifiers assign significance scores based on the four features extracted from each sentence. The majority voting model then selects the significant sentences based on these scores and builds the summary for the user, thereby reducing redundancy and increasing the quality of the summary relative to the original documents. Experiments performed on the DUC 2002 data set analyze the effectiveness of the proposed MHLM, which attains precision and recall of 0.94, an F-measure of 0.93, and ROUGE-1 of 0.6324.
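The abstract names four per-sentence features: title-based, sentence length, numerical data, and TF-IDF. The following sketch shows one plausible way to compute them; the exact formulas and normalizations are assumptions for illustration, not the paper's published definitions.

```python
import math
import re

def sentence_features(sentences, title):
    """Compute the four per-sentence features named in the paper.
    The concrete formulas here are illustrative assumptions."""
    title_words = set(title.lower().split())
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    max_len = max(len(t) for t in tokenized)
    n = len(sentences)
    # Document frequency for TF-IDF, treating each sentence as a "document".
    df = {}
    for toks in tokenized:
        for w in set(toks):
            df[w] = df.get(w, 0) + 1
    feats = []
    for s, toks in zip(sentences, tokenized):
        f_title = len(set(toks) & title_words) / max(len(title_words), 1)
        f_len = len(toks) / max_len                   # normalized length
        f_num = 1.0 if re.search(r"\d", s) else 0.0   # numerical data present
        f_tfidf = (sum(toks.count(w) / len(toks) * math.log(n / df[w])
                       for w in set(toks)) if toks else 0.0)
        feats.append((f_title, f_len, f_num, f_tfidf))
    return feats
```

Each sentence thus maps to a four-dimensional feature vector, which is the input the MHLM classifiers would score.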

1. Introduction

Online resources such as text articles, web pages, and news documents are available worldwide, and collecting useful data from this abundance consumes considerable time. To save time and to cope with the limited screen space, bandwidth, and attention span associated with browsing data on mobile devices, the data from multiple sources must be condensed into a summary so that only the relevant information is presented to the user (Chen et al., 2014). This need for summarization technology led to the development of document summarization, which aims to provide a reduced version of a source text such that the short version remains informative (Abdi et al., 2015; Wei et al., 2010). Document summarization can be either single-document or multi-document based (Mendoza et al., 2014). Multi-document summarization generates a summary from multiple documents on a single topic, whereas single-document summarization constructs a summary from a single document (Abdi et al., 2017; Rautray & Balabantaray, 2017). Multi-document summarization has therefore gained remarkable interest for producing summaries from the huge volumes of data available on the WWW, leading to the development of robust multi-document summarization systems (Qiang et al., 2016).

Multi-document summarization is the method of filtering significant information from a group of documents to generate a compressed version for particular users and applications. It is an extension of single-document summarization. Multi-document summarization generates information reports that are comprehensive and concise; such a summary holds the necessary information and thereby reduces the need to access the original files. A multi-document summary is advantageous over its single-document counterpart (Khoo et al., 2002) in that it gives a deep overview of the topics of the document set by extracting the content shared across the documents. It also rejects repeated content by detecting unique features, so that all sub-topics are covered irrespective of time (Qumsiyeh & Ng, 2016). Text summarization is performed in three major phases: analysis, transformation, and synthesis. The input text is analyzed to extract the important features, which are transformed into a summary; finally, the synthesis phase provides the summary that is most relevant to the user (Fattah, 2014).
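The paragraph above notes that a multi-document summary must reject repeated content drawn from different source documents. One minimal sketch of that idea, assuming a simple Jaccard word-overlap measure and an illustrative threshold (neither is specified by the paper), is a greedy selector that keeps high-scoring sentences while skipping near-duplicates:

```python
def select_non_redundant(scored_sentences, threshold=0.6):
    """Greedily keep the highest-scoring sentences, skipping any whose
    Jaccard word overlap with an already-kept sentence exceeds the
    threshold. The 0.6 threshold is an illustrative assumption."""
    kept = []
    for sent, score in sorted(scored_sentences, key=lambda p: -p[1]):
        words = set(sent.lower().split())
        if all(len(words & set(k.lower().split())) /
               len(words | set(k.lower().split())) < threshold
               for k in kept):
            kept.append(sent)
    return kept
```

A pair of nearly identical sentences from two source documents would collapse to a single entry in the output, while distinct sub-topics survive.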

Multi-document text summarization can be performed with either a supervised or an unsupervised learning mechanism (Fattah & Ren, 2009). Supervised summarization has learning and testing phases: in the learning phase, the training documents together with their summaries are used to train a model that categorizes sentences into two classes, summary sentences and non-summary sentences. In supervised summarization, classifiers such as SVM (Chali et al., 2009), NN (Fattah & Ren, 2008), and Random Forest (John & Wilscy, 2013) are used (John et al., 2017). Two types of unsupervised models are normally used for sentence selection. The first is based on sentence ranking, which utilizes techniques such as clustering, PageRank, and topic modeling to rank the sentences. The second is based on sparse reconstruction, which chooses a sparse subset of the sentences that linearly reconstructs all the sentences in the original document set.
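In the supervised setting described above, the MHLM of this article combines the per-sentence decisions of its three base classifiers (SVM, KNN, NN) by majority vote. The voting step itself can be sketched as follows; the label vectors here are hypothetical classifier outputs, not results from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-sentence class labels (1 = summary sentence,
    0 = non-summary) from several base classifiers by simple majority.
    A minimal sketch of the voting step, not the full MHLM."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

# Hypothetical outputs of the SVM, KNN, and NN over five sentences.
svm = [1, 0, 1, 0, 1]
knn = [1, 1, 0, 0, 1]
nn  = [0, 1, 1, 0, 1]
print(majority_vote([svm, knn, nn]))  # → [1, 1, 1, 0, 1]
```

With three voters there are no ties, so each sentence receives a definite summary/non-summary decision.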
