A New LSA and Entropy-Based Approach for Automatic Text Document Summarization


Chandra Yadav, Aditi Sharan
Copyright © 2018 | Pages: 32
DOI: 10.4018/IJSWIS.2018100101

Abstract

Automatic text document summarization is an active research area in the field of text mining. In this article, the authors propose two new approaches (three models) for sentence selection, along with a new entropy-based criterion for summary evaluation. The first approach is based on an algebraic model, Singular Value Decomposition (SVD), i.e. Latent Semantic Analysis (LSA), and is termed proposed_model-1; the second approach is based on entropy and is further divided into proposed_model-2 and proposed_model-3. The first proposed model uses the right singular matrix, while the second and third proposed models are based on Shannon entropy. The advantage of these models is that they are not length-dominated, give better results, and produce summaries with low redundancy. Along with these three new models, an entropy-based summary evaluation criterion is proposed and tested. The authors also show that their entropy-based models are statistically closer to the standard/gold summaries of DUC-2002. The dataset used in this article is taken from the Document Understanding Conference 2002 (DUC-2002).

1. Introduction

Text document summarization plays an important role in Information Retrieval (IR) because it condenses a large pool of information into a concise form by selecting the salient sentences and discarding redundant sentences (or information); researchers term this the summarization process. Radev et al. (2002) defined a summary as "a text that is produced from one or more texts that convey important information in the original texts, and that is no longer than half of the original text and usually significantly less than that". According to Alguliev et al. (2011), automatic text document summarization is an interdisciplinary research area of computer science that includes artificial intelligence (AI), data mining, and statistics, as well as psychology. Takale et al. (2016) highlight applications of text document summarization in search engines such as Google, in news summarization (which can extend from a single document to multiple documents), and in accounting, research, and the efficient utilization of search results. A real-life system based on text document summarization is the "Ultimate Research Assistant" by Hoskinson, Andy (2005), which performs text mining on Internet search results. Another system, Newsblaster, proposed by McKeown, Kathleen, et al. (2003), automatically collects, clusters, categorizes, and summarizes news from different sites on the web (such as CNN and Reuters) on a daily basis, and lets users browse the results.

Broadly, the summarization task can be categorized into two types: abstractive summarization and extractive summarization. Abstractive summarization produces a more human-like summary, which is the actual goal of text document summarization. As defined by Mani and Maybury (1999) and Wan (2008), abstractive summarization requires three things: information fusion, sentence compression, and reformulation. The actual challenge in abstractive summarization is the generation of new sentences and new phrases, while the produced summary must retain the same meaning as the source document. According to Balaji, J., et al. (2016), abstractive summarization requires a semantic representation of the data, inference rules, and natural language generation. They proposed a semi-supervised bootstrapping approach to identify the components relevant to generating an abstractive summary.

Extractive summarization is based on extracted entities, where an entity may be a sentence, a subpart of a sentence, a phrase, or a word. Our work focuses on the extractive technique. To date, most work has been done on extractive summarization because (1) extraction is easy, as it is based on some scoring criteria over words, sentences, or phrases, and (2) evaluation of an extractive summary is easy, being based simply on word counts or word sequences. A study by Goldstein et al. (2000) states that human-generated summaries also vary from person to person; the reasons for this may be the setup of the human mind, domain knowledge, interest in a particular domain, etc. A newer kind of extractive summarization is algebraic-technique-based summarization. Examples of algebraic summarization techniques are Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), Linear Discriminant Analysis (LDA), and Non-negative Matrix Factorization (NMF), as discussed by Steinberger, J., et al. (2007) and Chiru, C. G., et al. (2014), and archetypal analysis, introduced by Canhasi, E., & Kononenko, I. (2014).
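To make the LSA idea concrete, the following is a minimal sketch (not the authors' exact method) of SVD-based extractive selection on a toy term-sentence matrix; the matrix values and the Gong-and-Liu-style selection rule (one sentence per top concept) are illustrative assumptions:

```python
import numpy as np

# Hypothetical toy term-sentence matrix: rows = terms, columns = sentences.
# In practice the entries would be tf or tf-idf weights from the document.
A = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 2.0],
])

# SVD: A = U * diag(s) * Vt. Each row of Vt (the right singular matrix)
# describes how strongly every sentence expresses one latent concept.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Illustrative selection rule: for each of the top-k concepts, pick the
# sentence with the largest absolute weight in that concept's row of Vt.
k = 2
summary_ids = [int(np.argmax(np.abs(Vt[i]))) for i in range(k)]
print(summary_ids)
```

The right singular matrix `Vt` is the object the first proposed model operates on; a real system would build `A` from the document's vocabulary and then map the selected column indices back to sentences.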

In this paper, we propose three models (based on two approaches) for sentence selection that rely on LSA. In the first proposed model, two sentences are extracted from the right singular matrix to maintain diversity in the summary. The second and third proposed models are based on Shannon entropy, in which a latent concept (in the second model) or a sentence (in the third model) is selected on the basis of the highest entropy score. We note that early work in this direction (applying LSA to text mining tasks) was begun by Deerwester, S., et al. (1990), whose objective was the indexing of text documents using LSA.
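As a rough illustration of entropy-based sentence scoring (a sketch under assumed definitions, not the paper's exact formulation), each sentence can be scored by summing the Shannon information terms of its words under the document's word distribution; the example document and the `sentence_entropy` helper are hypothetical:

```python
import math
from collections import Counter

def sentence_entropy(sentence, doc_word_counts, total_words):
    """Score a sentence by summing -p*log2(p) over its words, where p is
    each word's relative frequency in the whole document (an assumed,
    illustrative definition of the entropy score)."""
    score = 0.0
    for w in sentence.lower().split():
        p = doc_word_counts[w] / total_words
        if p > 0:
            score += -p * math.log2(p)
    return score

# Hypothetical toy document and its candidate sentences.
doc = "the cat sat on the mat the cat slept"
counts = Counter(doc.split())
total = sum(counts.values())
sentences = ["the cat sat on the mat", "the cat slept"]

# Select the sentence with the highest entropy score, mirroring the
# highest-entropy selection rule of the second and third models.
scores = [sentence_entropy(s, counts, total) for s in sentences]
best = sentences[scores.index(max(scores))]
print(best)
```

In the second model this kind of score would be computed per latent concept (rows of the SVD factors) rather than per raw sentence; the sketch only shows the sentence-level variant.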
