Generating Summaries Through Unigram and Bigram: Text Summarization

Nesreen Mohammad Alsharman (WISE, Amman, Jordan) and Inna V. Pivkina (NMSU, USA)
DOI: 10.4018/IJITWE.2020010105

Abstract

This article describes a new method for generating extractive summaries directly via unigram and bigram extraction techniques. The methodology uses selective part-of-speech tagging to extract significant unigrams and bigrams from a set of sentences. Extracted unigrams and bigrams, along with other features, are used to build a final summary. A new selective rule-based part-of-speech tagging system is developed that concentrates on the parts of speech most important for summarization: nouns, verbs, and adjectives. Other parts of speech, such as prepositions, articles, and adverbs, play a lesser role in determining the meaning of sentences; therefore, they are not considered when choosing significant unigrams and bigrams. The proposed method is tested on two problem domains: the citations and Opinosis data sets. Results show that the proposed method performs better than the TextRank, LexRank, and Edmundson summarization methods. The proposed method is general enough to summarize texts from any domain.

Introduction

The quantity of online information is growing exponentially. Summarization conveys the main ideas of a document without requiring the reader to go through the entire text, which reduces reading time and can help search engines recognize relevant content. A summary can be classified as either indicative or informative. An indicative summary points to selected parts of the original document, while an informative summary covers all relevant information in the text (Fan et al., 2006). In both cases, the most important advantage of using a summary is the reduced reading time.

Many automatic text summarization approaches exist within the Natural Language Processing (NLP) subfield. Automatic summarization is a set of techniques that generate summaries of the important components of the original text; the summaries are expected to be at most half the size of the original. A good automatic text summarizer should produce a summary that covers the most important information of the original text or cluster of texts while being coherent, non-redundant, and grammatically readable. In short, summarization must satisfy the following three optimization properties:

1. Summaries should contain important text units that are relevant to the user;
2. Summaries should not contain multiple textual units that convey similar information; and
3. Summaries are bounded in length (Babar and Patil, 2015).

Generally, text summarization methods can be classified as extractive or abstractive, single-document or multi-document, and supervised or unsupervised.

Extractive summarization produces summaries by concatenating sentences taken exactly as they appear in the original documents being summarized (Binwahlan et al., 2009; Garg et al., 2009); it selects a subset of existing words, phrases, or sentences in the original text to form the summary. By contrast, abstractive summarization (Knight and Marcu, 2002; Riezler et al., 2003; Turner and Charniak, 2005) uses different words to describe the contents of the original documents rather than directly copying original sentences (Yao et al., 2017). Abstractive summarization uses text-to-text generation or sentence compression to build a cohesive and coherent summary without redundant information.
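The extractive style can be pictured with a minimal sketch. The scorer below is a generic frequency-based heuristic shown only to illustrate how extractive summarizers select existing sentences; it is not the method proposed in this article, and the function name and scoring choice are illustrative assumptions.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Pick the sentences whose words are most frequent in the text.

    A classic frequency-based extractive heuristic (illustrative only,
    not the article's proposed method): frequent words signal the main
    topic, so sentences rich in frequent words are kept verbatim.
    """
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by average word frequency (length-normalized).
    scored = []
    for i, s in enumerate(sentences):
        words = re.findall(r'\w+', s.lower())
        score = sum(freq[w] for w in words) / max(len(words), 1)
        scored.append((score, i))
    # Keep the top-scoring sentences, restored to their original order.
    top = sorted(i for _, i in sorted(scored, reverse=True)[:num_sentences])
    return ' '.join(sentences[i] for i in top)
```

Because the output is a concatenation of untouched source sentences, it is grammatical by construction, which is the main practical appeal of extractive methods.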

Depending on the number of input documents, a system can perform single-document or multi-document summarization. A single-document summarization system creates a summary from one input document, whereas a multi-document system summarizes multiple documents related to a single subject.

Several methods have been proposed in the literature for supervised approaches (Das and Martins, 2007; Pei et al., 2012) and unsupervised approaches (Erkan and Radev, 2004; Mihalcea and Tarau, 2004). The main idea of supervised approaches is that, given a set of training documents and their extractive summaries, the summarization process is modeled as a machine learning classification problem: sentences are classified as summary or non-summary sentences depending on the features they possess.

The goal of this research is to develop a single unsupervised extractive summarization method that produces a summary of any text without a training data set. A unigram-bigram extraction technique, along with other features, is used to build the summary. The unigram-bigram extraction method relies on a new selective rule-based part-of-speech tagging that uses only the three main categories of words: nouns, verbs, and adjectives. The proposed method fully omits other word categories such as prepositions, articles, and adverbs.
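The selective filtering step described above can be sketched as follows. A real implementation would use a full rule-based tagger; the tiny `LEXICON` dictionary here is a hypothetical stand-in for illustration, and forming bigrams over the filtered token sequence is one simple reading of the approach, assumed rather than taken from the article.

```python
# Toy selective part-of-speech filtering: only nouns, verbs, and
# adjectives survive; unigrams and bigrams are then formed from the
# surviving tokens. LEXICON is a hypothetical mini-lexicon standing
# in for a real rule-based tagger.
LEXICON = {
    'summarization': 'NOUN', 'method': 'NOUN', 'text': 'NOUN',
    'extracts': 'VERB', 'produces': 'VERB',
    'significant': 'ADJ', 'short': 'ADJ',
    'the': 'DET', 'a': 'DET', 'very': 'ADV', 'from': 'PREP',
}
KEEP = {'NOUN', 'VERB', 'ADJ'}  # the three categories the method retains

def significant_ngrams(sentence):
    """Return (unigrams, bigrams) built from noun/verb/adjective tokens."""
    tokens = sentence.lower().split()
    # Drop prepositions, articles, adverbs, and anything else not in KEEP.
    kept = [t for t in tokens if LEXICON.get(t) in KEEP]
    unigrams = kept
    bigrams = list(zip(kept, kept[1:]))
    return unigrams, bigrams
```

For example, `significant_ngrams("The method extracts significant text")` discards "the" and yields the unigrams `['method', 'extracts', 'significant', 'text']` together with the three adjacent bigrams over that filtered sequence.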

Related Work

Interest in automatic text summarization appeared as early as the 1950s. Over time, several dominant extractive methods for producing summaries have emerged. These methods generate summaries that retain the most significant information of a given text. Summarization methods can be categorized into frequency-based, graph-based, and machine learning-based approaches.
