Introduction
The quantity of online information is growing exponentially. Summarization extracts the main ideas of a document without requiring the reader to go through the full text, which reduces reading time and can help search engines identify relevant content. A summary can be classified as indicative or informative. An indicative summary acts as a pointer to parts of the original document, whereas an informative summary covers all of the relevant information in the text (Fan et al., 2006). In both cases, the most important advantage of using a summary is the reduced reading time.
Many automatic text summarization approaches have been proposed in the Natural Language Processing (NLP) subfield. Automatic summarization is a set of techniques that generates a summary of the important components of the original text; the summary is expected to be at most half the size of the original. A good automatic text summarizer should produce a summary that covers the most important information of the original text, or of a cluster of texts, while being coherent, non-redundant, and grammatically readable. In short, summarization must satisfy the following three optimization properties (a minimal selection sketch follows the list):
1. Summaries should contain important text units that are relevant to the user;
2. Summaries should not contain multiple textual units that convey similar information; and
3. Summaries are bounded in length (Babar and Patil, 2015).
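As a rough illustration of how these three properties interact, the sketch below greedily selects sentences by relevance score, skips near-duplicates, and stops at a word budget. The relevance scores are assumed to be given, and the Jaccard similarity and thresholds are illustrative stand-ins rather than any specific published method.

```python
# Minimal sketch of greedy sentence selection enforcing the three properties:
# relevance (property 1), non-redundancy (property 2), length bound (property 3).
def jaccard(a, b):
    """Word-overlap similarity between two sentences, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_summary(sentences, scores, max_words=100, sim_threshold=0.5):
    """Greedily pick high-scoring sentences until the word budget is spent."""
    chosen, used_words = [], 0
    # Visit sentences from most to least relevant (property 1).
    for sent, _ in sorted(zip(sentences, scores), key=lambda p: -p[1]):
        words = len(sent.split())
        if used_words + words > max_words:  # property 3: length bound
            continue
        if any(jaccard(sent, c) > sim_threshold for c in chosen):
            continue  # property 2: skip redundant sentences
        chosen.append(sent)
        used_words += words
    # Restore original document order for readability.
    return sorted(chosen, key=sentences.index)
```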
Generally, text summarization methods can be classified as extractive or abstractive, single-document or multi-document, and supervised or unsupervised.
Extractive summarization produces summaries by concatenating several sentences taken exactly as they appear in the original documents (Binwahlan et al., 2009; Garg et al., 2009); it selects a subset of the existing words, phrases, or sentences of the original text to generate a summary. By contrast, abstractive summarization (Knight and Marcu, 2002; Riezler et al., 2003; Turner and Charniak, 2005) uses different words to describe the contents of the original documents rather than directly copying original sentences (Yao et al., 2017). It relies on text-to-text generation or sentence compression to build a cohesive and coherent summary without redundant information.
Depending on the number of input documents, summarization can be single-document or multi-document: a single-document system creates a summary from one input document, whereas a multi-document system summarizes multiple documents related to a single subject.
Several different methods have been proposed in the literature for supervised approaches (Das and Martins, 2007; Pei et al., 2012) and unsupervised approaches (Erkan and Radev, 2004; Mihalcea and Tarau, 2004). The main idea of supervised approaches is that, given a set of training documents and their extractive summaries, the summarization process is modeled as a machine learning classification problem: sentences are classified as summary or non-summary sentences depending on the features that they possess.
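To make the classification framing concrete, the sketch below trains a binary classifier on hand-crafted sentence features. The particular features (length, position, title overlap) and the logistic-regression model are assumptions chosen for illustration, not the specific designs of the cited works.

```python
# Illustrative sketch of supervised extractive summarization as binary
# classification: label 1 = summary sentence, 0 = non-summary sentence.
from sklearn.linear_model import LogisticRegression

def sentence_features(sentence, position, doc_len, title_words):
    """Simple per-sentence features often used in extractive summarizers."""
    words = sentence.lower().split()
    return [
        len(words),                            # sentence length
        1.0 - position / max(doc_len - 1, 1),  # earlier sentences score higher
        sum(w in title_words for w in words),  # overlap with the document title
    ]

def train(documents, labels):
    """documents: list of (title, sentences); labels: per-sentence 0/1 lists."""
    X, y = [], []
    for (title, sentences), sent_labels in zip(documents, labels):
        title_words = set(title.lower().split())
        for i, sent in enumerate(sentences):
            X.append(sentence_features(sent, i, len(sentences), title_words))
            y.append(sent_labels[i])
    return LogisticRegression().fit(X, y)
```

At test time, the fitted model's predicted probabilities can rank the sentences of an unseen document, and the top-ranked sentences form the extract.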
The challenge of this research is to develop a single unsupervised extractive summarization method that produces a summary of any text without a training data set. A unigram-bigram extraction technique, combined with other features, is used to build the summary. The unigram-bigram extraction method relies on a new selective rule-based part-of-speech tagging scheme that uses only the three main categories of words: nouns, verbs, and adjectives. The proposed method fully omits other word categories such as prepositions, articles, and adverbs.
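As a rough illustration of this idea, the sketch below keeps only nouns, verbs, and adjectives and scores each sentence by the corpus frequencies of its surviving unigrams and bigrams. NLTK's off-the-shelf tagger and this additive scoring are illustrative stand-ins; the paper's own selective rule-based tagger and combined features are described later.

```python
# Filter tokens to nouns, verbs, and adjectives, then score sentences by the
# frequencies of the remaining unigrams and bigrams across the document.
# (Requires the NLTK tokenizer and tagger models to be downloaded first.)
from collections import Counter
import nltk

KEEP = ("NN", "VB", "JJ")  # noun, verb, adjective prefixes in the Penn tagset

def content_words(sentence):
    """Return only the noun/verb/adjective tokens of a sentence."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence.lower()))
    return [word for word, tag in tagged if tag.startswith(KEEP)]

def score_sentences(sentences):
    """Score each sentence by its filtered unigram and bigram frequencies."""
    filtered = [content_words(s) for s in sentences]
    unigrams = Counter(w for ws in filtered for w in ws)
    bigrams = Counter(b for ws in filtered for b in zip(ws, ws[1:]))
    return [
        sum(unigrams[w] for w in ws) + sum(bigrams[b] for b in zip(ws, ws[1:]))
        for ws in filtered
    ]
```

These scores could then feed a length-bounded, redundancy-aware selector such as the one sketched in the introduction.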
Interest in automatic text summarization appeared as early as the 1950s. Over time, several dominant extractive methods for producing summaries have emerged; they generate summaries that retain the most significant information of a given text. Summarization methods can be categorized as frequency-based, graph-based, or machine learning-based.