1. Introduction
Text summarization is one of the most widely used sub-fields of Natural Language Processing (NLP). A summary is a text produced from one or more source texts that conveys their most important information, frequently repeated concepts, and core ideas, and that is no longer than half the length of the original, usually significantly less (Barzilay, Elhadad & McKeown, 2001). As the amount of available information grows, systems that can automatically summarize one or more documents become increasingly desirable, and text summarization has been developed and improved to help users manage this information. Recent research has investigated types of summaries, methods to create them, and methods to evaluate them. The main goal of a summary is to present the main ideas of a document in a reduced size. If all sentences in a text document were of equal importance, producing a summary would not be very effective, as any reduction in the size of a document would carry a proportional decrease in its contained information (Lloret, Ferrández, Muñoz & Palomar, 2008). Summarization approaches fall into many categories: single-document vs. multi-document, extractive vs. abstractive, informative vs. indicative, user-focused vs. generic, and statistical, linguistic, or machine-learning based.
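The extractive idea underlying many of the systems surveyed below can be illustrated with a minimal sketch: score each sentence by the frequency of its content words and keep only the highest-scoring half. This sketch is not taken from any of the cited systems; the stop-word list, the tokenization, and the averaging score are illustrative assumptions.

```python
import re
from collections import Counter

# Toy stop-word list, for illustration only.
STOP_WORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and", "it"}

def summarize(text, ratio=0.5):
    """Score each sentence by the average corpus frequency of its content
    words and return the top-scoring sentences (at most `ratio` of the
    sentence count), preserving their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOP_WORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    k = max(1, int(len(sentences) * ratio))
    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)
```

Real systems replace the frequency score with richer signals (position, cue phrases, semantic similarity), but the select-and-concatenate skeleton is the same.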
Dingding et al. (Wang, Li, Zhu & Ding, 2008) proposed a multi-document summarization framework with four stages. First, given a set of documents to be summarized, the documents are cleaned by removing formatting characters and decomposed into sentences. Second, sentence-sentence similarities are computed using semantic analysis and assembled into a similarity matrix: each sentence is parsed into frames with a semantic role parser, and pair-wise sentence semantic similarity is calculated from both the semantic role analysis and word relation discovery. Third, symmetric matrix factorization groups the sentences into clusters. Finally, within each cluster the most semantically important sentence is identified using a measure that combines internal information (e.g., the computed similarity between sentences) and external information (e.g., the given topic information), and the most informative sentences from each cluster are selected to form the summary. The system proposed by Sarkar (2009) consists of three primary components: document preprocessing (formatting the input document, segmentation, and stop-word removal), sentence ranking (assigning scores to sentences based on domain knowledge and word-level and sentence-level features), and summary generation (selecting the top n sentences by score). Finally, the sentences included in the summary are reordered to improve readability.
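The cluster-then-select structure of the Wang et al. framework can be sketched in simplified form. Here Jaccard word overlap stands in for their semantic-role-based similarity, and a greedy threshold-based grouping stands in for symmetric matrix factorization; the `threshold` value is an illustrative assumption.

```python
import re

def tokens(sentence):
    return set(re.findall(r"[a-z]+", sentence.lower()))

def similarity(a, b):
    """Jaccard overlap of the two sentences' word sets -- a crude stand-in
    for the semantic-role-based similarity used in the original framework."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / (len(ta | tb) or 1)

def cluster_and_select(sentences, threshold=0.2):
    """Group similar sentences, then pick one representative per group."""
    # Greedy clustering: join a sentence to the first cluster whose seed it
    # resembles, else start a new cluster. (The paper instead factorizes
    # the full similarity matrix.)
    clusters = []
    for s in sentences:
        for c in clusters:
            if similarity(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    # From each cluster keep the most "central" sentence: the one with the
    # highest total similarity to its cluster mates.
    summary = []
    for c in clusters:
        best = max(c, key=lambda s: sum(similarity(s, o) for o in c if o is not s))
        summary.append(best)
    return summary
```

One representative sentence per cluster keeps the summary short while covering each distinct topic in the document set, which is the intuition behind the original design.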
The model presented by Kowsalya et al. produces an extractive summary for a given set of documents based on word sequence models, by extracting Maximal Frequent Sequences (MFS) from the given text (Kowsalya, Priya & Nithiya, 2011). It employs the word sequence information of the text itself to detect candidate text fragments for composing the summary: the MFS technique extracts and detects the most important terms in the source document, and the Normalized Google Distance (NGD) serves as the dissimilarity measure for sentence clustering.
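The core MFS idea can be sketched as follows: count contiguous word sequences (n-grams) across the sentences, keep those that occur often enough, and then discard any frequent sequence contained in a longer frequent one. This is a simplified illustration, not the authors' algorithm; the `min_count` and `max_len` parameters are assumptions, and the NGD-based clustering step is omitted.

```python
from collections import Counter

def frequent_sequences(sentences, min_count=2, max_len=4):
    """Count contiguous word sequences across sentences and return those
    occurring at least `min_count` times."""
    counts = Counter()
    for s in sentences:
        words = s.lower().split()
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[tuple(words[i:i + n])] += 1
    return {seq for seq, c in counts.items() if c >= min_count}

def maximal_frequent_sequences(sentences, min_count=2, max_len=4):
    """Keep only frequent sequences not contained in a longer frequent one."""
    freq = frequent_sequences(sentences, min_count, max_len)

    def contained(short, long_):
        n, m = len(short), len(long_)
        return n < m and any(long_[i:i + n] == short for i in range(m - n + 1))

    return {s for s in freq if not any(contained(s, t) for t in freq)}
```

Keeping only maximal sequences avoids redundantly listing every sub-phrase of an important multi-word term.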
Nastase (2008) introduced an approach that allows users to specify their information need as a query/topic. To “understand” the query, the system expands it using encyclopedic knowledge from Wikipedia. The expanded query is then linked to its associated documents through spreading activation in a graph that represents the words and their grammatical connections in these documents. The expanded topic words and the activated nodes in the graph are used to produce an extractive summary.
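The spreading-activation step can be sketched on a toy word graph: seed the query words with full activation, then repeatedly pass a decayed share of each node's activation to its neighbours, so that words grammatically connected to the query become activated too. The graph structure, `decay` factor, and step count here are illustrative assumptions, not values from the paper.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, steps=2):
    """Propagate activation from seed (query) words over an undirected word
    graph; at each step a node passes decay * its activation to each
    neighbour, accumulating on top of existing activation."""
    activation = defaultdict(float)
    for w in seeds:
        activation[w] = 1.0
    for _ in range(steps):
        new = defaultdict(float, activation)
        for node, value in activation.items():
            for neighbour in graph.get(node, ()):
                new[neighbour] += decay * value
        activation = new
    return dict(activation)
```

Sentences containing highly activated words would then be preferred when assembling the extractive summary, which mirrors the role this step plays in the described approach.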