Document Summarization Using Sentence Features


Rasmita Rautray (Department of Computer Science and Engineering, Institute of Technical Education & Research, Siksha ‘O' Anusandhan University, Bhubaneswar, India), Rakesh Chandra Balabantaray (Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar, India) and Anisha Bhardwaj (Department of Computer Science and Engineering, Institute of Technical Education & Research, Siksha ‘O' Anusandhan University, Bhubaneswar, India)
Copyright: © 2015 |Pages: 12
DOI: 10.4018/IJIRR.2015010103

Abstract

With the exponential growth of information available electronically, there is an increasing demand for text summarization. Text summarization is the process of extracting the content of the original text into a shorter form that provides useful information to the user. This paper presents a summarizer that produces summaries while reducing redundant information and maximizing summary relevancy. The proposed model takes several features into account, including title feature, sentence weight, term weight, sentence position, inter-sentence similarity, proper noun, thematic word and numerical data. The score of each feature is obtained from the document sets. The models are evaluated by the F-score of the extracted sentences at a 20% compression rate on the C-50 data corpus. In these experiments, the proposed PSO summarizer shows significantly better performance than the other summarizers.

Overview Of Summarization System

Figure 1 illustrates the proposed automatic summarization model. It comprises three basic steps to generate a summary: preprocessing, feature extraction and summary generation.

Figure 1.

Proposed summarization model
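As a rough sketch of this three-step flow, the selection step can be expressed as ranking sentences by a feature score and keeping the top fraction. The helper below is illustrative: the scoring function is left as a parameter (the paper optimizes feature weights with PSO, which is not reproduced here), and the 20% compression rate comes from the evaluation setup.

```python
def summarize(sentences, feature_score, compression=0.2):
    """Rank sentences by score, keep the top fraction, restore original order."""
    # Sort sentence indices by descending feature score.
    ranked = sorted(enumerate(sentences),
                    key=lambda pair: feature_score(pair[1]),
                    reverse=True)
    # Keep roughly `compression` of the sentences, at least one.
    keep = max(1, round(len(sentences) * compression))
    # Re-sort the chosen indices so the summary preserves document order.
    chosen = sorted(idx for idx, _ in ranked[:keep])
    return [sentences[i] for i in chosen]
```

For example, with five sentences and sentence length as a stand-in score, a 20% compression rate keeps only the single highest-scoring sentence.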

Preprocessing

Initially, the document is segmented into sentences and the words of each sentence are extracted. The functional words or stop words such as “a”, “the”, “of” (frequently occurring insignificant words) are then removed from the word list, and the words remaining in the sentences are stemmed.
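These preprocessing steps can be sketched as follows. The tiny stop-word list and the naive suffix-stripping stemmer are illustrative stand-ins only; the paper does not name a specific stop-word list or stemming algorithm.

```python
import re

# Small illustrative stop-word list; a real system would use a fuller one.
STOP_WORDS = {"a", "an", "the", "of", "in", "on", "is", "are", "and", "to"}

def split_sentences(document):
    """Segment a document into sentences on ., ! and ? boundaries."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def naive_stem(word):
    """Very rough suffix stripping, standing in for a real stemmer."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(document):
    """Return, for each sentence, its stemmed content words."""
    processed = []
    for sentence in split_sentences(document):
        words = re.findall(r"[a-z0-9]+", sentence.lower())
        content = [naive_stem(w) for w in words if w not in STOP_WORDS]
        processed.append(content)
    return processed
```

For example, `preprocess("The cats are running. Dogs bark.")` yields `[["cat", "runn"], ["dog", "bark"]]` — stop words dropped and crude stems applied.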

Feature Extraction

Features are one of the most important aspects of any text-mining task. The following features are therefore computed for each sentence as input to the optimization model (Table 1).

Table 1.
Abbreviations of terms used to calculate feature scores

Abbreviation   Term
KWDS           Key Words in Sentence
KWDT           Key Words in Title
SL             Sentence Length
LSL            Longest Sentence Length
SW             Each Sentence Weight
MSW            Maximum Sentence Weight in document
TNS            Total Number of Sentences in document
WDS            Words in Sentence
WDOS           Words in Other Sentence
NPN            Number of Proper Nouns in document
TNTW           Total Number of Thematic Words
TW             Thematic Words in sentence
NND            Number of Numeric data in document
  1. ft1 = Title Feature: the similarity between the sentence and the document title. The score of ft1 is calculated as:

     ft1(Si) = |KWDS ∩ KWDT| / |KWDT| (1)

  2. ft2 = Sentence Length: this feature penalizes sentences that are too short, since such sentences are not expected to belong to the summary. The length of the longest sentence in the document is used for normalization:

     ft2(Si) = SL(Si) / LSL, where i = 1…TNS (2)

  3. ft3 = Average Sentence Weight: this feature specifies the weight of each sentence, taking term frequency into account. The score of ft3 is calculated as:

     ft3(Si) = SW(Si) / MSW, where i = 1…TNS (3)

  4. ft4 = Sentence Position: we assume that the first sentences of a paragraph are the most important. Sentences are therefore ranked according to their position, with scores in the range 0 to 1: the first sentence in a paragraph has score 1, the second a somewhat lower score, and so on. The score of ft4 is calculated as:

     ft4(Si) = (TNS − i + 1) / TNS, where i = 1…TNS (4)

  5. ft5 = Inter-Sentence Similarity: the similarity between one sentence and the other sentences in the document. The score of ft5 is calculated as:

     ft5(Si) = |WDS ∩ WDOS| / |WDS ∪ WDOS| (5)

  6. ft6 = Proper Noun: this feature captures whether the sentence contains named entities; a sentence containing proper nouns is considered important and is most probably included in the document summary. The score of ft6 is calculated as:

     ft6(Si) = (number of proper nouns in Si) / NPN (6)

  7. ft7 = Thematic Word: the most frequent words are defined as thematic words, and sentence scores are functions of the thematic words’ frequencies. The score of ft7 is calculated as:

     ft7(Si) = TW(Si) / TNTW (7)

  8. ft8 = Numerical Data: a sentence containing numerical data is usually important and is most probably included in the document summary. The score of ft8 is calculated as:

     ft8(Si) = (number of numeric data in Si) / NND (8)
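On tokenized sentences, the set-based features above reduce to small functions over word sets. The sketch below is a minimal illustration under the Table 1 definitions: the function names are assumptions, sentences are represented as word lists, and the position feature follows the stated ranking (score 1 for the first sentence, decreasing thereafter).

```python
def title_feature(sentence_words, title_words):
    """ft1: overlap of sentence keywords with title keywords (Eq. 1)."""
    title = set(title_words)
    if not title:
        return 0.0
    return len(set(sentence_words) & title) / len(title)

def sentence_length_feature(sentence_words, sentences):
    """ft2: sentence length normalized by the longest sentence (Eq. 2)."""
    longest = max(len(s) for s in sentences)
    return len(sentence_words) / longest

def position_feature(index, total):
    """ft4: first sentence scores 1, later ones progressively less (Eq. 4).
    `index` is counted from 0 here, i.e. i - 1 in the paper's notation."""
    return (total - index) / total

def inter_sentence_similarity(sentence_words, other_words):
    """ft5: Jaccard similarity between two sentences' word sets (Eq. 5)."""
    a, b = set(sentence_words), set(other_words)
    return len(a & b) / len(a | b) if a | b else 0.0

def thematic_feature(sentence_words, thematic_words):
    """ft7: thematic words in the sentence over all thematic words (Eq. 7)."""
    if not thematic_words:
        return 0.0
    hits = sum(1 for w in sentence_words if w in thematic_words)
    return hits / len(thematic_words)
```

Each score lies in [0, 1], so the per-sentence features can be combined directly, e.g. as a weighted sum, before sentence selection.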
