Evaluation of Narrative and Expository Text Summaries Using Latent Semantic Analysis

René Venegas (Pontificia Universidad Católica de Valparaíso, Chile)
DOI: 10.4018/978-1-60960-741-8.ch031

Abstract

In this chapter I approach three automatic methods for the evaluation of summaries of narrative and expository texts in Spanish. The task consisted of correlating the evaluations made by three raters for 373 summaries with the scores provided by latent semantic analysis (LSA). The LSA scores were obtained by means of three methods: 1) comparison of the summaries with the source text, 2) comparison of the summaries with a summary approved by consensus, and 3) comparison of the summaries with three summaries written by three language teachers. The most relevant results are a) a high positive correlation among the evaluations made by the raters (r = 0.642); b) a high positive correlation between the computational methods (r = 0.810); and c) a moderate-to-high positive correlation between the raters' evaluations and the second and third LSA methods (r = 0.585 and 0.604) for summaries of narrative texts. Neither method differed significantly, in statistical terms, from the correlation among raters when the texts evaluated were predominantly narrative. These results allow us to assert that at least two holistic LSA-based methods are useful for assessing reading comprehension of narrative texts written in Spanish.
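As a rough illustration of this evaluation design, the sketch below (assuming SciPy is available) correlates a set of human rater scores with LSA similarity scores. The numbers are invented for illustration and are not the chapter's data.

```python
# Minimal sketch of the correlation step: human rater scores vs. LSA scores.
# The values below are hypothetical placeholders, not the chapter's results.
from scipy.stats import pearsonr

# Hypothetical holistic scores for five summaries (e.g., the mean of three
# raters) and the corresponding LSA-based similarity scores.
rater_scores = [4.0, 2.5, 3.0, 5.0, 1.5]
lsa_scores = [0.71, 0.42, 0.55, 0.83, 0.30]

r, p_value = pearsonr(rater_scores, lsa_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```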

Introduction

In Latin America, comprehension skills associated with reading and writing are not effective enough for students to understand and produce texts that will allow them to perform well in today's society (Peronard, 1989; Peronard, Gómez, Parodi & Núñez, 1998; PISA, 2007; Ibañez, 2008). There is general consensus among researchers studying reading comprehension as to the need to evaluate individuals' psycholinguistic processes, and several options for evaluation have been highlighted, such as answers to literal and inferential questions (both local and global), development of conceptual and/or mental maps, formulation of open-ended questions, paraphrasing, and summary writing.

I am specifically interested in further inquiry regarding one of these options: the summary as an evaluation technique. The use of this technique is theoretically supported by the proposals of van Dijk (1978). From this psycholinguistic perspective, the aim is to formalize the processes that the reader carries out in order to comprehend a text and subsequently incorporate it into memory. Van Dijk (1978) first proposed the so-called macrorules for this purpose, followed by macrostrategies (van Dijk & Kintsch, 1983), to explain why what a person remembers and verbalizes after reading a relatively long text does not include all the ideas originally expressed in that text. Good comprehenders apply these rules and strategies, eliminating propositions they consider irrelevant and reprocessing others in order to build their own version of the text. However, this evaluation technique presents some problems, mainly related to human variables such as the cognitive load on the evaluator, parallel attention to formal elements of written production (spelling, handwriting), subjective aspects that may intervene, the systematicity with which evaluation criteria are applied, consensus among multiple evaluators, and the extensive amount of time required for evaluation.

These problems, along with readers' interest in automatically capturing core text information, have encouraged the study of summarizing and summary evaluation from a computational perspective. Automatic summary construction and automatic summary evaluation are problems that have been discussed since the mid-1960s, although computer systems reliable enough to perform both tasks have not yet been developed. However, advances in computational linguistics, natural language processing, and the development of several information retrieval techniques lead us to think that we are closer to improving the generation and evaluation of written summaries (Mani & Maybury, 2001).

Techniques for building and automatically evaluating text summaries are generally classified into two categories: linguistic and statistical. Linguistic techniques use knowledge about syntax, semantics, or language use, while statistical techniques operate by computing values for the words and phrases found in the text, using measures such as frequencies, n-grams, and co-occurrences.
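As a rough illustration of the statistical family, the following sketch scores a summary by the proportion of its n-grams that also occur in the source text. It is not the chapter's method; the function names and the whitespace tokenization are illustrative choices only.

```python
# Minimal sketch of a purely statistical measure: n-gram overlap between a
# student summary and its source text. Names and tokenization are illustrative.
from collections import Counter


def ngrams(tokens, n):
    """Return the multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def overlap_score(summary, source, n=2):
    """Fraction of the summary's n-grams that also appear in the source."""
    summary_ngrams = ngrams(summary.lower().split(), n)
    source_ngrams = ngrams(source.lower().split(), n)
    if not summary_ngrams:
        return 0.0
    matched = sum(min(count, source_ngrams[gram])
                  for gram, count in summary_ngrams.items())
    return matched / sum(summary_ngrams.values())


print(overlap_score("el lector construye una version propia del texto",
                    "un buen lector construye una version propia del texto leido"))
```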

There are several studies that focus on comprehension and the use of computational techniques to represent and evaluate the comprehension process. Kintsch (1998, 2000, 2001, 2002) introduces the possibility of using latent semantic analysis (LSA) to extract lexico-semantic similarities from texts and to access text propositions by means of statistical-mathematical training, in order to simulate the comprehension process.
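The following sketch, assuming scikit-learn, shows the general shape of an LSA-based similarity score: a term-document matrix is reduced to a low-dimensional latent space, and two texts are compared by the cosine of their projected vectors. It is not the implementation used in this chapter; the toy corpus, the dimensionality, and the preprocessing are placeholders, since a real semantic space would be trained on a large Spanish corpus.

```python
# Minimal sketch of an LSA-based similarity score (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy documents standing in for the large corpus used to train the space.
corpus = [
    "el lector comprende el texto y construye una representacion mental",
    "el resumen elimina las proposiciones irrelevantes del texto fuente",
    "la evaluacion de la comprension lectora requiere mucho tiempo",
    "el analisis semantico latente mide la similitud entre textos",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)

# Reduce the term-document space to a low-dimensional latent space.
lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(tfidf)


def lsa_similarity(summary, reference):
    """Cosine similarity of two texts projected into the latent space."""
    vectors = lsa.transform(vectorizer.transform([summary, reference]))
    return float(cosine_similarity(vectors[0:1], vectors[1:2])[0, 0])


source = corpus[1]
student_summary = "el resumen conserva solo las ideas relevantes del texto"
print(lsa_similarity(student_summary, source))
```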
