Aspects of Multilingual News Summarisation

Josef Steinberger, Ralf Steinberger, Hristo Tanev, Vanni Zavarella, Marco Turchi
DOI: 10.4018/978-1-4666-5019-0.ch012

Abstract

In this chapter, the authors discuss several pertinent aspects of an automatic system that generates summaries in multiple languages for sets of topic-related news articles (multilingual multi-document summarisation), gathered by news aggregation systems. The discussion follows a framework based on Latent Semantic Analysis (LSA) because LSA has been shown to be a high-performing method across many different languages. Starting from a sentence-extractive approach, the authors show how domain-specific aspects can be used and how a compression and paraphrasing method can be plugged in. They also discuss the challenging problem of summarisation evaluation in different languages. In particular, the authors describe two approaches: the first uses a parallel corpus and the second uses statistical machine translation.

Introduction

News gathering and analysis systems, such as Google News or the Europe Media Monitor, gather tens or hundreds of thousands of articles per day. Efforts to summarise such highly redundant news data are motivated by the need to automatically inform end users of the main content of up to hundreds of news articles about a particular event, e.g. by sending a breaking-news text message or an email. Because the raw news data is highly multilingual, any summariser must be multilingual as well.

In this chapter, we first present an overview of summarisation approaches and a discussion of their possible application to other languages. We then study in depth one particular approach based on Latent Semantic Analysis (LSA) (Steinberger et al., 2012), because LSA was shown to be a high-performing method across many different languages in the multilingual task of the Text Analysis Conference (TAC) in 2011. We start from the basic LSA approach (Steinberger and Jezek, 2009).
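
To make the generic LSA scoring idea more concrete, the following Python sketch builds a plain term-by-sentence frequency matrix, applies singular value decomposition, and ranks sentences by their weighted vector length in the reduced latent space. It is a minimal illustration rather than the exact configuration of the cited systems: real implementations add term weighting, stop-word removal, lemmatisation and redundancy handling.

```python
# Minimal sketch of LSA-based sentence scoring (plain bag-of-words
# term-by-sentence matrix; weighting and preprocessing omitted).
import numpy as np
from collections import Counter

def lsa_sentence_scores(sentences, dims=3):
    """Score sentences by their weighted length in the LSA latent space."""
    # Build the vocabulary and a term-by-sentence frequency matrix A.
    tokenised = [s.lower().split() for s in sentences]
    vocab = sorted({t for sent in tokenised for t in sent})
    index = {t: i for i, t in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))
    for j, sent in enumerate(tokenised):
        for term, freq in Counter(sent).items():
            A[index[term], j] = freq

    # Singular value decomposition: A = U * diag(s) * Vt.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = min(dims, len(s))

    # Sentence score: length of its vector in the first r latent
    # dimensions, each dimension weighted by its singular value.
    return np.sqrt(((Vt[:r, :].T * s[:r]) ** 2).sum(axis=1))

sentences = [
    "The earthquake struck the coastal city early on Monday.",
    "Rescue teams searched the rubble for survivors.",
    "The city mayor promised quick reconstruction.",
]
ranked = sorted(zip(lsa_sentence_scores(sentences), sentences), reverse=True)
print(ranked[0][1])  # the highest-scoring sentence forms a one-sentence summary
```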

We then discuss the more challenging task of aspect-based summarisation, as defined at TAC’2010. In the aspect scenario, the goal is to produce a summary from articles about a specific event which falls into a predefined domain (e.g. terrorist attacks), for which aspects that should be mentioned in the summary have been defined in advance (e.g. what happened, when and where it happened, who the victims and perpetrators were, etc.). This scenario forces systems to make use of information extraction and to approach content selection from a more semantic point of view. We will show how an event extraction system can be used to detect the required pieces of information and then to extract the related content (Steinberger et al., 2011).
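
The fragment below is a hypothetical illustration of how such extracted information could steer content selection: a generic relevance score is interpolated with a bonus for sentences that mention fillers of the required aspects. The slot names, fillers and the weight are invented for the example and are not taken from the cited event extraction system.

```python
# Hypothetical illustration of aspect-aware content selection: combine a
# generic sentence score with the fraction of required aspects it covers.
def aspect_aware_scores(sentences, base_scores, event_slots, lam=0.5):
    """Boost sentences that mention fillers of the required aspects."""
    boosted = []
    for sent, base in zip(sentences, base_scores):
        lowered = sent.lower()
        covered = sum(
            1 for fillers in event_slots.values()
            if any(f.lower() in lowered for f in fillers)
        )
        coverage = covered / len(event_slots) if event_slots else 0.0
        boosted.append((1 - lam) * base + lam * coverage)
    return boosted

# Slots an event extraction system might return for a terrorist-attack story
# (illustrative values only).
event_slots = {
    "where": ["Madrid"],
    "perpetrators": ["armed group"],
    "victims": ["three people"],
}
sentences = [
    "An armed group attacked a train station in Madrid on Tuesday.",
    "The weather in the capital remained unusually warm.",
]
print(aspect_aware_scores(sentences, base_scores=[0.4, 0.5], event_slots=event_slots))
```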

The majority of approaches to automatic document summarisation are limited to selecting the most important sentences. We will therefore dedicate some effort to discussing sentence compression and paraphrasing approaches that aim at more human-like summaries, which typically consist of shorter sentences than automatic extracts. As the ultimate goal is to apply the approach to multiple languages, we will discuss how far we can get with a statistical sentence compression/paraphrasing method (Steinberger et al., 2010).
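
As a toy illustration of the statistical idea, one can rank candidate compressions, obtained by dropping a few tokens, with a smoothed bigram fluency estimate while requiring that given content words are kept. Real systems learn both candidate generation and scoring from corpora; everything below is deliberately naive and purely illustrative.

```python
# Toy sketch: pick the most fluent candidate compression that keeps the
# required content words (add-one smoothed bigram model, naive candidates).
from collections import Counter
from itertools import combinations
import math

def bigram_logprob(tokens, bigram_counts, unigram_counts, vocab_size):
    """Add-one smoothed bigram log-probability of a token sequence."""
    logp = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        num = bigram_counts[(prev, cur)] + 1
        den = unigram_counts[prev] + vocab_size
        logp += math.log(num / den)
    return logp

def best_compression(sentence, corpus_tokens, keep_words, max_drop=2):
    """Return the highest-fluency candidate that still contains keep_words."""
    tokens = sentence.split()
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = len(set(corpus_tokens))

    # Candidate compressions: drop up to max_drop tokens from the sentence.
    candidates = [tokens]
    for k in range(1, max_drop + 1):
        for drop in combinations(range(len(tokens)), k):
            cand = [t for i, t in enumerate(tokens) if i not in drop]
            if all(w in cand for w in keep_words):
                candidates.append(cand)

    # Prefer candidates whose per-token fluency stays high.
    return " ".join(max(
        candidates,
        key=lambda c: bigram_logprob(c, bigrams, unigrams, vocab) / max(len(c), 1),
    ))

corpus = "the quake struck the old town early on monday morning".split()
print(best_compression("the quake struck the old town early on monday",
                       corpus, keep_words=["quake", "struck"]))
```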

The TAC/DUC evaluation campaigns have in recent years been the most important venues for large-scale summarisation experiments and for discussing evaluation methodology. We follow the TAC roadmap and focus on the multilingual issue. Evaluating automatically produced summaries in different languages is a challenging problem for the summarisation community because the human effort needed to create model summaries is multiplied for each language. At TAC’11, six research groups spent considerable effort on creating evaluation resources in seven languages (Giannakopoulos et al., 2012). Compared to monolingual evaluation, which requires writing model summaries and evaluating the output of each system by hand, the multilingual setting additionally requires obtaining translations of all documents into each target language, writing model summaries, and evaluating the peer summaries for all the languages. We will discuss the findings of TAC’s multilingual task, which was the first shared task to evaluate summaries in more than two languages. We will then propose two possibilities for lowering the high annotation costs:

First, we will consider using a parallel corpus for the multilingual evaluation task. Because no parallel corpora suitable for news summarisation were available, we will follow an effort to create such a corpus (Turchi et al., 2010). The approach is based on manually selecting the most important sentences in a cluster of documents from a sentence-aligned parallel corpus and on projecting that sentence selection from one language to various target languages. Although model summaries were not created, and the texts come from a slightly different genre (news commentaries), the evaluation results are directly comparable across languages.
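
A minimal sketch of the projection step, assuming a one-to-one sentence alignment: given the indices of the manually selected sentences on the source side, the same index set yields a reference extract in every language. The corpus snippet and identifiers below are illustrative, not taken from the cited resource.

```python
# Minimal sketch: project a manual sentence selection through a
# sentence-aligned parallel corpus (1-to-1 alignment assumed).
def project_selection(selected_indices, aligned_corpus):
    """aligned_corpus: dict language -> list of sentences, aligned by position."""
    return {
        lang: [sents[i] for i in selected_indices]
        for lang, sents in aligned_corpus.items()
    }

aligned_corpus = {
    "en": ["The summit opened on Monday.",
           "Leaders discussed the crisis.",
           "A final statement is expected on Friday."],
    "de": ["Der Gipfel wurde am Montag eröffnet.",
           "Die Staatschefs erörterten die Krise.",
           "Eine Abschlusserklärung wird am Freitag erwartet."],
}
# Sentences 0 and 2 were judged most important on the English side;
# the same selection serves as the reference extract in every language.
references = project_selection([0, 2], aligned_corpus)
print(references["de"])
```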
