Analysis and Assessment of Cross-Language Question Answering Systems

DOI: 10.4018/978-1-5225-2255-3.ch388

Abstract

Within the sphere of the Web, information overload is more notable than in other contexts. Question answering systems (QAS) are presented as an alternative to traditional Information Retrieval (IR) systems: they seek to offer precise and understandable answers to factual questions instead of showing the user a list of documents related to a given search. Given that QAS represent a substantial advance in IR, it becomes necessary to determine their effectiveness for the end user. With this aim, seven studies were undertaken to evaluate: a) in the first two, the linguistic resources and tools used in these systems for multilingual retrieval (Research 1; Research 2); and b) the performance and answer quality of the main monolingual and multilingual QAS of general and specialized domains on the Web in response to different types of questions and subjects, so that different evaluation measures can be applied (Research 3, Research 4, Research 5, Research 6, Research 7).

Background

In the field of CLIR, tools are being created that can greatly assist specialists in their work, as well as help other users find a wide variety of information. These tools are evolving, but several more years of study and research are needed to improve their implementations. One of the main difficulties facing these tools is translating both the queries made by users and the documentary sources found in response (Diekema, 2003). Given the current expansion in the research, development, and creation of CLIR systems, it was considered worthwhile to analyse and evaluate the resources used by one type of these systems: multilingual QAS.

Frequently, a keyword query entered into a web search tool (search engine or meta-search engine) to satisfy a user's information need returns too many result pages, many of which are useless or irrelevant to the user. In effect, modern IR systems allow us to locate documents that might contain the associated information, but most of them leave it to the user to extract the useful information from an ordered list (Dwivedi & Singh, 2013). In contrast to the IR scenario, a QAS processes questions formulated in natural language instead of keyword-based queries, and retrieves answers instead of documents (Peñas et al., 2012). The usefulness of these types of systems for quickly and effectively finding specialized information has therefore been widely recognized (Diekema et al., 2004).

Key Terms in this Chapter

Evaluation Measures: Many different measures for evaluating the performance of information retrieval systems have been proposed. QA systems have been evaluated with the traditional relevance-based measures (precision and MAP), and also with other specific measures such as MRR, TRR, and FHS.

MAP (Mean Average Precision): For systems that return a ranked sequence of documents, it is desirable also to consider the order in which the returned documents are presented. MAP measures the average precision over a set of queries whose answers are ranked by relevance.
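As a rough illustration (not part of the chapter), average precision and MAP can be computed over binary relevance judgments as follows; the function names and representation are illustrative assumptions:

```python
def average_precision(relevance):
    """Average precision for one query.

    relevance: list of 0/1 flags, one per ranked result
    (1 = relevant/correct, 0 = not).
    """
    hits = 0
    total = 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank  # precision at this rank
    return total / hits if hits else 0.0

def mean_average_precision(all_queries):
    """MAP: the mean of the per-query average precisions."""
    return sum(average_precision(r) for r in all_queries) / len(all_queries)
```

For example, a query whose first and third results are relevant has an average precision of (1/1 + 2/3) / 2 ≈ 0.83.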

Information Retrieval: Fully automatic process that responds to a user query by examining a collection of documents and returning a sorted document list that should be relevant to the user requirements as expressed in the query.

Question Answering Systems: As an alternative to traditional IR systems they give correct and understandable answers to factual questions – rather than just offering a list of documents related to the search.

TRR: Total Reciprocal Rank (TRR) is useful when a system offers several correct responses to the same query. In such cases it is not sufficient to consider only the first correct response, so TRR takes all of them into account, assigning each answer a weight according to its position in the list of retrieved results. Thus, if the second and fourth answers on the list of results are correct for a question, the TRR value would be ½ + ¼ = ¾.
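The worked example above (correct answers at ranks 2 and 4 giving ½ + ¼ = ¾) can be reproduced with a minimal sketch; the function name is illustrative:

```python
def total_reciprocal_rank(relevance):
    """Sum of 1/rank over every correct answer in the ranked list.

    relevance: list of 0/1 flags, one per ranked result.
    """
    return sum(1 / rank for rank, rel in enumerate(relevance, start=1) if rel)

# Correct answers at positions 2 and 4:
total_reciprocal_rank([0, 1, 0, 1])  # -> 0.75
```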

MRR: Mean Reciprocal Rank (MRR) assigns the inverse value of the position in which the correct answer is found (1 if first, ½ if second, ¼ if fourth, and so on), or zero if there is no correct response. This measure considers only the first correct response shown in the list of results offered by the system, and the final value is the average of the values found for each question. MRR thus assigns a high value to systems that place correct answers in the highest ranking positions.
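A minimal sketch of MRR under the same binary-relevance representation (names are illustrative, not from the chapter):

```python
def reciprocal_rank(relevance):
    """1/rank of the first correct answer, or 0.0 if none is correct."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1 / rank
    return 0.0

def mean_reciprocal_rank(all_queries):
    """MRR: the mean reciprocal rank over all questions."""
    return sum(reciprocal_rank(r) for r in all_queries) / len(all_queries)
```

Unlike TRR, only the first correct answer per question contributes; a second correct answer further down the list does not change the score.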

Cross-Language QAS: Cross-lingual question answering systems are sets of coordinated monolingual systems, each of which extracts answers from a separate monolingual document collection.

Translation: The process of translating words or text from one language into another.

CLIR: Cross-Lingual Information Retrieval (CLIR) is information retrieval in which at least two languages are involved, i.e., the query and the document collection are not in the same language.

FHS: First Hit Success (FHS) assigns a value of 1 if the first answer offered is correct, and a value of 0 if it is not (it thus considers only the answer that appears first in the list of results).
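FHS reduces to a single check on the top-ranked result; a minimal sketch under the same illustrative representation:

```python
def first_hit_success(relevance):
    """1 if the top-ranked answer is correct, else 0.

    relevance: list of 0/1 flags, one per ranked result.
    """
    return 1 if relevance and relevance[0] else 0
```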
