Statistical Machine Translation

Lucia Specia (University of Sheffield, UK)
DOI: 10.4018/978-1-4666-2169-5.ch004

Abstract

Statistical Machine Translation (SMT) is an approach to automatic text translation based on the use of statistical models and examples of translations. SMT is the current dominant research paradigm for machine translation and has been attracting significant commercial interest in recent years. In this chapter, the authors introduce the rationale behind SMT, describe the currently leading approach (phrase-based SMT), and present a number of emerging approaches (tree-based SMT, discriminative SMT). They also present popular metrics to evaluate the performance of SMT systems and discuss promising research directions in the field.

1. Introduction

Statistical Machine Translation (SMT) is an approach to automatically translate text based on the use of statistical models and examples of translations. Although Machine Translation (MT) systems developed according to rule-based approaches are still in use, SMT is the dominant research paradigm today and has recently been garnering significant commercial interest. The core of SMT research has developed over the last two decades, after the seminal paper by Brown et al. (1990). The field has progressed considerably since then, moving from word-to-word translation towards phrase-to-phrase translation and other more sophisticated models that take sentence structure into account. A trend observed in recent years is the shift from using pure statistical information extracted from large quantities of data to incorporating linguistic information about the source and/or the target language.

The idea of SMT is related to the late 1940s view of the translation task as a cryptography problem, where a decoding process is needed to translate from a foreign “code” into the English language (Hutchins, 1997). This is the basis for the fundamental approach to SMT proposed in the early 1990s through the application of the Noisy Channel Model (Shannon, 1949) from the field of Information Theory. This model had proved successful in the area of Speech Recognition and was thus adapted to MT.

The use of the Noisy Channel Model for translation assumes that the original text has been accidentally scrambled or encrypted (using a different alphabet, for example) and the goal is to recover the original text by “decoding” the encrypted/scrambled version, as depicted in Figure 1. According to this model, the message e (text in a native language) is the input to the channel and gets encrypted into f (text in a foreign language) using a certain coding scheme. The goal is to find a decoder that can reconstruct f back into the input message e as faithfully as possible.

Figure 1.

The noisy channel model

In a probabilistic framework, finding ê, i.e., the closest possible text to the original e, can be stated as finding the argument that maximizes the probability of recovering the original input given the noisy text f, i.e.:

ê = argmax_e P(e|f)

This problem is commonly exemplified as the task of translating a foreign language sentence f into an English sentence e. Given f, we seek the translation ê that maximizes P(e|f), i.e., the most likely translation:

ê = argmax_e P(e|f)

This problem can be decomposed into smaller and simpler problems by applying Bayes' Theorem:

ê = argmax_e [P(f|e) · P(e)] / P(f)

Since the source text f, i.e., the input to the translation task, is constant across all possible translations e, P(f) can be disregarded:

ê = argmax_e P(f|e) · P(e)

Here P(e) is given by a language model, which measures the fluency of the candidate translation, and P(f|e) by a translation model, which measures how well e accounts for the content of f.
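This final decomposition can be illustrated with a toy noisy-channel decoder in Python. The sentences, candidate set, and probabilities below are invented purely for illustration; a real SMT system estimates the translation model from parallel corpora and the language model from large monolingual corpora, and searches over a vast candidate space rather than a fixed list.

```python
# Toy noisy-channel decoder. All sentences and probabilities below are
# invented for illustration; a real SMT system estimates these models
# from large parallel and monolingual corpora.

# Translation model P(f | e): how likely the foreign sentence f is,
# given a candidate English sentence e (toy values).
translation_model = {
    ("the house", "la maison"): 0.8,
    ("the home", "la maison"): 0.5,
    ("a house", "la maison"): 0.2,
}

# Language model P(e): fluency of each English candidate (toy values).
language_model = {
    "the house": 0.6,
    "the home": 0.3,
    "a house": 0.1,
}

def decode(f, candidates):
    """Return the candidate e maximizing P(f | e) * P(e)."""
    return max(
        candidates,
        key=lambda e: translation_model.get((e, f), 0.0) * language_model.get(e, 0.0),
    )

print(decode("la maison", ["the house", "the home", "a house"]))
# "the house" wins: 0.8 * 0.6 = 0.48, versus 0.15 and 0.02
```

Note how the two factors play complementary roles: the translation model keeps the output faithful to the source, while the language model keeps it fluent in the target language.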
