The ATA Flowchart and Framework as a Differentiated Error-Marking Scale in Translation Teaching


Geoffrey S. Koby (Kent State University, USA)
DOI: 10.4018/978-1-4666-6615-3.ch013

Abstract

Translation evaluation remains problematic: industry typically marks errors with points-off systems, while teachers use both points-off systems and rubrics, many of which are not adequately operationalized. What is needed is an error category and severity system that is sufficiently differentiated to provide useful feedback yet streamlined enough to allow feedback to large numbers of students. The American Translators Association (ATA) Flowchart for Error Point Decisions and Framework for Standardized Error Marking has been adapted for the classroom. This chapter provides statistics on the errors and severities marked in two groups: 63 translations by German>English graduate students marked by the author, and 17 examinations from the 2006 ATA Certification Examination marked by ATA graders. The predominant categories assigned to the students are Punctuation, Usage, Mistranslation, Addition, and Misunderstanding, while the ATA papers show Misunderstanding, Omission, Terminology, Literalness, Ambiguity, Grammar, and Style. Misunderstanding was rated as the most serious error for both groups. Transfer errors are both more frequently marked and more severely rated than grammar or other language errors.

Background

In recent decades, the development of translation evaluation has led to a number of different approaches. Their variety can be exemplified by House’s (1997) discussion, which subdivides evaluation approaches into three categories: first, anecdotal, biographical, and neo-hermeneutic approaches; second, response-oriented, behavioral approaches; and third, text-based approaches. The text-based approaches are further subdivided into literature-oriented, post-modernist and deconstructionist, functionalistic/action and reception-theory-related, and linguistically-oriented approaches. House’s model makes the key theoretical distinction between overt and covert translation, which can also be roughly equated with foreignizing and domesticating approaches to translation. This distinction is necessary for evaluation purposes when assessing how well individual translations comply with the translation brief. However, while House’s model is extremely detailed in terms of the dimensions analyzed, it does not provide an error-marking scheme or rubric. Martinez Melis and Hurtado Albir (2001) point out that “we currently have a substantial and varied body of proposals for the analysis of translations, although only a few (House, Larose) have been formulated explicitly for translation evaluation.”

Key Terms in this Chapter

Operationalization: A method for defining how a phenomenon that is not directly measurable should be assessed; it defines a concept so that it becomes clearly measurable.

ATA Flowchart: A decision-making tool developed by the American Translators Association that uses two series of questions to guide raters of translation errors in determining the severity of an error. The first question divides the flowchart into questions relating to language mechanics and questions relating to transfer.

Rubric: A tool for evaluating a translation in which dimensions and levels of quality are defined and presented in a grid. Rubrics differ from error marking scales in that the various levels are expressed in positive rather than negative terms.

Translation Error: Any lack of congruence between the source text and the target text. This includes discongruities in meaning and failures in use of the target language according to standard norms, as interpreted by the evaluator. Translation errors are governed by the translation brief; a translation error under one brief can be an acceptable solution under another brief.

Error Marking Scale: A tool for evaluating a translation in which various levels of severity are defined for translation errors, and a threshold level is defined for passing (either by points off from a maximum or by error points exceeding a specified number).

Translation Brief: Instructions or specifications accompanying a translation assignment that indicate its target audience and purpose.

Error Category: A classification identifying a particular type of translation error. The ATA error categories are grouped into three areas: transfer errors, language mechanics errors, and errors relating to formal properties of the examination.

ATA Framework: A table developed by the American Translators Association that lists categories of transfer errors, language mechanics errors, and errors in formal properties of the examination, with columns for 1, 2, 4, 8, or 16 points per error.

Translation Assessment: A method of examining translations focusing on learning and teaching, used to show faculty what the students are learning. This information is used to improve teaching and shared with students to improve learning. This term is often confused or conflated with translation evaluation.

Translation Evaluation: A method of examining translations that focuses on marking or scores. This term is often confused or conflated with translation assessment.
