In the face of doubt about whether current training experiences adequately prepare interpreters to enter the field, assessment and accreditation1 practices are necessarily also placed under the spotlight. This is especially true in the case of the state-level court interpreting oral exams (frequently referred to as the “Consortium” exams) used in the United States. As the most frequently used gateway credential to certified court interpreting in the nation, notions of quality in assessment (and indirectly, in training for the exams) are of vital importance and merit close examination2.
This chapter critically examines the construction of the Consortium exams, analyzing them through the lens of testing theory. After identifying several limitations of the current model, the chapter argues that competency-based education and assessment offer a promising remedy to fill the gap in current training offerings. Furthermore, having identified a series of knowledge, skills and abilities (KSAs), dispositional traits and soft skills which are considered crucial for court interpreters but which are not currently assessed, this chapter proposes a new, hybrid model for court interpreter certification that combines competency-based and performance-based testing in order to improve the validity and reliability of current credentialing practices.
Barriers to Reliable Interpreter Assessment
Interpreting studies stakeholders, from practicing professionals to researchers, broadly agree that further empirical research on assessment and testing in interpreting studies is both significant and justified (see Iglesias Fernández, 2013 and Niska, 2002). Assessment is vitally important not only for screening applicants for entry into educational programs, providing feedback to students as they progress in their training, and testing their knowledge and skills at the end of a course of study; most germane to the present discussion, it is also essential for qualifying exams such as the certification exams used in the field of court interpreting.
A burgeoning emphasis on quality assurance in the pan-European context has recently arisen as an essential element “for ensuring legal certainty and enhancing mutual trust between countries and their respective judicial systems” (Giambruno, 2016, p. 117) as recent legislation, particularly Directive 2010/64/EU of the European Parliament and of the Council of 20 October 2010 on the Right to Interpretation and Translation in Criminal Proceedings3 impacts the training, credentialing and register-building of the 28 Member States. In the American context, quality assurance is equally challenging. Nonetheless, providing competent interpreters in the justice system is not only a moral imperative: it is a constitutionally guaranteed protection.
Identifying and certifying competent court interpreters represents one of the cornerstones of due process in the US judicial system. Due process refers to the guarantee that individuals shall not be deprived of life, liberty, or property without notice and an opportunity to defend themselves. Language access is one of the foundations of due process, as the ability to understand (and to be understood by) the courts is indispensable to full and active participation in one’s own defense. To be sure, the urgency of continuing to make inroads in interpreting studies in the realm of assessment bears most directly upon credentialing exams.
Importance notwithstanding, barriers to the provision of quality language services in the justice system are as varied as they are ubiquitous and well-documented. These include a lack of compliance with existing language access legislation (Wallace, 2015); difficulty in training, recruiting and credentialing qualified practitioners; the use of ad hoc or inadequately tested interpreters (Giambruno, 2016)4; a reluctance to pay for language services; outsourcing; and deficient accreditation schemes (Ozolins, 2016). Admittedly, other barriers to quality exist that fall fully outside the realm of education or testing, such as anti-immigrant sentiment or a lack of political will and administrative action (Giambruno, 2016). Some systemic challenges are specific to administering interpreting tests, including assembling a team of subject matter experts to design the constructs of the exams, building in protocols to ensure rating consistency, and ensuring equitable testing for candidates in all language combinations tested (see Skaaden & Wadensjö, 2014).