Toward the Future of Computer-Assisted Language Testing: Assessing Spoken Performance Through Semi-Direct Tests

Ethan Douglas Quaid, Alex Barrett
DOI: 10.4018/978-1-7998-1282-1.ch010

Abstract

Semi-direct speaking tests have become an increasingly favored method of assessing spoken performance in recent years. Evidence underpinning their continued development and use has rested largely on language testing and assessment researchers' claims, drawn from theoretical and empirical investigations from multiple perspectives, that they are interchangeable with more traditional, direct face-to-face oral proficiency interviews. This chapter first provides background and research synopses of four significant test facets that have formed the bases for studies comparing semi-direct and direct speaking tests. It then presents a recent case study comparing test taker output from a computer-based Aptis speaking test with a purposively developed, identical face-to-face oral proficiency interview, which found a slight register shift that may be viewed as advantageous for semi-direct speaking tests. Finally, future research directions are proposed in light of the recent developments in semi-direct speaking test research presented throughout this chapter.
Chapter Preview

Direct and Semi-Direct Speaking Tests

Clark (1979) describes direct speaking tests as consisting of “procedures in which the examinee is asked to engage in a face-to-face communicative exchange with one or more human interlocutors” (p. 36). However, a truly direct test measures proficiency inside the identified target language use (TLU) domain, and it is therefore preferable to place test directness on a cline running from the indirect structural model to live performance in the TLU domain as the most direct. Mirroring Clark’s (1979) communicative exchange, Luoma (1997) states that “the main characteristic of the live test mode is that interaction in it is two-directional” and asserts that “the construct assessed is clearly related to spoken interaction” (p. 44). Although these suppositions were likely a given when early theoretical models underpinned language testing, the relatively scripted nature of many speaking tests today, especially those termed high stakes, may well have rendered them problematic. Thus, Fulcher’s (2014) view that directness implies live physical interaction with a human interlocutor and nothing more will be applied, and a direct test will henceforth be represented by a face-to-face oral proficiency interview (OPI) throughout the remainder of this chapter.

Key Terms in this Chapter

Target Language Use (TLU) Domain: The context or situation(s) where the test taker will be using the language on completion of the test.

Adjacent Agreement: Agreement in which two ratings fall within one sub-level of each other, such as advanced-low and advanced-mid.

Concurrent Validity: The degree of test taker score equivalence between two tests, with one test acting as the criterion behavior.

Face Validity: The degree to which a test appears to measure the identified constructs (knowledge and/or abilities) it purports to measure based on the subjective judgment of test stakeholders.

Lexical Density: A quantitative measure of the relationship between lexical (content) and grammatical (function) items in spoken and written discourse (see the sketch following this list).

Simulated Oral Proficiency Interview (SOPI): A first-generation semi-direct speaking test that often used tape-mediated delivery in lieu of computers.

Construct-Irrelevant Variance: The introduction of extraneous, uncontrolled variables not reflected in the constructs tested that affect assessment outcomes.

Prosodic Features: Suprasegmental units and forms that occur when sounds are placed together in connected speech. For example, intonation, pitch, stress and rhythm.
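To make the lexical density measure concrete, the following is a minimal sketch (not taken from the chapter) of how it might be approximated in Python. It treats a small, assumed set of common function words as the grammatical items and everything else as lexical; a full analysis would instead rely on part-of-speech tagging of the transcribed speech.

```python
# Minimal, illustrative lexical density calculation (hypothetical example,
# not the authors' procedure). Lexical density is computed here as the
# percentage of lexical (content) items among all items in a sample.
import re

# Assumed, simplified set of grammatical (function) words.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "at",
    "is", "am", "are", "was", "were", "be", "been", "it", "he", "she",
    "they", "we", "you", "i", "that", "this", "with", "for", "as", "do",
}

def lexical_density(transcript: str) -> float:
    """Return the percentage of lexical items in a speech transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    lexical_items = [t for t in tokens if t not in FUNCTION_WORDS]
    return 100 * len(lexical_items) / len(tokens)

if __name__ == "__main__":
    sample = "I think the test was quite difficult because the examiner spoke fast"
    print(f"Lexical density: {lexical_density(sample):.1f}%")
```

In comparison studies of semi-direct and direct tests, a measure of this kind lets researchers ask whether the more monologic, computer-delivered condition elicits denser (more written-like) speech than the interactive OPI condition.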
