First-Year Japanese Learners' Perceptions of Computerised vs. Face-to-Face Oral Testing: Challenges and Implications

Hiroshi Hasegawa, Julian Chen, Teagan Collopy
DOI: 10.4018/978-1-7998-2591-3.ch010

Abstract

This chapter explores the effectiveness of computerised oral testing on Japanese learners' test experiences and associated affective factors in a Japanese program at the Australian tertiary level. The study investigates (1) Japanese beginners' attitudes towards the feasibility of utilising a computer-generated program vs. a tutor-fronted oral interview to assess their oral proficiency, and (2) the challenges and implications of computerised oral testing for Japanese beginners. It presents the initial findings of the qualitatively analysed data collected from student responses to open-ended survey questions and follow-up semi-structured interviews. A thematic analysis approach was employed to examine student perceptions of the two different test settings and their effects on students' oral performance in relation to test anxiety. Although computerised oral testing was perceived overall as beneficial for streamlining the test process and reducing learners' test anxiety, the findings also identified its limitations.
Chapter Preview

Background

In current foreign language oral testing and assessment, the conventional face-to-face interview-based testing format is commonly adopted (Chalhoub-Deville, 2001; Galaczi, 2010). These oral proficiency interview tests require the presence of an interviewer, who functions as an interlocutor in a pseudo-conversation and simultaneously assesses students’ oral production at the levels of vocabulary, grammar, pronunciation and pragmatics (Amengual-Pizarro & Garcia-Laborda, 2017; Zhou, 2015). Experienced oral examiners are also attuned to interviewees’ sociocultural and linguistic backgrounds in order to adjust their interviewing approaches, while monitoring students’ command of intonation, pacing and nonverbal cues during the oral proficiency interview (Larson, 2000; Norris, 2001).

Interviewer-initiated tests also face concerns regarding the validity of assessing students’ test performances (Alderson & Banerjee, 2002; Bachman, 2000; Chapelle & Douglas, 2006; Galaczi, 2010; Winke & Fei, 2008), and the substantial amount of teacher time required for test delivery and evaluation of the test outcomes (Amengual-Pizarro & Garcia-Laborda, 2017; Zhou, 2015). On the other hand, “Computer technology has been especially productive in the area of language testing” (Pizarro & Laborda, 2017, p. 24), especially when it comes to its administration and delivery. Availing itself of the standardised format and self-directed setting, computerised oral testing may serve as an alternative to potentially mitigate these limitations faced by its examiner-led, interview-based counterpart (Chapelle & Douglas, 2006; Douglas & Hegelheimer, 2007; Jamieson, 2005; Pizarro & Laborda, 2017; Suvorov & Hegelheimer, 2014). That said, research targeting this area, especially with respect to Japanese oral testing and assessment in Australia, is still a less charted territory.

This project was initiated to produce the first computerised Japanese oral testing program. Focusing on the innovative use of digital technologies, this study investigated the effectiveness of computerised oral testing conducted in a tertiary Japanese program and its impact on students’ test experiences and perceived attitudes. Specifically, it aimed to explore (1) Japanese beginners’ beliefs about the feasibility of a computer-generated program vs. a tutor-fronted interview for assessing their oral proficiency, and (2) the challenges and best practices of utilising computerised oral testing in a first-year Japanese unit. This chapter presents the initial findings from the qualitative data collected from students’ responses to open-ended survey questions, followed by semi-structured interviews. A thematic analysis approach was employed to examine students’ perceptions of the two different oral test settings. The chapter discusses the issues identified during the first launch of computerised oral testing, leading to implications for pedagogical approaches to oral testing and assessment in languages other than English (LOTE).

Key Terms in this Chapter

LOTE: Languages Other Than English.

Oral Assessment: In current foreign language oral assessment, the predominant formats are the face-to-face interview, the skit and the oral presentation. The face-to-face interview-based testing format is the most commonly utilised.

Paralinguistic Cues: Suprasegmental cues include intonation, pitch, speech rate and stress, while non-verbal (paralinguistic) cues include body language, such as hand gestures, facial expressions, eye contact and movements.

Computerised Oral Testing: A computer-operated format for students to take their oral test. For this project, a group of students was required to take their Japanese oral test using the newly developed computerised program installed on each computer at the oral test venues.

Phatic Communication: Communication for the purpose of sociability, rather than for an informative function, i.e., to convey information or ideas.

Japanese Beginners: The institution in this research offers two Japanese beginner units in the first-year course. The first unit (Japanese Unit 1), offered in semester 1, is the most basic of the Japanese units and thus caters for students without any prior learning experience or knowledge of Japanese, whereas the second unit (Japanese Unit 2) is the second stage of the Japanese course, designed for students who have previously completed Japanese Unit 1 or equivalent study. “Japanese beginners” in this chapter refers to those enrolled in the latter unit.
