Cluster and Time-Series Analyses of Computer-Assisted Pronunciation Training Users: Looking Beyond Scoring Systems to Measure Learning and Engagement

John-Michael L. Nix
DOI: 10.4018/ijcallt.2014010101

Abstract

The present study used hierarchical agglomerative cluster (HAC) analysis to categorize users of a popular, web-based computer-assisted pronunciation training (CAPT) program into user types on the basis of activity log data. Results indicate an optimal grouping of four types: Reluctant, Point-focused, Optimal, and Engaged. Clustering was determined by aggregate data on seven indicator variables of mixed types (e.g., ratio, continuous, and categorical). Measures of effort, namely lines recorded and episodic effort, best distinguished the user types. Subsequent time-series analysis of cluster members showed that the groupings exhibited distinct trends in learning behavior that explain performance outcomes. Four waves of data were collected during one semester of EFL instruction in which CAPT usage partially fulfilled course requirements. The study follows an exploratory, data-driven approach. In addition to the findings above, suggestions are made for future research into interactions between individual-difference variables and CALL platforms.
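As a hedged illustration of the clustering step described above, the sketch below shows one common way to run HAC over mixed-type indicators: a Gower distance matrix (which accommodates ratio, continuous, and categorical variables) fed into average-linkage clustering and cut at four clusters. The indicator names, the toy data, and the use of the third-party gower Python package are assumptions for illustration only; the study’s actual variables and software are not specified in this preview.

    # Minimal sketch of HAC over mixed-type activity-log indicators, assuming the
    # third-party "gower" package for the distance step. All variable names and
    # values below are illustrative placeholders, not the study's actual data.
    import pandas as pd
    import gower                                         # pip install gower
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    # Hypothetical per-user aggregates computed from the platform's activity logs
    users = pd.DataFrame({
        "lines_recorded":  [312, 45, 160, 20, 280],      # ratio-scale effort measure
        "episodic_effort": [14, 3, 9, 2, 12],            # distinct study episodes
        "points_earned":   [900, 410, 520, 90, 760],
        "videos_watched":  [40, 12, 25, 5, 33],
        "avg_score":       [78.5, 70.1, 74.2, 61.0, 80.3],
        "completed_quota": ["yes", "no", "yes", "no", "yes"],  # categorical
    })

    # Gower distance accommodates ratio, continuous, and categorical indicators
    dist = gower.gower_matrix(users)

    # Average linkage tolerates non-Euclidean distances; cut the tree into 4 groups
    Z = linkage(squareform(dist, checks=False), method="average")
    users["cluster"] = fcluster(Z, t=4, criterion="maxclust")
    print(users)

In practice the cut point would be chosen by inspecting the dendrogram or a cluster-validity index rather than fixed in advance; four is used here only because it matches the grouping the abstract reports.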
Article Preview

Introduction

Computer-assisted pronunciation training (CAPT) programs are a readily available option for language learners and instructors in the 21st century, and they are promoted by program developers for their great potential to facilitate foreign language learning. CAPT programs are putatively advantageous because they provide the opportunity for independent L2 study with corrective feedback from an automated system (Neri, Cucchiarini, Strik & Boves, 2002; Neri, Cucchiarini & Strik, 2006). It is expected that using such automated programs for pronunciation practice makes learning time a qualitatively improved experience by mitigating negative affect (Neri et al., 2002), such as the loss of face (Chiu, Liou & Yeh, 2007) that can occur in live classrooms.

As the language learning environment continues to gravitate toward technological innovation, the imperative to study the efficacy of the latest technologies is apparent (Hubbard, 2006; Macaro, Handley & Walter, 2012), yet there remains a dearth of studies evaluating CALL materials (Chapelle, 2010; Neilson, 2011). Material evaluation studies must keep pace with CALL development to determine whether the claims and promises of CALL programs are substantive or merely marketing rhetoric. One leading CALL platform, originally designed as a CAPT program, has gained much recognition among EFL teachers yet has not received systematic scrutiny from researchers: English Central (EC)1. EC provides pedagogically enhanced videos, sourced from across the Internet and from partner media providers, as material for pronunciation practice, listening practice, and vocabulary learning.

As of this writing, three studies by three respective research teams have evaluated EC, either in comparison to other CAPT programs or as the sole learning platform, yielding four publicly available reports. These studies attempt to evaluate the effectiveness of EC in producing English learning gains, as well as to probe users’ levels of satisfaction, perceived effectiveness, and attitudes. Doubtless, the effectiveness of the program is of paramount concern for EFL educators and EC designers. Yet scrutiny must also be directed toward user characteristics in order to probe the individual differences (ID) variables underlying effectiveness (Hubbard, 2006). Of particular concern is the amount and consistency of usage by learners, which is determined over the long run at the nexus of the program interface and ID variables. Indeed, a number of studies have posited that learning styles may be a crucial factor in CALL platform efficacy (Grasha & Yangarber-Hicks, 2000; Valenta, Therriault, Dieter, & Mrtek, 2001), as there is evidence that latent traits correlate both with the degree of engagement with learning platforms and with users’ perceptions of the programs (Küçük, Genç-Kumtepe, & Tasci, 2010).

Therefore, the present study departs from the paradigm of variable-analytic studies of effectiveness and turns toward exploration of user characteristics to identify variables of interest captured by tracking systems, which can then inform future material evaluation studies of EC and other language learning platforms. In contrast to learning-styles research that relies on self-report questionnaires, the present study uses objective, observable data obtained from the CAPT platform’s activity logs to derive cluster-based user types. It is expected that a time-series analysis of learners’ usage patterns by user type will yield insight that proves informative when designing program efficacy studies.
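To make the time-series step concrete, the short sketch below aggregates a hypothetical long-format activity log into per-wave means by cluster, one simple way to inspect whether the user types diverge in learning behavior across the four data-collection waves. The column names and values are invented for illustration; they do not reflect the study’s actual tracking fields.

    # Minimal sketch: mean usage per wave by user type, assuming an invented
    # long-format log (one row per user per data-collection wave).
    import pandas as pd

    logs = pd.DataFrame({
        "user_id":        [1, 1, 1, 1, 2, 2, 2, 2],
        "wave":           [1, 2, 3, 4, 1, 2, 3, 4],
        "lines_recorded": [80, 90, 70, 72, 10, 5, 8, 2],
        "cluster":        ["Engaged"] * 4 + ["Reluctant"] * 4,
    })

    # Rows: user type; columns: wave. Diverging rows suggest distinct usage trends.
    trend = (logs.groupby(["cluster", "wave"])["lines_recorded"]
                 .mean()
                 .unstack("wave"))
    print(trend)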
