The Integration of Corpus Tools in the Design and Implementation of a Novel Analytical Model for the Learning of K12 Classrooms

Ali Şükrü Özbay (Karadeniz Technical University, Turkey) and Tuğba Çıtlak (Karadeniz Technical University, Turkey)
Copyright: © 2024 | Pages: 31
DOI: 10.4018/979-8-3693-0066-4.ch005

Abstract

Learning analytics offers language teaching professionals support both for observing learners' learning processes and for tracking their own performance in the teaching process, enabling timely feedback and prompting them to stay up to date. In this study, the authors propose a framework for a novel learning analytics model that shows how learning analytics and corpus linguistics can be merged to achieve the five steps of learning analytics by answering its information-based questions. To that end, they investigated collocations and lexico-grammatical patterns in K12 English coursebooks published by the Ministry of National Education in Türkiye and in coursebooks written by native English speakers. The Turkish K12 English coursebooks were found to underrepresent the word clusters and collocations most frequently used in the reference corpus of native-speaker English coursebooks. On this basis, a corpus-based data-driven learning approach is suggested as a contributor to learning analytics.

Introduction

Online technologies have become embedded in virtually every field, including teaching. One such technology, useful for tracking learners' online traces, is learning analytics. Learning analytics is a recent field of scientific inquiry that deals specifically with learning processes from various perspectives and dimensions; the term is also popularly used for analyzing learners' big data as evidence of the learning process. The digital traces left in online environments have produced big data, and the idea of using these data at many stages and levels of learning, together with subsequent efforts to examine learning analytics from multi-dimensional perspectives, has brought new prospects as well as challenges. One of its main provisions is that language teaching professionals and other stakeholders receive data gathered from learners to help them create an environment in which learning takes place more effectively (Bozkurt, 2016). In this regard, learning analytics can be defined as “developing tools and techniques for capturing, storing, and finding patterns in large amounts of electronic data; representing them in generative and useful ways; and integrating them into intelligent tools that personalize and optimize learning environments” (Martin & Sherin, 2013, p. 511). Learning analytics can enrich the learning process by providing real-time feedback. Moreover, it supports language teaching professionals both in observing learners' learning processes and in tracking their own performance in the teaching process, which enables them to give learners feedback more efficiently and prompts them to stay up to date. Learning analytics, which combines many approaches, methods, and tools, can also examine the teaching materials provided by educational institutions and administered by states to determine how effective these materials are across various dimensions (Bozkurt, 2016; Tutsun, 2020).

A particularly significant approach is the integration of corpus linguistics as a means of tracking learners' development and learning processes within a learning analytics perspective. Concordances can help detect native and non-native language patterns by analyzing naturally occurring language samples (Hunston, 2022). This type of pattern analysis may benefit K12 learners who are comfortable with technology and discovery-based learning, encouraging them to search for patterns as language detectives (Martin & Sherin, 2013).
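As a minimal illustration of this kind of concordance-based pattern detection, the sketch below uses the NLTK library to print keyword-in-context lines for a target word in a plain-text sample. The file name coursebook.txt and the target word "make" are hypothetical placeholders for illustration only, not materials from the chapter.

```python
# Minimal concordance sketch using NLTK (assumed installed via `pip install nltk`).
# "coursebook.txt" and the target word "make" are illustrative placeholders.
import nltk
from nltk.text import Text
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # tokenizer models; newer NLTK versions may also need "punkt_tab"

# Capture: read a plain-text sample of naturally occurring language.
with open("coursebook.txt", encoding="utf-8") as f:
    raw = f.read()

tokens = word_tokenize(raw.lower())
text = Text(tokens)

# Report: keyword-in-context lines reveal the patterns surrounding the target word.
text.concordance("make", width=80, lines=10)
```

Learners can scan the resulting concordance lines for recurrent patterns around the node word, which is the kind of guided discovery task the chapter associates with data-driven learning.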

As an inductive instructional approach, corpus-based data-driven learning (DDL) holds promise for diverse groups of K12 learners at various levels. It supports the constructivist approach in that it turns learners into linguistic researchers who actively take responsibility for their own learning, and language teaching professionals into research leaders who guide them (Sinclair, 1991; Marlowe & Page, 2005; Roblyer & Doering, 2013). Since learning analytics comprises five steps, namely capture, report, predict, act, and refine (Campbell et al., 2007), DDL and corpus tools offer opportunities to carry out these steps in a predictable order, as the sketch after this paragraph illustrates. First, the data produced by learners are captured, and the use of language patterns is reported according to frequency of usage. The report can then be analyzed to predict what will happen next, and this prediction prompts stakeholders to act so that the language learning environment is improved and refined for prospective users. Realizing learning analytics in this way should provide a viable model for bringing its benefits to K12 learners. At the information level, learning analytics involves questions about the past, present, and future and attempts to answer “what happened”, “what is happening now”, and “what will happen” (Davenport et al., 2010, p. 81). These questions correspond to the reporting, alerts, and extrapolation processes in learning analytics.
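The following sketch shows how the capture and report steps might be approximated with corpus tools: it counts two-word clusters in a hypothetical coursebook corpus and in a reference corpus of native-speaker coursebooks, then reports clusters that appear proportionally less often in the former. The file names, the 0.5 threshold, and the per-million normalization are illustrative assumptions, not the authors' actual procedure.

```python
# Sketch of the "capture" and "report" steps of learning analytics with corpus tools.
# File names and the underrepresentation threshold are illustrative assumptions.
from collections import Counter

import nltk
from nltk.tokenize import word_tokenize
from nltk.util import bigrams

nltk.download("punkt", quiet=True)  # tokenizer models; newer NLTK versions may also need "punkt_tab"


def bigram_freqs(path: str) -> Counter:
    """Capture: tokenize a plain-text corpus and count its two-word clusters."""
    with open(path, encoding="utf-8") as f:
        tokens = word_tokenize(f.read().lower())
    return Counter(bigrams(tokens))


learner = bigram_freqs("k12_coursebooks_tr.txt")    # coursebook corpus under study (placeholder)
reference = bigram_freqs("native_coursebooks.txt")  # reference native-speaker corpus (placeholder)

# Normalize to frequencies per million tokens so corpora of different sizes compare fairly.
learner_total = sum(learner.values()) or 1
reference_total = sum(reference.values()) or 1

# Report: clusters frequent in the reference corpus but underrepresented in the learner corpus.
print("cluster\treference_per_million\tlearner_per_million")
for cluster, ref_count in reference.most_common(200):
    ref_pm = ref_count / reference_total * 1_000_000
    lea_pm = learner.get(cluster, 0) / learner_total * 1_000_000
    if lea_pm < 0.5 * ref_pm:  # arbitrary cut-off for "underrepresented"
        print(f"{' '.join(cluster)}\t{ref_pm:.1f}\t{lea_pm:.1f}")
```

Such a report could then feed the predict, act, and refine steps: stakeholders would anticipate gaps in learners' phraseological exposure and revise materials accordingly.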

Key Terms in this Chapter

Word Clusters: Words repeatedly found together in sequence, in each other's company, representing a tighter relationship than collocates.

Data-Driven Learning: An approach to foreign language learning in which the language being learned, rather than the explanations of teachers and textbooks, is treated as data, and students, acting as researchers, undertake guided discovery tasks.

Word Pattern: The recurrent sequences of words and structures regularly associated with a given word.

N-Grams: Contiguous sequences of n words, symbols, or tokens in a document.

Corpus: In principle, any collection of more than one text; in practice, a large collection of natural texts in machine-readable form, compiled to be representative of a variety or genre of a language.

Sketch Engine: A corpus tool for exploring how language works; it analyzes authentic texts comprising billions of words to identify what is typical in language.
