Beyond Performance Analytics: Using Learning Analytics to Understand Learning Processes That Lead to Improved Learning Outcomes

Kirk P. Vanacore, Ji-Eun Lee, Alena Egorova, Erin Ottmar
Copyright © 2023 | Pages: 20
DOI: 10.4018/978-1-6684-9527-8.ch009

Abstract

To meet the goal of understanding students' complex learning processes and maximizing their learning outcomes, the field of learning analytics delves into the myriad of data captured as students use computer assisted learning platforms. Although many platforms associated with learning analytics focus on students' performance, performance on learning-related tasks is a limited measure of learning itself. In this chapter, the authors review research that leverages data collected in programs to understand specific learning processes and contribute to a robust vision of knowledge acquisition. In particular, they review work related to two important aspects of the learning process—students' problem-solving strategies and behavioral engagement—then provide an example of an effective math program that focuses on the learning process over correct or incorrect responses. Finally, they discuss ways in which the findings from this research can be incorporated into the development and improvement of computer assisted learning platforms, with the goal of maximizing students' learning outcomes.
Chapter Preview

1. Introduction

Learning is a complex process that requires exposure to content, thought, struggle, and memory (Bjork & Bjork, 2020; Koedinger et al., 2023; Lynch et al., 2018; Okano et al., 2000). To have learned something, a student must not only be able to perform a task in the moment, but also demonstrate that they can retain the information or skill and apply it to new situations (Soderstrom & Bjork, 2015). Learning systems in which students encounter desirable difficulties (Bjork & Bjork, 2020) – such as varying the presentation of content (Smith et al., 1978; Smith & Handy, 2014), interleaving knowledge components instead of presenting them sequentially (Rohrer et al., 2014), spacing content delivery (Cepeda et al., 2006), and retrieval practice (Karpicke & Zaromb, 2010) – increase the likelihood of learning. Yet, when desirable difficulties are designed into learning activities, students' performance often suffers, even as these design choices can positively affect learning as measured by distal outcomes (Roediger & Karpicke, 2006; Shea & Morgan, 1979; Smith & Rothkopf, 1984).

Despite the potential incongruence between performance and learning, Computer Assisted Learning Platforms (CALP) often rely heavily on students' performance within learning activities as the primary measure to evaluate students. The instructional design in these CALPs varies greatly; some incorporate game-based learning (Siew et al., 2016), puzzles (Rutherford et al., 2010), and simulations (Martens et al., 2004; McCoy, 1996), while others focus on more traditional problem sets with tutorial instruction (Heffernan & Heffernan, 2014). While the specific design and difficulty within these instructional methods may also differ, many of these CALPs use mastery learning (Barnes et al., 2016; Heffernan & Heffernan, 2014; Macaruso & Hook, 2007; Ritter et al., 2016). Mastery learning is based on the premise that students must master a knowledge component before progressing to new content (Bloom, 1968). Mastery of a skill is often determined using students' performance within the activities, either by modeling student knowledge or by using simple criteria, such as solving three problems correctly in a row (Corbett & Anderson, 1994; Kelly et al., 2015; Reich, 2020; Yudelson et al., 2013). Early education technologies relied on modified versions of Rasch models, which estimate the probability that a student will answer a problem correctly as a function of the problem's difficulty and the student's ability (Reich, 2020). Alternatively, knowledge tracing – which seeks to predict the probability that a student has mastered a knowledge component – has emerged as one of the main methods for assessing students' mastery within a problem set (Corbett & Anderson, 1994; Yudelson et al., 2013). Furthermore, mastery learning systems often include interactive elements that provide assistance, which may further boost performance by reducing the mental effort necessary to produce correct responses while potentially hindering learning (Koedinger & Aleven, 2007; Koedinger et al., 2008). Overall, these systems rely on students' performance data, most often represented by binary outcomes on problems within an activity, to evaluate student learning and determine whether students should progress to the next knowledge component.
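To make these two mastery-estimation approaches concrete, the sketch below illustrates a Rasch-style probability of a correct response and a single Bayesian Knowledge Tracing update of the probability that a student has mastered a knowledge component. This is a minimal sketch under assumed values: the prior, slip, guess, and learn rates and the function names are illustrative assumptions, not parameters from any particular CALP.

```python
import math

def rasch_p_correct(ability, difficulty):
    """Rasch-style item response model: probability of a correct answer
    as a logistic function of student ability minus item difficulty."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def bkt_update(p_known, correct, slip=0.10, guess=0.20, learn=0.15):
    """One Bayesian Knowledge Tracing step: revise the probability that a
    student has mastered a knowledge component after one observed response.
    The slip, guess, and learn rates here are illustrative assumptions."""
    if correct:
        num = p_known * (1 - slip)
        posterior = num / (num + (1 - p_known) * guess)
    else:
        num = p_known * slip
        posterior = num / (num + (1 - p_known) * (1 - guess))
    # Allow for the chance the student learned the skill on this opportunity.
    return posterior + (1 - posterior) * learn

# Example: track estimated mastery over a short sequence of responses.
p_mastery = 0.30  # assumed prior probability of mastery
for outcome in [True, False, True, True]:
    p_mastery = bkt_update(p_mastery, correct=outcome)
print(f"Estimated mastery after four attempts: {p_mastery:.2f}")
print(f"Rasch P(correct) at matched ability and difficulty: {rasch_p_correct(0.0, 0.0):.2f}")
```

A system built on such estimates typically advances a student once the mastery estimate crosses a threshold, or once a simple criterion such as three consecutive correct answers is met, which is precisely why its judgments track in-the-moment performance rather than durable learning.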

Key Terms in this Chapter

Performance: Execution of a task during the learning activity, including whether a problem was answered correctly or a task was completed sufficiently.

Action Data: Data on the actions students take within a program, including, but not limited to, their submission of answers to questions or problems, use of hints and scaffolding, and derivations in problem-solving. These data will vary based on the actions available in the program.

Learning: Permanent changes in abilities or knowledge, including long-term retention of information and transferability of skills outside of the direct context in which they were learned.

Time Data: Data on the time it takes a student to take an action or series of actions.

Computer Assisted Learning Platforms (CALP): Programs that provide automated learning content and/or problems with the goal of students gaining knowledge or skills. These can include programs with varying levels of adaptivity, from intelligent tutors with automated targeted feedback to digitally presented problems.

Randomized Control Trial: A research design in which units (often students) are randomized into conditions, allowing for an evaluation of the effect of those conditions on the units.

Quasi-Experimental Studies: Research designs that estimate the effects of a condition even though units are not randomized into conditions. This requires accounting for confounding factors that influence which units experience which conditions. Common quasi-experimental methods include propensity score matching and regression discontinuity design (a minimal matching sketch follows these key terms).

Desirable Difficulties: Elements of learning programs, systems, or courses that create conditions for productive struggle, thereby improving learning.
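As a rough illustration of the quasi-experimental approach defined above, the sketch below matches students who used a platform to non-users with similar estimated propensity scores and compares their outcomes. The synthetic data, covariates, and modeling choices are assumptions made purely for illustration, not a procedure reported in this chapter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic example: X holds student covariates (e.g., prior achievement),
# t marks whether a student used the platform, y is a posttest outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
t = rng.integers(0, 2, size=200)
y = X[:, 0] + 0.5 * t + rng.normal(size=200)

# Step 1: estimate propensity scores, P(treated | covariates).
scores = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Step 2: match each treated student to the control student with the
# closest propensity score (nearest-neighbor matching with replacement).
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
nearest = control[np.abs(scores[treated, None] - scores[None, control]).argmin(axis=1)]

# Step 3: compare outcomes within matched pairs.
effect = (y[treated] - y[nearest]).mean()
print(f"Estimated effect of the condition on treated students: {effect:.2f}")
```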
