A Grading Data Warehouse Approach to Measuring and Analyzing Learning Performance: From Grading to Competency-Oriented Assessment

Michael Aram (Vienna University of Economics and Business, Austria), Felix Mödritscher (Vienna University of Economics and Business, Austria), Gustaf Neumann (Vienna University of Economics and Business, Austria) and Monika Andergassen (Vienna University of Economics and Business, Austria)
Copyright: © 2019 | Pages: 25
DOI: 10.4018/978-1-5225-5936-8.ch005

Abstract

E-assessment comprises a variety of activities in and beyond the classroom. However, traditional e-learning platforms support only part of the assessment process (e.g., individual and group assignments, the grading of such activities, and student record management). Typically, such platforms lack competency orientation or face performance issues due to increasing application complexity and usage intensity. To overcome technical limitations and provide a basis for competency-based assessment, the authors present an analytics component that is inspired by data warehouses. The potential of this artifact is elaborated, and the improvements are evaluated through a case study of Learn@WU, the LMS of WU Vienna. Although the focus was on competency-based aggregation of learning results, early experiences show performance increases of 45% to 98% for retrieving simple grades. Sample scenarios demonstrate how to define and calculate indicators along activity hierarchies and competency graphs, enabling the measurement of learning performance along both generic indicators and competency-oriented assessment.

Introduction

Measuring performance (aspects) and observing the progress of a learner – and using this information to guide the further learning path – is at the heart of teaching. Thus, a large body of knowledge concerning educational assessment has emerged in pedagogical research in general, and in technology-enhanced learning in particular. Several aspects are discussed in the literature. Suskie and Banta (2009) refer to educational assessment as a cyclical “ongoing process of: establishing clear, measurable expected outcomes of student learning; ensuring that students have sufficient opportunities to achieve those outcomes; systematically gathering, analyzing and interpreting evidence to determine how well student learning matches our expectations; using the resulting information to understand and improve student learning.” Educational assessment is often differentiated into formative assessment (‘assessment for learning’), summative assessment (‘assessment of learning’) (Rosario Hernández, 2012), and diagnostic assessment (Michel, Goertz, Radomski, Fritsch, & Baschour, 2015), as well as into subjective and objective assessment. Continuous assessment concepts (Nitko, 1995; Romero, Guenaga, García-Zubía, & Orduña, 2014) have become popular as an alternative to systems focused solely on final examinations. According to Terenzini (1989), further dimensions comprise the object of assessment (knowledge, skills, attitudes and values, behavior) and the level of assessment (individual vs. group). One particular aspect of educational assessment is to support and measure the development of competencies, i.e., by defining learning outcomes and assessing the achievement level along criteria (Krause, Dias, & Schedler, 2015).

Learning technologies may support teachers in assessment activities through various features, ranging from quizzes, assignments and surveys, through typical grading functionalities such as points, to evaluation tools for courses or entire study programs (Mödritscher, Spiel, & García-Barrios, 2006). In the classroom, educators with profound didactical expertise achieve sophisticated assessments through adequate teaching activities. Such teachers observe their students or even apply formative assessment methods in order to improve student learning before a course ends. In digital environments, however, teachers are often restricted to only a handful of indicators (e.g., test results) for assessing a student’s overall performance. This is especially true when large numbers of students have to be assessed and a teacher cannot follow the achievements of all students without tool support (e.g., in introductory courses at large universities). Moreover, even these sparse indicators are traditionally aggregated into single marks, resulting in very abstract representations of a person’s knowledge and ability. User interactions with technology are largely ignored, although they could be a valuable source for assessing student learning, might even allow conclusions to be drawn about students’ competency levels, or could help students in successfully achieving learning outcomes (Agudo-Peregrina, Iglesias-Pradas, Conde-González, & Hernández-García, 2014).

In this respect, at least two trends are particularly important: (a) the notion of competency-oriented education and assessment, and (b) the rise of teaching and learning analytics, which use data to support students and teachers. This chapter presents a design science research approach for a generic analytics infrastructure – namely a learning performance repository – that is capable of managing the performance of learners not only through results (i.e., points and grades) that are manually entered by teachers but also through user interaction data of a Learning Management System (LMS) that is captured automatically. The learning performance repository furthermore enables the mapping of these performance indicators either to grades or to a network of competencies. Additionally, its design ensures that millions of entries can be collected, processed and provided in a flexible and performant way.
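To make the idea of such a repository more concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): performance indicators are stored as facts with describing dimensions, and the same facts can be aggregated either into classic grade points or along competencies. All class and field names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicator:
    """A single performance fact with its describing dimensions."""
    user: str                 # who performed the activity
    activity: str             # educational activity (e.g., a quiz)
    verb: str                 # e.g., "scored", "completed"
    value: float              # the measured fact (points, counts, ...)
    competencies: tuple = ()  # competencies the activity addresses


class PerformanceRepository:
    """Collects indicator facts and aggregates them per user."""

    def __init__(self):
        self.facts = []

    def record(self, indicator: Indicator) -> None:
        self.facts.append(indicator)

    def grade_points(self, user: str) -> float:
        """Classic aggregation: sum of all points of one user."""
        return sum(f.value for f in self.facts if f.user == user)

    def competency_score(self, user: str, competency: str) -> float:
        """Competency-oriented aggregation over the same facts."""
        return sum(f.value for f in self.facts
                   if f.user == user and competency in f.competencies)


# Usage: two results for one student, viewed through both lenses.
repo = PerformanceRepository()
repo.record(Indicator("alice", "quiz-1", "scored", 8.0, ("statistics",)))
repo.record(Indicator("alice", "assignment-1", "scored", 12.0,
                      ("statistics", "programming")))
print(repo.grade_points("alice"))                     # 20.0
print(repo.competency_score("alice", "programming"))  # 12.0
```

The point of the sketch is that no extra data entry is needed for competency orientation: the same stored facts yield either a grade-style total or a per-competency view, depending only on which dimensions the aggregation groups by.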

Key Terms in this Chapter

Competency: A human potentiality for action requiring knowledge and abilities that can be learned and involves cognitive and non-cognitive elements (i.e., factual knowledge, procedural skills, internalized orientations, values, attitudes, etc.).

Assessment: A cyclical, ongoing process of establishing clear, measurable expected outcomes of student learning. Educational assessment is differentiated into formative assessment (assessment for learning) and summative assessment (assessment of learning).

Competency-Based Assessment: A state-of-the-art kind of assessment that focuses on defining measurable learning outcomes, supporting the competency development of learners, and assessing the achievement level along criteria.

Data Warehouse: A database that implements a dimensional data model and contains a copy of operational and other data in order to provide highly performant means for analytics and reporting for decision making.

Dimensional Data Model: A kind of data model that arose in the field of data warehousing, consisting of facts (measures) and dimensions that contextualize these facts. Data (facts) stored using such a model can be analyzed and processed more efficiently.

Performance Indicator: A fact (numeric value) that describes a learning-related aspect of a user in a learning management system. A performance indicator can be characterized by a set of dimensions (such as a user, a start and end time, a verb, an educational activity, a set of related competencies, etc.). It represents either a teacher-given measure (e.g., a grade or result) or an automatically recorded property (e.g., a system usage variable).
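The abstract mentions calculating indicators along competency graphs. As one hypothetical illustration (the chapter's actual aggregation rules are not given here), the sketch below rolls indicator-derived scores up a competency graph, assuming a parent competency's score is simply the mean of its sub-competencies' scores; graph, scores, and the averaging rule are all illustrative assumptions.

```python
# Parent competency -> its sub-competencies (an illustrative graph).
competency_graph = {
    "data analysis": ["statistics", "programming"],
}

# Directly measured scores for leaf competencies (e.g., normalized
# aggregates of performance indicators), on a 0..1 scale.
leaf_scores = {"statistics": 0.8, "programming": 0.6}


def rolled_up_score(competency: str) -> float:
    """Score of a competency: measured value for leaves,
    mean of sub-competency scores for inner nodes."""
    children = competency_graph.get(competency)
    if not children:  # leaf: use the directly measured score
        return leaf_scores.get(competency, 0.0)
    return sum(rolled_up_score(c) for c in children) / len(children)


print(rolled_up_score("data analysis"))  # ~0.7
```

In practice the roll-up rule could be weighted or threshold-based rather than a plain mean; the sketch only shows the structural idea of propagating measured leaf values upward through the graph.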

Learning Management System: A web-based software application for the administration, documentation, tracking, reporting and delivery of educational courses or training programs.
