Utilizing Learning Analytics in an Automated Programming Assessment System to Enhance Software Practice Education

Copyright: © 2023 | Pages: 31
DOI: 10.4018/978-1-6684-9527-8.ch017

Abstract

This chapter explores the integration of learning analytics (LA) into ProgEdu, an automated programming assessment system (APAS), with a focus on improving students' programming education outcomes by considering code quality. Based on the code quality data collected by the system, metric calculation, visualization, latent profile analysis, clustering, and prediction can be performed. In individual assignments, LA enables the monitoring of student engagement, the identification of student profiles, and the early detection of at-risk students. For team projects, LA facilitates the assessment of individual contributions, the tracking of team members' participation, the identification of imbalances in work sharing, the detection of student profiles based on their contributions, and the recognition of free-riders. The approach promotes integrating LA into an APAS so that instructors can understand students' learning behaviors, diagnose issues, and provide targeted interventions, thereby enhancing programming education in university settings.
Chapter Preview

Introduction

Assessment and feedback play a crucial role in educational activities, as they enable both students and lecturers to monitor learning progress. Assessments offer students valuable insights into their strengths and weaknesses with respect to specific learning objectives, enabling them to improve based on the feedback provided. This is particularly important in practical courses such as computer programming, where students frequently struggle to produce satisfactory solutions on their first attempts. However, frequent assessments and detailed feedback significantly increase the workload for lecturers. To tackle this issue, Automated Programming Assessment Systems (APASs) have been introduced as efficient tools for automating programming assessment and feedback in computer science courses. Typically, these systems concentrate on identifying syntax failures, run-time failures, and functional errors in submitted code. However, recent studies suggest that assessing novice programmers solely on functional correctness may not be sufficient (Cardell-Oliver, 2011). Instead, software engineering (SE) metrics and the static quality of code are recommended as additional criteria for evaluating students' programming portfolios (Patton & McGill, 2006). Assessing adherence to coding conventions, which are guidelines aimed at improving software readability and maintainability, has been proposed as a means of evaluating this aspect (Smit et al., 2011). As a result, many APASs have incorporated static code analysis features (Keuning et al., 2018).
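To make this concrete, the sketch below shows how an APAS-style grader might combine functional testing with a static coding-convention check for a single submission. It is a minimal illustration, not ProgEdu's actual implementation: the choice of Python, the pytest and flake8 tools, and the AssessmentResult structure are assumptions made for the example.

```python
import subprocess
from dataclasses import dataclass


@dataclass
class AssessmentResult:
    functional_passed: bool   # did the instructor's unit tests pass?
    style_violations: int     # number of coding-convention warnings
    feedback: str             # raw tool output returned to the student


def assess_submission(source_file: str, test_file: str) -> AssessmentResult:
    """Assess one submission for functional correctness and static quality."""
    # Functional correctness: run the instructor's unit tests.
    tests = subprocess.run(["pytest", test_file, "-q"],
                           capture_output=True, text=True)
    # Static quality: count coding-convention violations reported by flake8.
    style = subprocess.run(["flake8", source_file],
                           capture_output=True, text=True)
    violations = sum(1 for line in style.stdout.splitlines() if line.strip())
    return AssessmentResult(
        functional_passed=(tests.returncode == 0),
        style_violations=violations,
        feedback=tests.stdout + style.stdout,
    )
```

Recording such results for every submission is what later makes course-wide learning analytics possible.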

While in operation, an APAS serves a dual purpose: it is both a learning assessment tool and a valuable source of data on student submissions. This data covers aspects such as submission times, program code, stack traces, assessment results, and code analysis feedback. Code evaluation reports, which cover code syntax checks, adherence to coding conventions, and the results of unit testing, constitute a rich reservoir of data that can yield meaningful insights into students' programming behavior. Exploring this data allows instructors to extract actionable information and identify behavioral patterns that can be leveraged to offer timely interventions and improve students' learning performance. To realize these objectives, the integration of LA into the APAS during its operation is suggested.
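A simple way to picture this data is as one record per submission. The sketch below shows a hypothetical schema covering the aspects listed above (submission time, build and test results, static-analysis feedback, stack traces); the field names and the use of pandas are assumptions for illustration, not ProgEdu's actual data model.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

import pandas as pd


@dataclass
class SubmissionRecord:
    student_id: str
    assignment_id: str
    submitted_at: datetime    # submission time
    compile_ok: bool          # syntax / build outcome
    tests_passed: int         # functional-test results
    tests_total: int
    style_violations: int     # static code-analysis feedback
    stack_trace: str          # run-time failure details, empty if none


def to_frame(records: list[SubmissionRecord]) -> pd.DataFrame:
    """Turn accumulated submission records into a table that LA methods
    (visualization, clustering, prediction) can operate on directly."""
    return pd.DataFrame([asdict(r) for r in records])
```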

LA is an innovative approach to teaching and learning that harnesses data science and learning traces to improve educational achievement (Gibson & Ifenthaler, 2020). It aligns closely with the field of Educational Data Mining (EDM), as both disciplines apply data-driven methodologies to educational research. This interdisciplinary domain encompasses computer-assisted instruction, data mining, artificial intelligence, statistical analysis in education, visual data exploration, educational psychology, cognitive science, and related areas (Romero & Ventura, 2020). The outcomes of LA are usually visualized through dashboards, providing teachers and learners with insights into learning performance and identifying potential at-risk situations (Verbert et al., 2013).
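As one example of the dashboard idea, the sketch below renders a simple engagement view, submissions per student per course week, from the kind of submission table outlined earlier. The column names and the heat-map layout are assumptions; real LA dashboards typically offer richer, interactive views.

```python
import matplotlib.pyplot as plt
import pandas as pd


def plot_engagement(frame: pd.DataFrame) -> None:
    """Heat map of submission counts per student per ISO week.

    `frame` is assumed to have `student_id`, `assignment_id`, and a
    datetime `submitted_at` column, as in the table sketched above.
    """
    weekly = (
        frame.assign(week=frame["submitted_at"].dt.isocalendar().week)
             .pivot_table(index="student_id", columns="week",
                          values="assignment_id", aggfunc="count",
                          fill_value=0)
    )
    fig, ax = plt.subplots(figsize=(8, 4))
    image = ax.imshow(weekly.values, aspect="auto", cmap="Blues")
    ax.set_xticks(range(len(weekly.columns)))
    ax.set_xticklabels(weekly.columns)
    ax.set_yticks(range(len(weekly.index)))
    ax.set_yticklabels(weekly.index)
    ax.set_xlabel("Course week")
    ax.set_ylabel("Student")
    fig.colorbar(image, ax=ax, label="Submissions")
    plt.show()
```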

Over the past decade, several automated feedback systems have been introduced for programming exercises. These systems have primarily concentrated on identifying errors in individual submissions. However, students need more than feedback that addresses immediate coding mistakes; they also require insights drawn from their learning history to improve their programming skills. To meet this need, LA has been employed to analyze digital footprints, which include static demographics and dynamic behavioral logs, with the aim of developing models that identify students at risk of struggling in programming courses (Azcona et al., 2019). The integration of LA into programming education has been proposed for some time (Ihantola et al., 2015). However, a significant portion of these studies were conducted as separate experiments after the courses had concluded, thus limiting the impact of LA in providing timely interventions.
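Along the lines of such at-risk models, the sketch below fits a logistic-regression classifier on per-student features aggregated from the submission log. The feature names, the at_risk label (e.g., taken from a previous course offering), and the use of scikit-learn are assumptions for illustration, not the models used in the cited studies.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-student features aggregated from the submission log.
FEATURES = ["submission_count", "avg_tests_passed_ratio",
            "avg_style_violations", "active_days"]


def train_at_risk_model(students: pd.DataFrame) -> LogisticRegression:
    """Fit a simple classifier that flags students at risk of struggling."""
    X = students[FEATURES]
    y = students["at_risk"]   # 1 = struggled in a past offering, 0 = otherwise
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    # Report how well the model generalizes to held-out students.
    print(classification_report(y_test, model.predict(X_test)))
    return model
```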
