Interface and Features for an Automatic ‘C' Program Evaluation System

Amit Kumar Mandal (IIT Kharagpur, India), Chittaranjan Mandal (IIT Kharagpur, India) and Chris Reade (Kingston University, UK)
DOI: 10.4018/978-1-60566-238-1.ch010
Abstract

A system for automatically testing, evaluating, grading, and providing critical feedback on submitted ‘C’ programming assignments has been implemented. The interface and key features of the system are described in detail, along with examples. The system supports monitoring of each student’s progress and fully automates the evaluation process with fine-grained analysis. It also provides online support to both instructors and students, and is designed for service-oriented integration with a course management system using Web services.

Introduction

Systems for automatic and semi-automatic evaluation of programs have been investigated since the early days of computing with a wide variety of approaches (see the discussion in the next section). This research addresses two, relatively new, requirements for automatic program evaluation systems. The first requirement is to simplify the assessment set-up process for instructors, whilst providing more sophisticated evaluation capabilities. The second requirement is to structure an evaluation tool as a service or set of services in line with new service-oriented frameworks being proposed for use with e-learning systems (Wilson et al., 2004). We have designed and implemented a system which is being used as a basis for proof of concept and experimentation around these requirements.

In this paper we present both the internal workings and the external interface of our e-learning tool. We discuss our approach to providing a simple interface for advanced aspects of evaluation (flexible component testing and performance evaluation) in the context of the system. We also consider the structure of the system in terms of a set of services.

The motivation for producing a new system was the very large student cohorts found at large educational institutions and universities around the world, where the undergraduate intake may be six hundred students or more. At the institution where the system was developed, students attend laboratory sessions as part of their curriculum; each student submits about nine to twelve assignments and takes up to three laboratory-based tests. That amounts to nearly ten thousand submissions per semester. Even with the load distributed among twenty instructors, each instructor would have to assess almost five hundred submissions. Without automation, the instructors would spend most of their time on testing and grading, at the expense of time that could be spent interacting with the students.
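The workload figures above follow from a simple back-of-envelope calculation, sketched below. The numbers are taken from the text; the variable names are ours, and the per-student counts are upper bounds from the stated ranges.

```python
# Back-of-envelope workload estimate, using the figures given in the text.
students = 600      # approximate undergraduate intake
assignments = 12    # up to twelve assignments per student
lab_tests = 3       # up to three laboratory-based tests per student
instructors = 20    # instructors sharing the evaluation load

# Total submissions per semester (~9,000, i.e. "nearly ten thousand")
submissions = students * (assignments + lab_tests)

# Submissions per instructor (~450, i.e. "almost five hundred")
per_instructor = submissions // instructors

print(submissions, per_instructor)
```

Even at the lower end of the stated ranges, each instructor would face several hundred manual evaluations per semester, which is the scale that motivates automation.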

The evaluation tool assists instructors by automatically evaluating, marking and providing critical feedback on programming assignments submitted by students. To benefit from this automatic evaluation, instructors do have to spend more time setting up each assignment to ensure that it is amenable to automatic evaluation, so it is important to address the ease with which this can be done.

The current system is restricted to evaluating only C programs, but the design has been kept as generic as possible so that it could be adapted for other programming languages in the future.

In the rest of this paper the underlying technique for rigorous evaluation will be explained along with a discussion of the interfaces for easy assignment set-up. We address the service aspects of the design afterwards. The paper is organised in the following sections: theoretical background and related work, system overview, an explanatory example, security issues, interfaces, services and conclusions.
