Short and Open Answer Question Assessment System based on Concept Maps

Safa Ben Salem (Higher Institute of Computer and Communication Techniques (ISITCom), University of Sousse, Sousse, Tunisia), Lilia Cheniti-Belcadhi (Higher Institute of Computer and Communication Techniques (ISITCom), University of Sousse, Sousse, Tunisia), Rafik Braham (Higher Institute of Computer and Communication Techniques (ISITCom), University of Sousse, Sousse, Tunisia) and Nicolas Delestre (Normandie University, Saint-Ettienne-du-Rouvray, France)
Copyright: © 2016 |Pages: 19
DOI: 10.4018/JITR.2016070104


Computer-assisted assessment of short and open answers has attracted a great deal of work in recent years, due to the need to evaluate learners' deep understanding of lesson concepts, which, according to most teachers, cannot be done by simple MCQ testing. In this paper, we review the techniques underpinning such systems, describe currently available systems for marking short and open text answers, and propose a system that evaluates answers using Natural Language Processing. We then compare the results obtained by the proposed system with those of human expert graders, as well as with some existing systems.
Article Preview

1. Introduction

In the learning process, assessment is the critical task of asking students to demonstrate their understanding of the subject matter. Indeed, it is an integral part of instruction, as it determines whether or not the educational goals and standards of the lessons are being met (Marshall, Zhang, Chen et al., 2003). Many researchers agree that multiple choice questions only serve to evaluate the lower levels of Bloom's taxonomy; when it is necessary to measure the higher levels, open-ended questions should be used (Mitchell, Russell, Broomhead et al., 2002) (Jordan, 2012).

Assessment items can be broadly classified as constructed response (for example, free-answer questions) or selected response (for example, multiple-choice questions) (Dochy, 1996). Among the subcategories of free-text answers, we are interested in short and open answers and their assessment on the web. A short and open answer is a constructed response item: it requires the student to construct a response in natural language, without the benefit of any prompts in the question. This implies a different form of cognitive processing and memory retrieval compared with selected response items (Nicol, 2007) (Cronbach, 1990). We use the term short and open answer questions throughout this paper, as it is the most common and practical in this research domain.

Questions with short and open answers (SOAQs) are a subcategory of free-text answers or essays. They are open-ended questions that require students to create an answer. They are commonly used in examinations to assess basic knowledge and understanding (low cognitive levels) of a topic before more in-depth assessment questions are asked on that topic (Chan, 2009). Short and open answer questions do not have a generic structure. They can be used as part of both formative and summative assessment; because their structure is very similar to that of examination questions, learners are more familiar with the practice and feel less anxious.

Computer-based evaluation, also known as computer-assisted assessment (CAA), of free-text answers has been studied since the sixties (Ziai, Ott, & Meurers, 2012). It is a growing branch of e-learning that has attracted increasing attention in recent years, mainly for the assessment of short answer questions.

In this research, we are interested in the assessment of short open answer questions on the Web. These are questions for which the learner must provide a short answer that is corrected against a standardized grid (model). They make it possible to evaluate problem analysis skills. Their reliability is relatively good, but their correction is time-consuming and difficult to standardize and automate.

The authors of (Ziai, Ott, & Meurers, 2012) provide a general overview of CAA tools for short answer questions. These tools differ in terms of assessment techniques, algorithms, and scoring measures. Recent developments have seen the introduction of natural-language-based assessment engines, but no system uses a Concept Map (CMap) as a knowledge representation technique.

In the learning process, using a CMap makes it possible to identify what the student has learned and the difficulties encountered, or even to find concepts that are not yet understood and therefore need further attention (Novak, 1998). It is an effective educational technique (Canas, Leake, & Wilson, 1999). In our research, we use a CMap as a novel knowledge representation in the assessment of short answer questions.
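To make the idea of a concept map as a knowledge representation concrete, the sketch below models a CMap as a set of (concept, relation, concept) propositions and scores a learner's map against a model map by proposition overlap. This is only an illustrative assumption for exposition; the triples, the `cmap_overlap` function, and the scoring rule are ours, not the paper's actual semantic similarity algorithm, which is detailed in Section 3.

```python
# Illustrative sketch: a concept map as a set of (concept, relation, concept)
# propositions, scored by how many of the model map's propositions the
# learner's map reproduces. Names and data here are hypothetical.

def cmap_overlap(model, learner):
    """Fraction of the model map's propositions present in the learner map."""
    model_set = {tuple(t) for t in model}
    learner_set = {tuple(t) for t in learner}
    if not model_set:
        return 0.0
    return len(model_set & learner_set) / len(model_set)

# A tiny model concept map built by the teacher.
model_map = [
    ("assessment", "is part of", "instruction"),
    ("concept map", "represents", "knowledge"),
]

# Propositions extracted from a learner's answer.
learner_map = [
    ("concept map", "represents", "knowledge"),
    ("assessment", "uses", "questions"),
]

print(cmap_overlap(model_map, learner_map))  # 0.5
```

In practice, exact string matching would be too strict; a real system would match concepts and relations semantically (synonyms, paraphrases), which is where the NLP component comes in.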

The rest of this paper is organized as follows. In Section 2, we present the research issues addressed in our work. In Section 3, we detail the concept map based short and open answer question assessment approach, describing the "pipeline" of the assessment methodology, in which each artifact or process feeds the next, and presenting the semantic similarity algorithm. Section 4 details the architecture of the proposed framework and its implementation stages. The results of applying our approach in a real course are shown in Section 5. Finally, we present conclusions and future work in Section 6.
