Framework To Approximate Label Matching For Automatic Assessment Of Use-Case Diagram

Vinay Vachharajani (School of Computer Studies, Ahmedabad University, Ahmedabad, India) and Jyoti Pareek (Department of Computer Science, Gujarat University, Ahmedabad, India)
Copyright: © 2019 |Pages: 21
DOI: 10.4018/IJDET.2019070105

Abstract

E-learning plays a significant role in educating large numbers of students. In the delivery of e-learning material, automatic e-assessment has been applied only to a limited extent to free-response answers involving highly technical diagrams in domains such as software engineering and electronics, where there is great scope for imagination and wide variation in answers. The automatic assessment of diagrammatic answers is therefore a challenging task. This article describes algorithms that compute the syntactic and semantic similarities of nodes to support the automatic assessment of use-case diagrams. To illustrate the performance of these algorithms, students' use-case diagrams are matched with a model use-case diagram. Results from 13,749 labels across 445 student answers, based on 14 different scenarios, are analyzed to provide quantitative and qualitative feedback. No comparable study has been reported for any other label matching algorithm in the research literature.

1. Introduction

A use-case diagram shows the different levels of interaction that might be possible between a user and a system. It is a graphical representation of the interactions between the different stakeholders of the system, known as actors, and the use-cases (functionalities) in which these actors are involved; it is therefore one of the important UML diagrams in software engineering.

With the advent of the Internet, web-based learning has become a reality. The learning process has three major components:

  1. Delivery of learning material to students
  2. Conducting tests/quizzes/examinations
  3. Evaluating the answers

Delivering learning material and tests/quizzes to students has become very easy, since they can be uploaded to the web and accessed irrespective of the number of students.

The assessment part, however, can deter learned faculty members from participating in the whole process, especially as the number of students increases. Evaluation is less of a problem in certain situations: in online objective examinations it is automatic, and for textual answers it can be automated with the help of keywords expected in the answers. Assessment becomes really difficult in the case of diagrammatic answers, and this can result in a shortage of teachers for some subjects.

The main goal of our research is to develop a framework for the automatic e-assessment of use-case diagrams (Vachharajani & Pareek, 2014) and a tool that provides quantitative feedback in terms of marks as well as qualitative feedback in terms of suggestions. This research will help institutions of higher learning reach the large number of students who want to pursue higher studies at such reputed institutions, by encouraging learned faculty members to participate in the learning process.

This work proposes a label-matching framework to evaluate students' diagrammatic answers. It necessarily follows approximate label matching, so that assessment can reward both exact matches and nearness to the exact answer.
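As a rough illustration of the idea (not the paper's actual algorithm), approximate label matching can be sketched as a normalized string-similarity check, where labels above a chosen threshold count as matches; the helper names and the 0.8 threshold below are assumptions for the example:

```python
from difflib import SequenceMatcher


def normalize(label: str) -> str:
    """Lower-case and collapse whitespace so comparison ignores formatting."""
    return " ".join(label.lower().split())


def label_similarity(student: str, model: str) -> float:
    """Return a similarity ratio in [0, 1]; 1.0 means an exact match."""
    return SequenceMatcher(None, normalize(student), normalize(model)).ratio()


def is_approximate_match(student: str, model: str, threshold: float = 0.8) -> bool:
    """Accept labels that equal the model label or come close enough to it."""
    return label_similarity(student, model) >= threshold
```

With this sketch, "Withdrw Cash" would still match the model label "Withdraw Cash" despite the typo, while an unrelated label such as "Print Report" would be rejected.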

In the software engineering domain, there is great scope for imagination and wide variation in imprecise diagrammatic answers (Jayal & Shepperd, 2009); automatic assessment of diagrammatic answers is therefore a challenging task. The literature review makes it evident that very limited research has been done on automating the assessment of UML diagrams. Some research exists on sequence diagrams (Thomas et al., 2008), activity diagrams (Striewe & Goedicke, 2014) and class diagrams (Striewe & Goedicke, 2011; Baghaei, Mitrovic & Irwin, 2007; Ali, Shukur & Idris, 2007). Comparing only the shapes of components, such as use-cases and actors, with the model use-case diagram is not sufficient for effective matching. Since both the positions and the shapes of components can differ, labels play a significant role in giving meaning to shapes and distinguishing between them. This information is very useful in comparing, and thus assessing, the diagrams. Hence, label matching is an essential process in the automated assessment of use-case diagrams.

Diagrammatic answers rely heavily on their labels to convey meaning. However, many labeling practices are possible, making the choice of labels practically limitless. Students may write seemingly different but syntactically and semantically similar labels compared to the model diagram, for example by using misspelled words, abbreviations, or synonyms; such labels cannot be marked as incorrect.
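A minimal sketch of how misspellings, abbreviations, and synonyms might all be tolerated when matching a student label against a model label; the lookup tables, helper names, and threshold here are illustrative assumptions, not the paper's method (a real system would use a full abbreviation list and a lexical database such as WordNet for synonyms):

```python
from difflib import SequenceMatcher

# Illustrative lookup tables -- assumed for the example only.
ABBREVIATIONS = {"acct": "account", "info": "information", "mgr": "manager"}
SYNONYMS = {"remove": {"delete", "erase"}, "delete": {"remove", "erase"}}


def expand(word: str) -> str:
    """Replace a known abbreviation with its full form."""
    return ABBREVIATIONS.get(word, word)


def words_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Two words match if they are equal after abbreviation expansion,
    are listed synonyms, or are close enough in spelling to tolerate a typo."""
    a, b = expand(a.lower()), expand(b.lower())
    if a == b or b in SYNONYMS.get(a, set()):
        return True
    return SequenceMatcher(None, a, b).ratio() >= threshold


def labels_match(student: str, model: str) -> bool:
    """Match labels word by word; every word pair must align."""
    sw, mw = student.split(), model.split()
    return len(sw) == len(mw) and all(words_match(s, m) for s, m in zip(sw, mw))
```

Under these assumed tables, "Delete Acct" would match the model label "Remove Account" (synonym plus abbreviation), and "Withdrw Cash" would match "Withdraw Cash" (spelling tolerance).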
