Self and Peer E-Assessment: A Study on Software Usability

Rosalina Babo, Joana Rocha, Ricardo Fitas, Jarkko Suhonen, Markku Tukiainen
DOI: 10.4018/IJICTE.20210701.oa5

Abstract

In recent years, universities and other academic institutions have adopted collaborative learning to improve students' skills, competencies, and learning outcomes. Problem-based learning is one such method, in which students work in groups to develop these skills. However, when working in groups, students are not always assessed according to their contributions to the work. Therefore, self and peer assessment are becoming common practice in academic institutions. With the evolution of technology and the emergence of assessment software tools, the evaluators' work is becoming easier. This paper presents seven different tools, along with their features and functionalities. To evaluate and compare these tools, a set of parameters is presented and described, based on definitions of usability and user experience. The paper concludes that there is a demand for a freeware tool, offering parameters not present in the existing evaluation tools, to assist the lecturer in the assessment.

Introduction

A situation in which students are required to collaborate in small groups to improve their learning is called collaborative learning. This instructional approach involves group exchanges, usually to solve a problem or create a project (Johnson, Johnson, Smith, & Smith, 2013; Hernández-Sellés, Muñoz-Carril, & González-Sanmamed, 2019).

One collaborative learning method is Problem-Based Learning (PBL). PBL aims to improve students' creative thinking skills, problem-solving skills, and learning outcomes (Khoiriyah & Husamah, 2018). With this method, students can work in groups and discuss the best way to solve the problem together.

When working in groups, students are actively engaged in “group learning to gain content knowledge and problem solving skills”, and are thus “assessed based on their contributions to the group learning” (Masek & Salleh, 2015). Traditionally, collaborative work is assessed according to the final project, so the members of a group are given the same mark for their collaborative work (Macdonald, 2003). However, Kolmos and Holgaard (2007) affirm that “variation in assessment practices” has been increasing: workgroups draw on group resources, that is, the knowledge and practice of every member, so the assessment should “capture this ability” (Kolmos & Holgaard, 2007). Hence, student-based assessment (self and peer assessment) will “enhance the authenticity and inclusiveness in assessments of PBL” (Masek & Salleh, 2015), as sketched below.
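To make this concrete, the following minimal sketch (illustrative only; it is not a method from this paper nor from any of the tools it surveys, and the names and ratings are invented) shows one common way self and peer ratings can be turned into individual marks: each member's mark is the group mark scaled by the ratio between the peer score they received and the group average, in the spirit of WebPA-style weighting.

```python
# Illustrative sketch: deriving individual marks from self and peer ratings.
# Assumption: every member rates every member (including themselves).

def individual_marks(group_mark: float,
                     ratings: dict[str, dict[str, float]]) -> dict[str, float]:
    """ratings[assessor][assessee] = score the assessor gave the assessee."""
    # Total score each member received across all assessors.
    totals = {name: 0.0 for name in ratings}
    for given in ratings.values():
        for assessee, score in given.items():
            totals[assessee] += score
    average = sum(totals.values()) / len(totals)
    # A member rated exactly at the group average keeps the group mark;
    # above-average ratings scale it up, below-average scale it down.
    return {name: min(100.0, group_mark * total / average)
            for name, total in totals.items()}

# Hypothetical group of three with a group mark of 70.
ratings = {
    "Ana":   {"Ana": 4, "Bruno": 5, "Carla": 3},
    "Bruno": {"Ana": 4, "Bruno": 4, "Carla": 2},
    "Carla": {"Ana": 5, "Bruno": 5, "Carla": 3},
}
print(individual_marks(70.0, ratings))
# {'Ana': 78.0, 'Bruno': 84.0, 'Carla': 48.0} - the shared mark of 70
# spreads apart according to each member's perceived contribution.
```

Schemes of this kind are exactly what the unequal-mark problem discussed above calls for, and variations of them underlie several of the tools compared later in the paper.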

In recent years, there has been an increasing focus on assessment (Tomas, Borg, & McNeil, 2015). Self and peer assessment is becoming common practice among universities and other academic institutions (Tan & Keat, 2005). As the concept has developed in recent years, regular assessment has also become established in companies as a way to continuously analyse the work being produced.

However, according to Clark, Davies, and Skeers (2005), the task of “performing a fair and accurate assessment of individual student contributions to the work produced by a team as well as assessing teamwork itself presents numerous challenges”. Luaces et al. (2018) affirm that the assessment is subjective and thus “the help of an intelligent system capable of performing this task is needed”. Therefore, with recent developments in technology, parts of the process can be automated, making the assessment more efficient and faster. E-assessment is defined as “the use of information and communication technology to mediate any part of the assessment process” (JISC, 2007, cited in Tomas, Borg, & McNeil, 2015).

The unfairness of individual assessment was clearly noted by the lecturers of the Porto Accounting and Business School (ISCAP), who had increased their use of PBL in their classrooms. Since there was no way to distinguish between workgroup members, the assessment could be biased: the lecturers would usually give the same mark to the whole group. This practice was unfair, since most of the time it did not correspond to the actual contribution and performance of each individual.

Driven by the constant motivation to implement new tools, or improve existing ones, that support unbiased assessments, concepts such as usability, user experience, and assessment, and the relations between them, are being studied, and the results are being put into practice. The application of these concepts has resulted in a number of online tools that aim to facilitate the evaluators' work.

This article presents examples of online self and peer assessment tools, together with comparative usability parameters. It is part of a larger study that aims to understand how to perform fair individual assessments in workgroups, and whether there is a sound methodology to assist evaluators in their tasks. For that purpose, Design Science Research (DSR) was used, and this study corresponds to the rigour cycle. It attempts to determine whether the existing assessment frameworks are suitable for distinguishing between members of workgroups and whether designing a new assessment framework would be advantageous. It contributes to the knowledge base foundations by comparing existing assessment tools to ensure actual research contributions.

The next sections present PBL's characteristics and importance, along with definitions of usability and user experience, and explain how these concepts can be fundamental to improving assessment tools and thus achieving better results. The methodology used is then explained. After that, the characteristics deemed crucial for evaluating the tools are discussed, and their presence in the online tools is verified. Finally, the conclusion presents the features needed to improve these tools and the motivation for designing a new assessment tool.
