Studying and Analyzing the Evaluation Dimensions of Learning Management Systems


Copyright: © 2021 | Pages: 40
DOI: 10.4018/978-1-7998-4021-3.ch002

Abstract

We are currently witnessing the launch and development of a large number of distance training devices in Moroccan universities, whose main objective is to meet the requirements of society and of the fully emerging knowledge economy. All of these devices rely on learning management systems (LMSs), whose selection can be problematic for designers for different reasons (cost, utility, usability, etc.). Conscious of the impact of these technological tools on learning, the authors propose a methodical approach that identifies the essential criteria for evaluating LMSs so that they fit the needs of teachers and learners, based on an analysis of the evaluation dimensions of multimedia documents, particularly the dimensions of utility and usability.

Introduction

One of the essential features of the knowledge economy is the acceleration of the life cycles of interactive software, of which LMSs are a part. Choosing among these LMSs, which constitute the cornerstone of initial and continuing training systems, is not obvious to most users. But why is this choice so difficult?

As part of our study, we will show that choosing among the various tools available on the market forced us to identify the challenges and the specific needs. It was crucial to ensure these tools would provide rich services, motivated by the online sharing of structured information and by interactivity among different users. To these we added services for tracking the learning paths of the different users, notably learners.

It was obvious that the use of any tool in the field of education and training had to be justified by its pedagogical interest and its response to the needs of the learners. And while the LMS is supposed to address the spatial and temporal constraints between tutors and learners, it must not hinder the learning process.

Consequently, any random choice that entails a loss of money, effort, and time challenges us and invites us to ask the following questions:

On the one hand, how can we choose an LMS that meets the norms and standards as they are acknowledged in distance education systems?

On the other hand, LMSs are objects of evaluation, so to what degree of training specificity can they respond? On what choices in terms of multimedia engineering should we base our analysis of these LMSs? Which norms and standards must the requirements meet in the evaluation of these LMSs?

These issues, among others, are the subjects of the investigations conducted as part of our approach, whose contribution we judge for experimentation purposes.

For our study, a review of the specialized literature on LMS analysis reveals two orientations. The first is dedicated to technical analysis (computer languages, scripts, metadata); the second covers the pedagogical scope of these LMSs and the development of learners' skills.

Our approach seeks to reconcile these two orientations. The technological and the pedagogical are not mutually exclusive; rather, crossing their elements serves a goal in which technology is placed at the service of pedagogy.

To concretize our approach, in the “theoretical frame” section, we first examined the evaluation dimensions of interactive systems proposed by Senach (1993), Tricot et al. (2003), and ISO/IEC 25010 (2011). Then we categorized these works and the various discussions of them (Bastien and Scapin, 2001; Huart et al., 2008). After analysis, we retained the dimensions of utility and usability.

Then, in the “analysis of assessment dimensions” section, we followed a specific methodology for analyzing the evaluation dimensions obtained, according to three steps:

  • Comparison of studied outcomes,

  • Personal position relative to the outcomes,

  • Analysis and results (choose criteria and tools suitable for the evaluation of the LMSs).
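The three steps above can be sketched, in highly simplified form, as a comparison of candidate tools against retained criteria. This is an illustrative sketch only: the criterion names, LMS names, and feature sets below are hypothetical placeholders, not the chapter's actual data.

```python
# Step 1: gather the outcomes of prior studies as a retained criteria list.
# (Criterion names are hypothetical examples, not the chapter's criteria.)
criteria = ["communication tools", "learner tracking", "content sharing"]

# Hypothetical feature matrix: which criteria each candidate LMS satisfies.
lms_features = {
    "LMS-A": {"communication tools", "learner tracking"},
    "LMS-B": {"communication tools", "learner tracking", "content sharing"},
}

def coverage(features: set[str]) -> float:
    """Fraction of the retained criteria that an LMS satisfies."""
    return sum(c in features for c in criteria) / len(criteria)

# Steps 2-3: take a position on the outcomes and retain the best fit.
scores = {name: coverage(feats) for name, feats in lms_features.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # LMS-B 1.0
```

In practice, presence/absence coverage would be refined with the weighted utility and usability criteria discussed in the sub-sections below; the sketch only shows the comparison-then-selection structure of the methodology.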

This section mainly consists of three sub-sections:

  • 1. Utility analysis: In this sub-section, we categorize the utility characteristics discussed in various studies (Senach, 1993; Tricot et al., 2003; ISO/IEC 25010, 2011; ISO/IEC 9126-1, 2001). After analysis, we retained six utility characteristics. Thereafter, we subjected the tools of currently recognized LMSs to a filter of the criteria that meet the pedagogical principles of training, based on various studies (Lablidi et al., 2009; Aska et al., 2000), to check the quality and availability of the tools offered to the various players in distance training. We then checked their utility and operability through interactive technologies meeting the retained characteristics (ISO/IEC 25010, 2011).

  • 2. Usability analysis: In this sub-section, we categorize the most important usability criteria according to various studies (Norman, 1988; Nielsen, 1995; Gerhardt-Powals, 1996; ISO 9241-11, 1998; Bastien and Scapin, 1993; ISO/IEC 9126-1, 2001; Baker et al., 2002; Tricot et al., 2003; ISO 10075-3, 2004; Shneiderman and Plaisant, 2005; Stone, 2005; Johnson, 2008; ISO 9241-171, 2008; ISO 9241-210, 2010; ISO/IEC 25010, 2011; Tognazzini, 2014). After analysis, we selected eight usability criteria to be part of our evaluation approach.

  • 3. Evaluation approach for LMSs: In this sub-section, we present the measures and criteria adapted for the evaluation of LMSs according to a scalable approach that synthesizes the two analyzed dimensions.
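A synthesis of the two analyzed dimensions can be sketched as a weighted combination of per-dimension scores. This is a minimal illustration under stated assumptions: the characteristic names, the 0-1 ratings, and the equal weighting of utility and usability are all hypothetical, not the chapter's retained values.

```python
# Assumed equal weighting of the two retained dimensions (hypothetical).
UTILITY_WEIGHT = 0.5
USABILITY_WEIGHT = 0.5

# Hypothetical ratings on a 0-1 scale for one candidate LMS.
utility_ratings = {"functional completeness": 0.8, "functional correctness": 0.9}
usability_ratings = {"learnability": 0.7, "operability": 0.6}

def dimension_score(ratings: dict[str, float]) -> float:
    """Average of a dimension's criterion ratings."""
    return sum(ratings.values()) / len(ratings)

# Synthesis: one overall score combining utility and usability.
overall = (UTILITY_WEIGHT * dimension_score(utility_ratings)
           + USABILITY_WEIGHT * dimension_score(usability_ratings))
print(round(overall, 3))  # 0.75
```

The averaging-then-weighting structure is what makes such an approach scalable: new criteria can be added to either dimension, or the weights adjusted to a given training context, without changing the overall computation.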

Key Terms in this Chapter

SCORM: The Sharable Content Object Reference Model (SCORM®) was created to address the challenges of interoperability, reusability, and durability. As a reference model, it was intentionally designed to leverage standard web technologies as well as existing learning technology specifications. SCORM® comprises a collection of interrelated technical specifications and guidelines designed to meet the DoD’s high-level requirements for creating interoperable, plug-and-play, browser-based e-learning content. It consists of three technical specification “books” that collectively address the challenges of interoperability, portability, reusability, and the instructional sequencing of self-paced e-learning content (available at https://www.adlnet.gov/adl-research/scorm).

HSI: Human-computer interaction
