Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality

Tonya B. Amankwatia
DOI: 10.4018/978-1-5225-0466-5.ch006

Abstract

Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to regularly evaluate programs to improve quality. Are there new opportunities to expand user and stakeholder input, or involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e-learning quality, especially in Massive Open Online Courses (MOOCs). After reviewing selected evaluation paradigms, models, and methods, this chapter offers a possible role for crowdsourced input. It examines crowd definition, affordances, and problems to begin a taxonomic framework with possible applications for e-learning. The goal is to provide a reference for advancing the discussion and examination of crowdsourced input.

Introduction

E-learning programs vary. Many can be described as accelerated, flexible, global, and open (Crawford, 2012; Moore, 2013; Trekles & Sims, 2013). E-learning can also be characterized by its different pedagogical strategies—such as student-centered, socially negotiated, or authentic learning—that sometimes involve gaming, personal profiles, and e-portfolios (Casey, 2008; Ke & Kwak, 2013). The extent to which e-learning programs implement these strategies effectively, or align with complementary materials, support, and learner outcomes, is partly a question of program quality. When e-learning programs include repackaged massive open online courses (MOOCs), additional questions arise about intellectual property, credentials, and elements of learning (Haber, 2014). Put simply, people want answers about quality.

Some education stakeholders are dubious about the quality of e-learning programs and their impact on learning (Allen & Seaman, 2013; Garrison, 2011; Milliron, 2010). Taxpayers, parents, policymakers, employers, educators, and—in some cases—even learners want answers. Does e-learning make a difference in teaching and learning? Is the technology worth the effort and cost?

The purpose of program evaluation is to provide answers for decision-makers and stakeholders. Evaluation studies should show that administrators, educators, and designers have examined e-learning’s various forms, grappled with its complex issues, and utilized representative data to undergird their decisions and designs (Barksdale & Lund, 2001; Preskill & Russ-Eft, 2005). Moreover, program evaluation experts have contended that evaluation can be a viable means for educational change and improvement when integrated as an ongoing activity, rather than as an isolated event (Patton, 2001, 2008; Shelton, 2011). Nonetheless, given the complexity of developing programs, services, policies, and support for e-learning, leaders might find it challenging to evaluate programs, make decisions, and solve problems to improve quality.

Furthermore, some historical challenges remain for e-learning program evaluation and qualitative research. For example, the time and resources required to collect, analyze, and apply evaluation data can be problematic. Also, it can be challenging to implement qualitative research paradigms and methods that aspire to extend localized findings to contexts beyond that of the evaluation, to involve stakeholders early in program design, or to use negotiation and collaboration strategies to reach consensus about evaluation methods (Denzin & Lincoln, 2003; Reeves & Hedberg, 2008). Finally, defining quality programs is a challenge, especially when several models exist for partial and holistic e-learning evaluation (Shelton & Saltsman, 2005). However, some key technological advances potentially address the complexities and challenges of e-learning program evaluation.
