Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality

Tonya B. Amankwatia
Copyright: © 2019 | Pages: 22
ISBN13: 9781522583622 | ISBN10: 1522583629 | EISBN13: 9781522583639
DOI: 10.4018/978-1-5225-8362-2.ch004
Cite Chapter

MLA

Amankwatia, Tonya B. "Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality." Crowdsourcing: Concepts, Methodologies, Tools, and Applications, edited by Information Resources Management Association, IGI Global, 2019, pp. 53-74. https://doi.org/10.4018/978-1-5225-8362-2.ch004

APA

Amankwatia, T. B. (2019). Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality. In I. Management Association (Ed.), Crowdsourcing: Concepts, Methodologies, Tools, and Applications (pp. 53-74). IGI Global. https://doi.org/10.4018/978-1-5225-8362-2.ch004

Chicago

Amankwatia, Tonya B. "Massive Open Program Evaluation: Crowdsourcing's Potential to Improve E-Learning Quality." In Crowdsourcing: Concepts, Methodologies, Tools, and Applications, edited by Information Resources Management Association, 53-74. Hershey, PA: IGI Global, 2019. https://doi.org/10.4018/978-1-5225-8362-2.ch004


Abstract

Given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly in order to improve quality. Are there new opportunities to expand user and stakeholder input, or to involve others in e-learning program evaluation? This chapter asks researchers and practitioners to rethink existing paradigms and methods for program evaluation. Crowdsourced input may help leaders and stakeholders address persistent evaluation challenges and improve e-learning quality, especially in Massive Open Online Courses (MOOCs). After reviewing selected evaluation paradigms, models, and methods, the chapter proposes a possible role for crowdsourced input and examines crowd definition, affordances, and problems to begin building a taxonomic framework with possible applications for e-learning. The goal is to provide a reference for advancing the discussion and examination of crowdsourced input.
