Validation of E-Learning Courses in Computer Science and Humanities: A Matter of Context
Robert S. Friedman (New Jersey Institute of Technology, USA), Fadi P. Deek (New Jersey Institute of Technology, USA) and Norbert Elliot (New Jersey Institute of Technology, USA)
Copyright: © 2009
In order to offer a unified framework for the empirical assessment of e-learning (EL), this chapter presents findings from three studies conducted at a comprehensive technological university. The first, an archival study, centers on student performance in undergraduate computer science and humanities courses. The second study, a survey administered three times within EL classes, investigates the roles of learning style, general expectation, and interaction in student performance. The third study investigates student performance on computer-mediated information literacy tasks. Taken together, these three studies—focusing on archival, process, and performance-based techniques—suggest that a comprehensive assessment model has the potential to yield a depth of knowledge allowing stakeholders to make informed decisions on the complexities of asynchronous learning in post-secondary education.
Archival Study: Two Disciplines
Research on rates of student success in EL classes tends to be drawn from single samples. Yet a comparison of two disciplines—one invested in responding to empirically-oriented tasks in a limited response format (answering multiple-choice questions or writing computer code), the other invested in responding to verbally-oriented tasks in an open response format (participating in online discussions or submitting essays)—seemed ideal for revealing more about the specifics of EL across disciplinary frameworks (Elliot, Friedman, & Briller, 2005).