Using Interactivity to Improve Online Music Pedagogy for Undergraduate Non-Majors

Eric James Mosterd
Copyright © 2018 | Pages: 26
DOI: 10.4018/978-1-5225-5109-6.ch006

Abstract

This chapter explores two aspects of online music pedagogy. First, it covers the basic design considerations of offering an interactive music course online, including both the pedagogical and technological aspects of online music instruction and delivery. After establishing this framework, the chapter investigates the use of interactivity to improve the performance of undergraduate non-music-major students in the author's online music (jazz) appreciation course at the University of South Dakota (USD). The approach is evidence- and research-based, as described in greater detail in the chapter. Furthermore, the tools used to enhance the interactivity of the course, specifically the addition of game-based learning, are explored and connected to instructional research completed at USD.
Chapter Preview

Background

A consistent challenge in teaching music to non-music majors centers on the listening component of the class, in which students are asked to listen to, identify, and critically analyze a musical piece. This can lead to anxiety during listening exams, as well as lower performance compared to the written portion of the exam.

The problem can be exacerbated when the course is delivered online. Students become responsible for self-directed learning, and the relatively simple in-class process of delivering audio must be rethought for the modality, including how students will listen to, and assess, the excerpts.

To address these matters, online instructors at USD must complete quality assurance (QA) training based on the widely adopted and nationally recognized Quality Matters (QM) process. Courses are reviewed using a standard rubric that encapsulates the core of the QM rubric, augmented by institutional research and rubrics developed by institutions across the United States (SDBOR Online QA Rubric, 2015). While this process prepares instructors and courses for the basics of online pedagogy, it does not delve into subject-matter-specific pedagogy, nor does it cover the technical design, delivery considerations, and processes specific to the subject matter.

To measure the effectiveness of online course design, the review process, and instruction, USD uses the IDEA Student Ratings of Instruction (SRI), an instrument also used by a large number of other higher education institutions. Administered as a summative evaluation of all courses regardless of modality, it generates ratings of instructor excellence and course excellence. Historically, when comparing the institution's online and face-to-face courses, the course excellence ratings were similar; however, the online instructor excellence ratings were low, and the gap was widening.

IDEA utilizes a bell curve with the expectation that 10% of courses fall into each of the “much lower” and “much higher” converted score categories, 20% into each of the “lower” and “higher” categories, and 40% into the “similar” category, for both instructor and course excellence (IDEA, 2011). See Figure 1.

Figure 1. IDEA converted score categories (Source: USD CTL)

To translate this into a concrete target, USD’s Center for Teaching and Learning (CTL), which is responsible for training online instructors, set semester goals of having at least 70% of online courses rated “similar” or above for both instructor and course excellence; this threshold corresponds to the share of courses expected to land in those categories under IDEA’s normative distribution (40% + 20% + 10%). As stated, the course excellence ratings for the institution’s online courses are similar to those of its face-to-face courses. Therefore, the CTL focused its efforts on instructor excellence ratings.
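To make the arithmetic behind this goal concrete, the short Python sketch below tallies the share of courses rated “similar” or above and compares it against both the CTL’s 70% goal and IDEA’s normative expectation for that band. The category counts are hypothetical, introduced purely for illustration, and are not actual USD data.

```python
# Illustrative sketch only: the observed counts below are hypothetical, not USD data.
# Checks whether a semester's IDEA converted-score ratings meet the CTL goal of
# at least 70% of online courses rated "similar" or above.

# IDEA's normative expectation for converted-score categories (IDEA, 2011)
EXPECTED_SHARE = {
    "much lower": 0.10,
    "lower": 0.20,
    "similar": 0.40,
    "higher": 0.20,
    "much higher": 0.10,
}

# Hypothetical counts of online courses per category for one semester
observed = {
    "much lower": 9,
    "lower": 14,
    "similar": 32,
    "higher": 15,
    "much higher": 5,
}

GOAL = 0.70  # CTL target: share of courses rated "similar" or above
AT_OR_ABOVE = ("similar", "higher", "much higher")

total = sum(observed.values())
share = sum(observed[c] for c in AT_OR_ABOVE) / total
expected = sum(EXPECTED_SHARE[c] for c in AT_OR_ABOVE)  # 0.70 by construction

print(f"Rated 'similar' or above: {share:.0%} of {total} courses")
print(f"CTL goal ({GOAL:.0%}) met: {share >= GOAL}")
print(f"IDEA normative expectation for the same band: {expected:.0%}")
```

Because IDEA’s expected distribution already places 70% of courses at “similar” or above, the CTL goal in effect asks online courses to perform no worse than the national norm.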
