Coding Online Learner Image and Multimedia Submissions for Assignment Fulfillment: An Early Assessment Rubric


DOI: 10.4018/978-1-5225-2679-7.ch006

Abstract

In K12 and higher education, instructors have been eliciting student work in a variety of digital forms: text, audio, image, slideshow, video, and various combinations thereof. These files are uploaded to learning management systems, online training systems, online research suites (online survey systems), and learning applications; they are shared on content-sharing social media sites (with varying degrees of public access). Some are created on presentation sites, which enable the collation of the various media formats into coherent wholes (whether as voicethreads, slideshows, or digital publications). While assignments are becoming richer, in many cases the assessment tools for the work have not changed to accommodate the changes in modality. This chapter provides a brief review of the literature, followed by a decomposition of how to create assessment rubrics for a variety of assignments involving submitted imagery and multimedia. The proposed draft assessment rubric provides a starting point for instructors, who are encouraged to define customizable parts of the rubric and to add unique requirements based on their local contexts and the requirements of the respective assignments.

Introduction

Among the raft of new features in learning management systems and online survey systems is a "file upload" capability, which enables individuals to upload digital imagery (screenshots, photographs, drawings, maps, data visualizations, mash-ups, and others), audio, video, slideshows, small simulations, and other digital file types. In K12 and higher education, instructors are requiring learners to submit multimodal work—well beyond "old school" writing. Today, there are assignments in which learners are required to participate in events or fieldtrips and to take a "selfie" to prove their participation or to highlight some aspect of their experience or learning. Students are required to create data tables and data visualizations, often accompanied by reproducible research code—so that others may rerun the data analytics and verify the results. Learners explore 3D immersive virtual worlds to experience simulations and to interact with other human-embodied avatars; they are asked to create screenshots and machine-cinema (machinima) to capture some of their virtual experiences. The use of Second Life in higher education brings complexity because the virtual space involves what Warburton (2009) calls the "physical layer," the "communication layer," and the "status layer" (p. 420). Learners map both real physical spaces and virtual ones and represent their work in diagrams. Budding strategists draw out game trees to convey both strategy and tactics, with the power coming as much from the ideas as from the depiction. While the submitted works may seem simple, many of the assigned projects involve a fair amount of technological savvy and a half-dozen multimedia authoring tools and social media systems.

While multimedia assignment submissions involve more multisensory dimensionality (including auditory, visual, and symbolic reasoning channels), 4th dimensionality (time added to 3D; sequentiality), interactivity, and other features, the assessment instruments for the multimedia work are oftentimes a rehash of the assessment for a text-based assignment. A text-based proxy assessment is often not an effective fit for multimedia-based work, and it leaves many features of that work unaddressed and unassessed. In other cases, instructors do not define their terms of assessment. One version of this is a pass-fail approach, in which a pass is given as long as a person has met the basic requirements of the assignment. The thinking sometimes is that the required modality or form is itself so demanding that completion of the work is grounds for full credit. While instructors may view the submitted work with a sophisticated eye—understanding the likely inputs that learners made in order to create particular works, and drawing conclusions about learner competence, knowledge, creativity, and intellect—much of what is observed is not codified in an assessment instrument. Said another way, instructors often take an intuitive "know it when I see it" or "wing-it" approach. Another variation is to allow the learners themselves to assess their own work, or to have peers (or professionals in the field) assess it.

A better way may be to create an assessment instrument that both guides the creation of the respective works and then assesses them against shared standards. Having a defined approach benefits learners, who then know the standards they are building toward. Instructors themselves gain a fairer and more legally defensible grading method. In this chapter, a rubric is the selected assessment tool format; a rubric is a table that lists the criteria for a work in its row headers and the levels of assessment in its column headers.
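The table structure just described—criteria as rows, assessment levels as columns—can be made concrete in a small data-structure sketch. The criteria names, level names, and point values below are hypothetical placeholders (they are not drawn from the chapter's proposed rubric), shown in Python only to illustrate how a defined rubric turns intuitive judgments into an explicit, repeatable scoring scheme:

```python
# A minimal rubric sketch: rows are criteria for the work,
# columns are assessment levels with point values.
# All names and values here are illustrative assumptions.
RUBRIC = {
    "criteria": [                       # row headers
        "Technical quality of the media",
        "Relevance to the assignment prompt",
        "Evidence of required participation",
    ],
    "levels": {                         # column headers -> points
        "Exemplary": 3,
        "Proficient": 2,
        "Developing": 1,
        "Missing": 0,
    },
}

def score_submission(ratings):
    """Total the points for one rating per criterion.

    `ratings` maps each criterion name to a level name
    drawn from RUBRIC["levels"].
    """
    if set(ratings) != set(RUBRIC["criteria"]):
        raise ValueError("every criterion needs exactly one rating")
    return sum(RUBRIC["levels"][level] for level in ratings.values())
```

An instructor (or a peer reviewer) would then record one level per criterion and total the points, e.g. `score_submission({c: "Proficient" for c in RUBRIC["criteria"]})` yields 6 of a possible 9; the same structure can be shared with learners up front as the standard they are building toward.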

Rubrics may be focused on the particular assigned work, but they may also be built on competence hierarchies. In this latter case, the rubrics are used to assess the learner's apparent capabilities and knowledge ("grasp degree") based on his or her work. This focus on competence is a step beyond assessing the work itself to assessing the author behind the work, and it rests on inductive logic and inference: assessors infer the creator's competence from the work submitted. As part of the feedback to learners, instructors will need to achieve three steps: "1) Identification and definition of competences. 2) Design of course activities that conforms the assessment. 3) Make competences and the level acquisition, as clear as possible for learners" (Mor, Guerrero-Roldán, Hettiarachchi, & Huertas, 2014, p. 85).
