Case Study: Material Additions, Ratings, and Comments in a Course Setting

Juha Leino
DOI: 10.4018/978-1-61350-489-5.ch011

Abstract

As recommender systems make inroads into e-learning, the new ecosystem places new challenges on them. This Chapter discusses the author’s experiences of adding recommender features to an additional-reading-materials listing page in an undergraduate-level course. The discussion is based on use-log and student questionnaire data. Students could both add materials to lecture readings and peer-evaluate the pertinence of the materials by rating and commenting on them. As part of the course requirements, students had to add one material and rate five. Overall, students perceived the system as useful and did not resent the compulsoriness. In addition, perceived social presence promoted social behavior in many students. However, many students rated materials without viewing them, undermining the reliability of the aggregated ratings. Consequently, while recommenders can enhance the e-learner experience, they need to be robust against students trying to get points without earning them.

Introduction

The Internet offers students access to a vast array of materials of varying quality, ranging from specialist columns, expert articles, and scientific papers to presentation slides and video interviews. In fact, a large majority of college/university students already use online resources to augment course materials and are ready to share them with fellow students (Hage & Aïmeur, 2008). Since students are already filtering a mass of potential materials down to the ones pertinent to themselves, the question is how to make use of all this labor and share its fruits with others.

The author of this Chapter teaches an undergraduate-level course on User-Centered Design (UCD). He decided to harness the collective intelligence and efforts of the students to collect additional materials complementing his own selection, and to let students determine the pertinence of those materials as a community. Consequently, the Lecture Slides and Reading Materials (LSRM) page on the course website was designed to allow students to add materials to lectures and to rate and comment on the added materials. The purpose was twofold: 1) to help students find high-quality materials to read, and 2) to encourage students to read more widely on UCD (in part by lowering the cost of finding pertinent materials with LSRM). The importance of the second point is underlined by the fact that information literacy, the ability to find, access, evaluate, and use information from various sources, is seen as a survival skill in the information-intensive age and as crucial for professional success after graduation (Kiliç-Çakmak, 2010).

Recommending thus took place at two levels: 1) adding materials one had found useful, and 2) rating and commenting on the added materials. At both levels, the purpose of the recommender was to help students find good items (Herlocker, Konstan, Terveen, & Riedl, 2004).

Adding a material to a lecture is an act of implicit recommending, as it implies prior selection from the multitude of materials available on the Internet: users filtering materials for other users. The second level, rating and commenting, allowed students to help each other decide which materials were worth reading, thus aggregating knowledge (Neumann, 2007). Again, this goes back to the idea of collective intelligence: a community can best determine which materials are interesting, useful, and at the correct level of complexity for its members. In a sense, the system enabled a kind of peer-review process. As accessing LSRM required logging in, the community was a closed one.

The fact that both recommending approaches were non-algorithmic makes them no less recommendations; non-algorithmic approaches such as ratings and comments/reviews are widely considered bona fide recommendations (Leino & Räihä, 2007; Neumann, 2007; Schafer, Konstan, & Riedl, 1999).

In 2009, students were required to add at least one material and give five ratings as part of the course requirements, owing to the lackluster levels of contribution in 2007 and 2008, when all contributing was voluntary. While compulsoriness raised the number of contributions to a level high enough that students found the system useful, it also brought out the problem of a significant number of students rating materials without reading them.

In this Chapter, we first briefly discuss recommenders in e-learning in general and then turn to the case study at hand. After describing LSRM, we look at the use statistics (collected automatically) and student views (collected with a questionnaire) to describe the use of the system and to evaluate its usefulness. As the design decisions behind LSRM and student perceptions of the rating and commenting features have already been discussed in Leino (2010), we only recap them briefly where relevant, focusing instead on previously unreported aspects. In particular, we look at using compulsoriness to elicit contributions in e-learning and discuss the problem of dishonest ratings.
