Expertiza: Managing Feedback in Collaborative Learning

Edward F. Gehringer (North Carolina State University, USA)
DOI: 10.4018/978-1-60566-786-7.ch005

Educators and accrediting agencies show a growing awareness that students learn better when they work in groups, and on projects more similar to those encountered on the job, where their contributions are used by others to add value to the operations of the enterprise. However, assessing project work is very time consuming; the only scalable way to accomplish it is to have students assist in the assessment. Expertiza is a system for managing the many kinds of communication involved in assessment: double-blind exchanges between authors and reviewers, assessment of teammate contributions, evaluations by course staff, and surveys of students to assess the assignment and the peer-review process. This chapter places Expertiza in context among other electronic peer-review systems, algorithms, and methodologies. It relates the results of three experiments showing that, through the peer-review process, students are capable of producing work that can be used as teaching materials in later classes.
Chapter Preview


Summative assessment, in the form of exams and standardized tests, has long been a mainstay of our educational system. But the shortcomings of basing student evaluations on “high-stakes” testing are well known. Among other things, it disadvantages students (e.g., nontraditional students) who lack self-confidence, and it focuses students’ attention on passing tests, rather than honing their skills in communication and collaboration—which will be much more important to them on the job.

Nonetheless, in most American college courses, the majority of the grade is determined by exams. The situation is even more extreme in other parts of the world, where the entire grade is often based on exams, with homework assigned but not counted in assessment (Gehringer, 2008a). Why do university faculty continue to rely on such a flawed system? By and large, it is a question of resources. The effort required to assess performance on hour-long exams, where each student answers the same questions, is much less than that needed to determine project grades, where each student (or team of students) comes up with a different solution after spending many hours on the task. It is also more difficult to grade fairly when there are many correct, and partially correct, solutions. Certain kinds of exam questions can even be graded by computer, which is not possible for projects.

An important challenge, then, is to facilitate the grading of project work. A concomitant need is to provide formative assessment—feedback that helps students improve their work, rather than merely assigning a score. Formative assessment has been shown to “level the playing field” among all kinds of students. In a survey of 250 research papers on classroom assessment, Black and Wiliam (1998) concluded:

While formative assessment can help all pupils, it yields particularly good results with low achievers by concentrating on specific problems with their work and giving them a clear understanding of what is wrong and how to put it right. Pupils can accept and work with such messages, provided that they are not clouded by overtones about ability, competition, and comparison with others.

The only scalable way to provide this feedback is to get peers involved. If each student is asked to assess a few (say, one to five) other students or student teams, the task requires a reasonable amount of effort, and that effort does not increase as the class gets larger. Thus, each student gets the same amount of feedback whether the class consists of 10 students or 100.
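The constant-workload property can be made concrete with a small sketch. This is an illustrative assignment scheme, not Expertiza's actual algorithm; the function name and the rotation rule are assumptions.

```python
def assign_reviews(students, k):
    """Map each student to k other students' submissions to review.

    A simple rotation scheme: reviewer i is assigned submissions
    i+1 .. i+k (mod n), so nobody reviews their own work and every
    submission receives exactly k reviews. The per-student workload
    stays constant no matter how large the class grows.
    """
    n = len(students)
    if not 0 < k < n:
        raise ValueError("need 0 < k < class size")
    return {students[i]: [students[(i + j) % n] for j in range(1, k + 1)]
            for i in range(n)}
```

In practice the roster would be shuffled first, so that reviewers cannot infer whose work they received from their position in the list.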

Team projects are an important form of collaboration, yet few if any other online peer-review tools support them. At a minimum, members of a team must share a single submission area, so that any team member can modify the group’s submission. After the project is finished, team members are asked to evaluate each other’s contributions, and the instructor can adjust each member’s grade relative to the team’s grade based on those teammate evaluations. While students submit as teams, they review as individuals. This has important benefits, including increasing the amount of feedback each submission receives and reducing the need for teams to meet.
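The chapter does not specify how teammate evaluations feed into individual grades. One common scheme, sketched here as an assumption and not necessarily what Expertiza does, scales the team grade by each member's average peer rating relative to the team mean, with a cap on the upward adjustment:

```python
def individual_grades(team_grade, peer_ratings, cap=1.05):
    """Scale the team grade by each member's average teammate rating
    relative to the team mean, capping the bonus so a strong rating
    cannot inflate a grade without bound.

    peer_ratings: {member: average rating received from teammates}
    """
    mean = sum(peer_ratings.values()) / len(peer_ratings)
    return {member: min(team_grade * rating / mean, team_grade * cap)
            for member, rating in peer_ratings.items()}
```

A member rated below the team average thus earns proportionally less than the team grade, while a highly rated member earns at most a small bonus.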

Our Expertiza system (Gehringer et al. 2007) supports several kinds of evaluation.

Instructors and TAs (“tutors”), as well as other students, can review students’ work. The instructor might, for example, do the first review, to show student reviewers what to watch for. Authors can give feedback to their reviewers at any time. This too helps to mitigate the problem of “rogue” reviews, since it gives author and reviewer a double-blind channel for resolving conflicts. Reviewers can update their reviews after communicating with authors. In fact, authors are also allowed to update their submissions at any time, even during a review period; we have developed rules for canceling review scores when a resubmission makes them inapplicable.
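The cancellation rules themselves are not given in this excerpt. A minimal sketch of one plausible rule, with names and the timestamp representation as assumptions, would drop any review completed before the most recent resubmission:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    score: float
    completed_at: int  # e.g., a Unix timestamp

def applicable_reviews(reviews, last_resubmitted_at):
    """Keep only reviews finished at or after the most recent
    resubmission; earlier reviews describe an outdated version of
    the work, so their scores are canceled."""
    return [r for r in reviews if r.completed_at >= last_resubmitted_at]
```

A real policy would likely be subtler (e.g., inviting the reviewer to re-review rather than silently discarding the score), but the core test is the same comparison of timestamps.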

Expertiza is designed to allow the production of reusable learning resources through peer review of student work. Large projects (e.g., devising exercises or case studies for each chapter of a textbook, creating a glossary of terms used in a course, annotating a semester’s worth of lecture notes with hyperlinks to related material) can be parceled out to individuals or teams, each of which signs up for a chunk of the project by selecting from a set of tasks listed on a Web page. Only a limited number of students or teams may select each choice, which lets the instructor ensure that the chunks are covered by multiple creators without any chunk being oversubscribed.
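The capacity-limited signup can be sketched as follows; the class and method names are illustrative, not Expertiza's actual interface:

```python
class SignupSheet:
    """Topic signup with a fixed number of slots per topic."""

    def __init__(self, topics, slots_per_topic):
        self.slots = {topic: slots_per_topic for topic in topics}
        self.choices = {}  # team -> chosen topic

    def sign_up(self, team, topic):
        """Record the choice if the topic still has an open slot."""
        if self.slots.get(topic, 0) <= 0:
            return False  # topic unknown or already full
        self.slots[topic] -= 1
        self.choices[team] = topic
        return True
```

Once a topic's slots are exhausted, later teams are pushed toward the remaining choices, which is what spreads creators across all the chunks of the project.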
