Cognitive vs. Social Constructivist Learning for Research and Training on the Angoff Method

Ifeoma Chika Iyioke
DOI: 10.4018/978-1-6684-3881-7.ch006

Abstract

This chapter aims to revitalize the use of the Angoff method for measuring students' performance in educational contexts by offering guidance on which constructivist learning perspective is more appropriate for training K-12 teachers. Specifically, it compares the cognitive and social constructivist theories, as embodied in the Completely Structured Training (CST) and Partially Structured Training (PST) designs, for conducting training on the Angoff method. The analysis argues that the cognitive constructivist perspective of the CST, grounded in a breakdown of the cognitive strategies underlying Angoff judgments, is more effective than the social constructivist perspective of the PST, which emphasizes interpersonal interactions. The chapter concludes with recommendations for empirical comparisons of the quality of judgments produced under the CST and PST models.

Background

The Angoff method bears the name of William Angoff, who first suggested it. As originally described in a book chapter (Angoff, 1971), the task for measuring students' performance is to state the probability that a group of minimally competent candidates (MCCs) would answer each item on a test correctly. The MCCs are students who barely possess the levels of knowledge and skill competencies operationally defined by performance level descriptors (PLDs) and measured by tests. As proposed, the Angoff method is well-suited for tests comprising multiple-choice questions. In measurement terminology, the probability judgments represent "item difficulty." The sum of the item difficulties for the MCCs represents the minimum test score (i.e., the cut score) that meets the PLDs.
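To make the arithmetic concrete, the sketch below works through a minimal version of the computation on hypothetical ratings from three judges for a five-item test. The numbers and variable names are invented for illustration and are not drawn from the chapter; in practice, judgments are typically collected over multiple rounds with feedback.

```python
# Illustrative Angoff cut-score computation (hypothetical data).
# Each judge states, for every item, the probability that a minimally
# competent candidate (MCC) would answer that item correctly.

# Rows = judges, columns = test items (probabilities between 0 and 1).
ratings = [
    [0.60, 0.75, 0.40, 0.90, 0.55],  # Judge 1
    [0.65, 0.70, 0.35, 0.85, 0.50],  # Judge 2
    [0.55, 0.80, 0.45, 0.95, 0.60],  # Judge 3
]

n_judges = len(ratings)
n_items = len(ratings[0])

# Average the judges' probability estimates for each item to obtain
# the estimated item difficulty for the MCC group.
item_difficulty = [
    sum(judge[i] for judge in ratings) / n_judges for i in range(n_items)
]

# The cut score is the sum of the per-item estimates: the expected
# number-correct score of an MCC on this test.
cut_score = sum(item_difficulty)

print(f"Estimated item difficulties: {item_difficulty}")
print(f"Cut score: {cut_score:.2f} out of {n_items}")
# -> Cut score: 3.20 out of 5
```

An examinee scoring at or above this cut score would be classified as meeting the performance standard defined by the PLDs.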

The Angoff method has been, and remains, one of the most popular and most researched options for measuring students' performance relative to standards (Plake & Cizek, 2012). It has psychometric appeal because it presents a realistic probability formulation of test-taking events, such as correct responses by examinees (Impara & Plake, 1997). More recent appraisals suggest that the method offers the best balance between technical adequacy and practicality (e.g., Plake & Cizek, 2012; Schnabel, 2018). However, the frequency with which it is used in educational settings has waned (Plake & Cizek, 2012). In fact, the bookmark method, which involves choosing among items ordered by difficulty rather than judging item difficulty directly, has displaced it for the National Assessment of Educational Progress (NAEP) (Schnabel, 2018).

A primary criticism of the method is the difficulty of the participants' task. In the 1990s, a number of studies questioned the method's validity, citing the inability of participants, even teachers who interact with and are familiar with the student populations, to perform the task (Plake & Cizek, 2012). These studies unanimously concluded that the method has limited utility because participants struggle to make the required judgments (e.g., Impara & Plake, 1997, 1998; Shepard, Glaser, Linn, & Bohrnstedt, 1993). For instance, Impara and Plake (1997) reported that although teachers could estimate the relative difficulty of test items very well, they were unable to estimate absolute difficulty accurately. Impara and Plake (1998) found that teachers were more accurate in estimating the performance of the total group than of the MCCs, but in neither case was their accuracy high. Shepard et al.'s (1993) study with the NAEP reported similar inconsistencies in item difficulty judgments, especially for easy and difficult items. As a result, they concluded that the method was fundamentally flawed.
