Summative Assessment in an IEP: A Model of Teacher Collaboration

Benjamin J. White (Saint Michael's College, USA) and Sumeeta Patnaik (Marshall University, USA)
DOI: 10.4018/978-1-5225-6986-2.ch015
Abstract

The purpose of this chapter is to share an assessment model built specifically upon teacher collaboration and, more broadly, to encourage readers to consider the power of collaboration within an intensive English program (IEP). After examining traditional assessment challenges faced by IEPs, the chapter presents a collaborative assessment model, the basic premise of which is that teachers of the same students across three core courses within the same IEP level work together to create a common midterm and final exam. The model is examined in light of the five assessment principles of validity, reliability, practicality, authenticity, and washback. Finally, benefits and challenges of teacher collaboration are considered from the perspective of program administrators.
Chapter Preview
Background: IEP Assessment Challenges

University-based IEPs are tasked with the mission of developing students’ L2 English proficiency, often with the ultimate goal of preparing students for study at an English-medium university or college. Not only are IEPs responsible for designing curricula across different levels of English proficiency, but they are also required to observe and measure student attainment at each level. Students expect to make significant gains in their English ability as they progress through program levels. In its official document on standards, the Commission on English Language Program Accreditation (CEA) states that programs need to “represent significant progress, accomplishment or proficiency gain for a student moving through the curriculum” (2016, p. 10).

To confirm that students are, in fact, making progress, IEPs must assess them on a regular basis. In addition to continual in-class assessment by the teacher, this often involves significant summative assessments at the midpoint and end of terms. While there are a number of ways to assess students across levels, there are also a variety of challenges that programs face in their efforts at successful student assessment. IEPs are typically structured so that students take multiple skills-based or topics-based courses at particular levels. These levels may extend from beginner to advanced proficiency. Each course will likely have its own course objectives as well as specific student learning outcomes (SLOs). A major program challenge is to assess, for each course, whether its SLOs have been met by each student through valid and reliable means.

An initial question is how to conceptualize end-of-term assessments. Are these proficiency tests that confirm students are ready for the next level in the IEP? Or are they achievement tests that measure whether or not students learned what was covered in class? Note that the CEA quote above mentions both “proficiency gain” and “accomplishment,” seemingly remaining agnostic on the matter of proficiency versus achievement.

Following the first interpretation, programs sometimes ask teachers to administer some form of standardized test, which may sit collecting dust in a filing cabinet over the course of the term and may have limited connection to what students actually worked on with their teachers during class time. Such a breakdown in the alignment of instruction and assessment leads to some important questions. Are teachers doing their best to prepare students for end-of-term testing? Are the standardized tests actually measuring student achievement of course SLOs? Do students, having perceived a mismatch between in-class activities and midterm/final exams, question the validity of those exams? Perhaps more seriously, what is the validation process in such a scenario? Teachers have much to offer in ensuring that a test is valid for their students (Hughes, 2003; Norris, 2008; Winke, 2011). Why aren’t administrators tapping into teachers’ experience, insights, and expertise with the program’s curriculum and students?
