Assessing the Composition Program on Our Own Terms

Sonya Borton, Alanna Frost, Kate Warrington
DOI: 10.4018/978-1-60566-667-9.ch010

Abstract

As Jacqueline Jones Royster articulated at the 2006 Conference on College Composition and Communication, English departments are already assessing themselves and should resist suggestions by the Spellings Commission on the Future of Higher Education that a standardized method of assessing students and programs in higher education is needed. With the University of Louisville due to be reviewed by the Southern Association of Colleges and Schools (SACS) in the fall of 2006, the First-Year Composition program chose to conduct an internal assessment in the fall of 2004. This chapter details that assessment, including a comprehensive analysis of its rationale, theoretical foundations, methodologies, and results. It also articulates the difficulties of such a large-scale assessment as well as the uniquely local challenges faced during the process.
Chapter Preview

Introduction

“Treat program development, including formal assessment, as an adventurous space, open to explore” (Haswell, 2001, p. 188).

The Spellings Commission report on higher education, A Test of Leadership: Charting the Future of U.S. Higher Education (2006), has caused much debate and concern among postsecondary educators. One of educators' primary concerns is the report's call for widespread standardized assessment of institutions of higher education in order to encourage "accountability." Specifically, the report recommends the development of a database that houses information comparing the performance, generally based upon standardized testing, of diverse groups of students across institutions of higher learning. According to the report, this collection of data will allow "meaningful interstate comparison of student learning" so that "state policymakers can [. . .] identify shortcomings as well as best practices" (p. 23). Brian Huot (2007), in his critique of the Spellings Commission report, responds to this recommendation and its goals, pointing out, "There appears to be an assumption that all students can learn equally well at all institutions, when in fact it has become increasingly apparent that educational success or failure is about whether or not students can establish relevant and productive learning relationships within a specific educational environment" (p. 519). According to Huot, as well as numerous other scholars (McLeod, Horn, and Haswell, 2005; Whithaus, 2005; Contreras-McGavin and Kezar, 2007), these kinds of standardized assessments provide little useful information about situated student learning. Rather, assessments that take into consideration the local context and culture of an institution yield significantly more information that can be used to reform higher education in a meaningful way while addressing specific student needs.

Standardized methods of assessment cannot adequately measure the abilities of the diverse student populations at every institution of higher education. Nevertheless, the Spellings Commission report relies on a standardized instrument, the National Assessment of Adult Literacy (NAAL), to claim that "the percentage of college graduates proficient in prose literacy has actually declined from 40 to 31 percent in the past decade" (p. 3). As Huot notes, these results may indicate that "there is a different population of students entering our doors that we must become more able to teach" (p. 518). However, as he also explains, a more appropriate response might be that we "need to find better ways of testing what people can really do, rather than creating tests that ensure their poor performance and the condemnation of the institutions charged with educating them" (p. 518). Focusing on student abilities is a more useful way of establishing benchmarks within a specific academic program, institution, or higher education system. The alternative is to assess student learning with a narrow measure of skills valued by an outside testing authority that has little familiarity with the institution being assessed and, in the case of the Spellings Commission report, possibly little familiarity with higher education instruction in general.

Key Terms in this Chapter

Triangulation: The use of a variety of methodological instruments to gain separate perspectives on an issue.

Inter-Rater Reliability: The degree to which different readers assign the same score to the same document; a brief computational illustration follows this list of terms.

Norming: A process encouraging inter-rater reliability in which readers score documents that have been pre-scored in order to move toward a shared understanding of the scoring criteria and how those criteria are represented in the documents.

Rubric: A guide outlining the scoring criteria for a specific set of documents.

E-Portfolios: A collection of student work compiled in an electronic environment that documents students’ progress toward meeting specific standards.

Holistic Scoring: A scoring system in which readers score documents based upon their impression of each document as a whole rather than upon its separate elements.

Portfolio: A collection of student work that documents students’ progress toward meeting specific standards.
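
Because inter-rater reliability is the one term above with a standard quantitative treatment, a minimal illustration may help. The Python sketch below is offered purely as an illustration, not as part of the chapter's own methodology: it computes two common agreement measures for a pair of readers scoring the same documents, simple percent agreement and Cohen's kappa, which corrects agreement for chance. All scores and the 1-4 scale are invented; they are not data from the Louisville assessment.

# A minimal sketch of quantifying inter-rater reliability for holistically
# scored portfolios. Scores and scale are invented for illustration only.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    # Fraction of documents to which both readers gave the same score.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement: (observed - expected) / (1 - expected).
    n = len(rater_a)
    observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each reader assigned scores independently,
    # at the rates actually observed for that reader.
    expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical holistic scores (1-4 scale) from two readers on ten portfolios.
reader_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
reader_2 = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

print(f"Percent agreement: {percent_agreement(reader_1, reader_2):.0%}")
print(f"Cohen's kappa: {cohens_kappa(reader_1, reader_2):.2f}")

On these hypothetical scores the readers agree on eight of ten portfolios (80 percent), while agreement expected by chance alone is 30 percent, yielding a kappa of about 0.71.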
