There is much in the assessment literature about the necessity of developing a culture of assessment, and mandates from accrediting bodies include language related to a culture of continuous improvement. However, much of this literature discusses administration and cultural hierarchies. Because faculty must be fully engaged in the assessment process for it to be successful and to improve teaching and learning, the development of an environment for assessment must be faculty-focused. This chapter suggests five elements to consider: the structure of assessment, the qualifications of those in assessment, the focus of assessment conversations, faculty development, and linkages with other areas within the institution.
As Banta, Lund, Black, and Oblander (1996) have noted, the nine principles of good practice for the assessment of student outcomes provide a guide for what works in assessment. They further note that there should also be a tenth principle, “a composite encompassing several distinct, straightforward characteristics of good practice” (p. 62). This principle, as they put it, asserts “assessment is most effective when undertaken in an environment that is receptive, supportive, and enabling” (p. 62). Developing such an environment requires the development of a new culture, one that includes core beliefs, values, behavior norms, and infrastructure. The purpose of this chapter is to provide a guide to the effective development of such a culture, illustrated through the experiences of the authors working at two institutions and serving as consultants to several others.
Assessment of student outcomes has become part of what many universities do, not because they are inherently interested in improving teaching and learning, but because they have been prompted by external forces, such as the federal government, state governments, and regional and professional accrediting bodies. When faced with a choice between improvement and accountability (see Aper, Culver, & Hinkle, 1990), institutions have typically focused on accountability, given the perceived importance of accreditation (Stufflebeam & Shinkfield, 2007). Faculty recognize this schism and are skeptical of what new tasks they will have to take on for their institution – new tasks that must be completed at a time when higher education faces increasing numbers of students, larger classes, and flat funding. Further, as Miller (1988) pointed out twenty years ago, there are concerns that assessment brings others into the classroom to look over the shoulders of individual faculty, threatening their autonomy. Professors indicate concern about academic freedom and believe that “experts” outside the academy are questioning their professional judgment. It is no wonder that, as Lee Shulman (2007) put it, “academics, in the face of the growing volume of calls for accountability, have developed a sense of higher education as victim, swept away by a powerful current over which we can exercise little influence” (p. 25). Shulman goes on to point out that faculty most typically either resist completely or adopt a “stance of minimal compliance” (p. 26).
In fact, the assumption of assessment professionals, most university administrators, and testing companies is that faculty have difficulty buying into the process. As noted by ACT, in its materials for the Collegiate Assessment of Academic Proficiency (CAAP), “most colleges and universities around the country have difficulty motivating their faculty and staff to engage in regular, systematic assessment activities . . . Rather than trying to get faculty and staff to engage in assessment in an environment that has traditionally not supported it to the extent that it needs to be done today, colleges and universities must consider what steps to take in order to create a culture of evidence and continuous improvement on their campuses” (ACT, 2007, p. 16). The theme seems to be that, if it weren’t for uncooperative faculty, assessment of student learning would be easily facilitated.
In reality, there may have been good reasons for faculty to be uncooperative and to hold a negative view of the assessment process. Often, assessment has been presented to them as just another administrative hoop to jump through in order to meet regional accreditation requirements or state mandates, many of which seem removed from what happens in their classrooms. These requirements are often reported in a rigid, prescribed structure, which may prevent faculty from seeing how assessment results can be used by their program in meaningful ways. Also, based on past experience, faculty may believe that assessment is yet another flurry of activity that results in a notebook of information placed on a shelf in someone’s office and forgotten, at least until six to eight years later when the next cycle of requirements begins.
Key Terms in this Chapter
Functional Inadequacy: A term used by Jenkins (2007) to describe faculty members’ expressed lack of conceptual understanding of assessment and their resulting fear of the unknown to come.
Value-Added: Assessment measures are intended to gather evidence regarding progress as a result of a student’s institutional experience. The identification of change (or “value added”) can be achieved through longitudinal designs (one group measured toward the beginning and the end of the educational experience), cross-sectional designs (the measurement of a beginning group compared to an ending group), or a residual analysis (comparing end of program scores with expected end of program scores) (State Council of Higher Education for Virginia, 2007).
Principles of Good Practice in Assessment: A set of nine principles to guide effective assessment practice patterned on Chickering and Gamson’s “Seven Principles of Good Practice in Undergraduate Education” (1987) and developed by Assessment Forum members through the sponsorship of FIPSE and AAHE (Banta, Lund, Black, & Oblander, 1996).
Organizational Culture: One of the elements that defines an organization (the other being the organization’s structure). A key aspect of organizational culture is an openness to change (Owen, 2005).
Accreditation: A process intended to answer two broad questions: are institutions and their programs (and personnel) meeting minimum standards, and how can their performance be improved (Stufflebeam & Shinkfield, 2007).
Action Research: A methodology originated by Kurt Lewin in which researchers collect and analyze data with an eye to implementing change and improving their own practice (Chisholm & Elden, 1993).
ABET: Formerly known as the Accreditation Board for Engineering and Technology, the recognized accreditor for college and university programs in applied science, computing, engineering, and technology (www.abet.org).