Developing Best Practices for Value Added Research in a Political Context

Valerie Osland Paton (Texas Tech University, USA), Gerry Dizinno (University of Texas at San Antonio, USA) and Roy Mathew (University of Texas at El Paso, USA)
Copyright: © 2012 | Pages: 7
DOI: 10.4018/978-1-60960-857-6.ch023

Abstract

During the period of 1970-2010 in American higher education, the burden of funding shifted in proportion from the federal government to state governments and, ultimately, to students and parents. Not surprisingly, during this same period, the relationship between the student and the institution shifted from beneficiary of federal and state support to payer and consumer of education. Given this change from higher education as a public good to an individual good, there has been an increasing demand for accountability and transparency from parents and students, as well as from legislators and governmental bodies.
Chapter Preview

Background

In 2006, the Spellings Commission articulated this shifting focus from the educational experience to the educational product:

In this consumer-driven environment, students increasingly care little about the distinctions that sometimes preoccupy the academic establishment, from whether a college has for-profit or nonprofit status to whether its classes are offered online or in brick-and-mortar buildings. Instead, they care—as we do—about results (Spellings Commission, 2006, p. 1).

In light of this intense national conversation about the role of higher education, this case study examines methods employed to demonstrate the measurement of “institutional effect” upon students and proposes best practices for institutional researchers.

The national dialogue summarized above has played out in numerous states. This case study draws on the experience of the authors in the state of Texas, but other institutional researchers will have parallel experiences to describe, as the Spellings Commission, state governments and legislative bodies, regional accrediting bodies, and special interest groups have influenced statutes, regulations, and policy across the U.S.

In January 2004, Texas Governor Rick Perry issued Executive Order RP 31, which required that the Texas Higher Education Coordinating Board and all Texas public institutions of higher education work together to develop an “accountability system” that would provide “the information necessary to determine the effectiveness and quality of the education students receive at individual institutions” and “the basis to evaluate the institutions’ use of state resources” (http://www.txhighereddata.org/Interactive/Accountability/History.cfm, accessed May 2, 2010). The data collected for the Texas Accountability System were grouped into key accountability measures in five categories: participation, success, excellence, research, and institutional effectiveness and efficiencies. Many of the measures aligned with the State’s plan for higher education, Closing the Gaps by 2015. Institutional researchers at Texas public institutions began the labor-intensive effort of aligning data collection with the new measures and definitions, and annual reporting began in 2005, with the results posted to the publicly accessible Texas Accountability System website.

In 2006, the release of the Spellings Commission Report escalated the existing dialogue across the nation regarding how to measure the institutional effect upon students. In response to the possibility of state-mandated assessment of institutional effect, the Association of Public and Land-grant Universities (APLU) implemented the Voluntary System of Accountability (VSA) in 2007. The VSA College Portrait offers institutions an environment to document (1) consumer information, (2) student experiences and perceptions, and (3) student learning outcomes (Voluntary System, 2010). Institutional research offices across the nation began to participate in the VSA; in 2010, nearly 300 institutions participated in the College Portrait.

To participate, institutions provide locally developed student learning outcomes data and/or administer one of three VSA-approved, commercially developed instruments to measure institutional effect: ETS’s Measure of Academic Proficiency and Progress (MAPP), ACT’s Collegiate Assessment of Academic Proficiency (CAAP), or the Council for Aid to Education’s Collegiate Learning Assessment (CLA). Institutions can elect to use these instruments with a cohort of 100 “traditional first-time full-time fall freshmen” and 100 “traditional seniors” who are expected to graduate within six months of taking one of the assessments (Voluntary System, 2010). Another method, selected by few institutions, is the longitudinal sample, which includes the random assignment of “traditional first-time full-time fall freshmen” to a cohort that is assessed at entrance with a VSA-approved instrument and then tracked over four years. This methodology includes an “at exit” assessment, using the same VSA-approved instrument to gather data from the cohort within six months of their expected graduation. Unfortunately, the largest VSA-approved instrument, the Collegiate Learning Assessment, no longer supported this methodology as of 2010.
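
To make the cohort designs above concrete, the sketch below illustrates, in Python, how an institutional research office might summarize a simple cross-sectional comparison of freshman and senior cohort scores on one of these instruments. The function name, the gain-score calculation, and the effect-size formula are illustrative assumptions for this discussion; they are not the scoring methodology prescribed by the VSA or by any instrument vendor.

# Illustrative sketch only: a simple cross-sectional "value added" summary.
# The cohort sizes (100 freshmen, 100 seniors) mirror the VSA design described
# above; the gain-score and effect-size calculations are assumptions, not the
# VSA's or any vendor's prescribed methodology.
import math
from statistics import mean, stdev

def cross_sectional_gain(freshman_scores, senior_scores):
    """Return the senior-minus-freshman mean difference and a pooled
    standardized effect size for two independent cohorts."""
    raw_gain = mean(senior_scores) - mean(freshman_scores)
    pooled_sd = math.sqrt((stdev(freshman_scores) ** 2 +
                           stdev(senior_scores) ** 2) / 2)
    return raw_gain, raw_gain / pooled_sd

# Hypothetical usage with two cohorts of 100 students each:
# freshman_scores = [...]  # first-time full-time fall freshmen at entrance
# senior_scores = [...]    # seniors within six months of expected graduation
# raw_gain, effect_size = cross_sectional_gain(freshman_scores, senior_scores)

In the longitudinal design described above, an analogous calculation would instead be applied to the entrance and exit scores of the same tracked cohort, rather than to two independent cohorts.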
