Applying Evaluation to Information Science and Technology

David Dwayne Williams
DOI: 10.4018/978-1-60566-026-4.ch035

Abstract

As the wide range of topics addressed by this Encyclopedia indicates, the fields of information science and technology have grown exponentially. Likewise, the field of evaluation has evolved and become increasingly integral to learning about and improving the principles and practices of every field the Encyclopedia explores. Evaluation is the formal transdiscipline of gathering information about the performance or nature of objects of evaluation and comparing that performance to criteria to help participants make evaluative judgments (Scriven, 2004). Evaluation includes several elements: negotiating with multiple participants about their values and criteria; using many different processes to document and judge the performance of various objects of evaluation; serving formative and summative purposes; drawing on measurement and assessment techniques; and employing quantitative and qualitative data-gathering and analysis processes. This chapter documents the development of evaluation as a field, presents a framework for thinking about evaluation that is theoretically sound and practical to use, and explores ways to apply the framework to facilitate learning, improvement, decision making, and judgment in all sub-fields of information science and technology.
Chapter Preview

Evaluation Theories or Approaches

Many approaches to evaluation have evolved over the last few decades. In the 1960s, several social scientists, psychometricians, and others responded to government challenges to evaluate funded programs by identifying approaches that have been debated and expanded ever since. Many of these approaches are summarized and discussed by Fitzpatrick, Sanders, and Worthen (2004) and Alkin (2004).

For example, one influential thinker, Daniel Stufflebeam (2004a), introduced the CIPP (context, input, process, product) approach in the early 1970s. He also elaborated the idea of meta-evaluation and guided the Joint Committee on Standards for Educational Evaluation in generating meta-evaluation standards (Stufflebeam, 2004b) for judging evaluations of programs, personnel, and students.

Patton (2004), recognizing that many evaluations conducted with social science research approaches were ignored by the stakeholders they were supposed to serve, created utilization-focused evaluation. It promotes practical ways to ascertain and target stakeholders’ criteria, raising the likelihood that results will be used.

Lincoln and Guba (2004) questioned the dominant evaluation paradigms and proposed fourth-generation evaluation. Its hermeneutic-dialectic methods of working with stakeholders seek to negotiate their often conflicting values to better identify the criteria, standards, and questions that guide evaluations.

Robert Stake’s (2003) responsive approach proposed radical changes to his earlier countenance approach, acknowledging that evaluation is only one of many factors communities of stakeholders consider when negotiating with one another about evaluating objects they care about together.

Cousins, Goh, Clark, and Lee (2004) noted that evaluation is part of most organizations and something all stakeholders do constantly. They reviewed ways to encourage stakeholders to collaborate in various participatory approaches to formal evaluation.

Fetterman and Wandersman (2005) have proposed an approach to evaluation that some argue is more a form of social activism than evaluation. Empowerment evaluation encourages professional evaluators to coach stakeholder groups, particularly those that traditionally have less voice in their social and political communities, to conduct their own evaluations.

Formative and Summative Purposes

Scriven (2004) has critiqued many of these approaches and proposed alternatives, such as goal-free evaluation and the key evaluation checklist. He also distinguished summative from formative evaluation: evaluations should not only test how well evaluands achieve their purposes but also provide formative feedback for improving them.

Key Terms in this Chapter

Summative Decision: Evaluation results leading to decisions to continue or discontinue an evaluand.

Measurement: Process of identifying the existence of entities by categorizing or enumerating their qualities.

Evaluand: A thing or person being evaluated.

Stakeholders: People who have an interest in an evaluand and its evaluation and who must be involved in the evaluation so they will value and use the results.

Criteria: Ideals against which evaluands should be compared.

Evaluation: Judging merit or worth by comparing what is to what should be.

Formative Decision: Evaluation results suggesting ways to improve an evaluand.

Standard: Level on a criterion that an evaluand is expected to reach.
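
To make these terms concrete, consider a minimal sketch of how an evaluation might be modeled in software. The sketch below is purely illustrative, not a method from the chapter: the evaluand, criteria, standards, and scores are all hypothetical. It shows measured performance on each criterion compared to a standard, yielding either a summative judgment that the standard is met or formative feedback identifying room for improvement.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        """An ideal against which an evaluand is compared."""
        name: str
        standard: float  # level the evaluand is expected to reach

    def evaluate(measurements, criteria):
        """Compare measured performance to each criterion's standard.

        Criteria that meet their standard support a summative
        "continue" decision; shortfalls become formative feedback
        indicating what to improve.
        """
        judgments = {}
        for c in criteria:
            score = measurements.get(c.name, 0.0)
            if score >= c.standard:
                judgments[c.name] = "meets standard (summative: continue)"
            else:
                shortfall = c.standard - score
                judgments[c.name] = (
                    f"below standard by {shortfall:.2f} (formative: improve)"
                )
        return judgments

    # Hypothetical example: evaluating a digital library search feature
    criteria = [
        Criterion("retrieval precision", 0.80),
        Criterion("user satisfaction", 0.75),
    ]
    measurements = {"retrieval precision": 0.85, "user satisfaction": 0.60}
    for name, judgment in evaluate(measurements, criteria).items():
        print(f"{name}: {judgment}")

In practice, of course, criteria and standards are negotiated with stakeholders rather than fixed in advance, and judgments rarely reduce to a single threshold; the sketch only illustrates how the key terms relate to one another.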
