Evaluating Public Programs Implementation: An Exploratory Case Study


Maddalena Sorrentino (Università degli Studi di Milano, Italy) and Katia Passerini (New Jersey Institute of Technology, USA)
DOI: 10.4018/978-1-4666-1776-6.ch012
This paper discusses the importance of evaluating the implementation of public programs as an integral component of organizational actions performed by public administrations. Drawing on contributions from policy studies and organization theory, the authors assign a dual role to evaluation: valuable cognitive resource and accountability tool for the policymakers. This exploratory case study contributes to the literature on implementation evaluation by providing an encompassing theory-grounded perspective on a recent e-government project by the City of Milan. The authors’ preliminary findings confirm the heuristic potential of an evaluation approach where interdisciplinary inputs can enlighten not only the results, but also the process of design, adoption and the use of e-services.


Public administrations are increasingly evaluating not only the rate of innovation of the policies, programs, and projects they launch, but also the operational management of the services they provide (Bhatnagar, 2004; OECD, 2004, 2009). Accountability for the use of collective resources, demonstrated through verifiable and documentable results commensurate with the resources public institutions receive from society, is becoming a recognized need in several spheres (Considine, 2002; Rebora, 1999).

In this paper, evaluation is defined as a judgment on the capacity of a policy (i.e., the set of actions designed to address a collective problem) to transform the problem situation in the desired direction (Dente & Vecchi, 1999). The public usually pays little attention to the policy implementation process and tends to take it for granted, because collective interest focuses mainly on the decision-making process. Implementation is seen as a "technical" phase and, as such, is erroneously considered neutral and devoid of discretionary power. Contrary to this widespread opinion, implementation is an uncertain phase in which discretionary power cannot be eliminated through ex-ante standardization, for several reasons. First, it is only the operational launch of a program that reveals whether what has been decided moves closer to (or further away from) the goal. Second, the implementation process itself can be dogged by unexpected events, such as cuts in available resources or revised priorities, that place the original project at risk. Finally, implementation is characterized by ambiguity because it mobilizes a significant number of resources and actors whose personal agendas do not necessarily coincide with the goals of the other stakeholders.

Figure 1 displays the areas in which, and the times at which, evaluation can take place. Evaluation can focus on (Lippi, 2007): a) the "products" generated by the policy (outputs) or its effects on the recipients (outcomes and impacts); b) the implementation, that is, the actions and decisions culminating in the launch of a policy, such as the provision of a service, the enactment of a law, and so forth; or c) the phases in which the political agenda takes shape (issue-making and decision-making). Depending on when it is carried out, an evaluation can be ex post, when it analyzes the results (outputs, outcomes and impacts); in itinere, when it is ongoing and conducted during implementation; or ex ante, when it is carried out before the implementation phase. Typically, ex ante and ex post evaluations aim to confirm or revoke decisions already made. The in itinere evaluation, which generally responds to a broader cognitive need, seeks to account for what happens as the implementation of the policy unfolds.

Figure 1.

Policymaking and Evaluation Types (adapted from: Lippi, 2007, p. 77)
