Both academics and practitioners have invested considerably in the information systems evaluation arena, yet rewards remain elusive. The aim of this chapter is to provide rich insights into particular political and social aspects of evaluation processes. An ethnographic study of a large international financial institution is used to compare the experience of observed practice with the rhetoric of company policy, and to contrast these observations with the process of IS evaluation as portrayed within the literature. Our study shows that despite increasing acknowledgement within the IS evaluation literature of the limitations and flaws of the positivist approach, typified by quantitative, ‘objective’ assessments, this shift in focus towards understanding social and organisational issues has had little impact on organisational practice. In addition, our observations within the research site reveal that the veneer of rationality offered by formalised evaluation processes merely obscures issues of power and politics that are enmeshed within these processes.
The Difficulties of IS Evaluation
In considering the evaluation question (and by implication the issue of ‘value’ for money of information systems), the first observation to be made is the amount of attention that the subject has demanded, both in terms of the academic literature and the level of practitioner interest (Galliers, Merali & Spearing, 1994; Niederman, Brancheau & Wetherbe, 1991). Yet in spite of this abundance of academic study and an increase in the organisational practice of evaluation, it appears we are no nearer to finding a solution to the problems surrounding it (Ballantine, Galliers & Stray, 1999), and there is little indication that the ‘hard academic, foundational questions are being widely addressed, let alone answered’ (Farbey, Land & Targett, 1998, p. 156).
With increased levels of investment in IS, organisations are becoming increasingly concerned to find appropriate mechanisms to measure performance, and decision-makers are being pressured to better justify their IS investments. Whilst there has always been a degree of scepticism over the ‘real’ benefits of IS initiatives (Earl, 1996), there is now a widespread and growing concern that IS investment does not deliver value. Yet evaluation is seen as important to business operations, being variously described as an indispensable tool for managers, a vital organisational function, and an essential part of the management process (Hirschheim & Smithson, 1988; Love, 1991; Walsham, 1993). It is closely associated with decision-making (Farbey, Land & Targett, 1995) and with managers’ desire to improve organisational economic productivity (Picciotto, 1999). So, if careful management is seen as necessary to achieve IS benefits realisation (Earl, 1996), the obvious question that arises is why so many investments appear to evolve without undergoing any formal assessment (Wilson, 1991). This absence of formal evaluation practices does not necessarily indicate a lack of endeavour within the academic or practitioner community to devise appropriate methods: ‘Many a scholar, consultant and practitioner has tried to devise a reliable approach to measuring the business value of IT at the level of the firm, none has succeeded’ (Keen, 1991). IS evaluation, then, appears to be characterised by a level of complexity that renders it very difficult both conceptually and practically (Hirschheim & Smithson, 1988; Willcocks & Lester, 1999; Zuboff, 1988).