Lessons on Measuring e-Government Satisfaction: An Experience from Surveying Government Agencies in the UK

Paul Waller, Zahir Irani, Habin Lee, Vishanth Weerakkody
Copyright: © 2014 |Pages: 10
DOI: 10.4018/ijegr.2014070103


This paper summarises lessons learned from an ongoing study, part of the I-MEET project, collecting feedback on e-government systems implemented to e-enable several core administrative functions in the UK. Previous work summarised findings from surveys of the users of such systems; this paper reports on the experience of surveying providers. An extensive survey was designed and administered to explore provider perspectives on the e-government application in general, system aspects, cost, implementation, prerequisites (e.g. policy support), various dimensions of effects, and the respondent's overall opinion of the system. The survey revealed a complex mix of internal and external contracted service providers and commissioners, each with a different set of success measures for a service, as well as shared services (such as common web site providers) that were not obvious but could result in correlated observations. These findings signpost potential pitfalls for future researchers. Nevertheless, when complete, the I-MEET project's integration of user and provider perspectives will give policy makers the opportunity to balance the scale, complexity and expense of electronically delivered transactions on the government side against usability and satisfaction on the user side, revealing linkages between aspects of both.
1. Introduction

Although several studies have attempted to develop citizen satisfaction models for e-government (such as Carter and Weerakkody, 2008; Irani et al., 2007; Welch et al., 2005; Carter and Belanger, 2005; Eyob, 2004), these models do not suggest a systematic process for evaluating citizen satisfaction with, and expectations of, e-government services against government perceptions. Often the perceptions and expectations of the user differ from those of the service provider on key dimensions such as efficiency, ease of use, awareness, security, trust, legislation, availability and accessibility (Weerakkody et al., 2013; Al Shafi and Weerakkody, 2008). Adebowale and Kippin (2014) argue that it is now essential for policy making on public services to take account of the voices of citizens and to build solutions around collaboration between private, civil society and public sector service providers. This comes at a time when e-government (“digital”) is the default method (Cabinet Office, 2013) for disseminating government information and administering transactions between government and citizens in the UK. The integrated approach to electronic service evaluation developed in the I-MEET project (funded by the Qatar National Research Foundation) combines a range of measures of value from the perspectives of users and providers, covering cost, benefits, opportunity and risk and encapsulating economic, management, social and technological factors. This supports a balanced view in policy making of the interests of government as a provider on behalf of society and the satisfaction of individuals as users.

Digital diffusion of information is often achieved at high cost to government agencies and the taxpayer; conversely, citizens’ take-up of e-government services in the UK has been less than satisfactory (Carter and Weerakkody, 2008; Weerakkody et al., 2013). If e-government services, their deliverables and their relationships with stakeholders and customers are not prioritised, e-government projects are likely to face major challenges, such as losing citizens’ confidence and satisfaction (Lee et al., 2008). Although several studies have discussed models and factors for understanding e-government adoption, and research by independent organisations (such as the OECD, the United Nations, or SOCITM, the UK association for local government IT managers) has produced a host of statistics and league tables of good and bad practice, e-government satisfaction remains a major research theme. This can partly be attributed to the fact that few studies have attempted to understand holistically the link between the provisioning of digital information and transactions and its take-up or usage. Often, the perceptions and expectations of the user differ from those of the provider on the key dimensions of cost, opportunities, benefits and risks (Osman et al., 2014). The evaluation methods and standards currently used to measure users’ (citizens’) perceptions of these dimensions often differ from those used to measure providers’ (government agencies’) perceptions of what constitutes best practice. The authors argue that this background has contributed to an ever-widening gap between e-government implementation, diffusion and use, resulting in a lack of understanding of e-government satisfaction. Current research has neither taken a holistic view of acceptance standards nor offered guidelines on how to evaluate user (citizen) expectations against providers’ (government agencies’) expectations of what constitutes good practice in e-government systems.
