E-readiness assessments are largely conducted at country level across a number of sectors, and tend to adopt quantitative approaches that assign numerical scores to countries depending on how well they have performed on specific components of e-readiness measures. A weighted average is then calculated, based on the relative importance accorded to these components, in order to determine a country’s level of e-readiness (Rizk, 2004). The results of e-readiness rankings of countries are published annually by some agencies. For example, the Economist Intelligence Unit (Economist Intelligence Unit, 2001) publishes an annual comprehensive ranking of countries on the basis of their measured e-readiness. The ranking categorises countries according to their overall e-readiness, as calculated from 89 indicators across six weighted dimensions, namely connectivity, the business environment, consumer and business adoption, the legal and regulatory environment, supporting services, and social and cultural infrastructure. The result of the calculations is a classification of the world’s largest economies on the basis of their perceived adopter category.
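The weighted-average scoring described above can be sketched in a few lines of code. The component names, scores, and weights below are hypothetical illustrations only (loosely echoing the six EIU dimensions); they are not the actual indicators or weights used by any published index.

```python
def weighted_score(scores, weights):
    """Return the weighted average of component scores.

    scores  -- mapping of component name to a numerical score (e.g. a 0-10 scale)
    weights -- mapping of component name to its relative importance; the result
               is normalised by the total weight, so weights need not sum to 1
    """
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Hypothetical scores for one country across six illustrative dimensions
scores = {
    "connectivity": 6.5,
    "business_environment": 7.0,
    "adoption": 5.5,
    "legal_regulatory": 6.0,
    "supporting_services": 5.0,
    "social_cultural": 6.5,
}

# Hypothetical relative-importance weights (here they happen to sum to 1.0)
weights = {
    "connectivity": 0.25,
    "business_environment": 0.20,
    "adoption": 0.20,
    "legal_regulatory": 0.15,
    "supporting_services": 0.10,
    "social_cultural": 0.10,
}

print(round(weighted_score(scores, weights), 3))  # prints 6.175
```

Ranking then amounts to computing this score for each country and sorting; the substantive differences between the tools lie in which components are measured and how the weights are set.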
There are several macro e-readiness assessment tools and methods that have been developed by various organisations. These organisations include, but are not limited to: the Computer Systems Policy Project (CSPP); the Centre for International Development, Harvard University (2004); the Economist Intelligence Unit (2004); the United Nations Development Programme (2004); the United Nations Conference on Trade and Development (UNCTAD, 2004); and the SADC E-Readiness Assessment Task Force. Some of the most commonly used macro e-readiness tools include, among others: the Readiness Guide for Living in the Networked World, developed by the Computer Systems Policy Project (CSPP); the Networked Readiness Index of Harvard University’s Centre for International Development (CID); the E-readiness Rankings of the Economist Intelligence Unit; and the Technology Achievement Index of the UNDP.
Each of these tools uses a different definition of e-readiness and different methods for measuring it. Moreover, the e-readiness assessments are very diverse in their goals, strategies and results (Bridges.org, 2003). In general, however, the tools measure the level of infrastructure development; connectivity; Internet access; applications and services, or network speed; quality of network access; and ICT policy. The tools also measure:
The ICT training programs in place
Adequacy and availability of human resources
Level of computer literacy
Largely, all the tools for e-readiness assessments have been designed for macro-level assessments and are used to measure, for example, policy making, the state of Internet acceptance and growth, comparative analyses of countries, and stages of ICT development in countries (Dutta and Jain, 2004). In addition, the focus of each method or tool reflects the purpose for which it was designed. Moreover, the methods used in most macro e-readiness assessment studies vary from one tool to the next. For example, APEC’s Guide relies on questionnaire-based data, while Mosaic’s methodology tends to be qualitative. Other organisations or agencies, such as CID, use a combination of questionnaires and raw data. McConnell International provides a qualitative reference guide, whereas the World Information Technology and Services Alliance (WITSA) uses a survey-based method.