Content Evaluation Criteria for General Websites: Analysis and Comparison


Hassan M. Selim (United Arab Emirates University, Al Ain, UAE)
Copyright: © 2012 |Pages: 18
DOI: 10.4018/ijom.2012070102


Presently, almost anyone can publish a website, and interest in using the Web as a competitive weapon by individuals, businesses, and governments continues to grow. Little work, however, has addressed the applicability and implementation of the many published website evaluation criteria. This work is an attempt to develop a comprehensive set of evaluation criteria for general websites in line with international standards for website design. The proposed evaluation criteria are used to analyze the top 10 websites in the UAE in order to measure their compliance with the developed criteria. The proposed criteria can be used as a benchmark of website quality and compliance.

Introduction and Literature Review

The Internet and web technologies created a new and unprecedented environment for governments, businesses, educational institutions, and individuals, enabling them to webcast any information using multimedia tools. We are seeing a proliferation of websites with an enormous amount of information (Hassan & Abuelrub, 2008). The very first website was posted in August 1991 by Sir Tim Berners-Lee (Lawson, 2009). There were 130 websites on the Internet in 1993, and 47 million websites were added in 2009 alone, bringing the total to 234 million (Pingdom Royal, 2010). This shows how fast the Web is spreading worldwide. The number of people using the Internet is growing exponentially the world over: there were 1.8 billion Internet users by the end of 2009, representing 26.6% global penetration (Internet World Stats, 2010). The Internet is a virtual library containing an unlimited amount of information, and anyone is allowed to publish and access it. Websites are not monitored, edited, regulated, or approved (Brown, Hickey, & Pozen, 2002).

There is a multitude of indicators, and a reasonable body of literature, on evaluating a website. Several domain-specific website evaluation criteria were developed in the past few years. Criteria were developed to evaluate websites dedicated to bookstores and jobs (Terzis & Economides, 2005), museums (Pallas & Economides, 2008), airline companies (Apostolou & Economides, 2008), and ministries (Ataloglou & Economides, 2009). Examining the Webby Awards 2000 data set to understand which factors distinguish highly-rated websites from those that receive poor ratings, Sinha, Hearst, and Ivory evaluated 3,000 websites based on six criteria (Sinha, Hearst, & Ivory, 2001): content, structure & navigation, visual design, functionality, interactivity, and overall experience. They found that content was by far the best predictor of the overall experience, while visual design was the worst. Targeting Web page designers, webmasters, business owners, and researchers, Viehland and Zhao determined how well New Zealand’s top 50 websites were following international homepage guidelines based on twelve criteria in three categories – web page design, navigation, and usability (Viehland & Zhao, 2008a). A Web Assessment Index (WAI) was developed to provide an integrated approach for evaluating websites based on four criteria – accessibility, speed, navigability, and content – which were objectively evaluated, each website receiving a score out of 100 (Kargar, 2011; Mateos, Mera, Miranda Gonzalez, & Lopez, 2001; Miranda Gonzalez & Banegil Palacios, 2004). Ooi, Ho, and Amri used a list of 10 criteria to evaluate three education service providers’ websites in Malaysia (Ooi, Ho, & Amri, 2010). The ten criteria were: source, layout, accessibility, speed, navigability, content, accuracy, level of details, current information, and appearance. They adopted a binary scoring indicating the existence or non-existence of a criterion.
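The binary scoring scheme described above can be sketched in a few lines: each criterion is marked present or absent, and the site's score is the share of criteria satisfied, scaled to 100 in the style of the WAI. This is an illustrative sketch only; the criterion names come from the text, but the checklist values and the 0–100 scaling of the binary scheme are assumptions for the example.

```python
# Criteria list taken from Ooi, Ho, & Amri (2010) as quoted in the text.
CRITERIA = [
    "source", "layout", "accessibility", "speed", "navigability",
    "content", "accuracy", "level of details", "current information",
    "appearance",
]

def binary_score(checklist: dict) -> float:
    """Share of criteria present, scaled to 100 (WAI-style score)."""
    present = sum(1 for c in CRITERIA if checklist.get(c, False))
    return 100.0 * present / len(CRITERIA)

# Hypothetical site that satisfies all criteria except speed.
example_site = {c: True for c in CRITERIA}
example_site["speed"] = False
print(binary_score(example_site))  # 90.0
```

A binary checklist of this kind trades granularity for objectivity: two evaluators are far more likely to agree on whether a criterion exists than on a graded rating of its quality.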
Using six website evaluation dimensions, Pallas and Economides (2008) developed a museum site evaluation framework (MUSEF). The framework used website content, presentation, usability, interactivity, e-service, and technical as its evaluation dimensions, each dimension containing a number of specific criteria. Sonoma State University developed a set of criteria to evaluate website content (Sonoma State University, 2005). Nielson presented evaluation criteria for websites’ interface design (Nielson Norman Group, 2006). Several other authors designed sets of criteria for evaluating website features, such as currency, navigation, authority, accuracy, and coverage (Fisher, Burstein, Lynch, & Lazarenko, 2008; Hackett & Parmanto, 2009; Kargar, 2011; Kim & Jung, 2007; Lituchy & Barra, 2008; O’Reilly & Flood, 2008; Schmidt, Cantallops, & dos Santos, 2008; Yang & Chan, 2008).
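A framework like MUSEF, where each top-level dimension aggregates several specific criteria, can be modeled as a simple nested structure with per-dimension averaging. The six dimension names below come from the text, but the sub-criteria, the 0–1 rating scale, and unweighted averaging are hypothetical assumptions for the sketch; the original framework may weight criteria differently.

```python
# Six MUSEF dimensions per Pallas & Economides (2008); sub-criteria invented
# for illustration only.
DIMENSIONS = {
    "content": ["accuracy", "currency"],
    "presentation": ["layout", "typography"],
    "usability": ["navigation", "search"],
    "interactivity": ["feedback forms"],
    "e-service": ["online ticketing"],
    "technical": ["load time", "broken links"],
}

def dimension_scores(ratings: dict) -> dict:
    """Average the 0-1 criterion ratings within each dimension."""
    return {
        dim: sum(ratings[c] for c in crits) / len(crits)
        for dim, crits in DIMENSIONS.items()
    }

# Hypothetical museum site rated perfect except for its search facility.
ratings = {c: 1.0 for crits in DIMENSIONS.values() for c in crits}
ratings["search"] = 0.0
print(dimension_scores(ratings)["usability"])  # 0.5
```

Reporting a score per dimension, rather than a single total, lets an evaluator see at a glance which aspect of a site (e.g., usability versus technical quality) drags its overall rating down.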
