Aviation-Related Expertise and Usability: Implications for the Design of an FAA E-Government Web Site

Ferne Friedman-Berg (FAA Human Factors Team - Atlantic City, USA), Kenneth Allendoerfer (FAA Human Factors Team - Atlantic City, USA) and Shantanu Pai (Engility Corporation, USA)
DOI: 10.4018/978-1-60960-162-1.ch005


The Federal Aviation Administration (FAA) Human Factors Team - Atlantic City conducted a usability assessment of the www.fly.faa.gov Web site to examine user satisfaction and identify site usability issues. The FAA Air Traffic Control System Command Center uses this Web site to provide information about airport conditions, such as arrival and departure delays, to the public and the aviation industry. The most important aspect of this assessment was its use of quantitative metrics to evaluate how successfully users with different levels of aviation-related expertise could complete common tasks, such as determining the amount of delay at an airport. The researchers used the findings from this assessment to make design recommendations for future system enhancements that would benefit all users. They discuss why usability assessments are an important part of the process of evaluating e-government Web sites and why their usability evaluation process should be applied to the development of other e-government Web sites.
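The quantitative approach described above, scoring how successfully different user groups complete common tasks, can be illustrated with a short sketch. This is a hypothetical example, not the chapter's actual analysis: the group names, task, and data values are invented, and the metric shown is a simple per-group task success rate.

```python
from collections import defaultdict

def success_rates(observations):
    """Return the task success rate (0.0-1.0) for each user group.

    observations: list of (group, succeeded) tuples, one per task attempt.
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for group, succeeded in observations:
        totals[group] += 1
        if succeeded:
            successes[group] += 1
    return {group: successes[group] / totals[group] for group in totals}

# Invented sample data: attempts at a task such as
# "determine the amount of delay at an airport".
sample = [
    ("aviation expert", True), ("aviation expert", True),
    ("aviation expert", True), ("aviation expert", False),
    ("general public", True), ("general public", False),
    ("general public", False), ("general public", True),
]
rates = success_rates(sample)
# rates -> {'aviation expert': 0.75, 'general public': 0.5}
```

Comparing such rates across expertise groups is one way a usability assessment can show whether a redesign benefits all users rather than only the most expert ones.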
Chapter Preview


On November 15, 2007, President Bush announced actions to address aviation delays during the Thanksgiving holidays. As part of this announcement, he directed people to visit the fly.faa.gov Web site.

Because this Web site is the public face of a large federal agency, it is important that it presents the agency in the best light possible. An agency Web site should be a positive public relations vehicle and should not, in itself, create any public relations problems. Although use of e-government Web sites is increasing annually, low user acceptance of e-government Web sites is a recognized problem (Hung, Chang, & Yu, 2006). Many factors affect whether someone will use or accept an e-government Web site, including past positive experience with e-government Web sites (Carter & Bélanger, 2005; Reddick, 2005); the ease of use of the Web site (Carter & Bélanger, 2005; Horst, Kuttschreuter, & Gutteling, 2007); the perceived trustworthiness of the information presented on the Web site (Carter & Bélanger, 2005; Horst et al., 2007); the perceived usefulness of the Web site (Hung et al., 2006); and personal factors such as education level, race, level of current Internet use, and income level (Reddick, 2005). If a Web site has many functional barriers, such as a poor layout or incomplete search results, customers may not use it (Bertot & Jaeger, 2006).

Early work in e-government has consistently ignored the needs of end users, and there has been little research focusing on the demand side of e-government (Reddick, 2005). That is, what are customers looking for when they come to an e-government Web site? Although many benchmarking surveys have been conducted on e-government Web sites, such surveys often do not describe the benefits a Web site provides and merely enumerate the services it offers (Foley, 2005; Yildiz, 2007). Benchmarks do not evaluate users' perceptions of sites and do not measure real progress in the government's delivery of e-services. Nevertheless, governments often chase these benchmarks to the exclusion of all other forms of evaluation (Bannister, 2007).

E-government academics emphasize the importance of usability testing and highlight the need to focus on Web site functionality, usability, and accessibility testing (Barnes & Vidgen, 2006; Bertot & Jaeger, 2006). Despite its importance, however, many organizations still do not perform usability testing on e-government Web sites. Current work often does not address the needs of different user communities, employ user-centered design, or use rigorous methods to test the services being delivered (Bertot & Jaeger, 2006; Heeks & Bailur, 2007).
