A Service Science Perspective on Human-Computer Interface Issues of Online Service Applications

Claudio Pinhanez
DOI: 10.4018/978-1-60960-138-6.ch008

Abstract

This paper proposes a framework for online service applications based on Service Science that identifies and enables a better understanding of the different issues faced by online service designers, engineers, and delivery personnel. The application of the Service Science framework is made possible by carefully distinguishing online service applications not only from traditional personal software applications but also from online information applications, such as those used by news and entertainment websites, by specializing Pinhanez’s definition of customer-intensive systems (Pinhanez, 2008) to online applications. To demonstrate the utility of the framework, we consider the six basic characteristics of services as traditionally defined in Service Science (customer-as-input, heterogeneity, simultaneity, perishability, coproduction, and intangibility) and derive from them a list of 15 issues that are highly important for the design and evaluation of the human-computer interface of online services.

Introduction

In 1991, Scientific American published an extraordinary collection of essays about the upcoming era of integration of communications, computers, and networks (Dertouzos, 1991b). The issue included articles from technology visionaries such as Michael Dertouzos, Vinton Cerf, Nicholas Negroponte, Alan Kay, Mitchell Kapor, and then-US Senator Al Gore. Among other things, the articles predicted the appearance of large-scale broadband networks, the non-centralized structure of today’s WWW, the ubiquity of e-mail, the telecommuting phenomenon, and the emergence of India as a software outsourcing powerhouse, as well as problems such as junk mail, cyber-crime, and identity theft.

But notably, all authors failed to predict that the massive interconnection of users in cyberspace would open up space for large online service providers that could mediate the relationship between users and the vast amounts of data available. These include providers that collect, analyze, negotiate, process, and filter public and private data and offer simplified information access as a service, such as Google, Yahoo, and Mapquest; e-retailers such as Travelocity, Amazon, and others; and service providers based on social networking, such as eBay, Skype, Facebook, and Orkut.

The common thinking 15 years ago seemed to be that access to the myriad of computers in the network and the browsing and filtering of their data, as well as the bulk of the interpersonal connections, would be performed by personal tools or software agents that would scout and explore the Internet for information relevant to their users. A good exemplar of this view is the concept of knowledge robots, or knowbots, proposed by Robert Kahn and Vinton Cerf (Kahn & Cerf, 1988), “…programs designed by their users to travel through a network, inspecting and understanding similar kinds of information…” as described in (Dertouzos, 1991a, pg. 35). Knowbots were to be unleashed to fulfill specific user requests for information, moving “…from machine to machine, possibly cloning themselves […] dispatched to do our bidding in a global landscape of networked computing and information resources.” (Cerf, 1991, pg. 44).

The problem with the agent-based vision of information search is that it does not scale up. In the current world of distributed information, this approach would require each of us to run (and possibly store) the equivalent of Google’s operations of crawling the web, indexing, and search matching. What the authors of the Scientific American issue could not see is that, as networks and their users grew well beyond the academic, mostly engineering-minded users of the Internet in the early 1990s, there are tremendous economies of scale when millions and millions of queries are handled by a central system that crawls and indexes all the available information (independent of specific queries) and provides information finding as an online service application.

But how different are such online service applications from traditional software applications? The goal of this article is to describe a framework for online service applications that differentiates them from traditional interactive software tools, so that it can be used to explain and predict the differences between the two, particularly in issues related to the design of human-computer interfaces for online services. As a consequence, software tools and online service applications are intrinsically different, even when used for similar tasks, and should be designed and engineered differently. There is, of course, an extensive body of practice and empirical knowledge about developing interfaces for online interactive applications, exemplified by all the knowledge built over the last decade and a half about web applications, as described, for example, in (Nielsen, 2000). There has also been work examining HCI and usability issues in e-commerce (C.-M. Karat, Blom, & Karat, 2004; Nah & Davis, 2002; Voss, 2003), but we believe that these works suffer from not having an appropriate theoretical understanding of what an online service is and, therefore, miss an important part of the picture when reasoning about their findings.
