Relationships and Etiquette with Technical Systems

Christopher A. Miller
DOI: 10.4018/978-1-60566-264-0.ch032

Abstract

This chapter focuses not on technology mediation of human relationships, but rather on human-like relationships with technology itself. The author argues, with supporting reasoning and data from his work and that of others, that humans have a natural tendency to generalize social interaction behaviors and interpretations (that is, domain-specific “etiquette”) learned for human-human interactions to interactions with any complex, semi-autonomous and partially unpredictable agent—including many machines and automation. This tendency can affect human trust, perceived workload, degree of confidence and authority, and so forth—all of which can in turn affect performance, safety, and satisfaction with a machine system. The author urges taking an “etiquette perspective” in design as a means of anticipating this phenomenon and either encouraging or discouraging it as appropriate.
Chapter Preview

Human-Machine Etiquette: Origins of the Idea

In 2000, while co-chairing an AAAI Spring Symposium on Adaptive User Interfaces, I produced a soapbox polemic on the topic of Human-Computer Etiquette (Miller, 2000). I wanted to draw attention to a flaw I perceived in much of the exciting work on adaptive and intelligent user interfaces: all too often, these systems behaved like little children, interrupting ongoing conversation or work to show off what they could do, exhibiting capabilities for their own sake rather than to advance the goals of their human users (their "betters"?), and persisting in the same behavior long after it had ceased to be useful or interesting. While this pattern of behavior is tolerable in young children and, perhaps, in young systems fresh from the lab, such systems needed to grow up and abide by the rules and conventions of the societies into which they hoped to be accepted.

In fairness, I wasn't just pointing a finger at the work of others, and I wasn't entirely original. Eric Horvitz had written about a similar concern with regard to personal computer systems (e.g., Microsoft's Office Assistant™) a year earlier (Horvitz, 1999). And I had noticed similar tendencies in my own projects: for example, pilots judged initial versions of the Rotorcraft Pilot's Associate (RPA) (Miller and Hannen, 1999) to be far more willing to provide aiding than was necessary.

Interestingly, however, we had noted in that rotorcraft project that human pilots spent nearly a third of their time in inter-crew coordination, discussing their intent and plans. We designed and implemented a simple interface that allowed RPA to participate in that conversation, taking instruction and declaring its intent in ways that were functionally similar (though usually much simpler in form) to the ways pilots communicated among themselves. This modification appears to have improved human + machine system performance and produced even larger gains in user acceptance (Miller and Hannen, 1999).

Key Terms in this Chapter

Intentional Agent: Any agent, whether human or machine (or even hidden and abstract, such as the weather, luck, or fate), that is deemed to have sufficient intelligence and personal consciousness so as to have intentions (after Dennett, 1989).

Etiquette-Based Design: Design of systems and interfaces that takes into account the fact that humans are likely to interact with complex systems according to the patterns of expectations and interpretations they have formed for interacting with other intentional agents, primarily other humans.

Face: The "positive social value a person effectively claims for himself" (cf. Cassell and Bickmore, 2003, p. 6); that is, the desire to have one's will and interests seen as important and valuable. Face can be saved or lost, threatened or conserved in interactions. Any agent that is believed to be intentional is also believed to have face.

Etiquette (as used in this chapter): The defined roles and acceptable behaviors and interaction moves of each participant in a common ‘social’ setting—that is, one that involves more than one intelligent agent (cf. intentional agent). Etiquette rules create an informal contract between participants in a social interaction, allowing expectations to be formed and used about the behavior of other parties, and defining what counts as good behavior.

Redress: Threats to one’s face are inherent in social interactions between intentional agents. Politeness behaviors can “redress” or mitigate and offset face threats.

Politeness: One (pervasive) type of etiquette which embodies a culture-specific code of verbal and non-verbal behaviors, of varying weight, used to redress face threat and thereby signal, maintain, or disrupt social relationships based on power difference, social distance (i.e., familiarity), and the degree of imposition of the interaction (after Brown and Levinson, 1987).
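Brown and Levinson (1987) combine these three factors in a simple weightiness formula. As a rough sketch of their model (the equation is theirs, not one stated in this chapter), the seriousness W_x of a face-threatening act x is estimated as

W_x = D(S, H) + P(H, S) + R_x

where D(S, H) is the social distance between speaker S and hearer H, P(H, S) is the power the hearer holds over the speaker, and R_x is the culture-specific ranking of the imposition of the act. The greater the weightiness, the more redressive politeness the speaker is expected to employ.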
