This chapter focuses not on technology mediation of human relationships, but rather on human-like relationships with technology itself. The author argues, with supporting reasoning and data from his work and that of others, that humans have a natural tendency to generalize social interaction behaviors and interpretations (that is, domain-specific “etiquette”) learned for human-human interactions to interactions with any complex, semi-autonomous and partially unpredictable agent—including many machines and automation. This tendency can affect human trust, perceived workload, degree of confidence and authority, and so forth—all of which can in turn affect performance, safety, and satisfaction with a machine system. The author urges taking an “etiquette perspective” in design as a means of anticipating this phenomenon and either encouraging or discouraging it as appropriate.
Human-Machine Etiquette: Origins Of The Idea
In 2000, while co-chairing an AAAI Spring Symposium on Adaptive User Interfaces, I produced a soapbox polemic on the topic of Human-Computer Etiquette (Miller, 2000). I wanted to draw attention to a perceived flaw in much of the exciting work in adaptive and intelligent user interfaces. Specifically, such systems all too often behaved like little children: interrupting ongoing conversation or work to show off what they could do, exhibiting capabilities primarily for their own sake rather than to advance the goals of their human users (their “betters”?), and persisting in the same behavior long after it had ceased to be useful or interesting. While this pattern of actions was tolerable in young children and, perhaps, in young systems fresh from the lab, such systems needed to grow up and participate in the rules and conventions of the societies into which they hoped to be accepted.
In fairness, I wasn’t just pointing a finger at the work of others, and I wasn’t completely original. Eric Horvitz had written about a similar concern with regard to personal computer systems (e.g., Microsoft’s Office Assistant™) a year earlier (Horvitz, 1999). And I had noticed similar tendencies in my own projects: for example, pilots deemed initial versions of the Rotorcraft Pilot’s Associate (RPA) (Miller and Hannen, 1999) far more willing to provide aiding than was necessary.
Interestingly, however, in that rotorcraft project we had noted that human pilots spent nearly a third of their time in inter-crew coordination, discussing their intent and plans. We designed and implemented a simple interface which allowed RPA to participate in that conversation, taking instruction and declaring its intent all in ways that were functionally similar (though usually much simpler in form) to the ways pilots communicated among themselves. This modification seems to have resulted in improvement in human + machine system performance, as well as larger gains in user acceptance (Miller and Hannen, 1999).
Key Terms in this Chapter
Intentional Agent: Any agent, whether human or machine (or even hidden and abstract, such as the weather, luck, or fate), that is deemed to have sufficient intelligence and personal consciousness to have intentions (after Dennett, 1989).
Etiquette-Based Design: Design of systems and interfaces which takes into account the fact that humans are likely to interact with complex systems according to the patterns of expectations and interpretations they have formed for interacting with other intentional agents—primarily other humans.
Face: The “positive social value a person effectively claims for himself” (cf. Cassell and Bickmore, 2003, p. 6). It is the desire to have one’s will and interests be seen as important and valuable. Face can be saved or lost, threatened or conserved in interactions. All agents that are believed to be intentional are believed to have face.
Etiquette (as used in this chapter): The defined roles and acceptable behaviors and interaction moves of each participant in a common ‘social’ setting—that is, one that involves more than one intelligent agent (cf. intentional agent). Etiquette rules create an informal contract between participants in a social interaction, allowing expectations to be formed and used about the behavior of other parties, and defining what counts as good behavior.
Redress: Threats to one’s face are inherent in social interactions between intentional agents. Politeness behaviors can “redress” or mitigate and offset face threats.
Politeness: One (pervasive) type of etiquette which embodies a culture-specific code of verbal and non-verbal behaviors of varying weight, used to redress face threats and thereby signal, maintain, or disrupt social relationships based on power difference, social distance (i.e., familiarity), and the raw imposition of interactions (after Brown and Levinson, 1987).