Multimodal Feedback in Human-Robot Interaction: An HCI-Informed Comparison of Feedback Modalities

Maria Vanessa aus der Wieschen, Kerstin Fischer, Kamil Kukliński, Lars Christian Jensen, Thiusius Rajeeth Savarimuthu
DOI: 10.4018/978-1-5225-0435-1.ch006

Abstract

A major area of interest within the fields of human-computer interaction (HCI) and human-robot interaction (HRI) is user feedback. Previous work in HCI has investigated the effects of error feedback on task efficiency and error rates; however, these studies have mostly been restricted to comparisons of inherently different feedback modalities, for example auditory and visual, and thus fail to acknowledge the many possible variations within each modality, some of which may be more effective than others. This chapter applies a user-centered approach to investigating feedback modalities for robot teleoperation by naïve users. It identifies the reasons why novice users need feedback when demonstrating novel behaviors to a teleoperated industrial robot, evaluates various feedback modalities designed to prevent errors, and, drawing on document design theory, studies different kinds of visual presentation with respect to their effectiveness in creating legible error feedback screens.

Introduction

Industrial robots are increasingly recognized as a time- and cost-efficient alternative or complement to human labor, with the number of robots installed in industrial settings growing constantly. The statistical department of the International Federation of Robotics (2014) estimates that around 168,000 industrial robots were sold in 2013. However, adjusting these industrial robots to novel behaviors is usually a time-consuming and expensive task carried out by programmers (Chernova & Thomaz, 2014); consequently, much recent work focuses on learning from demonstration, in which the robot acquires novel tasks from demonstrations by people other than programmers (e.g. Muxfeldt, Kluth, & Kubus, 2014). If naïve users are to demonstrate tasks to a robot, however, the interface needs to be intuitive and user-friendly and must ensure that the number of errors remains low. Thus, users need feedback when demonstrating novel tasks to the robot.

Research on feedback has mostly been restricted to comparisons of inherently different feedback modalities, for example auditory and visual. These studies fail to acknowledge that each of these modalities comes in countless variations, some of which might be more effective than others. Auditory feedback, for instance, can be realized as verbal messages, earcons, or auditory icons (Nam & Kim, 2010). Moreover, several teleoperation modalities require the human user to be physically near the robot (Kirstein, 2014), as is the case with the MARVIN platform (Savarimuthu et al., 2013), the robotic platform studied in the present chapter. Research in this field has focused mostly on user safety issues in these forms of teleoperation, while paying far too little attention to intuitive interaction.

Based on aus der Wieschen (2015), this chapter examines different types of error feedback for human-robot interaction, arguing that error feedback is a crucial factor in designing intuitive interaction between naïve users and industrial robots. The major objectives of this chapter are to determine the problems novice users may cause for the robot platform under consideration and to examine how feedback can help users solve these problems on their own. Furthermore, this chapter develops guidelines for the design of error feedback screens. In particular, this research seeks to address three questions:

  1. Which errors occur when naïve users teleoperate the MARVIN platform?

  2. How can feedback prevent these errors or help users to resolve them?

  3. Which factors influence the legibility of feedback screens?

Both qualitative and quantitative research methods were adopted to answer these questions. This chapter comprises four studies, of which three were conducted as user tests in a lab setting and one in the form of a survey. The baseline study identifies the most common problems that occur in human-robot interaction with an industrial assembly robot and asks when and why naïve users wish for feedback. The second lab study presents novice users with three feedback modalities that the users in the baseline study had requested. As one of the three modalities – visual feedback – proved to be very inefficient (even though previous research has reported on the efficiency of visual feedback), a survey was conducted to find out which factors influence the legibility of error feedback screens. The most legible feedback screens were then tested in the same lab setting to validate the survey results.

Key Terms in this Chapter

Document Design: The study of creating documents that meet the readers’ needs.

Feedback Modality: Classifies feedback by the sensory modality it addresses, e.g. auditory, visual, tactile, or olfactory.

Multimodal Feedback: A combination of two or more feedback modalities.

Text-Image Relationship: The way texts and images can be combined.

Affirmative Feedback: Continuous feedback given to users as long as there is no problem.

Learning by Demonstration: A paradigm in which a machine learns how to perform an action from a human or another machine, with the teacher simply demonstrating the task.

User-Centered Design: An iterative design approach that considers the users’ needs at every stage.

Error Feedback: Feedback given to users when there is a problem.
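The distinctions among these key terms can be sketched as a small data model. The following Python fragment is purely illustrative, not from the chapter or the MARVIN platform: it encodes the sensory modalities, represents multimodal feedback as a combination of two or more of them, and distinguishes affirmative from error feedback. All class and function names here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class Modality(Enum):
    """Sensory channels a feedback signal can address."""
    AUDITORY = auto()
    VISUAL = auto()
    TACTILE = auto()
    OLFACTORY = auto()


@dataclass
class Feedback:
    modalities: Tuple[Modality, ...]  # two or more entries = multimodal feedback
    message: str
    is_error: bool                    # error feedback vs. affirmative feedback


def give_feedback(problem: Optional[str]) -> Feedback:
    """Affirmative feedback while all is well; multimodal error feedback otherwise."""
    if problem is None:
        # Affirmative feedback: a continuous "all clear" on a single channel.
        return Feedback((Modality.VISUAL,), "OK", is_error=False)
    # Error feedback: combine modalities to make the problem hard to miss.
    return Feedback((Modality.VISUAL, Modality.AUDITORY), problem, is_error=True)
```

Such a model makes the chapter's central point concrete: "visual feedback" is a single enum value here, yet in practice it spans many variations (text, icons, screen layouts) whose legibility varies, which is what the survey study investigates.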
