An Advanced Human-Robot Interaction Interface for Collaborative Robotic Assembly Tasks

Christos Papadopoulos, Ioannis Mariolis, Angeliki Topalidou-Kyniazopoulou, Grigorios Piperagkas, Dimosthenis Ioannidis, Dimitrios Tzovaras
Copyright: © 2019 | Pages: 19
DOI: 10.4018/978-1-5225-8060-7.ch037

Abstract

This article introduces an advanced human-robot interaction (HRI) interface for teaching new assembly tasks to collaborative robotic systems. Using advanced perception and simulation technologies, the interface provides the tools a non-expert user needs to teach a robot a new assembly task in a short amount of time. An RGB-D camera captures the user's demonstration of the task, and the system extracts the information needed for the assembly to be simulated and then performed by the robot while the user guides the process. The HRI interface is integrated with the ROS framework and is built as a web application, allowing operation through portable devices such as a tablet PC. The interface is evaluated through user-experience ratings from test subjects who are asked to teach a folding assembly task to the robot.

Introduction

One of the major challenges in using robots for complex assembly tasks is reducing the time and resources needed to teach the robots how to perform the assembly in question. Today, expert roboticists can program new policies and skills within specialized domains such as manufacturing and lab experimentation, but this approach requires a large amount of programming time and resources that are not always available (Wilcox, Nikolaidis & Shah, 2012). A commonly proposed solution to this problem is Learning from Demonstration (Argall, Chernova, Veloso & Browning, 2009): using a Human-Robot Interaction (HRI) interface, the teacher provides demonstrations of a desired task, which are then used to plan the robot actions that must be performed to complete the task successfully.

This paper focuses on the functionality of the HRI system used by an inexperienced user to demonstrate an assembly task, and on the simulation of that task in a virtual environment. Although Learning from Demonstration (LfD) has already been used to teach robots new skills (Osentoski et al., 2012), to the best of our knowledge it has never been used for teaching robotic assembly tasks. The corresponding HRI interface should be simple enough for a non-expert user to demonstrate new assembly tasks, while still enabling the user to supervise the assembly execution.

Another problem with the majority of existing HRI systems is that they require special architectures and complex interfaces for the user to interact with the robot (Calinon & Billard, 2007), which adds further difficulty for the inexperienced user. To tackle this issue, we propose a simple web interface that lets the user interact freely with the HRI system, without the constraints of a specialized architecture. The user can control complex actions of the system and supervise the process through a lightweight graphical interface in a web browser using touch controls (on a tablet PC, for instance). This approach supports a user-friendly robot control interface for demonstrating and simulating complicated assembly tasks. A key advantage of the proposed system is that it allows an inexperienced, non-expert user to interact with a complex robotic system and teach assembly tasks that previously required the specialized policies and skills of experts in the field.
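To make the browser-to-robot path concrete: since the interface is built on ROS, a standard way to reach the robot from a web page is the rosbridge WebSocket layer, with a small ROS node exposing the actions the user triggers. The sketch below illustrates this pattern in Python; the node name and the /hri/start_recording service are illustrative assumptions, not the actual API of the chapter's system.

#!/usr/bin/env python
# Minimal sketch of a ROS-side endpoint for a web-based HRI front end.
# A browser client (e.g., roslibjs over rosbridge_websocket) would call
# this service to start or stop recording a demonstration. The service
# name and node name are illustrative assumptions.
import rospy
from std_srvs.srv import SetBool, SetBoolResponse

def handle_recording(req):
    # req.data == True  -> start capturing RGB-D frames of the demonstration
    # req.data == False -> stop capturing and hand frames to key-frame extraction
    state = "started" if req.data else "stopped"
    rospy.loginfo("Demonstration recording %s by web client", state)
    return SetBoolResponse(success=True, message="recording " + state)

if __name__ == "__main__":
    rospy.init_node("hri_web_backend")        # hypothetical node name
    rospy.Service("/hri/start_recording",     # hypothetical service name
                  SetBool, handle_recording)
    rospy.spin()

Because rosbridge exchanges JSON over a WebSocket, a client running in a tablet browser can call such a service without any ROS installation on the client device.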

The developed HRI interface is interconnected with many components of the complete system, as shown in Figure 1. The Perception module is responsible for the visual acquisition of the assembly task, using special cameras that capture RGB and depth data in real time and allowing the user to control the recording of the demonstration. It is also responsible for recognizing and tracking the assembly parts, as well as the teacher's hands, while the trajectories of the parts are computed from key-frames of the demonstration extracted by the Key-frame Extraction module. The user can change the selection of extracted key-frames and manually add semantic information to be used in later stages. Using the extracted trajectories, the assembly task is visualized in a 3D simulation environment, allowing the user to examine the learned assembly process and confirm it before interacting with the physical robot. Afterwards, the system generates new finger designs for the robot's grippers, as well as candidate grasp positions that can be edited by the user. Then, in the last phase of training, the system performs the assembly under the instructor's supervision, and the robot can be guided through physical HRI (pHRI) to complete the training.

Figure 1. The interconnections of the HRI interface with the rest of the system
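The chapter does not specify the key-frame selection criterion, but a common heuristic in LfD pipelines is to keep the frames where a tracked part pauses or nearly stops moving, since those moments typically mark contact events in the assembly. The following sketch illustrates that heuristic on position data only; the function name and threshold values are assumptions for illustration, not the system's actual algorithm.

import numpy as np

def extract_keyframes(positions, timestamps, vel_thresh=0.01, min_gap=0.5):
    """Pick demonstration frames where a tracked part nearly stops moving.

    positions  : (N, 3) array of part positions from the Perception module
    timestamps : (N,) array of capture times in seconds
    vel_thresh : speed (m/s) below which the part counts as paused (assumed)
    min_gap    : minimum time (s) between consecutive key-frames (assumed)
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    # Finite-difference speed between consecutive frames
    dt = np.diff(timestamps)
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    keyframes = [0]                            # always keep the first frame
    for i, v in enumerate(speed, start=1):
        paused = v < vel_thresh
        spaced = timestamps[i] - timestamps[keyframes[-1]] >= min_gap
        if paused and spaced:
            keyframes.append(i)
    if keyframes[-1] != len(positions) - 1:
        keyframes.append(len(positions) - 1)   # always keep the last frame
    return keyframes

The key-frame editor described above would then let the instructor add or remove entries in the returned index list before the trajectory is replayed in the 3D simulation.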

The main contributions of the presented work can be summarized as follows:
