Introduction
Human-robot interaction is a broad area: it ranges from fine motion manipulation, to interaction between an operator and an exoskeleton (Schnieders & Stone, 2017), to search and rescue, to pure exploration (e.g., outer space exploration). The level of automation also varies widely, from manual control by a human operator, to semi-automation in which both human and autonomous control serve as input, to full automation, which requires no human input. Figure 1 illustrates a typical human-robot collaboration environment: the human is provided an interface that displays video/images of a remote environment (sometimes with additional task-related information) and controls the robot via a joystick (or another type of controller). The robot travels through the environment, relaying information back to the operator.
Figure 1. Example of typical human-robot collaboration application environment
In all of these application areas, navigation or partial navigation is a basic task required for accomplishing the system goal: the robot moves autonomously or semi-autonomously while the operator observes the environment, gaining awareness of the surroundings as well as the robot's relative orientation. Navigation is the process of accurately ascertaining position, planning a route, and following it; it consists of locomotion and wayfinding (Darken & Peterson, 2011; Montello & Sas, 2006). Locomotion refers to task execution, while wayfinding refers to goal-directed task planning. Figure 2 gives a brief decomposition of the task: the operator is responsible for perceiving and understanding the situation, deciding on the next step (or several steps) based on what is comprehended, and executing the plan.
Figure 2. Task analysis of navigation
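The perceive-comprehend-decide-execute decomposition above can be viewed as a simple control loop. The sketch below is purely illustrative, assuming a hypothetical `Operator` abstraction and toy sensor frames; none of the names come from a real robotics API.

```python
# Illustrative sketch of the navigation task decomposition:
# the operator perceives the situation, plans the next step
# (wayfinding), and a command is executed (locomotion).
# All names here are hypothetical, for exposition only.

class Operator:
    """Models the human operator's role in semi-autonomous navigation."""

    def perceive(self, sensor_frame):
        # Build situation awareness from video/sensor data.
        return {"obstacles": sensor_frame.get("obstacles", []),
                "heading": sensor_frame.get("heading", 0.0)}

    def plan_next_step(self, situation):
        # Goal-directed task planning (wayfinding): choose the next action.
        if situation["obstacles"]:
            return "turn"
        return "forward"


def navigation_loop(operator, frames):
    """Run one perceive-plan-execute cycle per incoming sensor frame."""
    commands = []
    for frame in frames:
        situation = operator.perceive(frame)   # perception & comprehension
        commands.append(operator.plan_next_step(situation))  # decision
    return commands                            # executed as locomotion
```

The point of the sketch is only that each cycle mixes human-side wayfinding (planning) with robot-side locomotion (execution), which is where the levels of automation discussed above differ.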
Wayfinding can be categorized into different groups according to the system goal and the availability of different resources: (1) a wayfinding aid, (2) knowledge of the destination's existence, (3) destination knowledge, (4) route knowledge, and (5) survey knowledge (familiarity with the environment). Tasks differ among categories, resulting in different requirements for task planning and for the information needed (Wiener, Buchner, & Holscher, 2009). Wayfinding in human-robot exploration falls under "uninformed search" in the taxonomy proposed by Wiener, Buchner, and Holscher (2009): a goal-directed search in an unfamiliar environment. This is very common among human-robot collaborative exploration applications such as military reconnaissance and urban search and rescue. In these applications, the system's goal is to identify and localize certain targets (e.g., victims, potentially dangerous objects, or enemies) in an unfamiliar/unknown environment. Tasks in such applications include (1) exploring the environment, (2) searching for targets, (3) localizing targets, and, most likely, (4) mapping out the environment.
For example, at the World Trade Center (Casper & Murphy, 2003), robots were sent into the disaster area and controlled by operators to conduct search and rescue operations. Human operators and robots collaborated mainly on the search/exploration task. The robots sent back video as well as environmental information from onboard cameras and other sensors, allowing the human operators to perceive what was happening at the remote end. Based on this perception and comprehension, the operators navigated the robots from place to place, looking for targets and victims, figuring out paths to reach them, and learning the situation around places of interest.