Perception Effects in Ground Robotic Tele-Operation

Richard T. Stone, Thomas Michael Schnieders, Peihan Zhong
Copyright: © 2018 | Pages: 20
DOI: 10.4018/IJRAT.2018070103

Abstract

The focus of this article is perception effects and enhancement during ground robotic tele-operation. Three independent factors were studied: scale perception, distance perception, and orientation awareness. An enhancement for each factor was proposed, implemented, and evaluated. The results show that under remote perception conditions, where the operator was separated from the environment in which navigation took place, both distance perception and scale perception were significantly impaired compared with direct perception conditions. In addition, each proposed enhancement significantly improved its corresponding factor. The broader impacts of this work apply to various human-robot collaboration applications, such as urban search and rescue. Applying the proposed enhancements will allow operators to have fewer failures when passing through hallways and doorways or maneuvering around obstacles, as well as a more accurate understanding of an area's layout when a map is not available.

Introduction

Human-robotic interaction is a broad area; it ranges from fine motion manipulation, to interaction between an operator and an exoskeleton (Schnieders & Stone, 2017), to search and rescue, to solo exploration (e.g., outer space exploration). The level of automation also varies widely, from manual control by a human operator, to semi-automation in which both human and autonomous control are used as input, to full automation, which requires no human input. Figure 1 illustrates a typical human-robot collaboration environment in which the human is provided an interface that displays video/images of a remote environment (sometimes with additional task-related information) and controls the robot via a joystick (or other type of controller). The robot travels through the environment, relaying information back to the operator.

Figure 1. Example of typical human-robot collaboration application environment
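As a rough illustration of the interaction loop in Figure 1, the following minimal Python sketch maps joystick input to velocity commands and relays a video frame back to the operator display. The JoystickReader, RobotLink, and VideoDisplay classes and their methods are hypothetical placeholders assumed for demonstration; they are not APIs from this article or from any particular robot platform.

import time


class JoystickReader:
    """Hypothetical joystick abstraction returning normalized axes in [-1, 1]."""

    def read_axes(self):
        # A real implementation would poll a HID device here.
        return {"forward": 0.0, "turn": 0.0}


class RobotLink:
    """Hypothetical communication link to the remote robot."""

    def send_velocity(self, linear, angular):
        # A real implementation would transmit the command over a radio/network link.
        print(f"cmd: linear={linear:.2f} m/s, angular={angular:.2f} rad/s")

    def receive_frame(self):
        # A real implementation would return the latest camera frame.
        return None


class VideoDisplay:
    """Hypothetical operator-side display for the remote video feed."""

    def show(self, frame):
        pass  # Render the frame on the operator interface.


def teleoperation_loop(max_linear=0.5, max_angular=1.0, rate_hz=10):
    """Map joystick axes to velocity commands and relay video at a fixed rate."""
    joystick, link, display = JoystickReader(), RobotLink(), VideoDisplay()
    period = 1.0 / rate_hz
    for _ in range(3):  # a few iterations for illustration; real loops run until shutdown
        axes = joystick.read_axes()
        link.send_velocity(axes["forward"] * max_linear, axes["turn"] * max_angular)
        display.show(link.receive_frame())
        time.sleep(period)


if __name__ == "__main__":
    teleoperation_loop()

The point of the sketch is only the division of labor: the operator side converts control input into motion commands, while all perception of the remote scene arrives through the relayed sensor stream.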

In all application areas, navigation or partial navigation is a basic task required to accomplish the system goal. In this task, the robot moves autonomously or semi-autonomously while the operator observes the surroundings, gaining awareness of the environment as well as the robot’s relative orientation. Navigation is the process of accurately ascertaining position, then planning and following a route. It consists of locomotion and wayfinding (Darken & Peterson, 2011; Montello & Sas, 2006): locomotion refers to task execution, while wayfinding refers to goal-directed task planning. Figure 2 presents a brief decomposition of the task, in which the operator is responsible for perceiving and understanding the situation, deciding on the next step(s) based on what is comprehended, and executing the plan.

Figure 2. Task analysis of navigation
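The decomposition in Figure 2 can be made concrete with a small sketch that separates wayfinding (goal-directed planning) from locomotion (execution). The occupancy grid, breadth-first planner, and waypoint stepping below are illustrative assumptions for demonstration only, not the navigation method used in this study.

from collections import deque


def wayfind(grid, start, goal):
    """Goal-directed task planning: breadth-first search over a small occupancy grid."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = current
                frontier.append(nxt)
    # Walk back from the goal to reconstruct the planned route.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from.get(node)
    return list(reversed(path))


def locomote(path):
    """Task execution: step through the planned waypoints one at a time."""
    for waypoint in path:
        print(f"moving to {waypoint}")  # a real robot would issue motion commands here


if __name__ == "__main__":
    occupancy = [[0, 0, 0],   # 0 = free cell, 1 = obstacle
                 [1, 1, 0],
                 [0, 0, 0]]
    route = wayfind(occupancy, start=(0, 0), goal=(2, 0))
    locomote(route)

The separation matters because, in remote operation, the quality of the wayfinding step depends on how well the operator perceives and comprehends the relayed environment.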

Wayfinding can be categorized into different groups according to the system goal and the availability of different resources: (1) a wayfinding aid, (2) knowledge of the destination’s existence, (3) destination knowledge, (4) route knowledge, and (5) survey knowledge (familiarity with the environment). Tasks differ among categories, resulting in different requirements for task planning and for the information needed (Wiener, Buchner, & Holscher, 2009). Wayfinding in human-robot exploration is scoped as “uninformed search” in the taxonomy proposed by Wiener, Buchner, and Holscher (2009), that is, a goal-directed search in an unfamiliar environment. This is very common in human-robot collaborative exploration applications such as military reconnaissance and urban search and rescue. In these applications, the system’s goal is to identify and localize certain targets (e.g., victims, potentially dangerous objects, enemies) in an unfamiliar or unknown environment. Tasks in such applications include (1) exploring the environment, (2) searching for targets, (3) localizing targets, and, most likely, (4) mapping out the environment.
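To make the role of these knowledge resources concrete, the sketch below encodes them as flags and labels the one case discussed here, uninformed search, as the situation in which none of them is available. The WayfindingResources class and the classification rule are assumptions introduced purely for illustration; they are not part of Wiener, Buchner, and Holscher's (2009) formal taxonomy or of this article's method.

from dataclasses import dataclass


@dataclass
class WayfindingResources:
    """Which resources the operator has; fields mirror the five categories above."""
    wayfinding_aid: bool        # e.g., a map or signage
    destination_exists: bool    # a specific destination is known to exist
    destination_knowledge: bool
    route_knowledge: bool
    survey_knowledge: bool      # familiarity with the environment's layout


def classify(resources: WayfindingResources) -> str:
    """Label the task; only the uninformed-search case is spelled out here."""
    if not any((resources.wayfinding_aid, resources.destination_knowledge,
                resources.route_knowledge, resources.survey_knowledge)):
        return "uninformed search"  # goal-directed search in an unfamiliar environment
    return "other wayfinding task"


if __name__ == "__main__":
    # A typical urban search and rescue setup: a target exists, but nothing is known about where.
    usar_task = WayfindingResources(False, True, False, False, False)
    print(classify(usar_task))  # -> "uninformed search"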

For example, at the World Trade Center (Casper & Murphy, 2003), robots were sent into the area and controlled by operators to conduct search and rescue operations. Human operators and robots collaborated mainly on the search/exploration task. In that application, the robots sent back video as well as environmental information from onboard cameras and other sensors so that human operators could perceive what was going on at the remote end. Based on that perception and comprehension, the operators navigated the robots from place to place, looking for targets and victims, figuring out paths to reach them, and learning the situation around places of interest.
