A Novel Cognitive Approach for Measuring the Trust in Robots

Akash Dutt Dubey, Bimal Aklesh Kumar
Copyright: © 2019 | Pages: 14
DOI: 10.4018/JITR.2019070104

Abstract

One of the major challenges in human-robot interaction is determining the trustworthiness of the robot. In order to enhance and augment human capabilities by establishing a human-robot partnership, it is important to evaluate the reliability and dependability of robots for specific tasks. The trust relationship between human and robot becomes especially critical in cases where there is strong cohesion between humans and robots. In this article, a cognition-based trust model has been developed which measures the trust and other related cognitive parameters of the robot. This trust model has been applied to a customized robot which performs path-planning tasks using three different algorithms. The model has been simulated to evaluate the trust in the robot for each of the three algorithms. The results show that with each learning cycle of each method, the trust in the robot increases. An empirical evaluation has also been done to validate the model.
Article Preview

1. Introduction

In recent times, human-robot collaboration has become an important part of our daily lives. To increase the acceptance and safe deployment of robots for specific tasks, it is important that users have a high degree of trust in the robots. Trust in automated robots' decision-making capabilities has been one of the major issues to emerge from human-robot collaboration. In an environment where tasks, resources, and information are shared among humans and robots, trust is an important aspect that encourages supportive behavior. In order to train operators to develop advanced skills, it is important to measure the level of trust that an individual has in the automated robot.

In order to provide the user with the required confidence and trust in the robot, it is important to model the trust and the other cognitive factors of the robot. This modeling becomes more significant for mobile robots which are not designed for social interaction, are task oriented, and are time dependent (Desai et al., 2012). These mobile robots can also end up hurting people, since their intent is not expressed to their users (Mutlu and Forlizzi, 2008).

As proposed by Lee and See (2004), trust can be defined as "the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability." The level of trust attained by the robot can serve as a key factor influencing its use and applications (Lee and See, 2004). Several studies have shown that users tend to use automation more when their level of trust is high (de Vries, Midden and Bouwhuis, 2003; Dzindolet et al., 2003; Lee and Moray, 1994; Riley, 1996). On the contrary, if users have more confidence in their own expertise than in the robot, they will opt for manual control rather than automation. Therefore, it is important to define an expertise index which can provide confidence to the user over the automated robot.
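The trade-off described above, in which an operator relies on automation only when trust in the robot exceeds confidence in his or her own skill, can be illustrated with a minimal decision rule. This is only a sketch: the function name, the normalized [0, 1] scores, and the difference threshold are assumptions made for illustration, not the expertise index defined in this article.

def choose_control_mode(trust_in_robot: float, self_confidence: float,
                        threshold: float = 0.0) -> str:
    """Pick 'automation' when trust in the robot exceeds the operator's
    self-confidence by more than the threshold, otherwise pick 'manual'.

    Both scores are assumed to be normalized to [0, 1]; this simple
    difference rule is illustrative, not the article's expertise index.
    """
    if trust_in_robot - self_confidence > threshold:
        return "automation"
    return "manual"

# Example: a confident expert keeps manual control despite moderate trust
print(choose_control_mode(trust_in_robot=0.6, self_confidence=0.8))  # -> manual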

In their studies, Mutlu and Forlizzi (2008) analyzed the workflow, social, and environmental factors that affect responses to the robot, as well as the changes caused by these factors. Research by Dzindolet et al. (2003) and Riley (1996) showed that automation reliability is a major factor affecting trust in the robot, where lower reliability affects the operator's trust negatively. Lee and Moray (1994) used a time series model (the autoregressive moving average vector model, ARMAV) to calculate trust. Farrell and Lewandowsky (2000) used a neural network to model the control allocation strategy so that it could predict future moves, which can then be used to model trust. Yagoda and Gillan (2012) proposed a trust measure for military robots for the purpose of human-robot interaction. Schaefer (2013), in a doctoral dissertation, developed a trust model that could evaluate the changes in trust between the human and the robot. The work of Dassonville et al. (1996) calculated the reliability, performance, and predictability of a joystick used to control a simulated PUMA arm; the participants in this work provided their ratings based on their experience with the errors encountered in the simulation.
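A minimal sketch of a time-series trust update in the spirit of Lee and Moray's ARMAV approach is shown below. The first-order form, the coefficient values, the binary fault term, and the clipping to [0, 1] are assumptions made for illustration, not the fitted parameters reported in the original study.

import random

def armav_trust_update(prev_trust: float, performance: float, fault: int,
                       a: float = 0.8, b: float = 0.15, c: float = 0.3,
                       noise_sd: float = 0.0) -> float:
    """One step of a first-order autoregressive trust update: current trust
    depends on previous trust, recent system performance, and whether a
    fault occurred (fault = 0 or 1).

    Coefficients and the clipping to [0, 1] are illustrative assumptions,
    not values fitted in Lee and Moray (1994).
    """
    trust = (a * prev_trust
             + b * performance               # good performance raises trust
             - c * fault                     # a fault lowers trust
             + random.gauss(0.0, noise_sd))  # optional residual noise
    return min(1.0, max(0.0, trust))

# Example: trust recovering over successive fault-free learning cycles
trust = 0.4
for cycle in range(5):
    trust = armav_trust_update(trust, performance=0.9, fault=0)
    print(f"cycle {cycle + 1}: trust = {trust:.2f}")

Under these assumed coefficients, trust rises monotonically across fault-free cycles, which mirrors the qualitative finding reported later in the article that trust increases with each learning cycle.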
