Trust in Cognitive Assistants: A Theoretical Framework

Md. Abul Kalam Siddike (University of Dhaka, Dhaka, Bangladesh) and Yoji Kohda (Japan Advanced Institute of Science and Technology, Nomi, Japan)
Copyright: © 2019 | Pages: 12
DOI: 10.4018/IJAIE.2019010104

Abstract

The main purpose of this research is to develop a framework of trust determinants in the interactions between people and cognitive assistants (CAs). CAs are defined as new decision tools that provide people with high-quality recommendations and help them make data-driven decisions to understand the environment around them. Trust is defined as people's belief that CAs will help them reach a desired decision. An extensive review of the trust literature in psychology, sociology, economics and policy making, organizational science, automation, and robotics was conducted to identify the factors that influence people's trust in CAs. On the basis of this review, a framework of trust determinants in people's interactions with CAs was developed, in which reliability, attractiveness, and emotional attachment positively influence people's trust in CAs. The framework also shows that the relative advantages of innovativeness positively affect the intention to use CAs. Future research directions are suggested for developing and validating more concrete scales for measuring trust determinants in the interactions between people and CAs.

Introduction

Today, Apple’s Siri, Google Now, Amazon’s Echo, IBM’s Watson, and other cognitive tools are beginning to reach a level of utility that provides a foundation for a new generation of cognitive collaborators and cognitive assistants (CAs) (Siddike & Kohda, 2018a; 2018b; 2018c; Siddike, Spohrer, Demirkan, & Kohda, 2018a; 2018b; Spohrer & Banavar, 2015). CAs are new decision tools that can augment human capabilities and expertise, helping us understand the environment around us with depth and clarity (Siddike, Iwano, Hidaka, Kohda, & Spohrer, 2017; Spohrer, 2016; Spohrer, Bassano, Piciocchi, & Siddike, 2017; Spohrer, Siddike, & Kohda, 2017). CAs can provide people with high-quality recommendations and help them make better data-driven decisions (Demirkan et al., 2015). For CAs to be adopted by society, trust is an essential issue to consider. The progression from cognitive tool to assistant to collaborator to coach to mediator is, in fact, a progression of trust (Siddike et al., 2018a; 2018b).

In the 19th century, people did not trust steam engines and boilers, because they often exploded. Over time, design and engineering improved, trust rose, and economic growth followed (Siddike & Kohda, 2018c). Consider one application of the steam engine in America (Arthur, 2011): in 1850, a decade before the Civil War, the United States' economy was small, not much bigger than Italy's. Forty years later, it was the largest economy in the world. What happened in between was the railroads (Arthur, 2011). In the 21st century, people do not fully trust CAs. Knowledge, technology, and organizations are three ways people augment themselves to become smarter (Norman, 1993), but all three must be trusted to spur economic growth. Advanced cognitive systems must become trusted social entities to be effective in our culture (Forbus, 2016). Only as trusted social entities can cognitive systems augment human intellect and interact with people to co-create new knowledge, technology, and organizations (Siddike et al., 2018a; 2018b).
