Human-AI Collaboration: Introducing the Virtual Collaborator

Dominik Siemon, Timo Strohmann
DOI: 10.4018/978-1-7998-4891-2.ch005

Abstract

Continuous increases in computing power and the growing availability of data contribute to the maturing of AI, enabling efficient and powerful human-AI collaboration. As a result, the nature of work will change, and AI will increasingly be involved in joint work. In this study, the authors introduce the so-called Virtual Collaborator (VC), an equal partner in digital collaboration, and report the results of an online study. Based on the results of this study and current research, the concept of a VC was constructed, consisting of potential roles, tasks, level of autonomy, and behavior. This construct can be used as a guideline to design and implement a VC in collaboration scenarios.

Introduction and Motivation

New challenges and constantly emerging complex problems, caused by an increasingly networked world, emerging technologies, and ambitious customer requirements, require companies to benefit from teamwork and collaboration (Dulebohn & Hoch, 2017; Finkbeiner & Morner, 2015). Information and communication technology (ICT) enables digital collaboration that is location- and time-independent. Such digital collaboration is now part of the day-to-day business of many knowledge workers, in which various team members work together using dedicated systems for communication, information exchange, and general collective value creation (Driskell et al., 2003; Dulebohn & Hoch, 2017; Fiol & O’Connor, 2005). Research in this field has been conducted over the past decades, leading to a variety of guidelines and computer systems that support tasks such as decision making, project and knowledge management, or creativity (Resnick et al., 2005; Siemon et al., 2017; Voigt & Bergener, 2013).

In addition, continuous improvements in computing power and the development of novel algorithms have significantly matured artificial intelligence (AI) (Russell & Norvig, 2016). This improvement has led to a number of systems and services, like Amazon’s Alexa, Apple’s Siri, or IBM’s Watson. With Google’s TensorFlow, Facebook’s Wit.ai, or other services, software developers are now able to implement AI within their products or services more easily. This results in smarter services that use AI to interact and even collaborate with customers (Aleksander, 2017; Schwartz et al., 2019; Spinella, 2018). This upswing of AI challenges research and existing theories on collaboration mechanisms, methods, and phenomena in group and teamwork (Aleksander, 2017; Anderson et al., 2018; Schwartz et al., 2019).

The interdisciplinary research field of computer-supported collaborative work (or collaboration technology) has already revealed the various mechanisms required for successful collaboration via information systems (Borghoff & Schlichter, 2000; Grudin, 1994; Siemon et al., 2017). A number of design principles have emerged from this research, proposing guidelines and characteristics that support interaction and group dynamics, for example to reduce negative cognitive or social group effects such as production blocking, evaluation apprehension, or social loafing (Diehl & Stroebe, 1987; Voigt & Bergener, 2013). Evaluation apprehension, the fear of criticism, is a phenomenon that appears when individuals hold back ideas because they anticipate negative comments and critique. As a result, ideas and thoughts that might be valuable for beneficial innovation are withheld (Diehl & Stroebe, 1987; L. M. Jessup et al., 1990). Anonymity has been shown to reduce evaluation apprehension, as users can contribute anonymously and are less afraid, or not afraid at all, of criticism. However, anonymity increases social loafing, a phenomenon in which individuals exert less effort in a group (L. M. Jessup et al., 1990). A study from 2015 used an AI-like support system to overcome evaluation apprehension in a group setting (Siemon et al., 2015). The researchers implemented a pseudo-AI within a creativity support system and examined whether participants were afraid to contribute when interacting with and being supported by an artificial collaborator. The phenomenon of evaluation apprehension was not observed in the experiment (Siemon et al., 2015). Even though the study is limited by its small number of participants and the results only show a tendency, it indicates that such novel mechanisms need to be analyzed further.
