Verbal vs. Nonverbal Cues in Static and Dynamic Contexts of Fraud Detection in Crowdsourcing: A Comparative Study

Wenjie Zhang, Yun Xu, Haichao Zheng, Liting Li
Copyright: © 2022 | Pages: 28
DOI: 10.4018/JGIM.310928

Abstract

As an important mode of open innovation, crowdsourcing can effectively integrate external resources, enabling enterprises to gain stronger competitiveness and greater benefits at higher speed and lower cost. However, this mode faces inevitable intellectual property protection challenges, especially on contest-based crowdsourcing platforms. Previous studies mostly focused on protecting the rights of sponsors while ignoring the rights of workers, rarely paying attention to sponsor fraud, which may reduce participants' enthusiasm and eventually turn crowdsourcing into a "lemon market." This study proposes several fraud detection models to address this problem on contest-based crowdsourcing platforms. Furthermore, this paper explores and compares the value of four types of information as deception cues in crowdsourcing contexts via data mining technology and machine learning methods. The results benefit participants in crowdsourcing markets and contribute to fraud detection research and open innovation in the knowledge economy.

Introduction

Crowdsourcing is among the most celebrated and successful emerging digital-economy business models. The ability of the online market to efficiently bring together individuals and businesses has redefined and transformed traditional ways of conducting business. Individuals with discretionary time and shared interests congregate in online communities (Howe, 2008), generating a large pool of participants willing to invest their time and effort in the crowdsourcing market. In particular, enterprises have increasingly leveraged online crowdsourcing marketplaces to seek solutions to business problems (Chen et al., 2020). Large enterprises such as Dell have turned customer complaints into increased profit margins by tapping the crowd for solutions to their problems. Furthermore, many individuals and small and medium-sized companies participate in third-party crowdsourcing platforms such as Amazon Mechanical Turk (AMT), CrowdFlower, and Upwork. An essential objective of these crowdsourcing markets is to attract high-quality workers and obtain reasonable, diverse solutions (Terwiesch & Ulrich, 2009; Terwiesch & Xu, 2008). Many scholars have studied the fraudulent behaviors of workers on such platforms, some of whom try to maximize their financial gains by producing generic answers or copying others' solutions rather than working on the project (Eickhoff & de Vries, 2011; Hirth et al., 2010; Li et al., 2016). Instead of studying workers' fraudulent behaviors, this paper focuses on fraudulent actions by sponsors on contest-based crowdsourcing platforms for design tasks, such as 99designs and DesignCrowd. Most projects on these platforms involve design work, including logos, clothing, and website interfaces. A designer can choose a crowdsourcing contest and submit their work; if the sponsor selects a work as the optimal one, its designer receives the reward. This model provides the sponsor with a convenient source of multiple solutions.

Relative to work-for-hire IT platforms, contest-based crowdsourcing platforms raise new sponsor-fraud problems that must be highlighted. Opportunistic, fraudulent sponsors facing moral hazard have ample opportunities to misappropriate solutions without rewarding workers. Cases of sponsor fraud fall into three types: "double identity fraud," "solution embezzlement," and "payment refusal" (Pang, 2015). If a fraudulent project is completed, it results in direct losses for workers. Furthermore, other, non-fraudulent projects suffer because fraudulent projects divert the attention the former should have received. Finally, sponsor fraud may lead to decreased user engagement and, ultimately, failure of the crowdsourcing market. Therefore, effective identification of these frauds is critical to the sound development of crowdsourcing (Deng et al., 2016; Pang, 2015; Pennebaker, 2013; Schlagwein et al., 2019). However, most existing fraud detection approaches target money-driven false fabrications and are designed to distinguish true requirements from false ones. In contrast, the motives and needs of initiators in crowdsourcing fraud are not fabricated but real, which may cause existing deception cues to fail. This paper uses multiple machine learning models to examine the effectiveness of each type of cue, providing reliable cue-selection support for this fraud scenario.
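The cue-comparison approach described above — training a classifier separately on each cue type and comparing predictive performance — can be sketched in miniature as follows. This is a minimal illustration, not the paper's actual method: the two cue groups ("verbal" and "nonverbal"), the single feature in each, and the synthetic data are all hypothetical placeholders, and the classifier is a plain logistic regression trained by gradient descent.

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression classifier with plain per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            z = max(min(z, 30.0), -30.0)          # clamp to avoid math.exp overflow
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid probability of fraud
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def accuracy(w, b, X, y):
    """Fraction of projects whose predicted label matches the true label."""
    correct = 0
    for xi, yi in zip(X, y):
        z = b + sum(wj * xj for wj, xj in zip(w, xi))
        correct += int((1 if z >= 0 else 0) == yi)
    return correct / len(y)

random.seed(42)
# Synthetic projects: label 1 = fraudulent sponsor, 0 = legitimate.
# Each cue group is reduced to one illustrative numeric feature.
data = []
for _ in range(200):
    fraud = random.random() < 0.5
    verbal = [random.gauss(1.0 if fraud else 0.0, 1.0)]     # e.g. vagueness of the task text (hypothetical)
    nonverbal = [random.gauss(0.5 if fraud else 0.0, 1.0)]  # e.g. atypical posting behavior (hypothetical)
    data.append((verbal, nonverbal, 1 if fraud else 0))

labels = [d[2] for d in data]
for name, idx in [("verbal", 0), ("nonverbal", 1)]:
    X = [d[idx] for d in data]
    w, b = train_logreg(X, labels)
    print(f"{name} cues -> training accuracy {accuracy(w, b, X, labels):.2f}")
```

Because the synthetic "verbal" feature is generated with a wider class separation than the "nonverbal" one, it should yield the higher accuracy here; in the actual study the relative value of each cue type is an empirical question answered on real platform data.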
