Investigating Factors Influencing the Quality of Crowdsourced Work under Different Incentives: Some Empirical Results

Evangelos Mourelatos, Manolis Tzagarakis
Copyright: © 2016 | Pages: 17
DOI: 10.4018/IJIDE.2016040102

Abstract

Crowdsourcing is a new form of online labor in which a problem is solved by soliciting contributions from a large group of people. In this paper, the authors investigate how different incentives affect the quality of work in such contexts by having the same task completed in three different environments: in a laboratory setting, on a social networking site, and on a crowdsourcing site. Analysis of the obtained results indicates that, under different incentives, different factors contribute to the quality of work in crowdsourcing tasks. In general, the research highlights that identifying the factors that contribute to higher quality of work in crowdsourcing environments is a complex question that depends on the task at hand.

1. Introduction

The rise of the web and its significant impact on people's daily routines have created suitable conditions for new economic processes to emerge. In this respect, crowdsourcing can be considered a fast-growing activity that involves the division of labor for tedious tasks. It can be used to successfully address a great variety of tasks, including funding, reviewing, idea hatching, and general searches for answers and solutions. In general, crowdsourcing is a process that obtains needed services, ideas, or content by soliciting contributions from a large group of people, in the form of an online community, rather than from traditional employees and/or suppliers. Crowdsourcing is often used to subdivide tedious work by combining the efforts of numerous self-identified volunteers or part-time workers, where each contributor, on their own initiative, adds a small portion to the greater result (Howe 2006). Nowadays, it is often used for particular types of work such as translation services, microtasks, image tagging, and transcription (Estellés-Arolas et al. 2012).

In recent years, crowdsourcing has attracted the interest of researchers in various fields who aim to analyze, comprehend, assess, and even improve this new form of labor, and ultimately to find strategies and frameworks for keeping the quality of the work being done at high levels (Howe 2008). An overview of the general principles of crowdsourcing aimed at achieving high quality of work is given in the existing literature (Yuen et al. 2011).

Quality of work in crowdsourcing is the extent to which the outcome provided by the worker fulfills the requirements of the requester (Allahbakhsh et al. 2013: 76-81). In general, quality of work in such contexts is considered a subjective issue, which is why many studies propose models and metrics to assess and ensure high quality of work in such environments. With respect to the existing models, two approaches to achieving quality results can be identified: approaches based on the profiles of individual workers and approaches focusing on the specification and design of the submitted task. Workers in crowdsourcing markets usually have different levels of expertise and experience and often adjust their effort according to incentives, thereby affecting the quality of the outcome (Wang et al. 2013). On the other hand, approaches focusing on task design (under which the requester describes the task to be completed) consider several components (task definition, user interface, granularity, and compensation policy), all of which affect the quality of the worker's result (Finnerty et al. 2013: 16-20).

In this paper, we aim to address questions related to which factors affect the quality of work in crowdsourced tasks when these tasks are performed under different incentives. In particular, the focus is on whether or not different incentives are associated with different factors influencing the quality of work (Chandler 2013: 123-133). Toward this end, we conducted experiments in which the same crowdsourcing task was submitted under three different incentive schemes and compared the quality of the work received.

This paper is structured as follows. In the next section, we describe the task workers had to complete and present the three environments in which the task was submitted. Subsequently, we present the methodology and show the results of the analysis. The paper concludes with a summary and some future research directions.
