1. Introduction
Crowdsourcing harnesses the power, wisdom, or financial resources of a typically large and diverse group of people to solve problems that are impossible, difficult, or time-consuming for computers to solve alone. While crowdsourcers benefit from the typically fast and cheap solutions that crowdsourcing promises, the crowd is also incentivised, usually by a nominal payment or by an intangible reward such as social recognition. Consequently, crowdsourcing has emerged as an alternative to outsourcing, and several commercial crowdsourcing platforms are now available on the web. Amazon Mechanical Turk (Buhrmester et al. 2011, Kittur et al. 2008) and Threadless (Brabham 2010, Wu et al. 2010) are examples of such platforms.
By utilising the diverse knowledge, scale and speed of the crowd, crowdsourcing has attracted attention across several domains of study. Crowdsourcing has been used in environmental sciences (Fraternali et al. 2012, Gao et al. 2011), in medicine (Foncubierta Rodríguez & Müller 2012, Yu et al. 2012), in business and marketing (Chanal et al. 2008, Whitla 2009), in sociology (Heinzelman et al. 2011, Wexler 2011), in astronomy (Gay et al. 2014, Harvey et al. 2014), and in computer sciences (Hosseini et al. 2014b, Sherief et al. 2014). In spite of this popularity, there is still a lack of engineering approaches for crowdsourcing projects. One of the engineering challenges is the configuration of a crowdsourcing project in terms of the choices made for each of its four pillars: the crowd, the crowdsourcer, the task to be solved, and the platform which accommodates the project (Hosseini et al. 2014a).
In (Hosseini et al. 2014a), we proposed a taxonomy for crowdsourcing. The taxonomy was elicited by analysing 113 papers on crowdsourcing that clearly defined the concept. It covered the four main pillars of crowdsourcing and proposed a set of inter-relations between various crowdsourcing features. These relations mainly concern compatibility (excludes and hinders) and dependency (requires and supports) between features. Exploring these relations, and when they apply, will guide the configuration of crowdsourcing projects. That is, similar to the configuration of a software product line (Clements & Northrop 2002), they will guide the choice between features when setting up a crowdsourcing project.
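To illustrate how such relations could mechanically guide configuration, the sketch below models the four relation types as constraints over a feature selection. The feature names and the specific relations are hypothetical examples, not entries from the taxonomy itself; a real checker would be populated from the taxonomy's actual feature pairs.

```python
# Hypothetical sketch of a configuration checker for crowdsourcing features.
# The relations below are illustrative placeholders, not taxonomy entries.
RELATIONS = [
    ("anonymity", "excludes", "social_recognition"),
    ("low_pay", "hinders", "high_quality_output"),
    ("expert_crowd", "requires", "qualification_test"),
    ("micro_tasks", "supports", "large_crowd"),
]

def check_configuration(selected):
    """Return a list of conflicts and warnings for a set of chosen features."""
    issues = []
    for a, rel, b in RELATIONS:
        if rel == "excludes" and a in selected and b in selected:
            issues.append(f"conflict: '{a}' excludes '{b}'")
        elif rel == "hinders" and a in selected and b in selected:
            issues.append(f"warning: '{a}' hinders '{b}'")
        elif rel == "requires" and a in selected and b not in selected:
            issues.append(f"missing dependency: '{a}' requires '{b}'")
    return issues

# Example: an inconsistent selection triggers a conflict and a missing dependency.
print(check_configuration({"anonymity", "social_recognition", "expert_crowd"}))
```

A platform wizard built this way could run such checks interactively, flagging incompatible choices as the crowdsourcer selects features and suggesting the dependencies that the selection still lacks.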
The analysis and specification of these relations between features benefit several stakeholders. Platform developers could use them as a baseline for a configuration wizard or recommendation system. Crowdsourcers would become aware of the inter-dependencies and obstacles entailed by certain choices. Ultimately, the quality of the crowdsourced task and the crowd's experience could also improve when setting up a crowdsourcing project is informed by best practice.
In this paper, to capture that best practice, we consolidate our initial template of the inter-relations between features proposed in (Hosseini et al. 2014a) through an expert opinion study. The study included 36 experts in crowdsourcing who have applied it in practice and published research results on it. Our analysis uncovered further relations, as well as differing views on the relations between particular pairs of features. This diversity in views and arguments is expected, given that many of the features relate to the behaviour and perception of individuals and crowds. We report on those arguments and explain the various views. Our study enriches the body of knowledge on crowdsourcing with a guide to configuring and setting up a crowdsourcing project, and it encourages further research to refine and extend our findings.