Task Assignment and Personality: Crowdsourcing Software Development

Abdul Rehman Gilal, Muhammad Zahid Tunio, Ahmad Waqas, Malek Ahmad Almomani, Sajid Khan, Ruqaya Gilal
DOI: 10.4018/978-1-6684-3702-5.ch086

Abstract

The open call format of crowdsourcing software development (CSD) harnesses a large, diverse, and virtually unlimited pool of people. However, several thousand solutions may be submitted to the platform against each call, and matching a submitted task with the appropriate worker, and vice versa, remains a complicated problem. To address this issue, this study proposes a task assignment algorithm (TAA) that acts as an intermediate facilitator, at the platform, between the task (from the requester) and the solution (from the worker). The algorithm divides the task list according to the developer's personality, saving time for both developers and the platform by reducing search time.
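
The abstract does not detail how the TAA is implemented, but its core idea, partitioning the open task list by personality so that each developer searches only a matching sub-list, can be illustrated with a minimal sketch. The personality labels, field names, and one-to-one matching rule below are illustrative assumptions, not the chapter's actual model.

```python
from collections import defaultdict

def partition_tasks_by_personality(tasks):
    """Group open tasks by the personality profile their requirements suggest.

    Each task is a dict with an illustrative 'required_personality' field;
    this field name and the labels used are assumptions, not the chapter's model.
    """
    buckets = defaultdict(list)
    for task in tasks:
        buckets[task["required_personality"]].append(task)
    return buckets

def tasks_for_developer(developer_personality, buckets):
    """Return only the sub-list matching the developer's personality,
    so the developer searches a smaller set instead of the whole open call."""
    return buckets.get(developer_personality, [])

# Usage: a developer profiled as 'analytical' is shown only the analytical tasks.
open_tasks = [
    {"id": 1, "title": "Optimize query planner", "required_personality": "analytical"},
    {"id": 2, "title": "Write end-user tutorial", "required_personality": "collaborative"},
]
buckets = partition_tasks_by_personality(open_tasks)
print(tasks_for_developer("analytical", buckets))
```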

Introduction

Crowdsourcing has become an emerging trend for rapid software development because it supports parallel work and micro-tasking. It is also cost efficient, drawing on the knowledge of the crowd, or the "wisdom of the crowd". CSD uses an open call format involving three roles: 1) the requester, 2) the platform (i.e., the service provider), and 3) the crowdsourced developer (i.e., the person who does the coding and testing). This call format makes large numbers of tasks accessible and allows self-selection: on the platform, many developers can register and choose a task from the available set. Once developers submit their solutions, the platform must evaluate the submissions, decide on the best one, and pay the reward. According to the studies of Ke Mao et al. (Mao, Capra, Harman, & Jia, 2017; Mao, Yang, Wang, Jia, & Harman, 2015), selecting an appropriate task from the extensive set of available tasks is hectic work for developers. It is also a tiring and time-consuming job for the platform to evaluate thousands of submissions from developers. Ye Yang and M. C. Yuen (Yang, Karim, Saremi, & Ruhe, 2016; Yuen, King, & Leung, 2011) noted that, from the task requester's perspective, it is very hard to match developers with tasks and very difficult to monitor the risk posed by the reliability of CSD developers.
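
As a rough illustration of the open-call cycle described above (the requester posts a task, developers self-select and submit solutions, and the platform evaluates the submissions and rewards the best one), the following minimal sketch models the three roles. All class names, fields, and the toy scoring function are assumptions for illustration only, not an interface defined by any CSD platform.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    description: str
    reward: float
    submissions: list = field(default_factory=list)  # (developer_id, solution) pairs

@dataclass
class Platform:
    open_tasks: dict = field(default_factory=dict)

    def post(self, task: Task):
        # Requester issues an open call by posting the task on the platform.
        self.open_tasks[task.task_id] = task

    def submit(self, task_id: int, developer_id: str, solution: str):
        # A self-selected crowd developer submits a solution for the task.
        self.open_tasks[task_id].submissions.append((developer_id, solution))

    def award(self, task_id: int, score):
        # Platform evaluates all submissions and pays the reward for the best one.
        best = max(self.open_tasks[task_id].submissions, key=lambda s: score(s[1]))
        return best[0], self.open_tasks[task_id].reward

# Usage: one task, two competing submissions, longest solution wins (toy scorer).
platform = Platform()
platform.post(Task(1, "Implement login API", 150.0))
platform.submit(1, "dev_a", "draft")
platform.submit(1, "dev_b", "complete solution")
print(platform.award(1, score=len))  # -> ('dev_b', 150.0)
```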

In the same vein, Chilton and Eman Aldhahri (Aldhahri, Shandilya, & Shiva, 2015; Chilton, Horton, & Miller, 2010) argued that matching an inappropriate task to an inappropriate CSD developer may not only decrease the quality of the software deliverables but also overburden both the platform and the developers. They further noted that most workers view only a small number of the most recently posted tasks, because tasks are posted in the hundreds. Given the generally low skill and expertise levels of crowdsourced software developers, unrealistic matching between CSD workers and tasks may affect software quality. Latoza et al. (Latoza & Hoek, 2015) likewise emphasized that matching workers to tasks according to their expertise and knowledge, so as to get the maximum benefit from CSD workers, remains an open issue. A similar case is discussed in other studies (Geiger & Schader, 2014; Gilal, Jaafar, Omar, Basri, & Din, 2016; Gilal, Jaafar, Omar, Basri, & Waqas, 2016; Gilal, Omar, & Sharif, 2013; Tunio et al., 2017): respecting the extrinsic and intrinsic choices of CSD workers, the self-identification principle lets individual contributors select the tasks that best match their psychological preferences (i.e., personality). Personality is thus an important factor in aligning individual choices and capabilities with the respective task requirements. Moreover, choosing the few best submissions out of thousands is a hectic job at the CSD platform level, and not every CSD worker can be expected to provide the best solution for every task (Dang, Liu, Zhang, & Huang, 2016). More seriously, malicious workers can also submit solutions for review, increasing the load on the platform (Carmel, de Souza, Meneguzzi, Machado, & Prikladnicki, 2016; Carpenter & Huang, 1998; Nawaz, Waqas, Yusof, Mahesar, & Shah, 2017; Nawaz, Waqas, Yusof, & Shah, 2016; Waqas, Yusof, Shah, & Khan, 2014; Waqas, Yusof, Shah, & Mahmood, 2014). Keeping this in view, Leticia Machado et al. (Howe, 2006) stated that the CSD model must deal not only with technological issues but also with economic and personal issues, which make the model more complex.
