Using the raw materials extracted from the knowledge sources, this chapter articulated and built a conceptual model supporting the establishment of crowdsourcing as an organisational business process. Such a conceptual model played several important roles in this research. By articulating the raw materials into organised BPC information, the model provided an abstract and holistic view of the BPC domain (Cross, 1982). This articulation also underpinned the conceptualisation of BPC and thus provided a means to explore the field, a role suggested by Hevner et al. (2004), who note that design science research may start with “simplified conceptualizations and representations of problems” (p. 85). Finally, the conceptual model should also be seen as a research outcome, since a conceptual model constitutes an IS artefact per se (Hevner et al., 2004).

As the built model served as an IS artefact, it needed to be rigorously evaluated. The current chapter evaluated the model using a case study approach. More precisely, this evaluation examined the model in two crowdsourcing projects, which confirmed the adequateness and utility of the model. Within the research process, the case study provided an empirical evaluation of the model, complementing the previous research efforts to conceptualise BPC. We note that this chapter is based on, and provides further details of, the journal publication by Thuan et al. (2017).

4.1 A Process Model for BPC Establishment

To build the conceptual model, we followed guidance from Webster and Watson (2002) and Jabareen (2009) for conceptualising models from the extant literature. These authors suggest that a conceptual model can be built and generalised based on a literature review. In particular, Webster and Watson (2002) suggest analysing the related literature for main concepts and processes, which are the main materials for model construction. Agreeing with this suggestion, Jabareen (2009) further recommends viewing a conceptual model not as a simple set of concepts, but rather as an organised structure where each concept plays an integral role. Following these suggestions, the current research used the key building blocks drawn from the scoping review and structured them in a meaningful way. Since these building blocks were repeatable processes of crowdsourcing, this structure led us to construct a process model of BPC.

We structured the original BPC building blocks (Table 3.2) to construct the process model of BPC. However, structuring these building blocks was not a straightforward task, since they covered very different concerns. To address this difficulty, the three-stage framework discussed in Sect. 2.3.3 was used as a starting point for the structuring process. We allocated each building block to one of the three stages: decision to crowdsource, design, and configuration. The allocations to the decision to crowdsource and configuration stages were straightforward, because these building blocks exhibited strong conceptual links to the stages. For instance, building blocks such as ‘circumstance to crowdsource and decision factors’ and ‘characteristic of crowdsourcing’ were logically linked to the decision to crowdsource. Similarly, ‘technical configuration’ was clearly linked to the configuration activity.

However, allocating building blocks to the design activity was more difficult, since the links extracted from the reviewed sources were more diffuse. To organise these building blocks logically, we classified them into plan-time and operation-time categories according to when they are processed. ‘Task design’ and ‘workflow design’ belong to the plan-time category, as they should be completed before the tasks are sent to the crowd. The remaining building blocks, namely ‘crowd management’, ‘quality control’ and ‘incentive mechanism’, comprise activities that are operationalised while the crowd performs the tasks. In particular, crowd management includes profiling the crowd; quality control includes identifying cheating behaviours; and incentive mechanisms include dynamic pricing, all of which process information while the crowd performs the tasks. This structuring led to the process model shown in Fig. 4.1.

Fig. 4.1 A process model for BPC establishment

We now describe the process model in more detail. As seen in Fig. 4.1, the model adopts the input-process-output (Pedersen et al., 2013) and stage-gate configurations (Cooper, 2008) that are typical of process models. It consists of seven components structured into three stages, which are described as follows.

Decision to crowdsource. The crowdsourcing process is triggered by an opportunity to crowdsource a piece of work, which starts the decision to crowdsource. This component initially conceptualises the crowdsourcing strategy in order to “decide whether the crowdsourcing approach is appropriate to solve their internal problem/problems [tasks]” (Muhdi et al., 2011, p. 322). It is a logical antecedent to any crowdsourcing project, analogous to the ‘make or buy’ decision in outsourcing projects. By making it explicit in the model, we signal that the decision to crowdsource should be founded on a reasoned assessment of whether the context is adequate for crowdsourcing.

To make a well-founded decision on whether or not to crowdsource, organisations need to evaluate several decisional factors. Table 3.3 has already identified several factors influencing the decision to crowdsource. However, many factors in Table 3.3 are linked to each other and therefore need further arrangement. Given that, we structured these factors into a decision framework to support managers in making informed decisions when they consider crowdsourcing. To keep the current section focused on the process model, and because of the important role of the decision to crowdsource, we present this framework separately in the next section.

Design. After the decision to crowdsource has been made, this stage covers a set of design activities necessary to operationalise the decision. It includes five components: task design, workflow design, crowd management, quality control, and incentive mechanism. Task design aims at transforming the conceptual ideas about the crowdsourcing tasks into a concrete task description (Model component 2A). Most of the reviewed sources recommend clearly defining the tasks to be crowdsourced (Malone et al., 2010; Rosen, 2011). The aim of this component is to produce a complete task description that can be given to the potential crowd members who may perform the tasks. To define these tasks, the properties suggested by Zheng et al. (2011) and Tokarchuk et al. (2012), such as significance and autonomy, should be taken into account.

The next component concerns workflow design. This involves task decomposition and result aggregation (Model component 2B). The former decomposes the list of tasks into smaller tasks, which can often be performed with massive parallelism. This decomposition increases the potential number of workers interested in participating in the open call (Afuah & Tucci, 2012; Kulkarni, Can, & Hartmann, 2012). A counterpart of decomposition is result aggregation, which concerns the definition of how the outputs from the smaller tasks will be put together so that the objectives of the overall task may be fulfilled (Geiger et al., 2011). Result aggregation is closely linked to task decomposition as they are two sides of the same coin. Kittur et al. (2013) explain this relationship as a workflow that “facilitates decomposing tasks into subtasks, managing the dependencies between subtasks, and assembling the results” (p. 5).
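To make the pairing of decomposition and aggregation more concrete, the following Python sketch illustrates one simple pattern: a large set of work items is split into fixed-size sub-tasks, and the sub-task outputs are later merged into one overall result. The function names, the batch size of three, and the dictionary-based result format are illustrative assumptions on our part rather than details drawn from the reviewed sources.

```python
from typing import Dict, List


def decompose(items: List[str], batch_size: int = 3) -> List[List[str]]:
    """Split a large task (a list of work items) into smaller sub-tasks."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]


def aggregate(sub_results: List[Dict[str, str]]) -> Dict[str, str]:
    """Assemble the outputs of the sub-tasks back into one overall result."""
    overall: Dict[str, str] = {}
    for result in sub_results:
        overall.update(result)
    return overall


if __name__ == "__main__":
    pictures = [f"picture_{i}.jpg" for i in range(1, 8)]
    sub_tasks = decompose(pictures)   # e.g. [[p1, p2, p3], [p4, p5, p6], [p7]]
    # Each sub-task would be published to the crowd; here we fake their outputs.
    sub_results = [{p: "tagged" for p in batch} for batch in sub_tasks]
    print(aggregate(sub_results))
```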

Crowd management is a design component that refers to how organisations manage the crowd members in order to accomplish the defined tasks (Model component 2C). The reviewed sources suggest two sub-components of crowd management: profiling the crowd and assigning tasks. First, organisations analyse the capacity required of crowd members to perform a task (Allahbakhsh et al., 2012; Kittur et al., 2013), and use this evaluation to build member profiles. Based on these profiles, organisations can form an overall picture of the crowd and may impose constraints on crowd recruitment (Chandler & Kapelner, 2013; Stewart et al., 2010). Second, based on the crowd profiles, task assignment can be executed; that is, tasks can be assigned to crowd members who have appropriate profiles. Examples of existing task assignment mechanisms include the auction-based mechanism (Satzger et al., 2011) and the scheduled mechanism (Khazankin, Satzger, & Dustdar, 2012b).
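As an illustration of profile-based task assignment, the sketch below selects, for a given task, the most reliable crowd member whose recorded skills meet the task requirements. The profile fields, the reliability threshold, and the greedy selection rule are hypothetical simplifications; actual mechanisms such as the auction-based or scheduled assignment cited above are considerably richer.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class WorkerProfile:
    worker_id: str
    skills: Set[str]
    reliability: float          # e.g. share of past submissions accepted


@dataclass
class Task:
    task_id: str
    required_skills: Set[str] = field(default_factory=set)
    min_reliability: float = 0.7


def assign(task: Task, workers: List[WorkerProfile]) -> Optional[WorkerProfile]:
    """Return the most reliable worker whose profile satisfies the task constraints."""
    eligible = [w for w in workers
                if task.required_skills <= w.skills
                and w.reliability >= task.min_reliability]
    return max(eligible, key=lambda w: w.reliability, default=None)


workers = [WorkerProfile("w1", {"tagging"}, 0.9),
           WorkerProfile("w2", {"tagging", "design"}, 0.6)]
print(assign(Task("t1", {"tagging"}), workers))   # -> profile of w1
```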

According to Table 3.2, quality control should be regarded as the most critical model component (Model component 2D). One distinctive characteristic of crowdsourcing is that tasks may be performed by crowd members with very different backgrounds, skills and expertise (Hirth, Hoßfeld, & Tran-Gia, 2012), which sometimes leads to a number of low-quality contributions. Thus, quality control mechanisms are critical to ensure the outputs meet the organisation’s quality goals (Allahbakhsh et al., 2013; Ipeirotis et al., 2010). By and large, quality control mechanisms can be grouped into design-time and run-time mechanisms (Allahbakhsh et al., 2013). At design-time, organisations can design tasks and workflows in a robust way to increase the chances of receiving high-quality contributions. For instance, Eickhoff and De Vries (2013) recommend defining tasks unambiguously and in a way that requires abstract thinking in order to increase the quality of contributions. At run-time, organisations can consider several active quality control mechanisms such as expert reviews, peer reviews, gold standards, output agreements, and peer assessments with majority voting (Allahbakhsh et al., 2013).
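The following sketch illustrates two of the run-time mechanisms listed above, majority voting and gold-standard checking, under the assumption that each item receives several independent labels and that a small set of items with known answers is available. It is a simplified reading of these mechanisms, not a description of any particular platform’s implementation.

```python
from collections import Counter
from typing import Dict, List, Optional


def majority_vote(labels: List[str]) -> Optional[str]:
    """Accept the label chosen by most workers, or None when there is a tie."""
    counts = Counter(labels).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None                      # no majority; route to an expert review
    return counts[0][0]


def gold_check(submissions: Dict[str, str], gold: Dict[str, str]) -> float:
    """Share of gold-standard items a worker answered correctly."""
    scored = [item for item in gold if item in submissions]
    if not scored:
        return 0.0
    return sum(submissions[i] == gold[i] for i in scored) / len(scored)


print(majority_vote(["cat", "cat", "rat"]))                                        # -> "cat"
print(gold_check({"img1": "cat", "img2": "rat"}, {"img1": "cat", "img2": "cat"}))  # -> 0.5
```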

Crowdsourcing relies on members of the crowd voluntarily performing tasks. Thus, organisations need incentive mechanisms to attract and engage these voluntary members in their open calls (Model component 2E). The reviewed sources suggest that incentive mechanisms should be developed based on two main types of motivation: intrinsic and extrinsic. For extrinsic motivation, most of the investigated sources have examined the adoption of financial incentives (Kaufmann et al., 2011; Mason & Watts, 2009). Regarding intrinsic motivation, a variety of factors have been suggested by the extant literature, such as fun (Doan et al., 2011), meaningful tasks (Chandler & Kapelner, 2013), and love of the community (Kaufmann et al., 2011).

Configuration. The final component considers how to configure a crowdsourcing process for instantiation in computational systems. Since this activity mainly concerns an in-depth technical view, for instance adopting specific architectures, frameworks, and proprietary or open computational platforms, the business perspective adopted by this study limits our considerations regarding this component. Besides, since several crowdsourcing platforms are readily available, we expect this component to be significantly constrained by the service providers. Furthermore, we note that the extant literature has already proposed several tools supporting the configuration process. That is, we expect that in the near future, given a designed crowdsourcing process, tools may be able to automatically translate such designs into process instantiations capable of running on specific crowdsourcing platforms. Examples of such translation tools include Turkit (Little et al., 2010), Crowdforge (Kittur et al., 2011), and BPMN4Crowd (Tranquillini et al., 2015). Given that, we regard the main output of this component as a configuration file necessary for implementing the crowdsourcing process, but we do not further investigate the low-level details already examined by these translation tools.

4.2 A Framework Supporting the Decision to Crowdsource

The decision to crowdsource plays an important role in the crowdsourcing process. This role is highlighted in the process model, which positions the decision to crowdsource as the first component of BPC (Fig. 4.1), and has been supported by several researchers (Lu et al., 2015; Lüttgens et al., 2014; Muhdi et al., 2011). Given this importance, researchers have proposed several factors influencing the decision to crowdsource, which have already been identified and summarised in Table 3.3.

In this section, we used the identified factors to build an analytical framework for supporting the decision to crowdsource. To this end, the ‘wisdom of researchers’ was applied to Table 3.3, eliminating factors suggested by only one reviewed source and focusing on those suggested by multiple sources. We then structured the remaining factors in a meaningful and manageable way. Specifically, we adapted the multi-layer approach proposed by Vicente (1999), which highlights the multiple concerns that need to be understood in the decision. Consequently, we classified the decision factors into four layers: task, people, management, and environment. These layers are depicted as a decision framework in Fig. 4.2. The framework was presented in Thuan et al. (2016) and is further explained below.

Fig. 4.2 A framework that supports the decision to crowdsource

Task Properties. According to Table 3.3, the reviewed sources suggest the task itself as a key factor in the decision to crowdsource (Kazman & Chen, 2009; Rouse, 2010; Zhao & Zhu, 2014). According to these sources, using the crowd may be suitable for certain tasks, but not for all kinds of tasks. Consequently, it is critical to examine task characteristics to evaluate whether an organisational task is suitable to be crowdsourced or not (Muntés-Mulero et al., 2013). This key role leads us to position this factor in the core layer of the framework. In this layer, we define six task properties.

The first property is whether a task can be performed or delivered online, i.e. its inputs/outputs can be delivered and collected through the Internet. Most of the reviewed sources consistently suggest that crowdsourcing should only be used for Internet activities (Brabham, 2008a; Doan et al., 2011; Muntés-Mulero et al., 2013). Some researchers go further, adding this property to the definition of crowdsourcing, which makes this factor one of the key underpinnings of crowdsourcing activities (Sect. 2.1.1).

The second property concerns the integration between crowdsourcing and the existing organisational business processes. This integration tightens the coordination between external tasks and internal business processes (Tranquillini et al., 2015), which is strongly aligned with the BPC perspective of this book. Furthermore, the important role of this factor is supported by several reviewed sources, which suggest examining not only individual crowdsourcing tasks but the whole business process (Kittur et al., 2013; Sakamoto, Tanaka, Yu, & Nickerson, 2011). The importance of this factor has grown recently due to the increasing adoption of crowdsourcing for complex organisational processes, including product development processes (Djelassi & Decoopman, 2013), industrial problems (Muntés-Mulero et al., 2013), and software development processes (Mao et al., 2017; Stol et al., 2017).

Interaction is the third property, which focuses on the ties between the organisation and the crowd members during crowdsourcing activities. Overall, crowdsourcing seems unsuitable for interactive tasks that require frequent exchanges between the organisation and the crowd, or between members of the crowd (Burger-Helmchen & Pénin, 2010), because it is quite hard to promote interaction when the crowd members are anonymous agents (Afuah & Tucci, 2012). Similarly, Muntés-Mulero et al. (2013) suggest avoiding crowdsourcing if complex training is required to fulfil a task. As a result, independent tasks that do not require much interaction or training to be accomplished are better suited to crowdsourcing.

Ten out of the fifty reviewed sources highlight the fourth property, ‘ease of delineation’, in the decision to crowdsource (Table 3.3); it considers how the task is defined and scoped. Zogaj et al. (2014), Seltzer and Mahmoudi (2013), and Lloret et al. (2012) all suggest that this property positively influences the decision to crowdsource. More precisely, organisations should adopt a crowdsourcing strategy when they have well-defined and clearly scoped tasks. Ease of delineation helps maximise the potential number of workers by increasing the crowd’s understanding of the task and thus improving their approach to it (Afuah & Tucci, 2012). It is worth noting that task delineation may have different levels of detail at different stages of the crowdsourcing process, from highly abstract in the decision to crowdsource to more specific in the design and configuration.

The fifth property is whether or not tasks include confidential information, which could raise privacy and security issues. Since crowdsourcing tasks are usually sent to anonymous members of the crowd, Muntés-Mulero et al. (2013) argue that tasks with confidential information are not suitable for crowdsourcing. In a similar vein, Burger-Helmchen and Pénin (2010) suggest that the decision to crowdsource should only be made if intellectual property rights can be clearly defined. While agreeing with this suggestion, other researchers believe that additional effort can mitigate the problem of sensitive information. Lu et al. (2015) and Feller et al. (2012) suggest decomposing tasks into a large number of smaller tasks to conceal the overall picture, which decreases the likelihood of privacy breaches and claims regarding intellectual property.

The sixth and final property is the ease with which a task can be partitioned into smaller pieces of work. The influence of this property on the decision to crowdsource is suggested by several reviewed sources. Malone et al. (2010), when discussing the collective intelligence of the crowd, point out that a crowdsourcing strategy is more adequate for tasks that can be partitioned. Similarly, Afuah and Tucci (2012), regarding problem-solving tasks, hypothesise that this property positively influences the probability of choosing a crowdsourcing strategy. Furthermore, this property indirectly affects the decision to crowdsource by strengthening the other aforementioned properties: partitionable tasks are expected to be easier to delineate (Feller et al., 2012) and to protect sensitive information (Lu et al., 2015), each of which positively influences the decision to crowdsource.

People. When making the decision to crowdsource, an organisation should consider the role that human capital plays in the crowdsourcing process, in terms of both the crowd members and internal human resources (Afuah & Tucci, 2012). The availability of crowd members to perform tasks is a key factor in the choice of crowdsourcing, since tasks in a crowdsourcing strategy are performed by the crowd. In general, Djelassi and Decoopman (2013) and Doan et al. (2011) suggest that high availability of members increases the possibility of adopting a crowdsourcing strategy. Afuah and Tucci (2012), examining crowdsourcing contests, identify a similar positive influence.

The availability of the crowd should be further considered through four sub-factors: the number of members in the crowd, Internet access, knowledge, and diversity. According to Table 3.3, the number of members and their ability to access the Internet are two determinants of crowd availability. Both Malone et al. (2010) and Marjanovic et al. (2012) indicate that the chance of an organisation choosing to crowdsource increases when there is a large pool of people available for the task. The requirement of Internet access within the targeted crowd reflects the fact that almost all crowdsourcing tasks are performed through the Internet. Consequently, Internet access influences the number of members available for crowdsourcing tasks (Brabham, 2008a; Saxton et al., 2013), and thus affects the decision whether to crowdsource or not. The other two sub-factors, knowledge and diversity, also play an important role in crowd availability, yet their roles seem to depend on the nature of the task. For instance, some tasks, like software development (Stol & Fitzgerald, 2014), require a certain type of knowledge from the crowd members, while others, such as solving a generic problem or innovation (Boudreau & Lakhani, 2013), need a crowd with diverse backgrounds. In short, the decision to crowdsource is influenced by “the constant availability of sufficient quantity and quality [knowledge and/or diversity] of online workers” (Corney et al., 2010, p. 244).

The reviewed sources also suggest considering the availability of internal employees when making the decision to crowdsource. If an organisation has too few internal employees compared to the human resources required for the task, crowdsourcing is suggested as a way to fill the gap (Malone et al., 2010). Lu et al. (2015) go further, explaining this decision in terms of both the number of employees and their knowledge of the tasks. For some tasks, like image tagging and translation, the required human resources often exceed an organisation’s capability, making crowdsourcing a good (if not the only) option. Agreeing with this suggestion, Afuah and Tucci (2012) further considered whether internal knowledge meets the requirements of the tasks, and consequently recommend using crowdsourcing if “the knowledge required to solve the problem falls outside the focal agent’s knowledge neighbourhood” (Afuah & Tucci, 2012, p. 369).

To sum up, the framework suggests that both high availability of the crowd and scarcity of internal employees for the tasks increase the possibility of choosing crowdsourcing. When comparing the two factors, the availability of the crowd should receive higher priority. The reason is that the crowd is one key underpinning of crowdsourcing (Sect. 2.1.1), which is again highlighted here by many reviewed sources, i.e. nineteen out of fifty sources in the reviewed pool, compared to three sources suggesting the role of scarce internal employees. Furthermore, even when organisations have enough internal employees for the tasks, crowdsourcing can still be a good approach that brings competitive advantages, e.g. strengthening customer relationships. This can be inferred from the many crowdsourcing projects promoted by well-resourced organisations, like Westpac bank (Westpac, 2013).

Management. Whether to crowdsource or not is a complex decision, which can influence the success of the whole project. Thus, it has to receive major attention from managers (Djelassi & Decoopman, 2013). From a managerial perspective, Rouse (2010) advises that the decision to crowdsource should only be made after examining costs, coordination, and risks. Recent studies additionally suggest that employees’ commitment is another factor influencing the decision to crowdsource (Lüttgens et al., 2014; Simula, 2013). Consequently, the management layer in our framework focuses on four factors: the project budget, the availability of expertise to coordinate the crowdsourcing activity, risks, and internal employees’ commitment.

When evaluating whether crowdsourcing is a suitable strategy, it is important to compare its efficiency in realising organisational goals with that of other alternatives. As cost saving is a key criterion for measuring efficiency (Muhdi et al., 2011), the budget of a crowdsourcing project influences the decision to crowdsource. Although there is high agreement on the important role of budget in the decision, the reviewed sources seem to disagree on how this factor influences the decision to crowdsource. As seen in Table 3.3, four sources suggest a low budget, whereas an equal number of sources suggest a reasonable budget before making the crowdsourcing decision. In particular, some sources argue that crowdsourcing is a preferred option when a project does not have enough money to hire new employees, or is a small-budget project (Malone et al., 2010). Others argue that a reasonable budget is required because, although the amount of money paid to the crowd may be small, other costs, like coordination and transaction costs, may increase (Lu et al., 2015). Although further studies are needed to resolve this disagreement, we suggest that the decision to crowdsource should be based on having a sufficient budget: one that is not enough to perform the tasks in the traditional ways, i.e. internal sourcing or outsourcing, but is sufficient to cover the crowdsourcing process.

Another factor in this layer is whether organisations can allocate appropriate expertise and experience to coordinate the multiple activities of crowdsourcing. This factor greatly influences the success of crowdsourcing; as Muhdi et al. (2011) state, at the beginning of a crowdsourcing project “a source of experience and expertise in crowdsourcing can be helpful to match company expectations and the realistic possibilities of crowdsourcing” (p. 323). As Rouse (2010) suggests, a lack of coordination can lead to a drain of resources and substantial delays.

By analysing the reviewed sources, we have identified a few risks that should be considered when deciding to crowdsource. According to Table 3.3, the most salient are the risks of low-quality results (Kannangara & Uguccioni, 2013; Naroditskiy et al., 2013) and loss of intellectual property (Schenk & Guittard, 2011). In crowdsourcing, where tasks are performed by voluntary crowd members, organisations have little control over members’ behaviour (Zhao & Zhu, 2014), which could lead to poor contributions to the project. As a result, the risk of low-quality results should be considered. Another risk is the loss of intellectual property (Marjanovic et al., 2012), which mainly relates to skilled tasks. When relying on the crowd members for these types of tasks, organisational knowledge may have to be transferred to them (Afuah & Tucci, 2012), and after the tasks are accomplished, knowledge related to the task may remain in the crowd. This implies the risk of losing intellectual property. Burger-Helmchen and Pénin (2010) claim that crowdsourcing should only be seen as a viable option if intellectual property can be managed and controlled. We further note that managing intellectual property is not only about hiding sensitive information, as mentioned in the task layer, but can be extended to other mechanisms, such as patents (Burger-Helmchen & Pénin, 2010) and intermediary platforms (Feller et al., 2012). In summary, organisations are more likely to decide to crowdsource if they can accept and manage the two aforementioned risks.

The fourth and final factor we consider in this layer is the organisational employees’ commitment to crowdsourcing activities, a concern suggested by recent studies (Lüttgens et al., 2014; Simula, 2013). This factor refers to the conflicting interests of employees and managers regarding the crowdsourcing activity, which relates to overcoming the ‘not invented here’ syndrome (Katz & Allen, 1982). Although only a few articles in the crowdsourcing literature consider this factor, we believe it is an important managerial concern because limited employee commitment “can jeopardise the success of an entire crowdsourcing project” (Muhdi et al., 2011, p. 322). This factor is all the more important because several tasks in a crowdsourcing project, such as task definition and workflow design, are performed internally by organisational employees and managers (Whitla, 2009; Zhao & Zhu, 2014). As a result, a lack of employee commitment may decrease the likelihood of choosing crowdsourcing (Lüttgens et al., 2014).

Environment. The primary factor in this layer is the choice between internal and external crowdsourcing platforms. In terms of cost, using an external platform saves development cost, which makes the decision to crowdsource more competitive. From a resource-based view, Lu et al. (2015) support this argument by specifying that “decisions on the use of online microsourcing [crowdsourcing] will be driven by the ability of online sourcing platforms to provide cheap service solutions, complement current resources, fill a resource gap, and to give access to a large pool of resources” (p. 4). Other reasons to adopt external platforms include the large and varied pools of members, the speed of launching the crowdsourcing project, and, in some cases, protecting intellectual property (Feller et al., 2012; Mason & Suri, 2012; Zogaj et al., 2014).

To sum up, the decision framework developed in this section has two characteristics. First, it structures the factors influencing the decision to crowdsource into the corresponding layers of task, people, management, and environment, a structure that is not apparent in individual sources of knowledge. Consequently, it can be used as a decision framework per se, supporting managers in their crowdsourcing decisions. Second, the framework details the first component of the process model (Fig. 4.1), and thus can also be seen as an integrated plug-in to the process model.
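For illustration only, the factors discussed in this section can be recorded as a simple checklist grouped by the framework’s four layers. The Python dictionary below merely restates the factors in code form for practitioners who wish to operationalise the framework; it implies no scoring or weighting scheme, which the framework itself does not prescribe.

```python
# A hypothetical checklist mirroring the four layers of the decision framework
# (Fig. 4.2); each question is answered per crowdsourcing opportunity.
decision_checklist = {
    "task": [
        "Can the task be performed and delivered online?",
        "Can it be integrated with existing business processes?",
        "Does it avoid frequent interaction and complex training?",
        "Is it easy to delineate (well defined and scoped)?",
        "Is it free of confidential information, or can confidentiality be mitigated?",
        "Can it be partitioned into smaller pieces of work?",
    ],
    "people": [
        "Is a sufficiently large, knowledgeable and/or diverse crowd available online?",
        "Are internal employees scarce relative to the task's requirements?",
    ],
    "management": [
        "Is the budget sufficient for crowdsourcing, though not for traditional sourcing?",
        "Is expertise available to coordinate the crowdsourcing activities?",
        "Can the risks of low quality and intellectual property loss be managed?",
        "Are internal employees committed to the crowdsourcing activity?",
    ],
    "environment": [
        "Is a suitable external platform available, or must an internal one be built?",
    ],
}
```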

4.3 Case Studies

Having constructed the process model, we then evaluated it using case studies. The decision to use case studies was driven by three reasons. First, case studies allowed the model to be evaluated in practical organisational environments, which are the target application of the model. Another reason came from the complex nature of crowdsourcing: evaluating a model that captures such a high level of complexity requires in-depth and detailed explanations of its components, links and overall structure, and the capacity to discuss the model in such detail is a distinctive characteristic of case studies. These reasons are supported by Yin (2013b), who states that “for evaluations, the ability to address the complexity and contextual conditions nevertheless establishes case study methods as a viable alternative among the other methodological choices” (p. 322). The third and final reason was that case studies are considered appropriate for evaluating design science artefacts in complex organisational settings (Peffers, Rothenberger, Tuunanen, & Vaezi, 2012).

4.3.1 Overview of the Approach

To evaluate the model, we first needed to choose evaluation metrics. In particular, we considered two metrics: the adequateness and the utility of the model. We defined adequateness as ‘the degree to which the components and their arrangement in the model align with the activities done in the studied crowdsourcing project’, and utility as ‘the usefulness of the model perceived by the crowdsourcing project managers and coordinators’. Using these two metrics, we collected and analysed data from two crowdsourcing projects.

4.3.2 Case Study Design

We followed the guidelines provided by Yin (2013a, 2013b) for designing case study evaluation research, covering case selection, data collection, data analysis, and validity.

Case Selection

The selection of crowdsourcing projects was based on comparability and access to source material. First, we selected projects with a comparable team size of between 2 and 10 members. This range was sufficiently large to include multiple project roles, which the model aims to support, but not so large as to introduce a diversity of settings that would overshadow the evaluation purposes. Second, we chose crowdsourcing projects where we had access to project participants and other data sources. As a result, two crowdsourcing projects, Crowd Tagging (CT) and Logo Design Contest (LDC), were selected.

The CT project was part of a bigger plan aiming to uncover the impact of New Zealand predators on biodiversity in urban areas. This plan involved the installation of motion-triggered cameras in 40 locations in New Zealand, which collected more than 65,000 pictures. The CT project aimed at identifying the animals captured in these pictures. Because of the large number of pictures that needed to be analysed, the project launched a website with an open call for help in tagging the pictures. The project involved a team of four members: project manager, designer, web developer, and consultant. The call went live from June to December 2014 and attracted over 300 users, about half of whom tagged more than 20 pictures.

The other project, LDC, utilised the crowd for artistic design. A university in the Mekong Delta, Vietnam, was founded in 2013 from what began as a tertiary education centre. As part of this transformation, the University needed a new logo that would represent its spirit. To design the logo, the University adopted a crowdsourcing approach, opening the logo design to designers from both inside and outside the University. It was in this spirit that the LDC project was created. The project started in May 2013 and finished in December 2013, when the winning logo was officially adopted by the University. The project had a leader, who made all project decisions, and a coordinator, who instantiated and controlled the contest. The project also involved the University Board, consisting of eight members, who made key strategic decisions about the project planning. When the project was launched, it received 68 logo designs from the crowd. Three of them were selected as winning solutions: two were awarded creative prizes and one was awarded the final winning prize, becoming the current logo of the University.

Data Collection

We collected data from multiple sources, both primary and secondary. Secondary sources included press releases, the open calls, meeting reports, and project websites, all of which provided materials necessary to clarify key project activities. The activities and their relationships were further detailed and validated in interviews. Across the two case studies, we conducted three in-depth interviews with project leaders and other participants, both face-to-face and through Skype. Due to the small size of the project teams, these interviewees wore ‘many hats’ and could therefore provide insights into several perspectives of the crowdsourcing projects. Besides being interviewed about the activities performed in the projects, the interviewees were asked to analyse a printed version of the model presented in Fig. 4.1 and to judge and comment on its usefulness. A summary of demographic information about the cases and their data sources is presented in Table 4.1.

Table 4.1 Demographic information about the two crowdsourcing cases

Data Analysis

To prepare the data for analysis, we first prepared a full description of each case, including details about the project, project team, and project activities. We then used the process model to map the project activities onto the model components, while critically analysing the interviewees’ comments about the model. More precisely, this empirical analysis included the following two activities.

Adequateness analysis: This analysis followed a pattern-matching technique (Yin, 2013a). We looked for major similarities, patterns, and notable differences between the model components and the activities reported for each project. We analysed each project starting from secondary data, which included considerable information about the project activities, followed by the analysis of the interviews and supplementary materials. The identified activities were then mapped onto the model, producing a final list of matching patterns (both similarities and differences) that allowed us to compare the project activities with the model components (presented in Figs. 4.3 and 4.4).

Fig. 4.3 Activities of crowd tagging (CT)

Fig. 4.4 Activities of logo design contest (LDC)

Utility analysis: We gathered judgements and comments from the interviewees regarding the perceived utility of the model. During the interviews, we asked evaluation questions, such as ‘what do you think about the model components?’ and ‘what do you think about the sequence of the model components?’. When analysing the answers to these questions, we focused on identifying patterns of ‘usefulness’, ‘future use’ and ‘future improvement’, rather than on ‘yes or no’ answers, as such direct answers are usually biased. The results are discussed in the next section.

4.3.3 Case Study Results

The case study results are structured according to the two investigated metrics, adequateness and perceived usefulness, which are presented in turn in this section.

Adequateness of the Model

To report on model adequateness, we graphically represent the project activities of the two cases using the model as a baseline. This highlights not only the similarities but also the differences between our model and the investigated projects. Figures 4.3 and 4.4 summarise the activities of the CT project and the LDC project respectively. To increase readability, the figures represent the similarities in normal font; differences in italic font; and sub-activities in smaller font size.

Based on these graphical representations, we observe a high adequateness of the model components. Both representations show strong concordance between the model components and the projects’ activities. Examples include the strong alignment on the decision to crowdsource, task design, workflow design, incentive mechanism, and quality control, and the partial alignment on crowd management and technical configuration. Several project sub-activities are also aligned with the model. However, both cases reveal several additional (sub-)activities that are necessary to instantiate the components in practice. Examples include developing a tutorial in the task design of the CT case, and aggregating results through voting in the workflow design of the LDC case. Nevertheless, we find a strong alignment between the model components and the two projects, which suggests a high adequateness of the model.

Regarding the interdependencies suggested by the model, the two investigated projects are also largely aligned, i.e. they generally adopt the sequence of steps from input, through the decision to crowdsource, the crowdsourcing design components, and configuration, to output. This alignment is stronger in the LDC case, where most components follow the model sequence. In the CT case, we find strong alignment in the first four components, but some differences in the relationships among the last three components. More precisely, the last three components of CT were developed in a more iterative way, rather than following a sequential relationship. More details about the activities and their interdependencies are presented below.

Crowd Tagging (CT)

The CT project started with an input consisting of a large number of pictures to be analysed. To process these pictures, the project manager decided to adopt crowdsourcing. He stated three supporting reasons: (1) limited human resources to process the vast amount of data; (2) allowing the wider community to access the collected data; and (3) increasing environmental awareness of the community. While the latter two reasons are specific to the nature of CT as a citizen science project, the first reason, considered the most important factor by the project manager, is consistent with the ‘decision to crowdsource’ component of the process model. More precisely, we consider the lack of internal employees to perform tasks as a factor driving the decision to crowdsource (Afuah & Tucci, 2012; Malone et al., 2010). Another reason CT should, and did, use crowdsourcing is the nature of the tasks: tasks in CT were Internet-based, did not require interaction, were not confidential, and were partitionable. Thus, they were appropriate to crowdsource (consistent with Fig. 4.2).

After deciding to crowdsource, the project manager specified the crowdsourcing process itself, starting with task design. A task description was developed to promote the general aims of the project and explain how the task could be fulfilled by the crowd: “this research aims to evaluate the use of remote cameras to estimate abundances of non-native predators in urban environments. You will be shown a series of images, taken earlier this year, from various cameras placed around the Wellington city and asked to identify the animal in the photograph” [CT, Website]. This task design is consistent with Model component 2A. We also note the project included a tutorial and a visual explanation of the task, which served to train the crowd on how to perform the tagging. Such a focus on training seems appropriate for this type of task, and the literature suggests that training the crowd may improve the results (Park, Shoemark, & Morency, 2014).

The CT project designed the crowdsourcing workflow through task decomposition. First, the whole activity was divided into sub-tasks of tagging three pictures, which the project referred to as a cluster. This clustering was directly related to how data were collected in the project: “the camera takes three pictures every time they detect something. Thus, the group of three pictures helps make the task easier to perform” [CT, Project manager]. The project also divided the whole set of pictures into three pools: sign-up pool, working pool, and finished pool. The first pool included 20 clusters (of three related pictures each), and a newly signed-up user would start tagging the clusters in this pool. After a user finished ten clusters from the sign-up pool, the website would direct the user to the working pool. This pool included the remaining pictures that needed to be tagged, and thus was the main working zone. When a cluster had been tagged more than three times, it was considered finished and was moved to the finished pool, which stored the tagging results. While the three-pool decomposition is expected to improve reliability, as discussed below, we note that this decomposition can, and should, be extended for training purposes. More precisely, the first pool can be used as gold standard data to give instant feedback and explanations as to why the crowd submissions may be (in)correct. By doing so, the crowd can learn and possibly provide better performance (Le, Edmonds, Hester, & Biewald, 2010).
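A minimal sketch of the pool logic described above is given below, assuming the thresholds reported by the project (ten clusters tagged before a user leaves the sign-up pool, and more than three tags before a cluster is considered finished). The class and method names are ours and are intended only to clarify the workflow, not to reproduce the project’s actual code.

```python
from typing import Dict, List


class Cluster:
    """A group of three related pictures, tagged by several users."""
    def __init__(self, cluster_id: str):
        self.cluster_id = cluster_id
        self.tags: List[str] = []


class TaggingWorkflow:
    SIGNUP_CLUSTERS_PER_USER = 10  # clusters a user tags before moving to the working pool
    FINISH_THRESHOLD = 3           # a cluster tagged more than three times is finished

    def __init__(self, signup_pool: List[Cluster], working_pool: List[Cluster]):
        self.signup_pool = signup_pool
        self.working_pool = working_pool
        self.finished_pool: List[Cluster] = []
        self.tagged_by_user: Dict[str, int] = {}

    def next_cluster(self, user: str) -> Cluster:
        """New users draw from the sign-up pool; experienced users from the working pool."""
        if self.tagged_by_user.get(user, 0) < self.SIGNUP_CLUSTERS_PER_USER and self.signup_pool:
            return self.signup_pool[0]
        return self.working_pool[0]

    def submit(self, user: str, cluster: Cluster, tag: str) -> None:
        """Record a tag and move the cluster to the finished pool once it has enough tags."""
        cluster.tags.append(tag)
        self.tagged_by_user[user] = self.tagged_by_user.get(user, 0) + 1
        if len(cluster.tags) > self.FINISH_THRESHOLD:
            for pool in (self.signup_pool, self.working_pool):
                if cluster in pool:
                    pool.remove(cluster)
            self.finished_pool.append(cluster)
```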

According to the proposed model, crowd management aims at understanding the targeted crowd, which enables the assignment of tasks to suitable individuals to improve performance (Allahbakhsh et al., 2012; Khazankin et al., 2012b). The CT project managed the crowd by collecting users’ information and evaluating their confidence levels in task performance. Demographic information about the users was collected at sign-up, which was required before a user could perform a task. More importantly, the project also managed confidence levels using two methods. The first method was based on the first pool, for which the correct tags were known; by comparing users’ tags with the known answers, “we can say how reliable the users are” [CT, Project manager]. The second method asked the users directly how confident they were about their submissions.

Since tagging was performed by voluntary users, there was no guarantee that the results would be of high quality. Thus, quality control seems necessary for projects similar to CT (Allahbakhsh et al., 2013). However, the CT project seems to have been limited in its quality control compared to what is suggested in the BPC model. CT relied mainly on expert evaluation after receiving tags from the crowd. This approach raised two concerns. First, the evaluation depends heavily on the opinion of the evaluator, as seen in “I see what the people say and what I say” [CT, Project manager]. Second, a large amount of data needed to be evaluated; the project had not yet addressed this issue and saw it as future work.

To attract the crowd, the project manager considered both extrinsic and intrinsic incentive mechanisms. Regarding the former, the project manager initially thought about providing vouchers for a popular local wildlife sanctuary (Zealandia). However, he finally decided not to, as he believed the users would be keen enough to contribute to the citizen science project anyway. As a result, the project relied mainly on intrinsic incentives. Similar to other citizen science projects (Brabham, 2012), this project framed meaningfulness as an altruistic contribution to science, as stated on the website: “every image you tag will help us to better understand the relationships between New Zealand’s invasive mammals and native species”.

In its technical configuration, CT built a crowdsourcing website for broadcasting the open call. This website also functioned as a platform enabling users to tag the pictures. CT decided to build its own website, rather than using an existing platform, because the project members wanted full control over the whole set of crowdsourcing activities.

Logo Design Contest (LDC)

In LDC, the decision to crowdsource was based on two main factors: diverse solutions and external participants. The main reason for choosing crowdsourcing was the ability of the crowd to provide diverse and innovative solutions, as summarised by the project coordinator: “the university has decided to conduct the open contest to find ideas that are ‘standard’ [i.e. meeting the requirements] and creative”. This is consistent with other crowdsourcing cases where external contributors bring unique and innovative ideas (Brabham, 2010; Leimeister et al., 2009). Another factor influencing the decision to crowdsource was the wish to utilise design contributors from outside the university. As logo design can be seen as a complex task (Schenk & Guittard, 2011), a certain level of expertise is necessary to generate a good design. Interestingly, saving costs (compared to hiring experts) was not considered an important factor in the decision to crowdsource.

A key activity in crowdsourcing is task design (Model component 2A). Task design in LDC took the form of an announcement published on the University website and in the local press. This announcement included the requirements for the logo, the terms and conditions for joining the contest, the submission deadline, and the prizes. Within these elements, the requirements played an important role as they specified what the solution should look like (Zheng et al., 2011). They covered two aspects: the meaning of the logo and the technical requirements. The meaning requirements stated that the designed logo should represent the spirit of the University. The technical requirements specified, for instance, how many pixels were needed and the length of the slogan. We noted that while the technical requirements were specific, the meaning requirements were quite abstract. On the one hand, this abstraction left plenty of room for creativity in the design solutions. On the other hand, it did not fully convey what the University Board desired of the solution, which led to an extension of the contest because of several queries seeking to clarify the requirements [LDC, Project Coordinator].

The workflow design was an interesting activity with two distinctive aspects. First, while the model, consistent with Afuah and Tucci (2012), suggests task decomposition, LDC did not crowdsource decomposed tasks but the whole logo design. This can be explained by the nature of logo design, which is difficult to break down into smaller tasks. Additionally, crowdsourcing a whole task has been successfully adopted in several design contests, including bus stop shelter design (Brabham, 2012) and T-shirt design (Howe, 2006b). Second, LDC published its workflow in the open call. According to the LDC announcement, the project workflow consisted of four steps: the crowd designs and submits solutions; the Board conducts a preliminary evaluation; a short-list of submissions is chosen and given feedback based on the Board’s evaluation; and the final submissions are evaluated, ranked, and awarded. Publishing the workflow provided transparency to the participants by explaining what would happen during the project.

Crowd management, which the model specifies as task assignment and profiling the crowd, was not a focus in LDC. The project did not match the task to any specific members. The other aspect of crowd management, profiling the crowd (Allahbakhsh et al., 2012), was only carried out in LDC when submissions were chosen for the second round. This was considered a limitation of LDC: “the management of crowd information was limited, which might be because we did not specify rules about providing information” [LDC, Project Coordinator]. As part of the crowd management, LDC had some communication with contestants who wanted to find out more about the requirements. From a contest point of view, this kind of communication should be limited as it may create advantages for those contestants. Instead, a ‘Q&A’ section on the website, similar to the one deployed by Threadless (2015), should have been used.

To control quality, the LDC project used expert evaluation (Zhao & Zhu, 2014). In particular, the committee that aggregated the results also acted as the evaluators, assessing the submission quality and providing feedback. Since the number of submissions was not large (68 submissions), the use of a committee was a feasible approach. The project found a few cheating submissions that were likely copied from other logos. These submissions were mainly identified by the external experts, who were experienced with logos and logo design contests [LDC, Project Coordinator].

To attract participants, the project mainly used extrinsic mechanisms, which consisted of monetary rewards and recognition by others. Like other contests, the monetary rewards were only provided for the winning solutions, which, in the LDC case, were two creative prizes and one final winning prize. The creative and winning prizes were quite valuable, equivalent to one and five months’ salary of a typical office worker, respectively. Another motivation for the participants was that the project announced the winners on the University website, which is aligned with the to-be-recognised motivator (Brabham, 2012). Both of these motivations were clearly presented in the open call.

The technical configuration was rather simple in this project, as LDC only used the website as a channel to publish the task and used email to receive the submissions. This was because the project members were not aware of existing platforms/websites that could support crowdsourcing contests [LDC, Project Manager].

Overall, the results from the two cases confirm the adequateness of the proposed model to structure the project activities. Indeed, the two cases reveal a high alignment between the project activities and the model components. Adequateness was further confirmed in the interviews: when we showed the interviewees the graphical representations of their projects using the model, they agreed that these representations captured their project activities. This quote evidences the point: “we may miss some of the points, but we touch all of them” [CT, Project manager]. Given this high adequateness, we expected these members to have a positive perceived utility of the model, as confirmed next.

Perceived Utility of the Model

To examine the perceived utility of the model, we interviewed the project members about the model, its components and its sequence. All interviewees found the model to be a useful tool for structuring crowdsourcing projects, as demonstrated by the following comments.

I think it will be nice to follow the model. […]. Yes, I want to use the model, following this flow or at least have something to follow [CT, Project manager]

The model is very well constructed and all of its activities should be necessarily for the project [LDC, Coordinator]

As I said, I think this model is totally suitable. There is only slightly different on its progress, yet the meaning and purpose are similar. The approach and the steps are also similar [LDC, Project Manager]

Finding the model useful, these participants were enthusiastic about applying it to future crowdsourcing projects:

I think that any future crowdsourcing projects should apply strictly these steps, which will create better results [LDC, Coordinator]

From my opinion, the model can be suitable for many activities that need the resources from the crowd [LDC, Project Manager]

In the model construction, we classified its components into plan-time and operation-time categories. Interestingly, the same idea was corroborated by a project manager. When we showed him the graphical representation of the model, he grouped the activities of the LDC project into planning and implementation, stating that:

The component 2A and 2B [in the model] are similar to the planning phase of the project. The other components, including 2C, 2D, 2E, and 3, are implementation [LDC, Project Manager]

These comments express agreement on the perceived usefulness of the model. Furthermore, the interviewees were keen to apply the model to future projects. Interestingly, when we discussed which aspects of the model are most useful, we found slightly different views between the project manager and coordinator roles. For instance, in the LDC case, while the project manager viewed the model as a tool for decision making and management, the project coordinator stressed the role of the model in supporting communication among project members and in achieving consensus. These differences suggest that usefulness can be perceived from different angles, and we highlight that if different roles can generate different insights when using the model, then the model’s utility is expanded.

In summary, we conducted two case studies evaluating the process model. The results provide strong evidence that the model can represent the key activities of crowdsourcing projects. Furthermore, we also obtained evidence of the perceived usefulness of the model, based on its reception by the crowdsourcing practitioners we interviewed. Consequently, we suggest that the proposed model addresses most organisational concerns within the crowdsourcing process, and that the model can usefully support crowdsourcing projects.

4.4 Summary and Discussion

To guide organisations in their establishment of BPC, this chapter developed a conceptual model allowing organisations to understand the main building blocks of BPC. Using the building blocks extracted in the previous chapter, we constructed a process model of BPC consisting of seven components. The construction was based on the ‘wisdom of researchers’, which enabled us to build a model that faithfully represents BPC. The model was evaluated using a case study approach, drawing on two real crowdsourcing projects. The results indicate that the model is adequate and useful for structuring the main crowdsourcing activities.

Overall, the model represents the main structures of BPC to support the establishment of crowdsourcing as an organisational business process. It provides a broad view of the activities that organisations need to consider when planning, designing and instantiating crowdsourcing processes. On the one hand, this broad view addresses the criticism, raised in the crowdsourcing literature, that crowdsourcing practice is excessively ad hoc (Geiger & Schader, 2014; Mao et al., 2017). On the other hand, it represents only the abstract view and not the deconstructed view, both of which together characterise BPC. From the deconstructed view, the process model and its components need to be further analysed into detailed elements. The following chapter addresses this need by building an ontology from both the abstract and deconstructed views.